name
stringlengths 5
6
| title
stringlengths 8
144
| abstract
stringlengths 0
2.68k
| fulltext
stringlengths 1.78k
95k
| keywords
stringlengths 22
532
|
---|---|---|---|---|
586884 | Separation of NP-Completeness Notions. | We use hypotheses of structural complexity theory to separate various NP-completeness notions. In particular, we introduce an hypothesis from which we describe a set in NP that is $\mbox{${\leq}^{\rm P}_{\rm T}$}$-complete but not $\mbox{${\leq}^{\rm P}_{tt}$}$-complete. We provide fairly thorough analyses of the hypotheses that we introduce. | Introduction
Ladner, Lynch, and Selman [LLS75] were the first to compare the strength of polynomial-time
reducibilities. They showed, for the common polynomial-time reducibilities, Turing
btt ), and many-one
s means that # P
r is properly stronger than # P
s B,
but the converse does not hold. In each case, the verifying sets belong to
Ladner, Lynch, and Selman raised the obvious question of whether reducibilities differ on
NP. If there exist sets A and B in NP (other than the empty set or S # ) such that A# P T B but
A
immediately. With this in mind, they conjectured
that P #= NP implies that # P
m differ on NP.
In the intervening years, many results have explained the behavior of polynomial-time
reducibilities within other complexity classes and have led to a complete understanding
of the completeness notions that these reducibilities induce. For example, Ko and
Moore [KM81] demonstrated the existence of # P
T -complete sets for EXP that are not # P
complete. Watanabe [Wat87] extended this result significantly, showing that # P
btt -,
tt -, and # P
T -completeness for EXP are mutually different, while Homer, Kurtz, and Royer
[KR93] proved that # P m - and # P
1-tt -completeness are identical.
# Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY 14260. Email:
Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY 14260. Email:
selman@cse.buffalo.edu
However, there have been few results comparing reducibilities within NP, and we have
known very little concerning various notions of NP-completeness. It is surprising that
no NP-complete problem has been discovered that requires anything other than many-one
reducibility for proving its completeness. The first result to distinguish reducibilities within
NP is an observation of Wilson in one of Selman's papers on p-selective sets [Sel82].
It is a corollary of results there that if NE# co-NE #= E, then there exist sets A and B
belonging to NP such that A# P
ptt B, B# P
ptt denotes positive truth-table
reducibility. Regarding completeness, Longpr- e and Young [LY90] proved that there
are # P
-complete sets for NP for which # P
T -reductions to these sets are faster, but they did
not prove that the completeness notions differ. The first to give technical evidence that # P
-completeness for NP differ are Lutz and Mayordomo [LM96], who proved that
if the p-measure of NP is not zero, then there exists a # P 3-tt -complete set that is not # P m -
complete. Ambos-Spies and Bentzien [ASB00] extended this result significantly. They
used an hypothesis of resource-bounded category theory that is weaker than that of Lutz
and Mayordomo to separate nearly all NP-completeness notions for the bounded truth-table
reducibilities.
It has remained an open question as to whether we can separate NP-completeness notions
without using hypotheses that involve essentially stochastic concepts. Furthermore,
the only comparisons of reducibilities within NP known to date have been those just listed.
Here we report some exciting new progress on these questions. Our main new result introduces
a strong, but reasonable, hypothesis to prove existence of a # P
T -complete set in NP
that is not # P
tt -complete. Our result is the first to provide evidence that # P tt -completeness
is weaker than # P
-completeness. Let Hypothesis H be the following assertion: There is
a UP-machine M that accepts 0 # such that (i) no polynomial time-bounded Turing machine
correctly computes infinitely many accepting computations of M, and (ii) for some
e > 0, no 2 n e
time-bounded Turing machine correctly computes all accepting computations
of M. Hypothesis H is similar to, but seemingly stronger than, hypotheses considered by
researchers previously, notably Fenner, Fortnow, Naik, and Rogers [FFNR96], Hemaspaan-
dra, Rothe and Wechsung [HRW97], and Fortnow, Pavan, and Selman [FPS99].
This result is especially interesting because the measure theory and category theory
techniques seem to be successful primarily for the nonadaptive reducibilities. Wewill prove
an elegant characterization of the genericity hypothesis of Ambos-Spies and Bentzien and
compare it with Hypothesis H. Here, somewhat informally, let us say this: The genericity
hypothesis asserts existence of a set L in NP such that no 2 2n time-bounded Turing machine
can correctly predict membership of infinitely many x in L from the initial characteristic
sequence That is, L is almost-everywhere unpredictable within time
2 2n . Clearly such a set L is 2 2n -bi-immune. In contrast, we show that Hypothesis H holds if
there is a set L in UP#co-UP such that L is P-bi-immune and L#0 # is not in DTIME(2 n e
for some e > 0. Thus, we replace "almost-everywhere unpredictable" with P-bi-immunity
and we lower the time bound from 2 2n to 2 n e
, but we require L to belong to UP# co-UP
rather than NP.
We prove several other separations as well, and some with significantly weaker hy-
potheses. For example, we prove that NP contains # P
T -complete sets that are not # P m -
complete, if NP# co-NP contains a set that is 2 n e
-bi-immune, for some e > 0.
Preliminaries
We use standard notation for polynomial-time reductions [LLS75], and we assume that
readers are familiar with Turing, # P
T , and many-one, # P
reducibilities. A set A is truth-table
reducible to a set B (in symbols A # P tt B) if there exist polynomial-time computable
functions g and h such that on input x, g(x) is a set of queries Q= {q 1 , q 2 , - , q k }, and x #A
if and only if h(x,B(q 1 1. The function g is the truth-table generator
and h is the truth-table evaluator. For a constant k > 0, A is k-truth-table reducible to B
k-tt B) if for all x, and A is bounded-truth-table reducible to B (A# P
there is a constant k > 0 such that A # P
k-tt B. Given a polynomial-time reducibility # P r ,
recall that a set S is # P r -complete for NP if S # NP and every set in NP is # P r -reducible to
S.
Recall that a set L is p-selective if there exists a polynomial-time computable function
such that for all x and y, f (x,y) # {x,y} and f (x,y) belongs to L, if either
x # L or y # L [Sel79]. The function f is called a selector for L.
Given a finite alphabet, let S w denote the set of all strings of infinite length of order
type w. For r # S #S w , the standard left cut of r [Sel79, Sel82] is the set
where < is the ordinary dictionary ordering of strings with 0 less than 1. It is obvious that
every standard left cut is p-selective with selector f (x,y) =min(x,y).
Given a p-selective set L such that the function f defined by f
selector for L, we call f a min-selector for L. We will use the following simplified version
of a lemma of Toda [Tod91].
be a p-selective set with a min-selector f . For any finite set Q there exists
a string z # Q#} such that z}. The
string z is called a "pivot" string.
Now we review various notions related to almost-everywhere hardness. A language
L is immune to a complexity class C , or C -immune, if L is infinite and no infinite subset
of L belongs to C . A language L is bi-immune to a complexity class C , or C -bi-immune,
if L is infinite, no infinite subset of L belongs to C , and no infinite subset of L belongs
to C . A language is DTIME(T (n))-complex if L does not belong to DTIME(T (n)) almost
everywhere; that is, every Turing machine M that accepts L runs in time greater than T (|x|),
for all but finitely many words x. Balc- azar and Sch- oning [BS85] proved that for every
time-constructible function T , L is DTIME(T (n))-complex if and only if L is bi-immune
to DTIME(T (n)).
Given a time bound T (n), a language L is T (n)-printable if there exists a T (n) time-bounded
Turing machine that, on input 0 n , prints all elements of L#S =n [HY84]. A set S
is T (n)-printable-immune if S is infinite and no infinite subset of S is T (n)-printable.
In order to compare our hypotheses with the genericity hypothesis we describe time-bounded
genericity [ASFH87]. For this purpose, we follow the exposition of Ambos-Spies,
Neis, and Terwijn [ASNT96]. Given a set A and string x,
is the n-th string in lexicographic order. We identify the initial
segment A|z n with its characteristic sequence; i.e., A|z n =A(z condition is a
set C # S # . A meets C if for some x, the characteristic sequence A|x #C. C is dense along A
if for infinitely many strings x there exists i # {0,1} such that the concatenation (A|x)i #C.
Then, the set A is DTIME(t(n))-generic if A meets every condition C#DTIME(t(n)) that is
dense along A. To simplify the notation, we say that A is t(n)-generic if it is DTIME(t(n))-
generic.
Finally, we briefly describe the Kolmogorov complexity of a finite string. Later we will
use this in an oracle construction. The interested reader should refer to Li and Vit- anyi [LV97]
for an in-depth study. Fix a universal Turing machine U . Given a string x and a finite set
the Kolmogorov complexity of x with respect to S is defined by
0, then K(x|S) is called the Kolmogorov complexity of x, denoted K(x). We will use
time-bounded Kolmogorov complexity K t (x) also. For this definition, we require that U(p)
runs in at most t(|x|) steps.
3 Separation Results
Let Hypothesis H be the following assertion:
Hypothesis H: There is a UP-machine M that accepts 0 # such that
1. no polynomial time-bounded Turing machine correctly computes infinitely many accepting
computations of M, and
2. for some e > 0, no 2 n e
time-bounded Turing machine correctly computes all accepting
computations of M.
Theorem 1 If Hypothesis H is true, then there exists a # P
-complete language for NP that
is not # P tt -complete for NP.
Proof. Let M be a UP-machine that satisfies the conditions of Hypothesis H. For each
a n be the unique accepting computation of M on 0 n , and let l |. Define the
language
Define the infinite string a = a 1 a 2 ., and define
to be the standard left-cut of a.
We define to be the disjoint union of L 1 and L 2 . We will prove that L is
T -complete for NP but not # P
T -complete for NP.
Proof. It is clear that L belongs to NP. The following reduction witnesses that SAT# P
Given an input string x, where use a binary search algorithm that queries L 2 to find
a n . Then, note that x # SAT if and only if #x,a n # belongs to L 1 .
Lemma 3 L is not # P
tt -complete for NP.
Proof. Assume that L is # P
tt -complete for NP. Define the set
| the i-th bit of a
Clearly, S belongs to NP. Thus, by our assumption, there is a # P tt -reduction #g,h# from S to
L. Given this reduction, we will derive a contradiction to Hypothesis H.
Consider the following procedure A :
1. input
2. compute the sets Q
3. Let Q 1 be the set of all queries in Q to L 1 and let Q 2 be the set of all queries in Q to
4. If Q 1 contains a query #x,a t #, where t # n e , then output "Unsuccessful" and Print a t ,
else output "Successful".
Observe that this procedure runs in polynomial time. We treat two cases, namely, either
A (0 n ) is unsuccessful, for infinitely many n, or it is successful, for all but finitely many n.
If the procedure A (0 n ) is unsuccessful for infinitely many n, then there is a polynomial
time-bounded Turing machine that correctly computes infinitely many accepting
computations of M, thereby contradicting Clause 1 of Hypothesis H.
Proof. If A (0 n ) is unsuccessful, then it outputs a string a t such that t # n e . Hence, if
A (0 n ) is unsuccessful for infinitely many n, then for infinitely many t there exists an n,
outputs a t . The following procedure uses this observation to
compute infinitely many accepting computations of M in polynomial time.
do
if A (0 j ) outputs a t
then output a t and halt.
The procedure runs in polynomial time because the procedure A (0 j ) runs in polynomial
time.
but finitely many n, then there is a 2 n e
time-bounded
Turing machine that correctly computes all accepting computations of M, thereby contradicting
Clause 2 of Hypothesis H.
Proof. We will demonstrate a procedure B such that for each n, if A (0 n ) is successful,
then B on input 0 n outputs the accepting computation of M on 0 n in 2 n e
time.
If A (0 n ) is successful, then no member of the set Q 1 is of the form #x,a t # where t # n e .
We begin our task with the following procedure C that for each query
decides whether q # L 1 .
1. input
2. If z #= a t for some t, then #y, z# does not belong to L 1 ; (This can be determined in
polynomial time.)
3. if z = a t , where t # n e , then #y, z# belongs to L 1 only if belongs to SAT.
(Since t # n e this step can be done in time 2 n e
Thus, C decides membership in L 1 for all queries q in Q 1 . Therefore, if for each query
q in Q 2 , we can decide whether q belongs to L 2 , then the evaluator h can determine whether
each input #0 n , belongs to S. That is, if for each query q in Q 2 , we can decide
whether q belongs to L 2 , then we can compute a n . We can accomplish this using a standard
proof technique for p-selective sets [HNOS96, Tod91]. Namely, since L 2 is a standard left-
cut, by Lemma 1, there exists a pivot string z in Q 2 #} such that Q 2 #L 2 is the set of
all strings in Q 2 that are less than or equal to z. We do not know which string is the pivot
string, but there are only #Q 2 # choices, which is a polynomial number of choices. Thus,
procedure B on input 0 n proceeds as follows to compute a n : For each possible choice of
pivot and the output from procedure C , the evaluator h computes a possible value for each
j-th bit of a n . There are only a polynomial number of possible choices of a n , because there
are only a polynomial number of pivots. B verifies which choice is the correct accepting
computation of M on 0 n , and outputs that value. Finally, we have only to note that the entire
process can be carried out in 2 n e
steps. This completes the proof of our claim, and of the
theorem as well.
Let Hypothesis H # be the following assertion:
There is an NP-machine M that accepts 0 # such that for some 0 < e < 1,
no
time-bounded Turing machine correctly computes infinitely-many accepting computations
of M.
Theorem 2 If Hypothesis H # is true, then there exists a Turing complete language for NP
that is not # P m -complete for NP.
Proof. Let M be an NP-machine that satisfies the conditions of Hypothesis H # . For each
a n be the lexicographically maximum accepting computation of M on 0 n , and let
. Define the language
an accepting computation
of M on 0 m ,
Let a = a 1 a 2 a 3 -, and define
It is easy to see, as in the previous argument, that L is # P
T -complete for NP. In order to
prove that L is not # P m -complete, we define the set
| y is a prefix of an accepting computation of M on 0 n
which belongs to NP, and assume there is a # P m -reduction f from S to L. Consider the
procedure D in Figure 1: First we will analyze the running time and then we treat two
cases, namely, either D (0 n ) is successful for infinitely many n, or it is unsuccessful for all
but finitely many n.
3 The above procedure halts in O(l n 2 n e 2 /2 ) steps.
Proof. Consider an iteration of the repeat loop. The most expensive step is the test of
whether "z # SAT". This test occurs only when Hence we can decide
whether z belongs to SAT in 2 n e 2 /2 steps. All other steps take polynomial time. Hence the
time taken by the procedure is O(l
the running time of procedure D is bounded by 2 n e
for infinitely many n, then there is a 2 n e
-time-bounded Turing
machine that correctly computes infinitely many accepting computations of M.
input
Repeat l n times
begin
if both x 0 and x 1 are queries to L 2
then if x 0 # x 1
then y := y0
else y := y1
else {At least one of x 0 and x 1 is a query to L 1 {0,1} be the least index
such that x b queries L 1 , and let x
if u is not an accepting computation of M {thus, x b /
then
else {u is an accepting computation of M on 0 t
then output "Unsuccessful," print u, and terminate
else {t < n e
then y := yb
else {x b /
output "Successful" and print y.
Figure
1: Procedure D
Proof. We demonstrate that if D is successful on an input 0 n , then the string that is
printed is an accepting computation of M on 0 n . In order to accomplish this, we prove by
induction that y is a prefix of an accepting computation of M on 0 n during every iteration
of the repeat loop (i.e., a loop invariant). Initially when l this is true. Assume that y is
a prefix of an accepting computation of M at the beginning of an iteration. Then, at least
one of f (#0 n , must belong to L. If both x 0 and x 1 are queries
to L 2 , then the smaller of x 0 and x 1 belongs to L 2 because L 2 is p-selective. Thus, in this
case, the procedure extends y correctly. If at least one of x 0 and x 1 is a query to L 1 , then
the procedure determines whether x b # L 1 , where x b is the query to L 1 with least index. If
x b belongs to L, then #0 n , yb# S. Hence, yb is a prefix of an accepting computation. If
# L, then x -
b belongs to L, because at least one of x b or x - b belongs to L. Thus, in this
case, y -
b is a prefix of an accepting computation. This completes the induction argument.
The loop repeats l n times. Therefore, the final value of y, which is the string that D
prints, is an accepting computation.
but finitely many n, then there is a 2 n e
-time-
bounded Turing machine that correctly computes infinitely many accepting computations
of M.
Proof. The proof is similar to the proof of Claim 1. The following procedure computes
infinitely many accepting computations of M.
input
do
if D (0 j ) outputs u and u is an accepting computation of M on 0 n
then print u and terminate.
The running time of this algorithm can be bounded as follows: The procedure D (0 j )
runs in time l steps. So the total running time is - n 1/e
Since the cases treated both by Claims 4 and 5 demonstrate Turing machines that correctly
compute infinitely many accepting computations of M in 2 n e
time, we have a contradiction
to Hypothesis H # . Thus L is not # P
m -complete for NP.
The following results give fine separations of polynomial time reducibilities in NP from
significantly weaker hypotheses. Moreover, they follow readily from results in the literature
Theorem 3 If there is a tally language in UP-P, then there exist two languages L 1 and
in NP such that L 1 # P tt
Proof. Let L be a tally language in UP-P. Let R be the polynomial-time computable
relation associated with the language L. Define
and
i-th bit of w is one}.
It is clear that L 1 is # P tt -reducible to L 2 . To see that L 2 is # P
T -reducible to L 1 , implement
a binary search algorithm that accesses L 1 to determine the unique witness w such that
then find the i-th bit.
Observe that L 2 is a sparse set. Ogihara and Watanabe [OW91] call L 1 the left set of L,
and they and Homer and Longpr- e [HL94] proved for every L in NP that if the left set of L
btt -reducible to a sparse set, then L is in P. Hence L 1 # btt L 2 .
We now prove that Turing and truth-table reducibilities also differ in NP under the same
hypothesis.
Theorem 4 If there is a tally language in UP-P, then there exist two languages L 1 and
in NP such that L 1 # P
Proof. Hemaspaandra et al. [HNOS96] proved that the hypothesis implies existence
of a tally language L in UP-P such that L is not # P tt -reducible to any p-selective set. In
the same paper they also showed, given a tally language L in NP-P, how to obtain a p-
selective set S such that L is # P
T -reducible to S. Combing the two results we obtain the
theorem.
4 Analysis of the Hypotheses
This section contains a number of results that help us to understand the strength of Hypotheses
H and H # .
1 The class of all languages that are # P
T -equivalent to L 1 is a noncollapsing degree.
4.1 Comparisons With Other Complexity-Theoretic Assertions
We begin with some equivalent formulations of these hypotheses, and then relate them to
other complexity-theoretic assertions. The question of whether P contains a P-printable-
immune set was studied by Allender and Rubinstein [AR88], and the equivalence of items 1
and 3 in the following theorem is similar to results of Hemaspaandra, Rothe, and Wechsung
[HRW97] and Fortnow, Pavan, and Selman [FPS99]. The second item is similar to the the
characterization of Grollmann and Selman [GS88] of one-one, one-way functions with the
addition of the attribute almost-always one-way of Fortnow, Pavan, and Selman.
Theorem 5 The following statements are equivalent:
1. There is a language L in P that contains exactly one string of every length such that
L is P-printable-immune and, for some e > 0, L is not 2 n e
-printable.
2. There exists a polynomial-bounded, one-one, function , such that f is
almost-everywhere not computable in polynomial time, for some e > 0, f is not computable
in time 2 n e
, and the graph of f belongs to P.
3. Hypothesis H is true for some e > 0.
Proof. Let L satisfy item one. Define
the unique string of length n that belongs to L.
Clearly, f us polynomial-bounded and one-one. The graph of f belongs to P, because L
belongs to P. Suppose that M is a Turing machine that computes f and that runs in polynomial
time on infinitely many inputs. Then, on these inputs, M prints L#S n . Similarly, f is
not computable in time 2 n e
Let f satisfy item two. Define a UP-machine M to accept 0 # as follows: On input 0 n ,
M guesses a string y of length within the polynomial-bound of f , and accepts if and only if
The rest of the proof is clear.
Let M be a UP-machine that satisfies item three, i.e., that satisfies the conditions of
Hypothesis H. Let a n be the unique accepting computation of M on 0 n and let |a n
r n be the rank of a n among all strings of length n l . Now, we define L as follows: Given a
string x, if belongs to L if and only if x = a n . If (n-1) l < |x| < n l ,
then x belongs to L if and only if the rank of x (among all the string of length |x|) is r n-1 . It
is clear that L # P and has exactly one string per each length. We claim that L is P-printable-
immune and is not 2 n r
-printable, where machine that prints infinitely many
strings of L in polynomial time can be used to print infinitely many accepting computations
of M in polynomial time. Thus L is P-printable-immune. Any machine that prints all the
strings of L in 2 n r
time can be used print all the accepting computations of M in 2 n e
time.
Thus L is not 2 n r
-printable.
We prove the following theorem similarly.
Theorem 6 The following statements are equivalent
1. There is a language L in P that contains at least one string of every length such that,
for some e > 0, L is 2 n e
-printable-immune.
2. There is polynomial-bounded, multivalued function such that every refinement
of f is almost-everywhere not computable in 2 n e
-time, and the graph of f
belongs to P.
3. Hypothesis H # holds for some e > 0.
Next we compare our hypotheses with the following complexity-theoretic assertions:
1. For some e > 0, there is a P-bi-immune language L in UP#co-UP such that L#0 # is
not in DTIME(2 n e
2. For some e > 0, there is language L in UP#co-UP such that L is not in DTIME(2 n e
3. For some e > 0, there is a 2 n e
-bi-immune language in NP# co-NP.
Theorem 7 Assertion 1 implies Hypothesis H and Hypothesis H implies Assertion 2.
Proof. Let L be a language in UP# co-UP that satisfies Assertion 1. Define M to be the
UP-machine that accepts 0 # as follows: On input 0 n , nondeterministically guess a string
If w either witnesses that 0 n is in L or witnesses that 0 n is in L, then accept 0 n . It is
immediate that M satisfies the conditions of Hypothesis H.
To prove the second implication, let M a UP-machine that satisfies the conditions of
Hypothesis H. Let a n denote the unique accepting computation of M on 0 n and define
It is clear that L # UP#co-UP. If L # DTIME(2 n e
then a binary search algorithm can
correctly compute a n , for every n, in time 2 n e
. This would contradict Hypothesis H. Hence,
The discrete logarithm problem is an interesting possible witness for Assertion 2. The
best known deterministic algorithm requires time greater than 2
3 [Gor93]. Thus, the
discrete logarithm problem is a candidate witness for the noninclusion UP # co-UP #
3 .
Corollary 1 If, for some e > 0, UP # co-UP has a 2 n e
-bi-immune language, then # P
completeness is different from # P tt -completeness for NP.
Theorem 8 Assertion (3) implies Hypothesis H # .
Corollary 2 If, for some e > 0, NP # co-NP has a 2 n e
-bi-immune language, then # P
completeness is different from # P m -completeness for NP.
4.2 Comparisons with Genericity
The genericity hypothesis of Ambos-Spies and Bentzien [ASB00], which they used successfully
to separate NP-completeness notions for the bounded-truth-table reducibilities,
states that "NP contains an n 2 -generic language". Our next result enables us to compare
this with our hypotheses.
We say that a deterministic oracle Turing machine M is a predictor for a language L
if for every input word x, M decides whether x # L with oracle L|x. L is predictable in
time t(n) if there is a t(n) time-bounded predictor for L. We define a set L to be almost-everywhere
unpredictable in time t(n) if every predictor for L requires more than t(n) time
for all but finitely many x. This concept obviously implies DTIME(t(n))-complex almost
everywhere, but the converse does not hold:
Theorem 9 EXP contains languages that are DTIME(2 n )-complex but not almost-everywhere
unpredictable in time 2 n .
Now we state our characterization of t(n)-genericity.
Theorem 10 Let t(n) be a polynomial. A decidable language L is t(n)-generic if and only
if it is almost-everywhere unpredictable in time t(2 n
-1).
Proof. Assume that L is not almost-everywhere unpredictable in time t(2 n
-1), and let
M be a predictor for L that for infinitely many strings x runs in time t(2 n
1). Define a
condition C so that the characteristic sequence
(L|x)x #C #M with oracle L|x runs in time t(2 |x| -1) on input x.
accepts x). Then, C is dense along L because M correctly predicts
whether x # L for infinitely many x. It is easy to see that C # DTIME(t(n)). However, L is
not t(n)-generic because we defined C so that L does not meet C.
Assume that L is not t(n)-generic, and let C #DTIME(t(n)) be a condition that is dense
along L such that L does not meet C. Let T be a deterministic Turing machine that halts on
all inputs and accepts L. Define a predictor M for L to behave as follows on input x with
oracle A|x: If (A|x)1 #C, then M rejects x, and if (A|x)0 #C, then M accepts x. If neither
holds, then M determines membership in L by simulating T on x. Since L does not meet C,
M is a predictor for L. Since C is dense along L and L does not meet C, for infinitely many
x, either (A|x)1 #C or (A|x)0 #C, and in each of these cases, M runs for at most t(2 - 2 |x| )
steps. Since t(n) is polynomial function, by the linear speedup theorem [HS65], there is a
Turing machine that is equivalent to M that runs in time t(2 |x| -1).
Corollary 1 NP contains an n 2 -generic language if and only if NP contains a set that is
almost-everywhere unpredictable in time 2 2n .
By Theorem 8, Hypothesis H # holds if NP# co-NP contains a set that, for some e > 0,
is
-bi-immune. So, Hypothesis H # requires bi-immunity, which is weaker than almost-everywhere
unpredictability, and the time-bound is reduced from 2 2n to 2 n e
. On the other
hand, we require the language to belong to NP# co-NP instead of NP. Similarly, when we
consider Hypothesis H, we require the language to be P-bi-immune and not in DTIME(2 n e ),
whereas now we require the language to be in UP# co-UP. Moreover, the conclusion
of Theorem 1 is not known to follow from the genericity hypothesis. At the same time,
we note that the genericity hypothesis separates several bounded-truth-table completeness
notions in NP that do not seem obtainable from our hypotheses.
4.3 Relativization
Theorem 11 There exists an oracle relative to which the polynomial hierarchy is infinite
and Hypotheses H and H # both hold.
Proof. Define Kolmogorov random strings r 0 , r 1 , . as follows: r n is the first string of
length n such that
Then, define the oracle
Define M to be an oracle Turing machine that accept 0 # with oracle A as follows: On
input guess a string y of length n. If y # A, then accept. M is a UP A -machine that accepts
contains exactly one string of every length.
Now we show that no 2 n e
oracle Turing machine with oracle A, for any 0 < e < 1,
correctly computes infinitely many accepting computations of M. Observe that relative to
A, this implies both Hypotheses H and H # . Suppose otherwise, and let T be such an oracle
Turing machine. The gist of the remainder of the proof is that we will show how to simulate
T without using the oracle, and that will contradict the randomness or r n .
Suppose that T A (0 n . Then we simulate this computation without
using an oracle as follows:
1. Compute . Do this iteratively: Compute r i by running every program
(with input strings r 0 , r 1 , . , r i-1 ) of length # i/2 for 2 i steps. Then r i is the first
string of length i that is not output by any of these programs. Note that the total time
for executing this step is
2. Simulate T on input 0 n , except replace all oracle queries q by the following rules: If
|q| < l, answer using the previous computations. Otherwise, just answer "no."
If the simulation is correct, then this procedure outputs r n without using the oracle. The
running time of this procedure on input 0 n is 2 5n e +2 n e
, which is less than 2 n . So, we can
describe r n by a string of length O(logn), to wit, a description of T and 0 n . This contradicts
the definition of r n .
We need to show that the simulation is correct. The simulation can only be incorrect
if |q| # l and be the first such query. This yields a short
description of r m , given r 0 , r 1 , . , r l-1 . Namely, the description consists of the description
of T (a constant), the description of 0 n (logn bits), and the description of the number j such
that is the j-th query (at most n e ). Thus, the length of the description is O(n e ). Since
that the length of the description of r m is less than m/2. The running
time of T , given r 0 , r 1 , . , r l-1 , is 2 n e
, which is less than 2 m . (The reason is that the first
step in the simulation of T is not needed.) Therefore, the simulation is correct.
Finally, because A is a sparse set, using results of Balc- azar et al. [BBS86], there is an
oracle relative to which the hypotheses holds and the polynomial hierarchy is infinite.
Hypothesis H fails relative to any oracle for which
and Rogers [FR94] obtained an oracle relative to which NP #= co-NP and Hypothesis H #
fails. We know of no oracle relative to which P #= NP and every # P
T -complete set is #
complete.
4.4 Extensions
The extensions in this section are independently observed by Regan and Watanabe [RW01].
In Hypothesis H we can replace the UP-machine by an NP-machine under a stronger intractability
assumption. Consider the following hypothesis:
There is a NP-machine M that accepts 0 # such that
1. no probabilistic polynomial time-bounded Turing machine correctly outputs infinitely
many accepting computations with non-trivial (inverse polynomial) probability, and
2. for some e > 0, no 2 n e
time-bounded Turing machine correctly computes all accepting
computations with non-trivial probability.
We can prove that Turing completeness is different from truth-table completeness in
NP under the above hypothesis. The proof uses the randomized reduction of Valiant and
that isolates the accepting computations. We define L as in the proof of
Theorem 2. Let
i#v such that v is an accepting computation of M,
and the ith bit of
where v.r i denotes the inner product over GF[2].
Valiant and Vazirani showed that if we randomly pick r 1 , r 2 , - , r k , then with a non-trivial
probability there exists exactly one accepting computation v of M whose inner product
with each r i is 0. Thus, for a random choice of r 1 , - , r k , there is exactly one witness v
for i#. The rest of the proof is similar to that of Theorem 1.
We also note that we can replace the UP-machine in Hypothesis H with a FewP-
machine.
--R
Separating NP-completeness under strong hypotheses
Diagonalizations over polynomial time computable sets.
Genericity and measure for exponential time.
Resource bounded randomness and weakly complete problems.
Relativizations of the P
Completeness notions for nondeterministic complexity classes.
On inverting onto functions.
Distributionally hard languages.
Discrete logarithms in GF(p) using the number field sieve.
Complexity measures for public-key cryptosys- tems
Easy sets and hard certificate schemes.
On the computational complexity of algorithms.
Computation times of NP sets of different densities.
Completeness, approximation and density.
A comparison of polynomial time re- ducibilities
Cook versus karp-levin: Separating completeness notions if NP is not small
On polynomial time bounded truth-table reducibility of NP sets to sparse sets
Personal communication.
Reductions on NP and P-selective sets
On polynomial-time truth-table reducibilities of intractable sets to P-selective sets
NP is as easy as detecting unique solutions.
A comparison of polynomial time completeness notions.
--TR
--CTR
A. Pavan , Alan L. Selman, Bi-immunity separates strong NP-completeness notions, Information and Computation, v.188 n.1, p.116-126, 10 January 2004
John M. Hitchcock , A. Pavan, Comparing reductions to NP-complete sets, Information and Computation, v.205 n.5, p.694-706, May, 2007
Christian Glaer , Alan L. Selman , Samik Sengupta, Reductions between disjoint NP-pairs, Information and Computation, v.200 n.2, p.247-267, 1 August 2005
Lane A. Hemaspaandra, SIGACT news complexity theory column 40, ACM SIGACT News, v.34 n.2, June
Christian Glaer , Mitsunori Ogihara , A. Pavan , Alan L. Selman , Liyu Zhang, Autoreducibility, mitoticity, and immunity, Journal of Computer and System Sciences, v.73 n.5, p.735-754, August, 2007 | p-selectivity;truth-table completeness;turing completeness;p-genericity;many-one completeness |
586888 | A Constant-Factor Approximation Algorithm for Packet Routing and Balancing Local vs. Global Criteria. | We present the first constant-factor approximation algorithm for a fundamental problem: the store-and-forward packet routing problem on arbitrary networks. Furthermore, the queue sizes required at the edges are bounded by an absolute constant. Thus, this algorithm balances a global criterion (routing time) with a local criterion (maximum queue size) and shows how to get simultaneous good bounds for both. For this particular problem, approximating the routing time well, even without considering the queue sizes, was open. We then consider a class of such local vs. global problems in the context of covering integer programs and show how to improve the local criterion by a logarithmic factor by losing a constant factor in the global criterion. | Introduction
. Recent research on approximation algorithms has focused a
fair amount on bicriteria (or even multicriteria) minimization problems, attempting
to simultaneously keep the values of two or more parameters "low" (see, e.g., [11, 21,
22, 29, 30, 32]). One motivation for this is that real-world problems often require
such balancing. In this work, we consider a family of bicriteria problems that involve
balancing a local capacity constraint (e.g., the maximum queue size at the links of a
packet routing network, the maximum number of facilities per site in facility location)
with a global criterion (resp., routing time, total cost of constructing the facilities).
Since these global criteria are NP-hard to minimize even with no constraint on the
local criterion, we shall seek good approximation algorithms.
1.1. Packet Routing. Our main result is a constant-factor approximation algorithm
for store-and-forward packet routing, a fundamental routing problem in interconnection
networks (see Leighton's book and survey [14, 15]); furthermore, the
queue sizes will all be bounded by a constant. This packet routing problem has
received considerable attention for more than 15 years, and is as follows:
Definition 1.1 (Store-and-Forward Packet Routing).
We are given an arbitrary N-node routing network (directed or undirected graph)
G, and a set {1, 2, . , K} of packets which are initially resident (respectively) at the
(multi-)set of nodes {s of G. Each packet k is a message that needs
to be routed to some given destination node t k in G. We have to route each packet k
from s k to t k , subject to: (i) each packet k must follow some path in G; (ii) each edge
traversal takes one unit of time; (iii) no two packets can traverse the same edge at
the same unit of time, and (iv) packets are only allowed to queue along the edges of
# Bell Laboratories, Lucent Technologies, 600-700 Mountain Ave., Murray Hill, NJ 07974-0636,
USA. Part of this work was done while at the School of Computing, National University of Singapore,
Singapore 119260, and was supported in part by National University of Singapore Academic Research
Fund Grants RP950662, RP960620, and RP970607. E-mail: srin@research.bell-labs.com.
Dept. of Decision Sciences, National University of Singapore, Singapore 119260, Republic of
Singapore. Supported in part by National University of Singapore Academic Research Fund Grant
RP3970021, and a Fellowship from the Singapore-MIT Alliance Program in High-Performance Computation
for Engineered Systems. E-mail: fbateocp@nus.edu.sg.
G during the routing stage. There are no other constraints on the paths taken by the
packets, i.e., they can be arbitrary paths in G. The NP-hard objective is to select a
path for each packet and to coordinate the routing so that the elapsed time by which all
packets have reached their destinations is minimized; i.e., we wish to keep this routing
time as small as possible.
Extensive research has been conducted on this problem: see [14, 15] and the
references therein. The most desirable type of algorithm here would, in addition
to keeping the routing time and queue sizes low, also be distributed: given a set
of incoming packets and their (source, destination) values, any switch (node of G)
decides what to do with them next, without any other knowledge of the (multi-)set
This would be ideal for parallel computing. (Distributed
algorithms in this context are also termed on-line algorithms in the literature.) Several
such ingenious results are known for specific networks such as the mesh, butterfly, or
hypercube. For instance, given any routing problem with N packets on an N-node
butterfly, there is a randomized on-line routing algorithm that, with high probability,
routes the packets in O(log N) time using O(1)-sized queues [28]. (We let e denote
the base of the natural logarithm, and, for x > 0, lg x, ln x, and respectively
denote log 2 x, log e x, and max{log e x, 1}. Also, Z+ will denote the set of non-negative
Good on-line algorithms here, however, are not always feasible or required, for
the following reasons:
. A large body of research in routing is concerned with fault-tolerance: the
possibility of G being a reasonable routing network when its nodes are subject
to (e.g., random or worst-case) faults. See, e.g., Kaklamanis et al. [12],
Leighton, Maggs & Sitaraman [18], and Cole, Maggs & Sitaraman [6]. In this
case, we do not expect good on-line algorithms, since the fault-free subgraph
G of G has an unpredictable structure. Indeed, a fair amount of research
in this area, e.g., [6, 18], focuses on showing that -
G is still a reasonably
good routing network under certain fault models, assuming global information
about {(s k , t k )} and the fault structure.
. Ingenious on-line algorithms for specific networks such as the butterfly in
the fault-free case [28] are only existentially (near-)optimal. For instance,
the O(lg N) routing time of [28] is existentially optimal to within a constant
factor, since there are families of routing instances that require #(lg N)
time. However, the worst-case approximation ratio can be #(lg N ). It seems
very hard (potentially impossible) to devise on-line algorithms that are near-optimal
on each instance.
. The routing problem can be considered as a variant of unit-demand multi-commodity
flow where all arc capacities are the same, queuing is allowed, and
where delivery time is also a crucial criterion. (Algorithms for this problem
that require just O(1) queue sizes, such as ours, will also scale with network
size.) For such flow problems, the routing problems often have to be run
repeatedly. It is therefore reasonable to study o#-line approximation algo-
rithms, i.e., e#cient algorithms that use the knowledge of the network and of
and have a good approximation ratio.
Furthermore, it seems like a di#cult problem to construct on-line routing algorithms
for arbitrary networks, even with, say, a polylogarithmic approximation guar-
antee. See Ostrovsky and Rabani [26] for good on-line packet scheduling algorithms,
given the path to be traversed for each packet.
By combining some new ideas with certain powerful results of Leighton, Maggs
we present the first polynomial-time o#-
line constant-factor approximation algorithm for the store-and-forward packet routing
problem. Furthermore, the queue sizes of the edges are bounded by O(1). No approximation
algorithms with a sub-logarithmic approximation guarantee were known
for this problem, to the best of our knowledge. For instance, a result from the seminal
work of Leighton & Rao [19] leads to routing algorithms that are existentially
good. Their network embedding of G ensures that there is some routing instance on
G for which their routing time is to within an O(lg N) factor of optimal, but no good
worst-case performance guarantee is known. We may attempt randomized rounding
on some suitable linear programming (LP) relaxation of the problem; however, apart
from di#culties like controlling path lengths, it seems hard to get a constant-factor
approximation using this approach, for families of instances where the LP optimal
value grows as o(lg(N + K)). Our approach uses the rounding theorem of [13] to
select the set of paths that will be used in the routing algorithm of [17]. The analysis
involves an interesting trade-o# between the "dilation" criterion (maximum path
length) and the "congestion" criterion (maximum number of paths using any edge).
1.2. Covering Integer Programs. Let v T denote the transpose of a (column)
vector v. In the second part of the paper, we continue to address the problem of
simultaneously obtaining good bounds on two criteria of a problem. We focus on
the NP-hard family of covering integer programs (CIPs), which includes the well-known
set cover problem. This class of problems exhibits features similar to our
packet routing problem: the latter can be formulated as a covering problem with side
packing constraints. In CIPs, the packing constraints are upper bound constraints on
the variables.
Definition 1.2 (Covering Integer Programs).
Given A # [0, seeks to minimize c T
subject to Ax # b, x # Z n
for each j (the d j # Z+ are given
integers). If A # {0, 1} m-n , then we assume w.l.o.g. that each b i is a positive integer.
we may assume B # 1. A CIP is uncapacitated if
It is well-known that the two assumptions above are without loss of generality.
(i) If A # {0, 1} m-n , then we can clearly replace each b i by #b i #. (ii) Given a CIP
with some A i,j > b i , we can normalize it by first setting A i,j := b i for each such (i, j),
and then scaling A and b uniformly so that #k, (b k # 1 and max # A k,# 1). This is
easily seen to result in an equivalent CIP.
To motivate the model, we consider a concrete CIP example: a facility location
problem that generalizes the set cover problem. Here, given a digraph G, we want
to place facilities on the nodes suitably so that every node has at least B facilities in
its out-neighborhood. Given a cost-per-facility c j of placing facilities at node j, we
desire to place the facilities in a way that will minimize the total cost. It is easy to see
that this NP-hard problem is a CIP, with the matrix A having only zeroes and ones.
This problem illustrates one main reason for the constraints {x j # d j }: for reasons
of capacity, security, or fault-tolerance (not many facilities will be damaged if, for
instance, there is an accident/failure at a node), we may wish to upper bound the
number of facilities that can be placed at individual sites. The more general problem
of "file sharing" in a network has been studied by Naor & Roth [24], where again, the
maximum load (number of facilities) per node is balanced with the global criterion
of total construction cost. For similar reasons, CIPs typically include the constraints
In fact, the case where d
Dobson [7] and Fisher &Wolsey [8] study a natural greedy algorithm GA for CIPs.
For a given CIP, let OPT denote the value of its optimal integral solution. We define
shown in
[8] that GA produces a solution of value at most OPT (1 each row of
the linear system Ax # b is scaled so that the minimum nonzero entry in the row is at
least 1, it is shown in [7] that GA's output is at most OPT (1
Another well-known approach to CIPs is to start with their LP relaxation, wherein
each x j is allowed to be a real in the range [0, d j ]. Throughout, we shall let y # denote
the LP optimum of a given CIP. Clearly, y # is a lower bound on OPT . Bertsimas &
Vohra [5] conduct a detailed study of approximating CIPs and present an approximation
algorithm which finds a feasible solution whose value is O(y # lg m) [5]. Previous
work of this paper's first author [31] presents an algorithm that computes an x # Z n
such that Ax # b and
for some absolute constant a 0 > 0. 1 The bound "x j # d # j may not hold for all j, but
we will have for all j that
for a certain absolute constant a 1 > 0. A related result is presented in [24] for file-sharing
If B is "large" (greater than a certain threshold), then these results significantly
improve previous results in the "global" criterion of keeping c T
compromising
somewhat on the "local" capacity constraints {x j # d j }. This is a common
approach in bicriteria approximation: losing a small amount in each criterion to keep
the maximum such loss "low". In particular, if y # grows at least as fast as me -O(B) ,
then the output value here is O(y # ), while maintaining x
the CIP is uncapacitated, then the above is a significant improvement if B is large.)
We see from (1.2) that in the case where ln
the maximum "violation" are bounded by constants, which is reasonable.
Thus, we consider the case where ln however, the violation
can be as high as 1 which is unsatisfactory. If it
is not feasible (e.g., for capacity/fault-tolerance reasons) to deviate from the local
constraints by this much, then even the gain in the global criterion (caused by the
large value of B) will not help justify such a result. So, a natural question is: is it
possible to lose a small amount in the global criterion, while losing much less in the
local criterion (i.e., in in the case where ln
this in the a#rmative.
(a) For the important special case of unweighted CIPs (#j, c consider the case
parameter #, 0 < # < 1, we present an algorithm
that outputs an x with
1 Recall that To parse the term "ln note that it is
me -B , and is #(1) otherwise.
(ii) the objective function value is at most a 2 y # (1/(1-#)+(1/# 2
for an absolute constant a 2 > 0.
Note the significant improvement over (1.1) and (1.2), particularly if # is a con-
stant: by losing just a constant factor in the output value of the objective function,
we have ensured that each x j /d j is bounded by a constant (at most 1/(1 - #)
This is an improvement over the bound stated in (1.2). In our view, ensuring little loss
in the local criterion here is quite important as it involves all the variables x j (e.g.,
all the nodes of a graph in facility location) and since may be required to
be low due to physical and other constraints.
(b) For the case where the coe#cient matrix A has only zeroes and ones and where
a feasible solution (i.e., #j, x j # d j ) to a (possibly weighted) CIP is really required,
we present an approximation algorithm with output value at most O(y #
This works whether not. While incomparable with the results
of [7, 8], this is better if y # is bigger than a certain threshold. This is also seen to be
an improvement over the O(y # lg m) bound of [5] if y # m a , where a # (0, 1) is an
absolute constant.
Thus, this work presents improved local vs. global balancing for a family of prob-
lems: the basic packet-routing problem (the first constant-factor approximation) and
CIPs (gaining more than a constant factor in the local criterion while losing a constant
factor in the global criterion). The structure of the rest of the paper is as follows. In
-2, we discuss the algorithm for the packet routing problem, which consists mainly
of three steps: (1) constructing and solving an LP relaxation (-2.1); (2) obtaining a
set of routes via suitable rounding (-2.2); and (3) scheduling the packets (-2.3) using
the algorithm of [17]. The nature of our LP relaxation also provides an interesting
re-interpretation of our result, as shown by Theorem 2.4 in -2.3. We discuss in -2.4 an
extension of our idea to a more general setting, where the routing problem is replaced
by a canonical covering problem. In -3, we discuss our results for the general covering
integer programs. We present our improved local vs. global balancing for unweighted
CIPs in -3.1; the case where x j # d j is really required for all j is handled in -3.2, for
the case where the coe#cient matrix has only zeroes and ones. (Note, for instance,
that the coe#cient matrix has only zeroes and ones for the facility location problem
discussed in -1.2.)
2. Approximating the Routing Time to within a Constant Factor. We
refer the reader to the introduction for the definition and motivation for packet rout-
ing. Leighton, Maggs & Rao, in a seminal paper, studied the issue of scheduling the
movement of the packets given the path to be traversed by each packet [16]. They
showed that the packets can be routed in time proportional to the sum of "conges-
tion" and "dilation" of the paths selected for each packet. However, they did not
address the issue of path selection; one motivation for their work is that paths can
plausibly be selected using, e.g., the well-known "random intermediate destinations"
idea [33, 34]. However, no general results on path selection, and hence on the time
needed for packet routing, were known for arbitrary networks G. We address this
issue here by studying the paths selection problem.
Theorem 2.1. There are constants c # , c # > 0 such that the following holds. For
any packet routing problem on any network, there is a set of paths and a corresponding
schedule that can be constructed in polynomial time, such that the routing time is at
most c # times the optimal. Furthermore, the maximum queue size at each edge is
bounded by c # .
We shall denote any path from s k to t k as an (s k , t k )-path. Given a (directed)
path P , E(P ) will denote its set of (directed) edges.
2.1. A Linear Programming Relaxation. Consider any given packet routing
problem. Let us consider any feasible solution for it, where packet k is routed on path
denote the dilation of the paths selected, i.e., D is the length of a longest
path among the P k . Clearly, the time to route all the packets is bounded below by
D. Similarly, let C denote the congestion of the paths selected, i.e., the maximum
number of packets that must traverse any single edge during the entire course of the
routing. Clearly, C is also a lower bound on the time needed to route the packets.
Let N denote the number of nodes in the network and K the number of packets in
the problem. We now present a linear programming (LP) relaxation for the problem;
some of the notation used in this relaxation is explained in the following paragraph.
(ROUTING) min(C +D)/2 subject to:
E(G).
The vector x above is basically a "fractional flow" in G, where x k
f
denotes the
amount of "flow" of packet k on edge f # E(G). The superscript k merely indexes
a packet, and does not mean a kth power. The constraints "N k x model the
requirement that for packet k, (i) a total of one unit of flow leaves s k and reaches
and (ii) at all other nodes, the net inflow of the flow corresponding to packet k,
equals the net outflow of the flow corresponding to packet k. For conciseness, we have
avoided explicitly writing out this (obvious) set of constraints above. Constraints
say that the "fractional congestion" on any edge f is at most C. Constraints
(2.2) say that the "fractional dilation"
f , is at most D. This is a somewhat novel
way of relaxing path lengths to their fractional counterparts.
It is easy to see that any path-selection scheme for the packets, i.e., any integral
flow (where all the x k
f are either 0 or 1) with congestion C and dilation D, satisfies
the above system of inequalities. Thus, since C and D are both lower bounds on the
length of the routing time for such a path-selection strategy, so is (C +D)/2. Hence,
the optimum value of the LP is indeed a lower bound on the routing time for a given
routing problem: it is indeed a relaxation. Note that the LP has polynomial size since
it has "only" O(Km) variables and O(Km) constraints, where m denotes the number
of edges in the network. Thus, it can be solved in polynomial time. Let {x, C , D}
denote an optimal solution to the program. In -2.2, we will conduct a certain type
of "filtering" on x. Section 2.3 will then construct a path for each packet, and then
invoke the algorithm of [17] for packet scheduling.
2.2. Path Filtering. The main ideas now are to decompose x into a set of "flow
paths" via the "flow decomposition" approach, and then to adapt the ideas in Lin-
Vitter [20] to "filter" the flow paths by e#ectively eliminating all flow paths of length
more than 2D.
The reader is referred to Section 3.5 of [1] for the well-known flow decomposition
approach. This approach e#ciently transforms x into a set of flow paths that satisfy
the following conditions. For each packet k, we get a collection Q k of flows along
each Q k has at most m paths. Let P k,i denote the ith path in Q k . P k,i
has an associated flow value z k,i # 0, such that for each k,
words, the unit flow from s k to t k has been decomposed into a convex combination of
)-paths.) The total flow on any edge f will be at most C:
z
the inequality in (2.4) follows from (2.1). Also, let |P | denote the length of (i.e., the
number of edges in) a path P . Importantly, the following bound will hold for each k:
z k,i |P k,i
with the inequality following from (2.2).
The main idea now is to "filter" the flow paths so that only paths of length at
most 2D remain. For each k, define
z k,i .
It is to easy to check via (2.5) that g k # 1/2 for each k. Thus, suppose we define new
flow values {y k,i } as follows for each k: y
if |P k,i | # 2D. We still have the property that we have a convex combination of flow
values: # i y 1. Also, since g k # 1/2 for all k, we have y k,i # 2z k,i for all k, i. So,
implies that the total flow on any edge f is at most 2C:
Most importantly, by setting y all the "long" paths P k,i (those of length
more than 2D), we have ensured that all the flow paths under consideration are of
length at most O(D). We denote the collection of flow paths for packet k by P k . For
ease of exposition, we will also let yP denote the flow value of any general flow path
Remarks. We now point out two other LP relaxations which can be analyzed
similarly, and which yield slightly better constants in the approximation guarantee.
. It is possible to directly bound path-lengths in the LP relaxation so that
filtering need not be applied; one can show that this improves the approximation
guarantee somewhat. On the other hand, such an approach leads
to a somewhat more complicated relaxation, and furthermore, binary search
has to be applied to get the "optimal" path-length. This, in turn, entails
potentially O(lg N) calls to an LP solver, which increases the running time.
Thus, there is a trade-o# involved between the running time and the quality
of approximation.
. In our LP formulation, we could have used a variable W to stand for max{C, D}
in place of C and D; the problem would have been to minimize W subject to
the fractional congestion and dilation being at most W . Since W is a lower
bound on the optimal routing time, this is indeed a relaxation; using our approach
with this formulation leads to a slightly better constant in the quality
of our approximation. Nevertheless, we have used our approach to make the
relationship between C and D explicit.
2.3. Path Selection and Routing. Note that {y_P : P ∈ P_k, 1 ≤ k ≤ K} is a fractional
feasible solution to the following set of inequalities: for each packet k, Σ_{P ∈ P_k} y_P = 1, and
for each edge f, Σ_k Σ_{P ∈ P_k: f ∈ P} y_P ≤ 2C.
To select one path from P k for each packet k, we need to modify the above fractional
solution to an integral 0-1 solution. To ensure that the congestion does not increase
by much, we shall use the following rounding algorithm of [13]:
Theorem 2.2. ([13]) Let A be a real-valued r × s matrix, and y be a real-valued
s-vector. Let b be a real-valued vector such that Ay = b, and t be a positive real number
such that, in every column of A, (i) the sum of all the positive entries is at most t
and (ii) the sum of all the negative entries is at least −t. Then we can compute an
integral vector ȳ such that for every i, either ȳ_i = ⌊y_i⌋ or ȳ_i = ⌈y_i⌉, and such that
every entry of Aȳ is strictly smaller than the corresponding entry of b plus t.
Furthermore, if y contains d non-zero components, the integral
approximation can be obtained in time O(r³ lg(1 + . . .)).
To use Theorem 2.2, we first transform our linear system above to the equivalent
system:
The set of variables above is {y_P : P ∈ P_1 ∪ · · · ∪ P_K}. Note that y_P ∈ [0, 1] for
all these variables. Furthermore, in this linear system, the positive column sum is
bounded by the maximum length of the paths in P 1
#P K . Since each path
in any P k is of length at most 2D due to our filtering, each positive column sum is
at most 2D. Each negative column sum is clearly -2D. Thus, the parameter t for
this linear system, in the notation of Theorem 2.2, can be taken to be 2D. Hence by
Theorem 2.2, we can obtain in polynomial time an integral solution y satisfying
For each packet k, by conditions (2.8) and (2.9), we have Σ_{P ∈ P_k} ȳ_P ≥ 1. (Note the
crucial role of the strict inequality in (2.8).) Thus, for each packet k, we have selected
at least one path from s_k to t_k, with length at most 2D; furthermore, the congestion is
bounded by 2C + 2D (from (2.7)). If there are two or more such (s_k, t_k)-paths, we can
arbitrarily choose one among them, which of course cannot increase the congestion.
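For intuition only, the selection step can also be mimicked by plain randomized rounding in the style of Raghavan and Thompson [27]: sample one path per packet with probability equal to its filtered flow value. This is an illustrative alternative to the deterministic rounding of Theorem 2.2 used above (it only controls the congestion in expectation rather than with certainty); all names below are hypothetical.

import random

def select_paths_randomized(filtered, rng=random.Random(0)):
    """filtered: dict k -> list of (path, y) pairs with sum of y equal to 1
    (e.g., the output of filter_flow_paths). Samples one path per packet."""
    chosen = {}
    for k, paths in filtered.items():
        r, acc = rng.random(), 0.0
        for path, y in paths:
            acc += y
            if r <= acc:
                chosen[k] = path
                break
        else:
            chosen[k] = paths[-1][0]   # numerical safeguard
    return chosen

def congestion_and_dilation(chosen):
    """Congestion = max number of selected paths sharing an edge; dilation = max path length.
    Paths are sequences of hashable edges (e.g., vertex-pair tuples)."""
    load = {}
    for path in chosen.values():
        for e in path:
            load[e] = load.get(e, 0) + 1
    return max(load.values()), max(len(p) for p in chosen.values())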
The next step is to schedule the packets, given the set of paths selected for each
packet. To this end, we use the following result of [17], which provides an algorithm
for the existential result of [16]:
Theorem 2.3. ([17]) For any set of packets with edge-simple paths having congestion
c and dilation d, a routing schedule having length O(c + d) and constant
maximum queue size can be found in random polynomial time.
Applying this theorem to the paths selected from the previous stage, which have
congestion c ≤ 2C + 2D and dilation d ≤ 2D, we can route the packets in time
O(c + d) = O(C + D). Recall that (C + D)/2 is a lower bound on the length of the optimal
schedule. Thus, we have presented a constant-factor approximation algorithm for the
off-line packet routing problem; furthermore, the queue-sizes are also bounded by an
absolute constant, in the routing schedule produced. An interesting related point is
that our LP relaxation is reasonable: its integrality gap (worst-case ratio between the
optima of the integral and fractional versions) is bounded above by O(1).
An Alternative View. There is an equivalent interesting interpretation of Theorem 2.1:
Theorem 2.4. Suppose we have an arbitrary routing problem on an arbitrary
graph G, and let L be any non-negative parameter (e.g., O(1), O(lg n), O(√n)).
Let {(s_k, t_k) : 1 ≤ k ≤ K} be the set of source-destination pairs for the packets.
Suppose we can construct a probability distribution D k on the (s k , t k )-paths for each
k such that if we sample, for each packet k, an (s k , t k )-path from D k independently of
the other packets, then we have: (a) for any edge e # E(G), the expected congestion on
e is at most L, and (b) for each k, the expected length of the (s k , t k )-path chosen is at
most L. Then, there is a choice of paths for each packet such that the congestion and
dilation are O(L). Thus, the routing can be accomplished in O(L) time using constant-sized
queues; such a routing can also be constructed o#-line in time polynomial in |V |
and K.
We remark that the converse of Theorem 2.4 is trivially true: if an O(L) time
routing can be accomplished, we simply let D k place all the probability on the (s k , t k )-
path used in such a routing.
Proof of Theorem 2.4: Let π^k_P denote the probability measure of any (s_k, t_k)-
path P under the distribution D_k. Let supp(D_k) denote the support of D_k, i.e., the
set of (s_k, t_k)-paths on which D_k places nonzero probability. The proof follows from
the fact that the fractional flow defined by x^k_{(i,j)} = Σ_{P ∈ supp(D_k): (i,j) ∈ P} π^k_P, for each
edge (i, j) and each packet k,
is a feasible solution to (ROUTING), with C, D replaced by L. Hence by our filter-
round approach, we can construct one path for each packet k such that the congestion
and dilation are O(L). As seen above, the path selection and routing strategies can
be found in polynomial time.
We consider the above interesting because many fault-tolerance algorithms use
very involved ideas to construct a suitable (s k , t k )-path for (most) packets [6]. These
paths will need to simultaneously have small lengths and lead to small edge congestion.
Theorem 2.4 shows that much more relaxed approaches could work: a distribution
that is "good" in expectation on individual elements (edges, paths) is su#cient. Recall
that in many "discrete ham-sandwich theorems" (Beck & Spencer [4], Raghavan &
Thompson [27]), it is easy to ensure good expectation on individual entities (e.g.,
the constraints of an integer program), but is much more di#cult to construct one
solution that is simultaneously good on all these entities. Our result shows one natural
situation where there is just a constant-factor loss in the process.
2.4. Extensions. The result above showing a constant integrality gap for packet
routing, can be extended to a general family of combinatorial packing problems as
follows. Let S_k be the family of all the subsets of vertices S such that s_k ∈ S and
t_k ∉ S. Recall that the (s_k, t_k)-shortest path problem can be solved as an LP via the
following covering formulation:
min Σ_{(i,j)∈E} c_{ij} x_{ij}
subject to:
(2.10)   Σ_{(i,j)∈E: i∈S, j∉S} x_{ij} ≥ 1   for all S ∈ S_k;
x_{ij} ≥ 0   for all (i, j) ∈ E(G).
Constraint (2.10) expresses the idea that "flow" crossing each s-t cut is at least 1.
The following is an alternative relaxation for the packet routing problem:
(ROUTING-II)   min (C + D)/2 subject to:
(2.11)   Σ_{(i,j)∈E: i∈S, j∉S} x^k_{ij} ≥ 1   for each k and each S ∈ S_k,
together with the congestion and dilation constraints of (ROUTING), and x ≥ 0.
We can use the method outlined in Sections 2.1, 2.2 and 2.3 to show that the
optimal solution of (ROUTING-II) is within a constant factor of the optimal routing
time. A natural question that arises is whether the above conclusion holds for more
general combinatorial packing problems. To address this question, we need to present
an alternative (polyhedral) perspective of our (path) selection routine. First we recall
some standard definitions from polyhedral combinatorics.
Suppose we are given a finite ground set N = {1, 2, . . . , n} and a family F of subsets
of N. For any S ⊆ N, let χ_S ∈ {0, 1}^n denote the incidence vector of S. We shall
consider the problem (OPT): min{w^T χ_S : S ∈ F}, where w : N → R_{≥0}
is a weight function on the elements of N.
Definition 2.5. ([25]) The blocking clutter of F is the family B(F), whose
members are precisely those H ⊆ N that satisfy:
P1. H ∩ S ≠ ∅ for every S ∈ F.
P2. Minimality: If H′ is any proper subset of H, then H′ violates property P1.
A natural LP relaxation for (OPT
Q is known as the blocking polyhedron of F . The following result is well-known and
easy to check:
F such that #i # F, x i # 1}.
For several classes of clutters (set-systems), it is known that the extreme points
of Q are the integral vectors that correspond to incidence vectors of elements in
F . By Minkowski's Theorem [25], every element in Q can be expressed as a convex
combination of the extreme points and extreme rays in Q. For blocking polyhedra,
the set of rays is
Suppose we have a generic integer programming problem that is similar to (ROUTING-
II), except for the fact that for each k, (2.11) is replaced by the constraint
F k can be any clutter that is well-characterized by its blocking polyhedron Q k (i.e.,
the extreme points of the blocking polyhedron Q k are incidence vectors of the elements
in the clutter F k ). Thus, we have a generalization of (ROUTING-II):
subject to:
Note that the variables x are now indexed by elements of the set N . In the previously
discussed special cases, the elements of N are edges, or pairs of nodes.
The LP relaxation of (BLOCK) replaces the constraint (2.13) by
Theorem 2.6. The optimum integral solution to (BLOCK) has a value that is
at most a constant factor greater than the optimal value to its LP relaxation.
Proof. Let (x^1, x^2, . . . , x^K, C, D) denote an optimal solution to the LP
relaxation. By Caratheodory's Theorem [25], for each fixed k, the vector x^k can be
expressed as a convex combination of extreme points and extreme rays of the blocking
polyhedron Q_k. However, note that the objective function can only improve by
decreasing the value of any coordinate x^k_i, as
long as the solution remains feasible.
Furthermore, the extreme rays of the blocking polyhedron correspond to vectors v
with each v i non-negative. Thus, without loss of generality, we may assume that the
LP optimum is lexicographically minimal. This ensures that the optimal solution
can be expressed as a convex combination of the extreme points of the polyhedron
alone. As seen above, the extreme points in this case are incidence vectors of elements
of the k-th clutter (we use polyhedral language to let "k-th clutter" denote the
set-system F k ).
Let C and D denote the fractional "congestion" and fractional "dilation" of the
optimal solution obtained by the LP relaxation of (BLOCK). Let A^k_1, A^k_2, . . . denote
the incidence vectors of the elements in the k-th clutter, and let A^k_j(i) be the i-th coordinate
of A^k_j. Then we have a convex combination, for each k:
x^k = Σ_j λ^k_j A^k_j,   with λ^k_j ≥ 0 and Σ_j λ^k_j = 1.
Thus, by constraints (2.12), Σ_{j: |A^k_j| > 2D} λ^k_j ≤ 1/2.
By filtering out those A^k_j with size greater than 2D, we obtain a set of canonical
objects for each k, whose sizes are at most 2D. By scaling the λ^k_j by a suitable factor,
we also obtain a new set of multipliers λ̄^k_j such that Σ_j λ̄^k_j = 1 and λ̄^k_j ≤ 2λ^k_j for all j.
Using these canonical objects and {λ̄^k_j} as the input to Theorem 2.2, we obtain a
set of objects (one from each clutter) such that the dilation is not more than 2D and
the congestion not more than 2(C +D). Hence the solution obtained is at most O(1)
times the LP optimum.
Remark. As pointed out by one of the referees, it is not clear whether the lexicographically
minimal optimal solution can be constructed in polynomial time. The
above result is thus only about the quality of the LP relaxation. It would be nice
to find the most general conditions under which the above can be turned into a
polynomial-time approximation algorithm.
3. Improved Local vs. Global Balancing for Covering. Coupled with the
results of [16, 17], our approximation algorithm for the routing time (a global crite-
rion) also simultaneously kept the maximum queue size (a local capacity constraint)
constant; our approach there implicitly uses the special structure of the cut covering
formulation. We now continue the study of such balancing in the context of covering
integer programs (CIPs). The reader is referred to §1.2 for the relevant definitions
and history of CIPs. In §3.1, we will show how to approximate the global criterion
well without losing much in the "local" constraints {x_j ≤ d_j}. In §3.2, we present
approximation algorithms for a subfamily of the CIPs where x_j ≤ d_j is required for
all j. One of the key tools used in §3.1 and §3.2 is Theorem 3.3, which builds on an
earlier rounding approach (Theorem 3.2) of [31].
3.1. Balancing Local with Global. The main result of §3.1 is Corollary 3.5.
This result is concerned with unweighted CIPs, and the case where ln
In this setting, Corollary 3.5 shows how the local capacity constraints can be violated
much less in comparison with the results of [31], while keeping the objective function
value within a constant factor of that of [31].
Let exp(x) denote e^x; given any non-negative integer k, let [k] denote the set
{1, 2, . . . , k}. We start by reviewing the Chernoff-Hoeffding bounds in Theorem 3.1.
Let G(μ, δ) = (e^δ / (1 + δ)^{1+δ})^μ.
Theorem 3.1. ([23]) Let X_1, X_2, . . . , X_ℓ be independent random variables, each
taking values in [0, 1]. Let X = Σ_{i∈[ℓ]} X_i and μ = E[X]. Then, for any δ ≥ 0,
Pr[X ≥ μ(1 + δ)] ≤ G(μ, δ).
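As a quick numerical sanity check of this tail bound, the snippet below evaluates G(μ, δ) in the form stated above (an assumption of this sketch) and compares it with a Monte Carlo estimate for Bernoulli summands; the names are illustrative.

import math, random

def G(mu, delta):
    # Chernoff-Hoeffding bound on Pr[X >= mu*(1+delta)], X a sum of independent [0,1] variables
    return math.exp(mu * (delta - (1 + delta) * math.log(1 + delta)))

def empirical_tail(ell, p, delta, trials=50_000, rng=random.Random(1)):
    mu = ell * p
    hits = sum(1 for _ in range(trials)
               if sum(rng.random() < p for _ in range(ell)) >= mu * (1 + delta))
    return hits / trials

# Example: 100 fair coin flips, delta = 0.5; the empirical tail should not exceed G(50, 0.5).
print(G(50, 0.5), empirical_tail(100, 0.5, 0.5))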
We shall use the following easy fact:
From now on, we will let {x*_j : j ∈ [n]} be the set of values for the variables in
an arbitrary feasible solution to the LP relaxation of the given CIP. (In particular, x*
could be an optimal LP solution.) Let y* = Σ_{j∈[n]} c_j x*_j.
Recall that the case
handled well in [31]; thus we shall assume
B. We now summarize the main result of [31] for CIPs as a theorem:
Theorem 3.2. ([31]) For any given CIP, suppose we are given any 1 # <
# such that
holds. Then we can find in deterministic polynomial time, a vector
of non-negative integers such that: (a) (Az) i # b i # for each i # [m], (b)
y #, and (c) z j #x # j #d j # for each j # [n].
The next theorem presents a rounding algorithm by building on Theorem 3.2:
Theorem 3.3. There are positive constants a 3 and a 4 such that the following
holds. Given any parameter #, 0 < # < 1, let # be any value such that #
(a 3 Then we can find in deterministic polynomial time,
a vector non-negative integers such that: (a) (Az) i # b i #(1- #)
for each i # [m], (b) c T
z # a 4 y #, and (c) z j #x # j #d j # for each j # [n].
Remark. It will be shown in the proof of Theorem 3.3 that we can choose, for
instance, a 2. Since there is a trade-o# between a 3 and a 4 that can
be fine-tuned for particular applications, we have avoided using specific values for a 3
and a 4 in the statement of Theorem 3.3.
The following simple proposition will also be useful:
Proposition 3.4. If 0 < x < 1/e, then 1 - x > exp(-1.25x).
Proof of Theorem 3.3: We choose a 2. In the notation of Theorem
3.2, we take #(1 - #) and Our goal is to validate (3.2); by (3.1), it
su#ces to show that
exp(-y
Note that the left- and right-hand sides of (3.3) respectively decrease and increase
with increasing #; thus, since # 0
it is enough
to prove (3.3) for # 0 . We consider two cases.
Case I.
1/e. So, Proposition 3.4 implies that in order to prove (3.3), it su#ces to show that
i.e., that y # 2
# 1.25m exp(-1.5B). This is true from the facts that: (i) m/y #
(which follows from the fact that ln(m/y #
Case II. it su#ces to show that
exp(-y
Recall that 1. So, we have mB/y # > e, i.e., y # /(mB) < 1/e.
Thus,
The inequality follows from Proposition 3.4. So, to establish (3.4), we just need show
that
i.e., that
e, which in turn follows from the facts that
# 1. This completes the proof.
Our required result is:
Corollary 3.5. Given any unweighted CIP with any
parameter #, 0 < # < 1, we can find in deterministic polynomial time, a vector
non-negative integers such that: (a) Av # b, (b)
a is an absolute constant, and
Proof. Let #(a 3 z be as in the statement of Theorem
3.3. for each j. Conditions (a) and (c) are easy to
check, given Theorem 3.3. Since the z j 's are all non-negative integers and since the
CIP is unweighted condition (b) of Theorem 3.3 shows that at most
a 4 y # of them can be nonzero. Thus, condition (b) follows since v j # z j /(#(1-#))+1
if z j > 0 and since v
As mentioned in §1, this improves the value of
[31] to O(1/(1-#)), while keeping (c T
relatively small at O((1/# 2 )-ln
(as long as # is a constant bounded away from 1).
3.2. Handling Stringent Constraints. We now handle the case where the
constraints x_j ≤ d_j have to be satisfied and where the coefficient matrix A has only
zeroes and ones. Recall from §1 that there is a family of facility location problems
where the coefficient matrix has only zeroes and ones; this is an example of the CIPs
to which the following results apply.
We start with a technical lemma.
Lemma 3.6. For any # > 0, the sum
is at most u
Proof. If
be the highest index such that u r < #/e. Thus, s
ur
it follows that t r # u r ln(#/u r
the last inequality follows from the fact that for any x # y such that x < #/e,
The following simple proposition will also help.
Proposition 3.7. For any # > 0 and # 1,
Proof. The proposition is immediate if # e. Next note that for any a # e, the
function g a decreases as x increases from 1 to infinity. So, if # e and
e, then
Finally, if # > e and # > e, then (ln
Theorem 3.8. Suppose we are given a CIP with the matrix A having only zeroes
and ones. In deterministic polynomial time, we can construct a feasible solution z to
the CIP with z j # d j for each j, and such that the objective function value c T
- z is
O(y
Proof. Let a 3 and a 4 be as in the proof of Theorem 3.3. Define a
and, for any S # [n], y # Starting with S we construct a sequence
of sets S 0 # S 1 # - as follows. Suppose we have constructed S
. If S #, we stop; or else, if all j # S i satisfy a 5
stop. If not, define the proper subset S i+1 of S i to be {j #
to be d j # x note that for all such j,
t be the final set we construct. If S #, we do nothing more; since z j # x # j
for all j, we will have Az # b as required. Also, it is easy to check that z j # d j for all
j. So suppose S t #. Let we stopped at the non-empty set
we see that #x # j # d j for all j # S t . Recall that for all j # S t , we have fixed the
value of z j to be d j # x # j . Let w denote the vector of the remaining variables, i.e.,
the restriction of x # to S t . Let A # be the sub-matrix of A induced by the columns
corresponding to S t . We will now focus on rounding each x # j (j # S t ) to a suitable
non-negative integer z j # d j .
for each i # [m],
A
since z j # x # j for all j # S t , we get
A
Since each b i and A i,j is an integer, so is each b # i . Suppose b # i # 0 for some i. Then,
whatever non-negative integers z j we round the j # S t to, we will satisfy the constraint
So, we can ignore such indices i and assume without loss of generality
that B # .
constraints corresponding to indices i with b # i # 0 can be
retained as "dummy constraints".)
Proposition 3.7 shows that
i.e., that # (a 3 Thus, by Theorem 3.3, we can round
each x # j (j # S t ) to some non-negative integer z j #x # j # d j in such a manner that
the last inequality (i.e., that #(1 - # 1/2) follows from the fact that # a 5 # 2.
So we can check that the final solution is indeed feasible. We only need to bound the
objective function value, which we proceed to do now.
We first bound
Fix any i, 0 # i # t - 1. Recall that for each j # (S i - S i+1 ), we set z
a
Setting substituting (3.7) into (3.6),
where gives the final objective function value. Else
shows that
This, in combination with (3.8) and Lemma 3.6, shows that
This completes the proof.
4. Conclusion. In this paper, we analyze various classes of problems in the
context of balancing global versus local criteria.
Our main result is the first constant-factor approximation algorithm for the off-line
packet routing problem on arbitrary networks: for certain positive constants c′
and c″, we show that given any packet routing problem, the routing time can efficiently
be approximated to within a factor of c′, while ensuring that all edge-queues are of size
at most c″. Our result builds on the work of [16, 17], while exploiting an interesting
trade-off between a (hard) "congestion" criterion and an (easy) "dilation" criterion.
Furthermore, we show that the result can be applied to a more general setting, by
providing a polyhedral perspective of our technique. Our approach of appropriately
using the rounding theorem of [13] has subsequently been applied by Bar-Noy, Guha,
Naor & Schieber to develop approximation algorithms for a family of multi-casting
problems [3]. It has also been applied for a family of routing problems by Andrews &
Zhang [2].
The second major result in the paper improves upon a class of results in multi-criteria
covering integer programs. We show that the local criterion of unweighted
covering integer programs can be improved from an approximately logarithmic factor
to a constant factor, with the global criterion not deteriorating by more than a
constant factor (i.e., we maintain a logarithmic factor approximation).
The third main result improves upon a well-known bound for covering integer pro-
grams, in the case where the coefficient matrix A has only zeroes and ones. We show
that the approximation ratio can be improved from O(y # lg m) to O(y #
Some open questions are as follows. It would be interesting to study our packet-
routing algorithm empirically, and to fine-tune the algorithm based on experimental
observation. It would also be useful to determine the best (constant) approximation
possible in approximating the routing time. An intriguing open question is whether
there is a distributed packet-routing algorithm with a constant-factor approximation
guarantee. Finally, in the context of covering integer programs, can we approximate
the objective function to within bounds such as ours, with (essentially) no violation
of the local capacity constraints?
Acknowledgements
. We thank Bruce Maggs, the STOC 1997 program committee
and referee(s), and the journal referees for their helpful comments. These
have helped improve the quality of this paper a great deal. In particular, one of the
journal referees simplified our original proof of Lemma 3.6.
--R
Network flows: theory
Packet routing with arbitrary end-to-end delay requirements
Integral approximation sequences.
Rounding algorithms for covering problems.
Routing on butterfly networks with random faults.
On the greedy heuristic for continuous covering and packing problems.
Correlational inequalities for partially ordered sets.
Blocking Polyhedra.
Scheduling to minimize average completion time: O
Asymptotically tight bounds for computing with faulty arrays of processors.
Global wire routing in two-dimensional arrays
Introduction to Parallel Algorithms and Architectures: Arrays
Methods for message routing in parallel machines.
Packet routing and job-shop scheduling in O(congestion+dilation) steps
Fast algorithms for finding O(congestion
On the fault tolerance of some popular bounded-degree networks
An approximate max-flow min-cut theorem for uniform multi-commodity flow problems with applications to approximation algorithms
Scheduling n independent jobs on m uniform machines with both flow time and makespan objectives: a parametric approach.
Randomized Algorithms.
Optimal file sharing in distributed networks.
Universal O(congestion
Randomized rounding: a technique for provably good algorithms and algorithmic proofs.
How to emulate shared memory.
Improved approximation guarantees for packing and covering integer programs.
On the existence of schedules that are near-optimal for both makespan and total weighted completion time
A scheme for fast parallel communication.
Universal schemes for parallel communication.
--TR
--CTR
Stavros G. Kolliopoulos , Neal E. Young, Approximation algorithms for covering/packing integer programs, Journal of Computer and System Sciences, v.71 n.4, p.495-505, November 2005
Stavros G. Kolliopoulos, Approximating covering integer programs with multiplicity constraints, Discrete Applied Mathematics, v.129 n.2-3, p.461-473, 01 August
Konstantin Andreev , Bruce M. Maggs , Adam Meyerson , Ramesh K. Sitaraman, Designing overlay multicast networks for streaming, Proceedings of the fifteenth annual ACM symposium on Parallel algorithms and architectures, June 07-09, 2003, San Diego, California, USA | linear programming;approximation algorithms;multicommodity flow;packet routing;rounding theorems;randomized algorithms;covering integer programs;discrete ham-sandwich theorems;randomized rounding |
586898 | On Bipartite Drawings and the Linear Arrangement Problem. | The bipartite crossing number problem is studied and a connection between this problem and the linear arrangement problem is established. A lower bound and an upper bound for the optimal number of crossings are derived, where the main terms are the optimal arrangement values. Two polynomial time approximation algorithms for the bipartite crossing number are obtained. The performance guarantees are O(log n) and O(log2 n) times the optimal, respectively, for a large class of bipartite graphs on n vertices. No polynomial time approximation algorithm which could generate a provably good solution had been known. For a tree, a formula is derived that expresses the optimal number of crossings in terms of the optimal value of the linear arrangement and the degrees, resulting in an O(n1.6) time algorithm for computing the bipartite crossing number.The problem of computing a maximum weight biplanar subgraph of an acyclic graph is also studied and a linear time algorithm for solving it is derived. No polynomial time algorithm for this problem was known, and the unweighted version of the problem had been known to be NP-hard, even for planar bipartite graphs of degree at most 3. | Introduction
The planar crossing number problem calls for placing the vertices of a graph in the plane and drawing
the edges with Jordan curves, so that the number of edge crossings is minimized. This problem has
been extensively studied in graph theory [32], combinatorial geometry [22], and theory of VLSI [16].
In this paper we study the bipartite crossing number problem which is an important variation of the
planar crossing number. Throughout this paper G = (V_0, V_1, E) denotes a connected bipartite graph,
where V_0 and V_1 are the two classes of independent vertices, and E is the edge set. We will assume that
# The research of the first author was supported by NSF grant CCR-9528228. The research of the second and fourth
authors was supported in part by the Alexander von Humboldt Foundation and by the Slovak Scientific Grant Agency
grant No. 95/5305/277. Research of the third author was supported in part by the Hungarian NSF contracts T 016 358
and T 019 367, and by the NSF contract DMS 970 1211. A preliminary version of this paper was published at WADS'97.
|V_0 ∪ V_1| = n and |E| = m. A bipartite drawing [13], or 2-layer drawing of G consists of placing
the vertices of V 0 and V 1 into distinct points on two parallel lines and then drawing each edge using
a straight line segment connecting the points representing the endvertices of the edge. Let bcr(G)
denote the bipartite crossing number of G, that is, bcr(G) is the minimum number of edge crossings
over all bipartite drawings of G.
Computing bcr(G) is NP-hard [11] 1 but can be solved in polynomial time for bipartite permutation
graphs [29]. The problem of obtaining nice multiple layer drawings of graphs (i.e. drawings with
small number of crossings), has been extensively studied by the graph drawing, VLSI, and CAD
communities [6, 7, 19, 30, 31]. In particular one of the most important aesthetic objectives in graph
drawing is reducing the number of crossings [23]. Very recently J-unger and Mutzel, [14] and Mutzel [20]
succeeded to employ integer programming methods in order to compute bcr(G) exactly, or to estimate
it, nevertheless, these methods do not guarantee polynomial time convergence. In fact, although a
O(log 4 n) times optimal polynomial time algorithm for approximating the planar crossing number
of degree bounded graphs has been known [17], no polynomial time approximation algorithm whose
performance is guaranteed has been previously known for approximating bcr(G). A nice result in this
area is a fast polynomial time algorithm of Eades and Wormald [7] which approximates the bipartite
crossing number by a factor of 3, when the positions of vertices in V 0 are fixed.
In this paper we explore an important relationship between the bipartite drawings and the linear
arrangement problem, which is another well-known problem in the theory of VLSI [4, 5, 15, 18, 28].
In particular, it is shown that for many graphs the order of magnitude for the optimal number of
crossings is bounded from below, and above, respectively, by minimum degree times the optimal
arrangement value, and by arboricity times the optimal arrangement value, where the arboricity of
G is the minimum number of acyclic graphs that G can be decomposed to. Hence for a large class
of graphs, it is possible to estimate bcr(G) in terms of the optimal arrangement value. Our general
method for constructing the upper bound is shown to provide for an optimal solution and an exact
formula, resulting to an O(n 1.6 computing bcr(G) when G is a tree. The presence
of arboricity in our upper bound allows us to relate some important topological properties such as
genus and page number, to bcr(G). In particular, our results easily imply that when G is "nearly
planar", i.e. it either has bounded genus, or bounded page number, then, the asymptotic values of
bcr(G), and the optimal arrangement are the same, provided that G is not too sparse.
A direct consequence of our results is that for many graphs, the bipartite drawings with small
sum of edge lengths also have small bipartite crossings, and vice versa, and therefore, a suboptimal
solution to the bipartite crossing number problem can be extracted from a suboptimal solution to the
linear arrangement problem. Hence, we have derived here, the first polynomial time approximation
algorithms for bcr(G), which perform within a multiplicative factor of O(log n log log n) from the
optimal, for a large class of graphs. Moreover, we show here that the traditional divide and conquer
paradigm in which the divide phase approximately bisects the graph, also obtains a provably good
approximation, in polynomial time, for bcr(G) within a multiplicative factor of O(log 2 n) from the
optimal, for a variety of graphs. Crucial to verifying the performance guarantee of the divide and
conquer algorithm, is a lower bound of # G nb # (G)), derived here, for bcr(G), where b # (G), # < 1/2,
and # G are the size of the #-bisection and minimum degree of G, respectively. This significantly
improves Leighton's well-known lower bound of # b 23
(G)) [16] which was derived for the planar crossing
number of degree bounded graphs. The class of graphs for which the performance of our approximation
algorithms is guaranteed is very large, and in particular contains those regular graphs, degree bounded
graphs, and genus bounded graphs, which are not too sparse. Another notable aspect of relating bcr(G)
to the linear arrangement problem is that, both algorithms produce drawings with near optimal number
of crossings in which the coordinates of all vertices are integers, so that the total edge length is also
Technically speaking, the NP-hardness of the problem was proved for multigraphs, but it is widely assumed that it
is also NP-hard for simple graphs.
near optimal, with the same performance guarantee as for the number of crossings.
We also study biplanar graphs. A bipartite graph E) is called a biplanar, if it has a
bipartite drawing in which no two edges cross each other. Eades and Whitesides [8] have shown that
the problem of determining largest biplanar subgraph is NP-hard even when G is planar, and the
vertices in V 0 and V 1 have degrees at most 3 and 2, respectively. This raised the question of whether
or not computing a largest biplanar subgraph can be done in polynomial time when G is acyclic [20].
In this paper we present a linear time dynamic programming algorithm for the weighted version of
this problem in an acyclic graph. (The weighted version was first introduced by Mutzel [20].)
Section 2 contains our general results regarding the relation between bcr(G) and the linear arrangement
problem. Section 3 contains the applications, and includes several important observations, the
bisection based lower bound for bcr(G), and the approximation algorithms. Finally, Section 4 contains
our linear time algorithm for computing a largest biplanar subgraph of a tree.
Linear arrangement and bipartite crossings
We denote by d_v the degree of a vertex v, and by d′_v the number of vertices of degree 1 that are
adjacent to v. We denote by δ_G the minimum degree of G.
A bipartite drawing of G is obtained by: (i) placing the vertices of V 0 and V 1 into distinct points on
two horizontal lines y 0 , y 1 , respectively, (ii) drawing each edge with one straight line segment which
connects the points of y 0 and y 1 where the endvertices of the edge were placed. Hence, the order in
which the vertices are placed on y 0 and y 1 will determine the drawing.
Let DG be a bipartite drawing of G; when the context is clear, we omit the subscript G and write
D. For any e # E, let bcr D (e) denote the number of crossings of the edge e with other edges. Edges
sharing an endvertex do not count as crossing edges. Let bcr(D) denote the total number of crossings
in D, i.e., bcr(D) = Σ_{e∈E} bcr_D(e).
The bipartite crossing number of G, denoted by bcr(G), is the minimum number of crossings of edges
over all bipartite drawings of G. Clearly, bcr(G) = min_D bcr(D).
We assume throughout this paper that the vertices of V 0 are placed on the line y 0 which is taken
to be the x-axis, and vertices of V_1 are placed on the line y_1 which is taken to be the line y = 1.
For a vertex v, let x_D(v) denote its x-coordinate in the drawing D. We call the function x_D
the coordinate function of D. Throughout this paper, we often omit the y coordinates.
Note that x_D is not necessarily an injection, since for a ∈ V_0 and b ∈ V_1 we may have x_D(a) = x_D(b).
Given an arbitrary graph G = (V, E) and a real function f : V → R, define the length of f as
L_f = Σ_{uv∈E} |f(u) − f(v)|. The linear arrangement problem is to find a bijection f : V → {1, 2, . . . , n} of minimum length.
This minimum value is denoted by L̄(G).
Let G = (V_0, V_1, E) and D be a bipartite drawing of G. Define the length of D to be
L_{x_D} = Σ_{uv∈E} |x_D(u) − x_D(v)|.
In this section we derive a relation between the bipartite crossing number and the linear arrangement
problem.
Let D be a bipartite drawing of G = (V_0, V_1, E) such that the vertices of V_0 are placed into the
points (1, 0), (2, 0), . . . , (|V_0|, 0).
For v ∈ V_1, order its neighbors u_1, u_2, . . . , u_{d_v} so that x_D(u_1) < x_D(u_2) < · · · < x_D(u_{d_v}); the
median vertex of v, denoted med(v), is the neighbor of v with the median x-coordinate in this ordering.
We say that D has the
median property if the vertices of G have distinct x-coordinates and the x-coordinate of any vertex v
in V 1 is larger than, but arbitrarily close to, xD (med(v)), with the restriction that if a vertex of odd
degree and a vertex of even degree have the same median vertex, then the odd degree vertex has a
smaller x-coordinate. Note that if D has the median property, then xD is an injection.
When the bipartite drawing D does not have the median property, one can always convert it to a
drawing which has the property, by first placing the vertices of V 0 in the same order in which they
appear in D into the locations (1, 0), (2, 0), ., (|V 0 |, 0), and then placing each v # V 1 on a proper
position so that the median property holds. Such a construction is called the median construction and
was utilized by Eades and Wormald [7] to obtain the following remarkable result.
Theorem 2.1 [7] Let G = (V_0, V_1, E), and let D be a bipartite drawing of G. If D′ is obtained using the
median construction from D, then bcr(D′) ≤ 3 · bcr(D).
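A sketch of the median construction in code is given below, assuming a drawing is specified by the left-to-right order of V_0 and by each V_1-vertex's neighbor list; it places V_0 at integer positions and each v ∈ V_1 just to the right of its median neighbor, using small numeric offsets in place of the "arbitrarily close" placement and the odd-before-even tie-breaking described above. All names are illustrative.

def median_construction(order_v0, adj_v1):
    """order_v0: left-to-right order of V0 in the given drawing D.
    adj_v1: dict mapping each v in V1 to its list of neighbours in V0.
    Returns x-coordinates of a drawing with the median property (up to the chosen offsets)."""
    x = {u: float(i + 1) for i, u in enumerate(order_v0)}   # V0 at positions 1, 2, ..., |V0|
    by_median = {}
    for v, nbrs in adj_v1.items():
        nbrs_sorted = sorted(nbrs, key=lambda u: x[u])
        med = nbrs_sorted[(len(nbrs_sorted) - 1) // 2]      # a median neighbour of v
        by_median.setdefault(med, []).append(v)
    eps = 0.5 / (len(adj_v1) + 1)                           # small offsets, strictly between integers
    for med, vs in by_median.items():
        # odd-degree vertices precede even-degree vertices sharing the same median
        vs.sort(key=lambda v: len(adj_v1[v]) % 2 == 0)
        for j, v in enumerate(vs):
            x[v] = x[med] + (j + 1) * eps / len(vs)
    return x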
Let G = (V_0, V_1, E) and D be a bipartite drawing of G. Consider an edge e = ab ∈ E, and let u be a
vertex in V 0 # V 1 so that u /
# {a, b}. We say e covers u in D, if the line parallel to the y axis passing
through u has a point in common with the edge e. Thus, for e = ab, neither a nor b
is covered by e. However, a vertex c ∈ V_1 with x_D(c) = x_D(a) is covered by e. Let N_D(e) denote
the number of those vertices in V 1 which are covered by e in D. We will use the following two lemmas
later.
Lemma 2.1 For let D be a bipartite drawing of G. Recall that xD is the coordinate
function of D. Then, the following hold.
(i) Assume that xD (v) is an integer for all x # V 0 . Then, there is a bijection f
so that for any e = ab # E, it holds
(ii) Assume that D has the median property. Then for the bijection f # in (i), it holds
d a d # a
m.
Proof. To prove (i), we construct f # by moving all vertices in V to integer locations. Formally, let
be the order of vertices of V
that we may have xD (w i since xD may not be an
injection.) the proof of (i) easily follows. (In particular note that
the factor +1 appears in the upper bound, since the end point of e which belongs to V 1 may not have
an integer coordinate.) For (ii), let Assume x(a) > x(b), and let v be any
vertex in V 1 covered by e in D. Since D has the median property, at least #d v /2# of vertices adjacent
to v are separated from v in D by the straight line segment e. This means, in this case, that vertex
v generates at least # G /2# G - 1)/2 crossings on e. Moreover, vertex v, even if it has degree
1, generates one crossing on e, since v and med(v) are separated by the line segment e in D. Thus
G+1. Now assume xD (a) < xD (b), and let v be a vertex covered
by e. Then, v generates at least d v - # dv
crossings on e provided that v is not a vertex
of degree 1 which is adjacent only to a. Consequently, in this case, bcr D (e) # (ND (e) - d # a
We conclude that in either case, bcr D
a , and
consequently, using (i),
To finish the proof of (ii) take the sum over all
Lemma 2.2 Let E), and let D be a bipartite drawing of G which has the median
property, then
dv #2
with an arbitrary small # > 0.
Proof. To prove the claim, let uv # E with has the median property,
thus v is placed arbitrary close to u. So we may assume that |x D (v) - xD (u)| #
This way the total sum of the contributions of all edges which are incident to a vertex of degree one
in V 1 to L xD is at most |V 1 | #
# and the claim follows. 2
We now prove the main result of this section.
Theorem 2.2 Let
L(G).
Proof. Let D be a bipartite drawing of G. We will construct an appropriate bijection f
{1, 2, ., n}. Let D # be a drawing which is obtained by applying the median construction to D. Let
its neighbors with xD #
i be an integer, 1 # i #d v /2#, and let u be a vertex in V 0 so that xD #
Observe that u generates d u crossings on the edges u i v and u dv -i+1 v, if it is not adjacent to v.
Similarly, u generates d u - 1 crossings on the edges u i v and u dv-i+1 v, if it is adjacent to v. Thus
Note that D # has the median property, thus for
and hence (1) implies
Using (2) observe that, for
(bcr
Thus, using (3), when d v # 2 is even, we have
dv
(bcr
dv
Moreover, when d v # 2 is odd, we have,
dv
where the upper bound is obvious, and the lower bound holds since no vertex adjacent to v is between
and u
. Consequently, when d v # 2 is odd, we have,
dv
where the last line is obtained by observing that xD # (u
Combining this with (3), for odd d v , we obtaindv
dv
We note that since (5) is weaker than (4), it must also hold when d v is even, and conclude by summing
dv #2
dv #2
v .
Using Lemma 2.2, we get
v . (6)
Consider the bijection f # in Part (ii) of Lemma 2.1. Then
Observe that # G # 2 implies P v#V 0
Hence (6) implies
v . (7)
Observing that L f # -
, and
v , we obtain
which finishes the proof. 2
Next, we investigate the cases for which the error term P v#V d 2
v can be eliminated from Theorem
2.2.
Corollary 2.1 Let E) so that m # (1
# and # are positive constants. Then
Proof. To prove the result we will first show that for any bipartite drawing D of G it holds,
For now assume that (8) holds. It is easy to see that bcr(G) # m-
1+# m,
we conclude that 1)bcr(G). Combining this inequality with (8), we obtain
v , and thus
and the claim follows from Theorem 2.2.
To prove (8), let D be any bipartite drawing of G, and let v # V 0 so that d v - d # v # 2. Let
be the set of vertices of degree at least 2 which are adjacent to v, and assume with
no loss of generality that xD
be an integer, 1 # i # dv -d # v
and note that any vertex u generates at least one crossing on the edges
and u dv-i+1 v. Thus bcr(vu
2 #, and therefore
We conclude that by summing
Similarly we can show that 2bcr(D) # ( P v#V 0
hence the claim follows. 2
Remarks. The conditions of Corollary 2.1, involving # and # are not restrictive at all. For instance,
any bipartite graph of minimum degree at least 3, satisfies the conditions. We identify more additional
graphs which satisfy these conditions in Section 3.
2.2 An upper bound
We now derive an upper bound on bcr(G). We need the following obvious lemma.
Lemma 2.3 Let D be a bipartite drawing of
1 be two edges which cross in D. Assume that |x D (v) - xD (u)| # |x D (a) - xD (b)|, then either a or
b is covered by e in D. Moreover, if a is covered by e, then
if b is covered by e, then
|x D (a) - xD (v)| # |x D (v) - xD (u)|.Let VH and EH , denote the vertex set and the edge set of a subgraph H, of G. The arboricity of G,
denoted by aG , is maxH #
#, where the maximum is taken over all subgraphs H, with 2.
Note that # G /2 # aG #G , where #G denotes the maximum degree of G. A well-known theorem
of Nash-Williams [21] asserts that aG is the minimum number of edge disjoint acyclic subgraphs that
edges of G can be decomposed to.
Theorem 2.3 Let
L(G).
Proof. Consider a solution (not necessarily optimal) of the linear arrangement of G, realized by a
bijection n}. The mapping f # induces an ordering of vertices of V
y 0 . Lift up the vertices of V 1 into y 1 and draw the edges with respect to the new locations of these
vertices to obtain a bipartite drawing D. Note that
for this drawing D. Let I e to be the set all edges crossing e in
D so that for any ab # I e ,
Observe that if any edge e # /
# I e crosses e, then e # I e # . Hence, in this case the crossing of e and e #
contributes one to |I e # |. We conclude that
|I e |,
and will show that |I e | # aG (4|x D (u) - xD (v)| 1). For ebe the set of all those vertices y of V 0 so that |x D (y) - xD (v)| # |x D (u) - xD (v)|. Similarly, let
ebe the set of all those vertices y of V 1 so that |x D (y) - xD (u)| # |x D (u) - xD (v)|. Note that,
since the coordinates of all vertices are integers. Therefore, we
have
2. Let - observe that by Lemma
2.3, a # V e
1 . Consequently, |I e | # is the edge set of the induced subgraph
of G on the vertex set V e
by the definition of aG , and thus
I e # aG (4L xD +m).
To complete the proof we take f # to be the optimal solution to the linear arrangement problem, that
is,
2.3 Bipartite crossings in trees
We note that if aG is small, then, the gap between the upper bound and the lower bound in Theorems
2.2 and 2.3 is small, and hence, it is natural to investigate the case
In fact, in this case the method in the proof of Theorem 2.3 provides for an optimal bipartite drawing.
Theorem 2.4 Let T be a tree on the vertex set are the partite sets, and
be a bijection utilizing the optimal solution to the linear arrangement problem. Let
D # be a bipartite drawing constructed by the method of Theorem 2.3, that is, by lifting the vertices
in V 1 into the line
Proof. We prove the Theorem by induction on n. The result is true for 2. Let n # 3. Assume
that the Theorem is true for all l-vertex trees, l < n, and let T be a tree on n vertices. We first
show that the RHS of (11) is a lower bound on bcr(T ). We then show that bcr(D # ) equals to RHS of
(11). Consider an optimal bipartite drawing D of T . It is not di#cult to see that one of the leftmost
(rightmost) vertices is a leaf. Denote the left leaf by v 0 , the right leaf by v k , and let
be the path between v 0 and v k . Note that P will cross any edge in T which is not incident to v i ,
path P will generate at least
crossings, where c P counts exactly the number of edges in T (in D) which are not incident to any
vertex on P . Deleting the edges of P we get trees T i , on the vertex set V
rooted in
1. Consider the optimal bipartite drawings of T i , place them
consecutively such that T i does not cross T j , for i #= j. Then draw the path P without self crossings
such that v 0 (v k ) is placed to the left (right) of the drawing of T 1 (T k-1 ). Then clearly the number of
crossings in this new drawings is P k-1
so we conclude that
for otherwise D is not an optimal drawing. For any v # V , let d i
denote the degree of v in T i ; applying
the inductive hypothesis to T i ,
Now observe that for
Consequently,
where the last line is obtained by observing that j dv i -2
follows using (13) that
Now consider the optimal linear arrangements of the trees T i , for place them
consecutively in that order on a line, and the path P . Let g denote the bijection associated with this
arrangement, then L
1. Using this fact (15) implies
since L g # -
To finish the proof we will show that bcr(D # ) equals to the RHS of (11). Consider an optimal
linear arrangement f # of the tree T . It is not di#cult to see that, f #-1 (1) and f #-1 (n) are leaves,
[25, 4]. Let be the path between v
trees defined in the first part of the proof. Note that for the bijection g, described earlier, it holds
thus we conclude that,
and note that the above equation implies that P does not cross itself, in the arrangement associated
with f # . It follows that P does not cross itself in the bipartite drawing D # . Let f #
be the restriction
of f # to V i , and D # i be the subdrawing in D # which is associated with 1. Note that
However, it is easy to see that D #
is obtained from f #
by lifting the
vertex set V i
1 to the line hence we can apply the induction hypothesis to D # i ,
to obtain
Substituting c P its value from (12), and repeating the same steps used in deriving (15), we obtain
To complete the proof use (16) in (18) and obtain,
Since the optimal linear arrangement of an n-vertex tree can be found in O(n^{1.6}) time, computing
D′ can also be done in O(n^{1.6}) time.
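The construction used in Theorems 2.3 and 2.4 — taking a linear arrangement and lifting the vertices of V_1 to the second line — is straightforward to express in code. Below is a minimal sketch, assuming the arrangement is given as a map from vertices to positions 1..n, that V_1 is a set, and counting crossings by brute force purely for illustration; names are illustrative.

from itertools import combinations

def lift_arrangement(arrangement, V1):
    """arrangement: dict vertex -> position in {1,...,n}. Vertices of V1 keep their
    positions but are drawn on the line y = 1; all others stay on y = 0."""
    return {v: (pos, 1 if v in V1 else 0) for v, pos in arrangement.items()}

def count_crossings(coords, edges):
    """Counts pairwise crossings of the 2-layer straight-line drawing given by coords."""
    cr = 0
    for (a, b), (c, d) in combinations(edges, 2):
        if len({a, b, c, d}) < 4:          # edges sharing an endvertex do not cross
            continue
        # orient each edge as (bottom vertex, top vertex)
        p, q = (a, b) if coords[a][1] == 0 else (b, a)
        r, s = (c, d) if coords[c][1] == 0 else (d, c)
        if (coords[p][0] - coords[r][0]) * (coords[q][0] - coords[s][0]) < 0:
            cr += 1
    return cr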
Applications
It is instructive to provide examples of graphs G for which
L(G)). Consider any
bipartite G with # G # 3 and # regular bipartite graph with # G # 3.
Then, conditions of Corollary 2.1 are met, and thus by Theorem 2.3,
L(G)). Moreover,
consider any connected bipartite G of degree at most a constant k, with
is fixed. Note that, d v - d # v # 1 for any v # V , since G is connected and is not a star, and thus,
n. (Note that the star is excluded by the density condition
, to obtain n # 1
. Hence this graph satisfies the conditions of Corollary 2.1, moreover,
it is easy to see that aG # O(1), and we conclude using Theorem 2.3 that
L(G)).
3.1 Bipartite crossings, bisection, genus, and page number
The appearance of aG in the upper bound of Theorem 2.3 relates bcr(G) to other important topological
properties of G such as genus of G, denoted by g G [32], and page number of G [1], denoted by p G .
Observation 3.1 E), and assume that # G # 2 and m # (1 + #)n, for a fixed # > 0.
L(G)), provided that Consequently, under the given conditions for G,
if either
L(G)).
Proof. Assume that using Corollary 2.1 and Theorem 2.3, and observing that,
O(1), we conclude that
L(G)). (Note that, # G # 2, gives d # v
for all v # V . ) To finish the proof, observe that implies that
Next, we provide another application of our results, by deriving nontrivial upper bounds on the
bipartite crossing number.
Observation 3.2 Let E), with page number p G and genus g G . Then
L(G).
Proof. Since cr(G) # bcr(G) # 5a G
L(G), by Theorem 2.3, we need to bound aG in terms of g G and
G . Let H be a subgraph of G with the vertex set VH , |V H | # 2, and the edge set EH . Note that
which verifies the upper bound involving p G .
To finish the proof observe that
is a lower bound on the genus of H, or g H [32]. Thus,
H is at most (|V H | - 1) 2 /12 [32], it follows that for any subgraph H, p g G /12 # p g H /12 #
, and consequently aG # 2 #
Let 0 < # 1be a constant and denote by b # (G) size of the minimal #-bisection of G. That is,
denotes a cut which partitions V into A and -
A. Leighton [16] proved for any degree
bounded graph G, the inequality
(G)), where cr(G) is the planar crossing number of
G. Another very interesting consequence of Theorem 2.2 is providing a stronger version of Leighton's
result, for bcr(G).
Theorem 3.1
in particular when G is regular, it holds
Proof. The claim follows from the lower bound in Theorem 2.2 and the well-known observation that
(G). (See for instance [12].) 2
Remarks. After proving Theorem 3.1, we discovered that a weaker version of this Theorem for degree
bounded graphs can be obtained by a shorter proof which uses Menger's Theorem [27].
3.2 Approximation algorithms
Given a bipartite graph G, the bipartite arrangement problem is to find a bipartite drawing D of G
with smallest L xD , or smallest length, so that the x coordinate of any vertex is an integer. We denote
this minimum value by -
L(G). Note that coordinate function xD , for a bipartite drawing need not
to be an injection, since we may have xD (a) = xD (b), for a # V 0 , and b # V 1 . Thus, in general
L(G). Our approximation algorithms in this section provide a bipartite drawing in which all
vertices have integer coordinates, so that the number of crossings and at the same time the length of
the drawing is small. We need the following Lemma giving a relation between -
L(G).
Lemma 3.1 For any connected bipartite graph E) it holds
4 .
Proof. Let D be a bipartite drawing of G in which all x coordinates are integers. Let
and note that ND (e) # |x D (a) - xD (b)|, since any vertex in V 0 # V 1 has an integer x coordinate. Let
f # be the bijection in Part (i) in Lemma 2.1, then |f # (a) -f # (b)| # 2|x D (a) -xD (b)| + 1, and hence by
taking the sum over all edges, we obtain L f # 2L xD +m. To prove the lemma, we claim that there
are at least m-1edges so that xD (a) #= xD (b), and consequently L xD # m-1, which implies
the result. To prove our claim, note that there are at most nedges ab, so that xD (a) = xD (b), and
hence at least m- n
m-1edges ab, with xD (a) #= xD (b), since G is connected and therefore has at
least
Even et al. [9] in a breakthrough result came up with polynomial time O(log n log log n) times
optimal approximation algorithms for several NP-hard problems, including the linear arrangement
problem. Combining their result with ours, we obtain the following.
Theorem 3.2 Let E), and consider the drawing D (with integer coordinates) in Theorem
2.3 obtained form an approximate solution to the linear arrangement problem provided in [9]. Then
L(G)). Moreover, if G meets the conditions in Corollary 2.1, then
O(log n log log nbcr(G)), provided that #
Proof. Note that L
log n log log n) and thus the claim regarding L xD follows from
Lemma 3.1. To finish the proof note that, Theorem 2.3 gives
L(G)), and
the claim regarding bcr(D) is verified by the application of Corollary 2.1, since # divide and conquer paradigm has been very popular in solving VLSI layout problems both in
theory and also in practice. Indeed, the only known approximation algorithm for the planar crossing
number is a simple divide and conquer algorithm in which the divide phase consists of approximately
bisecting the graph [2]. This algorithm approximates cr(G)+n to within a factor of O(log 4 n) from the
optimal, when G is degree bounded [17]. A similar algorithm approximates -
L(G) to within a factor of
O(log 2 n) from the optimal. To verify the quality of the approximate solutions, in general, one needs
to show that the error term arising in the recurrence relations associated with the performance of
algorithms are small compared to the value of the optimal solution. A nice algorithmic consequence
of Theorem 3.1 is that the standard divide and conquer algorithm in which the divide phase consists
of approximately bisecting the graph gives a good approximation for bcr(G) in polynomial time. The
divide stage of our algorithm uses an approximation algorithm for bisecting a graph such as those in
[10, 17]. These algorithms have a performance guarantee of O(log n) from the optimal [10, 17]. It
should be noted that the lower bound of # b 23
(G)), although is su#cient to verify the the performance
of the divide and conquer approximation algorithm for the planar crossing number, can not be used to
show the quality of the approximation algorithm for bcr(G), since (as we will see) it does not bound
from above the error term in our recurrence relation. Thus our lower bound of # n#G b 1(G)) is crucial
to show the suboptimality of the solution.
Theorem 3.3 Let A be a polynomial time 1/3-2/3 bisecting algorithm to approximate the bisection
of a graph with a performance guarantee O(log n). Consider a divide and conquer algorithm which (a)
recursively bisects the graph G, using A, (b) obtains the two bipartite drawings, and then (c) inserts
the edges of the bisection between these two drawings. This divide and conquer algorithm generates,
in polynomial time, a bipartite drawing D with integer coordinates, so that
L_{x_D} = O(log² n · L̄(G)).
Moreover, if G meets the conditions in Corollary 2.1, then bcr(D) = O(log² n · bcr(G)).
Proof. Assume that using A, we partition the graph G to 2 vertex disjoint subgraphs G 1 and G 2
recursively. Let - b(G) denote the number of those edges having one endpoint in the vertex set of G 1 ,
and the other in the vertex set of G 2 . Let DG 1
, and DG 2
be the bipartite drawings already obtained
by the algorithm for G 1 and G 2 , respectively. Let D denote the drawing obtained for G. To show the
claim regarding L xD , note that
Since, we use the approximation algorithm A for bisecting we have - nb 1(G)), hence
the error term in the recurrence relation is O(n log nb 1(G)). Moreover, 3 -
consequently using Lemma 3.1, we obtain, 12 -
1(G)n. Thus the error term is O(log n -
and consequently,
which implies L
L(G)). To verify the claim regarding bcr(D), note that
Now observing that m # aGn, - nb 1(G)), and nb 1(G) # 3 -
L(G), we obtain,
log n)
which implies
Note that by Corollary 2.1,
L(G)), and the claim follows. 2
Remarks. The method of Even et al. that we suggested to use in Theorem 3.2, although a theoretical
breakthrough, requires the usage of specific interior point linear programming methods which
may be computationally expensive or hard to code. Hence, the divide and conquer approximation
algorithm, although in theory weaker than the method of Theorem 3.2, may be easier to implement.
Moreover, one may use very fast and simple heuristics developed by the VLSI and CAD communities
[24] for graph bisection in the divide stage. Although, these heuristics do not produce provably sub-optimal
solutions for bisecting a graph, they work well in practice, and are extremely fast. Therefore,
one may anticipate that certain implementations of the divide and conquer algorithm are very fast
and effective in practice.
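A skeletal implementation of the divide and conquer scheme of Theorem 3.3 is sketched below. It assumes some bisection routine bisect (for example, one of the heuristics of [24] or the approximation algorithms of [10, 17]) that returns the two vertex sets of an approximate 1/3-2/3 bisection, and it simply concatenates the two recursively computed orders; placing V_0 and V_1 on their respective lines in the returned common order gives the drawing D. All names are illustrative.

def draw_divide_and_conquer(vertices, edges, bisect):
    """vertices: a set of vertices; edges: list of vertex pairs; bisect: routine returning
    two vertex sets of an approximate 1/3-2/3 bisection. Returns a left-to-right order."""
    if len(vertices) <= 2:
        return list(vertices)
    left, right = bisect(vertices, edges)
    induced = lambda part: [(u, v) for (u, v) in edges if u in part and v in part]
    order_left = draw_divide_and_conquer(left, induced(left), bisect)
    order_right = draw_divide_and_conquer(right, induced(right), bisect)
    # the cut edges run between the two sub-drawings placed side by side
    return order_left + order_right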
Note that since aG can be computed in polynomial time, the class of graphs with aG # c# G is
recognizable in polynomial time, when c is a given constant. Hence, those graphs which meet the
required conditions in Theorems 3.2, and 3.3 can be recognized in polynomial time. Also, note that
many important graphs such those introduced in Section 3 meet the conditions, and hence for these
graphs the performance of both approximation algorithms is guaranteed.
Largest biplanar subgraphs in acyclic graphs
Let T = (V_T, E_T) be a tree and w_{ij} be a weight assigned to each edge ij ∈ E_T. For any B ⊆ E_T,
define the weight of B, denoted by w(B), to be the sum of weights for all edges in B. In this section
we present a linear time algorithm to compute a biplanar subgraph of T of largest weight.
A tree on at least 2 vertices is called a caterpillar if it consists of a path to which some vertices of
degree 1 (leaves) are attached. We distinguish four categories of vertices in a caterpillar. First consider
caterpillars which are not stars. They have a unique path connecting two internal vertices to which
all leaves are attached to. We call this path the backbone of the caterpillar. The two endvertices of
the backbone are called endbone vertices, internal vertices of the backbone are called midbone vertices.
Leaves attached to endbones are called endleaves. Leaves attached to midbones are called midleaves.
For a star with at least 3 vertices, the middle vertex is considered as endbone, the backbone path
consists of this single endbone, and the leaves in the star are considered endleaves. If a star has two
vertices, then we treat these vertices as endbones.
Let T = (V_T, E_T) be an unrooted tree and r ∈ V_T. Then, we view r as the root of T. Then any
vertex other than r will have a unique parent which is the first vertex on the path towards the root.
For x ∈ V_T, the set of children of x, denoted by N_x, are those vertices of T whose parent is x. For any
x ∈ V_T, x ≠ r, we denote by T_x the component of T, containing x, which is obtained after removing
the parent of x from T. We define T_r to be T.
We use the notation B_x for a biplanar subgraph of T_x, x ∈ V_T, and treat B_x as an edge set. We say
that B_x spans a vertex a, if there is an edge ab ∈ B_x. For x ∈ V_T, we define W(T_x) to be the maximum
of w(B_x) over all biplanar subgraphs B_x of T_x. Our goal is to determine W(T_r). To achieve this goal,
we define 5 additional related optimization problems as follows:
x is not spanned by B x } .
It is obvious that
and therefore solving all 5 problems for T x determines W (T x ). For any leaf v set w 1
Finally, for u # N x , x # V T define,
It is well-known and easy to show that a graph is biplanar iff it is a collection of vertex disjoint
caterpillars. This is equivalent to saying that a graph is biplanar iff it does not contain a double
claw which is a star on 3 vertices with all three edges subdivided. Therefore our problem is to find a
maximum weight forest of caterpillars in an edge-weighted acyclic graph. We will use these facts in
the next lemma, sometimes without explicitly referring to them.
Lemma 4.1
y #Nx\{y}
y #Nx\{y}
y #Nx\{y}
Proof Sketch. The basic idea for the recurrence relations is to describe how an optimal solution for
in the trees rooted in N x . Indeed, (21), (22), and (25) are obvious. For (23), note that
if x is an endbone in a maximum weight biplanar B x , then x is an endbone in a caterpillar C # B x .
Consider the case that C is not a star. Since, x is an endbone of C, it has at least two neighbors in
C, and all but one of its neighbors are leaves in C. Then exactly one neighbor y of x is an endbone
or an endleaf in C \ {x}. This justifies the presence of the first two terms in the inner curly bracket.
To justify the presence of the sum on y # , note that, in order to maximize the total weight of B x , we
must attach y # N x \ {y} to C as a leaf, only if f(y # must include
in B x , the maximum biplanar subgraph of T y # which has the total weight f(y # To justify
the term P y#Nx f(y), consider the case that C is a star. Then we must attach any y # N x to C as a
leaf only if we include in B x the maximum biplanar subgraph of T y .
For (24), note that, if x is a midbone in a maximum weight B x , then x is a midbone of C # B x , and
has 2 neighbors y 1 and y 2 in C. By deleting x from C, we obtain exactly two caterpillars C 1 and C 2
so that y i is either an endbone or an endleaf for C i , 2. Now follow an argument similar to (23)
to finish the proof of (24) 2
Theorem 4.1 For an edge-weighted acyclic graph largest weight biplanar subgraph
can be computed in O(|V T |) time.
Proof Sketch. With no loss of generality assume that T is connected, otherwise we apply our
arguments to the components of T . We select a root r for T , and then perform a post order traversal
and show that we can compute w i (T x quantities
are already known for the children of x. This is obvious for (20) and (25). For (21) and (22) the
expressions in curly braces are easy to evaluate in linear time, if a maximizing y is known. So the
issue is to find a maximizing y in linear time. It is easy to see that for (21) we look for y # N x
which maximizes w xy we look for y # N x which maximizes
all these can be computed in O(|N x |) time.
For (23), it su#ces to show that a y # N x can be found in O(|N x |) time which maximizes
To do so find
note that
Thus, to maximize w 4 (T x ), we should find y 1 , y 2 # N x , y 1 #= y 2 which give the largest two values for
It is easy to maintain for every x not just the values w i (T x also the edge-set of B x which
realizes this value, therefore, we can store the edge set of a largest biplanar subgraph as well. 2
Acknowledgment
. The research of the second and fourth author was done while they were visiting
Department of Mathematics and Informatics of University in Passau. They thank Prof. F.-J. Brandenburg
for perfect work conditions and hospitality. A preliminary version of this paper was published
at WADS'97 [26]. That version contained slight inaccuracies like missing error terms which are fixed
in the current version.
--R
The book thickness of a graph
A framework for solving VLSI layout problems
The assignment heuristics for crossing reduction
On optimal linear arrangements of trees
Graph layout problems
Algorithms for drawing graphs: an annotated bibliography
Edge crossings in drawings of bipartite graphs
Drawing graphs in 2 layers
Fast Approximate Graph Partition Algorithms
Crossing number is NP-complete
Approximate algorithms for geometric embeddings in the plane with applications to parallel processing problems
A new crossing number for bipartite graphs
Exact and heuristic algorithm for 2-layer straight line crossing number
Optimal linear labelings and eigenvalues of graphs
Complexity issues in VLSI
Combinatorial algorithms for integrated circuit layouts
On the bipartite crossing number
An alternative method to crossing minimization on hierarchical graphs
Edge disjoint spanning trees of finite graphs
Combinatorial Geometry
Which aesthetic has the greatest e
An introduction to VLSI physical design
The optimal numbering of the vertices of a tree
A minimum linear arrangement algorithm for undirected trees
Discrete Applied Mathematics 19
Methods for visual understanding of hierarchical systems structures
Crossing theory and hierarchy mapping
Topological graph theory
--TR
--CTR
Robert A. Hochberg , Matthias F. Stallmann, Optimal one-page tree embeddings in linear time, Information Processing Letters, v.87 n.2, p.59-66, 31 July
Journal of Discrete Mathematics Staff, Research problems, Discrete Mathematics, v.257 n.2-3, p.599-624, 28 November
Hillclimbing Algorithm for the Optimal Linear Arrangement Problem, Fundamenta Informaticae, v.68 n.4, p.333-356, December 2005
Matthias Stallmann , Franc Brglez , Debabrata Ghosh, Heuristics, Experimental Subjects, and Treatment Evaluation in Bigraph Crossing Minimization, Journal of Experimental Algorithmics (JEA), 6, p.8-es, 2001
Dimitrios M. Thilikos , Maria Serna , Hans L. Bodlaender, Cutwidth II: algorithms for partial w-trees of bounded degree, Journal of Algorithms, v.56 n.1, p.25-49, July 2005
Josep Daz , Jordi Petit , Maria Serna, A survey of graph layout problems, ACM Computing Surveys (CSUR), v.34 n.3, p.313-356, September 2002 | approximation algorithms;biplanar graph;bipartite drawing;linear arrangement;bipartite crossing number |
586901 | Regular Languages are Testable with a Constant Number of Queries. | We continue the study of combinatorial property testing, initiated by Goldreich, Goldwasser, and Ron in [J. ACM, 45 (1998), pp. 653--750]. The subject of this paper is testing regular languages. Our main result is as follows. For a regular language $L\in \{0,1\}^*$ and an integer n there exists a randomized algorithm which always accepts a word w of length n if $w\in L$ and rejects it with high probability if $w$ has to be modified in at least $\epsilon n$ positions to create a word in L. The algorithm queries $\tilde{O}(1/\epsilon)$ bits of w. This query complexity is shown to be optimal up to a factor polylogarithmic in $1/\epsilon$. We also discuss the testability of more complex languages and show, in particular, that the query complexity required for testing context-free languages cannot be bounded by any function of $\epsilon$. The problem of testing regular languages can be viewed as a part of a very general approach, seeking to probe testability of properties defined by logical means. | Introduction
Property testing deals with the question of deciding whether a given input x satises a prescribed
property P or is \far" from any input satisfying it. Let P be a property, i.e. a non-empty family of
binary words. A word w of length n is called -far from satisfying P , if no word w 0 of the same length,
which diers from w in no more than n places, satises P . An -test for P is a randomized algorithm,
which given the quantity n and the ability to make queries about the value of any desired bit of an
input word w of length n, distinguishes with probability at least 2=3 between the case of w 2 P and
A preliminary version of this paper appeared in the Proceedings of the 40 th Symposium on Foundation of Computer
Science
y Department of Mathematics, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv
69978, Israel, and AT&T Labs{Research, Florham Park, NJ 07932, USA. Email: noga@math.tau.ac.il. Research supported
by a USA Israeli BSF grant, by a grant from the Israel Science Foundation and by the Hermann Minkowski Minerva Center
for Geometry at Tel Aviv University.
z Department of Mathematics, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv
69978, Israel. E-mail: krivelev@math.tau.ac.il Part of this research was performed when this author was with DIMACS
Center, Rutgers University, Piscataway NJ, 08854, USA and AT&T Labs{Research, Florham Park, NJ 07932, USA.
Research supported in part by a DIMACS Postdoctoral Fellowship.
x Department of Computer Science, University of Haifa, Haifa, Israel. E-mail: ilan@cs.haifa.ac.il. Part of this research
was performed when this author was visiting AT& T Labs { Research, Florham Park, NJ 07932, USA.
{ School of Mathematics, Institute for Advanced Study, Olden Lane, Princeton, NJ 08540, USA. E-mail:
szegedy@math.ias.edu. Part of this research was performed when this author was with AT&T Labs{Research, Florham
Park, NJ 07932, USA.
the case of w being -far from satisfying P . Finally, we say that property P is (c; )-testable if for every
> 0 there exists an -test for P whose total number of queries is bounded by c.
Property testing was dened by Goldreich et. al [7] (inspired by [13]). It emerges naturally in
the context of PAC learning, program checking [6, 3, 10, 13], probabilistically checkable proofs [2] and
approximation algorithms [7].
In [7], the authors mainly consider graph properties, such as bipartiteness and show (among other
things) the quite surprising fact that testing bipartiteness can be done by randomly testing a polynomial
in 1= number of edges of the graph, answering the above question with constant probability of failure.
They also raise the question of obtaining general results as to when there is, for every > 0, an -test
for a property using queries (i.e c is a function of but independent of n) with constant
probability of failure. We call properties of this type -testable. So far, such answers are quite sparse;
some interesting examples are given in [7], several additional ones can be obtained by applying the
Regularity Lemma as we show in a subsequent paper [1].
In this paper we address testability of formal languages (see [8] as a general reference). A language
is a property which is usually viewed as a sequence of Boolean functions f
Our main result states that all regular languages are -testable with
query complexity only ~
O(1=). We also show that this complexity is optimal up to a factor poly-logarithmic
in 1=. This positive result cannot be extended to context-free languages, for there is an
example of a very simple context-free language which is not testable.
Since regular languages can be characterized using second order monadic logic, we thus obtain a
large set of logically dened objects which are testable. In [1] we provide testable graph properties
described by logical means as well. These results indicate a strong interrelation between testability and
logic. Although our result on regular languages can be viewed as a separate result having no logical
bearing at all, our opinion is that logic does provide the right context for testability problems, which
may lead to the discovery of further classes of testable properties.
The rest of this paper is organized as follows. In Section 2 we present the proof of the main result
showing that every regular language is testable. In Section 3 we show that the upper bound of ~
O(1=)
for the query complexity of testing regular languages, obtained in Theorem 1, is tight up to a poly-logarithmic
factor. Section 4 is devoted to the discussion of testability of context-free languages. There
we show in particular that there exist non-testable context-free languages. We also discuss testability
of the Dyck languages. The nal Section 5 contains some concluding remarks and outlines new research
directions.
Testing Regular Languages
In this section we prove the main result of the paper, namely that regular languages are ( ~
O( 1
testable. As this result is asymptotic, we assume that n is big enough with respect to 1
(and with
respect to any other constant that depends only on the xed language we are working with). All
logarithms are binary unless stated explicitly otherwise.
We start by recalling the standard denition of a regular language, based on nite automata. This
denition is convenient for algorithmic purposes.
Denition 2.1 A deterministic nite automaton (DFA) M over f0; 1g with states
is given by a function with a set F Q. One of the states, q 1 is called
the initial state. The states belonging to the set F are called accepting states, - is called the transition
function.
We can extend the transition function - to f0; 1g recursively as follows. Let
denote the empty
word. Then
Thus, if M starts in a state q and processes string u, then it ends up in a state -(q; u).
We then say that M accepts a word u if -(q rejects u means that -(q 1
Finally, the language accepted by M , denoted by LM , is the set of all u 2 f0; 1g accepted by M . We
use the following denition of regular languages:
Denition 2.2 A language is regular i there exists a nite automaton that accepts it.
Therefore, we assume in this section that a regular language L is given by its automaton M so that
A word w of length n denes a sequence of states (q
) in the following natural way: q
and for 1 j n,
This sequence describes how the automaton M moves while
reading w. Later in the paper we will occasionally refer to this sequence as the traversal path of w.
A nite automaton M denes a directed graph G(M) by V
g. The period g(G) of a directed graph G is the greatest common
divisor of cycle lengths in G. If G is acyclic, we set
We will use the following lemma about directed graphs.
Lemma 2.3 Let E) be a nonempty, strongly connected directed graph with a nite period g(G).
Then there exist a partition V which does not exceed 3jV j 2
such
1. For every 0 1 and for every the length of every directed path from u to
v in G is (j i) mod
2. For every 0 1 and for every and for every integer r m, if
(mod g), then there exists a directed path from u to v in G of length r.
Proof. To prove part 1, x an arbitrary vertex z 2 V and for each 0 i g 1, let V i be the set
of all those vertices which are reachable from v by a directed, (not necessarily simple), path of length
g. Note that since any closed (directed) walk in G is a disjoint union of cycles, the length of each
such walk is divisible by g. This implies that the sets V i are pairwise disjoint. Indeed, assume this is
false and suppose w lies in V i \ V j with i 6= j. As G is strongly connected there is a path p 1 from w
to z, and by denition there is a path p 2 of length i mod g from z to w as well as a path p 3 of length
mod g from z to w. Now the number of edges of either 3 is not divisible by g, which
is impossible. Therefore the sets V i form, indeed, a partition of V . For the union
of any (directed) path from z to u with a (directed) path from u to v forms a path from z to v, and as
any such path must have length j mod g the assertion of part 1 follows.
We next prove part 2. Consider any set of positive integers fa i g whose greatest common divisor is g.
It is well known that there is a smallest number t such that every integer s t which is divisible by g
is a linear combination with non-negative integer coe-cients of the numbers a i . Moreover, it is known
(see [9], [5]), that t is smaller than the square of the maximal number a i . Fix a closed (directed) walk
in G, that visits all vertices and whose length is at most jV j 2 . (This is easily obtained by numbering
the vertices of G arbitrarily as by concatenating directed paths from v i to v i+1 for
each 0 i k 1, where the indices are taken modulo k). Associate now the set of cycle lengths in
this walk with the set of positive integers fa i g as above. Then, following this closed walk and traversing
each directed cycle as many times as desired, we conclude that every integer which is divisible by g and
exceeds 2jV j 2 is a length of a closed walk passing through all vertices of the graph. Given, now, a vertex
and an integer r > 3jV (j i) mod g, x a shortest path p from
u to v, and note that its length l satises l = (j i) mod g and l < jV j( jV j 2 ). Adding to p a closed
walk of length r l from v to itself we obtain the required path, completing the proof. 2
We call the constant m from the above lemma the reachability constant of G and denote it by m(G).
In the sequel we assume that m is divisible by g.
If LM \ f0; 1g testing algorithm can reject any input without reading it at all. Therefore,
we can assume that we are in the non-trivial case LM \ f0; 1g n 6= ;.
We now introduce a key denition for the sequel:
Denition 2.4 Given a word w 2 f0; 1g n , a sub-word (run) w 0 of w starting at position i is called
feasible for language LM , if there exists a state q 2 Q such that q is reachable from q 1 in G in exactly
steps and there is a path of length n (jw in G from the state -(q; w 0 ) to at least one of
the accepting states. Otherwise, w 0 is called
Of course, nding an infeasible run in w proves that w 62 L. Our aim is to show that if a given word
w of length n is far from any word of length n in L, then many short runs of w are infeasible. Thus a
choice of a small number of random runs of w almost surely contains an infeasible run. First we treat
the following basic case:
Denition 2.5 We call an automaton M 'essentially strongly connected' if
1. M has a unique accepting state q acc ;
2. The set of states of the automaton, Q, can be partitioned into two parts, C and D so that
the subgraph of G(M) induced on C is strongly connected;
no edges in G(M) go from D to C (but edges can go from C to D).
(Note that D may be empty.)
Lemma 2.6 Assume that the language contains some words of length n, and that M is
essentially strongly connected with C and D being the partition of the states of M as in Denition 2.5.
Let m be the reachability constant of G[C]. Assume also that n 64m log(4m=). Then if for a word
w of length exists an integer 1 i log(4m=) such that the
number of infeasible runs of w of length 2 i+1 is at least 2 i 4 n
Proof.
Our intention is to construct a sequence (R j ) j=1;::: of disjoint infeasible runs, each being minimal in
the sense that each of its prexes is feasible, and so that each is a subword of the given word w. We
then show that we can concatenate these subwords to form a word in the language that is not too far
from w ('not too far' will essentially depend on the number of runs that we have constructed). This
in turn will show that if dist(w; L) n then there is a lower bound on the number of these infeasible
runs.
For reasons to become obvious later we also want these runs to be in the interval [m
A natural way to construct such a sequence is to repeat the following procedure starting from
1 be the shortest infeasible run starting from w[m + 1] and ending before
there is no such run we stop. Assume that we have constructed so
ending at w[c j 1 ], next we construct R j by taking the minimal infeasible run starting at w[c
and ending before w[n m+ 1]. Again if there is no such run we stop.
Assume we have constructed in this way runs R 1 ; :::; R h . Note that each run is a subword of w,
the runs are pairwise disjoint and their concatenation in order forms a (continuous) subword of w.
Also, note that by the denition of each run R j being minimal infeasible, its prex R (
obtained by
discarding the last bit of R j is feasible. This, in turn, implies that R 0
j which is obtained from R j by
ipping its last bit is feasible. In addition, by Denition 2.4, this means that for each R 0
there is a state
and such that q i j
is reachable from q 1 in c
Next we inductively construct a word w 2 L such that dist(w; w ) hm+ 2m+ 2. Assuming that
dist(w; L) n this will imply a lower bound on h. The general idea is to 'glue' together the R 0
h, each being feasible and yet very close to a subword of w (except for the last bit in each).
The only concern is to glue the pieces together so that as a whole word it will be feasible. This will
require an extra change of m bits per run, plus some additional 2m bits at the end of the word.
We maintain during the induction that for we construct is feasible starting
from position 1, and it ends in position c j . For the base case, let c to be any word of
length m which is feasible starting from position 1. Assume we have already dened a word w
from position 1 and ending in position c j 1 . Let -(q As both p j and q i j
are reachable
from q 1 by a path of length c j 1 , according to Lemma 2.3 we can change the last m bits in w j 1 so
that we get a word u j for which -(q 1 ;
. We now dene w j as a concatenation of u j and R 0
. Let
w h be the nal word that is dened in this way, ending at place c h . Now the reason we have stopped
with R h is either that there is no infeasible run starting at c h + 1, in which case, changing the last m
bits of w h and concatenating to it the remaining su-x of w (that starts at position c h exactly as
in the case of adding R 0
yields the required w . The other possible reason for stopping growing R h is
when there is a minimal infeasible run that start at c h ends after position n m+ 1. Let R be
that run, and let R 0 be the run obtained by
ipping the last bit of R. As was the case with any R 0
is feasible from position c h + 1. Hence there is a feasible word u of which R 0 is a prex, u is of length
and so that -(q i h
. We can construct w from w h and u exactly as we have constructed
w form w h and the su-x of w in the previous case.
By the denition of w , w 2 L. Following the inductive construction of w it follows that for
1. Then to get from w h to w we concatenate R 0 which is either a
subword of w (in the rst case previously discussed) or it is a subword of w where one bit was changed
(in the second case), following by changing m bits at the end of w h and possibly additional m bits at
the end of u. Therefore dist(w; w ) hm 2, as we claimed.
Recalling that dist(w; L) n, we conclude that h n 2
last inequality is
by our assumptions that n 64m log(4m=)). This already shows that if dist(w; L) n then there
are
n) many disjoint infeasible runs in w. However, we need a stronger dependence as stated in the
lemma. We achieve this in the following way.
Let log(4m=). For 1 i a, denote by s i the number of runs in fR j g h
whose length
falls in the interval [2
P a
h n=(4m) n=(4m). Therefore there exists an index i for which s i n=(4am). Consider all
infeasible runs R j with jR that if a run contains an infeasible sub-run then it is
infeasible by itself. Now, each infeasible run of length between 2 contained in at least
runs of length 2 i+1 , except maybe, for the rst two and the last two runs (these with the
two smallest j's and these with the two largest j's). As R j are disjoint, each infeasible run of length
contains at most three of the R j s of length at least 2 1. Thus, we a get a total of at least
runs of length at most 2 i+1 . By our assumption on the parameters this number
is:
am
log(4m=) , as claimed. 2
Now our aim is to reduce the general case to the above described case. For a given DFA M with
a graph by C(G) the graph of components of G, whose vertices correspond to
maximalby inclusion strongly connected components of G and whose directed edges connect components
of G, which are connected by some edge in G. Note that some of the vertices of C(G) may represent
single vertices of G with no self loops, that do not belong to any strongly connected subgraph of G
with at least two vertices. All other components have non empty paths inside them and will be called
truly connected. From now on we reserve k for the number of vertices of C(G) and set
may assume that all vertices of G are reachable from the initial state q 1 . Then C(G) is an acyclic graph
in which there exists a directed path from a component C 1 , containing q 1 , to every other component.
runs over all truly connected components
of G, corresponding to vertices of C(G). We will assume in the sequel that the following relation are
satised between the parameters:
Condition (*)
2k 64m log 8mk
.
log(1=) < 1
clearly, for any xed k; m; l for small enough and n large enough condition (*) holds.
Our next step is to describe how a word w 2 LM of length n can move along the automaton. If a word
w belongs to L, it traverses G starting from q 1 and ending in one of the accepting states. Accordingly,
w traverses C(G) starting from C 1 and ending in a component containing an accepting state. For
this reason, we call a path A in C(G) admissible, if it starts at C 1 and ends at a component with an
accepting state. Given an admissible path
in C(G), a sequence
of
pairs of vertices of G (states of M) is called an admissible sequence of portals if it satises the following
restrictions:
1.
for every 1 j t;
2.
3.
t is an accepting state of M );
4. For every 2 j t one has (p 2
The idea behind the above denition of admissible portals is simple: Given an admissible path A,
an admissible sequence P of portals denes how a word w 2 L moves from one strongly connected
component of A to the next one, starting from the initial state q 1 and ending in an accepting state. The
are the rst and last states that are traversed in C i j
Now, given an admissible path A and a corresponding admissible sequence P of portals, we say that
an increasing sequence of integers
forms an admissible partition with respect to (A; P ) if
the following holds:
1.
2. for every 1 j t, there exists a path from p 1
j to p 2
of length n j+1
3.
The meaning of the partition
j=1 is as follows. If w 2 L and w traverses M in accordance
with t, the value of n j indicates that w arrives to component C
for
the rst time after n j bits. For convenience we also set n 1. Thus, for each 1 j t, the
word w stays in C i j
in the interval [n that it is possible in principle that for a
given admissible path A and a corresponding admissible sequence of portals P there is no corresponding
admissible partition (this could happen if the path A and the set of portals P correspond to no word
of length n).
A triplet (A; is an admissible path, P is a corresponding admissible sequence of
portals and is a corresponding admissible partition, will be called an admissible triplet. It is clear
from the denition of an admissible triplet that a word w 2 L traverses G in accordance with a scenario
suggested by one of the admissible triplets. Therefore, in order to get convinced that w 62 L, it is enough
to check that w does not t any admissible triplet.
Fix an admissible triplet (A;
. For
t, we dene a language L j that contains all words that traverse in M from p 1
j to p 2
. This is
done formally by dening an automaton M j as follows: The set of states of M j is obtained by adding to
a new state f j . The initial state of M j and its unique accepting state are p 1
respectively. For
each
and 2 f0; 1g, if - M (q;
, we set - M j
We
Namely, in M j all transitions within C
remain the same.
All transitions going to other components now go to f j which has a loop to itself. Thus, M j is essentially
strongly connected as in Denition 2.5 with g. Then L j is the language accepted by M j .
Given the xed admissible triplet (A; word w of length sub-words of
setting t. Note that jw
Namely, if w were to path through M according to the partition then the substring w j corresponds
to the portion of the traversal path of w that lies within the component C
Lemma 2.7 Let (A; be an admissible triplet , where
. Let w be a word of length n satisfying dist(w; L) n. Dene languages (L
and words
(w
as described above. Then there exists an index j, 1 j t, for which dist(w
k .
Proof. Assume this is not the case. Let
j=1 be the partition and recall that t k. For
every be a word of length n j+1 n j 1 for which
(the empty word). Also, for 1 j t 1
choose j 2 f0; 1g so that - M (p 2
j+1 . Then by construction the word w
belongs to L and dist(w; w
{ a contradiction.Now we present a key idea of the proof. Ideally, we would like to test whether an input word w
of length n ts any admissible triplet. In the positive case, i.e. when w 2 LM , the traversal path of
w in M denes naturally an admissible triplet which w will obviously t. In the negative case, i.e.
when dist(w; L) n, Lemma 2.7 implies that for every admissible triplet (A; P; ), at least one of the
sub-words w j is very far from the corresponding language L j . Then by Lemma 2.6 w j contains many
short infeasible runs, and thus sampling a small number of random runs will catch one of them with
high probability. However, the problem is that the total number of admissible triplets clearly depends
on n, which makes the task of applying directly the union bound on the probability of not catching an
infeasible run impossible.
We circumvent this di-culty in the following way. We place evenly in a bounded number
(depending only on and the parameters of M) of transition intervals T s of a bounded length and
postulate that a transition between components of C(G) should happen inside these transition intervals.
Then we show that if w 2 L, it can be modied slightly to meet this restriction, whereas if dist(w; L)
n, for any choice of such an admissible triplet, w is far from tting it. As the number of admissible
triplets under consideration is bounded by a function of only, we can apply the union bound to estimate
the probability of failure.
Recall that runs over all truly connected components
of G, corresponding to vertices of C(G). Let log(1=)=. We place S transition intervals
s=1 evenly in [n], where the length of each transition interval T s is jT s m).
For
.
ALGORITHM
Input: a word w of length
1. For each 1 i log(8km=) choose r i random runs in w of length 2 i+1
2. For each admissible triplet (A;
j=1 such
that for all 2 j t one has do the following:
Form the automata M j , 1 j t, as described above.
Discard those chosen runs which end or begin at place p for which jp n j j n=(128km log(1=)).
Namely, those runs which have one of their ends closer than n=(128km log(1=)) from some
For each remaining run R, if R falls between n j and n j+1 , check whether it is feasible for
the automaton M j starting at b n is the rst coordinate of R in w. Namely,
is the place where R starts relative to n j , which is the the place w \enters" M j .
3. If for some admissible triplet all checked runs turned out to be feasible, output "YES". Otherwise
(i.e, in the case where for all admissible triplets at least one infeasible run has been found) output
"NO".
Lemma 2.8 If dist(w; L) n, then the above algorithm outputs "NO" with probability at least 3=4.
If w 2 L, then the algorithm always outputs "YES".
Proof. The proof contains two independent parts, in the rst we consider the case of an input w with
dist(w; L) n, on which the algorithm should answer 'NO' (with high probability). The other part
treats the case where w 2 L, for which the algorithm should answer 'YES'.
Let us rst assume that dist(w; L) n. The number of admissible triplets (A;
partition points fall into the union of transition intervals
can be estimated from above by
(rst choose an admissible path in C(G), the number of admissible paths is at most 2 k as any subset of
vertices of C(G) denes at most one path spanning it; then choose portals, the total number of chosen
portals is at most 2k, therefore there are at most jV j 2k possible choices for portals; then for a xed
there are at most SjT s j choices for each n j , where 2 j t and t k). For satisfying
condition (*) and S as above, this expression is at most (1=) 2k . Thus we need to check at most (1=) 2k
admissible triplets.
Let be an admissible triplet satisfying the restriction formulated in Step 2 of the above
algorithm. Write
. Then the triplet denes automata
and languages (L
as described before. By Lemma 2.7 for some 1 j t one has
n=(2k). Then by Lemma 2.6 there exists an i, 1 i log(8km=) so that
contains at least (2 i 4 n=(2km log(8km=)) runs of length 2 i+1 . At
most of them may touch the last bits of the interval [n 1], and at most
of them may touch the rst bits of this interval. Hence there are at least 2 i 6 n=(km log(1=)) 2
of them that touch neither the rst nor the last n=(128km log(1=)) bits of the
interval Obviously, if a random sample contains one of these infeasible runs, then it
provides a certicate for the fact that w does not t this admissible triplet. A random sample of r i runs
of length 2 i+1 misses all of these infeasible runs with probability at most
2k
Thus by the union bound we conclude that in this case a random sample does not contain a "witness" for
each feasible triplet with probability at most 1=4. This completes the proof for the case of dist(w; L)
n.
We now address the case for which w 2 L. We need to show that in this case the algorithm answers
'YES'. For this is is enough to show that if w 2 L, then there exists an admissible triplet which passes
successfully the test of the above algorithm. A traversal of w in M naturally denes a triplet (A;
as follows:
are components from C(G), ordered according to the
order of their traversal by w;
is the rst (resp. the last) state of C
visited by w;
set to be the rst time w
enters
while traversing M . However, this partition does not necessarily meet the requirement stated
in Step 2 of the algorithm: In the true traversal of w in M the transitions from C i j
to C i j+1
might
occur outside the transition intervals T s . We show that the desired triplet can be obtained from the
actual triplet, modifying only the third component of it. This modied triplet would
then correspond to a dierent word w (which is quite close to w) that makes all the transitions
inside the postulated transition intervals. In addition, we will take care that no query is made to bits
in which w 0 diers from w. Hence, the algorithm will actually be consistent with both. This is in fact
the reason for discarding the runs that are too close to some n j in Step 2 of the algorithm. Intuitively,
this is done as follows: Assume n j is not in a transition interval, then we either make the traversal in
longer so to end in p 2
in a transition interval, or we shorten the traversal in C so to enter
a transition interval, depending on where the closest transition interval is. Formally this is done as
follows. Dene a new partition
choose a transition
interval T s closest to n j . If C
is a truly connected component, we choose n 0
j as the leftmost coordinate
in T s satisfying the following restrictions: (a) n 0
is a singleton
without loops we set n 0
such an n 0
exists. Finally, we set
Note that the obtained triplet (A;
is truly connected. As there
exists a path from p 1
j to p 2
of length n j+1 n j 1, there also exists a path of length n 0
j 1.
This implies the admissibility of 0 and hence the admissibility of (A;
Let now R be a run of w inside [n 0
j+1 n=(128km log(1=))] and let b be its
rst coordinate. Since we placed S transition intervals fT s g evenly in [n], we have jn 0
+m). Therefore, R falls also completely inside [n
remark at this point that the purpose of discarding marginal runs at Step 2 of the algorithm is to achieve
that each one of the remaining runs will fall completely not only within [n 0
j+1 ], but also within
As we will see immediately this guarantees that R will be feasible for the corresponding
automaton M j . Without this deletion, with positive probability one of the sampled runs R may start in
a place where w is in C
and end in a place where w is in C i j
, thus making it impossible to attribute
R to one particular automaton M j . Therefore, with positive probability the algorithm would fail in the
positive case. Discarding marginal runs allows us to get a one-sided error algorithm).
As w 2 L, there exists a state q 2 C
so that -(q; R) 2 C
. Also, q is reachable from p 1
(the initial
state of C
steps (b is the rst coordinate of R). According to the choice of n 0
j we
have
is the period of C
. But then by Lemma 2.3 q is reachable from p 1
in
m) steps. This shows that R is feasible for M j , starting at b n 0
1. Thus, if w 2 L, the
above algorithm always outputs "YES". 2
Finally, the number of bits of w queried by our algorithm is at most
log(8km=) X
log(8km=) X
We have thus proven the following theorem.
Theorem 1 For every regular language L, every integer n and every small enough > 0, there exists
a one-sided error -testing algorithm for L\ f0; 1g n , whose query complexity is c log 3 (1=)=, where the
constant c > 0 depends only on L.
A nal note about the dependence of the complexity on the parameters is in place here. In the proof
M is considered xed, as the algorithm is tailored for a xed given language. However, in the calculation
above we have kept the dependence of the query complexity on the parameters of M explicit. One has
to take in mind though that the estimates hold only when condition (*) holds. In particular we require
(third item in (*)), that 1=(
Another note is about the running time of the algorithm (rather then just its query complexity). The
dominating term in Step 1 and the rst two subsets of Step 2 of the algorithm is the query complexity.
In the last substeps, each run has to be checked against M j . Each such check involves checking whether
there is a word u and a word v (of suitable lengths) so that uRv 2 L. Checking whether there are such
u; v is done directly by Lemma 2.3 in case the length of u and v are longer than m, or by checking all
words if one of them is shorter than m.
3 Lower bound for regular languages
In many testability questions, it is quite natural to expect a lower bound of order 1= for the query
complexity of testing. This is usually proven by taking a positive example of size n and perturbing it in
randomly chosen n places to create a negative instance which is hard to distinguish from the positive
one. Regular languages are not an exception in this respect, as shown by the next proposition and its
fairly simple proof.
Proposition 1 Let L be the regular language over the alphabet f0; 1g dened by 1g. For
any n an -test for L \ f0; 1g n has query complexity at least 1
3 .
Proof. Our proof is based on the following reformulation of the renowned principle of Yao [14], saying
that if there exists a probability distribution on the
union
of positive and negative examples such that
any deterministic testing algorithm of query complexity d is correct with probability less than 2/3 for
an input randomly chosen
from
according to this distribution, then d is a lower bound on the query
complexity of any randomized testing algorithm.
Dene a distribution on the set of positive and negative instances of length n as follows. The word
gets probability 1=2. Next we partition the index set [1; n] into , each of size
n, and for each 1 i t give probability 1=(2t) to the vector y i created from 1 n by
ipping all bits in
I i from 1 to 0. Note that dist(y are negative instances. Now we apply the above
mentioned principle of Yao. Let A be a deterministic -testing algorithm with query complexity d. If
A is incorrect on the word 1 n , then it is already incorrect with probability at least 1=2. Otherwise, it
should accept the input if all d tested bits equal to 1. Therefore it accepts as well at least t d of the
inputs y i . This shows that A gives an incorrect answer with probability at least (t d)=(2t) < 1=3,
implying d > t=3. 2.
The main idea of the proof of the above proposition can be used to get an
=) lower bound on
the query complexity of testing any non-trivial regular language, with a natural denition of non-trivial.
This is proven in the next proposition. A somewhat paradoxical feature of its proof is that our main
positive result (Theorem 1) and its proof are used here to get a negative result.
For a language L let L
Denition 3.1 A language L is non-trivial if there exists a constant 0 < 0 < 1, so that for innitely
many values of n the set L n is non-empty, and there exists a word w 2 f0; 1g n so that dist(w; L n ) 0 n.
Proposition 2 Let L be a non-trivial regular language. Then for all su-ciently small > 0, any
-testing algorithm for L requires
queries.
Proof. The proof here is essentially a generalization of the proof of Proposition 1. We thus present it
in a somewhat abridged form.
Let n be large enough. Assume L n 6= ;, and w 2 f0; 1g n is such that dist(w; L n ) 0 n. We may
clearly assume that the constant 0 is as small as needed for our purposes. Our main result, Theorem
1, and its proof imply that with probability at least 2=3, a random choice of a set of runs, built as
described at Step 1 of the testing algorithm of Theorem 1, and having total length ~
the algorithm to reject w. As we have noticed, the testing algorithm has one sided error, i.e., it always
accepts a word from L. Thus, if we choose a random set of runs as above, it will cause to reject w with
probability 2/3 and it will not coincide with any word u 2 L n (for otherwise, it would reject u too).
Each such random set of runs is just a random set of intervals in ng (of length as dened in
Step 1 of the testing algorithm) of total length bounded by ~
that two such random sets
intersect with probability ~
n)). Therefore if we choose ~
n) such subsets at random, then we
expect that ~
O( 2
n) pairs of them will intersect, and that 2/3 of the members will reject w. This implies
that there exists a family S of ~
disjoint sets of runs so that for each member of S, no
word of L n coincides with w on this set. Fix now 0 and let > 0 be small enough compared to 0 . We
partition the family S into , each of cardinality n, where the constant c
depends on 0 only and is thus independent of . Let u be a word in L n . For each 1 i t, the word
w i is obtained from u by changing the bits of u, corresponding to S i , to those from w. It follows then
that Indeed, to transform w i into a word in L n , at least one bit has to be changed
in every member of S i .
Now, as in the proof of Proposition 1, we dene a probability distribution on the union of positive
and negative examples. The word u gets probability 1=2, and each one of the t words w
probability 1=(2t). A simple argument, essentially identical to that in the proof of Proposition 1, shows
that any deterministic algorithm needs to query at
least
3 =) bits of the input word to be
successful with probability at least 2=3 on the dened probability distribution. Applying Yao's principle,
we get the desired result. 2
4 Testability of context-free languages
Having essentially completed the analysis of testability of regular languages, it is quite natural to try
to make one step further and to address testability of the much more complex class of context-free
languages (see, e.g., [8] for a background information). It turns out that the general situation changes
drastically here as compared to the case of regular languages. We show that there exist quite simple
context-free languages which are not -testable. Then we turn our attention to one particular family of
context-free languages { the so-called Dyck languages. We prove that the rst language in this family,
testable in time polynomial in 1=, while all other languages in the family are already non-testable.
All relevant denitions and proofs follow.
4.1 Some context-free languages are non-testable
As we have already mentioned, not all context-free languages are testable. This is proven in the following
proposition.
Theorem 2 Any -testing algorithm for the context-free language
the reversal of a word w, requires
n) queries in order to have error of at most 1=3.
Proof. Let n be divisible by 6. We again dene a distribution D on the union of positive and negative
inputs in the following way. A negative instance is chosen uniformly at random from among all negative
instances (i.e. those words w 2 f0; 1g n which are at distance at least n from L). We refer to this
distribution as N . Positive instances are generated according to a distribution P dened as follows: we
pick uniformly at random an integer k in the interval [n=6 and then select a positive example
uniformly among words vv R uu R with k. Finally the distribution D on all inputs is dened as
follows: with probability 1/2 we choose a positive input according to P and with probability 1=2 we
choose a negative input according to N . We note that a positive instance is actually a pair (k; w) (the
same word w may be generated using dierent k's).
We use the above mentioned Yao's principle again. Let A be a deterministic -testing algorithm for
L. We show that for any such A, if its maximum number of queries is
n), then its expected
error with respect to D is at least 1
A be such an algorithm. We can view A as
a binary decision tree, where each node represents a query to a certain place, and the two outgoing
edges, labeled with 0 or 1, represent possible answers. Each leaf of A represents the end of a possible
computation, and is labeled 'positive' or `negative' according to the decision of the algorithm. Tracing
the path from the root to a node of A, we can associate with each node t of A a pair (Q t
ng is a set of queries to the input word, and f is a vector of answers received
by the algorithm. We may obviously assume that A is a full binary tree of height d and has thus 2 d
leaves. Then jQ for each leaf t of A.
We will use the following notation. For a subset Q ng and a function f
with f on Qg ;
with f on Qg ;
is the set of all negative (resp. positive) instances of length n consistent with
the pair (Q; f ). Also, if D is a probability distribution on the set of binary strings of length n and
is a subset, we dene Pr D
w2E Pr D [w].
be the set of all leaves of A labeled 'positive', let T 0 be the set of all leaves of T labeled
'negative'. Then the total error of the algorithm A on the distribution D is
Pr
The theorem follows from the following two claims.
4.1 For every subset Q ng of cardinality
Pr D [E (Q; f )]
4.2 For every subset Q ng of cardinality
n) and for every function f
Pr
Based on Claims 4.1, 4.2, we can estimate the error of the algorithm A by
Pr
The theorem follows. 2
We now present the proofs of Claims 4.1 and 4.2.
Proof of Claim 4.1: Notice rst that L has at most 2 n=2 n=2 words of length n (rst choose a word
of length n=2 and then cut it into two parts v and u, thus getting a word
the number of words of length n at distance less than n from L is at most jL \ f0; 1g n j
log(1=)n . We get
It follows then from the denition of D that
Pr D [E (Q; f
Proof of Claim 4.2: It follows from the denition of the distribution D that for a word w 2 L\f0; 1g n ,
Pr D
Recall that f) is the set of words in L for which are consistent with f on the set of queries Q,
Hence,
Pr
Now observe that for each of the d pairs of places in Q there are at most two choices of k, for which
the pair is symmetric with respect to k or to n=2 + k. This implies that for n=6 2
choices of k, the set Q does not contain a pair symmetric with respect to k or n=2+k. For each such k,
Therefore,
Pr
As a concluding remark to this subsection we would like to note that in the next subsection (Theorem
we will give another proof to the fact that not all context-free languages are testable by showing
the non-testability of the Dyck language D 2 . However, we preferred to give Theorem 2 as well due to
the following reasons. First, the language discussed in Theorem 2 is simpler and more natural than
the Dyck language D 2 . Secondly, the lower bound of Theorem 2 is better than that of Theorem 4.
The proofs of these two theorems have many common points, so the reader may view Theorem 2 as a
"warm-up" for Theorem 4.
4.2 Testability of the Dyck languages
It would be extremely nice to determine exactly which context-free languages are testable. At present
we seem to be very far from fullling this task. However, we are able to solve this question completely
for one family of context-free languages { the so called Dyck languages.
For an integer n 1, the Dyck language of order n, denoted by D n , is the language over the alphabet
of 2n symbols grouped into n ordered pairs (a The language D n
is dened by the following productions:
2.
3.
where
denotes the empty word. Though the words of D n are not binary according to the above
denition, we can easily encode them and the grammar describing them using only 0's and 1's. Thus we
may still assume that we are in the framework of languages over the binary alphabet. We can interpret
D n as the language with n distinct pairs of brackets, where a word w belongs to D n i it forms a
balanced bracket expression. The most basic and well known language in this family is D 1 , where we
have only one pair of brackets. Dyck languages play an important role in the theory of context-free
languages (see, e.g., [4] for a relevant discussion) and therefore the task of exploring their testability is
interesting.
Our rst goal in this subsection is to show that the language D 1 is testable. Let us introduce a
suitable notation. First, for the sake of simplicity we denote the brackets a
Assume that n is a large enough even number (obviously, for odd n we have D 1 \ f0; 1g
there is nothing to test in this case). Let w be a binary word of length n. For 1 i n, we denote by
x(w; i) the number of 0's in the rst i positions of w. Also, y(w; i) stands for the number of 1 0 s in the
rst i positions of w. We have the following claims.
4.3 The word w belongs to D 1 if and only if the following two conditions hold: (a) x(w; i)
Proof. Follows easily from the denition of D 1 , for example, by induction on the length of w. We omit
a detailed proof. 2
Proof. Observe rst that by Claim 4.3 a word w is in D 1 if and only if we can partition its letters
into pairwise disjoint pairs, so that the left letter in each pair is a zero, and the right letter is a one.
Consider the bipartite graph, whose two classes of vertices are the set of indices i for which
and the set of indices i for which respectively, where each i with connected to all
assumption (a) and the defect form of Hall's theorem, this graph
contains a matching of size at least y(w; n) s 1 . By assumption (b), y(w; n) n=2 s 2 =2. Therefore,
there are at least n=2 s 2 =2 s 1 disjoint pairs of letters in w, where in each pair there is a zero on
the left and a one on the right. Let us pair the remaining elements of w arbitrarily, where all pairs
but at most one consist of either two 0's or two 1's. By changing, now, when needed, the left entry of
each such pair to 0 and its right entry to 1 we obtain a word in D 1 , and the total number of changes
performed is at most (s 2 completing the proof. 2
a) If for some 1 i n one has y(w; i) x(w; i) s, then dist(w; D 1 ) s=2; b) If
Proof. Follows immediately from Claim 4.3. 2
We conclude from the above three claims that a word w is far from D 1 if and only if for some
coordinate i it deviates signicantly from the necessary and su-cient conditions provided by Claim 4.4.
This observation is used in the analysis of an algorithm for testing D 1 , proposed below.
where C > 0 is a su-ciently large constant, whose value will be chosen later, and assume d is an even
integer. In what follows we omit all
oor and ceiling signs, to simplify the presentation.
ALGORITHM
Input: a word w of length
1. Choose a sample S of bits in the following way: For each bit of w, independently and with
probability choose it to be in S. Then, if S contains more then d
'YES' without querying any bit. Else,
2. If dist(S; D 1 \ f0; 1g d 0
Lemma 4.6 The above algorithm outputs a correct answer with probability at least 2=3.
Proof. As we have already mentioned, we set
The proof contains two independent parts, in the rst we prove that the algorithm is correct (with
probability and in the second part we prove that the algorithm has a bounded error
for words w for which dist(w; D 1 ) n.
Consider rst the positive case w 2 D 1 . Set assume for simplicity that t as well as n=t
are integers. For 1 j t, let X j be the number of 0's in S, sampled from the interval [1; nj=t]. Let
also Y j denote the number of 1's in S, sampled from the same interval. Both X j and Y j are binomial
random variables with parameters x(w; nj=t) and p, and y(w; nj=t) and p, respectively. As w 2 D 1 , we
get by Claim 4.3 that x(w; nj=t) y(w; nj=t), implying EX j EY j . Applying standard bounds on
the tails of binomial distribution, we obtain:
For . Note that EZ j np=t. Using similar argumentation as above,
we get
As w 2 D 1 , we have by Claim 4.3 x(w; Hence
Finally, we have the following estimate on the distribution of the sample size jSj:
Choosing C large enough and recalling the denition of t, we derive from (1){(4) that with probability
at least 2=3 the following events hold simultaneously:
1.
2.
3. X t np
4. jSj np
Assume that the above four conditions are satised. Then we claim that dist(S; D 1 ) < . Indeed,
the rst two conditions guarantee that for all 1 i jSj we have y(S; i) x(S; i) =2+2np=t 2=3.
The last two conditions provide x(S; jSj) y(S; Therefore, by Claim
4.4 our algorithm will accept w with probability at least 2=3, as
required. This ends the rst part of the proof.
Let us now consider the negative case. Assume that dist(w; D 1 \ f0; 1g n ) n. By Claim 4.4 we
have then that at least one of the following two conditions holds: a) there exists an index 1 i n, for
which y(w; i) x(w; i) n=2; b) x(w; n) y(w; n) n=2. In the former case, let X , Y be the number
of 0's, 1's, respectively, of S, sampled from the interval [1; i]. Let also k be the number of elements
from [1; i] chosen to S. Then are binomially distributed
with parameters x(w; i) and p, and y(w; i) and p, respectively. It follows from the denition of i that
EY EX np=2. But then we have
Choosing the constant C to be su-ciently large and recalling the denitions of p and , we see that
the above probability is at most 1=6. But if y(S; it follows from Claim 4.5 that
If x(w; n) y(w; n) n=2, we obtain, using similar arguments:
The above probability can be made at most 1=6 by the choice of C. But if x(S; jSj) y(S; jSj) 2, it
follows from Claim 4.5 that dist(S; D 1 ) . Thus in both cases we obtain that our algorithm accepts
w with probability at most 1=6. In addition, the algorithm may accept w (in each of the cases), when
(rst item in the algorithm). However, by equation (4) this may be bounded by 1/6
(choosing C as in the rst part). Hence the algorithm rejects w with probability at least 2=3. This
completes the proof of Lemma 4.6. 2.
By Lemma 4.6 we have the following result about the testability of the Dyck language D 1 .
Theorem 3 For every integer n and every small enough > 0, there exists an -testing algorithm for
query complexity is C log(1=)= 2 for some absolute constant C > 0.
The reader has possibly noticed one signicant dierence between the algorithm of Section 2 for
testing regular languages and our algorithm for testing D 1 . While the algorithm for testing regular
languages has a one-sided error, the algorithm of this section has a two-sided error. This is not a
coincidence. We can show that there is no one-sided error algorithm for testing membership in D 1 ,
whose number of queries is bounded by a function of only. Indeed, assume that A is a one-sided error
algorithm for testing D 1 . Consider its execution on the input word . It is easy to see
that dist(u; D 1 ) n. Therefore, A must reject u with probability at least 2=3. Fix any sequence of
coin tosses which makes A reject u and denote by Q the corresponding set of queried bits of u. We claim
that if jQ\[1; n=2+n]j n=2 n, then there exists a word w of length n from D 1 , for which
for all i 2 Q. To prove this claim, we may clearly assume that jQ \ [1; n=2
as follows. For we take the rst n indices i in [1; n=2
and set For the last n indices i in [1; n=2
the su-cient condition for the membership in D 1 , given by Claim 4.3. Indeed,
at any point j in [1; n=2+ n] the number of 0's in the rst j bits of w is at least as large as the number
of 1's. Also, for j n=2
Therefore w 2 D 1 . As A is assumed to be a one-sided error algorithm, it should always accept every
But then we must have jQ \ [1; n=2 queries a linear in n
number of bits. We have proven the following statement.
Proposition 3 Any one-sided error -test for membership in D 1
queries
n) bits on words of length
n.
Our next goal is to prove that all other Dyck languages, namely D k for all k 2 are non-testable.
We will present a detailed proof of this statement only for 2, but this clearly implies the result for
all k 3.
For the sake of clarity of exposition we replace the symbols a in the denition of D 2 by
respectively. Then D 2 is dened by the following context-free
where
is the empty word. Having in mind the above mentioned bracket interpretation of the Dyck
languages, we will sometimes refer to 0; 2 as left brackets and to 1; 3 as right brackets. Note that we
do not use an encoding of D 2 as a language over f0; 1g, but rather over an alphabet of size 4. Clearly,
non-testability of D 2 as dened above will imply non-testability of any binary encoding of D 2 that is
obtained by a xed binary encoding of f0; 1; 2; 3g.
Theorem 4 The language D_2 is not ε-testable.
Proof. Let n be a large enough integer, divisible by 8. We denote L_n = D_2 ∩ {0, 1, 2, 3}^n. Using Yao's
principle, we assign a probability distribution on inputs of length n and show that any deterministic
algorithm probing o(√n) bits outputs an incorrect answer with probability 0.5 − o(1). Both positive
and negative words will be composed of three parts: the first, which is a sequence of matching 0/1
(brackets of the first kind), followed by a sequence of 0/2 (left brackets) and a sequence of 1/3 (right
brackets).
Positive instances are generated according to the distribution P as follows: choose k uniformly at
random in the range n/8, ..., n/4. Given k, the word w = z v of length n, where z is a fixed sequence of k matched 0/1 pairs and v is of length n − 2k, is
generated by: for i = 1, ..., (n − 2k)/2 choose v[i] at random from {0, 2} and then set v[n − 2k + 1 − i] = v[i] + 1.
Negative instances are chosen as follows: the process is very similar to the positive case except that
we do not have the restriction on v[n − 2k + 1 − i], for i = 1, ..., (n − 2k)/2. Namely, we choose k at random in the
range n/8, ..., n/4. Given k, a word w = z v of length n, where v is of length n − 2k, is generated by:
for i = 1, ..., (n − 2k)/2 choose v[i] at random from {0, 2}, and for i = 1, ..., (n − 2k)/2 choose v[n − 2k + 1 − i]
at random from {1, 3}. Let us denote by N the distribution at this stage. Note that the words that are
generated may be of distance less than εn from L_n (in fact some words in L_n are generated too). Hence
we further condition N on the event that the word is of distance at least εn from L_n.
The probability distribution over all inputs of length n is now defined by choosing with probability
1/2 a positive instance, generated as above, and with probability 1/2 a negative instance, chosen
according to the above described process.
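The two distributions are easy to describe procedurally. The sketch below is our illustration only: it writes the fixed prefix of matched 0/1 pairs as "01" repeated k times (one concrete choice consistent with the description above) and omits the further conditioning of N on being εn-far from L_n.

import random

def sample_instance(n, positive, rng=random):
    """Sample a word of length n: a prefix of k matched 0/1 pairs, then
    (n-2k)/2 left brackets from {0,2}, then (n-2k)/2 right brackets.
    Positive instances match v[i] with v[n-2k+1-i]; negative ones do not."""
    k = rng.randrange(n // 8, n // 4 + 1)
    half = (n - 2 * k) // 2
    v = [None] * (n - 2 * k)
    for i in range(half):
        v[i] = rng.choice("02")
        if positive:
            v[n - 2 * k - 1 - i] = str(int(v[i]) + 1)   # 0 -> 1, 2 -> 3
        else:
            v[n - 2 * k - 1 - i] = rng.choice("13")
    return "01" * k + "".join(v)

print(sample_instance(40, positive=True))
print(sample_instance(40, positive=False))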
Claim 4.7 The probability that an instance generated according to N is εn-close to some word in L_n
is exponentially small in n.
Proof. Fix k and let w = z v be a word of length n generated by N. For such a fixed k the three parts
of w are the first part of matching 0/1 of length 2k, the second part, which is a random sequence of 0/2
of length (n − 2k)/2, and the third part, which is a random sequence of 1/3 of length (n − 2k)/2. Let us denote by
N_1, N_2, N_3 these three disjoint sets of indices of w.
We will bound from above the number of words w of length n of this form
which are at distance at most εn from L_n. First we choose the value of w on N_2, which gives 2^{(n−2k)/2} possibilities. Then we choose (at most) εn bits of w to be changed to get a word from L_n (at most C(n, εn) choices)
and set those bits (4^{εn} possibilities). At this point, the only part of w still to be set is its value on N_3,
where we are allowed to use only right brackets 1, 3. The word to be obtained should belong to L_n. It
is easy to see that there is at most one way to complete the current word to a word in L_n using right
brackets only. Hence the number of such words altogether is at most 2^{(n−2k)/2} C(n, εn) 4^{εn}. The total number
of words w of this form is 2^{n−2k}, and each such word gets the same probability
in the distribution N. Therefore the probability that a word chosen according to N is εn-close to L_n
can be estimated from above by
2^{(n−2k)/2} C(n, εn) 4^{εn} / 2^{n−2k} ≤ C(n, εn) 4^{εn} 2^{−n/4} ≤ 2^{−(1/4 − H(ε) − 2ε)n},
where H(·) denotes the binary entropy function and we used k ≤ n/4; this is exponentially small in n
for small enough ε > 0, as promised.
Claim 4.8 Let S, |S| = d, be a fixed set of places and let k be chosen uniformly at random in the
range n/8, ..., n/4. Then S contains a pair i < j symmetric with respect to (n − 2k)/2 with probability
at most 8d²/n.
Proof. For each distinct pair i < j there is at most one k for which i, j are symmetric with respect to
the above point, and k takes more than n/8 equally likely values. Hence the above probability is bounded by C(d, 2) · 8/n ≤ 8d²/n.
We now return to the proof of Theorem 4. Let A be an algorithm for testing L_n that queries at
most d = o(√n) bits. We may assume that A is non-adaptive, namely, that it queries some
fixed set of places S of size d (every adaptive algorithm making q queries can be made non-adaptive by querying ahead the at
most 2^q possible queries defined by the two possible branchings after each adaptive query; we then look at
these queries as our S). For any possible set of answers f and an input w, denote by f_w
the event that w is consistent with f on S. Let NoSym be the event that S contains
no symmetric pair with respect to (n − 2k)/2. Also, let F_0 denote all those f's on which the algorithm
answers 'NO' and let F_1 be all those f's on which it answers 'YES'. Finally denote by (w positive) and
(w negative) the events that a random w is a positive instance and a negative instance, respectively.
The total error of the algorithm is
Σ_{f ∈ F_0} Prob[f_w ∧ (w is positive)] + Σ_{f ∈ F_1} Prob[f_w ∧ (w is negative)].
However, given that S contains no symmetric pairs, for a fixed f, Prob[f_w ∧ (w is negative)] is
essentially equal to Prob[f_w ∧ (w is positive)] (these probabilities would be exactly equal if negative
w were generated according to N; Claim 4.7 asserts that N is exponentially close to the real
distribution on negative instances). Hence each of these probabilities is 0.5 · Prob[f_w | NoSym] − o(1).
Plugging this into the sum above, and using Claim 4.8, we get that the error probability is bounded
from below by Prob(NoSym) · Σ_f (0.5 − o(1)) · Prob[f_w | NoSym] ≥ (1 − 8d²/n)(0.5 − o(1)) = 0.5 − o(1).
Concluding remarks
The main technical achievement of this paper is a proof of testability of regular languages. A possible
continuation of the research is to describe other classes of testable languages and to formulate sufficient
conditions for a context-free language to be testable (recall that in Theorem 2 we have shown that not
all context-free languages are testable).
One of the most natural ways to describe large classes of testable combinatorial properties is by
putting some restrictions on the logical formulas that define them. In particular we can restrict the arity
of the participating relations, the number of quantifier alternations, the order of the logical expression
(first order, second order), etc.
The result of the present paper is an example of this approach, since regular languages are exactly
those that can be expressed in second order monadic logic with a unary predicate and an embedded
linear order. Another example can be found in a sequel of this paper [1], which addresses testability of
graph properties defined by sentences in first order logic with binary predicates, and which complements
the class of graph properties shown to be testable by Goldreich et al. [7]. Analogous results for predicates
of higher arities would be desirable to obtain, but technical difficulties arise when the arity is greater
than two.
As a long term goal we propose a systematic study of the testability of logically defined classes.
Since many different types of logical frameworks are known, finding out which one is suited for this
study is a challenge. Virtually all single problems that have been looked at so far have the prospect
of being captured by a more general logically defined class whose members have the same testability
properties.
A very different avenue is to try to develop general combinatorial techniques for proving lower
bounds for the query complexity of testing arbitrary properties, possibly by finding analogs to the block
sensitivity [12] and the Fourier analysis [11] approaches for decision tree complexity. At present we have
no candidates for combinatorial conditions that would be both necessary and sufficient for ε-testability.
Acknowledgment. We would like to thank Oded Goldreich for helpful comments. We are also grateful
to the anonymous referees for their careful reading.
--R
Proof veri
Proof of a conjecture by Erd
Property testing and its connections to learning and approximation.
Introduction to Automata Theory
A bound for a solution of a linear Diophantine problem
New directions in testing
On the degree of Boolean functions as real polynomials
Robust characterization of polynomials with applications to program testing.
Probabilistic computation
Alon , Asaf Shapira, A characterization of easily testable induced subgraphs, Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms, January 11-14, 2004, New Orleans, Louisiana | regular languages;context-free languages;property testing |
586907 | The Combinatorial Structure of Wait-Free Solvable Tasks. | This paper presents a self-contained study of wait-free solvable tasks. A new necessary condition for wait-free solvability, based on a restricted set of executions, is proved. This set of executions induces a very simple-to-understand structure, which is used to prove tight bounds for k-set consensus and renaming. The framework is based on topology, but uses only elementary combinatorics, and, in contrast to previous works, does not rely on algebraic or geometric arguments. | Introduction
This paper studies the tasks that can be solved by a wait-free protocol in shared-memory
asynchronous systems. A shared-memory system consists of n
that communicate by reading and writing shared variables; here we assume
only atomic read/write registers. We also assume that processes are completely
asynchronous, i.e., each process runs at a completely arbitrary speed. Processes
start with inputs and, after performing some protocol, have to halt with some
outputs. A task specifies the sets of outputs that are allowable for each assignment
of inputs to processes. A protocol is wait-free if each process halts with
an output within a finite number of its own steps, regardless of the behavior of
other processes. A task is wait-free solvable if there exists a wait-free protocol
that solves it.
The study of wait-free solvable tasks has been central to the theory of distributed
computing. Early research studied specific tasks and showed them to be
solvable (e.g., approximate agreement [9], 2n-renaming [2], k-set consensus with
at most k − 1 failures [7]) or unsolvable (e.g., consensus [10], n+1-renaming [2]).
A necessary and sufficient condition for the solvability of a task in the presence
of one process failure was presented in [3]. In 1993, a significant advancement
??? Supported by grant No. 92-0233 from the United States-Israel Binational Science
Foundation (BSF), Jerusalem, Israel, and the fund for the promotion of research in
the Technion. Email: hagit@cs.technion.ac.il.
y Part of this work was done while visiting the MIT Laboratory for Computer Science,
and the Cambridge Research Laboratory of DEC. Supported by CONACyT and
DGAPA Projects, UNAM. Email: rajsbaum@servidor.unam.mx.
was made in the understanding of this problem with [4, 17, 20]. This advancement
yielded new impossibility results for k-set consensus ([4, 17, 20], and later
[6, 14, 15]) and renaming ([17, 15]), as well as a necessary and sufficient condition
for wait-free solvability ([17, 18]). Of particular interest was the use of
topological notions to investigate the problem, suggested in [17, 20]. Yet, much
of this development remained inaccessible to many researchers, since it relied on
algebraic and geometric tools of topology.
In this paper, we present a self-contained study of wait-free solvable tasks
starting from first principles. We introduce a new necessary and sufficient condition
for wait-free solvability. This condition is used to prove tight bounds on
renaming and k-set consensus. It is also used to derive an extension of the necessary
condition of [3]. Our approach borrows critical ideas from previous works
in this area (especially, [4, 5, 17, 18, 20]), and integrates them into a unified
framework. Below we discuss the relationships between our work and previous
work.
To provide a feeling for our results, we present the following rough description
of key notions from combinatorial topology. A colored simplex is a set, in which
each of the elements, called vertices, is colored with a process id. A colored
complex is a collection of colored simplexes which is closed under containment.
A mapping from the vertices of one colored complex to the vertices of another is
simplicial if it maps a simplex to a simplex; it is color preserving if a vertex with
id p_i is mapped to a vertex with id p_i. Finally, a complex whose largest simplex
contains m vertices is a pseudomanifold if every simplex with m − 1 vertices
is contained in either one or two simplexes with m vertices. Precise definitions
appear in Section 3; they do not rely on algebraic or geometric interpretations.
The novel combinatorial concept we use is of a pseudomanifold being a divided
image of a simplex. Very roughly, a pseudomanifold is a divided image of a
simplex if it has the same boundary as the simplex. The divided image preserves
some of (but not all) the topological structure of the simplex. We prove a new
necessary condition for wait-free solvability (Corollary 13): if a task is wait-free
solvable, then there exists a divided image of the complex of possible inputs; it
is straightforward to see that the decisions made by the protocol must induce a
simplicial map from this divided image to the complex of possible outputs which
must agree with the task specification.
We present a necessary and sufficient condition for wait-free solvability, i.e.,
a characterization of the wait-free solvable tasks. Consider a task, and a wait-free
protocol that solves it. We explicitly show that a subset of the protocol's
executions, called immediate snapshot executions [4, 20], induce a divided image
of the complex of possible inputs. We use a solution for the participating set
problem ([5]) to show that the above property is also sufficient. Namely, if there
exists a simplicial map from a divided image induced by immediate snapshots
executions to the output complex which agrees with the task, then the problem
is wait-free solvable.
We prove that the divided image induced by immediate snapshot executions
is orientable. We then prove a combinatorial theorem which extends Sperner's
Lemma (for orientable divided images). This theorem is the key to a completely
combinatorial proof that M-renaming is wait-free solvable only if M ≥ 2n.
Using the basic Sperner's Lemma, we also show that k-set consensus is wait-free
solvable only if k > n. (These bounds are known to be tight, see [2] and [7],
respectively.)
Divided images play a role similar to spans (both the geometric version used
in [17, 18, 14], and the algebraic version introduced in [15]). As discussed below
(after Definition 1) divided images have weaker mathematical properties than
geometric spans, in particular, they may have "holes". We show (in the full version
of the paper) that an orientable divided image corresponds in a natural
manner to an algebraic span. It was shown that such spans exist (in [17]), but
this proof requires a combination of algebraic (homology theory) and geometric
(subdivided simplexes) arguments. The existence of algebraic spans with certain
properties imply impossibilities of set consensus and renaming [15], without
relying on the more involved arguments of [17].
The necessary and sufficient condition we derive is not exactly the same as the
one proved by Herlihy and Shavit in [18]. We explicitly construct a specific well-structured
divided image (induced by immediate snapshot executions), while
Herlihy and Shavit show that an arbitrary span exists ([17]). The notion of
immediate snapshot executions was introduced in [4, 20]. The basic ideas needed
to show that immediate snapshot executions induce a divided image already
appeared in Borowsky and Gafni's paper [4]. However, they were interested in
properties of immediate snapshot executions to prove the impossibility result for
set consensus. It was not shown that they are orientable (a property used for the
renaming impossibility) or that they induce an algebraic span (or our simpler
combinatorial notion of a divided image), and no general conditions for wait-free
solvability were derived from them.
In the full version of this paper, we derive another necessary condition for
wait-free solvability from Corollary 13, of a different nature. This condition is
based on connectivity, and is therefore computable. This condition extends the
condition for solvability in the presence of one failure [3]. It follows from [11,
16] that there is no computable necessary and sufficient condition for wait-free
solvability.
2 Model of Computation
Our model is standard and was used in many papers; we follow [1].
A system consists of n + 1 processes, p_0, ..., p_n. Each process is a deterministic
state machine, with a possibly infinite number of states. We associate with
each process a set of local states. Among the states of each process is a subset
called the initial states and another subset called the output states. Processes
communicate by means of a finite number of single-writer multi-reader atomic
registers (also called shared variables). No assumption is made regarding the size
of the registers, and therefore we may assume that each process p i has only one
register R_i. Each process p_i has two atomic operations available to it:
– write_i: writes its entire state to R_i.
– read_i(R): reads the shared variable R and returns its value v.
A system configuration consists of the states of the processes and registers.
Formally, a configuration C is a vector ⟨s_0, ..., s_n, v_0, ..., v_n⟩, where s_i is the
local state of process p_i and v_j is the value of the shared variable R_j. Denote
state_i(C) = s_i. Each shared variable may attain values from some domain which
includes a special "undefined" value, ?. An initial configuration is a configuration
in which every local state is an initial state and all shared variables are set to ?.
We consider an interleaving model of concurrency, where executions are modeled
as sequences of steps. Each step is performed by a single process. In each
step, a process p i performs either a write i operation or a read i (R) operation, but
not both, performs some local computation, and changes to its next local state.
The next configuration is the result of these modifications.
We assume that each process p_i follows a local protocol P_i that deterministically
determines p_i's next step: P_i determines whether p_i is to write or read, and
(in case of a read) which variable R to read, as a function of p_i's local state. If
p_i reads R, then P_i determines p_i's next state as a function of p_i's current state
and the value v read from R. If p_i writes, then P_i determines p_i's next state
(and hence the value written) as a function of p_i's current state. We assume that all local protocols are
identical, i.e., depend only on the state, but not on the process id. A protocol is
a collection P of local protocols P_0, ..., P_n.
An event of p_i is simply p_i's index i. A schedule is a finite or infinite sequence
of events. An execution is a finite or infinite alternating sequence of configurations
and events, C_0, j_1, C_1, j_2, C_2, ..., where C_0 is the initial configuration and
C_k is the result of applying the event j_k to C_{k−1}, for all k ≥ 1. The schedule of
this execution is j_1, j_2, ....
Given an execution α and a process p_i, the view of p_i in α,
denoted α|i, is the sequence state_i(C_0), state_i(C_1), .... Intuitively, for example, if
p_i decides in α without taking any steps, then the only information contained
in α|i is p_i's initial state.
A process p i is faulty in an infinite schedule oe if it takes a finite number of
steps (i.e., has a finite number of events) in oe, and nonfaulty otherwise. These
definitions also apply to executions by means of their schedules.
We assume that each process has two special parts of its state, an input value
and an output value. Initial configurations differ only in the input values of the
processes. If we want to have a local protocol which depends on the process id,
then the id has to be provided explicitly as part of the input. We assume that
the output value is irrevocable, i.e., the protocol cannot over-write the output
value. Note that in our definition processes do not halt; they decide by writing
the output value, but continue to take steps (which are irrelevant).
A task Δ has some domain I of input values and a domain O of output values; Δ
specifies, for each assignment of input values to the processes, which output
values can be written by the processes. A protocol solves Δ if for any finite
execution, the output values already written by the processes can be completed
(in any infinite extension of the execution where all processes are nonfaulty)
to output values for all processes that are allowable for the input values in the
execution. The protocol is wait-free if every nonfaulty process eventually writes
an output value.
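To make the model concrete, the following toy simulator (ours, not part of the paper) runs a trivial full-information protocol — write your own state, then read all registers one by one — under an arbitrary schedule of events; the process indices, field names, and inputs are illustrative only.

def run(inputs, schedule):
    n_plus_1 = len(inputs)
    registers = [None] * n_plus_1          # all shared variables start undefined
    # local state: (input, program counter, values read so far)
    states = [{"input": x, "pc": 0, "seen": []} for x in inputs]
    for i in schedule:                      # an event is just a process index
        s = states[i]
        if s["pc"] == 0:                    # first step: write own state to R_i
            registers[i] = dict(s)
            s["pc"] = 1
        elif 1 <= s["pc"] <= n_plus_1:      # next steps: read the registers one by one
            s["seen"].append(registers[s["pc"] - 1])
            s["pc"] += 1
    return states                           # the view of p_i is determined by states[i]

# Example: p2 writes and reads everything before p0 and p1 take any step.
print(run(["a", "b", "c"], [2, 2, 2, 2, 0, 1, 0, 1]))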
3 Combinatorial Topology Concepts
In this section, we introduce the basic topological concepts we use in this pa-
per. Previous papers in this area, e.g., [8, 14, 17, 18, 20], used geometric or
algebraic interpretations of topological structures; in contrast, our approach is
purely combinatorial, abstracting ideas from [12, 19, 21].
Basic Notions: The basis of our definitions is the notion of a complex. A complex
K is a collection of finite nonempty sets closed under containment; that is, if
oe is an element of K, then every nonempty subset of oe is an element of K. A
nonempty subset of oe is a face of oe. A face of oe is proper if it is not equal to oe.
Each element of a complex is called a simplex. A complex K 0 is a subcomplex of
a complex K if K 0 ' K.
The dimension of a simplex oe, dim(oe), is the number of its elements minus
one. A simplex of dimension m (with m+1 elements) is called an m-simplex. The
dimension of a complex K is the maximum dimension of its simplexes; we only
consider complexes of finite dimension. A complex of dimension m is called an
m-complex. We sometimes use a superscript notation to denote the dimension
of simplexes and complexes, e.g., oe m is an m-simplex and K m is an m-complex.
The vertex set of K is the union of the 0-simplexes of K. We identify the
vertex v and the 0-simplex fvg.
Consider two complexes K and L. Let f be a function from the vertices of
K to the vertices of L. f is simplicial if for every simplex {v_0, ..., v_k} of K,
{f(v_0), ..., f(v_k)} is a simplex of L. (Note that {f(v_0), ..., f(v_k)} is treated as
a set, since f need not be one-to-one and there may be repetitions.) This implies
that a simplicial map f can be extended to all simplexes of K. Intuitively, a
simplicial map f maps every simplex σ of K to a simplex f(σ) (perhaps of
smaller dimension) of L. We extend f to a set of simplexes of K, S, by defining
f(S) to be the set of simplexes f(σ) in L, where σ ranges over all simplexes of
S. Clearly, if S is a subcomplex of K then f(S) is a subcomplex of L.
Divided Images: An m-complex K m is full to dimension m if every simplex of
K m is contained in some m-simplex of K m .
Let K^m be a complex full to dimension m. An (m − 1)-simplex of K^m is
external if it is contained in exactly one m-simplex; otherwise, it is internal. The
boundary complex of K^m, denoted bound(K^m), is the subcomplex containing all
the faces of external simplexes of K^m. Clearly, bound(K^m) is full to dimension
m − 1. Abusing notation, let bound(σ^m) be the set of (m − 1)-faces of a simplex
σ^m.
A complex K^m is an m-pseudomanifold if it is full to dimension m and every
(m − 1)-simplex is contained in either one or two m-simplexes. 5 An m-manifold
is an m-pseudomanifold in which every (m − 1)-simplex is contained in two
m-simplexes, i.e., it has no external simplexes.
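These definitions are directly checkable on small examples. The sketch below is our illustration only: it represents a complex as a set of frozensets closed under containment and tests whether it is full to dimension m and an m-pseudomanifold.

from itertools import combinations

def closure(maximal_simplexes):
    """The complex generated by a family of simplexes: all nonempty faces."""
    cx = set()
    for s in maximal_simplexes:
        for r in range(1, len(s) + 1):
            cx.update(frozenset(f) for f in combinations(s, r))
    return cx

def is_full(cx, m):
    tops = [s for s in cx if len(s) == m + 1]
    return all(any(s <= t for t in tops) for s in cx)

def is_pseudomanifold(cx, m):
    tops = [s for s in cx if len(s) == m + 1]
    if not is_full(cx, m):
        return False
    for s in cx:
        if len(s) == m:                       # an (m-1)-simplex
            if not 1 <= sum(s <= t for t in tops) <= 2:
                return False
    return True

# Two triangles glued along an edge form a 2-pseudomanifold.
K = closure([{1, 2, 3}, {2, 3, 4}])
print(is_pseudomanifold(K, 2))   # True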
The following combinatorial definition will play a key role later when we cast
the structure of a protocol in the topological framework.
Definition 1. Let L^m be a complex. A complex K^m is a weak divided image of
L^m if there exists a function ψ that assigns to each simplex of L^m a subcomplex
of K^m, such that:
1. for every τ ∈ K^m there exists a simplex σ ∈ L^m such that τ ∈ ψ(σ),
2. for every vertex σ ∈ L^m, ψ(σ) is a single vertex, and
3. for every σ_1, σ_2 ∈ L^m, ψ(σ_1) ∩ ψ(σ_2) = ψ(σ_1 ∩ σ_2), where we assume that ψ(∅) = ∅.
K^m is a divided image of L^m if it also satisfies the following condition:
4. for every σ ∈ L^m, ψ(σ) is a dim(σ)-pseudomanifold with bound(ψ(σ)) = ψ(bound(σ)). 6
We say that K^m is a divided image of L^m under ψ.
Intuitively, a divided image is obtained from L^m by replacing each simplex of
L^m with a pseudomanifold, making sure that they "fit together" (in the sense of
Condition 3). In addition, Condition 1 guarantees that every simplex of K^m is covered by ψ;
Condition 2 guarantees that ψ maps vertices of L^m to vertices of K^m; finally,
Condition 4 guarantees that ψ preserves the dimension and the boundary of
simplexes in L^m.
Fig. 1 shows an example of a divided image of a complex containing two
simplexes. In the figure, solid lines show the boundary of L² and its image
under ψ in K².
Consider a set σ^m and let M(σ^m) be the complex consisting of σ^m and all its
proper subsets; M(σ^m) is an m-pseudomanifold consisting of a single m-simplex
and all its faces. Of particular importance for us is the case where K^m is a
divided image of M(σ^m). In this case, ψ(σ^m) = K^m.
Remark. The concept of a divided image is reminiscent of the notion of acyclic
carrier 7 of [19], in that it associates subcomplexes of one complex to simplexes
of another. Munkres uses acyclic carriers to study subdivisions, a fundamental
concept of algebraic topology (cf. [19, 21]). However, divided images differ from
subdivisions, even if the requirement of connectivity is added. For example, a
2-dimensional torus with a triangle removed from its surface is a divided image
of a 2-simplex, since its boundary is a 1-dimensional triangle. However it is
5 In algebraic topology, pseudomanifolds are assumed to have additional properties,
which we do not require for our applications.
6 Notice that bound(oe) is a set of simplexes, and /(bound(oe)) is the complex which is
the union over these simplexes - of /(- ).
7 Not to be confused with the notion of carrier defined later.
Fig. 1. K² is a divided image of L² under ψ.
neither an acyclic carrier nor a subdivided simplex since it has "holes" (non-
trivial homology groups).
The next proposition states some simple properties of divided images; its
proof is left to the full paper.
Proposition 2. Let K^m be a divided image of L^m under ψ.
(i) For every σ_1, σ_2 ∈ L^m, if σ_1 ⊆ σ_2 then ψ(σ_1) ⊆ ψ(σ_2).
(ii) For every pair of j-simplexes σ_1^j, σ_2^j ∈ L^m, if σ_1^j ≠ σ_2^j and σ_1^j ∩ σ_2^j ≠ ∅, then
ψ(σ_1^j) ∩ ψ(σ_2^j) is a pseudomanifold of dimension strictly smaller than j.
(iii) For every i-simplex σ^i ∈ L^m, ψ(σ^i) is a divided image of M(σ^i) under ψ.
(iv) A simplex τ ∈ K^m is external if and only if τ ∈ ψ(σ^{m−1}) for some external simplex
σ^{m−1} of L^m.
The carrier of a simplex τ ∈ K^m, denoted carr(τ), is the simplex σ ∈ L^m of
smallest dimension such that τ ∈ ψ(σ). Intuitively, the carrier of a simplex τ is
the "smallest" simplex in L^m which is mapped to τ. By Definition 1(1), every
simplex τ ∈ K^m is in ψ(σ) for some σ ∈ L^m. By Proposition 2(ii), the carrier
is unique. Therefore, the carrier is well-defined.
Connectivity: For any j, 0 ≤ j ≤ m, the j-graph of K^m consists of one vertex
for every j-simplex of K^m, and there is an edge between two vertices if and only
if their intersection is a (j − 1)-simplex of K^m. K^m is j-connected if its j-graph
is connected; K^m is 0-connected if it consists of a single vertex.
Lemma 3. Let K^m be a divided image of σ^m under ψ. There exists a complex
K̃^m ⊆ K^m and a function ψ̃, a restriction of ψ to K̃^m, such that K̃^m is a divided
image of σ^m under ψ̃, and ψ̃(σ^i) is i-connected
for every i > 1 and every σ^i ⊆ σ^m.
Colorings: A complex K is colored by associating a value from some set of colors
with each of its vertices. A coloring c is proper if different vertices in the same
simplex have different colors. A simplicial map f is color preserving if
for every vertex v of K, c(f(v)) = c(v). Note that if a coloring is proper
and a simplicial map is color preserving, then for any simplex {v_0, ..., v_k} of K the
vertices f(v_0), ..., f(v_k) are different, i.e., f(σ) is of the same dimension as σ.
Let K^m be a divided image of L^m. A simplicial map χ : K^m → L^m is a
Sperner coloring if for every v ∈ K^m, χ(v) ∈ carr(v). Intuitively, χ "folds" K^m
into L^m with the requirement that each vertex of K^m goes to a vertex of its
carrier. The main combinatorial definition we use is:
Definition 4. A complex K^m is a (weak) chromatic divided image of L^m if it
is a (weak) divided image of L^m with a proper Sperner coloring χ.
Let K^m be a divided image of M(σ^m). The next well-known lemma says that
an odd number of m-simplexes of K^m must go to σ^m (and in particular, at least
one simplex). This lemma is used in Section 7; it follows from the Index Lemma
(Lemma 17), presented later.
Lemma 5 (Sperner Lemma). Consider a divided image K^m of M(σ^m) under
ψ, and a Sperner coloring χ : K^m → M(σ^m). There exists an odd number of
m-simplexes τ ∈ K^m such that χ(τ) = σ^m.
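In dimension one the lemma already has bite: a divided image of a 1-simplex is a path whose two endpoints are the images of the original vertices, and any Sperner coloring of the path contains an odd number of edges carrying both colors. A small sketch (ours, with hypothetical colorings) verifying this on random instances:

import random

def count_bicolored_edges(colors):
    """colors: a Sperner coloring of a path v_0, ..., v_t dividing a 1-simplex
    whose vertices are colored 0 and 1, so colors[0] == 0 and colors[-1] == 1.
    Returns the number of edges whose endpoints get both colors."""
    return sum(colors[i] != colors[i + 1] for i in range(len(colors) - 1))

for _ in range(1000):
    t = random.randint(1, 20)
    colors = [0] + [random.randint(0, 1) for _ in range(t - 1)] + [1]
    assert count_bicolored_edges(colors) % 2 == 1   # always odd (Lemma 5 for m = 1)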
4 Modeling Tasks and Protocols
In this section we model distributed tasks using combinatorial topology; this is
an adaptation of [17, 18] to our framework.
Tasks: Denote ids = {0, ..., n}. For some domain of values V, let P(V) be the
set of all pairs consisting of an id from ids and a value from V.
For a domain of inputs I, an input complex, I n , is a complex that includes
n-simplexes (i.e., subsets of n+1 elements) of P (I) and all their faces, such that
the vertices in an n-simplex have different id fields. For a domain of outputs O,
an output complex, O n , is defined similarly over O. That is, if (i; val) is a vertex
of I n then val denotes an input value for process p i , while if (i; val) is a vertex of
O n then val is an output value for process p i . Note that I n and O n are properly
colored by the id fields, and are full to dimension n. In addition, each complex
is colored (not necessarily properly) by the corresponding domain of values.
Using the combinatorial topology notions, a task is identified with a triple
I n is an input complex, O n is an output complex, and \Delta maps
each n-simplex of I n to a non-empty set of n-simplexes in O n . We sometimes
mention only \Delta when I n and O n are clear from the context. The simplexes in
\Delta(oe n ) are the admissible output simplexes for oe n . Intuitively, if oe n is an input
simplex and - n 2 \Delta(oe n ) is an admissible simplex, then - n is an admissible
output configuration when the system starts with input oe n .
We extend \Delta to simplexes of dimension smaller than n, i.e., for executions in
which n processes or less take steps, as follows. Recall that it must be possible
to complete the outputs of some processes in an execution to outputs for all
processes that are allowed for the inputs of the execution. Therefore, \Delta maps
an input simplex oe of dimension smaller than n to the faces of n-simplexes in
\Delta(oe n ) with the same dimension and ids, for all input simplexes oe n that contain
oe. Extended in this manner, \Delta(M (oe n )) is a subcomplex of O n . There is another
variant of wait-free solvability, which allows to explicitly define \Delta for simplexes
of dimension smaller than n. This can be captured in our model by adding as
part of the input a bit that tells the process whether to participate or not.
Non-participating processes are required to output some default value.
Protocol Complexes: We say that a view of a process is final if the process has
written an output. For an execution α, the set {(0, α|0), ..., (n, α|n)} is denoted
views(α). Given a protocol P, the protocol complex, P^n, is defined over the final
views reachable in executions of P, as follows. An n-simplex of final views is in P n
if and only if it is views(ff) for some execution ff of P. In addition, P n contains
all the faces of the n-simplexes. The protocol complex for an input n-simplex
oe, P n (oe), is the subcomplex of P n containing all n-simplexes corresponding
to executions of P where processes start with inputs oe, and all their faces.
Intuitively, only if there exists an execution ff with initial
values oe, such that the views of processes in - are the same as in ff. Note
however that ff is not necessarily unique.
The protocol complex, P n , is the union of the complexes P n (oe), over all input
n-simplexes oe. If a protocol is wait-free then P n (oe) is finite, since a process
terminates after a finite number of steps. Observe that the protocol complex
depends not only on the possible interleavings of steps (schedules), but also on
the transitions of processes and their local states. One can regard P n as colored
with four colors-an id, an input value, a view, and an output value. Note that
the ids coloring is proper.
The protocol implies a decision map δ_P, which specifies the
output value for each final view of a process. When P solves Δ it holds that if τ ∈ P^n,
then δ_P(τ) corresponds to an output simplex. Therefore, δ_P is simplicial and
preserves the ids coloring. Furthermore, for any input n-simplex σ, δ_P(P^n(σ))
is a complex.
Since the protocol depends only on the input values, if two input n-simplexes
oe, oe 0 , have the same input values, i.e., differ only by a permutation of the ids,
then P n (oe) can be obtained from P n (oe 0 ) by applying the same permutation
to the ids. Therefore, the decision map must be anonymous; i.e., ffi P (P n (oe))
determines ffi P (P n (oe 0 )). If the protocol has to depend on the ids, then they have
to be given as part of the inputs.
The above definitions imply:
A protocol P solves ⟨I^n, O^n, Δ⟩ if and only if δ_P(P^n(σ)) ⊆ Δ(M(σ)), for every
n-simplex σ ∈ I^n.
We say that such a map δ_P agrees with Δ.
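As a toy illustration of this formulation (ours, not from the paper), consider binary consensus for two processes p_0, p_1: vertices are (id, value) pairs, input simplexes pair arbitrary inputs, and Δ maps every input simplex to the output simplexes in which both processes decide a common input value. The task is of course not wait-free solvable; the sketch only shows the encoding of ⟨I^n, O^n, Δ⟩, with our own names.

from itertools import product

inputs = [frozenset({(0, a), (1, b)}) for a, b in product((0, 1), repeat=2)]
outputs = [frozenset({(0, d), (1, d)}) for d in (0, 1)]

def delta(sigma):
    """Admissible output simplexes: both processes decide some input value of sigma."""
    vals = {v for (_, v) in sigma}
    return [o for o in outputs if next(iter(o))[1] in vals]

for sigma in inputs:
    print(sorted(sigma), "->", [sorted(o) for o in delta(sigma)])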
Fig. 2. Executions α_1 and α_2 are indistinguishable to p_1 and p_2.
This is the topological interpretation of the operational definition of a protocol
solving a task (presented at the end of Section 2).
5 A Condition for Wait-Free Solvability
In this section we define immediate snapshot executions and prove that the
subcomplex they induce is a chromatic divided image of the input complex.
This implies a necessary condition for tasks which are solvable by a wait-free
protocol. This condition is also sufficient since immediate snapshot executions
can be emulated in any execution.
An immediate snapshot execution (in short, ISE) of a protocol is a sequence
of rounds, defined as follows. Round k is specified by a concurrency class (called
a block in [20]) of process ids, s k . The processes in s k are called active in round
k. In round k, first each active process performs a write operation (in increasing
order of ids), and then each active process reads all the registers, i.e., performs
n + 1 read operations (in increasing order of ids). We assume that the concurrency
class is always non-empty. It can be seen that, for a given protocol, an
immediate snapshot execution, α, is completely characterized by the sequence
of concurrency classes. Therefore, we can write α = s_1, s_2, ..., s_k, ....
Immediate snapshot executions are of interest because they capture the computational
power of the model. That is, a task \Delta is wait-free solvable if and only
if there exists a wait-free protocol which solves \Delta in immediate snapshot executions
(this is shown as part of the proof of Theorem 14 below). Although they are
very well-structured, immediate snapshot executions still contain some degree of
uncertainty, since a process does not know exactly which processes are active in
the last round. That is, if p_i is active in round k and observes some other
process p_j to be active (i.e., perform a write), p_i does not know whether p_j is
active in round k − 1 or in round k.
Consider for example, Fig. 2. Only p 0 distinguishes between executions ff 1
and have the same views in both executions and cannot distinguish
between them. However, as we prove below (in Proposition 8) this is the only
uncertainty processes have in immediate snapshot executions.
Denote the subcomplex of the protocol complex which contains all immediate
snapshot executions by E n . For an input simplex oe n 2 I n , E n (oe n ) is the
subcomplex of all immediate snapshot executions starting with oe n .
Fig. 3. The ISE complex, when each process takes at most one step.
We now show that if the protocol is wait-free and uses only read/write op-
erations, then the ISE complex is a divided image of the input complex. This is
done by defining a function / that assigns a subcomplex of E n to each simplex
of I n .
Fig. 3 contains an example of an immediate snapshot executions complex for
a single input simplex. This is the complex where each process takes at most one
step. Note that there are simplexes that correspond to the executions ff 1 and ff 2
from Fig. 2. Indeed, the vertices that correspond to p 1 and to p 2 are the same
in these simplexes, i.e., p 1 and p 2 have the same views.
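The complex of Fig. 3 can be generated mechanically: a one-shot immediate snapshot execution is just an ordered partition of the participating processes into concurrency classes, and the view of a process is the set of processes whose writes it reads. The following sketch (ours; process names 0, 1, 2 are illustrative) enumerates these executions and the resulting simplexes of views for three processes.

def ordered_partitions(items):
    """All sequences of disjoint nonempty blocks covering items (concurrency classes)."""
    items = list(items)
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for tail in ordered_partitions(rest):
        for i in range(len(tail)):              # put `first` into an existing block
            yield tail[:i] + [tail[i] | {first}] + tail[i + 1:]
        for i in range(len(tail) + 1):          # or into a new block of its own
            yield tail[:i] + [{first}] + tail[i + 1:]

def views(blocks):
    """In round k every active process writes, then reads everything written so far,
    so its view is the union of the blocks up to and including its own round."""
    seen, result = set(), {}
    for block in blocks:
        seen |= block
        for p in block:
            result[p] = frozenset(seen)
    return result

simplexes = {frozenset(views(b).items()) for b in ordered_partitions({0, 1, 2})}
print(len(simplexes), "distinct simplexes of views")   # 13, one per ordered partition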
First, we need a few simple definitions. For a simplex oe of I n , O n , or E n ,
let ids(oe) be the set of ids appearing in vertices of oe. For a simplex oe of I n or
inputs(oe) be the set of pairs of inputs with corresponding ids appearing
in vertices of oe. Finally, for a simplex oe of E n , let views(oe) be the set of views
appearing in vertices of oe and let observed(oe) be the set of ids of processes whose
operations appear in views(oe). 8 Intuitively, if p i is not in observed(oe), then the
views in oe are the same as in an execution in which p i does not take a step.
Notice that ids(- ) ' observed(- ), since a process always "observes itself."
We can now define ψ. For σ ∈ I^n, ψ(σ) is the complex containing all
simplexes τ ∈ E^n with observed(τ) ⊆ ids(σ) and inputs(τ) ⊆ inputs(σ), together with all their
faces. Notice that ψ(σ) is full to dimension
dim(σ). A fact we use later is:
Proposition 6. For any τ ∈ E^n and σ ∈ I^n, τ ∈ ψ(σ) if and only if ids(τ) ⊆ ids(σ) and inputs(τ) ⊆ inputs(σ).
Proof. If - 0 is a face of - , then ids(- 0
Thus, the definition of / implies that if - 2 /(oe)
then ids(-
8 Recall that these ids are not known by the processes', unless explicitly given in the
inputs. To make this definition concrete, a special part of the process' state captures
its identity. We defer the exact details to the full version.
ids(oe). Since the protocol is wait-free, there exists an execution in which all processes
in only processes in ids(oe), and processes
in ids(- ) have the same views as in - . Let - the simplex in E n that corresponds
to this execution. Note that
and is a face of -, the claim
follows. ut
We first show that the ISE complex is a weak divided image of the input
complex. In fact, this property does not depend on the protocol being wait-free
or on the type of memory operations used, i.e., that the protocol uses only atomic
read/write operations.
Lemma 7. E n is a weak chromatic divided image of I n under /.
Proof. Clearly, the process ids are a proper Sperner coloring of E n . We proceed
to prove that the three conditions of weak divided images (Definition 1) hold.
Condition (1): Consider a simplex be such that - n . Then
there is a simplex oe n 2 I n with ids(- n
is a face of - n , - 2 /(oe n ).
Condition (2) follows since the protocol is deterministic.
Condition (3) follows from Proposition
happens
if and only if -
We say that process p j is silent in an execution ff, if there exists a round
for every round r - k.
Intuitively, this means that no other process ever sees a step by p j . If p j is not
silent in ff, then it is seen in some round. Formally, a process p j is seen in round
k, and there exists some process p 0
. The last seen round of p j is the largest round k in which p j is seen.
These definitions imply:
Proposition8. Consider a finite immediate snapshot execution ff. If p j is not
silent in ff, then k is the last seen round of p j in ff if and only if (a) s for
every round r ? k, (b) s k 6= fp j g, and (c) either (i) g.
As a consequence, we have the next lemma.
Lemma 9. Consider an immediate snapshot execution complex E n . Let - i
1 be an
i-simplex of E n corresponding to an execution ff, and p i 2 ids(- i).
(i) If p i is not silent in ff, then there exists - i
another i-simplex of E n , that
differs only in p i 's view, corresponding to ff 0 .
(ii) If there exists - i
another i-simplex of E n , that differs only in p i 's view,
corresponding to ff 0 , then p j is not silent in ff; ff 0 . If k is the last seen round
of p j in ff, then, without loss of generality, p j is in the kth concurrency class
of - i
1 and the kth concurrency class of - i
2 is fp j g.
Lemma 10. For every simplex σ^i ∈ I^n, bound(ψ(σ^i)) = ψ(bound(σ^i)).
Proof. Let
It follows from the definition of /
that - To show that -
notice that observed(- be a process id
in ids(oe i sees a step by p j , and in - i ,
does not see a step by any process not in ids(oe i ), it follows that p j 's view is
determined (because the protocol is deterministic). Namely, - i\Gamma1 is contained in
a single i-simplex - i , and hence - is a face of -
by the definition of /, - 2 bound(/(oe i )).
The other direction of the proof is similar. Since - 2 bound(/(oe i
that - is a face of some - This implies that - i\Gamma1 is a face of a
single is not in observed(-
(by Lemma 9(i)). Hence, observed(- It follows that -
This implies that - 2 /(bound(oe i )). ut
Intuitively, the next lemma implies that once we fix the views of all processes
but one, the remaining process may have only one of two views, which correspond
to the two options in Proposition 8(c). This shows that the uncertainty about
another process is restricted to its last seen round.
Lemma 11. For every simplex oe i 2 I n , /(oe i ) is an i-pseudomanifold.
Proof. As noted before, /(oe i ) is full to dimension i. We show that any simplex
contained in at most two i-simplexes. Let - i 2 /(oe i ) be such
that - i\Gamma1 is a face of - i . Since - i\Gamma1 and - i are properly colored by the ids, there
exists some id p j , such that p j appears in - i but not in - i\Gamma1 . In fact, any i-
simplex of /(oe i includes p j . Let ff be the prefix of an execution
with steps by processes in ids(oe i ), corresponding to - i . We can take such a prefix
because There are two cases:
Case 1: p j is silent in ff. Then observed(- does not see
an id not in ids(oe i ), its view is determined. Hence, - i is unique.
Case 2: p j is not silent in ff. Let k be the last seen round of p j in ff. Lemma 9(ii)
implies that that there are only two possible views for compatible with the
views in - g. ut
By Lemma 7, E^n is a weak chromatic divided image of I^n. Lemma 10 and Lemma 11
imply Condition (4) of Definition 1. Therefore, we have:
Theorem 12. E^n is a chromatic divided image of I^n under ψ.
This implies the following necessary condition for wait-free solvability:
\Deltai be a task. If there exists a wait-free protocol
which solves this task then there exists a chromatic divided image E n of I n and
a color-preserving (on ids), anonymous simplicial map ffi from E n to O n that
agrees with \Delta.
We now restrict our attention to full-information protocols, in which a process
writes its whole state in every write to its shared register. The complex induced
by immediate snapshot executions of the full-information protocol for some input
complex I n is called the full divided image of I n . We have the following necessary
and sufficient condition for wait-free solvability.
Theorem 14. Let hI n ; O n ; \Deltai be a task. There exists a wait-free protocol which
solves this task if and only if there exists a full divided image E^n of I^n and
a color-preserving (on ids), anonymous simplicial map ffi from E n to O n that
agrees with \Delta.
Sketch of proof. Assume there exists a protocol P which solves \Delta. Without loss
of generality, we may assume that in P each process operates by writing and
then reading the registers R solves \Delta, it must solve \Delta in immediate
snapshot executions. By Theorem 12, the ISE complex, E n , is a chromatic
divided image of I n . Since the protocol can be simulated by a full-information
protocol, the corresponding full divided image is also a chromatic divided image
of I n . Clearly, ffi P is a color-preserving (on ids), anonymous simplicial map from
to O n that agrees with \Delta.
Assume there exists a full divided image E^n of I^n and a color-preserving
(on ids), anonymous simplicial map ffi from E n to O n that agrees with \Delta. By
using a protocol for the participating set problem ([5]), the immediate snapshot
executions can be simulated in a full-information manner. Using ffi as the output
rule of the protocol, we get the "only if" direction of the theorem. ut
Remark. The above theorem ignores the issue of computability. Clearly, the sufficient
condition requires that ffi is computable; furthermore, if a task is solvable
then it implies a way to compute ffi . Therefore, we can add the requirement that
is computable to the necessary and sufficient condition for wait-free solvability.
The previous theorem provides a characterization of wait-free solvable tasks
which depends only on the topological properties of hI To see if a task
is solvable, when the input complex is finite, we produce all E-divided images
of I n and check if a simplicial map ffi as required exists. Note that if we are
interested only in protocols that are bounded wait-free, i.e., where the protocol
has to hold within a predetermined number of steps N , then producing all E-
divided images of the input complex (which is finite) is recursive.
Orientability: We now show that the ISE complex, E n , is an orientable chromatic
divided image. This is used to prove that it induces an algebraic span [15]. We
leave the proof that an orientable chromatic divided image induces an algebraic
span to the full paper, since obviously, it requires the definition of algebraic span,
an algebraic concept of a different flavor from the rest of this paper.
be an m-pseudomanifold. An orientation of a simplex is an equivalence
class of orderings of its vertices, consisting of one particular ordering and
all even permutations of it. If the vertices are colored with ids, we could consider
the positive orientation to be the one in which the vertices are ordered
Fig. 4. An oriented 2-pseudomanifold, with a coloring (in brackets).
with the ids from small to large, and the negative to be the one where the two
vertices with smallest ids are exchanged (each orientation together with all its
even permutations). Denote by oe (i) the face of oe m in which the vertex with id
i is removed; e.g., oe (1) is the face with ids f0; mg. An orientation of an
m-simplex induces an orientation on each of its faces, oe (i) , according to the sign
of (\Gamma1) i . For example, if oe 2 is oriented hv then the induced orientations
are
K m is orientable if there exists an orientation for each of its m-simplexes
such that an m \Gamma 1-simplex contained in two m-simplexes gets opposite induced
orientations. K m together with such an orientation is an oriented pseudoman-
ifold. (See an example in Fig. 4 of a simple oriented 2-pseudomanifold and the
induced orientations.)
In the sequel, we sometimes use a combinatorial notion of orientability. In
the full paper, we prove that the previous (usual) definition of orientability is
equivalent to the combinatorial definition, for chromatic pseudomanifolds.
Lemma 15. A chromatic pseudomanifold K^m is orientable if and only if its
m-simplexes can be partitioned into two disjoint classes, such that if two m-simplexes
share an (m − 1)-face then they belong to different classes.
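Lemma 15 turns orientability into a graph 2-coloring question: build the graph whose nodes are the m-simplexes, with an edge whenever two of them share an (m − 1)-face, and ask whether it admits a proper 2-coloring, i.e., whether it is bipartite. A small sketch of this check (ours, not the paper's):

from itertools import combinations

def is_orientable(top_simplexes, m):
    """Check the condition of Lemma 15: the m-simplexes can be split into two
    classes so that any two sharing an (m-1)-face fall into different classes.
    This is 2-colorability (bipartiteness) of the adjacency graph."""
    tops = [frozenset(s) for s in top_simplexes]
    adj = {i: [] for i in range(len(tops))}
    for i, j in combinations(range(len(tops)), 2):
        if len(tops[i] & tops[j]) == m:
            adj[i].append(j)
            adj[j].append(i)
    color = {}
    for start in range(len(tops)):
        if start in color:
            continue
        color[start] = 0
        stack = [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    stack.append(v)
                elif color[v] == color[u]:
                    return False
    return True

# Two triangles glued along an edge: orientable.
print(is_orientable([{1, 2, 3}, {2, 3, 4}], 2))   # True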
We say that a chromatic divided image of M (oe m ) under /, K m , is orientable
if, for every oe 2 M (oe m ), /(oe) is orientable.
Theorem16. Let E n be a chromatic divided image of M (oe n ) under /, that
corresponds to the ISE complex starting with input oe n , in which any processor
takes the same number of steps in every execution. Then E n is orientable.
Proof. Let oe i be a face of oe m . We explicitly partition the i-simplexes of /(oe i )
into two disjoint classes, positive and negative.
Let the length of an immediate snapshot execution be the number of concurrency
classes in it. An i-simplex - 2 /(oe i ) is in positive if the length of the
immediate snapshot execution corresponding to - is even; otherwise, it is in neg-
ative. Consider two i-simplexes, τ_1^i and τ_2^i, that share an (i − 1)-face, and let p_j
be the processor whose view is different. By Lemma 9, without loss of generality,
p_j is in the kth concurrency class of τ_1^i and the kth concurrency class of τ_2^i is
{p_j}, where k is the last seen round of p_j in τ_1^i. Furthermore, since the views of
all other processors are exactly the same, it follows that the lengths of the corresponding
executions differ exactly by one. Hence, the corresponding simplexes
are in different classes, i.e., have different orientations.
6 The Number of Monochromatic Simplexes
In this section we prove a combinatorial lemma about the number of monochromatic
simplexes in any binary coloring of an orientable divided image; this lemma
is used in the next section to show a lower bound on renaming.
Let K^m be an orientable, chromatic divided image of σ^m under ψ. Fix an
orientation of K^m, and an induced orientation on its boundary. K^m is symmetric
if, for any two i-faces of σ^m, σ_1^i and σ_2^i, ψ(σ_1^i) and ψ(σ_2^i) are isomorphic under a
one-to-one simplicial map i that is order preserving on the ids: if v and w belong
to the same simplex and id(v) < id(w), then id(i(v)) < id(i(w)). A binary
coloring, b, of K^m is symmetric if b(i(v)) = b(v) for every vertex v.
This definition is motivated by the notion of comparison-based protocols for
renaming, presented in the next section.
Let #mono(K^m) be the number of monochromatic m-simplexes of K^m,
counted by orientation, i.e., an m-simplex is counted as +1 if it is positively
oriented, otherwise it is counted as −1. For example, if K^m consists of just two
m-simplexes, both monochromatic, then the count would be 0, since they would
have opposite orientations, and hence one would count +1 and the other −1.
The main theorem of this section states that, if K^m is a symmetric, oriented
chromatic divided image of σ^m under ψ, with a symmetric binary coloring b,
then #mono(K^m) ≠ 0. The proof of this theorem relies on the Index Lemma-a
classical result of combinatorial topology, generalizing Sperner's Lemma (cf. [12,
p. 201]).
To state and prove the Index Lemma, we need the following definitions. Fix
a coloring c of K m with mg. A k-simplex of K m is complete under c,
if it is colored with k. The content, C, of c is the number of complete
m-simplexes, counted by orientation. That is, a complete simplex - m is counted
+1, if the order of the vertices given by the colors agrees with the orientation of
counts +1 if the order given by the colors belongs
to the equivalence class of orderings of the orientation, and else it counts \Gamma1.
For example, the 2-simplex - 1 in Fig. 4 is ordered hv and the colors
are under this order are h0; 1; 2i, hence, it would count +1. On the other hand,
the 2-simplex - 2 in Fig. 4 is ordered and the colors are under this
order are ⟨1, 0, 2⟩, hence it would count −1. The index, I, of c is the number of
complete (m − 1)-simplexes on the boundary of K^m, also counted by orientation
(the orientation induced by the unique m-simplex that contains it).
Lemma 17 (Index Lemma). I = C.
Proof. Let S be the number of complete (m − 1)-simplexes of K^m, counted by orientation,
where the (m − 1)-simplexes in each m-simplex are considered
separately, and counted as +1 or −1 by their induced orientations. We argue
that I = S and C = S.
To prove that I = S, consider the following cases. If an (m − 1)-face is internal,
then it contributes 0 to S, since the contributions of the two m-simplexes
containing it cancel each other. Obviously, an internal (m − 1)-face contributes
0 to I. An external (m − 1)-face in the boundary of K^m is counted the same,
+1 or −1 by orientation, in both S and I. Therefore, I = S.
To prove that C = S, consider an m-simplex τ^m, and look at the following
cases. If τ^m contains two (m − 1)-faces which are completely colored, then τ^m is
not completely colored and contributes 0 to C. Note that τ^m contributes 0 also
to S, since the contributions of the two faces cancel each other. If τ^m contains
exactly one (m − 1)-face which is completely colored (with 0, ..., m − 1), then τ^m
must be completely colored and contributes +1 or −1, by orientation, to C
as well as to S. If τ^m does not contain any (m − 1)-face which is completely
colored, then τ^m is not completely colored and therefore it contributes 0 to C
as well as to S. Finally, note that τ^m cannot contain more than two (m − 1)-faces
which are completely colored.
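The bookkeeping "counted by orientation" amounts to a permutation sign. The helper below (ours; vertex names are illustrative) computes the ±1 contribution of a simplex whose orientation is given as an ordering of its vertices, under a coloring c.

def perm_sign(seq):
    """Sign of the permutation given by a sequence of distinct integers."""
    sign = 1
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                sign = -sign
    return sign

def orientation_count(oriented_simplex, c):
    """+1 / -1 if the simplex is complete (its vertices get all colors 0..m)
    and the colors, read along the orientation, form an even / odd permutation;
    0 if the simplex is not completely colored."""
    colors = [c[v] for v in oriented_simplex]
    if sorted(colors) != list(range(len(colors))):
        return 0
    return perm_sign(colors)

# A 2-simplex oriented <a, b, c> with colors 0,1,2 counts +1; swapping the
# colors of a and b makes it count -1.
print(orientation_count(("a", "b", "c"), {"a": 0, "b": 1, "c": 2}))   # +1
print(orientation_count(("a", "b", "c"), {"a": 1, "b": 0, "c": 2}))   # -1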
Theorem 18 (Binary Coloring Theorem). Let K^m be a symmetric, oriented
chromatic divided image of σ^m under ψ, with a symmetric binary coloring b. Then #mono(K^m) ≠ 0.
Proof. Let ρ be the simplicial map from σ^m to itself that maps the vertex v
whose id is i to the vertex whose id is (i + 1) mod (m + 1) (that is, the mapping
rotates the ids). In the rest of the proof, we assume that sub-indices are
taken modulo m + 1.
Define a coloring c of K^m by c(v) = (id(v) + b(v)) mod (m + 1), for every v.
Notice that an m-simplex, τ^m, is completely colored by c if and only if τ^m is
monochromatic under b. Moreover, for every v,
Let C and I be the content and index of K m under c. Clearly,
I 6= 0. The proof is by induction on m.
I 6= 0.
For the induction step, we consider bound(K m ), and "squeeze" it, by using
contractions. A contraction of bound(K m ) is obtained by identifying one of its
vertices, v 0 , with another vertex, v, with the same color, and deleting any simplex
containing both v and v 0 .
Consider an internal (m \Gamma 2)-simplex, - m\Gamma2 2 bound(K m ), which is contained
in two Its link vertices are v 1 , which is the vertex of
- 1 not in - m\Gamma2 , and v 2 , which is the vertex of - 2 not in - m\Gamma2 . A binary coloring
is irreducible if the link vertices of any internal (m \Gamma 2)-simplex simplex of /(oe (i) )
have different binary colors.
The first stage of the proof applies a sequence of specific contractions to
its coloring is irreducible, while preserving all other
properties.
The contractions we apply are symmetric contractions, in which we choose
an internal (m \Gamma 2)-simplex, - m\Gamma2 2 /(oe (m) ), to which a contraction can be
applied; that is, such that its two link vertices have the same binary coloring.
We contract - m\Gamma2 and simplexes symmetric to it in /(oe (i) ), for all i. (This is a
sequence of m+ 1 contractions.) Notice that the simplexes which are symmetric
to - m\Gamma2 are also internal and their link vertices have the same binary coloring.
A boundary is proper symmetric if it is the boundary of a symmetric, oriented
chromatic divided image of oe m under /, with a symmetric binary coloring b. In
the next claim we show that a symmetric contraction preserves all properties of
a proper symmetric boundary.
Claim 19. Assume we apply a symmetric contraction to bound(K^m), and get a
complex bound 0 . Then bound 0 is a non-empty, proper symmetric boundary under
Furthermore, I(bound 0
Proof. Given note that we have
that Therefore, bound 0 is chromatic. Also, it is easy to see that
the orientation on bound 0 is still well defined: two (m \Gamma 1)-simplexes that did
not have an (m \Gamma 2)-face in common before the contraction will have it after
the contraction, only if they differ in exactly one vertex, in addition to v 1 and
Thus, two such simplexes have opposite orientations. By the definition of
symmetric contraction, bound 0 remains symmetric.
By induction hypothesis of the theorem, #mono(/(oe (i) )) 6= 0, for every i.
Since a contraction removes simplexes with opposite orientations and the same
binary colorings, #mono(/(oe (i) for every i, and
This implies that bound 0 is non-empty. ut
By Claim 19, for the rest of the proof, we can assume that ψ(bound(σ^m)) = bound(K^m) is a
non-empty, proper symmetric boundary with an irreducible
binary coloring, and that the contractions did not change I.
Claim 20. All complete (m − 1)-simplexes on the boundary of K^m are counted
with the same sign by I.
Proof. We first argue that every complete (m \Gamma 1)-simplex in /(oe (i) ) is counted
with the same sign by I, for any i. To see this, assume, without loss of generality,
that consider an colored with
(the first component of a vertex is the
id and the second is its binary color).
Consider a path to any other (m \Gamma 1)-simplex - 2 colored with the same ids and
colors; such a path must exist since, by Lemma 3, we can assume that /(oe (i) ) is
1)-connected. Notice that the colors assigned by c are the same in - 1 and
will be counted by I. It remains to show that - 1 and
have the same orientation and hence are counted with the same sign by I.
Note that this path consists of a sequence of (m \Gamma 1)-simplexes, each sharing
an (m\Gamma2)-face with the previous simplex, and differing from the previous simplex
in exactly one binary color. Thus the path corresponds to a sequence of binary
vectors, starting with the all 0's vector and ending with the all 0's vector, and
each vector differing from the previous vector in exactly one binary color. That
is, the path corresponds to a cycle in a hypercube graph. Since the hypercube
graph is bipartite, the length of any cycle in it is even; therefore, the length of
the path is even. Clearly, since the complex is oriented, consecutive simplexes
on the path have different orientations. Since the length of the path is even, - 1
have the same orientation. Hence, - 1 and - 2 are counted with the same
sign by I.
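The appeal to bipartiteness can be checked mechanically on small instances. The following sketch (a sanity check only, not part of the proof) 2-colors the vertices of the m-dimensional hypercube graph by the parity of their coordinate sum and verifies that every edge joins opposite colors, which certifies that the graph is bipartite and hence that all of its cycles have even length.

from itertools import product

def hypercube_is_bipartite(m):
    """2-color the 0/1 vectors of length m by parity of their coordinate sum and
    check that every edge (vectors differing in one coordinate) joins opposite colors."""
    for v in product((0, 1), repeat=m):
        for i in range(m):
            w = v[:i] + (1 - v[i],) + v[i + 1:]
            if sum(v) % 2 == sum(w) % 2:
                return False
    return True

assert all(hypercube_is_bipartite(m) for m in range(1, 8))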
Next, we show that complete (m \Gamma 1)-simplexes in different /(oe (i) )'s are also
counted with the same sign by I. Again, without loss of generality, assume that
counted by I. Note that the c
coloring of - 1 is f(0; 0); (1; 1)g.
We now show that any complete (m \Gamma 1)-simplex - 3 2 /(oe (i) ) will be counted
with the same sign by I. Without loss of generality, assume
- 3 is complete, with id's mg. Thus, the binary color of the vertex
with process id m must be 1, in order to get the color 0 under c. This implies
that the c coloring of - 3 is f(1; 1); (2; its binary coloring
is 1)g.
Consider the simplex - 2 2 /(oe (0) ), which is the image of - 1 under the symmetry
map, ae. That is, 0)g. Consider a path in /(oe (0) )
and - 3
. Since the binary coloring vector of - 3
differs from the binary
coloring vector of - 2
in exactly one position, the length of this path must be odd.
Therefore,
and - 3
must have different orientations.
The c coloring of - 3 , f(1; 1); (2; rotated w.r.t. its ids, and
hence the orderings of - 2 and - 3 agree (on the sign of the permutation) if and
only if m is odd. E.g., if
0)g. Finally, the orientation of - 1 is (\Gamma1) m times the orientation
of - 2 , since they are symmetric simplexes in /(oe (m) ) and /(oe (0) ). That
is, the orientations of - 1 and - 2 agree when m is even, and disagree otherwise.
Therefore, the orientations of - 1 and - 3 agree, and they are counted with the
same sign by I. ut
Since bound is non-empty and contains at least one simplex, Claim 20 implies
I 6= 0, which proves the theorem. ut
7 Applications
In this section, we apply the condition for wait-free solvability presented earlier
(Corollary 13) to derive two lower bounds, for renaming and for k-set consensus.
The first lower bound also relies on Theorem 18, and therefore, on the fact
that the chromatic divided image induced by immediate snapshot executions
is orientable. In the full version of the paper we also derive another necessary
condition, based on connectivity.
7.1 Renaming
In the renaming task ([2]), processes start with an input value (original name)
from a large domain and are required to decide on distinct output values (new
names) from a domain which should be as small as possible. Clearly, the task is
trivial if processes can access their id; in this case, process p i decides on i, which
yields the smallest possible domain. To avoid trivial solutions, it is required
that the processes and the protocol are anonymous [2]. That is, process p i with
original name x executes the same protocol as process p j with original name x.
Captured in our combinatorial topology language, the M-renaming task is
the triple ⟨D^n, M^n, Δ⟩, where D^n contains all subsets of some domain D (of original
names) with different values, M^n contains all subsets of [0..M] (of new names)
with different values, and Δ maps each σ^n ∈ D^n to all n-simplexes of M^n. We
use Theorem 12 and Theorem 18 to prove that there is no wait-free anonymous
protocol for the M-renaming task, if M ≤ 2n − 1. The bound is tight, since there
exists an anonymous wait-free protocol ([2]) for the 2n-renaming problem.
Theorem 21. If M < 2n, then there is no anonymous wait-free protocol that
solves the M-renaming task.
Proof. Assume, by way of contradiction, that P is a wait-free protocol for the
M-renaming task, M ≤ 2n − 1. Without loss of generality, we assume that every
process executes the same number of steps. Also, P is comparison-based, i.e.,
the protocol produces the same outputs on inputs which are order-equivalent.
(See Herlihy [13], who attributes this observation to Eli Gafni).
Assume that the original names are only between 0 and 2n. By Corollary 13,
there exists a chromatic full divided image S of the input complex D^n, with
corresponding map ψ; let δ_P be the decision map implied by P. By Theorem 16,
S is orientable. Since the protocol is comparison-based and anonymous, it follows
that for any two i-simplexes, σ^i_1 and σ^i_2 of D^n, δ_P maps ψ(σ^i_1) and
ψ(σ^i_2) to simplexes that have the same output values (perhaps with different
process ids).
Let δ′ be the binary coloring which is the parity of the new names assigned by
δ_P. Then the assumption of Theorem 18 is satisfied for S(σ^n), and therefore
at least one simplex of S(σ^n) is monochromatic under δ′.
On the other hand, note that the domain [0, 2n − 1] does not include n + 1
different odd names; similarly, the domain [0, 2n − 1] does not include n + 1
different even names. This implies that δ′ cannot color any simplex of S with all
zeroes or with all ones; i.e., no simplex of S is monochromatic. A contradiction.
ut
7.2 k-Set Consensus
Intuitively, in the k-set consensus task ([7]), processes start with input values
from some domain and are required to produce at most k different output values.
To assure non-triviality, we require all output values to be input values of some
processes.
Captured in our combinatorial topology language, the k-set consensus task
is the triple ⟨D^n, D^n, Δ⟩, where D^n is P(D), for some domain D, and Δ maps each
σ^n ∈ D^n to the subset of n-simplexes in D^n that contain at most k different
values from the values in σ^n.
and Sperner's Lemma to prove that any wait-free protocol for this problem
must have at least one execution in which k +1 different values are output. This
implies:
Theorem 22. If k ≤ n then there does not exist a wait-free protocol that solves
the k-set consensus task.
This bound is tight, by the protocol of [7].
This paper presents a study of wait-free solvability based on combinatorial topol-
ogy. Informally, we have defined the notion of a chromatic divided image, and
proved that a necessary condition for wait-free solvability is the existence of a
simplicial chromatic mapping from a divided image of the inputs to the outputs
that agrees with the problem specification. We were able to use theorems about
combinatorial properties of divided images to derive tight lower bounds for renaming
and k-set consensus. Our results do not use homology groups, whose
computation may be complicated. We also derive a new necessary and sufficient
condition, based on a specific, well structured chromatic divided image.
Many questions remain open. First, it is of interest to find other applications of
the necessary and sufficient condition presented here; in particular, can we derive
interesting protocols from the sufficient condition? Second, there are several
directions to extend our framework, e.g., to allow fewer than n failures (as was
done for one failure in [3]), to handle other primitive objects besides read/write
registers (cf. [14, 6]), and to incorporate on-going tasks.
Acknowledgments
We would like to thank Javier Bracho, Eli Gafni, Maurice
Herlihy, Nir Shavit and Mark Tuttle for comments on the paper and very useful
discussions.
--R
"Are Wait-Free Algorithms Fast?"
"Renaming in an asynchronous environment,"
"A combinatorial characterization of the distributed 1.solvable tasks,"
"Generalized FLP impossibility result for t-resilient asynchronous computations,"
"Immediate atomic snapshots and fast renaming,"
"The implication of the Borowsky-Gafni simulation on the set consensus hierarchy,"
"More Choices Allow More Faults: Set Consensus Problems in Totally Asynchronous Systems,"
"A tight lower bound for k-set agreement,"
"Reaching Approximate Agreement in the Presence of Faults,"
"Impossibility of distributed commit with one faulty process,"
"3-processor tasks are undecidable,"
A Combinatorial Introduction to Topology
A. Tutorial on
"Set Consensus Using Arbitrary Objects,"
"Algebraic Spans,"
"On the Decidability of Distributed Decision Tasks,"
"The asynchronous computability theorem for t- resilient tasks,"
"A simple constructive computability theorem for wait-free computation,"
Elements of
"Wait-free k-set agreement is impossible: The topology of public knowledge,"
--TR
--CTR
Faith Fich , Eric Ruppert, Hundreds of impossibility results for distributed computing, Distributed Computing, v.16 n.2-3, p.121-163, September | atomic read/write registers;set consensus;consensus;combinatorial topology;renaming;distributed systems;shared memory systems;wait-free solvable tasks |
586915 | Randomness, Computability, and Density. | We study effectively given positive reals (more specifically, computably enumerable reals) under a measure of relative randomness introduced by Solovay [manuscript, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, 1975] and studied by Calude, Hertling, Khoussainov, and Wang [Theoret. Comput. Sci., 255 (2001), pp. 125--149], Calude [Theoret. Comput. Sci., 271 (2002), pp. 3--14], Kucera and Slaman [SIAM J. Comput., 31 (2002), pp. 199--211], and Downey, Hirschfeldt, and LaForte [Mathematical Foundations of Computer Science 2001, Springer-Verlag, Berlin, 2001, pp. 316--327], among others. This measure is called domination or Solovay reducibility and is defined by saying that $\alpha$ dominates $\beta$ if there are a constant c and a partial computable function $\varphi$ such that for all positive rationals $q<\alpha$ we have $\varphi(q)\!\downarrow<\beta$ and $\beta- \varphi(q) \leqslant c(\alpha- q)$. The intuition is that an approximating sequence for $\alpha$ generates one for $\beta$ whose rate of convergence is not much slower than that of the original sequence. It is not hard to show that if $\alpha$ dominates $\beta$, then the initial segment complexity of $\alpha$ is at least that of $\beta$.In this paper we are concerned with structural properties of the degree structure generated by Solovay reducibility. We answer a natural question in this area of investigation by proving the density of the Solovay degrees. We also provide a new characterization of the random computably enumerable reals in terms of splittings in the Solovay degrees. Specifically, we show that the Solovay degrees of computably enumerable reals are dense, that any incomplete Solovay degree splits over any lesser degree, and that the join of any two incomplete Solovay degrees is incomplete, so that the complete Solovay degree does not split at all. The methodology is of some technical interest, since it includes a priority argument in which the injuries are themselves controlled by randomness considerations. | Introduction
In this paper we are concerned with effectively generated reals in the interval (0, 1] and
their relative randomness. In what follows, real and rational will mean positive real
and positive rational, respectively. It will be convenient to work modulo 1, that is,
identifying reals that differ by an integer, and we do this below without
further comment.
Our basic objects are reals that are limits of computable increasing sequences of rationals.
We call such reals computably enumerable (c.e.), though they have also been called
recursively enumerable, left computable (by Ambos-Spies, Weihrauch, and Zheng [1]),
and, together with the limits of computable decreasing sequences of rationals, semicomputable.
If, in addition to the existence of a computable increasing sequence q_0, q_1, ... of
rationals with limit α, there is a total computable function f such that α − q_{f(n)} < 2^{−n}
for all n ∈ ω, then α is called computable. These and related concepts have been
widely studied. In addition to the papers and books mentioned elsewhere in this introduction,
we may cite, among others, early work of Rice [24], Lachlan [19], Soare [27],
and Cetin [8], and more recent papers by Ko [16, 17], Calude, Coles, Hertling, and
Khoussainov [5], Ho [15], and Downey and LaForte [14].
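As a small illustration of these definitions (ours, not from the paper), any effective enumeration of a set W of natural numbers yields a computable increasing sequence of rationals converging to the c.e. real Σ_{n∈W} 2^{−(n+1)}; whether that real is computable depends on whether the tail of the enumeration can be bounded effectively. The Python sketch below makes the approximating sequence explicit; the enumerator passed in is a stand-in for an arbitrary computable enumeration.

from fractions import Fraction

def ce_approximations(enumerate_W, stages):
    """Given enumerate_W(s), the finite set of numbers enumerated into W by
    stage s, yield an increasing sequence of rationals converging to the
    c.e. real sum_{n in W} 2^-(n+1)."""
    for s in range(stages):
        yield sum(Fraction(1, 2 ** (n + 1)) for n in enumerate_W(s))

# Toy enumeration: W receives the even numbers, one per stage.
approx = list(ce_approximations(lambda s: {2 * i for i in range(s)}, 10))
assert all(a <= b for a, b in zip(approx, approx[1:]))  # monotone approximations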
A real α is random if its dyadic expansion forms a random infinite binary sequence (in
the sense of, for instance, Martin-Löf [23]). Chaitin's number Ω, the halting probability
of a universal self-delimiting computer, is a standard random c.e. real. (We will define
these concepts more formally below.)
Many authors have studied Ω and its properties, notably Chaitin [9, 10, 11] and
Martin-Löf [23]. In the very long and widely circulated manuscript [30] (a fragment of
which appeared in [31]), Solovay carefully investigated relationships between Martin-Löf–
Chaitin prefix-free complexity, Kolmogorov complexity, and properties of random languages
and reals. See Chaitin [9] for an account of some of the results in this manuscript.
Solovay discovered that several important properties of Ω (whose definition is model-dependent)
are shared by another class of reals he called Ω-like, whose definition is
model-independent. To define this class, he introduced the following reducibility relation
among c.e. reals, called domination or Solovay reducibility.
1.1. Definition. Let α and β be c.e. reals. We say that α dominates β, and write
β ≤_S α, if there are a constant c and a partial computable function φ such
that for each rational q < α we have φ(q)↓ < β and β − φ(q) ≤ c(α − q).
We write α ≡_S β if α ≤_S β and β ≤_S α.
The notation ≤_dom has sometimes been used instead of ≤_S.
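As a toy illustration of the definition (our own example, not from the text), if β = α/2 then β ≤_S α is witnessed by φ(q) = q/2 and c = 1, since for every rational q < α we have φ(q) < β and β − φ(q) = (α − q)/2 ≤ α − q. The sketch below checks the two defining conditions on a range of rational approximations to a fixed stand-in value of α.

from fractions import Fraction

def check_domination(alpha, beta, phi, c, test_points):
    """Verify the Solovay condition: for each rational q < alpha,
    phi(q) < beta and beta - phi(q) <= c * (alpha - q)."""
    for q in test_points:
        if q < alpha:
            assert phi(q) < beta
            assert beta - phi(q) <= c * (alpha - q)

alpha = Fraction(2, 3)                 # stand-in for the limit of an approximation
beta = alpha / 2                       # a real dominated by alpha
check_domination(alpha, beta, phi=lambda q: q / 2, c=1,
                 test_points=[Fraction(k, 100) for k in range(100)])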
Solovay reducibility is naturally associated with randomness because of the following
fact, whose proof we sketch for completeness. Let H(α) denote the prefix-free complexity
of α and K(α) its standard Kolmogorov complexity. (Most of the statements below also
hold with K(·) in place of H(·). For the definitions and basic properties of H(·) and
K(·), see Calude [3] and Li and Vitányi [22]. Among the many works dealing with
these and related topics, and in addition to those mentioned elsewhere in this paper,
we may cite Solomonoff [28, 29], Kolmogorov [18], Levin [20, 21], Schnorr [25], and the
expository article Calude and Chaitin [4].) We identify a real α ∈ (0, 1] with the infinite
binary string α such that α = 0.α. (The fact that certain reals have two different dyadic
expansions need not concern us here, since all such reals are rational.)
1.2. Theorem (Solovay [30]). Let β ≤_S α be c.e. reals. There is a constant O(1)
such that H(β↾n) ≤ H(α↾n) + O(1) for all n ∈ ω.
Proof sketch. We first sketch the proof of the following lemma, implicit in [30] and noted
by Calude, Hertling, Khoussainov, and Wang [6].
1.3. Lemma. Let c ∈ ω. There is a constant O(1) such that, for all n > 1 and all
binary strings σ and τ of length n with |0.σ − 0.τ| < c2^{−n}, we have |H(σ) − H(τ)| ≤ O(1).
The proof of the lemma is relatively simple. We can easily write a program P that,
for each sufficiently long σ, generates the 2c binary strings τ of length n = |σ| with
|0.σ − 0.τ| < c2^{−n}. For any binary strings σ and τ of length n with |0.σ − 0.τ| < c2^{−n}, in
order to compute τ it suffices to know a program for σ and the position of τ on the list
generated by P on input σ.
Turning to the proof of the theorem, let φ and c be as in Definition 1.1. Let
q_n = 0.(α↾(n+1)). Since q_n is rational and α − q_n < 2^{−(n+1)}, we have β − φ(q_n) < c2^{−(n+1)}. Thus,
by the lemma, H(β↾n) = H(φ(q_n)↾n) + O(1), and hence H(β↾n) ≤ H(α↾n) + O(1).
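The program P in the proof of Lemma 1.3 is easy to make concrete. The sketch below (an illustration with our own function names, not the paper's construction) lists, for a given binary string σ and constant c, all strings τ of the same length with |0.σ − 0.τ| < c·2^{−n}; to describe such a τ it then suffices to describe σ together with the index of τ on this list.

def nearby_strings(sigma, c):
    """All binary strings tau with |tau| = |sigma| and |0.sigma - 0.tau| < c * 2^-n,
    in increasing order of value. There are at most 2c - 1 of them."""
    n = len(sigma)
    val = int(sigma, 2)                       # 0.sigma equals val / 2^n
    lo, hi = max(0, val - c + 1), min(2 ** n, val + c)
    return [format(v, "0{}b".format(n)) for v in range(lo, hi)]

def recover(sigma, index, c=2):
    """The 'program P' of the lemma: recover tau from sigma and its index."""
    return nearby_strings(sigma, c)[index]

taus = nearby_strings("0110", 2)
assert "0110" in taus and len(taus) <= 3
assert recover("0110", taus.index("0111")) == "0111"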
Solovay observed that Ω dominates all c.e. reals, and Theorem 1.2 implies that if a
c.e. real dominates all c.e. reals then it must be random. This led Solovay to define a
c.e. real to be Ω-like if it dominates all c.e. reals. The point is that the definition of
Ω-like seems quite model-independent, as opposed to the model-dependent definition of Ω.
This circle of ideas was completed recently by Slaman [26], who proved the converse
to the fact that Ω-like reals are random.
1.4. Theorem (Slaman). A c.e. real is random if and only if it is Ω-like.
It is natural to seek to understand the c.e. reals under Solovay reducibility. A useful
characterization of this reducibility is given by the following lemma, which we prove in
the next section.
1.5. Lemma. Let α and β be c.e. reals. Then β ≤_S α if and only if for every computable
sequence of rationals a_0, a_1, ... such that Σ_{n∈ω} a_n = α,
there are a constant c and a computable sequence of rationals ε_0, ε_1, ... ≤ c such that
β = Σ_{n∈ω} ε_n a_n.
Phrased another way, Lemma 1.5 says that the c.e. reals dominated by a given c.e.
real essentially correspond to splittings of α under arithmetic addition.
1.6. Corollary. Let β ≤_S α be c.e. reals. There is a c.e. real γ and a rational c such
that β + γ = cα.
Proof. Let a_0, a_1, ... be a computable sequence of rationals such that
Σ_n a_n = α. Let c and ε_0, ε_1, ... be as in Lemma 1.5. Define
γ = Σ_n (c − ε_n) a_n. Since each ε_n is at most c, the real γ
is c.e., and of course β + γ = cα.
Solovay reducibility has a number of other beautiful interactions with arithmetic, as
we now discuss.
The relation ≤_S is reflexive and transitive, and hence ≡_S is an equivalence relation
on the c.e. reals. Thus we can define the Solovay degree [α] of a c.e. real α as its ≡_S
equivalence class. (When we mention Solovay degrees below, we always mean Solovay
degrees of c.e. reals.) The Solovay degrees form an upper semilattice, with the join of
[α] and [β] being [α + β] = [αβ], a fact observed by Solovay and others (⊕ is definitely not
a join operation here). We note the following slight improvement of this result. Recall
that an uppersemilattice U is distributive if for all a_0, a_1, b with b ≤ a_0 ∨ a_1, there
exist b_0 ≤ a_0 and b_1 ≤ a_1 such that b = b_0 ∨ b_1.
1.7. Lemma. The Solovay degrees of c.e. reals form a distributive uppersemilattice with
join given by [α] ∨ [β] = [α + β] = [αβ].
Proof. Suppose that γ ≤_S β_0 + β_1. Let a^0_0, a^0_1, ... and a^1_0, a^1_1, ... be computable sequences
of rationals such that Σ_n a^i_n = β_i for i = 0,
1. By Lemma 1.5, there are a constant c
and a computable sequence of rationals ε_0, ε_1, ... ≤ c such that
γ = Σ_n ε_n(a^0_n + a^1_n). Then, letting γ_i = Σ_n ε_n a^i_n for i = 0, 1, we have
γ_i ≤_S β_i (again by Lemma 1.5) and γ = γ_0 + γ_1.
This establishes distributivity.
To see that the join in the Solovay degrees is given by addition, we again apply
Lemma 1.5. Certainly, for any c.e. reals β_0 and β_1 we have β_0, β_1 ≤_S β_0 + β_1,
and hence [β_0] ∨ [β_1] ≤ [β_0 + β_1]. Conversely, suppose that β_0, β_1 ≤_S α. Let a_0, a_1, ...
be a computable sequence of rationals such that
Σ_n a_n = α. For each i < 2
there is a constant c_i and a computable sequence of rationals ε^i_0, ε^i_1, ... ≤ c_i such that
β_i = Σ_n ε^i_n a_n. Thus β_0 + β_1 = Σ_n (ε^0_n + ε^1_n) a_n, and since
each ε^0_n + ε^1_n is at most c_0 + c_1,
a final application of Lemma 1.5 shows that β_0 + β_1 ≤_S α.
The proof that the join in the Solovay degrees is also given by multiplication is a
similar application of Lemma 1.5.
There is a least Solovay degree, the degree of the computable reals, as well as a
greatest one, the degree of Ω. For proofs of these facts and more on c.e. reals and
Solovay reducibility, see for instance Chaitin [9, 10, 11], Calude, Hertling, Khoussainov,
and Wang [6], Calude and Nies [7], Calude [2], Slaman [26], and Coles, Downey, and
LaForte [12].
Despite the many attractive features of the Solovay degrees, their structure is largely
unknown. Coles, Downey, and LaForte [12] have shown that this structure is very
complicated by proving that it has an undecidable first order theory.
One question addressed in the present paper, open since Solovay's original 1974
notes, is whether the structure of the Solovay degrees is dense. Indeed, up to now, it
was not known even whether there is a minimal Solovay degree. That is, intuitively, if
a c.e. real α is not computable, must there be a c.e. real that is also not computable,
yet is strictly less random than α?
In this paper, we show that the Solovay degrees of c.e. reals are dense. To do this
we divide the proof into two parts. We prove that if α <_S Ω then there is a c.e. real β
with α <_S β <_S Ω, and we also prove that every incomplete Solovay degree splits over
each lesser degree.
The nonuniform nature of the argument is essential given the techniques we use,
since, in the splitting case, we have a priority construction in which the control of the
injuries is directly tied to the enumeration of Ω. The fact that if a c.e. real is Solovay-incomplete
then it must grow more slowly than Ω is what allows us to succeed. (We
will discuss this more fully in Section 3.) This unusual technique is of some technical
interest, and clearly cannot be applied to proving upwards density, since in that case
the top degree is that of Ω itself. To prove upwards density, we use a different technique,
taking advantage of the fact that, however we construct a c.e. real, it is automatically
dominated by Ω.
In light of these results, and further motivated by the general question of how randomness
can be produced, it is natural to ask whether the complete Solovay degree can
be split, or in other words, whether there exist nonrandom c.e. reals α and β such that
α + β = Ω. We give a negative answer to this question, thus characterizing the
random c.e. reals as those c.e. reals that cannot be written as the sum of two c.e. reals
of lesser Solovay degrees.
We remark that there are (non-c.e.) nonrandom reals whose sum is random; the
following is an example of this phenomenon. Define the real α by letting α(n) = 0 if n
is even and α(n) = Ω(n) otherwise. (Here we identify a real with its dyadic expansion
as above.) Define the real β by letting β(n) = 0 if n is odd and β(n) = Ω(n) otherwise.
Now α and β are clearly nonrandom, but α + β = Ω is random.
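The construction is easy to check on finite prefixes. In the sketch below (ours; the parity convention is the one reconstructed above), the bits of a presumed random real are split between two reals according to the parity of their positions, and adding the two resulting dyadic rationals returns the original prefix, since at every position at most one of the two summands has a nonzero bit and so no carries occur.

from fractions import Fraction

def split_by_parity(bits):
    """Split a finite bit prefix into alpha (bits kept at odd positions) and
    beta (bits kept at even positions), with zeroes elsewhere."""
    alpha = [b if i % 2 == 1 else 0 for i, b in enumerate(bits)]
    beta = [b if i % 2 == 0 else 0 for i, b in enumerate(bits)]
    return alpha, beta

def value(bits):
    """The dyadic rational 0.b_1 b_2 ... determined by a finite bit list."""
    return sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(bits))

omega_prefix = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # stand-in for a prefix of a random real
alpha_bits, beta_bits = split_by_parity(omega_prefix)
assert value(alpha_bits) + value(beta_bits) == value(omega_prefix)  # no carries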
Before turning to the details of the paper, we point out that there are other reducibilities
one can study in this context. Coles, Downey, and LaForte [12, 13] introduced one
such reducibility, called sw-reducibility; it is defined as follows. For sets of natural numbers
A and B, we say that A ≤_sw B if there are a computable procedure Γ and a
constant c such that A = Γ^B and the use of Γ on argument x is bounded by x + c. For
c.e. reals α and β, we say that α ≤_sw β if there are sets A and B such that 0.χ_A = α,
0.χ_B = β, and A ≤_sw B, where χ_S is the characteristic function of the set S.
As in the case of Solovay reducibility, it is not difficult to argue that if α ≤_sw β then
H(α↾n) ≤ H(β↾n) + O(1) for all n, and that Ω is sw-complete. Furthermore,
Coles, Downey, and LaForte [12] proved the analog of Slaman's theorem above in the
case of sw-reducibility, namely that if a c.e. real is random then it is sw-complete. They
also showed that Solovay reducibility and sw-reducibility are different, since there are
c.e. reals α, β, γ, and δ such that α ≤_S β but α is not sw-reducible to β, and
γ ≤_sw δ but γ is not Solovay reducible to δ, and that
there are no minimal sw-degrees of c.e. reals.
1.8. Question. Are the sw-degrees of c.e. reals dense?
Ultimately, the basic reducibility we seek to understand is H-reducibility, where
α ≤_H β if there is a constant O(1) such that H(α↾n) ≤ H(β↾n) + O(1) for all n ∈ ω.
Little is known about this directly.
2 Preliminaries
Fix a self-delimiting universal computer M. (That is, for all binary strings σ, if M(σ)↓
then M(σ′)↑ for all σ′ properly extending σ.) Then one can define Ω via
Ω = Σ_{M(σ)↓} 2^{−|σ|}.
(The properties of Ω relevant to this paper are independent of the choice of M.)
The c.e. real Ω is random in the canonical Martin-Löf sense. Recall that a Martin-Löf
test is a uniformly c.e. sequence {V_e : e > 0} of subsets of {0,1}* such that for all
e > 0, μ(V_e) ≤ 2^{−e}, where μ denotes the usual product measure on {0,1}^ω. The string α ∈ {0,1}^ω and
the real 0.α are random, or more precisely, 1-random, if α ∉ ∩_{e>0} V_e for every
Martin-Löf test {V_e : e > 0}.
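At any finite stage, a component of a Martin-Löf test is generated by a finite prefix-free set of strings, and its measure is just the sum of 2^{−|σ|} over those strings. The sketch below (an illustration with a made-up finite test component, not data from the paper) computes this measure and checks the bound 2^{−e}.

from fractions import Fraction

def cylinder_measure(strings):
    """Measure of the open subset of {0,1}^omega generated by a prefix-free
    set of strings: each string sigma contributes 2^-|sigma|."""
    return sum(Fraction(1, 2 ** len(s)) for s in strings)

V2 = ["0000", "00010", "111000"]               # a finite stage of a hypothetical V_2
assert cylinder_measure(V2) <= Fraction(1, 4)  # within the bound 2^-e for e = 2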
An alternate characterization of the random reals can be given via the notion of a
Solovay test. We give a somewhat nonstandard definition of this notion, which will be
useful below. A Solovay test is a c.e. sequence {I_i : i ∈ ω} of intervals with rational endpoints
such that Σ_{i∈ω} |I_i| < ∞, where |I| is the length of the interval I. As Solovay [30]
showed, a real α is random if and only if {i : α ∈ I_i} is finite for every Solovay test
{I_i : i ∈ ω}.
The following lemma, implicit in [30] and proved in [12], provides an alternate characterization
of Solovay reducibility, which is the one that we will use below.
2.1. Lemma. Let α and β be c.e. reals, and let α_0, α_1, ... and β_0, β_1, ... be computable
increasing sequences of rationals converging to α and β, respectively. Then β ≤_S α if
and only if there are a constant d and a total computable function f such that for all
n ∈ ω, β − β_{f(n)} ≤ d(α − α_n).
Whenever we mention a c.e. real α, we assume that we have chosen a computable
increasing sequence α_0, α_1, ... converging to α. The previous lemma guarantees that,
in determining whether one c.e. real dominates another, the particular choice of such
sequences is irrelevant. For convenience of notation, we adopt the convention that, for
any c.e. real α mentioned below, the expression α_s − α_{s−1} is equal to 0 when s = 0.
We will also make use of two more lemmas, the first of which has Lemma 1.5 as a
corollary.
2.2. Lemma. Let β ≤_S α be c.e. reals and let α_0, α_1, ... be a computable increasing sequence
of rationals converging to α. There is a computable increasing sequence β̂_0, β̂_1, ...
of rationals converging to β such that for some constant c and all s ∈ ω,
β̂_{s+1} − β̂_s ≤ c(α_{s+1} − α_s).
Proof. Fix a computable increasing sequence rationals converging to , let
d and f be as in Lemma 2.1, and let c > d be such that f(0) < c 0 . We may assume
without loss of generality that f is increasing.
There must be an s 0 > 0 for which f(s 0 ) f(0) <
would have
contradicting our choice of d and f . It is now easy to
s0 so that ^
We can repeat the procedure in the previous paragraph with s 0 in place of 0 to
obtain an
s1 such that ^
Proceeding by recursion in this way, we dene a computable increasing sequence
of rationals with the desired properties.
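The stage-by-stage condition of Lemma 2.2 can be checked directly on finite portions of approximation sequences. The sketch below (our illustration, with made-up sequences) verifies that each increment of the β̂-approximation is at most c times the corresponding increment of the α-approximation.

from fractions import Fraction

def stagewise_dominated(alpha_seq, beta_seq, c):
    """Check beta[s+1] - beta[s] <= c * (alpha[s+1] - alpha[s]) for all
    consecutive stages of two finite approximation sequences."""
    return all(b1 - b0 <= c * (a1 - a0)
               for (a0, a1), (b0, b1) in zip(zip(alpha_seq, alpha_seq[1:]),
                                             zip(beta_seq, beta_seq[1:])))

alpha_seq = [Fraction(k, 10) for k in range(6)]   # 0, 0.1, ..., 0.5
beta_seq = [a / 3 for a in alpha_seq]             # grows three times more slowly
assert stagewise_dominated(alpha_seq, beta_seq, c=1)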
We are now in a position to prove Lemma 1.5.
1.5. Lemma. Let α and β be c.e. reals. Then β ≤_S α
if and only if for every computable
sequence of rationals a_0, a_1, ... such that
Σ_{n∈ω} a_n = α,
there are a constant c and a computable sequence of rationals ε_0, ε_1, ... ≤ c such that
β = Σ_{n∈ω} ε_n a_n.
Proof. The if direction is easy; we prove the only if direction.
Suppose that 6 S . Given a computable sequence of rationals a 0 ; a
i6n a i and apply Lemma 2.2 to obtain c and ^
that lemma. Dene "
n . Now
for all n 2 !,
We finish this section with a simple lemma which will be quite useful below.
2.3. Lemma. Let
be c.e. reals. The following hold for all total computable
functions f and all k 2 !.
1. For each there is an s 2 ! such that either
(a) t f(n) < k( t n ) for all t > s or
2. There are innitely many for which there is an s 2 ! such that t f(n) >
Proof. If there are innitely many t 2 ! such that t f(n) 6 k( t n ) and innitely
many
which implies that S .
If there are innitely many t 2 ! such that t f(n) 6 k( t n ) then
So if this happens for all but nitely many n then 6 S . (The nitely many n for
which f(n) > k( n ) can be brought into line by increasing the constant k.)
3 Main Results
We now proceed with the proofs of our main results. We begin by showing that every
incomplete Solovay degree can be split over any lesser Solovay degree.
3.1. Theorem. Let γ <_S α <_S Ω
be c.e. reals. There are c.e. reals α_0 and α_1 such that
γ <_S α_i <_S α for i = 0, 1 and α_0 + α_1 ≡_S α.
Proof. We want to build 0 and 1 so that
and the
following requirement is satised for each 2:
By Lemma 2.2 and the fact that
=c S
for any rational c, we may assume without
loss of generality that 2(
s
(Recall our convention
that 0 c.e. real .)
In the absence of requirements of the form R 1 i;e;k , it is easy to satisfy simultaneously
all requirements of the form R i;e;k : for each s 2 !, simply let i
s and 1 i
s .
In the presence of requirements of the form R 1 i;e;k , however, we cannot aord to be
quite so cavalier in our treatment of enough of has to be kept out of 1 i to
guarantee that 1 i does not dominate .
Most of the essential features of our construction are already present in the case of
two requirements R i;e;k and R 1 i;e 0 ;k 0 , which we now discuss. We assume that R i;e;k has
priority over R 1 i;e 0 ;k 0
and that both e and e 0
are total. We will think of the j as
being built by adding amounts to them in stages. Thus j
s will be the total amount
added to j by the end of stage s. At each stage s we begin by adding
s
s 1 to the
current value of each j ; in the limit, this ensures that j > S
We will say that R i;e;k is satised through n at stage s if e (n)[s]# and s e (n) >
). The strategy for R i;e;k is to act whenever either it is not currently satised
or the least number through which it is satised changes. Whenever this happens, R i;e;k
initializes R 1 i;e 0 ;k 0 , which means that the amount of 2
that R 1 i;e 0 ;k 0 is allowed to
funnel into i is reduced. More specically, once R 1 i;e 0 ;k 0 has been initialized for the
mth time, the total amount that it is thenceforth allowed to put into i is reduced to
The above strategy guarantees that if R 1 i;e 0 ;k 0 is initialized innitely often then the
amount put into i by R 1 i;e 0 ;k 0
(which in this case is all that is put into i except for
the coding of
adds up to a computable real. In other words, i S
< S . But it is
not hard to argue, with the help of Lemma 2.3, that this means that there is a stage s
after which R i;e;k is always satised and the least number through which it is satised
does not change. So we conclude that R 1 i;e 0 ;k 0 is initialized only nitely often, and that
R i;e;k is eventually permanently satised.
This leaves us with the problem of designing a strategy for R 1 i;e 0 ;k 0
that respects the
strategy for R i;e;k . The problem is one of timing. To simplify notation, let ^
s . Since R 1 i;e 0 ;k 0 is initialized only nitely often, there is a certain
that it is allowed to put into i after the last time it is initialized. Thus
waits until a stage s such that ^
adding nothing to i until
such a stage is reached, then from that point on it can put all of ^
s into i , which
of course guarantees its success. The problem is that, in the general construction, a
strategy working with a quota 2 m cannot eectively nd an s such that ^
If it uses up its quota too soon, it may nd itself unsatised and unable to do anything
about it.
The key to solving this problem (and the reason for the hypothesis that < S
is
the observation that, since the
much more slowly than
the sequence ^
can be used to modulate the amount that R 1 i;e 0 ;k 0
puts into
More specically, at a stage s, if R 1 i;e 0 ;k 0 's current quota is 2 m then it puts into i
as much of ^
possible, subject to the constraint that the total amount put
into i by R 1 i;e 0 ;k 0
since the last stage before stage s at which R 1 i;e 0 ;k 0
was initialized
must not exceed 2
s . As we will see below, the fact
that
implies that there
is a stage v after which R 1 i;e 0 ;k 0 is allowed to put in all of ^
In general, at a given stage s there will be several requirements, each with a certain
amount that it wants (and is allowed) to direct into one of the j . We will work back-
wards, starting with the weakest priority requirement that we are currently considering.
This requirement will be allowed to direct as much of ^
wants (subject to
its current quota, of course). If any of ^
then the next weakest priority
strategy will be allowed to act, and so on up the line.
We now proceed with the full construction. We say that R i;e;k has stronger priority
We say that R i;e;k is satised through n at stage s if
s be the least n through which R i;e;k is satised at stage s, if such an n exists,
and let n i;e;k
A stage s is e-expansionary if
Let q be the last e-expansionary stage before stage s (or let there have been
none). We say that R i;e;k requires attention at stage s if s is an e-expansionary stage
and there is an r 2 [q; s) such that either n i;e;k
r
r 1 .
If R i;e;k requires attention at stage s then we say that each requirement of weaker
priority than R i;e;k is initialized at stage s.
Each requirement R i;e;k has associated with it a c.e. real i;e;k , which records the
amount put into 1 i for the sake of R i;e;k .
We decide how to distribute at stage s as follows.
1. Let
s
s
s 1 to the current value of each i .
2. Let i < 2 and be such that 2he; ki be the number of times
R i;e;k has been initialized and let t be the last stage at which R i;e;k was initialized.
Let
(j+m)
(It is not hard to check that is non-negative.) Add to " and to the current
values of i;e;k and 1 i .
3. " to the current value of 0 and end the stage.
Otherwise, decrease j by one and go to step 2.
This completes the construction. Clearly,
We now show by induction that each requirement initializes requirements of weaker
priority only nitely often and is eventually satised. Assume by induction that R i;e;k
is initialized only nitely often. Let be the number of times R i;e;k
is initialized, and let t be the last stage at which R i;e;k is initialized. If e is not total
then R i;e;k is vacuously satised and eventually stops initializing requirements of weaker
priority, so we may assume that e is total. Now the following are clearly equivalent:
1. R i;e;k is satised,
2. lim s n i;e;k
s exists and is nite, and
3. R i;e;k eventually stops requiring attention.
Assume for a contradiction that R i;e;k requires attention innitely often.
Since
, part 2 of Lemma 2.3 implies that there are v > u > t such that for all w > v we
Furthermore, by the way the amount added to i;e;k
at a given stage is dened in step 2 of the construction, i;e;k
(j+m)
u and
i;e;k
Thus for all w > v,
(j+m)
(j+m)
(j+m)
From this we conclude that, after stage v, the reverse recursion performed at each stage
never gets past j, and hence everything put into i after stage v is put in either to code
or for the sake of requirements of weaker priority than R i;e;k .
Let be the sum of all 1 i;e 0 ;k 0
such that R 1 i;e 0 ;k 0 has weaker priority than R i;e;k .
Let s l > t be the lth stage at which R i;e;k requires attention. If R 1 i;e 0 ;k 0
is the pth
requirement on the priority list and p > j then
s l
l
and hence is computable.
Putting together the results of the previous two paragraphs, we see that i 6 S
Since
, this means that
It now follows from Lemma 2.3 that there is an
such that R i;e;k is eventually permanently satised through n, and such that R i;e;k
is eventually never satised through any n 0 < n. Thus lim s n i;e;k
s exists and is nite, and
hence R i;e;k is satised and eventually stops requiring attention.
We now show that the Solovay degrees are upwards dense, which together with the
previous result implies that they are dense.
3.2. Theorem. Let α <_S Ω
be a c.e. real. There is a c.e. real β such that
α <_S β <_S Ω.
Proof. We want to build > S
to satisfy the following requirements for each
and
As in the previous proof, the analysis of an appropriate two-strategy case will be
enough to outline the essentials of the full construction. Let us consider the strategies
S e;k and R e 0 ;k 0 , the former having priority over the latter. We assume that both e and
are total.
The strategy for S e;k is basically to make look like
. At each point of the con-
struction, R e 0 ;k 0 has a certain fraction
of
that it is allowed to put into . (This is in
addition to the coding of
into , of course.) We will say that S e;k is satised through
n at stage s if e (n)#
and
s
e (n) > k( s n ). Whenever either it is not currently
satised or the least number through which it is satised changes, S e;k initializes R e 0 ;k 0 ,
which means that the fraction
of
that R e 0 ;k 0
is allowed to put into is reduced.
As in the previous proof, if S e;k is not eventually permanently satised through some
n then the amount put into by R e 0 ;k 0 is computable, and hence S
. But, as before,
this implies that there is a stage after which S e;k is permanently satised through some
n and never again satised through any n 0 < n. Once this stage has been reached, R e 0 ;k 0
is free to code a xed fraction
of
into , and hence it too succeeds.
We now proceed with the full construction. We say that a requirement X e;k has
stronger priority than a requirement Y e 0 ;k 0 if either he; ki < he
We say that R e;k is satised through n at stage s if e (n)# and
s
We say that S e;k is satised through n at stage s if e (n)# and
s
For a requirement X e;k , let n X e;k
s be the least n through which X e;k is satised at stage s,
if such an n exists, and let n X e;k
As before, a stage s is e-expansionary if
Let X e;k be a requirement and let q be the last e-expansionary stage before stage s (or
there have been none). We say that requires attention at stage s if
s is an e-expansionary stage and there is an r 2 [q; s) such that either n X e;k
r
r 1 .
At stage s, proceed as follows. First add
s
s 1 to the current value of . If no
requirement requires attention at stage s then end the stage. Otherwise, let X e;k be the
strongest priority requirement requiring attention at stage s. We say that X e;k acts at
stage s. If then initialize all weaker priority requirements and end the stage.
be the number of times that R e;k has been
initialized. If s is the rst stage at which R e;k acts after the last time it was initialized
then let t be the last stage at which R e;k was initialized, and otherwise let t be the last
stage at which R e;k acted. Add 2
(j+m)(
s
t ) to the current value of and end the
stage.
This completes the construction. Since is bounded by
it
is a well-dened c.e. real. Furthermore,
We now show by induction that each requirement initializes requirements of weaker
priority only nitely often and is eventually satised. Assume by induction that there
is a stage u such that no requirement of stronger priority than X e;k requires attention
after stage u. If e is not total then X e;k is vacuously satised and eventually stops
requiring attention, so we may assume that e is total. Now the following are clearly
equivalent:
1. X e;k is satised,
2. lim s n X e;k
s exists and is nite,
3. X e;k eventually stops requiring attention, and
4. X e;k acts only nitely often.
First suppose that be the number of times that
R e;k is initialized. (Since R e;k is not initialized at any stage after stage u, this number
is nite.) Suppose that R e;k acts innitely often. Then the total amount added to for
the sake of R e;k is 2 (j+m)
and hence S 2
(j+m)
. It now follows from
Lemma 2.3 that there is an such that R e;k is eventually permanently satised
through n, and such that R e;k is eventually never satised through n 0 < n. Thus
lim s n R e;k
s exists and is nite, and hence R e;k is satised and eventually stops requiring
attention.
Now suppose that acts innitely often. If v > u is the mth stage at
which S e;k acts then the total amount added to after stage v for purposes other than
coding
is bounded by
. This means that S
It now
follows from Lemma 2.3 that there is an such that S e;k is eventually permanently
satised through n, and such that S e;k is eventually never satised through n 0 < n. Thus
s exists and is nite, and hence S e;k is satised and eventually stops requiring
attention.
Combining Theorems 3.1 and 3.2, we have the following result.
3.3. Theorem. The Solovay degrees of c.e. reals are dense.
We finish by showing that the hypothesis that α <_S Ω in the statement of Theorem
3.1 is necessary. This fact will follow easily from a stronger result which shows
that, despite the upwards density of the Solovay degrees, there is a sense in which the
complete Solovay degree is very much above all other Solovay degrees. We begin with a
lemma giving a sufficient condition for domination.
3.4. Lemma. Let f be an increasing total computable function and let k > 0 be a
natural number. Let α and β be c.e. reals for which there are infinitely many s ∈ ω such
that k(β − β_s) > α − α_{f(s)}, but only finitely many s ∈ ω such that k(β_t − β_s) > α_{f(t)} − α_{f(s)}
for all t > s. Then α ≤_S β.
Proof. By taking as an approximating sequence for
, we may assume that f is the identity.
By hypothesis, there is an r 2 ! such that for all s > r there is a t > s with
. Furthermore, there is an s 0 > r such that k( s
Given s i , let s i+1 be the least number greater than s i such that k( s i+1
Assuming by induction that k( s i
, we have
Thus s 0 < s 1 < is a computable sequence such that k( s i
for all
Now dene the computable function g by letting g(n) be the least s i that is greater
than or equal to n. Then g(n) < k( g(n) ) 6 k( n ) for all n 2 !, and hence
6 S .
3.5. Theorem. Let α and β be c.e. reals, let f be an increasing total computable function,
and let k > 0 be a natural number. If α is random and there are infinitely many
s ∈ ω such that k(β − β_s) > α − α_{f(s)}, then β is random.
Proof. As in Lemma 3.4, we may assume that f is the identity. If is rational then we
can replace it with a nonrational computable real 0 such that 0 0
s > s for all
so we may assume that is not rational.
We assume that is nonrandom and there are innitely many s 2 ! such that
show that is nonrandom. The idea is to take a Solovay test
!g such that 2 I i for innitely many use it to build a Solovay
test !g such that 2 J i for innitely many i 2 !.
Let
Except in the trivial case in which S , Lemma 2.3 guarantees that U is 0
2 . Thus
a rst attempt at building B could be to run the following procedure for all
parallel. Look for the least t such that there is an s < t with s 2 U [t] and s 2 I i . If
there is more than one number s with this property then choose the least among such
numbers. Begin to add the intervals
to B, continuing to do so as long as s remains in U and the approximation of remains
in I i . If the approximation of leaves I i then end the procedure. If s leaves U , say at
stage u, then repeat the procedure (only considering t > u, of course).
If 2 I i then the variable s in the above procedure eventually assumes a value in
U . For this value, k( s ) > s , from which it follows that k( u s ) > s
for some u > s, and hence that must be in one of the
intervals () added to B by the above procedure.
Since is in innitely many of the I i , running the above procedure for all
guarantees that is in innitely many of the intervals in B. The problem is that
we also need the sum of the lengths of the intervals in B to be nite, and the above
procedure gives no control over this sum, since it could easily be the case that we start
working with some s, see it leave U at some stage t (at which point we have already
added to B intervals whose lengths add up to t 1 s ), and then nd that the next
s with which we have to work is much smaller than t. Since this could happen many
times for each i 2 !, we would have no bound on the sum of the lengths of the intervals
in B.
This problem would be solved if we had an innite computable subset T of U . For
each I i , we could look for an s 2 T such that s 2 I i , and then begin to add the
intervals () to B, continuing to do so as long as the approximation of remained in I i .
(Of course, in this easy setting, we could also simply add the single interval [
to B.) It is not hard to check that this would guarantee that if 2 I i then is in one
of the intervals added to B, while also ensuring that the sum of the lengths of these
intervals is less than or equal to k jI i j. Following this procedure for all would give
us the desired Solovay test B. Unless 6 S , however, there is no innite computable
so we use Lemma 3.4 to obtain the next best thing.
Let
If 6 S then is nonrandom, so, by Lemma 3.4, we may assume that S is innite.
Note that k( s ) > s for all s 2 S. In fact, we may assume that k( s ) > s
for all s 2 S, since if k( s dier by a rational amount, and
hence is nonrandom.
The set S is co-c.e. by denition, but it has an additional useful property. Let
If s 2 S[t 1] S[t] then no u 2 (s; t) is in S, since for any such u we have
In other words, if s leaves S at stage t then so do all numbers in (s; t).
To construct B, we run the following procedure P i for all in parallel. Note
that B is a sequence rather than a set, so we are allowed to add more than one copy of
a given interval to B.
1. Look for an s 2 ! such that s 2 I i .
2. Let
I i then terminate the procedure.
3. If
and go to step 2. Otherwise, add the interval
to B, increase t by one, and repeat step 3.
This concludes the construction of B. We now show that the sum of the lengths of
the intervals in B is nite and that is in innitely many of the intervals in B.
For each i 2 !, let B i be the set of intervals added to B by P i and let l i be the sum of
the lengths of the intervals in B i . If P i never leaves step 1 then eventually
terminates then l i 6 k( t s ) for some s; t 2 ! such that s ; t 2 I i , and hence
reaches step 3 and never terminates then 2 I i and l i 6 k( s ) for
some s 2 ! such that s 2 I i , and hence again l i 6 k jI i j. Thus the sum of the lengths
of the intervals in B is less than or equal to k
To show that is in innitely many of the intervals in B, it is enough to show that,
for each i 2 !, if 2 I i then is in one of the intervals in B i .
Fix such that 2 I i . Since is not rational, u 2 I i for all su-ciently large
must eventually reach step 3. By the properties of S discussed above,
the variable s in the procedure P i eventually assumes a value in S. For this value,
, from which it follows that k( u s ) > s for some u > s,
and hence that must be in one of the intervals (), all of
which are in B i .
3.6. Corollary. If α_0 and α_1 are c.e. reals such that α_0 + α_1 is random then at least
one of α_0 and α_1 is random.
Proof. Let α = α_0 + α_1. For each s ∈ ω, either 3(α_0 − α_{0,s}) > α − α_s or 3(α_1 − α_{1,s}) > α − α_s,
so for some i < 2 there are infinitely many s ∈ ω such that 3(α_i − α_{i,s}) > α − α_s. By
Theorem 3.5, α_i is random.
Combining Theorem 3.1 and Corollary 3.6, we have the following results, the second
of which also depends on Theorem 1.4.
3.7. Theorem. A c.e. real α is random if and only if it cannot be written as β + γ for
c.e. reals β, γ <_S α.
3.8. Theorem. Let d be a Solovay degree. The following are equivalent:
1. d is incomplete.
2. d splits.
3. d splits over any lesser Solovay degree.
--R
Weakly computable real numbers
A characterization of c.
Information and Randomness
Nature 400
Centre for Discrete Mathematics and Theoretical Computer Science Research Report Series 59
Algorithmic information theory
Incompleteness theorems for random reals
Randomness and reducibility I
Presentations of computably enumerable reals
On the de
On the continued fraction representation of computable real numbers
Three approaches to the quantitative de
On the notion of a random sequence
The various measures of the complexity of
An Introduction to Kolmogorov Complexity and its Appli- cations
Process complexity and e
Randomness and recursive enumerability
Cohesive sets and recursively enumerable Dedekind cuts
Draft of a paper (or series of papers) on Chaitin's work
--TR
--CTR
Rod Downey , Denis R. Hirschfeldt , Geoff LaForte, Undecidability of the structure of the Solovay degrees of c.e. reals, Journal of Computer and System Sciences, v.73 n.5, p.769-787, August, 2007 | kolmogorov complexity;algorithmic information theory;randomness;solovay reducibility;computably enumerable reals |
586923 | A Fully Dynamic Algorithm for Recognizing and Representing Proper Interval Graphs. | In this paper we study the problem of recognizing and representing dynamically changing proper interval graphs. The input to the problem consists of a series of modifications to be performed on a graph, where a modification can be a deletion or an addition of a vertex or an edge. The objective is to maintain a representation of the graph as long as it remains a proper interval graph, and to detect when it ceases to be so. The representation should enable one to efficiently construct a realization of the graph by an inclusion-free family of intervals. This problem has important applications in physical mapping of DNA.We give a near-optimal fully dynamic algorithm for this problem. It operates in O(log n) worst-case time per edge insertion or deletion. We prove a close lower bound of $\Omega(\log n/(\log\log n+\log b))$ amortized time per operation in the cell probe model with word-size b. We also construct optimal incremental and decremental algorithms for the problem, which handle each edge operation in O(1) time. As a byproduct of our algorithm, we solve in O(log n) worst-case time the problem of maintaining connectivity in a dynamically changing proper interval graph. | Introduction
A graph G is called an interval graph if its vertices can be assigned to intervals on the
real line so that two vertices are adjacent in G iff their intervals intersect. The set of
intervals assigned to the vertices of G is called a realization of G. If the set of intervals
can be chosen to be inclusion-free, then G is called a proper interval graph. Proper
interval graphs have been studied extensively in the literature (cf. [7, 13]), and several
linear time algorithms are known for their recognition and realization [2, 3].
This paper deals with the problem of recognizing and representing dynamically
changing proper interval graphs. The input is a series of operations to be performed
on a graph, where an operation is any of the following: Adding a vertex (along with
the edges incident to it), deleting a vertex (and the edges incident to it), adding an
edge and deleting an edge. The objective is to maintain a representation of the dynamic
graph as long as it is a proper interval graph, and to detect when it ceases to be so.
The representation should enable one to efficiently construct a realization of the graph.
In the incremental version of the problem, only addition operations are permitted, i.e.,
the operations include only the addition of a vertex and the addition of an edge. In the
decremental version of the problem only deletion operations are allowed.
The motivation for this problem comes from its application to physical mapping of
DNA [1]. Physical mapping is the process of reconstructing the relative position of DNA
fragments, called clones, along the target DNA molecule, prior to their sequencing,
based on information about their pairwise overlaps. In some biological frameworks
the set of clones is virtually inclusion-free - for example when all clones have a similar
length (this is the case for instance for cosmid clones). In this case, the physical mapping
problem can be modeled using proper interval graphs as follows. A graph G is built
according to the biological data. Each clone is represented by a vertex and two vertices
are adjacent iff their corresponding clones overlap. The physical mapping problem then
translates to the problem of finding a realization of G, or determining that none exists.
Had the overlap information been accurate, the two problems would have been
equivalent. However, some biological techniques may occasionally lead to an incorrect
conclusion about whether two clones intersect, and additional experiments may change
the status of an intersection between two clones. The resulting changes to the corresponding
graph are the deletion of an edge, or the addition of an edge. The set of clones
is also subject to changes, such as adding new clones or deleting 'bad' clones (such as
chimerics [14]). These translate into addition or deletion of vertices in the corresponding
graph. Therefore, we would like to be able to dynamically change our graph, so as
to reflect the changes in the biological data, as long as they allow us to construct a map,
i.e., as long as the graph remains a proper interval graph.
Several authors have studied the problem of dynamically recognizing and representing
certain graph families. Hsu [10] has given an O(m+ n log n)-time incremental
algorithm for recognizing interval graphs. (Throughout, we denote the number of vertices
in the graph by n and the number of edges in it by m.) Deng, Hell and Huang [3]
have given a linear-time incremental algorithm for recognizing and representing connected
proper interval graphs This algorithm requires that the graph will remain connected
throughout the modifications. In both algorithms [10, 3] only vertex increments
are handled. Recently, Ibarra [11] found a fully dynamic algorithm for recognizing
chordal graphs, which handles each edge operation in O(n) time, or alternatively, an
edge deletion in O(n log n) time and an edge insertion in O(n= log n) time.
Our results are as follows: For the general problem of recognizing and representing
proper interval graphs we give a fully dynamic algorithm which handles each operation
in time O(d log n), where d denotes the number of edges involved in the
operation. Thus, in case a vertex is added or deleted, d equals its degree, and in case
an edge is added or deleted, d = 1. Our algorithm builds on the representation of
proper interval graphs given in [3]. We also prove a lower bound for this problem of
Ω(log n/(log log n + log b)) amortized time per edge operation in the cell probe model
of computation with word-size b [16]. It follows that our algorithm is nearly optimal
(up to a factor of O(log log n)).
For the incremental and the decremental versions of the problem we give optimal
algorithms (up to a constant factor) which handle each operation in time O(d). For the
incremental problem this generalizes the result of [3] to arbitrary instances.
As a part of our general algorithm we give a fully dynamic procedure for maintaining
connectivity in proper interval graphs. The procedure receives as input a sequence of
operations each of which is a vertex addition or deletion, an edge addition or deletion,
or a query whether two vertices are in the same connected component. It is assumed
that the graph remains proper interval throughout the modifications, since otherwise
our main algorithm detects that the graph is no longer a proper interval graph and halts.
We show how to implement this procedure in O(log n) time per operation. In compar-
ison, the best known algorithms for maintaining connectivity in general graphs require
O(log² n) amortized time per operation [9], or O(√n) worst-case (deterministic) time
per operation [4]. We also show that the lower bound of Fredman and Henzinger [5] of
Ω(log n/(log log n + log b)) amortized time per operation (in the cell probe model with
word-size b) for maintaining connectivity in general graphs, applies to the problem of
maintaining connectivity in proper interval graphs.
The paper is organized as follows: In section 2 we give the basic background and
describe our representation of proper interval graphs and the realization it defines. In
sections 3 and 4 we present the incremental algorithm. In section 5 we extend the incremental
algorithm to a fully dynamic algorithm for proper interval graph recognition
and representation. We also derive an optimal decremental algorithm. In section 6 we
give a fully dynamic algorithm for maintaining connectivity in proper interval graphs.
Finally, in section 7 we prove a lower bound on the amortized time per operation of a
fully dynamic algorithm for recognizing proper interval graphs. For lack of space, some
of the proofs and some of the algorithmic details are omitted.
Preliminaries
Let G = (V, E) be a graph. We denote its set V of vertices also by V(G) and its set E
of edges also by E(G). For a vertex v ∈ V, let N(v) denote the set of its neighbors,
and N[v] := N(v) ∪ {v}. Let R be an equivalence relation on V defined by uRv iff
N[u] = N[v]. Each equivalence class of R is called a block of G. Note that every block
of G is a complete subgraph of G. The size of a block is the number of vertices in it.
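Computing the blocks of a given (static) graph is straightforward from this definition: group the vertices by their closed neighborhoods. The following sketch (ours, for illustration; it is not the dynamic data structure developed below) does exactly that.

def blocks(vertices, edges):
    """Group vertices into blocks: u and v share a block iff N[u] = N[v]."""
    closed = {v: {v} for v in vertices}
    for u, v in edges:
        closed[u].add(v)
        closed[v].add(u)
    groups = {}
    for v in vertices:
        groups.setdefault(frozenset(closed[v]), set()).add(v)
    return list(groups.values())

# A triangle is a single block; a path a-b-c splits into three singleton blocks.
assert len(blocks("abc", [("a", "b"), ("b", "c"), ("a", "c")])) == 1
assert len(blocks("abc", [("a", "b"), ("b", "c")])) == 3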
Two blocks A and B are neighbors in G if some (and hence all) vertices a ∈ A, b ∈ B
are adjacent in G. A straight enumeration of G is a linear ordering \Phi of the blocks in
G, such that for every block, the block and its neighboring blocks are consecutive in \Phi.
Let B_1 < ... < B_l be an ordering of the blocks of G. For any 1 ≤ i < j ≤ l,
we say that B_i is ordered to the left of B_j, and that B_j is ordered to the right of B_i.
A chordless cycle is an induced cycle of length greater than 3. A claw is an induced
K 1;3 . A graph is claw-free if it does not contain an induced claw. For basic definitions
in graph theory see, e.g., [7].
The following are some useful facts about interval and proper interval graphs.
Theorem 1. ([12]) An interval graph contains no chordless cycle.
Theorem 2. ([15]) A graph is a proper interval graph iff it is interval and claw-free.
Theorem 3. ([3]) A graph is a proper interval graph iff it has a straight enumeration.
Lemma 1 ("The umbrella property"). Let \Phi be a straight enumeration of a connected
proper interval graph G. If A; B and C are blocks of G, such that A
and A is adjacent to C, then B is adjacent to A and to C (see figure 1).
Fig. 1. The umbrella property
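The umbrella property is a purely local condition on an ordering of the blocks and is easy to test directly; the sketch below (our illustration, not part of the paper's algorithm) checks it for a candidate ordering and an adjacency predicate on blocks.

def has_umbrella_property(order, adjacent):
    """Check: whenever blocks A < C in the ordering are adjacent, every block B
    strictly between them is adjacent to both A and C."""
    n = len(order)
    for i in range(n):
        for k in range(i + 2, n):
            if adjacent(order[i], order[k]):
                for j in range(i + 1, k):
                    if not (adjacent(order[i], order[j]) and adjacent(order[j], order[k])):
                        return False
    return True

near = lambda a, b: a != b and abs(a - b) <= 2          # each block sees the 2 nearest on each side
assert has_umbrella_property([0, 1, 2, 3], near)
skip = lambda a, b: {a, b} in ({0, 3}, {0, 1}, {1, 2}, {2, 3})
assert not has_umbrella_property([0, 1, 2, 3], skip)    # 0 and 3 adjacent, but 1 is not adjacent to 3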
Let G be a connected proper interval graph and let \Phi be a straight enumeration
of G. It is shown in [3] that a connected proper interval graph has a unique straight
enumeration up to its full reversal. Define the out-degree of a block B w.r.t. \Phi, denoted
by o(B), as the number of neighbors of B which are ordered to its right in \Phi.
We shall use the following representation: For each connected component of the
dynamic graph we maintain a straight enumeration (in fact, for technical reasons we
shall maintain both the enumeration and its reversal). The details of the data structure
containing this information will be described below.
This information implicitly defines a realization of the dynamic graph (cf. [3]) as
follows: assign to each vertex in block B_i an interval determined by i and by the
out-degree o(B_i). The out-degrees,
and hence the realization of the graph, can be computed from our data structure
in time O(n).
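The exact interval assignment is not legible in this copy, but one concrete assignment with the required properties (vertices in adjacent blocks receive intersecting intervals, vertices in non-adjacent blocks receive disjoint ones, and no interval properly contains another) is to give every vertex of the i-th block of a contig the interval [i, i + o(B_i) + 1 − 1/(i+1)]. The sketch below (ours, with assumed function names) computes such a realization from a straight enumeration and a block adjacency predicate.

from fractions import Fraction

def realization(order, adjacent):
    """Assign to the block at (1-based) position i the interval
    [i, i + o(B_i) + 1 - 1/(i+1)], where o(B_i) counts neighbors of B_i that
    are ordered to its right. Every vertex of the block gets this interval."""
    intervals = {}
    for i, block in enumerate(order, start=1):
        out_deg = sum(1 for other in order[i:] if adjacent(block, other))
        intervals[block] = (Fraction(i), i + out_deg + 1 - Fraction(1, i + 1))
    return intervals

near = lambda a, b: a != b and abs(a - b) == 1
print(realization([0, 1, 2, 3], near))   # consecutive intervals overlap, the others do not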
3 An Incremental Algorithm for Vertex Addition
In the following two sections we describe an optimal incremental algorithm for recognizing
and representing proper interval graphs. The algorithm receives as input a
series of addition operations to be performed on a graph. Upon each operation the algorithm
updates its representation of the graph and halts if the current graph is no longer
a proper interval graph. The algorithm handles each operation in time O(d), where d
denotes the number of edges involved in the operation. It is assumed that initially the
graph is empty, or alternatively, that the representation of the initial graph is known.
A contig of a connected proper interval graph G is a straight enumeration of G.
The first and the last blocks of a contig are called end-blocks. The rest of the blocks are
called inner-blocks.
As mentioned above, each component of the dynamic graph has exactly two contigs
(which are full reversals of each other) and both are maintained by the algorithm. Each
operation involves updating the representation. (In the sequel we concentrate on describing
only one of the two contigs for each component. The second contig is updated
in a similar way.)
3.1 The Data Structure
The following data is kept and updated by the algorithm:
1. For each vertex we keep the name of the block to which it belongs.
2. For each block we keep the following:
(a) An end pointer which is null if the block is not an end-block of its contig, and
otherwise points to the other end-block of that contig.
(b) The size of the block.
(c) Left and right near pointers, pointing to nearest neighbor blocks on the left and
on the right respectively.
(d) Left and right far pointers, pointing to farthest neighbor blocks on the left and
on the right respectively.
(e) Left and right self pointers, pointing to the block.
(f) A counter.
In the following we shall omit details about the obvious updates to the name of the
block of a vertex and to the size of a block.
During the execution of the algorithm we may need to update many far pointers
pointing to a certain block, so that they point to another block. In order to be able to
do that in O(1) time we use the technique of nested pointers: We make the far pointers
point to a location whose content is the address of the block to which the far pointers
should point. The role of this special location will be served by our self-pointers. The
value of the left and right self-pointers of B is always the address of B. When we say
that a certain left (right) far pointer points to B, we mean that it points to a left (right)
self-pointer of B. Let A and B be blocks. In order to change all left (right) far pointers
pointing to A so that they point to B, we require that no left (right) far pointer points
to B. If this is the case, we simply exchange the left (right) self-pointer of A with the
left (right) self-pointer of B. This means that: (1) The previous left (right) self-pointer
of A is made to point to B, and the algorithm records it as the new left (right) self-
pointer of B; (2) The previous left (right) self-pointer of B is made to point to A, and
the algorithm records it as the new left (right) self-pointer of A.
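A minimal sketch of this constant-time redirection, assuming each block record keeps the two self-pointer cells described above (the field names are illustrative):

    class Cell:
        # a single memory location whose content is (a reference to) a block
        def __init__(self, block):
            self.block = block

    class Block:
        def __init__(self, name):
            self.name = name
            self.self_l = Cell(self)   # left self-pointer
            self.self_r = Cell(self)   # right self-pointer

    def redirect_left_far_pointers(a, b):
        # Make every left far pointer that points to block a point to block b,
        # in O(1) time, assuming no left far pointer currently points to b.
        # A far pointer is stored as a reference to a cell and dereferenced via cell.block.
        a.self_l.block, b.self_l.block = b, a     # exchange the contents of the two cells
        a.self_l, b.self_l = b.self_l, a.self_l   # record the cells under their new owners

The right self-pointers are handled symmetrically.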
We shall use the following notation: For a block B we denote its address in the
memory by &B. When we set a far pointer to point to a left or to a right self-pointer of
B we will abbreviate and set it to &B. We denote the left and right near pointers of B by
N_l(B) and N_r(B) respectively. We denote the left and right far pointers of B by F_l(B)
and F r (B) respectively. We denote its end pointer by E(B). In the sequel we often refer
to blocks by their addresses. For example, if A and B are blocks and N_r(A) = &B, we
sometimes refer to B by N_r(A). When it is clear from the context, we also use a name
of a block to denote any vertex in that block. Given a contig \Phi we denote its reversal by
\Phi R . In general when performing an operation, we denote the graph before the operation
is carried out by G, and the graph after the operation is carried out by G 0 .
3.2 The Impact of a New Vertex
In the following we describe the changes made to the representation of the graph in case
G 0 is formed from G by the addition of a new vertex v of degree d. We also give some
necessary and some sufficient conditions for deciding whether G 0 is proper interval.
Let B be a block of G. We say that v is adjacent to B if v is adjacent to some vertex
in B. We say that v is fully adjacent to B if v is adjacent to every vertex in B. We say
that v is partially adjacent to B if v is adjacent to B but not fully adjacent to B.
The following lemmas characterize, assuming that G 0 is proper interval, the adjacencies
of the new vertex.
Lemma 2. If G 0 is a proper interval graph then v can have neighbors in at most two
connected components of G.
Lemma 3. [3] Let C be a connected component of G containing neighbors of v. Let
B_1 < B_2 < · · · < B_k be a contig of C. Assume that G' is proper interval and let
1 ≤ a < b < c ≤ k. Then the following properties are satisfied:
1. If v is adjacent to B a and to B c , then v is fully adjacent to B b .
2. If v is adjacent to B b and not fully adjacent to B a and to B c , then B a is not adjacent
to B c .
3. If is adjacent to B b , then v is fully adjacent to B a or to
One can view a contig \Phi of a connected proper interval graph C as a weak linear
order <_\Phi on the vertices of C, where x <_\Phi y iff the block containing x is ordered in
\Phi to the left of the block containing y. We say that \Phi' is a refinement of \Phi if for every
two vertices x and y, x <_\Phi y implies x <_{\Phi'} y (since a contig can be reversed, we also allow
\Phi' to refine the complete reversal of \Phi).
Lemma 4. If G is a connected induced subgraph of a proper interval graph G 0 , \Phi is a
contig of G and \Phi 0 is a straight enumeration of G 0 , then \Phi 0 is a refinement of \Phi.
Note that whenever v is partially adjacent to a block B in G, the addition of v
will cause B to split into two blocks of G', namely B \ N(v) and B ∩ N(v). Otherwise,
if B is a block of G to which v is either fully adjacent or not adjacent, then B is also a
block of G'.
Corollary 1. If B is a block of G to which v is partially adjacent, then B \ N(v) and
B ∩ N(v) occur consecutively in a straight enumeration of G'.
Lemma 5. Let C be a connected component of G containing neighbors of v. Let the
set of blocks in C which are adjacent to v be {B_1, ..., B_k}. Assume that in a contig of
C, B_1 < B_2 < · · · < B_k. If G' is proper interval then the following properties are satisfied:
1. B_1, ..., B_k are consecutive in C.
2. If k ≥ 3 then v is fully adjacent to B_2, ..., B_{k-1}.
3. If v is adjacent to a single block B 1 in C, then B 1 is an end-block.
4. If v is adjacent to more than one block in C and has neighbors in another compo-
nent, then B 1 is adjacent to B k , and one of B 1 or B k is an end-block to which v is
fully adjacent, while the other is an inner-block.
Proof. Claims 1 and 2 follow directly from part 1 of Lemma 3. Claim 3 follows from
part 3 of Lemma 3. To prove the last part of the lemma let us denote the other component
containing neighbors of v by D. Examine the induced connected subgraph H of G'
whose set of vertices is V(C) ∪ {v} ∪ V(D). H is proper interval as an
induced subgraph of G'. It is composed of three types of blocks: blocks whose vertices
are from V(C), which we will call henceforth C-blocks; blocks whose vertices are from
V(D), which we will call henceforth D-blocks; and {v}, which is a block of H since
V(C) ∪ V(D) induces a disconnected subgraph of G, so no vertex of C or D has the same
closed neighborhood as v. All blocks of C remain intact in H, except B_1 and B_k, which
might split into B_j \ N(v) and B_j ∩ N(v) (j = 1, k).
Surely in a contig of H, C-blocks must be ordered completely before or completely
after D-blocks. Let \Phi denote a contig of H , in which C-blocks are ordered before D-
blocks. Let X denote the rightmost C-block in \Phi. By the umbrella property,
and moreover, X is adjacent to v. By Lemma 4, \Phi is a refinement of a contig of C.
Hence, precisely,
Therefore, one of B 1 or B k is an end-block.
W.l.o.g. . Suppose to the contrary that v is not fully adjacent to B k . Then
by Lemma 4 we have contradicting the
umbrella property. B 1 must be adjacent to B k , or else G 0 contains a claw consisting of
and a vertex from V(D) ∩ N(v). It remains to show that B_1 is an inner-block.
Suppose it is an end-block. Since B_1 and B_k are adjacent, C would contain a single block,
a contradiction. Thus, claim 4 is proved.
3.3 The Algorithm
In our algorithm we rely on the incremental algorithm of Deng, Hell and Huang [3],
which we call henceforth the DHH algorithm. This algorithm handles the insertion of a
new vertex into a graph in O(d) time, provided that all its neighbors are in the same connected
component, changing the straight enumeration of this component appropriately.
We refer the reader to [3] for more details.
We perform the following upon a request for adding a new vertex v. For each neighbor
u of v we add one to the count of the block containing u. We call a block full if its
counter equals its size, empty if its counter equals zero, and partial otherwise. In order
to find a set of consecutive blocks which contain neighbors of v, we pick arbitrarily a
neighbor of v and march down the enumeration of blocks to the left using the left near
neighbor pointers. We continue till we hit an empty block or till we reach the end of the
contig. We do the same to the right and this way we discover a maximal sequence of
nonempty blocks in that component which contain neighbors of v. We call this maximal
sequence a segment. Only the two extreme blocks of the segment are allowed to be
partial or else we fail (by Lemma 5(2)).
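A sketch of this marching step; the block records are assumed to carry counter, size and near pointers (None at the ends of a contig):

    def find_segment(start_block):
        # March left and right from the block of an arbitrary neighbor of v and
        # collect the maximal run of consecutive blocks with nonzero counters.
        def empty(b):
            return b is None or b.counter == 0
        leftmost = start_block
        while not empty(leftmost.near_l):
            leftmost = leftmost.near_l
        segment = []
        b = leftmost
        while not empty(b):
            segment.append(b)
            b = b.near_r
        # only the two extreme blocks of the segment may be partial
        for b in segment[1:-1]:
            if b.counter != b.size:
                return None   # fail: the new graph is not a proper interval graph
        return segment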
If the segment we found contains all neighbors of v then we can use the DHH
algorithm in order to insert v into G, updating our internal data structure accordingly.
Otherwise, by Lemmas 2 and 5(1) there could be only one more segment which contains
neighbors of v. In that case, exactly one extreme block in each segment is an end-block
to which v is fully adjacent (if the segment contains more than one block), and the two
extreme blocks in each segment are adjacent, or else we fail (by Lemma 5(3,4)).
We proceed as above to find a second segment containing neighbors of v. We can
make sure that the two segments are from two different contigs by checking that their
end-blocks do not point to each other. We also check that conditions 3 and 4 in Lemma 5
are satisfied. If the two segments do not cover all neighbors of v, we fail.
If v is adjacent to vertices in two distinct components C and D, then we should
merge their contigs. Let \Phi = B_1 < · · · < B_k and \Phi^R be the two contigs of C. Let
\Psi = B'_1 < · · · < B'_l and \Psi^R be the two contigs of D. The way the merge is performed
depends on the blocks to which v is adjacent. If v is adjacent to B_k and to B'_1, then
by the umbrella property the two new contigs (up to refinements described below) are
\Phi, {v}, \Psi and \Psi^R, {v}, \Phi^R. In the following we describe the necessary
changes to our data structure in case these are the new contigs. The three other cases
are handled similarly.
- Block enumeration: We merge the two enumerations of blocks and put a new block
{v} in-between the two contigs. Let B_i be the leftmost block adjacent to v in the new
ordering and let the rightmost block adjacent to v be B'_j. If B_i
is partial we split it into two blocks B_i \ N(v), B_i ∩ N(v)
in this order. If B'_j is partial we split it into two blocks
B'_j ∩ N(v), B'_j \ N(v) in this order.
- End pointers: We set E(B_1) = &B'_l and E(B'_l) = &B_1. We then nullify the
end pointers of B_k and B'_1.
Near pointers: We update N l
and N l (B 0
In case B i was split we update N r ( -
are made in case B 0
was split to the near pointers of B 0
j+1 .
- Far pointers: If B i was split we set F l ( -
the left self-pointer of B i with the left self-pointer of -
was split
we set F r ( -
1 and exchange the right self-pointer of
j with the right self-pointer of -
j . In addition, we set all right far pointers of
and all left far pointers of B 0
j to &fvg (in O(d)
time). Finally, we set F l
.
4 An Incremental Algorithm for Edge Addition
In this section we show how to handle the addition of a new edge (u; v) in O(1) time.
We characterize the cases for which G' = G ∪ {(u, v)} is proper interval and show how
to efficiently detect them, and how to update our representation of the graph.
Lemma 6. If u and v are in distinct components in G, then G 0 is proper interval iff u
and v were in end-blocks of their respective contigs.
Proof. To prove the 'only if' part let us examine the subgraph H of G' induced by {u}
together with the component of G containing v. H is proper interval as an induced
subgraph of G'. If G' is proper interval, then by Lemma 5(3), v must be in an end-block
of its contig, since u is not adjacent to any other
vertex in the component containing v. The same argument applies to u.
To prove the 'if' part we give a straight enumeration of the new connected component
containing u and v in G 0 . Denote by C and D the components containing u
and v respectively. Let be a contig of C , such that
l be a contig of D, such that
l is a straight enumeration of the new component.
We can check in O(1) time if u and v are in end-blocks of distinct contigs. If this
is the case, we update our data structure according to the straight enumeration given in
the proof of Lemma 6 in O(1) time.
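A sketch of the O(1) test, where bu and bv are the block records of u and v and each block record carries its end pointer (these names are illustrative):

    def can_link(bu, bv):
        # Adding an edge between vertices of two distinct components keeps the graph
        # proper interval iff both endpoints lie in end-blocks (Lemma 6).  Two end-blocks
        # of the same contig point at each other, so the last test also certifies that
        # the two contigs are distinct.
        return (bu.end is not None and bv.end is not None
                and bu is not bv and bu.end is not bv)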
It remains to handle the case where u and v were in the same connected component
C in G. If N(u) = N(v), then by the umbrella property it follows that C contains only
three blocks which are merged into a single block in G 0 . In this case G 0 is proper interval
and updates to the internal data structure are trivial. The following lemma analyses the
case where N (u) 6= N (v).
Lemma 7. Let B_1 < · · · < B_k be a contig of C, such that u ∈ B_i, v ∈ B_j and
i < j ≤ k. Assume that N(u) ≠ N(v). Then G' is proper interval iff F_r(B_i) = B_{j-1} and F_l(B_j) = B_{i+1} in
G.
Proof. To prove the 'only if' part assume that G 0 is proper interval. Since B i and B j
are not adjacent, F r (B i . Suppose to the contrary that
. If in addition F l (B
is a strict containment). As v and z are in distinct blocks, there exists a vertex b 2 N [v]n
N [z]. But then, v; b; z; u induce a claw in G 0 , a contradiction. Hence, F l (B
and so F r (B are in
distinct blocks, either (u; y) 62 E(G) or there is a vertex a 2 N [u] n N [x] (or both).
In the first case, v; u; x; y and the vertices of the shortest path from y to v induce a
chordless cycle in G 0 . In the second case u; a; x; v induce a claw in G 0 . Hence, in both
cases we arrive at a contradiction. The proof that F l (B
To prove the 'if' part we shall provide a straight enumeration of C [ fu; vg. If
we move v from B j to contained only v, F l (B
move u from B i to B i+1 . If u
was not moved and B i oe fug, we split B i into B i n fug; fug in this order. If v was not
moved and B j oe fvg, we split B j into fvg; B j n fvg in this order. It is easy to see that
the result is a straight enumeration of C [ fu; vg.
We can check in O(1) time if the condition in Lemma 7 holds. If this is the case,
we change our data structure so as to reflect the new straight enumeration given in the
proof of Lemma 7. This can be done in O(1) time, in a similar fashion to the update
technique described in Section 3.3. The details are omitted here. The following theorem
summarizes the results of Sections 3 and 4.
Theorem 4. The incremental proper interval graph representation problem is solvable
in O(1) time per added edge.
5 The Fully Dynamic Algorithm
In this section we give a fully dynamic algorithm for recognizing and representing
proper interval graphs. The algorithm performs each operation in O(d log n) time,
where d denotes the number of edges involved in the operation. It supports four types
of operations: Adding a vertex, adding an edge, deleting a vertex and deleting an edge.
It is based on the same ideas used in the incremental algorithm. The main difficulty in
extending the incremental algorithm to handle all types of operations, is updating the
end pointers of blocks when deletions are allowed. To bypass this problem we do not
keep end pointers at all. Instead, we maintain the connected components of G, and use
this information in our algorithm. In the next section we show how to maintain the connected
components of G in O(log n) time per operation. We describe below how each
operation is handled by the algorithm.
5.1 The Addition of a Vertex or an Edge
These operations are handled in essentially the same way as done by the incremental
algorithm. However, in order to check if the end-blocks of two segments are in distinct
components, we query our data structure of connected components (in O(log n) time).
Similarly, in order to check if the endpoints of an added edge are in distinct components,
we check if their corresponding blocks are in distinct components (in O(log n) time).
5.2 The Deletion of a Vertex
We show next how to update the contigs of G after deleting a vertex v of degree d. Note
that G 0 is proper interval as an induced subgraph of G. Denote by X the block containing
v. If X ⊋ {v}, then the only change needed is to delete v. We hence concentrate on
the case that X = {v}. We can find in O(d) time the segment of blocks which includes
X and all its neighbors. Let the contig containing X be B_1 < · · · < B_k and let the
blocks of the segment be B_i, ..., B_j, with X = B_l for some i ≤ l ≤ j.
We make the following updates:
l, we check whether B i can be merged with B i\Gamma1 .
If F l (B them by
moving all vertices from B i to B i\Gamma1 (in O(d) time) and deleting B i . If l
we act similarly w.r.t. B_j and B_{j+1}. Finally, we delete B_l. If B_{l-1} and B_{l+1}
are non-adjacent, then by the umbrella property they are no longer in
the same connected component, and the contig should be split into two contigs, one
ending at B_{l-1} and one beginning at B_{l+1}.
were merged, we update N r (B
updates should be made w.r.t. in
case were merged. If the contig is split, we nullify N r (B l\Gamma1 ) and
l (B l+1 ). Otherwise, we update N r (B
were merged, we exchange the right self-pointer of B i
with the right self-pointer of B i\Gamma1 . Similar changes should be made w.r.t. B j and
. We also set all right far pointers previously pointing to B l , to &B
all left far pointers previously pointing to B l , to &B l+1 (in O(d) time).
Note that these updates take O(d) time and require no knowledge about the connected
components of G.
5.3 The Deletion of an Edge
Let (u; v) be an edge of G to be deleted. Let C denote the connected component of G
containing u and v, and let be a contig of C. If
into resulting in a straight enumeration of G 0 . Updates are
trivial in this case. If N then one can show that G is a proper
interval graph iff C was a clique, so again k = 1. We assume henceforth that k ? 1 and
W.l.o.g. were far neighbors of
each other, then we should split the contig into two contigs, one ending at B i and the
other beginning at B j . Otherwise, updates to the straight enumeration are derived from
the following lemma.
Lemma 8. Let B_1 < · · · < B_k be a contig of C, such that u ∈ B_i, v ∈ B_j and
i < j ≤ k. Assume that N(u) ≠ N(v). Then G' is proper interval iff F_r(B_i) = B_j
and F_l(B_j) = B_i in G.
Proof. Assume that G 0 is proper interval. We will show that F r (B i . The proof
that F l (B are adjacent in G, F r (B i
Suppose to the contrary that F r (B are in distinct
blocks, either there is a vertex a 2 N [v] n N [x] or there is a vertex b 2 N [x] n N [v] (or
both). In the first case, by the umbrella property (a; u) 2 E(G) and therefore u; x; v; a
induce a chordless cycle in G 0 . In the second case, x; b; u; v induce a claw in G 0 . Hence,
in both cases we arrive at a contradiction.
To prove the opposite direction we give a straight enumeration of C n f(u; v)g. If
we move u into B i\Gamma1 . If B i contained only u, F r (B j+1
. If u was not moved and
fug in this order. If v was not moved and
fvg in this order. The result is a contig of
v)g.
If the conditions of Lemma 8 are fulfilled, one has to update the data structure
according to its proof. These updates require no knowledge about the connected components
of G, and it can be shown that they take O(1) time. Hence, from Sections 5.2
and 5.3 we obtain the following result:
Theorem 5. The decremental proper interval graph representation problem is solvable
in O(1) time per removed edge.
6 Maintaining the Connected Components
In this section we describe a fully dynamic algorithm for maintaining connectivity in
a proper interval graph G in O(log n) time per operation. The algorithm receives as
input a series of operations to be performed on a graph, which can be any of the follow-
ing: Adding a vertex, adding an edge, deleting a vertex, deleting an edge or querying
if two blocks are in the same connected component. The algorithm depends on a data
structure which includes the blocks and the contigs of the graph. It hence interacts with
the proper interval graph representation algorithm. In response to an update request,
changes are made to the representation of the graph based on the structure of its connected
components prior to the update. Only then are the connected components of the
graph updated.
Let us denote by B(G) the block graph of G, that is, a graph in which each vertex
corresponds to a block of G and two vertices are adjacent iff their corresponding blocks
are adjacent in G. The algorithm maintains a spanning forest F of B(G). In order to
decide if two blocks are in the same connected component, the algorithm checks if they
belong to the same tree in F .
The key idea is to design F so that it can be efficiently updated upon a modification
in G. We define the edges of F as follows: for every two vertices u and v in B(G),
(u, v) is an edge of F iff their corresponding blocks are consecutive in a contig of G. Conse-
quently, each tree in F is a path representing a contig. The crucial observation about F
is that an addition or a deletion of a vertex or an edge in G induces O(1) modifications
to the vertices and edges of F . This can be seen by noting that each modification of G
induces O(1) updates to near pointers in our representation of G.
It remains to show how to implement a spanning forest in which trees may be cut
when an edge is deleted from F, linked when an edge is inserted into F, and which allows
querying, for each vertex, the tree to which it belongs. All these operations are supported
by the ET-tree data structure of [8] in O(log n) time per operation.
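The following stand-in illustrates the interface the algorithm needs from the spanning forest; it exploits the fact that every tree of F is a path mirroring a contig and simply walks near pointers, so it answers queries correctly but without the O(log n) bound of the real ET-tree:

    class PathForest:
        # Placeholder for the ET-tree of [8].  Because each tree of F is a path that
        # mirrors a contig, the tree of a block can be identified by its leftmost block.
        def find(self, block):
            while block.near_l is not None:
                block = block.near_l
            return block
        def connected(self, a, b):
            return self.find(a) is self.find(b)

Links and cuts of F are implicit here: the O(1) changes to near pointers caused by an update are exactly the edge insertions and deletions of F.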
We are now ready to state our main result:
Theorem 6. The fully dynamic proper interval graph representation problem is solvable
in O(d log n) time per modification involving d edges.
7 The Lower Bound
In this section we prove a lower bound of \Omega(log n/(log log n + log b)) amortized time
per edge operation for fully dynamic proper interval graph recognition in the cell probe
model of computation with word-size b [16].
Fredman and Saks [6] proved a lower bound of \Omega(log n/(log log n + log b)) amortized
time per operation for the following parity prefix sum (PPS) problem: Given an
array A[1], ..., A[n] of integers, execute an arbitrary sequence
of Add(t) and Sum(t) operations, where an Add(t) increases A[t] by 1, and Sum(t)
returns (Σ_{i=1}^{t} A[i]) mod 2. Fredman and Henzinger [5] showed that the same lower
bound applies to the problem of maintaining connectivity in general graphs, by showing
a reduction from a modified PPS problem, called helpful parity prefix sum, for which
they proved the same lower bound. A slight change to their reduction yields the same
lower bound for the problem of maintaining connectivity in proper interval graphs, as
the graph built in the reduction is a union of two paths and therefore proper interval.
Using a similar construction we can prove the following result:
Theorem 7. Fully dynamic proper interval graph recognition takes \Omega(log n/(log log n +
log b)) amortized time per edge operation in the cell probe model with word-size b.
Acknowledgments
The first author gratefully acknowledges support from NSERC. The second author was
supported in part by a grant from the Ministry of Science, Israel. The third author was
supported by Eshkol scholarship from the Ministry of Science, Israel.
--R
Establishing the order of human chromosome-specific DNA fragments
Simple linear time recognition of unit interval graphs.
Recognition and representation of proper circular arc graphs.
SIAM Journal on Computing
Lower bounds for fully dynamic connectivity problems in graphs.
The cell probe complexity of dynamic data structures.
Algorithmic Graph Theory and Perfect Graphs.
Randomized dynamic graph algorithms with polylogarithmic time per operation.
A simple test for interval graphs.
Fully dynamic algorithms for chordal graphs.
Representation of a finite graph by a set of intervals on the real line.
Indifference graphs.
Recombinant DNA.
Eigenschaften der Nerven homologisch einfacher Familien in R n
Should tables be sorted.
--TR
--CTR
Ron Shamir , Roded Sharan, A fully dynamic algorithm for modular decomposition and recognition of cographs, Discrete Applied Mathematics, v.136 n.2-3, p.329-340, 15 February 2004
C. Crespelle , C. Paul, Fully dynamic recognition algorithm and certificate for directed cograph, Discrete Applied Mathematics, v.154 n.12, p.1722-1741, 15 July 2006
Derek G. Corneil, A simple 3-sweep LBFS algorithm for the recognition of unit interval graphs, Discrete Applied Mathematics, v.138 n.3, p.371-379, 15 April 2004
Jrgen Bang-Jensen , Jing Huang , Louis Ibarra, Recognizing and representing proper interval graphs in parallel using merging and sorting, Discrete Applied Mathematics, v.155 n.4, p.442-456, February, 2007 | proper interval graphs;lower bounds;graph algorithms;fully dynamic algorithms |
586924 | A Polynomial Time Approximation Scheme for General Multiprocessor Job Scheduling. | Recently, there have been considerable interests in the multiprocessor job scheduling problem, in which a job can be processed in parallel on one of several alternative subsets of processors. In this paper, a polynomial time approximation scheme is presented for the problem in which the number of processors in the system is a fixed constant. This result is the best possible because of the strong NP-hardness of the problem and is a significant improvement over the past results: the best previous result was an approximation algorithm of ratio $7/6 + \epsilon$ for 3-processor systems based on Goemans's algorithm for a restricted version of the problem. | Introduction
. One of the assumption made in classical scheduling theory is
that a job is always executed by one processor at a time. With the advances in parallel
algorithms, this assumption may no longer be valid for job systems. For example, in
semiconductor circuit design workforce planning, a design project is to be processed
by a group of people. The project contains n jobs, and each job can be worked
on by one of a set of alternatives, where each alternative consists of one or more
persons in the group working simultaneously on the particular job. The processing
time of each job depends on the subgroup of people being assigned to handle the
job. Note that the same person may belong to several different subgroups. Now the
question is how we can schedule the jobs so that the project can be finished as early
as possible. Other applications include (i) the berth allocation problem [21] where
a large vessel may occupy several berths for loading and unloading, (ii) diagnosable
microprocessor systems [20] where a job must be performed on parallel processors in
order to detect faults, (iii) manufacturing, where a job may need machines, tools, and
people simultaneously, and (iv) scheduling a sequence of meetings where each meeting
requires a certain group of people [11]. In the scheduling literature [17], this kind of
problems are called multiprocessor job scheduling problems.
Among the others, two types of multiprocessor job scheduling problems have
been extensively studied [7, 22]. The first type is the Pm jfixjC max problem, in which
the subset of processors and the processing time for parallel processing each job are
fixed. The second type is a more general version, the Pm jsetjC max problem, in which
each job may have a number of alternative processing modes and each processing
mode specifies a subset of processors and the job processing time on that particular
processor subset. The objective for both problems is to construct a scheduling of
minimum makespan on the m-processor system for a given list of jobs. The jobs are
supposed to be non-preemptive.
Approximability of the multiprocessor job scheduling problems has been studied.
The problem is a generalized version of the classical job scheduling prob-
Department of Computer Science, Texas A&M University, College Station,
Email: chen@cs.tamu.edu. Supported in part by the National Science Foundation under Grant
CCR-9613805.
y Department of Computer Science, Bucknell University, Lewisburg, Pennsylvania 17837, Email:
amiranda@eg.bucknell.edu.
lem on a 2-processor system [13], thus it is NP-hard. Hoogeveen et al. [18] showed
that the P 3 jfixjC max problem (thus also the P 3 jsetjC max problem) is NP-hard in the
strong sense thus it does not have a fully polynomial time approximation scheme
5]). Blazewicz et al. [4] developed a polynomial time approximation
algorithm of ratio 4=3 for the problem P 3 jfixjC max , which was improved
later by Dell'Olmo et al. [10], who gave a polynomial time approximation algorithm of
ratio 5=4 for the same problem. Both algorithms are based on the study of a special
type of schedulings called normal schedulings. Goemans [14] further improved the
algorithms by giving a polynomial time approximation algorithm of ratio 7=6 for the
recently, Amoura et al. [1] developed a polynomial time
approximation scheme for the problem Pm jfixjC max for every fixed integer m.
Approximation algorithms for the Pm jsetjC max problem were not as successful as
that for the Pm jfixjC max problem. Bianco et al. [3] presented a polynomial time
approximation algorithm for the Pm jsetjC max problem whose approximation ratio is
bounded by m. Chen and Lee [8] improved their algorithm by giving a polynomial
time approximation algorithm for the Pm jsetjC max problem with an approximation
showed that the problem P 3 jsetjC max can be approximated
in polynomial time with a ratio 7=6 ffl. Before the present paper, it was
unknown whether there is a polynomial time approximation algorithm with ratio c
for the problem Pm jsetjC max , where c is a constant independent of the number m of
processors in the system.
In this paper, we present a polynomial time approximation scheme for the problem
Pm jsetjC max . Our algorithm combines the techniques developed by Amoura et
al. [1], who split jobs into large jobs and small jobs, and the techniques developed by
Bell'Olmo et al. [10] and Goemans [14] on normal schedulings, plus the standard dynamic
programming and scaling techniques. More precisely, based on a classification
of large jobs and small jobs, we introduce the concept of (m; ffl)-canonical schedulings,
which can be regarded as a generalization of the normal schedulings. We show that
for any job list, there is an (m; ffl)-canonical scheduling whose makespan is very close
to the optimal makespan. Then we show how this (m; ffl)-canonical scheduling can be
approximated. Combining these two steps gives us a polynomial time approximation
scheme for the Pm jsetjC max problem.
Our result is the best possible in the following sense: because the problem
Pm jsetjC max is NP-hard in the strong sense, it is unlikely that our algorithm can
be further improved to a fully polynomial time approximation scheme [13]. More-
over, the polynomial time approximation scheme cannot be extended to the more
general problem P jsetjC max , in which the number m of processors in the system is
given as a parameter in the input: it can be shown that there is a constant
such that the problem P jsetjC max has no polynomial time approximation algorithms
whose approximation ratio is bounded by n ffi [23].
The paper is organized as follows. Section 2 gives necessary background and
preliminaries for the problem. In section 3 we introduce (m; ffl)-canonical schedulings
and study their properties. Section 4 presents the polynomial time approximation
scheme for the problem Pm jsetjC max , and section 5 concludes with some remarks and
further research directions.
2. Preliminaries. The Pm jsetjC max problem is a scheduling problem minimizing
the makespan for a set of jobs, each of which may have several alternative processing
modes. More formally, an instance J of the problem P_m|set|C_max is a list of jobs
J = {J_1, J_2, ..., J_n}; each job J_i is associated with a list of alternative processing
modes M_{i,1}, M_{i,2}, ..., M_{i,m_i}. Each processing mode (or simply mode) M_ij is specified
by a pair (Q_ij, t_ij), where Q_ij is a subset of processors in the m-processor system and
t_ij is an integer indicating the parallel processing time of the job J_i on the processor set Q_ij.
In case there is no ambiguity, we also say that the processor set Q ij is a
mode for the job J_i. For each job J_i, let
min_i be the minimum of the values t_{i,1}, ..., t_{i,m_i}. The value min_i will be called the
minimum parallel processing time for the job J i .
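In code, an instance can be represented by the following illustrative types (the field names are assumptions of this sketch, not notation of the paper):

    from typing import FrozenSet, List, NamedTuple

    class Mode(NamedTuple):
        processors: FrozenSet[int]   # the subset Q_ij of the m processors
        time: int                    # the parallel processing time t_ij on that subset

    class Job(NamedTuple):
        modes: List[Mode]
        def min_time(self):
            # the minimum parallel processing time min_i of the job
            return min(mode.time for mode in self.modes)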
Given a list of jobs, a scheduling \Gamma(J ) of J on the m-processor
system consists of two parts: (1) determination of a processing mode for each job
J i in J ; and (2) determination of the starting execution time for each job under the
assigned mode so that at any moment, each processor in the system is used for (maybe
parallel) processing at most one job (assuming that the system starts at time 0).
The makespan of the scheduling \Gamma(J ) is the latest finishing time of a job in J under
the scheduling \Gamma(J ). Let Opt(J ) denote the minimum makespan over all schedulings
for J . The Pm jsetjC max problem is for a given instance J to construct a scheduling
of makespan Opt(J ) for J .
Let P_m be the set of the m processors in the m-processor system. A collection
{Q_1, ..., Q_k} of k nonempty, pairwise disjoint subsets of P_m whose union is P_m is a k-partition of P_m. A
collection of subsets of P_m is a partition of P_m if it is
a k-partition for some integer k ≥ 1. The total number B_m of different partitions of
the set P_m is called the mth Bell number [16]. Using the formula of Comtet [9], one can
bound B_m explicitly; a looser but simpler upper bound, B_m ≤ m!, can be easily proved by induction.
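For concreteness, the Bell numbers can be computed by the standard triangle recurrence; the sketch below only makes the quantity B_m explicit and is not part of the scheduling algorithm itself:

    def bell(m):
        # Bell number B_m, the number of partitions of an m-element set
        # (B_1 = 1, B_2 = 2, B_3 = 5, B_4 = 15, ...), via the Bell triangle.
        row = [1]
        for _ in range(m - 1):
            new_row = [row[-1]]
            for x in row:
                new_row.append(new_row[-1] + x)
            row = new_row
        return row[-1]   # in particular bell(m) <= factorial(m) for every m >= 1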
Another combinatorial fact we need for analysis of our scheduling algorithm is
the "cut-index" in a nonincreasing sequence of integers.
Lemma 2.1. Let T = {t_1, t_2, ..., t_n} be a nonincreasing sequence of integers, let
m ≥ 1 be a fixed integer and ε > 0 be an arbitrary real number. Then there is an
index j_0, bounded by a constant (with respect to m and ε), such that
(1) j_0 = (3mB_m)^i, where i ≤ ⌊m/ε⌋ is an integer; and
(2) for any subset T' of at most 3j_0 m B_m integers t_q in T with q > j_0, we have
Σ_{t_q ∈ T'} t_q ≤ (ε/m) Σ_{q=1}^{n} t_q.
Proof. To simplify expressions, let b 1. Decompose the sum
Since
there are at most bm=fflc subsums A j larger than
be the first subsum such that A k+1 (ffl=m)
. Since the sum of the first b k+1
integers t q in T with q ?
m is bounded by (ffl=m)
(ffl=m)
and the sequence is nonincreasing, we conclude that for any subset
T 0 of T of at most 3j 0 mBm integers t q with q ? j 0 , we must have
This completes the proof.
For the nonincreasing sequence T of integers, we will denote by j m;ffl the smallest
index that satisfies conditions (1) and (2) in Lemma 2.1. The index j m;ffl will be called
the cut-index for the sequence T .
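A sketch of one way to locate a cut-index with the property used below (any 3 j_0 m B_m of the processing times beyond position j_0 sum to at most an (ε/m)-fraction of the total); the candidate values tried for j_0 follow the reconstruction of condition (1) above and are therefore an assumption of this sketch:

    def cut_index(t, m, eps, bell_m):
        # t: processing times sorted in nonincreasing order (1-based in the paper,
        #    0-based list here); bell_m: the Bell number B_m, e.g. bell(m) from above.
        total = sum(t)
        step = 3 * m * bell_m
        i = 0
        while True:
            j0 = step ** i
            # since t is nonincreasing, the worst subset of step*j0 entries beyond
            # position j0 is simply the next step*j0 entries
            if sum(t[j0:j0 + step * j0]) <= (eps / m) * total:
                return j0
            i += 1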
3. On (m; ffl)-canonical schedulings. In this section, we first assume that the
mode assignment for each job in the instance J is decided, and discuss how we
schedule the jobs in J under the mode assignment to the processor set Pm . By this
assumption, the job list J is actually an instance for the Pm jfixjC max problem (recall
that the Pm jfixjC max problem is the problem Pm jsetjC max with the restriction that
every job in an instance has only one processing mode).
Let J = {J_1, ..., J_n} be an instance for the P_m|fix|C_max problem, where each
job J_i requires a fixed set Q_i of processors for parallel execution with processing time t_i.
Without loss of generality, assume that the processing time
sequence T = {t_1, ..., t_n} is nonincreasing.
For the fixed number m of processors in the system, and for an arbitrarily given
real number ffl ? 0, let j m;ffl be the cut-index for the sequence T , as defined in
Lemma 2.1. That is, j_{m,ε} = (3mB_m)^i, where i is an integer bounded by ⌊m/ε⌋,
and for any subset T' of at most 3j_{m,ε} m B_m integers t_q in T with q > j_{m,ε}, we have
Σ_{t_q ∈ T'} t_q ≤ (ε/m) Σ_{q=1}^{n} t_q.
We split the job set J into two subsets:
(1)    J_L = {J_1, J_2, ..., J_{j_{m,ε}}}   and   J_S = {J_{j_{m,ε}+1}, ..., J_n}.
The jobs in J_L will be called large jobs and the jobs in J_S will be called small jobs.
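Continuing the sketch, the split of Equation (1) is then immediate (using cut_index from above):

    def split_jobs(jobs, m, eps, bell_m):
        # jobs: list of (processing_time, processor_subset) pairs, already sorted by
        # nonincreasing processing time.  Returns (large jobs, small jobs).
        times = [time for time, _ in jobs]
        j0 = cut_index(times, m, eps, bell_m)
        return jobs[:j0], jobs[j0:]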
Let \Gamma(J ) be a scheduling for the job set J . Consider the nondecreasing sequence
of integers, where h,
are the starting or finishing times of the j m;ffl large jobs in \Gamma(J ). A small job block
in \Gamma(J ) consists of a subset P 0 ' Pm of processors and a time interval [
such that the subset of processors are exactly those that are
executing large jobs in the time interval [ will be called
the height and the processor set P 0 will be called the type of the small job block .
Therefore, the subset P 0 of processors associated with the small job block are
those processors that are either idle or used for executing small jobs in the time
interval Note that the small job block can be of height 0 when
The small job block of time interval [ is the latest finish time
of a large job, will be called the "last small job block". Note that the last small job
block has type Pm .
Let be a small job block associated with a processor set P 0 and a time interval
The small job block at any time moment in the time interval [
can be characterized uniquely as a collection [Q of pairwise disjoint subsets
of the processor set P 0 such that at the time , for processors in
the subset Q i are used for parallel execution on the same small job (thus, the subset
is the subset of idle processors at time ). The collection [Q
will be called the type of the time moment . A layer in the small job block is a
such that all time moments between i
and j are of the same type. The type of the layer is equal to the type of any time
moment in the layer and the height of the layer is
Let L 1 and L 2 be two layers in the small job block of types [Q
respectively. We say that layer L 1 covers layer L 2 if fR
g. In particular, if L 1 and L 2 are two consecutive layers in the small job
block such that layer L 2 starts right after layer L 1 finishes and L 1 covers L 2 , then
layer L 2 is actually a continuation of the layer L 1 with some of the small jobs finished.
Definition 3.1. A floor oe in the small job block is a sequence fL
of consecutive layers such that (1) for h, layer L i starts right after layer
and (2) all small jobs interlacing layer
in layer L 1 and all small jobs interlacing layer L h finish in layer L h .
An example of a floor is given in Figure 1(a). Note that a small job block may
not have any nonempty floor at all, as shown in Figure 1(b).
Remark 1. There are a few important properties of floors in a small job block.
Suppose that the layer L 1 starts at time 1 while layer L h finishes at time 2 . Then
by property (2) in the definition, no small jobs cross the floor boundaries 1 and 2 .
Therefore, the floor oe can be regarded as a single job that uses the processor set P 0 ,
starts at time 1 and finishes at time 2 . The height of the floor oe is defined to be
which is equal to the sum of the heights of the layers L 1 Secondly,
since all floors in the small job block are for the same processor subset P 0 and
there are no small jobs crossing the starting and finishing times of any floors, the
floors in the same small job block can be rearranged in any order but can still fit
into the small job block without exceeding the height of the small job block. Finally,
property (1) in the definition ensures that no matter how the small jobs in a floor are
rearranged, a simple greedy algorithm is sufficient to refit the small jobs into the floor
without exceeding the floor height. The greedy algorithm is based on the idea of the
well-known Graham's scheduling algorithm for the classical job scheduling problem
[15].
Definition 3.2. Let J be an instance of the problem P_m|fix|C_max and let π be
any permutation of the jobs in J. The list scheduling algorithm based on the ordering
π is to schedule each job J_i of mode Q_i in J, following the ordering of π, at the
earliest time when the processor subset Q i becomes available.
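A sketch of this rule; each job is encoded as a (mode, time) pair, a mode being a set of processor names, and the routine starts every job at the earliest moment at which all processors of its mode have finished the previously placed jobs (the encoding is an assumption of the sketch):

    def list_schedule(jobs):
        # jobs: list of (mode, time) pairs in the chosen order,
        #       mode a frozenset of processor names
        free = {}                       # processor -> time at which it becomes idle
        starts = []
        for mode, time in jobs:
            start = max((free.get(p, 0) for p in mode), default=0)
            starts.append(start)
            for p in mode:
                free[p] = start + time
        makespan = max(free.values(), default=0)
        return starts, makespan

Lemma 3.7 below applies this same routine with each tower treated as a single job.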
Lemma 3.3. Let J oe be the set of small jobs in the floor oe. The list scheduling
algorithm based on any ordering of the jobs in J oe will always reconstruct the floor
oe.
Proof. Suppose that the first layer L 1 in the floor oe is of type
According to property (1) in the definition, every job in J oe must have a mode Q i
for some i. By the definition, each layer covers the layer L j , therefore, in the
floor oe, no processor subset Q i can become idle before its final completion time. Now
Fig. 1. (a) A floor {L_1, L_2, L_3}; (b) a small job block with no floor.
since the subsets Q are pairwise disjoint, the jobs of mode Q i in J oe can be
executed by the processor subset Q i in any order without changing the completion
time of Q i . Therefore, regardless of the ordering of the jobs in J oe , as long as the list
scheduling algorithm starts each job at its earliest possible time (thus no subset
can become idle before its final completion time), the completion time for each subset
will not be changed. Therefore, the list scheduling algorithm will construct a floor
with exactly the same layers L 1
Definition 3.4. Let [Q be a partition of the processor subset P 0 . We
say that we can assign the type [Q to a floor if the type of
the layer L 1 is a subcollection of fQ g.
Note that it is possible that we can assign two different types to the same floor as
long as the type of the floor is a subcollection of the assigned floor types. For example,
let be a partition of the processor subset P 0 . If the first layer L 1 in a
floor oe is of type [Q then we can assign either type
to the floor oe.
Definition 3.5. A small job block is a tower if it is constituted by a sequence
of floors such that we can assign types to the floors so that no two floors in the tower
are of the same type.
Note that since each floor type is a partition of the processor subset P 0 , a tower
contains at most Bm floors, where Bm , the mth Bell number, is the number of different
partitions of a set of m elements.
In our discussion, we will be concentrating on schedulings of a special form, in
the following sense.
Definition 3.6. Let J be an instance of the problem Pm jfixjC max , which is
divided into large job set JL and small job set JS as given in Equation (1) for a fixed
fixed constant ffl ? 0. A scheduling \Gamma(J ) of J is (m; ffl)-canonical
if every small job block in \Gamma(J ) is a tower.
Remark 2. Note that in an (m; ffl)-canonical scheduling, no small jobs cross the
boundary of a tower. Therefore, a tower of height t and associated with a processor
set Q can be simply regarded as a job of mode (Q; t).
We first show that an (m; ffl)-canonical scheduling \Gamma(J ) of J can be constructed
by the list scheduling algorithm when large jobs and towers in \Gamma(J ) are given in a
proper order.
Lemma 3.7. Let \Gamma(J ) be an (m; ffl)-canonical scheduling for the job set J . Let
be the sequence of the large jobs and towers in \Gamma(J ), ordered in terms of their
starting times in \Gamma(J ). Then the list scheduling algorithm based on the ordering ,
which regards each tower as a single job, constructs a scheduling of J with makespan
not larger than that of \Gamma(J ).
Proof. Let be any prefix of the ordered sequence , where each
J j is either a large job or a tower. Let \Gamma(J i ) be the scheduling of J i obtained from
\Gamma(J ) by removing all large jobs and towers that are not in J i , and let \Gamma 0 (J i ) be the
scheduling by the list scheduling algorithm on the jobs in J i . It suffices to prove
that for all i, the completion time of any processor in \Gamma 0 (J i ) is not larger than the
completion time of the same processor in \Gamma(J i ). We prove this by induction on i.
The case Now suppose that the mode for the job (or tower)
requires the processor subset Q i+1 for parallel processing time t i+1 . Let be
the earliest time in the scheduling \Gamma(J i ) at which the processor subset Q
available and let 0 be the earliest time in the scheduling \Gamma 0 (J i ) at which the processor
subset available. The list scheduling algorithm will start J i+1 at time
thus in the scheduling \Gamma 0 (J i+1 ), the completion time of each processor in the subset
. On the other hand, in the scheduling \Gamma(J i+1 ), the job J i+1
cannot start earlier than since according to the definition of the ordered sequence
, J i+1 cannot start until all jobs in J i have started. Therefore, in the scheduling
\Gamma(J i+1 ), the completion time of each processor in Q i+1 is at least which is
not smaller than since the inductive hypothesis assumes that 0 . Finally,
for each processor not in the subset Q i+1 , the completion time in \Gamma 0 (J i+1 ) is equal to
that in \Gamma 0 (J i ), which by the induction is not larger than that in \Gamma(J i ), which is equal
to the completion time of the same processor in \Gamma(J i+1 ).
Thus, once the ordering of large jobs and towers is decided, it is easy to construct
a scheduling that is not worse than the given (m; ffl)-canonical scheduling. In the
following, we will prove that for any instance J for the problem Pm jfixjC max , there is
an (m; ffl)-canonical scheduling whose makespan is very close to the optimal makespan.
Theorem 3.8. Let J be an instance for the problem Pm jfixjC max . Then for
any ffl ? 0, there is an (m; ffl)-canonical scheduling \Gamma(J ) of J such that the makespan
of \Gamma(J ) is bounded by (1 + ε)Opt(J ).
Proof. be an optimal scheduling of makespan Opt(J ) for J . We
construct an (m; ffl)-canonical scheduling for J based on the optimal scheduling
Let JL and JS be the set of large jobs and the set of small jobs in J , respectively,
according to the definition in Equation (1). Consider a small job block in the
scheduling
Assume that the small job block is associated with a processor set P 0 of r
processors, r m, and a time interval [ be the list of all
partitions of the processor set P 0 , where We divide the layers in the
small job block into groups, each corresponding to a partition of P 0 , as follows.
A layer of type T 0 is put in the group corresponding to a partition T j if T 0 is a
subcollection of T j . Note that a layer type T 0 may be a subcollection of more than
one partition of P 0 . In this case, we put the layer arbitrarily into one and only one of
the groups to ensure that each layer belongs to only one group.
For each partition T j of P 0 , we construct a floor frame oe j whose type is T j and
height is equal to the sum of heights of all layers belonging to the group corresponding
to the partition T j . Note that so far we have not actually assigned any small jobs to
any floor frames oe 1 yet. Moreover, since each layer belongs to exactly one of
the groups, it is easy to see that the sum
of the heights of the floor
frames oe 1 is equal to the sum of the heights of all layers in the small job block
, which is equal to the height of the small job block .
The construction for the floor frames for the last small job block in \Gamma 1 (J ) is
slightly different: for which we only group layers in which not all processors are idle.
Thus, the sum of the heights of all floor frames in the last small job block is equal to
is the latest finish time for some large job in the scheduling
After the construction of the floor frames for each small job block in the scheduling
the small jobs in JS to the floor frames using the following greedy
method. For each small job J that requires a parallel processing by a processor subset
Q, we assign J to an arbitrary floor frame oe in a small job block as long as the floor
frame oe satisfies the following conditions: (1) the type of the floor frame oe contains
the subset Q; and (2) adding the job J to oe does not exceed the height of the floor
frame oe (if there are more than one floor frames satisfying these conditions, arbitrarily
pick one of them). Note that we assign a job to a floor frame only when the mode
of the job is contained in the type of the floor frame. Therefore, this assignment will
never leave a "gap" between two jobs in the same floor frame.
The above assignment of small jobs in JS to floor frames stops when none of the
small jobs left in JS can be assigned to any of the floor frames according to the above
rules. Now each floor frame becomes a floor.
For each small job block in \Gamma 1 (J ), let S be the set of floor frames in . Since
the height of a resulting floor is not larger than the height of the corresponding floor
frame, the sum of the heights of the floors resulting from the floor frames in S is
not larger than the height of the small job block . Therefore, we can put all these
floors into the small job block (in an arbitrary order) to make a tower. Doing this
for all small job blocks in \Gamma 1 (J ) gives an (m; ffl)-canonical scheduling
the job set JL [ J 0
S is the set of small jobs that have been assigned to the
floor frames in the above procedure. The makespan of the scheduling
is bounded by Opt(J ). Now the only thing left is that we still need to schedule the
small jobs that have not been assigned to any floor frames. Let J 00
S be the
set of small jobs that are not assigned to any floor frames by the above procedure.
We want to demonstrate that there are not many jobs in the set J 00
S .
By the definition, the number of small job blocks in the scheduling \Gamma 1 (J ) is
2j m;ffl +1 3j m;ffl . Since each small job block is associated with at most m processors,
the number of floor frames constructed in each small job block is bounded by Bm .
Therefore, the total number of floor frames we constructed from the scheduling
is bounded by 3Bm j m;ffl . Moreover, each floor type is a collection of at most m
processor subsets.
If the set J 00
contains more than 3mBm j m;ffl small jobs, then there must be a
subset Q of processors such that the number of small jobs of mode Q in J 00
S is larger
than the number of the constructed floor frames whose type contains the subset Q.
be the set of floor frames whose type contains the subset Q.
By our assignment rules, assigning any job of mode Q in J 00
S to a floor frame in
would exceed the height of the corresponding floor frame. Since there
are more than d small jobs of mode Q in J 00
S , the sum of processing times of all small
jobs of mode Q in JS is larger than
On the other hand, by our
construction of the floor frames in each small job block , the sum of the heights of
the floor frames in whose type contains Q should not be smaller than the sum of
the heights of the layers in whose type contains Q. Summarizing this over all small
job blocks, we conclude that the sum
smaller than the sum
of processing times of all small jobs of mode Q in JS (since each small job of mode
Q must be contained in consecutive layers whose type contains Q). This derives a
contradiction. The contradiction shows that there are at most 3mBm j m;ffl small jobs
in the set J 00
S .
Now we assign the small jobs in J 00
S to the floor frames in the last small job block
in the scheduling \Gamma 2 (JL [J 0
S ). For each small job J of mode Q in J 00
S , we arbitrarily
assign J to a floor frame whose type contains Q in the last small job block, even this
assignment exceeds the height of the floor frame. Note that the last small job block
is associated with the whole processor set Pm , so for any mode Q, there must be a
floor frame in the last small job block whose type contains the processor subset Q.
This procedure stops with all small jobs in J 00
S assigned to floor frames in the last
small job block. It is easy to see that the resulting scheduling is an (m; ffl)-canonical
scheduling of the original job set J . Moreover, since the makespan of the
scheduling
S ) is bounded by Opt(J ), the makespan of the (m; ffl)-canonical
scheduling \Gamma 3 (J ) is bounded by
where t(J) is the parallel processing time of the small job J . Since there are at most
small jobs in the set J 00
S , by Lemma 2.1,
It is easy to see that Opt(J ) ≥ (1/m) Σ_{q=1}^{n} t_q. Therefore, the makespan of the (m, ε)-
canonical scheduling \Gamma_3(J ) is bounded by (1 + ε)Opt(J ). This completes the proof
of the theorem.
Before we close this section, we introduce one more definition.
Definition 3.9. Let oe be a floor of type
are pairwise disjoint subsets of processors in the processor set Pm . Then each
subset plus the height l is called a room in the floor oe, whose type is Q i .
4. The approximation scheme. Now we come back to the original problem
Pm jsetjC max . Recall that an instance J of the problem Pm jsetjC max is a set of jobs
each job J i is given by a list of alternative processing modes
in which the pair (Q specifies the parallel processing
time t i;j of the job J i on the subset Q i;j of processors in the m-processor system.
In order to describe our polynomial time approximation scheme for the problem,
let us first discuss why this problem is more difficult than the classical job scheduling
problem.
In the classical job scheduling problem, each job is executed by one processor in
the system. Therefore, the order of executions of jobs in each processor is not crucial:
the running time of the processor is simply equal to the sum of the processing times
of the jobs assigned to the processor. Therefore, the decision of which job should
be assigned to which processor, in any order, will uniquely determine the makespan
of the resulting scheduling. This makes it possible to use a dynamic programming
approach that extends a scheduling for a subset of jobs to that for a larger subset.
The situation in the general multiprocessor job scheduling problem Pm jsetjC max ,
on the other hand, is more complicated. In particular, the makespan of a scheduling
depends not only on the assignment of processing modes to jobs, but also on the order
in which the jobs are executed. Therefore, the techniques of extending a scheduling
for a subset of jobs in the classical job scheduling problem are not directly applicable
here.
Theorem 3.8 shows that there is an (m; ffl)-canonical scheduling whose makespan
is very close to the optimal makespan. Therefore, constructing a scheduling whose
makespan is not larger than the makespan of a good (m; ffl)-canonical scheduling will
give a good approximation to the optimal schedulings.
Nice properties of an (m; ffl)-canonical scheduling are that within the same tower,
the order of the floors does not affect the height of the tower, and that within the same
floor, the order of the small jobs does not affect the height of the floor (see Remark
1 and Remark 2 in the previous section). Therefore, the only factor that affects the
heights of towers and floors are the assignments of jobs to towers and floors. This
makes it become possible, at least for small jobs, to apply the techniques in classical
job scheduling problem to our current problem. This is described as follows.
First suppose that we can somehow divide the job set J into large job set JL
and small job set JS . Let us start with an (m; ffl)-canonical scheduling \Gamma(J ) of the
set J . The scheduling \Gamma(J ) gives a nondecreasing sequence f of
integers, where are the starting
or finishing times of the j m;ffl large jobs in JL . Let the corresponding towers
be g, where the tower j consists of a subset P 0
j of processors and the
We suppose that the subset P 0
j of processors associated with each tower j is
known, and that the large jobs and towers of the scheduling \Gamma(J ) are ordered into a
sequence in terms of their starting times. However, we assume that the assignment
of small jobs to the rooms of the scheduling \Gamma(J ) is unknown. We show how this
information can be recovered.
For each tower j associated with the processor set P 0
, the number of floors in
the tower j is q r is the number of processors in the set P 0
.
Let oe j;1 be the floors of all possible different types in the tower j . For
each floor oe j;q , let fl j;q;1 jq be the rooms in the floor oe j;q , where r jq m.
Therefore, the configuration of the small jobs in the (m; ffl)-canonical scheduling \Gamma(J )
Algorithm. Schedule-Small
Input: The set JS of small jobs and an order of the large jobs
and towers in \Gamma(J )
Output: A scheduling for the job set J
1. set D[0; 0, ..., 0] = True and all other entries of D to False;
2. for i = 1 to n_S do
      for each mode Q_ij of the small job J'_i do
         for each room γ_{j,q,r} and each configuration with D[i - 1; ·] = True such that the job J'_i
         under mode Q_ij is addable to the room γ_{j,q,r} do
            set D[i; ·] = True for the configuration in which t_{j,q,r} is increased by t_ij;
3. for each configuration with D[n_S; ·] = True do
      call the list scheduling algorithm based on the order π to
      construct a scheduling for J in which the room γ_{j,q,r} has
      running time t_{j,q,r} for all (j, q, r);
4. return the scheduling constructed in step 3 with the minimum makespan.
Fig. 2. Scheduling small jobs in floors
can be specified by a ((2j_{m,ε} + 1) · B_m · m)-tuple (t_{1,1,1}, ..., t_{2j_{m,ε}+1,B_m,m}),
where t_{j,q,r} specifies the running time of the room γ_{j,q,r} (for an index (j, q, r) for which
the corresponding room γ_{j,q,r} does not exist, we can simply set t_{j,q,r} = 0).
Suppose that an upper bound T_0 for the running time of rooms is derived. Then
we can use a Boolean array D of (2j_{m,ε} + 1) · B_m · m + 1 dimensions to describe the
configuration of a subset of small jobs in a scheduling:
    D[i; t_{1,1,1}, ..., t_{2j_{m,ε}+1,B_m,m}],   0 ≤ i ≤ n_S,   0 ≤ t_{j,q,r} ≤ T_0,
where n_S is the number of small jobs in J, such that D[i; t_{1,1,1}, ..., t_{2j_{m,ε}+1,B_m,m}] = True
if and only if there is a scheduling on the first i small jobs to the floors in \Gamma(J ) such
that the running time of the room fl j;q;r is t j;q;r (recall that the running time of a
room is dependent only on the assignment of small jobs to the room and independent
of the order in which the small jobs are executed in the room). Initially, all array
elements in the array D are set to False.
Suppose that a configuration of a scheduling for the first i - 1 small jobs is given:
(2)    D[i - 1; t_{1,1,1}, ..., t_{2j_{m,ε}+1,B_m,m}] = True.
We say that the ith small job J 0
under mode Q i is addable to a room fl j;q;r in the
configuration in (2) if the room fl j;q;r is of type Q i and adding the job J 0
i to the room
does not exceed the upper bound T 0 of the running time of the room fl j;q;r .
Now we are ready to present our dynamic programming algorithm for scheduling
small jobs into the rooms in the (m; ffl)-canonical scheduling \Gamma(J ). The algorithm is
given in Figure 2.
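To make the dynamic programming of Figure 2 concrete, the following is a minimal Python sketch of step 2, not the authors' implementation: it assumes each small job is given as a list of (mode, processing time) pairs, each room is represented only by its type, and "addable" is exactly the test defined above; all names are illustrative.

def schedule_small_configurations(small_jobs, rooms, T0):
    """Collect all reachable room running-time configurations (step 2 of
    Schedule-Small).  small_jobs: list of jobs, each a list of
    (mode, processing_time) pairs; rooms: list of room types; T0: upper
    bound on the running time of any room."""
    # the all-zero tuple is the configuration with no small job placed yet
    configurations = {tuple(0 for _ in rooms)}
    for job in small_jobs:                      # i = 1, ..., n_S
        next_configurations = set()
        for mode, t in job:                     # each mode Q_ij of job J'_i
            for conf in configurations:         # each True entry for i - 1 jobs
                for r, room_type in enumerate(rooms):
                    # "addable": matching room type, bound T0 not exceeded
                    if room_type == mode and conf[r] + t <= T0:
                        new_conf = list(conf)
                        new_conf[r] += t
                        next_configurations.add(tuple(new_conf))
        configurations = next_configurations
    return configurations

Step 3 of Schedule-Small would then run the list scheduling algorithm once for every configuration returned here and keep the scheduling of minimum makespan.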
Note that the algorithm Schedule-Small may not return an (m; ffl)-canonical
scheduling for the job set J . In fact, there is no guarantee that the height of the towers
constructed in the algorithm does not exceed the height of the corresponding towers
in the original (m; ffl)-canonical scheduling \Gamma(J ). However, we can show that the
scheduling constructed by the algorithm Schedule-Small has its makespan bounded
by the makespan of the original (m; ffl)-canonical scheduling \Gamma(J ).
Lemma 4.1. For all i, 0 <= i <= nS , the array element D[i; t 1;1;1 , . . . , t j;q;r , . . .] has value True
if and only if there is a way to assign modes to the first i small jobs and arrange them
into the rooms such that the room fl j;q;r has running time t j;q;r for all fj; q; rg.
Proof. We prove the lemma by induction on i. The case i = 0 can be easily
verified.
Suppose that there is a way W to assign modes to the first i small jobs and
arrange them into the rooms such that the room fl j;q;r has running time t j;q;r for all
fj; q; rg. Suppose that W assigns the ith small job J 0 i , under a mode Q ij of processing
time t ij , to the room fl j0 ;q0 ;r0 . Removing the job J 0 i from W , we obtain a way that assigns
modes to the first i - 1 small jobs and arranges them into the rooms such that the
room fl j;q;r has running time t j;q;r for all fj; q; rg 6= fj0 ; q0 ; r0 g and the room fl j0 ;q0 ;r0
has running time t j0 ;q0 ;r0 - t ij . By the inductive hypothesis, the corresponding array
element for the first i - 1 small jobs has value True.
Now in the ith execution of the for loop in step 2 in the algorithm Schedule-Small,
when the mode of the small job J 0 i is chosen to be Q ij with processing time t ij , the
algorithm will assign the value True to the array element D[i; t 1;1;1 , . . . , t j0 ;q0 ;r0 , . . .].
The other direction of the lemma can be proved similarly. We omit it here.
The above lemma gives us directly the following corollary.
Corollary 4.2. If the sequence of large jobs and towers is ordered in terms
of their starting times in the (m; ffl)-canonical scheduling \Gamma(J ), then the algorithm
Schedule-Small constructs a scheduling for job set J whose makespan is bounded
by the makespan of the (m; ffl)-canonical scheduling \Gamma(J ).
Proof. Note that the (m; ffl)-canonical scheduling \Gamma(J ) gives a way to assign
and arrange all small jobs in JS into the rooms. According to Lemma 4.1, the
corresponding array element in the array D must have value True.
For this array element, step 3 of the algorithm will construct the towers that have exactly
the same types and heights as their corresponding towers in the (m; ffl)-canonical
scheduling \Gamma(J ) (this may not give exactly the same assignment of small jobs to
rooms. However, the running times of the corresponding rooms must be exactly the
same). Now since the sequence is given in the order sorted by the starting times of
the large jobs and towers in the (m; ffl)-canonical scheduling \Gamma(J ), by Lemma 3.7, the
call in step 3 to the list scheduling algorithm based on the order and this configuration
will construct a scheduling whose makespan is not larger than the makespan
of the (m; ffl)-canonical scheduling \Gamma(J ).
Finally, since step 4 of the algorithm returns the scheduling of the minimum
makespan constructed in step 3, we conclude that the algorithm returns a scheduling
whose makespan is not larger than the makespan of \Gamma(J ).
We analyze the algorithm Schedule-Small.
Lemma 4.3. Let T 0 be the upper bound used by the algorithm Schedule-Small
on the running time of the rooms. Then the running time of the algorithm Schedule-
Small is bounded by O(n2 m m;ffl T m;ffl
Proof. The number nS of small jobs in JS is bounded by the total number n of
jobs in J , each small job may have at most 2 different modes. Also as we
indicated before, the number of rooms is bounded by
the running time for each room is bounded by T 0 , for each fixed i, there cannot be more
than T m;ffl
Finally, for each
we can check each of the m;ffl component values t j;q;r to decide if the job J 0
under
is addable to the room fl j;q;r . In conclusion, the running time of step 2 in
the algorithm Schedule-Small is bounded by
O(n
We will also attach the mode assignment and room assignment of the job J 0
to each element True. With this information, from a given
configuration True, a corresponding scheduling for the small
jobs in the rooms can be constructed easily by backtracking the dynamic programming
procedure and its makespan can be computed in time m;ffl . Therefore, step 3 of the
algorithm takes time
In conclusion, the running time of the algorithm Schedule-Small is bounded
by O(n2 m m;ffl T m;ffl
We now discuss how an upper bound T 0 for the running time of rooms can
be derived. Consider an instance J = fJ 1 , . . . , J n g of the problem Pm jsetjC max
and a positive real number ffl > 0, where each job J i is specified by a list of alternative
processing modes. Recall that t i denotes the smallest processing time over all
processing modes of the job J i . Then the sum T 0 = t 1 + t 2 + · · · + t n is obviously an
upper bound on the makespan of the (m; ffl)-canonical schedulings for J (T 0 is the
makespan of a straightforward scheduling that assigns each job J i the mode corresponding
to its smallest processing time and then starts each job J i when the previous job J i-1 finishes. There-
fore, if no (m; ffl)-canonical scheduling has makespan better than T 0 , we simply return
this straightforward scheduling). In particular, the value T 0 is an upper bound for the
running time of all rooms. Moreover, since the job set J takes at least T 0 amount of
"work" (the work taken by a job is equal to the parallel processing time multiplied by
the number of processors involved in this processing) and the system has m processors,
the value T 0 /m also provides a lower bound for the optimal makespan Opt(J ).
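As a small illustration (assuming, as above, that each job is given as a list of (mode, processing time) pairs), the bound T 0 can be computed in one pass over the job list:

def upper_bound_T0(jobs):
    """Sum, over all jobs, of the smallest processing time among the job's
    modes.  T0 upper-bounds the makespan of the canonical schedulings (run
    the jobs one after another, each in a fastest mode), while T0/m
    lower-bounds the optimal makespan on m processors."""
    return sum(min(t for _, t in job) for job in jobs)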
In order to apply the algorithm Schedule-Small, we first need to decide how the
set J is split into the large job set JL and the small job set JS , what the modes for the
large jobs are, what the types for the towers are, and what the ordering of the large
jobs and towers is, on which the list scheduling algorithm can be applied. According to
Lemma 2.1, the number j m;ffl of large jobs is bounded in terms of bm=fflc, and by the
definition, the number of towers is 2j m;ffl + 1. When m and
ffl are fixed, the number of large jobs and the number of towers are both bounded
by a constant. Therefore, we can use any brute force method to exhaustively try all
possible cases.
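A sketch of such a brute force enumeration, with illustrative names (processor_subsets stands for the candidate tower types, and j0 for the guessed number of large jobs), could look as follows.

from itertools import combinations, permutations, product

def enumerate_large_job_choices(jobs, j0, processor_subsets):
    """Enumerate every choice made before a call to Schedule-Small: which j0
    jobs are large, one mode per large job, a sequence of 2*j0 + 1 towers,
    and an ordering of the large jobs and towers.  For fixed m and epsilon
    all four quantities are bounded by constants."""
    n = len(jobs)
    for large in combinations(range(n), j0):
        for modes in product(*(range(len(jobs[i])) for i in large)):
            for towers in product(processor_subsets, repeat=2 * j0 + 1):
                items = [('job', i) for i in large] + \
                        [('tower', t) for t in range(2 * j0 + 1)]
                for ordering in permutations(items):
                    yield large, dict(zip(large, modes)), towers, ordering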
To achieve a polynomial time approximation scheme for the problem Pm jsetjC max ,
we combine the standard scaling techniques [19] with the concept of (m; ffl)-canonical
schedulings, as follows.
Algorithm. Approx-Scheme
Input: An instance J for the problem Pm jsetjCmax and a real number ffi > 0
Output: A scheduling of J
1. compute the scaling factor K;
2. let J 0 be the job set obtained by scaling the job set J by K;
3. for j 0 = 0 to bm=fflc do
3.1. for each subset J 0 L of j 0 jobs in J 0 do
3.2. for each mode assignment A to the jobs in J 0 L do
3.3. for each possible sequence of 2j 0 + 1 towers do
3.4. for each ordering of the j 0 jobs in J 0 L and the 2j 0 + 1 towers do
        call Schedule-Small on the small job set J 0 S = J 0 \ J 0 L and the ordering
        to construct a scheduling for the job set J 0 (use the upper bound for
        the running time of rooms derived for the scaled instance J 0 );
4. let \Gamma 0 (J 0 ) be the scheduling constructed in step 3 with the
   minimum makespan;
5. replace each job J 0 i by the corresponding job J i to obtain
   a scheduling \Gamma 0 (J ) for the job set J ;
6. return the job scheduling \Gamma 0 (J ).
Fig. 3. The approximation scheme
Let J = fJ 1 , . . . , J n g be an instance of the Pm jsetjC max problem. We let K be the
chosen scaling factor and construct another instance J 0 = fJ 0 1 , . . . , J 0 n g for the problem.
That is, the jobs in J 0 are identical to those in J except that all
processing times t ij are replaced by bt ij =Kc. We say that the job set J 0 is obtained
from the job set J by scaling the processing times by K. We apply the algorithm
discussed above to the instance J 0 to construct a scheduling for J 0 , from which a
scheduling for J is induced. The formal algorithm is presented in Figure 3.
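The scaling step itself is elementary; a sketch under the same job representation as before:

def scale_jobs(jobs, K):
    """Return the scaled instance J': every processing time t becomes
    floor(t / K); the processing modes themselves are kept unchanged."""
    return [[(mode, t // K) for mode, t in job] for job in jobs]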
We explain how step 5 converts the scheduling \Gamma 0 (J 0 ) for the job set J 0 into a
scheduling for the job set J . We first multiply the processing time and the
starting time of each job J 0 i in the scheduling \Gamma 0 (J 0 ) by K (but keeping the processing
mode). That is, for the job J 0 i of mode Q ij and processing time bt ij =Kc, we replace it
by a job J 00 i of mode Q ij and processing time K bt ij =Kc, and let it start at K times
its starting time in \Gamma 0 (J 0 ). This is equivalent to proportionally "expanding" the
scheduling \Gamma 0 (J 0 ) by the factor K. Now on this expansion of the scheduling \Gamma 0 (J 0 ),
following the order of the jobs in terms of their finish times, we do a "correction" on processing
times by increasing the processing time of each job J 00 i from K bt ij =Kc to t ij . (Note
that this increase in processing time may cause many jobs in the scheduling to delay
their starting time by less than K time units. In particular, each such correction may cause
the makespan of the scheduling to increase by less than K time units.) After the
corrections on the processing time for all jobs in J , we obtain a scheduling \Gamma 0 (J ) for
the job set J .
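The expansion-and-correction step can be sketched as follows. This is only an illustration of the argument above, not the authors' procedure: it conservatively shifts every later job by the accumulated correction, which can only delay starting times and therefore keeps the schedule feasible.

def expand_and_correct(schedule, jobs, K):
    """Expand a schedule of the scaled instance by K and restore the true
    processing times.  schedule: list of (job_index, mode, start, scaled_time)
    tuples for J'; jobs: the original instance, as lists of
    (mode, processing_time) pairs.  Returns corrected (start, finish) times."""
    corrected = []
    delay = 0
    # process the jobs in the order of their finish times in the expansion
    for i, mode, start, scaled_t in sorted(schedule, key=lambda s: s[2] + s[3]):
        true_t = dict(jobs[i])[mode]            # original processing time t_ij
        new_start = K * start + delay           # expanded start plus delays so far
        corrected.append((i, mode, new_start, new_start + true_t))
        delay += true_t - K * scaled_t          # each correction adds less than K
    return corrected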
Lemma 4.4. For fixed m >= 2 and ffi > 0, the running time of the algorithm
Approx-Scheme for the problem Pm jsetjC max is bounded by O(n m;ffl +j m;ffl +1 ), where
the constants m;ffl and j m;ffl depend only on m and ffl.
Proof. Since the integer k is bounded by bm=fflc, the number j 0 of large jobs in J 0
is bounded by j . Therefore, there are at most
ways to choose the large job set J 0
L . Since each job may have up to 2
alternative mode assignments, the total number of mode assignments to each large
set J 0
L is bounded by Each tower is associated with a subset
of the processor set Pm of m processors. Thus, each tower may be associated with
different subsets of Pm . Therefore, the number of different sequences of
up to 2j m;ffl +1 towers is bounded by . Finally, the number of
permutations of the j 0 large jobs and 2j 0 +1 towers is (3j 0 +1)!. Summarizing all these
together, we conclude that the number of times that the algorithm Schedule-Small
is called is bounded by:
O(bm=fflc
When the algorithm Schedule-Small is applied to the job set J 0
S , the upper
bound on the running time of the rooms is
According to Lemma 4.3, each call to the algorithm Schedule-Small takes time
where
Combining Equations (3) and (4), and noting that m and ffi (and thus ffl) are fixed
constants, we conclude that the running time of the algorithm Approx-Scheme is
bounded by O(n m;ffl +j m;ffl +1 ).
Now we are ready to present our main theorem.
Theorem 4.5. The algorithm Approx-Scheme is a polynomial time approximation
scheme for the problem Pm jsetjC max .
Proof. As proved in Lemma 4.4, the algorithm Approx-Scheme runs in polynomial
time when m and ffi are fixed constants. Therefore, we only need to show
that the makespan of the scheduling \Gamma 0 (J ) constructed by the algorithm Approx-
Scheme for an instance J of the problem Pm jsetjC max is at most (1 + ffi) times the
optimal makespan Opt(J ) for the instance J . Again let ffl be the constant determined
by ffi as before.
Let \Gamma(J ) be an optimal scheduling of makespan Opt(J ). Under the scheduling
\Gamma(J ), the mode assignments of the jobs are fixed. Thus, this particular mode assignment
makes us able to split the job set J into large job set JL and small job set
JS according to the job processing times. According to Theorem 3.8, there is an (m; ffl)-
canonical scheduling \Gamma 1 (J ) for the instance J , under the same mode assignments,
such that the makespan of \Gamma 1 (J ) is bounded by (1 + ffl) Opt(J ).
Consider a room fl j;q;r in the (m; ffl)-canonical scheduling \Gamma 1 (J ). Suppose that
J p 1 , . . . , J p h are the small jobs assigned to the room fl j;q;r by the scheduling \Gamma 1 (J ).
The running time of this room is the sum of the processing times of these jobs under
their modes in \Gamma 1 (J ), which are the same as under \Gamma(J ). Therefore, under the same
mode assignments (with each processing time replaced by its scaled value) and the
same room assignments, the corresponding scheduling \Gamma 1 (J 0 ) for the job set J 0 has
no rooms with running time exceeding the upper bound used for the scaled instance.
Thus, by Lemma 4.1, when
step 3 of the algorithm Approx-Scheme loops to the stage in which the large job set
and their mode assignments, the tower types, and the ordering of the large jobs and the
towers all match those in the scheduling \Gamma 1 (J 0 ), the array element
corresponding to the room configuration of the scheduling \Gamma 1 (J 0 ) must have value
True. Thus, a scheduling \Gamma 0 1 (J 0 ) based on this configuration is constructed and its
makespan is calculated. Note that the scheduling \Gamma 0 1 (J 0 ) may not be exactly the
scheduling \Gamma 1 (J 0 ), but its makespan is not larger than the makespan of \Gamma 1 (J 0 ).
Since step 4 of the algorithm Approx-Scheme picks the scheduling \Gamma 0 (J 0 ) that
has the smallest makespan over all schedulings for J 0 constructed in step 3, we conclude
that the makespan of the scheduling \Gamma 0 (J 0 ) is not larger than the makespan of
the scheduling \Gamma 0 1 (J 0 ), and hence not larger than the makespan of the scheduling \Gamma 1 (J 0 ).
As we described in the paragraph before Lemma 4.4, to obtain the corresponding
scheduling for the job set J , we first expand the scheduling \Gamma 0 (J 0 ) by K (i.e.,
multiplying the job processing times and starting times in \Gamma 0 (J 0 ) by K). Let the
resulting scheduling be \Gamma 0 (J 00 ). Similarly we expand the scheduling \Gamma 1 (J 0 ) by K to
obtain a scheduling \Gamma 1 (J 00 ). The makespan of the scheduling \Gamma 0 (J 00 ) is not larger
than the makespan of the scheduling \Gamma 1 (J 00 ) since they are obtained by proportionally
expanding the schedulings respectively, by the same factor K.
Moreover, the makespan of \Gamma 1 (J 00 ) is not larger than the makespan of the (m; ffl)-
canonical scheduling \Gamma 1 (J ). This is because these two schedulings use the same large
job set under the same mode assignment, the same small job set under the same mode
assignment and room assignment, and the same order of large jobs and towers. The
only difference is that the processing time t ij of each job J i in \Gamma 1 (J ) is replaced by the
possibly smaller processing time K bt ij =Kc of the corresponding job J 00 i in \Gamma 1 (J 00 ). In
consequence, we conclude that the makespan of the scheduling \Gamma 0 (J 00 ) is not larger
than the makespan of the (m; ffl)-canonical scheduling \Gamma 1 (J ), which is bounded by
(1 + ffl) Opt(J ).
Finally, to obtain the scheduling \Gamma 0 (J ) for the job set J , we make corrections on
the processing times of the jobs in the scheduling \Gamma 0 (J 00 ). More precisely, we replace
the processing time K bt ij =Kc of each job J 00 i by t ij , which is the processing time of the
job J i in the job set J . Correcting the processing time for each job J 00 i
may make the makespan of the scheduling increase by less than K.
Therefore, after the corrections of processing time for all jobs in J 00 , the makespan of
the finally resulting scheduling \Gamma 0 (J ) for the job set J , constructed by the algorithm
Approx-Scheme, exceeds the makespan of \Gamma 0 (J 00 ) by less than nK and, by the choice
of the scaling factor K, is bounded by (1 + ffi) Opt(J ).
Here we have used the fact that Opt(J ) >= T 0 =m.
This completes the proof of the theorem.
5. Conclusion and remarks. In this paper, we have developed a polynomial
time approximation scheme for the Pm jsetjC max problem for any fixed constant m.
The result is achieved by combinations of the recent techniques developed in the area
of multiprocessor job schedulings plus the classical dynamic programming and scaling
techniques. Note that this result is a significant improvement over the previous results
on the problem: no previous approximation algorithms for the problem Pm jsetjC max
have their approximation ratio bounded by a constant that is independent of the
number m of processors in the system. Our result also confirms a conjecture made
by Amoura et al. [1]. In the following we make a few remarks on further work on the
problem.
The multiprocessor job scheduling problem seems an intrinsically difficult prob-
lem. For example, if the number m of processors in the system is given as a variable
in the input, then the problem becomes highly nonapproximable: there is a constant
ffi > 0 such that no polynomial time approximation algorithm for the problem can have an
approximation ratio smaller than n^ffi [23]. Observing this, plus the
difficulties in developing good approximation algorithms for the problem, people had
suspected that the Pm jsetjC max problem might be MAX-NP
hard already for some fixed m [8]. The present paper completely eliminates this possibility,
since a MAX-NP hard problem cannot admit a polynomial time approximation scheme
unless P = NP [2].
The current form of our polynomial time approximation scheme may not be practically
useful, yet. Even for a small integer m and a reasonably small constant ffl, the
time complexity of our algorithm is bounded by a polynomial of very high degree.
On the other hand, our algorithm shows that there are very "normalized" schedul-
ings whose makespan is close to the optimal ones, and that these "good" normalized
schedulings can be constructed systematically. We are interested in investigating the
tradeoff between the degree of this kind of normalization and the time complexity
of approximation algorithms. In particular, we are interested in developing more
practical polynomial time algorithms for systems with small number of processors,
such as P 4 jsetjC max . Note that currently there is no known practical approximation
algorithm for the P 4 jsetjC max problem whose approximation ratio is smaller than 2
(a ratio 2 approximation algorithm for the problem follows from Chen and Lee's recent
work on the general Pm jsetjC max problem [8]). Moreover, so far all approximation
algorithms for the Pm jsetjC max problem rely on the technique of dynamic
programming, which in general results in algorithms of high complexity. Are there
any other techniques that may avoid dynamic programming?
Acknowledgement. The authors would like to thank Don Friesen and Frank
Ruskey for their helpful discussions.
--R
Scheduling independent multiprocessor tasks
Proof verification and the hardness of approximation problems
Scheduling multiprocessor tasks on a dynamic configuration of dedicated processors
Scheduling multiprocessor tasks on the three dedicated processors
"Scheduling multiprocessor tasks on the three dedicated processors, Information Processing Letters 41, (1992), pp. 275-280."
Scheduling multiprocessor tasks to minimize scheduling length
Scheduling multiprocessor tasks - a survey
General multiprocessor tasks scheduling
Efficiency and effectiveness of normal schedules on three dedicated processors
Simultaneous resource scheduling to minimize weighted flow times
Complexity of scheduling parallel task systems
Computers and Intractability: A Guide to the Theory of NP-Completeness
An approximation algorithm for scheduling on three dedicated machines
Bounds for certain multiprocessing anomalies
Concrete Mathematics
Approximation algorithms for scheduling
Complexity of scheduling multi-processor tasks with prespecified processor allocations
Fast approximation algorithms for the Knapsack and sum of subset problems
An approximation algorithm for diagnostic test scheduling in multicomputer systems
Scheduling multiprocessor tasks without prespecified processor allo- cations
Current trends in deterministic scheduling
Approximation algorithms in multiprocessor task scheduling
--TR
--CTR
C. W. Duin , E. Van Sluis, On the Complexity of Adjacent Resource Scheduling, Journal of Scheduling, v.9 n.1, p.49-62, February 2006
Klaus Jansen , Lorant Porkolab, Polynomial time approximation schemes for general multiprocessor job shop scheduling, Journal of Algorithms, v.45 n.2, p.167-191, November 2002
Jianer Chen , Xiuzhen Huang , Iyad A. Kanj , Ge Xia, Polynomial time approximation schemes and parameterized complexity, Discrete Applied Mathematics, v.155 n.2, p.180-193, January, 2007 | multiprocessor processing;polynomial time approximation scheme;job scheduling;approximation algorithm |
586942 | Linear Time Algorithms for Hamiltonian Problems on (Claw,Net)-Free Graphs. | We prove that claw-free graphs, containing an induced dominating path, have a Hamiltonian path, and that 2-connected claw-free graphs, containing an induced doubly dominating cycle or a pair of vertices such that there exist two internally disjoint induced dominating paths connecting them, have a Hamiltonian cycle. As a consequence, we obtain linear time algorithms for both problems if the input is restricted to (claw,net)-free graphs. These graphs enjoy those interesting structural properties. | Introduction
. Hamiltonian properties of claw-free graphs have been studied
extensively in the last couple of years. Different approaches have been made, and a
couple of interesting properties of claw-free graphs have been established (see [1, 2, 3,
5, 6, 13, 14, 15, 16, 19, 22, 23, 25, 26]). The purpose of this work is to consider the
algorithmic problem of finding a Hamiltonian path or a Hamiltonian cycle efficiently.
It is not hard to show that both the Hamiltonian path problem and the Hamiltonian
cycle problem are NP-complete, even when restricted to line graphs [28]. Hence, it is
quite reasonable to ask whether one can find interesting subclasses of claw-free graphs
for which efficient algorithms for the above problems exist.
Already in the eighties, Duffus, Jacobson, and Gould [12] defined the class of
(claw,net)-free (CN-free) graphs, i.e., graphs that contain neither an induced claw
nor an induced net (see Figure 1.1). Although this definition seems to be rather
restrictive, the family of CN-free graphs contains a couple of graph families that are
of interest in their own right. Examples of those families are unit interval graphs,
claw-free asteroidal triple-free (AT-free) graphs, and proper circular arc graphs. In
their paper [12], Duffus, Jacobson, and Gould showed that this class of graphs has the
nice property that every connected CN-free graph contains a Hamiltonian path and
every 2-connected CN-free graph contains a Hamiltonian cycle. Later, Shepherd [27]
proved that there is an O(n 6 ) algorithm for finding such a Hamiltonian path/cycle in
CN-free graphs. Note also that CN-free graphs are exactly the Hamiltonian-hereditary
# Received by the editors June 23, 1999; accepted for publication (in revised form) January 13,
2000; published electronically November 28, 2000. An extended abstract of these results has been
presented at the 25th International Workshop on Graph-Theoretic Concepts in Computer Science,
Lecture Notes in Comput. Sci. 1665, Springer-Verlag, New York, 1999, pp. 364-376.
http://www.siam.org/journals/sicomp/30-5/35777.html
Fachbereich Informatik, Universität Rostock, A. Einstein Str. 21, D-18051 Rostock, Germany
(ab@informatik.uni-rostock.de).
# Department of Mathematics and Computer Science, Kent State University, Kent, OH 44242
(dragan@mcs.kent.edu). The research of this author was supported by the German National Science
Foundation (DFG).
Fachbereich Mathematik, Technische Universität Berlin, Straße des 17. Juni 136, D-10623 Berlin,
Germany (ekoehler@math.TU-Berlin.DE). The research of this author was supported by the graduate
program "Algorithmic Discrete Mathematics," grant GRK 219/2-97 of the German National Science
Foundation (DFG).
graphs [10], i.e., the graphs for which every connected induced subgraph contains a
Hamiltonian path.
In this paper we give a constructive existence proof and present linear time algorithms
for the Hamiltonian path and Hamiltonian cycle problems on CN-free graphs.
The important structural property that we exploit for this is the existence of an induced
dominating path in every connected CN-free graph (Theorem 2.3). The concept
of a dominating path was first used by Corneil, Olariu, and Stewart [8] in the context
of AT-free graphs. They also developed a simple linear time algorithm for finding
such a path in every AT-free graph [7]. As we show in Theorem 2.3, for the class of
CN-free graphs, a linear time algorithm for finding an induced dominating path exists
as well. This property is of interest for our considerations since we prove that all
claw-free graphs that contain an induced dominating path have a Hamiltonian path
(Theorem 3.1). The proof implies that, given a dominating path, one can construct
a Hamiltonian path for a claw-free graph in linear time.
For 2-connected claw-free graphs, we show that the existence of a dominating
pair is su#cient for the existence of a Hamiltonian cycle. dominating pair is a
pair of vertices such that every induced path connecting them is a dominating path.)
Again, given a dominating pair, one can construct a Hamiltonian cycle in linear time
(Theorem 5.6). This already implies, for example, a linear time algorithm for finding
a Hamiltonian cycle in claw-free AT-free graphs, since every AT-free graph contains a
dominating pair and it can be found in linear time [9]. Unfortunately, CN-free graphs
do not always have a dominating pair. For example, an induced cycle with more
than six vertices is CN-free but does not have such a pair of vertices. Nevertheless,
2-connected CN-free graphs have another nice property: they have a good pair or an
induced doubly dominating cycle. An induced doubly dominating cycle is an induced
cycle such that every vertex of the graph is adjacent to at least two vertices of the
cycle. A good pair is a pair of vertices, such that there exist two internally disjoint
induced dominating paths connecting these vertices. We prove that the existence of
an induced doubly dominating cycle or a good pair in a claw-free graph is sufficient
for the existence of a Hamiltonian cycle (Theorems 5.1 and 5.5). Moreover, given an
induced doubly dominating cycle or a good pair of a claw-free graph, a Hamiltonian
cycle can be constructed in linear time. In section 4 we present an O(m + n)
algorithm which, for a given 2-connected CN-free graph, finds either a good pair or
an induced doubly dominating cycle.
For terms not defined here, we refer to [11, 17]. In this paper we consider finite
connected undirected graphs G = (V, E) without loops and multiple edges. The
cardinality of the vertex set is denoted by n, whereas the cardinality of the edge set
is denoted by m.
A path is a sequence of vertices (v 0 , . . . , v l ) such that all v i are distinct and
v i v i+1 ∈ E for all 0 <= i < l; its length is l. An induced path is a path whose
vertices induce no edges in G other than the path edges. A cycle (k-cycle) is a sequence
(v 0 , . . . , v k-1 , v 0 ) such that (v 0 , . . . , v k-1 ) is a path and v k-1 v 0 ∈ E; its length is k.
An induced cycle is a cycle whose vertices induce no edges other than the cycle edges.
A hole H k is an induced cycle of length k >= 5.
The distance dist(v, u) between vertices v and u is the smallest number of edges
in a path joining v and u. The eccentricity ecc(v) of a vertex v is the maximum
distance from v to any vertex in G. The diameter diam(G) of G is the maximum
eccentricity of a vertex in G. A pair v, u of vertices of G with dist(v, u) = diam(G)
is called a diametral pair.
Fig. 1.1. The claw K(a; b, c, d) and the net N(a, b, c; x, y, z).
For every vertex v ∈ V , we denote by N(v) the set of all neighbors of v, i.e.,
N(v) = {u ∈ V : dist(u, v) = 1}. The closed neighborhood of v is defined by
N[v] = N(v) ∪ {v}. For a vertex v and a set of vertices S ⊆ V , the minimum distance
between v and the vertices of S is denoted by dist(v, S). The closed neighborhood N[S]
of a set S ⊆ V is defined by N[S] = ∪ v∈S N[v].
We say that a set S ⊆ V dominates G if N[S] = V , and that S doubly dominates
G if every vertex of G has at least two neighbors in S. An induced path of G
which dominates G is called an induced dominating path. A shortest path of G which
dominates G is called a dominating shortest path. Analogously one can define an
induced dominating cycle of G. A dominating pair of G is a pair of vertices v,
such that every induced path between v and u dominates G. A good pair of G is a pair
of vertices v, u # V , such that there exist two internally disjoint induced dominating
paths connecting v and u.
The claw is the induced complete bipartite graph K 1,3 , and for simplicity, we
refer to it by K(a; b, c, d) (see Figure 1.1). The net is the induced six-vertex graph
N(a, b, c; x, y, z) shown in Figure 1.1. A graph is called CN-free or, equivalently, (claw,
net)-free if it contains neither an induced claw nor an induced net. An asteroidal triple
of G is a triple of pairwise nonadjacent vertices, such that for each pair of them there
exists a path in G that does not contain any vertex in the neighborhood of the third
one. A graph is called AT-free if it does not contain an asteroidal triple. Finally, a
Hamiltonian path or Hamiltonian cycle of G is a path or cycle, respectively, containing
all vertices of G.
2. Induced dominating path. In this section we give a constructive proof
for the property that every connected CN-free graph contains an induced dominating
path. In fact, we show that there is an algorithm that finds such a path in linear time.
To prove the main theorem of this section we will need the following two lemmas.
Lemma 2.1 (see [12]). Let P = (x 1 , . . . , x k ) be an induced path of a CN-free
graph G, and let v be a vertex of G such that dist(v, P ) = 2. Then any neighbor y of
v with dist(y, P ) = 1 is adjacent to x 1 or to x k .
Lemma 2.2. Let P be an induced path connecting vertices v and u of a CN-free
graph G. Let also s be a vertex of G such that s ∉ N[P ] and dist(v, s) <= dist(v, u).
Then
1. for every shortest path P # connecting v and s, P ∩ P # = {v};
2. if there is an edge xy of G such that x ∈ P \ {v} and y ∈ P # \ {v}, then both
x and y are neighbors of v.
Proof. Let y be the vertex of P # \ {v} which is closest to s and has a neighbor
x on P \ {v}; clearly, y #= s. Let s # , v # be the neighbors of y on the subpaths of P #
connecting y with s and y with v, respectively. Since s # /
# N[P ], by Lemma 2.1, vertex
y must be adjacent to v or to u. If yu # E, then v # u # E, too (otherwise, we have
a claw K(y; s # , v # , u)). But now dist(v, u) # dist(v,
dist(v, u), and a contradiction arises. Therefore, y is adjacent to v, and since y /
the paths P and P # have only the vertex v in common. Moreover, to avoid a claw
vertex x has to be adjacent to v.
Theorem 2.3. Every connected CN-free graph G has an induced dominating
path, and such a path can be found in O(n +m) time.
Proof. Let G be a connected CN-free graph. One can construct an induced
dominating path in G as follows. Take an arbitrary vertex v of G. Using breadth first
search (BFS), find a vertex u with the largest distance from v and a shortest path P
connecting u with v. Check whether this path P dominates G. If so, we are done.
Now, assume that the set S = V \ N[P ] is not empty. Again, using BFS, find a vertex s
in S with largest distance from v and a shortest path P # connecting v with s. Create a
new path P ## by joining P and P # in the following way: P ## is obtained from P ∪ P #
by replacing the subpath (x, v, y) with the edge xy if there
is a chord xy between the paths P and P # (see Lemma 2.2), and P ## = P ∪ P #
otherwise. By Lemma 2.2, the path P ## is induced. It remains to show that this path
dominates G.
Assume there exists a vertex t ∈ V \ N[P ## ]. First, we claim that t is dominated
neither by P nor by P # . Indeed, if t ∈ (N[P ] ∪ N[P # ]) \ N[P ## ], then necessarily tv ∈ E
and v ∉ P ## , i.e., the neighbors x ∈ P and y ∈ P # of v are adjacent. Therefore, we get
a net N(v, y, x; t, s # , u # ), where s # and u # are the vertices at distance two from v on
paths P # and P , respectively. Note that vertices s # , u # exist because dist(v, s) >= 2
(and hence also dist(v, u) >= 2).
Thus, t is dominated neither by P nor by P # . Moreover, from the choice of u
and s we have 2 <= dist(v, t) <= dist(v, s) <= dist(v, u). Now let Q be a shortest path
connecting t with v, and let z be a neighbor of v on this path. Applying Lemma
2.2 twice (to P, Q and to P # , Q), we obtain a subgraph of G depicted in Figure
2.1. We have three shortest paths P, P # , Q, each of length at least 2 and with only
one common vertex v. These paths can have only chords of type zx, zy, xy. Any
combination of them leads to a forbidden claw or net. This contradiction completes
the proof of the theorem. Evidently, the method described above can be implemented
to run in linear time.
Fig. 2.1.
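The construction in the proof of Theorem 2.3 amounts to two breadth first searches and one join step. The following Python sketch assumes the graph is given as a dictionary mapping each vertex to its set of neighbors; all helper names are ours.

from collections import deque

def bfs(graph, source):
    """Breadth first search; returns distance and parent maps."""
    dist, parent = {source: 0}, {source: None}
    queue = deque([source])
    while queue:
        x = queue.popleft()
        for y in graph[x]:
            if y not in dist:
                dist[y], parent[y] = dist[x] + 1, x
                queue.append(y)
    return dist, parent

def path_to(parent, t):
    """Recover the BFS path from the source to t."""
    path = []
    while t is not None:
        path.append(t)
        t = parent[t]
    return path[::-1]

def dominated(graph, path):
    """Closed neighborhood N[path]."""
    return set(path) | {y for x in path for y in graph[x]}

def induced_dominating_path(graph, v):
    """Sketch of Theorem 2.3: find u farthest from v and a shortest v-u path P;
    if P does not dominate, take the farthest undominated vertex s, a shortest
    v-s path P', and join the two paths at v (shortcutting through the unique
    chord xy of Lemma 2.2 if it exists)."""
    dist_v, parent_v = bfs(graph, v)
    u = max(dist_v, key=dist_v.get)
    P = path_to(parent_v, u)                          # v ... u
    rest = set(graph) - dominated(graph, P)
    if not rest:
        return P
    s = max(rest, key=dist_v.get)
    P1 = path_to(parent_v, s)                         # v ... s
    # by Lemma 2.2, any chord has x = P[1] and y = P1[1], both neighbors of v
    chord = next(((x, y) for x in P[1:] for y in P1[1:] if y in graph[x]), None)
    if chord is None:
        return P1[::-1] + P[1:]                       # s ... v ... u
    x, y = chord
    return P1[:0:-1] + P[P.index(x):]                 # s ... y, x ... u (skip v)

On a CN-free input the returned path is induced and dominating, by the argument in the proof above.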
It is not clear whether CN-free graphs can be recognized efficiently. But, to apply
our method for finding an induced dominating path in these graphs, we do not need
to know in advance that a given graph G is CN-free. Actually, our method can be
applied to any graph G. It either finds an induced dominating path or returns either
a claw or a net of G, showing that G is not CN-free.
Corollary 2.4. There is a linear time algorithm that for a given (arbitrary)
connected graph G either finds an induced dominating path or outputs an induced claw
or an induced net of G.
Proof. Let G be a graph. For an arbitrary vertex v of G, we find a vertex u with
the largest distance from v and a shortest path P connecting u with v. If P dominates
G, then we are done. Else, we find a vertex s # V \ N[P ] with the largest distance
from v and a shortest path P # connecting v with s. If there are vertices in P # \ {v}
which have a neighbor on P \ {v}, we take the vertex y that is closest to s and check
whether y is adjacent to v and u. If it is adjacent neither to u nor to v, then G has a
net or a claw (see the proof of Lemma 2.1). If yu # E or yv # E and a neighbor x of
y on P \ {v} is not adjacent to v, then G has a claw (see Lemma 2.2). Now, if we did
not yet find a forbidden subgraph, then the only possible chord between the paths P
and P # is xy with xv, yv ∈ E, and we can create an induced path P ## as described in
the proof of Theorem 2.3. Hence, it remains to check whether P ## dominates G. If
there exists a vertex t ∈ V \ N[P ## ], then again we will find a net or a claw in G (see
Theorem 2.3). It is easy to see that the total time bound of all these operations is
linear.
3. Hamiltonian path. In what follows we show that for claw-free graphs the
existence of an induced dominating path is a sufficient condition for the existence of
a Hamiltonian path. The proof for this result is constructive, implying that, given an
induced dominating path, one can find a Hamiltonian path efficiently.
Theorem 3.1. Every connected claw-free graph G containing an induced dominating
path has a Hamiltonian path. Moreover, given an induced dominating path, a
Hamiltonian path of G can be constructed in linear time.
Proof. Let G = (V, E) be a connected claw-free graph and let P = (x 1 , . . . , x k )
be an induced dominating path of G. If k = 1, then the vertex x 1 dominates G
and, since G is claw-free, there are no three independent vertices in G - {x 1 }. (By
G - {x 1 } we denote the subgraph of G induced by the vertices V \ {x 1 }.) If G - {x 1 } is
not connected, then, again because G is claw-free, it consists of two cliques C 0 , C 1
and a Hamiltonian path of G can easily be constructed. If G - {x 1 } is connected,
we can construct a Hamiltonian path as follows. First, we construct a maximal path
P 1 = (y 1 , . . . , y l ), i.e., a path such that all vertices that are not in P 1 are adjacent neither
to y 1 nor to y l . Let R be the set of all remaining vertices. If R = ∅, we are done. If there is
any vertex in R, it follows that y 1 y l # E since otherwise there are three independent
vertices in G- {x 1 }. Furthermore, any two vertices of R are joined by an edge, since
otherwise they would form an independent triple with y 1 (and with y l as well). Hence,
R induces a clique. Since G - {x 1 } is connected, there has to be an edge from a vertex
v R ∈ R to some vertex y i of P 1 . Now we can construct a Hamiltonian
path P of G:
P = (x 1 , y i+1 , . . . , y l , y 1 , . . . , y i , v R , R̄), where R̄ stands for an
arbitrary permutation of the vertices of R \ {v R }.
For k >= 2, we first construct a Hamiltonian path P 2 for the subgraph G # of G induced by N[x 1 ], as
described above, using x 1 as the dominating vertex. At least one endpoint of P 2 is
adjacent to x 2 since if G # -{x 1 } is not connected, x 2 has to be adjacent to all vertices
of either C 0 or C 1 (otherwise, there is a claw in G), and if G # - {x 1 } is connected, the
construction gives a path ending in x 1 which is, of course, adjacent to x 2 . To construct
a Hamiltonian path for the rest of the graph we define for each vertex x i (i # 2) of P
a set of vertices C i = N[x i ] \ (N[x 1 ] ∪ · · · ∪ N[x i-1 ]).
Each set C i forms a clique of G since if two
vertices u, v # C i are not adjacent, then the set u, v, x i , x i-1 induces a claw. Hence
we can construct a path P
stands for an arbitrary permutation of the vertices of C i \{x i+1 }. This path P # is
a Hamiltonian path of G because it obviously is a path, and, since P is a dominating
path, each vertex of G has to be either on P , P 2 , or in one of the sets C i .
For the case k = 1, finding the connected components of G - {x 1 } and
constructing the path P 1 can easily be done in linear time. For k # 2 we just have to
make sure that the construction of the sets C i can be done in O(n+m), and this can
be realized easily within the required time bound.
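The only step that needs care is the linear time computation of N[x 1 ] and the sets C i . A sketch (adjacency-set representation; each vertex is charged to the first path vertex that dominates it, which is how the sets can be built in O(n + m)):

def partition_by_dominating_path(graph, P):
    """Assign every vertex to the part of the first path vertex dominating it:
    part 1 collects N[x_1], and part i (i >= 2) collects the set C_i used in
    the proof of Theorem 3.1.  graph: dict of adjacency sets, P: vertex list."""
    owner = {}
    for i, x in enumerate(P, start=1):        # scan x_1, x_2, ..., x_k in order
        for y in [x] + list(graph[x]):
            owner.setdefault(y, i)            # keep only the smallest index
    parts = {}
    for y, i in owner.items():
        parts.setdefault(i, []).append(y)
    return parts                              # parts[1] = N[x_1]; parts[i] = C_i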
Theorem 3.2. Every connected CN-free graph G has a Hamiltonian path, and
such a path can be found in O(n +m) time.
Proof. By Theorem 2.3, every connected CN-free graph has an induced dominating
path P , and it can be found in linear time. Using the path P , by Theorem 3.1,
one can construct a Hamiltonian path of G in linear time.
Analogously to Corollary 2.4, we can state the following.
Corollary 3.3. There is a linear time algorithm that for a given (arbitrary)
connected graph G either finds a Hamiltonian path or outputs an induced claw or an
induced net of G.
Proof. The proof follows from Corollary 2.4 and the proof of Theorem 3.1.
4. Induced dominating cycle, dominating shortest path, or good pair.
In this section we show that every 2-connected CN-free graph G has an induced doubly
dominating cycle or a good pair. Moreover, we present an e#cient algorithm that, for
a given 2-connected CN-free graph G, finds either a good pair or an induced doubly
dominating cycle.
Lemma 4.1. Every hole of a connected CN-free graph G dominates G.
Corollary 4.2. Let H be a hole of a connected CN-free graph G. Every vertex
of V \ H is adjacent to at least two vertices of H.
A subgraph G # of G (doubly) dominates G if the vertex set of G # (doubly) dominates
G.
Lemma 4.3. Every induced subgraph of a connected CN-free graph G which is
isomorphic to S 3 or S - 3 (see Figure 4.1) dominates G.
Fig. 4.1. The graphs S 3 and S − 3 (on the vertices a, b, c, d, e, f).
Proof. Let G contain an induced subgraph isomorphic to S - 3 , and assume that it
does not dominate G. Then there must be a vertex s such that dist(s, S − 3 ) = 2. Let
x be a neighbor of s from N[S - 3 ]. If x is adjacent neither to a, nor to b, nor to c (see
Figure
4.1), then G contains a claw (e.g., if xf # E, then a claw K(f
Thus, without loss of generality, x has to be adjacent to a or b.
If xa # E, then x is adjacent neither to b nor to c, since otherwise we will get a
claw (K(x; a, b, s) or K(x; a, c, s)). To avoid a net N(a, e, d; x, b, c) vertex x must be
adjacent to e or d. But, if ex # E, then xd # E too. (Otherwise, we will have a claw
Analogously, if xd # E, then also xe # E. Hence, x is adjacent to both
e and d, and a net N(x, e, d; s, b, c) arises.
Now, we may assume that x is adjacent to b and not to a, c. To avoid a claw
K(b; x, e, f ), x must be adjacent to e or f . But again, xe # E if and only if xf # E.
(Otherwise, we get a net N(x, b, e; s, f, a) or N(x, b, f ; s, e, c).) Hence x is adjacent to
both e and f and a claw K(x; s, e, f) arises.
Consequently, S - 3 dominates G. Similarly, every induced S 3 (if it exists) dominates
G.
Lemma 4.4. Let P be an induced path connecting vertices v and u of a connected
CN-free graph G. Let s be a vertex of G such that s ∉ N[P ], dist(s, v) >= dist(v, u),
and dist(s, u) >= dist(v, u). Then G has an induced doubly dominating cycle, and such a
cycle can be found in linear time.
Proof. Let P v and P u be shortest paths connecting vertex s with v and u, re-
spectively. Both these paths as well as the path P have lengths at least 2. Since
s ∉ N[P ], a chord between P and P v , if it exists, is unique
and both its endvertices are adjacent to v (Lemma 2.2). The same holds for P and P u :
the endvertices of the chord (if it exists) are adjacent to u.
Now, without loss of generality, we suppose that dist(s, u) <= dist(s, v). Then,
by Lemma 2.2, the paths P u and P v have only the vertex s in common, and between P v
and P u at most one chord is possible, namely, the one with both endvertices adjacent
to s. Consequently, we have constructed an induced subgraph of G shown in Figure
4.2 (only chords s # s # , v # v # and u # u # are possible).
Fig. 4.2.
If the lengths of all three paths P, P v , P u are at least 3, then it is easy to see
that G has a hole H k (k # 6). Furthermore, if at least one of these paths has length
greater than or equal to 4, or two of them have lengths 3, then G must contain a hole
It remains to consider two cases: lengths of both P v and P u are 2 and
the length of P is 3 or 2. Clearly, in both of these cases the graph G contains either a
hole or an induced subgraph isomorphic to S -
3 or S 3 . By Corollary
4.2, every hole of G doubly dominates G.
Let G contain an S - 3 with vertex labeling shown in Figure 4.1. We claim that the
induced cycle (e, b, f, d, e) dominates G or G contains a hole H 6 . Indeed, if a vertex
s of G does not belong to S -
3 , then, by Lemma 4.3, it is adjacent to a vertex of S -
3 .
Suppose that s is adjacent to none of e, b, f, d. Then, without loss of generality, sa # E,
and we obtain an induced subgraph of G isomorphic either to a net N(e, a, d; b, s, c) or
to H depending on whether vertices s and c are adjacent. Hence,
we may assume that (e, b, f, d, e) dominates G, and since G is claw-free, this cycle is
doubly dominating.
Now let G contain an S 3 with vertex labeling shown in Figure 4.1. We will show
that every vertex of G is adjacent to at least two vertices of the cycle (e, f, d, e) or G
contains a hole H 5 . Suppose vertex s of G is adjacent to none of e, d. Then, by Lemma
4.3, s is adjacent to at least one of a, b, c, f . Let sf # E. To avoid a claw, vertex s
is adjacent to both b and c. But then a hole H 5 arises. Assume now that sf ∉ E; then,
without loss of generality, sa ∈ E. To avoid a net N(a, e, d; s, b, c), s
must be adjacent to b or c. In both cases a hole H 5 occurs.
Clearly, the construction of an induced doubly dominating cycle of G given above
takes linear time.
Theorem 4.5. There is a linear time algorithm that, for a given connected CN-
free graph G, either finds an induced doubly dominating cycle or gives a dominating
shortest path of G.
Proof. Let G be a connected CN-free graph. One can construct an induced doubly
dominating cycle or a dominating shortest path of G as follows (compare with the
proof of Theorem 2.3). Take an arbitrary vertex v of G. Find a vertex u with the
largest distance from v and a shortest path P connecting u with v. Check whether
P dominates G. If so, we are done; P is a dominating shortest path of G. Assume
now that the set S = V \ N[P ] is not empty. Find a vertex s in S with the largest
distance from v and a shortest path P v connecting v with s. Create again a new path
P ## by joining the shortest paths P and P v as in the proof of Theorem 2.3. We have
proven there that P ## dominates G. Now let P u be a shortest path between s and u.
If dist(s, u) <= dist(v, u), or both dist(s, u) > dist(v, u) and v ∉ N[P u ], then Lemma
4.4 can be applied to get an induced doubly dominating cycle of G in linear time.
Therefore, we may assume that dist(s, u) > dist(v, u) # dist(v, s) and v # N[P u ].
Now we show that the shortest path P u dominates G. If v lies on the path P u , then
and we are done. Otherwise, let x be a neighbor of v in P u . Note that
dist(v, s) > 1 and so x #= s, u. Since G is claw-free, v is adjacent to a neighbor
x. Assume, without loss of generality, that x is closer to s than y. If
we show that dist(v,
by the proof of Theorem 2.3, the path P u will dominate G (as a path obtained by
"joining" two shortest paths that connect v with u and v with s, respectively). By the
triangle condition, we have dist(u, s) < dist(v, u)+dist(v, s) (strict inequality because
Consequently,
Since all our proofs were constructive, we can conclude the following.
Corollary 4.6. There is a linear time algorithm that, for a given (arbitrary)
connected graph G, either finds an induced doubly dominating cycle, or gives a dominating
shortest path, or outputs an induced claw or an induced net of G.
Lemma 4.7. Let P = (v, . . . , u) be a dominating shortest path of a graph
G. Then max{ecc(v), ecc(u)} >= diam(G) - 1.
Proof. Let x, y be a diametral pair of vertices of G; that is,
If both x and y are on P , then necessarily {x,
and holds or, without loss of generality,
and Finally, if both x and y are in N[P
dist(x, y), then we may assume that at least one of x, y belongs to N(v), say, x.
Hence, dist(x, y)
A pair of vertices u, v of G with dist(u, v) = ecc(u) = ecc(v) is called a pair of
mutually furthest vertices.
Corollary 4.8. For a graph G with a given dominating shortest path, a pair of
mutually furthest vertices can be found in linear time.
Proof. Let P = (v, . . . , u) be a dominating shortest path of G; by Lemma 4.7 we may
assume that ecc(v) >= diam(G) - 1 holds. Denote by x a
vertex of G such that dist(v, x) = ecc(v). Note that both the eccentricity of v and a
vertex furthest from v can be found in linear time by BFS. Now, if ecc(x) = ecc(v),
then v, x are mutually furthest vertices of G. Else, ecc(x) > ecc(v) >= diam(G) - 1
must hold, and the vertices x and y, where y is a vertex with dist(x, y) = ecc(x), form a
diametral pair of G; in particular, dist(x, y) = ecc(x) = ecc(y) = diam(G), and so x, y
are mutually furthest vertices.
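A sketch of this two-sweep procedure, reusing the bfs helper from the sketch after Theorem 2.3 (v is assumed to be an endpoint of a dominating shortest path, so ecc(v) >= diam(G) - 1):

def mutually_furthest_pair(graph, v):
    """Return a pair of mutually furthest vertices, as in Corollary 4.8."""
    dist_v, _ = bfs(graph, v)
    x = max(dist_v, key=dist_v.get)           # dist(v, x) = ecc(v)
    dist_x, _ = bfs(graph, x)
    if max(dist_x.values()) == dist_v[x]:     # ecc(x) = ecc(v): v, x is the pair
        return v, x
    y = max(dist_x, key=dist_x.get)           # otherwise x, y is a diametral pair
    return x, y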
In what follows we will use the fact that in a 2-connected graph every pair of
vertices is joined by two internally disjoint paths. In order to actually find such a
pair of paths, one can use Tarjan's linear time depth first search (DFS) algorithm
for finding the blocks of a given graph. For the proof of Lemma 4.9, we refer to [21].
Lemma 4.9. Let G be a 2-connected graph, and let x, y be two different nonadjacent
vertices of G. Then one can construct in linear time two induced, internally
disjoint paths, both joining x and y.
Theorem 4.10. There is a linear time algorithm that, for a given 2-connected
CN-free graph G, either finds an induced doubly dominating cycle or gives a good pair
of G.
Proof. By Theorem 4.5, we get either an induced doubly dominating cycle or a
dominating shortest path of G in linear time. We show that, having a dominating
shortest path of a 2-connected graph G, one can find in linear time a good pair or
an induced doubly dominating cycle. By Corollary 4.8, we may assume that a pair
x, y of mutually furthest vertices of G is given. Let also P 1 , P 2 be two induced internally
disjoint paths connecting x and y in G. They exist and can be found in
linear time by Lemma 4.9 (clearly, we may assume that xy ∉ E, because otherwise
x and y together with any vertex z ∈ V \ {x, y} form a doubly
dominating triangle). If one of these paths, say, P 1 , is not dominating, then there
must be a vertex s # V \ N[P 1 ] . Since x, y are mutually furthest vertices of G,
we have dist(s, x) # dist(x, y), dist(s, y) # dist(x, y). Hence, we are in the conditions
of Lemma 4.4 and can find an induced doubly dominating cycle of G in linear
time.
Corollary 4.11. There is a linear time algorithm that, for a given (arbitrary)
2-connected graph G, either finds an induced doubly dominating cycle, or gives a good
pair, or outputs an induced claw or an induced net of G.
5. Hamiltonian cycle. In this section we prove that, for claw-free graphs, the
existence of an induced doubly dominating cycle or a good pair is sufficient for the
existence of a Hamiltonian cycle. The proofs are also constructive and imply linear
time algorithms for finding a Hamiltonian cycle.
Theorem 5.1. Every claw-free graph G that contains an induced doubly dominating
cycle has a Hamiltonian cycle. Moreover, given an induced doubly dominating
cycle, a Hamiltonian cycle of G can be constructed in linear time.
Proof. Let DC = (x 1 , x 2 , . . . , x k , x 1 ) be an induced doubly dominating cycle
of G. As before, we define C i = N[x i ] \ (N[x 1 ] ∪ · · · ∪ N[x i-1 ]) (2 <= i <= k). Each set C i forms
a clique of G; otherwise, we would have a claw. Furthermore, C k = ∅, and
the sets N[x 1 ], C 2 , . . . , C k-1 form a partition of the vertex set of G. Note that any
vertex adjacent to x k and not to any x j (1 < j < k) belongs to N[x 1 ], since the cycle DC
is doubly dominating. Let G # = G(N[x 1 ] \ {x 2 , x k }) be the subgraph of G induced
by N[x 1 ] \ {x 2 , x k }. If we show that there is a Hamiltonian path P in G # starting
at a neighbor of x k and ending at a neighbor of x 2 , then we are done; the cycle
(P, x 2 , C̄ 2 , x 3 , C̄ 3 , . . . , x k-1 , C̄ k-1 , x k ), closed back at the first vertex of P , is a
Hamiltonian cycle of G (recall that C̄ i
stands for an arbitrary permutation of the vertices of C i \ {x i+1 }).
Since G # is a connected graph (every vertex of G # is adjacent to x 1 ), by Theorem 3.1
there exists a Hamiltonian path P # = (s, . . . , t) of G # . Assume that x k s, x k t ∉ E.
Then, to avoid a claw K(x 1 ; s, t, x k ),
the vertices s and t have to be adjacent, giving a new Hamiltonian path
P # of G # starting at x 1 and ending at a vertex y. If y is adjacent neither to x k nor
to x 2 , then a claw K(x 1 ; y, x 2 , x k ) occurs. (Note that in case k = 3, vertex y
is adjacent to at least one of x k , x 2 because the cycle DC is doubly
dominating.) Without loss of generality, yx 2 ∈ E and the path P # is a desired path
of G # .
So, we may assume that x k is adjacent to t or s. Analogously, x 2 is adjacent to
one of t, s. If x k , x 2 are adjacent to different vertices, then we are done; the path P #
starts at a neighbor of x k and ends at a neighbor of x 2 . Otherwise, let both x k and
x 2 be adjacent to t and not to s. Then a claw K(x 1 ; s, x 2 , x k ) arises,
or we get a contradiction with the property of DC to be a doubly
dominating cycle.
Corollary 5.2. Every claw-free graph, containing an induced dominating cycle
of length at least 4, has a Hamiltonian cycle, and, given that induced dominating
cycle, one can construct a Hamiltonian cycle in linear time.
Let G = (V, E) be a graph, and let P = (x 1 , . . . , x k ) be an induced dominating
path of G. P is called an enlargeable path if there is some vertex v in V \P that is either
adjacent to x 1 or to x k but not to both of them and, additionally, to no other vertex
in P . Consequently, an induced dominating path P is called nonenlargeable if such
a vertex does not exist. Obviously, every graph G that has an induced dominating
path has a nonenlargeable induced dominating path as well. Furthermore, given an
induced dominating path P , one can find in linear time a nonenlargeable induced
dominating path P # by simply scanning the neighborhood of both x 1 and x k .
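The enlargement scan just described can be sketched as follows (adjacency-set representation; the simple loop below favors clarity over the linear time bound claimed above).

def make_nonenlargeable(graph, P):
    """Repeatedly append a vertex that is adjacent to exactly one endpoint of
    the path and to no other path vertex; such an extension keeps the path
    induced and dominating."""
    P = list(P)
    grown = True
    while grown:
        grown = False
        on_path = set(P)
        for pos, end in ((0, P[0]), (len(P), P[-1])):
            for y in graph[end] - on_path:
                if graph[y] & on_path == {end}:   # y sees only this endpoint
                    P.insert(pos, y)
                    grown = True
                    break
            if grown:
                break
    return P

For the next theorem we will need an auxiliary result.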
Lemma 5.3. Let G be a claw-free graph, and let P = (x 1 , . . . , x k ) be
an induced nonenlargeable dominating path of G such that there is no vertex y in G
with N(y) ∩ P = {x 1 , x k }. Then there is a Hamiltonian path in G that starts in x 1
and ends in x k .
Proof. Let C i = N[x i ] \ (N[x 1 ] ∪ · · · ∪ N[x i-1 ]) for 2 <= i <= k; since P is nonenlargeable,
the set C k is empty. Using the method described in the proof of Theorem 3.1, we can easily
construct a path, starting in x 1 and ending in x k , that contains all vertices of C 2 , . ,
C k-1 . This implies that we have to worry only about how to insert the vertices of the
neighborhood of x 1 into this path. We have to consider two cases.
Case 1. The subgraph of G induced by N(x 1 ) \ {x 2 } consists of two connected components C 0 , C 1 .
Since G is claw-free, both C 0 and C 1 induce cliques in G. Furthermore, x 2 is
adjacent to all vertices of at least one of C 0 and C 1 , say, C 1 , because otherwise we
have a claw in G.
Let y be an arbitrary vertex of C 0 . Since P is nonenlargeable, y has at least one
neighbor on P \{x 1 }, and let x j be the one with smallest index. By the preconditions
of our lemma, j ≠ k. If j > 2, then y has to be adjacent to x j+1 as well, since
otherwise K(x j ; y, x j-1 , x j+1 ) is a claw. Furthermore, y is adjacent to all vertices
c j ∈ C j , since otherwise K(x j ; y, x j-1 , c j ) is a claw. Hence, when constructing the
Hamiltonian path, we can simply add y to C j .
Now we consider the set Y of all vertices y of C 0 with yx 2 # E. Suppose there is a
vertex c 2 in C 2 with c 2 ≠ x 3 . If there is a vertex c 1 ∈ C 1 that is nonadjacent to the vertex c 2 ,
then there is an edge from every vertex c 0 ∈ Y to c 2 ; otherwise, K(x 2 ; c 0 , c 1 , c 2 ) is
a claw of G. This implies that we can construct a Hamiltonian path with the required
properties. If, on the other hand, all vertices of C 1 are adjacent to all vertices of C 2 ,
we can construct such a path by starting in x 1 , traversing through Y , x 2 , C 1 , and
proceeding as before. Now suppose that there is no vertex c 2 in C 2 with c 2 ≠ x 3 . In
this case either all vertices c 0 ∈ Y or all vertices c 1 ∈ C 1 have to be adjacent to x 3 ,
because otherwise K(x 2 ; c 0 , c 1 , x 3 ) is a claw. Suppose, without loss of generality, that
all vertices of Y are adjacent to x 3 . Then we construct the path by starting in x 1 ,
traversing through C 1 , x 2 , Y , x 3 , and proceeding as before.
Case 2. H := G(N(x 1 ) \ {x 2 }) induces a connected graph.
If x 2 is not adjacent to any of the vertices in H, then H has to be a clique and
we can apply the method described in case 1.
Suppose now that x 2 is adjacent to some vertex in H. First, we construct a
Hamiltonian path P # = (y 1 , . . . , y l ) of H, which is done as in the proof of Theorem
3.1, since there is no independent triple in H. Now we claim that either x 2 is adjacent
to one of y 1 or y l , or P # does in fact induce a Hamiltonian cycle of H implying again
the existence of a path with an end-vertex adjacent to x 2 . Indeed, suppose x 2 is not
adjacent to any of the endvertices of P # . Then, since G is claw-free, y 1 has to be
adjacent to y l , because otherwise K(x 1 ; x 2 , y 1 , y l ) would induce a claw in G. Hence
P # together with the edge y 1 y l induces a Hamiltonian cycle in H.
Using P # , we can easily construct a Hamiltonian path in N[x 1 ] starting in x 1
and ending in x 2 . The rest of the Hamiltonian path of G can be constructed as
before.
In fact, we can prove a slightly stronger result. Let A be the set of vertices that are
adjacent to x 1 but to no other vertex of P , and let B be the set of vertices that are
adjacent to x k but to no other vertex of P . Each of these
sets forms a clique of G.
Lemma 5.4. Let G be a claw-free graph, and let P = (x 1 , . . . , x k ) be
an induced dominating path of G such that there is no vertex y in G with N(y) ∩ P = {x 1 , x k }. Let
also P be enlargeable but only to one end, e.g., let A = ∅ and B ≠ ∅, and
assume that there exists an edge zb with z ∈ C k-1 \ {x k } and b ∈ B. Then there is
a Hamiltonian path in G that starts in x 1 and ends in x k .
Proof. First, we can easily construct a path, starting in x 1 and ending in x k-1 ,
that contains all vertices of C 2 , . , C k-2 . Then we attach to this path a path which
starts at x k-1 , goes through C k-1 , B using all their vertices, and ends in x k . Finally,
we insert the vertices of the neighborhood of x 1 into the obtained path as we have
done in the proof of Lemma 5.3.
Theorem 5.5. Let G be a 2-connected claw-free graph with a good pair u, v.
Then G has a Hamiltonian cycle and, given the corresponding induced dominating
paths, one can construct a Hamiltonian cycle in linear time.
Proof. Let P 1 = (u = x 1 , x 2 , . . . , x k = v) and P 2 = (u = y 1 , y 2 , . . . , y l = v) be the induced
dominating paths corresponding to the good pair u, v. By the definition of a good
pair, both k and l are greater than 2. We may also assume that, for any of these induced
dominating paths, no vertex y with N(y) ∩ P i = {u, v} exists; such
a vertex y together with P i would form an induced
dominating cycle of length at least 4, and we could apply Corollary 5.2 to construct a
Hamiltonian cycle of G in linear time.
Let A 1 be the set of vertices a 1 that are adjacent to x 1 but to no other vertex of
P 1 , and let B 1 be the set of vertices b 1 that are adjacent to x k but to no other vertex of
P 1 . The sets A 2 and B 2 are defined accordingly for P 2 . Of course, each of the sets A 1 , A 2 ,
B 1 , B 2 forms a clique of G.
First we assume that one of these paths, say, P 1 , is nonenlargeable, i.e., A 1 = B 1 = ∅.
In this case we do the following. We remove the inner vertices of P 2 from G
and get the graph G - (P 2 ), where (P 2 ) denotes the set of inner vertices of P 2 . Then
we create a Hamiltonian path in G - (P 2 ) that starts at u and ends at v (Lemma
5.3), and we add (P 2 ) to this path to create a Hamiltonian cycle of G.
We can use this method for creating a Hamiltonian cycle of G whenever we have
two internally disjoint paths P, P # of G both connecting u with v such that one of
them is an induced dominating and nonenlargeable path of the graph obtained from
G by removing the inner vertices of the other path.
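In code, closing the cycle from such a pair of paths is immediate; the sketch below assumes the Hamiltonian path of G − (P # ) is given as a vertex list from u to v, and P # as a vertex list from u to v.

def close_cycle(ham_path_u_to_v, other_path):
    """Append the inner vertices of the other u-v path, walked from the v end
    back toward u; the last of them is adjacent to u, so the walk closes into
    a Hamiltonian cycle."""
    inner = other_path[1:-1]
    return ham_path_u_to_v + inner[::-1]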
Now we suppose that both paths P 1 , P 2 are enlargeable. Because of symmetry
we have to consider the following three cases.
Case 1. There exist a vertex a 1 ∈ A 1 \ A 2 and a vertex b 1 ∈ B 1 \ B 2 .
In this case there must be edges from a 1 , b 1 to inner vertices y i , y j of P 2 . Con-
sequently, we can form a new path P # 2 by starting in u, traversing through A 1 ,
then along the subpath of P 2 between y i and y j , then through B 1 , and ending in v.
Evidently, P # 2 contains all vertices of B 1 and A 1 and is internally disjoint from P 1 , which is
nonenlargeable in G - (P # 2 ).
Case 2. there exists a vertex a 1 # A 1 \ A 2 .
In this case none of the vertices of B #) has a neighbor in
v. As G is 2-connected, for some vertex b # B there has to be a
vertex dominates G and z /
# B, vertex
z must be adjacent to a vertex y # P 2 \ {v}. If z is only adjacent to y but to no
other vertex of P 2 , then z necessarily belongs to A 2 and we can form a new path P #by starting in u, using all vertices of A 2 , B and ending in v. Again, P #
1 is internally
disjoint from P 2 and P 2 is nonenlargeable in G- (P #
then we
can apply Corollary 5.2.
Therefore, we may assume that z is adjacent to an inner vertex y of P 2 . Now, if
there exists a vertex a 1 # A 1 \ A 2 , then a 1 is adjacent to some vertex y # of
we can construct a new path P #
2 by using u, A 1 , y # , . , y, z, B, v. (If B was empty,
then
2 ends at . , y # , . , y l-1 , v.) This path is internally disjoint from P 1 , which
is nonenlargeable in G - (P #
then from the discussion above we may
assume that either A := A is empty or there is a vertex z # V \
which is adjacent to a vertex of A and has a neighbor y # in (P 2 ). Hence, we can
construct a path P #
2 by using u, A, z # , y # , . , y, z, B, v, which is internally disjoint
from P 1 . (If z
2 is constructed by using u, A, z, B, v.)
Case 3. A 2 is strictly contained in A 1 , and B 1 is strictly contained in B 2 .
Consider vertices cliques C
then we can construct a new path P #
2 by using
This path is internally disjoint from P 1 , which is nonenlargeable in
G-
there must be a neighbor y # (P 2 ) of z # . If vertex b is adjacent to
some vertex in C k-1 \{v}, then we construct a new path P #
2 by using u, A 1 , y # , . , v. It
will be internally disjoint from P 1 , which is enlargeable only to one end (at x
G-(P #
We are now in the conditions of Lemma 5.4 and can construct a Hamiltonian
path of G- (P #
starts in u and ends in v. Adding (P #
2 ) to this path, we obtain
a Hamiltonian cycle of G.
So, we may assume that zz # /
vertex z # A 1 \ A 2 and that vertex b is
not adjacent to any vertex of C k-1 \ {v}. From this we conclude also that z /
But since z /
there must be a neighbor x j # (P 1 ) of z. We choose vertex
with the smallest j. Clearly, 1 < j < k - 1 and z # C j .
First we define a new induced path P #
cliques
We have z # A # 1 , since
otherwise from the construction of P #
would be adjacent to z, and that is impossible
Note that vertex x j+1 is dominated by the path P 2 . If it is adjacent to only
vertex v from P 2 , then arises. Therefore,
x j+1 must be adjacent to an inner vertex y of P 2 . Now we define a new path P #by using u, A # 1 , y # , . , y, x j+1 , C j+1 , x j+2 , . , C k-1 , v. It is internally disjoint from
1 and contains all vertices of A # 1 and C 1). It is clear from the
construction that the path P #
1 dominates the graph G - (P #
(Every vertex which
was not dominated by the path P #
1 in G belongs to some set C i (j
It remains to show that the path P #
1 is nonenlargeable in G - (P #
Assume by
way of contradiction that it is enlargeable. Since A # 1 # (P #
2 ), this is possible only if
be a vertex of B #
1 . Then p does not belong to B 1 , since otherwise it
should be adjacent to z, which is contained in (P #
are cliques,
.) Now, from we conclude that the neighbors
of p in P 1 \ {v} are only vertices from {x j+1 , . , x k-1 }, i.e., p belongs to a set C s for
some s 1. Consequently, a contradiction to C s # (P #
arises.
It is not hard to see that the above method can be implemented to run in linear
time.
Theorem 5.6. Every 2-connected claw-free graph G that contains a dominating
pair has a Hamiltonian cycle, and, given a dominating pair, a Hamiltonian cycle can
be constructed in linear time.
Proof. Let v, u be a dominating pair of a 2-connected graph G. If vu ∉ E, then,
by Lemma 4.9, there exist two internally disjoint induced paths connecting v and u.
Both these paths dominate G, and, therefore, u, v is a good pair of G. Thus, the
statement holds by Theorem 5.5.
Now let vu ∈ E. Define sets A := N(u) \ N[v], B := N(v) \ N[u], and S :=
N(v) ∩ N(u). Since G is claw-free, the sets A and B are cliques of G. Notice also that
sets A, B, S, and {v, u} form a partition of the vertex set of G.
If there is an edge ab in G such that a # A and b # B, then vertices a, u, v, b induce
a 4-cycle which dominates G. Hence, we can apply Corollary 5.2 to get a Hamiltonian
cycle of G. Therefore, assume that no such edge exists. But since G is 2-connected,
there must be edges ax, by with x, y # S, a # A, and b # B. We distinguish between
two cases. Let G S denote the subgraph of G induced by S.
Case 1. G S is disconnected.
Then, it consists of two cliques S_1 and S_2. Now, if vertices x, y are in different
components of G_S, say, x ∈ S_1 and y ∈ S_2, then
(u, P_{A\{a}}, a, x, P_{S_1\{x}}, v, P_{B\{b}}, b, y, P_{S_2\{y}}, u) is a Hamiltonian cycle of G. (P_M stands for an arbitrary
permutation of the vertices of a set M .) If x, y are in one component, say, S 1 , then
is a Hamiltonian cycle of G.
Case 2. G S is connected.
Then, by Theorem 3.1, there exists a Hamiltonian path (s, y_1, ..., y_l, t) of
G_S. Assume that as, at ∉ E. Then, to avoid a claw K(u; a, s, t), vertices s and t have
to be adjacent, giving a Hamiltonian cycle HC := (s, y_1, ..., y_l, t, s) of G_S. Vertices
x and y split this cycle into two paths Hence, a
is a Hamiltonian cycle of G.
Now, we may assume that a is adjacent to s or t. Analogously, b is adjacent
to one of t, s. If a, b are adjacent to different vertices, say, as, bt ∈ E, then
(u, P_{A\{a}}, a, s, y_1, ..., y_l, t, b, P_{B\{b}}, v, u) is a Hamiltonian cycle of G. Finally, if a, b are adjacent
only to s (similarly, to t), then (u, P_{S\{s}}, v, P_{B\{b}}, b, s, a, P_{A\{a}}, u) is a
Hamiltonian cycle of G.
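For the case analysis above, assembling the cycle is just a matter of concatenating the pieces in the right order. The following sketch does this for Case 1 with x and y in different components; the function name and the input encoding are ours, not the paper's.

```python
# Illustrative sketch (not from the paper): assemble the Case 1 Hamiltonian cycle
# (u, P_{A\{a}}, a, x, P_{S1\{x}}, v, P_{B\{b}}, b, y, P_{S2\{y}}, u)
# when x and y lie in different components S1, S2 of G_S.
def case1_cycle(u, v, A, B, S1, S2, a, x, b, y):
    """A, B, S1, S2 are lists of vertices; a in A is adjacent to x in S1, and
    b in B is adjacent to y in S2.  Returns the cycle as a vertex list, with
    the starting vertex u repeated at the end to close the cycle."""
    rest_A = [w for w in A if w != a]    # arbitrary order: A is a clique
    rest_B = [w for w in B if w != b]    # arbitrary order: B is a clique
    rest_S1 = [w for w in S1 if w != x]  # S1 is a clique
    rest_S2 = [w for w in S2 if w != y]  # S2 is a clique
    return [u] + rest_A + [a, x] + rest_S1 + [v] + rest_B + [b, y] + rest_S2 + [u]

# Tiny hypothetical example: A = {a}, B = {b}, S1 = {x, s}, S2 = {y}.
print(case1_cycle('u', 'v', ['a'], ['b'], ['x', 's'], ['y'], 'a', 'x', 'b', 'y'))
```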
Theorem 5.7. Every 2-connected CN-free graph G has a Hamiltonian cycle, and
such a cycle can be found in O(n +m) time.
Proof. The proof follows from Theorems 4.10, 5.1, and 5.5.
Corollary 5.8. There is a linear time algorithm that for a given (arbitrary)
2-connected graph G either finds a Hamiltonian cycle or outputs an induced claw or
an induced net of G.
Corollary 5.9. A Hamiltonian cycle of a 2-connected (claw,AT)-free graph can
be found in O(n +m) time.
Remark. Corollary 5.8 implies that every 2-connected unit interval graph has
a Hamiltonian cycle, which is, of course, well known (see [24, 20]). The interesting
difference of the above algorithm compared to the existing algorithms for this problem
on unit interval graphs is that it does not require the creation of an interval model.
It also follows from Corollaries 3.3 and 5.8 that both the Hamiltonian path problem
and the Hamiltonian cycle problem are linear time solvable on proper circular arc
graphs. Note that previously known algorithms for these problems had time bounds
Fig. 5.1. Claw-free graph, containing a dominating pair and a net.
It should also be mentioned that Theorems 3.1 and 5.5 do cover a class of graphs
that is not contained in the class of CN-free graphs. Figure 5.1 shows a graph that
is claw-free, does contain a dominating/good pair and, consequently, a dominating
path, but, obviously, it is neither AT-free nor net-free.
--R
Every 3-connected
Minimal 2-connected non-hamiltonian claw-free graphs
A linear time algorithm to compute a dominating path in an AT-free graph
Discrete Math.
time algorithms for dominating pairs in asteroidal triple-free graphs
Graph Theory
Forbidden subgraphs and the Hamiltonian theme
Characterizing forbidden pairs for hamiltonian properties
On Hamiltonian claw-free graphs
Algorithmic Graph Theory and Perfect Graphs
Local tournaments and proper circular arc graphs
Finding hamiltonian circuits in interval graphs
Hamiltonian cycles in 3-connected claw-free graphs
An optimum
Hamiltonicity in claw-free graphs
Introduction to Graph Theory
--TR
--CTR
Jou-Ming Chang, Induced matchings in asteroidal triple-free graphs, Discrete Applied Mathematics, v.132 n.1-3, p.67-78, 15 October
Andreas Brandstdt , Feodor F. Dragan, On linear and circular structure of (claw, net)-free graphs, Discrete Applied Mathematics, v.129 n.2-3, p.285-303, 01 August
Sun-Yuan Hsieh, An efficient parallel strategy for the two-fixed-endpoint Hamiltonian path problem on distance-hereditary graphs, Journal of Parallel and Distributed Computing, v.64 n.5, p.662-685, May 2004 | hamiltonian path;hamiltonian cycle;linear time algorithms;dominating pair;claw-free graphs;net-free graphs;dominating path |
586944 | On the Determinization of Weighted Finite Automata. | We study the problem of constructing the deterministic equivalent of a nondeterministic weighted finite-state automaton (WFA). Determinization of WFAs has important applications in automatic speech recognition (ASR). We provide the first polynomial-time algorithm to test for the twins property, which determines if a WFA admits a deterministic equivalent. We also give upper bounds on the size of the deterministic equivalent; the bound is tight in the case of acyclic WFAs. Previously, Mohri presented a superpolynomial-time algorithm to test for the twins property, and he also gave an algorithm to determinize WFAs. He showed that the latter runs in time linear in the size of the output when a deterministic equivalent exists; otherwise, it does not terminate. Our bounds imply an upper bound on the running time of this algorithm.Given that WFAs can expand exponentially in size when determinized, we explore why those that occur in ASR tend to shrink when determinized. According to ASR folklore, this phenomenon is attributable solely to the fact that ASR WFAs have simple topology, in particular, that they are acyclic and layered. We introduce a very simple class of WFAs with this structure, but we show that the expansion under determinization depends on the transition weights: some weightings cause them to shrink, while others, including random weightings, cause them to expand exponentially. We provide experimental evidence that ASR WFAs exhibit this weight dependence. That they shrink when determinized, therefore, is a result of favorable weightings in addition to special topology. These analyses and observations have been used to design a new, approximate WFA determinization algorithm, reported in a separate paper along with experimental results showing that it achieves significant WFA size reduction with negligible impact on ASR performance. | Introduction
Finite-state machines and their relation to rational functions and power series have been
extensively studied [2, 3, 12, 16] and widely applied in fields ranging from image compression
[9-11, 14] to natural language processing [17, 18, 24, 26]. A subclass of finite-state
machines, the weighted finite-state automata (WFAs), has recently assumed new
importance, because WFAs provide a powerful method for manipulating models of human
language in automatic speech recognition (ASR) systems [19, 20]. This new re-search
direction also raises a number of challenging algorithmic questions [5].
A weighted finite-state automaton (WFA) is a nondeterministic finite automaton
(NFA), A, that has both an alphabet symbol and a weight, from some set K, on each
transition.
be a semiring. Then A together with R generates a
partial function from strings to K: the value of an accepted string is the semiring sum
over accepting paths of the semiring product of the weights along each accepting path.
Such a partial function is a rational power series [25]. An important example in ASR
is the set of WFAs with the min-sum semiring, which
compute for each accepted string the minimum cost accepting path.
In this paper, we study problems related to the determinization of WFAs. A deter-
ministic, or sequential, WFA has at most one transition with a given input symbol out
of each state. Not all rational power series can be generated by deterministic WFAs. A
determinization algorithm takes as input a WFA and produces a deterministic WFA that
generates the same rational power series, if one exists. The importance of determinization
to ASR is well established [17, 19, 20].
As far as we know, Mohri [17] presented the first determinization procedure for
WFAs, extending the seminal ideas of Choffrut [7, 8] and Weber and Klemm [27] regarding
string-to-string transducers. Mohri gives a determinization procedure with three
phases. First, A is converted to an equivalent unambiguous, trim WFA A t , using an algorithm
analogous to one for NFAs [12]. (Unambiguous and trim are defined below.)
Mohri then gives an algorithm, TT, that determines if A t has the twins property (also
defined below). If A t does not have the twins property, then there is no deterministic
equivalent of A. If A t has the twins property, a second algorithm of Mohri's, DTA, can
be applied to A t to yield A 0 , a deterministic equivalent of A. Algorithm TT runs in
O(m^{4n^2}) time, where m is the number of transitions and n the number of states in A_t.
Algorithm DTA runs in time linear in the size of A_0. Mohri observes that A_0 can be
exponentially larger than A, because WFAs include classical NFAs. He gives no upper
bound on the worst-case state-space expansion, however, and due to weights, the classical
NFA upper bound does not apply. Finally, Mohri gives an algorithm that takes a
deterministic WFA and outputs the minimum-size equivalent, deterministic WFA.
In this paper, we present several results related to the determinization of WFAs. In
Section 3 we give the first polynomial-time algorithm to test whether an unambiguous,
WFA satisfies the twins property. It runs in O(m 2 n 6 ) time. We then provide a
worst-case time complexity analysis of DTA. The number of states in the output deterministic
WFA is at most 2^{n(2 lg n + n^2 lg|Σ| + 1)}, where Σ is the input alphabet. If the
weights are rational, this bound becomes 2^{n(2 lg n + 1 + min(n^2 lg|Σ|, ρ))}, where ρ is the
maximum bit-size of a weight. When the input WFA is acyclic, the bound becomes
2^{n lg|Σ|}, which is tight (up to constant factors) for any alphabet size.
In Sections 4-6 we study questions motivated by the use of WFA determinization
in ASR [19, 20]. Although determinization causes exponential state-space expansion
in the worst case, in ASR systems the determinized WFAs are often smaller than the
input WFAs [17]. This is fortuitous, because the performance of ASR systems depends
directly on WFA size [19, 20]. We study why such size reductions occur. The folklore
explanation within the ASR community credits special topology-the underlying
directed graph, ignoring weights-for this phenomenon. ASR WFAs tend to be multipartite
and acyclic. Such a WFA always admits a deterministic equivalent.
In Section 4 we exhibit multi-partite, acyclic WFAs whose minimum equivalent deterministic
WFAs are exponentially larger. In Section 5 we study a class of WFAs, RG,
with a simple multi-partite, acyclic topology, such that in the absence of weights the deterministic
equivalent is smaller. We show that for any A 2 RG and any i - n, there exists
an assignment of weights to A such that the minimal equivalent deterministic WFA
has states. Using ideas from universal hashing, we show that similar results
hold when the weights are random i-bit numbers. We call a WFA weight-dependent if
its expansion under determinization is strongly determined by its weights.
We examined experimentally the effect of varying weights on actual WFAs from
ASR applications. In Section 6 we give results of these experiments. Most of the ASR
examples were weight-dependent. These experimental results together with the theory
we develop show that the folklore explanation is insufficient: ASR WFAs shrink under
determinization because both the topology and weighting tend to be favorable.
Some of our results help explain the nature of WFAs from the algorithmic point of
view, i.e., how weights assigned to the transitions of a WFA can affect the performance
of algorithms manipulating it. Others relate directly to the theory of weighted automata.
Definitions and Terminology
Given a semiring (K, ⊕, ⊗), a weighted finite automaton (WFA) is a tuple
G = (Q, q̄, Σ, δ, Q_f), where Q is the set of states, q̄ ∈ Q is the initial state, Σ is the set
of symbols, δ ⊆ Q × Σ × K × Q is the set of transitions, and Q_f ⊆ Q is the set of
final states. We assume throughout that |Σ| > 1. A deterministic, or sequential, WFA
has at most one transition (q_1, σ, c, q_2) for each pair (q_1, σ); a nondeterministic
WFA can have multiple transitions on a pair (q_1, σ), differing in target state q_2. The
problems examined in this paper are motivated primarily by ASR applications, which
work with the min-sum semiring (ℝ_{≥0} ∪ {∞}, min, +). Furthermore, some of
the algorithms considered use subtraction, which the min-sum semiring admits. We thus
limit further discussion to the min-sum semiring.
Consider a sequence of transitions t = (q_0, σ_1, c_1, q_1), (q_1, σ_2, c_2, q_2), ..., (q_{ℓ-1}, σ_ℓ, c_ℓ, q_ℓ);
t induces the string w = σ_1 σ_2 ··· σ_ℓ. String w is accepted by t if q_0 = q̄ and q_ℓ ∈ Q_f; w is
accepted by G if some t accepts w. Let c(t_i) be the weight of t_i. The weight of
t is c(t) = Σ_i c(t_i). Let T_G(w) be the set of all sequences of transitions that accept
string w. The weight of w is c(w) = min_{t ∈ T_G(w)} c(t). The weighted language of G is
the set of weighted strings accepted by G: L(G) = {(w, c(w)) : w is accepted by G}.
Intuitively, the weight on a transition of G can be seen as the "confidence" one has in
taking that transition. The weights need not, however, satisfy stochastic constraints, as
do the probabilistic automata introduced by Rabin [22].
Fix two states q and q′ and a string v ∈ Σ*. Then c(q, v, q′) is the minimum of c(t)
taken over all transition sequences t from q to q′ generating v. We refer to c(q, v, q′)
as the optimal cost of generating v from q to q′. We generally abuse notation so that
δ(q, w) can represent the set of states reachable from state q ∈ Q on string w ∈ Σ*.
We thus extend the function δ to strings in the usual way: q′ ∈ δ(q, v) means that
there is a sequence of transitions from q to q′ generating v.
The topology of G, top(G), is the projection π_{Q×Σ×Q}(δ): i.e., the transitions of G
without respect to the weights. We also refer to top(G) as the graph underlying G.
A WFA is trim if every state appears in an accepting path for some string and no
transition is weighted 0̄, the zero of the semiring (∞ in the min-sum semiring). A WFA is unambiguous if there
is exactly one accepting path for each accepted string.
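As a concrete illustration of these definitions over the min-sum semiring, the following sketch computes the weight c(w) of a string by dynamic programming over the transitions. The list-of-4-tuples encoding of a WFA is an assumption made for the example, not notation from the paper.

```python
INF = float('inf')

def string_weight(transitions, initial, finals, w):
    """transitions: iterable of (q, sigma, cost, q2).  Returns c(w): the minimum,
    over accepting transition sequences inducing w, of the sum of their weights
    (INF if w is not accepted).  This is exactly the min-sum semantics above."""
    best = {initial: 0.0}                 # cheapest known way to reach each state
    for sigma in w:
        nxt = {}
        for (q, a, c, q2) in transitions:
            if a == sigma and q in best:
                nxt[q2] = min(nxt.get(q2, INF), best[q] + c)
        best = nxt
    return min((best[q] for q in best if q in finals), default=INF)

# Hypothetical two-path example: both paths accept "ab"; the cheaper one costs 3.
T = [(0, 'a', 1, 1), (1, 'b', 2, 3), (0, 'a', 2, 2), (2, 'b', 5, 3)]
print(string_weight(T, 0, {3}, "ab"))     # -> 3.0
```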
Determinization of G is the problem of computing a deterministic WFA G 0 such
that if such a G 0 exists. We denote the output of algorithm DTA by
dta(G). We denote the minimal deterministic WFA accepting L(G) by min(G), if one
exists. We say that G expands if dta(G) has more states and/or transitions than G.
let the size of G be n m. We assume that each
transition is labeled with exactly one symbol, so j\Sigma j - m. Recall that the weights
of G are non-negative real numbers. Let C be the maximum weight. In the general
case, weights are incommensurable real numbers, requiring "infinite precision." In the
integer case, weights can be represented with bits. We denote the integral
range [a; b] by [a; b] Z . The integer case extends to the case in which the weights are
rationals requiring ae bits. We assume that in the integer and rational cases, weights are
normalized to remove excess least-significant zero bits.
For our analyses, we use the RAM model of computation as follows. In the general
case, we charge constant time for each arithmetic-logic operation involving weights
(which are real numbers). We refer to this model as the !-RAM [21]. The relevant
parameters for our analyses are n, m, and j\Sigma j. In the integer case, we also use a RAM,
except that each arithmetic-logic operation now takes O(ae) time. We refer to this model
as the CO-RAM [1]. The relevant parameters for the analyses are n, m, j\Sigma j, and ae.
3 Determinization of WFAs
3.1 An Algorithm for Testing the Twins Property
Definition 1. Two states, q and q′, of a WFA G are twins if ∀(u, v) ∈ (Σ*)^2 such
that q ∈ δ(q̄, u), q′ ∈ δ(q̄, u), q ∈ δ(q, v), and q′ ∈ δ(q′, v),
the following holds:
c(q, v, q) = c(q′, v, q′). G has the twins property if all pairs q, q′ ∈ Q are twins.
That is, if states q and q′ are reachable from q̄ by a common string, then q and q′
are twins only if any string that induces a cycle at each induces cycles of equal optimal
cost. Note that two states having no cycle on a common string are twins.
Lemma 1. Let G be a trim, unambiguous WFA. G has the twins
property if and only if ∀(u, v) ∈ (Σ*)^2 such that |uv| ≤ 2n^2 the following holds:
when there exist two states q and q′ such that (i) {q, q′} ⊆ δ(q̄, u), and (ii) q ∈ δ(q, v)
and q′ ∈ δ(q′, v), then c(q, v, q) = c(q′, v, q′) must follow.
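The definition and the length bound of Lemma 1 can be turned directly into a brute-force reference test: enumerate the pairs of states reachable from q̄ by a common string, and for each pair compare the optimal costs of cycles on every string v up to the bound. The sketch below does exactly that; it runs in exponential time and is meant only to make the property concrete, in contrast to the polynomial-time test of Theorem 1. The WFA encoding is the same ad hoc 4-tuple format used above.

```python
from itertools import product

INF = float('inf')

def min_path_cost(transitions, src, dst, v):
    """Optimal cost c(src, v, dst): cheapest transition sequence from src to dst
    inducing the string v (INF if none exists)."""
    best = {src: 0.0}
    for sigma in v:
        nxt = {}
        for (q, a, c, q2) in transitions:
            if a == sigma and q in best:
                nxt[q2] = min(nxt.get(q2, INF), best[q] + c)
        best = nxt
    return best.get(dst, INF)

def has_twins_property(transitions, initial, alphabet, max_len):
    """Brute-force twins check over strings v with |v| <= max_len
    (Lemma 1 suggests 2*n*n suffices for a trim, unambiguous WFA)."""
    # Pairs of states simultaneously reachable from the initial state
    # by a common string (closure in the product automaton).
    pairs, frontier = {(initial, initial)}, {(initial, initial)}
    while frontier:
        new = set()
        for (p, q) in frontier:
            for sigma in alphabet:
                ps = {q2 for (s, a, c, q2) in transitions if s == p and a == sigma}
                qs = {q2 for (s, a, c, q2) in transitions if s == q and a == sigma}
                new |= {(p2, q2) for p2 in ps for q2 in qs} - pairs
        pairs |= new
        frontier = new
    # Whenever both states of a pair have a cycle on v, the optimal costs must agree.
    for (p, q) in pairs:
        for l in range(1, max_len + 1):
            for v in product(alphabet, repeat=l):
                cp = min_path_cost(transitions, p, p, v)
                cq = min_path_cost(transitions, q, q, v)
                if cp < INF and cq < INF and cp != cq:
                    return False
    return True
```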
are analogous to those stated by Choffrut [7, 8] and (in
different terms) by Weber and Klemm [27] to identify necessary and sufficient conditions
for a string-to-string transducer to admit a sequential transducer realizing the
same rational transduction. The proof techniques used for WFAs differ from those used
to obtain analogous results for string-to-string transducers, however. In particular, the
efficient algorithm we derive here to test a WFA for twins is not related to that of Weber
and Klemm [27] for testing twins in string-to-string transducers.
We define T_{q̄,q̄}, a multi-partite, acyclic, labeled, weighted graph having 2n^2 layers,
as follows. The root vertex comprises layer zero and corresponds to (q̄, q̄). For i > 0,
given the vertices at layer i − 1, we obtain the vertices at layer i as follows. Let u be a
vertex at layer i − 1 corresponding to (q_1, q_2); u is connected to u′, corresponding
to (q′_1, q′_2), at layer i if and only if there are two distinct transitions (q_1, a, c_1, q′_1)
and (q_2, a, c_2, q′_2) in G. The arc connecting u to u′ is labeled with a ∈ Σ and
has cost c_1 − c_2. Since each layer contains at most n^2 vertices, T_{q̄,q̄} has at most 2n^4 + 1 vertices.
Let (q; q 0 ) i be the vertex corresponding to (q; q 0 at layer i of T - q;-q , if any. Let
be the set of pairs of distinct states of G that are reachable
from (-q; -
q;-q . For each (q; q 0 analogously to T -
q;-q .
Fix two distinct states q and q 0 of G. Let (q; q 0
, be all the occurrences of (q; q 0 ) in T q;q 0 , excluding (q; q 0 ) 0 . This sequence
may be empty. A symmetric sequence can be extracted from T q 0 ;q . We refer to these
sequences as the common cycles sequences of (q; q 0 ). We say that q and q 0 satisfy the
local twins property if and only if (a) their common cycles sequences are empty or (b)
zero is the cost of (any) shortest path from (q; q 0 ) 0 to (q; q 0 and from (q
to
Lemma 2. Let G be a trim, unambiguous WFA. G satisfies the twins property if and
only if (i) RT is empty or (ii) all (q; q 0 the local twins property.
Proof (Sketch). We outline the proof for the sufficient condition. The only nontrivial
case is when some states in RT satisfy the local twins property and their common cycles
sequences are not empty. Let RT 0 be such a set. Assume that G does not satisfy the
twins property. We derive a contradiction. Since RT 0 is not empty, we have that the set
of pairs of states for which (i) and (ii) are satisfied in Lemma 1 is not empty. But since
G does not satisfy the twins property, there must exist two states q and q 0 and a string
1, such that (i) both q and q 0 can be reached from the initial
state of G through string u; (ii) q 2 ffi(q; v) and q
loss of generality, assume that
Now, one can show that (q; q 0 using the fact that G is unambiguous,
one can show that there is exactly one path in T q;q 0 from the root to (q; q 0 ) jvj with cost
the local twins property.
To test whether a trim, unambiguous WFA has the twins property, we first compute
q;-q and the set RT . For each pair of states (q; q 0 that has not yet been processed,
we need only compute T q;q 0 and T q 0 ;q and their respective shortest path trees.
Theorem 1. Let G be a trim unambiguous WFA. In the general case, whether G satisfies
the twins property can be checked in O(m^2 n^6) time using the !-RAM. In the integer
case, the bound becomes O(ρ m^2 n^6) using the CO-RAM.
3.2 The Algorithm DTA
In this section we describe the algorithm. We then give an upper bound on the
size of the deterministic machines produced by the algorithm. The results of Section 5
below show that our upper bound is tight to within polynomial factors.
Given WFA G = (Q, q̄, Σ, δ, Q_f), DTA generalizes the classic power-set construction
to construct deterministic WFA G′ as follows. The start state of G′ is {(q̄, 0)},
which forms an initial queue P. While P ≠ ∅, pop state q = {(q_1, r_1), ..., (q_k, r_k)}
from P, where q_i ∈ Q and r_i ≥ 0 for 1 ≤ i ≤ k. The r_i values encode path-length infor-
mation, as follows. For each σ ∈ Σ, let {q′_1, ..., q′_m} be the set of states reachable by
σ-transitions out of all the q_i. For 1 ≤ j ≤ m, let ρ_j
be the minimum of the weights of σ-transitions into q′_j from the q_i plus the respective
r_i, and let ρ = min{ρ_1, ..., ρ_m}. Let q′ = {(q′_j, ρ_j − ρ) : 1 ≤ j ≤
m}. We add transition (q, σ, ρ, q′) to G′ and push q′ onto P if q′ is new. This is
the only σ-transition out of state q, so G′ is deterministic.
Let TG (w) be the set of sequences of transitions in G that accept a string w 2 \Sigma ;
let t G 0 (w) be the (one) sequence of transitions in G 0 that accepts the same string. Mohri
[17] shows that c(t_{G′}(w)) = c(w) for every accepted string w. For any string w ∈ Σ* and
state q ∈ Q, let T_{q̄,q}(w) be the set of sequences of transitions in G from state q̄ to state q that induce
string w. Again, let t_{G′}(w) be the (one) sequence of transitions in G′ that induces the
same string; t_{G′}(w) ends at some state {(q_1, r_1), ..., (q_k, r_k)} such that some q_i = q, and
Mohri shows that c(t_{G′}(w)) + r_i = min_{t ∈ T_{q̄,q}(w)} c(t). Thus, each r_i is
a remainder that encodes the difference between the weight of the shortest path to some
state that induces w in G and the weight of the path inducing w in G 0 . Hence at least
one remainder in each state must be zero.
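The description of DTA above translates almost literally into code for the min-sum semiring: a determinized state is a set of (state, remainder) pairs, the emitted transition carries the minimum candidate weight ρ, and the remainders are renormalized by subtracting ρ so that at least one of them is zero. The sketch below follows that description; the frozenset encoding and the simplified treatment of final states are our own choices.

```python
from collections import deque

def determinize_min_sum(transitions, initial, finals, alphabet):
    """Weighted subset construction over (min, +).  Returns the transitions of
    the deterministic WFA, its start state, and its final states.  States of the
    output are frozensets of (original state, remainder) pairs."""
    start = frozenset([(initial, 0.0)])
    det_trans, det_finals, seen = [], set(), {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if any(q in finals for (q, r) in state):
            det_finals.add(state)
        for sigma in alphabet:
            # Cheapest way to reach each target state on sigma from this subset.
            cand = {}
            for (q, r) in state:
                for (s, a, c, q2) in transitions:
                    if s == q and a == sigma:
                        cand[q2] = min(cand.get(q2, float('inf')), r + c)
            if not cand:
                continue
            w = min(cand.values())                    # weight of the new transition
            nxt = frozenset((q2, rho - w) for (q2, rho) in cand.items())
            det_trans.append((state, sigma, w, nxt))  # the unique sigma-transition
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return det_trans, start, det_finals
```

If the input lacks the twins property, the set of reachable remainder tuples is unbounded and the loop above does not terminate, matching the behavior of DTA noted in the introduction.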
3.3 Analyzing DTA
We first bound the number of states in dta(G), denoted #dta(G).
Theorem 2. If WFA G has the twins property, then #dta(G) ≤ 2^{n(2 lg n + n^2 lg|Σ| + 1)}
in the general case; #dta(G) ≤ 2^{n(2 lg n + 1 + min(n^2 lg|Σ|, ρ))} in the integer (or rational)
case; and #dta(G) < 2^{n lg|Σ|} if G is acyclic, independent of any assumptions on
weights. The acyclic bound is tight (up to constant factors) for any alphabet.
Proof (Sketch). Let ~
R be the set of remainders in dta(G). Let R be the set of remainders
r for which the following holds: 9w 2 \Sigma , jwj - and two states q 1 and q 2 ,
such that )j. The twins property implies that ~
R ' R. In the
worst case, each i-state tuple from G will appear in dta(G), and there are j ~
i-tuples of remainders it can assume. (This over counts by including tuples without any
zero remainders.) Therefore, #dta(G) -
General Case: Each string of length at most n reach a pair of (not necessarily
distinct) states in G. Therefore, jRj
. Integer Case: The remainders in
R are in [0; (n
. Acyclic Case:
#dta(G) is bounded by the number of strings in the weighted language accepted by G,
which is bounded by j\Sigma j n . We discuss tightness in Section 5.
Processing each tuple of state-remainders generated by DTA takes O(|Σ|(n + m))
time, excluding the cost of arithmetic and min operations, yielding the following.
Theorem 3. Let G be a WFA satisfying the twins property. In the general case, DTA
takes O(|Σ|(n + m) 2^{n(2 lg n + n^2 lg|Σ| + 1)}) time on the !-RAM. In the (rational or) integer
case, DTA takes O(ρ|Σ|(n + m) 2^{n(2 lg n + 1 + min(n^2 lg|Σ|, ρ))}) time on the CO-
RAM. In the acyclic case, DTA takes O(|Σ|(n + m) 2^{n lg|Σ|}) time on the !-RAM and
O(ρ|Σ|(n + m) 2^{n lg|Σ|}) time on the CO-RAM.
We can use the above results to generate hard instances for any determinization
algorithm. A reweighting function (or simply reweighting) f is such that, when applied
to a WFA G, it preserves the topology of G but possibly changes the weights. We want
to determine a reweighting f such that min(f(G)) exists and jmin(f(G))j is maximized
among reweightings for which min(f(G)) exists. We restrict attention to the integer
case and, without loss of generality, we assume that G is trim and unambiguous.
Theorem 2 shows that for weights to affect the growth of dta(G), it must
be that ae - n 2 lg j\Sigma j. Set ae To find the required reweight-
ing, we simply consider all possible reweightings of G satisfying the twins property
and requiring at most ae max bits. There are (2 ae possible
reweightings, and it takes 2 O(n(2 lg n+(n 2 lg j\Sigmaj))) time to compute the expansion or decide
that the resulting machine cannot be determinized, bounding the total time by
4 Hot Automata
This section provides a family of acyclic, multi-partite WFAs that are hot: when de-
terminized, they expand independently of the weights on their transitions. Given some
alphabet an g, consider the language
i.e., the
set of all n-length strings that do not include all symbols from \Sigma . It is simple to obtain
an acyclic, multi-partite NFA H of poly(n) size that accepts L. It is not hard to show
that the minimal DFA accepting L has \Theta(2 n+lg n ) states. Furthermore, we can construct
H so that these bounds hold for a binary alphabet. H corresponds to a WFA with
all arcs weighted identically. Since acyclic WFAs satisfy the twins property, they can
always be determinized. Altering the weights can only increase the expansion. Kintala
and Wotschke [15] provide a set of NFAs that produces a hierarchy of expansion factors
when determinized, providing additional examples of hot WFAs.
5 Weight-Dependent Automata
In this section we study a simple family of WFAs with multi-partite, acyclic topology.
We examine how various reweightings affect the size of the determinized equivalent.
This family shrinks without weights, so any expansion is due to weighting. This study
is related in spirit to previous works on measuring nondeterminism in finite automata
[13,15]. Here, however, nondeterminism is encoded only in the weights. We first discuss
the case of a binary alphabet and then generalize to arbitrary alphabets.
5.1 The Rail Graph
We denote by RG(k) the k-layer rail graph. RG(k) has 2k + 1 vertices
{0, T_1, ..., T_k, B_1, ..., B_k}. There are arcs (0, σ, T_1) and (0, σ, B_1) and, for 1 ≤ i < k, arcs
(T_i, σ, T_{i+1}) and (B_i, σ, B_{i+1}), for each symbol σ ∈ {a, b}.
See Fig. 1. RG(k) is (k + 1)-partite and also has fixed in- and out-degrees. If we consider
the strings induced by paths from 0 to either T_k or B_k, then the language of RG(k)
is the set of strings L_RG = {a, b}^k. The only nondeterministic choice is at the state
0, where either the top or bottom rail may be selected. Hence a string w can be accepted
by one of two paths, one following the top rail and the other the bottom rail.
Fig. 1. Topology of the k-layer rail graph.
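Under our reading of the construction (binary alphabet {a, b}, one top-rail and one bottom-rail arc per symbol between consecutive layers), RG(k) with an arbitrary weighting can be generated as follows and then fed to a determinization routine such as the sketch in Section 3.2; the weighting arguments and naming are illustrative assumptions.

```python
def rail_graph(k, top_weight, bottom_weight):
    """Build the k-layer rail graph as a list of transitions
    (state, symbol, weight, state).  top_weight(i, sigma) and
    bottom_weight(i, sigma) give the weight of the arc labeled sigma
    entering T_i and B_i, respectively."""
    trans = []
    for sigma in "ab":
        trans.append((0, sigma, top_weight(1, sigma), "T1"))
        trans.append((0, sigma, bottom_weight(1, sigma), "B1"))
        for i in range(1, k):
            trans.append((f"T{i}", sigma, top_weight(i + 1, sigma), f"T{i+1}"))
            trans.append((f"B{i}", sigma, bottom_weight(i + 1, sigma), f"B{i+1}"))
    return trans

# Zero weights: all equal-length strings get the same remainder vector.
flat = rail_graph(4, lambda i, s: 0.0, lambda i, s: 0.0)
# A weighting that separates strings: distinct equal-length strings get
# distinct remainders, so the determinized WFA branches.
skew = rail_graph(4, lambda i, s: float(2 ** i if s == "a" else 0), lambda i, s: 0.0)
```

Feeding these to the determinization sketch should produce a single chain of states for `flat` and a binary tree for `skew`, mirroring Theorems 4 and 5 below.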
Technically, RG(k) is ambiguous. We can disambiguate RG(k) by adding transitions
from T k and B k , each on a distinct symbol, to a new final state. Our results extend
to this case. For clarity of presentation, we discuss the ambiguous rail graph.
The rail graph is weight-dependent. In Section 5.2 we provide weightings such that
produces the (k+1)-vertex trivial series-parallel graph: a graph on k+1 vertices,
with transitions, on all symbols, only between vertices i and i+1, for 1 - i - k. On the
other hand, in Section 5.3 we exhibit weightings for the rail graph that cause DTA to
produce exponential state-space expansions. We also explore the relationship between
the magnitude of the weights and the amount of expansion that is possible. In Section
5.4, we show that random weightings induce the behavior of worst-case weightings.
Finally, in Section 5.5 we generalize the rail graph to arbitrary alphabets.
5.2 Weighting RG(k)
Consider determinizing RG(k) with DTA. The set of states reachable on any string
of length j, 0 < j ≤ k, is {T_j, B_j}. For a given weighting function c, let
c_T(w) denote the cost of accepting string w if the top path is taken; i.e., c_T(w) is the
sum of the weights of the top-rail arcs traversed when reading w. Analogously define c_B(w) to be the corresponding
cost along the bottom path. Let R(w) be the remainder vector for w, which is a pair
of the form (0, c_B(w) − c_T(w)) or (c_T(w) − c_B(w), 0). A state at layer 0 < j ≤ k
in the determinized WFA is labeled ({T_j, B_j}, R(w)) for some string w leading to that
state. Thus, two strings w_1 and w_2 of identical length lead to distinct states in the determinized
version of the rail graph if and only if R(w_1) ≠ R(w_2).
It is convenient simply to write R(w) = c_T(w) − c_B(w). The sign of R(w) then
determines which of the two forms (0, x) or (x, 0) of the remainder vector occurs.
Let c_T(i, σ) (rsp., c_B(i, σ)) denote the weight on the top (rsp., bottom) arc labeled σ
into vertex T_i (rsp., B_i).
Theorem 4. There is a reweighting f such that
which consists of the series-parallel graph
Proof. Any f for which suffices, since in this case R(w 1
g. In particular, giving zero weights suffices.
5.3 Worst-Case Weightings of RG(k)
Theorem 5. For any j ∈ [0, k]_Z there is a reweighting f such that layers 0 through j
of dta(f(RG(k))) form the complete binary tree on 2^{j+1} − 1 vertices.
Proof (Sketch). Choose any weighting such that
Consider a pair of strings w
identical length such that w 1 6= w 2 . The weighting ensures that R(w 1
Theorem 6. For any j ∈ [0, k]_Z there is a reweighting f such that layers 0 through
j of min(f(RG(k))) form the complete binary tree on 2^{j+1} − 1 vertices.
Theorem 6, generalized by Theorem 10, shows that weight-dependence is not an
artifact of DTA and that the acyclic bound of Theorem 2 is tight for binary alphabets.
We now address the sensitivity of the size expansion to the magnitude of the
weights, arguing that exponential state-space expansion requires exponentially big
weights for the rail graph. (This means that the size expansion, while exponential in
the number of states, is only super-polynomial in the number of bits.)
Theorem 7. Let f be a reweighting. If jdta(f(RG(k)))j
are
required to encode f(RG(k)).
Proof (Sketch). There must
be\Omega remainders among the states at depth k in
the determinized WFA,
necessitating\Omega distinct permutations of the d k
bits among them.
Thus\Omega (k) weights must have similarly high-order bits set.
Corollary 1. Let f be a reweighting. If jmin(f(RG(k)))j
are required to encode f(RG(k)).
5.4 Random Weightings of RG(k)
Theorem 8. Let G be RG(k) weighted with numbers chosen independently and uniformly
at random from [1;
denotes the expected value of the random variable X .
Theorem 9. Let G be RG(k) weighted with logarithms of numbers chosen independently
and uniformly at random from [1;
The proofs of Theorems 8 and 9 use the observation that the random functions
defined by RG are essentially universal hash functions [6] to bound sufficiently low the
probability that the remainders of two distinct strings are equal. Theorem 9 is motivated
by the fact that the weights of ASR WFAs are negated log probabilities.
Extending RG(k) to Arbitrary Alphabets
We can extend the rail graph to arbitrary alphabets, defining RG(r; k), the k-layer r-
rail graph, as follows. RG(r;
. Assume the alphabet is
for all 1 -
The subgraph induced by vertex 0 and vertices v i
comprises rail i of RG(r; k). The subgraph induced by vertices
and some j comprises layer j of RG(r; k). Vertex 0 comprises layer 0 of RG(r; k).
Thus, RG(2; k) is the k-layer rail graph, RG(k), defined in Section 5.1.
Let c(i; j; s) be the weight of the arc labeled s into vertex v i
Theorems 4 and 5
generalize easily to the k-layer r-rail graphs. Theorem 6 generalizes to RG(r;
follows, showing that the acyclic bound of Theorem 2 is tight for arbitrary alphabets.
Theorem 10. For any j ∈ [0, k]_Z there is a reweighting f such that layers 0 through
j of min(f(RG(r, k))) form the complete r-ary tree on (r^{j+1} − 1)/(r − 1)
vertices.
Proof (Sketch). Choose the following weighting. Set c(i; ';
for all 1 -
Given two strings, w 1 6= w 2 , such that jw 1 j, we can show that w 1
and w 2 must lead to different vertices in any deterministic realization, D, of RG(r; k).
Assume that w 1 and w 2 lead to the same vertex in D. Let c d (w) be the cost of string
w in D. Given any suffix s of length k \Gamma ', we can show that c(w 1
c d (w 1 The right hand side is a fixed value, \Delta.
Consider any position i - ' in which w 1 and w 2 differ. Denote the ith symbol of
string w by w(i). Consider two suffixes, s 1 and s 2 , of length
(i). Observe that the given weighting on RG(r; forces the
minimum cost path for any string with some symbol oe in position j to follow rail (r\Gammaoe).
Thus,
We can use this to show that c(w 1
6 Experimental Observations on ASR WFAs
To determine whether ASR WFAs manifest weight dependence, we experimented on
100 WFAs generated by the AT&T speech recognizer [23], using a grammar for the Air
Travel Information System (ATIS), a standard test bed [4]. Each transition was labeled
with a word and weighted by the recognizer with the negated log probability of realizing
that transition out of the source state; we refer to these weights as speech weights.
We determinized each WFA with its speech weights, with zero weights, and with
weights assigned independently and uniformly at random from [0, 2^i − 1]_Z (for each 0 ≤ i ≤ 8).
One WFA could not be determinized with speech weights due to computational
limitations, and it is omitted from the data.
Figure
2(a) shows how many WFAs expanded when determinized with different
weightings. Figure 2(b) classifies the 63 WFAs that expanded with at least one weight-
ing. For each WFA, we took the weighting that produced maximal expansion. This was
usually the 8-bit random weighting, although due to computational limitations we were
unable to determinize some WFAs with large random weightings. The x-axis indicates
the open interval within which the value lg(jdta(G)j=jGj) falls.
The utility of determinization in ASR includes the reduction in size achieved with
actual speech weights. In our sample, 82 WFAs shrank when determinized. For each,
we computed the value lg(jGj=jdta(G)j), and we plot the results in Fig. 2(c).
In Fig. 2(d), we examine the relationship between the value lg(jdta(G)j=jGj) and
the number of bits used in random weights. We chose the ten WFAs with highest final
expansion value and plotted lg(jdta(G)j=jGj) against the number of bits used. For
reference the functions i
are plotted, where i is the number of bits. Most
[Figure 2, four panels: (a) number of WFAs that expand, by type of weighting (speech, zeros, random i-bit); (b) number of WFAs vs. log base 2 of the expansion factor; (c) number of WFAs vs. log base 2 of the shrinkage; (d) log base 2 of the expansion factor vs. the number of random bits for individual WFAs, e.g., q0v004.]
Fig. 2. Observations on ASR WFAs.
of the WFAs exhibit subexponential growth as the number of bits increases, although
some, like q0t063 have increased by 128 times even with four random bits.
The WFA that could not be determinized with speech weights was "slightly hot,"
in that the determinized zero-weighted variant had 2.7% more arcs than the original
WFA. The remaining ninety-nine WFAs shrank with zero weights: none was hot. If one
expanded, it did so due to weights rather than topology.
Figure
2(a) indicates that many of the WFAs have some degree of weight dependence
Figure
2(d) suggests that random weights are a good way to estimate the degree
to which a WFA is weight dependent. Note that the expansion factor is some superlin-
possibly exponential, function of the number of random bits, suggesting that large,
e.g., 32-bit, random weights should cause expansion if anything will. Analogous experiments
on the minimized determinized WFAs yield results that are qualitatively the
same, although fewer WFAs still expand after minimization. Hence weight dependence
seems to be a fundamental property of these WFAs rather than an artifact of DTA.
Acknowledgements
. We thank Mehryar Mohri, Fernando Pereira, and Antonio Restivo
for fruitful discussions.
--R
Network Flows: Theory
Rational Series and Their Languages.
Algorithmic aspects in speech recognition: An intro- duction
Universal classes of hash functions.
Une caracterisation des fonctions sequentielles et des fonctions sous- sequentielles en tant que relations rationnelles
Finite automata computing real functions.
On computational power of weighted finite automata.
On measuring nondeterminism in regular languages.
Arithmetic coding of weighted finite automata.
Amounts of nondeterminism in finite automata.
On the use of sequential transducers in natural language processing.
Speech recognition by composition of weighted finite automata.
Weighted rational transductions and their application to human language processing.
Computational Geometry: An Introduction.
Probabilistic automata.
The AT&T
Analyse Syntaxique Transformationelle du Francais par Transducteurs et Lexique- Grammaire
Economy of description for single-valued transducers
--TR
--CTR
Mark G. Eramian, Efficient simulation of nondeterministic weighted finite automata, Journal of Automata, Languages and Combinatorics, v.9 n.2-3, p.257-267, September 2004
Bjrn Borchardt, A pumping lemma and decidability problems for recognizable tree series, Acta Cybernetica, v.16 n.4, p.509-544, 2004
Julien Quint, On the equivalence of weighted finite-state transducers, Proceedings of the ACL 2004 on Interactive poster and demonstration sessions, p.23-es, July 21-26, 2004, Barcelona, Spain
Manfred Droste , Dietrich Kuske, Skew and infinitary formal power series, Theoretical Computer Science, v.366 n.3, p.199-227, 20 November 2006
Bjrn Borchardt , Heiko Vogler, Determinization of finite state weighted tree automata, Journal of Automata, Languages and Combinatorics, v.8 n.3, p.417-463, 06/01/2003
Manfred Droste , Paul Gastin, Weighted automata and weighted logics, Theoretical Computer Science, v.380 n.1-2, p.69-86, June, 2007
Cyril Allauzen , Mehryar Mohri, Efficient algorithms for testing the twins property, Journal of Automata, Languages and Combinatorics, v.8 n.2, p.117-144, April
Ines Klimann , Sylvain Lombardy , Jean Mairesse , Christophe Prieur, Deciding unambiguity and sequentiality from a finitely ambiguous max-plus automaton, Theoretical Computer Science, v.327 n.3, p.349-373, 2 November 2004
Sylvain Lombardy , Jacques Sakarovitch, Sequential?, Theoretical Computer Science, v.356 n.1, p.224-244, 5 May 2006
Manfred Droste , Heiko Vogler, Weighted tree automata and weighted logics, Theoretical Computer Science, v.366 n.3, p.228-247, 20 November 2006 | rational functions and power series;algorithms;speech recognition;weighted automata |
586945 | On the Relative Complexity of Resolution Refinements and Cutting Planes Proof Systems. | An exponential lower bound for the size of tree-like cutting planes refutations of a certain family of conjunctive normal form (CNF) formulas with polynomial size resolution refutations is proved. This implies an exponential separation between the tree-like versions and the dag-like versions of resolution and cutting planes. In both cases only superpolynomial separations were known [A. Urquhart, Bull. Symbolic Logic, 1 (1995), pp. 425--467; J. Johannsen, Inform. Process. Lett., 67 (1998), pp. 37--41; P. Clote and A. Setzer, in Proof Complexity and Feasible Arithmetics, Amer. Math. Soc., Providence, RI, 1998, pp. 93--117]. In order to prove these separations, the lower bounds on the depth of monotone circuits of Raz and McKenzie in [ Combinatorica, 19 (1999), pp. 403--435] are extended to monotone real circuits.An exponential separation is also proved between tree-like resolution and several refinements of resolution: negative resolution and regular resolution. Actually, this last separation also provides a separation between tree-like resolution and ordered resolution, and thus the corresponding superpolynomial separation of [A. Urquhart, Bull. Symbolic Logic, 1 (1995), pp. 425--467] is extended. Finally, an exponential separation between ordered resolution and unrestricted resolution (also negative resolution) is proved. Only a superpolynomial separation between ordered and unrestricted resolution was previously known [A. Goerdt, Ann. Math. Artificial Intelligence, 6 (1992), pp. 169--184]. | Introduction
The motivation to work on the proof length of propositional proof systems is twofold. First, by the
work of Cook and Reckhow [15] we know that the claim that for every propositional proof system
there is a class of tautologies with no polynomial-size proofs is equivalent to NP ≠ co-NP. This
connection explains the interest in developing combinatorial techniques to prove lower bounds for
different proof systems. The second motivation comes from the interest in studying efficiency issues
in Automated Theorem Proving. The question is which proof systems have efficient algorithms
to find proofs. Actually, the proof system most widely used for implementations is resolution or
restrictions of resolution. Our work is relevant to both motivations. On one hand, all the separation
results of this paper improve to exponential the previously known superpolynomial ones. On the
other hand, these exponential separations harden the known results showing the inefficiency of several
widely used strategies for finding proofs, especially for the resolution system.
A preliminary version of this paper appeared as [8] and as ECCC TR98-035.
y Departament de Llenguatges i Sistemes Informatics, Universitat Politecnica de Catalunya,
fbonet,esteban,galesig@lsi.upc.es
z Partially supported by projects SPRIT 20244 ALCOM-IT, TIC 98-0410-C02-01 and PB98-0937-C04-03.
x Partially supported by project KOALA:DGICYT:PB95-0787.
{ Supported by an European Community grant under the TMR project.
k Institut fur Informatik, Ludwig-Maximilians-Universitat Munchen, jjohanns@informatik.uni-muenchen.de.
Research of this author done at Department of Mathematics, University of California, San Diego, supported by
DFG grant No. Jo 291/1-1.
Haken [21] was the rst in proving exponential lower bounds for unrestricted resolution. He
showed that the Pigeonhole Principle requires exponential-size resolution refutations. Later Urquhart
[35] found another class of tautologies with the same property. Chvatal and Szemeredi [11] showed
that in some sense, almost all classes of tautologies require exponential size resolution proofs (see
[4, 5] for simplied proofs of these results). These exponential lower bounds are bad news for
Automated Theorem Proving, since they mean that often the time used in nding proofs will be
exponentially long in the size of the tautology, just because the shortest proofs are also exponentially
long in the size of the tautology. A natural question then is what happens with the classes
of tautologies with polynomial-size proofs. Can we nd these proofs eciently? Several authors
[4, 12, 5] have given weakly exponential (in the minimal proof size) time algorithms for nding
resolution proofs. The question obviously is whether these results can be improved or not.
Formally, we say that a propositional proof system S is automatizable if there is an algorithm
that for every tautology F , nds a proof of F in S in time polynomial in the length of the shortest
proof of F in S. The only propositional proof systems that are known to be automatizable are
algebraic proof systems like Hilbert's Nullstellensatz [2] and Polynomial Calculus [12]. On the
other hand bounded-depth Frege proof systems are not automatizable, assuming factoring is hard
[29, 10, 7]. Since Frege systems and Extended Frege systems polynomially simulate bounded-depth
Frege systems, they are also not automatizable under the same assumptions.
Note that automatizability is equivalent to the approximability to within a polynomial factor of
the following optimization problem: Given a proof of some tautology, nd its minimal size proof .
Iwama [23] and Alekhnovich et al. [1] show that it is NP -hard to approximate this problem to
within a linear factor, for most of the commonly studied proof systems.
Many strategies for nding resolution proofs are described in the literature, see e.g. Schoning's
textbook [34]. One commonly used type of strategy is to reduce the search space by dening
restricted versions of resolution that are still complete. Such restricted forms are commonly referred
to as resolution renements. One particularly important resolution renement is tree-like
resolution. Its importance stems from the close relationship between the complexity of tree-like
resolution proofs and the runtime of a certain class of satisability testing algorithms, the so-called
Algorithms (cf. [31, 3]). We prove an exponential separation between tree-like resolution and
unrestricted resolution (Corollary 20), thus showing that nding tree-like resolution proofs is not
an ecient strategy for nding resolution proofs. Until now only superpolynomial separations were
known [36, 13].
In this paper, we consider three more of the most commonly used resolution renements: negative
resolution, regular resolution and ordered resolution. We show an exponential separation
between tree-like resolution and each one of the above restrictions (Corollary 20 for negative resolution
and Corollary 23 for both regular and ordered resolution). Goerdt [19, 18, 20] gave several
superpolynomial separations between unrestricted resolution and some renements, in particular
he gave a superpolynomial separation between ordered resolution and unrestricted resolution. We
improve this result by giving an exponential separation between ordered and negative resolution
(Corollary 28), thus showing that unrestricted resolution can have an exponential speed-up over
ordered resolution.
The Cutting Planes proof system, CP from now on, is a refutation system based on manipulating
integer linear inequalities. Exponential lower bounds for the size of CP refutations are already
proven. Impagliazzo et al. [22] proved exponential lower bounds for tree-like CP . Bonet et al. [9]
proved a lower bound for the subsystem CP , where the coecients appearing in the inequalities
are polynomially bounded in the size of the formula being refuted. This is a very important result
because all known CP refutations fulll this property. Finally, Pudlak [30] and Cook and Haken
[14] gave general circuit complexity results from which exponential lower bounds for CP follow.
To this day it is still unknown whether CP is more powerful than CP , i.e., whether it produces
shorter proofs or not.
Nothing is known about automatizability of CP proofs. Since there is an exponential speed-up
of CP over resolution, it would be nice to nd an ecient algorithm for nding CP proofs. A
question to ask is whether trying to nd tree-like CP proofs would be an ecient strategy for
nding Cutting Planes proofs. Johannsen [24] gave a superpolynomial separation, with a lower
bound of the
log n ), between tree-like CP and dag-like CP (this was previously known for
CP from [9]). Here we improve that separation to exponential (Corollary 20). This means that
trying to nd tree-like proofs is also not a good strategy for nding proofs in CP .
1.1 Interpolation and Lower Bounds
Interpolation is a technique for proving lower bounds for resolution and Cutting Planes systems.
The name comes from a classical theorem of Mathematical Logic, the Craig's interpolation theorem.
Krajcek [27] reformulated this classical theorem in order to use it to prove lower bounds for proof
systems. Closely related ideas appeared previously in the mentioned works that gave lower bounds
for fragments of CP ([22, 9]).
The interpolation method translates proofs of certain formulas to circuits, preserving sizes. So
it is a way to reduce the problem of proving proof complexity lower bounds to circuit complexity
lower bounds. This is very important because in some cases there are strong circuit complexity
lower bounds. In particular for monotone circuits, there are both size and depth lower bounds.
The interpolation method works as follows. We consider a hard boolean function, i.e., one
that requires exponential size monotone circuits to be computed. We dene a contradiction
~r), such that A(~p; ~q) expresses that ~ p is a minterm 1 of the function (the variables ~q
describe the minterm), and B(~p; ~r) says that ~ p is a maxterm of the function (~r describes the max-
term). We suppose that A(~p; ~q) ^B(~p;~r) has a subexponential size refutation in a system with the
interpolation theorem (monotone version). Then by the theorem we can extract a subexponential
size monotone circuit that computes the hard function. Since this is impossible,
requires exponential size proofs.
With the above method we can prove lower bounds for resolution and CP (see [9, 30, 27]).
But to get an exponential lower bound for full CP we actually need an interpolation theorem that
translates proofs into monotone real circuits [30]. See Section 3 for the denition of monotone real
circuits and Theorem 17 for the interpolation theorem for Cutting Planes.
The main body of this paper consists on exponential separations between tree-like and dag-like
(general) versions of two proof systems, resolution and Cutting Planes. So far we have outlined
the ideas in order to prove complexity lower bounds for systems like resolution and CP . In order
to separate the tree-like versions from the general versions of the proof systems, we need to dene
a contradiction that has polynomial size proofs in resolution and CP but for
which we can prove exponential size lower bounds for the corresponding tree-like versions.
The interpolation theorem applied on tree-like proofs gives rise to tree-like circuits (i.e., formu-
las). Therefore we need exponential size lower bounds for monotone formulas, or equivalently, we
need linear depth lower bounds for monotone circuits.
Karchmer and Wigderson [26] proved an O(log 2 n) lower bound on the depth of monotone
1 Recall that a minterm (respectively a maxterm) of a boolean function f : f0; 1g n ! f0; 1g is a set of inputs
such that for each y 2 f0; 1g n obtained from x by changing a bit
from 1 to 0 (respectively by changing a bit from 0 to 1) it holds that
circuits computing the st-connectivity function. Johannsen [24] extended this lower bound to real
circuits, and using the interpolation theorem he proved a superpolynomial separation between
tree-like and dag-like CP .
Lower bounds on the depth of monotone boolean circuits of the
order
were given by Raz and McKenzie [32]. Here we extend their results to the case of monotone real
circuits. Namely, we prove an
lower bound on the depth of monotone
real circuits computing a certain monotone function Gen n which is computable in polynomial
time. This implies an
lower bound on the size of monotone real formulas computing Gen n .
Hence, by the interpolation theorem, we get the exponential separation of tree-like from dag-like
Cutting Planes. The same ideas also separate tree-like from dag-like resolution.
1.2 Section Description
The paper is organized as follows. In Section 2 we give the basic denitions of the proof systems
considered in the paper. In Section 3 we dene monotone real circuits, and prove the depth lower
bound for them. This is applied in Section 4 to prove the lower bounds for tree-like CP , giving
exponential separations of tree-like CP from CP , tree-like resolution from resolution as well as
from regular resolution, ordered resolution and negative resolution. Finally in Section 5 we prove
the exponential lower bound for ordered resolution, separating it from negative resolution. We
conclude by stating some open problems.
2 The Proof Systems
Resolution is a refutation proof system for CNF formulas, which are represented as sets of clauses,
i.e., disjunctions of literals. We identify clauses in which the same literals occur, multiple occurrences
and the order of appearance are disregarded. The only inference rule is the resolution
rule: from the clauses C ∨ x and D ∨ ¬x one derives
C ∨ D
The clause C _ D is called the resolvent , and we say that the variable x is eliminated in this
inference. A resolution refutation of some set of clauses is a derivation of the empty clause from
using the above inference rule. Resolution is a sound and complete refutation system, i.e., a set
of clauses has a resolution refutation if and only if it is unsatisable.
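On the set-of-literals representation of clauses used here, the resolution rule and the side condition of N-resolution are one-liners; the signed-integer encoding of literals below is our own convention.

```python
def resolve(clause1, clause2, x):
    """Resolvent of clause1 (containing literal x) and clause2 (containing -x).
    Clauses are frozensets of nonzero integers; -v stands for the negation of v."""
    assert x in clause1 and -x in clause2
    return (clause1 - {x}) | (clause2 - {-x})

def is_negative(clause):
    """True iff the clause consists only of negative literals
    (the side condition of N-resolution)."""
    return all(lit < 0 for lit in clause)

# (x1 or x2) and (not x1 or x3) resolve on x1 to (x2 or x3).
print(resolve(frozenset({1, 2}), frozenset({-1, 3}), 1))   # -> frozenset({2, 3})
```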
Several renements of the resolution proof system, i.e., restricted forms that are still complete,
have appeared in the literature. In this paper we consider the following three:
1. The regular resolution system: Viewing the refutations as graph, in any path from the empty
clause to any initial clause, no variable is eliminated twice.
2. The ordered 2 resolution system: There exists an arbitrary ordering of the variables in the
formula being refuted, such that if a variable x is eliminated before a variable y on any path
from an initial clause to the empty clause, then x is before y in the ordering. As no variable
is eliminated twice on any path, ordered resolution is a restriction of regular resolution.
3. The negative resolution system (N-resolution for short): To apply the resolution rule, one of
the two clauses should consists only of negative literals.
In Goerdt's paper [18] and in the preliminary version [8] of this paper, this renement is called Davis-Putnam
resolution. In the meantime, we have learned that it is more commonly known as ordered resolution.
In a tree-like proof any line in the proof can be used only once as a premise. Should the same
line be used twice, it must be rederived. A proof system that only produces tree-like proofs is called
tree-like. Otherwise we will call it dag-like, or just skip the adjective. When nothing is said it is
understood that the system is dag-like.
There is an algorithm (see e.g. Urquhart [36]) that transforms a tree-like resolution proof into
a possibly smaller regular tree-like resolution proof. Therefore tree-like resolution proofs of minimal
size are regular. That means that under the viewpoint of proof system complexity, tree-like
resolution and tree-like regular resolution are polynomially equivalent.
The Cutting Planes proof system, CP for short, is a refutation system for CNF formulas,
as resolution. It works with linear inequalities. The initial clauses are transformed into linear
inequalities in the following way:
_
_
A CP refutation of a set E of inequalities is a derivation of 0 1 from the inequalities in E and
the axioms x 0 and x 1 for every variable x, using the CP rules which are basic algebraic
manipulations, additions of two inequalities, multiplication of an inequality by a positive integer
and the following division rule:
Σ_{i∈I} a_i x_i ≥ k
Σ_{i∈I} (a_i / b) x_i ≥ ⌈k/b⌉
where b is a positive integer that evenly divides all a i ,
It can be shown that a set of inequalities has a CP -refutation i it has no f0; 1g-solution.
Any assignment satisfying the original clauses is actually a f0; 1g-solution of the corresponding
inequalities, provided that we assign the numerical value 1 to True and the value 0 to False. It is
also well-known that CP can polynomially simulate resolution [16], and this simulation preserves
tree-like proofs.
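The translation of initial clauses into CP inequalities described above can be written down directly. In the sketch below an inequality is represented as a map from variables to coefficients together with the right-hand-side constant; this encoding is ours.

```python
def clause_to_inequality(clause):
    """Translate x_{i1} v ... v x_{ik} v -x_{j1} v ... v -x_{jl} into
    x_{i1} + ... + x_{ik} + (1 - x_{j1}) + ... + (1 - x_{jl}) >= 1,
    i.e., sum of positive vars minus sum of negated vars >= 1 - l."""
    coeffs, negs = {}, 0
    for lit in clause:
        v = abs(lit)
        if lit > 0:
            coeffs[v] = coeffs.get(v, 0) + 1
        else:
            coeffs[v] = coeffs.get(v, 0) - 1
            negs += 1
    return coeffs, 1 - negs

# (x1 or not x2) becomes x1 - x2 >= 0.
print(clause_to_inequality(frozenset({1, -2})))   # -> ({1: 1, 2: -1}, 0)
```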
3 Monotone Real Circuits
A monotone real circuit is a circuit of fan-in 2 computing with real numbers, where every gate computes a nondecreasing real function. This class of circuits was introduced by Pudlák [30]. We require that monotone real circuits output 0 or 1 on every input consisting of zeroes and ones only, so that they are a generalization of monotone boolean circuits. The depth and size of a monotone real circuit are defined as usual, and we call it a formula if every gate has fan-out at most 1.
Lower bounds on the size of monotone real circuits were given by Pudlák [30], Cook and Haken [14] and Fu [17]. Rosenbloom [33] shows that they are strictly more powerful than monotone boolean circuits, since every slice function can be computed by a linear size, logarithmic depth monotone real circuit, whereas most slice functions require exponential size general boolean circuits. On the other hand, Jukna [25] gives a general lower bound criterion for monotone real circuits, and uses it to show that certain functions in P/poly require exponential size monotone real circuits. Hence the computing power of monotone real circuits and general boolean circuits is incomparable.
For a monotone boolean function f, we denote by $d_R(f)$ the minimal depth of a monotone real circuit computing f, and by $s_R(f)$ the minimal size of a monotone real formula computing f.
The method of proving lower bounds on the depth of monotone boolean circuits using communication complexity was used by Karchmer and Wigderson [26] to give an $\Omega(\log^2 n)$ lower bound on the monotone depth of st-connectivity. Using the notion of real communication complexity introduced by Krajíček [28], Johannsen [24] proved the same lower bound for monotone real circuits.
In the case of boolean circuits the Karchmer-Wigderson result was generalized by Raz and McKenzie [32]. Consider the monotone function $Gen_n$ of $n^3$ inputs $t_{a,b,c}$, $1 \le a,b,c \le n$, defined as follows: for $c \le n$, we define the relation $\vdash c$ (c is generated) recursively by
$$\vdash c \iff c = 1, \text{ or there are } a,b \le n \text{ with } \vdash a,\ \vdash b \text{ and } t_{a,b,c} = 1 .$$
Finally $Gen_n(\vec t\,) = 1$ iff $\vdash n$. From now on we will write $a,b \vdash c$ for $t_{a,b,c} = 1$.
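The generation relation is a simple closure computation; the following sketch (ours, for illustration only) evaluates $Gen_n$ on an explicit list of triples $a,b \vdash c$.

```python
def gen(n, triples):
    """Gen_n(t) for t given as the set of triples (a, b, c) with t_{a,b,c} = 1:
    compute the closure of the relation 'c is generated' and test whether
    n is generated."""
    generated = {1}                      # base case: 1 is generated
    changed = True
    while changed:
        changed = False
        for a, b, c in triples:
            if a in generated and b in generated and c not in generated:
                generated.add(c)
                changed = True
    return n in generated

# 1,1 |- 2 and 2,2 |- 3 generate 3, but nothing generates 4.
print(gen(3, {(1, 1, 2), (2, 2, 3)}))   # True
print(gen(4, {(1, 1, 2), (2, 2, 3)}))   # False
```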
Raz and McKenzie [32] proved a lower bound of the form $n^{\epsilon}$ on the depth of monotone boolean circuits computing $Gen_n$. By a modification of their method we show that this result also holds for monotone real circuits:
Theorem 1. For some $\epsilon > 0$ and sufficiently large n, $d_R(Gen_n) \ge n^{\epsilon}$.
This section is dedicated entirely to the proof of the above theorem. In the next section (Section 4) we will see how to use the lower bounds provided by Theorem 1 to obtain lower bounds for the complexity of proofs in resolution and Cutting Planes proof systems.
3.1 Real Communication Complexity
Let $R \subseteq X \times Y \times Z$ be a multifunction, i.e., for every pair $(x,y) \in X \times Y$ there is a $z \in Z$ with $(x,y,z) \in R$. We view such a multifunction as a search problem, i.e., given input $(x,y) \in X \times Y$, the goal is to find a $z \in Z$ such that $(x,y,z) \in R$.
A deterministic communication protocol P over $X \times Y \times Z$ specifies the exchange of information bits between two players, I and II, that receive as inputs respectively $x \in X$ and $y \in Y$ and finally agree on a value $P(x,y) \in Z$ such that $(x,y,P(x,y)) \in R$. The deterministic communication complexity of R, CC(R), is the number of bits communicated between players I and II in the optimal protocol for R.
A real communication protocol over $X \times Y \times Z$ is executed by two players I and II who exchange information by simultaneously playing real numbers and then comparing them according to the natural order of $\mathbb{R}$. This generalizes ordinary deterministic communication protocols in the following way: in order to communicate a bit, the sender plays this bit, while the receiver plays a constant strictly between 0 and 1, so that he can determine the value of the bit from the outcome of the comparison.
Formally, such a protocol P is specified by a binary tree, where each internal node v is labeled by two functions $f^I_v : X \to \mathbb{R}$, giving player I's move, and $f^{II}_v : Y \to \mathbb{R}$, giving player II's move, and each leaf is labeled by an element $z \in Z$. On input $(x,y) \in X \times Y$, the players construct a path through the tree according to the following rule:
At node v, if $f^I_v(x) > f^{II}_v(y)$, then the next node is the left son of v, otherwise the right son of v.
The value P(x,y) computed by P on input (x,y) is the label of the leaf reached by this path.
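To make the comparison-based message passing concrete, the following toy sketch (ours, not from the paper; the node encoding is an assumption) evaluates a protocol tree and shows how an ordinary one-bit message embeds into it: the sender plays the bit, the receiver plays the constant 1/2.

```python
# A node is either a leaf ("answer", z) or ("node", f_I, f_II, left, right),
# where f_I maps x to a real and f_II maps y to a real.

def evaluate(node, x, y):
    while node[0] == "node":
        _, f_I, f_II, left, right = node
        # One round: both players play a real; the outcome of the
        # comparison determines the next node on the path.
        node = left if f_I(x) > f_II(y) else right
    return node[1]

# Embedding an ordinary 1-bit message from player I: I plays the bit,
# II plays the constant 1/2, so the comparison reveals the bit to II.
leaf0 = ("answer", "I sent 0")
leaf1 = ("answer", "I sent 1")
send_bit = ("node", lambda x: x, lambda y: 0.5, leaf1, leaf0)

print(evaluate(send_bit, 1, None))  # -> "I sent 1"
print(evaluate(send_bit, 0, None))  # -> "I sent 0"
```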
A real communication protocol P solves a search problem $R \subseteq X \times Y \times Z$ if for every $(x,y) \in X \times Y$, $(x,y,P(x,y)) \in R$ holds. The real communication complexity $CC_R(R)$ of a search problem R is the minimal depth of a real communication protocol that solves R.
For a natural number n, let [n] denote the set $\{1,\dots,n\}$. Let $f : \{0,1\}^n \to \{0,1\}$ be a monotone boolean function, let $X := f^{-1}(1)$ and $Y := f^{-1}(0)$, and let the multifunction $R_f \subseteq X \times Y \times [n]$ be defined by
$$(x,y,i) \in R_f \iff x_i = 1 \text{ and } y_i = 0 .$$
The Karchmer-Wigderson game for f is defined as follows: Player I receives an input $x \in X$ and Player II an input $y \in Y$. They have to agree on a position $i \in [n]$ such that $x_i = 1$ and $y_i = 0$.
Sometimes we will say that $R_f$ is the Karchmer-Wigderson game for the function f. There is a relation between the real communication complexity of $R_f$ and the depth of a monotone real circuit or the size of a monotone real formula computing f, similar to the boolean case:
Lemma 2. Let f be a monotone boolean function. Then
$$CC_R(R_f) \le d_R(f) \quad\text{and}\quad CC_R(R_f) \le \log_{3/2} s_R(f) .$$
For a proof see [28] or [24].
We will apply Lemma 2 to the boolean function Gen to prove a lower bound for $d_R(Gen)$ and an exponential lower bound for $s_R(Gen)$, from a lower bound for $CC_R(R_{Gen})$. It is immediate to see that to establish Theorem 1 from Lemma 2, it suffices to prove the following result:
Theorem 3. For some $\epsilon > 0$ and sufficiently large n, $CC_R(R_{Gen_n}) \ge n^{\epsilon}$.
Analogously to the case of [32], to prove Theorem 3 we will prove a more general result about real communication complexity. As in [32] we will introduce a class of special games, the DART games, and the measure of structured communication complexity. In the next subsection we prove that lower bounds for the real communication complexity of a relation R associated to a DART game can be obtained by proving lower bounds for the structured communication complexity of R (Theorem 4).
3.2 DART games and structured protocols
Raz and McKenzie [32] introduced a special kind of communication games, called DART games, and a special class of communication protocols, the structured protocols, for solving them.
For $m, k \in \mathbb{N}$, DART(m,k) is the set of communication games specified by a relation $R \subseteq X \times Y \times Z$ with the following properties:
1. $X = [m]^k$, i.e., the inputs for Player I are k-tuples $x = (x_1,\dots,x_k)$ of elements of [m].
2. $Y = (\{0,1\}^m)^k$, i.e., the inputs for Player II are k-tuples of binary colorings $y_i$ of [m].
3. For all $i \le k$ let $e_i := y_i(x_i)$, the color that $y_i$ assigns to $x_i$. The relation R defining the game only depends on $e_1,\dots,e_k$, i.e., we can describe R(x,y,z) as a relation $R(e_1,\dots,e_k,z)$.
4. R can be expressed as a DNF-Search-Problem, i.e., there exists a DNF tautology $F_R$ defined over the variables $e_1,\dots,e_k$ such that Z is the set of terms of $F_R$, and $R(e_1,\dots,e_k,z)$ holds if and only if the term z is satisfied by the assignment $(e_1,\dots,e_k)$.
A structured protocol for a DART game is a communication protocol for solving the search problem R, where player I gets input $x \in X$, player II gets input $y \in Y$, and in each round player I reveals the value $x_i$ for some i, and II replies with $e_i = y_i(x_i)$. The structured communication complexity of $R \in DART(m,k)$, denoted by SC(R), is the minimal number of rounds in a structured protocol solving R.
The main theorem of [32] showed that for suitable m and k, the deterministic communication complexity of a DART game cannot be much smaller than its structured communication complexity. We shall show the same for its real communication complexity. Obviously, a structured protocol solving R in r rounds can be simulated by a real communication protocol solving R in $r \cdot (\lceil \log m \rceil + 1)$ rounds. Conversely, we will prove that the following holds:
Theorem 4. Let $m, k \in \mathbb{N}$. For every relation $R \in DART(m,k)$, where $m \ge k^{14}$,
$$CC_R(R) = \Omega(SC(R) \cdot \log m) .$$
The proof of this theorem is the main technical result of this section and we dedicate to it the
entire Subsection 3.3.
As a first corollary to Theorem 4, we observe that for DART games, real communication protocols are not significantly more powerful than deterministic communication protocols.
Corollary 5. Let $m, k \in \mathbb{N}$. For $R \in DART(m,k)$ with $m \ge k^{14}$, $CC(R) = \Theta(CC_R(R))$.
Proof. $CC_R(R) \le CC(R) \le SC(R)\cdot(\lceil \log m \rceil + 1) = O(CC_R(R))$, where the last step uses Theorem 4.
In the rest of this subsection we show how to obtain the proof of Theorem 3 using Theorem 4. For $d \in \mathbb{N}$ let $Pyr_d := \{(i,j) : 1 \le j \le i \le d\}$. Consider the communication game PyrGen(m,d) defined as follows: We regard the indices as elements of $Pyr_d$, so that the inputs for the two players I and II in the PyrGen(m,d) game are respectively sequences of elements $x_{i,j} \in [m]$ and of colorings $y_{i,j} \in \{0,1\}^m$ for $(i,j) \in Pyr_d$, and we picture these as laid out in a pyramidal form with (1,1) at the top and (d,j), $1 \le j \le d$, at the bottom. The goal of the game is to find either an element colored 0 at the top of the pyramid, or an element colored 1 at the bottom of the pyramid, or an element colored 1 with the two elements below it colored 0. That is, we have to find indices (i,j) such that one of the following holds:
1. $i = j = 1$ and $y_{1,1}(x_{1,1}) = 0$
2. $i = d$ and $y_{d,j}(x_{d,j}) = 1$
3. $i < d$, $y_{i,j}(x_{i,j}) = 1$ and $y_{i+1,j}(x_{i+1,j}) = y_{i+1,j+1}(x_{i+1,j+1}) = 0$.
Observe that, setting $e_{i,j} := y_{i,j}(x_{i,j})$, the search problem can be defined as a DNF search problem given by the following DNF tautology:
$$\neg e_{1,1} \;\vee\; \bigvee_{1 \le j \le d} e_{d,j} \;\vee\; \bigvee_{(i,j)\in Pyr_d,\ i<d} \bigl(e_{i,j} \wedge \neg e_{i+1,j} \wedge \neg e_{i+1,j+1}\bigr)$$
3 Observe that w.l.o.g. we can assume that both players know the structure of the protocol of the game. Hence
we can assume that at each round they both know what is the coordinate i of the inputs they have to talk about.
Therefore they have no need to transmit the index i of this coordinate.
Therefore, PyrGen(m,d) is a game in $DART(m, \frac{d(d+1)}{2})$.
The following reduction shows that the real communication complexity of the game PyrGen(m,d) is bounded by the real communication complexity of the Karchmer-Wigderson game for $Gen_n$ for a suitable n. The proof is taken from [32]; we include it because we will have to refer to some details of it below.
Lemma 6. Let $n = \frac{d(d+1)}{2}\, m + 2$. Then $CC_R(PyrGen(m,d)) \le CC_R(R_{Gen_n})$.
Proof. We prove that any protocol P solving the Karchmer-Wigderson game for $Gen_n$ can be used to solve the PyrGen(m,d) game. From their respective inputs for the PyrGen(m,d) game, Players I and II compute respectively a minterm and a maxterm for $Gen_n$ and then apply the protocol P. We interpret the elements between 2 and n-1 as triples (i,j,k), where $(i,j) \in Pyr_d$ and $k \in [m]$.
Now player I computes from his input $x : Pyr_d \to [m]$ an input $\vec t_x$ to $Gen_n$ with $Gen_n(\vec t_x) = 1$ by setting the following:
$a_{1,1}, a_{1,1} \vdash n$; $\quad a_{i+1,j}, a_{i+1,j+1} \vdash a_{i,j}$ for $(i,j) \in Pyr_d$ with $i < d$; $\quad 1,1 \vdash a_{d,j}$ for $1 \le j \le d$,
where $a_{i,j} := (i,j,x_{i,j})$. This completely determines $\vec t_x$.
Likewise Player II computes from his input $y : Pyr_d \to \{0,1\}^m$ a coloring col of the elements of [n] by setting $col(1) := 0$, $col(n) := 1$ and $col((i,j,k)) := y_{i,j}(k)$. From this, he computes an input $\vec t_y$ by setting $a,b \vdash c$ iff it is not the case that $col(a) = col(b) = 0$ and $col(c) = 1$. Obviously $Gen_n(\vec t_y) = 0$.
Playing the Karchmer-Wigderson game for $Gen_n$ now yields a triple (a,b,c) such that $a,b \vdash c$ in $\vec t_x$ and $a,b \not\vdash c$ in $\vec t_y$. By definition of $\vec t_y$, this means that $col(a) = col(b) = 0$ and $col(c) = 1$, and by definition of $\vec t_x$ one of the following cases must hold:
$c = a_{d,j}$ for some $j \le d$. By definition of col, $y_{d,j}(x_{d,j}) = 1$, so case 2 above holds.
$c = n$. In this case, $y_{1,1}(x_{1,1}) = col(a_{1,1}) = 0$, so case 1 above holds.
$c = a_{i,j}$, $a = a_{i+1,j}$ and $b = a_{i+1,j+1}$ for some $(i,j)$ with $i < d$. Then we have $y_{i,j}(x_{i,j}) = 1$ and $y_{i+1,j}(x_{i+1,j}) = y_{i+1,j+1}(x_{i+1,j+1}) = 0$, so case 3 above holds.
In either case, the players have solved PyrGen(m,d) without any additional communication.
A lower bound on the structured communication complexity of PyrGen(m,d) was proved in [32]:
Lemma 7 (Raz/McKenzie [32]). $SC(PyrGen(m,d)) \ge d$.
A proof of Theorem 3 therefore follows immediately from the above results.
(Footnote 4: Recall the definition of minterm and maxterm from footnote 1.)
Proof of Theorem 3. Fix $m := d^{28}$. By Theorem 4 and Lemma 7, we get $CC_R(PyrGen(m,d)) = \Omega(d \log m)$. Recall that $n = \frac{d(d+1)}{2}\, m + 2$. Therefore Lemma 6 immediately implies the Theorem, taking a suitable $\epsilon > 0$.
From our Theorem 3 we obtain consequences for monotone real circuits analogous to those obtained in [32] for monotone boolean circuits.
Definition (Pyramidal Generation). Let $\vec t$ be an input to $Gen_n$. We say that n is generated in a depth-d pyramidal fashion by $\vec t$ if there is a mapping $m : Pyr_d \to [n]$ such that the following hold (recall that $a,b \vdash c$ means $t_{a,b,c} = 1$): $m(1,1), m(1,1) \vdash n$; $\;m(i+1,j), m(i+1,j+1) \vdash m(i,j)$ for $(i,j) \in Pyr_d$ with $i < d$; and $1,1 \vdash m(d,j)$ for $1 \le j \le d$.
Observe that the reduction in the proof of Lemma 6 produces only inputs from $Gen_n^{-1}(1)$ which have the additional property that n is generated in a depth-d pyramidal fashion. Hence we can state the following strengthening of Theorem 1:
Corollary 8. Let n, d and $\epsilon$ be as above. Every monotone real formula that outputs 1 on every input to $Gen_n$ for which n is generated in a depth-d pyramidal fashion, and outputs 0 on all inputs where $Gen_n$ is 0, has to be of size $2^{\Omega(n^{\epsilon})}$.
The other consequences drawn from Theorem 4 and Lemma 7 in [32] apply to monotone real circuits as well; e.g., we just state without proof the following result:
Theorem 9. There are constants $0 < \epsilon, \delta < 1$ such that for every function $d(n) \le n^{\epsilon}$, there is a family of monotone functions $f_n$ that can be computed by monotone boolean circuits of size $n^{O(1)}$ and depth d(n), but cannot be computed by monotone real circuits of depth less than $\delta\, d(n)$.
The method also gives a simpler proof of the lower bounds in [24], in the same way as [32] simplifies the lower bound of [26].
3.3 Proof of Theorem 4:
To prove Thm. 4, we rst need some combinatorial notions from [32] and some lemmas. Let
A [m] k and 1 j k. For x be the number of 2 [m] such that
A. Then we dene
The following lemmas about these notions were proved in [32]:
([32]). For every A 0 A and 1 j k,
Lemma 11 ([32]). Let 0 < < 1 be given. If for every 1 j k, AV
every > 0 there is A 0 A with jA 0 j (1 )jAj and
In particular, setting
14 , we get
Corollary 12. If m k 14 and for every 1 j k, AV
14 , then there is A 0 A
with
14 .
This corollary is almost identical to the corresponding statement in [32]. Only the numerical parameters have been slightly modified to improve the final bound.
For a relation $R \in DART(m,k)$, $A \subseteq X$ and $B \subseteq Y$, let $CC_R(R,A,B)$ be the real communication complexity of R restricted to $A \times B$.
Denition
called an (; ; ')-game if the following conditions hold:
1. R 2 DART(m; k),
2. SC(R) ',
3. jAj 2 jXj and jBj 2 jY j,
4. T
14 .
The following lemma and its proof are slightly different from the corresponding lemma in [32], because we use the stronger notion of real communication complexity where [32] use ordinary communication complexity. The modification we apply is analogous to that introduced by Johannsen [24] to improve the result of Karchmer and Wigderson [26] to the case of real communication complexity. This modification will affect the proof of the first point of the next lemma.
Lemma 13. For every
1. if for every 1 j k, AV
14 , then there is an (+2; +1; ')-game (R
with
(R
2. if ' 1 and for some 1 j k, AV
14 , then there is an
(R
Proof of Lemma 13. For part 1, we first show that $CC_R(R,A,B) > 0$. Assume otherwise; then there is a term z in the DNF tautology $F_R$ defining R that is satisfied for every $(x,y) \in A \times B$.
denote the number of possible values of x j
in elements of A, then this implies that jBj 2 mk
. On the other hand, jBj 2 mk , hence it
follows that
, which is a contradiction since m 1
14 implies
14 .
Let an optimal real communication protocol solving R restricted to $A \times B$ be given. For $a \in A$ and $b \in B$, let $\alpha_a$ and $\beta_b$ be the real numbers played by I and II in the first round on input a and b, respectively. W.l.o.g. we can assume that these are $|A| + |B|$ distinct real numbers.
Now consider a $\{0,1\}$-matrix of size $|A| \times |B|$ with columns indexed by the $\alpha_a$ and rows indexed by the $\beta_b$, both in increasing order, and where the entry in position $(\alpha_a, \beta_b)$ is 1 if $\alpha_a > \beta_b$ and 0 if $\alpha_a \le \beta_b$. Thus this entry determines the outcome of the first round when these numbers are played. It is now obvious that either the upper right quadrant or the lower left quadrant must form a monochromatic rectangle.
Hence there are $A^* \subseteq A$ and $B' \subseteq B$ with $|A^*| \ge \frac{1}{2}|A|$ and $|B'| \ge \frac{1}{2}|B|$ such that R restricted to $A^* \times B'$ can be solved by a protocol with one round fewer than the original protocol.
(1), AV DEG j
14 for every 1 j k, hence by Corollary 12 there is A 0 A with
4 jAj and T
14 . Thus (R; A is an
For part 2 we proceed like in the proof of the corresponding lemma of [32], with the numbers
slightly adjusted. Assume without loss of generality that k is the coordinate for which
14 . Let R 0 and R 1 be the restrictions of R in which the k-th coordinate
xed to 0 and 1, respectively. Obviously, R 0 and R 1 are DART (m; k 1) relations,
and therefore at least one of SC(R 0 ) and SC(R 1 ) is at least k 1. Assume without loss of generality
that SC(R 0 ) k 1. We will prove that there are two sets A 0
such that the following properties hold:
14 (5)
(R
This means that there is a (+3 log m; +1; k 1)-game (R (R
CCR (R; A; B) and this proves part 2 of Lemma 13.
Given any set U [m] consider the sets AU [m] k 1 and BU (f0; 1g m associated to the
set U by the following denition of [32]:
there is an u 2 U such that
there is a w 2 f0; 1g m such that
The following two claims can be proved exactly as the corresponding Claims of [32] and we omit
their proof.
14. For a random set U of size m 5
14 , with m 1000 14 , we have that
15. For a random set U of size m 5
14 , with m 1000 14 we have that
3
Moreover it is immediate to see that the same reduction used in Claim 6.3 of [32] also works
for the case of real communication complexity. Therefore we get:
16. For every set U [m]
(R
Take a random set U which with probability greater than 1, satises both the properties of
14 and Claim 15, and dene A 0 := AU and B 0 := BU . This means that with probability at
least 1both A
Recall that jAj
and that, by hypothesis on Part 2 of the lemma
14 . Therefore we have that
This proves (3). For (4) observe that by Claim 15 we have
The property (5) follows directly from Lemma 10 (2), and nally (6) follows from Claim 16.
We finally end with the proof of Theorem 4 from Lemma 13.
Proof of Theorem 4. Let k 2 N, k 1000. We prove that for any ;
is such that
CCR (R; A; B) '
log m4
Observe that by Denition of (; ; ')-game, when we have that
Therefore CCR (R; A; (R). Moreover the right side of Equation 7 reduces to '
493 m).
Since by the same Denition ' SC(R), for we get the claim of the theorem:
CCR (R) SC(R)
m)
To prove Equation 7, we proceed by induction on ' 1 and m 1=7 . In the base case ' < 1
(that is
7 , the inequality (7) is trivial, since the right hand side gets negative
for large m. In the inductive step consider (R; A; B) be an (; ; ')-game, and assume that (7)
holds for all . For sake of contradiction, suppose that
CCR (R; A; B) < '
log m4 +. Then either for every 1 j k, AV
14 , and
Lemma 13 gives an ( (R
(R
or for some 1 j k, AV
14 , then Lemma 13 gives an (
game (R
(R
log m4
log
both contradicting the assumption.
4 Separation between tree-like and dag-like versions of Resolution
and Cutting Planes
Cutting Planes refutations are linked to monotone real circuits by the following interpolation theorem due to Pudlák:
Theorem 17 (Pudlák [30]). Let $\vec p, \vec q, \vec r$ be disjoint vectors of variables, and let $A(\vec p,\vec q)$ and $B(\vec p,\vec r)$ be sets of inequalities in the indicated variables such that the variables $\vec p$ either have only nonnegative coefficients in $A(\vec p,\vec q)$ or have only non-positive coefficients in $B(\vec p,\vec r)$.
Suppose there is a CP refutation R of $A(\vec p,\vec q) \cup B(\vec p,\vec r)$. Then there is a monotone real circuit $C(\vec p)$ of size $O(|R|)$ such that for any vector $\vec a \in \{0,1\}^{|\vec p|}$: if $C(\vec a) = 0$ then $A(\vec a,\vec q)$ is unsatisfiable, and if $C(\vec a) = 1$ then $B(\vec a,\vec r)$ is unsatisfiable.
Furthermore, if R is tree-like, then $C(\vec p)$ is a monotone real formula.
In [30], only the relationship between CP refutation size and monotone real circuit size was stated. The fact that $C(\vec p)$ is a monotone real formula if R is tree-like is not part of the original theorem, but can be directly obtained from the proof of the theorem in [30]. The reason is that the underlying graphs of the refutation and the circuit are the same.
We now define an unsatisfiable set of clauses related to the boolean function $Gen_n$. Let n and d be natural numbers whose values will be fixed below. Recall that $Pyr_d := \{(i,j) : 1 \le j \le i \le d\}$. For a given mapping m defining a pyramidal generation in the sense of the definition above, our unsatisfiable set of clauses will be the conjunction of two CNFs, $Gen(\vec p,\vec q)$ and $Col(\vec p,\vec r)$. The clauses in $Gen(\vec p,\vec q)$ will encode the property that the inputs $\vec q$ define a pyramidal generation, and therefore $Gen_n(\vec p\,) = 1$. The clauses in $Col(\vec p,\vec r)$ will say that the inputs $\vec r$ define a coloring, so that $Gen_n(\vec p\,) = 0$.
More precisely: the variables $p_{a,b,c}$ for $a,b,c \in [n]$ represent the input to $Gen_n$; the variables $q_{i,j,a}$ for $(i,j) \in Pyr_d$ and $a \in [n]$ encode a pyramid, where the element a is assigned to the position (i,j) by the mapping $m : Pyr_d \to [n]$; the variables $r_a$ for $a \in [n]$ represent a coloring of the elements by 0 and 1 such that 1 is colored 0, n is colored 1 and the elements colored 0 are closed under generation.
The set $Gen(\vec p,\vec q)$ is given by (8) - (11), and $Col(\vec p,\vec r)$ by (12) - (14):
$$\bigvee_{1 \le a \le n} q_{i,j,a} \qquad \text{for } (i,j) \in Pyr_d \tag{8}$$
$$\neg q_{d,j,a} \vee p_{1,1,a} \qquad \text{for } 1 \le j \le d \text{ and } a \in [n] \tag{9}$$
$$\neg q_{1,1,a} \vee p_{a,a,n} \qquad \text{for } a \in [n] \tag{10}$$
$$\neg q_{i,j,c} \vee \neg q_{i+1,j,a} \vee \neg q_{i+1,j+1,b} \vee p_{a,b,c} \qquad \text{for } (i,j) \in Pyr_d \text{ with } i < d,\ a,b,c \in [n] \tag{11}$$
$$\neg p_{1,1,a} \vee \neg r_a \qquad \text{for } a \in [n] \tag{12}$$
$$\neg p_{a,a,n} \vee r_a \qquad \text{for } a \in [n] \tag{13}$$
$$r_a \vee r_b \vee \neg p_{a,b,c} \vee \neg r_c \qquad \text{for } a,b,c \in [n] \tag{14}$$
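Assuming the reconstruction of (8)-(14) above, the following sketch (our own; literal and clause encodings are ad hoc) generates the two clause sets for given n and d.

```python
from itertools import product

def gen_col_clauses(n, d):
    """Generate Gen(p,q) [clauses (8)-(11)] and Col(p,r) [clauses (12)-(14)].
    A literal is ('p', a, b, c, sign), ('q', i, j, a, sign) or ('r', a, sign);
    a clause is a tuple of literals."""
    P = lambda a, b, c, s=True: ('p', a, b, c, s)
    Q = lambda i, j, a, s=True: ('q', i, j, a, s)
    R = lambda a, s=True: ('r', a, s)
    pyr = [(i, j) for i in range(1, d + 1) for j in range(1, i + 1)]
    rng = range(1, n + 1)

    gen_clauses = []
    for (i, j) in pyr:                                       # (8)
        gen_clauses.append(tuple(Q(i, j, a) for a in rng))
    for j in range(1, d + 1):                                # (9)
        for a in rng:
            gen_clauses.append((Q(d, j, a, False), P(1, 1, a)))
    for a in rng:                                            # (10)
        gen_clauses.append((Q(1, 1, a, False), P(a, a, n)))
    for (i, j) in pyr:                                       # (11)
        if i < d:
            for a, b, c in product(rng, rng, rng):
                gen_clauses.append((Q(i, j, c, False), Q(i + 1, j, a, False),
                                    Q(i + 1, j + 1, b, False), P(a, b, c)))

    col_clauses = []
    for a in rng:                                            # (12), (13)
        col_clauses.append((P(1, 1, a, False), R(a, False)))
        col_clauses.append((P(a, a, n, False), R(a)))
    for a, b, c in product(rng, rng, rng):                   # (14)
        col_clauses.append((R(a), R(b), P(a, b, c, False), R(c, False)))
    return gen_clauses, col_clauses
```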
If n is generated in a depth-d pyramidal fashion by a fixed vector $\vec t \in \{0,1\}^{n^3}$, then $Gen(\vec t,\vec q)$ is satisfiable, and if $Gen_n(\vec t\,) = 0$, then $Col(\vec t,\vec r)$ is satisfiable. Since the variables $\vec p$ occur only positively in $Gen(\vec p,\vec q)$ and only negatively in $Col(\vec p,\vec r)$, Theorem 17 is applicable, and the formula obtained from this application satisfies the conditions of Corollary 8. Hence we can conclude:
Theorem 18. Every tree-like CP refutation of the clauses $Gen(\vec p,\vec q) \cup Col(\vec p,\vec r)$ has to be of size $2^{\Omega(n^{\epsilon})}$, for some $\epsilon > 0$.
On the other hand, there are polynomial size dag-like resolution refutations of these clauses.
Theorem 19. There are (dag-like) resolution refutations of size $n^{O(1)}$ of the clauses $Gen(\vec p,\vec q) \cup Col(\vec p,\vec r)$.
Proof. First we resolve clauses (9) and (12) to get
$$\neg q_{d,j,c} \vee \neg r_c \tag{15}$$
for $1 \le j \le d$ and $c \in [n]$. Now we want to derive $\neg q_{i,j,c} \vee \neg r_c$ for every $(i,j) \in Pyr_d$ and $c \in [n]$, by induction on i downward from d to 1. The induction base is just (15).
Now by induction we have $\neg q_{i+1,j,a} \vee \neg r_a$ and $\neg q_{i+1,j+1,b} \vee \neg r_b$; we resolve them against (14) to get $\neg q_{i+1,j,a} \vee \neg q_{i+1,j+1,b} \vee \neg p_{a,b,c} \vee \neg r_c$ for $1 \le a,b,c \le n$, and then resolve these against (11) and get
$$\neg q_{i,j,c} \vee \neg q_{i+1,j,a} \vee \neg q_{i+1,j+1,b} \vee \neg r_c$$
for every $1 \le a,b \le n$. All of these are then resolved against two instances of (8), and we get the desired $\neg q_{i,j,c} \vee \neg r_c$ for every $1 \le c \le n$.
Finally, we have in particular $\neg q_{1,1,a} \vee \neg r_a$ for every $1 \le a \le n$. We resolve them with (13) and get $\neg q_{1,1,a} \vee \neg p_{a,a,n}$ for every $1 \le a \le n$. These are resolved with (10) to get $\neg q_{1,1,a}$ for every $1 \le a \le n$. Finally, these clauses are resolved with the instance of (8) for $(i,j) = (1,1)$ to get the empty clause.
It is easy to check that the above refutation is an N-resolution refutation. The following corollary
is an easy consequence of the above theorems and known simulation results.
Corollary 20. The clauses Gen(~p; ~q) [ Col(~p; ~r) exponentially separate tree-like resolution from
dag-like resolution and (dag-like) N-resolution as well as tree-like Cutting Planes from dag-like
Cutting Planes.
The resolution refutation of $Gen(\vec p,\vec q) \cup Col(\vec p,\vec r)$ that appears in the proof of Theorem 19 is not regular. We do not know whether $Gen(\vec p,\vec q) \cup Col(\vec p,\vec r)$ has polynomial size regular resolution refutations. To obtain a separation between tree-like resolution and regular resolution we will modify the clauses $Col(\vec p,\vec r)$.
4.1 Separation of tree-like CP from regular resolution
The clauses $Col(\vec p,\vec r)$ are modified (and the modification called $RCol(\vec p,\vec r)$), so that $Gen(\vec p,\vec q) \cup RCol(\vec p,\vec r)$ allows small regular resolution refutations, but in such a way that the lower bound proof still applies. We replace the variables $r_a$ by $r_{a,i,D}$ for $a \in [n]$, $1 \le i \le d$ and $D \in \{L,R\}$, giving the coloring of element a, with auxiliary indices i being a row in the pyramid and D distinguishing whether an element is used as a left or right predecessor in the generation process.
The set $RCol(\vec p,\vec r)$ is defined as follows:
$$\neg p_{1,1,a} \vee \neg r_{a,d,D} \qquad \text{for } a \in [n] \text{ and } D \in \{L,R\} \tag{16}$$
$$\neg p_{a,a,n} \vee r_{a,1,D} \qquad \text{for } a \in [n] \text{ and } D \in \{L,R\} \tag{17}$$
$$r_{a,i+1,L} \vee r_{b,i+1,R} \vee \neg p_{a,b,c} \vee \neg r_{c,i,D} \qquad \text{for } 1 \le i < d,\ a,b,c \in [n],\ D \in \{L,R\} \tag{18}$$
$$\neg r_{a,i,D} \vee r_{a,i,\bar D} \qquad \text{for } a \in [n],\ 1 \le i \le d,\ D \in \{L,R\} \tag{19}$$
$$\neg r_{a,i,D} \vee r_{a,j,D} \qquad \text{for } a \in [n],\ 1 \le i,j \le d,\ D \in \{L,R\} \tag{20}$$
Due to the clauses (19) and (20), the variables $r_{a,i,D}$ are equivalent for all values of the auxiliary indices i and D. Hence a satisfying assignment for $RCol(\vec p,\vec r)$ still codes a coloring of [n] such that the elements a with $1,1 \vdash a$ are colored 0, the elements b with $b,b \vdash n$ are colored 1, and the 0-colored elements are closed under generation. Hence if $Gen_n(\vec t\,) = 0$, then $RCol(\vec t,\vec r)$ is satisfiable.
Hence any interpolant for the clauses $Gen(\vec p,\vec q) \cup RCol(\vec p,\vec r)$ satisfies the assumptions of Corollary 8, and we can conclude:
Theorem 21. Tree-like CP refutations of the clauses $Gen(\vec p,\vec q) \cup RCol(\vec p,\vec r)$ have to be of size $2^{\Omega(n^{\epsilon})}$.
On the other hand, we have the following upper bound on (dag-like) regular resolution refutations of these clauses:
Theorem 22. There are (dag-like) regular resolution refutations of the clauses $Gen(\vec p,\vec q) \cup RCol(\vec p,\vec r)$ of size $n^{O(1)}$.
Proof. First we resolve clauses (9) and (16) to get
$$\neg q_{d,j,a} \vee \neg r_{a,d,D} \tag{21}$$
for $1 \le j \le d$, $a \in [n]$ and $D \in \{L,R\}$. Next we resolve (10) and (17) to get
$$\neg q_{1,1,a} \vee r_{a,1,D} \tag{22}$$
for $1 \le a \le n$ and $D \in \{L,R\}$. Finally, from (11) and (18) we obtain
$$\neg q_{i,j,c} \vee \neg q_{i+1,j,a} \vee \neg q_{i+1,j+1,b} \vee r_{a,i+1,L} \vee r_{b,i+1,R} \vee \neg r_{c,i,D} \tag{23}$$
for $1 \le i < d$, $a,b,c \in [n]$ and $D \in \{L,R\}$.
Now we want to derive $\neg q_{i,j,c} \vee \neg r_{c,i,D}$ for every $(i,j) \in Pyr_d$, $c \in [n]$ and $D \in \{L,R\}$, by induction on i downward from d to 1. The induction base is just (21).
For the inductive step, resolve (23) against the clauses
$$\neg q_{i+1,j,a} \vee \neg r_{a,i+1,L} \quad\text{and}\quad \neg q_{i+1,j+1,b} \vee \neg r_{b,i+1,R},$$
which we have by induction, to give
$$\neg q_{i,j,c} \vee \neg q_{i+1,j,a} \vee \neg q_{i+1,j+1,b} \vee \neg r_{c,i,D}$$
for every $1 \le a,b \le n$. All of these are then resolved against two instances of (8), and we get the desired $\neg q_{i,j,c} \vee \neg r_{c,i,D}$.
Finally, we have in particular $\neg q_{1,1,a} \vee \neg r_{a,1,L}$, which we resolve against (22) to get $\neg q_{1,1,a}$ for every $1 \le a \le n$. From these and an instance of (8) we get the empty clause.
Note that the refutation given in the proof of Theorem 22 is actually an ordered refutation: it respects an elimination ordering in which the variables $p_{a,b,c}$ are eliminated first, followed by the variables $r_{a,i,D}$ and $q_{i,j,a}$ row by row from the bottom of the pyramid upwards.
Corollary 23. The clauses $Gen(\vec p,\vec q) \cup RCol(\vec p,\vec r)$ exponentially separate the following proof systems: tree-like resolution from regular and ordered resolution.
5 Lower bound for ordered resolutions
Goerdt [18] showed that ordered resolution is strictly weaker than unrestricted resolution, by giving a superpolynomial lower bound for ordered resolution refutations of a certain family of clauses which, on the other hand, has polynomial size unrestricted resolution refutations. In this section we improve this separation to an exponential one; in fact, we give an exponential separation of ordered resolution from N-resolution.
To simplify the exposition, we apply the method of [18] to a set of clauses $SP_{n,m}$ expressing a combinatorial principle that we call the String-of-Pearls principle: From a bag of m pearls, which are colored red and blue, n pearls are chosen and placed on a string. The string-of-pearls principle $SP_{n,m}$ says that, if the first pearl is red and the last one is blue, then there must be a blue pearl next to a red pearl somewhere on the string.
$SP_{n,m}$ is given by an unsatisfiable set of clauses in variables $p_{i,j}$ and $q_j$ for $1 \le i \le n$ and $1 \le j \le m$, where $p_{i,j}$ is intended to say that pearl j is at position i on the string, and $q_j$ means that pearl j is colored blue. The clauses forming $SP_{n,m}$ are:
$$\bigvee_{1 \le j \le m} p_{i,j} \qquad \text{for } 1 \le i \le n \tag{24}$$
$$\neg p_{i,j} \vee \neg p_{i,j'} \qquad \text{for } 1 \le i \le n,\ 1 \le j < j' \le m \tag{25}$$
$$\neg p_{i,j} \vee \neg p_{i',j} \qquad \text{for } 1 \le i < i' \le n,\ 1 \le j \le m \tag{26}$$
These first three sets of clauses express that there is a unique pearl at each position.
$$\neg p_{1,j} \vee \neg q_j \qquad \text{for } 1 \le j \le m \tag{27}$$
$$\neg p_{n,j} \vee q_j \qquad \text{for } 1 \le j \le m \tag{28}$$
$$\neg p_{i,j} \vee \neg p_{i+1,j'} \vee q_j \vee \neg q_{j'} \qquad \text{for } 1 \le i < n,\ 1 \le j, j' \le m \tag{29}$$
These last three sets of clauses express that the first pearl is red, the last one is blue, and that a pearl sitting next to a red pearl is also colored red. The clauses $SP_{n,m}$ are a modified and simplified version of the clauses related to the st-connectivity problem that were introduced by Clote and Setzer [13].
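Assuming the reconstruction of (24)-(29) above, a small generator for $SP_{n,m}$ could look as follows; this is our own sketch, with an ad hoc literal encoding.

```python
def string_of_pearls(n, m):
    """Clauses (24)-(29) of SP_{n,m}; a literal is ('p', i, j, sign)
    or ('q', j, sign), and a clause is a tuple of literals."""
    P = lambda i, j, s=True: ('p', i, j, s)
    Q = lambda j, s=True: ('q', j, s)
    clauses = []
    for i in range(1, n + 1):                                    # (24)
        clauses.append(tuple(P(i, j) for j in range(1, m + 1)))
    for i in range(1, n + 1):                                    # (25)
        for j in range(1, m + 1):
            for jp in range(j + 1, m + 1):
                clauses.append((P(i, j, False), P(i, jp, False)))
    for j in range(1, m + 1):                                    # (26)
        for i in range(1, n + 1):
            for ip in range(i + 1, n + 1):
                clauses.append((P(i, j, False), P(ip, j, False)))
    for j in range(1, m + 1):                                    # (27), (28)
        clauses.append((P(1, j, False), Q(j, False)))
        clauses.append((P(n, j, False), Q(j)))
    for i in range(1, n):                                        # (29)
        for j in range(1, m + 1):
            for jp in range(1, m + 1):
                clauses.append((P(i, j, False), P(i + 1, jp, False),
                                Q(j), Q(jp, False)))
    return clauses
```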
We shall modify the clauses SP n;m in such a way as to make small ordered resolution refutations
impossible, while still allowing for small unrestricted resolutions. The lower bound is then proved
by a bottleneck counting argument similar to that used in [18], which is based on the original
argument of Haken [21]. Note that the clauses (24) - (26) are similar to the clauses expressing the
Pigeonhole Principle, which makes the bottleneck counting technique applicable in our situation.
The set $SP'_{n,m}$ is obtained from $SP_{n,m}$ by adding additional literals to some of the clauses. The clauses (27) and (29) for certain ranges of the position index i are replaced by variants (30) and (31) that carry an additional literal $\neg p_{\hat\imath,\ell}$, one variant for every $\ell \in [m]$, where $\hat\imath$ is a position determined by i; similarly, the clauses (28) and (29) for the remaining positions are replaced by variants (32) and (33) with additional literals $\neg p_{\hat\imath,\ell}$, for a different choice of $\hat\imath$. All other clauses remain unchanged. The modified clauses $SP'_{n,m}$ do not have an intuitive combinatorial interpretation different from the meaning of the original clauses $SP_{n,m}$; the added literals only serve to make the clauses hard for ordered refutations.
The idea is that, for the clauses (30)-(33) to be used as one would use the original (27)-(29) in a natural short, inductive proof (like the one given below), the additional literals $\neg p_{\hat\imath,\ell}$ have to be removed first. The positions $\hat\imath$ are chosen in such a way that this cannot be done in a manner consistent with a global ordering of the variables.
Theorem 24. The clauses $SP'_{n,m}$ have negative resolution refutations of size $O(nm^2)$.
Proof. We first give a negative refutation of the clauses $SP_{n,m}$, and then show how to modify it for $SP'_{n,m}$.
For every $i \in [n]$, we will derive the clauses $\neg p_{i,j} \vee \neg q_j$ for all $j \in [m]$ from $SP_{n,m}$ by a negative resolution derivation. For $i = 1$ these are the clauses (27) from $SP_{n,m}$. Inductively, assume we have derived $\neg p_{i,j} \vee \neg q_j$ and we want to derive $\neg p_{i+1,j} \vee \neg q_j$ from these.
Consider the clauses (29) of the form $\neg p_{i,j'} \vee \neg p_{i+1,j} \vee q_{j'} \vee \neg q_j$ for $j' \in [m]$. Using the inductive assumption, we derive from these the clauses $\neg p_{i,j'} \vee \neg p_{i+1,j} \vee \neg q_j$. Note that these are negative clauses.
By a derivation of length m, we obtain $\neg p_{i+1,j} \vee \neg q_j$ from these and the clause (24) for position i of $SP_{n,m}$. The whole derivation is of length O(m), and we need m of them, giving a total length of $O(m^2)$ for the induction step.
We end up with a derivation of the clauses $\neg p_{n,j} \vee \neg q_j$ of length $O(nm^2)$. In another m steps we resolve these with the initial clauses (28), obtaining the singleton clauses $\neg p_{n,j}$. Finally we derive a contradiction from these and the clause (24) for position n.
Now we modify this refutation for the modified clauses $SP'_{n,m}$. First, note that the original clauses (27) can be obtained from (30) by a negative derivation of length m.
Next, we modify those places in the inductive step where the clauses (29) are used that have been modified. First, we resolve the modified clauses (31) resp. (33) with the inductive assumption, yielding negative clauses that still carry the additional literals $\neg p_{\hat\imath,\ell}$. These are then resolved with the clause (24) for position $\hat\imath$, after which we can continue as in the original refutation.
In the places where the clauses (28) are used in the original refutation, we first resolve (32) with the clauses $\neg p_{n,j} \vee \neg q_j$, yielding negative clauses which can be resolved with the clause (24) for position $\hat\imath$ to get the singleton clauses $\neg p_{n,j}$ as in the original refutation.
In particular, there are polynomial size unrestricted resolution refutations of these clauses. The next theorem gives a lower bound for ordered resolution refutations of these clauses.
Theorem 25. For sufficiently large n and $m \ge 9n$, every ordered resolution refutation of the clauses $SP'_{n,m}$ contains at least $2^{\frac{n}{8}(\log n - 5)}$ clauses.
Proof. For the sake of simplicity, let n be divisible by 8, say $n = 8k$. Let $N = nm + m$ be the number of variables, and let an ordering $x_1, \dots, x_N$ of the variables be given, i.e., each $x_\nu$ is one of the variables $p_{i,j}$ or $q_j$. Let R be an ordered resolution refutation of $SP'_{n,m}$ respecting this elimination ordering, i.e., on every path through R the variables are eliminated in the prescribed order. We shall show that R contains at least k! different clauses, which is at least $2^{\frac{n}{8}(\log n - 5)}$ for large n.
For a position i 2 [n] and N , let S(i; ) be the set of those pearls j 2k such that p i;j is
among the rst eliminated variables, i.e.,
be the unique position such that there is an index 0 with
In other words, i 0 is the rst position for which k of the variables p i 0 ;j with j 2k
are eliminated.
Let the elements of S(i enumerated in increasing order for definiteness
only, the order is irrelevant for the argument. For each 1 k, dene a position i
by
Note that i is the position ^{ appearing in the added literals in the modied clauses (31) for
or (27), where in the rst case, respectively in the clauses (33) for
in the second case.
Further dene R := [2k] n S(i ; 0 ), i.e., R is the set of those pearls j 2k for which the
variable eliminated later than any of the variables p i 0 ;j for 1 k. Note that for all
by denition of i 0 .
Definition. A critical assignment is an assignment $\alpha$ that satisfies all the clauses of $SP'_{n,m}$ except for exactly one of the clauses (24). From a critical assignment $\alpha$, we define the following data:
The unique position $i_\alpha \in [n]$ such that $\alpha(p_{i_\alpha,j}) = 0$ for all $j \in [m]$, called the gap of $\alpha$.
A 1-1 mapping $m_\alpha : [n] \setminus \{i_\alpha\} \to [m]$, where $m_\alpha(i)$ is the unique $j \in [m]$ such that $\alpha(p_{i,j}) = 1$.
For every $j \in [m]$, we refer to the value $\alpha(q_j)$ as the color of j, where we identify the value 0 with red and 1 with blue.
A critical assignment is called 0-critical, if the gap is i each
are colored blue (i.e., (q
are colored red (i.e., (q j 1
Note that the positions and the pearls thus the notion of 0-critical
assignment, only depend on the elimination order and not on the refutation R.
As in other bottleneck counting arguments, the lower bound will now be proved in two steps: First, we show that there are many 0-critical assignments. Second, we will map each 0-critical assignment $\alpha$ to a certain clause $C_\alpha$ in R, and then show that not too many different assignments can be mapped to the same clause $C_\alpha$; thus there must be many of the clauses $C_\alpha$.
The first goal, showing there are many 0-critical assignments, is reached with the following claim:
26. For every choice of pairwise distinct pearls b
is a 0-critical assignment with m (i
Proof of Claim 26. For those positions i such that m (i) is not dened yet, i.e.
arbitrarily but consistently, i.e. choose an arbitrary 1-1
mapping from [n] n fi to [m] n fb g. This is always possible, since by
assumption m 9k.
Finally, color those pearls that are assigned to positions to the left of the gap red, and those
that are assigned to positions to the right of the gap blue, i.e., set (q m (i)
. The pearls are colored according to the requirement in the
denition of a 0-critical assignment. Note that this does not result in a con
ict even if some of the
are among the because the positions are always on the correct
side of the gap: if i 0 n, then k. The remaining
pearls can be colored arbitrarily.
Now we map 0-critical assignments to certain clauses in R. For a 0-critical assignment $\alpha$, let $C_\alpha$ be the first clause in R such that $\alpha$ does not satisfy $C_\alpha$, and none of the variables $p_{i_0,j_\mu}$ for $1 \le \mu \le k$ occurs in $C_\alpha$.
This clause exists because $\alpha$ determines a path through R from the clause (24) for position $i_0$ to the empty clause, such that $\alpha$ does not satisfy any clause on this path. The variables $p_{i_0,j}$ with $j \le 2k$ are eliminated along that path, and the $p_{i_0,j_\mu}$ are the first among them in the elimination order.
Claim 27. Let $\alpha$ be a 0-critical assignment and, for $1 \le \mu \le k$, let $\ell := m_\alpha(i_\mu)$. Then the literal $\neg p_{i_\mu,\ell}$ occurs in $C_\alpha$.
Proof of Claim 27. Let 0 be the assignment dened by 0 (p
other variables x. As p i 0 ;j does not occur in C , 0 does not satisfy C either.
There is exactly one clause in SP 0
n;m that is not satised by 0 , depending on where the gap i 0
is, this clause is
The requirement for the coloring of the j in the denition of a 0-critical assignment entails that
these clauses are not satised by 0 , and that all other clauses are satised by 0 .
In any case, the literal p i ;' occurs in this clause, and there is a path through R leading from
the clause in question to C , such that 0 does not satisfy any clause on that path. The variable
that is eliminated in the last inference on that path must be one of the p i 0 ;j for 1 k, by
the denition of C . Since ' 2 R , the variable p i ;' appears after p i 0 ;j in the elimination order,
by the denition of R . Therefore p i ;' cannot have been eliminated on that path, so
occurs in C .
Finally we are ready to finish the proof of the theorem. Let $\alpha, \beta$ be two 0-critical assignments such that $\ell := m_\alpha(i_\mu) \neq m_\beta(i_\mu)$ for some $\mu$, so that $\beta(p_{i_\mu,\ell}) = 0$. By Claim 27, the literal $\neg p_{i_\mu,\ell}$ occurs in $C_\alpha$, therefore $\beta$ satisfies $C_\alpha$, and hence $C_\alpha \neq C_\beta$.
By Claim 26, there are at least k! 0-critical assignments that differ in at least one of the values $m_\alpha(i_\mu)$. Thus R contains at least k! distinct clauses of the form $C_\alpha$.
The following corollary is a direct consequence of Theorems 24 and 25.
Corollary 28. The clauses $SP'_{n,m}$ exponentially separate ordered resolution from unrestricted resolution and N-resolution.
A modification similar to the one that transforms $SP_{n,m}$ into $SP'_{n,m}$ can also be applied to the clauses $Gen(\vec p,\vec q)$, yielding a set $DPGen(\vec p,\vec q)$. Then for the clauses $DPGen(\vec p,\vec q) \cup Col(\vec p,\vec r)$, an exponential lower bound for ordered resolution can be proved by the method of Theorem 25 (this was presented in the conference version [8] of this paper). Also the N-resolution proofs of Theorem 19 can be modified for these clauses. Thus these clauses exponentially separate ordered from negative resolution as well.
6 Open Problems
We would like to conclude by stating some open problems related to the topics of this paper.
1. For boolean circuits (monotone as well as general), circuit depth and formula size are essentially the same complexity measure, as they are exponentially related by the well-known Brent-Spira theorem. Is there an analogous theorem for monotone real circuits, i.e., is $\log s_R(f) = \Theta(d_R(f))$ for every monotone function f? This would be implied by the converse to Lemma 2, i.e., $d_R(f) = O(CC_R(R_f))$. Does this hold for every monotone function f?
2. The separation between tree-like and dag-like resolution was recently improved to a strongly exponential one, with a lower bound of the form $2^{n/\log n}$ ([5, 6, 31]). Can we prove the same strong separation between tree-like and dag-like CP?
3. A solution for the previous problem would follow from a strongly exponential separation of monotone real formula size from monotone circuit size. Such a strong separation is not even known for monotone boolean circuits.
4. Can the superpolynomial separations of regular and negative resolution from unrestricted
resolution [19, 20] be improved to exponential as well? And is there an exponential speed-up
of regular over ordered resolution?
Acknowledgments
We would like to thank Ran Raz for reading a previous version of this work and discovering an
error, Andreas Goerdt for sending us copies of his papers, Sam Buss for helpful discussions and
nally Peter Clote for suggesting us to work on resolution separations.
--R
Minimum propositional proof length is NP-hard to linearly approximate
Short proofs are narrow
Exponential separations between restricted resolution and cutting planes proof systems.
Lower bounds for cutting planes proofs with small coefficients.
Using the Groebner basis algorithm to
An exponential lower bound for the size of monotone real circuits.
The relative e
Lower bounds on sizes of cutting planes proofs for modular coloring principles.
Unrestricted resolution versus N-resolution
Regular resolution versus unrestricted resolution.
The intractability of resolution.
Upper and lower bounds for tree-like cutting planes proofs
Complexity of
Lower bounds for monotone real circuit depth and formula size and tree-like cutting planes
Combinatorics of monotone computations.
Monotone circuits for connectivity require super-logarithmic depth
Separation of the monotone NC hierarchy.
Monotone real circuits are more powerful than monotone boolean circuits.
Hard examples for resolution.
The complexity of propositional proofs.
--TR
--CTR
Juan Luis Esteban , Jacobo Torn, A combinatorial characterization of treelike resolution space, Information Processing Letters, v.87 n.6, p.295-300, September
Michael Alekhnovich , Jan Johannsen , Toniann Pitassi , Alasdair Urquhart, An exponential separation between regular and general resolution, Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, May 19-21, 2002, Montreal, Quebec, Canada
Maria Luisa Bonet , Nicola Galesi, Optimality of size-width tradeoffs for resolution, Computational Complexity, v.10 n.4, p.261-276, May 2002
Albert Atserias , Mara Luisa Bonet, On the automatizability of resolution and related propositional proof systems, Information and Computation, v.189 n.2, p.182-201, March 15, 2004
Paolo Liberatore, Complexity results on DPLL and resolution, ACM Transactions on Computational Logic (TOCL), v.7 n.1, p.84-107, January 2006
Jakob Nordstrm, Narrow proofs may be spacious: separating space and width in resolution, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Juan Luis Esteban , Nicola Galesi , Jochen Messner, On the complexity of resolution with bounded conjunctions, Theoretical Computer Science, v.321 n.2-3, p.347-370, August 2004
Robert Nieuwenhuis , Albert Oliveras , Cesare Tinelli, Solving SAT and SAT Modulo Theories: From an abstract Davis--Putnam--Logemann--Loveland procedure to DPLL(T), Journal of the ACM (JACM), v.53 n.6, p.937-977, November 2006
Henry Kautz , Bart Selman, The state of SAT, Discrete Applied Mathematics, v.155 n.12, p.1514-1524, June, 2007 | resolution;proof complexity;cutting planes proof system;computational complexity;circuit complexity |
586946 | Randomness is Hard. | We study the set of incompressible strings for various resource bounded versions of Kolmogorov complexity. The resource bounded versions of Kolmogorov complexity we study are polynomial time CD complexity defined by Sipser, the nondeterministic variant CND due to Buhrman and Fortnow, and the polynomial space bounded Kolmogorov complexity CS introduced by Hartmanis. For all of these measures we define the set of random strings $\mathrm{R}^{\mathit{CD}}_t$, $\mathrm{R}^{\mathit{CND}}_t$, and $\mathrm{R}^{\mathit{CS}}_t$ as the set of strings $x$ such that $\mathit{CD}^t(x)$, $\mathit{CND}^t(x)$, and $\mathit{CS}^s(x)$ is greater than or equal to the length of $x$ for $s$ and $t$ polynomials. We show the following: $\mathrm{MA} \subseteq \mathrm{NP}^{\mathrm{R}^{\mathit{CD}}_t}$, where $\mathrm{MA}$ is the class of Merlin--Arthur games defined by Babai. $\mathrm{AM} \subseteq \mathrm{NP}^{\mathrm{R}^{\mathit{CND}}_t}$, where $\mathrm{AM}$ is the class of Arthur--Merlin games. $\mathrm{PSPACE} \subseteq \mathrm{NP}^{\mathrm{cR}^{\mathit{CS}}_s}$. In the last item $\mathrm{cR}^{\mathit{CS}}_s$ is the set of pairs $\langle x,y \rangle$ so that x is random given y. These results show that the set of random strings for various resource bounds is hard for complexity classes under nondeterministic reductions.This paper contrasts the earlier work of Buhrman and Mayordomo where they show that for polynomial time deterministic reductions the set of exponential time Kolmogorov random strings is not complete for EXP. | Introduction
The holy grail of complexity theory is the separation of complexity classes
like P, NP and PSPACE. It is well known that all of these classes possess complete sets and that it is thus sufficient for a separation to show that a complete set of one class is not contained in the other. Therefore a lot of effort was put into the study of complete sets. (See [BT98].)
Kolmogorov [Lev94] however suggested to focus attention on sets which are not complete. His intuition was that complete sets possess a lot of "structure" that hinders a possible lower bound proof. He suggested to look at the set of time bounded Kolmogorov random strings. In this paper we will continue this line of research and study variants of this set.
Kolmogorov complexity measures the "amount" of regularity in a string.
Informally the Kolmogorov complexity of a string x, denoted as C(x), is the
size of the smallest program that prints x and then stops. For any string x,
C(x) is less than or equal to the length of x (up to some additive constant).
Those strings for which it holds that C(x) is greater than or equal to the
length of x are called incompressible or random. A simple counting argument
shows that random strings exist.
In the sixties, when the theory of Kolmogorov complexity was developed,
Martin [Mar66] showed that the coRE set of Kolmogorov random strings is
complete with respect to (resource unbounded) Turing reductions. Kummer
[Kum96] has shown that this can be strengthened to show that this set
is also truth-table complete.
The resource bounded version of the random strings was first studied by Ko [Ko91]. The polynomial time bounded Kolmogorov complexity $C^p(x)$, for p a polynomial, is the size of the smallest program that prints x in p(|x|) steps ([Har83]). Ko showed that there exists an oracle such that the set of random
strings with respect to this time bounded Kolmogorov complexity is
complete for coNP under strong nondeterministic polynomial time reduc-
tions. He also constructed an oracle where this set is not complete for coNP
under deterministic polynomial time Turing reductions.
Buhrman and Mayordomo [BM95] considered the exponential time Kolmogorov random strings. The exponential time Kolmogorov complexity $C^t(x)$ is the size of the smallest program that prints x in t(|x|) steps, for exponential time bounds t. They showed that the set of t(n)-random strings is not deterministic polynomial time Turing hard for EXP. They showed that the class of sets that reduce to this set has p-measure 0 and hence that this set is not even weakly hard for EXP.
The results in this paper contrast those from Buhrman and Mayordomo.
We show that the set of random strings is hard for various complexity classes
under nondeterministic polynomial time reductions.
We consider three well studied measures of Kolmogorov complexity that lie in between $C^p(x)$ and $C^t(x)$ for p a polynomial and t exponential. We consider the distinguishing complexity as introduced by Sipser [Sip83]. The distinguishing complexity, $CD^t(x)$, is the size of the smallest program that runs in time t(n) and accepts x and nothing else. We show that the set of random strings $R^{CD}_t = \{x : CD^t(x) \ge |x|\}$, for t a fixed polynomial, is hard for MA under nondeterministic reductions. MA is the class of Merlin-Arthur games introduced by Babai [Bab85]. As an immediate consequence we obtain that BPP and $NP^{BPP}$ are in $NP^{R^{CD}_t}$.
Next we shift our attention to the nondeterministic distinguishing complexity $CND^t(x)$, which is defined as the size of the smallest nondeterministic algorithm that runs in time t(n) and accepts only x. We define $R^{CND}_t = \{x : CND^t(x) \ge |x|\}$ for t a fixed polynomial. We show that $AM \subseteq NP^{R^{CND}_t}$, where AM is the class of Arthur-Merlin games [Bab85]. It follows that the complement of the graph isomorphism problem, GI, is in $NP^{R^{CND}_t}$, and that if for some polynomial t, $R^{CND}_t$ ...
The s(n) space bounded Kolmogorov complexity, $CS^s(x|y)$, is defined as the size of the smallest program that prints x, given y, and uses at most $s(|x|+|y|)$ tape cells [Har83]. Likewise we define $cR^{CS}_s = \{\langle x,y\rangle : CS^s(x|y) \ge |x|\}$ for s(n) a polynomial. We show that $PSPACE \subseteq NP^{cR^{CS}_s}$.
For the first two results we use the oblivious sampler construction of Zuckerman [Zuc96], a lemma [BF97] that measures the size of sets in terms of CD complexity, and we prove a lemma that shows that the first bits of a random string are in a sense more random than the whole string. For the last result we make use of the interactive protocol [LFKN92, Sha92] for QBF.
To show optimality of our results for relativizing techniques, we construct an oracle world where our first result cannot be improved to deterministic reductions. We show that there is an oracle such that $BPP \not\subseteq P^{R^{CD}_t}$ for any polynomial t. The construction of the oracle is an extension of the techniques developed by Beigel, Buhrman and Fortnow [BBF98].
2 Definitions and Notations
We assume the reader is familiar with standard notions in complexity theory as can be found, e.g., in [BDG88]. Strings are elements of $\{0,1\}^*$. For a string s and integers $n, m \le |s|$ we use the notation s[n..m] for the string consisting of the nth through mth bit of s. We use $\lambda$ for the empty string. We also need the notion of an oblivious sampler from [Zuc96].
Definition 2.1 A universal $(r,d,m,\epsilon,\gamma)$-oblivious sampler is a deterministic algorithm which on input a uniformly random r-bit string outputs a sequence of points $z_1,\dots,z_d \in \{0,1\}^m$ such that for any collection of d functions $f_1,\dots,f_d : \{0,1\}^m \to [0,1]$ it is the case that
$$\Pr\left[\,\left|\frac{1}{d}\sum_{i=1}^{d} f_i(z_i) - \frac{1}{d}\sum_{i=1}^{d} E f_i\right| \le \epsilon\,\right] \ge 1 - \gamma .$$
(Where $E f_i$ denotes the expectation of $f_i$ under the uniform distribution on $\{0,1\}^m$.)
In our application of this definition, we will always use a single function f.
Fix a universal Turing machine U, and a nondeterministic universal machine $U_n$. All our results are independent of the particular choice of universal machine. For the definition of Kolmogorov complexity we need the fact that the universal machine can, on input p, y, halt and output a string x. For the definition of distinguishing complexity below we need the fact that the universal machine on input p, x, y can either accept or reject. We also need resource bounded versions of this property.
We define the Kolmogorov complexity function C(x|y) (see [LV97]) by $C(x|y) = \min\{|p| : U(p,y) = x\}$. We define unconditional Kolmogorov complexity by $C(x) = C(x|\lambda)$. Hartmanis defined a time bounded version of Kolmogorov complexity in [Har83], but resource bounded versions of Kolmogorov complexity date back as far as [Bar68]. (See also [LV97].) Sipser [Sip83] defined the distinguishing complexity $CD^t$.
We will need the following versions of resource bounded Kolmogorov complexity and distinguishing complexity.
$CS^s(x|y) = \min\{|p| : U(p,y) = x$ and the computation uses at most $s(|x|+|y|)$ tape cells$\}$. (See [Har83].)
$CD^t(x|y) = \min\{|p| :$ (1) U(p,x,y) accepts; (2) U(p,z,y) rejects for all $z \neq x$; (3) U(p,z,y) runs in at most $t(|z|+|y|)$ steps for all strings z$\}$. (See [Sip83].)
$CND^t(x|y) = \min\{|p| :$ (1) $U_n(p,x,y)$ accepts; (2) $U_n(p,z,y)$ rejects for all $z \neq x$; (3) $U_n(p,z,y)$ runs in at most $t(|z|+|y|)$ steps for all strings z$\}$. (See [BF97].)
For $0 < \delta \le 1$ we define the following sets of strings of "maximal" $CD^p$ and $CND^p$ complexity:
$$R^{CD}_{t,\delta} = \{x : CD^t(x) \ge \delta|x|\} \qquad\text{and}\qquad R^{CND}_{t,\delta} = \{x : CND^t(x) \ge \delta|x|\}.$$
Note that for $\delta = 1$ these sets are the sets mentioned in the introduction. In this case we will omit the $\delta$ and use $R^{CD}_t$ and $R^{CND}_t$. We also define the set of strings of maximal space bounded complexity:
$$cR^{CS}_s = \{\langle x,y\rangle : CS^s(x|y) \ge |x|\}.$$
The c in the notation is to emphasize that randomness is conditional. Also, $cR^{CS}_s$ technically is a set of pairs rather than a set of strings. The unconditional space bounded random strings would be
$$R^{CS}_s = \{x : CS^s(x) \ge |x|\}.$$
We have no theorems concerning this set.
We have no theorems concerning this set.
The C-complexity of a string is always upperbounded by its length plus
some constant depending only on the choice of the universal machine. The
CD- and CND-complexity of a string are always upperbounded by the C-
complexity of that string plus some constant depending again only on the
particular choice of universal machine. All quantiers used in this paper are
polynomially bounded. Often the particular polynomial is not important
for the sequel or it is clear from the context and is omitted. Sometimes we
need explicit bounds. Then the particular bound is given as a superscript to
the quantier. E.g., we use 9 m y to denote \There exists a y with jyj m,"
or 8 =n x to denote \For all x of length n."
The classes MA and AM are dened as follows.
Denition 2.2 L 2 MA i there exists a jxj c time bounded machine M
such
1. x 2 L =) 9yPr[M(x;
2.
where r is chosen uniformly at random in f0; 1g jxj c
there exists a jxj c time bounded machine M such that
1. x 2 L =) Pr[9yM(x;
2.
where r is chosen uniformly at random in f0; 1g jxj c
It is known that $NP \cup BPP \subseteq MA \subseteq AM \subseteq PSPACE$ [Bab85].
Let #M(x) represent the number of accepting computations of a nondeterministic Turing machine M on input x. A language L is in $\oplus P$ if there exists a polynomial time bounded nondeterministic Turing machine M such that for all x: $x \in L \iff \#M(x)$ is odd.
Let g be any function. We say that advice function f is g-bounded if for all n it holds that $|f(n)| \le g(n)$. In this paper we will only be interested in functions g that are polynomial.
The notation $\le^{sn}_T$ is used for strong nondeterministic Turing reductions, which are defined by $A \le^{sn}_T B$ iff $A \in NP^B \cap coNP^B$.
3 Distinguishing Complexity for Derandomization
In this section we prove hardness of $R^{CD}_t$ and $R^{CND}_t$ for Merlin-Arthur and Arthur-Merlin games respectively under NP-reductions.
Theorem 3.1 For any t with $t(n) \in \omega(n \log n)$, $MA \subseteq NP^{R^{CD}_t}$.
and
Theorem 3.2 For any t with $t(n) \in \omega(n)$, $AM \subseteq NP^{R^{CND}_t}$.
The proof of both theorems is roughly as follows. First guess a string of high $CD^{poly}$-complexity, respectively $CND^{poly}$-complexity. Next, we use the nondeterministic reductions once more to play the role of Merlin, and use the random string to derandomize Arthur. Note that this is not as straightforward as it might look. The randomness used by Arthur in interactive protocols is used for hiding and can in general not be substituted by computational randomness.
The idea of using strings of high CD-complexity and Zuckerman's sampler derandomization stems from [BF00] (Section 8), which is the full version of [BF97]. Though they do not explicitly define the set $R^{CD}_t$, they use the same approach to derandomize BPP computations there.
The proof needs a string of high $CD^p$ respectively $CND^p$ complexity for some polynomial p. We first show that we can nondeterministically extract such a string from a longer string with high $CD^t$ complexity (respectively $CND^t$ complexity) for any fixed t with $t(n) \in \omega(n \log n)$.
Lemma 3.3 Let f be such that f(n) < n, and let t, t 0 and T be such
that T
all suciently large s with CD t holds that CD t 0
Proof. Suppose for a contradiction that for every constant d' there are infinitely many s with $CD^t(s) \ge |s|$ such that $CD^{t'}(s[1..f(|s|)]) < f(|s|) - 2\log f(|s|) - d'$. Then for any such s there exists a program $p_s$ that runs in time $t'(f(|s|))$ and recognizes only s[1..f(|s|)], where $|p_s| < f(|s|) - 2\log f(|s|) - d'$. The following program then recognizes s and no other string.
Input y:
Check that the first f(|s|) bits of y equal s[1..f(|s|)], using $p_s$. (Assume f(|s|) is stored in the program at a cost of $O(\log f(|s|))$ bits.)
Check that the last $|s| - f(|s|)$ bits of y equal $s[f(|s|)+1..|s|]$. (These bits are also stored in the program.)
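A small sketch of the distinguishing program just described; the helper names are ours and hypothetical, and the short program $p_s$ for the prefix and the stored suffix are assumed to be given.

```python
def make_distinguisher(p_s, f_len, suffix):
    """Return a program that accepts exactly one string s: the string whose
    first f_len bits are accepted by p_s (which accepts only s[1..f_len])
    and whose remaining bits equal the explicitly stored suffix."""
    def accepts(y):
        if len(y) != f_len + len(suffix):
            return False
        # Use the short program p_s to check the prefix ...
        if not p_s(y[:f_len]):
            return False
        # ... and compare the stored suffix bit for bit.
        return y[f_len:] == suffix
    return accepts
```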
This program runs in time $T(|s|)$. Therefore it takes at most t(|s|) steps on U for all sufficiently large s [HS66]. We lose the log n factor here because our algorithm must run on a fixed machine and the simulation is deterministic.
The program's length is $|p_s| + (|s| - f(|s|)) + \log f(|s|) + O(1) < |s| - \log f(|s|) - d' + O(1)$, which is less than |s| for almost all s. Hence $CD^t(s) < |s|$, which contradicts the assumption.
Corollary 3.4 For every polynomial n c , t 2 !(n log n) and suciently large
string s with CD t
c and s
Proof . Take t 0
c and apply Lemma 3.3. 2
Lemma 3.3 and Corollary 3.4 have the following nondeterministic analogue:
Lemma 3.5 For every polynomial n c , t 2 !(n) and suciently large string
s with CND t
c and s
Proof. The same proof applies, with a lemma similar to Lemma 3.3. However, in the nondeterministic case the simulation costs only linear time [BGW70].
Before we can proceed with the proof of the theorems, we also need some earlier results. We first need the following theorem from Zuckerman:
Theorem 3.6 ([Zuc96]) There is a constant c such that for
(m) and
there exists a universal (r; d; m; ;
)-oblivious sampler which runs in polynomial
time and uses only
bits and outputs
We also need the following lemma by Buhrman and Fortnow:
Lemma 3.7 ([BF97]) Let A be a set in P. For each string $x \in A^{=n}$ it holds that $CD^p(x) \le 2\log(\|A^{=n}\|) + O(\log n)$ for some polynomial p.
As noted in [BF97], an analogous lemma holds for $CND^p$ and NP.
Lemma 3.8 ([BF97]) Let A be a set in NP. For each string $x \in A^{=n}$ it holds that $CND^p(x) \le 2\log(\|A^{=n}\|) + O(\log n)$ for some polynomial p.
From these results we can prove the theorems. If we, for Theorem 3.1, want to prove that an NP oracle machine with oracle $R^{CD}_t$ can recognize a set A in MA, then the positive side of the proof is easy: if $x \in A$ then there exists a machine M and a string y such that a 2/3 fraction of the strings r of length $|x|^c$ makes M(x,y,r) accept. So an NP machine can certainly guess one such pair (y,r) as a "proof" for $x \in A$. The negative side is harder. We will show that if $x \notin A$ and we substitute for r a string of high enough CD complexity (CND complexity for Theorem 3.2), then no y can make M(x,y,r) accept.
To grasp the intuition behind the proof, let us look at the much simplified example of a BPP machine M having a 1/3 error probability on input x and a string r of maximal unbounded Kolmogorov complexity. There are $2^{|x|^c}$ possible computations on input x, where $|x|^c$ bounds the number of coin flips of M. Suppose that M must accept x; then at most a 1/3 fraction, i.e. at most $2^{|x|^c}/3$, of these computations reject x. Each rejecting computation consists of a deterministic part described by M and x and a set of $|x|^c$ coin flips. Identify such a set of coin flips with a binary string, and we have that each rejecting computation uniquely identifies a string of length $|x|^c$. Call this set of strings B. We would like to show by contradiction that a random string cannot be a member of this set, and hence that any random string, used as a sequence of coin flips, leads to a correct result. Any string in B is described by M, x and an index in B, which has length $\log \|B\| \le |x|^c - \log 3$. So far there are no grounds for a contradiction, since a description consisting of these elements can have length greater than $|x|^c$. However, we can amplify the computation of M on input x by repetition and taking majority. Repeating the computation a linear number of times blows up the number of incorrect coin-flip sequences to at most a $2^{-2|x|}$ fraction of all sequences, while using at most $|x|^{c+1}$ random bits. However, for large enough x, a description consisting of M, x and an index into this smaller set has length at most $|x|^{c+1} - |x|$ plus or minus some additive constant depending on the definition of Kolmogorov complexity used, which is smaller than $|x|^{c+1}$, and thus we reach a contradiction.
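A worked version of this counting, under the simplifying assumption (ours, for illustration) that the error is amplified to at most $2^{-2|x|}$ while using $|r| \le |x|^{c+1}$ random bits:
$$\|B'\| \le 2^{-2|x|}\cdot 2^{|r|}, \qquad\text{so for } r \in B':\quad C(r) \le |x| + \log\|B'\| + O(\log|x|) \le |r| - |x| + O(\log|x|) < |r|,$$
which contradicts $C(r) \ge |r|$ for a Kolmogorov-random r.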
Unfortunately, in our case the situation is a bit more complicated. The factor 2 in Lemma 3.7 renders standard amplification of randomized computation useless. Fortunately, Theorem 3.6 allows for a different type of amplification using much fewer random bits, so that the same type of argument can be used. We will now proceed to show how to fit the amplification given by Theorem 3.6 to our situation.
Lemma 3.9
1. Let L be a language in MA. For any constant k and any constant $0 < \delta \le 1$ there exists a deterministic polynomial time bounded machine M such that:
(a) $x \in L \implies \exists y\, \Pr_r[M(x,y,r) \text{ accepts}] = 1$
(b) $x \notin L \implies \forall y\, \Pr_r[M(x,y,r) \text{ accepts}] \le 2^{-km}$
where $m = |x|^c$ for a suitable constant c, and r is chosen uniformly at random from the strings in $\{0,1\}^{(1+\delta)(1+k)m}$.
2. Let L be a language in AM. For any constant k and any constant $0 < \delta \le 1$ there exists a deterministic polynomial time bounded machine M such that:
(a) $x \in L \implies \Pr_r[\exists y\, M(x,y,r) \text{ accepts}] = 1$
(b) $x \notin L \implies \Pr_r[\exists y\, M(x,y,r) \text{ accepts}] \le 2^{-km}$
where $m = |x|^c$ and r is chosen uniformly at random from the strings in $\{0,1\}^{(1+\delta)(1+k)m}$.
Proof.
1. Fürer et al. showed that the fraction 2/3 (see Definition 2.2) can be replaced by 1 in [FGM+89]. Now let $M_L$ be the deterministic polynomial time machine corresponding to L in Definition 2.2, adapted so that it can accept with probability 1 if $x \in L$. Assume $M_L$ runs in time $n^c$ for some constant c. This means that for $M_L$ the $\exists y$ and $\forall y$ in the definition can be assumed to be $\exists^{n^c} y$ and $\forall^{n^c} y$ respectively. Also, the random string may be assumed to be drawn uniformly at random from $\{0,1\}^{n^c}$.
To obtain the value 2 km in the second item, we use Theorem 3.6 with
1=6. For given x and y let f xy be the FP function
that on input z computes ML (x;
We use the oblivious sampler to get a good estimate
for Ef xy . That is, we feed a random string of length (1+)(1+k)m in
the oblivious sampler and it returns sample points
z d on which we compute 1
d
is the machine
that computes this sum on input x, y and r and accepts i its value
is greater than 1=2.
If x 2 L there is a y such that Pr[ML (x;
no matter which sample points are returned by
the oblivious sampler. If
y. With
probability 1
the sample points returned by the oblivious sampler
are such that
d
, so 1
d
probability 2 km . 2
2. The proof is analogous to the proof of Part 1. We just explain the differences. For the 1 in the first item of the claim we can again refer to [FGM+89], but now to Theorem 2(ii) of that paper. In this part M_L is the deterministic polynomial time machine corresponding to the AM-language L, and we define the function f_x : {0,1}^m → [0,1] as the function that on input z computes whether ∃y ∈ {0,1}^{n^c} M_L(x, y, z) = 1; this is an FP^NP computable function. The sample points z_1, ..., z_d that are returned in this case have the following properties. If x ∈ L then f_x(z_i) = 1 no matter which string is returned as z_i. That is, for every possible sample point z_i there is a y_i such that M_L(x, y_i, z_i) = 1. If x ∉ L then, for any set of sample points z_1, ..., z_d that the sampler may return, with probability 1 − 2^{−km} there exists a y_i with M_L(x, y_i, z_i) = 1 for fewer than half of the sample points. That is

Pr_r[ ||{i : ∃y_i M_L(x, y_i, z_i) = 1}|| > d/2 ]

is less than 2^{−km}. So if we let M(x, y, r) be the deterministic polynomial time machine that uses r to generate d sample points z_1, ..., z_d, then interprets y as ⟨y_1, ..., y_d⟩ and counts the number of accepts of M_L(x, y_i, z_i), and accepts if this number is greater than (1/2)d, we get exactly the desired result. 2
In the next lemma we show that a string of high enough CD^poly (CND^poly) complexity can be used to derandomize an MA (AM) protocol.
Lemma 3.10
1. Let L be a language in MA and 0 < γ ≤ 1. There exist a deterministic polynomial time bounded machine M, a polynomial q, a δ > 0 and integers k and c such that for almost all n and every r of length (1+δ)(1+k)n^c with CD^q(r) ≥ γ|r|, it holds for all x of length n that x ∈ L ⟺ ∃y M(x, y, r) = 1.
2. Let L be a language in AM and 0 < γ ≤ 1. There exist a deterministic polynomial time bounded machine M, a polynomial q, a δ > 0 and integers k and c such that for almost all n and every r of length (1+δ)(1+k)n^c with CND^q(r) ≥ γ|r|, it holds for all x of length n that x ∈ L ⟺ ∃y M(x, y, r) = 1.
Proof .
1. Choose δ > 0 sufficiently small (depending on γ) and k > 6/γ
. Let M be the deterministic polynomial
time bounded machine corresponding to L, k and δ of Lemma 3.9, item 1. The polynomial n^c will be the time bound of the machine witnessing L ∈ MA of that same lemma. We will determine q later, but assume for now that r is a string of length (1+δ)(1+k)n^c such that CD^q(r) ≥ γ|r|, and for ease of notation set m = n^c.
Suppose x ∈ L. Then it follows that there exists a y such that for all s of length (1+δ)(1+k)m, M(x, y, s) = 1. So in particular it holds that M(x, y, r) = 1.
Suppose x ∉ L. We have to show that for all y it is the case that M(x, y, r) = 0.
Suppose that this is not true and let y_0 be such that M(x, y_0, r) = 1. Define
A_{x,y_0} = {s : M(x, y_0, s) = 1}.
It follows that A_{x,y_0} ∈ P by essentially a program that simulates M and has x and y_0 hardwired. (Although A_{x,y_0} is finite and therefore trivially in P, it is crucial here that the size of the polynomial program is roughly |x| + |y_0|.) Because of the amplification of the MA protocol we have that:
||A_{x,y_0}|| ≤ 2^{(1+δ)(1+k)m − km}.
Since r ∈ A_{x,y_0} it follows by Lemma 3.7 that there is a polynomial p such that:
CD^p(r) ≤ 2 log||A_{x,y_0}|| + O(log m) ≤ 2((1+δ)(1+k)m − km) + O(log m).
On the other hand we chose r such that:
CD^q(r) ≥ γ|r| = γ(1+δ)(1+k)m,
which, by the choice of δ and k, gives a contradiction for q ≥ p.
2. Choose δ > 0 sufficiently small (depending on γ) and k > 5/γ
. Let M be the deterministic polynomial
time bounded machine corresponding to L, k and δ of Lemma 3.9, item 2. Again, n^c will be the time bound of the machine, now witnessing L ∈ AM; q will be determined later. Assume for now that r is a string of length (1+δ)(1+k)n^c such that CND^q(r) ≥ γ|r|, and set m = n^c.
Suppose x ∈ L. Then it follows that for all s there exists a y such that M(x, y, s) = 1. So in particular there is a y_r such that M(x, y_r, r) = 1.
Suppose x ∉ L. We have to show that ∀y M(x, y, r) = 0. Suppose that this is not true. Define A_x = {s : ∃y M(x, y, s) = 1}. Then A_x ∈ NP by a program that has x hardwired, guesses a y and simulates M. Because of the amplification of the AM protocol we have that ||A_x|| ≤ 2^{(1+δ)(1+k)m − km}. Since r ∈ A_x it follows by Lemma 3.8 that there exists a polynomial p such that:
CND^p(r) ≤ 2 log||A_x|| + O(log m) ≤ 2((1+δ)(1+k)m − km) + O(log m).
On the other hand, we chose r such that:
CND^q(r) ≥ γ|r| = γ(1+δ)(1+k)m,
which gives a contradiction whenever q ≥ p. 2
The following corollary shows that a string of high enough CD^poly complexity can be used to derandomize a BPP machine (see also Theorem 8.2 in [BF00]).
Corollary 3.11 Let A be a set in BPP. For any ε > 0 there exist a polynomial time Turing machine M and a polynomial q such that if CD^q(r) ≥ ε|r|, for r of the appropriate polynomial length, then for all x of length n it holds that x ∈ A ⟺ M(x, r) = 1.
Proof of Theorem 3.1. Let A be a language in MA. Let q, M, δ, k and c be as in Lemma 3.10, item 1, with γ = 1/2. The nondeterministic reduction behaves as follows on input x of length n. First guess an s of size (1+δ)(1+k)n^c and check that s ∈ R^CD_t; set r := s. Accept if and only if there exists a y such that M(x, y, r) = 1. By Corollary 3.4 it follows that CD^q(r) ≥ |r|/2 and the correctness of the reduction follows directly from Lemma 3.10, item 1 with γ = 1/2. 2
Proof of Theorem 3.2. This follows directly from Lemma 3.10, item 2. The NP-algorithm is analogous to the one above. 2
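The following sketch (ours, not from the paper) spells out the shape of this reduction in code, with nondeterministic guessing replaced by exhaustive search; the predicate M from Lemma 3.10, the oracle test in_R_CD_t and the two length parameters are hypothetical callables and values supplied by the caller.

from itertools import product

def reduction_accepts(x, M, in_R_CD_t, rand_len, witness_len):
    # Brute-force simulation of the NP reduction: guess s, check it against the
    # oracle R^CD_t, use it as the random string r, then guess a witness y.
    for s_bits in product("01", repeat=rand_len):        # nondeterministic guess of s
        s = "".join(s_bits)
        if not in_R_CD_t(s):                             # oracle query: is s CD-random?
            continue
        r = s
        for y_bits in product("01", repeat=witness_len): # nondeterministic guess of y
            if M(x, "".join(y_bits), r) == 1:
                return True
    return False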
Corollary 3.12 For t ∈ ω(n log n):
1. BPP and NP^BPP are included in NP^{R^CD_t}.
2. GI ∈ NP^{R^CND_t} ∩ coNP^{R^CND_t}.
It follows that if R^CND_t ∈ NP ∩ coNP then the Graph Isomorphism problem, GI, is in NP ∩ coNP.
4 Limitations
In the previous section we showed that the set R^CD_t is hard for MA under NP reductions. One might wonder whether R^CD_t is also hard for MA under a stronger reduction like the deterministic polynomial time Turing reduction. In this section we show that this, if true, will need a nonrelativizing proof. We will derive the following theorem.
Theorem 4.1 There is a relativized world where for every polynomial t and every ε > 0, MA ⊄ P^{R^CD_{t,ε}}.
The proof of this theorem is given in Lemma 4.2, which says that the statement of Theorem 4.1 is true in any world where EXP^NP ⊆ NP/poly and P = ⊕P, and Theorem 4.3, which shows the existence of such a world.
Lemma 4.2 For any oracle A and 0 < ε ≤ 1 it holds that if EXP^{NP^A} ⊆ NP^A/poly and P^A = ⊕P^A, then MA^A ⊄ P^{R^{CD,A}_{t,ε}}.
Proof. Suppose for a contradiction that the lemma is not true. If EXP^NP ⊆ NP/poly then EXP ⊆ NP/poly, so EXP ⊆ PH ([Yap83]). Furthermore, if EXP^NP ⊆ NP/poly, then certainly EXP^NP ⊆ EXP/poly. It then follows from [BH92] that EXP^NP = EXP. Since P = ⊕P, unique-SAT (see [BFT97] for a definition) is in P; then NP = RP by [VV86] and so NP ⊆ BPP, which implies PH ⊆ BPP by [Zac88]. Finally, the fact that unique-SAT is in P is equivalent to: for all x and y, C^poly(x|y) ≤ CD^poly(x|y) + O(1), as shown in [FK96]. We can use the proof of [FK96] to show that unique-SAT in P also implies that R^CD_{t,ε} ∈ P for a particular universal machine. (Note that we need only contradict the assumption for one particular type of universal machine.) This then in its turn implies by assumption that BPP and hence EXP^NP are in P^NP. This however contradicts the hierarchy theorem for relativized Turing machines [HS65]. As all parts of this proof relativize, we get the result for any
oracle. There's one caveat here. Though R^{CD,A}_{t,ε} clearly has a meaningful interpretation, to talk about P^{R^{CD,A}_{t,ε}} one must of course allow P to have access to the oracle A itself. It is not clear that P can ask any question to A if the machine can only ask questions about the random strings. Therefore, one might argue that P^{R^{CD,A}_{t,ε} ⊕ A} should actually be in the statement of the lemma. This does not affect the proof.
Our universal machine, say U_S, is the following. On input p, x, y, U_S uses the Cook-Levin reduction to produce a formula f on |x| variables with the property that x satisfies f if and only if p accepts x. Then U_S uses the self-reducibility of f and the assumed polynomial time algorithm for unique-SAT to make acceptance of x unique. That is, first, if the number of variables is not equal to |y| it rejects. Then, using the well-known substitute-and-reduce algorithm for SAT, it verifies for the assignments successively obtained from the algorithm that the unique-SAT algorithm accepts exactly one of the two restricted formulas at each step, and rejects if this algorithm accepts both. Using this universal machine every program accepts at most one string and therefore R^CD_{t,ε} ∈ P via an obvious predicate. As argued above, this gives us our contradiction. 2
Now we proceed to construct the oracle.
Theorem 4.3 There exists an oracle A such that EXP^{NP^A} ⊆ NP^A/poly and P^A = ⊕P^A.
Proof. The proof parallels the construction from Beigel, Buhrman and Fortnow [BBF98], who construct an oracle such that P^A = ⊕P^A and EXP^A = NP^A. We will use a similar setup.
Let M^A be a nondeterministic linear time Turing machine such that the language L^A defined by
L^A = {x : the number of accepting paths of M^A(x) is odd}
is ⊕P^A-complete for every A.
For every oracle A, let K^A be the linear time computable complete set for NP^A. Let N^{K^A} be a deterministic machine that runs in time 2^n and for every A accepts a language H^A that is complete for EXP^{NP^A}. We will construct A such that there exists an n^2-bounded advice function f such that for all w:
⟨0, w, 1^{|w|^2}⟩ ∈ A ⟺ w ∈ L^A    (Condition 0)
w ∈ H^A ⟺ ∃v ⟨1, f(|w|), w, v⟩ ∈ A    (Condition 1)
Condition 0 will guarantee that P^A = ⊕P^A, and Condition 1 that EXP^{NP^A} ⊆ NP^A/poly.
We use the term 0-strings for the strings of the form ⟨0, w, 1^{|w|^2}⟩ and 1-strings for the strings of the form ⟨1, z, w, v⟩ with |z| = |w|^2. All other strings we immediately put in A.
First we give some intuition for the proof. M is a linear time Turing machine. Therefore setting the 1-strings forces the setting of the 0-strings. Condition 0 will be automatically fulfilled by just describing how we set the 1-strings, because they force the 0-strings as defined by Condition 0.
Fulfilling Condition 1 requires a bit more care since N^{K^A} can query exponentially long strings, and hence doubly exponentially many 0- and 1-strings. We consider each 1-string ⟨1, z, w, v⟩ as a 0-1 valued variable y_{⟨z,w,v⟩} whose value determines whether ⟨1, z, w, v⟩ is in A. The construction of A will force a correspondence between the computation of N^{K^A}(x) and a low degree polynomial over these variables with values in GF(2). To encode the computation properly we use the fact that the OR function has high degree.
We will assign a polynomial p_z over GF[2] to all of the 0-strings and 1-strings z. We ensure that for all z:
1. If p_z = 1 then z is in A.
2. If p_z = 0 then z is not in A.
First, for each 1-string z = ⟨1, z', w, v⟩ we let p_z be the single variable polynomial y_{⟨z',w,v⟩}.
We assign polynomials to the 0-strings recursively. Note that M^A(x) can only query 0-strings ⟨0, w, 1^{|w|^2}⟩ with |w| < |x|. Consider an accepting computation path of M(x) (assuming the oracle queries are guessed correctly). Let q_1, ..., q_m be the queries on this path and b_1, ..., b_m be the query answers, with b_i = 1 if the query was guessed in A and b_i = 0 otherwise. Note that m ≤ |x|.
Let P be the set of accepting computation paths of M(x). We then define the polynomial p_z for z = ⟨0, x, 1^{|x|^2}⟩ as follows:

p_z = Σ_{paths in P} Π_{i=1}^{m} (p_{q_i} + b_i + 1).

Remember that we are working over GF[2] so addition is parity.
Setting the variables y_{⟨z,w,v⟩} (and thus the 1-strings) forces the values of p_z for the 0-strings. We have set things up properly so the following lemma is straightforward.
Lemma 4.4 For each 0-string z = ⟨0, x, 1^{|x|^2}⟩ we have p_z = 1 ⟺ x ∈ L^A, so Condition 0 can be satisfied. The polynomial p_z has degree at most |x|^2.
Proof: Simple proof by induction on |x|. 2
The construction will be done in stages. At stage n we will code all
the strings of length n of H A into A setting some of the 1-strings and
automatically the 0-strings and thus fullling both condition 0 and 1 for
this stage.
We will need to know the degree of the multivariate multilinear polynomials
representing the OR and the AND function.
Lemma 4.5 The representation of the functions OR(u_1, ..., u_m) and AND(u_1, ..., u_m) as multivariate multilinear polynomials over GF[2] requires degree exactly m.
Proof: Every function over GF[2] has a unique representation as a multivariate multilinear polynomial.
Note that AND is just the product u_1 ⋯ u_m, and by using De Morgan's laws we can write OR as

OR(u_1, ..., u_m) = 1 + Π_{i=1}^{m} (1 + u_i).
The construction of the oracle now treats all strings of length n in lexicographic order: first a forcing phase, in which the oracle is set so that all computations of N^{K^A} remain fixed for future extensions of the oracle, and next a coding phase, in which first an advice string is picked and then the computations just forced are coded into the oracle in such a way that they can be retrieved by an NP machine with this advice string. Great care has of course to be taken so that the two phases don't disturb each other and do not disturb earlier stages of the construction.
We rst describe the forcing phase. Without loss of generality, we will
assume that machine N only queries strings of the form q 2 K A . Note that
since N runs in time 2 n it may query exponentially long strings to K A .
Let x 1 be the rst string of length n. When we examine the computation
of N(x 1 ) we encounter the rst query q 1 to K A . We will try to extend the
oracle A to A 0 A such that q 1 2 K A 0
. If such an extension does not exist
we may assume that q 1 will never be in K A no matter how we extend A in
the future. We must however take care that we will not disturb previous
queries that were forced to be in K A . To this end we will build a set S
containing all the previously encountered queries that were forced to be in
K A . We will only extend A such that for all q 2 S it holds that q 2 K A 0
We will call such an extension an S-consistent extension of A.
Returning to the computation of N(x 1 ) and q 1 we ask whether there is
an S-consistent extension of A such that q 1 2 K A 0
. If such an extension
exists we will choose the S-consistent extension of A which adds a minimal
number of strings to A and put q 1 in S. Next we continue the computation of
answered yes and otherwise we continue with q 1 answered
no. The next lemma shows that a minimal extension of A will never add
more than 2 3n strings to A.
Lemma 4.6 Let S be as above and q be any query to K A and suppose we are
in stage n. If there exists an S-consistent extension of A such that q 2 K A 0
then there exists one that adds at most 2 3n strings to A.
Proof. Let M_K be a machine that accepts K^A when given oracle A, and consider the computation of machine M^A_K(q). Let s_1, ..., s_l be the smallest set of strings such that adding them to A is an S-consistent extension of A such that M^{A'}_K(q) accepts. Consider the leftmost accepting path of M^{A'}_K(q) and let q_1, ..., q_{l'} be the queries (both 0- and 1-queries) on that path. Moreover let b_i be 1 if q_i was answered positively and 0 otherwise. Define for q the following polynomial:

P_q = Π_i (p_{q_i} + b_i + 1).    (2)

After adding the strings s_1, ..., s_l to A we have that P_q = 1, and by Lemma 4.4 the degree of each p_{q_i} is at most 2^{2n} and hence the degree of P_q is at most 2^{3n}. Now consider what happens when we take out any number of the strings s_1, ..., s_l from A', resulting in A''. Since this was a minimal extension of A it follows that M^{A''}_K(q) rejects and that P_q = 0. Hence, as a function of which of the strings s_1, ..., s_l are present, P_q computes the AND on the l strings s_1, ..., s_l. Since by Lemma 4.5 the degree of the unique multivariate multilinear polynomial that computes the AND over l variables over GF[2] is l, it follows that l ≤ 2^{3n}. 2
After we have dealt with all the queries encountered on the computation of N^{K^A}(x_1), we continue this process with the other strings of length n in lexicographic order. Note that since we only extend A S-consistently, we will never disturb any computation of N^{K^A} on lexicographically smaller strings. This follows since the queries that are forced to be yes will remain yes, and the queries that could not be forced with an S-consistent extension will never be forced by any S'-consistent extension of A, for S ⊆ S'. After we have finished this process we have to code all the computations of N on the strings of length n. It is easy to see that ||S|| ≤ 2^{2n} and that at this point, by Lemma 4.6, at most 2^{5n} strings have been added to A at this stage. Closing the forcing phase we can now pick an advice string and proceed to the coding phase. A standard counting argument shows that there is a string z of length n^2 such that no strings of the form ⟨1, z, w, v⟩ have been added to A. This string z will be the advice for strings of length n.
Now we have to show that we can code every string x of length n correctly in A to fulfill Condition 1. We will do this in lexicographic order. Suppose we have coded all strings x_j (for j < i) correctly and that we want to code x_i. There are two cases:
N^{K^A}(x_i) rejects: In this case we put none of the strings ⟨1, z, x_i, w⟩ in A and thus set all these variables to 0. Since this does not change the oracle it is an S-consistent extension.
N^{K^A}(x_i) accepts: We must properly extend A S-consistently, adding only strings of the form ⟨1, z, x_i, w⟩ to A. The following lemma shows that this can always be done. A proper extension of A is one that adds one or more strings to A.
Lemma 4.7 Let S with ||S|| ≤ 2^{2n} be as above. Suppose that N^{K^A}(x_i) accepts. There exists a proper S-consistent extension of A adding only strings of the form ⟨1, z, x_i, w⟩.
Proof. Suppose that no such proper S-consistent extension of A exists. Consider the following polynomial:

Q_{x_i} = Π_{q ∈ S} P_q,

where P_q is defined as in Lemma 4.6, equation 2. Initially Q_{x_i} = 1, and the degree of Q_{x_i} is at most 2^{5n}. Since every proper extension of A with strings of the form ⟨1, z, x_i, w⟩ is not S-consistent, it follows that Q_{x_i} computes the OR of the variables y_{⟨z,x_i,w⟩} (up to negation). Since there are 2^{n^2} many of those variables, we have by Lemma 4.5 a contradiction with the degree of Q_{x_i}. Hence there exists a proper S-consistent extension of A adding only strings of the form ⟨1, z, x_i, w⟩, and x_i can be properly coded into A. 2
Stage n ends after coding all the strings of length n.
This completes the proof of Theorem 4.3 2
Theorem 4.3 together with the proof of Lemma 4.2 also gives the following
corollary.
Corollary 4.8 There exists a relativized world where EXP^NP is in BPP and MA ⊄ P^{R^CD_{t,ε}}.
Our oracle also extends the oracle of Ko [Ko91] to CD poly complexity as
follows.
Corollary 4.9 There exists an oracle such that R CD
t; for any t 2 !(n log(n))
and > 0 is complete for NP under strong nondeterministic reductions and
. The oracle from Theorem 4.3 is a world where coNP BPP and
poly poly (xjy)+O(1), hence it follows that R CD
Corollary 3.12 relativizes so by Item 1 we have that BPP NP R CD
t; . 2
As a byproduct our oracle shows the following.
Corollary 4.10 There exists an A such that Unique-SAT^A ∈ P^A and PH^A ⊄ P^{NP^A}.
This corollary indicates that the current proof showing that if Unique-SAT is in P then the polynomial hierarchy collapses to BPP cannot be improved to yield a collapse to P^NP using relativizing techniques.
5 PSPACE and cR^CS_s
In this section we further study the connection between cR^CS_s and interactive proofs. So far we have established that strings that have sufficiently high CND^poly complexity can be used to derandomize an IP protocol that has two rounds, in such a way that the role of both the prover and the verifier can be played by an NP oracle machine. Here we will see that this is also true for IP itself, provided that the random strings have high enough space bounded Kolmogorov complexity. The class of quantified boolean formulas (QBF) is defined as the closure of the set of boolean variables x_i and their negations ¬x_i under the operations ∧, ∨, ∀x_i and ∃x_i. A QBF in which all the variables are quantified is called closed. Other QBFs are called open.
We need the following definitions and theorems from [Sha92].
Definition 5.1 ([Sha92]) A QBF B is called simple if in the given syntactic representation every occurrence of each variable is separated from its point of quantification by at most one universal quantifier (and arbitrarily many other symbols).
For technical reasons we also assume that (simple) QBFs can contain negated variables, but no other negations. This is no loss of generality since negations can be pushed all the way down to variables.
Definition 5.2 ([Sha92]) The arithmetization of a QBF B is an expression obtained from B by replacing every positive occurrence of x_i by the variable z_i, every negated occurrence of x_i by (1 − z_i), every ∧ by ·, every ∨ by +, every ∀x_i by Π_{z_i ∈ {0,1}}, and every ∃x_i by Σ_{z_i ∈ {0,1}}.
It follows that the arithmetization of a (simple) QBF in closed form has
an integer value, whereas the arithmetization of an open QBF is equivalent
to a (possibly multivariate) function.
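As a small worked example (ours, not from [Sha92]): the closed simple QBF ∀x_1 ∃x_2 (x_1 ∨ ¬x_2) arithmetizes to

  Π_{z_1 ∈ {0,1}} Σ_{z_2 ∈ {0,1}} ( z_1 + (1 − z_2) ) = (1 + 0) · (2 + 1) = 3,

which is positive, matching the fact that the formula is true.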
Definition 5.3 ([Sha92]) The functional form of a simple closed QBF B is the univariate function that is obtained by removing from the arithmetization of B either Π_{z_i ∈ {0,1}} or Σ_{z_i ∈ {0,1}}, where i is the least index of a variable for which this is possible.
Let B be a (simple) QBF with quantifiers Q_1, ..., Q_n, and let B_0 be the boolean formula obtained from B by removing all its quantifiers. We denote by ~B the arithmetization of B_0. It is well-known that the language of all true QBFs is complete for PSPACE. The restriction of true QBFs to simple QBFs remains complete.
Theorem 5.4 ([Sha92]) The language of all closed simple true QBFs is
complete for PSPACE (under polynomial time many-one reductions).
It is straightforward that the arithmetization of a QBF takes on a positive
value if and only if the QBF is true. This fact also holds relative to a not
too large prime.
Theorem 5.5 ([Sha92]) A simple closed quantified boolean formula B is
true if and only if there exists a prime number P of size polynomial in
jBj such that the value of the arithmetization of B is positive modulo P .
Moreover if B is false then the value of the arithmetization of B is 0 modulo
any such prime.
Theorem 5.6 ([Sha92]) The functional form of every simple QBF can be
represented by a univariate polynomial of degree at most 3.
Theorem 5.7 ([Sha92]) For every simple QBF there exists an interactive
protocol with prover P and polynomial time bounded verier V such that:
1. When B is true and P is honest, V always accepts the proof.
2. When B is false, V accepts the proof with negligible probability.
The proof of Theorem 5.7 essentially uses Theorem 5.6 to translate a simple QBF to a polynomial in the following way. First, the arithmetization of a simple QBF B in closed form is an integer value V which is positive if and only if B is true. Then, B's functional form F (recall: this is the arithmetization of the QBF that is obtained from B by deleting the first quantifier) is a univariate polynomial p_1 of degree at most 3 which has the property that combining p_1(0) and p_1(1) (by addition or multiplication, according to the deleted quantifier) yields V. Substituting any value r_1 in p_1 gives a new integer value V_1, which is of course the same value that we get when we substitute r_1 in F. However, F(r_1) can again be converted to a (low degree) polynomial by deleting its first Σ or Π sign, and the above game can be repeated. Thus, we obtain a sequence of polynomials. From the first polynomial in this sequence V can be computed. The last polynomial p_n has the property that p_n(r_n) = ~B(r_1, ..., r_n). Two more things are needed. First, if any other sequence of polynomials q_1, ..., q_n passes the consistency checks between consecutive polynomials and the final check against ~B, yet q_1 yields a value different from V, then there has to be some i where q_i ≠ p_i but q_i(r_i) = p_i(r_i), i.e. r_i is an intersection point of p_i and q_i. Second, all calculations can be done modulo some prime number of polynomial size (Theorem 5.5).
We summarize this in the following observation, which is actually a skeleton of the proof of Theorem 5.7.
Observation 5.8 ([Sha92],[LFKN92]) Let B be a closed simple QBF wherein the quantifiers are Q_1, ..., Q_n if read from left to right in its syntactic representation, and write a ∘_i b for a + b if Q_i is ∃ and for a · b if Q_i is ∀. Let A be its arithmetization, and let V be the value of A. There exists a prime number P of size polynomial in |B| such that for any sequence r_1, ..., r_n of numbers taken from [1..P] there is a sequence p_1, ..., p_n of polynomials of degree at most 3 and size polynomial in |B| such that:
1. p_1(0) ∘_1 p_1(1) = V
2. p_{i+1}(0) ∘_{i+1} p_{i+1}(1) = p_i(r_i) for 1 ≤ i < n
3. p_n(r_n) = ~B(r_1, ..., r_n)
4. For any sequence of univariate polynomials q_1, ..., q_n with
(a) q_1(0) ∘_1 q_1(1) ≠ V
(b) q_{i+1}(0) ∘_{i+1} q_{i+1}(1) = q_i(r_i) for 1 ≤ i < n
(c) q_n(r_n) = ~B(r_1, ..., r_n)
there is a minimal i such that p_i ≠ q_i and r_i is an intersection point of p_i and q_i.
Where all (in)equalities hold modulo P, and hold modulo any prime of polynomial size if B is false. Moreover, p_i can be computed in space quadratic in |B| from B, P, r_1, ..., r_{i−1}.
From this reformulation of Theorem 5.7 we obtain that for any sequence of univariate polynomials q_1, ..., q_n and sequence of values r_1, ..., r_n that satisfy items 2 and 3 in Observation 5.8, it holds that either q_1(0) ∘_1 q_1(1) is the true value of the arithmetization of B, or there is some polynomial q_i in this sequence such that r_i is an intersection point of p_i and q_i (where p_i is as in Observation 5.8). As p_i can be computed in quadratic space from B, P and r_1, ..., r_{i−1}, this means that in the latter case r_i cannot have high space bounded Kolmogorov complexity relative to B, P, q_i and r_1, ..., r_{i−1}. Hence, if r_i does have high space bounded Kolmogorov complexity, then r_i is not an intersection point, so the first case must hold (i.e., the value computed from q_1 is the true value of the arithmetization of B). The following lemma makes this precise.
Lemma 5.9 Assume the following for B, P , n,
1. B is a simple false closed QBF on n variables.
2. P is a prime number 2 jBj of size polynomial in jBj.
3. is a sequence of polynomials of degree 3 with coecients in
4. r are numbers in [1::P ].
5.
7.
8. ~
Proof: Take all calculations modulo P. Suppose q_1(0) ∘_1 q_1(1) ≠ 0. It follows from Observation 5.8 that there exists a sequence p_1, ..., p_n satisfying items 1 through 3 of that observation. Furthermore, since B is false and P is prime, the value of the arithmetization of B is 0 modulo P, so q_1 and p_1 cannot yield the same value. It follows that there must be a minimal i such that p_i ≠ q_i and r_i is an intersection point of p_i and q_i. However p_i can be computed in quadratic space from B, P and r_1, ..., r_{i−1}. As both p_i and q_i have degree at most 3, they intersect in at most three points, so the space bounded Kolmogorov complexity of r_i (relative to B, P, the q_j and r_1, ..., r_{i−1}) is bounded by a constant. A contradiction. 2
This suffices for the main theorem of this section. Let s be any polynomial with s(n) ≥ n^2.
Theorem 5.10 PSPACE ⊆ NP^{cR^CS_s}.
Proof: We prove the theorem for s(n) = n^2; the proof can by padding be extended to any polynomial. There exists an NP oracle machine that accepts the language of all simple closed true quantified boolean formulas as follows. On input B first check that B is simple. Guess a prime number P of size polynomial in |B| and a sequence p_1, ..., p_n of polynomials of degree at most 3 and with coefficients in [1..P]. Finally guess a sequence of numbers r_1, ..., r_n, all of size |P|. Check that:
1.
2.
3.
4. finally, using the oracle cR^CS_s, that the space bounded Kolmogorov complexity of each r_i (relative to what precedes it in the protocol) is at least |P|, for all i ≤ n.
If B is true, Observation 5.8 guarantees that these items can be guessed such that all tests are passed. If B is false and no other test fails, then Lemma 5.9 guarantees that p_1 yields the value 0, so the first check must fail. 2
By the fact that PSPACE is closed under complement and the fact that
cR CS
s is also in PSPACE Theorem 5.10 gives that cR CS
s is complete for
PSPACE under strong nondeterministic reductions [Lon82].
Corollary 5.11 cR CS
s is complete for PSPACE under strong nondeterministic
reductions.
Buhrman and Mayordomo [BM95] showed that for ε < 1, the set R^C_t = {x : C^t(x) ≥ ε|x|} is not hard for EXP under deterministic Turing reductions. In Theorem 5.10 we made use of the relativized Kolmogorov complexity (i.e., CS^s(x|y)). Using exactly the same proof as in [BM95] one can prove that the set cR^C_t = {⟨x, y⟩ : C^t(x|y) ≥ ε|x|} is not hard for EXP under Turing reductions. On the other hand the proof of Theorem 5.10 also works for this set: PSPACE ⊆ NP^{cR^C_t}. We suspect that it is possible to extend this to show that EXP ⊆ NP^{cR^C_t}. So far, we have been unable to prove this.
Acknowledgements
We thank Paul Vitanyi for interesting discussions and for providing the title of this paper. We also thank two anonymous referees who helped with a number of technical issues that cleared up much of the proofs and who pointed us to more correct references. One of the referees also pointed out Corollary 4.8.
--R
Trading group theory for randomness.
Complexity of programs to determine whether natural numbers not greater than n belong to a recursively enumerable set.
NP might not be as easy as detecting unique solutions.
Resource bounded kolmogorov complexity revisited.
Resource bounded kolmogorov complexity revisited.
Six hypotheses in search of a theorem.
Superpolynomial circuits
An excursion to the kolmogorov random strings.
Complete sets and structure in subrecursive classes.
On completeness and soundness in interactive proof systems.
Generalized Kolmogorov complexity and the structure of feasible computations.
On the computational complexity of algorithms.
Two tape simulation of multitape Turing machines.
On the complexity of learning minimum time-bounded turing machines
On the complexity of random strings (extended abstract).
Personal communication.
Strong nondeterministic polynomial-time reducibilities
Completeness, the recursion theorem and e
A complexity theoretic approach to randomness.
NP is as easy as detecting unique solutions.
Some consequences of non-uniform conditions on uniform classes
Probabilistic quantifiers and games.
--TR | relativization;kolmogorov complexity;interactive proofs;randomness;complexity classes;arthur-merlin;merlin-arthur |
586952 | A Randomized Time-Work Optimal Parallel Algorithm for Finding a Minimum Spanning Forest. | We present a randomized algorithm to find a minimum spanning forest (MSF) in an undirected graph. With high probability, the algorithm runs in logarithmic time and linear work on an exclusive read exclusive write (EREW) PRAM. This result is optimal w.r. t. both work and parallel time, and is the first provably optimal parallel algorithm for this problem under both measures. We also give a simple, general processor allocation scheme for tree-like computations. | Introduction
We present a randomized parallel algorithm to find a minimum spanning forest (MSF) in an edge-
weighted, undirected graph. On an EREW PRAM [KR90] our algorithm runs in expected logarithmic
time and linear work in the size of the input; these bounds also hold with high probability
in the size of the input. This result is optimal with respect to both work and parallel time, and is
the first provably optimal parallel algorithm for this problem under both measures.
Here is a brief summary of related results. Following the linear-time sequential MSF algorithm
of Karger, Klein and Tarjan [KKT95] (and building on it) came linear-work parallel MST algorithms
for the CRCW PRAM [CKT94, CKT96] and the EREW PRAM [PR97]. The best CRCW PRAM
algorithm known to date [CKT96] runs in logarithmic time and linear work, but the time bound
is not known to be optimal. The best EREW PRAM algorithm known prior to our work is the
result of Poon and Ramachandran which runs in O(log n log log n) time and linear work.
All of these algorithms are randomized. Recently Chong, Han and Lam [CHL99] presented a
deterministic EREW PRAM algorithm for MSF, which runs in logarithmic time with a linear
number of processors, and hence with work O((m + n) log n), where n and m are the number of
vertices and edges in the input graph. It was observed by Poon and Ramachandran [PR98] that
the algorithm in [PR97] could be speeded up to run in O(log n \Delta 2 log n ) time and linear work by
using the algorithm in [CHL99] as a subroutine (and by modifying the 'Contract' subroutine in
[PR97]).
In this paper we improve on the running time of the algorithm in [PR97, PR98] to O(log n),
which is the best possible, and we improve on the algorithm in [CKT96] by achieving the logarithmic
time bound on the less powerful EREW PRAM.
Part of this work was supported by Texas Advanced Research Program Grant 003658-0029-1999. Seth Pettie
was also supported by an MCD Fellowship.
Our algorithm has a simple 2-phase structure. It makes subroutine calls to the Chong-Han-
Lam algorithm [CHL99], which is fairly complex. But outside of these subroutine calls (which are
made to the simplest version of the algorithm in [CHL99]), the steps in our algorithm are quite
straightforward.
In addition to being the first time-work optimal parallel algorithm for MSF, our algorithm can
be used as a simpler alternative to several other parallel algorithms:
1. For the CRCW PRAM we can replace the calls to the CHL algorithm by calls to a simple
logarithmic time, linear-processor CRCW algorithm such as the one in [AS87]. The resulting
algorithm runs in logarithmic time and linear work and is considerably simpler than the MSF
algorithm in [CKT96].
2. As modified for the CRCW PRAM, our algorithm is simpler than the linear-work logarithmic-time
CRCW algorithm for connected components given in [Gaz91].
3. Our algorithm improves on the EREW connectivity and spanning tree algorithms in [HZ94,
HZ96] since we compute a minimum spanning tree within the same time and work bounds.
Our algorithm is simpler than the algorithms in [HZ94, HZ96].
In the following we use the notation S +T to denote union of sets S and T , and we use S + e to
denote the set formed by adding the element e to the set S. We say that a result holds with high
probability (or w.h.p.) in n if the probability that it fails to hold is less than 1/n^c, for any constant c > 0.
The rest of this paper describes and analyzes our algorithm, and is organized as follows. Section
2 gives a high-level description of our algorithm, which works in two phases. Section 3 describes the
details of Phase 1 of our algorithm; the main procedure of Phase 1 is Find-k-Min, which is given
in section 3.4. Section 4 gives Phase 2, whose main procedure is Find-MSF. Section 5 gives the
proof that our algorithm runs in expected logarithmic time and linear work, and section 6 extends
this result to high-probability bounds. Section 7 addresses the issue of processor allocation in the
various steps of our algorithm. Section 8 discusses the adaptability of our algorithm to realistic
parallel models like the BSP [Val90] and QSM [GMR97] and the paper concludes with section 9.
2 The High-Level Algorithm
Our algorithm is divided into two phases along the lines of the CRCW PRAM algorithm of [CKT96].
In Phase 1, the algorithm reduces the number of vertices in the graph from n to n=k vertices, where
n is the number of vertices in the input graph, and To perform this reduction
the algorithm uses the familiar recursion tree of depth log n [CKT94, CKT96, PR97], which gives
rise to O(2 log n ) recursive calls, but the time needed per invocation in our algorithm is well below
O(log n=2 log n ). Thus the total time for Phase 1 is O(log n). We accomplish this by requiring
Phase 1 to find only a subset of the MSF. By contracting this subset of the MSF we obtain a graph
with O(n=k) vertices. Phase 2 then uses an algorithm similar to the one in [PR97], but needs no
recursion due to the reduced number of vertices in the graph. Thus Phase 2 is able to find the MSF
of the contracted graph in O(log n) time and linear work.
We assume that edge weights are unique. As always, uniqueness can be forced by ordering the
vertices, then ordering identically weighted edges by their end points.
Here is a high-level description of our algorithm.
† We use log^{(r)} n to denote the log function iterated r times, and log* n to denote the minimum r s.t. log^{(r)} n ≤ 1.
High-Level(G):
(Phase 1) For each vertex v, retain the lightest k edges in edge-list(v), obtaining G_t
M := Find-k-Min(G_t, log* n)
G' := Contract all edges in G appearing in M
(Phase 2) G_s := Sample edges of G' with prob. 1/√k = 1/log^{(2)} n
F_s := Find-MSF(G_s)
G_f := Filter(G', F_s)
F := Find-MSF(G_f)
Return M + F
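The following sequential sketch (ours, not the authors' code) mirrors this two-phase structure; every helper it calls — k_lightest, find_k_min, contract, sample_edges, find_msf, filter_light and log_star — is a hypothetical stand-in for the corresponding procedure described in this paper and is passed in by the caller.

import math

def high_level(G, n, k_lightest, find_k_min, contract, sample_edges,
               find_msf, filter_light, log_star):
    # k = (log^(2) n)^2, as in the text
    k = int(math.log2(max(2.0, math.log2(max(4.0, n)))) ** 2)
    # Phase 1: reduce the number of vertices from n to O(n/k)
    G_t = k_lightest(G, k)
    M = find_k_min(G_t, log_star(n))
    G_prime = contract(G, M)
    # Phase 2: sample, find the MSF of the sample, filter, and finish (no recursion)
    G_s = sample_edges(G_prime, 1.0 / math.sqrt(k))
    F_s = find_msf(G_s)
    G_f = filter_light(G_prime, F_s)
    F = find_msf(G_f)
    return M | F   # M and F are edge sets; MSF(G) = M together with MSF(G_f)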
Theorem 2.1 With high probability, High-Level(G) returns the MSF of G in O(log n) time using (m + n)/ log n processors.
In the following sections we describe and analyze the algorithms for Phase 1 and Phase 2, and
then present the proof of the main theorem for the expected running time. We then obtain a
high probability bound for the running time and work. When analyzing the performance of the
algorithms in Phase 1 and Phase 2, we use a time-work framework, assuming perfect processor
allocation. This can be achieved with high probability to within a constant factor, using the load-balancing
scheme in [HZ94], which requires superlinear space, or the linear-space scheme claimed
in [HZ96]. We discuss processor allocation in Section 7 where we point out that a simple scheme
similar to the one in [HZ94] takes only linear space on the QRQW PRAM [GMR94], which is
a slightly stronger model than the EREW PRAM. The usefulness of the QRQW PRAM lies in
the fact the algorithms designed on that model map on to general-purpose models such as QSM
[GMR97] and BSP [Val90] just as well as the EREW PRAM. We then describe the performance of
our MSF algorithm on the QSM and BSP.
In Phase 1, our goal is to contract the input graph G into a graph with O(n=k) vertices. We do this
by identifying certain edges in the minimum spanning forest of G and contracting the connected
components formed by these edges. The challenge here is to identify these edges in logarithmic
time and linear work.
Phase 1 achieves the desired reduction in the number of vertices by constructing a k-Min forest
(defined below). This is similar to the algorithm in [CKT96]. However, our algorithm is considerably
simpler. We show that a k-Min forest satisfies certain properties, and we exploit these properties to
design a procedure Bor-uvka-A, which keeps the sizes of the trees contracted in the various stages of
Phase 1 to be very small so that the total time needed for contracting and processing edges in these
trees is o(log n=2 log n ). Phase 1 also needs a Filter subroutine, which removes 'k-min heavy' edges.
For this, we show that we can use an MSF verification algorithm on the small trees we construct
to perform this step. The overall algorithm for Phase 1, Find-k-Min uses these two subroutines to
achieve the stated reduction in the number of vertices within the desired time and work bounds.
3.1 k-Min Forest
Phase 1 uses the familiar 'sample, contract and discard edges' framework of earlier randomized
algorithms for the MSF problem [KKT95, CKT94, CKT96, PR97]. However, instead of computing
a minimum spanning forest, we will construct the k-Min tree [CKT96] of each vertex (where k = (log^{(2)} n)^2). Contracting the edges in these k-Min trees will produce a graph with O(n/k) vertices.
To understand what a k-Min tree is, consider the Dijkstra-Jarnik-Prim minimum spanning tree
algorithm:
S := {v}; T := ∅   (choose an arbitrary starting vertex v)
Repeat until T contains the MST of G:
   Choose minimum weight edge (a, b) s.t. a ∈ S, b ∉ S
   S := S + b; T := T + (a, b)
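As a small self-contained illustration (ours), the following sequential routine returns the first k edges chosen by this process when started at v; the graph is an adjacency dict mapping each vertex to a list of (weight, from, to) triples with distinct weights.

import heapq

def k_min(adj, v, k):
    in_S, chosen = {v}, []
    heap = list(adj.get(v, []))
    heapq.heapify(heap)
    while heap and len(chosen) < k:
        weight, a, b = heapq.heappop(heap)   # lightest edge seen so far leaving S
        if b in in_S:
            continue                          # both endpoints already in S: skip
        chosen.append((weight, a, b))         # the next edge of k-Min(v)
        in_S.add(b)
        for e in adj.get(b, []):
            if e[2] not in in_S:
                heapq.heappush(heap, e)
    return chosen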
The edge set k-Min(v) consists of the first k edges chosen by this algorithm, when started at
vertex v. A forest F is a k-Min forest of G if F ' MSF(G) and for all v 2 G; k-Min(v) ' F .
Let P_T(x, y) be the set of edges on the path from x to y in tree T, and let maxweight{A} be the maximum weight in a set of edges A.
For any forest F in G, define an edge (a; b) in G to be F -heavy if weight(a; b) ? maxweightfP F (a; b)g
and to be F -light otherwise. If a and b are not in the same tree in F then (a; b) is F-light.
Let M be the k-Min tree of v. We define weight_v(w) to be maxweight{P_M(v, w)} if w appears in M, and to be maxweight{k-Min(v)} otherwise. Define an edge (a, b) to be k-Min-heavy if weight(a, b) > max{weight_a(b), weight_b(a)}, and to be k-Min-light otherwise.
Claim 3.1 Let the measure weight_v(w) be defined with respect to any k in the range [1..n]. Then weight_v(w) ≤ maxweight{P_MSF(v, w)}.
Proof: There are two cases, when w falls inside the k-Min tree of v, and when it falls outside. If w is
inside k-Min(v), then weight v (w) is the same as maxweightfPMSF (v; w)g since k-Min(v) ' MSF .
Now suppose that w falls outside k-Min(v) and weight v (w) ? maxweightfPMSF (v; w)g. There
must be a path from v to w in the MSF consisting of edges lighter than maxweightfk-Min(v)g.
However, at each step in the Dijkstra-Jarnik-Prim algorithm, at least one edge in PMSF is eligible
to be chosen in that step. Since w 62 k-Min(v), the edge with weight maxweightfk-Min(v)g is
never chosen. Contradiction. 2
Let K be a vector of n values, each in the range [1::n]. Each vertex u is associated with a value of
denoted k u . Define an edge (u; v) to be K-Min-light if weight(u; v) ! maxfweight u (v); weight v (u)g,
where weight u (v) and weight v (u) are defined with respect to k u and k v respectively.
Lemma 3.1 Let H be a graph formed by sampling each edge in graph G with probability p. The
expected number of edges in G that are K-Min-light in H is less than n=p, for any K.
Proof: We show that any edge that is K-Min-light in G is also F -light where F is the MSF of
H. The lemma then follows from the sampling lemma of [KKT95] which states that the expected
number of F -light edges in G is less than n=p. Let us look at any K-Min-light edge (v; w). By
3.1, weight v (w) - maxweightfPMSF (v; w)g, the measure used to determine F -lightness.
Thus the criterion for K-Min-lightness, maxfweight v (w); weight w (v)g, must also be less than or
equal to maxweightfPMSF (v; w)g. Restating this, if (v; w) is K-Min-light, it must be F -light as
well. 2
We will use the above property of a k-Min forest to develop a procedure Find-k-Min(G; l). It
takes as input the graph G and a suitable positive integer l, and returns a k-Min forest of G. For
runs in logarithmic time and linear work. In the next few sections we describe some
basic steps and procedures used in Find-k-Min, and then present and analyze this main procedure
of Phase 1.
Since Phase 1 is concerned only with the k-Min tree of each vertex, it suffices to retain only the lightest k edges incident on each vertex. Hence, as stated in the first step of Phase 1 in algorithm High-Level in Section 2, we will discard all but the lightest k edges incident on each vertex, since we will not need them until Phase 2. This step can be performed in logarithmic time and linear work by a simple randomized algorithm that selects a sample of size √|L| from each adjacency list L, sorts this sample, and then uses this sorted list to narrow the search for the kth smallest element to a list of size O(|L|^{3/4}).
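A sequential stand-in for this step (ours): here heapq.nsmallest plays the role of the sampling-based parallel selection described above and produces the same set of edges.

import heapq

def prune_adjacency_lists(adj, k):
    # Keep only the k lightest edges on each adjacency list; edges are
    # (weight, from, to) triples with distinct weights.
    return {v: heapq.nsmallest(k, edges) for v, edges in adj.items()}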
3.2 Bor-uvka-A Steps
In a basic Bor-uvka step [Bor26], each vertex chooses its minimum weight incident edge, inducing
a number of disjoint trees. All such trees are then contracted into single vertices, and useless
edges discarded. We will call edges connecting two vertices in the same tree internal and all others
external. All internal edges are useless, and if multiple external edges join the same two trees, all
but the lightest are useless.
Our algorithm for Phase 1 uses a modified Bor-uvka step in order to reduce the time bound to
o(log n) per step. All vertices are classified as being either live or dead. After a modified Bor-uvka
step, vertex v's parent pointer is is the edge of minimum weight incident on
v. In addition, each vertex has a threshold which keeps the weight of the lightest discarded edge
adjacent to v. The algorithm discards edges known not to be in the k-Min tree of any vertex. The
threshold variable guards against vertices choosing edges which may not be in the MSF. A dead
vertex v has the useful property (shown below) that for any edge (a; b) in k-Min(v), weight(a; b) -
weight(v; p(v)), thus dead vertices need not participate in any more Bor-uvka steps.
It is well-known that a Bor-uvka step generates a forest of pseudo-trees, where each pseudo-tree
is a tree together with one extra edge that forms a cycle of length 2. In our algorithm we will assume
that a Bor-uvka step also removes one of the edges in the cycle so that it generates a collection of
rooted trees.
The following three claims refer to any tree resulting from a modified Bor-uvka step. Their
proofs are straightforward and are omitted.
Claim 3.2 The sequence of edge weights encountered on a path from v to root(v) is monotonically decreasing.
Claim 3.3 If depth(v) = d, then d-Min(v) consists of the edges in the path from v to root(v). Furthermore, the weight of (v, p(v)) is greater than any other edge in d-Min(v).
Claim 3.4 If the minimum-weight incident edge of u is (u, v), then k-Min(u) ⊆ (k-Min(v) + (u, v)).
Claim 3.5 Let T be a tree induced by a Bor-uvka step, and let T' be a subtree of T. If e is the minimum weight incident edge on T, then the minimum weight incident edge on T' is either e or an edge of T.
Proof: Suppose, on the contrary that the minimum weight incident edge on T 0 is e 0 62 T , and
let v and v 0 be the end points of e and e 0 which are inside T . Consider the paths P
(v 0 ) to the root of T . By Claim 3.2, the edge weights encountered on P and P 0 are monotonically
decreasing. There are two cases. If T 0 contains some, but not all of P 0 , then e 0 must lie along P 0 .
Contradiction. If T 0 contains all of P 0 , but only some of P , then some edge e 00 2 P is adjacent to
The procedure Bor-uvka-A(H; l; F ) given below returns a contracted version of H with the
number of live vertices reduced by a factor of l. Edges designated as parent pointers, which are
guaranteed to be in the MSF of H, are returned in F . Initially
Repeat log l times: (log l modified Bor-uvka steps)
For each live vertex v
Choose min. weight edge (v; w)
(1) If weight(v, w) > threshold(v), v becomes dead and stops; else set p(v) := w and add (v, w) to F'
Each tree T induced by edges of F 0 is one of two types:
If root of T is dead, then
(2) Every vertex in T becomes dead (Claim 3.4)
If T contains only live vertices
(3) If depth(v) - k, v becomes dead (Claim 3.3)
Contract the subtree of T made up of live vertices
The resulting vertex is live, has no parent pointer, and
keeps the smallest threshold of its constituent vertices
Lemma 3.2 If Bor-uvka-A designates a vertex as dead, its k-Min tree has already been found.
Proof: Vertices make the transition from live to dead only at the lines indicated by a number. By
our assumption that we only discard edges that cannot be in the k-Min tree of any vertex, if the
lightest edge adjacent to any vertex has been discarded, we know its k-Min tree has already been
found. This covers line (1). The correctness of line (2) follows from Claim 3.4. Since (v; p(v)) is
the lightest incident edge on v, k-Min(v) '
be called dead. Since the root of a tree is dead, vertices at depth one are dead, implying vertices at
depth two are dead, and so on. The validity of line (3) follows directly from Claim 3.3. If a vertex
finds itself at depth - k, its k-Min tree lies along the path from the vertex to its root. 2
Lemma 3.3 After a call to Bor-uvka-A(H; k tree of each vertex is a subset of
F .
Proof: By Lemma 3.2, dead vertices already satisfy the lemma. After a single modified Bor-uvka
step, the set of parent pointers associated with live vertices induce a number of trees. Let T (v)
be the tree containing v. We assume inductively that after dlog ie modified Bor-uvka steps, the
tree of each vertex in the original graph has been found (this is clearly true for
For any live vertex v let (x; y) be the minimum weight edge s.t. x 2 T (v); y 62 T (v). By the
inductive hypothesis, the (i \Gamma 1)-Min trees of v and y are subsets of T (v) and T (y) respectively. By
is the first external edge of T (v) chosen by the Dijkstra-Jarnik-Prim algorithm,
starting at v. As every edge in (i \Gamma 1)-Min(y) is lighter than (x; y), is a subset
of chosen in the (dlog ie th modified Bor-uvka step,
is a subset of T (v) after dlog ie modified Bor-uvka steps. Thus after
steps, the k-Min tree of each vertex has been found. 2
Lemma 3.4 After b modified Bor-uvka steps, the length of any edge list is bounded by k^{k^b}.
Proof: This is true for b = 0. Assuming the lemma holds for b − 1 modified Bor-uvka steps, the length of any edge list after that many steps is ≤ k^{k^{b−1}}. Since we only contract trees of height < k, the length of any edge list after b steps is ≤ (k^{k^{b−1}})^k = k^{k^b}. 2
It is shown in the next section that our algorithm only deals with graphs that are the result of O(log^{(3)} n) modified Bor-uvka steps. Hence the maximum length edge list is k^{k^{O(log^{(3)} n)}}.
The costliest step in Bor-uvka-A is calculating the depth of each vertex. After the minimum
weight edge selection process, the root of each induced tree will broadcast its depth to all depth
1 vertices, which in turn broadcast to depth 2 vertices, etc. Once a vertex knows it is at depth
may stop, letting all its descendents infer that they are at depth - k. Interleaved with
each round of broadcasting is a processor allocation step. We account for this cost separately in
section 7.
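A sequential sketch of this capped depth computation (ours): parent maps each vertex to its parent, roots map to themselves, one broadcasting round is done per level, and the vertices that never receive a depth are exactly those at depth ≥ k.

from collections import defaultdict

def depths_capped(parent, k):
    children, roots = defaultdict(list), []
    for v, p in parent.items():
        if p == v:
            roots.append(v)
        else:
            children[p].append(v)
    depth, frontier, d = {}, roots, 0
    while frontier and d < k:              # one round of broadcasting per level
        for v in frontier:
            depth[v] = d
        frontier = [c for v in frontier for c in children[v]]
        d += 1
    return depth                           # vertices missing from `depth` have depth >= k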
Lemma 3.5 Let G 1 have m 1 edges. Then a call to Bor-uvka-A(G 1 ; l; F ) can be executed in time
O(k O(log processors.
Proof: Let G 1 be the result of b modified Bor-uvka steps. By Lemma 3.4, the maximum degree of
any vertex after the i th modified Bor-uvka step in the current call to Bor-uvka-A is k k b+i
. Let us now
look at the required time of the i th modified Bor-uvka step. Selecting the minimum cost incident edge
takes time log k k b+i
, while the time to determine the depth of each vertex is k \Delta log k k b+i
. Summing
over the log l modified Bor-uvka steps, the total time is bounded by P log l
As
noted above, the algorithm performs O(log modified Bor-uvka steps on any graph, hence the time
is k O(log
The work performed in each modified Bor-uvka step is linear in the number of edges. Summing
over log l such steps and dividing by the number of processors, we arrive at the second term in the
stated running time. 2
3.3 The Filtering Step
The Filter Forest
Concurrent with each modified Bor-uvka step, we will maintain a Filter forest, a structure
that records which vertices merged together at what time, and the edge weights involved. (This
structure appeared first in [King97]). If v is a vertex of the original graph, or a new vertex
resulting from contracting a set of edges, there is a corresponding vertex OE(v) in the Filter for-
est. During a Bor-uvka step, if a vertex v becomes dead, a new vertex w is added to the Filter
forest, as well as a directed edge (OE(v); w) having the same weight as (v; p(v)). If live vertices
are contracted into a live vertex v, a vertex OE(v) is added to the Filter forest in addition
to directed edges having the weights of edges
(v
It is shown in [King97] that the heaviest weight in the path from u to v in the MSF is the same
as the heaviest weight in the path from OE(u) to OE(v) in the Filter forest (if there is such a path).
Hence the measures weight v (w) can be easily computed in the following way. Let P f (x; y) be the
path from x to y in the Filter forest. If OE(v) and OE(w) are not in the same Filter tree, then
weight
weight w
If v and w are in the same Filter tree, let
weight
3.6 The maximum weight on the path from OE(v) to root(OE(v)) is the same as the maximum
weight edge in r-Min(v), for some r.
Proof: If root(OE(v)) is at height h, then it is the result of h Bor-uvka steps. Assume that the
claim holds for the first i ! h Bor-uvka steps. After a number of contractions, vertex v of the
original graph is now represented in the current graph by v c . Let T vc be the tree induced by the
th Bor-uvka step which contains v c , and let e be the minimum weight incident edge on T vc . By
the inductive hypothesis, maxweightfP f (OE(v); OE(T vc As
was shown in the proof of Claim 3.5, all edges on the path from v c to edge e have weight at most
weight(e)g. Each of the edges (v c ; p(v c )) and e has a corresponding edge in
the Filter forest, namely (OE(v c ); p(OE(v c ))) and (OE(T vc ); p(OE(T vc ))). Since both these edges are on the
path from OE(v) to p(OE(T vc )), maxweightfP f (OE(v); p(OE(T vc
. Thus the claim holds after
The Filter Step
In a call to Filter(H; F ) in Find-k-Min, we examine each edge
e from H if weight(e) ? maxfweight v (w); weight w (v)g In order to carry out this test we can
use the O(log n) time, O(m) work MSF verification algorithm of [KPRS97], where we modify the
algorithm for the case when x and y are not in the same tree to test the pairs (OE(x); root(OE(x))
and (OE(y); root(OE(y)), and we delete e if both of these pairs are identified to be deleted. This
computation will take time O(log r) where r is the size of the largest tree formed.
The procedure Filter discards edges that cannot be in the k-Min tree of any vertex. When it
discards an edge (a; b), it updates the threshold variables of both a and b, so that threshold(a) is
the weight of the lightest discarded edge adjacent to a. If a's minimum weight edge is ever heavier
than threshold(a), k-Min(a) has already been found, and a becomes dead.
be a graph formed by sampling each edge in H with probability p, and F be a
k-Min forest of H 0 . The call to Filter(H; F ) returns a graph containing a k-Min forest of H, whose
expected number of edges is n=p.
Proof: For each vertex v, Claim 3.6 states that maxweightfP f (OE(v);
Min(v) for some value k v . By building a vector K of such values, one for each vertex, we are able
to check for K-Min-lightness using the Filter forest. It follows from Lemma 3.1 that the expected
number of K-Min-light edges in H is less than n=p. Now we need only show that a k-Min-light
edge of H is not removed in the Filter step. Suppose that edge (u; v) is in the k-Min tree of u in
H, but is removed by Filter. If v is in the k u -Min tree of u (w.r.t. H 0 ), then edge (u; v) was the
heaviest edge in a cycle and could not have been in the MSF, much less any k-Min tree. If v was
not in the k u -Min tree of u (w.r.t. H 0 ), then weight(u; v) ? maxweightfk u -Min(u)g, meaning edge
(u; v) could not have been picked in the first k steps of the Dijkstra-Jarnik-Prim algorithm. 2
3.4 Finding a k-Min Forest
We are now ready to present the main procedure of Phase 1, Find-k-Min. (Recall that the initial
call - given in Section 2 - is Find-k-Min(G t ; log n), where G t is the graph obtained from G by
removing all but the k lightest edges on each adjacency list.)
Find-k-Min(H; i)
sample edges of H c with prob. 1=(log (i\Gamma1) n) 2
H is a graph with some vertices possibly marked as dead; i is a parameter that indicates the
level of recursion (which determines the number of Bor-uvka steps to be performed and the sampling
probability).
Lemma 3.6 The call Find-k-Min(G t ; log n) returns a set of edges that includes the k-Min tree of
each vertex in G t .
Proof: The proof is by induction on i.
Base: returns F , which by Lemma 3.3 contains the k-min tree of
each vertex.
Induction Step: Assume inductively that Find-k-Min(H; i\Gamma1) returns the k-min tree of H. Consider
the call Find-k-Min(H; i). By the induction assumption the call to Find-k-Min(H s returns
the k-min tree of each vertex in H s . By Claim 3.7 the call to Filter(H c ; F s ) returns in H f a set of
edges that contains the k-Min trees of all vertices in H c . Finally, by the inductive assumption, the
set of edges returned by the call to Find-k-min(H f contains the k-Min trees of all vertices in
contains the (log (i\Gamma1) n)-Min tree of each vertex in H, and Find-k-Min(H; i) returns
returns the edges in the k-Min tree of each vertex in H. 2
3.8 The following invariants are maintained at each call to Find-k-min. The number of
live vertices in H - n=(log (i) n) 4 , and the expected number of edges in H - m=(log (i) n) 2 , where m
and n are the number of edges and vertices in the original graph.
Proof: These clearly hold for the initial call, when log n. By Lemma 3.3, the contracted
graph H c has no more than n=(log (i\Gamma1) n) 4 live vertices. Since H s is derived by sampling edges with
probability 1=(log (i\Gamma1) n) 2 , the expected number of edges in H s is - m=(log (i\Gamma1) n) 2 , maintaining
the invariants for the first recursive call.
By Lemma 3.1, the expected number of edges in H f - n(log (i\Gamma1) n) 2
(log (i\Gamma1) n) 4
has the same number of vertices as H_c, both invariants are maintained for the second recursive call. 2
3.5 Performance of Find-k-Min
Lemma 3.7 Find-k-min(G t ; log n) runs in expected time O(log n) and work O(m n).
Proof: Since recursive calls to Find-k-min proceed in a sequential fashion, the total running time
is the sum of the local computation performed in each invocation. Aside from randomly sampling
the edges, which takes constant time and work linear in the number of edges, the local computation
consists of calls to Filter and Bor-uvka-A.
In a given invocation of Find-k-min, the number of Bor-uvka steps performed on graph H is the
sum of all Bor-uvka steps performed in all ancestral invocations of Find-k-min, i.e. P log n
which is O(log^{(3)} n). From our bound on the maximum length of edge lists (Lemma 3.4), we can
infer that the size of any tree in the Filter forest is k k O(log (3) n)
, thus the time needed for each modified
Bor-uvka step and each Filter step is k O(log (3) n) . Summing over all such steps, the total time
required is o(log n).
The work required by the Filter procedure and each Bor-uvka step is linear in the number of
edges. As the number of edges in any given invocation is O(m=(log (i) n) 2 ), and there are O(log (i) n)
Bor-uvka steps performed in this invocation, the work required in each invocation is O(m= log (i) n)
(recall that the i parameter indicates the depth of recursion). Since there are 2^{log* n − i} invocations with depth parameter i, the total work is given by Σ_{i=1}^{log* n} 2^{log* n − i} · O(m / log^{(i)} n), which is O(m). 2
4 Phase 2
Recall the Phase 2 portion of our overall algorithm High-Level:
(the number of vertices in G_s is ≤ n/k)
G_s := Sample edges of G' with prob. 1/√k = 1/log^{(2)} n
F_s := Find-MSF(G_s)
G_f := Filter(G', F_s)
F := Find-MSF(G_f)
The procedure Filter(G, F) ([KPRS97]) returns the F-light edges of G. The procedure Find-MSF(G_1), described below, finds the MSF of G_1 in time O((m_1/m) log n log^{(2)} n), where m_1 is the number of edges in G_1.
The graphs G s and G f each have expected m=
log (2) n edges since G s is derived by
sampling each edge with probability 1=
k, and by the sampling lemma of [KKT95], the expected
number of edges in G f is (m=k)=(1=
k. Because we call Find-MSF on graphs having
expected size O(m= log (2) n), each call takes O(log n) time.
4.1 The Find-MSF Procedure
The procedure Find-MSF(H) is similar to previous randomized parallel algorithms, except it uses
no recursion. Instead, a separate base case algorithm is used in place of recursive calls. We also
use slightly different Bor-uvka steps, in order to reduce the work. These modifications are inspired
by [PR97] and [PR98] respectively.
As its Base-case, we use the simplest version of the algorithm of Chong et al. [CHL99], which takes time O(log n) using (m + n) processors, i.e. O((m + n) log n) work. By guaranteeing that it is only called on graphs of expected size O(m/ log^2 n), the running time remains O(log n) with (m + n)/ log n processors.
Find-MSF(H):
F_1 := Bor-uvka-B(H, log^4 n, ·); let H_c be the contracted graph
H_s := Sample edges of H_c with prob. 1/log^2 n
F_s := BaseCase(H_s)
H_f := Filter(H_c, F_s)
F := BaseCase(H_f)
Return F_1 + F
After the call to Bor-uvka-B, the graph H c has ! m= log 4 n vertices. Since H s is derived by
sampling the edges of H c with probability 1= log 2 n, the expected number of edges to the first
BaseCase call is O(m= log 2 n). By the sampling lemma of [KKT95], the expected number of edges
to the second BaseCase call is ! (m= log 4 n)=(1= log 2 n), thus the total time spent in these subcalls
is O(log n). Assuming the size of H conforms to its expectation of O(m= log (2) n), the calls to Filter
and Bor-uvka-B also take O(log n) time, as described below.
The Borůvka-B(H, l, F) procedure returns a contracted version of H with O(m/l) vertices. It
uses a simple growth control schedule, designating vertices as inactive if their degree exceeds l. We
can determine if a vertex is inactive by performing list ranking on its edge list for log l time steps.
If the computation has not stopped after this much time, then its edge list has length > l.
Borůvka-B(G, l, F)
    Repeat log l times
        For each vertex, let it be inactive if its edge list
            has more than l edges, and active otherwise.
        For each active vertex v
            choose min. weight incident edge e
        Using the edge-plugging technique, build a
            single edge list for each induced tree (O(1) time)
    Contract all trees of inactive vertices
The last step takes O(log n) time; all other steps take O(log l) time, as they deal with edge lists
of length O(l). Consequently, the total running time is O(log^2 l + log n). For each iteration of the
main loop, the work is linear in the number of edges. Assuming the graph conforms to its expected
size of O(m/log^{(2)} n), the total work is linear. The edge-plugging technique as well as the idea of
a growth control schedule were introduced by Johnson & Metaxas [JM92].
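A minimal sequential sketch of one growth-controlled Borůvka step may help fix ideas; it omits the parallel list-ranking and edge-plugging machinery, and the helper names are ours, not the paper's. Vertices whose degree exceeds the threshold l are marked inactive, every active vertex hooks via its minimum-weight incident edge, and the resulting trees are contracted with a union-find structure.

    def boruvka_step_growth_controlled(n, edges, l):
        """One growth-controlled Boruvka step on vertices 0..n-1: only vertices of degree <= l hook.
        Returns (chosen_edges, component_label), where component_label maps each vertex to the
        representative of its contracted tree."""
        degree = [0] * n
        for u, v, w in edges:
            degree[u] += 1
            degree[v] += 1
        active = [degree[x] <= l for x in range(n)]

        # Each active vertex chooses its minimum-weight incident edge.
        best = [None] * n
        for e in edges:
            u, v, w = e
            for x in (u, v):
                if active[x] and (best[x] is None or w < best[x][2]):
                    best[x] = e

        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        chosen = set(e for e in best if e is not None)
        for u, v, w in chosen:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv                 # contract the hooked tree
        return list(chosen), [find(x) for x in range(n)]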
5 Proof of Main Theorem
Proof: (Of Theorem 2.1) The set of edges M returned by Find-k-Min is a subset of the MSF of G.
By contracting the edges of M to produce G 0 , the MSF of G is given by the edges of M together
with the MSF of G 0 . The call to Filter produces graph G f by removing from G 0 edges known not
to be in the MSF. Thus the MSF of G f is the same as the MSF of G 0 . Assuming the correctness
of Find-MSF, the set of edges F constitutes the MSF of G_f, thus M ∪ F is the MSF of G.
Earlier we have shown that each step of High-Level requires O(log n) time and work linear in
the number of edges. In the next two sections we show that, w.h.p., the number of edges encountered
in all graphs during the algorithm is linear in the size of the original graph.
6 High Probability Bounds
Consider a single invocation of Find-k-min(H, i), where H has m_0 edges and n_0 vertices. We want
to place likely bounds on the number of edges in each recursive call to Find-k-min, in terms of m_0
and i.
For the first recursive call, the edges of H are sampled independently with probability 1/(log^{(i-1)} n)^2.
Call the sampled graph H_1. By applying a Chernoff bound, the probability that the size of H_1 is
less than twice its expectation is at least 1 - exp(-Ω(m_0/(log^{(i-1)} n)^2)).
Before analyzing the second recursive call, we recall the sampling lemma of [KKT95], which states
that the number of F-light edges conforms to the negative binomial distribution with parameters
n_0 and p, where p is the sampling probability and F is the MSF of H_1. As we saw in the proof of
Lemma 3.1, every k-Min-light edge must also be F-light. Using this observation, we will analyze
the size of the second recursive call in terms of F-light edges, and conclude that any bounds we
attain apply equally to k-Min-light edges.
We now bound the likelihood that more than twice the expected number of edges are F-light.
This is the probability that in a sequence of more than 2n_0/p flips of a coin, with probability p of
heads, the coin comes up heads fewer than n_0 times (since each edge selected by a coin toss of heads
goes into the MSF of the sampled graph). By applying a Chernoff bound, this is exp(-Ω(n_0)).
In this particular instance of Find-k-min, n_0 ≤ m/(log^{(i-1)} n)^4 and p = 1/(log^{(i-1)} n)^2, and so the
probability that fewer than 2m/(log^{(i-1)} n)^2 edges are F-light is at least 1 - exp(-Ω(m/(log^{(i-1)} n)^4)).
Given a single invocation of Find-k-min(H, i), we can bound the probability that H has more
than 2^{log* n - i} m/(log^{(i)} n)^2 edges by exp(-Ω(m/(log^{(i)} n)^4)). This follows from applying the argument
used above to each invocation of Find-k-min from the initial call down to the current call
at depth parameter i. Summing over all recursive calls to Find-k-min, the total number of edges
(and thus the total work) is w.h.p. bounded by the sum, over all depth parameters i, of the number of
invocations at depth i times the above per-invocation bound, which is O(m).
The probability that Phase 2 uses O(m) work is also exponentially close to one. We omit the analysis
as it is similar to the analysis for Phase 1.
The probability that our bounds on the time and total work performed by the algorithm fail to
hold is exponentially small in the input size. However, this assumes perfect processor allocation.
In the next section we show that the probability that work fails to be distributed evenly among
the processors is less than m^{-ω(1)}. Thus the overall probability of failure is very small, and the
algorithm runs in logarithmic time and linear work w.h.p.
7 Processor Allocation
As stated in Section 2, the processor allocation needed for our algorithm can be performed by
a fairly simple algorithm given in [HZ94] that takes logarithmic time and linear work but uses
super-linear space, or by a more involved algorithm claimed in [HZ96] that runs in logarithmic
time and linear work and space. We show here that a simple algorithm similar in spirit to the one
in [HZ94] runs in logarithmic time and linear work and space on the QRQW PRAM [GMR94]. The
QRQW PRAM is intermediate in power between the EREW and CRCW PRAM in that it allows
concurrent memory accesses, but the time taken by such accesses is equal to the largest number of
processors accessing any single memory location.
We assume that the total size of our input is n, and that we have q = n/log n processors.
We group the q processors into q/r groups of size r = log n, and we make an initial assignment of
O(r log n) elements to each group. This initial assignment is made by having each element choose
a group randomly. The expected number of elements in each group is r log n and by a Chernoff
bound, w.h.p. there are O(r log n) elements in each group. Vertices assigned to each group can be
collected together in an array for that group in O(log n) time and O(n) work and space by using
the QRQW PRAM algorithm for multiple compaction given in [GMR96], which runs in logarithmic
time and linear work with high probability. (We do not need the full power of the algorithm in
[GMR96] since we know ahead of time that each group has at most c log^2 n elements w.h.p., for a suitable
constant c. Hence it suffices to use the heavy multiple compaction algorithm in [GMR96] to achieve
the bounds of logarithmic time and linear work and space.)
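A toy sketch of the initial assignment: each of the n elements picks one of the q/r groups uniformly at random, and w.h.p. no group exceeds its expectation of about r log n elements by more than a constant factor. The slack factor 3 and the helper names below are purely illustrative, not part of the scheme in [HZ94].

    import math, random

    def assign_to_groups(num_elements, num_groups):
        """Randomly assign elements to groups; return the resulting group sizes."""
        sizes = [0] * num_groups
        for _ in range(num_elements):
            sizes[random.randrange(num_groups)] += 1
        return sizes

    def check_balance(n):
        q = max(1, n // max(1, int(math.log2(n))))     # ~ n / log n processors
        r = max(1, int(math.log2(n)))                  # group size ~ log n
        groups = max(1, q // r)                        # ~ n / log^2 n groups
        expected = n / groups                          # ~ r * log n elements per group
        sizes = assign_to_groups(n, groups)
        return max(sizes) <= 3 * expected              # holds w.h.p. by a Chernoff bound

    print(check_balance(1 << 16))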
A simple analysis using Chernoff bounds shows that on each new graph encountered during the
computation, each group receives either < log n elements, or a number within a constant factor of its expected
number of elements, w.h.p. Hence in O(log log n) EREW PRAM steps each processor within a group
can be assigned a 1/log n fraction of the elements in its group. This processor re-allocation scheme takes
O(log log n) time per stage and linear space overall, and with high probability achieves perfect
balance to within a constant factor. The total number of processor re-allocation steps needed by
our algorithm is O(2^{log* n} · k · log log log n), hence the time needed to perform all of
the processor allocation steps is O(log n) w.h.p.
We note that the probability that processors are allocated optimally (to within a constant
factor) can be increased to 1 - n^{-ω(1)} by increasing the group size r. Since we perform o((log^{(2)} n)^3)
processor allocation steps, r can be set as high as n^{1/(log^{(2)} n)^3} without increasing the overall O(log n)
running time. Thus the high probability bound on the number of items in each group being
O(r log n) becomes 1 - n^{-ω(1)}. It is shown in [GMR96] that the heavy multiple compaction algorithm
runs in O(log n + log m/log log m) time w.h.p. in m, for any m > 0. By choosing
m to be a suitably large super-polynomial function of n, we obtain O(log n) running time for this initial step with probability
1 - n^{-ω(1)}, which is also the overall probability bound for processor allocation.
8 Adaptations to other Practical Parallel Models
Our results imply good MSF algorithms for the QSM [GMR97] and BSP [Val90] models, which
are more realistic models of parallel computation than the PRAM models. Theorem 8.1 given
below follows directly from results mapping EREW and QRQW computations on to QSM given in
[GMR97]. Theorem 8.2 follows from the QSM to BSP emulation given in [GMR97] in conjunction
with the observation that the slowdown in that emulation due to hashing does not occur for our
algorithm since the assignment of vertices and edges to processors made by our processor allocation
scheme achieves the same effect.
Theorem 8.1 An MSF of an edge-weighted graph on n nodes and m edges can be found in
O(g log n) time and O(g(m + n)) work w.h.p., using O(m + n) space on the QSM with a simple
processor allocation scheme, where g is the gap parameter of the QSM.
Theorem 8.2 An MSF of an edge-weighted graph on n nodes and m edges can be found on the
BSP in O((L + g) log n) time w.h.p., using (m + n)/log n processors and O(m + n) space with a
simple processor allocation scheme, where g and L are the gap and periodicity parameters of the
BSP.
9 Conclusion
We have presented a randomized algorithm for MSF on the EREW PRAM which is provably optimal
both in time and work. Our algorithm works within the stated bounds with high probability in the
input size, and has good performance in other popular parallel models.
An important open question that remains is to obtain a deterministic parallel MSF algorithm
that is provably optimal in time and work. Recently an optimal deterministic sequential algorithm
for MSF was presented in [PR00]; an intriguing aspect of this algorithm is that the function
describing its running time is not known at present, although it is proven in [PR00] that the
algorithm runs within a small constant factor of the best possible. Parallelizing this optimal
sequential algorithm is a topic worth investigating.
--R
New connectivity and MSF algorithms for shuffle-exchange networks and PRAM
O jistém problému minimálním. Práce Moravské Přírodovědecké Společnosti.
On the parallel time complexity of undirected connectivity and minimum spanning trees.
A linear-work parallel algorithm for finding minimum spanning trees
Finding minimum spanning trees in logarithmic time and linear work using random sampling.
A note on two problems in connexion with graphs.
The QRQW PRAM: Accounting for contention in parallel algorithms.
Efficient low-contention parallel algorithms
Can a shared-memory model serve as a bridging model for parallel computation? Theory of Computing Systems
An optimal randomized logarithmic time connectivity algorithm for the EREW PRAM.
Optimal randomized EREW PRAM algorithms for finding spanning forests and for other basic graph connectivity problems.
Connected components in O(log^{3/2} n) parallel time for the CREW PRAM.
A simpler minimum spanning tree verification algorithm.
A randomized linear-time algorithm to find minimum spanning trees
An optimal EREW PRAM algorithm for minimum spanning tree verification.
Parallel algorithms for shared-memory machines
A randomized linear work EREW PRAM algorithm to find a minimum spanning forest.
Private communication
An optimal minimum spanning tree algorithm.
A bridging model for parallel computation.
Shortest connection networks and some generalizations.
--TR
--CTR
Aaron Windsor, An NC algorithm for finding a maximal acyclic set in a graph, Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures, June 27-30, 2004, Barcelona, Spain
Vladimir Trifonov, An O(log n log log n) space algorithm for undirected st-connectivity, Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, May 22-24, 2005, Baltimore, MD, USA
David A. Bader , Guojing Cong, Fast shared-memory algorithms for computing the minimum spanning forest of sparse graphs, Journal of Parallel and Distributed Computing, v.66 n.11, p.1366-1378, November 2006
Guojing Cong , David A. Bader, Designing irregular parallel algorithms with mutual exclusion and lock-free protocols, Journal of Parallel and Distributed Computing, v.66 n.6, p.854-866, June 2006 | EREW PRAM;optimal algorithm;parallel algorithm;minimum spanning tree |
586956 | Hardness of Approximate Hypergraph Coloring. | We introduce the notion of covering complexity of a verifier for probabilistically checkable proofs (PCPs). Such a verifier is given an input, a claimed theorem, and an oracle, representing a purported proof of the theorem. The verifier is also given a random string and decides whether to accept the proof or not, based on the given random string. We define the covering complexity of such a verifier, on a given input, to be the minimum number of proofs needed to "satisfy" the verifier on every random string; i.e., on every random string, at least one of the given proofs must be accepted by the verifier. The covering complexity of PCP verifiers offers a promising route to getting stronger inapproximability results for some minimization problems and, in particular, (hyper)graph coloring problems. We present a PCP verifier for NP statements that queries only four bits and yet has a covering complexity of one for true statements and a superconstant covering complexity for statements not in the language. Moreover, the acceptance predicate of this verifier is a simple not-all-equal check on the four bits it reads. This enables us to prove that, for any constant c, it is NP-hard to color a 2-colorable 4-uniform hypergraph using just c colors and also yields a superconstant inapproximability result under a stronger hardness assumption. | Introduction
The study of probabilistically checkable proof (PCP) systems has led to major breakthroughs in
theoretical computer science in the past decade. In particular this study has led to a surprisingly
clear understanding of the complexity of nding approximate solutions to optimization problems.
A recurring theme in this study is the association of new complexity measures to veriers of PCP
systems, and construction of e-cient veriers under the new measure. The new measures are
then related to some special subclass of optimization problems to gain new insight about the
approximability of problems in this subclass of optimization problems. This paper presents yet
another such complexity measure, the covering complexity of a verier, and relates it to a subclass
of optimization problems, namely hypergraph coloring problems. Below we elaborate on some of
the notions above, such as PCP, approximability, hypergraph coloring, and introduce our new
complexity measure.
Probabilistically checkable proofs. The centerpiece of a PCP system is the probabilistic veri-
er. This verier is a randomized polynomial time algorithm whose input is a \theorem", and who
is also given oracle access to a \proof". Using the traditional equivalence associated with randomized
algorithms, it is convenient to think of the verier as having two inputs, the \theorem" and a
\random string". Based on these two inputs the verier settles on a strategy to verify the proof |
namely, it decides on a sequence of queries to ask the oracle, and prepares a predicate P . It then
queries the oracle and if it receives as response bits a applies the predicate P (a
and accepts i the predicate is satised. 1 The quality of the PCP system is roughly related to its
ability to distinguish valid proofs (true \theorems" with correct \proofs") from invalid theorems
(incorrect \theorems" from any purported \proof") | hopefully the verier accepts valid proofs
with much higher probability than it does invalid theorems.
To study the power of PCP systems in a complexity-theoretic setting, we quantify some of the
signicant resources of the verier above, and then study the resources needed to verify proofs of
membership for some hard language. Fix such a language L and consider a verier V whose goal is
to verify proofs of membership in L. The above paragraph already hints at four measures we may
associate with such a verier and we dene them in two steps. For functions
say that a V is (r; q)-restricted if, on input x (implying the theorem x 2 L) of length n,
a random string of length r(n) and makes q(n) queries to the proof oracle. We say that veries
L with completeness c and soundness s, if (1) For every x 2 L, there exists an oracle such that
V , on input x and oracle access to , outputs accept with probability at least c, and (2) For every
x 62 L and every oracle , V outputs accept with probability at most s. The class of all languages
L that have an (r; q)-restricted verier verifying it with completeness c and soundness s is denoted
Covering Complexity. In the variant of PCPs that we consider here, we stick with (r, q)-restricted
verifiers, but alter the notion of completeness and soundness. Instead of focusing on the
one proof that maximizes the probability with which the verifier accepts a given input, here we
allow multiple proofs to be provided to the verifier. We say that a set of proofs {π_1, ..., π_k} covers
a verifier V on input x if for every random string, there exists one proof π_i such that V accepts
π_i on this random string. We are interested in the smallest set of proofs that satisfies this property,
and the cardinality of this set is said to be the covering complexity of the verifier on this input.
(This description of a verifier is somewhat restrictive. More general definitions allow the verifier to be adaptive,
deciding on later queries based on responses to previous ones. For this paper, the restricted version suffices.)
Analogous to the class PCP, we may define the class cPCP_{c,s}[r, q] (for covering PCP) to be the class
of all languages for which there exist (r, q)-restricted verifiers that satisfy the following conditions:
(Completeness) If x ∈ L, the covering complexity of V on x is at most 1/c. (Soundness) If x ∉ L,
then the covering complexity of V on x is at least 1/s.
Notions somewhat related to covering complexity have been considered in the literature implicitly
and explicitly in the past. Typically these notions have been motivated by the approximability
of minimization problems, such as graph coloring, set cover, and the closest vector problem. Our
specic notion is motivated by graph and hypergraph coloring problems. We describe our motivation
next. We defer the comparison with related notions to later in this section.
Hypergraph coloring, approximability, and inapproximability. An l-uniform hypergraph
H is given by a set of vertices V and a set of edges E, where an edge e ∈ E is itself a subset of
V of cardinality l. A k-coloring of H is a map from V to the set {1, ..., k} such that no edge
is monochromatic. The hypergraph coloring problem is that of finding, given H, the smallest k
for which a k-coloring of H exists. When l = 2, the hypergraph is just a graph, and the
hypergraph coloring problem is the usual graph coloring problem.
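To make the definition concrete, here is a short sketch that checks whether an assignment of colors is a proper coloring of a hypergraph (no edge monochromatic) and finds the smallest such k for a tiny instance by brute force. It is exponential-time and purely illustrative; the instance below is ours, not from the paper.

    from itertools import product

    def is_proper(edges, coloring):
        """A coloring is proper iff no hyperedge is monochromatic."""
        return all(len({coloring[v] for v in e}) > 1 for e in edges)

    def chromatic_number(num_vertices, edges):
        """Smallest k admitting a proper k-coloring (brute force; tiny instances only)."""
        for k in range(1, num_vertices + 1):
            for coloring in product(range(k), repeat=num_vertices):
                if is_proper(edges, coloring):
                    return k
        return num_vertices

    # A 2-colorable 3-uniform hypergraph on 4 vertices.
    edges = [(0, 1, 2), (1, 2, 3), (0, 2, 3)]
    print(chromatic_number(4, edges))   # prints 2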
Graph and hypergraph coloring problems have been studied extensively in the literature from
both the combinatorial and the algorithmic perspective. The task of determining if an l-uniform hypergraph
is k-colorable is trivial if k = 1, and almost so if l = k = 2. Every other case turns out to
be NP-hard. The case l = 2, k ≥ 3 is a classical NP-hard problem, while the case l ≥ 3, k ≥ 2
was shown NP-hard by Lovász [23]. Thus, even the property of a hypergraph being 2-colorable is
non-trivial. This property, also called Property B, has been studied in the extremal combinatorics
literature for long. Much work has been done on determining sufficient conditions under which a
hypergraph family is 2-colorable and on solving the corresponding algorithmic questions [11, 6, 7,
25, 26, 29, 27].
The hardness of the coloring problem motivates the study of the approximability of the graph and
hypergraph coloring problems. In the context of these problems, an (l, k, k')-approximation algorithm
is one that produces (in polynomial time) a k'-coloring of every k-colorable l-uniform hypergraph
for some k' > k, with the "approximation" being better as k' gets closer to k. Even this problem
turns out to be non-trivial, with the best known algorithms for coloring even 3-colorable graphs
requiring n^{Ω(1)} colors [9, 19], where n is the number of vertices. Similarly, inspired in part by
the approximate graph coloring algorithms, several works [1, 10, 22] have provided approximation
algorithms for coloring 2-colorable hypergraphs. The best known result for 2-colorable 4-uniform
hypergraphs is a polynomial time coloring algorithm that uses Õ(n^{3/4}) colors [1, 10].
To justify the intractability of the approximation versions of the hypergraph coloring problem,
one looks for inapproximability results. Inapproximability results show that it is NP-hard to achieve
the goals of an (l; k; k 0 )-approximation algorithm by producing a polynomial time computable
reduction from, say, SAT to a \gap" problem related to hypergraph coloring. Here we assume
a conservative denition of such a reduction, namely, the many-one reduction. The many-one
version of such a reduction would reduce a formula ' to an l-uniform hypergraph H such that
H is k-colorable if ' is satisable, and H is not k 0 -colorable if ' is not satisable. Since the
existence of an (l; k; k 0 )-approximation algorithm now gives the power to decide if ' is satisable
or not, this shows that the approximation problem is NP-hard. In the sequel, when we say that an
)-approximation problem is NP-hard, we always implicitly mean that the \gap version" of
the problem is NP-hard.
This methodology combined with the PCP technique has been employed heavily to get hardness
results for graph coloring problems. This approach started with the results of [24], and culminates
with the essentially tight results of [12], who show hardness (under randomized reductions) of
approximating the chromatic number to within polynomially large factors.
However, for graphs whose chromatic number is a small constant, the known hardness results are much weaker. For example, for 3-colorable graphs the best
known hardness result only rules out coloring using 4 colors [20, 16]. This paper is motivated by the
quest for strong (super-constant) inapproximability for coloring graphs whose chromatic number is
a small constant. We do not get such results for graph coloring, but do get such inapproximability
results for hypergraph coloring and in particular for coloring 4-uniform hypergraphs.
Graph coloring and covering PCPs. In examining the reasons why the current techniques
have been unable to show strong hardness results for inapproximability of coloring 3-colorable
graphs, a natural question arises: Are PCPs really necessary to show such hardness results, or
would something weaker suffice? To date there are no reasons showing PCPs are necessary. And
while the first result showing the intractability of coloring 3-colorable graphs with 4 colors [20] did
use the PCP technique, [16] show that PCPs are not needed in this result. The starting point of our
work is the observation that covering PCPs are indeed necessary for showing strong hardness results
for graph coloring. Specifically, in Proposition 2.1, we show that if the (2, c, ω(1))-approximation
problem for coloring is NP-hard for some constant c, then NP ⊆ cPCP_{c',o(1)}[O(log n), 2] for some constant c' > 0. (Similar
results can also be derived from hardness results for coloring hypergraphs, though we don't do so
here.)
Previous approaches have realized this need implicitly, but relied on deriving the required results
via PCPs. In particular, they use the trivial containment PCP_{1,s}[r, q] ⊆ cPCP_{1,s}[r, q] and build
upon the latter result to derive hardness for coloring. (Notice that we do not have such a simple
containment when the completeness parameter is not equal to 1. This special case of c = 1 is
important in general and is referred to as perfect completeness.) For our purposes, however, this
trivial containment is too weak. In particular it is known that PCP_{c,s}[log, q] ⊆ P for every c, s, q
such that c > s·2^q (cf. [8, Lemma 10.6]). Thus it is not possible to show NP ⊆ cPCP_{1, 2^{-q}-ε}[log, q]
for any constant ε > 0 using the trivial containment mentioned above (and such a covering
PCP is essential for super-constant hardness results for coloring hypergraphs). Thus it becomes
evident that a direct construction of covering PCPs may be more fruitful, and we undertake such
constructions in this paper.
Related Notions. Typically, every approach that applies PCP to minimization problems has
resulted, at least implicitly, in some new complexity measures. Two of those that are close to,
and likely to be confused with, the notion of covering complexity are the notions of \multiple
assignments" [2], and the \covering parameter" of [12]. Here we clarify the distinctions.
In the former case, the multiple assignments of [2], the proof oracle is expected to respond
to each query with an element of a large alphabet (rather than just a bit). When quantifying
the \quality" of a proof, however, the oracle is allowed to respond with a subset of the alphabet,
rather than just a single element, and the goal of the prover is to pick response sets of small sizes
so that on every random string, the verier can pick one elements from each response set to the
dierent queries so that it leads to acceptance. Once again we have a notion of covering all random
strings with valid proofs, but this time the order of quantiers is dierent. The notion of multiple
assignments is interesting only when the alphabet of the oracles responses are large, while our
notion remains interesting even when the oracle responds with an element of a binary alphabet.
The second related notion is the covering parameter of Feige and Kilian [12]. Since the names
are confusingly similar (we apologize for not detecting this at an early stage), we refer to their notion
as the FK-covering parameter. In a rather simplied sense, their parameter also allows multiple
proofs to be presented to the verier. But their notion of coverage requires that on every random
string and every possible accepting pattern of query responses for the verier, there should exist a
proof which gives this accepting pattern (and is hence accepted). For any xed verier and input,
the FK-covering number is always larger than ours, since we don't need every accepting pattern
to be seen among the proofs. Though the notions appear close, the motivation, the application,
and the technical challenges posed by the FK-covering parameter and ours are completely dierent.
Both notions arise from an attempt to study graph coloring, but their focus is on general graphs
(with high chromatic number), while ours is on graphs of small chromatic number. In their case,
separation of the FK-covering parameter is su-cient, but not necessary, to give inapproximability
of coloring. For our parameter, separation is necessary, but not su-cient to get the same. Finally,
in their constructions the challenge is to take a traditional PCP and enhance it to have small FK-
covering completeness and they use the PCP directly to argue that the soundness is not large. In
our case, the completeness is immediate and the soundness needs further analysis.
Gadgets and covering complexity. Returning to our notion of covering complexity: while it
seems essential to study this notion to get good hardness results on coloring, the reader should also be
warned that this notion is somewhat less robust than the usual notions that one deals with in PCPs.
Specifically, prior notions were not very sensitive to the predicate applied by the verifier in deciding
its final output. They could quantify the power of the verifier by simple parameters such as the number
of bits read, or the number of accepting configurations. Here we are forced to pay attention to the
verifier's computations and restrict these to get interesting results. It is reasonable to ask why this
happens and we attempt to give some justification below.
In standard PCPs, it is often possible to use "gadgets" (whenever they are available) to convert
the acceptance predicate of the verifier from one form to another, with only a small loss in
performance. For example, suppose one has a PCP verifier V_1 that reads three bits of a proof and
accepts if they are not all equal (NAE). Such a verifier would directly prove the hardness of the
"Max NAE-3SAT" problem. But by application of a gadget the same verifier can be transformed
into one that proves the hardness of the "Max 3SAT" problem. The gadget notices that the
function NAE(a, b, c) for three Boolean variables a, b, c is simply (a ∨ b ∨ c) ∧ (¬a ∨ ¬b ∨ ¬c), which
is a conjunction of two 3SAT clauses. Thus a transformed verifier V_2, which picks three bits of
the proof as V_1 does, and then picks one of the two clauses implied by the check performed by V_1
and verifies just this one clause, is now a verifier whose acceptance predicate is a 3SAT condition.
Furthermore, if the acceptance probability of V_1 on some given proof is 1 - ε, then the acceptance
probability of V_2 on the same proof is exactly 1 - ε/2. Thus if V_1 proves inapproximability of
Max NAE-3SAT, then V_2 proves inapproximability of Max 3SAT.
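The identity underlying this gadget is easy to check exhaustively; the following small sketch verifies over all eight Boolean assignments that NAE(a, b, c) equals (a ∨ b ∨ c) ∧ (¬a ∨ ¬b ∨ ¬c).

    from itertools import product

    def nae(a, b, c):
        return not (a == b == c)

    def two_clauses(a, b, c):
        return (a or b or c) and ((not a) or (not b) or (not c))

    assert all(nae(a, b, c) == two_clauses(a, b, c)
               for a, b, c in product([False, True], repeat=3))
    print("NAE(a,b,c) agrees with the two-clause conjunction on all 8 assignments")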
Unfortunately, a similar transformation does not apply in the case of covering complexity.
Notice that two proofs, the oracle that always responds with 0 and the one that always responds with 1,
suffice to cover any verifier whose acceptance predicate is 3SAT. Yet there exist NAE 3-SAT
verifiers that cannot be covered by any constant number of proofs. (For example, the verifier
that picks 3 of the n bits of the proof uniformly and independently at random and applies the
NAE 3-SAT predicate to them needs Ω(log n) proofs to be covered.) Thus even though a gadget
transforming NAE 3SAT to 3SAT does exist, it is of no use in preserving the covering complexity of
verifiers. This non-robust behavior of cPCP verifiers forces us to be careful in designing our verifiers,
and our two results differ mainly in the predicate applied by the verifier.
Our results. Our first result is a containment of NP in the class cPCP_{1,ε}[O(log n), 4], for every
ε > 0. If the randomness is allowed to be slightly super-logarithmic, then the soundness can be
reduced to some explicit o(1) function. Technically, this result is of interest in that it overcomes
the qualitative limitation described above of passing through standard PCPs. Furthermore, the
proof of this result is also of interest in that it shows how to apply the (by now) standard Fourier-analysis
based techniques to the study of covering complexity as well. Thus it lays out the hope
for applying such analysis to other cPCPs as well.
Unfortunately, the resulting cPCP fails to improve inapproximability of graph coloring or even
hypergraph coloring. As noted earlier, covering PCPs are only necessary, but not sufficient, to get
hardness results for hypergraph coloring. In order to get hardness results for hypergraph coloring
from covering PCPs, one needs verifiers whose acceptance condition is a NAE SAT (not-all-equal)
predicate (though, in this case, it is also reasonable to allow the responses to the queries to be
elements of a non-binary alphabet, and a result over a q-ary alphabet will give a result for q-colorable
hypergraphs).
Keeping this objective in mind, we design a second verifier (whose query complexity is also
4 bits), but whose acceptance predicate simply checks whether the four queried bits are not all equal.
The verifier has perfect completeness and its covering soundness can be made an arbitrarily large
constant (Theorem 4.2). This result immediately yields a super-constant lower bound on coloring
2-colorable 4-uniform hypergraphs: we prove that c-coloring such hypergraphs is NP-hard for
any constant c (Theorem 4.4), and moreover there exists a constant c_0 > 0 such that, unless
NP ⊆ DTIME(n^{O(log log n)}), there is no polynomial time algorithm to color a 2-colorable 4-uniform
hypergraph using c_0 · (log log n)/(log log log n) colors (Theorem 4.6). A similar hardness result also holds for coloring
2-colorable k-uniform hypergraphs for any k ≥ 5, by reduction from the case of 4-uniform
hypergraphs (Theorem 4.7). Prior to our work, no non-trivial inapproximability results seem to be
known for coloring 2-colorable hypergraphs, and in fact it was not known if 3-coloring a 2-colorable
4-uniform hypergraph is NP-hard.
We note that we do not have analogous results for the hardness of coloring 2-colorable 3-uniform
hypergraphs. The difficulty in capturing this problem stems from the difficulty of analyzing the
underlying maximization problem. The natural maximization version of hypergraph 2-coloring is
the following: color the vertices with two colors so that a maximum number of hyperedges are non-monochromatic.
For l-uniform hypergraphs, this problem is known as Max l-Set Splitting. For
l = 4 (the case we study here), a tight hardness result of 7/8 + ε is known [17], and this fact works
its way into our analysis. For l = 3, a tight hardness result is not known for the maximization
version (see [14]), and our inability to show hardness results for 3-uniform hypergraphs seems to
stem from this fact.
Organization. In Section 2, we go over some of the definitions more formally and relate covering
complexity to the approximability of hypergraph coloring. In Section 3, we analyze a simple cPCP
verifier that makes 4 queries and has perfect completeness and o(1) soundness. In Section 4, we
analyze a more complicated cPCP verifier with similar parameters whose acceptance condition is
not-all-equal-sat. This yields the hardness result for coloring 2-colorable, 4-uniform hypergraphs.
This is the complete version of the conference paper [15].
2 Preliminaries
In this section we introduce covering PCPs formally, and establish a connection (in the wrong
direction) between covering PCPs and inapproximability of hypergraph coloring.
2.1 Probabilistically checkable proofs (PCPs)
We first give a formal definition of a PCP. Below, verifiers are probabilistic oracle Turing machines
whose output, on input x and random string r with oracle O, is denoted V^O(x, r). The output is
a bit, with 1 denoting acceptance and 0 denoting rejection.
Definition 1 Let c and s be real numbers such that 1 ≥ c > s ≥ 0. A probabilistic polynomial time
oracle Turing machine V is a PCP verifier with soundness s and completeness c for a language L if:
(i) For x ∈ L there exists an oracle π such that Prob_r[V^π(x, r) = 1] ≥ c.
(ii) For x ∉ L, for all oracles π, Prob_r[V^π(x, r) = 1] ≤ s.
Two parameters of interest in a PCP are the number of random bits used by the verifier and
the number of queries it makes to the proof oracle π. Most of the time the symbols of π are bits, and
whenever this is not the case, this is stated explicitly.
Definition 2 For functions r, q : Z^+ → Z^+, a verifier is (r, q)-restricted if, on any input of
length n, it uses at most r(n) random bits and makes at most q(n) queries to π.
We can now define classes of languages based on PCPs.
Definition 3 (PCP) A language L belongs to the class PCP_{c,s}[r, q] if there is an (r, q)-restricted
verifier V for L with completeness c and soundness s.
Next we have the definition of covering PCP.
Definition 4 (Covering PCP) A language L belongs to the class cPCP_{c,s}[r, q] if there is an
(r, q)-restricted verifier V such that on input x:
(i) if x ∈ L then there is a set of proofs {π_1, ..., π_k} with k ≤ 1/c such that for every random
string r there exists a proof π_i such that V^{π_i}(x, r) = 1.
(ii) if x ∉ L, then for every set of k proofs {π_1, ..., π_k} with k < 1/s, there is a random
string r for which V rejects every π_i, 1 ≤ i ≤ k.
One usually requires "perfect completeness" (c = 1) when seeking PCP characterizations. It
is clear from the above definitions that PCP_{1,s}[r, q] ⊆ cPCP_{1,s}[r, q], and thus obtaining a PCP
characterization for a language class is at least as hard as obtaining a covering PCP characterization
with similar parameters.
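The definition can be made concrete with a small sketch: a non-adaptive verifier is modelled as a list of tests, one per random string, each consisting of the queried positions and an acceptance predicate on the answer bits; a set of proofs covers the verifier iff every test accepts at least one proof. This representation and the toy verifier below are ours, chosen only to illustrate the definition.

    from itertools import combinations, product

    def covers(tests, proofs):
        """tests: list of (positions, predicate); proofs: list of bit strings.
        True iff for every random string (test) some proof is accepted."""
        return all(any(pred(*(p[i] for i in pos)) for p in proofs)
                   for pos, pred in tests)

    def covering_number(tests, proof_len, max_k=4):
        """Smallest number of proofs covering the verifier (brute force; tiny cases only)."""
        all_proofs = list(product([0, 1], repeat=proof_len))
        for k in range(1, max_k + 1):
            if any(covers(tests, subset) for subset in combinations(all_proofs, k)):
                return k
        return None

    # Toy verifier: on each "random string" it reads two bits and checks that they differ.
    tests = [((i, j), lambda x, y: x != y) for i in range(4) for j in range(i + 1, 4)]
    print(covering_number(tests, proof_len=4))   # prints 2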
2.2 Covering PCPs and Graph Coloring
We now verify our intuition that "good" covering PCPs (i.e., those which have a large gap in
covering complexity between the completeness and soundness cases) are necessary for strong lower
bounds on approximating the chromatic number. As usual, for a graph G, we denote by χ(G)
its chromatic number, i.e., the minimum number of colors required in a proper coloring of G.
Below, we use the phrase "it is NP-hard to distinguish f(n)-colorable graphs from g(n)-colorable
graphs" to mean that "the (2, f, g)-approximation problem is NP-hard". As mentioned in Section 1,
note that we are using a conservative definition of NP-hardness, and hence this statement implies
that there is a many-one reduction from SAT that maps satisfiable instances of SAT to f(n)-colorable
graphs and maps unsatisfiable instances to graphs that are not g(n)-colorable. Under
this assumption, we show how to get nice covering PCPs. Below and throughout this paper, the
function log denotes logarithms to base two.
Proposition 2.1 Suppose that for functions f, g : Z^+ → Z^+, given a graph G on n vertices, it is
NP-hard to distinguish between the cases χ(G) ≤ f(n) and χ(G) ≥ g(n). Then
NP ⊆ cPCP_{1/⌈log f(n)⌉, 1/log g(n)}[O(log n), 2].
Proof: Let the vertex set of G be {v_1, ..., v_n}. The covering PCP consists of proofs
that correspond to "cuts" of G, i.e., each π_i is n bits long, with the j-th
bit being 1 or 0 depending on which side of the cut π_i contains v_j. The verifier simply picks two
vertices v_{j_1} and v_{j_2} at random such that they are adjacent in G, and then checks whether the j_1-th
and j_2-th bits differ in any of the k proofs. The minimum number k of proofs required to satisfy the verifier
for all its random choices is clearly the cut cover number σ(G) of G, i.e., the minimum number of
cuts that cover all edges of G. It is easy to see that log χ(G) ≤ σ(G) ≤ ⌈log χ(G)⌉, and therefore the claimed
result follows.
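The upper bound σ(G) ≤ ⌈log χ(G)⌉ used above can be made explicit: writing each color in binary, the j-th cut puts a vertex on side 1 exactly when the j-th bit of its color is 1; the endpoints of every edge have different colors, so their colors differ in some bit, and that cut separates the edge. A short sketch under these conventions (the example graph is ours):

    from math import ceil, log2

    def cuts_from_coloring(coloring, num_colors):
        """Given a proper coloring (vertex -> color in 0..num_colors-1), return
        ceil(log2(num_colors)) cuts (0/1 vectors) that together cover every edge."""
        b = max(1, ceil(log2(num_colors)))
        return [[(coloring[v] >> j) & 1 for v in range(len(coloring))] for j in range(b)]

    def covers_all_edges(edges, cuts):
        return all(any(cut[u] != cut[v] for cut in cuts) for u, v in edges)

    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # a 5-cycle, 3-colorable
    coloring = [0, 1, 0, 1, 2]
    print(covers_all_edges(edges, cuts_from_coloring(coloring, 3)))   # True, using only 2 cuts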
One can get a similar result for any base q, by letting the proofs be q-ary strings and having the verifier
read two q-ary symbols from the proof. In light of this, we get the following.
Corollary 2.2 Suppose that there exists an ε > 0 such that it is NP-hard, given an input graph
G, to distinguish between the cases when G is 3-colorable and when χ(G) ≥ n^ε. Then NP ⊆
cPCP_{1, (ε log_3 n)^{-1}}[O(log n), 2], where the covering PCP is over a ternary alphabet, and the verifier's
action is to simply read two ternary symbols from the proof and check that they are not equal.
In light of the above corollary, very powerful covering PCP characterizations of NP are necessary
in order to get strong hardness results for coloring graphs with small chromatic number. A result
similar to Proposition 2.1, with an identical proof, also holds for hypergraph coloring, and thus
motivates us to look for good covering PCP characterizations of NP in order to prove hardness
results for coloring 2-colorable hypergraphs.
Proposition 2.3 Suppose that there exists a function f : Z^+ → Z^+ such that, given an input
r-uniform hypergraph on n vertices, it is NP-hard to distinguish between the cases when it is 2-colorable
and when it is not f(n)-colorable. Then, NP ⊆ cPCP_{1, 1/log f(n)}[O(log n), r]. In particular,
if c-coloring 2-colorable r-uniform hypergraphs is NP-hard for every constant c, then NP ⊆
cPCP_{1, 1/k}[O(log n), r] for every constant k ≥ 1.
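The verifier implicit in Proposition 2.3 is the natural one: a proof is a 2-coloring of the vertices written as a bit string, the random string selects a hyperedge, and the verifier reads the r bits of that edge and accepts iff they are not all equal. A self-contained sketch with an illustrative instance of ours:

    def hypergraph_verifier_tests(edges):
        """One test per hyperedge: read the colors of its vertices, accept iff not all equal."""
        return [(tuple(e), lambda *bits: len(set(bits)) > 1) for e in edges]

    def is_covered(tests, proofs):
        return all(any(pred(*(p[i] for i in pos)) for p in proofs) for pos, pred in tests)

    # A 2-colorable 4-uniform hypergraph: a single proof (a proper 2-coloring) covers the verifier.
    edges = [(0, 1, 2, 3), (1, 2, 3, 4), (0, 2, 3, 4)]
    proper_2_coloring = (0, 0, 0, 1, 1)
    print(is_covered(hypergraph_verifier_tests(edges), [proper_2_coloring]))   # True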
3 PCP Construction I
We now move on to the constructions of our proof systems. To a reader familiar with PCPs we
rst give a preview of our constructions. Both our PCPs (of this section and the next) go through
the standard path. We start with strong 2-prover 1-round proof systems of Raz [28], apply the
composition paradigm [5], and then use the long code of [8] at the bottom level. One warning: in
the literature it is common to use a variant of the long code | called the \folded long code" | we
do not use the folded version. (Readers unfamiliar with the terms above may nd elaborations in
Section 3.1.)
As usual, the interesting aspects in the constructions are choice of the inner veriers and the
analyses of their soundness. The inner veriers that we use are essentially from [17]: The inner
verier in Section 4 is exactly the same as the one used by [17, Section 7] to show hardness of
Max 4-Set Splitting, while the one in this section is a small variant. The goals of the analyses
are dierent, since we are interested in the number of proofs required to cover all random strings.
Despite the dierence, we borrow large parts of our analysis from that of [17]. In the current section
our analysis essentially shows that if our verier, on some xed input, rejects every proof oracle
with probability at least
, then on any set of k proofs nearly
k fraction of random strings end up
rejecting all the proofs. Thus the standard soundness of the verier we construct is of interest and
we analyze this using lemmas from [17]. The analysis of the verier in Section 4 does involve some
new components and we will comment upon these in the next section.
3.1 Preliminaries: Label cover, Long codes, Proof composition
Our PCP constructions (also) follow the paradigm of proof composition, by composing an \outer
verier" with an \inner verier". In its most modern and easy to apply form, one starts with
an outer proof system which is a 2-Prover 1-Round proof system (2P1R) construction for NP. We
abstract the 2P1R by a graph-theoretic optimization problem called Label Cover. The specic
version of Label Cover we refer to is the maximization version LabelCover max discussed in [3]
(see [3] for related versions and the history of this problem).
Label Cover. A LabelCover max instance LC consists of a bipartite graph
vertex set U [ W and edge set F , \label sets" LU ; LW which represent the possible labels that
can be given to vertices in U; W respectively, and projection functions for each
W such that (u; w) 2 F . The optimization problem we consider is to assign a
label '(u) 2 LU (resp. '(w) 2 LW ) to each u 2 U (resp. w 2 W ) such that the fraction of edges
(call such an edge \satised") is maximized. The optimum
value of a LabelCover max instance LC, denoted OPT(LC), is the maximum fraction of \satised"
edges in any label assignment. In the language of LabelCover max , the PCP theorem [5, 4] together
with the parallel repetition theorem of Raz [28] yields parts (i)-(iii) of the theorem below. Here we
need an additional property that is also used in [17, Sections 6, 7]. First we need a denition: For
min
The denition above is quite technical (and borrowed directly from [17]) but the intuition is that
projects mostly onto dierent elements of LU i the \measure" is large.
Theorem 3.1 ([3, 17]) There exist d 0 ; e 0 < 1 and c > 0 and a transformation that, given a
parameter - > 0, maps instances ' of Sat to instances
of LabelCover max , in time n O(log - 1 ) , such that
where n is the size of the Sat instance '.
(iii) If ' is satisable then ' is not satisable then OPT(LC) -.
(iv) For every
Fg.
Remark: As mentioned earlier, conditions (i)-(iii) are standard for LabelCover max . The need for
Condition (iv) is inherited from some lemmas of [17] that we use (specically, Lemmas 3.3 and 3.4).
This condition is shown in Lemma 6.9 of [17].
To use the hardness of label cover, we use the standard paradigm of proof composition. The
use of this paradigm requires an error-correcting code, which in our case is again the long code.
We dene this next.
The Long Code. We rst remark on some conventions and notation we follow through the rest
of this paper: We represent Boolean values by the set f1; 1g with 1 standing for False and 1
for True. This representation has the nice feature that Xor just becomes multiplication. For any
domain D, denote by FD the space of all Boolean functions f 1g. For any set D, jDj
denotes its cardinality.
We now describe a very redundant error-correcting code, called the long code. The long code
was first used by [8], and has been very useful in most PCP constructions since.
The long code of an element x in a domain D, denoted LONG(x), is simply the list of evaluations of
all the 2^{|D|} Boolean functions in F_D at x. If A is the long code of a, then we denote by A(f) the
coordinate of A corresponding to the function f, so that A(f) = f(a).
We note that most of the proofs used in the literature use the "folded long code", which is a
code of half the length of the long code, involving evaluations of the element x at exactly one of
the functions f or -f (but not both). For reasons that will become clearer later, we cannot use
the folded long code here and work with the actual long code.
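A small sketch of the long code over the ±1 convention used above: for a domain D, the encoding of x ∈ D lists f(x) for every Boolean function f : D → {1, -1}, with the table indexed by the functions themselves. The indexing convention (keying by the function's value table) is ours.

    from itertools import product

    def all_boolean_functions(domain):
        """All functions f: domain -> {1, -1}, each represented as a dict of values."""
        return [dict(zip(domain, values))
                for values in product([1, -1], repeat=len(domain))]

    def long_code(domain, x):
        """LONG(x): the evaluation f(x) for every f, keyed by the function's value table."""
        return {tuple(f[d] for d in domain): f[x] for f in all_boolean_functions(domain)}

    D = [0, 1, 2]
    A = long_code(D, 1)
    print(len(A))            # 2^|D| = 8 coordinates
    f = (1, -1, 1)           # f(0)=1, f(1)=-1, f(2)=1
    print(A[f])              # -1, i.e., A(f) = f(1), the defining property of a correct long code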
Constructing a \Composed" PCP. Note that Theorem 3.1 implies a PCP where the proof
is simply the labels of all vertices in U; W of the LabelCover max instance and the verier picks
an edge at random and checks if the labels of u and w are \consistent", i.e.,
u;w An alternative is to choose a random neighbor w 0 of u and instead checking
u;w dening '(u) to be the most common value of u;w 0 ('(w 0 )) it is easy
to see that the probability of acceptance in the latter PCP (that uses w; w 0 for the check) is at
most the probability of acceptance in the former PCP (that uses u; w for the check).
By the properties guaranteed in Theorem 3.1, either PCP uses O(log n log - 1 ) randomness, has
perfect completeness and soundness at most -. While the soundness is excellent, the number of bits
it reads from the proof in total (from the two \locations" it queries) is large (namely, O(log - 1 )).
In order to improve the query complexity, one \composes" this \outer" verication with an \inner"
verication procedure. The inner verier is given as input a projection function
has oracle access to purported encodings, via the encoding function Enc of some error-correcting
code, of two labels a 2 LU and b 2 LW , and its aim is to check that (b) = a (with \good"
accuracy) by making very few queries to Enc(a) and Enc(b). The inner veriers we use have a
slightly dierent character: they are given input two projections 1 and 2 (specically u;w and
u;w 0 ) and have oracle access to purported encodings Enc(b) and Enc(c) of two labels b; c 2 LW ,
and the aim is to test whether 1 This interesting feature was part of and necessary
for Hastad's construction for set splitting [17], and our PCPs also inherit this feature.
In our nal PCP system, the proof is expected to be the encodings of the labels '(w) of all
vertices using the encoding Enc. For e-cient constructions the code used is the long code
of [8], i.e., Enc
=LONG. We denote the portion of the (overall) proof that corresponds to w by
LP(w), and in a \correct" proof LP(w) would just be LONG('(w)) (the notation LP stands for \long
proof").
The construction of a PCP now reduces to the construction of a good inner verier that given a
pair of strings B; C which are purportedly long codes, and projection functions 1 and 2 , checks if
these strings are the long codes of two \consistent" strings b and c whose respective projections agree
(i.e., satisfy 1 Given such an inner verier IV, one can get a \composed verier" V comp
using standard techniques as follows (given formula ' the verier rst computes the LabelCover max
instance LC in polynomial time and then proceeds with the verication):
1. Pick u 2 U at random and w; w 0 2 N(u) at random
2. Run the inner verier with input u;w and u;w 0 and oracle access to LP(w) and LP(w 0 ).
3. Accept i the inner verier IV accepts
We denote by V comp (IV) the composed verier obtained using inner verier IV. The (usual)
soundness analysis of the composed PCP proceeds by saying that if there is a proof that causes the
verier V comp to accept with large, say (s "), probability, where s is the soundness we are aiming
for, then this proof can be \decoded" into labels for U [ W that \satisfy" more than a fraction -
of the edges in the LabelCover max instance, and by Theorem 3.1 therefore the original formula '
was satisable. In our case, we would like to make a similar argument and say that if at most k
proofs together satisfy all tests of V comp , then these proofs can be \decoded" into labels for U [W
that satisfy more than - fraction of edges of LC.
3.2 The Inner Verier
We now delve into the specication of our rst \inner verier", which we call Basic-IV4. This inner
verier is essentially the same as the one for 4-set splitting in [17], but has a dierent acceptance
predicate. Recall the inner verier is given input two projections functions
has oracle access to two tables and aims to check that B (resp. C) is the
long code of b (resp. c) which satisfy 1
Inner Verifier Basic-IV4 B;C
Choose uniformly at random f 2 FLU ,
Choose at random g 0 ; h 0 2 FLW such that 8b 2 LW ,
For a technical reason, as in [17], the nal inner verier needs to run the above inner verier for
the bias parameter p chosen at random from an appropriate set of values. The specic distribution
we use is the one used by Hastad [17] (the constant c used in its specication is the constant from
Equation (2) in the statement of Theorem 3.1).
Inner Verifier IV4 B;C
e,
t.
Choose uniformly at random.
Run Basic-IV4 B;C
Note that the inner verier above has perfect completeness. Indeed when B; C are long codes
of b; c where 1 then for each f 2 FLU , if
so these are not equal, and similarly for the case when
3.3 Covering Soundness analysis
Let X(
) be the indicator random variable for the rejection of a particular proof
Wg by the composed verier V comp (IV4
)). The probability that V 1 (
rejects
taken over its random choices is clearly the expectation
Here B; C are shorthand for LP(w) and LP(w 0 ) respectively and equal
respectively in a \correct" proof. We wish to say that no k proofs can together satisfy all the tests
which
performs.
) is the indicator random variable for the rejection of a set of k
proofs fLP by the verier V 1 (
), then the overall probability that V 1 (
rejects all these k proofs, taken over its random choices, is exactly
Y
where we use the shorthand
We now argue (see Lemma 3.2 below) that if this rejection probability is much smaller than
there is a way to obtain labels '(u) for
than - fraction of the edges (u; w) are satised by this labeling, i.e., Together
with Theorem 3.1, this implies that the rejection probability (from Equation (4)) for any set of k
proofs for a false claim of satisability (of '), can be made arbitrarily close to 1
, and in particular
is non-zero, and thus the covering soundness of the composed verier is at most 1=k.
Lemma 3.2 There exists a 0 < 1 such that for every integer k 1, every ", 0 < " < 4 k , and all
a 0 Before presenting the formal proof of Lemma 3.2, we rst highlight the basic approach. The
power of arithmetizing the rejection probability for a set of k proofs as in Equation (4) is that one
can expand out the product and analyze the expectation of4 k
where
products are dened to be 1. A special
term is which is the constant 1. We analyze the rest of the terms individually. We
can now imagine two new proofs ~
are exclusive-ors of subsets of the k
given proofs. Now one can apply existing techniques from [17] to analyze terms involving the tables
~
B and ~
C and show that ~
cannot be too negative, and similarly if the
expectation of ~
much below zero, then in fact OPT(LC) is quite large.
In short, at a high level, we are saying that if there exist k proofs such that the verier accepts at
least one of them with good probability, then some exclusive-or of these proofs is also accepted by
the verier with good probability, and we know this cannot happen by the soundness analysis of
[17] for the case of a single proof. This intuition is formalized via the next two lemmas from [17].
Before stating the lemmas, we make a slight digression to point out the relevance of not employing
folding here. Folded long codes are typically used as follows: Given a table
supposedly giving the long code of the encoding of the label assigned to w, conceptually we assume
we have a long proof A which respects the constraints A 0
such an A 0 for ourselves from A 0 by setting A 0
xed element of the concerned domain (i.e., LU or LW as the case
might be). Such a table A 0 , which satises A 0 ( f) = A 0 (f) for every function f , is said to be
folded. We then pretend the verier works with the long code, but carry out the soundness analysis
only for folded tables. In our case also we could do the same to analyze the acceptance of a single
proof. However when faced with multiple proofs, the intermediate tables we consider, such as B S
above, need not be folded even if the original proofs we were given were folded | in particular, this
will be the case when S has even cardinality. Thus our analysis needs to work with non \folded
tables" as well. This is why we work with the long code directly.
Now we go back to the technical lemmas.
Lemma 3.3 ([17]) For every
where the distribution of p; f; 2 is the same as the one in IV4
.
This lemma is Lemma 7.9 in [17] combined with calculation in the rst half of Lemma 7.14
in the same paper. Similarly the next lemma follows from Lemma 7.12 of the same paper and a
similar calculation.
Lemma 3.4 ([17]) There exists a < 1 such that for every
> 0 and all proof tables fBw g and
fCw g, indexed by w 2 W with Bw
is at leastOPT(LC)
where the expectation is taken over and where the distribution of
is the same as the one in IV4
.
We are now ready to prove Lemma 3.2.
Proof of Lemma 3.2: The proof is actually simple given Lemmas 3.3 and 3.4. We pick a
that satises
< ". By Equation (4), if E [X k (
", then there exist subsets S 1
;, such that
Suppose one of S 1 , S 2 is empty, say S applied to B S 1
(which is a function
mapping
which together with Equation (6) above
yields
> ", a contradiction since
"=8.
Now suppose both S 1 and S 2 are non-empty. Now we apply Lemma 3.4 to B S 1
and C S 2
to get
that the expectation in Equation (6) is at least 7
a. Together with Equation
this yields (using " 8
a>
a 0 for some absolute constant a 0 . 2
We are now ready to state and prove the main Theorem of this section.
Theorem 3.5 For every constant k, NP ⊆ cPCP_{1, 1/k}[O(log n), 4].
Proof: The theorem follows from Lemma 3.2 and Theorem 3.1. Let
pick - > 0 small enough so that
a 0 > -. By Lemma 3.2 we have
implies OPT(LC) > -. Consider the PCP with verier V comp (IV4
). Using Theorem 3.1, we get
that if the input formula ' is not satisable, the verier V comp (IV4
rejects any k proofs with
probability at least 1
it clearly has perfect completeness and makes only 4 queries, the
claimed result follows. 2
Remark on tightness of the analysis: In fact, Lemma 3.2 can be used to show that for any
" > 0, there exists a (covering) PCP verier that makes 4 queries, has perfect completeness and
which rejects any set of k proofs with probability at least 1
". Note that this analysis is in fact
tight for the verier V comp (IV4) since a random set of k proofs is accepted with probability 1 4 k .
It would have been su-cient to prove that for any k proofs the set of verier coins causing the
verier to reject all k proofs is nonempty. We do not know a simpler proof of this weaker statement.
Construction II and Hardness of Hypergraph Coloring
In the previous section we gave a PCP construction which made only 4 queries into the proof and
had covering soundness smaller than any desired constant. This is already interesting in that it
highlights the power of taking the covering soundness approach (since as remarked in the introduction
one cannot achieve arbitrarily low soundness using classical PCPs with perfect completeness
that make some xed constant number of queries). We next turn to applying this to get a strong
inapproximability result for hypergraph coloring.
The predicate tested by the inner verier IV4
is F (x; w), and to get
a hardness result for hypergraph coloring, we require the predicate to be NAE(x; which is
true unless all of x; are equal. Note that NAE(x;
true, so one natural approach is to simply replace the predicate F tested by IV4
by NAE without
losing perfect completeness. The challenge of course is to prove that the covering soundness does
not suer in this process, and this is exactly what we accomplish. For completeness we describe
the inner verier below.
Inner Verifier IV-NAE4 B;C
Pick p as in IV4
.
Pick f; as in Basic-IV4 p .
Accept iff not all of B(g_1), B(g_2), C(h_1), C(h_2) are equal.
To analyze the soundness of the resulting composed verifier, we need to understand the "not-all-equal"
predicate NAE. Note that NAE(x_1, x_2, x_3, x_4) rejects iff
(1/8) (1 + Σ_{1 ≤ i < j ≤ 4} x_i x_j + x_1 x_2 x_3 x_4) = 1,
and this sum equals zero otherwise. With similar notation as in the previous section, this implies
that for a given choice of the verifier's randomness, it rejects all k proofs iff the product of the k
individual rejection indicators equals 1; expanding this product yields equation (7), in which ⊕
denotes the exclusive-or of characteristic vectors or, worded differently, the symmetric difference
of sets. If the verifier accepts one of the proofs then the right hand side of (7) must equal zero.
Hence we study the expected value of this quantity.
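The arithmetization of NAE can be checked directly: for x_1, ..., x_4 ∈ {1, -1}, the quantity (1/8)(1 + Σ_{i<j} x_i x_j + x_1 x_2 x_3 x_4) equals 1 exactly when all four bits are equal (NAE rejects) and 0 otherwise. A brute-force check:

    from itertools import combinations, product

    def nae_reject_indicator(x):
        """(1/8) * (1 + sum of pairwise products + product of all four), for x in {1,-1}^4."""
        pair_sum = sum(x[i] * x[j] for i, j in combinations(range(4), 2))
        return (1 + pair_sum + x[0] * x[1] * x[2] * x[3]) / 8

    for x in product([1, -1], repeat=4):
        rejects = (len(set(x)) == 1)            # all four bits equal
        assert nae_reject_indicator(x) == (1 if rejects else 0)
    print("identity verified on all 16 assignments")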
Before proceeding with the analysis we shed some insight into the analysis and explain what
is new this time. The terms corresponding to T being the
empty set are exactly the terms that appeared in the analysis of the verifier of Section 3. Let
us turn our attention to terms where T ≠ ∅. Typically, when a sum as the above appears,
we would just go ahead and analyze the individual terms. Unfortunately, it turns out that
we are unable to do this in our case. To see why, consider a typical summand above.
These are more general than the terms analyzed in Section
3. The first two elements of such a
product come from an identical distribution, and similarly for the last two elements of the product.
This in turn enabled a certain "pairing" up of terms from which a good solution to the label cover
instance could be extracted (see the analysis in Lemma 7.12 of [17] for more details). But now,
since T ≠ ∅, the first two tables are different, and so are the last two. Therefore,
we now have to deal with individual terms which are the product of four elements each of which
comes from a different distribution. It does not seem possible to analyze such a term by itself and
extract meaningful solutions to the label cover instance.
To cope with this problem, we now bunch together terms
that involve the same T but different S_1 and S_3. (Alternatively, one could think of this as fixing T,
and then picking S_1 and S_3 as random subsets of [k] and considering the expectation of the terms.)
This makes the distribution of the first pair as a whole identical to that of the
second pair, and allows us to analyze the terms above. More formally, for each non-empty T ⊆ [k]
we define B_T (and similarly C_T). Using this notation the sum in Equation (7) equals
where the first four terms correspond to the case where T = ∅. Lemma 3.3 can be used to lower
bound the expectation of the first two sums, and Lemma 3.4 can be used to lower
bound the expectation of the third sum as a function of the optimum of the label cover instance.
Thus we only need to study the last sum.
We show that if the last term is too negative, then one can extract an assignment of labels to
the provers. The intuition behind the proof is as follows. B_T and C_T are two functions chosen
independently from the same distribution. Further, the queried pairs (g_1, g_2) and (h_1, h_2) are also
chosen from the same distribution, but are not independent of each other (and are related via f).
If we ignore this dependence for a moment, then we get a nonnegative expectation,
and this would be good enough for us. Unfortunately, (g_1, g_2) and (h_1, h_2) are not independent. The
intuition behind the proof of the next inequality is that if this correlation affects the expectation
of the term, then there is some correlation between the tables for B_T and C_T, and
so a reasonable strategy for assigning labels to w and w' can be extracted. Specifically, we get the
following lemma:
Lemma 4.1 There exists a 0 < 1 such that the following holds: Let T
"=8 be such that
";
where the expectation is taken over the distribution of u; w; w as in IV-NAE4
.
Then OPT(LC)
a 0 As usual we postpone the proof of the lemma, and instead prove the resulting theorem.
Theorem 4.2 For every constant k, NP cPCP
moreover the predicate veried
by the PCP upon reading bits x;
Proof: We only have to analyze the soundness of the verifier. Let a be the constant from
Lemma 3.4 and a_0 be the constant from Lemma 4.1. Let b be a constant depending only on a and a_0,
and let δ < b. To create a verifier for an instance of SAT, reduce the instance of SAT to an
instance of Label Cover using Theorem 3.1 with parameter δ and then use the verifier based on it,
using IV-NAE4 as the inner verifier. To show soundness, we need to show that if this verifier is
covered by k proofs, then the instance of Label Cover has an optimum greater than δ.
Suppose we have k proofs such that the verifier always accepts one of the proofs. This implies
that the expectation, over u, w, w', of (9) is 0. This implies that at least one
of the summands in (9) is, in expectation, less than or equal to −2^{−(k+2)} (since there are at most
2^{k+2} summands in the expression). If it is a summand in one of the first two sums then this
contradicts Lemma 3.3. If it is a summand in the third sum then by Lemma 3.4 we get that
OPT(LC) > δ. If it is a summand in the last sum, then by Lemma 4.1 we get that
OPT(LC) > δ. Thus in the last two cases we get that the optimum is more than δ as
desired. 2
Before going on to the proof of Lemma 4.1, we discuss the consequences of Theorem 4.2 for
hypergraph coloring. Before doing so, we just note that in fact one can prove a stronger claim in
Theorem 4.2: given any k proofs, the probability that the verifier rejects all of them is at least
1/8^k − ε, for ε > 0 as small as we seek. The proof is really the same as that of Theorem 4.2, since
we have argued that all terms in the expansion (9) are arbitrarily small in the case when the optimum
value of the label cover instance is very small. Once again this soundness analysis is tight, since a
random set of k proofs will, in expectation, satisfy a fraction 1 − 1/8^k of the verifier's checks.
4.1 Hardness results for hypergraph coloring
Since the predicate used by the PCP of Theorem 4.2 is that of 4-set splitting, we get the following
Corollary.
Corollary 4.3 For every constant k ≥ 2, given an instance of 4-set splitting, it is NP-hard to
distinguish between the case when there is a partition of the universe that splits all the 4-sets, and
the case when for every set of k partitions there is at least one 4-set which is not split by any of the k
partitions.
The above hardness can be naturally translated into a hardness result for coloring 4-uniform hypergraphs,
and this gives us our main result:
Theorem 4.4 (Main Theorem) For any constant c ≥ 2, it is NP-hard to color a 2-colorable
4-uniform hypergraph using c colors.
Proof: Follows from the above Corollary since a 4-set splitting instance can be naturally identified
with a 4-uniform hypergraph whose hyperedges are the 4-sets, and it is easy to see that the minimum
number of partitions k needed to split all 4-sets equals ⌈log c⌉ where c is the minimum number of
colors to color the hypergraph such that no hyperedge is monochromatic. 2
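To make the ⌈log c⌉ correspondence in this proof concrete, here is a small illustrative Python sketch
(our own, not part of the paper; the helper names `coloring_to_partitions` and `edge_is_split` are ours).
It converts a proper c-coloring of a 4-uniform hypergraph into ⌈log₂ c⌉ bipartitions, one per bit of the
color index, so that every non-monochromatic hyperedge is split by at least one partition.

```python
from math import ceil, log2

def coloring_to_partitions(coloring, c):
    """Turn a coloring (vertex -> color in 0..c-1) into ceil(log2 c) bipartitions,
    where partition j places vertex v on side (bit j of its color)."""
    k = max(1, ceil(log2(c)))
    return [{v: (col >> j) & 1 for v, col in coloring.items()} for j in range(k)]

def edge_is_split(edge, partition):
    """A hyperedge is split if the partition does not leave it monochromatic."""
    return len({partition[v] for v in edge}) == 2

def all_edges_covered(edges, coloring, c):
    """If no hyperedge is monochromatic under the coloring, then two of its vertices
    differ in some bit of their colors, so some partition splits that edge."""
    parts = coloring_to_partitions(coloring, c)
    return all(any(edge_is_split(e, p) for p in parts) for e in edges)
```

Conversely, k partitions that split every 4-set give a proper 2^k-coloring (color each vertex by its
vector of partition sides), which is the other direction of the equivalence used in the proof.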
In light of the discussion after the proof of Theorem 4.2, we in fact have the following stronger
result.
Theorem 4.5 For any constant c ≥ 2 and every ε > 0, it is NP-hard to color a 2-colorable 4-
uniform hypergraph using c colors such that at least a fraction (1 1
") of the hyperedges are
properly colored (i.e., are not monochromatic).
Theorem 4.6 Assume NP ⊄ DTIME(n^{O(log log n)}). Then there exists an absolute constant c_0 > 0
such that there is no polynomial time algorithm that can color a 2-colorable 4-uniform hypergraph
using c_0 · (log log n)/(log log log n) colors, where n is the number of vertices in the hypergraph.
Proof: This follows since the covering soundness of the PCP in Theorem 4.2 can be made an
explicit o(1) function. Indeed, nothing prevents us from having a k that is a function of n. We
need the label cover parameter δ to be small enough as a function of k in order to reach the
contradiction in the proof of Theorem 4.2, and the proof size grows accordingly as δ shrinks. We
can thus keep the proofs of size n^{O(log log n)} by letting k = O(log log n / log log log n).
Similarly to Theorem 4.4, this implies that 2^k-coloring a 2-colorable
4-uniform hypergraph is hard unless NP ⊆ DTIME(n^{O(log log n)}). 2
We now show that a hardness result similar to Theorem 4.4 also holds for 2-colorable k-uniform
hypergraphs for any k ≥ 5.
Theorem 4.7 Let k ≥ 5 be an integer. For any constant ℓ ≥ 2, it is NP-hard to color a 2-colorable
k-uniform hypergraph using ℓ colors.
Proof: The proof works by reducing from the case of 4-uniform hypergraphs, and the claimed
hardness then follows using Theorem 4.4.
Let H be a 4-uniform hypergraph with vertex set V. Suppose that k = 4s + t with s ≥ 1 and 1 ≤ t ≤ 4.
Construct a k-uniform hypergraph H' as follows. The vertex set of H' is V' = V^{(1)} ∪ ... ∪ V^{(sℓ+1)},
where the sets V^{(j)} are independent copies of V. On each V^{(j)}, take a collection F^{(j)} of 4-element
subsets of V^{(j)} that correspond to the hyperedges in H. A hyperedge of H' (which is a (4s + t)-element
subset of V') is now given by the union of s 4-sets belonging to s different F^{(j)}'s,
together with t vertices picked from a 4-set belonging to yet another F^{(j)}. More formally, for every
set of (s + 1) indices j_1 < ... < j_{s+1}, every choice of elements e_{j_1} ∈ F^{(j_1)}, ..., e_{j_{s+1}} ∈ F^{(j_{s+1})},
and every t-element subset f_{j_{s+1}} of e_{j_{s+1}}, there is a hyperedge e_{j_1} ∪ ... ∪ e_{j_s} ∪ f_{j_{s+1}}.
If H is 2-colorable then clearly any 2-coloring of it induces a 2-coloring of H', and hence H' is
2-colorable as well.
Suppose H is not ℓ-colorable and that we are given an ℓ-coloring of H'. Since H is not ℓ-colorable,
each F^{(j)}, for 1 ≤ j ≤ sℓ + 1, must contain a monochromatic set g_j. By the pigeonhole
principle, there must be a color c such that s + 1 different g_j's have color c. The hyperedge of H'
constructed from those (s + 1) sets is then clearly monochromatic (all its vertices have color c)
and we conclude that H' is not ℓ-colorable.
Since the reduction runs in polynomial time when k and ℓ are constants, the proof is complete. 2
4.2 Discrete Fourier transforms
Before going on to the proof of Lemma 4.1 we now introduce a tool that has been crucial in the
analysis of inner verifiers. This was hidden so far from the reader but already used in the proofs
of Lemmas 3.3 and 3.4 in [17]. Now we need to introduce it explicitly.
In general we consider functions mapping a domain D to {1, −1}; let $F_D$ denote the set of such
functions. For $\beta \subseteq D$ and $f \in F_D$, let $\chi_\beta(f) = \prod_{x \in \beta} f(x)$.
Notice that $\chi_{\{x\}}$ is the long code of x. For any function A mapping $F_D$ to the reals,
we have the corresponding Fourier coefficients
$\hat{A}_\beta = E_f[\, A(f)\, \chi_\beta(f) \,]$,
where $\beta \subseteq D$. We have the Fourier inversion formula given by
$A(f) = \sum_{\beta \subseteq D} \hat{A}_\beta\, \chi_\beta(f)$,
and Plancherel's equality, which states that
$\sum_{\beta \subseteq D} \hat{A}_\beta^2 = E_f[\, A(f)^2 \,]$.
In the case when A is a Boolean function the latter sum is clearly 1.
We think of an arbitrary table A as being somewhat close to (or coherent with) the long code
of x if there exists a small set $\beta$ containing x such that $\hat{A}_\beta$ is non-negligibly
large. Thus, when viewing the long proofs of w and w', our goal is to show that LP(w) and LP(w')
have coherence with the long codes of strings x and y such that $\pi_{u,w}(x)$ and $\pi_{u,w'}(y)$
are equal.
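As a purely illustrative rendering of these definitions (our own, exponential-time code meant only to
fix the notation; the function names are ours), the following Python computes $\chi_\beta$ and the
Fourier coefficients $\hat{A}_\beta$ of a table A over a small domain D by brute force.

```python
from itertools import product

def chi(beta, f):
    """chi_beta(f) = product over x in beta of f(x); f is a dict x -> +1/-1."""
    out = 1
    for x in beta:
        out *= f[x]
    return out

def fourier_coefficients(A, D):
    """Return hat(A)_beta = E_f[ A(f) * chi_beta(f) ] for every beta subset of D,
    the expectation being over a uniformly random f: D -> {1, -1}."""
    D = list(D)
    all_f = [dict(zip(D, vals)) for vals in product([1, -1], repeat=len(D))]
    coeffs = {}
    for mask in range(1 << len(D)):
        beta = tuple(D[i] for i in range(len(D)) if (mask >> i) & 1)
        coeffs[beta] = sum(A(f) * chi(beta, f) for f in all_f) / len(all_f)
    return coeffs

# Example: the long code of x0 is the table A(f) = f(x0); its only nonzero
# Fourier coefficient is the one at beta = (x0,), matching the remark above.
```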
4.3 Proof of Lemma 4.1
Fix T ⊆ [k]. Throughout this section the quantities that we define depend on T, but we don't
include it as a parameter explicitly.
Recall we need to show that if the expectation, over u, w, w', of the relevant term
is too negative (less than −ε), then we can assign labels to the label cover problem with acceptance
probability more than δ. Recall that g_2 is defined in terms of other random
variables f and g', and similarly h_2 in terms of f and h'. For brevity, we let X denote the quantity
whose expectation we are bounding; note that X is a random variable depending on all the variables above. We
first analyze the expectation of X over g_1 and h_1 (for a fixed choice of u, w, w' and the remaining
variables). Then we calculate the expectation over f, g' and h'. In both stages we get exact expressions. Finally
we make some approximations for the expectation over u, w, w'. (The careful reader may observe
that we don't take expectations over p; in fact the lemma holds for every choice of p of the inner
verifier IV-NAE4.)
The crux of this proof are the functions B dened as follows: We let
and
Note that for a xed choice of f and g 0 we have
We get a
similar expression for C T and thus we get:
Let us call the above quantity Y .
In what follows, we rely crucially on the properties of the Fourier coe-cients of B and C .
F and ^
G denote the Fourier coe-cients of B and C respectively. From the denitions and
some standard manipulation, we get
Using simple Fourier expansion, we can rewrite the quantity we are analyzing as:
Y
Y
The main property about the Fourier coe-cients of B and C is that their L 1 norm is bounded.
Specically, we have:
A
We start by dening the strategy we use to assign labels and prove that if the expectation (of
Y ) is large, then the labels give an assignment to the label cover instance with objective of at least
a 0
Strategy. Given w 2 W , and tables corresponding to LP(w) in k dierent proofs,
compute B S for every S [k], B and its Fourier coe-cients. Pick a non-empty set LW with
F j and assign as label to w, an element x 2 chosen uniformly at random. With
remaining probability, since
may be less than 2 k , assign no label to w.
Preliminary analysis. We now give a preliminary expression for the success probability of the
strategy. Consider picking u; w, and w 0 (and the associated 1 and 2 ) at random and checking for
the event 1 )). The probability of this event is lower bounded by the probability
that 1 () and 2 ( 0 ) intersect and we assign the elements corresponding to this intersection to w
and w 0 . The probability of these events is at least:
Below we show that this quantity is large if the expectation of Y is too small. We now return to
the expectation of Y .
An exact expression for the expectation of Y . We start with some notation. Fix u; w; w
and 1 and 2 . For x 2 LU and ; 0 LW , let s x
Since the argument of s x is always and the argument of t x is always 0 , we use the shorthand
s x for s x () and t x for t x ( 0 ). Further for real p and non-negative integers s; t, let (p; s; t) =2
, and let (p; we show that
F
Y
To prove the above it su-ces to show that
Y
Y
Factors corresponding to y and z with dierent projections on LU are independent, and thus the
expectation can be broken down into a product of expectations, one for each x 2 LU . Fix x 2 LU
and consider the term
Y
Y
If (or \false") the rst product equals ( 1) sx and the second equals (1 2p) t x . Similarly,
If the rst product equals (1 2p) sx and the second equals ( 1) t x . The events happen
with probability 1=2 each and thus giving that the expectation above (for xed x) equals2
Taking the product over all x's gives (12).
Inequalities on E [Y ]. For every u, we now show how to lower bound the expectation of Y , over
in terms of a sum involving only 's and 0 that intersect in their projections. This
brings us much closer to the expression derived in our preliminary analysis of the success probability
of our strategy for assigning labels and we lower bound a closely related quantity. Specically we
now use the inequality E
guaranteed by the Lemma statement) to show:
x minf1; ps x g is the quantity dened in Equation (1).
inequality with p ( 0 ) in the exponent follows by symmetry.)
To prove the above, consider the following expression, which is closely related to the expectation
of Y as given by (12).
Y
x
Y
x
First we note that E
0. (Here we are using the fact that
the tables B T and C T are chosen from the same distribution.) Next we note that the dierence
between Y and Y 1 arises only from terms involving ; 0 such that 1 To verify
this, note that if
for every x, if 1 we get that terms corresponding to such pairs of ; 0 vanish in
We conclude:
Y
x
Y
x
Using
taking absolute values we get,
Y
x
Y
x
Y
x
Y
x
where the last inequality uses j(p; t)j 1 for every t 0.
Next we simplify the terms of the LHS above. First we show that for every t 0,
First we note both j(p; s; t)j and j(p; s)j are upper bounded by 1
If p s 1 , then we have 1
s , let us set
To show (15), we need to prove that (z) 0 for z 2 [0; 1]. We have
in the interval in question and we only have to check
the inequality at the end points. We have
Using (15) we conclude that
Y
x
Y
x
Y
x
Substituting the above into (14) gives (13).
Based on (13). we want to prove that the strategy for assigning labels is a good one. First we
prove that large sets do not contribute much to the LHS of the sum in (13). Dene
We have
Lemma 4.8 We have
"=4:
Proof: By Property (iv) of Theorem 3.1 we have that the probability that p () (4k
is at most "2 (2k+4) . A similar chain of inequalities as (10) shows that
The sum in the lemma can hence be estimated as
"=4;
and the lemma follows. 2
By the same argument applied to 0 of size at least K, together with Equation (13), we get
"=2: (16)
We now relate to the probability of success of our strategy for assigning labels. From (11) we know
this quantity is at least
"2 (2k+2)
where the last inequality uses (16). (It is easy to convert this randomized strategy for assigning
labels to a deterministic one that does equally well.) The dominating factor in the expression is
the term p O(1) (from the denition of K) which can be calculated to be
O(
and the proof of
Lemma 4.1 is complete. 2
4.3.1 Comparison to previous proof of Theorem 4.4
We point out that the conference version of this paper [15] contained a different proof of Theorem
4.4. The current proof is significantly simpler, and furthermore it is only a minor adjustment
of similar proofs in [17]. The key observation to make the current proof possible is the insight that
we should treat the terms of (7) in the collections given by B_T and C_T; it does not
seem possible to handle them one by one in an efficient manner. The previous proof did not make
this observation explicitly and ended up being significantly more complicated. This "simplicity" in
turn has already enabled some further progress on the hypergraph coloring problem: in particular,
using this style of analysis, Khot [21] shows a better super-constant hardness for a-colorable
4-uniform hypergraphs for a ≥ 7.
4.3.2 Subsequent related work
In a very recent work, Holmerin [18] showed that the vertex cover problem considered on 4-uniform
hypergraphs is NP-hard to approximate within a factor of (2 − ε) for arbitrary ε > 0. (A set S
of vertices of a hypergraph H is said to be a vertex cover if every hyperedge of H intersects S.) He
proves this by modifying the soundness analysis of Hastad's 4-set splitting verifier (which is also the
verifier we use in Section 4) to show that any proof which sets only a fraction ε of bits to 1 will
cause some 4-tuple tested by the verifier to consist of only 1's. This in turn shows that for every
constant ε > 0, given a 2-colorable 4-uniform hypergraph, it is NP-hard to find an independent set
that consists of a fraction ε of the vertices. Note that this result is stronger, as a small independent
set implies a large chromatic number, and it thus immediately implies the hardness of coloring such
a 2-colorable 4-uniform hypergraph with 1/ε colors, and hence our main result (Theorem 4.4).
We stress that the verifier in Holmerin's paper is the same as the one in this paper; however, the
analysis in [18] obtains our result without directly referring to covering complexity.
Acknowledgments
We would like to thank the anonymous referees and Oded Goldreich for useful comments on the
presentation of the paper.
--R
Coloring 2-colorable hypergraphs with a sublinear number of colors
The hardness of approximate optima in lattices
Hardness of Approximations.
Proof veri
Probabilistic checking of proofs: A new characterization of NP.
An algorithmic approach to the Lov
Free bits
Improved approximation for graph coloring.
Coloring bipartite hypergraphs.
Improved approximation algorithms for maximum cut and satis
Inapproximability results for set splitting and satis
On the hardness of 4-coloring a 3-colorable graph
Vertex cover on 4-regular hyper-graphs is hard to approximate within (2 ")
Approximate graph coloring using semide
On the hardness of approximating the chromatic number.
Hardness results for approximate hypergraph coloring.
Approximate coloring of uniform hypergraphs.
On the hardness of approximating minimization problems.
A random recoloring method for graphs and hypergraphs.
Hypergraph coloring and the Lov
Improved bounds and algorithms for hypergraph 2- coloring
A parallel repetition theorem.
Coloring n-sets red and blue
--TR
--CTR
Adi Avidor , Ricky Rosen, A note on unique games, Information Processing Letters, v.99 n.3, p.87-91, August 2006
Subhash Khot, Guest column: inapproximability results via Long Code based PCPs, ACM SIGACT News, v.36 n.2, June 2005 | covering PCP;set splitting;hypergraph coloring;hardness of approximations;PCP;graph coloring |
586963 | Simple Learning Algorithms for Decision Trees and Multivariate Polynomials. | In this paper we develop a new approach for learning decision trees and multivariate polynomials via interpolation of multivariate polynomials. This new approach yields simple learning algorithms for multivariate polynomials and decision trees over finite fields under any constant bounded product distribution. The output hypothesis is a (single) multivariate polynomial that is an $\epsilon$-approximation of the target under any constant bounded product distribution.The new approach demonstrates the learnability of many classes under any constant bounded product distribution and using membership queries, such as j-disjoint disjunctive normal forms (DNFs) and multivariate polynomials with bounded degree over any field.The technique shows how to interpolate multivariate polynomials with bounded term size from membership queries only. This, in particular, gives a learning algorithm for an O(log n)-depth decision tree from membership queries only and a new learning algorithm of any multivariate polynomial over sufficiently large fields from membership queries only. We show that our results for learning from membership queries only are the best possible. | Introduction
From the start of computational learning theory, great emphasis has been put on developing
algorithmic techniques for various problems. It seems that the great progress has been made in
learning using membership queries, especially such functions as decision trees and multivariate
polynomials. Generally speaking, three different techniques were developed for those tasks:
the Fourier transform technique, the lattice based techniques and the Multiplicity Automata
technique. All the techniques use membership queries (which is also called substitution queries
for nonbinary fields).
The Fourier transform technique is based on representing functions using a basis, where a
basis function is essentially a parity of a subset of the input. Any function can be represented
as a linear combination of the basis functions. Kushilevitz and Mansour [KM93] gave a general
technique to recover the significant coefficients. They showed that this is sufficient for learning
decision trees under the uniform distribution. Jackson [J94] extended the result to learning DNF
under the uniform distribution. The output hypothesis is a majority of parities. (Also, Jackson
[J95] generalizes his DNF learning algorithm from uniform distribution to any fixed constant
bounded product distribution.)
The lattice based techniques are, at a very high level, performing a traversal of the binary
cube. Moving from one node to its neighbor, in order to reach some goal node. Angluin [A88]
gave the first lattice based algorithm for learning monotone DNF. Bshouty [Bs93] developed the
monotone theory, which gives a technique for learning decision trees under any distribution. (The
output hypothesis in that case is depth 3 formulas.) Schapire and Sellie [SS93] gave a lattice
based algorithm for learning multivariate polynomials over a finite field under any distribution.
(Their algorithm depends polynomially on the size of the monotone polynomial that describes
the function.)
Multiplicity Automata theory is a well studied field in Automata theory. Recently, some very
interesting connections were given, connecting learning such automata and learning decision
trees and multivariate polynomials. Ohnishi, Seki and Kasami [OSK94] and Bergadano and
gave an algorithm for learning Multiplicity Automata. Based
on this work Catlan and Varricchio [BCV96] show that this algorithm learns disjoint DNF.
Then Beimel et. al. [BBB+96] gave an algorithm that is based on Hankel matrices theory
for learning Multiplicity Automata and show that multivariate polynomials over any field are
learnable in polynomial time. (In all the above algorithms the output hypothesis is a Multiplicity
Automaton.)
All techniques, the Fourier Spectrum, the Lattice based and the Multiplicity Automata algorithms
give also learnability of many other classes such as learning decision trees over parities
(nodes contains parities) under constant bounded product distributions, learning CDNF (poly
size DNF that has poly size CNF) under any distribution and learning j-disjoint DNF (DNF
where the intersection of any j terms is 0).
In this paper we develop a new approach for learning decision trees and multivariate polynomials
via interpolation of multivariate polynomials over GF (2). This new approach leads to
simple learning algorithms for decision trees over the uniform and constant bounded product
distributions, where the output hypothesis is a multivariate polynomial (parity of monotone
terms).
The algorithm we develop gives a single hypothesis that approximates the target with respect
to any constant bounded product distribution. In fact the hypothesis is a good hypothesis under
any distribution that supports small terms, that is, any distribution D under which every term T
of size ω(log n) is satisfied with only negligible probability. Previous algorithms do not achieve this property.
It is also known that any DNF is learnable with membership queries under constant bounded
product distribution [J95], where the output hypothesis is a majority of parities. Our contribution
for j-disjoint DNF is to use an output hypothesis that is a parity of terms and to show that the
output hypothesis is an ε-approximation of the target against any constant bounded distribution.
We also study the learnability of multivariate polynomials from membership queries only. We
give a learning algorithm for multivariate polynomials over n variables with maximal degree
for each variable, where c ! 1 is constant, and with terms of size
d
log d)
using only membership queries. This result implies learning decision trees of depth O(log n) with
leaves from a field F from membership queries only.
This result is a generalization of the result in [B95b] and [RB89], where the learning algorithm
uses membership and equivalence queries in the former and only membership queries in the
latter.
The second result is a generalization of the result in [KM93] for learning boolean decision tree
from membership queries. The above result also give an algorithm for learning any multivariate
polynomial over fields of size log d)) from membership queries only.
This result is a generalization of the results in [BT88, CDG+91, Z90] for learning multivariate
polynomials under any field. Previous algorithms for learning multivariate polynomial over finite
fields F require asking membership queries with assignments in some extension of the field F
[CDG+91]. In [CDG+91] it is shown that an extension n of the field is sufficient to interpolate
any multivariate polynomial (when membership queries with assignments from an extension field
are allowed).
The organization of the paper is as follows. In section 2 we define the learning model and
the concept classes. In section 3 we give the algorithm for learning multivariate polynomial for
the boolean domain. In section 4 we give some background for multivariate interpolation. In
section 5 we show how to reduce learning multivariate polynomials to zero testing and to other
problems. Then in section 6 we give the algorithm for zero testing and also give a lower bound
for zero testing multivariate polynomials.
2 The Learning Model and Concept Classes
2.1 Learning Models
The learning criterion we consider is exact learning [A88] and PAC-learning[Val84].
In the exact learning model there is a function f called the target function, f : F^n → F, which
is a member of a class of functions C defined over the variable set V_n = {x_1, ..., x_n} and the
field F. The goal of the learning algorithm is to output a formula h that is equivalent to f.
The learning algorithm performs a membership query (also called substitution query for the
nonbinary fields) by supplying an assignment a to the variables in V as input to
a membership oracle and receives in return the value of f(a). For our algorithms we will regard
this oracle as a procedure MQ f (). The procedure input is an assignment a and its output is
The learning algorithm performs an equivalence query by supplying any function h as input
to an equivalence oracle with the oracle returning either "YES", signifying that h is equivalent
to f , or a counterexample, which is an assignment b such that h(b) 6= f(b). For our algorithms
we will regard this oracle as a procedure EQ f (h). We say the hypothesis class of the learning
algorithm is H if the algorithm supplies the equivalence oracle functions from H.
We say that a class of boolean function C is exactly learnable in polynomial time if for any
there is an algorithm that runs in polynomial time, asks a polynomial number of
queries (polynomial in n and in the size of the target function) and outputs a hypothesis h that
is equivalent to f .
The PAC learning model is as follows. There is a function f called the target function which
is a member of a class of functions C defined over the variable set V_n = {x_1, ..., x_n}. There is
a distribution D defined over the domain F n . The goal of the learning algorithm is to output a
formula h that is ffl-close to f with respect to some distribution D, that is,
Pr D
The function h is called an ffl-approximation of f with respect to the distribution D.
In the PAC or example query model, the learning algorithm asks for an example from the
example oracle, and receives an example (a; f(a)) where a is chosen from f0; 1g n according to
the distribution D.
We say that a class of boolean functions C is PAC learnable under the distribution D in
polynomial time if for any f 2 C over V n there is an algorithm that runs in polynomial time,
asks polynomial number of queries (polynomial in n, 1=ffl, 1=ffi and the size of the target function)
and with probability at least outputs a hypothesis h that is ffl-approximation of f with
respect to the distribution D.
It is known from [A88] that if a class is exactly learnable in polynomial time from equivalence
queries and membership queries then it is PAC learnable with membership queries in polynomial
time under any distribution D.
Let D be a set of distributions. We say that C is PAC learnable under D if there is a PAC-learning
algorithm for C such that for any distribution D 2 D unknown to the learner and for
any f 2 C the learning algorithm runs in polynomial time and outputs a hypothesis h that is an
ffl-approximation of f under any distribution D 0 2 D.
2.2 The Concept Classes and Distributions
A function over a field F is a function f : X → F for some set X. All classes considered in
this paper are classes of functions with X = F^n. The elements of F^n are called assignments.
We will consider the set of variables V_n = {x_1, ..., x_n}, where x_i describes the value of the
i-th projection of the assignment in the domain F^n of f. For an assignment a, the i-th entry of a
will be denoted by a_i.
A literal is a nonconstant polynomial p(x_i). A monotone literal is x_i^r for a nonnegative
integer r. A term (monotone term) is a product of literals (monotone literals). A multivariate
polynomial is a linear combination of monotone terms. A multivariate polynomial with nonmonotone
terms is a linear combination of terms. The degree of a literal p(x_i) is the degree of the
polynomial p. The size of a term is the number of literals in it.
Let MUL_F(n, k, t, d) be the set of all multivariate polynomials over the field F over n variables
with at most t monotone terms, where each term is of size at most k and each monotone literal is
of degree at most d. For the binary field B the degree is at most 1, so we will use MUL(n, k, t).
MUL*_F(n, k, t, d) will be the set of all multivariate polynomials with nonmonotone terms with the
above properties. We use MUL*(n, k, t) when the field is the binary field. Throughout the paper
we will assume that t ≥ n. Since every term in MUL*_F(n, k, t, d) can be written as a multivariate
polynomial in MUL_F(n, k, (d + 1)^k, d), we have
Proposition 1
MUL*_F(n, k, t, d) ⊆ MUL_F(n, k, t(d + 1)^k, d).
For the boolean field, a DNF (disjunctive normal form) is a disjunction of terms.
A j-disjoint DNF is a DNF where the intersection of any j terms is 0. A k-DNF is a DNF with
terms of at most k literals.
A decision tree (with leaves from some field F) over V n is a binary tree whose nodes are labeled
with variables from V n and whose leaves are labeled with constants from F . Each decision tree
T represents a function f To compute f T (a) we start from the root of the tree
the root is labeled with x i then f T TR (a) if a TR is the right subtree of
the root (i.e., the subtree of the right child of the root with all its descendent). Otherwise (when
a is the left subtree of the root. If T is a leaf then f T (a) is the
label of this leaf.
It is not hard to see that a boolean decision tree of depth k can be represented in MUL ? (n;
(each leaf in the decision tree defines a term and the function is the sum of all terms), and that
a j-disjoint k-DNF of size t can be represented in MUL ? (n; example
[K94].) So for constant k and d = O(log n) the number of terms is polynomial.
For a DNF and multivariate polynomial, f , we define size(f) to be the number of terms in f .
For a decision tree the size will be the number of leaves in the tree.
A product distribution is a distribution D that satisfies D(a
distributions D i on F . A product distribution is fixed constant bounded if there is a constant
1=2, that is independent of the number of variables n, such that for any variable x i ,
distribution D supports small terms if for every term of size !(log n),
we have PrD is the number of variables.
3 Simple Algorithm for the Boolean Domain
In this section we give an algorithm that PAC-learns with membership queries MUL ? (n; n; t)
under any distribution that supports small terms in polynomial time in n and t. We remind the
reader that we assume t - n. All the algorithms in the paper run in polynomial time also when
3.1 Zero test for MUL(n, k, t)
We first show how to zero-test elements in MUL(n, k, t) in polynomial time in n and 2^k, assuming
k is known to the learner. The algorithm will run in polynomial time for k = O(log(nt)).
Suppose f ∈ MUL(n, k, t) and f ≢ 0. Choose a term T = x_{i_1} ... x_{i_l}
of maximal size in f. Choose any
values from {0, 1} for the variables not in T. The projection will not be the zero function,
because the term T will stay alive in the projection. Since the projection is a nonzero function
with at most k variables, there is at least one assignment for x_{i_1}, ..., x_{i_l} that gives value 1 for the
function. This shows that for a random and uniform assignment a, with probability at
least 2^{-k} we have f(a) ≠ 0. So to zero-test a function f ∈ MUL(n, k, t), randomly and uniformly choose
a polynomial number of assignments a_i. If f(a_i) is zero for all the assignments then with high
probability we have f ≡ 0. Now from the above, if f ≢ 0, the probability that 2^k ln(1/δ) randomly chosen
assignments all evaluate to zero is at most δ.
This implies
that for k = O(log(nt)) there is a polynomial time probabilistic zero testing algorithm
that succeeds with high probability.
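A minimal sketch of this randomized zero test in Python (our own illustration; `mq` stands for the
membership oracle MQ_f, and the sample size is the one suggested by the 2^{-k} bound above):

```python
import math, random

def probably_zero(mq, n, k, delta=0.01):
    """Zero-test a black-box f in MUL(n, k, t) using membership queries only.
    If f is identically zero this always answers True; if f is nonzero it
    answers False with probability >= 1 - delta, because a uniform random
    point is a nonzero point of f with probability at least 2**(-k)."""
    trials = int(2 ** k * math.log(1 / delta)) + 1
    for _ in range(trials):
        a = tuple(random.randint(0, 1) for _ in range(n))
        if mq(a) != 0:
            return False
    return True
```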
3.2 Learning MUL(n; k; t).
We now show how to reduce learning to zero-testing.
We first show how to find one term in f . If we know that
is a term in f . If Since we can zero-test we can find the minimal
This implies that f x 1
some multivariate polynomial f 1 . If f 1 we know that
is a term in f . We
continue recursively with f
1, in this case
is a term in f .
After we find a term T we define f̃ = f + T. This removes the term T from f, and thus
f̃ ∈ MUL(n, k, t − 1). We continue recursively with f̃ until we recover all the terms of f. Membership
queries for f̃ can be simulated by membership queries for f because MQ_{f̃}(a) = MQ_f(a) + T(a). The
complexity of the interpolation is performing nt calls to the zero testing procedure. This gives
that there is an algorithm that, with probability at least 1 − δ, learns
f with O(2^k · nt · log(nt/δ)) membership queries.
In particular this gives that for k = O(log(nt)) there is a polynomial time probabilistic interpolation
algorithm that succeeds with high probability in learning f from membership queries.
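The reduction just described can be phrased as the following Python sketch (our own rendering of one
natural way to implement the term-finding step, not taken verbatim from the paper): a term is found by
greedily fixing variables to 0 as long as the restriction stays nonzero, so over GF(2) exactly one term
of f survives, namely the product of the variables that could not be zeroed out; that term is then
stripped off by adding it to the oracle's answers, and the process repeats. It reuses `probably_zero`
from the previous sketch.

```python
def eval_term(term, a):
    """A monotone term is a frozenset of variable indices; it is 1 iff all are 1."""
    return int(all(a[i] == 1 for i in term))

def find_term(mq, n, k, delta):
    """Find one monotone term of a nonzero black-box f in MUL(n, k, t).
    Whenever a further zero-restriction keeps the function nonzero, keep it;
    every surviving term must then contain all the un-fixed variables, so the
    product of those variables is the unique surviving term of f."""
    zeros = set()
    for i in range(n):
        trial = zeros | {i}
        restricted = lambda a, z=frozenset(trial): mq(
            tuple(0 if j in z else a[j] for j in range(n)))
        if not probably_zero(restricted, n, k, delta):
            zeros = trial
    return frozenset(j for j in range(n) if j not in zeros)

def learn_mul(mq, n, k, t, delta=0.01):
    """Recover all terms of f in MUL(n, k, t): repeatedly find a term of the
    residual f + (terms found so far) until the residual tests as zero."""
    terms = []
    residual = lambda a: (mq(a) + sum(eval_term(T, a) for T in terms)) % 2
    while not probably_zero(residual, n, k, delta):
        terms.append(find_term(residual, n, k, delta))
    return terms
```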
3.3 Learning MUL*(n, n, t)
We now give a PAC-learning algorithm that learns MUL*(n, n, t) under any distribution that
supports small terms. We first give the idea of the algorithm. The formal proof is after Theorem 1.
To PAC-learn f we randomly choose an assignment a and define f'(x) = f(x + a). A term in f of size k
will have on average k/2 monotone literals in f', and terms with k = ω(log t)
will have, with high probability, Ω(k) monotone literals.
We perform a zero-restriction, i.e., for each i, with probability 1/2 we substitute x_i ← 0 in f'.
Since a term of size k in f has on average k/2 monotone literals after the first shift f(x + a), in
the second restriction this term will be zero with probability (about) 1 − 2^{−k/2}. This probability
is large enough that, with high probability, all the terms
of size more than O(log t) will be removed by the second restriction. This ensures that with
high probability the projection f'' is in MUL*(n, O(log t), t), and therefore by Proposition 1 it is
in MUL(n, O(log t), poly(t)). Now we can use the algorithm in subsection 3.2 to learn f''.
Notice that for a multivariate polynomial h (with monotone terms), when we perform a zero
restriction we delete some of the monotone terms from h; therefore, the monotone terms of f''
are monotone terms of f'.
We continue to take zero-restrictions and collect terms of f 0 until the sum of terms that appear
in at least one restriction defines a multivariate polynomial which is a good approximation of f 0 .
We get a good approximation of f 0 with respect to any distribution that supports small terms
since we collect all the small (i.e. O(log t)) size terms.
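The outer loop just sketched can be written down as follows (again an illustrative Python rendering
under our own naming; the term-size cutoff and the number of restrictions are placeholders rather
than the constants of the analysis below). It reuses `learn_mul` and `eval_term` from the previous
sketch, learning each restriction with the MUL learner of subsection 3.2 and keeping only short terms.

```python
import math, random

def learn_small_terms(mq, n, t, eps, rounds=None, delta=0.01):
    """Sketch of the PAC learner for MUL*(n, n, t): shift by a random point a,
    repeatedly zero-restrict each variable with probability 1/2, learn every
    restriction, and collect all the short terms that ever show up."""
    a = tuple(random.randint(0, 1) for _ in range(n))
    shift = lambda x: tuple((xi + ai) % 2 for xi, ai in zip(x, a))
    shifted = lambda x: mq(shift(x))                      # f'(x) = f(x + a)
    k = max(2, int(8 * math.log2(max(2.0, t / eps))))     # O(log(t/eps)) cutoff
    rounds = rounds or t * k                              # placeholder count
    collected = set()
    for _ in range(rounds):
        alive = frozenset(i for i in range(n) if random.random() < 0.5)
        restricted = lambda x, A=alive: shifted(
            tuple(x[i] if i in A else 0 for i in range(n)))
        collected.update(T for T in learn_mul(restricted, n, k, t, delta)
                         if len(T) <= k)
    # terms were collected for f'(x) = f(x + a); undo the shift in the hypothesis
    return lambda x: sum(eval_term(T, shift(x)) for T in collected) % 2
```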
Theorem 1 There is a polynomial time probabilistic PAC-learning algorithm with membership
queries, that learns MUL ? (n; n; t) under any distribution that support small terms.
We now prove that the algorithm sketched above PAC-learns with membership queries any
multivariate polynomial with non-monotone terms under distributions that support small terms.
For the analysis of the correctness of the algorithm we first need to formalize the notion of
distributions that support small terms. The following is one way to define this notion.
Definition 1. Let D c;t;ffl be the set of distributions that satisfy the following: For every
c;t;ffl and any DNF f with t terms of size greater than c log(t=ffl) we have
Pr
Notice that all the constant bounded product distributions D where
for all i are in D 1= log(1=d);t;ffl . In what follows we will assume that c - 2 and ffl ! 1=2. We will use
Chernoff bound (see [ASE]).
independent random variables where Pr[X
Then for any a we have
Pr
be a multivariate polynomial where T are terms and jT 1
Our algorithm starts by choosing a random assignment a and defines f 0
All terms that are of size s (in f 0 ) will contain on average s=2 monotone literals. Therefore by
Chernoff bound we have
Lemma 7 With probability at least 1=2 all the terms in f 0 of size more than ffc log(t=ffl), contain
at least (ff=4)c log(t=ffl) monotone literals, where ff - 4 and c - 1.
Proof. Let T be any term of size ffc log(t=ffl). Let P (T ) be the number of monotone literals
in T . We have
Pr
Since the number of terms of f 0 is t and ffl ! 1=2 the result follows.2
With probability at least 1=2 all the terms of size more than 4c log(t=ffl) will contain at least
c log(t=ffl) monotone literals and all terms of size 8c log(t=ffl) will contain at least 2c log(t=ffl)
monotone literals. Now we split the function f 0 into 3 functions f 1 , f 2 and f 3 . The function
will contain all terms that are of size at most 4c log(t=ffl). The function
will contain all terms of size between 4c log(t=ffl) and 8c log(t=ffl) and the
function f all terms of size more than 8c log(t=ffl).
Similarly,
Our algorithm will find all the terms in f 1 , some of the terms in f 2 and none of the terms in
f 3 . Therefore we will need the following claim.
is a multivariate polynomial that contains some of the terms
in f 2 . Then for any D 2 D c;t;ffl we have
Pr
Proof. The error is
Pr
Let
~
~
~
is the part of the term that contains monotone literals and ~
is the part
that contains the nonmonotone literals. If -
that when we change -
~
to sum of monotone terms we get
Y
q2S
So every monotone term in f 2 will contain one of the terms -
Therefore we
can
where f 2;i are multivariate polynomial with monotone
terms. Since h is a multivariate polynomial that contains some of the terms in f 2 we have
. Since j -
by the definition of distribution that support small terms we have
The algorithm will proceed as follows. We choose
zero restrictions . Recall that a zero restriction p of f 0 is a function f 0 (p) where
with probability 1=2,x i / 0 and with probability 1=2 it remains alive. We will show that with
probability at least 1=2 we have the following:
(A) For every term in f 1 there is a restriction p i such that f 1 (p i ) contains this term.
(B) For every
We will regard A and B as events. Let T 1 be the set of terms in f 1 . We know that jT 1
and every term in T 1 is of size at most 4c log(t=ffl). Let T 3 be the set of terms in f 3 . We know
that the number of terms in T 3 is at most t and every term has at least 2c log(t=ffl) monotone
literals. We have
Pr[not
and, for c - 2,
Pr[not
Therefore we have both events with probability at least 1=2.
This shows that with probability at least 1=2 all the projections f(p i ) contains terms of
size at most 8c log(t=ffl). Therefore, the algorithm proceed by learning each projection f(p i
using the previous algorithm and collecting all the terms of size
2c log(t=ffl).2
The number of membership queries of the above algorithm is O((t=ffl) k n) for some constant k.
For the uniform distribution k - 19.
The above analysis algorithm can also be used to learn functions f of the form
are terms and + is the addition of a field F . These
functions can be computed as follows. For an assignment a,
This gives the
learnability of decision trees with leaves that contain elements from the field F .
4 Multivariate Interpolation
In this section we show how to generalize the above algorithm for any multivariate polynomial
over any field.
Let
ff2I
a ff x ff 1
be a multivariate polynomial over the field F where a ff 2 F and ff are integers. We will
denote the class of all multivariate polynomials over the field F and over the variables x
by F [x The number of terms of f is denoted by jf j. We have jf all a ff are
not zero. When d be the maximal
degree of variables in f , i.e., I ' [d] n where
are d constants where is the zero of the field. A univariate polynomial
over the field F of degree at most d can be interpolated from membership queries
as follows. Suppose
where \Delta (i) (f) is the coefficient of x i in f in its polynomial representation. Then
This is a linear system of equations and can be solved for \Delta (i) (f ), as follows,
det
is the Vandermonde matrix.
If f is a multivariate polynomial then f can be written as
where \Delta (i) (f) is a multivariate polynomial over the variables x . We can still use (1) to
find \Delta (i) (f) by replacing each f(fl i ) with Notice that from the first equation in
the system, since
From (1) a membership query for Δ^{(i)} can be simulated using d queries to
f . From (2), a membership query to \Delta (0) can be simulated using one membership query to f .
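To illustrate the mechanics of (1) and (2), here is a small Python sketch (our own, working over a
prime field GF(p) purely for concreteness, with p larger than the degree) that recovers the coefficients
Δ^{(0)}(f), ..., Δ^{(d)}(f) of x_1 by solving a Vandermonde system; a membership query to Δ^{(i)} on the
remaining variables is simulated by membership queries to f at d + 1 points. The choice of the points
0, 1, ..., d is our assumption for the sketch, not the paper's.

```python
def solve_vandermonde_gfp(points, values, p):
    """Solve sum_j c_j * x**j = value at each x in points, over GF(p),
    by Gauss-Jordan elimination; returns the coefficients (c_0, ..., c_d)."""
    m = len(points)
    A = [[pow(x, j, p) for j in range(m)] + [v % p] for x, v in zip(points, values)]
    for col in range(m):
        piv = next(r for r in range(col, m) if A[r][col] % p)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], p - 2, p)
        A[col] = [(e * inv) % p for e in A[col]]
        for r in range(m):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(er - f * ec) % p for er, ec in zip(A[r], A[col])]
    return [A[r][m] for r in range(m)]

def delta_query(mq, i, rest, d, p):
    """Simulate a membership query to Delta^(i)(f), the coefficient of x_1**i
    (itself a polynomial in x_2, ..., x_n), at the point `rest`, by querying f
    at (gamma, rest) for gamma = 0, 1, ..., d and interpolating in x_1."""
    gammas = list(range(d + 1))
    vals = [mq((g,) + tuple(rest)) for g in gammas]
    return solve_vandermonde_gfp(gammas, vals, p)[i]
```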
We now extend the \Delta operators as follows: for
Here \Delta always operates on the variable with the smallest index. So \Delta i 1 operates on x 1 in f to
give a function f 0 that depends on x operates on x 2 in f 0 and so on. We
will also write x i for the term x
k . The weight of i, denoted by wt(i), is the number of
nonzero entries in i.
The operator \Delta i (f) gives the coefficient of x i
1 in f when represented in F [x the
operator gives the coefficient of x i when f is represented in
F [x
Suppose I ' [d] k be such that
I are the k-suffixes of all terms of f . Here the k-suffix of a term x
n is x
k . Since
I if and only if x i is a k-suffix of some term in f , it is clear that jIj - jf j and we must have
i2I
We now will show how to simulate membership queries for using a
polynomial number (in n and jf j) of membership queries to f . Suppose we want to find
for some c 2 F n\Gammak using membership queries to f . We take r assignments -
ask membership queries for (-fl i ; c) for all
Now
then the above linear system of equations can be solved in time
The solution gives f)(c). The existence of -
where the above determinant is not zero will be
proven in the next section.
5 Reducing Learning to Zero-testing (for any Field)
In this section we show how to use the results from the previous section to learn multivariate
polynomials.
Let MULF (n; k; t; d) be the set of all multivariate polynomials over the field F over n variables
with t terms where each term is of size k and the maximal degree of each variable is at most d.
We would like to answer the following questions. Let f 2 MUL F (n; k; t; d).
1. Is there a polynomial time algorithm that uses membership queries to f and decides whether
2. Given i - n. Is there a polynomial time algorithm that uses membership queries to f and
decides whether f depends on x i ?
3. Given fi t. Is there an algorithm that
runs in polynomial time and finds such that
r
4. Is there a polynomial time algorithm that uses membership queries to f and identifies f?
When we say polynomial time we usually mean polynomial time in n; k; t and d but all the
results of this section hold for any time complexity T if we allow a blow up of poly(n; t) in the
complexity.
We show that 1,2 and 4 are equivalent and 1 ) 3. Obviously 2 2. We
will show
To prove 1 notice that f 2 MUL F (n; k; t; d) is independent of x i if and only if
is the coefficient of x i in f we have g 2 MUL F (n; k; t; d). Therefore
we can zero-test g in polynomial time.
To prove 1 s be a zero-test for functions in MULF (n; k; t; d), that is, run the
algorithm that zero-test for the input 0 and take all the membership queries in the algorithm
. We now have f 2 MUL F (n; k; t; d) is 0 if and only if f(fl i
Consider the s \Theta r matrix with rows [fl i 1
]. If this matrix have rank r then we choose r
linearly independent rows. If the rank is less than r then its columns are dependent and therefore
there are constants c i , r such that
r
s:
This shows that the multivariate polynomial
in MUL F (n; k; t; d) we get a contradiction.
Now we show that 1+2+3 ) 4. This will use results from the previous section. The algorithm
first checks whether f depends on x 1 , and if yes it generates a tree with a root labeled with x 1
that has d children. The ith child is the tree for \Delta i (f ). If the function is independent of x 1 it
builds a tree with one child for the root. The child is \Delta 0 (f ). We then recursively build the tree
for the children. The previous section shows how to simulate membership queries at each level
in polynomial time. This algorithm obviously works and it correctness follows immediately from
the previous section and (1)-(3).
The complexity of the algorithm is the size of the tree times the membership query simulation.
The size of the tree at each level is bounded by the number terms in f , and the depth of the
tree is bounded by n, therefore, the tree has at most O(nt) nonzero nodes. The total number of
nodes is at most a factor of d from the nonzero nodes. Thus the algorithm have complexity the
same as zero testing with a blow up of poly(n; t; d) queries and time.
Now that we have reduced the problem to zero testing we will investigate in the next section
the complexity of zero testing of MULF (n; k; t; d).
6 Zero-test of MULF (n; k; t; d)
In this section we will study the zero testing of MUL F (n; k; ?; d) when the number of terms is
unknown and might be exponentially large. The time complexity for the zero testing should be
polynomial in n and d (we have k ! n so it is also polynomial in k). We will show the following
Theorem 2. The class MUL F (n; k; ?; d), where d - cjF j, is zero testable in randomized
polynomial time in n, d and t (here t is not the number of terms in the target) for some constant
only if
d
The algorithm for the zero testing is simply to randomly and uniformly choose poly(n; d) points
a i from F n and query f at a i , and receive f(a i ). If for all the points a i , f is zero then with high
This theorem implies
Theorem 3. The class MULF (n; k; t; d) where d ! cjF j for some constant c is learnable in
randomized polynomial time (in n, d and t) from membership queries if
Proof of Theorem 2 Upper Bound. Let OE(n; k; d) the maximal possible number of roots
of a multivariate polynomial in MUL F (n; k; ?; d). We will show the following facts
1. OE(n; k; d) - jF j n\Gammak OE(k; k; d), and
2. OE(k; k; d) - jF
3. OE(1;
Both facts implies that if f 6j 0, when we randomly uniformly choose an assignment a 2 F n , we
have
Pr a
[f(a)
For d - cjF j we have that this probability is bounded by 1
poly(n;d;t) . Therefore the expected
running time to detect that f is not 0 is poly(n; d; t).
It remain to prove conditions (1) and (2). To prove (1) let f 2 MUL F (n; k; ?; d) with maximal
number of roots. Let m be a term in f with a maximal number of variables. Suppose, without
loss of generality,
k . For any substitution a of the variables x
the term m will stay alive in the projection because it is maximal in f .
Since g has at most OE(k; k; d) roots the result (1) follows.
The proof of (2) is similar to the proof of Schwartz [Sch80] and Zippel [Zip79]. Let f 2
MUL F (k; k; ?; d). Write f as polynomial in F [x
Let t be the number of roots of f d . Since f d d) we have
For assignments a for x we have f d (a) 6= 0. For those assignments we get a
polynomial in x 1 of degree d that has at most d roots for x 1 . For t assignments a for x
we have f d is zero and then the possible values of x 1 (to get a root for f) is bounded by jF j.
This implies
The theorem follows by induction on k. 2
Proof of Theorem 2 Lower Bound Let A be a randomized algorithm that zero tests
asks membership queries to f and if f 6j 0 it returns with
probability at least 2=3 the answer "NO". If all the membership queries in the algorithm returns
0 the algorithm returns the answer "YES" indicating that f j 0.
We run the algorithm for f j 0. Let D be the distributions that the
membership assignments a a l are chosen to zero test f . Notice that if all membership
queries answers are 0 while running the algorithm for f j 0 it would again choose membership
queries according to the distributions D l . Now randomly and uniformly choose fl i;j 2 F ,
Y
d
Y
otherwise. Note that for
any input a we have that
Therefore
This shows that there exists f ? 6j 0 such that running algorithm A for f ? it will answer the
wrong answer "YES" with probability more than 2=3. This is a contradiction. 2
--R
Machine Learning
The probabilistic method.
A deterministic algorithm for sparse multivariate polynomial interpolation
Exact learning of boolean functions via the monotone theory.
A Note on Learning Multivariate Polynomials under the Uniform Distri- bution
On the applications of multiplicity automata in learning.
Learning sat-k-DNF formulas from membership queries
Learning behaviors of automata from multiplicity and equivalence queries.
On zero-testing and interpolation of k-sparse multivariate polynomials over finite fields
An efficient membership-query algorithm for learning DNF with respect to the uniform distribution
On Learning DNF and related circuit classes from helpfull and not-so-helpful teachers
On using the Fourier transform to learn disjoint DNF.
Learning decision trees using the Fourier spectrum.
Randomized interpolation and approximation of sparse polynomials.
A polynomial time learning algorithm for recognizable series.
Interpolation and approximation of sparse multivariate polynomials over GF(2).
Fast probabilistic algorithms for verification of polynomial identities.
Learning sparse multivariate polynomial over a field with queries and counterexamples.
A theory of the learnable.
Probabilistic algorithms for sparce polynomials.
Interpolating polynomials from their values.
--TR
--CTR
Homin K. Lee , Rocco A. Servedio , Andrew Wan, DNF are teachable in the average case, Machine Learning, v.69 n.2-3, p.79-96, December 2007 | decision tree learning;multivariate polynomial;learning interpolation |
586974 | Simple Confluently Persistent Catenable Lists. | We consider the problem of maintaining persistent lists subject to concatenation and to insertions and deletions at both ends. Updates to a persistent data structure are nondestructive---each operation produces a new list incorporating the change, while keeping intact the list or lists to which it applies. Although general techniques exist for making data structures persistent, these techniques fail for structures that are subject to operations, such as catenation, that combine two or more versions. In this paper we develop a simple implementation of persistent double-ended queues (deques) with catenation that supports all deque operations in constant amortized time. Our implementation is functional if we allow memoization. | Introduction
Over the last fifteen years, there has been considerable development
of persistent data structures, those in which not only the current version,
but also older ones, are available for access (partial persistence) or updating (full
persistence). In particular, Driscoll, Sarnak, Sleator, and Tarjan [5] developed efficient
general methods to make pointer-based data structures partially or fully persistent,
and Dietz [3] developed an efficient general method to make array-based structures
fully persistent.
These general methods support updates that apply to a single version of a structure
at a time, but they do not accommodate operations that combine two different
versions of a structure, such as set union or list catenation. Driscoll, Sleator, and
Tarjan [4] coined the term confluently persistent for fully persistent structures that
support such combining operations. An alternative way to obtain persistence is to
use purely functional programming. We take here an extremely strict view of pure
functionality: we disallow lazy evaluation, memoization, and other such techniques.
For list-based data structure design, purely functional programming amounts to using
only the LISP functions cons, car, cdr. Purely functional data structures are
automatically persistent, and indeed confluently persistent.
A simple but important problem in data structure design that makes the issue of
confluent persistence concrete is that of implementing persistent double-ended queues
(deques) with catenation. A series of papers [2, 4] culminated in the work of Kaplan
and Tarjan [11, 10], who developed a confluently persistent implementation of deques
with catenation that has a worst-case constant time and space bound for any deque
operation, including catenation. The Kaplan-Tarjan data structure and its precursors
obtain confluent persistence by being purely functional.
If all one cares about is persistence, purely functional programming is unnecessarily
restrictive. In particular, Okasaki [14, 15, 16] observed that the use of lazy evaluation
in combination with memoization can lead to efficient functional (but not purely
functional in our sense) data structures that are confluently persistent. In order to
analyze such structures, Okasaki developed a novel kind of debit-based amortization.
Using these techniques and weakening the time bound from worst-case to amortized,
he was able to considerably simplify the Kaplan-Tarjan data structure, in particular
to eliminate its complicated skeleton that encodes a tree extension of a redundant
digital numbering system.
In this paper we explore the problem of further simplifying the Kaplan-Tarjan
result. We obtain a confluently persistent implementation of deques with catenation
that has a constant amortized time bound per operation. Our structure is
substantially simpler than the original Kaplan-Tarjan structure, and even simpler
than Okasaki's catenable deques: whereas Okasaki requires efficient persistent deques
without catenation as building blocks, our structure is entirely self-contained. Furthermore,
our analysis uses a standard credit-based approach. We give two alternative,
but closely related, implementations of our method. The first uses memoization. The
second, which saves a small constant factor in time and space, uses an extension of
memoization in which any expression can replace an equivalent expression.
The remainder of the paper consists of five sections. In Section 2, we introduce
terminology and concepts. In Section 3, we illustrate our approach by developing
a persistent implementation of deques without catenation. In Section 4, we extend
our approach to handle stacks with catenation. In Section 5, we develop our solution
for deques with catenation. We conclude in Section 6 with some remarks and open
problems. An extended abstract of this work appeared in [9].
2. Preliminaries. The objects of our study are lists. As in [11, 10] we allow the
following operations on lists:
makelist(x): return a new list containing the single element x.
push(x, L): return a new list formed by adding element x to the front of list L.
pop(L): return a pair whose first component is the first element on list L and whose
second component is a list containing the second through last elements of L.
inject(x, L): return a new list formed by adding element x to the back of list L.
eject(L): return a pair whose first component is a list containing all but the last
element of L and whose second component is the last element of L.
catenate(L, R): return a new list formed by catenating L and R, with L first.
We seek implementations of these operations (or specific subsets of them) on
persistent lists: any operation is allowed on any previously constructed list or lists
at any time. For discussions of various forms of persistence see [5]. A stack is a list
on which only push and pop are allowed. A queue is a list on which only inject
and pop are allowed. A steque (stack-ended queue) is a list on which only push,
pop, and inject are allowed. Finally, a deque (double-ended queue) is a list on which
all four operations push, pop, inject, and eject are allowed. For any of these
four structures, we may or may not allow catenation. If catenation is allowed, push
and inject become redundant, since they are special cases of catenation, but it is
sometimes convenient to treat them as separate operations because they are easier to
implement than general catenation.
We say a data structure is purely functional if it can be built and manipulated
using the LISP functions car, cons, cdr. That is, the structure consists of a set
of immutable nodes, each either an atom or a node containing two pointers to other
nodes, with no cycles of pointers. The nodes we use to build our structures actually
contain a fixed number of fields; reducing our structures to two fields per node by
adding additional nodes is straightforward. Various nodes in our structure represent
lists.
To obtain our results, we extend pure functionality by allowing memoization,
in which a function is evaluated only once on a node; the second time the same
function is evaluated on the same node, the value is simply retrieved from the previous
computation. In all our constructions, there are only a constant number of memoized
functions (one or two). We can implement memoization by having a node point to
the results of applying each memoized function to it. Initially each such pointer is
undefined. The first function evaluation fills in the appropriate pointer to indicate the
result. Subsequent evaluations merely follow the pointer to the result, which takes
constant time.
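As a concrete illustration of this scheme, the following minimal OCaml sketch shows one way such a memo pointer could be realized; the node type and the names (gp_memo, memoize) are illustrative assumptions, not part of the structure described above.

type node = {
  contents : int list;              (* placeholder for the actual fields of the node *)
  mutable gp_memo : node option;    (* result of the memoized function, initially undefined *)
}

(* Evaluate a memoized function on a node: the first call fills in the
   pointer; subsequent calls just follow it in constant time. *)
let memoize (compute : node -> node) (n : node) : node =
  match n.gp_memo with
  | Some r -> r
  | None ->
      let r = compute n in
      n.gp_memo <- Some r;
      r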
We also consider the use of a more substantial extension of pure functionality,
in which we allow the operation of replacing a node in a structure by another node
representing the same list. Such a replacement can be performed in an imperative
setting by replacing all the fields in the node, for instance in LISP by using rplaca
and rplacd. Replacement can be viewed as a generalization of memoization. In our
structures, any node is replaced at most twice, which means that all our structures
can be implemented in a write-once memory. (It is easy to convert an algorithm
that overwrites any field only a fixed constant number of times into a write-once
algorithm, with only a constant-factor loss of efficiency.) The use of overwriting
instead of memoization saves a small constant factor in running time and storage
space and slightly simplifies the amortized analysis.
To perform amortized analysis, we use a standard potential-based framework.
We assign to each configuration of the data structure (the totality of nodes currently
existing) a potential. We define the amortized cost of an operation to be its actual
cost plus the net increase in potential caused by performing the operation. In our
applications, the potential of an empty structure is zero and the potential is always
non-negative. It follows that, for any sequence of operations starting with an empty
structure, the total actual cost of the operations is bounded above by the sum of
their amortized costs. See the survey paper [17] for a more complete discussion of
amortized analysis.
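In symbols, writing the potential after the i-th operation as Phi_i and its actual cost as c_i, the argument above is the standard telescoping bound (using that the potential of the empty structure is zero and the potential is always non-negative):

\[
  \hat{c}_i \;=\; c_i + \Phi_i - \Phi_{i-1},
  \qquad
  \sum_{i=1}^{k} c_i \;=\; \sum_{i=1}^{k} \hat{c}_i \;-\; \Phi_k \;+\; \Phi_0
  \;\le\; \sum_{i=1}^{k} \hat{c}_i .
\]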
3. Noncatenable Deques. In this section we describe an implementation of
persistent noncatenable deques with a constant amortized time bound per operation.
The structure is based on the analogous Kaplan-Tarjan structure [11, 10] but is much
simpler. The result presented here illustrates our technique for doing amortized analysis
of a persistent data structure. At the end of the section we comment on the
relation between the structure proposed here and previously existing solutions.
3.1. Representation. Here and in subsequent sections we say a data structure
is over a set A if it stores elements from A. Our representation is recursive. It
is built from bounded-size deques called buffers, each containing at most three ele-
ments. Buffers are of two kinds: prefixes and suffixes. A nonempty deque d over A
is represented by an ordered triple consisting of a prefix over A, denoted by pr(d); a
(possibly empty) child deque of ordered pairs over A, denoted by c(d); and a suffix
over A, denoted by sf(d). Each pair consists of two elements from A. The child deque
c(d), if nonempty, is represented in the same way. We define the set of descendants
of a deque d in the standard way: namely, c 0 (d) = d, and c i+1 (d) = c(c i (d)),
provided these deques exist.
The order of elements in a deque is defined recursively to be the one consistent
with the order of each triple, each buffer, each pair, and each child deque. Thus, the
order of elements in a deque d is first the elements of pr(d), then the elements of each
pair in c(d), and finally the elements of sf(d).
In general the representation of a deque is not unique: the same sequence of
elements may be represented by triples that differ in the sizes of their prefixes and
suffixes, as well as in the contents and representations of their descendant deques.
Whenever we refer to a deque d we actually mean a particular representation of d,
one that will be clear from the context.
The pointer representation for this representation is the obvious one: a node
representing a deque d contains pointers to pr(d), c(d), and sf(d). Note that the
pointer structure of d is essentially a linked list of its descendants, since c i (d) contains
a pointer to c i+1 (d), for each i.
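The recursive shape of this representation can be summarized by the following OCaml type sketch; the names are ours, buffers are shown simply as lists of at most three elements, and the nesting of the child deque captures the "deque of pairs" recursion.

type 'a buffer = 'a list                  (* invariant: at most three elements *)

type 'a deque =
  | Empty
  | Triple of {
      prefix : 'a buffer;                 (* pr(d) *)
      child  : ('a * 'a) deque;           (* c(d): a deque of pairs over 'a *)
      suffix : 'a buffer;                 (* sf(d) *)
    }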
3.2. Operations. Implementing the deque operations is straightforward, except
for maintaining the size bounds on buffers. Specifically, a push on a deque is easy
unless its prefix is of size three, a pop on a deque is easy unless its prefix is empty,
and symmetric statements hold for inject and eject. We deal with buffer overflow
and underflow in a proactive fashion, first fixing the buffer so that the operation to
be performed cannot violate its size bounds and then actually doing the operation.
The details are as follows.
We define a buffer to be green if it contains one or two elements, and red if it
contains zero or three. We define two memoized functions on a deque: gp, which
constructs a representation of the same list but with a green prefix; and gs, which
constructs a representation of the same list with a green suffix. We only apply gp (gs,
respectively) to a list whose prefix (suffix) is red and can be made green. Specifically,
for gp, if the prefix is empty, the child deque must be nonempty, and symmetrically
for gs. Below we give implementations of push, pop, and gp; the implementations
for inject, eject, and gs are symmetric. We denote a deque with prefix p, child
deque c, and suffix s by [p; c; s]. As mentioned in Section 2, we can implement the
memoization of gp and gs by having each node point to the nodes resulting from
applying gp and gs to it; initially, such pointers are undefined.
push(x; d): If jpr(d)j = 3, let d = gp(d). Return the deque obtained from d by adding x to
the front of pr(d).
pop(d): If pr(d) is empty and c(d) is not, let e = gp(d); otherwise let e = d. If pr(e)
is nonempty, let (x; p) = pop(pr(e)) and return the pair (x; [p; c(e); sf(e)]). Otherwise
(c(e) must be empty), let (x; s) = pop(sf(e)) and return the pair (x; [;; ;; s]).
gp(d): If jpr(d)j = 3, let x, y, z be the three elements in pr(d). Let p be a prefix
containing only x, and let c = push((y; z); c(d)). Return [p; c; sf(d)]. Otherwise
(pr(d) is empty and c(d) is not), let ((x; y); c) = pop(c(d)) and let p be a prefix
containing x followed by y. Return [p; c; sf(d)].
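The prefix-side discipline can also be written as the following hedged OCaml fragment; memoization and the suffix side are omitted, so this is a simplified sketch of the operations above rather than a drop-in implementation.

type 'a deque =
  | Empty
  | Triple of { prefix : 'a list; child : ('a * 'a) deque; suffix : 'a list }

(* Polymorphic recursion is needed because the child deque stores pairs. *)
let rec push : 'a. 'a -> 'a deque -> 'a deque = fun x d ->
  match d with
  | Empty -> Triple { prefix = [ x ]; child = Empty; suffix = [] }
  | Triple { prefix = [ a; b; c ]; child; suffix } ->
      (* red prefix (three elements): as in gp, demote a pair into the child,
         leaving a green one-element prefix, and then push onto the result *)
      push x (Triple { prefix = [ a ]; child = push (b, c) child; suffix })
  | Triple { prefix; child; suffix } ->
      Triple { prefix = x :: prefix; child; suffix }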
3.3. Analysis. The amortized analysis of this method relies on the memoization
of gp and gs. We call a node representing a deque secondary if it is returned by a
call to gp or gs and primary otherwise. If a secondary node y is constructed by a call
gp(x) (gs(x), respectively), the only way to access y later is via another call gp(x)
(gs(x), respectively): no secondary node is returned as the result of a push, pop,
inject, or eject operation. This means that gp and gs are called only on primary
nodes.
We divide the nodes representing deques into three states: such a node is rr if
both its buffers are red, gr if exactly one of its buffers is red, and gg if both its buffers
are green. We subdivide the rr and gr states: an rr node is rr0 if neither gp nor gs
has been applied to it, rr1 if exactly one of gp and gs has been applied to it, and
rr2 if both gp and gs have been applied to it; a gr node is gr0 if neither gp nor gs
has been applied to it, and gr1 otherwise. By the discussion above, every secondary
node is gr0 or gg. We define #rr0, #rr1, and #gr0 to be the numbers of primary
nodes in states rr0, rr1, and gr0, respectively. We define the potential of a collection
of nodes representing deques to be 4#rr0 + 2#rr1 + #gr0.
A call to push is either terminal or results in a call to gp, which in turn calls
push. Similarly, a call to pop is either terminal or results in a call to gp, which in
turn calls pop. We charge the O(1) time spent in a call to gp (exclusive of the inner
call to push or pop) to the push or pop that calls gp. A call to push results in
a sequence of recursive calls to push (via calls to gp), of which the bottommost is
terminal and the rest are nonterminal. A nonterminal push has one of the following
effects: it converts a primary rr0 node to rr1 and creates a new primary gr0 node
(the result of the push) and a new secondary gr0 node (the result of the call to gp);
it converts a primary rr1 node to rr2 and creates a new primary gr0 node and a
new secondary gr0 node; or, it converts a primary gr0 node to gr1 and creates a new
primary gg node and a new secondary gg node. In each case the total potential drops
by one, paying for the time needed for the push (excluding the recursive call). A
terminal push takes O(1) time, creates O(1) new nodes, and increases the potential
by O(1). We conclude that push takes O(1) amortized time. Analogous arguments
apply to pop, inject, and eject, giving us the following theorem:
Theorem 3.1. Each of the operations push, pop, inject, and eject defined
above takes O(1) amortized time.
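Writing the potential as 4#rr0 + 2#rr1 + #gr0, as above, the three nonterminal cases can be checked directly; secondary nodes contribute nothing, since only primary nodes are counted:

\[
\begin{aligned}
\text{rr0} \to \text{rr1},\ \text{new primary gr0}: &\quad \Delta\Phi = -4 + 2 + 1 = -1,\\
\text{rr1} \to \text{rr2},\ \text{new primary gr0}: &\quad \Delta\Phi = -2 + 1 = -1,\\
\text{gr0} \to \text{gr1},\ \text{new primary gg}: &\quad \Delta\Phi = -1 + 0 = -1.
\end{aligned}
\]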
3.4. Implementation Using Overwriting. With the memoized implementation
described above, a primary rr node can give rise to two secondary gr nodes
representing the same list; a primary gr node can give rise to a secondary gg node
representing the same list. These redundant representations exist simultaneously. A
gr representation, however, dominates an rr representation for performing deque
operations, and a gg representation dominates a gr representation. This allows us to
improve the efficiency of the implementation by using overwriting in place of memoization:
when gp is called on a node, it overwrites the contents of the node with the
results of the gp computation, and similarly for gs. Then only one representation of
a list exists at any time, and it evolves from rr to gr to gg (via one of two alternative
paths, depending on whether gp or gs is called first). Each node now needs only
three fields (for prefix, child deque, and suffix) instead of five (two extra for gp and
gs).
Not only does the use of overwriting save a constant factor in running time and
storage space, but it also simplifies the amortized analysis, as follows. We define
#rr and #gr to be the numbers of nodes in states rr and gr, respectively. (There
are now no secondary nodes.) We define the potential of a collection of nodes to be
3#rr + #gr. A nonterminal push has one of the following effects: it converts an rr
node to gr and creates a new gr node, or converts a gr node to gg and creates a
new gg node. In either case it reduces the potential by one, paying for the O(1) time
required by the push (excluding the recursive call). A terminal push takes O(1) time
and can increase the potential by O(1). We conclude that push takes O(1) amortized
time. Similar arguments apply to pop, inject, and eject.
3.5. Related Work. The structure just described is based on the Kaplan-Tarjan
structure of [10, Section 4], but simplifies it in three ways. First, the skeleton of our
structure (the sequence of descendants) is a stack; in the Kaplan-Tarjan structure,
this skeleton must be partitioned into a stack of stacks in order to support worst-case
constant-time operations (via a redundant binary counting mechanism). Second, the
recursive changes to the structure to make its nodes green are one-sided, instead of
two-sided: in the original structure, the stack-of-stacks mechanism requires coordination
to keep both sides of the structure in related states. Third, the maximum buffer
size is reduced, from five to three. In the special case of a steque, the maximum size
of the suffix can be further reduced, to two. In the special case of a queue, both the
prefix and the suffix can be reduced to maximum size two.
There is an alternative, much older approach that uses incremental recopying
to obtain persistent deques with worst-case constant-time operations. See [7] for a
discussion of this approach. The incremental recopying approach yields an arguably
simpler structure than the one presented here, but our structure generalizes to allow
catenation, which no one knows how to implement efficiently using incremental
recopying. Also, our structure can be extended to support access, insertion, and deletion
d positions away from the end of a list in O(log d) amortized time, by applying
the ideas in [12].
4. Catenable Steques. In this section we show how to extend our ideas to
support catenation. Specifically, we describe a data structure for catenable steques
that achieves an O(1) amortized time bound for push, pop, inject, and catenate.
The data structure is based on the same recursive decomposition of lists as that in
Section 5 of [10]. The pointer structure that we need here is much simpler than that
in [10], and the analysis is amortized, following the framework outlined in Section 2
and used in Section 3.
4.1. Representation. Our structure is similar to the structure of Section 3,
but with slightly different definitions of the component parts. As in Section 3, we use
buffers of two kinds: prefixes and suffixes. Each prefix contains two to six elements
and each suffix contains one to three elements. A nonempty steque d over A is
represented either by a suffix sf(d) only, or by an ordered triple consisting of a prefix
pr(d) over A, a child steque c(d) of pairs over A, and a suffix sf(d) over A. In contrast
to Section 3, a pair over A is defined to be an ordered pair containing a prefix and a
possibly empty steque of pairs over A. Observe that this definition adds an additional
kind of recursion (pairs store steques) to the structure of Section 3. This extra kind
of recursion is what allows efficient catenation.
The order of elements in a steque is the one consistent with the order of each
triple, each buffer, each pair, each steque within a pair, and each child steque. As in
Section 3, there can be many different representations of a steque containing a given
list of elements; when speaking of a steque, we mean a particular representation of it.
The pointer structure for this representation is straightforward. Each triple is
represented by a node containing three pointers: to a prefix, a child steque, and a
suffix. Each pair is represented by a node containing two pointers: to a prefix and a
steque.
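The extra level of recursion can again be made explicit with an OCaml type sketch; the names are ours, and the size invariants appear only as comments.

type 'a buffer = 'a list

type 'a steque =
  | Suffix_only of 'a buffer                    (* 1..3 elements *)
  | Triple of {
      prefix : 'a buffer;                       (* 2..6 elements *)
      child  : 'a pair steque;                  (* steque of pairs over 'a *)
      suffix : 'a buffer;                       (* 1..3 elements *)
    }
and 'a pair = Pair of 'a buffer * 'a pair steque
    (* a pair: a prefix together with a possibly empty steque of pairs *)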
4.2. Operations. The implementation of the steque operations is much like the
implementation of the noncatenable deque operations presented in Section 3.2. We
call a prefix red if it contains either two or six elements, and green otherwise. We call a
suffix red if it contains three elements and green otherwise. The prefix in a suffix-only
steque is considered to have the same color as the suffix. We define two memoized
functions, gp and gs, which produce green-prefix and green-suffix representations of
a steque, respectively. Each is called only when the corresponding buffer is red and
can be made green. We define push, pop, and inject to call gp or gs when necessary
to obtain a green buffer. In the definitions below, we represent a steque with prefix p,
child steque c, and suffix s by [p; c; s].
Case 1: Steque d is represented by a triple. If
Case 2: Steque d is represented by a suffix only. If create a prefix p
containing x and the first two elements of sf(d), create a suffix s containing the last
element of sf(d), and return [p; ;; s]. Otherwise, create a suffix s by pushing x onto
sf(d) and return [;; ;; s].
Case 1: Steque d is represented by a triple. If
Case 2: Steque d is represented by a suffix only. If create a suffix s
containing x, and return [sf(d); ;; s]. Otherwise, create a suffix s by injecting x into
sf(d) and return [;; ;; s].
Case 1: d 1 and d 2 are represented by triples. First, catenate the buffers sf(d 1 )
and pr(d 2 ) to obtain p. Now, calculate c 0 as follows: If jpj 5 then let c
9. Create two new prefixes p 0 and
containing the first four elements of p and p 00 containing the remaining
elements. Let c In either case, return
Case 2: d 1 or d 2 is represented by a suffix only. Push or inject the elements of the
suffix-only steque one-by-one into the other steque.
Note that both push and catenate produce valid steques even when their second
arguments are steques with prefixes of length one. Although such steques are not
normally allowed, they may exist transiently during a pop. Every such steque is
immediately passed to push or catenate, and then discarded, however. In order to
define the pop, gp, and gs operations, we define a naive-pop operation that simply
pops its steque argument without making sure that the result is a valid steque.
If d is represented by a triple, let (x; return the
consists of a suffix only, let (x;
the pair (x; ;) if
Case 1: Steque d is represented by a suffix only or jpr(d)j > 2. Return naive-pop(d).
Case 2: Steque d is represented by a triple, x be the
first element on pr(d) and y the second. If jsf(d)j < 3, push y onto sf(d) to form s and
return (x; [;; ;; s]). Otherwise (jsf(d)j = 3), form p from y and the first two elements
on sf(d), form s from the last element on sf(d), and return (x; [p; ;; s]).
Case 3: Steque d is represented by a triple,
create two new prexes p and p 0 by splitting pr(d) equally
in two. Let c
c(d) 6= ;), proceed as follows. Inspect the rst pair (p; d 0 ) in c(d). If jpj 4 or d 0 is
not empty, let ((p; d 0
Now inspect p.
Case 1: p contains at least four elements. Pop the rst two elements from p to form
inject these two elements into pr(d) to obtain p 0 . Let c
Return
Case 2: p contains at most three elements. Push the two elements in pr(d) onto p
to obtain p 0 . Let c is nonempty, or c
Return
(Steque d is represented by a triple with contain the rst
two elements of sf(d) and s the last element on sf(d). Let c
Return
4.3. Analysis. The analysis of this method is similar to the analysis in Section
3.3. We define primary and secondary nodes, node states, and the potential function
exactly as in Section 3.3: the potential function, as there, is 4#rr0 + 2#rr1 + #gr0,
where #rr0, #rr1, and #gr0 are the numbers of primary nodes in states rr0, rr1,
and gr0, respectively.
As in Section 3.3, we charge the O(1) cost of a call to gp or gs (excluding the cost
of any recursive call to push, pop, or inject) to the push, pop, or inject that calls
gp or gs. The amortized costs of push and inject are O(1) by an argument identical
to that used to analyze push in Section 3.3. Operation catenate calls push and
inject a constant number of times and creates a single new node, so its amortized
cost is also O(1).
To analyze pop, assume that a call to pop recurs to depth k (via intervening calls
to gp). By an argument analogous to that for push, each of the first k - 1 calls pays
for itself by decreasing the potential by one. The terminal call to pop can result in a
call to either push or catenate, each of which has O(1) amortized cost. It follows
that the overall amortized cost of pop is O(1), giving us the following theorem:
Theorem 4.1. Each of the operations push, pop, inject, and catenate defined
above takes O(1) amortized time.
We can improve the time and space efficiency of the steque data structure by
constant factors by using overwriting in place of memoization, exactly as described in
Section 3.4. If we do this, we can also simplify the amortized analysis, again exactly
as described in Section 3.4.
4.4. Related work. The structure presented in this section is analogous to the
Kaplan-Tarjan structure of [10, Section 5] and the structure of [8, Section 7], but
simplifies them as follows. First, the buffers are of constant-bounded size, whereas
the structure of [10, Section 5] uses noncatenable steques as buffers, and the structure
of [8, Section 7] uses noncatenable stacks as buffers. These buffers in turn must
be represented as in Section 3 of this paper or by using one of the other methods
mentioned there. In contrast, the structure of the present section is entirely self-contained.
Second, the skeleton of the present structure is just a stack, instead of
a stack of stacks as in [10] and [8]. Third, the changes used to make buffers green
are applied in a one-sided, need-driven way; in [10] and [8], repairs must be made
simultaneously to both sides of the structure in carefully chosen locations.
Okasaki [14] has devised a different and somewhat simpler implementation of
confluently persistent catenable steques that also achieves an O(1) amortized bound
per operation. His solution obtains its efficiency by (implicitly) using a form of path
reversal [18] in addition to lazy evaluation and memoization. Our structure extends to
the double-ended case, as we shall see in the next section; whether Okasaki's structure
extends to this case is an open problem.
5. Catenable Deques. In this section we show how to extend our ideas to
support all five list operations. Specifically, we describe a data structure for catenable
deques that achieves an O(1) amortized time bound for push, pop, inject, eject,
and catenate. Our structure is based upon an analogous structure of Okasaki [16],
but simplified to use constant-size buffers.
5.1. Representation. We use three kinds of buffers: prefixes, middles, and
suffixes. A nonempty deque d over A is represented either by a suffix sf(d) or by a
5-tuple that consists of a prefix pr(d), a left deque of triples ld(d), a middle md(d),
a right deque of triples rd(d), and a suffix sf(d). A triple consists of a first middle
buffer, a deque of triples, and a last middle buffer. One of the two middle buffers
in a triple must be nonempty, and in a triple that contains a nonempty deque both
middles must be nonempty. All buffers and triples are over A. A prefix or suffix in a
5-tuple contains three to six elements, a suffix in a suffix-only representation contains
one to eight elements, a middle in a 5-tuple contains exactly two elements, and a
nonempty middle buffer in a triple contains two or three elements.
The order of elements in a deque is the one consistent with the order of each
5-tuple, each buffer, each triple, and each recursive deque. The pointer structure is
again straightforward, with the nodes representing 5-tuples or triples containing one
pointer for each field.
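As before, the shape of the representation can be summarized by an OCaml type sketch; the names are ours, and the size invariants appear only as comments.

type 'a buffer = 'a list

type 'a cdeque =
  | Suffix_only of 'a buffer                    (* 1..8 elements *)
  | Five of {
      prefix : 'a buffer;                       (* pr(d): 3..6 elements *)
      left   : 'a triple cdeque;                (* ld(d): deque of triples *)
      middle : 'a buffer;                       (* md(d): exactly 2 elements *)
      right  : 'a triple cdeque;                (* rd(d): deque of triples *)
      suffix : 'a buffer;                       (* sf(d): 3..6 elements *)
    }
and 'a triple =
  | Triple of 'a buffer * 'a triple cdeque * 'a buffer
      (* first middle buffer, deque of triples, last middle buffer;
         at least one middle buffer is nonempty, and both are nonempty
         whenever the enclosed deque is nonempty *)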
5.2. Operations. We call a prefix or suffix in a 5-tuple red if it contains either
three or six elements and green otherwise. We call a suffix in a suffix-only representation
red if it contains eight elements and green otherwise. The prefix of a suffix-only
deque is considered to have the same color as the suffix. We introduce two memoized
functions gp and gs as in Sections 3.2 and 4.2, which produce green-prefix
and green-suffix representations of a deque, respectively, and which are called only
when the corresponding buffer is red but can be made green. Below we give the implementations
of push, pop, gp, and catenate; the implementations of inject, eject,
and gs are symmetric to those of push, pop, and gp, respectively. We denote a deque
with prefix p, left deque l, middle m, right deque r, and suffix s by [p; l; m; r; s].
Case 1: Deque d is represented by a 5-tuple. If
otherwise, let
Case 2: Deque d is represented by a suffix only. If jsf(d)j < 8, return a suffix-only
deque with suffix push(x; sf(d)). Otherwise, push x onto sf(d) to form s, with nine
elements. Create a new prefix p with the first four, a middle m with the next two, and
a suffix s with the last three. Return [p; ;; m; ;; s].
As in Section 4.2, the implementation of pop uses naive-pop.
Case 1: Deque d is represented by a suffix only or jpr(d)j > 3. Return naive-pop(d).
Case 2:
Case 3: x be the rst element on pr(d). If
create a new su-x s containing all the elements in pr(d), md(d), and sf(d)
except x, and return the pair consisting of x and the deque represented by s only.
Otherwise, form p from pr(d) by popping x and injecting the rst element on md(d),
m 0 from md(d) by popping the rst element and injecting the rst element on
sf(d), form s from sf(d) by popping the rst element, and return (x;
create two new prexes p and p 0 , with p containing the rst four
elements of jpr(d)j and p 0 the last two; return [p; push((p
proceed as follows.
Case 1: ld(d) 6= ;. Inspect the rst triple t on ld(d). If either the rst nonempty
middle buer in t contains 3 elements or t contains a nonempty deque, let (t;
and assume that
x is nonempty if t consists of only one nonempty middle buer. Apply the appropriate
one of the folowing two subcases.
Case 1.1: 3. Form x 0 from x and p from pr(d) by popping
the rst element from x and injecting it into pr(d). Return
Case 1.2: 2. Inject both elements in x into pr(d) to form p. If d 0 and y
are empty, return [p; l; md(d); rd(d); sf(d)]. Otherwise (d 0 and y are nonempty)
let l
Case 2: Inspect the rst triple t in rd(d). If either the rst
nonempty middle buer in t contains 3 elements or t contains a nonempty deque, let
assume that x is nonempty if t consists of only one nonempty middle buer. Apply
the appropriate one of the following two subcases.
Case 2.1: x 0 from pr(d), m, and x by popping an
element from m and injecting it into pr(d) to form p, popping an element from
m and injecting the rst element from x to form m 0 , and popping the rst
element from x to form x 0 . Return
Case 2.2: 2. Inject the two elements in md(d) into pr(d) to form p. Let
are empty or r
Return
Case 1: Both d 1 and d 2 are represented by 5-tuples. Let y be the rst element in
pr(d 2 ), and let x be the last element in sf(d 1 ). Create a new middle m containing x
followed by y. Partition the elements in sf(d 1 ) fxg into at most two buers s 0
1 and
1 , each containing two or three elements in order, with s 00
possibly empty. let ld 0
ld 00
otherwise, let ld 00
ld 0
1 . Similarly, partition the elements in pr(d 1 ) fyg into at
most two prexes
, each containing two or three elements in order, with
possibly empty. Let rd 0
2 . Return [pr(d 1 ); ld 00
Case 2: d 1 or d 2 is represented by a suffix only. Push or inject the elements of the
suffix-only deque one-by-one into the other deque.
5.3. Analysis. To analyze this structure, we use the same definitions and the
same potential function as in Sections 3.3 and 4.3. The amortized costs of push,
inject, catenate, and pop are O(1) by an argument analogous to that in Section
4.3. The amortized cost of eject is O(1) by an argument symmetric to that for pop.
Thus we obtain the following theorem:
Theorem 5.1. Each of the operations push, pop, inject, eject, and catenate
defined above takes O(1) amortized time.
Just as in Sections 3.4 and 4.3, we can improve the time and space constant factors
and simplify the analysis by using overwriting in place of memoization. Overwriting is
the preferred implementation, unless one is using a functional programming language
that supports memoization but does not easily allow overwriting.
5.4. Related Work. The structure presented in this section is analogous to
the structures of [16, Chapter 11] and [8, Section 9] but simplifies them as follows.
First, the buffers are of constant size, whereas in [16] and [8] they are noncatenable
deques. Second, the skeleton of the present structure is a binary tree, instead of a
tree extension of a redundant digital numbering system as in [8]. Also, our amortized
analysis uses the standard potential function method of [17] rather than the more
complicated debit mechanism used in [16]. Another related structure is that of [10,
Section 5], which represents purely functional, real-time deques as pairs of triples
rather than 5-tuples, but otherwise is similar to (but simpler than) the structure of
[8, Section 9]. It is straightforward to modify the structure presented here to use pairs
of triples rather than 5-tuples.
6. Further Results and Open Questions. If the universe A of elements over
which deques are constructed has a total order, we can extend the structures described
here to support an additional heap order based on the order on A. Specifically, we
can support the additional operation of finding the minimum element in a deque (but
not deleting it) while preserving a constant amortized time bound for every operation,
including finding the minimum. We merely have to store with each buffer, each deque,
and each pair or triple the minimum element in it. For related work see [1, 2, 6, 13].
We can also support a flip operation on deques. A flip operation reverses the
linear order of the elements in the deque: the ith from the front becomes the ith from
the back, and vice-versa. For the noncatenable deques of Section 3, we implement
flip by maintaining a reversal bit that is flipped by a flip operation. If the reversal bit
is set, a push becomes an inject, a pop becomes an eject, an inject becomes a push,
and an eject becomes a pop. To support catenation as well as flip we use reversal
bits at all levels. We must also symmetrize the definition in Section 5 to allow a
deque to be represented by a prefix only, and extend the various operations to handle
this possibility. The interpretation of reversal bits is cumulative. That is, if d is a
deque and x is a deque inside of d, x is regarded as being reversed if an odd number
of reversal bits are set to 1 along the path of actual pointers in the structure from
the node for d to the node for x. Before performing catenation, if the reversal bit of
either or both of the two deques is 1, we push such bits down by flipping such a bit
of a deque x to 0, flipping the bits of all the deques to which x points, and swapping
the appropriate buffers and deques. (The prefix and suffix exchange roles, as do the
left deque and right deque; the order of elements in the prefix and suffix is reversed
as well.) We do such push-downs of reversal bits by assembling new deques, not by
overwriting the old ones.
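The cumulative interpretation of reversal bits amounts to a parity computation along the access path, as in the following small OCaml sketch; the path is assumed to be given as the list of reversal bits encountered from the node for d down to the node for x.

(* x is regarded as reversed iff an odd number of bits on the path are set *)
let reversed_along (path_bits : bool list) : bool =
  List.fold_left (fun parity bit -> if bit then not parity else parity) false path_bits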
We have devised an alternative implementation of catenable deques in which the
sizes of the prefixes and suffixes are between 3 and 5 instead of 3 and 6. We do this by
memoizing the pop and eject operations and avoiding creating a new structure with
a green prefix (suffix, respectively) representing the original deque when performing
pop (eject, respectively). Using a more complicated potential function than the
ones used in earlier sections, we can show that such an implementation runs in O(1)
amortized time per operation.
One direction for future research is to find a way to simplify our structures further.
Specifically, consider the following alternative representation of catenable deques,
which uses a single recursive subdeque rather than two such subdeques. A
nonempty deque d over A is represented by a triple that consists of a prefix pr(d), a
(possibly empty) child deque of triples c(d), and a suffix sf(d). A triple consists of a
nonempty prefix, a deque of triples, and a nonempty suffix, or just of a nonempty prefix
or suffix. All buffers and triples are over A. The operations push, pop, inject, and
eject have implementations similar to their implementations in Section 5. The major
difference is in the implementation of catenate, which for this structure requires a
call to pop. Specifically, let d 1 and d 2 be two deques to be catenated. catenate
pops c(d 1 ) to obtain a triple (p; d and a new deque c, injects (s; c; sf(d 1
to obtain d 00 , and then pushes (p; d 00 ; pr(d 2 . The final result
has prefix pr(d 1 ), child deque c 0 , and suffix sf(d 2 ). It is an open question whether this
algorithm runs in constant amortized time per operation for any constant upper and
lower bounds on the buffer sizes.
Another research direction is to design a confluently persistent representation of
sorted lists such that accesses or updates d positions from either end take O(log d)
sorted lists such that accesses or updates d positions from either end take O(log d)
time, and catenation takes O(1) time. The best structure so far developed for this
problem has a doubly logarithmic catenation time [12]; it is purely functional, and
the time bounds are worst-case.
Acknowledgment. We thank Michael Goldwasser for a detailed reading of this
paper, and Jason Hartline for discussions that led to our implementations using memoization.
--R
Data structural bootstrapping
Confluently persistent deques via data structural bootstrapping
Fully persistent arrays
Fully persistent lists with catenation
Making data structures persistent
Deques with heap order
lists, PhD thesis
Simple confluently persistent catenable lists (extended abstract)
An optimal RAM implementation of catenable min double-ended queues
Amortized computational complexity
Worst case analysis of set union algorithms
--TR
--CTR
Amos Fiat , Haim Kaplan, Making data structures confluently persistent, Journal of Algorithms, v.48 n.1, p.16-58, August
George Lagogiannis , Yannis Panagis , Spyros Sioutas , Athanasios Tsakalidis, A survey of persistent data structures, Proceedings of the 9th WSEAS International Conference on Computers, p.1-6, July 14-16, 2005, Athens, Greece | queue;memoization;functional programming;data structures;stack;double-ended queue deque;stack-ended queue steque;persistent data structures |
586990 | Taking a Walk in a Planar Arrangement. | We present a randomized algorithm for computing portions of an arrangement of n arcs in the plane, each pair of which intersect in at most t points. We use this algorithm to perform online walks inside such an arrangement (i.e., compute all the faces that a curve, given in an online manner, crosses) and to compute a level in an arrangement, both in an output-sensitive manner. The expected running time of the algorithm is $O(\lambda_{t+2}(m+n)\log n)$, where m is the number of intersections between the walk and the given arcs. No similarly efficient algorithm is known for the general case of arcs. For the case of lines and for certain restricted cases involving line segments, our algorithm improves the best known algorithm of [M. H. Overmars and J. van Leeuwen, J. Comput. System Sci., 23 (1981), pp. 166--204] by almost a logarithmic factor. | Introduction
S be a set of n x-monotone arcs in the plane. Computing the whole (or parts of the) arrangement
S), induced by the arcs of "
S, is one of the fundamental problems in computational
geometry, and has received a lot of attention in recent years [SA95]. One of the basic techniques
used for such problems is based on randomized incremental construction of the vertical
decomposition of the arrangement (see [BY98] for an example).
If we are interested in only computing parts of the arrangement (e.g., a single face or a zone),
the randomized incremental technique can still be used, but it requires non-trivial modifications
Intuitively, the added complexity is caused by the need to "trim" parts of the
plane as the algorithm advances, so that it will not waste energy on regions which are no longer
relevant. In fact, this requirement implies that such an algorithm has to know in advance what
are the regions we are interested in at any stage during the randomized incremental construction.
A variation of this theme, with which the existing algorithms cannot cope efficiently, is the
following online scenario: We start from a point and we find the face f of A( "
that contains p(0). Now the point p starts moving and traces a connected curve fp(t)g t0 . As
our walk continues, we wish to keep track of the face of A( "
S) that contains the current point
This work has been supported by a grant from the U.S.-Israeli Binational Science Foundation. This work is
part of the author's Ph.D. thesis, prepared at Tel-Aviv University under the supervision of Prof. Micha Sharir.
y School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel; sariel@math.tau.ac.il;
http://www.math.tau.ac.il/ ~ sariel/
p(t). The collection of these faces constitutes the zone of the curve p(t). However, the function
p(t) is not assumed to be known in advance, and it may change when we cross into a new face
or abruptly change direction in the middle of a face (see [BDH97] for an application where such
a scenario arises). The only work we are aware of that can deal with this problem efficiently
is due to Overmars and van Leeuwen [OvL81], and it only applies to the case of lines (and,
with some simple modifications to the case of segments as well). It can compute such a walk in
(deterministic) O((n +m) log 2 n) time, inside an arrangement of n lines, where m is the number
of intersections of the walk with the lines of "
S. This is done by maintaining dynamically the
intersection of half-planes that corresponds to the current face. The algorithm of [OvL81] is
somewhat complicated and it is probably not practical for actual implementation.
In this paper, we propose a new randomized algorithm that computes the zone of the walk in
a general arrangement of arcs, as above, in O(λ t+2 (n+m) log n) expected time, where λ t+2 (n+m)
is the maximum length of a Davenport-Schinzel sequence of order t + 2
[SA95]. The new algorithm can be interpreted as a third "online" alternative to the algorithms
of [CEG dBDS95]. The algorithm is rather simple and appears to be practical. As a matter
of fact, we are currently implementing and experimenting with a variant of the algorithm.
As an application of the new algorithm, we present a new algorithm for computing a level in
an arrangement of arcs. It computes a single level in O( t+2 (n +m) log n) expected time, where
m is the complexity of the level.
Both results improve by almost a logarithmic factor over the best previous result of [OvL81],
for the case of lines. For the case of general arcs, we are not aware of any similarly efficient
previous result.
The paper is organized as follows. In Section 2 we describe the algorithm. In Section 3 we
analyze its performance. In Section 4 we mention a few applications of the algorithm, including
that of computing a single level. Concluding remarks are given in Section 5.
2 The Algorithm
In this section, we present the algorithm for performing an online walk inside a planar arrangement
Randomized Incremental Construction of the Zone Using an Oracle. Given a set "
of n x-monotone arcs in the plane, so that any pair of arcs of "
S intersect at most t times (for
some fixed constant
S) denote the arrangement of "
namely, the partition of the plane
into faces, edges, and vertices as induced by the arcs of "
S (see [SA95] for details). We assume
that "
S is in general position, meaning that no three arcs of "
S have a common point, and that
the x-coordinates of the intersections and endpoints of of the arcs of "
are pairwise distinct. The
vertical decomposition of A( "
S), denoted by A VD
S), is the partition of the plane into vertical
pseudo-trapezoids, obtained by erecting two vertical segments up and down from each vertex of
S), (i.e., points of intersections between pairs of arcs and endpoints of arcs) and extending
each of them until it either reaches an arc of "
S, or otherwise all the way to infinity. See [BY98] for
more details concerning vertical decomposition. To simplify (though slightly abuse) the notation,
we refer to the cells of A VD
S) as trapezoids.
A selection R of "
S is an ordered sequence of distinct elements of "
S. By a slight abuse of
notation, we also denote by R the unordered set of its elements. Let oe( "
S) denote the set of all
selections of "
S. For a permutation S of "
S, let S i denote the subsequence consisting of the first i
elements of S, for
Computing the decomposed arrangement A VD
S) can be done as follows. Pick a random permutation
S. Compute incrementally the decomposed arrangements A VD (S i ),
inserting the i-th arc s i of S into A VD (S To do so, we compute the
district D i of s i in A VD (S i\Gamma1 ), which is the set of all trapezoids in A VD (S i\Gamma1 ) that intersect s i .
We split each trapezoid of D i into O(1) trapezoids, such that no trapezoid intersects s i in its
interior, as in [SA95]. Finally, we perform a pass over all the newly created trapezoids, merging
vertical trapezoids that are adjacent, and have identical top and bottom arcs. The merging step
guarantees that the resulting decomposition is A VD (S i ), independently of the insertion order of
elements
Let fl be the curve of the walk. For a selection R 2
S), let Z fl (R) denote the zone
of fl in A(R); this is the set of all faces of A(R) that have a nonempty intersection with fl.
Let A fl;VD
S) denote the union of all trapezoids that cover Z
S). Our goal is to compute
A
S).
We assume for the moment that we are supplied with an oracle O(S \Delta), that can decide
in constant time whether a given vertical trapezoid \Delta is in A fl;VD (S i ). Equipped with this oracle,
computing A fl;VD (S) is fairly easy, using a variant of the randomized incremental construction,
outlined above. The algorithm is depicted in Figure 1. We present this algorithm at a conceptual
level only, because this is not the algorithm that we shall actually use. It is given to help us to
describe and analyze the actual online algorithm that we shall describe later.
Note that the set of trapezoids maintained by the algorithm in the i-th iteration is a superset
of A fl;VD (S i ) (there might be trapezoids in C i that are no longer in Z i . However, this implies
that those trapezoids will be eliminated the first time an arc belonging to their conflict list will
be handled. Moreover, the algorithm CompZoneWithOracle can be augmented to compute a
history DAG (as in [SA95]), whose nodes are the trapezoids created by the algorithm and where
each trapezoid destroyed during the execution of the algorithm points to the trapezoids that
were created from it. Let HT fl (S i ) denote this structure after the i-th iteration of the algorithm.
Definitions. A trapezoid created by the split operation of CompZoneWithOracle is called a
transient trapezoid if it is later merged (in the same iteration) to form a larger trapezoid. A
trapezoid generated by CompZoneWithOracle is final if it is not transient. The rank rank(\Delta) of
a trapezoid \Delta is the maximum of the indices i; j of the arcs containing the bottom and top edges
of \Delta in the permutation S. We denote by D(\Delta) the defining set of a final trapezoid \Delta; this is
the minimal set D such that \Delta 2 A VD (D). It is easy to verify that jD(\Delta)j 4. We can also
define D(\Delta) for a transient trapezoid \Delta, to be the minimal set D such that \Delta can be trnasient
during an incramental construction of A VD (D). Here it is easy to verify that jD(\Delta) 6. The
index index(\Delta) of a trapezoid \Delta is the minimum i such that D(\Delta) ' S i . For a trapezoid \Delta, we
denote by cl(\Delta) the conflict list of \Delta; that is, the set of arcs of "
S that intersect \Delta in its interior.
Let next(\Delta) denote the first element of cl(\Delta), according to the ordering of S.
For a trapezoid \Delta generated by CompZoneWithOracle (which was not merged into a larger
trapezoid), we denote by father(\Delta) the trapezoid that \Delta was generated from. A vertical side
Algorithm CompZoneWithOracle( "
Input: A set "
S of n arcs, a curve fl, an oracle O
Output: A fl;VD
begin
Choose a random permutation
S.
for i from 1 to n do
for each \Delta 2 D i such that int do
split(\Delta; s) is the operation of splitting a vertical trapezoid \Delta
crossed by an arc s into a constant number of vertical trapezoids,
as in [dBvKOS97], such that the new trapezoids cover \Delta, and
they do not intersect s in their interior.
end for
Merge all the adjacent trapezoids of Temp that have the same top
and bottom arcs. Let Temp 1 be the resulting set of trapezoids.
Let Temp 2 be the set of all trapezoids of Temp 1 that are in A fl;VD (S i ).
Compute this set using jT emp 1 j calls to O.
end for
return C n
CompZoneWithOracle
Figure
1: A randomized incramental algorithm for constructing the zone of a walk in an arrage-
ment of arcs, using an oracle
of a vertical trapezoid \Delta is called a splitter. A splitter is transient if it is not incident to the
intersection point (or endpoint) that induced the vertical edge that contains (this means that
the two trapezoids adjacent to are transient, and will be merged into a larger final trapezoid).
Figure
2 for an illustration of some of these definitions. It is easy to verify that a trapezoid
\Delta is transient if and only if at least one of its bounding splitters is transient. Thus, one can
decide whether a trapezoid is transient, by inspecting its splitters, in constant time.
An Online Algorithm for Constructing the Zone. Let us assume that the random permutation
S has been fixed in advance. Note that S predetermines HT (S). The
key observation in the online algorithm is that in order to construct a specific leaf of HT fl (S) we
do not have to maintain the entire DAG, and it suffices to compute only the parts of the DAG
that lie on paths connecting the leaf with the root of HT fl (there might be several such paths,
since our structure is a DAG, and not a tree).
To facilitate this computation, we maintain a partial history DAG T . The nodes of T are of
two types: (i) final nodes - those are nodes whose corresponding trapezoids appear in HT fl (S),
l 1
l 2 l 3
l 4
l 5
Figure
2: Illustration of the defintions: (i) is a transient splitter, and thus ; 0 are both
transient. We have rank(
and (ii) transient nodes - these are some of the leaves of T , whose corresponding trapezoids
are transient. Namely, all the internal nodes of T are copies of identical nodes of HT fl (whose
corresponding trapezoids are final), while some of the leaves of T might be transient. Intuitively,
T stores the portion of HT fl that we have computed explicitly so far. The transient leaves of
delimit portions of HT fl that have not been expanded yet. Inside each node of T , we also
maintain the conflict list of the corresponding trapezoid.
Suppose we wish to compute a leaf of HT fl which contains a given point p. We first locate
the leaf of T that contains p. This is done by traversing a path in T , starting from the root of
T , and going downward in each step into the child of the current trapezoid that contains p (this
requires O(1) time, because the out-degree of any node of HT fl is bounded by a constant that
depends on t). At the end we either reach a final leaf which is the required leaf of HT fl , or we
encounter a transient leaf v. In the latter case, we need to expand T further below v, and the
first step is to replace v by the corresponding node v of HT fl , obtained by merging the transient
trapezoid of v with adjacent transient trapezoids, to form final trapezoid associated with v .
Assume for the moment that we are supplied with a method (to be described shortly) to
generate all those transient trapezoids, whose union forms the final trapezoid that is stored
at v in HT fl . Then we do the following: (i) Merge all those transient trapezoids into a new
(final) trapezoid \Delta; (ii) Compute the conflict list cl(\Delta) from the conflict lists of the transient
trapezoids; 1 (iii) Compute the first element s \Delta in cl(\Delta) according to the permutation S; and
(iv) Compute all the transient trapezoids or final generated from \Delta by splitting it by s \Delta (this
generates O(1) new trapezoids). Overall, this requires O(k is the number of
transient trapezoids that are merged, and l is the total length of the conflict lists of the transient
trapezoids.
Thus, we had upgraded a transient node v at T into a final node v . We denote this operation
by Expand(v). We can now continue going down in T , passing to the child of \Delta that contains
p and repeating recursively the above procedure at that child, until reaching the desired leaf of
HT fl that contains p.
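The descent just described can be summarized by the following OCaml sketch; the node type and the helpers contains and expand are illustrative assumptions rather than part of the algorithm's actual data layout.

type point = float * float

type node = {
  contains : point -> bool;              (* does the trapezoid contain p? *)
  mutable transient : bool;              (* true for a transient leaf of T *)
  mutable children : node list;          (* empty for a leaf *)
}

(* expand is assumed to merge the adjacent transient trapezoids into the
   final trapezoid and split it by the first arc of its conflict list,
   creating the node's children, as described in the text. *)
let rec locate (expand : node -> unit) (v : node) (p : point) : node =
  if v.transient then expand v;                       (* build HT on demand *)
  match List.find_opt (fun c -> c.contains p) v.children with
  | None -> v                                         (* reached a leaf containing p *)
  | Some c -> locate expand c p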
Let be a newly created trapezoid. If is transient, then one of its splitters must be transient.
Let denote this transient splitter, and let us assume that is the right edge of . This implies
that either the top arc or the bottom arc of are the cause of the splitting that generated . In
particular, next( f ) is either the top or bottom arc of , where f denotes the trapezoid that
was generated from.
To perform the merging of the conflict lists in linear time, one may use either a hash-table, or a bit-vector,
or one maintains the conflict lists in a consistent ordering. See [BY98].
We compute the transient trapezoid 0 that lies to the right of , by taking the midpoint p of
, and by performing a point-location query of p in T . During this point-location process, we
always go down into the trapezoid \Delta that contains p in its interior or on its left edge. We stop
as soon as we encounter a transient trapezoid 0 that has a left edge identical to the right edge
of . This happens when and 0 have the same top and bottom edges; namely, we stop when
Intuitively, if the trapezoid 0 has rank smaller than rank( ), then its left
edge is longer than the right edge of ; the first time when both and 0 have indentical connecting
edge is when their top and bottom edges are identical, namely, when rank(
continue this process of collecting adjacent transient trapezoids using point-location queries on
midpoints of transient splitters, until the two extreme splitters (to the left and to the right) are
non-transient. We take the union of those trapezoids to be the new expanded trapezoid. See
Figure
2.
Of course, during this point-location process, we might be forced into going into parts of HT fl
that do not appear yet in T . In such a case, we will compute those parts in an online manner,
by performing Expand calls on the relevant transient trapezoids that we might encounter while
going down T . Thus, the process of turning a transient trapezoid into a final trapezoid is a
recursive process, which might be quite substantial.
Let G S denote the adjacency graph of A VD (S). This is a graph having a vertex for each
trapezoid in A VD (S), such that an edge connects two vertices if their corresponding trapezoids
share a common vertical side. Moreover, under general position assumptions, a vertex in G S
has degree at most 4. It is easy to verify that a connected component of G S corresponds to a
face of A(S). By performing a point-location query, as described above, for a point p in T , we
can compute the node v of G S whose trapezoid \Delta v contains p. Furthermore, by carrying out a
point-location query similar to that used in the Expand operation, we can compute a node u of
G S . Indeed, during such a point-location query, we traverse down T until we reach a leaf u of
HT fl (i.e., the conflict list of the corresponding trapezoid is empty). The node u is a node of G S
adjacent to v (i.e., uv is an edge of G S ). Repeating this process, we can perform DFS in G S ,
which corresponds to the entire face of A( "
S) that contains p.
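A whole face can thus be gathered by a depth-first search over this adjacency graph, as in the following OCaml sketch; trapezoids are abstracted to integer identifiers, and the neighbours function stands for the point-location queries on splitter midpoints, both of which are assumptions of the sketch.

(* Collect the connected component of the adjacency graph that contains start. *)
let face_of (neighbours : int -> int list) (start : int) : int list =
  let seen = Hashtbl.create 16 in
  let rec dfs v acc =
    if Hashtbl.mem seen v then acc
    else begin
      Hashtbl.add seen v ();
      List.fold_left (fun acc u -> dfs u acc) (v :: acc) (neighbours v)
    end
  in
  dfs start []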
Let fl be the curve of the online walk whose zone we wish to compute. We consider fl to
be a directed curve, supplied to us by the user through a function EscapePoint fl . The function
EscapePoint fl (p; \Delta) receives as input a point p 2 fl, and a trapezoid \Delta that contains p, and
outputs the next intersection point of fl with @ \Delta following p. If we reach the end of fl, the
function returns nil. We assume (although this is not crucial for the algorithm) that fl does not
intersect itself.
Thus, given a walk fl, we can compute its zone by the algorithm depicted in Figure 3.
Note that by the time the algorithm terminates, the final parts of T are contained in HT fl .
proper inclusion might arise; see Remark 3.6.) In analyzing the performance of the algorithm, we
first bound the overall expected time required to compute HT fl , which can be done by bounding
the expected running time of CompZoneWithOracle (in an appropriate model of computaiton).
Next, we will bound the additional time spent by the algorithm in traversing between adjacent
trapezoids (i.e., the time spent in performing the point-location queries).
Remark 2.1 By skipping the expansion of the face that contains the current point p in CompZoneOnline,
we get a more efficient algorithm that only computes the D of the walk. There might be cases
where this will be sufficient.
Algorithm CompZoneOnline( "
Input: A set "
S of n arcs, a starting point p of the walk,
and a function EscapePoint fl that represents the walk
Output: The decomposed zone of fl in A( "
begin
Choose a random permutation
S.
- a partial history DAG with a root corresponding to
the whole plane
where leaf(HT is the leaf of HT fl whose associate trapezoid contains p.
(All the paths in HT fl from v to the root now exist in T .)
Compute the face F containing \Delta v in A fl;VD (S), and add it to the output zone.
Z
while
Compute v, the next leaf of HT fl , such that
This is done by performing a point-location query in T , as described
in the text, and enlarging T accordingly.
Compute the face F of \Delta v in A fl;VD (S)
(if it was not computed already), and add it to the output zone.
Z
while
return Z.
CompZoneOnline
Figure
3: Algorithm for constructing the zone of a walk in an arragement of arcs in an online
manner
2.1 Correctness
In this section, we try to prove the correctness of CompZoneOnline.
Observation 2.2 During the execution of CompZoneOnline, the union of trapezoids of the leaves
of T form a pairwise disjoint covering of the plane by vertical trapezoids.
Corollary 2.3 Each conflict list, computed for some trapezoid \Delta by the procedure CompZoneOnline
is the list of all arcs of S that cross \Delta.
Proof: By induction on the steps of CompZoneOnline. Observe that regions that \Delta was
generated from cover \Delta, and thus the union of their conflict lists must contain the conflict list
of \Delta.
Corollary 2.4 For a trapezoid \Delta created by CompZoneOnline, we have that the all the curves
of D(\Delta) appears in S before all the curves of K (\Delta).
Lemma 2.5 A point-location query in the middle of a transient splitter never fails; namely, such a
query always generates a transient trapezoid which is adjacent to the current transient trapezoid,
and the two trapezoids have the same top and bottom arcs.
Proof: Let be the current transient trapezoid, let be a transient splitter, and let p be the
point located in the middle of and assume without loss of generality that is the right edge of
.
The point-location query must end up in a trapezoid \Delta, which is currently a node of T that
contains on its left edge. From this point on, the algorithm "refines" \Delta by going down in T ,
performing a sequence of splitting and expansion operations.
be the sequence of trapezoids created (or visited) having p on their left side,
computed during the "hunt" for a transient trapezoid adjacent to .
First, note that during this process we can not perform an insertion of arc having an endpoint
in the interior of . Since this will either contradict our general position assumption, or will imply
that is not a transient splitter.
Let \Delta i be the trapezoid having rank(s), such that i is maximl. Clearly, the left
edge of \Delta must contain . Otherwise, there is an arc s l of S i\Gamma1 that intersects the interior
of , but this implies that the computation of the conflict list of is incorrect, contradicting
Corollary 2.3.
Thus, must have the same top and bottom arcs as . Implying, that the left side of \Delta i is .
We conclude that during the point-location process we will compute \Delta i , and
Definition 2.6 For a permutation S of "
S, let Hist = Hist(S) denote the history-DAG generated
by computing the whole vertical decomposition A VD (S), by an incremental construction that
inserts the curves in their order in S.
Lemma 2.7 For any final trapezoid \Delta created by the Expand procedure, during the execution of
CompZoneOnline, there exists i 0, such that \Delta is a trapezoid of A VD (S i ). As a matter of fact,
we have
Proof: By induction on the depth of the nodes in T , where the depth of a node is defined to
be the length of the longest path from the root of T to this node.
Indeed, for the base of the induction, a node of depth 0 must be the root of T , which is being
computed during initialization of the algorithm, and is thus the only trapezoid of A VD (S 0 ),
Let \Delta be a final trapezoid of depth k in T that was generated (directly) from a trapezoid by
the procedure Expand. Let the final trapezoid that was split to generate
. By our induction hypothesis, f is a trapezoid of A VD (S l ), where
Corollary 2.3, the conflict list of f was computed correctly.
If is final, then by the above, we have is a trapezoid of A VD (S i ), where
(namely, next(father( Otherwise, is transient, and during its expansion, we had
computed several transient trapezoids using point-location queries. Note that those
point-location queries were performed by placing points on transient splitters; namely, as soon
as we encounter a non-transient splitter, we had aborted the expansion in this direction.
Thus, the trapezoids must have the same two arcs as floor and ceiling (otherwise, either the
algorithm performed a point-location query in the middle of a non-transient splitter, or the computation
of the conflict lists is incorrect). Let
Clearly, \Delta is a trapezoid, and its two splitters
are non-transient.
We claim that the two splitters of \Delta are induced by intersections having index at most
was generated from father( i ) by the
splitting caused by s m . And father( i ) (and its conflict list) was computed correctly, by our
induction hypothesis. Moreover, the left splitter of \Delta is either empty (i.e., \Delta's left vertical side
is an intersection point), or it is final. If it is final, then it is adjacent to the intersection point
that induces it, which by the above is defined by (at most) three arcs that must appear in Sm .
Similarly, the right splitter of \Delta is final, and is defined by (at most) three arcs that appear in
Sm .
Thus, \Delta is a trapezoid that does not intersect any arc of Sm in its interior, its top and bottom
arcs belong to Sm , and its two splitters are final and defined by arcs of Sm . This implies that \Delta
is in A VD (Sm ).
Lemma 2.8 All the final nodes computed by CompZoneOnline appear in HT fl .
Proof: Let \Delta be a final trapezoid computed by CompZoneOnline. The trapezoid \Delta was
generated during a sequence of recursive calls to Expand. Let \Delta 1 ; : : : ; \Delta k be the set of final
trapezoids created directly by those recursive calls, such that \Delta k = \Delta, and they are ordered
according to their recursive call ordering. Let l i denote the index such that \Delta i is a trapezoid of A VD (S l i ).
The trapezoid \Delta 1 was created because we performed a point-location query for a point p that
appear in Z(S). Since Z(S i+1
Moreover, if \Delta i+1 was computed during the computation of \Delta i , then there must be a point p i
that lies inside both vertical trapezoids, because the point that initiated the point-location
query for \Delta i+1 lies inside \Delta i and must also lie inside \Delta i+1 . This implies that p i 2 Z(S l i
Thus,
It follows, that \Delta i+1 appears in HT fl (S l i+1
By induction, it follows that \Delta k 2 HT fl (S).
3 The Analysis
3.1 Constructing the History DAG
In the following, we analyze the performance of CompZoneWithOracle. We assume that it maintains
for each trapezoid a conflict list that stores the set of arcs that cross it. Thus, the cost of
each operation on a trapezoid is proportional to the size of its conflict list. We also assume that
a call to the Oracle takes O(1) time.
Lemma 3.1 The algorithm CompZoneWithOracle computes the zone of fl in A VD (S) in
O(\lambda t+2 (n + m) log n) expected time, and the expected number of trapezoids that it generates is
O(\lambda t+2 (n + m)).
Proof: The proof is a straightforward adaptation of the proof of [CEG + 93]. We omit the easy
details.
Observation 3.2 The trapezoids computed by CompZoneOnline are either (final) trapezoids
computed by CompZoneWithOracle (and thus appear in HT fl ), or transient trapezoids that were
split from trapezoids of HT fl .
Lemma 3.3 The expected number of transient trapezoids generated by CompZoneOnline is
O(\lambda t+2 (n + m)), and the expected total size of their conflict lists is O(\lambda t+2 (n + m) log n).
Proof: Each final trapezoid generated by CompZoneOnline might be split into O(1) transient
trapezoids. Each final trapezoid computed by CompZoneOnline is also computed by
CompZoneWithOracle. By Lemma 3.1, the expected number of such trapezoids is O(\lambda t+2 (n + m)).
The second part of the lemma follows by a similar argument.
Definition 3.4 A curve fl is locally x-monotone in A( "
S), if it can be decomposed inside each
face of A( "
into a constant number of x-monotone curves.
Theorem 3.5 The algorithm CompZoneOnline computes the zone of fl in A(S) in
O(\lambda t+2 (n + m) log n) expected time, provided that fl is a locally x-monotone curve in A( "
S).
Proof: The time spent by CompZoneOnline is bounded by the time required to construct the
history DAG, by the time spent in maintaining the conflict lists of the trapezoids, and by the
time spent on performing point-location queries, as we move from one trapezoid to another in
A fl;VD (S).
By Lemmas 3.1 and 3.3, the expected time spent on maintaining the conflict lists of the
trapezoids computed by the algorithm is O(\lambda t+2 (n + m) log n), since the total time spent on
handling the conflict lists is proportional to their total length. By Lemma 3.3, the expected total
size of those conflict lists is O(\lambda t+2 (n + m) log n).
Moreover, the depth of the history DAG constructed by the algorithm is O(log n) with a
probability polynomially close to 1 [Mul94]. Thus, the expected time spent directly on performing
a single point-location query (ignoring the time spent on maintaining the conflict lists) as we
move from one trapezoid to the next, is O(log n). The curve fl is locally x-monotone, which
implies that it intersects the splitters of each trapezoid of A fl;VD (S) at most O(1) times. Thus,
the expected number of point-location queries performed by the algorithm is proportional to the
expected number of transient trapezoids created plus O(m). By Lemma 3.3, we have that the
expected running time is
O(\lambda t+2 (n + m) log n).
Remark 3.6 Note that CompZoneWithOracle computes the zone of fl in A VD (S i ), for each
In fact, it might compute a trapezoid \Delta 2 A fl;VD (S i ) that does not intersect the zone
of fl in A fl;VD (S). In particular, such a trapezoid \Delta will not be computed by CompZoneOnline.
This is a slackness in our analysis; we currently do not know whether it can be exploited to
further improve the analysis of the algorithm (we suspect that it cannot).
Remark 3.7 The only result of this type that we are aware of is a classical result due to
Overmars and van Leeuwen [OvL81], which maintains the convex hull of n points in the
plane dynamically, in O(log 2 n) time for each insertion or deletion operation. The dual variant of this result
(maintaining the intersection of halfplanes) can be used to perform walks inside line arrangements
in (deterministic) O((n m) log 2 n) time, where m is the number of intersections of the walk
with the lines. The algorithm of [OvL81] requires somewhat involved rebalancing of the tree that
represents the current intersection of halfplanes. Our algorithm is somewhat simpler, faster, and
applies to more general arrangements.
As for segments and general arcs, we are not aware of any result of this type in the literature.
Of course, if the curve fl is known in advance (and is simple, in the sense that one can compute
quickly its intersections with any arc of "
S), we can compute the single face in the modified
arrangement (as in the proof of the general planar Zone Theorem [SA95, Theorem XX]) using
the algorithms of [dBDS95, CEG + 93]. These algorithms are slightly simpler than the algorithm
of Theorem 3.5, although they have the same expected performance. However, these algorithms
are useless for online walks.
4 Applications
In this section we present several applications of the algorithm CompZoneOnline.
4.1 Computing a Level in an Arrangement of Arcs
In this subsection we show how to modify the algorithm of the previous section to compute a
level in an arrangement of x-monotone arcs.
Definition 4.1 Let "
S be a set of n x-monotone arcs in the plane, any pair of which intersect at
most t times (for some fixed constant t). We assume that "
S is in general position, as above. The
level of a point in the plane is the number of arcs of "
S lying strictly below it. Consider the closure
E l of the set of all points on the arcs of "
S having level l (for 0 - l ! n). E l is an x-monotone
(not necessarily connected) curve (which is polygonal in the case of lines or segments), which is
called the level l of the arrangement A( "
S). At x-coordinates where a vertical line intersects fewer
than l arcs, we consider E l to be undefined.
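To make the definition concrete, here is a small illustrative Python routine (our own modeling, not
part of the paper) that computes the level of a query point with respect to x-monotone arcs given as
(x_left, x_right, y_of) triples, where y_of returns the arc's y-coordinate at a given x.

def level_of_point(px, py, arcs):
    # number of arcs lying strictly below the point (px, py)
    count = 0
    for (x_left, x_right, y_of) in arcs:
        if x_left <= px <= x_right and y_of(px) < py:
            count += 1
    return count

# example: two horizontal unit segments below the query point (0.5, 3.0)
arcs = [(0.0, 1.0, lambda x: 1.0), (0.0, 1.0, lambda x: 2.0)]
print(level_of_point(0.5, 3.0, arcs))   # prints 2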
Levels are a fundamental structure in computational and combinatorial geometry, and have
been subject to intensive research in recent years (see [AACS98, Dey98, TT97, TT98]). Tight
bounds on the complexity of a single level, even for arrangements of lines, proved to be surprisingly
hard to obtain. Currently, the best known upper bound for the case of lines is O(n(l + 1) 1=3 )
[Dey98], while the best known lower bound is \Omega(n log(l + 1)). The known bounds are
weaker for other classes of arcs.
First, note that if "
S is a set of lines, then, once we know the leftmost ray that belongs to E l ,
then the level l is locally defined: as we move from left to right along E l , each time we encounter
an intersection point (a vertex of A( "
we have to change the line that we traverse. (This is also
depicted in Figure 4.) In particular, we can compute the level E l in O(\lambda 3 (n + m) log n) expected time
using CompZoneOnline. The same procedure can be used to compute a level in an arrangement
of more general arcs. The only non-local behavior we have to watch for are jump discontinuities
of the level caused when an endpoint of an arc appears below the current level, or when the
Figure
4: The first level in an arrangement of segments (the vertical edges show the jump
discontinuities of the level, but are not part of the level).
current level reaches an endpoint of an arc (see Figure 4). See below for details concerning the
handling of those jumps.
In the following, let l; 0 l ! n be a prescribed parameter. Let E l denote the level l in the
arrangement
S).
The following adaption of CompZoneOnline to our setting is rather straightforward, but we
include it for the sake of completeness. We sort the endpoints of the arcs of "
S by their x-
coordinates. Each time our walk reaches the x-coordinate of the next endpoint, we update E l
by jumping up or down to the next arc, if needed. This additional work requires O(n log n) time.
During the walk, we maintain the invariant that the top edge of the current trapezoid is part of
l . To compute the first trapezoid in the walk, we compute the intersection of level l with the
y-axis (this can be done by sorting the arcs according to their intersections with the y-axis). Let
0 be this starting point. We perform a point-location query with p 0 in our virtual history DAG
to compute the starting trapezoid \Delta 0 .
Now, by walking to the right of \Delta 0 we can compute the part of E l lying to the right of the
y-axis. Indeed, let \Delta be the current trapezoid maintained by the algorithm, such that its top
edge is a part of E l . Let p(\Delta) denote the top right vertex of \Delta. By performing point-location
queries in our partial history DAG T , we can compute all the trapezoids of A VD (S) that contain
p(\Delta) (by our general position assumption, the number of such trapezoids is at most 6; this
number materializes when p(\Delta) lies in the intersection of two x-monotone arcs). By inspecting
this set of trapezoids, one can decide where E l continues to the right of \Delta, and determine the
next trapezoid having E l as its roof. The algorithm sets \Delta to be this trapezoid.
If the algorithm reaches an x-coordinate of an endpoint of an arc, we have to update E l by
jumping up (if this is the right endpoint of an arc and it lies on or below the level) or down (if
it is a left endpoint and lies below the level); namely, we set \Delta to be the trapezoid lying above
(or below) the current \Delta.
The algorithm continues in this manner, until reaching the last edge of E l . The algorithm
then performs a symmetric walk to the left of the y-axis to compute the other portion of the
level.
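The walk just described can be summarized by the following high-level Python sketch; all helper
routines (locate, trapezoids_containing, next_on_level) are hypothetical stand-ins for the
point-location and adjacency machinery discussed above, and endpoint jumps are omitted for brevity.

def walk_level(start_point, locate, trapezoids_containing, next_on_level):
    trap = locate(start_point)            # starting trapezoid; its roof lies on E_l
    level_edges = []
    while trap is not None:
        level_edges.append(trap.top_edge)
        p = trap.top_right_vertex         # advance to the right along the level
        candidates = trapezoids_containing(p)    # at most a constant number
        trap = next_on_level(candidates)         # trapezoid whose roof continues E_l
    return level_edges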
Let CompLevel denote this modified algorithm. We summarize our result:
Theorem 4.2 The algorithm CompLevel computes the level l in A( "
S) in O(\lambda t+2 (n + m) log n)
expected time, where m is the complexity of E l .
Remark 4.3 Since CompLevel is online, we can use it to compute the first m 0 points of E l , in
expected O( t+2 (n +m 0 ) log n) time.
Remark 4.4 A straightforward extension of CompLevel allows us to compute any connected
path within the union of "
S (i.e., we restrict our "walk" to the arcs of "
S) in an on-line manner, in
randomized expected time O ( t+2 (m + n) log n), where m is the number of vertices of the path.
As above, the extended version can also handle jumps between adjacent arcs during the walk.
4.2 Other Applications
In this subsection, we provide some additional applications of CompZoneOnline.
Theorem 4.5 Let L be a set of n lines in the plane, and let be a prescribed constant.
Then one can compute a (1=r)-cutting of A(L), having at most (1+ ")(8r
randomized expected time O
log n
, where ff(n) is the inverse of the Ackermann
function [SA95].
Proof: This follows by plugging the algorithm of Theorem 4.2 and Remark 4.3 into the
algorithm described in [HP98].
For a discussion of cuttings of small asymptotic size, and their applications, see [Mat98, HP98].
Remark 4.6 Theorem 4.5 improves the previous result of [HP98] by almost a logarithmic factor.
Remark 4.7 Once we have computed the level l (in an arrangement of general arcs), we can
clip the arcs to their portions below the level. Supplied with those clipped arcs, we can compute
the arrangement below the level l in O((m + n) log n + r) time, where m is the complexity
of the level l, and r is the complexity of the first l levels of A( "
S). Thus, we can compute
the first l levels of A( "
S) in O(\lambda t+2 (m + n) log n + r) expected time, using randomized incremental
construction [Mul94]. This improves over the previous result of [ERvK96] that computes this
portion of the arrangement in O(n log n + nl) time (note that this running time is not output
sensitive).
A byproduct of the technique of CompZoneOnline is the ability to perform point-location
queries using the partial history DAG mechanism.
Definition 4.8 For a point set P , and a set of arcs "
S, let M(P; "
S) denote a connected polygonal
set, such that (i) P ` M(P; "
S), and (ii) the number of intersections between M(P; "
S) and the
arcs of "
S is minimum. Let wM (P; "
S) denote the number of such intersections.
The set M(P; "
S) can be interpreted as the minimum spanning Steiner tree of P , under the
metric of intersections with the arcs of "
S.
Lemma 4.9 Given a set "
S of n arcs in the plane, one can answer point-location queries for
a set P of m points in an online manner, such that the overall expected time to answer those
queries is O(\lambda t+2 (n + wM (P; "
S)) log n).
Proof: We precompute a random permutation S of "
S, and let T be our partial history DAG.
We answer the point-location queries, by computing the relevant parts of the history DAG of
A VD (S), as in CompZoneOnline.
By the time the algorithm terminates, T is contained in A fl;VD (S), where
S).
However, the expected total weight of the trapezoids of T Z(S) is O( t+2 (n m) log(n)), by
Lemma 3.1. Which bounds the overall expected query time.
Remark 4.10 The result of Lemma 4.9 is somewhat disappointing, since wM
[Aga91], while for the case of lines, m faces can be computed in, roughly, O(n 2=3 m 2=3 ) time [AMS98].
We are not aware of any algorithm with a better running time than the algorithm
of Lemma 4.9, for the case of lines, where the query points are given in an online fashion.
Currently, for the case of general arcs, no better bound is known on the
complexity of m distinct faces in an arrangement of n arcs (see [EGP + 92]).
The algorithm of Lemma 4.9 is simple, and it has the favorable additional property of being
adaptive. Namely, if wM (P; "
S) is smaller (i.e., the query points are "close together"), the overall
query time improves. Furthermore, if there are a lot of queries close together, the first query will
be slow, and the later ones will be fast (since the later queries use parts of paths that already
exist in the partial history DAG).
5 Conclusions
In this paper we have presented a new randomized algorithm for computing a zone in a planar
arrangement, in an online fashion. This algorithm is the first efficient algorithm for the case of
planar arcs; it performs faster (by nearly a logarithmic factor) than the algorithm of [OvL81]
for the case of lines and segments, and it is considerably simpler. We also presented an efficient
randomized algorithm for computing a level in an arrangement of arcs in the plane, whose
expected running time is faster than any previous algorithm for this problem.
The main result of this paper relies on the application of point-location queries to compute
the relevant parts of an "off-line" structure (i.e., the history DAG). The author believes that this
technique should have additional applications. In particular, this approach might be useful also
for algorithms in higher dimensions. We leave this as an open question for further research.
Acknowledgments
The author wishes to thank Pankaj Agarwal, Danny Halperin and Micha Sharir for helpful
discussions concerning the problems studied in this paper and related problems.
--R
On levels in arrangements of lines
Intersection and decomposition algorithms for planar arrangements.
The area bisectors of a polygon and force equilibria in programmable vector fields.
Algorithmic Geometry.
Computing a face in an arrangement of line segments.
On lazy randomized incremental construction.
Computational Geometry: Algorithms and Applications.
Improved bounds for planar k-sets and related problems
Algorithms in Combinatorial Geometry.
Arrangements of curves in the plane: Topology
An optimal algorithm for the
Constructing cuttings in theory and practice.
The complexity of many cells in the overlay of many arrangements.
Computational Geometry: An Introduction Through Randomized Algorithms.
Maintenance of configurations in the plane.
A characterization of planar graphs by pseudo-line arrangements
How to cut pseudo-parabolas into segments
--TR
--CTR
Nisheeth Shrivastava , Subhash Suri , Csaba D. Tóth, Detecting cuts in sensor networks, Proceedings of the 4th international symposium on Information processing in sensor networks, April 24-27, 2005, Los Angeles, California
Naoki Katoh , Takeshi Tokuyama, Notes on computing peaks in k-levels and parametric spanning trees, Proceedings of the seventeenth annual symposium on Computational geometry, p.241-248, June 2001, Medford, Massachusetts, United States | planar arrangements;single face;levels;computational geometry |
586992 | The Density of Weakly Complete Problems under Adaptive Reductions. | Given a real number $\alpha < 1$, every language that is weakly $\leq_{n^{\alpha / 2} - {\rm T}}^{{\rm P}} $-hard for E or weakly $\leq_{n^{\alpha} - {\rm T}}^{\rm P}$-hard for E2 is shown to be exponentially dense. This simultaneously strengthens the results of Lutz and Mayordomo (1994) and Fu (1995). | Introduction
In the mid-1970's, Meyer[15] proved that every - P
-complete language for exponential
time-in fact, every - P
-hard language for exponential time-is dense. That is,
linear ), DENSE is the class of all dense languages, DENSE c is the
complement of DENSE, and Pm(DENSE c ) is the class of all languages that are - P
-reducible
to non-dense languages. language A 2 f0; 1g is dense if there is a real number ffl ? 0
such that jA -n
for all sufficiently large n, where .) Since that
time, a major objective of computational complexity theory has been to extend Meyer's
result from - P
m -reductions to - P
-reductions, i.e., to prove that every - P
-hard language for
E is dense. That is, the objective is to prove that
where PT(DENSE c ) is the class of all languages that are - P
T -reducible to non-dense lan-
guages. The importance of this objective derives largely from the fact (noted by Meyer[15])
that the class PT(DENSE c ) contains all languages that have subexponential circuit-size
complexity. language A ' f0; 1g has subexponential circuit-size complexity if, for every
real number ffl ? 0, for every sufficiently large n, there is an n-input, 1-output Boolean
This research was supported in part by National Science Foundation Grant CCR-9157382, with matching
funds from Rockwell International, Microware Systems Corporation, and Amoco Foundation.
circuit that decides that the set and has fewer than 2 n ffl
gates. Other-
wise, we say that A has exponential circuit-size complexity.) Thus a proof of (2) would tell
us that E contains languages with exponential circuit-size complexity, thereby answering a
major open question concerning the relationship between (uniform) time complexity and
circuit-size complexity. Of course (2) also implies the more modest, but more
famous conjecture, that
where SPARSE is the class of all sparse languages. language A ' f0; 1g is sparse if
there is a polynomial q(n) such that jA -n j - q(n) for all n 2 N.) As noted by Meyer[15],
the class PT(SPARSE) consists precisely of all languages that have polynomial circuit-size
complexity, so (3) asserts that E contains languages that do not have polynomial circuit-size
complexity.
Knowing (1) and wanting to prove (2), the natural strategy has been to prove results of
the form
E 6' P r (DENSE c )
for successively larger classes P r (DENSE c ) in the range
Pm(DENSE c ) ' P r (DENSE c ) ' PT(DENSE c ):
The first major step beyond (1) in this program was the proof by Watanabe[17] that
E 6' PO(log n)\Gammatt (DENSE c ); (4)
i.e., that every language that is - P
O(log n)\Gammatt -hard for E is dense. The next big step was the
proof by Lutz and Mayordomo[10] that, for every real number ff ! 1,
E 6' P n ff \Gammatt (DENSE c ): (5)
This improved Watanabe's result from O(log n) truth-table (i.e., nonadaptive) queries to
n ff such queries for ff arbitrarily close to 1 (e.g., to n 0:99 truth-table queries). Moreover,
Lutz and Mayordomo[10] proved (5) by first proving the stronger result that for all ff ! 1,
- p (P n ff \Gammatt (DENSE c )) = 0; (6)
which implies that every language that is weakly - P
n ff \Gammatt -hard for E (or for E 2 = DTIME(2 poly )) is dense. (A language A is weakly - P
r -hard for a complexity class C if -(P r (A) j C) 6= 0, i.e., if the class of languages in C that are
- P
r -reducible to A is a nonnegligible subset of C in the sense of the resource-bounded measure
developed by Lutz[9]. A language A is weakly - P
r -complete for C if A 2 C and A is weakly
r -hard for C. See [12] or [2] for a survey of resource-bounded measure and weak com-
pleteness.) The set of weakly - P
languages for E is now known to have p-measure
in the class C of all languages, while the set of all - P
languages for E has measure 0 unless (which is generally
conjectured to be true), almost every language is weakly - P
n ff \Gammatt -hard, but not - P
for E, so the result of Lutz and Mayordomo [10] is much more general than the fact that
every - P
n ff \Gammatt -hard language for E is dense.
A word on the relationship between hardness notions for E and E 2 is in order here. It
is well known that a language is - P
m -hard for E if and only if it is - P
m -hard for this is
(E). The same equivalence holds for - P
T -hardness. It is also clear that
every language that is - P
n ff \Gammatt -hard for E. However, it is not generally
the case that Pm(P n ff \Gammatt may well be the case that a language can
be - P
n ff \Gammatt -hard for E, but not for E 2 . These same remarks apply to - P
-hardness.
The relationship between weak hardness notions for E and E 2 is somewhat different.
Juedes and Lutz [8] have shown that weak - P
m -hardness for E implies
-hardness for
proof of this fact also works for
T -hardness. However, Juedes and Lutz
[8] also showed that weak - P
m -hardness for does not generally imply
-hardness
for E, and it is reasonable to conjecture (but has not been proven) that the same holds for
-hardness. We further conjecture that the notions of weak - P
and
are incomparable, and similarly for
-hardness.
In any case, (6) implies that, for every ff ! 1, every language that is weakly - P
for either E or E 2 is dense.
Shortly after, but independently of [10], Fu[7] used very different techniques to prove
that, for every ff ! 1,
E 6' P n ff=2 \GammaT (DENSE c ) (7)
and
E 2 6' P n ff \GammaT (DENSE c ): (8)
That is, every language that is - P
n ff=2 \GammaT -hard for E or - P
n ff \GammaT -hard for E 2 is dense. These
results do not have the measure-theoretic strength of (6), but they are a major improvement
over previous results on the densities of hard languages in that they hold for Turing
reductions, which have adaptive queries.
In the present paper, we prove results which simultaneously strengthen results of Lutz
and Mayordomo[10] and the results of Fu[7]. Specifically, we prove that, for every ff ! 1,
- p (P n ff=2 \GammaT (DENSE c )) = 0 (9)
and
- p2 (P n ff \GammaT (DENSE c )) = 0: (10)
These results imply that every language that is weakly - P
n ff=2 \GammaT -hard for E or weakly - P
n ff \GammaT -hard for E 2 is
dense. The proof of (9) and (10) is not a simple extension of
the proof in [10] or the proof in [7], but rather combines ideas from both [10] and [7] with
the martingale dilation technique introduced by Ambos-Spies, Terwijn, and Zheng [3].
Our results also show that the strong hypotheses - p (NP) 6= 0 and - p 2 (NP) 6= 0 (surveyed
in [12] and [2]) have consequences for the densities of adaptively hard languages for NP.
Mahaney [13] proved that
NP ' Pm(SPARSE) =) P = NP;
and Ogiwara and Watanabe [16] improved this to
NP ' P btt (SPARSE) =) P = NP:
That is, if P 6= NP, then no sparse language can be - P
btt -hard for NP. Lutz and Mayordomo
[10] used (6) to obtain a stronger conclusion from a stronger hypothesis, namely, for all
ff ! 1,
- p (NP) 6= 0 =) NP 6' P n ff \Gammatt (DENSE c ):
By (9) and (10), we now have, for all ff ! 1,
- p (NP) 6= 0 =) NP 6' P n ff=2 \GammaT (DENSE c )
and
- p2 (NP) 6= 0 =) NP 6' P n ff \GammaT (DENSE c ):
Thus, if - p (NP) 6= 0, then every language that is - P
n 0:49 \GammaT -hard for NP is dense. If
- p2 (NP) 6= 0, then every language that is - P
n 0:99 \GammaT -hard for NP is dense.
Preliminaries
The Boolean value of a condition / is [[/]], defined to be 1 if / holds and 0 if not.
The standard enumeration of f0; 1g is s 0 ; s 1 ; s 2 ; : : : (in lexicographic order); this enumeration
induces a total ordering of f0; 1g which we denote by !.
All languages here are subsets of f0; 1g . The Cantor space is the set C of all languages.
We identify each language A 2 C with its characteristic sequence, which is the infinite
binary sequence [[s 0 2 A]] [[s 1 2 A]] [[s 2 2 A]] \Delta \Delta \Delta , where s 0 ; s 1 ; s 2 ; : : :
is the standard enumeration of f0; 1g . For w 2 f0; 1g and A 2 C, we write w v
A to indicate that w is a prefix of (the characteristic
sequence of) A. The symmetric difference of the two languages A and B is A 4 B.
The cylinder generated by a string w 2 f0; 1g is the set Cw = fA 2 C j w v Ag.
Note that C - = C, where - denotes the empty string.
In this paper, a set X ' C that appears in a probability Pr(X) or a conditional probability
Pr(XjCw ) is regarded as an event in the sample space C with the uniform probability
measure. Thus, for example, Pr(X) is the probability that A 2 X when the language
A ' f0; 1g is chosen probabilistically by using an independent toss of a fair coin to decide
membership of each string in A. In particular, Pr(Cw ) = 2 \Gammajwj . The complement of a set
X ' C is the set X c = C \Gamma X.
A function f : N d \Theta f0; 1g ! Q is exactly t(n)-time-
computable if there is an algorithm that, on input (k 1 ; : : : ; k d ; w), runs for
at most O(t(k 1 + \Delta \Delta \Delta + k d + jwj)) steps and outputs an ordered pair (a; b) 2 Z \Theta Z such
that f(k 1 ; : : : ; k d ; w) = a
b . A function f : N d \Theta f0; 1g ! R is t(n)-time-computable if
there is an exactly t(n)-time-computable function b
f approximating it to within 2 \Gammar on input (r; k 1 ; : : : ; k d ; w).
We briefly review those aspects of martingales and resource-bounded measure that are
needed for our main theorem. The reader is referred to [2], [9], [12], or [14] for more thorough
discussion.
A martingale is a function d : f0; 1g ! [0; 1) such that, for all w 2 f0; 1g ,
d(w) = (d(w0) + d(w1))=2;
a t(n)-martingale is a martingale that is t(n)-time-computable, and an
exact t(n)-martingale is a (rational-valued) martingale that is exactly t(n)-time-computable.
A martingale d succeeds on a language A 2 C if, for every c 2 N, there exists w v A such
that d(w) ? c. The success set of a martingale d is the set
Cjd succeeds on Ag:
The unitary success set of d is
S 1 [d] = [ f Cw j w 2 f0; 1g and d(w) - 1 g:
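As a concrete toy illustration of these definitions (ours, not the paper's), the following Python
snippet implements a martingale that always bets half of its current capital that the next string is
not in A, and numerically checks the average law d(w) = (d(w0) + d(w1))/2.

def d(w):
    # w is a finite prefix of the characteristic sequence of a language A
    capital = 1.0
    for bit in w:
        if bit == '0':      # the bet "next string is not in A" won
            capital *= 1.5
        else:               # the bet lost
            capital *= 0.5
    return capital

for w in ['', '0', '1', '01', '110']:
    assert abs(d(w) - (d(w + '0') + d(w + '1')) / 2) < 1e-12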
The following result was proven by Juedes and Lutz [8] and independently by Mayor-
domo [14].
Lemma 2.1 (Exact Computation Lemma) Let t : N ! N be nondecreasing with t(n) - n 2 .
Then, for every t(n)-martingale d, there is an exact n
d such that
d].
A sequenceX
a
of series of terms a j;k 2 [0; 1) is uniformly p-convergent if there is a polynomial
such that, for all j; r 2 N,X
a r). The following
sufficient condition for uniform p-convergence is easily verified by routine calculus.
Lemma 2.2 Let a j;k 2 [0; 1) for all j; k 2 N. If there exist a real number ffl ? 0 and a
polynomial g : N ! N such that a j;k - e \Gammak ffl
for all j; k 2 N with k - g(j), then the series X
k a j;k , for j = 0; 1; 2; : : : , are uniformly p-convergent.
A uniform, resource-bounded generalization of the classical first Borel-Cantelli lemma
was proved by Lutz [9]. Here we use the following precise variant of this result.
Theorem 2.3 Let ff; e
ff, and let
be an exactly 2 (log n) ff
-time-computable function with the following two properties.
(i) For each j; k 2 N, the function d j;k defined by d j;k is a martingale.
(ii) The seriesX
are uniformly p-convergent.
Then there is an exact 2 (log n) e
ff
-martingale e
ff such
k=t
d]:
Proof (sketch). Assume the hypothesis, and fix ff
ff. Since
ff
it suffices by Lemma 2.1 to show that there is a 2 (log n) ff 0
martingale d 0 such
k=t
Fix a polynomial testifying that the seriesX
are
uniformly p-convergent, and define
for all w 2 f0; 1g . Then, for each w 2 f0; 1g ,
so 1). It is clear by linearity that d 0 is a martingale. To see that (16)
holds, assume that A 2[
k=t
arbitrary. Then there exist
2c) such that A 2 S 1 [d j;k ]. Fix w v A such that d j;k (w) - 1. Then
arbitrary here, it follows that A 2 S 1 [d 0 ], confirming
(16).
To see that d 0 is 2 (log n) ff 0
-time-computable, define dA
follows, using the abbreviation 2.
dA
s
dB
s
2s
s
2s
For all r 2 N and w 2 f0; 1g , it is clear that
and it is routine to verify the inequalities
dA
dB
whence we have
for all r 2 N and w 2 f0; 1g . Using formula (17), the time required to compute dC (r; w)
exactly is no greater than
and q is a polynomial. Since q(n) \Delta 2 (log n) ff
it follows that
exactly 2 (log n) ff 0
-time-computable. By (18), then, d 0 is a 2 (log n) ff 0
-martingale.
The proof of our main theorem uses the techniques of weak stochasticity and martingale
dilation, which we briefly review here.
As usual, an advice function is a function h Given a function q
we write ADV(q) for the set of all advice functions h such that jh(n)j - q(n) for all n 2 N.
Given a language B and an advice function h, we define the language
is a standard string-pairing function, e.g., ! x; y ?= 0 jxj 1xy. Given
functions t; we define the advice class
Definition (Lutz and Mayordomo[10], Lutz[11]) For t; language A is
weakly (t; q; -stochastic if, for all B; C 2 DTIME(t)=ADV(q) such that jC =n j -(n) for
all sufficiently large n,
lim
We write WS(t; q; -) for the set of all weakly (t; q; -stochastic languages.
The following result resembles the weak stochasticity theorems proved by Lutz and
Mayordomo [10] and Lutz [11], but gives a more careful upper bound on the time complexity
of the martingale.
Theorem 2.4 (Weak Stochasticity Theorem) Assume that ff; fi; fl; - 2 R satisfy ff -
there is an exact 2 (log n) -
-martingale d such that
Proof. Assume the hypothesis, and assume without loss of generality that ff; fi; fl; - 2 Q .
Fix
) be a
language that is universal for DTIME(2 n ff
) in the following sense. For each
Ng.
Define a function d is not a power of 2, then
where the sets Y i;j;k;y;z are defined as follows. If
is the set of all A 2 C such that
The definition of conditional probability immediately implies that, for each N, the
function d 0
i;j;k is a martingale. Since U 2 DTIME(2 n ff 0
to compute each Pr(Y i;j;k;y;z jCw ) using binomial coefficients is at most O(2 (log(i+j+k)) - 00
steps, so the time required to compute d 0
i;j;k (w) is at most O((2 n fi
steps. Thus d 0 is exactly 2 (log n) - 0
-time-computable.
As in [10] and [11], the Chernoff bound tells us that, for all
whence
e, let
4 , and fix k 0 2 N such that
for all j 2 N. Then g is a polynomial and, for all
It follows by Lemma 2.2 that the seriesX
i;j;k (-), for are uniformly p-convergent.
Theorem 2.3 that there is an exact 2 (log n) -
-martingale d such
k=t
Now assume that A 62 WS(2 n ff
by the definition of weak stochasticity,
we can fix and an infinite set J ' N such that, for
all . For each n 2 J , then, there is a prefix w v A
such that Cw ' Y i;j;k;h 1 (n);h2 (n), whence
i;j;k ]. This argument shows that[
k=t
It follows by (19) that
The technique of martingale dilation was introduced by Ambos-Spies, Terwijn, and
Zheng [3]. It has also been used by Juedes and Lutz[8] and generalized considerably by
Breutzmann and Lutz [6]. We use the notation of [8] here.
The restriction of a string w = b 1 b 2 \Delta \Delta \Delta b n 2 f0; 1g to a language A ' f0; 1g is
the string w-A obtained by concatenating the successive bits b i for which s i 2 A. If
f : f0; 1g ! f0; 1g is strictly increasing and d is a martingale, then the f -dilation of d is
the function f-d : f0; 1g ! [0; 1) defined by (f-d)(w) = d(w-range(f))
for all w 2 f0; 1g .
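The following Python fragment gives an informal rendering of the restriction and dilation operations
(our own modeling with a hypothetical interface): in_range_f(i) is assumed to tell whether the i-th
string s i lies in the range of f.

def restrict(w, in_range_f):
    # keep the bits of w whose positions index strings in range(f)
    return ''.join(b for i, b in enumerate(w) if in_range_f(i))

def dilate(d, in_range_f):
    # the f-dilation plays the martingale d on the restricted subsequence
    return lambda w: d(restrict(w, in_range_f))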
Lemma 2.5 (Martingale Dilation Lemma - Ambos-Spies, Terwijn, and Zheng[3]) If f :
strictly increasing and d is a martingale, then f-d is also a martingale.
Moreover, for every language A 2 f0; 1g , if d succeeds on f \Gamma1 (A), then f-d succeeds on A.
Finally, we summarize the most basic ideas of resource-bounded measure in E and E 2 .
A p-martingale is a martingale that is, for some k 2 N, an n k -martingale. A p 2 -martingale
is a martingale that is, for some k 2 N, a 2 (log n) k
-martingale.
Definition (Lutz [9])
1. A set X of languages has p-measure 0, and we write - there is a p-
martingale d such that X ' S 1 [d].
2. A set X of languages has p 2 -measure 0, and we write - there is a
-martingale d such that X ' S 1 [d].
3. A set X of languages has measure 0 in E, and we write
4. A set X of languages has measure 0 in E 2 , and we write -(XjE 2
5. A set X of languages has measure 1 in E, and we write
In this case, we say that contains almost every element of E.
6. A set X of languages has measure 1 in E 2 , and we write -(XjE 2
In this case, we say that contains almost every element of E 2 .
7. The expression -(XjE) 6= 0 means that X does not have measure 0 in E. Note that
this does not assert that "-(XjE)" has some nonzero value. Similarly, the expression
means that X does not have measure 0 in E 2 .
It is shown in [9] that these definitions endow E and E 2 with internal measure structure.
This structure justifies the intuition that, if negligibly small
subset of E (and similarly for
The key to our main theorem is the following lemma, which says that languages that are
-reducible to non-dense languages cannot be very stochastic.
Lemma 3.1 (Main Lemma) For all real numbers ff ! 1 and fi
Proof. Let assume without loss of generality that ff and fi are
rational. Let A 2 P n ff \GammaT (DENSE c ). It suffices to show that A is not weakly
stochastic.
there exist a non-dense language S, a polynomial q(n),
and a q(n)-time-bounded oracle Turing machine M such that A = L(M S ) and, for every
makes exactly bjxj ff cqueries (all distinct) on input x with
oracle B. Call these queries c) in the order in which M makes
them.
For each B 2 f0; 1g and n 2 N, define an equivalence relation -B;n on f0; 1g -q(n) by
and an equivalence relation jB;n on f0; 1g n by
Note that -B;n has at most 2jB -q(n) j+1 equivalence classes, so jB;n has at most (2jB -q(n) j+
equivalence classes.
2 , and let J be the set of all n 2 N for which the following three conditions
hold.
conditions (ii) and (iii) hold for all sufficiently large n.
S is not dense, condition (i) holds for infinitely many n. Thus the set J is
infinite.
Define an advice function h
be a maximum-cardinality equivalence class of the relation j S;n . For
each
Let
Note that
For each n 2 N, let be the set of all coded pairs
such that x; y
denotes the ith query of M on input w when the successive oracle answers
are be the set of all such coded pairs in C n such that M accepts on input
x when the successive oracle answers are b Finally, define the languages
It is clear that B; C 2 DTIME(2 n ). Also, by our construction of these sets and the advice
function h, for each n 2 N, we have
ae
and
For each n 2 J , if -(n) is the number of equivalence classes of j S;n , then
so
It follows that j(C=h) =n
2 for all n 2 N.
Finally, for all n 2 J ,
Since J is infinite, it follows that
for all n 2 N, this
shows that A is not weakly
)-stochastic.
We now prove our main result.
Theorem 3.2 (Main Theorem) For every real number ff ! 1,
Proof. Let
2. By Theorem 2.4, there is an
exact 2 (log n) 2
-martingale d such that
By Lemma 3.1, we then have
Since d is a p 2 -martingale, this implies that - p2 (P n ff \GammaT (DENSE c
Then f is strictly increasing, so f-d, the f-dilation of d, is a martingale. The time required
to compute f-d(w) is
steps, where w steps to compute w 0 and then
O(2 (log jw 0
steps to compute d(w 0 ).)
Now jw 0 j is bounded above by the number of strings x such that jxj 2 - js jwj
jwj)c, so
Thus the time required to compute f-d(w) is
steps, so f-d is an n 2 -martingale.
Now let A 2 P n ff=2 \GammaT (DENSE c ). Then f
This shows that P n ff=2 \GammaT (DENSE c ) ' S 1 [f-d]. Since f-d is an
We now develop a few consequences of the Main Theorem. The first is immediate.
Corollary 3.3 For every real number ff ! 1,
The following result on the density of weakly complete (or weakly hard) languages now
follows immediately from Corollary 3.3.
Corollary 3.4 For every real number ff ! 1, every language that is weakly - P
for E or weakly - P
Our final two corollaries concern consequences of the strong hypotheses - p (NP) 6= 0
The relative strengths of these hypotheses are indicated by the known
implications
E)
(The leftmost implication was proven by Juedes and Lutz[8]. The remaining implications
follow immediately from elementary properties of resource-bounded measure.)
Corollary 3.5 Let ff ! 1. If - p (NP) 6= 0, then every language that is - P
NP is dense. If - p2 (NP) 6= 0, then every language that is - P
n ff \GammaT -hard for NP is dense.
We conclude by considering the densities of languages to which SAT can be adaptively
reduced.
Definition A function g : N ! N is subradical if log g(n) = o(log n).
It is easy to see that a function g is subradical if and only if, for all k ? 0, g(n) = o(n 1=k ).
(This is the reason for the name "subradical.") Subradical functions include very slow-growing
functions such as log n and (log n) 5 , as well as more rapidly growing functions such
as 2 (log n) 0:99 .
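For instance, a quick check (ours, not from the paper) that g(n) = 2^{(\log n)^{0.99}} is subradical:

\log g(n) = (\log n)^{0.99} = o(\log n), \qquad \text{hence for every } k > 0, \quad
g(n) = n^{o(1)} = o\bigl(n^{1/k}\bigr).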
Corollary 3.6 If - p (NP) 6= 0, g : N ! N is subradical, and SAT - P
dense.
Proof. Assume the hypothesis. Let A 2 NP. Then there is a - P
-reduction f of A to SAT.
Fix a polynomial q(n) such that, for all x 2 f0; 1g , jf(x)j - q(jxj). Composing f with
the - P
g(n)\GammaT -reduction of SAT to H that we have assumed to exist then gives a - P
reduction of A to H. Since g is subradical, log g(q(n)) = o(log n), so for all
sufficiently large n, g(q(n)) - n 1=4 . Thus A - P
n 1=4 \GammaT H.
The above argument shows that H is - P
n 1=4 \GammaT -hard for NP. Since we have assumed - p (NP) 6= 0, it follows from
Corollary 3.5 that H is dense.
To put the matter differently, Corollary 3.6 tells us that if SAT is polynomial-time
reducible to a non-dense language with at most 2 (log n) 0:99
adaptive queries, then NP has
measure 0 in E and in E 2 .
Questions
As noted in the introduction, the relationships between weak hardness notions for E and
under reducibilities such as - P
remain to be resolved. Our main theorem
also leaves open the question whether - P
n ff \GammaT -hard languages for E must be dense when 1=2 - ff ! 1. We are in the curious situation of knowing that the classes P n 0:99 \Gammatt (DENSE c )
and P n 0:49 \GammaT (DENSE c ) have p-measure 0, but not knowing whether P n 0:50 \GammaT (DENSE c ) has
p-measure 0. Indeed, at this time we cannot even prove that E 6' P n 0:50 \GammaT (SPARSE).
Further progress on this matter would be illuminating.
--R
Theoretical Computer Science.
Relative to a random oracle A
On isomorphism and density of NP and other complete sets.
Equivalence of measures of complexity classes
EXP is not polynomial time Turing reducible to sparse sets.
Weak completeness in E and
Almost everywhere high nonuniform complexity
the density of hard languages.
Theoretical Computer Science
The quantitative structure of exponential time
Sparse complete sets for NP: Solution of a conjecture of Berman and Hartmanis.
Contributions to the Study of Resource-Bounded Measure
Reported in
On polynomial bounded truth-table reducibility of NP sets to sparse sets
On the Structure of Intractable Complexity Classes.
--TR
--CTR
John M. Hitchcock, The size of SPP, Theoretical Computer Science, v.320 n.2-3, p.495-503, June 14, 2004 | weakly complete problems;resource-bounded measure;complexity classes;computational complexity;polynomial reductions |
586993 | A Generalization of Resource-Bounded Measure, with Application to the BPP vs. EXP Problem. | We introduce resource-bounded betting games and propose a generalization of Lutz's resource-bounded measure in which the choice of the next string to bet on is fully adaptive. Lutz's martingales are equivalent to betting games constrained to bet on strings in lexicographic order. We show that if strong pseudorandom number generators exist, then betting games are equivalent to martingales for measure on E and EXP. However, we construct betting games that succeed on certain classes whose Lutz measures are important open problems: the class of polynomial-time Turing-complete languages in EXP and its superclass of polynomial-time Turing-autoreducible languages. If an EXP-martingale succeeds on either of these classes, or if betting games have the "finite union property" possessed by Lutz's measure, one obtains the nonrelativizable consequence $\mbox{BPP} \neq \mbox{EXP}$. We also show that if $\mbox{EXP} \neq \mbox{MA}$, then the polynomial-time truth-table-autoreducible languages have Lutz measure zero, whereas if | Introduction
Lutz's theory of measure on complexity classes is now usually defined in terms of resource-bounded
martingales. A martingale can be regarded as a gambling game played on unseen languages A. Let
be the standard lexicographic ordering of strings. The gambler G starts with capital
places a bet
A." Given a fixed particular
language A, the bet's outcome depends only on whether s 1 2 A. If the bet wins, then the new
capital while if the bet loses, C . The gambler then places a bet
on (or against) membership of the string s 2 , then on s 3 , and so forth. The gambler
succeeds if G's capital C i grows toward +1. The class C of languages A on which G succeeds
(and any subclass) is said to have measure zero. One also says G covers C. Lutz and others (see
[Lut97]) have developed a rich and extensive theory around this measure-zero notion, and have
shown interesting connections to many other important problems in complexity theory.
We propose the generalization obtained by lifting the requirement that G must bet on strings
in lexicographic order. That is, G may begin by choosing any string x 1 on which to place its first
bet, and after the oracle tells the result, may choose any other string x 2 for its second bet, and so
forth. Note that the sequences x may be radically different
for different oracle languages A-in complexity-theory parlance, G's queries are adaptive. The lone
restriction is that G may not query (or bet on) the same string twice. We call G a betting game.
Our betting games remedy a possible lack in the martingale theory, one best explained in
the context of languages that are "random" for classes D such as E or EXP. A language L is
D-random if L cannot be covered by a D-martingale. Based on one's intuition about random 0-1
sequences, the language L should likewise be D-random, where flip(x) changes
every 0 in x to a 1 and vice-versa. However, this closure property is not known for E-random or
EXP-random languages, because of the way martingales are tied to the fixed lex ordering of \Sigma .
Betting games can adapt to easy permutations of \Sigma such as that induced by flip. Similarly, a class
C that is small in the sense of being covered by a (D-) betting game remains small if the languages
are so permuted. In the r.e./recursive theory of random languages, our generalization is
similar to "Kolmogorov-Loveland place-selection rules" (see [Lov69]). We make this theory work
for complexity classes via a novel definition of "running in time t(n)" for an infinite process.
Our new angle on measure theory may be useful for attacking the problem of separating BPP
from EXP, which has recently gained prominence in [IW98]. In Lutz's theory it is open whether the
class of EXP-complete sets-under polynomial-time Turing reductions-has EXP-measure zero. If
so (in fact if this set does not have measure one), then by results of Allender and Strauss [AS94],
BPP 6= EXP. Since there are oracles A such that BPP A = EXP A [Hel86], this kind of absolute
separation would be a major breakthrough. We show that the EXP-complete sets can be covered
by an EXP betting game-in fact, by an E-betting game. The one technical lack in our theory as
a notion of measure is also interesting here: If the "finite unions" property holds for betting games
(viz. martingales do
enjoy the permutation-invariance of betting games, then BPP 6= EXP. Finally, we show that if a
pseudorandom number generator (PRG) of security 2
n\Omega\Gamma/1 exists, then for every EXP-betting game
G one can find an EXP-martingale that succeeds on all sets covered by G. PRGs of higher security\Omega\Gamma n) likewise imply the equivalence of E-betting games and E-measure. Ambos-Spies and Lempp
[ASL96] proved that the EXP-complete sets have E-measure zero under a different hypothesis,
namely
Measure theory and betting games help us to dig further into questions about PRGs and
complexity-class separations. Our tool is the notion of an autoreducible set, whose importance in
complexity theory was argued by Buhrman, Fortnow, van Melkebeek, and Torenvliet [BFvMT98]
(after [BFT95]). A language L is - p
-autoreducible if there is a polynomial-time oracle TM Q such
that for all inputs x, Q L correctly decides whether x 2 L without ever submitting x itself as a
query to L. If Q is non-adaptive (i.e., computes a polynomial-time truth-table reduction), we say
L is - p
tt -autoreducible. We show that the class of - p
T -autoreducible sets is covered by an E-betting
game. Since every EXP-complete set is - p
-autoreducible [BFvMT98], this implies results given
above. The subclass of - p
tt -autoreducible sets provides the following tighter connection between
measure statements and open problems about EXP:
ffl If the - p
tt -autoreducible sets do not have E-measure zero, then EXP = MA.
ffl If the - p
-autoreducible sets do not have E-measure one in EXP, then EXP 6= BPP.
Here MA is the "Merlin-Arthur" class of Babai [Bab85, BM88], which contains BPP and NP.
Since EXP 6= MA is strongly believed, one would expect the class of - p
-autoreducible sets to
have E-measure zero, but proving this-or proving any of the dozen other measure statements in
Corollaries 6.2 and 6.5-would yield a proof of EXP 6= BPP.
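As a toy illustration of the autoreducibility notion defined above (our example; it is unrelated to
the EXP-complete sets studied in this paper), the language PARITY of strings with an even number of
1s is - p
tt -autoreducible via a single query to a different string:

def parity_oracle(x):
    # PARITY = { x : x contains an even number of 1s }
    return x.count('1') % 2 == 0

def autoreduction(x, oracle):
    # decide "x in PARITY" with one query to a string y != x
    assert len(x) >= 1
    y = ('1' if x[0] == '0' else '0') + x[1:]   # flip the first bit of x
    return not oracle(y)

for x in ['0', '1', '01', '110', '1011']:
    assert autoreduction(x, parity_oracle) == parity_oracle(x)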
In sum, the whole theory of resource-bounded measure has progressed far enough to wind the
issues of (pseudo-)randomness and stochasticity within exponential time very tightly. We turn the
wheels a few more notches, and seek greater understanding of complexity classes in the places where
the boundary between "measure one" and "measure zero" seems tightest.
Section 2 reviews the formal definitions of Lutz's measure and martingales. Section 3 introduces
betting games, and shows that they are a generalization of martingales. Section 4 shows how to
simulate a betting game by a martingale of perhaps-unavoidably higher time complexity. Section 5,
however, demonstrates that strong PRGs (if there are any) allow one to compute the martingale
in the same order of time. Section 6 presents our main results pertaining to autoreducible sets,
including our main motivating example of a concrete betting game. The concluding Section 7
summarizes open problems and gives prospects for future research.
A preliminary version of this paper without proofs appeared in the proceedings of STACS'98,
under the title "A Generalization of Resource-Bounded Measure, With an Application."
2 Martingales
A martingale is abstractly defined as a function d from f 0; 1 g into the nonnegative reals that
satisfies the following "average law": for all w 2 f 0; 1 g , 2d(w) = d(w0) + d(w1).
The interpretation in Lutz's theory is that a string w 2 f 0; 1 g stands for an initial segment of
a language over an arbitrary alphabet \Sigma as follows: Let s 1 ; s 2 ; s 3 ; : : : be the standard lexicographic
ordering of \Sigma . Then for any language A ' \Sigma , write w v A if for all i, 1 - i - jwj, s i 2 A iff the
ith bit of w is a 1. We also regard w as a function with domain f s 1 ; : : : ; s jwj g and range f 0; 1 g,
writing w(s i ) for the ith bit of w. A martingale d succeeds on a language A if the sequence
of values d(w) for w v A is unbounded. If J is a set of strings such that for any w 2 f 0; 1 g and
any b 2 f 0; 1 g, d(wb) 6= d(w) implies s jwbj 2 J , we say that the martingale d is active only on J .
stand for the (possibly empty, often uncountable) class of languages on which d
succeeds.
Definition 2.1 (cf. [Lut92, May94]). Let \Delta be a complexity class of functions. A class C of
languages has \Delta-measure zero, written - \Delta there is a martingale d computable in \Delta such
that C ' S 1 [d]. One also says that d covers C.
Lutz defined complexity bounds in terms of the length of the argument w to d, which we
denote by N . However, we also work in terms of the largest length n of a string in the domain of
w. For N ? 0, n equals blog Nc; all we care about is that Because
complexity bounds on languages we want to analyze will naturally be stated in terms of n, we
prefer to use n for martingale complexity bounds. The following correspondence is helpful:
- measure on EXP
Our convention lets us simply write "- E " for E-measure (regarding \Delta as E for functions),
similarly "- EXP " for EXP-measure, and generally - \Delta for any \Delta that names both a language and
function class. Abusing notation similarly, we define:
Definition 2.2 ([Lut92]). A class C has \Delta-measure one, written - \Delta
The concept of resource bounded measure is known to be robust under several changes [May94].
The following lemma has appeared in various forms [May94, BL96]. It essentially says that we can
assume a martingale grows almost monotonically (sure winnings) and not too fast (slow winnings).
Lemma 2.1 ("Slow-but-Sure-Winnings" lemma for martingales) Let d be a martingale.
Then there is a martingale d 0 with S 1 [d] ' S 1 [d 0 ] such that
If d is computable in time t(n) , then d 0 is computable in time O(2 n t(n)).
The idea is to play the strategy of d, but in a more conservative way. Say we start with an
initial capital of $1. We will deposit a part c of our capital on a bank and only play the strategy
underlying d on the remaining liquid part e of our capital. We start with no savings and a liquid
capital of $1. When our liquid capital e would reach $2 or exceed that level, we deposit an additional
$1 or $2 to our savings account c so as to keep the liquid capital in the range $[1; 2) at all times. If
d succeeds, it will push the liquid capital infinitely often to $2 or above, so c grows to infinity, and
d 0 succeeds too. Since we never take money out of our savings account c, and the liquid capital e is
bounded by $2, once our total capital d reached a certain level, it will never go more
than $2 below that level anymore, no matter how bad the strategy underlying d is. On the other
hand, since we add at most $2 to c in each step, d 0 (w) cannot exceed 2(jwj
We now give the formal proof.
Proof. (of Lemma 2.1) Define d
Checking the time and space complexity bounds for d 0 is again straightforward.
We can show by induction on jwj that
and that
from which it follows that d 0 is a martingale.
If d succeeds on !, e(w) will always remain positive for w v !, and d(wb)
or more infinitely often. Consequently, lim wv!;jwj!1
that S 1 [d] ' S 1 [d 0 ]. Moreover, by (4) and the fact that c does not decrease along any sequence,
we have that
Since c can increase by at most 2 in every step, c(w) - 2jwj. Together with (4), this yields
that
One can also show that S 1 [d 0 ] ' S 1 [d] in Lemma 2.1, so the success set actually remains intact
under the above transformation.
As with Lebesgue measure, the property of having resource-bounded measure zero is monotone
and closed under union ("finite unions property"). A resource-bounded version of closure under
countable unions also holds. The property that becomes crucial in resource-bounded measure is
that the whole space \Delta does not have measure zero, which Lutz calls the "measure conservation"
property. With a slight abuse of meaning for "6=," this property is written - \Delta (\Delta) 6= 0. In particular,
of \Delta that require substantially fewer resources, do have
\Delta-measure zero. For example, P has E-measure zero. Indeed, for any fixed c ? 0, DTIME[2 cn ] has
E-measure zero, and DTIME[2 n c
has EXP-measure zero [Lut92].
Apart from formalizing rareness and abundance in complexity theory, resource-bounded martingales
are also used to define the concept of a random set in a resource-bounded setting.
Definition 2.3. A set A is \Delta-random if - \Delta (fAg) 6= 0.
In other words, A is \Delta-random if no \Delta-martingale succeeds on A.
3 Betting Games
To capture intuitions that have been expressed not only for Lutz measure but also in many earlier
papers on random sequences, we formalize a betting game as an infinite process, rather than as a
Turing machine that has finite computations on string inputs.
Definition 3.1. A betting game G is an oracle Turing machine that maintains a "capital tape"
and a "bet tape," in addition to its standard query tape and worktapes, and works in stages
i = 1; 2; 3; : : : . Beginning each stage i, the capital tape holds a nonnegative rational
number C i\Gamma1 . Initially, C 0 = 1. During stage i, G computes a query string x i to bet on, a bet amount B i ,
0 - B i - C i\Gamma1 , and a bet sign b i 2 f \Gamma1; +1 g. The computation is legal so long as x i does not
belong to the set f x 1 ; : : : ; x i\Gamma1 g of strings queried in earlier stages. G ends stage i by entering a
special query state. For a given oracle language A, if x i 2 A and b i =+1, or if x i 62 A and b i = \Gamma1,
then the new capital is given by C i := C i\Gamma1 + B i ; otherwise, C i := C i\Gamma1 \Gamma B i . The query and bet tapes
are blanked, and G proceeds to stage i + 1.
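For intuition, here is a minimal Python simulation of the stage structure just defined (our own
modeling, not the oracle-machine formalism): strategy(history) returns the next query, bet amount
and sign, and oracle decides membership.

def run_betting_game(strategy, oracle, num_stages):
    capital, history, queried = 1.0, [], set()
    for _ in range(num_stages):
        x, bet, sign = strategy(history)
        assert x not in queried and 0 <= bet <= capital and sign in (-1, +1)
        queried.add(x)
        answer = oracle(x)
        win = (answer and sign == +1) or (not answer and sign == -1)
        capital += bet if win else -bet
        history.append((x, answer, capital))
    return capital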
Since we require that G spend the time to write each bet out in full, it does not matter whether
we suppose that the new capital is computed by G itself or updated instantly by the oracle. In this
paper, we lose no generality by not allowing G to "crash" or to loop without writing a next bet
and query. Note that every oracle set A determines a unique infinite computation of G, which we
denote by G A . This includes a unique infinite sequence x 1 ; x 2 ; x 3 ; : : : of query strings, and a unique
sequence of capital values C 1 ; C 2 ; C 3 ; : : : telling how the gambler fares against A .
Definition 3.2. A betting machine G runs in time t(n) if for all oracles A, every query of length n
made by G A is made in the first t(n) steps of the computation.
A similar definition can be made for space usage, taking into account standard issues such as
whether the query tape counts against the space bound, or whether the query itself is preserved in
read-only mode for further computation by the machine.
Definition 3.3. A betting game G succeeds on a language A, written A 2 S 1 [G], if the sequence
of values C i in the computation G A is unbounded. If A 2 S 1 [G], then we also say G covers A.
Our main motivating example where one may wish not to bet in lexicographic order, or according
to any fixed ordering of strings, is deferred to Section 6. There we will construct an E-betting
game that succeeds on the class of - p
-autoreducible languages, which is not known to have Lutz
measure zero in E or EXP.
We now want to argue that the more liberal requirement of being covered by a time t(n)
betting game, still defines a smallness concept for subclasses of DTIME[t(n)] in the intuitive sense
Lutz established for his measure-zero notion. The following result is a good beginning.
Theorem 3.1 For every time-t(n) betting game G, we can construct a language in DTIME[t(n)]
that is not covered by G.
Proof. Let Q be a non-oracle Turing machine that runs as follows, on any input x. The machine
simulates up to t(jxj) steps of the single computation of G on empty input. Whenever G bets
on and queries a string y, Q gives the answer that causes G to lose money, rejecting in case of a
zero bet. If and when G queries x, Q does likewise. If t(jxj) steps go by without x being queried,
then Q rejects x.
The important point is that Q's answer to a query y 6= x is the same as the answer when Q
is run on input y. The condition that G cannot query a string x of length n after t(n) steps have
elapsed ensures that the decision made by Q when x is not queried does not affect anything else.
Hence Q defines a language on which G never does better than its initial capital C 0 , and so does
not succeed.
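The behaviour of the machine Q in this proof can be rendered as the following Python pseudocode
(ours; simulate is a hypothetical step-bounded simulator of the single computation of G that calls
answer(y, signed_bet) whenever G queries a string y):

def Q(x, G, t, simulate):
    answers = {}
    def answer(y, signed_bet):
        # reply so that G loses money on this bet; reject on a zero bet
        answers[y] = (signed_bet < 0)
        return answers[y]
    simulate(G, t(len(x)), answer)        # run G for t(|x|) steps
    return answers.get(x, False)          # if x was never queried, reject x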
In particular, the class E cannot be covered by an E-betting game, nor EXP by an EXP-betting
game. Put another way, the "measure conservation axiom" [Lut92] of Lutz's measure carries over
to betting games.
To really satisfy the intuition of "small," however, it should hold that the union of two small
classes is small. (Moreover, "easy" countable unions of small classes should be small, as in [Lut92].)
Our lack of meeting this "finite union axiom" will later be excused insofar as it has the non-
relativizing consequence BPP 6= EXP. Theorem 3.1 is still good enough for the "measure-like"
results in this paper.
We note also that several robustness properties of Lutz's measure treated in Section 2 carry
over to betting games. This is because we can apply the underlying transformations to the capital
function c G of G, which is defined as follows:
Definition 3.4. Let G be a betting games, and i - 0 an integer.
(a) A play ff of length i is a sequence of i-many oracle answers. Note that ff determines the first
i-many stages of G, together with the query and bet for the next stage.
(b) c G (ff) is the capital C i that G has at the end of the play ff (before the next query).
Note that the function c G is a martingale over plays ff. The proof of Lemma 2.1 works for c G . We
obtain:
Lemma 3.2 ("Slow-But-Sure Winnings" lemma for betting games) Let G be a betting
game that runs in time t(n). Then we can construct a betting game G 0 running in time O(t(n))
such that S 1 [G] makes the same queries in the same order as G, and:
2:
Proof. The proof of Lemma 2.1 carries over. The only additional observation is that c G 0 can be
constructed on the fly, and this allows G 0 to run in time O(t(n)).
To begin comparing betting games and martingales, we note first that the latter can be considered
a direct special case of betting games. Say a betting game G is lex-limited if for all oracles
A, the sequence x 1 queries made by G A is in lex order. (It need not equal the lex
enumeration
Theorem 3.3 Let T (n) be a collection of time bounds that is closed under multiplication by 2 n ,
such as 2 O(n) or 2 n O(1)
. Then a class C has time-T (n) measure zero iff C is covered by a time-T (n)
lex-limited betting game.
Proof. From a martingale d to a betting game G, each stage i of G A bets on s i an amount B i with
is the first bits of the characteristic sequence of
A. This takes O(2 n ) evaluations of d to run G up through queries of length n, hence the hypothesis
on the time bounds T (n). In the other direction, when G is lex-limited, one can simulate G on a
finite initial segment w of its oracle up to a stage where all queries have been answered by w and
G will make no further queries in the domain of w. One can then define d(w) to be the capital
entering this stage. That this is a martingale and fulfills the success and run-time requirements is
left to the reader.
Hence in particular for measure on E and EXP, martingales are equivalent to betting games constrained
to bet in lex order. Now we will see how we can transform a general betting game into an
equivalent martingale.
4 From Betting Games to Martingales
This section associates to every betting game G a martingale dG such that S 1 [G] ' S 1 [d G ], and
begins examining the complexity of dG . Before defining dG , however, we pause to discuss some
subtleties of betting games and their computations.
Given a finite initial segment w of an oracle language A, one can define the partial computation
G w of the betting game up to the stage i at which it first makes a query x i that is not in the domain
of w. Define d(w) to be the capital C i\Gamma1 that G had entering this stage. It is tempting to think
that d is a martingale and succeeds on all A for which G succeeds-but neither statement is true
in general. The most important reason is that d may fail to be a martingale.
To see this, suppose x i itself is the lexicographically least string not in the domain of w. That
is, x i is indexed by the bit b of wb, and w1 v A iff x i 2 A. It is possible that G A makes a small
(or even zero) bet on x i , and then goes back to make more bets in the domain of w, winning lots of
money on them. The definitions of both d(w0) and d(w1) will then reflect these added winnings,
and both values will be greater than d(w). For example, suppose G A first puts a zero bet on x
then bets all of its money on x not being in A, and then proceeds with x
Put another way, a finite initial segment w may carry much more "winnings potential" than
the above definition of d(w) reflects. To capture this potential, one needs to consider potential plays
of the betting game outside the domain of w. Happily, one can bound the length of the considered
plays via the running time function t of G. Let n be the maximum length of a string indexed by
(jwj)c. Then after t(n) steps, G cannot query any more strings in the domain of
so w's potential is exhausted. We will define dG (w) as an average value of those plays that can
happen, given the query answers fixed by w. We use the following definitions and notation:
Definition 4.1. For any t(n) time-bounded betting game G and string w 2 \Sigma , define:
(a) A play ff is t-maximal if G completes the first jffj stages, but not the query and bet of the
next stage, within t steps.
(b) A play ff is G-consistent with w, written ff -G w, if for all stages j such that the queried
string x j is in the domain of w, ff That is, ff is a play that could possibly happen
given the information in w. Also let m(ff; w) stand for the number of such stages j whose
query is answered by w.
(c) Finally, put dG
ff t(n)\Gammamaximal;ff- Gw
The weight 2 m(ff;w)\Gammajffj in Equation (7) has the following meaning. Suppose we extend the simulation
of G w by flipping a coin for every query outside the domain of w, for exactly i stages. Then the
number of coin-flips in the resulting play ff of length i is its probability.
Thus dG (w) returns the suitably-weighted average of t(n)-step computations of G with w fixed.
The interested reader may verify that this is the same as averaging d(wv) over all v of length 2 t(n)
(or any fixed longer length), where d is the non-martingale defined at the beginning of this section.
Lemma 4.1 The function dG (w) is a martingale.
Proof. First we argue that
Observe that when ff This is because none of
the queries answered by fi can be in the domain of w, else the definition of G running in time t(n)
would be violated. Likewise if ff -G w then m(ff Finally, since c G is a martingale,
These facts combine to show the equality of (7) and (8).
By the same argument, the right-hand side of (8) is unchanged on replacing "t(n)" by any
Now consider w such that jwj + 1 is not a power of 2. Then the "n" for w0 and w1 is the same
as the "n" for dG (w). Let P 0 stand for the set of ff of length t(n) that are G-consistent with w0
but not with w1, P 1 for those that are G-consistent with w1 but not w0, and P for those that are
consistent with both. Then the set f ff : equals the disjoint union of P ,
and P 1 . Furthermore, for ff 2 P 0 we have m(ff;
dG (w0)+d G
c G (ff)2 m(ff;w1)\Gammat(n)
c G (ff)2 m(ff;w)\Gammat(n)
c G (ff)2 m(ff;w)\Gammat(n)
Finally, if jwj a power of 2, then dG (w0) and dG (w1) use t 0 their length of
ff. However, by the first part of this proof, we can replace t(n) by t 0 in the definition of dG (w)
without changing its value, and then the second part goes through the same way for t 0 . Hence dG
is a martingale.
It is still the case, however, that dG may not succeed on the languages on which the betting
game G succeeds. To ensure this, we first use Lemma 3.2 to place betting games G into a suitable
"normal form" satisfying the sure-winnings condition (5).
Lemma 4.2 If G is a betting game satisfying the sure-winnings condition (5), then S 1 [G] '
Proof. First, let A 2 S 1 [G], and fix k ? 0. Find a finite initial segment w v A long enough to
answer every query made in a play ff of G such that c G (ff) long enough to make t(n)
in the definition of dG (w) (Equation 7) greater than jffj. Then every ff 0 of length t(n) such that
has the form ff fffi. The sure-winnings condition (5) implies that the right-hand side of
defining dG (w) is an average over terms that all have size at least k. Hence dG (w) - k. Letting
k grow to infinity gives A 2 S 1 [d G ].
Now we turn our attention to the complexity of dG . If G is a time-t(n) betting game, it is clear
that dG can be computed deterministically in O(t(n)) space, because we need only cycle through
all ff of length t(n), and all the items in (7) are computable in space O(t(n)). In particular, every
E-betting game can be simulated by an ESPACE-martingale, and every EXP-betting game by an
EXPSPACE-martingale. However, we show in the next section that one can estimate dG (w) well
without having to cycle through all the ff, using a pseudo-random generator to "sample" only a
very small fraction of them.
5 Sampling Results
First we determine the accuracy to which we need to estimate the values d(w) of a hard-to-compute
martingale. We state a stronger version of the result than we need in this section. We will use the
strenghtening in Sections 6.2 and 6.3. Recall that
Lemma 5.1 Let d be a martingale that is active only on J ' f 0; 1 g , and let [ffl(i)] 1
i=0 be a non-negative
sequence such that
converges to a number K. Suppose we can compute in time
t(n) a function g(w) such that jg(w) \Gamma d(w)j - ffl(N ) for all w of length N . Then there is a
martingale d 0 computable in time O(2 n t(n)) such that for all
In this section, we will apply Lemma 5.1 with In
Section 6.3 we will apply Lemma 5.1 in cases where J is finite.
Proof. First note that for any w (with
In case inductively define:
Note that d 0 satisfies the average law (1), and that we can compute d 0 (w) in time O(2 n t(n)).
By induction on jwj, we can show using the estimate (9) that
It follows that
and that
This establishes the lemma in case . The generalization to other subsets J of
is left to the reader.
Next, we will specify precisely which function f G we will sample in order to estimate dG , and
how we will do it.
Let G be a t(n) time-bounded betting game. Consider a prefix w, and let n denote the largest
length of a string in the domain of w. With any string ae of length t(n), we can associate a unique
"play of the game" G defined by using w to answer queries in the domain of w, and the successive
bits of ae to answer queries outside it. We can stop this play after t(n) steps-so that the stopped
play is a t(n)-maximal ff-and then define f G (w; ae) to be the capital c G (ff). Note that we can
compute f G (w; ae) in linear time, i.e. in time O(jwj t(n)). The proportion of strings ae of length
t(n) that map to the same play ff is exactly the weight 2 m(ff;w)\Gammajffj in the equation (7) for dG (w).
Letting E stand for mathematical expectation, this gives us:
We will apply two techniques to obtain a good approximation g(w) to this average:
ffl sampling using pseudo-random generators, and
ffl approximate counting using alternation.
5.1 Sampling via Pseudo-Random Generators
First, we need some relevant background on pseudo-random generators.
Definition 5.1 ([NW94]). (a) The hardness HA (n) of a set A at length n is the largest integer
s such that for any circuit C of size at most s with n inputs,
where x is uniformly distributed over \Sigma n .
(b) A pseudo-random generator (PRG) is a function D that, for each n, maps \Sigma n into \Sigma r(n) where
n. The function r is called the stretching of D. We say that D is computable in C if
every bit of D(y) is computable in C, given y and the index of the bit in binary.
(c) The security SD (n) of D at length n is the largest integer s such that for any circuit C of size
at most s with r(n) inputs
s
where x is uniformly distributed over \Sigma r(n) and y over \Sigma n .
For our purposes, we will need a pseudo-random generator computable in E that stretches seeds
super-polynomially and has super-polynomial security at infinitely many lengths. We will use the
one provided by the following theorem.
Theorem 5.2 If MA 6= EXP, there is a pseudo-random generator D computable in E with stretching
'(log n) such that for any integer k, SD (n) - n k for infinitely many n.
The proof follows directly from the next results of Babai, Fortnow, Nisan and Wigderson [BFNW93],
and Nisan and Wigderson [NW94], combined with some padding.
Theorem 5.3 ([BFNW93]) If MA 6= EXP, there is a set A 2 EXP such that for any integer k,
for infinitely many n.
Theorem 5.4 ([NW94]) Given any set A 2 EXP, there is a pseudo-random generator D computable
in EXP with stretching n '(log n) such that SD
n)=n).
We will also make use of pseudo-random generators with exponential security and computable
in exponential time. They have the interesting property that we can blow up the stretching exponentially
without significantly reducing the security.
Theorem 5.5 ([GGM86]) If there is a pseudo-random generator computable in EXP (respec-
tively, E) with security 2
n) ), then there is such a pseudo-random generator
with stretching 2 p(n) for any fixed polynomial p.
The following general result shows how PRGs can be used to approximate averages. It provides
the accuracy and time bounds needed for applying Lemma 5.1 to get the desired martingale.
Theorem 5.6 Let D be a pseudo-random generator computable in time ffi(n) and with stretching
r(n). be a linear-time computable function, and s;
constructible functions such that s(N) - N and the following relations hold for any integer N - 0,
Then we can approximate
to within N \Gamma2 in time O(2 m(N) \Delta (s(N)
Proof. For any integer N - 0, let IN be a partition of the interval [\GammaR(N ); R(N )] into subintervals
of length 1
. Note that jI N I 2 IN and any string w of length
The predicate underlying -(I; w) can be computed by circuits of size O(s(N )). Since SD (m(N)) 2
!(s(N )), it follows that
approximates -(I; w) to within an additive error of (S D (m(N))) \Gamma1 , and we can compute it in time
We define the approximation ~ h(w) for h(w) as
I2IN
~
Since we can write h(w) as
I2IN
we can bound the approximation error as follows:
I2IN
I2IN
Computing ~ h(w) requires jI N evaluations of ~ -, which results in the claimed upper
bound for the time complexity of ~ h.
Now, we would like to apply Theorem 5.6 to approximate by (7) to within N \Gamma2 ,
by setting However, for a general betting game G running in time
t(n), we can only guarantee an upper bound of R(N) t(log N) on jf(w; ae)j. Since SD can be at
most exponential, condition (10) would force m(N) to be \Omega\Gamma t(log N )). In that case, Theorem 5.6
can only yield an approximation computable in time 2 O(t(log N)) . However , we can assume wlog.
that G satisfies the slow-winnings condition (6) of Lemma 3.2, in which case an upper bound of
holds. Then the term s(N) in the right-hand side of (10) dominates, provided
n) .
Taking everything together, we obtain the following result about transforming E- and EXP-
betting games into equivalent E- respectively EXP-martingales:
Theorem 5.7 If there is a pseudo-random generator computable in E with security 2
n) , then for
every E-betting game G, there exists an E-martingale d such that S 1 [G] ' S 1 [d]. If there is a
pseudo-random generator computable in EXP with security 2
n\Omega\Gamma/1 , then for every EXP-betting game
G, there exists an EXP-martingale d such that S 1 [G] ' S 1 [d].
Proof. By Lemma 3.2, we can assume that c G satisfies both the sure-winnings condition (5) as
well as the slow-winnings condition (6). Because of Lemma 4.2 and Lemma 5.1 (since the series
suffices to approximate the function dG (w) given by (7) to within N \Gamma2 in
, where
Under the given hypothesis for E, we can meet the conditions for applying Theorem 5.6 to
we obtain the approximation
of dG we need. The same holds in like manner for EXP, for which we have s(N) 2 2 (log N) O(1)
5.2 Approximate Counting using Alternation
Instead of hypothesizing the existence of strong pseudo-random generators, we can also use the
following theorem of Stockmeyer on approximate counting.
Theorem 5.8 ([Sto83]) For any h 2 #P and any polynomial p, there is a function g 2
3 such
that for any input w of length N ,
Theorem 5.9 (a) If then for every E-betting game G, there exists an E-martingale d
such that S 1 [G] ' S 1 [d].
(b) If NP ' DTIME[2 (log n) O(1)
then for every EXP-betting game G, there exists an EXP-
martingale d such that S 1 [G] ' S 1 [d].
The proof plugs (12) into the above sampling results in a similar manner.
6 Autoreducible Sets
An oracle Turing machine M is said to autoreduce a language A if L(M A and for all strings
M A on input x does not query x. That is, one can learn the membership of x by querying
strings other than x itself. If M runs in polynomial time, then A is P-autoreducible-we also write
-autoreducible. If M is also non-adaptive, then A is - p
-autoreducible.
One can always code M so that for all oracles, it never queries its own input-then we call
M an autoreduction. Hence we can define an effective enumeration [M i
of polynomial-time
autoreductions, such that a language A is autoreducible iff there exists an i such that L(M A
(For a technical aside: the same M i may autoreduce different languages A, and some M i may
autoreduce no languages at all.) The same goes for - p
-autoreductions.
Autoreducible sets were brought to the polynomial-time context by Ambos-Spies [AS84]. Their
importance was further argued by Buhrman, Fortnow, Van Melkebeek, and Torenvliet [BFvMT98],
who showed that all - p
T -complete sets for EXP are - p
-autoreducible (while some complete sets
for other classes are not). Here we demonstrate that autoreducible sets are important for testing
the power of resource-bounded measure.
6.1 Adaptively Autoreducible Sets
As stated in the Introduction, if the - p
T -autoreducible sets in EXP (or sufficiently the - p
sets for EXP) are covered by an EXP-martingale, then EXP 6= BPP, a non-relativizing consequence.
However, it is easy to cover them by an E-betting game. Indeed, the betting game uses its adaptive
freedom only to "look ahead" at the membership of lexicographically greater strings, betting nothing
on them.
Theorem 6.1 There is an E-betting game G that succeeds on all - p
-autoreducible sets.
Proof. Let be an enumeration of - p
T -autoreductions such that each M i runs in time
on inputs of length n. Our betting game G regards its capital as composed of infinitely many
"shares" c i , one for each M i . Initially, c Letting h\Delta; \Deltai be a standard pairing function,
inductively define n
During a stage makes a query of
length less than n looks up the answer from its table of past queries. Whenever M i makes
a query of length n s\Gamma1 or more, G places a bet of zero on that string and makes the same query.
Then G bets all of the share c i on 0 n s\Gamma1 according to the answer of the simulation of M i . Finally,
G "cleans up" by putting zero bets on all strings with length in [n that were not queries in
the previous steps.
If M i autoreduces A, then share c i doubles in value at each stage hi; ji, and makes the total
capital grow to infinity. And G runs in time 2 O(n) -indeed, only the "cleanup" phase needs this
much time.
Corollary 6.2 Each of the following statements implies BPP 6= EXP:
1. The class of - p
T -autoreducible sets has E-measure zero.
2. The class of - p
-complete sets for EXP has E-measure zero.
3. E-betting games and E-martingales are equivalent.
4. E-betting games have the finite union property.
The same holds if we replace E by EXP in these statements.
Proof. Let C stand for the class of languages that are not - p
-hard for BPP. Allender and
Strauss [AS94] showed that C has E-measure zero, so trivially it is also covered by an E-betting
game. Now let D stand for the class of - p
-complete sets for EXP. By Theorem 6.1 and the result
of [BFvMT98] cited above, D is covered by an E-betting game.
contains all of EXP, and:
ffl If D would have E-measure zero, so would C [ D and hence EXP, contradicting the measure
conservation property of Lutz measure.
ffl If E-betting games would have the finite-union property, then C [ D and EXP would be
covered by an E-betting game, contradicting Theorem 3.1.
Since (1) implies (2), and (3) implies (4), these observations suffice to establish the corollary for E.
The proof for EXP is similar.
Since there is an oracle A giving EXP A = BPP A [Hel86], this shows that relativizable techniques
cannot establish the equivalence of E-martingales and E-betting games, nor of EXP-martingales
and EXP-betting games. They cannot refute it either, since there are oracles relative to which
strong PRGs exist-all "random" oracles, in fact.
6.2 Non-Adaptively Autoreducible Sets
It is tempting to think that the non-adaptively P-autoreducible sets should have E-measure zero,
or at least EXP-measure zero, insofar as betting games are the adaptive cousins of martingales.
However, it is not just adaptiveness but also the freedom to bet out of the fixed lexicographic order
that adds power to betting games. If one carries out the proof of Theorem 6.1 to cover the class of
-autoreducible sets, using an enumeration [M i ] of - p
-autoreductions, one obtains a non-adaptive
E-betting game (defined formally below) that (independent of its oracle) bets on all strings in order
given by a single permutation of \Sigma . The permutation itself is E-computable. It might seem that an
E-martingale should be able to "un-twist" the permutation and succeed on all these sets. However,
our next results, which strengthen the above corollary, close the same "non-relativizing" door on
proving this with current techniques.
Theorem 6.3 For any k - 1, the - p
tt -complete sets for \Delta p
are - p
-autoreducible.
Here is the proof idea, which follows techniques of [BFvMT98] for the theorem that all EXP-
complete sets are - p
-autoreducible. Call a closed propositional formula that has at most k blocks
of like quantifiers (i.e., at most k \Gamma 1 quantifier alternations) a "QBF k formula," and let TQBF k
stand for the set of true QBF formulas. Let A be a - p
tt -complete set for \Delta p
k . Since
TQBF k is \Sigma p
k -hard, there is a deterministic polynomial-time oracle Turing machine M that accepts
A with oracle TQBF k . Let q(x; i) stand for the i-th oracle query made by M on input x. Whether
belongs to TQBF k forms a \Delta p
-question, so we can - p
-reduce it to A. It is possible that
this latter reduction will include x itself among its queries. Let b
i denote the answer it gives to
the question provided that any query to x is answered "yes," and similarly define
i in case x is
answered "no."
i , which holds in particular if x is not queried, then we know the correct answer
b i to the i-th query. If this situation occurs for all queries, we are done: We just have to run
M on input x using the b i 's as answers to the oracle queries. The b i 's themselves are obtained
without submitting the (possibly adaptive) queries made by M , but rather by applying the latter
tt -reduction to A to the pair hx; ii, and without submitting any query on x itself. Hence this
process satisfies the requirements of a - p
tt -autoreduction of A for the particular input x.
Now suppose that b +
for some i, and let i be minimal. Then we will have two opponents
play the k-round game underlying the QBF k -formula that constitutes the i-th oracle query. One
player claims that b +
i is the correct value for b i , which is equivalent to claiming that x 2 A, while
the other claims that b \Gamma
i is correct and that
A. Write -A
A. The players' strategies will consist of computing the game history so far, determining their
optimal next move, - p
tt -reducing this computation to A, and finally producing the result of this
reduction under their respective assumption about -A (x). This approach will allow us to recover
the game history in polynomial time with non-adaptive queries to A different from x. Moreover,
it will guarantee that the opponent making the correct assumption about -A (x) plays optimally.
Since this opponent is also the one claiming the correct value for b i , he will win the game. So, we
output the winner's value for b i .
It remains to show that we can compute the above strategies in deterministic polynomial time
with a \Sigma p
oracle, i.e. in FP \Sigma p
k . It seems crucial that the number k of alternations be constant here.
Proof. (of Theorem 6.3) Let A be a - p
tt -complete set for \Delta p
accepted by the polynomial-time
oracle Turing machine M with oracle TQBF k . Let q(x; i) denote the i-th oracle query of M TQBF k
on input x. Then q(x; i) can be written in the form (9y 1 )(8y
stand for the vectors of variables quantified in each block, or in the opposite form beginning
with the block (8y 1 ). By reasonable abuse of notation, we also ley y r stand for a string
of 0-1 assignments to the variables in the r-th block. Without loss of generality, we may suppose
every oracle query made by M has this form where each y j is a string of length jxj c , and M makes
exactly jxj c queries, taking the constant c from the polynomial time bound on M . Note that the
function q belongs to FP \Sigma p
k . Hence the language
belongs to \Delta p
k+1 . Since A is - p
k+1 , there is a polynomial-time nonadaptive oracle
that accepts L 0 with oracle A. Now define b
We define languages
-reductions inductively
as follows:
k. The set L ' consists of all pairs hx; ji with 1 - j - jxj c , such that there
is a smallest
(x), and the following condition holds. For
let the s-th bit of y r equal
r (hx; si)
otherwise. We put hx; ji into L ' iff there is a lexicographically least y ' such that
and the j-th bit of y ' is set to 1. The form of this definition shows that L ' belongs to \Delta p
. Hence
we can take N ' to be a polynomial-time non-adaptive oracle TM that accepts L ' with oracle A.
Now, we construct a - p
-autoreduction for A. On input x, we compute b
as well as y (b)
r for b 2 f0; 1g and 1 - r - jxj c . The latter quantity y (b)
r is defined as
follows: for 1 - s - jxj c , the s-th bit of y (b)
r equals N A[fxg
r (hx; si) if r j b mod 2, and N Anfxg
r (hx; si)
otherwise. Note that we can compute all these values in polynomial time by making non-adaptive
queries to A none of which equals x.
run M on input x using b
as the
answer to the i-th oracle query. Since it always holds that at least one of b
the correct oracle answer b i (x), we faithfully simulate M on input x, and hence compute -A (x)
correctly.
Otherwise, let i be the first index for which b +
i, we can determine q(x; i) by simulating M on input x until it asks the i-th query. We then
k )], and claim this value equals -A (x).
In order to prove the claim, consider the game history y (b
k . The opponent
claiming the correct value for b i (x) gets to play the rounds that allow him to win the game
(provided he plays well) no matter what the other player does. Since the former opponent is also
the one making the correct assumption about -A (x), an inductive argument shows that he plays
optimally: At his stages ', the string y ' in the above construction of L ' exists, and he plays it.
The key for the induction is that at later stages ' 0 ? ', the value of y r for the value
of y ' at stage '. So, the player with the correct assumption about -A (x) wins the game-that is,
his guess for b i (x) (and not the other player's guess).
In order to formalize the strengthening of Corollary 6.2 that results from Theorem 6.3, we call
a betting game G non-adaptive if the infinite sequence x 1 of queries G A makes is the
same for all oracles A. If G runs in 2 O(n) time, and this sequence hits all strings in \Sigma , then the
permutation - of the standard ordering s defined by -(s computable and
invertible in 2 O(n) time. It is computable in this amount of time because in order to hit all strings,
G must bet on all strings in f 0; 1 g n within the first 2 O(n) steps. Hence its ith bet must be made in
a number of steps that is singly-exponential in the length of s i . And to compute - need
only be run for 2 O(jx i j) steps, since it cannot query x i after this time. Since - and its inverse are
both E-computable, - is a reasonable candidate to replace lexicographic ordering in the definition
of E-martingales, and likewise for EXP-martingales. We say a class C has -E-measure zero if C
can be covered by an E-martingale that interprets its input as a characteristic string in the order
given by -.
Theorem 6.4 The class of - p
-autoreducible languages can be covered by a non-adaptive E-betting
game. Hence there is an E-computable and invertible permutation - of \Sigma such that this class has
-E-measure zero.
Proof. With reference to the proof of Theorem 6.1, we can let be an enumeration of
tt -autoreductions such that each M i runs in time n i +i. The machine G in that proof automatically
becomes non-adaptive, and since it queries all strings, it defines a permutation - of \Sigma as above
with the required properties.
Corollary 6.5 Each of the following statements implies BPP 6= EXP, as do the statements obtained
on replacing "E" by "EXP."
1. The class of - p
tt -autoreducible sets has E-measure zero.
2. The class of - p
tt -complete sets for EXP has E-measure zero.
3. Non-adaptive E-betting games and E-martingales are equivalent.
4. If two classes can be covered by non-adaptive E-betting games, then their union can be covered
by an E-betting game.
5. For all classes C and all E-computable and invertible orderings -, if C has -E-measure zero,
then C has E-measure zero.
Proof. It suffices to make the following two observations to argue that the proof of Corollary 6.2
carries over to the truth-table cases:
ffl The construction of Allender and Strauss [AS94] actually shows that the class of sets that
are not - p
tt -hard for BPP has E-measure zero.
ffl If Theorem 6.3 implies that all - p
tt -complete sets for EXP are - p
because BPP ' \Sigma p
Theorem 6.4 and the finite-unions property of Lutz's measures on E and EXP do the rest.
The last point of Corollary 6.5 asserts that Lutz's definition of measure on E is invariant under all
E-computable and invertible permutations. These permutations include flip from the Introduction
and (crucially) - from Theorem 6.4. Hence this robustness assertion for Lutz's measure implies
BPP 6= EXP. Our "betting-game measure" (both adaptive and non-adaptive) does enjoy this
permutation invariance, but asserting the finite-unions property for it also implies BPP 6= EXP.
The rest of this paper explores conditions under which Lutz's martingales can cover classes of
autoreducible sets, thus attempting to narrow the gap between them and betting games.
6.3 Covering Autoreducible Sets By Martingales
This puts the spotlight on the question: Under what hypotheses can we show that the - p
autoreducible sets have E-measure zero? Any such hypothesis must be strong enough to imply
EXP 6= BPP, but we hope to find hypotheses weaker than assuming the equivalence of (E- or
betting games and martingales, or assuming the finite-union property for betting games.
Do we need strong PRGs to cover the - p
-autoreducible sets? How close can we come to covering
the - p
-autoreducible sets by an E-martingale?
Our final results show that the hypothesis MA 6= EXP suffices. This assumption is only known
to yield PRGs of super-polynomial security (at infinitely many lengths) rather than exponential
security (at almost all lengths). Recall that MA contains both BPP and NP; in fact it is sandwiched
between NP BPP and BPP NP .
Theorem 6.6 If MA 6= EXP, then the class of - p
tt -autoreducible sets has E-measure zero.
We actually obtain a stronger conclusion.
Theorem 6.7 If MA 6= EXP, then the class of languages A autoreducible by polynomial-time
OTMs that always make their queries in lexicographic order has E-measure zero.
To better convey the essential sampling idea, we prove the weaker Theorem 6.6 before the stronger
Theorem 6.7. The extra wrinkle in the latter theorem is to use the PRG twice, to construct the set
of "critical strings" to bet on as well as to compute the martingale.
Proof. (of Theorem 6.6) Let [M i
enumerate the - p
-autoreductions, with each M i running
in time n i . Divide the initial capital into shares s i;m for with each s i;m valued initially
at (1=m 2 )(1=2 i ). For each share s i;m , we will describe a martingale that is only active on a finite
number of strings x, namely only if i - m=2dlog 2 me and m - further only if x
belongs to a set constructed below. We will arrange that whenever M i autoreduces A,
there are infinitely many m such that share s i;m attains a value above 1 (in fact, close to m) along
A. Hence the martingale defined by all the shares succeeds on A. We will also ensure that each
active share's bets on strings of length n are computable in time 2 an , where the constant a is
independent of i. This is enough to make the whole martingale E-computable and complete the
proof.
To describe the betting strategy for s i;m , first construct a set I = I i;m starting with I
and iterating as follows: Let y be the lexicographically least string of length m that does not appear
among queries made by M i on inputs x 2 I. Then add y to I. Do this until I has 3dlog 2 me strings
in it. This is possible because the bound 3dlog 2 mem i on the number of queries M i could possibly
make on inputs in I is less than 2 m . Moreover, 2 m bounds the time needed to construct I. Thus
we have arranged that
for all x; y 2 I with x y, M i (x) does not query y. (13)
Now let J stand for I together with all the queries M i makes on inputs in I. Adapting ideas from
Definition 4.1 to this context, let us define a finite Boolean function to be consistent
with M i on I, written fi - I M i , if for all x 2 I, M i run on input x with oracle answers given by
agrees with the value fi(x). Given a characteristic prefix w, also write fi - w if fi(x) and w(x)
agree on all x in J and the domain of w. Since I and J depend only on i and m, we obtain a
"probability density" function for each share s i;m via
The martingale d i;m standardly associated to this density (as in [Lut92]) is definable inductively
by d i;m
d i;m
(In case - i;m = 0, we already have d i;m (w) = 0, and so both d i;m (w1) and d i;m (w0) are set to 0.)
Note that the values - i;m (wb) for only differ from - i;m (w) if the string x indexed
by b belongs to J ; i.e., d i;m is only active on J .
sufficiently large m, if share s i;m could play the strategy
d i;m , then on A its value would rise to (at least) m=2 i . That is, s i;m would multiply its initial value
by (at least) m 3 .
To see this, first note that for any w v A long enough to contain J in its domain, - i;m
We want to show that for any v short enough to have domain disjoint from I, - i;m
To do this, consider any fixed 0-1 assignment fi 0 to strings in J n I that agrees with v. This
assignment determines the computation of M i on the lexicographically first string x 2 I, using
fi 0 to answer queries, and hence forces the value of fi(x) in order to maintain consistency on I.
This in turn forces the value fi(x 0 ) on the next string x 0 in I, and so on. Hence only one out
of 2 jIj possible completions of fi 0 to fi is consistent with M i on I. Thus - i;m
by (15), and 2
The main obstacle now is that (14), and hence d i;m (w), may not be computable in time 2 an
with a independent of i. The number of assignments fi to count is on the order of 2 jJj
Here is where we use the E-computable PRG D, with super-polynomial stretching and security,
obtained via Theorem 5.2 from the hypothesis MA 6= EXP. For all i and sufficiently large m,
D stretches a seed s of length m into at least 3dlog 2 mem i bits, which are enough to define an
assignment fi s to J (agreeing with any given w). We estimate - i;m (w) by
. By Theorem 5.2 there are infinitely many "good" m such that SD (m) ? m i+4 .
6.9 For all large enough good m, every estimate -
Suppose not. First note that both (14) and (16) do not depend on all of w, just on the up-to-
bits in w that index strings in J , and these can be hard-wired into circuits. The
tests [fi - I M i ] can also be done by circuits of size o(m i+1 ), because a Turing machine computation
of time r can be simulated by circuits of size O(r log r) [PF79]. Hence we get circuits of size less
than SD (m) achieving a discrepancy greater than 1=SD (m), a contradiction. This proves Claim 6.9.
Finally, observe that the proof of Claim 6.8 gives us not only d i;m (w) - i;m (w) \Delta m 3 , but
also d i;m A. For w v A and good m, we thus obtain estimates
g(w) for d i;m (w) within error bounds applying Lemma 5.1 for this
g(w) and yields a martingale d 0
i;m (w) computable in time 2 an , where the constant a is
independent of i. This d 0
i;m (w) is the martingale computed by the actions of share s i;m . Since
actually obtain jd 0
which is stronger than what we needed to conclude that share s i;m returns enough profit. This
completes the proof of Theorem 6.6.
To prove Theorem 6.7, we need to construct sets I = I i;m with properties similar to (13), in the
case where M i is no longer a - p
-autoreduction, but makes its queries in lexicographic order. To
carry out the construction of I, we use the pseudorandom generator D a second time, and actually
need only that M i on input 0 m makes all queries of length ! m before making any query of length
m. To play the modified strategy for share s i;m , however, appears to require that all queries
observe lex order.
Proof. (of Theorem 6.7). Recall that the hypothesis EXP 6= MA yields a PRG D computable in
stretching m bits to r(m) bits such that for all i, all sufficiently large m give r(m) ?
and infinitely many m give hardness SD
be a standard enumeration of
T -autoreductions that are constrained to make their queries in lexicographic order, with each
running in time O(n i ). We need to define strategies for "shares" s i;m such that whenever M i
autoreduces A, there are infinitely many m such that share s i;m grows its initial capital from 1=m 2 2 i
to 1=2 i or more. The strategy for s i;m must still be computable in time 2 am where a is independent
of i.
To compute the strategy for s i;m , we note first that s i;m can be left inactive on strings of
length ! m. The overall running time allowance 2 O(m) permits us to suppose that by the time s i;m
becomes active and needs to be considered, the initial segment w 0 of A (where A is the language
on which the share happens to be playing) that indexes strings of length up to
Hence we may regard w 0 as fixed. For any ff 2 f 0; 1
let M ff
stand for the computation in
which w 0 is used to answer any queries of length ! m and ff is used to answer all other queries.
Because of the order in which M i makes its queries, those queries y answered by w 0 are the same
for all ff, so that those answers can be coded by a string u 0 of length at most m i . Now for any
string y of length equal to m, define
Note that given u 0 and ff, the test "M ff
queries y" can be computed by circuits of size O(m i+1 ).
Hence by using the PRG D at length m, we can compute uniformly in E an approximation PD (x; y)
for P (x; y) such that for infinitely many m, said to be "good" m, all pairs x; y give jP D (x;
Here is the algorithm for constructing I = I i;m . Start with I := ;, and while jIj ! 3 log 2 m,
do the following: Take the lexicographically least string y I such that for all x 2 I,
. The search for such a y will succeed within jIj \Delta m i+4 trials, since for any particular
x, there are fewer than m i+4 strings y overall that will fail the test. (This is so even if m is not good,
because it only involves PD , and because PD involves simulating M D(s)
i over all seeds s.) There
is enough room to find such a y provided which holds for all sufficiently large m.
The whole construction of I can be completed within time 2 2am . It follows that for any sufficiently
large good m and x; y 2 I with x ! y, Pr ff [M ff
At this point we would like to define J to be "I together with the set of strings queried by M i
on inputs in I " as before, but unlike the previous case where M i was non-adaptive, this is not a
valid definition. We acknowledge the dependence of the strings queried by M i on the oracle A by
defining
JA
queries y g:
is, JA has the same size as J in the previous proof. This
latter definition will be OK because M i makes its queries in lexicographic order. Hence the share
s i;m , having already computed I without any reference to A, can determine the strings in JA on
which it should be active on the fly, in lex order. Thus we can well-define a mapping fi from f 0; 1 g r
to f 0; 1 g so that for any k - r, means that the query string y that happens to be kth
in order in the on-the-fly construction of JA is answered "yes" by the oracle. Then we may write
J fi for JA , and then write in place of Most important, given any x 2 I, every
such fi well-defines a computation M fi
i (x). This entitles us to carry over the two "consistency"
definitions from the proof of Theorem 6.6:
Finally, we may apply the latter notion to initial subsets of I, and define for 1 - 3 log m the
predicate
does not query x k .
6.10 For all ', Pr fi [R ' (fi)] - 1=2 ' .
For the base case does not query x 1 , M i being an
autoreduction, and because whether fi - x1 M i depends only on the bit of fi corresponding to x 1 .
Working by induction, suppose Pr fi [R . If R '\Gamma1 (fi) holds, then taking fi 0 to be fi
with the bit corresponding to x ' flipped, R holds. However, at most one of R ' (fi) and
does not query x ' . Hence Pr fi [R ' (fi)] - (1=2)Pr fi [R
and this proves Claim 6.10. (It is possible that neither R ' (fi) nor R ' (fi 0 ) holds, as happens when
some j, but this does not hurt the claim.)
Now we can rejoin the proof of Theorem 6.6 at equation (14), defining the probability density
function - i;m We get a martingale d i;m from - i;m as before, and this
represents an "ideal" strategy for share s i;m to play. The statement corresponding to Claim 6.8 is:
autoreduces A and m is good and sufficiently large, then the ideal strategy for
share s i;m multiplies its value by at least m 3 =2 along A.
To see this, note that we constructed I above so that for all
Pr ff [M ff
It follows that
d3 log me!
me 2 . Hence, using Claim 6.10 with log m, we get:
Since the fi defined by A satisfies fi - I M i , it follows by the same reasoning as in Claim 6.8 that
d i;m profits by at least a fraction of m 3 =2 along A. This proves Claim 6.11.
Finally, we (re-)use the PRG D as before to expand a seed s of length m into a string fi s of
(at least) bits. Given any w, fi s well-defines a fi and a set J fi of size at most r
as constructed above, by using w to answer queries in the domain of w and fi s for everything else.
We again obtain the estimate -
equation (16), with the same time
complexity as before. Now we repeat Claim 6.9 in this new context:
6.12 For all large enough good m, every estimate - i;m (w) satisfies j- i;m (w) \Gamma - i;m (w)j - ffl.
If not, then for some fixed w the estimate fails. The final key point is that because M i always
makes its queries in lexicographic order, the queries in the domain of w that need to be covered are
the same for every fi s . Hence the corresponding bits of w can be hard-wired by circuitry of size at
most r. The test [fi s - I M i ] can thus still be carried out by circuits of size less than m i+1 , and we
reach the same contradiction of the hardness value SD .
Finally, we want to apply Lemma 5.1 to replace d i;m (w) by a martingale d 0
i;m (w) that yields
virtually the same degree of success and is computable in time 2 O(n) . Unlike the truth-table case
we cannot apply Lemma 5.1 verbatim because we no longer have a single small set J that d 0 is
active on. However, along any set A, the values d 0
i;m (w) and d 0
or 1) can differ only
for cases where b indexes a string in the small set J corresponding to A, and the reader may check
that the argument and bounds of Lemma 5.1 go through unscathed in this case. This finishes the
proof of Theorem 6.7.
Conclusions
The initial impetus for this work was a simple question about measure: is the pseudo-randomness
of a characteristic sequence invariant under simple permutations such as that induced by flip in the
Introduction
? The question for flip is tantalizingly still open. However, in Section 6.2 we showed
that establishing a "yes" answer for any permutation that intuitively should preserve the same
complexity-theoretic degree of pseudo-randomness, or even for a single specific such permutation
as that in the simple proof of the non-adaptive version of Theorem 6.1, would have the major
consequence that EXP 6= BPP.
Our "betting games" in themselves are a natural extension of Lutz's measures for deterministic
time classes. They preserve Lutz's original idea of ``betting'' as a means of ``predicting'' membership
in a language, without being tied to a fixed order of which instances one tries to predict, or to a
fixed order of how one goes about gathering information on the language. We have shown some
senses in which betting games are robust and well-behaved. We also contend that some current
defects in the theory of betting games, notably the lack of a finite-unions theorem pending the
status of pseudo-random generators, trade off with lacks in the resource-bounded measure theory,
such as being tied to the lexicographic ordering of strings.
The main open problems in this paper are interesting in connection with recent work by
Impagliazzo and Wigderson [IW98] on the BPP vs. EXP problem. First we remark that the main
result of [IW98] implies that either or BPP has E-measure zero [vM98]. Among
the many measure statements in the last section that imply BPP 6= EXP, the most constrained
and easiest to attack seems to be item 4 in Corollary 6.5. Indeed, in the specific relevant case
starting with the assumption one is given a non-adaptive E-betting game G and
an E-martingale d, and to obtain the desired contradiction that proves BPP 6= EXP, one need
only construct an EXP-betting game G 0 that covers S What we can obtain is a
"randomized" betting game G 00 that flips one coin at successive intervals of input lengths to decide
whether to simulate G or d on that interval. (The intervals come from the proof of Theorem 6.4.)
Any hypothesis that can de-randomize this G 00 implies BPP 6= EXP. We do not know whether the
hypotheses considered in [IW98], some of them shown to follow from BPP 6= EXP itself, are
sufficient to do this.
Stepping back from trying to prove BPP 6= EXP outright or trying to prove that these measure
statements are equivalent to BPP 6= EXP, we also have the problem of narrowing the gap between
BPP 6= EXP and the sufficient condition EXP 6= MA used in our results. Moreover, does EXP 6=
MA suffice to make the - p
T -autoreducible sets have E-measure zero? Does that suffice to simulate
every betting game by a martingale of equivalent complexity? We also inquire whether there exist
oracles relative to which strong PRGs still exist. Our work seems to open many
opportunities to tighten the connections among PRGs, the structure of classes within EXP, and
resource-bounded measure.
The kind of statistical sampling used to obtain martingales in Theorems 5.6 and 5.7
was originally applied to construct martingales from "natural proofs" in [RSC95]. The derandomization
technique from [BFNW93] based on EXP 6= MA that is used here is also applied
in [BvM98, KL98, LSW98]. "Probabilistic martingales" that can use this sampling to simulate
betting games are formalized and studied in [RS98]. This paper also starts the task of determining
how well the betting-game and random-sampling ideas work for measures on classes below E. Even
straightforward attempts to carry over Lutz's definitions to classes below E run into difficulties,
as described in [May94] and [AS94, AS95]. We look toward further applications of our ideas in
lower-level complexity classes.
Acknowledgments
The authors specially thank Klaus Ambos-Spies, Ron Book (pace), and Jack
Lutz for organizing a special Schloss Dagstuhl workshop in July 1996, where preliminary versions
of results and ideas in this paper were presented and extensively discussed. We also thank the
referees for helpful comments.
--R
Measure on small complexity classes
Measure on P: Robustness of the notion.
"Algorithmic Information Theory and Randomness"
Trading group theory for randomness.
BPP has subexponential time simulations unless EXPTIME has publishable proofs.
Using autoreducibility to separate complexity classes.
Separating complexity classes using autoreducibility.
In 13th Annual Symposium on Theoretical Aspects of Computer Science
Hard sets are hard to find.
How to construct random functions.
On relativized exponential and probabilistic complexity classes.
Randomness vs. time: De-randomization under a uniform assumption
On the
A variant of the Kolmogorov concept of complexity.
Resource bounded measure and learn- ability
Almost everywhere high nonuniform complexity.
The quantitative structure of exponential time.
Contributions to the Study of Resource-Bounded Measure
Hardness versus randomness.
Relations among complexity measures.
Probabilistic martingales and BPTIME classes.
Pseudorandom generators
The complexity of approximate counting.
On the measure of BPP.
--TR
--CTR
Klaus Ambos-Spies , Wolfgang Merkle , Jan Reimann , Sebastiaan A. Terwijn, Almost complete sets, Theoretical Computer Science, v.306 n.1-3, p.177-194, 5 September | probabilistic computation;resource-bounded measure;theory of computation;betting games;pseudorandom generators;autoreducibility;complexity classes;sampling;computational complexity;polynomial reductions |
587001 | Message Multicasting in Heterogeneous Networks. | In heterogeneous networks, sending messages may incur different delays on different links, and each node may have a different switching time between messages. The well-studied telephone model is obtained when all link delays and switching times are equal to one unit. We investigate the problem of finding the minimum time required to multicast a message from one source to a subset of the nodes of size k. The problem is NP-hard even in the basic telephone model. We present a polynomial-time algorithm that approximates the minimum multicast time within a factor of O(log k). Our algorithm improves on the best known approximation factor for the telephone model by a factor of $O(\frac{\log n}{\log\log k})$. No approximation algorithms were known for the general model considered in this paper. | Introduction
. The task of disseminating a message from a source node to
the rest of the nodes in a communication network is called broadcasting. The goal
is to complete the task as fast as possible assuming all nodes in the network participate
in the eort. When the message needs to be disseminated only to a subset
of the nodes this task is referred to as multicasting. Broadcasting and multicasting
are important and basic communication primitives in many multiprocessor systems.
Current networks usually provide point-to-point communication only between some
of the pairs of the nodes in the network. Yet, in many applications, a node in the
network may wish to send a message to a subset of the nodes, where some of them
are not connected to the sender directly. Due to the signicance of this operation, it
is important to design e-cient algorithms for it.
Broadcast and multicast operations are frequently used in many applications for
message-passing systems (see [11]). It is also provided as a communication primitive
by several collective communication libraries, such as Express by Parasoft [8] and
the Message Passing Library (MPL) [1, 2] of the IBM SP2 parallel systems. This
operation is also included as part of the collective communication routines in the
Message-Passing Interface (MPI) standard proposal [7]. Application domains that
use broadcast and multicast operations extensively include scientic computations,
network management protocols, database transactions, and multimedia applications.
Part of this work was done while the rst three authors visited IBM T.J. Watson Research
Center. A preliminary version of this paper appeared in the Proc. of the 30th ACM Symp. on
Theory of Computing, 1998.
y AT&T Research Labs, 180 Park Ave., P.O. Box 971, Florham Park, NJ 07932. On leave from
the Faculty of EE, Tel Aviv University, Tel Aviv 69978, Israel. E-mail: amotz@research.att.com.
z Computer Science Department, Stanford University, Stanford, CA 94305. Supported by
an IBM Fellowship, ARO MURI Grant DAAH04-96-1-0007 and NSF Award CCR-9357849, with
matching funds from IBM, Schlumberger Foundation, Shell Foundation, and Xerox Corporation.
E-mail: sudipto@cs.stanford.edu.
x Bell Laboratories, Lucent Technologies, 600 Mountain Ave., Murray Hill, NJ
07974. On leave from the Computer Science Department, Technion, Haifa 32000, Israel.
E-mail: naor@research.bell-labs.com.
{ IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598.
E-mail: sbar@watson.ibm.com.
A. BAR-NOY, S. GUHA, J. NAOR AND B. SCHIEBER
In most of these applications the e-ciency depends on the time it takes to complete
the broadcast or multicast operations.
There are two basic models in which trivial optimal solutions exist. In the rst
model, all nodes are assumed to be connected, a node may send a message to at most
one other node in each round, and it takes one unit of time (round) for a message
to cross a link. Therefore, in each round the number of nodes receiving the message
can be doubled. If the target set of nodes is of size k, then this process terminates
in dlog ke rounds. In the second model the communication network is represented
by an arbitrary graph, where each node is capable of sending a message to all of
its neighbors in one unit of time. Here, the number of rounds required to deliver a
message to a subset of the nodes is the maximum distance from the source node to
any of the nodes in the subset.
The model in which a node may send a message to at most one other node in each
round is known as the Telephone model. It is known that for arbitrary communication
graphs, the problem of nding an optimal broadcast in the Telephone model is NP-hard
[12], even for 3-regular planar graphs [20]. Following the two easy cases given
above, it is not hard to verify that in the Telephone model two trivial lower bounds
hold for the minimum broadcast time. The rst one is dlog ne, where n denotes the
number of nodes in the graph, and the second one is the maximum distance from
the source node to any of the other nodes. Research in the past three decades has
focused on nding optimal broadcast algorithms for various classes of graphs such as
trees, grids, and hypercubes. Also, researchers have looked for graphs with minimum
number of links for which a broadcast time of dlog ne can be achieved from any source
node. Problems related to broadcast which were extensively investigated are the
problems of broadcast multiple messages, gossiping, and computing certain functions
on all n inputs in a network. See, e.g., [4, 5, 6, 9, 13, 14, 15, 19, 22, 24, 25].
An assumption central to the Telephone model is that both sender and receiver
are busy during the whole sending process. That is, only after the receiver received
the message, both ends may send the message to other nodes. More realistic models
in this context are the Postal model [3] and the LogP model [18]. The idea there is
that the sender may send another message before the current message is completely
received by the receiver, and the receiver is free during the early stages of the sending
process. We note that in both the Postal model and the LogP model it is assumed
that the delay of a message between any pair of nodes is the same.
Optimal solutions for broadcast in the Postal model are known for the case of a
complete graph, and for some other classes of graphs. However, not much is known
for arbitrary graphs. In the Postal model, researchers have also concentrated on other
dissemination primitives and almost always assumed that the communication graph
is complete.
1.1. Our results. In this paper we dene a more general model based on the
Postal model and call it the heterogeneous postal model. Assume node u sends a
message to node v at time 0 and the message arrives at v at time uv . The assumption
is that u is free to send a new message at time s u , and v is free from time 0 to time
uv r v . We call uv the delay of the link (u; v), s u the sending (or switching) time
of u, and r v the receiving time of v. By denition, both s u and r v are smaller than
uv . In the single message multicast problem each node receives no more than a single
message. Thus, for this problem the receiving time has almost no relevance. Because
of this, and to keep the presentation clearer we assume for the rest of the paper that
r nodes u. Observe that when the delay, sending time, and receiving
time are all equal to 1, we obtain the Telephone model.
We believe that our framework may be useful to model modern communication
networks, where the major components { the processors and the communication links
{ are not homogeneous. Some processors are faster than others, and some links have
more bandwidth than others. These disparities are captured by the dierent values
of the delay and the switching time.
Since nding the minimum multicast time is NP-hard even in the Telephone
model, we turn our focus to approximation algorithms. The main result we present is
an approximation algorithm for computing a multicast scheme in the heterogeneous
Postal model. The approximation factor is O (log k), where k denotes the number of
processors in the target set. Previous approximation algorithms for multicasting were
known only in the Telephone model. Kortsarz and Peleg [17] gave an approximation
algorithm that produces a solution whose value is bounded away from the optimal
solution by an O(
n) additive term. This term is quite large, especially for graphs
in which the broadcast (multicast) time is polylogarithmic. Later, Ravi [21], gave an
algorithm that achieves a multiplicative approximation factor of O
log n log k
log log k
We also show that it is NP-hard to approximate the minimum broadcast time
within a factor of three in a model which is only slightly more complicated than the
Telephone model.
The rest of the paper is organized as follows. In Section 2 we dene our model.
In Section 3 we describe our solution. Finally, in Section 4 we show that this problem
is hard to approximate by a small constant factor.
2. The Model and the Problem. We dene our model as follows. Let
(V; E) be an undirected graph representing a communication network, where V is a
set of n nodes and E is the set of point to point communication links. Let U V
denote a special set of terminals, and let r be a special node termed the root. Let the
cardinality of the set U be k. To simplify notation assume that r 2 U .
We associate with each node v 2 V a parameter s v that denotes the sending
time. We sometimes refer to s v as the switching time of v to indicate that this is the
time it takes node v to send a new message. In other words, 1=s v is the number of
messages node v can send in one round (unit of time). We associate with each node
that denotes the receiving time. We assume that r
each node v. We associate with each link (u; v) 2 E a length uv that denotes the
communication delay between nodes u and v. By denition, uv is greater than both
s u and r v (= s v ). We can think of the delay uv as taking into account the sending
time at u and the receiving time at v.
Let the generalized degree of node v 2 V be the actual degree of v in the graph
G multiplied by the switching time s v . Observe that the generalized degree measures
the time it would have taken the node v to send a message to all of its neighbors.
Our goal is to nd a minimum time multicast scheme; that is, a scheme in which
the time it takes for all nodes in the set U to receive the message from the root r is
minimized. Without loss of generality, we may consider only multicast schemes that
are \not lazy"; i.e., schemes in which a node that has not nished sending the message
to its neighbors (but has already started) is not idle. Such multicast schemes can be
represented by an outward directed tree T that is rooted at r and spans all the nodes
in U , together with orderings on the edges outgoing from each node in the tree. The
multicast scheme corresponding to such a tree and orderings is a multicast in which
each node in the tree upon receiving the message (through its single incoming edge)
sends the message along each of its outgoing edges in the specied order. From now
A. BAR-NOY, S. GUHA, J. NAOR AND B. SCHIEBER
on, we refer to the tree in the representation of a multicast scheme as the tree \used"
by the scheme.
For a rooted tree T , denote by T its maximum generalized degree, and by L T
the maximum distance from r to any of the nodes in U (with respect to the lengths
xy associated with each link (x; y)). By denition, the multicast time of tree T is
greater than T and greater than L T . Hence,
Lemma 2.1. Let OPT denote the multicast time of an optimal solution using
tree T , then OPT 1( T
3. The Approximation Algorithm. In this section we describe the approximation
algorithm for multicasting a message to a set of terminals U from a root node
The main tool used by our algorithm is a procedure ComputeCore(U 0 ) that computes
for a given set of terminals U 0 , where r 2 U
1. A subset W U 0 which we call the core of U 0 , of size at most 3jU 0 j, where
r 2 W .
2. A scheme to disseminate a message known to all the nodes in W to the rest
of the nodes in U 0 in time proportional to the minimum multicast time from
r to U 0 .
The algorithm that computes the multicast scheme proceeds in ' phases. Let U
Upon termination, U frg. In the ith phase,
is invoked to compute:
1. The core of U denoted by U i .
2. A scheme to disseminate the message from U i to the set U i 1 in time proportional
to the minimum multicast time from r to U i 1 .
we have that k). The resulting
multicast scheme is given by looking at the rounds of the algorithm in backward
order. Namely, starting at in each round of the multicast scheme
the message is disseminated from U i to U i 1 . Since U ' U ' 1 U each
dissemination phase takes time proportional to the minimum multicast time from r
to U . It follows that the multicast time is up to O(log times the optimal multicast
time.
In the rest of the section we describe the procedure ComputeCore(U 0 ). Let OPT
be the minimum multicast time from r to U 0 . Lemma 2.1 implies that there exists
a tree T spanning the set U 0 such that T . The procedure
ComputeCore(U 0 ) has two main parts. In the rst part, we nd a set of jU 0 j paths,
one for each terminal, where the ith path connects the terminal u i to another terminal
called The paths have the following path properties:
Length Property: The length of each path is at most 4 ( T
Congestion Property: The generalized degree of the nodes in the graph induced
by the paths is at most 6 ( T
In the second part we design a dissemination scheme using the above paths. We do
it by transforming the paths into a set of disjoint spider graphs { graphs in which
at most one node has degree more than two. These spider graphs have the following
spider properties:
Each spider contains at least two terminals from U 0 .
The set of spiders spans at least half the nodes in U 0 .
The diameter of each spider is at most 4 ( T
The generalized degree of the center of a spider is at most 6 ( T
where the center of a spider is the unique node with degree larger than two,
if such exists, or one of the endpoints of the spider, otherwise.
Now, for each spider, we arbitrarily select one of the nodes from U 0 to the core of U 0 .
Note that each such node can multicast the message to the rest of the terminals in its
spider in O( T +L T ) time (linear in OPT ). We add all the terminals not contained
in any of the spiders to the core of U 0 . We claim that the size of the core is at most4
To see this, let x denote the number of spiders and let y be the number of the
terminals in all the spiders. It follows that the size of the core is jU x. By the
rst spider property we have that x y=2 and by the second spider property we get
that y jU 0 j=2. Thus,
We now turn to describe each of the two parts of the procedure ComputeCore(U 0 ).
3.1. Finding a set of paths. We rst claim the following lemma which is
analogous to the \tree pairing" lemma of Ravi [21].
Lemma 3.1. Let T be a tree that spans a set U 0 V , and suppose that jU 0 j
is even. There exists a way to pair the nodes of U 0 , and nd paths (in the tree T )
connecting each pair such that:
1. the paths are edge disjoint,
2. the length of each path is bounded by 2L T ,
3. the generalized degree of each node in the graph induced by the paths is at
most T .
Proof. The tree pairing lemma ([21]) states that there exists a pairing such that
the paths in T connecting each of the pairs are edge disjoint. Consider these paths.
Clearly the length of each of these paths is bounded by 2L T . The degree, and hence
the generalized degree, of every node in the graph induced by the paths is no more
than the (generalized) degree in T since we only use the edges of the tree T . Hence,
it is bounded by T .
The following corollary handles the odd case as well.
Corollary 3.2. Let T be a tree that spans a set U 0 V . There exists a way to
pair the nodes of U 0 , and nd paths (in the tree T ) connecting each pair such that:
1. the length of each path is bounded by 2L T ,
2. the generalized degree of each node in the graph induced by the paths is at
most 2 T .
Proof. The corollary clearly holds if jU 0 j is even. If jU 0 j is odd, we pair jU 0 j 1 of
the nodes as in Lemma 3.1, and pair the last node with any other node. The length
of the path connecting the last pair is still bounded by 2L T . However, the degree of
the subgraph may double up to 2 T .
Recall that the tree T spans the nodes of U 0 and Our
objective is to nd the set of paths as guaranteed by Corollary 3.2 with respect to
T . However, we do not know T . Thus, instead, we nd a set of fractional paths
satisfying similar properties. To this end, we write a linear program for nding a set
of (fractional) paths that minimizes the sum of two quantities: (1) the maximum over
all pairs of the average length of the paths connecting a pair, and (2) the maximum
generalized degree of the subgraph induced by the paths connecting the pairs.
The linear program is a variant of multicommodity
ow. For each edge (u; v), we
dene the directed edges (u; v) and (v; u) both of length uv . Let U g.
With each node v j 2 U 0 we associate commodity j. Node v j is the source of commodity
j and we create an articial sink t j with r t j
We connect each of the nodes
A. BAR-NOY, S. GUHA, J. NAOR AND B. SCHIEBER
by a directed edge (v The objective is
to minimize (L exactly one unit of
ow has to be shipped from each v j
to t j , such that the average length of the
ow paths from v j to t j is at most 2L, and
the maximum weighted congestion (generalized degree) of the induced subgraph is at
most 3.
More formally, let A denote the set of directed edges, and let f i (u; v) denote the
ow of commodity i on directed edge (u; v). The linear programming formulation is
as follows.
subject to:
For all 1 i h
and
(v;w)2A
For all 1 i
For all 1 i
(v i ;u)2A
For all 1 i
For all
3
For all 1 i
We now show that the set of paths guaranteed by Corollary 3.2 with respect to T
can be modied so as to obtain an integral solution for the linear program as follows.
If jU 0 j is even, the solution is obtained by using each path connecting a pair
to ship one unit of
ow from u j through u i to t j , and another unit of
ow from u i
through u j to t i . The length of each path is bounded by 2L T , and since we use each
path twice, the generalized degree is bounded by 2 T . If jU 0 j is odd, the solution is
obtained by using each of the 1(jU 0 j 1) paths connecting the rst jU nodes of
twice (once in each direction), and using the path connecting the last node in U 0
to its mate to ship
ow out of this node. The length of each path is still bounded by
However, because of the additional path, the degree is only bounded by 3 T .
It follows that the value of the objective function for this solution is T +L T ,
and thus the linear program is guaranteed to nd a solution whose value is upper
bounded by this value. Let L T and T denote the values of the length and congestion
in the optimal solution of the above linear program.
The optimal solution is a \fractional" solution in the sense that the (unit)
ow of
each commodity is split between several
ow paths. We round the fractional solution
into an integral solution using an algorithm proposed by Srinivasan and Teo [23].
This algorithm builds on a theorem proved by Karp et al. [16]. For completeness and
since the details are slightly dierent, we now describe the rounding of the fractional
solution.
Theorem 3.3. [16] Let A be a real valued r s matrix, and y be a real valued
s-vector. Let b be a real valued vector such that Ay = b. Let t be a positive real number
such that in every column of A,
1. the sum of all positive entries t, and
2. the sum of all negative entries t.
Then, we can compute (in polynomial time) an integral vector
y such that for every
t.
We now show how to nd an integral
ow of congestion at most 6 T +4L T , where
each
ow path (of each commodity) has length at most 4L T . We rst decompose the
ow into (polynomially many)
ow paths. If any path in this decomposition is longer
than 4L T , we discard it. We observe that since the average length is less than 2L T ,
discarding these long paths leaves at least half of a unit of
ow between each pair
(v We scale the
ows appropriately such that the total
ow to each t i is exactly
1. This can at most double the
ow on an edge, and the total congestion is now at
most 6 T .
denote the length bounded
ow paths. Denote the set of nodes in
a path P i by V (P i ) and the set of edges by E(P i ). Let f(P i ) denote the amount of
ow pushed on path P i . Dene the set P j as the set of all paths that carry
ow of
the jth commodity. Observe that each path belongs to exactly one P j . The linear
system needed for Theorem 3.3 is dened by the following linear equations,
where the i-th equation corresponds to the i-th row of A and the i-th element of b.
for each v s v
i:
for all j 4L T
The second set of inequalities captures the fact that the
ow on all the paths
corresponding to commodity j is exactly 1. Now the sum of the positive entries in a
column is
(length of path
The second part of the inequality follows since s v vw for all v; w and s t j
sum of the negative entries in a column is at most 4L T , this follows due to the fact
that each P i belongs to exactly one P j . Invoking Theorem 3.3 gives us a set of paths
such that,
for each v s v
i:
for all j 4L T
The second set of inequalities implies that each commodity has at least one
ow
path. So we have a set of
ow paths such that the congestion is at most 6 T
and their length is at most 4L T . Since T +L T T +L T these paths satisfy the
length and congestion properties as desired.
3.2. Finding a spider decomposition. We now show how to obtain a spider
decomposition satisfying the spider properties previously dened. Recall that we are
now given a set of paths connecting each terminal u j with another terminal Mate(u j ),
and that this set of paths satises the length and congestion properties.
We nd a set of at least jU 0 j=2 trees that satisfy the following properties which
are similar to the spider properties.
A. BAR-NOY, S. GUHA, J. NAOR AND B. SCHIEBER
Each tree spans at least two terminals from U 0 .
The diameter of each tree is at most 4L T 4 ( T
The generalized degree of each node in each of the trees is at most 6 T
Before showing how to nd these trees, we show how to transform them into the
required spiders. Repeatedly, consider the tree edges, and remove a tree edge if
it separates the tree into two subtrees such that either, both subtrees contain at
least two terminals, or one of them contains no terminals (in this case this subtree
is removed as well). Repeat this process until no more edges can be removed. The
process terminates since the number of edges is nite. Observe that upon termination,
if a connected component is not a spider, then another edge could be deleted. Thus,
we get the following claim.
3.4. When the process terminates each connected component is a spider.
Clearly, all the terminals spanned by the trees are also spanned by the spiders.
The diameter of each of these spiders is at most 4L T , since the distance between
every pair of nodes in U 0 spanned by a tree is at most 4L T to begin with. Also, the
generalized degree of the \center" of the spider is at most the generalized degree of
its originating tree since we have not introduced any new edges in the process. We
conclude that the spiders satisfy the desired spider properties.
Now, we show how to nd the required trees. Dene G p to be the undirected
graph induced by the paths from each terminal to its mate. Observe that a spanning
forest of this graph may not satisfy the required diameter property above and hence
some extra renement is necessary.
For each node u in G p , nd a unique terminal in U 0 that is closest to u (with
respect to the lengths xy associated with each link (x; y)). Ties are broken arbitrarily.
We modify the paths starting at each terminal as follows. From each terminal u
begin tracing the path connecting u to Mate(u). At some node v along this path,
the closest terminal to v will not be u. We are guaranteed to encounter such a node
because the closest node to Mate(u) is Mate(u) itself. From this node v trace the
path to its closest terminal. This creates a path from u to another terminal denoted
NewMate(u). Note that NewMate(u) may be dierent from Mate(u). However, we
are guaranteed that the path from u to NewMate(u) is not longer than the path from
u to Mate(u) and thus bounded by 4L T .
Dene an auxiliary directed graph H on the set of terminals U 0 with the set of
edges . By denition each node in H has outdegree
one. Thus, each connected component of (the undirected skeleton of) H contains
exactly one directed cycle. Discard one edge from each such connected component to
make it a rooted tree in which all edges are oriented towards the root. (The root is
unique since the outdegree of each node is one.) Note that every non-trivial strongly
connected component of H is a cycle. Thus, this can be done just by discarding an
arbitrary edge from each strongly connected component of H . Let H 0 be the resulting
forest.
Dene the level of a node in H 0 to be the number of edges in the path to it from
the root of its component. (We
ip the direction of the edges in H 0 for the purpose
of measuring distances.) Distinguish between nodes of even level and nodes of odd
level. Each edge of H 0 goes either from an odd level node to an even level node or
vice-versa.
Consider two collections of stars in H 0 . One collection consisting of edges from
odd level nodes to even level nodes, and the other consisting of edges from even level
nodes to odd level nodes. Every terminal with positive indegree and outdegree (in
spanned by a star in each one of the two collections. Every terminal with
either indegree or outdegree zero (in H 0 ) is spanned by a star in only one of the two
collections. However, by a simple pigeon-hole argument, at least one of the collections
spans at least half of the terminals.
Consider such a collection. First, note that each star in this collection induces an
undirected tree in the original graph when replacing each star edge by its originating
path. We now claim the following,
Lemma 3.5. The induced trees of any two stars belonging to the same collection
are node disjoint.
Proof. To obtain a contradiction assume they are not disjoint. Then, there exist
two distinct terminals with the same even or odd parity, say u and v, such that
NewMate(u) 6= NewMate(v), but the paths traced from u to NewMate(u) and
from v to NewMate(v) have a common node x. Consider the terminal chosen by x
as its closest terminal. We distinguish between two cases.
Case 1: The terminal chosen by x is u. Then u must be NewMate(v), contradicting
the fact that u and v are of the same parity. The case where v is the chosen terminal
of x is symmetric.
Case 2: The terminal chosen by x is NewMate(u). Then NewMate(v) must be the
same as NewMate(u); a contradiction. The case where NewMate(v) is the chosen
terminal of x is symmetric.
It is easy to see that the trees induced by the stars in the collection satisfy the
required properties. This concludes the construction.
4. Hardness of Approximations. In this section we show that the best possible
approximation factor of the minimum broadcast time in the heterogeneous Postal
model is 3 . We show this hardness result even for a restricted model in which
d. Note that when s
broadcast the message concurrently to all of its neighbors. The proof is by a reduction
to the set cover problem. In the unweighted version of the set cover problem we are
given a set U of elements and a collection S of subsets of U . The goal is to nd the
smallest number of subsets from S whose union is the set U . Feige [10] proved the
following hardness result.
Lemma 4.1. Unless NP DT IME(n log log n ), the set cover problem cannot be
approximated by a factor which is better than ln n, and hence it cannot be approximated
within any constant factor. In our proof, we will only use the fact that it is NP-Hard
to approximate the optimal set cover within any constant factor.
Theorem 4.2. It is NP-Hard to approximate the minimum broadcast time of any
graph within a factor of 3 .
Proof. Assume to the contrary that there exists an algorithm that violates the
claim of the theorem for some . We show how to approximate the set cover problem
within a constant factor using this algorithm.
To approximate the set cover problem we "guess" the size of the optimal set cover
and use our approximate minimum broadcast time algorithm to check our guess. Since
the size of the optimal set cover is polynomial, we need to check only a polynomial
number of guesses.
Consider an instance of set cover I = (U; S) where U is the set of elements, and
S a collection of subsets of U . Let jU m. Let the guess on the size
of the optimal set cover be k. We construct the following graph G. The graph G,
consists of 1 vertices: a distinguished root vertex r, vertices e
A. BAR-NOY, S. GUHA, J. NAOR AND B. SCHIEBER
corresponding to the elements of U , vertices um corresponding to the subsets,
and k additional vertices a
r
e
ae1
a
The root r has switching time and is connected to a by edges
with delay ra '
= 1. Each vertex a ' has switching time s(a ' connected
to all u j with delay a ' u j
= 1. Each vertex u j has switching time
connected to a vertex e i i the jth set contains the ith element. The delay of such
an edge is u
d, where d > 4 2
is a constant. Each vertex e i has switching time
1. Finally, to complete the instance of the multicasting problem, the target
multicast set consists of all vertices e i .
We rst show that if there is a set cover of size k, then there is a multicast scheme
of length d 2. After time 1, all the vertices a ' receive the message. After time 2,
all the vertices u j corresponding to sets which are in this cover, receive the message.
This is possible since all a ' are connected to all u j . Finally, these vertices send the
message to all the elements that they cover. Since s(u j
that the multicast time is d 2.
Suppose that the algorithm for the multicasting problem completes the multicasting
at time t. By the contradiction assumption, its approximation factor is 3 .
Since by our guess on the size of the set cover the optimal multicast time no more
than d+2 we have t (3 )(d 2.
The strict inequality follows from the choice of d.
We rst claim that all the vertices u j that participate in the multicast receive the
message from some a ' . Otherwise there exists a vertex e i 0 that received the message
via a path of a type (r; a This means that e i 0 received the message at
or after time 3d t. Our second claim is that each vertex a ' sends the message
to at most 2d vertices u j . This is because the (2d 1)st vertex would receive the
message at time 2d would not be able to help in the multicast eort that is
completed before time 3d 2.
Combining our two claims we get that the multicasting was completed with the
help of 2dk vertices u j . The corresponding 2dk sets cover all the elements e i . This
violates Lemma 4.1 that states that the set cover problem cannot be approximated
within any constant factor.
Remark:. In our proof we considered a restricted model in which the switching
time may only get two possible values and the delay may get only three possible values
(assuming that when an edge does not exist then the delay is innity). Observe that
this hardness result does not apply to the Telephone model in which the switching
time is always 1 and the delay is either 1 or innity. We have similar hardness results
for other special cases. However, none of them is better than 3 and all use similar
techniques.
Acknowledgment
. We thank David Williamson for his helpful comments.
--R
The IBM external user interface for scalable parallel systems
CCL: a portable and tunable collective communication library for scalable parallel comput- ers
Designing broadcasting algorithms in the Postal model for message-passing systems
Optimal Multiple Message Broadcasting in Telephone-Like Communication Systems
Optimal multi-message broadcasting in complete graphs
Algebraic construction of e-cient networks
Document for a standard message-passing interface
Express 3.0 Introductory Guide
Broadcast time in communication networks
A threshold of ln n for approximating set cover
Solving Problems on Concurrent Processors
Computers and Intractability: A Guide to the Theory of NP-Completeness
On the construction of minimal broadcast networks
Tight bounds on minimum broadcast networks
A survey of gossiping and broadcasting in communication networks
Global wire routing in two-dimensional arrays
Approximation algorithm for minimum time broadcast
Optimal broadcast and summation in the LogP model
Broadcast networks of bounded degree
Minimum broadcast time is NP-complete for 3-regular planar graphs and deadline 2
Rapid rumor rami
Generalizations of broadcasting and gossiping
A constant-factor approximation algorithm for packet routing
A new method for constructing minimal broadcast networks
A class of solutions to the gossip problem
--TR
--CTR
Teofilo F. Gonzalez, An Efficient Algorithm for Gossiping in the Multicasting Communication Environment, IEEE Transactions on Parallel and Distributed Systems, v.14 n.7, p.701-708, July
Michael Elkin , Guy Kortsarz, Sublogarithmic approximation for telephone multicast, Journal of Computer and System Sciences, v.72 n.4, p.648-659, June 2006
David Kempe , Jon Kleinberg , Amit Kumar, Connectivity and inference problems for temporal networks, Journal of Computer and System Sciences, v.64 n.4, p.820-842, June 2002
Pierre Fraigniaud , Bernard Mans , Arnold L. Rosenberg, Efficient trigger-broadcasting in heterogeneous clusters, Journal of Parallel and Distributed Computing, v.65 n.5, p.628-642, May 2005
Eli Brosh , Asaf Levin , Yuval Shavitt, Approximation and heuristic algorithms for minimum-delay application-layer multicast trees, IEEE/ACM Transactions on Networking (TON), v.15 n.2, p.473-484, April 2007 | postal model;logp model;combinatorial optimization;approximation algorithms;heterogeneous networks;multicast |
587029 | Beyond Competitive Analysis. | The competitive analysis of online algorithms has been criticized as being too crude and unrealistic. We propose refinements of competitive analysis in two directions: The first restricts the power of the adversary by allowing only certain input distributions, while the other allows for comparisons between information regimes for online decision-making. We illustrate the first with an application to the paging problem; as a byproduct we characterize completely the work functions of this important special case of the k-server problem. We use the second refinement to explore the power of lookahead in server and task systems. | Introduction
The area of On-Line Algorithms [16, 10] shares with Complexity Theory the
following characteristic: Although its importance cannot be reasonably denied
(an algorithmic theory of decision-making under uncertainty is of obvious
practical relevance and significance), certain aspects of its basic premises,
modeling assumptions, and results have been widely criticized with respect
to their realism and relation to computational practice. We think that now
is a good time to revisit some of the most often-voiced criticisms of competitive
analysis (the basic framework within which on-line algorithms have
Computer Science Department, University of California, Los Angeles, CA 90095. Re-search
supported in part by NSF grant CCR-9521606.
y Computer Science and Engineering, University of California, San Diego, La Jolla, CA
92093. Research supported in part by the National Science Foundation.
been heretofore studied and analyzed), and to propose and explore some
better-motivated alternatives.
In competitive analysis, the performance of an on-line algorithm is compared
against an all-powerful adversary on a worst-case input. The competitive
ratio of a problem-the analog of worst-case asymptotic complexity for
this area-is defined as
(1)
Here A ranges over all on-line algorithms, x over all "inputs", opt denotes
the optimum off-line algorithm, while A(x) is the cost of algorithm A when
presented with input x. This clever definition is both the weakness and
strength of competitive analysis. It is a strength because the setting is clear,
the problems are crisp and sometimes deep, and the results often elegant
and striking. But it is a weakness for several reasons. First, in the face of
the devastating comparison against an all-powerful off-line algorithm, a wide
range of on-line algorithms (good, bad, and mediocre) fare equally badly;
the competitive ratio is thus not very informative, fails to discriminate and
to suggest good approaches. Another aspect of the same problem is that,
since a worst-case input decides the performance of the algorithm, the optimal
algorithms are often unnatural and impractical, and the bounds too
pessimistic to be informative in practice. Even enhancing the capabilities of
the on-line algorithm in obviously desirable ways (such as a limited lookahead
capability) brings no improvement to the ratio (this is discussed more extensively
in a few paragraphs). The main argument for competitive analysis
over the classical expectation maximization is that the distribution is usually
not known. However, competitive analysis takes this argument way too far:
It assumes that absolutely nothing is known about the distribution, that any
distribution of the inputs is in principle possible; the worst-case "distribu-
tion" prevailing in competitive analysis is, of course, a worst-case input with
probability one. Such complete powerlessness seems unrealistic to both the
practitioner (we always know, or can learn, something about the distribution
of the inputs) and the theoretician of another persuasion (the absence of a
prior distribution, or some information about it, seems very unrealistic to a
probabilist or mathematical economist).
The paging problem, perhaps the most simple, fundamental, and practically
important on-line problem, is a good illustration of all these points.
An unreasonably wide range of deterministic algorithms (both the good in
practice LRU and the empirically mediocre FIFO) have the same competitive
ratio-k, the amount of available memory. Even algorithms within more
powerful information regimes-for example, any algorithm with lookahead
pages-provably can fare no better. Admittedly, there have been
several interesting variants of the framework that were at least partially successful
in addressing some of these concerns. Randomized paging algorithms
have more realistic performance [5, 11, 15]. Some alternative approaches to
evaluating on-line algorithms were proposed in [1, 14] for the general case
and in [2, 6, 7, 17] specifically for the paging problem.
In this paper we propose and study two refinements of competitive analysis
which seem to go a long way towards addressing the concerns expressed above.
Perhaps more importantly, we show that these ideas give rise to interesting
algorithmic and analytical problems (which we have only begun to solve in
this paper).
Our first refinement, the diffuse adversary model, removes the assumption
that we know nothing about the distribution-without resorting to the
equally unrealistic classical assumption that we know all about it. We assume
that the actual distribution D of the inputs is a member of a known
class \Delta of possible distributions. That is, we seek to determine, for a given
class of distributions \Delta, the performance ratio
A
ED (A(x))
ED (opt(x))
(2)
That is, the adversary picks a distribution D among those in \Delta, so that the
expected, under D, performance of the algorithm and the off-line optimum
algorithm are as far apart as possible. Notice that, if \Delta is the class of all
possible distributions, (1) and (2) coincide since the worst possible distribution
is the one that assigns probability one to the worst-case input and
probability zero everywhere else. Hence the diffuse adversary model is indeed
a refinement of competitive analysis.
In the paging problem, for example, the input distribution specifies, for
each page a and sequence of page requests ae, prob(ajae)-the probability that
the next page fault is a, given that the sequence so far is ae. It is unlikely that
an operating system knows this distribution precisely. On the other hand,
it seems unrealistic to assume that any distribution at all is possible. For
example, suppose that the next page request is not predictable with absolute
certainty: prob(ajae) - ffl, for all a and ae, where ffl is a real number between
capturing the inherent uncertainty of the request sequence. This
is a simple, natural, and quite well-motivated assumption; call the class of
distributions obeying this inequality \Delta ffl . An immediate question is, what is
the resulting competitive ratio
As it turns out, the answer is quite interesting. If k is the storage capacity,
the ratio shown to coincide with the expected cost of a simple
random walk on a directed graph with approximately
nodes. For
this value is easy to estimate: It is between 1
ffl; for
larger values of k we do not have a closed-form solution for the ratio. There
are two important byproducts of this analysis: First, extending the work in
[8], we completely characterize the work functions of the paging special case
of the k-server problem. Second, the optimum on-line algorithm is robust-
that is, the same for all ffl's-and turns out to be a familiar algorithm that
is also very good in practice: LRU. It is very interesting that LRU emerges
from the analysis as the unique "natural" optimal algorithm, although there
are other algorithms that are also optimal.
The second refinement of competitive analysis that we are proposing deals
with the following line of criticism: In traditional competitive analysis, the
all-powerful adversary frustrates not only interesting algorithms, but also
powerful information regimes. The classical example is again from paging:
In paging the best competitive ratio of any on-line algorithm is k. But
what if we have an on-line algorithm with a lookahead of ' steps, that is,
an algorithm that knows the immediate future? It is easy to see that any
such algorithm must fare equally badly as algorithms without lookahead. In
proof, consider a worst case request sequence abdc \Delta \Delta \Delta and take its ('
stuttered version, a '+1 b '+1 d '+1 c '+1 \Delta \Delta \Delta It is easy to see that an algorithm with
lookahead ' is as powerless in the face of such a sequence as one without a
lookahead. Once more, the absolute power of the adversary blurs practically
important distinctions. Still, lookahead is obviously a valuable feature of
paging algorithms. How can we use competitive analysis to evaluate its
power? Notice that this is not a question about the effectiveness of a single
algorithm, but about classes of algorithms, about the power of information
regimes-ultimately, about the value of information [12].
To formulate and answer this and similar questions we introduce our
second refinement of competitive analysis, which we call comparative analysis.
Suppose that A and B are classes of algorithms-typically but not necessarily
usually a broader class of algorithms, a more powerful
information regime. The comparative ratio R(A;B) is defined as follows:
min A2A
This definition is best understood in terms of a game-theoretic interpre-
tation: B wants to demonstrate to A that it is a much more powerful class
of algorithms. To this end, B proposes an algorithm B among its own. In
response, A comes up with an algorithm A. Then B chooses an input x.
Finally, A pays B the ratio A(x)=B(x). The larger this ratio, the more powerful
B is in comparison to A. Notice that, if we let A be the class of on-line
algorithms and B the class of all algorithms-on-line or off-line-then equations
(1) and (3) coincide, and Hence comparative analysis is
indeed a refinement of competitive analysis.
We illustrate the use of comparative analysis by attacking the question
of the power of lookahead in on-line problems of the "server" type: If L ' is
the class of all algorithms with lookahead ', and L 0 is the class of on-line
algorithms, then we know that, in the very general context of metrical task
systems [3] we have
(that is, the ratio is at most 2' systems, and it is
exactly while in the more restricted context of paging
Diffuse Adversaries
The competitive ratio for a diffuse adversary 1 is given in equation (2). In
order to factor out the disadvantage of the algorithm imposed by the initial
conditions, we shall allow an additive constant in the numerator. More
precisely, an on-line algorithm A is c-competitive against a class \Delta of input
distributions if there exists a constant d such that for all distributions D 2 \Delta:
Diffuse adversaries are not related to the diffusion processes in probability theory
which are continuous path Markov processes.
The competitive ratio of the algorithm A is the infimum of all such c's. Fi-
nally, the competitive ratio R(\Delta) of the class of distributions is the minimum
competitive ratio achievable by an on-line algorithm. It is important to observe
that \Delta is a class of acceptable conditional probability distributions; each
is the distribution of the relevant part of the world conditioned on the
currently available information. In the case of the paging problem with a set
of pages M , \Delta may be any set of functions of the form
where for all ae 2 M P
1. In the game-theoretic interpretation,
as the sequence of requests ae develops, the adversary chooses the values of
D(ajae) from those available in \Delta to maximize the ratio. Since we deal with
deterministic algorithms, the adversary knows precisely the past decisions
of A, but the adversary's choices may be severely constrained by \Delta. It is
indicative of the power of the diffuse adversary model that most of the proposals
for a more realistic competitive analysis are simply special cases of it.
For example, the locality of reference in the paging problem [2, 6] is captured
by the diffuse adversary model where \Delta consists of the following conditional
probability distributions: there is no edge from b to a in the
access graph and otherwise. Simirarly, the Markov paging
model [7] and the statistical adversary model [14] are also special cases of
the diffuse adversary model.
In this section we apply the diffuse adversary model to the paging prob-
lem. We shall focus on the class of distributions \Delta ffl , which contains all
functions ffl]-that is to say, all conditional distributions
with no value exceeding ffl. Since the paging problem is the k-server problem
on uniform metric spaces, we shall need certain key concepts from the
k-server theory (see [8, 4] for a more detailed exposition).
denote the number of page slots in fast memory, and let
M be the set of pages. A configuration is a k-subset of M ; we denote the
set of all configurations by C. Let ae 2 M . The work function associated
with ae, w ae (or w when ae is not important or understood from context) is a
function defined as follows: If (X) is the
optimum number of page faults when the sequence of requests is ae and the
final memory configuration is X.
Henceforth we use the symbols to denote set union and set
difference respectively. Also, we represent unary sets with their element, e.g.
we write a instead of fag.
Definition 2 If w is a work function, define the support of w to be all
configurations such that there is no Y 2 C, different from X, with
j. Intuitively, the values of w on its support completely
determine w.
The following lemmata, specific for the paging problem and not true in
general for the k-server problem, characterize all possible work functions by
determining the structure of their support. A similar, but more complicated,
characterization is implicit in the work of [11]. The first lemma states that
all values in the support are the same, and hence what matters is the support
itself, not the values of w on it.
Lemma 1 The support of a work function w contains only configurations
on which w achieves its minimum value.
Proof. The proof is by contradiction. Suppose that the lemma does not
hold. That is, there is a configuration A in the support of w such that
Choose now a configuration B with w(B) ! w(A)
that minimizes jB \Gamma Aj. By the quasiconvexity condition in [8], there is an
a A such that
This can hold only if either w(A) ?
In the first case, we get that 1 and this contradicts
the assumption that A is in the support of w. The second case also leads
to a contradiction since it violates the choice of B with minimum
because
An immediate consequence of Lemma 1 is that the off-line cost is always
equal to the minimum value of a work function. The next lemma determines
the structure of the support.
Lemma 2 For each work function w there is an increasing sequence of sets
the most recent request, such that the
support of w is precisely
Note that the converse, not needed in the sequel, also holds: Any such tower
of P j 's defines a reachable work function. For a work function w define its
signature to be the k-tuple its type to be the k-tuple
Proof. The proof is by induction on the length of the request sequence. The
basis case is obvious: Let P is the initial
configuration. For the induction step assume that we have the signature
be the resulting work function after request
r.
Consider first the case that r belongs to P t and not to P
kg. Since there is at least one configuration in the support of
w that contains the request r, the minimum value of w 0 is the same with the
minimum value of w. Therefore, the support of w 0 is a subset of the support
of w. It consists of the configurations that belong to the support of w and
contain the last request r. It is easy now to verify that the signature of w 0
is:
If, on the other hand, the request r does not belong to P k , the minimum value
of w 0 is one more than the minimum value of w. In this case, the support
of w 0 consists of all configurations in the support of w where one point has
been replaced by the request r, i.e. a server has been moved to service r.
Consequently, the signature of w 0 is given by:
The induction step now follows.
We now turn to the \Delta ffl diffuse adversary. Let w be the current work
function with signature
We want to determine the optimal on-line and off-line strategies. A natural
guess is that the best on-line strategy should prefer to have in the fast memory
pages from P i than pages from P j. The reason is that pages
in P i are more valuable to the off-line algorithm than pages in P
a configuration from the support of w remains in the support when we replace
any page from P a page from P i ; the converse does not hold in
general.
For the same reason the off-line strategy should prefer the next request to
be in P i than in Furthermore, the adversary should try to avoid placing
a request in a page already in the on-line fast memory, because although
this doesn't affect the on-line cost, it may shrink the support of the current
work function. This leads to the following strategy for the adversary: First,
order all pages in such a way that pages in P i precede pages in P
pages in P k precede pages in P k , the complement of
Call such an ordering a canonical ordering with respect to the signature
. The adversary now assigns probability ffl to the first 1=ffl pages of a
canonical ordering that are not in the on-line fast memory. Notice that this
presupposes that there are at least k + 1=ffl pages in total. Although, both
strategies seem optimal, we don't have a simple proof for this. Instead, we
will proceed carefully to establish that this is in fact the case.
But first we analyze the competitive ratio that results from the above
strategies. It is not difficult to see that the off-line cost is the expected cost
of a Markov chain M k;ffl : The states of M k;ffl are the types of the work functions,
i.e., all tuples (p From
a state (p 1 which corresponds to a signature
there are transitions to the following (not necessarily distinct)
They correspond to the case of the next request r being in P t ,
the case of corresponds to simply repeating the last request and we can
safely omit it. There is also a transition to the type
that corresponds to the case r 62 P k . The cost of each transition is the
associated off-line cost. All transitions have cost zero, except the last one,
which has cost one, because a request r increases the minimum value of the
work function if and only if r 62 P k .
Finally, the transition probabilities are determined by the adversary's
strategy. The total probability of the first t transitions is maxf(p
(each page that is not in on-line fast memory has probability ffl). The probability
of the last transition is the remaining probability
also shows that there is no need to consider types with p k greater than k+1=ffl
because these types are 'unreachable'. The significance of this fact is that
the Markov process M k;ffl is finite (it has O((k
Let c(M k;ffl ) be the expected cost of each step of the Markov process M k;ffl ,
which is also the expected off-line cost. If we assume that the on-line cost of
each step is one, we get that the competitive ratio resulting from the above
strategies is 1=M k;ffl .
We now turn our attention to the optimal strategy for the adversary. It
should be clear that in each step the adversary assigns probabilities to pages
which are not in the on-line fast memory, based on the current work function
and the configuration of the on-line algorithm. Our aim is to make the
adversary's strategy independent of the configuration of the on-line algorithm
and this can be achieved by allowing the on-line algorithm to swap any two
pages without any extra cost immediately after servicing a request. The on-line
algorithm suffers one page fault in each step, but it can move to the best
configuration relative to the current work function with no extra penalty.
Consider now a work function w with signature
We will first argue that the optimal strategy for
the adversary prefers to assign probability to pages in P i than to pages in
j. The reason is that the type (1;
of the resulting work function w 1 after a request in P i is
no less than the type
of the resulting work function w 2 after a request in P precisely,
by symmetry and the assumption that the on-line algorithm is always at the
best configuration, the only relevant part of a work function to the off-line
strategy is its type. Therefore, instead of comparing w 2 directly to w 1 we
can compare w 2 to any work function w 0
1 that has the same type as w 1 . It is
easy to see that there is a work function w 0
1 with the same type as w 1 such
that any configuration in the support of w 2 belongs to the support of w 0
1 .
Consequently, for any request sequence ae the off-line cost to service it with
initial function w 0
1 is no more than the off-line cost to service ae with initial
work function w 2 .
Similarly, we will argue that the optimal adversary prefers to assign probability
to pages in P k than to pages in P k . However, there is a trade-off
involved when the adversary assigns probability to pages in P k : The adversary
suffers an immediate page fault but the support of the resulting work
function is larger and therefore the adversary will pay less in the future. We
want to verify that the adversary never recovers a future payoff larger than
the current loss (the page fault). Again, we cannot directly compare the two
work functions w 1 and w 2 that result from a request in P k and a request in P k
respectively, but it suffices to compare w 1 with a work function w 0
2 that has
the same type with w 2 . The next lemma shows that for any request sequence
ae the cost of servicing it with initial work function w 1 differs by at most one
from the cost of servicing it with initial work function w 0
. As we argued
above, the worst case happens when the request in P k is actually a request in
therefore we need to compare two work functions with types
1).
In the next lemma we use for simplicity q
Lemma 3 Let w 1 be a work function with type (1;
Then there is a work function w 2 with type (1; 2; such that for
any configuration in the support of w 1 there is a configuration in the support
of w 2 that differs in at most one position.
Proof. Let be the signature of w 1 and let a be a page in
be the work function with signature
Consider a configuration X in the support of w 1 . We will show that there
exists a configuration Y in the support of w 2 such that jY \Gamma Xj - 1. Consider
a canonical ordering with respect to the signature of w 2 and let x k be the
last page of X in this order. Also, let b be the first page in this ordering
not in X \Gamma x k . We claim that differs in at most one
position from X, is in the support of w 2 . Notice first that Y contains the
page in P 1 . It also contains the page a, since a is in the the second place in
the ordering and therefore either a 2 X or a = b. It remains to show that
1. There are two cases to consider: The
first case, when 1, follows from the fact that x k is in P j only
if b is in P j . For the second case, when suffices to note that
In summary, assuming that the on-line algorithm suffers cost one in each
step but can swap pages freely, the optimum strategy for the adversary is to
assign probabilities to pages which are not in the fast on-line memory and
are first in a canonical ordering with respect to the current signature. On
the other hand, the on-line strategy should prefer to have in its fast memory
the first pages of this ordering. Therefore, if a 1 ; a is a canonical ordering
then the best configuration for an on-line algorithm is g. This
poses the question whether it is always possible for the on-line algorithm to
be in such a configuration by a simple swap in each step. Fortunately, there
is a familiar algorithm that does exactly this: the Least Recently Used (LRU)
algorithm. Surprisingly, LRU does not even remember the whole signature
of the work function. It simply remembers the first k pages a 1 ; a of a
canonical ordering, and services the next request by swapping the a k page. It
is easy to verify that after the next request, its fast memory contains the first
k pages of some canonical ordering with respect to the resulting signature.
Therefore, we have shown:
Theorem 1 When there are at least k+1=ffl pages, algorithm LRU is optimal
against a diffuse adversary with competitive ratio 1=c(M k;ffl ).
It seems difficult to determine the exact competitive ratio 1=c(M k;ffl ). For
the extreme values of ffl we know that 1. The
first case is when the adversary has complete power and the second when
the adversary suffers a page fault in almost every step. In fact, the next
corollary suggests that the competitive ratio may not be expressible by a
simple closed-form expression. It remains an open problem to find a simple
approximation for the competitive ratio.
Corollary 1 If and 1=ffl is an integer then
Y
Therefore,
ffl.
Proof. It is not difficult to see that the Markov process is identical to
the following random process: in each phase repeatedly choose uniformly
a number from f1; phase ends when a number
is chosen twice. This random process is a generalization of the well-known
birthday problem in probability theory. A phase corresponds to a cycle in the
Markov chain that starts (and ends) at state with type (1; 2). The cost of
the Markov chain is the expected length of a phase minus one (all transitions
in the cycle have cost one except the last one). It is not hard now to verify
the expression for R(ffl).
In order to bound the expected length of a phase, notice that each of the
first
n numbers has probability at most 1=
p n to end the phase. In contrast,
each of the next
n numbers has probability at least 1=
n to end the phase.
Elaborating on this observation we get that 1
ffl.
Comparative Analysis
On-line algorithms deal with the relations between information regimes. Formally
but briefly, an information regime is the class of all functions from a
domain D to a range R which are constant within a fixed partition of D.
Refining this partition results in a richer regime. Traditionally, the literature
on on-line algorithms has been preoccupied with comparisons between two
basic information regimes: The on-line and the off-line regime (the off-line
regime corresponds to the fully refined partition). As we argued in the intro-
duction, this has left unexplored several intricate comparisons between other
important information regimes.
Comparative analysis is a generalization of competitive analysis allowing
comparisons between arbitrary information regimes, via the comparative ratio
defined in equation (3). Naturally, such comparisons make sense only if
the corresponding regimes are rich in algorithms-single algorithms do not
lend themselves to useful comparisons. As in the case of the competitive
ratio for the diffuse adversary model, we usually allow an additive constant
in the numerator of equation (3).
We apply comparative analysis in order to evaluate the power of lookahead
in task systems. An on-line algorithm for a metrical task system has lookahead
if it can base its decision not only on the past, but also on the next '
requests. All on-line algorithms with lookahead ' comprise the information
regime L ' . Thus, L 0 is the class of all traditional on-line algorithms.
Metrical task systems [3] are defined on some metric space M; a server
resides on some point of the metric space and can move from point to point.
Its goal is to process on-line a sequence of tasks cost c(T j ; a j )
of processing a task T j is determined by the task T j and the position a of the
server while processing the task. The total cost of processing the sequence is
the sum of the distance moved by the server plus the cost of servicing each
task
Theorem 2 For any metrical task system, R(L
there are metrical task systems for which R(L
Proof. Trivially the theorem holds for Assume that ' ? 0 and fix an
algorithm B in L ' . We shall define an on-line algorithm A without lookahead
whose cost on any sequence of tasks is at most 2' times the cost of B.
Algorithm A is a typical on-line algorithm in comparative analysis: It tries
to efficiently "simulate" the more powerful algorithm B. In particular, A
knows the position of B ' steps ago. In order to process the next task, A
moves first to B's last known position, and then processes the task greedily,
that is, with the minimum possible cost.
be a sequence of tasks and let b be the points where
algorithm B processes each task and a 1 ; a the corresponding points for
algorithm A. For simplicity, we define also points b negative
j's.
Then the cost of algorithm B is
and the cost of algorithm A is
Recall that in order to process the j-th task, algorithm A moves to B's last
known position b j \Gamma' and then processes the task greedily, that is, d(b
is the smallest possible. In particular,
?From this, the fact that costs are nonnegative and the triangle inequality
we get
Combining these with the triangle inequalities of the form
we get that the cost of algorithm A is at most
The last expression is times the cost of algorithm B.
For the converse, observe that when c(T
all triangle inequalities above hold as equalities then a comparative ratio of
can be achieved.
Of course, for certain task systems the comparative ratio may be less that
1. For the paging problem, it is ' + 1.
Theorem 3 For the paging problem
Proof. Let be an algorithm for the paging
problem in the class L ' , that is, with lookahead '. Without loss of generality
we assume that B moves its servers only to service requests. Consider the
following on-line algorithm A:
In order to service a request r, A moves a server that has not
been moved in the last n times such that the resulting configuration
is as close as possible to the last known configuration of
B.
Fix a worst-case request sequence ρ and let A_1, A_2, ... and B_1, B_2, ... be
the configurations of A and B that service ρ. Without loss of generality, we
assume that A moves a server in each step. By definition, A services the t-th
request by moving a server not in B_{t-n} (unless A_{t-1} ⊆ B_{t-n}).
We will first show by induction that A_t and B_t differ in at
most n points: |B_t - A_t| ≤ n. This is obviously true for t ≤ n. Assume that
it holds for t - 1. If |B_{t-1} - A_{t-1}| < n, then clearly the statement holds, because in
each step the difference can increase by at most one. Otherwise, A services
the t-th request by moving the server from some point x, x ∉ B_{t-n}. Observe
that A_t can differ from B_t in more than n points only if x ∈ A_{t-1} ∩ B_t.
However, x can belong to B_t only if x was requested at least once
in the steps t-n+1, ..., t, because B moves its servers only to
service requests. Therefore, A moved a server at x in the last n steps and it
cannot move it again. Hence, A_t and B_t cannot differ in more than n points.
The theorem now follows from the observation that for every n
moves of A there is a move of some server of B. The reason is this: if B stays
in the same configuration then A will converge to the same configuration
in at most n moves (recall that A moves a different server each
time).
4 Open problems
We introduced two refinements of competitive analysis, the diffuse adversary
model and comparative analysis. Both restrict the power of the adversary:
The first by allowing only certain input distributions and the second by
restricting the refinement of the adversary's information regime. In general,
we believe that the only natural way to deal with uncertainty is by designing
algorithms that perform well in the worst world which is compatible with
the algorithm's knowledge. There are numerous applications of these two
frameworks for evaluating on-line algorithms. We simply mention here two
challenging open problems.
The Markov diffuse adversary. Consider again the paging problem. Suppose
that the request sequence is the output sequence of an unknown Markov
chain (intuitively, the program generating the page requests) with at most
s states, which we can only partially observe via its output. That is, the
output f(q) of a state q of the unknown Markov process is a page in M . The
allowed distributions \Delta are now all output distributions of s-state Markov
processes with output. We may want to restrict our on-line algorithms to
ones that do not attempt to exhaustively learn the Markov process-one way
to do this would be to bound the length of the request sequence to O(s). We
believe that this is a useful model of paging, whose study and solution may
enhance our understanding of the performance of actual paging systems.
The power of vision. Consider two robots, one with vision α (its visual
sensors can detect objects at distance α) and the other with vision β, β > α.
We want to measure the disadvantage of the first robot in navigating or
exploring a terrain against the second robot. Naturally, comparative analysis
seems the most appropriate framework for this type of problem. Different
restrictions on the terrain and the objective of the robot result in different
problems, but we find the following simple problem particularly challenging:
On the plane, there are n opaque objects. The objective of the robot is to
construct a map of the plane, i.e., to find the position of all n objects. We
ask what the comparative ratio R(V_α, V_β) for this problem is, where V_α and
V_β denote the information regimes of vision α and β, respectively.
--R
A new measure for the study of on-line algorithms
Competitive paging with locality of reference.
An optimal on-line algorithm for metrical task systems
The server problem and on-line games
Competitive paging algorithms.
Strongly competitive algorithms for paging with locality of reference.
Markov paging.
On the k-server conjecture
Beyond competitive analy- sis
Competitive algorithms for on-line problems
A strongly competitive randomized paging algorithm.
On the value of information.
Shortest paths without a map.
A statistical adversary for on-line algorithms
Memory versus randomization in on-line algorithms
Amortized efficiency of list update and paging rules.
--TR
--CTR
Peter Damaschke, Scheduling Search Procedures, Journal of Scheduling, v.7 n.5, p.349-364, September-October 2004
Marek Chrobak , Elias Koutsoupias , John Noga, More on randomized on-line algorithms for caching, Theoretical Computer Science, v.290 n.3, p.1997-2008, 3 January
Marcin Bienkowski, Dynamic page migration with stochastic requests, Proceedings of the seventeenth annual ACM symposium on Parallelism in algorithms and architectures, July 18-20, 2005, Las Vegas, Nevada, USA
Peter Damaschke, Scheduling search procedures: The wheel of fortune, Journal of Scheduling, v.9 n.6, p.545-557, December 2006
Nicole Megow , Marc Uetz , Tjark Vredeveld, Models and Algorithms for Stochastic Online Scheduling, Mathematics of Operations Research, v.31 n.3, p.513-525, August 2006
Marek Chrobak, SIGACT news online algorithms column 8, ACM SIGACT News, v.36 n.3, September 2005
James Aspnes , Orli Waarts, Compositional competitiveness for distributed algorithms, Journal of Algorithms, v.54 n.2, p.127-151, February 2005 | paging problem;online algorithms;competitive analysis;metrical task systems |
587060 | Using redundancies to find errors. | This paper explores the idea that redundant operations, like type errors, commonly flag correctness errors. We experimentally test this idea by writing and applying four redundancy checkers to the Linux operating system, finding many errors. We then use these errors to demonstrate that redundancies, even when harmless, strongly correlate with the presence of traditional hard errors (e.g., null pointer dereferences, unreleased locks). Finally we show how flagging redundant operations gives a way to make specifications "fail stop" by detecting dangerous omissions. | INTRODUCTION
Programming languages have long used the fact that many
high-level conceptual errors map to low-level type errors.
This paper demonstrates the same mapping in a different
direction: many high-level conceptual errors also map to low-level
redundant operations. With the exception of a few stylized
cases, programmers are generally attempting to perform
useful work. If they perform an action, it was because they
believed it served some purpose. Spurious operations violate
this belief and are likely errors. For example, impossible
Boolean conditions can signal mistaken expressions; critical
sections without shared state can signal the use of the wrong lock;
variables written but not read can signal an unintentionally
lost result. At the least, these conditions signal
conceptual confusion, which we would also expect to correlate
with hard errors - deadlocks, null pointer dereferences,
etc. - even for harmless redundancies.
We use redundancies to find errors in three ways: (1)
by writing checkers that automatically flag redundancies, (2)
using these errors to predict non-redundant errors (such as
null pointer dereferences), and (3) using redundancies to find
incomplete program specifications. We discuss each below.
We wrote four checkers that flagged potentially dangerous
redundancies: (1) idempotent operations, (2) assignments
that were never read, (3) dead code, and (4) conditional
branches that were never taken. The errors found would
largely be missed by traditional type systems and checkers.
For example, as Section 2 shows, assignment of variables to
themselves can signal mistakes, yet such assignments will type
check in any language we know of.
Of course, some legitimate actions cause redundancies.
Defensive programming may introduce "unnecessary" operations
for robustness; debugging code, such as assertions, can
check for "impossible" conditions; and abstraction boundaries
may force duplicate calculations. Thus, to effectively
find errors, our checkers must separate such redundancies
from those induced by error.
We wrote our redundancy checkers in the xgcc extensible
compiler system [16], which makes it easy to build system-specific
static analyses. Our analyses do not depend on an
extensible compiler, but it does make it easier to prototype
and perform focused suppression of false positive classes.
We evaluated how effective flagging redundant operations
is at finding dangerous errors by applying the above four
checkers to the Linux operating system. This is a good test
since Linux is a large, widely-used source code base (we check
roughly 1.6 million lines of it). As such, it serves as a known
experimental base. Also, because it has been written by many
people, it is representative of many different coding styles and
abilities.
We expect that redundancies, even when harmless, strongly
correlate with hard errors. Our relatively uncontroversial hypothesis
is that confused or incompetent programmers tend
to make mistakes. We experimentally test this hypothesis by
taking a large database of hard Linux errors that we found in
prior work [8] and measuring how well redundancies predict
these errors compared to chance. In our tests, files that have
redundancy errors are roughly 45% to 100% more likely to
have hard errors compared to files drawn by chance. This
difference holds across the different types of redundancies.
Finally, we discuss how traditional checking approaches
based on annotations or specifications can use redundancy
checks as a safety net to find missing annotations or incomplete
specifications. Such specification mistakes commonly
map to redundant operations. For example, assume we have
a specification that binds shared variables to locks. A missed
binding will likely lead to redundancies: a critical section with
no shared state and locks that protect no variables. We can
flag such omissions because we know that every lock should
protect some shared variable and that every critical section
should contain some shared state.
This paper makes four contributions:
1. The idea that redundant operations, like type errors,
commonly flag correctness errors.
2. Experimentally validating this idea by writing and applying
four redundancy checkers to real code. The errors
found often surprised us.
3. Demonstrating that redundancies, even when harmless,
strongly correlate with the presence of traditional hard
errors.
4. Showing how redundancies give a way to make specifications
"fail stop" by detecting dangerous omissions.
The main caveat with our approach is that the errors we
count might not be errors, since we were examining code
we did not write. To counter this, we only diagnosed errors
that we were reasonably sure about. We have had close to
two years of experience with Linux bugs, so we have reason-able
confidence that our false positive rate of bugs that we
diagnose, while non-zero, is probably less than 5%.
Section 2 through Section 5 present our four checkers.
Section 6 correlates the errors they found with traditional
hard errors. Section 7 discusses how to check for completeness
using redundancies. Section 8 discusses related work.
Finally, Section 9 concludes.
2. IDEMPOTENT OPERATIONS
Table
1: Bugs found by the idempotent checker in Linux
version 2.4.5-ac8. Columns: System, Bugs, Minor, False.
The checker in this section flags idempotent operations
where a variable is: (1) assigned to itself (x = x), (2) divided
by itself (x / x), (3) bitwise or'd with itself (x | x), or (4)
bitwise and'd with itself (x & x). The checker is the simplest
in the paper (it requires about 10 lines of code in our system).
Even so, it found several interesting cases where redundancies
signal high-level errors. Four of these were apparent typos in
variable assignments. The clearest example was the following
code, where the programmer makes a mistake while copying
structure sa to structure da:
/* 2.4.1/net/appletalk/aarp.c:aarp_rcv */
else { /* We need to make a copy of the entry. */
This is a good example of how redundant errors catch cases
that type systems miss. This code - an assignment of a
variable to itself - will type check in all languages we know
of, yet clearly contains an error. Two of the other errors
were caused by integer overflow (or'ing an 8-bit variable by a
constant that only had bits set in the upper 16 bits). The final
one was caused by an apparently missing conversion routine.
The code seemed to have been tested only on a machine
where the conversion was unnecessary, which prevented the
tester from noticing the missing routine.
The minor errors were operations that seemed to follow
a nonsensical but consistent coding pattern, such as adding 0
to a variable for typographical symmetry with other non-zero
additions.
Curiously, each of the three false positives was annotated
with a comment explaining why the redundant operation was
being done. This gives evidence for our belief that programmers
regard redundant operations as somewhat unusual.
Macros are the main source of potential false positives.
They represent logical actions that may not map to a concrete
action. For example, networking code contains many calls
of the form x = htons(x), used to reorder the bytes in
variable x in a canonical "network order" so that a machine
receiving the data can unpack it appropriately. However, on
machines on which the data is already in network order, the
macro will expand to nothing, resulting in code that will
simply assign x to itself. To suppress these false positives, we
modified the preprocessor to note which lines contain macros
- we simply ignore errors on these lines.
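As a rough illustration of how little machinery such a checker needs, the sketch below (our own Python, not the xgcc extension from the paper) flags textual self-assignments and self-divisions in preprocessed C source, skipping lines known to come from macro expansions; the macro_lines input is an assumption of ours, not something xgcc exposes under this name.

import re, sys

# Toy approximation of the idempotent-operation checker.
PATTERNS = [
    re.compile(r'\b(\w+)\s*=\s*\1\s*;'),         # x = x;
    re.compile(r'\b(\w+)\s*(?:/|\||&)\s*\1\b'),  # x / x, x | x, x & x
]

def check(path, macro_lines=frozenset()):
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            if lineno in macro_lines:
                continue                          # suppress macro-induced redundancy
            for pat in PATTERNS:
                if pat.search(line):
                    print(f"{path}:{lineno}: possible idempotent operation: {line.strip()}")

if __name__ == "__main__":
    for p in sys.argv[1:]:
        check(p)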
3. REDUNDANT ASSIGNMENTS
System Bugs False Uninspected
Linux 2.4.5-ac8 129 26 1840
Table
2: Bugs found by the redundant assignment checker
in Linux version 2.4.5-ac8 and the xgcc system used in this
paper. There were 1840 uninspected errors for variables assigned
but never used in Linux - we expect a large number
of these will be actual errors given the low number of false
positives in our inspected results.
The checker in this section flags cases where a value assigned
to a variable is not subsequently used. The checker
tracks the lifetime of variables using a simple global analy-
sis. At each assignment it follows the variable forward on all
paths. It emits an error message if the variable is read on
no path before either exiting scope or being assigned another
value. As we show, in many cases such lost values signal real
errors, where control flow followed unexpected paths, results
that were computed were not returned, etc.
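The core analysis can be pictured as a per-variable liveness walk over the control-flow graph. The sketch below is our own simplification (a CFG object with succs, defines and uses per node is an assumed interface; the real checker is written as an xgcc extension): it flags a definition when no path from it reaches a use before a redefinition or the end of scope.

# Illustrative sketch only; cfg.nodes, cfg.succs, cfg.defines, cfg.uses are assumed.
def dead_assignments(cfg, var):
    flagged = []
    for n in cfg.nodes:
        if not cfg.defines(n, var):
            continue
        # Depth-first search forward; report the assignment if no use is
        # reachable on any path before the variable is redefined.
        stack, seen, read = list(cfg.succs(n)), set(), False
        while stack:
            m = stack.pop()
            if m in seen:
                continue
            seen.add(m)
            if cfg.uses(m, var):
                read = True
                break
            if cfg.defines(m, var):      # killed before being read on this path
                continue
            stack.extend(cfg.succs(m))
        if not read:
            flagged.append(n)
    return flagged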
The checker finds thousands of redundant assignments
in a system the size of Linux. Since it was so effective, we
minimized the chance of false positives by radically restricting
the variables it would follow to non-global variables that were
not aliased in any way.
Most of the checker code deals with differentiating the
errors into three classes, which it ranks in the following order:
1. Variables assigned values that are not read. Empirically,
these errors tend to be the most serious, since they flag
unintentionally lost results.
2. Variables assigned a non-constant that is then overwritten
without being read. These are also commonly er-
rors, but tend to be less severe. False positives in this
class tend to come from assigning a return value from
a function call to a dummy variable that is ignored.
3. Variables assigned a constant and then reassigned other
values without being read. These are frequently due
to defensive programming, where the programmer always
initializes a variable to some safe value (most com-
monly: NULL, 0, 0xffffffff, and -1) but does not read
it before use. We track the value and emit it when reporting
the error so that messages using a common
defensive value can be easily suppressed.
Suppressing false positives. As with many redundant
checkers, macros and defensive programming cause most false
positives. To minimize the impact of macros, the checker
does not track variables killed or produced by macros. Its
main remaining vulnerability is values assigned and then
passed to debugging macros that are turned off.
Typically there are a small number of such macros, which
we manually turn back on.
We use ranking to minimize the impact of defensive pro-
gramming. Redundant operations that can be errors when
done within the span of a few lines can be robust programming
practice when separated by 20. Thus we rank errors
based on (1) the line distance between the assignment and
reassignment and (2) the number of conditions on the path.
Close errors are most likely; farther errors become more arguably
defensive programming.
The errors. This checker found more errors than all the
other checkers we have written combined. There were two
interesting error patterns that showed up as redundant as-
signments: (1) variables whose values were (unintentionally)
discarded and (2) variables whose values were not used because
of surprising control flow (e.g., an unexpected return).
Figure
1 shows a representative example of the first pat-
tern. Here, if the function signal pending returns true (a
signal is pending to the current process), an error code is
set and the code breaks out of the
enclosing loop. The value in err must be passed back to
the calling application so that it will retry the system call.
However, the code always returns 0 to the caller, no matter
what happens inside the loop. This will lead to an insidious
error: the code usually works but, occasionally, it will abort
but return a success code, causing the client to assume the
operation happened.
There were numerous similar errors on the caller side,
where the result of a function was assigned to a variable, but
then ignored rather than being checked. In both of these
cases, the fact that logically the code contains errors is readily
flagged by looking for variables assigned but not used.
The second class of errors comes from calculations that
are aborted by unexpected control flow. Figure 2 gives one ex-
ample: here all paths through a loop end in a return, wrongly
aborting the loop after a single iteration. This error is caught
by the fact that an assignment used to walk down a linked
list is never read because the loop iterator that would do so is
dead code. Figure 3 gives a variation on the theme of unexpected
control flow. Here an if statement has an extraneous
statement terminator at its end, making the subsequent return
to be always taken. In these cases, a coding mistake caused
"dangling assignments" that were not used. This fact allows
/* 2.4.1/net/decnet/af_decnet.c:dn_wait_run */
do {
        ...
        if (signal_pending(current)) {
                err = -ERESTARTSYS;   /* BUG: lost value */
                break;
        }
        ...
} while (scp->state != DN_RUN);
return 0;
Figure
1: Lost return value caught by flagging the redundant
assignment to err.
/* 2.4.1/net/atm/lec.c:lec_addr_delete: */
for (entry = .;
     entry != NULL;
     entry = next)   /* BUG: never reached */
{
        next = entry->next;
        if (.) {
                lec_arp_remove(priv->lec_arp_tables, entry);
                .
        }
        return 0;
}
Figure
2: A single-iteration loop caught by flagging the redundant
assignment next = entry->next. The assignment
appears to be read in the loop iteration statement (entry =
next) but it is dead code, since the loop always exits after a
single iteration. The logical result will be that if the entry the
loop is trying to delete is not the first one in the list, it will
not be deleted.
us to flag such bogus structures even when we do not know
how control flows in the code. The presence of these errors
led us to write the dead-code checker in the next section.
Reassigning values is typically harmless, but it does signal
fairly confused programmers. For example:
/* 2.4.5-ac8/drivers/net/wan/sdla_x25.c:
alloc_and_init_skb_buf */
struct sk_buff *new_skb = *skb;
new_skb = dev_alloc_skb(.);
Where new_skb is assigned the value *skb but then immediately
reassigned another allocated value. A different case
shows a potential confusion about how C's iteration works:
/* 2.4.1/drivers/scsi/scsi.c: */
for (; SCpnt; SCpnt = SCnext) {
Where the variable SCnext is assigned and then immediately
reassigned in the loop. The logic behind this decision remains
unclear.
The most devious error. A few of the values reassigned
before being used were suspicious lost values. One of the
worst (and most interesting) was from a commercial system
which had the equivalent of the following code:
2.4.5-ac8/fs/ntfs/unistr.c:ntfs_collate_names */
for {
if (ic) {
if (c1 < upcase_len)
if (c2 < upcase_len)
/* [META] stray terminator! */
return err_val;
if (c1 < c2)
return -1;
Figure
3: Catastrophic return caught by the redundant assignment
to c2. The last conditional is accidentally terminated
because of a stray statement terminator (";") at the end of
the line, causing the routine to always return err val.
/* 2.4.1/net/ipv6/raw.c:rawv6_getsockopt */
switch (optname) {
case IPV6_CHECKSUM:
        if (opt->checksum == 0)
                val = -1;
        else
                val = opt->offset;
        /* BUG: always falls through */
default:
        return -ENOPROTOOPT;
}
Figure
4: Unintentional switch "fall through" causing the code
to always return an error. This maps to the low-level redundancy
that the value assigned to val is never used.
System Bugs False
Linux 2.4.5-ac8 66 26
Table
3: Bugs found by the dead code checker on Linux
version 2.4.5-ac8.
At first glance this seems like an obvious copy-and-paste
error. It turned out that the redundancy flags a much more
devious error. The array buf actually pointed to a "memory
mapped" region of kernel memory. Unlike normal memory,
reads and writes to this memory cause the CPU to issue
I/O commands to a hardware device. Thus, the reads are not
idempotent, and the two of them in a row rather than just one
can cause very different results to happen. However, the above
code does have a real (but silent) error - in the variant of
C that this code was written, pointers to memory mapped IO
must be declared as "volatile." Otherwise the compiler is free
to optimize duplicate reads away, especially since in this case
there were no pointer stores that could change their values.
Dangerously, in the above case buf was declared as a normal
pointer rather than a volatile one, allowing the compiler to
optimize as it wished. Fortunately the error had not been
triggered because the GNU C compiler that was being used
had a weak optimizer that conservatively did not optimize
expressions that had many levels of indirection. However,
the use of a more aggressive compiler or a later version of gcc
could have caused this extremely difficult to track down bug
to surface.
4. DEAD CODE
The checker in this section flags dead code. Since programmers
generally write code to run it, dead code catches
logical errors signaled by false beliefs that an impossible path
can execute.
The core of the dead code checker is a straightforward
mark-and-sweep algorithm. For each routine it (1) marks
all blocks reachable from the routine's entry node and (2)
traverses all blocks in the routine, flagging any that are not
marked. It has three modifications to this basic algorithm.
First, it truncates all paths that reach functions that would
not return. Examples include "panic," "abort" and "BUG"
which are used by Linux to signal a terminal kernel error
and reboot the system - code dominated by such calls cannot
run. Second, we suppress error messages for dead code
caused by constant conditions, such as
        if (0)
                printf("in foo");
since these frequently signaled code "commented out" by using
a false condition. We also annotate error messages when
the code they flag is a single statement that contains a break
or return. These are commonly a result of defensive pro-
gramming. Finally, we suppress dead code caused by macros.
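In outline, the analysis is just reachability over the control-flow graph; the following sketch (our own Python at pseudocode level, with an assumed CFG interface and an assumed is_noreturn predicate for functions like panic or BUG) mirrors the mark-and-sweep description above.

# Illustrative sketch only; cfg.entry, cfg.blocks, cfg.succs, cfg.calls are assumed.
def dead_blocks(cfg, is_noreturn):
    marked, stack = set(), [cfg.entry]
    while stack:
        b = stack.pop()
        if b in marked:
            continue
        marked.add(b)
        # Truncate paths at calls that never return (panic, BUG, ...):
        # code after such a call cannot run.
        if any(is_noreturn(f) for f in cfg.calls(b)):
            continue
        stack.extend(cfg.succs(b))
    return [b for b in cfg.blocks if b not in marked]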
Despite its simplicity, dead code analysis found a high
number of clearly serious errors. Three of the errors caught
by the redundant assignment checker are also caught by the
dead code extension: (1) the single iteration loop in Figure 2,
(2) the mistaken statement terminator in Figure 3, and (3)
the unintentional fall through in Figure 4.
Figure
5 gives the most frequent copy-and-paste error.
Here the macro "pseterr" returns, but the caller does not
realize it. Thus, at all seven call sites that use the macro, there
is dead code after the macro that the client intended to have
executed.
/* 2.4.1/drivers/char/rio/rioparam.c:RIOParam */
if (retval == RIO_FAIL) {
        rio_spin_unlock_irqrestore(&PortP->portSem, flags);
        pseterr(EINTR);     /* BUG: the macro returns */
        return RIO_FAIL;    /* dead code */
}
Figure
5: Unexpected return: The call pseterr is a macro
that returns its argument value as an error. Unfortunately, the
programmer does not realize this and inserts subsequent op-
erations, which are flagged by our dead code checker. There
were many other similar mistaken uses of the same macro.
Figure
6 gives another common error - a single-iteration
loop that always terminates because it contains an if-else statement
that breaks out of the loop on both paths. It is hard to
believe that this code was ever tested. Figure 7 gives a variation
on this, where one branch of the if statement breaks
out of the loop but the other uses C's ``continue'' statement,
which skips the rest of the loop body. Thus, none of the code
at the end of the body can be executed.
/* 2.4.1/drivers/scsi/53c7,8xx.c:
return_outstanding_commands */
for
(struct NCR53c7x0_cmd *) c->next) {
if (c->cmd->SCp.buffer) {
printk (".");
break;
} else {
printk ("Duh? .");
break;
/* BUG: cannot be reached */
(struct scatterlist *) list;
list
if (free) {
Figure
Broken loop: the first if-else statement of the loop
contains a break on both paths, causing the loop to always
abort, without ever executing the subsequent code it contains.
5. REDUNDANT CONDITIONALS
The checker in this section flags redundant branch conditionals:
(1) branch statements (if, while, for, etc)
with non-constant conditionals that always evaluate to either
/* 2.4.5-ac8/net/decnet/dn_table.c:
dn_fib_table_lookup */
for (.; f != NULL; f = f->fn_next) {
        if (!dn_key_leq(k, f->fn_key))
                break;
        else
                continue;
        /* BUG: cannot be reached */
        f->fn_state |= DN_S_ACCESSED;
        if (f->fn_state & DN_S_ZOMBIE)
                .
        if (f->fn_scope < key->scope)
                .
Figure
7: Useless loop body: similarly to Figure 6 this loop
has a broken if-else statement. One branch aborts the loop,
the other uses C's continue statement to skip the body and
begin another iteration.
/* 2.4.1/drivers/net/arcnet/arc-rimi.c:
arcrimi_found */
/* reserve the irq */ {
if (request_irq(dev->irq, &arcnet_interrupt .))
BUGMSG(D_NORMAL,
"Can't get IRQ %d!\n", dev->irq);
return -ENODEV;
Figure
8: Unexpected return: misplaced braces from the insertion
of a debugging statement causes control to always
return.
true or false; (2) switch statements with impossible case's.
Both cases are a result of logical inconsistency in the program
and are therefore likely to be errors.
The checker is based on the false-path pruning (FPP) feature
in the xgcc system. FPP was originally designed to prune
away false positives arising from infeasible paths. It symbolically
evaluates variable assignments and comparisons, either
to constants (e.g. or to other variables
(e.g. using a simple congruence closure algorithm
[11]. It will stop the checker from checking the
current execution path as soon as it detects a logical conflict.
With FPP, the checker is implemented using a simple
mark-and-sweep algorithm. For each routine, it explores all
feasible execution paths and marks branches (as opposed to
basic blocks in Section 4) visited along the way. Then it
takes the set of unmarked branches and flags conditionals
associated with them as redundant.
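A drastically simplified picture of this combination (our own sketch; the real implementation symbolically tracks assignments and comparisons inside xgcc) is a depth-first path enumeration that carries a partial environment of branch decisions and skips branch directions that contradict it. The interfaces cfg.entry, cfg.branch and cfg.succs are assumptions, as is the simplification of only recording equalities.

# Illustrative sketch only.
def reachable_branches(cfg, depth_limit=1000):
    """Collect (node, direction) pairs seen on some feasible path.
    cfg.branch(n) returns (var, const) for a branch "if (var == const)"
    or None; cfg.succs(n, direction) lists successors."""
    marked = set()

    def walk(n, env, depth):
        if depth > depth_limit:          # crude guard against loops/blow-up
            return
        cond = cfg.branch(n)
        if cond is None:
            for m in cfg.succs(n, None):
                walk(m, env, depth + 1)
            return
        var, const = cond
        for direction in (True, False):
            if var in env and (env[var] == const) != direction:
                continue                 # this direction is infeasible here
            marked.add((n, direction))
            child_env = dict(env)
            if direction:                # remember var == const on this path
                child_env[var] = const
            for m in cfg.succs(n, direction):
                walk(m, child_env, depth + 1)

    walk(cfg.entry, {}, 0)
    return marked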
The checker was able to find hundreds of redundant conditionals
in Linux 2.4.1. The main source of false positives
arises from the following two forms of macros: (1) those with
embedded conditionals, and (2) constant macros that are used
in conditional statements (e.g. "if (DEBUG) {.}," where
DEBUG is defined to be 0). After suppressing those, we are
left with three major classes of about 200 problematic cases,
which we describe below.
The first class of errors is the least serious of the three
that we characterize as "overly cautious programming style."
This includes cases where the programmer checks the same
condition multiple times within very short program distances.
We believe this could be an indication of a novice programmer
and the conjecture is supported by the statistical analysis
described in section 6.
Figure
9 shows a redundant check of the above type from
Linux 2.4.1. Although it is almost certainly harmless, it shows
the programmer has a poor grasp of the code. One might be
willing to bet on the presence of a few surrounding bugs.
/* 2.4.1/drivers/media/video/cpia.c:cpia_mmap */
if (!cam || !cam->ops)
return -ENODEV;
/* make this _really_ smp-safe */
if (down_interruptible(&cam->busy_lock))
return -EINTR;
if (!cam || !cam->ops) /* REDUNDANT! */
return -ENODEV;
Figure
9: Overly cautious programming style: the second
check of (!cam || !cam->ops) is redundant.
Figure
shows a more problematic case. As one can
see, the else branch of the second if statement will never
be taken, because the first if condition is weaker than the
negation of the second. Interestingly, the function returns
different error codes for essentially the same error, indicating
a possibly confused programmer.
/* 2.4.1/drivers/net/wan/sbni.c:sbni_ioctl */
if(!(slave && slave->flags & IFF_UP &&
dev->flags & IFF_UP))
{
. /* print some error message, back out */
return -EINVAL;
if (slave) { . }
/* BUG: !slave is impossible */
else {
. /* print some error message */
return -ENOENT;
Figure
10: Overly cautious programming style. The check of
slave is guaranteed to be true and also notice the difference
in return value.
The second class of errors we catch are again seemingly
harmless, but when we examine them carefully, we find serious
errors around them. With some guesswork and cross-
referencing, we assume the while loop in Figure 11 is trying
to recover from hardware errors encountered when reading
a network packet. But since the variable err is never up-dated
in the loop body, the condition (err != SUCCESS) is
always true and the loop body is never executed more than
once, which is nonsensical. This could signal a possible bug
where the author forgets to update err in the large chunk
of recovery code in the loop. This bug, if confirmed, could
be difficult to detect dynamically, because it is in the error
recovery code that is easy to miss in testing.
The third class of errors are clearly serious bugs. Figure
12 shows an example detected by the redundant condi-
/* 2.4.1/drivers/net/tokenring/smctr.c:
smctr_rx_frame */
while (.) {
        . /* large chunk of apparent recovery code,
             with no updates to err */
        if (err != SUCCESS)
                break;
}
Figure
Redundant conditional that suggests a serious program
error.
tional checker. As one can see, the second and third if statements
carry out entirely different actions on identical condi-
tions. Apparently, the programmer cut-and-pasted the conditional
without changing one of the two NODE_LOGGED_OUT
into a fourth possibility: NODE_NOT_PRESENT.
/* 2.4.1/drivers/fc/iph5526.c:
rscn_handler */
if ((login_state == NODE_LOGGED_IN) ||
(login_state == NODE_PROCESS_LOGGED_IN)) {
else
if (login_state == NODE_LOGGED_OUT)
tx_adisc(fi, ELS_ADISC, node_id,
else
/* BUG: redundant conditional */
if (login_state == NODE_LOGGED_OUT)
tx_logi(fi, ELS_PLOGI, node_id);
Figure
12: Redundant conditionals that signal errors: a conditional
expression being placed in the else branch of another,
identical one
Figure
13 shows another serious error. One can see that
the author intended to insert an element pointed to by sp into
a doubly-linked list with head q->q first, but the while
loop really does nothing other than setting srb p to NULL,
which is nonsensical. The checker detects this error by inferring
that the exit condition for the while loop conflicts with
the true branch of the ensuing if statement. The obvious
fix is to replace the while condition (srb p) with (srb p &&
srb p->next). This bug can be dangerous and hard to detect,
because it quietly discards everything that was in the original
list and constructs a new one with sp as the only element
in it. As a matter of fact, the same bug is still present in
the latest 2.4.19 release of the Linux kernel source as of this
writing.
6. PREDICTING HARD ERRORS WITH REDUNDANCIES
In this section we show the correlation between redundant
errors and hard bugs that can crash a system. The redundant
errors come from the previous four sections. The hard
/* 2.4.1/drivers/scsi/qla1280.c:
qla1280_putq_t */
while (srb_p )
if (srb_p) { /* BUG: this branch is never taken*/
if (srb_p->s_prev)
else
q->q_first
} else {
q->q_last
Figure
13: A serious error in a linked list insertion imple-
mentation: srb p is always null after the while loop (which
appears to check the wrong Boolean condition).
bugs were collected from Linux 2.4.1 with checkers described
in [8]. These bugs include use of freed memory, dereferences
of null pointers, potential deadlocks, unreleased locks, and
security violations (e.g., the use of an untrusted value as an
array index). We show that there is a strong correlation between
these two error populations using a statistical technique
called the contingency table method [6]. Further, we show
that a file containing a redundant error is roughly 45% to
100% more likely to have a hard error than a file selected at
random. These results indicate that (1) files with redundant
errors are good audit candidates and (2) redundancy correlates
with confused programmers who will probably make a
series of mistakes.
6.1 Methodology
This subsection describes the statistical methods used to
measure the association between program redundancies and
hard errors. Our analysis is based on the 2 × 2 contingency table
[6] method. It is a standard statistical tool for studying
the association between two different attributes of a popula-
tion. In our case, the population is the set of files we have
checked, and the two attributes are: (a) whether a file contains
redundancies, and (b) whether it contains hard errors.
In the contingency table approach, the sample population
is cross-classified into four categories based on two attributes,
say A and B, of the population. We obtain counts (o ij ) in
each category, and tabularize the result as follows:
              B true    B false    Totals
  A true      o_11      o_12       n_1.
  A false     o_21      o_22       n_2.
  Totals      n_.1      n_.2       n_..
The values in the margin (n_1., n_2., n_.1, n_.2) are row and column
totals, while n_.. is the grand total. The null hypothesis
H_0 of this test is that A and B are mutually independent,
i.e. knowing A does not give us any additional information
about B. More precisely, if H_0 holds, we are expecting that:
    o_11 / n_1. ≈ o_21 / n_2. ≈ n_.1 / n_.. 1
We can then compute expected values for the four cells
in the table as follows:
    e_ij = (n_i. × n_.j) / n_..
We use a "chi-squared" test statistic [15]:
    T = sum_{i,j} (o_ij - e_ij)^2 / e_ij
to measure how far the observed values (o_ij) deviate from
the expected values (e_ij). Using the T statistic, we can derive
the probability of observing such a value of T when the null hypothesis H_0
is true, which is called the p-value 2. The smaller the p-value,
the stronger the evidence against H 0 , thus the stronger the
correlation between attributes A and B.
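As a worked check of these formulas, the short calculation below (plain Python written for this summary, not part of the checking system) recomputes T for the redundant-assignment contingency table of Section 6.2, whose observed counts are 345, 435, 206 and 1069 files, and reproduces the T value quoted there.

# Observed 2x2 counts: rows = redundant assignments (yes/no), columns = hard bugs (yes/no).
o = [[345, 435],
     [206, 1069]]

row = [sum(r) for r in o]                 # n_1., n_2.
col = [sum(c) for c in zip(*o)]           # n_.1, n_.2
n = sum(row)                              # n_..

T = 0.0
for i in range(2):
    for j in range(2):
        e = row[i] * col[j] / n           # expected count e_ij
        T += (o[i][j] - e) ** 2 / e

print(round(T, 2))                        # -> 194.37, matching Table 4's caption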
6.2 Data acquisition and test results
In our previous work [8], we used the xgcc system to
check 2055 files in Linux 2.4.1 kernel. We had focused on serious
system crashing hard bug and were able to collect more
than 1800 serious hard bugs in 551 files. The types of bugs
we checked for included null pointer dereference, deadlocks,
and missed security checks. We use these bugs to represent
the class of serious hard errors, and derive correlation with
program redundancies.
We cross-classify the program files in the Linux kernel
into the following four categories and obtain counts in each:
1. number of files with both redundancies and hard
errors.
2. number of files with redundancies but not hard
errors.
3. number of files with hard errors but not redundancies
4. number of files with neither redundancies nor hard
errors.
We can then carry out the test described in section 6.1 for
the following three redundancy checkers: redundant assignment
checker, dead code checker, and redundant conditional
checker (the idempotent operation is excluded because of its
small sample size).
The results of the tests are given in Tables 4, 5, 6, and 7.
As we can see, the correlation between redundancies and hard
1 To see that this is true, consider 100 white balls in an urn. We
first randomly draw 40 of them and put a red mark on them.
We put them back in the urn. Then we randomly draw 80
of them and put a blue mark on them. Obviously, we
should expect roughly 80% of the 40 balls with red marks
to have blue marks, just as we should expect roughly 80% of the
remaining 60 balls without the red mark to have a blue mark.
2 Technically, under H_0, T has a χ² distribution with one degree
of freedom. The p-value can be looked up in the cumulative
distribution table of the χ²_1 distribution. For example, if T is
larger than 4, the p-value will go below 5%.
                          Hard Bugs
Redundant                 Yes     No      Totals
Assignments       Yes     345     435     780
                  No      206     1069    1275
Totals                    551     1504    2055
Table
4: Contingency table: Redundant Assignments vs. Hard
Bugs. There are 345 files with both error types, 435 files with
an assign error and no hard bugs, 206 files with a hard bug
and no assignment error, and 1069 files with no bugs of
either type. A T-statistic value above four gives a p-value of
less than .05, which strongly suggests the two events are not
independent. The observed T value of 194.37 gives a p-value
of essentially 0, noticeably better than this standard threshold.
Intuitively, the correlation between error types can be seen in
that the ratio of 345/435 is considerably larger than the ratio
if the events were independent, we expect these
two ratios to be close.
Hard Bugs
Dead Code Yes No Totals
Totals 551 1504 2055
Table
5: Contingency table: Dead code vs. Hard Bugs
errors are extremely high, with p-values being approximately
0 in all four cases. It strongly suggests that redundancies
often signal confused programmers, and therefore are a good
predictor for hard, serious errors.
6.3 Predicting hard errors
In addition to correlation, we want to know how much
more likely it is that we will find a hard error in a file that
has one or more redundant operations. More precisely, let
E be the event that a given source file contains one or more
hard errors, and R be the event that it contains one or more
forms of redundant operations, we can compute a confidence
interval for T' = P(E|R)/P(E) - 1, which is a measure
of how much more likely we are to find hard errors in a file
given program redundancies.
The prior probability of hard errors is computed as follows:
    P(E) = (number of files with hard errors) / (total number of files checked)
         = 551 / 2055 ≈ 0.27
We tabularize the conditional probabilities and T' values
in Table 8. (Again, we excluded the idempotent operation
checker because of its small bug sample.) As shown in the ta-
ble, given any form of redundant operation, it is roughly
50% more likely we will find an error in that file
than otherwise. Furthermore, redundancies even predict hard
errors across time: we carried out the same test between re-
Redundant Hard Bugs
Totals 551 1504 2055
Table
Contingency table: Redundant Conditionals vs. Hard
Bugs
Hard Bugs
Aggregate
Totals 551 1504 2055
Table
7: Contingency table: Program Redundancies (Aggre-
gate) vs. Hard Bugs
dundancies found in Linux 2.4.5-ac8 and hard errors in 2.4.1
(roughly a year older) and found similar results.
7. FAIL-STOP SPECIFICATION
This section describes how to use redundant code actions
to find several types of specification errors and omissions.
Often program specifications give extra information that allow
code to be checked: whether return values of routines must be
checked against null, which shared variables are protected by
which locks, which permission checks guard which sensitive
operations, etc. A vulnerability of this approach is that if a
code feature is not annotated or included in the specification,
it will not be checked. We can catch such omissions by
flagging redundant operations. In the above cases, and in
many others, at least one of the specified actions makes little
sense in isolation - critical sections without shared states are
pointless, as are permission checks that do not guard known
sensitive actions. Thus, if code does not intend to do useless
operations, then such redundancies will happen exactly when
checkable actions have been missed. (At the very least we
will have caught something pointless that should be deleted.)
We sketch four examples below, and close with a checker that
uses redundancy to find when it is missing checkable actions.
Detecting omitted null annotations. Tools such as
LCLint [12] let programmers annotate functions that can return
a null pointer with a "null" annotation. The tool emits
an error for any unchecked use of a pointer returned from
a null routine. In a real system, many functions can return
null, making it easy to forget to annotate them all. We
can catch such omissions using redundancy. We know only
the return value of null functions should be checked. Thus,
a check on a non-annotated function means that either the
function: (1) should be annotated with null or (2) the function
cannot return null and the programmer has misunderstood
the interface.
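Stated operationally, this omission check is little more than a set comparison; the sketch below is our own Python and assumes that a prior pass has already recorded, per function, whether each call site checks the return value against null.

# Illustrative sketch only; call_sites and annotated_null are assumed inputs.
def missing_null_annotations(call_sites, annotated_null):
    """call_sites maps a function name to (location, was_checked) pairs."""
    suspects = {}
    for fn, sites in call_sites.items():
        if fn in annotated_null:
            continue
        checked = [loc for (loc, was_checked) in sites if was_checked]
        if checked:
            # Either the 'null' annotation is missing, or the caller
            # misunderstands the interface; both are worth reporting.
            suspects[fn] = checked
    return suspects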
Finding missed lock-variable bindings. Data race detection
tools such as Warlock [20] let users explicitly bind locks
                                                          Confidence
                                                          Interval for T'
Assign        353   889   0.3971   0.1289   0.0191       48.11% ± 13.95%
Dead Code
Conditionals
Aggregate     372   945   0.3937   0.1255   0.0187       46.83% ± 13.65%
Table
8: Program files with redundancies are roughly 50% more likely to contain hard errors
to the variables they protect. The tool flags when annotated
variables are accessed without their lock held. However, lock-
variable bindings can easily be forgotten, causing the variable
to be (silently) unchecked. We can use redundancy to catch
such mistakes. Critical sections must protect some shared
state: flagging those that do not will find either (1) useless
locking (which should be deleted for good performance) or
(2) places where a shared variable was not annotated.
Missed "volatile" annotations. As described in Section 4,
in C, variables with unusual read/write semantics must be
annotated with the "volatile" type qualifier to prevent the
compiler from doing optimizations that are safe on normal
variables, but incorrect on volatile ones, such as eliminating
duplicate reads or writes. A missing volatile annotation is
a silent error, in that the software will usually work, but only
occasionally give incorrect errors. As shown, such omissions
can be detected by flagging redundant operations (reads or
writes) that do not make sense for non-volatile variables.
Missed permission checks. A secure system must guard
sensitive operations (such as modifying a file or killing a pro-
cess) with permission checks. A tool can automatically catch
such mistakes given a specification of which checks protect
which operations. The large number of sensitive operations
makes it easy to forget a binding. As before, we can use redundancy
to find such omissions: assuming programmers do
not do redundant permission checks, then finding permission
check that does not guard a known sensitive operation signals
an incomplete specification.
7.1 Case study: Finding missed security holes
In a separate paper [3] we describe a checker that found
operating system security holes caused when an integer read
from untrusted sources (network packets, system call param-
eters) was passed to a trusting sink (array indices, memory
copy lengths) without being checked against a safe upper and
lower bound. A single violation can let a malicious attacker
take control of the entire system. Unfortunately, the checker
is vulnerable to omissions. An omitted source means the
checker will not track the data produced. An omitted sink
means the checker will not flag when unsanitized data reaches
the sink.
When implementing the checker we used the ideas in
this section to detect such omissions. Given a list of known
sources and sinks, the normal checking sequence is: (1) the
code reads data from an unsafe source, (2) checks it, and (3)
passes it to a trusting sink. Assuming programmers do not
do gratuitous sanitization, then a missed sink can be detected
by flagging when code does steps (1) and (2), but not (3).
Reading a value from a known source and sanitizing it implies
the code believes the value will reach a dangerous operation.
If the value does not reach a known sink, we have likely
missed one. Similarly, we could (but did not) infer missed
sources by doing the converse of this analysis: flagging when
the OS sanitizes data we do not think is tainted and then
passes it to a trusting sink.
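Stated as a property over per-value event traces (read, bounds-check, sink), the omission check reduces to a small filter; the sketch below is ours and assumes such traces have already been extracted by the taint analysis.

# Illustrative sketch only; traces and known_sinks are assumed inputs.
def suspected_missing_sinks(traces, known_sinks):
    """traces: list of (value_id, events), where events is the ordered list
    of operations applied to a value read from an untrusted source,
    e.g. ["read", "bounds_check", "array_index"]."""
    suspects = []
    for value_id, events in traces:
        sanitized = "bounds_check" in events
        reaches_sink = any(e in known_sinks for e in events)
        # Sanitized but never reaching a known sink: either gratuitous
        # sanitization or, more likely, a sink missing from the spec.
        if sanitized and not reaches_sink:
            suspects.append(value_id)
    return suspects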
The analysis found roughly 10 common uses of sanitized
inputs in Linux 2.4.6 [3]. Nine of these uses were harmless;
however one was a security hole. Unexpectedly, this was not
from a specification omission. Rather, the sink was known,
but our inter-procedural analysis had been overly simplistic,
causing us to miss the path to it. The fact that redundancy
flags errors both in the specification and in the tool itself was
a nice surprise.
8. RELATED WORK
Two existing types of analysis have focused on redundant
operations: optimizing compilers and "anomaly detection"
work.
Optimizing compilers commonly do dead-code elimination
and common-subexpression elimination [1] which remove
redundancies to improve performance. One contribution
of our work is the realization that these analyses have
been silently finding errors since their invention. While our
analyses closely mirror these algorithms at their core, they
have several refinements. First, we operate on a higher-level
representation than a typical optimizer since a large number
of redundant operations are introduced due to the compilation
of source constructs to the intermediate representation.
Second, in order to preserve semantics of the program, compiler
optimizers have to be conservative in their analyses. In
contrast, since our goal is to find possible errors, it is perfectly
reasonable to flag a redundancy even if we are only 95% sure
about its legitimacy. In fact, we report all suspicious cases
and sort in order of a confidence heuristic (e.g. distance between
redundancies, etc) in the report. Finally, the analysis
tradeoffs we make differ. For example, we use a path-sensitive
algorithm to suppress false paths; most optimizers omit path-
sensitive analyses because their time complexity outweighs
their benefit.
The second type of redundant analysis includes checking
tools. Fosdick and Osterweil first applied data flow "anomaly
detection" techniques in the context of software reliability. In
their DAVE system [18], they used a depth first search algorithm
to detect a fixed set of variable def-use type of anomalies
such as uninitialized read, double definition, etc. Static
approaches like this [13, 14, 18] are often path-insensitive,
and therefore could report bogus errors from infeasible paths.
Dynamic techniques [17, 7] instrument the program and
detect anomalies that arise during execution. However, dynamic
approaches are weaker in that they can only find errors
on executed paths. Further, the run-time overhead and diffi-
culty in instrumenting operating systems limit the usage of
this approach.
The dynamic system most similar to our work is Huang [17].
He discusses a checker similar to the assignment checker in
Section 3. It tracks the lifetime of variables using a simple
global analysis. At each assignment it follows the variable
forward on all paths. It gives an error if the variable is read
on no path before either exiting scope or being assigned another
value. However, no experimental results were given.
Further, because it is dynamic it seems predisposed to report
large numbers of false positives in the case where a value is
not read on the current executed path but would be used on
some other (non-executed) path.
Other tools such as lint, LCLint [12], or the GNU C
compiler's -Wall option warn about unused variables and
routines and ignored return values. While these have long
found redundancies in real code (we use them ourselves
daily), these redundancies have been commonly viewed as
harmless stylistic issues. Evidence for this perception is that
to the best of our knowledge the many recent error checking
projects focus solely on hard errors such as null pointer
dereferences or failed lock releases, rather than redundancy
checking [4, 10, 5, 9, 2, 19, 21]. A main contribution of
this paper is showing that redundancies signal real errors and
experimentally measuring how well this holds.
9. CONCLUSION
This paper explored the hypothesis that redundancies,
like type errors, flag higher-level correctness mistakes. We
evaluated the approach using four checkers which we applied
to the Linux operating system. These simple analyses found
many surprising (to us) error types. Further, they correlated
well with known hard errors: redundancies seemed to flag
confused or poor programmers who were prone to other error
types. These indicators could be used to decide where to audit
a system.
10.
ACKNOWLEDGEMENT
We would like to thank the anonymous reviewers for their
helpful comments. This work was supported by NFS award
0086160 and by DARPA contract MDA904-98-C-A933.
11.
--R
Detecting races in relay ladder logic programs.
Using programmer-written compiler extensions to catch security holes
Automatically validating temporal safety properties of interfaces.
A static analyzer for finding dynamic programming errors.
Statistical Inference.
An empirical study of operating systems errors.
Enforcing high-level protocols in low-level software
An overview of the extended static checking system.
Variations on the common subexpression problem.
A tool for using specifications to check code.
An algebra for data flow anomaly detection.
Data flow analysis in software reliability.
A system and language for building system-specific
Detection of data flow anomaly through program instrumentation.
A dynamic data race detector for multithreaded programming.
A first step towards automated detection of buffer overrun vulnerabilities.
--TR
Compilers: principles, techniques, and tools
AIDAMYAMPERSANDmdash;a dynamic data flow anomaly detection system for Pascal programs
LCLint
Variations on the Common Subexpression Problem
A static analyzer for finding dynamic programming errors
Data Flow Analysis in Software Reliability
Enforcing high-level protocols in low-level software
Automatically validating temporal safety properties of interfaces
An empirical study of operating systems errors
A system and language for building system-specific, static analyses
Detecting Races in Relay Ladder Logic Programs
An algebra for data flow anomaly detection
Using Programmer-Written Compiler Extensions to Catch Security Holes
--CTR
David Hovemeyer , Jaime Spacco , William Pugh, Evaluating and tuning a static analysis to find null pointer bugs, ACM SIGSOFT Software Engineering Notes, v.31 n.1, January 2006
Zhang , Neelam Gupta , Rajiv Gupta, Locating faults through automated predicate switching, Proceeding of the 28th international conference on Software engineering, May 20-28, 2006, Shanghai, China
Zhang , Neelam Gupta , Rajiv Gupta, Pruning dynamic slices with confidence, ACM SIGPLAN Notices, v.41 n.6, June 2006
Neelam Gupta , Haifeng He , Xiangyu Zhang , Rajiv Gupta, Locating faulty code using failure-inducing chops, Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering, November 07-11, 2005, Long Beach, CA, USA
Yuriy Brun , Michael D. Ernst, Finding Latent Code Errors via Machine Learning over Program Executions, Proceedings of the 26th International Conference on Software Engineering, p.480-490, May 23-28, 2004
David Hovemeyer , William Pugh, Finding bugs is easy, ACM SIGPLAN Notices, v.39 n.12, December 2004
Sudarshan M. Srinivasan , Srikanth Kandula , Christopher R. Andrews , Yuanyuan Zhou, Flashback: a lightweight extension for rollback and deterministic replay for software debugging, Proceedings of the USENIX Annual Technical Conference 2004 on USENIX Annual Technical Conference, p.3-3, June 27-July 02, 2004, Boston, MA | error detection;extensible compilation |
587073 | Model exploration with temporal logic query checking. | A temporal logic query is a temporal logic formula with placeholders. Given a model, a solution to a query is a set of assignments of propositional formulas to placeholders, such that replacing the placeholders with any of these assignments results in a temporal logic formula that holds in the model. Query checking, first introduced by William Chan [2], is an automated technique for finding solutions to temporal logic queries. It allows discovery of the temporal properties of the system and as such may be a useful tool for model exploration and reverse engineering. This paper describes an implementation of a temporal logic query checker. It then suggests some applications of this tool, ranging from invariant computation to test case generation, and illustrates them using a Cruise Control System. | INTRODUCTION
Temporal logic model-checking [7] allows us to decide whether
a property stated in a temporal logic such as CTL [6] holds in a
state-based model. Typical temporal logic formulas are AG(p ∧ q)
("both p and q hold in every state of the system"), or AG(p ⇒ AF q)
("every state in which p holds is always followed by a state
in which q holds").
Model checking was originally proposed as a verification tech-
however, it is also extremely valuable for model understand-
Figure
1: A simple state machine.
ing [2]. We rarely start the study of a design with a complete specification
available. Instead, we begin with some key properties,
and attempt to use the model-checker to validate them. When the
properties do not hold, and they seldom do, what is at fault: the
properties or the design? Typically, both need to be modified: the
design if a bug was found, and the properties if they were too strong
or incorrectly expressed. Thus, this process is aimed not only at
building the correct model of the system, but also at discovering
which properties it should have.
Query checking was proposed by Chan [2] to speed up design understanding
by discovering properties not known a priori. A temporal
logic query is an expression containing a symbol ?x , referred
to as the placeholder, which may be replaced by any propositional
formula 1 to yield a CTL formula, e.g. AG ?x, AG(?x ⇒ p). The
solution to a query is a set of strongest propositional formulas that
make the query true. For example, consider evaluating the query
AG?x , i.e., "what are the invariants of the system", on a model in
Figure
1. (p ∨ q) ∧ r is the strongest invariant: all others, e.g.,
p ∨ q or r, are implied by it. Thus, it is the solution to this query. In
turn, if we are interested in finding the strongest property that holds
in all states following those in which -q holds, we form the query
for the model in Figure 1, evaluates to
In solving queries, we usually want to restrict the atomic propositions
that are present in the answer. For example, we may not
care about the value of r in the invariant computed for the model in
Figure
1. We phrase our question as AG(?x{p, q}), thus explicitly
restricting the propositions of interest to p and q. The answer we
get is p ∨ q. Given a fixed set of n atomic propositions of interest,
the query checking problem defined above can be solved by taking
propositional formulas over this set, substituting them for
the placeholder, verifying the resulting temporal logic formula, tab-
propositional formula is a formula built only from atomic
propositions and boolean operators.
ulating the results and then returning the strongest solution(s) [1].
The number n of propositions of interest provides a way to control
the complexity of query checking in practice, both in terms of
computation, and in terms of understanding the resulting answer.
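For intuition, the naive procedure can be written directly for a tiny explicit-state model; the sketch below is our own Python (not the implementation described later in the paper) and solves AG ?x over two propositions by enumerating every Boolean function of p and q, where the example state labels are ours and purely illustrative.

from itertools import product

# A toy Kripke structure: AG f holds iff f holds in every state (all states
# are assumed reachable in this toy example).
states = {
    "s0": {"p": True,  "q": True},
    "s1": {"p": False, "q": True},
    "s2": {"p": True,  "q": False},
}

def solve_AG_query(states, props=("p", "q")):
    """Return the strongest propositional formulas f over `props` such that
    AG f holds, each represented as the set of its satisfying assignments."""
    rows = list(product([False, True], repeat=len(props)))
    solutions = []
    for bits in product([False, True], repeat=len(rows)):   # every Boolean function
        truth = {row: b for row, b in zip(rows, bits)}
        holds = all(truth[tuple(s[p] for p in props)] for s in states.values())
        if holds:
            solutions.append(frozenset(r for r, b in truth.items() if b))
    # Strongest answers = minimal sets of satisfying assignments.
    return [s for s in solutions if not any(t < s for t in solutions)]

print(solve_AG_query(states))   # with the toy labels above, the single minimal
                                # solution is {(T,T), (F,T), (T,F)}, i.e. p or q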
In his paper [2], Chan proposed a number of applications for
query checking, mostly aimed at giving more feedback to the user
during model checking, by providing a partial explanation when
the property holds and diagnostic information when it does not. For
example, instead of checking the invariant AG(a ∨ b), we can evaluate
the query AG?x{a, b}. Suppose the answer is a # b, that is,
holds in the model. We can therefore inform the user
of a stronger property and explain that a # b is invariant because
a # b is. We can also use query checking to gather diagnostic
information when a does not hold. For example, if
is false, that is, a request is not always followed
by an acknowledgment, we can ask what can guarantee an
acknowledgment: AG(?x ⇒ AF ack).
In his work, Chan concentrated on valid queries, that is, queries
that always yield a single strongest solution. All of the queries
we mentioned so far are valid. Chan showed that in general it is
expensive to determine whether a CTL query is valid. Instead, he
identified a syntactic class of CTL queries such that every formula
in the class is valid. He also implemented a query-checker for this
class of queries on top of the symbolic CTL model-checker SMV.
Queries may also have multiple strongest solutions. Suppose
we are interested in exploring successors of the initial state of the
model in Figure 1. Forming a query EX?x , i.e., "what holds in
any of the next states, starting from the initial state s0 ?", we get
two incomparable solutions: p ∧ q ∧ r and ¬p ∧ q ∧ r. Thus, we know that state s0 has at least two successors, with different values of p in them. Furthermore, in all of the successors, q ∧ r
holds. Clearly, such queries might be useful for model exploration.
Checking queries with multiple solutions can be done using the
method of Bruns and Godefroid [1]. They extend Chan's work by
showing that the query checking problem with a single placeholder
can be solved using alternating automata [17]. In fact, the queries
can be specified in temporal logics other than CTL. However, so
far this solution remains purely theoretical: no implementation of
such a query-checker is available.
The range of applications of query checking can be expanded
further if we do not limit queries to just one placeholder. In partic-
ular, queries with two placeholders allow us to ask questions about
pairs of states, e.g., dependencies between a current and a next state
in the system.
This paper describes three major contributions:
1. We enrich the language of queries to include several place-
holders. The previous methods only dealt with one place-
holder, referring to it as "?". In our framework, placeholders
need to be named, e.g., "?x", "?y", "?pre".
2. We describe the temporal logic query checking tool which
we built on top of our existing multi-valued model-checker
χChek [4, 5]. The implementation not only allows one to compute solutions to the placeholders but also gives witnesses: paths through the model that explain why solutions are as
computed.
3. We outline a few uses of the temporal logic query checking,
both in domains not requiring witness computation and in
those that depend on it.
The rest of this paper is organized as follows: in Section 2, we
give the necessary background for this paper, briefly summarizing
model-checking, query checking, and multi-valued CTL
model-checking. Section 3 defines the reduction of the query checking
problem to multi-valued model-checking. Section 4 describes
some possible uses of query checking for model exploration. We
illustrate these on an example of the Cruise Control System [16].
This section can be read without the material in Sections 2 and 3.
We conclude in Section 5 with the summary of the paper and the
directions for future work. Proofs of theorems that appear in this
paper can be found in [11].
2. BACKGROUND
In this section, we briefly outline CTL model-checking, describe
the query checking problem, and give an overview of multi-valued
model-checking.
2.1 Model-Checking
CTL model-checking [6] is an automatic technique for verifying
properties expressed in a propositional branching-time temporal
logic called Computation Tree Logic (CTL). The system is represented
by a Kripke structure, and properties are evaluated on a
tree of infinite computations produced by unrolling it. A Kripke
structure is a tuple (S, s0 , A, R, I) where:
. S is a finite set of states, and s0 is the initial state;
. A is a set of propositional variables;
. R ⊆ S × S is the (total) transition relation;
. I : S → 2^A is a labeling function that maps each state onto
the set of propositional variables which hold in it.
CTL is defined as follows:
1. Constants true and false are CTL formulas.
2. Every atomic proposition a # A is a CTL formula.
3. If φ and ψ are CTL formulas, then so are ¬φ, φ ∧ ψ, φ ∨ ψ, EXφ, AXφ, EFφ, AFφ, EGφ, AGφ, E[φ U ψ], and A[φ U ψ].
The boolean operators ¬, ∧ and ∨ have the usual meaning. The temporal operators have two components: A and E quantify over paths, while X , F , U and G indicate "next state", "eventually (future)", "until", and "always (globally)", respectively. Hence, AXφ is true in state s if φ is true in the next state on all paths from s. E[φ U ψ] is true in state s if there exists a path from s on which φ is true at every step until ψ becomes true.
The formal semantics of CTL is given in Figure 2. In this figure,
we use a function [[φ]](s) ∈ {true, false} to indicate the result of checking a formula φ in state s. We further define the set of successors of a state s as R(s) = {t ∈ S | (s, t) ∈ R}. The more familiar notation for indicating that a property φ holds in a state s of a Kripke structure K (K, s |= φ) can be defined as follows: K, s |= φ if and only if [[φ]](s) = true.
We also say that a formula φ holds in a Kripke structure K if φ holds in K's initial state. In Figure 2, we used conjunction and disjunction in place of the more familiar universal and existential quantification.

Figure 2: Formal semantics of CTL operators.

Figure 3: Lattices for the set of propositional formulas over {p}: (a) the lattice PF({p}) ordered by implication; (b) the corresponding upset lattice; (c) the same as (b), represented using minimal elements.

The semantics of EX and AX can be alternatively
expressed as
[[EXφ]](s) = ∨_{t ∈ R(s)} [[φ]](t) and [[AXφ]](s) = ∧_{t ∈ R(s)} [[φ]](t).
The semantics of EG and EU is as follows:
[[EGφ]](s) = [[φ]](s) ∧ ∨_{t ∈ R(s)} [[EGφ]](t),
[[E[φ U ψ]]](s) = [[ψ]](s) ∨ ([[φ]](s) ∧ ∨_{t ∈ R(s)} [[E[φ U ψ]]](t)).
Finally, the remaining CTL operators are given below:
AXφ ≡ ¬EX¬φ
AFφ ≡ A[true U φ]
EFφ ≡ E[true U φ]
AGφ ≡ ¬EF¬φ
For example, consider the model in Figure 1, where s0 is the
initial state and A = {p, q, r}. Properties AG(p ∨ q) and AF q are
true in this model, whereas AXp is not.
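A minimal explicit-state CTL evaluator in Python makes these semantics concrete. This is a sketch of ours, not the χChek implementation; the structure and transitions are assumptions modelled on Figure 1, and formulas are encoded as nested tuples.

    # Assumed states, labelling and (total) transition relation from Figure 1.
    S = {"s0", "s1", "s2"}
    I = {"s0": {"p", "r"}, "s1": {"q", "r"}, "s2": {"p", "q", "r"}}
    R = {"s0": {"s1", "s2"}, "s1": {"s1"}, "s2": {"s2"}}
    s0 = "s0"

    def sat(phi):
        """Set of states satisfying a CTL formula given as nested tuples."""
        op = phi[0]
        if op == "true":  return set(S)
        if op == "ap":    return {s for s in S if phi[1] in I[s]}
        if op == "not":   return S - sat(phi[1])
        if op == "and":   return sat(phi[1]) & sat(phi[2])
        if op == "or":    return sat(phi[1]) | sat(phi[2])
        if op == "EX":
            t = sat(phi[1]);  return {s for s in S if R[s] & t}
        if op == "AX":
            t = sat(phi[1]);  return {s for s in S if R[s] <= t}
        if op == "EU":                      # least fixpoint
            a, b = sat(phi[1]), sat(phi[2])
            z = set(b)
            while True:
                new = z | {s for s in a if R[s] & z}
                if new == z: return z
                z = new
        if op == "EG":                      # greatest fixpoint
            a = sat(phi[1])
            z = set(a)
            while True:
                new = {s for s in z if R[s] & z}
                if new == z: return z
                z = new
        raise ValueError(op)

    def AG(p): return ("not", ("EU", ("true",), ("not", p)))   # AG = not EF not
    def AF(p): return ("not", ("EG", ("not", p)))              # AF = not EG not

    print(s0 in sat(AG(("or", ("ap", "p"), ("ap", "q")))))   # AG(p or q): True
    print(s0 in sat(AF(("ap", "q"))))                        # AF q: True
    print(s0 in sat(("AX", ("ap", "p"))))                    # AX p: False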
2.2 Query Checking Fundamentals
This exposition follows the presentation in [1].
A lattice is a partial order (L, ⊑), where every finite subset B ⊆ L has a least upper bound (called "join" and written as ⊔B) and a greatest lower bound (called "meet" and written ⊓B). ⊤ and ⊥ are the maximal and the minimal elements of a lattice, respectively. A lattice is distributive if join distributes over meet and vice versa.
Given a set of atomic propositions P , let PF (P ) be the set of propositional formulas over P . For example, PF ({p}) = {true, false, p, ¬p}. This set forms a distributive lattice under implication (see Figure 3(a)). Since p → true, p is under true in this lattice. Meets and joins in this lattice correspond to classical ∧ and ∨ operations, respectively.
A propositional formula φ is a solution to a query in state s if substituting φ for the placeholder in the query yields a formula that holds in the Kripke structure K in state s. A query is positive [2] if whenever φ1 is a solution and φ1 → φ2 , then φ2 is also a solution. For example, if p ∧ q is a solution, then so is p.
In other words, the set of all solutions to a positive query is a set
of propositional formulas that is upward closed with respect to the
implication ordering: if some propositional formula is a solution,
so is every weaker formula. Alternatively, a query is positive if
and only if all placeholders in it occur under an even number of
negations [2]. Further, for positive queries it makes sense to look
just for the strongest solutions because all other solutions can be
inferred from them. These notions are formalized below.
Given the ordered set (L, ⊑) and a subset B ⊆ L, we define the upward closure ↑B = {ℓ ∈ L | ∃b ∈ B . b ⊑ ℓ}. For example, for the ordered set (PF ({p}), →) shown in Figure 3(a), ↑{p} = {p, true}. A subset B of L is an upset if ↑B = B; for example, {p, ¬p} is not an upset whereas {p, ¬p, true} is.
We write U(L, ⊑) for the set of all upsets of L. The distributive lattice formed by the elements of U(PF ({p}), →) ordered by set inclusion is shown in Figure 3(b). We refer to this as an upset lattice.
Finally, we note that each upset can be uniquely represented by the set of its minimal elements. For example, {p, ¬p} is sufficient to represent the set {p, ¬p, true}, and {false} is sufficient to represent {p, ¬p, true, false}. Figure 3(c) shows the lattice of Figure 3(b) using minimal elements. In the
remainder of this paper, when we say that X is a solution to a query,
we mean that X ⊆ PF (P ), X is the set of minimal solutions to this query, and ↑X is the set of all of its solutions.
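The following small Python sketch shows one way to represent elements of the upset lattice by their minimal elements, with formulas over P encoded as sets of satisfying valuations. The encoding and helper names are our own illustrative assumptions, not part of the tool described in the paper.

    from itertools import product, combinations

    P = ("p",)
    vals = list(product([False, True], repeat=len(P)))
    # A propositional formula over P is encoded by its set of satisfying
    # valuations; f implies g exactly when sat(f) is a subset of sat(g).
    PF = [frozenset(c) for r in range(len(vals) + 1) for c in combinations(vals, r)]

    def up(B):
        """Upward closure of a set B of formulas in the implication order."""
        return {g for g in PF if any(f <= g for f in B)}

    def minimal(U):
        """Minimal elements, which suffice to represent the upset U."""
        return {f for f in U if not any(g < f for g in U)}

    p     = frozenset({(True,)})     # the formula p
    not_p = frozenset({(False,)})    # the formula not p
    true  = frozenset(vals)          # the formula true

    print(up({p}) == {p, true})      # up{p} = {p, true}
    print(minimal(up({p, not_p})))   # {p, not p} represents {p, not p, true}
    # Meet and join of upsets are intersection and union of the represented sets:
    meet = up({p}) & up({not_p})     # = {true}
    join = up({p}) | up({not_p})
    print(minimal(meet), minimal(join))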
2.3 Multi-Valued Model-Checking
Multi-Valued CTL model-checking [5] is a generalization of the
model-checking problem. Let B refer to the classical algebra with
values true and false ("classical logic"). Instead of using B , multi-valued model-checking is defined over any De Morgan algebra (L, ⊑, ¬), where (L, ⊑) is a finite distributive lattice and ¬ is any operation that preserves involution (¬¬ℓ = ℓ) and De Morgan laws. Conjunction and disjunction are defined using meet and join operations of (L, ⊑), respectively. In this algebra, we get ¬(a ⊓ b) = ¬a ⊔ ¬b and ¬(a ⊔ b) = ¬a ⊓ ¬b, but not necessarily the law of non-contradiction (a ⊓ ¬a = ⊥) or excluded middle (a ⊔ ¬a = ⊤).
Properties are specified in a multiple-valued extension of CTL called χCTL. χCTL has the same syntax as CTL, except that any ℓ ∈ L is also a χCTL formula. However, its semantics is somewhat different. We modify the labeling function of the Kripke structure to be I : S × A → L, so that for each atomic proposition a ∈ A, I(s, a) = ℓ means that the variable a has value ℓ in state s. Thus, [[a]](s) = I(s, a). The other operations are defined as their CTL counterparts (see
Figure
2), where ∧ and ∨ are interpreted as lattice ⊓ and ⊔, respectively. In fact, in the rest of this paper we often write "∧" and "∨" in place of "⊓" and "⊔", even if the algebra we use is different from B .
The complexity of model-checking a χCTL formula φ on a Kripke structure over an algebra (L, ⊑, ¬) is O(|S| · h · |φ|), where h is the height of the lattice (L, ⊑), provided that meets, joins, and quantification operations take constant time [5].
We have implemented a symbolic model-checker χChek [4] that receives a Kripke structure K and a χCTL formula φ and returns an element of the algebra corresponding to the value of φ in K. The exact interpretation of this value depends on the domain. For example, if the algebra is B , χChek returns true if φ holds in K and false if it does not: this is the classical model-checking. For more
information about multi-valued model-checking, please consult [4,
5].
3. TEMPORAL LOGIC QUERY-CHECKER
In this section, we describe the computation of query-checking
solutions in detail. We express the query-checking problem for one
placeholder in terms of the multiple-valued model-checking frame-work
described in Section 2. We then discuss how to deal with
queries containing multiple placeholders, and finally what to do in
the case of non-positive queries.
Recall that multi-valued model-checking is an extension of model-checking
to an arbitrary De Morgan algebra. In our case, the algebra
is given by the upset lattice of propositional formulas (see Figure
3). In order to reduce query-checking to multi-valued model-
checking, we need to translate a given query into a χCTL formula such that the element of the upset lattice corresponding to the value of the χCTL formula is the set of all solutions to the query.
3.1 Intuition
Consider two simple examples of temporal logic queries, using
the model in Figure 1. First, we ask ?x , meaning "what propositional formulas are true in a state". Solving this query with respect to s0 , we notice that the formula p ∧ ¬q ∧ r holds in s0 , and all other formulas that hold in s0 are implied by it. Thus, it is the strongest solution, and the set of all solutions is given by ↑{p ∧ ¬q ∧ r}.
Next, we look at AX?x , which means "what single formula
holds in all successor states". To solve this query with respect to
the state s0 , we must first identify all successors of s0 , solve the
query ?x for each of them, and finally take the intersection of the
results. The solution to ?x in the two successors of s0 : s1 and s2 ,
is ↑{¬p ∧ q ∧ r} and ↑{p ∧ q ∧ r}, respectively. The intersection of these solutions is ↑{q ∧ r}; thus, q ∧ r holds in all successors of s0 , and any other solution to AX?x is implied by it. Notice that, since intersection of solution sets corresponds to meet in the upset lattice, this computation precisely matches the χCTL semantics of AX from Figure 2. Based on this observation, we show how query-checking is reduced to multi-valued model-checking.
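The intuition can be reproduced in a few lines of Python: the solution to ?x in a state is the upset of that state's valuation, and AX?x at s0 is the intersection (lattice meet) of the solutions at its successors. This is our own sketch; the successor valuations are assumptions based on Figure 1.

    from itertools import product, combinations

    props = ("p", "q", "r")
    vals = list(product([False, True], repeat=len(props)))
    PF = [frozenset(c) for r in range(len(vals) + 1) for c in combinations(vals, r)]

    def up(B):
        return frozenset(g for g in PF if any(f <= g for f in B))

    def minimal(U):
        return {f for f in U if not any(g < f for g in U)}

    # Valuations of the successors of s0 (assumed): s1 = (not p, q, r); s2 = (p, q, r).
    s1 = frozenset({(False, True, True)})
    s2 = frozenset({(True, True, True)})

    sol_s1 = up({s1})          # all formulas true in s1
    sol_s2 = up({s2})          # all formulas true in s2
    sol_AX = sol_s1 & sol_s2   # meet in the upset lattice
    print(minimal(sol_AX))     # single minimal element: the formula q and r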
3.2 Reduction to χCTL
The translation is defined top-down. All operators, constants,
and propositional variables are translated one-to-one: φ ∨ ψ is mapped to the disjunction of the translation of φ with the translation of ψ; any variable p is mapped to itself; true is mapped to the constant symbol ⊤; and so forth. For the model in Figure 1,
We now show how to translate the placeholder ?x . Consider the
computation of ?x{p, q} in state s0 of the model in Figure 1. The
solution ↑{p ∧ ¬q} to the query is obtained by examining the values of p and q in s0 . We formalize this using the case-statement
case [[p ∧ q]](s) : ↑{p ∧ q}
     [[p ∧ ¬q]](s) : ↑{p ∧ ¬q}
     [[¬p ∧ q]](s) : ↑{¬p ∧ q}
     [[¬p ∧ ¬q]](s) : ↑{¬p ∧ ¬q}
and since all of the cases are disjoint, this yields the (syntactic) disjunction of the cases. So, to evaluate ?x{p, q}, we use the fact that [[p ∧ ¬q]](s0 ) = true to get ↑{p ∧ ¬q}.
To illustrate this idea further, consider a more complex query ?x{p, q} ∧ EX?x{p, q}, evaluated in state s0 of Figure 1. The set of all solutions to the subquery EX?x{p, q} is ↑{p ∧ q, ¬p ∧ q}, and the set of all solutions to ?x is ↑{p ∧ ¬q}. To get the set of all solutions to our query, we intersect the results to get ↑{p, p ≠ q}.
THEOREM 1. Let T be the above translation from CTL queries into χCTL. Then, for any CTL query φ and state s in the model, [[T (φ)]](s) contains exactly the solutions to φ in state s.
As stated in Section 2.3, multiple-valued model-checking has
time complexity O(|S| · h · |φ|), where h is the height of the
lattice. Thus, to estimate the complexity of query-checking, we
need to compute the height of the upset lattice used in the reduction
of query-checking to multi-valued model-checking. If the place-holder
is restricted to n atomic propositions {p1 , . . . , pn}, then, since there are 2^(2^n) propositional formulas in n variables, the height of the upset lattice U(PF ({p1 , . . . , pn}), →) is 2^(2^n) + 1. The complexity of query-checking is therefore O(|S| · 2^(2^n) · |φ|).
Recall that in traditional model-checking, the height of the model-checking lattice is 2, and the complexity is O(|S| · |φ|). Thus, solving a query is, in the worst case, 2^(2^n) times slower than checking
an equivalent model-checking property. However, we find that
in practice the running time of the query-checker is much better
than the worst case (see Section 4.4).
3.3 More Complex Queries
More than one placeholder can be required to express some properties
of interest. In this section, we give an extension of query-
checking which allows for multiple placeholders, where each may
depend on a different set of propositional variables. Furthermore,
we describe how to solve non-positive queries.
3.3.1 Multiple Placeholders
If a query contains multiple placeholders, it is transformed into
a CTL formula by substituting a propositional formula for each
placeholder. Thus, given a query on n placeholders, with L i being
the lattice of propositional formulas for the ith placeholder,
the set of all possible substitutions is given by the cross product L = L1 × · · · × Ln . We can lift the implication order, pointwise, to the elements of L, thus forming a lattice. For two placeholders, (a1 , a2 ) ⊑ (b1 , b2 ) if and only if a1 → b1 and a2 → b2 .
Once again, the set of all solutions to a query is an element of the
upset lattice over L.
We now show how to translate queries with multiple placeholders
to χCTL. Consider the query ?x ∧ (EX?x ∧ AX?y ). Each potential solution to this query is an element of L = L1 × L2 . To
solve this query, we first find solutions to each subformula and then
combine the results. Let B # L1 be the set of all solutions to ?x
when viewed as a query with just one placeholder. However, since
we have two placeholders, each solution, including the intermediate
ones, must be a subset of L. The query ?x does not depend on
the placeholder ?y ; therefore, any substitution for ?y (i.e., any element
of L2 ) is acceptable. This results in B × L2 . Similarly,
the set of all solutions for EX?x is C × L2 , and for AX?y it is L1 × D, for some C ⊆ L1 and D ⊆ L2 . Combining these results,
we get (B × L2 ) ∩ (C × L2 ) ∩ (L1 × D) = (B ∩ C) × D. Thus, the set of solutions to this query is {(x, y) | x ∈ B ∩ C and y ∈ D}.
For example, let us compute the solution to the query ?x{p, q} ∧ EX?y{p, q} in state s0 of the model in Figure 1. We know from the example in Section 3.2 that the solution to ?x{p, q}, viewed as a query with just one placeholder, is ↑{p ∧ ¬q}. Further, recall that the set of solutions to EX?y{p, q} is ↑{p ∧ q, ¬p ∧ q}. Each solution to ?x{p, q} ∧ EX?y{p, q} is an element of the lattice L = PF ({p, q}) × PF ({p, q}), ordered pointwise. Putting these together yields two minimal solutions: (p ∧ ¬q, p ∧ q) and (p ∧ ¬q, ¬p ∧ q).
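A sketch of how the minimal pairs can be extracted once the per-placeholder solution sets are known, using the pointwise implication order on the product lattice. The encodings and the lifting below are our own illustration of the construction, not the tool's internals.

    from itertools import product, combinations

    props = ("p", "q")
    vals = list(product([False, True], repeat=len(props)))
    PF = [frozenset(c) for r in range(len(vals) + 1) for c in combinations(vals, r)]

    def up(B):
        return frozenset(g for g in PF if any(f <= g for f in B))

    # Solutions to the subqueries at s0, from the single-placeholder examples.
    sol_x = up({frozenset({(True, False)})})                              # up{p & not q}
    sol_y = up({frozenset({(True, True)}), frozenset({(False, True)})})   # up{p&q, not p&q}

    # Lift to the product lattice: ?x does not constrain ?y and vice versa.
    solutions = {(x, y) for x in sol_x for y in PF} & \
                {(x, y) for x in PF for y in sol_y}

    def leq(a, b):     # pointwise implication order on pairs
        return a[0] <= b[0] and a[1] <= b[1]

    minimal = [s for s in solutions
               if not any(leq(t, s) and t != s for t in solutions)]
    print(len(minimal))   # 2 incomparable minimal pairs, as in the example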
3.3.2 Negation
Every query can be converted to its negation-normal form -
the representation where negation is applied only to atomic propositions
and placeholders. A query is positive if and only if all
of its placeholders are non-negated when the query is put into its
negation-normal form. Furthermore, we say that an occurrence of
a placeholder in a query is negative if it appears negated in the
negation-normal form of the query, and positive otherwise.
In this section, we describe how non-positive queries can be
solved by transforming them into positive form, query-checking,
and post-processing the solution. Note that the solution-set for
negated placeholders depends on the maximal solutions 2 , rather than
the minimal ones. We consider two separate cases: (1) when all
occurrences of a placeholder are either negative or positive, and
(2) when a given placeholder appears in both negative and positive
forms.
In case (1), the query is converted to the positive form by removing
all of negations that appear in front of a placeholder, and
then solved as described in the previous section. Finally, if the ith
placeholder occurred in a negative position, the ith formula in the
solution is negated to yield the correct result.
THEOREM 2. If (φ1 , . . . , φn ) is a solution to a query Q, and query Q′ is identical to Q except that the ith placeholder appears negated, then (φ1 , . . . , ¬φi , . . . , φn ) is a solution to Q′ .
2 An element φ of a solution-set X ⊆ PF (P ) is maximal if, for all ψ ∈ X, φ → ψ implies ψ = φ.
We sketch the proof by giving an example for a query with a single
placeholder. Consider the query AG¬?x . We obtain a solution-set to AG?x and choose one formula φ from it. Since AGφ holds in the model, so does AG¬(¬φ); therefore, ¬φ is in the solution-set for AG¬?x .
In case (2), if a placeholder ?x appears both in the positive and
the negative forms, we first replace each positive occurrence with
?x+ and each negative occurrence with ?x- , and then solve the
resulting query. Finally, the set of all solutions to ?x is given by the
intersection of solutions to ?x+ and ?x- .
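A tiny sketch of the case (1) post-processing: solve the positive version of the query and then negate the component of each solution that corresponds to a negated placeholder. The encoding of formulas as sets of satisfying valuations and the example solution tuple are assumptions of ours.

    from itertools import product

    props = ("p", "q")
    ALL = frozenset(product([False, True], repeat=len(props)))

    def neg(f):
        """Negation of a formula encoded as its set of satisfying valuations."""
        return ALL - f

    # Suppose (phi1, phi2) solves the positive query Q, and the second
    # placeholder is negated in the original query Q'.
    phi1 = frozenset({(True, False)})                   # p and not q
    phi2 = frozenset({(True, True), (True, False)})     # p

    solution_for_Q_prime = (phi1, neg(phi2))            # (p and not q, not p)
    print(solution_for_Q_prime)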
The complexity of using multi-valued model-checking for query-
checking with multiple placeholders remains determined by the
height of the lattice. We show the result for two placeholders:
?x{p1 , . . . , pn} and ?y{q1 , . . . , qm}. There are 2^(2^n) possible solutions to ?x , and 2^(2^m) to ?y ; therefore, there are 2^(2^n + 2^m) possible simultaneous solutions. The height of the powerset lattice of solutions is 2^(2^n + 2^m) + 1, and so the complexity is O(|S| · 2^(2^n + 2^m) · |φ|). This result generalizes easily to any number of placehold-
ers. As with the case of a single placeholder, we find that in practice
query checking is more feasible than its worst case (see Section
4.4).
4. APPLICATIONS AND EXPERIENCE
In this section, we show two different techniques for model exploration
using temporal logic queries. The technique presented
in Section 4.2 uses only the solutions to the query-checking problem
and is essentially an extension of the methodology proposed
by Chan in [2]. The technique presented in Section 4.3 is completely
new and is based on the fact that in addition to computing
the solution to a query, our model-checker can also provide a witness
explaining it. The examples in this section are based on our
own experience in exploring an SCR specification of a Cruise Control
System [16], described in Section 4.1. Please refer to Table 3
for the running time of various queries used in this section.
4.1 The Cruise Control System (CCS)
The Cruise Control System (CCS) is responsible for keeping
an automobile traveling at a certain speed. The driver accelerates
to the desired speed and then presses a button on the steering
wheel to activate the cruise control. The
cruise control then maintains the car's speed, remaining active until
one of the following events occurs: (1) the driver presses the
brake pedal (Brake); (2) the driver presses the gas pedal (Accel);
(3) the driver turns the cruise control off; (4) the engine stops running (Running); (5) the driver turns the ignition
off (Ignition); (6) the car's speed becomes uncontrollable
(Toofast). If any of the first three events listed above occur, the
driver can re-activate the cruise control system at the previously set
speed by pressing a "resume" button
The SCR method [12] is used to specify event-driven systems.
System outputs, called controlled variables, are computed in terms
of inputs from the environment, called monitored variables, and the
system state. To represent this state, SCR uses the notion of mode-
classes - sets of states, called modes, that partition the monitored
environment's state space. The system changes its state as the result
of events - changes in the monitored variables. For example,
an event @T(a) WHEN b, formalized as ¬a ∧ b ∧ a′ , indicates that a becomes true in the next state while b is true in the current state. We prime variables to refer to their values in the next state.
We use the simplified version of CCS [3] which has 10 monitored
variables and 4 controlled variables. One of these, Throttle, is
described below. The system also has one modeclass CC, described
in
Table
1. Each row of the mode transition table specifies an event
that activates a transition from the mode on the left to the mode
on the right. The system starts in mode Off if Ignition is false,
and transitions to mode Inactive when Ignition becomes true.
Table
2 shows the event table for Throttle. Throttle assumes
the value tAccel, indicating that the throttle is in the accelerating
position, when (1) the speed becomes too slow while the system
is in mode Cruise, as shown in the first row of Table 2; or (2)
the system returns to the mode Cruise, indicated by @T(Inmode),
and the speed has been determined to be too slow (see the second
row of the table).
4.2 Applications of Queries without Witnesses
Below we show how temporal logic queries can replace several
questions to a CTL model-checker to help express reachability
properties and discover system invariants and transition guards.
Reachability analysis. A common task during model exploration
is finding which states are reachable. For example, in CCS we
may want to know whether all of the modes of the modeclass CC
are reachable. This can be easily solved by checking a series of
EF properties. For example, EF Cruise holds if and only if the mode Cruise is reachable. However, queries provide
a more concise representation: the solution to the single query
EF ?x{CC} corresponds to all of the reachable modes, i.e., those
values p i for which EF p i holds. In our example, the
solutions include all of the modes; thus, all modes are reachable.
Similarly, finding all possible values of Throttle when the system
is in mode Cruise is accomplished by the query EF (Cruise ∧ ?x{Throttle}). More complex analysis can be done
by combining EF queries with other CTL operators. For an exam-
ple, see the queries in rows 6 and 7 of Table 3.
Discovering invariants. Invariants concisely summarize complex
relationships between different entities in the model, and are
often useful in identifying errors. To discover all invariants, we
simply need to solve the query AG?x , with the placeholder restricted
to all atomic propositions in the model. Unfortunately, in
all but the most trivial models, the solution to this query is too big to
be used effectively [2]. However, it is easy to restrict our attention
to different parts of the model. For example, the set of invariants of
the mode Inactive, with respect to the variables Ignition and Running, is the solution to the query AG(Inactive → ?x{Ignition, Running}), which evaluates to Ignition. Using multiple
placeholders, we can find all invariants of each mode using a single
query. For example, each solution to the query
AG(?x{CC} → ?y{Ignition, Running})
corresponds to invariants of each individual mode. In our example, the solution (Cruise, Ignition ∧ Running) indicates that Ignition and Running remain true while the system is in the mode Cruise. Moreover, this query can also help the analyst
determine which invariants are shared between modes. From another solution, involving Ignition, we see that Ignition not only stays true throughout the mode
Inactive, but it is also invariant in the modes Cruise and Override.
The mode invariants for CCS that we were able to discover using
query-checking are equivalent to the invariants discovered by the
algorithms in [14, 15]. Notice that the strength of the invariants
obtained through query-checking depends on the variables to which
the placeholder is restricted. The strongest invariant is obtained by
restricting the placeholder to all of the monitored variables of the
system.
Guard discovery. Finally, we illustrate how queries can be used
to discover guards [18]. Suppose we are given a Kripke structure
translation of an SCR model, i.e., events that enable transitions
between modes are not explicitly represented. We can reverse-engineer
the mode transition table by discovering guards in the
Kripke structure.
Formally, a guard is defined as the weakest propositional formula γ over current (pre-) and next (post-) states such that the invariant φ ∧ γ → ψ′ holds, where γ is the guard, and φ and ψ are the
pre- and post-conditions, respectively. Notice that since we define
the guard to be the weakest solution, the guard does not directly
correspond to an SCR event. Later we show that SCR events can
be discovered by combining guards with mode invariants. Since
guards are defined over pre- and post-states, two placeholders are
required to express the query used to discover them, making the
guard the weakest solution to the query
AG((φ ∧ ?pre ) → AX(?post → ψ))
We now show how this query is used to discover an event that
causes CCS to switch from the mode Cruise to Inactive. In this
case, we let φ = Cruise and ψ = Inactive;
furthermore, for practical reasons we restrict the ?pre and ?post
placeholders to the set {Toofast, Running, Brake}. After solving
this query, we obtain two solutions:
?pre = Toofast ∨ ¬Running, ?post = true
?pre = true, ?post = Toofast ∨ ¬Running
Before analyzing the result, we obtain the invariant for the mode
Cruise:
¬Toofast ∧ Running
using the invariant discovery technique presented in Section 4.2.
We notice that the first solution violates the invariant, making the
antecedent of the implication false; however, from the second solu-
tion, it follows that AG(Cruise → AX((¬Running ∨ Toofast) → Inactive)) holds, yielding the guard Toofast′ ∨ ¬Running′ . Finally,
combining this with the invariant for the mode Cruise, we determine
that the mode transition is guarded by two independent events,
@F(Running) and @T(Toofast), just as indicated in the mode
transition table.
4.3 Applications of Queries with Witnesses
Given an existential CTL formula that holds in the model, a
model-checker can produce a trace through the model showing why
the formula holds. This trace is called a witness to the formula.
Similarly, given an existential query, the query-checker can produce
a set of traces, which we also refer to as a witness, showing
why each of the minimal solutions satisfies the query.
For example, consider the query EX?x{p} for the model in Figure
1. It has two minimal solutions: p and ¬p; therefore, the witness consists of two traces, one for each solution, as
shown in Figure 4. The trace s0 , s2 corresponds to the solution p,
and the trace s0 , s1 to the solution ¬p.
All of the traces comprising a witness to a query start from the
Old Mode Event New Mode
Off @T(Ignition) Inactive
Inactive @F(Ignition) Off
Ignition AND Running AND
Cruise @F(Ignition) Off
Inactive
@F(Running) WHEN Ignition Inactive
Ignition AND Running AND
Ignition AND Running AND
Initial Mode: Off WHEN NOT Ignition
Table 1: Mode transition table for mode class CC of the cruise control system.
Modes Events
Cruise @T(Inmode) @T(Inmode) @T(Inmode) @F(Inmode)
Throttle
Table 2: Event table for the controlled variable Throttle.
Figure 4: A witness for EX?x{p}, in the model in Figure 1.
initial state, so they can be represented as a tree. In addition, our
query-checker labels each branch in the tree with the set of solutions
that are illustrated by that branch. In the example in Figure
4, the left branch is labeled with p, and the right with ¬p. The benefit of treating a witness as a tree rather than a set of independent traces is that it becomes possible to prefer certain witnesses over others. For example, we may prefer a witness with
the longest common prefix, which usually results in minimizing the
total number of traces comprising the witness. We now show how
witnesses can be used in several software engineering activities.
Guided simulation. The easiest way to explore a model is to
simulate its behavior by providing inputs and observing the system
behavior through outputs. However, it is almost impossible to use
simulation to guide the exploration towards a given objective. Any
wrong choice in the inputs in the beginning of the simulation can
result in the system evolving into an "uninteresting" behavior. For
example, let our objective be the exploration of how CCS evolves
into its different modes. In this case, we have to guess which set
of inputs results in the system evolving into the mode Cruise, and
then which set of inputs yields transition into the mode Inactive,
etc. Thus, the process of exploring the system using a simulation is
usually slow and error prone.
An interesting alternative to a simple simulation is guided simu-
lation. In a guided simulation setting, the user provides a set of ob-
jectives, and then only needs to choose between the different paths
through the system in cases where the objective cannot be met by a
single path. Moreover, each choice is given together with the set of
objectives it satisfies.
Query-checking is a natural framework for implementing guided
simulations. The objective is given by a query, and the witness
serves as the basis for the simulation. For example, suppose we
want to devise a set of simulations to illustrate how CCS evolves
into all of its modes. We formalize our objective by the query
EF ?x{CC} and explore the witness. Moreover, we indicate that
we prefer a witness with the largest common prefix, which results in a single trace through the system going through modes Off,
Inactive, Cruise, and finally Override. This trace corresponds
to a simulation given by the sequence of events: @T(Ignition),
bOff). Since our objective was achieved by a single trace, the simulation
was generated completely automatically, requiring no user
input.
Test case generation. Although the primary goal of model-checking
is to verify a model against temporal properties, it has recently been
used to generate test cases [10, 9, 13, 18]. Most of the proposed
techniques are based on the fact that in addition to computing expected
outputs, a model-checker can produce witnesses (or counter-
examples) which can be used to construct test sequences. The properties
that are used to force the model-checker to generate desired
test sequences are called trap properties [10].
Gargantini and Heitmeyer [10] proposed a method that uses an
SCR specification of a system to identify trap properties satisfying
a form of branch coverage testing criterion. Their technique
uses both mode transition and condition tables to generate test se-
# | Query | Time | Explanation
1 | EF ?x{CC} | | what are all reachable modes
2 | | | what values of Throttle are reachable in mode Cruise
3 | AG EF ?x{CC} | 0.787s | what modes are globally reachable
4 | EF EG ?x{CC} | 0.720s | what modes have self-loops
5 | AG(Inactive → ?x{Ignition, Running}) | 0.267s | what are the invariants, over Ignition and Running, of mode Inactive
6 | AG(?x{CC} → ?y{Ignition, Running}) | 0.942s | find all mode invariants, restricted to Ignition and Running
7 | | | what modes can follow Off
8 | EF (?old{CC} ∧ EX ?new{CC}) | 1.204s | what pairs of modes can follow each other
9 | EF (Cruise ∧ ?x{Toofast, Running} ∧ EX(?y{Toofast, Running} ∧ Inactive)) | | how do values of Toofast and Running change as the system goes between modes Cruise and Inactive

Table 3: Summary of queries used in Section 4.
Table 4: Comparison between model-checking and query-checking.
quences. Here, we illustrate how our technique is applicable on
mode transition tables; other tables can be analyzed similarly.
The method in [10] assures a form of branch coverage by satisfying
the following two rules: (1) for each mode in the mode transition
table, test each event at least once; (2) for each mode, test
every case when the mode does not change (no-change) at least
once. For example, two test sequences need to be generated for
mode Off, one testing the event @T(Ignition), and the other testing
the no-change case. These can be obtained using two trap properties, one for each case.
Alternatively, the two test sequences can be obtained from a witness
to a single query EF (Off ∧ EX ?x{CC}). Sim-
ilarly, the set of test sequences that cover the full mode transition
table is obtained from the witness of the query EF (?old{CC} ∧ EX ?new{CC}).
Since all of the traces comprising a witness to a query are generated
at the same time, it is possible to minimize the number of different
test sequences that guarantee the full coverage of the mode
transition table. Moreover, whenever an EF query has more than
one minimal solution, the query-checker can produce each minimal
solution, and, if necessary, a witness for it, as soon as the new solution
is found. Therefore, even in the cases when the complexity
of the model-checking precludes obtaining the results for all of the
trap properties, the query-checker can produce a solution to some
of the trap properties as soon as possible.
Although the method suggested above generates a set of test sequences
that cover every change (and every no-change) in the mode
of the system, it does not necessarily cover all of the events. For ex-
ample, the change from the mode Cruise to the mode Inactive is
guarded by two independent events, @T(Toofast) and @F(Running);
however, the witness for our trap query contains only a single trace
corresponding to this change, covering just one of the events. We
can first identify the events not covered by the test sequences from
the witness to the query, and then use the method from [10] to generate
additional test sequences for the events not yet covered.
Alternatively, if we know the variables comprising the event for a
given mode transition, we can remedy the above problem by using
an additional query. In our current example, the events causing the
change from the mode Cruise to the mode Inactive depend on
variables Toofast and Running. To cover these events, we form
the query EF (Cruise ∧ ?x{Toofast, Running} ∧ EX(Inactive ∧ ?y{Toofast, Running})).
The witness to this query corresponds to two test sequences: one
testing the change on the event @T(Toofast) and the other - on
the event @F(Running).
4.4 Running Time
The theoretical complexity of query-checking given in Section 3.2 seems to indicate that query-checking is not feasible for all but very small
models. However, our experience (see running times of queries
used in this section in Table 3) seems to indicate otherwise. We
address this issue in more detail below.
Theoretically, solving a query with a single placeholder restricted
to two atomic propositions is slower than model-checking an equivalent
formula by a factor of 2^(2^2) = 16. To analyze the difference
between the theoretical prediction and the actual running
times, we verified several CTL formulas and related queries and
summarized the results in Table 4. CTL formulas are checked using χChek, parametrized for B . The query in the second row is
restricted to two atomic propositions required to encode the enumerated
type for CC. However, the running time of this query is
only double that of the corresponding CTL formula (row 1). A
similar picture can be seen by comparing the CTL formula in row 3
with the query in row 4 of the table. Finally, increasing the number
of variables that a placeholder depends on, should slow down the
analysis significantly. Yet, comparing queries in rows 4 and 5 of
the table, we see that the observed slowdown is only three-fold.
Although we have not conducted a comprehensive set of experiments
to evaluate the running time of our query-checker, we believe
that our preliminary findings indicate that query-checking is in fact
feasible in practice.
5. CONCLUSION
In this section, we summarize the paper and suggest avenues for future work.
5.1 Summary and Discussion
In this paper, we have extended the temporal logic query-checking
of Chan [2] and Bruns and Godefroid [1] to allow for queries with
multiple placeholders, and shown the applicability of this extension
on a concrete example. We have implemented a query-checker for
multiple placeholders using the multi-valued model-checker χChek.
Our implementation allows us not only to generate solutions to temporal
logic queries, but also to provide witnesses explaining the
answers. Further, our preliminary results show that it is feasible
to analyze non-trivial systems using query-checking. Please send
e-mail to xchek@cs.toronto.edu for a copy of the tool.
Building a query-checker on top of our model-checker has two
further advantages. First, we allow query-checking over systems
that have fairness assumptions. For example, we can compute invariants
of CCS under the assumption that Brake is pressed infinitely
often. As far as we know, Chan's system does not implement
fairness. Further, the presentation in this paper used CTL
as our temporal logic. However, since the underlying framework
of χChek is based on the µ-calculus, we can easily extend our query-checker to handle µ-calculus queries.
We are also convinced that temporal logic query-checking has
many applications in addition to the ones we explored here. In par-
ticular, we see immediate applications in a variety of test case generation
domains and hope that practical query-checking can have
the same impact as model-checking for model exploration and analysis.
Finally, note that query-checking is a special case of multi-valued
model-checking. Multi-valued model-checking was originally designed
for reasoning about models containing inconsistencies and
disagreements [8]. Thus, the reasoning was done over algebras derived
from the classical logic, where the ⊑ relation in the algebra (L, ⊑, ¬) indicates "more true than or equal to". Query-checking is
done over lattices, and algebras over them, that have a different
interpretation - sets of propositional formulas. We believe that
there might be yet other useful interpretations of algebras, making
χChek the ideal tool for reasoning over them.
5.2 Future Work
In this paper, we have only considered queries where the place-holders
are restricted to sets of atomic propositions. However,
through our experience we found that it is useful to place further
restrictions on the placeholders. For example, we may want to restrict
the solutions to the query EF ?x{p, q, r} only to those cases
in which p and q are not true simultaneously. From the computational
point of view, our framework supports it; however, expressing
such queries requires an extension to the query language and
some methodology to guide its use. We are currently exploring a
query language inspired by SQL, in which the above query would
be expressed as follows:
EF ?x where ?x in PF ({p, q, r}) and not (?x ∧ p ∧ q)
In the future, we plan to conduct further case studies to better assess
the feasibility of query-checking on realistic systems. We also
believe that the existence of an effective methodology is crucial to
the success of query-checking in practice. We will use our case
studies to guide us in the development of such a methodology.
6.
ACKNOWLEDGEMENTS
We gratefully acknowledge the financial support provided by
NSERC and CITO. We also thank members of the UofT Formal
Methods reading group for their suggestions for improving the presentation
of this work.
7.
--R
"Temporal Logic Query-Checking"
"Temporal-Logic Queries"
Towards Usability of Formal Methods"
Multi-Valued Model-Checker"
"Model-Checking Over Multi-Valued Logics"
"Automatic Verification of Finite-State Concurrent Systems Using Temporal Logic Specifications"
Model Checking.
"A Framework for Multi-Valued Reasoning over Inconsistent Viewpoints"
"Test Generation for Intelligent Networks Using Model Checking"
"Using Model Checking to Generate Tests from Requirements Specifications"
"Temporal Logic Query Checking through Multi-Valued Model Checking"
"Automated Consistency Checking of Requirements Specifications"
" Automatic Test Generation from Statecharts Using Model Checking"
"Automatic Generation of State Invariants from Requirements Specifications"
"An Algorithm for Strengthening State Invariants Generated from Requirements Specifications"
"Example NRL/SCR Software Requirements for an Automobile Cruise Control and Monitoring System"
"An Automata-Theoretic Approach to Branching-Time Model Checking"
"Coverage Based Test-Case Generation using Model Checkers"
--TR
Automatic verification of finite-state concurrent systems using temporal logic specifications
Automated consistency checking of requirements specifications
Automatic generation of state invariants from requirements specifications
Using model checking to generate tests from requirements specifications
Model checking
An automata-theoretic approach to branching-time model checking
A framework for multi-valued reasoning over inconsistent viewpoints
Test Generation for Intelligent Networks Using Model Checking
Model-Checking over Multi-valued Logics
An Algorithm for Strengthening State Invariants Generated from Requirements Specifications
Queries
chi-Chek
Temporal Logic Query Checking
--CTR
Steve Easterbrook , Marsha Chechik , Benet Devereux , Arie Gurfinkel , Albert Lai , Victor Petrovykh , Anya Tafliovich , Christopher Thompson-Walsh, Chek: a model checker for multi-valued reasoning, Proceedings of the 25th International Conference on Software Engineering, May 03-10, 2003, Portland, Oregon
Dezhuang Zhang , Rance Cleaveland, Efficient temporal-logic query checking for presburger systems, Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering, November 07-11, 2005, Long Beach, CA, USA | CTL;query-checking;multi-valued model-checking |
587177 | Large-Scale Computation of Pseudospectra Using ARPACK and Eigs. | ARPACK and its {\sc Matlab} counterpart, {\tt eigs}, are software packages that calculate some eigenvalues of a large nonsymmetric matrix by Arnoldi iteration with implicit restarts. We show that at a small additional cost, which diminishes relatively as the matrix dimension increases, good estimates of pseudospectra in addition to eigenvalues can be obtained as a by-product. Thus in large-scale eigenvalue calculations it is feasible to obtain routinely not just eigenvalue approximations, but also information as to whether or not the eigenvalues are likely to be physically significant. Examples are presented for matrices with dimension up to 200,000. | Introduction
. The matrices in many eigenvalue problems are too large to
allow direct computation of their full spectra, and two of the iterative tools available
for computing a part of the spectrum are ARPACK [10, 11] and its Matlab counter-
part, eigs. 1 For nonsymmetric matrices, the mathematical basis of these packages is
the Arnoldi iteration with implicit restarting [11, 23], which works by compressing the
matrix to an "interesting" Hessenberg matrix, one which contains information about
the eigenvalues and eigenvectors of interest. For general information on large-scale
nonsymmetric matrix eigenvalue iterations, see [2, 21, 29, 31].
For some matrices, nonnormality (nonorthogonality of the eigenvectors) may be
physically important [30]. In extreme cases, nonnormality combined with the practical
limits of machine precision can lead to difficulties in accurately finding the eigenvalues.
Perhaps the more common and more important situation is when the nonnormality is
pronounced enough to limit the physical significance of eigenvalues for applications,
without rendering them uncomputable. In applications, users need to know if they
are in such a situation. The prevailing practice in large-scale eigenvalue calculations
is that users get no information of this kind.
There is a familiar tool available for learning more about the cases in which
nonnormality may be important: pseudospectra. Figure 1 shows some of the pseudospectra
of the "Grcar matrix" of dimension 400 [6], the exact spectrum, and converged
eigenvalue estimates (Ritz values) returned by a run of ARPACK (seeking
the eigenvalues of largest modulus) for this matrix. In the original article [23] that
described the algorithmic basis of ARPACK, Sorensen presented some similar plots
of di#culties encountered with the Grcar matrix. This is an extreme example where
the nonnormality is so pronounced that even with the convergence tolerance set to
its lowest possible value, machine epsilon, the eigenvalue estimates are far from the
true spectrum. From the Ritz values alone, one might not realize that anything was
# Received by the editors June 6, 2000; accepted for publication (in revised form) January 15,
2001; published electronically July 10, 2001.
http://www.siam.org/journals/sisc/23-2/37322.html
Oxford University Computing Laboratory, Parks Road, Oxford OX1 3QD, UK (TGW@comlab.ox.ac.uk, LNT@comlab.ox.ac.uk).
In Matlab version 5, eigs was an M-file adapted from the Fortran ARPACK codes. Starting
with Matlab version 6, the eigs command calls the Fortran ARPACK routines themselves.
Fig. 1. The ε-pseudospectra of the Grcar matrix (dimension 400), with the actual eigenvalues shown as solid stars and the converged eigenvalue estimates (for the eigenvalues of largest modulus) returned by ARPACK shown as open circles. The ARPACK estimates lie between the 10^-16 and 10^-17 pseudospectral contours.
amiss. Once the pseudospectra are plotted too, it is obvious.
Computing the pseudospectra of a matrix of dimension N is traditionally an expensive
task, requiring an O(N^3) singular value decomposition at each point in a grid. For a reasonably fine mesh, this leads to an O(N^3) algorithm with the constant of the
order of thousands. Recent developments in algorithms for computing pseudospectra
have improved the constant [28], and the asymptotic complexity for large sparse matrices
[3, 12], but these are still fairly costly techniques. In this paper we show that
for large matrices, we can cheaply compute an approximation to the pseudospectra
in a region near the interesting eigenvalues. Our method uses the upper Hessenberg
matrix constructed after successive iterations of the implicitly restarted Arnoldi al-
gorithm, as implemented in ARPACK. Among other things, this means that after
performing an eigenvalue computation with ARPACK or eigs, a user can quickly
obtain a graphical check to indicate whether the Ritz values returned are likely to
be physically meaningful. Our vision is that every ARPACK or eigs user ought to
plot pseudospectra estimates routinely after their eigenvalue computations as a cheap
"sanity check."
Some ideas related to ours have appeared in earlier papers by Nachtigal, Reichel,
and Trefethen [17], Ruhe [19], Sorensen [23], Toh [24], and Toh and Trefethen [25].
For example, Sorensen plotted level curves of filter polynomials and observed that
they sometimes approximated pseudospectra, and Ruhe showed that pseudospectra
could be approximated by a rational Krylov method. What is new here is the explicit
development of a method for approximating pseudospectra based on ARPACK. Of
course, one could also consider the use of di#erent low-dimensional compressions of
a matrix problem such as those constructed by the Jacobi-Davidson algorithm [22].
Preliminary experiments, not reported here, show that this kind of Jacobi-Davidson
approximation of pseudospectra can also be effective.
We start by giving an overview of pseudospectra calculations and the implicitly
restarted Arnoldi iteration, followed by the practical details of our implementation
along with a discussion of some of the problems we have had to deal with. After this
we give some examples of the technique in practice. We also mention our Matlab
graphical user interface (GUI), which automates the computation of pseudospectra
after the eigenvalues of a matrix have been computed by eigs in Matlab.
The computations presented in this paper were all performed using eigs in Matlab
version 6 (which essentially is ARPACK) and our GUI rather than the Fortran
ARPACK, although our initial experiments were done with the Fortran code.
2. Pseudospectra. There are several equivalent ways of defining Λ_ε(A), the ε-pseudospectrum of a matrix A. The two most important (see, e.g., [27]) are perhaps
(2.1)  Λ_ε(A) = {z ∈ C : ‖(zI − A)^(−1)‖ ≥ ε^(−1)}
and
(2.2)  Λ_ε(A) = {z ∈ C : z ∈ Λ(A + E) for some E with ‖E‖ ≤ ε}.
When the norms are taken to be the 2-norm, the definitions are equivalent to
(2.3)  Λ_ε(A) = {z ∈ C : σ_min(zI − A) ≤ ε},
where σ_min(·) denotes minimum singular value. This provides the basis of many
algorithms for computing pseudospectra. The most familiar technique is to use a grid
over the region of the complex plane of interest and calculate the minimum singular
value of zI - A at each grid point z. These values can then be passed to a contour
plotter to draw the level curves. For the rest of this paper, we consider the 2-norm;
other norms are discussed in [8, 28].
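The grid algorithm implied by (2.3) is straightforward to write down; the following NumPy sketch is ours rather than the authors' code, and the test matrix, grid bounds, and resolution are placeholder choices.

    import numpy as np

    def grcar(n):
        """One common definition of the Grcar matrix: ones on the diagonal and
        first three superdiagonals, -1 on the subdiagonal."""
        return (np.triu(np.ones((n, n))) - np.triu(np.ones((n, n)), 4)
                + np.diag(-np.ones(n - 1), -1))

    A = grcar(50)
    N = A.shape[0]
    x = np.linspace(-1.0, 3.0, 80)
    y = np.linspace(-3.0, 3.0, 80)
    sigmin = np.zeros((len(y), len(x)))

    I = np.eye(N)
    for i, yi in enumerate(y):
        for j, xj in enumerate(x):
            z = xj + 1j * yi
            # smallest singular value of zI - A at this grid point
            sigmin[i, j] = np.linalg.svd(z * I - A, compute_uv=False)[-1]

    # Contours of sigmin at levels eps are the boundaries of the eps-pseudospectra;
    # e.g. with matplotlib: plt.contour(x, y, np.log10(sigmin), levels=range(-16, 0))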
The reason for the cost of computation of pseudospectra is now clear: the amount
of work needed to compute the minimum singular value of a general matrix of dimension
N is O(N^3) (see, e.g., [5]). However, several techniques have been developed to
reduce this cost [28]. Here are two important ones.
(I) Project the matrix onto a lower dimensional invariant subspace via, e.g., a
partial Schur factorization (Reddy, Schmid, and Henningson [18]). This works well
when the interesting portion of the complex plane excludes a large fraction of the
eigenvalues of the matrix. In this case, the effect of the omitted eigenvalues on the
interesting portion of the pseudospectra is typically small, especially if the undesired
eigenvalues are well conditioned. Projection can significantly reduce the size of the
matrix whose pseudospectra we need to compute, making the singular value computation
dramatically faster. In general, the additional cost of projecting the matrix is
much less than the cost of repeatedly computing the smallest singular value for the
shifted original matrix.
(II) Perform a single matrix reduction to Hessenberg or triangular form before doing any singular value decompositions (Lui [12]), allowing the singular value calculations to be done using a more efficient algorithm.
One way of combining these ideas is to do a complete Schur decomposition of the matrix, A = UTU*, and then to reorder the diagonal entries of the triangular matrix to leave the "wanted" eigenvalues at the top. The reordered factorization can then be truncated leaving the required partial Schur factorization. We can now find the singular values of the matrices shifted for each grid point z using either the original matrix A or the triangular matrix T, since
(2.4)  σ_min(zI − A) = σ_min(zI − T).
This allows us to work solely with the triangular matrix T once the O(N^3) factorization
has been completed. The minimum singular value of zI - T can be determined
in O(N^2) operations 2 using the fact that σ_min(zI − T) is the square root of the smallest eigenvalue of (zI − T)*(zI − T). This can be calculated using either inverse iteration or inverse Lanczos iteration, which require solutions to systems of equations with the matrix (zI − T)*(zI − T). These systems can be solved in two stages, each using triangular system solves.
By combining these techniques with more subtle refinements we have an algorithm
which is much more efficient than the straightforward method. It is suggested in [28]
that the speedup obtained is typically a factor of about N/4, assuming the cost of
the Schur decomposition is negligible compared with that of the rest of the algorithm.
This will be the case on a fine grid for a relatively small matrix (N of the order
of a thousand or less), but for larger matrices the Schur decomposition is relatively
expensive, and it destroys any sparsity structure.
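A sketch of the projected/triangularized approach described above, using SciPy: a Schur factorization is computed once, and at each grid point the smallest singular value of zI − T is estimated by inverse power iteration on (zI − T)*(zI − T), with each solve done in two triangular stages. This is our own illustration (it uses plain inverse iteration rather than the inverse Lanczos variant mentioned in the text), and the function names, iteration count, and test matrix are arbitrary choices.

    import numpy as np
    from scipy.linalg import schur, solve_triangular

    def sigmin_via_schur(A, zs, its=20):
        """Approximate sigma_min(zI - A) for each z in zs."""
        T, _ = schur(A, output="complex")    # one O(N^3) factorization, reused for all z
        N = T.shape[0]
        out = []
        for z in zs:
            T1 = z * np.eye(N) - T           # still upper triangular
            v = np.random.randn(N) + 1j * np.random.randn(N)
            v /= np.linalg.norm(v)
            for _ in range(its):
                # Solve (zI-T)*(zI-T) w = v in two triangular stages.
                u = solve_triangular(T1.conj().T, v, lower=True)
                w = solve_triangular(T1, u, lower=False)
                s = np.linalg.norm(w)
                v = w / s
            # s converges to 1/lambda_min((zI-T)*(zI-T)) = 1/sigma_min^2
            out.append(1.0 / np.sqrt(s))
        return np.array(out)

    # Example usage on a random test matrix and a few sample points.
    A = np.random.randn(200, 200) / np.sqrt(200)
    print(sigmin_via_schur(A, [0.5 + 0.5j, 1.5]))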
3. Arnoldi iteration. The Arnoldi iteration for a matrix A of dimension N
works by projecting A onto successive Krylov subspaces K j of dimension j = 1, 2, 3, . . . , generated by a starting vector v 1 [1, 20]. It builds an orthonormal basis for the Krylov subspace by the Arnoldi factorization
(3.1)  AV j = V j H j + f j e j *,
where H j is an upper Hessenberg matrix of dimension j, the columns of V j form an
orthonormal basis for the Krylov subspace, and f j is orthogonal to the columns of V j .
The residual term f j e j * can be incorporated into the first term V j H j by augmenting the matrix H j with an extra row, all zeros except for the last entry, which is ‖f j ‖, and including the next basis vector, v j+1 , in the matrix V j . The augmented matrix, which we denote H̃ j , is now rectangular, of size (j + 1) × j.
The matrix H̃ j , being rectangular, does not have any eigenvalues, but we can define its pseudospectra in terms of singular values by (2.3). (With the definition occasionally used that λ is an eigenvalue of a rectangular matrix A if A − λI is rank-deficient, where I is the rectangular matrix of appropriate dimension with 1 on the diagonal and 0 elsewhere, a rectangular matrix will in general have no eigenvalues, but it will have nonempty ε-pseudospectra for large enough ε.) It can
then be shown [14, 25] that the pseudospectra of successive matrices H̃ j are nested.
Theorem 3.1. Let A be an N × N matrix which is unitarily similar to a Hessenberg matrix H, and let H̃ j denote the upper left (j + 1) × j section of H (in particular, H̃ j could be created using a restarted Arnoldi iteration). Then for any ε ≥ 0,
(3.2)  Λ_ε(H̃ 1 ) ⊆ Λ_ε(H̃ 2 ) ⊆ · · · ⊆ Λ_ε(H̃ N−1 ) ⊆ Λ_ε(A).
Thus as the iteration progresses, the pseudospectra of the rectangular Hessenberg
matrices better approximate those of A, which gives some justification for the
approximation Λ_ε(H̃ j ) ≈ Λ_ε(A). Unfortunately, this is only the case for the rectangular matrices H̃ j .
There do not appear to be any satisfactory theorems to justify
a similar approximation for the square matrices H j , and of course for ε sufficiently small the ε-pseudospectra of H j must be disjoint from those of A, since they will be
small sets surrounding the eigenvalues of H j , which are in general distinct from those
of A. This is not the case for the rectangular matrix as there will not be points in
the complex plane with infinite resolvent norm unless a Ritz value exactly matches an
2 The same approach can also be used for Hessenberg matrices [12], but for those we do not have the advantage of projection.
eigenvalue of the original matrix. That is, Λ_ε(H̃ j ) is typically empty for sufficiently small ε.
Although the property (3.2) is encouraging, theorems guaranteeing rapid convergence
in all cases cannot be expected. The quality of the approximate pseudospectra
depends on the information in the Krylov subspace, which in turn depends on the
starting vector v 1 . Any guarantee of rapid convergence could at best be probabilistic.
3.1. Implicitly restarted Arnoldi. In its basic form, the Arnoldi process may
require close to N iterations before the subspace contains good information about the
eigenvalues of interest. However, the information contained within the Hessenberg
matrix is very dependent on the starting vector v 1 . If v 1 has relatively small
components of the eigenvectors corresponding to the eigenvalues which are not re-
quired, convergence may be quicker and the subspace size need not grow large. To
avoid the size of the subspace growing too large, practical implementations of the
Arnoldi iteration restart when the subspace size j reaches a certain threshold [20]. A
new starting vector v̄ 1 is chosen which has smaller components in the directions of
eigenvectors corresponding to unwanted eigenvalues, and the process is begun again.
Implicit restarting [11, 23] is based upon the same idea, except that the subspace is only implicitly compressed to a single starting vector v̄ 1 . What is explicitly formed
is an Arnoldi factorization of size k based on this new starting vector, where k is the
number of desired eigenvalues, and this Arnoldi factorization is obtained by carrying
out implicitly shifted steps of the QR algorithm, with shifts possibly corresponding
to unwanted eigenvalue estimates. The computation now proceeds in an
accordion-like manner, expanding the subspace to its maximum size p, then compressing
to a smaller subspace. 3 This is computationally more efficient than simple
restarting because the subspace is already of size k when the iteration restarts, and
in addition, the process is numerically stable due to the use of orthogonal transformations
in performing the restarting. This technique has made the Arnoldi iteration
competitive for finding exterior eigenvalues of a wide range of nonsymmetric matrices.
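For reference, a basic (unrestarted) Arnoldi factorization in Python that returns the rectangular (j+1) × j Hessenberg matrix discussed in the next subsection. This is our own sketch and omits the implicit restarting that ARPACK performs; the test matrix and subspace size are arbitrary.

    import numpy as np

    def arnoldi(A, v1, j):
        """Return V (N x (j+1)) and the rectangular Hessenberg H ((j+1) x j)
        satisfying A @ V[:, :j] = V @ H in exact arithmetic."""
        N = A.shape[0]
        V = np.zeros((N, j + 1), dtype=complex)
        H = np.zeros((j + 1, j), dtype=complex)
        V[:, 0] = v1 / np.linalg.norm(v1)
        for k in range(j):
            w = A @ V[:, k]
            for i in range(k + 1):                  # modified Gram-Schmidt
                H[i, k] = np.vdot(V[:, i], w)
                w -= H[i, k] * V[:, i]
            H[k + 1, k] = np.linalg.norm(w)
            if H[k + 1, k] == 0:
                break
            V[:, k + 1] = w / H[k + 1, k]
        return V, H

    # Example: Ritz values from the square part of H approximate exterior eigenvalues.
    N = 400
    A = (np.triu(np.ones((N, N))) - np.triu(np.ones((N, N)), 4)
         + np.diag(-np.ones(N - 1), -1))            # Grcar-like test matrix
    V, H = arnoldi(A, np.random.randn(N), 40)
    ritz = np.linalg.eigvals(H[:-1, :])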
3.2. Arnoldi for pseudospectra. In a 1996 paper, Toh and Trefethen [25]
demonstrated that the Hessenberg matrix created during the Arnoldi process can
sometimes provide a good approximation to the pseudospectra of the original matrix.
They provided results for both the square matrix H_j and the rectangular matrix H̃_j.
We choose to build our method around the rectangular Hessenberg matrices H̃_j, even
though this makes the pseudospectral computation harder than if we worked with the
square matrix. The advantage of this is that we retain the properties of Theorem 3.1,
and the following in particular:
For every ε ≥ 0, the approximate ε-pseudospectrum generated by our ARPACK
algorithm is a subset of the ε-pseudospectrum of the original matrix:
    Λ_ε(H̃_j) ⊆ Λ_ε(A).    (3.3)
This is completely different from the familiar situation with Ritz values, which
are, after all, the points in the 0-pseudospectrum of a square Hessenberg matrix. Ritz
values need not be contained in the true spectrum. Simply by adding one more row
to consider a rectangular matrix, we have obtained a guaranteed inclusion for every ε.
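The inclusion follows from the Arnoldi relation A V_j = V_{j+1} H̃_j: for any unit vector u,
(zI − A)V_j u = V_{j+1}(zĨ − H̃_j)u, and since V_{j+1} has orthonormal columns this gives
σ_min(zI − A) ≤ σ_min(zĨ − H̃_j), which is precisely the statement that the rectangular
pseudospectra are contained in those of A. A minimal Matlab check of this inequality, with
a placeholder matrix, subspace size, and test point, might look as follows:

    % Build a j-step Arnoldi factorization A*V(:,1:j) = V(:,1:j+1)*Hf and compare
    % sigma_min of the shifted rectangular Hessenberg matrix with sigma_min(zI - A).
    rng(0);  N = 60;  j = 12;
    A = randn(N) / sqrt(N);
    V = zeros(N, j+1);  Hf = zeros(j+1, j);
    V(:,1) = randn(N, 1);  V(:,1) = V(:,1) / norm(V(:,1));
    for m = 1:j
        w = A * V(:,m);
        for i = 1:m                        % modified Gram-Schmidt orthogonalization
            Hf(i,m) = V(:,i)' * w;
            w = w - Hf(i,m) * V(:,i);
        end
        Hf(m+1,m) = norm(w);
        V(:,m+1) = w / Hf(m+1,m);
    end
    z = 0.3 + 0.2i;                        % arbitrary test point in the complex plane
    s_rect = min(svd(z*eye(j+1,j) - Hf));  % rectangular Hessenberg matrix
    s_full = min(svd(z*eye(N) - A));       % original matrix
    fprintf('rectangular: %.3e   original: %.3e\n', s_rect, s_full)
    % The first value is never smaller than the second, whatever z is chosen.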
The results presented by Toh and Trefethen focus on trying to approximate the
full pseudospectra of the matrix (i.e., around the entire spectrum) and they do not use
³ Our use of the variable p follows eigs. In ARPACK and in Sorensen's original paper [23],
this would be p + k.
any kind of restarting in their implementation of the Arnoldi iteration. While this is
a useful theoretical idea, we think it is of limited practical value for computing highly
accurate pseudospectra, since good approximations are generally obtained only
for large subspace dimensions.
Our work is more local; we want a good approximation to the pseudospectra
in the region around the eigenvalues requested from ARPACK or eigs. By taking
advantage of ARPACK's implicit restarting, we keep the size of the subspace (and
hence of H̃_j)
reasonably small, allowing us to compute (local) approximations to the
pseudospectra more rapidly, extending the idea of [25] to a fully practical technique
(for a more restricted problem).
4. Implementation. In deciding to use the rectangular Hessenberg matrix H̃_j,
we have made the post-ARPACK phase of our algorithm more difficult. While the
simple algorithm of computing the minimum singular value of zI -A at each point has
approximately the same cost for a rectangular matrix as a square one, the speedup
techniques described in section 2 are difficult to translate into the rectangular case.
The first idea, projection to a lower dimensional invariant subspace, does not
make sense for rectangular matrices because there is no such thing as an invariant
subspace. The second idea, preliminary triangularization using a Schur decompo-
sition, also does not extend to rectangular matrices, for although it is possible to
triangularize the rectangular matrix while keeping the same singular values (by performing
a QR factorization, for example), doing so destroys the vital property of
shift-invariance (see (2.4)).
However, our particular problem has a feature we have not yet considered: the
matrix is Hessenberg. One way to exploit this property is to perform a QR factorization
of the matrix obtained after shifting for each grid point. The upper triangular
matrix R has the same singular values as the shifted matrix, and they are also unchanged
on removing the last row of zeros, which makes the matrix square. We can
now use the inverse Lanczos iteration as in section 2 to find its smallest singular value.
The QR factorization can be done with an O(N²) algorithm (see, e.g., [5, p. 228]),
which makes the overall cost O(N²). Unfortunately, the additional cost of the QR
factorization at each stage makes this algorithm slightly slower for the small matrices
(dimensions 50-150) output from ARPACK than for square matrices of the same size,
but this appears to be the price to be paid for using matrices which have the property
of (3.3).
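For illustration, a minimal Matlab sketch of such an inverse Lanczos computation might look
as follows; the function name, iteration limit, and stopping test are placeholder
assumptions rather than the implementation used for the experiments in this paper.

    function sigmin = invlanczos(R, maxit, tol)
    % Estimate the smallest singular value of an upper triangular matrix R by
    % Lanczos iteration on (R'*R)^(-1); each step costs two triangular solves.
    n = size(R, 1);
    q = randn(n, 1) + 1i*randn(n, 1);  q = q / norm(q);
    qold = zeros(n, 1);  beta = 0;
    alpha = zeros(maxit, 1);  betas = zeros(maxit, 1);
    sigold = inf;  sigmin = inf;
    for j = 1:maxit
        w = R \ (R' \ q);                  % w = (R'*R)^(-1) * q
        alpha(j) = real(q' * w);
        w = w - alpha(j)*q - beta*qold;
        beta = norm(w);  betas(j) = beta;
        qold = q;
        T = diag(alpha(1:j));              % tridiagonal Lanczos matrix
        if j > 1
            T = T + diag(betas(1:j-1), 1) + diag(betas(1:j-1), -1);
        end
        sigmin = 1 / sqrt(max(eig(T)));    % largest Ritz value of (R'*R)^(-1)
        if beta < eps || abs(sigmin - sigold) < tol*sigmin, break, end
        sigold = sigmin;  q = w / beta;
    end
    end

Because both solves involve triangular matrices, each iteration costs only O(n²) operations
for an n × n factor R, consistent with the operation counts quoted above.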
4.1. Refinements. In some cases we have found that inverse iteration to find
the minimum eigenvalue of (zI − R)*(zI − R) is more efficient than inverse Lanczos
iteration but only when used with continuation (Lui [12]). Continuation works by
using the vector corresponding to the smallest singular value from the previous grid
point as the starting guess for the next grid point.
This sounds like a good idea; if the two shifted matrices di#er by only a small
shift, their singular values (and singular vectors) will be similar. When it works, it
generally means that only a single iteration is needed to satisfy the convergence crite-
rion. However, as Lui indicates, there is a problem with this approach if the smallest
and second smallest singular values "change places" between two values of z: the iteration
may converge to the second smallest singular value instead of the smallest, since
the starting vector had such a strong component in the direction of the corresponding
singular vector. This leads to the convergence criterion being satisfied for the wrong
singular value (even after several iterations).
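For illustration, one inverse iteration with continuation at a single grid point might be
written in Matlab as follows (all names, sizes, and tolerances are placeholders); the
convergence test flagged in the comments is the step at which the misconvergence described
above can occur.

    % Inverse iteration for sigma_min(zI - R) with a continuation starting vector.
    R = triu(randn(40)) + 40*eye(40);    % stand-in upper triangular matrix
    z = 0.5 + 0.5i;  vprev = randn(40, 1);  maxit = 20;  tol = 1e-4;
    S = z*eye(40) - R;                   % shifted matrix, still upper triangular
    v = vprev / norm(vprev);             % starting guess carried over from the last grid point
    sigold = inf;
    for it = 1:maxit
        w = S \ (S' \ v);                % one application of ((zI-R)'*(zI-R))^(-1)
        sig = 1 / sqrt(real(v' * w));    % Rayleigh-quotient estimate of sigma_min
        v = w / norm(w);
        if abs(sig - sigold) < tol*sig   % may be satisfied by the *wrong* singular value
            break
        end
        sigold = sig;
    end
    vprev = v;                           % handed on to the next grid point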
Choose max subspace size p (larger p for better pseudospectra).
Choose number of eigenvalues k (larger k for better pseudospectra).
Run ARPACK(A, p, k) to obtain the rectangular Hessenberg matrix H̃.
Define a grid over a region of C enclosing converged Ritz values.
For each grid point z:
    Perform reduced QR factorization of the shifted matrix: zĨ − H̃ = QR (Ĩ the rectangular identity).
    Get σ_min(z) from Lanczos iteration on (R*R)⁻¹ (whose largest eigenvalue is 1/σ_min(z)²), random starting vector.
end.
Start GUI and create contour plot of the σ_min values.
Allow adjustment of parameters (e.g., grid size, contour levels) in GUI.
Fig. 2. Pseudocode for our algorithm.
In the course of our research, we have found several test matrices which su#er
from this problem, including the so-called Tolosa matrix [4]. Accordingly, because of
our desire to create a robust algorithm, we do not use inverse iteration. In theory it is
also possible to use continuation with inverse Lanczos iteration, but our experiments
indicate that the benefit is small and it again brings a risk of misconvergence.
Our algorithm (the main loop of which is similar to that in [28]) is summarized
in Figure 2.
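Spelled out in Matlab, the main loop of Figure 2 takes roughly the following form. This is
an illustrative sketch, not the code used for the experiments below: the rectangular
Hessenberg matrix Hf is assumed to be available from the ARPACK run, the grid limits are
placeholders, and a dense svd stands in for the inverse Lanczos iteration used in practice.

    % Approximate pseudospectra from a rectangular (n+1)-by-n Hessenberg matrix Hf.
    [m, n] = size(Hf);                        % m = n + 1
    Irect = eye(m, n);                        % rectangular "identity" for the shift
    x = linspace(-1, 1, 100);  y = linspace(-1, 1, 100);   % placeholder grid limits
    sigmin = zeros(length(y), length(x));
    for iy = 1:length(y)
        for ix = 1:length(x)
            z = x(ix) + 1i*y(iy);
            [Q, R] = qr(z*Irect - Hf, 0);     % reduced QR; R is n-by-n upper triangular
            sigmin(iy, ix) = min(svd(R));     % same singular values as z*Irect - Hf
        end
    end
    contour(x, y, log10(sigmin));             % epsilon-pseudospectrum boundaries are the
    colorbar                                  % level curves sigma_min(z) = epsilon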
5. Practical examples. While one aim of our method is to approximate the
pseudospectra of the original matrix accurately, this is perhaps no more important
than the more basic mission of exhibiting the degree of nonnormality the matrix has,
so that the ARPACK or eigs user gets some idea of whether the Ritz values returned
are likely to be physically meaningful. Even in cases where the approximations of
the sets Λ_ε(A) are inaccurate, a great deal may still be learned from their qualitative
properties.
In the following examples, ARPACK was asked to look for the eigenvalues of
largest real part except where otherwise indicated. However, the choice of region of
the complex plane to focus on is unimportant for our results and is determined by
which eigenvalues are of interest for the particular problem at hand. The number
of requested eigenvalues k was chosen rather arbitrarily to be large enough so that
the approximate pseudospectra clearly indicate the true behavior in the region of
the complex plane shown, and the maximum subspace size p was chosen to ensure
convergence of ARPACK for the particular choice of k. Experiments show that the
computed pseudospectra are not very sensitive to the choices of k and p, provided
they are large enough, but we have not attempted to optimize these choices.
5.1. Two extremes. Our first example (Figure 3), from Matrix Market [15],
shows a case where the approximation is extremely good. The matrix is the Jacobi
matrix of dimension 800 for the reaction-di#usion Brusselator model from chemical
engineering [7], and one seeks the rightmost eigenvalues. The matrix is not highly non-
normal, and the pseudospectra given by the approximation almost exactly match the
[Figure 3 panels: Pseudospectra (left), Approximate pseudospectra (right)]
Fig. 3. Pseudospectra for the matrix rdb800l (left) computed using the standard method, and
pseudospectra of the upper Hessenberg matrix of dimension computed using ARPACK
(right) in about 9% of the computer time (on the fine grid used here). Levels are shown for
. The number of matrix-vector products needed by ARPACK (nv ) is
1,493.
true pseudospectra around the converged Ritz values. This is a case where the pseudospectra
computed after running ARPACK indicate that the eigenvalues returned
are both accurate and physically meaningful, and that no further investigation is nec-
essary. In this computation we used a maximum subspace dimension of
requested eigenvalues.
The second case we consider is one where the matrix has a high degree of non-
normality-the Grcar matrix. As seen in Figure 1, ARPACK can converge to Ritz
values which are eigenvalues of a perturbation of order machine precision of the original
matrix, and the nonnormality of this particular matrix (here of dimension 400)
means that the Ritz values found can lie a long way from the spectrum of the matrix.
Figure 4 shows that the pseudospectra of the Hessenberg matrix (computed using
ARPACK, asking for eigenvalues of largest modulus) in this case are not
good approximations to the pseudospectra of the original one.
This is typical for highly nonnormal matrices-the Hessenberg matrix cannot capture
the full extent of the nonnormality, particularly when more than p eigenvalues
of the original matrix lie within the region of the complex plane in which the pseudospectra
are computed. In other words, the approximation is typically not so good in
areas away from the computed Ritz values, and it only accurately approximates the
pseudospectra of the original matrix when the Ritz values are good approximations
to the eigenvalues. Despite this, a plot like that of Figure 4 will instantly indicate to
the ARPACK user that the matrix at hand is strongly nonnormal and needs further
investigation.
5.2. A moderately nonnormal example. While the above examples show
two extreme cases, many important applications are more middle-of-the-range, where
[Figure 4 panels: Pseudospectra (left), ARPACK approximate pseudospectra (right)]
Fig. 4. The pseudospectra of the Grcar matrix of dimension 400 (left) computed using the
standard method, and the pseudospectra of the upper Hessenberg matrix of dimension 50 computed
using ARPACK (right) in about 8% of the computer time (on this fine grid). Contours are shown
[Figure 5 panels: Pseudospectra (left), Approximate pseudospectra (right)]
Fig. 5. Pseudospectra for linearized fluid flow through a circular
pipe at Reynolds number 10,000 (streamwise-independent disturbances with azimuthal wave number
machine precision is sufficient to accurately converge the eigenvalues, but pronounced
nonnormality may nevertheless diminish the physical significance of some of them. A
good example of a case in which this is important is the matrix created by linearization
about the laminar solution of the Navier-Stokes equations for fluid flow in an infinite
circular pipe [26]. (Our matrix is obtained by a Petrov-Galerkin spectral discretization
of the Navier-Stokes problem due to Meseguer and Trefethen [16]. The axial
and azimuthal wave numbers are 0 and 1, respectively, and the matrix dimension is
402.) The pseudospectra are shown in Figure 5, and although the eigenvalues all have
negative real part, implying stability of the flow, the pseudospectra protrude far into
the right half-plane. This implies pronounced transient growth of some perturbations
of the velocity field in the pipe, which in the presence of nonlinearities in practice may
lead to transition to turbulence [30]. The approximate pseudospectra also highlight
[Figure 6 panels: Brusselator Wave Model (left), Crystal (right)]
Fig. 6. Left: The pseudospectra for the Brusselator wave model,
nv= 16,906. Right: Pseudospectral contours for a matrix of dimension
10,000 from the Crystal set at Matrix Market,
this behavior. The parameters used here were
5.3. Larger examples. We now consider four larger examples. The first is the
Brusselator wave model from Matrix Market (not to be confused with the very first
example), which models the concentration waves for reaction and transport interaction
of chemical solutions in a tubular reactor [9]. Stable periodic solutions exist for a
parameter when the rightmost eigenvalues of the Jacobian are purely imaginary. For a
matrix of dimension 5,000, using a subspace of dimension 100 and asking ARPACK for
20 eigenvalues, we obtained the eigenvalue estimates and approximate pseudospectra
shown in Figure 6 (left). The departure from normality is evidently mild, and the
conclusion from this computation is that the Ritz values returned by ARPACK are
likely to be accurate and the corresponding eigenvalues physically meaningful.
Figure 6 (right) shows approximate pseudospectra for a matrix of dimension
10,000, taken from the Crystal set at Matrix Market, which arises in a stability analysis
of a crystal growth problem [32]. The eigenvalues of interest are the ones with
largest real part. The fact that we can see the 10⁻¹³ pseudospectrum (when the axis
scale is O(1)) indicates that this matrix is significantly nonnormal, and although the
matrix is too large for us to be able to compute its exact pseudospectra for compar-
ison, this is certainly a case where the nonnormality could be important, making all
but the rightmost few eigenvalues of dubious physical significance in an application.
The ARPACK parameters we used in this case were and the computation
took about one hour on our Sun Ultra 5 workstation. Although we do not
have the true pseudospectra in this case, we would expect that the rightmost portion
should be fairly accurate where there is a good deal of Ritz data and relatively little
nonnormality. We expect that the leftmost portion is less accurate, where the effect
of the remaining eigenvalues of the matrix, unknown to the approximation, begins to
become important.
The third example, Figure 7 (left), shows the Airfoil matrix created by performing
transient stability analysis of a Navier-Stokes solver [13], also from Matrix Market. In
this case the matrix appears fairly close to normal, and the picture gives every reason
to believe that the eigenvalues have physical meaning. Using
ARPACK took about 9 hours to converge to the eigenvalues, while we were able to
plot the pseudospectra in about 3 minutes (even on the fine grid used here).
Our final example is a matrix which is bidiagonal plus random sparse entries
[Figure 7 panels: Airfoil (left), Random (right)]
Fig. 7. Left: The pseudospectra for the Airfoil matrix from
Matrix Market of dimension 23,560, nv= 72,853. Right: The
for our random matrix, with nv= 61,347.
elsewhere, created in Matlab by
The approximation to the pseudospectra of the matrix of dimension 200,000 is shown
in Figure 7 (right), from which we can conclude that the eigenvalue estimates returned
are probably accurate, but that the eigenvalues toward the left of the plot would likely
be of limited physical significance in a physical application, if there were one, governed
by this matrix. We used a subspace size of 50 and requested 30 eigenvalues from this
example, and the whole computation took about 26 hours.
6. MATLAB GUI. We have created a Matlab GUI to automate the process
of computing pseudospectra, and Figure 8 shows a snapshot after a run of ARPACK.
Initially the pseudospectra are computed on a coarse grid to give a fast indication
of the nonnormality of the matrix, but the GUI allows control over the number of
grid points if a higher quality picture is desired. Other features include the abilities
to change the contour levels shown without recomputing the underlying values, and
to select parts of the complex plane to zoom in for greater detail. The GUI can
also be used as a graphical front end to our other pseudospectra codes for computing
pseudospectra of smaller general matrices. The codes are available on the World Wide
Web from http://web.comlab.ox.ac.uk/oucl/work/nick.trefethen/.
7. Discussion. The examples of section 5 give an indication of the sort of pictures
obtained from our technique. For matrices fairly close to normal, the approximation
is typically a very close match to the exact pseudospectra, but for more
highly nonnormal examples the agreement is not so close. This is mainly due to the
effect of eigenvalues which the Arnoldi iteration has not found: their effect on the
pseudospectra is typically more pronounced for nonnormal matrices.
The other point to note is that if we use the Arnoldi iteration to look (for ex-
Fig. 8. A snapshot of the Matlab GUI after computing the pseudospectra of a matrix.
ample) for eigenvalues of largest real part, the rightmost part of the approximate
pseudospectra will be a reasonably good approximation. This can clearly be seen in
Figure 4, where we are looking for eigenvalues of largest modulus: the top parts of the
pseudospectra are fairly good and only deteriorate lower down, where the effect of the
"unfound" eigenvalues becomes important.
However, as mentioned in the introduction, creating accurate approximations of
pseudospectra was only part of the motivation for this work. Equally important has
been the goal of providing information which can help the user of ARPACK or eigs
decide whether the computed eigenvalues are physically meaningful. For this purpose,
estimating the degree of nonnormality of the matrix is more important than getting
an accurate plot of the exact pseudospectra.
One of the biggest advantages of our technique is that while the time spent on the
ARPACK computation grows as the dimension of the matrix increases, the time spent
on the pseudospectra computation remains roughly constant. This is because the
pseudospectra computation is based just on the final Hessenberg matrix, of dimension
typically in the low hundreds at most. Figure 9 shows the proportion of time spent
on the pseudospectra part of the computation for the examples we have presented
here. These timings are based on the time to compute the initial output from our
[Figure 9 data labels: Airfoil, Crystal, Grcar, Pipe Flow, R.-D. Brusselator, Random,
Brusselator Wave Model; axes: Dimension N versus Pseudospectra time / Total time]
Fig. 9. The proportion of the total computation time spent on computing the pseudospectra
for the examples presented in this paper. For large N, the pseudospectra are obtained at very little
additional cost.
[Figure 10 panels: R.-D. Brusselator (left), Grcar (right)]
Fig. 10. Lower-quality plots of pseudospectra produced by our GUI on its default coarse
grid. Such plots take just a few seconds. The matrices shown are rdb800l (cf. Figure 3)
and the Grcar matrix (cf. Figure 4).
GUI using a coarse grid such as those illustrated in Figure 10, which is our standard
resolution for day-to-day work. For "publication quality" pseudospectra at the
resolution of the other plots in this paper, the cost is about thirty times higher, but
this is still much less than the cost of ARPACK for dimensions N in the thousands.
Acknowledgments. We would like to thank Rich Lehoucq for his advice,
beginning during his visit to Oxford in October-November 1999, Penny
Anderson of The MathWorks, Inc., for her help with the beta version of the new eigs
command, and Mark Embree for his comments on drafts of the paper.
--R
The principle of minimized iteration in the solution of the matrix eigenvalue problem
Vorst, eds., Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide
Computing the field of values and pseudospectra using the Lanczos method with continuation
Stability analysis in aeronautical industries
Matrix Computations
Operator Coe
Theory and Applications of Hopf Bifurcation
A block algorithm for matrix 1-norm estimation
Waves in Distributed Chemical Systems: Experiments and Computations
Package, http://www.
Computation of pseudospectra by continuation
Eigenvalue calculation procedure for an Euler- Navier-Stokes solver with applications to flows over airfoils
On hybrid iterative methods for nonsymmetric systems of linear equations
Matrix Market
A Spectral Petrov-Galerkin Formulation for Pipe Flow I: Linear Stability and Transient Growth
A hybrid GMRES algorithm for non-symmetric linear systems
Pseudospectra of the Orr-Sommerfeld operator
Rational Krylov algorithms for nonsymmetric eigenvalue problems.
Variations on Arnoldi's method for computing eigenelements of large unsymmetric matrices
Numerical Methods for Large Eigenvalue Problems
A Jacobi-Davidson iteration method for linear eigenvalue problems
Implicit application of polynomial filters in a k-step Arnoldi method
Matrix Approximation Problems and Nonsymmetric Iterative Methods
Calculation of pseudospectra by the Arnoldi iteration
Spectra and pseudospectra for pipe Poiseuille flow
Pseudospectra of matrices
Computation of Pseudospectra
Hydrodynamic stability without eigenvalues
Computational Methods for Large Eigenvalue Problems
Numerical Computation of the Linear Stability of the Di
--TR
--CTR
S.-H. Lui, A pseudospectral mapping theorem, Mathematics of Computation, v.72 n.244, p.1841-1854, October
Lorenzo Valdettaro , Michel Rieutord , Thierry Braconnier , Valrie Frayss, Convergence and round-off errors in a two-dimensional eigenvalue problem using spectral methods and Arnoldi-Chebyshev algorithm, Journal of Computational and Applied Mathematics, v.205 n.1, p.382-393, August, 2007
Kirk Green , Thomas Wagenknecht, Pseudospectra and delay differential equations, Journal of Computational and Applied Mathematics, v.196 n.2, p.567-578, 15 November 2006
C. Bekas , E. Kokiopoulou , E. Gallopoulos, The design of a distributed MATLAB-based environment for computing pseudospectra, Future Generation Computer Systems, v.21 n.6, p.930-941, June 2005
Computing smallest singular triplets with implicitly restarted Lanczos bidiagonalization, Applied Numerical Mathematics, v.49 n.1, p.39-61, April 2004 | implicit restarting;pseudospectra;ARPACK;arnoldi;eigenvalues |
587192 | A Multigrid Method Enhanced by Krylov Subspace Iteration for Discrete Helmholtz Equations. | Standard multigrid algorithms have proven ineffective for the solution of discretizations of Helmholtz equations. In this work we modify the standard algorithm by adding GMRES iterations at coarse levels and as an outer iteration. We demonstrate the algorithm's effectiveness through theoretical analysis of a model problem and experimental results. In particular, we show that the combined use of GMRES as a smoother and outer iteration produces an algorithm whose performance depends relatively mildly on wave number and is robust for normalized wave numbers as large as 200. For fixed wave numbers, it displays grid-independent convergence rates and has costs proportional to the number of unknowns. | Introduction
. Multigrid algorithms are effective for the numerical solution
of many partial differential equations, providing a solution in time proportional to
the number of unknowns. For some important classes of problems, however, standard
multigrid algorithms have not been useful, and in this paper we focus on developing
effective multigrid algorithms for one such class, the discrete Helmholtz equation.
Our main interest lies in solving exterior boundary value problems of the form
on\Omega ae R d (1.1)
@\Omega (1.2)
such as arise in the modeling of time-harmonic acoustic or plane-polarized electromagnetic
scattering by an obstacle. The boundary \Gamma represents the scattering obstacle,
and the boundary operator B can be chosen so that a Dirichlet, Neumann or Robin
boundary condition is imposed. The original unbounded domain is truncated to the
finite
domain\Omega by introducing the artificial boundary \Gamma 1 on which the radiation
boundary condition (1.3) approximates the outgoing Sommerfeld radiation condition.
Depending on what type of radiation condition is chosen, M can be either a (local)
differential operator or a global integral operator coupling all points on \Gamma 1 (see [14]).
The data for the problem are given by the right hand side f and the boundary data
g. In the most common case, f j 0 and \Gammag is the boundary data of an incident
plane wave. The critical parameter is the wave number k, which is positive in the
case of unattenuated wave propagation. Due to the radiation boundary condition,
the solution of (1.1)-(1.3) is a complex-valued function u
Discretization of (1.1)-(1.3) by finite differences or finite elements leads to a linear
system of equations
in which the coefficient matrix A is complex-symmetric, i.e., not Hermitian. Moreover,
for large values of the wave number k, it becomes highly indefinite.
It is this indefiniteness that until recently has prevented multigrid methods from
being applied to the solution of the discrete equations with the same success as these
methods have enjoyed for symmetric positive definite problems. As will be illustrated
in Section 2, the difficulties with standard multigrid methods applied to Helmholtz
problems concern both of the main multigrid components: smoothing and coarse grid
correction. In particular, standard smoothers such as Jacobi or Gau-Seidel relaxation
become unstable for indefinite problems since there are always error components-
usually the smooth ones-which are amplified by these smoothers. The difficulties
with the coarse grid correction are usually attributed to the poor approximation of
the Helmholtz operator on very coarse meshes, since such meshes cannot adequately
resolve waves with wavelength of which the solution primarily consists. We
show, however, that although the coarse grid correction is inaccurate when coarse-grid
eigenvalues do not agree well with their fine-grid counterparts, coarse meshes can still
yield useful information in a multigrid cycle.
In this paper, we analyze and test techniques designed to address the difficulties
in both smoothing and coarse grid correction for the Helmholtz equation. For
smoothing, our approach is to use a standard, damped Jacobi, relaxation when it
works reasonably well (on fine enough grids), and then to replace this with a Krylov
subspace iteration when it fails as a smoother. Earlier work such as in Bank [2]
and Brandt and Ta'asan [11] have employed relaxation on the normal equations in
this context. Krylov subspace smoothing, principally using the conjugate gradient
method, has been considered by a variety of authors [3, 7, 8, 27, 29].
For coarse grid correction, we identify the type and number of eigenvalues that
are handled poorly during the correction, and remedy the difficulty by introducing an
outer acceleration for multigrid; that is, we use multigrid as a preconditioner for an
outer, Krylov subspace, iteration. This approach has been used by many authors, e.g.
[28, 31] but only for problems in which the coarse grid is restricted to be fairly fine.
It has also been used in other settings [20, 23]. Any Krylov subspace method is an
option for both the smoother and the outer iteration; we use GMRES [25]. In contrast
to many multilevel strategies [2, 6, 9, 31], the resulting algorithm has no requirements
that the coarse grid be sufficiently fine. For approaches based on preconditioning
indefinite problems by preconditioners for the leading term, see [4, 5, 15, 32].
In more recent work, Brandt and Livshits [10] have developed an effective multi-grid
approach for the Helmholtz equation based on representing oscillatory error components
on coarse grids as the product of an oscillatory Fourier mode and a smooth
amplitude-or ray-function. The standard V-cycle is augmented by so-called ray cy-
cles, in which the oscillatory error components are eliminated by approximating the
associated ray functions in a multigrid fashion. This wave-ray methodology has also
been combined by Lee et al. [21] with a first-order system least-squares formulation for
the Helmholtz equation. These approaches require construction of and bookkeeping
for extra grids associated with the ray functions.
An outline of the paper is as follows. In Section 2, we perform a model problem
analysis, using a one-dimensional problem to identify the difficulties encountered by
both smoothers and coarse grid correction, and supplementing these observations
with an analysis of how dimensionality of the problem affects the computations. In
Section 3, we present the refined multigrid algorithms and test their performance on
a set of two-dimensional benchmark problems on a square domain. In particular, we
demonstrate the effectiveness of an automated stopping criterion for use with GMRES
smoothing, and we show that the combined use of GMRES as a smoother and outer
iteration produces an algorithm whose performance depends relatively mildly on wave
number and is robust for wave numbers as large as two hundred. In Section 4, we show
the performance of the multigrid solver on an exterior scattering problem. Finally, in
Section 5, we draw some conclusions.
2. Model Problem Analysis. Most of the deficiencies of standard multigrid
methods for solving Helmholtz problems can be seen from a one-dimensional model
problem. Therefore, we consider the Helmholtz equation on the unit interval (0; 1)
with homogeneous Dirichlet boundary conditions
This problem is guaranteed to be nonsingular only if k 2 is not an eigenvalue of the
negative Laplacian, and we will assume here that this requirement holds. The problem
is indefinite for which is the smallest eigenvalue of the negative Laplacian.
Finite difference discretization of (2.1) on a uniform grid containing N interior
points leads to a linear system of equations (1.4) with the N \Theta N coefficient matrix
denotes the mesh
width and I denotes the identity matrix. Under the assumptions on k above, it is
well-known (see [26]) that for sufficiently fine discretizations, the discrete problems
are also nonsingular. We also assume that all coarse grid problems are nonsingular.
The eigenvalues of A are
and the eigenvectors are
2h [sin ij-h] N
The choice of Dirichlet boundary conditions in (2.1) allows us to perform Fourier
analysis using these analytic expressions for eigenvalues and eigenvectors. In experiments
described in Section 3, we will examine how our observations coincide with
performance on problems with radiation conditions, which are nonsingular for all k
[18]. Aspects of the algorithm that depend on the dimensionality of the problem will
be considered at the end of this section.
2.1. Smoothing. For the smoothing operator, we consider damped Jacobi re-
laxation, defined by the stationary iteration
em um denote the residual and error vectors at
step m, respectively. denotes the matrix consisting of the diagonal
of A, and ! is the damping parameter. The associated error propagation matrix
is and the eigenstructure of this matrix governs the behavior
of the error em . Since D is a multiple of the identity matrix, S! is a
polynomial in A and hence shares the same system of orthonormal eigenvectors (2.3).
The eigenvalues of S! are
Thus, the eigenvalue - j of S! is the damping factor for the error component corresponding
to the eigenvalue - j of A.
We now consider the effects of damped Jacobi smoothing on three levels of grids:
fine, coarse, and intermediate.
2.1.1. Fine Grids. The fine grid mesh size is determined by accuracy requirements
on the discretization, and this allows us to make certain assumptions on the
size of h versus k on the fine grid. Recall that the wavelength - associated with a
time-harmonic wave with wave number k ? 0 is given by 2-=k. The quantity
is the number of mesh points per wavelength, and it measures the approximability of
the solution on a given mesh. A commonly employed engineering rule of thumb [17]
states that, for a second-order finite difference or linear finite element discretization,
equivalently, kh -=5 (2.5)
is required, and we will enforce (2.5) in all experiments. We also note that, for reasons
of stability, a bound on the quantity h 2 k 3 is also required [18]; for high wave numbers
this bound is more restrictive than the bound on kh.
As a consequence of (2.5), the quantity multiplying the smoothing parameter !
in (2.4) will vary between about \Gamma1=4 and 9=4 for
smoothing results in a slight amplification of the most oscillatory modes as
well as of the smoothest modes. One can adjust ! so that the most oscillatory mode
is damped, and this is the case as long as For S! to
be an effective smoother, ! is usually chosen to maximize damping for the oscillatory
half of the spectrum. This leads to the choice
which is equal to the familiar optimal value of 2=3 for the Laplacian [22, p. 11] when
But the smoothest mode is amplified for any
positive choice of ! when the discrete problem is indefinite, and this is the case for
the discrete Helmholtz operator . As can be seen from (2.4), more
smooth-mode eigenvalues of S! become larger than one in magnitude as h is increased,
thus making damped Jacobi-as well as other standard smoothers-increasingly more
unstable as the mesh is coarsened.
Figure
2.1 shows the damping factors - j for each of the eigenvalues - j of A for
wave number on a grid with 31. The maximal amplification occurs for
the smoothest mode, corresponding to the leftmost eigenvalue of A. When
this amplification factor is approximately equal to
Figure
2.2 shows how ae varies with kh. Limiting this largest amplification factor, say
to ae - 1:1, would lead to the mesh size restriction kh - 0:52, somewhat stronger than
(2.5). One also observes that, for kh ?
6, this mode is once again damped.
In summary, the situation on the finest grids is similar to the positive definite
case, except for the small number of amplified smooth modes whose number and
amplification factors increase as the mesh is coarsened.
l
Fig. 2.1. The damping factors for the damped Jacobi relaxation plotted against the eigenvalues
of A (+) for
kh
Fig. 2.2. The variation of the damping/amplification factor of the smoothest mode as a function
of kh for
2.1.2. Very Coarse Grids. As the mesh is coarsened, the eigenvalues of A that
correspond to the larger eigenvalues of the underlying differential operator disappear
from the discrete problem, while the small ones-those with smooth eigenfunctions-
remain. This means that, for a fixed k large enough for the differential problem to
be indefinite, there is a coarsening level below which all eigenvalues are negative. For
the model problem (2.1), this occurs for kh ? 2 cos(-h=2) for any fixed k ? -. In this
(negative definite) case, the damped Jacobi iteration is convergent for
with
and the spectral radius of S! is minimized for This would permit the use of
(undamped) Jacobi as a smoother on very coarse grids, but we shall not make use of
this.
2.1.3. Intermediate Grids. What remains is the difficult case: values of kh
for which the problem is not yet negative definite but for which a large number of
smooth modes are amplified by damped Jacobi relaxation. Jacobi smoothing and
other standard smoothers are therefore no longer suited, and it becomes necessary to
use a different smoothing procedure. In [11] and [16] it was proposed to replace classical
smoothers with the Kaczmarz iteration, which is Gau-Seidel relaxation applied to
the symmetric positive-definite system AA for the auxiliary variable v defined
by A This method has the advantage of not amplifying any modes, but it
suffers from the drawback that the damping of the oscillatory modes is very weak. In
the following section we propose using Krylov subspace methods such as GMRES for
smoothing. These methods possess the advantage of reducing error components on
both sides of the imaginary axis without resorting to the normal equations.
2.2. Coarse Grid Correction. The rationale behind coarse-grid correction is
that smooth error components can be well represented on coarser grids, and hence
a sufficiently good approximation of the error can be obtained by approximating
the fine grid residual equation using the analogous system on a coarser mesh. This
assumes both that the error consists mainly of smooth modes and that the solution
of the coarse grid residual equation is close to its counterpart on the fine grid. In this
section, we present an analysis of what goes wrong for the Helmholtz problem.
2.2.1. Amplification of Certain Modes. Assume the number of interior grid
points on the fine grid is odd, and consider the next coarser mesh, with
interior points. We identify R N and R n , respectively, with the spaces of grid functions
on these two meshes that vanish at the endpoints, and we indicate the mesh such
vectors are associated with using the superscripts h and H. Let e
the fine grid error, let r denote the residual, and let denote the
coarse mesh size. Let the coarse-to-fine transformation be given by the interpolation
operator I h
\Theta I h
The following indication of what can go wrong with the (exact) coarse grid correction
was given in [11]: consider a fine-grid error e consisting of only the
smoothest eigenvector v h of A h with associated eigenvalue - h . The fine-grid residual
is thus given by r since we are assuming that v h is smooth,
its restriction - r H := I H
h to the coarse grid will again be close to an
eigenvector of the coarse-grid operator A H , but with respect to a slightly different
eigenvalue - H . The coarse grid version of the correction is
I H
Hence the error on the fine grid after the correction is
where we have assumed that the smooth mode v h is invariant under restriction followed
by interpolation. This tells us that, under the assumption that the restrictions
of smooth eigenvectors are again eigenvectors of A H , the quality of the correction
depends on the ratio - h =- H . If the two are equal, then the correction is perfect,
but if the relative error is large, the correction can be arbitrarily bad. This occurs
whenever one of - h , - H is close to the origin and the other is not. Moreover, if - h
and - H have opposite signs, then the correction is in the wrong direction.
We now go beyond existing analysis and examine which eigenvalues are problematic
in this sense for finite differences; a similar analysis can also be performed for
linear finite elements. Consider the coarse-grid eigenfunctions v H
. To
understand the effects of interpolation of these grid functions to the fine grid, we must
examine both the first n fine-grid eigenfunctions fv h
and their complementary
modes fv h
are related by
\Theta v h
\Theta v h
. As is easily
verified, there holds [12]
I h
with c j := cos j-h=2 and s j := sin j-h=2,
If full weighting is used for the restriction operator I H
componentwise
\Theta
I H
\Theta
\Theta
and the relation I h
. The following mapping properties are easily established
I H
with c j and s j as defined above.
If A H denotes the coarse-grid discretization matrix, then the corrected iterate
~
possesses the error propagation operator C := I \Gamma
I h
h A h . Denoting the eigenvalues of A h and A H by f- h
respectively, we may summarize the action of C on the eigenvectors using (2.9) and
as follows:
Theorem 2.1. The image of the fine-grid eigenfunctions fv h
under the
error propagation operator C of the exact coarse grid correction is given by
As a consequence, the two-dimensional spaces spanned by a smooth mode and its
complementary mode are invariant under
The following result shows the dependence of the matrices C j on
Theorem 2.2. Using the notation defined above, there holds
Moreover,
lim
Proof. Both (2.13) and (2.14) are simple consequences of (2.12) and the representation
(2.2) of the eigenvalues - h
.
Application of the error propagation operator to a smooth mode v h
If the entries of the first column of C j are small, then this mode is damped by
the coarse grid correction. However, if the (1; 1)-entry is large then this mode is
amplified, and if the (2; 1)-entry is large (somewhat less likely), then the smooth
mode is corrupted by its complementary mode. As seen from (2.13), these difficulties
occur whenever - H
is small in magnitude. From the limits (2.14), it is evident that no
such problems arise in the symmetric positive-definite case (a fact that is well-known),
but they also do not occur when kh is very large, i.e., when the coarse grid Helmholtz
operator is negative definite. These observations can be extended by returning to
(2.8) and using (2.2), wherein it holds that
That is, the coarse-grid correction strongly damps smooth error modes for either very
small or very large values of kh, but it may fail to do so in the intermediate range
associated with a smooth mode.
We also note that in the limit the eigenvalues of C j are 0 and 1, so that
C j is a projection, and in this case the projection is orthogonal with respect to the
inner product induced by the symmetric and positive definite operator A h . The
projection property is lost for k ? 0, since the coarse grid operator as we have defined
it fails to satisfy the Galerkin condition A
h A h I h
H . (The Galerkin condition is,
however, satisfied e.g. for finite element discretizations with interpolation by inclusion)
Moreover, regardless of the type of discretization, the term A h -orthogonality ceases
to makes sense once k is sufficiently large that A h is indefinite.
2.2.2. Number of Sign Changes. In this section, we discuss the number of
eigenvalues that undergo a sign change during the coarsening process, and thereby
inhibit the effectiveness of coarse grid correction. This is the only aspect of the
algorithm that significantly depends on the dimensionality of the problem. Thus, here
we are considering the Helmholtz equation (1.1) on the d-dimensional unit cube (0; 1) d ,
with homogeneous Dirichlet boundary conditions. We consider standard
finite differences (second order three-point, five-point or seven-point discretization of
the Laplacian in one, two or three dimensions, respectively), as well as the class of
low order finite elements consisting of linear, bilinear or trilinear elements.
We first state the issue more precisely using finite differences. In d dimensions,
the eigenvalues of the discrete operator on a grid with mesh size h and N grid points
in each direction are
d
sin
For any fixed multi-index I, this eigenvalue is a well-defined function of h that converges
to the corresponding eigenvalue of the differential operator as h ! 0. Our
concern is the indices for which this function changes sign, for these are the troublesome
eigenvalues that are not treated correctly by some coarse grid correction. As
the mesh is coarsened, the oscillatory modes (j i ? N=2 for some i) are not represented
on the next coarser mesh, but the smooth-mode eigenvalues f- H
I g are slightly
shifted to the left with respect to their fine-grid counterparts f- h
I g, and some of these
eigenvalues change sign at some point during the coarsening process.
The following theorem gives a bound, as a function of k, on the maximal number
eigenvalue sign changes occurring on all grids.
Theorem 2.3. For finite difference discretization of the Helmholtz equation with
Dirichlet boundary conditions on the unit cube in d dimensions 3), the
number of eigenvalues that undergo a change in sign during the multigrid coarsening
process is bounded above by
3:
For the finite element discretizations, the number of sign changes is bounded above by
pj
3:
Proof. For finite differences, let
fine denote the number of negative eigenvalues
on some given fine grid, and let
lim denote the number of negative eigenvalues of
the continuous Helmholtz operator. Because eigenvalues (2.16) with the same index
I shift from right to left with grid coarsening, it follows that
this is an equality for all fine enough grids, as the discrete eigenvalues tend to the
continuous ones. To identify
lim , consider the continuous eigenvalues
It is convenient to view the indices of these eigenvalues as lying in the positive orthant
of a d-dimensional coordinate system. The negative eigenvalues are contained in the
intersection of this orthant with a d-dimensional sphere of radius k=- centered at
the origin. Let N denote this intersection, and let -
N denote the d-dimensional cube
enclosing N . The number of indices in -
N is bk=-c d , and the number in N is aebk=-c d ,
where
is the ratio of the volume of N to that of -
N . It follows that
3:
Now consider the eigenvalues of discrete problems. Again, since sign changes
occur from right to left with coarsening, the mesh size that yields the maximum
number of negative eigenvalues is the smallest value h for which the discrete operator
is negative semidefinite. With N mesh points in each coordinate direction, this is
equivalent to
d sin 2 N-h
3:
Thus,
d=k, and
d
3:
Combining (2.19) with the fact that
fine
fine
The latter difference, shown in (2.17), is then a bound on number of sign changes.
For finite elements, we are concerned with the eigenvalues of the coefficient matrix
A h , but it is also convenient to consider the associated operator A h defined on the
finite element space V h . The eigenvalues of A h are those of the generalized matrix
eigenvalue problem
A h u
where M h is the mass matrix. These eigenvalues tend to those of the continuous
operator. Moreover, since V H is a subspace of V h , the Courant-Fischer min-max
theorem implies that eigenvalues oe h and oe H with a common index shift to the right
with coarsening (or to the left with refinement). In addition, since M h is symmetric
positive-definite, Sylvester's inertia theorem implies that the number of negative
eigenvalues of A h is is the same as that of (2.21). It follows from these observations
Sign changes
Sign changes
Fig. 2.3. Indices of eigenvalues undergoing a sign change during coarsening of an N \Theta N finite
element grid with during further coarsening of the next coarser
(n \Theta n) grid with
that the maximal number of negative eigenvalues of A h is bounded above by the fine
grid limit
lim .
This is also a bound on the number of sign changes. It can be improved by
examining the eigenvalues of A h more closely. Using the tensor product form of the
operators, we can express these eigenvalues as
1), the indices run from 1 to N and
Consider the requirement - h
so that A h is negative
semidefinite. This is equivalent to
Since the expression - j =- j is monotonically increasing with j, the largest eigenvalue
in d dimensions equals zero if
-N
12d=k. For this value of h, there are
12d) d negative eigen-
values, and on coarser meshes, the problem remains negative definite. Consequently,
none of these
quantities undergo a sign change, giving the bound j \Gamma
of
(2.18).
Figure
2.3 gives an idea of how sign changes are distributed for bilinear elements in
two dimensions. At levels where the changes take place, the indices of the eigenvalues
lie in a curved strip in the two-dimensional plane of indices. Typically, there is one
level where the majority of sign changes occur. As k is increased and h decreased
correspondingly via (2.5), the shape of these strips remains fixed but the number of
indices contained in them grows like O(h \Gammad however, that (2.5) is
not needed for the analysis.) The behavior for finite differences is similar.
The remedy suggested in [11] for these difficulties consists of maintaining an
approximation of the eigenspace V H of the troublesome eigenvalues. A projection
scheme is then used to orthogonalize the coarse grid correction against V H , and the
coefficients of the solution for this problematic space are obtained separately. Since
it involves an explicit separate treatment of the problematic modes, this approach is
restricted to cases where there are only very a small number of these.
3. Incorporation of Krylov Subspace Methods. In view of the observations
about smoothing in Section 2.1 and coarse grid correction in Section 2.2, we modify
the standard multigrid method in the following way to treat Helmholtz problems:
ffl To obtain smoothers that are stable and still provide a strong reduction of
oscillatory components, we use Krylov subspace iteration such as GMRES as
smoothers on intermediate grids.
ffl To handle modes with eigenvalues that are either close to the origin on all
grids-and hence belong to modes not sufficiently damped on any grid-or
that cross the imaginary axis and are thus treated incorrectly by some coarse
grid corrections, we add an outer iteration; that is, we use multigrid as a
preconditioner for a GMRES iteration for (1.4).
We will demonstrate the effectiveness of this approach with a series of numerical
experiments. In all tests the outer iteration is run until the stopping criterion
is satisfied, where Aum is the residual of the mth GMRES iterate and the
norm is the vector Euclidean norm. The multigrid algorithm is a V-cycle in all cases;
the smoothing schedules are specified below.
3.1. GMRES Accelerated Multigrid. We begin with an experiment for the
one-dimensional Helmholtz equation on the unit interval with forcing term
inhomogeneous Dirichlet boundary condition on the left and Sommerfeld
condition on the right. We discretize using linear finite elements on a uniform grid,
where the discrete right hand side f is determined by the boundary conditions. We
apply both a V-cycle multigrid algorithm and a GMRES iteration preconditioned by
the same V-cycle multigrid method. The smoother in these tests is one step of damped
Jacobi iteration for both presmoothing and postsmoothing, using in
(2.6). The initial guess was a vector with normally distributed entries of mean zero
and variance one, generated by the Matlab function randn.
Table
3.1 shows the iteration counts for increasing numbers of levels beginning
with fine grids containing elements and for wave numbers
which correspond to two and four wavelengths in the unit inter-
val, respectively. We observe first that both methods display typical h\Gammaindependent
multigrid behavior until the mesh size on the coarsest grid reaches kh -=2. (With
256 elements, this occurs for coarsest mesh 1=8, and for
coarsest 1=16). At this point both methods require noticeably
more iterations, the increase being much more pronounced in the stand-alone
multigrid case. When yet coarser levels are added, multigrid diverges, whereas the
# levels MG GMRES MG GMRES MG GMRES MG GMRES
Table
Iteration counts for multigrid V-cycle as a stand-alone iteration and as a preconditioner for
GMRES applied to the one-dimensional model Helmholtz problem, with damped Jacobi smoothing.
A dash denotes divergence of the iteration.
128 \Theta 128 elements 256 \Theta 256 elements
# levels MG GMRES MG GMRES MG GMRES MG GMRES
Table
Iteration counts for the two-dimensional problem for fine grids with
128 \Theta 128 and 256 \Theta 256 meshes. A dash denotes divergence of the iteration.
multigrid preconditioned GMRES method again settles down to an h-independent
iteration count, which does, however, increase with k.
Table
3.2 shows the same iteration counts for the two-dimensional Helmholtz
problem on the unit square with a second order absorbing boundary condition (see
[1, 13]) imposed on all four sides and discretized using bilinear quadrilateral finite
elements on a uniform mesh. Since the problem cannot be forced with a radiation
condition on the entire boundary, in this and the remaining examples of Section 3,
an inhomogeneity was imposed by choosing a discrete right hand side consisting of
a random vector with mean zero and variance one, generated by randn. The initial
guess was identically zero. (Trends for problems with smooth right hand sides were
the same.) In addition, for all two-dimensional problems, we use two Jacobi pre- and
postsmoothing steps whenever Jacobi smoothing is used. The damping parameter !
is chosen to maximize damping of the oscillatory modes. For the grids on which we
use damped Jacobi smoothing this optimum value was determined to be 8=9. The
results show the same qualitative behavior as for the one-dimensional problem in that
stand-alone multigrid begins to diverge as coarse levels are added while the GMRES-
accelerated iteration converges in an h-independent number of iterations growing with
# elements on coarsest grid 512 256 128 64
GMRES iterations 152 78 42 25
Table
As more coarse grid information is used, the number of iterations decreases, for the one-dimensional
problem with and a fine grid containing
k, although with a larger number of iterations than in the one-dimensional case.
A natural question is whether corrections computed on the very coarse grids, in
particular those associated with mesh widths larger than 1/10 times the wavelength
any contribution at all towards reducing the error. We investigate this
by repeating the GMRES accelerated multigrid calculations for the one-dimensional
problem with omitting all calculations-be they smoothing
or direct solves-on an increasing number of coarse levels. The results are shown
in
Table
3.3. The leftmost entry of the table shows the iteration counts when no
coarse grid information is used, i.e., for GMRES with preconditioning by two steps of
iteration. Reading from left to right, subsequent entries show the counts when
smoothings on a succession of coarser grids are included, but no computations are
done at grid levels below that of the coarsest grid. For the rightmost entry, a direct
solve was done on the coarsest mesh; this is a full V-cycle computation. The results
indicate that the computations on all grids down to that at level 2, which has eight
elements and only two points per wavelength, still accelerate the convergence of the
outer iteration.
These results show that, although multigrid by itself may diverge, it is nevertheless
a powerful enough preconditioner for GMRES to converge in an h-independent number
of steps. Two additional questions are whether replacing the unstable Jacobi smoother
with a Krylov subspace iteration leads to a convergent stand-alone multigrid method,
and how sensitive convergence behavior is as a function of the wave number k. We
address the former in the following section.
3.2. GMRES as a Smoother. In this section we replace the unstable Jacobi
smoother with GMRES smoothing. We use GMRES on all levels j where kh j -
1=2 and continue using damped Jacobi relaxation when kh choice
is motivated by the discussion at the end of Section 2.1.1, and it ensures that the
largest amplification factor for the Jacobi smoother does not become too large. The
results of Section 2.1.2 show that we could switch back to Jacobi smoothing for very
coarse meshes, but we have not explored this option.
3.2.1. Nonconstant Preconditioners. This introduces a slight complication
with regard to the outer GMRES iteration when multigrid is used as a preconditioner.
The inner GMRES smoothing steps are not linear iterations, and therefore a different
preconditioner is being applied at every step of the outer iteration. A variant of
GMRES able to accommodate a changing preconditioner (known as flexible GMRES
is due to Saad [24]. It requires the following minor modification of the
standard (right preconditioned) GMRES algorithm: if the orthonormal basis of the
(m+1)st Krylov space Km+1 (AM in the case of a constant preconditioner M is
denoted by then the Arnoldi relation AM
Hm
holds with an (m upper Hessenberg matrix ~
Hm . If the preconditioning and
matrix multiplication step
z
is performed with a changing preconditioner results in the modified
Arnoldi relation
The residual vector is now minimized over the space
need no longer be a Krylov space. This requires storing
the vectors fz j g in addition to the orthonormal vectors fv j g, which form a basis of
3.2.2. Hand-Tuned Smoothing Schedules. Numerical experiments with a
fixed number of GMRES smoothing steps at every level did not result in good perfor-
mance. To get an idea of an appropriate smoothing schedule, we proceed as follows.
For given k, we calculate the number o max of FGMRES iterations needed with j-
level multigrid preconditioning, where we use Jacobi smoothing on all grids for which
do a direct solve at the next coarser grid, making j grids in all. We
then replace the direct solve on the coarsest grid of the j-level scheme with GMRES
smoothing on this grid, coupled with a direct solve on the next coarser grid, and
determine the smallest number m j of GMRES smoothing steps required for the outer
iteration to converge in omax steps. For example, for the first line of Table 3.4, 6
outer FGMRES steps were needed for a 5-level scheme, and then m
smoothing steps were needed for the outer iteration of the new 6-level preconditioner
to converge in 6 steps. When the number m j has been determined, we could fix
the number of GMRES smoothing steps to m j on this grid, add one coarser level,
determine the optimal number of GMRES smoothing steps on the coarser grid and
continue in this fashion until the maximal number of levels is reached. This approach
is modified slightly by, whenever possible, trying to reduce the number of smoothings
on finer levels once coarser levels have been added. This is often possible, since replacing
the exact solve on the coarsest grid with several GMRES smoothing steps often
has a regularizing effect, avoiding some damage possibly done by an exact coarse
grid correction in modes whose eigenvalues are not well represented on the coarse
grid. This hand-tuning procedure gives insight into the best possible behavior of this
algorithm.
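A sketch of this hand-tuning loop is given below. The callable outer_iterations, which is assumed to run the outer FGMRES iteration for a given per-level schedule and return its step count, is a hypothetical placeholder for the actual solver; the schedule encoding ('J', 'D', or an integer number of GMRES smoothing steps per level) is likewise an assumption made for this illustration.

```python
def tune_schedule(outer_iterations, schedule, o_max, extra_levels, m_cap=50):
    """Hand-tuning sketch: repeatedly push the direct solve one level down and
    find the smallest GMRES smoothing count that keeps o_max outer steps."""
    for _ in range(extra_levels):
        schedule = schedule[:-1] + [0, 'D']        # replace 'D' by GMRES, add new coarsest 'D'
        for m in range(m_cap + 1):
            schedule[-2] = m                       # try m GMRES smoothing steps on this grid
            if outer_iterations(schedule) <= o_max:
                break
    # (the paper additionally tries to reduce the counts on finer levels afterwards;
    #  that refinement is omitted in this sketch)
    return schedule
```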
In contrast to classical linear smoothers, whose damping properties for different
modes are fixed, the damping properties of GMRES depend on the initial residual. In
particular, since GMRES is constructed to minimize the residual, it damps most strongly
those modes that lead to the largest reduction in the residual norm. For this reason, we
will favor post-smoothing over pre-smoothing to prevent the unnecessary damping of
smoother modes that should be handled by the coarse-grid correction. We do include
two GMRES pre-smoothing steps to avoid overly large oscillatory components in the
residual prior to restricting it to the next lower level, which could otherwise lead to
spurious oscillatory error components being introduced by the coarse grid correction.
The results are shown in Table 3.4. The entry 'D' denotes a direct solve on the
corresponding level and 'J' indicates that damped Jacobi smoothing was used on this
level. Looking at the smoothing schedules, we observe a 'hump' in the number of
GMRES smoothing steps on the first two levels on which GMRES smoothing is used.
Below this, the number decreases and is often zero for the coarsest levels. However,
[Table 3.4: each block corresponds to a grid size (128 × 128 or 256 × 256) and a wave number, and lists, for each number of levels, the hand-tuned smoothing schedule together with iteration counts for standalone multigrid (MG) and multigrid-preconditioned FGMRES; the individual entries are not reproduced here.]

Table 3.4
Manually optimized GMRES smoothing schedule for the two-dimensional model Helmholtz problem: 'J' denotes Jacobi smoothing and 'D' denotes a direct solve. The FGMRES algorithm uses the multigrid V-cycle as a preconditioner.
GMRES smoothing still helps on levels which are extremely coarse with regard to
resolution of the waves: in one of the cases shown in the table, performing three GMRES
smoothing steps on level 4 (which corresponds to 1/2 point per wavelength) still
improves convergence.
We remark that the number of outer iterations in all these tests, for both preconditioned
FGMRES and standalone MG, is the same as for the corresponding two-grid
versions of these methods, so we cannot expect faster convergence with respect to the
wave number k. We also note that the number of iterations for standalone multigrid
is very close to that for FGMRES with multigrid preconditioning. We believe
this is because the relatively large number of GMRES smoothing steps on intermediate
levels eliminates lower frequency errors, and this mitigates the effects of axis
crossings. We will return to this point in Section 3.4.
3.3. A Stopping Criterion Based on L 2 -Sections. Hand tuning as in the
previous section is clearly not suitable for a practical algorithm. In this section, we
develop a heuristic for finite element discretizations that automatically determines
a stopping criterion for the GMRES smoother. This technique is based on an idea
introduced in [29].
We briefly introduce some standard terminology for multilevel methods applied
to second order elliptic boundary value problems on a bounded domain $\Omega \subset \mathbb{R}^2$ (see
[30]). We assume a nested hierarchy of finite element spaces
$$ V_1 \subset V_2 \subset \cdots \subset V_J , $$
in which the largest space $V_J$ corresponds to the grid on which the solution is sought.
We require the $L^2$-orthogonal projections $Q_\ell : L^2(\Omega) \to V_\ell$ defined by
$$ (Q_\ell u, v_\ell) = (u, v_\ell) \quad \text{for all } v_\ell \in V_\ell , $$
where $(\cdot,\cdot)$ denotes the $L^2$-inner product on $\Omega$. Let $\Phi_\ell$ denote the
basis of the finite element space $V_\ell$ of dimension $n_\ell$ used in defining the stiffness and
mass matrices. By the nestedness property $V_\ell \subset V_{\ell+1}$, there exists an $n_{\ell+1} \times n_\ell$
matrix $I_\ell^{\ell+1}$ whose columns contain the coefficients of the basis $\Phi_\ell$ in terms of the
basis $\Phi_{\ell+1}$, so that, writing the bases as row vectors, $\Phi_\ell = \Phi_{\ell+1} I_\ell^{\ell+1}$.
The stopping criterion we shall use for the GMRES smoothing iterations is based
on the representation of the residual $r_\ell$ of an approximate solution $\tilde{u}_\ell$ of the level-$\ell$
equation as the sum of differences of $L^2$-projections,
$$ r_\ell = \sum_{j=1}^{\ell} (Q_j - Q_{j-1})\, r_\ell , \qquad Q_0 := 0 , $$
whose terms $(Q_j - Q_{j-1})\, r_\ell$ we refer to as residual sections. The following result for coercive problems,
which was proven in [29], shows that the error $u_\ell - \tilde{u}_\ell$ is small if each appropriately
weighted residual section is small:
Theorem 3.1. Assume the underlying elliptic boundary value problem is $H^1$-elliptic
and $H^{1+\alpha}$-regular with $\alpha > 0$. Then there exists a constant $c$, independent of
the level $\ell$, such that the $H^1$-norm of the error on level $\ell$ is bounded by a weighted sum
of the norms of the residual sections $(Q_j - Q_{j-1})\, r_\ell$; we refer to this bound as (3.1).
The boundary value problem (1.1)-(1.3) under consideration is not H 1 -elliptic
and therefore does not satisfy the assumptions of this theorem. We have found,
however, that the bound (3.1) suggests a useful stopping criterion: terminate the
GMRES smoothing iteration on level $\ell$ as soon as the residual section $(Q_\ell - Q_{\ell-1})\, r_\ell$
has become sufficiently small. To obtain a formula for the computation of these
sections, assume the residual $r_\ell$ is represented by the coefficient vector $\mathbf{r}_\ell$ in terms of
the dual basis of $\Phi_\ell$. The representation of $Q_{\ell-1} r_\ell$ with respect to the dual basis of
$\Phi_{\ell-1}$ is then given by the coefficient vector $(I_{\ell-1}^{\ell})^T \mathbf{r}_\ell$.
Returning to the representation with respect to the basis $\Phi_{\ell-1}$ requires multiplication
by the inverse of the mass matrix $M_{\ell-1}$, so that we obtain
$M_{\ell-1}^{-1} (I_{\ell-1}^{\ell})^T \mathbf{r}_\ell$ as the coefficient vector of $Q_{\ell-1} r_\ell$ with respect to $\Phi_{\ell-1}$.
If the sequence of triangulations underlying the finite element spaces $V_\ell$ is quasi-uniform,
then the mass matrix of level $\ell$ is uniformly equivalent to the identity scaled
by $h_\ell^d$, where $d$ denotes the dimension of the domain. For the case $d = 2$ under
consideration, this means that the Euclidean inner product on the coordinate space $\mathbb{R}^{n_\ell}$,
denoted by $(\cdot,\cdot)_E$, when scaled by $h_\ell^2$, is uniformly equivalent (with respect to
the mesh size) to the $L^2$-inner product on $V_\ell$. Therefore, the associated norms satisfy
$$ c\, h_\ell^2\, (\mathbf{v}_\ell, \mathbf{v}_\ell)_E \;\le\; \| v_\ell \|_{L^2}^2 \;\le\; C\, h_\ell^2\, (\mathbf{v}_\ell, \mathbf{v}_\ell)_E , $$
where $\mathbf{v}_\ell$ is the coordinate vector of $v_\ell$ with respect to $\Phi_\ell$. Using this norm equivalence
it is easily shown that the $L^2$-norms of the residual sections are bounded above and below,
uniformly for all levels $\ell$ and up to constants $c$ and $C$, by suitably scaled Euclidean norms
of coefficient vectors built from $\mathbf{r}_\ell$ and the transfer matrices $I_{\ell-1}^{\ell}$. As a result, the residual sections
may be computed sufficiently accurately without the need for inverting mass matrices.
In [29], it was suggested that the GMRES smoothing iteration for a full multigrid
cycle be terminated as soon as the residual section on the given level is on the order
of the discretization error on that level. For the problem under consideration here, we
shall use the relative reduction of L 2 -sections as a stopping criterion, so that roughly
an equal error reduction for all modes is achieved in one V-cycle. On the first level
on which GMRES smoothing is used, we have the additional difficulty that many
eigenvalues may be badly approximated on the next-coarser level. For this reason,
it is better to also smooth the oscillatory modes belonging to the next lower level
and base the stopping criterion on the residual section $(Q_\ell - Q_{\ell-2})\, r_\ell$. We
use this 'safer' choice on all levels. Numerical experiments with optimal smoothing
schedules have shown the relative reduction of this residual section to scale like $k h_\ell$,
so that we arrive at the stopping criterion (3.2): terminate the GMRES smoothing on level $\ell$
as soon as the (scaled Euclidean) norm of this residual section has been reduced, relative to its
value at the start of the smoothing on that level, by a factor proportional to $k h_\ell$.
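In code, this test amounts to monitoring the chosen residual section inside the smoothing loop. The sketch below assumes hypothetical helpers gmres_step (one GMRES smoothing step returning the updated residual) and section_norm (the scaled Euclidean norm of the monitored L²-section), and a proportionality constant delta in the kh_l reduction factor; none of these names come from the paper.

```python
def smooth_until_section_reduced(gmres_step, section_norm, r, k, h_l, m_max, delta=1.0):
    """Smoothing loop on one level: stop once the monitored residual section has
    been reduced by a factor ~ delta*k*h_l (cf. (3.2)), or after m_max GMRES steps."""
    s0 = section_norm(r)
    for _ in range(m_max):
        r = gmres_step(r)
        if section_norm(r) <= delta * k * h_l * s0:
            break
    return r
```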
A complete description of the multigrid V-cycle algorithm starting on the finest
level ' is as follows:
Algorithm 3.1. (V-cycle with GMRES smoothing on coarse levels) Given f_l and an initial guess u_l on level l, compute ~u_l as follows:
if l is the coarsest level then
   solve the level-l equation directly to obtain ~u_l
else
   if k h_l < 1/2 then
      perform nu_1 steps of damped Jacobi smoothing to obtain u_l^(1)
   else
      perform 2 steps of GMRES smoothing to obtain u_l^(1)
   endif
   restrict the residual of u_l^(1) to level l-1, apply the algorithm recursively to obtain
      a coarse-grid correction, prolongate it, and add it to u_l^(1)
   if k h_l < 1/2 then
      perform nu_2 steps of damped Jacobi smoothing to obtain ~u_l
   else
      perform GMRES smoothing until stopping criterion (3.2) is satisfied
         (or a prescribed maximal number of steps is reached) to obtain ~u_l
   endif
endif
In the standalone multigrid V-cycle, Algorithm 3.1 is used recursively beginning
with the finest level and iterated until the desired reduction of the relative residual is
achieved on the finest level. In the FGMRES variant, Algorithm 3.1 represents the
action of the inverse of a preconditioning operator applied to the vector $f_\ell$.
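For concreteness, a compact recursive sketch of this V-cycle is given below. The storage layout (a list of level matrices A and prolongations P, with restriction taken as the transpose of P) and the smoother interface are assumptions made only for this illustration; the smoother callable is expected to implement the damped-Jacobi/GMRES choice and the stopping rule described above.

```python
import numpy as np

def v_cycle(l, A, P, f, u, smoother):
    """Sketch of a V-cycle in the spirit of Algorithm 3.1 (layout is an assumption):
       A[l] -- system matrix on level l (l = 0 is the coarsest grid)
       P[l] -- prolongation from level l-1 to level l
       smoother(l, A_l, f_l, u_l, stage) -- applies the level-l pre- or post-smoother."""
    if l == 0:
        return np.linalg.solve(A[0], f)            # direct solve on the coarsest grid
    u = smoother(l, A[l], f, u, 'pre')             # 2 GMRES steps or damped Jacobi steps
    r = f - A[l] @ u
    e_coarse = v_cycle(l - 1, A, P, P[l].T @ r,    # restrict residual (transpose of P)
                       np.zeros(P[l].shape[1]), smoother)
    u = u + P[l] @ e_coarse                        # coarse-grid correction
    u = smoother(l, A[l], f, u, 'post')            # post-smoothing, criterion (3.2) or m_max
    return u
```

Used as a preconditioner, a call such as apply_prec = lambda v: v_cycle(J, A, P, v, np.zeros_like(v), smoother), with J the finest-level index, supplies the changing preconditioner required by the FGMRES sketch of Section 3.2.1.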
3.4. Experiments with Automated Stopping Criterion. We now show how
the multigrid solver and preconditioner perform with the automated stopping criterion
for GMRES smoothing. Each method is applied to the two-dimensional Helmholtz
problem on the unit square with second-order absorbing boundary condition and
random right hand side data. In these tests, we used a fixed tolerance in (3.2), and we
also imposed an upper bound m_max on the number of GMRES smoothing steps,
terminating the smoothing if the stopping criterion is not satisfied after m_max steps;
we tested two values of m_max. On levels where damped
Jacobi smoothing is used, the number of pre-smoothings and post-smoothings was 2.
[Table 3.5: iteration counts, with rows indexed by the fine grid size N and columns by the wave numbers k = 2π, 4π, 8π, 16π, 32π, 64π, for standalone MG (left block) and MG-preconditioned FGMRES (right block); the individual counts are not reproduced here.]

Table 3.5
Iteration counts for standalone multigrid and multigrid-preconditioned FGMRES for various
fine grid sizes and wave numbers. In all cases, GMRES smoothing is performed on levels for which
kh > 1/2 and the smoothing is terminated by the L²-section stopping criterion or when m_max
smoothing steps are reached.
This count was extrapolated from the maximum of 47 steps that memory constraints permitted.
We present three sets of results. Table 3.5 shows iteration counts for a variety
of wave numbers and mesh sizes. Table 3.6 examines performance in more detail
by showing the automatically generated smoothing schedules for two wave numbers.
Finally, to give an idea of efficiency, Table 3.7 shows an estimate
for the operation counts required for the problems treated in Table 3.6.
[Table 3.6: blocks of rows, each giving the grid, the number of levels, the automatically generated smoothing schedule, and the outer iteration count for a particular wave number and value of m_max; for example, one 128 × 128, 7-level schedule reads J J 17 17 11 2 D with 13 iterations. The remaining entries are not reproduced here.]

Table 3.6
Smoothing schedules with automated stopping criterion, for selected parameters.
We make the following observations on these results:
• For low wave numbers, the number of iterations of standalone multigrid is
close to that for FGMRES. The difference increases as the wave number
increases, especially for the case m_max = 20. For large enough k, multigrid fails
to converge whereas MG-preconditioned FGMRES is robust. This behavior
is explained by the results of Section 2.2.2. For large wave numbers, the increased
number of amplified modes eventually causes standalone multigrid to
fail; a larger number of smoothing steps mitigates this difficulty, presumably
by eliminating some smooth errors. The (outer) FGMRES iteration handles
this situation in a robust manner.
• The automated stopping criterion leads to smoothing schedules close to those
obtained by hand tuning (see Table 3.4), and correspondingly similar outer
iteration counts.
• The operation counts shown in Table 3.7 suggest that MG-preconditioned
FGMRES is more efficient than standalone multigrid even when the latter
method is effective.

Grid        MG     FGMRES    MG      FGMRES
64 × 64     13.2   13.3      -       -
128 × 128   24.0   22.1      -       -
256 × 256   61.2   43.2      1091.2  971.1
512 × 512   196.6  148.1     1418.1  1377.8

Table 3.7
Operation counts (in millions) for the selected parameters of Table 3.6.
• For fixed wave number, outer iteration counts are mesh independent, so that
standard "multigrid-like" behavior is observed. Moreover, because Jacobi
smoothing is less expensive than GMRES smoothing, during the initial stages
of mesh refinement the costs per unknown are increasing at less than a linear
rate.
• The growth in outer iteration counts with increasing wave number is slower
than linear in k. The operation counts increase more rapidly, however, because
of the increased number of smoothing steps required for larger wave
numbers.
4. Application to an Exterior Problem. As a final example we apply the
algorithm to an exterior scattering problem for the Helmholtz equation as given in
(1.1)-(1.3). The domain $\Omega$ consists of the exterior of an ellipse, bounded externally by
a circular artificial boundary $\Gamma_1$ on which we impose the exact nonlocal Dirichlet-to-
Neumann (DtN) boundary condition (see [19]). The source function is zero; the forcing
is due to the boundary condition on the boundary $\Gamma$ of the scatterer, given
in terms of data $g(x,y)$ representing a plane wave incident at angle $\alpha$
to the positive x-axis. The solution u represents the scattered field associated with the
obstacle and incident field g; the resulting total field u+g then satisfies a homogeneous
Dirichlet or Neumann boundary condition on \Gamma, respectively. An angle of incidence
was chosen to avoid a symmetric solution. The problems were discretized
using linear finite elements beginning with a very coarse mesh which is successively
refined uniformly to obtain a hierarchy of nested finite element spaces. The finest
mesh, obtained after five refinement steps, contains 32768 degrees of freedom. Several
combinations of k and h were tested, where in each case kh < 0.5 on the finest
mesh.
Figure 4.1 shows a contour plot of the solution u of the Dirichlet problem for
k = 8π. The computations make use of the PDE Toolbox of the Matlab 5.3 computing
environment.
The problems were solved using both the standalone and FGMRES-accelerated
versions of multigrid, with GMRES smoothing using the residual-section stopping
criterion, an outer stopping criterion requiring reduction of the relative residual by a
factor of 10^{-6} as in Section 3, and zero initial guess. In all examples, we used the
maximal number of levels, with the exception of one Dirichlet problem, for which
we also varied the number of levels from six down to two. The results are
shown in Table 4.1. The table gives the wave number k and the length of the ellipse
measured in wavelengths 2π/k. The third column gives the maximum value of kh on
the finest mesh and the fourth column indicates the number of levels used in each
computation. The last two columns list the iteration counts.

[Table 4.1: one block of rows for the Dirichlet problem and one for the Neumann problem; the individual entries are not reproduced here.]

Table 4.1
Iteration counts for the exterior scattering problem with Dirichlet or Neumann plane wave data
on the boundary of an ellipse for various wave numbers, grid sizes and numbers of levels.
We observe that the preconditioned iteration performs well in all cases, with a
growth in number of iterations slower than linear in k. The standalone multigrid
variant performs less well in comparison, requiring more than 100 steps to converge
in several cases and even diverging in one case. This is particularly the case for the
Neumann problem, where the superiority of the preconditioned variant is even more
pronounced. For the Neumann problems we also notice a slight growth in iteration
counts for fixed k and decreasing h.
5. Conclusions. The results of this paper show that the addition of Krylov
subspace iteration to multigrid, both as a smoother and as an outer accelerating
procedure, enables the construction of a robust multigrid algorithm for the Helmholtz
equation. GMRES is an effective smoother for grids of intermediate coarseness, in that
it appears not to amplify any error modes and in addition tends to have a regularizing
effect on the contribution to the coarse grid correction coming from smoothing on a
given level. The combination of our multigrid algorithm as a preconditioner with
FGMRES is effective in handling the deficiencies of standard multigrid methods for
the Helmholtz equation, and the outer FGMRES acceleration is necessary particularly
for high wave numbers. In addition, results in the paper indicate that grids too coarse
to result in a meaningful discretization of the Helmholtz equation may still provide
some useful information for coarse-grid corrections. Using an automated stopping
criterion based on L 2 -sections of the residual leads to smoothing cycles that are close
to hand-tuned optimal smoothing schedules.
An important aspect of our algorithm is that it consists of familiar building blocks
and is thus easily implemented. For very large wave numbers, for which the discretization
must not only keep kh but also k³h² small, the grid hierarchy will contain more
grids fine enough to use Jacobi smoothing, thus making the algorithm more efficient.

Fig. 4.1. Contour plot of the solution of the Dirichlet problem with wave number k = 8π.
The result is a multigrid method that appears to converge with a rate independent of
the mesh size h and with only moderate dependence on the wave number k. Finally,
the numerical results show that we are able to effectively solve Helmholtz problems
with wave numbers of practical relevance.
REFERENCES
A comparison of two multilevel iterative methods for nonsymmetric and indefinite elliptic finite element equations
Sharp estimates for multigrid rates of convergence with general smoothing and acceleration
An iterative method for the Helmholtz equation
The cascadic multigrid method for elliptic problems
On the combination of the multigrid method and conjugate gradients
The analysis of multigrid algorithms for nonsymmetric and indefinite elliptic problems
Multigrid methods for nearly singular and slightly indefinite
Absorbing boundary conditions for the numerical simulation of waves
Multigrid preconditioners applied to the iterative solution of singularly perturbed elliptic boundary value problems and scattering problems
Finite element method for the Helmholtz equation in an exterior domain
Exact non-reflecting boundary conditions
Analysis and comparison of relaxation schemes in robust multigrid and conjugate gradient methods
A Multigrid Preconditioner for Stabilised Discretisations of Advection-Diffusion Problems
Iterative Methods for Sparse Linear Systems
GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems
An observation concerning Ritz-Galerkin methods with indefinite bilinear forms
Some estimates of the rate of convergence for the cascadic conjugate-gradient method
Multigrid techniques for highly indefinite equations
On the performance of Krylov subspace iterations as smoothers in multigrid methods
A new class of iterative methods for nonselfadjoint or indefinite problems
On the multilevel splitting of finite element spaces for indefinite elliptic boundary value problems
Keywords: multigrid; Helmholtz equation; Krylov subspace methods.
587201 | On the Solution of Equality Constrained Quadratic Programming Problems Arising in Optimization. | We consider the application of the conjugate gradient method to the solution of large equality constrained quadratic programs arising in nonlinear optimization. Our approach is based implicitly on a reduced linear system and generates iterates in the null space of the constraints. Instead of computing a basis for this null space, we choose to work directly with the matrix of constraint gradients, computing projections into the null space by either a normal equations or an augmented system approach. Unfortunately, in practice such projections can result in significant rounding errors. We propose iterative refinement techniques, as well as an adaptive reformulation of the quadratic problem, that can greatly reduce these errors without incurring high computational overheads. Numerical results illustrating the efficacy of the proposed approaches are presented. | Introduction
A variety of algorithms for nonlinearly constrained optimization [7, 8, 12, 29, 31] use
the conjugate gradient (CG) method [25] to solve subproblems of the form
minimize
x
subject to
In nonlinear optimization, the n-vector c usually represents the gradient rf of the objective
function or the gradient of the Lagrangian, the n \Theta n symmetric matrix H stands for either
the Hessian of the Lagrangian or an approximation to it, and the solution x represents a
search direction. The equality constraints (1.2) are obtained by linearizing the constraints
of the optimization problem at the current iterate. We will assume here that A is an m \Theta n
that A has full row rank so that the constraints (1.2) constitute
linearly independent equations. We also assume for convenience that H is positive
definite in the null space of the constraints, as this guarantees that (1.1)-(1.2) has a unique
solution. This positive definiteness assumption is not needed in trust region methods, but
our discussion will also be valid in that context because trust region methods normally
terminate the CG iteration as soon as negative curvature is encountered (see [36, 38], and,
by contrast, [23]).
The use of an iterative method such as CG is attractive in large scale optimization
because, when the number of variables is large, it can be cost effective to solve (1.1)-
(1.2) approximately, and only increase the accuracy of the solution as the iterates of the
optimization algorithm approach the minimizer. In addition, the properties of the CG
method merge very well with the requirements of globally convergent optimization methods
(see e.g. [36]). In this paper we study how to apply the preconditioned CG method to (1.1)-
(1.2) so as to keep the computational cost at a reasonable level while ensuring that rounding
errors do not degrade the performance of the optimization algorithm.
The quadratic program (1.1)-(1.2) can be solved by computing a basis Z for the null
space of A, using this basis to eliminate the constraints, and then applying the CG method
to the reduced problem. We will argue, however, that due to the form of the preconditioners
used in practice, the explicit use of Z will cause the iteration to be very expensive, and that
significant savings can be achieved by means of approaches that bypass the computation of
Z altogether. The price to pay for these alternatives is that they can give rise to excessive
roundoff errors that can slow the optimization iteration and may even prevent it from
converging.
As we shall see, these errors cause the constraints (1.2) not to be satisfied to the desired
accuracy. We describe iterative refinement techniques that can improve the accuracy of the
solution in highly ill-conditioned problems. We also propose a mechanism for redefining
the vector c adaptively that does not change the solution of the quadratic problem but that
has more favorable numerical properties.
Notation. Throughout the paper k \Delta k stands for the ' 2 matrix or vector norm, while the
G-norm of the vector x is defined to be
x T Gx, where G is a given positive-definite
matrix. We will denote the floating-point unit roundoff (or machine precision) by ffl m . We
let -(A) denote the condition number of A, i.e.
are the nonzero singular values of A.
2. The CG method and linear constraints
A common approach for solving linearly constrained problems is to eliminate the constraints
and solve a reduced problem (c.f. [17, 20]). More specifically, suppose that Z is
an n \Theta (n \Gamma m) matrix spanning the null space of A. Then the columns of A T
together with the columns of Z span R n , and any solution x of the linear equations (1.2)
can be written as
for some vectors x A
. The constraints (1.2) yield
which determines the vector x A
. Substituting (2.1) into (1.1), and omitting constant terms
is a constant now) we see that x Z
solves the reduced problem
minimize
x Z2
where
As we have assumed that the reduced Hessian H ZZ is positive definite, (2.3) is equivalent
to the linear system
We can now apply the conjugate gradient method to compute an approximate solution of
the problem (2.3), or equivalently the system (2.4), and substitute this into (2.1) to obtain
an approximate solution of the quadratic program (1.1)-(1.2).
This strategy of computing the normal component A T x A exactly and the tangential
component Zx Z inexactly is compatible with the requirements of many nonlinear optimization
algorithms which need to ensure that, once linear constraints are satisfied, they remain
so throughout the remainder of the optimization calculation (cf. [20]).
Let us now consider the practical application of the CG method to the reduced system
(2.4). It is well known that preconditioning can improve the rate of convergence of the
CG iteration (c.f. [1]), and we therefore assume that a preconditioner W ZZ is given. W ZZ
is a symmetric, positive definite matrix of dimension which might be chosen to
reduce the span of, and to cluster, the eigenvalues of W \Gamma1
or could be the result
of an automatic scaling of the variables [7, 29]. Regardless of how W ZZ is defined, the
preconditioned conjugate gradient method applied to (2.4) is as follows (see, e.g. [20]).
Algorithm I. Preconditioned CG for Reduced Systems.
Choose an initial point x Z , compute r
ZZ
r Z and p \Gammag Z .
Repeat the following steps, until a termination test is satisfied:
r Z
Z
ZZ
r Z
\Gammag Z
Z / g Z
and r Z / r Z
This iteration may be terminated, for example, when r Z
ZZ
r Z is sufficiently small.
Once an approximate solution is obtained, it must be multiplied by Z and substituted in
(2.1) to give the approximate solution of the quadratic program (1.1)-(1.2). Alternatively,
we may rewrite Algorithm I so that the multiplication by Z and the addition of the term
A T x A
is performed explicitly in the CG iteration. To do so, we introduce, in the following
algorithm, the n-vectors x;
Algorithm II Preconditioned CG (in Expanded Form) for Reduced Systems.
Choose an initial point x satisfying (1.2), compute
\Gammag. Repeat the following steps, until a convergence test is satisfied:
x
This will be the main algorithm studied in this paper. Several types of stopping tests
can be used, but since their choice depends on the requirements of the optimization method,
we shall not discuss them here. In the numerical tests reported in this paper we will use
the quantity r T
ZZ
r Z to terminate the CG iteration.
Note that the vector g, which we call the preconditioned residual, has been explicitly
defined to be in the range of Z. As a result, in exact arithmetic, all the search directions
generated by Algorithm II will also lie in the range of Z, and thus the iterates x will all
satisfy (1.2). Rounding errors when computing (2.17) may cause p to have a component
outside the range of Z, but this component will normally be too small to cause difficulties.
3. Implementation of the Projected CG Method
Algorithm II constitutes an effective method for computing the solution to (1.1)-(1.2)
and has been successfully used in various algorithms for large scale optimization (cf. [16, 28,
39]). The main drawback is the need for a null-space basis matrix Z, whose computation
and manipulation can be costly, and which can sometimes give rise to unnecessary ill-conditioning
[9, 10, 18, 24, 33, 37]. These difficulties will become apparent when we describe
practical procedures for computing Z and when we consider the types of preconditioners
W ZZ used in practice. Let us begin with the first issue.
3.1. Computing a basis for the null space
There are many possible choices for the null-space matrix Z. Possibly the best strategy
is to choose Z so as to have orthonormal columns, for this provides a well conditioned
representation of the null space of A. However computing such a null-space matrix can be
very expensive when the number of variables is large; it essentially requires the computation
of a sparse LQ factorization of A and the implicit or explicit generation of Q, which has
always been believed to be rather expensive when compared with the alternatives described
in [24]. Recent research [30, 35] has suggested that it is in fact possible to generate Q
as a product of sparse Householder matrices, and that the cost of this may, after all,
be reasonable. We have not experimented with this approach, however, because, to our
knowledge, general purpose software implementing it is not yet available.
Another possibility is to try to compute a basis of the null-space which involves as
few nonzeros as possible. Although this problem is computationally hard [9], sub-optimal
heuristics are possible but still rather expensive [10, 18, 33, 37].
A more economical alternative is based on simple elimination of variables [17, 20]. To
define Z we first group the components of x into m basic or dependent variables (which for
simplicity are assumed to be the first m variables) and
and partition A as
where the m \Theta m basis matrix B is assumed to be nonsingular. Then we define
\GammaB
I
which clearly satisfies and has linearly independent columns. In practice Z is not
formed explicitly; instead we compute and store sparse LU factors [13] of B, and compute
products of the form Zv and Z T v by means of solves using these LU factors. Ideally we
would like to choose a basis B that is as sparse as possible and whose condition number is
not significantly worse than that of A, but these requirements can be difficult to achieve. In
simply ensuring that B is well conditioned can be difficult when the task of choosing
a basis is delegated to a sparse LU factorization algorithm such as MA48 [15]. Some recent
codes (see, e.g., [19]) have been designed to compute a well-conditioned basis, but it is not
known to us to what extent they reach their objective.
3.2. Preconditioning
These potential drawbacks of the null-space basis (3.1) are not sufficiently serious to
prevent its effective use in Algorithm II. However, when considering practical choices for
the preconditioning matrix W ZZ , one exposes the weaknesses of this approach. Ideally, one
would like to choose W ZZ so that W \Gamma1
thus
ZZ
is the perfect preconditioner. However, it is unlikely that Z T HZ or its inverse are sparse
matrices, and even if Z T HZ is of small dimension, forming it can be quite costly. Therefore
operating with this ideal preconditioner is normally out of the question.
In this paper we consider preconditioners of the form
ZZ
where G is a symmetric matrix such that Z T GZ is positive definite. Some suggestions on
how to choose G have been made in [32]. Two particularly simple choices are
The first choice is appropriate when H is dominated by its diagonal. This is the case, for
example, in barrier methods for constrained optimization that handle bound constraints
l - x - u by adding terms of the form \Gamma-
to the objective
function, for some positive barrier parameter -. The choice I arises in several trust
region methods for constrained optimization [7, 12, 29], where the preconditioner (which
derives from a change of variables) is thus given by
ZZ
Regardless of the choice of G, the preconditioner (3.3) requires operations with the
inverse of the matrix Z T GZ. In some applications [16, 39] Z, defined by (3.1), has a simple
enough structure that forming and factorizing the (n \Gamma m) \Theta (n \Gamma m) matrix Z T GZ is not
expensive when G has a simple form. But if the LU factors of B are not very sparse and
the number of constraints m is large, forming Z T GZ may be rather costly, even if
as it requires the solution of 2m triangular systems with these LU factors. In this case it is
preferable not to form Z T GZ, but rather compute products of the form (Z
solving (Z T using the CG method. This inner CG iteration has been employed
in [29] with I, and can be effective on some problems-particularly if the number
of degrees of freedom, very small. But it can fail when Z is badly conditioned
and tends to be expensive. Moreover, since the matrix Z T GZ is not known explicitly, it is
difficult to construct effective preconditioners for accelerating this inner CG iteration.
In summary when the preconditioner has the form (3.3), and when Z is defined by means
of (3.1), the computation (2.15) of the preconditioned residual g is often so expensive as
to dominate the cost of the optimization algorithm. The goal of this paper is to consider
alternative implementations of Algorithm II whose computational cost is more moderate
and predictable. Our approach is to avoid the use of the null-space basis Z altogether.
3.3. Computing Projections
To see how to bypass the computation of Z, let us begin by considering the simple case
when so that the preconditioner W ZZ is given by (3.4). If P Z denotes the orthogonal
projection operator onto the null space of A,
then the preconditioned residual (2.15) can be written as
This projection can be performed in two alternative ways.
The first is to replace P Z by the equivalent formula
and thus to replace (3.6) with
We can express this as
is the solution of
Noting that (3.10) are the normal equations, it follows that v + is the solution of the least
squares problem
minimize
and that the desired projection g + is the corresponding residual. This approach can be
implemented using a Cholesky factorization of AA T .
The second possibility is to express the projection (3.6) as the solution of the augmented
system /
I A T
!/
r +!
This system can be solved by means of a symmetric indefinite factorization that uses 1 \Theta 1
and 2 \Theta 2 pivots [21].
Let us suppose now that the preconditioner has the more general form (3.3). The
preconditioned residual (2.15) now requires the computation
This may be expressed as
if G is non-singular, and can be found as the solution of
G A T
!/
r +!
whenever Z T GZ is non-singular (see, e.g., [20, Section 5.4.1]). While (3.14) is far from
appealing when G \Gamma1 does not have a simple form, (3.15) is a useful generalization of (3.12).
Clearly the system (3.12) may be obtained from (3.15) by setting I, and the perfect
preconditioner results if other choices for G are also possible; all that is required
is that Z T GZ be positive definite. The idea of using the projection (3.7) in the CG method
dates back to at least [34]; the alternative (3.15), and its special case (3.12), are proposed
in [8], although [8] unnecessarily requires that G be positive definite. A more recent study
on preconditioning the projected CG method is [11].
Hereafter we shall write (2.15) as
where P is any of the projection operators we have mentioned above.
Note that (3.8), (3.12) and (3.15) do not make use of the null space matrix Z and only
require factorization of matrices involving A. Unfortunately they can give rise to significant
round-off errors, particularly as the CG iterates approach the solution. The difficulties are
caused by the fact that as the iterations proceed, the projected vector
increasingly small while r does not. Indeed, the optimality conditions of the quadratic
program (1.1)-(1.2) state that the solution x satisfies
for some Lagrange multiplier vector -. The vector Hx + c, which is denoted by r in
Algorithm II, will generally stay bounded away from zero, but as indicated by (3.16), it
will become increasingly closer to the range of A T . In other words r will tend to become
orthogonal to Z, and hence, from (3.13), the preconditioned residual g will converge to zero
so long as the smallest eigenvalue of Z T GZ is bounded away from zero.
That this discrepancy in the magnitudes of will cause numerical difficulties
is apparent from (3.9), which shows that significant cancellation of digits will usually
take place. The generation of harmful roundoff errors is also apparent from (3.12)/(3.15)
because will be small while the remaining components v + remain large. Since the magnitude
of the errors generated in the solution of (3.12)/(3.15) is governed by the size of the
large component v + , the vector g + will contain large relative errors. These arguments will
be made more precise in the next section.
Example 1.
We applied Algorithm II to solve problem CVXEQP3 from the CUTE collection [4],
with In this and all subsequent experiments, we use the simple
preconditioner (3.4) corresponding to the choice used both the normal equations
(3.8) and augmented system (3.12) approaches to compute the projection. The results are
given in Figure 1, which plots the residual
r T g as a function of the iteration number. In
both cases the CG iteration was terminated when r T g became negative, which indicates
that severe errors have occurred since r T
must be positive-continuing the
iteration past this point resulted in oscillations in the norm of the gradient without any
significant improvement. At iteration 50 of both runs, r is of order 10 5 whereas its projection
g is of
Figure
also plots the cosine of the angle between the preconditioned residual g and
the rows of A. More precisely, we define
A T
where A i is the i-th row of A. Note that this cosine, which should be zero in exact arithmetic,
increases indicating that the CG iterates leave the constraint manifold
Severe errors such as these are not uncommon in optimization calculations; see x7 and
[27]. This is of grave concern as it may cause the underlying optimization algorithms to
behave erratically or fail.
In this paper we propose several remedies. One of them is based on an adaptive redefinition
of r that attempts to minimize the differences in magnitudes between
. We also describe several forms of iterative refinement for the projection operation. All
these techniques are motivated by the roundoff error analysis given next.
4. Analysis of the Errors
We now present error bounds that support the arguments made in the previous section,
particularly the claim that the most problematic situation occurs in the latter stages of the
PCG Augmented System
Iteration
resid
cos
PCG Normal Equations
Iteration
resid
cos
Figure
1: Conjugate gradient method with two options for the projection
iteration when g + is converging to zero, but r + is not. For simplicity, we shall assume
henceforth that A has been scaled so that shall only consider the
simplest possible preconditioner, as opposed to exact, quantity will
be denoted by a subscript c.
Let us first consider the normal equations approach. Here is given by (3.9)
where (3.10) is solved by means of the Cholesky factorization of AA T . In finite precision,
instead of the exact solution v + of the normal equations we obtain v
the error \Deltav
with . Recall that ffl m denotes unit roundoff and -(A) the condition number of
A.
We can now study the total error in the projection vector g + . To simplify the analysis,
we will ignore the errors that arise in the computation of the matrix-vector product A T v
and in the subtraction given in (3.9), because these errors will be dominated by
the error in v + whose magnitude is estimated by (4.1). Under these assumptions, we have
from (3.9) that the computed projection
and the exact projection
1 The bound (4.1) assumes that there are no errors in the formation of AA T and Ar + , or in the backsolves
using the Cholesky factors; this is a reasonable assumption in our context. We should also note that (4.1)
can be sharpened by replacing the term possible
diagonal scalings D.
and thus the error in the projection lies entirely in the range of A T . We then have from
(4.1) that the relative error in the projection satisfies
This error can be significant when -(A) is large or when
is large.
Let us consider the ratio (4.4) in the case when kr much larger than its projection
We have from (3.9) that kr and by the assumption that
Suppose that the inequality above is achieved. Then (4.4) gives
which is simpler to interpret than (4.4). We can thus conclude that the error in the projection
(4.3) will be large when either -(A) or the ratio kr large.
When the condition number -(A) is moderate, the contribution of the ratio (4.4) to
the relative error (4.3) is normally not large enough to cause failure of the optimization
calculation. But as the condition number -(A) grows, the loss of significant digits becomes
severe, especially since -(A) appears squared in (4.3). In Example 1,
and we have mentioned that the ratio (4.4) is of order O(10 6 ) at iteration 50. The bound
(4.3) indicates that there could be no correct digits in g + , at this stage of the CG iteration.
This is in agreement with our test, for at this point the CG iteration could make no further
progress.
Let us now consider the augmented system approach (3.15). Again we will focus on the
choice I, for which the preconditioned residual is computed by solving
I A T
!/
r +!
using a direct method. There are a number of such methods, the strategies of Bunch and
Kaufman [5] and Duff and Reid [14] being the best known examples for dense and sparse
matrices, respectively. Both form the LDL T factorization of the augmented matrix (i.e.
the matrix appearing on the left hand side of (4.5)), where L is unit lower triangular and
D is block diagonal with 1 \Theta 1 or 2 \Theta 2 blocks.
This approach is usually (but not always) more stable than the normal equations ap-
proach. To improve the stability of the method, Bj-orck [2] suggests introducing a parameter
ff and solving the equivalent system
!/
r +!
An error analysis [3] shows that
where j depends on n and m and in the growth factor during the factorization, and oe 1 -
are the nonzero singular values of A. It is important to notice that now
-(A)-and not - 2 (A)-enters in the bound. If ff - oe m (A), this method will give a solution
that is never much worse than that obtained by a tight perturbation analysis, and therefore
can be considered stable for practical purposes. But approximating oe m (A) can be difficult,
and it is common to simply use
In the case which concerns us most, when kg converges to zero while kv
the term inside the last square brackets in (4.7) is approximately kv + k, and we obtain
where we have assumed that ff = 1. It is interesting to compare this bound with (4.3). We
see that the ratio (4.4) again plays a crucial role in the analysis, and that the augmented
system approach is likely to give a more accurate solution than the method of normal
equations in this case. This cannot be stated categorically, however, since the size of the
factor j is difficult to predict.
The residual update strategy described in x6 aims at minimizing the contribution of
the ratio (4.4), and as we will see, has a highly beneficial effect in Algorithm II. Before
presenting it, we discuss various iterative refinement techniques designed to improve the
accuracy of the projection operation.
5. Iterative Refinement
Iterative refinement is known as an effective procedure for improving the accuracy of a
solution obtained by a method that is not backwards stable. We will now consider how to
use it in the context of our normal equations and augmented system approaches.
5.1. Normal Equations Approach
Let us suppose that we choose I and that we compute the projection P A r
the normal equations approach (3.9)-(3.10). An appealing idea for trying to improve the
accuracy of this computation is to apply the projection repeatedly. Therefore rather than
computing in (2.15), we let where the projection is
applied as many times as necessary to keep the errors small. The motivation for this multiple
projections technique stems from the fact that the computed projection
have only a small component, consisting entirely of rounding errors, outside of the null space
of A, as described by (4.2). Therefore applying the projection P A to the first projection
c will give an improved estimate because the ratio (4.4) will now be much smaller. By
repeating this process we may hope to obtain further improvement of accuracy.
The multiple projection technique may simply be described as setting g +
performing the following steps:
solve L(L
set
where L is the Cholesky factor of AA T . We note that this method is only appropriate when
although a simple variant is possible when G is diagonal.
Example 2.
We solved the problem given in Example 1 using multiple projections. At every CG
iteration we measure the cosine (3.17) of the angle between g and the columns of A. If this
cosine is greater than 10 \Gamma12 , then multiple projections are applied until the cosine is less
than this value. The results are given in Figure 2, and show that the residual
r T g was
reduced much more than in the plane CG iteration (Figure 1). Indeed the ratio between
the final and initial values of
r T g is 10 \Gamma16 , which is very satisfactory.
It is straightforward to analyze the multiple projections strategy (5.1)-(5.2) provided
that, as before, we make the simplifying assumption that the only rounding errors we make
are in forming L and solving (5.1). We obtain the following result which can be proved by
induction. For
where as in (4.1)
A simple consequence of (5.3)-(5.4) and the assumption that A has norm one is that
and thus that the error converges R-linearly to zero with rate
Of course, this rate can not be sustained indefinitely as the other errors we have ignored
in (5.1)-(5.2) become important. Nonetheless, one would expect (5.5) to reflect the true
behaviour until k(g
small multiple of the unit roundoff ffl m . It should
Iteration
residual
Figure
2: CG method using multiple projections in the normal equations approach.
be stressed, however, that this approach is still limited by the fact that the condition number
of A appears squared in (5.5); improvement can be guaranteed only if
We should also note that multiple projections are almost identical in their form and
numerical properties to fixed precision iterative refinement to the least squares problem [3,
p.125]. Fixed precision iterative refinement is appropriate because the approach we have
chosen to compute projections is not stable. To see this, compare (4.3) with a perturbation
analysis of the least squares problem [3, Theorem 1.4.6]), which gives
Here the dependence on the condition number is linear-not quadratic. Moreover, since
is multiplied by kg is small the effect of the condition number of A is
much smaller in (5.7) than in (4.3).
We should mention two other iterative refinement techniques that one might consider,
but that are either not effective or not practical in our context.
The first is to use fixed-precision iterative refinement [3, Section 2.9] to attempt to
improve the solution v + of the normal equations (3.10). This, however, will generally
be unsuccessful because fixed-precision iterative refinement only improves a measure of
backward stability [21, p.126], and the Cholesky factorization is already a backward stable
method. We have performed numerical tests and found no improvement from this strategy.
However, as is well known, iterative refinement will often succeed if extended-precision
is used to evaluate the residuals. We could therefore consider using extended precision
iterative refinement to improve the solution v + of the normal equations (3.10). So long as
and the residuals of (3.10) are smaller than one in norm, we can expect that
the error in the solution of (3.10) will decrease by a factor ffl m-(A) 2 until it reaches O(ffl m ).
But since optimization algorithms normally use double precision arithmetic for all their
computations, extending the precision may not be simple or efficient, and this strategy is
not suitable for general purpose software.
For the same reason we will not consider the use of extended precision in (5.1)-(5.2) or
in the iterative refinement of the least squares problem.
5.2. Augmented System Approach
We can apply fixed precision iterative refinement to the solution obtained from the
augmented system (3.15). This gives the following iteration.
Compute
solve
G A T
!/
\Deltag
ae g
ae v
and update
Note that this method is applicable for general preconditioners G. When
an appropriate value of ff is in hand, we should incorporate it in this iteration, as described
in (4.6). The general analysis of Higham [26, Theorem 3.2] indicates that, if the condition
number of A is not too large, we can expect high accuracy in v + and good accuracy in g +
in most cases.
Example 3.
We solved the problem given in Example 1 using this iterative refinement technique. As
in the case of multiple projections discussed in Example 2, we measure the angle between
g and the columns of A at every CG iteration. Iterative refinement is applied as long as
the cosine of this angle is greater than 10 \Gamma12 . The results are given in Figure 3.
We observe that the residual
r T g is decreased almost as much as with the multiple
projections approach, and attains an acceptably small value. We should point out, however,
that the residual increases after it reaches the value 10 \Gamma10 , and if the CG iteration is
continued for a few hundred more iterations, the residual exhibits large oscillations. We
will return to this in x6.1.
In our experience 1 iterative refinement step is normally enough to provide good accu-
racy, but we have encountered cases in which 2 or 3 steps are beneficial.
6. Residual Update Strategy
We have seen that significant roundoff errors occur in the computation of the projected
residual vector is much smaller than the residual r + . We now describe a procedure
Iteration
residual
Figure
3: CG method using iterative refinement in the augmented system approach.
for redefining r + so that its norm is closer to that of g + . This will dramatically reduce the
roundoff errors in the projection operation.
We begin by noting that Algorithm II is theoretically unaffected if, immediately after
computing r + in (2.14), we redefine it as
for some y This equivalence is due to the condition and the fact that r
is only used in (2.15) and (2.16). It follows that we can redefine r + by means of (6.1) in
either the normal equations approach (3.8)/(3.13) or in the augmented system approach
(3.12)/(3.15) and the results would, in theory, be unaffected.
Having this freedom to redefine r + , we seek the value of y that minimizes
where G is any symmetric matrix for which Z T GZ is positive definite, and G \Gamma1 is the
generalized inverse of G. The vector y that solves (6.2) is obtained as
This gives rise to the following modification of the CG iteration.
Algorithm III Preconditioned CG with Residual Update.
Choose an initial point x satisfying (1.2), compute find the
vector y that minimizes kr\GammaA T y, compute
and set \Gammag. Repeat the following steps, until a convergence test is satisfied:
x
This procedure works well in practice, and can be improved by adding iterative refinement
of the projection operation. In this case, at most 1 or 2 iterative refinement steps
should be used. Notice that there is a simple interpretation of Steps (6.6) and (6.7). We
first obtain y by solving (6.2), and as we have indicated the required value is
(3.15). But (3.15) may be rewritten as
G A T
!/
and thus when we obtain g + in Step (6.7), it is as if we had instead found it by solving (6.11).
The advantage of using (6.11) compared to (3.15) is that the solution in the latter may be
dominated by the large components v + , while in the former g + are the large componentsof
course, in floating point arithmetic, the zero component in the solution of (6.11) will
instead be tiny rounded values provided (6.11) is solved in a stable fashion. Viewed in this
way, we see that Steps (6.6) and (6.7) are actually a limited form of iterative refinement in
which the computed v + , but not the computed g + which is discarded, is used to refine the
solution. This "iterative semi-refinement" has been used in other contexts [6, 22].
There is another interesting interpretation of the reset r / r \Gamma A T y performed at the
start of Algorithm III. In the parlance of optimization, c is the gradient of
the objective function (1.1) and r \Gamma A T y is the gradient of the Lagrangian for the problem
(1.1)-(1.2). The vector y computed from (6.2) is called the least squares Lagrange multiplier
estimate. (It is common, but not always the case, for optimization algorithms to set
in (6.2) to compute these multipliers.) Thus in Algorithm III we propose that the initial
residual be set to the current value of the gradient of the Lagrangian, as opposed to the
gradient of the objective function.
One could ask whether it is sufficient to do this resetting of r at the beginning of
Algorithm III, and omit step (6.6) in subsequent iterations. Our computational experience
shows that, even though this initial resetting of r reduces its magnitude sufficiently to avoid
errors in the first few CG iteration, subsequent values of r can grow, and rounding errors
may reappear. The strategy proposed in Algorithm III is safe in that it ensures that r
is small at every iteration, but one can think of various alternatives. One of them is to
monitor the norm of r and only apply the residual update when it seems to be growing.
6.1. The Case
There is a particularly efficient implementation of the residual update strategy when
I. Note that (6.2) is precisely the objective of the least squares problem (3.11) that
occurs when computing via the normal equations approach, and therefore the desired
value of y is nothing other than the vector v + in (3.10) or (3.12). Furthermore, the first
block of equations in (3.12) shows that r . Therefore, in this case (6.6) can
be replaced by r and (6.7) is In other words we have applied the
projection operation twice, and this is a special case of the multiple projections approach
described in the previous section.
Based on these observations we propose the following variation of Algorithm III that
requires only one projection per iteration. We have noted that (6.6) can be written as
. Rather than performing this projection, we will define r where g is the
projected residual computed at the previous iteration. The resulting iteration is given by
Algorithm III with the following two changes:
Omit
Replace (6.10) by g / g + and r /
This strategy has performed well in our numerical experiments and avoids the extra
storage and computation required by Algorithm III. We now show that it is mathematically
equivalent to Algorithm III - which in turn is mathematically equivalent to Algorithm II.
The arguments that follow make use of the fact that, when we have that
The first iteration is clearly the same as that of Algorithm III, except that the value
we store in r in the last step is not r us consider the effect that
this has on the next iteration. The numerator in the definition (6.3) of ff now becomes
T g which equals r T P g. Thus the formula of ff is theoretically unchanged, but the
has the advantage that it can never be negative, as is the
case with (6.3) when rounding errors dominate the projection operation. Next, the step
which is different from the value calculated in Algorithm III.
Step (6.6) is omitted in the new variant of Algorithm III. The projected residual calculated
in (6.7) is now P (P r +ffHp) which is mathematically equivalent to the value PP (r +ffHp)
calculated in Algorithm III (recall that (6.6) can be written as that the new
strategy applies the double projection only to r. Finally let us consider the numerator in
(6.8). In the new variant, it is given by
whereas in Algorithm III it is given by
By expanding these expressions we see that the formula for fi is mathematically equivalent
in both cases, but that in the new variant the projection is applied selectively.
Example 4.
We solved the problem given in Example 1 using this residual update strategy with
I. The results are given in Figure 4 and show that the normal equations and augmented
system approaches are equally effective in this case. We do not plot the cosine (3.17) of
the angle between the preconditioned residual and the columns of A because it was very
small in both approaches, and did not tend to grow as the iteration progressed. For the
normal equations approach this cosine was of order 10 \Gamma14 throughout the CG iteration; for
the augmented system approach it was of order 10 \Gamma15 . Note that we have obtained higher
accuracy than with the iterative refinement strategies described in the previous section;
compare with Figures 2 and 3.
Augmented System
Iteration
residual
Normal Equations
Iteration
residual
Figure
4: Conjugate gradient method with the residual update strategy.
To obtain a highly reliable algorithm for the case when I we can combine the
residual update strategy just described with iterative refinement of the projection operation.
This gives rise to the following iteration which will be used in the numerical tests reported
in x7.
Algorithm IV  Residual Update and Iterative Refinement for G = I
Choose an initial point x satisfying (1.2), compute the residual r and the projected residual g = Pr,
where the projection is computed by the normal equations (3.8) or augmented
system (3.12) approaches, and set p ← −g. Choose a tolerance for the cosine test (3.17). Repeat
the following steps, until a convergence test is satisfied:
x ← x + αp;
Apply iterative refinement to the computation of the projected residual Pr
until (3.17) is less than the chosen tolerance.
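The refinement loop inside Algorithm IV can be pictured as follows. This sketch simply re-applies the projection to its own output and monitors a cosine test analogous to (3.17); it is a schematic of one possible refinement loop, with illustrative names and tolerances, not the exact procedure described in the earlier sections.

```python
import numpy as np

def refine_projection(P, A, r, cos_tol=1e-12, max_refine=5):
    """Compute g ~= P r and refine it until g is (numerically) orthogonal to
    the rows of A, i.e. until the analogue of (3.17) is small.

    The simplest refinement, used here only for illustration, is to apply the
    projection again to its own output; more sophisticated schemes refine the
    underlying normal-equations or augmented-system solve instead.
    """
    g = P(r)
    for _ in range(max_refine):
        denom = np.linalg.norm(A) * np.linalg.norm(g)
        cos_angle = np.linalg.norm(A @ g) / denom if denom > 0 else 0.0
        if cos_angle < cos_tol:
            break
        g = P(g)          # re-project the computed projection
    return g
```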
We conclude this discussion by elaborating on the point made before Example 4 concerning the computation of the steplength parameter α. We have noted that the formula with numerator g^T g is preferable to (6.12) since it cannot give rise to cancellation. Similarly the stopping test should be based on g^T g rather than on g^T r. The residual update implemented in Algorithm IV makes this change automatically, but we believe that these expressions are to be recommended in other implementations of the CG iteration, provided the preconditioner is based on G = I.
To test this, we repeated the computation reported in Example 1 using the augmented system approach; see Figure 1. The only change is that Algorithm II now used the new formula for α and for the stopping test. The CG iteration was now able to continue past iteration 70 and was able to reach a much smaller value of the residual. We also repeated the calculation made in Example 3. Now the residual reached a lower level, and the large oscillations in the residual mentioned in Example 3 no longer took place. Thus in both cases these alternative expressions for α and for the stopping test were beneficial.
6.2. General G
We can also improve upon the efficiency of Algorithm III for general G, using slightly outdated information. The idea is simply to use the quantity obtained when computing g_+ in (6.7) as a suitable y, rather than waiting until after the following step (6.5) to obtain a slightly more up-to-date version. The resulting iteration is given by Algorithm III, with the following two changes:
Omit (6.6).
Replace (6.10) by g ← g_+ and r ← r_+, obtained as a by-product from (6.7).
Notice, however, that for general G, the extra matrix-vector product A^T v_+ will be required, since we no longer have the relationship that we exploited when G = I.
Although we have not experimented on this idea here, it has proved to be beneficial in
other, similar circumstances [22].
7. Numerical Results
We now test the efficacy of the techniques proposed in this paper on a collection of
quadratic programs of the form (1.1)-(1.2). The problems were generated during the last
iteration of the interior point method for nonlinear programming described in [7], when this
method was applied to a set of test problems from the CUTE [4] collection. We apply the
CG method with preconditioner (3.4) (i.e., with G = I) to solve these quadratic programs.
We use the augmented system and normal equations approaches to compute projections,
and for each we compare the standard CG iteration (stand) with the iterative refinement
(ir) techniques described in x5 and the residual update strategy combined with iterative
refinement (update) as given in Algorithm IV. The results are given in Table 1. The first
column gives the problem name, and the second, the dimension of the quadratic program.
To test the reliability of the techniques proposed in this paper we used a very demanding
stopping test: the CG iteration was terminated when
In these experiments we included several other stopping tests in the CG iteration that are typically used by trust region methods for optimization. We terminate if the number of iterations exceeds 2(n − m), where n − m denotes the dimension of the reduced system (2.4); a superscript 1 in Table 1 indicates that this limit was reached. The CG iteration was also stopped if the length of the solution vector is greater than a "trust region radius" that is set by the optimization method (see [7]). We use a superscript 2 to indicate that this safeguard was activated, and note that in these problems only excessive rounding errors can trigger it. Finally we terminate if p^T Hp < 0, indicated by 3, or if r^T g < 0, indicated by
4 . Note that the standard CG iteration was not able to meet the stopping test for any of
the problems in Table 1, but that iterative refinement and update residual were successful
in most cases.
Table
2 reports the CPU time for the problems in Table 1. Note that the times for the
standard CG approach (stand) should be interpreted with caution, since in some of these
problems it terminated prematurely. We include the times for this standard CG iteration
only to show that the iterative refinement and residual update strategies do not greatly
increase the cost of the CG iteration.
Next we report on 3 problems for which the stopping test
could not be
met by any of the variants. For these three problems, Table 3 provides the least residual
norm attained for each strategy.
As a final, but indirect test of the techniques proposed in this paper, we report the
results obtained with the interior point nonlinear optimization code described in [7] on 29
nonlinear programming problems from the CUTE collection. This code applies the CG
method to solve a quadratic program at each iteration. We used the augmented system
Table 1: Number of CG iterations for the different approaches (columns: problem, dimension, and the stand/ir/update variants under the augmented system and normal equations approaches; rows include CORKSCRW (dim 147), COSHFUN, and OPTCTRL6). A 1 indicates that the iteration limit was reached, 2 indicates termination from the trust region bound, 3 indicates that negative curvature was detected, and 4 indicates that r^T g < 0.
Table 2: CPU time in seconds for the same problems and solution variants as in Table 1. A 1 indicates that the iteration limit was reached, 2 indicates termination from the trust region bound, 3 indicates that negative curvature was detected, and 4 indicates that r^T g < 0.
                     Augmented System                  Normal Equations
Problem    dim    stand      ir        update      stand      ir        update
OBSTCLAE   900    2.3D-07    1.5D-07   5.5D-08     2.3D-07    9.9D-08   4.2D-08
Table 3: The least residual norm sqrt(r^T g) attained by each option.
and normal equations approaches to compute projections, and for each of these strategies
we tried the standard CG iteration (stand) and the residual update strategy (update) with
iterative refinement described in Algorithm IV. The results are given in Table 4, where
"fevals" denotes the total number of evaluations of the objective function of the nonlinear
problem, and "projections" represents the total number of times that a projection operation
was performed during the optimization. A * indicates that the optimization algorithm
was unable to locate the solution.
Note that the total number of function evaluations is roughly the same for all strategies,
but there are a few cases where the differences in the CG iteration cause the algorithm to
follow a different path to the solution. This is to be expected when solving nonlinear
problems. Note that for the augmented system approach, the residual update strategy
changes the number of projections significantly only in a few problems, but when it does
the improvements are very substantial. On the other hand, we observe that for the normal
equations approach (which is more sensitive to the condition number κ(A)) the residual
update strategy gives a substantial reduction in the number of projections in about half
of the problems. It is interesting that with the residual update, the performance of the
augmented system and normal equations approaches is very similar.
8. Conclusions
We have studied the properties of the projected CG method for solving quadratic programming
problems of the form (1.1)-(1.2). Due to the form of the preconditioners used
by some nonlinear programming algorithms we opted for not computing a basis Z for the
null space of the constraints, but instead projecting the CG iterates using a normal equations
or augmented system approach. We have given examples showing that in either case
significant roundoff errors can occur, and have presented an explanation for this.
We proposed several remedies. One is to use iterative refinement of the augmented
system or normal equations approaches. An alternative is to update the residual at every
iteration of the CG iteration, as described in §6. The latter can be implemented particularly
efficiently when the preconditioner is given by G = I in (3.3).
Our numerical experience indicates that updating the residual almost always suffices
to keep the errors to a tolerable level. Iterative refinement techniques are not as effective
by themselves as the update of the residual, but can be used in conjunction with it, and
the numerical results reported in this paper indicate that this combined strategy is both
economical and accurate.
9.
Acknowledgements
The authors would like to thank Andy Conn and Philippe Toint for their helpful input
during the early stages of this research.
                             Augmented System                       Normal Equations
                        f evals          projections          f evals          projections
Problem     n      m    stand   update   stand    update      stand   update   stand    update
CORKSCRW    456    350  64      61       458      422
COSHFUN
GAUSSELM    14     11   25      26       92       93          28      41       85       97
HAGER4      2001   1000
OBSTCLAE    1024   0    26      26       6233     6068        26      26       6236     6080
OPTCNTRL
OPTCTRL6    122
Table 4: Number of function evaluations and projections required by the optimization method for the different implementations of the CG iteration.
--R
Iterative solution methods.
CUTE: Constrained and unconstrained testing environment.
Some stable methods for calculating inertia and solving symmetric linear equations.
Linear least squares solutions by Householder transformations.
Primal and primal-dual methods for nonlinear programming
Linearly constrained optimization and projected preconditioned conjugate gradients.
The null space problem I: Complexity.
The null space problem II: Algorithms.
A preconditioned conjugate gradient approach to linear equality constrained minimization.
A global convergence theory for general trust-region based algorithms for equality constrained optimization
Direct methods for sparse matrices.
The multifrontal solution of indefinite sparse symmetric linear equations.
The design of MA48
Practical Methods of Optimization.
Computing a sparse basis for the null-space
SNOPT: an SQP algorithm for large-scale constrained optimization
Practical Optimization.
Matrix Computations.
Iterative methods for ill-conditioned linear systems from optimization.
Solving the trust-region subproblem using the Lanczos method
Sparse orthogonal schemes for structural optimization using the force method.
Methods of conjugate gradients for solving linear systems.
Iterative refinement and LAPACK.
Implicit nullspace iterative methods for constrained least squares problems.
On the implementation of an algorithm for large-scale equality constrained optimization
Multifrontal computation with the orthogonal factors of sparse matrices.
Indefinitely preconditioned inexact newton method for large sparse equality constrained nonlinear programming problems.
Preconditioning reduced matrices.
Substructuring methods for computing the null space of equilibrium matrices.
The conjugate gradient method in extremal problems.
QR Factorization of Large Sparse Overdetermined and Square Matrices with the Multifrontal Method in a Multiprocessing Environment.
The conjugate gradient method and trust regions in large scale optimization.
Nested dissection for sparse nullspace bases.
Towards an efficient sparsity exploiting Newton method for minimization.
On large-scale nonlinear network optimization
--TR
--CTR
Luca Bergamaschi , Jacek Gondzio , Manolo Venturin , Giovanni Zilli, Inexact constraint preconditioners for linear systems arising in interior point methods, Computational Optimization and Applications, v.36 n.2-3, p.137-147, April 2007
H. S. Dollar , N. I. Gould , W. H. Schilders , A. J. Wathen, Using constraint preconditioners with regularized saddle-point problems, Computational Optimization and Applications, v.36 n.2-3, p.249-270, April 2007
Luca Bergamaschi , Jacek Gondzio , Giovanni Zilli, Preconditioning Indefinite Systems in Interior Point Methods for Optimization, Computational Optimization and Applications, v.28 n.2, p.149-171, July 2004
S. Bocanegra , F. F. Campos , A. R. Oliveira, Using a hybrid preconditioner for solving large-scale linear systems arising from interior point methods, Computational Optimization and Applications, v.36 n.2-3, p.149-164, April 2007
Nicholas I. M. Gould , Dominique Orban , Philippe L. Toint, GALAHAD, a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization, ACM Transactions on Mathematical Software (TOMS), v.29 n.4, p.353-372, December
S. Cafieri , M. D'Apuzzo , V. Simone , D. Serafino, On the iterative solution of KKT systems in potential reduction software for large-scale quadratic problems, Computational Optimization and Applications, v.38 n.1, p.27-45, September 2007
Nicholas I. M. Gould , Philippe L. Toint, An iterative working-set method for large-scale nonconvex quadratic programming, Applied Numerical Mathematics, v.43 n.1-2, p.109-128, October 2002
Meizhong Dai , David P. Schmidt, Adaptive tetrahedral meshing in free-surface flow, Journal of Computational Physics, v.208 n.1, p.228-252, 1 September 2005
Silvia Bonettini , Emanuele Galligani , Valeria Ruggiero, Inner solvers for interior point methods for large scale nonlinear programming, Computational Optimization and Applications, v.37 n.1, p.1-34, May 2007 | conjugate gradient method;nonlinear optimization;quadratic programming;iterative refinement;preconditioning |
587203 | Practical Construction of Modified Hamiltonians. | One of the most fruitful ways to analyze the effects of discretization error in the numerical solution of a system of differential equations is to examine the "modified equations," which are equations that are exactly satisfied by the (approximate) discrete solution. These do not actually exist in general but rather are defined by an asymptotic expansion in powers of the discretization parameter. Nonetheless, if the expansion is suitably truncated, the resulting modified equations have a solution which is remarkably close to the discrete solution. In the case of a Hamiltonian system of ordinary differential equations, the modified equations are also Hamiltonian if and only if the integrator is symplectic. Evidence for the existence of a Hamiltonian for a particular calculation is obtained by calculating modified Hamiltonians and monitoring how well they are conserved. Also, energy drifts caused by numerical instability are better revealed by evaluating modified Hamiltonians. Doing this calculation would normally be complicated and highly dependent on the details of the method, even if differences are used to approximate derivatives. A relatively simple procedure is presented here, nearly independent of the internal structure of the integrator, for obtaining highly accurate estimates for modified Hamiltonians. As a bonus of the method of construction, the modified Hamiltonians are exactly conserved by a numerical solution in the case of a quadratic Hamiltonian. | Introduction
One of the most fruitful ways to analyze the effects of discretization error in the numerical solution
of differential equations is to examine the "modified equations," which are the equations that are
exactly satisfied by the (approximate) discrete solution. These do not actually exist (in general),
but rather are defined by an asymptotic expansion in powers of the discretization parameter.
Nonetheless, if the expansion is suitably truncated, the resulting modified equations have a solution
which is remarkably close to the discrete solution [9]. In the case of a Hamiltonian system of
The work of R. D. Skeel was supported in part by NSF Grants DMS-9971830, DBI-9974555 and NIH Grant
P41RR05969 and completed while visiting the Mathematics Department, University of California, San Diego.
ordinary differential equations, the modified equations are also Hamiltonian if and only if the
integrator is symplectic. The existence of a modified, or "shadow" [4], Hamiltonian is an indicator
of the validity of statistical estimates calculated from long time integration of chaotic Hamiltonian
systems [18]. In addition, the modified Hamiltonian is a more sensitive indicator than is the
original Hamiltonian of drift in the energy (caused by instability). Evidence for the existence of a
Hamiltonian for a particular calculation can be obtained by calculating modified Hamiltonians and
monitoring how well they are conserved. Doing this calculation would normally be complicated and
highly dependent on the details of the method, even if differences are used to approximate higher
derivatives. Presented here is a relatively simple procedure, nearly independent of the internal
structure of the integrator, for obtaining highly accurate estimates for modified Hamiltonians.
Consider a step-by-step numerical integrator x_{n+1} = \Phi_h(x_n) that evolves an approximate solution x_n \approx x(nh) for a system of ordinary differential equations \dot{x} = f(x). For such discrete solutions there exist modified equations \dot{x} = f_h(x), defined by an asymptotic expansion, such that formally the numerical solution satisfies x_n = x_h(nh). The modified right-hand-side function f_h is defined uniquely by postulating an asymptotic expansion f_h = f + h f_1 + h^2 f_2 + \cdots in powers of h, substituting this into the equations for the numerical solution, expanding in powers of h, and
equating coefficients [26, 6, 22]. The asymptotic expansion does not generally converge except for
(reasonable integrators applied to) linear differential equations.
A Hamiltonian system is of the form
\dot{x} = J \nabla H(x),  where  J = [[0, I], [-I, 0]],
for some Hamiltonian H(x). The modified equation for an integrator \Phi_h applied to this system is Hamiltonian, i.e., f_h(x) = J \nabla H_h(x) for a modified Hamiltonian H_h(x), if and only if the integrator is symplectic [23, 20]. The integrator is symplectic if \Phi_{h,x}(x)^T J \Phi_{h,x}(x) \equiv J. There is theoretical [2, 8, 18] and empirical evidence that
x_n = x_h(nh) + (a very small error)
for a very long time where x h is the solution for a suitably truncated Hamiltonian H h . In what
follows we assume that H h is such a Hamiltonian and we neglect the very small error.
If we plot total energy as a function of time for a numerical integrator such as leapfrog/Störmer/
Verlet applied to a molecular dynamics simulation, we get a graph like Fig. 3. What we observe
are large fluctuations in the original Hamiltonian, as the trajectory moves on a hypersurface of
constant modified Hamiltonian. A small drift or jump in the energy would be obscured by the
fluctuations. A plot of a modified Hamiltonian might be more revealing. As an example, the plots
of modified Hamiltonians in Fig. 4 show a clear rise in energy already in a 400-step simulation.
This indicates that plots of suitable modified Hamiltonians can make it easier to test integration
algorithms for instability and programming bugs. Details of this and other numerical tests are
given in section 2. Before continuing, it is worth emphasizing that the concern of this paper is
stability monitoring-not the monitoring and enhancement of accuracy, as in [4] and [15].
The goal is to construct an approximate modified Hamiltonian H^{[2k]}(x) that can be conveniently assembled from quantities, such as forces and energies, already available from the numerical integration. We consider the special separable Hamiltonian H(q, p) = (1/2) p^T M^{-1} p + U(q), for which the system is of the form \dot{q} = M^{-1} p, \dot{p} = F(q), where F(q) = -\nabla U(q).
A "brute force" approach would be to determine an asymptotic expansion for H_h and of the quantities available for making an approximation and then to form a suitable linear combination of the latter. By such a matching of asymptotic expansions one could derive the modified Hamiltonians H^{[2]}, H^{[4]}, and H^{[6]} of Eqs. (1)-(3) for the leapfrog method. Here a superscript n denotes evaluation at q^n, the centered difference operator is defined by \delta w^n = w^{n+1/2} - w^{n-1/2}, the averaging operator is defined by \mu w^n = (w^{n+1/2} + w^{n-1/2})/2, and the half-step quantities are defined in terms of q^n, p^n by the leapfrog method.
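For reference, a single leapfrog step for the separable Hamiltonian above can be sketched as follows. M is taken to be a diagonal mass matrix stored as a vector and force(q) = -grad U(q); this is only the standard kick-drift-kick form of the method, shown here because the half-step quantities it produces are the kind of data from which the truncations H^{[2k]} are assembled.

```python
import numpy as np

def leapfrog_step(q, p, h, force, inv_mass):
    """One leapfrog/Stormer/Verlet step for H(q,p) = 0.5 p^T M^{-1} p + U(q),
    where force(q) returns -grad U(q) and inv_mass holds the diagonal of M^{-1}."""
    p_half = p + 0.5 * h * force(q)            # half kick at q^n
    q_new = q + h * inv_mass * p_half          # drift with M^{-1} p^{n+1/2}
    p_new = p_half + 0.5 * h * force(q_new)    # half kick at q^{n+1}
    return q_new, p_new
```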
An easier and more elegant construction is presented in Secs. 3-5. The technique is developed
only for splitting methods. It is likely that a similar construction is also possible for symplectic
implicit Runge-Kutta methods. The idea is to add a new position and conjugate momentum variable
to get an extended Hamiltonian -
H h (y) which is homogeneous of order 2. For such a Hamiltonian
Jy h (t). Thus the problem is reduced to that of forming an approximation for
using the numerical solution of an extended Hamiltonian system. It is plausible that such a
construction might be useful theoretically due to the existence of robust approximation techniques.
Eq. (1) for H^{[2]} contains an h^2 term which is not needed for achieving 2nd order accuracy. It
is present because the truncations H [2k] are designed to exactly conserve energy for the numerical
solution when H is quadratic. (See Secs. 4.1 and 5.1.) This is a very useful property because typical
applications, including molecular dynamics, are dominated by harmonic motion. The existence of
a modified Hamiltonian that is exactly conserved for a quadratic Hamiltonian is noted in [17,
Eq. (4.7b)], and the search for similar methods having this property was central to the results of
this paper. For a quadratic Hamiltonian the modified Hamiltonian H_h actually exists (if h is not too large), but H^{[2k]} \neq H_h. (A simple derivation of H_h for the one-dimensional case is given in [22].)
Also, it should be noted that the Hamiltonians H [2k] will not detect numerical instability in the
case of quadratic Hamiltonians H .
The modified Hamiltonians H^{[2k]}(x), defined by Eqs. (15), (8), (14), (7), are computed and plotted as functions of time for numerical solutions generated by the leapfrog method, given by (9). The unmodified Hamiltonians are those of classical molecular dynamics.
Figure 1: Energy and various truncations (2nd, 4th, 6th, and 8th order) of modified Hamiltonians for decalanine; energy vs. time (fs).
The testing was
done with a molecular dynamics program written by the second author, which is compatible with
NAMD [11, 16] but limited in features to facilitate algorithm testing.
The first couple of experiments demonstrate the quality of the modified Hamiltonians. The test
problem is a 66-atom peptide, decalanine, in a vacuum [1]. The force field parameters are those of
CHARMM 22 for proteins [13, 14] without cutoffs for nonbonded forces.
Figure
1 shows a plot of the Hamiltonian and 2nd, 4th, 6th, and 8th order modified Hamiltonian
approximations vs. time for 100000 fs (femtoseconds) for a step size with the energy
sampled every 8th step. The level graph at the top is the 8th order truncation, the one just barely
beneath it is 6th order, and the one under that is 4th order. The greatly fluctuating graph is the
energy itself and the undulating one well below it is the 2nd order truncation. Note how well the
asymptotic theory holds for the higher order truncations-one could not obtain such flat plots by
simply smoothing the original Hamiltonian.
Figure
2 expands the vertical scale to show fluctuations in the 8th, 6th, and 4th order truncations
of modified Hamiltonians.
An explanation is in order concerning the initial drop in energy. Because a symplectic method
preserves volume in phase space and because there is less phase space volume at lower energies,
it can be inferred that the first part of the trajectory is simply the second half of a very unusual
fluctuation. In other words the initial conditions are atypical, i.e., not properly equilibrated (with
respect to the original Hamiltonian). This is particularly well revealed by the plot of the 2nd order
truncation.
The remaining experiments demonstrate the ability of modified Hamiltonians to detect instabil-
ity. The test problem is a set of 125 water molecules harmonically restrained to a 10 Å-radius sphere.
The water is based on the TIP3P model [25] without cutoffs and with flexibility incorporated by
adding bond stretching and angle bending harmonic terms (cf. Ref. [12]).
Figure 2: Closer look at the higher order (4th, 6th, and 8th order) truncations for decalanine; energy vs. time (fs).
Figure 3: Energy for flexible water with step size 2.5 fs; energy vs. time (fs).
Figure
3 shows a plot of the energy vs. time for 1 000 fs for a step size of 2.5 fs, with the energy sampled every step. Note that the large fluctuations make it difficult to determine whether or not there is energy drift.
Figure 4 shows a plot of the 6th and 8th order modified Hamiltonians for the same step size of 2.5 fs.
Figure 4: 6th and 8th order truncations with step size 2.5 fs; energy vs. time (fs).
An upward energy drift is now obvious. The 2nd and 4th order approximations are
not shown because neither of them were as flat. Normal mode analysis for this system [10] shows
that the 250 fastest frequencies have periods in the range 9.8-10.2 fs and use of the formula in [22,
p. 131] shows that a 2.5 fs step size is 30% of the effective period for discrete leapfrog dynamics. It
is remarkable that the 8th order approximation is the flattest, even for such a large step size.
Figure 5 shows a plot of the 6th and 8th order modified Hamiltonians for step size 2.15 fs. There is no apparent upward drift of the energy. Theoretically, instability due to 4:1 resonance [21] should occur for the leapfrog method at h × (angular frequency) = √2, which is in the range 2.2-2.3 fs for flexible water.
3 Augmenting the Integrator
We assume that one step of size h for the given method applied to a system with Hamiltonian
H is the composition of exact h-flows for Hamiltonian systems with Hamiltonians H 1 , H 2 , . ,
HL . Each H l (x) is assumed to be sufficiently smooth on some domain containing the infinite time
trajectory. For example,
1. the leapfrog method for separable Hamiltonian systems H(q;
2. the Rowlands method [19] for special separable Hamiltonian systems uses H 1
3. double time-stepping [7, 24] uses
4 U fast (q), H 2
fast (q),
Figure 5: 6th and 8th order truncations with step size 2.15 fs; energy vs. time (fs).
4. Molly [5] does the same as double time-stepping except for the substitution of U slow (A(q))
for U slow (q) where A(q) is a local temporal averaging of q over vibrational motion.
We defined the homogeneous extension of a Hamiltonian by
If H is quadratic, then -
H is homogeneous of order 2:
. The extended Hamiltonian yields the augmented system
With initial condition and the system simplifies to
For
p+U(q), the extended Hamiltonian is -
and the simplified augmented system is
Remark. The association of α with q rather than with β is of practical importance in that we want to obtain values of \dot{β} at the points where \dot{p} is calculated.
The following proposition shows that the value of the extended Hamiltonian can be calculated
knowing just the solution:
Proposition 1. Let \bar{H}(y) be the homogeneous extension of a given Hamiltonian H(x), and let y(t) be a solution of the extended Hamiltonian system with α initially 1. Then
H(x(t)) = (1/2) \dot{y}(t)^T \bar{J} y(t),
where \bar{J} is the matrix [[0, I], [-I, 0]] of augmented dimension.
Proof. Differentiating Eq. (4) with respect to σ gives
Because -
H is a homogeneous extension of H , the solution of -
H "includes" that of H and we have
Of course, the goal is not to calculate the original Hamiltonian, for which we know a formula
but not the solution; rather, it is to calculate a modified Hamiltonian, for which we know the
solution (at grid points) but not a formula. Therefore, we must augment the integrator so that
its solution at grid points is that of the homogeneous extension of the modified Hamiltonian.
For an integrator that is a composition of Hamiltonian flows, this is accomplished by using the
homogeneous extension of each of the constituent Hamiltonians. More specifically, we define the
augmented method y_+ = \Psi_h(y) for \bar{H} to be the composition of exact flows for systems with Hamiltonians \bar{H}_1, \bar{H}_2, ..., \bar{H}_L, where \bar{H}_l(q, α, p, β) is the homogeneous extension of H_l(q, p).
Lemma 1 (Commutativity). The method \Psi_h defined above has a modified Hamiltonian \bar{H}_h that is the homogeneous extension of H_h(q, p), where H_h(q, p) is the modified Hamiltonian of the original method \Phi_h; i.e., the following diagram commutes:
    H      --- homogeneous extension --->  \bar{H}
    |  discretization                      |  discretization
    v                                      v
    H_h    --- homogeneous extension --->  \bar{H}_h
Proof. The modified Hamiltonian -
H h for method \Psi h can be expressed as an asymptotic expansion
using the Baker-Campbell-Hausdorf formula [20]. This formula combines Hamiltonians using the
operations of scalar multiplication, addition, and the Poisson bracket fH;
x JN x . It is thus
sufficient to show that each of these three commute with the operation of forming the homogeneous
extension. We show this only for the last of these. The homogeneous extension of the Poisson
bracket is
This is exactly the same as the (extended) Poisson bracket of ff
Remark. The aim is to discretize the extended Hamiltonian so that this commutativity property
holds. Extension of this technique to implicit Runge-Kutta methods would require an augmentation
of the method so that commutativity holds.
The following corollary allows the value of the Hamiltonian to be approximated from known
values of y h (t) at grid points:
Proposition 2. Let x_h(t) and y_h(t) be the solutions for modified Hamiltonians H_h and \bar{H}_h, respectively. Then
H_h(x_h(t)) = (1/2) \dot{y}_h(t)^T \bar{J} y_h(t).
Proof. Similar to that of Prop. 1.
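The content of Propositions 1 and 2 can be checked numerically on a small quadratic example. The sketch below uses a harmonic oscillator, adopts the convention \dot{y} = J \nabla \bar{H}(y) for the equations of motion, and approximates \dot{y} by a finite difference; these choices are illustrative assumptions made for the test, not prescriptions from the text.

```python
import numpy as np

# Harmonic oscillator H(q, p) = 0.5*(q**2 + p**2): quadratic, so it is already
# homogeneous of order 2 and should satisfy H(y) = 0.5 * ydot^T J y along
# exact trajectories.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def exact_flow(y, t):
    # exact solution of ydot = J*y for the oscillator: a rotation in phase space
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s], [-s, c]]) @ y

y0 = np.array([0.7, -0.3])
t, dt = 1.3, 1e-6
y = exact_flow(y0, t)
ydot = (exact_flow(y0, t + dt) - exact_flow(y0, t - dt)) / (2 * dt)  # finite-difference ydot
lhs = 0.5 * ydot @ (J @ y)     # 0.5 * ydot^T J y
rhs = 0.5 * (y @ y)            # H(y)
print(abs(lhs - rhs))          # tiny: the identity holds up to differencing error
```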
4 Using Full Step Values
This section presents the construction of H [2k] for even values of k.
Let y h (t) be the solution of the modified extended Hamiltonian system with initial condition
y. It has values y_h^n = y_h(nh) at the grid points. Let \tilde{y}_k(t) be the degree k polynomial
interpolant of these values. (For large k it may be preferable, instead, to use trigonometric
interpolation suitably modified [3].)
From Prop. 2,
\bar{H}_h = (1/(jh)) \int_{-jh/2}^{jh/2} (1/2) \dot{y}_h(t)^T \bar{J} y_h(t) dt.
The interpolant - k where the error
(t) and
with the brackets denoting a 1)th divided difference. Noting
that -
jh
Z jh=2
\Gammaj h=22 -
Je(t)dt \Gammajh
Z jh=2
\Gammaj h=22 y h (t) T -
Z jh=2
\Gammaj h=22 -
Je(t)dt +O(h 2k+2 )
Z jh=2
\Gammaj h=2
where the second equation is obtained by integrating by parts and where fl(t) def
Jy [k+1]
(t).
This can be expressed as an expansion
By forming a suitable linear combination of the values H_{k,j}, it is expected that one can get \bar{H}_h with the first k/2 - 1 leading error terms eliminated:
H^{[2k]} = (a suitable linear combination of the H_{k,j}).
Note. The value -
contains a leading term that is only O(h k ), so it is not useful for
eliminating error terms.
The case is the 4th order accurate formula
For the case
\Gammah
and
Z 2h
\Gamma2h
and hence,
Below are given formulas for H^{[8]} and for H^{[4]} in terms of values of y_h(t) at grid points. Let a_j be the jth centered difference of y_h(t) at t = 0,
where the centered difference operator is defined by \delta y(t) = y(t + h/2) - y(t - h/2) and the averaging operator is defined by \mu y(t) = (y(t + h/2) + y(t - h/2))/2.
The 4th degree interpolant in divided difference form is
Hence,
6 a 3 s(s
and
6 a 4 s(s
and we have2
powers of s:
Averaging over
and averaging over \Gamma2 - s - 2 yields
90 A 14 \Gamma 107
Therefore,
For a 2nd degree interpolant it follows from Eq. (5) that
An implementation of these formulas might calculate H^{[2k]} for consecutive values of n in terms of quantities A^n_{ij} that are bilinear in the centered differences of y^n, which can
be obtained from the x n . (Only 1st and higher differences of fi n are needed.)
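To make the bookkeeping concrete, the following sketch forms centered differences a_0, ..., a_4 of y at a grid point from a window of stored values and then assembles scalar quantities A_{ij}. Two points are assumptions made only for illustration: the odd-order differences are combined with the averaging operator so that everything is centered on the grid point, and A_{ij} is taken to be a_i^T \bar{J} a_j; the final linear combination yielding H^{[4]} or H^{[8]} is not reproduced here.

```python
import numpy as np

def centered_differences(window):
    """window = [y^{n-2}, y^{n-1}, y^n, y^{n+1}, y^{n+2}] (arrays of equal length).
    Returns a[0..4], the centered differences at the middle grid point; odd orders
    use the averaging operator mu so that they are also evaluated at t = nh."""
    y = [np.asarray(w, dtype=float) for w in window]
    a0 = y[2]
    a1 = 0.5 * (y[3] - y[1])                              # mu*delta y
    a2 = y[3] - 2 * y[2] + y[1]                           # delta^2 y
    a3 = 0.5 * (y[4] - 2 * y[3] + 2 * y[1] - y[0])        # mu*delta^3 y
    a4 = y[4] - 4 * y[3] + 6 * y[2] - 4 * y[1] + y[0]     # delta^4 y
    return [a0, a1, a2, a3, a4]

def A_quantities(a, Jbar):
    """Illustrative assumption: A[i][j] = a_i^T Jbar a_j."""
    k = len(a)
    return [[float(a[i] @ (Jbar @ a[j])) for j in range(k)] for i in range(k)]
```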
Example 1. To make this concrete, we calculate H [4] (x) for the leapfrog method, as given by
Eq. (2). The leapfrog method advances one step by
We have
Suppressing the n in the superscript,
y
whence
not needed7 7
From
so
Therefore,
4.1 The case of a quadratic Hamiltonian
The following result implies that, in the case where H(x) is quadratic, the numerical solution
exactly conserves an approximate modified Hamiltonian which is a linear functional of 1-
where -(t) is a linear combination of numerical solution values.
Proposition 3 Assume that \Phi h is the composition of flows for systems with quadratic Hamiltonians
and that \Psi h is constructed as in Proposition 1. Then the quantity
a i;j \Psi i
where the sum is taken over a finite set of pairs of integers, is exactly conserved by method \Psi h .
Proof. The mapping \Psi h is the composition of
flows for systems with homogeneous quadratic Hamiltonians. Then
a i;j (S i Sy) T -
a
a i;j \Psi i
(y):5 Using Intermediate Values
This section presents the construction of H [2k] for odd values of k.
For most numerical integrators one can define "sensible" mid-step values,
and these can be used instead of full step values to get an estimate accurate up to O(h 2k ). We
assume that \Psi
\Gammah=2
\Psi h=2 is a composition of exact flows of homogeneously
extended Hamiltonians.
Remark. It is not necessary that the mid-step values be approximations to y(t) at midpoints
nor that \Psi h be time symmetric (\Psi \Gamma1
we need is that \Psi where each of
\Psi 1;h , \Psi 2;h is a composition of exact flows of homogeneously extended Hamiltonians.
For example, the leapfrog method separates into half steps -
\Gammah=2
\Psi h=2 with -
\Psi h=2 as follows:
Remark. For the leapfrog method the estimate over an interval from (n\Gamma 1
2 k)h to (n+ 1
k is odd actually uses only values of energy and forces from the shorter interval from (n \Gamma 1k
to (n
The mid-step values are values at midpoints of some function z h (t) which can be used to
construct the Hamiltonian:
Proposition 4 Let z h
z h (t) T -
Jz h (t):
Proof. For any real s we define \Psi s
sh-flow for -
. Because - h is symplectic, z h Hamiltonian system
with Hamiltonian -
. Also, - h is the composition of flows of
Hamiltonians that are 2nd order homogeneous, and hence -
is homogeneous of order 2.
z h (t) T -
(t) be the degree k polynomial interpolant of the values z h
(y),
As before, let
jh
Z jh=2
\Gammaj h=22 -
The interpolant - k where the error
h (t) and
z [k+1]
Similar to before, we get
Z jh=2
\Gammaj h=2
z h (t) T -
Jz [k+1]
h (t). This can be expressed as an expansion
Again, it is expected that a suitable linear combination of the different values of H k;j
yields -
H h with the first (k leading error terms eliminated:
linear combination of the H
Note. It seems not to be possible to combine values obtained from full steps with those from
half steps to further increase the order of accuracy, because the error expansions for the two kinds
of averages do not have terms in common that can cancel.
For we have the 2nd order formula
For
Z h=2
\Gammah=2
and
Z 3h=2
\Gamma3h=2
e
Let b j be the jth centered difference of z h (t) at using mid-step values:
The 3rd degree interpolant is
Hence,
and
and we
powers of s:
Averaging
and averaging over \Gamma 3
yields
Therefore,
For a 1st degree interpolant it follows from Eq. (13) that
Example 2. We calculate H [2] (x) for the leapfrog method, as given by Eq. (1). We have
J(-y n
Suppressing the n in the superscript,
y
e
Therefore,
Example 3. We calculate H [6] (x) for the leapfrog method, as given by Eq. (3). From Eqs. (16)
we have
We have h-ffiF from Eq. (11b), and ffi 2 (q T F ) is given by Eq. (12). Then
and, therefore,
5.1 The case of a quadratic Hamiltonian
Proposition 5 Assume that \Phi h is the composition of flows for systems with quadratic Hamiltoni-
ans, that \Psi h is constructed as in Proposition 1, and that -
\Psi h=2 is as assumed at the beginning of
this section. Then the quantity
a i;j
where the sum is taken over a finite set of pairs of integers, is exactly conserved by method \Psi h .
Proof. Because -
are the compositions of flows for systems with homogeneous quadratic
Hamiltonians, the mappings -
a
a
a i;j
Acknowledgment
The authors are grateful for the assistance of Justin Wozniak, who did preliminary tests of the
second order truncation for the Hénon-Heiles Hamiltonian and for decalanine.
--R
http://www.
On the Hamiltonian interpolation of near to the identity symplectic mappings with application to symplectic integration algorithms.
Residual acceleration as a measure of the accuracy of molecular dynamics simulations.
Shadow mass and the relationship between velocity and momentum in symplectic numerical integration.
On the scope of the method of modified equations.
Generalized Verlet algorithm for efficient molecular dynamics simulations with long-range interactions
The life-span of backward error analysis for numerical integrators
Asymptotic expansions and backward analysis for numerical inte- grators
Longer time steps for molecular dynamics.
NAMD2: Greater scalability for parallel molecular dynamics.
Molecular Modelling: Principles and Applications.
Common molecular dynamics algorithms revisited: Accuracy and optimal time steps of St-ormer-leapfrog
An analysis of the accuracy of Langevin and molecular dynamics algorithm.
Backward error analysis for numerical integrators.
A numerical algorithm for Hamiltonian systems.
Numerical Hamiltonian Problems.
Nonlinear resonance artifacts in molecular dynamics simulations.
Integration schemes for molecular dynamics and related applications.
Some Geometric and Numerical Methods for Perturbed Integrable Systems.
Reversible multiple time scale molecular dynamics.
The modified equation approach to the stability and accuracy analysis of finite difference methods.
--TR
--CTR
Robert D. Engle , Robert D. Skeel , Matthew Drees, Monitoring energy drift with shadow Hamiltonians, Journal of Computational Physics, v.206 n.2, p.432-452, 1 July 2005
Jess A. Izaguirre , Scott S. Hampton, Shadow hybrid Monte Carlo: an efficient propagator in phase space of macromolecules, Journal of Computational Physics, v.200 n.2, p.581-604, November 2004 | integrator;symplectic;modified equation;backward error;hamiltonian;numerical |
587230 | Asynchronous Parallel Pattern Search for Nonlinear Optimization. | We introduce a new asynchronous parallel pattern search (APPS). Parallel pattern search can be quite useful for engineering optimization problems characterized by a small number of variables (say, fifty or less) and by objective functions that are expensive to evaluate, such as those defined by complex simulations that can take anywhere from a few seconds to many hours to run. The target platforms for APPS are the loosely coupled parallel systems now widely available. We exploit the algorithmic characteristics of pattern search to design variants that dynamically initiate actions solely in response to messages, rather than routinely cycling through a fixed set of steps. This gives a versatile concurrent strategy that allows us to effectively balance the computational load across all available processors. Further, it allows us to incorporate a high degree of fault tolerance with almost no additional overhead. We demonstrate the effectiveness of a preliminary implementation of APPS on both standard test problems as well as some engineering optimization problems. | Introduction
We are interested in solving the unconstrained nonlinear optimization problem: minimize f(x) over x in R^n.
We introduce a family of asynchronous parallel pattern search (APPS) methods.
Pattern search [15] is a class of direct search methods which admits a wide range of
algorithmic possibilities. Because of the flexibility afforded by the definition of pattern
search [23, 16], we can adapt it to the design of nonlinear optimization methods
that are intended to be eective on a variety of parallel and distributed computing
platforms.
Our motivations are several. First, the optimization problems of interest to us
are typically dened by computationally expensive computer simulations of complex
physical processes. Such a simulation may take anywhere from a few seconds to many
hours of computation on a single processor. As we discuss further in x2, the dominant
computational cost for pattern search methods lies in these objective function evalu-
ations. Even when the objective function is inexpensive to compute, the relative cost
of the additional work required within a single iteration of pattern search is negligible.
Given these considerations, one feature of pattern search we exploit is that it
can compute multiple, independent function evaluations simultaneously in an eort
both to accelerate the search process and to improve the quality of the result ob-
tained. Thus, our approach can take advantage of parallel and distributed computing
platforms.
We also have a practical reason, independent of the computational environment,
for using pattern search methods for the problems of interest. Simply put, for problems
dened by expensive computer simulations of complex physical processes, we
often cannot rely on the gradient of f to conduct the search. Typically, this is because
no procedure exists for the evaluation of the gradient and the creation of such a
procedure has been deemed untenable. Further, approximations to the gradient may
prove unreliable. For instance, if the accuracy of the function can only be trusted
to a few signicant decimal digits, it is di-cult to construct reliable nite-dierence
approximations to the gradient. Finally, while the theory for pattern search assumes
that f is continuously dierentiable, pattern search methods can be eective
on nondierentiable (and even discontinuous) problems precisely because they do not
explicitly rely on derivative information to drive the search. Thus we focus on pattern
search for both practical and computational reasons.
However, both the nature of the problems of interest and the features of the current
distributed computing environments raise a second issue we address in this work. The
original investigation into parallel pattern search (PPS) methods 1 [7, 22] made two
1 The original investigations focused on parallel direct search (PDS), a precursor to the more
general PPS methods discussed here.
fundamental assumptions about the parallel computation environment: 1) that the
processors were both homogeneous and tightly coupled and 2) that the amount of
time needed to complete a single evaluation of the objective was eectively constant.
It is time to reexamine these two assumptions.
Clearly, given the current variety of computing platforms including distributed
systems comprising loosely-coupled, often heterogeneous, commercial o-the-shelf
components [21], the rst assumption is no longer valid. The second assumption
is equally suspect. The standard test problems used to assess the eectiveness of a
nonlinear optimization algorithm typically are closed-form, algebraic expressions of
some function. Thus, the standard assumption that, for a xed choice of n, evaluations
complete in constant time is valid. However, given our interest in optimizing
problems dened by the simulations of complex physical processes, which often use
iterative numerical techniques themselves, the assumption that evaluations complete
in constant computational time often does not hold. In fact, the behavior of the
simulation for any given input is di-cult to assess in advance since the behavior of
the simulation can vary substantially depending on a variety of factors.
For both the problems and computing environments of interest, we can no longer
assume that the computation proceeds in lockstep. A single synchronization step
at the end of every iteration, such as the global reduction used in [22], is neither
appropriate nor eective when any of the following factors holds: function evaluations
complete in varying amounts of time (even on equivalent processors), the processors
employed in the computation possess dierent performance characteristics, or the
processors have varying loads. Again our goal is to introduce a class of APPS methods
that make more eective use of a variety of computing environments, as well as to
devise strategies that accommodate the variation in completion time for function
evaluations. Our approach is outlined in x3.
The third, and nal, consideration we address in this paper is incorporating fault
tolerant strategies into the APPS methods since one intent is to use this software on
large-scale heterogeneous systems. The combination of commodity parts and shared
resources raises a growing concern about the reliability of the individual processors
participating in a computation. If we embark on a lengthy computation, we want reasonable
assurance of producing a nal result, even if a subset of processors fail. Thus,
our goal is to design methods that anticipate such failures and respond to protect the
solution process. Rather than simply checkpointing intermediate computations to
disk and then restarting in the event of a failure, we are instead considering methods
with heuristics that adaptively modify the search strategy. We discuss the technical
issues in further detail in x4.
In x5 we provide numerical results comparing APPS and PPS on both standard
and engineering optimization test problems; and nally, in x6 we outline additional
questions to pursue.
Although we are not the rst to embark on the design of asynchronous parallel
optimization algorithms, we are aware of little other work, particularly in the area
of nonlinear programming. Approaches to developing asynchronous parallel Newton
or quasi-Newton methods are proposed in [4, 8], though the assumptions underlying
these approaches dier markedly from those we address. Specically, both assume
that solving a linear system of equations each iteration is the dominant computational
cost of the optimization algorithm because the dimensions of the problems of interest
are relatively large. A dierent line of inquiry [20] considers the use of quasi-Newton
methods, but in the context of developing asynchronous stochastic global optimization
algorithms. For now, we focus on nding local minimizers.
Parallel Pattern Search
Before proceeding to a discussion of our APPS methods, let us rst review the features
of direct search, in general, and pattern search, in particular.
Direct search methods are characterized by neither requiring nor explicitly approximating
derivative information. In the engineering literature, direct search methods
are often called zero-order methods, as opposed to rst-order methods (such as the
method of steepest descent) or second-order methods (such as Newton's method) to
indicate the highest order term being used in the local Taylor series approximation
to f . This characterization of direct search is perhaps the most useful in that it
emphasizes that in higher-order methods, derivatives are used to form a local approximation
to the function, which is then used to derive a search direction and predict
the length of the step necessary to realize decrease. Instead of working with a local
approximation of f , direct search methods work directly with f .
Pattern search methods comprise a subset of direct search methods. While there
are rigorous formal denitions of pattern search [16, 23], a primary characteristic of
pattern search methods is that they sample the function over a predened pattern of
points, all of which lie on a rational lattice. By enforcing structure on the form of
the points in the pattern, as well as simple rules on both the outcome of the search
and the subsequent updates, standard global convergence results can be obtained.
For our purposes, the feature of pattern search that is amenable to parallelism
is that once the candidates in the pattern have been dened, the function values at
these points can be computed independently and, thus, concurrently.
To make this more concrete, consider the following particularly simple version of
a pattern search algorithm. At iteration k, we have an iterate x_k in R^n and a step-length parameter Δ_k > 0. The pattern of p points is denoted by D = {d_1, ..., d_p}. For the purposes of our simple example, we choose D = {±e_1, ..., ±e_n}, where e_j represents the jth unit vector. As we discuss at the end of this section, other choices for D are possible. We now have several algorithmic options open to us. One possibility is to look successively at the pattern points x_k + Δ_k d_i until either we find a point x_+ for which f(x_+) < f(x_k) or we exhaust all 2n possibilities. At the other extreme, we could determine x_+ in {x_k + Δ_k d_i : i = 1, ..., 2n} such that f(x_+) = min_i f(x_k + Δ_k d_i) (which requires us to compute f(x_k + Δ_k d_i) for all 2n vectors in the set D). Fig. 1 illustrates the pattern of points among which we search for x_+ when n = 2.
Figure 1: A simple instance of pattern search (the 2n pattern points x_k ± Δ_k e_i around x_k for n = 2).
In either variant of pattern search, if none of the pattern points reduces the objective, then we set x_{k+1} = x_k and reduce Δ_k by setting Δ_{k+1} = (1/2) Δ_k; otherwise, we set x_{k+1} = x_+ and Δ_{k+1} = Δ_k. We repeat this process until some suitable stopping criterion, such as Δ_k < tol, is satisfied.
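A serial version of the first, cautious strategy can be sketched in a few lines. The function below polls the 2n coordinate directions in order, accepts the first improving point, and halves Δ when no improvement is found; the starting step and tolerance are illustrative defaults.

```python
import numpy as np

def compass_search(f, x0, delta=1.0, tol=1e-6, max_iter=10000):
    """Simple pattern search on the pattern D = {+e_1, -e_1, ..., +e_n, -e_n}."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        if delta < tol:
            break
        improved = False
        for i in range(n):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * delta
                ft = f(trial)
                if ft < fx:              # accept the first point that decreases f
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            delta *= 0.5                 # contract the step-length parameter
    return x, fx
```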
There are several things to note about the two search strategies we have just
outlined. First, even though we have the same pattern in both instances, we have two
dierent algorithms with dierent search strategies that could conceivably produce
dierent sequences of iterates and even dierent local minimums. Second, the design
of the search strategies re
ects some intrinsic assumptions about the nature of both
the function and the computing environment in which the search is to be executed.
Clearly the rst strategy, which evaluates only one function value at a time, was
conceived for execution on a single processor. Further it is a cautious strategy that
computes function values only as needed, which suggests a frugality with respect to
the number of function evaluations to be allowed. The second strategy could certainly
be executed on a single processor, and one could make an argument as to why there
could be algorithmic advantages in doing so, but it is also clearly a strategy that can
easily make use of multiple processors. It is straightforward to then derive PPS from
this second strategy, as illustrated in Fig. 2.
Before proceeding to a description of APPS, however, we need to make one more
remark about the pattern. As we have already seen, we can easily derive two dierent
search strategies using the same basic pattern. Our requirements on the outcome of
the search are mild. If we fail to nd a point that reduces the value of f at x k ,
then we must try again with a smaller value of k . Otherwise, we accept as our new
iterate any point from the pattern that produces decrease. In the latter case, we may
choose to modify k . In either case, we are free to make changes to the pattern to
be used in the next iteration, though we left the pattern unchanged in the examples
given above. However, changes to either the step length parameter or the pattern are
subject to certain algebraic conditions, outlined fully in [16].
2 The reduction parameter is usually 1/2 but can be any number in the set (0, 1).
Initialization:
Select a pattern D = {d_1, ..., d_p}.
Select a step-length parameter Δ_0.
Select a stopping tolerance tol.
Select a starting point x_0 and evaluate f(x_0).
Iteration:
1. Evaluate f(x_k + Δ_k d_i), i = 1, ..., p, concurrently.
2. Determine x_+ such that f(x_+) = min_i f(x_k + Δ_k d_i) (synchronization point).
3. If f(x_+) < f(x_k), then set x_{k+1} = x_+ and Δ_{k+1} = Δ_k. Else set x_{k+1} = x_k and Δ_{k+1} = (1/2) Δ_k.
4. If Δ_{k+1} < tol, exit. Else, repeat.
Figure
2: The PPS Algorithm
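In a shared-memory setting, the concurrent Step 1 of the PPS algorithm can be sketched with a process pool. The pool size, tolerance, and contraction factor below are illustrative, and a real implementation for expensive simulations would distribute the evaluations across machines instead.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def pps(f, x0, delta=1.0, tol=1e-6, max_iter=1000, workers=4):
    """Synchronous parallel pattern search with the pattern {+/- e_i}.
    Note: f must be a module-level function so that it can be pickled for the workers."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    directions = [s * np.eye(n)[i] for i in range(n) for s in (+1.0, -1.0)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(max_iter):
            if delta < tol:
                break
            trials = [x + delta * d for d in directions]
            values = list(pool.map(f, trials))     # concurrent function evaluations
            i_best = int(np.argmin(values))        # synchronization point
            if values[i_best] < fx:
                x, fx = trials[i_best], values[i_best]
            else:
                delta *= 0.5
    return x, fx
```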
There still remains the question of what constitutes an acceptable pattern. We
borrow the following technical denition from [6, 16]: a pattern must be a positive
spanning set for R n . In addition, we add the condition that the spanning set be
composed of rational vectors.
Definition 1. A set of vectors {d_1, ..., d_p} positively spans R^n if any vector z in R^n can be written as a nonnegative linear combination of the vectors in the set; i.e., for any z in R^n there exist scalars λ_1, ..., λ_p ≥ 0 such that z = λ_1 d_1 + ... + λ_p d_p.
A positive spanning set contains at least n+1 vectors [6]. It is trivial to verify that the set of vectors {±e_1, ..., ±e_n} (used to define the pattern for our examples above) is a positive spanning set. 3
3 The terminology \positive" spanning set is a misnomer; a more proper name would be \non-
negative" spanning set.
Asynchronous Parallel Pattern Search
Ine-ciencies in processor utilization for the PPS algorithm shown in Fig. 2 arise
when the objective function evaluations do not complete in approximately the same
amount of time. This can happen for several reasons. First, the objective function
evaluations may be complex simulations that require dierent amounts of work depending
on the input parameters. Second, the load on the individual processors may
vary. Last, groups of processors participating in the calculation may possess dierent
computational characteristics. When the objective function evaluations take varying
amounts of time those processors that can complete their share of the computation
more quickly wait for the remaining processors to contribute their results. Thus,
adding more processors (and correspondingly more search directions) can actually
slow down the PPS method given in Fig. 2 because of an increased synchronization
penalty.
The limiting case of a slow objective function evaluation is when one never com-
pletes. This could happen if some processor fails during the course of the calculations.
In that situation, the entire program would hang at the next synchronization point.
Designing an algorithm that can handle failures plays some role in the discussion in
this section and is given detailed coverage in the next.
The design of APPS addresses the limitations of slow and failing objective function
evaluations and is based on a peer-to-peer approach rather than master-slave.
Although the master-slave approach has advantages, the critical disadvantage is that,
although recovery for the failure of slave processes is easy, we cannot automatically
recover from failure of the master process.
In the peer-to-peer scenario, all processes have equal knowledge, and each process
is in charge of a single direction in the search pattern D. In order to fully understand
APPS, let us rst consider the single processor's algorithm for synchronous PPS in a
peer-to-peer mode, as shown in Fig. 3. Here subscripts have been dropped to illustrate
how the process handles the data. The set of directions from all the processes forms
a positive spanning set. With the exception of initialization and nalization, the
only communication a process has with its peers is in the global reduction in Step 2.
To terminate, all processors detect convergence at the same time since they all have
identical, albeit independent, values for trial . 4
In an asynchronous peer-to-peer version of PPS (see Fig. 4), we allow each process
to maintain its own versions of x best , x+ , trial , etc. Unlike synchronous PPS, these
values may not always agree with the values on the other processes. Each process
decides what to do next based only on the current information available to it. If it
nds a point along its search direction that improves upon the best point it knows
so far, then it broadcasts a message to the other processors letting them know. It
also checks for messages from other processors, and replaces its best point with the
4 In a heterogeneous environment, there is some danger that the processors may not all have the
same value for trial because of slight dierences in arithmetic and the way values are stored; see [2].
Iteration:
1. Compute x_trial = x_best + Δ_trial d and f_trial = f(x_trial) (d is "my" direction).
2. Determine f_+ (and the associated x_+) via a global reduction minimizing the f_trial values computed in Step 1.
3. If f_+ < f_best, then {x_best, f_best} ← {x_+, f_+}. Else Δ_trial ← (1/2) Δ_trial.
4. If Δ_trial > tol, go to Step 1. Else, exit.
Figure
3: Peer-to-peer version of (synchronous) PPS
Iteration:
0. Consider each incoming triplet {x_+, f_+, Δ_+} received from another processor. If f_+ < f_best, then {x_best, f_best, Δ_best} ← {x_+, f_+, Δ_+} and Δ_trial ← Δ_best.
1. Compute x_trial = x_best + Δ_trial d and f_trial = f(x_trial) (d is "my" direction).
2. {x_+, f_+, Δ_+} ← {x_trial, f_trial, Δ_trial}.
3. If f_+ < f_best, then {x_best, f_best, Δ_best} ← {x_+, f_+, Δ_+}, Δ_trial ← Δ_best, and broadcast the new minimum triplet {x_best, f_best, Δ_best} to all other processors. Else Δ_trial ← (1/2) Δ_trial.
4. If Δ_trial > tol, go to Step 0. Else broadcast a local convergence message for the pair {x_best, f_best}.
5. Wait until either (a) enough of the processes have converged for this point or (b) a better point is received. In case (a), exit. In case (b), go to Step 0.
Figure
4: Peer-to-peer version of APPS
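The control flow of the asynchronous peer in Fig. 4 can be caricatured with a message queue per process. The sketch below is a single-machine stand-in for the true distributed setting; the queue-based messaging, the naming, and the omission of the tie-breaking and convergence bookkeeping are all simplifications made for illustration.

```python
import queue
import numpy as np

def apps_peer(f, d, x0, inbox, outboxes, delta=1.0, tol=1e-6):
    """One peer owning direction d.  It repeatedly evaluates its trial point,
    merges any incoming (x, f, delta) triplets, broadcasts improvements, and
    contracts its own step when it finds no improvement."""
    x_best = np.asarray(x0, dtype=float)
    f_best = f(x_best)
    while delta >= tol:
        # drain incoming "new minimum" messages
        while True:
            try:
                x_in, f_in, d_in = inbox.get_nowait()
            except queue.Empty:
                break
            if f_in < f_best:
                x_best, f_best, delta = x_in, f_in, d_in
        # evaluate my own trial point along my direction
        x_trial = x_best + delta * d
        f_trial = f(x_trial)
        if f_trial < f_best:
            x_best, f_best = x_trial, f_trial
            for out in outboxes:                  # broadcast the new minimum
                out.put((x_best, f_best, delta))
        else:
            delta *= 0.5                          # contract
    return x_best, f_best
```

Each peer would run in its own thread or process, with every other peer's inbox registered in its list of outboxes.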
incoming one if it is an improvement. If neither its own trial point nor any incoming
messages are better, it performs a contraction and continues. Convergence is a trickier
issue than in the synchronous version because the processors do not reach trial < tol
at the same time. Instead, each processor converges in the direction that it owns, and
then waits for the other processes to either converge to the same point or produce
a better point. Since every good point is broadcast to all the other process, every
process eventually agrees on the best point.
The nal APPS algorithm is slightly dierent from the version in Fig. 4 because we
spawn the objective function evaluation as a separate process. Our motivation is that
we may sometimes want to stop an objective function evaluation before it completes
in the event that a good point is received from another processor. We create a group
of APPS daemon processes that follow the basic APPS procedure outlined in Fig. 4
except that each objective function evaluation will be executed as a separate process.
The result is APPS daemons working in peer-to-peer mode, each owning a single slave
objective function evaluation process.
The APPS daemon (see Fig. 5) works primarily as a message processing center.
It receives three types of messages: a \return" from its spawned objective function
evaluation and \new minimum" and \convergence" messages from APPS daemons.
When the daemon receives a \return" message, it determines if its current trial
point is a new minimum and, if so, broadcasts the point to all other processors.
The trial that is used to generate the new minimum is saved and can then be used
to determine how far to step along the search direction. The alternative would be
to reset trial = 0 every time a switch is made to a new point, but then scaling
information is lost which may lead to unnecessary additional function evaluations.
In the comparison of the trial and best f-values, we encounter an important caveat
of heterogeneous computing [2]. The comparison of values (f's, Δ's, etc.) controls
the flow of the APPS method, and we depend on these comparisons to give consistent
results across processors. Therefore, we must ensure that values are only compared
to a level of precision available on all processors. In other words, a "safe" comparison
declares f_1 < f_2 only if f_1 < f_2 - ε_max, where ε_max is the maximum of the machine epsilons ε_mach over all processors.
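One possible realization of such a "safe" comparison is sketched below in Python. Whether the tolerance is applied absolutely or relative to the magnitudes being compared is a design choice, and the name safe_less_than is ours.

    def safe_less_than(f1, f2, eps_max):
        """Declare f1 < f2 only if the difference exceeds the coarsest machine
        precision among all processors.  eps_max is the maximum of the individual
        machine epsilons, gathered once at start-up (e.g., by an all-reduce)."""
        return f1 < f2 - eps_max * max(1.0, abs(f1), abs(f2))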
A \new minimum" message means that another processor has found a point it
thinks is best, and the receiving daemon must decide if it agrees. In this case, we
must decide how to handle tie-breaking in a consistent manner. If f_+ = f_best, then we
need to be able to say which point is "best" or if indeed the points we are comparing
are equal (i.e., x_best = x_+). The tie breaking scheme is the following. If f_+ = f_best,
then compare Δ_+ and Δ_best and select the larger value of Δ. If the Δ values are
also equal, check next to see if indeed the two points are the same, but rather than
comparing x_best and x_+ directly by measuring some norm of the difference, use a
unique identifier included with each point. Thus, two points are equal if and only if
Return from Objective Function Evaluation. Receive f_trial.
1. Update x_best and/or Δ_trial.
(a) If f_trial < f_best, then
i. {x_best, f_best, Δ_best} ← {x_trial, f_trial, Δ_trial}.
ii. Broadcast new minimum message with the triplet {x_best, f_best, Δ_best} to all other processors.
(b) Else if x_best is not the point used to generate x_trial, then Δ_trial ← Δ_best.
(c) Else Δ_trial ← (1/2) Δ_trial.
2. Check for convergence and spawn next objective function evaluation.
(a) If Δ_trial > Δ_tol, compute x_trial = x_best + Δ_trial d and spawn a new objective function evaluation.
(b) Else broadcast convergence message with {x_best, f_best, Δ_best} to all processors including myself.
New Minimum Message. Receive the triplet {x_+, f_+, Δ_+}.
1. If f_+ < f_best, then
(a) If I have evaluated at least one trial point generated from x_best or I am locally converged, then flag ← TRUE, else flag ← FALSE.
(b) Set {x_best, f_best, Δ_best} ← {x_+, f_+, Δ_+}.
(c) If flag is TRUE, then break the current objective function evaluation spawn, compute x_trial = x_best + Δ_trial d, and spawn a new objective function evaluation.
Convergence Message. Receive the triplet {x_+, f_+, Δ_+}.
1. Go through the steps for a new minimum to be sure that this point is x_best.
2. Then, if I am the temporary master, consider all the processes that have so far converged to x_best. If enough other processes have converged so that their associated directions form a positive spanning set, then output the solution, shut down the remaining APPS daemon processes, and exit.
Figure 5: APPS Daemon Message Types and Actions
their f-values, Δ-values, and unique identifiers match. 5
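The tie-breaking rules can be collected into a single comparison routine. The following Python sketch, which uses the safe_less_than function from the earlier sketch, is our own illustration of the scheme; the actual APPS code also carries the point x and additional bookkeeping.

    def better(candidate, incumbent, eps_max):
        """Decide whether candidate (f_a, delta_a, id_a) beats incumbent
        (f_b, delta_b, id_b).  Returns True (candidate wins), False (incumbent
        wins), or None when the two points are identical."""
        f_a, d_a, id_a = candidate
        f_b, d_b, id_b = incumbent
        if safe_less_than(f_a, f_b, eps_max):
            return True
        if safe_less_than(f_b, f_a, eps_max):
            return False
        # f-values tie: prefer the larger step length delta.
        if d_a != d_b:
            return d_a > d_b
        # delta also ties: the points are equal iff their identifiers match.
        if id_a == id_b:
            return None
        return id_a < id_b   # arbitrary but consistent ordering on identifiers

The last rule, ordering by identifier, is an assumption on our part; the text only requires that ties be broken in the same way on every processor.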
In certain cases, the current objective function evaluation is terminated in favor
of starting one based on a new best point. Imagine the following scenario. Suppose
three processes, A, B, and C, start off with the same value for x_best, generate their
own x_trial's, and spawn their objective function evaluations. Each objective function
evaluation takes several hours. Process A finishes its objective function evaluation
before any other process and does not find improvement, so it contracts and spawns
a new objective function evaluation. A few minutes later, Process B finishes its
objective function evaluation and finds improvement. It broadcasts its new minimum
to the other processes. Process A receives this message and terminates its current
objective function evaluation process in order to move to the better point. This may
save several hours of wasted computing time. However, Process C, which is still
working on its first objective function evaluation, waits for that to complete before
considering moving to the new x_best.
When the daemon receives a \convergence" message, it records the converged
direction, and possibly checks for convergence. The design of the method requires
that a daemon cannot locally converge to a point until it has evaluated at least one
trial point generated from that best point along its search direction. Each point has
an associated boolean convergence table which is sent in every message. When a
process locally converges, it adds a TRUE entry to its spot in the convergence table
before it sends a convergence message. In order to actually check for convergence
of a sufficient number of processes, it is useful to have a temporary master to avoid
redundant computation. We define the temporary master to be the process with
the lowest process id. While this is usually process 0, it is not always the case if we
consider faults, which are discussed in the next section. The temporary master checks
to see if the converged directions form a positive spanning set, and if so outputs the
result and terminates the entire computation.
Checking for a positive spanning set is done as follows. Let V ⊆ D be the candidate
for a positive basis. We solve nonnegative least squares problems according to
the following theorem.
Theorem 3.1 A set {v_1, ..., v_p} is a positive spanning set if the set {e_1, ..., e_n, -1}
is in its positive span (where 1 is the vector of all 1's).
Alternatively, we can check the positive basis by first verifying that V is a spanning
set using, say, a QR factorization with pivoting, and then solving a linear program.
Theorem 3.2 (Wright [24]) A spanning set {v_1, ..., v_p} is a positive spanning
set if the maximum of an associated linear program, given in [24], is 1.
5 This system will miss two points that are equal but generated via different paths.
In the first case, we can use software for the nonnegative least squares problem
from Netlib due to Lawson and Hanson [14]. In the second case, the software implementation
is more complicated since we need both a QR factorization and a linear
program solver, the latter of which is particularly hard to come by in a freely
available, portable, and easy-to-use format.
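As an illustration of the first approach, the following Python sketch checks a candidate positive spanning set with SciPy's nnls routine (an implementation of the Lawson-Hanson algorithm mentioned above). The function name and tolerance are ours; the test follows the membership characterization stated in Theorem 3.1.

    import numpy as np
    from scipy.optimize import nnls

    def positively_spans(V, tol=1e-10):
        """Check whether the columns of V (an n x p array of directions) form a
        positive spanning set of R^n by testing that each unit vector e_1,...,e_n
        and the vector of all -1's lie in the positive span of the columns.
        Each membership test is a nonnegative least squares problem."""
        n = V.shape[0]
        targets = [np.eye(n)[:, i] for i in range(n)]
        targets.append(-np.ones(n))
        for t in targets:
            _, resid = nnls(V, t)   # min ||V c - t||_2 subject to c >= 0
            if resid > tol * max(1.0, np.linalg.norm(t)):
                return False
        return True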
4 Fault Tolerance in APPS
The move toward a variety of computing environments, including heterogeneous distributed
computing platforms, brings with it an increased concern for fault tolerance
in parallel algorithms. The large size, diversity of components, and complex architecture
of such systems create numerous opportunities for hardware failures. Our
computational experience confirms that it is reasonable to expect frequent failures.
In addition, the size and complexity of current simulation codes call into question
the robustness of the function evaluations. In fact, application developers themselves
will testify that it is possible to generate input parameters for which their simulation
codes fail to complete successfully. Thus, we must contend with software failures as
well as hardware failures.
A great deal of work has been done in the computer science community with
regard to fault tolerance; however, much of that work has focused on making fault
tolerance as transparent to the user as possible. This often entails checkpointing the
entire state of an application to disk or replicating processes. Fault tolerance has
traditionally been used with loosely-coupled distributed applications that do not depend
on each other to complete, such as business database applications. This lack
of interdependence is atypical of most scientific applications. While checkpointing
and replication are adequate techniques for scientific applications, they incur a substantial
amount of unwanted overhead; however, certain scientific applications have
characteristics that can be exploited for more efficient and elegant fault tolerance.
This algorithm-dependent variety of fault tolerance has already received a considerable
amount of attention in the scientific computing community; see, e.g., [11, 12].
These approaches rely primarily on the use of diskless checkpointing, a significant
improvement over traditional approaches. The nature of APPS is such that we can
even further reduce the overhead for fault tolerance and dispense with checkpointing
altogether.
There are three scenarios that we consider when addressing fault tolerance in
APPS: 1) the failure of a function evaluation, 2) the failure of an APPS daemon, and
and 3) the failure of a host. These scenarios are shown in Figure 6. The approaches for
handling daemon and host failures are very similar to one another, but the function
evaluation failure is treated in a somewhat different manner. When a function evaluation
fails, it is respawned by its parent APPS daemon. If the failure occurs more
than a specified number of times at the same trial point, then the daemon itself fails. 6
If an APPS daemon fails, the first thing the temporary master does is check for convergence,
since the now defunct daemon may have been in the process of that check
when it died. Next it checks whether or not the directions owned by the remaining
daemons form a positive basis. If so, convergence is still guaranteed, so nothing is
done. Otherwise, all dead daemons are restarted. If a host fails, then the APPS
daemons that were running on that host are restarted on a different host according
to the rules stated for daemon failures. The faulty host is then removed from the list
of viable hosts and is no longer used.
Exit from Function Evaluation.
1. If the number of tries at this point is less than the maximum
allowed number, respawn the function evaluation.
2. Else shutdown this daemon.
An APPS Daemon Failed.
1. Record failure.
2. If I am the (temporary) master, then
(a) Check for convergence and, if converged, output the result
and terminate the computation.
(b) If the directions corresponding to the remaining daemons do
not form a positive spanning set, respawn all failed daemons.
A Host Failed.
1. Remove host from list of available hosts.
Figure 6: Fault Tolerance Messages and Actions
Two important points should be made regarding fault tolerance in APPS. First,
there are no single points of failure in the APPS algorithm itself. While there are
scenarios requiring a master to coordinate efforts, this master is not fixed. If it should
fail while performing its tasks, another master steps up to take over. This means the
degree of fault tolerance in APPS is constrained only by the underlying communication
architecture. The current implementation of APPS uses PVM, which has a
single point of failure at the master PVM daemon [9]. We expect Harness [1], the
successor to PVM, to eliminate this disadvantage. The second point of interest is that
6 This situation can be handled in different ways for different applications; attempts to evaluate
a certain point could be abandoned without terminating the daemon.
no checkpointing or replication of processes is necessary. The algorithm reconfigures
on the fly, and new APPS daemons require only a small packet of information from
an existing process in order to take over where a failed daemon left off. Therefore,
we have been able to take advantage of characteristics of APPS in order to elegantly
incorporate a high degree of fault tolerance with very little overhead.
Despite the growing concern for fault tolerance in the parallel computing world, we
are aware of only one other parallel optimization algorithm that incorporates fault
tolerance, FATCOP [3]. FATCOP is a parallel mixed integer program solver that
has been implemented using a Condor-PVM hybrid as the communication substrate.
FATCOP is implemented in a master-slave fashion which means that there is a single
point of failure at the master process. This is addressed by having the master
checkpoint information to disk (via Condor), but recovery requires user intervention
to restart the program in the event of a failure. In contrast, APPS can recover from
the failure of any type of process, including the failure of a temporary master, on its
own and has no checkpointing whatsoever.
5 Numerical Results
We compare PPS 7 and APPS on several test problems as well as two engineering
problems, a thermal design problem and a circuit simulation problem.
The tests were performed on the CPlant supercomputer at Sandia National Labs in
Livermore, California. CPlant is a cluster of DEC Alpha Miata 433 MHz Processors.
For our tests, we used 50 nodes dedicated to our sole use.
5.1 Standard Test Problems
We compare APPS and PPS with 8, 16, 24, and 32 processors on six four-dimensional
test problems: broyden2a, broyden2b, chebyquad, epowell, toint trig, and vardim [18,
5]. Since the function evaluations are extremely fast, we added extra \busy work" in
order to slow them down to better simulate the types of objective functions we are
interested in. 8
The parameters for APPS and PPS were set as follows. Let n be the problem
dimension, and let p be the number of processors. The first 2n search directions are
{±e_1, ..., ±e_n}. The remaining p - 2n directions are vectors that
are randomly generated (with a different seed for every run) and normalized to unit
length. This set of search directions is a positive spanning set. We initialize
Δ_trial and Δ_tol to the same values for every run.
7 We are using our own implementation of a positive basis PPS, as outlined in Fig. 3, rather than
the well-known parallel direct search (PDS) [22]. PDS is not based on the positive basis framework
and is quite different from the method described in Fig. 3, making comparisons difficult.
8 More precisely, the "busy work" was the solution of a 100 × 101 nonnegative least squares
problem.
We added two additional twists to the way Δ is updated for all tests. First, if the
same search direction yields the best point two times in a row, Δ is doubled before
the broadcast. Second, the smallest allowable Δ for a "new minimum" is such that
at least three contractions will be required before local convergence. That way, we
are guaranteed to have several evaluations along each search direction for each point.
Method   Process ID   Function Evals   Function Breaks   Init Time   Idle Time   Total Time
APPS     Summary      272.5            70.6              0.04        0.07        24.72
PPS      Summary      235              N/A               0.22        6.10        30.63
Table 1: Detailed results for epowell on eight processors.
Before considering the summary results, we examine detailed results from two sample
runs given in Table 1. Each process reports its own counts and timings. All times
are reported in seconds and are wall clock times. Because APPS is asynchronous,
the number of function evaluations varies for each process, in this case by as much as
25%. Furthermore, APPS sometimes \breaks" functions midway through execution.
On the other hand, every process in PPS executes the same number of function eval-
uations, and there are no breaks. For both APPS and PPS, the initialization time is
longer for the first process since it is in charge of spawning all the remaining tasks.
The idle time varies from task to task but is overall much lower for APPS than PPS.
An APPS process is only idle when it is locally converged, but a PPS process may
potentially have some idle time every iteration while it waits for the completion of the
global reduction. The total wall clock time varies from process to process since each
starts and stops at slightly dierent times. The summary information is the average
over all processes except in the case of total time, in which case the maximum over
all times is reported.
Because some of the search directions are generated randomly, every run of PPS
and APPS generates a different path to the solution and possibly different solutions in
the case of multiple minima. 9 Because of the nondeterministic nature of APPS, it gets
different results every run even when the search directions are identical. Therefore,
for each problem we report average summary results from 25 runs.
Problem      Procs   Function Evals    APPS     Idle Time      Total Time
Name                 APPS     PPS      Breaks   APPS   PPS     APPS   PPS
broyden2a    8       40.59    37.00    8.14     0.07   0.95    3.88   4.88
chebyquad    8       73.06    62.00    16.74    0.05   1.61    6.86   8.11
toint trig   8       53.83    41.00    10.97    0.04   1.11    4.99   5.60
Table 2: Results on a collection of four dimensional test problems.
The test results are summarized in Table 2. These tests were run in a fairly
favorable environment for PPS: a cluster of homogeneous, dedicated processors. The
9 The exception is PPS with p = 8. Because there are no "extra" search directions, the path
to the solution is the same for every run; only the timings differ.
primary difficulty for PPS is the cost of synchronization in the global reduction. In
terms of average function evaluations per processor, both APPS and PPS required
about the same number. In general for both APPS and PPS, the number of function
evaluations per processor decreased as the number of processes increased. We expect
the idle time for APPS to be less than that for PPS; and, indeed, the idle time is
two orders of magnitude less. Furthermore, the idle time for PPS increases as the
number of processors goes up. APPS was faster (on average) than PPS in 22 of 24
cases. The total time for APPS either stayed steady or reduced as the number of
processors increased. In contrast, the total PPS time increased as the number of
processors increased due to the synchronization penalty.
Comparing APPS and PPS on simple problems is not necessarily indicative of
results for typical engineering problems. The next two subsections yield more meaningful
comparisons, given the types of problems for which pattern search is best
suited.
5.2 TWAFER: A Thermal Design Problem
This engineering application concerns the simulation of a thermal deposition furnace
for silicon wafers. The furnace contains a vertical stack of 50 wafers and several heater
zones. The goal is to achieve a specified constant temperature across each wafer and
throughout the stack. The simulation code, TWAFER [10], yields measurements at
a discrete collection of points on the wafers. The objective function f is defined by a
least squares fit of the N discrete wafer temperatures T_j to a prescribed ideal T* as
f(x) = Σ_{j=1}^{N} ( T_j(x) - T* )²,
where x_i is the unknown power parameter for the heater in zone i. We consider the
four and seven zone problems.
For this problem, we used the following settings for APPS and PPS. The first n + 1
search directions are the points of a regular simplex centered about the origin. The
remaining p - (n + 1) are generated randomly and normalized to unit length.
We set Δ_trial and Δ_tol to values appropriate for this problem.
There are some difficulties from the implementation point of view that are quite
common when dealing with simulation codes. Because TWAFER is a legacy code,
it expects an input file with a specific name and produces an output file with a
specific name. The names of these files cannot be changed, and TWAFER cannot
be hooked directly to PVM. As a consequence, we must write a "wrapper" program
that runs an input filter, executes TWAFER via a system call, and runs an output
filter. The input file for TWAFER must contain an entire description of the furnace
and the wafers. We are only changing a few values within that file, so our input filter
generates the input file for TWAFER by using a "template" input file. This template
file contains tokens that are replaced by our optimization variables. The output file
from TWAFER contains the heat measurements at discrete points. Our output filter
reads in these values and computes the least squares difference between these and the
ideal temperature in order to determine the value of the objective function.
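A schematic version of such a wrapper is sketched below in Python. The file names, the token convention, and the ideal temperature are placeholders and not the actual TWAFER interface; only the structure (input filter, system call, output filter) reflects the description above.

    import os
    import subprocess
    import numpy as np

    def twafer_objective(x, template="twafer.template", workdir="run_000",
                         t_ideal=1000.0):
        """Hypothetical wrapper: fill a template input file, run the legacy code
        in its own subdirectory, then form the least-squares misfit."""
        os.makedirs(workdir, exist_ok=True)
        # Input filter: substitute the power parameters into the template.
        text = open(template).read()
        for i, xi in enumerate(x):
            text = text.replace("@POWER%d@" % (i + 1), "%.6e" % xi)
        with open(os.path.join(workdir, "twafer.in"), "w") as fh:
            fh.write(text)
        # Run the simulation via a system call.
        subprocess.run(["twafer"], cwd=workdir, check=True)
        # Output filter: read the discrete wafer temperatures and compute the
        # least-squares difference from the prescribed ideal temperature.
        temps = np.loadtxt(os.path.join(workdir, "twafer.out"))
        return float(np.sum((temps - t_ideal) ** 2))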
An additional caveat is that TWAFER must be executed in a uniquely named
subdirectory so that its input and output files are not confused with those of any
other TWAFER process that may be accessing the same disk.
Lastly, because TWAFER is executed via a system call, APPS has no way of
terminating its execution prematurely. (APPS can terminate the wrapper program,
but TWAFER itself will continue to run, consuming system resources.) Therefore,
we allow all function evaluations to run to completion, that is, we do not allow any
breaks.
Another feature of TWAFER is that it has nonnegativity constraints on the power
settings. We use a simple barrier function that returns a large value whenever the constraints are violated.
Problem   Method   Procs   f(x)    Function Evals   Idle Time   Total Time
4 Zone    APPS     20      0.67    334.6            0.17        395.94
4 Zone    PPS      20      0.66    379.9            44.77       503.88
Table 3: Results on the four and seven zone TWAFER problems.
Results for the TWAFER problem are given in Table 3. The four zone results are
the averages over ten runs, and the seven zone results are averages over nine runs.
(The tenth PPS run failed due to a node fault. The tenth APPS run had several faults,
and although it did get the final solution, the summary data was incomplete.) Here we
also list the value of the objective function at the solution. Observe that PPS yields
slightly better function values (compared to the original value of more than 1000)
on average but at a cost of more function evaluations and more time. The average
function evaluation execution time for the four zone problem is 1.3 seconds and for
the seven zone problem is 10.4 seconds; however, the number of function evaluations
includes instances where the bounds were violated, in which case the TWAFER code
was not executed and the execution time is essentially zero since we simply return
the barrier value. Once again PPS has a substantial amount of idle time. The relatively high
APPS idle time in the seven zone problem was due to a single run in which the idle
time was particularly high for some nodes (634 seconds on average).
5.3 SPICE: A Circuit Simulation Problem
The problem is to match simulation data to experimental data for a particular circuit
in order to determine its characteristics. In our case, we have 17 variables representing
inductances, capacitances, diode saturation currents, transistor gains, leakage
inductances, and transformer core parameters. The objective function is defined as
f(x) = Σ_{j=1}^{N} ( V^SIM_j(x) - V^EXP_j )²,
where N is the number of time steps, V^SIM_j(x) is the simulation voltage at time step
j for input x, and V^EXP_j is the experimental voltage at time step j.
The SPICE3 [19] package is used for the simulation. Like TWAFER, SPICE3
communicates via file input and output, and so we again use a wrapper program.
The input filter for SPICE is more complicated than that for TWAFER because
the variables for the problem are on different scales. Since APPS has no mechanism for
scaling, we handled this within the input filter by computing an affine transformation
of the APPS variables. Additionally, all the variables have upper and lower bounds.
Once again, we use a simple barrier function.
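The scaling and bound handling can be illustrated as follows; the affine map, the constant used by the barrier, and the function names are our own illustrative choices, not the values used in the actual SPICE runs.

    import numpy as np

    def to_physical(u, lower, upper):
        """Affine map from the unscaled APPS variables u to the physical circuit
        parameters; a sketch of the scaling done inside the input filter."""
        lower, upper = np.asarray(lower), np.asarray(upper)
        return lower + u * (upper - lower)

    def barrier_objective(u, simulate, lower, upper, big=1.0e10):
        """Return a large value when a bound is violated, otherwise run the
        simulation.  'simulate' stands in for the SPICE wrapper."""
        x = to_physical(np.asarray(u, dtype=float), lower, upper)
        if np.any(x < lower) or np.any(x > upper):
            return big
        return simulate(x)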
The output filter for SPICE is also more complicated than that for TWAFER.
The SPICE output files consist of voltages that are to be matched to the experimental
data. The experimental data is two cycles of output voltage measured at a discrete set of time steps (see Fig. 7). The simulation data contains
approximately 10 or more cycles, but only the last few complete cycles are used
because the early cycles are not stable. The cycles must be automatically identified
so that the data can be aligned with the experimental data. Furthermore, the time
steps from the simulation may differ from the time steps in the experiment, and so
the simulation data is interpolated (piecewise constant) to match the experimental
data. The function value at the initial point is 465.
The APPS parameters were set as follows. The search directions were generated
in the same way as those for the test problems. We set Δ_tol so that the
tolerance corresponds to a less than 1% change in the circuit parameters. Once
again, we do not allow "breaks" since the function evaluation is called from a wrapper
program via a system call.
The results from APPS and PPS on the SPICE problem are reported in Table 4.
In this case, we are reporting the results of single runs, and we give results for 34 and
50 processors. The average SPICE run time is approximately 20 seconds; however,
we once again do not differentiate between times when the boundary conditions are
violated and when the SPICE code is actually executed. Increasing the number of
processors by 47% results in a 39% reduction in execution time for APPS but only 4%
for PPS. For both 34 and 50 processors, APPS is faster than PPS, and even produces
a slightly better objective value (compared to the starting value of more than 400).
At the solution, two constraints are binding.
Figure 7: Spice results. The solid line represents the experimental output. The dashed
line represents the simulation output after optimization. The dotted line represents
the starting point for the optimization.
Method   Procs   f(x)    Function Evals   Idle Time   Total Time
APPS     34      26.3    57.5             111.92      1330.55
APPS     50      26.9    50.6             63.22       807.29
PPS      34      28.8    53.0             521.48      1712.24
PPS      50      34.9    47.0             905.48      1646.53
Table 4: Results for the 17 variable SPICE problem.
Initial Procs   Final Procs   f(x)    Total Time
34              34            27.8    1618.46
50              32
Table 5: APPS results for the 17 variable SPICE problem with failures induced at fixed time intervals.
Table 5 shows the results of running APPS with faults. In this case, we used a
program that automatically killed one PVM process at fixed time intervals. The PVM
processes are the APPS daemons and the wrapper programs. The SPICE3 simulation
is executed via a system call, and so continues to execute even if its wrapper
terminates; regardless, the SPICE3 program can no longer communicate with APPS
and is effectively dead.
The results are quite good. In the case of 34 processors, every APPS task that
fails must be restarted in order to maintain a positive basis. So, the final number of
APPS processes is 34. The total time is only increased by 21% despite approximately
50 failures; furthermore, this time is still faster than PPS. In the case of 50 processors,
the final number of processors is 32. (Recall that tasks are only restarted if there are
not enough remaining to form a positive basis.) In the case of 50 processors, the
solution time is only increased by 29%, and is once again still faster than PPS. In this
case, however, the quality of the solution is degraded. This is likely due to the fact
that the solution lies on the boundary and some of the search directions that failed
were needed for convergence (see Lewis and Torczon [17]).
6 Conclusions
The newly-introduced APPS method is superior to PPS in terms of overall computing
time on a homogeneous cluster environment for both generic test problems and
engineering applications. We expect the difference to be even more pronounced for
larger problems (both in terms of execution time and number of variables) and for
heterogenous cluster environments. Unlike PPS, APPS does not have any required
synchronizations and, thus, gains most of its advantage by reducing idle time.
APPS is fault tolerant and, as we see in the results on the SPICE problem for 34
processors, does not suffer much slow-down in the case of faults.
In forthcoming work, Kolda and Torczon [13] will show that in the unconstrained
case the APPS method converges (even in the case of faults) under the same assumptions
as pattern search [23].
Although the engineering examples used in this work have bound constraints, the
APPS method was not fully designed for this purpose, as evidenced in the poor results
on the SPICE problem with faults on 50 processors. Future work will explore the
algorithm, implementation, and theory in the constrained cases.
In the implementation described here, the daemons and function evaluations are
in pairs; however, for multi-processor (MPP) compute nodes, this means there will be
several daemon/function evaluation pairs per node. An alternative implementation
of APPS is being developed in which there is exactly one daemon per node regardless
of how many function evaluations are assigned to it. As part of this alternative
implementation, the ability to dynamically add new hosts as they become available
(or to re-add previously failed hosts) will be incorporated.
Another improvement to the implementation will be the addition of a function
value cache in order to avoid reevaluating the same point more than once. The
challenge is deciding when two points are actually equal; this is especially difficult
without knowing the sensitivity of the function to changes in each variable.
The importance of positive bases in the pattern search method raises several interesting research
questions. First, we might consider the best way to generate the starting basis. We
desire a pattern that maximizes the probability of maintaining a positive basis in
the event of failures. Another research area is the effect that "conditioning" of the
positive basis has on convergence. Our numerical studies have indicated that the
quality of the positive basis may be an issue. Last, supposing that enough failures
have occurred so that there is no longer a positive basis, we may ask if we can easily
determine the fewest number of vectors to add to once again have a positive basis.
Our current implementation simply restarts all failed processes.
Acknowledgments
Thanks to Jim Kohl, Ken Marx, and Juan Meza for helpful comments and advice in the
implementation of APPS and the test problems.
--R
Harness: A next generation distributed virtual machine.
Practical experience in the numerical dangers of heterogeneous computing
FATCOP: A fault tolerant Condor-PVM mixed integer program solver
Convergence and numerical results for a parallel asynchronous quasi-Newton method
Testing a class of methods for solving minimization problems with simple bounds on the variables
Theory of positive linear dependence
An asynchronous parallel Newton method
PVM: Parallel Virtual Machine: A Users' Guide and Tutorial for Network Parallel Computing
A model for low pressure chemical vapor deposition in a hot-wall tubular reactor
On the convergence of asynchronous parallel direct search.
Solving Least Squares Problems
Why pattern search works
Rank ordering and positive bases in pattern search algorithms
How to Build a Beowulf: A Guide to the Implementation and Application of PC Clusters
PDS: Direct search methods for unconstrained optimization on either sequential or parallel machines
A note on positively spanning sets.
--TR
--CTR
Genetha A. Gray , Tamara G. Kolda, Algorithm 856: APPSPACK 4.0: asynchronous parallel pattern search for derivative-free optimization, ACM Transactions on Mathematical Software (TOMS), v.32 n.3, p.485-507, September 2006
A. Ismael Vaz , Lus N. Vicente, A particle swarm pattern search method for bound constrained global optimization, Journal of Global Optimization, v.39 n.2, p.197-219, October 2007
Steven Benson , Manojkumar Krishnan , Lois Mcinnes , Jarek Nieplocha , Jason Sarich, Using the GA and TAO toolkits for solving large-scale optimization problems on parallel computers, ACM Transactions on Mathematical Software (TOMS), v.33 n.2, p.11-es, June 2007
Genetha Anne Gray , Tamara G. Kolda , Ken Sale , Malin M. Young, Optimizing an Empirical Scoring Function for Transmembrane Protein Structure Determination, INFORMS Journal on Computing, v.16 n.4, p.406-418, Fall 2004
Jack Dongarra , Ian Foster , Geoffrey Fox , William Gropp , Ken Kennedy , Linda Torczon , Andy White, References, Sourcebook of parallel computing, Morgan Kaufmann Publishers Inc., San Francisco, CA, | distributed computing;asynchronous parallel optimization;pattern search;direct search;fault tolerance;cluster computing |
587249 | Extensible Lattice Sequences for Quasi-Monte Carlo Quadrature. | Integration lattices are one of the main types of low discrepancy sets used in quasi-Monte Carlo methods. However, they have the disadvantage of being of fixed size. This article describes the construction of an infinite sequence of points, the first bm of which forms a lattice for any nonnegative integer m. Thus, if the quadrature error using an initial lattice is too large, the lattice can be extended without discarding the original points. Generating vectors for extensible lattices are found by minimizing a loss function based on some measure of discrepancy or nonuniformity of the lattice. The spectral test used for finding pseudorandom number generators is one important example of such a discrepancy. The performance of the extensible lattices proposed here is compared to that of other methods for some practical quadrature problems. | Introduction
. Multidimensional integrals appear in a wide variety of applications
in finance [4, 48, 50], physics and engineering [29, 42, 49, 58], and statistics
[8, 12, 13]. The integration domain may often be assumed, after some appropriate
transformation, to be the unit cube, in which case the integral takes the form:
I(f) = ∫_{[0,1)^s} f(x) dx,
for some known integrand, f.
Adaptive methods, such as [2], have been developed for approximating multidimensional
integrals, but their performance deteriorates as the dimension increases.
For finance problems the dimension can be in the hundreds or even thousands. An
alternative to adaptive quadrature is Monte Carlo methods, where the integral is
approximated by the sample mean of the integrand evaluated on a set, P , of N independent
random points drawn from a uniform distribution on [0, 1)^s:
Q(f; P) = (1/N) Σ_{z ∈ P} f(z).   (1.1)
The quadrature error for Monte Carlo methods is typically O(N \Gamma1=2 ). One reason for
this relatively low accuracy is that the points in P are chosen independently of each
other. Thus, some parts of the integration domain contain clumps of points while
other parts are empty of points.
To obtain greater accuracy one may replace the random set P by a carefully
chosen deterministic set that is more uniformly distributed on [0; 1) s . As is explained
in Section 3, one may define a discrepancy that measures how much the empirical
distribution function of P differs from the continuous uniform distribution. Then
one chooses P in quadrature rule (1.1) to have as small a discrepancy as possible.
The quadrature methods based on low discrepancy sets are called quasi-Monte Carlo
methods. They are discussed in several review articles [3, 16, 40, 59] and monographs
[27, 44, 53].
Two important families of low discrepancy sets are:
i. integration lattices [44, Chap. 5] and [53], and
ii. digital nets and sequences [44, Chap. 4].
These two families are introduced in Section 2. One advantage of the second family
is that any number of consecutive points from a good digital sequence has low dis-
crepancy. If one needs more points, one may use additional terms from the digital
sequence without discarding the original ones. On the other hand, until now, the
number of points in an integration lattice has had to be specified in advance. So far,
there has been no systematic way of adding points to an integration lattice while still
retaining its lattice structure.
The purpose of this article is to provide a method for constructing infinite lattice
sequences, thereby eliminating the need to know N , the number of points, in advance.
Although the emphasis is on rank-1 lattices, the method may be applied to integration
lattices of arbitrary rank. Given an infinite lattice sequence one may approximate a
multidimensional integrand with a quadrature rule of the form (1.1) for a moderate
number of points N_0. If the error estimate is unacceptably high, then one may choose
an additional N_1 - N_0 points from the lattice sequence to obtain a quadrature rule
with N_1 points, and so on.
The following section describes the new method for obtaining infinite lattice se-
quences. Section 3 briefly reviews some results on discrepancy and quadrature error
analysis for quasi-Monte Carlo methods. These are used to find the generating vectors
for the new lattice sequences in Section 4. The issue of error estimation is addressed
in Section 5. Two practical examples are explored in Section 6, where the new lattice
sequences are compared with existing quadrature methods. The last section contains
some concluding remarks.
2. Integration Lattices and Digital Sequences. This section begins by introducing
integration lattices. Next, digital sequences and (t; s)-sequences are de-
scribed. Finally, the idea underlying digital sequences is used to produce infinite
lattice sequences.
2.1. Integration Lattices. Rank-1 lattices, also known as good lattice point
(glp) sets, were introduced by Korobov [31] and have been widely studied since then
(see [27, 44, 53] and the references therein). The formula for a shifted rank-1 lattice
set is simply
P = { {i h / N + Δ} : i = 0, 1, ..., N - 1 },   (2.1)
where N is the number of points, h is an s-dimensional generating vector of integers
(a good lattice point) that depends on N, Δ is an s-dimensional shift vector in [0, 1)^s,
and {x} denotes the fractional part of a vector x, i.e., each component of x reduced modulo 1.
Rank-1 glp sets were later generalized by introducing more than one generating vector.
A shifted integration lattice of rank r with N = N_1 ··· N_r points based on generating vectors h_1, ..., h_r
is:
P = { {i_1 h_1 / N_1 + ··· + i_r h_r / N_r + Δ} : i_k = 0, 1, ..., N_k - 1, k = 1, ..., r }.
Integration lattices and their use for quadrature are discussed in the monograph [53].
For a given N there is the problem of choosing good generating vectors. Although
theoretical constructions exist for s = 2, in higher dimensions one typically finds
generating vectors by minimizing a discrepancy or measure of non-uniformity of the
lattice. Several examples of discrepancies are given in Section 3.
2.2. Digital Nets and Sequences. Digital nets and sequences are another
method of constructing low discrepancy sets (see [32] and [44, Chapter 4]). Let b
denote a positive integer greater than one. For any non-negative integer i one may
extract the digits of its base b representation, finitely
many of which are nonzero:
i = i_1 + i_2 b + i_3 b² + ···.   (2.2a)
The i-th term of a digital net or sequence is given by
(y_{j1}, y_{j2}, ...)' = C_j (i_1, i_2, ...)'  (mod b),  j = 1, ..., s,   (2.2b)
z_j^{(i)} = y_{j1} b^{-1} + y_{j2} b^{-2} + ···,  j = 1, ..., s,   (2.2c)
z^{(i)} = (z_1^{(i)}, ..., z_s^{(i)}).   (2.2d)
If the generating matrices C_j are m × m, then this construction yields a
digital net {z^{(0)}, ..., z^{(b^m - 1)}} with b^m points. If the generating matrices are
∞ × ∞, i.e. each C_j has entries c_{jkl} defined for k, l = 1, 2, ..., then one has a digital
sequence {z^{(0)}, z^{(1)}, ...}.
The prototype digital sequence is the one-dimensional Van der Corput sequence,
{φ_b(0), φ_b(1), ...}. This is defined by taking s = 1 and C_1 equal to the identity matrix:
φ_b(i) = i_1 b^{-1} + i_2 b^{-2} + i_3 b^{-3} + ···.   (2.3)
In essence, the Van der Corput sequence takes the b-ary representation of an integer
and reflects it about the decimal point.
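For concreteness, the radical inverse φ_b in (2.3) can be computed digit by digit; the following short Python function is one possible way to do it.

    def phi(i, b=2):
        """Radical inverse phi_b(i): reflect the base-b digits of i about the
        decimal point, as in (2.3)."""
        x, scale = 0.0, 1.0 / b
        while i > 0:
            i, digit = divmod(i, b)
            x += digit * scale
            scale /= b
        return x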
2.3. (t; m; s)-Nets and (t; s)-Sequences. Similarly to integration lattices one
has the problem of how to choose the generating matrices C j in (2.2). Usually this
is done to optimize the quality factor of the net or sequence. For any non-negative integer
s-vector k, and for a base b, consider the following set of disjoint boxes, whose union
is the unit cube:
B_k = { ∏_{j=1}^s [ a_j b^{-k_j}, (a_j + 1) b^{-k_j} ) : a_j = 0, 1, ..., b^{k_j} - 1 }.   (2.4)
Each such box in B_k has volume b^{-k_1 - ··· - k_s}. A (t, m, s)-net in base b is a set of
b^m points in [0, 1)^s, such that every box in B_k contains b^{m - k_1 - ··· - k_s} of these
points for any k satisfying
k_1 + ··· + k_s ≤ m - t.   (2.5)
Thus, any function that is piecewise
constant on the boxes in B_k will be integrated exactly according to quadrature rule (1.1).
The integer parameter t is called the quality parameter of the net and it takes values
between 0 and m. A smaller value of t means a better net.
A (t, s)-sequence is an infinite sequence of points in [0, 1)^s such that the b^m points
numbered l b^m through (l + 1) b^m - 1 form a (t, m, s)-net in base b for every m ≥ t and every non-negative integer
l. By using a (t, s)-sequence to do quasi-Monte Carlo calculations, one need not know
the number of points required in advance. If the first b^{m_1} points do not give sufficient
accuracy, then one may add the next b^{m_2} - b^{m_1} points in the sequence to get a net
with b^{m_2} points, without throwing away the first b^{m_1} points.
There is a connection between digital nets and sequences as defined above and
(t, m, s)-nets and (t, s)-sequences [32]. Let c^{(m)}_{ji} denote the vector containing the first
m elements of the i-th row of the generating matrix C_j. Given a non-negative
integer s-vector k, let C(m, k) be the following set of the first k_j rows of the j-th
generating matrix for j = 1, ..., s:
C(m, k) = { c^{(m)}_{ji} : i = 1, ..., k_j,  j = 1, ..., s }.
Furthermore, let t(m, s) denote the smallest non-negative integer t such that C(m, k)
is a linearly independent set of vectors
for all k with k_1 + ··· + k_s ≤ m - t, and let T(s) = sup_{m ≥ 0} t(m, s).
The following theorem (see [32]) gives the condition for which a digital net is a (t; m; s)-
net and a digital sequence is a (t; s)-sequence.
Theorem 2.1. For prime base b the digital net defined above in (2.2) is a
(t(m, s), m, s)-net. If, in addition, T(s) is finite, then the digital sequence defined in
(2.2) is a (T(s), s)-sequence.
Finding good generating matrices for digital nets and sequences is an active area
of research. Virtually all generators found so far have been based on number theoretic
arguments. Early sequences include those of Sobol' [56], Faure [9] and Niederreiter
[43]. Algorithms for these sequences can be found in the ACM Transactions on Mathematical
Software collection. The FINDER software developed at Columbia University
by Traub and Papageorgiou implements generalized Sobol' and generalized Faure se-
quences. New constructions with smaller t values are given by Niederreiter and Xing
[47].
2.4. Infinite Lattice Sequences. The idea underlying digital sequences may
be extended to integration lattices to obtain infinite lattice sequences. The i th term
of a rank-1 lattice, which is {i h / N}, depends inherently on the number of points, N.
Thus, the formula for a lattice must be rewritten in a way that does not involve N
explicitly. A way to do this was first suggested in [24].
Suppose that the number of points, N, is some integer power of a base b ≥ 2,
that is, N = b^m. This is the same assumption as for a digital or (t, m, s)-net. The
first N values of the Van der Corput sequence defined in (2.3) are
0, 1/N, 2/N, ..., (N - 1)/N,
although in a different order. Therefore, the term i/N that
appears in the definition of the rank-1 lattice set may be replaced by φ_b(i), a term that
does not depend on N.
The s-dimensional generating vector h in (2.1) typically depends on N also. It
may be expressed in b-ary form as h_j = h_{j1} + h_{j2} b + ··· + h_{jm} b^{m-1}, j = 1, ..., s,
where the h_{jk} ∈ {0, 1, ..., b - 1} are digits. For k > m the digits h_{jk} do not affect the
definition of the rank-1 lattice set with N = b^m points since they only contribute
integers to the product φ_b(i) h. Therefore, each component of h may be written (in
principle) as an infinite string of digits:
h_j = h_{j1} + h_{j2} b + h_{j3} b² + ···,  j = 1, ..., s.   (2.6)
This single "infinite" generating vector may serve for all possible values of m.
The preceding paragraphs provide the basis for defining an infinite rank-1 lattice
sequence. Altering the original definition in (2.1) leads to the following:
Definition 2.2. An infinite rank-1 lattice sequence in base b with generating
vector h of the form (2.6) and shift Δ is defined as:
z^{(i)} = { φ_b(i) h + Δ },  i = 0, 1, 2, ....   (2.7)
The first b^m terms of the infinite rank-1 lattice sequence (2.7) are a rank-1 lattice.
Moreover, just as certain subsets of a (t; s)-sequence are (t; m; s)-nets, so subsets of
an infinite rank-1 lattice sequence are shifted rank-1 lattices.
Theorem 2.3. Suppose that P is the set consisting of the (l + 1)-st run of b^m terms
of the infinite rank-1 lattice sequence defined in (2.7):
P = { z^{(i)} : i = l b^m, ..., (l + 1) b^m - 1 }.
Then, P is a rank-1 lattice with shift φ_b(l) b^{-m} h + Δ, that is,
P = { { i h / b^m + φ_b(l) b^{-m} h + Δ } : i = 0, 1, ..., b^m - 1 }.
Proof. The proof follows directly from the definition of the Van der Corput
sequence. For all i = l b^m + i' with 0 ≤ i' < b^m, note that φ_b(i) = φ_b(i') + b^{-m} φ_b(l).
Substituting the right hand side into the definition of P completes the proof.
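The construction in Definition 2.2 is straightforward to implement. The sketch below generates the i-th point of an extensible rank-1 lattice sequence in Python; the example generating vector is arbitrary and only for illustration.

    import numpy as np

    def phi_b(i, b=2):
        """Radical inverse, as in (2.3)."""
        x, scale = 0.0, 1.0 / b
        while i > 0:
            i, digit = divmod(i, b)
            x += digit * scale
            scale /= b
        return x

    def lattice_point(i, h, b=2, shift=None):
        """i-th point of the infinite rank-1 lattice sequence (2.7):
        z_i = {phi_b(i) * h + Delta}."""
        z = phi_b(i, b) * np.asarray(h, dtype=float)
        if shift is not None:
            z = z + np.asarray(shift, dtype=float)
        return z - np.floor(z)          # componentwise fractional part

    # Example: the first 8 points of a 3-dimensional sequence in base 2,
    # with an arbitrary illustrative generating vector.
    # pts = [lattice_point(i, h=[1, 5, 9]) for i in range(8)]

By Theorem 2.3, taking i = 0, ..., b^m - 1 in this sketch reproduces a shifted rank-1 lattice with b^m points, and each subsequent run of b^m indices gives another shifted copy.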
The definition of an infinite rank-1 lattice sequence may be extended to integration
lattices of arbitrary rank.
Definition 2.4. An infinite lattice sequence (of arbitrary rank r) with bases b_1, ..., b_r,
generating vectors h_1, ..., h_r of the form (2.6), and shift Δ is defined as:
z^{(i_1, ..., i_r)} = { φ_{b_1}(i_1) h_1 + ··· + φ_{b_r}(i_r) h_r + Δ },  i_k = 0, 1, 2, ....
A practical complication for an integration lattice of rank greater than 1 is that there
are multiple indices, i k , each of which may or may not tend to infinity, and each at its
own rate. Because of this complication we will focus on rank-1 lattices in the sections
that follow. Theorem 2.3 also has a natural extension to infinite lattice sequences of
arbitrary rank, and its proof is similar.
3. Discrepancy. Unlike (t; s)-nets for which there exist explicit constructions
of the generating matrices C j , there are no such explicit constructions of generating
vectors h for rank-1 lattices for arbitrary s. Tables of generating vectors for lattices
that do exist (see [8, 14, 27]) are usually obtained by minimizing some measure of
non-uniformity, or discrepancy, of the lattice. This section describes several useful
discrepancy measures.
Let Err(f; P) = I(f) - Q(f; P) denote the quadrature error for a rule of the form (1.1) for an arbitrary set P.
Worst case error analysis of the quadrature error leads to a Koksma-Hlawka-type inequality of the form [23]:
|Err(f; P)| ≤ D(P) V(f),   (3.1)
where D(P ) is the discrepancy or measure of nonuniformity of the point set defining
the quadrature rule, and V (f) is the variation or fluctuation of the integrand, f . The
precise definitions of the discrepancy and the variation depend on the particular space
of integrands.
In the traditional Koksma-Hlawka inequality (see [26] and [44, Theorem 2.11]),
the variation is the variation in the sense of Hardy and Krause, and the discrepancy
is the L_∞-star discrepancy:
D*_∞(P) = || F_unif - F_P ||_∞,  where F_P(x) = |{z ∈ P : z ∈ [0, x)}| / N.   (3.2)
Here F_unif is the uniform distribution on the unit cube, F_P is the empirical distribution
function for the sample P, and |·| denotes the number of points in a set. The notation
||·||_p denotes the L_p-norm or the ℓ_p-norm, depending on the context. Error bounds of
the form (3.1) involving the L p -star discrepancy have been derived by [57, 62]. Error
bounds involving generalizations of the star discrepancy appear in [21, 22, 23, 55].
When the integrands belong to a reproducing kernel Hilbert space, the error
bound (3.1) may be easily obtained [21, 23]. The discrepancy may be written in
terms of the reproducing kernel, K(x, y), as:
D(P)² = ∫_{[0,1)^{2s}} K(x, y) dx dy - (2/N) Σ_{z ∈ P} ∫_{[0,1)^s} K(x, z) dx + (1/N²) Σ_{z, z' ∈ P} K(z, z').   (3.3)
For example, the L_2-star discrepancy, whose formula was originally derived in [60], is a
special case of the above formula with K(x, y) = ∏_{j=1}^s [1 - max(x_j, y_j)].
An advantage of
considering reproducing kernel Hilbert spaces of integrands is that the computational
complexity of the discrepancy is relatively small (at worst O(N 2 ) operations). By
contrast, the L_∞-star discrepancy requires O(N^s) operations to evaluate.
The discrepancy of type (3.3) can also be interpreted as an average-case quadrature
error [25, 41, 61]. Suppose that the integrand is a random function lying in the
sample space A, and suppose that the integrand has zero mean and covariance kernel,
K(x, y), that is, E[f(x)] = 0 and E[f(x) f(y)] = K(x, y) for all x, y ∈ [0, 1)^s.
Then the root mean square quadrature error over A is the discrepancy as defined in (3.3):
( E |Err(f; P)|² )^{1/2} = D(P).   (3.4)
If P is a simple random sample, then the mean square discrepancy is [25]:
E[ D(P)² ] = (1/N) { ∫_{[0,1)^s} K(x, x) dx - ∫_{[0,1)^{2s}} K(x, y) dx dy }.   (3.5)
This formula serves as a benchmark for other (presumably superior) low discrepancy
sets. Since the mean square discrepancy is O(N \Gamma1 ), the discrepancy itself is typically
O(N^{-1/2}) for a simple random sample. The variance of a function, f, may be defined as
σ²(f) = ∫_{[0,1)^s} f(x)² dx - ( ∫_{[0,1)^s} f(x) dx )².
The mean value of the variance over the space of average-case integrands can be shown
to be [25]:
E[ σ²(f) ] = ∫_{[0,1)^s} K(x, x) dx - ∫_{[0,1)^{2s}} K(x, y) dx dy,   (3.6)
which is just the term in braces in (3.5).
It may seem odd at first that the discrepancy can serve both as an average-case
and worst-case quadrature error. The explanation is that the space of integrands, A,
in the average-case analysis is much larger than the space of integrands, W , in the
worst-case analysis. See [21, 25] and the references therein for the proofs of the above
results as well as further details.
There are some known asymptotic results for the discrepancies of (t, m, s)-nets.
The L_∞-star discrepancy of any (t, m, s)-net is O(N^{-1} [log N]^{s-1}) [44, Theorem 4.10].
Moreover, the typical (in the sense of an average taken over all possible nets) L_2-star
discrepancy of (0, m, s)-nets is O(N^{-1} [log N]^{(s-1)/2}), the best possible for
any set. For discrepancies of the form (3.3) with sufficiently smooth kernels, typical
(0, m, s)-nets have O(N^{-3/2} [log N]^{(s-1)/2}) discrepancy.
Lattice rules, the topic of this article, are known to be particularly effective for integrating
periodic functions. Suppose that the integrand has an absolutely convergent
Fourier series with Fourier coefficients f̂(k):
f̂(k) = ∫_{[0,1)^s} f(x) e^{-2π√(-1) k'x} dx.
Here k'x denotes the dot product of the s-dimensional wavenumber vector k with x.
The quadrature error for a particular integrand with an absolutely convergent
Fourier series is simply the sum of the quadrature errors of each term:
Err(f; P) = - (1/N) Σ_{z ∈ P} Σ_{k ≠ 0} f̂(k) e^{2π√(-1) k'z},
where the term corresponding to k = 0 does not enter because constants are integrated
exactly. One can multiply and divide by arbitrary weights w(k) inside this sum. Then
by applying Hölder's inequality one has the following error bound of the form (3.1)
[23]:
|Err(f; P)| ≤ D_q(P) V_{q'}(f),  1/q + 1/q' = 1,   (3.7)
where
V_{q'}(f) = [ Σ_{k ≠ 0} ( w(k) |f̂(k)| )^{q'} ]^{1/q'},   (3.8a)
D_q(P) = [ Σ_k ( 1{k ≠ 0} w(k)^{-1} | (1/N) Σ_{z ∈ P} e^{2π√(-1) k'z} | )^q ]^{1/q}.   (3.8b)
Here 1{·} denotes the indicator function. In order to insure that the discrepancy is
finite we assume that the weights increase sufficiently fast as k tends to infinity:
Σ_{k ≠ 0} w(k)^{-q} < ∞.
If P is the node set of an integration lattice, then it is known that trigonometric
polynomials are integrated exactly for all nonzero wavenumbers not in the dual lattice, L^⊥.
The dual lattice is the set of all integer vectors k satisfying k'z ∈ Z for all z in the lattice L. Thus, for node
sets of lattices the definition of discrepancy above may be simplified to
D_q(P) = [ Σ_{0 ≠ k ∈ L^⊥} w(k)^{-q} ]^{1/q}.
Certain explicit choices of w(k) have appeared in the literature. For example, one
may choose
w(k) = ∏_{j=1}^s ( β_j^{-1} k̄_j )^α,   (3.9)
where the over-bar notation is defined as k̄ = max(1, |k|), the β_j
are arbitrary positive weights, and α is a measure of the assumed smoothness
of the integrand. If the β_j are unity, q = 1, and P is the node set of a lattice, then
D_1(P) = P_α(L), which is a traditional figure of merit for
lattices [53]. Furthermore, for q = ∞ the discrepancy is D_∞(P) = ρ(L)^{-α}, where
ρ(L) is the Zaremba figure of merit for lattice rules [44, Def. 5.31]. The more
general case of P not a lattice is considered in [20], and the case of unequal weights
is discussed in [22].
If the weight function w(k) takes the form (3.9) for positive integer α, then for
q = 2 the infinite sum defining the discrepancy in (3.8b) may be written as a
finite sum:
D_2(P)² = - ∏_{j=1}^s β_j^{2α} + (1/N²) Σ_{z, z' ∈ P} ∏_{j=1}^s β_j^{2α} [ 1 + (-1)^{α+1} (2π)^{2α} B_{2α}({z_j - z'_j}) / (2α)! ]   (3.10)
for general sets P, where B_{2α} denotes the Bernoulli polynomial of degree 2α [1, Chap. 23].
When P is the node set of an integration lattice, the double sum can be simplified
to a single sum:
D_2(P)² = - ∏_{j=1}^s β_j^{2α} + (1/N) Σ_{z ∈ P} ∏_{j=1}^s β_j^{2α} [ 1 + (-1)^{α+1} (2π)^{2α} B_{2α}(z_j) / (2α)! ].   (3.11)
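As an illustration of the single-sum form, the classical (unweighted) P_2 figure of merit for a lattice node set can be computed with the Bernoulli identity P_2 = -1 + (1/N) Σ_z ∏_j [1 + 2π² B_2(z_j)], where B_2(x) = x² - x + 1/6; the weighted and scaled variants used later follow the same pattern. The function below is our own sketch.

    import numpy as np

    def p2_lattice(points):
        """Classical P_2 figure of merit for the node set of an integration
        lattice, computed from the finite-sum Bernoulli-polynomial identity."""
        z = np.asarray(points)                  # N x s array of lattice nodes
        b2 = z * z - z + 1.0 / 6.0              # B_2 evaluated at each coordinate
        return np.prod(1.0 + 2.0 * np.pi ** 2 * b2, axis=1).mean() - 1.0

Together with the lattice_point sketch of Section 2.4, this gives an O(sN) evaluation of the figure of merit, which is the property exploited in Section 4.1.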
Another choice for w(k) is a weighted ℓ_r-norm of the vector k to some power:
w(k) = || ( k_1/β_1, ..., k_s/β_s ) ||_r^α,
again for arbitrary positive weights β_j. When these weights are unity, P is
the node set of a lattice, and q = ∞, the discrepancy becomes
D_∞(P) = max_{0 ≠ k ∈ L^⊥} ||k||_r^{-α} = { min_{0 ≠ k ∈ L^⊥} ||k||_r }^{-α}.   (3.12)
For r = 2 this discrepancy is equivalent to the spectral test, commonly employed
to measure the quality of linear congruential pseudo-random number generators [30,
33]. The spectral test has been used to select lattices for quasi-Monte Carlo quadrature
in [7, 34, 35, 36]. The case r = 1, which one might call an ℓ_1-spectral test, is also
interesting. We will return to these two cases in the next section.
4. Good Generating Vectors for Lattice Sequences. As mentioned at the
beginning of the previous section, finding good generating vectors for lattices typically
requires optimizing some discrepancy measure. In this subsection we propose some
loss functions and optimization algorithms for choosing good generating vectors for
extensible rank-1 lattice sequences.
In principle one would like to have an s × ∞ array of digits h_{jk}, according to (2.6).
However, in practice it is only necessary to have an s_max × m_max array of digits h_{jk},
where b^{m_max} is the maximum number of points and s_max is the maximum dimension
to be considered. In finance calculations, for example, the necessity of timely
forecasts may constrain one to a limited budget of points (private communication).
For simplicity we consider generating vectors h that are of the form originally
proposed by Korobov, that is,
h = (1, j, j², ..., j^{s-1})  (mod b^{m_max}).   (4.1)
This means that only the m_max base-b digits of j need to be chosen, for which there
are b^{m_max} choices. The generating vector is tested for dimensions up to s_max,
but in fact it can be extended to any dimension.
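A Korobov-form generating vector as in (4.1) is easy to construct from the parameter j; the following Python snippet keeps each component to m_max base-b digits, with the default arguments chosen only for illustration.

    def korobov_vector(j, s, b=2, m_max=32):
        """Korobov-form generating vector (4.1): h = (1, j, j^2, ...), with each
        component reduced modulo b**m_max, i.e. kept to m_max base-b digits."""
        modulus = b ** m_max
        h, power = [], 1
        for _ in range(s):
            h.append(power)
            power = (power * j) % modulus
        return h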
The number j defining the generating vector is chosen by minimizing a loss function, G, of the form
G(j) = max_{m, s} G̃(j; m, s).   (4.2)
Here the function G̃(j; m, s) is related to one of the measures of discrepancy introduced
in Section 3, and the maximum over some range of values of m and s insures that the
resulting generating vector is good for a range of numbers of points and dimensions.
However, since the discrepancy itself depends significantly on the number of points
and the dimension, it must be appropriately scaled to arrive at the function G̃. The
details of this scaling are given below.
4.1. Generating Vectors Based on Minimizing P_α. The discrepancy defined
in (3.11), which is a generalization of the P_α figure of merit for lattice rules, has
the advantage of requiring only O(sN) operations to evaluate for lattices. To remove
some of the dimension dependence of this discrepancy it is divided by the square root
of the right hand side of (3.6). The root mean square of this scaled discrepancy for a random
sample is then N^{-1/2}, independent of s. The formula for the scaled discrepancy
of the node set of a lattice is
D̃(P)² = [ - ∏_{j=1}^s β_j^{2α} + (1/N) Σ_{z ∈ P} ∏_{j=1}^s β_j^{2α} ( 1 + (-1)^{α+1} (2π)^{2α} B_{2α}(z_j) / (2α)! ) ] / [ ∏_{j=1}^s β_j^{2α} (1 + 2ζ(2α)) - ∏_{j=1}^s β_j^{2α} ],
where ζ denotes the Riemann zeta function.
The specific choice of the value of β_j here is not crucial, but the one used seems to give good results.
For one-dimensional lattices, i.e. evenly spaced points on the interval [0, 1), this
discrepancy is N^{-1}, and one would expect that as the dimension increases this scaled
discrepancy would tend to (or at least do no worse than) N^{-1/2}. Therefore, to remove
this remaining dimension dependence the above scaled discrepancy is divided by a
factor D_asy(m, s) that interpolates between these two rates. For a fixed s, D_asy(m, s) is asymptotically O(b^{-m} m^{(s-1)/2})
as N tends to infinity. This is the asymptotic order for (0, m, s)-nets [25], and what
we hope to achieve for lattice sequences. Furthermore, N D_asy(m, s) ≥ 1 for all m
and s. In summary, the resulting loss function is
G̃_1(j; m, s) = D̃(P) / D_asy(m, s),
where P is the node set of the first b^m points, in the first s dimensions, of the lattice sequence generated by j.
The optimal values of j found by minimizing G_1(j) for different
ranges of m and s are given in Table 4.1. The algorithm for optimizing G_1(j) may
be described as an intelligent exhaustive search. One need not compute G_1(j) for all
possible values of j. Suppose at any stage of the optimization j* is the best known
value of j, and one finds that G̃_1(j̃; m, s) ≥ G_1(j*) for some candidate j̃ and some m and s
in the range of interest. Since G̃_1(j̃; m, s) depends only on the first m digits of j̃, one can immediately eliminate from
consideration all j that have the same first m - 1 digits as j̃. This same search
strategy is also used for the other loss functions described below.
4.2. Generating Vectors Based on the Spectral Test. The use of the spectral
test to analyze the lattice structure of linear congruential generators is described
in [30] and tables of good integration lattices are given in [34]. The difference here is
that nearly all smaller lattices imbedded in the largest lattice considered must have
low discrepancy. In [34], only the full lattice was examined.
The length of the shortest non-zero vector in the dual lattice L^⊥ is
d_2(j; m, s) = min_{0 ≠ k ∈ L^⊥} ||k||_2,   (4.4)
which is related to the discrepancy (3.12) with r = 2. This length has the absolute
upper bound d_2*(m, s), which is expressed in terms of constants γ_s and ρ_s that depend only on s (see [34] and the references therein).
The bound for s ≤ 8 is the least upper bound for a general s-dimensional lattice with
real-valued coordinates, and with b^{-m} points per unit of volume. The bound for s > 8
is not the least upper bound, but it is still reasonably tight, as our numerical results
will show. We define the normalized ℓ_2-spectral test discrepancy as
G̃_2(j; m, s) = d_2*(m, s) / d_2(j; m, s),
which is larger than 1 and is the inverse of the quantity S_t defined in [34]. (The
different notation here is to be consistent with the rest of this article.) The loss
function to be minimized is of the form (4.2) with G̃ = G̃_2.
We note that 1/d_2(j; m, s) can be interpreted as the (Euclidean) distance between
the successive hyperplanes that contain all the points of the primal lattice L, for the
family of hyperplanes for which this distance is the largest. The problem of computing
a shortest vector in (4.4) can be formulated as a quadratic optimization problem
with s integer decision variables, because k can be written as a linear combination
of the s vectors of a basis of the dual lattice, with integer coefficients. The decision
variables are these coefficients. (See [30] for details.) We solved this problem by using
the branch-and-bound algorithm of Fincke and Pohst [10], with a few heuristic modifications
to improve the speed. The worst-case time complexity of this algorithm is
exponential in d 2 (j; m; s), and polynomial in s for d 2 (j; m; s) fixed [10]. In practice, it
(typically) works nicely even when d_2(j; m, s) is large. For example, one can compute
d_2(j; m, s), for the values of m and s considered here and an arbitrary j, in less than 1 second on a
Pentium-II computer.
4.3. Generating Vectors Based on the ℓ_1-Spectral Test. With the ℓ_1 norm,
the length of the shortest non-zero vector in L^⊥ is
d_1(j; m, s) = min_{0 ≠ k ∈ L^⊥} ||k||_1,
which is related to the discrepancy (3.12) with r = 1. One has the upper bound
d_1(j; m, s) ≤ ( s! b^m )^{1/s},
which was established by Marsaglia [37] by applying the general convex body theorem
of Minkowski. This suggests the normalized ℓ_1-spectral test quantity:
G̃_3(j; m, s) = ( s! b^m )^{1/s} / d_1(j; m, s).   (4.8)
Here, we want to minimize the loss function (4.2) with G̃ = G̃_3.
One can interpret d 1 (j; m; s) (or d 1 (j; m; s) \Gamma 1 in certain cases; see [30]) as
the minimal number of hyperplanes that cover all the points of P. We computed
d_1(j; m, s) via the algorithm of Dieter [6], which works fine for s up to about 10
(independently of m), but becomes very slow for larger s (the time is exponential in
s).
The test quantity in (4.8) has an interpretation similar to that of the quality
parameter t for (t; m; s)-nets. Define
s
Since ~
(s!)]=s.
Thus, the rank-1 lattice defined by j integrates exactly all trigonometric polynomials
of wavenumber k when
log(jk
s
If one considers T (j; m; s) as the quality parameter of the lattice, then this condition
is similar to that in (2.5). There, t determines the resolution at which piecewise
constant functions are integrated exactly by a net. Here, T (j; m; s) determines
the resolution at which trigonometric polynomials are integrated exactly by a lattice.
The discrepancy for the node set of this lattice as defined in (3.12) with
If one can construct an infinite sequence of digits, j, for which
then the above discrepancy decays like N −α/s . Again, the parameter α indicates the
assumed smoothness of the integrands.
4.4. Tables of Coefficients. We made computer searches to find the best j's,
based on minimizing the worst-case loss function (4.9), namely the maximum of the normalized
discrepancy D̃ i (j; m; s) over the ranges m 0 ≤ m ≤ m 1 and s ≤ s 1 , for
selected values of m 0 , m 1 , and s 1 . These bounds
define a range for the number of points in the lattice, and a maximal number of dimensions,
that we are interested in. Selecting different parameters j for different ranges of
values of m and s is a convenient compromise between the extreme cases of choosing
a different j for each pair (m; s), and choosing the same j for all pairs (m; s). In
practice one typically has a general idea of how many points one is going to take. By
selecting a j specialized for that range, one can obtain a lattice with a better figure
of merit for this particular range.
Table 4.1 gives the optimal j's and the corresponding figures of merit (4.9) for
Because of computational efficiency constraints,
for the searches, we limited ourselves to m 1 - 20 for
Then, for the best j that we found, we verified the performance when m 0 was reduced
or when m 1 or s 1 was increased, and retained the smallest m 0 and largest m 1 and s 1
for which G i was unchanged. The table also gives the value of G i;min (j; m
defined by replacing max by min in (4.9). This best-case figure tells us the range of
values taken by D̃ i (j; m; s) over the given region. We also made exhaustive searches
for some of the entries in the table, and obtained the same
values of j and G i in those cases.
Table 4.1
Values of j defining generating vectors of the form (4.1) for rank-1 lattice sequences with base b, minimizing (4.9).
5. Estimating Quadrature Error. The advantage of an extensible lattice sequence
is that N need not be specified in advance. Therefore, in practice one would
estimate the quadrature error for an initial lattice, and continue to increase the lattice
size until the quadrature error meets the desired tolerance. Although the discrepancy,
D(P ), is a good measure for the quality of a set P , it cannot be used directly for error
estimation. The worst case error bounds (3.1) are often quite conservative, and there
is no easy way to estimate the variation of the integrand. The average case error
given, (3.4), is sensitive to how one defines the kernel - multiplying the kernel by a
factor of c 2 changes the average case error by a factor of c.
Quadrature error estimates for lattice rules have been investigated by [5, 28,
53]. Two different kinds of quadrature rules and error estimates have been proposed.
Both involve estimating the error of Q(f ; P ) in terms of the Q(f ; P l ), l = 1, . . . , M ,
where the P l are node sets of lattices and P is their union.
The method proposed in [53] takes P to be composed of 2 s shifted copies of a
lattice. Rather than considering copy rules, we take P l to be the node sets of
shifted lattices of size b m1 imbedded in the extensible lattice described in Section 2.4:
Another quadrature rule and error estimate takes the P l to be independent random
shifts of a lattice. This can be done by taking:
where the \Delta l are independent, uniformly distributed random vectors in [0; 1] s .
Note that for both (5.1) and (5.2) the set P can be extended in size as necessary
by increasing m 1 . The theory behind the error estimates for cases (5.1) and (5.2) is
given in the following theorem.
Theorem 5.1. Suppose that a quadrature rule Q(\Delta; P ) based on some arbitrary
P , as given in (1.1), is the average value of the quadrature rules based on the sets
First, consider the case where the integrands are random functions from a sample
space A as described in the paragraph preceding (3.4). Then it follows that
where D is the discrepancy based on the covariance kernel for A.
Secondly, consider the case of a fixed integrand, f , but where the P l are random
shifts of a set P 0 , that is,
for independent, uniformly distributed
Proof. Assuming that the quadrature rules satisfy (5.3), it follows that the mean
square deviation of the Q(f ; P l ) from Q(f ; P ) may be written as
If the integrand is a random function from a sample space A with covariance kernel
K, then average case error analysis in (3.4) plus (5.7) leads to the following equations:
The equations above may be rearranged to give (5.4).
The quadrature error for rule Q(f ; P ) may also be written as:
Substituting the sum of the Q(f ; P l ) by equation (5.7) gives:
If the P l are random shifts as in (5.5), then the expected value of the term [I(f) \Gamma
vanishes for all k 6= l and taking the expected value of (5.8)
yields (5.6).
Some remarks are in order to explain the assumptions and conclusions of the
above theorem. These are given below.
Assumption (5.3), and thus conclusion (5.4), holds for both the cases (5.1) and
(5.2) above. In fact, this part of the theorem holds for any imbedded rule or any rule
where the P l all contain the same number of points, and their union is P or multiple
copies of P . For example, (5.4) would apply to the case where P is a (t; m; s)-net
made up of a union of subnets P l .
Assumption (5.5), and therefore conclusion (5.6) holds for (5.2). There is a difficulty
if one tries to derive a result like (5.6) for an imbedded rule of the form (5.1),
where \Delta is a random shift. The argument leading to (5.6) assumes that the points in
different P l are uncorrelated, which is not true for (5.1). However, if the extensible
lattice is a good one, it is expected that the terms
in (5.8) are on average negative. Under this assumption one may then conclude that
the right hand side of (5.6) is a conservative (too large) upper bound on the expected
square quadrature error.
The factor in (5.4) above involving the discrepancies of P and the P l does not
depend strongly on the particular choice of discrepancy, but only on the asymptotic
rate of decay. If, for example, D(P ) - CN \Gammaff for some unknown C, but known ff,
where N is the number of points in P , then
Although conclusions (5.6) and (5.9) are derived under different assumptions,
they both suggest error estimates of the form
indicates the rate of decay of the discrepancy for (5.9).
The factor c ? 1 depends on how conservative one wishes to be. The Chebyshev
inequality implies that the above inequality will hold "at least" 100(1 − c −2 )% of the
time. Error estimate (5.10) leads to the stopping criterion (5.11),
where ε is the absolute error tolerance. Note that if the stopping criterion is not met,
one would normally increase the size of the P l by increasing m 1 , rather than increasing
the number of the P l by increasing M .
For some high dimensional problems the discrepancy of a lattice (or other low
discrepancy set) may decay as slowly as the Monte Carlo rate of O(N −1/2 ) for moderate
values of N (see [18, 41]). Therefore, even when using (5.9), it may be advisable to make a
conservative choice of α = 1/2. This choice makes the approach of error estimate
(5.9), based on random integrands, equivalent to that of (5.6), based on randomly
shifted P l .
Sloan and Joe [53, Section 10.3] suggest an error estimate of the form
s
s
where P is formed from 2 s copies of a rank-1 lattice, and each Q(f ; P l ) is an imbedded
rule based on half of the points in P . Although this case does not exactly fit Theorem
5.1, the arguments in the proof can be modified to obtain a result similar to (5.4):
s
s
s
This would suggest that the error estimation formula of Sloan and Joe is reasonable
when
2D(P ) on average. The disadvantage of 2 s -copy rules is that they
require at least 2 s points, which may be unmanageable for large s.
To summarize, both imbedded lattice rules, (5.1), and independent random shifts
of lattices, (5.2), have similar error estimates, (5.10), and stopping criteria, (5.11).
The advantage of the independent random shifts approach is that the theory holds
for any integrand, not the average over a space of integrands. One advantage of the
imbedded rules approach is that one need only generate a single extensible lattice.
Furthermore, the set P for the imbedded rule is the node set of a lattice (if M is a
power of b), which is never the case for the independent random shifts approach. Thus,
the accuracy of the imbedded rule approach is likely to be better. In the examples in
the next section, the imbedded lattice rules based on (5.1) are used.
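The randomly shifted variant (5.2) and the error estimate of the form (5.10) are simple to sketch in code. The following Python sketch is ours and makes two assumptions: that, per our reading of Section 2.4, node i of the extensible rank-1 lattice is the radical inverse φ_b(i) times the generating vector, modulo 1, and that the error estimate is the sample-spread estimate described above with a user-chosen constant c. All names and the example parameters are illustrative.

```python
import numpy as np

def van_der_corput(i, b=2):
    """Radical inverse phi_b(i) in [0, 1)."""
    x, f = 0.0, 1.0 / b
    while i > 0:
        x += (i % b) * f
        i //= b
        f /= b
    return x

def shifted_lattice_estimates(f, h, m1, M, b=2, seed=None):
    """Q(f; P_l) for M independently shifted copies of a rank-1 lattice with b**m1 points."""
    rng = np.random.default_rng(seed)
    N = b ** m1
    h = np.asarray(h, dtype=float)
    phi = np.array([van_der_corput(i, b) for i in range(N)])
    base_nodes = np.outer(phi, h) % 1.0            # extensible rank-1 lattice node set (our reading of Sec. 2.4)
    Q = np.empty(M)
    for l in range(M):
        P_l = (base_nodes + rng.random(h.size)) % 1.0   # case (5.2): independent random shift
        Q[l] = np.mean([f(x) for x in P_l])
    return Q

def estimate_with_error(f, h, m1, M=8, c=2.0, b=2):
    Q = shifted_lattice_estimates(f, h, m1, M, b)
    Qbar = Q.mean()
    err = c * np.sqrt(np.sum((Q - Qbar) ** 2) / (M * (M - 1)))   # estimate in the spirit of (5.10)
    return Qbar, err
```

If the returned err exceeds the tolerance, one would increase m1 (doubling the lattice and re-using the old points) rather than increasing M, as discussed above.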
6. Examples of Multidimensional Quadrature. Two example problems are
chosen to demonstrate the performance of the new rank-1 lattice sequences proposed in
Section 2.4. The first example is the computation of multivariate normal probabilities
and the second is the evaluation of a multidimensional integral arising in physics
problems.
6.1. Multivariate Normal Probabilities. Consider the following multivariate
normal
Z b1
Z bs
as
where, a and b are known s-dimensional vectors, and \Sigma is a given s \Theta s positive definite
covariance matrix. Some a j and/or b j may be infinite.
Unfortunately, the original form is not well-suited for numerical quadrature.
Therefore, Alan Genz [12] proposed a transformation of variables that results in
an integral over an (s − 1)-dimensional unit cube. See [12, 13] for the details of the
transformation, and see [13, 15] for comparisons of different methods for calculating
multivariate normal probabilities.
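For readers unfamiliar with it, Genz's transformation can be sketched as follows. This is our own summary of the standard sequential-conditioning algorithm, not the code used in the experiments below; the function names are ours, and edge cases (infinite limits mapping to 0/1 probabilities) are handled only implicitly by the normal CDF.

```python
import numpy as np
from scipy.stats import norm

def genz_integrand(w, a, b, chol):
    """Transformed MVN integrand at w in (0,1)^(s-1) (sketch of Genz's method).

    a, b: lower/upper limits; chol: lower-triangular Cholesky factor of Sigma.
    Averaging this integrand over (quasi-)random points w estimates P(a <= X <= b).
    """
    s = len(a)
    d = norm.cdf(a[0] / chol[0, 0])
    e = norm.cdf(b[0] / chol[0, 0])
    f = e - d
    y = np.zeros(s - 1)
    for i in range(1, s):
        y[i - 1] = norm.ppf(d + w[i - 1] * (e - d))
        t = chol[i, :i] @ y[:i]
        d = norm.cdf((a[i] - t) / chol[i, i])
        e = norm.cdf((b[i] - t) / chol[i, i])
        f *= (e - d)
    return f

# Usage: chol = np.linalg.cholesky(Sigma); average genz_integrand(w, a, b, chol) over QMC points w.
```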
The particular test problem is one considered by [13, 24] and may be described
as follows:
i.i.d. uniformly on [0;
generated randomly according to [13, 38]: (6.1d)
Numerical comparisons were made using three types of algorithms:
i. the adaptive algorithm DCHURE [2],
ii. an older Korobov rank-1 lattice rule with a different generating vector for
each N and s - this algorithm is a part of NAG and is used in [5, 12], and
iii. the new rank-1 lattice sequences proposed in Section 2.4 with generating
vectors given in Table 4.1
For the second and third algorithms we applied the periodizing transformation
x 0 j = |2x j − 1| to the integrand over the unit cube. This appears to increase the accuracy
of the lattice rule methods. The computations were carried out in FORTRAN on a
Unix work station in double precision. The absolute error tolerance was chosen to be
, and this was compared with the actual error E. Since the true value of the
integral is unknown for this test problem, the value given by the Korobov algorithm
with a tolerance of used as the "exact" value for computing the error.
For the new rank-1 lattice sequences the stopping criterion (5.11) was used with M
between 4 and 7 and
For each dimension 50 random test problems were generated and solved by the
various quadrature methods. The scaled absolute errors E=ffl and the computation
times in seconds are given in the box and whisker plots of Fig. 6.1. The boxes contain
the middle half of the values and the whiskers give the range of most values except
the outliers (denoted by ).
Ideally, the scaled error should nearly always be less than one, otherwise the
error estimate is not conservative enough. On the other hand if the scaled error is too
small, then the error estimate is too conservative. Fig. 6.1 shows that the adaptive
rule performs well for smaller dimensions, but underestimates the error and is quite
slow in higher dimensions. The lattice rules do well even in higher dimensions, and
the new rank-1 lattice sequences appear to be faster than the older Korobov-type
rule. This is likely due to the fact that the lattice sequences proposed here can re-use
the old points when N must be increased.
6.2. A Multidimensional Integral from Physics. Keister [29] considered
the following multidimensional integral that has applications in physics:
Z
R s
Z
cos@
A dy; (6.2)
where \Phi denotes the standard Gaussian distribution function. Keister gave an exact
formula for the answer and compared the quadrature methods of McNamee and
Stenger [39] and Genz and Patterson [11, 51] for evaluating this integral. Later, Papageorgiou
and Traub [49] applied the generalized Faure sequence from FINDER to
this problem.
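The integrand in (6.2) can be mapped to the unit cube with the standard substitution y j = Φ −1 (u j )/√2 for Gaussian-weighted integrals. The following Python sketch is our own illustration of that mapping (shown here with plain Monte Carlo points; lattice or Faure points would be substituted for u in the experiments), not the code behind the reported results.

```python
import numpy as np
from scipy.stats import norm

def keister_integrand(u):
    """cos(||y||) after mapping u in (0,1)^s to y = Phi^{-1}(u)/sqrt(2).

    Averaging pi**(s/2) * keister_integrand(u) over points u approximates
    the integral of cos(||y||) * exp(-||y||^2) over R^s.
    """
    y = norm.ppf(u) / np.sqrt(2.0)
    return np.cos(np.linalg.norm(y))

s = 25
rng = np.random.default_rng(0)
u = rng.random((4096, s))
vals = np.array([keister_integrand(ui) for ui in u])
print(np.pi ** (s / 2) * vals.mean())
```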
The results of numerical experiments for the above integral for dimension 25 are
shown in Figure 6.2. The exact value of the integral is reported in [49]. To be
consistent with the numerical results reported in [29, 49], we did not perform error
estimation, but just computed the actual error for each kind of numerical method as
a function of N , the number of points. Because the integrand involves Φ −1 , which is infinite at the origin, there is a technical
difficulty with using an unshifted lattice rule, so when performing the numerical experiments
the lattice sequences were given random shifts (modulo 1). Box and whisker
plots show how well the new rank-1 lattice sequences perform for 50 random shifts.
According to Figure 6.2 the generalized Faure sequence (in base 29) and the
lattice sequence perform much better than the other two rules. In some cases the
lattice sequences perform better than the generalized Faure sequence.
7. Conclusion. Lattice rules are simpler to code than digital nets. Given the
construction in Section 2.4, it is now possible to have extensible lattice sequences in
the same way that one has (t; s)-sequences. Good generating vectors for these lattice
sequences may be found by using the spectral test or minimizing other discrepancy
measures, as shown in Section 4. The performance of these lattice rules is in many
cases comparable to other multidimensional quadrature rules and in some cases superior.
Acknowledgments
. Thanks to Alan Genz for making available his software for
computing multivariate normal distributions. Also thanks to Joe Traub and Anargy-
ros Papageorgiou for making available the FINDER software.
--R
Handbook of Mathematical Functions with Formulas
An adaptive algorithm for the approximate calculation of multiple integrals
Valuation of mortgage backed securities using Brownian bridges to reduce effective dimension
Randomization of number theoretic methods for multiple integration
How to calculate shortest vectors in a lattice
Discrépance de suites associées à un système de numération (en dimension s)
Improved methods for calculating vectors of short length in a lattice
A Lagrange extrapolation algorithm for sequences of approximations to multiple integrals
Parameters for integrating periodic functions of several variables
Simulation of multivariate normal rectangle probabilities and their derivatives theoretical and computational results
On the assessment of random and quasi-random point sets
Random and Quasi-Random Point Sets
A comparison of random and quasirandom points for multidimensional quadrature
Computing multivariate normal probabilities using rank-1 lattice sequences
Funktionen von beschränkter Variation in der Theorie der Gleichverteilung
Applications of Number Theory to Numerical Analysis
Randomization of lattice rules for numerical multiple integration
Multidimensional quadrature algorithms
The Art of Computer Programming
The approximate computation of multiple integrals
On the distribution of digital sequences
Random numbers fall mainly in the planes
Construction of fully symmetric numerical integration formu- las
Generating quasi-random paths for stochastic processes
Points and sequences with small discrepancy
Shiue eds
Quasirandom points and global function fields
Faster valuation of financial derivatives
The optimum addition of points to quadrature formulae
Lattice methods for multiple integration
Lattice Methods for Multiple Integration
Lattice methods for multiple integration: Theory
An intractability result for multiple integration
The distribution of points in a cube and the approximate evaluation of integrals
"Nauka"
Computational investigations of low discrepancy point sets
Average case complexity of multivariate integration
Some applications of multidimensional integration by parts
--TR
--CTR
Fred J. Hickernell , Harald Niederreiter, The existence of good extensible rank-1 lattices, Journal of Complexity, v.19 n.3, p.286-300, June
Hee Sun Hong , Fred J. Hickernell , Gang Wei, The distribution of the discrepancy of scrambled digital (t, m, s)-nets, Mathematics and Computers in Simulation, v.62 n.3-6, p.335-345, 3 March
Pierre L'Ecuyer, Quasi-monte carlo methods in practice: quasi-monte carlo methods for simulation, Proceedings of the 35th conference on Winter simulation: driving innovation, December 07-10, 2003, New Orleans, Louisiana
Fred J. Hickernell, My dream quadrature rule, Journal of Complexity, v.19 n.3, p.420-427, June
Hee Sun Hong , Fred J. Hickernell, Algorithm 823: Implementing scrambled digital sequences, ACM Transactions on Mathematical Software (TOMS), v.29 n.2, p.95-109, June | good lattice point sets;multidimensional;discrepancy;spectral test |
587257 | Residual Replacement Strategies for Krylov Subspace Iterative Methods for the Convergence of True Residuals. | In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u) ||A|| ||x||. Building on earlier ideas on residual replacement and on insights in the finite precision behavior of the Krylov subspace methods, computable error bounds are derived for iterations that involve occasionally replacing the computed residuals by the true residuals, and they are used to monitor the deviation of the two residuals and hence to select residual replacement steps, so that the recurrence relations for the computed residuals, which control the convergence of the method, are perturbed within safe bounds. Numerical examples are presented to demonstrate the effectiveness of this new residual replacement scheme. | Introduction
Krylov subspace iterative methods for solving a large linear system typically consist of
iterations that recursively update approximate solutions x n and the corresponding residual vectors
They can be written in a general form as follows.
Algorithm 1. Template for Krylov subspace Method:
Input: an initial approximation x
For convergence
Generate a correction vector q n by the method;
(the vector x n does not occur in other statements)
End for
Department of Mathematics, Utrecht University, P.O. Box 80010, NL-3508 Utrecht, The Netherlands E-mail:
vorst@math.uu.nl
y Department of Mathematics, University of Manitoba, Winnipeg, Manitoba, Canada R3T 2N2. E-mail:
ye@gauss.amath.umanitoba.ca Research supported by grants from University of Manitoba Research Development
Fund and from Natural Sciences and Engineering Research Council of Canada
Most Krylov subspace iterative methods, including the conjugate gradient method (CG) [12], the
bi-conjugate gradient method (Bi-CG) [4, 13], CGS [19], and BiCGSTAB [22], fit in this framework
(see [2, 11, 16] for other methods).
In exact arithmetic, the recursively defined r n in Algorithm 1 is exactly the residual for the
approximate solution x In a floating
point arithmetic, however, the round-off patterns for x n and r n will be different. It is important
to note that any error made in the computation of x n is not reflected by a corresponding error
in r n , or in other words, computational errors to x n do not force the method to correct, since x n
has no influence on the iteration process. This leads to the well known situation that b \Gamma Ax n and
r n may differ significantly. This phenomenon has been extensively discussed in the literature, see
[10, 11, 18] and the references cited there. Indeed, if we denote the computed results of x
respectively (but we still use q n to denote the computed update vector of the algorithm),
then we have
where f l(z) denotes the computed result of z in finite arithmetic, the absolute value and inequalities
on vectors are componentwise, and u is the machine roundoff unit. The vectors / n and
rounding error terms, and they can be bounded by a straightforward error analysis (see Section
3 for details). In particular, the relations (1) and (2) show that / n and j n depend only on the
iteration vectors -
r n , and q n .
We will call b \Gamma A-x n the true residual for the approximation -
x n and call -
r n , as obtained by
recurrence formula (2), the computed residual (or the updated residual). Then the difference between
the two satisfies (using the finite precision recurrences (1) and (2))
where we assume for now that -r 0 = b − A-x 0 exactly. Hence, the difference between the true and the
updated residuals is a result of accumulated rounding errors. In particular, a significant deviation
of
r n may be expected, if there is a -
r i with large norm during the iteration (a
not uncommon situation for Bi-CG and CGS). On the other hand, even when all / i or j i are small
(as is common for CG), but if it takes a relatively large number of iterations for convergence, the
sheer accumulation of / i and j i could also lead to a nontrivial deviation.
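The gap between the two residuals is easy to reproduce. The following small Python experiment is ours (not from the paper); it runs plain CG on an ill-conditioned diagonal test matrix of our choosing and reports the norms of the updated residual and of the true residual b − Ax n side by side, showing the latter stagnating while the former keeps decreasing.

```python
import numpy as np

def cg_residual_gap(A, b, iters):
    x = np.zeros_like(b)
    r = b.copy()                  # updated residual
    p = r.copy()
    gaps = []
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap    # recurrence (2)
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        gaps.append((np.linalg.norm(r), np.linalg.norm(b - A @ x)))
    return gaps                   # pairs (||updated residual||, ||true residual||)

n = 200
A = np.diag(np.linspace(1.0, 1e4, n))   # ill-conditioned SPD test matrix (illustrative)
b = np.ones(n)
for k, (upd, true) in enumerate(cg_residual_gap(A, b, 400)):
    if k % 100 == 99:
        print(k + 1, upd, true)
```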
What makes all this so important is that, in a finite precision implementation, the sequence - r n
satisfies, almost to machine precision u, its defining recurrence relation, and as was observed for
many Krylov subspace methods, this is the driving force behind convergence of - r n [10, 15, 18, 20].
Indeed, residual bounds have been obtained in [20] for CG and Bi-CG, which show that even
a significantly perturbed recurrence relation (with perturbations much larger than the machine
precision) usually still leads to eventual convergence of the computed residuals. This theoretical
insight has been our motivation and justification for the residual replacement scheme to be presented
in Section 2.1. On the other hand, the true residual b \Gamma A-x n itself has no self-correcting mechanism
for convergence, mainly because any perturbation made to - x n does not have an effect on the
iteration parameters, whereas errors in -
immediately lead to other iteration parameters.
Thus, in a typical convergent iteration process, - r n converges to a level much smaller than u
eventually, but the true residual b \Gamma A-x n can only converge to the level dictated by \Sigma n
since
Usually, when -
r n is still bigger than the accumulated error \Sigma n
agrees well
with -
r n in magnitude, but when - r n has converged to a level that is smaller than the accumulated
error, then
just the accumulated error and has no agreement at all
with -
r n . In summary, a straightforward implementation would reduce the true residuals at best to
bound for this has been obtained in [10] and it is called the attainable accuracy.
We note that this term could be significant even if only one of / i or j i is large, or if n is large.
The above problems become most serious in methods such as CGS and Bi-CG where intermediate
x n and - r n can have very large norm, and this may result in a large / n or j n . Several popular
methods, such as BiCGSTAB [22], BiCGSTAB(ℓ) [17], QMR [7], TFQMR [5], and composite step
BiCG [1], have been developed to reduce the norm of - r n (see [6] for details). We note that controlling
the size of k-r n k only does not solve the deviation problem in all situations, as, for instance,
the accumulation of tiny errors over a long iteration may still result in a nontrivial deviation.
A simple approach for solving the deviation problem is to replace the computed residuals by the
true residuals at some iteration step to restore the agreement. Then the deviation at subsequent
steps will be the error accumulation after that iteration only. This includes a complete replacement
strategy that simply computes r n by b \Gamma Ax n at every iteration, and a periodic replacement strategy
that updates r n by b \Gamma Ax n only at intervals of the iteration count. While such a strategy maintains
agreement of the two kinds of residuals, it turns out that the convergence of the r n may deteriorate
(as we will see, it may result in unacceptably large perturbations to the Lanczos
relation for the residual vectors that steers the convergence, see Section 2.3). Recently, Sleijpen
and van der Vorst [18], motivated by suggestions made by Neumaier (see [11, 18]), introduced a
very sophisticated replacement scheme that includes a so-called flying-restart procedure. It was
demonstrated that this new residual replacement strategy can be very effective in the sense that
it can improve the convergence of the true residuals by several orders of magnitude. For practical
implementations, such a strategy is very useful because it leads to meaningful residuals and this is
important for stopping the iteration process at the right point. Of course, one could, after termination
of the classical process, simply test the true residual, but the risk is that the true residual
stagnated already long before termination, so that much work has been done in vain.
The present paper will follow the very same idea of replacing the computed residual by the true
residual at selected steps, in order to maintain close agreement between the two residuals, but we
propose a simpler strategy so that the replacement is done only when it is necessary and at phases
in the iteration where it is harmless, that is that convergence mechanism for -
r n is not destroyed.
Specifically, we shall present a rigorous error analysis for iterations with residual replacement and
we will propose computable bounds for the deviation between the computed and true residuals.
This will be used to select the replacement phases in the iteration in such a way that the Lanczos
recurrence among -
r n is sufficiently well maintained. For the resulting strategy, it will be shown
that, provided that the computed residuals converge, the true residual will converge to the level
O(u)kAkkxk, the smallest level that one can expect for an approximation.
The paper has been organized as follows. In Section 2, we develop a refined residual replacement
strategy and we discuss some strategies that have been reported by others. We give an error analysis
in Section 3, and we derive some bounds for the deviation to be used in the replacement condition.
We present a complete implementation in Section 4. It turns out that the residual replacement
strategy can easily be incorporated in existing codes. Some numerical examples are reported in
Section 5, and we finish with remarks in Section 6.
The vector norm used in this paper is one of the 1, 2, or 1-norm.
Residual Replacement Strategy
In this section, we develop a replacement strategy that maintains the convergence of the true
residuals. A formal analysis is postponed to the next section. The specific iterative method can
be any of those that fit in the general framework of Algorithm 1. Throughout this paper, we shall
consider only iteration processes for which the computed residual - r n converges to a sufficiently
small level.
As mentioned in Section 1, we follow the basic idea to replace the computed residual -
r m by
the true residual f selected steps We will refer to such
an iteration step as one where residual replacement occurs. Hence, the residual generated at an
arbitrary step n could be either the usual updated residual - r or the true
residual depending on whether replacement has taken place or not at step n. In order
to distinguish the two possible formulations, we denote by r n the residual obtained at step n of the
process with the replacement strategy, that is
With the residual replacement at step m residual deviation is immediately
reduced to
and it can be shown (see Lemma 1 of Section 2.2) that For the
subsequent iterations n ? m, but before the next replacement step, we clearly have that
Therefore, the accumulated deviation before step m has no effect to the deviation after updating
(n ? m). However, in order for such a strategy to succeed, two conditions must be met, namely,
ffl the computed residual r n should preserve the convergence mechanism of the original process
that has been steered by the -
ffl from the last updating step m to the termination step K, the accumulated error \Sigma K
should be small relative to u(jr which is the upperbound for j- m j.
We discuss in the next two subsections how to satisfy these two objectives.
2.1 Maintaining convergence of computed residuals
In order that r n maintains the convergence mechanism of the original updated residuals, it should
preserve the property that gives rise to the convergence of the original - r n . We therefore need to
identify the properties that lead to convergence of the iterative method in finite precision arithmetic.
While this may be different for each individual method, it has been observed for several Krylov
subspace methods (including CG [10, 20], Bi-CG [20], CGS, BiCGSTAB, and BiCGSTAB(ℓ) [18]),
that the recurrence r and a similar one for q n is satisfied almost to machine
precision and this small local error is one of the properties behind the convergence of the computed
residuals. Furthermore, the analysis of [20] suggests that convergence is well maintained even when
the recurrence equations are perturbed with perturbations that are significantly greater than the
machine precision. This latter property is the basis for our residual replacement strategy. Therefore,
we briefly discuss this perturbation phenomenon for Bi-CG (or CG), as presented in [20].
Consider the Bi-CG iteration which contains r In
finite
which denote the computed results of r n and p n , respectively, satisfy
the perturbed recurrence
are rounding error terms that can be bounded in terms of u. Combining these
two equations, we obtain the following perturbed matrix equation in a normalized form
r n+1
where T n is an invertible tridiagonal matrix 1 , ff 0
with
ff
We note that (4) is just an equation satisfied by an exact Bi-CG iteration under a perturbation F n .
In particular, detailed bounds on - n and j n will, under some mild assumptions, lead to F n - O(u).
The main result of [20] states that if a sequence -
r n satisfies (4) and Z n+1 has full rank, then we
have
where
n . The case F reduces to the
known theoretical bound for the exact BiCG residuals [1]. Therefore, even when - r n and its exact
counterpart are completely different, their norms are bounded by similar quantities and are usually
comparable. Of course, in both cases, the bounds depend on the quality of the constructed basis.
More importantly, a closer examination of the bound reveals that even if the perturbation F n
is in magnitude much larger than u, the quantities in the bound, and thus k-r n+1 k, may not be
significantly affected. Indeed, in [20] numerical experiments were presented, where relatively large
artificial random perturbations had been injected to the recurrence for r n ; yet it did not significantly
affect the convergence mechanism.
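A toy version of that experiment, written by us in the spirit of the tests reported in [20] rather than reproducing them, injects random perturbations of relative size tau into the residual recurrence of CG and lets one compare the resulting residual history with the unperturbed run.

```python
import numpy as np

def cg_with_injected_perturbations(A, b, iters, tau=0.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros_like(b); r = b.copy(); p = r.copy()
    rho = r @ r
    norms = []
    for _ in range(iters):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if tau > 0.0:                                  # artificial perturbation of the recurrence
            r = r + tau * np.linalg.norm(r) * rng.standard_normal(b.size)
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
        norms.append(np.linalg.norm(r))
    return norms
```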
An implication of this analysis is that, regardless of how -
r n is generated but as long as it
satisfies (4), its norm can be bounded by (6). Hence, we can replace -
r n by the true residual f l(b − A-x n )
(we assume that no breakdowns of the iteration process have occurred)
when the resulting perturbations are not too large relative to kr n k and kr n−1 k (see (5)), and we may
still expect it to converge in a similar fashion. Indeed, this criterion explains why the residual
replacement strategies like r n = f l(b − A-x n ) sometimes work but do not always work (see Section
2.3). Here, it will be used to determine when it is safe to replace -
r n by f l(b − A-x n ). We
note that the above discussion is for Bi-CG, but the phenomenon it reveals seems to be valid for
many other methods, especially for those methods that are based on Bi-CG (CGS, BiCGSTAB,
and others).
Now we consider the case that residual replacement is carried out at step m, that is r
It follows from the definition of ffi m and -
r m that
. So, the updated residual r m satisfies
Thus depending on the magnitude of kj 0
k relative to kr m k and kr m\Gamma1 k, the use of r
may result in large perturbations to the recurrence relation. Therefore, a residual replacement
strategy should ensure that the replacement is only done when kj 0
kg is not
too large.
In a typical iteration, as the iteration proceeds, kffi n k, and hence kj 0
increases while k-r n k
decreases. Replacement will reduce ffi n but, in order to maintain the recurrence relation, it should
be carried out before kj 0
becomes too large relative to k-r n k. For this reason, we propose to set a
threshold ffl and carry out a replacement when kj 0
reaches the threshold. To be precise, we
replace the residual at step n by r
We note that, in principle, residual replacement can be carried out for all steps up to where
reaches certain point. However, from the stability point of view, it is preferred to generate the
residual by the recurrence as much as possible, since kj 0
n k is generally bigger than the recurrence
rounding error kj n k (of order u).
2.2 Groupwise solution updating to reduce error accumulations
From the discussions of Section 2.1, we learn that residual replacement should only be carried out
up to certain point. In this subsection, we will discuss how to maintain, after the last replacement,
the deviation at the order of ujAjjx n j, in which case x n is a backward stable solution. Note that,
for any x n , ukAkkx n k is the lowest value one can expect for its residual. This is simply because
even with the exact solution x, both
is the last updating step, which means that we are in the final phase of the iteration
process, then, because of (3), the deviation at step n ? m is
From our updating condition, we have that kr n k - kj 0
is chosen not too close
to u, kr n k is small and -
m. We now discuss the three different parts of ffi n . The
discussion here is only to motivate the groupwise updating strategy; a more rigorous analysis will
be given in the next section.
we have that \Sigma n
ffl For the / i part, j/
m)ukAkkxk. If large, the accumulation of errors over steps can be significant.
We note that this is the same type of error accumulation in evaluating a sum
of small numbers by direct recursive additions, which can fortunately be corrected through
appropriately grouping the arithmetic operations as
with terms of similar order of magnitude in the same group S i
\Delta. In this way, the rounding errors associated with a large number of
additions inside a group S i is of the magnitude of uS i , which can be much smaller than uS.
The same technique can be adopted for computing x n as
Specifically, the recurrence for x n can be carried out in the following equivalent form
Groupwise Solution Update:
For convergence
End for
Such a groupwise update scheme has been suggested by Neumaier, and it has been worked
out by Sleijpen and van der Vorst (see [18] for both references). By doing so, the error in the
local recurrence is reduced. Indeed, for
(instead of ujx i j). Hence, \Sigma n
In summary, with groupwise updating of the approximated solution, all three parts of ffi n can
be maintained at the level of ujAjjxj. We mention that groupwise updating can also be used to
obtain better performance of a code for modern architectures, because it allows for level-3 BLAS
operations. This has been suggested in [21, page 52, note 5].
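The groupwise update itself is a few lines of code. The sketch below is our own Python rendering of the scheme above; for simplicity the local accumulator z is flushed at fixed intervals (group_size), whereas in Algorithm 3 below the flush happens at the residual replacement steps.

```python
import numpy as np

def groupwise_update(x0, corrections, group_size):
    """Accumulate correction vectors q_n in z and fold them into x only at group boundaries,
    so that each addition involves terms of comparable magnitude."""
    x = x0.copy()
    z = np.zeros_like(x0)
    for k, q in enumerate(corrections, start=1):
        z += q                       # local accumulation, rounding errors ~ u*|z|
        if k % group_size == 0:
            x += z                   # fold the group into x, rounding errors ~ u*|x|
            z[:] = 0.0
    return x + z
```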
2.3 Some other residual replacement strategies
We briefly comment on some other residual replacement strategies.
For the naive strategy of "replacing always" (the residuals are always computed as b − Ax n ) or for
"periodic replacement" (update periodically at every ' steps), replacement is carried out throughout
the iteration, even when kr n k is very small. This, as we know, may result in large perturbations to
the recurrence equations relative to kr n k, since jj 0
j is at least j- n j - ujAjjx n j, see (7). In that case,
as kr n k decreases, the recurrence relation may be perturbed too much and hence the convergence
property deteriorates. This is the typical behaviour observed in such implementations.
We note that if - n can be made to decrease as kr n k does, then replacement can be carried out at
later stages of the iterations. This leads to the strategy of "flying-restart" of Sleijpen and van der
Vorst [18], which significantly reduces - n , and hence j 0
n , at a replacement step. In the flying-restart
strategy b is replaced by f at some but not all of the residual replacement steps (say
addition to the residual replacement r The advantage of
this is that, at the flying-restart step n i+1 , the residual is updated by r n i+1
(noting that b / r n i
. Then
which decreases as r n i
and -
decrease. This is the term that determines the perturbation to
the recurrence and can be kept small relative to r n . However, the deviation satisfies
(assuming x n i+1
). Namely, the deviation at each flying-restart step carries forward
to the later steps. Therefore flying-restart should only be used at carefully selected steps where
However, it is not easy to identify a condition to monitor this. It
is also necessary to have two different conditions for the residual replacement and flying-restart.
Fortunately, our discussion in the last two subsections shows that carrying out replacement carefully
at some selected steps, in combination with groupwise update, is usually sufficient. We shall not
pursue the flying-restart idea further in this paper.
Analysis of the Residual Replacement Scheme
In this section, we formally analyze the residual replacement strategy as developed in Section 2.1
(and presented in Algorithm 2 below). In particular, we develop a computable bound for kffi n k and
n k, that can be used for the implementation of the residual replacement condition.
We first summarize residual replacement strategy in the following algorithm, written in a form
that identifies relevant rounding errors for later theoretical analysis.
Algorithm 2: Iterative Method with Residual Replacement:
Given an initial approximation x 0 (a floating point vector);
set -r 0 = f l(b − Ax 0 );
For convergence
Generate a correction vector q n by the method;
if residual replacement condition (8) holds, set -r n = f l(b − A-x n );
else set -r n = f l(-r n−1 − Aq n );
(denote, but do not compute, x n = x n−1 + q n )
End for
Note that x n and ffi n are theoretical quantities as defined by the formulas and are not to be
computed. The vectors / due to finite precision arithmetic
At step n of the iterative method, q n is computed in finite precision arithmetic by the algorithm.
However, the rounding errors involved in the computation of q n are irrelevant for the deviation of
the two residuals, which solely depends on the different treatment of q n in the recurrences for r n
and x n .
Throughout this paper, we assume that A is a floating point matrix. Our error analysis is
based on the following standard model for roundoff errors in basic matrix computations [8, p.66]
(all inequalities are componentwise).
where are floating point vectors, N is a constant associated with the matrix-vector
multiplication (for instance, the maximal number of nonzero entries per row of A).
It is easy to show that
Using this, the following lemma, which includes (1) and (2), is obtained.
Lemma 1 The error terms in the computed recurrence of Algorithm 2 are bounded as follows:
For a step at which a residual replacement is carried out:
Proof From (9), we have that j/ n j - uj-x This leads to the bound for
j/ n j: For a residual replacement step, the updated z is x n by definition, that is x
Therefore, The bounds for j n and - n follow similarly.
be the number of step at which a residual replacement is carried out and let
later step, but still before the next replacement step. Then, we have that
and
Proof The first bound follows directly from Lemma 1. For we have that q
Noting that -
Similarly,
We now consider the deviation of the two residuals.
be the number of an iteration step at which residual replacement is carried out
and let n ? m denote a later iteration step, still before the next replacement step. Then, we have
that
Proof At step m, by the definition of xm in Algorithm 2,
z with z being the
updated z-vector and -
Therefore . Hence, for the range of n ? m, and before the next residual replacement
step:
Lemma 2, we obtain the following computable bound on ffi n .
Lemma be the number of an iteration step at which residual replacement is carried out
and let n ? m denote a later iteration step, still before the next replacement step. Then, we have
Proof The bound for kffi m k follows from that for - m , see (14). From Lemma 2 and Lemma 3, it
follows that
which leads to the bound for ffi n in terms of norms.
We note that it is possible to obtain a sharper bound by accumulating the vectors in the bound
for jffi n j. Our experiments do not show any significant advantage of such an approach. We next
consider the perturbation to the recurrence.
Theorem 1 Consider step n of the iteration and let m ! n be the last step before n, at which a
residual replacement is carried out. If replacement is also done at step n, then let x 0
be the computed approximate solution and r 0
the residual. Then the residual r 0
n satisfies the following approximate recurrence
Proof First, in the notation of Alg. 2, x 0
where we have used that b \Gamma Ax . Furthermore, by
Lemma 3,
Also, kAi
Combining these three, and using that r 0
O(u), the
bound on kj 0
n k is obtained as in Lemma 4.
Note that bound (16) is computable at each iteration step. Therefore, we can implement the
residual replacement criterion (8) with this bound instead of kj 0
k. We note that the factor 2 in
the bound comes from the bound for q i in Lemma 2, which is pessimistic since q i -
x i . Therefore,
we can use the following d n as an estimate for kj 0
Hence, we shall use the following residual replacement criterion, that is residual replacement is
done if
With this strategy, the replaced residual vector r n satisfies the recurrence equation (15) with
k. With this property, we consider situations where r n converges. We now discuss
convergence of the true residual.
Theorem 2 Consider Algorithm 2 with the residual replacement criterion (18), and assume that
the algorithm terminates at step be the number of the last
residual replacement iteration step before termination. If
then
Proof From (17), we have dK ? k. Furthermore, at the
termination step, we have kr is the
last updating step, we have for n - m, d n ? fflkr n k as otherwise there would be another residual
replacement after m. That implies kr
~
which is an upper bound for kffi n k (Lemma 4) and ~
where
~
which implies
~
Thus the bound follows from
We add two remarks with respect to this theorem.
Remark 1: If the main condition (19) is satisfied, then the deviation, and hence the true residual,
will remain at the level of uNkAkkxK k at termination. Such an approximate solution is backward
stable and it is best one can expect. The condition suggests that ffl should not be chosen too small.
Otherwise, the replacement strategy will be terminated too early so that the accumulation after the
last replacement might become significant. As can be expected, however, the theoretical condition
is more restrictive than practically necessary and our numerical experience suggests that ffl can be
much smaller than what (19) dictates, without destroying the conclusion of the theorem.
Remark 2: On the other hand, in Section 2.1 we have seen that ffl controls perturbations to
the recurrence of r n , and for this reason it is desirable to choose it as small as possible. In our
experience, there is a large range of ffl around p u that balances the two needs.
Reliable Implementation of Iterative Methods
In this section, we summarize the main results of the previous sections into a complete implemen-
tation. We also address some implementation issues.
It is easy to see from the definition of d n (see (17)) that it increases except at the residual
replacement steps when it is reset to u(NkAkkxm k Our residual replacement strategy
is to reduce d n whenever necessary (as determined by the replacement criterion) so as to keep it
at the level of uNkAkkxK k at termination. With the use of criterion (18), however, there are
situations where the residual replacement is carried out in consecutive steps while d n remains
virtually unchanged, namely when kr n k stays around d n =ffl - uNkAkkx n k=ffl. From the stability
point of view, it is preferred not to replace the residuals in such situations. To avoid unnecessary
replacement in such cases, we impose as an additional condition that residual replacement is carried
out only when d n has a nontrivial increase from the dm of the previous replacement step m.
Therefore, we propose d n ? 1:1d m as a condition in addition to (18) for the residual replacement.
The following scheme sketches a complete implementation.
Algorithm 3: Reliable Implementation of Algorithm 1.
Input an initial approximation x 0 ; a residual replacement threshold ffl; an estimate of NkAk;
For convergence
Generate a correction vector q n by the Iterative Method;
z = z + q n ; r n = r n−1 − Aq n ; update the deviation bound d n as in (17);
if d n > ffl kr n k and d n > 1.1 d m , set x = x + z, z = 0, r n = b − Ax n , m = n, and reset d n ;
End for
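To make the scheme concrete, the following Python sketch specializes Algorithm 3 to CG. It is our own reading of the strategy, not the authors' code: the deviation bound d is accumulated with a simplified version of (17), N·||A|| is crudely estimated by ||A||_1, and the replacement condition is the compound test d > eps*||r|| and d > 1.1*d_last described above.

```python
import numpy as np

def reliable_cg(A, b, tol=1e-14, eps=1e-8, maxit=1000):
    u = np.finfo(float).eps
    NA = np.linalg.norm(A, 1)                 # crude stand-in for the estimate of N*||A||
    x = np.zeros_like(b)                       # flushed part of the solution (groupwise update)
    z = np.zeros_like(b)                       # corrections accumulated since the last flush
    r = b.copy(); p = r.copy()
    d = d_last = u * (NA * np.linalg.norm(x) + np.linalg.norm(r))
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        z += alpha * p                          # q_n = alpha_n * p_n is the correction vector
        r_new = r - alpha * Ap                  # updated residual, recurrence (2)
        d += u * (NA * np.linalg.norm(x + z) + np.linalg.norm(r_new))
        if d > eps * np.linalg.norm(r_new) and d > 1.1 * d_last:
            x += z; z[:] = 0.0                  # flush the group into x
            r_new = b - A @ x                   # residual replacement
            d = d_last = u * (NA * np.linalg.norm(x) + np.linalg.norm(r_new))
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    x += z
    return x, b - A @ x
```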
Remark: In this reliable implementation, we need estimates for N (the maximal number of
nonzero entries per row of A) and kAk. In our experience with sparse matrices, the simple choice
still leads to a practical estimate d n for kffi n k. In any case, we note that precise estimates
are not essential, because the replacement threshold ffl can be adjusted. We also need to choose
this ffl. Our extensive numerical testing (see section 5) suggests that ffl -
p u is a practical criterion.
However, there are examples where this choice leads to stagnating residuals at some unacceptable
level. In such cases, choosing a smaller ffl will regain the convergence to O(u).
The presented implementation requires one extra matrix-vector multiplication when an replacement
is carried out. Since only a few steps with replacement are required, this extra cost is marginal
relative to the other costs. However, some savings can be made by selecting a slightly smaller ffl and
carrying out residual replacement at the step next to the one for which the residual replacement
criterion is satisfied (cf [18]). It also requires one extra vector storage for the groupwise solution up-date
(for z) and computation of a vector norm k-x n k for the update of d n (kr n k is usually computed
in the algorithm for stopping criteria).
5 Numerical Examples
In this section, we present some numerical examples to show how Algorithm 3 works and to demonstrate
its effectiveness. We present our testing results for CG, Bi-CG and CGS. All tests are carried
out in MATLAB on a SUN Sparc-20 workstation, with
In all examples, unless otherwise specified, the replacement threshold ffl is chosen to be 10 \Gamma8 .
kAk1 is explicitly computed and N is set to 1. In Examples 1 and 2, we also compare d n and the
deviation kffi n k.
Example 1: The matrix is a finite-difference discretization on a 64 \Theta 64 grid for
with a homogeneous Dirichlet boundary condition. a(x; y. We apply
CG and Reliable CG (i.e. Alg. 3) to solve this linear system and the convergence results are given
in
Figure
1.
In
Figure
(and similarly in Figures 2 and 3 for the next example), we give in (a) the convergence
history of the (normalized) computed residual for CG (solid line), the (normalized) true residuals
for CG (dashed line) and for reliable CG (dotted line). In (b), we also give the (normalized)
deviations of the two residuals kffi (dash-dotted line) and for reliable
CG (dotted line) and the bound d n for reliable CG (in x-mark).
Example 2: The matrix is a finite-difference discretization on a 64 \Theta 64 grid for the following
convection diffusion equation
with a homogeneous Dirichlet boundary condition. The function f is a constant. We consider Bi-
CG and CGS for solving the linear systems with
The results are shown in Figure 2 for Bi-CG, and in Figure 3 for CGS.
In the above examples, we have observed the following typical convergence behaviour. For
the original implementations, the deviation increases and finally stagnates at some level, which
is exactly where the true residual stagnates, while the computed residual continues to converge.
With the reliable implementations, when the deviation increases to a certain level relative to r n , a
residual replacement is carried out and this reduces the error level. Eventually, the deviation and
hence the true residual arrive at the level of ukAkkxk. We also note that the bound d n captures
the behaviour of kffi n k very closely, although it may be an overestimate for ffi n by a few orders of
magnitude. In all three cases, the final residual norms for the reliable implementation are smaller
than the ones obtained by the MATLAB function A\b.
Example 3: In this case, we have tested the algorithm for Bi-CG (or CG if symmetric definite)
and CGS, on the Harwell-Boeing collection of sparse matrices [3]. We compare the original imple-
mentations, the reliable implementations and the implementations of Sleijpen and van der Vorst
[18] (based on their replacement criteria (16) and (18)). In Table 1, we give the results for those
matrices for which the computed residuals converge to a level smaller than ukAkkxk so that there
is a deviation of the two residuals. For those cases where b is not given, we choose it such that a
Figure
1: Example 1 (CG) (a): solid - computed residual of CG; dashed - true residual of CG; dotted
true residual of reliable CG; (b): dash-dotted - of CG, dotted - nk of reliable
CG; x - dn of reliable CG
(a) Convergence History
iteration number
normalized
residual
norm
(b) Residual Deviation and Bound
iteration number
deviation
given random vector is the solution. We note that for some matrices, it may take 10n iterations
to achieve that, which is not practical. However, we have included these results in order to show
that even with excessive numbers of iterations, we still arrive at small true residuals eventually. We
list the normalized residuals res attained by the three implementations
and by Gaussian elimination with partial pivoting (MATLAB A\b). We also list the number of
residual replacements (n r ) for our reliable implementations and the number of flying-restart (n f )
and the number of residual replacements (n r ) for the implementations of Sleijpen and van der Vorst
(SvdV). There are two cases for which the computed residuals do not converge to O(u)kbk with the
choice of 8. For those cases, a slightly smaller ffl will recover the stability and the results
are listed in the last row of the table.
We see that in all cases, the reliable implementation reduces the normalized residual to O(u)
and res2 is the smallest among the three implementations, even smaller than MATLAB A\b. The
improvement on the true residual is more apparent in CGS than in Bi-CG (or CG). Except in a
few cases, both the reliable implementation presented here and the implementation of Sleijpen and
van der Vorst work well and are comparable. So the main advantage of the new approach is its
simplicity and an occasional improvement in accuracy.
Figure
2: Example 2 (Bi-CG) (a): solid - computed residual of Bi-CG; dashed - true residual of Bi-
CG; dotted - true residual of reliable Bi-CG; (b): dashed - of Bi-CG, dotted -
of reliable Bi-CG; x - dn of reliable Bi-CG
(a) Convergence History
iteration number
normalized
residual
norm
(b) Residual Deviation and Bound
iteration number
deviation
6 Concluding Remarks
We have presented a new residual replacement scheme for improving the convergence of the true
residuals in finite precision implementations of Krylov subspace iterative methods. By carefully
monitoring the deviation of the computed residual and the true residual and incorporating the
earlier ideas on residual replacement, we obtain a reliable implementation that preserves the convergence
mechanism of the computed residuals, as well as sufficiently small deviations. An error
analysis shows that this approach works under certain conditions, and numerical tests demonstrate
its effectiveness. Comparison with an earlier approach shows that the new scheme is simpler and
easier to implement as an add-on to existing implementations for iterative methods.
We point out that the basis for the present work is the understanding that the convergence
behaviour (of computed residuals) in finite precision arithmetic is preserved under small perturbations
to the recurrence relations. Such a supporting analysis is available for Bi-CG (and CG)
but it is still an empirical observation for most other Krylov subspace methods. It would be
interesting to derive a theoretical analysis confirming this phenomenon for those methods as well.
Acknowledgements
We would like to thank Ms. Lorrita McKnight for assistance in carrying
out the tests on Harwell-Boeing matrices.
Figure
3: Example 2 (CGS) (a): solid - computed residual of CGS; dashed - true residual of CGS;
dotted - true residual of reliable CGS; (b): dashed - of CGS, dotted - nk of
reliable CGS; x - dn of reliable CGS
(a) Convergence History
iteration number
normalized
residual
norm
(b) Residual Deviation and Bound
iteration number
deviation
--R
An Analysis of the Composite Step Biconjugate Gradient Algorithm for Solving nonsymmetric Systems
Templates for the solution of linear systems: Building blocks for iterative methods
Sparse Matrix Test Problems
Conjugate Gradient Methods for Indefinite Systems
A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems SIAM J
Iterative solutions of linear systems Acta Numerica
Matrix Computations
Behavior of Slightly Perturbed Lanczos and Conjugate-Gradient Recurrences
Estimating the attainable accuracy of recursively computed residual methods
Methods of Conjugate Gradients for solving linear systems
Solution of Systems of Linear Equations by Minimized Iterations
On the convergence rate of the conjugate gradients in presence of rounding errors
Accuracy and effectiveness of the Lanczos algorithm for the Symmetric eigenproblem
Iterative Methods for Sparse Linear Systems PWS Publishing
BICGSTAB(L) for linear equations involving unsymmetric matrices with complex spectrum Electronic Trans.
Reliable updated residuals in hybrid Bi-CG methods, Computing 56:144-163 (1996)
Analysis of the Finite Precision Bi-Conjugate Gradient algorithm for Nonsymmetric Linear Systems
The performance of FORTRAN implementations for preconditioned conjugate gradients on vector computers
--TR
--CTR
Stefan Rllin , Martin H. Gutknecht, Variations of Zhang's Lanczos-type product method, Applied Numerical Mathematics, v.41 n.1, p.119-133, April 2002 | residuals;finite precision;residual replacement;krylov subspace methods |
587269 | Efficient Nonparametric Density Estimation on the Sphere with Applications in Fluid Mechanics. | The application of nonparametric probability density function estimation for the purpose of data analysis is well established. More recently, such methods have been applied to fluid flow calculations since the density of the fluid plays a crucial role in determining the flow. Furthermore, when the calculations involve directional or axial data, the domain of interest falls on the surface of the sphere. Accurate and fast estimation of probability density functions is crucial for these calculations since the density estimation is performed at each iteration during the computation. In particular the values fn (X1 ), fn (X2), ... , fn (Xn) of the density estimate at the sampled points Xi are needed to evolve the system. Usual nonparametric estimators make use of kernel functions to construct fn. We propose a special sequence of weight functions for nonparametric density estimation that is especially suitable for such applications. The resulting method has a computational advantage over kernel methods in certain situations and also parallelizes easily. Conditions for convergence turn out to be similar to those required for kernel-based methods. We also discuss experiments on different distributions and compare the computational efficiency of our method with kernel based estimators. | Introduction
.
Nonparametric density estimation is the problem of the estimation of the values of a probability
density function, given samples from the associated distribution. No assumptions are made about
the type of the distribution from which the samples are drawn. This is in contrast to parametric
estimation, in which the data are assumed to come from a given parametric family, and the
parameters are then estimated by various statistical methods. Early contributors to the theory of
nonparametric density estimation include Smirnov [21], Rosenblatt [16], Parzen [15], and
Chentsov [3]. Extensive descriptions of various approaches to nonparametric density estimation,
along with an extensive bibliography, can be found in the books by Silverman
[23] and Nadaraya [14]. More recent developments are presented in books by Scott
[18] and Wand and Jones [27]. Results of the experimental comparison of some widely
used methods have also been reported.
In
addi ti on to data
analysi s,
ani mportant
appli cati on of
nonparametri c
onal
flui d
mechani cs. When the flow
calculati ons are per-
Lagrangi an framework, a set of
space are evolved through
me
usi ng the
ng
ons. In
poi nts that
werei ni ti ally close can move apart,
leadi ng to mesh
di storti on and
cal
di #culti es. Problems
th mesh
di storti on
can be
eli mi nated to a
extent by the use of smoothed
cle
hydrodynami cs
ques [2, 13, 9, 12]. SPH treats the
nts
bei ng tracked as samples comi
ng from an unknown
li ty
di stri buti on. These
calculati ons often
requi re the
computati on of the values of not only the unknown
densi ty,
buti ts
gradi ent as well.
# Received by the editors August 16, 1995; accepted for publication (in revised form) August 3,
1999; published electronically June 13, 2000.
http://www.siam.org/journals/sisc/22-1/29046.html
Department of Computer Science, University of California, Santa Barbara, CA 93106 (omer@cs.ucsb.edu).
# Department of Mathematics, Indian Institute of Technology, Bombay, India (ashok@math.iitb.ernet.in).
In contrast to applications concerned with the display of the density, where it is sufficient to estimate the density on some grid, in these fluid flow calculations the density is required at each sample point. Another difference in these two types of applications is that when dealing with data analysis, one is usually concerned with the optimal accuracy one can get for a given sample size. In fluid flow calculations, where additional "data" can be obtained with increased discretization, one is usually more concerned with the optimal variation of the computational effort as a function of error.
In some applications, for example, in modeling directional data [24], the samples lie on the circle S^1 or along the surface of the sphere S^2. An important special case of directional data is axial data, for which the density is symmetric about the center of the circle or the sphere. Various methods have been proposed for nonparametric estimation in directional and spherical statistics, such as the kernel [15, 1, 28] and the orthogonal series methods [17, 11]. The kernel method has been extensively studied and is probably the most popular in applications such as SPH. In this method, the value of the density at the point x is estimated as

f_n(x) = (1/(n A_h)) Σ_{i=1}^{n} K((x - X_i)/h),   (1)

where f_n is the estimate of the density given a sample of size n, X_1, X_2, ..., X_n are the positions of the samples drawn from a probability distribution with an unknown density f, K is a kernel function, h is the window width, and A_h is a normalization factor to make f_n into a probability density. One of the drawbacks of the kernel method is the computational cost involved. Even though it is possible to reduce the cost in the one-dimensional case using the expansion of a polynomial kernel and an updating strategy [19], this strategy cannot be easily extended to higher dimensions [5]. Binning methods [5] can be used in any dimension. However, since the density is evaluated on a grid, this method is not suitable for the fluid flow calculations in which we are interested, where an estimate is required at each sample point.
We propose a cosine-based weight function for nonparametric density estimation, which is a special case of the class of estimators that form a delta sequence [26, 28]. This estimator is similar to the kernel estimator but has the ease of evaluation of a series expansion. The role of the window width parameter h of the kernel method is replaced by a smoothing parameter m in our method, and f_n is now of the form

f_n(x) = (1/n) Σ_{j=1}^{n} c_m(x - X_j).   (2)

Our choice of c_m is particularly suitable for applications in fluid flow calculations where the values f_n(X_1), f_n(X_2), ..., f_n(X_n) at the sampled directions themselves are required at each time step in the flow computation. We show that with this estimator the required n values can be computed efficiently using only O(m^(1+d) n) operations for directional data and O(m^d n) operations for axial data in d dimensions, where m need not be large as long as it increases without bound with n. This is in contrast to the O(n^2) operations required by the kernel method for this computation in the worst case, and an expected complexity of O(h^d n^2) for kernels having bounded support. However, in the special case of d = 1, the complexity of the kernel method can be reduced to linear after a sorting step.
We derive conditions under which the sequence of estimated density functions f_n converges to the unknown density, and we experimentally verify the accuracy and the efficiency of our method in practical test cases. Experiments and theoretical analyses also indicate how m should vary with n for optimal accuracy.
The paper is organized as follows. In section 2 we define the weight function and give the conditions for the convergence of the mean integrated square error (MISE) when the sample space is S^1 (Theorem 3). The conditions guarantee that E(∫ (f_n - f)^2) → 0 as n → ∞. We also present corresponding results for S^2. In section 3 schemes for efficient computation of these estimates on S^1 and S^2 are presented. In sections 4 and 5, we describe experimental results with our estimator and with the kernel method for some distributions arising in practice. Our experiments imply a net savings on the number of operations performed over kernel methods in certain situations and also verify the formula found for the optimal choice of m. The results show that the kernel method and our estimator perform better in different settings, and thus complement each other. The conclusions are presented in section 6. The appendix contains additional test results.
2. The cosine estimator and the convergence of MISE. In this section, we first mention some related work done on spherical data; then we define our estimator and derive conditions for its convergence for directional data on the circle, and give corresponding results for directional and axial data on the sphere and axial data on the circle.
The kernel method for nonparametric estimation for directional and axial data is discussed in [6, 8]. While dealing with directional data, Fisher, Lewis, and Embleton [6] recommend using the following kernel:

W_n(x, X_i) = A_n exp(C_n x^T X_i).   (3)

For axial data they recommend the kernel

W_n(x, X_i) = A_n exp(C_n (x^T X_i)^2),   (4)

where A_n normalizes W_n to a probability density function, and C_n is the reciprocal of the h used in the definition of kernel estimators. Here x and X_i are the Cartesian representations of the points P and P_i, respectively, and x^T X_i is the inner product of these two vectors. W_n plays the role of K(x - X_i) of (1). Hall, Watson, and Cabrera [8] analyze estimators for directional data with the x - X_i of (1) replaced by x^T X_i. Observe that the term x^T X_i is the cosine of the angle between the points P and P_i, and it is a measure of the distance along the surface of the sphere between the points P and P_i. The inner product plays a crucial role in these estimators. We consider an estimator that can be expressed in terms of powers of the inner product, the power playing the role of the smoothing parameter. This enables us to expand the estimator in a series and facilitates fast computation.
Fig. 1. The functions c_32(x) and c_4(x), shown (a) on S^1 and (b) on the interval [-π, π].
2.1. The case of S^1. We first define our estimator on S^1. Assume X_j, j = 1, 2, ..., n, is a sequence of independently and identically distributed (i.i.d.) random variables (observations) for directional data on S^1 with probability density f(x), x ∈ [-π, π]. We impose the additional condition f(-π) = f(π), since the random variables X_j are defined on the circle S^1.
As an estimator of the density of directional data f(x), x ∈ [-π, π], we consider a nonparametric estimator of the form given by (2) with

c_m(x) = A_m cos^{2m}(x/2)   (5)

on [-π, π]. The normalization factor A_m given below makes c_m(x) integrate to 1 on [-π, π]:

1/A_m = ∫_{-π}^{π} cos^{2m}(x/2) dx.   (6)

Making use of a table of integrals such as Gradshteyn and Ryzhik [7], and by using Stirling's formula, it can be shown that A_m grows proportionally to sqrt(m). As examples, the functions c_m(x) for m = 32 and m = 4 are shown on S^1 in Figure 1(a) and on the interval [-π, π] in Figure 1(b).
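To make the definition concrete, the following short sketch evaluates the estimator (2) with the weight (5) directly, computing the normalization A_m by numerical quadrature rather than from a closed form. This is our own illustrative code, not part of the original formulation; function names and the use of NumPy are our choices.

```python
import numpy as np

def cosine_density_circle(x, samples, m, n_quad=4096):
    """Direct O(n) evaluation of the cosine estimator (2), (5) at angles x.

    x       : array of angles in [-pi, pi] where the estimate is wanted
    samples : array of observed angles X_1, ..., X_n in [-pi, pi]
    m       : smoothing parameter (plays the role of h^-2 of a kernel)
    """
    # Normalization A_m: make c_m integrate to 1 over [-pi, pi] (computed numerically).
    grid = np.linspace(-np.pi, np.pi, n_quad)
    A_m = 1.0 / np.trapz(np.cos(grid / 2.0) ** (2 * m), grid)
    # f_n(x) = (1/n) * sum_j c_m(x - X_j), with c_m(t) = A_m * cos^{2m}(t/2).
    diff = np.asarray(x)[:, None] - np.asarray(samples)[None, :]
    return A_m * np.mean(np.cos(diff / 2.0) ** (2 * m), axis=1)

# Example: estimate a density from 500 samples of a wrapped distribution.
rng = np.random.default_rng(0)
X = rng.vonmises(mu=0.0, kappa=2.0, size=500)
xs = np.linspace(-np.pi, np.pi, 9)
print(cosine_density_circle(xs, X, m=32))
```

Note that cos^{2m}(t/2) is 2π-periodic, so no explicit angle wrapping is needed in the difference x - X_j.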
We wish to find sufficient conditions under which the sequence of estimators f_n converges to f in the MISE sense. In order to do this, we first show the convergence of the bias and then derive the conditions under which the variance converges to 0. We shall then use these results to prove convergence of MISE on S^1.
First we show that as m → ∞, the expected value of the estimate f_n(x) approaches the actual density uniformly, for any given n.
Lemma 1. Suppose f ∈ C^2[-π, π] and let f_n(x) be as given in (2). Then E(f_n(x)) - f(x) → 0 uniformly as m → ∞, independently of n.
Proof. As shown in Silverman [20] and Whittle [29],

E(f_n(x)) = ∫_{-π}^{π} c_m(x - s) f(s) ds.   (7)

By a change of variable, using the periodicity of c_m and f together with (7) and the mean value theorem,

E(f_n(x)) - f(x) = ∫_{-π}^{π} c_m(y) [f(x - y) - f(x)] dy
               = -f'(x) ∫_{-π}^{π} y c_m(y) dy + (1/2) ∫_{-π}^{π} y^2 f''(ξ_{x,y}) c_m(y) dy,

where ξ_{x,y} is a point between x and x - y, and where we used ∫_{-π}^{π} c_m(y) dy = 1, which follows from (6). Since y c_m(y) is an odd function, the first integral on the right evaluates to 0. Let 2M'' = sup |f''|. We then have the following estimate for the bias:

|E(f_n(x)) - f(x)| ≤ M'' ∫_{-π}^{π} A_m cos^{2m}(y/2) y^2 dy.

For any δ such that 0 < δ < π,

∫_{|y|<δ} A_m cos^{2m}(y/2) y^2 dy ≤ δ^2 ∫_{|y|<δ} A_m cos^{2m}(y/2) dy ≤ δ^2,

∫_{|y|≥δ} A_m cos^{2m}(y/2) y^2 dy ≤ π^2 ∫_{|y|≥δ} A_m cos^{2m}(y/2) dy ≤ 2π^3 A_m cos^{2m}(δ/2),

since cos(y/2) decreases as |y| increases on the interval under consideration. In order to get a bound, we choose δ as a function of m. If δ → 0 but m δ^2 → ∞ as m → ∞, then the second term decays exponentially: it is the product of an exponentially small factor and A_m, which grows only like sqrt(m), and thus the product approaches 0. Choosing δ subject to this condition so that δ^2 is as small as possible shows that a constant multiple of M''/m is an asymptotic bound on the bias. Furthermore, the bound is independent of x; hence, the convergence is uniform.
Lemma 2. Suppose f ∈ C^2[-π, π] and let f_n(x) be as given in (2). Then Var(f_n(x)) → 0 uniformly as n → ∞, provided m → ∞ as n → ∞ and m/n^2 → 0.
Proof. As shown in Whittle [29],

Var(f_n(x)) = (1/n) ∫_{-π}^{π} c_m^2(x - s) f(s) ds - (1/n) ( ∫_{-π}^{π} c_m(x - s) f(s) ds )^2.

As a consequence of Lemma 1, the second integral approaches f(x) asymptotically, and hence the second term approaches 0 since f is bounded. It thus suffices to show the convergence of the first term to 0. Let M = sup_{x ∈ [-π, π]} |f(x)|. Making a change of variable and using (5) and (6),

(1/n) ∫_{-π}^{π} c_m^2(x - s) f(s) ds ≤ (M A_m / n) ∫_{-π}^{π} c_m(y) dy = M A_m / n,

where the expression on the right-hand side is a consequence of the expression for A_m. Since A_m grows like sqrt(m) and m/n^2 → 0, the above bound converges to 0. Since it is independent of x, the convergence is uniform. Therefore the variance of f_n(x) converges uniformly to 0 under the conditions of the lemma.
Note that the bound on the bias for the cosine method given by Lemma 1 is of the form C_1/m, and the bound for the variance given by Lemma 2 is of the form C_2 sqrt(m)/n. Therefore, the role played by m for the cosine method is the same as that of h^{-2} for the kernel based methods, where h is the window width of the kernel estimator. In other words, the bounds on the bias and the variance of the cosine estimator are in accordance with the behavior of the kernel method given in Silverman [23]. Such similarity of rates of convergence is to be expected, since the cosine estimator behaves essentially like the kernel estimator, though the forms of the weight functions differ. It will be shown later that the main advantage of the cosine estimator lies in its computational efficiency.
Theorem 3. Suppose f ∈ C^2[-π, π] and f_n(x) is as given in (2). If m → ∞ as n → ∞ and m/n^2 → 0, then

∫_{-π}^{π} E(f_n(x) - f(x))^2 dx → 0   as n → ∞.

Proof. As shown in Whittle [29],

∫_{-π}^{π} E(f_n - f)^2 dx = ∫_{-π}^{π} Var(f_n(x)) dx + ∫_{-π}^{π} (E(f_n(x)) - f(x))^2 dx.

From Lemmas 1 and 2, each of the integrals approaches 0.
Hence, the MISE converges to 0.
In fact, the MISE is of the form

MISE ≈ C_1/m^2 + C_2 sqrt(m)/n,   (13)

where the asymptotic bounds on the constants follow from the estimates above; as we shall explain later, the exact asymptotic constants are not all that important for practical applications.
Conditions for the convergence of the estimates and of their derivatives on the real line instead of S^1 can be derived similarly. We next consider the case when the directional data lie along the surface of the sphere.
2.2. The case of S^2. Let X_j, j = 1, 2, ..., n, be a sequence of i.i.d. random variables with values on the surface of the unit sphere S^2 centered at the origin. Suppose that the probability density function f(x) of the X_j has bounded second derivatives. We consider a nonparametric estimator of the form

f_n(x) = (1/n) Σ_{j=1}^{n} c_m(x, X_j)   (14)

for some m to be determined as a function of n. The c_m are defined in this case as follows. If θ_{xX} denotes the angle between the points x and X, then

c_m(x, X) = A_m cos^{2m}(θ_{xX}/2),   (15)

where the normalizing factor A_m, which makes c_m integrate to 1 over the sphere, is defined analogously to (6). Through a derivation along the lines of the case of the circle, the following theorem can be proved for the convergence of the estimators.
Theorem 4. Suppose f ∈ C^2(S^2) and let f_n(x) be as given in (14). If m → ∞ as n → ∞ and m/n → 0, then E(∫_{S^2} (f_n - f)^2) → 0 as n → ∞.
Analogous to (13), the form of the MISE is found to be

MISE ≈ C_1/m^2 + C_2 m/n.

From this expression for MISE we see that, as in the case of S^1, m plays the role of h^{-2}, where h is the window width of the kernel estimator.
When dealing with axial data, we can consider the following axial estimator for spherical data:

f_n(x) = (1/n) Σ_{j=1}^{n} B_m cos^{2m}(θ_{xX_j}),

where B_m is the corresponding normalization constant. We can also define a corresponding estimator on the circle, where we take the cosine of the arc length between two points, instead of the cosine of half the arc length as in the case of directional data. The relationship between the smoothing parameter and the window width is the same for the cases of the circle and the sphere, respectively.
3. Efficient evaluation of the density estimator. In this section, we shall describe an efficient algorithm for the computation of the estimates f_n(x) evaluated at a set of n observed points X_1, X_2, ..., X_n on the circle S^1 (the d = 1 case) and on the sphere S^2. We also show that if the value of f_n at some other arbitrary point x is desired, then it is easily accomplished once certain coefficients have been computed. The efficiency of our method is based on the fact that f_n can be expressed in terms of the functions c_m(x) as in (5) and (15).
Suppose we represent the positions of the observed points X_1, X_2, ..., X_n by their Cartesian coordinates. We show that for any x, f_n(x) can be expressed as a polynomial of total degree m in the coordinates of x. The coefficients of this polynomial can be determined in turn from the coordinates of the X_i. Moreover, the coefficients are the sums of the contributions due to each X_i independently.
First we consider the case of directional data on S^1. From (2), (5), and the half-angle formula for the cosine we get

f_n(x) = (A_m / (n 2^m)) Σ_{i=1}^{n} (1 + cos(x - X_i))^m.   (17)

Denote the points on S^1 corresponding to the angles x and X_i for i = 1, 2, ..., n by their Cartesian coordinates x = (x_1, x_2) and X_i = (X_{i1}, X_{i2}), and let ⟨·,·⟩ represent the standard inner product on R^2. Then cos(x - X_i) = ⟨x, X_i⟩ = x_1 X_{i1} + x_2 X_{i2}. Substituting this into (17) we get

f_n(x) = (A_m / (n 2^m)) Σ_{i=1}^{n} (1 + ⟨x, X_i⟩)^m.   (18)

The expression in (18) is a polynomial of degree m in x_1 and x_2. For a fixed m, we can compute its coefficients by adding the contribution of each X_i as follows. Using the multinomial theorem and (18),

f_n(x) = (A_m / (n 2^m)) Σ_{i=1}^{n} Σ_{r+s≤m} (m! / (r! s! (m-r-s)!)) x_1^r x_2^s X_{i1}^r X_{i2}^s,   (19)

where the inner sum runs over r, s ≥ 0 with r + s ≤ m. Changing the order of summation,

f_n(x) = (A_m / 2^m) Σ_{r+s≤m} (m! / (r! s! (m-r-s)!)) M(r, s) x_1^r x_2^s,   (20)

where

M(r, s) = (1/n) Σ_{i=1}^{n} X_{i1}^r X_{i2}^s,

and A_m is as given by (6). If we use the asymptotic expression for A_m for computational ease, then (20) simplifies to

f_n(x) ≈ (sqrt(m) / (2 sqrt(π) 2^m)) Σ_{r+s≤m} (m! / (r! s! (m-r-s)!)) M(r, s) x_1^r x_2^s   (21)
Table 1
Computational complexity of the cosine estimator.

                  Circle      Sphere
Axial             O(mn)       O(m^2 n)
Directional       O(m^2 n)    O(m^3 n)

for large m. Now we consider the number of operations required for the evaluation of f_n(x) given the observations X_1, X_2, ..., X_n. The powers X_{i1}^r and X_{i2}^r for a fixed i and r = 1, 2, ..., m can be computed with O(m) multiplications. Repeating this for i = 1, 2, ..., n requires O(mn) multiplications. After the conclusion of this step, each of the averages M(r, s) for a given r and s can be computed with an additional O(n) operations. Since there are a total of O(m^2) averages, corresponding to the pairs r, s with 0 ≤ r + s ≤ m, this means that the coefficients of the polynomial in (20) or (21) can be computed with a total of O(m^2 n) operations.
Once the coefficients of f_n(x) have been computed, to evaluate f_n(x) at a point x we first calculate the powers x_1^r and x_2^r for r = 1, 2, ..., m in O(m) operations. Since the coefficients are already available, the remaining work requires only an additional O(m^2) multiplications and additions. The results for the different cases are summarized in Table 1.
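As a sketch of the procedure just described (our own illustrative code, not taken from the paper), the following NumPy fragment precomputes the moments M(r, s) of (20) in O(m^2 n) operations and then evaluates the estimate at an arbitrary point in O(m^2) operations; the multinomial coefficients m!/(r! s! (m-r-s)!) are generated with exact integer arithmetic, and the normalization A_m is obtained numerically.

```python
import numpy as np
from math import comb

def precompute_moments(samples_xy, m):
    """M(r, s) = (1/n) * sum_i X_i1^r * X_i2^s for all r + s <= m.  O(m^2 n) work."""
    X1, X2 = samples_xy[:, 0], samples_xy[:, 1]
    pow1 = np.vstack([X1 ** r for r in range(m + 1)])   # (m+1) x n
    pow2 = np.vstack([X2 ** s for s in range(m + 1)])   # (m+1) x n
    M = np.zeros((m + 1, m + 1))
    for r in range(m + 1):
        for s in range(m + 1 - r):
            M[r, s] = np.mean(pow1[r] * pow2[s])
    return M

def evaluate(x_xy, M, m, A_m):
    """Evaluate (20) at a single unit vector x = (x1, x2) in O(m^2) operations."""
    x1, x2 = x_xy
    total = 0.0
    for r in range(m + 1):
        for s in range(m + 1 - r):
            coeff = comb(m, r) * comb(m - r, s)          # = m!/(r! s! (m-r-s)!)
            total += coeff * M[r, s] * (x1 ** r) * (x2 ** s)
    return A_m * total / (2.0 ** m)

# Example on the circle: angles -> unit vectors, then fast evaluation.
rng = np.random.default_rng(1)
theta = rng.vonmises(0.0, 2.0, size=1000)
XY = np.column_stack([np.cos(theta), np.sin(theta)])
m = 32
grid = np.linspace(-np.pi, np.pi, 2048)
A_m = 1.0 / np.trapz(np.cos(grid / 2.0) ** (2 * m), grid)  # numerical normalization
M = precompute_moments(XY, m)
print(evaluate((1.0, 0.0), M, m, A_m))  # estimate of the density at angle 0
```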
Remark. For our MISE convergence condition for S^1 (Theorem 3) to hold, m must increase without bound with n. Theoretically, we can take m to increase as slowly as we like. Then the above result implies that the computation of the density at all of the sample points can be accomplished using only about O(n) operations, provided the magnitude of m gives acceptable accuracy for f_n(x). The problem with taking m to increase too slowly is that the magnitude of m controls the error in our convergence proofs.
An efficient algorithm for the evaluation of f_n(x) for directional data on S^2 can be constructed similarly. When X_1, X_2, ..., X_n are observations on S^2 drawn from an unknown density f, it can be shown [4] that

f_n(x) = (A_m / 2^m) Σ_{r+s+t≤m} (m! / (r! s! t! (m-r-s-t)!)) M(r, s, t) x_1^r x_2^s x_3^t,   (22)

where the sum runs over r, s, t ≥ 0 with r + s + t ≤ m, and

M(r, s, t) = (1/n) Σ_{i=1}^{n} X_{i1}^r X_{i2}^s X_{i3}^t.

This time the coefficients of the polynomial in (22) can be computed with a total of O(m^3 n) operations. After this preprocessing, each evaluation of f_n(x) requires only an additional O(m^3) operations. Corresponding results can be derived for axial data, as summarized in Table 1.
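The same moment idea carries over to the sphere. The sketch below (again our own illustrative code, not from the paper) precomputes M(r, s, t) and evaluates (22); the closed form used for the normalization A_m is our assumption and can easily be checked against numerical quadrature over the sphere.

```python
import numpy as np
from math import factorial

def precompute_moments_sphere(samples_xyz, m):
    """M(r, s, t) = (1/n) * sum_i X_i1^r X_i2^s X_i3^t for r + s + t <= m.  O(m^3 n) work."""
    X1, X2, X3 = samples_xyz[:, 0], samples_xyz[:, 1], samples_xyz[:, 2]
    p1 = np.vstack([X1 ** r for r in range(m + 1)])
    p2 = np.vstack([X2 ** s for s in range(m + 1)])
    p3 = np.vstack([X3 ** t for t in range(m + 1)])
    M = np.zeros((m + 1, m + 1, m + 1))
    for r in range(m + 1):
        for s in range(m + 1 - r):
            for t in range(m + 1 - r - s):
                M[r, s, t] = np.mean(p1[r] * p2[s] * p3[t])
    return M

def evaluate_sphere(x, M, m):
    """Evaluate (22) at a unit vector x on S^2.  A_m = (m+1)/(4*pi) is an assumed
    closed form for the constant normalizing cos^(2m)(theta/2) over the sphere."""
    A_m = (m + 1) / (4.0 * np.pi)
    x1, x2, x3 = x
    total = 0.0
    for r in range(m + 1):
        for s in range(m + 1 - r):
            for t in range(m + 1 - r - s):
                coeff = factorial(m) // (factorial(r) * factorial(s) * factorial(t)
                                         * factorial(m - r - s - t))
                total += coeff * M[r, s, t] * x1 ** r * x2 ** s * x3 ** t
    return A_m * total / 2.0 ** m
```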
It should also be noted that we needed the Cartesian representation of the data. If the data are given in spherical coordinates, then there will be an additional overhead for computing the Cartesian representation. However, this overhead takes only linear time and so will be negligible for sufficiently large data. Furthermore, it has been shown [24] that for an important class of applications, Cartesian coordinates are preferable to spherical coordinates, as the latter system is not numerically stable for solving the differential equations that arise.
In the subsequent parts of this section we shall compare the computational efficiency of our scheme with that of the kernel method.
3.1. Parallelization. One of the advantages of the computational strategy described above is the ease of parallelization. Parallelization is required in many fluid flow calculations due to the large sizes of the systems. The kernel method is somewhat difficult to parallelize. If we use an efficient kernel implementation that performs kernel evaluations only for those points which are within a distance h of the given sample, then an efficient implementation of the parallelization requires load balancing and domain decomposition so that points that are close by remain on the same processor, and so that each processor has roughly the same load in terms of the computational effort. Also, the communication pattern for the kernel method is not very regular. In contrast, parallelization for the cosine estimator can easily be accomplished by a global reduction operation, for which efficient implementations are usually available. This method requires the same computational effort for each point, and so the load is easily balanced by having the same number of points on each processor. Domain decomposition does not play an important role, since the points can be on any processor.
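A minimal sketch of that reduction, assuming mpi4py is available (our own example, not code from the paper): each process accumulates the moment sums over its local samples, and a single Allreduce produces the global M(r, s) on every process.

```python
import numpy as np
from mpi4py import MPI

def local_moment_sums(local_xy, m):
    """Partial sums sum_i X_i1^r X_i2^s over the samples held by this process."""
    X1, X2 = local_xy[:, 0], local_xy[:, 1]
    S = np.zeros((m + 1, m + 1))
    for r in range(m + 1):
        for s in range(m + 1 - r):
            S[r, s] = np.sum((X1 ** r) * (X2 ** s))
    return S

comm = MPI.COMM_WORLD
m = 32
local_xy = np.load(f"samples_rank{comm.Get_rank()}.npy")  # hypothetical per-process input
S_local = local_moment_sums(local_xy, m)
S_global = np.zeros_like(S_local)
comm.Allreduce(S_local, S_global, op=MPI.SUM)             # the single global reduction
n_total = comm.allreduce(local_xy.shape[0], op=MPI.SUM)
M = S_global / n_total                                    # M(r, s) available on every process
```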
3.2. Theoretical comparison of the kernel and the cosine estimators. Now we analyze the computational efficiency of the kernel and the cosine estimation methods. An important measure of the efficiency of the algorithms is not just the convergence rate of the error with sample size n, but the growth of the computational effort C required as a function of the error E. For the kernel estimator, we can write the MISE as

E ≈ a_1 h^4 + a_2 / (n h^d),   (23)

where h is the smoothing parameter, d is the dimension, and n is the sample size. The computational effort required for nonparametric estimation can be expressed as

C = O(n^a h^{-b}),   (24)

where the exponents a and b depend on the details of the algorithm used. For a given sample size, minimizing (23) gives the optimal h as h ∝ n^{-1/(d+4)}. However, since the expression for the computational effort also depends on h, we need to consider the possibility that a value of h smaller than this optimal value may actually result in lower computational effort.
Let us consider a variation of h with n of the form

h ∝ n^{-γ}.   (25)

From (25), (24), and (23) we obtain E as the sum of two powers of n. For minimum error, the exponents of both terms on the right should be the same, since otherwise the error due to the higher-order term will dominate. This leads to γ = 1/(d+4), which is the same as the value of the optimal γ for a given n. If we let h_optn represent the optimal h minimizing the MISE for a given n, and h_optC represent the optimal h minimizing the computational effort as a function of the error, then the expression derived above for γ does not necessarily imply that h_optC = h_optn, since a relation of the form h_optC = k h_optn would still satisfy the expression for γ for some constant k. If k < 1, then we can choose a suboptimal value of h in order to improve the speed of the algorithm. The optimal variation of error with computational effort, using this value of h, is of the form C ∝ E^{-(a(d+4)+b)/4}.
Let us now consider the cosine estimator. We can write the asymptotic MISE as follows:

E ≈ a_1 / m^2 + a_2 m^{d/2} / n,

where E is the MISE, m is the smoothing parameter, d is the dimension (d = 1 for the circle and d = 2 for the sphere), and n is the sample size. The computational effort required for this estimator can be expressed as C = O(m^β n), where β can be determined from Table 1. This expression for C is of the same form as (24) with a = 1 and b = 2β, recalling that m behaves as h^{-2}. By an analysis similar to the previous case we can obtain the optimal variation of error with computational effort.
As examples, for the cosine estimator on the circle with axial data (β = 1) and directional data (β = 2), the computational complexity and the error are related by

C ∝ E^{-1.75}   and   C ∝ E^{-2.25},

respectively.
The complexity of the kernel estimator is the same for axial and directional data. However, several different possibilities exist depending on how efficient the implementation of the estimator is. If we consider estimators of the form given by (3) and (4), then we have

C = O(n^2).   (26)

However, if we consider a kernel with bounded support, and use an efficient implementation of the algorithm that computes the kernel only for those points that have a nonzero contribution, then the expected value of C is O(h n^2) for data on the circle. Note that the worst case remains as in (26). For the one-dimensional case, we can also consider an efficient algorithm using polynomial kernels and updating [19], which uses a linear amount of time after an initial O(n log n) sorting step. In this case C = O(n), which means that the kernel method has a better complexity than the cosine estimator. However, there appears to be no natural generalization of this update strategy to higher dimensions [5].
Results for the different cases can be determined in the manner demonstrated above and are presented in Table 2. We wish to mention that the exact constants in Theorem 3 are not quite as important (compared with the exponent on E), since asymptotically the slowdown incurred by the cache dominates the overall running time. We can expect that the simpler memory access pattern of our estimator will make it advantageous over the kernel method in the asymptotic case.
Table 2
The optimal computational effort versus MISE. The numbers in the table represent γ, where the relationship between the computational effort C and the error E is C ∝ E^{-γ}. Entries marked * do not take into account an initial sorting step.

Estimator                      Circle
Cosine, axial data             1.75
Cosine, directional data       2.25
Kernel, worst case             1.25 *
Kernel, expected case          1.25 *
Since the worst case complexities of the kernel method and the cosine estimator for directional data on the sphere have the same order, the relative efficiencies of the methods can be tested only through experiments. Similarly, since the worst case complexity of the cosine estimator for axial data on the sphere is the same as the expected case for an efficient implementation of the kernel estimator, we need to perform experiments to test the relative merits of the two estimators.
4. Experimental results. We performed numerical experiments for axial and directional data on the circle and the sphere in order to test the effectiveness of our estimator. We first plot estimates for known distributions and then demonstrate that the MISE follows the expected trends for these distributions. We finally compare the computational efficiency of our estimator with that of kernel methods. More numerical results are presented in the appendix.
We consider the density function

f(θ, φ) = exp(U S cos^2(θ)) / A,

where A normalizes the function to be a density on the surface of a sphere and S is a known function of U. The angles φ and θ are the azimuth and the elevation in spherical coordinates. This density arises as the solution to a particular problem in fluid mechanics. In Figure 2(a) we present a numerical estimate for the one-dimensional version of the above density, with a fixed value of the parameter. In this figure, we take the data to be directional. However, since the density is symmetric with respect to the center of the circle, we can consider the data as axial and use the axial estimator. We can see from Figure 2(b) that this requires a much smaller value of m.
In Figure 3(a) the MISE is compared versus m and n for the one-dimensional version of this density using the axial cosine estimator. We also compare with one case of the directional estimator in order to show the benefit of using the axial estimator. In Figure 3(b) the MISE is compared versus m and n for the two-dimensional version of the density on the surface of a sphere using the directional estimator.
We next present results for experiments comparing the speed of the cosine and the kernel estimators. We consider the optimal variation of the computational effort with the MISE. In order to get the optimal computational effort for a given MISE, we allow for the possibility that we may require different sample sizes for the kernel and the cosine estimators. This is justified because in these calculations one can easily change the "sample" size by changing the discretization of the system. We have performed these comparisons only for spherical data. The case of data on the circle was not considered because of the asymptotic analyses of the previous section, which clearly indicate that the linear kernel algorithm in the one-dimensional case will outperform the cosine estimator. However, in a parallel implementation, the sorting step for the linear kernel algorithm may be slow, and then one may wish to consider the cosine estimator.
Fig. 2. Cosine estimates for the one-dimensional case of the density defined above. The solid line represents the true density. (a) The dashed line represents the directional estimate. (b) The dashed line represents the axial estimate.

Fig. 3. MISE versus m and n for the density defined above. (a) One-dimensional case (on the circle), exp(U S cos^2(x))/A, for n = 1000 and n = 2000: the solid lines show the results for the axial cosine estimator and the dashed line those for the directional estimator. (b) Two-dimensional case (on the sphere), exp(U S cos^2(θ))/A as defined above: MISE for the directional cosine estimator.

The following kernel was chosen for the comparisons: a piecewise polynomial (SPH-type) kernel K(t), with one expression for t in [0, 1], another for t in [1, 2], and K(t) = 0 otherwise, normalized by a constant A; the ratio of the distance between two points along the surface of the sphere to h is given as the argument to the kernel function.
Fig. 4. Comparison of time (in seconds) versus the MISE for the cosine and kernel estimators, for data sampled from the exp(U S cos^2(θ))/A density defined above. The points marked o represent the kernel estimate. The points marked x represent the cosine estimate.

The use of this kernel for the comparisons can be justified by its popular use in fluid mechanics calculations [12]. Furthermore, we cannot expect any other kernel to give a significantly better performance, for the following reasons: (i) It is well known that most kernels are equally good [23, Table 3.1] with respect to their "efficiency." (ii) Given that the efficiency is about the same, the only other consideration is the computational effort involved. Our kernel takes between 6 and 10 floating point operations for a nonzero evaluation (including the cost of computing the square of the distance). Any other reasonable kernel would require at least 6 floating point operations. Apart from this, the memory access times and zero-evaluations would add the same constant to all kernels.
Figures 4 and 5 compare the computational effort required for the cosine weight function estimator and the kernel estimator for these densities. We obtained the data using the following procedure. We performed estimates for various values of n, m, and h and obtained the MISE and the time for the calculations. For the cosine and the kernel estimates, we separately plotted the data for the time required for the calculations versus the error. We chose the lower envelope of the data as the curve for that particular estimator, since the parameter values for the data on the lower envelope give the best obtainable speed for a given MISE.
In the implementation of the kernel estimator, we divided the sphere into cells such that the sides of the cells had length at least 2h (since the kernel defined above has window width 2h, rather than h). We placed each sample in the appropriate cell. When computing the density for a particular cell, we then need to search over only a few neighboring cells. The expected complexity of this implementation is O(h^2 n^2).
We first consider data estimated using the axial cosine estimator and an axial variant of the kernel estimator. Figure 4 shows the results for the two-dimensional exp(U S cos^2(θ))/A distribution. This is an example of a highly nonuniform distribution. We can see that the kernel and the cosine estimators are about equally fast.
We next consider a more uniform distribution, given by cos(θ) + 1/(8π) (up to normalization), where φ is the azimuth and θ is the elevation. The results presented in Figure 5(a) show that the cosine estimator outperforms the kernel estimator by more than an order of magnitude when the axial forms of both the estimators are used.
After this we considered the two densities mentioned above but treated the data as directional and estimated them using the directional variants of the kernel and the cosine estimators. For the exp(U S cos^2(θ))/A distribution, the cosine estimator performed very poorly in terms of computational efficiency, and we do not present results for this case. Figure 5(b) presents the results for the distribution given by cos(θ) + 1/(8π). We can see that the cosine estimator still outperforms the kernel estimator, though only slightly.
5. Discussion. The comparison of the estimates with the true density indicates that the cosine estimator produces accurate results for the distributions tested. Plots of MISE versus m and n follow the expected trends. As the sample size increases, the error for the optimal m decreases. Besides, the optimal value of m increases as the sample size increases. It can also be seen that as the number of points increases, the range of m over which the estimate performs well also increases. We can use this to our advantage by choosing a suboptimal value of m which decreases the computational effort significantly but increases the error only slightly.
The experiments comparing the computational efficiencies show that the cosine estimator outperforms the kernel estimator for axial data when the distribution is moderately uniform. If the distribution is highly nonuniform, then the two estimators give comparable performance for axial data. The cosine estimator outperforms the kernel estimator slightly for directional data when the distribution is moderately uniform. However, the timing results for the cosine estimator are poor for highly nonuniform directional data. In general, when the data is not very nonuniform, smoother weight functions are used. This gives a low value of m, which implies a fast evaluation using the cosine estimator. However, this leads to a higher h for the kernel estimator, which implies that more samples contribute to the kernel evaluation of each sample point and, hence, this leads to more computational effort. Conversely, when the distribution is highly nonuniform, especially for directional data, the kernel method is to be preferred. The numerical test results presented in the appendix further demonstrate this point.
We also analyzed our experimental data to estimate an optimum variation of m with n. Using the results of our experiments, we can perform a least squares fit and estimate m as k n^{1/2.5} for one-dimensional estimation, which is the same as that expected based on the expression for the MISE. A similar fitting appears to give reasonable estimates for the density on the surface of a sphere. This result is also consistent with the theoretical predictions. Here, the magnitude of k depends on the complexity of the density function. It varies between 1 and 10 for the distributions considered here.
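As a small illustration of this rule of thumb (our own code; the constant k is problem dependent, as noted above, and the sphere exponent is our assumption based on the MISE balance):

```python
def choose_m(n, k=2.0, d=1):
    """Smoothing parameter from the empirical rule m ~ k * n^(1/2.5) for the circle (d = 1).
    For the sphere (d = 2) we assume the analogous exponent 1/3 suggested by the MISE form."""
    exponent = 1.0 / 2.5 if d == 1 else 1.0 / 3.0
    return max(1, round(k * n ** exponent))

print(choose_m(1000))        # circle
print(choose_m(1000, d=2))   # sphere (assumed exponent)
```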
We also noted the values of m, n, and h which gave the optimal computational effort for a given MISE, and compared the results for the kernel and the cosine estimators. We observed that the values of h were close to the values which gave the minimum MISE for the given sample size. However, the values of m were significantly lower than the values which gave the minimum MISE for the given sample size, though the error involved was itself not much higher than the minimum MISE. It thus appears that we can choose a suboptimal smoothing parameter in order to increase the speed in the case of the cosine estimator.
Fig. 5. Plot of time (in seconds) versus the MISE for the cosine and the kernel estimation of data sampled from the cos(θ) + 1/(8π) distribution defined in the text. The points marked in o represent the kernel estimate. The points marked in x represent the cosine estimate. (a) Data treated as axial. (b) Data treated as directional.

Fig. 6. Plot of the density functions g(θ; s) for different values of s versus the elevation θ. The solid, dashed, dash-dotted, and dotted lines correspond to different values of s.
6. Conclusions. In this paper, we have described a weight function estimator for nonparametric estimation of probability density functions based on cosines, and we provided conditions under which the estimate and its derivatives converge to the actual functions. We have developed a scheme for the efficient computation of the density and presented experimental results to check the performance of the estimator for practical problems. These results are particularly relevant to fluid mechanics calculations and, in general, to applications where the sample size can be controlled, for example, through refinement of the discretization. We have also given an empirical formula for choosing the weight function exponent parameter of the estimator. Our experimental results suggest that the cosine estimator outperforms the kernel estimator for both directional and axial data that are moderately uniform. It gives performance comparable to the kernel estimator for highly nonuniform axial data, while the kernel method is preferable for highly nonuniform directional data. There is potential for further theoretical study of our estimator.
Appendix A. Further test results. We present more test results in this section to study the relative efficiencies of the cosine and the kernel techniques, as the estimated density is varied systematically from being relatively uniform to being sharply peaked on the sphere. For these tests, we chose density functions g(θ; s), where s is a constant that governs the sharpness of the density, θ is the elevation, and a constant normalizes this to a probability density function. Figure 6 shows the density as a function of the elevation alone for different values of the parameter s. This density is symmetric about the center of the sphere, and thus we can use the axial estimators, as in section 4. We can also ignore our knowledge of the symmetry and use the general directional estimators.
We present the results of the experiments as plots of time versus MISE for the kernel estimator versus the cosine estimator, for both axial and directional data. The kernel estimator is the one used in the comparisons of section 4. The tests were performed on an Intel Celeron 300MHz processor with 64 MB memory. The C code was compiled with the gcc compiler at optimization level -O3.
We can see from Figures 7, 8, 9, 10, and 11 that when the density function is not very sharp, the cosine estimator outperforms the kernel estimator for both axial and directional data. As the density becomes sharper, the kernel method starts outperforming the cosine estimator for directional data, though the latter is still better for axial data. When the density becomes extremely sharp, the kernel method becomes better for both types of data, though for axial data the two methods are still comparable to a large extent in terms of speed. These results follow the theoretically predicted trends and demonstrate that these two methods complement each other for different types of data.
Acknowledgments. We thank the referees for their detailed comments and advice, especially for directing our attention to the current literature.
--R
On some global measures of the deviations of density function estimates
Estimation of unknown probability density based on observations
Fast implementations of nonparametric curve estimators
Statistical Analysis of Spherical Data
Kernel density estimation with spherical data
TREESPH: A unification of SPH with the hierarchical tree method
The estimation of probability densities and cumulatives by Fourier series methods
Smoothed particle hydrodynamics
On estimation of a probability density function and mode
Remarks on some non-parametric estimates of a density function
Estimation of probability density by an orthogonal series
Multivariate Density Estimation
Fast algorithms for nonparametric curve estimation
Kernel density estimation using the fast Fourier transform
On the approximation of probability densities of random variables
Numerical Solution of Partial Differential Equations
Density Estimation for Statistics and Data Analysis
A new computational method for the solution of flow problems of microstructured fluids
Probability density estimation in astronomy
Probability density estimation using delta sequences
Kernel Smoothing
On the estimation of the probability density
On the smoothing of probability density functions
--TR
--CTR
Jeff Racine, Parallel distributed kernel estimation, Computational Statistics & Data Analysis, v.40 n.2, p.293-302, 28 August 2002 | kernel method;convergence;efficient algorithm;nonparametric estimation;fluid mechanics;probability density |
587292 | A Multiresolution Tensor Spline Method for Fitting Functions on the Sphere. | We present the details of a multiresolution method which we proposed at the Taormina Wavelet Conference in 1993 (see "L-spline wavelets" in Wavelets: Theory, Algorithms, and Applications, C. Chui, L. Montefusco, and L. Puccio, eds., Academic Press, New York, pp. 197--212) which is suitable for fitting functions or data on the sphere. The method is based on tensor products of polynomial splines and trigonometric splines and can be adapted to produce surfaces that are either tangent plane continuous or almost tangent plane continuous. The result is a convenient compression algorithm for dealing with large amounts of data on the sphere. We give full details of a computer implementation that is highly efficient with respect to both storage and computational cost. We also demonstrate the performance of the method on several test examples. | Introduction
In many applications (e.g., in geophysics, meteorology, medical modelling, etc.), one
needs to construct smooth functions defined on the unit sphere S which approximate
or interpolate data. As shown in [11], one way to do this is to work with tensor-product
functions of the form

f(θ, φ) = Σ_{i=1}^{m} Σ_{j=1}^{ñ} c_{ij} φ_i(θ) ψ̃_j(φ),   (1.1)

defined on the rectangle H := [-π/2, π/2] × [0, 2π], where the φ_i are quadratic polynomial B-splines on [-π/2, π/2], and the ψ̃_j are 2π-periodic trigonometric splines of order three on [0, 2π]. With some care in the
choice of the coefficients (see Sect. 2), the associated surface
1) Institutt for Informatikk, University of Oslo P.O.Box 1080, Blindern 0316 Oslo, Norway
tom@ifi.uio.no. Supported by NATO Grant CRG951291. Part of the work was
completed during a stay at "Institut National des Sciences Appliqu'ees'' and "Laboratoire
Approximation et Optimisation" of "Universit'e Paul Sabatier'', Toulouse, France.
Department of Mathematics, Vanderbilt University, Nashville, TN 37240,
s@mars.cas.vanderbilt.edu. Supported by the National Science Foundation under
grant DMS-9803340 and by NATO Grant CRG951291.
S_f, with f > 0, will be tangent plane continuous at all nonpolar points.
In practice we often encounter very large data sets, and to get good fits using
tensor product splines (1.1), a large numbers of knots are required, resulting in many
basis functions and many coefficients. Since two spline spaces are nested if their
knot sequences are nested, one way to achieve a more efficient fit without sacrificing
quality is to look for a multiresolution representation of (1.1), i.e., to recursively
decompose it into splines on coarser meshes and corresponding correction (wavelet)
terms. Then compression can be achieved in the standard way by thresholding out
small coefficients.
The paper is organized as follows. In Sect. 2 we introduce notation and give
details on the tensor product splines to be used here. In Sect. 3 we describe the
general decomposition and reconstruction algorithm in matrix form, while in Sect. 4
we present a tensor version of the algorithms. The required matrices corresponding
to the polynomial and trigonometric spline spaces, respectively, are derived in
Sections 5 and 6. Sect. 7 is devoted to details of implementing the algorithm. In
Sect. 8 we present test examples, and in Sect. 9 several concluding remarks.
x2. Tangent plane continuous tensor splines
Let φ_1, ..., φ_m be the standard normalized quadratic B-splines associated with a knot sequence x_1 ≤ x_2 ≤ ... ≤ x_{m+3} on [-π/2, π/2]. Recall that φ_i is supported on the interval [x_i, x_{i+3}] and that the B-splines form a partition of unity on [-π/2, π/2]. Let T_1, ..., T_{ñ} be the classical trigonometric B-splines of order 3 defined on the knot sequence x̃_1, ..., x̃_{ñ+3} on [0, 2π]; see Sect. 6. Recall that T_j is supported on the interval [x̃_j, x̃_{j+3}]. For j = 1, ..., ñ, let ψ̃_j be the associated 2π-periodic trigonometric B-splines, see [10]. These splines can be normalized appropriately for φ ∈ [0, 2π]; the normalization involves the functions cos and sin evaluated at points determined by the knots.
Since the left and right boundaries of H map to the north and south poles, respectively, a function f of the form (1.1) will be well-defined on S if and only if

f(-π/2, φ) = f_S   for all φ ∈ [0, 2π],   (2.4)

and

f(π/2, φ) = f_N   for all φ ∈ [0, 2π],   (2.5)

where f_S and f_N are the values at the poles. Now since f is 2π-periodic in the φ
variable and is C 1 continuous in both variables, we might expect that the corresponding
surface S f has a continuous tangent plane at nonpolar points. However,
since we are working in a parametric setting, more is needed. The following theorem
shows that under mild conditions on f which are normally satisfied in practice,
we do get tangent plane continuity except at the poles.
Theorem 2.1. Suppose f is a spline as in (1.1) which satisfies the conditions (2.4) and (2.5), and that in addition f(θ, φ) > 0 for all (θ, φ) ∈ H. Then the
corresponding surface S f is tangent plane continuous at all nonpolar points of S.
Proof: Since f is a C^1 spline, the partial derivatives f_θ and f_φ are continuous on H. Now t_1 := ∂/∂θ [f(θ, φ) v(θ, φ)] and t_2 := ∂/∂φ [f(θ, φ) v(θ, φ)] are two tangents to the surface S_f at the point f(θ, φ) v(θ, φ), where v(θ, φ) is the unit vector on the sphere with latitude θ and longitude φ. The normal vector to the surface at this point is given by the cross product n := t_1 × t_2. By the hypotheses, n is continuous, and thus to assure a continuous tangent plane, it suffices to show that n has positive length (which insures that the surface does not have singular points or cusps). Using Mathematica, it is easy to see that the squared length of n contains the strictly positive term cos(θ)^2 f(θ, φ)^4 plus nonnegative terms, which is clearly positive for all values of (θ, φ) ∈ H with θ ≠ ±π/2.
With some additional side conditions on the coefficients of f, we can make the surface S_f also be tangent plane continuous at the poles. The required conditions (cf. [3,11]) are that the coefficients adjacent to the south pole boundary are of the form A_S cos(ξ_j) + B_S sin(ξ_j), and those adjacent to the north pole boundary are of the form A_N cos(ξ_j) + B_N sin(ξ_j), for j = 1, ..., ñ, where A_S, B_S, A_N, and B_N are constants and the ξ_j are fixed angles determined by the knots in the φ variable.
x3. Basic decomposition and reconstruction formulae
Suppose V_0 ⊂ V_1 ⊂ V_2 ⊂ ... are a nested sequence of finite-dimensional linear subspaces of an inner-product space X. Let

V_k = V_{k-1} ⊕ W_{k-1},   k ≥ 1,

be the corresponding orthogonal decompositions.
For our application, it is convenient to express decomposition and reconstruction in matrix form, cf. [12]. Let Φ_k := (φ_{k,1}, ..., φ_{k,m_k})^T be a basis for V_k, and let Ψ_{k-1} := (ψ_{k-1,1}, ..., ψ_{k-1,n_{k-1}})^T be a basis for W_{k-1}, where n_{k-1} := m_k - m_{k-1}. Then by the nestedness, there exists an m_k × m_{k-1} matrix P_k such that

Φ_{k-1} = P_k^T Φ_k.   (3.1)

The equation (3.1) is the usual refinement relation. Similarly, there exists an m_k × n_{k-1} matrix Q_k such that

Ψ_{k-1} = Q_k^T Φ_k.   (3.2)

Let

G_k := (⟨φ_{k,i}, φ_{k,j}⟩)_{i,j=1}^{m_k},   H_{k-1} := (⟨ψ_{k-1,i}, ψ_{k-1,j}⟩)_{i,j=1}^{n_{k-1}}

be the Gram matrices of size m_k × m_k and n_{k-1} × n_{k-1}, respectively. It is easy to see that

G_{k-1} = P_k^T G_k P_k   and   H_{k-1} = Q_k^T G_k Q_k.

Clearly, the Gram matrices G_k and H_{k-1} are symmetric. The linear independence of the basis functions φ_{k,i} and of ψ_{k-1,i} implies that both G_k and H_{k-1} are positive definite, and thus nonsingular.
The following lemma shows how to decompose and reconstruct functions in V_k in terms of functions in V_{k-1} and W_{k-1}.
Lemma 3.1. Let f_k = Φ_k^T a_k be a function in V_k associated with a coefficient vector a_k ∈ R^{m_k}, and let

f_k = Φ_{k-1}^T a_{k-1} + Ψ_{k-1}^T b_{k-1}   (3.5)

be its orthogonal decomposition, with the first term in V_{k-1} and the second in W_{k-1}. Then

a_{k-1} = G_{k-1}^{-1} P_k^T G_k a_k,   b_{k-1} = H_{k-1}^{-1} Q_k^T G_k a_k.

Moreover,

a_k = P_k a_{k-1} + Q_k b_{k-1}.   (3.6)

Proof: To find a_{k-1}, we take the inner-product of both sides of (3.5) with φ_{k-1,i} for i = 1, ..., m_{k-1}. Using the refinement relation (3.1) and the orthogonality of the φ_{k-1,i} with ψ_{k-1,j}, we get

G_{k-1} a_{k-1} = P_k^T G_k a_k,

which gives the formula for a_{k-1}. If we instead take the inner-products with the ψ_{k-1,i}, we get the formula for b_{k-1}. In view of the linear independence of the functions φ_{k,1}, ..., φ_{k,m_k}, the reconstruction formula (3.6) follows immediately from (3.5) and the refinement relations.
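In matrix terms, one level of this decomposition and reconstruction can be sketched as follows. This is our own illustrative NumPy code; in the implementation of Sect. 7 the corresponding systems are solved with banded solvers rather than dense ones.

```python
import numpy as np

def decompose_1d(a_k, P, Q, G_k, G_km1, H_km1):
    """Lemma 3.1: split the coefficient vector a_k into coarse and wavelet parts."""
    rhs = G_k @ a_k
    a_km1 = np.linalg.solve(G_km1, P.T @ rhs)   # a_{k-1} = G_{k-1}^{-1} P^T G_k a_k
    b_km1 = np.linalg.solve(H_km1, Q.T @ rhs)   # b_{k-1} = H_{k-1}^{-1} Q^T G_k a_k
    return a_km1, b_km1

def reconstruct_1d(a_km1, b_km1, P, Q):
    """Lemma 3.1, eq. (3.6): a_k = P a_{k-1} + Q b_{k-1}."""
    return P @ a_km1 + Q @ b_km1
```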
x4. Tensor-product decomposition and reconstruction
In this section we discuss decomposition and reconstruction of functions in tensor product spaces V_k × Ṽ_ℓ, where the V_k are as in the previous section, and where the Ṽ_ℓ are similar subspaces of an inner-product space X̃. In particular, suppose

Ṽ_ℓ = Ṽ_{ℓ-1} ⊕ W̃_{ℓ-1},   ℓ ≥ 1,

and let Φ̃_ℓ, Ψ̃_{ℓ-1}, P̃_ℓ, Q̃_ℓ, G̃_ℓ, H̃_{ℓ-1} be the analogous bases and matrices associated with the spaces Ṽ_ℓ and W̃_{ℓ-1}.
Theorem 4.1. Let f_{k,ℓ} = Φ_k^T A_{k,ℓ} Φ̃_ℓ be a function in V_k × Ṽ_ℓ associated with a coefficient matrix A_{k,ℓ}. Then f_{k,ℓ} has the orthogonal decomposition

f_{k,ℓ} = Φ_{k-1}^T A_{k-1,ℓ-1} Φ̃_{ℓ-1} + Ψ_{k-1}^T B^{(1)}_{k-1,ℓ-1} Φ̃_{ℓ-1} + Φ_{k-1}^T B^{(2)}_{k-1,ℓ-1} Ψ̃_{ℓ-1} + Ψ_{k-1}^T B^{(3)}_{k-1,ℓ-1} Ψ̃_{ℓ-1},   (4.1)

where the matrices A_{k-1,ℓ-1}, B^{(1)}_{k-1,ℓ-1}, B^{(2)}_{k-1,ℓ-1}, and B^{(3)}_{k-1,ℓ-1} are computed from the system of equations

G_{k-1} A_{k-1,ℓ-1} G̃_{ℓ-1} = P_k^T G_k A_{k,ℓ} G̃_ℓ P̃_ℓ,
H_{k-1} B^{(1)}_{k-1,ℓ-1} G̃_{ℓ-1} = Q_k^T G_k A_{k,ℓ} G̃_ℓ P̃_ℓ,
G_{k-1} B^{(2)}_{k-1,ℓ-1} H̃_{ℓ-1} = P_k^T G_k A_{k,ℓ} G̃_ℓ Q̃_ℓ,
H_{k-1} B^{(3)}_{k-1,ℓ-1} H̃_{ℓ-1} = Q_k^T G_k A_{k,ℓ} G̃_ℓ Q̃_ℓ.   (4.2)

Moreover,

A_{k,ℓ} = P_k A_{k-1,ℓ-1} P̃_ℓ^T + Q_k B^{(1)}_{k-1,ℓ-1} P̃_ℓ^T + P_k B^{(2)}_{k-1,ℓ-1} Q̃_ℓ^T + Q_k B^{(3)}_{k-1,ℓ-1} Q̃_ℓ^T.   (4.3)

Proof: To find the formula for A_{k-1,ℓ-1}, we take the inner-product of both sides of (4.1) with φ_{k-1,i} ψ̃ ... more precisely with φ_{k-1,i}(·) φ̃_{ℓ-1,j}(·) for i = 1, ..., m_{k-1} and j = 1, ..., m̃_{ℓ-1}. The formulae for the B^{(i)}_{k-1,ℓ-1} are obtained in a similar way. The reconstruction formula (4.3) follows directly from (4.1) after inserting the refinement relations and using the linear independence of the components of the vectors Φ_k and Φ̃_ℓ.
Note that computing the matrices A_{k-1,ℓ-1} and B^{(i)}_{k-1,ℓ-1} in a decomposition step can be done quite efficiently since several matrix products occur more than once, and we need only solve linear systems of equations involving the four matrices G_{k-1}, G̃_{ℓ-1}, H_{k-1}, H̃_{ℓ-1}. As we shall see below, in our setting the first two of these are banded matrices, and the second two are periodic versions of banded matrices. All of them can be precomputed and stored in compact form.
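A compact sketch of one decomposition step in this matrix form (our own illustrative code; dense solves are used here for clarity, whereas the implementation described in Sect. 7 uses banded solvers):

```python
import numpy as np

def decompose_step(A, P, Q, Pt_, Qt_, G, Gt_, Gm1, Gtm1, Hm1, Htm1):
    """One tensor-product decomposition step, eq. (4.2).

    A        : m_k x mt_l coefficient matrix A_{k,l}
    P, Q     : refinement and wavelet matrices P_k, Q_k
    Pt_, Qt_ : their analogues for the trigonometric direction
    G, Gt_   : Gram matrices G_k and Gt_l
    Gm1, Gtm1, Hm1, Htm1 : Gram matrices at the coarser level
    Returns A_{k-1,l-1}, B1, B2, B3 of eq. (4.1).
    """
    F = G @ A @ Gt_                      # common inner-product block
    FA = P.T @ F @ Pt_                   # right-hand sides of (4.2)
    F1 = Q.T @ F @ Pt_
    F2 = P.T @ F @ Qt_
    F3 = Q.T @ F @ Qt_
    # Solve the two-sided systems  X * M * Y = RHS  as  M = X^{-1} RHS Y^{-1}.
    Am1 = np.linalg.solve(Gm1, np.linalg.solve(Gtm1, FA.T).T)
    B1 = np.linalg.solve(Hm1, np.linalg.solve(Gtm1, F1.T).T)
    B2 = np.linalg.solve(Gm1, np.linalg.solve(Htm1, F2.T).T)
    B3 = np.linalg.solve(Hm1, np.linalg.solve(Htm1, F3.T).T)
    return Am1, B1, B2, B3
```

The inner np.linalg.solve calls exploit the symmetry of the Gram matrices, so transposing before and after the solve applies the inverse from the right.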
x5. The decomposition matrices for the polynomial splines
In this section we construct the matrices P k , Q k , and G k needed for the decomposition
and reconstruction of quadratic polynomial splines on the closed interval
[\Gamma-=2; -=2]. Consider the nested sequence of knots
where
with
be the associated normalized
quadratic B-splines with supports on the intervals [x k
. For
each k, the span V k of ' k;mk is the m k dimensional linear space of C 1
quadratic splines with knots at the x k
i . These spaces are clearly nested. In addition
to the well-known refinement relations
a simple computation shows that
Equations (5.2), (5.3) provide the entries for the matrix P k . In particular,
the first two and last two columns are determined by (5.3), while for any 3 -
2, the i-th column of P k contains all zeros except for the four rows
which contain the numbers 1=4, 3=4, 3=4, and 1=4. For example,
In general, P k has at most two nonzero entries in each row and and at most four
nonzero entries in each column.
In order to construct the matrices Q k , we now give a basis for the wavelet
space W k\Gamma1 . Here we work with the usual L 2 inner-product on L 2 [\Gamma-=2; -=2]. Let
Theorem 5.1. Given k - 1, let
and for k - 2, let
\Gamma6864\Gamma4967\Gamma4061
In addition, for k - 2, let
form a basis for W k\Gamma1 .
Proof: The wavelets in (5.5) are just the well-known quadratic spline wavelets,
see e.g., [1]. As described in [5], the coefficients of the remaining wavelets can
be computed by forcing orthogonality to V k\Gamma1 . In view of (3.2), the wavelets
are linearly independent if and only if the matrix Q k is
of full rank. This follows since the submatrix of Q k obtained by taking rows
easily be seen to be
diagonally dominant. For an alternate proof of linear independence, see Lemma 11
of [5].
In view of properties of B-splines, it is easy to see that
2:
We now describe the matrices Q k . By Theorem 5.1,
and
For general k - 2, the nonzero elements in the third column of Q k are repeated in
in each successive column they are shifted down
by two rows. The first two and last two columns of Q k contain the same nonzero
elements as Q 2 . Clearly, Q k has at most 4 nonzero entries in each row and at most
8 nonzero entries in each column.
We now describe the Gram matrices G k , which in general are symmetric and
five-banded. To get G k , we start with the matrix with 66h k =120 on the diagonal,
26h k =120 on the first subdiagonal, and h k =120 on the second diagonal. Then replace
the entries in the 3 \Theta 3 submatrices in the upper-left and lower-right corners by
@
For example,
and
x6. The decomposition matrices for the trigonometric splines
In this section we present the matrices e
needed for the decomposition
and reconstruction of periodic trigonometric splines of order 3. Suppose ' - 1, and
that
~
is a nested sequence of knots, where ~ h
where
0; otherwise.
is the usual trigonometric B-spline of order three associated with uniformly spaced
knots (0; h; 2h; 3h). Set
~
ae M ';i (OE);
For later use we define ~
';em '
' ';i for
The span e
';em '
is the space of periodic trigonometric splines
of order three. Clearly, these spaces are nested, and in fact we have the following
refinement relation:
Theorem 6.1. For all
~
where
Proof: By nestedness and the nature of the support of T h ,
for some numbers u; v; w; z. By symmetry, it is enough to compute u and v. To
find u, we note that on [0; h],
Then using (6.2) we can solve for u. To find v we note that
and then solve for v using (6.2).
Theorem 6.1 can now be used to find the entries in the matrix e
needed
in Sect. 2. In particular, each column has exactly the four nonzero elements
starting in the first row in column one, and shifted down
by two rows each time we move one column to the right (where in the last column
the last two elements are moved to the top of the column). For example,
e
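As a constructive illustration of that structure (our own sketch; the four nonzero values u, v, w, z from Theorem 6.1 are passed in as arguments rather than recomputed here, and the doubling of the dimension from level to level is assumed):

```python
import numpy as np

def trig_refinement_matrix(m_coarse, entries):
    """Assemble the periodic refinement matrix for the trigonometric splines.

    m_coarse : number of coarse-level periodic trigonometric B-splines
    entries  : the four nonzero column entries (u, v, w, z) from Theorem 6.1
    Each column carries the four entries, starting in the first row of column one
    and shifted down by two rows per column, wrapping periodically in the last column.
    """
    m_fine = 2 * m_coarse
    P = np.zeros((m_fine, m_coarse))
    for j in range(m_coarse):
        for offset, val in enumerate(entries):
            P[(2 * j + offset) % m_fine, j] = val
    return P
```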
Next we describe a basis for the wavelet space f
W '\Gamma1 which has dimension
~
1. In this case we work with the usual L 2 inner-product on
Theorem 6.2. Given ' - 1, let
~
~
where
and
~
with
Then ~
is a basis for the space f
Proof: To construct wavelets in f
we apply Theorem 5.1 of [6] which gives
explicit formulae for the ~
q i in terms of inner-products of ~
' ';i with ~
' '\Gamma1;j . To show
that ~
are linearly independent, it suffices to show that e
is of
full rank. To see this, we construct a ~ n '\Gamma1 \Theta ~
by moving the last
column of e
in front of the first column, and then selecting rows 2;
We
now show that this matrix is strictly diagonally dominant, and thus of full rank.
First, we note that in each row of B ' the element on the diagonal is ~
while the sum of the absolute values of the off diagonal elements is j~q 1 ( ~ h ' )j
simple computation shows that each of the functions D(h)
and r i (h) := ~
has a Taylor expansion which is an alternating series. In
particular, using the first two terms of each series, we get
Now it is easy to see that
\Theta j~q 3
also has an alternating series expansion, and we get
for the same range of h. This shows that B ' is strictly diagonally dominant, and
the proof is complete.
The formulae for the ~
q i in Theorem 6.2 are not appropriate for small values of
~
. In this case we can use the following Taylor expansions:
~
~
~
~
Rather than computing them each time we need them, we can precompute
and store the necessary values of ~
see
Table
1 in Sect. 7. We can now describe the matrix e
needed in Sect. 2 for
decomposing and reconstructing with trigonometric splines. For
e
~
~
~
~
~
~
where all ~
are evaluated at ~ h 1 . For ' - 2, each column of e
contains the 8 entries
~
evaluated at ~ h ' . In particular, these entries start in row
1 in column 1, and are shifted down by two each time we move one column to the
right (where in the last three columns, entries falling below the last row are moved
to the top). Clearly, e
Q ' has exactly four nonzero entries in each row. For example,
e
~
~
~
~
~
~
~
~
where all ~
are evaluated at ~ h 2 .
Finally, we describe the Gram matrices.
Theorem 6.3. For ' - 1, the 3
associated with the
~
' ';i is given by
e
I 00 I 01 I
I 01 I 00 I 01 I
I 02 I 01 I 00 I 01 I
I 01 I 00 I 01 I 02
I I 01 I 00 I 01
I 01 I
where
I
Z ~ x '
~
~
I 01 :=
Z ~ x '
~
~
I 00 :=
Z ~ x '
~
~
with
Moreover,
e
I
I
Proof: Using (6.2), the necessary integrals can be computed directly.
The formulae in Theorem 6.3 are clearly not appropriate for small values of
~
, in which case the following formulae can be used:
I
I
~
I
We can precompute and store the values of I 00 , I 01 , and I 02 for various levels
see
Table
2 in Sect. 7 for the values up to
x7. Implementation
7.1. Decomposition
The decomposition procedure begins with a tensor spline of the form (1.1) based on
polynomial splines ' k;i (') at a given level k - 1 and periodic trigonometric splines
~
';j (OE) at a given level ' - 1 with coefficient matrix C := A k;' of size m k \Theta e
To
carry out one step of the decomposition, we solve the systems (4.2) for A
, and set
To continue the decomposition, we now carry out the same procedure on the matrix
A k\Gamma1;'\Gamma1 . This process can be repeated at most min(k; times, where at each
step the new spline coefficients and wavelet coefficients are stored in C. Thus,
the entire decomposition process requires no additional storage beyond the original
coefficient matrix.
Because of the banded nature of the matrices appearing in (4.2), with careful
programming and the use of appropriate band matrix solvers, the j-th step of the
decomposition can be carried out with O(m k\Gammaj+1 e
To help keep
the number of operations as small as possible, we precompute and store the entries
of the matrices G
. appearing in (4.2). Table 1 gives
the values of ~
needed for the e
Table
gives the values of I
h ' and I needed for the e
G ' . The matrices
H k are symmetric positive definite and seven-banded, while the e
H ' are symmetric
positive definite periodic seven-banded matrices.
To check the robustness of the decomposition process, we computed the exact
condition numbers of the matrices G k , H k , e
H ' for up to eight levels. None
of the condition numbers exceeded 10, and we can conclude that the algorithm is
highly robust.
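Assuming a one-step routine like the decompose_step sketch given after Theorem 4.1, the multilevel procedure just described can be organized as follows (our own illustrative code; matrices_at(j) is a hypothetical helper returning the precomputed refinement, wavelet, and Gram matrices for step j):

```python
import numpy as np

def multilevel_decompose(C, levels, matrices_at):
    """Carry out `levels` decomposition steps, overwriting C block by block.

    C holds A_{k,l} on entry; after step j its leading block holds the coarse
    spline coefficients and the remaining three blocks hold the wavelet
    coefficients produced at that step, so no extra storage is needed.
    """
    mk, ml = C.shape
    for j in range(levels):
        P, Q, Pt_, Qt_, G, Gt_, Gm1, Gtm1, Hm1, Htm1 = matrices_at(j)
        A = C[:mk, :ml]
        Am1, B1, B2, B3 = decompose_step(A, P, Q, Pt_, Qt_, G, Gt_, Gm1, Gtm1, Hm1, Htm1)
        mk1, ml1 = Am1.shape
        C[:mk1, :ml1] = Am1          # coarse spline coefficients
        C[mk1:mk, :ml1] = B1         # wavelet blocks of this step
        C[:mk1, ml1:ml] = B2
        C[mk1:mk, ml1:ml] = B3
        mk, ml = mk1, ml1
    return C
```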
6 -28.996175484404513950 146.95303891951439472 -302.86111072242944246
9 -28.999940238853933503 146.99926613409589417 -302.99782947935722381
Tab. 1. Trigonometric spline wavelet coefficients for various '.
7.2. Thresholding
Typically, in the j-th step of the decomposition, many of the entries in the matrices
k\Gammaj;'\Gammaj of wavelet coefficients will be quite small. Thus, to achieve compression,
these can be removed by a thresholding process. In view of (5.6), tangent plane
continuity will be maintained at the poles if we retain all coefficients in the first two
and last two rows of these matrices. Given ffl, at the j-th level we remove all other
9 0.5500021215280431720 0.21666764608909212581 0.008333384794548569195
Tab. 2. Inner products of Trigonometric B-splines for various '.
wavelet coefficients in B (1)
k\Gammaj;'\Gammaj whose absolute values are smaller than
ffl=2 j . We do the same for B (3)
k\Gammaj;'\Gammaj using a threshold value of ffl=(300
smaller threshold is applied because of the scaling of the wavelets.
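The thresholding rule just described might be coded as follows (illustrative only; the factor 300 mirrors the scaling constant mentioned above, treating B^(2) like B^(1) is our assumption, and the first two and last two rows of each block are kept to preserve the pole conditions):

```python
import numpy as np

def threshold_block(B, eps_level, keep_pole_rows=True):
    """Zero out wavelet coefficients smaller than eps_level in absolute value."""
    mask = np.abs(B) < eps_level
    if keep_pole_rows:
        mask[:2, :] = False      # keep first two rows (south pole conditions)
        mask[-2:, :] = False     # keep last two rows (north pole conditions)
    B = B.copy()
    B[mask] = 0.0
    return B

def threshold_step(B1, B2, B3, eps, j):
    """Apply the level-dependent thresholds of Sect. 7.2 at decomposition step j."""
    t = eps / 2.0 ** j
    return (threshold_block(B1, t),
            threshold_block(B2, t),              # assumed to use the same threshold as B1
            threshold_block(B3, t / 300.0))      # smaller threshold for the scaled wavelets
```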
7.3. Reconstruction
In view of (4.3), to carry out one reconstruction step simply involves matrix multiplication
using our stored matrices. Because of the band nature of these matrices,
the computation of A k\Gammaj;'\Gammaj requires O(m k\Gammaj e
operations. At each step of
the reconstruction we can store these coefficients in the same matrix C where the
decomposition was carried out.
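For completeness, the corresponding one-step reconstruction (our own sketch, applying eq. (4.3) with the same matrices used in the decomposition sketch above):

```python
def reconstruct_step(Am1, B1, B2, B3, P, Q, Pt_, Qt_):
    """One reconstruction step, eq. (4.3): recover A_{k,l} from the coarse pieces."""
    return (P @ Am1 @ Pt_.T + Q @ B1 @ Pt_.T
            + P @ B2 @ Qt_.T + Q @ B3 @ Qt_.T)
```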
x8. Examples
To test the general performance of the algorithms, we begin with the following
simple example.
Example 1. Let s be the tensor spline with coefficients
Discussion: Since the normalized quadratic B-splines form a partition of unity,
it follows from (2.1) that with these coefficients, s j 1 for all ('; OE) 2 H, i.e., the
corresponding surface is exactly the unit sphere. In this case the coefficient matrix
is of size 770 \Theta 1536, and involves 1,182,720 coefficients. To test the algorithms, we
performed decomposition with various values of ffl, including zero. In all cases, after
reconstruction we got coefficients which were correct to machine accuracy (working
in double precision). The run time on a typical workstation is just a few seconds
for a full 7 levels of decomposition and reconstruction.
The illustrate the ability of our multiresolution approach to achieve high levels
of compression while retaining important features of a surface, we now create a
tensor spline fit to a smooth surface with a number of bumps.
Example 2. Let B be the surface shown in the upper left-hand corner of Figure 1.
Discussion: The surface B was created by fitting a spline f 8;8 to data created
by choosing 10 random sized subrectangles at random positions in H, and adding
tensor product quadratic B-splines of maximum height 3/4 with support on each
such rectangle to the constant values corresponding to the unit sphere. For
the coefficient matrix is of size 770 \Theta 768 and involves 591,360 coefficients.
To test the algorithms, we performed decomposition with the thresholding values
9. Table 3 shows the results of a typical run with
step nco
6 9746
Tab. 3. Reduction in coefficients in Example 2 with
Almost 3/4 of the coefficients are removed in the first step of decomposition, and after 7 steps we end up with only 9745 coefficients (which amounts to a 60:1 compression ratio). Table 4 shows the differences between the original coefficients and the coefficients obtained after reconstruction. The table lists both the maximum norm max_{ij} |c_ij - c̃_ij| and the average l1 norm (1/(m m̃)) Σ_{ij} |c_ij - c̃_ij|, where c_ij are the original coefficients and c̃_ij are the reconstructed ones. Due to the scaling of the wavelets these numbers are somewhat larger than the corresponding ε.
Tab. 4. Coefficient errors in Example 2 for selected ε.
The surfaces corresponding to several values of ε are shown in Figure 1. At the smallest value we get a near perfect looking reconstruction, while at intermediate values the major features are reproduced with only small wiggles in the surface. At the largest value we have larger oscillations in the surface. This example shows that there is a critical value of ε beyond which the surface exhibits increasing oscillations with very little additional compression.
Fig. 1. Compressed surfaces for Example 2.
x9. Remarks
Remark 9.1. The approach discussed in this paper was first presented at the
Taormina Wavelet Conference in October of 1993, and as far as we know was the
first spherical multiresolution method to be proposed. The corresponding proceedings
paper [6] focuses on the general theory of L-spline wavelets, and due to space
limitations, a full description of the method could not be included. In the meantime we have become aware of the recent work [2,4,8,9,13]. In [2] the authors use tensor
splines based on exponential splines in the φ variable. The method in [4] uses
discretizations of certain continuous wavelet transforms based on singular integral
operators, while the method in [8] uses tensor functions based on polynomials and
trigonometric polynomials. Finally, the method in [9] utilizes C 0 piecewise linear
functions defined on spherical triangulations. Except for the last method, we are
not aware of implementations of the other methods.
Remark 9.2. In our original paper [6], an alternative way of making sure that
tangent plane continuity is maintained at the poles was proposed. The idea is to
decompose the original tensor product function s into two parts s_H and s_P, where s_P carries the coefficients in the first two and last two rows (those controlling the behaviour at the poles), so that decomposition and reconstruction can be performed on s_H. After adding s_P, the reconstructed spline possesses tangent plane
continuity at the poles. Our implementation of this method exhibits essentially the
same performance in terms of compression and accuracy as the method described
here, but for higher compression ratios produces surfaces which are not quite as
visually pleasing near the poles.
Remark 9.3. The method described here can be extended to the case of nonuniform knots in both the θ and φ variables. In this case the computational effort
increases considerably since the various matrices can no longer be precomputed
and stored.
Remark 9.4. In Sect. 4 we have presented the details of the tensor-product decomposition
and reconstruction algorithms assuming that the initial function f_{k,ℓ} lies in the space V_k ⊕ Ṽ_ℓ, where k and ℓ are not necessarily the same. Since these spaces
can always be reindexed, this is not strictly necessary in the abstract setting, but
was convenient for our application where there is a natural indexing for our spaces.
Remark 9.5. In computing the coefficients needed in Sections 5 and 6, we found
it convenient to use Mathematica.
Remark 9.6. There are several methods for computing approximations of the
form (1.1). An explicit quasi-interpolation method using data on a regular grid
(along with derivatives at the north and south poles) can be found in [11]. The
same paper also describes a two-stage method which can be used to interpolate
scattered data, and a least squares method which can be used to fit noisy data. A
general theory of quasi-interpolation operators based on trigonometric splines can
be found in [7].
Remark 9.7. A closed, bounded, connected set U in R^3 which is topologically
equivalent to a sphere is called a sphere-like surface. This means that there exists a
one-to-one mapping of U onto the unit sphere S. Moreover, there exists a point O
inside the volume surrounded by U , such that every point on the surface U can be
seen from O. Such surfaces are also called starlike. For applications, we can focus
on the class of sphere-like surfaces of the form U = {ρ(v) v : v ∈ S}, where ρ is a smooth positive function defined on S. Then each function f defined on U corresponds to a function g defined on S via composition with the mapping v → ρ(v) v.
Remark 9.8. As indicated in [9], compression methods on the sphere can be
adapted to the problem of creating multiresolution representations of bidirectional
reflection distribution functions (BRDF's), although the basic domain for such
functions is actually a hemisphere. We will explore the use of our method for this
purpose in a later paper.
Remark 9.9. It is well-known that the polynomial B-splines are stable. In particular, for quadratic B-splines (φ_i) with general knots, (1/3)‖c‖_∞ ≤ ‖Σ_i c_i φ_i‖_∞ ≤ ‖c‖_∞ for all coefficient vectors c. The same bounds hold for trigonometric splines, since the linear functionals introduced in [11] are dual to the corresponding B-splines. Analogous stability results hold for general p-norms.
--R
An Introduction to Wavelets
Multiresolution analysis and wavelets on S 2 and S 3
Algorithms for smoothing data on the sphere with tensor product splines
Spherical wavelet transform and its discretiza- tion
in Wavelets: Theory
efficiently representing functions on the sphere
Basic Theory
Fitting scattered data on spherelike surfaces using tensor products of trigonometric and polynomial splines
Wavelets for Computer Graphics
Biorthogonale Wavelets auf der Sph-are
--TR
--CTR
Thomas W. Sederberg , David L. Cardon , G. Thomas Finnigan , Nicholas S. North , Jianmin Zheng , Tom Lyche, T-spline simplification and local refinement, ACM Transactions on Graphics (TOG), v.23 n.3, August 2004
El Bachir Ameur , Driss Sbibih, Quadratic spline wavelets with arbitrary simple knots on the sphere, Journal of Computational and Applied Mathematics, v.162 n.1, p.273-286, 1 January 2004
John E. Lavery, Shape-preserving interpolation of irregular data by bivariate curvature-based cubic L | spherical data compression;tensor splines;multiresolution |
587299 | Explicit Algorithms for a New Time Dependent Model Based on Level Set Motion for Nonlinear Deblurring and Noise Removal. | In this paper we formulate a time dependent model to approximate the solution to the nonlinear total variation optimization problem for deblurring and noise removal introduced by Rudin and Osher [ Total variation based image restoration with free local constraints, in Proceedings IEEE Internat. Conf. Imag. Proc., IEEE Press, Piscataway, NJ, (1994), pp. 31--35] and Rudin, Osher, and Fatemi [ Phys. D, 60 (1992), pp. 259--268], respectively. Our model is based on level set motion whose steady state is quickly reached by means of an explicit procedure based on Roe's scheme [ J. Comput. Phys., 43 (1981), pp. 357--372], used in fluid dynamics. We show numerical evidence of the speed of resolution and stability of this simple explicit procedure in some representative 1D and 2D numerical examples. | Introduction
. The classical algorithms for image deblurring and/or denoising
have been mainly based on least squares, Fourier series and other L 2 -norm approxi-
mations, and, consequently, their outputs may be contaminated by Gibbs' phenomena
and do not approximate well images containing edges. Their computational advantage
comes from the fact that they are linear, thus fast solvers are widely available. How-
ever, the effect of the restoration is not local in spatial scale. Other bases of orthogonal
functions have been introduced in order to get rid of those problems, e.g., compactly
supported wavelets. However, Gibbs' phenomenon, (ringing), is still present for these
norms.
The Total Variation (TV) deblurring and denoising models are based on a variational
problem with constraints using the total variation norm as a nonlinear non-differentiable
functional. The formulation of these models was first given by Rudin,
Osher and Fatemi in ([19]) for the denoising model and Rudin and Osher in ([18]) for
the denoising and deblurring case. The main advantage is that their solutions preserve
edges very well, but there are computational difficulties. Indeed, in spite of the fact
that the variational problem is convex, the Euler-Lagrange equations are nonlinear
and ill-conditioned. Linear semi-implicit fixed-point procedures devised by Vogel and
Oman, (see [26]), and interior-point primal-dual implicit quadratic methods by Chan,
Golub and Mulet, (see [6]), were introduced to solve the models. Those methods give
good results when treating pure denoising problems, but the methods become highly
ill-conditioned for the deblurring and denoising case where the computational cost is
very high and parameter dependent. Furthermore, those methods also suffer from the
undesirable staircase effect, namely the transformation of smooth regions (ramps) into
piecewise constant regions (stairs).
In this paper we present a very simple time dependent model constructed by evolving
the Euler-Lagrange equation of the Rudin-Osher optimization problem, multiplied
by the magnitude of the gradient of the solution. The two main analytic features of
this formulation are the following: 1) the level contours of the image move quickly
to the steady solution and 2) the presence of the gradient numerically regularizes
the mean curvature term in a way that preserves and enhances edges and kills noise
through the nonlinear diffusion acting on small scales. We use the entropy-violating
Roe scheme, ([16]) for the convective term and central differencing for the regularized
mean curvature diffusion term. This makes a very simple, stable, explicit procedure,
computationally competitive compared with other semi-implicit or implicit procedures.
We show numerical evidence of the power of resolution and stability of this explicit
procedure in some representative 1D and 2D numerical examples, consisting of noisy
and blurred signals and images, (we use Gaussian white noise and Gaussian blur).
We have observed in our experiments that our algorithm shows a substantially reduced
staircase effect.
2. Deblurring and Denoising. A recording device or a camera would record
a signal or image so that 1) the recorded intensity of a small region is related to the
true intensities of a neighborhood of the pixel, through a degradation process usually
called blurring and 2) the recorded intensities are contaminated by random noise.
To fix our ideas we restrict the discussion to R 2 . An image can be interpreted as
either a real function defined on \Omega\Gamma a bounded and open domain of R 2 , (for simplicity
we will
assume\Omega to be the unit square henceforth) or as a suitable discretization of
this continuous image. Our interest is to restore an image which is contaminated with
noise and blur in such a way that the process should recover the edges of the image.
Let us denote by u 0 the observed image and u the real image. A model of blurring
comes from the degradation of u through some kind of averaging. Indeed, u may be
blurred through the application of a kernel k(x; s, r) by means of
   v(x) = ∫_Ω u(s, r) k(x; s, r) ds dr,    (2.1)
and we denote this operation by v = k u. The model of degradation we assume is
   u_0 = k u + n,    (2.2)
where n is Gaussian white noise, i.e., the values n i of n at the pixels i are independent
random variables, each with a Gaussian distribution of zero mean and variance oe 2 .
If the kernel k is translation invariant, i.e., there is a function j(x, y) (also called a kernel) such that k(x, y; s, r) = j(x − s, y − r), then the blurring is defined as a 'superposition' of j's:
   (j ∗ u)(x, y) = ∫_Ω j(x − s, y − r) u(s, r) ds dr,    (2.3)
and this isotropic blurring is called convolution. Otherwise, if the kernel k is not
translation-invariant we call this blurring anisotropic. For the sake of simplicity, we
suppose that the blurring is coming from a convolution, through a kernel function j
such that j ∗ u is a selfadjoint compact integral operator. Typically, j has the following properties: j ≥ 0, j(x, y) → 0 as x² + y² goes to ∞, and ∫ j dx dy = 1. For any α > 0 the so-called heat kernel, defined as
   j(x, y) = (1/(4πα)) exp(−(x² + y²)/(4α)),    (2.4)
is an important example that we will use in our numerical experiments.
The main advantage of the convolution is that if we take the Fourier transform of (2.3) we get
   (j ∗ u)^(k, l) = ĵ(k, l) û(k, l);    (2.5)
then, to solve the model (2.2) with k u = j ∗ u, we take the Fourier transform and arrive at
   û_0(k, l) = ĵ(k, l) û(k, l) + n̂(k, l).    (2.6)
To recover u(x, y) we need to deconvolve, i.e., we have to divide equation (2.6) by ĵ(k, l) and apply the inverse Fourier transform. This procedure is generally very ill-posed. Indeed, j is usually smooth and j(x, y) → 0 rapidly as x² + y² goes to ∞, so ĵ decays rapidly at high frequencies and large frequencies in u_0 get amplified considerably.
The function u 0 is generally piecewise smooth with jumps in the function values and
derivatives; thus the Fourier method approximation gives global error estimates of
order O(h), (see ([11])) and suffers from Gibbs' phenomenon. Discrete direct methods
dealing with the linear integral equation (2.6) have been designed by different authors,
(see [13] and references therein).
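As an illustration of this amplification effect, the following Python/NumPy sketch performs the naive Fourier-domain division in one dimension; the periodic setting and the Gaussian kernel width sigma are illustrative assumptions, not part of the methods proposed in this paper.

    import numpy as np

    def naive_fourier_deconvolution(u0, sigma):
        # u0: blurred (and possibly noisy) periodic 1-D signal.
        n = u0.size
        xi = np.fft.fftfreq(n)                              # discrete frequencies
        j_hat = np.exp(-2.0 * (np.pi * sigma * xi) ** 2)    # transform of a Gaussian kernel
        u_hat = np.fft.fft(u0) / j_hat                      # division amplifies high frequencies
        return np.real(np.fft.ifft(u_hat))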
One way to make life easier is to consider a variational formulation of the model
that regularizes the problem. Our objective is to estimate u from statistics of the noise,
blur and some a priori knowledge of the image (smoothness, existence of edges). This
knowledge is incorporated into the formulation by using a functional R that measures
the quality of the image u, in the sense that smaller values of R(u) correspond to better
images. The process, in other words, consists in the choice of the best quality image
among those matching the constraints imposed by the statistics of the noise together
with the blur induced by j.
The usual approach consists in solving the following constrained optimization problem:
   min_u R(u)   subject to   ‖j ∗ u − u_0‖²_{L²} = σ²,    (2.7)
since E(n) = 0 and E(n²) = σ² (E(X) denotes the expectation of the random variable X) imply that ‖j ∗ u − u_0‖²_{L²} = ∫_Ω (j ∗ u − u_0)² dx dy ≈ σ².
Examples of regularization functionals that can be found in the literature are R(u) = ∫_Ω |∇u|² dx dy and R(u) = ∫_Ω (Δu)² dx dy, where ∇ is the gradient and Δ is the Laplacian; see Refs. [22, 8]. The main disadvantage of using these functionals is that they do not allow discontinuities in the solution, and therefore the edges cannot be satisfactorily recovered.
In [19], the Total Variation norm or TV-norm is proposed as a regularization
functional for the image restoration problem:
   TV(u) = ∫_Ω |∇u| dx dy = ∫_Ω sqrt(u_x² + u_y²) dy dx.    (2.8)
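For illustration, a discrete version of the TV functional (2.8) on a uniform pixel grid, using one-sided differences; this is one common discretization among several and is not claimed to be the one used later in the paper.

    import numpy as np

    def tv_norm(u, hx=1.0, hy=1.0):
        # u: 2-D array of grid values.  Forward differences, replicated at the boundary.
        ux = np.diff(u, axis=0, append=u[-1:, :]) / hx
        uy = np.diff(u, axis=1, append=u[:, -1:]) / hy
        return float(np.sum(np.sqrt(ux ** 2 + uy ** 2)) * hx * hy)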
The norm does not penalize discontinuities in u, and thus allows us to recover
the edges of the original image. There are other functionals with similar properties
introduced in the literature for different purposes, (see for instance, [7, 5, 25, 2]). The
restoration problem can thus be written as
   min_u ∫_Ω |∇u| dx dy   subject to   (1/2) ∫_Ω (j ∗ u − u_0)² dx dy = (1/2) σ².    (2.9)
Its Lagrangian is
   L(u, λ) = ∫_Ω |∇u| dx dy + (λ/2) ∫_Ω (j ∗ u − u_0)² dx dy,    (2.10)
and its Euler-Lagrange equations, with homogeneous Neumann boundary conditions
for u, are:
   ∇·(∇u/|∇u|) − λ j ∗ (j ∗ u − u_0) = 0   in Ω.    (2.11)
There are known techniques, (see [3]), for solving the constrained optimization
problem (2.9) by exploiting solvers for the corresponding unconstrained problem,
whose Euler-Lagrange equations are (2.11) for - fixed. Therefore, for the sake of
clarity, we will assume the Lagrange multiplier - to be known throughout the exposi-
tion. For
, we can then write the equivalent unconstrained problem as
min
Z
\Omega
(j
and its Euler-Lagrange equation in the more usual form:
We call (2.14) the nonlinear deconvolution model. The linear deconvolution model
would be
that comes from the Euler-Lagrange equation of the corresponding unconstrained problem
with the norm
Since the equation (2.14) is not well defined at points where due to the
presence of the term 1=jruj, it is common to slightly perturb the Total Variation
functional to become:
Z
\Omega
where fi is a small positive parameter, or,
Z
\Omega
with the notation
3. The time dependent model. Vogel and Oman and Chan, Golub and Mulet
devised direct methods to approximate the solution to the Euler-Lagrange equation
with an a priori estimate of the Lagrange multiplier and homogeneous Neumann
boundary conditions. Those methods work well for denoising problems but the
removal of blur becomes very ill-conditioned with user-dependent choice of parame-
ters. However, stable explicit schemes are preferable when the steady state is quickly
reached because the choice of parameters is almost user-independent. Moreover, the
programming for our algorithm is quite simple compared to the implicit inversions
needed in the above mentioned methods.
Usually, time dependent approximations to the ill-conditioned Euler-Lagrange
equation are inefficient because the steady state is reached with a very small time
step, when an explicit scheme is used. This is the case with the following formulation
due to Rudin, Osher and Fatemi (see [19]) and Rudin and Osher (see [18]):
   u_t = ∇·(∇u/|∇u|) − λ j ∗ (j ∗ u − u_0),    (3.1)
with u(x, y; 0) given as initial data (we have used as initial guess the original blurry and noisy image u_0) and homogeneous Neumann boundary conditions, i.e., ∂u/∂n = 0 on the boundary of the domain. As t increases, we approach a restored version of our
image, and the effect of the evolution should be edge detection and enhancement and
smoothing at small scales to remove the noise. This solution procedure is a parabolic
equation with time as an evolution parameter and resembles the gradient-projection
method of Rosen (see [17]). In this formulation we assume an a priori estimate of the
Lagrange multiplier, in contrast with the dynamic change of - supposed in the Rosen
method, (see section 6 for details). The equation (3.1) moves each level curve of u
normal to itself with normal velocity equal to the curvature of the level surface divided
by the magnitude of the gradient of u, (see ([23]), ([15]) and ([20])). The constraints
are included in the -term and they are needed to prevent distortion and to obtain a
nontrivial steady state.
However, this evolution procedure is slow to reach steady state and is also stiff
since the parabolic term is quite singular for small gradients. In fact, an ad hoc rule
of thumb would indicate that the timestep Δt and the space stepsize Δx need to be related by
   Δt ≤ c |∇u| Δx²
for fixed c > 0, for stability. This CFL restriction is what we shall relax. These issues
were seen in numerous experiments. In order to avoid these difficulties, we propose
a new time dependent model that accelerates the movement of level curves of u and
regularizes the parabolic term in a nonlinear way. In order to regularize the parabolic
term we multiply the whole Euler-Lagrange equation (2.14) by the magnitude of the
gradient and our time evolution model reads as follows:
   u_t = |∇u| ( ∇·(∇u/|∇u|) − λ j ∗ (j ∗ u − u_0) ).    (3.3)
We use as initial guess the original blurry and noisy image u 0 and homogeneous
Neumann boundary conditions as above, with an a priori estimate of the Lagrange
multiplier. From the analytical point of view this solution procedure approaches the
same steady state as the solution of whenever u has nonzero gradient. The effect
of this reformulation, (i.e. preconditioning) is positive in various aspects:
1. The effect of the regularizing term means that the movement of level curves
of u is pure mean curvature motion, (see [15]).
2. The total movement of level curves goes in the direction of the zeros of j ∗ u − u_0,
regularized by the anisotropic diffusion introduced by the curvature term.
3. The problem for the denoising case is well-posed in the sense that there exists
a maximum principle that determines the solution, (see ([15])).
4. There are simple explicit schemes, such as Roe's scheme, that behave stably
with a reasonable CFL restriction for this evolution equation. Let us remark
that explicit schemes could also be applied for the 'anisotropic blurring' case.
5. This procedure is more morphological, (see [1]), in the pure denoising case,
i.e., it operates mainly on the level sets of u and u 0 . This is easily seen if
we replace u by h(u) and u_0 by h(u_0) with h′ > 0: equation (3.3) is invariant, except that u − u_0 is replaced by (h(u) − h(u_0))/h′(u).
The anisotropic diffusion introduced in this model is a nonlinear way to discriminate
scales of computation. This never occurs with a linear model, (e.g. the linear
deconvolution model), because in this case we would have the linear heat equation
with constant diffusion. Thus, our model (3.3) can be seen as a convection-diffusion
equation with morphological convection and anisotropic diffusion.
4. Explicit numerical schemes for the 1D model. The 2D model described
before is more regular than the corresponding 1D model, because the 1D original
optimization problem is barely convex. For the sake of understanding the numerical
behavior of our schemes, we also discuss the 1D model. The Euler-Lagrange equation
in the 1D case reads as follows:
   ( u_x / |u_x| )_x − λ j ∗ (j ∗ u − u_0) = 0.    (4.1)
This equation can be written either as
   ( u_x / sqrt(u_x² + β²) )_x − λ j ∗ (j ∗ u − u_0) = 0,    (4.2)
using the small regularizing parameter β > 0 introduced at the end of the previous section, or
   δ(u_x) u_xx − λ j ∗ (j ∗ u − u_0) = 0,    (4.3)
using the δ-function.
The Rudin-Osher-Fatemi model, (ROF model), in terms of the δ-function will read as follows
   u_t = δ(u_x) u_xx − λ j ∗ (j ∗ u − u_0).    (4.4)
Our model in 1D will be
   u_t = |u_x| ( ( u_x / sqrt(u_x² + β²) )_x − λ j ∗ (j ∗ u − u_0) ),    (4.5)
where β > 0 is the regularizing parameter. The parameter β > 0 plays a more
relevant role in this case than in the 2D model. We can also state our model in terms
of the δ function as well, where a convolution of the δ function must be used in practice. The intensity of this kind of convolution decides which scale acts on the diffusion term. In this paper, we always approximate δ by the kernel induced by the β-regularization above.
A radical way to make the coefficient of u_xx nonsingular is to solve the evolution model (4.8). This model works in such a manner that away from extrema we have a large multiplier of −j ∗ (j ∗ u − u_0), and at extrema it is just the heat equation.
These evolution models are initialized with the blurry and noisy signal u 0 and
homogeneous Neumann boundary conditions, and with a prescribed Lagrange multi-
plier. We estimated λ > 0 near the maximum value such that the explicit scheme is
stable under appropriate CFL restrictions, (see below).
In order to convince the reader about the speed and programming simplicity of
our model, we shall give the details of the first order scheme for the 1D pure denoising
model, i.e.,
   u_t = |u_x| ( ( u_x / sqrt(u_x² + β²) )_x − λ (u − u_0) ).    (4.9)
Let u^n_j be the approximation to the value u(x_j, t_n), where x_j = j Δx and t_n = n Δt. Then the scheme for the problem (4.9) is an explicit Euler step in which the diffusion term is discretized with central differences and the fidelity term uses the upwind gradient ug_j, i.e., the one-sided difference (u^n_{j+1} − u^n_j)/Δx or (u^n_j − u^n_{j−1})/Δx chosen according to the sign of the local propagation speed (see item 3 below).
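Purely as an illustration, and not as a literal transcription of the scheme above, the following Python/NumPy sketch performs one explicit Euler step for the 1D denoising model (4.9); the regularized diffusion term is expanded analytically and discretized with central differences, and |u_x| in the fidelity term uses an Osher-Sethian Godunov-type upwind choice in place of the Roe-type test described below.

    import numpy as np

    def tv_denoise_step(u, u0, lam, dt, dx, beta=1e-2):
        # One explicit Euler step for
        #   u_t = |u_x| (u_x / sqrt(u_x^2 + beta^2))_x - lam |u_x| (u - u0),
        # with homogeneous Neumann boundary conditions (end values replicated).
        ue = np.pad(u, 1, mode="edge")
        bwd = (ue[1:-1] - ue[:-2]) / dx                  # backward differences
        fwd = (ue[2:] - ue[1:-1]) / dx                   # forward differences
        cen = 0.5 * (bwd + fwd)                          # central differences
        uxx = (ue[2:] - 2.0 * ue[1:-1] + ue[:-2]) / dx ** 2
        diffusion = np.abs(cen) * beta ** 2 * uxx / (cen ** 2 + beta ** 2) ** 1.5
        # Fidelity term -lam |u_x| (u - u0): |u_x| taken from the upwind side
        # according to the sign of the local speed c.
        c = -lam * (u - u0)
        grad_plus = np.sqrt(np.maximum(bwd, 0.0) ** 2 + np.minimum(fwd, 0.0) ** 2)
        grad_minus = np.sqrt(np.minimum(bwd, 0.0) ** 2 + np.maximum(fwd, 0.0) ** 2)
        convection = np.maximum(c, 0.0) * grad_plus + np.minimum(c, 0.0) * grad_minus
        return u + dt * (diffusion + convection)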
Our general explicit scheme has the following features:
1. We use central differencing for u xx ,
2. The convolution operator j is computed by evolving the heat equation u_t = u_xx with the explicit Euler method in time and central differencing in space, up to a time corresponding to the width σ of the 1D heat kernel.
3. We use upwind Roe differencing, (see [16], [10]), checking the direction of
propagation by computing the sign of the derivative of the coefficient of j ∗ (j ∗ u − u_0) with respect to u_x, times the sign of this term. Indeed, for our evolution model (4.5) it is enough to check the sign of u_x times j ∗ (j ∗ u − u_0).
For the model (4.8) we get the same direction of propagation as before. We
note that there is no notion of "entropy condition satisfying" discontinuities
in image processing; thus we omit the usual "entropy-fix" applied to the Roe
solver in this work.
4. The CFL condition depends on λ and β.
Indeed, the parabolic term in our model (4.5) gives a CFL restriction of parabolic type (Δt of the order of Δx²) and the convection term gives a restriction of convective type (Δt of the order of Δx), for fixed c. These restrictions are reasonable at local extrema and near edges, compared with the parabolic CFL restriction that corresponds to the reaction-diffusion ROF model (4.4), which is too stiff along flat regions or at local extrema. The CFL restriction coming from the convection term in the radical model (4.8) is better but also unfortunate. Thus, our model is more convenient from this point of view.
5. Explicit numerical schemes for the 2D model. We can express our 2D
model in terms of explicit partial derivatives as
   u_t = (u_xx u_y² − 2 u_x u_y u_xy + u_yy u_x²) / (u_x² + u_y²) − λ sqrt(u_x² + u_y²) · j ∗ (j ∗ u − u_0),
using u_0 as initial guess and homogeneous Neumann boundary conditions, (i.e., absorbing boundary).
The denominator, u_x² + u_y², appearing in the diffusion term may vanish or be small along flat regions or at local extrema, when it is computed. Then, we can use either the regularizing parameter β > 0 (small enough to perform floating point division), or make the diffusion term equal to zero when the gradient is smaller than a tolerance (we can also use the small parameter β as tolerance cut-off). Our choice in this paper was the cut-off option, following a suggestion by Barry Merriman. Thus, concerning stability and resolution, the role of the parameter β is almost irrelevant in 2D calculations.
Let u^n_{ik} be the approximation to the value u(x_i, y_k, t_n), where Δx, Δy and Δt are the spatial stepsizes and the time stepsize, respectively, and denote by v^n_{ik} the values of j applied to u^n. We point out that we used for j the convolution with the 2D heat kernel, (2.4), in our experiments, approximated by evolving the 2D heat equation u_t = Δu by means of the explicit Euler method in time and central differencing in space. Then our first order scheme
reads as follows:
ik
ug x
ik
ik- (w n
where the second order term is defined by
if g x
ik
ik! fi and
ik
ik
otherwise, where
x
y
ik
2\Deltay
xx
ik
ik
yy
\Deltay 2
xy
ik
2\Deltax\Deltay
ug x
ik
is the upwind gradient in the x-direction, i.e.,
ug x
\Deltax (5.10)
if g x
ik
(w n
ug x
ik
\Deltax (5.11)
if g x
ik
(w n
ik
is the upwind gradient in the y-direction, i.e.,
ug y
\Deltay (5.12)
if g y
ik (w n
ug y
ik
\Deltay
if g y
ik (w n
A very simple way to extend this scheme to get high order accuracy is to follow
Shu-Osher prescription, (see [21]). Thus, we consider a method of lines, using an
explicit high order Runge-Kutta method in time and using a method of spatial ENO
reconstruction, (see [24], [9], [21] and [12]), of the same order, for the convection term,
applied on every time substep.
We have tested the Van Leer second order MUSCL spatial reconstruction using the
minmod function as slope-limiter together with classical second order Runge-Kutta
method and the third order PHM spatial reconstruction as in [12], using as slope-
limiter the harmod function, consisting of the harmonic mean of the lateral slopes
when they have the same sign and zero when they have different sign, together with the
third order Shu-Osher Runge-Kutta method of [21]. We have found that these explicit
methods are stable and give high accuracy under the same CFL restrictions as the first
order scheme.
As a sample we shall describe the second order MUSCL method. Since the Runge-Kutta
methods used here are linear combination of first order explicit Euler timesteps,
it is enough to formulate one Euler step, (in fact, in this case it is Heun's method
which is the arithmetic mean of two Euler timesteps). Following the notation used
above we have:
ik
ik
ik- (w n
where the reconstructed upwind gradients rug x
ik
and rug y
ik
are computed in the following
way. We reconstruct the left x-gradient in from the linear function:
\Deltax (5.15)
where
computed in x i , i.e.
gl x
where the minmod function is defined as minmod(a, b) = sgn(a) min(|a|, |b|) if sgn(a) = sgn(b) and 0 otherwise, sgn being the sign function. Analogously, we have the reconstructed right x-gradient,
gr x
gr x
where
\Deltax (5.20)
where
Then the reconstructed upwind gradient in the x-direction is defined from the mean
value
as
if gm x
ik
if gm x
The procedure in the y-direction is similar.
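A sketch of a minmod-limited, second-order (MUSCL-type) approximation of the upwind gradient in the x-direction, in the spirit of the reconstruction described above; the boundary treatment (edge replication) and the exact form of the correction terms are illustrative assumptions and may differ in detail from (5.15)-(5.23).

    import numpy as np

    def minmod(a, b):
        # minmod(a, b) = sgn(a) * min(|a|, |b|) if a and b have the same sign, else 0.
        return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def muscl_gradient_x(w, dx, speed_sign):
        # Second-order, limited one-sided approximations of w_x in each cell,
        # selected according to the sign of the local propagation speed.
        we = np.pad(w, ((2, 2), (0, 0)), mode="edge")
        d2 = (we[2:, :] - 2.0 * we[1:-1, :] + we[:-2, :]) / dx ** 2   # second differences
        bwd = (we[2:-2, :] - we[1:-3, :]) / dx
        fwd = (we[3:-1, :] - we[2:-2, :]) / dx
        grad_left = bwd + 0.5 * dx * minmod(d2[:-2, :], d2[1:-1, :])
        grad_right = fwd - 0.5 * dx * minmod(d2[1:-1, :], d2[2:, :])
        return np.where(speed_sign > 0.0, grad_left, grad_right)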
Fig. 6.1. Left: original vs. noisy 1D signal; right: original vs. recovered 1D signal.
6. Numerical Experiments. In this section, we perform some numerical experiments
in 1D and 2D.
We have used 1D signals with values in the range [0, 255]. The signal of (6.1, left) represents the original signal versus the noisy signal with SNR ≈ 5. The signal of (6.1, right) represents the original signal versus the recovered signal after 80 iterations with the first order scheme with CFL 0.25. The estimated λ was computed as the maximum value allowed for stability, using the explicit Euler method in time. We
have chosen β in this experiment so as to achieve the appropriate amount of diffusion at small scales. In pure denoising 1D problems the choice of the value of β
in our model depends on the SNR. Let us observe the very reduced staircase effect,
compared with the usual one obtained with either fixed-point iterative methods or
nonlinear primal-dual methods, (see [4]).
Now, we present a pure deblurring problem in 1D. The signal of (6.2, left) represents the original signal versus the blurred signal (Gaussian blur as in 4.11). The signal of (6.2, right) represents the original signal versus the recovered signal after 40 iterations with the first order scheme with CFL 0.1. The estimated λ was computed as the maximum value allowed for stability, using the explicit Euler method in time. We use β = 0.01 in this experiment.
The signal of (6.3, left) represents the original signal versus the blurred and noisy signal (Gaussian blur as in 4.11) with SNR ≈ 5. The signal of (6.3, right) represents the original signal versus the recovered signal after 80 iterations with the first order scheme with CFL 0.25. The estimated λ was computed as the maximum value allowed for stability, using the explicit Euler method in time. The λ used for the current denoising and deblurring problem is smaller than the one used in the above pure deblurring problem, as we expected. We chose β in this experiment to get the correct degree of diffusion at small scales. This shows that the 1D problem is quite sensitive to the choice of β, in contrast with the 2D case where the size of this parameter becomes irrelevant. Let us also observe a very reduced staircase effect. We performed many other experiments with 1D signals, obtaining similar results.
All our 2D numerical experiments were performed on the original image (Fig 6.4,
left) with 256 × 256 pixels and dynamic range in [0, 255].
The third order scheme we used in our 2D experiments was based on the third
order Runge-Kutta introduced by Shu and Osher, (see [21]), to evolve in time with
a third order spatial approximation based on the PHM reconstruction introduced in
([12]).
Our first 2D experiment was made on the noisy image, (6.4, right), with a SNR
which is approximately 3. Details of the approximate solutions using the Chan-Golub-
Mulet primal-dual method and our time dependent model using the third order Roe's
scheme, (described above), are shown in Fig. 6.5. We used λ = 0.0713 and we performed 50 iterations with CFL number 0.1. We used the same estimated λ as the one used for the primal-dual method, and we observed that this value corresponds to the largest we
Fig. 6.4. Left: original image; right: noisy image, SNR ≈ 3.
Fig. 6.5. Left: image obtained by the Chan-Golub-Mulet primal-dual method (resolution 256x256, SNR ≈ 3, estimated λ = 0.0713); right: image obtained by our time evolution model (ROE-ORDER3-RK3), with 50 timesteps and CFL 0.1.
allowed for stability with this CFL restriction. We also remark that the third order
Runge-Kutta method used enhances the diffusion at small scales. The contour plots
are shown in Fig 6.6. We can infer from these contours that the edges obtained by
the new model are sharper than the ones obtained by the primal-dual method. This
might seem surprising, since the steady state satisfies the same equation (2.14) on the
analytic level. Numerically they are quite different because the approximation of the
convection term involves hyperbolic upwind ideas.
Our second 2D experiment is a pure deblurring problem. Fig (6.7, left) corresponds to the original image blurred with Gaussian blur as in (2.4). We remark that we computed the convolution operator j by evolving the 2D heat equation with the explicit Euler method in time and central differencing in space with a CFL number of 0.125, in order to test our model in practical conditions. In Fig (6.7, right), we represent the approximation using our third order Roe's scheme where we perform 50 iterations with CFL number 0.1. We have used for λ the maximum value that allows stability for the above CFL restriction. We observe that the scheme is not sensitive to the choice of β provided the value is small enough (smaller than 0.1). This behavior is justified by the fact that the 2D problem is more regular.
Fig. 6.6. Left: isointensity contours of part of the image obtained by the primal-dual method,
right: isointensity contours of part of the image obtained by our time evolution model.
Fig. 6.7. Left: image blurred with Gaussian blur; right: image restored with our model, using third order Roe's scheme with 50 timesteps and CFL 0.1.
The isointensity contours shown in (6.8) make clear the edge enhancement obtained
through our algorithm.
Our 2D critical experiment was performed on the blurry and noisy image represented
in Fig (6.9, left), with Gaussian blur as in (2.4) and SNR ≈ 5. We have used β = 0.01. We performed 50 iterations with a CFL number of 0.1, using our third order Roe's scheme, obtaining the approximation represented
in figure (6.9, right). Let us observe the denoising and deblurring effect in
the isointensity contours picture represented in figure (6.10).
Finally, we shall include the convergence history of the two 1D experiments corresponding
to the pure denoising problem and a denoising and deblurring problem
presented above. In Figs 6.11 and 6.12 we represent the semilog plot of the L 2 -norm
of the differences between consecutive iterates versus the number of iterations and the
plot of the evolution of the total variation of the solution, respectively. We observe
'superlinear' convergence along the first third part of the evolution and linear convergence
along the remainder. We point out that all our experiments were performed
with a constant timestep and thus, the computational cost is very low compared with
the semi-implicit methods. These usually require one third of the number of iterations
we needed, but every step of the semi-implicit method requires about five iterations
Fig. 6.8. Left: isointensity contours of part of the blurred image, right: isointensity contours
of part of the image restored by using our time evolution model.
Fig. 6.9. Left: image blurred with Gaussian blur and noisy with SNR ≈ 10; right: image restored with our model, using third order Roe's scheme with 50 timesteps and CFL 0.1.
of the preconditioned conjugate gradient method to invert.
7. Concluding remarks. We have presented a new time dependent model to
solve the nonlinear TV model for noise removal and deblurring together with a very
simple explicit algorithm based on Roe's scheme from fluid dynamics. The numerical
algorithm is stable with a reasonable CFL restriction, it is easy to program and it
converges quickly to the steady state solution, even for deblurring and denoising prob-
lems. The algorithm is fast and efficient since no inversions are needed for deblurring
problems with noise. Our time dependent model is based on level set motion that
makes the procedure morphological and appears to satisfy a maximum principle in the
pure denoising case, using as initial guess the noisy image. We also have numerical
evidence, (through our numerical tests), of this stability in the deblurring case, using
the noisy and blurred image as initial guess.
--R
A variational method in image recovery
Modular solvers for constrained image restoration problems
Extensions to total variation denoising
Image recovery via total variation minimization and related problems
A nonlinear primal-dual method for total variation-based image restoration
Constrained restoration and the recovery of discontinuities
The theory of Tikhonov regularization for Fredholm integral equations of the first kind
Uniformly high order accurate essentially non-oscillatory schemes III
Numerical methods for conservation laws
The Fourier method for nonsmooth data
reconstructions for nonlinear scalar conservation laws
Restoring images degraded by spatially variant blur
Fronts propagating with curvature dependent speed: algorithms based on a Hamilton-Jacobi formulation
The gradient-projection method for nonlinear programming: Part II
Total variation based image restoration with free local constraints
Nonlinear total variation based noise removal algorithms
Cambridge University Press
Efficient implementation of essentially non-oscillatory shock capturing schemes II
Solutions of ill-posed problems
On the numerical solution of Fredholm integral equations of the first kind by the inversion of the linear system produced by quadrature
Towards the ultimate conservative difference scheme V.
Variational problems and PDE's for image analysis and curve evolution
Iterative methods for total variation denoising
--TR
--CTR
John Steinhoff , Meng Fan , Lesong Wang , William Dietz, Convection of Concentrated Vortices and Passive Scalars as Solitary Waves, Journal of Scientific Computing, v.19 n.1-3, p.457-478, December
Youngjoon Cha , Seongjai Kim, Edge-Forming Methods for Image Zooming, Journal of Mathematical Imaging and Vision, v.25 n.3, p.353-364, October 2006
Ronald P. Fedkiw , Guillermo Sapiro , Chi-Wang Shu, Shock capturing, level sets, and PDE based methods in computer vision and image processing: a review of Osher's contributions, Journal of Computational Physics, v.185 n.2, p.309-341, March | upwind schemes;total variation norm;image restoration;nonlinear diffusion;level set motion |
587307 | Schur Complement Systems in the Mixed-Hybrid Finite Element Approximation of the Potential Fluid Flow Problem. | The mixed-hybrid finite element discretization of Darcy's law and continuity equation describing the potential fluid flow problem in porous media leads to a symmetric indefinite linear system for the pressure and velocity vector components. As a method of solution the reduction to three Schur complement systems based on successive block elimination is considered. The first and second Schur complement matrices are formed eliminating the velocity and pressure variables, respectively, and the third Schur complement matrix is obtained by elimination of a part of Lagrange multipliers that come from the hybridization of a mixed method. The structural properties of these consecutive Schur complement matrices in terms of the discretization parameters are studied in detail. Based on these results the computational complexity of a direct solution method is estimated and compared to the computational cost of the iterative conjugate gradient method applied to Schur complement systems. It is shown that due to special block structure the spectral properties of successive Schur complement matrices do not deteriorate and the approach based on the block elimination and subsequent iterative solution is well justified. Theoretical results are illustrated by numerical experiments. | Introduction
Let Ω be a bounded domain in R³ with a Lipschitz continuous boundary ∂Ω. The potential fluid flow in saturated porous media can be described by the velocity u using Darcy's law and by the continuity equation for incompressible flow,
   A u = −∇p   in Ω,    (1.1)
   ∇·u = q   in Ω,    (1.2)
where p is the piezometric potential (fluid pressure), A is a symmetric and uniformly positive definite second rank tensor of the hydraulic resistance of the medium (with x^T A x ≥ c x^T x for all x and some c > 0), and q represents the density of potential sources in the medium. The boundary conditions are given by
   p = p_D on ∂Ω_D,   u·n = u_N on ∂Ω_N,
where ∂Ω_D and ∂Ω_N are such that ∂Ω = ∂Ω_D ∪ ∂Ω_N, ∂Ω_D ∩ ∂Ω_N = ∅, and n is the outward normal vector defined (almost everywhere) on the boundary ∂Ω.
Assume that the domain Ω is a polyhedron and it is divided into a collection of subdomains such that every subdomain is a trilateral prism with vertical faces and general nonparallel bases (see, e.g., [11], [14] or [15]). We will denote the discretization of the domain Ω by E_h and assume a uniform regular mesh with the discretization parameter h. Denote also the collection of all faces of elements which are not adjacent
to the boundary ∂Ω_D by Γ_h, and introduce the set of interior faces (the faces of Γ_h not lying on the boundary ∂Ω).
We consider the following low order finite element approximation. Let RT^0_{−1}(e) be the space spanned by the linearly independent basis functions v^e_k, k = 1, ..., 5, defined on the element e ∈ E_h and such that they are orthonormal with respect to the set of functionals
   v → ∫_{f^e_k} v · n^e_k dS,   k = 1, ..., 5.
Here f^e_k denotes the k-th face of the element e and n^e_k = (n^e_{k,1}, n^e_{k,2}, n^e_{k,3}) is the outward normal vector with respect to the face f^e_k. The velocity function u will be then approximated by vector functions linear on every element e ∈ E_h from the Raviart-Thomas space RT^0_{−1}(E_h), where v_h|_e denotes the restriction of a function v_h onto the element e ∈ E_h. Further
denote the space of constant functions on each element e ∈ E_h by M^0(e) and denote the space of constant functions on each face f ∈ Γ_h by M^0(f). The piezometric potential p will be approximated by the space which consists of elementwise constant functions, where φ_h|_e is the restriction of a function φ_h onto the element e. The Lagrange multipliers coming from the hybridization of the method will be approximated by the space of all functions constant on every face from Γ_h. Here φ_h|_f denotes the restriction of a function φ_h onto the face f ∈ Γ_h. Analogously we introduce the spaces M^0(∂Ω_D) and M^0(∂Ω_N) as the spaces of functions constant on every face from ∪_{e∈E_h} ∂e ∩ ∂Ω_D and Γ_h ∩ ∂Ω_N, respectively. The detailed description of the spaces that we use can be found in [14] (see also [11] or [15]).
The Raviart-Thomas approximation of the mixed-hybrid formulation for the problem
(1.1) and (1.2) reads as follows (see [4]):
Find
@e\@
(1.
@e\@
where p D;h and uN;h are approximations to the functions p D and uN on the spaces
where the function q is approximated by
For other details we refer to [14] or [11].
Further denote by NE = |E_h| the number of elements, by NIF the number of interior inter-element faces, and by NNC the number of faces with the prescribed Neumann boundary conditions in the discretization. Let e_i, i = 1, ..., NE, be some numbered ordering of the set of prismatic elements and f_k, k = 1, ..., NIF + NNC, be the ordering of the set of non-Dirichlet faces from Γ_h. For every element e_i we denote by NIF_{e_i} the number of interior inter-element faces and by NNC_{e_i} the number of faces with Neumann boundary conditions imposed on the element e_i. Let the finite-dimensional space RT^0_{−1}(E_h) be spanned by NA linearly independent basis functions from the definition (1.6) (note that NA = 5 NE); let the space of elementwise constant functions be spanned by NE linearly independent basis functions; and finally let the space of functions constant on the non-Dirichlet faces of Γ_h be spanned by NIF + NNC linearly independent basis functions. From this Raviart-Thomas approximation we obtain the system
of linear algebraic equations in the form
   ( A    B    C ) ( u )
   ( B^T  0    0 ) ( p )  =  b,    (1.12)
   ( C^T  0    0 ) ( λ )
where u ∈ R^NA, p ∈ R^NE and λ ∈ R^{NIF+NNC} are the unknowns, the symmetric positive definite matrix block A ∈ R^{NA,NA} is given by the terms (A v_i, v_j), the off-diagonal block B ∈ R^{NA,NE} by terms of the form (∇·v_i, ·) against the elementwise constant basis functions, and the block C ∈ R^{NA,NIF+NNC} by face integrals of v_i · n_k against the facewise constant basis functions. Here n_k is the outward normal vector with respect to the face f_k (see [11] and [14]).
Let us denote the system matrix in (1.12) by A. The symmetric matrix A is indefinite due to the zero diagonal block of dimension NE + NIF + NNC. The structure of nonzero elements in the matrix from a small model problem can be seen in Figure 1. Partition the submatrix C in A as (C_1 C_2), where C_1 ∈ R^{NA,NIF} corresponds to the interior inter-element faces in the discretized domain and C_2 ∈ R^{NA,NNC} is the face-condition incidence matrix corresponding to the element faces with Neumann boundary conditions. Note that every column of C_1 contains only two nonzero entries equal to 1. The singular values of C_1 are all equal to √2 and the matrix block C_2 has orthogonal columns. Moreover, the whole matrix block C has also singular values equal to √2 or 1. The matrix B has a special structure. The nonzero elements correspond to the face-element incidence matrix with values equal to −1. Thus all singular values of the matrix B are equal to √5 (the matrix B is, up to the normalization coefficients, orthogonal).
It is easy to see from the definition of the approximation spaces (see [14] or [15]) that the symmetric positive definite block A is 5 × 5 block diagonal, and it was shown in [15] that the spectrum of the matrix block A satisfies two-sided bounds in which c_1 and c_2 are positive constants independent of the discretization parameters and dependent on the domain and the tensor A. It is also easy to see that the system matrix A in (1.12) is non-singular if and only if the block (B C) has full column rank. Clearly, if the condition ∂Ω_D = ∅ holds (all boundary conditions are Neumann conditions), then the matrix block (B C) is singular, due to the fact that all sums of row elements are zero. In other words, the function p is unique up to a constant function in the case ∂Ω_D = ∅. Assuming ∂Ω_D ≠ ∅, it follows from the analysis presented in [15] that there exist positive constants c_3 and c_4 such that the singular values of the matrix block (B C) satisfy analogous two-sided bounds. Moreover, for the eigenvalues of the whole symmetric indefinite matrix A asymptotic inclusion bounds follow, with positive constants independent of the system parameters.
Fig. 1. Structural pattern of the matrix obtained from mixed-hybrid finite element approximation of a model problem (to be discussed in Section 5).
In this paper, for solving the symmetric indefinite systems (1.12), the successive reduction to Schur complement systems is proposed. We consider three successive Schur complement systems arising during the block elimination of unknowns which correspond to the matrix blocks A, B and C_2, respectively, or in other words, which correspond to the elimination of the velocity variables u, the pressure variables p and of a part of the Lagrange multipliers λ. While the concept of reduction to the first and second Schur complement systems is well known as static condensation (described e.g. in [4], Section V, or in [11]), the proposed reduction to the third Schur complement system seems to be new. The main contribution of the paper consists in a detailed investigation of the structure of nonzero entries and the spectral properties of the Schur complement matrices. This enables a thorough complexity analysis of the direct or iterative solution of the corresponding Schur complement systems. A brief analysis of the structure of the first Schur complement matrix can be found in [4], as well as a straightforward observation that its principal leading block is diagonal. Here we extend this analysis and discuss the mutual relation between the number of nonzero entries in the first Schur complement matrix and the number of nonzeros in the system matrix (1.12). We show further that no fill-in occurs during the process of reduction to the second and third Schur complement systems. Moreover, we prove that the number of nonzeros in both these two Schur complements is always less than the number of nonzeros in (1.12). It is shown also that the spectral properties of matrices in such Schur complement systems do not deteriorate during the successive elimination. Thus an approach based on the block reduction and subsequent iterative solution is well justified.
The outline of the paper is as follows. In Section 2, we examine the structural pattern
of resulting Schur complement matrices and give estimates for their number of nonzero
elements in terms of the discretization parameters listed above. Section 3 is devoted to the
solution of the whole indefinite system (1.12) via three Schur complement reductions and
subsequent direct solution. Using the graph theoretical results we give the asymptotic
bound of the computational complexity for the Cholesky decomposition method applied
to the third Schur complement system. In Section 4, we concentrate on the spectral
properties of the Schur complement system matrices. The theoretical convergence rate of
the iterative conjugate gradient-type method in terms of the discretization parameters is
estimated. The asymptotic bounds for the computational work of the iterative solution
are given. Section 5 contains some numerical experiments illustrating the previously
developed theoretical results. Finally, we give some concluding remarks and mention
some open questions for our future work.
2. Structural properties of the Schur complement matrices. In this section
we take a closer look at the discretized indefinite system and the corresponding Schur complements and we extend the brief analysis from [4]. There are several possibilities for the choice of a block ordering in the consecutive elimination. We shall concentrate on the block ordering which seems to be the most natural and efficient from the point of view of solving the final Schur complement system by a direct solver or by a conjugate gradient-type method. The same ordering for the elimination of the first two blocks was used also in [4], p. 178-181, or in [11]. Note that the static condensation is not the only way to form the successive Schur complements. E.g., in [17] the case of the Raviart-Thomas discretization for the closely related nodal methods was studied and reduction to a different second Schur complement system was discussed.
The following simple result gives the number of nonzero elements in the triangular
part of the matrix A. By the triangular part of a matrix M we mean its upper (lower) triangle including the diagonal. We will deal only with the structural nonzero elements, i.e., we do not take into account accidental cancellations and possible initial zero values of the tensor of hydraulic permeability. By the structure of a matrix M we mean Struct(M) = {(i, j) | m_{ij} ≠ 0}.
Fig. 2. Structural pattern of the Schur complement matrix A/A for A from Figure 1.
Lemma 2.1. The number of nonzeros in the triangular part of A is given by 20 NE + 2 NIF + NNC.
Proof. The triangular part of A has 15NE nonzeros, the block B contributes by 5NE
nonzeros, C 1 has 2NIF nonzeros and C 2 contains NNC nonzeros.
The symmetric positive definite matrix block A in (1.12) is block diagonal; each 5 × 5 block corresponds to a certain element in the discretization of the domain. Therefore it is straightforward to eliminate the velocity variables u and to obtain the first Schur complement system with the matrix
   A/A = ( A_11   A_12   A_13
           A_12^T A_22   A_23
           A_13^T A_23^T A_33 ),
where A_11 = B^T A^{-1} B, A_12 = B^T A^{-1} C_1, A_13 = B^T A^{-1} C_2, A_22 = C_1^T A^{-1} C_1, A_23 = C_1^T A^{-1} C_2 and A_33 = C_2^T A^{-1} C_2.
The structure of the matrix A=A for our example problem is shown in Figure 2. For
details we also refer to [4], p. 180-181 or [11]. For the number of nonzeros in the matrix
A=A we can show the following result.
Lemma 2.2. The number of nonzeros in the triangular part of the Schur complement
matrix A=A is equal to
Proof. Clearly, . Note that the
fill-in for A^{-1} C_1 is considerably higher (it is equal to 10 NIF). Further, |A_13|
and
The number of
nonzeros in A 23 is equal to
Finally, note that
Observe that the directed graph of the matrix B T C 1 has the set of arcs
is an interior face of ig:
The undirected graph of C T
adjacency relation
based on the connectivity through the interior faces inside the domain. It follows
that
where e(f) and
e(f) are the two elements from E h such that
considering the relation
NIF: Putting all the partial sums together
we get the desired result.
Consider now the second Schur complement matrix
   (A/A)/A_11 = ( A_22   A_23 ; A_23^T  A_33 ) − ( A_12^T ; A_13^T ) A_11^{-1} ( A_12  A_13 ).
The structure of (A/A)/A_11 for our example matrix is shown in Figure 3. The matrix block A_11 in the first Schur complement matrix A/A is diagonal [4], [11]. The following result shows that it is worth forming the Schur complement matrix (A/A)/A_11 from the matrix A/A, since no further fill-in appears during the elimination of the block A_11 corresponding to the pressure variables p, and so we can further reduce the dimension of the system.
Theorem 2.1. Struct( (A/A)/A_11 ) = Struct( ( A_22  A_23 ; A_23^T  A_33 ) ).
Proof. We have the following structural equivalences:
Fig. 3. Structural pattern of the Schur complement matrix ( A=A)=A11 for A from Figure 1.
From the previous Theorem it is also easy to see that the right lower block B_22 is block diagonal with blocks of varying size (depending on the number of faces with Neumann conditions in each element), each corresponding to a certain element in the discretization. So in the following we will consider the third Schur complement matrix
   ((A/A)/A_11)/B_22 = B_11 − B_12 B_22^{-1} B_12^T
induced by the block B_22 in the matrix (A/A)/A_11, where B_11 ∈ R^{NIF,NIF}, B_12 ∈ R^{NIF,NNC} and B_22 ∈ R^{NNC,NNC} denote the blocks of (A/A)/A_11. We can prove a similar result to the one given in Theorem 2.1. Therefore, the Schur complement system with the matrix (A/A)/A_11 can be reduced to the Schur complement matrix ((A/A)/A_11)/B_22 of dimension equal to NIF, without inducing any additional fill-in. Moreover, this can be done using incomplete factorization procedures.
Theorem 2.2. Struct( ((A/A)/A_11)/B_22 ) = Struct(B_11).
Proof. Using Theorem 2.1 we get the structural inclusion of Struct(B_12 B_22^{-1} B_12^T) in Struct(B_11); only in the trivial singular case with |E_h| = 1 does this fail, and we get the desired result Struct(((A/A)/A_11)/B_22) = Struct(B_11).
The following simple corollary gives the number of nonzero elements in the second and
third Schur complement matrices ( A=A)=A 11 and (( A=A)=A 11 )=B 22 . We shall use
these results later.
Corollary 2.1. The number of nonzeros in the triangular part of (A/A)/A_11 and the number of nonzeros in the triangular part of ((A/A)/A_11)/B_22 follow from Lemma 2.2 together with Theorems 2.1 and 2.2.
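For illustration, a dense NumPy sketch of the three successive block eliminations applied to the blocks of (1.12); in practice A and B_22 are block diagonal and the complements are formed exploiting sparsity, so the dense inverses below only make the algebra explicit.

    import numpy as np

    def schur_complements(A, B, C1, C2):
        # A, B, C1, C2: the blocks of the saddle-point matrix (1.12).  Returns the three
        # successive Schur complement matrices obtained by eliminating the velocity
        # unknowns, the pressure unknowns and the multipliers on Neumann faces.
        Ainv = np.linalg.inv(A)                       # block diagonal in practice
        BC = np.hstack([B, C1, C2])
        S1 = BC.T @ Ainv @ BC                         # first complement (sign chosen positive definite)
        nE = B.shape[1]
        A11 = S1[:nE, :nE]                            # diagonal block
        A1r = S1[:nE, nE:]
        S2 = S1[nE:, nE:] - A1r.T @ np.linalg.inv(A11) @ A1r   # second complement
        nIF = C1.shape[1]
        B11 = S2[:nIF, :nIF]
        B12 = S2[:nIF, nIF:]
        B22 = S2[nIF:, nIF:]                          # block diagonal
        S3 = B11 - B12 @ np.linalg.inv(B22) @ B12.T   # third complement, of order NIF
        return S1, S2, S3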
Apart from explicit assembly of the Schur complement matrices or using them implicitly
there is another possibility which may be considered: keeping the Schur complements in factorized form. Consider the decomposition (2.19) of the system matrix.
In contrast to the previous case, where the local numbering of the
faces corresponding to the individual elements did not play a role, this is not the case
now.
Theorem 2.3. Assume that all the elements within the diagonal blocks of the matrix
A are nonzero. The fill-in in the corresponding factor is minimal if the faces with Dirichlet boundary conditions are numbered first in the local ordering of each finite element.
Proof. Because of the block structure of A we can consider the individual finite elements independently. The minimum value of the nonzero count of the subsequent rows which correspond to the same finite element is easily checked to be attained in this case.
Therefore, from now on we assume that within each element we have first numbered the faces
corresponding to Dirichlet boundary conditions, then the interior inter-element faces and
finally the faces with Neumann boundary conditions. The matrix (2.19) can be written
in the form
It is clear that it is more advantageous to keep most of the blocks of (2.20) in the
explicit form multiplying the factors directly. A typical example is the block ^
B,
which is a diagonal matrix. The main question here is whether we can reduce the system
further as in the previous case and at the same time keep the matrix blocks in a
factorized form. Unfortunately, there is one basic obstacle. Whereas we are able to
embed the structure of A T
into the structure of A 22 we cannot in general express
in the factorized form as
is factor which can be easily computed.
We have considered the partially factorized structure (2.20) since it is important from
a computational point of view. Using a structural prediction based on such factors is exactly the way to obtain the sparsity structure of the explicit Schur complement matrices (A/A)/A_11 and ((A/A)/A_11)/B_22 in an efficient way. In our implementations
we used a procedure similar to the one from [16] to get these structures.
3. Direct solution of the Schur complement systems. In the following we will
discuss the direct solution of the Schur complement systems. Namely, we will concentrate
on the system with the matrix ((A/A)/A_11)/B_22 ∈ R^{NIF,NIF}. The following theorem gives a bound on the asymptotic work necessary to solve the linear system (1.12), which is dominated by the decomposition of the matrix ((A/A)/A_11)/B_22.
Theorem 3.1. The number of arithmetic operations to solve the symmetric indefinite system (1.12) directly via three consecutive block eliminations and using the Cholesky decomposition is O(NIF²).
Proof. We will only give a sketch of the proof here. The work is dominated by the decomposition of B_11 − B_12 B_22^{-1} B_12^T, which has the same nonzero structure as A_22. Our uniform regular finite element mesh is a well-shaped mesh in a suitable sense (see [19]). The proof of Lemma 2.2 and the statements of Theorems 2.1 and 2.2 imply that the graph G of ((A/A)/A_11)/B_22 is also the graph of a well-shaped mesh. Namely, it is a bounded-degree subgraph of some overlap graph (see [18], [19]). It was shown in [25] that the upper bound on the second-smallest eigenvalue of the Laplacian matrix of G (the Fiedler value) is of the order O(1/NIF^{2/3}); using the techniques from [25] we obtain that there exists an O(NIF^{2/3})-size bisector of G.
Therefore, G satisfies the so-called NIF^{2/3}-separator theorem: there exist constants α < 1 and β > 0 such that the vertices of G can be partitioned into sets G_A, G_B and the vertex separator G_C such that |G_A|, |G_B| ≤ α NIF and |G_C| ≤ β NIF^{2/3}. Moreover, any subgraph of G satisfies the NIF^{2/3}-separator theorem. The technique of recursive partitioning of G called generalized nested dissection, used to reorder the considered Schur complement matrix, provides an elimination ordering with an O(NIF²) bound on the arithmetic work of the Cholesky decomposition (see Theorem 6 in [12]).
Note that the explicit computation of the matrix (( A=A)=A 11 )=B 22 is necessary in
the framework of direct methods. Theorem 3.1 provides a theoretical result which is based
on spectral partitioning methods. The reordering algorithms based on the separators
obtained by the spectral partitioning techniques and applied recursively within the nested
dissection need not necessarily be the best practical approach to get a reasonable matrix
reordering. Nevertheless, experimental results with various partitioning schemes show
that high quality reorderings can be efficiently computed in this way (see [7]). Also, some
other reorderings which combine global procedures (partitioning of large meshes) with
local algorithms (like MMD) can provide reasonable strategies to find a fill-in minimizing
permutation.
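As an illustration of the separator-based orderings discussed above, the following sketch (ours; the graph is a toy grid, not the element-connectivity graph of the paper) computes one level of spectral bisection from the Fiedler vector of the graph Laplacian; applied recursively, this yields a nested dissection ordering:

```python
# Minimal sketch of spectral bisection: split a graph by the median of the
# Fiedler vector (eigenvector of the second-smallest Laplacian eigenvalue).
import numpy as np

def grid_adjacency(p, q):
    n = p * q
    A = np.zeros((n, n))
    for i in range(p):
        for j in range(q):
            u = i * q + j
            if i + 1 < p: A[u, u + q] = A[u + q, u] = 1.0
            if j + 1 < q: A[u, u + 1] = A[u + 1, u] = 1.0
    return A

def spectral_bisection(A):
    d = A.sum(axis=1)
    L = np.diag(d) - A                        # graph Laplacian
    w, V = np.linalg.eigh(L)                  # eigenvalues in ascending order
    fiedler = V[:, 1]
    part = fiedler <= np.median(fiedler)      # two halves of roughly equal size
    # vertices with a neighbor on the other side form a (crude, wide) separator
    n = A.shape[0]
    sep = [u for u in range(n)
           if any(A[u, v] and part[u] != part[v] for v in range(n))]
    return part, sep

A = grid_adjacency(8, 8)
part, sep = spectral_bisection(A)
print("sizes:", int(part.sum()), int((~part).sum()), "separator size:", len(sep))
```

In practice one would of course use a sparse eigensolver or a multilevel partitioner instead of a dense eigendecomposition.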
4. The conjugate gradient method applied to the Schur complement systems.
In this section we concentrate on the iterative solution of the Schur complement
systems discussed in Section 3. We consider the conjugate gradient method applied
to the symmetric positive definite systems with matrices A=A, ( A=A)=A 11 and
(( A=A)=A 11 )=B 22 . It is well known that the convergence rate of the conjugate gradient
method can be bounded in terms of the condition number of the corresponding Schur
complement matrix [9], [6], [26]. We show that the condition number of the matrix A=A
is asymptotically the same as the conditioning of the negative part of the spectrum of the
whole indefinite matrix A. Moreover, we prove that the condition numbers of the matrices
grow like 1/h^2 with respect to the discretization
parameter h and do not deteriorate during the successive eliminations. Based on
these results we estimate the number of iteration steps necessary to achieve a prescribed
tolerance in the error norm reduction. We show that the number of iteration steps
necessary to reduce the error norm by a factor of ε grows asymptotically like 1/h for
all three Schur complement systems. Therefore, the total number of flops in the iterative
algorithm can be significantly reduced due to the decrease of the matrix order during the
elimination. First, we consider the following theorem.
Theorem 4.1. Let λ_min and λ_max be the extremal eigenvalues of the positive
definite block A ∈ R^{NA×NA}, and let σ_min and σ_max be the extremal singular values of the
matrix block (B C) ∈ R^{NA×NBC}. Then the spectrum of the Schur complement
A=A is contained in the interval [σ_min^2/λ_max, σ_max^2/λ_min].
Moreover, analogous inclusions hold for the eigenvalues of the positive definite matrix blocks
arising after the subsequent eliminations. The condition number of the Schur complement
system matrix A=A can then be bounded by the expression
κ( A=A) ≤ (σ_max^2/σ_min^2)(λ_max/λ_min).     (4.1)
Proof. The positive definite matrix A^{-1} has its spectrum contained in [1/λ_max, 1/λ_min].
The first inclusion in the theorem follows from the two inequalities
(1/λ_max) ((B C)x, (B C)x) ≤ (A^{-1}(B C)x, (B C)x) ≤ (1/λ_min) ((B C)x, (B C)x)
together with σ_min^2 ||x||^2 ≤ ((B C)x, (B C)x) ≤ σ_max^2 ||x||^2.
Similarly, from the analogous inequalities for the reduced blocks we obtain the second inclusion.
The third part of the proof is completely analogous to the second part.
Corollary 4.1. There exist positive constants c 9 and c 10 such that for the spectrum
of the Schur complement matrix A=A we have
The condition number of the matrix A=A can be
bounded as
The Schur complement system with the positive definite matrix A=A can be solved iteratively
by the conjugate gradient method [9] or the conjugate residual method [6]. It
is well known that the conjugate gradient method generates approximate solutions
which minimize the energy norm of the error at each iteration step [26], [6]. The closely
related conjugate residual method, which differs only in the definition of the inner product,
generates approximate solutions which minimize their residual norm
at every iteration [6]. It is also a well-known fact that there exists a so-called peak/plateau
connection between these methods [5], showing that there is no significant difference in
their convergence rates when measured by the residual norm of an approximate
solution. In our paper we use the conjugate gradient method together with
the minimal residual smoothing procedure applied on top of it to get monotonic residual
norms [28]. Applying such a technique allows better monitoring of the convergence
by the residual norm, and it is mathematically equivalent to the residual-minimizing conjugate
residual method [6]. The computational cost of this technique is minimal: it
costs only two inner products and one vector update per iteration. In the framework
of iterative methods the number of operations in the matrix-vector products is usually
what matters most. These products, performed repeatedly in each iteration loop,
contribute in a substantial way to the final efficiency of the iterative solver. When solving
the system with the Schur complement matrix A=A, the number of flops per iteration for
an unpreconditioned method is dominated by the matrix-vector multiplication with the
matrix A=A. Its number of nonzeros was given by Lemma 2.2. Moreover, using the
estimates (1.13) and (1.14), the condition number of the Schur complement matrix A=A
can be bounded by the term O(NE^{2/3}). Consequently, the number of flops for conjugate
gradients necessary to achieve a reduction by ε is proportional to the number of iterations,
O(NE^{1/3}), times the number of nonzeros of A=A. Assuming the overestimates
NIF, NNC = O(NE), we obtain the asymptotic estimate of order O(NE^{4/3}).
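The smoothing procedure mentioned above can be sketched as follows (our illustration, on a toy dense SPD system): the smoothed iterate is a convex combination of the previous smoothed iterate and the new CG iterate, with the weight chosen to minimize the residual norm.

```python
# Minimal sketch: CG with minimal-residual smoothing on top, so that the
# reported residual norms decrease monotonically. Assumes A is SPD.
import numpy as np

def cg_with_mr_smoothing(A, b, tol=1e-10, maxit=500):
    n = len(b)
    x = np.zeros(n); r = b.copy(); p = r.copy()
    xs = x.copy(); rs = r.copy()             # smoothed iterate and its residual
    rho = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        # smoothing: rs_new = rs + eta*(r - rs), with eta minimizing ||rs_new||
        d = r - rs
        dd = d @ d
        eta = -(rs @ d) / dd if dd > 0 else 0.0
        xs = xs + eta * (x - xs)
        rs = rs + eta * d
        if np.linalg.norm(rs) <= tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return xs

rng = np.random.default_rng(1)
M = rng.standard_normal((80, 80))
A = M @ M.T + 80 * np.eye(80)                # toy SPD test matrix
b = rng.standard_normal(80)
x = cg_with_mr_smoothing(A, b)
print("final residual:", np.linalg.norm(b - A @ x))
```

The smoothed solution xs has residual rs = b - A·xs by construction, so monitoring rs gives the monotone residual history discussed in the text.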
The previous considerations did not take into account the Schur complement systems
with matrices (( A=A)=A 11 ) and ((( A=A)=A 11 )=B 22 ). The convergence rate of the
iterative conjugate gradient method applied to the second and third Schur complement
systems depends analogously on the condition number of the Schur complement matrices
[9], [6], [26]. The analysis of the spectrum of the matrix (( A=A)=A 11 ) is given in the
following theorem.
Theorem 4.2. Let λ_min, λ_max, σ_min, and σ_max be as in Theorem 4.1. Then the spectrum of
the Schur complement matrix ( A=A)=A 11 satisfies an analogous inclusion.
Consequently, the condition number of the matrix ( A=A)=A 11 can be bounded as follows:
Proof. From the definition of the Schur complement matrix ( A=A)=A 11 and the
statement of Theorem 4.1 we obtain the upper bound for its maximal eigenvalue.
The bound for the minimal eigenvalue can be obtained by combining a classical result on
Schur complements of partitioned positive definite matrices (see [20], p. 201) with
the interlacing property of the eigenvalues of the symmetric matrix A=A (see e.g. [8]).
Considering the previous inequalities we get the lower bound for the minimal eigenvalue
of the matrix ( A=A)=A 11 , which completes the proof.
We have shown that the condition number of the Schur complement system matrix
( A=A)=A 11 is bounded by a multiple of the condition number of the matrix A=A.
Therefore the number of iteration steps for the conjugate gradient method necessary to
reduce the error norm (or, after smoothing, the residual norm) by some factor is asymptotically
the same as before. The complexity of the matrix-vector multiplication is lower
and, according to Corollary 2.1, is of the order of the nonzero counts given there.
Assuming again the overestimates NIF, NNC = O(NE), we obtain the
asymptotic estimate O(NE). The total number of flops for the conjugate gradient
or the conjugate residual method necessary to achieve a reduction by the factor ε is
then again of order O(NE^{4/3}). From the statements of Theorem 4.1 and Theorem
4.2 it is clear that the reduction to the Schur complement systems does not affect the
asymptotic conditioning of the positive definite matrices A=A and ( A=A)=A 11 . The
same is true for the spectral properties of the third Schur complement system with
the matrix (( A=A)=A 11 )=B 22 . Since the proof is completely analogous to the proof of
Theorem 4.2 we shall present only the following statement (cf. [10], p. 256).
Theorem 4.3. The condition number of (( A=A)=A 11 )=B 22 is bounded by the
condition number of the matrix ( A=A)=A 11
In the following we present two additional results concerning the matrix-vector
multiplications with the Schur complement matrices. Theorem 4.4 compares the number of
nonzeros in the Schur complement matrices ( A=A)=A 11 and (( A=A)=A 11 )=B 22 to the
number of nonzeros in the original matrix A.
Theorem 4.4. The number of nonzero entries in the matrix (( A=A)=A 11 ) or the
matrix ((( A=A)=A 11 )=B 22 ) is smaller than the number of nonzeros in the matrix A.
Proof. Using the fact that
it follows from Lemma 2.1 and Theorem 2.1 that
Clearly, the number of nonzeros in the matrix ((( A=A)=A 11 )=B 22 ) is even smaller.
Note that the number of nonzeros in the original matrix A can be smaller or larger
than the corresponding number of nonzeros in the matrix A=A. Consider now the factorized
Schur complement in the form (2.20). It can also be shown that there is no clear
winner between the number of floating-point operations to multiply a dense vector by the
factorized form (2.20) and the number of operations to get a product of
the matrix (( A=A)=A 11 )=B 22 with a dense vector of appropriate dimension, respectively.
The result depends on the shape of the domain and its boundary conditions. Nevertheless,
the following Theorem 4.5 shows that if we do not form the Schur complement
explicitly it is worthwhile to use the factorized form (2.19) and the reordering of the Schur
complement from Theorem 2.3 instead of its implicit form.
Theorem 4.5. Let v be a dense vector. The number of floating-point operations
to compute the product of the factorized form (2.19) with v is smaller than the number of
floating-point operations to compute the product of the implicit form with v.
Proof. Taking into account the local ordering from Theorem 2.3, the difference between
these two quantities can be bounded below by
2 NIF − NNC ≥ 0.
5. Numerical experiments. In the following we present numerical experiments
which illustrate the results developed in the theoretical part of the paper.
Two model potential flow problems (1.1) and (1.2) in a rectangular domain with
Neumann conditions prescribed on the bottom and on the top of the domain have been
considered. Dirichlet conditions that preserve the nonsingularity of the whole system
matrix A were imposed on the rest of the boundary. The choice of boundary conditions
in these examples is motivated by our application; it comes from the modelling of a
confined aquifer (see [3]) between two impermeable layers.
In order to verify the theoretical results derived in the previous sections we first restrict
our attention to the simplest geometrical shape, a cubic domain, and report the results
obtained from a uniformly regular mesh refinement. In practical situations, however, relatively
thin aquifers with possible cracks in the rock are frequently modelled, and so the
number of Neumann conditions may represent a big portion of the whole boundary. As
our second model example, we consider a rectangular domain discretized by 6 layers of
Table 1
Model potential fluid flow problem - cubic domain
Discretization parameters and matrix dimensions
h, NE | NIF | NNC | A | A=A | ( A=A)=A 11 | (( A=A)=A 11 )=B 22
1/5, 250 525 100 2125 875 625 525
1/10, 2000 4600 400 17000 7000 5000 4600
1/15, 6750 15975 900 57375 23625 16875 15975
1/20, 16000 38400 1600 136000 56000 40000 38400
1/30, 54000 131400 3600 459000 189000 135000 131400
1/35, 87750 209475 4900 728875 300125 214375 209475
1/40, 128000 313600 6400 1088000 448000 320000 313600
Table 2
Model potential fluid flow problem - realistic domain
Discretization parameter and matrix dimensions
grid | NIF | NNC | A | A=A | ( A=A)=A 11 | (( A=A)=A 11 )=B 22
95x95x6 251560 36100 937460 395960 287660 251560
elements in the mesh. As we will see later, the reduction to the third Schur complement
proposed in this paper can become even more significant than for the cubic domain.
Prismatic discretizations of domains with NE elements were used [14], [11]. The
discretization parameters h, NE, NIF, NNC, the dimension N of the resulting indefinite
system matrix A and the dimensions of the corresponding
Schur complement matrices A=A, ( A=A)=A 11 and (( A=A)=A 11 )=B 22 are
given in Table 1 for a cubic domain and in Table 2 for a more realistic domain. We note
again that the difference between the dimensions of the second and third Schur complement
matrices is significantly larger in the case of modelling of thin layers that arise regularly
in our application.
For the example of a cubic domain the spectral properties of the matrix blocks A
and (B C) as well as of the whole symmetric indefinite matrix A have been investigated.
The extremal positive and negative eigenvalues of the matrix A and the extremal singular
values of the block (B C) (square roots of the extremal eigenvalues of the matrix
(B C)^T (B C)) were approximated by a reduction to the symmetric tridiagonal form of
the matrix using 1500 steps of the symmetric Lanczos algorithm [8] and by a subsequent
Table 3
Spectral properties of the system matrix and its blocks - problem with a cubic domain
NE | spectrum of block A | sing. values of (B C) | negative part of sp(A) | positive part of sp(A)
2000 [0.33e-2, 0.2e-1] [0.927e-1, 2.64] [-2.64, -0.898e-1] [0.335e-2, 2.64]
16000 [0.66e-2, 0.4e-1] [0.467e-1, 2.64] [-2.64, -0.413e-1] [0.679e-2, 2.65]
54000 [0.99e-2, 0.6e-1] [0.312e-1, 2.65] [-2.64, -0.241e-1] [0.104e-1, 2.65]
128000 [0.13e-1, 0.8e-1] [0.234e-1, 2.65] [-2.64, -0.152e-1] [0.136e-1, 2.65]
eigenvalue computation of the resulting tridiagonal matrix using the LAPACK double
precision subroutine DSYEV [1]. Extremal eigenvalues of the diagonal matrix block A
were computed directly by the LAPACK eigenvalue solver element by element. It can
be seen that the computed extremal eigenvalues of the block A are in perfect agreement
with the theory (see Table 3). Similarly, we can observe approximately a linear decrease
of the computed minimal singular value of the matrix block (B C) with respect to the
mesh discretization parameter h. From the computed extremal eigenvalues of the whole
indefinite system A we can conclude that even though our mesh size parameters h are rather
small and give rise to very large system dimensions (see Table 1), they are outside of
the asymptotic inclusion set (1.15). Indeed, for our example and our mesh size interval
the quantities c_1/h and c_2/h are still comparable to c_4 (with one exception);
using Lemma 2.1 in [22], pp. 3-4 (see also [15]) we then obtain an inclusion
set in a modified form which is in good agreement with the results in Table 3.
Using the same technique we have approximated the extremal eigenvalues of the
Schur complement matrices A=A, ( A=A)=A 11 and (( A=A)=A 11 )=B 22 coming from
a problem on a cubic domain. From Table 4 it can be seen that the inclusion set for
the extremal eigenvalues of the first Schur complement matrix A=A coincides with the
bounds given in Theorem 4.1. We can see that the extremal eigenvalues of the second
Schur complement matrix ( A=A)=A 11 are bounded by the extremal eigenvalues of the
matrix A=A. Similarly, the extremal eigenvalues of the third Schur complement matrix
are bounded by the extremal eigenvalues of the matrix ( A=A)=A 11 .
This behaviour is in accordance with the asymptotic bounds given in Theorem 4.2 and
Theorem 4.3.
The smoothed conjugate gradient method has been applied to the resulting three
Schur complement systems (see also the discussion in the previous section). Unpreconditioned
and also preconditioned versions with the IC(0) preconditioner [23], [24] have been used for the
solution of these symmetric positive definite systems. For the solution of
Table 4
Spectral properties of Schur complement matrices - problem with a cubic domain
NE | sp( A=A) | sp(( A=A)=A 11 ) | sp((( A=A)=A 11 )=B 22 )
2000 [0.182e1, 0.173e4] [0.251e1, 0.596e3] [0.272e1, 0.596e3]
54000 [0.693e-1, 0.579e3] [0.966e-1, 0.199e3] [0.992e-1, 0.199e3]
128000 [0.293e-1, 0.434e3] [0.409e-1, 0.149e3] [0.417e-1, 0.149e3]
the whole indefinite system, the minimal residual method has been used. For the preconditioned
version the positive definite block-diagonal preconditioning with ILUT(0,20) for
the decomposition of the block corresponding to the constraints (see e.g. [22], [21]) was used.
The choice of ILUT(0,20) was motivated by our effort to obtain a rather precise factorization
with restricted memory requirements which should be close to the full decomposition
of the block (B C)^T (B C). This preconditioner was found generally better than the
indefinite block-diagonal preconditioning with the same ILUT(0,20) decomposition or than
the indefinite preconditioner discussed in [13] or [21]. The initial approximation x_0 was
set to zero, and the relative residual norm ||r_n||/||r_0|| was used in the stopping criterion.
For the implementation details of iterative solvers we refer to [6]. Our experiments were
performed on an SGI Origin 200 with processor R10000. In Table 5 and Table 6 we
consider iteration counts and CPU times in the minimal residual method (unprecondi-
tioned/preconditioned) applied to the whole system (1.12) and in the conjugate gradient
method (unpreconditioned/preconditioned) applied to the Schur complement systems
with the matrices A=A, ( A=A)=A 11 and (( A=A)=A 11 )=B 22 for a model problem with
a cubic and more realistic domain, respectively. The dependence of the iteration counts
presented in all columns of Table 5 corresponds surprisingly well to the theoretical order
O(NE^{1/3}). The convergence behaviour of the smoothed conjugate gradient method applied
to the third Schur complement system with the matrix (( A=A)=A 11 )=B 22 for this
case is presented in Figure 4. From the results in Table 5 and Table 6 it follows that while
the gain from the solution of the third Schur complement system is rather moderate in
the case of a cubic domain, in the case of the realistic flat domain it becomes more
significant.
6. Conclusions. Successive block Schur complement reduction for the solution of
symmetric indefinite systems has been considered in the paper. It was shown that, due
to the particular structure of the matrices which arise from the mixed-hybrid finite element
discretization of the potential fluid flow problem, the resulting Schur complement matrices
remain sparse. Moreover, their spectral properties do not deteriorate and the iterative
conjugate gradient method can be successfully applied. Theoretical bounds for the
Table 5
Number of iterations (and CPU times) of the conjugate gradient method - problem with a cubic domain
unpreconditioned/preconditioned CG applied to matrix
NE | A | A=A | ( A=A)=A 11 | (( A=A)=A 11 )=B 22
2000 608/76 154/35 87/32 80/32
6.43/1.56 0.76/0.25 0.30/0.16 0.25/0.14
48.17/11.91 3.86/1.50 1.51/0.92 1.30/1.01
16000 1031/138 288/67 164/63 155/63
54000 1358/188 418/95 234/93 228/93
926.98/218.78 104.76/36.92 39.88/24.04 37.94/23.13
128000 1637/229 546/122 303/122 298/122
Table 6
Number of iterations of the conjugate gradient method - realistic model example
unpreconditioned/preconditioned CG applied to matrix
NE | A | A=A | ( A=A)=A 11 | (( A=A)=A 11 )=B 22
50700 2053/336 810/172 448/166 421/166
86700 2959/403 1042/222 578/214 543/214
132300 3420/447 1272/271 706/262 663/262
Fig. 4. Convergence of the smoothed conjugate gradient method applied to the third Schur complement
system ((-A/A)/A 11 )/B 22 (relative residual norms versus iteration number, unpreconditioned and
smoothed conjugate gradient method).
convergence rate of this method in terms of the discretization parameters have been developed
and tested on a model problem example. Numerical experiments indicate that
the given theoretical bounds on the eigenvalue set are realistic not only for the system
matrix and its blocks, but also for the Schur complement matrices. The iteration counts
for the conjugate gradient method are also in good agreement with the theoretical
predictions. Direct solution of the third Schur complement system is also a possible al-
ternative. Nevertheless, its comparison with iterative solvers is outside the scope of this
paper.
In the case of structured grids, a geometric multigrid solver and/or preconditioner for
solving the final Schur complement system can be used. Namely, the stencil from the first
Schur complement, which expresses element-element connectivity in the domain (see the proof
of Lemma 2.2), remains unchanged after the subsequent two reductions, and an appropriate
method could be based on that.
Another approach for the solution of symmetric indefinite systems also seems
promising. As was pointed out in [2], the classical null-space algorithm can be implemented.
A QR factorization of the off-diagonal block (B C) is considered and the solution
of the indefinite system is transformed to the solution of a block lower triangular system,
where the subproblem corresponding to the diagonal block can be solved using the
factorization or an iterative conjugate gradient-type algorithm. This approach
has the advantage of performing the matrix-vector multiplication by the Q factor using
elementary Householder transformations. Although the Q factor may be structurally
full, the elementary Householder vectors may be quite sparse. Moreover, a roundoff error
analysis of the algorithm can be carried out.
7. Acknowledgment. The authors would like to thank Michele Benzi for a careful reading
of the manuscript and the anonymous referees for their many useful comments which significantly
improved the presentation of the paper. We are indebted to Jiri Muzak from the
Department of Mathematical Modelling in DIAMO, s.e., Straz pod Ralskem for providing
us with a model numerical example for the experimental part of this paper and to Jorg
Liesen for giving us the reference [20]. This work was supported by the Grant Agency of
the Czech Republic under grant 201/98/P108 and by the grant AS CR A2030706.
--R
LAPACK User's Guide SIAM
The use of QR factorization in sparse quadratic programming.
Dynamics of Fluids in Porous Media.
Mixed and Hybrid Finite Element Methods.
Relations between Galerkin and norm-minimizing iterative methods for solving linear systems
Iterative methods for large
Geometric mesh partitioning: implementation and exper- iments
Matrix Computations.
Method of conjugate gradients for solving linear systems.
Accuracy and Stability of Numerical Algorithms.
Generalized nested dissection
Sparse QR factorization with applications to linear least squares problems.
Approximate Schur complement preconditioning of the lowest-order nodal discretizations
Automatic mesh partitioning
Geometric separators for
Schur Complement and Statistics.
A preconditioned iterative method for saddle point problems.
Iterative Methods for Sparse Linear Systems.
ILUT: A dual threshold incomplete ILU factorization.
Spectral partitioning works: Planar graphs and
Parallel iterative solution methods for linear systems arising from discretized PDE's.
--TR | indefinite linear systems;preconditioned conjugate residuals;potential fluid flow problem;sparse linear systems;finite element matrices |
587322 | Sparse Serial Tests of Uniformity for Random Number Generators. | Different versions of the serial test for testing the uniformity and independence of vectors of successive values produced by a (pseudo)random number generator are studied. These tests partition the t-dimensional unit hypercube into k cubic cells of equal volume, generate n points (vectors) in this hypercube, count how many points fall in each cell, and compute a test statistic defined as the sum of values of some univariate function f applied to these k individual counters. Both overlapping and nonoverlapping vectors are considered. For different families of generators, such as linear congruential, Tausworthe, nonlinear inversive, etc., different ways of choosing these functions and of choosing k are compared, and formulas are obtained for the (estimated) sample size required to reject the null hypothesis of independent uniform random variables, as a function of the period length of the generator. For the classes of alternatives that correspond to linear generators, the most efficient tests turn out to have $k \gg n$ (in contrast to what is usually done or recommended in simulation books) and to use overlapping vectors. | Introduction
. The aim of this paper is to examine certain types of serial
tests for testing the uniformity and independence of the output sequence of general-purpose
uniform random number generators (RNGs) such as those found in software
libraries. These RNGs are supposed to produce \imitations" of mutually independent
random variables uniformly distributed over the interval [0; 1) (i.i.d. U(0; 1), for short).
Testing an RNG whose output sequence is U amounts to testing the null
hypothesis are i.i.d. U(0; 1)."
To approximate this multidimensional uniformity, good RNGs are usually designed
(theoretically) so that the multiset t of all vectors rst
successive output values, from all possible initial seeds, covers the t-dimensional
unit hypercube [0; 1) t very evenly, at least for t up to some t 0 , where t 0 is chosen
somewhere between 5 and 50 or so. When the initial seed is chosen randomly, this
t can be viewed in some sense as the sample space from which points are chosen at
random to approximate the uniform distribution over [0; 1) t . For more background
on the construction of RNGs, see, for example, [13, 17, 21, 35].
For large t, the structure of t is typically hard to analyze theoretically. Moreover,
even for a small t, one would often generate several successive t-dimensional vectors
of the form (u statistical testing then comes into play
because the dependence structure of these vectors is hard to analyze theoretically. An
excessive regularity of t implies that statistical tests should fail when their sample
P. L'Ecuyer and R. Simard, Departement d'Informatique et de Recherche Operationnelle,
Universite de Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, H3C 3J7, Canada. e-
mail: lecuyer@iro.umontreal.ca and simardr@iro.umontreal.ca. S. Wegenkittl, Institute of
Mathematics, University of Salzburg, Hellbrunnerstrasse 34, A-5020 Salzburg, Austria, e-mail:
ste@random.mat.sbg.ac.at. This work has been supported by the National Science and Engineering
Research Council of Canada grants # ODGP0110050 and SMF0169893, by FCAR-Quebec grant #
93ER1654, and by the Austrian Science Fund FWF, project no. P11143-MAT. Most of it was performed
while the first author was visiting Salzburg University and North Carolina State University,
in 1997-98 (thanks to Peter Hellekalek and James R. Wilson).
sizes approach the period length of the generator. But how close to the period length
can one get before trouble begins?
Several goodness-of-fit tests for H_0 have been proposed and studied in the past
(see, e.g., [13, 9, 26, 41] and references therein). Statistical tests can never certify
an RNG as good. Different types of tests detect different types of deficiencies, and the
more diversified the available battery of tests, the better.
A simple and widely used test for RNGs is the serial test [1, 6, 8, 13], which
operates as follows. Partition the interval [0; 1) into d equal segments. This determines
a partition of [0; 1)^t into k = d^t cubic cells of equal size. Generate nt random
numbers U_0, . . . , U_{nt-1}, construct the points V_i = (U_{it}, . . . , U_{it+t-1}) for i = 0, . . . , n − 1,
and let X_j be the number of these points falling into cell j, for j = 0, . . . , k − 1. Then
(X_0, . . . , X_{k-1}) has the multinomial distribution with parameters
(n; 1/k, . . . , 1/k). The usual version of the test, as described for example in [6, 13, 14]
among other places, is based on Pearson's chi-square statistic
X^2 = Σ_{j=0}^{k-1} (X_j − λ)^2 / λ,
where λ = n/k is the average number of points per cell, and the distribution of X^2
under H_0 is approximated by the chi-square distribution with k − 1 degrees of freedom
when λ ≥ 5 (say).
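A compact sketch of this classical dense-case serial test is given below (our illustration, not the authors' implementation; the parameters are arbitrary):

```python
# Minimal sketch of the classical serial test: partition [0,1)^t into k = d^t
# cells, count the non-overlapping t-tuples, and compute Pearson's chi-square
# statistic together with its dense-case p-value.
import numpy as np
from scipy.stats import chi2

def serial_test(u, t=2, d=8):
    """u: 1-D array of purported U(0,1) values, of length at least n*t."""
    n = len(u) // t
    k = d ** t
    digits = np.minimum((u[: n * t] * d).astype(int), d - 1).reshape(n, t)
    cells = digits @ (d ** np.arange(t - 1, -1, -1))   # cell index in {0,...,k-1}
    counts = np.bincount(cells, minlength=k)
    lam = n / k
    x2 = np.sum((counts - lam) ** 2) / lam
    return x2, chi2.sf(x2, df=k - 1)                   # statistic, right p-value

rng = np.random.default_rng(42)
print(serial_test(rng.random(2_000_000), t=2, d=8))    # lambda = n/k = 15625
```

The sparse variants discussed below reuse exactly the same cell counts but apply a different function to them.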
In this paper, we consider test statistics of the general form
Y = Σ_{j=0}^{k-1} f_{n,k}(X_j),     (1.2)
where f_{n,k} is a real-valued function which may depend on n and k. We are interested
for instance in the power divergence statistic
D_δ = (2 / (δ(δ + 1))) Σ_{j=0}^{k-1} X_j [(X_j/λ)^δ − 1],
where δ > −1 is a real-valued parameter (by δ = 0 we understand the limit as δ → 0). One
could also consider δ ≤ −1, but this seems unnecessary
in the context of this paper. Note that D_1 = X^2. The power divergence statistic
is studied in [39] and other references given there. A more general class is the
φ-divergence family, where f_{n,k}(X_j) = φ(X_j/λ) for some convex function φ. Other forms of
f_{n,k} that we consider are f_{n,k}(x) = I[x ≥ b], I[x = 0], and max(0, x − 1) (where I denotes the
indicator function), for which the corresponding Y is the
number of cells with at least b points, the number of empty cells, and the number of
collisions, respectively.
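For these counting statistics, a small helper (our illustration) shows how N_b, W_b, the number of empty cells, and the number of collisions can all be obtained from the same vector of cell counts:

```python
# Minimal sketch: counting statistics derived from the cell counts X_j.
import numpy as np

def counting_statistics(counts, b=2):
    counts = np.asarray(counts)
    k = len(counts)
    n = int(counts.sum())
    N_b = int(np.sum(counts == b))               # cells with exactly b points
    W_b = int(np.sum(counts >= b))               # cells with at least b points
    N_0 = int(np.sum(counts == 0))               # empty cells
    C = int(np.sum(np.maximum(counts - 1, 0)))   # collisions; also C = n - k + N_0
    assert C == n - k + N_0
    return {"N_b": N_b, "W_b": W_b, "N_0": N_0, "C": C}

print(counting_statistics([0, 2, 1, 0, 3, 0, 1, 1]))   # tiny example, k = 8, n = 8
```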
We are interested not only in the dense case, where λ > 1, but also in the sparse
case, where λ is small, sometimes much smaller than 1. We also consider (circular)
overlapping versions of these statistics, where the non-overlapping points are replaced
by the overlapping points V_i = (U_i, . . . , U_{i+t-1}).
In a slightly modified setup, the constant n is replaced by a Poisson random
variable with mean n. Then, (X_0, . . . , X_{k-1}) becomes a vector of i.i.d. Poisson random
variables with mean λ instead of a multinomial vector, and the distribution of Y
becomes easier to analyze because of this i.i.d. property. For large k and n, however,
the difference between the two setups is practically negligible, and our experiments
are performed with fixed n.
A first-order test observes the value of Y, say y, and rejects H_0 if the p-value
p = P[Y ≥ y | H_0] is much too close to either 0 or 1. The function f is usually chosen so that p too
close to 0 means that the points tend to concentrate in certain cells and avoid the
others, whereas p close to 1 means that they are distributed in the cells with excessive
uniformity. So p can be viewed as a measure of uniformity, and is approximately a
U(0; 1) random variable under H_0 if the distribution of Y is approximately continuous.
A second-order (or two-level) test would obtain N "independent" copies of Y,
say Y_1, . . . , Y_N, transform them by the theoretical distribution F
of Y under H_0, and compare the empirical distribution of the transformed values to the uniform. Such a
two-level procedure is widely applied when testing RNGs (see [6, 13, 16, 29, 30]).
Its main supporting arguments are that it tests the RNG sequence not only at the
global level but also at a local level (i.e., there could be bad behavior over short
subsequences which "cancels out" over larger subsequences), and that it permits one
to apply certain tests with a larger total sample size (for example, the memory size of
the computer limits the values of n and/or k in the serial test, but the total sample
size can exceed n by taking N > 1). Our extensive empirical investigations indicate
that for a fixed total sample size Nn, when testing RNGs, a test with N = 1 is
typically more efficient than the corresponding test with N > 1. This means that
for typical RNGs, the type of structure found in one (reasonably long) subsequence
is usually found in (practically) all subsequences of the same length. In other words,
when an RNG started from a given seed fails spectacularly a certain test, it usually
fails that test for most admissible seeds.
The common way of applying serial tests to RNGs is to select a few specific
generators and some arbitrarily chosen test parameters, run the tests, and check if
H_0 is rejected or not. Our aim in this paper is to examine in a more systematic
way the interaction between the serial tests and certain families of RNGs. From each
family, we take an RNG with period length near 2^e, chosen on the basis of the usual
theoretical criteria, for integers e ranging from 10 to 40 or so. We then examine,
for different ways of choosing k and constructing the points V_i, how the p-value of
the test evolves as a function of the sample size n. The typical behavior is that
p takes "reasonable" values for a while, say for n up to some threshold n_0, then
converges to 0 or 1 exponentially fast with n. Our main interest is to examine the
relationship between n_0 and e. We adjust (crudely) a regression model of the form
log_2 n_0 = γe + ν + ε, where γ and ν are two constants and ε is a small noise term. The result
gives an idea of what size (or period length) of RNG is required, within a given family,
to be safe with respect to these serial tests for the sample sizes that are practically
feasible on current computers. It turns out that for popular families of RNGs such as
the linear congruential, multiple recursive, and shift-register families, the most sensitive tests
choose k proportional to 2^e and yield γ ≈ 1/2, which means that n_0 is
a few times the square root of the RNG's period length.
The results depend of course on the choice of f in (1.2) and on how d and t are
chosen. For example, for linear congruential generators (LCGs) selected on the basis
of the spectral test [6, 13, 24], the serial test is most sensitive when k ≈ 2^e, in which
case n_0 = O(√k). These "most efficient" tests are very sparse (λ ≪ 1). Such large
values of k yield more sensitive tests than the usual ones (for which k ≪ 2^e and
λ ≥ 5 or so) because the excessive regularity of LCGs really shows up at that level
of partitioning. For k ≫ 2^e, the partition eventually becomes so fine that each cell
contains either 0 or 1 point, and the test loses all of its sensitivity.
For fixed n, the non-overlapping test is typically slightly more efficient than the
overlapping one, because it relies on a larger amount of independent information.
However, the difference is typically almost negligible (see Section 5.3) and the non-overlapping
test requires t times more random numbers. If we fix the total number
of U_i's that are used, so that the non-overlapping test is based on n points whereas the
overlapping one is based on nt points, for example, then the overlapping test is typically
more efficient. It is also more costly to compute and its distribution is generally
more complicated. If we compare the two tests for a fixed computing budget, the
overlapping one has an advantage when t is large and when the time to generate the
random numbers is an important fraction of the total CPU time to apply the test.
In Section 2, we collect some results on the asymptotic distribution of Y for
the dense case where k is fixed and n → ∞, the sparse case where both n and k
go to infinity, and the very sparse case where n/k → 0. In
Section 3 we do the same for the overlapping setup. In Section 4 we briefly discuss the
efficiency of these statistics for certain classes of alternatives. Systematic experiments
with these tests and certain families of RNGs are reported in Section 5. In Section 6,
we apply the tests to a short list of RNGs proposed in the literature or available in
software libraries and widely used. Most of these generators fail miserably. However,
several recently proposed RNGs are robust enough to pass all these tests, at least for
practically feasible sample sizes.
2. Power Divergence Test Statistics for Non-Overlapping Vectors. We
briefly discuss some choices of f n;k in (1.2) which correspond to previously introduced
tests. We then provide formulas for the exact mean and variance, and limit theorems
for the dense and sparse cases.
2.1. Choices of f n;k. Some choices of f n;k are given in Table 2.1. In each case,
Y is a measure of clustering: it tends to increase when the points are less evenly
distributed between the cells. The well-known Pearson and loglikelihood statistics,
X^2 and G^2, are both special cases of the power divergence, with δ = 1 and δ = 0,
respectively [39]. H is related to G^2 via a simple linear transformation. The
statistics N_b, W_b, and C count the number of cells that contain exactly b points (for
b ≥ 0), the number of cells that contain at least b points (for b ≥ 1), and the number
of collisions (i.e., the number of times a point falls in a cell that already has a point in
it), respectively. They are related by N_b = W_b − W_{b+1} and C = n − k + N_0.
2.2. Mean and Variance. Before looking at the distribution of Y, we give
expressions for computing its exact mean and variance under H_0.
If the number of points is fixed at n, the counts X_j have a multinomial distribution. Denoting
by p(x) and p(x, y) the probabilities that a given cell contains x points and that two given
cells contain x and y points, respectively, one obtains after some algebraic manipulations:
Table 2.1
Some choices of f n;k and the corresponding statistics.
divergence
loglikelihood
negative entropy
number of cells with exactly b points
number of cells with at least b points
number of empty cells
number of collisions
E[Y] = k Σ_{x=0}^{n} f_{n,k}(x) p(x),     (2.1)
Var[Y] = k Σ_{x=0}^{n} (f_{n,k}(x) − μ)^2 p(x) + k(k − 1) Σ_{x=0}^{n} Σ_{y=0}^{n−x} (f_{n,k}(x) − μ)(f_{n,k}(y) − μ) p(x, y),     (2.2)
where μ = E[Y]/k, p(x) = C(n, x) k^{−x} (1 − 1/k)^{n−x}, and
p(x, y) = (n! / (x! y! (n − x − y)!)) k^{−x−y} (1 − 2/k)^{n−x−y}.
Although containing a lot of summands, these formulas are practical in the sparse
case since for the Y's defined in Table 2.1, when n and k are large and λ is
small, only the terms for small x and y in the above sums are non-negligible. These
terms converge to 0 exponentially fast as a function of x + y. The
first two moments of Y are then easy to compute by truncating the sums after a small
number of terms. For example, with n and k near 1000, the relative errors on E[H] and
Var[H] are already negligible when the sums are truncated long before x + y reaches 1000;
a similar behavior is observed for the other statistics.
The expressions (2.1) and (2.2) are still valid in the dense case, but for larger λ
more terms need to be considered. Approximations for the mean and variance of D_δ,
with error terms in o(1/n), are provided in [39], Chapter 5, page 65.
In the Poisson setup, where n is the mean of a Poisson random variable, the X_j
are i.i.d. Poisson(λ) and the expressions simplify accordingly.
2.3. Limit Theorems. The limiting distribution of D_δ is a chi-square in the
dense case and a normal in the sparse case. Two-moment-corrected versions of these
results are stated in the next proposition. This means that D_δ^{(C)} and D_δ^{(N)} in the
proposition have exactly the same mean and variance as their asymptotic distribution
(e.g., 0 and 1 in the normal case). Read and Cressie [39] recommend this type
of standardization, which tends to be closer to the asymptotic distribution than a
standardization by the asymptotic mean and variance. The two-moment corrections
become increasingly important when δ gets away from around 1. The mean and
variance of D_δ can be computed as explained in the previous subsection. Another
possibility would be to correct the distribution itself, e.g., using Edgeworth-type expansions
[39], page 68. This gives extremely complicated expressions, due in part to
the discrete nature of the multinomial distribution, and the gain is small.
Proposition 2.1. For δ > −1, the following holds under H_0.
(i) [Dense case] If k is fixed and n → ∞, then in the multinomial setup
D_δ^{(C)} ⇒ χ^2(k − 1), where ⇒ denotes convergence in distribution and
χ^2(k − 1) is the chi-square distribution with k − 1 degrees of freedom. In the
Poisson setup, D_δ^{(C)} ⇒ χ^2(k).
(ii) [Sparse case] For both the multinomial and Poisson setups, if k → ∞ and n → ∞,
then D_δ^{(N)} ⇒ N(0, 1), where N(0, 1) is the standard normal distribution.
Proof. For the multinomial setup, part (i) can be found in [39], page 46, whereas
part (ii) follows from Theorem 1 of [11], by noting that all the X_j's here have the same
distribution. The proofs simplify for the Poisson setup, due to the independence. The
(X_j − λ)/√λ are i.i.d. and asymptotically N(0, 1) in the dense case, so
their sum of squares, which is X^2, is asymptotically χ^2(k).
We now turn to the counting random variables N_b, W_b, and C. These are not
approximately chi-square in the dense case. In fact, if n → ∞ for fixed k, each
count X_j → ∞, so N_b → 0 for each fixed b. This implies that W_b → k and C → ∞;
in this limit these random variables are all degenerate.
For the Poisson setup, each X_j is Poisson(λ), so p_b = P[X_j = b] = e^{−λ} λ^b / b! for
b ≥ 0 and N_b is BN(k, p_b), a binomial with parameters k and p_b. If k is large and p_b
is small, N_b is thus approximately Poisson with (exact) mean
E[N_b] = k p_b = k e^{−λ} λ^b / b! for b ≥ 0.
The next result covers other cases as well.
Proposition 2.2. For the Poisson or the multinomial setup, under H_0, suppose
that k → ∞ and n → ∞, and let λ_1 and λ_2 be positive constants.
(i) If λ → 0 and kp_b → λ_1, then N_b and W_b are asymptotically Poisson(λ_1); for b = 2 one also has C asymptotically Poisson(λ_1).
(ii) If kp_0 = ke^{−λ} → λ_2, then N_0 is asymptotically Poisson(λ_2).
(iii) If kp_b → ∞, then N_b, W_b, and C are asymptotically normal (after standardization).
Proof. In (i), since λ → 0, one has for the Poisson case E[N_{b+1}]/E[N_b] = λ/(b + 1) → 0.
The relative contribution of W_{b+1} to the sum W_b = N_b + W_{b+1}
(a sum of correlated Poisson random variables) is then negligible compared
with that of N_b, so N_b and W_b have the same asymptotic distribution (this follows
from Lemma 6.2.2 of [2]). Likewise, under these conditions with b = 2, C has the
same asymptotic distribution as N_2, because C = Σ_{b≥2} (b − 1) N_b and E[C]/E[N_2] → 1.
For the multinomial setup, it has been shown (see [2], Section 6.2) that N_b
and W_b, for b ≥ 2, are asymptotically Poisson(kp_b) when λ → 0, the same as for the
Poisson setup. The same argument as for W_2 applies for C, using again their Lemma
6.2.2, and this proves (i). For (ii), for the Poisson setup, we saw already that N_0
is BN(k, p_0), which is asymptotically Poisson(kp_0) under these conditions.
For the multinomial case, the same result follows from Theorem 6.D of [2], and this
proves (ii). Part (iii) is obtained by applying Theorem 1 of [11].
The exact distributions of C and N_0 under H_0, for the multinomial setup, can be
written in terms of the Stirling numbers of the second kind (see [13], page 71,
where an algorithm is also given to compute all the non-negligible probabilities in
time O(n log n)).
In our implementation of the test based on C, we used the Poisson approximation
for λ ≤ 1/32, the normal approximation for λ > 1/32 and n > 2^15, and the exact
distribution otherwise.
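The following small routine (ours; the switching threshold mirrors the one quoted above, while the exact-distribution branch is omitted) computes left and right p-values for an observed number of collisions under the Poisson or two-moment-corrected normal approximation:

```python
# Minimal sketch: approximate p-values for the collision test.
# Poisson approximation when the density lambda = n/k is small, else a
# two-moment-corrected normal approximation (mean/std supplied by the caller).
from scipy.stats import norm, poisson

def collision_pvalues(c, n, k, mean=None, std=None):
    lam = n / k
    mu = n * n / (2.0 * k)                 # approximate E[C] under H0 for small lam
    if lam <= 1.0 / 32.0 or std is None:
        left = poisson.cdf(c, mu)          # P[C <= c]
        right = poisson.sf(c - 1, mu)      # P[C >= c]
    else:
        z = (c - mean) / std
        left, right = norm.cdf(z), norm.sf(z)
    return left, right

# e.g. k = 2**20 cells, n = 4 * 2**10 points, zero collisions observed:
print(collision_pvalues(0, n=4 * 2**10, k=2**20))      # left p-value about e^{-8}
```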
3. Overlapping vectors. For the overlapping case, let X^{(o)}_{t,j} be the number of
overlapping vectors V_i = (U_i, . . . , U_{i+t-1}) falling into cell j. Now, the formulas (2.1)
and (2.2) for the mean and variance, and the limit theorems in Propositions 2.1 and
2.2, no longer stand. The analysis is more difficult than for the disjoint case because
in general P[X^{(o)}_{t,i} = x] depends on i and P[X^{(o)}_{t,i} = x, X^{(o)}_{t,j} = y] depends on the pair
(i, j) in a non-trivial way.
Theoretical results have been available in the overlapping multinomial setup for
the Pearson statistic in the dense case. Let X^2_{(t)} be the Pearson statistic computed from
the counts X^{(o)}_{t,j}, and let X^2_{(t-1)} be the equivalent of X^2_{(t)}
for the overlapping vectors of dimension t − 1.
Consider the statistic X̃^2 = X^2_{(t)} − X^2_{(t-1)}. Good [8] has shown
that E[X̃^2] = d^t − d^{t-1} exactly (see his Eq. (5) and top of page 280) and that X̃^2 is
asymptotically χ^2(d^t − d^{t-1}) when n → ∞ (page 284). This setup, usually with
λ = n/k ≥ 5 or so, is called the overlapping serial test or the m-tuple test in the literature
and has been used previously to test RNGs (e.g., [1, 29, 30]). The next proposition
generalizes the result of Good to the power divergence statistic in the dense case.
Further generalization is given by Theorem 4.2 of [43].
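To make the overlapping setup concrete, the following sketch (ours) builds the circular overlapping tuples, computes the Pearson statistics in dimensions t and t − 1, and forms the difference statistic used in Good's result:

```python
# Minimal sketch of the (dense) overlapping serial test: X2(t) - X2(t-1),
# asymptotically chi-square with d^t - d^(t-1) degrees of freedom.
import numpy as np
from scipy.stats import chi2

def pearson_overlapping(u, t, d):
    n = len(u)
    digits = np.minimum((u * d).astype(int), d - 1)
    counts = np.zeros(d ** t, dtype=np.int64)
    for i in range(n):                       # circular overlapping t-tuples
        cell = 0
        for j in range(t):
            cell = cell * d + digits[(i + j) % n]
        counts[cell] += 1
    lam = n / d ** t
    return np.sum((counts - lam) ** 2) / lam

u = np.random.default_rng(3).random(200_000)
t, d = 3, 4
x2_diff = pearson_overlapping(u, t, d) - pearson_overlapping(u, t - 1, d)
print("p-value:", chi2.sf(x2_diff, df=d ** t - d ** (t - 1)))
```

The Python loop is slow for large n; a production implementation would vectorize the tuple construction, but the logic is the same.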
Proposition 3.1. Let D_{δ,(t)} be the power divergence statistic for the t-dimensional
overlapping vectors, and define D̃_δ = D_{δ,(t)} − D_{δ,(t-1)}. Then, in the multinomial setup,
if δ > −1, k is fixed, and n → ∞, D̃_δ ⇒ χ^2(d^t − d^{t-1}).
Proof. The result is well known for δ = 1. Moreover, a Taylor series expansion
of D_{δ,(t)} in powers of (X^{(o)}_{t,j} − λ)/λ easily shows that D_{δ,(t)} − D_{1,(t)} converges to 0 in
probability as n → ∞ ([39], Theorem A6.1). Therefore, D̃_{δ,(t)} has
the same asymptotic distribution as D̃_{1,(t)} and this completes the proof.
For the sparse case, where k → ∞ and n → ∞, our
simulation experiments support the conjecture that the analogues of Propositions 2.1(ii) and
2.2 also hold for the overlapping versions of the statistics.
The overlapping empty-cells-count test has been discussed in a heuristic way in a
few papers. For the case t = 2 it is called the overlapping pairs sparse occupancy
(OPSO) test, with a few specific parameters suggested but without the underlying theory.
Marsaglia and Zaman [32] speculate that N_0 should be approximately normally
distributed with mean ke^{−λ} and variance ke^{−λ}(1 − 3e^{−λ}). This makes sense only if λ is
not too large and not too close to zero. We studied this approximation empirically and
found it reasonably accurate only for 2 ≤ λ ≤ 5 (approximately). The approximation
could certainly be improved by refining the variance formula.
Proposition 2.2 (i) and (ii) should hold in the overlapping case as well. Our
simulation experiments indicate that the Poisson approximation for C is very accurate
for (say) λ < 1/32, and already quite good for λ up to 1, when n is large.
4. Which Test Statistic and What to Expect?. The LFSR, LCG, and MRG
generators in our lists are constructed so that their point sets Ψ_t over the entire period
are superuniformly distributed. Thus we may expect, if k is large enough, that
very few cells (if any) contain more than 1 point and that D_δ, C, N_0, N_b and W_b
for b ≥ 2 are smaller than expected. In the extreme case where C = 0, assuming
that the distribution of C under H_0 is approximately Poisson with mean n^2/(2k),
the left p-value of the collision test is P[C ≤ 0] = e^{−n^2/(2k)}. For a fixed
number of cells, this p-value approaches 0 exponentially fast in the square of the
sample size n. For example, it is approximately e^{−8}, e^{−32}, and e^{−128} for n = 4√k, 8√k,
and 16√k, respectively. Assuming that k is near the RNG's period length ρ, this
means that the test starts to fail abruptly when the sample size exceeds
approximately 4 times the square root of the period length. As we shall see, this is
precisely what happens for certain popular classes of generators. If we use the statistic
W_b instead of C, in the same situation, we have E[W_b] ≈ kp_b ≈ n^b/(b! k^{b−1}),
and the sample size required to obtain a p-value less than a fixed (small) constant is
O(k^{(b−1)/b}), which is smallest for b = 2. In this setup, C and N_2 are equivalent to W_2, and choosing
b = 2 gives the most efficient test.
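Solving for the sample size at which this left p-value crosses a given threshold ε gives the rule of thumb below (our derivation, consistent with the numbers above):

```latex
% Sample size at which the collision test rejects on the "too few collisions"
% side, assuming C = 0 is observed and C ~ Poisson(n^2/(2k)) under H_0.
\[
   e^{-n^{2}/(2k)} < \varepsilon
   \quad\Longleftrightarrow\quad
   n > \sqrt{2k\,\ln(1/\varepsilon)} .
\]
% For \varepsilon = 10^{-14} this gives n > \sqrt{2\ln 10^{14}}\,\sqrt{k} \approx 8.03\,\sqrt{k}.
```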
Suppose now that we have the opposite: too many collisions. One simple model
of this situation is the alternative H_1: "the points are i.i.d. uniformly distributed over
k_1 of the k boxes, the other k − k_1 boxes being always empty." Under H_1, W_b is approximately
Poisson with mean μ_1 ≈ n^b/(b! k_1^{b−1}) (if k_1 is large and n/k_1 is small) instead of
n^b/(b! k^{b−1}). Therefore, for a given level ε_0, and x_0 such that
P[W_b ≥ x_0 | H_0] ≤ ε_0, the power of the test at level ε_0 is P[W_b ≥ x_0 | H_1],
where x_0 depends on b. When b increases, for a fixed ε_0, x_0 decreases and μ_1 decreases
as well if n/k_1 is small, so b = 2 maximizes the power unless n/k_1 is large. In fact
the test can have significant power only if μ_1 exceeds a few units (otherwise, with
large probability, one has W_b = 0 and H_0 is not rejected). This means that n must
grow roughly as (b! k_1^{b−1})^{1/b}, which can be approximated by O(k_1^{(b−1)/b}) when k_1
is reasonably large. Then, b = 2 is the best choice. If k_1 is small, μ_1 is maximized
(approximately) by taking b ≈ n/k_1.
The alternative H_1 just discussed can be generalized as follows: suppose that
k_1 of the cells have a probability larger than 1/k, while the other k − k_1 cells have a smaller
probability. H_1 is called a hole (resp., peak, split) alternative if k_1/k is near 1 (resp.,
near 0, near 1/2). We made extensive numerical experiments regarding the power of
the tests under these alternatives and found the following. Hole alternatives can be
detected only when n/k is reasonably large (dense case), because in the sparse case
one expects several empty cells anyway. The best test statistics to detect them are
those based on the number of empty cells N_0, and D_δ with δ as small as possible (e.g.,
δ = 0). For a peak alternative, the power of D_δ increases with δ as a concave
function, with a rate of increase that typically becomes very small for δ larger than
3 or 4 (or higher, if the peak is very narrow). The other test statistics in Table 2.1
are usually not competitive with D_4 (say) under this alternative, except for W_b which
comes close when b ≈ n/k_1 (however it is hard to choose the right b because k_1 is
generally unknown). The split alternative with the probability of the k − k_1 low-probability
cells equal to 0 is easy to detect, and the collision test (using C or W_2) is
our recommendation. The power of D_δ is essentially the same as that of C and W_2,
for most δ, because E[W_3] has a negligible value, which implies that there is almost a
one-to-one correspondence between C, W_2, and D_δ. However, with the small n that
suffices for detection in this situation, E[W_2] is small and the distribution of D_δ is
concentrated on a small number of values, so neither the normal nor the chi-square
is a good approximation of its distribution. Of course, the power of the test would
improve if the high-probability cells were aggregated into a smaller number of cells,
and similarly for the low-probability cells. But to do this, one needs to know where
these cells are a priori.
These observations extend (and agree with) those made previously by several
authors (see [39] and references therein), who already noted that for D_δ the power
decreases with δ for a hole alternative and increases with δ for a peak alternative.
This implies in particular that G^2 and H are better [worse] test statistics than X^2 to
detect a hole [a peak]. In the case of a split alternative for which the cell probabilities
are only slightly perturbed, X^2 is optimal in terms of Pitman's asymptotic efficiency,
while the loglikelihood statistic G^2 is optimal in terms of Bahadur's efficiency (see [39] for details).
5. Empirical Evaluation for RNG Families.
5.1. Selected Families of RNGs. We now report systematic experiments to
assess the effectiveness of serial tests for detecting the regularities in specific families
of small RNGs. The RNG families that we consider are named LFSR3, GoodLCG,
BadLCG2, MRG2, CombL2, and InvExpl. Within each family, we constructed a list of
specific RNG instances, with period lengths near 2^e for (integer) values of e ranging
from 10 to 40. These RNGs are too small to be considered for serious general-purpose
software, but their study gives a good indication of the behavior of larger instances
from the same families. At step n, a generator outputs a number u_n ∈ [0; 1).
The LFSR3s are combined linear feedback shift register (LFSR) (or Tausworthe)
generators with three components, whose outputs are combined by a bitwise exclusive-or.
The parameters k_1, k_2, k_3 (the degrees of the three component recurrences) are
selected so that the k_j are reasonably close to each other, and the sequence {u_n}
has period length (2^{k_1} − 1)(2^{k_2} − 1)(2^{k_3} − 1) and is maximally equidistributed (see
[19] for the definition and further details about these generators).
The GoodLCGs are linear congruential generators (LCGs), of the form
x_n = a x_{n−1} mod m, u_n = x_n/m,
where m is a prime near 2^e and a is selected so that the period length is m − 1 and
so that the LCG has an excellent behavior with respect to the spectral test (i.e., an
excellent lattice structure) in up to at least 8 dimensions. The BadLCG2s have the
same structure, except that their a is chosen so that they have a mediocre lattice
structure in 2 dimensions. More details and the values of a and m can be found in
[24, 26]. The MRG2s are multiple recursive generators of order 2, of the form
x_n = (a_1 x_{n−1} + a_2 x_{n−2}) mod m, u_n = x_n/m,
with period length m^2 − 1 and an excellent lattice structure as for the GoodLCGs [17, 21].
The CombL2s combine two LCGs as proposed in [15],
so that the combined generator has period length (m_1 − 1)(m_2 − 1)/2 and an excellent
lattice structure (see [28] for details about that lattice structure).
InvExpl denotes a family of explicit inversive nonlinear generators of period length
m, defined by x_n = (an + c)^{−1} mod m and u_n = x_n/m,
where m is prime and (an + c)^{−1} denotes the multiplicative inverse modulo m (with 0^{−1} := 0).
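As a small illustration of two of these families (ours; the modulus and multipliers below are arbitrary toy parameters, not the ones in the paper's GoodLCG/InvExpl lists, and are not chosen for full period or good lattice structure), here are generator sketches for an LCG and an explicit inversive generator:

```python
# Minimal sketches of two generator families; parameters are illustrative only.
def lcg(m=2**13 - 1, a=4808, x=1):
    """Multiplicative LCG: x_n = a*x_{n-1} mod m, u_n = x_n/m (toy parameters)."""
    while True:
        x = (a * x) % m
        yield x / m

def inv_expl(m=2**13 - 1, a=4801, c=17):
    """Explicit inversive generator: x_n = (a*n + c)^(-1) mod m, u_n = x_n/m."""
    n = 0
    while True:
        z = (a * n + c) % m
        x = pow(z, m - 2, m) if z != 0 else 0   # modular inverse (m prime), 0 -> 0
        yield x / m
        n += 1

g = lcg()
print([round(next(g), 4) for _ in range(5)])
```

The modular inversion makes the nonlinear generator noticeably slower per call than the linear ones, which is consistent with the timing remark made later in Section 5.3.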
5.2. The Log-p-values. For a given test statistic Y taking value y, let
p_R = P[Y ≥ y | H_0] and p_L = P[Y ≤ y | H_0] denote the right and left p-values. We
define the log-p-value of the test as ℓ = ⌊−log_10 p_R⌋ if p_R ≤ p_L, and ℓ = −⌊−log_10 p_L⌋ otherwise.
For example, ℓ = 2 means that the right p-value is between 0.01 and 0.001. For a given
class of RNGs, given Y, t, and a way of choosing k, we apply the test for different
values of e and with sample size n = 2^{γe+α}, for α = 0, 1, 2, . . . , where the
constant γ is chosen so that the test starts to fail at approximately the same value of
α for all (or most) e. More specifically, we define α̃ (resp. α*) as the smallest value
of α for which the absolute log-p-value satisfies |ℓ| ≥ 2 (resp. |ℓ| ≥ 14) for a majority
of values of e. These thresholds are arbitrary.
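A direct implementation of this log-p-value convention (our illustration, following the reconstruction above) is:

```python
# Minimal sketch: log-p-value from the left/right p-values of a test statistic.
import math

def log_p_value(p_left, p_right):
    """Positive values flag a small right p-value, negative a small left p-value."""
    if p_right <= p_left:
        return math.floor(-math.log10(p_right))    # e.g. 2 if 0.001 < p_right <= 0.01
    return -math.floor(-math.log10(p_left))

print(log_p_value(0.9, 0.004))    # -> 2
print(log_p_value(3e-15, 1.0))    # -> -14 (shown as an arrow in the tables below)
```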
5.3. Test Results: Examples and Summary. Tables 5.1 and 5.2 give the log-p-values
for the collision test applied to the GoodLCGs and BadLCG2s, respectively,
in two dimensions, with k ≈ 2^e. Only the log-p-values ℓ outside of the set {−1, 0, 1},
which correspond to p-values less than 0.01,
are displayed. The symbols ← and → mean ℓ ≤ −14 and ℓ ≥ 14, respectively. The
columns not shown are mostly blank on the left of the table and filled with arrows on
the right of the table. The small p-values appear with striking regularity, at about
the same α for all e, in each of these tables. This is also true for other values of e not
shown in the table. The values of α̃ and α* are slightly smaller in Table 5.2 than in
Table 5.1. The GoodLCGs fail because their structure is too regular (the left p-values
are too small because there are too few collisions), whereas the BadLCG2s have
the opposite behavior (the right p-values are too small because there are too many
collisions; their behavior corresponds to the split alternative described in Section 4).
Table 5.3 gives the values of α̃ and α* for the selected RNG families, for the
collision test in 2 and 4 dimensions. All families, except InvExpl, fail at a sample size
proportional to the square root of the period length ρ. At a small multiple of ρ^{1/2}, the left or
right p-value is less than 10^{−14} most of the time. The BadLCG2s in 2 dimensions are
the first to fail: they were chosen to be particularly mediocre in 2 dimensions and the
test detects it. Apart from the BadLCG2s, the generators always fail the tests due to
excessive regularity. For the GoodLCGs and LFSR3s, for example, there was never
a cell with more than 2 points in it. For the LFSR3s, we distinguish two cases: one
where d was chosen always odd and one where it was always the smallest power of 2
such that k = d^t ≥ 2^e. In the latter case, the number of collisions is always 0, since
no cell contains more than a single point over the entire period of the generator, as a
consequence of the "maximal equidistribution" property of these generators [19]. The
left p-values then behave as described at the beginning of Section 4. The InvExpl generators
resist the tests until after their period length is exhausted. These generators have
their point set Ψ_t "random-looking" instead of very evenly distributed. However, they
are much slower than the linear ones.
We applied the power divergence tests with several values of δ up to 4, and in most cases
the p-values were very close to those of the collision test. In fact, when no cell count
exceeds 2 (which we have observed frequently), there is a one-to-one
correspondence between the values of C and of D_δ for all δ > −1. Therefore, all these
statistics should have similar p-values if both E[W_3] and the observed value of W_3 are
small (the very sparse situation). For the overlapping versions of the tests, the values
of α̃ and α* are exactly the same as those given in Table 5.3. This means that the
Table 5.1
The log-p-values ℓ for the GoodLCGs with period length near 2^e, for the collision test (based on
C), in k ≈ 2^e cells, and sample size n = 2^{e/2+α}. The table entries give the
values of ℓ. The symbols ← and → mean ℓ ≤ −14 and ℓ ≥ 14, respectively.
22 3 11
26 2 6
28 2
overlapping tests are more efficient than the non-overlapping ones, because they call
the RNG t times less often.
We applied the same tests with smaller and larger numbers of cells (four other
choices of k), and found that α̃ and α* increase when k
moves away from 2^e. A typical example: for the GoodLCGs, α̃ took values such as
6, 5, and 7 for those choices of k. The classical way of applying the serial test for RNG testing uses a large
average number of points per cell (dense case). We applied the test based on X^2 to
the GoodLCGs, with k ≈ n/8, and found empirically that the sample size at which the
test starts to fail now increases as O(ρ^{2/3}) instead of O(ρ^{1/2})
as before; i.e., the dense setup with the chi-square approximation is much less efficient
than the sparse setup. We observed the same for D_δ with other values of δ and other
values of t, and a similar behavior for other RNG families.
For the results just described, t was fixed and d varied with e. We now fix
d = 4 (i.e., we take the first two bits of each number) and vary the dimension as
t ≈ e/2, so that k = d^t ≈ 2^e.
Table 5.4 gives the results of the collision test in this setup. Note the change in the values for
the GoodLCGs and BadLCG2s: the tests are less sensitive for these large values of
t.
We also experimented with two-level tests, where a test of sample size n is replicated
N times independently. For the collision test, we use the test statistic C_T, the
total number of collisions over the N replications, which is approximately Poisson
with mean N n^2 e^{−n/k}/(2k) under H_0. For the power divergence tests, we use as test
statistics the sums of the values of D_δ^{(N)} and of D_δ^{(C)}, which are approximately N(0, N)
and χ^2(N(k − 1)) under H_0, respectively. We observed the following: The power
Table 5.2
The log-p-values ℓ for the collision test, with the same setup as in Table 5.1, but for the
BadLCG2 generators.
22
26
28
Table 5.3
Collision tests for RNG families, in t dimensions, with k ≈ 2^e. Recall that α̃ (resp. α*) is the
smallest integer for which |ℓ| ≥ 2 (resp. |ℓ| ≥ 14) for a majority of values of e, in tests with sample
size n = 2^{γe+α}.
RNG family
LFSR3, d power of 2
of a test with parameters (N, n) is typically roughly the same as that of the same test at level
one with sample size n√N. Single-level tests thus need a smaller total
sample size than the two-level tests to achieve the same power. On the other hand,
two-level tests are justified when the sample size n is limited by the memory size of
the computer at hand. (For n ≪ k, the counters X_j are implemented via a hashing
Table 5.4
Collision tests with d = 4 divisions in each dimension and t ≈ e/2 dimensions.
Generators
α̃, α*
CombL2
technique, for which the required memory is proportional to n instead of k.) Another
way of doing a two-level test with D_δ is to compute the p-values for the N replicates
and compare their distribution with the uniform via (say) a Kolmogorov-Smirnov or
Anderson-Darling goodness-of-fit test. We experimented extensively with this as well
and found no advantage in terms of efficiency, for all the RNG families that we tried.
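The hashing idea mentioned in the parenthesis above can be sketched as follows (our illustration): in the very sparse case only the occupied cells are stored, so memory grows with n rather than with k.

```python
# Minimal sketch: counting collisions with a hash map when n << k, so that
# memory is proportional to the number of points, not the number of cells.
import random

def collisions_sparse(points, d, t):
    occupied = {}                       # cell index -> count; only occupied cells kept
    collisions = 0
    for tup in points:
        cell = 0
        for u in tup:
            cell = cell * d + min(int(u * d), d - 1)
        if cell in occupied:
            collisions += 1
        occupied[cell] = occupied.get(cell, 0) + 1
    return collisions

rng = random.Random(7)
t, d = 2, 2**15                         # k = d^t = 2^30 cells, too many to allocate
pts = [(rng.random(), rng.random()) for _ in range(2**16)]
print(collisions_sparse(pts, d, t))     # E[C] under H0 is about n^2/(2k) = 2
```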
6. What about real-life LCGs?. From the results of the preceding section
one can easily predict, conservatively, at which sample size a specific RNG from a
given family will start to fail. We verify this with a few commonly used RNGs, listed
in Table 6.1. (Of course, this list is far from exhaustive.)
Table 6.1
List of selected generators.
LCG1. LCG with
LCG2. LCG with
LCG3. LCG with
LCG4. LCG with
LCG5. LCG with
LCG6. LCG with
LCG7. LCG with
LCG8. LCG with
LCG9. LCG with
RLUX. RANLUX with
WEY1. Nested Weyl with
(see [10]).
WEY2. Shuffled nested Weyl with
(see [10]).
CLCG4. Combined LCG of [25].
CMRG96. Combined MRG in Fig. 1 of [18].
CMRG99. Combined MRG in Fig. 1 of [23].
Generators LCG1 to LCG9 are well-known LCGs, based on the recurrence x
at step i. LCG1 and LCG2 are recommended
by Fishman [7] and a FORTRAN implementation of LCG1 is given by
Fishman [6]. LCG3 is recommended in [14], among others, and is used in the SIMSCRIPT
II.5 and INSIGHT simulation languages. LCG4 is in numerous software
systems, including the IBM and Macintosh operating systems, the Arena and SLAM
II simulation languages (note: the Arena RNG has been replaced by CMRG99 after
we wrote this paper), MATLAB, the IMSL library (which also provides LCG1 and
Table
The log-p-values for the collision test in cells, and sample size
m.
Generator
Table
The log-p-values for the two-level collision test (based on C T ) in
cells, sample size for each replication, and replications.
Generator
LCG5), the Numerical Recipes [38], etc., and is suggested in several books and papers
(e.g., [3, 36, 40]). LCG6 is used in the VAX/VMS operating system and on Convex
computers. LCG5 and LCG9 are the rand and rand48 functions in the standard
libraries of the C programming language [37]. LCG7 is taken from [6] and LCG8 is
used in the CRAY system library. LCG1 to LCG4 have period length 2 31 2, LCG5,
LCG6, AND LCG9 have period length m, and LCG7 and LCG8 have period length
RLUX is the RANLUX generator implemented by James [12], with luxury level
24. At this luxury level, RANLUX is equivalent to the subtract-with-borrow
generator with modulus 43 and proposed in [31] and
used, for example, in MATHEMATICA (according to its documentation). WEY1 is
a generator based on the nested Weyl sequence dened by
(see [10]). WEY2 implements the shued nested Weyl sequence proposed in
dened by
CLCG4, CMRG96, and CMRG99 are the combined LCG of [25], the combined MRG
given in Figure 1 of [18], and the combined MRG given in Figure 1 of [23].
Table
6.2 gives the log-p-values for the collision test in two dimensions, for LCG1
to LCG6, with k m and m. As expected, suspect values start to appear
at sample size n 4
these LCGs are denitely rejected with n
m.
LCG4 has too many collisions whereas the others have too few. By extrapolation,
LCG7 to LCG9 are expected to start failing with n around 2 26 , which is just a bit
more than what the memory size of our current computer allowed when we wrote this
paper. However, we applied the two-level collision test with
. Here, the total number of collisions C T is approximately Poisson with
mean . The log-p-values are in Table 6.3. With a total
sample size of 32 2 24 , LCG7 and LCG8 fail decisively; they have too few collisions.
We also tried 4, and the collision test with overlapping, and the results were
similar.
We tested the other RNGs (the last 5 in the table) for several values of t ranging
from 2 to 25. RLUX passed all the tests for t 24 but failed spectacularly in 25
dimensions. With the log-p-value for
the collision test is are 239 collisions, while E[CjH 0 ] 166). For a
two-level test with the total number of collisions
was C much more than This result is not
surprising, because for this generator all the points V i in 25 dimensions or more lie
in a family of equidistant hyperplanes that are 1=
3 apart (see [20, 42]). Note that
RANLUX with a larger value of L passes these tests, at least for t 25. WEY1
passed the tests in 2 dimensions, but failed spectacularly for all t 3: The points are
concentrated in a small number of boxes. For example, with
a sample size as small as
(' 14). WEY2, CLCG4, CMRG96, and CMRG99 passed all the tests that we tried.
7. Conclusion. We compared several variants of serial tests to detect regularities
in RNGs. We found that the sparse tests perform better than the usual (dense)
ones in this context. The choice of the function f n;k does not seem to matter much.
In particular, collisions count, Pearson, loglikelihood ratio, and other statistics from
the power divergence family perform approximately the same in the sparse case. The
overlapping tests require about the same sample size n as the non-overlapping ones
to reject a generator. They are more e-cient in terms of the quantity of random
numbers that need to be generated.
It is not the purpose of this paper to recommend specic RNGs. For that, we
refer the reader to [22, 23, 27, 33], for example. However, our test results certainly
eliminate many contenders. All LCGs and LFSRs fail these simple serial tests as
soon as the sample size exceeds a few times the square root of their period length,
regardless of the choice of their parameters. Thus, when their period length is less
than 2 50 or so, which is the case for the LCGs still encountered in many popular
software products, they are easy to crack with these tests. These small generators
should no longer be used. Among the generators listed in Table 6.1, only the last four
pass the tests described in this paper, with the sample sizes that we have tried. All
others should certainly be discarded.
--R
Oxford Science Publica- tions
A Guide to Simulation
Inversive congruential pseudorandom numbers: A tutorial
The serial test for sampling numbers and other tests for randomness
A Guide to Chi-Squared Testing
Pseudorandom number generator for massively parallel molecular-dynamics simulations
Asymptotic normality and e-ciency for certain goodness-of- t tests
RANLUX: A Fortran implementation of the high-quality pseudorandom number generator of Luscher
The Art of Computer Programming
Simulation Modeling and Analysis
A random number generator based on the combination of four LCGs
Selection criteria and testing
An object-oriented random-number package with many long streams and substreams
Structural properties for two classes of combined random number generators
Inversive and linear congruential pseudorandom number generators in empirical tests
A current view of random number generators
A new class of random number generators
Asymptotic divergence of estimates of discrete distri- butions
Good ones are hard to
The Standard C Library
Portable random number generators
Thoughts on pseudorandom number generators
Tests for the uniform distribution
On the add-with-carry and subtract-with-borrow random number generators
--TR
--CTR
Makoto Matsumoto , Takuji Nishimura, Sum-discrepancy test on pseudorandom number generators, Mathematics and Computers in Simulation, v.62 n.3-6, p.431-442, 3 March
Peter Hellekalek , Stefan Wegenkittl, Empirical evidence concerning AES, ACM Transactions on Modeling and Computer Simulation (TOMACS), v.13 n.4, p.322-333, October
Pierre L'Ecuyer , Jacinthe Granger-Pich, Combined generators with components from different families, Mathematics and Computers in Simulation, v.62 n.3-6, p.395-404, 3 March
Pierre L'Ecuyer, Software for uniform random number generation: distinguishing the good and the bad, Proceedings of the 33nd conference on Winter simulation, December 09-12, 2001, Arlington, Virginia | random number generation;collision test;goodness-of-fit;m-tuple test;multinomial distribution;OPSO;serial test |
587368 | Preconditioners for Ill-Conditioned Toeplitz Systems Constructed from Positive Kernels. | In this paper, we are interested in the iterative solution of ill-conditioned Toeplitz systems generated by continuous nonnegative real-valued functions f with a finite number of zeros. We construct new w-circulant preconditioners without explicit knowledge of the generating function f by approximating f by its convolution f * KN with a suitable positive reproducing kernel KN. By the restriction to positive kernels we obtain positive definite preconditioners. Moreover, if f has only zeros of even order $\le 2s$, then we can prove that the property $ \int_{-\pi}^{\pi} t^{2k} K_N (t) \, \mbox{d} t \le C N^{-2k} $ $(k=0,\hspace*{1.5pt}\ldots,s)$ of the kernel is necessary and sufficient to ensure the convergence of the PCG method in a number of iteration steps independent of the dimension N of the system. Our theoretical results were confirmed by numerical tests. | Introduction
In this paper, we are concerned with the iterative solution of sequences of \mildly" ill{
conditioned Toeplitz systems
are positive denite Hermitian Toeplitz matrices generated by a continuous
non{negative function f which has only a nite number of zeros. Often these systems
are obtained by discretization of continuous problems (partial dierential equation, integral
equation with weakly singular kernel) and the dimension N is related to the grid parameter
of the discretization. For further applications see [12] and the references therein.
Iterative solution methods for Toeplitz systems, in particular the conjugate gradient method
(CG{method), have attained much attention during the last years. The reason for this is that
the essential computational eort per iteration step, namely the multiplication of a vector
with the Toeplitz matrix AN , can be reduced to O(N log N) arithmetical operations by fast
Fourier transforms (FFT). However, the number of iteration steps depends on the distribution
of the eigenvalues of AN . If we allow the generating function f to have isolated zeros,
then the condition numbers of the related Toeplitz matrices grow polynomial with N and the
CG{method converges very slow [8, 28, 45]. Therefore, the really task consists in the construction
of suitable preconditioners M N of AN so that the number of iteration steps of the
corresponding preconditioned CG{method (PCG{method) becomes independent of N . Here
it is useful to recall a result of O. Axelsson [1, p. 573] relating the spectrum of the coe-cient
matrix to the number of iteration steps to achieve a prescribed precision:
Theorem 1.1. Let A be a positive denite Hermitian (N; N){matrix which has p and q
isolated large and small eigenvalues, respectively:
Let dxe denote the smallest integer x. Then the CG{method for the solution of
requires at most
iteration steps to achieve precision , i.e.
where jjxjj A :=
denotes the numerical solution after n iteration steps.
In literature two kinds of preconditioners were mainly exploited, namely banded Toeplitz
matrices and matrices arising from a matrix algebra AON := f
O 0
where ON denotes a unitary matrix.
For another approach by multigrid methods see for example [23].
Various banded Toeplitz preconditioners were examined [10, 5, 40, 36, 41]. It was proved
that the corresponding PCG{methods converge in a number of iteration steps independent
of N . However, there is the signicant constraint that the cost per iteration of the proposed
procedure should be upper-bounded by O(N log N ). This implies some conditions on the
growth of the bandwidth of the banded Toeplitz preconditioners [41].
The above constraint is trivially fullled if we chose preconditioners from matrix algebras,
where the unitary matrix ON has to allow an e-cient multiplication with a vector in O(N log N)
arithmetical operations. Up to now, the only preconditioners of the matrix algebra class which
ensure the desired convergence of the corresponding PCG{method are the preconditioners proposed
in [31, 25]. Unfortunately, the construction of these preconditioners requires the explicit
knowledge of the generating function f .
Extensive examinations were done with natural and optimal Tau preconditioners [6, 3]. Only
for su-ciently smooth functions, where the necessary smoothness depends on the order of the
zeros of f , the natural Tau preconditioners become positive denite and lead to the desired
location of the eigenvalues of the preconditioned matrices. The optimal Tau preconditioner
is in general a bad choice if f has zeros of order > 2. The reason for this will become clear in
the following sections.
In this paper, we combine our approach in [31] with the approximation of f by its convolution
with a reproducing kernel KN . The kernel approach was given in [15] for positive generating
functions. Interesting tests with B{spline kernels were performed by R. Chan et al. in [14].
The advantage of the kernel approach is that it does not require the explicit knowledge of
the generating function. However, only for our theoretical proofs we need some knowledge
about the location of the zeros of the generating function f . See remarks at the end of this
section. We restrict our attention to positive kernels. This ensures that our preconditioners
are positive denite. Suppose that f has only zeros of even order 2s. Then we prove that
under the \moment condition"
Z
on the kernels KN , the eigenvalues of M 1
N AN are contained in some interval [a; b] (0 < a
b < 1) except for a xed number (independent of N) of eigenvalues falling into [b; 1) so that
PCG converges in O(1) steps.
Note that the above kernel property with su-ciently smooth f the Jackson
result
denotes the modulus of continuity. On the other hand, the classical saturation result
of P. P. Korovkin [29, 21] states that we cannot expect a convergence speed of jjf KN f jj 1
better than N 2 even in the presence of very regular functions f .
This paper is organized as follows: In Section 2, we introduce our w{circulant positive denite
preconditioners. We show how the corresponding PCG{method can be implemented with
only O(N) arithmetical operations per step more than the original CG{method. Section 3
is concerned with the location of the eigenvalues of the preconditioned matrices. We will see
that under some assumptions on the kernel the number of CG{iterations is independent of
N . Special kernels as Jackson kernels and B{spline kernels are considered in Section 5. In
Section 6, we sketch how our ideas can be extended to (real) symmetric Toeplitz matrices with
trigonometric preconditioners and to doubly symmetric block Toeplitz matrices with Toeplitz
blocks. Finally, Section 7 contains numerical results.
After sending our manuscript to SIAM J. Sci. Comput., R. H. Chan informed us that his
group has got similar results as in our preprint. See [16] and for a rened version [17]. The
construction of circulant preconditioners of R. H. Chan et al. is based on Jackson kernels and
the proofs are dierent from ours. In [16], the authors prove convergence of the corresponding
PCG{method in O(log N) iteration steps. By a trick (see [16, Theorem 4.2]), which can also
be applied to our w{circulant preconditioners, the R. H. Chan et al. need no knowledge about
the location of the zeros of f .
2 Preconditioners from kernels
Let C 2 denote the Banach space of 2{periodic real{valued continuous functions with norm
We are interested in the solution of Hermitian Toeplitz systems
a
generated by a non{negative function f 2 C 2 which has only a nite number of zeros. By
[10], the matrices AN (f) are positive denite such that (2.1) can be solved by the CG{method.
Unfortunately, since the generating function f 2 C 2 has zeros, the related Toeplitz matrices
are asymptotically ill{conditioned and the CG{method converges very slow. To accelerate
the convergence of the CG{method, we are looking for suitable preconditioners of AN , where
we do not suppose the explicit knowledge of the generating function f . To reach our aim, we
use reproducing kernels. This method was originally proposed for Toeplitz matrices arising
from positive functions f 2 C 2 in [15].
In [14], R. Chan et al. showed by numerical tests that preconditioners from special kernels
related to B{splines can improve the convergence of the CG{method also if f 0 has zeros
of various order. A theoretical proof of R. Chan's results was open up to now.
In this paper, we restrict our attention to even trigonometric polynomials
KN
c N;k cos
and KN 0; then KN is called a positive (trigonometric) kernel. As main examples of such
kernels we consider generalized Jackson polynomials and B{spline kernels in Section 4. For
denote the convolution of f with KN , i.e.
or equivalently in the Fourier domain
a k (f)c N;k e ikx : (2.5)
We consider so{called reproducing kernels KN (N 2 N) with the property that
lim
kf f N k
for all f 2 C 2 .
We chose grids GN (N 2 N) consisting of equispaced nodes
xN;l
such that f(x N;l 1. Note that the choice of the grids requires
some preliminary information about the location of the zeros of f . By a trick (cf. [16]) this
restriction can be neglected if we accept some more outlyers. We consider matrices of the
with
e 2ijk=N
diag (f(x N;l
Obviously, the matrices M N can be written as
~ a 0 ~ aN 1 e iNwN ~ a 1 e iNwN
~ a 1 ~
a 0
with
~ a
a k (f) := 1
These are ( e iNwN ){circulant matrices (see [20]). In particular, we obtain circulant matrices
for skew{circulant matrices for
N .
As preconditioners for (2.1), we suggest matrices of the form
with suitable positive reproducing kernels KN . By (2.5), the construction of these preconditioners
requires only the knowledge of the Toeplitz matrices AN . It is not necessary to know
the generating function f explicitly. However, for the theoretical results in this paper, we
must have some information about the location of the zeros of f . Note that by a trick in [16]
this information is also super
uous. Here we point out that the auxiliary nontrivial problem
of nding some crucial analytic properties of the generating function f has been treated and
partially solved in [40].
Moreover, our preconditioners have the following desirable properties:
1. Since f 0 with a nite number of zeros and KN is a positive kernel, it follows by (2.4)
that f N > 0. Thus, the matrices M N (f N ) are positive denite.
2. In the following section, we will prove that under certain conditions on the kernels KN
the eigenvalues of M 1
N AN are bounded from below by a positive constant independent
of N and that the number of isolated eigenvalues of M 1
N AN is independent of N . Then,
by Theorem 1.1, the number of PCG{steps to achieve a xed precision is independent
of N .
3. By construction (2.8), the multiplication of M N with a vector requires only O(N log N)
arithmetical operations by using FFT{techniques. By a technique presented in [26] it is
possible to implement a PCG{method with preconditioner M N which takes only O(N)
instead of O(N log N) arithmetical operations per iteration step more than the original
CG{method with respect to AN .
3 Eigenvalues of M 1
In this section, we prove that under certain assumptions on the kernels KN the eigenvalues
of M 1
N AN are bounded from below by a positive constant independent of N and that the
number of isolated eigenvalues of M 1
N AN is independent of N . For the proof of our main
result, we need some preliminary lemmata.
Lemma 3.1 Let p 2 C 2 be a non{negative function which has only a nite number of zeros.
be a positive function with
Then, for f := ph and any N 2 N, the eigenvalues of A 1
lie in the interval
The proof can be found for example in [5, 10, 31]. A more sophisticated version for f; g 2 L 1
was proved in [38, 37].
Lemma 3.2 Let p be a real{valued non{negative trigonometric polynomial of degree s.
Let N 2s: Then at most 2s eigenvalues of M N (p) 1 AN (p) dier from 1.
Proof: For arbitrary f 2 C 2 with pointwise convergent Fourier series, we obtain by replacing
by the Fourier series of f at xN;l
~
a
a
a
e 2ilk=N e 2ilj=N
r2Znf0g
a j+rN e iw N k e iw N (j+rN)
e 2ilk=N e 2ilj=N
r2Znf0g
a k+rN e iw
This is well{known as aliasing eect. Then it follows that
where
BN (f) := (b j k (f)) N 1
r2Znf0g
a k+rN (f) e iw
We consider is of degree smaller than s N
2 , we have that b k
jkj N 1 s. Consequently, BN (p) is of rank 2s. Now the assertion follows by (3.1).
In the sequel, we restrict our attention to Toeplitz matrices having a non{negative generating
function f 2 C 2 with a zero of even order 2s (s 2 N) at
We use the trigonometric polynomial
s
of degree s which has also a zero of order 2s at
The convergence of our PCG{method is related to the behavior of the grid functions
precisely, for the proof of our main theorem, we need
that fq s;N (x)g N2N is bounded for all x 2 GN from above and below by positive constants
independent of N . This will be the content of the following lemmata.
First, we see that the above property follows immediately for all grid points x 2 GN having
some distance independent of N from the zero of f :
Lemma 3.3 Let GN be dened by (2.7) with wN 6= 0. Let fKN g N2N be a sequence of
positive even reproducing kernels and let q s;N be given by (3.3). Then, for xN 2 GN \ [a; b]
and for every " > 0 there exists N(") such that
for all N N(").
Proof: Since xN 2 [a; b] (N 2 N) for some a > 0; b < 2, we have that
Further, we obtain by (2.6) that for every " > 0 there exists N(") such that
for all N N("). By rewriting (3.3) in the form
we obtain the assertion.
By Lemma 3.3, it remains to consider the sequences fq s;N
for N !1 or with xN ! 2 for N !1. Since both cases require the same ideas, we consider
lim
The existence of a lower bound of fq s;N (x N )g N2N does also not require additional properties
of the kernel KN :
Lemma 3.4 Let GN be dened by (2.7) with wN 6= 0. Let fKN g N2N be a sequence of
positive even reproducing kernels and let q s;N be given by (3.3). Then, for xN 2 GN with
lim
there exists a constant > 0 independent of N such that
q s;N
Proof: By denition of q s;N and p s;N , we have that
and since p s 0 and KN 0, we obtain for xN < that
Z
xN
The polynomial p s is monotonely increasing on [0; ]. Thus
Z
xN
KN
Since KN is even and fullls (2.3), we get for any sequence xN 2 GN
0 that
xN
It remains to examine if
for any xN 2 GN with lim
Here the \moment property" comes into the play.
Lemma 3.5 Let GN (n 2 N) be dened by (2.7) with
Let fKN g N2N be a sequence of positive even kernels and let q s;N (s 1) be given by (3.3).
Then there exists a constant < 1 independent of N such that
for all xN 2 GN with lim
only if KN fullls the \moment property"
Z
Note that the restriction (3.4) on the grids GN means that we have for any xN 2 GN that
w=N xN .
Proof: Since sin 2 x x 2 for all x 2 R, we obtain by (3.2) that
Similarly, we have for any xed 0 y =2 that
sin 2 x
and hence
2s
2s
x 2s
Using (3.6), we conclude by KN 0 that
Z
Z
2s
2s
Z
and since KN is even
s
2s
x 2s 2k
Z
Let KN satisfy (3.5). Then
s
2s
By (3.4), we have for any grid sequence xN 2 GN that xN w=N . Consequently,
By (3.7) this implies that there exists < 1 independent of N so that q s;N
On the other hand, we see by (3.7) with y := =4 that
Z
2s
x 2s 2k
Z
By denition of GN , there exists a grid sequence fxN g N2N so that xN approaches zero as N 1
1). Assume that KN does not fulll (3.5). Then we obtain for the above sequence
that p s;N while we have by (3.6) that p s
cannot be bounded from above. This completes the proof.
By Lemma 3.3 { Lemma 3.5, we obtain that for grids GN dened by (2.7) and (3.4) and for
even positive reproducing kernels with (3.5) there exist
Now we can prove our main theorem.
Theorem 3.6 Let fAN (f)g N2N be a sequence of Toeplitz matrices generated by a non{
negative function f 2 C 2 which has only a zero of order 2s (s 2 N) at the
grids GN be dened by (2.7) and (3.4). Assume that fKN g N2N is a sequence of even positive
reproducing kernels satisfying (3.5). Finally, let M N (f N ) be dened by (2.10). Then we have:
i) The eigenvalues of M 1
N (f N )AN (f) are bounded from below by a positive constant independent
of N .
ii) For N 2s, at most 2s eigenvalues of M N (f N are not contained in the interval
Here , are given by (3.8) and h min ; h max are dened as in Lemma 3.1,
where h := f=p s .
Proof: 1. To show ii), we consider the Rayleigh quotient
By Lemma 3.1, we have that
and thus, since the second factor on the right{hand side of (3.9) is positive
By Lemma 3.2, we know that
with a matrix RN (2s) of rank 2s and consequently
and
Since KN and p s are non{negative, we obtain by (2.4) and by denition of h that
This implies by denition of M N (f N ) that
and further by (3.3), (3.8) and since 0 < < 1 that
for all u 6= oN . Assume that RN (2s) has s 1 positive eigenvalues. Then, by properties
of the Rayleigh quotient and by Weyl's theorem [24, p. 184] at most s 1 eigenvalues of
are larger than hmax
. Similarly, we obtain by consideration of the left{
hand inequality of (3.10) that at most 2s s 1 eigenvalues of M N (f N are smaller
than h min
hmax .
2. To show i), we rewrite (3.9) as
As in the rst part of the proof, we see that this implies
Consequently, it remains to show that there exists a constant 0 < c < 1 such that
c
By (3.1), this is equivalent to
By the special structure of BN (p s ) and AN (p s ), assertion i) follows as in the proof of Theorem
4.3 in [3]. This completes the proof.
By the following theorem, the \moment property" (3.5) of the kernel is also necessary to
obtain good preconditioners.
Theorem 3.7 Let fAN (f)g N2N be a sequence of Toeplitz matrices generated by a non{
negative function f 2 C 2 which has only a zero of order 2s (s 2 N) at Let the grids
GN be dened by (2.7) and (3.4). Assume that fKN g N2N is a sequence of even positive reproducing
kernels which do not fulll (3.5). Finally, let M N (f N ) be dened by (2.10). Then,
for arbitrary " > 0 and arbitrary c 2 N, there exist N("; c) such that for all N N("; c) at
least c eigenvalues of M N (f N are contained in (0; ").
The proof follows again the lines of the fundamental paper of F. Di Benedetto [3, Theorem
5.4]. We include the short proof with respect to our background.
Proof: By the proof of Theorem 3.6, we have for all u 6= o that
Hence it remains to show that M N (p s;N has an arbitrary number of eigenvalues
in (0; ") for N su-ciently large. By (3.2) and [32, Theorem 3.1], we have that
diag
2s
I
diag
s
I
j;k=0 is an orthogonal matrix and where stoep a 0
and shank a 0 denote the symmetric Toeplitz matrix and the persymmetric Hankel matrix
with rst row a 0 , respectively. Deleting the rst s 1 and the last s 1 rows and columns of
we obtain AN (p s ). Thus, we have by Courants minimax theorem for the eigenvalues
2s
2s
The later result is due to a technique of D. Bini et al. [7, Proposition 4.2]. Consider AN (p s )
this matrix has positive eigenvalues, while we have for arbitrary " > 0
that
2s
Since KN does not fulll (3.5), we have by Lemma 3.5 that
lim
Thus, for j c independent of N and for su-ciently large N N("; c) the values j (AN (p s )
negative. The eigenvalues of AN (p s are continuous functions
of t. Since the smallest c eigenvalues pass from a positive value for to a negative
value for ") such that AN (p s
zero. This is equivalent to the fact that M N (p s;N has an eigenvalue "
we are done.
The generalization of the above results for generating functions with dierent zeros of even
order
is straightforward (see [18]). By applying the polynomial
Y
instead of p s and following the above lines, we can show that for grids GN of the form (2.7)
with xN;l 6= y m) and for kernels KN fullling (3.5) with
there exist constants 0 < < 1 such that for all x 2 GN
(p KN )(x)
4 Jackson polynomials and B{spline kernels
In this section, we consider concrete positive reproducing kernels KN with property (3.5).
The generalized Jackson polynomials of degree N 1 are dened by
sin(nx=2)
sin x=2
2m
determined by (2.3) [22, p. 203]. It is well{known
[22, p. 204], that the generalized Jackson polynomials J m;N are even positive reproducing
which satisfy property (3.5) for
In particular, J 1;N is the Fejer kernel which is related to the optimal circulant preconditioner
[19, 15]. However, the Fejer kernel does not fulll (3.5) for s 1 such that we cannot expect
a fast convergence of our PCG{method if f has a zero of order 2. Our numerical tests
conrm this result.
By Theorem 3.6, the generalized Jackson polynomials can be used
for the construction of preconditioners. Note that preconditioners related to Jackson kernels
were also suggested in [39]. However, the construction of the Fourier coe-cients of J m;N seems
to be rather complicated. See also [10]. Therefore we prefer the following B{spline kernels.
The \B{spline kernels" were introduced by R. Chan et al. in [14]. The authors showed by
numerical tests that preconditioners from B{spline kernels of certain order seem to be good
candidates for the PCG{method. Applying the results of the previous section, we are able to
show the theoretical reasons for these results, at least for the positive B{spline kernels.
Let [0;1) denote the characteristic function of [0; 1). The cardinal B{splines Nm (m 1) of
order m are dened by
and their centered version by
Note that Mm is an even function with supp
where
sinc x :=
sin x
Let the B{spline kernels B m;N be dened by [14]
mk
Note that B 1;N again coincides with the Fejer kernel.
For the construction of the preconditioner, it is important, that the Fourier coe-cient c
can be computed in a simple way for example by applying a simplied
version of de Boor's algorithm [9, p. 54].
By (4.1), it is easy to check that B m;N is a dilated, 2{periodized version of (sinc x
sinc
Thus
Moreover, we obtain similar to the generalized Jackson polynomials:
Lemma 4.1 The B{spline kernels B m;N satisfy (3.5) if and only if m s + 1.
Proof: By (4.2), we obtain that
Z
Z
sinc
dt
sin N
dt
2k sin u
2m
du
c N 2kZu 2k 2m du C N 2k
for 1. Thus, for m s
On the other hand, we have that
Z
Z
sin N
dt
2k sin u
2m
du
If m k, then the last integral is not bounded for N !1. Thus, for m s, the kernel B m;N
does not fulll property (3.5).
By Theorem 3.6, the B{spline kernels preconditioners
5 Generalizations of the preconditioning technique
In this section, we sketch how our preconditioners can be generalized to (real) symmetric
Toeplitz matrices and to doubly symmetric block Toeplitz matrices with Toeplitz blocks. We
will do this in a very short way since both cases do not require new ideas. However, we have
to introduce some notation to understand the numerical tests in Section 7.
Symmetric Toeplitz matrices
First, we suppose in addition to Section 2 that the Toeplitz matrices AN 2 R N;N are symmet-
ric, i.e. the generating function f 2 C 2 is even. Note that in this case, the multiplication of
a vector with AN can be realized using fast trigonometric transforms instead of fast Fourier
transforms (see [32]). In this way, complex arithmetic can be completely avoided in the iterative
solution of (2.1). This is one of the reasons to look for preconditioners of type (2.8),
where the Fourier matrix F N is replaced by trigonometric matrices corresponding to fast
trigonometric transforms. In practice, four discrete sine transforms (DST I { IV) and four
discrete cosine transforms (DCT I { IV) were applied (see [46]). Any of these eight trigonometric
transforms can be realized with O(N log N) arithmetical operations (see for example
[2, 44]). Likewise, we can dene preconditioners with respect to any of these transforms. In
this paper, we restrict our attention to the DST{II and DCT{II, which are determined by the
following transform matrices:
1). Similar to (2.10), (2.8), we
introduce the preconditioners (see [31])
diag
l
diag
l
We recall, that for the construction these preconditioners no explicit knowledge of the generating
function is required. Since f is even, the grids GN are simply chosen as GN := fx N;l :=
l
for the DCT{II and
the DST{II preconditioners, respectively. If f(x N;l then we can prove
Theorem 3.6 with respect to the preconditioners (5.1) in a completely similar way. We have
only to replace the decomposition (3.1) by
for the DCT{II and for the DST{II, respectively. See also [31].
Remark: Let
O 0
denote the matrix algebra with respect to the unitary matrix ON . Then the optimal preconditioner
of AN in AON is dened by
denotes the Frobenius norm. As mentioned in the previous section, the optimal
preconditioner in A FN coincides with our preconditioner (2.10) dened with respect to the
Fejer kernel B 1;N and with in (2.7). It is easy to check (see [33]) that the optimal
preconditioner in AON , where ON 2 fC IV
N g is equal to our preconditioner M N (f N ; ON )
in (5.1) dened with respect to ON and with respect to the Fejer kernel. Unfortunately,
the Fejer kernel preconditioners do not lead to a fast convergence of the PCG{method if the
generating function f of AN has a zero of order 2s 2.
In contrast to these results, the optimal preconditioners in AON with ON dened by the DCT
I { III or by the DST I { III do not coincide with the corresponding Fejer kernel preconditioner
In literature [6, 3], so{called optimal Tau preconditioners were of special
interest. Using our notation, optimal Tau preconditioners are the optimal preconditioners
with respect to the DST{I as unitary transform. The optimal Tau preconditioner realizes a
fast convergence of the PCG{method if the generating function f of AN has only zeros of
order 2s 2 [6].
Block Toeplitz matrices with Toeplitz blocks
Next we are interested in the solution of doubly symmetric block Toeplitz systems with
Toeplitz blocks. The construction of preconditioners with the help of reproducing kernels
was applied to well{conditioned block Toeplitz systems in [27]. Following these lines, we
generalize our univariate construction to ill{conditioned block Toeplitz systems with Toeplitz
blocks. In the next Section we will present good numerical results also for the block case.
However, in general, it is not possible to prove the convergence of PCG in a number of iteration
steps independent of N . Here we refer to [34].
Note that as in the univariate case there exist banded block Toeplitz preconditioners with
banded Toeplitz blocks which ensure a fast convergence of the corresponding PCG{method
[35]. See also [4, 30].
We consider systems of linear equations
where AM;N denotes a positive denite doubly symmetric block Toeplitz matrix with Toeplitz
blocks (BTTB matrix), i.e.
r;s=0 with A r := (a r;j k
and a r;j = a jrj;jjj . We assume that the matrices AM;N are generated by a real{valued 2{
periodic continuous even function in two variables, i.e.
a j;k := 1
Z'(s; ds dt :
Note that the multiplication of a vector with a BTTB matrix requires only O(MN log(MN))
arithmetical operations (see [33]). We dene our so{called \level{2" preconditioners by
N;M
(S II
with
by
r=0 with x kN+j := x
6 Numerical Examples
In this section, we conrm our theoretical results by various numerical examples. The fast
computation of the preconditioners and the PCG{method were implemented in MATLAB,
where the C{programs for the fast trigonometric transforms were included by cmex. The
algorithms were tested on a Sun SPARCstation 20.
As transform length we choose and as right{hand side b of (2.1) the vector consisting of
entries \1". The PCG{method started with the zero vector and stopped if kr (j) k 2 =kr (0) k 2 <
denotes the residual vector after j iterations.
We restrict our attention to preconditioners (2.10) and (5.1) constructed from B{spline kernels
. The following tables show the number of iterations of the corresponding
PCG{method to achieve a xed precision. The rst row of each table contains the exponent
n of the transform length in the univariate case and the block length N in the block
Toeplitz case. The kernels are listed in the rst column and the applied unitary transform in
the second column of each table. Here F w
N := W N F N with W N := diag( e ik=N
wN := =N in (2.7). For comparison, the second row of each table contains the number of
PCG{steps with preconditioner M N (f) dened by (2.8). These preconditioners, which can
be constructed only if the generating function f is known, were examined in [31].
We begin with symmetric ill{conditioned Toeplitz matrices AN (f) arising from the generating
functions
ii) (see [3, 10, 11, 14, 31, 36]): f(x) := x 4
The
Tables
present the number of iteration steps with dierent preconditioners.
As expected, for it is not su-cient to choose a preconditioner based on the Fejer
it is not su-cient to choose a preconditioner based on
the cubic B{spline kernel in order to keep the number of iterations independent
of N .
On the other hand, we have a similar convergence behavior for the dierent unitary transforms.
This is no surprise for F w
N and for S II
N . However, for F N and for C II
N , the corresponding
grids GN contain the zero of f , namely This was excluded in Theorem 3.6. In our
numerical tests it seems to play no rule that a grid point meets the zero of f .
Our next example in Table 3 conrms our theoretical results for the function
with zeros of order 2 at
Finally, let us turn to BTTB matrices AN;N . In our examples, the matrices AN;N are generated
by the functions
iv) (see [4]): '(s;
v) (see [30, 31]): '(s;
vi) (see [30, 31]): '(s;
These matrices are ill{conditioned and the CG{method without preconditioning, with Strang{
type{preconditioning or with optimal trigonometric preconditioning converges very slow (see
[30, 33, 4]). Our preconditioning (5.2) leads to the number of iterations in the Tables 4 { 6.
In [34], we proved that the number of iteration steps of PCG is independent of N in Example
iv) and we explained the convergence behavior of PCG for the other examples. To our
knowledge, there does not exist a faster PCG{method if the generating function ' is unknown
up to now.
Note that by [42, 43] any multilevel preconditioner is not optimal in the sense that a cluster
cannot be proper [45].
Summary
. We suggested new positive denite w{circulant preconditioners for sequences of
Toeplitz systems with polynomial increasing condition numbers. The construction of our preconditioners
is based on the convolution of the generating function with positive reproducing
kernels and, by working in the Fourier domain, does not require the explicit knowledge of the
generating function. As main result we proved that the quality of the preconditioner depends
on a \moment property" of the corresponding kernel which is related to the order of the zeros
of the generating function. This explains, e.g. why optimal circulant preconditioners arising
from convolutions with the Fejer kernels fail to be good preconditioners if the generating
function has zeros of order 2.
--R
Iterative Solution Methods.
Fast polynomial multiplication and convolution related to the discrete cosine transform.
Analysis of preconditioning techniques for ill-conditioned Toeplitz matrices
Preconditioning of block Toeplitz matrices by sine transforms.
preconditioning for Toeplitz matrices.
Capizzano. A unifying approach to abstract matrix algebra preconditioning.
Spectral and computational properties of band symmetric Toeplitz matrices.
Toeplitz preconditioners for Toeplitz systems with nonnegative generating functions.
Toeplitz preconditioners for Hermitian Toeplitz systems.
Conjugate gradient methods of Toeplitz systems.
Sine transform based preconditioners for symmetric Toeplitz systems.
Circulant preconditioners from B-splines
Circulant preconditioners constructed from kernels.
Circulant preconditioners for ill-conditioned Hermitian Toeplitz matrices
The best circulant preconditioners for Hermitian Toeplitz systems.
The best circulant preconditioners for Hermitian Toeplitz systems II: The multiple-zero case
An optimal circulant preconditioner for Toeplitz systems.
Circulant Matrices.
The Approximation of Continuous Functions by Positive Linear Operators.
Constructive Approximation.
Multigrid methods for Toeplitz matrices.
Matrix Analysis.
Iterative methods for ill-conditioned Toeplitz matrices
Iterative methods for Toeplitz-like matrices
A note on construction of circulant preconditioners from kernels.
Linear Operators and Approximation Theory.
Band preconditioners for block-Toeplitz -Toeplitz-block-systems
Preconditioners for ill-conditioned Toeplitz matrices
Optimal trigonometric preconditioners for nonsymmetric Toeplitz systems.
Trigonometric preconditioners for block Toeplitz systems.
Preconditioning of Hermitian block-Toeplitz-Toeplitz-block matrices by level-1 preconditioners
Preconditioning strategies for asymptotically ill-conditioned block Toeplitz systems
An ergodic theorem for classes of preconditiones matrices
On the extreme eigenvalues of Hermitian (block) Toeplitz matrices
Capizzano. Korovkin theorems and linear positive gram matrix algebra approximations of toeplitz matrices.
Capizzano. How to choose the best iterative strategy for symmetric Toeplitz systems
Capizzano. Toeplitz preconditioners constructed from linear approximation pro- cesses
Any circulant-like preconditioner for multi-level matrices is not optimal
How to prove that a preconditioner can not be optimal.
A polynomial approach to fast algorithms for discrete Fourier- cosine and Fourier-sine transforms
Circulant preconditioners with unbounded inverses.
Fast algorithms for the discrete W transform and for the discrete Fourier transform.
--TR
--CTR
D. Noutsos , S. Serra Capizzano , P. Vassalos, Spectral equivalence and matrix algebra preconditioners for multilevel Toeplitz systems: a negative result, Contemporary mathematics: theory and applications, American Mathematical Society, Boston, MA, 2001
D. Noutsos , S. Serra Capizzano , P. Vassalos, Matrix algebra preconditioners for multilevel Toeplitz systems do not insure optimal convergence rate, Theoretical Computer Science, v.315 n.2-3, p.557-579, 6 May 2004 | preconditioners;reproducing kernels;ill-conditioned Toeplitz matrices;CG method |
587369 | Symplectic Balancing of Hamiltonian Matrices. | We discuss the balancing of Hamiltonian matrices by structure preserving similarity transformations. The method is closely related to balancing nonsymmetric matrices for eigenvalue computations as proposed by Osborne [J. ACM, 7 (1960), pp. 338--345]and Parlett and Reinsch [Numer. Math., 13 (1969), pp. 296--304] and implemented in most linear algebra software packages. It is shown that isolated eigenvalues can be deflated using similarity transformations with symplectic permutation matrices. Balancing is then based on equilibrating row and column norms of the Hamiltonian matrix using symplectic scaling matrices. Due to the given structure, it is sufficient to deal with the leading half rows and columns of the matrix. Numerical examples show that the method improves eigenvalue calculations of Hamiltonian matrices as well as numerical methods for solving continuous-time algebraic Riccati equations. | Introduction
. The eigenvalue problem for Hamiltonian matrices
A G
where A; G; Q 2 R n\Thetan and G, Q are symmetric, plays a fundamental role in many
algorithms of control theory and other areas of applied mathematics as well as computational
physics and chemistry. Computing the eigenvalues of Hamiltonian matrices is
required, e.g., when computing the H1 -norm of transfer matrices (see, e.g., [9, 10]),
calculating the stability radius of a matrix ([13, 29]), computing response functions
[22], and many more. Hamiltonian matrices are also closely related to continuous-time
algebraic Riccati equations (CARE) of the form
with A; G; Q as in (1.1) and X 2 R n\Thetan is a symmetric solution matrix. Many numerical
methods for solving (1.2) are based on computing certain invariant subspaces
of the related Hamiltonian matrices; see, e.g., [19, 21, 26, 28]. For a detailed discussion
of the relations of Hamiltonian matrices and continuous-time algebraic Riccati
equations (1.2) we refer to [18].
In eigenvalue computations, matrices and matrix pencils are often preprocessed
using a balancing procedure as described in [23, 25] for a general matrix A 2 R n\Thetan .
First, A is permuted via similarity transformations in order to isolate eigenvalues, i.e.,
a permutation matrix P 2 R n\Thetan is computed such that
are upper triangular matrices. Then, a diagonal
matrix
Universitat Bremen, Fachbereich 3 - Mathematik und Informatik, Zentrum fur Technomathe-
matik, 28334 Bremen, Germany. E-mail: benner@math.uni-bremen.de
is computed such that rows and columns of D \Gamma1
Z ZDZ are as close in norm as possible.
That is, balancing consists of a permutation step and a scaling step. In the scaling
step, the rows and columns of a matrix are scaled, which usually leads to a decrease
of the matrix norm. This preprocessing step often improves the accuracy of computed
eigenvalues significantly; isolated eigenvalues (i.e., those contained in T 1 and T 2 ) are
even computed without roundoff error.
Unfortunately, applying this balancing strategy to a Hamiltonian matrix H as
given in (1.1) will in general destroy the Hamiltonian structure. This is no problem if
the subsequent eigenvalue algorithm does not preserve or use the Hamiltonian struc-
ture. But during the past fifteen years, several structure preserving methods for the
Hamiltonian eigenproblem have been suggested. In particular, the square-reduced
method [31], the Hamiltonian QR algorithm (if in (1.1), rank
[12], the recently proposed algorithm based on a symplectic URV-like decomposition
[7], or the implicitly restarted symplectic Lanczos method of [5] for large sparse Hamiltonian
eigenproblems are appropriate choices for developing subroutines for library
usage and raise the need for a symplectic balancing routine. Similarity transformations
by symplectic matrices preserve the Hamiltonian structure. Thus, in order to
balance a Hamiltonian matrix and to preserve its structure, the required permutation
matrix and the diagonal scaling matrix should be symplectic.
In Section 2 we will give some necessary background. Isolating eigenvalues of
Hamiltonian matrices without destroying the structure can be achieved using symplectic
permutation matrices. This will be the topic of Section 3. How to equilibrate
rows and norms of Hamiltonian matrices in a similar way as proposed in [25] using
symplectic diagonal scaling matrices will be presented in Section 4. When invariant
subspaces, eigenvectors, or solutions of algebraic Riccati equations are the target of
the computations, some post-processing steps are required. These and some other
applications of the proposed symplectic balancing method are discussed in Section 5.
Some numerical examples on the use of the proposed balancing strategy for eigenvalue
computation and numerical solution of algebraic Riccati equations are given in
Section 6.
2. Preliminaries. The following classes of matrices will be employed in the
sequel.
Definition 2.1. Let
where I n is the n \Theta n identity matrix.
a) A matrix H 2 R 2n\Theta2n is Hamiltonian if (HJ) . The Lie Algebra of
Hamiltonian matrices in R 2n\Theta2n is denoted by H 2n .
b) A matrix H 2 R 2n\Theta2n is skew-Hamiltonian if (HJ) . The Jordan
algebra of skew-Hamiltonian matrices in R 2n\Theta2n is denoted by SH 2n .
c) A matrix S 2 R 2n\Theta2n is symplectic if SJS
The Lie group of symplectic matrices in R 2n\Theta2n is denoted by S 2n .
d) A matrix U 2 R 2n\Theta2n is unitary symplectic if U 2 S 2n and UU . The
compact Lie group of unitary symplectic matrices in R 2n\Theta2n is denoted by US 2n .
Observe that every H 2 H 2n must have the block representation given in (1.1).
In [11], an important relation between symplectic and Hamiltonian matrices is
proved.
Proposition 2.2. Let S 2 R 2n\Theta2n be nonsingular. Then S \Gamma1 HS is Hamiltonian
for all H 2 H 2n if and only if S T
This result shows that in general, similarity transformations that preserve the
Hamiltonian structure have to be symplectic up to scaling with a real scalar.
The following proposition shows that the structure of 2n \Theta 2n orthogonal symplectic
matrices permits them to be represented as a pair of n \Theta n matrices. Hence,
the arithmetic cost and storage for accumulating orthogonal symplectic matrices can
be halved.
Proposition 2.3. [24] An orthogonal matrix U 2 R 2n\Theta2n is symplectic if and
only if it takes the form
2.
We have the following well-known property of the spectra of Hamiltonian matrices
(see, e.g., [18, 21], and the references given therein).
Proposition 2.4. The spectrum of a real Hamiltonian matrix, denoted by oe (H)
is symmetric with respect to the imaginary axis, i.e., if 2 oe (H), then \Gamma 2 oe (H).
The spectrum of Hamiltonian matrices can therefore be partitioned as
oe
When solving Hamiltonian eigenproblems one would like to compute a Schur
form for Hamiltonian matrices analogous to the real Schur form for non-symmetric
matrices. This should be done in a structure-preserving way.
Definition 2.5. a) Let "
H has the form
G
A T
A 2 R n\Thetan is in real Schur form (quasi-upper triangular) and "
is real Hamiltonian quasi-triangular.
b) If H 2 H 2n and there exists U 2 US 2n such that "
real Hamiltonian
quasi-triangular, then "
H is in real Hamiltonian Schur form and U T HU is called
a Hamiltonian Schur decomposition.
If a Hamiltonian Schur decomposition exists such that "
H is as is (2.2), then U
can be chosen such that oe ( "
Most of the structure-preserving methods for the Hamiltonian eigenproblem, i.e.,
those using symplectic (similarity) transformations, rely on the following result. For
Hamiltonian matrices with no purely imaginary eigenvalues this result was first stated
in [24] while in its full generality as given below it has been proved in [20].
Theorem 2.6. Let H 2 H 2n and let iff its pairwise distinct nonzero
purely imaginary eigenvalues. Furthermore, let the associated H-invariant subspaces
be spanned by the columns of U k , Then the following are equivalent.
There exists S 2 S 2n such that S \Gamma1 HS is real Hamiltonian quasi-triangular.
ii) There exists U 2 US 2n such that U T HU is in real Hamiltonian Schur form.
is congruent to J for all is always of the
appropriate dimension.
Note that from Theorem 2.6 it follows that purely imaginary eigenvalues of H 2
must have even algebraic multiplicity in order for the Hamiltonian Schur
form of H to exist.
3. Isolating Eigenvalues by Symplectic Permutations. Let P denote any
n \Theta n permutation matrix. It is easy to see that symplectic permutation matrices
have the form
With matrices of type (3.1) it is possible to transform a Hamiltonian matrix to the
~
22 0
-z -z -z -z -z -z
where A 11 , A 33 are upper triangular and either Q The existence of
such a P s will be proved in a constructive way later by Algorithm 3.4 which transforms
a Hamiltonian matrix to the form given in (3.2). From a Hamiltonian matrix having
the form (3.2) a total of 2(p of H can be read off directly as seen by
the following result.
Lemma 3.1. Let H 2 H 2n and is of the
form (3.2) and either G there exists a permutation matrix
2n\Theta2n such that
are upper triangular with
and
A 22 G 22
is a 2r \Theta 2r Hamiltonian submatrix of H.
Proof. Let H be as in (3.2) and
I
Then
13 A 11 A 12 G 12 G 11 A 13
22 \GammaA T
diag (\Pi q ; I
I
I r 0
I r
I q 07 7 7 7 7 7 5
Thus,
has the desired form. The eigenvalue relation (3.4) follows
from
\GammaI q 0
\GammaI q 0
13 A 11
Lemma 3.1 is merely of theoretical interest and demonstrates that in order to solve the
Hamiltonian eigenvalue problem, we can proceed by working only with H 22 . But the
transformations we have used in the proof are in general non-symplectic. If we want to
compute invariant subspaces, eigenvectors, and/or the Hamiltonian Schur form given
in Theorem 2.6, we can transform the Hamiltonian matrix in (3.2) such that it has
Hamiltonian Schur form in rows and columns
But this can not be accomplished using only symplectic permutation matrices of the
form (3.1). Therefore we need another class of transformation matrices.
Definition 3.2. A matrix P J 2 R 2n\Theta2n is called a J-permutation matrix if
a) it is symplectic, i.e., P T
c) each row and column have exactly one nonzero entry.
As P J 2 US 2n , it is clear that a similarity transformation by a J-permutation
matrix preserves the Hamiltonian structure. In analogy to standard permutations,
similarity transformations with P J can be performed without floating point opera-
tions. Moreover, they can be represented by a signed integer vector IP of length n,
rows and columns k; j are to be
interchanged while the sign indicates if the corresponding entry in P J is +1 or \Gamma1.
The entries of P J in rows to 2n can be deduced from IP and Proposition 2.3.
Furthermore, symplectic permutation matrices as given in (3.1) are J-permutation
matrices.
Lemma 3.3. For any H 2 H 2n having the form (3.2), there exists a J-permutation
matrix P J such that
A 11
A 12
G 11
A 22
G T"
G 22
A T
A T
A T3
A 11 is upper triangular and with the notation in (3.2),
Proof. Let a Hamiltonian matrix H be given as in (3.2). We need a J-permutation
matrix only in the first step. Let
I
Obviously, P 1 is a J-permutation matrix and
A 11 A 12 \GammaG 13 G 11 G 12 A 13
22
Now assume G
~
I q
Then,
13 A T0 A 11 A 12 A 13 G 11 G 12
We thus obtain the form (3.9) by another similarity transformation with
where \Pi q is defined in (3.8). For the other case, i.e.,
~
I q
and
In both cases, "
J-permutation matrix and "
P is a
Hamiltonian matrix having the desired form (3.9).
In order to isolate eigenvalues, it is sufficient to restrict ourselves to symplectic
permutations. But having computed the form (3.2), it is possible that there are still
isolated eigenvalues in H 22 . Applying the same procedure used to isolate eigenvalues
in H to H 22 , we can transform H 22 to the form (3.2). This process can then be
repeated until no more isolated eigenvalues are found. Accumulating all permutations
in a symplectic permutation matrix P s of the form (3.1), this results in a similarity
transformation
~
. A
. 0
. 0
gps
gps
Here, A j;j t, are upper triangular and for
either and the Hamiltonian submatrix
A t;t G t;t
has no isolated eigenvalues. If we now define p :=
then we
can partition ~
H in (3.10) as in (3.2). Then the first step in the proof of Lemma 3.1
can be performed to obtain the form (3.7). Just the block-structure of the upper left
and lower right diagonal blocks in (3.7) are more complicated. But it is still possible
to bring them to upper triangular form using repeatedly the same sequence of permutations
used in the proof of Lemma 3.1. This shows that 2(p + q) eigenvalues of the
Hamiltonian matrix can be read off directly from (3.10) and that ~
H is permutationally
similar to4
~
~
Y ~
Z
Further, we have H t;t
s
s
If only eigenvalues are required, we can continue working only with H t;t . If also eigen-vectors
and/or invariant subspaces are required, the similarity transformations used
to solve the reduced-order eigenproblem for H t;t have to be applied to the whole matrix
~
H . In that case, ~
H should be transformed to the form given in (3.9). Partitioning
~
H from (3.10) as in (3.2), we can perform the first step of the proof of Lemma 3.3
with the J-permutation matrix P 1 . The subsequent steps to achieve upper triangular
form in the first p + q rows and columns have then to be performed for each of the
first distinguishing the cases Q
A procedure to transform a Hamiltonian matrix H to the form in (3.10) is given in
the following algorithm. Note that in the given algorithm,
t.
Algorithm 3.4.
Input: Matrices A; G; Q 2 R n\Thetan , where defining a Hamiltonian
Output: A symplectic permutation matrix P s ; matrices A; G; Q with
defining a Hamiltonian matrix
having the form
~
END IF
END WHILE
~
END IF
END WHILE
END WHILE
END
In each execution of the outer WHILE-loop, we first search a row isolating an
eigenvalue. If such a row is found, we look for a column isolating an eigenvalue. In
this fashion it can be guaranteed that at the end, there are no more isolated eigenvalues
although we always only touch the first n rows and columns of the Hamiltonian matrix.
In an actual implementation one would of course never form the permutation
matrices explicitly but store the relevant information in an integer vector. Multiplications
by permutation matrices are realized by swapping the data contained in the
rows or columns to be permuted; for details, see, e.g., [3].
It is rather difficult to give a complete account of the cost of Algorithm 3.4. If
there are no isolated eigenvalues, the algorithm requires 4n floating point additions
and 2n comparisons as opposed to 8n additions and 4n comparisons for the
unstructured permutation procedure from [25] as implemented in the LAPACK subroutine
xGEBAL [3] when applied to H 2 R 2n\Theta2n . The worst case for Algorithm 3.4
would be that in each execution of the outer WHILE-loop, an isolated eigenvalue is
found in the last execution of the second inner WHILE-loop. In that case, the cost consists
of 4n 3 =3 floating point additions, moving
floating point numbers. But in this worst-case analysis, all eigenvalues are
isolated such that after permuting, there is nothing left to do, and the Hamiltonian
matrix is in Hamiltonian Schur form. A worst-case study for xGEBAL shows that
the permutation part requires 8n 3
and moving floating point numbers. We can therefore conclude that Algorithm
3.4 is about half as expensive as the procedure proposed in [25] applied to a
Hamiltonian matrix.
4. Symplectic Scaling. Suppose now that we have transformed the Hamiltonian
matrix to the form (3.10). Since all subsequent transformations are determined
from $H_{t,t}$, the scaling parameters to balance $H_{t,t}$ have now to be chosen such that
the rows and columns of $H_{t,t}$ (instead of those of $\tilde H$) are as close in norm as possible.
In order to simplify notation we will in the sequel call the Hamiltonian matrix
again $H$. Let $H_{\mathrm{off}}$ be the off-diagonal part of $H$, i.e., $H_{\mathrm{off}} := H - \operatorname{diag}(h_{11},\dots,h_{2n,2n})$.
We may without loss of generality assume that none of the rows and columns of $H_{\mathrm{off}}$
vanishes identically. Otherwise, we could isolate another pair of eigenvalues.
Now we want to scale H such that the norms of its rows and columns are close
to each other. As noted before, employing the technique of Parlett and Reinsch [25] destroys
the Hamiltonian structure. Diagonal scaling has thus to be performed using a
symplectic diagonal matrix $D_s$. Such a matrix must have the form $D_s = \operatorname{diag}(D, D^{-1})$,
where $D \in \mathbb{R}^{n\times n}$ is a nonsingular diagonal matrix.
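For orientation (this is a standard identity, not a new result of the paper), writing the Hamiltonian matrix in its block form shows how the symplectic diagonal similarity acts on the blocks:
$$ D_s^{-1} H D_s \;=\; \begin{bmatrix} D^{-1} & 0 \\ 0 & D \end{bmatrix} \begin{bmatrix} A & G \\ Q & -A^T \end{bmatrix} \begin{bmatrix} D & 0 \\ 0 & D^{-1} \end{bmatrix} \;=\; \begin{bmatrix} D^{-1} A D & D^{-1} G D^{-1} \\ D\,Q\,D & -(D^{-1} A D)^T \end{bmatrix}, $$
so the transformed matrix is again Hamiltonian and only the blocks A, G, Q need to be updated.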
Let us first note an obvious result for Hamiltonian matrices. Here and in the
sequel we will use the colon notation (see, e.g., [15]) $H(:,k)$, $H(j,:)$ to indicate the
kth column and jth row, respectively, of a matrix $H$.
Lemma 4.1. Let $H \in \mathbb{R}^{2n\times 2n}$ be a Hamiltonian matrix. Then for all $p \ge 1$ and
for all $i = 1,\dots,n$,
  $\|H(:,i)\|_p = \|H(n+i,:)\|_p$,   (4.2)
  $\|H(i,:)\|_p = \|H(:,n+i)\|_p$,   (4.3)
i.e., the p-norm of the ith column equals the norm of the (n+i)th row, and the norm
of the ith row equals the norm of the (n+i)th column.
Proof. The result is obvious by noting $\|x\|_p = \|\,|x|\,\|_p$ for $x \in \mathbb{R}^{2n}$ and
observing that, from the structure of Hamiltonian matrices (G and Q symmetric), the
entries of $H(:,i)$ and $H(n+i,:)$ agree up to sign; Equation (4.3) follows analogously by
noting that the entries of $H(i,:)$ and $H(:,n+i)$ agree up to sign.
We can now conclude that it is sufficient to equilibrate the norms of the first
n rows and columns of a $2n \times 2n$ Hamiltonian matrix by using a consequence of
Lemma 4.1.
Corollary 4.2. Let $H \in \mathbb{R}^{2n\times 2n}$ be a Hamiltonian matrix. Then for all $p \ge 1$
and for all $i = 1,\dots,n$, equilibrating the p-norms of the ith row and the ith column also
equilibrates the p-norms of the (n+i)th row and the (n+i)th column.
Since a similarity transformation with any diagonal matrix does not affect the
diagonal elements of the transformed matrix, it is in the following sufficient to consider
$H_{\mathrm{off}}$. We will employ the notation
  $h_i$ := ith column of $H_{\mathrm{off}}$,
  $h^i$ := transpose of the ith row of $H_{\mathrm{off}}$.
In the sequel, we will for convenience use $p = 1$. The results also hold for any
other p-norm. From a computational point of view it is also reasonable to use the
1-norm, since its computation does not involve any floating point multiplications and,
furthermore, reducing the norm of H in one norm usually implies a reduction in
the other norms as well.
Equilibrating can now be achieved in a similar way as in the
Parlett/Reinsch method. If $\beta$ denotes the base of the floating point arithmetic and
$\sigma_i$ is any signed integer, then they compute $\beta^{\sigma_i}$ closest to the real scalar
that equilibrates the ith row and column. Thus, with $D$ chosen accordingly,
$D^{-1} H D$ is in general no longer Hamiltonian. Unfortunately, using the
symplectic diagonal matrix $D_s^{(i)} = \operatorname{diag}(D_i, D_i^{-1})$,
where $D_i$ carries the scaling parameter $\delta_i$ in its (i,i) position, and computing
  $\tilde H = (D_s^{(i)})^{-1} H\, D_s^{(i)}$,   (4.4)
we obtain the scaled row and column norms given in (4.5),
and thus in general $\|\tilde h_i\|_1 \ne \|\tilde h^i\|_1$.
Nevertheless, equilibrating the 1-norms of $h_i$ and $h^i$ can be achieved by requiring
$\|\tilde h_i\|_1 = \|\tilde h^i\|_1$ and solving the resulting quartic equation (4.6) in $\delta_i$,
whose coefficients are formed from the norms of the relevant parts of $h_i$ and $h^i$ and from $|g_{ii}|$.
It remains to show that equation (4.6) has a positive solution.
Theorem 4.3. Let $H \in \mathbb{R}^{2n\times 2n}$ be a Hamiltonian matrix and denote its off-diagonal
part by $H_{\mathrm{off}}$. Assume that none of the rows and columns of $H_{\mathrm{off}}$ vanishes
identically. Then there exists a unique real number $\delta_i > 0$ such that for $\tilde H$ as in (4.4)
we have $\|\tilde h_i\|_1 = \|\tilde h^i\|_1$.
Proof. Solutions of Equation (4.6) are non-zero roots of a polynomial $p(t) = \sum_{k=0}^{4} a_k t^k$
whose coefficients are formed from the entries of $h_i$ and $h^i$. Since there is at most one change
of sign in the coefficients of the polynomial p, Descartes' rule of signs shows that there is at most
one positive zero of p. So if there exists a positive solution of (4.6), it is unique. By assumption,
$\|h_i\|_1 \ne 0$ and $\|h^i\|_1 \ne 0$. Therefore at least one of the low-order coefficients is nonzero, and
either $a_3 > 0$ or $a_4 > 0$ (as $q_{ii}$ is part of $h_i$). Thus, we know that p is a polynomial of degree at
least 3 with positive leading coefficient and hence $\lim_{t\to\infty} p(t) = \infty$.
On the other hand, if $g_{ii} \ne 0$, the intermediate value theorem yields a positive zero of p.
If $g_{ii} = 0$, then $t = 0$ is a zero of p and the remaining third-order polynomial factor has a positive
zero, again by the intermediate value theorem; hence equation (4.6) has
at least one positive real solution regardless of the value of $g_{ii}$. On the other hand,
it was already observed that there is at most one such solution, and we can conclude
that there exists a unique $\delta_i > 0$ solving equation (4.6), whence $\|\tilde h_i\|_1 = \|\tilde h^i\|_1$.
The other equalities follow immediately from Corollary 4.2.
Computing the exact value of $\delta_i$ equilibrating the ith and (n+i)th rows and columns
would require the solution of the fourth-order equation (4.6). Since the diagonal
similarity transformations are to be chosen from the set of machine numbers, it is
sufficient to find the machine number $\beta^{\sigma_i}$ closest to $\delta_i$. This can be done similarly
to the computation in the general case as proposed in [25] and implemented in the
Fortran 77 subroutine BALANC from EISPACK [14] or its successor xGEBAL from
LAPACK [3]. That is, starting from an initial scaling factor, the quantities in (4.5) are evaluated and
compared; if $\|\tilde h_i\|_1$ and $\|\tilde h^i\|_1$ are still out of balance, this is repeated for the next power of $\beta$,
otherwise the current factor is used. This is achieved by the following algorithm.
Algorithm 4.4.
Input: Hamiltonian matrix $H \in \mathbb{H}^{2n}$ having no isolated eigenvalues; base $\beta$ of the floating point arithmetic.
Output: Diagonal matrix $D_s \in \mathbb{S}^{2n}$; H is overwritten by $D_s^{-1} H D_s$ with row and column norms equilibrated as far as possible.
(The body is a FOR-loop over $i = 1,\dots,n$; for each i, two inner WHILE-loops increase or decrease the scaling factor $\delta_i$ by powers of $\beta^2$ until the norms in (4.5) are as close as possible, after which the symplectic diagonal similarity with $D_i$ is applied and accumulated into $D_s$.)
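The following minimal MATLAB sketch (a hypothetical helper, not the reference implementation) shows the core update of one such step: given a chosen scaling factor delta for index i, the symplectic diagonal similarity is applied directly to the blocks A, G, Q, touching only row and column i of each block.

function [A, G, Q] = symplectic_scale_step(A, G, Q, i, delta)
% Apply D_s^{-1} * H * D_s with D_s = diag(D, inv(D)) and D = I except
% D(i,i) = delta, directly on the blocks of H = [A G; Q -A'].
  A(i,:) = A(i,:) / delta;   A(:,i) = A(:,i) * delta;    % D^{-1} * A * D
  G(i,:) = G(i,:) / delta;   G(:,i) = G(:,i) / delta;    % D^{-1} * G * D^{-1}
  Q(i,:) = Q(i,:) * delta;   Q(:,i) = Q(:,i) * delta;    % D * Q * D
end

In an actual sweep, delta would be the power of the base beta determined by the two inner WHILE-loops described above.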
One execution of the outer FOR-loop of Algorithm 4.4 can be considered as a
sweep. The algorithm is terminated if, for a whole sweep, all $D_i = I$. Usually, the
row and column norms are approximately equal after very few sweeps; afterwards, the
iteration makes only very limited progress. Therefore, Parlett and Reinsch propose
in [25] a modification, which, translated to our problem, becomes:
Let $\delta_i$ be determined by the two inner WHILE-loops of Algorithm 4.4 and
compute the scaled norms as in (4.8). If
$\|\tilde h_i\|_1 + \|\tilde h^i\|_1 < \gamma\,(\|h_i\|_1 + \|h^i\|_1)$ (where $\gamma$ is a given positive constant),
then compute $D_i$ as in Algorithm 4.4. Otherwise, set $D_i = I$.
For $\gamma = 1$ the behavior is essentially the same as for Algorithm 4.4 (in a few
cases, Algorithm 4.4 increases $\|h_i\|_1 + \|h^i\|_1$, which cannot happen if $\gamma \le 1$). For
$\gamma$ slightly smaller than one, a step is skipped if it would produce only an insubstantial
reduction of $\|h_i\|_1 + \|h^i\|_1$.
In an actual computation, the similarity transformations with the $D_i$'s can be
applied directly to the blocks A, G, and Q of the Hamiltonian matrix without forming
the Hamiltonian matrix itself. Thus, each similarity transformation can be performed
using only $4n - 4$ multiplications. When the standard (not structure preserving)
scaling procedure from [25] is applied to H, each similarity transformation requires
more multiplications. (Recall that in Algorithm 4.4, two rows and columns are
equilibrated at a time, while only one row and column is treated in each step of the
inner FOR-loop of the standard procedure.)
The number of sweeps required to converge is similar to that for the general case,
since the theory derived in [25] only requires similarity transformations
with diagonal matrices and that in step i of each sweep, the ith rows and
columns are equilibrated as far as possible with the available scaling factors. But this is accomplished by
Algorithm 4.4. Moreover, if $\delta_i$ is taken as the exact solution of (4.6), the convergence
of the sequence of similarity transformations to a stationary point can be proved as
in [23, 16]. That is, if $\delta_i^{(k)}$ is the solution of (4.6) in sweep k, then $\lim_{k\to\infty} \delta_i^{(k)} = 1$
for all i, and hence in the limit, H is a balanced Hamiltonian matrix.
Note that here, each sweep has length n, while in the standard balancing algorithm
one has to go through each row/column pair of the matrix and thus each sweep has
length 2n. Thus, the computational cost for scaling a $2n \times 2n$ Hamiltonian matrix
by Algorithm 4.4, assuming $k_1$ sweeps are required, is about $4n^2 k_1$ multiplications, as opposed
to about $8n^2 k_2$ for the standard scaling procedure as given in [25] with $k_2$ assumed
sweeps required for convergence. In general, $k_1 \approx k_2$, so that the structure-preserving
scaling strategy is about half as expensive as the standard procedure.
These flop counts are based on the assumption that the cost for determining the
$\delta_i$ can be considered as small (O(1)) compared to the similarity transformations.
Remark 4.5. In [17] it is proposed to solve the matrix balancing problem using
a convex programming approach. To compare the complexity of this approach to
that of Algorithm 4.4, suppose that Algorithm 4.4 terminates after $k_1$ sweeps.
For the matrix H to be balanced, Theorem 5 in [17] gives the complexity
of computing a diagonal matrix Y with positive diagonal entries such that the
rows and columns of $Y^{-1} H Y$ are balanced with the same accuracy as achieved by
Algorithm 4.4; the bound depends on n, the number of nonzero entries of H, and the
required accuracy. From numerical experience, it can be assumed
that $k_1$ is small with respect to n. Hence, Algorithm 4.4 can be considered
to be of complexity $O(n^2)$. This complexity is clearly superior to that of the convex
programming approach, which remains the case even for considerably larger $k_1$.
Algorithm 4.4 requires a careful implementation to guard against over- and underflow
due to a very large/small $\delta_i$. Here, we can use the bounds discussed in [25] and
implemented in the LAPACK subroutine xGEBAL [3]; we just have to take into account
that in each step we scale by $\beta^{\pm 2}$ rather than $\beta$ as in xGEBAL.
5. Backtransformation, Ordering of Eigenvalues, and Applications. So
far we have only considered the problem of computing the eigenvalues of a Hamiltonian
matrix. In order to compute eigenvectors, invariant subspaces, and the solutions of
algebraic Riccati equations, we have to transform the Hamiltonian matrix to real Schur
form. As we are considering structure-preserving methods, the goal is to transform
the Hamiltonian matrix to real Hamiltonian Schur form as given in Theorem 2.6 a)
- if it exists.
Assume that we have applied Algorithm 3.4 to the Hamiltonian matrix and obtained
a symplectic permutation matrix $P_s$ such that $P_s^T H P_s$ has the form given in
(3.10). Then, we have applied a J-permutation $P_J$ to the permuted Hamiltonian
matrix such that the rows and columns corresponding to the isolated eigenvalues
are in Hamiltonian Schur form, i.e., $P_J^T P_s^T H P_s P_J$ has the form given in (3.9). (From
Lemma 3.3 we know that such a $P_J$ exists.) Next, we have applied Algorithm 4.4 to
the Hamiltonian submatrix $H_{t,t} \in \mathbb{H}^{2r}$ from (3.11) and obtained a diagonal scaling
$D_s$, which we embed into the full dimension as $\hat D$ (acting as the identity on the isolated part).
Then $\hat H = (P_s P_J \hat D)^{-1}\, H\, (P_s P_J \hat D)$ has a block structure in which
$\hat A_{11} \in \mathbb{R}^{(p+q)\times(p+q)}$ is upper triangular and the Hamiltonian submatrix
  $\hat H_{22} := \begin{bmatrix} A_{22} & G_{22} \\ Q_{22} & -A_{22}^T \end{bmatrix}$
has no isolated eigenvalues and its rows and columns are equilibrated by
Algorithm 4.4. Now assume the Hamiltonian Schur form of $\hat H_{22}$ exists and we have
computed an orthogonal symplectic matrix $U \in \mathbb{US}^{2r}$, built from blocks $U_{22}$ and $V_{22}$,
that transforms $\hat H_{22}$ into real Hamiltonian Schur form. Let S be the product of
$P_s$, $P_J$, $\hat D$, and the symplectic embedding of U into the full dimension; then
$S^{-1} H S$ is real Hamiltonian quasi-triangular and S is symplectic.
The first n columns of S span a Lagrangian H-invariant subspace. In most
applications, the c-stable H-invariant subspace is desired. Let us assume the method
used to transform "
H 22 to Hamiltonian Schur form chooses U 22 such that the first r
columns of U 22 , i.e., the columns of
U22
V22
, span the "
H 22 -invariant subspace of choice.
But there is no guarantee that the isolated eigenvalues in "
A 11 are the desired ones.
In that case, we have to reorder the Hamiltonian Schur form in order to move the
undesired eigenvalues to the lower right block of "
H and the desired ones to the upper
left block. Assume that we want to compute the Lagrangian H-invariant subspace
corresponding to a set of eigenvalues selected from $\sigma(H)$ which is closed under complex
conjugation. (Note that this is a necessary condition in order to obtain a Lagrangian
invariant subspace [2].) Using the standard reordering algorithm for the real Schur
form of an $n \times n$ unsymmetric matrix as given in [15, 30], we can find an orthogonal
matrix $\tilde U$ such that, with the orthogonal symplectic matrix $U = \operatorname{diag}(\tilde U, \tilde U)$, we
have that $U^T \hat H U$ has the block form (5.1), in which the diagonal blocks
$\tilde A_{11}$ and $\tilde A_{22}$ (and correspondingly $-\tilde A_{11}^T$ and $-\tilde A_{22}^T$)
are quasi-upper triangular, and the spectra of these blocks satisfy the relations (5.2).
Therefore, we have to swap the eigenvalues in $\tilde A_{22}$ and $-\tilde A_{22}^T$. Note that the eigenvalues
to be re-ordered are among the isolated eigenvalues and hence are real. This implies
that $\tilde A_{22}$ is upper triangular. The re-ordering can be achieved analogously to the
re-ordering of eigenvalues in the real Schur form as given in [15, 30]. The following
procedure uses this standard re-ordering in order to swap eigenvalues within $\tilde A_{22}$
(and $-\tilde A_{22}^T$) and requires rotations working exclusively in rows and columns n and
2n in order to exchange eigenvalues from $\tilde A_{22}$ with eigenvalues from $-\tilde A_{22}^T$.
Assume the two eigenvalues to be exchanged occupy the trailing (n,n) positions of
$\tilde A_{22}$ and $-\tilde A_{22}^T$. Let $\begin{bmatrix} c_n & s_n \\ -s_n & c_n \end{bmatrix}$ be a
Givens rotation matrix that annihilates the second component of the 2-vector formed from the
corresponding trailing entries (which involves $g_{nn}$). Then $U_n$, obtained by embedding this
rotation into planes n and 2n, is a symplectic Givens rotation matrix acting in planes n and 2n,
and the similarity transformation $U_n^T \hat H U_n$ exchanges the two eigenvalues while preserving
the structure of (5.1) elsewhere; only the entries in rows and columns n and 2n are changed by the
similarity transformation.
The next step is now to move the exchanged eigenvalue up in the upper diagonal block using again the
standard ordering subroutine such that we obtain again the form given in (5.1), now with
$\tilde A_{11} \in \mathbb{R}^{(n-k+1)\times(n-k+1)}$, and again the relations (5.2) hold. This procedure has
to be repeated until all desired eigenvalues have been moved into $\tilde A_{11}$.
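As a reminder of why the rotation used above respects the Hamiltonian structure (a standard fact, stated here only for completeness): a Givens rotation embedded in the plane pair $(j, n+j)$,
$$ G_j(c,s) = I_{2n} + (c-1)\,(e_j e_j^T + e_{n+j} e_{n+j}^T) + s\,(e_j e_{n+j}^T - e_{n+j} e_j^T), \qquad c^2 + s^2 = 1, $$
satisfies $G_j(c,s)^T J\, G_j(c,s) = J$ with $J = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix}$, i.e. it is symplectic, so similarity transformations with such rotations (in particular $U_n$ above, with $j = n$) keep the matrix Hamiltonian.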
Remark 5.1. If the Hamiltonian matrix has the form (1.1) and corresponds to a linear system
with (A, B) stabilizable and (C, A) detectable, then each isolated eigenvalue in (5.1) given by the diagonal
elements of $\tilde A_{11}$ has negative real part. Otherwise, these eigenvalues are unstable or
undetectable and cannot be stabilized/detected. Therefore, if we have not mixed
up blocks by the J-permutation matrix $P_J$ (i.e., in Algorithm 3.4, $i \le n$) and the
c-stable H-invariant subspace is required, no re-ordering is necessary.
Remark 5.2. When solving algebraic Riccati equations using any approach based
on the Hamiltonian eigenproblem, the symplectic balancing strategy proposed here is
often not enough to minimize errors caused by ill-scaling. This is due to the effect that
for a balanced Hamiltonian matrix $\begin{bmatrix} A & G \\ Q & -A^T \end{bmatrix}$ we still may have $\|Q\| \gg \|G\|$, which
may cause large errors when computing invariant subspaces [27]. Therefore, another
symplectic scaling using a similarity transformation with $\operatorname{diag}(\rho I_n,\ \rho^{-1} I_n)$,
$\rho \in \mathbb{R}$, should be applied to H in order to achieve $\|A\| \approx \|G\| \approx \|Q\|$ as far as possible;
see [4] for details and a discussion of several heuristic strategies to achieve this.
Remark 5.3. Everything derived so far for Hamiltonian matrices can be applied
in the same way to skew-Hamiltonian matrices. If $N \in \mathbb{SH}^{2n}$, then
$N = \begin{bmatrix} A & G \\ Q & A^T \end{bmatrix}$ with $G = -G^T$ and $Q = -Q^T$. The skew-Hamiltonian structure is again preserved
under symplectic similarity transformations. Hence, isolating eigenvalues, re-ordering,
etc., can be achieved in the same way as for Hamiltonian matrices, as all considered
transformations do not depend on the signs in the matrix blocks A, G, Q, but only on
the distinction zero/non-zero when isolating eigenvalues and on the absolute values
of the entries when equilibrating rows and columns. Note that Algorithm 4.4 even
simplifies quite a lot for real skew-Hamiltonian matrices: as $q_{ii} = g_{ii} = 0$ for the
skew-symmetric blocks, the scaling factors can be computed as in the general balancing algorithm
for non-symmetric matrices, because in (4.5) the two norms simply scale by $\delta_i$ and $1/\delta_i$, respectively.
Eigenvalues of skew-Hamiltonian matrices as well as a skew-Hamiltonian Schur
form can be computed in a numerically strong backward stable way by Van Loan's
method [31]. It is advisable to balance skew-Hamiltonian matrices using the proposed
strategies prior to applying this algorithm.
Remark 5.4. We have considered so far only real Hamiltonian and skew-
Hamiltonian matrices. Isolating eigenvalues and equilibrating rows and columns
for complex (skew-)Hamiltonian matrices can be achieved in exactly the same way.
A structure-preserving, numerically backward stable (and hence numerically strong
backward stable) method for solving the complex (skew-)Hamiltonian eigenproblem
has recently been proposed [8]. The proposed symplectic balancing method can (and
should) also be used prior to applying this algorithm.
6. Numerical Examples. We have tested the symplectic balancing strategy
for eigenvalue computations. The computations were done in Matlab (footnote 1) Version 5.2
with machine precision $\varepsilon \approx 2.2204 \times 10^{-16}$. Algorithms 3.4 and 4.4 were implemented
as Matlab functions. We used the modified algorithm as suggested by (4.8), where
we set $\gamma$ to the value suggested in [25] and implemented in the LAPACK subroutine
xGEBAL [3]. The eigenvalues of the balanced and the unbalanced Hamiltonian matrix
(footnote 1: Matlab is a trademark of The MathWorks, Inc.)
were computed by the square-reduced method using a Matlab function sqred which
implements the explicit version of the square-reduced method (see [31]).
We also tested the effects of symplectic balancing for the numerically backward
stable, structure-preserving method for the Hamiltonian eigenvalue problem presented
in [7]. Like the square-reduced method, this algorithm uses the square of the Hamiltonian
matrix. But it avoids forming the square explicitly using a symplectic URV-type
decomposition of the Hamiltonian matrix.
As reference values we used the eigenvalues computed by the unsymmetric QR
algorithm with Parlett/Reinsch balancing as implemented in the LAPACK expert
driver routine DGEEVX [3], applied to the Hamiltonian matrix and using quadruple
precision.
Moreover, we tested the effects of balancing when solving algebraic Riccati equations
with the structure-preserving multishift method presented in [1] for the examples
from the benchmark collection [6]. We only present some of the most intriguing results.
Example 6.1. [6, Example 6] The system data come from an optimal control
problem for a J-100 jet engine as a special case of a multivariable servomechanism
problem. The resulting Hamiltonian matrix $H \in \mathbb{R}^{60\times 60}$ has 8 isolated eigenvalues:
triple eigenvalues at $\pm 20.0$ and simple eigenvalues at $\pm 33.3$.
Algorithm 3.4 isolates these eigenvalues and returns the correspondingly permuted Hamiltonian
matrix. Next, the Hamiltonian submatrix containing the remaining eigenvalues
is scaled using Algorithm 4.4. After six sweeps, we obtain the balanced Hamiltonian
submatrix. We have decreased the 2-norm of the matrix used in the subsequent eigenvalue computation by
more than five orders of magnitude. If the eigenvalues are computed by the square-reduced
method applied to the unbalanced Hamiltonian matrix, the triplet of isolated
eigenvalues is returned as a pair of conjugate complex eigenvalues and a simple eigenvalue,
with relative errors of about $10^{-11}$ and $3.96 \times 10^{-11}$, respectively. For
the simple eigenvalue at 33.3, the relative error is $7.7 \times 10^{-15}$. For the balanced
version, these eigenvalues are returned with full accuracy since they are not affected
by roundoff errors. The relative errors for the other (not isolated) eigenvalues are
given in Figure 6.1, where we use the relative distance of the computed eigenvalues to
those computed by DGEEVX as an estimate of the real relative error.
Figure 6.1 only contains the relative errors for the eigenvalues with positive real
parts as sqred returns the eigenvalues as exact plus-minus pairs. The '+' for the 26th
eigenvalue is missing as the computed relative error for the balanced version is zero
with respect to machine precision. The eigenvalues are ordered by increasing absolute
values. From Figure 6.1, the increasing accuracy for decreasing ratio $\|H\|_2/|\lambda|$ is
obvious, with or without balancing. All computed eigenvalues of the balanced
matrix are more accurate than for the unbalanced one. The increase in accuracy is
more significant for the eigenvalues of smaller magnitude. This reflects the decrease
of the ratios $\|H\|_2/|\lambda|$, which more or less determine the accuracy of the computed
eigenvalues; see [31]. The decrease factor for $\|H\|_2$ is about $5 \times 10^{-6}$. The accuracy
for the eigenvalues of smaller magnitude increases by almost the same factor.
From Figure 6.2 we see that symplectic balancing also improves the eigenvalues
computed by the method proposed in [7]. As that method does not suffer from the
squaring-induced perturbation, the accuracy for all computed eigenvalues is similar. Also note
that in the unbalanced version, the isolated eigenvalues are computed with a relative
accuracy ranging from $7.0 \times 10^{-14}$ to $1.2 \times 10^{-15}$.
Fig. 6.1. Relative errors vs. eigenvalue number ('+' with symplectic balancing, 'o' without balancing); square-reduced method.
Fig. 6.2. Relative errors vs. eigenvalue number ('+' with symplectic balancing, 'o' without balancing); symplectic URV method.
Using the balanced matrix in order to solve algebraic Riccati equations by the
multishift method as described in [1], we obtain the following results: if the multishift
method is applied to the unbalanced data, the computed solution yields a residual
of size $1.5 \times 10^{-6}$, while using the balanced Hamiltonian matrix a noticeably smaller
residual is obtained. This shows that numerical methods for solving algebraic Riccati equations can
be substantially improved by employing balancing.
Example 6.2. [6, Example 13] The Hamiltonian matrix is defined as in (1.1)
with data involving $\operatorname{diag}(1, 0, \dots)$ and the fourth unit vector. After four sweeps of Algorithm
4.4, $\|H\|_2$ is reduced from $10^{12}$ to $1.5 \times 10^6$. The accuracy of the computed
eigenvalues did not improve significantly, but for the stabilizing solution of the algebraic
Riccati equation, the Frobenius norm of the residual as defined in (6.1) dropped
substantially.
7. Concluding Remarks. We have seen that isolated eigenvalues of a real
Hamiltonian matrix can be deflated using similarity transformations with symplectic
permutation matrices, the deflated problem can be scaled in order to reduce the norm
of the deflated Hamiltonian matrix and to equilibrate its row and column norms, and
the remaining (not isolated) eigenvalues can then be determined by computing the
eigenvalues of the deflated, balanced Hamiltonian submatrix. If invariant subspaces
are required, then we can use J-permutation matrices and a symplectic re-ordering
strategy in order to obtain the desired invariant subspaces. The same method can
be applied in order to balance skew-Hamiltonian and complex (skew-)Hamiltonian
matrices.
Numerical examples demonstrate that symplectic balancing can significantly improve
the accuracy of eigenvalues of Hamiltonian matrices as well as the accuracy of solutions
of the associated algebraic Riccati equations computed by structure-preserving
methods.
Final Remark and Acknowledgments. The work presented in this article
continues preliminary results derived in [4]. The author would like to thank Ralph
Byers, Heike Faßbender, and Volker Mehrmann for helpful suggestions.
References
A multishift algorithm for the numerical solution of algebraic Riccati equations
Contributions to the Numerical Solution of Algebraic Riccati Equations and Related Eigenvalue Problems
A collection of benchmark examples for the numerical solution of algebraic Riccati equations I: Continuous-time case
structure preserving method for computing the eigenvalues of real Hamiltonian or symplectic pencils
A bisection method for computing the H1 norm of a transfer matrix and related problems
A fast algorithm to compute the H1-norm of a transfer function matrix
Matrix factorization for symplectic QR-like methods
Matrix Eigensystem Routines- EISPACK Guide Extension
Matrix Computations
Matrix balancing
On the complexity of matrix balancing
The Algebraic Riccati Equation
Invariant subspace methods for the numerical solution of Riccati equations
Canonical forms for Hamiltonian and symplectic matrices and pencils
The Autonomous Linear Quadratic Control Problem
Solution of large matrix equations which occur in response theory
A Schur decomposition for Hamiltonian matrices
Balancing a matrix for calculation of eigenvalues and eigenvec- tors
Computational Methods for Linear Control Systems
Solving continuous-time matrix algebraic Riccati equations with condition and accuracy estimates
Algorithms for Linear-Quadratic Optimization
A fast algorithm to compute the real structured stability radius
Algorithm 506-HQR3 and EXCHNG: Fortran subroutines for calculating and ordering the eigenvalues of a real upper Hessenberg matrix
A symplectic method for approximating all the eigenvalues of a Hamiltonian matrix
587379 | An Inverse Free Preconditioned Krylov Subspace Method for Symmetric Generalized Eigenvalue Problems. | In this paper, we present an inverse free Krylov subspace method for finding some extreme eigenvalues of the symmetric definite generalized eigenvalue problem x$. The basic method takes a form of inner-outer iterations and involves no inversion of B or any shift-and-invert matrix $A-\lambda_0 B$. A convergence analysis is presented that leads to a preconditioning scheme for accelerating convergence through some equivalent transformations of the eigenvalue problem. Numerical examples are given to illustrate the convergence properties and to demonstrate the competitiveness of the method. | Introduction
Iterative methods such as the Lanczos algorithm and the Arnoldi algorithm are widely used for
solving large matrix eigenvalue problems (see [21, 22]). Eective applications of these algorithms
typically use a shift-and-invert transformation, which is sometimes called preconditioning [22] and
requires solving a linear system of equations of the original size at each iteration of the process.
For truly large problems, solving the shift-and-invert equations by a direct method such as the LU
factorization is often infeasible or ine-cient. In those cases, one can employ an iterative method to
solve them approximately, resulting in two levels of iterations called inner-outer iterations. However,
methods like the Lanczos algorithm and the Arnoldi algorithm are very sensitive to perturbations
in the iterations and therefore require highly accurate solutions of these linear systems (see [8]).
Therefore, the inner-outer iterations may not oer an e-cient approach for these methods.
There has recently been great interest in other iterative methods that are also based on shift-
and-invert equations but tolerate low accuracy solutions. One simple example of such methods is the
inexact inverse iteration where the linear convergence property is preserved even when the inversion
is solved to very low accuracy (see [7, 13, 14, 27]). Several more sophisticated and competitive
methods have been developed that also possess such a property. They include the Jacobi-Davidson
Scientic Computing and Computational Mathematics Program, Department of Computer Science, Stanford
University, Stanford, CA 94305. E-mail : golub@sccm.stanford.edu. Research supported in part by National Science
Foundation Grant DMS-9403899.
y Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027. E-mail: qye@ms.uky.edu. Part
of this Research was supported by NSERC of Canada while this author was with University of Manitoba.
method [5, 24, 25], truncated RQ iterations [26, 32] and others [14, 29, 30, 31]. One di-culty with
these methods is that it is not easy to determine to what accuracy the shift-and-invert equations
should be solved. On the other hand, there have been several works that aim at generalizing the
concept of preconditioning for linear systems to the eigenvalue problem [1, 2, 4, 11, 12, 17, 18, 15, 20,
28]. This is mostly done, however, by directly adapting a preconditioner used for inverting a certain
matrix into an eigenvalue iteration and in these situations, the role of preconditioners is usually
not clear, although some of them can be regarded as using inexact shift-and-invert [19]. Overall,
while all these new methods have been demonstrated to work successfully in some problems, there
is in general a lack of understanding of how and why they work. Furthermore, optimal eigenvalue
projection methods such as the Lanczos algorithm have mostly not been incorporated in these
developments.
In this paper, we shall present a variation of the Krylov subspace projection methods for computing
some extreme eigenvalues of the generalized eigenvalue problem
called the pencil problem for (A; B),
are symmetric matrices and B > 0. The method iteratively improves an approximate
eigenpair, each step of which uses either the Lanczos or the Arnoldi iteration to produce
a new approximation through the Rayleigh-Ritz projection on a Krylov subspace, resulting in a
form of inner-outer iterations. We shall present our theoretical and numerical ndings concerning
convergence properties of this method and derive bounds on asymptotic linear convergence rates.
Furthermore, we shall develop from the convergence analysis some equivalent transformations of
the eigenvalue problem to accelerate the convergence, which will be called preconditioning. In
particular, such transformations will be based on incomplete factorization and thus generalize the
preconditioning for linear systems. To the best of our knowledge, this is the rst preconditioning
scheme for the eigenvalue problem that is based on and can be justied by a convergence theory.
The paper is organized as follows. We present the basic algorithm in Section 2 and analyze
its convergence properties in section 3. We then give some numerical examples in Section 4 to
illustrate the convergence properties. We then present a preconditioned version of the algorithm
in Section 5, followed by some numerical examples on the preconditioning in Section 6. We nish
with some concluding remarks in Section 7.
Basic Inverse Free Krylov Subspace Method
In this section, we present our basic algorithm for finding the smallest eigenvalue and a corresponding
eigenvector $(\lambda, x)$ of a pencil (A, B) where A, B are symmetric with B > 0. We note that
the method to be developed can be modified in a trivial way for finding the largest eigenvalue (or
simply by considering $(-A, B)$).
Given an initial approximation $x_0$, we aim at improving it through the Rayleigh-Ritz
orthogonal projection on a certain subspace, i.e. by minimizing the Rayleigh quotient
$\rho(x) = (x^T A x)/(x^T B x)$ on that subspace. Noting that the gradient of the Rayleigh quotient at
$x_0$ is parallel to $r_0 = A x_0 - \rho(x_0) B x_0$,
the well-known steepest descent method chooses a new approximate eigenvector $x_1$
by minimizing $\rho$ over $\operatorname{span}\{x_0, r_0\}$. Clearly, this can be considered as the Rayleigh-Ritz projection method on
the subspace $K_1 = \operatorname{span}\{x_0, r_0\}$. On the other hand, the inverse iteration constructs a
new approximation by $x_1 = (A - \mu_0 B)^{-1} B x_0$ for some shift $\mu_0$. If the inversion is solved inexactly by an iterative
solver (i.e. in an inexact inverse iteration [7]), then $x_1$ is in fact chosen from a Krylov subspace
generated by $A - \mu_0 B$. Since $x_1$ is extracted from the Krylov subspace to solve the linear system,
it may not be a good choice for approximating the eigenvector.
We consider here a natural extension of these two approaches that finds a new approximate
eigenvector $x_1$ from the Krylov subspace
  $K_m = \operatorname{span}\{x_0,\ (A - \rho_0 B)x_0,\ \dots,\ (A - \rho_0 B)^m x_0\}$, with $\rho_0 = \rho(x_0)$,
(for some fixed m) by using the Rayleigh-Ritz projection method. The projection is carried out by
constructing a basis for $K_m$ and then forming and solving the projection problem for the pencil
(A, B). Repeating the process, we arrive at the following iteration, which we call the inverse free Krylov
method for (A, B).
Algorithm 1: Inverse Free Krylov Subspace Method.
Input: $m \ge 1$ and an initial approximation $x_0$ with $\|x_0\| = 1$; set $\rho_0 = \rho(x_0)$.
For $k = 0, 1, 2, \dots$ until convergence:
  Construct a basis $\{z_0, z_1, \dots, z_m\}$ for $K_m = \operatorname{span}\{x_k, C_k x_k, \dots, C_k^m x_k\}$, where $C_k = A - \rho_k B$;
  Form $A_m = Z_m^T C_k Z_m$ and $B_m = Z_m^T B Z_m$, where $Z_m = [z_0, z_1, \dots, z_m]$;
  Find the smallest eigenpair $(\mu_1, v)$ of $(A_m, B_m)$;
  Set $\rho_{k+1} = \rho_k + \mu_1$ and $x_{k+1} = Z_m v$.
End
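As a concrete (dense-matrix) illustration of one pass through the loop above, the following MATLAB sketch may help; it is not the authors' code, the function name is hypothetical, and the basis is built here by plain Gram-Schmidt rather than by the Lanczos or Arnoldi recurrences discussed below.

function [rho, x] = inverse_free_step(A, B, x, m)
% One outer iteration: project the shifted pencil onto the Krylov
% subspace K_m(A - rho*B, x) and update the approximate eigenpair.
  rho = (x'*A*x) / (x'*B*x);            % current Rayleigh quotient
  C   = A - rho*B;                      % shifted matrix C_k
  Z   = x / norm(x);
  for j = 1:m
    w = C*Z(:,j);
    w = w - Z*(Z'*w);                   % Gram-Schmidt orthogonalization
    w = w - Z*(Z'*w);                   % reorthogonalize for robustness
    if norm(w) < eps, break, end
    Z = [Z, w/norm(w)];
  end
  Am = Z'*C*Z;   Bm = Z'*B*Z;           % projected pencil (A_m, B_m)
  [V, E]   = eig(Am, Bm);
  [mu, ix] = min(real(diag(E)));        % smallest Ritz value of the pencil
  rho = rho + mu;                       % eigenvalue update
  x   = Z*V(:, ix);                     % eigenvector update
end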
In the algorithm, we apply the projection to the shifted pencil $(A - \rho_k B,\ B)$ and update the approximation
accordingly, which is theoretically equivalent to using the projection of (A, B) directly.
This formulation, however, may improve the stability while saving matrix-vector multiplications by
utilizing the products $C_k z_i$, which need to be computed in the construction of the basis.
In constructing a basis $z_0, z_1, \dots, z_m$, there are many possible choices and, theoretically,
they are all equivalent in that the new approximate eigenpair $(\rho_{k+1}, x_{k+1})$ obtained will be the
same, which is defined by
  $\rho_{k+1} = \rho(x_{k+1}) = \min_{0 \ne x \in K_m} \rho(x)$.   (1)
However, numerically, we will consider a basis that is orthonormal on a certain inner product. Such
a basis of the Krylov subspace is typically constructed through an iterative method itself, which
will be called the inner iteration. The original iteration of Algorithm 1 will be called the outer
iteration. We shall discuss in the subsections later three methods for constructing the basis.
In our presentation of Algorithm 1, we have assumed for convenience that $\dim(K_m) = m+1$ and
that we construct a full basis $z_0, \dots, z_m$. Generally, if $\dim(K_m) = p+1 < m+1$, we obtain $z_0, \dots, z_p$
only. Then, the Rayleigh-Ritz projection is simply carried out by replacing $Z_m$ with
$Z_p = [z_0, \dots, z_p]$, and (1) is still valid. Numerically, however, early termination at step p of the
inner iteration is not likely and a full basis is usually constructed even when p < m in theory,
but this causes no problem as the larger space spanned by more vectors would yield a better
approximation.
Given that the basic ingredients of Algorithm 1 are the projection and the Krylov subspaces,
it is not surprising that some similar methods have been considered before. In [10, 11], Knyazev
discussed and analyzed some very general theoretical methods and suggested several special cases,
among which is the use of Km and (1). Morgan and Scott's preconditioned Lanczos algorithm
[18] takes a similar iteration but uses the smallest Ritz value of the matrix A k B rather than
that of the pencil to update the eigenvalue. With an m varied with each iteration,
it has a quadratic convergence property. We point out however that the quadratic convergence
is not a desirable property because it is achieved at the cost of increasingly larger m and it prevents
improvement of convergence by preconditioning (there is hardly any need to accelerate a
quadratic convergent algorithm). Our study will be somewhat of dierent nature in that we consider
accelerating convergence by changing certain conditions of the problem through equivalent
transformations (see section 5) as opposed to increasing m.
Also related to ours are methods based on inverting a shifted matrix A k B or its projection,
which include the inverse iteration and the Jacobi-Davidson method [24]. When the inversion is
solved approximately by an iterative method, the solution is extracted from a Krylov subspace
generated by A k B (or its projection). In these cases, it is chosen to satisfy the related linear
system. We note that the Jacobi-Davidson method also uses the Rayleigh-Ritz projection in the
outer iteration, the cost of which increases with the iteration. By xing the size of subspaces for
projection, the cost of Algorithm 1 is xed per outer iteration.
When $B = I$, it is easy to see that Algorithm 1 is just the standard restarted Lanczos algorithm
for A. In this regard, our investigation is on the version with a fixed m and on how m affects the
convergence. Furthermore, our development will lead to a preconditioning strategy that transforms
the standard problem (A, I) to the pencil problem $(L^{-1} A L^{-T},\ L^{-1} L^{-T})$ (for a suitably chosen L), to which Algorithm
1 will be applied. This transformation to a more complicated problem may seem counterintuitive,
but an important feature of Algorithm 1 is that the case $B = I$ offers no advantage over a more
general B.
We now discuss in details the construction of a basis for Km in Algorithm 1.
2.1 Orthonormal basis by the Lanczos algorithm
One obvious choice of the basis of the Krylov subspace $K_m$ is the orthonormal one as constructed by
applying the Lanczos algorithm to $C_k = A - \rho_k B$. Simultaneously
with the Lanczos process, we produce the tridiagonal matrix of recurrence coefficients. The Lanczos process
requires m+1 matrix-vector multiplications by $C_k$. Once the basis has been constructed,
we form $B_m = Z_m^T B Z_m$, which requires m+1 matrix-vector
multiplications by B. Note that $A_m = Z_m^T C_k Z_m$ equals the Lanczos tridiagonal matrix in exact arithmetic.
Here we state the Lanczos algorithm.
Algorithm 2: Orthonormal basis by Lanczos.
Input: $C_k = A - \rho_k B$, B, and an approximate eigenvector $x_k$.
For $i = 0, 1, \dots, m-1$: carry out the standard Lanczos three-term recurrence with $C_k$, starting from $z_0 = x_k/\|x_k\|$, to generate the orthonormal vectors $z_0, \dots, z_m$ (see the sketch below).
End
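A minimal MATLAB sketch of this construction (a hypothetical helper, with full reorthogonalization added for robustness as discussed next) is:

function [Z, CZ] = lanczos_basis(Ck, x, m)
% Build an orthonormal basis Z of span{x, Ck*x, ..., Ck^m*x}; CZ collects
% the products Ck*z_j computed during the recurrence, which per the text
% can be stored and reused when forming A_m.
  Z = x / norm(x);  CZ = [];
  for j = 1:m
    w  = Ck*Z(:,j);  CZ = [CZ, w];
    w  = w - Z*(Z'*w);                 % orthogonalize against all previous z_i
    w  = w - Z*(Z'*w);                 % full reorthogonalization
    nw = norm(w);
    if nw < eps, break, end
    Z  = [Z, w/nw];
  end
end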
With the orthonormal basis, $B_m$ is in general a full matrix and we need to solve a generalized
eigenvalue problem for $(A_m, B_m)$. While $A_m$ equals the Lanczos tridiagonal matrix in exact arithmetic, this may not be valid in a
finite precision arithmetic for larger m, when there could be severe loss of orthogonality among the $z_i$.
This can be corrected by either computing $A_m = Z_m^T C_k Z_m$ explicitly or using reorthogonalization
[6] in the Lanczos process. We note that $C_k Z_m$ has been computed in the Lanczos algorithm and
can be stored for forming $A_m$.
2.2 B-orthonormal basis by the Arnoldi algorithm
We can also construct a B-orthonormal basis for $K_m$ by the modified Gram-Schmidt process in the
B-inner product, which is essentially the Arnoldi algorithm. The advantage of this approach is a
simpler projection problem with $B_m = I$, but it comes at the cost of a longer recurrence. We also need
to compute $A_m = Z_m^T C_k Z_m$. We state the algorithm here.
Algorithm 3: B-orthonormal basis by Arnoldi.
Input: $C_k = A - \rho_k B$, B, and an approximate eigenvector $x_k$.
For $i = 0, 1, \dots, m-1$:
  For $j = 0, \dots, i$: B-orthogonalize $C_k z_i$ against $z_j$ (modified Gram-Schmidt in the B-inner product);
  B-normalize the result to obtain $z_{i+1}$.
End
Each step of the Arnoldi algorithm requires 2 matrix-vector multiplications, one by $C_k$ and
one by B. In addition, we need to store $B z_i$ from each iteration in order to save matrix-vector
multiplications, resulting in a storage cost of m vectors. We note again that, for larger m, the B-orthogonality
among the columns of $Z_m$ may gradually be lost. This leads to deterioration of the
identity $B_m = I$. In that case, we need either reorthogonalization in the Arnoldi algorithm, or
explicit computation of $B_m = Z_m^T B Z_m$.
In comparing the two constructions, the computational costs associated with them are very
comparable. They both require 2(m+1) matrix-vector multiplications. The Arnoldi recurrence
is more expensive in both flops and storage than the Lanczos recurrence, while it produces a more
compact projection matrix than the Lanczos algorithm. Clearly, these differences are very minor
when m is not too large, which is the case of interest in practical implementations. In terms of
numerical stability of these two theoretically equivalent processes, our testing suggests that there
is very little difference. However, for the preconditioned version of Algorithm 1 that we will discuss
in Section 5, the approach by the Arnoldi algorithm seems to have some advantage; see Section 5.
2.3 C k -orthogonal basis by a variation of the Lanczos algorithm
It is also possible to construct Zm that is C k -orthogonal by a variation of the Lanczos algorithm
with a three term recurrence. Then the projection
will have a compact form,
leading to a computationally more eective approach than the previous two. However, it is less
stable owing to the indeniteness of C k . For the theoretical interest, we outline this variation of
the Lanczos algorithm for in the form of a full matrix tridiagonalization.
be the standard tridiagonalization of the Lanczos algorithm for C where T
is tridiagonal and Q is orthogonal with x k =kx k k as its rst column. For the sake of simplicity in
presentation, we assume here that k is between the rst and the second eigenvalue, which implies
that C has exactly one negative eigenvalue. Noting that the (1; 1) entry of T is x T
be the block LDL T decomposition of T , where
I
and
Write
It is easy to check that
and
Now a Lanczos three term recurrence can be easily derived
from (2) to construct the columns of Z, which still form a basis for the Krylov subspace and is
essentially C-orthogonal. However, our tests show that this is numerically less stable. Therefore,
we shall not consider this further and omit a detailed algorithm here.
3 Convergence analysis.
In this section, we study convergence properties of Algorithm 1 that include a global convergence
result and a local one on the rate of linear convergence. In particular, we identify the factors
that aect the speed of the convergence so as to develop preconditioning strategy to improve the
convergence.
We rst prove that Algorithm 1 always converges to an eigenpair. For that, we need the
following proposition, the proof of which is straightforward.
Proposition 1 Let 1 be the smallest eigenvalue of (A; B) and ( k ; x k ) be the eigenpair approximation
obtained by Algorithm 1 at step k. Then
and
Theorem 1 Let $(\rho_k, x_k)$ denote the eigenpair approximation obtained by Algorithm 1 at step k.
Then $\rho_k$ converges to some eigenvalue $\hat\lambda$ of (A, B) and $\|(A - \rho_k B)x_k\| \to 0$ (i.e., $x_k$ converges in
direction to a corresponding eigenvector).
Proof From Proposition 1, we obtain that k is convergent. Since x k is bounded, there is a
convergent subsequence x n k . Let
x:
B)^x. Then it follows from (3) that,
Suppose now ^ r 6= 0. We consider the projection of (A; B) onto
rg by dening
r]:
Noting that f^x; ^ rg is orthogonal, we have ^
B)^r
is indenite. Thus the smallest eigenvalue of
B), denoted by ~ , is less than ^
, i.e.
~
Furthermore, at step k, dene r
Let ~ k+1 be the smallest eigenvalue of
B.
Hence by the continuity property of the eigenvalue, we have
On the other hand, k+1 is the smallest eigenvalue of the projection of (A; B) on
which implies
Finally, combining the above together, we have obtained
which is a contradiction to (4). Therefore, ^
is an eigenvalue and
Now, to show
suppose there is a subsequence m k such that
> 0. From the subsequence m k , there is a subsequence n k for which x n k
is convergent. Hence by
virtue of the above proof, which is a contradiction. This completes the proof.
Next, we study the speed of convergence through a local analysis. In particular, we show that
k converges at least linearly.
be the smallest eigenvalue of (A; B), x be a corresponding unit eigenvector and
be the eigenpair approximation obtained by Algorithm 1 at step k. Let 1 be the smallest
eigenvalue of A k B and u 1 be a corresponding unit eigenvector. Then
Asymptotically, if k ! 1 , we have
Proof First, from the denition, we have
Furthermore, A 1 I k B 0 and A 0I is the smallest eigenpair
of is the smallest eigenpair of (A; B). Clearly A k B is indenite and
hence 1 0: Now using Theorem 3 of Appendix, we have
which leads to the bound (5).
To prove the asymptotic expansion, let 1 (t) be the smallest eigenvalue of A tB. Then
. Using the analytic perturbation theory, we obtain 0
and hence
Choosing
from which the expansion follows.
We now present our main convergence result. We assume that k is already between the rst
and the second smallest eigenvalues. Then by Theorem 1, it converges to the smallest eigenvalue.
Theorem 2 Let $\lambda_1 < \lambda_2 \le \cdots \le \lambda_n$ be the eigenvalues of (A, B) and let $(\rho_{k+1}, x_{k+1})$ be the approximate
eigenpair obtained by Algorithm 1 from $(\rho_k, x_k)$. Let $\mu_1 \le \mu_2 \le \cdots \le \mu_n$ be the eigenvalues
of $A - \rho_k B$ and let $u_1$ be a unit eigenvector corresponding to $\mu_1$. Assume $\lambda_1 < \rho_k < \lambda_2$. Then
$\rho_{k+1} - \lambda_1$ is bounded in terms of $\rho_k - \lambda_1$, $\|B\|$, and the quantity
  $\epsilon_m = \min_{p \in \mathcal{P}_m,\ p(\mu_1) = 1}\ \max_{2 \le i \le n} |p(\mu_i)|$,
with $\mathcal{P}_m$ denoting the set of all polynomials of degree not greater than m.
Proof First, write g. At step k of the algorithm, we
have
Let A k be the eigenvalue decomposition of A k B, where
orthogonal and g. Let q be the minimizing polynomial in m with
and it follows from x T
and
Using Proposition 1, we have y T
and hence
On the other hand, we also have
and
where we note that q( 1
Thus
kBk
where we have used (8) and (9). Finally, combining (7), (10) and Lemma 1, we have
kBk
which leads to the theorem.
It is well known that $\epsilon_m$ in the theorem can be bounded in terms of the $\mu_i$ via Chebyshev
polynomials (see [16, Theorem 1.64], for example).
Then the speed of convergence depends on the distribution of the eigenvalues $\mu_i$ of $A - \rho_k B$ but not
on those of (A, B). This difference is of fundamental importance as it allows acceleration of convergence
by equivalent transformations that change the eigenvalues of $A - \rho_k B$ but leave those of (A, B)
unchanged (see the discussion on preconditioning in Section 5). On the other hand, the bound
shows accelerated convergence when m is increased. In this regard, our numerical tests suggest
that the convergence rate decreases very rapidly as m increases (see Section 4).
A. It is easy to check in this case that
Using this in Theorem 2, we recover the classical convergence bound for the steepest descent
method [9, p.617]. We note that there is a stronger global convergence result in this case, i.e. k is
guaranteed to converge to the smallest eigenvalue if the initial vector has a nontrivial component
in the smallest eigenvector (see [9, p.613]). There is no such result known for the case B 6= I.
Asymptotically we can also express the bound in terms of the eigenvalues of A 1 B instead
of i which is dependent of k. We state it as the following corollary; but point out that the bound
of Theorem 2 is more informative.
n be the eigenvalues of A 1 B. Then, we have asymptot-
ically
p! 2m
The proof follows from combining (11) with i
4 Numerical Examples - I
In this section, we present numerical examples to illustrate the convergence behavior of Algorithm
1. Here, we demonstrate the linear convergence property and the eect of m on the convergence
rate.
Example 1: Consider the Laplace eigenvalue problem with the Dirichlet boundary condition
on an L-shaped region. A finite element discretization on a triangular mesh with 7585 interior
nodes (using the PDE toolbox of MATLAB) leads to a pencil eigenvalue problem for (A, B). We
apply Algorithm 1 to find the smallest eigenvalue with a random initial vector, and the stopping
criterion is set on the relative residual $\|r_k\|/\|r_0\|$. We give the convergence history of
the residual $\|r_k\|$ for the tested values of m (from top down, respectively) in Figure 1. We present in Figure
2(a) the number of outer iterations required to achieve convergence for each m in the range tested and,
correspondingly, in Figure 2(b) the total number of inner iterations.
We observe that the residual converges linearly with the rate decreased as m increases. Fur-
thermore, from Figure 2 (a), the number of outer iterations decreases very rapidly (quadratically
or even exponentially) as m increases and it almost reaches its stationery limit for m around 70.
Because of this peculiar property, we see from Figure 2 (b) that the total number of inner iterations
is near minimal for a large range of m (40 < m < 80 in this case).
Example 2: We consider a standard eigenvalue problem which A is a ve point
nite dierence discretization of the Laplace operator on the mesh of the unit
square. Again, we apply Algorithm 1 to nd the smallest eigenvalue with a random initial vector.
[Figure 1. Example 1: convergence of $\|r_k\|$ (2-norm of residuals) against outer iterations, for the tested values of m (from top down).]
In this case, it is simply a restarted Lanczos algorithm and we shall consider its comparison with
the Lanczos algorithm without restart. In Figure 3, we present the convergence history of $\rho_k - \lambda_1$,
where $\rho_k$ is the approximate eigenvalue obtained at each inner iteration. They are plotted in the
dot lines, from top down for the tested values of m, respectively. The corresponding plot for the Lanczos
algorithm (without restart) is given in the solid line.
We have also considered the number of outer iterations and the total number of inner iterations
as a function of m and observed the same behavior as in Example 1. We omit a similar gure
here. In particular, the nearly exponential decrease of the outer iteration count implies that the
convergence history with a moderate m (in this case even 16) will be very close to the
one with very large m (i.e. Lanczos without restart in the solid line ).
These examples conrm the linear convergence property of Algorithm 1. Furthermore, our
numerical testing has consistently shown that the number of outer iterations decreases nearly
exponentially as m increases. This implies that near optimal performance of the algorithm can
be achieved with a moderate m, which is very attractive in implementations. Unfortunately we
have not been able to explain this interesting behavior with our convergence results. Even for the
restarted Lanczos algorithm, it seems to be a phenomenon not observed before and can not be
explained by the convergence theory of the Lanczos algorithm either.
5 Preconditioning
In this section, we discuss how to accelerate the convergence of Algorithm 1 through some equivalent
transformations, which we call preconditioning, and we shall present the preconditioned version of
Algorithm 1.
[Figure 2. Example 1: (a) number of outer iterations and (b) total number of inner iterations, against the parameter m (inner iterations per outer step).]
From our convergence result (Theorem 2), the rate of convergence depends on the spectral
distribution of $C_k = A - \rho_k B$, i.e. on the separation of $\mu_1$ from the rest of the eigenvalues $\mu_2, \dots, \mu_n$ of $C_k$.
With an approximate eigenpair $(\rho_k, x_k)$, we consider for some nonsingular matrix $L_k$ the transformed pencil
  $(\hat A, \hat B) = (L_k^{-1} A L_k^{-T},\ L_k^{-1} B L_k^{-T})$,   (13)
which has the same eigenvalues as (A, B). Thus, applying one step of Algorithm 1 to $(\hat A, \hat B)$, we
have the bound (6) of Theorem 2 with the rate of convergence
determined by $\hat\epsilon_m$, which is defined through the eigenvalues $\hat\mu_i$ of
  $\hat C_k = L_k^{-1}(A - \rho_k B)L_k^{-T}$.
We can now suitably choose $L_k$ to obtain a favorable distribution of the $\hat\mu_i$ and hence a smaller $\hat\epsilon_m$. We
shall call (13) a preconditioning transformation.
One preconditioning transformation can be constructed using the $LDL^T$ factorization of a
symmetric matrix [6]. For example, if $A - \rho_k B = L_k D_k L_k^T$
is the $LDL^T$ factorization with $D_k$ being a diagonal matrix of $\pm 1$, choosing this $L_k$ results in
$\hat C_k = D_k$. Then, at the convergence stage with $\lambda_1 < \rho_k < \lambda_2$, we have $\hat\mu_1 = -1$ and
$\hat\mu_2 = \cdots = \hat\mu_n = 1$, which implies $\hat\epsilon_m = 0$ for every $m \ge 1$; by Theorem 2, the leading
term of the error bound then vanishes.
[Figure 3. Example 2: eigenvalue convergence history (error of Ritz value against total inner iterations) for the restarted algorithm with several values of m (dot lines, from top down) and for the Lanczos algorithm without restart (solid line).]
We conclude that Algorithm 1, when applied to $(\hat A, \hat B)$ at step k using the exact $LDL^T$ factorization,
converges quadratically. This is even true with $m = 1$ (i.e. the steepest descent method).
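To spell out the reasoning (this is our reading of the argument, using the notation introduced above): with the exact factorization,
$$ \hat C_k = L_k^{-1}(A - \rho_k B)L_k^{-T} = D_k, \qquad \sigma(\hat C_k) = \{-1,\, 1\}, \ \text{with } -1 \ \text{simple when } \lambda_1 < \rho_k < \lambda_2, $$
$$ \hat\epsilon_m = \min_{p \in \mathcal{P}_m,\ p(-1)=1}\ \max_{\hat\mu_i = 1} |p(\hat\mu_i)| = 0 \quad (m \ge 1), \qquad \text{e.g. } p(t) = \tfrac{1-t}{2}, $$
so the term of the convergence bound that is linear in $\rho_k - \lambda_1$ disappears and only the higher-order term remains, which gives the quadratic convergence.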
Similarly, in light of Corollary 1, if we use a constant L obtained from the $LDL^T$ factorization
of $A - \lambda_1 B$ (assuming $\lambda_1$ is known), with D being a diagonal matrix of 0's and 1's,
Algorithm 1 also converges quadratically.
What we have described above is the ideal situation of fast quadratic convergence
achieved by using an exact $LDL^T$ factorization. In practice, we can use an incomplete $LDL^T$
factorization $A - \rho_k B \approx L_k D_k L_k^T$ (through incomplete LU factorization, see [23, Chapter 10]).
Then we will have a nonzero but small $\hat\epsilon_m$ and hence fast linear convergence. Indeed, to be efficient,
we can consider a constant L as obtained from an incomplete $LDL^T$ factorization of $A - \mu_0 B$,
where $\mu_0$ is a sufficiently good approximation of $\lambda_1$, and apply Algorithm 1 to (13). Then, the
preconditioned algorithm converges linearly with the rate determined by the eigenvalues of
$L^{-1}(A - \rho_k B)L^{-T}$, which has a better spectral distribution as long as $(\mu_0 - \rho_k)\,L^{-1} B L^{-T}$ is small relative to the spectral gap.
We note that $\mu_0 - \rho_k$ need not be very small if $L^{-1} B L^{-T}$ is small (e.g. for the discretization of
differential operators). It may work even when $\mu_0 < \lambda_1$, for which $A - \mu_0 B > 0$ and the incomplete
$LDL^T$ factorization becomes an incomplete Cholesky factorization. It is also possible to construct L
based on other factorizations, such as an approximate eigenvalue decomposition.
As in the preconditioned iterative methods for linear systems, the preconditioned iteration of
Algorithm 1 can be implemented implicitly, i.e. without explicitly forming the transformed problem
C k . We derive a preconditioned version of the algorithm in the rest of this section.
Let $(\rho_k, x_k)$ be the approximate eigenpair that has been obtained at step k for the pencil (A, B).
Then $(\rho_k, \hat x_k)$ with $\hat x_k = L_k^T x_k$ is the corresponding approximate eigenpair for the transformed
pencil (13). By applying one step of the iteration to the transformed pencil, the new approximation is
obtained by constructing a basis $\hat z_0, \hat z_1, \dots, \hat z_m$ for the Krylov subspace
  $\hat K_m = \operatorname{span}\{\hat x_k,\ \hat C_k \hat x_k,\ \dots,\ \hat C_k^m \hat x_k\}$
and forming the projection problem for $(\hat Z_m^T \hat C_k \hat Z_m,\ \hat Z_m^T \hat B \hat Z_m)$.
If $(\mu_1, \hat v)$ is the smallest eigenpair of the above projection problem,
then $(\rho_k + \mu_1,\ L_k^{-T} \hat Z_m \hat v)$ is the new approximate eigenpair for (A, B).
Let $z_i = L_k^{-T} \hat z_i$ and $Z_m = [z_0, z_1, \dots, z_m]$.
Then, the new approximate eigenpair can be written as $(\rho_k + \mu_1,\ Z_m \hat v)$ and the projection problem
is equivalent to the one for $(Z_m^T C_k Z_m,\ Z_m^T B Z_m)$.
Therefore, to complete the k-th iteration, we only need to construct $Z_m$, a
basis for the subspace $L_k^{-T} \hat K_m$. The actual construction of the $z_i$ depends on which method we use
and will be given in detail in the subsections later.
Here, we summarize the preconditioned algorithm as follows.
Algorithm 4: Preconditioned Inverse Free Krylov Subspace Method.
Input: $m \ge 1$ and an initial approximation $x_0$ with $\|x_0\| = 1$; set $\rho_0 = \rho(x_0)$.
For $k = 0, 1, 2, \dots$ until convergence:
  Construct a preconditioner $L_k$;
  Construct a basis $\{z_0, z_1, \dots, z_m\}$ for the preconditioned subspace $L_k^{-T} \hat K_m$;
  Form $A_m = Z_m^T C_k Z_m$ and $B_m = Z_m^T B Z_m$;
  Find the smallest eigenvalue $\mu_1$ and an eigenvector v for $(A_m, B_m)$;
  Set $\rho_{k+1} = \rho_k + \mu_1$ and $x_{k+1} = Z_m v$.
End
Remark: As in the linear system case, the above algorithm takes the same form as the original
one except using a preconditioned search space L T
Km . In the following subsections, we discuss
the construction of a preconditioned basis by the Arnoldi algorithm and the Lanczos algorithm
corresponding to the construction in Sections 2.1 and 2.2. Our numerical testing suggests that the
Arnoldi algorithm might be more stable than the Lanczos algorithm in some cases.
5.1 Preconditioned basis by the Arnoldi method
In the Arnoldi method, we construct $\hat z_0, \dots, \hat z_m$ as a $\hat B$-orthonormal basis for $\hat K_m$. Correspondingly,
$z_i = L_k^{-T} \hat z_i$ is a B-orthonormal basis for $L_k^{-T} \hat K_m$. Starting from
$z_0 = x_k/(x_k^T B x_k)^{1/2}$, the recurrence for $z_{i+1}$ amounts to applying $(L_k L_k^T)^{-1} C_k$ to $z_i$
and B-orthogonalizing the result against $z_0, \dots, z_i$ with coefficients $h_{j,i}$; the normalization in the
B-inner product ensures that $z_0, \dots, z_m$ is B-orthonormal, and the $h_{j,i}$ above ensure this condition.
From this, we arrive at the following algorithm.
Algorithm 5: Preconditioned B-orthonormal basis by Arnoldi.
Input: $C_k = A - \rho_k B$, B, an approximate eigenvector $x_k$, and a preconditioner $L_k$.
For $i = 0, 1, \dots, m-1$:
  Compute $w = L_k^{-T} L_k^{-1} (C_k z_i)$;
  For $j = 0, \dots, i$: B-orthogonalize w against $z_j$;
  B-normalize w to obtain $z_{i+1}$.
End
We see from the algorithm that only the actions of $L_k^{-1}$ and $L_k^{-T}$ (i.e. of $(L_k L_k^T)^{-1}$) are needed in our construction. If we use $\mu_0 < \lambda_1$
and L is an incomplete Cholesky factor, i.e. $A - \mu_0 B \approx L L^T$, then we can use any matrix
(or operator) approximating $(A - \mu_0 B)^{-1}$ without explicitly forming $L_k$. For example, for differential
operators, we can use multigrid or domain decomposition preconditioners for $A - \mu_0 B$ directly.
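The following minimal MATLAB sketch (a hypothetical helper; Minv is assumed to be a function handle applying the preconditioner $(L_k L_k^T)^{-1}$, e.g. built from an incomplete factorization of $A - \mu_0 B$) builds such a B-orthonormal basis of the preconditioned subspace:

function Z = precond_basis(Ck, B, Minv, x, m)
% B-orthonormal basis of span{x, M\(Ck*x), (M\Ck)^2*x, ...}, i.e. of
% L^{-T} * Khat_m, in the spirit of Algorithm 5; M ~ A - mu0*B.
  z = x / sqrt(x'*B*x);                 % B-normalize the starting vector
  Z = z;
  for i = 1:m
    w = Minv(Ck*Z(:,i));                % preconditioned matrix-vector product
    w = w - Z*(Z'*(B*w));               % B-orthogonalize (Gram-Schmidt)
    nrm = sqrt(w'*B*w);
    if nrm < eps, break, end
    Z = [Z, w/nrm];
  end
end

In practice, Minv could wrap any available approximate solver for $A - \mu_0 B$ (an incomplete factorization, a multigrid cycle, etc.), as the text suggests.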
5.2 Preconditioned basis by the Lanczos method
In the Lanczos method, we construct ^ z
z m as an orthonormal basis for
Km . Then the corresponding
basis z
k . Starting from ^
the recurrence is
z
z
z T
z i and
. The resulting tridiagonal matrix T as constructed from 's
and 's satises
Zm . Thus, using ^
and
with
Clearly, the formulas for i and i+1 ensures
z j is M-orthonormal. Thus, we have the alternative formulas
From this, we can derive a recurrence to construct the basis. We note that this construction
normalizes z i in the M-norm. In practice, M could be nearly singular. Therefore, it is more
appropriate to normalize it in the 2-norm. The following algorithm is one of several possible
formulations here.
Algorithm Preconditioned basis by Lanczos
Input and a preconditioner L k .
For
End
6 Numerical Examples - II
In this section, we present some numerical examples to demonstrate the effectiveness and competitiveness
of the preconditioned inverse free Krylov subspace method (Algorithm 4).
Example 3: A and B are the same as in Example 1. We apply Algorithm 4 to find the smallest
eigenvalue. We use a constant $L_k$ as obtained by a threshold incomplete
$LDL^T$ factorization, with drop tolerance $10^{-2}$, of a shifted matrix of the form $A - \mu_0 B$. We compare our algorithm
with the Jacobi-Davidson algorithm that uses the same number of inner iterations (m) and
the same kind of preconditioner. We give in Figure 4 the convergence history of the residual $\|r_k\|$
of Algorithm 4 in solid lines and that of the Jacobi-Davidson algorithm in dot lines, from top down for
the tested values of m, respectively. In Figure 5, we also present the number of outer iterations and the
total number of inner iterations required to reduce $\|r_k\|/\|r_0\|$ to $10^{-7}$, for each m, in '+' marks for
Algorithm 4 and in 'o' marks for the Jacobi-Davidson algorithm.
Comparing it with Example 1, the results clearly demonstrate the acceleration eect of pre-conditioning
by signicantly reducing the number of outer iterations (Fig. 4 and Fig. 5(a)).
Furthermore, the values of m at which the total number of inner iterations is near its minimum are
signicantly smaller with preconditioning (around Fig. 5). Although J-D algorithm
has smaller number of total inner iteration for very small m, the corresponding outer iteration
count is larger, which increases its cost.
We also considered for this example the ideal preconditioning with L k chosen as the exact LDL T
factorization of C k . In this case, we use an initial vector with kAx 0 so that 0
is su-ciently close to 1 . We present the residual convergence history in Figure 6 for
steepest descent method), 2 and 4. The result conrms the quadratic convergence property for all
[Figure 4. Example 3: residual convergence history (2-norm of residuals against outer iterations); solid lines: Algorithm 4, dot lines: Jacobi-Davidson.]
m. We have also tested the case that uses L L from the exact factorization of A 1 B and in
this case it converges in just one iteration, conrming Corollary 1.
The next example is for the standard eigenvalue problem and the preconditioned
algorithm implicitly transforms it to a pencil problem.
Example 4: A is the same matrix as in Example 2. We use a constant $L_k$ as obtained
by an incomplete $LDL^T$ decomposition, with no fill-in, of a shifted matrix of the form $A - \mu_0 I$. We compare it with the
Jacobi-Davidson algorithm with the same kind of preconditioner. We also consider the shift-and-invert
(spectral transformation) Lanczos algorithm.
We give in Figure 7 the convergence history of the residual kr k k of Algorithm 4 in solid lines
from top down for and that of the Jacobi-Davidson algorithm in dot lines for
with the corresponding marks. The residual for the spectral transformed Lanczos
is given in dash-dot (with +) line. Figure 8 is the number of outer iterations and the total number
of inner iterations vs. m.
Again, preconditioning signicantly accelerates convergence and our result compares very favorably
with the Jacobi-Davidson method. An interesting point here is that Algorithm 4 with
based on incomplete factorization outperforms the shift-and-invert Lanczos algorithm. Although
we do not suggest this is the case in general, it does underline the eectiveness of the preconditioned
algorithm.
Figure
5: Example 3 Outer and total inner iterations vs. m (+ - Algorithm 4;
outer
iterations
(a)
total
inner
iterations
(b)
7 Concluding Remarks
We have presented an inverse free Krylov subspace method that is based on the classical Krylov
subspace projection methods but incorporates preconditioning for e-cient implementations. A
convergence theory has been developed for the method and our preliminary tests of the preconditioned
version demonstrate its competitiveness. Comparing with the existing methods, it has a
relatively well understood theory and simple numerical behavior. We point out that the algorithm
has a xed cost per outer iteration, which makes it easy to implement.
For the future work, we will consider generalizations in three directions, namely, an e-cient
block version for computing several eigenvalues simultaneously, a strategy to compute interior
eigenvalues and an algorithm for the general nonsymmetric problem.
A
Appendix
. Perturbation Bounds for Generalized Eigenvalue
Problems
We present a perturbation theorem that is used in the proof of Lemma 1 but might be of general
interest as well. In the following, A; B; and E are all symmetric.
Theorem 3 Let 1 be the smallest eigenvalue of (A; B) with x a corresponding unit eigenvector
and let 1 be the smallest eigenvalue of with u a corresponding unit eigenvector, where
min (E)
Figure
convergence with ideal preconditioning
where min (E) and max (E) denote the smallest and the largest eigenvalues of E respectively.
Proof Using the minimax characterization, we have
z 6=0
z T
z T Bz
Similarly,
min (E)
which completes the proof.
We note that 1=x T Bx (or 1=u T Bu ) is the Wilkinson condition number for 1 (or 1 ). These
bounds therefore agree with the rst order analytic expansion and will be sharper than traditional
bounds based on kBk.
Figure
7: Example 4 residual convergence history for lines - Algorithm
4; dot lines - Jacobi-Davidson, dash-dot - shift-and-invert Lanczos)
--R
A subspace preconditioning algorithm for eigenvec- tor/eigenvalue computation
The Davidson method
Applied Numerical Linear Algebra
Minimization of the computational labor in determining the
Matrix Computations
Inexact inverse iterations for the eigenvalue problems
Large sparse symmetric eigenvalue problems with homogeneous linear constraints: the Lanczos process with inner-outer iterations
Functional Analysis in Normed Spaces
Convergence rate estimates for iterative methods for a mesh symmetric eigenvalue problem.
Preconditioned Eigensolvers - An oxymoron? Elec
Toward the optimal preconditioned eigensolver: Locally optimal block preconditioned conjugate gradient method
An inexact inverse iteration for large sparse eigenvalue problems
The inexact rational Krylov sequence method
The restarted Arnoldi method applied to iterative linear solvers for the computation of rightmost eigenvalues
Computer Solution of Large Linear Systems
Generalizations of Davidson's method for computing eigenvalues of sparse symmetric matrices
Preconditioning the Lanczos algorithm for sparse symmetric eigenvalue problems
A geometric theory for preconditioned inverse iteration
a new method for the generalized eigenvalue problem and convergence estimate Preprint
The Symmetric Eigenvalue Problem
Numerical Methods for Large Eigenvalue Problems
Iterative Methods for Sparse Linear Systems
A Jacobi-Davidson iteration method for linear eigenvalue problems
A truncated RQ iteration for large scale eigenvalue calculations
Robust preconditioning of large sparse symmetric eigenvalue problems
Restarting techniques for the (Jacobi-)Davidson symmetric eigenvalue method Elec
Dynamic thick restarting of the Davidson and implicitly restarted Arnoldi methods
Inexact Newton preconditioning techniques for large symmetric eigenvalue problems Elec.
Convergence analysis of an inexact truncated RQ iterations Elec.
--TR
--CTR
James H. Money , Qiang Ye, Algorithm 845: EIGIFP: a MATLAB program for solving large symmetric generalized eigenvalue problems, ACM Transactions on Mathematical Software (TOMS), v.31 n.2, p.270-279, June 2005
P.-A. Absil , C. G. Baker , K. A. Gallivan, A truncated-CG style method for symmetric generalized eigenvalue problems, Journal of Computational and Applied Mathematics, v.189 n.1, p.274-285, 1 May 2006 | preconditioning;eigenvalue problems;krylov subspace |
587382 | Nonlinearly Preconditioned Inexact Newton Algorithms. | Inexact Newton algorithms are commonly used for solving large sparse nonlinear system of equations $F(u^{\ast})=0$ arising, for example, from the discretization of partial differential equations. Even with global strategies such as linesearch or trust region, the methods often stagnate at local minima of $\|F\|$, especially for problems with unbalanced nonlinearities, because the methods do not have built-in machinery to deal with the unbalanced nonlinearities. To find the same solution $u^{\ast}$, one may want to solve instead an equivalent nonlinearly preconditioned system ${\cal F}(u^{\ast})=0$ whose nonlinearities are more balanced. In this paper, we propose and study a nonlinear additive Schwarz-based parallel nonlinear preconditioner and show numerically that the new method converges well even for some difficult problems, such as high Reynolds number flows, where a traditional inexact Newton method fails. | Introduction
. Many computational engineering problems require the numerical
solution of large sparse nonlinear system of equations, i.e., for a given nonlinear
vector u # R n , such that
starting from an initial guess u (0)
and Inexact Newton algorithms (IN) [7, 8, 11, 17] are commonly
used for solving such systems and can briefly be described here. Suppose u (k) is the
current approximate solution; a new approximate solution u (k+1) can be computed
through the following steps:
Algorithm 1.1 (IN).
1: Find the inexact Newton direction p (k) such that
Step 2: Compute the new approximate solution
Here # k is a scalar that determines how accurately the Jacobian system needs to be
solved using, for example, Krylov subspace methods [2, 3, 11, 12]. # (k) is another
scalar that determines how far one should go in the selected inexact Newton direction
[7]. IN has two well-known features, namely, (a) if the initial guess is close enough
to the desired solution then the convergence is very fast, and (b) such a good initial
# Department of Computer Science, University of Colorado, Boulder, CO 80309-0430
(cai@cs.colorado.edu). The work was supported in part by the NSF grants ASC-9457534, ECS-
and ACI-0072089, and by Lawrence Livermore National Laboratory under subcontract
B509471.
Department of Mathematics & Statistics, Old Dominion University, Norfolk, VA 23529-0077;
ISCR, Lawrence Livermore National Laboratory, Livermore, CA 94551-9989; and ICASE, NASA
Langley Research Center, Hampton, VA 23681-2199 (keyes@icase.edu). This work was supported in
part by NASA under contract NAS1-19480 and by Lawrence Livermore National Laboratory under
subcontract B347882.
guess is generally very di#cult to obtain, especially for nonlinear equations that have
unbalanced nonlinearities [19]. The step length # (k) is often determined by the components
with the worst nonlinearities, and this may lead to an extended period of
stagnation in the nonlinear residual curve; see Fig 5.2 for a typical picture and more
in the references [4, 14, 16, 23, 27, 28].
In this paper, we develop some nonlinearly preconditioned inexact Newton algorithms
Find the solution u # R n of (1.1) by solving a preconditioned system
Here the preconditioner G : R n
1. If
2. G # F -1 in some sense.
3. G(F (w)) is easily computable for w # R n .
4. If a Newton-Krylov type method is used for solving (1.4), then the matrix-vector
product (G(F (w))) # v should also be easily computable for
As in the linear equation case [13], the definition of a preconditioner can not be given
precisely, nor is it necessary. Also as in the linear equation case, preconditioning can
greatly improve the robustness of the iterative methods, since the preconditioner is
designed so that the new system (1.4) has more uniform nonlinearities. PIN takes the
following
Algorithm 1.2 (PIN).
1: Find the inexact Newton direction p (k) such that
Step 2: Compute the new approximate solution
Note that the Jacobian of the preconditioned function can be computed, at least in
theory, using the chain rule, i.e.,
#F
If G is close to F -1 in the sense that G(F (u)) # u, then #G
#F
I .
In this case, Algorithm 1.2 converges in one iteration, or few iterations, depending on
how close is G to F -1 . In fact, the same thing happens as long as G(F (u)) # Au,
where A is constant matrix independent of u. On the other hand, if G is a linear
function, then #G
would be a constant matrix independent of u. In this case the
Newton equation of the preconditioned system
reduces to the Newton equation of the original system
and G does not a#ect the nonlinear convergence of the method, except for the stopping
conditions. However, G does change the conditioning of the linear Jacobian system,
and this forms the basis for the matrix-free Newton-Krylov methods.
Most of the current research has been on the case of linear G; see, for example,
[4, 24]. In this paper, we shall focus on the case when G is the single-level nonlinear
additive Schwarz method. As an example, we show the nonlinear iteration history, in
Figure
5.2, for solving a two-dimensional flow problem with various Reynolds numbers
using the standard IN (top) and PIN (bottom). It can be seen clearly that PIN is
much less sensitive to the change of the Reynolds number than IN. Details of the
numerical experiment will be given later in the paper. Nonlinear Schwarz algorithms
have been studied extensively as iterative methods [5, 9, 20, 21, 22, 25, 26], and are
known, at least experimentally, to be not very robust, in general, unless the problem is
monotone. However, we show in the paper that nonlinear Schwarz can be an excellent
nonlinear preconditioner.
We remark that nonlinear methods can also be used as linear preconditioners as
described in [6], but we will not look into this issue in this paper.
Nested linear and nonlinear solvers are often needed in the implementation of
PIN, and as a consequence, the software is much harder to develop than for the
regular IN. Our target applications are these problems that are di#cult to solve using
traditional Newton type methods. Those include (1) problems whose solutions have
local singularities such as shocks or nonsmooth fronts; and (2) multi-physics problems
with drastically di#erent sti#ness that require di#erent nonlinear solvers based on a
single physics submodel, such as coupled fluid-structure interaction problems.
The rest of the paper is organized as follows. In section 2, we introduce the
nonlinear additive Schwarz preconditioned system and prove that under certain assumptions
it is equivalent to the original unpreconditioned system. In section 3, we
derive a formula for the Jacobian of the nonlinearly preconditioned system. The details
of the full algorithm is presented in section 4, together with some comments
about every step of the algorithm. Numerical experiments are given in section 5. In
section 6, we make some further comments and discuss some future research topics
along the line of nonlinear preconditioning. Several concluding remarks are given in
section 7.
2. A nonlinear additive Schwarz preconditioner. In this section, we describe
a nonlinear preconditioner based on the additive Schwarz method [5, 9]. Let
be an index set; i.e., one integer for each unknown u i and F i . We assume that
is a partition of S in the sense that
Here we allow the subsets to have overlap. Let n i be the dimension of S i ; then, in
general,
Using the partition of S, we introduce subspaces of R n and the corresponding restriction
and extension matrices. For each S i we define V i # R n as
and a n-n restriction (also extension) matrix I S i
whose kth column is either the kth
column of the n - n identity matrix I n-n if k # S i or zero if k # S i . Similarly, let
s be a subset of S; we denote by I s the restriction on s. Note that the matrix I s is
always symmetric and the same matrix can be used as both restriction and extension
operator. Many other forms of restriction/extension are available in the literature;
however, we only consider the simplest form in this paper.
Using the restriction operator, we define the subdomain nonlinear function as
F.
We next define the major component of the algorithm, namely the nonlinearly preconditioned
function. For any given v # R n , define T as the solution of the
following subspace nonlinear system
We introduce a new function
which we will refer to as the nonlinearly preconditioned F (u). The main contribution
of this paper is the following algorithm.
Algorithm 2.1. Find the solution u # of (1.1) by solving the nonlinearly preconditioned
system
with u (0) as the initial guess.
Remark 2.1. In the linear case, this algorithm is the same as the additive
Schwarz algorithm. Using the usual notation, if
then
where A
is the subspace inverse of A
in V i .
Remark 2.2. The evaluation of the function F(v), for a given v, involves the
calculation of the T i , which in turn involves the solution of nonlinear systems on S i .
Remark 2.3. If the overlap is zero, then this is simply a block nonlinear Jacobi
preconditioner.
Remark 2.4. If (2.2) is solved with Picard iteration, or Richardson's method,
then the algorithm is simply the nonlinear additive Schwarz method, which is not a
robust algorithm, as is known from experience with linear and nonlinear problems.
Assumption 2.1 (local unique solvability). For any s # S, we assume that
is uniquely solvable on s.
Remark 2.5. The assumption means that, for a given subset s, if both u and
are solutions on s, i.e.,
and
then if u| This assumption maybe a little too strong. A
weaker assumption could be for s to be the subsets in the partition. In this case, the
proof of the following theorem needs to be modified.
Theorem 2.1. Under the local unique solvability assumption, the nonlinear systems
and (2.2) are equivalent in the sense that they have the same solution.
Proof. Let us first assume that u # is the solution of (1.1), i.e., F
immediately implies that
By definition, T i satisfies
Comparing (2.3) and (2.4), and using the local unique solvability assumption, we must
have
Therefore, u # is a solution of (2.2).
Next, we assume that u # is a solution of (2.2) which means that
We prove that T i in two steps. First we show that T i
equals zero in the nonoverlapping part of S, then we show that T i must equal
each other in the overlapping part of S.
be the nonoverlapping part of S, i.e.,
there exists one and only one i such that k # S i },
Obviously
for any 1 # j # N . Taking ASSUMPTION 2.1, we have
for any 1 # j # N . Due to the uniqueness, we must have
for any 1 # i, j # N . Since the sum of T i (u # )| s is zero, and they all equal to each
other, they must all be zero. Thus,
. This is equivalent to saying that u # is a solution of (1.1).
3. Basic properties of the Jacobian. If (2.2) is solved using a Newton type
algorithm, then the Jacobian is needed in one form or another. We here provide a
computable form of it, and discuss some of its basic properties. Let J be the Jacobian
of the original nonlinear system, i.e.,
and JS i
, the Jacobian of the subdomain nonlinear system, i.e.,
N. Note that if F (-) is sparse nonlinear function, then J is a sparse
matrix and so are the JS i
. Unfortunately, the same thing cannot be said about the
preconditioned function F(-). Its Jacobian, generally speaking, is a dense matrix,
and is very expensive to compute and store as one may imagine. In the following
discussion, we denote by
and JS i
the Jacobian of the preconditioned whole system, and the subsystems, respectively.
Because of the definition of T i , JS i
is a n-n matrix.
components in S i , n independent variables u 1 , . , un , and its other n-n i components
are zeros.
Suppose we want to compute the Jacobian J at a given point u # R n . Consider
one subdomain S i . Let S c
be the complement of S i in S, we can write
which is correct up to an re-ordering of the independent variables
u and uS c
u. Using the definition of T i (u), we have that
Taking the derivative of the above function with respect to uS i
, we obtain
# I S i -
which implies that
assuming the subsystem Jacobian matrix #FS i
is nonsingular in the subspace V i . Next,
we take the derivative of (3.1) with respect to uS c
which is equivalent to
Note that
since the sets S i and S c
do not overlap each other. Combining (3.2) and (3.3), we
obtain
J.
Summing up (3.6) for all subdomains, we have a formula for the Jacobian of the
preconditioned nonlinear system in the form of
J.
(3.7) is an extremely interesting formula since it corresponds exactly to the additive
Schwarz preconditioned linear Jacobian system of the original un-preconditioned
equation. This fact implies that, first of all, we know how to solve the Jacobian system
of the preconditioned nonlinear system, and second, the Jacobian itself is already
well-conditioned. In other words, nonlinear preconditioning automatically o#ers a
linear preconditioning for the corresponding Jacobian system.
4. Additive Schwarz preconditioned inexact Newton algorithm. We describe
a nonlinear additive Schwarz preconditioned inexact Newton algorithm (AS-
PIN). Suppose u (0) is a given initial guess, and u (k) is the current approximate so-
lution; a new approximate solution u (k+1) can be computed through the following
steps:
Algorithm 4.1 (ASPIN).
1: Compute the nonlinear residual g through the following
two steps:
a) Find g (k)
by solving the local subdomain
nonlinear systems
with a starting point g (k)
b) Form the global residual
c) Check stopping conditions on g (k) .
Step 2: Form elements of the Jacobian of the preconditioned system
Step 3: Find the inexact Newton direction p (k) by solving the Jacobian system
approximately
Step 4: Compute the new approximate solution
where # (k) is a damping parameter.
ASPIN may look a bit complicated, but as a matter of fact, the required user
input is the same as that for the regular IN Algorithm 1.1, i.e., the user needs to
supply only two routines for each subdomain:
(1) the evaluation of the original function FS i
(w). This is needed in both Step 1
a) and Step 2 if the Jacobian is to be computed using finite-di#erence methods. It is
also needed in Step 4 in the line search steps.
(2) the Jacobian of the original function JS i
in terms of a matrix-vector multipli-
cation. This is needed in both Step 1 a) and Step 3.
We now briefly discuss the basic steps of the algorithm. In Step 1 a) of Algorithm
4.1, N subdomain nonlinear systems have to be solved in order to evaluate the
preconditioned function F at a given point. More explicitly, we solve
which has n i equations and n i unknowns, using Algorithm 1.1 with a starting value
. Note that the vector u (k)
is needed to evaluate GS i
(#), for
this requires the ghost points in a mesh-based software implementation.
In a parallel implementation, the ghost values often belong to several neighboring
processors and communication is required to obtain their current values. We note,
however, that the ghost values do not change during the solution of the subdomain
nonlinear system.
In Step 2, pieces of the Jacobian matrix are computed. The full Jacobian matrix
J never needs to be formed. In a distributed memory parallel implementation, the
submatrices JS i
are formed, and saved. The multiplication of J with a given vector is
carried out using the submatrices JS i
. Therefore the global J matrix is never needed.
Several techniques are available for computing the JS i
, for example, using an analytic
multi-colored finite di#erencing, or automatic di#erentiation. A triangular
factorization of JS i
is also performed at this step and the resulting matrices are stored.
In Step 3, the matrix
should not considered as a linear preconditioner since it does not appear on the right-hand
side of the linear system. However, using the additive Schwarz preconditioning
theory, we know that for many applications the matrix
J is well-conditioned,
under certain conditions. We also note that if an inexact solver is used to compute
w in Step 3, the Newton search direction would be changed and, as a result,
the algorithm becomes an inexact Newton algorithm.
As noted above the Jacobian system, in Step 3, does not have the standard form
of a preconditioned sparse linear system
However, standard linear solver software packages can still be used with some slight
modification, such as removing the line that performs
Since the explicit sparse format of
J is often not available, further preconditioning
of the Jacobian system using some of the sparse matrix based techniques,
such as ILU, is di#cult.
A particular interesting case is when the overlap is zero; then the diagonal blocks
of
J are all identities, therefore, do not involve any computations when
multiplied with vectors. Let us take a two-subdomain case for example,
and JS i
22 J 21 I
# .
The same thing can also be done for the overlapping case. This is a small saving when
there many small subspaces. However, the saving can be big if there are relatively few
subspaces, but the sizes are large. For example, in the case of a coupled fluid-structure
interaction simulation, there could be only two subdomains; one for the fluid flow and
one for the structure.
In Step 4, the step length # (k) is determined using a standard line search technique
[7] based on the function
More precisely, we first compute the initial reduction
Jp (k) .
Then, # (k) is picked such that
Here # is a pre-selected parameter (use The standard cubic backtracking
algorithm [7] is used in our computations.
5. Numerical experiments. We show a few numerical experiments in this
section using ASPIN, and compare with the results obtained using a standard inexact
Newton's algorithm. We are mostly interested in the kind of problems on which
the regular inexact Newton type algorithm does not work well. We shall focus our
discussion on the following two-dimensional driven cavity flow problem [15], using
the velocity-vorticity formulation, in terms of the velocity u, v, and the vorticity #,
x
y
Fig. 5.1. A 9 - 9 fine mesh with 3 - 3 subdomain partition. The 'o' are the mesh points. The
dashed lines indicate a 3 - nonoverlapping partitioning. The solid lines indicate
the "overlapping = 1" subdomains.
defined on the unit
-#u-
#y
#x
Re
#x
#y
Here Re is Reynolds number. The boundary conditions are:
. bottom, left and right:
. top:
We vary the Reynolds number in the experiments. The boundary condition on # is
given by its definition:
#y
#x
The usual uniform mesh finite di#erence approximation with the 5-point stencil is
used to discretize the boundary value problem. Upwinding is used for the divergence
(convective) terms and central di#erencing for the gradient (source) terms. To obtain
a nonlinear algebraic system of equations F , we use natural ordering for the mesh
points, and at each mesh point, we arrange the knowns in the order of u, v, and #.
The partitioning of F is through the partitioning of the mesh points. In other words,
the partition is neither physics-based nor element-based. Figure 5.1 shows a typical
mesh, together with an overlapping partition. The subdomains may have di#erent
sizes depending on whether they touch the boundary of # The size of the overlap is
as indicated in Figure 5.1. Note that since this is mesh-point based partition, the zero
overlap case in fact corresponds to the 1/2 overlap case of the element-based partition,
which is used more often in the literature on domain decomposition methods for finite
element problems [10].
The subdomain Jacobian matrices JS i
are formed using a multi-colored finite
di#erence scheme.
The implementation is done using PETSc [1], and the results are obtained on a
cluster of DEC workstations. Double precision is used throughout the computations.
We report here only the machine independent properties of the algorithms.
5.1. Parameter definitions. We stop the global PIN iterations if
used for all the tests. The global linear iteration for solving
the global Jacobian system is stopped if the relative tolerance
or the absolute tolerance
is satisfied. In fact we pick # independent of k, throughout the
nonlinear iterations. Several di#erent values of # global-linear-rtol are used as given in
the tables below.
At the kth global nonlinear iteration, nonlinear subsystems
defined in Step 1 a) of Algorithm 4.1, have to be solved. We use the standard IN with
a cubic line search for such systems with initial guess g (k)
0. The local nonlinear
iteration in subdomain S i is stopped if one of the following two conditions is satisfied:
)# local-nonlinear-rtol #FS i
or
)# local-nonlinear-atol .
The overall cost of the algorithm depends heavily on the choice of # local-nonlinear-rtol .
We report computation results using a few di#erent values for it.
5.2. Comparison with a Newton-Krylov-Schwarz algorithm. We compare
the newly developed algorithm ASPIN with a well-understood inexact Newton
algorithm using a cubic backtracking line search as the global strategy, as described
in [7]. Since we would like to concentrate our study on the nonlinear behavior of the
algorithm, not how the linear Jacobian systems are solved, the Jacobian systems are
solved almost exactly at all Newton iterations. More precisely at each IN iteration,
the Newton direction p (k) satisfies
with GMRES with an one-level additive Schwarz preconditioner is used
as the linear solver with the same partition and overlap as in the corresponding ASPIN
algorithm. The history of nonlinear residuals is shown in Figure 5.2 (top) with several
di#erent Reynolds numbers on a fixed fine mesh of size 128 - 128.
5.3. Test results and observations. As the Reynolds number increases, the
nonlinear system becomes more and more di#cult to solve. The Newton-Krylov-
Schwarz algorithm fails to converge once the Reynolds number passes the value
770.0 on this 128 - 128 mesh, no matter how accurately we solve the Jacobian
system. Standard techniques for going further would employ pseudo time
stepping [18] or nonlinear continuation in h or Re [28]. However, our proposed PIN
algorithm converges for a much larger range of Reynolds numbers as shown in Figure
5.2. Furthermore, the number of PIN iterations does not change much as we increase
the Reynolds number. A key to the success of the method is that the subdomain
nonlinear problems are well solved.
In
Table
5.1, we present the numbers of global nonlinear PIN iterations and the
numbers of global GMRES iterations per PIN iteration for various Reynolds numbers
and overlapping sizes. Two key stopping parameters are # global-linear-rtol for the
global linear Jacobian systems and # local-nonlinear-rtol for the local nonlinear systems.
We test several combinations of two values 10 -6 and 10 -3 . As shown in the table, the
total number of PIN iteration does not change much as we change # global-linear-rtol
and # local-nonlinear-rtol ; however, it does increase from 2 or 3 to 6 or 9 when the
Reynolds number increases from 1 to 10 4 . The bottom part of Table 5.1 shows the
corresponding numbers of GMRES iterations per PIN iteration. These linear iteration
numbers change drastically as we switch to di#erent stopping parameters. Solving the
global Jacobian too accurately will cost a lot of GMRES iterations and not result in
much savings in the total number of PIN iterations.
Table
5.1 also compares the results with two sizes of overlap. A small number
PIN iterations can be saved as one increases the overlapping size from 0 to 1, or more,
as shown also in Table 5.3. The corresponding number of global linear iterations
decreases a lot. We should mention that the size of subdomain nonlinear systems
increases as one increases the overlap, especially for three dimensional problems. The
communication cost in a distributed parallel implementation also increases as we
increase the overlap. Recent experiments seem to indicate that small overlap, such as
overlap=1, is preferred balancing the saving of the computational cost and the increase
of the communication cost, see for example [10, 14]. Of course, the observation is
highly machine and network dependent.
In
Table
5.2, we look at the number of Newton iterations for solving the subdomain
nonlinear systems. In this test case, we partition the domain into 16 subdomains,
4 in each direction, and number them naturally from the bottom to top, and left to
right. Four
touch the moving lid. The solution
of the problem is less smooth near the lid, especially when the Reynolds number is
large. As expected, the subdomains near the lid need more iterations; two to three
times more than what is needed in the smooth subdomains for the large Reynolds
number cases.
We next show how the iteration numbers change as we change the number of
subdomains with a fixed 128 - 128 fine mesh. The results are displayed in Table
5.4. As we increase the number of subdomains from 4 to 16 the number of global
PIN iterations does not change much; up or down by 1 is most likely due to the last
bits of the stopping conditions rather than the change of the algorithm. Note that
when we change the number of subdomains, the inexact Newton direction changes,
and as a result, the algorithm changes. As a matter of fact, we are comparing two
mathematically di#erent algorithms. The bottom part of Table 5.4 shows that the
number of GMRES iterations per PIN increases quite a bit as we increase the number
Global PIN iterations. Fine mesh 128 - 128, 4 - 4 subdomain partition on 16 processors.
Subdomain linear systems are solved exactly. # global-linear-rtol is the stopping condition for the
global GMRES iterations. # local-nonlinear-rtol is the stopping condition for the local nonlinear
iterations. The absolute tolerances are #
. The finite di#erence step size is 10 -8 .
number of PIN iterations
number of GMRES iterations per PIN
of subdomains.
6. Some further comments. We comment on a few important issues about
the newly proposed algorithm including parallel scalabilities and load balancing in
parallel implementations.
Parallel scalability is a very important issue when using linear or nonlinear iterative
methods for solving problems with a large number of unknowns on machines
with a large number of processors. It usually involves two separate questions, namely
how the iteration numbers change with the number of processors and with the number
of unknowns. It is a little bit surprising that, from our limited experience, the
number of ASPIN iterations is not sensitive at all to either the number of processors
or the number of unknowns. In other words, the number of nonlinear PIN iterations
is completely scalable. However, this can not be carried over to the linear solver. To
Total number of subdomain nonlinear iterations. Fine mesh 128 - 128, 4 - 4 subdomain partition
on processors. Subdomains are naturally ordered. Subdomain linear systems are solved
exactly. # is the stopping condition for the global GMRES iterations.
is the stopping condition for the local nonlinear iterations. The absolute
tolerances are # 1. The
finite di#erence step size is 10 -8 .
subdomain #
Table
Varying the overlapping size. Fine mesh 128 - 128, 4 - 4 subdomain partition on 16 pro-
cessors. Subdomain linear systems are solved exactly. # is the stopping
condition for the global GMRES iterations. # is the stopping condition
for the local nonlinear iterations. The absolute tolerances are #
. The finite di#erence step size is 10 -8 .
GMRES/PIN
make the linear solver scalable, a coarse grid space is definitely needed. Our current
software implementation is not capable of dealing with the coarse space, therefore no
further discussion of this issue can be o#ered at this point.
Load balancing is another important issue for parallel performance that we do not
address in the paper. As shown in Table 5.2, the computational cost is much higher
in the subdomains near the lid than the other subdomains, in particular for the
large Reynolds number cases. To balance the computational load, idealy, one should
partition the domain such that these subdomains that require more linear/nonlinear
iterations contain less mesh points. However, the solution dependent cost information
is not available until after a few iterations, and therefore the ideal partition has to
obtained dynamically as the computation is being carried out.
Di#erent subdomain partitions with the same fine mesh 128 - 128. Subdomain linear systems
are solved exactly. # is the stopping condition for the global GMRES
iterations. # is the stopping condition for the local nonlinear iterations.
The absolute tolerances are #
1. The finite di#erence step size is 10 -8 .
number of PIN iterations
subdomain partition
number of GMRES iterations per PIN
Table
Di#erent fine meshes on 16 processors. Subdomain linear systems are solved ex-
actly. # is the stopping condition for the global GMRES iterations.
is the stopping condition for the local nonlinear iterations. The absolute
tolerances are # 1. The
finite di#erence step size is 10 -8 .
number of PIN iterations
fine mesh
number of GMRES iterations per PIN
We only discussed one partitioning strategy based on the geometry of the mesh
and the number of processors available in our computing system. Many other partitioning
strategies need to be investigated. For example, physics-based partitions: all
the velocity unknowns
as# 1 and the vorticity unknowns
. In this case, the number
of subdomains may have nothing to do with the number of processors. Further
partitions may be needed on
and# 2 for the purpose of parallel process-
ing. One possible advantage of this physics-based partition is that the nonlinearities
between di#erent physical quantities can be balanced.
An extreme case of a mesh-based partition would be that each subdomain contains
only one grid point. Then, the dimension of the subdomain nonlinear system is the
same as the number of variables associated with a grid point, 3 for our test case. In
this situation, ASPIN becomes a pointwise nonlinear scaling algorithm. As noted in
linear scaling does not change the nonlinear convergence of Newton's method, but
nonlinear scaling does. Further investigation should be of great interest.
--R
The Portable
Hybrid Krylov methods for nonlinear systems of equations
Convergence theory of nonlinear Newton-Krylov algorithms
Parallel Newton- Krylov-Schwarz algorithms for the transonic full potential equation
Domain decomposition methods for monotone nonlinear elliptic problems
Nonlinearly preconditioned Krylov subspace methods for discrete Newton algorithms
Numerical Methods for Unconstrained Optimization and Nonlinear Equations
On the nonlinear domain decomposition method
Domain decomposition algorithms with small overlap
Globally convergent inexact Newton methods
Choosing the forcing terms in an inexact Newton method
Matrix Computations
Globalized Newton-Krylov- Schwarz algorithms and software for parallel implicit CFD
Numerical Computation of Internal and External Flows
Robust linear and nonlinear strategies for solution of the transonic Euler equations
Iterative Methods for Linear and Nonlinear Equations
Convergence analysis of pseudo-transient continuation
An analysis of approximate nonlinear elim- ination
On the Schwarz alternating method.
On the Schwarz alternating method.
On Schwarz alternating methods for incompressible Navier-Stokes equations in n dimensions
NITSOL: A Newton iterative solver for nonlinear systems
Parallel Multilevel Methods for Elliptic Partial Di
Rate of convergence of some space decomposition methods for linear and nonlinear problems
Global convergence of inexact Newton methods for transonic flow
A locally refined rectangular grid finite element method: Application to computational fluid dynamics and computational physics
--TR
--CTR
Feng-Nan Hwang , Xiao-Chuan Cai, A parallel nonlinear additive Schwarz preconditioned inexact Newton algorithm for incompressible Navier-Stokes equations, Journal of Computational Physics, v.204
Heng-Bin An , Ze-Yao Mo , Xing-Ping Liu, A choice of forcing terms in inexact Newton method, Journal of Computational and Applied Mathematics, v.200 n.1, p.47-60, March, 2007
S.-H. Lui, On monotone iteration and Schwarz methods for nonlinear parabolic PDEs, Journal of Computational and Applied Mathematics, v.161 n.2, p.449-468, 15 December
D. A. Knoll , D. E. Keyes, Jacobian-free Newton-Krylov methods: a survey of approaches and applications, Journal of Computational Physics, v.193 n.2, p.357-397, 20 January 2004 | incompressible flows;nonlinear preconditioning;nonlinear additive Schwarz;inexact Newton methods;krylov subspace methods;domain decomposition;nonlinear equations;parallel computing |
587383 | On Two Variants of an Algebraic Wavelet Preconditioner. | A recursive method of constructing preconditioning matrices for the nonsymmetric stiffness matrix in a wavelet basis is proposed for solving a class of integral and differential equations. It is based on a level-by-level application of the wavelet scales decoupling the different wavelet levels in a matrix form just as in the well-known nonstandard form. The result is a powerful iterative method with built-in preconditioning leading to two specific algebraic multilevel iteration algorithms: one with an exact Schur preconditioning and the other with an approximate Schur preconditioning. Numerical examples are presented to illustrate the efficiency of the new algorithms. | Introduction
The discovery of wavelets is usually described as one of the most important advances in mathematics
in the twentieth century as a result of joint eorts of pure and applied mathematicians. Through
the powerful compression property, wavelets have satisfactorily solved many important problems
in applied mathematics e.g. signal and image processing; see [23, 20, 34, 38] for a summary.
There remain many mathematical problems to be tackled before wavelets can be used for
solution of dierential and integral equations in a general setting. The traditional wavelets were
designed mainly for regular domains and uniform meshes. This was one of the reasons why wavelets
may not be immediately applicable to arbitrary problems. The introduction of the lifting idea,
interpolatory wavelets [35, 25, 1] and adaptivity [17] provides a useful way of constructing wavelets
functions in non-regular domains and in high dimensions.
However, the algebraic (sparse) structure of the matrix generated by a wavelet method is usually
a nger-like one that is a di-cult sparse pattern to deal with; refer to [10, 13, 15, 18]. Firstly
direct solution of a linear system with such a matrix is either not feasible or ine-cient. Secondly
iterative solution requires a suitable preconditioner and this choice of preconditioner is usually dependent
of the smoothness of the underlying operator (in addition to the assumptions for wavelets
compression). Often a diagonal preconditioner is not su-cient. For some particular problems, several
preconditioning techniques have been suggested. For instance, the nger matrix from wavelets
representation of a Calderon-Zygmund operator plus a non-constant diagonal matrix cannot be
preconditioned eectively by a diagonal matrix. In this case, one can use the idea of two-stage
preconditioning as proposed in [13] or to use other modied wavelets methods such as a centring
algorithm [15]; see also [38] for another modied algorithm and [19] for using approximate inverses.
there exists a gap in realizing the full e-ciency oered by wavelet bases for model prob-
lems. That is to say, a generally applicable iterative algorithm is still lacking. For a recent and
general survey of iterative methods, refer to [31].
This paper proposes two related and e-cient iterative algorithms based on the wavelet formulation
for solving an operator equation with conventional arithmetic. Both algorithms use the
Schur complements recursively but dier in how to use coarse levels to solve Schur complements
equations. In the rst algorithm, we precondition a Schur complement by using coarse levels while
in the second we use approximate Schur complements to construct a preconditioner. We believe
that our algorithms can be adapted to higher dimensional problems more easily than previous work
in the subject.
The motivation of this work follows from the observation that any 1-scale compressed results
(matrices) can be conveniently processed before applying the next scale. In this way, regular patterns
created by past wavelet scales are not destroyed by the new scales like in the non-standard (NS)
form [10] and unlike in the standard wavelet bases; we dene the notation and give further details
in Section 2. This radical but simple idea will be combined in Section 3 with the Schur complement
method and Richardson iterations in a multi-level iterative algorithm. Moreover the Richardson iterations
can be replaced by a recursive generalized minimal residuals (GMRES) method [30]. The
essential assumption for this new algorithm to work is the invertibility of an approximate band
matrix; in the Appendix we show that for a class of Calderon-Zygmund and pseudo-dierential
operators such an invertibility is ensured. In practice we found that our method works equally well
for certain operators outside the type for which we can provide proofs. In Section 4, we present
an alternative way of constructing the preconditioner by using approximate Schur complements.
Section 5 discusses the complexity issues while Section 6 presents several numerical experiments to
illustrate the eectiveness of the two new algorithms.
We remark that our rst algorithm is similar to the framework of a NS form reformulation of
the standard wavelets bases (based on the pyramid algorithm) but does not make use of the NS
form itself, although our algorithm avoids a nger matrix (just like a NS form method) that could
arise from overlapping dierent wavelet scales. The NS form work was by Beylkin, Coifman and
Rokhlin [10] often known as the BCR paper. As a by-product, the NS form reduces the
ops from
O(n log n) to O(n). However, the NS form does not work with conventional arithmetic although
operations with the underlying matrix (that has a regular sparse pattern) can be specially designed;
in fact the NS form matrix itself is simply singular in conventional arithmetic. The recent work in
[22] has attempted to develop a direct solution method based on the NS form that requires a careful
choice of a threshold; here our method is iterative. In the context of designing recursive sparse
preconditioners, it is similar to the ILUM type preconditioner to a certain extent [33]. Our second
algorithm is similar to the algebraic multi-level iteration methods (AMLI) that were developed
for nite elements [4, 2, 3, 36]; here our method uses wavelets and does not require estimating
eigenvalues.
Wavelets splitting of an operator
This section will set up the notation to be used later and motivate the methods in the next
sections. We rst introduce the standard wavelet method. For simplicity, we shall concentrate on
the Daubechies' order m orthogonal wavelets with low pass lters c
lters (such that d In fact, the ideas and expositions in this
paper apply immediately to the more general bi-orthogonal wavelets [20].
Following the usual setting of [10, 20, 22, 11, 34], the lter coe-cients c j 's and d j 's dene the
scaling function (x) and the wavelet function (x). Further, dilations and translations of (x)
and (x) dene a multi-resolution analysis for L 2 in d-dimensions, in particular,
(R d
where the subspaces satisfy the relations
In numerical realisations, we select a nite dimension space V 0 (in the nest scale) as our approximation
space to the innite decomposition of L 2 in (1) i.e. eectively use
to approximate L 2 (R d ). Consequently for a given operator its innite and exact
operator representation in wavelet bases
is approximated in space V 0 by
where are both projection operators.
For brevity, dene operators
Then one can observe that
A further observation based on T
is that the wavelet
coe-cients of T j 1 will be equivalently generated by the block operator
Now we change the notation and consider the discretization of all continuous operators. Dene
A as the representation of operator T 0 in space V 0 . Assume that A on the nest
level is of dimension the dimension of matrices on a coarse level j is
'. The operator splitting in (3) for the case of dimensions
can be discussed similarly [10, 22]) corresponds to the two-dimensional wavelet transform
e
where the one level transform from j 1 to j (for any
with rectangular matrices P j and Q j (corresponding to operators P j and Q j ) dened respectively
as
For a class of useful and strongly elliptic operators i.e. Calderon-Zygmund and pseudo-dierential
operators, it was shown in BCR [10] that matrices A
k;i are indeed
'sparse' satisfying the decaying property
c m;j
m;j is a generic constant depending on m and j only.
To observe a relationship between the above level-by-level form and the standard wavelet rep-
resentation, dene a square matrix of size 0
I j
. Then the standard wavelet transform can be
written as
that transforms matrix A into e
Figure
1: The level-by-level form (left) versus the non-standard wavelet form [10] (right)
22464160208Thus the diagonal blocks of e
A are the same as A j 's of a level-by-level form. However the o-
diagonal blocks of the former are dierent from B j and C j of the latter. To gain some insight
into the structure of the o-diagonal blocks of matrix e
A with the standard wavelet transform, we
consider the following case of wavelets. Firstly after level 1
transform, we obtain
e
nn
Secondly after level 2 transform, we get
e
e
nn
Finally after level 3 transform, we arrive at
e
e
I 3n=4
nn
Clearly the o-diagonal blocks of e
A 3 are perturbations of that of the level-by-level form o-diagonal
blocks in fact the one-sided transforms for the o-diagonal blocks are responsible for
the resulting (complicated) sparsity structure. This can be observed more clearly for a typical
example with Fig.1 where the left plot shows the level-by-level
representation set-up that will be used in this paper and in Fig.2 where the left plot shows the
standard wavelet representation as in (8).
Figure
2: The standard wavelet form representation (left) versus an alternative centering form [15]
(right) for the example in Figure 1
12864112Motivated by the exposition in (8) of the standard form, we shall propose a preconditioning and
iterative scheme that operates on recursive 1-level transforms. Thus it will have the advantage of
making full use of the NS form idea and its theory while avoiding the problem of a non-operational
NS form matrix.
Remark 1 Starting from the level-by-level set-up, taking T ' and the collection of all triplets
1j' as a sparse approximation for T 0 is the idea of the NS form [10, 22]. By way of
comparison, in Fig.1, the NS form representation versus the level-by-level form are shown. It turns
out that this work uses the identical set-up to the NS form without using the NS form formulation
itself because we shall not use the proposed sparse approximation. Note that the centering algorithm
[15] (see the right plot in Fig.2) is designed as a permutation of the standard wavelet form (see
the left plot of Fig.2) and is only applicable to a special class of problems where its performance is
better.
3 An exact Schur preconditioner with level-by-level wavelets
We now present our rst and new recursive method for solving the linear system dened
on the nest scale V 0 i.e.
A is of size 0 discussed in the previous section, and x
Instead of considering a representation of T 0 in the decomposition space (2) and then the resulting
linear system, we propose to follow the space decomposition and the intermediate linear system in a
level-by-level manner. A sketch of this method is given in Fig.3 (the left plot) where we try to show
a relationship between the multi-resolution (MR for wavelet representation) and the multi-level
(ML for preconditioning via Schur) ideas from the nest level (top) to the coarsest level (bottom).
Figure
3: Illustration of Algorithms 1 (left) and 2 (right). Here we take
the nest level and 3 the coarsest level), use '2' to indicate a DWT step (one level of wavelets) and
' to denote the direct solution process on the coarsest level. The arrows denote the sequence
of operations (written on the arrowed lines) with each algorithm interacting the two states (two
columns on the plots) of multi-resolution wavelets and multi-level Schur decomposition. The left
plot shows that for Algorithm 1, a Richardson step (or GMRES) takes the results of a DWT step to
the next level via the Schur decomposition while the right plot shows that the Schur decomposition
takes the results of a DWT step to the next level via a Schur approximation.2 Richardson
Schur LU
Schur LU
Schur LU
Direct Solution
MR/Wavelets ML/Schur2
Approximation
Schur LUApproximation
Schur LUSchur LU
Direct Solution
MR/Wavelets ML/Schur
Firstly at level 0, we consider V
and the wavelet transform (4) yields
e
where e x
e
nn
following the general result in (5), it is appropriate to consider the approximation of A
band matrices. To be more precise, let B (D) denote a banded matrix of D with semi-bandwidth
where integer 0. Dene A suitable (to be
specied later). Then matrix e
can be approximated by
nn
Or equivalently matrix
nn
is expected to be small in some norm (refer to the Appendix). Write equation (10) as
Consequently we propose to use M our preconditioner to equation (14). This preconditioner
can be used to accelerate iterative solution; we shall consider two such methods: the
Richardson method and the GMRES method [30].
The most important step in an iterative method is to solve the preconditioning equation:
or in a decomposed form A 1
y (1)y (2)!
r (1)r (2)!
Using the Schur complement method we obtain
y (2)= z 2
y (1)
Here the third equation of (17), unless its dimension is small (i.e. V 1 is the coarsest scale), has to
be solved by an iterative method with the preconditioner T 1 ; we shall denote the preconditioning
step by
where T 1 is of size 1 1 . This sets up the sequence of a multilevel method where the main
characteristic is that each Schur complements equation in its exact form is solved iteratively with
a preconditioner that involves coarse level solutions.
At any level j (1 j < '), the solution of the following linear system
with T j of size j j and through solving
e
can be similarly reduced to that of
with T j+1 of size j+1 j+1 . The solution procedure from the nest level to the coarsest level can
be illustrated by the following diagram (for
Transform
Schur
Precondition
j+1 j+1
A j+1 z
y (2)
y (1)
where as with (12) and (13)
A j+1 B j+1
nn
nn
The coarsest level is set up in the previous section, where a system like (20) is solved
by a direct elimination method. As with conventional multi-level methods, each ne level iteration
leads to many coarse level iteration cycles. This can be illustrated in Fig. 4 where
(bottom plot) are assumed and at the coarsest level (
direct solution is
used. In practice, a variable may be used to achieve certain accuracy for the
preconditioning step i.e. convergence up to a tolerance is pursued whilst by way of comparison
smoothing rather convergence is desired in an usual multilevel method. Our experiments have
shown that are often su-cient to ensure the overall convergence.
We now summarise the formulation as an algorithm. The iterative solver for (19) at level i can
be the Richardson method
or the GMRES method [30] for solving e
(or actually a combination of the two). For
simplicity and generality, we shall use the word \SOLVE" to denote such an iterative solver (either
Richardson or GMRES).
Algorithm 1 (Recursive I)
1. and start on the nest level.
2. Apply one level DWT to T j x to obtain ~
3. Use j steps of SOLVE for ~
4. In each step, implement the preconditioner
Restrict to the coarse level:
A j+1 z
y (2)
y (1)
Figure
4: Iteration cycling patterns of Algorithm 1 with levels: top for
3. In each case, one solves a ne level equation (starting from the nest level 0) by iteratively
solving coarser level equations times; on the coarsest level
(here level 3) a direct solution is
carried out.2020
5. Use SOLVE for the above third equation with the preconditioner T j+1 i.e. solve T j+1 x
b j+1 .
7. If (on the coarsest level), apply a direct solver to T j x
and proceed with Step 8; otherwise return to Step 2.
8.
9. Interpolate the coarse level solution to the ne level j:
x (2)
x (1)
x (2)
x (1)
10. Apply one level inverse DWT to e
y j to obtain y j .
11. If (on the nest level), check the residual error | if small enough accept the solution
x 0 and stop the algorithm. If j > 0, check if j steps (cycles) have been carried out; if not,
return to Step 2 otherwise continue with Step 8 on level j.
The rate of convergence of this algorithm depends on how well the matrix T j approximates
j and this approximation is known to be accurate for a suitable and for a class of Calderon-
Zygmund and pseudo-dierential operators [10]. For this class of problems, it remains to discuss
the invertibility of matrix A j which is done in the Appendix; a detailed analysis on T j T j may
be done along the same lines as Lemma 1. For other problem classes, the algorithm may not work
at all for the simple reason that A j may be singular e.g. the diagonal of matrix may
have zero entries. Some extensions based on the idea of [13] may be applied as discussed in Section
6.
Remark 2 We remark that for a class of general sparse linear systems, Saad, Zhang, Botta,
Wubs et al [28, 12, 32, 33] have proposed a recursive multi-level preconditioner (named as ILUM)
similar to this Algorithm 1. The rst dierence is that we need to apply one level of wavelets to
achieve a nearly sparse matrix while these works start from a sparse matrix and permute it to obtain
a desirable pattern suitable for Schur decomposition. The second dierence is that we propose an
iterative step before calling for the Schur decomposition while these works try to compute the exact
Schur decomposition approximately. Therefore it is feasible to rene our Algorithm 1 to adopt the
ILUM idea (using independent sets) for other problem types. However one needs to be careful in
selecting the dimensions of the leading Schur block if a DWT is required for compression purpose.
4 An approximate Schur preconditioner with level-by-level wavelets
In the previous algorithm, we use coarse level equations to precondition the ne level Schur complement
equation. We now propose an alternative way of constructing a preconditioner for a ne level
equation. Namely we approximate and compute the ne level Schur complement before employing
coarse levels to solve the approximated Schur complement equation. A sketch of this method is
shown in Fig.3 showing the natural coupling of wavelet representation (level-by-level form) and
Schur complement. symbol '2'. To dierentiate from Algorithm 1, we change the notation for all
matrices.
At any level k for consider the solution (compare to (19))
A
Applying 1-level of DWT, we obtain
A
A (k)
Note that we have the block LU decomposition
A (k)
A (k)
I
#"
I A (k)1
A (k)0 S (k)
22 A (k)
A (k)
12 is the true Schur complement. To approximate this Schur
complement, we must consider approximating the second term in the above S (k) . We propose to
form band matrix approximations
For level k these approximations are possible for a small bandwidth; see Appendix. Seeking a band approximation to the inverse of A^(k)_11 makes sense because (A^(k)_11)^{-1} is expected to have a decaying property (refer to (5)). Let S denote the set of all matrices that have the sparsity pattern of a band matrix B_11 of the chosen bandwidth. The formation of a sparse approximate inverse (SPAI) is to find a band matrix B_11 in S such that

   min_{B_11 in S} || A^(k)_11 B_11 - I ||_F .

Refer to [8, 24, 16]. Briefly, as with most SPAI methods, the use of the F-norm decouples the minimisation into least squares (LS) problems for the individual columns c_j of B_11. More precisely, owing to || A^(k)_11 B_11 - I ||_F^2 = sum_j || A^(k)_11 c_j - e_j ||_2^2, the j-th LS problem is to solve A^(k)_11 c_j ≈ e_j, which is not expensive since c_j is sparse. Once B_11 is found, define an approximation to the true Schur complement S^(k) as

   S̃^(k) = A^(k)_22 - A^(k)_21 B_11 A^(k)_12,

and set A^(k+1) = S̃^(k). This generates a sequence of matrices A^(k).
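The column-by-column least squares construction above can be summarised in a few lines. The following Python sketch (dense, using numpy only) illustrates one plausible realisation of the band SPAI just described; the routine name, the dense storage, and the full least squares solve per column are illustrative simplifications, not the implementation used here.

    import numpy as np

    def band_spai(A11, beta):
        # Minimise ||A11*B11 - I||_F over matrices B11 whose only nonzeros lie
        # in a band of half-width beta; the F-norm decouples into one small
        # least squares problem per column, as in the text.
        n = A11.shape[0]
        B11 = np.zeros((n, n))
        for j in range(n):
            J = np.arange(max(0, j - beta), min(n, j + beta + 1))  # allowed nonzeros
            e = np.zeros(n)
            e[j] = 1.0
            c, *_ = np.linalg.lstsq(A11[:, J], e, rcond=None)      # j-th LS problem
            B11[J, j] = c
        return B11

A production code would restrict each least squares problem to the rows where A11[:, J] has nonzeros, so every subproblem stays small and the cost remains proportional to n.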
Now comes the most important step about the new preconditioner. Setting M^(L) = A^(L) on the coarsest level L, the fine level preconditioner M^(0) is defined recursively by

   M^(k) = [ I  0 ; A^(k)_21 B^(k)_11  I ] [ A^(k)_11  0 ; 0  M^(k+1) ] [ I  B^(k)_11 A^(k)_12 ; 0  I ],

where M^(k+1), built from S̃^(k) = A^(k+1), is an approximation to the true Schur complement S^(k) of A^(k). Here B^(k)_11 denotes the band approximate inverse of A^(k)_11. Observe that this preconditioner is defined through the V-cycling pattern, recursively using the coarse levels.
To go beyond the V-cycling, we propose a simple residual correction idea. We view the solution y^[j]_k of the preconditioning equation (compare to (15) and (9))

   M^(k) y_k = r_k

as an approximate solution to the equation A^(k) y_k = r_k. Then the residual vector is r̃_k = r_k - A^(k) y^[j]_k. This calls for a repeated solution of M^(k) z = r̃_k, and z gives the correction and a new approximate solution to (25):

   y^[j+1]_k = y^[j]_k + z.

In practice we take two correction steps (see the top plot in Fig. 4), as our experiments suggest that 2 is sufficient to ensure the overall convergence.
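The residual correction amounts to a short inner iteration with the level-k preconditioner. A minimal Python sketch, assuming callables applying the level-k matrix and the (approximate) inverse of the preconditioner, is:

    def residual_correct(apply_Minv, apply_A, r, nu=2):
        # nu correction steps: y <- y + M^{-1}(r - A y), viewing M y = r as an
        # approximate solve of A y = r; nu = 2 as used above.
        y = apply_Minv(r)
        for _ in range(nu - 1):
            y = y + apply_Minv(r - apply_A(y))
        return y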
Thus an essential feature of this method, different from Algorithm 1, is that every approximated Schur complement matrix needs to be transformed to the next wavelet level in order to admit the matrix splitting (23), while inverse transforms are needed to pass coarse level information back to a fine level, as illustrated in Fig. 3. Ideally we might wish to use A^(k)_22 - A^(k)_21 B^(k)_11 A^(k)_12 directly as the approximate Schur complement, but taking A^(k)_21 and A^(k)_12 as full matrices would jeopardize the efficiency of the overall iterative method. We propose to use band matrices (or thresholding) to approximate these two quantities, just as in (13) with Algorithm 1.
To summarise, the solution of the preconditioning equation from the finest level to the coarsest level and back up is illustrated in Fig. 5, where the marked symbol indicates the same entry and exit point. The general algorithm for solving M^(0) y = r can be stated as follows:
Algorithm 2 (Recursive II)
Setup Stage
1. Apply one level DWT to T_k to obtain A^(k).
2. Find the approximate inverse B^(k)_11 of A^(k)_11.
3. Generate the matrix T_{k+1} = S̃^(k) = A^(k)_22 - A^(k)_21 B^(k)_11 A^(k)_12.
Solution Stage
1. Set k = 0 and start on the finest level.
2. Apply one level DWT to r_k and consider M^(k) ỹ_k = r̃_k.
3. Solve the preconditioning equation M^(k) ỹ_k = r̃_k by block substitution, restricting the residual to the coarse level to obtain r^(2)_k and hence the next level right-hand side r_{k+1}.
4. Solve the above coarse equation at the next level, T_{k+1} y_{k+1} = r_{k+1}.
5. If on the coarsest level, apply a direct solver to T_k y_k = r_k and proceed with Step 8.
6. Otherwise set k := k + 1 and return to Step 2.
7. Set k := k - 1.
8. Interpolate the coarse level solution to the fine level k, combining the fine and coarse components ỹ^(1)_k and ỹ^(2)_k into ỹ_k.
9. Apply one level inverse DWT to ỹ_k to obtain y_k.
10. When k = 0 (on the finest level), check the residual error; if small enough accept the solution y_0 and stop the algorithm. When not on the finest level, check if the prescribed number of cycles have been carried out; if not, find the residual vector and return to Step 2, otherwise continue with Step 7 on level k.
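The core of the solution stage is a block forward/backward substitution with the factored preconditioner, recursing on the approximate Schur complement. The Python sketch below shows this recursion for the factored form given above; the per-level wavelet transforms are omitted, and the names of the stored blocks (A11, A12, A21, B11) and of the coarsest-level matrix are illustrative assumptions rather than the data layout of the actual code.

    import numpy as np

    def apply_precond(levels, coarsest_T, r, k=0):
        # Solve M^(k) y = r for the factored preconditioner
        # M^(k) = [I 0; A21 B11 I] [A11 0; 0 M^(k+1)] [I B11 A12; 0 I],
        # recursing until the coarsest matrix coarsest_T is solved directly.
        L = levels[k]
        n1 = L["A11"].shape[0]
        r1, r2 = r[:n1], r[n1:]
        y1 = L["B11"] @ r1                       # approximate A11^{-1} r1
        rhs2 = r2 - L["A21"] @ y1                # restrict the residual
        if k == len(levels) - 1:                 # next level is the coarsest
            y2 = np.linalg.solve(coarsest_T, rhs2)
        else:
            y2 = apply_precond(levels, coarsest_T, rhs2, k + 1)
        y1 = y1 - L["B11"] @ (L["A12"] @ y2)     # back substitution
        return np.concatenate([y1, y2])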
Remark 3. It turns out that this algorithm is similar to the algebraic multi-level iteration methods (AMLI) developed for a class of symmetric positive definite finite element equations in a hierarchical basis [4, 2, 3, 36]. In fact, S̃^(k) is implicitly defined through a polynomial P applied to the coarse level matrix A^(k+1), where P is a polynomial satisfying P(0) = 1. As with AMLI, the valid degree one choice P(t) = 1 - t gives rise to the V-cycling pattern; for higher degree, the polynomial P(t) is chosen to improve the preconditioner, ideally so that the condition number of the preconditioned A^(k) is O(1) asymptotically. Refer to these original papers about how to work out the coefficients of P(t) based on eigenvalue estimates. However, we do not use any eigenvalue estimates to construct P.
We also remark that an alternative definition of a recursive preconditioner, different from AMLI, is the ILUM method mentioned in Remark 2, where in a purely algebraic way (using independent sets of the underlying matrix graph) A^(k)_11 is defined as a block diagonal form after a suitable permutation. This would give rise to another way of approximating the true Schur complement of A^(k). However, the sparsity structures of the blocks A^(k)_21 and A^(k)_12 will affect the density of nonzeros in the matrix S^(k), and an incomplete LU decomposition has to be pursued as in [28, 12, 33].
5 Complexity analysis
Here we mainly compare the complexity of Algorithms 1-2 in a full cycle. Note that both algorithms can be used by a main driving iterative solver, where each step of iteration will require n^2 flops (one flop refers to 1 multiplication and 1 addition) unless some fast matrix vector multiplication methods are used. One way to reduce this flop count is to use a small threshold so that only a sparse form of Ã is stored. Also common to both algorithms are the DWT steps, which are not counted here.
To work out flop counts for the differing steps of the two algorithms, we list the main steps as follows. In the main step, Algorithm 1 requires 3 band solves (about 3 n_i flops each, up to the bandwidth factor), 4 band-vector multiplications (about 8 n_i flops), and 3 off-band-vector multiplications (the R_i terms), while Algorithm 2 requires 2 band-band multiplications (about 8 n_i flops, up to the bandwidth factor) and 4 band-vector multiplications (about 8 n_i flops). Therefore a cycle (iteration) across all levels would require these flops summed over the levels and, after ignoring the low order terms, we obtain a ratio F_II / F_I of roughly 9 for a typical situation with a bandwidth of about 10. Thus we expect Algorithm I to be cheaper than II if the same
number of iteration steps is recorded. Here by Algorithm I we mean the use of a Richardson iteration (in SOLVE of Algorithm 1); however, if a GMRES iteration is used for preconditioning then the flop count will increase. Of course, as is well known, the flop count is not always a reliable indicator of execution speed, especially if parallel computing is desired, where many other factors have to be considered.
Remark 4. For sparse matrices, all flop counts will be much less, as the DWT matrices are also sparse. The above complexity analysis is done for the dense matrix case. Even in this case, setting up the preconditioners only adds a few equivalent iteration steps to a conventional iterative solver. One can usually observe overall speed-up. For integral operators, the proper implementation is to use the biorthogonal wavelets as trial functions to yield sparse matrices A directly (for a suitable threshold); in this case a different complexity analysis is needed, as all matrices are sparse, and experiments have shown that although this approach is optimal, a much larger complexity constant (independent of n) is involved. For structured dense matrices where FFT is effective, wavelets may have to be applied implicitly to preserve the FFT representation. Further work is in progress.
6 Numerical experiments
Here we shall compare the new algorithms with two previous methods: the WSPAI method by Chan-Tang-Wan [14] (CTW) and the two stage method by Chan-Chen [13] (CC). Further comparisons with other methods such as SPAI and ILU type can be found in [14] and [39]. Note that WSPAI, where applicable, is faster than SPAI and ILU.
We now present numerical experiments from two sets of problems solved by these 4 methods:
M1 - Algorithm 1 with the SOLVE step replaced by a Richardson iteration method for a fixed number of steps on each level.
M2 - Algorithm 1 with the SOLVE step replaced by a GMRES iteration method on each level; in particular GMRES(25) for the main finest level iteration and GMRES for coarser levels.
M3 - Algorithm 1 with the SOLVE step replaced by a GMRES(25) iteration method on the finest level (outer iteration method) and a Richardson iteration method for a fixed number of steps on coarser levels.
M4 - Algorithm 2 with the main iteration method being a GMRES(25) iteration method on the finest level and a fixed number of steps of residual correction on all levels for preconditioning.
The test problems are the following:
Set 1:
- Example 1. Symmetric case [10].
- Example 2. Symmetric case, with entries of the form log|i - L| log|j - L| for i != j and a different value on the diagonal.
Set 2:
- Example 3. Unsymmetric case.
- Example 4. An anisotropic PDE problem in both x and y directions, where the coefficients are defined as in [39, 13, Ch.5]: the value 100 on (x, y) in [0, 0.5] x [0, 0.5] or [0.5, 1] x [0.5, 1], and a contrasting small value on [0, 0.5] x [0.5, 1] or [0.5, 1] x [0, 0.5].

Table 1: Number of iteration steps to reach the stopping tolerance (set 1). Here the cycling pattern, the problem size n, and the number of levels ℓ used are listed for each method.

- Example 5. A discontinuous coefficient PDE problem as tested in [14, 39, 13], where the coefficients are defined as in [39, Ch.3].
Here set 1 problems fall into the class for which our algorithms are expected to work; we mainly test the dependence of the variants of Algorithms 1-2 on the parameter choices. This is because for symmetric positive definite (SPD) matrices, their principal submatrices are also SPD and invertible, so the assumptions of Lemma 1 (Appendix) are satisfied. Set 2 problems, although outside the class of problems for which we have provided an analysis, are solvable by previously studied wavelet preconditioners, so we include these for comparison purposes. For all cases, we stop an iterative method when the residual in the 2-norm is reduced by the prescribed factor. The coarsest level size is fixed.
Tables 1-2 display the numerical results for different problem sizes from solving set 1 problems, where 'Steps' means the number of iteration steps required. Clearly one can observe that 'Steps' is approximately independent of the problem size, which is usually expected of a good preconditioner.
For set 2 problems, we do not expect M1-M4 to work. To extend these methods, we propose to combine one step of Stage-1 preconditioning as proposed in [13] to smooth out the given matrix (w.r.t. level 0 to 1). Specifically, we select a diagonal matrix D_1 such that the sum of diagonal entries of B_1 and C_1 in the one-level wavelet transform of the scaled matrix is minimised. Then our algorithms can be applied to the new linear system obtained with D_1. This points to one way of possible further work.
Table 2: Number of iteration steps to reach the stopping tolerance (set 1). Here the cycling pattern, the problem size n, and the number of levels ℓ used are listed for each method.

Other approaches similar to stage 1 preconditioning may be considered, e.g. for some indefinite problems we have found that the permutation idea by Duff [21] can be combined with our algorithms here; the idea was recently used in [9] to extend the applicability of sparse approximate inverse preconditioners.
In Table 3, we use 'Stage-1' to indicate if such a step has been carried out; whenever such a one-off stage 1 diagonal preconditioning is used, we put "Yes" in this column. The symbol '*' denotes a case where no convergence has been achieved after 100 steps. Clearly the simple idea of Stage-1 preconditioning does improve the performance of M1-M4 except for M1 (although M2 and M4 appear to be more robust than M3). Therefore we shall only compare M2-M4 with the work of CTW [14] and CC [13] next.
Finally, in Table 4, we show results from solving Examples 4 and 5, where the CPU seconds are obtained from a Sun Ultra-2 workstation (using Matlab 5) and the other notation is the same as in Table 3. Here 'diag' refers to the diagonal preconditioner, which does not work for these two examples, and data with a little 'f' in column 5 are used in constructing Figs. 6 and 7. Table 4 demonstrates that when combined with stage-1 preconditioning, the new algorithms can outperform the previous methods. In particular, it appears that M3 is the fastest method in the table. To compare the residuals and CPU time of M3 (the best case), CTW [14] and CC [13] in solving Examples 4 and 5, we plot respectively in Fig. 6 and Fig. 7 the convergence history and CPU time (all data are taken from Table 4), where one can observe that M3 (New I) outperforms the others. This again confirms that the new algorithm M3 converges the fastest. As remarked already, comparisons with other non-wavelet preconditioners (e.g. SPAI and ILU) can be found in [14] and [39], where it was concluded that WSPAI is faster.
As also remarked earlier, the proposed multi-level algorithms can potentially be developed much further by incorporating the ideas from [6, 13, 9, 21]. More importantly, as they are closely based on wavelet compression theory, generalizations to multi-dimensions appear to be more straightforward than for similar known wavelet preconditioners; these aspects are currently being investigated.
Table 3: Number of iteration steps to reach the stopping tolerance. Note that this example is outside the scope of M1-M4. Here the cycling pattern, the problem size n, and the number of levels ℓ used are listed, together with whether Stage-1 preconditioning was applied.

Table 4: Comparison of the new algorithms with previous work for Examples 4-5 (set 2). Data indicated by 'f' in column 5 are used in Figs. 6-7. The columns list the problem, Stage-1 use, method, convergence, steps, and CPU time; the diagonal preconditioner (Diag) and CC [13] are included for comparison.
7 Conclusions
This paper has presented two related algorithms implementing an algebraic wavelet preconditioner. The first one is similar to the set up of a NS form representation of a wavelet basis, while the second resembles the AMLI preconditioner designed for finite elements. Both algorithms are observed to give excellent performance for a class of symmetric positive definite Calderon-Zygmund and pseudo-differential operators. Combined with a minor stage 1 preconditioning step, they are immediately applicable to problems outside this class of problems. We note that there are several methods that are designed to deal with anisotropic and highly indefinite elliptic problems [6, 9, 21] and alternative methods of constructing a sparse preconditioner [33, 12]; these should be investigated in the near future to further extend the multi-level preconditioner to an even wider class of problems.
Acknowledgements
This work is partially supported by grants NSF ACR 97-20257, NASA Ames NAG2-1238, Sandia Lab LG-4440 and UK EPSRC GR/R22315. The second author wishes to thank the Department of Mathematics, UCLA for its hospitality during his visits conducting part of this work.
--R
A wavelet-based approach for the compression of kernel data in large scale simulations of 3D integral problems
Algebraic multilevel preconditioning methods I
Algebraic multilevel preconditioning methods II
Algebraic multilevel iteration method for Stieljes matrices
The algebraic multilevel iteration methods
On the additive version of the algebraic multilevel iteration method for anisotropic elliptic problems
Hierarchical bases and the
Preconditioning highly inde
Fast wavelet transforms and numerical algorithms I
Fast wavelet transforms for matrices arising from boundary element methods
Matrix renumbering ILU: An e
Wavelet sparse approximate inverse preconditioners
Discrete wavelet transforms accelerated sparse preconditioners for dense boundary element systems
An analysis of sparse approximate inverse preconditioners for boundary integral equations
Adaptive wavelet methods for elliptic operator equations - convergence rates
Wavelet methods for second-order elliptic problems
Wavelet adaptive method for second order elliptic problems: boundary conditions and domain decomposition
Wavelet and multiscale methods for operator equations
LU factorization of non-standard forms and direct multiresolution solvers
Multiresolution representation and numerical algorithms: a brief review
On a family of two-level preconditionings of the incomplete block factorization type
Algebraic multilevel iteration preconditioning technique
Preconditioning of inde
ILUM: a multi-elimination ILU preconditioner for general sparse matrices
Iterative Methods for Sparse Linear Systems
GMRES: a generalized minimal residual algorithm for solving unsymmetric linear systems
Iterative solution of linear systems in the 20th century
BILUM: Block versions of multielimination and multilevel ILU preconditioner for general sparse linear systems
Enhanced multi-level block ILU preconditioning strategies for general sparse linear systems
Wavelets and Filter Banks
The lifting scheme: a construction of second generation of wavelets
On two ways of stabilizing the hierarchical basis multilevel methods
Nearly optimal iterative methods for solving
Wavelet Transforms and PDE Techniques in Image Compression
Scalable and multilevel iterative methods
On the multi-level splitting of finite element spaces
--TR | schur complements;multiresolution;sparse approximate inverse;multilevel preconditioner;wavelets;level-by-level transforms |
587394 | An Algebraic Multilevel Multigraph Algorithm. | We describe an algebraic multilevel multigraph algorithm. Many of the multilevel components are generalizations of algorithms originally applied to general sparse Gaussian elimination. Indeed, general sparse Gaussian elimination with minimum degree ordering is a limiting case of our algorithm. Our goal is to develop a procedure which has the robustness and simplicity of use of sparse direct methods, yet offers the opportunity to obtain the optimal or near-optimal complexity typical of classical multigrid methods. | Introduction
. In this work, we develop a multilevel multigraph algorithm.
Algebraic multigrid methods are currently a topic of intense research interest [17, 18, 20, 46, 12, 48, 38, 11, 44, 3, 4, 1, 2, 5, 16, 7, 29, 28, 27, 42, 41, 21]. An excellent recent survey is given in Wagner [49]. In many "real world" calculations, direct methods are still widely used [6]. The robustness of direct elimination methods and their simplicity of use often outweigh the apparent benefits of fast iterative solvers. Our goal here is to try to develop an iterative solver that can compete with sparse Gaussian elimination in terms of simplicity of use and robustness, and to provide the potential of solving a wide range of linear systems more efficiently. While we are not yet satisfied that our method has achieved this goal, we believe that it is a reasonable first step. In particular, the method of general sparse Gaussian elimination with minimum degree ordering is a point in the parameter space of our method. This implies that in the worst case, our method defaults to this well-known and widely used method, among the most computationally efficient of general sparse direct methods [26]. In the best case, however, our method can exhibit the near optimal order complexity of the classical multigrid method.
Our plan is to take well studied, robust, and widely used procedures and data structures developed for sparse Gaussian elimination, generalize them as necessary, and use them as the basic components of our multilevel solver. The overall iteration follows the classical multigrid V-cycle in form, in contrast to the algebraic hierarchical basis multigraph algorithm developed in [11].
In this work we focus on the class of matrices which are structurally symmetric; that is, the pattern of nonzeros in the matrix is symmetric, although the numerical values of the matrix elements may render it nonsymmetric. Such structurally symmetric matrices arise in the discretizations of partial differential equations, say, by the finite element method. For certain problems, the matrices are symmetric and positive definite, but for others the linear systems are highly nonsymmetric and/or indefinite. Thus in practice this represents a very broad class of behavior. While our main interest is in scalar elliptic equations, as in the finite element code PLTMG [8], our algorithms can formally be applied to any structurally symmetric, nonsingular, sparse matrix.
Sparse direct methods typically have two phases. In the first (initialization) phase, equations are ordered, and symbolic and numerical factorizations are computed. In
the second (solution) phase, the solution of the linear system is computed using the factorization. Our procedure, as well as other algebraic multilevel methods, also breaks naturally into two phases. The initialization consists of ordering, incomplete symbolic and numeric factorizations, and the computation of the transfer matrices between levels. In the solution phase, the preconditioner computed in the initialization phase is used to compute the solution using the preconditioned composite step conjugate gradient (CSCG) or the composite step biconjugate gradient (CSBCG) method [9].
Iterative solvers often have tuning parameters and switches which require a certain level of a priori knowledge or some empirical experimentation to set in any particular instance. Our solver is not immune to this, although we have tried to keep the number of such parameters to a minimum. In particular, in the initialization phase, there are only three such parameters:
τ, the drop tolerance used in the incomplete factorization (called dtol in our code).
maxfil, an integer which controls the overall fill-in (storage) allowed in a given incomplete factorization.
maxlvl, an integer specifying the maximum number of levels. (The case of a zero drop tolerance with maxlvl = 1 corresponds to sparse Gaussian elimination.)
In the solution phase, there are only two additional parameters:
tol, the tolerance used in the convergence test.
maxcg, an integer specifying the maximum number of iterations.
Within our code, all matrices are generally treated within a single, unified framework; e.g., symmetric positive definite, nonsymmetric, and indefinite problems generally do not have specialized options. Besides the control parameters mentioned above, all information about the matrix is generated from the sparsity pattern and the values of the nonzeros, as provided in our sparse matrix data structure, a variant of the data structure introduced in the Yale sparse matrix package [23, 10]. For certain block matrices, the user may optionally provide a small array containing information about the block structure.
This input limits the complexity of the code, as well as eliminates parameters which might be needed to further classify a given matrix. On the other hand, it seems clear that a specialized solver directed at a specific problem or class of problems, and making use of this additional knowledge, is likely to outperform our algorithm on that particular class of problems. Although we do not think our method is provably "best" for any particular problem, we believe its generality and robustness, coupled with reasonable computational efficiency, make it a valuable addition to our collection of sparse solvers.
The rest of this paper is organized as follows. In section 2, we provide a general description of our multilevel approach. In section 3, we define the sparse matrix data structures used in our code. Our incomplete factorization algorithm is a standard drop tolerance approach with a few modifications for the present application. These are described in section 4. Our ordering procedure is the minimum degree algorithm. Once again, our implementation is basically standard, with several modifications to the input graph relevant to our application. These are described in section 5. In section 6, we describe the construction of the transfer matrices used in the construction of the coarse grid correction. Information about the block structure of the matrix, if any is provided, is used only in the coarsening procedure. This is described in section 7. Finally, in section 8, we give some numerical illustrations of our method on a variety of (partial differential equation) matrices.
2. Matrix formulation. Let A be a large sparse, nonsingular N x N matrix. We assume that the sparsity pattern of A is symmetric, although the numerical values need not be. We will begin by describing the basic two-level method for solving

   A x = b.                                                             (2.1)

Let B be an N x N nonsingular matrix, called the smoother, which gives rise to the basic iterative method used in the multilevel preconditioner. In our case, B is an approximate factorization of A, i.e.,

   B = P (L + D) D^{-1} (D + U) P^t,                                    (2.2)

where L is (strict) lower triangular, U is (strict) upper triangular with the same sparsity pattern as L^t, D is diagonal, and P is a permutation matrix.
Given an initial guess x_0, m steps of the smoothing procedure produce iterates

   x_{k+1} = x_k + B^{-1} (b - A x_k),   k = 0, 1, ..., m - 1.          (2.3)

The second component of the two-level preconditioner is the coarse grid correction. Here we assume that the matrix A can be partitioned as

   A = [ A_ff  A_fc ; A_cf  A_cc ],                                     (2.4)

where the subscripts f and c denote fine and coarse, respectively. Similar to the smoother, the partition of A into fine and coarse blocks involves a permutation matrix P. The N̂ x N̂ coarse grid matrix Â is given by

   Â = V̂ A Ŵ = [ V_cf  I_cc ] [ A_ff  A_fc ; A_cf  A_cc ] [ W_fc ; I_cc ].   (2.5)

The matrices V_cf and W^t_fc are N̂ x (N - N̂) matrices with identical sparsity patterns; thus Â has a symmetric sparsity pattern. If A = A^t and V_cf = W^t_fc, then Â = Â^t.
Let

   V̂ = [ V_cf  I_cc ],   Ŵ = [ W_fc ; I_cc ].                          (2.6)

In standard multigrid terminology, the matrices V̂ and Ŵ are called restriction and prolongation, respectively. Given an approximate solution x_m to (2.1), the coarse grid correction produces an iterate x_{m+1} as follows:

   x̂ = Â^{-1} V̂ (b - A x_m);   x_{m+1} = x_m + Ŵ x̂.                 (2.7)
As is typical of multilevel methods, we define the two-level preconditioner M implicitly in terms of the smoother and coarse grid correction. A single cycle takes an initial guess x_0 to a final guess x_{2m+1} as follows:

Two-Level Preconditioner
(i) x_1, ..., x_m are defined using (2.3).
(ii) x_{m+1} is defined using (2.7).
(iii) x_{m+2}, ..., x_{2m+1} are defined using (2.3).

The generalization from two-level to multilevel consists of applying recursion to the solution of the equation Â x̂ = r̂ in (2.7). Let ℓ denote the number of levels in the recursion. Let M̂ = M̂(ℓ - 1) denote the preconditioner for Â. Then (2.7) is generalized to

   x̂ = M̂^{-1} V̂ (b - A x_m);   x_{m+1} = x_m + Ŵ x̂.                (2.8)

The general ℓ level preconditioner M = M(ℓ) is then defined as follows:

ℓ-Level Preconditioner
(i) If ℓ = 1, solve (2.1) directly.
(ii) If ℓ > 1, starting from initial guess x_0, compute x_{2m+1} using (iii)-(v):
(iii) x_1, ..., x_m are defined using (2.3).
(iv) x_{m+1} is defined by (2.8), using cycles of the ℓ - 1 level scheme for Â x̂ = r̂ to define M̂, with initial guess x̂ = 0.
(v) x_{m+2}, ..., x_{2m+1} are defined using (2.3).

The case of one coarse level cycle corresponds to the symmetric V-cycle, while the case of two corresponds to the symmetric W-cycle. We note that there are other variants of both the V-cycle and the W-cycle, as well as other types of multilevel cycling strategies [30]. However, in this work (and in our code) we restrict attention to just the symmetric V-cycle with presmoothing and postsmoothing iterations.
For the coarse mesh solution, our procedure is somewhat nontraditional. Instead of a direct solution of (2.1), we compute an approximate solution using one smoothing iteration. We illustrate the practical consequences of this decision in section 8.
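A single cycle of the two-level preconditioner can be written in a few lines. The Python sketch below follows items (i)-(iii) directly; B_solve and Ahat_solve stand for the actions of B^{-1} and Â^{-1} and are placeholders for the incomplete factorization and coarse solves described later, not part of the actual code.

    import numpy as np

    def two_level_cycle(A, B_solve, Vhat, What, Ahat_solve, b, x, m=1):
        # m presmoothing steps (2.3), one coarse grid correction (2.7),
        # and m postsmoothing steps (2.3).
        for _ in range(m):
            x = x + B_solve(b - A @ x)
        x = x + What @ Ahat_solve(Vhat @ (b - A @ x))
        for _ in range(m):
            x = x + B_solve(b - A @ x)
        return x

Replacing Ahat_solve by a recursive call to the same routine on the coarse system gives the ℓ-level V-cycle described above.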
If A is symmetric, then so is M, and the ℓ-level preconditioner could be used as a preconditioner for a symmetric Krylov space method. If A is also positive definite, so is M, and the standard conjugate gradient method could be used; otherwise the CSCG method [9], SYMMLQ [43], or a similar method could be used. In the nonsymmetric case, the ℓ-level preconditioner could be used in conjunction with the CSBCG method [9], GMRES [22], or a similar method.
To complete the definition of the method, we must provide algorithms to
compute the permutation matrix P in (2.2);
compute the incomplete factorization matrix B in (2.2);
compute the fine-coarse partitioning P in (2.4);
compute the sparsity patterns and numerical values in the prolongation and restriction matrices in (2.6).
3. Data structures. Let A be an N x N matrix with elements A_ij and a symmetric sparsity structure; that is, both A_ij and A_ji are treated as nonzero elements (i.e. stored and processed) if either is nonzero. The diagonal entries A_ii are treated as nonzero regardless of their numerical values.
Our data structure is a modified and generalized version of the data structure introduced in the (symmetric) Yale sparse matrix package [23]. It is a rowwise version of the data structure described in [10]. In our scheme, the nonzero entries of A are stored in a linear array a and accessed through an integer array ja. Let η_i be the number of nonzeros in the strict upper triangular part of row i and set η = η_1 + ... + η_N. The array ja is of length N + 1 + η, and the array a is of length N + 1 + η if A^t = A. If A^t != A, then the array a is of length N + 1 + 2η. The entries of ja(i), 1 <= i <= N + 1, are pointers defined as follows:

   ja(1) = N + 2,   ja(i + 1) = ja(i) + η_i,   1 <= i <= N.

The locations ja(i) to ja(i + 1) - 1 contain the η_i column indices corresponding to row i in the strictly upper triangular part of the matrix.
In a similar manner, the array a is defined as follows: a(i) = A_ii for 1 <= i <= N, entry a(N + 1) is arbitrary, and the nonzeros of the strict upper triangle follow, stored rowwise in correspondence with the column indices in ja. If A^t != A, the nonzeros of the strict lower triangle follow, stored columnwise.
In words, the diagonal is stored first, followed by the strict upper triangle stored rowwise. If A^t != A, then this is followed by the strict lower triangle stored columnwise. Since A is structurally symmetric, the column indices for the upper triangle are identical to the row indices for the lower triangle, and hence they need not be duplicated in storage.
As an example, let

   A = [ A_11  A_12  A_13   0     0
         A_21  A_22   0    A_24   0
         A_31   0    A_33  A_34  A_35
          0    A_42  A_43  A_44   0
          0     0    A_53   0    A_55 ].

Then N = 5, η = 5, and

   ja = ( 7  9  10  12  12  12 | 2  3  4  4  5 ),
   a  = ( A_11 A_22 A_33 A_44 A_55  * | A_12 A_13 A_24 A_34 A_35 | A_21 A_31 A_42 A_43 A_53 ),

where the three groups of a are the diagonal, the upper triangle (rowwise), and the lower triangle (columnwise), and * denotes the arbitrary entry a(N + 1).
Although the YSMP data structure was originally devised for sparse direct methods based on Gaussian elimination, it is also quite natural for iterative methods based on incomplete triangular decomposition. Because we assume that A has a symmetric sparsity structure, for many matrix calculations a single indirect address computation in ja can be used to process both a lower and an upper triangular element of A. For example, the following procedure computes y = Ax, where lmtx and umtx are offsets into the a array selecting the lower and upper triangular storage:

procedure mult(N, ja, a, x, y)
   for i = 1 to N
      y(i) = a(i) x(i)
   end for
   for i = 1 to N
      for k = ja(i) to ja(i + 1) - 1
         j = ja(k)
         y(i) = y(i) + a(k + umtx) x(j)
         y(j) = y(j) + a(k + lmtx) x(i)
      end for
   end for

For nonsymmetric A, y = Ax is computed with umtx = 0 and lmtx = ja(N + 1) - ja(1). For symmetric matrices, set lmtx = umtx = 0. Also, y = A^t x may be readily computed by setting lmtx = 0 and umtx = ja(N + 1) - ja(1).
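For readers who prefer an executable version, the following Python sketch mirrors the procedure above; it keeps the 1-based indexing of the text by padding index 0 of each array, and the offsets are set exactly as described in the preceding paragraph. It is an illustration of the data structure, not the authors' code.

    def mult_py(N, ja, a, x, lmtx, umtx):
        # y = A x (or A^t x) using the ja/a storage; arrays are Python lists
        # with a dummy entry at index 0 so that indices match the text.
        y = [0.0] * (N + 1)
        for i in range(1, N + 1):
            y[i] = a[i] * x[i]                   # diagonal contribution
        for i in range(1, N + 1):
            for k in range(ja[i], ja[i + 1]):    # off-diagonals of row i
                j = ja[k]
                y[i] += a[k + umtx] * x[j]       # entry in position (i, j)
                y[j] += a[k + lmtx] * x[i]       # entry in position (j, i)
        return y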
The data structure for storing (L + D) D^{-1} (D + U) is quite analogous to that for A. It consists of two arrays, ju and u, corresponding to ja and a, respectively. The first N + 1 entries of ju are pointers as in ja, while entries ju(i) to ju(i + 1) - 1 contain column indices of the nonzeros of row i of U. In the u array, the diagonal entries of D are stored in the first N entries. Entry N + 1 is arbitrary. Next, the nonzero entries of U are stored in correspondence to the column indices in ju. If A^t != A, the nonzero entries of L follow, stored columnwise.
The data structure we use for the N x N̂ prolongation matrix Ŵ and the N̂ x N restriction matrix V̂ is similar. It consists of an integer array jv and a real array v. The nonzero entries of Ŵ are stored rowwise, including the rows of the block I_cc. As usual, the first N + 1 entries of jv are pointers; entries jv(i) to jv(i + 1) - 1 contain column indices for row i of Ŵ. In the v array, the nonzero entries of Ŵ are stored rowwise in correspondence with jv but shifted by N since there is no diagonal part. If V̂ != Ŵ^t, this is followed by the nonzeros of V̂ stored columnwise.
4. ILU factorization. Our incomplete (L + D) D^{-1} (D + U) factorization is similar to the row elimination scheme developed for the symmetric YSMP codes [23, 26]. For simplicity, we begin by discussing a complete factorization and then describe the modifications necessary for the incomplete factorization. Without loss of generality, assume that the permutation matrix P = I.
After k steps of elimination, we have the block factorization

   [ A_11  A_12 ; A_21  A_22 ] = [ L_11 + D_11   0 ; L_21   I ] [ D_11^{-1}   0 ; 0   I ] [ D_11 + U_11   U_12 ; 0   S ],   (4.1)

where A_11 is k x k and A_22 is (N - k) x (N - k). We assume that at this stage, all the blocks on the right-hand side of (4.1) have been computed except for the Schur complement S, given by

   S = A_22 - L_21 D_11^{-1} U_12.                                     (4.2)

Our goal for step k + 1 is to compute the first row and column of S (4.3), namely the first row and column of A_22 minus the corresponding row and column of L_21 D_11^{-1} U_12. Because A and (L + D) D^{-1} (D + U) have symmetric sparsity patterns, and our data structures take advantage of this symmetry, it is clear that the algorithms for computing the row and the column are the same, and in practice differ only in the assignments of shifts for the u and a arrays, analogous to lmtx and umtx in procedure mult. Thus we will focus on the computation of just the first column of S. At this point, we also assume that the array ju has been computed in a so-called symbolic factorization step.
1. Copy the rst column of A 22 (stored in the data structures ja and a) into an
expanded work vector z of size N .
2. Find the multipliers given by nonzeros of D 1
3. For each multiplier
using column k of L 21 (i.e.,
4. Copy the nonzeros in z into the data structures ju and u.
In step 1, we need to know the nonzeros of the rst column of A 22 , which is
precisely the information easily accessible in the ja and a data structures. In step
3, we need to know the nonzeros in columns of L 21 , which again is precisely the
information easily available in our data structure. In step 4, we copy a column of
information into the lower triangular portion of the ju and u data structures. Indeed,
the only di-cult aspect of the algorithm is step 2, in which we need to know the
sparsity structure of the rst column of U 12 , information that is not readily available
in the data structure. This is handled in a standard fashion using a dynamic linked
list structure and will not be discussed in detail here.
To generalize this to the incomplete factorization case, we first observe that the ju array can be computed concurrently with the numeric factorization, simply by creating a list of the entries of the expanded array z that are updated in step 3. Next, we note that one may choose which nonzero entries from z to include in the factorization by choosing which entries to copy to the ju and u data structures in step 4. We do this through a standard approach using a drop tolerance τ. In particular, we neglect a pair of off-diagonal elements if they are small relative to the drop tolerance τ and the available diagonal entries, according to the test (4.4); note that D_ii has not yet been computed at this point. It is well known that the fill-in generated through the application of a criterion such as (4.4) is a highly nonlinear and matrix dependent function of τ. This is especially problematic in the present context, since control of the fill-in is necessary in order to control the work per iteration in the multilevel iteration.
Several authors have explored possibilities of controlling the maximum number of fill-in elements allowed in each row of the incomplete decomposition [35, 47, 31]. However, for many cases of interest, and in particular for matrices arising from discretizations of partial differential equations ordered by the minimum degree algorithm, most of the fill-in in a complete factorization occurs in the later stages, even if all the rows initially have about the same number of nonzeros. Thus, while it seems advisable to try to control the total fill-in, one should adaptively decide how to allocate the fill-in among the rows of the matrix. In our algorithm, in addition to the drop tolerance τ, the user provides a parameter maxfil, which specifies that the total number of nonzeros in U is not larger than maxfil * N.
Our overall strategy is to compute the incomplete decomposition using the given drop tolerance. If it fails to meet the given storage bound, we increase the drop tolerance and begin a new incomplete factorization. We continue in this fashion until we complete a factorization within the given storage bound. Of course, such repeated factorizations are computationally expensive, so we developed some heuristics which allow us to predict a drop tolerance which will satisfy the storage bound.
As the factorization is computed, we make a histogram of the approximate sizes of all elements that exceed the drop tolerance and are accepted for the factorization. Let m denote the number of bins in the histogram (a fixed value in our code). Then for each pair of accepted off-diagonal elements, we find the largest k in [1, m] such that the pair exceeds the drop tolerance by the factor σ^{k-1} (4.5), where σ > 1 is a fixed constant in our code. The histogram is realized as an integer array h of size m, where h_ℓ is the number of accepted elements that exceeded the drop tolerance by factors between σ^{ℓ-1} and σ^ℓ for 1 <= ℓ <= m - 1; h_m contains the number of accepted elements exceeding the drop tolerance by σ^{m-1}. If the factorization reaches the storage bound, we continue the factorization but allow no further fill-in. However, we continue to compute the histogram based on (4.5), profiling the elements we would have accepted had space been available. Then, using the histogram, we predict a new value of τ such that the total number of elements accepted for U is no larger than maxfil * N / γ. Such a prediction of course cannot be guaranteed, since the sizes and numbers of fill-in elements depend in a complicated fashion on the specific history of the incomplete factorization process; indeed, the histogram cannot even completely profile the remainder of the factorization with the existing drop tolerance, since elements that would have been accepted could introduce additional fill-in at later stages of the calculation, as well as influence the sizes of elements computed at later stages of the factorization. In our implementation, the safety factor γ >= 1 varies, depending on how severely the storage bound was exceeded. Its purpose is to introduce some conservative bias into the prediction, with the goal that the actual fill-in accepted should not exceed maxfil * N.
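The prediction step can be illustrated with a small routine. In the sketch below, hist[k] counts accepted (or would-be accepted) pairs whose magnitude exceeded the current tolerance by a factor between sigma**k and sigma**(k+1), with the last bin open-ended, and budget plays the role of maxfil * N / γ; the bin convention and names are assumptions made for the illustration, not the code's exact logic.

    def predict_droptol(hist, tau, sigma, budget):
        # Scan the histogram from the largest elements down; the new tolerance
        # is the lower edge of the first bin that would overflow the budget.
        kept = 0
        for k in range(len(hist) - 1, -1, -1):
            if kept + hist[k] > budget:
                return tau * sigma ** (k + 1)
            kept += hist[k]
        return tau                              # budget not exceeded; keep tau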
Finally, we note that there is no comprehensive theory regarding the stability of incomplete triangular decompositions. For certain classes of matrices (e.g., M-matrices and H-matrices), the existence of certain incomplete factorizations has been proved [39, 25, 24, 40, 51]. However, in the general case, with potentially indefinite and/or highly nonsymmetric matrices, one must contend in a practical way with the possibility of failure or near failure of the factorization. A common approach is to add a diagonal matrix, often a multiple of the identity, to A and compute an incomplete factorization of the shifted matrix. One might also try to incorporate some form of diagonal pivoting; partial or complete pivoting could potentially destroy the symmetric sparsity pattern of the matrix. However, any sort of pivoting greatly increases the complexity of the implementation, since the simple but essentially static data structures ja, a, ju, and u are not appropriate for such an environment.
Our philosophy here is to simply accept occasional failures and continue with the factorization. Our ordering procedure contains some heuristics directed towards avoiding or at least minimizing the possibility of failures. And when they do occur, failures often corrupt only a low dimensional subspace, so a Krylov space method such as conjugate gradients can compensate for such corruption with only a few extra iterations. In our implementation, a failure is revealed by some diagonal entries in D becoming close to zero. Off-diagonal elements L_ji and U_ij are multiplied by D_ii^{-1}, and the solution of (L + D) D^{-1} (D + U) x = b requires multiplication by D_ii^{-1}. For purposes of calculating the factorization and solution, the value of D_ii^{-1} is modified near zero as follows: it is taken to be 1/D_ii for |D_ii| > δ and is replaced by a bounded value otherwise (4.6). Here δ is a small constant; in our implementation it is tied to the machine epsilon. Although many failures could render the preconditioner well-defined but essentially useless, in practice we have noted that D_ii^{-1} is rarely modified for the large class of finite element matrices which are the main target of our procedure.
5. Ordering. To compute the permutation matrix P in (2.2), we use the well-known minimum degree algorithm [45, 26]. Intuitively, if one is computing an incomplete factorization, an ordering which tends to minimize the fill-in in a complete factorization should also tend to minimize the error in the incomplete factorization. For particular classes of matrices, specialized ordering schemes have been developed [34, 15, 37, 36]. For example, for matrices arising from convection dominated problems, ordering along the flow direction has been used with great success. However, in this general setting, we prefer to use just one strategy for all matrices. This reduces the complexity of the implementation and avoids the problem of developing heuristics to decide among various ordering possibilities. We remark that for convection dominated problems, minimum degree orderings perform comparably well to the specialized ones, provided some (modest) fill-in is allowed in the incomplete factorization. For us, this seems to be a reasonable compromise.
Our minimum degree ordering is a standard implementation, using the quotient graph model [26] and other standard enhancements. A description of the graph of the matrix is the main required input. Without going into detail, this is essentially a small variant of the basic ja data structure used to store the matrix A. We will denote this modified data structure as jc. Instead of storing only column indices for the strict upper triangle as in ja, entries jc(i) to jc(i + 1) - 1 of the jc data structure contain column indices for all off-diagonal entries of row i of the matrix A.
We have implemented two small enhancements to the minimum degree ordering; as a practical matter, both involve changes to the input graph data structure jc that is provided to the minimum degree code. First, we have implemented a drop tolerance similar to that used in the factorization. In particular, the edge in the graph corresponding to the off-diagonal entries A_ij and A_ji is not included in the jc data structure if

   |A_ij A_ji| <= τ^2 |A_jj A_ii|.                                      (5.1)

This excludes many entries which are likely to be dropped in the subsequent incomplete factorization, and hopefully will result in an ordering that tends to minimize the fill-in created by the edges that are kept.
The second modification involves some modest a priori diagonal pivoting, designed to minimize the number of failures (near zero diagonal elements) in the subsequent factorization. We first remark that pivoting or other procedures based on the values of the matrix elements (which can be viewed as weights on graph edges and nodes) would destroy many of the enhancements which allow the minimum degree algorithm to run in almost linear time. Our modification is best explained in the context of a simple 2 x 2 example. Let

   A = [ 0  c ; b  a ]                                                  (5.2)

with a, b, c != 0. Clearly, A is nonsingular, but the complete triangular factorization of A does not exist. However,

   [ a  b ; c  0 ] = [ a  0 ; c  -bc/a ] [ a  0 ; 0  -bc/a ]^{-1} [ a  b ; 0  -bc/a ].
Now suppose that A_ii ≈ 0, A_jj != 0, and A_ij, A_ji != 0; these four elements form a submatrix of the form described above, and it seems an incomplete factorization of A is less likely to fail if P is chosen such that vertex j is ordered before vertex i. This is done as follows: for each i such that A_ii ≈ 0, we determine a corresponding j such that A_jj != 0; if there is more than one choice, we choose the one for which |A_ij A_ji / A_jj| is maximized. To ensure that vertex i is ordered after vertex j, we replace the sparsity pattern for the off-diagonal entries of row (column) i with the union of those for rows (columns) i and j. If we denote the set of column indices for row i in the jc array as adj(i), then

   adj(j) ∪ {j} ⊆ adj(i) ∪ {i}.                                         (5.3)

Although the sets adj(i) and adj(j) are modified at various stages, it is well known that (5.3) is maintained throughout the minimum degree ordering process [26], so that at every step of the ordering process deg(j) <= deg(i), where deg(i) is the degree of vertex i. As long as deg(j) < deg(i), vertex j will be ordered before vertex i by the minimum degree algorithm. On the other hand, if at some stage of the ordering process deg(j) = deg(i), it remains so thereafter, and (5.3) becomes an equality. In words, i and j become so-called equivalent vertices and will be eliminated at the same time by the minimum degree algorithm (see [26] for details). Since the minimum degree algorithm sees these vertices as equivalent, they will be ordered in an arbitrary fashion when eliminated from the graph. Thus, as a simple postprocessing step, we must scan the ordering provided by the minimum degree algorithm and exchange the order of rows i and j if i was ordered first. Any such exchanges result in a new minimum degree ordering which is completely equivalent, in terms of fill-in, to the original.
For many types of finite element matrices (e.g., the indefinite matrices arising from Helmholtz equations), this a priori scheme is useless, because none of the diagonal entries of A is close to zero. However, this type of problem is likely to produce only isolated small diagonal entries in the factorization process, if it produces any at all. On the other hand, other classes of finite element matrices, notably those arising from mixed methods, Stokes equations, and other saddle-point-like formulations, have many diagonal entries that are small or zero. In such cases, the a priori diagonal pivoting strategy can make a substantial difference and greatly reduce the number of failures in the incomplete triangular decomposition.
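The a priori pivoting step can be sketched as a simple scan over the graph. The routine below, written against a dense matrix and a dictionary of neighbour sets rather than the jc array, records for each near-zero diagonal a partner j maximizing |A_ij A_ji / A_jj| and merges the adjacency sets as in (5.3); it is an illustration of the idea, not the implementation.

    def apriori_pivot(A, adj, eps):
        # For each vertex i with |A[i,i]| <= eps, pick the neighbour j with
        # |A[j,j]| > eps maximizing |A[i,j]*A[j,i]/A[j,j]| and enlarge adj(i)
        # so that adj(j) U {j} is contained in adj(i) U {i}.
        partner = {}
        for i in range(A.shape[0]):
            if abs(A[i, i]) > eps:
                continue
            best_val, best_j = 0.0, None
            for j in adj[i]:
                if abs(A[j, j]) > eps:
                    val = abs(A[i, j] * A[j, i] / A[j, j])
                    if val > best_val:
                        best_val, best_j = val, j
            if best_j is not None:
                adj[i] |= adj[best_j] | {best_j}
                adj[i].discard(i)               # no self-loops in the graph
                partner[i] = best_j
        return partner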
6. Computing the transfer matrices. There are three major tasks in computing the prolongation and restriction matrices Ŵ and V̂ of (2.6). First, one must determine the sparsity structure of these matrices; this involves choosing which unknowns are coarse and which are fine. This reduces to determining the permutation P of (2.4). Second, one must determine how coarse and fine unknowns are related, the so-called parent-child relations [49]. This involves computing the sparsity patterns for the matrices V_cf and W_fc. Third, one must compute the numerical values for these matrices, the so-called interpolation coefficients [50].
There are many existing algorithms for coarsening graphs. For matrices arising from discretizations of partial differential equations, often the sparsity of the matrix A is related in some way to the underlying grid, and the problem of coarsening the graph of the matrix A can be formulated in terms of coarsening the grid. Some examples are given in [14, 13, 17, 18, 46, 12, 49]. In this case, one has the geometry of the grid to serve as an aid in developing and analyzing the coarsening procedure. There are also more general graph coarsening algorithms [32, 33, 19], often used to partition problems for parallel computation. Here our coarsening scheme is based upon another well-known sparse matrix ordering technique, the reverse Cuthill-McKee algorithm. This ordering tends to yield reordered matrices with minimal bandwidth and is widely used with generalized band elimination algorithms [26]. We now assume that the graph has been ordered in this fashion and that a jc data structure representing the graph in this ordering is available. Our coarsening procedure is just a simple postprocessing step of the basic ordering routine, in which the N vertices of the graph are marked as COARSE or FINE.
procedure coarsen(N, jc, type)
   for i = 1 to N
      type(i) = COARSE
   end for
   for i = 1 to N
      if type(i) = COARSE then
         for j = jc(i) to jc(i + 1) - 1
            type(jc(j)) = FINE
         end for
      end if
   end for
This postprocessing step, coupled with the reverse Cuthill-McKee algorithm, is quite similar to a greedy algorithm for computing maximal independent sets using breadth-first search. Under this procedure, all coarse vertices are surrounded only by fine vertices. This implies that the matrix A_cc in (2.4) is a diagonal matrix. For the sparsity patterns of matrices arising from discretizations of scalar partial differential equations in two space dimensions, the number of coarse unknowns N̂ is typically on the order of N/4 to N/5. Matrices with more nonzeros per row tend to have smaller values of N̂. To define the parents of a coarse vertex, we take all the connections of the vertex to other fine vertices; that is, the sparsity structure of V_cf in (2.5) is the same as that of the block A_cf.
In our present code, we pick V_cf and W_fc according to the formulae

   W_fc = -R_ff D_ff^{-1} A_fc,      V_cf = -A_cf D_ff^{-1} R̃_ff.      (6.1)

Here D_ff is a diagonal matrix with diagonal entries equal to those of A_ff. In this sense, the nonzero entries in V_cf and W_fc are chosen as multipliers in Gaussian elimination. The nonnegative diagonal matrices R_ff and R̃_ff are chosen such that the nonzero rows of W_fc and columns of V_cf, respectively, have unit norms in ℓ_1.
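For a dense matrix the formulae (6.1) take only a few lines. The following sketch assumes a nonzero diagonal in A_ff and uses index arrays fine and coarse in place of the permutation P; it is a simplification of the sparse construction actually used.

    import numpy as np

    def transfer_blocks(A, fine, coarse):
        # W_fc = -R_ff D_ff^{-1} A_fc with unit l1 rows,
        # V_cf = -A_cf D_ff^{-1} R~_ff with unit l1 columns.
        d = np.diag(A)[fine]
        Wfc = -A[np.ix_(fine, coarse)] / d[:, None]
        Vcf = -A[np.ix_(coarse, fine)] / d[None, :]
        rnz = np.abs(Wfc).sum(axis=1) > 0
        Wfc[rnz] = Wfc[rnz] / np.abs(Wfc[rnz]).sum(axis=1)[:, None]
        cnz = np.abs(Vcf).sum(axis=0) > 0
        Vcf[:, cnz] = Vcf[:, cnz] / np.abs(Vcf[:, cnz]).sum(axis=0)
        return Wfc, Vcf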
Finally, the coarsened matrix Â of (2.5) is "sparsified" using the drop tolerance and a criterion like (5.1) to remove small off-diagonal elements. Empirically, applying a drop tolerance to Â at the end of the coarsening procedure has proved more efficient, and more effective, than trying to independently sparsify its constituent matrices. If the number of off-diagonal elements in the upper triangle exceeds maxfil * N̂, the drop tolerance is modified in a fashion similar to the incomplete factorization. The off-diagonal elements are profiled by a procedure similar to that for the incomplete factorization, but in this case the resulting histogram is exact. Based on this histogram, a new drop tolerance is computed, and (5.1) is applied to produce a coarsened matrix satisfying the storage bound.
7. Block matrices. Our algorithm provides a simple but limited functionality for handling block matrices. Suppose that the N x N matrix A has the K x K block structure

   A = [ A_11  ...  A_1K ; ... ; A_K1  ...  A_KK ],                     (7.1)

where the subscripts for A_ij are block indices and the diagonal blocks A_jj are square matrices, A_jj being of order N_j.
The matrix A is stored in the usual ja and a data structures as described in section 3, with no reference to the block structure. A small additional integer array ib of size K + 1 is used to define the block boundaries as follows:

   ib(1) = 1,   ib(j + 1) = ib(j) + N_j,   1 <= j <= K.

In words, integers in the range ib(j) to ib(j + 1) - 1, inclusive, comprise the index set associated with block A_jj. Note that ib(K + 1) = N + 1.
This block information plays a role only in the coarsening algorithm. First, the reverse Cuthill-McKee algorithm described in section 6 is applied to the block diagonal matrix

   Ã = diag( A_11, A_22, ..., A_KK )                                    (7.2)
rather than A. As a practical matter, this involves discarding graph edges connecting vertices of different blocks in the construction of the graph array jc used as input. Such edges are straightforward to determine from the information provided in the ib array. The coarsening algorithm applied to the graph of Ã produces output equivalent to the application of the procedure independently to each diagonal block of Ã. As a consequence, the restriction and prolongation matrices automatically inherit the block structure of A. In particular,

   V̂ = diag( V̂_11, ..., V̂_KK )   and   Ŵ = diag( Ŵ_11, ..., Ŵ_KK ),   (7.3)

where V̂_jj and Ŵ_jj are rectangular matrices having the structure of (2.6) that would have resulted from the application of the algorithm independently to A_jj. However, like the matrix A, V̂ and Ŵ are stored in the standard jv and v data structures described in section 3, without reference to their block structures.
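The only use of the ib array is therefore to filter the graph: an edge survives only if both endpoints lie in the same block. A small sketch of this filtering, using a dictionary of neighbour sets instead of the jc array, is given below; names and data layout are illustrative.

    import bisect

    def drop_interblock_edges(adj, ib):
        # ib[k] .. ib[k+1]-1 is the index range of block k (0-based here);
        # keep edge (i, j) only when i and j fall in the same range.
        def block(i):
            return bisect.bisect_right(ib, i) - 1
        return {i: {j for j in nbrs if block(j) == block(i)}
                for i, nbrs in adj.items()}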
The complete matrix A is used in the construction of the coarsened matrix Â of (2.5). However, because of (7.1) and (7.3),

   Â = V̂ A Ŵ   has blocks   Â_ij = V̂_ii A_ij Ŵ_jj,                    (7.4)

so Â also automatically inherits the K x K block structure of A. It is not necessary for the procedure forming Â to have any knowledge of its block structure, as this block structure can be computed a priori by the graph coarsening procedure. Like A, Â is stored in standard ja and a data structures without reference to its block structure. Since the blocks of A have arbitrary order, and are essentially coarsened independently, it is likely that eventually some of the coarse block dimensions become zero. That is, certain blocks may cease to exist on coarse levels. Since the block information is used only to discard certain edges in the construction of the graph array jc, "0 x 0" diagonal blocks present no difficulty.
8. Numerical experiments. In this section, we present a few numerical illustrations. In our first sequence of experiments, we consider several matrices loosely based on the classical case of 5-point centered finite difference approximations to the Laplacian on a uniform square mesh. Dirichlet boundary conditions are imposed. This leads to the block tridiagonal system

   A = tridiag( -I, T, -I ),

with T the n x n tridiagonal matrix

   T = tridiag( -1, 4, -1 ).

This is a simple test problem, easily solved by standard multigrid methods. In contrast to this example, we also consider the block tridiagonal system

   Â = tridiag( I, T̂, I ),   T̂ = tridiag( 1, 4, 1 ).

Both A and Â have the same eigenvectors and the same eigenvalues, although the association of eigenvectors and eigenvalues is reversed in the case of Â. That is, the so-called smooth eigenvectors are associated with large eigenvalues, while rough eigenvectors are associated with smaller eigenvalues. Although Â does not arise naturally in the context of numerical discretizations of partial differential equations, it is of interest because it defies much of the conventional wisdom for multigrid methods.
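Both test matrices are easy to generate with Kronecker products. The sketch below builds dense versions of A and the reversed-spectrum matrix Â for a given n, assuming the standard 5-point stencil and the all-positive off-diagonal variant described above.

    import numpy as np

    def laplacian_pair(n):
        # A    = block tridiag(-I, T,    -I), T    = tridiag(-1, 4, -1)
        # Ahat = block tridiag( I, That,  I), That = tridiag( 1, 4,  1)
        I = np.eye(n)
        E = np.eye(n, k=1) + np.eye(n, k=-1)    # unit off-diagonals
        T, That = 4.0 * I - E, 4.0 * I + E
        A = np.kron(I, T) - np.kron(E, I)
        Ahat = np.kron(I, That) + np.kron(E, I)
        return A, Ahat

The two matrices share the same set of eigenvalues and eigenvectors, with the smooth/rough association swapped, matching the description above.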
Third, we consider block 3 x 3 systems S in which A is the discrete Laplacian and D is a symmetric positive definite "stabilization" matrix with a sparsity pattern similar to A. However, the nonzeros in D are small compared to the size O(1) nonzero elements of A. C_x and C_y also have sparsity patterns similar to that of A, but these matrices are nonsymmetric and their nonzero entries are of size O(h). Such matrices arise in stabilized discretizations of the Stokes equations. One third of the eigenvalues of S are negative, so S is quite indefinite. In addition to the ja and a arrays, for the matrix S we also provided an ib array as described in section 7 to define its 3 x 3 block structure. We emphasize again that this block information is used only in the computation of the graph input to the coarsening procedure and is not involved in any aspect of the incomplete factorization smoothing procedure. With many small diagonal elements, this class of matrices provides a good test of the a priori pivoting strategy used in conjunction with the minimum degree ordering.
In Table 8.1, Levels refers to the number of levels used in the calculation. In our implementation, the parameter maxlvl, which limits the number of levels allowed, was set sufficiently large that it had no effect on the computation. The drop tolerance τ was set to the same value for all matrices. The fill-in control parameter maxfil was set sufficiently large that it had no effect on the computation. The initial guess for all problems was x_0 = 0.
In Table 8.1, the parameter Digits refers to the number of digits of accuracy achieved, as defined by (8.1). In these experiments, we asked for six digits of accuracy. The column labeled Cycles indicates the number of multigrid cycles (accelerated by CSCG) that were used to achieve the indicated number of digits. Finally, the last two columns, labeled Init. and Solve, record the CPU time, measured in seconds, for the initialization and solution phases of the algorithm, respectively. Initialization includes all the orderings, incomplete factorizations, and computation of transfer matrices used in the multigraph preconditioner. Solution includes the time to solve (2.1) to at least six digits, given the preconditioner. These experiments were run on an SGI Octane R10000 250MHz, using double precision arithmetic and the f90 compiler.
In analyzing these results, it is clear that our procedure does reasonably well on all three classes of matrices. Although it appears that the rate of convergence is not independent of N, it seems apparent that the work is growing no faster than logarithmically. CPU times for larger values of N are affected by cache performance as well as the slightly larger number of cycles.
For the highly indefinite Stokes matrices S, it is also important to note the robustness: the procedure solved all of the problems. With more nonzeros per row on average, the incomplete factorization was more expensive to compute than for the other cases. This is reflected in relatively larger initialization and solve times.
In our next experiment, we illustrate the effect of the parameters maxlvl and τ.

Table 8.1: Performance comparison (Digits, Cycles, Init., Solve) for the discrete Laplacian A, the matrix Â, and the Stokes matrix S.

For the matrix A, we solved the problem for several values of the drop tolerance τ and 1 <= maxlvl <= 7. We terminated the iteration when the solution had six digits, as measured by (8.1). We also provide the total storage for the ja and ju arrays for all matrices, measured in thousands of entries. Since the matrices are symmetric, this is also the total (floating point) storage for all matrices A and approximate LDU factorizations.
Here we see that our method behaves in a very predictable way. In particular, decreasing the drop tolerance or increasing the number of levels improves the convergence behavior of the method. On the other hand, the timings do not always follow the same trend. For example, in some cases increasing the number of levels from one to two decreases the number of cycles but increases the time. This is because for maxlvl = 1 our method defaults to the standard conjugate gradient iteration with the incomplete factorization preconditioner. When maxlvl > 1, one presmoothing and one postsmoothing step are used for the largest matrix. With the additional cost of the recursion, the overall cost of the preconditioner is more than double the cost for the case maxlvl = 1.
We also note that, unlike the classical multigrid method, where the coarsest matrix is solved exactly, in our code we have chosen to approximately solve the coarsest system using just one smoothing iteration with the incomplete factorization. When the maximum number of levels is used, as in Table 8.1, the smallest system is typically 1 x 1 or 2 x 2, and this is an irrelevant remark. However, in the case of Table 8.2, the fact that the smallest system is not solved exactly significantly influences the overall rate of convergence. This is why, unlike methods where the coarsest system is solved exactly, increasing the number of levels tends to improve the rate of convergence. In one case the coarsest matrix had an exact LDU factorization for maxlvl = 5 (because the matrix itself was nearly diagonal), and setting maxlvl > 5 did not increase the number of levels.
and setting maxlvl > 5 did not increase the number of levels. The cases
Dependence of convergence of and maxlvl, discrete Laplacian A,
maxlvl Digits Cycles Init. Solve
3 6.1 96 13.2 116.9 1077 1119
6 { { { { { {
7 { { { { { {
6.1 56 12.1 64.9 878 2106
6.1 22 16.6 31.7 878 3649
and used a maximum of 10 and 9 levels, respectively, but the results did not
change signicantly from the case 7.
We also include in Table 8.2 the case in which the drop tolerance is set to zero,
corresponding to Gaussian elimination. (In fact, our code uses a small multiple of ||A||
as the drop tolerance when the user specifies zero, to avoid dividing by zero.)
Here we see that Gaussian elimination is reasonably
competitive on this problem. However, we generally expect the initialization cost for
Gaussian elimination to grow like O(N^{3/2}), and we expect its solution times to
grow like O(N^p), p > 1. For the best multilevel choices, we expect both initialization
and solution times to behave like O(N) to O(N log N).
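To make the role of the drop tolerance concrete, the short Python sketch below applies a threshold test to one computed row of an incomplete factor and substitutes a small multiple of ||A|| when a zero tolerance is requested. The function name, parameter names, and the floor constant are illustrative assumptions, not the actual values used in our code.

import numpy as np

def drop_small_entries(values, columns, user_tol, a_norm, floor=1.0e-12):
    # Threshold dropping for one computed row of an incomplete factor.
    # A user tolerance of zero would amount to exact elimination; a small
    # multiple of ||A|| is substituted so the dropping test stays well defined.
    tol = user_tol if user_tol > 0.0 else floor * a_norm
    keep = np.abs(values) >= tol
    return values[keep], columns[keep]

# Example: only entries at least as large as the (defaulted) tolerance are kept.
vals, cols = drop_small_entries(np.array([0.8, 1.0e-9, -0.3]),
                                np.array([2, 5, 7]), 0.0, a_norm=4.0)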
In our final series of tests, we study the convergence of the method for a suite of
test problems generated from the finite element code PLTMG [8]. These example
problems were presented in our earlier work [11], where a more complete description
of the problems, as well as numerical results for our hierarchical basis multigraph
method and the classical AMG algorithm of Ruge and Stuben [46], can be found.
As a group, the problems feature highly nonuniform, adaptively generated meshes,
relatively complicated geometry, and a variety of differential operators. For each test
case, both the sparse matrix and the right-hand side were saved in a file to serve as
input for the iterative solvers. A short description of each test problem is given below.
Problem Superior. This problem is a simple Poisson equation
with homogeneous Dirichlet boundary conditions on a domain in the shape of Lake
Superior. This is the classical problem on a fairly complicated domain. The solution
is generally very smooth but has some boundary singularities.
Problem Hole. This problem features discontinuous, anisotropic coefficients. The
overall domain is the region between two concentric circles, but this domain is divided
into three subregions, and the equation has a different anisotropic diffusion operator
on each of the inner, middle, and outer regions.
Homogeneous Dirichlet boundary conditions are imposed on the inner (hole) boundary,
homogeneous Neumann conditions on the outer boundary, and the natural continuity
conditions on the internal interfaces. While the solution is also relatively
smooth, singularities exist at the internal interfaces.
Problem Texas. This is an indefinite Helmholtz equation
posed in a region shaped like the state of Texas. Homogeneous Dirichlet boundary
conditions are imposed. The length scales of this domain are roughly 16 x 16, so this
problem is fairly indefinite.
Problem UCSD. This is a simple constant coefficient convection-diffusion equation
posed on a domain in the shape of the UCSD logo. Homogeneous
Dirichlet boundary conditions are imposed. Boundary layers are formed at the bottom
of the region and the top of various obstacles.
Problems Jcn 0 and Jcn 180. The next two problems are solutions of the current
continuity equation taken from semiconductor device modeling. This equation is a
convection-diffusion equation. The convection field is small in most of the rectangular
domain; however, in a curved band in the interior of the domain its magnitude is on
the order of 10^4 and it is directed radially. Dirichlet boundary conditions
are imposed along the bottom boundary and along a short segment on
the upper left boundary. Homogeneous Neumann boundary conditions
are specified elsewhere. The solutions vary exponentially across the domain, which is
typical of semiconductor problems.
In the first problem, Jcn 0, the convective term is chosen so that the device is forward
biased. In this case, a sharp internal layer develops along the top interface boundary.
In the second problem, Jcn 180, the sign of the convective term is reversed, resulting
in two sharp internal layers along both interface boundaries.
We summarize the results in Table 8.3. As before, perhaps the most important
point is that the method solved all of the problems. While convergence rates are not
independent of h, once again the growth appears to be at worst logarithmic.
Below we make some additional remarks.
Table 8.3. Performance comparison for the PLTMG test suite (columns: N, Levels, Digits,
Cycles, Init., Solve); rows are grouped by problem (Superior, Hole, Texas, UCSD, Jcn 0, Jcn 180).
For example, the first case has N = 20k, 9 levels, 7.3 digits, 5 cycles, initialization time 1.4e0,
and solve time 9.4e-1.
For all problems, decreasing the drop tolerance will tend to increase the effectiveness
of the preconditioner, although it generally will also make the
preconditioner more costly to apply. Thus one might optimize the selection
of the drop tolerance to balance the decreasing number of cycles against the
increasing cost per cycle. In these experiments, we did not try such systematic
optimization, but we did adjust the drop tolerance in a crude way such
that more difficult problems performed in a fashion similar to the easy ones.
Problem Texas is by far the most difficult in this test suite. While we set
a limit on the storage for the incomplete factors, the problem with order 80k was the
only one which came close to achieving this storage limit. Most were well below this
limit, and many averaged less than 10 nonzeros per row in the L and U factors.
For the nonsymmetric problems the CSBCG method is used for acceleration.
Since the CSBCG requires the solution of a conjugate system with A^T, two
matrix multiplies and two preconditioning steps are required for each itera-
tion. As noted in section 3, with our data structures, applying a transposed
matrix and preconditioner costs the same as applying the original matrix or
preconditioner. Since these are the dominant costs in the CSBCG methods,
the cost per cycle is approximately double that for an equivalent symmetric
system.
--R
On eigenvalue estimates for block incomplete factorization methods
The algebraic multilevel iteration methods - theory and applications
Stabilization of algebraic multilevel iteration methods
Algebraic multilevel preconditioning methods I
A class of hybrid algebraic multilevel preconditioning methods
PLTMG: A Software Package for Solving Elliptic Partial Differential Equations
An analysis of the composite step biconjugate gradient method
General sparse elimination requires no permanent integer storage
The incomplete factorization multigraph algorithm
The hierarchical basis multigrid method and incomplete LU decomposition
Orderings for incomplete factorization preconditioning of nonsymmetric problems
Towards algebraic multigrid for elliptic problems of second order
Boundary treatments for multilevel methods on unstructured meshes
Black box multigrid
Variational iterative methods for non-symmetric systems of linear equations
Algorithms and data structures for sparse symmetric Gaussian elimination
A stability analysis of incomplete LU factorizations
Algebraic analysis of the hierarchical basis preconditioner
Computer Solution of Large Sparse Positive Definite Systems
Incomplete block factorization preconditioning for linear systems arising in the numerical solution of the Helmholtz equation
An algebraic hierarchical basis preconditioner
Incomplete Decompositions - Theory
Analysis of multilevel graph partitioning
Ordering techniques for convection dominated problems on unstructured three dimensional grids
Ordering strategies for modi
Energy optimization of algebraic multigrid bases
An analysis of the robustness of some incomplete factorizations
On the stability of the incomplete LU-factorizations and characterizations of H-matrices
Using approximate inverses in algebraic multilevel methods
Solution of sparse indefinite systems of linear equations
A multigrid method based on incomplete Gaussian elimination
A graph theoretic study of the numeric solution of sparse positive definite systems of linear equations
ILUT: a dual threshold incomplete LU factorization
Convergence of algebraic multigrid based on smoothed aggregation
Introduction to algebraic multigrid
An energy-minimizing interpolation for robust multi-grid methods
On the robustness of ILU smoothing
--TR
--CTR
Randolph E. Bank, Compatible coarsening in the multigraph algorithm, Advances in Engineering Software, v.38 n.5, p.287-294, May, 2007
Gh. Juncu , E. Mosekilde , C. Popa, Numerical experiments with MG continuation algorithms, Applied Numerical Mathematics, v.56 n.6, p.844-861, June 2006
J. S. Ovall, Hierarchical matrix techniques for a domain decomposition algorithm, Computing, v.80 n.4, p.287-297, September 2007
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | algebraic multigrid;multigraph methods;incomplete LU factorization |
587404 | A Scalable Parallel Algorithm for Incomplete Factor Preconditioning. | We describe a parallel algorithm for computing incomplete factor (ILU) preconditioners. The algorithm attains a high degree of parallelism through graph partitioning and a two-level ordering strategy. Both the subdomains and the nodes within each subdomain are ordered to preserve concurrency. We show through an algorithmic analysis and through computational results that this algorithm is scalable. Experimental results include timings on three parallel platforms for problems with up to 20 million unknowns running on up to 216 processors. The resulting preconditioned Krylov solvers have the desirable property that the number of iterations required for convergence is insensitive to the number of processors. | Introduction
. Incomplete factorization (ILU) preconditioning is currently
among the most robust techniques employed to improve the convergence of Krylov
space solvers for linear systems of equations. (ILU stands for incomplete LU fac-
torization, where L and U are the lower and upper triangular (incomplete) factors
of the coefficient matrix.) However, scalable parallel algorithms for computing ILU
preconditioners have not been available despite the fact that they have been used for
more than twenty years [12]. We report the design, analysis, implementation, and
computational evaluation of a parallel algorithm for computing ILU preconditioners.
Our parallel algorithm assumes that three requirements are satisfied.
. The adjacency graph of the coefficient matrix (or the underlying finite element
or finite difference mesh) must have good edge separators, i.e., it must be
possible to remove a small set of edges to divide the problem into a collection
of subproblems that have roughly equal computational work requirements.
. The size of the problem must be sufficiently large relative to the number
of processors so that the work required by the subgraph on each processor
is suitably large to dominate the work and communications needed for the
boundary nodes.
. The subdomain intersection graph (to be defined later) should have a small
chromatic number. This requirement will ensure that the dependencies in
factoring the boundary rows do not result in undue losses in concurrency.
An outline of the paper is as follows. In section 2, we describe the steps in
the parallel algorithm for computing the ILU preconditioner in detail and provide
theoretical justification. The algorithm is based on an incomplete fill path theorem;
the proof and discussion of the theorem are deferred to an appendix. We also discuss
the role that a subdomain graph constraint plays in the design of the algorithm,
show that the preconditioners exist for special classes of matrices, and relate our
work to earlier work on this problem. Section 3 contains an analysis that shows that
the parallel algorithm is scalable for two-dimensional (2D) and three-dimensional
(3D) model problems, when they are suitably ordered and partitioned. Section 4
contains computational results on Poisson and convection-di#usion problems. The
first subsection shows that the parallel ILU algorithm is scalable on three parallel
platforms; the second subsection reports convergence studies. We tabulate how the
number of Krylov solver iterations and the number of entries in the preconditioner
vary as a function of the preconditioner level for three variations of the algorithm.
The results show that fill levels higher than one are effective in reducing the number
of iterations; the number of iterations is insensitive to the number of subdomains;
and the subdomain graph constraint does not affect the number of iterations while it
makes possible the design of a simpler parallel algorithm.
The background needed for ILU preconditioning may be found in several books;
see, e.g., [1, 15, 17, 33]. A preliminary version of this paper was presented at Super-computing
'99 and was published in the conference proceedings [18]. The algorithm
has been revised, additional details have been included, and the proof of the theorem
on which it is based has been added. The experimental results in section 4 are new,
and most of them have been included in the technical reports [19, 20].
2. Algorithms. In this section we discuss the Parallel ILU (PILU) algorithm
and its underlying theoretical foundations.
2.1. The PILU algorithm. Figure 2.1 describes the steps of the PILU algorithm
at a high level; the algorithm is suited for implementation on both message-passing
and shared-address space programming models.
The PILU algorithm consists of four major steps. In the first step, we create
parallelism by dividing the problem into subproblems by means of graph partitioning.
In the second step, we preserve the parallelism in the interior of the subproblems by
locally scheduling the computations in each subgraph. In the third step, we preserve
parallelism in the boundaries of the subproblems by globally ordering the subproblems
through coloring a suitably defined graph. In the final step, we compute the
preconditioner in parallel. Now we will describe the four steps in greater detail.
Step 1: Graph partitioning. In the first step of PILU, we partition the
adjacency graph G(A) of the coefficient matrix A into p subgraphs by removing a
small set of edges that connects the subgraphs to each other. Each subgraph will be
mapped to a distinct processor that will be responsible for the computations associated
with the subgraph.
An example of a model five-point grid partitioned into four subgraphs is shown
in
Figure
2.2. For clarity, the edges corresponding to the coe#cient matrix elements
(within each subgraph or between subgraphs) are not shown. The edges drawn correspond
to fill elements (elements that are zero in the coefficient matrix but are nonzero
in the incomplete factors) that join the different subgraphs.
To state the objective function of the graph partitioning problem, we need to introduce
some terminology. An edge is a separator edge if its endpoints belong to different
subgraphs. A vertex in a subgraph is an interior vertex if all of its neighbors belong to
that subgraph; it is a boundary vertex if it is adjacent to one or more vertices belonging
to another subgraph. By definition, an interior vertex in a subgraph is not adjacent to
a vertex (boundary or interior) in another subgraph. In Figure 2.2, the first 25 vertices
are interior vertices of the subgraph S 0 , and vertices numbered 26 through 36 are its
Input: A coefficient matrix, its adjacency graph, and the number of processors p.
Output: The incomplete factors of the coefficient matrix.
1. Partition the adjacency graph of the matrix into p subgraphs (sub-
domains), and map each subgraph to a processor. The objectives
of the partitioning are that the subgraphs should have roughly
equal work, and there should be few edges that join the different
subgraphs.
2. On each subgraph, locally order interior nodes first, and then order
boundary nodes.
3. Form the subdomain intersection graph corresponding to the par-
tition, and compute an approximate minimum vertex coloring for
it. Order subdomains according to color classes.
4. Compute the incomplete factors in parallel.
a. Factor interior rows of each subdomain.
b. Receive sparsity patterns and numerical values of the nonzeros
of the boundary rows of lower-numbered subdomains adjacent to
a subdomain (if any).
c. Factor boundary rows in each subdomain and send the sparsity
patterns and numerical values to higher-numbered neighboring
subdomains (if any).
Fig. 2.1. High level description of the PILU algorithm.
boundary vertices. The goal of the partitioning is to keep the amount of work associated
with the incomplete factorization of each subgraph roughly equal, while keeping
the communication costs needed to factor the boundary rows as small as possible.
There is a difficulty with modeling the communication costs associated with the
boundary rows. In order to describe this difficulty, we need to relate this cost more
precisely to the separators in the graph. Define the higher degree of a vertex v as
the number of vertices adjacent to v that are numbered higher than v in a given ordering. We assume that
upward-looking, row-oriented factorization is used. At each boundary between two
subgraphs, elements need to be communicated from the lower numbered subgraph to
the higher numbered subgraph. The number of these elements is proportional to the
sum of the higher degrees (in the filled graph G(F )) of the boundary vertices in the
lower numbered subgraph. But unfortunately, we do not know the fill edges at this
point since we have neither computed an ordering of G(A) nor computed a symbolic
factorization. We could approximate by considering higher degrees of the boundary
vertices in the graph G(A) instead of the filled graph G(F ), but even this requires us
to order the subgraphs in the partition.
The union of the boundary vertices on all the subgraphs forms a wide vertex separator
. This means that the shortest path from an interior vertex in any subgraph
to an interior vertex in another subgraph consists of at least three edges; such a path
has length at least three. The communication cost in the (forward and backward)
triangular solution steps is proportional to the sum of the sizes of the wide vertex
separators. None of the publicly available graph partitioning software has the minimization
of wide separators as its objective function, but it is possible to modify
existing software to optimize this objective.
Fig. 2.2. An example that shows the partitioning, mapping, and vertex ordering used in the
PILU algorithm. The graph on the top is a regular 12 x 12 grid with a five-point stencil partitioned
into four subdomains and then mapped on four processors. The subdomains are ordered by a coloring
algorithm to reduce dependency path lengths. Only the level one and two fill edges that join the
different subdomains are shown; all other edges are omitted for clarity. The figure on the bottom
right shows the subdomain intersection graph when the subdomain graph constraint is enforced. (This
prohibits fill between the boundary nodes of the subdomains S 1 and S 2 , indicated by the broken edges
in the top graph.) The graph on the bottom left shows the subdomain intersection graph when the
subdomain graph constraint is not enforced.
The goal of the partitioning step is to keep the amount of work associated with
each subgraph roughly equal (for load balance) while making the communication costs
due to the boundaries as small as possible. As the previous two paragraphs show,
modeling the communication costs accurately in terms of edge and vertex separators
in the initial graph G(A) is difficult, but we could adopt the minimization of the
wide separator sizes as a reasonable goal. This problem is NP-complete, but there
exist efficient heuristic algorithms for partitioning the classes of graphs that occur in
practical situations. (Among these graph classes are 2D-finite element meshes and
3D-meshes with good aspect ratios.)
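For the regular model grids considered later, a simple geometric block partition already meets these goals. The Python sketch below (illustrative only; the function name and return convention are assumptions) assigns the nodes of an nx-by-ny five-point grid to a q x q array of square subdomains; an unstructured problem would instead use a general-purpose graph partitioner.

import numpy as np

def block_partition_2d(nx, ny, q):
    # Assign each node of an nx-by-ny grid to one of q*q square subdomains.
    # Returns owner[i, j] in {0, ..., q*q - 1}.
    x_strip = np.minimum(np.arange(nx) * q // nx, q - 1)
    y_strip = np.minimum(np.arange(ny) * q // ny, q - 1)
    return y_strip[:, None] * q + x_strip[None, :]

# Example: a 12 x 12 grid split into 2 x 2 subdomains, as in Figure 2.2;
# each of the four subdomains receives 36 nodes.
owners = block_partition_2d(12, 12, 2)
print(np.unique(owners, return_counts=True))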
Step 2: Local reordering. In the second step, in each subgraph we order the
interior vertices before the boundary vertices. This ordering ensures that during the
incomplete factorization, an interior vertex in one subgraph cannot be joined by a
fill edge to a vertex in another subgraph, as will be shown later. Fill edges between
two subgraphs can join only their boundary vertices together. Thus interior vertices
corresponding to the initial graph G(A) remain interior vertices in the graph of the
factor G(F ). The consequences of this are that the rows corresponding to the interior
vertices in each subdomain of the initial problem G(A) can be factored concurrently,
and that communication is required only for factoring rows corresponding to the
boundary rows. The reader can verify that in each subgraph in Figure 2.2 the interior
nodes have been ordered before the boundary nodes.
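A minimal sketch of this local ordering step is given below. It assumes a node-to-subdomain map `owner` and a neighbor function, both hypothetical names; for brevity the subdomains are taken in ascending index order, whereas Step 3 would supply the color-class order instead.

import numpy as np

def interior_first_ordering(owner, neighbors):
    # Number the nodes of each subdomain contiguously, interior nodes first.
    # owner[v] gives the subdomain of node v; neighbors(v) yields its graph
    # neighbors. Returns a permutation perm with perm[k] = v, meaning node v
    # is eliminated in position k. Illustrative sketch only.
    n = len(owner)
    is_boundary = np.zeros(n, dtype=bool)
    for v in range(n):
        if any(owner[w] != owner[v] for w in neighbors(v)):
            is_boundary[v] = True
    perm = []
    for s in sorted(set(owner)):   # one contiguous block per subdomain
        nodes = [v for v in range(n) if owner[v] == s]
        perm.extend(v for v in nodes if not is_boundary[v])   # interior first
        perm.extend(v for v in nodes if is_boundary[v])       # then boundary
    return np.array(perm)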
The observation concerning fill edges in the preceding paragraph results from
an application of the following incomplete fill path theorem. Given the adjacency
graph G(A) of a coefficient matrix A, the theorem provides a static characterization
of where fill entries arise during an incomplete factorization A = LU + E, where
L is the lower triangular incomplete factor, U is the upper triangular incomplete
factor, and E is the remainder matrix. The characterization is static in that fill is
completely described by the structure of the graph G(A); no information from the
factor is required.
We need a definition before we can state the theorem. A fill path is a path joining
two vertices i and j, all of whose interior vertices are numbered lower than the end
vertices i and j. 1
Recall also the definition of the levels assigned to nonzeros in an incomplete
factorization. To discuss the sparsity pattern of the incomplete factors, we consider
the filled matrix F = L + U - I. The sparsity pattern of F is initialized to that of
A. All nonzero entries in F corresponding to nonzeros in A have level zero, and zero
entries have level infinity. New entries that arise during factorization are assigned a
level based on the levels of the causative entries, according to the rule
level(f ij) = min{ level(f ij), level(f im) + level(f mj) + 1 },
where m ranges over the eliminated vertices that precede i and j.
The incomplete fill path theorem describes an intimate relationship between fill
entries in ILU(k) factors and path lengths in graphs.
Theorem 2.1. Let F = L + U - I be the filled matrix corresponding to an
incomplete factorization of A, and let f ij be a nonzero entry in F . Then f ij is a level
k entry if and only if there exists a shortest fill path of length k + 1 that joins i and j
in G(A).
A proof and a discussion of this theorem are included in the appendix.
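The level rule and Theorem 2.1 can be exercised with a few lines of code. The following quadratic-time Python sketch computes ILU(k) fill levels for a small structurally symmetric graph by eliminating vertices in order and applying the level-update rule; the function name and data layout are illustrative, not the implementation used in our experiments.

def ilu_k_levels(adj, k):
    # adj[i] is an iterable of neighbors of vertex i in G(A); vertices are
    # numbered 0..n-1 in elimination order. Returns a dict mapping the pair
    # {i, j} to its level, keeping only entries with level <= k.
    n = len(adj)
    level = {}
    for i in range(n):
        for j in adj[i]:
            level[frozenset((i, j))] = 0          # original entries: level 0
    for m in range(n):                            # eliminate vertex m
        nbrs = [j for j in range(m + 1, n) if frozenset((m, j)) in level]
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                i, j = nbrs[a], nbrs[b]
                new = level[frozenset((i, m))] + level[frozenset((m, j))] + 1
                key = frozenset((i, j))
                if new <= k and new < level.get(key, k + 1):
                    level[key] = new
    return level

# Example: a 4-cycle 0-1-2-3-0; eliminating vertex 0 creates one level-1
# fill edge joining vertices 1 and 3, matching the fill path 1, 0, 3.
print(ilu_k_levels([{1, 3}, {0, 2}, {1, 3}, {0, 2}], k=1))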
Now consider the adjacency graph G(A) and a partition
of it into subgraphs (subdomains). Any path joining two interior nodes in distinct
subdomains must include at least two boundary nodes, one from each of the subgraphs;
since each boundary node is numbered higher than (at least one of) the path's end
vertices (since these are interior nodes in the subgraph), this path cannot be a fill
path. If two interior nodes belonging to separate subgraphs were connected by a fill
path and the corresponding fill entry were permitted in F , the interior nodes would be
transformed into boundary nodes in G(F ). This is undesirable for parallelism, since
then there would be fewer interior nodes to be eliminated concurrently.
The local ordering step preserves interior and boundary nodes during the factorization
and ensures that a subdomain's interior rows can be factored independently
of row updates from any other subdomain. Therefore, when subdomains have relatively
large interior/boundary node ratios, and contain approximately equal amounts
of computational work, we expect PILU to exhibit a high degree of parallelism.
1 The reader has doubtless noted that interior is used in a different sense here than previously.
We trust it will be obvious from the context where interior is used to refer to nodes in paths and
where it is used to refer to nodes in subgraphs.
Step 3: Global ordering. The global ordering phase is intended to preserve
parallelism while factoring the rows corresponding to the boundary vertices. In order
to explain the loss of concurrency that could occur during this phase of the algo-
rithm, we need the concept of a subdomain intersection graph, which we shall call a
subdomain graph for brevity.
The subdomain graph S(G, Phi) = (V s , E s ) is computed from a graph G and its
partition Phi into subgraphs. The vertex set V s contains a vertex corresponding
to every subgraph in the partition; the edge set E s contains the edge (S i , S j )
if there is an edge in G with one endpoint in S i and the other in S j . We can compute
a subdomain graph S(A) corresponding to the initial graph G(A) and its partition.
(This graph should be denoted S(G(A), Phi), but we shall write S(A) for simplicity.)
We could also compute a subdomain graph S(F ) corresponding to the graph of the
factor G(F ). The subdomain graph S(A) corresponding to the partition of the initial
graph G(A) (the top graph) in Figure 2.2 is shown in the graph at the bottom right
in that figure.
We impose a constraint on the fill, the subdomain graph constraint. The sub-domain
graph corresponding to G(F ) is restricted to be identical to the subdomain
graph corresponding to G(A). This prohibits some fill in the filled graph G(F ): if two
subdomains are not joined by an edge in the original graph G(A), any fill edge that
joins those subdomains is not permitted in the graph of the incomplete factor G(F ).
The description of the PILU algorithm in Figure 2.1 assumes that the subdomain
graph constraint is satisfied. This constraint makes it possible to obtain scalability in
the parallel ILU algorithm. Later, we discuss how the algorithm should be modified
if this constraint is relaxed.
Each subdomain's nodes (in G(A)) are ordered contiguously. Consequently, saying
"subdomain r is ordered before subdomain s" is equivalent to saying "all nodes
in subdomain r are ordered, and then all nodes in subdomain s are ordered." This
permits S(A) to be considered as a directed graph, with edges oriented from lower to
higher numbered vertices.
Edges in S(F ) indicate data dependencies in factoring the boundary rows of the
subdomains. If an edge in S(F ) joins r and s and subdomain r is ordered before
subdomain s, then updates from the boundary rows of r have to be applied to the
boundary rows of s before the factorization of the latter rows can be completed. It
follows that ordering S(F ) so as to reduce directed path lengths reduces serial bottlenecks
in factoring the boundary rows. If we impose the subdomain graph constraint,
these observations apply to the subdomain graph S(A) as well since then S(A) is
identical with S(F ).
We reduce directed path lengths in S(A) by coloring the vertices of the subdomain
graph with few colors using a heuristic algorithm for graph coloring, and then by
numbering the subdomains by color classes. The boundary rows of all subdomains
corresponding to the first color can be factored concurrently without updates from any
other subdomains. These subdomains update the boundary rows of higher numbered
subdomains adjacent to them. After the updates, the subdomains that correspond
to the second color can factor their boundary rows. This process continues by color
classes until all subdomains have factored their boundary rows. The number of steps
it takes to factor the boundary rows is equal to the number of colors it takes to color
the subdomain graph.
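A few lines suffice to illustrate this step: greedily color the subdomain graph and then list the subdomains color class by color class. The function name and edge-set representation below are illustrative; any coloring heuristic that uses few colors serves the same purpose.

def order_subdomains_by_color(subdomain_edges, p):
    # subdomain_edges is a set of pairs (r, s) with an edge whenever the two
    # subdomains share a boundary; p is the number of subdomains. Returns
    # (colors, order), where order lists subdomains in the sequence in which
    # their boundary rows are factored.
    nbrs = {r: set() for r in range(p)}
    for r, s in subdomain_edges:
        nbrs[r].add(s)
        nbrs[s].add(r)
    colors = {}
    for r in range(p):                 # greedy: smallest color unused by neighbors
        used = {colors[s] for s in nbrs[r] if s in colors}
        c = 0
        while c in used:
            c += 1
        colors[r] = c
    order = sorted(range(p), key=lambda r: colors[r])
    return colors, order

# Example: the 2 x 2 subdomain grid of Figure 2.2 (a 4-cycle) is 2-colorable,
# so the boundary rows can be factored in two parallel phases.
print(order_subdomains_by_color({(0, 1), (0, 2), (1, 3), (2, 3)}, p=4))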
In
Figure
2.2, let p i denote the processor that computes the subgraph S i . Then p 0
computes the boundary rows of S 0 and sends them to processors p 1 and p 2 . Similarly,
3 computes the boundary rows of subgraph S 3 and sends them to p 1 and p 2 . The
latter processors first apply these updates and then compute their boundary rows.
How much parallelism can be gained through subdomain graph reordering? We
can gain some intuition through analysis of simplified model problems, although we
cannot answer this question a priori for general problems and all possible partitions.
Consider a matrix arising from a second order PDE that has been discretized on a
regularly structured 2D grid using a standard five-point stencil. Assume that the grid
is naturally ordered and that it has been partitioned into square subgrids and mapped
into a square grid of p processors. In the worst case, the associated subdomain graph,
which itself has the appearance of a regular 2D grid, can have a dependency path
of length 2( # p - 1). Similarly, a regularly structured 3D grid discretized with a
seven-point stencil that is naturally ordered and then mapped on a cube containing
p processors can have a dependency path length of 3( 3
1). However, regular 2D
grids with the five-point stencil and regular 3D grids with the seven-point stencil are
bipartite graphs and can be colored with two colors. If all subdomains of the first
color class are numbered first, and then all subdomains of the second color class are
numbered, the longest dependency path in S will be reduced to one. This discussion
shows that coloring the subdomain graph is an important step in obtaining a scalable
parallel algorithm.
Step 4: Preconditioner computation. Now that the subdomains and the
nodes in each subdomain have been ordered, the preconditioner can be computed.
We employ an upward-looking, row oriented factorization algorithm. The interior of
each subdomain can be computed concurrently by the processors, and the boundary
nodes can be computed in increasing order of the color classes. Either a level-based
ILU(k) or a numerical threshold based ILUT(tau, p) algorithm may be employed on
each subdomain. Different incomplete factorization algorithms could be employed in
different subdomains when appropriate, as in multiphysics problems. Different fill
levels could be employed for the interior nodes in a subdomain and for the boundary
nodes to reduce communication and synchronization costs.
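The sketch below shows the upward-looking computation of a single row on a fixed, level-based sparsity pattern. It uses a dense work row and hypothetical names for clarity; it is not the data structure used in the actual implementation.

import numpy as np

def factor_row_upward_looking(i, A_row, U_rows, pattern):
    # A_row is a dense copy of row i of A, U_rows[m] is a dict holding the
    # already computed upper-triangular part of row m (m < i, diagonal
    # included), and pattern[i] is the set of columns allowed in row i.
    w = np.array(A_row, dtype=float)
    w[[j for j in range(len(w)) if j not in pattern[i]]] = 0.0
    for m in sorted(j for j in pattern[i] if j < i):   # eliminate with earlier rows
        w[m] = w[m] / U_rows[m][m]                     # multiplier L[i, m]
        for j, u_mj in U_rows[m].items():
            if j > m and j in pattern[i]:
                w[j] -= w[m] * u_mj                    # update within the pattern
    L_i = {m: w[m] for m in pattern[i] if m < i}
    U_i = {j: w[j] for j in pattern[i] if j >= i}
    return L_i, U_i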
2.2. Relaxing the subdomain graph constraint. Now we consider how the
subdomain graph constraint might be relaxed. Given a graph G(A) and a partition
of it into subgraphs, we color the subdomain graph S(A) and order its subdomains as
before. Then we compute the graph G(F ) of an incomplete factor and its subdomain
graph S(F ). To do this, we need to discover the dependencies in S(F ), but initially
we have only the dependencies in S(A) available. This has to be done in several
rounds, because fill edges could create additional dependencies between the boundary
rows of subdomains, which in turn might lead to further dependences. The number
of rounds needed is the length of a longest dependency path in the subdomain graph
G(F ), and this could be proportional to sqrt(p). This discussion applies when an ILU(k) algorithm
is employed, with symbolic factorization preceding numerical factorization. If ILUT
were to be employed, then symbolic factorization and numerical factorization must
be interleaved, as would be done in a sequential algorithm.
We can then color the vertices of S(F ) to compute a schedule for factoring the
boundary rows of the subdomains. For achieving concurrency in this step the subdomain
graph S(F ) should have a small chromatic number (independent of the number
of vertices in G(A)). Note that the description of the PILU algorithm in Figure 2.1
needs to be modified to reflect this discussion when the subdomain graph constraint
is relaxed.
The graph G(F ) in Figure 2.2 indicates the fill edges that join S 1 to S 2 as broken
lines. The corresponding subdomain intersection graph S(F ) is shown on the lower
left. The edge between S 1 and S 2 necessitates three colors to color S(F ): the subdomains
S 0 and S 3 form one color class; S 1 by itself constitutes the second color class;
and S 2 by itself makes up the third color class. Thus three steps are needed for the
computation of the boundary rows of the preconditioner when the subdomain graph
constraint is relaxed. Note that the processor responsible for the subdomain S 2 can
begin computing its boundary rows when it receives an update from either S 0 or S 3 ,
but that it cannot complete its computation until it has received the updates from both.
Theorem 2.1 has an intuitively simple geometric interpretation. Given an initial
node i in G(A), construct a topological "sphere" containing all nodes that are at a
distance of at most k + 1 edges from i. Then a fill entry f ij is admissible in an
ILU(k) factorization only if the node j is within the sphere. Note that all such nodes j do not cause
fill edges since there needs to be a fill path joining i and j. By applying Theorem 2.1,
we can gain an intuitive understanding of the fill entries that may be discarded on
account of the subdomain graph constraint. Referring again to Figure 2.2, we see that
prohibited edges arise when two nonadjacent subdomains in G(A) have nodes that
are joined by a fill path of length at most k + 1. Note that no level zero edge is discarded by
the constraint.
2.3. Existence of PILU preconditioners. The existence of preconditioners
computed from the PILU algorithm can be proven for some classes of problems.
Meijerink and van der Vorst [28] proved that if A is an M-matrix, then ILU
factors exist for any predetermined sparsity pattern, and Manteuffel [27] extended
this result to H-matrices with positive diagonal elements. These results immediately
show that PILU preconditioners with sparsity patterns based on level values exist for
these classes of matrices. This is true even when different level values are used for the
various subdomains and boundaries.
Incomplete Cholesky (IC) preconditioners for symmetric problems could be computed
with our parallel algorithmic framework using preconditioners proposed by
Jones and Plassmann [21] and by Lin and Moré [23] on each subdomain and on the
boundaries. The sparsity patterns of these preconditioners are determined by the numerical
values in the matrix and by memory constraints. Lin and Moré have proved
that these preconditioners exist for M- and H-matrices. Parallel IC preconditioners
also can be shown to exist for M- and H-matrices. If the subdomain graph constraint
is not enforced, then the preconditioner computed in parallel corresponds to a preconditioner
computed by the serial algorithm from a reordered matrix. If the constraint
is enforced, some specified fill elements are dropped from the Schur complement; it
can be shown that the resulting Schur complement matrix is componentwise larger
than the former and hence still an M-matrix.
2.4. Relation to earlier work. We now briefly discuss earlier parallel ILU
algorithms that are related to the PILU algorithm proposed here. Earlier attempts at
parallel algorithms for preconditioning (including approaches other than incomplete
factorization) are surveyed in [6, 12, 34]; orderings suitable for parallel incomplete
factorizations have been studied inter alia in [4, 11, 13]. The surveys also describe
the alternative approximate inverse approach to preconditioning.
Saad [33, section 12.6.1] discusses a distributed ILU(0) algorithm that has the features
of graph partitioning, elimination of interior nodes in a subdomain before boundary
nodes, and coloring the subdomains to process the boundary nodes in parallel.
Only level 0 preconditioners are discussed there, so that fill between subdomains, or
within each subdomain, do not need to be considered. No implementations or results
were reported, although Saad has informed us recently of a technical report [24] that
includes an implementation and results. Our work, done independently, shows how fill
levels higher than zero can be accommodated within this algorithmic framework. We
also analyze our algorithm for scalability and provide computational results on the
performance of PILU preconditioners. Our results show that fill levels higher than zero
are indeed necessary to obtain parallel codes with scalability and good performance.
Karypis and Kumar [22] have described a parallel ILUT implementation based
on graph partitioning. Their algorithm does not include a symbolic factorization, and
they discover the sparsity patterns and the values of the boundary rows after the
numerical computation of the interior rows in each subdomain. The factorization of
the boundary rows is done iteratively, as in the discussion given above, where we show
how the subdomain graph constraint might be relaxed. The partially filled graph of
the boundary rows after the interior rows are eliminated is formed, and this graph
is colored to compute a schedule for computing the boundary rows. Since fill edges
in the boundary rows are discovered as these rows are being factored, this approach
could lead to long dependency paths that grow in proportion to p. The number of
boundary rows is substantial even for meshes with good aspect ratios. If the
cost of factoring and communicating a boundary row is proportional to the number
of rows, then this phase of their algorithm could become a serial bottleneck,
severely limiting the
scalability of the algorithm (cf. the discussion in section 3).
Recently Magolu monga Made and van der Vorst [25, 26] have reported variations
of a parallel algorithm for computing ILU preconditioners. They partition the mesh,
linearly order the subdomains, and then permit fill in the interior and the boundaries
of the subdomains. The boundary nodes are classified with respect to the number
of subdomains they are adjacent to, and are eliminated in increasing order of this
number. Since the subdomains are linearly ordered, a "burn from both ends" ordering
is employed to eliminate the subdomains. Our approaches are similar, except that
we additionally order the subdomains by means of a coloring to reduce dependency
path lengths to obtain a scalable algorithm. They have provided an analysis of the
condition number of the preconditioned matrices for a class of 2D second order elliptic
boundary value problems. They permit high levels of fill (four or greater) as we do,
and show that the increased fill permitted across the boundaries enables the condition
number of the preconditioned matrix to be insensitive to the number of subdomains
(except when the latter gets too great). We have worked independently of each other.
A di#erent approach, based on partitioning the mesh into rectangular strips and
then computing the preconditioner in parallel steps in which a "wavefront" of the
mesh is computed at each step by the processors, was proposed by Bastian and Horton
[3] and was implemented for shared memory multiprocessors recently by Vuik,
van Nooyen, and Wesseling [36]. This approach has less parallelism than the one
considered here.
3. Performance analysis. In this section we present simplified theoretical analyses
of algorithmic behavior for matrices arising from PDEs discretized on 2D grids
with five-point stencils and 3D grids with seven-point stencils. Since our arguments
are structural in nature, we assume ILU(k) is the factorization method used. After a
word about nomenclature, we begin with the 2D case.
The word grid refers to the grid (mesh) of unknowns for regular 2D and 3D grids
with five- and seven-point stencils, respectively; this is identical to the adjacency
graph G(A) of the coefficient matrix of these problems. We use the terms eliminating
Fig. 3.1. Counting lower triangular fill edges in a naturally ordered grid. We count the number
of edges incident on vertex 9. Considering the graphs from top to bottom, we find that there are two
level 0 edges; there is one level 1 edge, due to fill path 9, 3, 4; there is one level 2 edge due to fill
path 9, 3, 4, 5; there are two level 3 edges, due to fill paths 9, 3, 4, 5, 6 and 9, 3, 2, 1, 7. We can
generalize that two additional fill edges are created for every level greater than three, except near
the boundaries. We conclude that asymptotically there are 2k lower triangular edges incident on a
vertex in a level k factorization. Since the mesh corresponds to a structurally symmetric problem,
there are 2k upper triangular edges incident on a vertex as well.
a node and factoring a row synonymously.
We assume the grid has been block-partitioned, with each subdomain consisting
of a square subgrid of dimension c x c. We also assume the subdomain grid has
dimensions sqrt(p) x sqrt(p), so there are p processors in total. There are thus
N = pc^2 nodes in the grid, and subdomains have at most 4c boundary nodes.
If subdomain interior nodes are locally numbered in natural order and k << c, each
row in the factor F asymptotically has 2k (strict) upper triangular and 2k (strict)
lower triangular nonzero entries. The justification for this statement arises from a consideration
of the incomplete fill path theorem; the intuition is illustrated in Figure 3.1.
Assuming that the classical ILU(k) algorithm is used for symbolic factorization,
both symbolic and numeric factorization of row j entail 4k^2 arithmetic operations.
This is because for each lower triangular entry f ji in matrix row j, factorization
requires an arithmetic operation with each upper triangular entry in row i.
A red-black ordering of the subdomain graph gives an optimal bipartite division.
If red subdomains are numbered before black subdomains, our algorithm simplifies to
the following three stages.
1. Red processors eliminate all nodes; black processors eliminate interior nodes.
2. Red processors send boundary-row structure and values to black processors.
3. Black processors eliminate boundary nodes.
If these stages are nonoverlapping, the cost of the first stage is bounded by the cost
of eliminating all nodes in a subdomain. This cost is 4k^2 c^2.
The cost for the second stage is the cost of sending structural and numerical values
from the upper-triangular portions of the boundary rows to neighboring processors.
Since k << c, the incomplete fill path theorem can be used to show that, asymptotically,
a processor only needs to forward values from c rows to each neighbor. We assume a
standard, noncontentious communication model wherein alpha and beta represent message
startup and per-word-transfer times, respectively. We measure these times in non-dimensional
units of flops by dividing them by the time it takes to execute one flop.
The time for an arithmetic operation is thus normalized to unity. Then the cost for
the second step is approximately 4(alpha + 2kc beta).
Since the cost of factoring a boundary row can be shown to be asymptotically
identical to that for factoring an interior row, the cost for eliminating the 4c boundary
nodes is (4k^2)(4c) = 16k^2 c. Speedup can then be expressed as
4k^2 N / (4k^2 N/p + 4(alpha + 2kc beta) + 16k^2 c).
. Speedup can then be expressed as
The numerator represents the cost for sequential execution, and the three terms in the
denominator represent the costs for the three stages (arithmetic for interior nodes,
communication costs, and arithmetic for the boundary nodes) of the parallel algorithm
Three implications from this equation are in order. First, for a fixed problem
size and number of processors, the parallel computational cost (the first and third
terms in the denominator) is proportional to k^2, while the communication cost (the
second term in the denominator) is proportional to k. This explains the increase in
efficiency with level that we have observed. Second, if the ratio N/p is large enough,
the first term in the denominator will become preeminent, and efficiency will approach
100%. Third, if we wish to increase the number of processors p by some factor while
maintaining a constant efficiency, we need only increase the size of the problem N
by the same factor. This shows that our algorithm is scalable. This observation is
not true for a direct factorization of the coe#cient matrix, where the dependencies
created by the additional fill cause loss in concurrency.
For the 3D case we assume partitioning into cubic subgrids of dimension c x c x c
and a subdomain grid of dimension p^{1/3} x p^{1/3} x p^{1/3}, which gives N = pc^3.
Subdomains have at most 6c^2 boundary nodes. A development similar to that above
shows that, asymptotically, matrix rows in the factor F have 2k^2 (strict) upper and
lower triangular entries, so the cost for factoring a row is 4k^4. Speedup for this case
can then be expressed as
4k^4 N / (4k^4 N/p + 6(alpha + 2k^2 c^2 beta) + 24k^4 c^2).
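The two expressions above can be evaluated directly; the short Python script below does so. Note that the communication terms follow the reconstruction given here, and the values of alpha and beta in the example are illustrative, not measured constants.

def predicted_speedup_2d(N, p, k, alpha, beta):
    # Evaluate the 2D speedup model (costs in flop-equivalent units).
    c = (N / p) ** 0.5                        # subdomain dimension c x c
    interior = 4 * k**2 * N / p               # stage 1: factor all local rows
    comm = 4 * (alpha + 2 * k * c * beta)     # stage 2: boundary rows to neighbors
    boundary = 16 * k**2 * c                  # stage 3: factor 4c boundary rows
    return 4 * k**2 * N / (interior + comm + boundary)

def predicted_speedup_3d(N, p, k, alpha, beta):
    # Evaluate the corresponding 3D model (seven-point stencil).
    c = (N / p) ** (1.0 / 3.0)
    interior = 4 * k**4 * N / p
    comm = 6 * (alpha + 2 * k**2 * c**2 * beta)
    boundary = 24 * k**4 * c**2
    return 4 * k**4 * N / (interior + comm + boundary)

# Example: 91,125 unknowns per processor on 216 processors, level 2,
# with illustrative communication constants alpha = 1000 and beta = 10 flops.
print(predicted_speedup_3d(91125 * 216, 216, k=2, alpha=1000.0, beta=10.0))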
4. Results. Results in this section are based on the following model problems.
Problem 1. Poisson's equation in two or three dimensions: -Delta u = g.
Problem 2. Convection-diffusion equation with convection in the xy plane: the
diffusion coefficient is epsilon, the convection coefficients in the x and y directions are
e^{xy} and e^{-xy}, respectively, and the right-hand side is g.
Homogeneous boundary conditions were used for both problems. Derivative terms
were discretized on the unit square or cube, using 3-point central differencing on
regularly spaced n_x x n_y x n_z grids (with n_z = 1 in 2D). The values for epsilon in Problem
2 were set to 1/500 and 1/1000. The problem becomes increasingly unsymmetric,
and more difficult to solve accurately, as epsilon decreases. The right-hand sides of the
resulting systems were artificially generated so that the exact solution is the
all-ones vector.
ILU(k) preconditioning is amenable to performance analysis since the nonzero
structures of ILU(k) preconditioners are identical for any PDE that has been discretized
on a 2D or 3D grid with a given stencil. The structure depends on the grid
and the stencil only and is not affected by numerical values if pivoting is not needed
for numerical stability. Identical structures imply identical symbolic factorization
costs, as well as identical flop counts during the numerical factorization and solve
phases. In parallel contexts, communication patterns and costs are also identical.
While preconditioner effectiveness (the number of iterations until the stopping criterion
is reached) differs with the numerics of the particular problem being modeled,
the parallelism available in the preconditioner does not.
The structure of ILUT preconditioners, on the other hand, is a function of the
grid, the stencil, and the numerics. Changing the problem, particularly for non-
diagonally dominant cases, can alter the preconditioner structure, even when the grid
and stencil remain the same.
We report our performance evaluation for ILU(k) preconditioners, although the
parallel algorithmic framework proposed here could just as easily work with ILUT(tau,
p). We have compared the performance of ILU(k) with ILUT in an earlier report [18].
We report there that for Problem 2 the ILUT preconditioner we tested incurred more
fill than ILU(5) on a 2D domain for grid sizes up to 400 x 400; for 3D domains and
grid sizes up to 64 x 64 x 64, the same ILUT preconditioner incurred fill between
ILU(2) and ILU(3).
In addition to demonstrating that our algorithm can provide high degrees of
parallelism, we address several other issues. We study the influence of the subdomain
graph constraint on the fill permitted in the preconditioner and on the convergence of
preconditioned Krylov space solvers. We also report convergence results as a function
of the number of nonzeros in the preconditioner.
4.1. Parallel performance. We now report timing and scalability results for
preconditioner factorization and application on three parallel platforms:
. an SGI Origin2000 at NASA Ames Research Center (AMES);
. the Coral PC Beowulf cluster at ICASE, NASA Langley Research Center;
. a Sun HPC 10000 Starfire server at Old Dominion University (ODU).
Table 4.1
Time (sec.) required for incomplete (symbolic and numeric) factorization for a 3D scaled
problem; 91,125 unknowns per processor, seven-point stencil, ILU(2) factorization on interior nodes,
and ILU(1) factorization on boundary nodes. Dashes (-) for Beowulf and HPC 10000 indicate that
the machines have insufficient cpus to perform the runs.
Procs | Origin2000 (AMES) | Beowulf (ICASE) | HPC 10000 (ODU)
8 | 2.44 | 3.11 | 2.43
Both problems were solved using Krylov subspace methods as implemented in the
PETSc [2] software library. Problem 1 was solved using the conjugate gradient
method, and Problem 2 was solved using Bi-CGSTAB [35]. PETSc's default convergence
criterion was used, which is five orders of magnitude (10^5) reduction in the
residual of the preconditioned system. We used our own codes for problem generation,
partitioning, ordering, and symbolic factorization.
Table 4.1 shows incomplete factorization timings for a 3D memory-scaled problem
with approximately 91,125 unknowns per processor. As the number of processors
increases, so does the size of the problem. The coefficient matrix of the problem
factored on 216 processors has about 19.7 million rows. ILU(2) was employed for the
interior nodes, and ILU(1) was employed for the boundary nodes. Reading down any
of the columns shows that performance is highly scalable, e.g., for the SGI Origin2000,
factorization for 216 processors and 19.7 million unknowns required only 62% longer
than the serial case. Scanning horizontally indicates that performance was similar
across all platforms, e.g., execution time differed by less than a factor of two between
the fastest (Origin2000) and slowest (Beowulf) platforms.
Table
4.2 shows similar data and trends for the triangular solves for the scaled
problem. Scalability for the solves was not quite as good as for factorization; e.g., the
solve with 216 processors took about 2.5 times longer than the serial case. This is
expected due to the lower computation cost relative to communication and synchronization
costs in triangular solution.
We observed that the timings for identical repeated runs on the HPC 10000
and SGI typically varied by 50% or more, while repeated runs on the Beowulf were
remarkably consistent.
Table
4.3 shows speedup for a constant-sized problem of 1.7 million unknowns.
There is a clear correlation between performance and subdomain interior/boundary
node ratios; this ratio needs to be reasonably large for good performance.
The performances reported in these tables are applicable to any PDE that has
been discretized with a seven-point central difference stencil since the sparsity pattern
of the symbolic factor depends on the grid and the stencil only.
4.2. Convergence studies. Our approach for designing parallel ILU algorithms
reorders the coefficient matrices whose incomplete factorization is being computed.
This reordering could have a significant influence on the effectiveness of the ILU
preconditioners. Accordingly, in this section we report the number of iterations of a
preconditioned Krylov space solver needed to reduce the residual by a factor of 10^5.
We compare three different algorithms.
Table 4.2
Time (sec.) to compute triangular solves for the 3D scaled problem; 91,125 unknowns per processor,
seven-point stencil, ILU(2) factorization on interior nodes, ILU(1) factorization on boundary nodes.
Dashes (-) for Beowulf and HPC 10000 indicate that the machines have insufficient cpus to perform
the runs. Columns: Procs, Origin2000, Beowulf, HPC 10000.
Table 4.3
Speedup for a 3D constant-size problem; the grid was 120 x 120 x 120 for a total of approximately
1.7 million unknowns; data is for ILU(0) factorization performed on the SGI Origin2000; "I/B
ratio" is the ratio of interior to boundary nodes in each subdomain.
Procs | Unknowns/Processor | I/B ratio | Time (sec.) | Efficiency (%)
8 | 216,000 | 9.3 | 2.000 | 100
. Constrained PILU(k) is the parallel ILU(k) algorithm with the subdomain
graph constraint enforced.
. In unconstrained PILU(k), the subdomain graph constraint is dropped, and
all fill edges up to level k between the boundary nodes of different subdomains
are permitted, even when such edges join two nonadjacent subdomains of the
initial subdomain graph S(A).
. In block Jacobi ILU(k) (BJILU(k)), all fill edges joining two different subdomains
are excluded.
Intuitively, one expects, especially for diagonally dominant matrices, that larger
amounts of fill in preconditioners will reduce the number of iterations required for
convergence.
4.2.1. Fill count comparisons. For a given problem, the number of permitted
fill edges is a function of three components: the factorization level, k; the subdomain
partitioning; and the discretization stencil. While the numerical values of the coefficients
of a particular PDE influence convergence, they do not affect fill counts. Therefore,
our first set of results consists of fill count comparisons for problems discretized on a
64 x 64 x 64 grid using a standard, seven-point stencil.
Table 4.4 shows fill count comparisons between unconstrained PILU(k), constrained
PILU(k), and block Jacobi ILU(k) for various partitionings and factorization
levels. The data shows that more fill is discarded as the factorization level increases,
and as subdomain size (the number of nodes in each subdomain) decreases. These
two effects hold for both constrained PILU(k) and block Jacobi ILU(k) but are much
more pronounced for the latter. For example, less than 5% of fill is discarded from unconstrained
factors when subdomains contain at least 512 nodes (so that the
Table 4.4
Fill comparisons for the 64 x 64 x 64 grid. U denotes unconstrained, C denotes constrained,
and B denotes block Jacobi ILU(k) preconditioners. The columns headed "nzF/nzA" show the ratio
of the number of nonzeros in the preconditioner to the number of nonzeros in the original problem
and are indicative of storage requirements. The columns headed "constraint effects" present another
view of the same data: here, the percentage of nonzeros in the constrained PILU(k) and block Jacobi
ILU(k) factors are shown relative to that for the unconstrained PILU(k). These columns show the
amount of fill dropped due to the subdomain graph constraint. (Columns: Nodes per subdom.,
Subdom. count, Level, nzF/nzA for U, C, B, and constraint effects (%) for C and B.)
subgraphs on each processor are not too small), but up to 42% is discarded from block
Jacobi factors. Thus, one might tentatively speculate that, for a given subdomain size
and level, PILU(k) will provide more e#ective preconditioning than BJILU(k). We
have observed similar behavior for 2D problems also. For both 2D and 3D problems,
when there is a single subdomain the factors returned by the three algorithms are
identical. For the single subdomain case, the ordering we have used corresponds to
the natural ordering for these model problems.
An important observation to make in Table 4.4 is how the sizes (number of nonze-
ros) of the preconditioners depend on levels of fill. For the 3D problems considered
here (cube with 64 points on each side, seven-point stencil), a level one preconditioner
typically requires twice as much storage as the coefficient matrix A; when the level is
two, this ratio is about three; when the level is three, it is about six; and when the
level is four, it is about ten. For 2D problems (square grid with 256 points on a side,
Table 4.5
Iteration comparisons for the 64 x 64 x 64 grid. U denotes unconstrained, C denotes constrained,
and B denotes block Jacobi ILU(k) preconditioners. The starred entries (*) indicate that, since
there is a single subdomain, the factor is structurally and numerically identical to the unconstrained
PILU(k). Dashed entries (-) indicate the solutions either diverged or failed to converge after 200
iterations. For Problem 2, when epsilon = 1/500 the level zero preconditioners did not reduce the relative
error in the solution by a factor of 10^5 at termination; when epsilon = 1/1000 the level one preconditioners
did not do so either. (Columns: Nodes per subdom., Subdom. count, Level, and iteration counts
U, C, B for Problem 1 and for Problem 2 with epsilon = 1/500 and epsilon = 1/1000.)
five-point stencil), the growth of fill with level is slower; the ratios are about 1.4 for
level one, 1.8 for level two, 2.6 for level three, 3.5 for level four, 4.3 for level five, and
5.4 for level six.
In parallel computation fill levels higher than those employed in sequential computing
are feasible since modern multiprocessors are either clusters or have virtual
shared memory, and these have memory sizes that increase with the number of pro-
cessors. Another point to note is that the added memory requirement for these level
values is not as prohibitive as it is for a complete factorization. Hence it is practical
to trade off increased storage in preconditioners for reducing the number of iterations
in the solver.
4.2.2. Convergence of preconditioned iterative solvers. The fill results
in the previous subsection are not influenced by the actual numerical values of the
nonzero coefficients; however, the convergence of preconditioned Krylov space solvers
is influenced by the numerical values. Accordingly, Table 4.5 shows iterations required
for convergence for various partitionings and fill levels for the three variant
algorithms that we consider. The data in these tables can be interpreted in
various ways; we begin by discussing two ways that we think are primarily significant.
First, by scanning vertically one can see how changing the number of subdomains,
and hence, matrix ordering, affects convergence. The basis for comparison is the
iteration count when there is a single subdomain. The partitioning and ordering
for these cases is identical to, and our data in close agreement with, that reported
by Benzi, Joubert, and Mateescu [4] for natural ordering. (They report results for
Problem 2 with only one of the two values of epsilon.)
A pleasing property of both the constrained and unconstrained PILU algorithms
is that the number of iterations increases only mildly when we increase the number
of subdomains from one to 512 for these problems. This insensitivity to the number
of subdomains when the number of nodes per subdomain is not too small confirms
that the PILU algorithms enjoy the property of parallel algorithmic scalability. For
example, Poisson's equation (Problem 1) preconditioned with a level two factorization
and a single subdomain required 24 iterations. Preconditioning with the same level,
constrained PILU(k) on 512 subdomains needed only two more iterations. Similar
results are observed for the convection-diffusion problems also. This property is a
consequence of the fill between the subdomains that is included in the PILU algorithm.
Similar results have been reported in [26, 36], and the first paper includes a condition
number analysis supporting this observation.
Increasing the level of fill generally has the beneficial effect of reducing the number
of iterations needed; this influence is largest for the worse-conditioned convection-diffusion
problem with parameter 1/1000. For this problem, level zero preconditioners do not
converge for reasonable subdomain sizes. Also, even though level one preconditioners
require fewer iterations than level two preconditioners in some cases, when the
PETSc solvers terminate because the residual norms are reduced by a factor of 10^5, the relative
errors are larger than 10^-5 for the former preconditioners. The relative errors are
also large for the convection-diffusion problem with the other parameter value when the level is set
to zero.
Second, scanning the data in Table 4.5 horizontally permits evaluation of the
subdomain graph constraint's effects. Again, unless subdomains are small and the
factorization level is high, constrained and unconstrained PILU(k) show very similar
behavior. Consider, for example, Poisson's equation (Problem 1) preconditioned
with a level two factorization and 512 subdomains. The solution with unconstrained
PILU(k) required 25 iterations while constrained PILU(k) required 26.
We also see that PILU(k) preconditioning is more effective than BJILU(k) for all
3D trials. (Recall that the single apparent exception, Problem 2 with 32,768 nodes
per subdomain, has large relative errors at termination.) Again,
the extremes of convergence behavior are seen for the harder Problem 2 configuration:
with level one preconditioners, BJILU(k) suffers large relative errors at termination
while the other two algorithms do not, when the number of subdomains is 64 or fewer.
On 2D domains, while PILU(k) is more effective than BJILU(k) for Poisson's
equation, BJILU(k) is sometimes more effective in the convection-diffusion problems.
We also examine iteration counts as a function of preconditioner size graphically.
A plot of this data appears in Figure 4.1.
[Figure 4.1 appears here; its three panels plot iterations against preconditioner size for block
Jacobi ILU(k), constrained PILU(k), and unconstrained PILU(k), with 512 nodes per subdomain
(512 subdomains), 4096 nodes per subdomain (64 subdomains), and 32,768 nodes per subdomain
(8 subdomains).]
Fig. 4.1. Convergence comparison as a function of preconditioner size for the convection-diffusion
problem, on the 64 x 64 x 64 grid. Data points are for levels 0 through 4. Data
points for constrained and unconstrained PILU(k) are indistinguishable in the third graph.
In these figures the performance of the constrained and unconstrained PILU algorithms is often indistinguishable. We find
again that PILU(k) preconditioning is more effective than BJILU(k) for 3D problems
for a given preconditioner size; however, this conclusion does not always hold for 2D
problems, especially for lower fill levels. As the number of vertices in the subdomains
increases, higher fill levels become more effective in reducing the number of iterations
needed for convergence. We find that fill levels as high as four to six can be the most
effective when the subdomains are sufficiently large. Fill levels higher than these do
not seem to be merited by these problems, even for the difficult convection-diffusion
problems, where a level four preconditioner reduces the number of
iterations below ten.
5. Conclusions. We have designed and implemented a PILU algorithm, a scalable
parallel algorithm for computing ILU preconditioners that creates concurrency
by means of graph partitioning. The theoretical basis of the algorithm is the incomplete
fill path theorem that statically characterizes fill elements in an incomplete
factorization in terms of paths in the adjacency graph of the initial coefficient matrix.
To obtain a scalable parallel algorithm, we employ a subdomain graph constraint that
excludes fill between subgraphs that are not adjacent in the adjacency graph of the
initial matrix. We show that the PILU algorithm is scalable by an analysis for 2D-
and 3D-model problems and by computational results from parallel implementations
on three parallel computing platforms.
We also study the convergence behavior of preconditioned Krylov solvers with
preconditioners computed by the PILU algorithm. The results show that fill levels
higher than one are effective in reducing the number of iterations, that the number of
iterations is insensitive to the number of subdomains, and that the subdomain graph
constraint does not affect the number of iterations needed for convergence while it
makes possible the design of a scalable parallel algorithm.
Appendix. Proof of the incomplete fill path theorem.
Theorem A.1. Let F = L + U - I be the filled matrix corresponding to an
incomplete factorization of A, and let f_ij be a nonzero entry in F. Then f_ij is a level
k entry if and only if there exists a shortest fill path of length k + 1 that joins i and j
in G(A).
Proof. If there is a shortest fill path of length k + 1 joining i and j, we prove that
{i, j} is a fill edge of level k by induction on the length of the fill path.
Define a chord of a path to be an edge that joins two nonconsecutive vertices on
the path. The fill path joining i and j is chordless, since a chord would lead to a
shorter fill path.
The base case k = 0 is immediate, since a fill path of length one in the graph
G(A) is an edge {i, j} in G(A) that corresponds to an original (level zero) nonzero in A.
Now assume that the result is true for all lengths less than k + 1. Let h denote
the highest numbered interior vertex on the fill path joining i and j.
We claim that the (i, h) section of this path is a shortest fill path in G(A) joining
i and h. This section is a fill path by the choice of h since all intermediate vertices on
this section are numbered lower than h. If there were a fill path joining i and h that
is shorter than the (i, h) section, then we would be able to concatenate it with the
(h, j) section to form a shorter (i, j) path. Hence the (i, h) section is a shortest fill
path joining i and h. Similarly, the (h, j) section of this path is a shortest fill path
joining h and j.
Each of these sections has fewer than k + 1 edges, hence the inductive hypothesis
applies. Denote the number of edges in the (i, h) section of this path by k_1 and in the
(h, j) section by k_2, so that k_1 + k_2 = k + 1. By the inductive hypothesis, the edge {i, h} is a
fill edge of level k_1 - 1, and the edge {h, j} is a fill edge of level k_2 - 1. Now, by the
sum rule for updating fill levels, when the vertex h is eliminated we obtain a fill edge
{i, j} of level (k_1 - 1) + (k_2 - 1) + 1 = k.
Now we prove the converse. Suppose that {i, j} is a fill edge of level k; we show
that there is a fill path in G(A) of length k + 1 joining i and j, by induction on the level k.
The base case k = 0 is immediate, since the edge {i, j} constitutes a trivial fill
path of length one. Assume that the result is true for all fill levels less than k. Let h
be a vertex whose elimination creates the fill edge {i, j} of level k. Let the edge {i, h}
have level k_1, and let the edge {h, j} have level k_2; by the sum rule for computing
levels, we have k_1 + k_2 + 1 = k. By the inductive hypothesis, there is a shortest
fill path of length k_1 + 1 joining i and h, and such a path of length k_2 + 1 joining h
and j. Concatenating these paths, we find a fill path joining i and j of length k + 1.
We also need to show that the (i, j) fill path constructed in the previous paragraph is a shortest
fill path between i and j. Consider the elimination of any other vertex g that causes
the fill edge {i, j}. If the level of the edge {i, g} is k'_1 and that of {g, j} is k'_2,
then k'_1 + k'_2 + 1 is at least k, by the choice of the vertex h. The inductive hypothesis applies to the
(i, g) and (g, j) sections, and hence the sum of their lengths is at least k + 1.
This completes the proof.
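The characterization of Theorem A.1 can be checked directly on small examples. The sketch below computes, for every nonzero of the complete (no-dropping) symbolic factor, its level by the sum rule and compares it with the length of a shortest fill path minus one. The random graph and its ordering are arbitrary illustrative assumptions.

# Minimal sketch checking the incomplete fill path theorem on a tiny random graph.
import random
from collections import deque
from itertools import combinations

def sum_rule_levels(edges, n):
    """Complete symbolic elimination with lev(i,j) = min_h lev(i,h)+lev(h,j)+1."""
    lev = {e: 0 for e in edges}
    for h in range(n):                                   # eliminate vertices in order
        nbrs = [v for v in range(h + 1, n) if frozenset((h, v)) in lev]
        for i, j in combinations(nbrs, 2):
            new = lev[frozenset((h, i))] + lev[frozenset((h, j))] + 1
            key = frozenset((i, j))
            lev[key] = min(lev.get(key, new), new)
    return lev

def fill_path_level(adj_list, i, j):
    """(length of shortest fill path joining i and j) - 1; interior vertices < min(i, j)."""
    lo = min(i, j)
    dist = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in adj_list[u]:
            if v == j:
                return dist[u]                           # path length dist[u]+1, level = length-1
            if v < lo and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None                                          # no fill path means no fill entry

if __name__ == "__main__":
    random.seed(0)
    n = 12
    edges = {frozenset((u, v)) for u in range(n) for v in range(u + 1, n)
             if random.random() < 0.25}
    adj_list = [[v for v in range(n) if frozenset((u, v)) in edges] for u in range(n)]
    lev = sum_rule_levels(edges, n)
    for key, l in lev.items():
        i, j = sorted(key)
        assert fill_path_level(adj_list, i, j) == l, (i, j)
    print("sum-rule levels equal shortest fill path lengths minus one for all entries")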
This result is a generalization of the following theorem that characterizes fill in
complete factorizations for direct methods, due to Rose and Tarjan [30].
Theorem A.2. Let F = L + U - I be the filled matrix corresponding to the
complete factorization of A. Then f_ij is nonzero if and only if there exists a fill path joining
i and j in the graph G(A).
Here we associate level values with each fill edge and relate them to the lengths of
shortest fill paths. The incomplete fill path theorem enables new algorithms for incomplete
symbolic factorization that are more efficient than the conventional algorithm
that simulates numerical factorization. We have described these algorithms in an
earlier work [29] and the report is in preparation.
D'Azevedo, Forsyth, and Tang [9] have defined the (sum) level of a fill edge {i, j}
using the length criterion employed here, and hence they were aware of this result.
However, the theorem is neither stated nor proved in their paper. Definitions of level
that compute levels of fill nonzeros by rules other than by summing the levels of the
causative pairs of nonzeros have been used in the literature. The "maximum" rule
defines the level of a fill nonzero to be the minimum over all causative pairs of the
maximum value of the levels of the causative entries (rather than their sum).
A variant of the incomplete fill path theorem can be proved for this case, but it is not
as simple or elegant as the one for the "sum" rule. Further discussion of these issues
will be deferred to a future report.
Acknowledgments. We thank Dr. Edmond Chow of CASC, Lawrence Livermore
National Laboratory, and Professor Michele Benzi of Emory University for helpful
discussions.
--R
Cambridge University Press
http://www.
Parallelization of robust multigrid methods: ILU factorization and frequency decomposition method
Numerical experiments with parallel orderings for ILU preconditioners
Approximate and incomplete factorizations
An object-oriented framework for block preconditioning
Experimental study of ILU preconditioners of indefinite matrices
Ordering methods for preconditioned conjugate gradient methods applied to unstructured grid problems
A Graph-Theory Approach for Analyzing the Effects of Ordering on ILU Preconditioning
Ordering strategies and related techniques to overcome the trade-off between parallelism and convergence in incomplete factorizations
Numerical Linear Algebra for High Performance Computers
Analysis of parallel incomplete point factorizations
Iterative Methods for Solving Linear Systems
Parallel incomplete Cholesky preconditioners based on the nonoverlapping data distribution
Incomplete Decomposition (ILU): Algorithms
Parallel ILU Ordering and Convergence Relationships: Numerical Experiments
An improved incomplete Cholesky factorization
Parallel threshold-based ILU factorization
An incomplete factorization technique for positive definite linear systems
An iterative solution method for linear equation systems of which the coe
Fast algorithms for incomplete factorization
Algorithmic aspects of vertex elimination on directed graphs
ILUT: A dual-threshold incomplete LU factorization
Iterative Methods for Sparse Linear Systems
Parallelism in ILU-preconditioned GM- RES
--TR
--CTR
Robert D. Falgout , Jim E. Jones , Ulrike Meier Yang, Conceptual interfaces in hypre, Future Generation Computer Systems, v.22 n.1, p.239-251, January 2006
Luca Bergamaschi , Giorgio Pini , Flavio Sartoretto, Computational experience with sequential and parallel, preconditioned Jacobi--Davidson for large, sparse symmetric matrices, Journal of Computational Physics, v.188
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | preconditioning;parallel preconditioning;incomplete factorization;ILU |
587409 | Asymptotic Analysis of the Laminar Viscous Flow Over a Porous Bed. | We consider the laminar viscous channel flow over a porous surface. The size of the pores is much smaller than the size of the channel, and it is important to determine the effective boundary conditions at the porous surface. We study the corresponding boundary layers, and, by a rigorous asymptotic expansion, we obtain Saffman's modification of the interface condition observed by Beavers and Joseph. The effective coefficient in the law is determined through an auxiliary boundary-layer type problem, whose computational and modeling aspects are discussed in detail. Furthermore, the approximation errors for the velocity and for the effective mass flow are given as powers of the characteristic pore size $\ep$. Finally, we give the interface condition linking the effective pressure fields in the porous medium and in the channel, and we determine the jump of the effective pressures explicitly. | Introduction
Finding effective boundary conditions at the surface which separates a channel
flow and a porous medium is a classical problem.
Supposing a laminar incompressible and viscous flow, we find out immediately
that the effective flow in a porous solid is described by Darcy's law. In the free
fluid we obviously keep the Navier-Stokes system. Hence we have two completely
different systems of partial differential equations. First, Darcy's law combined
with the incompressibility gives a second order equation for the pressure and a
first order system for the velocity. In the Navier-Stokes system, the orders of the
corresponding differential operators are different, and it is not clear what kind
of conditions one should impose at the interface between the free fluid and the
porous part. Clearly, due to the incompressibility, the normal mass flux should
be continuous. Other classically used conditions are the continuity of the pressure
and, for a free fluid, the vanishing of the tangential velocity at the interface.
Let us discuss the mathematical background of the interface conditions. It
is well-known that Darcy's law is a statistical result giving the average of the
momentum equation (the Navier-Stokes equations) over the pore structure. Its
rigorous derivation involves the weak convergence in H(div) (respectively the
two-scale convergence) of velocities, and only the continuity of the normal velocities
is preserved. Other continuity conditions at the interface are generally lost, such
that further analysis is required.
Concerning other interface conditions used in engineering literature, the vanishing
of the tangential velocity is found to be an unsatisfactory approximation,
and in [2] a new condition is proposed. The condition reads
   (grad u^e nu) . tau = (alpha / sqrt(K)) (u^e - v^f) . tau   on the interface,   (1.1)
where u^e is the effective velocity in the channel, v^f is the mean filtration velocity
given by Darcy's law, tau is a tangent vector to the interface, nu is the normal into the
fluid, K is the permeability of the porous medium, and the scalar alpha is a function
of the geometry of the porous medium.
In [2], this law is derived by heuristic arguments and justified experimentally.
A theoretical attempt to derive (1.1) is undertaken in [15] and, using a statistical
approach, a Brinkman type approximation in the transition layer is derived. A
matching argument then allows one to obtain formula (1.2), Saffman's modification
of (1.1) in which the filtration velocity v^f is neglected.
The interested reader can also consult the lecture note [4].
Different considerations can be found in [5] and [11]. They distinguish two
cases:
(a) The pressure gradient on the side of the porous solid at the interface is normal
to the interface. Consequently, we have a balanced flow on both sides of the
interface. Then, using an asymptotic point of view, the continuity of the normal
velocity, (u^e - v^f) . nu = 0, and of the pressure are obtained in [11] on the
interface. This case describes the flows in cavities. The mathematical
justification is in [9]. We shall not consider it in this paper.
(b) The pressure gradient on the side of the porous solid at the interface is
not normal. This case is considered in the fundamental paper [5]. After
discussing the orders of magnitude of the unknowns it is found out that
on the interface the velocity of the free fluid is zero, and the pressure is
continuous.
All results cited above are not mathematically rigorous. Furthermore, different
approaches give different results and two natural questions arise immediately:
(Q1) What are the correct matching conditions (i.e. conditions at the interface)
between those two flow equations?
(Q2) What are the effective constants entering the matching conditions?
We are going to answer those questions in the following. In Section 2, we
define our problem and discuss some simple approximations. In Section 3, we
introduce an important auxiliary problem of boundary layer type which we need
to construct a better approximation, and Section 4 gives additional results for an
analogous problem on a finite strip. Then, in Section 7, the effective equations
with the Beavers-Joseph type boundary condition are presented together with
improved error estimates. Finally, in Section 6, we show how the computation of
the constants involved in the interface conditions works in practice. Especially, it
turns out that the difference between Darcy's pressure in the porous part and the
effective channel pressure is equal to a constant multiple of the effective channel
shear stress, thus contradicting [11].
Figure 1: The flow region (with the unit cell Y, its fluid part Y*, and its solid part Z*).
2 Setting of the problem
This section deals with the equations describing
uid motion over a porous bed,
under a uniform pressure gradient. We assume the condition of the experiment by
Beavers and Joseph [2], i.e. a stationary laminar incompressible viscous
ow.
For simplicity reasons, we consider a
ow over a periodic porous medium with
a characteristic pore size ". The
ow
region
of two parts, see Figure 1. The upper
part
and the lower
part
2 is the porous medium which is obtained by putting solid
obstacles of size " b into the
domain
the (permeable) interface
betweenand
2 . More precisely, let
made of two complementary parts, the solid part Z and the
uid part Y . It is
assumed that Z is a smooth closed subset of Y , strictly included in Y , Y \Z
and Y [Z Now assume that both b and L are integer multiples of ". Then
the
domain
can be covered by a regular mesh of N(") cells of
size ". Each cell Y "
divided into a
uid part "(Y + k) and a solid
k). The
uid
part
2 of the porous medium is therefore
"=
and the whole
ow region
is
After specifying the geometry, we consider the equations determining the velocity
eld and the pressure eld p " in the Beavers{Joseph experiment:
in
in
(@
For small ", this problem is extremely dicult to solve, so that one has to look
for approximations. Classically, the system (2.3)-(2.8) used to be approximated
by a Poiseuille
ow
in
for
and, indeed, it could be shown in [10] that (2.9) is an approximation in the following
sense:
Z
Z
Z
The above estimates indicate that in the L 2 -sense
in
in
in
in
in
on and the eective mass
ow behaves as H 3
As shown in [2] and [15], this O(") approximation often is not good enough.
Therefore, we would like to continue with the asymptotic expansion for ~u " and p " .
Energy estimates from [10] imply that in the interior
of
1 , but
globally in
Further, they show that there is an oscillatory
boundary layer conned to the neighborhood of which can be represented in the
~ bl;"
bl ( x
where ~
are solutions to a Stokes problem on an innite strip Z bl , which we
discuss in the following Section 3.
Figure 2: The boundary layer cell Z^bl.
3 The auxiliary boundary layer problem
Figure 2. For later use, we also
introduce the following notation
Z <k := Z bl \ (0; 1) (1;
l := Z bl \ (0; 1) (l;
Now f ~
are the solutions to the problem
~
div y
~
[(r y
~
~
Further, in order to dene ! bl uniquely, we require
A variational formulation of this problem is: nd a locally square integrable
vector eld ~ bl 2 W satisfying
Z
Z bl
r ~ bl
Z
where W is the function space which contains all y 1 -periodic, divergence-free, locally
square integrable vector elds ~z dened on Z bl , having nite viscous energy
(i.e.
R
satisfying the no-slip boundary condition at
the solid boundaries [ 1
In [9] (Proposition 3.22, pages 462-463) the following result is proved:
Theorem 1 The problem (3.11) has a unique solution ~
bl 2 W . It is locally
innitely dierentiable outside of the interface S. Furthermore, a pressure eld ! bl
exists, which is unique up to an additive constant and locally innitely dierentiable
outside of the interface S, such that (3.5) holds. In a neighborhood of S we have
~
bl
The following lemma states some simple properties of f ~ bl
l
Z
Z bl
jr ~
Proof. (3.12) follows immediately from (3.6) by
Z 1@ bl@y 2
Z 1@ bl@y 1
Then, by integrating the second component of the momentum equation (3.5) over
the rectangle (0; 1) (a; b) we obtain
and (3.13) follows by applying (3.12).
Next, integrate the rst component of the momentum equation (3.5) over (0; 1)
with respect to y 1 . Using (3.13) we nd out that
linear
function of y 2 . Since it is bounded in the limit y it has to be a constant
for y 2 0, which proves (3.14).
Finally, after taking
bl as the test function in (3.11), we obtain (3.15).
We expect that the problem (3.5){(3.10) represents a boundary layer. This
means that changes of the velocity and the pressure elds are concentrated around
the interface S and vanish very rapidly with increasing distance from S. In linear
elasticity, results of this type are called Saint-Venant's principle. Saint-Venant's
principle is also valid in our case, and we want to discuss this in more detail.
We start our considerations with Z (the part of Z bl lled by the
uid). Here,
one can obtain sharp decay estimates by using results for the decay of solutions of
general elliptic equations (see Theorem 10.1 from [12], which is an application of
Tartar's lemma). However, in our particular situation we can give a direct proof:
Theorem 3 Let
Then, for every a > 0, we have
Proof. Applying the curl -operator to equation (3.5), we see that
ag : (3.19)
Furthermore, by (3.14) and periodicity, we conclude that
Consequently, the solution bl of (3.19) may be written as
We see, that the leading decay exponent is proportional to 2, such that (3.17)
and (3.18) follow.
With the help of this estimate for
bl , we can prove exponentially
fast stabilization of ~
Corollary 4 Let
Then for every a > 0 and every < 2 we have
~
and D ~
Proof. We follow [9]: by (3.16) and (3.6) the functions bl
bl
bl
@
Using the explicit form of the right hand side given by (3.20), a variation of constants
yields also an explicit representation for bl
bl
(D 1
(D 1
bl
(D 2
1;n
(D 2
e 2ny2 (3.27)
with the additional relations
4n
4n
The representation (3.26),(3.27) allows us to conclude (3.22) and (3.23).
For the pressure eld ! bl , we have:
Corollary 5 For
we have ! bl (y
Proof. Taking the divergence of the momentum equation (3.5), we nd that
bl is square integrable and the averages over
sections fy ag of ! bl C bl
are zero, ! bl can be written as
from which the assertion follows.
Now we turn our attention to the porous part Z . Due to the presence of the
solid obstacles, our estimates will be much less precise in this case.
Lemma 6 Let the distance between the solid obstacles and the boundary of the
unit cell Y be bigger than or equal to 2d . Let ~
bl be the solution of (3.5){(3.10),
and let l 2 Z; l < 1. We introduce a function ~
lower by
~
lower
bl for y 2 l (3.32)
~
lower
lower
bl
d
Z y1 bl
Z l
bl
Z 2l y 2
l
lower
bl
Z 2l y 2
l
bl
for
Then ~ lower 2 W \ H 1 (Z bl
r ~
lower
C lower
r ~ bl
where the constant C lower satises
C lower d 3
with C P denoting the Poincare constant appearing in
bl
r bl
If we further assume that the ball B (1=2; 1=2) with radius and center (1=2; 1=2)
is contained in Z , an easy calculation in polar coordinates yields the estimate
which (together with d 1) results in the estimate
r ~ lower
r ~
bl
Proof. By a straightforward calculation, one veries that ~ lower
This calculation also yields the estimates
Z
dy d 2
bl 2
dy
Z
bl 2
dy
Z
@ lower@y 2
dy d 4
Z
bl 2
dy
Z
bl 2
dy
+d 4
Z
@ lower@y 2
dy
Z l
l d
bl
d
Z
dy 4
Z
where Z l;l we use the simple trace estimates
Z
bl 2
dy
Z
dy (3.44)
Z l
l 1
bl
Z
bl 2
dy
Z
We insert (3.44),(3.45) and (3.38) into the sum of (3.41){(3.43) and use d 1 to
obtain (3.36) and (3.37).
With the help of this lemma, we are able to deduce exponential decay of r ~
Proposition 7 Assume that l 2 Z with l < 0. Dene Z <l as in (3.2). Then the
solution ~
bl of problem (3.11) fullls
r ~
bl
r ~
bl
e
lower
where
lower =2 ln
C lower
Proof. If we test (3.11) with some function ~ bl ~ lower , where ~
lower is the
function constructed in the previous lemma, this results in the estimate
r ~
bl
C lower
r ~
bl
Using the hole-lling technique as in[9], Lemma 2.4 (we add C lower
r ~
bl
to both sides of (3.48) and evaluate the resulting recursion), we obtain exponential
decay with rate
lower given by (3.47).
Applying local regularity results, one immediately obtains:
Corollary 8 For any a < 0, 2 IN , the solution f ~
exponentially
for
C(a; )e
lower
and D ! bl (y
C(a; )e
lower
for all y 2 < a < 0.
Proof. From Proposition 3.7 of [9] we know that
Z
Z l
C
r ~
bl
Z
Z l
Z
Z l+1
C
r ~
bl
which implies exponentially fast stabilization to a constant (which has to be zero
because of (3.10)):
Ce
lower jlj for l <
Pointwise estimates for ~
bl , ! bl and their derivatives can then be obtained as usual
by dierentiating the equations obtaining estimates for higher derivatives, which
can then be used with the Sobolev embedding theorem.
The above results imply that ~
bl is a boundary layer type velocity eld and
bl is a boundary layer type pressure. Only the constants C bl
1 and C bl
! from (3.21)
and (3.29) will enter the eective
ow equations in the channel. They contain the
information about the geometry of the porous bed.
Remark 9 As we shall see from the numerical examples, in general C bl
However, if the geometry of Z is axisymmetric with respect to re
ections around
the axis y
This result is
obtained by the following simple argument:
Let Z be axisymmetric around the axis y
bl be a solution
for (3.5){(3.10). Then ( bl
a solution. By uniqueness, it must be equal to ( ~
we conclude that
4 Approximation of the boundary layer problem
on a nite domain
In this section we propose a scheme for computing the actual values of C bl
1 and
cases where the geometry of the porous medium is known. Since problem
resp. (3.11) is dened on an innite domain Z bl , the rst step is
to approximate its solution with solutions of problems dened on nite domains
of the form Z k
l := Z bl \ (0; 1) (l;
bl
k;l g be
the solution of
~
bl
l [ Z k
div y
~
bl
l [ Z k
bl
k;l
[fr y
~
bl
~
bl
bl
k;l g is y
with the following additional boundary conditions motivated by Corollary 4 and
Proposition 7:
~
bl
Further, to dene the pressure eld ! bl
k;l uniquely, we require
Z
Analogous to the estimates on ( ~
of the previous section, one can prove
the following properties for the solution ( ~ bl
k;l ) of (4.1){(4.8):
0 , the solution f ~
bl
k;l g of problem (4.1){(4.8) can be
represented as
(D 1
e 2nk
where
1;k;l :=
Z 1( bl
and
k;l (y) =X
with
!;k;l :=
Z 1! bl
Z 1! bl
This representation immediately yields, that for every 0 < a < k, 0 < < 2, and
are constants C(a; ); C(a; ; ) such that for all y 2 Z k
a we have
~
bl
D ~
bl
and ! bl
!;k;l
For
~
bl
k;l
l
Ce
lower jmj (4.18)
with
lower from (3.47). Furthermore, for y 2 Z 0
l
k;l (y)
Ce
lower
~ bl
k;l (y) (C bl
Ce
lower
and ! bl
Ce
lower jy
From these estimates we obtain:
Proposition 11 Let f ~
bl
k;l g be the solution of problem (4.1){(4.8). Then, for
every < 2, a constant C exists such that
r ~
bl
k;l r ~
bl
l
lower
Proof. Let := ~
bl ~
bl
k;l . Then (; !) is y 1 -periodic, vanishes
on
l
k=1 (@Z (0; k)) and solves
in Z k
l . Set
lower
where lower is derived from in the same way as ~
lower was derived from ~
bl in
Lemma 6. Testing (4.23) with ^
, we obtain
Z
Z l+1
Z 1rr lower =
Now note that ! may be replaced by ! c for an arbitrary c 2 IR due to
R
=const Then we can apply the exponential stabilization results for
f ~
k;l g from (3.22), (3.23), (3.30), (4.15), (4.15), and (4.17) to
obtain (4.22).
The approximation error between C bl
1 and C bl
1;k;l can then be estimated as follows
Corollary 12 For every < 2 there is a constant C such that
1;k;l
lower
Proof. Note that
1;k;l
Z
bl
bl
which can be estimated as desired by using Poincare's inequality on Z 0 [ Z 1
together with (4.22).
In order to obtain estimates for the pressure dierence ! bl
we need the
following result:
Lemma 13 For each F 2 L 2 (Z k
l
R
l
there is a function ~
l ) vanishing on the boundaries y
l
satisfying div ~
together with the stability estimate
l
l
More generally if F 2 H r (Z k
l ) for some integer r 0, then ~
' can be chosen such
that
l
Proof. The proof is similar to the proof of Lemma 3.4 in [9]. We search for ~
in the form
with div correcting the non-zero boundary values of r.
More precisely, let be the solution to
@
@
n=l
(@Z (0; n)) [ fy
Since
R
l
the testing of (4.32) with yields
Z
l
Z
l
Z
l
l
Z
l
l
l
and therefore the estimate
l
Note that we have used the Poincare inequality
l
Z
l
l
l
with a constant C being independent of k and l. This estimate can easily be proved
by extending from Z k
l to the rectangle (0; 1) (l; k) and by using the Poincare
estimate there. Furthermore, by localizing we also get estimates for higher order
derivatives of in the form
l
l
The function # from (4.31) has to correct the non-zero boundary values of
and should therefore fulll
@#
@
@
on
n=l (@Z (0; n)) [ fy lg. Since @Z is smooth, such a function
can be constructed by a local H r -lift such that
l
l
with C being independent of k and l.
The combination of (4.37) and (4.41) then yields the desired regularity estimate
for ~
', and the lemma is proved.
With the help of this lemma we obtain:
Proposition 14 Let ! bl and ! bl
k;l be the pressure elds determined by (3.5){(3.10),
resp. (4.1){(4.8). Then, for every < 2, there is a constant C such that
k;l
l
lower
For the dierence C bl
!;k;l we have the better estimate
!;k;l
C
lower
Proof. Set
l
Z
l
and let F :=
k;l
!. Since
R
l
yields a function ~
which
we can use as test function in the dierence of momentum equations
bl ~
bl
Inserting
! in the pressure term, testing with ~
and doing a partial integration
we obtain Z
l
r( ~
bl ~
bl
k;l )r~'
Z
l
If we now use the stability estimate (4.29) together with (4.22), this yields
l
lower
Finally, to estimate
!, let
Z
Then, obviously, we have
By Theorem 3.7 of [9], we also have
r( ~
bl ~
bl
which can be used together with (4.21) to estimate
lower
r( ~
bl ~ bl
l
Using the triangle inequality in (4.47) together with (4.51) and (4.22), we obtain
(4.42).
In order to get (4.43), we note that
!;k;l
such that the estimate follows directly from (4.51) and (4.22).
We complete this section with a regularity result, which we will need in the
following.
Proposition
l f; g be the y 1 -periodic solution of
in Z k
l with boundary conditions
l
and pressure normalization Z
Then we have the estimate
l
l
l
l
l
C
l
with a constant C independent of k and l.
Proof. We rst note that we can bound k 2 k L 2 (Z k
l ) and
l
in
terms of krk L 2 (Z k
l ) . In the lower region, this follows from Poincare's inequality
applied on every cell. In the upper region, we rst have
l
@
l
because
setting
we also obtain
l
@
Additionally, the Hardy inequality
@
together with an estimate of S 1 (0) by trace inequality and Poincare inequality on
l
Next, for deriving a bound for krk L 2 (Z k
l ) , we test the momentum equation (4.53)
with to obtain
l
Z
l
On Z 0
l , we may again use Poincare's inequality to obtain
Z
l
f
l
l
and because of
also the estimate
Z
can be obtained by applying Poincare's inequality. In order to get an estimate for
R
and write
Z
Z
Z
Here, the rst term can be estimated as
Z
while for the second term we have
Z
using again (4.62). Combining these estimates, we obtain the desired bound for
Next, we localize the problem by multiplying with smooth cut-o functions
are identically 1 on Z i and vanish for y 2
Denote with S the region Z i+1
l . By shifting the pressure by a suitable constant
we may assume that
R
(otherwise, the boundary conditions on
@Z would be violated), we may also shift by constant multiples of e 1 such that
we may also assume that
R
The resulting function f ~
is a
solution of
~
div ~
in S with Dirichlet and periodic boundary conditions in the case i < k 1 and a
combination of Dirichlet, periodic and slip boundary conditions when
We examine the right hand side of (4.70). Since
Z
Z
Z
r ~
we have kr~k H 1 (S) C
r ~
. Since
R
0, we can apply Proposition 1.2,
Ch. I of [16] to obtain
C
r ~
Then, however, the right hand side of (4.70) is in L 2 while the right hand side
of (4.71) is in H 1 , and we can apply Proposition 2.2 of Ch. I of [16] to get that
and kr~k L 2 (S) can be bounded in terms of krk L 2 (S) (with a constant
depending on the geometry of the porous inclusion). We note that the slip boundary
fy easily eliminated by making an even extension for 1 ; and f 1
and an odd extension for 2 and f 2 (the zeroth order re
ection). Summing up
these local estimates we get that kD 2 k L 2 (Z k
l ) and krk L 2 (Z k
l ) are bounded by a
multiple of krk L 2 (Z k
l ) with a constant that does not depend on k and l. Thus,
Proposition 15 is proved.
Corollary
bl
k;l g be the solution to (4.1){(4.8), and let
~
:= ~
bl
k;l ~
y 22
Then
l
r! bl
k;l
l
where C can be chosen independent from k and l.
5 Discretization
Now, we turn our attention to the discretization of problem (4.1){(4.8). Essen-
tially, we use a stabilized nite element discretization in the sense of [8],[3]. Unfor-
tunately, in both of these papers only polygonal domains were considered, so that
the direct application of these results to our domain Z k
l is unsatisfying because
of the curved boundaries
l
We resolve this diculty by using
generalized domain partitions given by nonlinear mappings of the elements while
essentially keeping the approximation results known for usual domain partitions
with linear mappings, see [19], [20], [1], [14]. While this is very convenient theoret-
ically, the practical implementation will usually be too complicated. Therefore, it
is important that the use of a simpler polygonal approximation of the domain can
be interpreted as a perturbation of this approach, see the discussion at the end of
this section.
Let
be a Lipschitz domain, and let T h be a partition
of
in subsets e, called
elements, where each element e is the image of a reference element ^
e under a
mapping
e is either the reference triangle ^
or the reference quadrangle ^
We require the following properties of the partition
1.
2. For two elements
3. A side of an element e (which is dened as the image of a side of ^ e) is either
a subset of
@
or the side of exactly one other element e 0 6= e. In the second
case, we require that the mapping 1
restricted to the corresponding
side of ^
e is linear.
4. The
are bi-Lipschitzian mappings with
e
e (y)j
for some constant C 1 > 0.
5. e 2 W 2;1 , with kD 2 e k 1 C 2 diam(e).
6. For simplicity reasons, we consider only quasiuniform partitions, i.e. a constant
exists such that for h := max e2T h
diam(e) we have diam(e)
Arbitrary ne triangulations of this kind exist, which can be shown by modifying
triangulations of polygonal approximations of the
domain
see [19]. The quality
of the domain partition T h is determined by the constants C
renement of such a partition results in a new partition which still has the above
properties (especially property 5).
Next, let
where P 1 is the space of linear polynomials, and Q 1 is the span of P 1 and the
polynomial x 1 x 2 . With these nite elements, we can dene the space
The following approximation result is then the substitute for approximation
results on triangulations with linear element mappings.
Theorem 17
Let
h be as above. Then, for
, an u h 2 S h
exists such that
Second, for
, an u h 2 S h exists such that
Additionally, if u has zero boundary values on some component of
@
, then also u h
can be chosen such that it has zero boundary values on that boundary component.
In all cases, the constants only depend on the smoothness of the domain and
the quality of the domain partition T h .
Proof. See [19], [20] for the case of (deformed) triangular elements, and [14]
where also the case of (deformed) quadrilaterals is handled by using a generalized
interpolation operator of Clement type.
Remark In contrast to standard interpolation results for triangulations with
linear element mappings, the term kruk L
appears on the right hand side of (5.6).
This is due to the fact that linear functions are no more interpolated exactly (in
contrast to constant functions). Note that this estimate is only possible because the
e approximate linear mappings in the limit diam(e) ! 0 (see property 5 required
for a partition T h ).
let
l , and let T h be a domain partition of Z k
l (tting across the
lateral boundaries, where periodic boundary conditions are prescribed for problem
(3.5){(3.10)). Then dene S h analogously to (5.4) as
l
the ansatz space for the velocity eld as
~
l
and the ansatz space for the pressure eld as
Z
We now search for ( ~
bl
being the solution to
Z
l
r ~ bl
Z
l
r! bl
k;l;h ~
Z
~
Z
l
div ~
bl
Z
r! bl
for all (~' k;l;h ; k;l;h
. The second term in (5.11) must be included for the
above pair of ansatz spaces to stabilize the discretization, see [8],[3]. The constant
can be any positive number. The following error estimates then hold:
Proposition 19 Let T h ; ~
be dened as above. Assume that the interior
boundary aligned with the sides of the elements of T h , let ( ~
bl
be the solution of problem (4.1){(4.8), and let ( ~
bl
H h L h be the
solution of (5.10){(5.11). Furthermore, let
bl
r ~
bl
k;l
l
bl
k;l
bl
k;l
l
r! bl
k;l
l
Then we have an error estimate in the viscous energy norm of the form
r( ~
bl
k;l;h
~
bl
l
ChR( ~ bl
together with a stability estimate for the pressure gradient of the form@ X
r(! bl
ChR( ~
bl
We further have the following L 2 -error estimate for the pressure
k;l;h (! bl
k;l Z k
l
Z
k;l dy)
l
bl
and the velocity
~
bl
k;l;h
~ bl
k;l
l
bl
with a constant C which is independent of k and l.
Proof. The proof can be done as in [8], [3], or [7] with the following modi-
cations: rst, on our more general domain partitions, the approximation results
from theorem 17 replace the standard ones. Second, [8], [3], [7] handle only the
Dirichlet case. However, one can easily check, that their proofs can be transfered
with almost no changes because the slip boundary condition still allows for partial
integration with vanishing boundary terms. For (5.14), the proof uses the
assertion of Lemma 13 such that the factor k needs to be included in the
estimate. For (5.15), one needs the H 2 -regularity estimate from proposition 15
which introduces the factor k + 1.
We now describe the denition of the discrete approximations to C bl
1;k;l and
!;k;l . First, we set
Z
Second, in order to obtain good error estimates for the numerical approximation of
!;k;l , we dene C bl
!;k;l;h as a smoothly weighted average of the pressure eld in the
following be a Lipschitz-continuous function satisfying
~
elsewhere
It is obvious that Z
~
Z
Z l
such that we may approximate C bl
!;k;l by
Z
l
~
Then we can prove:
Proposition 20 For C bl
!;k;l;h given by (5.16), (5.19) we have the estimates:
1;k;l;h C bl
1;k;l
!;k;l;h C bl
!;k;l
lower
Proof. In order to show (5.20), we use the denitions (4.11) and (5.16) in the
momentum equations (4.1) and (5.10) to obtain
1;k;l;h C bl
Z
k;l;h
~
bl
ds
Z
l
r ~
bl
k;l r( ~
bl
k;l
~ bl
Z
l
r! bl
bl
k;l
~ bl
Z
l
r( ~
bl
k;l
~
bl
Z
l
r ~
bl
k;l;h r( ~ bl
k;l
~
bl
Z
l
r! bl
bl
k;l
~ bl
Z
l
r( ~
bl
k;l
~
bl
dx
Z
l
r(! bl
bl
Z
l
r! bl
bl
k;l
~
bl
Here, the rst term is of order O(h 2 ) by (5.12), the second term can be estimated
as
Z
l
r(! bl
bl
Z
l
r(! bl
k;l;h
~
bl
k;l
by (5.15), and, if one applies (5.11), the third term can be estimated as
Z
l
r! bl
bl
k;l
~
bl
Z
l
r(! bl
bl
k;l
~
bl
k;l;h )+
Z
r! bl
k;l;h
Next, if we use (5.15) and (5.13), we can see that the resulting terms are of order
respectively. Thus, we have shown (5.20).
In order to show the estimate (5.21), we rst note that
!;k;l C bl
Z
l
(! bl
k;l;h (y)) ~ (y) dy +O(e
lower jlj ) (5.24)
by (4.21), (4.14), and (5.19). Next, let ~
l ) be the vector eld given by
Lemma 13 which satises div ~
l , and
l
Furthermore, theorem 17 yields the existence of an interpolating vector eld ~
~
H h which fullls
l
l
l
l
By using this, we can estimate the rst term from the right hand side of (5.24) as
follows:
Z
l
(! bl
Z
l
(! bl
dy
Z
l
r(! bl
dy
Z
l
r(! bl
Z
l
r(! bl
Z
l
r(! bl
Z
l
r( ~
bl
k;l
~
bl
Here, the rst term is of order C(k by (5.13), (5.26), and (5.25). The
second term can be estimated by replacing rst ~
' h by ~
' which can be done up
to an error of order C(k due to (5.12), (5.27), and (5.25). For the rest, a
partial integration yields
Z
l
r( ~
bl
k;l
~ bl
Z
l
k;l
~
bl
k;l;h )~' dy C(k
if one uses (5.15) and (5.25). Thus, also (5.21) is proved.
As mentioned at the beginning of this section, the practical implementation of
domain partitions with nonlinear element mappings is rather complicated. It is
usually easier to approximate a
domain
with curved boundaries by polygonal
domains
h which then can be partitioned in triangles and/or quadrangles (see
for example [17]). However, it can be shown that, on a discrete level, this simpler
approach is equivalent to our theoretically more convenient formulation up to a
quadrature error of optimal order, see [18].
Figure 3: Symmetric cell and its coarsest grid.
6 Numerical Results
We are now ready to demonstrate our method for computing the constants C bland C bl
on specic examples. First, we consider the symmetric geometry shown
on the left part of Figure 3. The boundary @Z is a circle with radius 0:25 and
center at (0:5; 0:5). On the right-hand side of Figure 3, the initial polygonal
approximation and the initial grid T h0 with quadrangle elements (see section 5)
is depicted. T h0 is then uniformly rened, yielding further grids T
. By
using the discretization described in the previous section, we obtain for every
a system of linear equations which must be solved to obtain the
discrete solution f ~
bl
k;l;h g.
Since the arising linear systems are very large, we have applied the multigrid
method which is known to be of optimal complexity for a large range of problems,
see [6]. However, due to the pressure stabilization and the polygonal approximation
of the smooth boundary
l
k=1 (@Z (0; k)), we are not in a Galerkin setting.
In this case, it is known that the simple multigrid V-cycle (one coarse-grid correction
between pre- and post-smoothing) does not have to converge independently
of the number of levels. We therefore used the W-cycle (two coarse-grid corrections
between smoothing). Both pre- and post-smoothing are done by two steps
of a block incomplete decomposition where each block contains the unknowns of
one grid node (the corners of the elements). And indeed, our numerical observations
confirm that this method is robust with respect to the number of levels and
variations of the parameters k, l; see also Table 5 below.
Table 1: Results for the symmetric cell.
In Table 1, the results of one such computation are shown. Starting
from the coarsest level, which contains 20 elements, we refine 5 times, which yields
a grid with 20480 elements (61440 unknowns). On each level we solve
the discretized equation and compute the approximations C_1^bl(k,l,h) and C_omega^bl(k,l,h) given
by (5.16) and (5.19) (with the choice (5.17)). The value given for the limit h -> 0 is
computed by polynomial extrapolation. We see that both C_1^bl(k,l,h) and
C_omega^bl(k,l,h) converge to limit values with rate O(h^2), as we expected from Proposition
19. As shown in Remark 9, C_omega^bl must be zero. Since our grid is symmetric,
the solution of the discrete problem also has the same symmetry property, such
that the approximations C_omega^bl(k,l,h) are zero up to machine accuracy.
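The extrapolated limit values can be produced, for example, by one step of Richardson extrapolation, which removes the leading O(h^2) error term from two successive grid values. This is only a hedged sketch of such a procedure, and the sample numbers below are made up for illustration rather than taken from Table 1.

# Minimal sketch of O(h^2) Richardson extrapolation for a grid-refinement sequence.
def richardson(values, order=2, ratio=2):
    """One extrapolation sweep over a list c(h), c(h/ratio), c(h/ratio^2), ..."""
    f = ratio ** order
    return [(f * b - a) / (f - 1) for a, b in zip(values, values[1:])]

c = [2.051, 2.013, 2.003, 2.0008]      # hypothetical sequence converging like O(h^2)
print("one Richardson sweep:", richardson(c))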
Let us now assume that the porous part is generated by the unsymmetric cell
from
Figure
4, where the boundary curve is given by the ellipse
0:5 0:25
For this domain we obtain the results shown in Table 2. Both C bl
1;k;l;h and C bl
!;k;l;h
converge again with order O(h 2 ), which is in accordance with Proposition 19. The
Figure 4: Unsymmetric cell and coarsest grid.
Table 2: Results for the unsymmetric cell.
Table 3: C_1^bl (extrapolated) for varying k, l.
Table 4: C_omega^bl (extrapolated) for varying k, l.
error arising from the cutting of the domain is not noticeable any more already
for moderate cut-off parameters. This is shown in Table 3, where the results for varying k and l are
given. Note that even the values for the smallest cut-offs are accurate up to the extrapolation
error.
Figure 5 shows the three solution components of the boundary layer problem. From here and
from the values of C_omega^bl in Table 2 it is obvious that a pressure jump occurs inside
the boundary layer.
Finally, Table 5 shows the convergence rates of our multigrid iteration. As
we expected from the discussion above, it is perfectly robust with respect to the
number of levels. So far, we did not observe a significant dependency on k or
l, even if one might expect some deterioration, since the error estimates from
Proposition 19 are needed in the usual W-cycle convergence theory. The reason
might be that sharper error estimates are possible in suitably weighted norms.
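For orientation, the recursion below sketches the cycle structure referred to above: setting gamma = 2 gives the W-cycle with two coarse-grid corrections between smoothings, while gamma = 1 is the plain V-cycle. The operators A[l], the smoother, and the grid-transfer routines are placeholders supplied by the discretization; this is not the actual stabilized finite element solver of Section 5.

# Schematic multigrid cycle; gamma = 2 corresponds to the W-cycle used in this paper.
import numpy as np

def mg_cycle(l, x, b, A, smooth, restrict, prolong, gamma=2, nu1=2, nu2=2):
    """One multigrid cycle on level l (level 0 = coarsest, solved exactly)."""
    if l == 0:
        return np.linalg.solve(A[0], b)
    for _ in range(nu1):                       # pre-smoothing (e.g. block ILU steps)
        x = smooth(l, x, b)
    r = b - A[l] @ x
    e = np.zeros(A[l - 1].shape[0])
    for _ in range(gamma):                     # gamma coarse-grid corrections
        e = mg_cycle(l - 1, e, restrict(l, r), A, smooth, restrict, prolong,
                     gamma, nu1, nu2)
    x = x + prolong(l, e)
    for _ in range(nu2):                       # post-smoothing
        x = smooth(l, x, b)
    return x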
Table 5: Multigrid convergence rates.
Figure 5: Detail from a picture showing the computed boundary layer solution.
7 The effective equations
It turned out that beta^bl stabilizes exponentially fast to the constant vector (C_1^bl, 0)
as y_2 tends to +infinity (and to 0 as y_2 tends to -infinity), which translates into beta^{bl,eps} tending to eps (C_1^bl, 0)
for x_2 much larger than eps. This forces us to consider the corresponding counterflow in the
channel, which is described by the following 2D Oseen-Couette system
in
d
in
div ~
in
~
If we assume that the Reynolds number Re is not too big, the problem (7.1)-(7.4)
has a unique solution in the form of a 2D Couette flow.
Following the ideas from [9], we write down the correction of order eps for the
velocity. Essentially, it corresponds to the elimination of the tangential component
of the normal stress at Sigma, caused by the approximation by Poiseuille's flow and its
contribution to the energy estimate. The correction reads
bl ( x
The correction of the velocity field by the oscillatory boundary layer velocity beta^{bl,eps}
involves the introduction of the boundary layer pressure field omega^{bl,eps}. Hence, as usual
for flow problems, it is necessary to correct the pressure field simultaneously. The
corresponding pressure correction reads
Here, p^{1,eps} is an appropriate regularization of the effective pressure p in the porous
bed, dened by
in
where the permeability tensor K is dened as
Z
Y
Here, the w j
are solutions of the auxiliary problems
div y ~
~
Z
Y
The pressure eld p is a C 1 -function outside the corners. However, due to
the discontinuities of the traces at (0; 0) and at (b; 0), its gradient is not square
integrable
in
2 . We must regularize the values at the upper corners and p 1;" is
such a regularization, satisfying
").
Let us now introduce the dierence U "
0 between the velocity eld ~u " and its
expansion up to order O("), i.e. we set
~
bl;" @v 0@x 2
Then, after constructing the corresponding outer boundary layer, it was proved
in [10] that
jr ~
Z bj ~
Z bZ Hj ~
It should be noted that the presence of the logarithmic term is a consequence of
the corner singularities in the effective pressure. It was proved in [9] that in the
absence of the boundary singularities, the above estimates hold without the log eps
term. Therefore, in the interior of the domain the expansion (7.5) is of order
O(eps^{3/2}); globally, it is of order O(eps^{3/2} |log eps|).
The estimates (7.16)-(7.18) are sufficient for calculating the next order correction
at the interface Sigma. The estimate (7.18) gives us the possibility of approximating
the velocity values at Sigma by an oscillatory velocity field; computationally, this is not
very useful.
In view of the problem setting in [9], the Beavers-Joseph law corresponds to
taking into account the next order corrections for the velocity. In fact, we
formally get on the interface Sigma:
"j log
"C bl
x
x
Integrals of the absolute value of the right hand side are not small, since our
estimates do not give any pointwise control of grad u^eps on Sigma. Nevertheless, the right
hand side of (7.21) is small in the appropriate Sobolev norm of negative order.
The precise estimates are in [10]. Hence, we get the effective law (7.22),
which is exactly Saffman's modification of the Beavers and Joseph law from (1.2),
with the proportionality constant determined by C_1^bl and sqrt(K).
Figure 6: Beavers-Joseph versus Poiseuille profile.
Let us introduce the effective flow equations in Omega_1 through the following boundary
value problem: find a velocity field u^e and a pressure field p^e such that
in
div
in
Under the same assumption of laminarity as for problem (7.1)-(7.4), the problem
(7.23)-(7.28) has a unique solution (see Figure 6), and the effective mass flow rate
through the channel is obtained by integrating u_1^e across the channel height; the
constant C_1^bl enters its explicit expression.
By using the theory of very weak solutions for the Stokes system, the
following approximation properties of {u^e, p^e} are proved in [10].
The estimates (7.32)-(7.34) justify Saffman's modification of the law by Beavers
and Joseph. Furthermore, we are able to calculate the proportionality constant in that
law; it is determined by C_1^bl and sqrt(K). We note that (7.28) is not the only possible interface
law. If one replaces Beavers and Joseph's law by directly prescribing the slip velocity
u_1^e on Sigma in terms of the shear of the zeroth order approximation, the estimates
remain valid. However, such a condition involves the knowledge of
the zeroth order approximation v_1^0, and is not really an interface condition.
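To illustrate quantitatively what the slip law changes, the sketch below solves plane Poiseuille flow with a Navier-type slip condition u_1 = ell du_1/dx_2 at the porous interface x_2 = 0, the slip length ell playing the role of eps C_1^bl. All numerical values (channel height, viscosity, pressure gradient, slip length) are arbitrary assumptions; this is a closed-form illustration only, not the rigorous estimates of this section.

# Minimal sketch: mass flow of plane Poiseuille flow with a slip condition at x_2 = 0.
H, mu, G = 1.0, 1.0, 1.0                  # channel height, viscosity, -dp/dx_1 (assumed)

def mass_flow(ell):
    """Integral of u_1 over 0 < x_2 < H, with u_1(H) = 0 and u_1(0) = ell*u_1'(0)."""
    a = G * H**2 / (2.0 * mu * (H + ell))          # slope fixed by the two boundary conditions
    return -G * H**3 / (6.0 * mu) + a * H**2 / 2.0 + ell * a * H

for ell in (0.0, 0.05):                    # ell = 0 recovers the no-slip Poiseuille value
    print(f"slip length {ell:4.2f}: mass flow = {mass_flow(ell):.5f}")
print("no-slip Poiseuille value:", G * H**3 / (12.0 * mu))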
At this stage, we can consider the approximation to the channel flow as satisfactory,
but we still must determine the filtration velocity and the pressure in the
porous medium. We have already mentioned the influence of corner singularities
on the solution of the problem (7.8)-(7.10). In order to avoid the discussion of such
effects, we limit ourselves to the behavior in the interior of the porous medium.
The inertia effects are negligible and we can use the theory from [9], Theorem 3.
In the interior, all boundary layer effects are exponentially small, and the filtration
velocity is given through the Darcy law, with the permeability tensor K and the cell
solutions w^j defined by (7.12).
Far away from the corners, the pressure field p approximates p^eps at order O(sqrt(eps)).
It can only be determined after we have found the effective pressure field in the
channel and the stabilization constant C_omega^bl giving the pressure difference at infinities
for the auxiliary problem. As shown in Section 6, this stabilization constant is
generally different from zero and we must use the interface law relating the two
effective pressures through C_omega^bl.
This shows that, contrary to intuition, the effective pressure in the coupled system
channel flow/porous medium is not always continuous. Thus, the continuity assumption
for the effective pressure from [11] is not correct in general.
Finally, let us note that we have some freedom in choosing the position of the
interface. It could be set at any distance of order O(eps) from the solid obstacles.
Intuitively, the law of Beavers and Joseph should be independent of such a choice.
This invariance result can be established rigorously, and we refer for details to [13],
where it was proved that a perturbation of order O(eps) in the position of the interface
implies a perturbation in the solution of order O(eps^2). Consequently, the
effective law does not change and the physical intuition is confirmed.
--R
Finite element methods for problems with rough coe
Boundary conditions at a naturally permeable wall
Lectures on the Mathematical Theory of Multiphase Flow
Equations et ph
of Computational Mathematics
Eine robuste und e
On boundary conditions for fluid flow in porous media
Some Methods in the Mathematical Analysis of Systems and Their Control
PhD thesis
On the boundary condition at the interface of a porous medium
Theory and Numerical Analysis
Curved elements in the
--TR | finite elements;unbounded domain;homogenization;multigrid;boundary layer;periodic structures;interface law;beavers-joseph;navier-stokes |
587417 | Fast Finite Volume Simulation of 3D Electromagnetic Problems with Highly Discontinuous Coefficients. | We consider solving three-dimensional electromagnetic problems in parameter regimes where the quasi-static approximation applies and the permeability, permittivity, and conductivity may vary significantly. The difficulties encountered include handling solution discontinuities across interfaces and accelerating convergence of traditional iterative methods for the solution of the linear systems of algebraic equations that arise when discretizing Maxwell's equations in the frequency domain.The present article extends methods we proposed earlier for constant permeability [E. Haber, U. Ascher, D. Aruliah, and D. Oldenburg, J. Comput. Phys., 163 (2000), pp. 150--171; D. Aruliah, U. Ascher, E. Haber, and D. Oldenburg, Math. Models Methods Appl. Sci., to appear.] to handle also problems in which the permeability is variable and may contain significant jump discontinuities. In order to address the problem of slow convergence we reformulate Maxwell's equations in terms of potentials, applying a Helmholtz decomposition to either the electric field or the magnetic field. The null space of the curl operators can then be annihilated by adding a stabilizing term, using a gauge condition, and thus obtaining a strongly elliptic differential operator. A staggered grid finite volume discretization is subsequently applied to the reformulated PDE system. This scheme works well for sources of various types, even in the presence of strong material discontinuities in both conductivity and permeability. The resulting discrete system is amenable to fast convergence of ILU-preconditioned Krylov methods.We test our method using several numerical examples and demonstrate its robust efficiency. We also compare it to the classical Yee method using similar iterative techniques for the resulting algebraic system, and we show that our method is significantly faster, especially for electric sources. | Introduction
The need for calculating fast, accurate solutions of three-dimensional electromagnetic
equations arises in many important application areas including,
among others, geophysical surveys and medical imaging [29, 32, 2]. Consequently,
a lot of effort has recently been invested in finding appropriate numerical
algorithms. However, while it is widely agreed that electromagnetic
phenomena are generally governed by Maxwell's equations, the choice of numerical
techniques to solve these equations depends on parameter ranges and
various other restrictive assumptions, and as such is to a significant degree
application-dependent [20, 32, 2].
The present article is motivated by remote sensing inverse problems, e.g.
in geophysics, where one seeks to recover material properties - especially conductivity
- in an isotropic but heterogeneous body, based on measurements
of electric and magnetic fields on or near the earth's surface. The forward
model, on which we concentrate here, consists of Maxwell's equations in the
frequency domain over a frequency range which excludes high frequencies.
Assuming a time-dependence e^{-i omega t}, these equations are written as
curl E - i omega mu H = 0,                  (1a)
curl H - sigma_hat E = J^s,                 (1b)
div (epsilon E) = rho,                      (1c)
div (mu H) = 0,                             (1d)
where mu is the magnetic permeability, sigma is the conductivity, epsilon is the electrical
permittivity, sigma_hat = sigma - i omega epsilon,
J^s is a known source current density, and rho is the (unknown) volume density
of free charges. In our work we assume that the physical properties mu > 0,
sigma >= 0 and epsilon >= 0 can vary with position, and mu epsilon omega^2 L^2 << 1, where L is a
typical length scale. The electric field E and the magnetic field H are the
unknowns, with the charge density defined by (1c). Note that as long as
redundant and can be viewed as an invariant of the system,
obtained by taking the r\Delta of (1a). The system (1) is defined over a three-dimensional
spatial
domain\Omega\Gamma In principle, the
domain\Omega is unbounded (i.e.
practice, a bounded subdomain of IR 3 is used for numerical
approximations. In this paper we have used the boundary conditions
H \Theta n
although other boundary conditions are possible.
A number of difficulties arise when attempting to find numerical solutions
for this three-dimensional PDE system. These difficulties include handling
regions of (almost) vanishing conductivity, handling different resolutions in
different parts of the spatial domain, handling the multiple scale lengths over
which the physical properties can vary, and handling regions of highly varying
conductivity, magnetic permeability, or electrical permittivity where jumps
in solution properties across interfaces may occur.
On the other hand, the nature of the data (e.g. measurements of the
electric and/or magnetic fields at the surface of the earth) is such that one
cannot hope to recover to a very fine detail the structure of the conductivity \sigma or the permeability \mu. We therefore envision, in accordance with the inverse problem of interest, a possibly nonuniform tensor product grid covering the domain \Omega, where \hat\sigma and \mu are assumed to be smooth or even constant inside each grid cell, but they may have significant jump discontinuities that can occur anywhere in \Omega across cell interfaces. The source J_s is also assumed not
to have jumps across interfaces. The relative geometric simplicity resulting
from this modeling assumption is key in obtaining highly efficient solvers for
the forward problem.
Denoting quantities on different sides of an interface by subscripts 1 and 2, it can be easily shown [33] that across an interface
n \times (E_2 - E_1) = 0, \quad n \times (H_2 - H_1) = 0,
n \cdot (\hat\sigma_2 E_2 - \hat\sigma_1 E_1) = 0, \quad n \cdot (\mu_2 H_2 - \mu_1 H_1) = 0.   (4)
These conditions imply that neither E nor H are continuous in the normal direction when \hat\sigma, resp. \mu, have a jump discontinuity across a cell face, and likewise, \hat\sigma E and \mu H are not necessarily continuous in tangential directions. Care must therefore be exercised when numerical methods are employed which utilize these variables if they are to be defined where they are double-valued.
By far the most popular discretization for Maxwell's equations is Yee's
method [37] (see discussions and extensions in [32, 26, 19]). This method
employs a staggered grid, necessitating only short, centered first differences
to discretize (1a) and (1b). In the more usual application of this method, the
electric field components are envisioned on the cell's edges and the magnetic
field components are on the cell's faces - see Fig. 1. It is further possible to
Figure 1: A staggered discretization of E and H in three dimensions: E-components on edges, H-components on faces.
eliminate the components of the magnetic field from the discrete equations,
obtaining a staggered discretization for the second order PDE in E,
\nabla \times (\mu^{-1} \nabla \times E) - i\omega\hat\sigma E = i\omega J_s.   (5)
Related methods include the finite integration technique and certain mixed
finite element methods [35, 5, 25]. Although these methods are often presented
in the context of time-domain Maxwell's equations the issues arising
when applying an implicit time-discretization (a suitable technique under our
model assumptions) are often similar to the ones we are faced with here.
The popularity of Yee's method is due in part to its conservation properties
and other ways in which the discrete system mimics the continuous
system [19, 15, 5, 4]. However, iterative methods to solve the discrete system
may converge slowly in low frequencies, due to the presence of the rich,
nontrivial null space of the curl operator, and additional difficulties arise
when highly discontinuous coefficients are present [29, 24, 16]. There are two
major reasons for these difficulties. First, the conductivity can essentially
vanish (for example, in the air, which forms part of \Omega); from an analytic perspective, the specific subset of Maxwell's equations used typically forms an almost-singular system in regions of almost-vanishing \hat\sigma. Even in regions where the conductivity is not close to vanishing, the resulting differential operator is strongly coupled and not strongly elliptic [6, 1]. Second, in cases of large jump discontinuities, care must be taken to handle H and \hat\sigma E carefully, since these are located as in Fig. 1 where they are potentially discontinuous.
In [1], we addressed the often slow convergence of iterative methods when
used for the equations resulting from the discretization of (5) by applying
a Helmholtz decomposition first, obtaining a potential formulation with a
Coulomb gauge condition. This change of variables (used also in [3, 12,
22, 28], among many others) splits the electric field into components in the
active and the null spaces of the curl operator. A further reformulation,
reminiscent of the pressure-Poisson equation for the incompressible Navier-Stokes
equations [14, 31], yields a system of strongly elliptic, weakly coupled
PDEs, for which more standard preconditioned Krylov space methods are
directly applicable.
In [15], we further addressed possible significant jumps in the conductivity while \mu is assumed constant, by employing a finite volume discretization on a staggered grid, akin to Yee's method with the locations of E- and H-components exchanged, as in Fig. 2. The normal components of E are now double-valued, but this is taken care of in an elegant way by the Helmholtz decomposition of E and by introducing the (generalized) current
\hat J = \hat\sigma E   (6)
into the equations.
Figure 2: A staggered discretization of E and H in three dimensions: E-components on faces, H-components on edges.
The curl operators in (5) are replaced by the vector Laplacian according to the vector identity
\nabla \times \nabla \times = \nabla(\nabla\cdot\,) - \nabla^2   (7)
for sufficiently smooth vector functions (not E).
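The identity (7) is the crux of the reduction to a vector Laplacian, so it is worth a quick sanity check. The following small Python/Sympy script (an illustrative aside, not part of the original exposition; the test field u is an arbitrary made-up example) verifies the identity symbolically:

import sympy as sp

x, y, z = sp.symbols('x y z')

def curl(v):
    return sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                      sp.diff(v[0], z) - sp.diff(v[2], x),
                      sp.diff(v[1], x) - sp.diff(v[0], y)])

def div(v):
    return sp.diff(v[0], x) + sp.diff(v[1], y) + sp.diff(v[2], z)

def grad(f):
    return sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])

def vector_laplacian(v):
    return sp.Matrix([div(grad(v[i])) for i in range(3)])

# an arbitrary smooth test field (hypothetical example)
u = sp.Matrix([x * y**2 + sp.sin(z), x * z**3, sp.cos(x) * y])

lhs = curl(curl(u))                        # curl curl u
rhs = grad(div(u)) - vector_laplacian(u)   # grad(div u) minus the vector Laplacian of u
print(sp.simplify(lhs - rhs))              # prints the zero vector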
In this paper we generalize our approach from [1, 15] to the case where
the magnetic permeability \mu may be highly discontinuous as well. This is
a realistic case of interest in geophysical applications, although usually the
jump in conductivity dominates the jump in permeability. Now the roles
of E and H are essentially dual, and it is possible to apply a Helmholtz
decomposition to either E or H, keeping the other unknown vector function
intact. We choose to decompose the electric field E, referring to Fig. 2 for
the locations of the H-unknowns in the ensuing discretization. The major
departure from our previous work is in the fact that the identity (7) does not
directly extend for the operator \nabla \times (\mu^{-1} \nabla\times\,) appearing in (5). We can, however, stabilize this operator by subtracting \nabla(\mu^{-1} \nabla\cdot\,);
this forms the basis for our proposed method. In cases of constant magnetic
permeability or electric conductivity the formulation can be reduced to our
previous formulation in [15] or a variant thereof.
Our approach to dealing with possible discontinuities can be viewed as
using fluxes (which are continuous on cell faces) and vorticities (which are
continuous on cell edges). The introduction of such unknowns is strongly
connected to mixed finite elements which are used for highly discontinuous
problems [7, 8, 16].
The paper is laid out as follows. In Section 2, we reformulate Maxwell's
equations in a way which enables us to extend our methods. The resulting
system is amenable to discretization using a finite-volume technique described
in Section 3.
The extension and generalization of our approach from [1] through [15] to
the present article is not without a price. This price is an added complication
in the sparsity structure of the resulting discrete system and a corresponding
added cost in the iterative solution of such systems. We briefly describe
the application of Krylov space methods to solve the system of algebraic
equations in Section 4. We use BICGSTAB (see, e.g., [30]) together with one
of two preconditioners: an incomplete block LU-decomposition (which is a
powerful preconditioner in the case of diagonally dominant linear systems)
and SSOR. The system's diagonal dominance is a direct consequence of our
analytic formulation.
We present the results of numerical experiments in Section 5 and compare
results obtained using our method with those obtained using a more traditional
Yee discretization. If the source is not divergence-free, as is the case
for electric (but not magnetic) sources, then our method is better by more
than two orders of magnitude. The method works well also for a case where
the problem coefficients vary rapidly. We conclude with a short summary
and further remarks.
2 Reformulation of Maxwell's Equations
Maxwell's equations (1a) and (1b) can be viewed as flux balance equations,
i.e. each term describes the flux which arises from a different physical con-
sideration, and the equations are driven by the conservation of fluxes. (In
fact, this was how they were originally developed [23].) Therefore, in both
(1a) and (1b) we have a flux term which should be balanced. In (1b) the
generalized current density b
J defined in (6) is balanced with the source and
the flux which arise from magnetic fields, and in (1a) the magnetic flux
is balanced with the flux which arises from electric fields. In our context
these fluxes are well-defined on cell faces but they may be multi-valued at
cell edges. 1
Furthermore, the leading differential operator in (5), say, has a nontrivial
null space. Rather than devising iterative methods which directly take
this into account (as, e.g., in [2, 16]), we transform the equations before
discretization.
We decompose E into its components in the active space and in the null space of the curl operator:
E = A + \nabla\phi.   (9)
We could decompose H instead in a similar way, but we would not decompose both. Here we have chosen to concentrate on the decomposition of E. Substituting (9) into equations (1a), (1b) and adding the Coulomb gauge condition, we obtain
\nabla \times A - i\omega\mu H = 0,   (10a)
\nabla \times H - \hat\sigma(A + \nabla\phi) = J_s,   (10b)
\nabla \cdot A = 0.   (10c)
Furthermore, (5) becomes
\nabla \times (\mu^{-1} \nabla \times A) - i\omega\hat\sigma(A + \nabla\phi) = i\omega J_s.   (11a)
Note that across an interface between distinct conducting media we have,
in addition to (4),
n \Theta
where \rho_s in (12c) is an electric surface charge density. These conditions and the differential equations (1) imply that while E belongs to H(curl; \Omega) (see, e.g., [13]), its normal component does not remain continuous. Moreover, \nabla\phi inherits the discontinuity of E \cdot n, while A is continuous, and both \nabla \cdot A and \nabla \times A are bounded (cf. [13]).
In [15] we had the relation \nabla \times \nabla \times A = -\nabla^2 A holding. However, when \mu varies, the identity (7) does not extend directly, and we must deal with the null space of \nabla\times in a different way.
Let us define the Sobolev spaces
equipped with the usual norm (see, e.g., [13])
kvk
(the
(\Omega\Gamma4 and [L
are used on the right hand side of (13b)),
and
Green's formula yields, for any u 2
(r \Theta (-
where the usual notation for the inner product in L^2(\Omega) and [L^2(\Omega)]^3 is used.
Thus, for any u 2 W
(r \Theta (- \Gamma1 r \Theta u); u)
We may therefore stabilize (11a) by subtracting a vanishing term:
\nabla \times (\mu^{-1} \nabla \times A) - \nabla(\mu^{-1} \nabla \cdot A) - i\omega\hat\sigma(A + \nabla\phi) = i\omega J_s,   (14)
obtaining a strongly elliptic operator for A, provided A \in W_0(\Omega). The latter condition is guaranteed by the choice (17b) below.
A similar stabilization (or penalty) approach was studied with mixed
success in [10, 20]. However, our experience and thus our recommendation
are more positive because of the discretization we utilize. We elaborate
further upon this point in the next section.
Using (10c), we can write (10b) as
r \Theta H \Gamma
This may be advantageous in the case of discontinuous -, similarly to the
mixed formulation used for the simple div-grad system
r
in [7, 27, 8, 11] and elsewhere.
Our final step in the reformulation is to replace, as in [15], the gauge
condition (10c) on A by an indirect one, obtained upon taking \nabla\cdot of (15a) and simplifying using (15b) and (10c). This achieves only a weak coupling in the resulting PDE system. We note that this replacement of the gauge condition (10c) is similar to the pressure-Poisson equation in CFD [31, 14]. The complete
system, to be discretized in the next section, can now be written as
r \Theta H \Gamma
In order to complete the specification of this system we must add appropriate
Boundary Conditions (BCs). First, we note that the original BC (3)
can be written as
(r \Theta A) \Theta n
An additional BC on the normal components of A is required for the
Helmholtz decomposition (9) to be unique. Here we choose (corresponding
to (13c))
A
This, together with (9), determines A for a given E.
Moreover, since (16d) was obtained by taking the \nabla\cdot of (16a), additional BCs are required on either \partial\phi/\partial n or \phi. For this we note that the original BC (3) together with the PDE (1b) imply also
\hat\sigma(A + \nabla\phi) \cdot n = -J_s \cdot n \quad on\ \partial\Omega.   (17c)
The latter relation (17c), together with (17b), implies \partial\phi/\partial n = 0 at the boundary.^{2,3}
The above conditions determine \phi up to a constant. We pin this constant arbitrarily, e.g. by requiring
\int_\Omega \phi = 0.   (17d)
Finally, we note that (17c) together with (16a) and (3) imply in turn that \partial\psi/\partial n = 0 at the boundary. Since (16a) and (16d) imply that \psi satisfies a homogeneous elliptic equation, we obtain \psi \equiv 0, and thus retrieve (10c), by pinning \psi down to 0 at one additional point.
The system (16) subject to the boundary conditions (17) and / pinned
at one point is now well-posed.
3 Deriving a discretization
As in [15], we employ a finite volume technique on a staggered grid, where \hat J and A are located at the cell's faces, H is at the cell's edges, and \phi and \psi are located at the cell's center. Correspondingly, the discretizations of (16d) and (16c) are centered at cell centers, those of (16e) and (16a) are centered at cell faces, and that of (16b) is centered at cell edges. The variables distribution
over the grid cell is summarized in Table 1.
To approximate \nabla \cdot u for a vector grid function u given on the cell faces, we integrate first over the cell e_{i,j,k} using the Gauss divergence theorem,
\frac{1}{|e_{i,j,k}|} \int_{e_{i,j,k}} \nabla \cdot u \, dV = \frac{1}{|e_{i,j,k}|} \int_{\partial e_{i,j,k}} u \cdot n \, dS,
In cases where the original BC is different from (3) we still use the boundary condition
as an asymptotic value for an infinite domain. Alternatively, note the
possibility of applying the Helmholtz decomposition to H , although generally there are
good practical reasons to prefer the decomposition (9) of E.
3 In our geophysical applications J s \Delta n vanishes at the boundary.
Table 1: Summary of the discrete grid functions. Each scalar field is approximated by grid functions at points slightly staggered in each cell e_{i,j,k} of the grid: the components of A and \hat J on the cell faces, the components of H on the cell edges, and \phi and \psi at the cell center.
and then use midpoint quadrature on each face to evaluate each component
of the surface integrals appearing on the right-hand side above. Thus, define
and express the discretization of (16d) and (16c) on each grid cell as
Note that we are not assuming a uniform grid:
each cell may have different widths in each direction. The boundary conditions
are used at the end points of (19).
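As a concrete illustration of the face-to-center divergence just described, the following NumPy sketch evaluates (19)-type cell averages from face values via midpoint quadrature on a nonuniform tensor-product grid. The array layout and function name are my own choices for illustration, not the authors' implementation:

import numpy as np

def discrete_divergence(ux, uy, uz, hx, hy, hz):
    """Face-to-center divergence on a (possibly nonuniform) tensor-product grid.

    ux, uy, uz hold the normal components of a vector field on the x-, y-, z-faces
    of the cells (shapes (nx+1,ny,nz), (nx,ny+1,nz), (nx,ny,nz+1)); hx, hy, hz are
    the cell widths in each direction (lengths nx, ny, nz)."""
    div_x = (ux[1:, :, :] - ux[:-1, :, :]) / hx[:, None, None]
    div_y = (uy[:, 1:, :] - uy[:, :-1, :]) / hy[None, :, None]
    div_z = (uz[:, :, 1:] - uz[:, :, :-1]) / hz[None, None, :]
    return div_x + div_y + div_z

# small usage example on a random nonuniform grid
nx, ny, nz = 4, 5, 6
hx, hy, hz = np.random.rand(nx) + 0.5, np.random.rand(ny) + 0.5, np.random.rand(nz) + 0.5
ux = np.random.rand(nx + 1, ny, nz)
uy = np.random.rand(nx, ny + 1, nz)
uz = np.random.rand(nx, ny, nz + 1)
print(discrete_divergence(ux, uy, uz, hx, hy, hz).shape)  # (4, 5, 6)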
Next, consider the discretization at cell faces. Following [15], we define the harmonic average of \hat\sigma between neighboring cells in the x-direction by
\hat\sigma_{i+1/2,j,k} = h^x_{i+1/2} \Big( \int_{x_i}^{x_{i+1}} \hat\sigma(x, y_j, z_k)^{-1} \, dx \Big)^{-1},   (20a)
where h^x_{i+1/2} = (h^x_i + h^x_{i+1})/2. If \hat\sigma is assumed constant over each cell, this integral evaluates to
\hat\sigma_{i+1/2,j,k} = h^x_{i+1/2} \Big( \frac{h^x_i}{2\hat\sigma_{i,j,k}} + \frac{h^x_{i+1}}{2\hat\sigma_{i+1,j,k}} \Big)^{-1}.   (20b)
Then, the resulting approximation for the x-component of (16e) is [15]
A x
Next, we discretize (16a) as in [37]. Writing the x-component of these
equations,
where we denote J discretization centered at the center of
the cell's x-face results:
H z
Similar expressions to those in (20c) and (21) can be derived in the y- and z-directions. The boundary conditions (3) are used to close (21). Using (20c) we can eliminate \hat J from (19a) and obtain a discrete equation in which the dominant terms all involve \phi. The resulting stencil for \phi has 7 points.
We also apply the obvious quadrature for the single condition (17d).
Finally we discretize the edge-centered (16b). Consider, say, the x-component
of (16b), written as
Integrating this equation over the surface of a rectangle with corners
the expression on the left hand side is integrated using the Gauss curl theorem, and \mu on the right hand side is averaged in both directions to obtain a value on the edge. We do not divide through by \mu before this integration because we wish to integrate the magnetic flux, which is potentially less smooth around an edge than the magnetic field. This yields
Z z k+1
z k
If \mu is assumed to be constant over each
[h y
h z
h z
h z
h z
Then, the resulting approximation for the x-component of (16b) is
iA y
h z
A z
(22c)
Using (19b) as well as (22c) and similar expressions derived in the y- and
z-directions, we substitute for H and \psi in (21) and obtain a discrete system of equations for A. The resulting stencil for A has 19 points and the same structure as for the discretization of the operator
\nabla \times (\mu^{-1} \nabla\times\,) - \nabla(\mu^{-1} \nabla\cdot\,).
The difference between this discretization and a direct discretization of the latter is that \mu at the interface is naturally defined as an arithmetic average and not a harmonic average.
The discretization described above can be viewed as a careful extension
of Yee's method, suitable for discontinuous coefficients and amenable to fast
iterative solution methods. It is centered, conservative and second order accu-
rate. Note that throughout we have used a consistent, compact discretization
of the operators \nabla\cdot\,, \nabla\times and \nabla. We can denote the corresponding discrete operators by \nabla_h\cdot\,, \nabla_h\times and \nabla_h, and immediately obtain the following identities (cf. [18, 17]),
\nabla_h \times \nabla_h = 0,   (23a)
\nabla_h \cdot \nabla_h \times = 0.   (23b)
These are of course analogues of vector calculus identities which hold for
sufficiently differentiable vector functions. The boundary conditions (17) are
discretized using these discrete operators as well.
Next, note that upon applying \nabla_h\cdot to (21) and using (23b) and (19a) we obtain a homogeneous discrete equation for the grid function \psi. Moreover, from (21) and (17c), the discrete normal derivative of \psi vanishes at the boundaries as well. Setting \psi to 0 arbitrarily at one point then determines that \psi \equiv 0 throughout the domain (as a grid function). We obtain another conservation property of our scheme, namely, a discrete divergence-free A:
\nabla_h \cdot A = 0.   (24)
Recall the stabilizing term added in (14). For the exact solution this term
obviously vanishes. Now, (24) assures us that the corresponding discretized
term vanishes as well (justifying use of the term 'stabilization', rather than
'penalty'). This is not the case for the nodal finite element method which was
considered in [20, 10]. For an approximate solution that does not satisfy (24)
the stabilization term may grow in size when \mu varies over a few orders of magnitude, or else \nabla_h \cdot A grows undesirably in an attempt to keep \mu^{-1}\nabla_h \cdot A approximately constant across an interface [20].
Our particular way of averaging across discontinuities, namely, arithmetic averaging of \mu at cell edges and harmonic averaging of \hat\sigma at cell faces, can be important. The averaging can be viewed as a careful approximation of the constitutive relationship for discontinuous coefficients. To show that, we look first at the relation \hat J_x = \hat\sigma E_x across a face whose normal direction is x. This flux flows in series, and therefore an approximate \hat\sigma that represents the bulk property of the flow through the volume is given by the harmonic average (corresponding to an arithmetic average of the resistivities). Next, we look at the relation B_x = \mu H_x, where \mu is an edge variable and B_x is the flux through the four cells which share that edge. Here the flow is in parallel, which implies that we need to approximate \mu on the edge by an arithmetic average.
Note also that if we use the more common implementation of Yee's method (i.e. H on the cell's faces and E on the cell's edges) then the roles of \hat\sigma and \mu interchange and we need to average \mu harmonically and \hat\sigma arithmetically.
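The following short sketch contrasts the two averaging rules; the function names and the sample contrast values are purely illustrative assumptions:

import numpy as np

def harmonic_face_average(sig_left, sig_right, h_left, h_right):
    """Harmonic average of cell conductivities across a shared face (flux in series),
    cf. (20b): the half-cell resistivities add."""
    return 0.5 * (h_left + h_right) / (0.5 * h_left / sig_left + 0.5 * h_right / sig_right)

def arithmetic_edge_average(mu_cells):
    """Arithmetic average of the permeabilities of the cells sharing an edge
    (flux in parallel); a plain mean is used here for simplicity."""
    return np.mean(mu_cells)

# example: a 1000:1 contrast with equal cell widths
print(harmonic_face_average(1.0, 1e-3, 1.0, 1.0))        # ~2e-3, dominated by the resistive cell
print(arithmetic_edge_average([1.0, 1.0, 1e-3, 1e-3]))   # ~0.5, dominated by the permeable cells

For a large contrast the harmonic (series) average is controlled by the resistive cell, while the arithmetic (parallel) average is controlled by the more permeable cells, which is exactly the physical behaviour argued above.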
4 Solution of the discrete system
After the elimination of H, \psi and \hat J from the discrete equations we obtain a large, sparse system for the grid functions corresponding to A and \phi:
K \begin{pmatrix} A \\ \phi \end{pmatrix} = \begin{pmatrix} b_A \\ b_\phi \end{pmatrix}.   (25)
Here H_1 is the result of the discretization of the operator \nabla \times (\mu^{-1}\nabla\times\,) - \nabla(\mu^{-1}\nabla\cdot\,); C corresponds to the discretization of the operator \nabla\times\,; D likewise corresponds to the discretization of the operator \nabla\cdot\,; the diagonal matrix S results from the discretization of the operator \hat\sigma(\cdot); M and M_c similarly arise from the discretization of \mu^{-1}(\cdot) at cell edges and at cell centers respectively; and H_2 represents the discretization of \nabla \cdot (\hat\sigma\nabla(\cdot)). In regions of constant \mu the block H_1 simplifies into a discretization of the vector Laplacian. The blocks are weakly coupled through interfaces in \mu. A typical sparsity structure of H_1 for variable \mu, as contrasted with constant \mu, is displayed in Fig. 3. The structure of the obtained system is similar to that in [15], although the main block diagonal is somewhat less pleasant.
Note that as long as the frequency is low enough that the diffusion number satisfies
\omega\mu\sigma h^2 \lesssim 1
(where h is the maximum grid spacing) the matrix is dominated by the diagonal blocks. This allows us to develop a block preconditioner good for low frequencies \omega based on the truncated ILU (ILU(t)) decomposition of the major blocks [30]. Thus, we approximate K by the block diagonal matrix
\tilde K = \mathrm{blockdiag}(H_1, H_2),   (26)
and then use ILU(t) to obtain a sparse approximate decomposition of the blocks of \tilde K. This decomposition is used as a preconditioner for the Krylov solver. Note that although K is complex the approximation \tilde K is real and therefore we need to apply the ILU decomposition only to two real matrices, which saves memory.
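A possible realization of this block preconditioner with SciPy is sketched below. It assumes the two real diagonal blocks H_1 and H_2 are available as sparse matrices; the function name and the commented usage line are assumptions for illustration rather than the authors' code:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_ilu_preconditioner(H1, H2, drop_tol=1e-2):
    """Build M ~ blockdiag(H1, H2)^{-1} from truncated ILU factors of the two
    real diagonal blocks, applied separately to real and imaginary parts."""
    ilu1 = spla.spilu(sp.csc_matrix(H1), drop_tol=drop_tol)
    ilu2 = spla.spilu(sp.csc_matrix(H2), drop_tol=drop_tol)
    n1, n2 = H1.shape[0], H2.shape[0]

    def apply(r):
        out = np.empty(n1 + n2, dtype=complex)
        out[:n1] = ilu1.solve(r[:n1].real) + 1j * ilu1.solve(r[:n1].imag)
        out[n1:] = ilu2.solve(r[n1:].real) + 1j * ilu2.solve(r[n1:].imag)
        return out

    return spla.LinearOperator((n1 + n2, n1 + n2), matvec=apply, dtype=complex)

# hypothetical usage with an assembled complex system K (A; phi) = b:
# M = block_ilu_preconditioner(H1, H2)
# x, info = spla.bicgstab(K, b, M=M)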
Note that the block approximation (26) makes sense for our discretization
but not for the direct staggered-grid discretization of (1). Thus, the
Figure 3: Sparsity structure of the matrix H_1 corresponding to variable \mu (right) and to constant \mu (left).
reformulation and subsequent discretization of Maxwell's equations allow us
to easily obtain a good block preconditioner in a modular manner, while the
discretization of (1) does not.
5 Numerical examples
Our goal in this section is to demonstrate the efficacy of our proposed method
and to compare it to the more common discretization of the system (1), utilized
e.g. in [24, 29], using standard Krylov-type methods and preconditioners
for solving the resulting discrete systems. We vary the type of source
used, the size of jumps in the coefficients oe and -, the preconditioner and
the grid size.
In the tables below, 'iterations' denotes the number of BICGSTAB iterations needed for achieving a relative accuracy of 10^{-7}, 'operations' denotes the number of giga-flops required, the SSOR parameter (when used) is fixed, and the ILU threshold (when used) equals 10^{-2}. The latter threshold is such that in our current Matlab implementation iterations involving these two preconditioners cost roughly the same.
5.1 Example Set 1
We derive the following set of experiments. Let the air's permeability be \mu_0 and its permittivity \epsilon_0. We assume a cube of constant conductivity \sigma_c and permeability \mu_c embedded in an otherwise homogeneous earth with conductivity \sigma_e and permeability \mu_e; see Fig. 4. In a typical geophysical application, the conductivity may range over four orders of magnitude and more, whereas the permeability rarely changes over more than one order of magnitude. Therefore, we experiment with conductivity \sigma_c ranging from 10^{-2} S/m to 10^3 S/m and permeability \mu_c ranging from \mu_0 to 1000\mu_0.
Figure 4: The setting of our first numerical experiment. A cube of conductivity \sigma_c and permeability \mu_c is embedded inside a homogeneous earth with conductivity \sigma_e and permeability \mu_e. Also, in the air \mu = \mu_0 and \epsilon = \epsilon_0.
We experiment with two different sources: (i) a magnetic source (a plane
wave); and (ii) an electric dipole source in the x direction centered at (0; 0; 0).
The fact that the first source is magnetic implies that it is divergence-free.
This source lies entirely in the active space of the curl operator. In contrast,
the electric source is not divergence-free.
Both sources are assumed to oscillate with different frequencies ranging
from 1 to 10 6 Hz. The solution is obtained, unless otherwise noted, on a
nonuniform tensor grid (see Fig. 4) consisting of cells. There are 95232
unknowns corresponding to this grid and 128000 (complex)
unknowns. We then solve the system using the method described in the
previous section.
5.1.1 Example 1a.
In order to be able to compare the resulting linear algebra solver with that
corresponding to Yee's method we discretize the system
r \Theta [- \Gamma1 r \Theta ( b
using the staggered grid depicted in Fig. 2, i.e., where b
J is on cells' faces,
which is similar to the discretization in [24]. This yields the discrete system
for the unknowns vector e corresponding to grid values of b
J=boe, where the
matrices C; M and S are defined in Section 4 and - b depends on the source.
In order to solve this system as well as ours we use BICGSTAB and an
preconditioner. The comparison between the methods for the case
different frequencies is summarized in Table 2.
Table 2: Iteration counts and computational effort for our method (A, \phi) and the traditional implementation of Yee's method (applied to E, or \hat J/\hat\sigma) in Example Set 1, using both an electric source and a magnetic source; for each source the columns report iterations and operations.
Table
2 shows that our method converges in a moderate number of iterations
for both sources, despite the presence of significant jumps in \mu and \sigma. On the other hand, the more traditional discretization performs poorly
for the electric source and reasonably well for the magnetic source. Slow
convergence of the direct staggered discretization of Maxwell's equations in
the case of an electric source was also reported in [29], where E was defined
on the grid's edges.
These results clearly show an advantage of our formulation over the original
Yee formulation, even for a simple preconditioning, especially for electric
sources and in low frequencies. In such circumstances, the discretized first
term on the left hand side of (27) strongly dominates the other term, and
the residual in the course of the BICGSTAB iteration has a nontrivial component
in the null space of that operator; hence its reduction is very slow.
The magnetic source, on the other hand, yields a special situation where, so
long as the discrete divergence and the roundoff error are relatively small,
the residual component in the null space of the leading term operator is also
small, hence the number of iterations using the traditional method is not
much larger than when using our method.
5.1.2 Example 1b.
Next, we test the effect of discontinuities on our method. We use the electric
source and record the number of BICGSTAB iterations needed for our
method to converge for various values of ~
using block-ILU(t) preconditioning as described in the previous section. The
results are summarized in Table 3.
Note that large jump discontinuities in \sigma do not significantly affect the rate of convergence of the iterative linear system solver for our method, but large jump discontinuities in \mu have a decisive effect. Results in a similar spirit were reported in [16] regarding the effect of discontinuities in \mu on a specialized multigrid method for an edge-element discretization. However, even for large discontinuities in \mu the number of iterations reported in Table 3 remains relatively small compared with similar experiments reported in [29, 9]. We attribute the increase in the number of iterations as the jump in \mu increases in size to a corresponding degradation in the condition number of K in (25). This degradation, however, does not depend strongly on grid
size, as we verify next.
Table 3: Iteration counts for different frequencies, conductivities and permeabilities in Example Set 1. The conductivity/permeability structure is a cube in a half-space.
5.1.3 Example 1c.
In the next experiment, we use the cube model with the electric source to
evaluate the influence of the grid on the number of iterations. We fix
and test our method on a case with a modest coefficient jump ~
and on a
case with a large jump ~
set of uniform grids in the interval [-1, 1]^3 is considered. For each grid we record the resulting number of iterations
using both the SSOR and the block ILU preconditioners. The results of this
experiment are gathered in Table 4.
Table 4: Iteration counts for different grids, for two sets of problem coefficients and using two preconditioners (columns: grid size, then ILU and SSOR iterations for each coefficient set).
We observe that the number of iterations increases as the number of unknowns increases. The increase appears to be roughly proportional to the
number of unknowns to the power 1/3. The growth in number of iterations as a function of grid size is also roughly similar for both preconditioners, although the block ILU requires fewer iterations (about 1/4 as many for the larger \tilde\mu) for each grid. However, ILU requires more memory than SSOR, which may prohibit its use for very large problems. The increase rate is also similar, as expected, for both values of \tilde\mu. Thus, the increase in number of iterations as a function of \tilde\mu essentially does not depend on the grid size. Practically, however, this increase is substantial and may be hard to cope with for (perhaps unrealistically) large values of \tilde\mu using the present techniques.
5.2 Example 2
In our next experiment we consider a more complicated earth structure. We
employ a random model, which is used by practitioners in order to simulate
stochastic earth models [21]. Two distinct value sets (\sigma_1, \mu_1) and (\sigma_2, \mu_2) and a probability P are assumed for the conductivities and permeabilities: for each cell, the probability of having values (\sigma_1, \mu_1) is P and the probability of having values (\sigma_2, \mu_2) is 1 - P. This can be a particularly difficult model to work with,
as the conductivity and permeability may jump anywhere from one cell to
the next, not necessarily just across a well-defined manifold. A cross-section
of such a model is plotted in Fig. 5. We then carry out experiments as before
for frequencies ranging from 0 to 10 6 Hz.
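A sketch of how such a random two-phase model can be generated cell by cell is given below; the parameter values in the example call are made up for illustration:

import numpy as np

def random_two_phase_model(shape, sigma1, mu1, sigma2, mu2, p, seed=0):
    """Cell-wise random (conductivity, permeability) model: each cell independently
    takes the values (sigma1, mu1) with probability p and (sigma2, mu2) otherwise."""
    rng = np.random.default_rng(seed)
    mask = rng.random(shape) < p
    sigma = np.where(mask, sigma1, sigma2)
    mu = np.where(mask, mu1, mu2)
    return sigma, mu

# e.g. a 44^3 grid with a 1000:1 conductivity contrast and a 10:1 permeability contrast
sigma, mu = random_two_phase_model((44, 44, 44), 1.0, 10.0, 1e-3, 1.0, p=0.5)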
We use the random model with different conductivities, permeabilities and frequencies and the electric source, on the domain [-1, 1]^3 (in km). We employ a uniform grid of size 44^3 and use both the block
ILU and the SSOR preconditioners. The results of this experiment in terms
Figure 5: The setting of Example Set 2 (conductivity slice, log(S/m)).
of iteration counts are summarized in Table 5. The results show that our
solution method is effective even for highly varying conductivity. As before,
the method deteriorates when very large variations in - are present.
We can also see from Table 5 that the block ILU preconditioner works
very well for low frequencies, but it is not very effective for high frequencies.
It is easy to check that in all cases where the block ILU preconditioner fails
to achieve convergence (denoted 'nc', meaning failure to achieve a residual of 10^{-7} in 800 iterations) the maximum grid spacing h satisfies \omega\mu\sigma h^2 \gg 1. In
such a case the discretization of the leading terms of the differential operator
no longer yields the dominant blocks in the matrix equations (25), and therefore
our block ILU preconditioner fails. Thus, for high frequency and high
conductivity we require more grid points in order for this preconditioner to
be effective. This is also consistent with the physics, as the skin depth [36]
decreases and the attenuated wave can be simulated with fidelity only on a
finer grid.
Table 5: Iteration counts for different frequencies, conductivities and permeabilities for Example Set 2.
6 Summary and further remarks
In this paper we have developed a fast finite volume algorithm for the solution
of Maxwell's equations with discontinuous conductivity and permeability.
The major components of our approach are as follows.
• Reformulation of Maxwell's equations: The Helmholtz decomposition is applied to E; then a stabilizing term is added, resulting in a strongly elliptic system; the system is written in first order form to allow flexibility in the ensuing discretization; and finally, the divergence-free Coulomb
gauge condition is eliminated using differentiation and substitution,
which yields a weakly coupled PDE system enabling an efficient preconditioner
for the large, sparse algebraic system which results from
the discretization.
• Discretization using staggered grids, respecting continuity conditions
and carefully averaging material properties across discontinuities. For
this discretization, the stabilizing term vanishes at the exact solution of
the discrete equations, which is important for cases with large contrasts
in \mu.
• Solution of the resulting linear system using fast preconditioned Krylov
methods.
The resulting algorithm was tested on a variety of problems. We have shown
dramatic improvement over the more standard algorithm when the source is
electric. Good performance was obtained even when the coefficients \mu and \sigma were allowed to vary rapidly on the grid scale - a case which should prove
challenging for multigrid methods.
The project that has motivated us is electromagnetic inverse problems
in geophysical prospecting [34]. Solving the forward problem, i.e. Maxwell's
equations as in the present article, is a major bottleneck for the data inversion
methods - indeed it may have to be carried out dozens, if not hundreds, of
times for each data inversion. Thus, extremely fast solvers of the problem
discussed in our paper are needed. Based on the algorithm described here an
implementation has been carried out which solves realistic instances of this
forward problem in less than two minutes on a single-processor PC, enabling
derivation of realistic algorithms at low cost for the inverse problem.
Acknowledgment
We wish to thank Drs. Doug Oldenburg and Dave
Moulton for fruitful discussions and an anonymous referee for valuable comments
on the first two sections of our exposition.
--R
A method for the forward modelling of 3D electromagnetic quasi-static problems
Adaptive multilevel methods for edge element discretizations of maxwell's equations.
Whitney forms: a class of finite elements for three-dimensional computations in electromagnetism
Multigrid solution to elliptic flow problems.
Mixed and hybrid finite element methods.
Finite Elements
Numerical algorithms for the FDITD and FDFD simulation of slowly varying electromagnetic feilds.
Theoretical and numerical difficulties in 3D vector potential methods in finite element magnetostatic compu- tations
Black box multigrid.
Geomagnetic induction in a heterogenous sphere: Azimuthally symmetric test computations and the response of an undulating 660-km discontinuity
Finite Element Methods for Navier-Stokes Equations
On pressure boundary conditions for the incompressible Navier-Stokes equations
Fast modelling of 3D electromagnetic using potentials.
Multigrid method for maxwell's equations.
Natural discretizations for the divergence gradient and curl on logically rectangular grids.
The orthogonal decomposition theorems for mimetic finite difference methods.
Mimetic discretizations for Maxwell's equations and equations of magnetic diffusion.
The Finite Element Method in Electromagnetics.
Electrical conduction in sandstones: Examples of two dimensional and three dimensional percolation.
A scalar-vector potential solution for 3D EM finite-difference modeling
Theoretical concepts in physics.
Three dimensional magnetotelluric modeling using difference equations - theory and comparison to integral equation solutions
An analysis of finite volume
A convergence analysis of Yee's scheme on nonuniform grids.
Numerical implementation of nodal methods.
Solutions of 3D eddy current problems by finite elements.
Iterative Methods for Sparse Linear Systems.
A multigrid solver for the steady state navier-stokes equations using the pressure-poisson formulation
Computational Electrodynamics: the Finite-Difference Time-Domain Method
Diadic Green Functions in Electromagnetics.
Electromagnetic theory for geophysical applications.
Time domain electromagnetic field computation with finite difference methods.
Inversion of Magnetotelluric Data for a One Dimensional Conductivity
Numerical solution of initial boundary value problems involving maxwell's equations in isotropic media.
--TR
--CTR
Eldad Haber , Stefan Heldmann, An octree multigrid method for quasi-static Maxwell's equations with highly discontinuous coefficients, Journal of Computational Physics, v.223 n.2, p.783-796, May, 2007 | krylov methods;mixed methods;coulomb gauge;solution discontinuities;finite volume;helmholtz decomposition;maxwell's equations;preconditioning |
587484 | Uniform Convergence and Mesh Independence of Newton''s Method for Discretized Variational Problems. | In an abstract framework, we study local convergence properties of Newton's method for a sequence of generalized equations which models a discretized variational inequality. We identify conditions under which the method is locally quadratically convergent, uniformly in the discretization. Moreover, we show that the distance between the Newton sequence for the continuous problem and the Newton sequence for the discretized problem is bounded by the norm of a residual. As an application, we present mesh-independence results for an optimal control problem with control constraints. | Introduction
In this paper we study local convergence properties of Newton-type methods applied to
discretized variational problems. Our target problem is the variational inequality representing
the first-order optimality conditions in constrained optimal control. In an abstract framework, the optimality conditions are modeled by a "generalized equation", a term coined by S. Robinson [12], where the normal cone mapping is replaced by an arbitrary map with closed graph. In this setting, Newton's method solves at each step a linearized generalized equation. When the generalized equation describes first-order optimality conditions, Newton's method
becomes the well-known sequential quadratic programming (SQP) method.
We identify conditions under which Newton's method is not only locally quadratically
convergent, but the convergence is uniform with respect to the discretization. Moreover, we
derive an estimate for the number of steps required to achieve a given accuracy. Under some
additional assumptions, which are natural in the context of the target problem, we prove
that the distance between the Newton sequences for the continuous problem and the Newton
sequence for the discretized problem, measured in the discrete metric, can be estimated by
the norm of a residual. Normally, the residual tends to zero when the approximation becomes
finer, and the two Newton sequences approach each other. In the context of the target optimal
control problem, the residual is proportional to the mesh spacing h, uniformly along the
Newton sequence. In particular, this implies that the least number of steps needed to reach
a point at distance \varepsilon from the solution of the discrete problem does not depend on the mesh
spacing; that is, the method is mesh-independent.
The convergence of the SQP method applied to nonlinear optimal control problems has
been studied in several papers recently. In [5, 6] we proved local convergence of the method
for a class of constrained optimal control problems. In parallel, Alt and Malanowski obtained
related results for state constrained problems [3]. In the same line Tröltzsch [13] studied the SQP method for a problem involving a parabolic partial differential equation.
Kelley and Sachs [10] were the first to obtain a mesh independence result in constrained
optimal control; they studied the gradient projection method. More recently, uniform convergence
and mesh-independence results for an augmented Lagrangian version of the SQP method,
applied to a discretization of an abstract optimization problem with equality constraints, were
presented by Kunisch and Volkwein [11]. Alt [2] studied the mesh-independence of Newton's
method for generalized equations, in the framework of the analysis of operator equations in
Allgower et al. [1]. An abstract theory of mesh independence for innite-dimensional optimization
problems with equality constraints, together with applications to optimal control of
partial dierential equations and an extended survey of the eld can be found in the thesis of
Volkwein [14].
The local convergence analysis of numerical procedures is closely tied to problem's sta-
bility. The analysis is complicated for optimization problems with inequality constraints, or
for related variational inequalities. In this context, the problem solution typically depends on
perturbation parameters in a nonsmooth way. In Section 2 we present an implicit function
theorem which provides a basis for our further analysis. In Section 3 we obtain a result on
uniform convergence of Newton's method applied to a sequence of generalized equations, while
Section 4 presents our mesh-independence results. Although in part parallel, our approach is
different from the one used by Alt in [2] who adopted the framework of [1]. First, we study
the uniform local convergence of Newton's method which is not considered in [2]. In the mesh-
independence analysis, we avoid consistency conditions for the solutions of the continuous and
the discretized problems; instead, we consider the residual obtained when the Newton sequence
of the continuous problem is substituted into the discrete necessary conditions. This allows
us to obtain mesh independence under conditions weaker than those in [2] and, at the same time, to significantly simplify the analysis.
In Section 5 we apply the abstract results to the constrained optimal control problem
studied in our previous paper [5]. We show that under the smoothness and coercivity conditions
given in [5], and assuming that the optimal control of the continuous problem is a
Lipschitz continuous function of time, the SQP method applied to the discretized problem is
Q-quadratically convergent, and the region of attraction and the constant of the convergence
are independent of the discretization, for a sufficiently small mesh size. Moreover, the l^\infty distance
between the Newton sequence for the continuous problem at the mesh points and the Newton
sequence for the discretized problem is of order O(h). In particular, this estimate implies the
mesh-independence result in Alt [2].
2. Lipschitzian localization
Let X and Y be metric spaces. We denote both metrics by \rho(\cdot, \cdot); it will be clear from the context which metric we are using. B_r(x) denotes the closed ball with center x and radius r. In writing "f maps X into Y" we adopt the convention that the domain of f is a (possibly proper) subset of X. Accordingly, a set-valued map F from X to the subsets of Y may have
empty values.
Definition 2.1. Let \Gamma map Y to the subsets of X and let x^* \in \Gamma(y^*). We say that \Gamma has a Lipschitzian localization with constants a, b and M around (y^*, x^*), if the map y \mapsto \Gamma(y) \cap B_a(x^*) is single-valued (a function) and Lipschitz continuous in B_b(y^*) with a Lipschitz constant M.
Theorem 2.1. Let G map X into the subsets of Y and let y^* \in G(x^*). Let G^{-1} have a Lipschitzian localization with constants a, b, and M around (y^*, x^*). In addition, suppose that the intersection of the graph of G with B_a(x^*) \times B_b(y^*) is closed and that B_a(x^*) is complete. Let
the real numbers ,
M , a, m and satisfy the relations
Suppose that the function continuous with a constant in the ball
B a (x ), that
sup
and that the set
is nonempty.
Then the set fx 2 B a consists of exactly one point,
x, and for each
Proof. Let us choose positive ;
a and such that the relations in (1) hold. We
rst show that the set T := nonempty. Let x 0 2 and put
Take an arbitrary " > 0 such that
Choose an y
and
from the Lipschitzian localization property, there exists x 1 such that
We dene inductively a sequence x k in the following way. Let x be already dened
for some k 1 in such a way that
and
Clearly, x 0 and x 1 satisfy these relations. Using the second inequality in (5), we estimate
a
Thus both x k 1 and x k are in B a (x ) from which we obtain by (2),
k. Due to the assumed Lipschitzian localization property of G, there
exists x k+1 such that (7), with k replaced by k + 1, is satised and
By (6) we obtain
and hence (6) with k replaced by k + 1, is satised. The denition of the sequence x k is
complete.
From (6) and the condition M < 1, \{x^k\} is a Cauchy sequence. Since all x
sequence fx k g has a limit x Passing to the limit in (7), we obtain g(x 00
Hence x 00 2 T and the set T is nonempty. Note that x 00 may depend on the choice of ". If we
prove that the set T is a singleton, say ^
x, the point x would not depend on ".
Suppose that there exist x 00 2 T and
It follows that (g(x); y )
x^{00}. From the definition of the Lipschitzian localization, we obtain
which is a contradiction. Thus T consists of exactly one point, ^ x, which does not depend on
". To prove (4) observe that for any choice of k > 1,
Passing to the limit in the latter inequality and using (5), we obtain
But since x
x does not depend on the choice of ", one can take in (8) and the proof
is complete.
3. Newton's Method
Theorem 2.1 provides a basis for the analysis of the error of approximation and the convergence
of numerical procedures for solving variational problems. In this and the following
section we consider a sequence of so-called "generalized equations". Specifically, for each N let X_N be a closed and convex subset of a Banach space, let Y_N be a linear normed space, let f_N : X_N \to Y_N be a function, and let F_N : X_N \to 2^{Y_N} be a set-valued map with closed graph. We denote by \| \cdot \|_N the norms of both X_N and Y_N. We study the following sequence of problems:
Find x_N such that 0 \in f_N(x_N) + F_N(x_N).   (9)
We assume that there exist constants , ,
, and L, as well as points x
that satisfy the following conditions for each N :
(A2) The function fN is Frechet dierentiable in B (x
N ) and the derivative rfN is Lipschitz
continuous in B (x
N ) with a Lipschitz constant L.
(A3) The map
y 7!
has a Lipschitzian localization with constants ; and
around the point (z
We study the Newton method for solving (9) for a xed N which has the following form:
If x k is the current iterate, the next iterate x k+1 satises
where x 0 is a given starting point. If the range of the map F is just the origin, then (9) is an
equation and (10) becomes the standard Newton method. If F is the normal cone mapping in
a variational inequality describing rst-order optimality conditions, then (10) represents the
rst-order optimality condition for the auxiliary quadratic program associated with the SQP
method.
In the following theorem, by applying Theorem 2.1, we obtain the existence of a locally
unique solution of the problem (9) which is at distance from the reference point proportional
to the norm of the residual z
N . We also show that the method (10) converges Q-quadratically
and this convergence is uniform in N and in the choice of the initial point from a ball around
the reference point x
N with radius independent of N . Note that, for obtaining this result we
do not pass to a limit and consequently, we do not need to consider sequences of generalized
equations.
Theorem 3.1. For every
there exist positive constants and such that if
kz
then the generalized equation (9) has a unique solution xN in B (x
xN satises
Furthermore, for every initial point x
N ) there is a unique Newton sequence fx k g, with
Newton sequence is Q-quadratically convergent to xN , that
is,
where is independent of k; N and x
Proof. Dene
min
s
We will prove the existence and uniqueness of xN by using Theorem 2.1 with
and
Observe that a ; b , and
b a. By (A3) the map G has a Lipschitzian localization
around
. One can check that the relations (1) are satised.
Further, for
N ); we have
Obviously, x
dened in (3). The assumptions of Theorem
2.1 are satised, hence there exists a unique xN in B (x
Hence
xN is a unique solution of
holds. This completes the rst part of the
proof.
Given x
N ), a point x is a Newton step from x k if and only if x satises the
inclusion
where G is the same as above, but now
The proof will be completed if we show that (14) has a unique solution x
this solution satises (12). To this end we apply again Theorem 2.1 with a; b; M;
M , and
the same as in the rst part of the proof, and with
With these identications, it can be checked that the assumptions (1) and (2) hold, and that g
is Lipschitz continuous in B (x
N ) with a Lipschitz constant . Further, by using the solution
xN obtained in the rst part of the proof, we have
L
The last expression has the estimate
Thus xN 2 6= ; and the assumptions of Theorem 2.1 are satised. Hence, there exists a
unique Newton step x k+1 in B (x
N ) and by Theorem 2.1 and (15) it satises
4. Mesh independence
Consider the generalized equation (9) under the assumptions (A1)-(A3). We present first a lemma in which, for simplicity, we suppress the dependence on N.
Lemma 4.1. For every
, every > 0 and for every suciently small > 0, there
exists a positive such that the map
is a Lipschitz continuous function from B (z
for y and for w.
Proof. Let
and > 0. We choose the positive constants and as a solution of
the following system of inequalities:
This system of inequalities is satised by rst taking suciently small, and then taking
suciently small. In particular, we have and .
We apply Theorem 2.1 with
,
and
We have
for all x Hence the function g is Lipschitz continuous with a Lipschitz constant
. For
+2
Note that a point x is in the set P (y only if g(x) 2 G(x). Since
the set dened in (3) is not empty. Hence, from Theorem 2.1 the set P (y
consists of exactly one point. Let us call it x 00 . Applying the same argument to an arbitrary
point (y that there is exactly one point x
Hence x 0 2 and we obtain
It remains to prove that P maps B (z From the last inequality
with
Thus x
In the remaining part of this section, we x
and 0 < < 1, and we choose the
constants and according to Theorem 3.1. For a positive with , let be the constant
whose existence is claimed in Lemma 4.1. Note that can be chosen arbitrarily small; we take
Also, we assume that kz
and consider Newton sequences with initial points
In such a way, the assumptions of Theorem 3.1 hold and we have a unique
Newton sequence which is convergent quadratically to a solution.
Suppose that Newton's method (10) is supplied with the following stopping test: Given \varepsilon > 0, at the k-th step the point x^k is accepted as an approximate solution if
dist(0, f_N(x^k) + F_N(x^k)) \le \varepsilon.   (18)
Denote by k_\varepsilon the first step at which the stopping test (18) is satisfied.
Theorem 4.1. For any positive " < , if x k" is the approximate solution obtained using
the stopping test (18) at the step
and
Proof. Choose an " such that 0 < " < . If the stopping test (18) is satised at x k" , then
there exists v k
" with k v k
" such that
Let P N be dened as in (16) on the basis of fN and FN . Since
Lemma 4.1 implies that
The latter inequality yields (19). For all k < k " , we obtain
Since x k is a Newton iterate, we have
Hence
By the denition of the map P N , the Newton step x 1 from x 0 satises
while the Newton step x 2 from x 1 is
Since P N is Lipschitz continuous with a constant , we have
By induction, the 1)-st Newton step x k+1 satises
Combining (21) and (22) and we obtain the estimate
which yields (20).
Our next result provides a basis for establishing the mesh-independence of Newton's
method (10). Namely, we compare the Newton sequence x^k_N for the "discrete" problem and the Newton sequence for a "continuous" problem which is again described by (9) but with N = 0. Let us assume that the conditions (A1)-(A3) hold for this generalized equation as well. According to Theorem 3.1, for each starting point x^0_0 close to a solution x^*_0, there is a unique Newton sequence x^k_0 which converges to x^*_0 Q-quadratically. To relate the continuous problem to the discrete one, we introduce a mapping \pi_N from X_0 to X_N. Having in mind the application to optimal control presented in the following section, X_0 can be thought of as a space of continuous functions x(\cdot) in [0, 1] and, for a given natural number N and t_i = i/N, X_N will be the space of sequences \{x_i,\ i = 0, 1, \ldots, N\}. In this case the operator \pi_N is the interpolation map (\pi_N x)_i = x(t_i).
Theorem 4.2. Suppose that for every k and N there exists r k
and
In addition, let
for all k and N . Then for all
Proof. By denition, we have
Using Lemma 4.1 we have
By induction we obtain (24).
The above result means that, under our assumptions, the distance between the Newton
sequence for the continuous problem and the Newton sequence for the discretized problem,
measured in the discrete metric, can be estimated by the sup-norm \omega_N of the residual obtained when the Newton sequence for the continuous problem is inserted into the discretized generalized equations. If the sup-norm of the residual tends to zero when the approximation becomes finer, that is, when N \to \infty, then the two Newton sequences approach each other. In
the next section, we will present an application of the abstract analysis to an optimal control
problem for which the residual is proportional to the mesh spacing h, uniformly along the
Newton sequence. For this particular problem Theorem 4.2 implies that the distance between
the Newton sequences for the continuous problem and the Newton sequence for the discretized
problem is O(h).
For simplicity, let us assume that if the continuous Newton process starts from the point x^0_0, then the discrete Newton process starts from \pi_N(x^0_0). Also, suppose that for any fixed w, v \in X_0,
\|\pi_N(w) - \pi_N(v)\|_N \to \|w - v\|_0 \quad as\ N \to \infty.   (25)
In addition, let
where \omega_N is defined in (23). Letting k tend to infinity and assuming that \pi_N is a continuous mapping for each N, Theorem 4.2 gives us the following estimate for the distance between the solution x_N of the discrete problem and the discrete representation \pi_N(x_0) of the solution x_0 of the continuous problem:
Choose a real number " satisfying
where is as in Theorem 3.1. Theorem 4.2 yields the following result:
Theorem 4.3. Let (25) and (26) hold and let \varepsilon satisfy (28). Then for all N sufficiently large,
Proof. Let m be such that
Choose N so large that1
!N < "=2
and
Using Theorem 3.1, Theorem 4.2, (27), and (31), we obtain
This means that if the continuous Newton sequence achieves accuracy " (measured by the
distance to the exact solution) at the m-the step, then the discrete Newton sequences should
achieve the same accuracy " at the (m 1)-st step or earlier. Now we show that the latter
cannot happen earlier than at the (m 1)-st step. Choose N so large that
and suppose that
>From Theorem 3.1, (24), (27), (30) and (31), we get
which contradicts the choice of " in (28).
5. Application to optimal control
We consider the following optimal control problem:
minimize \int_0^1 \varphi(y(t), u(t)) \, dt   (32)
subject to \dot y(t) = g(y(t), u(t)),\ y(0) = y^0,\ u(t) \in U for a.e. t \in [0, 1], u \in L^\infty(IR^m), y \in W^{1,\infty}(IR^n),
where U is a nonempty, closed and convex set in IR^m, and y^0 is a fixed vector in IR^n. L^\infty(IR^m) is the space of essentially bounded and measurable functions with values in IR^m and W^{1,\infty}(IR^n) is the space of Lipschitz continuous functions with values in IR^n.
We are concerned with local analysis of the problem (32) around a fixed local minimizer (y^*, u^*) for which we assume certain regularity. Our first standing assumption is the following:
Smoothness. The optimal control u^* is Lipschitz continuous in [0, 1]. There exists a positive number \epsilon such that the first three derivatives of \varphi and g exist and are continuous in an \epsilon-neighborhood of the set \{(y^*(t), u^*(t)) : t \in [0, 1]\}.
Defining the Hamiltonian H by
H(y, u, \psi) = \varphi(y, u) + \psi^T g(y, u),
it is well known that the first-order necessary optimality conditions at the solution (y^*, u^*) can be expressed in the following way: There exists \psi^* \in W^{1,\infty}(IR^n) such that (y^*, u^*, \psi^*) is a solution of the variational inequality
\dot y(t) = g(y(t), u(t)),\ y(0) = y^0,   (33)
\dot\psi(t) = -\nabla_y H(y(t), u(t), \psi(t)),\ \psi(1) = 0,   (34)
0 \in \nabla_u H(y(t), u(t), \psi(t)) + N_U(u(t)),   (35)
where N_U(u) is the normal cone to the set U at the point u; that is, N_U(u) is empty if u \notin U, while for u \in U,
N_U(u) = \{v \in IR^m : v^T(w - u) \le 0\ for\ all\ w \in U\}.
Although the problem (32) is posed in L^\infty and the optimality system (33)-(35) is satisfied almost everywhere in [0, 1], the regularity we assume for the particular optimal solution implies that at (y^*, u^*, \psi^*) the relations (33)-(35) hold everywhere in [0, 1].
Dening the matrices
yy H(x (t));
where z we employ the following coercivity condition:
Coercivity. There exists > 0 such that
Let N be a natural number, let h = 1/N be the mesh spacing, let t_i = ih, and let \Delta denote the forward difference operator defined by
\Delta x_i = (x_{i+1} - x_i)/h.
We consider the following Euler discretization of the optimality system (33)-(35):
The system (37)-(39) is a discrete-time variational inequality depending on the step size h. It represents the first-order necessary optimality condition for the following discretization of the original problem (32):
subject to y 0
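One plausible realization of the Euler discretization of the state equation (37) and the adjoint equation (38) is the following NumPy sketch: a forward sweep for y and a backward sweep for \psi. The exact index placement in the adjoint (whether \psi_i or \psi_{i+1} enters the Hamiltonian gradient) and the toy dynamics in the example are assumptions made for illustration:

import numpy as np

def euler_state_adjoint(u, y0, g, grad_y_H, N):
    """Forward Euler sweep for the state and a backward sweep for the adjoint.
    g(y, u) is the dynamics; grad_y_H(y, u, psi) is the y-gradient of the Hamiltonian."""
    h = 1.0 / N
    n = len(y0)
    y = np.zeros((N + 1, n)); y[0] = y0
    for i in range(N):
        y[i + 1] = y[i] + h * g(y[i], u[i])
    psi = np.zeros((N + 1, n))            # terminal condition psi_N = 0
    for i in range(N - 1, -1, -1):
        psi[i] = psi[i + 1] + h * grad_y_H(y[i], u[i], psi[i + 1])
    return y, psi

# toy scalar example (made-up dynamics and cost): y' = -y + u, phi = 0.5*(y^2 + u^2)
g = lambda y, u: -y + u
grad_y_H = lambda y, u, psi: y - psi      # d/dy [0.5*(y^2 + u^2) + psi*(-y + u)]
y, psi = euler_state_adjoint(np.full(20, 0.1), np.array([1.0]), g, grad_y_H, 20)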
In this section we examine the following version of the Newton method for solving the variational
system (37)-(39), which corresponds to the SQP method for solving the optimization
problem (40). Let x^k = (y^k, u^k, \psi^k) denote the k-th iterate. Let the superscript k and the subscript i attached to the derivatives of H and g denote their values at x^k_i. Then the new iterate x^{k+1} is a solution of the following linear variational inequality for the variable x = (y, u, \psi):
In [5], Appendix 2, it was proved that the coercivity condition (36) is stable under the Euler discretization; then the variational system (41)-(43) is equivalent, for x^k near x^*, to the following linear-quadratic discrete-time optimal control problem which is expressed in terms of the variables y, u, and z = (y, u):
minimize a quadratic expansion of the cost about x^k
subject to y_0 given and the linearized state equation.
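For readers who wish to experiment with the scheme, the following sketch (in Python, using NumPy) applies Newton's method to the Euler-discretized first-order optimality system in the unconstrained case U = IR^m, which is the situation in which the SQP iteration above reduces to Newton's method. The running cost, the dynamics, and the placement of the adjoint index below are illustrative assumptions and are not taken from (32) or (37)–(39):

import numpy as np

# Minimal sketch: Newton's method applied to the Euler-discretized optimality
# system of a toy problem (assumed data, not the paper's example).
N, h, y_init = 20, 1.0 / 20, 1.0

phi   = lambda y, u: 0.5 * (y**2 + u**2)      # running cost (assumed)
g     = lambda y, u: -y + u + 0.1 * y**2      # dynamics ydot = g(y, u) (assumed)
phi_y = lambda y, u: y
phi_u = lambda y, u: u
g_y   = lambda y, u: -1.0 + 0.2 * y
g_u   = lambda y, u: 1.0

def residual(z):
    # unknowns: y_1..y_N, u_0..u_{N-1}, lambda_1..lambda_N (lam[i] = lambda_{i+1})
    y   = np.concatenate(([y_init], z[:N]))
    u   = z[N:2*N]
    lam = z[2*N:3*N]
    F = np.zeros(3*N)
    for i in range(N):                     # discrete state equation (Euler)
        F[i] = y[i+1] - y[i] - h * g(y[i], u[i])
    for j in range(1, N):                  # stationarity w.r.t. y_j
        F[N+j-1] = h*phi_y(y[j], u[j]) + lam[j-1] - lam[j] - h*lam[j]*g_y(y[j], u[j])
    F[2*N-1] = lam[N-1]                    # no terminal cost: lambda_N = 0
    for i in range(N):                     # stationarity w.r.t. u_i
        F[2*N+i] = h*phi_u(y[i], u[i]) - h*lam[i]*g_u(y[i], u[i])
    return F

def newton(z, tol=1e-10, kmax=20):
    for _ in range(kmax):
        F = residual(z)
        if np.linalg.norm(F, np.inf) < tol:
            break
        J, eps = np.zeros((3*N, 3*N)), 1e-7     # forward-difference Jacobian
        for j in range(3*N):
            zp = z.copy(); zp[j] += eps
            J[:, j] = (residual(zp) - F) / eps
        z = z - np.linalg.solve(J, F)
    return z

z = newton(np.zeros(3*N))
print("discrete control:", np.round(z[N:2*N], 4))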
A natural stopping criterion for the problem at hand is the following: Given ε > 0, a
control ũ^k obtained at the k-th iteration is considered an ε-optimal solution if
dist(∇_u H(ỹ^k_i, ũ^k_i, ψ̃^k_i), N_U(ũ^k_i)) ≤ ε for each i,   (44)
where ỹ^k_i and ψ̃^k_i are the solutions of the state and the adjoint equations (37) and (38) corresponding
to ũ^k.
We now apply the general approach developed in the previous section to the discrete-time
variational inequality (37)–(38). The discrete L^∞_N norm is defined in the natural way.
The variable x is the triple (y, u, ψ) while X_N is the space of all finite sequences x = (y_i, u_i, ψ_i)
with y_0 given, equipped with the L^∞_N
norm. The space Y_N is the Cartesian product of copies of L^∞_N
corresponding to the four components of the function f_N defined by
the discretized optimality system.
With the choice Δ_N(x*) for the reference point,
the general condition (A1) is satisfied by taking
z*_N to be the associated residual.
The first component of z*_N is estimated
by a supremum over the mesh points t_i = ih.
Since g is smooth and both y* and u* are Lipschitz continuous, the above expression is bounded
by O(h). The same bound applies for the second component of z*_N, while the third and fourth
components are zero. Thus the norm of z*_N can be made arbitrarily small for all sufficiently
large N. Condition (A2) follows from the smoothness assumption. A proof of condition (A3)
is contained in the proof of Theorem 6 in [5]. Applying Theorems 3.1 and 4.1 and using the
result from [5], Appendix 2, that the discretized coercivity condition is a second-order sufficient
condition for the discrete problem, we obtain the following theorem:
Theorem 5.1. If Smoothness and Coercivity hold, then there exist positive constants K,
ε̄ and N̄ with the property that for every N > N̄ there is a unique solution (y^h, u^h, ψ^h)
of the variational system (37)–(39) and (y^h, u^h) is a local minimizer for the discrete problem
(40). For every starting point (y^0, u^0, ψ^0) sufficiently close to this solution,
there is a unique SQP sequence (y^k, u^k, ψ^k) which is Q-quadratically convergent, with a constant
K, to the solution (y^h, u^h, ψ^h). In particular, for the sequence of controls we have
Moreover, if the stopping test (44) is applied with an ε ∈ (0, ε̄], then the resulting ε-optimal
control u^{k_ε} satisfies
Note that the termination step k_ε corresponding to the assumed accuracy of the stopping
test can be estimated by Theorem 4.1. Combining the error in the discrete control with
the discrete state equation (37) and the discrete adjoint equation (38) yields corresponding
estimates for the discrete state and adjoint variables.
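Theorem 4.1 is not reproduced here, but under the standard Q-quadratic estimate e_{k+1} ≤ K e_k^2 the termination step k_ε can be bounded explicitly; the following snippet (with illustrative values of K, e_0 and ε) computes this bound:

import math

def steps_to_accuracy(K, e0, eps):
    # Under e_{k+1} <= K*e_k**2 one has K*e_k <= (K*e0)**(2**k), so e_k <= eps as
    # soon as 2**k >= log(K*eps)/log(K*e0), provided K*e0 < 1.
    assert K * e0 < 1 and K * eps < 1
    return math.ceil(math.log2(math.log(K * eps) / math.log(K * e0)))

print(steps_to_accuracy(K=1.2, e0=0.1, eps=1e-8))   # illustrative values; prints 4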
Remark. Following the approach developed in [5] one can obtain an analog of Theorem
5.1 by assuming that the optimal control u is merely bounded and Riemann integrable in
[0; 1] and employing the so-called averaged modulus of smoothness to obtain error estimates.
The stronger Lipschitz continuity condition for the optimal control is however needed in our
analysis of the mesh independence.
The SQP method applied to the continuous-time optimal control problem (32) has the
following form: given a starting point x^0, the iterate x^{k+1} is a solution of the linearized system (45)–(48),
which holds for a.e. t ∈ [0; 1], where the superscript k attached to the derivatives of H and G denotes
their values at x^k. In particular, (45)–(48) is a variational inequality to which we can apply
the general theory from the previous sections. We attach the index 0 to the continuous
problem. Condition (A1)
is clearly satisfied, and condition (A2) follows from
the Smoothness assumption. The condition (A3) follows from the Coercivity assumption as
proved in [9], Lemma 3 (see also [4], Section 2.3.4, for an earlier version of this result in the
convex case). Hence, we can apply Theorem 3.1, obtaining that for any sufficiently small ball
B around x* (in the norm of X_0), if the starting point x^0 is chosen from B, then the SQP
method produces a unique sequence x^k ∈ B which is Q-quadratically convergent to x* (in the
norm of X_0). Moreover, from Theorem 4.1 we obtain an estimate for the number of steps
needed to achieve a given accuracy.
In order to derive a mesh-independence result from the general theory, we first study the
regularity of the SQP sequence for the continuous problem.
Lemma 5.1. There exist positive constants p and q such that for every starting point x^0 ∈ B_p(x*) with u^0 Lipschitz
continuous in [0; 1], every control u^k in the resulting SQP sequence is Lipschitz continuous with a constant bounded by q.
Proof. In [5], Section 6, extending a previous result in [7] (see also [6], Lemma 2), we showed
that the coercivity condition implies pointwise coercivity almost everywhere. In the present
circumstances, the latter condition is satisfied everywhere in [0; 1]; that is, there exists a
positive constant such that for every v ∈ U − U and for all t ∈ [0; 1]
For a positive parameter p consider the SQP sequence x^k starting from x^0 ∈ B_p(x*), where
the initial control u^0 is a Lipschitz continuous function in [0; 1]. Throughout the proof we will
choose p sufficiently small and check the dependence of the constants on p. By (48) the iterate
u^{k+1} satisfies a variational inequality involving
∇²_{uy}H(x^k(t))(y^{k+1}(t) − y^k(t))
for every t ∈ [0; 1] and for every u ∈ U. Let t_1, t_2 ∈ [0; 1]. All iterates are contained in B_p(x*)
for all k, and therefore both ẏ^k and ψ̇^k are bounded by a constant independent of k; hence,
y^k and ψ^k are Lipschitz continuous functions in time with Lipschitz constants independent of
k. We have from (50) an inequality involving ∇²_{uy}H(x^k(t_1)),
and the analogous inequality with t_1 and t_2 interchanged. Adding these two inequalities,
and adding and subtracting the expressions ∇²_{uy}H evaluated at the appropriate arguments, we obtain an estimate (51)
in which a function of t (with superscript k) appears; it is defined as
By (49), for a sufficiently small p the right-hand side of the inequality (51) satisfies (52).
Combining (51) and (52) we obtain (53).
Let u^k be Lipschitz continuous in time with a constant L_k. Then the function just defined is almost
everywhere differentiable and its derivative is given by (54).
From this expression we obtain that there exists a constant c_1, independent of k and t and
bounded from above when p → 0, such that
Estimating the expressions in the right-hand side of (54) we obtain that there exists a constant
c_2, independent of k and t and bounded from above when p → 0, such that
Hence, u^{k+1} is Lipschitz continuous and, for some constant c of the same kind as c_1, c_2, its
Lipschitz constant L_{k+1} satisfies
Since p can be chosen arbitrarily small, the sequence L_k is bounded, i.e., bounded by a
constant q. The proof is complete.
To apply the general mesh-independence result presented in Theorem 4.2 we need to estimate
the residual r^k
obtained when the SQP sequence of the continuous problem is substituted
into the relations determining the SQP sequence of the discretized problem. Specifically, the
residual is the remainder term associated with the Euler scheme applied to (45)–(48); that is,
where the subscript i denotes the value at t_i. From the regularity of the Newton sequence
established in Lemma 5.1, the uniform norm of the residual is bounded by ch, where c is
independent of k. Note that the map Δ_N(x) defined in Section 4, acting on a function x,
gives the sequence (x(t_i)). Condition (25) is satisfied because the space X_0 is a
subset of the space of continuous functions. Summarizing, we obtain the following result:
Theorem 5.2. Suppose that the Smoothness and Coercivity conditions hold. Then there exists
a neighborhood W, in the norm of X_0, of the solution x* such that for all
sufficiently small step-sizes h, the following mesh-independence property holds:
sup_{0 ≤ i ≤ N} |u^k(t_i) − u^k_{h,i}| ≤ c h for all k,   (55)
where u^k(·) is the control in the SQP sequence (y^k(·), u^k(·), ψ^k(·)) for the continuous problem
starting from a point x^0 ∈ W with u^0 Lipschitz continuous in [0; 1], and u^k_h is
the control in the SQP sequence (y^k_h, u^k_h, ψ^k_h) for the discretized problem starting from the point
Δ_N(x^0).
Applying Theorem 4.3 to the optimal control problem considered we obtain the mesh-independence
property (29), which relates the number of steps for the continuous and the
discretized problem needed to achieve a certain accuracy. The latter property can also be easily
deduced from the estimate (55) in Theorem 5.2, in a way analogous to the proof of Theorem
4.3. Therefore the estimate (55) is a stronger mesh-independence property than (29).
Table 1: L^∞ error in the control for various choices of the mesh.
Table 2: Error in current iterate divided by error in prior iterate squared.
6. Numerical examples
The convergence estimate of Theorem 5.2 is illustrated using the following example:
minimize ∫₀¹ (· · ·) dt
subject to ẏ = (· · ·), y(0) given.
This problem is a variation of Problem I in [8] that has been converted from a linear-quadratic
problem to a fully nonlinear problem by making a substitution in the state variable and by adding
additional terms to the cost function that degrade the speed of the SQP iteration so that the
convergence is readily visible (without these additional terms, the SQP iteration converges to
computing precision within 2 iterations). Figures 1–3 show the control iterates for successively
finer meshes. The control corresponding to the finest mesh is barely visible beneath the others.
Observe that the SQP iterations are relatively insensitive to the choice of the mesh. Specifically,
a relatively coarse mesh is already sufficiently large to obtain mesh independence. In Table 1 we give the L^∞
error in the successive iterates. In Table 2 we observe that the ratio of the error in the current
iterate to the error in the prior iterate squared is slightly larger than 1.
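The quantities reported in Tables 1 and 2 can be reproduced from stored control iterates as follows; since the data of the example are not reproduced above, the snippet below uses a synthetic quadratically convergent sequence purely to illustrate the computation:

import numpy as np

def convergence_tables(iterates, reference):
    # L-infinity error of each iterate (as in Table 1) and the ratio
    # err[k+1]/err[k]**2 (as in Table 2), measured against a reference solution.
    errs = [float(np.max(np.abs(u - reference))) for u in iterates]
    ratios = [errs[k + 1] / errs[k] ** 2 for k in range(len(errs) - 1)]
    return errs, ratios

# Synthetic stand-in for the control iterates: e_{k+1} = 1.1 * e_k**2,
# mimicking Q-quadratic convergence with a constant slightly above 1.
ref, e = np.zeros(5), [0.3]
for _ in range(4):
    e.append(1.1 * e[-1] ** 2)
iterates = [ref + ek for ek in e]

errs, ratios = convergence_tables(iterates, ref)
print("L-infinity errors:", ["%.3e" % v for v in errs])
print("ratios e_{k+1}/e_k^2:", ["%.3f" % v for v in ratios])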
Figure 1. SQP iterates for the control.
Figure 2. SQP iterates for the control.
Figure 3. SQP iterates for the control.
--R
Discretization and mesh-independence of Newton's method for generalized equations
The Lagrange-Newton method for state constrained optimal control problems
Lipschitzian stability in nonlinear control and optimization
Variants of the Kuhn-Tucker sufficient conditions in cones of non-negative functions
Dual approximations in optimal control
Multiplier methods for nonlinear optimal control
Mesh independence of the gradient projection method for optimal control problems
Augmented Lagrangian-SQP techniques and their approximations
Mathematical programming: the state of the art (Bonn
--TR
--CTR
Steven J. Benson , Lois Curfman McInnes , Jorge J. Mor, A case study in the performance and scalability of optimization algorithms, ACM Transactions on Mathematical Software (TOMS), v.27 n.3, p.361-376, September 2001 | optimal control;newton's method;discrete approximation;sequential quadratic programming;mesh independence;variational inequality |
587495 | Output Tracking Through Singularities. | Output tracking for nonlinear systems is complicated by the existence of "singular submanifolds." These are surfaces on which the decoupling matrix loses rank. To provide additional control action we identify a class of smooth vector fields whose integral curves can be incrementally tracked using rapidly switched piecewise constant controls. At discrete times the resulting piecewise smooth state trajectories approach the integral curve being tracked. These discontinuous controllers are applied to sliding mode control---we use incremental tracking to move the state toward the sliding surface. The resulting controller achieves approximate output tracking in situations where the usual approach to sliding mode control fails due to the loss of control action on the singular submanifold. | Introduction
Tracking in the case where the decoupling matrix loses rank on a "singular
submanifold" has been considered by a number of authors (c.f. [2, 5, 6, 7, 9,
15]). In [2] the problem of exact tracking is studied using results on singular
ordinary differential equations and on the multiplicity of solutions. Conditions
under which the singular tracking control is smooth or analytic are given in
[9], assuming that the inputs and some of their derivatives are related to
the outputs and their derivatives via a singular ordinary differential equation.
Elsewhere, output trajectories which the system can track using continuous open
loop controls are identified for systems which satisfy a suitable observability
condition, and a discontinuous feedback controller is introduced which achieves
robust tracking in the face of perturbations. In [5] the relative order is locally
This work was supported in part by the Natural Sciences and Engineering Research
Council of Canada
y Department of Mathematics and Statistics, Queen's University, Kingston, Ontario K7L
3N6, Canada. E-mail: ron@mast.QueensU.CA
increased by keeping the state trajectory near a codimension one submanifold.
In some sense our approach takes the opposite point of view in that we seek to
reduce the relative order by using vibratory controls. These switched controls
allow motion in directions other than those of the drift vector field or vector
fields in the Lie Algebra generated by the control vector fields.
Recently there has been increased interest in the use of patterns in control.
The pioneering work of Brockett [1], Pomet [12], Liu and Sussmann [10] and
others looks at curves that can be approached by state trajectories of smooth
affine systems. For single-input systems these results highlight the very limited
class of smooth paths which can be closely approximated by the state
trajectory. We introduce the notion of incremental tracking of smooth integral
curves by state trajectories. The state trajectories are permitted to move
far from the integral curve being tracked but are required to approach them
arbitrarily closely arbitrarily often. This weaker notion of approximation by
the state trajectory lends itself well to sliding mode control where we wish to
steer the state to a sliding surface. This is a surface on which the state evolves
so that the tracking errors go to zero. We are not concerned about the path
along which the trajectory approaches the sliding surface as long as any large
deviations take place in directions which are not seen directly by the output.
Sliding mode control utilizing discontinuous feedback controllers can achieve
robust asymptotic output tracking (c.f. [16, 13, 14] and the references therein)
under the implicit assumption that the state trajectory can always be steered
towards the \sliding surface". That is the decoupling matrix is of full rank
everywhere (c.f. [8]). In [6] sliding mode control is studied in the case where
the decoupling matrix loses rank and there exists a "singular submanifold"
near which the state trajectory cannot be steered towards the sliding surface.
For systems whose singular submanifold satisfies suitable transversality
conditions a class of smooth output functions y_d is identified which can be
approximately tracked using a truncated sliding mode controller. For these
outputs the state trajectory passes through the "singular submanifold" a finite
number of times. There are, however, many simple systems where truncated
controllers cause the state trajectory to "stick" to the "singular submanifold"
so that the state moves ever farther from the sliding surface. For such systems
the standard approaches to output tracking are also not very successful. The
following example illustrates the difficulties which can arise.
Example 1.1 Consider the affine nonlinear system in IR^3
_
_
_
(1)
Suppose that we wish to regulate the output
close to y d (t) while keeping the state vector bounded. If
then we can regulate y by keeping the state trajectory on or near
to the \sliding
y d g. We note
that without the term x 2
3 this system is linear with relative order 3 but here
and the relative order of y is 2 (c.f. [6, 8]). In particular
_
y d
y d and . The natural sliding
mode controller usm
x(t) reaches S p
t and stays in S p
t after a finite time has elapsed (c.f. [16],
[13]). Inherent in this control scheme is the assumption that b does not vanish
along the state trajectory. Of course in our case b vanishes on the "singular
manifold" N, hence u_sm can become unbounded as
x(t) approaches N. One natural solution is to use the truncated controller
or the simpler controller
For linear systems such truncated controllers work on a neighbourhood of the
origin which expands as L grows. This is not the case here. In fact, suppose
that that we wish to track y negative, and x
positive). If we perturb x 3 so that x
sm < 0
hence _
returns to N . For x
sm > 0 and
once again x returns to N. In essence the state trajectory will "stick" to the
submanifold {x_3 = 0}. Of course on N we have
that _
and the state trajectory evolves on N in such a way that
Of course we can track y_d using this approach if the initial state x_2(0) > 0.
The larger x_2(0) is, the more we can insulate the system from the above
phenomena. On the other hand, if we track y_d = sin t, even with x_2(0) > 0,
we will inevitably find that x_2 becomes negative and the above problem
dominates. This phenomenon is illustrated by Figure 1, which shows the results of
a simulation performed using Simnon/PCW for Windows Version 2.01 (SSPA
Maritime Consulting AB, Sweden). If x_2(0) < 0 then the divergence of s and
e is immediate. With controller (2),
the onset of this divergence is only delayed.
Figure 1: Tracking of sin t using a truncated sliding mode controller.
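Since the equations of system (1) are not reproduced above, the following sketch is only a generic harness for the truncated (saturated) relative-degree-two sliding-mode law discussed in this example; the two-state dynamics used below are a hypothetical stand-in that shares the relevant feature, namely an input coefficient b(x) that vanishes on a singular set:

import numpy as np

# Hypothetical stand-in (not system (1)):  xdot1 = x2**2, xdot2 = u, y = x1,
# s = (ydot - ydot_d) + lam*(y - y_d),  sdot = a(x,t) + b(x)*u with  b(x) = 2*x2.
lam, K, L, dt = 1.0, 2.0, 5.0, 1e-3
y_d   = lambda t: np.sin(t)
yd_d  = lambda t: np.cos(t)
ydd_d = lambda t: -np.sin(t)

def simulate(x, T):
    out = []
    for t in np.arange(0.0, T, dt):
        s = (x[1]**2 - yd_d(t)) + lam * (x[0] - y_d(t))
        a = -ydd_d(t) + lam * (x[1]**2 - yd_d(t))
        b = 2.0 * x[1]
        u_sm = (-a - K * np.sign(s)) / b if abs(b) > 1e-9 else 0.0
        u = np.clip(u_sm, -L, L)                  # truncation at level L, cf. (2)
        x = x + dt * np.array([x[1]**2, u])       # Euler step of the stand-in dynamics
        out.append([t, x[0], y_d(t)])
    return np.array(out)

traj = simulate(np.array([0.0, 0.0]), T=10.0)
print("max |y - y_d| on [0, 10]:", np.abs(traj[:, 1] - traj[:, 2]).max())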
It is of interest to note that if we could enforce s ≡ 0 exactly in the case y_d ≡ 0,
then the resulting "zero dynamics" are unstable.
The approximate input-output linearization scheme of [5] applied to this
example has similar problems. Tracking schemes which are based on
differentiating y until u appears come up against this same obstruction. Tomlin
and Sastry have observed a similar phenomenon in the Ball and Beam example
[15], where their switched control scheme is not effective. The above example
presents similar obstructions.
Instead of taking more derivatives of s to deal with the singular submanifold
N we use fewer derivatives. As a result we lose direct control over s (as
ṡ is independent of u) but avoid the problems associated with the "singular
manifold". We introduce a switched periodic controller which causes the state
to "incrementally track" the integral curve of a vector field obtained from Lie
Brackets of the drift and control vector fields. The resulting continuous but
nonsmooth state trajectory approaches the sliding surface. We will return to
this example in Section 4.
The rest of the paper is organized as follows: in Section 2 we formulate
the sliding mode control problem for single-input single-output affine nonlinear
systems. In Section 3 we introduce our switched controllers and present our
results on approximate trajectory tracking for systems with drift. In Section
4 we state and prove our main results - applications of incremental tracking
to sliding mode control - and continue the above example. Finally, some
concluding remarks are offered in Section 5.
Output tracking and sliding surfaces
Suppose that M is a smooth manifold. Given a smooth function h on M
and a vector field X(x) on M, Xh(x) denotes the Lie derivative
of h(x) along X(x), and X_t(x_0) denotes the integral curve of X passing through x_0
at t = 0. If
Y is a smooth vector field on
M then [X, Y] denotes the Lie Bracket of X and
Y, ad_X Y = [X, Y], and LA{X, Y} denotes the Lie Algebra generated by X and Y,
i.e. the smallest vector space containing X and Y and closed under
Lie Brackets. Suppose that N is a codimension 1 submanifold of M. A vector
field X is transversal to N if X(x) ∉ T_x N for all x ∈ N, where T_x N is the tangent
space to N at x. If P ⊂ Q is a submanifold and f : M → Q is a smooth map of
manifolds then f is transversal to P if Image(df_x) + T_{f(x)}P = T_{f(x)}Q for all x with f(x) ∈ P.
Consider the nonlinear control system model
ẋ = f(x) + g(x) u,   y = h(x),   (3)
where M ⊂ IR^ℓ is a smooth m dimensional embedded submanifold of IR^ℓ, f and g are smooth vector fields
on M, and h is a smooth output function on M. If x ∈ M we denote by ||x||
the norm on M which is induced by the standard norm on IR^ℓ.
Suppose that y_d(t) is a smooth function which we wish the
output y of (3) to track. The standard approach in sliding mode control (c.f.
[13, 16]) is to force the evolution of the output tracking error to
be governed by a stable differential equation of the form s(e_p(t)) = 0, with s
linear, so that the error tends to zero.
Definition 2.1 The output of (3) can approximately track y_d to degree p if,
given any δ > 0, there exists an admissible input u_δ and time t_δ > t_0 such
that |s(e_p(t))| ≤ δ for t ≥ t_δ and the resulting state x(t) is bounded on [t_δ, ∞). We say
that y asymptotically tracks y_d to degree p if s(e_p(t)) → 0 and x(t) is bounded
on [t_0, ∞).
The relative degree r of the output y is the least positive integer for which
the derivative y (r) (t) is an explicit function of the input u. More precisely r is
the least positive integer for which gf (r 1) h 6 0 (c.f. [7, 8]). For single-input
systems the \decoupling matrix" is the 1 1 matrix whose entry is gf (r 1) h.
Thus the rank of the decoupling matrix changes where gf (r 1) h vanishes. We
choose p r to avoid a possibly singular dierential equation for u. Thus
If we set h
then s(e p is equivalent to the requirement that s p (x(t);
In particular if we let S p
t denote the sliding surface
then x(t) 2 S p
tracking. Similarly if
d (t)g (7)
t and x(t)
d and perfect tracking.
Our first assumption is that S^p_t is a submanifold.
A1. S^p_t is an embedded codimension 1 submanifold of M for all t ≥ t_0.
Remark 2.2 It is straightforward to show that A1 holds if the map h_p is
transversal to the hyperplane {s = s^1_d(t)} (c.f. [6]).
The standard sliding mode controller approach (c.f. [6], [13],[16]) is to
pick the relative order of the output y. Then u appears explicitly
in d
d (t)) and h(x). The standard sliding
mode controller takes the form usm (x;
where K > 0. Using this control d
hence, after some nite time t f t 0 , we will have s r
. If, in addition, the system has bounded \zero dynamics" on E p
t then
asymptotic tracking of an output y_d will be achieved (c.f. [8]). We note
that systems which fail to be strongly observable in the sense of [7] can have
unstable zero dynamics (c.f. [6, 15]). Of course the assumption that b does
not vanish along the state trajectory is strong. It holds in the linear case
but it is rarer in the nonlinear case. Typically b vanishes on the singular
submanifold N, and the controller u_sm becomes unbounded when the
state trajectory reaches N. A natural solution is to use a truncated controller,
but the resulting state trajectory can "stick" to N and evolve in such a way
that one travels away from S^p_t on N (such is the case in Example 1.1). We now
introduce switched controllers which permit us to move towards the sliding
surface even if b(x) vanishes.
3 Incremental Tracking
The set of curves which can be approximately tracked by the state trajectories
of affine systems has been characterized in [12]. For single-input systems the
state trajectory can only be made to stay close to integral curves of vector
fields of the form f + αg where α is a smooth function on M. Thus to make the
state approach the sliding surface S^r_t
(where r is the relative degree of y) we are
limited to the standard sliding mode controller and the problems associated
with singular submanifolds. We seek instead to identify vector fields whose
integral curves can be approached arbitrarily closely at discrete times by the
state trajectory. If the deviations from the integral curve are "parallel" to S^p_t
for some p ≤ r we can use these state trajectories to implement sliding mode
controllers for which singular manifolds do not pose a problem.
Definition 3.1 The integral curves of a smooth vector field X are said to be
incrementally tracked by the state of (3) if there exist controllers {u_n} with
the following property:
(a) each u_n(x, t) is smooth with respect to x and is piecewise constant and
periodic with respect to t with period τ_n;
(b) if γ(t) is an integral curve of X on [0; 1] and x_n(t) is the state trajectory when
u = u_n, then, for n sufficiently large, x_n(t_k) is arbitrarily close to γ(t_k) at the discrete times t_k.
Figure 2: Incremental Tracking of γ(t).
While not essential, we will assume that vector fields are complete. Let I
denote the set of vector fields on M whose integral curves can be incrementally
tracked by the state of system (3) and I_0 the subset of I consisting of vector
fields X with αX ∈ I for all smooth functions α.
Theorem 3.2 The sets of vector fields I and I_0 whose integral curves can be
incrementally tracked by the state of (3) have the following properties:
I_0 is a Lie Algebra over IR. If X ∈ I, Y ∈ I_0 then X + Y ∈ I.
(iii) Suppose that Y 2 I and X; ad k+1
(a) If [ad i
ad k
(b) If ad 2k
(iv) If ad k+1
is odd) and ad k
f can be incrementally
tracked by the state of (3) using the periodic switched controllers
dened by:
and
Remark 3.3 For the linear system model ẋ = Ax + bu, Theorem 3.2 implies that
ad_f g ∈ I_0. Repeating these steps with
ad_f g in place of g, etc., we find that b, Ab, . . . , A^{n−1} b ∈ I_0 and
hence these constant vector fields can be incrementally tracked by the state.
From this one can deduce the standard linear result on controllability. We
also note that (ii) above implies that incremental tracking of the drift vector
field is preserved under smooth static state feedback. We also point out the
fact that condition (iv) is nongeneric and will hold only for certain special
systems.
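The mechanism behind Theorem 3.2, namely generating motion in a bracket direction by rapidly switching between scaled flows of X and Y, can be illustrated numerically. The sketch below uses the planar fields X = ∂/∂x_1 and Y = x_1 ∂/∂x_2 (so [X, Y] = ∂/∂x_2), which are illustrative choices and not the vector fields of system (3); the scaling follows the standard commutator estimate rather than the exact constants used in the proof:

import numpy as np

# Exact flows of the illustrative fields X = d/dx1 and Y = x1 d/dx2.
flow_X = lambda x, s: np.array([x[0] + s, x[1]])
flow_Y = lambda x, s: np.array([x[0], x[1] + s * x[0]])

def incremental_track_bracket(x0, n):
    # One cycle = four segments of length 1/(4n) along the fields scaled by 4*sqrt(n),
    # so each cycle advances the flow of [X, Y] by 1/n; states at times k/n are recorded.
    x, pts = np.array(x0, dtype=float), []
    s = 4.0 * np.sqrt(n) * (1.0 / (4 * n))   # flow parameter per segment = 1/sqrt(n)
    pts.append(x.copy())
    for _ in range(n):
        x = flow_X(x,  s)
        x = flow_Y(x,  s)
        x = flow_X(x, -s)
        x = flow_Y(x, -s)
        pts.append(x.copy())
    return np.array(pts)

n = 50
pts = incremental_track_bracket([0.0, 0.0], n)
target = np.array([[0.0, k / n] for k in range(n + 1)])   # integral curve of [X, Y]
print("max deviation at the discrete times:", np.abs(pts - target).max())

For this particular pair of fields the deviation at the discrete times is zero up to rounding, while between those times the state wanders along the X and Y directions, exactly the behaviour allowed by Definition 3.1.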
We are interested in incremental tracking where large deviations of the
state trajectory from the integral curve have only a small effect on the output
of the system. We now make this notion more precise:
Definition 3.4 Suppose that ε > 0 and X is a vector field on M whose
integral curves can be incrementally tracked by the state {x_n(t)} of (3) using
controllers {u_n}. If, for n sufficiently large,
we say that the integral curves of X can be incrementally tracked preserving h_p.
Let I^p denote the set of vector fields on M whose integral curves can
be incrementally tracked preserving h_p and I^p_0 the subset of I^p consisting of
vector fields X with αX ∈ I^p for all smooth functions α.
We assume that p ≤ r.
Theorem 3.5 The set of vector elds I p and I p
0 have the following properties:
0 is a Lie Algebra over IR. If X 2 I
(iii) Suppose that Y 2 I p , X; ad k+1
Then
(a) If [ad i
k is odd).
(b) If ad 2k
(iv) If ad k+1
and the output of system (3) has relative order r > p then
ad k
Example 3.6 (Example 1.1 continued) Here we have
2. Thus condition (iv) of Theorem 3.5 holds and
ad 2
Proof (Theorem 3.2)
An integral curve of f can be tracked exactly using u
1. In this case the corresponding state trajectory x n
hence f 2 I. Now let
smooth and set t
(x)n, and
which approximates (t k ). In particular we can guarantee that jj (t k )
su-ciently large. This means that
I hence g 2 I 0 . Note that in both of the above cases x n (t) stays close
to (t) 8t 2 [0; 1].
Suppose that X;Y 2 I 0 , (t) is an integral curve for X+Y on [0; 1], and
> 0. Then 2X; 2Y 2 I 0 and if ' > 0 we dene the \switched integral curve"
It follows that
Continuing to switch
between integral curves of X and Y we get
Here
In particular, for ' su-ciently large, jj
that given 0 > 0 there exist piecewise constant periodic wrt t controllers
with period such that the integral curves
of 2Y are incrementally tracked by the corresponding state trajectory x n (t).
Thus we have jj 2Y k=n
su-ciently large. In particular if we can arrange that
su-ciently large. Similarly
exists controllers fu 0
with period 0
n =n such that
. Thus this concatenation of
n g and fu n g results in a piecewise smooth state trajectory ~
x n which achieves
n and n su-ciently large. Now we
repeat the pattern (u n followed by u 0
n ) to generate a piecewise smooth state
trajectory ~
x n for which jj (t k ) ~
applications of
the triangle inequality). Thus we can choose to achieve incremental
tracking of X +Y , hence X +Y 2 I. Now we can repeat the above argument
using X;Y to conclude that (X To show
that we argue as above. If ' > 0 then
the \switched integral curve"
(t) produced by following the integral curve
for
'Y for 1=4' units of time, then the integral curve for
'X for 1=4'
units of time, then the integral curve for
'Y followed by that of
'X. Then
'X 1=4' (4
'Y 1=4' (4
'X 1=4' (4
so that
(Y 1=
(Y 1=
assuming
xed (c.f. [18]). Continuing to switch between these integral
curves we generate
su-ciently large,
where (t) is an integral curve for [X; Y ] on [0; 1], t
they can be incrementally tracked using periodic
switched controllers fu n g and fu 0
g. We then argue as above to show that
Repeating these steps with p
aY shows that a[X; Y
hence Finally, suppose that X 2 I; Y 2 I 0 . Let (t) be an
integral curve for positive integer, and > 0. Then
I and we dene the \switched integral curve"
t < 1=m' and
Continuing to switch
between integral curves of X and Y we get
so, for ' and m su-ciently large jj
Now repeat the argument used to show that I 0 is closed under
sums to conclude that
(iii) (a): Suppose that Y 2 I, X; ad k+1
by (t) the switched integral curve which results from following the integral
curve for X for 1=n 2 units of time where then following the
integral curve for Y for 1=n 2 units of time, and nally following the integral
curve for X units of time. By construction
Noting that
ad i
an absolutely convergent series for all t (c.f. [17, 18]), we see that
where
Since X and ad k+1
Algebra from (ii) above, it follows that
In particular the integral curve for B (writing B; G for
B(n); G(n)) can be incrementally tracked by the state. This means that the
switched integral curve
which results from following the integral curve for
nB for time 1=n 2 followed by the switched integral curve (t) results in
Using the Baker-Campbell-Hausdor Formula [17], which converges for n sufciently
large, we have
[G;
From the denitions for G(n) and B(n) and in light of hypotheses (iii)(a) we
hence 1
Tedious applications of the Jacobi
identity show that 1
a consequence of hypotheses (iii)(a), and the same conclusion
applies to the higher order terms in the Baker-Campbell-Hausdor series.
In particular we see that
Repeating ' times the switched integral curves used to generate
we arrive at the state
observe that
ad k
(t) is a switched integral curve of vector elds
which can be incrementally tracked by the state of system (3). Furthermore
if
(x) as n ! 1. If (t) is
the integral curve for ad k
for n su-ciently large and
switched between integral
curves of vector elds which can be incrementally tracked. Thus we can
repeat the argument used in (ii) above to show that there exist piecewise constant
periodic controllers fu n g with periods
that su-ciently large and
This implies that ad k
is odd we replace X with −X and proceed
as above to conclude that ad k
I from which we deduce that ad k
(iii)(b): This is a particular case of (iii)(a).
This result is a consequence of (i) and (iii)(a). If we set
as a consequence of (i). Since ad k+1
that ad 2k
holds
(and also (iii)(b)). In particular we can conclude that ad k
can check that the controller u n dened in (iv) is precisely the one used in the
proof of (iii)(a). A more direct approach to the proof of (iv) is illuminating
and is outlined below. Using the control u n (t) dened in (iv) (and t
save accounting) we have
Applying the Baker-Campbell-Hausdor formula (c.f. [17]) two times we can
In the case (with help of
MAPLE V) the expression
72n 3=2
96n 7=2
Because ad 3
it is not hard to show that all terms in X(n) other than
ad 2
are multiplied by negative powers of n. In particular lim n!1 X(n) =6
similar situation holds for other values of k , that is lim n!1
Repeating the above we nd that x n (3'=n 2
f is incrementally tracked by
the state of system (3).
Proof (Theorem 3.5)
(i): We can track an integral curve (t) of f exactly using u
This means that f 2 I p . As noted
in the proof of Theorem 3.2 we can nd controllers u n such that the corresponding
state trajectory x n closely follows the integral curves for
g for all
just for discrete times). Since the state trajectory x n makes
no large deviation from the integral curve of
g we have incremental tracking
preserving h p .
(ii): In the proof of Theorem 3.2 (ii) we saw that an integral curve (t)
of can be tracked by switched integral curves of X and Y which stay
close to (t) for all t 2 [0; 1]. Since X;Y 2 I p we can nd switched controllers
n such that the corresponding state trajectories x incrementally
track the integral curves of X and Y while preserving h p . Thus the image
under h p of the concatenation of x used to incrementally track (t)
will stay close to h p ((t)) and we will have incremental tracking of X
preserving h p . The same situation holds in the case of [X; Y ].
(iii)(a): In the proof of Theorem 3.2 (iii)(a) we constructed switched integral
curves of X and Y which incrementally track integral curves of ad k
need
not closely approximate these curves except at a discrete set of times. Thus
the controllers u n produce state trajectories x n which incrementally track the
integral curve
making frequent and large deviations
from (t). By construction these large motions are along integral curves for
the vector elds X and
we have (Xad k+1
LA we have Zh In particular B(n)h
that the large motions of the state trajectory x n are in directions in which h p
does not vary. Thus we achieve incremental tracking of (t) preserving h p .
(iii)(b): This is a particular case of (iii)(a).
gh (denition
of relative order) we have gh and the result follows from (iii)(b) above.
4 Incremental Sliding Mode Controllers
In the nonsingular case the simple sliding mode controller (2) gives rise to
vector fields f + Lg and f − Lg with several noteworthy properties. Given
any compact subset C there exists L > 0 such that:
(i) On the set C
Remark 4.1 Suppose that y d is a smooth function satisfying
d
on Condition (i) implies that if the state stays in C then
the output will asymptotically track y d using the simplied controller (2).
In particular if s p (x(t); t) > 0 (so that we are \above" the sliding surface
d
dt
d
dt
d
dt
s(y r
From the denition of s (s is linear) we have d
d
This, combined with
our assumption that 1 s( _
y r
d (t)) or d
d (t)) 1 , yields d
In particular the state trajectory returns to the sliding
surface fs p (x; 0g. A similar situation results when s
Remark 4.2 Condition (ii) follows from the denition of the relative order
r, since gf i 1. That this is important in sliding mode
control can be seen as follows: when the state \slides" on the sliding surface
t the trajectory is the integral curve of the \equivalent vector eld" on S r
which has the form
[3]). Note that Xh r As
a consequence along this integral curve the tracking error satises the stable
dierential equation s(e r dened by (4).
We seek to weaken the above in several ways. First we use the sliding
surface S^p_t where p is allowed to be smaller than the relative order r of y. As
a consequence of Theorem 3.5, f ± Lg ∈ I^p. We relax (ii) by allowing vector
fields of the form d^± ∈ I^p such that d^± h_p = 0, and we only require
(i) above to hold on an open subset Z of M which is invariant under the
integral curves of d^±. We summarize these observations as follows:
Definition 4.3 Let X be a vector field on M. An open subset Z of M is said
to be invariant with respect to a vector field X if, for all x ∈ Z, the integral
curve t ↦ X_t(x) stays in Z.
A2. There exists an open subset Z ⊂ M invariant with respect to the vector
fields d^+ and d^-, such that:
(i) On Z
If A2 holds for constants the following restricted class
of desired output functions:
fy d j 1 s(y p
We will show that these outputs can be approximately tracked. We note that
in the nonsingular case A2 holds with d^± = f ± Lg
for L sufficiently large. If A2 holds with d^± ∈ I^p we define the set-valued map
F d (x; t) by
F d (x;
d
where co fd is the closed convex hull generated by the fd
Theorem 4.4 Suppose A1, A2 hold for system (3). Then there exist d
I p and an open subset Z M such that for all smooth functions y
(i) the dierential inclusion _
has a unique solution
(ii) for any solution x f to _
Z \ S p
(iii) for t t f the curve t 7! y F is a smooth function of t which
In particular lim t!1 (y p
Proof By construction, F d (x; t) is nonempty, compact and convex and it is
straightforward to show that F d is upper semicontinuous with respect to x; t.
Thus the basic conditions of [3, p.76] are satised the proof that locally solutions
to the dierential inclusion _
exist can be found in
[3, pp. 67-68 and pp. 77-78], and is omitted here. That solutions stay in
Z follows from A2 i.e. the assumption that Z is strongly invariant with respect
to d To establish uniqueness we note that both d + and d are
transversal to S P
t \ Z as a consequence of A2 (i). Furthermore the limiting
vector elds on S P
t \ Z which result from d
the opposite orientations on S P
t \Z.
Thus [3, Corollary 2, p.108] implies that there is exactly one solution to this
dierential inclusion starting at x(t 0
(ii) Suppose that y d 2 Y d and s p Then, from the deni-
tion of Y d , we have 1 s( _
A2(i). Thus d
d
and s p (x; t) is strictly increasing along integral curves of d
strictly decreasing along integral curves of d in fs p > 0g.
by (i) we have established (ii).
(iii): For t t f x F is a smooth integral curve for the equivalent vector eld
X dened in Remark 4.2. Here Xh a consequence of A2 (ii),
hence y 1. From Section 2 we know that
if y
equivalent to s p (x(t);
t . In particular, since x F (t) 2 S p
t from
(ii), we have s(e p
A necessary condition for approximate tracking of y d is that both y p
d and
the state trajectory remain bounded. In the nonsingular case the state trajectory
and the solution to the dierential inclusion _
are identical
and it su-ces to ensure that solutions to _
remain bounded. In our case the same assumption su-ces.
A3. Suppose that A2 holds for system (3) and y_d ∈ Y_d. Then solutions to
the differential inclusion ẋ ∈ F_d(x, t)
with initial state x(t_0) ∈ Z
remain bounded for t ≥ t_0.
Remark 4.5 Note that in light of Theorem 4.4 (ii) it suffices to study the trajectory
on S^p_t. Since there is a unique vector field G(x, t) in co{d^+, d^-}
that makes ∂/∂t + G(x, t)
tangent to S^p_t,
it suffices to check that this one integral
curve is bounded. A sufficient (but far from necessary) condition for A3 to
hold is that Z ∩ S^p_t be bounded.
Suppose that A1, A2, A3 hold for system (3) with initial state x(t 0
Z, where Z is an open subset of M invariant with respect to vector elds
. If we could make the state of (3) exactly track the solution
x F (t) to _
Theorem 4.4 would imply asymptotic tracking of
y_d. We now describe a "digital controller" which allows us to incrementally
track x_F and approximately track y_d. We are motivated by the typical "sample
and hold" digital controller with fixed sample rate T. That is, if u(x, t) is a
smooth function of x and t, the digital controller u_k(t) takes the form
is the state at time t k which results from using the control u k on
the time interval We have controllers u
are
piecewise constant periodic functions of t with periods
n =n and
respectively, and which cause the state of (3) to incrementally track
integral curves of d^+ and d^- respectively. Thus we require a digital controller
with variable sampling rate. We define our digital controller for the system
(3) as follows:
We observe that while u k (x; t) is not constant with respect to t over
it is piecewise constant due to the piecewise constant time dependence of u
and u n .
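Since the displayed formula (11) is not reproduced above, the following sketch records one plausible realization of the variable-sampling switching logic it describes: at each sample time the sign of s_p selects which incremental-tracking controller is held over its period. The interface (callables s_p and u_n^±, together with their periods) is an assumption made for the illustration:

class DigitalController:
    """Sketch of a variable-sample-rate switching controller in the spirit of (11).
    At each sampling instant t_k the sign of s_p(x, t) selects u_n^+ or u_n^-, and the
    selected incremental-tracking controller is held over one of its periods."""
    def __init__(self, s_p, u_plus, u_minus, tau_plus, tau_minus):
        self.s_p, self.u_plus, self.u_minus = s_p, u_plus, u_minus
        self.tau = {"+": tau_plus, "-": tau_minus}
        self.t_next, self.mode = float("-inf"), "+"

    def __call__(self, x, t):
        if t >= self.t_next:                       # variable sampling instant t_k
            self.mode = "-" if self.s_p(x, t) > 0 else "+"
            self.t_next = t + self.tau[self.mode]
        return (self.u_minus if self.mode == "-" else self.u_plus)(x, t)

In a simulation one would construct the controller once and simply evaluate it at each integration step; the internal state records the current mode and the next sampling time.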
Theorem 4.6 Under assumptions A1, A2, A3 the switched controller (11)
achieves the following property for the closed loop system: if x(t_0) ∈ Z
and y_d ∈ Y_d then, for n sufficiently large, the output y of (3) approximately
tracks y_d to degree p.
Proof Let x F (t) denote the solution to the dierential inclusion _
From Theorem 4.4 there exists t f t 0 such
that x F (t) 2 Z\S p
This implies that x F
light
of A3, x F (t) is a bounded function of t. We rst consider the case where
=n. The vector eld d + is incrementally tracked by the state trajectory
produced by u
n . We now calculate the rate of change of s p (x(t); t) when
x(t) is the integral curve x F (t) of d + but time t is rescaled to match the time
rescaling which occurs in incremental tracking. For t < t f we have, from A2
and the linearity of s,
d
as 0 < n 1. Thus there is some least time t 1 > 0 such that s p
positive integer k 1 (depending on n) such that x F
We can make jjx F
arbitrarily small by increasing n and hence
for n su-ciently large. Since x F (t) is incrementally tracked by x
n (t) we
have
1. Therefore by picking
large enough we ensure that jjs p
In particular, using the \digital" controller (11) results in
a state trajectory x
n (t) for (3) with the property that , for n su-ciently
large, s
)jj < =2. Thus u
We can now repeat the above starting from the initial state x(t ' 1
incrementally track the integral curve of d
)jj < =2 (increasing n if neces-
sary). Because the integral curve x F (t) is bounded we can choose n su-ciently
large to continue the above switching and ensure that the state trajectory x n
resulting from the controller (11) satises
Incremental tracking ensures that x n is close to x F at discrete times ft k g but
for t k < t < t k+1 we may have x n (t) far from x F (t). We now use the fact that
d 2 I p , and thus are incrementally tracked preserving h p , to show that s p is
unaected by these deviations. In particular on [t
by denition of incrementally tracking.
This allows us to ensure that jjs p su-ciently
large. Because 1 this implies that, for n su-ciently large,
and x n (t) is bounded, hence the output y of (3) approximately tracks y d to
degree p.
Let R(x_0) denote the set of states which can be reached from the initial
state x(t_0) = x_0. Theorem 4.6 ensures approximate tracking if x_0 ∈ Z and
so it is natural to look for a controller which steers x_0 to the open set Z
in finite time. It will often be the case that R(x_0) ∩ Z ≠ ∅. In particular
we need to use the above theorem when the state trajectory "sticks" to the
singular submanifold under the naive truncated sliding mode controller. Thus
if Z intersects the singular submanifold it is likely reachable from the initial
state. Suppose that C is compact, Z an open subset of M, and u_0(x, t) is a
controller for system (3) which transfers the state from x(t_0) = x_0 to Z ∩ C. Define
the hybrid controller (12)
where u_k is the digital controller (11) and k ≥ 1.
Theorem 4.7 Suppose that A1, A2, A3 hold, C ⊂ M is compact, and there
exists a controller u_0(x, t) which transfers the state of system (3) to Z ∩ C at
some finite time. Then, for n sufficiently large, the hybrid switched controller (12)
achieves the following property for the closed loop system: if y_d ∈ Y_d then the
output y of (3) approximately tracks y_d to degree p.
Proof For an initial state x(t 1 Theorem 4.6 implies that, for
su-ciently large, the controller (12) achieves approximate tracking of y d .
From the continuity of solutions to _
with respect to the initial
conditions (c.f. [3]) we have approximate tracking of y d for any initial state
in some open neighbourhood U 1 of x 1 . Because Z \ C is compact we can
obtain a nite open covering [ m
U i of Z \ C by such open sets. Thus the
hybrid switched controller (12) with n maxfn i results in
approximate tracking of y d .
Remark 4.8 We note that the hypotheses of Theorem 4.7 are satisfied
for affine systems whose singular set {x : g f^{r−1} h(x) = 0} is empty. In this case
we use the standard construction and u_0 is not needed. To verify them for a
given system model one could start by using Theorems 3.2, 3.5 to find vector
fields which the state trajectory can incrementally track. If the natural sliding
mode controller has a singular submanifold N, check to see if the vector fields
which can be incrementally tracked preserving the output map are sufficient
for A2 to hold. Then, if A3 holds as well (see Remark 4.5), Theorems 4.6,
4.7 yield a controller. Example 1.1 is a case in point.
Example 4.9 (Example 1.1 continued) We have seen that f; ad 2
so it is natural to choose a sliding surface with 1. We set
(t)g. Clearly the set S 1
t is an embedded submanifold
dimensional) for each xed time t so that A1 holds. Here gs(h p
2. To satisfy A2(i) we want
(ad 2
some open set Z so it is natural to look
for a set invariant with respect to f; ad 2
f and on which x 2 q 2 > 0, x 3 0.
For many systems a systematic approach to finding a suitable subset Z may
not be possible, but for the example under consideration x_2 and x_3 satisfy a linear
differential equation. Thus we can find such a set by constructing a Lyapunov
function. In particular, let z denote the translated state of this linear subsystem, whose
coefficients a_0, a_1 determine the system matrix A.
We can find a Lyapunov function V(z) = zᵀPz by
solving Lyapunov's
equation AᵀP + PA = −I for the positive definite matrix P,
a b
where
for
where q 0 > 0. By construction Z(q 0 ) is invariant with respect to f (a 0 z
a
where
0 and f 2 I p we
have
I p as a consequence of Theorem 3.2. Since by construction
V is decreasing along the integral curves of d we have Z(q 0 ) invariant with
respect to d . Because Z(q 0 ) puts no restrictions on x 1 it is also invariant
with respect to d
To verify that A2 holds we
rst note that s(h p
Note that by shrinking q 0 we can ensure
that in the set Z(q 0 ) we have x 3 arbitrarily close to 0 and x 2 arbitrarily close
to q 2 . In particular, given any constants
choose such that on the set
Thus A2(i) holds and A2(ii) holds automatically as In light of Remark
4.5 assumption A3 will hold if Z\S p
t is bounded. Here is a bounded
set by construction and hence A3 holds. Thus A1, A2, A3 hold and Theorem
4.7 implies that we can approximately track to degree 1 the set of output
paths
fy d j 1 s(y p
The construction of the controller u 0 which moves the state into Z is simplied
here because Z is the level set fV of a Lyapunov function for
_
g. We set u 0 (x;
(x). For any x(t 0 ) there will be
incrementally track d using u n (x;
incrementally track d + using the controller from Theorem 3.2(iv)
with namely
. If we want y d
we can dene Z by choosing q To
ensure close tracking we pick 0:1. Figure 3 shows a SIMNON simulation
using the controller (12) with Z. The tracking
performance is not particularly sensitive to variations in these parameters.
Increasing n gives tighter tracking but requires more control eort.
Figure 3: Approximate tracking of a y_d.
In Figure 4 we show the effect of an initial state which is initially well outside
of Z.
Figure 4: Approximate tracking of a y_d.
We note that in this situation state trajectories resulting from controllers
based on relative degree will stick to the singular manifold N and
send s(e_r(t)) → ∞. Our approach has the state passing back and forth across
N. The initial delay is due to the requirement that the state must enter Z
before our switched controller can act to reduce s.
Conclusions
There are situations where it is useful to be able to control the state of a system
so that it closely approaches a given curve at discrete times. We have introduced
the concept of incremental tracking of integral curves, where the state
trajectory (with re-parametrized time) closely approaches an integral curve
at discrete times. These controllers were then applied to sliding mode control,
where the state trajectory used to reach the sliding surface is not very critical.
Our discontinuous "digital sliding mode controller" achieved approximate
tracking in situations where the natural truncated sliding mode controller (and
the natural truncated smooth controller based on inversion) fails.
--R
Characteristic phenomena and model problems in non-linear control
On the Singular Tracking Problem
Nonlinear control via approximate input-output linearization: The ball and beam example
Global Sliding Mode Control
Global approximate output tracking for nonlinear systems
Nonlinear Control Systems
On Tracking through singularities: regularity of the control
Limits of highly oscillatory controls and the approximation of general paths by admissible trajectories
Nonlinear Dynamical Control Systems
On the curves that may be approached by trajectories of a smooth affine system
Applied Nonlinear Control
Sliding Controller Design for Nonlinear Systems
Sliding Modes in Control Optimization
Foundations of di
--TR | output tracking;discontinuous state feedback;sliding mode control;lie brackets;singularities |
587528 | On a Boundary Control Approach to Domain Embedding Methods. | In this paper, we propose a domain embedding method associated with an optimal boundary control problem with boundary observations to solve elliptic problems. We prove that the optimal boundary control problem has a unique solution if the controls are taken in a finite dimensional subspace of the space of the boundary conditions on the auxiliary domain.Using a controllability theorem due to J. L. Lions, we prove that the solutions of Dirichlet (or Neumann) problems can be approximated within any prescribed error, however small, by solutions of Dirichlet (or Neumann) problems in the auxiliary domain taking an appropriate subspace for such an optimal control problem. We also prove that the results obtained for the interior problems hold for the exterior problems. Some numerical examples are given for both the interior and the exterior Dirichlet problems. | Introduction
The embedding or fictitious domain methods which were developed specially in the seventies ([5],
[2], [34], [35], [28] or [13]), have been a very active area of research in recent years because of their
appeal and potential for applications in solving problems in complicated domains very e#ciently. In
these methods, complicated domains # where solutions of problems may be sought, are embedded
into larger
domains# with simple enough boundaries so that the solutions in this embedded domains
can be constructed more e#ciently. The use of these embedding methods are a commonplace these
days for solving complicated problems arising in science and engineering. To this end, it is worth
mentioning the domain embedding methods for Stokes equations (Borgers [4]), for fluid dynamics
and electromagnetics (Dinh et. al. [11]) and for the transonic flow calculation (Young et. al. [36]).
In [3], an embedding method is associated with a distributed optimal control problem. There
the problem is solved in an auxiliary
domain# using a finite element method on a fairly structured
mesh which allows the use of fast solvers. The auxiliary
domain# contains the domain # and the
solution
in# is found as a solution of a distributed optimal control problem such that it satisfies
the prescribed boundary conditions of the problem in the domain #. The same idea is also used
in [9] where a least squares method is used. In [12], an embedding method is proposed in which
a combination of Fourier approximations and boundary integral equations is used. Essentially, a
Fourier approximation for a solution of the inhomogeneous equation
in# is found, and then, the
solution in # for the homogeneous equation is sought using the boundary integral methods.
# Institute of Mathematics, Romanian Academy of Sciences, P.O. Box 1-764, RO-70700 Bucharest, Romania (e-mail:
Department of Mathematics, Texas A&M University, College Station, TX-77843, USA (e-mail:
prabir.daripa@math.tamu.edu)
In recent years, progress in this field has been substantial, especially in the use of the Lagrange
multiplier techniques. In this connection, the works of Girault, Glowinski, Hesla, Joseph, Kuznetsov,
Lopez, Pan, Periaux ([14], [15], [16], [17] and [18]) should be cited.
There are many problems for which an exact solution on some particular domains may be known
or computed numerically very e#ciently. In these cases, an embedding domain method associated
with a boundary optimal control problem would allow one to find the solution of the problem
very e#ciently in an otherwise complicated domain. Specifically, the particular solution of the
inhomogeneous equation can be used to reduce the problem to solving an homogeneous equation
in # subject to appropriate conditions on the boundary of the domain #. This solution in the
complicated domain # can be obtained via an optimal boundary control problem where one seeks
for the solution of the same homogeneous problem in the auxiliary
domain# that would satisfy
the boundary conditions on the domain #. We mention that the boundary control approach has
already been used by Makinen, Neittaanmaki and Tiba for optimal shape design and two-phase
Stefan-type problems ([29], [32]). Also, recently there has been an enormous progress in shape
optimization using the fictitious domain approaches. We can cite here, for instance, the works of
Haslinger, Klarbring, Makinen, Neittaanmaki and Tiba (see [8], [21], [22], [23] and [33])
among many others.
In section 2, an optimal boundary control problem involving an elliptic equation is formulated.
In this formulation, the solution on the auxiliary
domain# is sought such that it satisfies the
boundary conditions on the domain #. In general, such an optimal control problem leads to an
illposed problem, and consequently it may not have a solution.
Using a controllability theorem of J. L. Lions, it is proved here that the solutions of the problems
in # can be approximated within any specified error, however small, by the solutions of the problems
in # for appropriate values of the boundary conditions. In section 3, it is shown that our optimal
control problem has a unique solution in a finite dimensional space. Consequently, considering
a family of finite dimensional subspaces having their union dense in the whole space of controls,
we can approximate the solution of the problem in # with the solutions of the problems in #
using finite dimensional optimal boundary control problems. Since the values of the solutions in #
are approximately calculated on the boundary of the domain #, we study the optimal control
problem with boundary observations in a finite dimensional subspace in section 4. In section 5, we
extend the results obtained for the interior problems to the exterior problems. In section 6, we give
some numerical examples for both bounded and unbounded domains. The numerical results are
presented to show the validity and high accuracy of the method. Finally, in section 7 we provide
some concluding remarks. There is still a large room for further improvement and numerical tests.
In a future work, we will apply this method in conjunction with a fast algorithm ([6], [7]) to solve
other elliptic problems in complicated domains.
Controllability
Let
(i.e. the maps defining the boundaries of the domains and their derivatives
are Lipschitz continuous) be two bounded domains in R N such that
Their boundaries are
denoted by # and #, respectively.
In this paper, we use domain embedding and optimal boundary control approach to solve the
following elliptic equation:
subject to either Dirichlet boundary conditions
or Neumann boundary conditions
#y
#nA (#)
h # on #, (2.3)
#nA (#)
is the outward conormal derivative associated with A.
We assume that the operator A is of the form
with a ij # C (1),1 (
# and there exists a constant c > 0 such that
in# for any (# 1 , , # N ) # R N . Also, we assume that f # L
A function y # H 1/2 (#) will be called a solution of the Dirichlet problem (2.1)-(2.2) if it verifies
equation (2.1) in the sense of distributions and the boundary conditions (2.2) in the sense of traces
in L 2 (#). A function y # H 1/2 (#) will be called a solution of the Neumann problem (2.1), (2.3) if
it verifies the equation (2.1) in the sense of distributions and the boundary conditions (2.3) in the
sense of traces in H -1 (#) (see [27], Chap. 2, 7).
The Dirichlet problem (2.1)-(2.2) has an unique solution and it depends continuously on the
data
If there exists a constant c 0 > 0 such that a 0 # c 0 in #, then the Neumann problem (2.1), (2.3) has
an unique solution and it depends continuously on the data
If a #, then the Neumann problem (2.1), (2.3) has a solution if
In this case the problem has an unique solution in H 1/2 (#)/R and
We also we remark that the solution of problem (2.1)-(2.2) can be viewed (see [27], Chap. 2,
as the solution of the problem
#n A #)
for any # H 2
and that a solution of problem (2.1), (2.3) is also solution of the problem
for any # H 2 (#
#n A #)
where A # is the adjoint operator of A given by
(a ji
Evidently, the above results also hold for problems in the
domain#
We consider in the following only the cases in which our problems have unique solutions, i.e. the
Dirichlet problems, and we assume in case of the Neumann problems that there exists a constant
such that a 0 # c 0 in #
Below we use the notations and the notions of optimal control from Lions [26]. First, we will study
the controllability of the solutions of the above two problems (defined by (2.1) through (2.3)) in #
with the solutions of a Dirichlet problem in #. Let
be the space of controls. The state of the system for a control v # L 2 (#) will be given by the solution
1/2(# of the following Dirichlet problem
A y(v) = f in #,   y(v) = v on #.   (2.11)
In the case of the Dirichlet problem (2.1)-(2.2), the space of observations will be
and the cost function is given by
and y(v) is the solution of problem (2.11). For the Neumann problem given by
and (2.3), the space of observations will be
and the cost function will be
Remark 2.1 Since y(v) # H
2(#5 we have y(v) # H 2 (D) for any
domain D which satisfies
# (see [30], Chap. 4, 1.2, Theorem 1.3, for instance).
Therefore, having the same values on both the sides of #. Also, #y(v)
#nA (#)
Consequently, the above two cost functions make sense.
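A minimal numerical illustration of this boundary-control idea, for the homogeneous case f = 0 with A = −Δ and the auxiliary domain taken to be the unit disk, is sketched below: the control is sought in the finite-dimensional span of traces of harmonic polynomials on the outer boundary, and its coefficients are chosen by least squares so that the induced solution matches the prescribed data on the inner boundary. The domains, data, and truncation order are assumptions made only for the illustration:

import numpy as np

M = 12                                              # truncation order of the control space
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
gx, gy = 0.5 * np.cos(t) + 0.1, 0.3 * np.sin(t)     # inner boundary (an ellipse, assumed)
g = np.exp(gx) * np.cos(gy)                         # Dirichlet data on the inner boundary
                                                    # (harmonic, so the exact solution is known)

def basis(x, y):
    # Harmonic polynomials 1, r^k cos(k*theta), r^k sin(k*theta) evaluated at (x, y).
    r, th = np.hypot(x, y), np.arctan2(y, x)
    cols = [np.ones_like(r)]
    for k in range(1, M + 1):
        cols += [r**k * np.cos(k * th), r**k * np.sin(k * th)]
    return np.column_stack(cols)

c, *_ = np.linalg.lstsq(basis(gx, gy), g, rcond=None)   # boundary-control coefficients

u_test = (basis(np.array([0.3]), np.array([0.1])) @ c)[0]    # solution at a point inside
print("error at a test point:", abs(u_test - np.exp(0.3) * np.cos(0.1)))
v_on_Gamma = basis(np.cos(t), np.sin(t)) @ c   # implied control on the outer boundary
print("implied control range on Gamma:", float(v_on_Gamma.min()), float(v_on_Gamma.max()))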
Proposition 2.1 A control u # L 2 (#) satisfies J(u) = 0, where the cost function is given by
(2.13), if and only if the solution of (2.11) for v = u satisfies
#nA (#)
and
where y is the solution of the Dirichlet problem defined by (2.1) and (2.2) in the domain #. The
same result holds if the cost function is given by (2.15) and y is the solution of the Neumann
problem (2.1) and (2.3).
Proof. Let
1/2(# be the solution of problem (2.11) corresponding to an u # L 2 (#) such
that with the control function given by (2.13). Consequently, y(u) verifies equation (2.1)
in the sense of distributions and the boundary condition (2.2) in the sense of traces. It gives
in #. Since y(u) satisfies equation (2.11)
# in the sense of distributions, then, evidently, y(u)
is a solution of the equation in (2.16). From (2.17) and Remark 2.1 we obtain that y(u) also satisfies
the two boundary conditions of (2.17). The reverse implication is evident.
The same arguments also hold for the Neumann problem defined by (2.1) and (2.3) and the
control function given by (2.15). #
Since (2.16) is not a properly posed problem, it follows from the above proposition that the optimal
control might not exist. However, J. L. Lions proves in [26] (Chap. 2, 5.3, Theorem 5.1)
a controllability theorem which can be directly applied to our problem. We mention this theorem
below.
Lions Controllability Theorem The set { #z0 (v)
is dense in H -1 (#),
#) is the solution of the problem
z
z
Now, we can easily prove
Lemma 2.1 For any g # L 2 (#), the set { #z(v)
is dense in H -1 (#)
#) is the solution of the problem
on #.
Proof. Let z # H
#) be the solution of the problem
on #.
Using z z in the Lions controllability Theorem, we get that the set { #(z(v)-z)
is dense in H -1 (#). Hence, the lemma follows. #
The following theorem proves controllability of the solutions of problems in # by the solutions of
Dirichlet problems in # In the proof of this theorem below, we use the spaces # s introduced in
Lions and Magenes [27], Chap. 2, 6.3. For the sake of completeness, we give here definitions of
these spaces # s .
Let #(x) be a function in D(
# ) which is positive
in# and vanishes on #. We also assume that
for any x 0 #, the following limit
lim
exists and is positive, where d(x, #) is the distance from x
# to the boundary #. Then, for
, the space # s is defined by
With the norm
|#s
the space #
s(# is a Hilbert space, and
D(# is dense in #
Now, for a positive non-integer real the integer part of s and 0 < # < 1, the space
# s is, as in case of the spaces H s , the intermediate space
Finally, for negative real values -s, s > 0, the space #
-s(# is the dual space of #
Theorem 2.1 The set {y(v) |# : v # L 2 (#)} is dense, using the norm of H 1/2 (#), in {y # H 1/2 (#)
f in #}, where y(v) # H
1/2(# is the solution of the Dirichlet problem (2.11) for a given v # L 2 (#).
Proof. Let us consider y # H 1/2 (#) such that Ay = f in #, and a real number # > 0. We denote
the traces of y on # by
#nA (#)
(#). From the previous lemma, it
follows that there exists v # L 2 (#) such that the solution z(v # H
- #) of problem (2.18)
satisfies
|
#)
be the solution of the Dirichlet problem (2.11) corresponding to v # and let us define
y # y on #
#.
1/2(# and satisfies in the sense of distributions the equation
in#
and the boundary conditions
Consider, as in Remark 2.1, a fixed domain D such that
D(#2 we have
3/2(#, where C(D) depends only on the domain D. Therefore,
Taking into account the continuity of the solution on the data (see Lions and Magenes [27], Chap.
2, 7.3, Theorem 7.4), we get
Below, the controllability of the solutions of the Dirichlet and the Neumann problems (given by
(2.1),(2.2) and (2.1), (2.3) respectively) in # by Neumann problems
in# is discussed.
Now, as a set of controls we can take the space
and for a v # H -1 (#), the state of the system will be the solution y(v) # H
1/2(# of the problem
in#
= v on #. (2.20)
We remark that the following
solution of problem (2.11)} #
solution of problem (2.20)}
establish a bijective correspondence. Consequently, Proposition 2.1 also holds if the space of controls
there is changed to H -1 (#) and the states y(v) of the system are solutions of problem (2.20).
Theorem 2.1 in this case becomes
Theorem 2.2 The set {y(v) |# : v # H -1 (#)} is dense, using the norm of H 1/2 (#), in {y #
1/2(# is a solution of the Neumann problem (2.20) for a
3 Controllability with finite dimensional spaces
Let {U # be a family of finite dimensional subspaces of the space L 2 (#) such that given (2.10) as
a space of controls with the Dirichlet problems, we have
U # is dense in
For a v # L 2 (#) we consider the solution y
1/2(# of the problem
in#
We fix an U # . The cost functions J defined by (2.13) and (2.15) are di#erentiable and convex.
Consequently, an optimal control
v#U#
exists if and only if it is a solution of the equation
when the control function is (2.13), and
#nA (#) ,
#y # (v)
#y # (v)
for any v # U # , (3.5)
when the control function is (2.15). Above, y(u # ) is the solution of problem (2.11) corresponding to
(v) is the solution of problem (3.2) corresponding to v. If y f # H
2(# is the solution of
the problem
in#
then, for a v # L 2 (#), we have
where y(v) and y # (v) are the solutions of problems (2.11) and (3.2), respectively. Therefore, we can
rewrite problems (3.4) and (3.5) as
and
#nA (#)
for any v # U # ,
respectively. Next, we prove the following
Lemma 3.1 For a fixed ε, let φ 1 , . . . , φ n ε (n ε ∈ N) be a basis of U ε and let y ε (φ i ) be the solution of
problem (3.2) for v = φ i . Then the set of traces { y ε (φ 1 ), . . . , y ε (φ n ε ) } on γ and the set of conormal derivatives { ∂y ε (φ 1 )/∂n A , . . . , ∂y ε (φ n ε )/∂n A } on γ
are linearly independent sets.
Proof. From Remark 2.1, we have y # (v) # H 2 (D) for any domain D which satisfies
D # and consequently, y # (v) # H 3/2 (#) for any v # L 2 (#). Assume that for # 1 , , # n# R we
have
and therefore y on #. This implies that
#y #1 #1++#n #n # )
From (3.10) and (3.11), we get y
#, and therefore, # 1
on #, or # The second part of the statement
can be proved using similar arguments. #
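In computations, the linear independence asserted by Lemma 3.1 can be monitored through the smallest singular value of the matrix whose rows are the sampled traces of y ε (φ i ) on γ; a value near zero signals that the basis responses are nearly dependent and that the algebraic system (3.16) introduced below will be ill conditioned. The helper below is only an illustrative check on sampled values (the array name trace_phi is an assumption, not notation from the paper).

```python
import numpy as np

def trace_independence_margin(trace_phi):
    """trace_phi: (n_controls, n_quad) array of sampled traces of y_eps(phi_i)
    on gamma.  Returns the smallest singular value; a value near zero means the
    traces are numerically (close to) linearly dependent."""
    return np.linalg.svd(trace_phi, compute_uv=False).min()
```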
The following proposition proves the existence and uniqueness of the optimal control when the states
of the system are the solutions of the Dirichlet problems.
Proposition 3.1 Let us consider a fixed U # . Then, problems (3.8) and (3.9) have unique solutions.
Consequently, if the boundary conditions of the Dirichlet problems (2.11) lie in the finite dimensional
space U # , then there exists a unique optimal control of problem (3.3) corresponding to either the
Dirichlet problem (2.1), (2.2) or the Neumann problem (2.1), (2.3).
Proof. For a given #, let V # denote the subspace of L 2 (#) generated by {y
is a basis of U # , and y # i ) is the solution of problem (3.2) with Since the norms
are equivalent to the
norm
the above lemma then implies that there exist two positive constants c
and C such that
Consequently, from the Lax-Milgram lemma we get that equation (3.8) has a unique solution. A similar reasoning proves that equation (3.9) also has a unique solution. This time we use the norm
equivalence
#y # (v)
#) | H
in the Lax-Milgram lemma. #
The following theorem proves the controllability of the solutions of the Dirichlet and Neumann
problems in # by the solutions of the Dirichlet problems in #
Theorem 3.1 Let {U # be a family of finite dimensional spaces satisfying (3.1). We associate the
solution y of the Dirichlet problem (2.1), (2.2) in # with problem (3.3) in which the cost function
is given by (2.13). Also, the solution y of the Neumann problem (2.1), (2.3) will be associated with
problem (3.3) in which the cost function is given by (2.15). In both the cases, there exists a positive
constant C such that for any # > 0, there exists U # such that
where u # U # is the optimal control of the corresponding problem (3.3) with # , and y(u # )
is the solution of problem (2.11) with
Proof. Let us consider an # > 0 and y # H 1/2 (#) be the solution of problem (2.1), (2.2). From
Theorem 2.1, there exists v # L 2 (#) such that y(v # H
1/2(#5 the solution of problem (2.11) with
(#. Consequently, there exists a constant C 1 such that
U # is dense in L 2 (#), there exists # and v # U # such that |v # - v # | L 2 (# and then,
there exists a positive constant C 2 such that
From (3.12) and (3.13) we get
and consequently,
where u # L 2 (#) is the unique optimal control of problem (3.3) on U # with the cost function
given by (2.13). Therefore,
A similar reasoning can be made for the solution y # H 1/2 (#) of problem (2.1), (2.3). #
Using the basis # 1 , , #n# of the space U # we define the matrix
and the vector
Then problem (3.8) can be written as
#,1 , , #,n# R
Consequently, using Theorem 3.1, the solution y of problem (2.1), (2.2) can be obtained within
any prescribed error by setting the restriction to # of
where #,1 , , #,n# ) is the solution of algebraic system (3.16). Above, y f is the solution of
problem (3.6) and y ε (φ i ) are the solutions of problems (3.2) with v = φ i .
An algebraic system (3.16) is also obtained in the case of problem (3.9). This time the matrix
of the system is given by
#nA (#) ,
#nA (#)
and the free term is
#y f
#nA (#) ,
#nA (#)
Therefore, using Theorem 3.1, the solution y of problem (2.1), (2.3) can be estimated by (3.17).
Also, y f is the solution of problem (3.6) and y ε (φ i ) are the solutions of problems (3.2) with v = φ i .
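To make the computational content of (3.14)-(3.17) concrete, the following sketch assembles the matrix Λ and the free term from precomputed boundary traces of y f and y ε (φ i ) on γ and solves for the coefficients. It is only an illustration under simplifying assumptions: the traces are represented by their values at quadrature points on γ, the L 2 (γ) inner products are approximated by a weighted dot product, and the names trace_phi, trace_yf, g_vals and weights are hypothetical.

```python
import numpy as np

def solve_control_coefficients(trace_phi, trace_yf, g_vals, weights):
    """Assemble and solve the normal equations (3.16).

    trace_phi : (n_controls, n_quad) values of y_eps(phi_i) on gamma
    trace_yf  : (n_quad,) values of y_f on gamma
    g_vals    : (n_quad,) Dirichlet data g of the problem in omega
    weights   : (n_quad,) quadrature weights approximating the L^2(gamma)
                inner product
    Returns the coefficient vector alpha of the optimal control.
    """
    W = np.diag(weights)
    # Lambda_ij ~ (y_eps(phi_i), y_eps(phi_j))_{L^2(gamma)}
    Lam = trace_phi @ W @ trace_phi.T
    # right-hand side ~ (g - y_f, y_eps(phi_i))_{L^2(gamma)}
    rhs = trace_phi @ (weights * (g_vals - trace_yf))
    return np.linalg.solve(Lam, rhs)

def approximate_solution(alpha, yf_on_omega, phi_solutions_on_omega):
    """Form y_f + sum_i alpha_i * y_eps(phi_i), cf. (3.17), at points of omega."""
    return yf_on_omega + alpha @ phi_solutions_on_omega
```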
The case of the controllability with finite dimensional optimal controls for states of the system given
by the solution of a Neumann problem will be treated in a similar way. As in the previous section
the space of the controls will be U given in (2.19), and the state of the system y(v) # H
be given by the solution of Neumann problem (2.20) for a v # H
Let {U # be a family of finite dimensional subspaces of the space H -1 (#) such that
U # is dense in
This time, the function y # (v) # H
appearing in (3.4), (3.5), (3.8) and (3.9) will be the
solution of the problem
in#
#y # (v)
= v on #, (3.21)
for a v # H
appearing in (3.7), (3.8) and (3.9) will be the solution of the
problem
in#
#y f
With these changes, Lemma 3.1 also holds in this case, and the proof of the following proposition is
similar to that of Proposition 3.1.
Proposition 3.2 For a given U # the problems (3.8) and (3.9) have unique solutions. Consequently,
if the boundary conditions of Neumann problems (2.20) lie in the finite dimensional space U # , then
there exists a unique optimal control of problem (3.3), corresponding to either the Dirichlet problem
(2.1), (2.2), or Neumann problem (2.1), (2.3).
A proof similar to that given for Theorem 3.1 can also be given for the following theorem.
Theorem 3.2 Let {U # be a family of finite dimensional spaces satisfying (3.20). We associate
the solution y # H 1/2 (#) of problem (2.1), (2.2) with problem (3.3) in which the cost function is
given by (2.13). Also, the solution y of problem (2.1), (2.3) will be associated with problem (3.3) in
which the cost function is given by (2.15). In both the cases, there exists a constant C such that for
any # > 0, there exists # such that
where u # U # is the optimal control of the corresponding problem (3.3) with # , and y(u # )
is the solution of problem (2.20) with
Evidently, in the case of the controllability with solutions of Neumann problem (2.20) we can also
write algebraic systems (3.16) using a basis # 1 , , #n# of a given subspace U # of the space
(#). As in the case of the controllability with solutions of the Dirichlet problem (2.11), these
algebraic systems have unique solutions.
Remark 3.1 We have defined y f as a solution of problems (3.6) or (3.22) in order to have
, respectively, on the boundary #. In fact, we can replace
y(v) by y # (v) in the cost functions (2.13) and (2.15), with y f # H
satisfying only
in# , (3.23)
and the results obtained in this section still hold.
Indeed, the two sets
corresponding to y f given by
(3.23) and (3.6), y # (v) being the solution of (3.2), are identical to the set {y(v) # H
being the solution of (2.11). Also, the two sets
corresponding to y f given by (3.23) and (3.22), y # (v) being the solution of (3.21), are
identical with the set {y(v) # H
being the solution of (2.20).
4 Approximate observations in finite dimensional spaces
In practical computing, we calculate the values of y # (v) at some points on # and use in (3.8),
some interpolations of these functions. We will see below that using these interpolations,
i.e. observations in finite dimensional subspaces, we can still obtain the approximate solutions of
problems (2.1), (2.2) and (2.1), (2.3).
As in the previous sections we deal at first with the case when the states of the system will be given
by the Dirichlet problem (2.11). Let U # be a fixed finite dimensional subspace of
the basis # 1 , , #n# .
Let us assume that for problem (2.1), (2.2), we choose a family of finite dimensional spaces {H }
such that
H is dense in
Similarly, we choose the finite dimensional spaces {H } such that
H is dense in
for problem (2.1), (2.3). We notice that H given in (4.1) or (4.2) is a subspace of H given in (2.12)
or (2.14), respectively.
Let us consider a fixed H , given in (4.1) or (4.2) depending on the problem we have to solve.
For a given # i , we will consider the solution y problem (3.2) corresponding to
and we will approximate its trace on # by y # ,i
. Also, the approximation of #y
#nA (#)
on # will
be denoted by #y #
#nA (#)
Since the systems (3.16) have unique solutions, the determinants of the matrices # given in
and (3.18) are non-zero. Consequently, if |y # i
,i | L 2 (#) or | #y
are
small enough, then the matrices
and
#nA (#) ,
#y #
have non-zero determinants. In this case, the algebraic systems
have unique solutions. In this system the free term is
if the matrix # is given by (4.3), and
#y f
#nA (#) ,
#y # ,i
if the matrix # is given by (4.4). Above, we have denoted by g # and h # some approximations
in H of g # and h # , respectively. Also, y f and #y f
#nA (#)
are some approximations of y f and #y f
#nA (#)
in the corresponding H of L 2 (#) and H -1 (#), respectively, with y f # H
satisfying (3.23).
If we write for a vector # 1 , , # n# R n# ,
and
#y #
(#)
#nA (#)
problems analogous to (3.8) and (3.9) can be written as
and
#nA (#)
(#)
#nA (#)
for any # R n# ,
whose solutions # are the optimal control for the following cost functions
J (#) =2 |y #
and
J (#) =2
#y #
(#)
#nA (#)
respectively.
The solution y of problems (2.1), (2.2) and (2.1), (2.3) can be approximated with the restriction
to # of
#,1 , , #,n# ) being the solution of appropriate algebraic system (4.5).
For a vector, # 1 , , # n# ), we will use the norm
| and the corresponding matrix
norm will be denoted by || ||. From (3.17) and (4.14) we have
depends only on the basis in U # . Since
and from algebraic systems (3.16) and (4.5) we have #
l # and
l # , we get that
there exists C # > 0, depending on the basis in U # , such that
In the case of matrices (3.14) and (4.3) and the free terms (3.15) and (4.6), we have
Instead, if we take matrices (3.18) and (4.4) and the free terms (3.19) and (4.7), then we get
1#i#n# |
#y # ,i
#y f
#y f
1#i#n# |
#y # ,i
where C is a constant and C # depends on the basis in U # .
In the case when the states of the system are given by the Neumann problem (2.20), U # will be a subspace of H -1 (#). What we said above in the case of the Dirichlet problems in # also applies in the case of the Neumann problems in #, the only difference being that this time y ε (φ i ) are the solutions of problems (3.21) with v = φ i .
In both cases, when the control is e#ected via Dirichlet and Neumann problems, using Theorems
3.1 and 3.2, and equations (4.15)-(4.18), we obtain
Theorem 4.1 Let {U # be a family of finite dimensional spaces satisfying (3.1) if we consider
problem (2.11), or satisfying (3.20) if we consider problem (2.20). Also, we associate problem (2.1),
(2.2) or (2.1), (2.3) with a family of spaces {H } satisfying (4.1) or (4.2), respectively. Then, for
any # > 0, there exists # such that the following holds.
(i) if the space H is taken such that |y # i
,i | L 2 (#) , are small enough, y is
the solution of problem (2.1), (2.2) and y(u # ) is given by (4.14) in which # is the solution of
algebraic system (4.5) with the matrix given in (4.3) and free term in (4.6) then the algebraic system
(4.5) has a unique solution and
,i | L 2 (# ,
(ii) if the space H is taken such that | #y
are small enough,
y is the solution of problem (2.1), (2.3) and y(u # ) is given by (4.14) in which # is the solution
of algebraic system (4.5) with the matrix given in (4.4) and free term in (4.7) then the algebraic
system (4.5) has a unique solution and
#nA (#y f
where C is a constant and C # depends on the basis of U # .
Remark 4.1 Since the matrices # given in (4.3) and (4.4) are assumed to be non-singular, it
follows that {y # ,i } i=1,,n #
and { #y # ,i
are some linearly independent sets in L 2 (#) and
respectively. Consequently, if m is the dimension of the corresponding subspace H , then
5 Exterior problems
In this section, we consider the domain # R N of problems (2.1), (2.2) and (2.1), (2.3) as the
complement of the closure of a bounded domain and it lies on only one side of its boundary. The
same assumptions will be made on the
domain# of problems (2.11) and (2.20), and evidently, #
In order to follow the approach of the previous sections and to prove that the solutions of the problems in # can be approximated by the solutions of problems in #, we have to specify the spaces in which our problems have solutions and also their correspondence with the trace spaces.
First, we notice that, the domain # − # being bounded, the Lions Controllability Theorem does not need to be extended to unbounded domains. Moreover, we see that the boundaries # and # of
the domains #
and# are bounded, and consequently, we can use finite open covers of them (as for
the bounded domains), to define the traces.
In order to avoid the use of the fractional spaces of the spaces in #
and# we simply remark
that since H 1/2 (#) is dense in L 2 (#), then using the continuity of the solution on the data (of the
and the continuity of the conormal derivative operator #
on the boundary #, we
get from the Lions controllability Theorem that
The set #z0 (v)
#) is the
solution of the problem
z
z
Now, we associate to the operator A the symmetric bilinear form
a(y,
#y
#z
# a 0 yz for y, z # H
which is continuous on H
1(#1 Evidently, a is also continuous on H 1 (#) H 1 (#). Now,
and taking the boundary data g # H 1/2 (#) and h # H -1/2 (#), then problems (2.1),
(2.2) and (2.1), (2.3) can be written in the following variational form
fz for any z # H 1(#)
and
z for any z # H 1 (#), (5.2)
respectively. Similar equations can also be written for problems (2.11) and (2.20).
Therefore, if there exists a constant c 0 > 0 such that a 0 # c 0 in # then the bilinear form a is
1(#6842056# i.e. there exists a constant # > 0 such that #|y| 2
a(y, y) for any y # H
1(#5 It
follows from the Lax-Milgram lemma that problems (2.11) and (2.20) have unique weak solutions in
1(#6 Naturally, problems (2.1), (2.2) and (2.1), (2.3) in # also have unique weak solutions given
by the solutions of problems (5.1) and (5.2), respectively.
We know that there exists an isomorphism and homeomorphism of H
(see Theorem 7.53, p. 216, in [1], or Theorem 5.5, p. 99, and Theorem 5.7, p. 103, in [30]), i.e.
there are two constants such that we have
. For any y # H
1(# , there exists v # H 1/2 (#) such that
y | H
.
. For any v # H 1/2 (#), there exists y # H
1(# such that y = v on # and | y | H
Using this correspondence we can easily prove the continuous dependence of the solutions on
data. For instance, for problems (2.1), (2.2) and (2.1), (2.3) we have
and
respectively.
Therefore, if there exists a constant c 0 > 0 such that a 0 # c 0 in # then we can proceed in the
same manner and obtain similar results for the exterior problems to those obtained in the previous
sections for the interior problems. Evidently, in this case we take
as a space of the controls for problem (2.11), in place of that given in (2.10), and the space of controls
for problem (2.20) will be taken as
in place of the space given in (2.19).
If a 0 = 0 in #, the domain being unbounded, then our problems might not have solutions in the classical Sobolev spaces (see [10]), and we have to introduce weighted spaces which take into account the particular behavior of the solutions at infinity.
For domains in R 2 , we use the weighted spaces introduced in [24, 25], specifically
where D
# is the space of the distributions on # and r denotes the distance from the origin. The
norm on W
1(# is given by
| v | W
(L
For domains in R N , N # 3, appropriate spaces, introduced in [20] and used in [19, 31], are
with the norm
| v | W
(L
We remark that the space H
1(# is continuously embedded in W
1(#8 and the two spaces coincide
for the bounded domains. We use W 1
to denote the closure of
D(# in W
Concerning the space of the traces of the functions in W
1(#6 we notice that the boundary #
being bounded, these traces lie in H 1/2 (#). This fact immediately follows considering a bounded
domain D
# such that #D and taking into account that W 1 (D) and H 1 (D) are identical.
Assuming that
and using the spaces W 1 in place of the spaces H 1 , we can rewrite the problems (5.1) and (5.2),
and also, similar equations for problems (2.11) and (2.20).
For 2, the bilinear form a(y, z) generates on W 1
an equivalent norm with that induced by
(see [24]). Also, the bilinear form a(y, z) generates on W
1(# /R a norm which is equivalent
to the standard norm.
For N # 3, the above introduced norm on W 1 (R N ) is equivalent to that generated by a (see
[20]). Now, if we extend the functions in W 1
0(# with zero in R N
- # we get that the bilinear form
a(y, z) also generates on W 1
a norm equivalent to that induced by W
1(#1 Moreover, using the
fact that the
domain# is the complement of a bounded set, it can be proved that the bilinear form
a(y, z) generates in W
1(# a norm equivalent to the above introduced norm.
Therefore, we can conclude that, in the case a 0 = 0, our exterior problems have unique solutions in the spaces W 1 if N # 3. If N = 2, the Dirichlet problems have unique solutions in W 1 , and the Neumann problems have unique solutions in W 1 /R.
Using the fact that on the bounded domains D the spaces W 1 (D) and H 1 (D) coincide, the
continuous embedding of H
1(# in W
1(#7 and the homeomorphism and isomorphism between
we can easily prove that there exists a homeomorphism and isomorphism
between W
1(# and W
Consequently, we get the following continuous dependence on
the data of the solution y of problem (2.1), (2.2).
and
Concerning the problem (2.1), (2.3), we have
and
Therefore, we can prove in a manner similar to the previous sections that, when a 0 = 0 and N # 3, the solutions of the Dirichlet and Neumann problems in # can be approximated with solutions of both the Dirichlet and the Neumann problems in #. Naturally, the controls will be taken in the appropriate space (5.3) or (5.4). If a 0 = 0 and N = 2, the solutions of the Dirichlet problems in # can be approximated with solutions of the Dirichlet problems in #, the Neumann problems not having unique solutions.
6 Numerical Results
In this section, we consider some fixed U # and H , and we drop the corresponding subscripts. First, we summarize the results obtained in the previous sections concerning the algebraic system we have to solve to obtain the solutions, within a prescribed error, of problems (2.1), (2.2) or (2.1), (2.3) using the solutions of problems (2.11) or (2.20).
We saw that if, for both the bounded and unbounded domains, there exists a constant c 0 > 0 such that the coefficient a 0 of the operator A satisfies a 0 # c 0 in #, then the solutions of problems (2.1), (2.2) or (2.1), (2.3) can be estimated by the solutions of both problems (2.11) and (2.20). If a 0 = 0 in # for both the bounded and the unbounded domains, then the solutions of problems (2.1), (2.2) can be estimated by the solutions of problems (2.11). If a 0 = 0, the domains are unbounded, and N # 3, then the solutions of problems (2.1), (2.3) can be obtained from the solutions of problems (2.20).
Actually, we have to solve an algebraic system (4.5) which we rewrite as
Some remarks on the computing of the elements of the matrix # and the free term l are made below.
a) Depending on the problem in # we choose the finite dimensional subspace of controls U # U .
If we use problem (2.11), U is L 2 (#)
if# is bounded and is H 1/2 (#)
if# is unbounded. Also, U is
if# is bounded and is H -1/2 (#)
if# is unbounded, if we use problem (2.20). Let # 1 , , #n ,
N, be the basis of U .
b) Depending on the problem in # and the coefficient a 0 of the operator A, we calculate the values of y ε (φ i ), i = 1, . . . , n, at the nodes of a mesh on #, the boundary of #. For problem (2.11), we calculate y ε (φ i ) as the solutions of problems (3.2). For problem (2.20), if there exists a constant c 0 > 0 such that a 0 # c 0 in #, or if a 0 = 0 and # is unbounded, we calculate y ε (φ i ) as the solutions of problems (3.21).
c) Using the values of y ε (φ i ) calculated at the nodes of the mesh on #, we will compute the elements of the matrix #, which are inner products in L 2 (#) if we have to solve problem (2.1), (2.2), or in H -1 (#) if we have to solve problem (2.1), (2.3). We notice that the inner product in H -1 (#) is given by
where -# is the Laplace-Beltrami operator on # is the tangential gradient on #, I # is the
identity operator, and
Evidently, use of this inner product implies the solving of n problems of the above type to find
the corresponding of #y
#nA (#)
, and one problem to find the corresponding of h #y f
#nA (#)
. The finite
dimensional subspace H # H depends on the numerical integration method that we use. We remark
that the matrix # is symmetric and full.
d) The elements of the free term l will also be some inner products in the corresponding space
of observations H. In these inner products we use a solution y f of equation (3.23), and the data
# or h # in the boundary conditions of the problem we have to solve, (2.1), (2.2) or (2.1), (2.3),
respectively.
e) The elements of the matrix # and the free term l depend on the problems in # and #, and also on the coefficient a 0 of the operator A. For problem (2.1), (2.2), the matrix # and the free term l are given in (4.3) and (4.6), respectively. For problem (2.1), (2.3), if there exists a constant c 0 > 0 such that a 0 # c 0 in #, or if a 0 = 0 and # is unbounded, the matrix # and the free term l are given in (4.4) and (4.7), respectively. Evidently, in these equations, y # ,i denote the approximations in H of y ε (φ i ), depending on the problem in #.
Finally, if # 1 , , # n ) is the solution of algebraic system (6.1), and y is the solution of the
problem we have to solve, then its approximation is the restriction to # of
If we use the finite element method to calculate the functions y f and y ε (φ i ), it is not necessary to adapt the meshes in # to the geometry of #. The values of these functions at the points of # mentioned in items b) and d) above can be found by interpolation using their values at the mesh nodes. In our numerical examples we use an explicit formula for these functions. Actually, we can find explicit formulae for the solutions of most problems in simple shaped domains.
We saw that the matrices # given in (3.14) and (3.18) are non-singular and therefore problems (3.8) and (3.9) have unique solutions. Also, the algebraic systems (6.1) have unique solutions if their matrices and free terms are good approximations in H of the matrix and the free term of the algebraic systems (3.16), respectively. In fact, this approximation depends on the numerical integration on #. Also, from Remark 4.1 we must take n # m, n being the dimension of U and m the dimension of H .
However, as we saw in Section 2, the optimal control problem with an infinite dimensional space of controls may not have a solution. Consequently, for very large n, the algebraic systems (3.16) may become nearly singular. These algebraic systems can be solved by an iterative method (the conjugate gradient method, for instance), but since we wanted to see whether the algebraic system is non-singular, we applied the Gauss method, checking the diagonal elements during the elimination phase. The numerical results shown below are encouraging.
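As a small illustration of the elimination strategy just described, the sketch below performs Gaussian elimination without row exchanges and records the smallest diagonal element encountered, so that near-singularity of systems such as (3.16) or (6.1) can be detected. This is a hypothetical helper written for this discussion, not the code used for the reported experiments.

```python
import numpy as np

def gauss_solve_with_pivot_check(A, b, tol=1e-14):
    """Solve A x = b by Gaussian elimination without row exchanges,
    reporting the smallest diagonal (pivot) element met during elimination."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    smallest_pivot = np.inf
    for k in range(n):
        pivot = A[k, k]
        smallest_pivot = min(smallest_pivot, abs(pivot))
        if abs(pivot) < tol:
            raise RuntimeError(f"nearly singular system: pivot {pivot:.3e} at step {k}")
        for i in range(k + 1, n):
            m = A[i, k] / pivot
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x, smallest_pivot
```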
Our numerical tests refer to both the interior and exterior Dirichlet problems in #, with boundary data g # prescribed on #, where # # R 2 is either the interior or the exterior domain of the square centered at the origin, with the sides parallel to the axes and of length 2 units. The approximate solution of this problem is given by the solutions of the Dirichlet problems in #,
in which the domain # is either the disc centered at the origin with radius equal to 2, or the exterior domain of the disc centered at the origin with radius equal to 0.99. The solutions of these interior and exterior Dirichlet problems in # are found by the Poisson formulae: for the interior of the disc of radius r,
y(v)(x) = ( (r 2 − |x| 2 ) / (2πr) ) ∫ |ξ|=r v(ξ) / |x − ξ| 2 dS(ξ),
and for its exterior the analogous formula with |x| 2 − r 2 in place of r 2 − |x| 2 .
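A sketch of how the interior Poisson formula can be evaluated numerically for piecewise constant boundary data on the circle of radius r; the quadrature used here (a simple midpoint rule on each arc, rather than the 3-node rule mentioned below) and the function name are illustrative assumptions.

```python
import numpy as np

def poisson_interior(x, v_vals, r=2.0):
    """Evaluate the interior Poisson formula for the disc |xi| = r at a point x
    inside the disc, given piecewise constant boundary values v_vals on n
    equal arcs (midpoint quadrature on each arc)."""
    n = len(v_vals)
    theta = (np.arange(n) + 0.5) * 2 * np.pi / n       # arc midpoints
    xi = np.column_stack((r * np.cos(theta), r * np.sin(theta)))
    ds = 2 * np.pi * r / n                              # arc length element
    dist2 = np.sum((xi - np.asarray(x)) ** 2, axis=1)
    kernel = (r ** 2 - np.dot(x, x)) / (2 * np.pi * r)
    return kernel * np.sum(v_vals / dist2) * ds

# quick consistency check: constant data v = 1 reproduces y = 1 inside the disc
print(poisson_interior([0.3, -0.2], np.ones(200)))      # approximately 1.0
```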
The square # is discretized with m equidistant points and H is taken as the space of the continuous piecewise linear functions on this mesh. The circle # is similarly discretized with n equidistant points and U is taken as the space of the piecewise constant functions. The values of the integrals in the Poisson formulae at the points on # are calculated using numerical integration with 3 nodes. The integrals in the inner products in L 2 (#) are calculated by an exact formula: if y 1 and y 2 are two continuous piecewise linear functions on # with nodal values y 1,k , y 2,k at the mesh points x k , then on each interval [x k , x k+1 ]
∫ y 1 y 2 dx = (h/6) [ 2 y 1,k y 2,k + y 1,k y 2,k+1 + y 1,k+1 y 2,k + 2 y 1,k+1 y 2,k+1 ],
h being the mesh size on #.
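The exact formula above translates directly into code. The sketch below assumes a closed polygonal boundary discretized with a uniform mesh size h and nodal values given at the m mesh points (the last interval wraps around to the first node); the function name is an assumption.

```python
import numpy as np

def l2_inner_product_p1(y1, y2, h):
    """Exact L^2 inner product of two continuous piecewise linear functions
    on a closed uniform mesh of size h, given by their nodal values."""
    y1n = np.roll(y1, -1)   # values at the right end of each interval
    y2n = np.roll(y2, -1)
    return (h / 6.0) * np.sum(2*y1*y2 + y1*y2n + y1n*y2 + 2*y1n*y2n)

# sanity check: (1, 1)_{L^2} equals the total length of the boundary
m, h = 80, 0.1
print(l2_inner_product_p1(np.ones(m), np.ones(m), h))   # 8.0 = m * h
```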
It is worth mentioning at the outset that all computations below used fifteen significant digits (double precision). Numerical experiments were carried out with three sets of boundary data g # for the problems in #, one of which was used only for the interior problem. In the case of the interior problems, exact
known solutions are compared with the computed ones at 19 equidistant points on a diagonal of
the square: (-1.4,-1.4), ,(0,0), ,(1.4,1.4). The maximum of the relative error between exact and
computed solutions is denoted in the tables below by err d . In the case of the unbounded domains,
we do not know the exact solution but we can directly compute the error between the values of the
boundary conditions g # and the values of the computed solution given in (6.2), at the considered
points on #. The maximum of the relative error at these points is denoted in these tables by err b .
The errors err d and err b in the three examples for the interior problem are almost the same.
In Table 6.1, we give an example for g(x 1 , x 2 ). In this example, m is chosen corresponding to the mesh size 0.1 on #. In all these cases, err d < err b , as shown in Table 6.1. For these simple examples, it is not necessary to compute y f numerically, as we used an explicit formula for it.

n     err d          err b
36    .67121E-12     .11648E-06

Table 6.1. Tests for the interior Dirichlet problem.
The smallest diagonal element during the Gauss elimination method is of the order 10 -17 for
and of the order 10 -14 for It is greater than 10 -10 for
We should mention that in both cases with 72, the last pivot is of the order
However, we notice an increase in error for n > 60 (see Table 6.1), and these cases should be considered cautiously. In all these cases the error err b , which can be calculated for any example, is a good indicator of the computational accuracy.
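For completeness, a small sketch of how the boundary indicator err b can be computed: evaluate the reconstructed solution (6.2) at the chosen points of the boundary and compare with the prescribed data g. The callable eval_solution, built from y f and the y ε (φ i ), is an assumed name.

```python
import numpy as np

def boundary_error(eval_solution, boundary_points, g_vals):
    """Maximum relative error between the prescribed Dirichlet data g and the
    computed solution (6.2) at the selected boundary points (err_b)."""
    computed = np.array([eval_solution(p) for p in boundary_points])
    return np.max(np.abs(computed - g_vals) / np.maximum(np.abs(g_vals), 1e-30))
```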
In Table 6.2, we give an example for the exterior problem. In this example, m is chosen corresponding to the mesh size of 1/15 on #.

n     err b
90    0.18828E+00

Table 6.2. Tests for the exterior Dirichlet problem.
The smallest diagonal element during the Gauss elimination method is of order 10 -15 for
of the order 10 -14 for and it is greater than 10 -12 for
Conclusions
In this paper we studied, for both interior and exterior problems, the approximation of the solutions
of the Dirichlet (Neumann) problems in # with the solutions of the Dirichlet (Neumann)
in# by
means of an optimal boundary control problem
in# with observations on the boundary of the
domain #. As we saw in Section 2, such an optimal boundary control problem might lead to an ill-posed problem if the space of the controls is infinite dimensional. We proved that if the controls are taken in a finite dimensional subspace, then our problem has a unique optimal control. Using
the J. L. Lions controllability theorem we also proved that the set of the restrictions to # of the
solutions of the Dirichlet (Neumann) problems
in# is dense in the set of the solutions of the Dirichlet
(Neumann) problems in #. It is natural to take a family of finite dimensional spaces whose union is
dense in the space of the boundary conditions of the problem in # Then the set of restrictions to
# of the solutions of the Dirichlet (Neumann) problems
in# which assume values of the boundary
conditions in that union is dense in the set of solutions of these problems in #. Consequently, the
optimal boundary control problem
in# in which the controls are taken in a finite dimensional space
of such a family, will provide a solution of a Dirichlet (Neumann) problem
in# whose restriction
to # will approximate the solution of the Dirichlet (Neumann) problem in #. Actually, such an
optimal control problem whose controls are taken in a finite dimensional space leads to the solution
of a linear algebraic system. Since in the practical applications, the values of the solutions
in#
are approximately calculated on the boundary of the domain #, we also studied the optimal control
problem with boundary observations in a finite dimensional subspace.
Our primary goal in this paper has been to present this method and provide some theory and
calculations in order to bring forth some of the merits of this method as well as to provide some
theoretical support. However, we feel at this point that it is perhaps worthwhile to make some remarks on this method against the backdrop of a somewhat different technique within the same framework: namely, the Lagrange multiplier technique. For reasons mentioned below, we think that the boundary control approach is simpler, more flexible, and can be more accurate than the Lagrange multiplier approach to domain embedding methods.
As we mentioned earlier in the Introduction, very good results have been obtained
in recent years by using the Lagrange multiplier approach to the domain embedding methods. In
the Lagrange multiplier approach to domain embedding methods, values of Lagrange multipliers
are sought such that the solution of the problem in the
domain# satisfies the specified boundary
conditions of the problem in #. In fact, values of the Lagrange multipliers are essentially the jump
at the boundary # of #, in the normal derivative of the solution in # In the boundary control
approach to domain embedding methods proposed in this paper, boundary values of the solution
in# are sought such that this solution satisfies the specified boundary conditions of the problem
in #. Consequently, the numbers of supplementary unknowns introduced in the two methods are
equivalent.
In the Lagrange multipliers approach, one solves a saddle-point problem for the nodal values of
the solution
in# and those of the multipliers. In the method proposed in this paper, the problem is
reduced to solving a linear algebraic system. The construction of this linear system needs the solution
of many problems in # but these problems, being completely independent, can be simultaneously
solved on parallel machines. We point out that in the conjugate gradient method associated with
the Lagrange multipliers method, one has to solve at each iteration a problem
in# which has an
additional term arising from the Lagrange multipliers. In contrast, our method requires solutions of
simpler problems, offers good parallelization opportunities, and consequently has a low computational complexity.
In our approach, we solve several problems in the simple shaped
domain# and a linear combination
of these solutions provides the final desired solution. Since fast solution techniques are usually
available for many problems in simpler domains, we can expect to get very accurate solutions in our
approach. In fact, our numerical results for both the interior and the exterior Dirichlet problems confirm the high accuracy of the proposed method. In the Lagrange multipliers method, the problem formulation introduces an additional term (an integral on #) which almost always forces one to use the finite element method to solve for the desired solutions, even in regular domains. Obtaining very good approximate solutions in the presence of such an additional term usually requires additional computational effort, such as the use of finer meshes. This increases the dimension of the problems solved at each iteration.
In a future work, we will apply the proposed method in conjunction with fast algorithms to solve general elliptic partial differential equations in complex geometries. There we will also make numerical comparisons between the results obtained with the method proposed in this paper and those obtained using the Lagrange multipliers method.
Acknowledgment: The second author (Prabir Daripa) acknowledges the financial support of the Texas Advanced Research Program (Grant No. TARP-97010366-030).
--R
Methods of fictitious domains for a second order elliptic equation with natural boundary conditions
Domain embedding methods for the Stokes equations
The direct solution of the discrete Poisson equation on irregular regions
A fast algorithm to solve nonhomogeneous Cauchy-Riemann equations in the complex plane
Singular Integral Transforms and Fast Numerical Algorithms: I
Les espaces du type Beppo-Levi
A spectral embedding method applied to the advection-di#usion equation
analysis of a finite element realization of a fictitious domain/domain decomposition method for elliptic problems
On the solution of the Dirichlet problem for linear elliptic operators by a distributed Lagrange multiplier method
Joseph D.
On the coupling boundary integral and finite element methods for the exterior Stokes problem in 3-D
Espaces de Sobolev avec poids.
Shape optimization of materially non-linear bodies in con- tact
Equations int
Optimal control of systems governed by partial di
A boundary controllability approach in optimal shape design problems
On the approximation of the boundary control in two-phase Stefan-type problems
An embedding of domain approach in free boundary problems and optimal design
Capacitance matrix methods for the Helmholtz equation on general three-dimensional regions
On the numerical solution of Helmholtz equation by the capacitance matrix method
A locally refined finite rectangular grid finite element method.
--TR | optimal control;domain embedding methods |
587569 | Properties of a Multivalued Mapping Associated with Some Nonmonotone Complementarity Problems. | Using the homotopy invariance property of the degree and a newly introduced concept of the interior-point-$\varepsilon$-exceptional family for continuous functions, we prove an alternative theorem concerning the existence of a certain interior-point of a continuous complementarity problem. Based on this result, we develop several sufficient conditions to assure some desirable properties (nonemptyness, boundedness, and upper-semicontinuity) of a multivalued mapping associated with continuous (nonmonotone) complementarity problems corresponding to semimonotone, P$(\tau, \alpha, \beta)$-, quasi-P*-, and exceptionally regular maps. The results proved in this paper generalize well-known results on the existence of central paths in continuous P0 complementarity problems. | Introduction
. Consider the nonlinear complementarity problem (NCP)
where f is a continuous function from R n into itself. This problem has now gained
much importance because of its many applications in optimization, economics, engi-
neering, etc. (see [8, 12, 16, 18]).
There are several equivalent formulations of the NCP in the form of a nonlinear
equation F is a continuous function from R n into R n . Given such an
equation F the most used technique is to perturb F to a certain F # , where #
is a positive parameter, and then consider the equation F # has a
unique solution denoted by x(#) and x(#) is continuous in #, then the solutions {x(#)}
describe, depending on the nature of F # (x), a short path denoted by {x(# (0, -
or a long path {x(# (0, #)}. If a short path {x(# (0, - #]} is bounded,
then for any subsequence {# k } with # k # 0, the sequence {x(# k )} has at least one
accumulation point, and by the continuity each of the accumulation points is a solution
to the NCP. Thus, a path can be viewed as a certain continuous curve associated
with the solution set of the NCP. Based on the path, we may construct various
computational methods for solving the NCP, such as interior-point path-following
methods (see, e.g., [15, 25, 26, 27, 28, 32, 39]), regularization methods (see [8, 10, 11,
41]), and noninterior path-following methods (see [1, 2, 3, 5, 7, 17, 21]). The most
common interior-point path-following method is based on the central path. The curve
(0, #)} is said to be the central path if for each # > 0 the vector x(#) is
the unique solution to the system
x > 0, f(x) > 0, x i f i (x) = # for i = 1, . . . , n, (1)
and x(#)
is continuous on (0, #).
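To illustrate the central path defined by (1), the sketch below traces x(#) for a small linear complementarity problem f(x) = Mx + q with M positive definite, by applying a damped Newton method to the equations x i f i (x) = # while keeping x and f(x) positive. This is a toy illustration written for this discussion, not one of the interior-point methods cited above.

```python
import numpy as np

def central_path_point(M, q, mu, x0, tol=1e-12, max_iter=100):
    """Solve x_i * (Mx + q)_i = mu (componentwise) with x > 0, Mx + q > 0,
    by a damped Newton method started from a strictly feasible x0."""
    x = x0.copy()
    for _ in range(max_iter):
        f = M @ x + q
        F = x * f - mu                        # residual of system (1)
        if np.linalg.norm(F) < tol:
            break
        J = np.diag(f) + np.diag(x) @ M       # Jacobian of x -> x * f(x)
        dx = np.linalg.solve(J, -F)
        t = 1.0
        for _ in range(60):                   # damping to preserve positivity
            if np.all(x + t*dx > 0) and np.all(M @ (x + t*dx) + q > 0):
                break
            t *= 0.5
        x = x + t*dx
    return x

M = np.array([[2.0, 1.0], [1.0, 3.0]])        # positive definite -> f monotone
q = np.array([-1.0, -2.0])
x = np.array([2.0, 2.0])                      # strictly feasible: x > 0, Mx + q > 0
for mu in [1.0, 0.1, 0.01, 1e-4]:
    x = central_path_point(M, q, mu, x)       # x(mu) approaches a solution of the NCP
    print(mu, x, M @ x + q)
```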
In the case when f is a monotone function and the NCP is strictly feasible (i.e.,
there is a vector u # R n such that u > 0 and f(u) > 0), the existence of the central
path is well known (see, for example, [14, 25, 30, 31]). This existence result has been
extended to some nonmonotone complementarity problems. Kojima, Mizuno, and
proved that the central path exists if f is a uniform-P function. If f is a
-function satisfying a properness condition and the NCP is strictly feasible, Kojima,
Megiddo, and Noma [25] showed that there exists a class of interior-point trajectories
which includes the central path as a special case. If f is a P 0 -function and NCP
has a nonempty and bounded solution set, Chen, Chen, and Kanzow [4] and Gowda
and Tawhid [13] proved that the NCP has a short central path {x(# (0, -
#)}.
Under a certain properness condition, Gowda and Tawhid [13] showed that the NCP
with a P 0 -function has a long central path [13, Theorem 9]. It should be pointed out
that noninterior-point trajectories have also been extensively studied in the recent
literature (see [1, 2, 3, 5, 10, 11, 13, 17, 35, 37]).
However, for a general complementarity problem, the system (1) may have multiple
solutions for a given # > 0, and even if the solution is unique, it is not necessarily
continuous in #. As a result, the existence of the central path is not always guaranteed.
We define the (multivalued) mapping U : (0, #) # S(R n ++ ) by
U(#) = { x # R n ++ : f(x) # R n ++ , x i f i (x) = # for all i }, (2)
where S(R n ++ ) is the set of all subsets of R n ++ , the positive orthant
of R n . The main contribution of this paper is to describe several sufficient conditions
which ensure that the multivalued mapping U(#) has the following desirable
properties.
(a) U(#) is nonempty for each # in (0, #).
(b) For any fixed - # > 0, the set #(0,-#] U(#) is bounded.
(c) If U(#) is nonempty, then U(-) is upper-semicontinuous at #. (That is, for any sufficiently small # > 0, U(#') is nonempty and U(#') is contained in U(#) + #B for all #' sufficiently close to #, where B = { x : #x# # 1 } is the Euclidean unit ball.)
(d) If U(-) is single-valued, then U(-) is continuous at # provided that U(#) is nonempty.
If the mapping U(-) satisfies properties (a), (b), and (c), then the set
can be viewed as an "interior band" associated with the solution set of the NCP.
The "interior band" can be viewed as a generalization of the concept of the central
path. Indeed, if U(-) satisfies properties (a), (b), and (d), then the set #(0,#) U(#)
coincides with the central path of the NCP.
There exist several ways of generating the central path of the NCP, including
maximal monotone methods [14, 30], minimization methods [31], homeomorphism
techniques [6, 14, 15, 25, 33], the parameterized Sard theorem [42], and weakly univalent
properties of continuous functions [13, 35, 37]. In this paper, we develop a
different method for the analysis of the existence of the central path. By means of
the homotopy invariance property of the degree and a newly introduced concept of
interior-point-exceptional family for continuous functions, we establish an alternative
theorem for the nonemptyness of the mapping U(#). For a given # > 0, the result
states that there exists either an interior-point-exceptional family for f or U(#.
Consequently, to show the nonemptyness of the mapping U(-), it is su#cient to verify
conditions under which the function f possesses no interior-point-exceptional family
for any # > 0. Along with this idea, we provide several su#cient conditions that
guarantee the aforementioned desirable properties of the multivalued mapping U(-).
These sufficient conditions are related to several classes of (nonmonotone) functions
such as semimonotone, quasi-P # -, P(#)-, and exceptionally regular maps. The
results proved in the paper include several known results on the central path as special
instances.
This paper is organized as follows. In section 2, we introduce some definitions
and some basic results that will be utilized in the paper. In section 3, we show
an essential alternative theorem that is useful in later derivations. In section 4, we
establish some sufficient conditions to guarantee the nonemptyness, boundedness, and
upper-semicontinuity of the map U(#), and the existence of the central path. Some
concluding remarks are given in section 5.
Notations: R n
(respectively, R n
denotes the space of n-dimensional real vectors
with nonnegative components (respectively, positive components), and R n-n stands
for the space of n - n matrices. For any x # R n , we denote by #x# the Euclidean
norm of x, by x i the ith component of x for the vector whose
ith component is max{0, x i }. When x # R n
(R n
++ ), we also write it as x # 0
for simplicity.
2. Preliminaries. We first introduce the concept of an E 0 -function, which is
a generalization of an E 0 -matrix, i.e., a semimonotone matrix (see [8]). Recall that
an n - n matrix M is said to be an E 0 -matrix if for any 0 #= x # 0, there exists a
component x i > 0 such that (Mx) i # 0. M is a strictly semimonotone matrix if for
any 0 #= x # 0, there exists a component x i > 0 such that (Mx) i > 0.
Definition 2.1. A function f : R n
R n is said to be an E 0 -function (i.e.,
semimonotone function) if for any x #= y and x # y in R n , there exists some i such
that x i > y i and f i (x) # f i (y). f is a strictly semimonotone function if for any x #= y
and x # y in R n , there exists some i such that x i > y i and f i (x) > f i (y).
It is evident that an affine map f(x) = Mx + q is an E 0 -function
if and only if M is an E 0 -matrix. We recall that a function f is said to be a P 0 (P)-function if for any x ≠ y in R n ,
max over i with x i ≠ y i of (x i − y i )(f i (x) − f i (y)) ≥ 0 (respectively, > 0).
Clearly, a P 0 -function is an E 0 -function. However, the converse is not true (see [8,
Example 3.9.2]). Thus the class of E 0 -functions is larger than that of P 0 -functions.
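The definitions above can be probed numerically: the sketch below searches random pairs of points for a violation of the P 0 inequality max over i with x i ≠ y i of (x i − y i )(f i (x) − f i (y)) ≥ 0. A passed search is of course only evidence, not a proof, that f is a P 0 -function; the function and variable names are assumptions of this illustration.

```python
import numpy as np

def violates_P0(f, n, trials=10000, rng=np.random.default_rng(0)):
    """Search for a pair x != y violating the P0 condition
    max_{i: x_i != y_i} (x_i - y_i) * (f_i(x) - f_i(y)) >= 0."""
    for _ in range(trials):
        x, y = rng.standard_normal(n), rng.standard_normal(n)
        d, df = x - y, f(x) - f(y)
        idx = np.abs(d) > 1e-12
        if idx.any() and np.max(d[idx] * df[idx]) < 0:
            return x, y          # a certificate that f is not P0
    return None                  # no violation found in the sample

# example: f(x) = Mx with a positive semidefinite (hence P0) matrix
M = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(violates_P0(lambda x: M @ x, 2))    # expected: None
```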
Definition 2.2. (D1) [23, 24]. A map f : R n → R n is said to be quasi monotone if for x ≠ y in R n , f(y) T (x − y) > 0 implies that f(x) T (x − y) ≥ 0.
(D2) [26]. f : R n → R n is said to be a P * -map if there exists a scalar τ ≥ 0 such that for any x ≠ y in R n we have
(1 + 4τ) Σ i∈I + (x,y) (x i − y i )(f i (x) − f i (y)) + Σ i∈I − (x,y) (x i − y i )(f i (x) − f i (y)) ≥ 0,
where
I + (x, y) = { i : (x i − y i )(f i (x) − f i (y)) > 0 },  I − (x, y) = { i : (x i − y i )(f i (x) − f i (y)) < 0 }. (3)
(D3) [26]. M is said to be a P * -matrix if there exists a scalar τ ≥ 0 such that
(1 + 4τ) Σ i∈I + (x) x i (Mx) i + Σ i∈I − (x) x i (Mx) i ≥ 0 for all x ∈ R n ,
where I + (x) = { i : x i (Mx) i > 0 } and I − (x) = { i : x i (Mx) i < 0 }.
Clearly, for a linear map f(x) = Mx + q, f is a P * -map if and only if M is a P * -matrix. It has been shown that the class of P * -matrices coincides with the class of sufficient matrices [8, 9]. A new equivalent definition of the P * -matrix is given in [46]. The next concept is a generalization of the quasi monotone function and the P * -map.
Definition 2.3. [46] A function f : R n
R n is said to be a quasi-P # -map if
there exists a constant # 0 such that the following implication holds for all x #= y
in R n .
defined by (3).
From the above definition, it is evident that the class of quasi-P # -maps includes
quasi monotone functions and P # -maps. (see [46] for details). The following concept
of a P (#)-map is also a generalization of the P # -map. In [46], it is pointed out
that monotone functions and P # -maps are special cases of P(#)-maps.
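The defining inequality of a P * -map in Definition 2.2 — which the quasi-P * - and P(#)-classes generalize — can likewise be checked numerically on given pairs x, y for a candidate constant τ. The helper below is only an illustrative test of that inequality; the example matrix and the value τ = 1 are choices made for this sketch.

```python
import numpy as np

def satisfies_Pstar_inequality(f, x, y, tau):
    """Check the P*(tau) inequality of Definition 2.2 for one pair x != y:
    (1 + 4*tau) * sum_{i in I+} t_i + sum_{i in I-} t_i >= 0,
    where t_i = (x_i - y_i) * (f_i(x) - f_i(y))."""
    t = (x - y) * (f(x) - f(y))
    pos, neg = t[t > 0].sum(), t[t < 0].sum()
    return (1 + 4 * tau) * pos + neg >= 0

# example: a P-matrix that is not positive semidefinite; it satisfies the
# P*(tau) inequality with tau = 1
M = np.array([[1.0, -3.0], [0.0, 1.0]])
f = lambda x: M @ x
rng = np.random.default_rng(1)
pairs = [(rng.standard_normal(2), rng.standard_normal(2)) for _ in range(1000)]
print(all(satisfies_Pstar_inequality(f, x, y, tau=1.0) for x, y in pairs))
```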
Definition 2.4. [46] A mapping f : R n
R n is said to be a P(#)-map if
there exist constants # 0, # 0, and 0 # < 1 such that the following inequality
holds for all x #= y in R
1#i#n
1#i#n
The concept of exceptional regularity that we are going to define next has a close
relation to such concepts as copositive, R 0 -, P 0 -, and E 0 -functions. It is shown that
the exceptional regularity is a weak su#cient condition for the nonemptyness and the
boundedness of the mapping U(#) (see section 4.4 for details).
Definition 2.5. Let f be a function from R n into R n . f is said to be exceptionally
regular if, for each # 0, the following complementarity problem has no solution of
norm 1:
The following two results are employed to prove the main result of the next
section. Let S be an open bounded set of R n . We denote by S and #(S) the closure
and boundary of S, respectively. Let F be a continuous function from S into R n . For
any y # R n such that y # F (#(S)), the symbol deg(F, S, y) denotes the topological
degree associated with F, S, and y (see [34]).
Lemma 2.1. [34] Let S # R n be an open bounded set and F, G be two continuous
functions from S into R n .
(i) Let the homotopy H(x, t) be defined as
and let y be an arbitrary point in R n . If y /
then deg(G, S,
(ii) If deg(F, S, y) #= 0, then the equation F y has a solution in S.
The following upper-semicontinuity theorem of weakly univalent maps is due to
Ravindran and Gowda [35].
Lemma 2.2. [35] Let g : R n
R n be weakly univalent; that is, g is continuous
and there exist one-to-one continuous functions
uniformly on every bounded subset of R n . Suppose that q # R n such that g -1 (q # ) is
nonempty and compact. Then for any given scalar # > 0 there exists a scalar # > 0
such that for any weakly univalent function h : R n
R n and for any q # R n with
sup
-#
we have
where B denotes the open unit ball in R n
3. Interior-point-exceptional family and an alternative theorem. We
now introduce the concept of the interior-point-exceptional family for a continuous
function, which brings us to a new idea, to investigate the properties of the mapping
U(#) defined by (2), especially the existence of the central path for NCPs. This
concept can be viewed as a variant of the exceptional family of elements which was
originally introduced to study the solvability of complementarity problems and variational
inequalities [19, 20, 36, 43, 44, 45, 46].
Definition 3.1. Let f : R n
R n be a continuous function. Given a scalar
# > 0, we say that a sequence {x r
++ is an interior-point-exceptional
family for f if #x r
# as r # and for each x r there exists a positive number
x r
for all
Based on the above concept, we can prove the following result which plays a key
role in the analysis of the paper.
Theorem 3.1. Let f be a continuous function from R n into R n . Then for each
there exists either a point x(#) such that
x(#) > 0, f(x(#)) > 0, x i (#) f i (x(#)) = # for all i, (5)
or an interior-point-exceptional family for f .
Proof. Let F be the Fischer-Burmeister function of f , defined componentwise by
F i (x) = x i + f i (x) − sqrt( x i 2 + f i (x) 2 ), i = 1, . . . , n.
It is well known that x solves the NCP if and only if x solves the equation F (x) = 0.
Given # > 0, we perturb F (x) to F # (x) given componentwise by
(F # (x)) i = x i + f i (x) − sqrt( x i 2 + f i (x) 2 + 2# ), i = 1, . . . , n. (6)
It is easy to see that x(#) solves the equation F # (x) = 0 if and only if x(#) satisfies the system (5). We now consider the convex homotopy between the mapping F # (x) and the identity mapping, that is,
H(x, t) = t F # (x) + (1 − t) x, t ∈ [0, 1].
Let r > 0 be an arbitrary positive scalar. Consider the open bounded set S r = { x ∈ R n : #x# < r }. The boundary of S r is given by ∂S r = { x ∈ R n : #x# = r }. There are only two cases.
Case 1. There exists a number r > 0 such that 0 ∉ { H(x, t) : x ∈ ∂S r , t ∈ [0, 1] }. In this case, by (i) of Lemma 2.1, we have that deg(F # , S r , 0) = deg(I, S r , 0), where I is the identity mapping. Since deg(I, S r , 0) = 1, from the above equation and (ii) of Lemma 2.1, we deduce that the equation F # (x) = 0 has a solution, denoted by x(#), which satisfies the system (5).
Case 2. For each r > 0, there exists some point x r ∈ ∂S r and t r ∈ [0, 1] such that
t r F # (x r ) + (1 − t r ) x r = 0. (7)
If t r = 1, then the above equation reduces to F # (x r ) = 0, which implies that x(#) := x r satisfies the system (5). We now verify that t r ≠ 0. In fact, if t r = 0, then from (7) we have that x r = 0, which is impossible since x r ∈ ∂S r .
Therefore, it is sufficient to consider the case of 0 < t r < 1 for all r > 0. In
this case, it is easy to show that f actually has an interior-point-exceptional family.
Indeed, in this case, (7) can be written as
x r
Squaring both sides of the above and simplifying, we have
x r
1), the above equation implies that x r
We see from the above equation that
x r
We further show that x r
++ . In fact, it follows from (8) that
x r
On the other hand, by using (9) we obtain
x r
x r
Combining (10) and the above equation yields x r
++ . Since #x r
it is clear
that #x r
# as r #. Consequently, the sequence {x r
} is an interior-point-
exceptional family for f .
The above result shows that if f has no interior-point-exceptional family for
each # > 0, then property (a) of the mapping U(-) holds. From the result, it is
interesting to study various practical conditions under which a continuous function
does not possess an interior-point-exceptional family for every # (0, #). In the
next section, we provide several such conditions ensuring the aforementioned desirable
properties of the mapping U(-).
PROPERTIES OF A MULTIVALUED MAPPING 577
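A small numerical illustration of the perturbed Fischer-Burmeister equation used in the proof of Theorem 3.1, assuming the componentwise form reconstructed in (6): a zero of F # is exactly a point of U(#), i.e., it satisfies system (5). The sketch below only checks this equivalence numerically.

```python
import numpy as np

def F_mu(x, f, mu):
    """Perturbed Fischer-Burmeister map: F_mu(x) = 0  iff
    x > 0, f(x) > 0 and x_i * f_i(x) = mu for every i (system (5))."""
    fx = f(x)
    return x + fx - np.sqrt(x**2 + fx**2 + 2.0*mu)

# scalar equivalence: a + b - sqrt(a^2 + b^2 + 2 mu) = 0  <=>  a, b > 0, a*b = mu
mu, a = 0.3, 1.7
b = mu / a                                    # so that a*b = mu
print(a + b - np.sqrt(a**2 + b**2 + 2*mu))    # ~ 0 up to rounding

# a vector example with f(x) = Mx + q
M = np.array([[2.0, 1.0], [1.0, 3.0]]); q = np.array([-1.0, -2.0])
x = np.array([0.9, 0.8])
print(F_mu(x, lambda z: M @ z + q, mu))
```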
4. Sufficient conditions for properties of U(-).
4.1. E 0 -function. In this section, we prove that the multivalued mapping U(-)
has properties (a) and (b) if f is a continuous E 0 -function satisfying a certain properness
condition. Moreover, if F # (x) given by (6) is weakly univalent, then property (c)
also holds. Applied to P 0 complementarity problems, this existence result extends a
recent result due to Gowda and Tawhid [13]. The following lemma is quite useful.
Lemma 4.1. Let f : R n
R n be an E 0 -function. Then for any sequence
++ with #u k
#, there exist an index i and a subsequence of {u k
denoted by {u k j
}, such that u k j
Proof. This proof has appeared in several works, see [11, 13, 35, 38]. Let {u k
++ be a sequence satisfying #u k
#. Choosing a subsequence if necessary, we
may suppose that there exists an index set I # {1, . , n} such that u k
for each
i } is bounded for each i /
R n be a vector constructed as
follows:
Thus,
} is a bounded sequence. Clearly, u k
is an E 0 -function,
there exist an index i # I and a subsequence of {u k
}, denoted by {u k j
}, such that
Note that the right-hand side of the above inequality is bounded. The desired result
follows.
To show the main result of this subsection, we will make use of the following
assumption which is weaker than several previously known conditions.
Condition 4.1. For any sequence {x k
satisfying
# and [-f(x k
# 0, and
(ii) for each index i with x k
i #, the corresponding sequence {f i
above, and
(iii) there exists at least one index i 0 such that x k
it holds that
1#i#n
for some subsequence {x k l
As we see in the following result the above condition encompasses several particular
cases; we omit the details.
Proposition 4.1. Condition 4.1 is satisfied if one of the following conditions
holds.
(C1) For any positive sequence {x k
++ with #x k
# and [-f(x k
0, it holds that max 1#i#n x k l
subsequence {x k l
(C2) For any sequence {x k
++ with #x k
# and min 1#i#n f i
0, it holds that max 1#i#n x k l
subsequence {x k l
[22, 29] For any sequence {x k
} with #x k
#,
# 0, and
# 0, it holds that
lim inf
[13] For any sequence {x k
} with #x k
#,
lim inf
min 1#i#n x k
# 0, and lim inf
there exist an index j and a subsequence {x k l
} such that x k l
is a R 0 -function.
monotone and the NCP is strictly feasible.
is a uniform P-function.
Remark 4.1. The condition (C1) of the above proposition is weaker than each of
the conditions (C2) through (C7). (C2) is weaker than each of the conditions (C4)
through (C7). The concept of the R 0 -function, a generalization of the R 0 -matrix [8],
was introduced in [39] and later modified in [6].
In what follows, we show under a properness condition that the short "interior
band" #(0,-#] U(#) is bounded for each given -
# > 0. The boundedness is important
because it implies that the sequence {x(# k )}, where # k → 0, is bounded and each accumulation point of the sequence is a solution to the NCP
provided that f is continuous. We impose the following condition on f .
Condition 4.2. For any positive sequence {x k
++ such that #x k
#,
and the sequence {f i is bounded for each index i with
#, it holds that
1#i#n
for some subsequence {x k l
Clearly, Condition 4.2 is weaker than Condition 4.1 and thereby weaker than all
conditions listed in Proposition 4.1. We now prove the boundedness of the short
"interior band" under the above condition.
Lemma 4.2. Suppose that Condition 4.2 is satisfied. If U(#) is nonempty for each # > 0,
then for any -
# > 0 the set
#(0,-#] U(#) is bounded, i.e., property (b) holds. Particu-
larly, U(#) is bounded for each # > 0.
Proof. Suppose that there exists some - # > 0 such that the set #(0,-#] U(#) is unbounded. Then there exists a sequence {x(# k )}, where # k ∈ (0, - #], such that #x(# k )# → ∞ as k → ∞ and that
#], such that #x(# k )# as
and that
for all
Thus, for each i such that x i #, the sequence {f i (x(# k ))} is bounded. By
Condition 4.2, we deduce that there exists a subsequence {x(# k l )} such that
1#i#n
This is a contradiction since x i
# for all
The main result on E 0 -functions is given as follows. Even for P 0 -functions, this
result is new.
Theorem 4.1. Suppose that f is a continuous E_0-function and Condition 4.1 is satisfied. Then the properties (a) and (b) of the mapping U(μ) hold. Moreover, if F_μ(x) defined by (6) is weakly univalent in x, then the mapping U(·) is upper semicontinuous, i.e., property (c) also holds.
Proof. To prove property (a), by Theorem 3.1, it suffices to show that there exists no interior-point-exceptional family of f for any μ > 0. Assume to the contrary that for a certain μ > 0 the function f has an interior-point-exceptional family {x^r}. Since ‖x^r‖ → ∞, {x^r} ⊂ R^n_{++}, and f is an E_0-function, by Lemma 4.1 there exist some index m and a subsequence {x^{r_j}} such that x^{r_j}_m → ∞ and {f_m(x^{r_j})} is bounded below.
From (4), we have
bounded below, the right-hand side of the above
equation is bounded below. It follows that lim j# - r
On the other hand, we note that for any 0 < - < 1 the function
# --
-#
is monotonically decreasing with respect to the variable t # (0, #). Passing through a
subsequence, we may suppose that there exists an index set I # {1, . , n} such that
i } is bounded for each i /
# I.
# I, then there exists some scalar C > 0 such that x r j
#(t) is decreasing and - r j # 1, we have
Thus, for all su#ciently large j, we have
# I.
using (4) and the facts - r
#, we have
which implies that
Therefore, [-f(x r
it follows from (4) that
which implies that {f i (x r j )} is bounded above for all i # I. Since m # I and {f m
is bounded below, the sequence {f m (x r j )} is indeed bounded. From Condition 4.1,
there is a subsequence of {x r j
denoted also by {x r j
}, such that
1#i#n
However, from (4) we have
for all i # {1, . , n}. This is a contradiction. Property (a) of U(#) follows.
Since Condition 4.1 implies Condition 4.2, the boundedness of the set ∪_{μ∈(0,μ̄]} U(μ) follows immediately from Lemma 4.2. It is known that x(μ) ∈ U(μ) if and only if x(μ) is a solution to the equation F_μ(x) = 0, i.e., U(μ) = F_μ^{-1}(0). Since U(μ) is bounded, the set F_μ^{-1}(0) is bounded (in fact, compact, since f is continuous). If F_μ(x) is weakly
univalent in x, by Lemma 2.2, for each scalar # > 0 there is a # > 0 such that for any
weakly univalent function h : R n
sup
x#
-#
we have
It is easy to see that for the given # > 0 there exists a scalar # > 0 such that
sup
x#
-#
Setting h(x) := F # (x) in (13) and (14), we obtain that #= F -1
for all |#, i.e., U(# U(#B for all # su#ciently close to #. Thus, U(#)
is upper-semicontinuous.
Ravindran and Gowda [35] showed that if f is a P_0-function, then F_μ(x) given by (6) is a P-function in x, and hence the equation F_μ(x) = 0 has at most one solution x(μ). In this case, the upper semicontinuity of U(·) reduces to the continuity of x(μ). By the fact that every P_0-function is an E_0-function and is weakly univalent, we have the following result from Theorem 4.1.
Corollary 4.1. Suppose that f : R^n → R^n is a continuous P_0-function and Condition 4.1 is satisfied. Then the central path exists and any slice of it is bounded, i.e., for each μ > 0 there exists a unique x(μ) satisfying the system (1), x(μ) is continuous on (0, ∞), and the set {x(μ) : μ ∈ (0, μ̄]} is bounded for each μ̄ > 0.
When f is a P 0 -function, Gowda and Tawhid [13, Theorem 9] showed that the
(long) central path exists if condition (C4) of Proposition 4.1 is satisfied. Corollary 4.1
can serve as a generalization of the Gowda and Tawhid result. It is worth noting that
the consequences of Corollary 4.1 remain valid if condition (C1) or (C2) of Proposition
4.1 holds.
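As a concrete linear illustration of Corollary 4.1, the following sketch (not part of the paper; the matrix, vector, and tolerances are illustrative choices) traces points x(μ) of the central path of the LCP with f(x) = Mx + q for a P-matrix M, by applying a damped Newton method to the centering system x_i f_i(x) = μ while keeping the iterates strictly in the interior.

import numpy as np

def central_path_point(M, q, mu, x0, tol=1e-10, max_iter=100):
    # Solve x_i * (Mx + q)_i = mu with x > 0 and Mx + q > 0 by damped Newton.
    # Assumes a strictly feasible start x0 and that M is a P-matrix, so the
    # Jacobian diag(Mx+q) + diag(x) M is nonsingular on the interior.
    x = x0.copy()
    for _ in range(max_iter):
        s = M @ x + q                      # the slack f(x)
        F = x * s - mu                     # residual of the centering system
        if np.linalg.norm(F) < tol:
            break
        J = np.diag(s) + np.diag(x) @ M
        dx = np.linalg.solve(J, -F)
        t = 1.0                            # backtrack to stay strictly feasible
        while np.any(x + t * dx <= 0) or np.any(M @ (x + t * dx) + q <= 0):
            t *= 0.5
        x = x + t * dx
    return x

if __name__ == "__main__":
    M = np.array([[2.0, 1.0], [0.0, 1.0]])     # a P-matrix (illustrative)
    q = np.array([-1.0, -0.5])
    x = np.array([2.0, 2.0])                    # strictly feasible start
    for mu in [1.0, 0.1, 0.01, 0.001]:
        x = central_path_point(M, q, mu, x)     # warm start along the path
        print(mu, x, M @ x + q)

As μ decreases, the computed points approach a solution of the complementarity problem, illustrating the boundedness of the slice {x(μ) : μ ∈ (0, μ̄]} asserted above.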
4.2. Quasi-P_*-maps. The concept of the quasi-P_*-map, which is a generalization of the quasi monotone function and the P_*-map, was first introduced in [46] to study the solvability of the NCP. Under the strict feasibility assumption as well as the following condition, we can show the nonemptiness and the boundedness of U(·) if f is a continuous quasi-P_*-map.
Condition 4.3. For any sequence {x k
++ such that
#, lim
and {f(x k )} is bounded, it holds that
1#i#n
for some subsequence {x k l
Clearly, the above condition is weaker than Conditions 4.1 and 4.2. It is also
weaker than Condition 3.8 in [4] and Condition 1.5(iii) in [25]. The following is the
main result of this subsection.
Theorem 4.2. Let f be a continuous quasi-P_*-map with the nonnegative constant of Definition 2.3. Suppose that Condition 4.3 is satisfied. If the NCP is strictly feasible, then property (a) of U(μ) holds. Moreover, if Condition 4.2 is satisfied, then property (b) holds, and if F_μ(x) is weakly univalent in x, then property (c) also holds.
While the nonemptiness of U(μ) is ensured under Condition 4.3, it is not clear whether the boundedness of U(μ) follows from this condition. However, from the implications Condition 4.1 ⇒ Condition 4.2 ⇒ Condition 4.3, we have the next consequence.
Corollary 4.2. Suppose that f is a continuous quasi-P_*-map and F_μ(x) is weakly univalent in x. If the NCP is strictly feasible and Condition 4.1 or 4.2 is satisfied, then the mapping U(·) has properties (a), (b), and (c).
The proof of Theorem 4.2 is postponed until we have proved two technical lemmas.
Lemma 4.3. Let f satisfy Condition 4.3. Assume that {x r
} r>0 is an interior-
point-exceptional family for f . If there exists a subsequence of {x r
}, denoted by
{x rk
}, such that for some 0 < # < 1,
lim
#x rk
then we have
lim
1#i#n
x rk
Proof. Suppose that {x rk
} is an arbitrary subsequence of {x r
} such that (15)
holds. Since #(t) defined by (11) is decreasing on (0, #), for each i # {1, . , n} we
have
1#i#n
x rk
min 1#i#n x rk
and
1#i#n
x rk
Suppose to the contrary that there exists a subsequence of {x rk
denoted also by
{x rk
}, such that min 1#i#n x rk
is a constant. We
derive a contradiction. Indeed, since - rk - 1
< 0, from (16) we have
min 1#i#n x rk
# for all
From (17) and the above relation, we obtain
1#i#n
x rk
Since #x rk
#, we deduce from (15) that
lim
1#i#n
x rk
Therefore, it follows from (18) that there exists a scalar c such that c # f i
for all Condition 4.3, there exists a
subsequence of {x rk
denoted still by {x rk
}, such that max 1#i#n x rk
However, from (12) we have that x rk
This is a
contradiction.
Lemma 4.4. Let f satisfy Condition 4.3. Assume that {x r
} is an interior-point-
#-exceptional family for f . Let u > 0 be an arbitrary vector in R n . Then for any
subsequence {x rk
(where r k # as k #) there exists a subsequence of {x rk
denoted still by {x rk
}, such that f(x rk
su#ciently large k.
Proof. Let {x rk
} be an arbitrary subsequence of {x r
(where r k # as k #).
By using (4) we have
#x rk
x rk
#x rk
x rk
# .
We suppose that f(x rk
su#ciently large k. We derive a contra-
diction. From (19), we have
#x rk
Since #x rk
#, for all su#ciently large k we have
#x rk
which implies that
lim
#x rk
#x rk
for any scalar 0 < # < 1. Thus, we see from Lemma 4.3 that
min
1#i#n
x rk
Notice
#x rk
for all su#ciently large k. From (19), (20), and the above inequality, we have
x rk
min 1#i#n x rk
for all su#ciently large k. This is a contradiction.
We are now ready to prove the results of Theorem 4.2.
Proof of Theorem 4.2. To show property (a) of the mapping U(μ), by Theorem 3.1, it suffices to show that f has no interior-point-exceptional family for any μ > 0. Assume to the contrary that there exists an interior-point-exceptional family for f, denoted by {x^r}. By the strict feasibility of the NCP, there is a vector u > 0 such that f(u) > 0. Consider two possible cases.
Case (A). There exists a number r 0 > 0 such that
1#i#n
In this case, the index set I
#,
it is easy to see that
for all su#ciently large r. Since f is a quasi-P # -map and I is empty, the
above inequality implies that f(x r
su#ciently large r. How-
ever, by Lemma 4.4 there exists a subsequence of {x r
}, denoted by {x rk
}, such that
su#ciently large k. This is a contradiction.
Case (B). There exists a subsequence of {x r
} denoted by {x r j
as j #, such that
1#i#n
By using (4), for each i we have
# .
There exist a subsequence of {x r j
denoted also by {x r j
}, and a fixed index m such
that
1#i#n
For each i such that x r j
#, (21) implies that A (r j )
j, we deduce that {x r j
m} is bounded, i.e., there is a constant - # such that 0 < x r j
for all j.
um , setting in (21), we have
um # 1# 1
If um < x r j
#, setting in (21), we obtain
We consider two subcases, choosing a subsequence whenever it is necessary.
Subcase 1. - r j # 1. From (22) and (23), for all su#ciently large j we have
um
Thus, for all su#ciently large j, we obtain
1#i#n
um /2)}
The last inequality above follows from the fact that f(u) > 0, {x r j
++ , and
#. Since f is a quasi-P # -map, the above inequality implies that f(x r
su#ciently large j, which is impossible according to Lemma 4.4.
Subcase 2. There exists a subsequence of {- r j }, denoted also by {- r j }, such that
1. In this case, from (22) and (23), we have
# .
It follows from (4) that
1#i#n
.
We now show that T (r su#ciently large j.
noting that - r j # and #x r j
#, we obtain
), by the same argument as the above, we can
show that
for all su#ciently large j. Thus, by the quasi-P # -property of f , we deduce from
su#ciently large j. It is a contradiction
since {x r j
#, and f(u) > 0.
The above contradictions show that f has no interior-point-exceptional family for each μ > 0. By Theorem 3.1, the set U(μ) ≠ ∅ for any μ > 0. The boundedness of the short "interior band" follows from Lemma 4.2, and the upper semicontinuity of U(μ) follows easily from Lemma 2.2.
The class of quasi-P_*-maps includes the quasi monotone functions as particular cases. The following result is an immediate consequence of Theorem 4.2.
Corollary 4.3. Suppose that f is a continuous quasi monotone (in particular, pseudomonotone) function, and the NCP is strictly feasible.
(i) If Condition 4.3 is satisfied, then property (a) of U(μ) holds.
(ii) If Condition 4.2 is satisfied, then properties (a) and (b) of U(μ) hold.
In the case when F_μ(x) is univalent (continuous and one-to-one) in x, the equation F_μ(x) = 0 has at most one solution. Combining this fact and Theorem 4.2, we have the following result concerning the existence of the central path of the NCP. To our knowledge, this result can be viewed as the first existence result on the central path for the NCP with a (generalized) quasi monotone function. Up to now, there are no interior-point type algorithms designed for solving (generalized) quasi monotone complementarity problems.
Corollary 4.4. Let f be a quasi-P_*-map, and let F_μ(x) be univalent in x. If the NCP is strictly feasible and Condition 4.2 is satisfied, then the central path exists and the set {x(μ) : μ ∈ (0, μ̄]} is bounded for any given μ̄ > 0.
In particular, if f is a P_0-function, then F_μ(x) is univalent in x (see [35]). We have the following result.
Corollary 4.5. Let f be a continuous P_0- and quasi-P_*-map. If the NCP is strictly feasible and Condition 4.2 is satisfied, then the conclusions of Corollary 4.4 are valid.
4.3. P(τ, α, β)-maps. It is well known (see [14, 25, 30, 31]) that monotonicity combined with strict feasibility implies the existence of the central path. In this section, we extend the result to a class of nonmonotone complementarity problems. Our result states that if f is a P(τ, α, β)- and P_0-map (see Definition 2.4), the central path exists provided that the NCP is strictly feasible. This result gives an answer to the question "What class of nonlinear functions beyond P_*-maps can ensure the existence of the central path if the NCP is strictly feasible?" We first show properties of the mapping U(·) when f is a P(τ, α, β)-map.
Theorem 4.3. Let f be a continuous P(τ, α, β)-map. If the NCP is strictly feasible, then properties (a) and (b) of U(μ) hold. Moreover, if F_μ(x) is weakly univalent in x, property (c) also holds.
Proof. Suppose that there exists a scalar # > 0 such that f has an interior-point-
#-exceptional family denoted by {x r
++ and #x r
# as r #,
there exist some p and a subsequence denoted by {x r j
such that #x r j
# and
1#i#n
Clearly, x r j
On the other hand, there exists a subsequence of {x r j
denoted also by {x r j
}, such that for some fixed index m and for all j we have
1#i#n
By the definition of the P(#)-map, we have
1#i#n
1#i#n
- u# .
From (4), we have that f p
p , and hence
It is easy to see that
Combining (25) and (26) leads to
From
min := min
1#i#n
we deduce that
min # as j #.
We now show that {x r j
m} is bounded. Assume that there exists a subsequence of {x r j
denoted still by {x r j
m}, such that x r j
#. Then, from (21), we have
and hence for all su#ciently large j we have
1#i#n
By (27) and the above relation, we obtain
min # as j #.
However, since f is a P(#)-map, we have
min #,
which contradicts (28). This contradiction shows that the sequence {x r j
m} is bounded.
By using (4) and (24), we have
- u# .
Multiplying both sides of the above inequality by 1/(x r j
rearranging terms,
and using (26), we have
For all sufficiently large j, the left-hand side of the above inequality is negative, but the right-hand side tends to f_p(u) > 0 as j → ∞. This is a contradiction. The contradiction shows that f has no interior-point-exceptional family for every μ > 0. By Theorem 3.1, property (a) of U(μ) follows. The proof of the boundedness of the set ∪_{μ∈(0,μ̄]} U(μ) is not straightforward. It can be proved by the same argument as the above. Indeed, we suppose that {x(μ_k)} ⊂ ∪_{μ∈(0,μ̄]} U(μ), with 0 < μ_k ≤ μ̄, is an unbounded sequence. Replacing {x^{r_j}} by {x(μ_k)}, using the defining relations of U(μ_k) instead of (4), and repeating the aforementioned proof, we can derive a contradiction. The upper semicontinuity of U(·) can be obtained by Lemma 2.2. The proof is complete.
The class of P(τ, α, β)-maps includes several particular cases. It is shown in [46] that the class of P(τ, 0, 0)-maps coincides with the class of P_*-maps. Therefore, f is said to be a P_*-map if and only if there exists a nonnegative scalar κ ≥ 0 such that
(1 + 4κ) Σ_{i∈I_+(x,y)} (x_i − y_i)(f_i(x) − f_i(y)) + Σ_{i∈I_−(x,y)} (x_i − y_i)(f_i(x) − f_i(y)) ≥ 0 for all x, y ∈ R^n,
where I_+(x,y) and I_−(x,y) collect the indices for which (x_i − y_i)(f_i(x) − f_i(y)) is positive and negative, respectively. In particular, a matrix M ∈ R^{n×n} is a P_*-matrix if and only if there is a constant κ ≥ 0 such that
(1 + 4κ) Σ_{i∈I_+(x)} x_i(Mx)_i + Σ_{i∈I_−(x)} x_i(Mx)_i ≥ 0 for all x ∈ R^n,
where I_+(x) = {i : x_i(Mx)_i > 0} and I_−(x) = {i : x_i(Mx)_i < 0}. This is an equivalent definition of the concept of a P_*-matrix (sufficient matrix) introduced by Kojima et al. [26] and Cottle, Pang, and Venkateswaran [9]. The following result follows immediately from Theorem 4.3.
Corollary 4.6. Let f be a continuous P_0- and P(τ, α, β)-map. If the NCP is strictly feasible, then the central path exists and any slice of it is bounded.
It is worth noting that each P_*-map is a P_0- and a P(τ, α, β)-function. The following result is a straightforward consequence of the above corollary.
Corollary 4.7. Let f be a continuous P_*-map. If the NCP is strictly feasible, then the central path exists and any slice of it is bounded.
It should be pointed out that P_*-maps are also special instances of quasi-P_*-maps. A result similar to Corollary 4.3 can be stated for P_*-maps. However, as we have shown in Corollary 4.7, additional conditions such as Conditions 4.1, 4.2, and 4.3 are not necessary for a P_*-map to guarantee the existence of the central path. While P_*-maps and quasi monotone functions are contained in the class of quasi-P_*-maps, Zhao and Isac [46] gave examples to show that a P_*-map, in general, is not a quasi monotone function, and vice versa.
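The characterization of P_*-matrices quoted above lends itself to a simple randomized falsification test: sample vectors x, split the indices according to the sign of x_i(Mx)_i, and check the defining inequality for a given κ. A failed sample disproves the P_*(κ) property, while passing all samples is only evidence, not a proof. The sketch below is illustrative and not part of the paper; the matrices and the sample size are arbitrary choices.

import numpy as np

def violates_p_star(M, kappa, n_samples=20000, rng=None):
    # Randomized falsification test for the P_*(kappa) property:
    # (1 + 4*kappa) * sum_{i in I+} x_i (Mx)_i + sum_{i in I-} x_i (Mx)_i >= 0
    # for all x. Returns a violating x if one is found, otherwise None.
    rng = np.random.default_rng(rng)
    n = M.shape[0]
    for _ in range(n_samples):
        x = rng.standard_normal(n)
        prod = x * (M @ x)
        val = (1.0 + 4.0 * kappa) * prod[prod > 0].sum() + prod[prod < 0].sum()
        if val < -1e-12:
            return x
    return None

if __name__ == "__main__":
    M_psd = np.array([[2.0, -1.0], [-1.0, 2.0]])   # positive definite, hence P_*(0)
    M_bad = np.array([[0.0, 1.0], [-3.0, 0.0]])    # fails the test for kappa = 0
    print(violates_p_star(M_psd, kappa=0.0))        # expect None
    print(violates_p_star(M_bad, kappa=0.0))        # expect a violating vector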
4.4. Exceptionally regular functions. In section 4.1, we studied the properties of the mapping U(μ) for E_0-functions satisfying a properness condition, i.e., Condition 4.1. In section 4.2, we showed properties of U(μ) for quasi-P_*-maps under the strict feasibility condition as well as some properness conditions. In the preceding section, properness assumptions were removed, and properties of U(μ) for P(τ, α, β)-maps were proved under the strict feasibility condition only. In this section, removing both the strict feasibility condition and the properness conditions, we prove that the properties of U(μ) hold if f is an exceptionally regular function. The exceptional regularity of a function (see Definition 2.5) was originally introduced in [46] to investigate the existence of a solution to the NCP.
Definition 4.1 ([16]). A map v : R^n → R^n is said to be positively homogeneous of degree γ > 0 if v(tx) = t^γ v(x) for all scalars t ≥ 0 and all x ∈ R^n.
When γ = 1, the above concept reduces to the standard concept of positive homogeneity. Under the assumption that G(x) := f(x) − f(0) is positively homogeneous of degree γ > 0, we can show that properties (a) and (b) of U(μ) hold if f is exceptionally regular. See the following result.
Theorem 4.4. Let f be a continuous and exceptionally regular function from R^n into R^n. If G(x) = f(x) − f(0) is positively homogeneous of degree γ > 0, then properties (a) and (b) of U(μ) hold. Moreover, if F_μ(x) is weakly univalent, property (c) also holds.
Proof. Suppose that there is a scalar # > 0 such that f has an interior-point-
exceptional family {x r
We derive a contradiction. Indeed, since G(x) is positively
homogeneous of degree # > 0, we have
#) - f(0)).
Without loss of generality, assume that x r /#x r
x. From the above relation, we
have
lim
r#
From (4), we have2
x r
for all
# and x r
we deduce that x r
for each i # I (-x). We now show that
lim
#x r
for some -
It is su#cient to show the existence of the above limit. Indeed, for
each using (30) and (29) we have
lim
#x r
r#
#x r
x r
#x r
x r
Thus, (31) holds, with
Now, we consider the case of i /
(-x). In this case, -
using (4), (31), and
(29), we see from x r
# 0 that
r#
x r
r#
#x r
r#
#x r
#x r
i.e.,
Combining (32) and the above relation implies that f is not exceptionally regular.
This is a contradiction. The contradiction shows that f has no interior-point-
exceptional family for each # > 0, and hence property (a) of U(#) follows from Theorem
3.1. Property (b) of U(#) can be easily proved. Actually, suppose that there exists
a sequence {x(# k )} 0<#k<-# with #x(# k )#, where loss of
generality, let x(# k )/#x(# k )# -
1. As in the proof of (29) we have
Therefore,
which contradicts the exceptional regularity of f(x).
It is not difficult to see that a strictly copositive map and a strictly semimonotone function are special cases of exceptionally regular maps. Hence, we have the following result.
Corollary 4.8. Suppose that G(x) = f(x) − f(0) is positively homogeneous of degree γ > 0. Then the conclusions of Theorem 4.4 are valid if one of the following conditions holds.
(i) f is an E 0 -function, and for each 0 #= x # 0 there exists an index i such that
(ii) f is strictly copositive, that is, x
(iii) f is a strictly semimonotone function.
Proof. Since each of the above conditions implies that f(x) is exceptionally reg-
ular, the result follows immediately from Theorem 4.4.
Motivated by Definition 2.5, we introduce the following concept.
Definition 4.2. A matrix M ∈ R^{n×n} is said to be an exceptionally regular matrix if for every scalar τ ≥ 0 the matrix M + τI is an R_0-matrix.
It is evident that an exceptionally regular matrix is an R 0 -matrix, but the converse
is not true. The following result is an immediate consequence of Theorem 4.4 and its
corollary.
Corollary 4.9. Let f(x) = Mx + q, where M ∈ R^{n×n} and q is an arbitrary vector in R^n. If one of the following conditions is satisfied, then properties (a) and (b) of the mapping U(μ) hold:
(i) M ∈ R^{n×n} is an exceptionally regular matrix.
(ii) M is a strictly copositive matrix.
(iii) M is a strictly semimonotone matrix.
(iv) M is an E_0-matrix, and for each 0 ≠ x ≥ 0 there exists an index i such that x_i > 0 and (Mx)_i ≠ 0 (possibly, (Mx)_i < 0).
Furthermore, if M is also a P_0-matrix, then the central path of the linear complementarity problem exists and any slice of it is bounded.
The R_0-property of f has played an important role in complementarity theory. We close this section by considering this situation. The concept of a nonlinear R_0-function was first introduced by Tseng [38] and later modified by Chen and Harker [6]. We now give a definition of the R_0-function that is different from those in [38] and [6].
Definition 4.3. A map f : R^n → R^n is said to be an R_0-function if x = 0 is the unique solution to the following complementarity problem:
x ≥ 0,  f(x) − f(0) ≥ 0,  x^T (f(x) − f(0)) = 0.
This concept is a natural generalization of the R_0-matrix [8]. In fact, for the linear function f(x) = Mx + q it is easy to see that f is an R_0-function if and only if M is an R_0-matrix. In the case when f is an E_0-function, we have shown in the proof of Theorem 4.1
that there exists a subsequence {- rk } such that - rk # 1. Moreover, if G is positively
homogeneous, then from (31) we deduce that -
using these facts and the
above R 0 -property and repeating the proof of Theorem 4.4, we have the following
result.
Theorem 4.5. Suppose that f(tx) − f(0) = t^γ (f(x) − f(0)) for each scalar t ≥ 0 and x ∈ R^n, and that f is an E_0- and R_0-function. Then the conclusions of Theorem 4.4 remain valid. Moreover, if f is a P_0- and R_0-function, the central path exists and any slice of it is bounded.
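For the linear case, the R_0-property above reduces to the statement that the homogeneous LCP x ≥ 0, Mx ≥ 0, x_i(Mx)_i = 0 has only the trivial solution. For small n this can be decided exactly by enumerating candidate supports and solving one linear feasibility problem per support. The sketch below is illustrative and not part of the paper; it uses SciPy's linprog for the feasibility checks.

import itertools
import numpy as np
from scipy.optimize import linprog

def is_R0_matrix(M):
    # Decide whether M is an R0-matrix: x >= 0, Mx >= 0, x_i (Mx)_i = 0 for all i
    # has only the solution x = 0. For each candidate support I we look for x with
    # x_I >= 0 summing to 1, x = 0 off I, (Mx)_I = 0 and (Mx) >= 0 off I.
    n = M.shape[0]
    for r in range(1, n + 1):
        for I in itertools.combinations(range(n), r):
            I = list(I)
            J = [i for i in range(n) if i not in I]
            A_eq = np.vstack([M[I][:, I], np.ones((1, len(I)))])
            b_eq = np.concatenate([np.zeros(len(I)), [1.0]])
            A_ub = -M[J][:, I] if J else None   # encodes (Mx)_J >= 0
            b_ub = np.zeros(len(J)) if J else None
            res = linprog(c=np.zeros(len(I)), A_ub=A_ub, b_ub=b_ub,
                          A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(I))
            if res.status == 0:   # feasible, so a nontrivial solution exists
                return False
    return True

if __name__ == "__main__":
    print(is_R0_matrix(np.eye(2)))                           # identity: True
    print(is_R0_matrix(np.array([[0.0, 0.0], [0.0, 1.0]])))  # x = (1,0) works: False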
5. Conclusions. We introduced the concept of the interior-point-exceptional family for continuous functions, which is important since it strongly pertains to the existence of an interior point x(μ) ∈ U(μ) and the central path, and even to the solvability of NCPs. By means of this concept, we proved that for every continuous NCP the set U(μ) is nonempty for each scalar μ > 0 if there exists no interior-point-exceptional family for f. Based on this result, we established some sufficient conditions guaranteeing the desirable properties of the multivalued mapping U(μ) associated with certain nonmonotone complementarity problems. Since properties (a) and (b) of U(μ) imply that the NCP has a solution, the argument of this paper based on the interior-point-exceptional family can serve as a new analysis method for the existence of a solution to the NCP.
It is worth noting that any point in U(μ) is strictly feasible, i.e., x(μ) > 0 and f(x(μ)) > 0. Therefore, the analysis method in this paper can also be viewed as a tool for investigating the strict feasibility of a complementarity problem. In fact, from Theorems 3.1, 4.1, 4.4, and 4.5, we have the following result.
Theorem 5.1. Let f be a continuous function. Then the complementarity problem is strictly feasible whenever one of the following conditions holds.
(i) There exists a scalar μ > 0 such that f has no interior-point-exceptional family.
(ii) f is an E_0-function and Condition 4.1 is satisfied.
(iii) G(x) = f(x) − f(0) is positively homogeneous of degree γ > 0 and f is exceptionally regular.
(iv) f(x) = Mx + q, where M is an E_0- and R_0-matrix.
It should be pointed out that the results and the argument of this paper can
be easily extended to other interior-point paths. For instance, we can consider the
existence of the path
(where b and a > 0 are fixed vectors in R n ) first studied by Kojima, Megiddo, and
the above path reduces to the central path). This
path can be studied by the concept of interior-point-#(a, b)-exceptional family. For a
continuous
we say that a sequence {x r
++ is an interior-
b)-exceptional family for f if #x r
# as r #, and for each x r there
exists a positive number - r # (0, 1) such that for each i
x r
Using
and arguing as in the same proof of Theorem 3.1, we can show that for any # > 0 there
exists either a point x(#) satisfying (33) or an interior-point-#(a, b)-exceptional family
for f . This result enables us to develop some su#cient conditions for the existence of
the path (33).
Acknowledgments
. The authors would like to thank the referees and Professor
Jim Burke for their helpful suggestions and comments on an earlier version of this
paper, which helped the authors to correct some mistakes and improve the presentation
of the manuscript. They also thank Dr. Mustapha Ait Rami for his valuable
comments.
--R
The global linear convergence of a non-interior-point path following algorithm for linear
A global and local superlinear continuation-smoothing method for P 0 and R 0 NCP or monotone NCP
A Penalized Fischer-Burmeister NCP-Function: Theoretical Investigation and Numerical Results
Smooth approximations to
A. class of smoothing functions for nonlinear and mixed
The Linear Complementarity Problem
Beyond monotonicity in regularization methods for
Engineering and economic applications of
Existence and limiting behavior of trajectories associated with
A survey of theory
Global convergence of a class of non-interior-point algorithms using Chen-Harker-Kanzow functions for
Lecture Notes in Math.
Functions without exceptional families of elements and
Some nonlinear continuation methods for linear
New NCP-functions and their properties
Complementarity problems over cones with monotone and pseudomonotone maps
Seven kinds of monotone maps
Homotopy continuation methods for
A. Unified Approach to Interior Point Algorithms for Linear
A. new continuation method for
A. polynomial-time algorithm for linear
A new class of merit functions for the nonlinear complementarity problem
The complementarity problem for maximal monotone multifunctions
Pathways to the optimal set in linear programming
Interior path following primal dual algorithms
Properties of an interior-point mapping for mixed
Iterative Solution of Nonlinear Equations in Several Variables
Regularization of P 0
A. solution condition for
Growth behavior of a class of merit functions for the
An infeasible path-following method for monotone
An algorithm for the linear complementarity problem with a P 0
On Constructing Interior-Point Path-Following Methods for Certain Semimonotone Linear
Existence of a solution to nonlinear variational inequality under generalized positive homogeneity
Exceptional family of elements for a variational inequality problem and its applications
--TR
--CTR
Y. B. Zhao , D. Li, A New Path-Following Algorithm for Nonlinear P*Complementarity Problems, Computational Optimization and Applications, v.34 n.2, p.183-214, June 2006 | generalized monotonicity;weakly univalent maps;nonlinear complementarity problems;interior-point-va-exceptional family;central path |
587580 | Second Order Methods for Optimal Control of Time-Dependent Fluid Flow. | Second order methods for open loop optimal control problems governed by the two-dimensional instationary Navier--Stokes equations are investigated. Optimality systems based on a Lagrangian formulation and adjoint equations are derived. The Newton and quasi-Newton methods as well as various variants of SQP methods are developed for applications to optimal flow control, and their complexity in terms of system solves is discussed. Local convergence and rate of convergence are proved. A numerical example illustrates the feasibility of solving optimal control problems for two-dimensional instationary Navier--Stokes equations by second order numerical methods in a standard workstation environment. | Introduction
This research is devoted to the analysis of second order methods for solving optimal control problems involving the time dependent Navier-Stokes equations. Thus we consider
(1.1)  min J(y, u) over (y, u)
subject to
(1.2)  ∂y/∂t − ν Δy + (y · ∇)y + ∇p = Bu in Q = (0, T) × Ω,  div y = 0 in Q,  y = 0 on (0, T) × ∂Ω,  y(0) = y_0 in Ω.
Here Ω is a bounded domain in R^2 with sufficiently smooth boundary ∂Ω. The final time T > 0 and the initial condition y_0 are fixed. The vector valued variable y and the scalar valued variable p represent the velocity and the pressure of the fluid. Further u denotes the control variable and B the control operator. The precise functional analytic setting of problem (1.1), (1.2) will be given in Section 2. For the moment it suffices to say that typical cost functionals include tracking type functionals and functionals involving the vorticity of the fluid, such as
J(y, u) = 1/2 ∫_0^T |curl y(t, ·)|^2 dt + (α/2) |u|_U^2,
where α > 0 and z are given. For the following discussion it will be convenient to formally represent all the equality constraints involved in (1.2) by ê(y, p, u) = 0, so that (1.1), (1.2) can be expressed in the form
(P)  min J(y, u) over (y, u) subject to ê(y, p, u) = 0.
In this form solving (1.1), (1.2) appears at first to be a standard task, see [AM] and the references given there. However, the formidable size of (1.1), (1.2) and the goal of analyzing second order methods necessitate an independent analysis.
For second order methods applied to optimal control problems two classes can be distinguished depending on whether (y, p) in (1.1), (1.2) are considered as independent variables or as functions of the control variable u. In the former case ê(y, p, u) = 0 represents an explicit constraint for the optimization problem whereas in the latter case ê(y, p, u) = 0 serves the purpose of describing the evaluation of (y, p) as a function of u. In fact (P) can be expressed as the reduced problem
(P̂)  min Ĵ(u) := J(y(u), u),
where y(u) is implicitly defined via ê(y(u), p(u), u) = 0.
To obtain a second order method in the case when (y, p) are considered as independent variables one can derive the optimality system for (P) and apply the Newton algorithm to the optimality system. This is referred to as the sequential quadratic programming (SQP) method. Alternatively, if (y, p) are considered as functions of u, then Newton's method can be applied to (P̂) directly. The relative merits of these two approaches will be discussed in Section 4. To anticipate some of this discussion let us point out that the difference in numerical effort between these two methods is rather small. In fact, after proper rearrangements, the difference in computational cost per iteration of the SQP-method for (P) and the Newton method for (P̂) consists in solving either the linearized equation (1.2) or the full nonlinear equation itself. In view of the time dependence of either of these two equations an iterative procedure is used for their solution, so that the difference between solving the linearized and the nonlinear equation per sweep is not so significant. A second consideration that may influence the choice between the SQP-method or the Newton method applied to (P̂) is that initial guesses (y_0, p_0) and u_0 for (y, p, u) can clearly be used independently of each other in the SQP-method, where the states are decoupled from the controls. It is sometimes hinted at that this decoupling is not only important for the initialization but also during the iteration and that as a consequence the SQP-method may require fewer iterations than Newton's method for (P̂). As we shall see below, the variables y and p can be initialized independently from u_0 also in the Newton method. Specifically, if (y_0, p_0) are available it is not necessary to abandon (y_0, p_0). As for the choice of the initial guess (y_0, p_0, u_0), one possibility is to rely on one of the suboptimal strategies that were developed in the recent past to obtain approximate solutions to (1.1), (1.2). We mention reduced order techniques [IR], POD-based methods [HK, KV, LT] and the instantaneous control method [CTMK, BMT, CHK]. As another possibility one can carry out some gradient steps before one switches to the Newton iteration.
Let us briefly comment on some related contributions. In [AT] optimality systems are derived for problems of the type (1.1), (1.2). A gradient technique is proposed in [GM] for the solution of (1.1), (1.2). Similarly, in [B] gradient techniques are analyzed for a boundary control problem related to (1.1), (1.2). In [FGH] the authors analyze optimality systems for exterior boundary control problems. Among the few contributions focusing on second order methods for optimal control of fluids are [GB, H]. These works are restricted to stationary problems, however. This paper, on the other hand, focuses on second order methods for time dependent problems. We show that despite the difficulties due to the size of (1.1), (1.2) and the fact that the optimality system contains a two point boundary value problem, forward in time for the primal and backwards in time for the adjoint variables, second order methods are computationally feasible. We establish that the initial approximation to the reduced Hessian is only a compact perturbation of the Hessian at the minimizer. In addition we give conditions for second order sufficient optimality conditions for tracking type problems. These results imply superlinear convergence of quasi-Newton as well as SQP-methods. While the present paper focuses on distributed control problems, in a future paper we plan to address the case of velocity control along the boundary.
The paper is organized as follows. Section 2 contains the necessary analytic
prerequisites. First and second order derivatives of the cost functional with respect
to the control are computed in Section 3. The fourth section contains a comparison
of second order methods to solve (1.1), (1.2). In Section 5 convergence of the quasi-Newton method and of SQP-methods applied to (P̂) is analyzed. Numerical results for the Newton method and comparisons to a gradient method are contained in Section 6.
2. The optimal control problem
In this section we consider the optimal control problem (1.1), (1.2) in the abstract form
(2.1)  min J(y, u) subject to e(y, u) = 0.
To define the spaces and operators arising in (2.1) we assume Ω to be a bounded domain in R^2 with Lipschitz boundary and introduce the solenoidal spaces
with the superscripts denoting closures in the respective norms. Further we dene
W endowed with the norm
H
equipped with the norm
H
denoting the dual space of V . Here
is an abbreviation for L 2 (0;
that up to a set of measure zero in (0; T ) elements can be identied
with elements in C([0; T can be identied with
elements in C([0; T ]; V ). In (2.1) further U denotes the Hilbert space of controls
R is the cost functional which is assumed to be bounded
from below, weakly lower semi-continuous, twice Frechet dierentiable with locally
Lipschitzean second derivative, and radially unbounded in u, i.e.
. Furthermore, the control space U is identied with
its dual U . To simplify the notation for the second derivative we also assume that
the functional J can be decomposed as
The nonlinear mapping
is dened by
Comparing (1.1), (1.2) to (2.1) we note that
the conservation of mass, as well as the boundary condition are realized in the choice
of the space W while the dynamics are described by the condition e(y; In
variational form the constraints in (2.1) can be equivalently expressed as:
given u 2 U nd y 2 W such that
The following existence result for the Navier{Stokes equations in dimension two is
well known ([CF, L, T], Chapter III).
Proposition 2.1. There exists a constant C such that for every u 2 U there exists
a unique element
and
U
From Proposition (2.1) we conclude that with respect to existence (2.1) is equivalent
to
subject to u 2 U;
Theorem 2.1. Problem (2.1) admits a solution (y*, u*) ∈ W × U.
Proof. With the above formalism the proof is quite standard and we only give a brief outline. Since J is bounded from below there exists a minimizing sequence {(y_n, u_n)} in W × U. Due to the radial unboundedness property of J in u and Proposition 2.1 the sequence {(y_n, u_n)} is bounded in W × U, and hence there exists a subsequence with a weak limit (y*, u*) ∈ W × U. Weak lower semi-continuity of (y, u) → J(y, u) implies that J(y*, u*) ≤ lim inf J(y_n, u_n), and it remains to show that y* = y(u*). This can be achieved by passing to the limit in (2.3) with (y, u) replaced by (y(u_n), u_n).
We shall also require the following result concerning strong solutions to the Navier-Stokes
equation, ([T], Theorem III. 3.10).
Proposition 2.2. If y then for every u 2 U the
solution Moreover, for every
bounded set U in U
is bounded in H 2;1 (Q):
We shall frequently refer to the linearized Navier-Stokes system and the adjoint
equations given next:
in
a.e. on (0; T ];
and
in
a.e. on [0; T
Proposition 2.3. Let y
3 ]. Then (2.5) admits a unique variational solution v 2 W and (2.6) has a
unique variational solution w and the
rst equation in (2.6) holding in L (V ) \ W . Moreover, the following estimates
hold.
iii. jwj L 2
If in addition y
iv. jwj L 2
For
@
solutions v of (2.5) and w of (2.6) are elements of H 2;1 (Q) and satisfy the a-priori
estimates
v.
and
vi.
Proof. The proof is sketched in the Appendix.
3. Derivatives
In this section representations for the first and second derivatives of Ĵ appropriate for the treatment of (2.4) by the Newton and quasi-Newton methods are derived. We shall utilize the notation x = (y, u).
Proposition 3.1. The operator e is twice continuously differentiable with Lipschitz continuous second derivative. The action of the first two derivatives of e_1 is given by
he 1
where (v; r) 2 X .
Proof. Since e 2 is linear we restrict our attention to e 1 . Let
dened by
and recall that, due to the assumption
that
for all (u; v; To argue local Lipschitz continuity of e,
let
pZ Tjy ~
Here and below C denotes a constant independent of x; ~
x and . Due to the
continuous embedding of W into L 1 (H) we have
jx ~
R T
Using Holder's inequality this further implies the estimate
jx ~
and consequently, after redening C one last time
This estimate establishes the local Lipschitz continuity of e. To verify that the
formula for e x given above represents the Frechet - derivative of e we estimate
R T
sup
R T
Cjy ~
R T
and Frechet - dierentiability of e follows. To show Lipschitz continuity of the rst
derivative let x; ~
x and (v; r) be in X and estimate
R T
R T
Cjy ~
The expression for the second derivative can be veried by an estimate analogous
to the one for the rst derivative. The second derivative is independent of the point
at which it is taken and thus it is necessarily Lipschitz continuous.
From (3.2) it follows that for 2 L 2 (V ) and w 2 W the mapping
is an element of W . In Section 4 we shall use the fact that can also be identied
with an element of L 4
Lemma 3.1. For 2 L 2 (V ) and w 2 W the functional can be identied with
an element in W \ L 4=3 (V ).
Proof. To argue that 2 L 4=3 using (3.2)
where k is the embedding constant of V into H . This gives the claim .
Proposition 3.2. Let is a homeo-
morphism. Moreover, if the inverse of its adjoint e
is applied to an
element setting (w; w
we have w and w is the variational solution to (2.6).
Proof. Due to Proposition 3.1, e y (x) is a bounded linear operator. By the closed
range theorem the claim follows once it is argued that (2.5) has a unique solution
. This is a direct consequence of Proposition 2.3, i.
and ii. The assertion concerning the adjoint follows from the same proposition, iii.
and its proof.
As a consequence of Propositions 3.1 and 3.2 and the implicit function theorem
the rst derivative of the mapping u ! y(u) at u in direction -u is given by
y (x)e u (x)-u;
u). By the chain rule we thus obtain
Introducing the variable
we obtain utilizing Proposition 2.3 iii. with
representation for the rst derivative of
Here
is the variational solution
of
where the rst equation holds in L 4=3 (V ) \ W .
The computation of the second derivative of ^
J is more involved.
Let (-u; -v) 2 U U and note that the second derivative of u ! y(u) from U to
W can be expressed as
By the chain rule, and since W
We introduce the Lagrangian
and the matrix operator
y (x)e u (x)
We observe that the second derivative of L with respect to x can be expressed as
The above computation for ^
J 00 (u) together with (3.4) imply that
4. Second order methods
This section contains a description and a comparison of second order methods
to solve (2.1). Throughout u denotes a (local) solution to (2.1).
4.1. Newton{and quasi{Newton algorithm. For the sake of reference let us
specify the Newton algorithm.
Algorithm 4.1. (Newton Algorithm).
1. Choose u
2. Do until convergence
ii) update u
Let us consider the linear system in 2. i). Its dimension is that of the control space
U . From the characterization of the Hessian ^
we conclude that its evaluation
requires as many solutions to the linearized Navier{Stokes equation (3.4) with
appropriate right hand sides as is the dimension of U . If U is innite dimensional
then an appropriate discretization must be carried out. Let us assume now that the
dimension of U is large so that direct evaluation of ^
not feasible. In this
case 2. i) must be solved iteratively, e. g. by a conjugate gradient technique. We
shall then refer to 2. i) as the "inner " loop as opposed to the do{loop in 2. which
is the "outer" loop of the Newton algorithm. The inner loop at iteration level k of
the outer loop requires to
with
(ii) iteratively evaluate the action of ^
j , the j{th iterate of the inner
loop on the k{th level of the outer loop.
The iterate
j can be evaluated by successively applying the steps
a) solve in L 2
b) evaluate J yy (x)v
c) solve in W for w
d) and nally set q := J uu -u +B w.
We recall that 1 2 L 2 (V ) and that for s 2 W
he 1
Z TZ
Moreover, by Lemma 3.1 the functional appearing in b) is an element of W \
Hence by Proposition 2.3 the adjoint equation in c) can equivalently be
rewritten as
where the rst equation holds in W \ L 4=3 (V ). Summarizing, for the outer
iteration of the Newton method one Navier{Stokes solve for y(u k ) and one linearized
Navier{Stokes solve for are required. For the inner loop one forward ({in time)
as well as one backwards linearized Navier{Stokes solve per iteration is necessary.
Concerning initialization we observe that if initial guesses (y are
available (with y 0 not necessarily y(u 0 )) then alternatively to the initialization in
Algorithm 4.1 this information can be used advantageously to compute the adjoint
variable 1 required for the initial guess for the right hand side of the linear system
as well as to carry out steps a) - c) for the evaluation of the Hessian. There is no
necessity to recompute y(u 0 ) from u 0 .
To avoid the di-culties of evaluating the action of the exact Hessian in Algorithm
4.1 one can resort to quasi{Newton algorithms. Here we specify one of the most
prominent candidates, the BFGS{method. For w and z in U we dene the rank{one
operator
z 2 L(U ), the action of which is given by
(w
In the BFGS{method the Hessian ^
J 00 at u is approximated by a sequence of operators
Algorithm 4.2. (BFGS{Algorithm)
1. Choose u
SECOND ORDER METHODS IN FLOW CONTROL 11
2. Do until convergence
ii) update u
Note that the BFGS{algorithm requires no more system solves than the gradient
algorithm applied to (2.1), which is one forward solution of the nonlinear equation
to obtain y(u k ) and one backward solve of the linearized equation (3.7) obtain the
adjoint variable (u k ).
In order to compare Newton's method to the SQP method derived in the next
section we rewrite the update step 2. i) in Algorithm 4.1. To begin with we observe
that the right hand side in the update step can be written with the help of the
adjoint variable from and the operator T (x) dened in (3.10) as
J
J
where we dropped the iteration indices. As a consequence, with
(3.3) the update can be written as
-y
J
so that
-y
J
holds. Since e x
closed and we have the sequence of identities
x
Thus there exists - 2 Z such that
e
-y
J
Using this equation together with the denition of -y, Newton's update may be
rewritten as
2-y
4.2. Basic SQP-method. Here we regard (2.1) as a minimization problem of the functional J over the space X subject to the explicit constraint e(x) = 0. The SQP-algorithm consists in applying Newton's method to the first order optimality system (4.4), where the Lagrangian L is defined in (3.9).
With x* denoting a solution to problem (P), e_x(x*) is surjective by Proposition 3.2, and hence there exists a Lagrange multiplier λ* ∈ Z*, which is even unique, such that (4.4) holds. The SQP-method will be well defined and locally second order convergent if, in addition to the surjectivity of e_x(x*), the following second order optimality condition holds.
(H1) There exists κ > 0 such that L_xx(x*, λ*)(x, x) ≥ κ |x|_X^2 for all x ∈ ker(e_x(x*)).
If (H1) holds then, due to the regularity properties of e, there exists a neighborhood of (x*, λ*) in which L_xx(x, λ) is uniformly positive definite on ker(e_x(x)) for every (x, λ) in that neighborhood.
Algorithm 4.3. (SQP-algorithm).
1. Choose (x_0, λ_0) ∈ X × Z*.
2. Do until convergence
   i) solve the linear system (4.5) for (δx, δλ),
   ii) update x_{k+1} = x_k + δx, λ_{k+1} = λ_k + δλ.
Just as for Newton's method, step 2. i) is the difficult one. While in contrast to Newton's method neither the Navier-Stokes equation nor its linearization needs to be solved, the dimension of the system matrix, which is twice the dimension of the state plus the dimension of the control space, is formidable for applications in fluid mechanics. In addition, from experience with Algorithm 4.3 for other optimal control problems, see [KA, V] for example, it is well known that preconditioning techniques must be applied to solve (4.5) efficiently. As a preconditioner one might consider the (action of the) operator built from H, where H is the inverse of the (discretized) instationary Stokes operator or of the (discretized) linearization of the Navier-Stokes equation at the state y_k, either one with homogeneous boundary conditions.
One iteration of the preconditioned version of Algorithm 4.3 therefore requires two linear parabolic solves, one forward and one backwards in time. As a consequence, even with the application of preconditioning techniques, the numerical expense counted in the number of parabolic system solves is less for the SQP-method than for Newton's method. However, the number of iterations of iterative methods applied to solve the system equations in Algorithms 4.1 and 4.3 strongly depends on the system dimension, which is much larger for Algorithm 4.3 than for Algorithm 4.1.
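To make the size issue concrete, one SQP step 2. i) amounts to a symmetric indefinite saddle-point solve coupling state, control, and adjoint unknowns. The sketch below is illustrative and not from the paper: it assembles a small dense stand-in for such a KKT system, with a symmetric positive definite Hessian block as a simplification, and solves it by MINRES with a block-diagonal preconditioner built from the diagonal of the Hessian block and an approximate Schur complement.

import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(2)
n, m = 60, 20
G = rng.standard_normal((n, n))
H = G @ G.T + n * np.eye(n)                 # Hessian block of the Lagrangian model
E = rng.standard_normal((m, n))             # linearized constraint
g = rng.standard_normal(n)
c = rng.standard_normal(m)

# KKT system  [H E^T; E 0] [dx; dlam] = [-g; -c]
K = np.block([[H, E.T], [E, np.zeros((m, m))]])
rhs = np.concatenate([-g, -c])

Hd = np.diag(np.diag(H))                    # cheap approximation of H
S = E @ np.linalg.solve(Hd, E.T)            # approximate Schur complement
def apply_prec(v):
    return np.concatenate([np.linalg.solve(Hd, v[:n]), np.linalg.solve(S, v[n:])])
M = LinearOperator((n + m, n + m), matvec=apply_prec, dtype=float)

sol, info = minres(K, rhs, M=M)
dx, dlam = sol[:n], sol[n:]
print(info, np.linalg.norm(K @ sol - rhs))

In the actual flow-control setting the blocks are time-dependent PDE operators and the preconditioner is applied via the (linearized) Stokes solves mentioned above.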
To further compare the structure of the Newton and the SQP-methods let us assume for an instant that x_k is feasible for the primal equation, i.e. e(x_k) = 0, and feasible for the adjoint equation (3.5). Then only the control component J_u(x_k) + B* λ_k of the right hand side of (4.5) is nonzero, and comparing to the computation at the end of section 4.1 we observe that the linear systems describing the Newton and the SQP-methods coincide. In general the nonlinear primal and the linearized adjoint equations will not be satisfied by the iterates of the SQP-method and we therefore refer to the SQP-method as an outer or infeasible method, while the Newton method is a feasible one.
4.3. Reduced SQP-method. The idea of the reduced SQP-method is to replace (4.5) with an equation in ker e_x(x), so that the reduced system is of smaller dimension than the original one. To develop the reduced system we follow the lines of [KS]. Recall the definition of T(x) in (3.10) and define the operator A(x) as in (4.6).
Note that A is a right{inverse to e x (x). In fact, we have
y (x)e u (x)v
By Proposition 3.2 and due to B 2 L(U; L 2 (V )) the operator T (x) is an isomorphism
from U to ker e x (x) and hence the second equality in (4.5) given by
can be expressed as
Using this in the rst equality of (4.5) we nd
x
Applying T (x) to this last equation and ii) from above implies that if -u is a
solution coordinate of (4.5) then it also satises
Once -u is computed from (4.8) then -y and - can be obtained from (4.7) (which
requires one forward linear parabolic solve) and the rst equation in (4.5) (another
backwards linear parabolic solve).
Let us note that if x is feasible then the rst term on the right hand side of (4.8)
is zero and (4.8) is identical to step 2. i) in Newton's Algorithm 4.1.
This again re
ects the fact that Newton's method can be viewed as an SQP{
method that obeys the feasibility constraint It also points at the fact
that the amount of work (measured in equation solves) for the inner loop coincides
for both the Newton and the reduced SQP{methods. The signicant dierence
between the two methods lies in the outer iteration. To make this evident we next
specify the reduced SQP{algorithm.
14 MICHAEL HINZE AND KARL KUNISCH
Algorithm 4.4. (Reduced SQP{algorithm).
1. Choose x
2. Do until convergence
i) Lagrange multiplier update: solve
e
ii) Solve
iii) update
Note that in the algorithm that we specied we did not follow the procedure outlined
above for the update of the Lagrange variable. In fact for reduced SQP{methods
there is no "optimal" update strategy for . The two choices described above are
natural and frequently used. To implement Algorithm 4.4 two linear parabolic
systems have to be solved in steps 2. i) and 2. ii) ) and, in addition two linear
parabolic systems are necessary to evaluate the term involving the operator A on
the right hand side of 2. ii) ). In applications this term is often neglected since it
vanishes at x .
The reduced SQP{method and Newton's method turn out to be very similar.
Let us discuss the points in which they dier:
Most signicantly the velocity eld is updated by means of the nonlinear
equation in Newton's method and via the linearized equation in the reduced
SQP{method.
ii) The right hand sides of the linear systems dier due to the appearance of the
term involving the operator A. As mentioned above this term is frequently
not implemented.
iii) Formally there is a dierence in the initialization procedure in that y 0 is
chosen independently from u 0 in the reduced SQP{method and y
Newton's method. However, as explained in section 4.1 above, if a good initial
guess y 0 independent from y(u 0 ) is available, it can be utilized in Newton's
method as well.
5. Convergence analysis
We present local convergence results for the algorithms introduced in Section 4
for cost functionals of separable type (2.2). For this purpose it will be essential
to derive conditions that ensure positive deniteness of ^
(H1). The
key to these conditions are the a-priori estimates of Proposition 2.3. We shall also
prove that the dierence ^
compact. This property is required
for the rate of convergence analysis of quasi-Newton methods. In our rst result we
assert positive deniteness of the Hessian provided that J y (x) is su-ciently small,
a condition which is applicable to tracking-type problems.
Lemma 5.1. (Positive deniteness of Hessian)
Let u 2 U and assume that J yy positive semi-denite
and J uu (x) 2 L(U) be positive denite, where Then, the Hessian
(u) is positive denite provided that jJ y (x)j L 2 (V ) is su-ciently small.
Proof: We recall from (3.11) that
is the solution to (3.7). It follows that
e
Here we note that for -u 2 U the functional
is an element of W . Since J yy (x) is assumed to be positive denite and J uu (x) is
positive denite the result will follow provided the operator norm of
R := e
can be bounded by jJ y (x)j L 2 (V ) . Straightforward estimation gives
From Proposition 2.3 we conclude that
To estimate
yy (x)(; ); 1 (x)ik L(W;W ) we recall that for
he 1
ZZ
Using (3.2) and the continuity of the embedding W ,! L 1 (H) we may estimate
with a constant C independent of g and h. Therefore,
where we applied iii. in Proposition 2.3 to (3.7).
Lemma 5.2. Let x 2 X and denote by the function dened in (3.5).
Then, under the assumptions of Lemma 5.1 on J condition (H1) is satised with
replaced by (x; ).
Proof. Let (v; u) 2 N (e x (x)). Then v solves (2.5) with
to Proposition 2.3, v 2 W and satises
be chosen such that J uu (x)(u; u) -juj 2
U for all u 2 U . We nd
pT
Here and below C denotes a generic constant independent of (v; u) and
Due to (3.5) and Proposition 2.3
These estimates imply
and combined with (5.4) the claim follows.
Lemma 5.3. If B 2 L(U; L 2 (H)), then the dierence
is compact for every u 2 U .
Proof. Utilizing (5.2) we may rewrite
It will be shown that both summands dene compact operators
on U . For this purpose let U be a bounded subset of U . Utilizing
Proposition 2.3 we conclude that
y (x)e
is a bounded subset of W and hence of L 2 (V ). Since by assumption J is twice
continuously Frechet dierentiable with respect to y from L 2 (V ) to R it follows
that J yy (S) is a bounded subset of L 2 (V ). Proposition 2.3, iii. implies that
consequently e
y (J yy (S)) is bounded in W 2
)g. Since W 2
4=3 is compactly embedded in L 2 (H) [CF] and B 2 L(U; L 2 (H))
it follows from the fact that e
that
is pre-compact in U .
Let us turn to the second addend in (5.5). Due to Lemma 3.1 and its proof the
set
is a bounded subset of W \ L 4=3 (V ). It follows, utilizing Proposition 2.3 that
is a bounded subset of W 2
4=3 H . As above the assumption that B 2 L(U; L 2 (H))
implies that
is precompact in U and the lemma is veried.
The following lemma concerning the operators T (x) and A(x) dened in (3.10) and
(4.6) will be required for the analysis of the reduced SQP-method.
Lemma 5.4. The mappings Let x 7! A(x) from X to L(Z ; X) and x 7! T (x)
from X to L(U; X) are Frechet dierentiable with Lipschitz continuous derivatives.
Proof. An immediate consequence of i., ii. in Proposition 2.3 and the identities ii)
and iii) in Section (4.3) together with the dierentiability properties of the mapping
x 7! e x (x).
We are now in the position to prove local convergence for the algorithms discussed
in Section 4. Throughout we assume that (y ; u ) is a local solution to (2.1)
and set y In addition to the general conditions on J , B
and e we require
positive semi-denite, J uu
positive denite, and jJ y su-ciently small.
With (H2) holding (H1) is satised due to Lemma 5.1. In particular a second
order su-cient optimality condition holds and (y ; u ) is a strict local solution to
(2.1). The following theorem follows from well known results on Newton's algorithm
Theorem 5.1. If (H2) holds then there exist a neighbourhood U(u ) such that
for every u 0 2 U(u ) the iterates fu n gn2N of Newton's Algorithm 4.1 converge
quadratically to u .
Theorem 5.2. If (H2) holds then there exist a neighbourhood U(u ) and > 0
such that for all u positive denite operators H 0 2 L(U) with
the BFGS method of Algorithm 4.2 converges linearly to u . If in addition B 2
then the convergence is super-linear.
Proof: For the rst part of the theorem we refer to [GR, Section4], for example.
For the second claim we observe that the dierence ^
by Lemma 5.3, so that the claim follows from [GR, Theorem 5.1], see also [KS1].
Theorem 5.3. Assume that (H2) holds and let be the Lagrange multiplier
associated to x . Then there exist a neighbourhood U(x ; ) X Z such that
for all 4.3 is well dened and the iterates
converge quadratically to
Proof: Since J and e are twice continuously dierentiable with Lipschitz continuous
second derivative, e x surjective by Proposition 3.2 and (H1) is satised,
second order convergence of the SQP-method follows from standard results, see for
instance [IK].
We now turn to the reduced SQP-method.
Theorem 5.4. Assume that (H1) holds and let denote the Lagrange multiplier
associated to x . Then there exist a neighbourhood U(x ) X such that for all
reduced SQP-algorithm 4.4 is well dened and its iterates fx n gn2N
converge two-step quadratically to x , i.e.
for some positive constant C independent of k 2 N.
Proof: First note that (H1) implies positive deniteness of T
in a neighbourhood ~
U(x ) of x . By Lemma 5.4 the mappings x 7! T (x) and
x 7! A(x) are Frechet dierentiable with Lipschitz continuous derivatives. Fur-
thermore, utilizing Proposition 2.3, iii. and Lemma A.2 it can be shown that the
mapping x 7! (x) is locally Lipschitz continuous, where is dened through (3.5).
This, in particular, implies for the Lagrange multiplier updates k the estimate
where the constant C is positive and depends on x and on supfjJ yy (x)j L(L 2 (V );L 2 (V
U(x )g. Altogether, the assumptions for Corollary 3.6 in [K] are met and there
exists a neighbourhood ^
claim follows.
6. Numerical results
Here we present numerical examples that should first of all demonstrate the feasibility of utilizing Newton's method for optimal control of the two-dimensional instationary Navier-Stokes equations in a workstation environment despite the formidable size of the optimization problem. The total number of unknowns (primal, adjoint, and control variables) in Example 1 below, for instance, is of order 2.2 · 10^6. The time horizon could still be increased or the mesh size decreased by utilizing reduced storage techniques at the expense of additional cpu-time, but we shall not pursue this aspect here. The control problem is given by (1.1), (1.2) with the cost function J defined by (6.1), a tracking-type functional over the observation cylinder Q_o together with a regularization term over the control cylinder Q_c, where Ω_c and Ω_o are subsets of Ω denoting the control and observation volumes, respectively. In our examples Re = 400 and B is the indicator function of Q_c. The results for
Newton's method will be compared to those of the gradient algorithm, which we
recall here for the sake of convenience.
Algorithm 6.1. (Gradient Algorithm).
1. choose u_0,
2. Set d_k = −Ĵ'(u_k),
3. Set u_{k+1} = u_k + ρ_k d_k with an (approximately) optimal step size ρ_k,
4. Set k = k + 1, goto 2.
Given a control u the evaluation of the gradient of Ĵ at the point u amounts to solving (1.2) for the state y and (3.7) for the adjoint variable λ. Implementing a stepsize rule to determine an approximation of the optimal step size ρ is numerically expensive, as every evaluation of the functional J at a control u requires solving the instationary Navier-Stokes equations with right hand side Bu.
We compare two possibilities for computing approximations to the optimal step size ρ. For this purpose let us consider for a search direction d ∈ U the solutions v and w of the systems (6.2) and (6.3), where λ is the associated adjoint variable.
1. For a given search direction d ∈ U interpolate the function I(ρ) := J(y(u + ρd), u + ρd) by a quadratic polynomial using the values I(0), I'(0) and I''(0), i.e.
I_1(ρ) := I(0) + I'(0) ρ + (1/2) I''(0) ρ^2,
and use the unique zero ρ_1 of the equation I_1'(ρ) = 0 as approximation of ρ, with w given by (6.3).
2. Use the linearization y(u) + ρ v of the mapping ρ → y(u + ρd) at ρ = 0 in the cost functional J. This results in the quadratic approximation
I_2(ρ) := J(y(u) + ρ v, u + ρ d)
of the functional I(ρ). Now use the unique root ρ_2 of the equation I_2'(ρ) = 0 as approximation of ρ, with v given in (6.2).
The denominator of
(u)d; di U . From (5.1) with u replaced
by u it follows as in the proof of Theorem 5.1 that it is positive, provided that the
state y(u) is su-ciently close to z in L 2 (H).
Let us note that the computation of λ_1 requires the solution of linearized Navier-Stokes
equations forward and backward in time, whereas that of λ_2 only requires
one solve of the linearized Navier-Stokes equations. In addition, a numerical comparison
shows that the step-size guess λ_2 performs better than λ_1, both with respect
to the number of iterations in the gradient method and with respect to computational
time. For the numerical results presented below we therefore use the step
size proposal λ_2. Thus, every iteration of the gradient algorithm amounts to solving
the nonlinear Navier-Stokes equations forward in time and the associated adjoint
equations backward in time for the computation of the gradient, and to solving
linearized Navier-Stokes equations forward in time for the step size proposal.
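The structure of one gradient iteration with the step size proposal λ_2 can be summarized in the following Python-style sketch. It is not part of the original text: all solver routines, the inner product and the curvature model are placeholder callables standing for the discretized forward, adjoint and linearized Navier-Stokes solves described above.

# Sketch of one iteration of Algorithm 6.1 with the lambda_2 step size proposal.
# Every name below is a placeholder supplied by the discretization.
def gradient_step(u, inner, solve_state, solve_adjoint, solve_linearized,
                  grad_J, curvature):
    y = solve_state(u)              # nonlinear Navier-Stokes, forward in time
    p = solve_adjoint(y)            # adjoint equation, backward in time
    g = grad_J(u, y, p)             # gradient of the reduced cost functional
    d = -g                          # steepest descent direction
    v = solve_linearized(y, d)      # linearized Navier-Stokes in direction d
    # lambda_2 minimizes the quadratic model I_2(s) = J(y + s*v, u + s*d)
    s = inner(g, g) / curvature(y, v, u, d)
    return u + s * d

Per iteration this amounts to exactly one nonlinear forward solve, one adjoint solve and one linearized forward solve, matching the operation count stated above.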
The inner iteration of Newton's method is performed by the conjugate gradient
method, the choice of which is justified in a neighbourhood of a local solution u*
of the optimal control problem by the positive definiteness of Ĵ''(u), provided the
desired state z is sufficiently close to the optimal state y(u*).
For the numerical tests the target flow z is given by the Stokes flow with boundary
condition prescribed in tangential direction, see Fig. 1. The termination criterion for
the j-th iterate u_j^k in the conjugate gradient method is chosen as
min{ ... }.
The initialization for Newton's method was u_0 := 0.
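A minimal sketch of the resulting truncated Newton-CG loop is given below; it is illustrative only and assumes abstract callables grad and hess_vec (one Hessian-vector product corresponding to one linearized and one adjoint solve) together with a generic relative-residual truncation rule, since the exact forcing sequence of the paper is not reproduced here.

def newton_cg(u, grad, hess_vec, inner, newton_steps=10, cg_tol=1e-3, cg_max=50):
    """Inexact Newton method: the Newton system H(u) du = -g is solved by CG."""
    for _ in range(newton_steps):
        g = grad(u)
        du, r = 0.0 * g, -g          # start CG from du = 0 with residual r = -g
        p, rho = r, inner(r, r)
        for _ in range(cg_max):
            if rho ** 0.5 <= cg_tol * inner(g, g) ** 0.5:
                break                 # truncation: relative residual small enough
            Hp = hess_vec(u, p)       # one linearized plus one adjoint solve
            alpha = rho / inner(p, Hp)
            du, r = du + alpha * p, r - alpha * Hp
            rho_new = inner(r, r)
            p, rho = r + (rho_new / rho) * p, rho_new
        u = u + du                    # full (undamped) Newton step
    return u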
Figure 1. Control target, Stokes flow in the cavity
The discretization of the Navier-Stokes equations, its linearization and adjoint
was carried out by using parts of the code developed by Bansch in [BA], which is
based on Taylor-Hood finite elements for spatial discretization. The time step size
was chosen such that the time grid consists of 160 grid points, and 545
pressure and 2113 velocity nodes were used for the spatial discretization. All computations
were performed on a DEC-ALPHA(TM) Station 500.
Iteration CG-steps
6 19 4.686819e-6 0.032 1.480534e-3
Table 1. Performance of Newton's method for Example 1
Example 1. We first present the results for the case in which the control and the
observation volume each cover the whole domain.
Table 1 confirms super-linear convergence of the inexact Newton method. To
achieve the same accuracy as Newton's method the gradient algorithm requires
96 iterations. The gradient method requires about 110 minutes of computing time,
considerably more than Newton's method. This demonstrates
the superiority of Newton's method over the gradient algorithm for this example.
For larger values of α and coarser time and space grids the difference in computing
time is less drastic. In fact this difference increases with decreasing α and increasing
mesh refinement. As expected, a significant amount of computing time is spent on
read-write actions of the variables to the hard-disc in the sub-problems.
In Figures 2, 3, 4 the evolution of the cost functional, the difference to the Stokes
flow and the control as a function of time are documented. It can be observed that
Newton's method tends to over-estimate the control in the first iteration step,
whereas the gradient algorithm approximates the optimal control from below, see
Figure 4. Graphically there is no significant change after the second iteration for
Newton's method. These comments hold for quite a wide range of values for α.
In Fig. 5 the uncontrolled flow together with the controlled flow and the control
action at the end of the time interval are presented.
Figure 2. Newton's method (6 Iterations) (top) versus Gradient
algorithm (96 Iterations), Re=400, Evolution of cost functional for relative α
In the previous example the observation volume Q_o and the control volume Q_c
each cover the whole spatial domain. From the practical point of view this is not
feasible. However, from the numerical standpoint this is a complicated situation,
since the inhomogeneities in the primal and adjoint equations are large.
Figure 3. Newton's method (6 Iterations) (top) versus Gradient
algorithm (96 Iterations), Re=400, Evolution of difference to Stokes flow for relative α
We next present two numerical examples with different observation and control
volumes. This results in smaller control and observation volumes than in Example
1, and thus the primal and adjoint equations are numerically simpler to solve.
Example 2. Here the observation and control volumes are proper subsets of the domain. The
spatial and temporal discretizations as well as the parameter α are the same as
in Example 1. Newton's method takes 15 minutes cpu-time and its convergence
statistics are presented in Tab. 2. The gradient algorithm needs 25 iterations and
26 minutes of cpu-time to achieve a comparable reduction of the cost functional Ĵ.
Iteration CG-steps
Table 2. Performance of Newton's method for Example 2
Example 3. Here the observation and control volumes are again proper subsets of the domain. The
discretization of the spatial and the time domain as well as the parameter α are the
same as in Example 2.
Figure 4. Newton's method (6 Iterations) (top) versus Gradient
algorithm (96 Iterations), Re=400, Evolution of control for relative α
The gradient algorithm needs 38 iterations to achieve a comparable reduction of
the value of the cost functional Ĵ. It
takes about 80 minutes of cpu-time. We also implemented the Polak-Ribiere variant
of the conjugate gradient algorithm. It converges after 37 iterations and yields
a slightly better reduction of the residual. The amount of cpu-time needed is
nearly equal to that taken by the gradient algorithm. Newton's method is faster.
It converges within 7 iterations to the approximate solution. The average cpu-time
for the inner iteration loop is 7.5 minutes. As in
the previous examples the average cost of a conjugate gradient iteration in the inner
loop decreases with decreasing residual of the outer-iteration loop. The results are
depicted in Tab. 3.
Figure 5. Results, from top to bottom: uncontrolled flow, controlled flow, and control force at the end of the time interval
Iteration CG-steps
Table 3. Performance of Newton's method for Example 3
Appendix A. Proof of Proposition 2.3
In the proof of Proposition 2.3 we make frequent use of the following lemma.
Lemma A.1. For v ∈ V
there exists a positive constant C such that the estimates (A.1) hold,
with a positive constant C.
Proof. In [T1].
Lemma A.2. There exists a positive constant C such that the stated estimate holds.
Proof. The proof is identical to that of Lemma 3.1.
Note that the power 4/3 in the previous estimate cannot be improved.
Sufficient conditions for (∇u)^t v + (u · ∇)v ∈ L²(V) are given by requiring
in addition that u or v ∈ L^∞(V).
Proof of Proposition 2.3. Existence and uniqueness of a solution to (2.5) can be
shown following the lines of the existence and uniqueness proof for the instationary
two-dimensional Navier-Stokes equations in [T, Chap.III]. In the following we sketch
the derivation of the necessary a-priori estimates.
i. Test with respect to time, use b(u; v;
estimate using Young's inequality and the rst estimate in (A.1). This results
in
After integration from 0 to t Gronwall's inequality gives
Using (A.3) in (A.2), the Cauchy-Schwarz inequality yields
26 MICHAEL HINZE AND KARL KUNISCH
Combining (A.3) and (A.4) yields the rst claim.
ii. Test (2.5) with 2 V pointwise in time and estimate using the Cauchy-Schwarz
inequality and the rst estimate in (A.1). This gives
Z
which implies
This, together with y 2 W L 1 (H), and the estimates (A.3), (A.4) gives ii.
Combining i. and ii. implies
iii. For y 2 W we introduce the bounded linear operator A(y) 2 L(W;Z ) by
Note that A(y) coincides with e y (x) of Section 3. Due to i. this operator
admits a continuous inverse A(y) 1 2 L(Z ; W ). For the adjoint A(y) 2
there exists for every
solution (w; to
From i. and ii. together with the fact that
we have
By Lemma A.2 and the assumption that g 2 L 4=3 (V ) the mapping
is an element of L (V ), with 2 [1; 4=3]. From (A.8) we therefore conclude
that w t 2 L (V ). Together with w
[DL5]. From (A.8) we deduce that the rst equation in (2.6) is well dened
in L (V ). Referring to (A.8) a third time and utilizing the fact that w(T )
is well dened in H it follows that w(T
there exists a constant C such that
Combining this estimate with (A.9) implies the estimate in iii.
iv. If y utilizing
(A.8) we nd that w t 2 L 2 (V ). Moreover, by (A.1) we have
Together with (A.9) this gives the desired estimate in iv.
v. Test (2.5) with v pointwise in time and utilize Young's inequality and the
last estimate in (A.1) to obtain
Integration from 0 to t together with (A.4) results in
so that Gronwall's inequality gives
Using this in (A.11) yields
To estimate jv t j L 2 (H) test (2.5) with 2 V and use the last estimate in (A.1).
This gives
Z
so that y 2
together with (A.13) and (A.14) implies
Therefore,
which is v. The estimation for jwj L 1
2 ) is similar to that for
In order to cope with b(; in the estimation of
utilizes the third estimate in (A.1) to obtain the estimate vi.
28 MICHAEL HINZE AND KARL KUNISCH
--R
The Lagrange-Newton method for state constrained optimal control problems
On some control problems in fluid mechanics
Numerical solution of a flow-control problem: vorticity reduction by dynamic boundary action
Instantaneous control of backward-facing-step flows Preprint No
Feedback control for unsteady flow and its application to the stochastic Burgers equation
Mathematical Analysis and Numerical Methods for Science and Technology
Boundary value problems and optimal boundary control for the Navier-Stokes system: the two-dimensional case
Numerical Methods for Nonlinear Variational Problems
Optimal control of two-and three-dimensional incompressible Navier-Stokes Flows
The local convergence of Broyden-like methods in Lipschitzean problems in Hilbert spaces
The velocity tracking problem for Navier-Stokes flows with bounded distributed controls
Formulation and analysis of a sequential quadratic programming method for the optimal Dirichlet boundary control of Navier-Stokes flow
Control strategies for fluid flows - optimal versus suboptimal control
Augmented Lagrangian-SQP-methods for nonlinear optimal control problems of tracking type
Optimal control of thermally convected fluid flow
Reduced SQP methods for parameter identification
Mesh independence of the gradient projection method for optimal control problems
Control of Burgers equation by a reduced order approach using Proper Orthogonal Decomposition Bericht Nr.
Mathematical Topics in Fluid Mechanics I
Modelling and control of physical processes using proper orthogonal decomposition
--TR
--CTR
Kerstin Brandes , Roland Griesse, Quantitative stability analysis of optimal solutions in PDE-constrained optimization, Journal of Computational and Applied Mathematics, v.206 n.2, p.908-926, September, 2007 | optimal control Navier-Stokes equations;newton method;second order sufficient optimality;SQP method |
587583 | Stability of Perturbed Delay Differential Equations and Stabilization of Nonlinear Cascade Systems. | In this paper the effect of bounded input perturbations on the stability of nonlinear globally asymptotically stable delay differential equations is analyzed. We investigate under which conditions global stability is preserved and if not, whether semiglobal stabilization is possible by controlling the size or shape of the perturbation. These results are used to study the stabilization of partially linear cascade systems with partial state feedback. | Introduction
The stability analysis of the series (cascade) interconnection of two stable
nonlinear systems described by ordinary differential equations is a classical
subject in system theory ([13], [14], [17]).
Contrary to the linear case, the zero input global asymptotic stability of
each subsystem does not imply the zero input global asymptotic stability of
the interconnection. The output of the first subsystem acts as a transient
input disturbance which can be sufficient to destabilize the second subsys-
tem. In the ODE case, such destabilizing mechanisms are well understood:
they can be subtle but are almost invariably associated with a finite escape
time in the second subsystem (Some states blow up to infinity in a finite
time). The present paper explores similar instability mechanisms generated
by the series interconnection of nonlinear DDEs. In particular we consider
the situation where the destabilizing effect of the interconnection is delayed
and examine the difference with the ODE situation.
In the first part of the paper we study the effect of external (affine)
perturbations w on the stability of nonlinear time delay systems of the form (1),
whereby we assume that the unperturbed system (w ≡ 0) is
globally asymptotically stable.
We consider perturbations which belong to both L¹ and L∞ and
investigate the region in the space of initial conditions which gives rise to
bounded solutions under various assumptions on the system and the perturbation.
First, we consider global results: in the ODE-case, an obstruction is
formed by the fact that arbitrary small input perturbations can cause the
state to escape to infinity in a finite time, for instance when the interconnection
term \Psi(z) is nonlinear in z. This is studied extensively in the literature
in the context of stability of cascades, i.e. when the perturbation in (1)
is generated by another ODE, see e.g. [15] [13] and the references therein.
Even though delayed perturbations do not cause a finite escape time, we
explain a similar mechanism giving rise to unbounded solutions, caused by
nonlinear delayed interconnection terms.
In a second part, we allow situations whereby unbounded solutions are
inevitable and we investigate under which conditions trajectories can be
bounded semi-globally in the space of initial conditions, in case the perturbation
is parametrized, i.e. j = j(t, a). Hereby we let the parameter a
regulate the L¹ or L∞ norm of the perturbation. We also consider the effect
of concentrating the perturbation in an arbitrarily small time-interval.
As an application, we consider the special case whereby the perturbation
is generated by a globally asymptotically stable ODE. This allows us
to strengthen the previous results by the application of a generalization of
LaSalle's theorem [7] to the DDE-case: the convergence to zero of a solution
is implied by its boundedness. We also show that the origin of the
cascade is stable. We will concentrate on the following synthesis problem:
the stabilization of a partially linear cascade,
with the SISO-system (A, B, C) controllable, using only partial state feedback
laws of the form u = Fξ, which allow us to influence the shape and size
of the input 'perturbation' y to the nonlinear delay equation.
In the ODE-case this stabilization problem is extensively studied in the
literature, for instance in [16][1][15][8]. Without any structural assumption
on the interconnection term, achieving global stabilization is generally
not possible because the output of the linear subsystem, which acts as a
destabilizing disturbance to the nonlinear subsystem, can cause trajectories
to escape to infinity in a finite time. Therefore one concentrates on semi-global
stabilization, i.e. the problem of finding feedback laws making the
origin asymptotically stable with a domain of attraction containing any pre-set
compact region in the state space. An instructive way to do so is to drive
the 'perturbation' y fast to zero. However, a high-gain control, placing all
observable eigenvalues far into the LHP, will not necessarily result in large
stability regions, because of the fast peaking obstacle [15] [13]. Peaking is
a structural property of the -subsystem whereby achieving faster convergence
implies larger overshoots which can in turn destabilize the cascade.
Semi-global stability results are obtained when imposing structural assumptions
on the -subsystem (a nonpeaking system) or by imposing conditions
on the z-subsystem and the interconnection term \Psi: for example in [15] one
imposes a linear growth restriction on the interconnection term and requires
global exponential stability of the z-subsystem.
In this paper the classical cascade results are obtained and analysed in
the more general framework of bounded input perturbations and generalized
to the time-delay case.
Preliminaries. The state of the delay equation (1) at time t can be described
as a vector z(t) ∈ R^n or as a function segment z_t defined by
z_t(θ) = z(t + θ), θ ∈ [−τ, 0].
Therefore delay equations form a special class of functional differential equations
[3][5][6].
We assume that the right-hand side of (1) is continuous in all of its
arguments and Lipschitz in z and z(t − τ). Then a solution is uniquely
defined by specifying as initial condition a function segment z_0, whereby z_0 ∈
C([−τ, 0]; R^n), the Banach space of continuous bounded functions mapping
the delay-interval [−τ, 0] into R^n and equipped with the supremum-norm
‖·‖_s.
Sufficient conditions for stability of a functional differential equation are
provided by the theory of Lyapunov functionals [3] [6], a generalization of
the classical Lyapunov theory for ODEs: for functional differential equations
of the form
a mapping V : C → R is called a Lyapunov functional on a set G if V is
continuous on G and V̇ ≤ 0 on G. Here V̇ is the upper right-hand derivative
of V along the solutions of (2), i.e.
V̇(φ) = lim sup_{h→0+} (1/h) [V(x_{t+h}(φ)) − V(φ)].
The following theorem, taken from [3], provides sufficient conditions for
stability:
Theorem 1.1 Suppose z = 0 is a solution of (2) and
and there exist nonnegative functions a(r) and b(r) such that a(r) !1
as r !1 and
Then the zero solution is stable and every solution is bounded. If in addition,
b(r) is positive definite, then every solution approaches zero as t !1.
Instead of working with functionals, it is also possible to use classical Lyapunov
functions when relaxing the condition
This approach, leading
to the so-called Razumikhin-type theorems [6], is not considered in this paper
In most of the theorems of the paper, the condition of global asymptotic
stability for the unperturbed system (equation (1) with
cient. When the dimension of the system is higher than one, we sometimes
need precise information about the interaction of different components of
the state z(t). This information is captured in the Lyapunov functional, associated
with the unperturbed system. Therefore, when necessary, we will
restrict ourself to a specific class of functionals, with the following assumption
Assumption 1.2 The unperturbed system (equation (1) with w ≡ 0) is delay-independent
globally asymptotically stable (i.e. GAS for all values of the delay) with a
Lyapunov functional of the form (3),
with k radially unbounded and such that the
conditions of Theorem 1.1 (with b(r) positive definite) are satisfied.
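For illustration (this scalar example is not taken from the paper), a linear delay equation already admits a functional with the structure required above:

    \dot z(t) = -a\, z(t) + b\, z(t-\tau), \qquad
    V(z_t) = \tfrac12 z(t)^2 + \tfrac{|b|}{2}\int_{t-\tau}^{t} z(s)^2 \, ds ,

for which \dot V(z_t) \le -(a-|b|)\, z(t)^2 along solutions. Hence for a > |b| the origin is GAS for every value of the delay, with k(z) = z^2/2 radially unbounded.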
This particular choice is motivated by the fact that such functionals are used
for a class of linear systems [3][6] and, since we assume delay-independent
stability, the time-derivative of the functional should not depend explicitly
on . Choosing a delay-independent stable unperturbed system also allows
us to investigate whether the results obtained in the presence of perturbations
are still global in the delay. Note that in the ODE-case (3) reduces to
hardly forms any restriction because under mild conditions
its existence is guaranteed by converse theorems.
The perturbation j(t) ∈ L_p([0, ∞)) when there exists M such that ‖j‖_p ≤ M < ∞.
We assume j in (1) to be continuous and to belong to both L¹ and L∞.
When the perturbation is generated by an autonomous ODE,
ξ' = a(ξ), j = b(ξ), with a and b continuous and locally Lipschitz, with b(0) = 0, and such that ξ = 0
is globally asymptotically and locally exponentially stable (GAS and LES),
these assumptions are satisfied.
In the paper we show that when the unperturbed system is delay-independent
stable with a functional of the form (3) and the initial condition is bounded
(i.e. ‖z_0‖_s ≤ R < ∞), arbitrarily small perturbations can cause unbounded
trajectories provided the delay is large enough. Therefore it is instructive to
treat the delay as the (n + 1)-th state variable when considering semi-global
results: with a parametrized perturbation j(t, a), we say for instance that
the trajectories of (1) can be bounded semi-globally in z and semi-globally
in the delay if for each compact region Ω ⊂ R^n, and for each bounded interval of delays, there exists
a positive number a* such that all initial conditions z_0
with z_0(θ) ∈ Ω for θ ∈ [−τ, 0] give rise to bounded trajectories when
a ≥ a*.
A function γ belongs to class K if it is strictly increasing
and γ(0) = 0. The symbol ‖·‖ is used for the Euclidean norm in R^n and by
‖x, y‖ we mean (‖x‖² + ‖y‖²)^{1/2}.
2 The mechanism of destabilizing perturbations
In contrast to linear systems, small perturbations (in the L¹ or L∞ sense) are
sufficient to destabilize nonlinear differential equations. In the ODE-case,
the nonlinear mechanism for instability is well known: small perturbations
suffice to make solutions escape to infinity in a finite time, for instance when
the interconnection term Ψ is nonlinear in z. This is illustrated with
example (4),
which can be solved analytically for z: the reciprocal of the solution contains the
term ∫₀^t e^{−s} j(s) ds, and the solution
escapes to infinity in a finite time t_e given by the corresponding logarithmic expression.
This last expression shows that the escape time becomes smaller as the
initial conditions are chosen larger and, as a consequence, however fast j(t)
would be driven to zero in the first equation of (4), z(0) could always be
chosen large enough for the solution to escape to infinity in finite time.
In the simple example (4), the perturbation is the output of a stable
linear system. Its initial condition j(0) dictates the L∞-norm of the perturbation,
while the parameter a controls its L¹-norm. Making these norms
arbitrarily small does not result in global stability. This is due to the nonlinear
growth of the interconnection term.
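The following numerical sketch (not from the paper) illustrates this mechanism under the assumption that example (4) is the prototype system ż = −z + j z² with j(t) = j(0) e^{−at}, which is consistent with the escape condition z(0)^{-1} ≤ ∫₀^{t_e} e^{−s} j(s) ds quoted in Section 3.2 below; all names are illustrative.

import numpy as np

def escape_time(z0, j0, a, dt=1e-4, t_max=20.0):
    """Integrate z' = -z + j(t)*z**2 with j(t) = j0*exp(-a*t) until blow-up."""
    z, t = z0, 0.0
    while t < t_max:
        j = j0 * np.exp(-a * t)
        z += dt * (-z + j * z * z)       # explicit Euler step
        t += dt
        if z > 1e12:                      # treat as (numerical) escape to infinity
            return t
    return None                           # no escape detected on [0, t_max]

# However small the L1-norm of the perturbation (large a), a large enough z(0) escapes:
for z0 in (1.0, 10.0, 100.0, 1000.0):
    print(z0, escape_time(z0, j0=1.0, a=5.0))

For the chosen data the perturbation satisfies ‖j‖₁ = j(0)/a, yet sufficiently large initial conditions still escape in finite time.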
One may wonder whether the instability mechanism encountered in the
ODE situation (4) will persist in the DDE situation
ae
In contrast to (4), system (7) exhibits no finite escape time. This can be
proven by application of the method of steps, i.e. from the boundedness of
of
and thus of z('). Nevertheless the exponentially decaying input j still causes
unbounded solutions in (7): this particular system is easily seen to have an
exponential solution z e
at . The instability mechanism can be
explained by the superlinear divergence of the solutions of
Theorem 2.1
has solutions which diverge faster than any exponential function.
Proof 2.1 Take as initial condition a strictly positive solution segment z 0
over [\Gamma; 0] with z(0) ? 1. For t 0, the trajectory is monotonically
increasing. This means that in the interval [k;
z
The solution at point k; k 1 is bounded below by the sequence satisfying
z
which has limit +1. The ratio R
z
satisfies
R
and consequently (R tends to infinity. However for an exponential
function e at , R = e a and (R \Gamma 1)R is constant.
As a consequence, for the system (7), arbitrarily fast exponential decay of
cannot counter the blow-up caused by the nonlinearity in z(t \Gamma ), and
hence the system is not globally asymptotically stable.
The instability mechanism illustrated by (4) and (7) can be avoided by
imposing suitable growth restrictions on the interconnection term \Psi. When
the unperturbed system is scalar, it is sufficient to restrict the interconnection
term to have linear growth in both of its arguments, i.e.
This linearity condition is by itself not sufficient however, if the unperturbed
system has dimension greater than one. In that case, the interaction of the
different components of the state z(t) can still cause "nonlinear" effects
leading to unbounded solutions. An illustration of this phenomenon is given
by the following system!
which was shown in [13] to have unbounded solutions, despite the linearity
of the interconnection. The instability is caused by the mutual interaction
between z 1 and z 2 when j 6= 0.
The following theorem, inspired by theorem 4.7 in [13], provides sufficient
conditions for bounded solutions. To prevent the instability mechanism due
to interacting states, conditions are put on the Lyapunov functional of the
unperturbed system.
Theorem 2.2 Assume that the system (1)
satisfies Assumption 1.2 and that the interconnection term Ψ(z, z(t − τ))
grows linearly in its arguments, i.e. satisfies (8). Furthermore, if the perturbation satisfies (i) and
(ii) ‖dk/dz‖ ‖z‖ ≤ c k(z),
then all trajectories of the perturbed system are bounded, for all values of
the time delay.
Condition (ii) is sometimes called a polynomial growth condition because it
is satisfied if k(z) is polynomial in z.
Proof 2.2 Along a trajectory z(t) we have:
dz
kzk
cff 1=fl
r
ff 2=fl'
cff 1=fli
cff 1=fli
For cannot escape to infinity because k(z(t\Gamma )) is bounded
(calculated from the initial condition) and the above estimate can be integrated
over the interval since the right hand side is linear in V and j 2 L 1 .
For t we can use the estimate k(z(t \Gamma
Because this estimate for
V is increasing in both of its argument, an upper
bound for V (t) along the trajectory is described by
with as initial condition W (z the method of steps, it is clear
that W cannot escape to infinity in a finite time. From
monotonically increasing. As a consequence, for t 2 , W (t) W
and
and this estimate can be integrated leading to boundedness of lim t!1 sup V (t)
because Hence the trajectory z(t) is bounded.
Remark: When the interconnection term \Psi is undelayed, i.e. \Psi(z), condition
(i) in theorem 2.2 can be dropped [13].
3 Semi-global results for parametrized perturbation
Although no global results can be guaranteed in the absence of growth con-
ditions, the examples in the previous section suggest that one should be
able to bound the solutions semi-globally in the space of initial conditions
by decreasing the size of the perturbation. Therefore we assume that the
perturbation is parametrized,
We will consider two cases: a) parameter a controls the L 1 - or the L1-norm
of j and b) a regulates the shape of a perturbation with fixed L 1 -norm.
3.1 Modifying the L 1 and the L1 norm of the perturbation
We first assume that the L 1 -norm of j is parametrized. We have the following
result:
Theorem 3.1 Consider the system (1) with a parametrized perturbation j = j(t, a),
and suppose that the unperturbed system is GAS with the Lyapunov functional of
Assumption 1.2. If furthermore ‖j(·, a)‖₁ → 0 as
a → ∞, then the trajectories can be bounded semi-globally both in z and the
delay τ, by increasing a.
Proof 3.1 Let 0 be fixed and denote
by\Omega the desired stability domain in
R n , i.e. such that all trajectories starting in z 0 with z 0 (')
2\Omega for ' 2 [\Gamma; 0]
are bounded. Let V c , sup z0
As long as V (t) 2V c , z(t) and belong to a compact set. Hence
When a ! 1, the increase of V tends to zero. As a consequence the assumption
is valid for 8t 0. Hence the trajectories with initial
condition
in\Omega are bounded.
Note that for a fixed
region\Omega increases with and this influences
both the value M in the estimation of jk 0 (z)\Psi(z; z(t \Gamma )j and the
critical value
a of a in order to bound the trajectories. However when
belongs to a compact interval [0; ], we can take a sup 2[0;
hence bound the trajectories semi-globally in both the state and the delay.
The result given above is natural because for a given initial condition, a
certain amount of energy is needed for destabilization, expressed mathematically
by kjk 1 . However global stability in the state is not possible because
the required energy can become arbitrary small provided the initial condition
is large enough, see for instance example (4). Later we will discuss why
the trajectories cannot be bounded globally in the delay.
Now we consider the case whereby the L1-norm of the perturbation is
parametrized.
Theorem 3.2 Consider the system (1) with a parametrized perturbation j = j(t, a).
Suppose that the unperturbed system is GAS with the Lyapunov functional of
Assumption 1.2. If ‖j(·, a)‖∞ → 0 as a → ∞, then the trajectories
of the perturbed system can be bounded semi-globally in both z and the
delay τ.
Proof 3.2 As in the proof of Theorem 3.1, it is sufficient to prove semi-global
stability in the state for a fixed 0.
Let\Omega and V c be defined as in
Theorem 3.1.
\Omega , with ffl ? 0 small.
The time derivative of V satisfies
When z(t)
n\Omega ffl we have, since b is positive definite,
Mkjk1 \GammaN for some number N ? 0 provided kjk 1 is small enough.
Only when z(t)
the value of V can increase with the estimate
Mkjk1 .
Now we prove by contradiction that all trajectories with initial condition
in\Omega are bounded for small suppose that a solution starting
in\Omega
(with it has to cross the level set 2V c . Assume
that this happens for the first time at t . Note that for small kjk 1 , t is
large. During the interval [t increase and decrease, but
increases, z(t)
2\Omega ffl and the increase \DeltaV is
limited: \DeltaV Mkjk1 . When z(t) would be
outside\Omega ffl for a time-interval
Hence by reducing kj(t; a)k 1 we can make the time-interval \Deltat arbitrary
small. On the other hand (for large a),
dz
when z t is
inside\Omega 2 , because f and \Psi map bounded sets into bounded sets.
Hence with jt \Deltat we have L\Deltat. Because of (12)
we can increase a (reduce kj(t; a)k 1 ) such that L\Deltat ffl and consequently
we have:
If ffl was chosen such
lies
inside\Omega , we have a contradiction because
this implies W (t ) W c . Hence a trajectory can never cross the level set
2W c and is bounded.
The results of Theorems 3.1 and 3.2 are not global in the delay, although
the unperturbed system is delay-independent stable. Global results in the
delay are generally not possible: we give an example whereby it is impossible
to bound the trajectories semi-globally in the state and globally in the delay,
even if we make the size of the perturbation arbitrary small w.r.t. the L 1
and L1 -norm.
Example 3.1 Consider the following system:
z
The unperturbed system, i.e. (13) with delay-independent stable.
This is proven with the Lyapunov functional
Its time derivative
\Gammaz 2(z 1
z 2+1
z 2+1
is negative definite: when z 1 62 [1; 3], both terms are negative and in the
other case the second term is dominated, because it saturates in z 2 . From
this it follows that the conditions of Assumption 1.2 are satisfied.
With the perturbation
whereby increasing a leads to a reduction of both kjk 1 and kjk 1 , we can not
bound the trajectories semi-globally in the state and globally in : for each
value of a we can find a bounded initial condition (upper bound independent
of a), leading to a diverging solution, provided is large enough: the first
equation of (13) has a solution z \Gammaff is the real
solution of equation
boundedness in of this solution over the interval [\Gamma; 0] (initial condition)
is guaranteed. Choose z 2
The above solution for z 1 satisfies:
when
ff log 5
3 ] and thus
A rather lengthy calculation shows that with z 2 and the perturbation
(14), the solution of (15) always escapes to infinity in a finite time t f (a).
Hence this also holds for the solution of the original system when the delay
is large enough such
log 5
This result is not in contradiction with the intuition that a perturbation
with small L 1 -norm can only cause escape in a finite time when the initial
condition is far away from the origin, as illustrated with example (4): in
the system (13) with driven away from the origin as long as
By increasing the delay in the first equation, we can keep
z 1 in this interval as long as desired. Thus the diverging transient of the
unperturbed system is used to drive the state away from the origin, far
enough to make the perturbation cause escape.
3.2 Modifying the shape of the perturbation
We assume that the shape of a perturbation with a fixed L 1 -norm can be
controlled and consider the influence of an energy concentration near the
origin. In the ODE case this does not allow to improve stability proper-
ties. This is illustrated with the first equation of example (4): instability
occurs when z(0) 1
R te \Gammas j(s)ds
and by concentrating the perburbation the
stability domain may even shrink, because the beneficial influence of damping
is reduced. In the DDE-case however, when the interconnection term is
linear in the undelayed argument, it behaves as linear during one delay interval
preventing escape. Moreover, starting from a compact region of initial
conditions, the reachable set after one delay interval can be bounded independently
of the shape of the perturbation (because of the fixed L 1 -norm).
After one delay interval we are in the situation treated in Theorem 3.1. This
is expressed in the following theorem. As in Theorem 2.2 the polynomial
growth condition prevents hidden nonlinearities due to interacting states.
Theorem 3.3 Consider the system (16)
and suppose that the unperturbed system is GAS with the Lyapunov functional of
Assumption 1.2. Let k(z)
satisfy the polynomial growth condition ‖dk/dz‖ ‖z‖ ≤ c k(z). Assume that Ψ has
linear growth in z(t), that ‖j(·, a)‖₁ is independent of a and that lim_{a→∞} ∫_t^∞ |j(s, a)| ds = 0 for each t > 0. Then
the trajectories of (16) can be bounded semi-globally in z
and for all τ in a compact interval not containing the origin.
Proof 3.3 Consider a fixed
let\Omega be the desired stability
domain in R n and let R be such that z 0 (')
The interconnection term has linear growth in z, i.e. there exist two
class- functions fl 1 and fl 2 such that
The time-derivative of the Lyapunov function V satisfies
dz
jj(t; a)j
cV (z t )jj(t; a)j
During the interval [0;
to\Omega . Therefore, when kzk R,
one can bound
by a factor c 2 independent of
a. Thus
and when a trajectory leaves the set fz : kzk Rg at t , because
t jj(s;a)jds
for some constant M , independently of a. In the above expression, V
As a consequence, also k(z) and kz(t)k can be bounded, uniformly in
a. Hence at time the state z , i.e. z(t); t 2 [0; ] belongs to
a compact
region\Omega 2 independently of a.
Now we can translate the original problem over one delay interval: at
time the initial conditions belong to the bounded
region\Omega 2 and with t
jj(s; a)jds
Because of Theorem 3.1, we can increase a such that all solutions starting
in\Omega 2 are bounded.
Until now we assumed a fixed . But because is compact, we can
take the largest threshold of a for bounded solutions over this interval.
Applications: stability of cascades and partial
state feedback
In this section we use the results given above to study the stability of cascade
systems:
whereby the z-subsystem satisfies Assumption 1.2 with a functional of
the form (3), the ξ-subsystem is
globally asymptotically and locally exponentially
stabilizable, and h(·) is continuous and locally Lipschitz with h(0) = 0. Using
partial state feedback laws we investigate in which situations the
equilibrium (z, ξ) = (0, 0) can be made semiglobally asymptotically stable.
4.1 LaSalle's theorem
Because the 'perturbation' y in (17) is generated by a GAS ODE, we can
strengthen the boundedness results in the previous section to asymptotic
stability results, by applying a generalization to the time-delay case of the
classical LaSalle's theorem [7]:
Theorem 4.1 (LaSalle Invariance Principle) Let Ω be a positively invariant
set of an autonomous ODE, and suppose that every solution starting
in Ω converges to a set E ⊂ Ω. Let M be the largest invariant set contained
in E; then every bounded solution starting
in Ω converges to M as t → ∞.
This theorem is generalized to functional differential equations by Hale
[3]: with the following definition of an invariant set,
Definition 4.1 Let {T(t), t ≥ 0} be the solution semigroup associated to
the functional differential equation; then a set Q ⊂ C is called invariant if T(t)Q = Q for all t ≥ 0.
LaSalle's theorem can be generalized to:
Theorem 4.2 If V is a Lyapunov functional on G and x t is a bounded
solution that remains in G, then x t converges to the largest invariant set in
We will now outline its use in the context of stability of cascades. Suppose
that in the cascade (17)-(18), with a particular feedback law, the
ξ-subsystem is GAS. Hence there exists a Lyapunov function V(ξ) such that
V̇(ξ) < 0 for ξ ≠ 0. When we use this Lyapunov function as a
Lyapunov functional for the whole cascade, we can conclude from Theorem
4.2 that every solution that remains bounded converges to the largest invariant
set where V̇ = 0, i.e. where ξ = 0, and by the GAS of the z-subsystem
this is
the equilibrium point (z, ξ) = (0, 0); solutions are either unbounded or
converge to the origin.
By means of a local version of Theorem 3.1 one can show that the origin
of (17)-(18) is stable.
Hence the theorems of the previous section are strengthened to asymptotic
stability theorems. For example under the conditions of Theorem 3.1,
one can achieve semi-global asymptotic stability in both the state and the
delay.
4.2 Stabilization of partially linear cascades
In the rest of this paper we assume a SISO linear driving system (the -
with controllable and consider linear partial state feedback control
laws . From the previous sections it is clear that the input y of the
z-subsystem can act as a destabilizing disturbance. However, the control
can drive the output of the linear system fast to zero. We will investigate
under which conditions this is sufficient to stabilize the whole cascade. An
important issue in this context is the so-called fast peaking phenomenon
[15]. This is a structural property of the -system whereby imposing faster
convergence of the output to zero implies larger overshoots which can in
turn destabilize the cascade and may form an obstacle to both global and
semi-global stabilizability. We start with a short description of the peaking
phenomenon, based on [15], and then apply the results of the previous
section to the stabilization of the cascade system.
4.2.1 The peaking phenomenon
When in the system (21)
the pair (A, B) is controllable, one can always find state feedback laws
u = Fξ resulting in an exponential decay rate with exponent −a. Then the
output of the closed loop system satisfies
‖y(t)‖ ≤ γ e^{−at} ‖ξ(0)‖,                                   (22)
where γ depends on the choice of the feedback gain. We are interested
in the lowest achievable value of γ among different feedback laws and its
dependence upon a.
Denote by F(a) the collection of all stabilizing feedback laws u = Fξ
with the additional property that all observable eigenvalues of (C, A_F),
with A_F = A + BF, have real part at most −a. For a given a and F ∈ F(a), define
the smallest value of γ in (22) as
γ_F = sup ‖y(t)‖ e^{at},
where the supremum is taken over all t ≥ 0 and all initial conditions satisfying
‖ξ(0)‖ ≤ 1. Now denote γ(a) = inf_{F ∈ F(a)} γ_F. The output of system
(21) is said to have peaking exponent s when there exist positive constants α₁, α₂
such that α₁ a^s ≤ γ(a) ≤ α₂ a^s for large a.
In [15] one places all eigenvalues to the left of the line Re(λ) = −a
for large a. When s = 0 the output is said to be nonpeaking.
The peaking exponent s is a structural property related to the zero-
dynamics: when the system has relative degree r, it can be transformed (in-
cluding a preliminary feedback transformation) into the normal form [4][1]:
ae
which can be interpreted as an integrator chain linearly coupled with the
zero-dynamics subsystem
. Using state feedback the output of an
integrator chain can be forced to zero rapidly without peaking [13]. Because
of the linear interconnection term, stability of the zero-dynamics subsystem
implies stability of the whole cascade. On the contrary, when the zero
dynamics are unstable, some amount of energy, expressed by
is needed for its stabilization and therefore the output must peak. More
precisely we have the following theorem, proven in the appendix.
Theorem 4.3 The peaking exponent s equals the number of eigenvalues in
the closed RHP of the zero-dynamics subsystem.
The definition of the peaking exponent (23) is based on an upper bound of
the exponentially weighted output, while its L 1 -norm is important in most
of the theorems of section 3. But because the overshoots related to peaking
occur in a fast time-scale ( at), there is a connection. For instance we have
the following theorem, based on a result of Braslavsky and Middleton [10]:
Theorem 4.4 When the output y of system (21) is peaking (s 1), ky(t)k 1
can not be reduced arbitrarily.
Proof 4.1 Denote by z 0 an unstable eigenvalue of the zero-dynamics of
(21). When a feedback is stabilizing the relation between y and
in the Laplace-domain is given by
with
. The first term vanishes at z 0 because the eigenvalues of the
zero dynamics appear as zeros in the corresponding transfer function H(s)
and since the feedback F is stabilizing, no unstable pole-zero cancellation
occurs at z 0 . Hence
4.2.2 Nonpeaking cascades
When the -subsystem is minimum-phase and thus nonpeaking, one can find
state feedback laws resulting in
and the L 1 -norm of the output can be made arbitrary small. So by Theorem
3.1, the cascade (20) can be stabilized semi-globally in the state and in the
delay.
4.2.3 Peaking cascades
When the -subsystem is nonminimum phase, the peaking phenomenon
forms an obstacle to semi-global stabilizability, because the L 1 -norm of the
output cannot be reduced (Theorem 4.4).
For ODE-cascades, we illustrate the peaking obstruction with the following
example:
Example 4.1 In the cascade,
the peaking exponent of the -subsystem is 1 (zero dynamics
cascade cannot be stabilized semi-globally since the explicit solution of the
first equation is given by
whereby
Hence the solution reaches infinity in a
For DDE-cascades, we consider two cases:
Case 1: Peaking exponent=1 We can apply theorem 3.3 and obtain
semi-global stabilizability in the state and in the delay, when the interconnection
term is linear in the undelayed argument: besides (25) the L 1 -norm
of y can also be bounded from above since there exists feedback laws
such that
and because of the fast time-scale property, the energy can be concentrated
since 8t ? 0:
jy(s)jds
ds ! 0 as a !1:
Case 2: Peaking exponent ? 1 In this case, we expect the L 1 -norm of
y to grow unbounded with a, as suggested by the following example:
Example 4.2 When ξ_k is considered as the output of the integrator chain,
the peaking exponent is k − 1, and the L¹-norm of the output cannot
be reduced arbitrarily by achieving a faster exponential decay rate. In Proposition
4.32 of [13], it is shown that with the feedback-law
u = −Σ_{k=1}^{n} a^{n−k+1} q_k ξ_k the solutions of the closed loop satisfy an estimate with
a constant c independent of a;
hence this particular feedback gain is able to achieve an upper
bound which corresponds to definition (23), for each choice of the output
ξ_k. It is also shown in [13] that with the same feedback and with as
initial condition a vector of unit norm
there exist d and t_s such that the output at t_s is bounded below in terms of a.
As a consequence, the L¹-norm of the output grows unbounded with a,
while the peaking exponent of the output exceeds one.
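The peaking of an integrator chain can be reproduced numerically. The sketch below (illustrative only, not from the paper) places a triple closed-loop pole at −a for a chain of three integrators and records the peak of ξ₃; the zero dynamics with respect to this output consist of two integrators, so the peak grows roughly like a².

import numpy as np

def peak_of_last_state(a, n_steps=20000):
    """Chain xi1'=xi2, xi2'=xi3, xi3'=u with all closed-loop poles at -a.
    Returns max |xi3(t)| for xi(0) = (1, 0, 0)."""
    dt = 10.0 / (a * n_steps)             # resolve the fast time scale ~ 1/a
    x = np.array([1.0, 0.0, 0.0])
    peak = 0.0
    for _ in range(n_steps):
        u = -(a**3 * x[0] + 3 * a**2 * x[1] + 3 * a * x[2])
        x = x + dt * np.array([x[1], x[2], u])   # explicit Euler step
        peak = max(peak, abs(x[2]))
    return peak

for a in (2.0, 4.0, 8.0, 16.0):
    print(a, peak_of_last_state(a))       # peaks grow roughly like a**2 (s = 2)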
With the two following examples, we show that when the energy of an
exponentially decaying input perturbation ( e \Gammaat ) grows unbounded with
a, an interconnection term which is linear in the undelayed argument, is
not sufficient to bound the solutions semi-globally in the state. Because it
is hard to deal in general with outputs generated by a linear system with
peaking exponent s ? 1, we use an artificial perturbation a s e \Gammaat , which has
both the fast time-scale property and the suitable growth-rate of the energy
(a
Example 4.3 The solutions of equation
can not be bounded semi-globally in z by increasing a, for any ? 0, if the
'peaking exponent' s is larger than one. \Sigma
Proof: Equation (26) has an exponential solution z e (t),
z e
( a
a s
ff
e a
Consider the solution z(t) with initial condition z 0 j L ? 0 on [\Gamma; 0]. For
and consequently coincides on [o; ] with
For large a, expression (27) describes a decreasing lower bound on [; 2 ],
since y(t) reaches its maximum in t (a) with t ! 0 as a ! 1. Thus
imposing and from this
one can argue 2 that z(t) z e (t); t . Thus the trajectory starting with
initial condition L on [\Gamma; 0] is unbounded when
Le L ff a
a
a s
ff
e 3a
When s ? 2, for each value of L, the solution is unstable for large a, thus the
attraction domain of the stable zero solution shrinks to zero. When
a solution starting from L ?
\Theta 3
ffff is unstable for large a.
Even when the interconnection term contains no terms in z(t), but only
delayed terms of z, semi-global results are still not possible in general, as
shown with the following example.
2 Intersection at t would imply
Example 4.4 The solutions of the system
with otherwise, can not be
bounded semi-globally in z by increasing a, for any ? 0, when the 'peaking
exponent' s is greater than one. \Sigma
Proof: When z 1, equation (28) reduces to:
which has the following explicit solution,
z l (t) = at
When the initial condition of (28) is L on [\Gamma; 0], during one delay-interval,
one can find an lower bound of the solution by integrating
with solution
z
When a is chosen such that b 1, the expression for z l (t) is valid for
t 0. When z u (2) ? z l (2 ), one can argue that for large a, z u (t) ?
z l (t); t 2 [; 2 ] and z u (t) describes a lower bound for the solution starting
in L for reaches is maximum in t (a) with t ! 0 as
1). Consequently, the trajectory with initial condition L on [\Gamma; 0],
is unbounded when
4.2.4 Zero dynamics with eigenvalues on the imaginary axis
The situation where the zero dynamics possess eigenvalues on the imaginary
axis but no eigenvalues in the open RHP deserves special attention. According
to Theorem 4.3, the system is peaking, that is, the L 1 norm of the output
cannot be reduced arbitrarily. However this energy can be 'spread out' over
a long time interval: it is indeed well known that a system with all its ei-
genalues in the closed LHP can be stabilized with a low-gain feedback, as
expressed by the following theorem, taken from [13]:
Theorem 4.5 If a system
stabilizable and the eigenvalues
of A 0 are in the closed left half plane, then it can be stabilized with a
low-gain control law which for large a satisfies:
a
The infinity-norm of such a low-gain control signal can be arbitrary reduced,
which results, by theorem 3.2, in satisfactory stabilizability results when
it also acts as an input disturbance of a nonlinear system. This suggests
not to force the output of (21) exponentially fast ( e \Gammaat ) to zero, which
results in peaking, but to drive it rapidly without peaking to the manifold
on which the dynamics are controlled by the low-gain control
action. Mathematically, with and a feedback transformation
the normal form of the -subsystem is transformed into
Using a high-gain feedback driving e(t) to zero without peaking, as proven
in [13], proposition 4.37, one can always force the output to satisfy the
constraint
a
with fl independent of a. A systematic treatment of such high-low gain
control laws can be found in [8].
For instance the system,
ae
is weakly minimum-phase (zero-dynamics
0). With the high-low gain
feedback the explicit solution of (30) for large a can be
approximated by:
a
a
a
Perturbations satisfying constraint (29) can be decomposed in signals
with vanishing L 1 and L1 -norm. This suggests the combination of theorems
3.1 and 3.2 to:
Theorem 4.6 Consider the interconnected system
Suppose that the z-subsystem is GAS with the Lyapunov functional V (z t )
satisfying Assumption 1.2 and the zeros of the -subsystem are in the closed
LHP. Then the interconnected system can be made semi-globally asymptotically
stable in both [z; ] and the delay, using only partial-state feedback.
Proof 4.2 As argumented in the beginning of section 4, the origin (z;
(0;
Let\Omega be the desired region of attraction in the (z; )-
space and choose R such that for all (z
R. Because of the
assumption on the -subsystem, there exist partial-state feedback laws such
that
with fl independent of a.
Consider the time-interval [0; 1]. Because
lim
a
one can show, as in the proof of theorem 3.1, that by taking a large, the
increase of V can be limited arbitrary. Hence for t 1, the trajectories can
be bounded inside a compact
region\Omega 2 . We can now translate the original
problem over one time-unit and since
sup
flR(e \Gammaat +a
we can, by theorem 3.2, increase a until the stability domain
Hence all trajectories starting
in\Omega are bounded and converge to the origin,
because of LaSalle's theorem.
Conclusions
We studied the effect of bounded input-perturbations on the stability of
nonlinear delay equations of the form (1).
Global stability results are generally not possible without structural assumptions
on the interconnection term because arbitrary small perturbations
can lead to unbounded trajectories, even when they are exponentially
decaying. In the ODE-case this is caused by the fact that superlinear
destabilizing terms can drive the state to infinity in a finite time. Superlinear
delayed terms cannot cause finite-escape but can still make trajectories
diverge faster than any exponential function.
In a second part we dropped most of the structural assumptions on
the unperturbed system and the interconnection term and considered semi-global
results when the size of the perturbation can be reduced arbitrarily.
Here we assumed that the unperturbed system is delay-independent stable.
When the L¹ or the L∞ norm of the perturbations is brought to zero,
trajectories can be bounded semi-globally in both the state and the delay.
By means of examples we explained mechanisms prohibiting global results in
the delay. We also considered the effect of concentrating a perturbation with
a fixed L 1 -norm near the origin. This leads to semi-global stabilizability in
the state and compact delay-intervals not containing the origin, when the
interconnection term is linear in its undelayed arguments.
As an application, we studied the stabilizability of partial linear cascades
using partial state feedback. When the interconnection term
is nonlinear, output peaking of the linear system can form an obstruction
to semi-global stabilizability because the L 1 -norm of the output cannot be
reduced by achieving a faster exponential decay rate. If we assume that
the interconnection term is linear in the undelayed argument and the peaking
exponent is one, we have semi-global stabilizability results, because the L¹-norm
of the output can be bounded from above while concentrating its
energy. Even with the above assumption on the interconnection term, higher
peaking exponents may form an obstruction. When the zeros of the linear
driving system are in the closed left half plane, we have satisfactory stability
results when using a high-low gain feedback, where the output of the
linear subsystem can be decomposed in two signals with vanishing L 1 and
L1 norm respectively.
The main contribution of this paper lies in placing the classical cascade
results in the more general framework of bounded input perturbations and
its generalization to a class of functional differential equations.
Acknowledgements
The authors thank W.Aernouts for fruitful discussions on the results presented
in the paper. This paper presents research results of the Belgian programme
on Interuniversity Poles of Attraction, initiated by the Belgian
State, Prime Minister's Office for Science, Technology and Culture (IUAP
P4/02). The scientific responsibility rests with its authors.
--R
Asymptotic stability of minimum phase non-linear systems
Sufficient conditions for stability and instability of autonomous functional-differential equations
Nonlinear control systems.
Introduction to the theory and application of functional differential equations
Stability of Functional Differential Equations
Stability theory of ordinary differential equations.
Robustness of nonlinear delay equations w.
How violent are fast controls?
Slow peaking and low-gain design for global stabilization of nonlinear systems
Constructive Nonlinear Control.
On the input-to-state stability properties
The peaking phenomenon and the global stabilization of nonlinear systems.
Tools for semiglobal stabilization by partial state and output feedback.
nonlinear small gain theorem for the analysis of control systems with saturation.
--TR | nonlinear control;delay equations;cascade systems |
587590 | Affine Invariant Convergence Analysis for Inexact Augmented Lagrangian-SQP Methods. | An affine invariant convergence analysis for inexact augmented Lagrangian-SQP methods is presented. The theory is used for the construction of an accuracy matching between iteration errors and truncation errors, which arise from the inexact linear system solvers. The theoretical investigations are illustrated numerically by an optimal control problem for the Burgers equation. | Introduction
. This paper is concerned with an optimization problem of the
following type:
minimize J(x) subject to e(x) = 0,                               (P)
where J and e are sufficiently smooth functions and X, Y are real
Hilbert spaces. These types of problems occur, for example, in the optimal control of
systems described by partial differential equations. To solve (P) we use the augmented
Lagrangian (sequential quadratic programming) technique as developed in [11].
In this method the differential equation is treated as an equality constraint, which
is enforced by a Lagrangian term together with a penalty functional. We present an
algorithm which has second-order convergence rate and depends upon a second-order
sufficient optimality condition. In comparison with SQP methods the augmented
Lagrangian-SQP method has the advantage of a more global behavior. For certain
examples we found it to be less sensitive with respect to the starting values, and the
region for second-order convergence rate was reached earlier, see e.g. [11, 15, 17]. We
shall point out that the penalty term of the augmented Lagrangian functional need
not be implemented but rather that it can be realized by a first-order Lagrangian
update.
Augmented Lagrangian-SQP methods applied to problem (P) are essentially Newton
type methods applied to the Kuhn-Tucker equations for an augmented optimization
problem. Newton methods and their behavior under different linear transformations
were studied by several authors, see [5, 6, 7, 8, 10], for instance. In this paper,
we combine both lines of work and present an affine invariant setting for analysis
and implementation of augmented Lagrangian-SQP methods in Hilbert spaces. An
affine invariant convergence theory for inexact augmented Lagrangian-SQP methods
is presented. Then the theoretical results are used for the construction of an accuracy
matching between iteration errors and truncation errors, which arise from the inexact
linear system solvers.
The paper is organized as follows. In §2 the augmented Lagrangian-SQP method
is introduced and necessary prerequisites are given. The affine invariance is introduced
Karl-Franzens-Universität Graz, Institut für Mathematik, Heinrichstraße 36, A-8010 Graz, Austria
Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB), Takustraße 7, D-14195 Berlin,
Germany (weiser@zib.de). The work of this author was supported by Deutsche Forschungsgemeinschaft
(DFG), Sonderforschungsbereich 273.
in §3. In §4 an affine invariant convergence result for the augmented Lagrangian-SQP
method is presented. Two invariant norms for optimal control problems are analyzed
in §5, and the inexact Lagrangian-SQP method is studied in §6. In the last section
we report on some numerical experiments done for an optimal control problem for the
Burgers equation, which is a one-dimensional model for nonlinear convection-diffusion
phenomena.
2. The augmented Lagrangian-SQP method. Let us consider the following
constrained optimal control problem
minimize J(x) subject to e(x) = 0,                              (P)
where X and Y are real Hilbert spaces. Throughout we do
not distinguish between a functional in the dual space and its Riesz representation
in the Hilbert space. The Hilbert space X × Y is endowed with the Hilbert space
product topology and, for brevity, we set Z = X × Y.
Let us present an example for (P) that illustrates our theoretical investigations
and that is used for the numerical experiments carried out in §7. For more details we
refer the reader to [18].
Example 2.1. Let Ω denote the interval (0, 1) and set
Q = (0, T) × Ω for given T > 0.
We define the space W(0, T) by
W(0, T) = {φ ∈ L²(0, T; H¹(Ω)) : φ_t ∈ L²(0, T; H¹(Ω)')},
which is a Hilbert space endowed with the common inner product. For controls
u and v the state y ∈ W(0, T) is given by the weak solution of the unsteady
Burgers equation with Robin type boundary conditions, i.e., y satisfies (2.1a)
and the weak form (2.1b), which involves the terms νy_x(t, ·), y(t, ·)y_x(t, ·) and f(t, ·);
here ⟨·, ·⟩ denotes the duality pairing
between H¹(Ω) and its dual. We suppose that f ∈ L²(0, T; H¹(Ω)').
Recall that W(0, T) is continuously embedded into the
space of all continuous functions from [0, T] into L²(Ω),
denoted by C([0, T]; L²(Ω)),
see e.g. [3, p. 473]. Therefore, (2.1a) makes sense. With every pair of controls u, v we associate
the cost of tracking type
J(y, u, v), consisting of a tracking term for y − z over Q and quadratic control costs,
where z ∈ L²(Q) and the regularization weight > 0 are fixed. Let X and Y denote the
corresponding Hilbert spaces of unknowns x = (y, u, v) and of constraint values.
We introduce the bounded operator e : X → Y
whose action is defined by
⟨−e(y, u, v), φ⟩_{L²(0,T;H¹)} for test functions φ,
where for given g ∈ H¹(Ω)' the mapping g ↦ (−Δ + I)^{-1} g ∈ H¹(Ω) is the
Neumann solution operator associated with the elliptic operator in the weak formulation.
Now the optimal control problem can be written in the form (P). □
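To make the setting concrete, the following sketch evaluates a tracking-type cost for a crude finite-difference, explicit-Euler discretization of a Burgers-type state equation. Grid sizes, the boundary handling and the weight sigma are illustrative assumptions and do not reproduce the discretization or the data of Section 7.

import numpy as np

def burgers_state(u_ctrl, nu=0.01, nx=64, nt=400, T=1.0):
    """Explicit Euler / central differences for y_t + y y_x = nu y_xx + u."""
    dx, dt = 1.0 / (nx - 1), T / nt
    y = np.zeros(nx)                       # initial condition y(0, .) = 0
    traj = [y.copy()]
    for n in range(nt):
        y_x = np.gradient(y, dx)
        y_xx = np.gradient(y_x, dx)
        y = y + dt * (-y * y_x + nu * y_xx + u_ctrl[n])
        y[0], y[-1] = y[1], y[-2]          # crude zero-flux boundary handling
        traj.append(y.copy())
    return np.array(traj)

def tracking_cost(u_ctrl, z_target, sigma=1e-2):
    """J = 1/2 ||y - z||^2 + sigma/2 ||u||^2, averaged over the space-time grid."""
    y = burgers_state(u_ctrl)
    return 0.5 * np.mean((y - z_target) ** 2) + 0.5 * sigma * np.mean(u_ctrl ** 2)

# Example call with a zero control and a zero target (both illustrative):
u0 = np.zeros((400, 64))
print(tracking_cost(u0, z_target=np.zeros((401, 64))))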
For c ≥ 0 the augmented Lagrange functional associated with (P) is
defined by
L_c(x, λ) = J(x) + ⟨λ, e(x)⟩_Y + (c/2) ‖e(x)‖²_Y.
The following assumption is rather standard for SQP methods in Hilbert spaces, and
is supposed to hold throughout the paper.
Assumption 1. Let x* ∈ X be a reference point such that
a) J and e are twice continuously Fréchet-differentiable, and the mappings J''
and e'' are Lipschitz-continuous in a neighborhood of x*,
b) the linearization e'(x*) of the operator e at x* is surjective,
c) there exists a Lagrange multiplier λ* ∈ Y satisfying the first-order necessary
optimality conditions (2.2),
where the Fréchet-derivative with respect to the variable x is denoted by a
prime, and
d) there exists a constant κ > 0 such that
L''_0(x*, λ*)(φ, φ) ≥ κ ‖φ‖²_X for all φ ∈ ker e'(x*),
where ker e'(x*) denotes the kernel or null space of e'(x*).
Remark 2.2. In the context of Example 2.1 we write x = (y, u, v). It was
proved in [18] that Assumption 1 holds provided ‖y* − z‖_{L²(Q)} is sufficiently small.
The next proposition follows directly from Assumption 1. For a proof we refer
to [12] and [13], for instance.
Proposition 2.3. With Assumption 1 holding, x* is a local solution to (P).
Furthermore, there exists a neighborhood of (x*, λ*) such that (x*, λ*) is the unique
solution of (2.2) in this neighborhood.
The mapping x ↦ L_c(x, λ*) can be bounded from below by a quadratic func-
tion. This fact is referred to as augmentability of L_c and is formulated in the next
proposition. For a proof we refer the reader to [11].
Proposition 2.4. There exist a neighborhood Ū of x* and a constant c̄ ≥ 0 such
that the mapping L''_c(x, λ*) is coercive on the whole space X for all x ∈ Ū and
c ≥ c̄.
Remark 2.5. Due to Assumption 1 and Proposition 2.4 there are convex neighborhoods
U(x*) of x* and V(λ*) of λ* such that for all (x, λ) ∈ U = U(x*) × V(λ*):
a) J(x) and e(x) are twice Fréchet-differentiable and their second Fréchet-deri-
vatives are Lipschitz-continuous in U(x*),
c) L''_0(x, λ) is coercive on the kernel of e'(x),
d) the point z* = (x*, λ*) is the unique solution to (2.2) in U, and
there exist κ̄ > 0 and c̄ ≥ 0 such that L''_c(x, λ) is coercive on X with constant κ̄
for all (x, λ) ∈ U and c ≥ c̄. (2.3)
To shorten notation let us introduce the operator
F_c(x, λ) = ( L'_c(x, λ), e(x) )  for all (x, λ) ∈ U.
Then the first-order necessary optimality conditions (2.2) can be expressed as
F_c(x, λ) = 0.   (OS)
To find x* numerically we solve (OS) by the Newton method. The Fréchet-derivative
of the operator F_c in U is the saddle-point operator ∇F_c(x, λ), built from L''_c(x, λ), e'(x) and e'(x)*,
where e'(x)* denotes the adjoint of the operator e'(x).
Remark 2.6. With Assumption 1 holding there exists a constant C > 0 satisfying
‖∇F_c(x, λ)^{-1}‖_{B(Z)} ≤ C for all (x, λ) ∈ U
(see e.g. [9, p. 114]), where B(Z) denotes the Banach space of all bounded linear
operators on Z.
Now we formulate the augmented Lagrangian-SQP method.
Algorithm 1.
a) Choose (x⁰, λ⁰) ∈ U, c ≥ 0, and set k = 0.
b) Update the multiplier by a first-order Lagrangian update.
c) Solve for (δx, δλ) the linear system (2.6).
d) Set (x^{k+1}, λ^{k+1}) = (x^k, λ^k) + (δx, δλ), increase k by one, and go
back to b).
Remark 2.7. Since X and Y are Hilbert spaces, the new iterate can equivalently be
obtained from solving the linear system (2.7) and updating the multiplier accordingly. Equation (2.7) corresponds to a
Newton step applied to (OS). This form of the iteration requires the implementation
of the penalty term, whereas steps b), c) of Algorithm 1 do not - see [11]. In case of
Example 2.1 this requires at least one additional solve of the Poisson equation.
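To make the structure of Algorithm 1 concrete, the following minimal sketch (Python, with a hypothetical toy cost J, a single toy constraint e, and c = 0, none of which are the paper's Example 2.1, and omitting the first-order multiplier update) carries out the Newton-KKT step behind steps c)-d) in finite dimensions.

```python
import numpy as np

# Minimal finite-dimensional sketch of a Lagrangian-SQP (Newton-KKT) iteration for
#     minimize J(x)  subject to  e(x) = 0.
# All problem data below are hypothetical placeholders, not the paper's Example 2.1.

def J(x):            # toy quadratic cost
    return 0.5 * x @ x

def grad_J(x):
    return x

def hess_J(x):
    return np.eye(x.size)

def e(x):            # single nonlinear equality constraint e(x) = 0
    return np.array([x @ x - 1.0])

def jac_e(x):        # e'(x) as a (1, n) matrix
    return 2.0 * x.reshape(1, -1)

def sqp_step(x, lam):
    """One Newton step on the KKT system F_0(x, lam) = 0."""
    H = hess_J(x) + 2.0 * lam[0] * np.eye(x.size)   # L''_0(x, lam) for this toy e
    A = jac_e(x)
    # KKT (saddle-point) matrix [[H, A^T], [A, 0]]
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    rhs = -np.concatenate([grad_J(x) + A.T @ lam, e(x)])
    d = np.linalg.solve(K, rhs)
    return x + d[:x.size], lam + d[x.size:]

x, lam = np.array([2.0, 0.0]), np.array([0.0])
for k in range(10):
    x, lam = sqp_step(x, lam)
    res = np.linalg.norm(np.concatenate([grad_J(x) + jac_e(x).T @ lam, e(x)]))
    if res < 1e-10:
        break
print(x, lam, res)
```

In the infinite-dimensional setting of (P) the linear system in step c) is of course the operator equation (2.6); the sketch only illustrates its saddle-point structure.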
3. Affine invariance. Let B̄ : X → X be an arbitrary isomorphism. We
transform the x variable by x = B̄y. Thus, instead of (P) we study the whole class
of equivalent transformed minimization problems
minimize J(B̄y) subject to e(B̄y) = 0,
with the transformed solutions ȳ* = B̄^{-1}x*. Setting
T = diag(B̄, I) and G_c(y, λ) = T* F_c(B̄y, λ),
the first-order necessary optimality conditions have the form
G_c(y, λ) = 0.   (ŌS)
Applying Algorithm 1 to (ŌS) we get an equivalent sequence of transformed iterates.
Theorem 3.1. Suppose that Assumption 1 holds. Let (x⁰, λ⁰) and (y⁰, λ⁰) with x⁰ = B̄y⁰
be the starting iterates for Algorithm 1 applied to the optimality conditions
(OS) and (ŌS), respectively. Then both sequences of iterates are well-defined
and equivalent in the sense of
x^k = B̄y^k and λ^k = λ̄^k for all k ≥ 0. (3.2)
Proof. First note that the Fréchet-derivative of the operator G_c is given by
∇G_c(y, λ) = T* ∇F_c(B̄y, λ) T. (3.3)
To prove (3.2) we use an induction argument. By assumption the identity (3.2) holds
for k = 0. Now suppose that (3.2) is satisfied for some k ≥ 0, so that x^k = B̄y^k and λ^k = λ̄^k.
Using step b) of Algorithm 1 it follows that the updated multipliers coincide.
From (3.3)
we conclude that the corrections satisfy (δy, δλ̄) = (B̄^{-1}δx, δλ).
Utilizing step d) of Algorithm 1 we get
the desired result.
Due to the previous theorem the augmented Lagrangian-SQP method is invariant
under arbitrary transformations B̄ of the state space X. This nice property should,
of course, be inherited by any convergence theory and termination criteria. In §4 we
develop such an invariant theory.
Example 3.2. The usual local Newton-Mysovskii convergence theory (cf. [14,
p. 412]) is not affine invariant, which leads to an unsatisfactory description of the
domain of local convergence. Consider the optimization problem (3.4)
with unique solution x* and associated Lagrange multiplier λ*.
Note that the Jacobian ∇F_0 does not depend on λ here, but only on
x. In the context of Remark 2.5 we choose the neighborhood
Fig. 3.1. Illustration for Example 3.2. a) Contour lines of the cost functional, the constraint,
and the areas occupied by the other subplots. b) Neighborhood U(x*) (gray) and Kantorovich ball of
theoretically assured convergence (white) for the original problem formulation. c) U(x*) and Kantorovich
ball for the "better" formulation. d) U(x*) and Kantorovich ball for the "better" formulation
plotted in coordinates of the original formulation.
Defining
the Newton-Mysovskii theory essentially guarantees convergence for all starting points
in the Kantorovich region
Here, ‖·‖ denotes the spectral norm for symmetric matrices and ‖·‖₂ is the Euclidean
norm. For our choice of U, resulting in a Lipschitz constant of approximately 1.945, a section of
the Kantorovich region at λ = λ* is plotted in Figure 3.1-b). A different choice of
coordinates, however, yields a significantly different result. With the transformation
indicated above, problem (3.4) can be written in an equivalent transformed form.
For the same neighborhood U, the better constant of approximately 1.859 results.
Again, a section of the Kantorovich region at λ = λ* is shown in Figure 3.1-c).
Transformed back to the original coordinates, Figure 3.1-d) reveals a much larger domain of theoretically
assured convergence. This "better" formulation of the problem is, however,
not at all evident. In contrast, a convergence theory that is invariant under linear
transformations automatically includes the "best" formulation.
Remark 3.3. The invariance of Newton's method is not limited to transformations
of type (3.1). In fact, Newton's method is invariant under arbitrary transformations
of domain and image space, i.e., it behaves exactly the same for A F_c composed with isomorphisms
of the domain. Because F_c has a special gradient structure in the optimization
context, meaningful transformations are coupled due to the chain rule. Meaningful
transformations result from transformations of the underlying optimization problem,
i.e., transformations of the domain space and the image space of the constraints.
For such general transformations there is no possibility to define a norm in an invariant
way, since both the domain and the image space of the constraints are transformed
independently. For this reason, different types of transformations have
been studied for different problems, see e.g. [6, 7, 10].
4. Affine invariant convergence theory. To formulate the convergence theory
and termination criteria in terms of an appropriate norm, we use a norm that is
invariant under the transformation (3.1).
Definition 4.1. Let z ∈ U. Then the norms ‖·‖_z : Z → R, z ∈ U, are called
affine invariant for (OS), if
the quantity ‖∇F_c(z̄)δz‖_z is unchanged under (3.1) for all z̄ ∈ U and δz ∈ Z. (4.1)
We call {‖·‖_z}_{z∈U} an ω-continuous family of invariant norms for (OS), if condition (4.2) holds
for every r, δz ∈ Z and z ∈ U such that z + δz ∈ U. Using affine invariant norms
we are able to present an affine invariant convergence theorem for Algorithm 1.
Theorem 4.2. Assume that Assumption 1 holds and that there are constants
ω > 0 together with an ω-continuous family of affine invariant norms {‖·‖_z}_{z∈U}, such
that the operator ∇F_c satisfies the affine invariant Lipschitz condition (4.3)
for s, t ∈ [0, 1], z ∈ U, and δz ∈ Z such that co{z, z + δz} ⊆ U, where co A denotes
the convex hull of A. For k ∈ N let h_k denote the scaled residual and L(z^k) the corresponding level set (4.4).
Suppose that h_0 < 2 and that the level set L(z⁰) is closed. Then the iterates stay in
U and the residuals converge to zero, with h_{k+1} ≤ h_k²/2 < h_k.
Additionally, we have
#F c (z k+1 )# z k #F c (z k )# z k . (4.5)
Proof. By induction, assume that L(z k ) is closed and that h k < 2 for k # 0. Due
to Remark 2.5 the neighborhood U is assumed to be convex, so that z
all # [0, 1]. From #F c (z k )#z we conclude that
ds
ds
for all # [0, 1]. Applying (4.2), (4.3), h
z k ds
holds. If z k +#z k
there exists an -
# [0, 1] such that z k
i.e.,
which is a contradiction. Hence, z k+1
z k /2.
Thus, we have h k+1 # h 2
closed, every Cauchy
sequence in L(z k+1 ) converges to a limit point in L(z k ), which is, by (4.4) and the
continuity of the norm, also contained in L(z k+1 ). Hence, L(z k+1 ) is closed. Finally,
using in (4.6), the result (4.5) is obtained.
Remark 4.3. We choose simplicity over sharpness here. The definition of the
level set L(z) can be sharpened somewhat by a more careful estimate of the corresponding term in the proof.
Theorem 4.2 guarantees that lim_{k→∞} h_k = 0. To ensure that z^k converges to z* in Z as
k → ∞ we have to require that the canonical norm ‖·‖_Z
on Z can be bounded
appropriately by the affine invariant norms ‖·‖_z.
Corollary 4.4. If, in addition to the assumptions of Theorem 4.2, there exists
a constant C̄ > 0 such that
‖δz‖_Z ≤ C̄ ‖∇F_c(z)δz‖_z for all δz ∈ Z and z ∈ U,
then the iterates converge to the solution z* of (OS).
Proof. By assumption and Theorem 4.2 we have
Thus, {z k
} k#N is a Cauchy sequence in L(z 0 ) # U . Since L(z 0 ) is closed, the claim
follows by Remark 2.5-d).
For actual implementation of Algorithm 1 we need a convergence monitor indicating
whether or not the assumptions of Theorem 4.2 may be violated, and a termination
criterion deciding whether or not the desired accuracy has been achieved.
From (4.5), a new iterate z^{k+1} is accepted whenever
‖F_c(z^{k+1})‖_{z^k} < ‖F_c(z^k)‖_{z^k}. (4.7)
Otherwise, the assumptions of Theorem 4.2 are violated and the iteration is considered
to be non-convergent. The use of the norm ‖·‖_{z^k} for both the old and the new iterate
permits an efficient implementation. Since in many cases the norm ‖F_c(z^{k+1})‖_{z^k} is
defined in terms of the simplified Newton correction, the derivative need not be evaluated
at the new iterate. If a factorization of ∇F_c(z^k) is available via a direct solver, it can
be reused at negligible cost even if the convergence test fails. If an iterative solver
is used, the previous correction in general provides a good starting point for computing the new one, such
that the additional cost introduced by the convergence monitor is minor.
The SQP iteration will be terminated with a solution z^{k+1} as soon as
‖F_c(z^{k+1})‖_{z^k} ≤ TOL,
with a user-specified tolerance TOL. Again, the use of the norm ‖·‖_{z^k} allows an
efficient implementation.
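A possible way to wire the monitor (4.7) and the termination test into the iteration is sketched below; norm_z, F_c and solve_step are hypothetical placeholders for the affine invariant norm ‖·‖_{z_k}, the residual F_c, and one (possibly inexact) SQP step.

```python
# Hypothetical sketch: accept/terminate logic with the norm frozen at the old iterate z_k.
TOL = 1e-6

def newton_loop(z, F_c, solve_step, norm_z, max_iter=50):
    for k in range(max_iter):
        z_new = solve_step(z)                     # one (inexact) Newton/SQP step
        r_old = norm_z(z, F_c(z))                 # ||F_c(z_k)||_{z_k}
        r_new = norm_z(z, F_c(z_new))             # ||F_c(z_{k+1})||_{z_k}: same norm, new residual
        if not (r_new < r_old):                   # convergence monitor (4.7) violated
            raise RuntimeError("iteration considered non-convergent")
        z = z_new
        if r_new <= TOL:                          # termination criterion
            return z
    return z
```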
5. Invariant norms for optimization problems. What remains to be done
is the construction of an ω-continuous family of invariant norms. In this section we
introduce two different norms.
5.1. First invariant norm. The first norm takes advantage of the parameter c
in the augmented Lagrangian. As we mentioned in Remark 2.5, there exists a c̄ ≥ 0
such that L''_c(z) is coercive on X for all z ∈ U and c ≥ c̄. Hence, the associated inverse operator
belongs to B(Z) for all c ≥ c̄.
Let us introduce the operator S_c(z), built from L''_c(z) and the identity I,
for all z ∈ U and c ≥ 0. (5.1)
Since L''_c(z) is self-adjoint for all z ∈ U, S_c(z) is self-adjoint as well. Due to (2.3) the
operator S_c(z) is coercive for all z ∈ U and c ≥ c̄. Thus, for all z ∈ U the quantity (5.2)
is a norm on Z for c ≥ c̄.
Proposition 5.1. Let c # - c. Then, for every z # U the mapping
defines an affine invariant norm for (2.2).
Proof. Let z # U be arbitrary. Since #S 1/2
defines a norm on Z for c # -
c and
#F c (z) is continuously invertible by Remark 2.6, it follows that # z is a norm on Z.
Now we prove the invariance property (4.1). Let -
L c denote the augmented Lagrangian
associated with the transformed problem (3.1). Then we have -
setting -
#r#
From (3.3) we conclude that
with
# U . Using (5.3) and (5.4) we obtain
which gives the claim.
In order to show the ω-continuity (4.2) required for Theorem 4.2, we need the
following lemma.
Lemma 5.2. Suppose that c # - c and that there exists a constant # 0 such that
for all # Z, z # U and #z # Z such that z + #z # U . Then we have
where
Y
Proof. Let
# Z and z # U . From (5.1) and (5.2) we infer
By assumption S c (z) is continuously invertible. Utilizing the Lipschitz assump-
tion (5.5) the second additive term on the right-hand side can be estimated as
#.
Note that
Y
This implies
Inserting (5.7) into (5.6) the claim follows.
Proposition 5.3. Let all hypotheses of Lemma 5.2 be satisfied. Then {‖·‖_z}_{z∈U}
is an ω(3 + C_e)/2-continuous family of invariant norms with the bound (5.8)
for all δz ∈ Z and z ∈ U, where κ̄ is the constant
introduced in (2.3).
Proof. From (5.3) it follows that
We estimate the additive terms on the right-hand side separately. Using Lemma 5.2
we find
Applying (5.3) and (5.5) we obtain
Hence, using
and it follows that {# z } z#U is a #(3 +C e )/2-continuous family of invariant norms.
Finally, from
z
Z
we infer (5.8).
5.2. Second invariant norm. In §5.1 we introduced an invariant norm provided
the augmentation parameter in Algorithm 1 satisfies c ≥ c̄. But in many
applications the constant c̄ is not explicitly known. Thus, L''_c(x, λ)^{-1} need not be
bounded for c ∈ [0, c̄), so that S_c(x, λ) given by (5.1) might be singular. To overcome
these difficulties we define a second invariant norm that is based on a splitting of the space
X, such that at least the coercivity of L''_0(x, λ) on ker e'(x) can be
utilized. Even though the norm thus defined can be used with c = 0, a larger value
of c may improve the global convergence properties - see [16, Section 2.3].
To begin with, let us introduce the bounded linear operator T_c(x, λ).
Lemma 5.4. For every (x, # U and c # 0 the operator T c (x, #) is an isomorphism
Proof. Let r # X be arbitrary. Then the equation T c (x,
equivalent with
Due to Remark 2.6 the operator #F c (x, #) is continuously invertible for all (x, # U
and c # 0. Thus, # is uniquely determined by (5.9), and the claim follows.
We define the bounded linear operator R c
I
# for (x, # U and c # 0. (5.10)
Note that R c (x, #) is coercive and self-adjoint. Next we introduce the invariant norm
z
Y
for z # U and (r 1 , r 2 ) T
# Z. To shorten notation, we write #R c (z) 1/2 T c (z)
the first additive term.
Proposition 5.5. For every z ∈ U the mapping given by (5.11) is an affine
invariant norm for (OS), which is equivalent to the usual norm on Z.
Proof. Let z # U be arbitrary. Since R c (z) is coercive and T c (z) is continuously
invertible, it follows that # z defines a norm which is indeed equivalent to the usual
norm on Z. Now we prove the invariance property (4.1). For (x,
By, # U we
have
I
# . (5.
Utilizing (3.3), (5.11) and (5.12) the invariance property follows.
The following proposition guarantees that {‖·‖_z}_{z∈U} is an ω-continuous family of
invariant norms for (OS).
Proposition 5.6. Suppose that there exists a constant # 0 such that
for all # Z, z # U and #z # Z such that z + #z # U . Then we have
For the proof of the previous proposition, we will use the following lemmas.
Lemma 5.7. With the assumption of Proposition 5.6 holding and z = (x, #) it
follows that
for all # ker e #
Proof. Let
Using (5.10)
and (5.11) we obtain
For all c # 0 the operator R c (z) is continuously invertible. Furthermore, R c (z) is
self-adjoint. Thus, applying (5.13) and
the second additive term on the right-hand side of (5.14) can be estimated as
Inserting this bound in (5.14) the claim follows.
Lemma 5.8. Let the assumptions of Theorem 5.6 be satisfied. Then
# z
for all r # X.
Proof. For arbitrary r # X we set # 1 , Using (5.9) and (5.13)
we estimate
# z
# z
so that the claim follows.
Proof of Proposition 5.6. Let z, z Utilizing (5.11), Lemmas 5.7 and 5.8
we find
# z
# z
and therefore
z .
Hence, {# z } z#U is a 3#/2-continuous family of invariant norms.
Remark 5.9. Note that the Lipschitz constant of the second norm does not involve
C_e and hence is independent of the choice of c. In contrast, choosing c too small may
lead to a large Lipschitz constant of the first norm and thus can affect the algorithm.
Example 5.10. Let us return to Example 3.2. Using the second norm,
the theoretically assured, affine invariant domain of convergence is shown in Figure 5.1,
to be compared with Figures 3.1 b) and d). Its shape and size are clearly more similar
to the non-invariant domain of convergence for the "better" formulation, and, by
definition, do not change when the coordinates change.
Fig. 5.1. Illustration for Examples 3.2 and 5.10. Neighborhood U(x*) (gray) and affine invariant
domain of theoretically assured convergence (white).
5.3. Computational efficiency. The affine invariance of the two norms developed
in the previous sections does not come for free: the evaluation of the norms is
more involved than the evaluation of some standard norm.
Nevertheless, the computational overhead of the first norm defined in §5.1 is
almost negligible, since it can in general be implemented by one additional matrix-vector
multiplication. It requires, however, a sufficiently large parameter c.
On the other hand, the second norm defined in §5.2 works for arbitrary c ≥ 0, but
requires one additional system solve with the same Jacobian but different right-hand
side. In case a factorization of the matrix is available, the computational overhead is
negligible - compare the CPU times of the exact Newton method in §7. If, however,
the system is solved iteratively, the additional system solve may incur a substantial
cost, in which case the first norm should be preferred.
5.4. Connection to the optimization problem. When solving optimization
problems of type (P), feasibility and optimality are the relevant quan-
tities. This is well reflected by the proposed norms ‖·‖_z. Let z = (x, λ).
Using Taylor's theorem (see [19, p. 148]) and
the continuity of L''_0, we obtain for the first norm
z
Y
Y
Y
Y
The second norm is based on the partitioning of F_c(x, λ) and
correspondingly on a splitting of the Newton correction into an optimizing direction
tangential to the constraint manifold and a
feasibility direction; for the second norm we have
z
Y
Y
Y
Y
Y
Recall that F_c(z*) = 0. Thus, in the proximity of the solution, both affine
invariant norms measure the quantities we are interested in when solving optimization
problems, in addition to the error in the Lagrange multiplier and the optimizing
direction's Lagrange multiplier component, respectively.
6. Inexact augmented Lagrangian-SQP methods. Taking discretization
errors or truncation errors resulting from the iterative solution of linear systems into
account, we have to consider inexact Newton methods, where an inner residual r^k remains
in the Newton correction (6.1).
Such inexact Newton methods have been studied in a non-affine invariant setting by
Dembo, Eisenstat, and Steihaug [4], and Bank and Rose [1].
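To fix ideas, the following sketch performs one inexact Newton step in the sense of (6.1), solving the linear system only up to a relative inner residual theta; the simple inner iteration on the normal equations is a hypothetical stand-in for the preconditioned GMRES solver used in §7.

```python
import numpy as np

# Sketch of one inexact Newton step: the linear system is solved only up to a
# relative inner residual theta.  F and dF are hypothetical callables returning the
# residual vector and its (nonsingular) Jacobian matrix.

def inexact_newton_step(z, F, dF, theta):
    A, b = dF(z), -F(z)
    dz = np.zeros_like(b)
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2         # safe step size for the inner iteration
    while np.linalg.norm(A @ dz - b) > theta * np.linalg.norm(b):
        dz = dz + alpha * A.T @ (b - A @ dz)        # inner iteration on the normal equations
    return z + dz, A @ dz - b                       # new iterate and inner residual r_k
```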
With slightly stronger assumptions than before and a suitable control of the inner
residual, a convergence theory similar to that of §4 can be established.
Note that exact affine invariance is preserved only in case the inner iteration is
affine invariant, too.
Theorem 6.1. Assume that Assumption 1 holds and that there are constants
ω > 0 together with an ω-continuous family of affine invariant norms {‖·‖_z}_{z∈U}, such
that the operator ∇F_c satisfies the Lipschitz condition (6.2)
for s, t ∈ [0, 1], z ∈ U, and δz ∈ Z such that z + δz ∈ U,
and define the level sets accordingly.
Suppose that z⁰ ∈ U and that L(z⁰) is closed. If the inner residual r^k resulting from
the inexact solution of the Newton correction (6.1) is bounded according to (6.3),
with the tolerances chosen according to the accuracy matching condition (6.4),
then the iterates stay in U and the residuals ‖F_c(z^k)‖ converge to zero as k → ∞ at the
linear rate stated in (6.5).
Proof. Analogously to the proof of Theorem 4.2, one obtains
ds (6.6)
for all # [0, 1]. Using (6.6), (6.2), (4.2), and (6.3), we find for # [0, 1]
z k ds
z k
z k .
From (6.1) and (6.3) we have
and thus, setting # in (6.7) and #F c (z k )#z k
z k and using (6.4) it follows
that
From (6.4) we have #F c (z k )#z k
If z k+1
# U , then there is some # [0, 1] such that co{z k , z k
#/(2#F c (z k )# z k , which contradicts (6.10). Thus, z k+1
# U . Furthermore, inserting
and therefore L(z k+1 ) # L(z k ) is closed.
The next corollary follows analogously to Corollary 4.4.
Corollary 6.2. If, in addition to the assumptions of Theorem 6.1, there exists
a constant C̄ > 0 such that
‖δz‖_Z ≤ C̄ ‖∇F_c(z)δz‖_z
for all δz ∈ Z and z ∈ U, then the iterates converge to the solution z*
of (OS).
For actual implementation of an inexact Newton method following Theorem 6.1
we need to satisfy the accuracy requirement (6.4). Thus, we do not only need an error
estimator for the inner iteration computing the tolerances θ_k, but also easily computable estimates
of the Lipschitz constants entering (6.4), in case no suitable theoretical values
can be derived. Setting t = 1 in (6.6), we readily obtain a computable quantity
and hence a lower bound for the Lipschitz constant.
Unfortunately, the norms involve solutions of Newton-type systems and therefore
cannot be computed exactly. Assuming the relative accuracies of evaluating the norms
are known,
we define the actually computable estimate [ω]_k.
We would like to select a θ_k such that the accuracy matching condition (6.4) is
satisfied. Unfortunately, due to the local sampling of the global Lipschitz constant
ω and the inexact computation of the norms, the estimate [ω]_k is possibly too small,
translating into a possibly too large tolerance for the inexact Newton correction. In
order to compensate for that, we introduce a safety factor ρ < 1 and require the
approximate accuracy matching condition
to hold. From Propositions 5.3
and 5.6 we infer that the second Lipschitz constant is of the same order of magnitude as ω. Thus we take the
corresponding estimate,
currently ignoring C_e when using the first norm.
Again, the convergence monitor (4.7) can be used to detect non-convergence. In
the inexact setting, however, the convergence monitor may also fail due to θ_k chosen
too large. Therefore, whenever (4.7) is violated and a reduction of θ_k is promising,
the Newton
correction should be recomputed with reduced θ_k.
Remark 6.3. If an inner iteration is used for approximately solving the Newton
equation (6.1) which provides an orthogonality relation between the computed correction and its error
in a
scalar product (·,·)_{z^k} that induces the affine invariant norm, the estimate (6.11) can
be tightened. Furthermore, the norm of
the exact Newton correction is then computationally available, which permits the construction
of algorithms that are robust even for large inaccuracies θ_k. The application
of a conjugate gradient method that is confined to the null space of the linearized
constraints [2] to augmented Lagrangian-SQP methods can be the focus of future
research.
7. Numerical experiments. This section is devoted to presenting numerical tests
for Example 2.1 that illustrate the theoretical investigations of the previous sections.
To solve (P) we apply the so-called "optimize-then-discretize" approach: we compute
an approximate solution by discretizing Algorithm 1, i.e., by discretizing the associated
system (2.6). In the context of Example 2.1 we have x = (y, u, v) and the SQP step
(δy, δu, δv, δλ). To reduce the size of the system we take
advantage of a relationship between the SQP steps δu, δv for the controls and the
SQP step δλ for the Lagrange multiplier. In fact, from the optimality conditions
we infer the relation (7.1) in each iteration
of Algorithm 1. Inserting (7.1) into (2.6) we obtain
a system only in the unknowns (δy, δλ). Note that the second Fréchet-derivative of
the Lagrangian has a simple structure in this case. The solution (δy, δu, δv, δλ) of (2.6) is
computed as follows: First we solve
#y x (-,
#y x (-,
in# ,
- z in Q,
in# ,
for δy and δλ. Then we obtain δu and
δv from (7.1). For more details we refer the reader to [18].
For the time integration we use the backward Euler scheme while the spatial
variable is approximated by piecewise linear finite elements. The programs are written
in MATLAB, version 5.3, executed on a Pentium III 550 MHz personal computer.
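As an illustration of this time-stepping strategy, the following rough sketch advances a viscous Burgers-type equation by backward Euler with a Newton solve at every step. It deliberately replaces the piecewise linear finite elements and Robin boundary conditions of the paper by a crude finite-difference analogue with reflected (Neumann-type) end points; the viscosity nu, the grid sizes, and the initial profile are hypothetical choices.

```python
import numpy as np

# Backward Euler in time, Newton's method at every time step, for y_t + y y_x - nu y_xx = 0.
nu, nx, nt, T = 0.1, 50, 50, 1.0
dx, dt = 1.0 / nx, T / nt
xs = np.linspace(0.0, 1.0, nx + 1)
y = np.where(xs < 0.5, 1.0, 0.0)          # discontinuous initial condition

def residual(y_new, y_old):
    r = (y_new - y_old) / dt
    yp = np.r_[y_new[1], y_new[:-1]]      # left neighbor, reflected at the boundary
    ym = np.r_[y_new[1:], y_new[-2]]      # right neighbor, reflected at the boundary
    r += y_new * (ym - yp) / (2 * dx) - nu * (ym - 2 * y_new + yp) / dx**2
    return r

for n in range(nt):
    y_old, y_new = y, y.copy()
    for it in range(20):                  # Newton iteration at each time step
        r = residual(y_new, y_old)
        if np.linalg.norm(r) < 1e-10:
            break
        Jm = np.empty((y.size, y.size))   # Jacobian by finite differences (sketch only)
        eps = 1e-7
        for j in range(y.size):
            yj = y_new.copy(); yj[j] += eps
            Jm[:, j] = (residual(yj, y_old) - r) / eps
        y_new = y_new - np.linalg.solve(Jm, r)
    y = y_new
```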
Run 7.1 (Neumann control). In the first example we choose
The grid is given by
50 for
50 for
To solve (2.1) in the uncontrolled case we apply the Newton method at each time step. The
algorithm needs one second of CPU time. The value of the cost functional is 0.081.
Now we turn to the optimal control problem. In view of the choice of the desired state z and the nonlinear
convection term yy_x in (2.1b) we can interpret this problem as determining u in such
a way that it counteracts the uncontrolled dynamics, which smoothes the discontinuity
and transports it to the left as t increases. The discretization of (7.2) leads
to an indefinite system
# . (7.3)
As starting values for Algorithm 1 we take y
Fig. 7.1. Run 7.1: residuum t ↦ ‖y(t,·) − z(t,·)‖_{L²(Ω)} and optimal controls u*(t) and v*(t).
Table 7.1. Run 7.1-(i): decay of ‖F_c(z^{k+1})‖_{z^k} for the first norm.
(i) First we solve (7.3) by an LU-factorization (MATLAB routine lu) so that
the theory of §4 applies. According to §4 we stop the SQP iteration if (7.4) holds.
In case ‖F_c(z⁰)‖_{z⁰} is very large, the factor 10⁻³ on the right-hand side of (7.4) might
be too big. To avoid this situation Algorithm 1 is terminated if (7.4) and, in addition, a second criterion
hold. The augmented Lagrangian-SQP method stops after four iterations. The CPU
times for different values of c can be found in Tables 7.6 and 7.7. Let us mention that
for a larger value of c the algorithm needs 102.7 seconds and for a still larger value we observe divergence
of Algorithm 1. As was proved in [15], the set of admissible starting values shrinks
as c increases. The value of the cost functional is 0.041. In Figure 7.1 the
residuum t ↦ ‖y(t,·) − z(t,·)‖_{L²(Ω)} for the solution of (2.1) in the uncontrolled case as well
as for the optimal state is plotted. Furthermore, the optimal controls are presented.
The decay of ‖F_c(z^{k+1})‖_{z^k}, for the first invariant norm given by (5.3)
and for different values of c, is shown in Table 7.1. Recall that the invariant norm is
only defined for c ≥ c̄. Unfortunately, the constant c̄ ≥ 0 is unknown. We proceed as
follows: Choose a fixed value for c and compute the coercivity estimate
Table 7.2. Run 7.1-(i): values of the coercivity estimates for different c.
Table 7.3. Run 7.1-(i): decay of ‖F_c(z^{k+1})‖_{z^k} for the second norm.
at each level of the SQP iteration. Whenever this estimate is greater than zero, we have
coercivity in the direction of the SQP step. Otherwise, c needs to be increased. In
Table 7.2 we present the values of these estimates. We observed numerically that they are positive
and increase if c is increased.
Next we tested the second norm introduced in (5.11). Again, the augmented
Lagrangian-SQP method stops after four iterations and needs 97.4 seconds of CPU time.
Thus, both invariant norms lead to a similar performance of Algorithm 1. The decay
of ‖F_c(z^{k+1})‖_{z^k} can be found in Table 7.3.
(ii) Now we solve (7.3) by an inexact generalized minimum residual (GMRES)
method (MATLAB routine gmres). As a preconditioner for the GMRES method we
took an incomplete LU-factorization of the matrix,
utilizing the MATLAB function luinc(D,1e-03). Here, the matrix P is the discretization
of the heat operator y_t − νy_xx with homogeneous Robin boundary
conditions. The same preconditioner
is used for all Newton steps.
We chose # In -6 we introduced estimators for the constants # and
#, denoted by [#] k and [#] k , respectively. Thus, for k # 0 we calculate [#] k and [#] k ,
and then we determine # k+1 as follows:
while
do
For the first norm, ‖F_c(z^k)‖_{z^k} is
already determined by the computation of the previous Newton correction. In case of the second norm, ‖F_c(z^k)‖_{z^k} has to be calculated
with a given tolerance. In our tests we take this tolerance fixed for all k ≥ 0.
We test four strategies for the choice of the inner tolerances θ_k.
It turns out that for one of these choices
we obtain the best performance with respect to CPU times. Hence, in the following
Table 7.4. Run 7.1-(ii): decay of ‖F_c(z^k)‖_{z^k} for the first norm.
Table 7.5. Run 7.1-(ii): values of the estimates [ω]_k.
test examples we take this choice of θ_k.
The decay of ‖F_c(z^k)‖_{z^k} is presented in Table 7.4. Algorithm 1 stops after at most
seven iterations. Let us mention that for the tested values of c the estimates for
the coercivity constant are positive. In particular, for one of these values the augmented
Lagrangian-SQP method has the best performance. In Table 7.5 the values of the
estimators [ω]_k are presented. In Table 7.6 the CPU times for the first norm are
presented. It turns out that the performance of the inexact method does not change
significantly for different values of θ_k. Since we have to solve an additional linear
system at each level of the SQP iteration in order to compute the second norm, the
first norm leads to a better performance of the inexact method with respect to the
CPU time. Compared to part (i) the CPU time is reduced by about 50% if one
takes the first norm. In case of the second norm the reduction is about 45%; compare Table
7.7. Finally we test the inexact method
using decreasing θ_k. It turns out
that this strategy speeds up the inexact method for both norms, as can be expected
from the theoretical complexity model developed in [7].
Run 7.2 (Robin control). We choose
-10 in # 0, T
and y
The desired state was taken to be z(t,
(i) First we again solve (7.3) by an LU-factorization. We take the same starting
values and stopping criteria as in Run 7.1. The augmented Lagrangian-SQP method
stops after four iterations and needs 105 seconds of CPU time. The discrete optimal
solution is plotted in Figure 7.2. From Table 7.8 it follows that (4.7) is satisfied
Table 7.6. Run 7.1-(ii): CPU times in seconds for the first norm (exact method: 97.5, 96.8, 96.9).
Table 7.7. Run 7.1-(ii): CPU times in seconds for both norms (exact method: 97.5 for the first norm, 97.4 for the second norm).
numerically. Let us mention that the coercivity estimates for k = 0, ..., 3 are positive for the tested values of c.
For the needed CPU times we refer to Tables 7.10 and 7.11.
(ii) Now we solve (7.3) by an inexact GMRES method. As a preconditioner we
take the same as in Run 7.1. The decay of ‖F_c(z^k)‖_{z^k} is
presented in Table 7.9. As in part (i) we find that the coercivity estimates are positive for all test runs. The needed
CPU times are shown in Tables 7.10 and 7.11. As we can see, the inexact augmented
Lagrangian-SQP method with GMRES is much faster than the exact one using the
LU-factorization. For the first norm the CPU time is reduced by about 55%, and
for the second norm by about 50% for θ_k ∈ {0.3, 0.4, 0.5, 0.6, 0.7}. Moreover, for our
example there is a best choice for c. For smaller values of θ_k the method does
not speed up significantly. As in Run 7.1 we test the inexact method using decreasing
θ_k. As in Run 7.1, this
strategy speeds up the inexact method significantly for both norms. The reduction is
about 9% compared to the CPU times for fixed θ_k; compare Table 7.11.
Table 7.8. Run 7.2-(i): decay of ‖F_c(z^{k+1})‖_{z^k} for different c.
Fig. 7.2. Run 7.2: optimal state y*(t, x) and optimal controls u*(t) and v*(t).
Table 7.9. Run 7.2-(ii): decay of ‖F_c(z^k)‖_{z^k}.
Table 7.10. Run 7.2-(ii): CPU times in seconds for the first norm (exact method: 105.1, 105.7, 105.7).
Table 7.11. Run 7.2-(ii): CPU times in seconds for both norms (exact method: 105.1 for the first norm, 105.5 for the second norm).
--R
Global approximate newton methods
A subspace cascadic multigrid method for Mortar elements.
Mathematical Analysis and Numerical Methods for Science and Technology
Newton Methods for Nonlinear Problems.
Local inexact Newton multilevel FEM for nonlinear elliptic problems
Finite Element Methods for Navier-Stokes Equations
Inexact Gauss Newton Methods for Parameter Dependent Nonlinear Problems
Augmented Lagrangian-SQP-methods in Hilbert spaces and application to control in the coe#cient problems
Optimization by Vector Space Methods
First and second-order necessary and su#cient optimality conditions for infinite-dimensional programming problems
Iterative solution of nonlinear equations in several variables
Nonlinear Functional Analysis and its Applications
--TR
--CTR
Anton Schiela , Martin Weiser, Superlinear convergence of the control reduced interior point method for PDE constrained optimization, Computational Optimization and Applications, v.39 n.3, p.369-393, April 2008
S. Volkwein, Lagrange-SQP Techniques for the Control Constrained Optimal Boundary Control for the Burgers Equation, Computational Optimization and Applications, v.26 n.3, p.253-284, December | affine invariant norms;burgers' equation;nonlinear programming;multiplier methods |
587626 | On Reachability Under Uncertainty. | The paper studies the problem of reachability for linear systems in the presence of uncertain (unknown but bounded) input disturbances that may also be interpreted as the action of an adversary in a game-theoretic setting. It defines possible notions of reachability under uncertainty emphasizing the differences between reachability under open-loop and closed-loop control. Solution schemes for calculating reachability sets are then indicated. The situation when observations arrive at given isolated instances of time leads to problems of anticipative (maxmin) or nonanticipative (minmax) piecewise open-loop control with corrections and to the respective notions of reachability. As the number of corrections tends to infinity, one comes in both cases to reachability under nonanticipative feedback control. It is shown that the closed-loop reach sets under uncertainty may be found through a solution of the forward Hamilton--Jacobi--Bellman--Isaacs (HJBI) equation. The basic relations are derived through the investigation of superpositions of value functions for appropriate sequential maxmin or minmax problems of control. | Introduction
Recent developments in real-time automation have promoted new interest in the reachability
problem-the computation of the set of states reachable by a controlled process through
available controls. Being one of the basic problems of control theory, it was studied from
the very beginning of investigations in this field (see [18]). The problem was usually studied
in the absence of disturbances, under complete information on the system equations and
the constraints on the control variables. It was shown, in particular, that the set of states
reachable at given time t under bounded controls is one and the same, whether one uses
open-loop or closed-loop (feedback) controls. It was also indicated that these "reachability
sets" could be calculated as level sets for the (perhaps generalized) solutions to a "forward"
Hamilton-Jacobi-Bellman equation [18], [19], [3], [15], [17].
However, in reality the situation may be more complicated. Namely, if the system is subject
to unknown but bounded disturbances, it may become necessary to compute the set of states
reachable despite the disturbances or, if exact reachability is impossible, to find guaranteed
errors for reachability.
These questions have implicitly been present in traditional studies on feedback control under
uncertainty for continuous-time systems, [10], [28], [4], [9], [12]. They have also appeared
in studies on hybrid and other types of transition systems [1], [29], [21], [5].
This leads us to the topic of the present paper which is the investigation of reachability
under uncertainty for continuous-time linear control systems subjected to unknown input
disturbances, with prespecified geometric (hard) bounds on the controls and the unknowns.
The paper indicates various notions of reachability, studies the properties of respective reach
sets and indicates routes for calculating them.
The first question here is to distinguish, whether reachability under open-loop and closed-loop
controls yield the same reach sets. Indeed, since closed-loop control is based on better
information, namely, on the possibility of continuous on-line observations of the state space
variable (with no knowledge of the disturbance), it must produce, generally speaking, a
result which is at least "not worse," for example, than the one by an open-loop control
which allows no such observations, but only the knowledge of the initial state, with no
knowledge of the disturbance. An open-loop control of the latter type is further referred to
as "nonanticipative."
However, there are many other possibilities of introducing open-loop or piecewise open-loop
controls, with or without the availability of some type of isolated on-line measurements
of the state space variable, as well as with or without an "a priori" knowledge of the
disturbance. Thus, in order to study the reachability problem in detail, we introduce a
hierarchy of reachability problems formulated under an array of different "intermediate"
information conditions. These are formulated in terms of some auxiliary extremal problems
of the maxmin or minmax type.
Starting with open-loop controls, we first distinguish the case of anticipative control from
nonanticipative control. The former, for example, is when a reachable set, from a given
initial state x 0 , at given time , is defined as the set
of such states
x, that for any admissible disturbance given in advance, for the whole interval under con-
sideration, there exists an admissible control that steers the system to a -neighborhood
g. Here the respective auxiliary extremal problem is of the
maxmin type.(Maximum in the disturbance and minimum in the control). On the other
hand, for the latter the disturbance is not known in advance. Then the reachability set
from a given initial state is defined as the set X
of such states x whose
-neighborhoods B (x) may be reached with some admissible control, one and the same for
all admissible disturbances, whatever they be. Now the respective auxiliary problem is of
the minmax type.
It is shown that always X
and that the closed-loop reach set X
attained under nonanticipative, but feedback control lies in between, namely,
There also are some intermediate situations when the observations of the state space variable
arrive at given N isolated instants of time. In that case one has to deal with reachability
under possible corrections of the control at these N time instants. Here again we distinguish
between corrections implemented through anticipative control (when the future disturbance
is known for each time interval in between the corrections) and nonanticipative control,
when it is unknown. The respective extremal problems are of sequential maxmin and
types accordingly and the controls are piecewise open-loop: at isolated time instants
of correction comes information on the state space variable, while in between these the
control is open-loop (either anticipative or not). Both cases produce respective sequences
reach sets". The
relative positions of the reach sets in the hierarchical scheme are as follows
Finally, in the limit, as the number of corrections N tends to infinity, both sequences of
reachability sets converge to the closed-loop reach set 1 .
The adopted scheme is based on constructing superpositions of value functions for open-loop
control problems. In the limit these relations reflect the Principle of Optimality under
set-membership uncertainty. This principle then allows one to describe the closed loop reach
set as a level set for the solution to the forward HJBI (Hamilton-Jacobi-Bellman-Isaacs)
equation. The final results are then presented either in terms of value functions for this
equation or in terms of set-valued relations.
Schemes of such type have been used in synthesizing solution strategies for differential games
and related problems, and were constructed in backward time, [23], [11], [27], [28].
The topics of this paper were motivated by applied problems and also by the need for a
theoretical basis for further algorithmic schemes.
As indicated in the sequel, this is true when all the sets involved are nonempty and when the problems
satisfy some regularity conditions.
1 System dynamics. Reachability under open loop
controls
In this section we introduce the system under consideration and define two types of open-loop
reachability sets. Namely, we discuss reachability under unknown but bounded disturbances
for the system
ẋ = A(t)x + B(t)u + C(t)v, t₀ ≤ t ≤ τ, (1)
with continuous matrix coefficients A(t), B(t), C(t). Here x ∈ IR^n is the state and u ∈ IR^p is
the control that may be selected either as an open loop control OLC-a Lebesgue-measurable
function of time t, restricted by the inclusion
u(t) ∈ P(t), (2)
or as a closed-loop control CLC-a set-valued strategy
U(t, x) ⊆ P(t). (3)
Here v ∈ IR^q is the unknown input disturbance with values
v(t) ∈ Q(t), (4)
where P(t), Q(t) are set-valued continuous functions with convex compact values.
The class of OLCs u(·) bounded by inclusion (2) is denoted by U_O and the class of input
disturbances v(·) bounded by (4) as V_O. The strategies U are taken to be in U_C-the
class U_C of CLCs that are multivalued maps U(t, x) bounded by the inclusion (3), which
guarantee the existence of solutions to equation (1) (which now turns into a differential
inclusion) for any Lebesgue-measurable function v(·).
We distinguish two types of open loop reach sets-the maxmin type and the minmax type.
As we will see in the next Section, the names maxmin and minmax assigned to these sets
are due to the underlying optimization problems used for their calculation.
Definition 1.1 An open loop reach set (OLRS) of the maxmin type (from set X⁰,
at time τ) is the set X⁻(τ, t₀, X⁰) of vectors x such that for every disturbance
v(t) ∈ Q(t), t₀ ≤ t ≤ τ, there exist an initial state x⁰ ∈ X⁰ and an OLC u(t) ∈ P(t) which steer the
trajectory from state x(t₀) = x⁰ to x[τ] = x. (5)
The set X⁰ is assumed convex and compact.
2 For example, the class of set-valued functions with values in compIR n , upper semicontinuous in x and
continuous in t.
If the set X⁻(τ, t₀, X⁰) turns out to be empty, one may introduce the open loop ε-reachable set X⁻_ε(τ, t₀, X⁰),
defined as in Definition 1.1 except that (5) is replaced by x[τ] ∈ B_ε(x).
Here B_ε(x)
is the ball of radius ε with center x.
Thus the OLRS of the maxmin type is the set of points x ∈ IR^n that can be
reached, for any disturbance v(t) ∈ Q(t) given in advance, for the whole interval t₀ ≤ t ≤ τ,
from some point x(t₀) ∈ X⁰.
The open loop ε-reach set is the set of points x ∈ IR^n whose ε-neighborhood
may be reached, for any disturbance v(t) given in advance, through some x(t₀) ∈ X⁰.
By taking 0 large enough, we may assume
to be the unique trajectory corresponding to x(t 0
u(\Delta) and disturbance v(\Delta). Then
is the reach set in the variable u(\Delta) 2 UO (at time t from set X 0 ) with fixed disturbance
input v(\Delta).
Lemma 1.1 X⁻(τ, t₀, X⁰) = ∩ { X(τ, t₀, X⁰ | v(·)) : v(·) ∈ V_O }. (6)
This formula follows from Definition 1.1. Recall the definition of the geometrical
difference P ∸ Q of sets P, Q:
P ∸ Q = { x : x + Q ⊆ P }.
Then directly from (1) one gets
X⁻(τ, t₀, X⁰) = ( S(t₀, τ)X⁰ + ∫_{t₀}^{τ} S(s, τ)B(s)P(s) ds ) ∸ ∫_{t₀}^{τ} S(s, τ)(−C(s))Q(s) ds. (7)
Here S(s; t) stands for the matrix solution of the adjoint equation
In other words, the set X⁻(τ, t₀, X⁰)
is the geometric difference of two "ordinary" reach sets, namely, the set
taken from X⁰ at time t₀ and calculated in the variable u, with v(t) ≡ 0, and the (negated) set
taken from x(t₀) = 0 and calculated in the variable v, with u(·) ≡ 0.
This simple geometrical interpretation is of course due to the linearity of (1).
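For intuition, the following small sketch evaluates the structure of formula (7) for the hypothetical scalar system dx/dt = u + v with |u| ≤ mu and |v| ≤ nu, where every set is an interval and the geometric difference has a closed form; none of these data come from the paper.

```python
# Illustrative sketch: maxmin open-loop reach set of dx/dt = u + v, |u| <= mu, |v| <= nu,
# as the geometric (Minkowski) difference of the control-only reach set and the
# (negated) disturbance integral.  Every set is an interval (lo, hi).

def geometric_difference(A, B):
    """A -. B = {x : x + B is contained in A}; the empty set is returned as None."""
    lo, hi = A[0] - B[0], A[1] - B[1]
    return (lo, hi) if lo <= hi else None

def maxmin_olrs(x0, mu, nu, horizon):
    X_u = (x0 - mu * horizon, x0 + mu * horizon)     # reach set under u only, v = 0
    V = (-nu * horizon, nu * horizon)                # integral of the disturbance bound
    return geometric_difference(X_u, V)

print(maxmin_olrs(x0=0.0, mu=2.0, nu=1.0, horizon=1.0))   # -> (-1.0, 1.0)
print(maxmin_olrs(x0=0.0, mu=1.0, nu=2.0, horizon=1.0))   # -> None (empty)
```

The second call returns None because the disturbance outweighs the control, illustrating that the maxmin OLRS may indeed be empty.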
For the ε-reachable set, we have the following lemma.
Lemma 1.2 The set X⁻_ε(τ, t₀, X⁰) may be expressed by formula (8), and
also in an equivalent integral form.
Remark 1.1. Definition (8) of X⁻_ε may also be rewritten in an equivalent form.
We now define another class of open-loop reach sets under uncertainty-the OLRS of the
minmax type.
Definition 1.2 An open loop ε-reach set (OLRS) of the minmax type (from set X⁰,
at time τ) is the set X⁺_ε(τ, t₀, X⁰) of vectors x for each of which there
exists a control u(t) ∈ P(t) that assigns to each v(t) ∈ Q(t) a vector x⁰ ∈ X⁰ such that the
respective trajectory ends in x[τ] ∈ B_ε(x).
Thus the ε-OLRS of minmax type consists of all x whose ε-neighborhood B_ε(x) contains
the states x[τ] generated by system (1) under some control u(t) ∈ P(t) and all v(t) ∈ Q(t),
with x⁰ ∈ X⁰ selected depending on u, v.
A reasoning similar to the above leads to the following lemma.
Lemma 1.3 The set X⁺_ε(τ, t₀, X⁰) may be expressed by an analogous set-valued formula.
It usually turns out that the sets X⁺_ε and X⁻_ε are different.
Remark 1.2. Definition 1.2 may be rewritten in an equivalent form.
Direct calculation, based on the properties of set-valued operations, allows one to conclude the
following.
Lemma 1.4 If the sets X⁻_ε and X⁺_ε are both nonempty for some ε > 0,
we have X⁺_ε(τ, t₀, X⁰) ⊆ X⁻_ε(τ, t₀, X⁰).
We shall now calculate the open-loop reach sets defined above, using the techniques of
convex analysis ([25], [12], [15]).
2 The calculation of open-loop reach sets
Here we shall calculate the two basic types of open-loop reach sets. The relations of this
section will also serve as the basic elements further constructions which will be produced
as some superposititions of the relations of this section.
The calculations of this section and especially of later sections related to reachability under
feedback control require a number of rather cumbersome calculations of geometrical
(Minkowski) differences and their support functions. In order to simplify these calculations
we transform system (1) to a simpler form. Applying the standard state transformation that removes the
matrix A(t), and keeping the previous notations without loss of generality, we come
to the system
ẋ = B(t)u + C(t)v, t₀ ≤ t ≤ τ, (10)
with the same constraints on u, v as before. For equation (10) consider the following two
problems:
Problem (I) Given a set X 0 and x 2 IR n , find
min u
min
under conditions
Problem (II) Given a set X 0 and x 2 IR n , find
min
under conditions
Here
and G is a closed set in IR n . Thus
where h₊(Q, G) is the Hausdorff semidistance between compact sets Q, G, defined as
h₊(Q, G) = max{ min{ ‖q − z‖ : z ∈ G } : q ∈ Q }.
The Hausdorff distance is h(Q, G) = max{ h₊(Q, G), h₊(G, Q) }.
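The Hausdorff semidistance is easy to evaluate for finite point clouds; the sketch below is a purely illustrative Python implementation for sampled sets.

```python
import numpy as np

# h+(Q, G) = max_{q in Q} min_{g in G} ||q - g|| for finite point clouds (rows of the arrays).

def hausdorff_semidistance(Q, G):
    D = np.linalg.norm(Q[:, None, :] - G[None, :, :], axis=2)   # pairwise distances
    return D.min(axis=1).max()

def hausdorff_distance(Q, G):
    return max(hausdorff_semidistance(Q, G), hausdorff_semidistance(G, Q))

Q = np.array([[0.0, 0.0], [1.0, 0.0]])
G = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff_semidistance(Q, G))   # 1.0: every point of Q is within 1 of G
print(hausdorff_distance(Q, G))       # 2.0
```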
In order to calculate the function explicitly, we use the relations
and (see [10], [15] for the next formula)
where ρ(l | G) = sup{ ⟨l, x⟩ : x ∈ G }
is the support function of G [15]. (For compact G, sup may be substituted by max.)
We thus need to calculate
min u
min
which gives, after an application of (11), and an interchange of min u ; min x() and max l (see
Z
Due to (11), the last formula says simply that V \Gamma is given by
where
Z
B(t)P(s)ds
Z
(\GammaC(s))Q(s)ds;
It then follows that
and so (12) implies that x 2
Z
This gives, from the definitions of support function and geometrical difference,
ae
l
Z
B(s)P(s)ds
Z
(\GammaC(s)Q(s))ds
which, interpreted as integrals of multivalued functions, again results in (14).
Theorem 2.1 The set X⁻_ε(τ, t₀, X⁰) is given by (14), with its support function as computed above.
It is clear that X⁻(τ, t₀, X⁰) is nonempty if the difference
∫_{t₀}^{τ} B(s)P(s) ds ∸ ∫_{t₀}^{τ} (−C(s))Q(s) ds
is nonempty.
Note that function may be also defined as the solution to
Problem
min u
min
Direct calculations then produce the formula
which gives the same result as Problem (I).
Similarly, we may calculate
min
Taking into account the minimax theorem of [7] and the fact that
l
l
we come to
l
Z
ae(ljB(s)P(s))ds
Z
ae(\GammaljC(s)Q(s))ds:
Here (conc h)(l) is the closed concave hull of h(l). Note that
(conv h)(l) is the closed convex hull and also the Fenchel second conjugate
h**(l) of h(l) (see [25], [12] for the definitions).
Therefore
where
Z
(\GammaC(s))Q(s)ds
Z
B(s)P(s)ds:
It then follows that
Similarly, (18) implies that
Z
so that the support function
l
Z
B(s)P(s)ds
l
Z
(\GammaC(s))Q(s)ds
Theorem 2.2 The set X⁺_ε(τ, t₀, X⁰) is given by (20) and its support function by (22).
It can be seen from (22) that X⁺_ε(τ, t₀, X⁰) may be empty. At the same time, in order
that it be nonempty it is sufficient that
B_ε(0) ∸ ∫_{t₀}^{τ} (−C(s))Q(s) ds ≠ ∅,
which holds for ε > 0 sufficiently large.
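As a quick check of the structure of these formulas, consider the hypothetical scalar case ẋ = u + v, |u| ≤ μ, |v| ≤ ν, X⁰ = {0}, with σ = τ − t₀ and μ ≥ ν; all sets are then intervals, and the following display mirrors the form of (14) and (20)-(21):

```latex
\[
X^{-}(\tau,t_0,\{0\})
  = \int_{t_0}^{\tau}\!P(s)\,ds \;\dot{-}\; \int_{t_0}^{\tau}\!(-Q(s))\,ds
  = [-\mu\sigma,\mu\sigma]\;\dot{-}\;[-\nu\sigma,\nu\sigma]
  = [-(\mu-\nu)\sigma,\;(\mu-\nu)\sigma],
\]
\[
X^{+}_{\varepsilon}(\tau,t_0,\{0\})
  = \int_{t_0}^{\tau}\!P(s)\,ds
    + \Bigl(B_{\varepsilon}(0)\;\dot{-}\;\int_{t_0}^{\tau}\!(-Q(s))\,ds\Bigr)
  = \begin{cases}
      [-(\mu\sigma+\varepsilon-\nu\sigma),\;\mu\sigma+\varepsilon-\nu\sigma], & \varepsilon\ge\nu\sigma,\\[2pt]
      \emptyset, & \varepsilon<\nu\sigma.
    \end{cases}
\]
```

Thus the maxmin set is nonempty already for ε = 0, while the minmax set requires ε ≥ νσ, in line with the observation that X⁺_ε may be empty unless ε is sufficiently large; for this symmetric scalar example the two sets otherwise coincide, whereas in general only the inclusion of Lemma 1.4 holds.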
It is worth mentioning that a minmax OLRS may also be specified through an alternative
definition.
Definition 2.1 An open loop ε-reach set (OLRS) of the minmax type (from set X⁰,
at time t₀) is the union, over all admissible controls, of the sets reachable despite the disturbance,
where each such set is obtained
for some u(·) ∈ U_O with x(t₀) ∈ X⁰ given.
This leads to Problem (II*). Given set X⁰ and vector x ∈ IR^n, find
the corresponding minmax value
under conditions x(t₀) ∈ X⁰, u(t) ∈ P(t), v(t) ∈ Q(t).
Direct calculations here lead to a formula giving
the same result as Problem (II).
The equivalence of Problems (II), (II*) means that Definitions 1.2 and 2.1 both lead to the
same set X⁺_ε(τ, t₀, X⁰). As we shall see, this is not so for the problem of reachability with
corrections. A similar observation holds for Problems (I), (I*).
Remark 2.1. For the case that X⁰ is a singleton, one should recognize the following.
The OLRS of the maxmin type is the set of points reachable at time τ from a given point
x⁰ for any disturbance v(·) ∈ V_O, provided the function v(t), t₀ ≤ t ≤ τ, is communicated to the
controller in advance, before the selection of the control u(t). As mentioned above, the control
u(·) is then selected through an anticipative control procedure.
On the other hand, for the construction of the ε-reach set of the minmax type there is no
information provided in advance on the realization of v(·), which becomes known only after
the selection of u. Indeed, given the point x⁰, one has to select the control u(t) for the
whole time interval t₀ ≤ t ≤ τ, whatever be the unknown v(t) over the same interval. The
control u(·) is then selected through a nonanticipative control procedure. Such a definition
allows one to specify an OLRS as consisting of points x each of which is complemented by a
neighborhood B_ε(x) containing the endpoint x[τ]
for a certain control u(·) ∈ U_O. This requires ε > 0 to be sufficiently large.
As a first step towards reachability under feedback, we consider piecewise open-loop controls
with possibility of corrections at fixed instants of time.
3 Piecewise open-loop controls: reachability with
corrections
Here we define and calculate reachability sets under a finite number of corrections. This is
done either through the solution of problems of sequential maxmin and minmax type or through
operations on set-valued integrals.
Taking a given instant of time t̄ that divides the interval T into two parts, namely
T₁ = [t₀, t̄] and T₂ = [t̄, τ], consider the following sequential maxmin problem.
Problem (I₁). First find
min u
min
and then find
min u
min
The latter is a problem on finding a sequential maxmin with one "point of correction"
. Using the notation [1;
Let us find
using the technique of convex analysis. According
to section 2, (see (11)), we have
(ae(ljB(s)P(s))\Gammaae(\GammaljC(s)Q(s))dsj(l; l) 1g
where
B(s)P(s)ds
(\GammaC(s))Q(s))ds:
Substituting this in (24), we have
min u
min
l
Z
Continuing the calculation, we come to
l
Z
(ae(ljB(s)P(s))ds
Z
where
(ae(ljB(s)P(s))ds \Gamma ae(\GammaljC(s)Q(s)))ds:
is the support function of the set
B(s)P(s)ds
(\GammaC(s)Q(s))ds:
Together with (25) this allows us as in Section 2, to express V \Gamma(; x; [1; 2]) as
where
"'
B(s)P(s)ds
(\GammaC(s))Q(s)ds
Z
B(s)P(s)ds
Z
(\GammaC(s))Q(s))ds:
Formula (26) shows that
(defined as the level set of
1 ) is also the
reach set with one correction. In particular, X \Gamma(; t consists of all states x that
may be reached for any function v(\Delta) 2 VP , whose values are communicated in two stages,
through two consecutive selections of some open-loop control u(t) according to the following
scheme.
Stage (1): given at time t 0 are the initial state x 0 and function v(t) for ,-select at
time t 0 the control u(t) for
Then at instant of correction t additional information for stage (2).
Stage (2): given at time t are the state x(t ) and function v(t) for ,-select at time
the control u(t) for
This proves Theorem 3.1.
Theorem 3.1 The set
is the maxmin OLRS with one correction at the instant t̄ and is given by formula (26).
We refer to this set
as the maxmin OLRS with one correction at the instant t̄.
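Continuing the hypothetical scalar interval example used earlier, the sketch below evaluates the alternating construction behind formula (26): a Minkowski sum over each subinterval for the control followed by a geometric difference for the disturbance; all data are again illustrative, not taken from the paper.

```python
# Maxmin reach set with corrections for dx/dt = u + v, |u| <= mu, |v| <= nu,
# built by alternating interval sum (control) and geometric difference (disturbance).

def geometric_difference(A, B):
    lo, hi = A[0] - B[0], A[1] - B[1]
    return (lo, hi) if lo <= hi else None

def add_interval(A, B):
    return (A[0] + B[0], A[1] + B[1])

def maxmin_with_corrections(x0, mu, nu, breakpoints):
    """breakpoints = [t0, t1, ..., tau]; one alternation per subinterval."""
    X = (x0, x0)
    for a, b in zip(breakpoints[:-1], breakpoints[1:]):
        h = b - a
        X = add_interval(X, (-mu * h, mu * h))            # + integral of B P ds
        X = geometric_difference(X, (-nu * h, nu * h))    # -. integral of (-C Q) ds
        if X is None:
            return None
    return X

print(maxmin_with_corrections(0.0, 2.0, 1.0, [0.0, 1.0]))        # no correction
print(maxmin_with_corrections(0.0, 2.0, 1.0, [0.0, 0.5, 1.0]))   # one correction at t = 0.5
```

In this symmetric example the sets with and without correction coincide (compare Remark 3.2 on matching conditions); in general the two constructions differ, since anticipation of the disturbance is restricted to shorter horizons when corrections are allowed.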
The two-stage scheme may be further propagated to the class of piecewise open-loop controls
with k corrections. Taking the interval T = [t₀, τ], introduce a partition
Σ_k = {τ₁, ..., τ_k},
so that the interval T is now divided into k + 1 subintervals,
where τ₁, ..., τ_k
are the points of correction.
Consider also a nondecreasing continuous function (t) 0; denoting
and also
Problem
Solve the following consecutive optimization problems.
Find
min u
min
then find
min u
min
then consecutively, for
min u
min
and finally
min u
min
Direct calculation gives
with
B(s)P(s)ds
(\GammaC(s))Q(s)ds;
then
with
"'
B(s)P(s)ds
(\GammaC(s)Q(s)ds
B(s)P(s)ds
(\GammaC(s)Q(s)ds;
then consecutively
with
B(s)P(s)ds
(\GammaC(s))Q(s))ds
B(s)P(s)ds
(\GammaC(s))Q(s)ds
and finally
where
(0)+
B(s)P(s)ds
(\GammaC(s))Q(s))ds
B(s)P(s)ds
(\GammaC(s))Q(s)ds
Z
B(s)P(s)ds
Z
(\GammaC(s))Q(s)ds
We refer to X⁻(τ, t₀, X⁰ | τ₁, ..., τ_k)
as the maxmin OLRS with k corrections at the points τ₁, ..., τ_k.
Theorem 3.2 The set X⁻(τ, t₀, X⁰ | τ₁, ..., τ_k)
is given by formula (29).
We denote
and also introduce additional notations for the functions
emphasizing the dependence of
on the initial condition
We further assume
Note that the number of nodes j in any partition \Sigma k is k 1. The
partition applied to a function V k is precisely \Sigma k . Consequently, the increment
is presented as a sum of k once it is applied to a function V k with
index k.
A sequence of partitions Σ_k is monotone in k if for every k the partition Σ_{k+1} contains all
the nodes τ_j of the partition Σ_k.
Theorem 3.3 Given are a monotone sequence of partitions \Sigma
continuous nondecreasing function (t) 0; that generates for any partition \Sigma k
a sequence of numbers
Given also are a sequence of value functions
each of which is formed by
the partition \Sigma k and a sequence is the index of
Then the following relations are true.
(i) For any fixed ; x, one has
(ii) For any fixed ; x and index i 2 [1; k] one has
(iii) The following inclusions are true for
where the sets
are defined by (30).
The proofs are based on the following properties of the geometrical (Minkowski) sums and
differences of sets
and the fact that in general a maxmin does not exceed a minmax. Direct calculations
indicate that the following superpositions will also be true.
Lemma 3.1 The functions
k satisfy the following property
This follows from Theorem 3.2 and the definitions of the respective functions
Remark 3.1 Formula (34) reflects a semigroup property, but only for the selected points of
correction τ₁, ..., τ_k.
The reasoning above indicates, for example, that the maxmin OLRS with k corrections
is the set of states that
may be reached for any function v(·) ∈ V_O, whose values are communicated in advance in
stages, through consecutive selections of some open-loop control u(t) according to
the following scheme.
Stage (1): given at time t₀ are the initial state x⁰ and the function v(t) on the first subinterval; select at time
t₀ the control u(t) on that subinterval.
Then at the instant of correction τ_j comes additional information for stage (j + 1).
Stage (j), (j = 2, ..., k): given at time τ_j are the state x(τ_j) and the function v(t) on the next subinterval;
select at time τ_j the control u(t) for t ∈ T_{j+1}.
Remark 3.2. There is a case when all the functions V⁻_k,
taken for all the integers k,
coincide. This is when system (10) satisfies the so-called matching conditions.
We now pass to the problem of sequential minmax, with one correction at the instant t̄, t₀ < t̄ < τ,
using the notations of Problem (I₁). This is
Problem (II₁). First find
min
then find
min
The latter is a problem of finding a sequential minmax with one point of correction
Denoting
let us find X +(; t using the techniques of convex analysis (as
above, with obvious changes).
This gives
where
(\GammaC(s))Q(s)ds
B(s)P(s)ds:
Continuing the calculations, we have
min
where
Z
(\GammaC(s))Q(s)ds
Z
B(s)P(s)ds:
This proves Theorem 3.4
Theorem 3.4 The set X⁺_ε(τ, t₀, X⁰ | t̄)
is the minmax OLRS with one correction at the instant t̄.
Here the problem is again solved in two stages, according to the following scheme.
Stage (1): given at time t₀ are the set X⁰ and x ∈ IR^n. Select a control u(t) (one and the same
for all v) and for each v(t), t ∈ T₁, assign a vector x(t₀) ∈ X⁰ that jointly with u, v produces
the state x(t̄).
Then at the instant of correction t̄ comes additional information for stage (2).
Stage (2): given at time t̄ are x(t̄) and the vector x ∈ IR^n. Select a control u(t), t ∈ T₂ (one
and the same for all v) such that for each v(t), t ∈ T₂, the trajectory starting from x(t̄)
and corresponding to u, v steers the system to a state x(τ) ∈ B_{ε₂}(x).
We now propagate this minmax procedure to a sequential minmax problem in the class of
piecewise open-loop controls with k corrections, using the notations of Problem (I k ).
Problem (II k ).
Solve the following consecutive optimization problems.
Find
min
then consecutively, for
min u
min
and finally
min u
This time direct calculation gives
where
(\GammaC(s))Q(s)ds
\Gamma:::
(\GammaC(s))Q(s)ds
\Gamma:::
Z
(\GammaC(s))Q(s)ds
Z
B(s)P(s)ds
We refer to X⁺_ε(τ, t₀, X⁰ | τ₁, ..., τ_k)
as the minmax OLRS with k corrections at the points τ₁, ..., τ_k.
Theorem 3.5 The set X⁺_ε(τ, t₀, X⁰ | τ₁, ..., τ_k)
is then the minmax OLRS with k corrections and is given by formula (38).
Denote
assuming
Under the assumptions and notations of Theorem 3.3, the last results may be summarized
in the following proposition.
Theorem 3.6 (i) For any fixed values ; x one has
(ii) For any fixed ; x and index i 2 [1; k] one has
(iii) The following inclusions are true for
(iv) The following superpositions will also be true
In this section we have considered problems with a finite number of possible corrections and
additional information coming at fixed instants of time, and have presented a hierarchy of
piecewise open-loop reach sets of the anticipative (maxmin) or of the nonanticipative (minmax) type.
These were presented as level sets for value functions which are superpositions of "one-
stage" value functions calculated in Section 2. A semigroup-type property (34) for these
value functions was indicated which is true only for the points of correction (Remark 3.1).
In the continuous case, however, we shall need this property to be true for any points. Then
it would be possible to formulate the Principle of Optimality under uncertainty for our class
of problems.
We shall therefore investigate some limit transitions with the number of corrections tending to
infinity. This will allow a further possibility of continuous corrections of the control under
unknown disturbances.
4 The alternated integrals and the value functions
We observed above that the open-loop reach sets of both types (maxmin and minmax) are
described as the level sets of some value functions, namely 4
We now propagate this approach, based on using value functions, to systems with continuous
measurements of the state to allow continuous corrections of the control.
First note that inequality
is always true with equality attained, for example, under the following assumption.
Assumption 4.1 There exists a scalar function ffl(t) ? 0 such that
for all
In order to simplify the further explanations, we shall further deal in this section with the
case omitting the last symbol 0 in the notations for
Now note that Lemmas 3.1, 3.2 indicate that each of the functions
may be determined through a sequential procedure,
and a similar one for
. How could one express this procedure in terms of set-valued
For a given partition \Sigma k we have (j i)
in view of the previous relations (see (27)
-(29)), we may formulate a set-valued analogy of Lemma 3.1.
4 Here, without abuse of notation for
, we shall use symbol (\Delta) rather than the earlier
emphasizing the function (t); used in the respective constructions.
5 The case (\Delta) 6= 0 would add to the length of the expressions, but not to the essence of the scheme. This
case could be treated similarly, with obvious complements.
Lemma 4.1 The following relations are true
In terms of set-valued integrals (43) is precisely the equivalent of (29).
Moreover,
min u
min u
oe l
Similarly, for the sequential minmax, we have
Using notations identical to (42),(43), but with minus changed to plus in the symbols for
k , we have Lemma 4.2.
Lemma 4.2 The following relations are true
In terms of set-valued integrals, formula (46) is precisely the equivalent of (38), provided
Moreover,
min u
oe l
0g.
It is important to emphasize that until now all the relations were derived for a fixed partition
6 Also note that under Assumption 4.1, with X 0 single-valued, one may treat the sets X
as the
Hausdorff limits
What would happen, however, if k increases to infinity with
and would the result depend on the type of partition?
Our further discussion will require an important nondegeneracy assumption.
Assumption 4.2 There exist continuous vector functions
and a number ffl ? 0 such that
(a)
for all the sets
and
for all the sets
whatever be the partition \Sigma k .
From here on, this last assumption is taken to be true without further notice. 7
Observing that (29), (38) have the form of certain set-valued integral sums, ("the alternated
sums"), we introduce the additional notation
Let us now proceed with the limit operation. Take a monotone sequence of partitions
1. Due to inclusions (33) and the boundedness of the sequence
from below by any of the sets X
the sequence I \Gamma (; t
a set-valued limit. Similarly, the inclusions (40) and the boundedness of the sequence
above ensure that it also has a set-valued limit. A more detailed
investigation of this scheme along the lines of [23] would indicate that under assumption
4.2 (a); (b) these set-valued limits do not depend on the type of partition \Sigma k . This leads to
Theorem 4.1.
7 If at some stage this assumption is not fulfilled, it may be applied to sets of type
sufficiently large.
Theorem 4.1 There exist Hausdorff limits I \Gamma (; t
with
These limits do not depend on the type of partition \Sigma k .
Moreover,
so that
We refer to I(; t as the alternated reach set. 8
The proofs of the convergence of the alternated integral sums to their Hausdorff limits and
of the equalities (52) are not given here. They follow the lines of those given in detail in
[14] for problems on sequential maxmin and minmax considered in backward time (see
also [23], [22], [13]).
Let us now study the behavior of the function
According to (38), (31) the sequence
increasing in i with i !1. This sequence
is pointwise bounded in x by any solution of Problem (II k ) and therefore has a pointwise
limit. Due to (29), Theorem 4.1, and the continuity of the distance function d(x; M) in
lim
and therefore we may conclude that
under condition (48). This yields Theorem 4.2.
Theorem 4.2 Under condition (48) there exists a pointwise limit
lim
limit does not depend on the type of partition
The alternated integral is the level set of the function
8 A maxmin construction of the indicated type had been introduced in detail in [23], where it was constructed
in backward time, becoming known as the alternated integral of Pontryagin.
does not depend on the partition \Sigma k and due to the properties of minmax
we also come to the following conclusion.
Theorem 4.3 The function satisfies the semigroup property:
for The following inequality is true
min u
Similarly, for the decreasing sequence of functions
we have Theorem 4.4.
Theorem 4.4 (i) Under condition (48) there exists a pointwise limit
lim
limit does not depend on the type of partition
(ii) The alternated integral is the level set of the function V
I
(iii) The function satisfies the semigroup property:
for
(iv) The following inequality is true
A consequence of (52) is the basic assertion, Theorem 4.5.
Theorem 4.5 With the initial condition the
following equality is true
The function V(; x) satisfies the semigroup property
The last relation follows from (59), (54), (57).
Thus, under the nondegeneracy Assumption 4.2 the two forward alternated integrals I
coincide and so do the value functions
Relations (55), (58), (59) allow us to construct a partial differential equation for the function
V(t; x)-the so-called HJBI (Hamilton-Jacobi-Bellman-Isaacs) equation.
We now investigate the existence of the total derivative dV(t; x)=dt along the trajectories
of system (10). Due to (59), (13), we have
Observing that for d(x; X(t; t of (61) is unique and
taking l 0 (t; may apply the rules for differentiating a
"maximum"-type function [6], to get
Direct calculations indicate that the respective partials exist and are continuous in the
domain D[intD 0 , where
and intD 0 stands for the interior of the respective set.
To find the value of the total derivative take inequalities (58), (55), which may be rewritten
as
and
min u
Dividing both relations by oe ? 0 and passing to the limit with oe ! 0, we get
Since in Theorem 4.5 we had for the linear system (10) we
have
which results in the next proposition.
Theorem 4.6 In the domain D [ intD 0 the value function V(t; x) satisfies the "forward"
equation
over
Equation (63) may be rewritten as
The last theorem indicates that the HJBI equation (63) is satisfied everywhere in the open
domain D [ intD 0 . However, the continuity of the partials @V=@x; @V=@t on the boundary
of the domains D; D 0 was not investigated and in fact may not hold. But it is not difficult
to check that with boundary condition (65) the function V(t; x) will be a minmax solution
to equation (66) in the sense of [26], which is equivalent to the statement that V(t; x) is
a viscosity solution ([3], [20]) to (66), (67). This particularly follows from the fact that
function V(t; x) is convex, being a pointwise limit of convex functions
Let us note here that the problem under discussion may be treated not only as above but
also within the notion of classical solutions to equation (66), (65). Indeed, although all
the results above were proved for the criterion d(x(t in the respective problems, the
following assertion is also true.
Assertion 4.1 Theorems 3.1-3.6, 4.1-4.6, are all true with the criterion d(x(t in the
respective problems substituted by d 2 (x(t
This assertion follows from direct calculations, as in paper [13], with formula (11) substituted
by
The respective value function similar to V(t; x), denoted further as V 1 (t; x), will now be a
solution to (66) with boundary condition
together with its first partials, turns out to be continuous in t; x 2 D[D 0 .
Thus we come to
Theorem 4.7 The function V 1 (t; x)-a classical solution to (66), (67)-satisfies the relation
We have constructed the set X(t; t as the limit of OLRS and the level set of function
V(t; x), (or function V 1 (t; x))-the sequential maxmin or minmax of function d(t;
function d 2 (t; X 0 restriction It remains to show that X(t; t
precisely the set of points that may be reached from X 0 with a certain feedback control
strategy U(t; x), whatever be the function v(t).
Prior to the next section, we wish to note the following. Function V(t;
may be interpreted as the value function for the following
Problem (IV): find the value function
U
x(\Delta)
is a CLC (see Section 1) and X U (\Delta) is the set of all solutions to the
differential inclusion
generated by taken within the interval
Its level set
is precisely the closed-loop reach set. It is the set of such points x 2 IR n that there exists
a strategy U 2 U C which for any solution x(t) of (69), ensures the
inequality . Due to the structure of (69), (A(t) j 0), this is equivalent to
the following definition of closed-loop reachability sets.
Definition 4.1 A closed-loop reachability set X () is the set of such points x 2 IR n for
each of which there exists a strategy U 2 U C that for every v(\Delta) 2 VO assigns a point x
such that every solution x[t] of the differential inclusion
satisfies the inequality d(x[ ]; x) .
Once the Principle of Optimality (60) is true, it may also be used directly to derive equation
- the HJBI equation for the function V(t; x). Therefore, set X () (if nonempty), will
be nothing else than the set X(; t defined earlier as the limit of open-loop reach sets.
5 Closed loop reachability under uncertainty
We shall now show that each point of X(t; t may be reached from X 0 with a certain
feedback control strategy U(t; x), whatever be the function v(t).
In order to do this, we shall need the notion of solvability set, (or, in other terms, "the
backward reachability set", see [11], [27], [15]) - a set similar to X(t; t
in backward time. We first recall from [13] some properties of these sets. Consider
Problem find the value function
U
x(\Delta)
where M is a given convex compact set (M 2 convIR n ) and X U is the variety of all trajectories
x(\Delta) of the differential inclusion (69), generated by a given
strategy U 2 U C .
The formal HJBI equation for the value V (t; x) is
@t
@x
with boundary condition
Equation (71) may be rewritten as
@t
@x
@x
0: (73)
An important feature is that function V (t; x) may be interpreted as a sequential maxmin
similar to the one in section 3. Namely, taking the interval t t 1 , introduce a partition
similar to that of Section 3. For the given partition, consider the recurrence relations
min u
min u
min u
almost everywhere in the respective intervals.
Lemma 5.1 ([13]) With
there exists a pointwise limit
that does not depend upon the type of partition \Sigma k .
The function
We shall refer to
as the sequential maxmin. This function enjoys
properties similar to those of its "forward time" counterpart, the function of section
3. A similar construction is possible for a "backward" version of the sequential minmax.
The level set
is referred to as the closed loop solvability set CLSS at time t, from set M. It may be
presented as an alternated integral of Pontryagin, - the Hausdorff limit of the sequence
I (t;
Z
Z
Z
under conditions (74). Also presumed is a nondegeneracy assumption similar to Assumption
4.2.
Assumption 5.1 For a given set M 2 convIR n there exists a continuous function fi 3 (t) 2
and a number ffl ? 0, such that
for any whatever be the partition \Sigma k .
This assumption is presumed in the next lemma.
Lemma 5.2 Under condition (76) there exists a Hausdorff limit I (t;
I (t;
0:
This limit does not depend on the type of partition \Sigma k and coincides with the CLSS,
I (t;
From the theory of control under uncertainty and differential games it is known that if
there exists a feedback strategy U(t; x) 2 U C that steers system (10)
from state x(t) = x to set M whatever be the unknown disturbance v(\Delta) ([11], [29], [15]).
Therefore, under our assumptions, we just have to prove the
inclusion
or, in view of the properties of V(t; x); V (t; x), that
is the solution to equation (71) with boundary condition
(Recall that V(t 1
Due to the definition of the geometrical difference and of the integral I
check that
We thus have to prove the inclusion
Under assumptions 4.2 (a) taken for 0, or under assumption 4.2,
it is possible to observe, through direct calculation, using the properties of integrals I
(see formulas (29),(75)), that the following holds:
I (t;
where
and we arrive at Lemma 5.3.
Lemma 5.3 The following inclusion is true
moreover,
Inclusion (79) implies the existence of a feedback strategy U (t; x) that brings system (10)
from x
Theorem 5.1 Under assumptions 4.1(a), there exists a closed-loop
strategy U (t; x) ' U C that steers system (10) from x
The strategy U (t; x) may be found through the solution V(t; x) of equation (71), with boundary
condition (78), as
U (t;
(if the gradient @V (t; x)=@x does exist at ft; xg), or, more generally, as
U (t;
0g.
This is verified by differentiating V(t; x) with respect to t and checking that a.e.
dV
dt
(see [10], [15]).
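The extremal feedback rule of the preceding theorem can be sketched numerically. The following Python fragment is an illustration only, not part of the original text: it assumes that the gradient of the value function V, the input matrix B(t), and a finite sample (for instance the vertices) of the control set P(t) are available as callables; how these objects are represented is not specified above and is purely an assumption here.

import numpy as np

def extremal_feedback(t, x, grad_V, B, P_vertices):
    # Sketch of the extremal feedback rule: among the sampled admissible
    # controls, pick the one minimizing the inner product of B(t)u with the
    # gradient of the value function V at (t, x).
    g = grad_V(t, x)                # assumed numerical gradient dV/dx
    candidates = P_vertices(t)      # assumed finite sample of the set P(t)
    scores = [g @ (B(t) @ u) for u in candidates]
    return candidates[int(np.argmin(scores))]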
The previous theorem ensures merely that some point of may be reached from
x . In order to demonstrate that any point x may be reached from position
ft; x g, we have to prove the inclusion
for any x ?
is a solution to (66) with boundary condition
But inclusions (82), (83) again follow from the properties of I assuming
both of these set-valued integrals are nonempty. The latter, in its turn, is again ensured
by either assumptions 4.2(a), Assumption 4.1. This leads to
Theorem 5.2.
Theorem 5.2 Under either Assumptions 4.1(a), or 4.1 there
exists a closed-loop strategy U ? (t; x) ' U C that steers system (10) from x to point
The strategy U ? (t; x) may be found through the solution V ? (t; x) of equation (71), with boundary
condition (84), as
(if the gradient @V ? (t; x)=@x does exist at ft; xg), or, more generally, as
0g.
Remark 5.1. Assumptions 4.1(a), are ensured by Assumption 4.1.
If this does not hold, it is possible to go through all the procedures taking \Gamma neighborhoods
of sets X(\Delta); W (\Delta) rather than the sets themselves. Then one has to look for the (\Delta)-reach
sets -solvability sets W (t; sufficiently large, so
that X(t; t would surely be nonempty.
Remark 5.2. The emphasis of this paper is to discuss the issue of reachability under uncertainty
governed by unknown but bounded disturbances. This topic was studied here
through a reduction to the calculation of value functions for successive problems of sequential minmax and maxmin of certain distance functions or their squares. The latter
problems were dealt with via techniques of convex analysis and set-valued calculus. However
the solution schemes of this paper naturally allow a more general situation which is to
substitute the distance function d(x; M) by any proper convex function OE(x), for example,
with similar results passing through. The more general problems then reduce to those of
this paper.
Thus, given a terminal cost function OE(x), it may readily generate a terminal set M as a level set of OE(x) at some level ff, with support function ([25])
The given formalisms for describing reachability are not the only ones available. We further
indicate yet another formal scheme.
6 Reachability and the funnel equations
In this section we briefly indicate some connections between the previous results and those
that can be obtained through evolution equations of the "funnel type" [2], [15].
Consider the evolution equations
lim
oe
with initial condition
and
lim
oe
with
Under some regularity assumptions (similar to Assumption 4.1) which ensure that all the
sets that appear in (87), (88) are nonempty, these equations have solutions which turn out to be set-valued. These solutions satisfy the respective equations almost everywhere. But they
need not be unique. However, the property of uniqueness may be restored if we presume
that are the (inclusion) maximal solutions, (see [15], Sections 1.3, 1.7).
solution X 0 (t) to a funnel equation of type (87), (88) is maximal if it satisfies the inclusion
other solution X (t) to the respective equation with the same initial
condition).
Equations (87), (88) may be interpreted as some limit form of the recurrence equations
\Gammaoe(\GammaC
and
\Gammaoe(\GammaC (t)Q(t) 6= ;:
Indeed, taking, for example solving the recurrence equation (89) for
values of time
oe (0j) to
oe (kj ), we observe that
oe (kj) is similar to I \Gamma (; t
selected with constant oe
\Gamma(\GammaC
is similar to (29)(when and to (43).
Under Assumption 4.1 a direct calculation leads to the next conclusions.
Lemma 6.1 The following relations are true with
oe
oe
is a maximal solution
to equation (87) with
Therefore, the closed-loop reach set X (; t may be also calculated through the funnel
equation (87) which therefore also describes the dynamics of the level sets of the value
function V(; x) -the solution to the forward HJBI equation (66).
Remark 6.1. As we have seen, equation (87) describes the evolution of the alternated integral
Similarly to that, equation (88) describes the evolution of the alternated
recurrence equations (89), (90) may then serve to be the basis
of numerical schemes for calculating the reach sets.
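As a purely illustrative sketch (not taken from the text), the recurrence (89) can be imitated in one dimension, where the sets are intervals, the control tube enters through a Minkowski sum and the disturbance tube through a geometric (Pontryagin) difference. The bounds p and q and the choice B = C = 1 below are assumptions made only to keep the example one-dimensional.

import numpy as np

def mink_sum(I, J):            # interval Minkowski sum I + J
    return (I[0] + J[0], I[1] + J[1])

def geom_diff(I, J):           # geometric difference I -. J (None if empty)
    lo, hi = I[0] - J[0], I[1] - J[1]
    return (lo, hi) if lo <= hi else None

def reach_interval(X0, p, q, t1, k):
    # Alternate the control contribution [-sigma p, sigma p] (Minkowski sum)
    # with the disturbance contribution [-sigma q, sigma q] (geometric difference).
    sigma = t1 / k
    X = X0
    for _ in range(k):
        X = mink_sum(X, (-sigma * p, sigma * p))
        X = geom_diff(X, (-sigma * q, sigma * q))
        if X is None:
            return None        # the reach set degenerates to the empty set
    return X

print(reach_interval((-0.1, 0.1), p=1.0, q=0.4, t1=1.0, k=100))
# roughly (-0.7, 0.7): per unit time each endpoint grows at the net rate p - q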
7 Example
Consider the system
defined on the interval [0; ], with hard bounds
on the control u and the uncertain disturbance v.
As is known (see, for example, [16]), a parametric representation of the boundary of the
reach set X(; t of system (91) without uncertainty (v(t) j 0) is given by two
curves (see external set in fig.1, generated for x
and where oe 0 is the parameter, (the values oe ? 0 correspond to the vertices of
Similarly, the reach set X(; t in the variable v is given by the curves
According to (7) , the set
which leads to a parametrization of the boundary of this set in the form
(see internal set in fig.1, generated for
so that the OLRS under uncertainty is smaller than X(; 0; x 0 jP(\Delta); f0g)-the
reach set without uncertainty.
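For illustration only, the reach set without uncertainty referred to above can be approximated numerically by sampling bang-bang controls with a single switch, which trace its boundary for the double integrator. That system (91) is the double integrator with the bound |u| <= 1 is an assumption of this sketch; the closed-form parametrization of the two boundary curves used in the paper is not reproduced here.

import numpy as np

def endpoint(tau, t_switch, sign):
    # State at time tau, starting from the origin, with u = sign on [0, t_switch]
    # and u = -sign afterwards (piecewise-constant integration in closed form).
    s, r = t_switch, tau - t_switch
    x2 = sign * s - sign * r
    x1 = sign * s ** 2 / 2 + (sign * s) * r - sign * r ** 2 / 2
    return x1, x2

tau = np.pi
boundary = [endpoint(tau, ts, sg) for sg in (+1, -1)
            for ts in np.linspace(0, tau, 200)]
xs, ys = zip(*boundary)
print(min(xs), max(xs), min(ys), max(ys))   # extent of the (u-only) reach set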
Let us now look for the OLRS one correction at time Taking
may figure out that set has to be
bounded by two curves (fig.2)
which gives
we come to intX
Continuing with r
We also observe that with r 2 ! 1=2 we have sufficiently
large.
As indicated above, sets X turn out to be empty unless is sufficiently large.
Continuing our example further, for
\GammaX (2; 0; 0jf0g; Q(\Delta))g;
and
The last set is nonvoid if is such that B (0)
\GammaX (2; 0; 0; jf0g; ;. The
smallest value 0 of all such ensures
For all 0 it is then possible to compare sets
that the latter is smaller than the former (see fig.3, where X shown by the
internal continuous curve, by the external continuous curve and
by the dashed curve).
Fig.1
Fig.2
Fig.3
8 Conclusion
In this paper we deal with one of the recent problems in reachability analysis which is to
specify the sets of points that can be reached by a controlled system despite the unknown
but bounded disturbances in the system inputs. The paper gives a description of several notions
of such reachability and indicates schemes to calculate various types of reach sets. We
consider systems with linear structure and closed-loop controls that are generally nonlinear.
In particular, we emphasize the difference between reachability under open-loop and closed-loop
controls. We distinguish open-loop controls of the anticipative type, which presume
the disturbances to be known in advance, and of the nonanticipative type, which presume
no such knowledge. The nonanticipative open-loop reach set is smaller than the one for anticipative
open-loop controls and the closed loop reach set (which is always nonanticipative)
lies in between. Intermediate reach sets are those generated by piecewise closed-loop controls
that allow on-line measurements of the state space variable at isolated instants of time
- the points of correction. Increasing the number of corrections to infinity and keeping them
dense within the interval under consideration, we come to the case of continuous corrections
- the solution to the problem of reachability under closed-loop (feedback control).
The various types of reach sets introduced here were calculated through two alternative
presentations, namely, either through operations on set-valued integrals or as level sets for
value functions in sequential problems on maxmin or minmax for certain distance functions.
For the closed-loop reachability problem the corresponding value function defines a mapping
that satisfies the semigroup property. This property allowed us to formulate the Principle of
Optimality under Uncertainty for the class of problems considered here. The last Principle
allowed us to demonstrate that the closed-loop reach set under uncertainty is the level set for
the solution to a forward HJBI equation. On the other hand, the feedback control strategy
that steers a point to its closed-loop reach set (whatever be the disturbance) may be found
from the solution to a backward HJBI equation whose boundary condition is taken from the
solution of the earlier mentioned forward HJBI equation.
This paper leaves many issues for further investigation. For example, there is a strong
demand from many applied areas to calculate reach sets under uncertainty. However, the
given solutions to the problem are not simple to calculate. Among the nearest issues may
be the calculation of the reach sets of this paper through ellipsoidal approximations along
the schemes of [15], [16]. Then, of course, comes the propagation of the results to nonlinear
systems. Here the application of the HJBI technique seems to allow some progress. Needless
to say, similar problems could also be posed for systems with uncertainty in its parameters
or in the model itself, as well as for other types of controlled transition systems.
--R
KUPFERMAN O.
Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations
Optimal Control and Related Minimax Design Prob- lems
Stochastic Transition Systems.
RUBINOV A.
Topics in Control Theory.
SUBBOTIN A.
Control and Observation Under Uncertainty.
Pontryagin's alternated integral in the theory of control synthesis.
N.
MARKUS L.
Optimality and reachability with feedback controls.
SOUGANIDIS P.
Controllers for reachability specifications for hybrid systems.
IVANOV G.
WETS R.
Generalized Solutions of First Order PDE's: the Dynamic Optimization Perspective.
On the Existence of Solutions to a Differential Game.
Existence of Saddle Points in Differential Games.
Reach set Computation Using Optimal Control.
--TR | uncertainty;differential games;reach sets;alternated integral;funnel equations;differential inclusions;closed-loop control;HJBI equation;open-loop control;reachability;dynamic programming |
587740 | On Lagrangian Relaxation of Quadratic Matrix Constraints. | Quadratically constrained quadratic programs (QQPs) play an important modeling role for many diverse problems. These problems are in general NP hard and numerically intractable. Lagrangian relaxations often provide good approximate solutions to these hard problems. Such relaxations are equivalent to semidefinite programming relaxations.For several special cases of QQP, e.g., convex programs and trust region subproblems, the Lagrangian relaxation provides the exact optimal value, i.e., there is a zero duality gap. However, this is not true for the general QQP, or even the QQP with two convex constraints, but a nonconvex objective. In this paper we consider a certain QQP where the quadratic constraints correspond to the matrix orthogonality condition XXT=I. For this problem we show that the Lagrangian dual based on relaxing the constraints XXT=I and the seemingly redundant constraints XT X=I has a zero duality gap. This result has natural applications to quadratic assignment and graph partitioning problems, as well as the problem of minimizing the weighted sum of the largest eigenvalues of a matrix. We also show that the technique of relaxing quadratic matrix constraints can be used to obtain a strengthened semidefinite relaxation for the max-cut problem. | Introduction
. Quadratically constrained quadratic programs (QQPs) play
an important modeling role for many diverse problems. They often provide a much
improved model compared to the simpler linear relaxation of a problem. However, very
large linear models can be solved efficiently, whereas QQPs are in general NP-hard
and numerically intractable. Lagrangian relaxations often provide good approximate
solutions to these hard problems. Moreover these relaxations can be shown to be
equivalent to semidefinite programming (SDP) relaxations, and SDP problems can be
solved efficiently, i.e., they are polynomial time problems; see, e.g., [31].
SDP relaxations provide a tractable approach for finding good bounds for many
hard combinatorial problems. The best example is the application of SDP to the
max-cut problem, where an 87% performance guarantee exists [11, 12]. Other examples
include matrix completion problems [23, 22], as well as graph partitioning problems
and the quadratic assignment problem (references given below).
In this paper we consider several quadratically constrained quadratic (nonconvex)
programs arising from hard combinatorial problems. In particular, we look at the
orthogonal relaxations of the quadratic assignment and graph partitioning problems.
We show that the resulting well-known eigenvalue bounds for these problems can
be obtained from the Lagrangian dual of the orthogonally constrained relaxations,
# Received by the editors June 9, 1998; accepted for publication (in revised form) by P. Van Dooren
July 30, 1999; published electronically May 31, 2000.
http://www.siam.org/journals/simax/22-1/34029.html
Department of Management Sciences, University of Iowa, Iowa City, IA 52242-1000
(kurt-anstreicher@uiowa.edu).
# University of Waterloo, Department of Combinatorics and Optimization, Waterloo, Ontario N2L
3G1, Canada (henry@orion.uwaterloo.ca). This author's research was supported by Natural Sciences
and Engineering Research Council of Canada.
but only if the seemingly redundant constraint X T X = I is explicitly added to the orthogonality constraint XX T = I. Our main analytical tool is a strong duality
result for a certain nonconvex QQP, where the quadratic constraints correspond to
the orthogonality conditions XX T = I and X T X = I. We also show that the technique
of applying Lagrangian relaxation to quadratic matrix constraints can be used to
obtain a strengthened SDP relaxation for the max-cut problem.
Our results show that current tractable (nonconvex) relaxations for the quadratic
assignment and graph partitioning problems can, in fact, be found using Lagrangian
relaxations. (The converse statement is well known, i.e., the Lagrangian dual is equivalent to a (tractable) SDP relaxation.) Our results here provide further evidence for
the following conjecture: the Lagrangian relaxation of an appropriate QQP provides
the strongest tractable relaxation for QQPs.
1.1. Outline. We complete this section with the notation used in this paper.
In section 2, we present several known results on QQPs. We start with convex
QQPs where a zero duality gap always holds. Then we look at the minimum eigenvalue
problem and the trust region subproblem, where strong duality continues to hold. We
conclude with the two trust region subproblem, the max-cut problem, and general
nonconvex QQPs where nonzero duality gaps can occur.
The main results are in section 3. We show that strong duality holds for a
class of orthogonally constrained quadratic programs if we add seemingly redundant
constraints before constructing the Lagrangian dual.
In section 4 we apply this result to several problems, i.e., relaxations of quadratic
assignment and graph partitioning problems, and a weighted sum of eigenvalue prob-
lem. In section 5 we present strengthened semidefinite relaxations for the max-cut
problem. In section 6 we summarize our results and describe some promising directions
for future research.
1.2. Notation. We now describe the notation used in the paper.
Let S n denote the space of n-n symmetric matrices equipped with the trace inner
product; A # 0 (A # 0) denotes positive semidefiniteness (positive definiteness), and A # B denotes A - B # 0, i.e., S n is equipped with the Loewner partial order. We let P denote the cone of symmetric positive semidefinite matrices; M m,n denotes the space of general m - n matrices, also equipped with the trace inner product #A, B# = tr A T B; M m denotes the space of general m - m
matrices; O denotes the set of orthonormal (orthogonal) matrices; # denotes the set
of permutation matrices.
We let Diag(v) be the diagonal matrix formed from the vector v; its adjoint
operator is diag(M ), which is the vector formed from the diagonal of the matrix M.
For M # M m,n , vec (M ) denotes the vector formed (columnwise) from M . The Kronecker product of two matrices is denoted A (x) B, and the Hadamard (entrywise) product is denoted A o B.
We use e to denote the vector of all ones, and ee T to denote the matrix of
all ones.
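As an informal illustration of this notation (not part of the paper), the following NumPy fragment checks the identity tr AXBX T = vec (X) T (B (x) A) vec (X) for symmetric A and B, and shows the Diag/diag and Hadamard operations.

import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)); A = (A + A.T) / 2    # symmetric
B = rng.standard_normal((n, n)); B = (B + B.T) / 2
X = rng.standard_normal((n, n))

vec = lambda M: M.flatten(order='F')                   # column-wise stacking
lhs = np.trace(A @ X @ B @ X.T)
rhs = vec(X) @ np.kron(B, A) @ vec(X)                  # (B kron A) acting on vec(X)
print(np.isclose(lhs, rhs))                            # True

v = rng.standard_normal(n)
D = np.diag(v)        # Diag(v): diagonal matrix built from a vector
d = np.diag(A)        # diag(A): vector of diagonal entries of A
H = A * B             # Hadamard (entrywise) product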
2. Some known results. The general QQP is the minimization of a (possibly nonconvex) quadratic function subject to quadratic constraints on the vector x. We now present several QQP problems where the
Lagrangian relaxation is important and well known. In all these cases, the Lagrangian
QUADRATIC MATRIX CONSTRAINTS 43
dual provides an important theoretical tool for algorithmic development, even where
the duality gap may be nonzero.
2.1. Convex quadratic programs. Consider the convex quadratic program
(CQP) min { q 0 (x) : q i (x) <= 0, i = 1, . . . , m },
where all q i (x) are convex quadratic functions. The (Lagrangian) dual is
(DCQP) max { min x [ q 0 (x) + # 1 q 1 (x) + - - - + # m q m (x) ] : # >= 0 }.
If the dual optimal value is attained at # # , x # , then a sufficient condition for x # to be optimal for CQP is
primal feasibility and complementary slackness, i.e.,
In addition, it is well known that the Karush-Kuhn-Tucker (KKT) conditions
are sufficient for global optimality, and under an appropriate constraint qualification
the KKT conditions are also necessary. Therefore strong duality holds if a constraint
qualification is satisfied, i.e., there is no duality gap and the dual is attained.
However, surprisingly, if the primal value of CQP is bounded, then it is attained
and there is no duality gap; see, e.g., [44, 36, 34, 35] and, more recently, [26]. However,
the dual may not be attained, e.g., consider the convex program
and its dual
min
x
Algorithmic approaches based on Lagrangian duality appear in, e.g., [19, 25, 31].
2.2. Rayleigh quotient. Suppose that A = A T
. It is well known that the
smallest eigenvalue # 1 of A is obtained from the Rayleigh quotient, i.e., (2.1) # 1 = min { x T Ax : x T x = 1 }.
Since A is not necessarily positive semidefinite, this is the minimization of a nonconvex
function on a nonconvex set. However, the Rayleigh quotient forms the basis for many
algorithms for finding the smallest eigenvalue, and these algorithms are very efficient.
In fact, it is easy to see that there is no duality gap for this nonconvex problem, i.e.,
(2.2) # 1 = max # min x { x T Ax + #(1 - x T x) }.
To see this, note that the inner minimization problem in (2.2) is unconstrained. This
implies that the outer maximization problem has the hidden semidefinite constraint
(an ongoing theme in the paper) A - #I # 0,
i.e., # is at most the smallest eigenvalue of A. With # set to the smallest eigenvalue,
the inner minimization yields the eigenvector corresponding to # 1 . Thus, we have an
example of a nonconvex problem for which strong duality holds. Note that the problem
(2.1) has the special norm constraint and a homogeneous quadratic objective.
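A small numerical check of (2.1)-(2.2), added here only as an illustration: the minimum of the Rayleigh quotient over sampled unit vectors is bounded below by, and approaches, the smallest eigenvalue, which is also the value of the dual.

import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2

lam1 = np.linalg.eigvalsh(A)[0]            # smallest eigenvalue = dual value

xs = rng.standard_normal((n, 20000))
xs /= np.linalg.norm(xs, axis=0)           # random unit vectors
primal_est = np.min(np.einsum('ij,ij->j', xs, A @ xs))
print(lam1, primal_est)                    # primal_est >= lam1 and close to it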
2.3. Trust region subproblem. We will next see that strong duality holds for
a larger class of seemingly nonconvex problems. The trust region subproblem (TRS)
is the minimization of a quadratic function subject to a norm constraint. No convexity
or homogeneity of the objective function is assumed.
Assuming that the constraint in TRS is written as an inequality "<=", the Lagrangian dual is
min
x
This is equivalent to (see [43]) the (concave) nonlinear semidefinite program
s.t. Q+ #I # 0,
where the dagger denotes the Moore-Penrose inverse. It is shown in [43] that strong duality holds
for TRS, i.e., there is a zero duality gap (the primal and dual optimal values coincide), and both the primal and dual
are attained. Thus, as in the eigenvalue case, we see that this is an example of a
nonconvex program where strong duality holds.
Extensions of this result to a two-sided general, possibly nonconvex constraint are
discussed in [43, 28]. An algorithm based on Lagrangian duality appears in [40] and
(implicitly) in [29, 41]. These algorithms are extremely efficient for the TRS problem,
i.e., they solve this problem almost as quickly as they can solve an eigenvalue problem.
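For illustration, a minimal dual-based TRS solver can be sketched as follows. It assumes the TRS has the form min x T Qx + 2g T x subject to x T x <= delta^2 (the precise normalization used in the cited papers may differ) and it ignores the so-called hard case.

import numpy as np

def trs_dual(Q, g, delta, iters=80):
    # Sketch: solve min x'Qx + 2g'x s.t. ||x|| <= delta via the Lagrangian dual.
    # Easy case only: ||(Q + lam I)^{-1} g|| decreases in lam for lam > -lam_min(Q).
    n = len(g)
    lam_min = np.linalg.eigvalsh(Q)[0]
    lo = max(0.0, -lam_min)
    if lam_min > 0:                         # unconstrained minimizer may be feasible
        x = np.linalg.solve(Q, -g)
        if np.linalg.norm(x) <= delta:
            return x, 0.0
    hi = lo + 1.0                           # grow hi until the iterate is feasible
    while np.linalg.norm(np.linalg.solve(Q + hi * np.eye(n), -g)) > delta:
        hi *= 2.0
    for _ in range(iters):                  # bisection on the multiplier
        mid = 0.5 * (lo + hi)
        x = np.linalg.solve(Q + mid * np.eye(n), -g)
        if np.linalg.norm(x) > delta:
            lo = mid
        else:
            hi = mid
    return np.linalg.solve(Q + hi * np.eye(n), -g), hi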
2.4. Two trust region subproblem. The two trust region subproblem (TTRS)
consists of minimizing a (possibly nonconvex) quadratic function subject to a norm
and a least squares constraint, i.e., two convex quadratic constraints. This problem
arises in solving general nonlinear programs using a sequential quadratic programming
approach and is often called the Celis-Dennis-Tapia (CDT) problem; see [4].
In contrast to the above single TRS, the TTRS can have a nonzero duality gap;
see, e.g., [33, 47, 48, 49]. This is closely related to quadratic theorems of the alterna-
tive, e.g., [5]. In addition, if the constraints are not convex, then the primal may not
be attained; see, e.g., [26].
In [27], Martinez shows that the TRS can have at most one local and nonglobal
optimum, and the Lagrangian at this point has one negative eigenvalue. Therefore, if
we have such a case and add another ball constraint that contains the local, nonglobal
optimum in its interior and also makes this point the global optimum, we obtain a
TTRS where we cannot close the duality gap due to the negative eigenvalue. It is
uncertain what constraints could be added to close this duality gap. In fact, it is still
an open problem whether TTRS is an NP-hard or a polynomial-time problem.
2.5. Max-cut problem. Suppose that G = (V, E) is an undirected graph with
vertex set V = {v 1 , . . . , v n } and weights w ij on the edges (v i , v j ) # E. The max-cut problem
consists of finding the index set I # {1, 2, . , n}, in order to maximize the weight of
the edges with one end point with index in I and the other in the complement. This is
equivalent to the following discrete optimization problem with a quadratic objective:
We equate x i = 1 if i # I, and x i = -1 otherwise. Define the homogeneous
quadratic objective q(x) := x T Qx,
where Q is an n - n symmetric matrix. Then the MC problem is equivalent to the
QQP of maximizing q(x) subject to x 2 i = 1, i = 1, . . . , n.
This problem is NP-hard, i.e., intractable.
Since the above QQP has many nonconvex quadratic constraints, a duality gap
for the Lagrangian relaxation is expected and does indeed occur most of the time.
However, the Lagrangian dual is equivalent to the SDP relaxation (upper bound)
(2.3) max { tr QX : diag (X) = e, X positive semidefinite },
which has proven to have very strong theoretical and practical properties, i.e., the
bound has an 87% performance guarantee for the problem MC and a 97% performance
in practice; see, e.g., [12, 18, 15]. Other theoretical results for general objectives and
further relaxed constraints appear in [30, 46].
In [38], several unrelated, though tractable, bounds for MC are shown to be
equivalent. These bounds include the box relaxation -e # x # e, the trust region
relaxation x T x <= n, and an eigenvalue relaxation. Furthermore, these bounds are
all shown to be equivalent to the Lagrangian relaxation; see [37]. Thus we see that
the Lagrangian relaxation is equivalent to the best of these tractable bounds.
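The SDP relaxation (2.3) is easy to compute with an off-the-shelf solver. The fragment below is only a sketch: it assumes cvxpy is available and uses the common convention Q = (Diag(We) - W)/4, so that x T Qx equals the weight of the cut defined by x; the exact definition of Q is not spelled out in the text above.

import numpy as np
import cvxpy as cp

n = 8
rng = np.random.default_rng(2)
W = rng.uniform(0, 1, (n, n)); W = np.triu(W, 1); W = W + W.T   # edge weights
Q = 0.25 * (np.diag(W.sum(axis=1)) - W)                          # assumed scaling

X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Maximize(cp.trace(Q @ X)), [cp.diag(X) == 1])
prob.solve()

# brute force for such a small n, to compare against the upper bound
cuts = (np.array([1 if (i >> j) & 1 else -1 for j in range(n)]) for i in range(2 ** n))
best = max(x @ Q @ x for x in cuts)
print(prob.value, best)          # SDP bound >= exact max-cut value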
2.6. General QQP. The general, possibly nonconvex QQP has many applications
in modeling and approximation theory; see, e.g., the applications to SQP
methods in [21]. Examples of approximations to QQPs also appear in [9].
The Lagrangian relaxation of a QQP is equivalent to the SDP relaxation and is
sometimes referred to as the Shor relaxation; see [42]. The Lagrangian relaxation can
be written as an SDP if one takes into account the hidden semidefinite constraint,
i.e., a quadratic function is bounded below only if the Hessian is positive semidefinite.
The SDP relaxation is then the Lagrangian dual of this semidefinite program. It can
also be obtained directly by lifting the problem into matrix space using the fact that x T Qx = tr Qxx T , and then relaxing xx T to a semidefinite matrix X.
One can relate the geometry of the original feasible set of QQP with the feasible
set of the SDP relaxation. The connection is through valid quadratic inequalities, i.e.,
nonnegative (convex) combinations of the quadratic functions; see [10, 20].
3. Orthogonally constrained programs with zero duality gaps. Consider
the orthonormal constraint XX T = I.
(The set of such X is sometimes known as the Stiefel manifold; see, e.g., [7]. Applications
and algorithms for optimization on orthonormal sets of matrices are discussed
in [7].) In this section we will show that strong duality holds for a certain
(nonconvex) quadratic program defined over orthonormal matrices. Because of the
similarity of the orthonormality constraint to the norm constraint x T x = 1, the result
of this section can be viewed as a matrix generalization of the strong duality result
for the Rayleigh quotient problem (2.1).
Let A and B be n - n symmetric matrices, and consider the orthonormally constrained
homogeneous QQP
This problem can be solved exactly using Lagrange multipliers (see, e.g., [14]) or using
the classical Hoffman-Wielandt inequality (see, e.g., [3]). We include a simple proof
for completeness.
Proposition 3.1. Suppose that the orthogonal diagonalizations of A, B are A = V D A V T and B = UD B U T , respectively, where the eigenvalues in D A are ordered nonincreasing and the eigenvalues in D B are ordered nondecreasing. Then the optimal value of QQP O is - O = tr D A D B , and the optimal solution is obtained using the orthogonal matrices that yield the diagonalizations, i.e., X # = V U T .
Proof. The constraint G(X) := XX T
- I maps M n to S n . The Jacobian of the
constraint at X acting on the direction h is J (X)(h) = Xh T + hX T . The adjoint of the Jacobian acting on S # S n is J # (X)(S) = 2SX, since tr S(Xh T + hX T ) = 2 tr (SX) T h.
But J # is one-one for all X orthogonal. Therefore,
J is onto, i.e., the standard constraint qualification holds at the optimum. It follows
that the necessary conditions for optimality are that the gradient of the Lagrangian
is 0, i.e., 2AXB - 2SX = 0 for some S # S n . Therefore, AXBX T = SXX T = S,
i.e., AXBX T is symmetric, which means that A and XBX T commute and so are
mutually diagonalizable by the orthogonal matrix U . Therefore, we can assume that
both A and B are diagonal and we choose X to be a product of permutations that
gives the correct ordering of the eigenvalues.
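A quick numerical sanity check of Proposition 3.1, added for illustration only: random orthogonal matrices never beat the closed-form value obtained by pairing the eigenvalues of A in nonincreasing order with those of B in nondecreasing order.

import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2

closed_form = np.sort(np.linalg.eigvalsh(A))[::-1] @ np.sort(np.linalg.eigvalsh(B))

def rand_orth(n, rng):
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q

sampled = min(np.trace(A @ X @ B @ X.T) for X in (rand_orth(n, rng) for _ in range(5000)))
print(closed_form, sampled)      # sampled >= closed_form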
The Lagrangian dual of QQP O is
(3.3) max S min X { tr AXBX T - tr S(XX T - I) }.
However, there can be a nonzero duality gap for the Lagrangian dual; see [50] for
an example. The inner minimization in the dual problem (3.3) is an unconstrained
quadratic minimization in the variables vec (X), with Hessian (B (x) A) - (I (x) S). Hence this minimization is unbounded if the Hessian is not positive semidefinite. In
order to close the duality gap, we need a larger class of quadratic functions.
Note that in QQP O the constraints XX T = I and X T X = I are equivalent. Adding the redundant constraints X T X = I, we arrive at the problem QQP OO : minimize tr AXBX T subject to XX T = I and X T X = I. Using symmetric matrices S and T to relax the constraints XX T = I and X T X = I, respectively, we obtain a dual problem DQQP OO : maximize tr S + tr T subject to (B (x) A) - (I (x) S) - (T (x) I) positive semidefinite.
Theorem 3.2. Strong duality holds for QQP OO and DQQP OO , i.e., the optimal values coincide, and both primal and dual are attained.
Proof. Let and U are orthonormal matrices
whose columns are the eigenvectors of A and B, respectively, # and # are the corresponding
vectors of eigenvalues, and Diag(#). Then for any S and
U# V is nonsingular, tr
S, and
tr
T , the dual problem DQQPOO is equivalent to
s.t.
However, since # and # are diagonal matrices, (3.4) is equivalent to the ordinary
linear program:
But LD is the dual of the linear assignment problem:
s.t.
Assume without loss of generality that the eigenvalues of A are ordered nonincreasing and those of B nondecreasing, as in Proposition 3.1. Then LP can be interpreted as the problem of finding a permutation #(-) of {1, . . . , n} so that the sum of the products of the eigenvalues of A with the correspondingly permuted eigenvalues of B is minimized. But the minimizing permutation is then the identity, and by Proposition 3.1 the solution value - D is exactly - O .
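The reduction of the dual to a linear assignment problem can also be checked numerically; the sketch below (an illustration, assuming scipy is available) solves the assignment problem over the products of eigenvalues and confirms that its value equals the oppositely ordered pairing of Proposition 3.1.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
B = rng.standard_normal((n, n)); B = (B + B.T) / 2
lamA = np.linalg.eigvalsh(A)
lamB = np.linalg.eigvalsh(B)

C = np.outer(lamA, lamB)                     # assignment costs lamA_i * lamB_j
rows, cols = linear_sum_assignment(C)        # minimizing assignment
lp_value = C[rows, cols].sum()

closed_form = np.sort(lamA)[::-1] @ np.sort(lamB)
print(np.isclose(lp_value, closed_form))     # True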
4. Applications. We now present three applications of the above strong duality
result.
4.1. Quadratic assignment problem. Let A and B be n - n symmetric ma-
trices, and consider the homogeneous quadratic assignment problem (QAP) (see, e.g.,
[32]),
QAP min tr AXBX T
s.t. X #,
where # is the set of n - n permutation matrices. The set of orthonormal matrices
contains the permutation matrices, and the orthonormally constrained problem (3.1)
is an important relaxation of QAP. The bounds obtained are usually called the eigenvalue
bounds for QAP; see [8, 13]. Theorem 3.2 shows that the eigenvalue bounds
are in fact obtained from a Lagrangian relaxation of (3.1) after adding the seemingly
redundant constraint X T X = I.
4.2. Weighted sums of eigenvalues. Consider the problem of minimizing the
weighted sum of the k largest eigenvalues of an n - n symmetric matrix Y , subject
to linear equality constraints. An SDP formulation for this problem involving 2k
semidefiniteness constraints on n - n matrices is given in [1, section 4.3]. We will
show that the result of section 3 can be applied to obtain a new SDP formulation of
the problem having only k semidefiniteness constraints on n - n matrices.
For convenience we consider the equivalent problem of maximizing the weighted
sum of the k minimum eigenvalues of Y . Let w 1 >= w 2 >= - - - >= w k >= 0 be given weights; we are interested in the problem
s.t. A vec (Y
where the # i are the eigenvalues of Y , and A is a given constraint matrix. From Proposition 3.1 it is clear that, for any Y ,
the weighted sum of the k smallest eigenvalues of Y equals the minimum over orthogonal X of tr Y XWX T , where W = Diag(w 1 , . . . , w k , 0, . . . , 0),
and therefore from Theorem 3.2 the problem WEIG is equivalent to the problem
s.t.
A vec (Y
Note that, for any Y , the matrix
W# Y is block diagonal, with the final n- k blocks
identically zero. Since
I# S is also block diagonal, and tr T is a function of the diagonal
of T only, it is obvious that T can be assumed to be a diagonal matrix
Writing the problem (4.1) in terms of t, and separating the block diagonal constraints,
results in the SDP
We have thus obtained an SDP representation for the problem WEIG with k semidefiniteness constraints on n - n matrices, as claimed.
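The identity behind this construction can be verified numerically. The following fragment (illustrative only, with weights assumed ordered w 1 >= . . . >= w k >= 0) checks that the weighted sum of the k smallest eigenvalues of Y coincides with the closed-form value of min tr Y XWX T over orthogonal X given by Proposition 3.1.

import numpy as np

rng = np.random.default_rng(5)
n, k = 6, 3
Y = rng.standard_normal((n, n)); Y = (Y + Y.T) / 2
w = np.sort(rng.uniform(0, 1, k))[::-1]          # w1 >= ... >= wk >= 0
W = np.diag(np.concatenate([w, np.zeros(n - k)]))

lamY = np.linalg.eigvalsh(Y)                     # ascending
weighted = w @ lamY[:k]                          # sum of w_i * (i-th smallest eigenvalue)

# min over orthogonal X of tr(Y X W X^T), via Proposition 3.1's closed form
closed_form = np.sort(np.linalg.eigvalsh(Y))[::-1] @ np.sort(np.linalg.eigvalsh(W))
print(np.isclose(weighted, closed_form))         # True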
4.3. Graph partitioning problem. Let G = (N, E) be an edge-weighted undirected graph with node set N = {v 1 , . . . , v n }.
The graph partitioning (GP) problem consists of partitioning the node set N into k
disjoint subsets S 1 , . . . , S k of specified sizes m 1 , . . . , m k , so as to minimize the total weight of the edges connecting nodes in distinct subsets of
the partition. This problem is well known to be NP-hard. GP can be modeled as a
quadratic problem
z := min { tr X T LX : X # P },
where L is the Laplacian of the graph and P is the set of n - k partition matrices (i.e., each column of X is the indicator function of the corresponding set; X ij = 1 if node i is in set j and 0 otherwise).
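As an illustration of this quadratic model (not taken from the paper), the fragment below builds the Laplacian of a tiny weighted graph and checks that, for a partition matrix X, tr X T LX counts each cut edge twice; whether the model above carries an additional factor 1/2 is a normalization detail not visible in the text.

import numpy as np

W = np.array([[0, 1, 2, 0],
              [1, 0, 0, 3],
              [2, 0, 0, 1],
              [0, 3, 1, 0]], dtype=float)      # symmetric edge weights
L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian

# partition {1, 3} and {2, 4}  (m1 = m2 = 2)
X = np.array([[1, 0],
              [0, 1],
              [1, 0],
              [0, 1]], dtype=float)

cut = sum(W[i, j] for i in range(4) for j in range(4)
          if X[i].argmax() != X[j].argmax()) / 2   # each edge counted once
print(np.trace(X.T @ L @ X), 2 * cut)              # equal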
The well-known Donath-Hoffman bound [6] z DH # z for GP is
z DH := max
where the # i are the eigenvalues of L + Diag(u), for u with e T u = 0. We will now show that the Donath-Hoffman bound can be obtained by
applying Lagrangian relaxation to an appropriate QQP relaxation of GP. (An SDP
formulation for this bound is given in [1].) Clearly, if X is a partition matrix, then x T i x i = 1, where x T i is the ith row of X. Moreover, the columns of X
are orthogonal with one another, and the norm of the jth column of X is # m j . It
follows that if X is a partition matrix, there is an n - n orthogonal matrix -
X such
that
where M is the k - k matrix
# .
In addition, note that x T
is the ith diagonal element of XX T , so the constraint
equivalent to -
i is the ith row of -
X. Since tr X T
tr LXX T , a lower bound z 1 # z can be defined by
We will now obtain a second bound z 2 # z 1 by applying a Lagrangian procedure to
all of the constraints in (4.2). Using symmetric matrices S and T for the constraints
respectively, and a vector of multipliers u i for the constraints
u,S,T
min
tr L -
Theorem 4.1. z 2 = z DH .
Proof. Rearranging terms and using Kronecker product notation, the definition
of z 2 can be rewritten as
u,S,T
+min
X),
and we are using the fact that
solves the implicit
minimization problem in the definition of z 2 , and if this constraint fails to hold, the
minimum is -#. Using this hidden semidefinite constraint, we can write
Note that if u
M for any scalar #, then
M# I),
M# I).
In addition, tr T It follows that we may choose
any normalization for e T u without affecting the value of z 2 . Choosing e T u = 0, we arrive at
However, as in the previous section, Proposition 3.1 and Theorem 3.2 together imply
that for any U , the solution value in the problem
is exactly
Therefore, we immediately have z 2 = z DH .
SDP relaxations for the GP problem are obtained via Lagrangian relaxation in
[45]. A useful corollary of Theorem 4.1 is that any Lagrangian relaxation based on a
more tightly constrained problem than (4.2) will produce bounds that dominate the
Donath-Hoffman bounds.
A problem closely related to the orthogonal relaxation of GP is the orthogonal
Procrustes problem on the Stiefel manifold; see [7, section 3.5.2]. This problem has a
linear term in the objective function, and there is no known analytic solution for the
general case.
5. A strengthened relaxation for max-cut. As discussed above, the SDP
relaxation for MC performs very well in practice and has strong theoretical proper-
ties. There have been attempts at further strengthening this relaxation. For example,
a copositive relaxation is presented in [39]. Adding cuts to the SDP relaxation is discussed
in [15, 16, 17, 18]. These improvements all involve heuristics, such as deciding
which cuts to choose or solving a copositive problem, which is NP-hard in itself.
The relaxation in (2.3) is obtained by lifting the vector x into matrix space using
X = xx T . Though the matrix X in the lifting is not an orthogonal matrix, it is a
partial isometry up to normalization, i.e., (5.1) X 2 = nX.
We will now show that we can improve the semidefinite relaxation presented in
section 2.5 by considering Lagrangian relaxations using the matrix quadratic constraint
(5.1). In particular, consider the relaxation of MC
s.t. diag (X) = e,
where X is a symmetric matrix. Note that if X = xx T with x # {-1, 1} n , then X 2 = nX
and diag (X 2 ) = ne. As a result, the above relaxation is equivalent to the relaxation
tr QX 2
where x T i denotes the ith row of X, and x 0 is a scalar. (Note that if
replacing X with -X leaves the objective and
constraints in (5.2) unchanged.) We will obtain an upper bound - 2 # - 1 by applying
a Lagrangian procedure to all of the constraints in (5.2). Using multipliers u i for the
constraints x T
for the constraint x 2
matrix S for the matrix equality X 2
we obtain a Lagrangian problem
tr QX 2
Letting - x problem can be written in
Kronecker product form as
ne T u
Q-x,
where
Applying the hidden semidefinite constraint -
we obtain an equivalent problem,
ne T u
Note that if we take clearly optimal and the problem
reduces to
s.t. -Q+ U # 0,
which is exactly the dual of (2.3), the usual SDP relaxation for MC. It follows that we
have obtained an upper bound - 2 which is a strengthening of the usual SDP bound,
The strengthened relaxation (5.3) involves a semidefiniteness constraint on a (n 2
as opposed to an n-n matrix in the usual SDP relaxation (2.3).
This dimensional increase can be mitigated by taking note of the fact that X in (5.2)
must be a symmetric matrix, and therefore (5.2) can actually be written as a problem
over a vector x of dimension n(n + 1)/2. In addition, alternative relaxations can be
obtained by not making the substitutions based on (5.1) used to obtain the problem
(5.2). The effect of these alternatives on the performance of strengthened SDP bounds
for MC is the topic of ongoing research; for up-to-date developments, see the URL
http://orion.uwaterloo.ca/-hwolkowi/henry/reports/strngthMC.ps.gz.
6. Conclusion. In this paper we have shown that a class of nonconvex quadratic
problems with orthogonal constraints can satisfy strong duality if certain seemingly redundant
constraints are added before the Lagrangian dual is formed. As applications
of this result we showed that well-known eigenvalue bounds for QAP and GP problems
can actually be obtained from the Lagrangian dual of QQP relaxations of these
problems. We also showed that the technique of relaxing quadratic matrix constraints
can be used to obtain strengthened SDP relaxations for the max-cut problem.
Adding constraints to close the duality gap is akin to adding valid inequalities in
cutting plane methods for discrete optimization problems. In [2, 24] this approach, in
combination with a lifting procedure, is used to solve discrete optimization problems.
In our case we add quadratic constraints. The idea of quadratic valid inequalities has
been used in [10]; and closing the duality gap has been discussed in [20].
Our success in closing the duality gap for the QQPO problem considered in section
3, where we have the special Kronecker product in the objective function, raises
several interesting questions. For example, can the strong duality result for QQPO
be extended to the same problem with an added linear term in the objective, or are
there some other special classes of objective functions where this is possible? Another
outstanding question is whether it is possible to add quadratic constraints to close
the duality gap for the TTRS.
--R
Interior point methods in semidefinite programming with applications to combinatorial optimization
Perturbation Bounds for Matrix Eigenvalues
An alternative theorem for quadratic forms and extensions
Lower bounds for the partitioning of graphs
The geometry of algorithms with orthogonality constraints
Approximation algorithms for quadratic programming
Semidefinite programming relaxation for nonconvex quadratic pro- grams
Semidefinite programming in combinatorial optimization
Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming
Bounds for the quadratic assignment problems using continuous optimization
A new lower bound via projection for the quadratic assignment problem
An Interior Point Method for Semidefinite Programming and Max-Cut Bounds
Fixing Variables in Semidefinite Relaxations
A spectral bundle method for semidefinite programming
An interior-point method for semidefinite programming
On the convergence of the method of analytic centers when applied to convex quadratic programs
Cones of Matrices and Successive Convex Relaxations of Non-convex Sets
A tour d'horizon on positive semidefinite and Euclidean distance matrix completion problems
An Analytic Center Based Column Generation Algorithm for Convex Quadratic Feasibility Problems
On the Extension of Frank-Wolfe Theorem
Local minimizers of quadratic functions on Euclidean balls and spheres
Interior Point Polynomial Algorithms in Convex Program- ming
The quadratic assignment problem: A survey and recent developments
Optimality conditions for the minimization of a quadratic with two quadratic constraints
Duality in quadratic programming and l p-approximation
Duality in quadratic programming and l p-approximation
Duality in quadratic programming and l p-approximation
A recipe for semidefinite relaxation for (0
Convex relaxations of 0-1 quadratic programming
Copositive relaxation for general quadratic programming
A semidefinite framework for trust region subproblems with applications to large scale minimization
A New Matrix-Free Algorithm for the Large-Scale Trust-Region Subproblem
Nauk SSSR Tekhn.
Indefinite trust region subproblems and nonsymmetric eigenvalue perturbations
On l p programming
Semidefinite relaxations for the graph partitioning problem
Approximating quadratic programming with bound and quadratic constraints
Some Properties of Trust Region Algorithms for Nonsmooth Optimization
On a subproblem of trust region algorithms for constrained optimization
A dual algorithm for minimizing a quadratic function with two quadratic constraints
Semidefinite programming relaxations for the quadratic assignment problem
--TR
--CTR
Henry Wolkowicz , Miguel F. Anjos, Semidefinite programming for discrete optimization and matrix completion problems, Discrete Applied Mathematics, v.123 n.1-3, p.513-577, 15 November 2002 | quadratically constrained quadratic programs;lagrangian relaxations;quadratic assignment;max-cut problems;graph partitioning;semidefinite programming |
587742 | Optimal Kronecker Product Approximation of Block Toeplitz Matrices. | This paper considers the problem of finding n n matrices and Bk that minimize $||T - \sum A_k \otimes B_k||_F$, where $\otimes$ denotes Kronecker product and T is a banded n n block Toeplitz matrix with banded n n Toeplitz blocks. It is shown that the optimal and Bk are banded Toeplitz matrices, and an efficient algorithm for computing the approximation is provided. An image restoration problem from the Hubble Space Telescope (HST) is used to illustrate the effectiveness of an approximate SVD preconditioner constructed from the Kronecker product decomposition. | Introduction
. A Toeplitz matrix is characterized by the property that its
entries are constant on each diagonal. Toeplitz and block Toeplitz matrices arise
naturally in many signal and image processing applications; see, for example, Bunch
[4] and Jain [17] and the references therein. In image restoration [21], for instance,
one needs to solve large, possibly ill-conditioned linear systems in which the coefficient
matrix is a banded block Toeplitz matrix with banded Toeplitz blocks (bttb).
Iterative algorithms, such as conjugate gradients (cg), are typically recommended
for large bttb systems. Matrix-vector multiplications can be done efficiently using
fast Fourier transforms [14]. In addition, convergence can be accelerated by preconditioning
with block circulant matrices with circulant blocks (bccb). A circulant matrix
is a Toeplitz matrix in which each column (row) can be obtained by a circular shift of
the previous column (row), and a bccb matrix is a natural extension of this structure
to two dimensions; c.f. Davis [10].
Circulant and bccb approximations are used extensively in signal and image
processing applications, both in direct methods which solve problems in the "Fourier
domain" [1, 17, 21], and as preconditioners [7]. The optimal circulant preconditioner
introduced by Chan [8] finds the closest circulant matrix in the Frobenius norm. Chan
and Olkin [9] extend this to the block case; that is, a bccb matrix C is computed to
minimize jjT \Gamma Cjj F .
bccb approximations work well for certain kinds of bttb matrices [7], especially
if the unknown solution is almost periodic. If this is not the case, however, the
performance of bccb preconditioners can degrade [20]. Moreover, Serra-Capizzano
and Tyrtyshnikov [6] have shown recently that it may not be possible to construct a
bccb preconditioner that results in superlinear convergence of cg.
Here we consider an alternative approach: optimal Kronecker product approxi-
mations. A Kronecker product
A\Omega B is defined as
the block matrix whose (i, j) block is a ij B.
Raytheon Systems Company, Dallas,
y Department of Mathematics and Computer Science, Emory University, Atlanta, GA 30322
(nagy@mathcs.emory.edu).
In particular, we consider the problem of finding matrices A k , B k to minimize
(1.1) jjT \Gamma (A 1\Omega B 1 + \Delta \Delta \Delta + A s\Omega B s )jj F ,
where T is an n 2 \Theta n 2 banded bttb matrix, and A k , B k are n \Theta n banded Toeplitz
matrices. A general approach for constructing such an optimal approximation was
proposed by Van Loan and Pitsianis [25] (see also Pitsianis [23]). Their approach,
which we describe in more detail in Section 2, requires computing principal singular
values and vectors of an n 2 \Theta n 2 matrix related to T .
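To make the Van Loan-Pitsianis approach concrete, here is a small illustrative sketch (not from the paper): the block matrix is rearranged so that each row is the vec of one block, a rank-s SVD of the rearranged matrix is taken, and the singular vectors are reshaped into the factors A k , B k . The row ordering used below (blocks in column-major order) is one standard choice; the paper's own notation for this rearrangement is reviewed in Section 2.

import numpy as np

def tilde(T, n):
    # Rearrangement of an n^2 x n^2 block matrix T (n x n blocks of size n x n):
    # row (i, j) of the result is vec(T_{ij})^T, blocks taken column-major.
    rows = []
    for j in range(n):
        for i in range(n):
            blk = T[i * n:(i + 1) * n, j * n:(j + 1) * n]
            rows.append(blk.flatten(order='F'))
    return np.array(rows)

def kron_approx(T, n, s):
    # Best sum of s Kronecker products, via a rank-s SVD of tilde(T).
    U, sig, Vt = np.linalg.svd(tilde(T, n))
    terms = []
    for k in range(s):
        A = (np.sqrt(sig[k]) * U[:, k]).reshape((n, n), order='F')
        B = (np.sqrt(sig[k]) * Vt[k, :]).reshape((n, n), order='F')
        terms.append((A, B))
    return terms

# quick check: a matrix that *is* a Kronecker product is recovered exactly
rng = np.random.default_rng(6)
n = 4
A0, B0 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
T = np.kron(A0, B0)
(A, B), = kron_approx(T, n, 1)
print(np.allclose(np.kron(A, B), T))   # True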
An alternative approach for computing a Kronecker product approximation T -
A\Omega B for certain deconvolution problems was proposed by Thirumalai [24]. A similar
approach for banded bttb matrices was considered by Nagy [22]. As opposed to
the method of Van Loan and Pitsianis, the schemes described in [22, 24] require
computing principal singular values and vectors of an array having dimension at most
n \Theta n, and thus can be substantially less expensive. Moreover, Kamm and Nagy [20]
show how these approximations can be used to efficiently construct approximate svd
preconditioners.
Numerical examples in [20, 22, 24] indicate that this more efficient approach can
lead to preconditioners that perform better than bccb approximations. However,
theoretical results establishing optimality of the approximations, such as in equation
(1.1), were not given. In this paper, we provide these results. In particular, we show
that some modifications to the method proposed in [22, 24] are needed to obtain an
approximation of the form (1.1). Our theoretical results lead to an efficient algorithm
for computing Kronecker product approximations of banded bttb matrices.
This paper is organized as follows. Some notation is defined, and a brief review of
the method proposed by Van Loan and Pitsianis is provided in Section 2. In Section
3 we show how to exploit the banded bttb structure to obtain an efficient scheme
for computing terms in the Kronecker product decomposition. A numerical example
from image restoration is given in Section 4.
2. Preliminaries and Notation. In this section we establish some notation to
be used throughout the paper, and describe some previous work on Kronecker product
approximations. To simplify notation, we assume T is an n \Theta n block matrix with
n \Theta n blocks.
2.1. Banded bttb Matrices. We assume that the matrix T is a block banded
Toeplitz matrix with banded Toeplitz blocks (bttb), so it can be uniquely determined
by a single column t which contains all of the non-zero values in T ; that is, some central
column. It will be useful to define an n \times n array P satisfying vec(P) = t, where the
vec operator transforms matrices into vectors by stacking columns as follows:
vec([a_1, a_2, \ldots, a_n]) = [a_1 ; a_2 ; \ldots ; a_n].
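As a small aside, the vec operation corresponds to a column-major (Fortran-order) flattening; the NumPy snippet below, with an arbitrary 3 x 3 example of ours, only makes the convention concrete.

import numpy as np

def vec(X):
    # Stack the columns of X on top of each other.
    return X.flatten(order="F")

P = np.arange(9.0).reshape(3, 3)
print(vec(P))   # [0. 3. 6. 1. 4. 7. 2. 5. 8.]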
Suppose further that the entry of P corresponding to the diagonal of T is known 1 .
For example, suppose that
where the diagonal of T is located at (i; is the sixth
column of T , and we write
In general, if the diagonal of T is then the upper and lower block bandwidths of
are respectively. The upper and lower bandwidths of each Toeplitz
block are
In a similar manner, the notation X = toep(x, i) is used to represent a banded
point Toeplitz matrix X constructed from the vector x, where x_i corresponds to the
diagonal entry. For example, if the second component of the vector x = [x_1, x_2, x_3]^T
corresponds to the diagonal element of a banded Toeplitz matrix X, then
X = toep(x, 2) = [ x_2  x_1  0 ;  x_3  x_2  x_1 ;  0  x_3  x_2 ].
2.2. Kronecker Product Approximations. In this subsection we review the
work of Van Loan and Pitsianis. We require the following properties of Kronecker
products:
• (A \otimes B)(C \otimes D) = (AC) \otimes (BD),
• If U_1 and U_2 are orthogonal matrices, then U_1 \otimes U_2 is an orthogonal matrix.
A more complete discussion and additional properties of Kronecker products can be
found in Horn and Johnson [16] and Graham [13].
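Both properties are easy to confirm numerically; the random test matrices below are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))

# Mixed-product property: (A kron B)(C kron D) = (AC) kron (BD).
print(np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))   # True

# Orthogonality is preserved: if U1, U2 are orthogonal, so is U1 kron U2.
U1, _ = np.linalg.qr(rng.standard_normal((3, 3)))
U2, _ = np.linalg.qr(rng.standard_normal((3, 3)))
U = np.kron(U1, U2)
print(np.allclose(U.T @ U, np.eye(9)))   # True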
Van Loan and Pitsianis [25] (see also Pitsianis [23]) propose a general technique
for an approximation involving Kronecker products where \|T - \sum_{k=1}^{s} A_k \otimes B_k\|_F is
minimized. By defining the transformation to tilde space of a block matrix T ,
In image restoration, P is often referred to as a "point spread function", and the diagonal entry
is the location of the "point source". See Section 4 for more details.
denoted \tilde{T} (a rearrangement of the entries of T into an n^2 \times n^2 array), it is shown in [23, 25] that
\|T - \sum_{k=1}^{s} A_k \otimes B_k\|_F = \|\tilde{T} - \sum_{k=1}^{s} \tilde{a}_k \tilde{b}_k^T\|_F ,
where \tilde{a}_k = vec(A_k) and \tilde{b}_k = vec(B_k). Thus, the Kronecker product approximation
problem is reduced to a rank-s approximation problem. Given the svd \tilde{T} = \sum_i \sigma_i u_i v_i^T ,
it is well known [12] that the rank-s approximation \tilde{T}_s = \sum_{k=1}^{s} \sigma_k u_k v_k^T minimizes
\|\tilde{T} - \tilde{T}_s\|_F over all rank-s approximations. Choosing \tilde{a}_k = \sqrt{\sigma_k}\, u_k and \tilde{b}_k = \sqrt{\sigma_k}\, v_k
therefore minimizes \|\tilde{T} - \sum_{k=1}^{s} \tilde{a}_k \tilde{b}_k^T\|_F over all rank-s approximations, and thus
one can construct an approximation \hat{T} = \sum_{k=1}^{s} A_k \otimes B_k .
This general technique requires computing the largest s singular triplets of an
n^2 \times n^2 matrix, which may be expensive for large n. Thirumalai [24] and Nagy [22]
show that a Kronecker product approximation of a banded bttb matrix T can be
found by computing the largest s singular triplets of the n \Theta n array P . However, this
method does not find the Kronecker product which minimizes the Frobenius norm
approximation problem in equation (1.1). In the next section we show that if T is a
banded bttb matrix, then this optimal approximation can be computed from an svd
of a weighted version of the n \Theta n array P .
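The tilde-space identity reviewed above is easy to verify numerically. The sketch below uses an explicit (and deliberately small) rearrangement routine; the function name rearrange and the test sizes are our own choices, not code from [22, 24, 25].

import numpy as np

def rearrange(T, n):
    # Van Loan-Pitsianis rearrangement of an n^2 x n^2 block matrix with
    # n x n blocks, so that rearrange(kron(A, B)) = vec(A) vec(B)^T.
    R = np.zeros((n * n, n * n))
    for j in range(n):               # block column index
        for i in range(n):           # block row index
            block = T[i * n:(i + 1) * n, j * n:(j + 1) * n]
            R[j * n + i, :] = block.flatten(order="F")
    return R

rng = np.random.default_rng(2)
n = 4
T = rng.standard_normal((n * n, n * n))
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

lhs = np.linalg.norm(T - np.kron(A, B), "fro")
rhs = np.linalg.norm(rearrange(T, n) - np.outer(A.flatten("F"), B.flatten("F")), "fro")
print(np.isclose(lhs, rhs))   # True: the two minimization problems coincide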
3. bttb Optimal Kronecker Product Approximation. Recall that the Van
Loan and Pitsianis approach minimizes \|T - \sum_{k=1}^{s} A_k \otimes B_k\|_F for a general (unstructured)
matrix T , by minimizing \|\tilde{T} - \sum_{k=1}^{s} \tilde{a}_k \tilde{b}_k^T\|_F . If it is assumed that A_k
and B_k are banded Toeplitz matrices, then the array P associated with the central
column of T can be weighted and used to construct an approximation which minimizes
\|\tilde{T} - \sum_{k=1}^{s} \tilde{a}_k \tilde{b}_k^T\|_F .
Theorem 3.1. Let T be the n 2 \Theta n 2 banded bttb matrix constructed from P ,
is the diagonal element of T (therefore, the upper and lower block bandwidths
of T are and the upper and lower bandwidths of each Toeplitz block
are be an n \Theta n banded Toeplitz matrix with upper
lower bandwidth be an n \Theta n banded Toeplitz
matrix with upper bandwidth lower bandwidth n \Gamma j. Define a k and b k such
that A
Then
\|T - \sum_{k=1}^{s} A_k \otimes B_k\|_F = \|\tilde{T} - \sum_{k=1}^{s} \tilde{a}_k \tilde{b}_k^T\|_F
                                 = \|P_w - \sum_{k=1}^{s} (W_a a_k)(W_b b_k)^T\|_F ,
where P_w = W_a P W_b .
Proof. See Section 3.1. 2
Therefore, if A_k and B_k are constrained to be banded Toeplitz matrices, then
\|T - \sum_{k=1}^{s} A_k \otimes B_k\|_F can be minimized by finding a_k , b_k which minimize
\|P_w - \sum_{k=1}^{s} (W_a a_k)(W_b b_k)^T\|_F . This is a rank-s approximation problem, involving a matrix
of relatively small dimension, which can be solved using the svd of P_w . Noting
that W_a and W_b are diagonal matrices which do not need to be formed explicitly, the
construction of \hat{T} = \sum_{k=1}^{s} A_k \otimes B_k , where A_k and B_k are
banded Toeplitz matrices, can be computed as follows (a sketch of these steps in code is given after the list):
• Define the weight vectors w_a and w_b based on the (i, j) location (in P ) of the
diagonal entry of T .
• Calculate P_w = (w_a w_b^T) .* P and its svd P_w = \sum_k \sigma_k u_k v_k^T , where ".*"
denotes point-wise multiplication.
• Calculate a_k = (\sqrt{\sigma_k}\, u_k) ./ w_a and b_k = (\sqrt{\sigma_k}\, v_k) ./ w_b , where "./" denotes point-wise division.
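A compact sketch of the three steps, assuming the weight vectors w_a and w_b have already been formed from the location of the diagonal entry; assembling the banded Toeplitz factors A_k = toep(a_k, ·) and B_k = toep(b_k, ·) from the returned vectors is omitted. Variable names follow the text; this is an illustration of the data flow, not a drop-in implementation of [20].

import numpy as np

def kronecker_factor_vectors(P, w_a, w_b, s):
    # Weighted array P_w = (w_a w_b^T) .* P  (point-wise multiplication).
    Pw = np.outer(w_a, w_b) * P
    U, sig, Vt = np.linalg.svd(Pw)
    # Undo the weighting point-wise to recover the generating vectors.
    a = [np.sqrt(sig[k]) * U[:, k] / w_a for k in range(s)]
    b = [np.sqrt(sig[k]) * Vt[k, :] / w_b for k in range(s)]
    return a, b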
The proof of Theorem 3.1 is based on observing that ~
T has at most n unique rows
and n unique columns, which consist precisely of the rows and columns of P . This
observation will become clear in the following subsection.
3.1. Proof of Theorem 3.1. To prove Theorem 3.1, we first observe that if a
matrix has one row which is a scalar multiple of another row, then a rotator can be
constructed to zero out one of these rows; i.e., a plane rotation applied to the two
rows [ x^T ; \alpha x^T ] produces [ \sqrt{1 + \alpha^2}\, x^T ; 0 ].
If this is extended to the case where more than two rows are repeated, then a simple
induction proof can be used to establish the following lemma.
Lemma 3.2. Suppose an n \times n matrix X has k identical rows, each equal to x^T .
Then a sequence of k - 1 orthogonal plane rotators G_1 , \ldots , G_{k-1} can be constructed
such that in G_{k-1} \cdots G_1 X one of these rows becomes \sqrt{k}\, x^T and the other k - 1
copies become zero, thereby zeroing out all the duplicate rows.
It is easily seen that this result can be applied to the columns of a matrix as well,
using the transpose of the plane rotators defined in Lemma 3.2.
Lemma 3.3. Suppose an n \times n matrix X contains k identical columns, each equal to x.
Then an orthogonal matrix Q can be constructed from a series of plane rotators such
that in XQ one of these columns becomes \sqrt{k}\, x and the other k - 1 columns become zero.
The above results illustrate the case where the first occurrence of a row (column)
is modified to zero out the remaining occurrences. However, this is for notational
convenience only. By appropriately constructing the plane rotators, any one of the
duplicate rows (columns) may be selected for modification, and the remaining rows
(columns) zeroed out. These rotators can now be applied to the matrix ~
T .
Lemma 3.4. Let T be the n 2 \Theta n 2 banded bttb matrix constructed from P , where
ij is the diagonal entry of T . In other words,
define
~
Then orthogonal matrices Q_1 and Q_2 can be constructed such that
Q_1^T \tilde{T} Q_2 = [ W_a P W_b   0 ;  0   0 ].
Proof. By definition,
representing ~
T using the n \Theta n 2 submatrices ~
~
~
~
~
it is clear that ~
contains only n unique rows, which are ~ t T
n , and that the i th
submatrix, ~
contains all the unique rows, i.e.,
~
Furthermore, it can be seen that there are occurrences of ~ t T
of ~ t T
occurrences of ~ t T
occurrences of ~ t T
of ~ t T
. Therefore, a sequence of orthogonal plane rotators can be constructed to zero
out all rows of ~
T except those in the submatrix ~
~
W a
~
partitioning ~
~
\Theta ~
T in
where each ~
ij is an n \Theta n submatrix, it can be seen that ~
contains only n unique
columns, which are the columns of P , and that the j th submatrix ~
contains all the unique columns, i.e.,
~
Furthermore, the matrix ~
occurrences of p 1
of occurrences of pn .
Therefore, a sequence of orthogonal plane rotators can be constructed such that
~
:The following properties involving the vec and toep2 operators are needed.
Lemma 3.5. Let T , ~
T , and P be defined as in Lemma 3.4. Further, let A k be an
n \Theta n banded Toeplitz matrix with upper bandwidth lower bandwidth
and let B k be an n \Theta n banded Toeplitz matrix with upper bandwidth lower
bandwidth j. Define a k and b k such that A
Then
1. are any two matrices of the
same size,
2. toep2(x; are any two
vectors of the same length,
3. toep2fvec[(
4.
Proof. Properties 1 and 2 are clear from the definitions of the vec and toep2
operators. Property 3 can be seen by considering the banded Toeplitz matrices
toep(a; i) and noting that the central column of
all the non-zero entries is
an b 1
an
Therefore, property 3 holds when both sides are banded bttb matrices
constructed from the same central column, and can be extended to
applying property 2. Property 4 follows from properties 2 and 3. 2
Using these properties, Lemma 3.4 can be extended to the matrix ~
~ a k
~
k .
Lemma 3.6. Let T be the n 2 \Theta n 2 banded bttb matrix constructed from P , where
ij is the diagonal entry of T . Further, let A k be an n \Theta n banded Toeplitz matrix
with upper bandwidth lower bandwidth be an n \Theta n banded
Toeplitz matrix with upper bandwidth j. Define a k and b k such that
a
Let ~
T , W a , and W b be defined as in Lemma 3.4. Then orthogonal matrices Q 1 and
can be constructed such that
s
~ a k
~
Proof. Using Lemma 3.5,
s
A
s
a
By definition of the transformation to tilde space,
s
A
s
~
a k
Applying Lemma 3.4 to T \Gamma
s
~ a k
~
a
:The proof of Theorem 3.1 follows directly from Lemma 3.6 by noting that
s
~ a k
~
s
~ a k
~
s
a
s
(W a a k )(W b b k ) T
3.2. Further Analysis. It has been shown how to minimize \|T - \hat{T}\|_F when the
structure of \hat{T} is constrained to be a sum of Kronecker products of banded Toeplitz
matrices. We now show that if T is a banded bttb matrix, then the matrix \hat{T} =
\sum_{k=1}^{s} A_k \otimes B_k minimizing \|T - \hat{T}\|_F must adhere to this structure. Therefore, the
approximation constructed above minimizes \|T - \hat{T}\|_F over all \hat{T} = \sum_{k=1}^{s} A_k \otimes B_k when T is a
banded bttb matrix.
If T is a banded bttb matrix, then the rows and columns of ~
T have a particular
structure. To represent this structure, using an approach similar to Van Loan and
Pitsianis [25], we define the constraint matrix S n;! . Given an n \Theta n banded Toeplitz
matrix T , with upper and lower bandwidths
is an
matrix such that S T
be a
4 \Theta 4 banded Toeplitz matrix with bandwidths !
and
Note that S T
n;! clearly has full row rank. Given the matrix T in (2.2),
~
and the rows and columns of ~
~
~
. Using the structure of ~
T , the matrix -
A
minimizing
must be structured such that A i and B i are banded Toeplitz matrices, as
the following sequence of results illustrate.
Lemma 3.7. Let
\Theta a 1 a 2 \Delta \Delta \Delta an
be the n \Theta n matrix whose structure
is constrained by S T
n;! a
be the svd of A, where
n;!
Proof. Given the svd of A,
n;!
By definition, S T
n;!
Applying this result to A T , it is clear that the right singular vectors of A satisfy
if the rows of A are structured in the same manner.
Lemma 3.8. Let A =6 6 6 4
a
be the n \Theta n matrix whose structure is constrained
by S T
n;! a
i be the svd of A, where
Theorem 3.9. Let T be an n \Theta n banded block Toeplitz matrix with n \Theta n banded
Toeplitz blocks, where the upper and lower block bandwidths of T are
and the upper and lower bandwidths of each Toeplitz block are
\Theta fl u fl l
. Then
the matrices A i and B i minimizing
banded Toeplitz matrices, where the upper and lower bandwidths
of A i are given by !, and the upper and lower bandwidths of B i are given by fl.
Proof. Recall that
(~ a i
where
. The structure of T results in rank( ~
~
~
Letting ~
i be the
svd of ~
(~ a i
~
is minimized by ~
a
Therefore, A i is an n \Theta n banded Toeplitz matrix with
upper and lower bandwidths given by !, and B i is an n \Theta n banded Toeplitz matrix
with upper and lower bandwidths given by fl. 2
3.3. Remarks on Optimality. The approach outlined in this section results
in an optimal Frobenius norm Kronecker product approximation to a banded bttb
matrix. The approximation is obtained from the principal singular components of
an array Pw = W a PW b . It might be interesting to consider whether it is possible
to compute approximations which are optimal in another norm. In particular, the
method considered in [20, 22, 24] uses a Kronecker product approximation computed
from the principal singular components of P . Unfortunately we are unable to show
that this leads to an optimal norm approximation. However, there is a very close
relationship between the approaches. Since W a and W b are full rank, well-conditioned
diagonal matrices, P and Pw have the same rank. Although it is possible to establish
bounds on the singular values of products of matrices (see, for example, Horn and
Johnson [15]), we have not been able to determine a precise relationship between
the Kronecker product approximations obtained from the two methods. However we
have found through extensive numerical results that both methods give similarly good
approximations. Since numerical comparisons do not provide any additional insight
into the quality of the approximation, we omit such results. Instead, in the next
section we provide an example from an application that motivated this work, and
illustrate how a Kronecker product approximation might be used in practice. We
note that further comparisons with bccb approximations can be found in [20, 24].
4. An Image Restoration Example. In this section we consider an image
restoration example, and show how the Kronecker product approximations can be
used to construct an approximate svd preconditioner. Image restoration is often
modeled as a linear system:
where b is an observed blurred, noisy image, T is a large, often ill-conditioned matrix
representing the blurring phenomena, n is noise, and x is the desired true image. If
the blur is assumed to be spatially invariant, then T is a banded bttb matrix [1, 21].
In this case, the array P corresponding to a central column of T is called a point
spread function (psf).
The test data we use consists of a partial image of Jupiter taken from the Hubble
Space Telescope (hst) in 1992, before the mirrors in the Wide Field Planetary Camera
were fixed. The data was obtained via anonymous ftp from ftp.stsci.edu, in
the directory pub/stsdas/testdata/restore/data/jupiter. Figure 4.1 shows the
observed image. Also shown in Figure 4.1 is a mesh plot of the psf, P , where the
peak corresponds to the diagonal entry of T . The observed image is 256 \Theta 256, so T
is 65; 536 \Theta 65; 536.
a. Observed, blurred image.   b. psf, P .
Fig. 4.1. Observed hst image and point spread function.
We mention that if T is ill-conditioned, which is often the case in image restora-
tion, then regularization is needed to suppress noise amplification in the computed
solution [21]. Although T is essentially too large to compute its condition number,
certain properties of the data indicate that T is fairly well conditioned. For instance,
we observe that the psf is not very smooth (smoother psfs typically indicate more
ill-conditioned T ). Another indication comes from the fact that the optimal circulant
approximation of T , as well as our approximate svd of T (to be described below) are
well conditioned; specifically these approximations have condition numbers that are
approximately 20.
We also mention that if the psf can be expressed as a rank-one outer product (i.e., it has
rank 1), then the matrix T is separable. Using Theorem 3.1, T = A \otimes B for banded
Toeplitz matrices A and B. Efficient numerical methods that exploit the
Kronecker product structure of T (e.g., [2, 5, 11]) can then be used.
However, as can be seen from the plot of the singular values of P in Figure 4.2,
for this data, P is not rank one, and so T is not separable.
Fig. 4.2. Singular values of the psf, P .
We therefore suggest constructing an approximate svd to use as a preconditioner, and solve the least squares
problem using a conjugate gradient algorithm, such as cgls; see Bj-orck [3].
This preconditioning idea was proposed in [20], and can be described as follows. Given
the Kronecker product approximation
(4.1)    T \approx \sum_{k=1}^{s} A_k \otimes B_k
and the SVDs A_1 = U_A \Sigma_A V_A^T and B_1 = U_B \Sigma_B V_B^T , an svd approximation of T can be constructed as
M = (U_A \otimes U_B)\, \Sigma\, (V_A \otimes V_B)^T ,
where \Sigma is diagonal. Note that the number of terms s only
affects the setup cost of calculating \Sigma. For s \geq 1, the choice
\Sigma = diag[ (U_A \otimes U_B)^T T (V_A \otimes V_B) ]
clearly solves the minimization problem
min_{\Sigma} \|T - (U_A \otimes U_B)\, \Sigma\, (V_A \otimes V_B)^T\|_F
over all diagonal matrices \Sigma and therefore produces an optimal svd approximation,
given a fixed U = U_A \otimes U_B and V = V_A \otimes V_B . This is analogous to the circulant and
bccb approximations discussed earlier, which provide an optimal eigendecomposition
given a fixed set of eigenvectors (i.e., the Fourier vectors).
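The construction is easy to prototype for a small dense example. In the sketch below we form the diagonal of (U_A kron U_B)^T T (V_A kron V_B) explicitly, which gives the optimal diagonal for fixed singular vector factors; in practice this product is of course never formed densely, and the sizes and random test data are our own choices.

import numpy as np

rng = np.random.default_rng(1)
n = 4
A1, B1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A2, B2 = 0.1 * rng.standard_normal((n, n)), 0.1 * rng.standard_normal((n, n))
T = np.kron(A1, B1) + np.kron(A2, B2)          # small two-term Kronecker test matrix

UA, _, VAt = np.linalg.svd(A1)
UB, _, VBt = np.linalg.svd(B1)
U, V = np.kron(UA, UB), np.kron(VAt.T, VBt.T)  # fixed singular vector factors

Sigma = np.diag(np.diag(U.T @ T @ V))          # optimal diagonal for fixed U, V
M = U @ Sigma @ V.T                            # approximate SVD of T (preconditioner)
print(np.linalg.norm(T - M, "fro") / np.linalg.norm(T, "fro"))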
In our tests, we use cgls to solve the ls problem using no preconditioner,
our approximate svd preconditioner (with terms in equation (4.1)) and the optimal
circulant preconditioner. Although we observed that T is fairly well conditioned,
we should still be cautious about noise corrupting the computed restorations. There-
fore, we use a conservative stopping tolerance based on the residual of the normal
equations, \|T^T b - T^T T x\|_2 .
Table 4.1 shows the number of iterations needed for convergence in each case, and
in Figure 4.3 we plot the corresponding residuals at each iteration. The computed
solutions are shown in Figure 4.4, along with the hst observed, blurred image for
comparison.
Table 4.1
Number of cgls and pcgls iterations needed for convergence.
cgls, no prec.: 43    pcgls, circulant prec.: 12    pcgls, svd prec.: 4
Fig. 4.3. Plot of the residuals at each iteration.
5. Concluding Remarks. Because the image and psf used in the previous
section come from actual hst data, we cannot get an analytical measure on the accuracy
of the computed solutions. However, we observe from Figure 4.4 that all
solutions appear to be equally good restorations of the image, and from Figure 4.3
we see that the approximate svd preconditioner is effective at reducing the number
of iterations needed to obtain the solutions. Additional numerical examples comparing
the accuracy of computed solutions, as well as computational cost of bccb and
the approximation svd preconditioner, can be found in [19, 20]. A comparison of
a. hst blurred image.   b. cgls solution, 43 iterations.
c. pcgls solution, circ. prec., 12 its.   d. pcgls solution, svd prec., 4 its.
Fig. 4.4. The observed image, along with computed solutions from cgls and pcgls.
computational complexity between bccb preconditioners and the approximate svd
preconditioner depends on many factors. For example:
• What is the dimension of P (i.e., the bandwidths of T )?
• Is a Lanczos scheme used to compute svds of P , A_1 and B_1 ?
• Do we take advantage of band and Toeplitz structure when forming matrix-matrix
products involving U_A , U_B , V_A , V_B and A_k , B_k ?
• How many terms, s, do we take in the Kronecker product approximation?
• For bccb preconditioners: is n a power of 2?
In this setting, setup and application of the approximate
svd preconditioner is at most O(n^3). If we further assume that n is a power of 2, then
the corresponding cost for bccb preconditioners is at least O(n 2 log 2 n). It should be
noted that the approximate svd preconditioner does not require complex arithmetic,
does not require n to be a power of 2, or any zero padding. Moreover, decomposing
T into a sum of Kronecker products, whose terms are banded Toeplitz matrices,
might lead to other fast algorithms (as has occurred over many years of studying
displacement structure [18]). In this case, the work presented in this paper provides
an algorithm for efficiently computing an optimal Kronecker product approximation.
--R
Restoration of images degraded by spatially varying pointspread functions by a conjugate gradient method
Stability of methods for solving Toeplitz systems of equations
Application of ADI iterative methods to the image restoration of noisy images
Any circulant-like preconditioner for multilevel Toeplitz matrices is not superlinear
Conjugate gradient methods for Toeplitz systems
An optimal circulant preconditioner for Toeplitz systems
Preconditioners for Toeplitz-block matrices
Algorithms for the regularization of ill-conditioned least squares problems with tensor product structure
Matrix Computations
Kronecker Products and Matrix Calculus: with Applications
Restoration of atmospherically blurred images by symmetric indefinite conjugate gradient techniques
Matrix Analysis
Fundamentals of Digital Image Processing
Theory and applications
Singular value decomposition-based methods for signal and image restoration
Kronecker product and SVD approximations in image restoration
Iterative Identification and Restoration of Images
Decomposition of block Toeplitz matrices into a sum of Kronecker products with applications in image restoration
The Kronecker Product in Approximation and Fast Transform Generation
High performance algorithms to solve Toeplitz and block Toeplitz matrices
Approximation with Kronecker products
--TR
--CTR
S. Serra Capizzano , E. Tyrtyshnikov, How to prove that a preconditioner cannot be superlinear, Mathematics of Computation, v.72 n.243, p.1305-1316, July | conjugate gradient method;block Toeplitz matrix;singular value decomposition;kronecker product;image restoration;preconditioning |
587755 | Computing Symmetric Rank-Revealing Decompositions via Triangular Factorization. | We present a family of algorithms for computing symmetric rank-revealing VSV decompositions based on triangular factorization of the matrix. The VSV decomposition consists of a middle symmetric matrix that reveals the numerical rank in having three blocks with small norm, plus an orthogonal matrix whose columns span approximations to the numerical range and null space. We show that for semidefinite matrices the VSV decomposition should be computed via the ULV decomposition, while for indefinite matrices it must be computed via a URV-like decomposition that involves hypernormal rotations. | Introduction
. Rank-revealing decompositions of general dense matrices are
widely used in signal processing and other applications where accurate and reliable
computation of the numerical rank, as well as the numerical range and null space,
are required. The singular value decomposition (SVD) is certainly a decomposition
that reveals the numerical rank, but what we have in mind here are the RRQR and
(i.e., URV and ULV) decompositions which can be computed and, in particular,
updated more eciently than the SVD. See, e.g., [7, xx2.7.5{2.7.7], [20, x2.2] and [33,
Chapter 5] for details and references to theory, algorithms, and applications.
The key to the efficiency of RRQR and UTV algorithms is that they consist of an
initial triangular factorization which can be tailored to the particular matrix, followed
by a rank-revealing post-processing step. If the matrix is m \times n with m \geq n and with
numerical rank k, then the initial triangular factorization requires O(mn^2) flops, while
the rank-revealing step only requires O((n - k)n^2) flops if k \approx n, and O(kn^2) flops if
k \ll n. The updating can always be done in O(n^2) flops, when implemented properly.
We refer to the original papers [9], [10], [16], [18], [19], [23], [31], [32] for details about
the algorithms.
For structured matrices (e.g., Hankel and Toeplitz matrices), the initial triangular
factorization in the RRQR and UTV algorithms has the same complexity as the
rank-revealing step, namely, O(mn) flops; see [7, §8.4.2] for signal processing aspects.
However, accurate principal singular values and vectors can also be computed by
means of Lanczos methods in the same complexity, O(mn) flops [13]. Hence the
advantage of a rank-revealing decomposition depends on the matrix structure and
the numerical rank of the matrix.
Rank-revealing decompositions of general sparse matrices are also in use, e.g., in
optimization and geometric design [27]. For sparse matrices, the initial pivoted triangular
factorization can exploit the sparsity of A. However, the UTV post-processors
may produce a severe amount of fill, while the fill in the RRQR post-processor is
restricted to lie in the columns that are permuted to the right of the triangular factor
[7, Thm. 6.7.1]. An alternative sparse URL decomposition A = U R L, where U is
orthogonal and R and L are upper and lower triangular, respectively, was proposed
in [26]. This decomposition can be computed with less fill, at the expense of working
with only one orthogonal matrix.
* P. Y. Yalamov was supported by a fellowship from the Danish Rectors' Conference and by Grants
MM-707/97 and I-702/97 from the National Scientific Research Fund of the Bulgarian Ministry of
Education and Science.
† Department of Mathematical Modelling, Technical University of Denmark, Building 321, DK-
2800 Lyngby, Denmark (pch@imm.dtu.dk).
‡ Center of Applied Mathematics and Informatics, University of Rousse, 7017 Rousse, Bulgaria
(yalamov@ami.ru.acad.bg).
Numerically rank-deficient symmetric matrices also arise in many applications,
notably in signal processing and in optimization algorithms (such as those based on
interior point and continuation methods). In both areas, fast computation and efficient
updating are key issues, and sparsity is also an issue in some optimization
problems. Symmetric rank-revealing decompositions enable us to compute symmetric
rank-deficient matrix approximations (obtained by neglecting blocks in the rank-revealing
decomposition with small norm). This is important, e.g., in rank-reduction
algorithms in signal processing where one wants to compute rank-deficient symmetric
semidefinite matrices. In addition, utilization of symmetry leads to faster algorithms,
compared to algorithms for nonsymmetric matrices.
In spite of this, very little work has been done on symmetric rank-revealing decompositions.
Luk and Qiao [24] introduced the term VSV decomposition and proposed
an algorithm for symmetric indefinite Toeplitz matrices, while Baker and DeGroat [2]
presented an algorithm for symmetric semi-definite matrices.
The purpose of this paper is to put the work in [2] and [24] into a broader perspective
by surveying possible rank-revealing VSV decompositions and algorithms,
including the underlying theory. Our emphasis is on algorithms which, in addition
to revealing the numerical rank, provide accurate estimates of the numerical range
and null space. We build our algorithms on existing methods for computing rank-revealing
decompositions of triangular matrices, based on orthogonal transformations.
Our symmetric decompositions and algorithms inherit the properties of these underlying
algorithms, which are well understood today.
We emphasize that the goal of this paper is not to present detailed implementations
of our VSV algorithms, but rather to set the stage for such implementations.
The papers [4] and [28] clearly demonstrate that careful implementations of efficient
and robust mathematical software for numerically rank-deficient problems require a
major amount of research which is outside the scope of the present paper.
Our paper is organized as follows. After briefly surveying general rank-revealing
decompositions in §2, we define and analyze the rank-revealing VSV decomposition of
a symmetric matrix in §3. Numerical algorithms for computing VSV decompositions
of symmetric semi-definite and indefinite matrices are presented in §4, and we conclude
with some numerical examples in §5.
2. General Rank-Revealing Decompositions. In this paper we restrict our
attention to real square n \times n matrices. The singular value decomposition (SVD) of
a square matrix is given by
A = U \Sigma V^T = \sum_{i=1}^{n} \sigma_i u_i v_i^T ,
where u_i and v_i are the columns of the orthogonal matrices U and V , and
\Sigma = diag(\sigma_1 , \ldots , \sigma_n) with \sigma_1 \geq \cdots \geq \sigma_n \geq 0.
The numerical rank k of A, with respect to the threshold \tau , is the number of
singular values greater than or equal to \tau , i.e., \sigma_k \geq \tau > \sigma_{k+1} [20, §3.1].
The RRQR, URV, and ULV decompositions are given by
A \Pi = Q T ,    A = U_R R V_R^T ,    A = U_L L V_L^T .
Here, Q, U_R , U_L , V_R , and V_L are orthogonal matrices, \Pi is a permutation matrix, T
and R are upper triangular matrices, and L is a lower triangular matrix. Moreover,
if we partition the triangular matrices into 2 \times 2 block form,
then the numerical rank k of A is revealed in the triangular matrices in the sense that
the leading diagonal blocks are k \times k with smallest singular value of the order \sigma_k ,
while the remaining nonzero blocks have Frobenius norm of the order \sigma_{k+1} .
The first k columns of the left matrices Q, U_R , and U_L span approximations to the
numerical range of A, defined as span{u_1 , \ldots , u_k}, and the last n - k columns of
the right matrices V_R and V_L span approximations to the numerical null-space of A,
defined as span{v_{k+1} , \ldots , v_n}. See, e.g., [20, §3.1] for details.
Precise definitions of RRQR decompositions and algorithms are given by Chandrasekaran
and Ipsen [11], Gu and Eisenstat [19] and Hong and Pan [23], and associated
large-scale implementations are available in Fortran [4]. Definitions of UTV
decompositions and algorithms are given by Stewart [31], [32]. Matlab software for
both RRQR and UTV decompositions is available in the UTV Tools package [17].
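For readers who want to see the rank-revealing idea in action without the packages above, column-pivoted QR (not a guaranteed RRQR, but closely related) already exposes the structure on easy examples; the SciPy call and the test data below are our own illustrative choices.

import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)
n, k = 8, 5
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))   # exact rank k
A = A + 1e-10 * rng.standard_normal((n, n))                      # numerical rank k

Q, R, piv = qr(A, pivoting=True)
print(np.abs(np.diag(R)))   # the trailing n - k diagonal entries are tiny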
3. Symmetric Rank-Revealing Decompositions. For a symmetric n \times n
matrix A, we need rank-revealing decompositions that inherit the symmetry of the
original matrix. In particular this is true for the eigenvalue decomposition (EVD)
(3.1)    A = V \Lambda V^T ,    \Lambda = diag(\lambda_1 , \ldots , \lambda_n) ,
whose eigenvector matrix V provides the singular vectors, while the singular values of A are \sigma_i = |\lambda_i| (in suitable order).
Corresponding to the UTV decompositions, Luk and Qiao [24] defined the following
VSV decomposition
(3.2)    A = V_S S V_S^T ,
where V_S is an orthogonal matrix, and S is a symmetric matrix with partitioning
(3.3)    S = [ S_{11}  S_{12} ;  S_{12}^T  S_{22} ] ,
in which S_{11} is k \times k. We say that the VSV decomposition is rank-revealing if
the block S_{11} is well conditioned (with smallest singular value of the order \sigma_k), while the
blocks S_{12} and S_{22} have small norm (of the order \sigma_{k+1}).
This definition is very similar to the definition used by Luk and Qiao, except that they
use \|triu(S_{22})\|_F^2 instead of \|S_{22}\|_F^2 , where "triu" denotes the upper triangular part.
Our choice is motivated by the fact that kS 22 k 2
n as kS 12
Given the VSV decomposition in (3.2), the first k columns of V_S and the last
n - k columns of V_S provide approximate basis vectors for the numerical range and
null space, respectively. Moreover, given the ill-conditioned problem A x = b, we can
compute a stabilized "truncated VSV solution" x_k by neglecting the three blocks in
S with small norm; here V_k consists of the first k columns
of V_S . We return to the computation of x_k in §4.4.
Instead of working directly with the matrix S, it is more convenient to work
with a symmetric decomposition of S and, in particular, of S_{11} . The form of this
decomposition depends on both the matrix A (semi-definite or indefinite) and the
rank-revealing algorithm. Hence, we postpone a discussion of the particular form of
S to the presentation of the algorithms. Instead, we summarize the approximation
properties of the VSV decomposition.
Theorem 3.1. Let the VSV decompositions of A be given by (3.2), and partition
the matrix S as in (3.3) where k is the numerical rank. Then the singular values
of diag(S 11 ; S 22 ) are related to those of A as
Moreover, the angle between the subspaces spanned by the rst k columns of V and
VS , dened by sin
bounded as
sin
Proof. The bound (3.4) follows from the standard perturbation bound for singular
values:
where we use that the singular values of the symmetric \perturbation matrix" appear
in pairs. To prove the upper bound in (3.5), we partition
columns. Moreover, we write
k. If we insert these partitionings as well as
(3.1) and (3.2) into the product AV S;0 then we obtain
Multiplying from the left with V T
k we get
from which we obtain
Taking norms in this expression and inserting sin
we get
sin 1
which immediately leads to the upper bound in (3.5). To prove the lower bound, we
use that
Taking norms and using sin
k+1 , we obtain the left bound in (3.5).
We conclude that if there is a well-dened gap between k and kS 22 k 2 , and if the
norm kS 12 k 2 of the o-diagonal block is suciently small, then the numerical rank k
is indeed revealed in S, and the rst k columns of VS span an approximation to the
singular subspace spanfv g. The following theorem shows that a well-dened
gap is also important for the perturbation bounds.
Theorem 3.2. Let e
S , and let denote the angle between
the subspaces spanned by the rst k columns of VS and e
sin 4
g.
Proof. The bound follows from Corollary 3.2 in [14].
We see that a small upper bound is guaranteed when kAk 2 as well as and
k+1 are somewhat smaller than k .
4. Algorithms for Symmetric Rank-Revealing Decompositions. Similar
to general rank-revealing algorithms, the symmetric algorithms consist of an initial
triangular factorization and a rank-revealing post-processing step. The purpose of the
latter step is to ensure that the largest k singular values are revealed in the leading
submatrix S_{11} and that the corresponding singular subspace is approximated by the
span of the first k columns of V_S .
For a semi-definite matrix A, our initial factorization is the symmetrically pivoted
Cholesky factorization
(4.1)    P^T A P = C^T C ,
where P is the permutation matrix, and C is the upper triangular (or trapezoidal)
Cholesky factor. The numerical properties of this algorithm are discussed by Higham
in [22]. If A is a symmetric semi-definite Toeplitz matrix, then there is good evidence
(although no strict proof) that the Cholesky factor can be computed efficiently and
reliably without the need for pivoting by means of the standard Schur algorithm [30].
When A is indefinite, it would be convenient to work with an initial factorization
of the form P^T A P = C^T \Omega C, where C is again triangular and \Omega = diag(\pm 1).
Unfortunately such factorizations are not guaranteed to exist. Therefore our initial
factorization is the symmetrically pivoted LDL^T factorization
(4.2)    P^T A P = L D L^T ,
where P is the permutation matrix, L is a unit lower triangular matrix, and D is a
block diagonal matrix with 1 \times 1 and 2 \times 2 blocks on the diagonal. The state-of-the-art
in LDL^T algorithms is described in [1], where it is pointed out that special care must
be taken in the implementation to avoid large entries in L when A is ill conditioned.
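A quick way to experiment with this initial factorization is SciPy's LDL^T routine (assumed available as scipy.linalg.ldl in SciPy 1.0 or later); the random indefinite test matrix is our own choice and the snippet only illustrates the shape of the output.

import numpy as np
from scipy.linalg import ldl

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5))
A = B + B.T                                # symmetric and generally indefinite

lu, d, perm = ldl(A, lower=True)           # d is block diagonal (1x1 and 2x2 blocks)
# Rows of lu can be permuted (lu[perm, :]) to obtain a triangular factor.
print(np.allclose(lu @ d @ lu.T, A))       # the factorization reproduces A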
Table 4.1
The four post-processing rank-revealing steps for a symmetric semi-definite matrix.
Post-proc.   Decomposition            Symmetric matrix
URV          C = U_R R V_R^T          S = R^T R
RRQR         C \Pi = Q T              S = T^T T
ULV          ECE = U_L L V_L^T        S = L^T L
RRQR         (ECE)^T \Pi = Q T        S = T T^T
Alternatively, one could use the factorization
(4.3)    P^T A P = G \Omega G^T
described in [29], where G is block triangular. If A is a symmetric indefinite Toeplitz
matrix, then the currently most reliable approach to computing the LDL T factorization
seems to be via orthogonal transformation to a Cauchy matrix [21].
The reason why we need the post-processing step is that the initial factorization
may not reveal the numerical rank of A: there is no guarantee that small eigenvalues
of A manifest themselves in small diagonal elements of C or in small eigenvalues
of D. In particular, since \sigma_n(A) \geq \sigma_n(L)^2 \sigma_n(D), we obtain
\sigma_n(D) \leq \sigma_n(A) / \sigma_n(L)^2 ,
showing that a small \sigma_n may not be revealed in D when L is ill conditioned.
matrices there is a simple relationship between the SVDs of A and C.
Theorem 4.1. The right singular vectors of P T AP are also the right singular
vectors of C, and
Proof. The result follows from inserting the SVD of C into P T
Hence, once we have computed the initial pivoted Cholesky factorization (4.1),
we can proceed by computing a rank-revealing decomposition of C, and this can be
done in several ways. Let E denotes the exchange matrix consisting of the columns
of the identity matrix in reverse order, and write P T AP as
Then we can compute a URV or RRQR decomposition of C, a ULV decomposition of
ECE, or an RRQR decomposition of (ECE) T , as shown in the left part of Table 4.1.
The approach using the URV decomposition of C was suggested in [2]. Table 4.1 also
shows the particular forms of the resulting symmetric matrix S, as derived from the
following relations:
R
The rst, third and fourth approaches lead to a symmetric matrix S that reveals the
numerical rank of A by having both an o-diagonal block S 12 and a bottom right
block S 22 with small norm. The second approach does not produce blocks S 12 and
S 22 with small norm; instead (since T 11 is well conditioned) this algorithm provides a
permutation P that is guaranteed to produce a well-conditioned leading
The remaining three algorithms yield approximate bases for the range and null
spaces of A, due to Theorem 3.1. It is well known that among the rank-revealing
decompositions, the ULV decomposition can be expected to provide the most accurate
bases for the right singular subspaces, in the form of the columns of VL ; see, e.g., [32]
and [15]). Therefore, the algorithm that computes the ULV decomposition of ECE is
to be preferred. We remark that the matrix UL in the ULV decomposition need not
be computed.
In terms of the blocks S_{12} and S_{22} , the ULV-based algorithm is the only algorithm
that guarantees small norms of both the off-diagonal block S_{12} = L_{21}^T L_{22} and the bottom
right block S_{22} = L_{22}^T L_{22} , because the norms of both L_{21} and L_{22} are guaranteed
to be small. From Theorem 4.1 and the definition of the ULV decomposition, these
norms are of the order \sigma_{k+1}^{1/2} ,
and therefore \|S_{12}\|_2 \approx \|S_{22}\|_2 \approx \sigma_{k+1} .
For a sparse matrix the situation is different, because the UTV post-processors
may produce severe fill, while the RRQR post-processor produces only fill in the n - k
rightmost columns of T . For example, consider an upper bidiagonal test matrix built from blocks B_p ,
where B_p is an upper bidiagonal p \times p matrix of all ones, and e_p is the pth column
of the identity matrix; then URV with threshold \tau produces a full k \times k
upper triangular R_{11} , while RRQR with the same threshold produces a k \times k upper
bidiagonal T_{11} . Hence, for sparsity reasons, the UTV approaches may not be suited
for computing the VSV decomposition, depending on the sparsity pattern of A.
An alternative is to use the algorithm based on RRQR decomposition of the
transposed and permuted Cholesky factor (ECE)^T = E C^T E, and we note that the
permutation matrix is not needed. In terms of the matrix S, only the bottom
right submatrix of S is guaranteed to have a norm of the order \sigma_{k+1} , because of the
relations \|S_{12}\|_2 \leq \|T_{12}\|_2 \|T_{22}\|_2 and \|S_{22}\|_2 = \|T_{22}\|_2^2 .
In practice the situation can be better, because the RRQR algorithm, when
applied to the matrix E C^T E, may produce an off-diagonal block T_{12} whose norm
is smaller than what is guaranteed (namely, of the order \sigma_1^{1/2} \sigma_{k+1}^{1/2}).
The reason is that the initial Cholesky factor C often has a trailing block
whose norm is close to \sigma_{k+1}^{1/2} , which may produce a norm \|S_{12}\|_2 close to \sigma_{k+1} . From
the partitionings
22 En k En k C T
and the fact that the RRQR post-processor leaves column norms unchanged and may
permute the leading n k columns of E C T E to the back, we see that the norm of
the resulting o-diagonal block T 12 in the RRQR decomposition can be bounded by
Our numerical examples in x5 illustrate this.
However, we stress that in the RRQR approach we can only guarantee that \|S_{12}\|_2
is of the order \sigma_1^{1/2} \sigma_{k+1}^{1/2} , and this point is illustrated by the matrix
1. Compute the eigenvalue decomposition
2. Write as
jj 1=2 .
3. Compute an orthogonal W such that C
is lower triangular.
Fig. 4.1. Interim processor for symmetric indefinite matrices.
where K is the "infamous" Kahan matrix [7, p. 105] that is left unchanged by QR
factorization with ordinary column pivoting, yet its numerical rank is n - 1. Cholesky
factorization with symmetric pivoting computes the Cholesky factor,
and when we apply RRQR to E C^T E we obtain an upper triangular matrix T in which
only the (n, n)-element is small, while \|T_{12}\|_2 is not small.
4.2. Algorithms for Indefinite Matrices. No matter which factorization is
used for an indefinite matrix, such as (4.2) or (4.3), there is no simple relationship
between the singular values of A and the matrix factors. Hence the four "intrinsic"
decompositions from Table 4.1 do not apply here, and the difficulty is to develop a
new factorization from which the numerical rank can be determined.
All rank-revealing algorithms currently in use maintain the triangular form of the
matrix in consideration, but when we apply the algorithms to the matrix L in the
LDL^T factorization (4.2) we destroy the block diagonal form of D. We can avoid
this difficulty by inserting an additional interim stage between the initial LDL^T factorization
and the rank-revealing post-processor, in which the middle block-diagonal
matrix D is replaced by the signature matrix \Omega = diag(\pm 1). At the same time, L is
replaced by the product of an orthogonal matrix and a triangular matrix. The interim
processor, which is summarized in Fig. 4.1, thus computes the factorization
(4.5)    P^T A P = W C^T \Omega C W^T ,
where W is orthogonal and C is upper triangular.
The interim processor is simple to implement and requires at most O(n^2) operations,
because W and \bar{W} are block diagonal matrices with the same block structure
as D. For each 1 \times 1 block d_{ii} in D the corresponding 1 \times 1 blocks in W , |\Lambda|^{1/2} , and
\bar{W} are equal to 1, |d_{ii}|^{1/2} , and 1, respectively. For each 2 \times 2 block in D we compute
the eigenvalue decomposition
[ d_{ii}  d_{i,i+1} ;  d_{i,i+1}  d_{i+1,i+1} ] = W_{ii} \Lambda_{ii} W_{ii}^T ;
then the corresponding 2 \times 2 block in W is W_{ii} , and the associated 2 \times 2 block in \bar{W} is a
Givens rotation chosen such that C stays triangular. If A is sparse, then some fill may
be introduced in C by the interim processor, but since the Givens transformations are
applied to nonoverlapping 2 \times 2 blocks, fill introduced in the treatment of a particular
block does not spread during the processing of the other blocks. The same type of
processor can also be applied to the G \Omega G^T factorization (4.3) in order to
turn the block triangular matrix G into triangular form.
Future developments of rank-revealing algorithms for more general matrices than
the triangular ones may render the interim processor superfluous. It may also be
possible to compute the factorization (4.5) directly.
We shall now explore the possibilities for using triangular rank-revealing post-processors
similar to the ones for semi-definite matrices, but modified such that they
yield a decomposition of C in which the leftmost matrix U is hypernormal with respect
to the signature matrices \Omega and \hat{\Omega} ,
i.e., we require U^T \Omega U = \hat{\Omega} . Hypernormal matrices
and the corresponding transformations are introduced in [8] in connection with up-
and downdating of symmetric indefinite matrices. Here we use them to maintain the
triangular form of the matrix C.
The following theorem shows that a small singular value of A is guaranteed to be
revealed in the triangular matrix C.
Theorem 4.2. If \sigma_n(C) denotes the smallest singular value of C in the interim
factorization (4.5), then \sigma_n(C) \leq \sigma_n(A)^{1/2} .
Proof. We have 1/\sigma_n(A) = \|A^{-1}\|_2 = \|C^{-1} \Omega C^{-T}\|_2 \leq \|C^{-1}\|_2^2 = 1/\sigma_n(C)^2 ,
from which the result follows.
Unfortunately, there is no guarantee that \sigma_n(C) does not underestimate \sigma_n(A)^{1/2}
dramatically, neither does it ensure that the size of \sigma_n is revealed in S. We illustrate
this with a small 5 \times 5 numerical example from [1] where A is given by
with
and . The singular values of A are
such that A has full rank with respect to the threshold . The corresponding
matrix C has singular values
Thus, \sigma_5(C) is not a good approximation of \sigma_5(A)^{1/2} , and if we base the rank decision on
\sigma_5(C) and the corresponding threshold, we would wrongly conclude that A is numerically
rank deficient.
The conclusion is that for indefinite matrices, a well conditioned C ensures that A
is well conditioned, but we cannot rely solely on C for determination of the numerical
rank of A. This rules out the use of RRQR factorization of C and ECE. The following
theorem (which expands on results in [24]) shows how to proceed instead.
Theorem 4.3. Let w_n be an eigenvector of C^T \Omega C corresponding to the eigenvalue
\lambda_n that is smallest in absolute value, and let \tilde{w}_n be an approximation to w_n .
Moreover, choose the orthogonal matrix \hat{V} such that \hat{V}^T \tilde{w}_n = e_n , the last column of
the identity matrix, and partition the matrix
\hat{V}^T C^T \Omega C \hat{V} = [ S_{11}  s_{12} ;  s_{12}^T  s_{22} ]
such that S_{11} is (n - 1) \times (n - 1). Then
ks
wn wn k 2
and
wn wn
Proof. Let
C and consider rst the quantity
A ~
wn
s 22
Next, write ~
A ~
Au
wn
A u:
Combining these two results we obtain
s 22 n
A n I) u
and taking norms we get
ks
Both ks 12 k 2
2 and js 22 n j are lower bounds for the left-hand side, while kuk 2 is
bounded above by tan . Combining this with the bound k ^
obtain the two bounds in the theorem.
The above theorem shows that in order for \lambda_n to reveal itself in S, we must
compute an approximate null vector of C^T \Omega C, apply Givens rotations to this vector
to transform it into e_n , and accumulate these rotations from the right into C. At the
same time, we should apply hypernormal rotations from the left in order to keep C
upper triangular. Theorem 4.3 ensures that if \tilde{w}_n is close enough to w_n then \|s_{12}\|_2
is small and s_{22} approximates \lambda_n . We note that hypernormal transformations can be
numerically unstable, and in our implementations we use the same stabilizations as
in the stabilized hyperbolic rotations [7, §3.3.4].
Once this step has been performed, we deflate the problem and apply the same
technique to the leading (n - 1) \times (n - 1) submatrix of the updated factors.
This is precisely the algorithm from
[24]. When the process stops (because all the small singular values of A are revealed)
we have computed the URV-like decomposition
C = U_R R \hat{V}^T    such that    U_R^T \Omega U_R = \hat{\Omega} ,
with R upper triangular, and the middle rank-revealing matrix is given by
S = R^T \hat{\Omega} R ,
partitioned so that the leading block S_{11} is k \times k.
The condition estimator used in the URV-like post-processor must be modified,
compared to the standard URV algorithm, because we must now estimate the smallest
singular value of the matrix C^T \Omega C. In our implementation we use one step of inverse
iteration applied to C^T \Omega C, with starting vector from the condition estimator of the
ordinary URV algorithm applied to C.
Table 4.2
Summary of approaches for symmetric indefinite matrices. Note that the RRQR approaches
do not reveal the numerical rank, and that the ULV-like approach is impractical.
Post-proc. | Decomposition | Comments to decomposition
Finally, consider a ULV-like approach applied to ECE. Again we must compute
an approximate null vector of C^T \Omega C and transform it into the form e_n by means of
an orthogonal transformation. This transformation is applied from the right to ECE,
and a hypernormal transformation from the left is then required to restore the lower
triangular form of the factor.
To deflate this factorization, note that the leading (n - 1) \times (n - 1) block of
the updated lower triangular factor cannot be processed on its own; the 1 \times (n - 1)
block \ell_{21}^T below it is needed as well. Hence, after the deflation step we must work with
trapezoidal matrices instead of triangular matrices. This fact renders the ULV-like
approach impractical.
To summarize, for symmetric indefinite matrices only the approach using the
URV-like post-processor leads to a practical algorithm for revealing the numerical
rank of A. Moreover, a well conditioned C signals a well conditioned A, but C cannot
reveal A's numerical rank. Our analysis is summarized in Table 4.2, and the URV-based
algorithm is summarized in Fig. 4.2 (following the presentations from [17]),
where \tau is the rank-decision tolerance for A.
4.3. Updating the VSV Decomposition. One of the advantages of the rank-revealing
VSV decomposition over the EVD and SVD is that it can be updated
efficiently when A is modified by a rank-one change v v^T . From the relation
A + v v^T = V_S ( S + w w^T ) V_S^T ,    w = V_S^T v ,
we see that the updating of A amounts to updating the rank-revealing matrix S by
the rank-one matrix w w^T with w = V_S^T v, i.e., S \leftarrow S + w w^T . This can be done in O(n^2) operations,
while the EVD/SVD updating requires O(n^3) operations.
1. Let k = n and compute an initial factorization P^T A P = L D L^T .
2. Apply the interim processor to compute P^T A P = W C^T \Omega C W^T .
3. Condition estimation: let \tilde{\sigma}_k estimate \sigma_k(C(1:k, 1:k)),
and let w_k estimate the corresponding right singular vector.
4. If \tilde{\sigma}_k > \tau^{1/2} then exit.
5. Revealment: determine an orthogonal Q_k such that Q_k^T w_k = e_k ;
6. update C(1:k, 1:k) \leftarrow C(1:k, 1:k)\, Q_k ;
7. update C(1:k, 1:k) \leftarrow U_k^T\, C(1:k, 1:k), where the hypernormal
U_k is chosen such that the updated C is upper triangular.
8. Deflation: let k \leftarrow k - 1.
9. Go to step 3.
Fig. 4.2. The URV-based VSV algorithm for symmetric indefinite matrices.
matrices R, L, or T^T from the algorithms in Table 4.1. Then
S + w w^T = M^T M + w w^T ,
and we see that the VSV updating is identical to standard updating of a triangular
RRQR or UTV factor, which can be done stably and efficiently by means of Givens
transformations as described in [5], [31] and [32].
Next we consider the indefinite case (4.9), where the updating takes the form
S + w w^T = R^T \hat{\Omega} R + w w^T ,
showing that the VSV updating now involves hypernormal rotations. Hence, the updating
is computationally similar to UTV downdating, whose stable implementation
is discussed in [3] and [25]. Downdating the VSV decomposition will, in both cases,
also involve hypernormal rotations.
4.4. Computation of Truncated VSV Solutions. Here we briefly consider
the computation of the truncated VSV solution, which we define as
x_k = V_k S_{11}^{-1} V_k^T b ,
where V_k consists of the first k columns of V_S . Both URV-based decompositions are
simple to use. For the ULV-based decomposition we have S_{11} = L_{11}^T L_{11} + L_{21}^T L_{21} ,
and we can safely neglect the term L_{21}^T L_{21} , whose norm is at most of the order \sigma_{k+1} .
Finally, for the RRQR-based decomposition we can use the following theorem.
Theorem 4.4. If
T is the triangular QR factor of (T
then
Alternatively, if the columns of the matrix
form an orthonormal basis for the null space of (T
I W 1
(4.
Table 5.1
Numerical results for the rank-revealing VSV algorithms: mean and maximum values of the
submatrix norms for the URV, ULV, and RRQR post-processors (semi-definite case) and the
URV-like post-processor (indefinite case).
Proof. If
T is a QR factorization then S
T and S 1
T whici is (4.11). The same relation leads to S 1
denotes the pseudoinverse. In [6] used that
which, combined with the relation (I W W T immediately leads
to (4.12).
The rst relation (4.11) in Theorem 4.4 can be used when k n, while the
second relation (4.12) is more useful when k n. Note that W can be computed by
orthonormalization of the columns of the matrix
I
This approach is particularly useful for sparse matrices because we only introduce fill
when working with the "skinny" n \times (n - k) matrix.
5. Numerical Examples. The purpose of this section is to illustrate the theory
derived in the previous sections by means of some test problems. Although robustness,
efficiency and flop counts are important practical issues, they are also tightly
connected to the particular implementation of the rank-revealing post-processor, and
not the subject of this paper.
All our experiments were done in Matlab, and we use the implementations of the
ULV, URV, and RRQR algorithms from the UTV Tools package [17]. The condition
estimation in all three implementations is the Cline-Conn-Van Loan (CCVL)
estimator [12]. The modified URV algorithm used for symmetric indefinite matrices is
based on the URV algorithm from [17], augmented with stabilized hypernormal rotations
when needed, and with a condition estimator consisting of the CCVL algorithm
followed by one step of inverse iteration applied to the matrix C^T \Omega C.
Numerical results for all the rank-revealing algorithms are shown in Table 5.1,
where we present mean and maximum values of the norms of various submatrices
associated with the VSV decompositions. In particular, X_{12} denotes either R_{12} , L_{21} ,
or T_{12} , and X_{22} denotes either R_{22} , L_{22} , or T_{22} . The results are computed on the
basis of randomly generated test matrices of size 64, 128, and 256 (100 matrices of
each size), each with n - 4 eigenvalues geometrically distributed between 1 and 10^{4} ,
Table 5.2
Numerical results with improved singular vector estimates: mean and maximum submatrix norms
for the URV post-processor (semi-definite case) and the URV-like post-processor (indefinite case).
and the remaining four eigenvalues given by
the numerical rank with respect to the threshold
The test matrices were produced by generating random orthogonal matrices and
multiplying them to diagonal matrices with the desired eigenvalues. For the indenite
matrices the signs of the eigenvalues were chosen to alternate.
Table 5.1 illustrates the superiority of the ULV-based algorithm for semi-definite
matrices, for which the norm \|S_{12}\|_2 of the off-diagonal block in S is always much
smaller than the norm \|S_{22}\|_2 of the bottom right submatrix. This is due to the fact
that the ULV algorithm produces a lower triangular matrix L whose off-diagonal block
L_{21} has a very small norm (and we emphasize that the size of this norm depends on
the condition estimator). The second best algorithm for semi-definite matrices is the
one based on the RRQR algorithm, for which \|S_{12}\|_2 and \|S_{22}\|_2 are of the same size.
Note that it is the latter algorithm which we recommend for sparse matrices. The
URV-based algorithm for semi-definite matrices produces results that are consistently
less satisfactory than the other two algorithms. All these results are consistent with
our theory.
For the indefinite matrices, only the URV-like algorithm can be used, and the
results in Table 5.1 show that this algorithm also behaves as expected from the theory.
In order to judge the backward stability of this algorithm, which uses hypernormal
rotations, we also computed the backward error \|A - V_S S V_S^T\|_2 for all three hundred
test problems. The largest backward error was 1.9 \cdot 10^{-11} .
We conclude that we lose a few digits of accuracy due to the use of the hypernormal
rotations.
It is well known that the norm of the off-diagonal block in the triangular URV
factor depends on the quality of the condition estimator: the better the singular
vector estimate, the smaller the norm. Hence, it is interesting to see how much the
norms of the off-diagonal blocks in R and S decrease if we improve the singular vector
estimates by means of one step of inverse iteration (at the expense of additional
flops). In the semi-definite case we now apply an inverse iteration step to
the CCVL estimate, and in the indefinite case we use two steps of inverse iteration
applied to C^T \Omega C instead of one. The results are shown in Table 5.2 for the same
matrices as in Table 5.1. As expected, the norms of the off-diagonal blocks are now
smaller, at the expense of more work. The average backward errors \|A - V_S S V_S^T\|_2
did not change in this experiment.
6. Conclusion. We have defined and analyzed a class of rank-revealing VSV
decompositions for symmetric matrices, and proposed algorithms for computing these
decompositions. For semi-definite matrices, the ULV-based algorithm is the method
of choice for dense matrices, while the RRQR-based algorithm is better suited for
sparse matrices because it preserves sparsity better. For indefinite matrices, only the
URV-based algorithm is guaranteed to work.
--R
Accurate Symmetric Inde
A correlation-based subspace tracking algorithm
An algorithm and a stability theory for downdating the ULV decomposition
On rank-revealing QR factorizations
Generalizing the LINPACK condition estima- tor
Perturbation analysis for two-sided (or complete) orthogonal decompositions
Bounding the subspaces from rank revealing two-sided orthogonal decompositions
Matlab templates for rank- revealing UTV decompositions
Rank and null space calculations using matrix decomposition without column interchanges
Transformation techniques for Toeplitz and Toeplitz-plus-Hankel matrices II
Analysis of the Cholesky decomposition of a semi-de nite matrix
The rank revealing QR decomposition and SVD
A symmetric rank-revealing Toeplitz matrix decomposition
A Sparse URL Rather Than a URV Factorization
Sparse multifrontal rank revealing QR factorization
Cholesky factorization of semi-de nite Toeplitz matrices
An updating algorithm for subspace tracking
Updating a rank-revealing ULV decomposition
Matrix Algorithms Vol.
--TR | rank-revealing decompositions;hypernormal rotations;matrix approximation;symmetric matrices |
587766 | Twice Differentiable Spectral Functions. | A function F on the space of n n real symmetric matrices is called spectral if it depends only on the eigenvalues of its argument. Spectral functions are just symmetric functions of the eigenvalues. We show that a spectral function is twice (continuously) differentiable at a matrix if and only if the corresponding symmetric function is twice (continuously) differentiable at the vector of eigenvalues. We give a concise and usable formula for the Hessian. | Introduction
In this paper we are interested in functions F of a symmetric matrix argument
that are invariant under orthogonal similarity transformations:
orthogonal U and symmetric A :
Department of Combinatorics & Optimization, University of Waterloo, Waterloo, Ontario
N2L 3G1, Canada. Email: aslewis@math.uwaterloo.ca. Research supported by
NSERC.
y Department of Combinatorics & Optimization, University of Waterloo, Waterloo, Ontario
N2L 3G1, Canada. Email: hssendov@barrow.uwaterloo.ca. Research supported
by NSERC.
Every such function can be decomposed as
the map that gives the eigenvalues of the matrix A and f is a symmetric
function. (See the next section for more details). We call such functions
F spectral functions (or just functions of eigenvalues) because they depend
only on the spectrum of the operator A. Classical interest in such functions
arose from their important role in quantum mechanics [7], [20]. Nowadays
they are an inseparable part of optimization [11], and matrix analysis [4,
5]. In modern optimization the key example is \semidenite programming",
where one encounters problems involving spectral functions like log det(A),
the largest eigenvalue of A, or the constraint that A must be positive denite.
There are many examples where a property of the spectral function F
is actually equivalent to the corresponding property of the underlying symmetric
function f . Among them are rst-order dierentiability [9], convexity
[8], generalized rst-order dierentiability [9, 10], analyticity [26], and various
second-order properties [25], [24], [23]. It is also worth mentioning the
\Chevalley Restriction Theorem", which in this context identies spectral
functions that are polynomials with symmetric polynomials of the eigen-
values. Second-order properties of matrix functions are of great interest
for optimization because the application of Newton's method, interior point
methods [13], or second-order nonsmooth optimality conditions [19] requires
that we know the second-order behaviour of the functions involved in the
mathematical model.
The standard reference for the behaviour of the eigenvalues of a matrix
subject to perturbations in a particular direction is [6]. Second-order properties
of eigenvalue functions in a particular direction are derived in [25].
The problem that interests us in this paper is that of when a spectral
function is twice dierentiable (as a function of the matrix itself, rather than
in a particular direction) and when its Hessian is continuous. Analyticity is
discussed in [26]: thus our result lies in some sense between the results in [9]
and [26]. Smoothness properties of some special spectral functions (such as
the largest eigenvalue) on certain manifolds are helpful in perturbation theory
and Newton-type methods: see for example [15, 16, 18, 17, 22, 21, 14].
We show that a spectral function is twice (continuously) dierentiable at
a matrix if and only if the corresponding symmetric function is twice (con-
tinuously) dierentiable at the vector of eigenvalues. Thus in particular, a
spectral function is C 2 if and only if its restriction to the subspace of diagonal
matrices is C 2 . For example, the Schatten p-norm of a symmetric matrix is
the pth root of the function
(where the i s are the eigenvalues of
the matrix). We see that this latter function is C 2 for p 2, although not
analytic unless p is an even integer.
As part of our general result, we also give a concise and easy-to-use formula
for the Hessian: the results in [26], for analytic functions, are rather
implicit. The paper is self-contained and the results are derived essentially
from scratch, making no use of complex-variable techniques as in [2], for ex-
ample. In a parallel paper [12] we give an analogous characterization of those
spectral functions that have a quadratic expansion at a point (but that may
not be twice dierentiable).
Notation and preliminary results
In what follows S n will denote the Euclidean space of all n n symmetric
matrices with inner product hA;
will be the vector of its eigenvalues ordered in nonincreasing
order. By O n we will denote the set of all nn orthogonal matrices. For
any vector x in R n , Diag x will denote the diagonal matrix with the vector x
on the main diagonal, and
x will denote the vector with the same entries as
x ordered in nonincreasing order, that is x 1
x n . Let R n
# denote
the set of all vectors x in R n such that x 1 x 2 x n . Let also the
dened by diag
m=1 will denote a sequence of symmetric matrices converging to 0, and
m=1 will denote a sequence of orthogonal matrices. We describe sets in
R n and functions on R n as symmetric if they are invariant under coordinate
permutations. denotes a function, dened on an open
symmetric set, with the property
permutation matrix P and any x 2 domainf:
We denote the gradient of f by rf or f 0 , and the Hessian by r 2 f or f 00 . Vectors
are understood to be column vectors, unless stated otherwise. Whenever
we denote by a vector in R n
# we make the convention that
Thus r is the number of distinct eigenvalues. We dene a corresponding
partition
and we call these sets blocks. We denote the standard basis in R n by
e is the vector with all entries equal to 1. We also dene
corresponding matrices
For an arbitrary matrix A, A i will denote its i-th row (a row vector), and A i;j
will denote its (i; j)-th entry. Finally, we say that a vector a is block rened
by a vector b if
implies a
We need the following result.
R be a symmetric function, twice dierentiable
at the point 2 R n
# , and let P be a permutation matrix such that
Then
In particular we have the representation
a
a
a r1 E r1 a r2 E r2 a rr R rr
where the E uv are matrices of dimensions jI u j jI v j with all entries equal to
i;j=1 is a real symmetric matrix, b := (b 1 ; :::; b n ) is a vector which
is block rened by , and J u is an identity matrix of the same dimensions as
Proof. Just apply the chain rule twice to the equality
order to get parts (i) and (ii). To deduce the block structure of the Hessian,
consider the block structure of permutation matrices P such that
when we permute the rows and the columns of the Hessian in the way
dened by P , it must stay unchanged.
Using the notation of this lemma, we dene the matrix
Note 2.2 We make the convention that if the i-th diagonal block in the above
representation has dimensions 1 1 then we set a
().
Otherwise the value of b k i
is uniquely determined as the dierence between a
diagonal and an o-diagonal element of this block. Note also that the matrix
B and the vector b depend on the point and the function f .
Lemma 2.3 For 2 R n
# and a sequence of symmetric matrices Mm ! 0 we
have that
(Diag +Mm
Proof. Combine Lemma 5.10 in [10] and Theorem 3.12 in [3].
The following is our main technical tool.
Lemma 2.4 Let fMmg be a sequence of symmetric matrices converging to
0, such that Mm =kMm k converges to M . Let be in R n
# and Um ! U 2 O n
be a sequence of orthogonal matrices such that
Diag (Diag +Mm )
(2)
Then the following properties hold.
(i) The orthogonal matrix U has the form
l is an orthogonal matrix with dimensions jI l j jI l j for all l.
(ii) If i 2 I l then
lim
p2I l
(U i;p
0:
(iii) If i and j do not belong to the same block then
lim
(U i;j
(iv) If i 2 I l then
l Diag (X T
l MX l )
l
(v) If l , and p 62 I l then
lim
U i;p
0:
(vi) For any indices i 6= j such that
lim
p2I l
U i;p
0:
(vii) For any indices i 6= j such that
l Diag (X T
l MX l )
l
(viii) For any three indices i, j, p in distinct blocks,
lim
U i;p
0:
(ix) For any two indices i, j such that i 2 I l ,
lim
p2I l
U i;p
ks
p2Is U i;p
Proof.
(i) After taking the limit in equation (2) we are left with
(Diag
The described representation of the matrix U follows.
(ii) Let us denote
We use Lemma 2.3 in equation (2) to obtain
Diag
(Diag hm )U T
and the equivalent form
(Diag )Um
We now divide both sides of these equations by kMm k and rearrange:
Diag Um (Diag )U T
(Diag hm )U T
and
Diag U T
(Diag )Um
Diag hm
Notice that the right hand sides of these equations converge to a nite
limit as m increases to innity. If we call the matrix limit of the right
hand side of the rst equation L, then clearly the limit of the second
equation is U T LU .
We are now going to prove parts (ii) and (iii) together inductively, by
dividing the orthogonal matrix Um into the same block structure as U .
We begin by considering the rst row of blocks of Um .
Let i be an index in the rst block, I 1 . Then the limit of the (i; i)-th
entry in the matrix at the left hand side of equation (4) is
lim
(U i;p
ks
p2Is
(U i;p
Now recall that
and because V 1 is an orthogonal matrix, notice that
i(Diag (X T
We now sum equation (6) over all i in I 1 to get
lim
(U i;p
ks
(U i;p
0:
Notice here, that the coe-cients in front of the k l
in the
numerator sum up to zero. That is,
U i;p
r
U i;p
us choose a number such that
and add to every coordinate of the vector thus \shifting" it. The
coordinates of the shifted vector that are in the rst block are strictly
bigger than zero, and the rest are strictly less than zero. By our comment
above, the last limit remains true if we \shift" in this way. If
we rewrite the last limit for the \shifted" vector, because all summands
are positive, we immediately see that we must have
lim
(U i;p
and
lim
(U i;p
The rst of these limits can be written as
lim
(U i;p
and because all the summands are positive, we conclude that
lim
(U i;p
The second of the limits implies immediately that
lim
(U i;p
Thus we proved part (ii) for i 2 I 1 and part (iii) for the cases specied
above.
Here is a good place to say a few more words about the idea of the
proof. As we said, we divide the matrix Um into blocks complying with
the block structure of the vector (exactly as in part (i) for the matrix
U ). We proved part (ii) and (iii) for the elements in the rst row of
blocks of this division. What we are going to do now is prove the same
thing for the rst column of blocks. In order to do this we x an index
i in I 1 and consider the (i; i)-th entry in the matrix at the left hand
side of equation (5), and take the limit:
lim
(U p;i
ks
p2Is (U p;i
Using also the block-diagonal structure of the matrix U , we again have
So we proceed just as before in order to conclude that
lim
(U p;i
and
lim
(U p;i
We are now ready for the second step of our induction. Let i be an
index in I 2 . Then the limit of the (i; i)-th entry in the matrix at the
left hand side of equation (4) is
lim
U i;p
U i;p
r
ks
p2Is
U i;p
Analogously as above we have
so summing the above limit over all i in I 2 we get
lim
U i;p
U i;p
r
ks
U i;p
0:
We know from (8) that
lim
(U i;p
0:
So now we choose a number such that
and as before exchange with its shifted version. Just as before we
conclude that
lim
(U i;p
and
lim
(U i;p
We repeat the same steps for the second column of blocks in the matrix
Um and so on inductively until we exhaust all the blocks. This
completes the proof of parts (ii) and (iii).
(iv) For the proof of this part, one needs to consider the (i; i)-th entry of
the right hand side of equation (4). Because the diagonal of the left
hand side converges to zero (by (ii) and (iii)), taking the limit proves
the statement in this part.
(v) This part follows immediately from part (iii).
(vi) Taking the limit in equation (4) gives
lim
s6=l
ks
p2Is U i;p
p2I l
U i;p
where L i;j is the (i; j)-th entry of the limit of the right hand side of
equation (4). Note that the coe-cients of ks again sum up to zero:
s6=l
p2Is
U i;p
p2I l
U i;p
because Um is an orthogonal matrix. Now by part (v) we have
s6=l
p2Is U i;p
p2I l
U i;p
as required, and moreover L
(vii) The statement of this part is the detailed way of writing the fact, proved
in the previous part, that L
(viii) This part follows immediately from part (iii). (In fact the expression in
part (viii) is identical to the one in part (v), re-iterated with dierent
index conditions for later convenience.)
(ix) We again take the limit of the (i; j)-th entry of the matrices on both
sides of equation (4).
lim
t6=l;s
U i;p
p2I l
U i;p
ks
p2Is
U i;p
By part (viii) we have that all but the l-th and the s-th summand above
converge to zero. On the other hand
Mm
(Diag hm )U T
i;j
lim
Diag hm
because U i and U j are rows in dierent blocks and (Diag hm )=kMm k
converges to a diagonal matrix.
Now we have all the tools to prove the main result of the paper.
3 Twice dierentiable spectral functions
In this section we prove that a symmetric function f is twice dierentiable
at the point (A) if and only if the corresponding spectral function f - is
twice dierentiable at the matrix A.
Recall that the Hadamard product of two matrices
of the same size is the matrix of their elementwise product A -
Let the symmetric function f : R n ! R be twice dierentiable at
the point 2 R n
# , where, as before,
We dene the vector as in Lemma 2.1. Specically,
for any index i, (say i 2 I l for some l 2 f1; 2; :::; rg) we dene
ii (); if jI l
pq (); for any p 6= q 2 I l :
Lemma 2.1 guarantees that the second case of this denition doesn't depend
on the choice of p and q. We also dene the matrix A():
A i;j
Notice the similarity between this denition and classical divided dierence
constructions in Lowner theory (see [1, Chap. V], for example). For simplic-
ity, when the argument is understood by the context, we will write just b i
and A i;j . The following lemma is Theorem 1.1 in [9].
Lemma 3.1 Let A 2 S n and suppose (A) belongs to the domain of the
is dierentiable at the point (A)
if and only if f - is dierentiable at the point A. In that case we have the
for any orthogonal matrix U satisfying
We recall some standard notions about twice dierentiability. Consider
a function F from S n to R. Its gradient at any point A (when it exists)
is a linear functional on the Euclidean space S n , and thus can be identied
with an element of S n , which we denote rF (A). Thus rF is a map from
S n to S n . When this map is itself dierentiable at A we say F is twice
dierentiable at A. In this case we can interpret the Hessian r 2 F (A) as a
symmetric, bilinear function from S n S n into R. Its value at a particular
point will be denoted r 2 F (A)[H; Y ]. In particular, for
xed H, the function r 2 F (A)[H; ] is again a linear functional on S n , which
we consider an element of S n , for brevity denoted by r 2 F (A)[H]. When the
Hessian is continuous at A we say F is twice continuously dierentiable at
A. In that case the following identity holds:
t=0
The next theorem is a preliminary version of our main result.
Theorem 3.2 The symmetric function f : R n ! R is twice dierentiable
at the point 2 R n
# if and only if f - is twice dierentiable at the point
Diag . In that case the Hessian is given by
Hence
Proof. It is easy to see that f must be twice dierentiable at the point
whenever f - is twice dierentiable at Diag because by restricting f - to
the subspace of diagonal matrices we get the function f . So the interesting
case is the other direction. Let f be twice dierentiable at the point 2 R n
and suppose on the contrary that either f - is not twice dierentiable at
the point Diag , or equation (10) fails. Dene a linear operator by
(Lemma 3.1 tells us that f - is at least dierentiable around Diag .) So,
for this linear operator there is an > 0 and a sequence of symmetric
matrices fMm g 1
m=1 converging to 0 such that
loss of generality we may assume that the
sequence fMmg 1
m=1 is such that Mm =kMm k converges to a matrix M , because
some subsequence of fMm g 1
m=1 surely has this property. Let fUm g 1
m=1 be a
sequence of orthogonal matrices such that
Diag (Diag +Mm )
Without loss of generality we may assume that Um ! U 2 O n , or otherwise
we will just take subsequences of fMmg 1
m=1 and fUmg 1
m=1 . The above inequality
shows that for every m there corresponds a pair (or more precisely
at least one pair) of indices (i; j) such that
i;j
So at least for one pair of indices, call it again (i; j), we have innitely many
numbers m for which (i; j) is the corresponding pair, and because if necessary
we can again take a subsequence of fMmg 1
m=1 and fUmg 1
m=1 we may assume
without loss of generality that there is a pair of indices (i; j) for which the
last inequality holds for all :::. Dene the symbol hm again by
equation (3). Notice that using Lemma 3.1, Lemma 2.3, and the fact that
rf is dierentiable at , we get
We consider three cases. In every case we are going to show that the left
hand side of inequality (11) actually converges to zero, which contradicts the
assumption.
Case I. If using equation (12) the left hand side of inequality
(11) is less that or equal to
Diag rf()
Diag r 2 f()hm
We are going to show that each summand approaches zero as m goes to
innity. Assume that i 2 I l for some l 2 f1; :::; rg. Using the fact that the
vector block renes the vector rf() (Lemma 2.1, part (i)) the rst term
can be written askMmk
f 0
l
p2I l
U i;p
s:s6=l
ks
p2Is
U i;p
We apply now Lemma 2.4 parts (ii) and (iii) to the last expression.
We now concentrate on the second term above. Using the notation of
equation (1) (that is, r B+Diag b) this term is less than or equal to
Diag ((Diag b)h m )
(Diag b)(diag Mm )
As m approaches innity, we have that U i
. We dene the vector h to
be:
hm
taking limits, expression (13) turn into:
(Diag b)(diag M)
We are going to investigate each term in this sum separately and show that
they are both actually equal to zero. For the rst, we use the block structure
of the matrix B (see Lemma 2.1) and the block structure of the vector h to
obtain
r
a qs tr (X T
Using the fact that i 2 I l and that V l is orthogonal we get
l
Diag (Bh)
l
l (Diag (Bh))X l
l
r
a ls tr (X T
l
r
a ls tr (X T
(Bdiag M)
which shows that the rst term is zero. For the second term, we use the
block structure of the vector b, to write
(Diag
In the next to the last equality below we use part (iv) of Lemma 2.4:
l
Diag ((Diag b)h)
l
l (Diag ((Diag b)h))X l
l
l Diag b k l
l MX l )
l
(Diag b)(diag M)
We can see now that the second term is also zero.
Case II. If i 6= j but I l for some l 2 f1; 2; :::rg, then using equation
(12) the left hand side of inequality (11) becomes
Diag rf()
Using the fact that block renes vector rf(), we can write the rst
summand above askMm k
s6=l
ks
p2Is
U i;p
l
p2I l
U i;p
We use parts (v) and (vi) of Lemma 2.4 to conclude that this expression
converges to zero. We are left with
Substituting r 2 Diag b we get
Diag ((Diag b)h m )
Recall the notation from Lemma 2.1 used to denote the entries of the matrix
B. Then the limit of the rst summand above can be written as
lim
r
a sl tr (X T
l MX l )
p2Is
U i;p U j;p
because clearly
p2Is U i;p U :::rg. We are left with the
following limit
lim
Diag ((Diag b)h m )
Using Lemma 2.4 part (vii) we observe that the right hand side is zero.
Case III. If i 2 I l and j 2 I s , where l 6= s, then using equation (12), the left
hand side of inequality (11) becomes (up to o(1))
Diag rf()
Diag r 2 f()hm
l
ks
ks
We start with the second term above. Its limit is
lim
because in our case, U i has nonzero coordinates where the entries of U j are
zero. We are left with
lim
Diag rf()
l
ks
ks
We expand the rst term in this limit.
Diag rf()
l
p2I l
U i;p
ks
p2Is U i;p
t6=l;s
U i;p
Using Lemma 2.4 part (viii) we see that the third summand above converges
to zero as m goes to innity. Part (ix) of the same lemma tells us that
lim
p2I l
U i;p
ks
p2Is U i;p
In order to abbreviate the formulae we introduce the following notation
l
p2I l
U i;p
Substituting everything in (14) we get the following equivalent limit:
lim
l
ks
l
ks
ks
l
ks s
Simplifying we get
lim
ks
l
ks
ks
Notice now that r
l
because Um is an orthogonal matrix and the numerator of the above sum is
the product of its i-th and the j-th row. Next, Lemma 2.4, part (viii) says
that
lim
t6=l;s
so
lim
which completes the proof.
We are nally ready to give and prove the full version of our main result.
Theorem 3.3 Let A be an nn symmetric matrix. The symmetric function
twice dierentiable at the point (A) if and only if the spectral
function f - is twice dierentiable at the matrix A. Moreover in this case
the Hessian of the spectral function at the matrix A is
where W is any orthogonal matrix such that A = W Diag (A)
dened by equation (9). Hence
diag ~
Hi:
Proof. Let W be an orthogonal matrix which diagonalizes A in an ordered
fashion, that is
Let Mm be a sequence of symmetric matrices converging to zero, and let Um
be a sequence of orthogonal matrices such that
Diag (A) +W T
Then using Lemma 3.1 we get
We also have that
goes to innity. Because W is an orthogonal matrix
we have kWXW T matrix X. It is now easy to check the
result by Theorem 3.2.
4 Continuity of the Hessian
Suppose now that the symmetric function f : R n ! R is twice dierentiable
in a neighbourhood of the point (A) and that its Hessian is continuous at the
point (A). Then Theorem 3.3 shows that f - must be twice dierentiable
in a neighbourhood of the point A, and in this section we are going to show
that r 2 (f - ) is also continuous at the point A.
We dene a basis, fH ij g, on the space of symmetric matrices. If i
all the entries of the matrix H ij are zeros, except the (i; j)-th and (j; i)-th,
which are one. If we have one only on the (i; i)-th position. It su-ces
to prove that the Hessian is continuous when applied to any matrix of the
basis. We begin with a lemma.
Lemma 4.1 Let 2 R n
# be such that
and let the symmetric function f : R n ! R be twice continuously dieren-
tiable at the point . Let f m g 1
m=1 be a sequence of vectors in R n converging
to . Then
lim
Proof. For every m there is a permutation matrix Pm such that P T
m . (See the beginning of Section 2 for the meaning of the bar above a
vector.) But there are nitely many permutation matrices (namely n!) so we
can form n! subsequences of f m g such that any two vectors in a particular
subsequence can be ordered in descending order by the same permutation
matrix. If we prove the lemma for every such subsequence we will be done.
So without loss of generality we may assume that P T for every m,
and some xed permutation matrix P . Clearly, for all large enough m, we
have
Consequently the matrix P is block-diagonal with permutation matrices on
the main diagonal, and dimensions matching the block structure of , so
Consider now the block structure of the vectors f m g. Because
there are nitely many dierent block structures, we can divide this sequence
into subsequences such that the vectors in a particular subsequence have the
same block structure. If we prove the lemma for each subsequence we will
be done. So without loss of generality we may assume that the vectors f m g
have the same block structure for every m. Next, using the formula for the
Hessian in Theorem 3.3 we have
and Lemma 2.1 together with Theorem 3.2 give us
These equations show that without loss of generality it su-ces to prove the
lemma only in the case when all vectors f m g are ordered in descending
order, that is, the vectors m all block rene the vector . In that case we
have
and
We consider four cases.
Case I. If
lim
Diag r
just because r 2 f() is continuous at .
Case II. If i 6= j, but belong to the same block for m , then i, j will be in
the same block of as well and we have
lim
again because r 2 f() is continuous at .
Case III. If i and j belong to dierent blocks of m but to the same block
of , then
lim
and
So we have to prove that
lim
ii
(See the denition of b i () in the beginning of Section 3.) For every m we
dene the vectors _
m and
_
Because we conclude that both sequences f _
converge to , because f m g 1
does so. Below we are applying the mean
value theorem twice:
is a vector between m and _
is a vector between _
m and
. Notice that vector m is obtained
from m by swapping the i-th and the j-th coordinate. Then using the rst
part of Lemma 2.1 we see that f 0
Finally we just have to take
the limit above and use again the continuity of the Hessian of f at the point
.
Case IV. If i and j belong to dierent blocks of m and to dierent blocks
of , then
lim
because rf() is continuous at and the denominator is never zero.
Now we are ready to prove the main result of this section.
Theorem 4.2 Let A be an nn symmetric matrix. The symmetric function
continuously dierentiable at the point (A) if and only
if the spectral function f - is twice continuously dierentiable at the matrix
A.
Proof. We know that f - is twice dierentiable at A if and only if f
is twice dierentiable at (A), so what is left to prove is the continuity of
the Hessian. Suppose that f is twice continuously dierentiable at (A) and
that f - is not twice continuously dierentiable at A, that is, the Hessian
not continuous at A. Take a sequence, fAmg 1
m=1 , of symmetric
matrices converging to A such that for some > 0 we have
for all m. Let fUm g 1
m=1 be a sequence of orthogonal matrices such that
Without loss of generality we may assume that Um ! U , where U is orthogonal
and then
(Otherwise we take subsequences of fAmg and fUmg.) Using the formula for
the Hessian given in Theorem 3.3 and Lemma 4.1 we can easily see that
lim
for every symmetric H. This is a contradiction.
The other direction follows from the chain rule after observing
This completes the proof.
5 Example and Conjecture
As an example, suppose we require the second directional derivative of the
function f - at the point A in the direction B. That is, we want to nd
the second derivative of the function
at W be an orthogonal matrix such that A = W(Diag (A))W T .
Let ~
We dierentiate twice:
Using Lemma 3.1 and Theorem 3.3 at
Diag rf((A))
diag ~
In principle, if the function f is analytic, this second directional derivative
can also be computed using the implicit formulae from [26]. Some work shows
that the answers agree.
As a nal illustration, consider the classical example of the power series
expansion of a simple eigenvalue. In this case we consider the function f
given by
the k-th largest entry in x;
and the matrix
# and
Then we have
so for the function our results show the following
formulae (familiar in perturbation theory and quantum mechanics):
This agrees with the result in [6, p. 92].
We conclude with the following natural conjecture.
Conjecture 5.1 A spectral function f - is k-times dierentiable at the
matrix A if and only if its corresponding symmetric function f is k-times
dierentiable at the point (A). Moreover, f - is C k if and only if f is C k .
--R
Matrix Analysis.
Derivations, derivatives and chain rules.
Sensitivity analysis of all eigen-values of a symmetric matrix
Matrix Analysis.
Topics in Matrix Analysis.
A Short Introduction to Perturbation Theory for Linear Op- erators
The Fundamental Principles of Quantum Mechanics.
Convex analysis on the Hermitian matrices.
Derivatives of spectral functions.
Nonsmooth analysis of eigenvalues.
Eigenvalue optimization.
Quadratic expansions of spectral func- tions
On minimizing the maximum eigenvalue of a symmetric matrix.
Second derivatives for optimizing eigenvalues of symmetric matrices.
Towards second-order methods for structured nonsmooth optimization
WETS. Variational Analysis.
Quantum Mechanics.
First and second order analyis of nonlinear semid
On eigenvalue optimization.
Valeurs propres de matrices sym
On analyticity of functions involving eigenvalues.
--TR
--CTR
Xin Chen , Houduo Qi , Liqun Qi , Kok-Lay Teo, Smooth Convex Approximation to the Maximum Eigenvalue Function, Journal of Global Optimization, v.30 n.2-3, p.253-270, November 2004
Lin Xiao , Stephen Boyd , Seung-Jean Kim, Distributed average consensus with least-mean-square deviation, Journal of Parallel and Distributed Computing, v.67 n.1, p.33-46, January, 2007 | symmetric function;semidefinite program;spectral function;twice differentiable;perturbation theory;eigenvalue optimization |
587777 | Means and Averaging in the Group of Rotations. | In this paper we give precise definitions of different, properly invariant notions of mean or average rotation. Each mean is associated with a metric in SO(3). The metric induced from the Frobenius inner product gives rise to a mean rotation that is given by the closest special orthogonal matrix to the usual arithmetic mean of the given rotation matrices. The mean rotation associated with the intrinsic metric on SO(3) is the Riemannian center of mass of the given rotation matrices. We show that the Riemannian mean rotation shares many common features with the geometric mean of positive numbers and the geometric mean of positive Hermitian operators. We give some examples with closed-form solutions of both notions of mean. | where is the angle of rotation of R. The kth root exp(1 Log R) is the one for which
the eigenvalues have the largest positive real part, and is the only one we denote by
R1=k. In the case it is the only square root with positive real part.
2.2. Metrics in SO(3). A straightforward way to dene a distance function in
SO(3) is to use the Euclidean distance of the ambient space M(3), i.e., if R1 and
are two rotation matrices then
where k kF is the Frobenius norm which is induced by the Euclidean inner product,
known as the Frobenius inner product, dened by hR1; It is
easy to see that this distance is bi-invariant in SO(3), i.e., dF (P R1Q; P
dF (R1; R2) for all P ; Q in SO(3).
Another way to dene a distance function in SO(3) is to use its Riemannian
structure. The Riemannian distance between two rotations
is the length of the shortest geodesic curve that connects R1 and given by
4 M. MOAKHER
Note that the geodesic curve of minimal length may not be unique. If RT1 R2 is an
involution, in other words if (RT1 rotation through an angle , then
and can be connected by two curves of equal length. In such a case, the
rotations and are said to be antipodal points in SO(3) and is said to be
the cut point of R1 and vice versa.
The Riemannian distance (2.6) is also bi-invariant in SO(3). Indeed, using the
fact [3], we can show that dR(P R1Q; P
Remark 2.1. The Euclidean distance (2.5) represents the chordal distance between
and R2, i.e., the length of the Euclidean line segment in the space of M(3)
(except for the end points R1 and R2, this line segment does not lie in SO(3)), whereas
the Riemannian distance (2.6) represents the arc-length of the shortest geodesic curve
(great-circle arc), which lies entirely in SO(3), passing through R1 and R2.
Remark 2.2. If denotes the angle of rotation of RT1
Therefore, when theprotations R1 and are
suciently close, i.e., is small, we have dF (R1; R2) 2 dR(R1; R2).
2.3. Covariant derivative and Hessian. We recall that the tangent space at
a point R of SO(3) is the space of all matrices such that RT is skew symmetric
and that the normal space (associated with the Frobenius inner product) at R consists
of all matrices N such that RT N is symmetric [5].
For a real-valued function f(R) dened on SO(3), the covariant derivative rf is
the unique tangent vector at R such that
d
dt
where Q(t) is a geodesic emanating from R in the direction of , i.e.,
R exp(tA) and
The Hessian of f(R) is given by the quadratic form
d2
dt2
where Q(t) is a geodesic and is in the tangent space at R as above.
2.4. Geodesic convexity. We recall that a subset A of a Riemannian manifold
M is said to be convex if the shortest geodesic curve between any two points x and y
in A is unique in M and lies in A. A real-valued function dened on a convex subset
A of M is said to be convex if its restriction to any geodesic path is convex, i.e., if
its domain for all x 2 M and u 2 Tx(M),where exp is the exponential map at x.
x
With these denitions, one can readily see that any geodesic ball Br(Q) in SO(3)
of radius r less than around Q is convex and that the real-valued function f denedon Br(Q) by when r is less than 2 . Geodesic balls
with radius greater or equal than are not convex.3. Mean rotation. For a given set of N points xn; in IRd the
mean x is given by the barycenter of the N points. The
mean also has a variational property; it minimizes the sum of the squared
distances to the given points xn;
x2IRd n=1
where here de(; ) represents the usual Euclidean distance in IRd.
One can also use the arithmetic mean to average N positive real numbers xn >
and the mean is itself a positive number. In many applications,
however, it is more appropriate to use the geometric mean to average positive numbers,
which is possible because positive numbers form a multiplicative group. The geometric
1=N 1=N
mean also has a variational property; it minimizes the sum of the
squared hyperbolic distances to the given data
where dh(x; log yj is the hyperbolic distance1 between x and y.
As we have seen, for the set of positive real numbers dierent notions of mean can
be associated with dierent metrics. In what follows, we will extend these notions of
mean to the group of proper orthogonal matrices.
By analogy with IRd, a plausible denition of the mean of N rotation matrices
is that it is the minimizer in SO(3) of the sum of the squared
distances from that rotation matrix to the given rotation matrices
represents a distance in
SO(3). Now the two distance functions (2.5) and (2.6) dene the two dierent means.
Definition 3.1. The mean rotation in the Euclidean sense, i.e., associated with
the metric (2.5), of N given rotation matrices is dened as
Definition 3.2. The mean rotation in the Riemannian sense, i.e., associated
with the metric (2.6), of N given rotation matrices is dened as
The minimum here is understood to be the global minimum. We remark that
in IRd, or in the set of positive numbers, the objective functions to be minimized
are convex over their domains, and therefore the means are well dened and unique.
However, in SO(3), as we shall see, the objective functions in (3.3) and (3.4) are not
(geodesically) convex, and therefore the means may not be unique.
Before we proceed to study these two means, we note that both satisfy the following
desirable properties that one would expect from a mean in SO(3), and that
are counterparts of properties of means of numbers, namely,
1We borrow this terminology from the hyperbolic geometry of the Poincare upper half-plane. In
fact, the hyperbolic length of the geodesic segment joining the points P (a; y1) and Q(a; y2), y1; y2 > 0
is j log y1 j, (see [26]).
6 M. MOAKHER
1. Invariance under permutation: For any permutation of the numbers 1
through N, we have
2. Bi-invariance: If R is the mean rotation of fRng;
is the mean rotation of fP RnQg; every P and Q in SO(3). This
property follows immediately from the bi-invariance of the two metrics dened above.
3. Invariance under transposition: If R is the mean rotation of fRng;
then RT is the mean rotation of fRT
We remark that the bi-invariance property is in some sense the counterpart of the
homogeneity property of means of positive numbers (but here left and right multiplication
are both needed because the rotation group is not commutative).
3.1. Characterization of the Euclidean mean. The following proposition
gives a relation between the Euclidean mean and the usual arithmetic mean.
Proposition 3.3. The mean rotation
is the orthogonal projection of R = onto the special orthogonal group SO(3).
In other words, the mean rotation in the Euclidean sense is the projection of the
arithmetic mean R of in the linear space M(3) onto SO(3).
Proof. As are all orthogonal, it follows that
On the other hand, the orthogonal projection of R onto SO(3) is given by
"XN XN Rn RTm XN RTn #
Because of Proposition 3.3, the mean in the Euclidean sense will be termed the
projected arithmetic mean to reect the fact that it is the orthogonal projection of the
usual arithmetic mean in M(3) onto SO(3).
Remark 3.4. The projected arithmetic mean can now be seen to be related to the
classical orthogonal Procrustes problem [10], which seeks the orthogonal matrix that
most closely transforms a given matrix into a second one.
Proposition 3.5. If det R is positive, then the mean rotation in the Euclidean
sense given by the unique polar factor in
the polar decomposition [10] of R.
Proof. Critical points of the objective function
dened on SO(3) and corresponding to the minimization problem (3.3) are those
elements of SO(3) for which the covariant derivative of (3.5) vanishes. Using (2.8)
we get Therefore, critical points of (3.5) are the
rotation matrices R such that PNn=1 R RTn R RT equivalently, for
which the matrix S dened by
is symmetric.
Since R is orthogonal, and both S and are symmetric, it follows
that N2M. Therefore, there exists an orthogonal matrix U such that
of M. The eight possible square roots of M are UT diag(
To determine the square root of N2M that corresponds
to the minimum of (3.5) we require that the Hessian of the objective function (3.5)
at R given by (3.6) be positive for all tangent vectors at R. From (2.9) we obtain
Hess therefore at R given by (3.6) we have
Hess
where a; b; c are such that = UT RBU and
b a 0
As we are looking for a proper rotation matrix, i.e., an orthogonal matrix with
determinant one, it follows from (3.6) that det
that Hess F(; ) is positive for all tangent vectors at R if and only
positive
and In fact, (3.5) has four cpriticapl poinpts belonging to
which consist of apminimpum [(1p; 2;
Hence, the projected arithmetic mean is given by
which, when det R > 0, coincides with the polar factor of the polar decomposition of
R. Of course uniqueness fails when the smallest eigenvalue of M is not simple.
Remark 3.6. The case where det is a degenerate case. However, if R
has rank 2, i.e., when nd a unique closest proper
orthogonal matrix to R (see [6] for details), and hence can dene the mean rotation
in the Euclidean sense.
3.2. Characterization of the Riemannian mean. First, we compute the
derivative of the real-valued function H(P
to t where P is the geodesic emanating from R in the direction of
RA. As is in the tangent space at R, we have
be the angle of rotation of QT P (t), i.e., such that
8 M. MOAKHER
Dierentiate (3.8) to get d H(P is the
dt t=0 sin
angle of rotation of QT R and we have used the fact that H(P
Recall that, since A is skew symmetric, symmetric matrix S.
It follows that tr(QT
Then, with the help of (2.4) we obtain d H(P
dt t=0
fore, the covariant derivative of H is given by
The second derivative of (3.8) gives
d2 sin cos
Let U be an orthogonal matrix and B the skew-symmetric matrix such that
Then, as tr(QT it is easy to see that
d2 sin
The RHS of (3.10) is always positive for arbitrary a, b, c in IR and 2 (; ). It
follows that Hess H(; ) is positive for all tangent vectors .
denote the objective function of the minimization problem (3.4), i.e.,
Using the above, the covariant derivative of G is found to be
Therefore, a necessary condition for regular extrema of (3.11) is
By (3.10) we conclude that the Hessian Hess G(; ) of the objective function (3.11)
is positive for all tangent vectors . Therefore, equation (3.12) characterizes local
minima of (3.11) only. As a matter of fact, local maxima are not regular points, i.e.,
they are points where (3.11) is not dierentiable.
It is worth noting that, as R = R , the characterization for the Riemannian
mean given in (3.12) is similar to the characterization
of the geometric mean (3.2) of positive numbers. However, while in the scalar case
the characterization (3.13) has the geometric mean as unique solution, the characterization
(3.12) has multiple solutions, and hence is a necessary but not a sucient
condition to determine the Riemannian mean. The lack of uniqueness of solutions of
(3.12) is akin to the fact that, due to the existence of a cut point for each element of
SO(3), the objective function (3.11) is not convex over its domain.
In general, closed-form solutions to (3.12) cannot be found. However, for some
special cases solutions can be given explicitly. In the following subsections, we will
present some of these special cases.
Remark 3.7. The Riemannian mean of may also be called the Riemannian
barycenter of which is a notion introduced by Grove, Karcher
and Ruh [11]. In [17] it was proven that for manifolds with negative sectional curva-
ture, the Riemannian barycenter is unique.
3.2.1. Riemannian mean of two rotations. Intuitively, in the case
the mean rotation in the Riemannian sense should lie midway between R1 and
along the shortest geodesic curve connecting them, i.e., it should be the rotation
R2)1=2. Indeed, straightforward computation shows that R1(RT1 R2)1=2 does
satisfy condition (3.12). Alternatively, equation (3.12) can be solved analytically as
follows. First, we rewrite it as
then we take the exponential of both sides to obtain multiplying
both sides with RT1 R we get (RT1 Such an equation has two
solutions in SO(3) that correspond to local minima of (3.11). However, the global
minimum is the one that corresponds to taking the square root of the above equation
that has eigenvalues with positive real part, i.e., (RT1 R2)1=2. Therefore, for two non-
antipodal rotation matrices R1 and R2, the mean in the Riemannian sense is given
explicitly by
The second equality can be easily veried by pre-multiplying R1(RT1 R2)1=2 by R2RT2
which is equal to I. This makes it clear that G is symmetric with respect to R1 and
R2, i.e., G(R1;
3.2.2. Riemannian mean of rotations in a one-parameter subgroup. In
the case where all matrices Rn; belong to a one-parameter subgroup
of SO(3), i.e., they represent rotations about a common axis, we expect that their
mean is also in the same subgroup. Further, one can easily show that equation (3.12)
Y
reduces to saying that R is an Nth root of Rn. Therefore, the Riemannian mean
is the Nth root that yields the minimum value of the objective function (3.11).
In this case, all rotations lie on a single geodesic curve. One can show that
the geometric mean G(R1; R2; R3) of three rotations R1, and R3 such that
3, is the rotation that is located at 32 of the length
of the shortest geodesic segment connecting R1 and G(R2; R3), i.e., the rotation
have
M. MOAKHER
This explicit formula does not hold in the general case due to the inherent curvature
of SO(3), see the discussion at the end of Example 2 below.
When the rotations belong to a geodesic segment of length less than
and centered at the identity, the above formula reduces to
1=N 1=N
Once again we see the close similarity between the geometric mean of positive numbers
and the Riemannian mean of rotations. This is to be expected since both the set of
positive numbers and SO(3) are multiplicative groups, and we have used their intrinsic
metrics to dene the mean. For this reason, we will call the mean in the Riemannian
sense the geometric mean.
3.3. Equivalence of both notions of mean of two rotations. In the follow-
ing, we show that for two rotations the projected arithmetic mean and the geometric
mean coincide. First, we prove the following lemma.
Lemma 3.8. Let R1 and R2 be two elements of SO(3), then det(R1
Proof. Consider the real-valued function dened on [0; 1] by
We see that this function is continuous with
Assume that f(1) < 0, i.e., det(R1 there exists in [0; 1] such that
Hence, must be in the spectrum of RT2 R1 which is a proper orthogonal matrix. But
this cannot happen, which contradicts the assumption that det(R1
In general, the result of the above lemma does not hold for more than two rotations
matrices. We will see examples of three rotation matrices for which the determinant
of their sum can be negative.
Proposition 3.9. The polar factor of the polar decomposition of
and are two rotation matrices, is given by R1(RT1 R2)1=2.
Proof. Let Q be the proper orthogonal matrix and S be the positive-denite
matrix such that QS is the unique polar decomposition of
R1. One can easily verify that (RT1 R2)1=2
(RT1 R2)1=2 is the positive-denite square root of 2I and that the
inverse of this square root is given by Hence, the
polar factor is
Since the polar decomposition is unique, the result of this proposition together
with the previous lemma shows that both notions of mean agree for the case of
two rotation matrices. For more than two rotations, however, both notions of mean
coincide only in special cases that present certain symmetries. In Example 2 of x 4
below, we shall consider a two-parameter family of cases illustrating this coincidence.
4. Analytically solvable examples. In this section we present two cases in
which we can solve for both the projected arithmetic mean and the geometric mean
explicitly. These examples help us gain a deeper and concrete insight to both notions
of mean. Furthermore, Example 2 conrms our intuitive idea that for \symmetric"
cases, both notions of mean agree.
4.1. Example 1. We begin with a simple example where all rotation matrices
for which we want to nd the mean lie in a one-parameter subgroup of SO(3). Using
the bi-invariance property we can reduce the problem to that of nding the mean of
MEAN ROTATION 11
Projected arithmetic mean: The arithmetic sum of these matrices has a positive
determinant n)2. Hence, the projected arithmetic mean
of the given matrices is given by the polar factor of the polar decomposition of their
sum. After performing such a decomposition we nd thatN
2cos a sin a >< cos
> sin a = sin n:
Such a mean is well dened as long as This mean agrees with the notionof directional mean used in the statistics literature for circular and spherical data
[20, 7, 9, 8]. The quantity 1 r=N, which is called the circular variance, is a measure
of dispersion of the circular data . The direction dened by the angle a
is called the mean direction of the directions dened by
Geometric mean: Solutions of (3.12) are given by
cos l sin l 0 N
4sin l cos l 05; where
The geometric mean of these rotation matrices is therefore the solution that yields
the minimum value of the objective function (3.11). Of course, as we have seen in x 3,
the geometric mean is given explicitly by (3.15).
Note that, even though elements of a one-parameter subgroup commute, the two
rotations (3.15) and (3.16) are dierent. This is due to choice of the kth root of a
rotation matrix to be the one with eigenvalues that have the largest positive real parts.
To see this, consider the case
where P is a rotation of an angle about the z-axis while R
If the rotation matrices Rn are such that n <
certain number 2 IR, then their geometric mean is a rotation about the z-axis of
an angle
The geometric mean rotation of the rotations given by (4.1) coincides with the
concept of median direction of circular data [20, 7].
Remark 4.1. When neither the
projected arithmetic mean nor the geometric mean is well dened. On the one hand
so the projected arithmetic mean is not dened, while on the other hand
the objective function (3.11) for the geometric mean has two local minima with the
same value, namely, R1(RT1 R2)1=2 and its cut value andtherefore the global minimum is not unique.
Let F~ and G~ be the functions dened on [; ] such that
any rotation R about the z-axis through an angle , i.e., F~ and
G~ are the restrictions of the objective functions (3.5) and (3.11) to the subgroup
considered in this example. In Fig. 4.1 we give the plots of F~ and G~ for the sets of
data takes several dierent
values. It is clear that neither (3.5) nor (3.11) is convex. While the function (3.5)
M. MOAKHER
Projected Arithmetic Mean Geometric Mean10
~
a=p/4
~
a=p/4
a=p
Fig. 4.1. Plots of the objective functions F~() and G~() for dierent values of . Note that
is constant and G~ has four local minima with an equal value. Consequently, neither
the projected arithmetic mean nor the geometric mean is well dened.
is smooth the function (3.11) has cusp points but only at local maxima. However,
if the given rotations are located in a geodesic ball of radius less than =2, i.e., in
this example have angles i such that ji jj < then the objective
functions restricted to this geodesic ball are convex and hence the means are well
dened. Such case is illustrated in Fig. 4.2 which shows plots of F~ and G~ for the
following sets of data takes several
dierent values.
Projected Arithmetic Mean Geometric Mean8
~
a=p/4
~
a=p/4
Fig. 4.2. Plots of the objective functions F~() and G~() for dierent values of . Restricted to
[=4; 3=4], i.e., between the dashed lines, the objective functions are indeed convex.
4.2. Example 2. In the second example we consider N elements of SO(3) that
represent rotations through an angle about the axes dened by the unit vectors
sin sin n; cos ]T , where
Projected arithmetic mean: Straightforward computations show that the projected
arithmetic mean is given by
cos a sin a 0 ><cos
a cos a 05;
By using half-angle tangent formulas in the above we obtain the following simple
relation between a and
a
Geometric mean: Since the rotation axes are symmetric about the z-axis, and
the rotations share the same angle, we expect that their geometric mean is a rotation
about the z-axis through a certain angle g. Furthermore, because of this symmetry
we also expect that the mean in the Euclidean sense agrees with the one in the
Riemannian sense.
From the Campbell-Baker-Hausdor formula for elements of SO(3) [23] we have
where the coecients a; b; c and are given by
a
Therefore, the characterization (3.12) of the geometric mean reduces to
a Log Rn bN Log R
This is a matrix equation in so(3), which is equivalent to a system of three nonlinear
equations. Because the axes of rotation of Rn are symmetric about the z-axis
we have cos It follows that [Log Rn; Log
R. Therefore, this system reduces to the following
single equation for the angle g
which when compared with (4.2) indeed shows that a = g and therefore the projected
arithmetic mean and the geometric mean coincide.
This example provides a family of mean problems parameterized by and
where the projected arithmetic and geometric mean coincide. We now further examine
the problem of nding the mean of three rotations about the three coordinate axes
14 M. MOAKHER
through the same angle , which, by the bi-invariance property of both means, can be
considered as a special case of this two-parameter family with .Therefore the mean of these three rotations is a rotatiopn through an angle about
the axis generated by the vector [1; 1; 1]T with tan . The rotations R1,
and R3 form a geodesic equilateral triangle in SO(3). By symmetry arguments
the geometric mean should be the intersection of the three geodesic medians, i.e., the
geodesic segments joining the vertices of the geodesic triangle to the midpoints of the
opposite sides. In at geometry, this intersection is located at two-thirds from the
vertices of the triangle. However, in the case of SO(3), due to its intrinsic curvature,
this is not true. The ratio of the length of the geodesic segment joining one rotation
and the geometric mean, to the length of the geodesic median joining this rotation
and the midpoint of the geodesic curve joining the two other rotations is plotted as a
function of the angles in Fig. 4.3.0.65g0.620 p/4 p/2 3p/4 p
Fig. 4.3. Plot of the ratio of the geodesic distance from one vertex to the barycenter over
the geodesic distance from this vertex to the midpoint of the opposed edge in the geodesic equilateral
triangle in SO(3). The departure of from 2/3, which is due to the curvature of SO(3), increases
with the length, , of the sides of the triangle.
5. Weighted means and power means. Our motivation of this work was to
construct a lter that smooths the rotation data giving the relative orientations of
successive base pairs in a DNA fragment, see [19] for details. Such a lter can be a
generalization of moving window lters, which are based on weighted averages, used
in linear spaces to smooth noisy data. The construction of such lters and the direct
analogy we have found between the arithmetic and geometric means in the group of
positive numbers, and the projected arithmetic and geometric means in the group
of rotations, have led us to the introduction of weighted means and power means of
rotations that we discuss next.
Definition 5.1. The weighted projected arithmetic mean of N given rotations
dened as
This mean satises the bi-invariance property. Using similar arguments as for the
projected arithmetic mean one can show that the weighted projected arithmetic mean
is given by the polar factor of the polar decomposition of the matrix
MEAN ROTATION 15
provided that det A is positive.
Definition 5.2. The weighted geometric mean of N rotations
weights dened as
This mean also satises the bi-invariance property. Using arguments similar to
those used for the geometric mean, we can show that the weighted geometric mean is
characterized by n=1 wn Log(Rn
Definition 5.3. For a real number s such that 0 < jsj 1, we dene the
weighted s-th power mean rotation of N rotations
h is
We note that Mw
1 this is the weighted projected arithmetic mean. Because elements of SO(3) are
orthogonal, and the trace operation is invariant under transposition, the weighted
s-th power mean is the same as the weighted (s)-th power mean. Therefore, it is
immediate that the weighted projected harmonic mean, dened by
coincides with the weighted projected arithmetic mean.
This is a natural generalization of the s-th power mean of positive numbers and
it is in line with the fact that for positive numbers the s-th power mean
is given by the s-th root of the arithmetic mean of One has to
note, however, that for s such that 0 < jsj < 1 this mean is not invariant under the
action of elements of SO(3). This is not a surprise as the power mean of positive
numbers also does not satisfy the homogeneity property.
For the set of positive numbers [12] and similarly for the set of Hermitian denite
positive operators [25], there is a natural ordering of elements and the classical
mean inequalities holds. Furthermore, it is well known
[12, 25] that the s-th power mean converges to the geometric mean as s goes to 0.
However, for the group of rotations such a natural ordering does not exists. Nonethe-
less, one can show that if all rotations belong to a geodesic ball of radius
less than centered at the identity, then the projected power mean indeed convergesto the geometric mean as s tends to 0.
Analysis of numerical algorithms for computing the geometric mean rotation and
the use of the dierent notions of mean rotation for smoothing three-dimensional
orientation data will be published elsewhere.
Acknowledgment
. The author is grateful to Professor J. H. Maddocks for
suggesting this problem and for his valuable comments on this paper. He also thanks
the anonymous referee for his helpful comments.
--R
Dierential Geometry: Manifolds
Matrix Computations
Jacobi elds and Finsler metrics on compact Lie groups with an application to dierentiable pinching problem
optimization and Dynamical Systems
Maximum likelihood estimation for the matrix von Mises-Fisher and Bingham distributions
Fitting smooth paths to spherical data
Riemannian center of mass and mollier smoothing
The von Mises-Fisher matrix distribution in orientation statis- tics
A continuum rod model of sequence-dependent DNA structure
Statistics of directional data
A Mathematical Introduction to Robotic Manipula- tion
Fitting smooth paths to rotation data
Geometrical Methods in Robotics
optimization techniques on Riemannian manifolds
Hermitian semidenite matrix means and related matrix inequalities-an intro- duction
Convex functions and optimization methods on Riemannian manifolds
Equatorial distributions on a sphere
--TR
--CTR
Doug L. James , Christopher D. Twigg, Skinning mesh animations, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Kavan , Steven Collins , Ji ra , Carol O'Sullivan, Skinning with dual quaternions, Proceedings of the 2007 symposium on Interactive 3D graphics and games, April 30-May 02, 2007, Seattle, Washington
Xavier Pennec, Intrinsic Statistics on Riemannian Manifolds: Basic Tools for Geometric Measurements, Journal of Mathematical Imaging and Vision, v.25 n.1, p.127-154, July 2006
Christophe Lenglet , Mikal Rousson , Rachid Deriche , Olivier Faugeras, Statistics on the Manifold of Multivariate Normal Distributions: Theory and Application to Diffusion Tensor MRI Processing, Journal of Mathematical Imaging and Vision, v.25 n.3, p.423-444, October 2006
Peter J. Basser , Sinisa Pajevic, Spectral decomposition of a 4th-order covariance tensor: Applications to diffusion tensor MRI, Signal Processing, v.87 n.2, p.220-236, February, 2007 | special orthogonal group;rotation;geodesics;averaging;operator means |
587780 | On Positive Semidefinite Matrices with Known Null Space. | We show how the zero structure of a basis of the null space of a positive semidefinite matrix can be exploited to determine a positive definite submatrix of maximal rank. We discuss consequences of this result for the solution of (constrained) linear systems and eigenvalue problems. The results are of particular interest if A and the null space basis are sparse. We furthermore execute a backward error analysis of the Cholesky factorization of positive semidefinite matrices and provide new elementwise bounds. | Introduction
. The Cholesky factorization A
exists for any symmetric positive semidenite matrix A. In fact, R is the upper triangular
factor of the QR factorization of A 1=2 [11, x10.3]. R can be computed with the
well-known algorithm for positive denite matrices. However, zero pivots may appear.
As zero pivots come with a zero row/column in the reduced A, a zero pivot implies a
zero row in R. To actually compute a numerically stable Cholesky factorization of a
positive semidenite matrix one is advised to apply diagonal pivoting [11].
A semidenite matrix A may be given implicitly, in factored form
pn is of full row rank that does not need to be a
exposes the singularity of A explicitly as In this
case both the linear system and the eigenvalue problem can be solved eciently and
elegantly by working directly on the matrix F , never forming the matrix A explicitly.
In fact, in some applications, not assembling the matrix A but its factor F is the most
important step in the overall process of the numerical computation. One obvious
reason is that the (spectral) condition number of F is the square root of the condition
number of A. In nite element computation, F is the so called natural factor of the
stiness matrix A [2]. In the framework of linear algebra, every symmetric positive
semidenite matrix is the Gram matrix of some set of vectors, the columns of F .
Another possibility to make the singularity of A explicit is to have available a basis of its null space N(A). This is the situation that we want to investigate in this note. We will see that knowing a basis of N(A) allows one to determine a priori when the zero pivots will occur in the Cholesky factorization. It also permits us to exhibit a positive definite submatrix of A right away. These results are of particular interest if A and the null space basis are sparse. This is the case in the application from electromagnetics that prompted this study [1]. There, a vector that is orthogonal to the null space corresponds to a discrete electric field that is divergence-free.
Our findings permit us to work with the positive definite part of A and to compute a rank revealing Cholesky factorization A = R^T R where the upper trapezoidal R has full row rank. What is straightforward in exact arithmetic amounts to simply replacing potentially inaccurate small numbers by zero. We analyze the error that is introduced by this procedure.
Swiss Federal Institute of Technology (ETH), Institute of Scientific Computing, CH-8092 Zurich, Switzerland (arbenz@inf.ethz.ch)
† University of Zagreb, Department of Mathematics, Bijenicka 30, HR-10000 Zagreb, Croatia (drmac@math.hr). The work of this author was supported by the Croatian Ministry of Science and Technology grant 037012.
We complement this note with some implications of the above for solving eigenvalue problems and constrained systems of equations.
2. Cholesky factorization of a positive semidefinite matrix with known null space. In this section we consider joint structures of a semidefinite matrix A and its null space.
Theorem 2.1. Let A = R^T R be the Cholesky factorization of the positive semidefinite matrix A in R^{n x n}. Let Y = [y_1, ..., y_m] in R^{n x m} with R(Y) = N(A), and let n_i denote the index of the last nonzero entry of y_i. If n_1 < n_2 < ... < n_m, then r_{n_i n_i} = 0 for i = 1, ..., m. These are the only zero entries on the diagonal of R.
Proof. Notice that the assumptions imply that Y := [y_1, ..., y_m] has full rank. By Sylvester's law of inertia R has precisely m zeros on its diagonal. Further, 0 = y_i^T A y_i = ||R y_i||^2 implies R y_i = 0. Since R is upper triangular and the entries of y_i beyond position n_i vanish, (R y_i)_{n_i} = r_{n_i n_i} (y_i)_{n_i} = 0, whence r_{n_i n_i} = 0 for i = 1, ..., m.
If only n_1 <= n_2 <= ... <= n_m, then Y, flipped upside-down, can be transformed into column-echelon form in order to obtain strong inequalities.
The Cholesky factor R appearing in Theorem 2.1 is an n x n upper triangular matrix with m zero rows. These rows do not affect the product R^T R. Therefore, they can be removed from R to yield an (n-m) x n matrix R̂ with R̂^T R̂ = A.
If the numbers n_i are known, it is convenient to permute the rows n_i of Y and accordingly the rows and columns n_i of A to the end. Then Theorem 2.1 can be applied with n_i = n - m + i. The last m rows of R in Theorem 2.1 vanish. So, R̂ is upper trapezoidal.
After the just mentioned permutation the lowest m x m block of Y is nonsingular, in fact, upper triangular. This consideration leads to an alternative formulation of Theorem 2.1.
Theorem 2.2. Let A = R^T R be the Cholesky factorization of the positive semidefinite matrix A in R^{n x n}. Let Y in R^{n x m} with R(Y) = N(A). If the last m rows of Y are linearly independent, then the leading principal (n-m) x (n-m) submatrix of A is positive definite and R can be taken (n-m) x n upper trapezoidal.
Proof. Let
    W = [I_{n-m}, Y_1; O, Y_2],                                   (2.1)
where Y = [Y_1; Y_2] and Y_2 in R^{m x m} consists of the last m rows of Y. W is therefore invertible. Applying a congruence transformation with W on A gives, since A Y = O,
    W^T A W = W^T [A_11, A_12; A_21, A_22] W = [A_11, O; O, O].   (2.2)
By Sylvester's law of inertia A_11 must be positive definite.
Let A_11 = R_11^T R_11 be the Cholesky factorization of A_11. Then the Cholesky factor of the matrix in (2.2) is [R_11, O] in R^{(n-m) x n}. Therefore, the Cholesky factor of A is [R_11, O] W^{-1}, which is (n-m) x n upper trapezoidal.
Theorem 2.2 is applicable as long as the last m rows of Y form an invertible matrix. If some other m rows of Y are linearly independent, we can permute Y such that these rows become the last ones. In particular, if we want A_11 to be as sparse as possible, we may choose the rows moved to the end to correspond to the m most densely populated rows/columns of A, with the following greedy algorithm: if we have determined i_1, ..., i_k, we choose i_{k+1} to be the index of the densest column of A such that the rows i_1, ..., i_{k+1} of Y are linearly independent. In this way we can hope for an A_11 with sparse Cholesky factors. A sketch of this selection is given below.
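The greedy selection just described can be sketched as follows (a minimal Python/NumPy sketch; the function name, the nonzero-count density measure, and the use of matrix_rank as the independence test are illustrative choices, not prescribed by the paper):

import numpy as np

def select_rows_to_move_last(A, Y):
    # Greedily pick m indices: densest columns of A whose corresponding rows of Y
    # stay linearly independent; these rows/columns are then permuted to the end.
    n, m = Y.shape
    density = (A != 0).sum(axis=0)           # nonzeros per column of A
    chosen = []
    for i in np.argsort(-density):           # densest columns first
        if np.linalg.matrix_rank(Y[chosen + [int(i)], :]) == len(chosen) + 1:
            chosen.append(int(i))
        if len(chosen) == m:
            break
    return chosen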
Remark 2.1. The Neumann problem
    -Δu = f  in Ω,    ∂u/∂n = 0  on ∂Ω,                           (2.3)
in a simply connected domain Ω determines u only up to an additive constant; in particular, its homogeneous version is satisfied by all constant functions u. The discretization of (2.3) with finite elements of Lagrange type [4] leads to a positive semidefinite matrix A with a one-dimensional null space spanned by the vector e with all entries equal to 1. Theorem 2.1 now implies that, no matter how we permute A, in the factorization A = R^T R the single zero on the diagonal of R will not appear before the very last elimination step.
Example 2.1. Let A in R^{5 x 5} and Y in R^{5 x 2} be given (the numerical entries are omitted here) with A Y = O. As the last two rows of Y are linearly independent, Theorem 2.2 states that the leading principal 3 x 3 submatrix of A is positive definite and that its Cholesky factor R is 3 x 5 upper trapezoidal.
Let P be the permutation matrix that exchanges the 2nd with the 4th and the 3rd with the 5th entry of a 5-vector. Then, with A_1 = P^T A P and Y_1 = P^T Y, Theorem 2.1 implies that the Cholesky factor R_1 of A_1 has zero diagonal elements at positions 3 and 5.
3. Consistent semidefinite systems. In this section we discuss how to solve
    A x = b,                                                      (3.1)
where A, R, and Y are as in Theorem 2.1. Without loss of generality, we can assume that n_i = n - m + i. We split matrices and vectors in (3.1),
    A = [A_11, A_12; A_12^T, A_22] = [R_11^T; R_12^T] [R_11, R_12],   x = [x_1; x_2],   b = [b_1; b_2],   (3.2)
with x_1, b_1 in R^{n-m}; A_11 is obtained from A by deleting rows and columns n-m+1, ..., n. The factorization (3.2) yields
    A_11 = R_11^T R_11,   A_12 = R_11^T R_12,   A_22 = R_12^T R_12.   (3.3)
Although A_11 is invertible, its condition number can be arbitrarily high. To reduce fill-in during factorization [8] any symmetric permutations can be applied to A_11 without affecting the sequel. As R_11^T has full rank and has no zero diagonal elements, A Y = O implies R Y = O, or
    R_11 Y_1 + R_12 Y_2 = O.                                      (3.4)
Because the right side b of (3.1) has to satisfy b in R(A), i.e. Y^T b = 0, we have
    b_2 = -Y_2^{-T} Y_1^T b_1.                                    (3.5)
It is now easy to show that a particular solution of (3.1) is given by x with components
    x_1 = A_11^{-1} b_1,   x_2 = 0.
In fact, employing (3.3)-(3.5) the second block row in (3.2) is
    A_12^T x_1 = R_12^T R_11 A_11^{-1} b_1 = -Y_2^{-T} Y_1^T R_11^T R_11 A_11^{-1} b_1 = -Y_2^{-T} Y_1^T b_1 = b_2.
The manifold S of the solutions of (3.1)-(3.2) is
    S = { x + Y a : a in R^m }.
The vector a can be determined such that the solution x + Y a satisfies some constraints. In particular, if
    a = -(Y^T Y)^{-1} Y^T x,
then x + Y a is perpendicular to the null space of A.
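As an illustration, the particular solution and the projection onto the orthogonal complement of N(A) can be computed as follows (a sketch under the assumptions of this section: the permutation has been applied so that A_11 = A[:n-m, :n-m] is positive definite, A Y = O, and b lies in the range of A; the function names are ours):

import numpy as np

def particular_solution(A, Y, b):
    n, m = Y.shape
    r = n - m
    L = np.linalg.cholesky(A[:r, :r])                        # A_11 = L L^T
    x = np.zeros(n)
    x[:r] = np.linalg.solve(L.T, np.linalg.solve(L, b[:r]))  # x_1 = A_11^{-1} b_1, x_2 = 0
    return x

def orthogonal_solution(A, Y, b):
    # shift the particular solution along N(A) so that Y^T x = 0
    x = particular_solution(A, Y, b)
    return x - Y @ np.linalg.solve(Y.T @ Y, Y.T @ x)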
Let now A be given implicitly as a Gram matrix, A = F^T F with F = [F_1, F_2] in R^{p x n}, and let Y in R^{n x m} be as above. (This may require renumbering the columns of F.) As F Y = O and as Y_2 is nonsingular, the block F_2 depends linearly on F_1. Therefore, the QR factorization of F has the form
    F = Q [R_11, R_12; O, O].
As A = F^T F = R^T R, the factor R = [R_11, R_12] equals the upper trapezoidal Cholesky factor in (3.2).
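In code, the upper trapezoidal factor can be obtained directly from F without ever forming A (a sketch; it assumes the columns of F are already ordered so that the leading p x r block F_1 has full column rank):

import numpy as np

def trapezoidal_factor(F, r):
    # F = [F1 F2]; the columns of F2 lie in range(F1) = range(Q1), so
    # R = [R11, Q1^T F2] satisfies F^T F = R^T R in exact arithmetic.
    F1, F2 = F[:, :r], F[:, r:]
    Q1, R11 = np.linalg.qr(F1)               # thin QR of F1
    return np.hstack([R11, Q1.T @ F2])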
4. Error Analysis. In this section we give backward error analyses for the semidefinite Cholesky factorization and for the null space basis.
4.1. Semidefinite Cholesky factorization. The floating-point computation of the Cholesky factorization of a semidefinite matrix is classified as unstable by Higham [11, §10.3.2]. The principal problem is the determination of the rank of the matrix.
If we assume, as we do in this note, that a basis of the null space of the matrix under consideration is known a priori then, of course, its rank is known. Let A be partitioned as in (3.2). We assume that A_11 in R^{r x r}, r = n - m, is positive definite numerically, i.e. that the Cholesky factorization does not break down in floating-point arithmetic with round-off unit u. Due to a result by Demmel [5] (see also [11, Thm. 10.14]) this is the case if the smallest eigenvalue of the scaled matrix is sufficiently above the roundoff level,
    λ_min(A_s) >= 2 r f(r) u,                                     (4.1)
where λ_min(.) denotes the minimal eigenvalue, ||.|| is the spectral norm, f(r) is a modestly growing function of r, and A_s = diag(A_11)^{-1/2} A_11 diag(A_11)^{-1/2} is positive definite with unit diagonal. If (4.1) does not hold, A_11 is not numerically definite. The assumption on λ_min can be relaxed if, for instance, we use double precision accumulation during the factorization. Then f(r) can be replaced by a small integer for all r not larger than 1/u. We assume, however, that 2 r f(r) u < 1.
The Cholesky decomposition of A is computed as indicated in (3.3). The Cholesky factor of A_11 is computed first. Then the matrix R_12 is obtained as the solution of the matrix equation R_11^T R_12 = A_12.
Let R̃_11 denote the computed floating-point Cholesky factor of A_11. Then the following two important facts are well known.
(1) There exists a symmetric δA_11 such that A_11 + δA_11 = R̃_11^T R̃_11 and
    max_{1<=i,j<=r} |δA_11|_{ij} / ((A_11)_{ii} (A_11)_{jj})^{1/2} <= f(r) u.        (4.2)
This is the backward error bound by Demmel [5], [11, Theorem 10.5].
(2) Let imply that the
Frobenius norm of
assumption (4.1) implies show [7] that there
exists an upper triangular matrix such that
e
s
s
<p:
Let e
R 12 be the
oating-point solution of the matrix equation e
R T
is the computed approximation of the exact Cholesky factor
Let e
R T e
R be partitioned conforming with (3.2). Since A+A is positive
semidenite and of rank r by construction, the equation e
A
A 1e
A 12 holds.
If we compute e
R 12 column by column, then, using Wilkinson's analysis of triangular
linear systems [11, Theorem 8.5],
R
where the matrix absolute values and the inequality are to be understood entry-wise.
Thus, we can write e
R 12 as
e
R T
R 12 j: (4.3)
Also, if we
R T
R T
11 j, we have
e
R T
R T
Further, from the inequality j e
using the M-matrix
property of
I
we obtain
Hence, relations (4.2), (4.3), (4.5) imply that the backward error for all (i; j) in the
in (3.2) is bounded by
A 22
We rst observe that k j j k
and that
s k.
Note that our assumptions imply that
r
2:
It remains to estimate the backward error in the (2; 2) block of the partition (3.2).
Using relation (4.4), we compute A 22 = e
R
R
R T
R 1e
R T
Using the inequalities from relations (4.4), (4.5) we obtain, for all (i; j),
We summarize the above analysis in the following theorem.
Theorem 4.1. Let A be an n x n positive semidefinite matrix of rank r with block partition (3.2), where the r x r matrix A_11 is positive definite with the property (4.1). Then the floating-point Cholesky factorization with roundoff u computes an upper trapezoidal matrix R̃ of rank r such that R̃^T R̃ = A + δA, where δA is a symmetric backward perturbation with the following bounds:
A ii A jj
A ii A jj
~
A ii A jj ; r <
In the last estimate,
s k. Further, if e
is the exact Cholesky factor of A, then
e
R 11 R
R
~
Here, the matrix is upper triangular and is to the rst order j j
.
Further, let the Cholesky factorization of A 11 be computed with pivoting so that
(R 11 ) ii
k=i (R 11
Then, the error R
R 11 R 11 is also
row-wise small, that is
Remark 4.1. Note that Theorem 4.1 also states that in the positive definite case the Cholesky factorization with pivoting computes the triangular factor with small column-wise and row-wise relative errors. This affects the accuracy of the linear equation solver (forward and backward substitutions following the Cholesky factorization) not only by ensuring favorable condition numbers but also by ensuring that the errors in the coefficients of the triangular systems are small.
4.2. Null space error. We now derive a backward error for the null space basis Y of A. We seek an n x (n-r) full rank matrix Ỹ = Y + δY such that δY is small and (A + δA) Ỹ = O. As the null space and the range of A change simultaneously (being orthogonal complements of each other), the size of δY necessarily depends on a certain condition number of A, and the relevant condition number will depend on the form of the perturbation δA.
The equation that we investigate is e
equivalently, e
RY . If e
R is suciently close to R (to guarantee invertibility of e
RR
RR
RR
Though simple, this equation is instructive. First of all, only the components of the columns of δR that lie in the null space N(A) affect the value of δY. Also, Y + δY keeps the full column rank of Y. Finally, the relevant quantity is ||δY||/σ_min(Y), and it is easy to modify Y such that σ_min(Y) >= 1, e.g. by orthonormalizing its columns. Thus, ||δY|| measures the angle between the true null space and the null space of the perturbed matrix Ã. In the sequel we try to bound ||δY||.
If we rewrite (4.7) as
we get, after some manipulations,
Proposition 4.2. Let D be nonsingular matrix and let
If kR 0 (R
Here, y denotes the orthogonal projection onto the
null space of A.
We will discuss choices for D later. The Proposition indicates that the crucial
quantity for bounding kY k is kR 0 Y k. The following two examples detail this fact.
Example 4.1. Let be big, of the order of 1=u, and let
pp
The null space of A is spanned by
which means that deleting
any row and column of A leaves a nonsingular 22 matrix. Let's choose it be the last
one, and let us follow the algorithm. For the sake of simplicity, let the only error be
committed in the computation of the (1; 1) entry of e
R 11 which is
instead of
3. Then we solve the lower triangular system for e
R 12 and obtain
e
Thus,
If we take perform the computation in Matlab where u 2:22 10
O() such that
the angle between Y and Y is small.
Example 4.2. We alter the (1; 1) entry
3 of R of the previous example to get ,
Again, we delete the last row and column of A and
proceed as in Example 4.1. Let us again assume that the only error occurs in the
entry of R 11 which becomes =(1
e
and
Again, O(1). But now also kY In fact, in computations with
Matlab, we observe an angle as large as O(10 2 ) between Y and Y .
Remark 4.2. Interestingly, for a suitable choice of the large parameter in Example 4.1, the Matlab function chol() does not detect the singularity and returns a complete Cholesky factor R̃ (its entries are omitted here). It is clear that the computed and stored A is a perturbation of the true A. Therefore, numerically, it can be positive definite. It is thus quite possible to know the rank r < n of A exactly, to have a basis of the null space of A, and at the same time to have a stored floating-point A that is numerically positive definite. Strictly speaking, this is a contradiction. Certainly, from an application or numerical point of view, it is advisable to be very careful when dealing with semidefiniteness.
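The phenomenon described in this remark is easy to reproduce. The following small experiment (hypothetical data; NumPy instead of Matlab) builds a matrix that is exactly singular in real arithmetic and checks whether its stored floating-point version passes a Cholesky factorization:

import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 5))              # 3 x 5, rank 3
A = F.T @ F                                  # 5 x 5, exact rank 3
try:
    np.linalg.cholesky(A)
    print("chol succeeded: the stored A behaves as numerically positive definite")
except np.linalg.LinAlgError:
    print("chol failed: the rank deficiency was detected")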
In Examples 4.1 and 4.2 we excluded the largest diagonal entry of A. In fact, we can give an estimate that relates the error in R_12 to the size of the deleted entries. Suppose we managed to make the deleted diagonal entries of A the smallest ones. Can we then guarantee that the relevant error in R will be small, and can we check the stability by a simple, inexpensive test?
According to Theorem 4.1, the matrix R_11 is computed with row-wise small relative error, provided that the Cholesky factorization of A_11 is computed with pivoting. If that is the case, then it remains to estimate the row-wise perturbations of R_12. If
is as in Theorem 4.1, then the inequality
holds for all
with some i 2 (0; =2]. The angle i has a nice interpretation. Let A = F T F be any
factorization of A, with full column rank and F T
Then i is the angle between F 1 e i and the span of fF 1 e g. (This is easily
seen from the QR factorization of F 1 .)
The following proposition states that a well-conditioned A_11 and dominance of A_11 over A_22 ensure accurate rows of the computed matrix R̃.
Proposition 4.3. With the notation of Theorem 4.1, let A (and accordingly Y )
be arranged such that
If the Cholesky factorization of A 11 is computed with (standard) pivoting, then
sin i
where sin i is dened in (4.10).
Proof. This follows from relations (4.6), (4.9), (4.10) and the assumption (4.11).
We only note that in (4.9) and (4.10) we can replace
Remark 4.3. If spans N
partition of A s satises condition (4.11). If we apply the preceeding analysis to A s
and SY , we get an estimate for Y in the elliptic norm generated by S.
Note that Proposition 4.2 is true for any diagonal D as long as k(R 0 moderately
big and kR 0 k is small. We have just seen that R 0 is nicely bounded if we
choose
has an inverse nicely bounded
independent of A 11 because [11, x10]
Here the function h(r) is in the worst case dominated by 2 r and in practice one usually
observes an O(r) behaviour. In any case, k(D 1 R 11 is at most r times larger than
sophisticated pivoting can make sure that the behaviour of h(r)
is not worse than Wilkinson's pivot growth factor. We skip the details for the sake of
brevity.
To conclude, if the Cholesky factorization of A 11 is computed with pivoting and
relation (4.11) holds, then the backward error in Y can be estimated using (4.8)
and (4.12), where
4.3. Computation with implicit A. We consider now the backward stability of the computation with A given implicitly as A = F^T F, where F has rank r. Thus, the Cholesky factorization of A is accomplished by computing the QR factorization of F.
In the numerical analysis of the QR factorization we use the standard, well-known backward error analysis which can be found e.g. in [11, §18]. The simplest form of this analysis states that the backward error in the QR factorization is column-wise small. For instance, if we compute the Householder (or Givens) QR factorization of F in floating-point arithmetic with roundoff u, then the backward error δF satisfies a column-wise bound whose factor is a polynomial of moderate degree in the matrix dimensions.
Our algorithm follows the same ideas as in the direct computation of R from A. The knowledge of a null space basis means that we can assume that F is in the form F = [F_1, F_2], where the p x r matrix F_1 has rank r, see section 3. We then apply r Householder reflections to F which yields, in exact arithmetic, the matrix
    [R_11, R_12; O, R_22]   with R_22 = O,
where R_11 in R^{r x r} is upper triangular and nonsingular. If Q = [Q_1, Q_2] is partitioned conforming with F, then F_1 = Q_1 R_11 is the QR factorization of F_1.
In floating-point computation, R_22 is unlikely to be zero. Our algorithm simply sets to zero whatever is computed as approximation of R_22. As we shall see, the backward error (in F) of this procedure depends on a certain condition number of the block F_1.
Theorem 4.4. Let F in R^{p x n} have rank r and be partitioned in the form F = [F_1, F_2], where F_1 in R^{p x r} has numerically well determined full rank r. More specifically, if F_1^c is obtained from F_1 by scaling its columns to have unit Euclidean norm, then we assume that σ_min(F_1^c) is well above the roundoff level.
Let the QR factorization of F be computed as described above, and let R̃ = [R̃_11, R̃_12] be the computed upper trapezoidal factor. Then there exist a backward perturbation δF and an orthogonal matrix Q̂ such that F + δF = Q̂ [R̃; O] is the QR factorization of F + δF. The matrix F + δF has rank r.
are partitioned as F , and Q 1 := b
then
c k;
e
R 11 R
bounds the roundo.
Proof. Let e
F (r) be the matrix obtained after r steps of the Householder QR
factorization. Then there exist an orthogonal matrix b
Q and a backward perturbation
F such that
e
R 11
e
R 12
O e
R 22
Our assumption on the numerical rank of F 1 implies that F 1
e
R 11 is the
QR factorization with nonsingular e
R 11 . Now, setting e
R 22 to zero is, in the backward
error sense, equivalent to the QR factorization of a rank r matrix,
R 11
e
R 12
O O
O O
O e
R 22
It remains to estimate b
First note that F
the i-th column of R 12 has the same norm as the corresponding column of F 2 . Then,
and we can write
To estimate Q 1 , we rst note that F
e
R 11 imply that
(R 11
e
and that
R
R
R
Thus, e
R
11 is the Cholesky factor of I +E, where
Now, by [7], kEkF < 1=2 implies that e
R
I +, where is upper triangular
and
<p:
Hence, R 11
e
Finally note that e
R 11 R
We remark that
e
which means that we can nicely bound R 12
R 12 R 12 . We have, for instance,
If we use entry-wise backward analysis of the QR factorization (jF
then we can also write
where the matrix absoulute values and inequalities are understood entry-wise, and " 2
is dened similarly as " 1 .
From the above analysis we see that the error in the computed matrix e
R is
bounded in the same way as in Theorem 4.1. Also, the QR factorization can be
computed with the standard column pivoting and R 11 can have additional structure
just as in the Cholesky factorization of A 11 . Therefore, the analysis of the backward
null space perturbation based on e
R T holds in this case as well. However, the bounds
of Theorem 4.4 are sharper than those of Theorem 4.1.
5. Constrained systems of equations. Let again A = R^T R with N(A) = R(Y), Y in R^{n x m} having full rank. Let C in R^{n x m} be a matrix with full rank. Systems of equations of the form
    [A, C; C^T, O] [x; y] = [b; c]                                (5.1)
appear on many occasions, e.g. in mixed finite element methods [3] or constrained optimization [12]. They have a solution for every right side if R(A) + R(C) = R^n, which is the case if H := Y^T C is nonsingular. In computations of Stokes [3] or Maxwell equations [1] the second equation in (5.1) with c = 0 imposes a divergence-free condition on the flow or electric field, respectively.
To obtain a solution of (5.1) we first construct a particular solution of the first block row. Pre-multiplying it by Y^T yields H y = Y^T b. As b - C y lies in R(A) we can proceed as in section 3 to obtain a vector x̃ with A x̃ = b - C y. The solution x of (5.1) is obtained by setting x = x̃ + Y a and determining a such that C^T x = c. Thus,
    a = H^{-T} (c - C^T x̃).
This procedure can be described in an elegant way if a congruence transformation as in (6.2) is applied. Multiplying (5.1) by W^T ⊕ I_m, cf. (2.2), and writing x = W [x_1; a] yields
    [A_11, O, C_1; O, O, H; C_1^T, H^T, O] [x_1; a; y] = [b_1; Y^T b; c],            (5.2)
where C = [C_1; C_2] is partitioned conforming with x. Notice that x = [x_1; 0] + Y a. From (5.2) we read that
    H y = Y^T b,    A_11 x_1 = b_1 - C_1 y,    H^T a = c - C_1^T x_1.                (5.3)
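A direct transcription of this geometric procedure reads as follows (a sketch, assuming A Y = O, a nonsingular H = Y^T C, and the ordering of Theorem 2.2 so that A_11 is positive definite; the names are illustrative, not from the paper):

import numpy as np

def solve_constrained(A, C, Y, b, c):
    n, m = Y.shape
    r = n - m
    H = Y.T @ C
    y = np.linalg.solve(H, Y.T @ b)               # second block row of (5.2)
    rhs = b - C @ y                               # lies in range(A)
    xt = np.zeros(n)
    xt[:r] = np.linalg.solve(A[:r, :r], rhs[:r])  # particular solution of A x~ = b - C y
    a = np.linalg.solve(H.T, c - C.T @ xt)        # uses C^T Y = H^T
    return xt + Y @ a, y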
This geometric approach differs from the algebraic one based on the factorization
A 11 A 12 C 1
A T
R T
O O
R T
O I m54
R
O O C
O C T
where the LU factorization of a reduced matrix involving C is employed to solve (5.1). In the geometric approach the LU factorization of H is used instead. Of course, there is a close connection between the two approaches, which follows from (3.4). Notice that the columns of C or Y can be scaled such that the condition numbers of H or of the reduced matrix are not too big. Notice also that Y can be chosen with orthonormal columns. A perturbation analysis of (5.1)-(5.3) remains to be done in our future work.
Golub and Greif [9] use the algebraic approach to solve systems of the form (5.1) if the positive semidefinite A has a low-dimensional null space. As they do not have available a basis for the null space they apply a trial-and-error strategy for finding a permutation of A such that the leading r x r principal submatrix becomes nonsingular. They report that usually the first trial is successful. This is intelligible when the basis of the null space is dense, which is often the case.
If the null space of A is high-dimensional then Golub and Greif use an augmented Lagrangian approach. They modify (5.1) such that the (1,1) block becomes positive definite, replacing A by A + C W C^T for some symmetric positive definite matrix W, e.g. a multiple of the identity. A + C W C^T is positive definite if Y^T C is nonsingular. The determination of a good W is difficult. Golub and Greif thoroughly discuss how to choose W and how the penalty term affects the condition of the problem. In contrast to this approach, where a term that is positive definite on the null space of A is added to A, N(A) can be avoided right away if a basis of it is known.
6. Eigenvalue problems. Let us consider the eigenvalue problem
    A x = λ M x,                                                  (6.1)
where A is symmetric positive semidefinite with N(A) = R(Y) and M is symmetric positive definite. We assume that the last m rows of Y are linearly independent such that W in (2.1) is nonsingular. Then,
    W^T A W = [A_11, O; O, O],    W^T M W =: M̃ = [M_11, M_12; M_21, M_22],          (6.2)
where M̃ is again positive definite. Using the decomposition
    M̃ = [I, M_12 M_22^{-1}; O, I] [S, O; O, M_22] [I, O; M_22^{-1} M_21, I]          (6.3)
with the Schur complement S := M_11 - M_12 M_22^{-1} M_21, it is easy to see that the positive eigenvalues of (6.1) are the eigenvalues of
    A_11 y = λ S y.                                               (6.4)
Notice that S is dense, in general, whence, in sparse matrix computations, it should not be formed explicitly.
If y is an eigenvector of (6.4) then
    x = W [y; -M_22^{-1} M_21 y]                                  (6.5)
is an eigenvector of (6.1). By construction, x is M-orthogonal to the null space of A.
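For dense problems of moderate size the reduction from (6.1) to (6.4) can be carried out explicitly as below (a sketch; it forms the Schur complement S, which the text advises against in the sparse case, and assumes the last m rows of Y are linearly independent):

import numpy as np
from scipy.linalg import eigh

def positive_eigenpairs(A, M, Y):
    n, m = Y.shape
    r = n - m
    W = np.block([[np.eye(r), Y[:r, :]],
                  [np.zeros((m, r)), Y[r:, :]]])
    Mt = W.T @ M @ W                                           # cf. (6.2)
    S = Mt[:r, :r] - Mt[:r, r:] @ np.linalg.solve(Mt[r:, r:], Mt[r:, :r])
    lam, Z = eigh(A[:r, :r], S)                                # A_11 z = lambda S z, cf. (6.4)
    Z2 = -np.linalg.solve(Mt[r:, r:], Mt[r:, :r] @ Z)
    X = W @ np.vstack([Z, Z2])                                 # eigenvectors of (6.1), cf. (6.5)
    return lam, X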
We now consider the situation when A and M are given in factored form, A = F^T F and M = B^T B, such that the rank of F_1 equals the rank of A. Let us find an implicit formulation of the reduced problem (6.4). With W from (2.1) we have F W = [F_1, O]. As before, A_11 = R_11^T R_11 is computed by the QR factorization of F_1. It remains to compute a Cholesky factor of the Schur complement S, but directly from the matrix B. To that end we employ the QL factorization ('backward' QR factorization) of B W,
    B W = Q_L [L_11, O; L_21, L_22],                              (6.6)
whence, with (6.3), M̃ = (B W)^T (B W). Straightforward calculation now reveals that
    S = L_11^T L_11    and    M_22^{-1} M_21 = L_22^{-1} L_21.    (6.7)
Thus, the eigenvalues of the matrix pencil (A_11, S) are the squares of the generalized singular values [10] of the matrix pair (R_11, L_11) or, equivalently, the squares of the singular values of R_11 L_11^{-1}. An eigenvector y corresponds to a right singular vector. The blocks L_21 and L_22 come into play when the eigenvectors of (6.1) are to be computed: using (6.7), equation (6.5) becomes
    x = W [y; -L_22^{-1} L_21 y].
It is known that the GSVD of (R_11, L_11) can be computed with high relative accuracy if the matrices (R_11)_c and (L_11)_c are well conditioned [6]. Here, (R_11)_c and (L_11)_c are obtained from R_11 and L_11, respectively, by scaling their columns to make them of unit length. Obviously, κ_2((R_11)_c) = κ_2((F_1)_c) is the relevant condition number for R_11. It remains to determine κ_2((L_11)_c). From (6.6) we get
whence
. Let the diagonal matrix D 1 be such that (B 1 1has columns of unit length. Further, let (B 1 be the QR factorization of
As Q 1 is orthogonal we have k(L 11
where is the largest principal angle [10] between
r minD=diagonal 2 (L 11 D) [13] [11, Thm.7.5], we have
So, we have identied condition numbers that do not depend on column scalings and
that have a nice geometric interpretation. If the perturbations are column-wise small,
then these condition number are the relevant ones.
7. Concluding remarks. In this paper we have investigated ways to exploit the knowledge of an explicit basis of the null space of a symmetric positive semidefinite matrix.
We have considered consistent systems of equations, constrained systems of equations, and generalized eigenvalue problems. First of all, the knowledge of a basis of the null space of a matrix A permits us to extract a priori a maximal positive definite submatrix. The rest of the matrix is redundant information and is needed neither for the solution of systems of equations nor for the eigenvalue computation. The order of the problem is reduced by the dimension of the null space. In iterative solvers it is not necessary to complement preconditioners with projections onto the complement of the null space.
Our error analysis shows that a backward stable positive semidefinite Cholesky factorization exists if the principal r x r submatrix A_11 is numerically positive definite. This does, however, not mean that the computed Cholesky factor R̃ has a null space that is close to the known null space of R, A = R^T R. We observed that the backward error in the null space is small if the error in the Cholesky factor is (almost) orthogonal to the null space of A. We show that this is the case if the positive definite principal r x r submatrix after scaling is well conditioned and if its diagonal elements dominate those of the remaining diagonal block.
For systems of equations and eigenvalue problems, we considered the case when the factor F of A = F^T F is rectangular. This leads to interesting variants of the original algorithms and, most of all, leads to more accurate results.
What remains to be investigated is the relation between the extraction of a positive definite submatrix and fill-in during the Cholesky factorization. In future work we will use the new techniques in applications and, if possible, extend the theory to matrix classes more general than positive semidefinite ones.
--R
A comparison of solvers for large eigenvalue problems originating from Maxwell's equations
Mixed and Hybrid Finite Element Methods
The Finite Element Method for Elliptic Problems
On floating point errors in Cholesky
Computer Solution of Large Sparse Positive Definite Systems
Techniques for solving general KKT systems
Matrix Computations
Accuracy and Stability of Numerical Algorithms
Condition numbers and equilibration of matrices
--TR | positive semidefinite matrices;null space basis;cholesky factorization |
587787 | Joint Approximate Diagonalization of Positive Definite Hermitian Matrices. | This paper provides an iterative algorithm to jointly approximately diagonalize K Hermitian positive definite matrices ${\bf\Gamma}_1, \dots, {\bf\Gamma}_K$. Specifically, it calculates the matrix B which minimizes the criterion $\sum_{k=1}^K n_k [\log \det \mathrm{diag}(B C_k B^*) - \log\det(B C_k B^*)]$, the $n_k$ being positive numbers; this criterion is a measure of the deviation from diagonality of the matrices $B C_k B^*$. The convergence of the algorithm is discussed and some numerical experiments are performed showing the good performance of the algorithm. | Introduction
The problem of jointly diagonalizing, approximately, several positive definite matrices has arisen in (at least) two different contexts. The first one is the statistical problem of common principal components in k groups introduced by Flury (1984). He considers K populations of multivariate observations of sizes n_1, ..., n_K obeying the Gaussian distribution with zero means and covariance matrices Γ_1, ..., Γ_K, and assumes that Γ_k can be written as B Λ_k B' for some orthogonal matrix B and some diagonal matrices Λ_k (the symbol ' denotes the transpose). The problem is to estimate the matrix B (the columns of which are the common principal components) from the sample covariance matrices C_1, ..., C_K of the populations. As it is well known that n_k C_k, k = 1, ..., K, are distributed independently according to the Wishart distribution with n_k degrees of freedom and covariance matrices Γ_k (see for example Seber, 1984), the log likelihood function of Γ_1, ..., Γ_K based on them is
    C - (1/2) Σ_k n_k [log det Γ_k + tr(Γ_k^{-1} C_k)],
where C is a constant term and tr denotes the trace. Therefore the log likelihood method for estimating B and the Λ_k amounts to minimizing
    Σ_k n_k [log det Λ_k + tr(Λ_k^{-1} B' C_k B)].
For fixed B, it is not hard to see that the above expression is minimized with respect to the Λ_k when Λ_k = diag(B' C_k B), the notation diag(M) denoting the diagonal matrix with the same diagonal as M. Thus, one is led to the minimization (with respect to B) of
    Σ_k n_k log det diag(B' C_k B),
which is the same as that of
    Σ_k n_k [log det diag(B' C_k B) - log det(B' C_k B)]          (1.1)
since B has unit determinant. But (1.1) is precisely a measure of the global deviation of the matrices B' C_k B from diagonality since, from the Hadamard inequality (Noble and Daniel, 1977, exercise 11.51), det M <= det diag(M) for positive semidefinite M, with equality if and only if M is diagonal. Thus minimizing (1.1) can be viewed as trying to find a matrix B which jointly diagonalizes the matrices C_1, ..., C_K as much as it can.
More recently, several authors (Cardoso and Souloumiac, 1993; Belouchrani et al., 1997; Pham and Garat, 1997) have introduced joint approximate diagonalization as a method for the source separation problem. In this problem, there are K sensors which each record a linear mixture of K sources, so that denoting by X(t) and S(t) the vectors of measurements and of sources at time t, one has X(t) = A S(t) for some invertible mixing matrix A. The goal is to extract the sources from the observations, and in so-called blind separation one does not have any specific knowledge about the sources other than that they are statistically independent. Thus a sensible method is to try to find a matrix B such that the components of B X(t) (which represent the reconstructed sources) are as independent as possible. As it is easier to work with non-correlation rather than independence, a simple method would be to try to make the cross-correlations, possibly lagged, between the reconstructed sources vanish. This would lead to the joint approximate diagonalization of a certain set of covariance matrices, as proposed in Belouchrani et al. (1997). Note that Pham and Garat (1997) also consider joint diagonalization, but they use only two matrices, and then the diagonalization can be exact (see for ex. Golub and Van Loan, 1989). Cardoso and Souloumiac (1993), on the other hand, do not consider lagged covariances but use higher order cumulants of the sources instead. They construct a certain set of matrices in which such cumulants appear as off-diagonal elements. The separation of sources is then solved through a joint approximate diagonalization of these matrices.
It should be pointed out that the above authors use a different measure of deviation from diagonality than that of Flury. Their measure is simply the sum of squares of the off-diagonal elements of the considered matrices. But there is a common feature in all the above works in that the diagonalizing matrix B is taken to be orthogonal. In this work we shall drop this restriction. The orthogonality condition is part of the assumption of Flury (1984), but there is no clear reason that it should be satisfied. This condition is justified in the works of Cardoso and Souloumiac (1993) and Belouchrani et al. (1997), since these authors have pre-normalized their observations to be uncorrelated and have unit variance. We want to avoid this pre-normalizing stage in the source separation procedure, since the statistical error committed in this stage cannot be corrected in the following "effective separation" stage and can adversely affect the performance. By dropping the orthogonality restriction, we obtain a single-stage separation procedure which is simpler and can perform better. Note that without the orthogonality restriction, exact joint diagonalization is possible for two matrices (see for ex. Golub and Van Loan, 1989). But for more than two one can only achieve approximate joint diagonalization, relative to some measure of deviation from diagonality. We take this measure to be (1.1) for the two following reasons. Firstly, it can be traced back to the likelihood criterion, widely used in statistics. Secondly, this criterion is invariant with respect to scale changes: it remains the same if the matrices to be diagonalized are pre- and post-multiplied by a diagonal matrix. The other measure, which consists in taking the sum of squares of the off-diagonal elements of the matrices, does not have this nice invariance property. Of course, one can introduce this property by first normalizing the matrices so that they have unit diagonal elements, but then the resulting criterion would be very hard to manipulate.
The main result of this paper is the derivation of an algorithm to perform the joint approximate diagonalization in the sense of the criterion (1.1), without the restriction that the diagonalizing matrix be orthogonal. Our algorithm has some similarity with that of Cardoso and Souloumiac (1993) and even more with that of Flury and Gautschi (1986): it also operates through successive transformations on pairs of rows and columns of the matrices to be diagonalized. However, the convergence proof is completely different since we can no longer rely on the orthogonality property. Incidentally, our method of proof can be easily adapted to prove the convergence result of Flury and Gautschi (1986) in a much simpler way.
2 The algorithm
As one frequently encounters complex data in signal processing applications, we shall consider complex Hermitian (instead of real symmetric) positive definite matrices C_1, ..., C_K. (Note that Cardoso and Souloumiac (1993) and Belouchrani et al. (1997) also work in a complex setting.) The goal is to find a complex matrix B such that the matrices B C_k B^*, k = 1, ..., K, are as close to diagonal as possible, the notation ^* now denoting the conjugate transpose. The measure of deviation from diagonality is taken to be (1.1), where the n_k are positive weights (they need not be integers). Note that since the C_k do not depend on B, the minimization of criterion (1.1) can be reduced to that of
    Σ_k n_k log det diag(B C_k B^*) - 2 (Σ_k n_k) log |det B|.
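For reference, the criterion itself is straightforward to evaluate (a small Python/NumPy sketch; the function name is ours):

import numpy as np

def deviation_from_diagonality(B, Cs, ns):
    # criterion (1.1): sum_k n_k [log det diag(B C_k B^*) - log det(B C_k B^*)];
    # nonnegative by Hadamard's inequality, zero iff every B C_k B^* is diagonal
    total = 0.0
    for C, nk in zip(Cs, ns):
        D = B @ C @ B.conj().T
        total += nk * (np.sum(np.log(np.real(np.diag(D)))) - np.linalg.slogdet(D)[1])
    return total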
The algorithm consists in performing successive transformations, each time on a pair of rows of B, the i-th row B_{i.} and the j-th row B_{j.} say, according to
    [B_{i.}; B_{j.}] <- [a, b; c, d] [B_{i.}; B_{j.}],            (2.1)
and in such a way that the criterion is sufficiently decreased. Whether the decrease is sufficient is a question to which we shall return in the next section. Once this is done, the procedure is repeated with another pair of indices until convergence is achieved.
Denote by u
ij the general element of the matrix the decrease of
the criterion associated with the transformation (2.1) is
du
ii )=(u (k)
ii u (k)
A natural idea is to chose a, b, c, d to maximize the above decrease. However, this
maximization cannot be done analytically. Our idea is to maximize a lower bound of
it instead. Since the logarithm function is convex, for any two sequences of positive
1, one has
Applying this inequality, the above decrease is bounded below by
ii
ii )=u (k)
. Introducing the matrices
ii
=n
this lower bound can be rewritten as
[a b]P
- a
log
[c d]Q
d
The maximization of (2.3) can be done analytically as it will be shown below. Since
(2.3) vanishes at a = clear that its maximum
is non negative and can be zero only if this maximum is also attained a
Thus the decrease of the criterion, associated with the transformation
using the values of a, b, c, d realizing the maximum of (2.3), is positive unless
attains it maximum at a =
Let us return to the maximization of (2.3). For any given a
can always parameterize a; b; c; d as
a b
c d
a
provided that the last matrix is non singular. Put
a
a
a
a
one can express (2.3) as
log ja 0 d
The first term of the above right hand side is no other than (2.3) evaluated at the
point (a Therefore, a necessary and sufficient condition that this point
realizes the maximum of (2.3) is that the last term in the above right side is non
negative, for all -, ffl, j. But for this term can be seen to
be equivalent to n[(-fflp 0
Hence a necessary condition for
this condition to hold is that This is the same that the matrices P and
Q be jointly diagonalized by
a
. Further, for the right hand side
of (2.6) reduces to
Again, for 0, the last term in the above expression can be
seen to be equivalent to jfflj 2 p 0
j. Therefore it is also necessary
that this quadratic form in the variables ffl; - j be non negative. This requirement
is satisfied if and only if p 0
2 . On the other hand, using the inequality
can be seen to be bounded above by
log ja 0 d
But for p 0
2 , the quadratic form j-fflj
(in
the variables -
-ffl; -j) is non negative, entailing that the above expression is bounded
above by n log[ja
choices of -; ffl; j. Thus we have proved
that a necessary and sufficient condition for a to realize the maximum
of (2.3) is that
a
jointly diagonalizes P and Q and that the diagonal terms
2 of the diagonalized matrix satisfy p 0
. (Note that if the last
condition is not satisfied, one simply needs to permute a
In the appendix, the problem of jointly diagonalizing two Hermitian matrices P and Q of size two (not necessarily positive definite) is completely solved. It is shown there that if P and Q are positive definite and not proportional, the solution exists and is unique up to a permutation and a scaling; that is, all solutions can be obtained from a representative one by pre-multiplying it by a diagonal matrix and a permutation matrix. Denoting by p_1, p_2, q_1, q_2 and p, q the diagonal and the upper off-diagonal elements of P and Q and putting
a representative solution is given by
2ff D
and is positive.
We shall prove below that (i) p 2 q 1 - 1 with equality if and only if the
matrices P and Q are proportional and (ii) the p 0
defined by (2.5) with
a are such that p 0
1 has the same sign as
As obviously the above results show that the solution
to the maximization of (2.3) is given by [a b] / [D 2fl], [c d] / [2ff D], the
meaning proportional to. Of course, this result doesn't apply in the case
where the two matrices P and Q are proportional. But in this case, diagonalize one
would diagonalize the other. As it has been proved before, this would be enough to
ensure that (2.3) be maximized. Thus in this case, there exists an infinite number
of solutions (even after eliminating the ambiguity associated with scaling).
Note
The Flury and Gautschi (1986) algorithm operates on a similar principle. However, these authors iterate the transformation (2.1) with a fixed pair (i, j) until convergence and only then change to another pair. We feel that this is less efficient, because by using the same pair the decrease of the criterion tends to be smaller each time, while by changing the pair one can get a big decrease in the first few iterations. Our algorithm is also simpler to program.
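For concreteness, one possible implementation of the sweep structure is sketched below. The pair update follows the widely used real-valued formulation of Pham's method (working with weighted ratios of diagonal entries); it is an illustration of the scheme described in this section, not a verbatim transcription of the formulas above, and the small safeguards against division by zero are ours.

import numpy as np

def joint_diag_sweeps(C_list, weights=None, n_sweeps=50, tol=1e-12):
    C = np.array([np.array(Ck, dtype=float, copy=True) for Ck in C_list])
    K, n, _ = C.shape
    w = np.ones(K) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()
    B = np.eye(n)
    for _ in range(n_sweeps):
        change = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                cii, cjj, cij = C[:, i, i], C[:, j, j], C[:, i, j]
                g_ij = np.sum(w * cij / cii)               # weighted relative off-diagonals
                g_ji = np.sum(w * cij / cjj)
                q_ij = np.sum(w * cii / cjj)
                q_ji = np.sum(w * cjj / cii)
                omega = np.sqrt(q_ij * q_ji)
                t = np.sqrt(q_ij / q_ji)
                t1 = (t * g_ij + g_ji) / (omega + 1.0)
                t2 = (t * g_ij - g_ji) / max(omega - 1.0, 1e-12)
                h12, h21 = t1 + t2, (t1 - t2) / t
                s = 1.0 + np.sqrt(max(1.0 - h12 * h21, 1e-12))
                T = np.array([[1.0, -h12 / s], [-h21 / s, 1.0]])
                B[[i, j], :] = T @ B[[i, j], :]            # update rows i and j of B
                for k in range(K):                         # two-sided update of every C_k
                    C[k][[i, j], :] = T @ C[k][[i, j], :]
                    C[k][:, [i, j]] = C[k][:, [i, j]] @ T.T
                change += abs(h12) + abs(h21)
        if change < tol:
            break
    return B, C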
We now proved the results (i) and (ii) announced above. By formula (2.2),
one has
l
ii
ii
l
ii
ii
ii
ii
l
hi u (k)
ii
ii
ii
ii
It follows that p 2 q 1 ? 1 with equality if and only if u (1)
ii =u (1)
ii =u (K)
jj . In
the last case P and Q are proportional. This proves the result (i).
On the other hand, for a one gets from (2.5)
Hence the product p 0
denoting the real part. The product p 0
2 can be obtained from the above formula
by interchanging Therefore, the difference p 0
The last three terms of the above expression may be regrouped as
D)]:
putting
\Delta, one has
and hence, noting that
fi)=2:
Further
Therefore, combining the above result, one gets
fi)]g:
As
fi is purely imaginary, the last term in the above expression equals
One thus obtains finally p 0
which has the
same sign as
Note
The computation of α, β and γ could be subject to large relative error if the matrices P and Q are nearly proportional. But this does not matter as long as the matrices P and Q are diagonalized with sufficient accuracy so that the criterion is adequately decreased. To this end, note that the solution to the problem remains the same if one replaces Q by R = Q - ρP with arbitrary ρ. One can choose ρ such that when P and Q are nearly proportional, R is almost zero. The numerical calculation of R will then be subject to large relative error, but this is the only main source of error since there will be no near cancellation in subsequent calculations. More precisely, let R̃ be the calculated R; the algorithm would then diagonalize P and R̃, hence P and Q̃ = R̃ + ρP. Since the absolute error R̃ - R is small (it is the relative error which is large), P and Q are still accurately diagonalized. The above argument can be repeated with the roles of P and Q interchanged, that is, one takes R = P - ρQ and jointly diagonalizes Q and R. This alternative would be preferable, from the numerical accuracy point of view, if it leads to a smaller (in absolute value) ρ. A simple rule for choosing ρ is to require that it results in a zero diagonal element of R and is as small as possible. Thus
is defined as is defined as
we want ae to be small, we chose the first possibility if q 1 and the second
otherwise. We shall assume that as it is the case here. (But the case
can be handled in a similar way). Thus the diagonal and upper
off diagonal elements and r of R are
and one jointly diagonalizes P and R in the first case and Q and R in the second
case. Hence in the first case, we compute ff, fi, fl as
and in the second case as
The second right hand sides in the above formula provide more efficient computations
without loss of accuracy. Further, they remain applicable even in if the matrices
P and Q are proportional, by replacing
by an arbitrary non zero number (ff in the first case or fl in the second case can
be taken arbitrary too). Indeed the above computation ensures that in the first
case
this is sufficient to entail that the matrix
P is diagonalized (see the Proof of Proposition A1), and hence so is Q as it is
proportional to P. Similarly, in the second case, the above computation ensures
that Q is diagonalized and hence so is P.
3 Convergence of the algorithm
We have shown in the previous section that our algorithm decreases the criterion at each step, unless (2.5) is maximized at the identity transformation. But from the results of that section, this implies that the matrices P and Q are diagonal. If this occurs for a pair of indices (i, j) then one would skip this pair and continue the algorithm with another pair. But if this occurs for all pairs, then the algorithm stops. Explicitly, it stops when, for every pair (i, j), the corresponding 2 x 2 matrices P and Q are diagonal. One may recognize that this condition is no other than the condition that B be a stationary point of the criterion (1.1). Indeed, consider a small change in B of the
form ffiB (hence the matrix ffi represents a relative change), then the corresponding
change of (1.1) is
log[(u
ri
r
rs -
denoting the general elements of ffi. Expanding this expression with respect to ffi,
up to the first order, one gets
Thus, the vector with components 2n- ij , i 6= j can be viewed as the relative gradient
vector of the criterion (1.1).
We now prove that the decrease in the criterion at each step of the algorithm
is sufficient to ensure the convergence to zero of the above gradient vector. As it
has been proved in previous section, by parameterizing a; b; c; d as in (2.4) with
a being the point realizing the maximum of (2.3), one can express (2.3)
as
are defined by (2.5). Take -; ffl; j such that the
left hand side or (2.4) is the identity matrix. Then (2.3) vanishes. Thus (2:6 0 ) must
also vanish, hence its upper bound derived in section 2 must be positive. Therefore,
noting that p 0
, one has
log ja 0 d
where -; ffl; j; -, are the elements of the inverse of the matrix
a
. The left
hand side of (3.4) is no other than the value of (2.3) at a . Further, the
transformation (2.1) for this step of the algorithm uses precisely these a
Thus the decrease of the criterion associated with this transformation, which is
bounded below by (2.3), must be at least as large as the left hand side of (3.4). But
from the definition of -; ffl; -; j and (2.5), one has
hence, noting that the right hand side of (3.4) can be seen to equal
The last expression can be rewritten as
Therefore, noting that the middle matrix in this quadratic form has eigenvalues 1 \Sigma ae
and 0 - ae - 1, it is bounded below by n[(q 0
This is also
the lower bound of the decrease of our criterion at this step.
Since the criterion is always decreased during our algorithm, it must converge
to a limit. Therefore the decrease of the criterion at each step of the algorithm must
converge to zero, implying that (q 0
tends to zero. Note that
p, q are no other than - ij , - ji defined in (3.1) and 2n- ij are the components of
the relative gradient vector at this step of the algorithm. Still, the above result
proved the convergence to zero of this vector. The difficulty is due
to the lack of normalization. Indeed, our algorithm constructs the transformation
only up to a scaling of its rows, hence a row of B can be arbitrary large or
arbitrary small and this has an effect on the gradient, even when relative gradient
is considered. To avoid this, we shall renormalize the transformation matrices B
after each step of the algorithm. Any reasonable normalization procedure will do
but for simplicity and definiteness, we will consider the normalization which makes
the rows of B having unit norm. Then u
ii will be bounded between the smallest
and the largest eigenvalue of C k . Therefore, letting m and M be the minimum of
the smallest eigenvalues and the maximum of the largest eigenvalues, of C
respectively, one has m - u
for all for all i all k. Note that m ? 0 since the
matrices are all positive definite. Therefore, from (2.2) and (2.5),
[a
mn
a 0
Mn
a 0
and the same inequalities hold for q 0
1 , and similar inequalities, with a 0 , b 0 replaced by
2 . Thus both p 0
2 can be bounded above by M=m and
below by m=M . It follows that the relative gradient vector of the criterion evaluated
at each step of the algorithm, which has components 2n- ij , converges to zero.
The above result shows that if the algorithm converges, then the limit must be a stationary point of the criterion. Further, since the algorithm always decreases the criterion, this point is actually a local minimum, unless the algorithm is started at a stationary point, in which case it stops immediately. Note that the sequence of transformation matrices constructed by the algorithm, being normalized and hence lying on a compact set, admits a convergent subsequence, and this in fact also holds for any of its subsequences. Therefore, if the criterion admits a unique local minimum, the algorithm will converge to it. However, Flury and Gautschi (1986) have shown that in some extreme cases the minimization of the criterion (1.1) under the orthogonality constraint admits more than one local minimum. Therefore, it seems likely that in our problem, where the minimization is without constraint, the uniqueness of the local minimum is also not satisfied in all cases. Nevertheless, if there are only finitely many local minima, one can still expect the algorithm to converge to one of them. Indeed, if this were not so, then since we have proved that the gradient vector converges to 0, the algorithm would have to jump continually from one local minimum to another, a quite implausible behavior. The existence of a finite number of local minima clearly does not hold if the matrices C_1, ..., C_K are all proportional to a single matrix, but this is a very extreme case.
We conclude this section by showing that, near the solution, our algorithm behaves very much like the Newton-Raphson iteration, provided that the matrices can be nearly jointly diagonalized. To derive the Newton-Raphson iteration, one makes a second order Taylor expansion of the criterion around the current point, then minimizes this expansion (instead of the true criterion) to obtain the new point. As we have already computed the change of the criterion corresponding to a change δB of B, resulting in the formula (3.2), we need only expand it up to second order in δ. Since the first order expansion has already been given by (3.3), we need only pursue the expansion up to second order, yielding
r
s
rs =u (k)
ii
r
s
ir =u (k)
ii )!(ffi is -
is =u (k)
Assume that the matrices C can be nearly jointly diagonalized, then near
the solution, the off diagonal term u (k)
rs , r 6= s and u (k)
ir , r 6= i, of the matrices
would be small relative to the diagonal term u (k)
ii . Hence we may neglect,
in the above expression the term of second order in ffi containg the factor u (k)
rs =u (k)
ii ,
r 6= s or u (k)
ir =u (k)
ii , r 6= i. With this approximations, the above expansion for (3.2)
(up to second order in reduces to
The (approximate) Newton-Rhapson algorithm consists in minimizing (3.2')
with respect to ffi, then change B into B being the solution to the above
minimization. Note that (3:2 0 ) can be written as the sum over all (unordered) pairs
The minimization of this expression with respect to [ffi ij
can be easily done,
yielding -
It is worthwhile to note that the diagonal elements of ffi do not appear
in (3:2 0 ) so they can be anything as long as they are small. For convenience, we
take them to be 0. This is further justified by the fact that by dividing the i-th row
of B+ ffiB by 1 one is led to the matrix
0 has zero diagonal
element and (i; diagonal element which is about the same as
Note also that because the ffi ij are small, the matrix B + ffiB can be obtained
through successive transformations of the form (2.1) with a
associated with all distinct pairs of indexes (i; j), i 6= j. Reverting to the notation
defined by (2.2), are no other than p 2 , q 1 , p and -
q. Thus
On the other hand, for small p and
q, fi and D, as defined in (2.7) and (2.8), can both be approximated by p 1
Hence, since one gets that b and c equal approximately 2fi=(fi +D) and
+D). The above results show that the new matrix B resulting from one step
Newton-Rhapson algorithm is about the same as that resulting from a "sweep" of the
algorithm of section 2. constituted by successive steps associated with
all distinct pairs of indexes.) Threfore, our algorithm has about the same quadratic
convergence speed of the Newton-Rhapson iteration, near the solution. But the
Newton-Rhapson method may converge badly, even not at all, if it is started at a
point far from the solution. Our algorithm would have better convergence behavior
in this case since it always decreases the criterion.
4 Some numerical examples
We consider the same example as in Flury and Gautschi (1996). The following
6 \Theta 6 matrices are to be diagonalized
-12.5   27.5   -4.5   -0.5    2.04  -3.72
 -0.5   -4.5   24.5   -9.5   -3.72  -2.04
 -4.5   -0.5   -9.5   24.5    3.72   2.04
 -2.04   2.04  -3.72   3.72  54.76  -4.68
  3.72  -3.72  -2.04   2.04  -4.68  51.24
We start our algorithm with B equal to the identity matrix. The following table reports the values of the criterion after each sweep, that is, after the steps associated with each of the 15 possible pairs of indices.
Criterion: 0.809676, 0.189367, 0.00562301, ...
The last sweep produces a zero value of the criterion up to machine precision; the slightly negative value we obtained comes from rounding errors. Note that since there are only 2 matrices, exact joint diagonalization can be achieved. Actually, after 3 sweeps (sweep 0 corresponds to the initial matrices) the diagonalization is already quite good. We have
50:0000 \Gamma0:0198 \Gamma0:0013 \Gamma0:0001 0:0000 0:0000
\Gamma0:0013 0:0001 29:8099 0:0000 \Gamma0:0000 \Gamma0:0000
\Gamma0:0001 0:0000 0:0000 39:0333 \Gamma0:0000 \Gamma0:0000
0:0000 \Gamma0:0000 \Gamma0:0000 \Gamma0:0000 20:1550 \Gamma0:0000
0:0000 \Gamma0:0000 \Gamma0:0000 \Gamma0:0000 \Gamma0:0000 10:04497 7 7 7 7 5
20:0000 0:0177 0:0052 0:0000 \Gamma0:0000 \Gamma0:0000
0:0177 10:0000 \Gamma0:0003 \Gamma0:0000 0:0000 0:0000
0:0052 \Gamma0:0003 40:0120 \Gamma0:0000 0:0000 0:0000
0:0000 \Gamma0:0000 \Gamma0:0000 30:7912 0:0000 0:0000
\Gamma0:0000 0:0000 0:0000 0:0000 59:1714 0:0000
\Gamma0:0000 0:0000 0:0000 0:0000 0:0000 48:47167 7 7 7 7 5
which corresponds to the transformation matrix
(For definiteness, the rows of B have been normalized to have unit norm.) The fourth
sweep zeros all off diagonal elements of C 1 and C 2 (at least up to 4 digits after the
decimal point) without changing their diagonal elements. The transformation matrix
B is also almost unchanged:
0:5000 0:5000 \Gamma0:5000 \Gamma0:5000 0:0000 \Gamma0:0000
0:5000 0:5000 0:5000 0:5000 \Gamma0:0000 0:0000
0:5527 \Gamma0:5527 0:4206 \Gamma0:4206 0:1688 \Gamma0:0817
0:3977 \Gamma0:3977 \Gamma0:5752 0:5752 \Gamma0:0664 \Gamma0:1324
0:0073 \Gamma0:0073 \Gamma0:0272 0:0272 0:6082 0:79287 7 7 7 7 5
One can see that our algorithm converges quite fast. The Flury and Gautschi (1986) algorithm needs 4 to 5 sweeps to converge. Moreover, it makes several iterations for each pair of indices while we make only one. However, our algorithm does not solve the same problem, since we do not require the transformation matrix to be orthogonal. A simple way to implement the orthogonality constraint, at least approximately, is to add another matrix C_3 which is the identity matrix and give it a large weight n_3 (with n_1 = n_2 = 1). The values of the criterion after each sweep are then given below.
Criterion: 0.809676, 0.226183, 0.0291083, 0.0290463, 0.0290454, 0.0290454
The criterion does not decrease further after 4 sweeps. The change in the transformation matrix produced by the fifth sweep is also very slight, affecting only the last digit and never by more than 2 units. This matrix after sweep 5 is not reproduced here; the corresponding transformed matrices C_1, C_2 are
50:0000 0:0000 0:0000 0:0000 \Gamma0:0000 \Gamma0:0000
0:0000 29:9224 0:0000 \Gamma1:8497 2:2318 0:1111
0:0000 0:0000 60:0000 0:0000 \Gamma0:0000 \Gamma0:0000
0:0000 \Gamma1:8497 0:0000 39:7221 \Gamma0:7727 1:0432
\Gamma0:0000 2:2318 \Gamma0:0000 \Gamma0:7727 20:2390 \Gamma0:0385
\Gamma0:0000 0:1111 \Gamma0:0000 1:0432 \Gamma0:0385 10:02407 7 7 7 7 5
20:0000 \Gamma0:0000 \Gamma0:0000 \Gamma0:0000 \Gamma0:0000 \Gamma0:0000
\Gamma0:0000 40:2097 \Gamma0:0000 \Gamma2:3088 4:4428 1:7605
\Gamma0:0000 \Gamma0:0000 10:0000 \Gamma0:0000 \Gamma0:0000 \Gamma0:0000
\Gamma0:0000 \Gamma2:3088 \Gamma0:0000 31:7746 \Gamma1:2795 7:2126
\Gamma0:0000 4:4428 \Gamma0:0000 \Gamma1:2795 59:3949 0:5032
\Gamma0:0000 1:7605 \Gamma0:0000 7:2126 0:5032 48:34577 7 7 7 7 5
These results are very similar to those of Flury and Gautschi (1986). (Note that our matrix B is the transpose of theirs.) Of course, the orthogonality constraint is not exactly satisfied here. We have
1:0000 \Gamma0:0000 0:0000 \Gamma0:0000 0:0000 0:0000
\Gamma0:0000 1:0000 \Gamma0:0000 0:0119 \Gamma0:0185 \Gamma0:0047
0:0000 \Gamma0:0000 1:0000 \Gamma0:0000 0:0000 0:0000
\Gamma0:0000 0:0119 \Gamma0:0000 1:0000 0:0060 \Gamma0:0253
0:0000 \Gamma0:0185 0:0000 0:0060 1:0000 \Gamma0:0007
0:0000 \Gamma0:0047 0:0000 \Gamma0:0253 \Gamma0:0007 1:00007 7 7 7 7 5
but the difference of this matrix from the identity matrix is slight. We should mention here that our algorithm is not designed to enforce orthogonality; the above numerical results are given only as examples showing its good convergence properties.
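As a usage note, the device employed above, appending the identity matrix with a large weight to encourage a nearly orthogonal B, can be expressed directly with the sweep sketch given after section 2 (hypothetical 3 x 3 data):

import numpy as np

M = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.2, 0.0, 1.0]])
C1 = M @ np.diag([1.0, 2.0, 3.0]) @ M.T
C2 = M @ np.diag([3.0, 1.0, 2.0]) @ M.T
B, D = joint_diag_sweeps([C1, C2, np.eye(3)], weights=[1.0, 1.0, 10.0])
B = B / np.linalg.norm(B, axis=1, keepdims=True)   # normalize rows, as in the text
print(np.round(B @ B.T, 3))                        # close to the identity matrix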
Appendix
Joint diagonalization of two Hermitian matrices of size two
The following results provide the explicit and complete solutions to the problem
of joint diagonalization of two non proportional Hermitian matrices of order two.
(Proportionality should here be understood in the large sense so that a null matrix
is proportional to any other one.) Note that if the matrices are proportional, then
the problem degenerates to the diagonalization of a single matrix.
Proposition
Let P and Q be two non proportional Hermitian matrices of order two,
with diagonal elements diagonal elements
respectively. Then
are not all zero and real and the matrix
a b
c d
diagonalizes
simultaneously P and Q if and only if (i) [a b] or [c d] vanishes or (ii)
[a and proportional to [2ff fi + ffi] or to
[c d] 6= [0 0] and proportional to [2ff
ffi] or to [fi
where ffi is any one of the two square roots of \Delta.
Remark:
proportional to [2ff fi implies that it is
proportional to conversely, since
then [a b] proportional to [2ff fi reduces to it being proportional to [0 1],
which is the same as it being proportional to [fi \Gamma ffi 2fl] only if ffi has been chosen
to be fi and not \Gammafi . Similar conclusion holds if Also in the condition (ii) of
the Proposition, one has to exclude the case [a since then [c d] needs not
be proportional to anything. Similarly, the case [c has to be excluded.
hence [c d] / [a b]. Thus P and Q are not
only be diagonalized but are actually transformed by a singular matrix into the null
matrix.
Proof
Since P and Q are non proportional, the vectors [p 1
are linearly independent. This entails that ff, fi, fl are not all zero.
To prove that real, we expand it as
Consider now the solutions to the joint diagonalization problem. The condition
that the transformed matrices be diagonal can be written as
[a b]P
[a b]Q
d
or
[c d]P
[c d]Q
- a
If we exclude the trivial solutions [a then the above equations
imply that the matrices in their left hand side have zero determinants. Thus
and the same equation holds with a, b replaced by c, d. After expansion, one gets
the equations ffb
The solution [a b] to the equation ffb determined
only up to a multiplicative factor. Thus, if ff 6= 0 then either [a
a 6= 0 and b=a is the root of the quadratic polynomial ffz
is a square root of \Delta. Therefore [a b] is proportional to
Similarly, if fl 6= 0, then [a b] must be proportional to We have
chosen here the minus sign before ffi so that this solution is the same as the previous
one (with the same ffi) in the case where ff 6= 0. Note that these solutions still apply
in the case since then they reduce to that [a b] being proportional to
[0 fi] or [fi 0] which is the same as ab = 0.
Similar calculations apply to the equation ffd solutions
[c d] must be proportional to [2ff is a square root of
(not necessarily the same as ffi).
We have shown that the rows [a b] and [c d] of a matrix jointly diagonalizing
P and Q must have the above form. But we haven't proved the converse. Further,
our results yield two choices for [a b] (modulo a constant multiple), depending on the
choice of the square root ffi of \Delta, and similarly there are two other choices for [c d].
Hence one must determine which choice of the later must be associated with one of
the former. For this purpose, we shall consider, for each choice [a
[a the possible corresponding choices for [c d] of the form [ff fi \Sigma - ffi]
or [fi \Upsilon - ffi 2fl] and see if there is one for which [a b]P[c d]
will be more convenient here to write the two square roots of \Delta in the form \Sigma - ffi.)
Suppose that ff 6= 0, then we need to consider the choice [a
since the other is proportional. For [c ffi], we has
[a b]P[c d]
d
The last expression can be expanded as
Note that p 1 -
hence the first term in the above expression
using the equality -
fi)=2 or 2-ff-p+ -
reduces to
This expression vanishes if the minus sign is used in \Sigma. Hence [a b]Q[c d]
for the choice [c
ffi]. A similar calculation, with q 1 , q 2 , q in place of
shows that the same choice leads to [a b]Q[c d] One needs not
consider further choices. Indeed if there exists another choice [~c ~
d] not proportional
to [2ff fi such that [a b]P[~c ~
since one already
has [a b]P[2ff and the vectors [~c ~
d] and [2ff fi are linearly
independent, one must have [a By a similar argument, one also must
have [a these two equalities [a can be easily
seen to imply that P and Q are proportional, which contradicts our assumption.
The calculations are similar in the case fl 6= 0, interchanging a with b, c with
d, ff with fl, p 1 with
reversing the sign
of ffi. Finally, if the possible choice for [a b] is either proportional to
[0 1] or to [1 0] according to ffi equal plus or minus fi. But it can be easily seen that
this case can happen if and only if either
p-q is real, as P and Q are non proportional. Then, it can be checked that the only
possible corresponding choice for [c d] is, in the first case, proportional to [1 0] or
to [0 1] and in the second case, to [0 1] or to [1 0]. This completes the proof of the
Proposition.
Corollary
With the same notation and assumption of the Proposition and assume
further that at least one of the matrices P and Q has positive determinant. Then
and the matrix
a b
c d
jointly diagonalizes P and Q if only if it equals
denotes the sign function) pre-multiplied by a permutation and a diagonal
matrix.
Proof
We first show that \Delta ? 0 if det P ? 0 or det Q ? 0. We shall prove
the result only for the case det Q ? 0, the proof for the other case is similar. As
the last two terms in the above right hand side
can be written as
Thus
the first condition implies that -. Then the second
condition implies that Thus P is proportional to Q, contradicting our
assumption.
As its roots are real and we denote as usual
\Delta the positive one.
\Delta so that fi since
one obtains from Proposition A1 that
a b
c d
equals
pre-multiplied
by a diagonal matrix. On the other hand, for
\Delta so that
by a similar calculation, the Proposition shows that
a b
c d
equals a
diagonal matrix times
. This yields the result of the corollary.
--R
A blind source separation technique using second-order statistics
Matrix computations.
An algorithm for the simultaneous orthogonal transformation of several positive definite symmetric matrices to nearly orthogonal form.
Common principal components in k groups.
Applied linear Algebra.
Multivariate Observations.
--TR
--CTR
Ale Holobar , Milan Ojsterek , Damjan Zazula, Distributed Jacobi joint diagonalization on clusters of personal computers, International Journal of Parallel Programming, v.34 n.6, p.509-530, December 2006
Y. Moudden , J.-F. Cardoso , J.-L. Starck , J. Delabrouille, Blind component separation in wavelet space: application to CMB analysis, EURASIP Journal on Applied Signal Processing, v.2005 n.1, p.2437-2454, 1 January 2005
Ch. Servire , D. T. Pham, Permutation correction in the frequency domain in blind separation of speech mixtures, EURASIP Journal on Applied Signal Processing, v.2006 n.1, p.177-177, 01 January
Andreas Ziehe , Motoaki Kawanabe , Stefan Harmeling , Klaus-Robert Mller, Blind separation of post-nonlinear mixtures using linearizing transformations and temporal decorrelation, The Journal of Machine Learning Research, v.4 n.7-8, p.1319-1338, October 1 - November 15, 2004
Andreas Ziehe , Motoaki Kawanabe , Stefan Harmeling , Klaus-Robert Mller, Blind separation of post-nonlinear mixtures using linearizing transformations and temporal decorrelation, The Journal of Machine Learning Research, 4, 12/1/2003
Andreas Ziehe , Pavel Laskov , Guido Nolte , Klaus-Robert Mller, A Fast Algorithm for Joint Diagonalization with Non-orthogonal Transformations and its Application to Blind Source Separation, The Journal of Machine Learning Research, 5, p.777-800, 12/1/2004 | principal components;separation of sources;diagonalization |
587790 | Accurate Solution of Weighted Least Squares by Iterative Methods. | We consider the weighted least-squares (WLS) problem with a very ill-conditioned weight matrix. WLS problems arise in many applications including linear programming, electrical networks, boundary value problems, and structures. Because of roundoff errors, standard iterative methods for solving a WLS problem with ill-conditioned weights may not give the correct answer. Indeed, the difference between the true and computed solution (forward error) may be large. We propose an iterative algorithm, called MINRES-L, for solving WLS problems. The MINRES-L method is the application of MINRES, a Krylov-space method due to Paige and Saunders [SIAM J. Numer. Anal., 12 (1975), pp. 617--629], to a certain layered linear system. Using a simplified model of the effects of roundoff error, we prove that MINRES-L ultimately yields answers with small forward error. We present computational experiments for some applications. | Introduction
Consider the weighted least-squares (WLS) problem
In this formula and for
the remainder of this article, k \Delta k indicates the 2-norm. We make the following
assumptions: D is a diagonal positive definite matrix and rank A = n. These
assumptions imply that (1) is a nonsingular linear system with a unique
solution. The normal equations for (1) have the form
A T
Weighted least-squares problems arise in several application domains including
linear programming, electrical power networks, elliptic boundary
value problems and structural analysis, as observed by Strang [21]. This
article focuses on the case when matrix D is severely ill-conditioned. This
happens in certain classes of electrical power networks. In this case, A is
a node-arc adjacency matrix, D is matrix of load conductivities, b is the
vector of voltage sources, and x is the vector of voltages of the nodes. Ill-conditioning
occurs when resistors are out of scale, for instance, when modeling
leakage of current through insulators.
Ill-conditioning also occurs in linear programming when an interior-point
method is used. To compute the Newton step for an interior-point method,
we need to solve a weighted least-squares equation of the form (2). Since
some of the slack variables become zero at the solution, matrix D always becomes
ill-conditioned as the iterations approach the boundary of the feasible
region. In Section 9, we cover this application in more detail. Ill-conditioning
also occurs in finite element methods for certain classes of boundary value
problems, for example, in the heat equilibrium equation r \Delta
thermal conductivity field c varies widely in scale.
An important property of problem (1) or (2) is the norm bound on the
solution, which was obtained independently by Stewart [20], Todd [22] and
several other authors. See [6] for a more complete bibliography. Here we
state this result as in the paper by Stewart.
Theorem 1 Let D denote the set of all positive definite m \Theta m real diagonal
matrices. Let A be an m \Theta n real matrix of rank n. Then there exist constants
-A and -
-A such that for any D 2 D
Note that the matrix appearing in (3) is the solution operator for the normal
equations (2), in other words, (2) can be rewritten as
Since the bounds (3), (4) exist, we can hope that there exist algorithms
for (2) that possess the same property, namely, the forward error bound does
not depend on D. We will call these algorithms stable, where stability, as
defined by Vavasis [23], means that forward error in the computed solution
x satisfies
where ffl is machine precision and f(A) is some function of A not depending
on D. Note that the underlying rationale for this kind of bound is that the
conditioning problems in (1) stem from an ill-conditioned D rather than an
ill-conditioned A.
This stability property is not possessed by standard direct methods such
as QR factorization, Cholesky factorization, symmetric indefinite factoriza-
tion, range-space and null-space methods, nor by standard iterative methods
such as conjugate gradient applied to (2). The only two algorithms in literature
that are proved to have this property are the NSH algorithm by Vavasis
[23] and the complete orthogonal decomposition (COD) algorithm by Hough
and Vavasis [12], both of them direct. See Bj-orck [1] for more information
about algorithms for least-squares problems.
We would like to have stable iterative methods for this problem because
iterative methods can be much more efficient than direct methods for large
sparse problems, which is the common setting in applications.
This article presents an iterative algorithm for WLS problems called
MINRES-L. MINRES-L consists of applying the MINRES algorithm of Paige
and Saunders [14] to a certain layered linear system. We prove that MINRES-
satisfies (5). This proof of the forward error bound for MINRES-L is based
on a simplified model of how roundoff error affects Krylov space methods.
This analysis is then confirmed with computational experiments in Section 8.
(The simplified model itself is described in Section 5.) An analysis of round-off
in MINRES-L starting from first principles is not presented here because
the effect of roundoff on the MINRES iteration is still not fully understood.
MINRES-L imposes the additional assumption on the WLS problem instance
that D is "layered." This assumption is made without loss of generality
(i.e., every weighted least-squares problem can be rewritten in layered
form), but the MINRES-L algorithm is inefficient for problems with many
layers.
This article is organized as follows. In Section 2 we state the layering
assumption, and also the layered least-squares (LLS) problem. In Section 3
we consider previous work. In Section 4 we describe the MINRES-L method
for two-layered WLS problems. In Section 5 we analyze the convergence in
the two-layered case using the simplifying assumptions about roundoff error.
In Section 6 and Section 7 we extend the algorithm and analysis to the case
of p layers. In Section 8 we present some computational experiments in
support of our claims. In Section 9 we consider application of MINRES-L to
interior-point methods for linear programming.
2 The Layering Assumption
Recall that we have already assumed that the weight matrix D appearing
in (1) is diagonal, positive definite and ill-conditioned. For the rest of this
article we impose an additional "layering" assumption: we assume, after
a suitable permutation of the rows of (A; b) and corresponding symmetric
permutation of D, that D has the structure
where each D k is well-conditioned and scaled so that its smallest diagonal
entry is 1, and where denote the maximum
diagonal entry among D . The layering assumption is that - is not
much larger than 1.
Note that this assumption is made without any loss of generality (and we
could assume since we could place each diagonal entry of D in its own
layer. Unfortunately, the complexity of our algorithm grows quadratically
with p. Furthermore, our upper bound on the forward error degrades as p
increases (see (39) below). Thus, a tacit assumption is that the number of
layers p is not too large.
From now on, we write A in partitioned form as
A pC C A
to correspond with the partitioning of D. We partition
similarly.
Under this assumption, we say that (1) is a "layered WLS" problem. In
the context of electrical networks, this assumption means that there are several
distinct classes of wires in the circuit, where the resistance of wires in
class l is of order 1=ffi l . For instance, one class of wires might be transmission
lines, whereas the other class might consist of broken wires (open lines)
where the resistance is much higher. In the context of the heat equilibrium
equation, the layering assumption means that the object under consideration
is composed of a small number of different materials. Within each material
the conductivity ffi l is constant, but the different materials have very different
conductivities. In linear programming, taking means that the some
of the slack variables at the current interior-point iterate are "small" while
others are "large."
A limiting case of layered WLS occurs when the gaps between the ffi l 's
tend to infinity, that is, ffi 1 is infinitely larger than ffi 2 and so on. As the
weight gaps tend to infinity, the solution to (1) tends to the solution of the
following problem, which we refer to as layered least squares (LLS). Construct
a sequence of nested affine subspaces L 0 oe L 1 oe \Delta \Delta \Delta oe L p of R n . These spaces
are defined recursively: L
fminimizers of kD 1=2
l
Finally, x, the solution to the LLS problem, is the unique element in L p . The
layered least-squares problem was first introduced by Vavasis and Ye [25] as
a technique for accelerating the convergence of interior-point methods. They
also established the result mentioned above in this paragraph: the solution
to the WLS problem in the limit as ffi l+1 =ffi l ! 0 for all l converges to the
solution of the LLS problem.
Combining this result with Theorem 1 yields the following corollary, also
proved by Vavasis and Ye.
Corollary 1 Let x be the solution to the LLS problem posed with matrix A
and right-hand side vector b. Then kxk -Akbk and kAxk -
-Akbk for
any choice of diagonal positive definite weight matrices D
3 Previous Work
The standard iterative method for least-squares problems, including WLS
problems, is conjugate gradient (see Golub and Van Loan [7] or Saad [18])
applied to the normal equations (2). This algorithm is commonly referred to
as CGNR, which is how we will denote it here. There are several variants of
CGNR in the literature; see, e.g., Bj-orck, Elfving, and Strako-s [2]. Note that
in most variants one does not form the triple product A T DA when applying
CG to (2); instead, one forms matrix-vector products involving matrices A T ,
D and A. This trick can result in a substantial savings in the running time
since A T DA could be much denser than A alone. The same trick is applicable
to our MINRES-L method and was used in our computational experiments.
The difficulty with CGNR is that an inaccurate solution can be returned
because A T DA can be ill-conditioned when D is ill-conditioned. To understand
the difficulty, consider the two-layered WLS problem, which is obtained
by subtituting (6) in the case
Observe that if sequence
A T Db;
constructed by CGNR is very close to
In other words, information about A 2 , D 2 and b 2 is lost when forming the
Krylov sequence. A different framework for interpreting this difficulty is
described in Section 5.
Another iterative method for least-squares problems is LSQR due to Paige
and Saunders [15]. This method shares the same difficulty with CGNR because
it works in the same Krylov space.
A standard technique for handling ill-conditioning in conjugate gradient
is reorthogonalization; see, for example, Paige [16] and Parlett and
Scott [17]. Reorthogonalization, however, cannot solve the difficulty with
ill-conditioning in (2) because even the act of forming the first Krylov vector
A T Db causes a loss of information.
Another technique for addressing ill-conditioned linear systems with iterative
methods is called "regularization"; a typical regularization technique
modifies the ill-conditioned system with additional terms. See Hanke [10].
Regularization does not appear to be a good approach for solving (1) because
(1) already has a well-defined solution (in particular, Theorem 1 implies that
solutions are not highly sensitive to perturbation of the data vector b). A
regularization technique would compute a completely different solution.
In our own previous work [3], we proposed an iterative method for (2)
based on "correcting" the standard CGNR search directions. We have since
dropped that approach because we found a case that seemingly could not be
handled or detected by that algorithm.
4 MINRES-L for Two Layers
In this section and the next we consider the two-layered case, that is,
in (6). We consider the two-layered case separately from the p-layered case
because the two-layered case contains all the main ideas of the general case
but is easier to write down and analyze. (In the our algorithm
reduces to MINRES applied to (2) and hence is not novel.) Furthermore, the
case is expected to occur commonly in practice. We mention also that
the two-layered WLS and LLS problems were considered in x22 of Lawson
and Hanson [13].
As noted in the preceding section, the two-layered WLS problem is written
in the form (7), in which the diagonal entries of D 1 ; D 2 on the order of 1 and
us introduce a new variable v such that
A T
Note that this equation always has a solution v because the right-hand side
is in the range of A T
1 . Multiplying (8) by ffi 2 and adding to (7) yields
A T
Putting (8) and (9) together, we get
A T
!/
x
A T
Our algorithm, which we call MINRES-L (for MINRES "layered"), is the
application of the MINRES iteration due to Paige and Saunders [14] to (10).
Note that (10) is a symmetric linear system.
In general, this linear system is rank deficient because if (x; v) is a solution
solution. Thus, (10) is
rank deficient whenever the rank of A 1 is less than n. This means we must
address existence and uniqueness of a solution. Existence follows because
the original WLS problem (7) is guaranteed to have a solution. Uniqueness
of x is established as follows: if we add times the first row of (10) to ffi 1
times the second row, we recover the original WLS problem (7). Since (7)
has a unique solution, (10) must uniquely determine x. Since x is uniquely
determined, so is A 1 v.
The question arises whether MINRES (in exact arithmetic) will find a solution
of (10). MINRES can find a solution only if it lies in the Krylov space,
which (because of rank deficiency) is not necessarily full dimensional. This
question was answered affirmatively by Theorem 2.4 of Brown and Walker [4].
(Their analysis concerns GMRES, but the same result applies to MINRES in
exact arithmetic.) Furthermore, their result states that, assuming the initial
guess is 0, the computed solution (x; v) will have minimum norm over all
possible solutions. Since x is uniquely determined, their result implies that
will have minimum norm.
Recall from Section 3 that the problem with applying conjugate gradient
directly to (7) is that the linear system may be ill-conditioned when
and hence conjugate gradient may return an inaccurate answer. Thus, it may
seem paradoxical that we remedy a problem caused by ill-conditioning with
an iterative method based on a truly rank-deficient system. One explanation
of this paradox concerns the limiting behavior as 1. In this case,
(7) tends to the linear system A T
This system will, in
general, not have a unique solution (because A 1 is not assumed to have rank
n), so CGNR will compute some solution that may have nothing to do with
. Thus, the CGNR solution is not expected to have the forward
accuracy that we demand.
On the other hand, as we see that (10) tends to
A T
!/
x
A T
This system is easily seen to be the Lagrange multiplier conditions for the
two-layered LLS problem: recall from Section 2 that the two-layered LLS
problem is
minimize kD 1=2
subject to A T
This is the correct limiting behavior: the WLS solution tends to the LLS
solution as explanation of MINRES-L's convergence
behavior follows.
Convergence Analysis for Two Layers
In this section we consider convergence of MINRES-L in the presence of
roundoff error for the case 2. As mentioned in the introduction, we make
a simplifying assumption concerning the effect of roundoff error in Krylov
space methods. The assumption concerns either CG or MINRES applied to
the symmetric linear system In our use of these algorithms, there
is no preconditioner, and the initial guess is x Further, in our use of
MINRES, c lies in the range-space of M (i.e., the system is consistent). In
our use of CG, M is positive definite. With these restrictions in mind, our
assumption about the effect of roundoff is that after a sufficient number of
iterations, either method will compute an iterate -
x satisfying
where C is a modest constant, ffl is machine epsilon, and x is the true solution.
(If multiple solutions exist, we take x to be the minimum-norm solution.)
As far as we know, this bound has not been rigorously proved, but it is
related to a bound proved by Greenbaum [9] in the case of conjugate gradient.
In particular, Greenbaum's result implies that (11) would hold for CG if we
were guaranteed that the recursively updated residual drops to well below
machine precision, which always happens in our test cases.
As for MINRES, less is known, but a bound like (11) is known to hold
for GMRES implemented with Householder transformations [5]. GMRES is
equivalent to MINRES augmented with a full reorthogonalization process.
We are content to assert (11) for MINRES, with evidence coming from our
computational experiments.
This bound sheds light on why MINRES-L can attain much better accuracy
than CGNR. For CGNR, the error bound (11) implies that kA T
A T DA- xk gets very small, where - x is the computed solution. This latter
quantity is the same as
x)k. But recall that we are seeking
a bound on the forward error, that is, on
xk. In this case, the factor
greatly skew the norm when is close to zero, so there is
no bound on kx \Gamma -
xk independent of ffi 1 =ffi 2 , that is, (5) is not expected to be
satisfied by CGNR. This is confirmed by our computational experiments.
In contrast, an analysis of MINRES-L starting from (11) does yield the
accuracy bound (5). We need the following preliminary lemma.
A be an m \Theta n matrix of rank n and -
A an r \Theta n submatrix
of A. Suppose the linear system -
consistent. Here, c is a
given vector, and -
D is a given diagonal positive definite matrix. Then for
any solution x,
-A \Delta kck (12)
and
Furthermore, there exists a solution x satisfying
-A
Proof. First, note the following preliminary result. Let H;K be two symmetric
n \Theta n matrices such that H is positive semidefinite and K is positive
definite. Let b be an n-vector in the range space of H. Then (H
converges to a solution of . This is proved by reducing to
the diagonal case using simultaneous diagonalization of H;K.
Let D be the extension of -
D to an m \Theta m diagonal matrix obtained by
filling in zeros, so that A T
A. Since A T
the limit of solution x of -
as noted in the preceding paragraph. Let M be an m \Theta m diagonal
matrix with 1's in diagonal positions corresponding to -
D and zeros elsewhere.
We have
ck
ck (15)
ffl?0
-A \Delta kck:
The last line was obtained by the transpose of (4). This proves (12). Note
that this holds for all x satisfying -
c, since this latter equation
uniquely determines -
Ax. Similarly, to demonstrate (13), we start from (15):
ck
ffl?0
ck
For the second part of the proof, observe by the first part that A T
and thus
ffl?0
Axk:
Combining this with (12) proves (14).
To resume the analysis of MINRES-L, we define
where (- x; -
v) is the solution computed by MINRES-L. Then (11) applied to
yields the bounds
In this formula, H 2 is shorthand for the coefficient matrix of (10).
We can extract another equation from (16) and (17); in particular, if we
multiply (16) by multiply (17) by ffi 1 and then add, we eliminate the terms
involving - v:
Let x be the exact solution to the WLS problem. The last two terms of this
equation can be replaced with terms involving x by using (7). Interchanging
the left- and right-hand sides yields
The goal is to derive an accuracy bound like (5) from (18) and (19). We
start by bounding the quantity on the right-hand side of (18). Note that
can be bounded by because the largest entries in D 1 ; D 2 are
bounded by -. We can bound kxk by -Akbk using Theorem 1. Next we turn
to bounding kvk in (18). Recall that, as mentioned in the preceding section,
v is not uniquely determined, but MINRES will find the minimum-norm v
satisfying (10). Recall that v is determined by the constraint
One way to pick such a v is to make it minimize kA 2 vk subject to the above
constraint. In this case, v is a layered least-squares solution with right-hand
side data (b yields the bound
for this choice of v. (The factor -
can be improved to -
-A by using
the analysis of Gonzaga and Lara [8].) Combining the x and v contributions
means that we have bounded the right-hand side of (18); let us rewrite (18)
with the new bound:
Next, we write new equations for r . Observe that r 1 lies in the range
of A Tand A T, so we can find h 1 satisfying
Similarly, by (17) there exists h 2 satisfying
By applying (13) to r 1 and r 2 separately, with "A T c" in the lemma taken to
be first r 1 and then r 2 , we conclude from (21) and (22) that
Substituting (21) and (22) into (19) yields
Notice (by analogy with (7)) that the preceding equation is exactly a weighted
least-squares computation where the "unknown" is -
and the right-hand
side data is Thus, by Theorem 1,
We now build a chain of inequalities: the right-hand side of the preceding
inequality is bounded by (23) and (24), and the right-hand side of (23) and
(24) is bounded by (20). Combining all of this yields
To obtain the preceding inequality, we used the facts that
assumption) and that kdiag(D \Gamma1
by assumption, since the
smallest entry in each D i is taken to be 1).
Thus, we have an error bound of the form (5) as desired; in particular,
there is no dependence of the error bound on ffi 2 =ffi 1 . Note that this bound depends
on -. Recall that - is defined to be the maximum entry in D
and is assumed to be small. Indeed, as noted in Section 2, we can always
assume that if we are willing to divide the problem into many layers.
6 MINRES-L for p Layers
In this section we present the MINRES-L algorithm for the p-layered WLS
problem. The algorithm is the application of MINRES to the symmetric
linear system H p is a square matrix of size (1
1)=2)n \Theta (1 +p(p \Gamma 1)=2)n, c p is a vector of that order, and w is the vector of
unknowns. Matrix H p is partitioned into (1
blocks each of size n \Theta n. Vectors c p and w are similarly partitioned. The
WLS solution vector is the first subvector of w.
In more detail, the vector w is composed of x concatenated with p(p\Gamma1)=2
n-vectors that we denote v i;j , where i lies in lies in
Recall that the p-layered WLS problem may be written
Let x be the solution to this equation. Then we see from this equation that
A T
lies in the span of [A T
Therefore, there exists
a solution [v to the equation
A T
This equation is the first block-row of H In other words, the first
block row of H p contains one copy of each of the matrices A T
and the
first block of c p is A T
In general, the (p 1)th block-row of H
the equation
A T
A T
A T
This completes the description of block-rows . We now
establish some properties of these block-rows, and we postpone the description
of block-rows
Lemma Suppose w is a solution to the linear equation (28) for each
denotes the concatenation of x and all of the v i;j 's. Then
x is the solution to the WLS problem (26).
Proof. For each i, multiply (28) by ffi i and then sum all p equations obtained
in this manner. Observe that all the v i;j terms cancel out and we end up
exactly with (26).
We also need the converse to be true.
Lemma 3 Suppose x is the solution to (26). Then there exist vectors v i;j
such that (28) is satisfied for each
Proof. The proof is by induction on (decreasing) We assume
that we have already determined v i;j for all
that (28) is satisfied for now we must
determine v k;j for for the particular value
k. The base case of the induction is that we can select v
to satisfy (28) in the case
lies in the range of [A T
because of (26).
Now for the induction case of k ! p. Rewrite (28) for the case
multiply through by
A T
Recall that our goal is to choose v k;j for to make this equation
valid.
Multiply (28) for each and add this to (29). After
rearranging and summations and cancelling common terms on the left-hand
side, we end up with
Dividing through by ffi k and separating out the v k;j terms from the second
summation yields:
A T
A T
A T
But from (26) we know that
lies in the range of
the rightmost summation of (31) also lies in the same
range. Therefore, there exist v k;j for
But then these same choices will make (29) valid because the algebraic steps
used to derive (31) from (29) can be reversed. This proves the lemma.
Note that the preceding proof actually demonstrates a strengthened version
of the lemma. The strengthened version states that if we are given x
satisfying (26) and, for some k, vectors v i;j for k - that satisfy
(28) for all then we can extend the given data to a solution of
(28) for all strengthened version is needed below.
We now explain the remaining p(p \Gamma 1)=2 block-rows of H p . These rows
exist solely for the purpose of making H p symmetric. First, we have to
order the variables and equations correctly. The variables will be listed in
the order (x; v The first
equations will be listed in the order (28) for 1. This means
that the first p rows of H p have the format [S
and T p is a p \Theta (p \Gamma 1)(p \Gamma 2)=2 matrix. Furthermore, it is easily checked
that S p is symmetric: its first block-row and first block-column both consist
of A T
listed in the order 1)st entry of its main
diagonal is \Gamma(ffi p =ffi i )A T
all its other blocks are
zeros. Then we define H p to be
We define c p as
A T
A T
where there are p(p \Gamma 1)=2 blocks of zeros. For example, the following linear
system is H 3
A T
A T
A T
A T
x
A T
A T
We now must consider whether has any solutions; in particular,
we must demonstrate that the new group of equations T T
with the first p rows. Here w 0 denotes the first p blocks of w, that is,
Studying the structure of T p , we see that there are
indexed by (i;
correspondence with the columns of T p , which correspond to variables v i;j for
in that range). The row indexed by (i; j) has exactly two nonzero block
entries that yield the equation
A T
A T
Our task is therefore to show that we can simultaneously satisfy (28) for
Our approach is to select the v p;j 's in the order v In
particular, assuming v are already selected, we define v p;j to
be any solution to
A T
The following lemma shows that this linear system is consistent.
Lemma 4 If the v p;j 's are chosen in reverse order to satisfy (33), then at
each step the linear system is consistent, and (32) is satisfied.
Proof. The proof is by reverse induction on j. The base case is
which case (33) has a solution because, as noted above, A T
lies in the span of [A T
In the case vacuously
true: there is no i in the specified range.
Now consider the case any i in the range
Start with the version of (33) satisfied by v p;i , which holds by the induction
hypothesis:
A T
Move the terms of the first summation to the right-hand
side:
A T
A T
A T
A T
A T
The second line was obtained from the first by applying (32) inductively
(with "j" in (32) taken to be k). The third line was obtained by merging the
two summations on the right.
But notice that the preceding equation means that v p;i satisfies the same
linear system as v p;j , that is (33), except with the right-hand side scaled by
This proves that (33) is consistent for the j case since we have constructed
a solution to it. Although this linear system does not necessarily
have a unique solution, a linear system of the form A T uniquely determines
Ax. Thus, we have also proved that
for all This result is actually a strengthening of (32) for j; for
that equation we need only the specific case of
The reader may have noticed that the preceding proof is apparently too
complicated and that we could establish the result more simply by solving
for v p;p\Gamma1 in (33) with setting v
2. This simpler approach does not yield the bounds on kv p;j k
needed in the next section.
This proof shows that the above method for selecting v
consistent and satisfies (32). We also see that (27) is satisfied; this follows
immediately from taking (33). To complete the proof that there is
a solution to H p need only verify (28) in the case
But recall from the proof of Lemma 3 that the remaining v i;j 's for
can be determined sequentially by using the construction in the
proof. Thus, the arguments of this section have established the following
theorem.
Theorem 2 There exists at least one solution w to H p
more, any such solution has as its first n entries the vector x that solves
(26).
7 Convergence Analysis for p Layers
The convergence analysis for p layers follows the same basic outline as the
convergence analysis for two layers. In particular, we use (11) as the starting
point for the error analysis. Observe that (11) has the norm of the true
solution on the right-hand side. Thus, to apply that bound, we must get a
norm bound on v i;j for all
We start with bounds on v p;j for Apply Lemma
1 to (33) in the case In the lemma, take -
As noted above, A T
lies in
the range of [A T
so (33) is consistent. The right-hand side of (33)
in the case has the form A T c with
Note that kD p (b 1)kbk. Thus, from (12),
-A
To derive the third line from the second, we used the facts that kD \Gamma1
for each i and
Now we use the same line of reasoning to get a bound on v p;p\Gamma2 based on
(33) for the case 2. In this case, the right-hand side of (33) has the
Thus, kck is bounded by ffi p\Gamma2 (- A
which is at
most
We continue this argument inductively. Each time the bound grows by a
factor 2- A to take into account the fact that v p;i appears on the right-hand
side for the equation determining v p;i\Gamma1 . In the end we conclude that
Next we must bound v i;j for 1 These vectors are
determined by (28). We can find a solution to (28) by first solving
for z i , where -
is already known to be consistent. Furthermore, in the preceding
equation. We set v Using (12), we conclude that
for each
We now claim that
p. This is proved by induction on decreasing i using
recurrence (36). The on the right-hand side of (36) is bounded
by (35), and the remaining terms are bounded by the induction hypothesis.
We omit the details.
For the right-hand side of (11) we need a bound on kv i;j k. Note that up
to now we have not uniquely determined v i;j itself. Recall that in each case
Lemma 1 was used to bound kA k v i;j k. We can force unique determination
by choosing the v i;j as in the proof of Lemma 1, yielding
by (14). Note that MINRES does not necessarily select this v i;j , but because
of its minimization property (that is, Theorem 2.4 of Brown and Walker [4]
described in Section 4), it will select v i;j whose norm is no larger than in the
preceding bound.
We now can apply (11). The other factor on the right-hand side, namely,
easily seen to be bounded by p
w be the solution
computed by MINRES-L, and let
substituting (37) on the right-hand side of (11) yields
Let r be the first p block-entries of r. Note that r j must lie in the
span of [A
in order for the equation H
to have a
solution, because it can be seen from (28) that the (p 1)st block-row of
us find h i that solves r
for each i. By (13) we know that k[A
Let - x be the first n entries of -
that is, the computed WLS solution. If
we multiply the (p \Gamma i+1)st block row of H
and add these p rows, we obtain
A T
The third line was obtained from the second by interchanging the order of
summation. Thus, we see from the third line above that -
solves a WLS
problem in which the ith entry of the data vector is A i
in this range, we conclude that the data vector is bounded
in norm by k. Then Theorem 1
implies that
A
Substituting (38) yields
A \Delta (4- A )
This is a bound of the form (5) as desired.
Computational Experiments
In this section we present computational experiments on MINRES-L and
CGNR to compare their accuracy and efficiency. The first few tests involve
a small node-arc adjacency matrix. The remaining tests are on matrices
arising in linear programming and boundary value problems. All tests were
conducted in Matlab 4.2 running on an Intel Pentium under Microsoft Windows
NT 4.0. Matlab is a software package and programming language for
numerical computation written by The Mathworks, Inc. All computations
are in IEEE double precision with machine epsilon approximately 2:2
Matlab sparse matrix operations were used in all tests.
Our implementation of CGNR is based on CGLS1 as in (3.2) of Bj-orck,
Elfving and Strako-s [2]. These authors conclude that CGLS1 is a good way
to organize CGNR. There are two matrix-vector products per CGLS1 it-
eration, one with matrix A T D 1=2 and one with D 1=2 A. In our implemen-
tation, the CGNR iteration terminates when the scaled computed residual
ks k k=kA T Dbk drops below 10 \Gamma13 . Our implementation of MINRES is
based on [14], except Givens rotations were used instead of 2 \Theta 2 Householder
matrices (so that there are some inconsequential sign differences).
The MINRES-L iteration terminates when the scaled computed residual
The first matrix A used in the following tests is the reduced node-arc
adjacency matrix of the graph depicted in Figure 1. A "node-arc adjacency"
matrix contains one column for each node of a graph and one row for each
edge. Each row contains exactly two nonzero entries, a +1 and a \Gamma1 in the
columns corresponding to the endpoints of the edge. (The choice of which
endpoint is assigned +1 and which is assigned \Gamma1 induces an orientation
on the edge, but often this orientation is irrelevant for the application.) A
reduced node-arc incidence (RNAI) matrix is obtained from a node-arc incidence
matrix by deleting one column. RNAI matrices arise in the analysis of
an electrical network with batteries and resistors; see [23]. They also arise in
network flow problems. In the case of Figure 1, the column corresponding to
Figure
1: An based on this graph was used for the first
group of tests. The column corresponding to the top node is deleted. Edges
marked with heavy lines are weighted 1, and edges marked with light lines
are weighted varies from test to test.
the top node was deleted. Thus, A is an 9 matrix. It is well known that
the RNAI matrix for a connected graph always has full rank. RNAI matrices
are known to have small values of -A and -
-A [23].
In all these tests, the weight matrix has two layers. We took
vary from experiment to experiment.
The rows of A in correspondence with D 2 are drawn as thinner lines in Figure
1. Finally, the right-hand side b was chosen to be the first prime numbers.
The results are displayed in Table 1, and the cases when
are plotted in Figure 2. The scaled error that is tabulated and
plotted in all cases is defined to be k- x \Gamma xk=kbk. We choose this particular
scaling for the error because our goal is to investigate stability bound
(5). The true solution x is computed using the COD method [12]. Note
that the accuracy of CGNR decays as ffi 2 gets smaller, whereas MINRES-L's
accuracy stays constant. MINRES-L requires many more flops than CGNR
because the system matrix is larger. The running time of CGNR is about
the same for the first four rows of the table as the ill-conditioning increases.
In the last two rows the running time of CGNR drops because the matrix
A T DA masquerades as a low-rank matrix for small values of ffi 2 , causing early
termination of the Lanczos process.
Besides returning an inaccurate solution, CGNR has the additional difficulty
that its residual (the quantity normally measured in practical use of
this algorithm) does not reflect the forward error, so there is no simple way
Table
1: Behavior of the two-layered MINRES-L algorithm compared to
CGNR for decreasing values of ffi 2 . The error reported is the scaled error
defined in the text. Note that the CG accuracy degrades while the MINRES-
accuracy stays about the same.
MINRES-L MINRES-L MINRES-L CGNR CGNR CGNR
Iterations Error Flops Iterations Error
to determine whether CGNR is computing good answers. In contrast, the
error and residual in MINRES-L are closely correlated. This correlation is
predicted by our theory.
The next computational test involved a larger matrix A taken from the
Netlib linear programming test set, namely, the matrix in problem AFIRO,
which is 51 \Theta 27. We used a matrix D with 1's in its first 27 diagonal positions
its remaining 24 positions (i.e., D
The right-hand side vector b was chosen to contain the first
primes. MINRES-L required 137 iterations and 250 kflops and yielded
a solution -
x with scaled error 3:0 with respect to the true solution
computed by the COD method. For this matrix, -A and -
-A are not known.
CGNR on this problem required 69 iterations and 61 kflops and returned an
answer with scaled error 2:2 . The convergence plots are depicted in
Figure
3.
The excessive number of iterations required by MINRES is apparently
caused by a loss of orthogonality in the Lanczos process. To verify this
hypothesis, we ran GMRES on the same layered matrix. GMRES [19] on
a symmetric matrix is equivalent to MINRES with full reorthogonalization.
(In exact arithmetic the two algorithms are identical.) We call this algorithm
GMRES-L. The same termination tests were used. The result is depicted in
Figure
4. In this case, GMRES-L ran for 50 iterations (fewer than (1
returned a more accurate answer, one with forward error
. However, the number of flops was higher, 350 k, because of the
scaled error
scaled residual
scaled error
scaled residual
Figure
2: Convergence behavior of CGNR and MINRES-L for the
RNAI test case. The plots are for In
these plots and all that follow, the x-axis is the iteration number. For both
algorithms the computed (i.e., recursively updated) residual is plotted rather
than the true residual. Other experiments (not reported here) indicate that
these are usually indistinguishable. The \Theta on the y-axis indicates the cutoff
below which the CGNR scaled residual must drop in order for (11) to be true
. The ffi on the y-axis is the analog for MINRES-L.
Figure
3: Convergence behavior of CGNR and MINRES-L for AFIRO. The
curves are labeled as in Figure 2.
Gram-Schmidt process in the GMRES main loop.
The next computational test involves a larger matrix A arising from finite-element
analysis. The application is the solution of the boundary value
problem r \Delta on the polygonal domain depicted in Figure 5 with
Dirichlet boundary conditions. The conductivity field c is 1 on the outer part
of the domain and is 10 12 on the darker triangles. As discussed in [24], this
type of problem gives rise to a weighted least-squares problem in which A
encodes information about the geometry and D encodes the ill-conditioned
conductivity field. The values of -A and -
-A for this matrix are not known,
although bounds are known for variants of these parameters. The particular
matrix A is 652 \Theta 136. The right-hand side vector b was chosen according to
the Dirichlet boundary conditions described in [24]. The MINRES-L method
for this problem gave scaled error of 1:3 iterations and 6.5
mflops. To compute the true solution, we used the NSHI method in [24].
In this case, surprisingly, CGNR gave almost as accurate an answer, but the
termination test was never activated. (We cut off CGNR after 10n iterations.)
The residual of CGNR is quite oscillatory as depicted in Figure 6. In the
finite-element literature, CGNR would be referred to as conjugate gradient
on the assembled stiffness matrix, which is A T DA.
A cause of this odd behavior of CGNR is as follows. Note that the region
of high conductivity is not incident on the boundary of the domain so
Figure
4: Convergence behavior of GMRES-L (- and \Delta \Delta
Figure
5: Domain and finite element mesh used for the finite element exper-
iment. Conductivity in the dark triangles is 10 12 and in the light triangles is
Figure
Convergence of CGNR and MINRES-L for the finite element test
problem. The curves are labeled as in Figure 2.
Thus, A T
starts from a right-hand side that is already almost zero. Furthermore, this
right-hand side is nearly orthogonal to the span of A T
dominates
the stiffness matrix A T DA. Thus, CGNR has trouble making progress. The
surprisingly accurate answer from CGNR in this example is not so useful
in practice because there is no apparent way to detect that convergence is
underway.
The final test is a three-layered problem based on the matrix A from
ADLITTLE of the Netlib test set, a 138 \Theta 56 matrix. Matrix D has as its
first 28 diagonal entries 1, its next 28 diagonal entries 10 \Gamma8 and its last 82
entries . The right-hand side vector is the first 138 prime numbers.
The convergence is depicted in Figure 7. As expected, the scaled error of
MINRES-L decreased to while the scaled error of CGNR was 0:3.
Note the excessive number of iterations required by MINRES-L. Again, this
is apparently due to loss of orthogonality because the number of iterations
was only 118 for GMRES-L to achieve a scaled error of 9:4 In fact,
for this test GMRES-L was more efficient than MINRES-L in terms of flop
count.
In most cases we see that the MINRES-L algorithm performs essentially
as expected, except for the two cases in which a loss of orthogonality causes
many more iterations than expected. In every case, MINRES-L's running
time is higher than CGNR's, but CGNR can produce bad solutions as measured
by forward error.
Figure
7: Convergence of CGNR and MINRES-L for ADLITTLE. The curves
are labeled as in Figure 2. Note the excessive number of iterations for
MINRES-L caused by a loss of orthogonality.
9 An Issue for Interior-Point Methods
In this section we describe an issue that arises when using the MINRES-L
algorithm in an interior-point method for linear programming. Full consideration
of this matter is postponed to future work.
It is well known that the system of equations for the Newton step in an
interior-point method can be expressed as a weighted least-squares problem.
To be precise, consider the linear programming problem
subject to A T
whose dual is
subject to Ay
(which is standard form, except we have transposed A to be consistent with
least-squares notation). A primal-dual method starting at a feasible interior
point problem computes an update \Deltay to y satisfying
is an algorithm-dependent
parameter usually in [0; 1], - is the duality gap, and e is the vector of all 1's.
See Wright [26]. Since (40) has the form of a WLS problem, we can obtain
\Deltay using the MINRES-L algorithm.
One way to compute \Deltas is via \Deltas := \GammaA\Deltay. This method is not stable
because \Deltas has very small entries in positions where s has very small en-
these small entries must be computed accurately with respect to the
corresponding entry of s. In contrast, the error in all components of \Deltas
arising from the product A\Deltay is on the order of ffl \Delta ksk (where ffl is machine-
epsilon). A direct method for accurately computing all components of \Deltas
was proposed by Hough [11], who obtains a bound of the form
\Deltas
for each i. We will consider methods for extending MINRES-L to accurate
computation of \Deltas in future work. As noted by Hough, \Deltax is easily computed
from \Deltas with a similar accuracy bound assuming \Deltas satisfies (41).
Conclusions
We have presented an iterative algorithm MINRES-L for solving weighted
least squares. Theory and computational experiments indicate that the
method is more accurate than CGNR when the weight matrix is highly ill-
conditioned. This work raises a number of questions.
1. Is there an iterative method that does not require the layering assumption
2. If layering is indeed required, can we get a more parsimonious layered
linear system when p - 3? In particular, is there a 3n \Theta 3n system of
equations with all the desired properties for the 3-layered case (instead
of the 4n \Theta 4n system that we presented)?
3. What is the best way to handle loss of orthogonality in MINRES that
was observed in Section 8?
4. Can this work be extended to stable computation of \Deltax and \Deltas in an
interior-point method? (This question was raised in Section 9.)
5. What about preconditioning? In most of our computational tests, we
ran both MINRES and CG for more than n iterations because our aim
was to compute the solution vector as accurately as possible. In prac-
tice, one hopes for convergence in much fewer than n iterations. What
are techniques for preconditioning WLS problems? Note that the analysis
of MINRES-L's accuracy in Section 5 and Section 7 presupposes
that no preconditioner is used.
Acknowledgments
We had helpful discussions of this work with Anne Greenbaum and Mike
Overton of NYU; Roland Freund, David Gay, and Margaret Wright of Bell
Labs; Patty Hough of Sandia; Rich Lehoucq and Steve Wright of Argonne;
Homer Walker of Utah State; and Zden-ek Strako-s of the Czech Academy
of Sciences. We thank Patty Hough and Gail Pieper for carefully reading
an earlier draft of this paper. In addition, we received the Netlib linear
programming test cases in Matlab format from Patty Hough.
--R
Numerical methods for least squares problems.
Stability of conjugate gradient and Lanczos methods for linear least squares problems.
Iterative methods for weighted least squares.
GMRES on (nearly) singular systems.
Numerical stability of GMRES.
On linear least-squares problems with diagonally dominant weight matrices
Matrix Computations
A note on properties of condition numbers.
Estimating the attainable accuracy of recursively computed residual methods.
Conjugate gradient type methods for ill-posed problems
Stable computation of search directions for near-degenerate linear programming problems
Complete orthogonal decomposition for weighted least squares.
Solving Least Squares Problems.
Solution of sparse indefinite systems of linear equations.
LSQR: An algorithm for sparse linear equations and sparse least squares.
Practical use of the symmetric Lanczos process with re- orthogonalization
The Lanczos algorithm with selective reorthog- onalization
Iterative methods for sparse linear systems.
GMRES: A generalized minimum residual algorithm for solving nonsymmetric linear systems.
On scaled projections and pseudoinverses.
A framework for equilibrium equations.
A Dantzig-Wolfe-like variant of Karmarkar's interior-point linear programming algorithm
Stable numerical algorithms for equilibrium systems.
Stable finite elements for problems with wild coefficients.
A primal-dual interior point method whose running time depends only on the constraint matrix
--TR | krylov-space;iterative method;conjugate gradient;MINRES;weighted least squares;achievable accuracy |
587793 | Choosing Regularization Parameters in Iterative Methods for Ill-Posed Problems. | Numerical solution of ill-posed problems is often accomplished by discretization (projection onto a finite dimensional subspace) followed by regularization. If the discrete problem has high dimension, though, typically we compute an approximate solution by projecting the discrete problem onto an even smaller dimensional space, via iterative methods based on Krylov subspaces. In this work we present a common framework for efficient algorithms that regularize after this second projection rather than before it. We show that determining regularization parameters based on the final projected problem rather than on the original discretization has firmer justification and often involves less computational expense. We prove some results on the approximate equivalence of this approach to other forms of regularization, and we present numerical examples. | Introduction
. Linear, discrete ill-posed problems of the form
(1)
or
min
x
equivalently, A
(2)
arise, for example, from the discretization of first-kind Fredholm integral equations
and occur in a variety of applications. We shall assume that the full-rank matrix A
in (2) and in (1). In discrete ill-posed problems, A is ill-conditioned
and there is no gap in the singular value spectrum. Typically, the right
hand side b contains noise due to measurement and/or approximation error. This
noise, in combination with the ill-conditioning of A, means that the exact solution of
(1) or (2) has little relationship to the noise-free solution and is worthless. Instead, we
use a regularization method to determine a solution that approximates the noise-free
solution. Regularization methods replace the original operator by a better-conditioned
but related one in order to diminish the effects of noise in the data and produce a
regularized solution to the original problem. Sometimes this regularized problem is
too large to solve exactly. In that case, we typically compute an approximate solution
by projection onto an even smaller dimensional space, perhaps via iterative methods
based on Krylov subspaces.
The conditioning of the new problem is controlled by one or more regularization
parameters specific to the method. A large regularization parameter yields a new well-conditioned
problem, but its solution may be far from the noise-free solution since
the new operator is a poor approximation to A. A small regularization parameter
generally yields a solution very close to the noise-contaminated exact solution of (1)
or (2), and hence its distance from the noise-free solution also can be large. Thus,
This work was supported by the National Science Foundation under Grants CCR 95-03126 and
CCR-97-32022 and by the Army Research Office, MURI Grant DAAG55-97-1-0013.
y Dept. of Computer and Electrical Engineering, Northeastern University, Boston, MA 02115
z Dept. of Computer Science and Institute for Advanced Computer Studies, University of Mary-
land, College Park, MD 20742 (oleary@cs.umd.edu).
a key issue in regularization methods is to choose a regularization parameter that
balances the error due to noise with the error due to regularization.
A wise choice of regularization parameter is obviously crucial to obtaining useful
approximate solutions to ill-posed problems. For problems small enough that a rank-
revealing factorization or singular value decomposition of A can be computed, there
are well-studied techniques for computing a good regularization parameter. These
techniques include the Discrepancy Principle [8], generalized cross-validation (GCV)
[9], and the L-curve [15]. For larger problems treated by iterative methods, though,
the parameter choice is much less understood. If regularization is applied to the
projected problem that is generated by the iterative method, then there are essentially
two regularization parameters: one for the standard regularization algorithms, such
as Tikhonov or truncated SVD, and one controlling the number of iterations taken.
One subtle issue is that the standard regularization parameter that is correct for the
discretized problem may not be the optimal one for the lower-dimensional problem
actually solved by the iteration, and this observation leads to the research discussed
in this paper. At first glance, there can appear to be a lot of work associated with
the selection of a good regularization parameter, and many algorithms proposed in
the literature are needlessly complicated. But by regularizing after projection by the
iterative method, so that we are regularizing the lower dimensional problem that is
actually being solved, much of this difficulty vanishes.
The purpose of this paper is to present parameter selection techniques designed
to reduce the regularization work for iterative methods such as Krylov subspace tech-
niques. Our paper is organized as follows. In §2, we will give an overview of the
regularization methods we will be considering, and we follow up in §3 by surveying
some methods for choosing the corresponding regularization parameters. In §4, we
show how parameter selection techniques for the original problem can be applied instead
to a projected problem obtained from an iterative method, greatly reducing the
cost without much degradation in the solution. We give experimental results in §5
and conclusions and future work in §6.
2. Regularization background. In the following we shall assume that b = b_true + e,
where b_true denotes the unperturbed data vector and e denotes zero-mean
white noise. We will also assume that b_true satisfies the discrete Picard condition;
that is, the spectral coefficients of b_true decay faster, on average, than the singular
values.
Under these assumptions, it is easy to see why the exact solution to (1) or (2) is
hopelessly contaminated by noise. Let A = Û Σ̂ V̂^* denote the singular value decomposition
of A, where the columns of Û and V̂ are the singular vectors, and the singular values
are ordered as σ_1 ≥ σ_2 ≥ ... ≥ σ_n > 0. Then the solution to (1) or (2) is given by

    x = Σ_{i=1}^n (û_i^* b / σ_i) v̂_i
      = Σ_{i=1}^n (û_i^* b_true / σ_i) v̂_i + Σ_{i=1}^n (û_i^* e / σ_i) v̂_i.        (3)

As a consequence of the white noise assumption, |û_i^* e| is roughly constant for all i,
while the discrete Picard condition guarantees that |û_i^* b_true| decays faster, on average,
than σ_i does. The matrix A is ill-conditioned, so small singular values magnify the
corresponding coefficients û_i^* e in the second sum, and it is this large contribution of
noise from the approximate null space of A that renders the exact solution x defined
in (3) worthless. The following regularization methods try in different ways to lessen
the contribution of noise to the solution. For further information on these methods,
see, for example, [17].
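As a small illustration of (3), the following Python sketch (not part of the paper; the synthetic test problem, noise level, and random seed are arbitrary choices) forms the naive solution from the SVD expansion and shows how the noise coefficients û_i^* e / σ_i dominate when the singular values are tiny.

    import numpy as np

    rng = np.random.default_rng(0)

    # A synthetic ill-conditioned problem with rapidly decaying singular values.
    m, n = 50, 50
    U, _ = np.linalg.qr(rng.standard_normal((m, m)))
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))
    sigma = 10.0 ** np.linspace(0, -12, n)            # sigma_1 >= ... >= sigma_n > 0
    A = U[:, :n] @ np.diag(sigma) @ V.T

    x_true = V @ (1.0 / (1.0 + np.arange(n)))          # discrete-Picard-like decay
    b_true = A @ x_true
    e = 1e-4 * np.linalg.norm(b_true) / np.sqrt(m) * rng.standard_normal(m)
    b = b_true + e

    # Naive solution via the SVD expansion (3): x = sum_i (u_i^T b / sigma_i) v_i
    Uh, s, Vht = np.linalg.svd(A, full_matrices=False)
    x_naive = Vht.T @ ((Uh.T @ b) / s)

    print("||x_true|| =", np.linalg.norm(x_true))
    print("||x_naive|| =", np.linalg.norm(x_naive))    # typically many orders of magnitude larger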
2.1. Tikhonov regularization. One of the most common methods of regularization
is Tikhonov regularization [34]. In this method, the problem (1) or (2) is
replaced with the problem of solving

    min_x ||Ax - b||_2^2 + λ^2 ||Lx||_2^2,                        (4)

where L denotes a matrix, often chosen to be the identity matrix I or a discrete
derivative operator, and λ is a positive scalar regularization parameter. For ease in
notation, we will assume that L = I. Solving (4) is equivalent to solving

    (A^*A + λ^2 I) x = A^* b.                                     (5)

In analogy with (3) we have

    x_λ = Σ_{i=1}^n (σ_i^2 / (σ_i^2 + λ^2)) (û_i^* b / σ_i) v̂_i.   (6)

In this solution, the contributions from noise components û_i^* e for values of σ_i ≪ λ are
much smaller than they are in (3), and thus x_λ can be closer to the noise-free solution
than x is. If λ is too large, however, A^*A + λ^2 I is very far from the original operator
A^*A, and x_λ is very far from x_true, the solution to (2) when e = 0. Conversely, if
λ is too small, the singular values of the new operator A^*A + λ^2 I are close to those of
A^*A; thus x_λ ≈ x, so small singular values again greatly magnify noise components.
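A minimal sketch of Tikhonov regularization with L = I, computed through the SVD filter factors σ_i^2/(σ_i^2 + λ^2) as in (6); the matrix A, data b, and the value of λ passed in are placeholders supplied by the caller.

    import numpy as np

    def tikhonov_svd(A, b, lam):
        """Tikhonov-regularized solution (L = I) via the SVD, cf. (6)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b                         # spectral coefficients u_i^T b
        filt = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
        return Vt.T @ (filt * beta / s)

For small dense problems this is the most transparent implementation; for large problems one would instead solve (5) iteratively, as discussed later in the paper.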
2.2. Truncated SVD. In the truncated SVD method of regularization, the regularized
solution is chosen simply by truncating the expansion in (3) as

    x_ℓ = Σ_{i=1}^{n-ℓ} (û_i^* b / σ_i) v̂_i.                       (7)

Here the regularization parameter is ℓ, the number of terms to be dropped from the
sum. Observe that if ℓ is small, very few terms are dropped from the sum, so x_ℓ
resembles x in that the effects of noise are large. If ℓ is too large, however, important
information could be lost; such is the case if components û_i^* b_true that are significant
relative to the noise are discarded.
An alternative, yet related, approach to TSVD is an approach introduced by Rust
[31] where the truncation strategy is based on the value of each spectral coefficient
itself. The strategy is to include in the sum (3) only those terms corresponding to
a spectral coefficient û_i^* b whose magnitude is greater than or equal to some tolerance
ρ, which can be regarded as the regularization parameter.
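The two truncation strategies just described differ only in how terms of (3) are selected. A sketch of both, assuming dense A and using the full SVD (our own illustration; the arguments ell and rho are the regularization parameters defined above):

    import numpy as np

    def tsvd_svd(A, b, ell):
        """Truncated-SVD solution: drop the ell smallest singular values, cf. (7)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        k = len(s) - ell                       # number of terms kept
        beta = U[:, :k].T @ b
        return Vt[:k].T @ (beta / s[:k])

    def rust_tsvd_svd(A, b, rho):
        """Rust's variant: keep only terms whose spectral coefficient has magnitude >= rho."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        keep = np.abs(beta) >= rho
        return Vt[keep].T @ (beta[keep] / s[keep])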
2.3. Projection and iterative methods. Solving (5) or (7) can be impractical
if n is large, but fortunately, regularization can be achieved through projection onto
a subspace; see, for example, [7]. The truncated SVD is an example of one such
projection: the solution is constrained to lie in the subspace spanned by the singular
vectors corresponding to the largest singular values. Other projections can be
more economical. In general, we constrain our regularized solution to lie in some
k-dimensional subspace of C^n, spanned by the columns of an n × k matrix Q^(k). For
example, we choose x^(k)_reg = Q^(k) y^(k), where y^(k) solves

    min_y ||A Q^(k) y - b||_2,                                    (8)

or equivalently

    (Q^(k))^* A^* A Q^(k) y = (Q^(k))^* A^* b.                     (9)

The idea is that with an appropriately chosen subspace, the operator (Q^(k))^* A^* A Q^(k)
will be better conditioned than the original operator and hence that x^(k)_reg will approximate
x_true well on that subspace.
This projection is often achieved through the use of iterative methods such as conjugate
gradients, GMRES, QMR, and other Krylov subspace methods. The matrix
contains orthonormal columns generated via a Lanczos tridiagonalization or
bidiagonalization process [27, 1]. In this case, Q (k) is a basis for some k-dimensional
Krylov subspace (i.e., the subspace K k (c; K) spanned by the vectors c;
for some matrix K and vector c). The regularized solutions x (k)
reg are generated iteratively
as the subspaces are built. Krylov subspace algorithms such as CG, CGLS,
GMRES, and LSQR tend to produce, at early iterations, solutions that resemble x true
in the subspace spanned by (right) singular vectors of A corresponding to the largest
singular values. At later iterations, however, these methods start to reconstruct increasing
amounts of noise into the solution. This is due to the fact that for large k,
the operator (Q (k) ) A AQ (k) approaches the ill-conditioned operator A A. There-
fore, the choice of the regularization parameter k, the stopping point for the iteration
and the dimension of the subspace, is very important. 1
2.4. Hybrid methods: projection plus regularization. Another important
family of regularization methods, often referred to as hybrid methods [17], was introduced
by O'Leary and Simmons [27]. These methods combine a projection method
with a direct regularization method such as TSVD or Tikhonov regularization. The
problem is projected onto a particular subspace of dimension k, but typically the
restricted operator in (9) is still ill-conditioned. Therefore, a regularization method
is applied to the projected problem. Since the dimension k is usually small relative
to n, regularization of the restricted problem is much less expensive. Yet, with an
appropriately chosen subspace, the end results can be very similar to those achieved
by applying the same direct regularization technique to the original problem. We will
become more precise about how "similar" the solutions are in x4.5. Because the projected
problems are usually generated iteratively by a Lanczos method, this approach
is useful when A is sparse or structured in such a way that matrix-vector products
can be handled efficiently with minimal storage.
3. Existing parameter selection methods. In this section, we discuss a sampling
of the parameter selection techniques that have been proposed in the literature.
They differ in the amount of a priori information required as well as in the decision
criteria.
3.1. The Discrepancy Principle. If some extra information is available - for
example, an estimate of the variance of the noise vector e - then the regularization
parameter can be chosen rather easily. Morozov's Discrepancy Principle [25] says that
if δ is the expected value of ||e||_2, then the regularization parameter should be chosen
so that the norm of the residual corresponding to the regularized solution x_reg is δ;
that is,

    ||b - A x_reg||_2 = δ,                                        (10)

where δ is a predetermined real number.

Fig. 1. Example of a typical L-curve. This particular L-curve corresponds to applying Tikhonov
regularization to the problem in Example 2.

Usually, small values of the regularization parameter correspond to a closer solution to the noisy
equation, but despite this, we will call k, rather than 1/k, the regularization parameter.
Other methods based on knowledge of the variance are given, for example, in [12, 5].
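For Tikhonov regularization the residual norm ||b - A x_λ||_2 is monotone in λ, so the discrepancy equation (10) can be solved with a standard scalar root finder. A sketch using the SVD (our own; the bracket endpoints are arbitrary, and δ is assumed to lie between the residual norms at those endpoints):

    import numpy as np
    from scipy.optimize import brentq

    def discrepancy_lambda(A, b, delta, lam_lo=1e-12, lam_hi=1e2):
        """Return lambda such that ||b - A x_lambda||_2 = delta (Tikhonov, L = I)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        # part of b outside range(A) (zero if A has full row rank)
        r_perp2 = max(np.linalg.norm(b)**2 - np.linalg.norm(beta)**2, 0.0)

        def resid_norm(lam):
            w = lam**2 / (s**2 + lam**2)       # 1 - filter factor
            return np.sqrt(np.sum((w * beta)**2) + r_perp2)

        return brentq(lambda lam: resid_norm(lam) - delta, lam_lo, lam_hi)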
3.2. Generalized Cross-Validation. The Generalized Cross-Validation (GCV)
parameter selection method does not depend on a priori knowledge about the noise
variance. This idea of Golub, Heath, and Wahba [9] is to find the parameter λ that
minimizes the GCV functional

    G(λ) = ||(I - A A^#_λ) b||_2^2 / (trace(I - A A^#_λ))^2,       (11)

where A^#_λ denotes the matrix that maps the right hand side b onto the regularized
solution x_λ. In Tikhonov regularization, for example, A^#_λ = (A^*A + λ^2 I)^{-1} A^*.
GCV chooses a regularization parameter that is not too dependent on any one data
measurement [11, §12.1.3].
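In the Tikhonov case (11) can be evaluated cheaply on a grid of trial values once the SVD is available, since trace(A A^#_λ) = Σ_i σ_i^2/(σ_i^2 + λ^2). A sketch (our own; the grid of lambdas is supplied by the caller):

    import numpy as np

    def gcv_tikhonov(A, b, lambdas):
        """Evaluate the GCV functional (11) for Tikhonov regularization on a grid of lambdas."""
        m = A.shape[0]
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        r_perp2 = max(np.linalg.norm(b)**2 - np.linalg.norm(beta)**2, 0.0)
        G = []
        for lam in lambdas:
            w = lam**2 / (s**2 + lam**2)                  # 1 - filter factors
            num = np.sum((w * beta)**2) + r_perp2         # ||(I - A A_lambda^#) b||^2
            den = (m - np.sum(s**2 / (s**2 + lam**2)))**2  # (trace(I - A A_lambda^#))^2
            G.append(num / den)
        return np.array(G)

    # usage: lam_gcv = lambdas[np.argmin(gcv_tikhonov(A, b, lambdas))]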
3.3. The L-Curve. One way to visualize the tradeoff between regularization
error and error due to noise is to plot the norm of the regularized solution versus
the corresponding residual norm for each of a set of regularization parameter values.
The result is the L-curve, introduced by Lawson and popularized by Hansen [15].
Figure
1 for a typical example. As the regularization parameter increases, noise
is damped, so that the norm of the solution decreases while the residual increases.
Intuitively, the best regularization parameter should lie on the corner of the L-curve,
since for values higher than this, the residual increases without reducing the norm
of the solution much, while for values smaller than this, the norm of the solution
increases rapidly without much decrease in residual. In practice, only a few points
on the L-curve are computed and the corner is located by approximate methods,
estimating the point of maximum curvature [19].
Like GCV, this method of determining a regularization parameter does not depend
on specific knowledge about the noise vector.
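The L-curve points for Tikhonov regularization are also cheap to generate from the SVD, and a crude corner can be located by maximizing a finite-difference estimate of the curvature. This sketch is our own simplification and is not Hansen's lcorner routine used later in the paper:

    import numpy as np

    def lcurve_points(A, b, lambdas):
        """Return (log residual norm, log solution norm) for each lambda (Tikhonov, L = I)."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        r_perp2 = max(np.linalg.norm(b)**2 - np.linalg.norm(beta)**2, 0.0)
        rho, eta = [], []
        for lam in lambdas:
            f = s**2 / (s**2 + lam**2)
            rho.append(0.5 * np.log(np.sum(((1 - f) * beta)**2) + r_perp2))
            eta.append(0.5 * np.log(np.sum((f * beta / s)**2)))
        return np.array(rho), np.array(eta)

    def corner_index(rho, eta):
        """Crude corner estimate: maximize the discrete curvature of the L-curve."""
        drho, deta = np.gradient(rho), np.gradient(eta)
        d2rho, d2eta = np.gradient(drho), np.gradient(deta)
        curvature = (drho * d2eta - d2rho * deta) / (drho**2 + deta**2)**1.5
        return int(np.argmax(curvature))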
3.4. Disadvantages of these parameter choice algorithms. The appropriate
choice of regularization parameter - especially for projection algorithms - is a
difficult problem, and each method has severe flaws.
Table 1
Summary of additional flops needed to compute the regularization parameter for each of four
regularization methods with various parameter selection techniques. Notation:
q is the cost of multiplication of a vector by A;
p is the number of discrete parameters that must be tried;
k is the dimension of the projection;
m and n are problem dimensions.

                    Basic cost    Added cost
                                  Disc.         GCV           L-curve
    Rust's TSVD     O(mn^2)       O(m log m)    O(m log m)    O(m log m)
    Projection
The Discrepancy Principle is convergent as the noise goes to zero, but it relies on
knowing information that is often unavailable or incorrectly estimated. Even with a
correct estimate of the variance, the solutions tend to be oversmoothed [20, pg. 96]
(see also the discussion in §6.1 of [15]).
One noted difficulty with GCV is that G can have a very flat minimum, making
it difficult to determine the optimal λ numerically [35].
The L-curve is usually more tractable numerically, but its limiting properties are
nonideal. The solution estimates fail to converge to the true solution as n → ∞ [36]
or as the error norm goes to zero [6]. All methods that assume no knowledge of the
error norm - including GCV - have this latter property [6].
For further discussion and references about parameter choice methods, see [5, 17].
The cost of these methods is tabulated in Table 1.
3.5. Previous work on parameter choice for hybrid methods. At first
glance, it appears that for Tikhonov regularization, multiple systems of the form
(5) must be solved in order to evaluate candidate values of for the Discrepancy
Principle or the L-curve. Techniques have been suggested in the literature for solving
these systems using projection methods.
Chan and Ng [4], for example, note that the systems involve the closely related
matrices A^*A + λ_i^2 I and suggest solving the systems simultaneously
using a Galerkin projection method on a sequence of "seed" systems. Although
this is economical in storage, it can be unnecessarily expensive in time because they
do not exploit the fact that for each fixed k, the Krylov subspace K_k(A^*A + λ^2 I, A^*b) is
the same for all values of λ.
Frommer and Maass [8] propose two algorithms for approximating the λ that
satisfies the Discrepancy Principle (10). The first is a "truncated cg" approach in
which they use conjugate gradients to solve k systems of the form (5), truncating
the iterative process early for large λ and using previous solutions as starting guesses
for later problems. Like Chan and Ng, this algorithm does not exploit any of the
redundancy in generating the Krylov subspaces for each λ_i. The second method
they propose, however, does exploit the redundancy so that the CG iterates for all k
systems can be updated simultaneously with no extra matrix-vector products. They
stop their "shifted cg" algorithm when the discrepancy criterion is satisfied for one of their λ values.
Thus the number of matrix-vector products required is twice the number of iterations
for this particular system to converge. We note that while the algorithms we propose
in §4 for finding a good value of λ are based on the same key observation regarding
the Krylov subspace, our methods will usually require less work than the shifted cg
algorithm.
Calvetti, Golub, and Reichel [3] compute upper and lower bounds on the L-curve
generated by the matrices C() using a Lanczos bidiagonalization process. From this,
they approximate the best parameter for Tikhonov regularization without projection.
In x4, we choose instead to approximate the best parameter for Tikhonov regularization
on the projected problem, since this is the approximation to the continuous
problem that is actually being used.
Kaufman and Neumaier [21] suggest an envelope guided conjugate gradient approach
for the Tikhonov L-curve problem. Their method is more complicated than the
methods we propose because they maintain nonnegativity constraints on the variables.
Substantial work has also been done on TSVD regularization of the projected
problems. Bjorck, Grimme, and van Dooren [2] use GCV to determine the truncation
point for the projected SVD. Their emphasis is on stable ways to maintain an accurate
factorization when many iterations are needed, and they use full reorthogonalization
and implicit restart strategies. O'Leary and Simmons [27] take a somewhat different
viewpoint that the problem should be preconditioned appropriately so that a massive
number of iterations is unnecessary. That viewpoint is echoed in this current work,
so we implicitly assume that the problem has been left-preconditioned or "filtered"
[27]. For example, in place of (4), we solve

    min_x ||M(Ax - b)||_2^2 + λ^2 ||x||_2^2

for a square preconditioner M. See [14, 26, 24, 23] for preconditioners appropriate for
certain types of ill-posed problems. Note that we could alternately have considered
right preconditioning, which amounts to solving, in the Tikhonov case,

    min_y ||A M y - b||_2^2 + λ^2 ||y||_2^2

for y and then setting x = M y. Note that either left or right preconditioning
effectively changes the balance between the two terms in the minimization.
4. Regularizing the projected problem. In this section we develop nine approaches
to regularization using Krylov methods. Many Krylov methods have been
proposed; for ease of exposition we focus on just two of these: the LSQR algorithm
of Paige and Saunders [29] and the GMRES algorithm of Saad and Schultz [33].
The LSQR algorithm of Paige and Saunders [29] iteratively computes the bidiag-
onalization introduced by Golub and Kahan [10]. Given a vector b, the algorithm is
as follows [29, Alg. Bidiag 1]:
Compute a scalar β_1 and a vector u_1 of length one so that β_1 u_1 = b.
Similarly, determine α_1 and v_1 so that α_1 v_1 = A^T u_1.
For i = 1, 2, ..., k:
    β_{i+1} u_{i+1} = A v_i - α_i u_i,
    α_{i+1} v_{i+1} = A^T u_{i+1} - β_{i+1} v_i,
    where the non-negative scalars α_{i+1} and β_{i+1} are chosen
    so that u_{i+1} and v_{i+1} have length one.
End for
The vectors u_i and v_i are called the left and right Lanczos vectors respectively. The
algorithm can be rewritten in matrix form by first defining the matrices
U_{k+1} = [u_1, ..., u_{k+1}], V_k = [v_1, ..., v_k], and the (k+1) × k lower bidiagonal matrix
B_k with diagonal entries α_1, ..., α_k and subdiagonal entries β_2, ..., β_{k+1}. With e_i
denoting the ith unit vector, the following relations can be established:

    b = β_1 U_{k+1} e_1,                                           (13)
    A V_k = U_{k+1} B_k,                                           (14)
    A^T U_{k+1} = V_k B_k^T + α_{k+1} v_{k+1} e_{k+1}^T,            (15)

together with U_{k+1}^T U_{k+1} = I_{k+1} and V_k^T V_k = I_k in exact arithmetic,
where the subscript on I denotes the dimension of the identity.
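The recurrence above is straightforward to implement. The following Python sketch (our own; it performs no reorthogonalization, so in finite precision the columns of U and V gradually lose orthogonality) returns the quantities appearing in (13)-(15):

    import numpy as np

    def golub_kahan_bidiag(A, b, k):
        """k steps of Golub-Kahan bidiagonalization (LSQR 'Bidiag 1').

        Returns U (m x (k+1)), V (n x k), the (k+1) x k lower bidiagonal B,
        and beta1 = ||b||_2, so that (in exact arithmetic) A V = U B and
        U[:, 0] * beta1 = b."""
        m, n = A.shape
        U = np.zeros((m, k + 1))
        V = np.zeros((n, k))
        B = np.zeros((k + 1, k))

        beta1 = np.linalg.norm(b)
        U[:, 0] = b / beta1
        t = A.T @ U[:, 0]
        alpha = np.linalg.norm(t)
        V[:, 0] = t / alpha

        for i in range(k):
            B[i, i] = alpha
            u_next = A @ V[:, i] - alpha * U[:, i]
            beta = np.linalg.norm(u_next)
            U[:, i + 1] = u_next / beta
            B[i + 1, i] = beta
            if i + 1 < k:
                v_next = A.T @ U[:, i + 1] - beta * V[:, i]
                alpha = np.linalg.norm(v_next)
                V[:, i + 1] = v_next / alpha
        return U, B, V, beta1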
Now suppose we want to solve

    min_{x ∈ S_k} ||Ax - b||_2,                                    (16)

where S_k denotes the k-dimensional subspace spanned by the first k Lanczos vectors
v_1, ..., v_k. The solution we seek is of the form x^(k) = V_k y^(k) for some vector y^(k) of length k.
Define r^(k) = b - A x^(k) to be the corresponding residual. From the relations above,
observe that in exact arithmetic

    r^(k) = b - A V_k y^(k) = U_{k+1} (β_1 e_1 - B_k y^(k)).        (17)

Since U_{k+1} has, in exact arithmetic, orthonormal columns, we have
||r^(k)||_2 = ||β_1 e_1 - B_k y^(k)||_2. Therefore, the projected problem we wish to solve is

    min_y ||β_1 e_1 - B_k y||_2.                                   (18)

Solving this minimization problem is equivalent to solving the normal equations involving
the bidiagonal matrix:

    B_k^T B_k y = β_1 B_k^T e_1.                                   (19)
Typically k is small, so reorthogonalization to combat the effects of inexact arithmetic
might or might not be necessary. The matrix B k may be ill-conditioned because some
of its singular values approximate some of the small singular values of A. Therefore
solving the projected problem might not yield a good solution y^(k). However, we
can use any of the methods of Section 3 to regularize this projected problem; we
discuss options in detail below. As alluded to in §2.4, the idea is to generate y^(k)_reg, the
regularized solution to (18), and then to compute a regularized solution to (16) as
x^(k)_reg = V_k y^(k)_reg.
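As a minimal illustration of the projection step alone (no inner regularization yet), one can solve (18) directly and map back to the original space; golub_kahan_bidiag is the routine sketched earlier, and the choice of k is left to the caller:

    import numpy as np

    def lsqr_projected_solution(A, b, k):
        """Project with k bidiagonalization steps, solve min_y ||beta1*e1 - B y||_2,
        and backproject: x = V y."""
        U, B, V, beta1 = golub_kahan_bidiag(A, b, k)
        rhs = np.zeros(k + 1)
        rhs[0] = beta1
        y, *_ = np.linalg.lstsq(B, rhs, rcond=None)
        return V @ y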
If we used the algorithm GMRES instead of LSQR, we would derive similar
relations. Here, though, the U and V matrices are identical and the B matrix is
upper Hessenberg rather than bidiagonal. Conjugate gradients would yield similar
relationships.
For cost comparisons for these methods, see Tables 1 and 2. Storage comparisons
are given in Tables 3 and 4.
4.1. Regularization by projection. As mentioned earlier, if we terminate the
iteration after k steps, we have projected the solution onto a k dimensional subspace
and this has a regularizing effect that is sometimes sufficient. Determining the best
value of k can be accomplished, for instance, by one of our three methods of parameter
choice:
1. Discrepancy Principle.
In this case, we stop the iteration for the smallest value of k for which ||r_k||_2 ≤
δ. Both LSQR and GMRES have recurrence relations for determining ||r_k||_2
using scalar computations, without computing either r k or x k [29, 32].
2. GCV.
For the projected problems (see §4.1) defined by either LSQR or GMRES,
the operator A A^# is given by A A^# = U_{k+1} B_k B_k^† U_{k+1}^*, where B_k^†
is the pseudo-inverse of the matrix B_k. Thus from (11), the GCV
functional is [17]

    G(k) = ||b - A x^(k)||_2^2 / (m - k)^2.

We note that there are in fact two distinct definitions for B_k^†
and hence two
definitions for the denominator in G(k); for small enough k, the two are
comparable, and the definition we use here is less expensive to calculate [18,
§7.4].
3. L-Curve.
To determine the L-curve associated with LSQR or GMRES, estimates of
||x_k||_2 and ||r_k||_2 are needed for several values of k. Using either algorithm,
we can compute ||r_k||_2 with only a few scalar calculations. Paige and Saunders
give a similar method for computing ||x_k||_2 [29], but, with GMRES, the cost
for computing ||x_k||_2 is higher. In using this method or GCV, one must go a
few iterations beyond the optimal k in order to verify the optimum [19].
4.2. Regularization by projection plus TSVD. If projection alone does not
regularize, then we can compute the TSVD regularized solution to the projected
problem (19). We need the SVD of the matrix B_k. This requires O(k^3)
operations, but can also be computed from the SVD of B_{k-1} in O(k^2) operations [13].
Clearly, we still need to use some type of parameter selection technique to find a
good value of ℓ(k). First, notice that it is easy to compute the norms of the residual
and the solution resulting from neglecting the ℓ smallest singular values. If ν_{jk} is the
component of e_1 in the direction of the j-th left singular vector of B_k, and if γ_j is
the j-th singular value (ordered largest to smallest), then the residual and solution
2-norms are

    β_1 ( Σ_{j=k-ℓ(k)+1}^{k+1} ν_{jk}^2 )^{1/2}   and   β_1 ( Σ_{j=1}^{k-ℓ(k)} (ν_{jk}/γ_j)^2 )^{1/2}.   (20)

Using this fact, we can use any of our three sample methods:
1. Discrepancy Principle.
Let r^(k)_ℓ denote the quantity b - A x^(k)_ℓ and note that by (13) and orthonor-
mality, ||r^(k)_ℓ||_2 is equal to the first quantity in (20). Therefore, we choose
ℓ(k) to be the largest value for which ||r^(k)_ℓ||_2 ≤ δ,
if such a value exists.
2. GCV.
Another alternative for choosing ℓ(k) is to use GCV to compute ℓ(k) for
the projected problem. The GCV functional for the kth projected problem
is obtained by substituting B_k for A and the truncated pseudo-inverse of B_k
for A^#, and substituting the
expression of the residual in (20) for the numerator in (11).
3. L-Curve.
We now have many L-curves, one for each value of k. The coordinate values
in (20) form the discrete L-curve for a given k, from which the desired value
of '(k) can be chosen without forming the approximate solutions or residuals.
As k increases, the value '(k) chosen by the Discrepancy Principle will be monotonically
nondecreasing.
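A sketch of the projection-plus-TSVD idea of this subsection, with ℓ(k) chosen by the discrepancy principle applied to the projected residual in (20); golub_kahan_bidiag is the routine sketched earlier, and the simple linear search over ℓ is our own (not an optimized implementation):

    import numpy as np

    def hybrid_tsvd(A, b, k, delta):
        """Projection (k bidiagonalization steps) followed by TSVD of B_k, with
        the truncation level ell chosen by the discrepancy principle."""
        U, B, V, beta1 = golub_kahan_bidiag(A, b, k)
        P, gamma, Qt = np.linalg.svd(B)            # full SVD; P is (k+1) x (k+1)
        nu = beta1 * P[0, :]                       # coefficients of beta1*e1 in left sing. vectors
        # residual norm after dropping the ell smallest singular values, cf. (20)
        resid = lambda ell: np.sqrt(np.sum(nu[k - ell:]**2))
        # largest ell whose residual still satisfies the discrepancy principle
        ells = [ell for ell in range(k) if resid(ell) <= delta]
        ell = max(ells) if ells else 0
        keep = k - ell
        y = Qt[:keep].T @ (nu[:keep] / gamma[:keep])
        return V @ y, ell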
4.3. Regularization by projection plus Rust's TSVD. As in standard TSVD,
to use Rust's version of TSVD for regularization of the projected problem requires
that we compute the SVD of the matrix B_k. Using the previous notation,
Rust's strategy is to set

    y^(k)_ρ = Σ_{i ∈ I^(k)_ρ} (β_1 ν_{ik} / γ_i) q_i^(k),

where q_i^(k) are the right singular vectors of B_k and I^(k)_ρ = {i : |β_1 ν_{ik}| ≥ ρ}. We
focus on three ways to determine ae:
1. Discrepancy Principle.
Using the notation from the previous section, the norm of the regularized solution
is given by β_1 ( Σ_{i ∈ I^(k)_ρ} (ν_{ik}/γ_i)^2 )^{1/2}. According to the discrepancy
principle, we must choose ρ so that the residual is less than δ. In practice,
this would require that the residual be evaluated by sorting the values |ν_{ik}|
and adding terms in that order until the residual norm is less than δ.
2. GCV.
Let us denote by card(I (k)
ae ) the cardinality of the set I (k)
ae . From (11), it is
easy to show that the GCV functional corresponding to the projected problem
for this regularization technique is given by
ae
ik
ae
In practice, for each k we first sort the values j ik smallest
to largest. Then we define k discrete values ae j to be equal to these values
with ae 1 being the smallest. We set ae that because the values of
are the sorted magnitudes of the SVD expansion coefficients,
we have
(j
Finally, we take the regularization parameter to be the ae j for which G k (ae j )
is a minimum.
3. L-Curve.
As with standard TSVD, we now have one L-curve for each value of k. For
fixed k, if we define the ae as we did for GCV above and we
reorder the fl i in the same way that the j ik j were reordered when sorted,
then we have
When these solution and residual norms are plotted against each other as
functions of ae, the value of ae j corresponding to the corner is selected as the
regularization parameter.
4.4. Regularization by projection plus Tikhonov. Finally, let us consider
using Tikhonov regularization to regularize the projected problem (18) for some integer
k. Thus, for a given regularization parameter λ, we would like to solve

    min_y ||B_k y - β_1 e_1||_2^2 + λ^2 ||y||_2^2,                 (22)

or, equivalently,

    min_y || [B_k; λI] y - [β_1 e_1; 0] ||_2.                      (23)

The solution y^(k)_λ to either formulation satisfies

    (B_k^T B_k + λ^2 I) y = β_1 B_k^T e_1.                          (24)

Using (13) and (15), we see that y^(k)_λ also satisfies

    (V_k^* A^* A V_k + λ^2 I) y = V_k^* A^* b.                      (25)

Therefore,

    y^(k)_λ = (V_k^* (A^* A + λ^2 I) V_k)^{-1} V_k^* A^* b.

Using x^(k)_λ = V_k y^(k)_λ, we have

    x^(k)_λ = V_k (V_k^* (A^* A + λ^2 I) V_k)^{-1} V_k^* A^* b.

Thus as k → n, the backprojected regularized solution x^(k)_λ approaches the solution
to (4).
We need to address how to choose a suitable value of .
1. Discrepancy Principle.
Note that in exact arithmetic, we have

    r^(k)_λ = b - A V_k y^(k)_λ = U_{k+1} (β_1 e_1 - B_k y^(k)_λ).

Hence ||B_k y^(k)_λ - β_1 e_1||_2 = ||r^(k)_λ||_2. Therefore, to use the Discrepancy Principle
requires we choose λ so that ||r^(k)_λ||_2 ≤ δ over a set of discrete trial values λ_j.
For a given k, we take λ to be the largest value λ_j for which ||r^(k)_λ||_2 ≤ δ, if
it exists; if not, we increase k and test again.
2. GCV.
Let us define (B_k)^#_λ to be the operator mapping the right hand side of the
projected problem onto the regularized solution of the projected problem:
(B_k)^#_λ = (B_k^T B_k + λ^2 I)^{-1} B_k^T.
Given the SVD of B_k as above, the denominator in the GCV functional
defined for the projected problem (refer to (11)) is

    ( m - Σ_{i=1}^k γ_i^2 / (γ_i^2 + λ^2) )^2.

The numerator is simply ||r^(k)_λ||_2^2. For values of k ≪ n, it is feasible to compute
the singular values of B_k.
3. L-Curve.
The L-curve is comprised of the points (||B_k y^(k)_λ - β_1 e_1||_2, ||y^(k)_λ||_2); using
(25) and the orthonormality of the columns of V_k, we see these points are
precisely (||r^(k)_λ||_2, ||x^(k)_λ||_2). For discrete values of λ, the
quantities ||r^(k)_λ||_2 and ||x^(k)_λ||_2 can be obtained by updating their respective
estimates at the (k-1)st iteration. 2
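A sketch of projection plus Tikhonov regularization with λ chosen by the discrepancy principle over a grid of trial values, working entirely with the small matrix B_k as described above; golub_kahan_bidiag is the routine sketched earlier, and the grid of lambdas and δ are supplied by the caller:

    import numpy as np

    def hybrid_tikhonov(A, b, k, lambdas, delta):
        """Projection plus Tikhonov on the projected problem (22); pick the largest
        trial lambda whose projected residual satisfies the discrepancy principle."""
        U, B, V, beta1 = golub_kahan_bidiag(A, b, k)
        P, gamma, Qt = np.linalg.svd(B)
        nu = beta1 * P[0, :]                         # coefficients of beta1*e1
        best_lam, best_y = None, None
        for lam in sorted(lambdas, reverse=True):    # try large lambdas first
            filt = gamma**2 / (gamma**2 + lam**2)
            y = Qt.T @ (filt * nu[:k] / gamma)
            resid = np.sqrt(np.sum(((1 - filt) * nu[:k])**2) + nu[k]**2)
            if resid <= delta:
                best_lam, best_y = lam, y
                break
        if best_y is None:                           # fall back to the smallest lambda tried
            best_lam, best_y = min(lambdas), y
        return V @ best_y, best_lam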
4.5. Correspondence between Direct Regularization and Projection
Plus Regularization. In this section, we argue why the projection plus regularization
approaches can be expected to yield regularized solutions nearly equivalent to
the direct regularization counterpart. The following theorem establishes the desired
result for the case of Tikhonov vs. projection plus Tikhonov.
Theorem 4.1. Fix λ > 0 and define x^(k)_λ to be the kth iterate of conjugate
gradients applied to the Tikhonov problem

    (A^* A + λ^2 I) x = A^* b.

Let y^(k)_λ be the exact solution to the regularized projected problem

    (B_k^T B_k + λ^2 I) y = β_1 B_k^T e_1,

where B_k, β_1, and e_1 are derived from the original problem A^* A x = A^* b, and set z^(k)_λ = V_k y^(k)_λ.
Then z^(k)_λ = x^(k)_λ.
Proof: By the discussion at the beginning of §4.4 and equations (23) and (24), it
follows that y^(k)_λ solves

    (V_k^* A^* A V_k + λ^2 I) y = V_k^* A^* b.

Now the columns of V_k are the Lanczos vectors with respect to the matrix A^* A and
right-hand side A^* b. But these are the same as the Lanczos vectors generated with
respect to the matrix A^* A + λ^2 I and right-hand side A^* b. Therefore V_k y^(k)_λ is precisely
the kth iterate of conjugate gradients applied to (A^* A + λ^2 I) x = A^* b [pg. 495].
Hence z^(k)_λ = x^(k)_λ. 2
2 The technical details of the approach are found in [28, pp. 197-198]. The implementation
details for estimating ||x^(k)_λ||_2 and ||r^(k)_λ||_2 were
taken from the Paige and Saunders algorithm at http://www.netlib.org/linalg/lsqr.
Table 2
Summary of flops for projection plus inner regularization with various parameter selection
techniques, in addition to the O(qk) flops required for projection itself. Here k is the number of
iterations (i.e., the size of the projection) taken and p is the number of discrete parameters that must
be tried.

    Projection plus -    Disc.    GCV    L-curve
Let us turn to the case of TSVD regularization applied to the original problem
vs. the projection plus TSVD approach. Direct computation convinces us that the
two methods compute the same regularized solution if k = n and arithmetic is exact.
An approximate result holds in exact arithmetic when we take k iterations, with
k < n. Let the singular value decomposition of B_k be denoted by B_k = P_k Γ_k Q_k^*,
and define the s × j matrix W_{s,j} as

    W_{s,j} = [ I_j ; 0 ].
Then the regularized solution obtained from the TSVD regularization of the projected
problem is
reg
denotes the leading j \Theta j principle submatrix of \Gamma k . If k is taken to be
enough larger than j so that V k Q k W k;j "
U T and
the leading principle submatrix of \Sigma, then we expect x (k)
reg to be a
good approximation to x ' . This is made more precise in the following theorem.
Theorem 4.2. Let k ? j such that
contain the first j columns of "
U respectively. Let
Then
reg
kbk:
Proof: Using the representations x
2 )b, we obtain
reg
and the conclusion follows from bounding each term. 2
Note that typically σ_j ≫ σ_n so that 1/σ_j is not too large. For some results
relating to the value of k necessary for the hypothesis of the theorem to hold, the
interested reader is referred to the Kaniel-Paige-Saad convergence theory [30, §12.4].
Table 3
Summary of additional storage for each of four regularization methods under each of three
parameter selection techniques. The original matrix is m × n with q nonzeros, p is the number of
discrete parameters that must be tried, k iterations are used in projection, and the factorizations are
assumed to take q̂ storage.

                    Basic cost    Disc.    GCV     L-curve
    TSVD            O(q̂)          O(1)     O(m)    O(m)
    Rust's TSVD     O(q̂)          O(m)     O(m)    O(m)
    Projection      O(kn)         O(1)     O(k)    O(k)

Table 4
Summary of storage, not including storage for the matrix, for the projection plus inner regularization
approach, with various parameter selection techniques. Here p denotes the number of discrete
parameters tried. Each of these regularization methods also requires us to save the basis V or else
regenerate it in order to reconstruct x.

    Projection plus -    Disc.    GCV    L-curve
    Rust's TSVD          O(k)     O(k)
5. Numerical results. In this section, we present two numerical examples. All
experiments were carried out using Matlab and Hansen's Regularization Tools [16],
with IEEE double precision floating point arithmetic. Since the exact, noise-free
solutions were known in both examples, we evaluated the methods using the two-
norm difference between the regularized solutions and the exact solutions. In both
examples when we applied Rust's method to the original problem, the ae i were taken
to be the magnitudes of the spectral coefficients of b sorted in increasing order.
5.1. Example 1. The 200 × 200 matrix A and true solution x_true for this example
were generated using the function baart in Hansen's Regularization Toolbox. We
generated b_true = A x_true and then computed the noisy vector b as b_true + e, where e was
generated using the Matlab randn function and was scaled so that the noise level
||e||_2 / ||b_true||_2 took a prescribed small value. The condition number of A was on the order of 10^19.
Many discrete values of λ were tested. Table 5 displays the
values of the regularization parameters chosen when the three parameter selection
techniques were applied together with one of the four regularization methods on the
original problem. Since ||e||_2 = 5.3761E-4, we set the δ that defines the discrepancy
principle as the very close approximation 5.5E-4.
The last column in the table gives the value of the parameter that yielded a
regularized solution with the minimum relative error when compared against the true
solution. The relative error values for regularized solutions corresponding to the
parameters in Table 5 are given in Table 6. Note that using GCV to determine a
regularization parameter for Rust's TSVD resulted in an extremely noisy solution
with huge error.
Table 5
Example 1: parameter values selected for each method.

                    Disc.       GCV         L-curve     optimal
    Rust's TSVD ρ   1.223E-4    9.645E-7    1.223E-4    1.259E-4 or 1.223E-4
    Projection

Table 6
Example 1: comparison of ||x_true - x_reg||_2 / ||x_true||_2 for each of 4 regularization methods on
the original problem, where the regularization parameter was chosen using the methods indicated.

                    Disc.    GCV      L-curve   optimal
    Rust's TSVD     .1213    7E+14    .1213     .1213
    Projection      .1134    .1207    .1134     .1134

The corners of the L-curves for the Tikhonov, projection, and TSVD methods
were determined using Hansen's lcorner function, with the modification that points
corresponding to solution norms greater than 10^6 for the TSVD methods were not
considered (otherwise, a false corner resulted).
Next, we projected using LSQR and then regularized the projected problem with
one of the three regularization methods considered. For each of the three methods,
we computed regularization parameters for the projected problem using Discrepancy,
GCV, and L-curve, then computed the corresponding regularized solutions; the parameters
that were selected in each case at iterations 10 and 40 are given in Tables 7
and 9 respectively. As before, the lcorner routine was used to determine the corners
of the respective L-curves.
Comparing Tables 6 and 8, we observe that computing the regularized solution
via projection plus Tikhonov for projection size of 10 using either the Discrepancy
Principle or the L-curve to find the regularization parameter gives results as good as if
those techniques had been used with Tikhonov on the original problem to determine
a regularized solution. Similar statements can be made for projection plus TSVD
and projection plus Rust's TSVD. We should also note that for Tikhonov, with and
without projection, none of the errors in the tables is optimal; that is, no parameter
selection technique ever gave the parameter for which the error was minimal.
5.2. Example 2. The 255 × 255 matrix A for this example was a symmetric
Toeplitz matrix with bandwidth 16 and exponential decay across the band. 3 The
true solution vector x_true is displayed as the top picture in Figure 2. We generated
b_true = A x_true and then computed the noisy vector b as b_true + e, where e was generated
using the Matlab randn function and was scaled to a prescribed noise level ||e||_2 / ||b_true||_2.
The vector b is shown in the bottom of Figure 2. The condition number of A
was large.
We generated our discrete λ_i on a logarithmic grid. The norm of the
noise vector was 7.16E-2, so we took the value of δ that defines the discrepancy
principle to be 8.00E-2.
In this example, it took 61 iterations for LSQR to reach a minimum relative error
of 9.48E-2, and several more iterations were needed for the L-curve method to
3 It was generated using the Matlab command
Table 7
Example 1, iteration 10: regularization parameters selected for projection plus Tikhonov,
TSVD, and Rust's TSVD.

                      Disc.       GCV         L-curve     optimal
    Rust's TSVD ρ(k)  1.679E-4    1.773E-4    1.679E-5    1.679E-5

Table 8
Example 1, iteration 10: comparison of ||x_true - x_reg||_2 / ||x_true||_2 for projection plus Tikhonov,
TSVD, and Rust's TSVD.

                      Disc.    GCV      L-curve   optimal
    Rust's TSVD       .1213    .1663    .1213     .1213

Table 9
Example 1, iteration 40: regularization parameters selected for projection plus Tikhonov,
TSVD, and Rust's TSVD.

                      Disc.       GCV         L-curve     optimal
    Rust's TSVD ρ(k)  9.201E-5    1.225E-4    9.201E-5    9.201E-5

Table 10
Example 1, iteration 40: comparison of ||x_true - x_reg||_2 / ||x_true||_2 for projection plus Tikhonov,
TSVD, and Rust's TSVD.

                      Disc.    GCV      L-curve   optimal
    Rust's TSVD       .1162    .1162    .1162     .1162
Fig. 2. Example 2: Top: exact solution. Bottom: noisy right hand side b.
Table 11
Example 2: parameter values selected for each method. The projection was performed on a left
preconditioned system.

                    Disc.       GCV         L-curve     optimal
    Rust's TSVD ρ   2.183E-2    2.586E-6    1.477E-2    1.527E-2
    Projection

Table 12
Example 2: comparison of ||x_true - x_reg||_2 / ||x_true||_2 for each of 4 regularization methods on
the original problem.

                    Disc.    GCV    L-curve   optimal
    Rust's TSVD
    Projection
estimate a stopping parameter. Likewise, the dimension k of the projected problem
had to be around 60 to obtain good results with the projection-plus-regularization ap-
proaches, and much larger than 60 for the L-curve applied to the projected, Tikhonov
regularized problem to give a good estimate of the corner with respect to the Tikhonov
regularized original problem. Therefore, for the projection based techniques, we chose
to work with a left preconditioned system (refer to the discussion at the end of §3.5).
Our preconditioner was chosen as in [22] where the parameter defining the preconditioner
was taken to be
The values of the regularization parameters chosen when the three parameter
selection techniques were applied together with one of the four regularization methods
on the original problem are given in Table 11. The last column in the table gives the
value of the parameter that gave a regularized solution with the minimum relative
error over the range of discrete values tested, with respect to the true solution. The
relative errors that resulted from computing solutions according to the parameters in
Table
11 are in Table 12. We note that GCV with TSVD and Rust's TSVD were
ineffective.
The corners of the L-curves for the Tikhonov, projection, and TSVD methods
were determined using Hansen's lcorner function, with the modification that points
corresponding to the largest solution norms for the TSVD methods were not considered
(otherwise, a false corner was detected by the lcorner routine).
Next, we projected using LSQR (note that since the matrix and preconditioner
were symmetric, we could have used MINRES as in [22]) and then regularized the
projected problem with one of the three methods considered. For each of the three
methods, we computed regularization parameters for the projected problem using Dis-
crepancy, GCV, and L-curve, then computed the corresponding regularized solutions;
the parameters that were selected in each case at iterations 15 and 25 are given in Tables
13 and 15, respectively. The relative errors of the regularized solutions generated
accordingly are given in Tables 14 and 16.
Again, we used the lcorner routine to determine the corners of the respective
L-curves, except in the case of Rust's TSVD method. In the latter case, there was
Table 13
Example 2, iteration 15: regularization parameters selected for projection plus Tikhonov,
TSVD, and Rust's TSVD.

                      Disc.       GCV         L-curve     optimal
    Rust's TSVD ρ(k)  3.558E-2    3.558E-2    3.558E-2    3.558E-2

Table 14
Example 2, iteration 15: comparison of ||x_true - x_reg||_2 / ||x_true||_2 for projection plus Tikhonov,
TSVD, and Rust's TSVD.

                      Disc.    GCV    L-curve   optimal
    Rust's TSVD
always a very sharp corner that could be picked out visually.
Comparing Table 11 with Tables 13 and 15, we see that the parameter chosen
by applying the L-curve method to projected-plus-Tikhonov problem was the same
parameter chosen by applying the L-curve to the original problem. Moreover, a comparison
of Table 12 with Tables 14 and 16 shows that relative errors of the regularized
solutions computed accordingly are comparable to applying Tikhonov to the original
problem with that same parameter. Similar results are shown for the other cases,
with the exception that the discrepancy principle did not work well for the projection-
plus-TSVD problems, and GCV was not effective for the projected problems when
6. Conclusions. In this work we have given methods for determining the regularization
parameter and regularized solution to the original problem based on regularizing
a projected problem. The proposed approach of applying regularization
and parameter selection techniques to a projected problem is economical in time
and storage. We presented results that in fact the regularized solution obtained by
backprojecting the TSVD or Tikhonov solution to the projected problem is almost
equivalent to applying TSVD or Tikhonov to the original problem, where "almost"
depends on the size of k. The examples indicate the practicality of the method, and
illustrate that our regularized solutions are usually as good as those computed using
the original system and can be computed in a fraction of the time, using a fraction of
the storage. We note that similar approaches are valid using other Krylov subspace
methods for computing the projected problem.
In this work, we did not address potential problems from loss of orthogonality
as the iterations progress. In this discussion, we did, however, assume that either k
was naturally very small compared to n or that preconditioning had been applied to
enforce this condition. Possibly for this reason, we found that for modest k, round-off
did not appear to degrade either the LSQR estimates of the residual and solution
norms or the computed regularized solution in the following sense: the regularization
parameters chosen via the projection-regularization and the corresponding regularized
solutions were comparable to those chosen and generated for the original discretized
problem.
Table 15
Example 2, iteration 25: regularization parameters selected for projection plus Tikhonov,
TSVD, and Rust's TSVD.

                      Disc.       GCV         L-curve     optimal
    Rust's TSVD ρ(k)  4.828E-2    7.806E-3    4.828E-2    4.828E-2

Table 16
Example 2, iteration 25: comparison of ||x_true - x_reg||_2 / ||x_true||_2 for projection plus Tikhonov,
TSVD, and Rust's TSVD.

                      Disc.    GCV    L-curve   optimal
    Rust's TSVD

For the Tikhonov approach in this paper, we have assumed that the regularization
operator L was the identity or was related to the preconditioning operator; this allowed
us to compute ||r^(k)_λ||_2 and ||x^(k)_λ||_2 for multiple values of λ efficiently for each k.
If L is not the identity but is invertible, we can first implicitly transform the problem
to "standard form" [17]. With y = Lx, we can solve the equivalent system

    min_y ||A L^{-1} y - b||_2^2 + λ^2 ||y||_2^2.

Then the projection plus regularization schemes may be applied to this transformed
problem. Clearly the projection based schemes will be useful as long as solving systems
involving L can be done efficiently.
--R
Estimation of the L-curve via Lanczos bidiagonalization
Galerkin projection method for solving multiple linear systems
The 'minimum reconstruction error' choice of regularization parameters: Some more efficient methods and their application to deconvolution problems
Using the L-curve for determining optimal regularization parameters
Equivalence of regularization and truncated iteration in the solution of ill-posed image reconstruction problems
Fast CG-based methods for Tikhonov-Phillips regularization
Generalized cross-validation as a method for choosing a good ridge parameter
Calculating the singular values and pseudo-inverse of a matrix
Matrix Computations
Theory of Tikhonov Regularization for Fredholm equations of the First Kind
A stable and fast algorithm for updating the singular value decom- position
Preconditioned iterative regularization for ill-posed problems
Analysis of discrete ill-posed problems by means of the L-curve
a Matlab package for analysis and solution of discrete ill-posed problems
The use of the L-curve in the regularization of discrete ill-posed problems
Regularization for Applied Inverse and Ill-Posed Problems
Regularization of ill-posed problems by envelope guided conjugate gradients
Symmetric Cauchy-like preconditioners for the regularized solution of 1-d ill-posed problems
Pivoted Cauchy-like preconditioners for regularized solution of ill-posed problems
On the solution of functional equations by the method of regularization
Iterative image restoration using approximate inverse preconditioning
A bidiagonalization-regularization procedure for large scale discretization of ill-posed problems
Algorithm 583
The Symmetric Eigenvalue Problem
Truncating the singular value decomposition for ill-posed problems
Iterative Methods for Sparse Linear Systems
GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems
Solutions of Ill-Posed Problems
Pitfalls in the numerical solution of linear ill-posed problems
| projection;l-curve;truncated singular value decomposition;regularization;discrepancy principle;iterative methods;ill-posed problems;tikhonov;krylov subspace
587794 | Data Fitting Problems with Bounded Uncertainties in the Data. | An analysis of a class of data fitting problems, where the data uncertainties are subject to known bounds, is given in a very general setting. It is shown how such problems can be posed in a computationally convenient form, and the connection with other more conventional data fitting problems is examined. The problems have attracted interest so far in the special case when the underlying norm is the least squares norm. Here the special structure can be exploited to computational advantage, and we include some observations which contribute to algorithmic development for this particular case. We also consider some variants of the main problems and show how these too can be posed in a form which facilitates their numerical solution. | Introduction
Let A ∈ R^{m×n} and b ∈ R^m arise from observed data, and for x ∈ R^n let r = Ax - b.
Then a conventional fitting problem is to minimize ||r|| over x ∈ R^n, where the norm
is some norm on R^m. This involves an assumption that A is exact, and all the errors
are in b, which may not be the case in many practical situations; the effect of errors in
A as well as b has been recognized and studied for many years, mainly in the statistics
literature. One way to take the more general case into account is to solve the problem

    minimize ||E : d||  over E, d, x   subject to  (A + E)x = b + d,        (1.1)

where the matrix norm is one on m × (n+1) matrices. This problem, when the matrix
norm is the Frobenius norm, was first analyzed by Golub and Van Loan [10], who used
the term total least squares and developed an algorithm based on the singular value
decomposition of [A : b]. Since then, the problem has attracted considerable attention:
see, for example, [18], [19].
While the formulation (1.1) is often satisfactory, it can lead to a solution in which
the perturbations E or d are quite large. However, it may be the case that, for exam-
ple, A is known to be nearly exact, and the resulting correction to A may therefore be
excessive. In particular, if bounds are known for the size of the perturbations, then
it makes sense to incorporate these into the problem formulation, and this means
that the equality constraints in (1.1) should be relaxed and satisfied only approxi-
mately. These observations have motivated new parameter estimation formulations
where both A and b are subject to errors, but in addition, the quantities E and d are
bounded, having known bounds. This idea gives rise to a number of different, but
closely related, problems and algorithms and analysis for problems of this type based
on least squares norms are given, for example, in [1], [2], [3], [4], [5], [8], [9], [15], [17].
The general problem (1.1) is amenable to analysis and algorithmic development
for a wide class of matrix norms, known as separable norms, a concept introduced by
Osborne and Watson [13]. The main purpose of this paper is to show how problems
with bounded uncertainties also can be considered in this more general setting. In
particular, it is shown how such problems can be posed in a more computationally
convenient form. As well as facilitating their numerical solution, this enables connections
with conventional data fitting problems to be readily established. Motivation
for extending these ideas beyond the familiar least squares setting is provided by the
important role which other norms can play in more conventional data fitting contexts.
We continue this introductory section by defining separable norms and by introducing
some other necessary notation and tools. We first introduce the concept of the
dual norm. Let ||.|| be a norm on R^m. Then for any v ∈ R^m, the dual norm is the
norm on R^m defined by

    ||v||^* = max_{||r|| ≤ 1} r^T v.

The relationship between a norm on R^m and its dual is symmetric, so that for any
r ∈ R^m,

    ||r|| = max_{||v||^* ≤ 1} r^T v.
Definition 1.1. A matrix norm on m × n matrices is said to be separable if,
given vectors u ∈ R^m and v ∈ R^n, there are vector norms ||.||_A on R^m and ||.||_B on
R^n such that

    ||u v^T|| = ||u||_A ||v||^*_B,

where ||.||^*_B denotes the dual of ||.||_B.
Most commonly occurring matrix norms (operator norms, orthogonally invariant
norms, norms based on an l_p vector norm on the elements of the matrix treated as an
extended vector in R^{mn}) are separable. A result which holds for separable norms
and will be subsequently useful is that

    ||Mv||_A ≤ ||M|| ||v||_B   for all M ∈ R^{m×n}, v ∈ R^n;

see, for example, [13] or [20].
Another valuable tool is the subdifferential of a vector norm, which extends the
idea of the derivative to the nondifferentiable case. A useful characterization of the
subdifferential (for ||.||_A) is as follows.
Definition 1.2. The subdifferential or set of subgradients of ||r||_A is the set

    ∂||r||_A = {v : ||r||_A = r^T v, ||v||^*_A ≤ 1}.
If the norm is differentiable at r, then the subdifferential is just the unique vector
of partial derivatives of the norm with respect to the components of r.
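For the l_p vector norms the dual norm and a subgradient have simple closed forms, so Definition 1.2 can be checked numerically. The sketch below is our own illustration (not part of the paper) for the smooth case 1 < p < ∞:

    import numpy as np

    def lp_norm(r, p):
        return np.sum(np.abs(r)**p)**(1.0 / p)

    def lp_dual_exponent(p):
        return p / (p - 1.0)                   # 1/p + 1/q = 1

    def lp_subgradient(r, p):
        """Unique subgradient of ||r||_p at r (1 < p < inf, r != 0)."""
        nr = lp_norm(r, p)
        return np.sign(r) * np.abs(r)**(p - 1) / nr**(p - 1)

    # check: r^T v = ||r||_p and ||v||_q <= 1, as in Definition 1.2
    rng = np.random.default_rng(1)
    r, p = rng.standard_normal(6), 3.0
    v, q = lp_subgradient(r, p), lp_dual_exponent(p)
    print(np.isclose(v @ r, lp_norm(r, p)), lp_norm(v, q) <= 1 + 1e-12)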
The main emphasis of this paper is on problems which address the effects of worst
case perturbations. This gives rise to problems of min-max type. In section 2, we
consider problems where separate bounds on kEk and kdkA are assumed known, and
in section 3, we consider a similar problem except that a single bound on kE : dk is
given. In both cases, the matrix norm is assumed to be separable. In section 4, some
variants of the original problems are considered, and finally, in section 5 we consider
a related class of problems which are of min-min rather than min-max type.
2. Known bounds on ||E|| and ||d||_A. Suppose that the underlying problem
is such that we know bounds on the uncertainties in A and b so that

    ||E|| ≤ ρ_E,   ||d||_A ≤ ρ_d,

where the matrix norm is a separable norm, as in Definition 1.1. Then instead of
forcing the equality constraints of (1.1) to be satisfied, we wish to satisfy them approximately
by minimizing the A-norm of the difference between the left- and right-hand
sides, over all perturbations satisfying the bounds. This leads to the problem

    min_x  max_{||E|| ≤ ρ_E, ||d||_A ≤ ρ_d} ||(A + E)x - (b + d)||_A.        (2.1)

Therefore x minimizes the worst case residual, and this can be interpreted as permitting
a more robust solution to be obtained to the underlying data fitting problem:
for an explanation of the significance of the term robustness in this context, in the
least squares case, see, for example, [9], where a minimizing x is referred to as a robust
least squares solution. Another interpretation of the problem being solved is that it
guarantees that the effect of the uncertainties in the data will never be overestimated,
beyond the assumptions made by knowledge of the bounds.
We now show that (2.1) can be restated in a much simpler form as an unconstrained
problem in x alone.
Theorem 2.1. For any x, the maximum in (2.1) is attained when

    E = ρ_E u w^T,   d = -ρ_d u,

where w ∈ ∂||x||_B and u = (Ax - b)/||Ax - b||_A if Ax ≠ b;
otherwise u is arbitrary but ||u||_A = 1. The maximum
value is

    ||Ax - b||_A + ρ_E ||x||_B + ρ_d.

Proof. We have for any E, d such that ||E|| ≤ ρ_E, ||d||_A ≤ ρ_d,

    ||(A + E)x - b - d||_A ≤ ||Ax - b||_A + ||Ex||_A + ||d||_A ≤ ||Ax - b||_A + ρ_E ||x||_B + ρ_d.

Now let E and d be as in the statement of the theorem. Then ||E|| ≤ ρ_E and ||d||_A = ρ_d,
and further

    ||(A + E)x - b - d||_A = ||Ax - b||_A + ρ_E ||x||_B + ρ_d.

The result follows.
An immediate consequence of this result is that the problem (2.1) is solved by
minimizing with respect to x

    ||Ax - b||_A + ρ_E ||x||_B + ρ_d,        (2.2)
and it is therefore appropriate to analyze this problem. In particular, we give conditions
for x to be a solution and also conditions for that solution to be x = 0; these
results are a consequence of standard convex analysis, as is found, for example, in
[14].
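To make the reformulation concrete, the sketch below (our own; it takes ||.||_A = ||.||_B = l_2, i.e., the Frobenius norm on E, and the bounds ρ_E, ρ_d and the data are arbitrary illustrative choices) evaluates the worst-case residual (2.2) at a candidate x and checks that randomly sampled feasible perturbations never exceed it, as guaranteed by Theorem 2.1:

    import numpy as np

    rng = np.random.default_rng(2)
    m, n = 8, 3
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    rho_E, rho_d = 0.1, 0.05

    # worst-case residual from Theorem 2.1 (least squares norms)
    f = np.linalg.norm(A @ x - b) + rho_E * np.linalg.norm(x) + rho_d

    worst = 0.0
    for _ in range(2000):
        E = rng.standard_normal((m, n)); E *= rho_E / np.linalg.norm(E, 'fro')
        d = rng.standard_normal(m);      d *= rho_d / np.linalg.norm(d)
        worst = max(worst, np.linalg.norm((A + E) @ x - b - d))
    print(worst <= f + 1e-12, f - worst)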
Theorem 2.2. The function (2.2) is minimized at x if and only if there exist
v ∈ ∂||Ax - b||_A and w ∈ ∂||x||_B such that

    A^T v + ρ_E w = 0.        (2.3)

Theorem 2.3. Let there exist v ∈ ∂||b||_A so that ||A^T v||^*_B ≤ ρ_E. Then x = 0 minimizes (2.2).
Proof. For x = 0 to give a minimum we must have v ∈ ∂||b||_A so that (2.3) is
satisfied with some w with ||w||^*_B ≤
1. The result follows.
2.1. Connections with least norm problems. We will next establish some
connections between solutions to (2.2) and solutions to traditional minimum norm
data fitting problems. In [9], coincidence of solutions in the least squares case is said
to mean that the usual least squares solution may be considered to be robust.
Consider the least norm problem
    minimize ||Ax - b||_A.        (2.4)
Then x is a solution if and only if there exists v 2 @kAx bk A such that
solves (2.4), then clearly it also solves (2.2). (Note that we can take
(2.3).) Otherwise, if k:k A is smooth, solutions to (2.2) and to (2.4) can coincide only
is unique). In this case, let b be any solution
to denote the minimum A-norm solution to A T
Then if x minimizes (2.2), (2.3) is satised with w 2 @kxkB and kvk
A 1, otherwise
v is unrestricted. Because
it follows that we must have
A
In other words, A is also a solution to (2.2) only if b 2 range(A) and
A
For example, if both norms are least squares norms, then this condition is
Note that if b is the minimum B-norm solution to immediately
solves (2.2), and so there must exist w such that this inequality is satised
independently of .
The case when k:k A is nonsmooth is more complicated.
Example 2.1. Let
corresponds to the separable norm being the sum of moduli of the components.) Then
(2:4) is solved by any x; 1 x 2. Further a solution to (2:2) provided that
2.
To summarize, we can augment Theorem 2.3 by the following, which can be
interpreted as a generalization of a result of [9].
Theorem 2.4. If b 2 range(A) and any solution to
provided that
A
We can also prove the following, which connects Theorems 2.3 and 2.4.
Theorem 2.5. Let b 2 range(A), and
A
min
Proof. Let otherwise arbitrary. It follows by
denition of A + and
A T
Thus
and
Now
using (2.5) and (2.6). The result follows.
A consequence of the above results is that if b 2 range(A) and
A
min
then any point in the convex hull of f0; A + bg is a solution.
2.2. Methods of solution. From a practical point of view, it is obviously of
importance to efficiently solve (2.1) (or, equivalently, (2.2)) in appropriate cases.
Let f(x) denote the function being minimized in (2.2).
Most commonly occurring norms are either smooth (typified by l_p norms, 1 < p < ∞)
or polyhedral (typified by the l_1 and l_∞ norms). If the norms in the definition of f are
smooth, then derivative methods are natural ones to use. A reasonable assumption
in most practical situations is that b ∉ range(A), so that x = 0 would then give the
only derivative discontinuity. If x = 0 is not a solution, then f is differentiable in a
neighborhood of the solution, and once inside that neighborhood, derivative methods
can be implemented in a straightforward manner. Theorem 2.3 tells us when x = 0
is a solution; the following theorem gives a way of identifying a descent direction at
that point in the event that it is not. It applies to arbitrary norms.
Theorem 2.6. Assume that
vk
B be such that
g. Then d is a descent direction for f at
Proof. Let the stated conditions be satisfied, and let d be defined as in the
statement of the theorem. By Theorem 2.3, x = 0 is not a solution. For d to be a
descent direction at x = 0, the directional derivative of f at x = 0 in the direction d
must be negative, that is,
arbitrary. Then
vk
< 0:
The result follows.
If ||.||_A is smooth, then v̂ is unique, and the construction of d using this result is
straightforward. If the norm on E is the entrywise l_p norm

    ||E|| = ( Σ_{i,j} |e_{ij}|^p )^{1/p},

then f becomes

    f(x) = ||Ax - b||_p + ρ_E ||x||_q + ρ_d,   where 1/p + 1/q = 1.
In fact, the minimization of f for any p, q satisfying 1 < p, q < ∞
can be readily achieved by derivative methods, using Theorem 2.6 to get
started. Indeed, it is normally the case that second derivatives exist and can easily
be calculated so that Newton's method (damped if necessary) can be used following
a step based on Theorem 2.6. The Hessian matrix of f is positive definite (because f
is convex), so that the Newton direction is a descent direction away from a minimum.
Some numerical results are given in [21].
For polyhedral norms (typified by l_1 and l_∞ norms), the convex objective function
(2.2) is a piecewise linear function. Therefore, it may be posed as a linear programming
problem, and solved by appropriate methods.
Arguably, the most interesting case from a practical point of view is the special
case when both ||.||_A and ||.||_B are the least squares norm, so that

    f(x) = ||Ax - b||_2 + ρ_E ||x||_2 + ρ_d.        (2.7)
This case has particular features which greatly facilitate computation, and Chandrasekaran
et al. [1], [3] exploit these in a numerical method. In contrast to the
problems considered above, which involve a minimization problem in R n , special features
of the l 2 case can be exploited so that the problem reduces to one in R. When
(2.7) is differentiable, (2.3) becomes

    A^T (Ax - b)/||Ax - b||_2 + ρ_E x/||x||_2 = 0,

or

    (A^T A + α I) x = A^T b,

where α = ρ_E ||Ax - b||_2 / ||x||_2.
Let the singular value decomposition of A be
where U 2 R mm and V 2 R nn are orthogonal and ng is the
matrix of singular values in descending order of magnitude. Let
It will be assumed in what follows that A has rank
n, and further that x = 0 is not a solution (which means, in particular, that b_1 ≠ 0) and that
b ∉ range(A) (which means that b_2 ≠ 0). From Theorem 2.3, we require that

    ρ_E < ||A^T b||_2 / ||b||_2.        (2.8)
Then it is shown in [3] that satises the equation
where
This can be rearranged as
where
It is also shown in [3] that (2.8) is both necessary and sufficient for (2.10) to have
exactly one positive root α*. In addition, G'(α*) > 0. Different methods can be used
for finding α* in this case. One possibility which is suggested by (2.9) is the simple
iteration process
and it is of interest to investigate whether or not this is likely to be useful. It turns
out that this method is always locally convergent, as the following result shows.
Theorem 2.7. Let
<
Then (2:10) has exactly one positive root and (2:11) is locally convergent to .
Proof. Let satisfy (2.12). Then (2.10) has a unique positive root . Dieren-
tiating G() gives
and so
using G( Now g() and G() are related by
and so
using g(
Table 1
Simple iteration: stack loss data.
Substituting from (2.13) gives
0:
It follows using (2.15) and G 0 ( ) > 0 that
and the result is proved.
Indeed, simple iteration seems to be remarkably effective, and in problems tried,
it converged in a satisfactory way from obvious starting points.
For example, for the stack loss data set of Daniel and Wood [6] (n =
4), performance for different values of ρ_E is shown in Table 1, where the iteration is
terminated when the new value differs from the previous one by less than 10^-6.
Another example is given by using the Iowa wheat data from Draper and Smith
(n = 9). The performance of simple iteration in this case is shown in
Table 2.
Although simple iteration is in some ways suggested by the above formulation, of
course higher order methods can readily be implemented, such as the secant method
or Newton's method. Actual performance will depend largely on factors such as the
nature and size of the problem and the relative goodness of starting points.
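In the least squares case the quantity being iterated on can be written directly in terms of x_α = (A^T A + α I)^{-1} A^T b. The sketch below is one natural reading of the fixed-point structure suggested by (2.9)-(2.11) (the symbols ρ_E and α, the starting value, and the tolerance are our own illustrative choices), iterating α ← ρ_E ||A x_α - b||_2 / ||x_α||_2 until it settles:

    import numpy as np

    def robust_ls_simple_iteration(A, b, rho_E, alpha0=1.0, tol=1e-6, maxit=100):
        """Fixed-point iteration for the l2 robust problem: given alpha, solve
        (A^T A + alpha I) x = A^T b, then update alpha = rho_E ||Ax - b|| / ||x||."""
        AtA, Atb = A.T @ A, A.T @ b
        n = A.shape[1]
        alpha = alpha0
        for _ in range(maxit):
            x = np.linalg.solve(AtA + alpha * np.eye(n), Atb)
            alpha_new = rho_E * np.linalg.norm(A @ x - b) / np.linalg.norm(x)
            if abs(alpha_new - alpha) < tol:
                return x, alpha_new
            alpha = alpha_new
        return x, alpha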
3. A known bound on ||E : d||. Suppose now that the underlying problem is
such that we know upper bounds on the uncertainties in A and b, in the form

    ||E : d|| ≤ ρ,

where ρ and the (separable) matrix norm are given. Consider the problem of determining

    min_x  max_{||E : d|| ≤ ρ} ||(A + E)x - (b + d)||_A,        (3.1)
Table
Simple iteration: Iowa wheat data.
6 5.912536 12.925885 30.839779
9 12.927008 30.881951
14 30.882685
where the A-norm on R m is dened by the particular choice of separable norm (or vice
versa). This problem and variants have been considered, for example, by El Ghaoui
and Lebret [8], [9], where the matrix norm is the Frobenius norm, so that both the Aand
B-norms are least squares norms. Arguing as in Theorem 2.1 gives the following
result.
Theorem 3.1. For any x, the maximum in (3:1) is attained when
where
any vector with 1. The maximum value is
The problem (3.1) is therefore equivalent to the problem of minimizing with respect
to x
kAx bk A
Standard convex analysis then gives the following result.
Theorem 3.2. The function (3:4) is minimized at x if and only if there exists
where u 1 denotes the rst n components of u.
3.1. Connection with least norm problems. As before, it is of interest to
establish connections with the corresponding least norm problems. If
(2.4), then it will also minimize (3.4) for monotonic norms k:k B . (k:k B is a monotonic
norm on R n+1 if kck B kdkB whenever jc does
not solve (2.4), then just as before when k:k A is smooth, solutions to this problem
and (3.1) cannot coincide unless b 2 range(A). In that case, as in section 2.1 let
a solution to denote the minimum
A-norm solution to A T For a solution to (3.4), there must exist v; kvk
(otherwise unrestricted) so that
1284 G. A. WATSON
where consists of the rst n components of u Therefore,
and so
A
In other words the solutions will coincide if b 2 range(A) and
A
Note that if k:k B is smooth, then u is unique. For example, when both norms are
least squares norms, this gives
as given in [9]. The situation when k:k A is not smooth is, of course, once again more
complicated. Consider again Example 2.1 where > 0 is arbitrary. Recall that (2.4)
is solved by any x; 1 x 2: the unique solution to the problem of minimizing (3.4)
is
3.2. Connection with total approximation problems. The nature of the
bound in (3.1) means that there is a connection to be made with the total approximation
problem (1.1). It is known [13], [20] that a minimum value of (1.1) coincides
with the minimum of the problem
subject to
the smallest -generalized singular value of the matrix [A : b]. In particular, if the
vector norms are least squares norms, then this is just the smallest singular value of
at a minimum of (1.1) is obtained from a z = z T at a minimum
of (3.7) by scaling so that
z T
corresponds to nonexistence of a
solution to (1.1).) It is known also that a minimizing pair E; d is given by
and consider the problem
(3.
or equivalently,
so if k:k A is smooth, x T is a solution to
this problem provided that
A
as a consequence of the previous analysis. For example, when both norms are least
squares norms, this gives (see also [9])
For the least squares case, El Ghaoui and Lebret [8] suggest using robust methods
in conjunction with total approximation to identify an appropriate value of . The
idea is rst to solve the total approximation problem. Then (3.8) is constructed from
the total approximation solution and solved with set to T , the minimum value in
(3.7), that is,
Of course if T does not exceed the right-hand side of (3.9), there is nothing to solve.
3.3. Methods of solution. For the special case of (3.4) when the norms k:k A
and k:k B are (possibly dierent) l p norms, we have
derivative methods may again be used. Let us again make
the (reasonable) assumption that there is no x which makes kAx bk so that
kAx bk p is dierentiable for all x. Then in contrast to the earlier problem, since
the second term cannot be identically zero, f is dierentiable for all x. We can easily
compute rst and second derivatives of f , and so Newton's method, for example,
can be implemented. A line search in the direction of the Newton step will always
guarantee descent, because f is convex, so eventually we must be able to take full
steps and get a second order convergence rate. Some numerical results are given in
[21]. For polyhedral norms occurring in (3.4), linear programming techniques may be
used.
Now consider the special case when 2. An analysis similar to that
given in section 2.2 can be given in this case, leading to a similar numerical method.
This particular problem is considered by El Ghaoui and Lebret [8], [9]. The main
emphasis of those papers is on structured perturbations, which is a harder problem,
and an exact solution to that problem is obtained. For the present case, the method
suggested is similar to that given for the problem of section 2 by Chandrasekaran
et al. in [1], [3].
Let A have singular value decomposition as before and have full rank. Assume
also that
range(A). Then optimality conditions are
1286 G. A. WATSON
or
where
It can be shown as before that satises the equation
where
This can as before be rearranged as
where
with G() dened as in (2.10). It is easily seen that H() has at least one positive
root for any > 0. As in [3], it may be shown that H() in fact has exactly one
positive root, ^
, with
H(^) > 0:
Note that here there is no restriction on except that it should be positive. Consider
the simple iteration process
Theorem 3.3. The iteration scheme (3:11) is locally convergent to ^
.
Proof. We can rst show that
2:
We can then show that h() and H() are related by
h()
where
Thus
using
Substituting from (3.12) gives
It follows using (3.13) and H 0 (^) > 0 that
and the result is proved.
The performance of simple iteration in this case is, of course, similar to the same
method applied in the previous situation. Other methods like the secant method, or
Newton's method, are more complicated but can give potentially better performance.
4. Some modications. There are dierent ways in which additional information
may be incorporated into the problems of the last two sections, resulting in
appropriate modications of these problems. For example, some components of A or
b may be exact, in which case the corresponding components of E or d will be zero.
The bounds may take dierent forms and may be on submatrices of E rather than E
itself. Also the perturbation matrices may have known structure, which we want to
preserve. Examples of all these possibilities are considered in this section.
4.1. Exact columns and rows. Some problems are such that some of the
columns and possibly rows of A are known to be exact (see, for example, [3]). A
treatment can be given for both the problems of sections 2 and 3, and we will demonstrate
only for those of section 2; the appropriate requirements for the problems of
section 3 are obvious. We begin by considering the case when certain columns only
of A are known to be exact. In that case (following suitable reordering of columns if
necessary) the general problem is to minimize
and the (separable) matrix norm is one dened
on m t matrices. We can partition x as x
arguing as in Theorem 2.1, we have the following.
Theorem 4.1. For any x, the maximum in (4:1) is attained when
where
otherwise u is arbitrary, but 1. The maximum value
is
1288 G. A. WATSON
Therefore, the problem is solved by minimizing with respect to x
kAx bk A
Now consider the case when some columns and rows of A are exact. This corresponds
to the requirement to perturb only a submatrix of A. Assume this to be the
lower right-hand s t submatrix. An appropriate problem is then to minimize
(b d)
A
where A 2 and A 4 have t columns, A 3 and A 4 have s rows, and the matrix norm is
a separable norm on s t matrices. Unfortunately, the separable norm is dened in
terms of two vector norms k:k A on R s and k:k B on R t , and k:k A as used in (4:3)
is on R m . We get around this potential con
ict by assuming that k:k A is dened
for any length of vector; we will also assume that the introduction of additional zero
components does not change the value of the norm.
The attainment of the maximum in (4.3) is not quite so straightforward as before.
However, we can prove the following result.
Theorem 4.2. Let denote the rst m s components of r,
and let r 2 denote the last s components. Let x solve the problem
subject to r
Then x solves (4:3):
Proof. Arguing as in previous results, an upper bound for the maximum (subject
to the constraints) in (4.3) is
Now dene the set
For any x 2 X, dene
where u rst (m s) components zero, and last s components forming
the vector u 2 with
arbitrary except that
A 3 A 4 +E
(b d)
A
A
A
The result is proved.
Of course the set X may be empty. In that case, while the problem (4.3) is still
well dened, it is not clear that a matrix E and a vector d can be dened such that
the maximum in the problem is attained. That being the case, there is no obvious
equivalent simpler problem.
4.2. Bounded columns of E. Suppose that the columns of E are individually
bounded so that
where e i is the ith unit vector, and consider the problem of nding
As for Theorem 2.1, we can prove the following result.
Theorem 4.3. For any x, the maximum in (4:5) is attained when
otherwise u is arbitrary but
1. The maximum value is
Even in the least squares case, this objective function is not normally dieren-
tiable, being a combination of a least squares norm and a weighted l 1 norm. It can
be reposed as a smooth constrained optimization problem, and solved by standard
techniques.
4.3. Structured problems. In some applications, the perturbation matrices
have known structure, as in the following problem considered by El Ghaoui and Lebret
[9]. Given A
min
k-k
x
A
where k:k A is a given norm on R m and k:k is a given norm on R p . Dene for any
Consider the maximum in (4.6), which will be attained at the solution to the problem
maximize kr 0 +M-kA subject to
assuming that - maximizing kr exceeds in norm. Because the functions
involved are convex, necessary conditions for a solution can readily be given: these
are that there exists R such that
1290 G. A. WATSON
Using these conditions, it is easily seen that
Therefore, an equivalent (in a sense dual) problem is
subject to
Consider the special case when both norms are least squares norms. Then
and so the necessary conditions can be written
Thus
provided that I F is nonsingular. A way of solving this problem based on those
results is given by El Ghaoui and Lebret [9]. They also consider the problem when k:k
is the Chebyshev norm. Extending the ideas to more general norms, however, does
not look straightforward.
5. A min-min problem. The problems (2.1) and (3.1) are examples of min-max
problems: minimization is carried out with respect to x over all allowed perturbations
in the data. This is justied if the emphasis is on robustness. However, from other
considerations it may be su-cient to minimize with respect to x while simultaneously
minimizing with respect to the perturbations. This gives rise to a min-min problem,
as considered (least squares case) in [2], [3], [5]. In this nal section, we will brie
y
consider this problem. Again there are two versions, consistent with those treated in
sections 2 and 3. To illustrate the ideas involved, we will consider nding
min
(5.
In contrast to the min-max case, here we are seeking to nd a solution x which gives
the smallest possible error over allowable perturbations.
Again the problem can be replaced by an equivalent unconstrained optimization
problem.
Theorem 5.1. Let be small enough that
Then (5:1) is equivalent to the problem of minimizing with respect to x
kAx bk A k[x
Proof. Let (5.2) be satised and let x be arbitrary. Let
otherwise arbitrary. Then
Now x
where
otherwise u is arbitrary with
and further
using (5.2). The result follows.
There are two important dierences between (5.3) and (3.4): rst, the relationship
leading to (5.3) requires a condition on , and second, the resulting problem is not a
convex problem. The nonconvexity of (5.3) is interpreted in [2] as being equivalent
to using an \indenite" metric, in the spirit of recent work on robust estimation and
ltering: see, for example, [11], [12], [16].
The condition (5.2) is satised if
that is, if does not exceed T (see section 3.2). If
attained at Indeed if is set to any local minimum of (3.7), with value T ,
then the corresponding point x T generated from the local minimizer z T is a stationary
point of (5.1), as the following argument shows.
Necessary conditions for x to solve (3.7) are that there exist v
and a Lagrange multiplier such that
G. A. WATSON
Multiplying through by z T
T shows that
Now the relationship z
implies that sign()v 2 @kAx T bk A and
In other words, there exist v 2 @kAx T bk A , w 2
denotes the rst n components of w. It follows from standard convex
analysis that x T is a stationary point of the problem of minimizing
A similar treatment can be given if (5.1) is replaced by the related problem of
nding
min
Provided that is small enough that
then this is equivalent to the problem of nding
fkAx bk A kxkB g:
An algorithm is given in [2] for solving the least squares case of this problem. It
has similarities to the algorithms given before, involving the solution of a nonlinear
equation for and a linear system for x. Indeed it is clear that many of the ideas which
apply to min-max problems carry over to problems of the present type. However, we
do not consider that further here.
6. Conclusions. We have given an analysis in a very general setting of a range
of data tting problems, which have attracted interest so far in the special case when
least squares norms are involved. While this case is likely to be most useful in practice,
consideration of other possibilities can be motivated by the valuable role that other
norms play in a general data tting context. The main thrust of the analysis has been
to show how the original problems may be posed in a simpler form. This permits the
numerical treatment of a wide range of problems involving other norms, for example,
l p norms. We have also included some observations which contribute to algorithmic
development for the important least squares case.
Acknowledgment
. I am grateful to the referees for helpful comments which
have improved the presentation.
--R
Parameter estimation in the presence of bounded modeling errors
Parameter estimation in the presence of bounded data uncertainties
The degenerate bounded errors-in-variables model
Fitting Equations to Data
Applied Regression Analysis
Robust solutions to least squares problems with uncertain data
Robust solutions to least-squares problems with uncertain data
An analysis of the total least squares problem
Recursive linear estimation in Krein spaces-Part I: Theory
Filtering and smoothing in an H 1 setting
An analysis of the total approximation problem in separable norms
Convex Analysis
Estimation in the presence of multiple sources of uncertainties with applications
Inertia conditions for the minimization of quadratic forms in inde
Estimation and control in the presence of bounded data uncertainties
ed., Recent Advances in Total Least Squares Techniques and Errors-in- Variables Modeling
The Total Least Squares Problem: Computational Aspects and Analysis
Choice of norms for data
Solving data
--TR | robustness;data fitting;minimum norm problems;bounded uncertainties;separable matrix norms |
587796 | Inversion of Analytic Matrix Functions That are Singular at the Origin. | In this paper we study the inversion of an analytic matrix valued function A(z). This problem can also be viewed as an analytic perturbation of the matrix A0=A(0). We are mainly interested in the case where A0 is singular but A(z) has an inverse in some punctured disc around z=0. It is known that A-1(z) can be expanded as a Laurent series at the origin. The main purpose of this paper is to provide efficient computational procedures for the coefficients of this series. We demonstrate that the proposed algorithms are computationally superior to symbolic algebra when the order of the pole is small. | Introduction
Let fA k g k=0;1;::: ' R n\Thetan be a sequence of matrices that de-nes the analytic matrix valued
function
The above series is assumed to converge in some non-empty neighbourhood of z = 0. We will
also say that A(z) is an analytic perturbation of the matrix A Assume the inverse
matrices A \Gamma1 (z) exist in some (possibly punctured) disc centred at In particular, we
are primarily interested in the case where A 0 is singular. In this case it is known that A \Gamma1 (z)
can be expanded as a Laurent series in the form
A
and s is a natural number, known as the order of the pole at z = 0. The main
purpose of this paper is to provide eOEcient computational procedures for the Laurent series
coeOEcients As one can see from the following literature review, few computational
methods have been considered in the past.
This work was supported in part by Australian Research Council Grant #A49532206.
y INRIA Sophia Antipolis, 2004 route des Lucioles, B.P.93, 06902, Sophia Antipolis Cedex, France, e-mail:
k.avrachenkov@sophia.inria.fr
z Department of Statistics, The Hebrew University, 91905 Jerusalem, Israel and Department of Economet-
rics, The University of Sydney, Sydney, NSW 2006, Australia, e-mail: haviv@mscc.huji.ac.il
x CIAM, School of Mathematics, The University of South Australia, The Levels, SA 5095, Australia, e-mail:
The inversion of nearly singular operator valued functions was probably -rst studied in
the paper by Keldysh [22]. In that paper he studied the case of a polynomial perturbation
are compact operators on Hilbert space. In particular, he showed that
the principal part of the Laurent series expansion for the inverse operator A \Gamma1 (z) can be
given in terms of generalized Jordan chains. The generalized Jordan chains were initially
developed in the context of matrix and operator polynomials (see [13, 26, 30] and numerous
references therein). However, the concept can be easily generalized to the case of an analytic
perturbation (1).
Following Gohberg and Sigal [15] and Gohberg and Rodman [14], we say that the vectors
Jordan chain of the perturbed matrix A(z) at
for each 0 - k - r \Gamma 1. Note that ' 0 is an eigenvector of the unperturbed matrix A 0
corresponding to the zero eigenvalue. The number r is called the length of the Jordan chain
and ' 0 is the initial vector. Let f' (j)
j=1 be a system of linearly independent eigenvectors,
which span the null space of A 0 . Then one can construct Jordan chains initializing at each
of the eigenvectors ' (j)
0 . This generalized Jordan set plays a crucial role in the analysis of
analytic matrix valued functions A(z).
Gantmacher [11] analysed the polynomial matrix (3) by using the canonical Smith form.
Vishik and Lyusternik [37] studied the case of a linear perturbation
showed that one can express A \Gamma1 (z) as a Laurent series as long as A(z) is invertible in some
punctured neighbourhood of the origin. In addition, an undetermined coeOEcient method for
the calculation of Laurent series terms was given in [37]. Langenhop [25] showed that the
coeOEcients of the regular part of the Laurent series for the inverse of a linear perturbation
form a geometric sequence. The proof of this fact was re-ned later in Schweitzer [33, 34] and
Schweitzer and Stewart [35]. In particular, the paper [35] proposed a method for computing
the Laurent series coeOEcients. However the method of [35] cannot be applied (at least imme-
diately) to the general case of an analytic perturbation. Many authors have obtained existence
results for operator valued analytic and meromorphic functions [3, 15, 23, 27, 29, 36]. In par-
ticular, Gohberg and Sigal [15], used a local Smith form to elaborate on the structure of the
principal part of the Laurent series in terms of generalized Jordan chains. Recently, Gohberg,
Kaashoek and Van Schagen [12] have re-ned the results of [15]. Furthermore, Bart, Kaashoek
and Lay [5] used their results on the stability of the null and range spaces [4] to prove the
existence of meromorphic relative inverses of -nite meromorphic operator valued functions.
The ordinary inverse operator is a particular case of the relative inverse. For the applications
of the inversion of analytic matrix functions see for example [8, 9, 20, 23, 24, 28, 31, 32, 36].
Howlett [20] provided a computational procedure for the Laurent series coeOEcients based
on a sequence of row and column operations on the coeOEcients of the original power series
(1). Howlett used the rank test of Sain and Massey [32] to determine s, the order of the pole.
He also showed that the coeOEcients of the Laurent series satisfy a -nite linear recurrence
relation in the case of a polynomial perturbation. The method of [20] can be considered as a
starting point for our research. The algebraic reduction technique which is used in the present
paper was introduced by Haviv and Ritov [17, 18] in the special case of stochastic matrices.
Haviv, Ritov and Rothblum [19] also applied this approach to the perturbation analysis of
semi-simple eigenvalues.
In this paper we provide three related methods for computing the coeOEcients of the Laurent
series (2). The -rst method uses generalized inverse matrices to solve a set of linear
equations and extends the work in [17] and [20]. The other two methods use results that
appear in [2, 17, 18, 19] and are based on a reduction technique [6, 10, 21, 23]. All three
methods depend in a fundamental way on equating coeOEcients for various powers of z. By
substituting the series (1) and (2) into the identity A(z)A I and collecting coeOE-
cients of the same power of z, one obtains the following system which we will refer to as the
fundamental equations:
A similar system can written when considering the identity A I but of course
the set of fundamental equations (4:0); suOEcient. Finally, for matrix operators
each in-nite system of linear equations uniquely determines the coeOEcients of the Laurent
series (2). This fact has been noted in [3, 20, 23, 37, 36].
Main results
Let us de-ne the following augmented matrix A (t) 2 R (t+1)n\Theta(t+1)n
A (t) =6 6 6 6 6 4
A t A
and prove the following basic lemma.
s be the order of the pole at the origin for the inverse function A \Gamma1 (z). Any
eigenvector \Phi 2 R (s+1)n of A (s) corresponding to the zero eigenvalue possesses the property
that its -rst n elements are zero.
Proof: Suppose on the contrary that there exists an eigenvector \Phi 2 R (s+1)n such that
A
and not all of its -rst n entries are zero. Then, partition the vector \Phi into s blocks and
rewrite (5) in the form
with ' 0 6= 0. This means that we have found a generalized Jordan chain of length s + 1.
However, from the results of Gohberg and Sigal [15], we conclude that the maximal length
of a generalized Jordan chain of A(z) at z = 0 is s. Hence, we came to a contradiction and,
consequently,
direct proof of Lemma 1 is given in Appendix 1.
vectors \Phi 2 R (s+j+1)n in the null space of the augmented matrix A (s+j) , j - 0,
possess the property that the -rst (j + 1)n elements are zero.
The following theorem provides a theoretical basis for the recursive solution of the in-nite
system of fundamental equations (4).
Theorem 1 Each coeOEcient X k , k - 0 is uniquely determined by the previous coeOEcients
and the set of s fundamental equations
Proof: It is obvious that the sequence of Laurent series coeOEcients fX i
i=0 is a solution
to the fundamental equations (4). Suppose the coeOEcients X i ,
determined. Next we show that the set of fundamental equations (4.k)-(4.k+s) uniquely
determines the next coeOEcient X k . Indeed, suppose there exists another solution ~
are both solutions, we can write
A (s)6 4
~
~
and
A (s)6 4
where the matrix J i is de-ned as follows:
ae
and where ~
X k+s are any particular solutions of the nonhomogenous linear system
(4.k)-(4.k+s). Note that (6) and (7) have identical righthand sides. Of course, the dioeerence
between these two righthand sides, [ ~
is in the right null space of
A Invoking Lemma 1, the -rst n rows of [ ~
are hence zero. In
other words, ~
which proves the theorem.Using the above theoretical background, in the next section we provide three recursive
computational schemes which are based on the generalized inverses and on a reduction tech-
nique. The reduction technique is based on the following result. A weaker version of this
result was utilized in [17] and in [19].
Theorem 2 Let fC k g t
suppose that the
system of t equations
is feasible. Then the general solution is given by
(R
where C y
0 is the Moore-Penrose generalized inverse of C 0 and Q 2 R m\Thetap is any matrix whose
columns form a basis for the right null space of C 0 . Furthermore, the sequence of matrices
solves a reduced -nite set of t matrix equations
where the matrices D k 2 R p\Thetap and S k 2 R p\Thetan , are computed by the following
recursion. Set U
Then,
where M 2 R p\Thetam is any matrix whose rows form a basis for the left null space of C 0 .
Proof: The general solution to the matrix equation (7.0) can be written in the form
arbitrary matrix.
In order for the equation
to be feasible, we need that the right hand side R belongs to R(C 0
is
where the rows of M form a basis for N(C T
Substituting expression (13) for the general
solution into the above feasibility condition, one -nds that W 0 satis-es the equation
which can be rewritten as
Thus we have obtained the -rst reduced fundamental equation (9.0) with
Next we observe that the general solution of equation (7.1) is represented by
the formula
(R
with . Moving on and applying the feasibility condition to equation (7.2), we
obtain
and again the substitution of expressions (13) and (14) into the above condition yields
(R
which is rearranged to give
The last equation is the reduced equation (9.1) with
. Note that this equation imposes restrictions on W 1 as well as on
By proceeding in the same way, we eventually obtain the complete system of equations
with coeOEcients given by formulas (11) and (12) each of which can be proved by induction
in a straightforward way.Remark 3 In the above theorem it is important to observe that the reduced system has the
same form as the original but the number of matrix equations is decreased by one and the
coeOEcients are reduced in size to matrices in R p\Thetap , where p is the dimension of N(C 0 ) or,
equivalently, the number of redundant equations de-ned by the coeOEcient C 0 .
In the next section we use this reduction process to solve the system of fundamental
equations. Note that the reduction process can be employed to solve any appropriate -nite
subset of the fundamental equations.
3 Solution methods
In this section we discuss three methods for solving the fundamental equations. The -rst
method is based on the direct application of Moore-Penrose generalized inverses. The second
method involves the replacement of the original system of the fundamental equations by
a system of equations with a reduced dimension. In the third method we show that the
reduction process can be applied recursively to reduce the problem to a non-singular system.
Since all methods depend to some extent on the prior knowledge of s, we begin by discussing
a procedure for the determination of s. A special procedure for determining this order for the
case where the matrices A(z) are stochastic and the perturbation series is -nite is given in [16].
It is based on combinatorial properties (actually, network representation) of the processes and
hence it is a stable procedure. However, as will be seen in Section 3.4, it is possible to use
the third method without prior knowledge of s. Actually, the full reduction version of our
procedure determines s as well. Of course, as in any computational method which is used to
determine indices which have discrete values, using our procedures in order to compute the
order of singularity might lack stability.
3.1 The determination of the order of the pole
The rank test on the matrix A (t) proposed by Sain and Massey in [32] is likely to be the
most eoeective procedure for determining the value of s. The calculation of rank is essentially
equivalent to the reduction of A (t) to a row echelon normal form and it can be argued that
row operations can be used successively in order to calculate the rank of A (0) ,A (1) ,A (2)
and -nd the minimum value of t for which rankA (t\Gamma1) +n. This minimum value of t equals s,
the order of the pole. Note that previous row operations for reducing A (t\Gamma1) to row echelon
form are replicated in the reduction of A (t) and do not need to be repeated. For example,
if a certain combination of row operations reduces A 0 to row echelon form, then the same
operations are used again as part of the reduction of
to row echelon form.
3.2 Basic generalized inverse method
In this section we obtain a recursive formula for the Laurent series coeOEcients X k , k - 0 by
using the Moore-Penrose generalized inverse of the augmented matrix A (s) .
y be the Moore-Penrose generalized inverse of A (s) and de-ne the matrices
G
G
G
0s
G
Furthemore, we would like to note that in fact we use only the -rst n rows of the generalized
namely, [G
Proposition 1 The coeOEcients of the Laurent series (2) can be calculated by the following
recursive formula
s
G
0s and the matrix J i is de-ned by
ae
Proof: According to Theorem 1, once the coeOEcients X i , are determined, the
next coeOEcient X k can be obtained from the (4.k)-(4.k+s) fundamental equations.
A (s)6 4
The general solution to the above system is given in the form6 6 6 4
~
~
G
0s
G
1s
G
where the -rst block of matrix \Phi is equal to zero according to Lemma 1. Thus, we immediately
obtain the recursive expression (15). In particular, applying the same arguments as above to
the -rst s we obtain that
0s .Note that the matrices J j+k in the expression (15) disappear when the regular coeOEcients
are computed.
Remark 4 The formula (15) is a generalization of the recursive formula for the case where
A 0 is invertible. In this case,
while initializing with
Remark 5 Probably from the computational point of view it is better not to compute the
generalized inverse G (s) beforehand, but rather to -nd the SVD or LU decomposition of A
and then use these decompositions for solving the fundamental equations (3:k)-(3:k + s). This
is the standard approach for solving linear systems with various righthand sides.
3.3 The one step reduction process
In this section we describe an alternative scheme that can be used in the case where it is
relatively easy to compute the bases for the right and for the left null spaces of A 0 . Speci-cally,
be the dimension of the null space of A 0 , let Q 2 IR n\Thetap be a matrix whose
columns form a basis for the right null space of A 0 and let M 2 IR p\Thetan be a matrix whose p
rows form a basis for the left null space of A 0 . Of course, although
possible, we are interested in the singular case where p - 1.
Again, as before, we suppose that the coeOEcients X i ,
determined. Then, by Theorem 1, the next coeOEcient X k is the unique solution to the
subsystem of fundamental equations
The above system is like the one given in (9) with C and with R
Therefore, we can apply the reduction process described
in Theorem 2. This results in the system
where the coeOEcients D i and S i , can be calculated by the recursive formulae
(11) and (12).
Remark 6 Note that in many practical applications p is much less than n and hence the
above system (17) with D i 2 IR p\Thetap is much smaller than the original system (16).
Now we have two options. We can either apply the reduction technique again (see the
next subsection for more details) or we can solve the reduced system directly by using the
generalized inverse approach. In the latter case, we de-ne
and
0t
Then, by carrying out a similar computation to the one presented in the proof of Proposition 1,
we obtain
Once W 0 is determined it is possible to obtain X k from the formula
Furthermore, substituting for S i , 0 - i - s\Gamma1, from (12) and changing the order of summation
gives
A y
s
Note that by convention the sum disappears when the lower limit is greater than the upper
limit. Now, substituting R
into the expression (18), we
obtain the explicit recursive formula for the Laurent series coeOEcients
A y
s
A (J k+j \Gamma
for all k - 1. In particular, the coeOEcient of the -rst singular term in (2) can be given by the
3.4 The complete reduction process
As was pointed out in the previous section, the reduced system has essentially the same
structure as the original one and hence one can apply again the reduction step described in
Theorem 2. Note that each time the reduction step is carried out, the number of matrix
equations is reduced by one. Therefore one can perform up to s reduction steps. We now
outline how these steps can be executed. We start by introducing the sequence of reduced
systems. The fundamental matrix equations for the l-th reduction step are
A (l)
A (l)
A (l)
s\Gammal X (l)
s\Gammal
one gets the original system of fundamental equations and with gets the
reduced system for the -rst reduction step described in the previous subsection. Initializing
with R (0)
I and with A (0)
s, the matrices A (l)
and R (l)
for each reduction step 1 - l - s, can be computed successively by a
recursion similar to (11) and (12). In general we have
U (l)
A (l\Gamma1)
A (l)
R (l)
U (l)
where Q (l) and M (l) are the basis matrices for the right and left null spaces respectively of
the matrix A (l\Gamma1)
0 and where A (l\Gamma1)y
0 is the Moore-Penrose generalized inverse of A (l\Gamma1)
. After
s reduction steps, one gets the -nal system of reduced equations
A
is a unique solution to the subsystem of fundamental equations (4.0)-(4.s) and
Theorem 2 states the equivalence of the l-th and (l 1)-st systems of reduced equations, the
system (22) possesses a unique solution, and hence matrix A
0 is invertible. Thus,
The original solution X
0 can be now retrieved by the backwards recursive relationship
Now by taking R (0)
one gets the algorithm for computing
the Laurent series coeOEcients recursive formulae similar to (15) and (19)
can be obtained, but they are quite complicated in the general case.
The order s of the pole can also be obtained from the reduction process by continuing the
process until A (l)
becomes non-singular. The number of reduction steps equals the order of
the pole. Note also that the sequence of matrices A (l)
can be computed irrespectively
of the right hand sides. Once s is determined, one can compute R (l)
Computational complexity and comparison with symbolic algebr
In this section we compare the computational complexity of the one-step-reduction process
when applied to compute X 0 with the complexity of symbolic algebra. In particular, we show
that the former comes with a reduced complexity in the case where the pole has a relatively
small order. The computational complexity of the other two procedures can be determined
similarly.
To compute the coeOEcients D i , of the reduced fundamental system (17),
one needs to perform O(s 2 n 3 ) operations. The total number of reduced equations is sp
(recall that p is the dimension of the null space of A 0 ). Hence, the computational complexity
for determining X 0 by the one-step-reduction process is O(maxfs 2 g). The Laurent
series (2) in general, and the coeOEcient X 0 in particular, can also be computed by using
symbolic algebra. This for example can be executed by MATLAB symbolic toolbox and is
done as follows. Since X 0 is uniquely determined by the -rst s equations
one needs to do in order to compute X 0 is it to invert symbolically the
following matrix polynomial
Symbolic computations here mean performing operations, such as multiplication and division,
over the -eld of rational functions (and not over the -eld of the reals). In particular, if the
degrees of numerators and of denominators of rational functions do not exceed q, then each
operation (multiplication or division) which is performed in the -eld of rational functions
translates into qlog(q) operations in the -eld of real numbers [1]. Note that during the
symbolic inversion of the polynomial matrix (25), the degree of rational functions does not
exceed sn. The latter fact follows from Cramer's rule. Thus, the complexity of the symbolic
inversion of (25) equals O(n 3 ) \Theta log(sn)). As a result, one gets a matrix
A \Gamma1 (z) whose elements are rational functions of z. The elements of the matrix X 0 can then be
immediately calculated by dividing the leading coe-cients of the numerator and denominator.
Finally, one can see that if s !! n and p !! n, which is typically the case, then our method
comes with a reduced computational burden.
Concluding remarks
In this paper we have shown that the Laurent series for the inversion of an analytic matrix
valued function can be computed by solving a system of fundamental linear equations.
Furthermore, we demonstrated that the system of fundamental equations can be solved recur-
sively. In particular, the coeOEcient X k is determined by the previous coeOEcients X
and the next s is the order of the pole. We suggest three
basic methods, one without any reduction (see (15)), one with a single reduction step (see
and (20)), and one using a complete reduction process with s steps (see (23) and (24)).
Of course, an intermediate process with the number of reductions between 1 and s could be
used too. We note that when the complete reduction process is used the order of the pole
can be determined through the execution of the algorithm. When s !! n and p !! n, the
proposed algorithms by far outperform the method based on symbolic algebra.
Acknowledgement
The authors are grateful to Prof. Jerzy A. Filar for his helpful advice. Also the authors would
like to thank anonymous referees for their valuable suggestions and for directing us to some
existing literature.
Apendix 1: Another proof of Lemma 1
A direct proof of Lemma 1 can be carried out using augmented matrices. Speci-cally, de-ne
are the coeOEcients of the Laurent series (2). Then it follows from
the fundamental systems (4) and (5) that the augmented matrices A (t) and X (t) satisfy the
relationship
where the augmented matrix E (t) 2 R (t+1)n\Theta(t+1)n is de-ned by setting
p;q=0 where
n\Thetan and
ae
I for
s:
Now, as before, let \Phi 2 R (s+1)n satisfy the equation
A
If we multiply equation (27) from the left by X reduces to
The vector E (s) \Phi has ' 0 as the (s 1)-st block, which gives the required result.
Apendix 2: A Numerical example
Let us consider the matrix valued function
where 2. Construct the augmented matrices
and note that which is the dimension of the original
coeOEcients A 0 and A 1 . Therefore, according to the test of Sain and Massey [32], the Laurent
expansion for A \Gamma1 (z) has a simple pole. Alternatively, we can compute a basis for
which in this particular example consists of only one vector
\Theta
The -rst three zero elements in q (1) con-rm that the Laurent series has a simple pole. Next
we compute the generalized inverse of A (1) given by
G (1)
1=3 \Gamma5=12 \Gamma1=12 1=8 1=8 \Gamma1=8
1=3 \Gamma5=12 \Gamma1=12 \Gamma3=8 \Gamma3=8 3=8
Consequently,
\Gamma3 \Gamma3
Alternatively, we know that X 0 is uniquely determined by the fundamental equations
After one reduction step these equations reduce to
where
\Theta
and
\Theta
Hence,
\Theta
and
\Gamma3 \Gamma3
The latter expression is identical with (28) and coincides with the one computed by expanding
A \Gamma1 (z) with the help of the MATLAB symbolic toolbox. Note that even for this three
dimensional example the direct symbolic calculation of the Laurent series takes a relatively
long time.
--R
The design and analysis of computer algorithms
The fundamental matrix of singularly perturbed Markov chains
Meromorphic operator valued functions.
Stability properties of
Relative inverses of meromorphic operator functions and associated holomorphic projection functions
Generalized Inverses of Linear Transformation
iSingular systems of dioeerential equationsj
iSingular systems of dioeerential equations IIj
A reduction process for perturbed Markov chains
The theory of matrices
On the local theory of regular analytic matrix functions
Matrix Polynomials
Analytic matrix functions with prescribed local data
An operator generalization of the logarithmic residue theorem and the theorem of Rouch
Mean passage times and nearly uncoupled Markov chains
Series expansions for
Matrix Anal.
iTaylor expansions of eigenvalues of perturbed matrices with applications to spectal radii of nonnegative matrices
Input retrieval in
Perturbation theory for linear operators
On the characteristic values and characteristic functions of certain classes of non-selfadjoint equations
Mathematical foundations of the state lumping of large systems
iInversion of lambda-matrices and application to the theory of linear vibrationsj
The Laurent expansion for a nearly singular matrix
Introduction to the spectral theory of polynomial operator pencils
iSpectral properties of a polynomial op- eratorj
Theory of Suboptimal Decisions
An introduction to operator polynomials
The Laurent expansion of a generalized resolvent with some applications
Invertibility of linear time invariant dynamical systems
The Laurent expansion for a nearly singular pencil
Perturbation series expansions for nearly completely-decomposable Markov chains
The Laurent expansion of pencils that are singular at the origin
Theory of branching of solutions of non-linear equations
The solution of some perturbation problems in the case of matrices and self-adjoint and non-self-adjoint dioeerential equations
--TR
--CTR
Jerzy A. Filar, Controlled Markov chains, graphs, and Hamiltonicity, Foundations and Trends in Stochastic Systems, v.1 n.2, p.77-162, January 2006 | matrix inversion;matrix valued functions;analytic perturbation;laurent series |
587798 | Multiple-Rank Modifications of a Sparse Cholesky Factorization. | Given a sparse symmetric positive definite matrix $\mathbf{AA}\tr$ and an associated sparse Cholesky factorization $\mathbf{LDL}\tr$ or $\mathbf{LL}\tr$, we develop sparse techniques for updating the factorization after either adding a collection of columns to A or deleting a collection of columns from A. Our techniques are based on an analysis and manipulation of the underlying graph structure, using the framework developed in an earlier paper on rank-1 modifications [T. A. Davis and W. W. Hager, SIAM J. Matrix Anal. Appl., 20 (1999), pp. 606--627]. Computationally, the multiple-rank update has better memory traffic and executes much faster than an equivalent series of rank-1 updates since the multiple-rank update makes one pass through L computing the new entries, while a series of rank-1 updates requires multiple passes through L. | Introduction
. This paper presents a method for evaluating a multiple rank
update or downdate of the sparse Cholesky factorization LDL T or LL T of the matrix
AA T , where A is m by n. More precisely, given an m r matrix W, we evaluate the
Cholesky factorization of AA T either is +1 (corresponding to an
update) and W is arbitrary, or is 1 (corresponding to a downdate) and W consists
of columns of A. Both AA T and AA T +WW T must be positive denite. It follows
that in the case of an update, and n r m in the case of a downdate.
One approach to the multiple rank update is to express it as a series of rank-1
updates and use the theory developed in [10] for updating a sparse factorization after
a rank-1 change. This approach, however, requires multiple passes through L as it is
updated after each rank-1 change. In this paper, we develop a sparse factorization
algorithm that makes only one pass through L.
For a dense Cholesky factorization, a one-pass algorithm to update a factorization
is obtained from Method C1 in [18] by making all the changes associated with one
column of L before moving to the next column, as is done in the following algorithm
that overwrites L and D with the new factors of AA T
performs
oating-point operations.
Algorithm 1 (Dense rank-r update/downdate).
to r do
end for
do
to r do
This work was supported by the National Science Foundation.
y davis@cise.u
.edu/~davis, PO Box 116120, Department of Computer
and Information Science and Engineering, University of Florida, Gainesville, FL 32611-6120. Phone
(352) 392-1481. Fax (352) 392-1220. TR-99-006 (June 1999, revised Sept. 2000)
z hager@math.u
.edu/~hager, PO Box 118105, Department of Mathe-
matics, University of Florida, Gainesville, FL 32611-8105. Phone (352) 392-0281. Fax (352) 392-8357.
A. DAVIS AND WILLIAM W. HAGER
end for
do
to r do
l
end for
end for
end for
We develop a sparse version of this algorithm that only accesses and modies those
entries in L and D which can change. For the theory in our rank-1 paper [10]
shows that those columns which can change correspond to the nodes in an elimination
tree on a path starting from the node k associated with the rst nonzero element w k1
in W. For r > 1 we show that the columns of L which can change correspond
to the nodes in a subtree of the elimination tree, and we express this subtree as a
modication of the elimination tree of AA T . Also, we show that with a reordering
of the columns of W, it can be arranged so that in the inner loop where elements in
row p of W are updated, the elements that change are adjacent to each other. The
sparse techniques that we develop lead to sequential access of matrix elements and to
e-cient computer memory tra-c. These techniques to modify a sparse factorization
have many applications including the Linear Program Dual Active Set Algorithm
least-squares problems in statistics, the analysis of electrical circuits
and power systems, structural mechanics, sensitivity analysis in linear programming,
boundary condition changes in partial dierential equations, domain decomposition
methods, and boundary element methods (see [19]).
Section 2 describes our notation. In section 3, we present an algorithm for computing
the symbolic factorization of AA T using multisets, which determines the location
of nonzero entries in L. Sections 4 and 5 describe our multiple rank symbolic update
and downdate algorithms for nding the nonzero pattern of the new factors. Section 6
describes our algorithm for computing the new numerical values of L and D, for either
an update or downdate. Our experimental results are presented in Section 7.
2. Notation and background. Given the location of the nonzero elements of
AA T , we can perform a symbolic factorization (this terminology is introduced by
George and Liu in [15]) of the matrix to predict the location of the nonzero elements
of the Cholesky factor L. In actuality, some of these predicted nonzeros may be
zero due to numerical cancellation during the factorization process. The statement
will mean that l ij is symbolically nonzero. The main diagonals of L and
D are always nonzero since the matrices that we factor are positive denite (see [26,
p. 253]). The nonzero pattern of column j of L is denoted
while L denotes the collection of patterns:
Similarly, A j denotes the nonzero pattern of column j of A,
while A is the collection of patterns:
The elimination tree can be dened in terms of a parent map (see [22]). For any
node j, (j) is the row index of the rst nonzero element in column j of L beneath
the diagonal element:
where \min X" denotes the smallest element of
i:
Our convention is that the min of the empty set is zero. Note that j < (j) except
in the case where the diagonal element in column j is the only nonzero element. The
children of node j is the set of nodes whose parent is j:
The ancestors of a node j, denoted P(j), is the set of successive parents:
for each j, the ancestor sequence is nite. The sequence of nodes
j, (j), ((j)), , forming P(j), is called the path from j to the associated tree
root, the nal node on the path. The collection of paths leading to a root form an
elimination tree. The set of all trees is the elimination forest. Typically, there is a
single tree whose root is m, however, if column j of L has only one nonzero element,
the diagonal element, then j will be the root of a separate tree.
The number of elements (or size) of a set X is denoted jX j, while jAj or jLj denote
the sum of the sizes of the sets they contain.
3. Symbolic factorization. For a matrix of the form AA T , the pattern L j of
column j is the union of the patterns of each column of L whose parent is j and each
column of A whose smallest row index of its nonzero entries is j (see [16, 22]):
min Ak=j
To modify (3.1) during an update or downdate, without recomputing it from
scratch, we need to keep track of how each entry i entered into L j [10]. For example,
if (c) changes, we may need to remove a term L c n fcg. We cannot simply perform
a set subtraction, since we may remove entries that appear in other terms. To keep
track of how entries enter and leave the set L j , we maintain a multiset associated
with column j. It has the form
4 TIMOTHY A. DAVIS AND WILLIAM W. HAGER
where the multiplicity m(i; j) is the number of children of j that contain row index i
in their pattern plus the number of columns of A whose smallest entry is j and that
contain row index i. Equivalently, for i 6= j,
For we increment the above equation by one to ensure that the diagonal entries
never disappear during a downdate. The set L j is obtained from L ]
by removing the
multiplicities.
We dene the addition of a multiset X ] and a set Y in the following way:
where
Similarly, the subtraction of a set Y from a multiset X ] is dened by
where
The multiset subtraction of Y from X ] undoes a prior addition. That is, for any
multiset X ] and any set Y , we have
In contrast ((X [ Y) n Y) is equal to X if and only if X and Y are disjoint sets.
Using multiset addition instead of set union, (3.1) leads to the following algorithm
for computing the symbolic factorization of AA T .
Algorithm 2 (Symbolic factorization of AA T , using multisets).
do
for each c such that do
end for
for each k where min A do
end for
end for
4. Multiple rank symbolic update. We consider how the pattern L changes
when AA T is replaced by AA T +WW T . Since
we can in essence augment A by W in order to evaluate the new pattern of column
in L. According to (3.1), the new pattern L j of column j of L after the update is
min Ak=j
A
where W i is the pattern of column i in W. Throughout, we put a bar over a matrix
or a set to denote its new value after the update or downdate.
In the following theorem, we consider a column j of the matrix L, and how its
pattern is modied by the sets W i . Let L ]
j denote the multiset for column j after the
rank-r update or downdate has been applied.
Theorem 4.1. To compute the new multiset L ]
j and perform
the following modications:
Case A: For each i such that to the pattern for column
Case B: For each c such that
(c is a child of j in both the old and new elimination tree).
Case C: For each c such that
(c is a child of j in the new tree, but not the old one).
Case D: For each c such that
(c is a child of j in the old tree, but not the new one).
Proof. Cases A{D account for all the adjustments we need to make in L j in order
to obtain L j . These adjustments are deduced from a comparison of (3.1) with (4.1).
In case A, we simply add in the W i multisets of (4.1) that do not appear in (3.1). In
case B, node c is a child of node j both before and after the update. In this case, we
must adjust for the deviation between L c and L c . By [10, Prop. 3.2], after a rank-1
update, L c L c . If w i denotes the i-th column of W, then
Hence, updating AA T by WW T is equivalent to r successive rank-1 updates of AA T .
By repeated application of [10, Prop. 3.2], L c L c after a rank-r update of AA T . It
6 TIMOTHY A. DAVIS AND WILLIAM W. HAGER
follows that L c and L c deviate from each other by the set L c n L c . Consequently, in
case B we simply add in L c n L c .
In case C, node c is a child of j in the new elimination tree, but not in the old
tree. In this case we need to add in the entire set L c n fcg since the corresponding
term does not appear in (3.1). Similarly, in case D, node c is a child of j in the old
elimination tree, but not in the new tree. In this case, the entire set L c nfcg should be
deleted. The case where c is not a child of j in either the old or the new elimination
tree does not result in any adjustment since the corresponding L c term is absent from
both (3.1) and (4.1).
An algorithm for updating a Cholesky factorization that is based only on this
theorem would have to visit all nodes j from 1 to m, and consider all possible children
c < j. On the other hand, not all nodes j from 1 to m need to be considered since not
all columns of L change when AA T is modied. In [10, Thm. 4.1] we show that for
the nodes whose patterns can change are contained in P(k 1 ) where we dene
. For a rank-r update, let P (i) be the ancestor map associated with the
elimination tree for the Cholesky factorization of the matrix
Again, by [10, Thm. 4.1], the nodes whose patterns can change during the rank-r
update are contained in the union of the patterns Although we
could evaluate each i, it is di-cult to do this e-ciently since we need to
perform a series of rank-1 updates and evaluate the ancestor map after each of these.
On the other hand, by [10, Prop. 3.1] and [10, Prop. 3.2], P (i) (j) P (i+1) (j) for each
and j, from which it follows that P (i) Consequently, the
nodes whose patterns change during a rank-r update are contained in the set
1ir
Theorem 4.2, below, shows that any node in T is also contained in one or more
of the sets P (i) (k i ). From this it follows that the nodes in T are precisely those nodes
for which entries in the associated columns of L can change during a rank-r update.
Before presenting the theorem, we illustrate this with a simple example shown in
Figure
4.1. The left of Figure 4.1 shows the sparsity pattern of original matrix AA T ,
its Cholesky factor L, and the corresponding elimination tree. The nonzero pattern
of the rst column of W is 2g. If performed as a single rank-1 update, this
causes a modication of columns 1, 2, 6, and 8 of L. The corresponding nodes in the
original tree are encircled; these nodes form the path P (1) 8g from node
1 to the root (node 8) in the second tree. The middle of Figure 4.1 shows the matrix
after this rank-1 update, and its factor and elimination tree. The entries in the second
1 that dier from the original matrix AA T are shown as small
pluses. The second column of W has the nonzero pattern W 7g. As a rank-1
update, this aects columns P (2) of L. These columns
form a single path in the nal elimination tree shown in the right of the gure.
For the rst rank-1 update, the set of columns that actually change are P (1)
8g. This is a subset of the path in the nal tree. If we
use P(1) to guide the work associated with column 1 of W, we visit all the columns
after second update
after first update
elimination tree
Elimination tree
After first update
Elimination tree
Original factor L Factor after second update
Factor after first update
Original matrix
After second update1
A T
A T6743
A T
Original
A
Fig. 4.1. Example rank-2 update
that need to be modied, plus column 7. Node 7 is in the set of nodes P(3) aected
by the second rank-1 update, however, as shown in the following theorem.
Theorem 4.2. Each of the paths contained in T and conversely, if
contained in P (i)
Proof. Before the theorem, we observe that each of the paths contained
in T . Now suppose that some node j lies in the tree T . We need to prove that it is
contained in P (i) s be the largest integer such that P(k s ) contains
j and let c be any child of j in T . If c lies on the path P(k i ) for some i, then j lies
on the path P(k i ) since j is the parent of c. Since j does not lie on the path P(k i )
for any i > s, it follows that c does not lie on the path P(k i ) for any i > s. Applying
this same argument recursively, we conclude that none of the nodes on the subtree
of T rooted at j lie on the path P(k i ) for any i > s. Let T j denote the subtree of T
rooted at j. Since contained in P(k i ) for each i, none of the nodes of T j
lie on any of the paths Thm. 4.1], the patterns of all nodes
outside the path are unchanged for each i. Let L (i)
c be the pattern of column
c in the Cholesky factorization of (4.2). Since any node c contained in T j does not lie
8 TIMOTHY A. DAVIS AND WILLIAM W. HAGER
(d,c)
(b,e)
e
f
c
d
a
Fig. 4.2. Example rank-8 symbolic update and subtree T
on any of the paths
c for all i, l s. Since k s is a node
of T j , the path P must include j.
Figure
4.2 depicts a subtree T for an example rank-8 update. The subtree consists
of all those nodes and edges in one or more of the paths P(k 1
These paths form a subtree, and not a general graph, since they are all paths from
an initial node to the root of the elimination tree of the matrix L. The subtree T
might actually be a forest, if L has an elimination forest rather than an elimination
tree. The first nonzero positions in w 1 through w 8 correspond to nodes k 1 through
k 8 . For this example node k 4 happens to lie on the path P (1) (k 1 ). Nodes at which
paths first intersect are shown as smaller circles, and are labeled a through f . Other
nodes along the paths are not shown. Each curved arrow denotes a single subpath.
For example, the arrow from nodes b to e denotes the subpath from b to e in P(b).
This subpath is denoted as P(b; e) in Figure 4.2.
The following algorithm computes the rank-r symbolic update. It keeps track of
an array of m \path-queues," one for each column of L. Each queue contains a set
of path-markers in the range 1 to r, which denote which of the paths P(k 1 ) through
P(k r ) must be considered next. If two paths have merged, only one of the paths
needs to be considered (we arbitrarily select the higher-numbered path to represent
the merged paths). This set of path-queues requires O(m + r) space. Removing and
inserting a path-marker in a path-queue takes O(1) time. The only outputs of the
algorithm are the new pattern of L and its elimination tree, namely, L ] j and the new
parent (j) for all columns j affected by the rank-r update. We define L ] j = L j and
leave (j) unchanged for any node j not in T .
Case C will occur for c and j prior to visiting column (c), since j < (c).
We thus place c in the lost-child-queue of column (c) when encountering case C
for nodes c and j. When the algorithm visits node (c), its lost-child-queue will
contain all those nodes for which case D holds. This set of lost-child-queues is not
the same as the set of path-queues (although there is exactly one lost-child-queue and
one path-queue for each column j of L).
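One simple way to realize the m path-queues with O(1) insertion and removal is to exploit the fact that at most r path-markers exist in total, so each queue can be a singly linked list threaded through an array of length r. The sketch below shows one possible layout with hypothetical field names, not the data structures of the actual code; the lost-child-queues can be organized in the same way.

    #include <stdlib.h>

    /* m path-queues holding path-markers 1..r: head[j] is the first
     * marker queued at column j (0 = empty), next[i] links marker i to
     * the next marker in the same queue.  Total space is O(m + r).    */
    typedef struct {
        int *head;   /* length m+1, head[j] = first marker at column j  */
        int *next;   /* length r+1, next[i] = following marker, 0 = end */
    } path_queues;

    static void pq_init(path_queues *q, int m, int r)
    {
        q->head = calloc((size_t)m + 1, sizeof(int));
        q->next = calloc((size_t)r + 1, sizeof(int));
    }

    static void pq_push(path_queues *q, int j, int i)   /* O(1) insert */
    {
        q->next[i] = q->head[j];
        q->head[j] = i;
    }

    static int pq_pop(path_queues *q, int j)            /* O(1) remove */
    {
        int i = q->head[j];
        if (i) q->head[j] = q->next[i];
        return i;                                       /* 0 if empty  */
    }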
Algorithm 3 (Symbolic rank-r update, add new matrix W).
Find the starting nodes of each path
for i = 1 to r do
place path-marker i in path-queue of column k i
end for
Consider all columns corresponding to nodes in the paths P(k 1 ) through P(k r )
for j = 1 to m do
if path-queue of column j is non-empty do
for each path-marker i on path-queue of column j do
Let c be the prior column on this path (if any), where
do
Case A: j is the rst node on the path P(k i ), no prior c
else if
Case B: c is an old child of j, possibly changed
else
Case C: c is a new child of j and a lost child of (c)
place c in lost-child-queue of column (c)
endif
end for
Case D: consider each lost child of j
for each c in lost-child-queue of column j do
end for
Move up one step in the path(s)
Let i be the largest path-marker in path-queue of column j
Place path-marker i in path-queue of column (j)
if path-queue of column j non-empty
end for
The optimal time for a general rank-r update is proportional to the total number of entries
in the columns that must be modified. The actual time taken by Algorithm 3 is only slightly
higher, namely, this optimal time plus O(m),
because of the O(m) book-keeping required for the path-queues. In most practical
cases, the O(m) term will not be the dominant term in the run time.
Algorithm 3 can be used to compute an entire symbolic factorization. We start
by factorizing the identity matrix I = II T into LDL T with L = D = I. In this case, we have
L j = {j} and (j) = 0 for each j. The initial elimination tree is a forest of m nodes and no
edges. We can now determine the symbolic factorization of I + AA T using the rank-r
update algorithm above, with W = A. This matrix has identical symbolic
factors as AA T . Case A will apply for each column in A, corresponding to the
min Ak=j
term in (3.1). Since (c) = 0 for each c, cases B and D will not apply. At column j,
case C will apply for all children in the elimination tree, corresponding to the
term in (3.1). Since duplicate paths are discarded when they merge, we modify
each column j once, for each child c in the elimination tree. This is the same work
performed by the symbolic factorization algorithm, Algorithm 2, which is O(jLj).
Hence, Algorithm 3 is equivalent to Algorithm 2 when we apply it to the update
I +AA T . Its run time is optimal in this case.
5. Multiple rank symbolic downdate. The downdate algorithm is analogous.
The downdated matrix is AA T WW T where W is a subset of the columns of A.
In a downdate, the new path from k to the root is a subset of the old path P(k), and thus,
rather than following the new paths, we follow the old paths P(k i ). Entries are dropped
during a downdate, and thus the new pattern of column j is a subset of L j
and the new parent of j is greater than or equal to (j). We start with L ] j = L j and the
old parent (j) for each node j and make the following changes.
Case A: If j = k i , then the pattern W i is removed from column j.
Case B: If c is a child of j in both the old and new tree, we need to remove from L ] j
those entries in the old pattern L c that are not in the new pattern of column c.
Case C: If c is a child of j in the old elimination tree, but not the new tree, the old
pattern L c is removed from L ] j .
Case D: If c is a child of j in the new tree, but not the old one, the new pattern of
column c is added to L ] j .
Case C will occur for c and j prior to visiting column (c) in the new tree, since j precedes the new parent of c.
We thus place c in the new-child-queue of (c) when encountering case C for nodes c
and j. When the algorithm visits node (c), its new-child-queue will contain all those
nodes for which case D holds.
Algorithm 4 (Symbolic rank-r downdate, remove matrix W).
Find the starting nodes of each path
for i = 1 to r do
place path-marker i in path-queue of column k i
end for
Consider all columns corresponding to nodes in the paths P(k 1 ) through P(k r )
for j = 1 to m do
if path-queue of column j is non-empty do
for each path-marker i on path-queue of column j do
Let c be the prior column on this path (if any), where
do
Case A: j is the rst node on the path P(k i ), no prior c
else if
Case B: c is an old child of j, possibly changed
else
Case C: c is a lost child of j and a new child of (c)
place c in new-child-queue of column (c)
endif
end for
Case D: consider each new child of j
for each c in new-child-queue of j do
end for
Move up one step in the path(s)
Let i be the largest path-marker in path-queue of column j
Place path-marker i in path-queue of column (j)
if path-queue of column j non-empty
end for
The time taken by Algorithm 4 is only slightly higher than the optimal time, again by an
additive O(m) term for the book-keeping of the path-queues.
In most practical cases, the O(m) term in the asymptotic run time for Algorithm 4
will not be the dominant term.
6. Multiple rank numerical update and downdate. The following numerical
rank-r update/downdate algorithm, Algorithm 5, overwrites L and D with the
updated or downdated factors. The algorithm is based on Algorithm 1, the one-pass
version of Method C1 in [18] presented in Section 1. The algorithm is used after
the symbolic update algorithm (Algorithm 3) has found the subtree T corresponding
to the nodes whose patterns can change, or after the symbolic downdate algorithm
(Algorithm 4) has found T . Since the columns of the matrix W can be reordered
without affecting the product WW T , we reorder the columns of W using a depth-first
search [6] of T (or T ) so that as we march through the tree, consecutive columns
of W are utilized in the computations. This reordering improves the numerical up-
date/downdate algorithm by placing all columns of W that aect any given subpath
next to each other, eliminating an indexing operation. Reordering the columns of
a sparse matrix prior to Cholesky factorization is very common [3, 22, 23, 25]. It
improves data locality and simplies the algorithm, just as it does for reordering W
in a multiple rank update/downdate. The depth-first ordering of the tree changes as
the elimination tree changes, so columns of W must be ordered for each update or
downdate.
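A minimal sketch of this reordering step is given below, assuming T is stored as child lists (a hypothetical layout) and that start_col[v] records which column of W starts its path at node v; the order produced by the traversal is then used to permute the columns of W. This is an illustration, not the authors' code.

    #include <stdlib.h>

    /* Reorder the columns of W by a depth-first search of the subtree T.
     * child_head[v] / child_next[c] encode the child lists of T,
     * start_col[v] >= 0 gives the index of the column of W whose path
     * begins at node v (-1 otherwise), and order[] receives the new
     * left-to-right order of the columns of W.                          */
    static int dfs_column_order(int root, int m, const int *child_head,
                                const int *child_next, const int *start_col,
                                int *order)
    {
        int *stack = malloc((size_t)m * sizeof(int));
        int top = 0, nord = 0;
        stack[top++] = root;
        while (top > 0) {
            int v = stack[--top];
            if (start_col[v] >= 0)
                order[nord++] = start_col[v];  /* column whose path starts at v */
            for (int c = child_head[v]; c >= 0; c = child_next[c])
                stack[top++] = c;              /* visit child subtrees          */
        }
        free(stack);
        return nord;   /* number of columns of W placed in depth-first order */
    }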
To illustrate this reordering, consider the subtree T in Figure 4.2 for a rank-8
update. If the depth-rst-search algorithm visits child subtrees from left to right, the
resulting reordering is as shown in Figure 6.1. Each subpath in Figure 6.1 is labeled
with the range of columns of W that aect that subpath, and with the order in which
the subpath is processed by Algorithm 5. Consider the path from node c to e. In
Figure
4.2, the columns of L corresponding to nodes on this subpath are updated by
columns 2, 8, 3, and 5 of W, in that order. In the reordered subtree (Figure 6.1), the
columns on this subpath are updated by columns 5 through 8 of the reordered W.
Algorithm 5 (Sparse numeric rank-r modication, add WW T ).
The columns of W have been reordered.
for i = 1 to r do
end for
for each subpath in depth-first-search order in T do
Let c 1 through c 2 be the columns of W that affect this subpath
for each column j in the subpath do
do
Fig. 6.1. Example rank-8 update after depth-first-search reordering
end for
for all
do
l
end for
end for
end for
end for
The time taken by r rank-1 updates [10] is
    O( the sum for i = 1 to r of the sum over j in P (i) (k i ) of |L (i) j | ),        (6.1)
where L (i) j is the pattern of column j after the i-th rank-1 update. This time is
asymptotically optimal. A single rank-r update cannot determine the paths P (i) (k i ),
but uses P(k i ) instead. Thus, the time taken by Algorithm 5 for a rank-r update is
    O( the sum for i = 1 to r of the sum over j in P(k i ) of |L j | ).
This is slightly higher than (6.1), because
Table 7.1
Dense matrix performance for 64-by-64 matrices and 64-by-1 vectors

Operation                                        Mflops
DGEMM (matrix-matrix multiply)                   171.6
DGEMV (matrix-vector multiply)                   130.0
DTRSV (triangular solve)
DAXPY (the vector computation y = y + alpha*x)
DDOT  (the dot product x T y)
the i-th column of W does not necessarily affect all of the columns
in the path P(k i ). If w i does not affect column j, then w ji and its update remain zero in
the inner loop in Algorithm 5. An example of this occurs in Figure 4.1, where column
1 of W does not affect column 7 of L. We could check this condition, and reduce the
asymptotic run time to (6.1).
In practice, however, we found that the paths P (i) (k i ) and P(k i ) do not differ much.
Including this test did not improve the overall performance of our algorithm. The
time taken by Algorithm 5 for a rank-r downdate is similar, namely,
    O( the sum for i = 1 to r of the sum over j in P(k i ) of |L j | ).
The numerical algorithm for updating and downdating LL T is essentially the
same as that for LDL T [4, 24]; the only difference is a diagonal scaling. For either
LL T or LDL T , the symbolic algorithms are identical.
7. Experimental results. To test our methods, we selected the same experiment
as in our earlier paper on the single-rank update and downdate [10], which
mimics the behavior of the Linear Programming Dual Active Set Algorithm [20]. The
first matrix B consists of 5446 columns from a larger matrix with 6071 rows
arising in an airline scheduling problem
(DFL001) [13]. The 5446 columns correspond to the optimal solution of the linear
programming problem. Starting with an initial LDL T factorization, we added columns
from B (corresponding to an update) until we
obtained the factors of 10 6 I + BB T . We then removed columns in a first-in-first-out
order (corresponding to a downdate) until we obtained the original factors. The LP
DASA algorithm would not perform this much work (6784 updates and 6784 downdates)
to solve this linear programming problem.
Our experiment took place on a Sun Ultra Enterprise running the Solaris 2.6
operating system, with eight 248 Mhz UltraSparc-II processors (only one processor
was used) and 2GB of main memory. The dense matrix performance in millions of
floating-point operations per second (Mflops) of the BLAS [12] is shown in Table 7.1.
All results presented below are for our own codes (except for colmmd, spooles, and
the BLAS) written in the C programming language and using double precision floating-point arithmetic.
We first permuted the rows of B to preserve sparsity in the Cholesky factors of
BB T . This can be done efficiently with colamd [7, 8, 9, 21], which is based on an
Table 7.2
Average update and downdate performance results

          Mflops                 Time in seconds
  r       update    downdate     update    downdate
approximate minimum degree ordering algorithm [1]. However, to keep our results
consistent with our prior rank-1 update/downdate paper [10], we used the same permutation
as in those experiments (from colmmd [17]). Both colamd and Matlab's
colmmd compute the ordering without forming BB T explicitly. A symbolic factorization
of BB T finds the nonzero counts of each column of the factors. This step takes
an amount of space that is proportional to the number of nonzero entries in B. It
gives us the size of a static data structure to hold the factors during the updating
and downdating process. The numerical factorization of BB T is not required. A
second symbolic factorization finds the first nonzero pattern of L. An initial numerical
factorization computes the first factors L and D. We used our own non-supernodal
factorization code (similar to SPARSPAK [5, 15]), since the update/downdate algorithms
do not use supernodes. A supernodal factorization code such as spooles [3] or
a multifrontal method [2, 14] can get better performance. The factorization method
used has no impact on the performance of the update and downdate algorithms.
We ran different experiments, each one using a different rank-r update and
downdate, where r varied from 1 to 16. After each rank-r update, we solved the
sparse linear system LDL T x = b using a dense right-hand side b. To compare the
performance of a rank-1 update with a rank-r update (r > 1), we divided the run time
of the rank-r update by r. This gives us a normalized time for a single rank-1 update.
The average time and Mflops rate for a normalized rank-1 update and downdate for
the entire experiment is shown in Table 7.2. The time for the update, downdate, or
solve increases as the factors become denser, but the performance in terms of Mflops
is fairly constant for all three operations. The first rank-16 update, when the factor
L is sparsest, takes 0.47 seconds (0.0294 seconds normalized) and runs at 65.5 Mflops,
compared to 65.1 Mflops in Table 7.2 for the average speed of all the rank-16 updates.
The performance of each step is summarized in Table 7.3. A rank-5 update takes
about the same time as using the updated factors to solve the sparse linear system
even though the rank-5 update performs 2.6 times the work.
The work, in terms of floating-point operations, varies only slightly as r changes.
With rank-1 updates, the total work for all the updates is 17.293 billion
Table 7.3
Performance of each step of the experiment

Operation                                          Time (sec)   Mflops   Notes
colamd ordering                                    0.45         -
Symbolic factorization (of BB T )
Symbolic factorization for first L                 0.46         -        831 thousand nonzeros
Numeric factorization for first L (our code)       20.07        24.0
Numeric factorization for first L (spooles)        18.10        26.6
Numeric factorization of BB T (our code)           61.04        18.5     not required
Numeric factorization of BB T (spooles)            17.80        63.3     not required
Average rank-16 update                             0.63         65.1     compare with rank-1
Average rank-5 update                              0.25         51.0     compare with solve step
Average rank-1 update                              0.084        30.3
Average solve LDL T x = b
floating-point operations, or 2.55 million per rank-1 update. With rank-16 updates (the
worst case), the total work increases to 17.318 billion floating-point operations. The
downdates take a total of 17.679 billion floating-point operations (2.61 million
per rank-1 downdate), while the rank-16 downdates take a total of 17.691 billion
operations. This confirms the near-optimal operation count of the multiple rank
update/downdate, as compared to the optimal rank-1 update/downdate.
Solving LDL T x = b when L is sparse and b is dense, and computing the sparse LDL T
factorization using a non-supernodal method, both give a rather poor computation-to-memory-reference
ratio of only 2/3. We tried the same loop unrolling technique
used in our update/downdate code for our sparse solve and sparse LDL T factorization
codes, but this resulted in no improvement in performance.
A sparse rank-r update or downdate can be implemented in a one-pass algorithm
that has much better memory traffic than that of a series of r rank-1 modifications. In
our numerical experimentation with the DFL001 linear programming test problem, the
rank-r modification was more than twice as fast as r rank-1 modifications for r >= 11.
The superior performance of the multiple rank algorithm can be explained using the
computation-to-memory-reference ratio. If c 1 = c 2 in Algorithm 5 (a subpath affected
by only one column of W), it can be shown that this ratio is about 4/5 when L j is
large. The ratio when c 2 = c 1 + 15 (a subpath affected by 16 columns of W) is
about 64/35 when L j is large. Hence, going from a rank-1 to a rank-16 update
improves the computation-to-memory-reference ratio by a factor of about 2.3 when
column j of L has many nonzeros. By comparison, the level-1 BLAS routines for
dense matrix computations (vector computations such as DAXPY and DDOT) [11]
have computation-to-memory-reference ratios between 2/3 and 1. The level-2 BLAS
(DGEMV and DTRSV, for example) have a ratio of 2.
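The improvement factor follows directly from the two ratios above:
    (64/35) / (4/5) = 320/140, which is approximately 2.29,
and this rounds to the factor of about 2.3 quoted for columns of L with many nonzeros.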
8. Summary. Because of improved memory locality, our multiple-rank sparse
update/downdate method is over twice as fast as our prior rank-1 update/downdate
method. The performance of our new method (65.1 Mflops for a sparse rank-16
update) compares favorably with both the dense matrix performance (81.5 Mflops to
solve the dense system) and the sparse matrix performance (18.0 Mflops to
solve the sparse system, and an observed peak numerical factorization of 63.3
Mflops in spooles) on the computer used in our experiments. Although not strictly
optimal, the multiple-rank update/downdate method has nearly the same operation
count as the rank-1 update/downdate method, which has an optimal operation count.
--R
An approximate minimum degree ordering algorithm
Vectorization of a multiprocessor multifrontal code
SPOOLES: an object-oriented sparse matrix library
A Cholesky up- and downdating algorithm for systolic and SIMD architectures
SPARSPAK: Waterloo sparse matrix package
Introduction to Algorithms
A column approximate minimum degree ordering algorithm
A column approximate minimum degree ordering algorithm
Modifying a sparse Cholesky factorization
Philadelphia: SIAM Publications
A set of level-3 basic linear algebra subprograms
Distribution of mathematical software via electronic mail
The multifrontal solution of inde
Computer Solution of Large Sparse Positive De
A data structure for sparse QR and LU factorizations
Sparse matrices in MATLAB: design and implementation
Methods for modifying matrix factorizations
Updating the inverse of a matrix
An approximate minimum degree column ordering algorithm
The role of elimination trees in sparse factorization
A supernodal Cholesky factorization algorithm for shared-memory multiprocessors
New York
--TR
--CTR
W. Hager, The Dual Active Set Algorithm and Its Application to Linear Programming, Computational Optimization and Applications, v.21 n.3, p.263-275, March 2002
Ove Edlund, A software package for sparse orthogonal factorization and updating, ACM Transactions on Mathematical Software (TOMS), v.28 n.4, p.448-482, December 2002
Matine Bergounioux , Karl Kunisch, Primal-Dual Strategy for State-Constrained Optimal Control Problems, Computational Optimization and Applications, v.22 n.2, p.193-224, July 2002
Nicholas I. M. Gould , Jennifer A. Scott , Yifan Hu, A numerical evaluation of sparse direct solvers for the solution of large sparse symmetric linear systems of equations, ACM Transactions on Mathematical Software (TOMS), v.33 n.2, p.10-es, June 2007 | numerical linear algebra;matrix updates;cholesky factorization;sparse matrices;mathematical software;direct methods |
587801 | On Algorithms For Permuting Large Entries to the Diagonal of a Sparse Matrix. | We consider bipartite matching algorithms for computing permutations of a sparse matrix so that the diagonal of the permuted matrix has entries of large absolute value. We discuss various strategies for this and consider their implementation as computer codes. We also consider scaling techniques to further increase the relative values of the diagonal entries. Numerical experiments show the effect of the reorderings and the scaling on the solution of sparse equations by a direct method and by preconditioned iterative techniques. | Introduction
We say that an n \Theta n matrix A has a large diagonal if the absolute value of each diagonal
entry is large relative to the absolute values of the off-diagonal entries in its row and
column. Permuting large nonzero entries onto the diagonal of a sparse matrix can be
useful in several ways. If we wish to solve the system
where A is a nonsingular square matrix of order n and x and b are vectors of length n,
then a preordering of this kind can be useful whether direct or iterative methods are used
for solution (see Olschowka and Neumaier (1996) and Duff and Koster (1997)).
The work in this report is a continuation of the work reported by Duff and Koster
(1997) who presented an algorithm that maximizes the smallest entry on the diagonal
and relies on repeated applications of the depth first search algorithm MC21 (Duff 1981)
in the Harwell Subroutine Library (HSL 1996). In this report, we will be concerned with
other bipartite matching algorithms for permuting the rows and columns of the matrix so
that the diagonal of the permuted matrix is large. The algorithm that is central to this
report computes a matching that corresponds to a permutation of a sparse matrix such
that the product (or sum) of the diagonal entries is maximized. This algorithm is already
mentioned and used in Duff and Koster (1997), but is not fully described. In this report,
we describe the algorithm in more detail. We also consider a modified version of this
algorithm to compute a permutation of the matrix that maximizes the smallest diagonal
entry. We compare the performance of this algorithm with that of Duff and Koster (1997).
We also investigate the influence of scaling of the matrix. Scaling can be used before or
after computation of the matching to make the diagonal entries even larger relative to
the off-diagonals. In particular, we look at a sparse variant of a bipartite matching and
scaling algorithm of Olschowka and Neumaier (1996) that first maximizes the product of
the diagonal entries and then scales the matrix so that these entries are one and all other
entries are no greater than one.
The rest of this report is organized as follows. In Section 2, we describe some concepts
of bipartite matching that we need for the description of the algorithms. In Section 3,
we review the basic properties of algorithm MC21. MC21 is a relatively simple algorithm
that computes a matching that corresponds to a permutation of the matrix that puts as
many entries as possible onto the diagonal without considering their numerical values.
The algorithm that maximizes the product of the diagonal entries is described in Section
4. In Section 5, we consider the modified version of this algorithm that maximizes the
smallest diagonal entry of the permuted matrix. In Section 6, we consider the scaling of the
reordered matrix. Computational experience for the algorithms applied to some practical
problems and the effect of the reorderings and scaling on direct and iterative methods of
solution are presented in Sections 7 to 7.2. The effect on preconditioning is also discussed.
Finally, we consider some of the implications of this current work in Section 8.
2 Bipartite matching
Let A = (a ij ) be a general n \Theta n sparse matrix. With matrix A, we associate a bipartite
graph G A = (V r ; V c ; E) that consists of two disjoint node sets V r and V c and an edge set
E, where (u; v) 2 E implies that u 2 V r and v 2 V c . The sets V r and V c have cardinality n and
correspond to the rows and columns of A respectively. Edge (i; j) 2 E if and only if a ij 6= 0.
We define the sets ROW (i) = {j : a ij 6= 0} and COL(j) = {i : a ij 6= 0}. These sets
correspond to the positions of the entries in row i and column j of the
sparse matrix respectively. We use vertical bars both to denote the absolute value and to signify
the number of entries in a set, sequence, or matrix. The meaning should always be clear
from the context.
A subset M of E is called a matching (or assignment) if no two edges of M are incident
to the same node. A matching containing the largest number of edges possible is called a
maximum cardinality matching (or simply maximum matching). A maximum matching
is a perfect matching if every node is incident to a matching edge. Obviously, not every
bipartite graph allows a perfect matching. However, if the matrix A is nonsingular, then
there exists a perfect matching for GA . A perfect matching M has cardinality n and
defines an n \Theta n permutation matrix P , with p ji = 1 if (i; j) 2 M and p ji = 0 otherwise,
so that both PA and AP are matrices with the matching entries on the (zero-free) diagonal.
Bipartite matching problems can be viewed as a special case of network flow problems (see,
for example, Ford Jr. and Fulkerson (1962)).
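To make the connection between a perfect matching and the permuted matrix concrete, the following C sketch (the array names are assumptions for illustration, not part of an existing code) converts a matching stored as match_row[j] = i into the row permutation that places every matched entry of A on the diagonal.

    /* Build the row permutation p from a perfect matching:
     * match_row[j] = i means row i is matched to column j, so row i of A
     * becomes row j of the permuted matrix and a_ij lands on the diagonal. */
    static void matching_to_permutation(int n, const int *match_row, int *p)
    {
        for (int j = 0; j < n; j++)
            p[match_row[j]] = j;   /* row match_row[j] moves to position j */
    }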
The more efficient algorithms for finding maximum matchings in bipartite graphs make
use of augmenting paths. Let M be a matching in GA . A node v is matched if it is incident
to an edge in M . A path P in GA is defined as an ordered set of edges in which successive
edges are incident to the same node. A path P is called an M-alternating path if the
edges of P are alternately in M and not in M . An M-alternating path P is called an M -
augmenting path if it connects an unmatched row node with an unmatched column node.
In the bipartite graph in Figure 2.1, there exists an M-augmenting path from column node
8 to row node 8. The matching M (of cardinality 7) is represented by the thick edges. The
black entries in the accompanying matrix correspond to the matching and the connected
matrix entries to the M-augmenting path. If it is clear from the context which matching
M is associated with the M-alternating and M-augmenting paths, then we will simply
refer to them as alternating and augmenting paths.
Let M and P be subsets of E. We define the symmetric difference M \Phi P as the set of
edges that belong to exactly one of M and P .
If M is a matching and P is an M-augmenting path, then M \Phi P is again a matching,
and |M \Phi P | = |M | + 1. If P is an M-alternating cyclic path, i.e., an alternating path
whose first and last edge are incident to the same node, then M \Phi P is also a matching
and |M \Phi P | = |M |.
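In an implementation the symmetric difference M \Phi P is never formed explicitly; a matching is augmented by walking the alternating path back from the free row node and flipping the matched edges. A minimal C sketch, under the assumption that from_col[i] records the column from which row i was reached while growing the tree (illustrative names only):

    /* Augment the matching along the alternating path ending at the
     * unmatched row node iap.  rowmatch[i] / colmatch[j] give the matched
     * partner of a row / column node (-1 if unmatched).                  */
    static void augment(int iap, const int *from_col,
                        int *rowmatch, int *colmatch)
    {
        int i = iap;
        while (i != -1) {
            int j = from_col[i];      /* next column on the path          */
            int inext = colmatch[j];  /* row formerly matched to column j */
            colmatch[j] = i;          /* flip the edge (i, j) into M      */
            rowmatch[i] = j;
            i = inext;                /* stops at the unmatched root      */
        }
    }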
Figure 2.1: Augmenting path
In the sequel, a matching M will often be represented by a pointer array m, where
m(i) = j if row node i is matched to column node j, that is, (i; j) 2 M .
Augmenting paths in a bipartite graph G can be found by constructing alternating
trees. An alternating tree subgraph of G rooted at a row or column
node and each path in T is an M-alternating path. An alternating tree rooted at a
column node j 0 can be grown in the following way. We start with the initial alternating
tree (;; fj 0 g; ;) and consider all the column nodes j 2 T c in turn. Initially . For
each node j, we check the row nodes i 2 COL(j) for which an alternating path from i
to j 0 does not yet exist. If node i is already matched, we add row node i, column node
to T . If i is not matched, we extend T by row node i and
edge (and the path in T from node i to the root forms an augmenting path). A key
observation for the construction of a maximum or perfect matching is that a matching M
is maximum if and only if there is no augmenting path relative to M .
Alternating trees can be implemented using a pointer array c such that,
given an edge (i; is either the root node of the tree, or the edges
are consecutive edges in an alternating path towards the root.
Augmenting paths in an alternating tree (provided they exist) can thus easily be obtained
from p and m.
Alternating trees are not unique. In general, one can construct several alternating
trees starting from the same root node that have equal node sets, but different edge
sets. Different alternating trees in general will contain different augmenting paths. The
matching algorithms that we describe in the next sections impose different criteria on the
order in which the paths in the alternating trees are grown in order to obtain augmenting
paths and maximum matchings with special properties.
3 Maximum cardinality matching
The asymptotically fastest currently known algorithm for finding a maximum matching is
by Hopcroft and Karp (1973). It has a worst-case complexity of O(sqrt(n) τ), where τ is
the number of entries in the sparse matrix. An efficient implementation of this algorithm
can be found in Duff and Wiberg (1988). The algorithm MC21 implemented by Duff
(1981) has a theoretically worst-case behaviour of O(nτ), but in practice it behaves more
like O(n + τ). Because this latter algorithm is simpler, we concentrate on this in the
following although we note that it is relatively straightforward to use the algorithm of
Hopcroft and Karp (1973) in a similar way to how we will use MC21 in later sections.
MC21 is a depth-first search algorithm with look-ahead. It starts off with an empty
matching M , and hence all column nodes are unmatched initially. See Figure 3.1. For
each unmatched column node j 0 in turn, an alternating tree is grown until an augmenting
path with respect to the current matching M is found (provided one exists). A set B is
used to mark all the matched row nodes that have been visited so far. Initially, B is empty.
First, the row nodes in COL(j 0 ) are searched (look-ahead) for an unmatched node i 0 .
If one is found, the singleton path {(i 0 ; j 0 )} is an M-augmenting path. If there is
no such unmatched node, then an unmarked matched node i 0 2 COL(j 0 )
is marked, the nodes i 0 and j 1 = m(i 0 ), and the edges (i 0 ; j 0 ) and (i 0 ; j 1 ) are added to
the alternating tree (by setting p(i 0 ) = j 0 ).
The search then continues with column node
j 1 . For node j 1 , the row nodes in COL(j 1 ) are first checked for an unmatched node.
If one exists, say i 1 , then the path {(i 1 ; j 1 ); (i 0 ; j 1 ); (i 0 ; j 0 )} forms an augmenting
path. If there is no such unmatched node, a remaining unmarked node i 1 is picked from
COL(j 1 ), i 1 is marked, p(i 1 ) is set to j 1 ,
j 2 = m(i 1 ), and the search moves to node j 2 .
This continues in a similar (depth-first search) fashion until either an augmenting path
(with nodes j 0 and i k unmatched) is found, or
until for some k > 0, COL(j k ) does not contain an unmarked node. In the latter case,
MC21 backtracks by resuming the search at the previously visited column node j k−1 for
some remaining unmarked node in COL(j k−1 ).
Backtracking continues in the same way;
if MC21 resumes the search at column node j 0 and COL(j 0 ) does not contain an unmarked
node, then an M-augmenting path starting at node j 0 does not exist. In this case, MC21
continues with the construction of a new alternating tree starting at the next unmatched
column node. (The final maximum matching will have cardinality at most n \Gamma 1 and hence
will not be perfect.)
Figure 3.1: Outline of MC21.
for each unmatched column node j 0 do
repeat
if there exists i 2 COL(j) and i is unmatched then
else
if there exists
else
until iap 6= null or
if iap 6= null then augment along path from node iap to node j 0
end for
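For reference, the search in Figure 3.1 can be written compactly in C as a recursive depth-first search with look-ahead (a recursive formulation rather than MC21's explicit backtracking; the compressed-column arrays and the function name are assumptions for illustration). A driver calls try_column once for every unmatched column and clears the visited flags between calls.

    /* Try to find an augmenting path from column j; flip matches on success. */
    static int try_column(int j, const int *colptr, const int *rowind,
                          int *rowmatch, int *colmatch, char *visited)
    {
        /* look-ahead: an unmatched row in column j gives a path of length one */
        for (int q = colptr[j]; q < colptr[j + 1]; q++) {
            int i = rowind[q];
            if (rowmatch[i] < 0) {
                rowmatch[i] = j;
                colmatch[j] = i;
                return 1;
            }
        }
        /* otherwise recurse through matched, not yet visited rows */
        for (int q = colptr[j]; q < colptr[j + 1]; q++) {
            int i = rowind[q];
            if (!visited[i]) {
                visited[i] = 1;
                if (try_column(rowmatch[i], colptr, rowind,
                               rowmatch, colmatch, visited)) {
                    rowmatch[i] = j;   /* flip the edge (i, j) into M */
                    colmatch[j] = i;
                    return 1;
                }
            }
        }
        return 0;
    }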
4 Weighted matching
In this section, we describe an algorithm that computes a matching for permuting a sparse
matrix A such that the product of the diagonal entries of the permuted matrix is maximum
in absolute value. That is, the algorithm determines a matching that corresponds to a
permutation σ that maximizes
    the product over i = 1, ..., n of |a i σ(i) |.        (4.1)
This maximization multiplicative problem can be translated into a minimization
additive problem by defining the matrix C = (c ij ) with
    c ij = log a j − log |a ij |   for a ij 6= 0,
where a j = max i |a ij | is the maximum absolute value in column j of matrix A. Maximizing
(4.1) is equal to minimizing
    the sum over i of c i σ(i) = the sum over i of (log a σ(i) − log |a i σ(i) |)
                              = the sum over j of log a j − the sum over i of log |a i σ(i) |.     (4.2)
Minimizing (4.2) is equivalent to finding a minimum weight perfect matching in an edge
weighted bipartite graph. This is known in literature as the bipartite weighted matching
problem or (linear sum) assignment problem in linear programming and combinatorial
optimization. Numerous algorithms have been proposed for computing minimum weight
perfect matchings, see for example Burkard and Derigs (1980), Carpaneto and Toth (1980),
Carraresi and Sodini (1986), Derigs and Metz (1986), Jonker and Volgenant (1987), and
Kuhn (1955). A practical example of an assignment problem is the allotment of tasks to
in the cost matrix C represents the cost or benefit of assigning person i
to task j.
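In a sparse code the cost matrix C is never formed as a dense array; the weights can be attached directly to the entries of A. A minimal C sketch for a matrix in compressed-column form (the array names are illustrative, not an existing interface):

    #include <math.h>

    /* Attach the weight c_ij = log(a_j) - log(|a_ij|) to every entry of a
     * matrix in compressed-column form (colptr/val); a_j is the largest
     * absolute value in column j.  Explicit zero entries are assumed absent. */
    static void build_weights(int n, const int *colptr,
                              const double *val, double *c)
    {
        for (int j = 0; j < n; j++) {
            double aj = 0.0;
            for (int q = colptr[j]; q < colptr[j + 1]; q++)
                if (fabs(val[q]) > aj) aj = fabs(val[q]);
            for (int q = colptr[j]; q < colptr[j + 1]; q++)
                c[q] = log(aj) - log(fabs(val[q]));   /* c_ij >= 0 */
        }
    }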
Let C = (c ij ) be a real-valued n \Theta n matrix and let G C = (V r ; V c ; E) be the
corresponding bipartite graph, each of whose edges (i; j) has weight c ij . The weight
of a matching M in G C , denoted by c(M ), is defined by the sum of its edge weights, i.e.,
    c(M) = the sum of c ij over all (i; j) 2 M.
A perfect matching M is said to be a minimum weight perfect matching if it has smallest
possible weight, i.e., c(M) <= c(M 0 ), for all possible maximum matchings M 0 .
The key concept for finding a minimum weight perfect matching is the so-called shortest
augmenting path. An M-augmenting path P starting at an unmatched column node j is
called shortest if c(M \Phi P ) <= c(M \Phi P 0 ) for all possible M-augmenting paths P 0
starting at node j. We define
    l(P ) = (the sum of c ij over the edges of P not in M ) − (the sum of c ij over the edges of P in M )
as the length of alternating path P . A matching M is called extreme if and only if it does
not allow any alternating cyclic path with negative length.
The following two relations hold. First, a perfect matching has minimum weight if it
is extreme. Second, if matching M is extreme and P is a shortest M-augmenting path,
then M \Phi P is extreme also. The proof for this goes roughly as follows. Suppose M \Phi P is
not extreme. Then there exists an alternating cyclic path Q such that c((M \Phi P ) \Phi Q) < c(M \Phi P ).
Since M is extreme, there must exist a
subset of P \Phi Q that forms an M-augmenting path and is shorter than P . Hence, P is
not a shortest M-augmenting path. This contradicts the supposition.
These two relations form the basis for many algorithms for solving the bipartite
weighted matching problem: start from any (possibly empty) extreme matching M and
successively augment M along shortest augmenting paths until M is maximum (or perfect).
In the literature, the problem of finding a minimum weight perfect matching is often
stated as the following linear programming problem. Find a matrix X = (x ij )
minimizing the sum over i and j of c ij x ij
subject to the sum over i of x ij = 1 for each j, the sum over j of x ij = 1 for each i, and x ij >= 0.
If there is a solution to this linear program, there is one for which x ij 2 f0; 1g and there
exists a permutation matrix X such that M = {(i; j) : x ij = 1} is a minimum weight perfect
matching (Edmonds and Karp 1972, Kuhn 1955). Furthermore, M has minimum weight
if and only if there exist dual variables u i and v j with
    u i + v j <= c ij for all (i; j) 2 E, and u i + v j = c ij for all (i; j) 2 M.      (4.3)
Using the reduced weights c ij − u i − v j (which are non-negative),
the reduced weight of matching M equals the sum of the reduced weights of its edges,
the reduced length of any M-alternating path P equals
    (the sum of the reduced weights over the edges of P not in M ) − (the sum over the edges of P in M ),
and if M \Phi P is a matching, the reduced weight of M \Phi P equals the reduced weight of M plus the reduced length of P .
Thus, finding a shortest augmenting path in graph GC is equivalent to finding an
augmenting path in graph G C , with minimum reduced length.
Since every matching edge has zero reduced weight and every other edge has a non-negative
reduced weight, the graph contains no alternating paths P with negative length, and the
reduced length of any leading subpath P 0 of P is at most the reduced length of P .
Shortest augmenting paths in a weighted bipartite graph E) can be
obtained by means of a shortest alternating path tree. A shortest alternating path tree T
is an alternating tree each of whose paths is a shortest path in G. For any node i
we define d i as the length of the shortest path in T from node i to the root node (d
if no such path exists). T is a shortest alternating path tree if and only if d
for every edge (i; nodes i, j,
An outline of an algorithm for constructing a shortest alternating path tree rooted at
column node j 0 is given in Figure 4.1. Because the reduced weights c ij are non-negative,
and graph G C contains no alternating paths with negative length, we can use a sparse
variant of Dijkstra's algorithm (Dijkstra 1959). The set of row nodes is partitioned into
three sets B, Q, and W . B is the set of (marked) nodes whose shortest alternating paths
and distances to node j 0 are known. Q is the set of nodes for which an alternating path
to the root is known that is not necessarily the shortest possible. W is the set of nodes for
which an alternating path does not exist or is not known yet. (Note that since W is defined
implicitly as V r n (B [Q), it is not actually used in Figure 4.1.) The algorithm starts with
shortest alternating tree and extends the tree until an augmenting
path is found that is guaranteed to be a shortest augmenting path with respect to the
current matching M . Initially, the length of the shortest augmenting path lsap in the tree
is set to infinity, and the length of the shortest alternating path lsp from the root to any
node in Q is set to zero. On each pass through the main loop, another column node j is
chosen that is closest to the root j 0 ; initially j = j 0 .
Each row node i 2 COL(j) whose shortest alternating path to the root is not known
yet (i 62 B), is considered. If P j 0 !j!i , the shortest alternating path from the root node
0 to node j (with length lsp) extended by edge (i; j) from node j to node i (with length
longer than the tentative shortest augmenting path in the tree (with length lsap),
then there is no need to modify the tree. If P j 0 !j!i has length smaller than lsap, and i
is unmatched, then a new shorter augmenting path has been found and lsap is updated.
If i is matched and P j 0 !j!i is also shorter than the current shortest alternating path to
(with length d i ), then a shorter alternating path to node i has been found and the tree
is updated, d i is updated, and if node i has not been visited previously, i is moved to Q.
Next, if Q is not empty, a node i 2 Q is determined that is closest to the root. Since all
weights c ij in the bipartite graph are non-negative, there cannot be any other alternating
path to node i that is shorter than the current one. Node i is marked (by adding it to
B), and the search continues with column node j = m(i). This continues until there are
no more column nodes to be searched or until no new augmenting path can be
found whose length is smaller than the current shortest one (line lsap <= lsp).
The original Dijkstra algorithm (intended for dense graphs) has O(n 2 ) complexity.
For sparse problems, the complexity can be reduced to O(τ log n) by implementing the
set Q as a k-heap in which the nodes i are sorted by increasing distance d i from the root
(see for example Tarjan (1983) and Gallo and Pallottino (1988)). The running time of
the algorithm is dominated by the operations on the heap Q, of which there are O(n)
delete operations, O(n) insert operations, and O(τ) modification operations (these are
necessary each time a distance d i is updated). Each insert and modification operation
runs in O(log k n) time, a delete operation runs in O(k log k n) time. Consequently, the
algorithm for finding a shortest augmenting path in a sparse bipartite graph has run time
O((τ + nk) log k n) and the total run time for the sparse bipartite weighted algorithm is
O(n(τ + nk) log k n). If we choose k = 2, the algorithm uses binary heaps and we obtain a
time bound of O(n(τ + n) log 2 n). If we choose k = max(2, τ/n), we obtain a bound
of O(nτ log τ/n n).
The implementation of the heap Q is similar to the implementation proposed in Derigs
and Metz (1986). Q is a pair (Q is an array that contains all the row
nodes for which the distance to the root is shortest (lsp), and
By separating the nodes in Q that are closest to the root, we may reduce the number
of operations on the heap, especially in those situations where the cost matrix C has
only few different numerical values and many alternating paths have the same length.
Deleting a node from Q for which d i is smallest (see Figure 4.1), now consists of choosing
an (arbitrary) element from Q 1 . If Q 1 is empty, then we first move all the nodes in Q 2
that are closest to the root to Q 1 .
After the augmentation, the reduced weights c ij have to be updated to ensure that
alternating paths in the new G have non-negative length. This is done by modifying the
Figure 4.1: Construction of a shortest augmenting path.
while true do
dnew := lsp + c ij
if dnew < lsap then
if i is unmatched then
lsap := dnew; isap := i
else
choose
if lsap <= lsp then exit while-loop;
if lsap 6= 1 then augment along path from node isap to node j 0
dual vectors u and v. If B is the set of row nodes of the shortest alternating path tree that was
constructed until the shortest augmenting path was found, then u i and v j are updated as
    u i := u i + d i − lsap for every i 2 B, and v j := c ij − u i for every matched pair (i; j) 2 M.
The updated dual variables u and v satisfy (4.3) and the new reduced weights c ij are
non-negative.
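The conditions (4.3) are easy to verify in a code, which is a useful debugging aid; the following C sketch (compressed-column storage and array names are assumptions, not the interface of an existing routine) checks dual feasibility and tightness on the matched entries.

    #include <math.h>

    /* Check (4.3): u_i + v_j <= c_ij for every entry, with equality on the
     * matched entries.  colmatch[j] is the row matched to column j.       */
    static int check_duals(int n, const int *colptr, const int *rowind,
                           const double *c, const int *colmatch,
                           const double *u, const double *v, double tol)
    {
        int ok = 1;
        for (int j = 0; j < n; j++)
            for (int q = colptr[j]; q < colptr[j + 1]; q++) {
                int i = rowind[q];
                double slack = c[q] - u[i] - v[j];
                if (slack < -tol) ok = 0;                        /* infeasible  */
                if (i == colmatch[j] && fabs(slack) > tol) ok = 0; /* not tight */
            }
        return ok;
    }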
The running time of the weighted matching algorithm can be decreased considerably
by means of a cheap heuristic that determines a large initial extreme matching M . We
use the strategy proposed by Carpaneto and Toth (1980). We calculate
u i = min {c ij : j 2 ROW (i)} for every row node i and v j = min {c ij − u i : i 2 COL(j)} for every column node j.
Inspecting the sets COL(j) for each column node j in turn, we determine a large initial
matching M of edges (i; j) for which c ij − u i − v j = 0. Then, for each remaining unmatched
column node j, every node i 2 COL(j) is considered for which c ij − u i − v j = 0 and that is
matched to a column node other than j, say j 1 = m(i). If an unmatched row node
i 1 2 COL(j 1 ) can be found for which c i 1 j 1 − u i 1 − v j 1 = 0, then the edge (i; j 1 ) in M is replaced
by (i; j) and the edge (i 1 ; j 1 ) is added to M . After having repeated this for all unmatched columns, the search for
shortest augmenting paths starts with respect to the current matching.
Finally, we note that the above weighted matching algorithm can also be used for
maximizing the sum of the diagonal entries of matrix A (instead of maximizing the product
of the diagonal entries). To do this, we again minimize (4.2), but we redefine the matrix C as
    c ij = a j − |a ij | if a ij 6= 0, and c ij = 0 otherwise.
Maximizing the sum of the diagonal entries is equal to minimizing (4.2), since
    the sum over i of c i σ(i) = the sum over i of (a σ(i) − |a i σ(i) |) = the sum over j of a j − the sum over i of |a i σ(i) |.
5 Bottleneck matching
We describe a modification of the weighted bipartite matching algorithm from the previous
section for permuting rows and columns of a sparse matrix A such that the smallest ratio
between the absolute value of a diagonal entry and the maximum absolute value in its
column is maximized. That is, the modification computes a permutation σ that maximizes
    min over 1 <= i <= n of |a i σ(i) | / a σ(i) ,        (5.1)
where a j is the maximum absolute value in column j of the matrix A. Similarly to the
previous section, we transform this into a minimization problem. We define the matrix
C = (c ij ) in terms of the ratios |a ij | / a j , in such a way that c ij decreases as this ratio increases.
Then maximizing (5.1) is equal to minimizing
    max over 1 <= i <= n of c i σ(i) .
Given a matching M in the bipartite graph G C = (V r ; V c ; E), the bottleneck value of
M is defined as
    c(M) = max over (i; j) 2 M of c ij .
The problem is to find a perfect (or maximum) bottleneck matching M for which c(M) is
minimal, i.e. c(M) <= c(M 0 ), for all possible maximum matchings M 0 . A matching M is
called extreme if it does not allow any alternating cyclic path P for which c(M \Phi P ) < c(M).
The bottleneck algorithm starts off with any extreme matching M . The initial
bottleneck value b is set to c(M ). Each pass through the main loop, an alternating tree
is constructed until an augmenting path P is found for which either c(M \Phi P ) <= b,
or c(M \Phi P ) is as small as possible. The initializations and the main loop
for constructing such an augmenting path are those of Figure 4.1. Figure 5.1 shows the
inner-loop of the weighted matching algorithm of Figure 4.1 modified to the case of the
bottleneck objective function. The main differences are that the sum operation on the path
lengths in Figure 4.1 is replaced by the "max" operation and, as soon as an augmenting
path P is found whose length lsap is less than or equal to the current bottleneck value
b, the main loop is exited, P is used to augment M , and b is set to max(b; lsap). The
bottleneck algorithm does not modify the edge weights c ij .
Similarly to the implementation discussed in Section 4, the set Q is implemented as a
now the array Q 1 contains all the nodes whose distance to the root is less
than or equal to the tentative bottleneck value b. Q 2 contains the nodes whose distance
to the root is larger than the bottleneck value but not infinity. Q 2 is again implemented
as a heap.
Figure
5.1: Modified inner loop of Figure 4.1 for the construction of a bottleneck
augmenting path.
dnew := max(lsp; c ij )
if dnew < lsap then
if i is unmatched then
lsap := dnew; isap := i
if lsap <= b then exit while-loop;
else
A large initial extreme matching can be found in the following way. We define
r i = min {c ij : j 2 ROW (i)} and s j = min {c ij : i 2 COL(j)}
as the smallest entry in row i and column j, respectively. A lower bound b 0 for the
bottleneck value is
    b 0 = max( max over i of r i , max over j of s j ).
An extreme matching M can be obtained from the edges (i; j) for which c ij <= b 0 ; we scan
all column nodes j in turn and, for each node i 2 COL(j) that is unmatched and for which
c ij <= b 0 , the edge (i; j) is added to M . Then, for each remaining unmatched column node j,
every node i 2 COL(j) is considered for which c ij <= b 0 and that is matched to a column
node other than j, say j 1 = m(i). If an unmatched row node i 1 2 COL(j 1 ) can be
found for which c i 1 j 1 <= b 0 , then the edge (i; j 1 ) in M
is replaced by (i; j) and the edge (i 1 ; j 1 ) is added to M . After having
done this for all unmatched columns, the search for shortest augmenting paths starts with
respect to the current matching.
Other initialization procedures can be found in the literature. For example, a slightly
more complicated initialization strategy is used by Finke and Smith (1978) in the context
of solving transportation problems. For every use
as the number of admissible edges incident to row node i and column node j respectively.
The idea behind using g i and h j is that once an admissible edge (i; j) is added to M , all
the other admissible edges that are incident to nodes i and j are no longer candidates
to be added to M . Therefore, the method tries to pick admissible edges such that the
number of admissible edges that become unusable is minimal. First, a row node i with
minimal g i is determined. From the set ROW (i) an admissible entry (i; (provided one
exists) is chosen for which h j is minimal and (i; j) is added to M . After deleting the edges
and the edges (k; j), k 2 COL(j), the method repeats the same for
another row node i 0 with minimal g i 0 . This continues until all admissible edges are deleted
from the graph.
Finally, we note that instead of maximizing (5.1) we also could have maximized the
smallest absolute value on the diagonal. That is, we maximize
min
1-i-n
and define the matrix C as
Note that this problem is rather sensitive to the scaling of the matrix A. Suppose for
example that the matrix A has a column containing only one nonzero entry whose absolute
value v is the smallest absolute value present in A. Then, after applying the bottleneck
algorithm, the bottleneck value b will be equal to this small value. The smallest entry on
the diagonal of the permuted matrix is maximized, but the algorithm did not have any
influence on the values of the other diagonal values. Scaling the matrix prior to applying
the bottleneck algorithm avoids this.
In Duff and Koster (1997), a different approach is taken to obtain a bottleneck
matching. Let A ffl denote the matrix that is obtained by setting to zero in A all entries
a ij for which |a ij | < ffl, and let M 0 denote the matching obtained by removing
from matching M all the entries (i; j) for which |a ij | < ffl. Throughout the
algorithm, fflmax and fflmin are such that a maximum matching of size jM j does not exist
for A fflmax but does exist for A fflmin . At each step, ffl is chosen in the interval (fflmin; fflmax),
and a maximum matching for the matrix A ffl is computed using a variant of MC21. If
this matching has size jM j, then fflmin is set to ffl, otherwise fflmax is set to ffl. Hence,
the size of the interval decreases at each step and ffl will converge to the bottleneck value.
After termination of the algorithm, M 0 is the computed bottleneck matching and ffl the
corresponding bottleneck value.
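The interval-shrinking idea can be summarized by the following C sketch. Here the search is organized over the sorted distinct absolute values of A rather than over a real interval, and max_matching_size() stands for any maximum cardinality matching routine (for instance a variant of MC21) applied to A ffl; both are hypothetical helpers for illustration, not the routines of Duff and Koster (1997).

    /* Bisection on the threshold eps: find the largest eps such that the
     * thresholded matrix A_eps still admits a matching of size target.
     * values[] holds the distinct absolute values of A in increasing order. */
    static double bottleneck_by_bisection(int nvals, const double *values,
                                          int target,
                                          int (*max_matching_size)(double eps))
    {
        int lo = 0, hi = nvals - 1;          /* values[lo] keeps full size   */
        while (lo < hi) {
            int mid = (lo + hi + 1) / 2;
            if (max_matching_size(values[mid]) >= target)
                lo = mid;                    /* still a full matching        */
            else
                hi = mid - 1;                /* threshold removed too much   */
        }
        return values[lo];                   /* the bottleneck value         */
    }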
6 Scaling
Olschowka and Neumaier (1996) use the dual solution produced by the weighted matching
algorithm to scale the matrix. Let u and v be such that they satisfy relation (4.3). If we
define D 1 = diag(exp(u 1 ), . . ., exp(u n )) and D 2 = diag(exp(v 1 )/a 1 , . . ., exp(v n )/a n ),
then we have |(D 1 AD 2 ) ij | = exp(u i + v j − c ij ) <= 1.
Equality holds when u i + v j = c ij , that is, when (i; j) 2 M . In words, D 1 AD 2 is a matrix whose
diagonal entries are one in absolute value and whose off-diagonal entries are all less than
or equal to one. Olschowka and Neumaier (1996) call such a matrix an I-matrix and use
this in the context of dense Gaussian elimination to reduce the amount of pivoting that is
needed for numerical stability. The more dominant the diagonal of a matrix, the higher
the chance that diagonal entries are stable enough to serve as pivots for elimination.
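Under the cost function of Section 4 and dual variables u and v satisfying (4.3), the two scaling matrices can be generated directly; the following C sketch (array names are assumptions for illustration) computes D 1 and D 2 as vectors r and s with r i = exp(u i ) and s j = exp(v j )/a j .

    #include <math.h>

    /* Scaling vectors that turn A into an I-matrix:  r[i] = exp(u[i]) and
     * s[j] = exp(v[j]) / a_j, where colmax[j] = a_j is the largest absolute
     * value in column j.  Then |r[i] * a_ij * s[j]| = exp(u_i + v_j - c_ij),
     * which is at most one, with equality on the matched entries.          */
    static void scaling_from_duals(int n, const double *u, const double *v,
                                   const double *colmax, double *r, double *s)
    {
        for (int i = 0; i < n; i++) r[i] = exp(u[i]);
        for (int j = 0; j < n; j++) s[j] = exp(v[j]) / colmax[j];
    }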
For iterative methods, the transformation of a matrix to an I-matrix is also of interest.
For example, from Gershgorin's theorem we know that the union of all discs
    K i = { z : |z − a ii | <= the sum over j 6= i of |a ij | },  i = 1, . . ., n,
contains all eigenvalues of the n \Theta n matrix A. Disc K i has center at a ii and radius that is
equal to the sum of the absolute off-diagonal values in row i. Since the diagonal entries of
an I-matrix are all one, all the n disks have center at 1. The estimate of the eigenvalues
will be sharper as A deviates less from a diagonal matrix. That is, the smaller the radii of
the discs, the better we know where the eigenvalues are situated. If we are able to reduce
the radii of the discs of an I-matrix, i.e. reduce the off-diagonal values, then we tend to
cluster the eigenvalues more around one. In the ideal case, all the discs of an I-matrix
have a radius smaller than one, in which case the matrix is strictly row-wise diagonally
dominant. This guarantees that many types of iterative methods will converge (in exact
even simple ones like the Jacobi and Gauss-Seidel method. However, if at
least one disc remains with radius larger than or close to one, zero eigenvalues or small
eigenvalues are possible.
A straightforward (but expensive) attempt to decrease large off-diagonal entries of a
matrix is by row and column equalization (Olschowka and Neumaier 1996). Let A be
an I-matrix. (For simplicity we assume
that A contains no zero entries.) Equalization consists of repeatedly equalizing the largest
absolute value in row i and the largest absolute value in column i:
for k := 1, 2, . . . do
for i := 1 to n do
scale row i and column i so that their largest off-diagonal absolute values become equal
For k large enough the scalings converge,
and thus, if we define d 1 and d 2 as the accumulated row and column scaling factors,
the algorithm minimizes the
largest off-diagonal absolute value in matrix D 1 AD 2 . The diagonal entries do not change.
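A single sweep of this equalization idea can be sketched as follows (dense storage for clarity; the routine illustrates the idea rather than reproducing the algorithm of Olschowka and Neumaier (1996)). Row i is multiplied by sqrt(cmax/rmax) and column i divided by the same factor, so the two largest off-diagonal magnitudes become equal while the unit diagonal is preserved.

    #include <math.h>

    /* One equalization sweep on a dense n-by-n I-matrix a (row-major). */
    static void equalize_sweep(int n, double *a)
    {
        for (int i = 0; i < n; i++) {
            double rmax = 0.0, cmax = 0.0;
            for (int k = 0; k < n; k++) {
                if (k == i) continue;
                if (fabs(a[i * n + k]) > rmax) rmax = fabs(a[i * n + k]);
                if (fabs(a[k * n + i]) > cmax) cmax = fabs(a[k * n + i]);
            }
            if (rmax == 0.0 || cmax == 0.0) continue;
            double f = sqrt(cmax / rmax);       /* row factor              */
            for (int k = 0; k < n; k++) {
                a[i * n + k] *= f;              /* scale row i             */
                a[k * n + i] /= f;              /* scale column i by 1/f   */
            }
            /* a[i][i] is multiplied and divided by f, hence unchanged      */
        }
    }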
Note that the above scaling strategy does not guarantee that all off-diagonal entries
of an I-matrix will be smaller than one in absolute value, for example if the I-matrix A
contains two off-diagonal entries a kl and a lk , k 6= l, whose absolute values are both one.
7 Experimental results
In this section, we discuss several cases where the reorderings algorithms from the previous
section can be useful. These include the solution of sparse equations by a direct method
and by an iterative technique. We also consider its use in generating a preconditioner for
an iterative method.
The set of matrices that we used for our experiments are unsymmetric matrices taken
from the Harwell-Boeing Sparse Matrix Test Collection (Duff, Grimes and Lewis 1992)
and from the sparse matrix collection at the University of Florida (Davis 1997).
All matrices are initially row and column scaled. By this we mean that the matrix is
scaled so that the maximum entry in each row and in each column is one.
The computer used for the experiments is a SUN UltraSparc with 256 Mbytes of main
memory. The algorithms are implemented in Fortran 77.
We use the following acronyms. MC21 is the matching algorithm from the Harwell
Subroutine Library for computing a matching such that the corresponding permuted
matrix has a zero free-diagonal (see Section 3). BT is the bottleneck bipartite matching
algorithm from Section 5 for permuting a matrix such that the smallest ratio between
the absolute value of a diagonal entry and the maximum absolute value in its column is
maximized. BT' is the bottleneck bipartite matching algorithm from Duff and Koster
(1997). MPD is the weighted matching algorithm from Section 4 and computes a
permutation such that the product of the diagonal entries of the permuted matrix
is maximum in absolute value. MPS is equal to the MPD algorithm, but after the
permutation, the matrix is scaled to an I-matrix (see Section 6).
Table
7.1 shows for some large sparse matrices the order, number of entries, and
the time for the algorithms to compute a matching. The times for MPS are not listed,
because they are almost identical to those for MPD. In general, MC21 needs the least time
to compute a matching, except for the ONETONE and TWOTONE matrices. For these
matrices, the search heuristic that is used in MC21 (a depth-first search with look-ahead)
does not perform well. This is probably caused by the ordering of the columns (variables)
and the entries inside the columns of the matrix. A random permutation of the matrix
prior to applying MC21 might lead to other results. There is not a clear winner between
the bottleneck algorithms BT and BT', although we note that BT' requires the entries
inside the columns to be sorted by value. This sorting can be expensive for relatively
dense matrices. MPD is in general the most expensive algorithm. This can be explained
by the more selective way in which this algorithm constructs augmenting paths.
7.1 Experiments with a direct solution method
For direct methods, putting large entries on the diagonal suggests that pivoting down the
diagonal might be more stable. Indeed, stability can still not be guaranteed, but if we have
a solution scheme like the multifrontal method of Duff and Reid (1983), where a symbolic
phase chooses the initial pivotal sequence and the subsequent factorization phase then
modifies this sequence for stability, it can mean that the modification required is less than
if the permutation were not applied.
In the multifrontal approach of Duff and Reid (1983), later developed by Amestoy and
Duff (1989), an analysis is performed on the structure of A+A T to obtain an ordering that
reduces fill-in under the assumption that all diagonal entries will be numerically suitable
for pivoting. The numerical factorization is guided by an assembly tree. At each node of
the tree, some steps of Gaussian elimination are performed on a dense submatrix whose
Schur complement is then passed to the parent node in the tree where it is assembled
Table 7.1: Times (in seconds) for matching algorithms. Order of matrix is n and number of entries τ.

Matrix        n       τ        MC21    BT      BT'     MPD
GOODWIN       7320    324784   0.27    2.26    4.17    1.82
(or summed) with Schur complements from the other children and original entries of the
matrix. If, however, numerical considerations prevent us from choosing a pivot then the
algorithm can proceed, but now the Schur complement that is passed to the parent is
larger and usually more work and storage will be needed to effect the factorization.
The logic of first permuting the matrix so that there are large entries on the diagonal,
before computing the ordering to reduce fill-in, is to try and reduce the number of pivots
that are delayed in this way thereby reducing storage and work for the factorization. We
show the effect of this in Table 7.2 where we can see that even using MC21 can be very
beneficial although the other algorithms can show significant further gains.
In
Table
7.3, we show the effect of this on the number of entries in the factors. Clearly
this mirrors the results in Table 7.2.
In addition to being able to select the pivots chosen by the analysis phase, the
multifrontal code MA41 will do better on matrices whose structure is symmetric or nearly
so. Here, we define the structural symmetry for a matrix A as the number of entries a ij for
which a ji is also an entry, divided by the total number of entries. The structural symmetry
after the permutations is shown in Table 7.4. The matching orderings in some cases
increase the symmetry of the resulting reordered matrix, which is particularly apparent
when we have a very sparse system with many zeros on the diagonal. In that case, the
reduction in number of off-diagonal entries in the reordered matrix has an influence on
the symmetry. Notice that, in this respect, the more sophisticated matching algorithms
may actually cause problems since they could reorder a symmetrically structured matrix
with a zero-free diagonal, whereas MC21 will leave it unchanged.
Table
7.2: Number of delayed pivots in factorization from MA41. An "-" indicates
that MA41 needed more than 200 MBytes of memory.
Matrix Matching algorithm used
None MC21 BT MPD MPS
GOODWIN 536 1622 427 53 41
Table
7.3: Number of entries (10 3 ) in the factors from MA41.
Matrix Matching algorithm used
None MC21 BT MPD MPS
ONETONE2 14,083 2,876 2,298 2,170 2,168
GOODWIN 1,263 2,673 2,058 1,282 1,281
Table
7.4: Structural symmetry after permutation.
Matrix Matching algorithm used
None MC21 BT MPD/MPS
GEMAT11
Finally, Table 7.5 shows the effect on the solution times of MA41. We sometimes
observe a dramatic reduction in time for the solution when preceded by a permutation.
Table
7.5: Solution time required by MA41.
Matrix Matching algorithm used
None MC21 BT MPD MPS
GOODWIN 3.64 14.63 7.98 3.56 3.56
Our implementations of the algorithms described in this paper have been used
successfully by Li and Demmel (1998) to stabilize sparse Gaussian elimination in a
distributed-memory environment without the need for dynamic pivoting. Their method
decomposes the matrix into an N \Theta N block matrix A[1 : N; 1 : N ] by using the notion of
unsymmetric supernodes (Demmel, Eisenstat, Gilbert, Li and Liu 1995). The blocks are
mapped cyclically (in both row and column dimensions) onto the nodes (processors) of a
two-dimensional rectangular processor grid. The mapping is such that at step k of the
numerical factorization, a column of processors factorizes the block column A[k : N; k], a
row of processors participates in the triangular solves to obtain the block row U [k; k + 1 : N ],
and all processors participate in the corresponding multiple-rank update of the remaining
blocks A[k + 1 : N; k + 1 : N ].
The numerical factorization phase in this method does not use (dynamic) partial
pivoting on the block columns. This allows for the a priori computation of the nonzero
structure of the factors, the distributed data structures, the communication pattern, and a
good load balancing scheme, which makes the factorization more scalable on distributed-memory
machines than factorizations in which the computational and communication
tasks only become apparent during the elimination process. To ensure a solution that
is numerically stable, the matrix is permuted and scaled before the factorization to
make the diagonal entries large compared to the off-diagonal entries, any tiny pivots
encountered during the factorization are perturbed, and a few steps of iterative refinement
are performed during the triangular solution phase if the solution is not accurate enough.
Numerical experiments demonstrate that the method (using the implementation of the
MPS algorithm) is as stable as partial pivoting for a wide range of problems.
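The static-pivoting idea described above can be sketched in a few lines of Python. This is only a schematic illustration, not the Li and Demmel implementation: the pivot threshold (square root of machine precision times the matrix norm) and the fixed number of refinement steps are assumptions made for the sketch.

```python
# Factorize without pivoting, perturbing tiny pivots, then recover accuracy
# with a few steps of iterative refinement.
import numpy as np

def lu_static_pivoting(A):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    tau = np.sqrt(np.finfo(float).eps) * np.linalg.norm(A, 1)
    for k in range(n):
        if abs(A[k, k]) < tau:
            A[k, k] = tau if A[k, k] >= 0 else -tau    # perturb tiny pivot
        A[k+1:, k] /= A[k, k]
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    return A                                           # packed LU factors

def solve_with_refinement(A, b, steps=3):
    A = np.asarray(A, dtype=float)
    LU = lu_static_pivoting(A)
    def lu_solve(r):
        y = r.astype(float).copy()
        n = len(y)
        for i in range(n):                             # forward solve (unit lower)
            y[i] -= LU[i, :i] @ y[:i]
        for i in reversed(range(n)):                   # backward solve
            y[i] = (y[i] - LU[i, i+1:] @ y[i+1:]) / LU[i, i]
        return y
    x = lu_solve(b)
    for _ in range(steps):                             # iterative refinement
        x += lu_solve(b - A @ x)
    return x
```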
7.2 Experiments with iterative solution methods
For iterative methods, simple techniques like Jacobi or Gauss-Seidel converge more
quickly if the diagonal entry is large relative to the off-diagonals in its row or column,
and techniques like block iterative methods can benefit if the entries in the diagonal
blocks are large. Additionally, for preconditioning techniques, for example for diagonal
preconditioning or incomplete LU preconditioning, it is intuitively evident that large
diagonals should be beneficial.
7.2.1 Preconditioning by incomplete factorizations
In incomplete factorization preconditioners, pivots are often taken from the diagonal and
fill-in is discarded if it falls outside a prescribed sparsity pattern. (See Saad (1996) for
an overview.) Incomplete factorizations are used so that the resulting factors are more
economical to store, to compute, and to solve with.
One of the reasons why incomplete factorizations can behave poorly is that pivots
can be arbitrarily small (Benzi, Szyld and van Duin 1997, Chow and Saad 1997). Pivots
may even be zero in which case the incomplete factorization fails. Small pivots allow
the numerical values of the entries in the incomplete factors to become very large, which
leads to unstable and therefore inaccurate factorizations. In such cases, the norm of the
residual R = A − L̄Ū will be large. (Here, L̄ and Ū denote the computed incomplete factors.)
A way to improve the stability of the incomplete factorization is to preorder the
matrix to put large entries onto the diagonal. Obviously, a successful factorization still
cannot be guaranteed, because nonzero diagonal entries may become very small (or even
zero) during the factorization, but the reordering may mean that zero or small pivots
are less likely to occur. Table 7.6 shows some results for the reorderings applied prior to
incomplete factorizations of the form ILU(0), ILU(1), and ILUT and the iterative methods
GMRES(20), BiCGSTAB, and QMR. In some cases, the method will only converge after
the permutation, in others it greatly improves the convergence.
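The experiments above use the HSL matching codes MC21/BT/MPD/MPS. As a rough, self-contained illustration of the idea (permute large entries to the diagonal, then build an incomplete factorization of the permuted matrix and use it as a preconditioner), the following Python sketch uses a dense weighted bipartite matching (an MPD-like criterion, maximizing the product of the moduli of the diagonal entries) together with SciPy's ILU and GMRES. It is intended only for small test matrices; the matching routine, drop tolerance and restart length are illustrative choices, not those of the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.optimize import linear_sum_assignment

def mpd_like_row_permutation(A):
    """Row permutation p so that A[p, :] has large (zero-free) diagonal entries."""
    B = np.abs(A.toarray())
    W = np.full(B.shape, -1e30)              # forbid structural zeros
    W[B > 0] = np.log(B[B > 0])
    rows, cols = linear_sum_assignment(W, maximize=True)
    p = np.empty(A.shape[0], dtype=int)
    p[cols] = rows                           # original row matched to column j goes to position j
    return p

def permuted_ilu_gmres(A, b, drop_tol=1e-3):
    p = mpd_like_row_permutation(A)
    PA = A.tocsr()[p, :].tocsc()             # permuted matrix with large diagonal
    ilu = spla.spilu(PA, drop_tol=drop_tol)
    M = spla.LinearOperator(PA.shape, ilu.solve)
    x, info = spla.gmres(PA, b[p], M=M, restart=20)
    return x, info
```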
However, we emphasize that permuting large entries to the diagonal of a matrix will
not always improve the accuracy and stability of incomplete factorization. An inaccurate
factorization can also occur in the absence of small pivots, when many (especially large)
fill-ins are dropped from the incomplete factors. In this respect, it may be beneficial to
apply a symmetric permutation after the matching reordering to reduce fill-in. Another
kind of instability in incomplete factorizations, which can occur with and without small
pivots, is severe ill-conditioning of the triangular factors. (In this situation, ‖R‖_F need
not be very large, but ‖I − A(L̄Ū)^{−1}‖ will be.) This is also a common situation when
the coefficient matrix is far from diagonally dominant.
Table 7.6: Number of iterations required by some preconditioned iterative methods after permutation.
Matrix and method Matching algorithm
QMR 72 21 12 12
MAHINDAS
WEST0497
We also performed a set of experiments in which we first permuted the columns of the
matrix A by using a reordering computed by one of the matching algorithms, followed by
a symmetric permutation of A generated by the reverse Cuthill-McKee ordering (Cuthill
and McKee 1969) applied to A . The motivation behind this is that the number
of entries that are dropped from the factors can be reduced by applying a reordering of
the matrix that reduces fill-in. In the experimental results, we noticed that the additional
permutation can have either a positive or a negative effect on the performance of the
iterative solvers. Table 7.7 shows some results for the three iterative methods from Table
7.6 preconditioned by ILUT on the WEST matrices from the Harwell-Boeing collection.
Table 7.7: Number of iterations required by some ILUT-preconditioned iterative methods after the matching reordering with and without reverse Cuthill-McKee.
Matrix and method Matching algorithm Matching algorithm
without RCM with RCM
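The combined ordering used for Table 7.7 (a matching permutation followed by a symmetric reverse Cuthill-McKee reordering) can be sketched as follows. The symmetrized pattern |A| + |A|^T used to drive RCM is an assumption made here for illustration, not a detail taken from the paper.

```python
import scipy.sparse as sp
from scipy.sparse.csgraph import reverse_cuthill_mckee

def rcm_after_matching(PA):
    """PA: sparse matrix already permuted to have a large (zero-free) diagonal."""
    pattern = (abs(PA) + abs(PA).T).tocsr()         # symmetrized sparsity pattern
    q = reverse_cuthill_mckee(pattern, symmetric_mode=True)
    return PA.tocsr()[q, :][:, q]                   # symmetric permutation keeps the diagonal
```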
7.2.2 Experiments with a block iterative solution method
The Jacobi method is not a particularly current or powerful method so we focussed our
experiments on the block Cimmino implementation of Arioli, Duff, Noailles and Ruiz
(1992), which is equivalent to using a block Jacobi algorithm on the normal equations.
In this implementation, the subproblems corresponding to blocks of rows from the matrix
are solved by the sparse direct method MA27 (HSL 1996).
We show the effect of this in Table 7.8 on the problem MAHINDAS from Table 7.6.
The matching algorithm was followed by a reverse Cuthill-McKee algorithm to obtain a
block tridiagonal form. The matrix was partitioned into 2, 4, 8, and 16 blocks of rows and
the accelerations used were block CG algorithms with block sizes 1, 4, and 8. The block
rows are chosen of equal (or nearly equal) size.
Table 7.8: Number of iterations of block Cimmino algorithm for the matrix MAHINDAS.
Acceleration Matching algorithm
# block rows
None MC21 BT MPD MPS
In general, we noticed in our experiments that the block Cimmino method often was
more sensitive to the scaling (in MPS) and less to the reorderings. The convergence
properties of the block Cimmino method are independent of row scaling. However, the
sparse direct solver MA27 (HSL 1996) used for solving the augmented systems, performs
numerical pivoting during the factorizations of the augmented matrices. Row scaling
might well change the choice of the pivot order and affect the fill-in in the factors and the
accuracy of the solution. Column scaling should affect convergence of the method since it
can be considered as a diagonal preconditioner. For more details see (Ruiz 1992).
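For orientation, a bare-bones block Cimmino iteration is sketched below. The row-block subproblems are solved here with dense pseudo-inverses instead of the sparse augmented-system solver MA27 used in the experiments, the plain relaxed iteration replaces the block-CG acceleration of Arioli, Duff, Noailles and Ruiz (1992), and the parameter values are illustrative.

```python
import numpy as np

def block_cimmino(A, b, n_blocks=4, omega=1.0, n_iter=100):
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    splits = np.array_split(np.arange(m), n_blocks)      # nearly equal block rows
    pinvs = [np.linalg.pinv(A[rows, :]) for rows in splits]
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        update = np.zeros_like(x)
        for rows, Ai_pinv in zip(splits, pinvs):
            update += Ai_pinv @ (b[rows] - A[rows, :] @ x)   # block correction
        x += omega * update
    return x
```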
8 Conclusions and future work
We have considered, in Sections 3-4, techniques for permuting a sparse matrix so that
the diagonal of the permuted matrix has entries of large absolute value. We discussed
various criteria for this and considered their implementation as computer codes. We also
considered in Section 6 possible scaling strategies to further improve the weight of the
diagonal with respect to the off-diagonal values.
In Section 7, we then indicated several cases where such a permutation (and scaling)
can be useful. These include the solution of sparse equations by a direct method and
by an iterative technique. We also considered its use in generating a preconditioner for
an iterative method. The numerical experiments show that for a multifrontal solver and
preconditioned iterative methods, the effect of these reorderings can be dramatic. The
effect on the block Cimmino iterative method seems to be less dramatic. For this method,
the discussed scaling tends to have a more important effect.
While it is clear that reordering matrices so that the permuted matrix has a large
diagonal can have a very significant effect on solving sparse systems by a wide range of
techniques, it is somewhat less clear that there is a universal strategy that is best in all
cases. One reason for this is that increasing the size of the diagonal alone is not always
sufficient to improve the performance of the method. For example, for the incomplete
preconditioners that we used for the numerical experiments in Section 7, not only the
size of the diagonal but also the amount and size of the discarded fill-in plays an important
role. We have thus started experimenting with combining the strategies mentioned in
Sections 3-4 and, particularly for generating a preconditioner and the block Cimmino
approach, with combining our unsymmetric ordering with symmetric orderings.
Another interesting extension to the discussed reorderings is a block approach to
increase the size of diagonal blocks instead of only the diagonal entries and use for example
a block Jacobi preconditioner on the permuted matrix. This is of particular interest for
the block Cimmino method. One could also build other criteria into the weighting for
obtaining a bipartite matching, for example, to incorporate a Markowitz cost so that
sparsity would also be preserved by the choice of the resulting diagonal as a pivot. Such
a combination would make the resulting ordering suitable for a wider class of sparse direct
solvers.
Finally, we noticed in our experiments with MA41 that one effect of the matching
algorithm was to increase the structural symmetry of unsymmetric matrices. We are
exploring further the use of ordering techniques that more directly attempt to increase
structural symmetry.
Acknowledgments
We are grateful to Michele Benzi of Los Alamos National Laboratory and Miroslav Tuma
of the Czech Academy of Sciences for their assistance on the preconditioned iterative
methods and Daniel Ruiz of ENSEEIHT for his help on block iterative methods.
--R
Orderings for incomplete factorization preconditioning of nonsymmetric problems
Assignment and Matching Problems: Solution Methods with FORTRAN-Programs
Experimental study of ILU preconditioners for indefinite matrices
Reducing the bandwidth of sparse symmetric matrices
University of Florida sparse matrix collection
A supernodal approach to sparse partial pivoting
The design and use of algorithms for permuting large entries to the diagonal of sparse matrices
Users' guide for the Harwell-Boeing sparse matrix collection (Release 1)
Making sparse Gaussian elimination scalable by static pivoting
Solution of large sparse unsymmetric linear systems with a block iterative method in a multiprocessor environment
Iterative methods for sparse linear systems
Data structures and network algorithms
--TR
--CTR
Laura Grigori , Xiaoye S. Li, A new scheduling algorithm for parallel sparse LU factorization with static pivoting, Proceedings of the 2002 ACM/IEEE conference on Supercomputing, p.1-18, November 16, 2002, Baltimore, Maryland
Abdou Guermouche , Jean-Yves L'Excellent , Gil Utard, Impact of reordering on the memory of a multifrontal solver, Parallel Computing, v.29 n.9, p.1191-1218, September
Abdou Guermouche , Jean-Yves L'excellent, Constructing memory-minimizing schedules for multifrontal methods, ACM Transactions on Mathematical Software (TOMS), v.32 n.1, p.17-32, March 2006
Chi Shen , Jun Zhang , Kai Wang, Distributed block independent set algorithms and parallel multilevel ILU preconditioners, Journal of Parallel and Distributed Computing, v.65 n.3, p.331-346, March 2005
Kai Shen, Parallel sparse LU factorization on second-class message passing platforms, Proceedings of the 19th annual international conference on Supercomputing, June 20-22, 2005, Cambridge, Massachusetts
Patrick R. Amestoy , Iain S. Duff , Jean-Yves L'excellent , Xiaoye S. Li, Analysis and comparison of two general sparse solvers for distributed memory computers, ACM Transactions on Mathematical Software (TOMS), v.27 n.4, p.388-421, December 2001
Kai Shen, Parallel sparse LU factorization on different message passing platforms, Journal of Parallel and Distributed Computing, v.66 n.11, p.1387-1403, November 2006
Xiren Wang , Wenjian Yu , Zeyi Wang , Xianlong Hong, An improved direct boundary element method for substrate coupling resistance extraction, Proceedings of the 15th ACM Great Lakes symposium on VLSI, April 17-19, 2005, Chicago, Illinois, USA
Anshul Gupta, Recent advances in direct methods for solving unsymmetric sparse systems of linear equations, ACM Transactions on Mathematical Software (TOMS), v.28 n.3, p.301-324, September 2002
Timothy A. Davis, A column pre-ordering strategy for the unsymmetric-pattern multifrontal method, ACM Transactions on Mathematical Software (TOMS), v.30 n.2, p.165-195, June 2004
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | bipartite weighted matching;shortest path algorithms;sparse matrices;iterative methods;preconditioning;direct methods |
587812 | Approximation of the Determinant of Large Sparse Symmetric Positive Definite Matrices. | This paper is concerned with the problem of approximating det(A)^{1/n} for a large sparse symmetric positive definite matrix A of order n. It is shown that an efficient solution of this problem is obtained by using a sparse approximate inverse of A. The method is explained and theoretical properties are discussed. The method is ideal for implementation on a parallel computer. Numerical experiments are described that illustrate the performance of this new method and provide a comparison with Monte Carlo-type methods from the literature. | Introduction
. Throughout this paper, A denotes a real symmetric positive definite matrix of order n with eigenvalues 0 < λ_n ≤ · · · ≤ λ_1.
In a number of applications, for example in lattice Quantum Chromodynamics [12],
certain functions of the determinant of A, such as det(A)^{1/n} or ln(det(A)), are of
interest. It is well-known (cf. also x2) that for large n the function A → det(A) has
poor scaling properties and can be very ill-conditioned for certain matrices A. In this
paper we consider the function d(A) := det(A)^{1/n}.
A few basic properties of this function are discussed in x2. In this paper we present
a new method for approximating d(A) for large sparse matrices A. The method is
based on replacing A by a matrix which is in a certain sense close to A^{-1} and for
which the determinant can be computed with low computational costs. One popular
method for approximating A is based on the construction of an incomplete Cholesky
factorization. This incomplete factorization is often used as a preconditioner when
solving linear systems with matrix A. In this paper we use another preconditioning
technique, namely that of sparse approximate inverses (cf. [1, 7, 9, 11]). In Remark
3.10 we comment on the advantages of the use of sparse approximate inverse
preconditioning for approximating d(A). Let A = LL^T be the Cholesky decomposition
of A. Then, using techniques known from the literature, a sparse approximate
inverse G_E of L, i.e. a lower triangular matrix G_E which has a prescribed sparsity
structure E and which is an approximation of L^{-1}, can be constructed. We then
use det(G_E)^{-2/n} = \prod_i (G_E)_{ii}^{-2/n} as an approximation for d(A). In x3 we explain
the construction of GE and discuss theoretical properties of this sparse approximate
inverse. For example, such a sparse approximate inverse can be shown to exist for
any symmetric positive definite A and has an interesting optimality property related
to d(A). As a direct consequence of this optimality property one obtains that
d(A) ≤ det(G_E)^{-2/n} holds and that the approximation of d(A) by det(G_E)^{-2/n} becomes
better if a larger sparsity pattern E is used.
Institut fur Geometrie und Praktische Mathematik, RWTH Aachen, Templergraben 55, D-52056
Germany.
In x4 we consider the topic of error estimation. In the paper [2] bounds for the determinant
of symmetric positive definite matrices are derived. These bounds, in which
the Frobenius norm and an estimate of the extreme eigenvalues of the matrix involved
are used, often yield rather poor estimates of the determinant (cf. experiments in [2]).
In x4.1 we apply this technique to the preconditioned matrix GEAG T
E and thus obtain
reliable but rather pessimistic error bounds. It turns out that this error estimation
technique is rather costly. In x4.2 we introduce a simple and cheap Monte Carlo technique
for error estimation. In x5 we apply the new method to a few examples of large
sparse symmetric positive definite matrices.
2. Preliminaries. In this section we discuss a few elementary properties of the
function d. We give a comparison between the conditioning of the function d and
of the function A → det(A). We use the notation ‖·‖_2 for the Euclidean
norm, and κ(A) denotes the spectral condition number of A. The trace of
the matrix A is denoted by tr(A).
Lemma 2.1. Let A and ffiA be symmetric positive definite matrices of order n.
The following inequalities hold:
Proof. The result in (2.1a) follows from
Y
The result in (2.1b) follows from the inequality between the geometric and arithmetic
mean:
Y
From the Courant-Fischer characterization of eigenvalues it follows that
for all i. Hence holds. Now note that
Y
Y
Thus the result in (2.1c) is proved.
The result in (2.1c) shows that the function d(A) is well-conditioned for matrices
A which have a not too large condition number κ(A).
We now briefly discuss the difference in conditioning between the functions A → d(A)
and A → det(A). For any symmetric positive definite matrix B of order n we
have
From the Courant-Fischer eigenvalue characterization we obtain
for all i. Hence
B is SPD
B is SPD
with equality for I . Thus for the condition number of the function d we have
=n
Note that for the diagonal matrix
in the inequality in (2.2) one obtains equality for n !1. For this A and with
we have equality in the second inequality in (2.1c), too.
For ~
the condition number is given by
~
i.e. n times larger than the condition number in (2.2). The condition numbers for
d and ~
d give an indication of the sensitivity if the perturbation kffiAk 2 is sufficiently
small. Note that the bound in (2.1c) is valid for arbitrary symmetric positive definite
perturbations ffiA. The bound shows that even for larger perturbations the function
is well-conditioned at A if (A) is not too large. For the function ~
the
effect of relatively large perturbations can be much worse than for the asymptotic
case (ffiA ! 0), which is characterized by the condition number in (2.3). Consider, for
example, for
2 a perturbation
~
~
which is very large if, for example,
The results in this section show that the numerical approximation of the function
A → d(A) is a much easier task than the numerical approximation of A → det(A).
3. Sparse approximate inverse. In this section we explain and analyze the
construction of a sparse approximate inverse of the matrix A. Let A = LL^T be the
Cholesky factorization of A, i.e. L is lower triangular and L^{-1}AL^{-T} = I. Note that
det(A) = det(L)^2 = \prod_i L_{ii}^2. We will construct a sparse lower triangular approximation
G of L^{-1} and approximate d(A) by det(G)^{-2/n} = \prod_i G_{ii}^{-2/n}. The construction of a
sparse approximate inverse that we use in this paper was introduced in [9, 10, 11] and
can also be found in [1]. Some of the results derived in this section are presented in
[1], too.
We first introduce some notation. Let E ⊂ {(i, j) : 1 ≤ i, j ≤ n} be a given
sparsity pattern. By #E we denote the number of elements in E. Let S_E be the set
of n × n matrices for which all entries are set to zero if the corresponding index is not
in E: S_E := {M ∈ R^{n×n} : M_{ij} = 0 for (i, j) ∉ E}.
For use the
representation
For we define the projection
Note that the matrix
is symmetric positive definite. Typical choices of the sparsity pattern E (cf. x5) are
such that n i is a very small number compared to n (e.g. In such a case the
projected matrix P i AP T
i has a small dimension.
To facilitate the analysis below, we first discuss the construction of an approximate
sparse inverse ME 2 SE in a general framework. For we use the
representation
Note that if n
For given A; B 2 R n\Thetan with A symmetric positive definite we consider the following
problem:
determine
In (3.3) we have #E equations to determine #E entries in ME . We first give two basic
lemmas which will play an important role in the analysis of the sparse approximate
inverse that will be defined in (3.9).
Lemma 3.1. The problem (3.3) has a unique solution
the ith row of ME is given by m T
i with
i is the ith row of B.
Proof. The equations in (3.3) can be represented as
(b T
for all i with
i is the ith row of ME . Consider an i with
. For the unknown entries in m i we obtain the system of equations
which is equivalent to
The matrix P i AP T
i is symmetric positive definite and thus m i must satisfy
Using
we obtain the result in (3.4). The construction in this proof
shows that the solution is unique.
Below we use the Frobenius norm, denoted by ‖·‖_F.
Lemma 3.2. Let A = LL T be the Cholesky factorization of A and let ME 2 SE
be the unique solution of (3.3). Then ME is the unique minimizer of the functional
Proof. Let e i be the ith basis vector in R n . Take M 2 SE . The ith rows of M
and B are denoted by m T
The minimum of the functional (3.6) is obtained if in (3.7) we minimize the functionals
for all i with
(3.8) can be rewritten as
The unique minimum of this functional is obtained for "
Using Lemma 3.1 it follows that ME
is the unique minimizer of the functional (3.6).
Sparse approximate inverse. We now introduce the sparse approximate inverse
that will be used as an approximation for L^{-1}. For this we choose a lower triangular
sparsity pattern E_l ⊂ {(i, j) : 1 ≤ j ≤ i ≤ n} and we assume that (i, i) ∈ E_l for all i. The
sparse approximate inverse is constructed in two steps:
1: determine Ĝ_{E_l} ∈ S_{E_l} such that (Ĝ_{E_l} A)_{ij} = δ_{ij} for all (i, j) ∈ E_l; (3.9a)
2: set G_{E_l} := diag(Ĝ_{E_l})^{-1/2} Ĝ_{E_l}. (3.9b)
The construction of G E l in (3.9) was first introduced in [9]. A theoretical background
for this factorized sparse inverse is given in [11]. The approximate inverse "
(3.9a) is of the form (3.3) with I . From Lemma 3.1 it follows that in (3.9a) there
is a unique solution "
. Note that because E l is lower triangular and (i; i) 2 E l we
have
Hence it follows from Lemma 3.1
that the ith row of "
denoted by g T
i , is given by
The ith entry of g i , i.e. e T
which is strictly positive
because
i is symmetric positive definite. Hence diag( "
contains only strictly
positive entries and the second step (3.9b) is well-defined.
The sparse approximate inverse Ĝ_{E_l} in (3.9a) can be computed by solving the (low dimensional)
symmetric positive definite systems (P_i A P_i^T) y_i = P_i e_i, with ĝ_i = P_i^T y_i, for i = 1, ..., n. (3.11)
We now derive some interesting properties of the sparse approximate inverse as in
(3.9). We start with a minimization property of "
Theorem 3.3. Let A = LL T be the Cholesky factorization of A and D :=
l as in (3.9a) is the unique minimizer of the functional
Proof. The construction of "
in (3.9a) is as in (3.3) with I . Hence
Lemma 3.2 is applicable with I . It follows that "
l is the unique minimizer of
Decompose L \GammaT as L \GammaT strictly upper triangular. Then D
and R are lower and strictly upper triangular, respectively, and we obtain:
F
Hence the minimizers in (3.13) and (3.12) are the same.
Remark 3.4. From the result in Theorem 3.3 we see that in a scaled Frobenius
norm (scaling with D
l is the optimal approximation of "
in the set S E l , in
the sense that "
L is closest to the identity. A seemingly more natural minimization
problem is
min
i.e. we directly approximate L \Gamma1 (instead of "
do not use the scaling with
. The minimization problem (3.14) is of the form as in Lemma 3.2 with
. Hence the unique minimizer in (3.14), denoted by ~
must satisfy (3.3)
Because E l contains only indices (i;
~
must satisfy
This is similar to the system of equations in (3.9a), which characterizes "
l . However,
in (3.16) one needs the values L ii , which in general are not available. Hence opposite
to the minimization problem related to the functional (3.12) the minimization problem
(3.14) is in general not solvable with acceptable computational costs. 2
The following lemma will be used in the proof of Theorem 3.7.
Lemma 3.5. Let "
l be as in (3.9a). Decompose "
L), with
D diagonal and "
strictly lower triangular. Define E l
ng.
L is the unique minimizer of the functional
and also of the functional
Furthermore, for "
D we have
Proof. From the construction in (3.9a) it follows that
is such that ( "
. This is of the form (3.3)
From Lemma 3.2 we obtain that "
L is the unique minimizer of
the functional
i.e., of the functional (3.17). From the proof of Lemma 3.2, with
that the minimization problem
min
decouples into seperate minimization problems (cf. (3.8)) for the rows of L:
min
l
Al i g (3.20)
for all i with
i and a T
i are the ith rows of L and A, respectively. The
minimization problem corresponding to (3.18) is
min
Y
Y
Al
This decouples into the same minimization problems as in (3.20). Hence the functionals
in (3.17) and (3.18) have the same minimizer.
Using the construction of "
in (3.9a) we
obtain
ii J
D)
(i;k)2E l
Hence "
ii holds for all i, i.e., (3.19) holds.
Corollary 3.6. From (3.19) it follows that diag( "
and thus, using (3.9b) we obtain
for the sparse approximate inverse G E l . 2
The following theorem gives a main result in the theory of approximate inverses.
It was first derived in [11]. A proof can be found in [1], too.
Theorem 3.7. Let G E l be the approximate inverse in (3.9). Then G E l is the
unique minimizer of the functional
Proof. For G 2 S E l we use the decomposition diagonal
and L
. Furthermore, for L
The inequality in (3.23) follows from the inequality between the arithmetic and geometric
in (3.9a) we use the decomposition "
L). For the approximate
L). From Lemma 3.5
it follows that det(J L ) det(J "
. Furthermore from Lemma 3.5
we obtain that for G
L) we have
I and thus equality in
We conclude that G E l is the unique minimizer of the functional
in (3.22).
Remark 3.8. The quantity
can be seen as a nonstandard condition number (cf. [1, 9]). Properties of this quantity
are given in [1] (Theorem 13.5). One elementary property is
Corollary 3.9. For the approximate inverse G E l as in (3.9) we have (cf.(3.21))
i.e.,
Y
Y
Let ~
l be a lower triangular sparsity pattern that is larger than E l , i.e., E l ae ~
ng. From the optimality result in Theorem 3.7 it follows that
~
Motivated by the theoretical results in Corollary 3.9 we propose to use the sparse approximate
inverse G_{E_l} as in (3.9) for approximating d(A), i.e. to use det(G_{E_l})^{-2/n} as
an estimate for d(A). Some properties of this method are discussed in the following
remark.
Remark 3.10. We consider the method of approximating d(A) by d(G_{E_l})^{-2} = det(G_{E_l})^{-2/n}.
The practical realization of this method boils down to choosing a sparsity pattern E_l
and solving the (small) systems in (3.11). We list a few properties of this method:
1. The sparse approximate inverse exists for every symmetric positive definite
A. Note that such an existence result does not hold for the incomplete Cholesky
factorization. Furthermore, this factorization is obtained by solving low dimensional
symmetric positive definite systems of the form P i AP T
(cf. (3.11)), which can
be realized in a stable way.
2. The systems P_i A P_i^T y_i = P_i e_i, i = 1, ..., n, in (3.11) can be solved in parallel.
3. For the computation of d(G_{E_l})^{-2} we only need the diagonal entries of Ĝ_{E_l}
(cf. (3.24)). In the systems (P_i A P_i^T) y_i = P_i e_i we then only have to
compute the last entry of y_i. If these systems are solved using the Cholesky
factorization P_i A P_i^T = L_i L_i^T (L_i lower triangular), we only need the last diagonal entry
of L_i, since (ĝ_i)_i = ((L_i)_{n_i n_i})^{-2}.
4. The sparse approximate inverse has an optimality property related to the
determinant: The functional G From
this the inequality (3.24) and the monotonicity result (3.25) follow.
5. From (3.24) we obtain the upper bound 0 for the relative error d(A)/d(G_{E_l})^{-2} − 1.
In x4 we will derive useful lower bounds for this relative error. These are a posteriori
error bounds which use the matrix G_{E_l}.
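The following Python sketch summarizes the resulting algorithm as reconstructed from (3.9)-(3.11) and this remark: for each i only the last diagonal entry of the Cholesky factor of the small matrix P_i A P_i^T is needed, and the loop over i could be distributed over processors. The pattern E_l(k) built from the structure of A^k anticipates the choice used in the experiments of x5; all names and details here are illustrative, not the author's implementation.

```python
import numpy as np
import scipy.sparse as sp

def lower_pattern_rows(A, k=1):
    """Row-wise column index sets of E_l(k), the lower triangle of struct(A^k)."""
    S = (abs(A) > 0).astype(int)
    P = S.copy()
    for _ in range(k - 1):
        P = ((P @ S) > 0).astype(int)
    P = sp.tril(P).tocsr()
    return [P.indices[P.indptr[i]:P.indptr[i + 1]] for i in range(A.shape[0])]

def approx_d(A, rows):
    """Approximation det(G_{E_l})^(-2/n) of d(A) = det(A)^(1/n)."""
    A = sp.csr_matrix(A)
    n = A.shape[0]
    log_d = 0.0
    for i in range(n):
        J = np.append(np.sort(rows[i][rows[i] != i]), i)   # indices of row i, i last
        M = A[J, :][:, J].toarray()                         # small dense SPD block
        L = np.linalg.cholesky(M)
        g_hat_ii = 1.0 / L[-1, -1] ** 2                     # diagonal entry of G-hat
        log_d -= np.log(g_hat_ii) / n
    return np.exp(log_d)
```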
4. A posteriori error estimation. In the previous section it has been explained
how an estimate d(G_{E_l})^{-2} of d(A) can be computed. From (3.24) we have
the error bound d(A)/d(G_{E_l})^{-2} ≤ 1. (4.1)
In this section we will discuss a posteriori estimators for the error d(A)/d(G_{E_l})^{-2}.
In x4.1 we apply the analysis from [2] to derive an a posteriori lower bound for the
quantity in (4.1). This approach results in safe, but often rather pessimistic bounds
for the error. In x4.2 we propose a very simple stochastic method for error estimation.
This method, although it does not yield guaranteed bounds for the error, turns out
to be very useful in practice.
4.1. Error estimation based on bounds from [2]. In this section we show
how the analysis from [2] can be used to obtain an error estimator. We first recall a
main result from [2] (Theorem 2). Let A be a symmetric positive matrix of order n,
F and
exp
l
exp
In [2] this result is applied to obtain computable bounds for d(A). Often these
bounds yield rather poor estimates of d(A). In the present paper we approximate
use the result in (4.2) for error estimation. The upper bound
turns out to be satisfactory in numerical experiments (cf. x5). Therefore we
restrict ourselves to the derivation of a lower bound for d(A)=d(G E l ) \Gamma2 , based on the
left inequality in (4.2).
Theorem 4.1. Let G E l be the approximate inverse from (3.9) and 0 ! ff
1. The following holds: ff 1,
and
exp
Proof. The right inequality in (4.3) is already given in (4.1). We introduce the
for the eigenvalues of G E l AG T
l . From (3.21) we obtainn
and from this it follows that ff 1 1 holds. Furthermore,
yields
We now use the left inequality in
(4.2) applied to the matrix G E l AG T
l . Note that
A simple computation yieldsn
l
and
Substitution of (4.5) in (4.4) results inn
l
Using this the left inequality in (4.3) follows from the left inequality in (4.2).
Note that for the lower bound in (4.3) to be computable, we need
F
and a strictly positive lower bound ff for the smallest eigenvalue of G E l AG T
l . We now
discuss methods for computing ff and . These methods are used in the numerical
experiments in x5.
We first discuss two methods for computing ff. The first method, which can be
applied if A is an M-matrix, is based on the following lemma, where we use the
Lemma 4.2. Let A be a symmetric positive definite matrix of order n with A ij 0
for all i 6= j and G E l its sparse approximate inverse (3.9). Furthermore, let z be such
that
Then
holds.
Proof. From the assumptions it follows that A is an M-matrix. In [11] (Theorem
4.1) it is proved that then G E l AG T
l is an M-matrix, too. Let z
Because
nonnegative entries it follows that
Hence
Using min (G E l AG T
we obtain the result (4.6).
Based on this lemma we obtain the following method for computing ff. Choose
apply the conjugate gradient method to the system G E l AG T
This results in approximations z of z . One iterates until the stopping criterion
1 . In
view of efficiency one should not take a very small tolerance j. In our experiments
in x5 we use 1. Note that the CG method is applied to a system
with the preconditioned matrix G E l AG T
l . In situations where the preconditioning is
effective one may expect that relatively few CG iterations are needed to compute z j
such that kG E l AG T
numerical experiments are
presented in x5.
As a second method for determining ff, which is applicable to any symmetric positive
definite A, we propose the Lanczos method for approximating eigenvalues applied to
the matrix G E l AG T
l . This method yields a decreasing sequence (1)
l ) of approximations (j)
1 of min (G E l AG T
holds, then
can be used in Theorem 4.1. However, in practice it is usually
not known how to obtain reasonable values for " in (4.7). Therefore, in our experiments
we use a simple heuristic for error estimation (instead of a rigorous bound as
in (4.7)), based on the observed convergence behaviour of (j)
It is known that for the Lanczos method the convergence to extreme eigenvalues is
relatively fast. Moreover, it often occurs that the small eigenvalues of the preconditioned
are well-separated from the rest of the spectrum, which
has a positive effect on the convergence speed (j)
In numerical
experiments we indeed observe that often already after a few Lanczos iterations we
have an approximation of min (G E l AG T
with an estimated relative error of a few
percent. However, for the ff computed in this second method we do not have a rigorous
analysis which guarantees that
from numerical experiments we see that this method is satisfactory. This is partly explained
by the relatively fast convergence of the Lanczos method towards the smallest
eigenvalue. A further explanation follows from the form of the lower bound in (4.3).
For ff which is typically the case in our experiments in x5, this lower
bound essentially behaves like exp(ffi ln ff) =: g(ff). Note that
holds. Hence the sensitivity of the lower bound with respect to perturbations in ff is
very mild.
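One way to realize this Lanczos-based choice of α with an off-the-shelf sparse eigensolver is sketched below; the safety factor stands in for the heuristic stopping criterion (5.2) and is a purely illustrative choice.

```python
import scipy.sparse.linalg as spla

def alpha_from_lanczos(B, safety=0.95):
    """B: the matrix G A G^T (sparse matrix or LinearOperator)."""
    lam_min = spla.eigsh(B, k=1, which='SA', return_eigenvectors=False)[0]
    return safety * lam_min
```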
We now discuss the computation of the quantity ‖I − G_{E_l} A G_{E_l}^T‖_F, which is needed
in (4.3). Clearly, for computing it one needs the matrices G_{E_l} and A. To avoid unnecessary
storage requirements one should not compute the matrix X := G_{E_l} A G_{E_l}^T explicitly
and then determine ‖I − X‖_F. An approach that is more efficient with respect to storage
can be based on ‖I − X‖_F^2 = Σ_{i=1}^{n} ‖e_i − X e_i‖_2^2,
where e_i is the ith basis vector in R^n. The computation of ‖G_{E_l} A G_{E_l}^T e_i − e_i‖_2, which can
be done in parallel, needs only sparse matrix-vector
multiplications with the matrices G_{E_l} and A. Furthermore, for the computation
of A G_{E_l}^T e_i one can use the representation of the rows of G_{E_l} that follows from (3.10).
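A serial version of this column-by-column computation might look as follows; only matrix-vector products with G_{E_l} and A are used, and the columns could be distributed over processors.

```python
import numpy as np

def frob_norm_of_residual(G, A):
    """Compute ||I - G A G^T||_F without forming G A G^T."""
    n = A.shape[0]
    total = 0.0
    for i in range(n):
        e = np.zeros(n); e[i] = 1.0
        v = G @ (A @ (G.T @ e)) - e       # ith column of G A G^T - I
        total += v @ v
    return np.sqrt(total)
```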
Remark 4.3. Note that for the error estimators discussed in this section the
must be available (and thus stored), whereas for the computation of
the approximation d(G E l ) \Gamma2 of d(A) we do not have to store the matrix G E l (cf.
Remark 3.10 item 3). Furthermore, as we will see in x5, the computation of these
error estimators is relatively expensive. 2
4.2. Error estimation based on a Monte Carlo approach. In this section
we discuss a simple error estimation method which turns out to be useful in practice.
Opposite to those treated in the previous section this method does not yield (an
approximation of) bounds for the error.
The exact error is given by d(A)/d(G_{E_l})^{-2} = det(G_{E_l} A G_{E_l}^T)^{1/n}, where
G_{E_l} A G_{E_l}^T is a sparse symmetric positive definite matrix. For ease of
presentation we assume that the pattern E_l is sufficiently large such that
ρ(E_{E_l}) < 1, with E_{E_l} := I − G_{E_l} A G_{E_l}^T, (4.8)
holds. In [11] it is proved that if A is an M-matrix or a (block) H-matrix then (4.8)
is satisfied for every lower triangular pattern E_l. In the numerical experiments (cf. x5)
with matrices which are not M-matrices or (block) H-matrices (4.8) turns out to
be satisfied for standard choices of E_l. We note that if (4.8) does not hold then the
technique discussed below can still be applied if one introduces a suitable damping factor ω
such that ρ(I − ωG_{E_l} A G_{E_l}^T) < 1.
For the exact error we obtain, using a Taylor expansion of ln(I − B) for B ∈ R^{n×n}
with ρ(B) < 1,
det(G_{E_l} A G_{E_l}^T)^{1/n} = exp( n^{-1} tr(ln(I − E_{E_l})) ) = exp( −n^{-1} Σ_{m≥1} m^{-1} tr(E_{E_l}^m) ). (4.9)
Hence, an error estimation can be based on estimates for the partial sums
S_m := exp( −n^{-1} Σ_{k=1}^{m} k^{-1} tr(E_{E_l}^k) ).
The construction of G_{E_l} is such that diag(E_{E_l}) = 0, and thus tr(E_{E_l}) = 0, S_1 = 1 and
S_2 = exp( −(2n)^{-1} tr(E_{E_l}^2) ). (4.10)
For S_3 we obtain S_3 = exp( −n^{-1} ( tr(E_{E_l}^2)/2 + tr(E_{E_l}^3)/3 ) ).
Note that in S_2 and S_3 the quantity tr(E_{E_l}^2) = ‖I − G_{E_l} A G_{E_l}^T‖_F^2 occurs which
is also used in the error estimator in x4.1. In this section we use a Monte Carlo
method to approximate the trace quantities in S_m. The method we use is based on
the following proposition [8, 3].
Proposition 4.4. Let H be a symmetric matrix of order n with tr(H) ≠ 0.
Let V be the discrete random variable which takes the values 1 and −1 each with
probability 0.5 and let z be a vector of n independent samples from V. Then z^T H z is
an unbiased estimator of tr(H): E(z^T H z) = tr(H), and var(z^T H z) = 2 Σ_{i≠j} H_{ij}^2.
For approximating the trace quantity in S_2 we use the following Monte Carlo algorithm (j = 1, ..., M):
1. Generate z_j ∈ R^n with entries which are uniformly distributed in (0, 1).
2. If (z_j)_i < 0.5 set (z_j)_i := −1, otherwise set (z_j)_i := 1 (i = 1, ..., n).
3. Compute y_j := E_{E_l} z_j and z_j^T E_{E_l}^2 z_j = y_j^T y_j.
Based on Proposition 4.4 and (4.10) we use the sample mean M^{-1} Σ_{j=1}^{M} y_j^T y_j as an
approximation for tr(E_{E_l}^2), and hence exp( −(2n)^{-1} M^{-1} Σ_{j=1}^{M} y_j^T y_j ) as an
approximation for S_2. The corresponding error estimator is denoted by E_2 (4.13).
For the approximation of S_3 we replace step 3 in the algorithm above by
3'. Compute y_j := E_{E_l} z_j, z_j^T E_{E_l}^2 z_j = y_j^T y_j and z_j^T E_{E_l}^3 z_j = y_j^T E_{E_l} y_j,
and we use the corresponding sample means as estimates for tr(E_{E_l}^2) and tr(E_{E_l}^3). The
corresponding error estimator E_3 (4.15) is obtained by inserting these estimates into S_3.
Clearly, this technique can be extended to the partial sums Sm with m ? 3. However,
in our applications we only use "
S 3 for error estimation. It turns out that, at
least in our experiments, the two leading terms in the expansion (4.9) are sufficient
for a reasonable error estimation. Note that due to the truncation of the Taylor
expansion, the estimators E_2 and E_3 for d(A)/d(G_{E_l})^{-2} are biased.
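A compact implementation of the estimators E_2 and E_3, following the reconstruction of (4.9)-(4.15) given above, might look as follows; Rademacher probe vectors are applied to E = I − G A G^T, and the estimated traces of E^2 and E^3 are inserted into the truncated series. The sample size M and the seed are illustrative.

```python
import numpy as np

def mc_error_estimators(G, A, M=5, seed=0):
    """Monte Carlo estimates E2, E3 of the ratio d(A) / det(G)^(-2/n)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    apply_E = lambda v: v - G @ (A @ (G.T @ v))       # v -> (I - G A G^T) v
    t2 = t3 = 0.0
    for _ in range(M):
        z = np.where(rng.random(n) < 0.5, -1.0, 1.0)  # +/-1 probe vector
        y = apply_E(z)
        t2 += (y @ y) / M                             # estimates tr(E^2)
        t3 += (y @ apply_E(y)) / M                    # estimates tr(E^3)
    E2 = np.exp(-t2 / (2.0 * n))
    E3 = np.exp(-(t2 / 2.0 + t3 / 3.0) / n)
    return E2, E3
```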
It is shown in [3] that based on the so-called Hoeffding inequality (cf. [13]) probabilistic
bounds for
can be derived, where z are
independent random variables as in Proposition 4.4. In this paper we do not use
these bounds. Based on numerical experiments we take a fixed small value for the
parameter M in the Monte Carlo algorithm above (in the experiments in x5:
Remark 4.5. In the setting of this paper Proposition 4.4 is applied with
is a known polynomial of degree 2 or 3. In the Monte Carlo technique
for approximating Proposition 4.4 is applied with
ln(A). The quantity z T ln(A)z, which can be considered as a Riemann-Stieltjes
integral, is approximated using suitable quadrature rules. In [3] this quadrature is
based on a Gauss-Christoffel technique where the unknown nodes and weights in the
quadrature rule are determined using the Lanczos method. For a detailed explanation
of this method we refer to [3].
A further alternative that could be considered for error estimation is the use of this
method from [3]. In the setting here, this method could be used to compute a (rough)
approximation of det(G E l AG T
We did not investigate this possibility. The results
in [2, 3] give an indication that this alternative is probably much more expensive
than the method presented in this section. 2
5. Numerical experiments. In this section we present some results of numerical
experiments with the methods introduced in x3 and x4. All experiments are done
using a MATLAB implementation. We use the MATLAB notation nnz(B) for the
number of nonzero entries in a matrix B.
Experiment 1 (discrete 2D Laplacian). We consider the standard 5-point discrete
Laplacian on a uniform square grid with m mesh points in both directions, i.e.
For this symmetric positive definite matrix of order n = m^2 the eigenvalues are known:
λ_{k,l} = 4( sin^2( kπ/(2(m+1)) ) + sin^2( lπ/(2(m+1)) ) ), k, l = 1, ..., m.
For the choice of the sparsity pattern E l we use a simple approach based on the
nonzero structure of (powers of) the matrix A: E_l(k) := {(i, j) : j ≤ i and (A^k)_{ij} ≠ 0}.
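The test matrix of this experiment can be generated in a few lines of Python; this is an illustrative reconstruction, not the author's MATLAB code.

```python
import numpy as np
import scipy.sparse as sp

def laplace_2d(m):
    """Standard 5-point discrete Laplacian on an m x m grid (order n = m^2)."""
    e = np.ones(m)
    T = sp.diags([-e[:-1], 2 * e, -e[:-1]], [-1, 0, 1])
    I = sp.identity(m)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()
```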
We first describe some features of the methods for the case
that we will vary m and k. Let A denote the discrete Laplacian for the case
and L_A its lower triangular part. We then have nnz(L_A) = 2640. For the sparse approximate
inverse we obtain nnz(G_{E_l(2)}) = 6002. The systems P_i A P_i^T y_i = P_i e_i
that have to be solved to determine G E l (2) (cf. (3.11)) have dimensions between 1 and
7; the mean of these dimensions is 6.7. As an approximation of
obtain
Y
Hence 0:965. For the computation of this approximation along
the lines as described in Remark 3.10, item 3, we have to compute the Cholesky
factorizations
n. For this approximately
are needed (in the MATLAB implementation). If we compare this with the costs of
one matrix-vector multiplication A x (8760 flops), denoted by MATVEC, it follows
that for computing this approximation of d(A), with an error of 3.5 percent, we need
work comparable to only 5 MATVEC.
We will see that the arithmetic costs for error estimation are significantly higher.
We first consider the methods of x4.1. The arithmetic costs are measured in terms
of MATVEC. For the computation of ff as indicated in Lemma 4.2 with
using the CG method with starting vector need 8 iterations.
In each CG iteration we have to compute a matrix-vector multiplication G E l AG T
which costs approximately 3.7 MATVEC. We obtain ff 0:0155. For the method
based on the Lanczos method for approximating min (G E l AG T
use the heuristic
stopping criterion
We then need 7 Lanczos iterations, resulting in ff direct computation
results in min (G E l AG T
For the computation of
F we first computed the lower triangular part
of
l and then computed kXkF (making use of symmetry). The total
costs of this are approximately MATVEC. Application of Lemma 4.1, with ff CG
and ff Lanczos yields the two intervals
which both contain the exact error 0.965. In both cases, the total costs for error
estimation are 40-45 MATVEC, which is approximately 10 times more than the costs
for computing the approximation d(G E l (2) ) \Gamma2 .
We now consider the method of x4.2. We use the estimators E 2 and E 3 from
(4.13), (4.15) with 6. The results are 0:973. Note that
the order of magnitude of the exact error (3:5 percent) is approximated well by both
In step 3 in the Monte Carlo algorithm
for computing "
need one matrix-vector multiplication G E l AG T
MATVEC). The total arithmetic costs for E 2 are approximately 20 MATVEC. For
S 3 we need two matrix-vector multiplications with l in the third step of the Monte
Carlo algorithm. The total costs for E 3 are approximately 40 MATVEC.
In
Table
5.1 we give results for the discrete 2D Laplacian with
We use the sparsity pattern E l (2).
In the third column of this table we give the computed approximation of d(A) and the
corresponding relative error. In the fourth column we give the total arithmetic costs
for the Cholesky factorization of the matrices P i AP T
item 3). In the columns 5-8 we give the results and corresponding arithmetic costs
for the error estimators discussed in x4. The fifth column corresponds to the method
discussed in x4.1 with ff determined using the CG method applied to G E l AG T
with starting vector 1. In the stopping criterion we take
The computed used as input for the lower bound in (4.3). The resulting
bound for the relative error and the arithmetic costs for computing this error bound
are shown in column 5. In column 6 one finds the computed error bounds if ff is
determined using the Lanczos method with stopping criterion (5.2). In the last two
Table
Results for 2D discrete Laplacian with
costs for Thm. 4.1, Thm. 4.1, MC MC
Table
Results for 2D discrete Laplacian with
costs for Thm. 4.1, Thm. 4.1, MC MC
columns the results for the Monte Carlo estimators are given.
In
Table
5.2 we show the results and corresponding arithmetic costs for the method
with sparsity pattern
Concerning the numerical results we note the following. From the third and fourth
column in Table 5.1 we see that using this method we can obtain an approximation
of d(A) with relative error only a few percent and arithmetic costs only a few
MATVEC. Moreover, this efficiency hardly depends on the dimension n. Comparison
of the third and fourth columns of the Tables 5.1 and 5.2 shows that the approximation
significantly improves if we enlarge the pattern from E l (2) to E l (4). The
corresponding arithmetic costs increase by a factor of about 9. This is caused by
the fact that the mean of the dimensions of the systems P i AP T
from approximately 7 (E l (2)) to approximately 20. For
For the other n
values we have similar ratios between the number of nonzeros in the matrices LA and
. Note that the matrix G E l has to be stored for the error estimation but not
for the computation of the approximation d(G E l ) \Gamma2 . The error bounds in the fifth
and sixth column in the Tables 5.1 and 5.2 are rather conservative and expensive.
Furthermore there is some deterioration in the quality and a quite strong increase in
the costs if the dimension n grows. The strong increase in the costs is mainly due to
the fact that the CG and Lanczos method both need significantly more iterations if n
increases. This is a well-known phenomenom (the matrix G E l AG T
E l has a condition
number that is proportional to n). Also note that the costs for these error estimators
are (very) high compared to the costs of the computation of d(G E l ) \Gamma2 . The results
in the last two columns indicate that the Monte Carlo error estimators, although less
reliable, are more favourable.
In
Figure
5.1 we show the eigenvalues of the matrix G E l AG T
l for the case
(computed with the MATLAB function eig). The eigenvalues are in the
interval [0:025; 1:4]. The mean of these eigenvalues is 1
can see that relatively many eigenvalues are close to 1 and only a few eigenvalues are
close to zero.
Fig. 5.1. Eigenvalues of the matrix G E l AG T
l in Experiment 1
Experiment 2 (MATLAB random sparse matrix). The sparsity structure of the
matrices considered in Experiment 1 is very regular. In this experiment we consider
matrices with a pattern of nonzero entries that is very irregular. We used the
MATLAB generator (sprand(n; n; 2=n)) to generate a matrix B of order n with approximately
2n nonzero entries. These are uniformly distributed random entries in
(0; 1). The matrix B T B is then sparse symmetric positive semidefinite. In the generic
case this matrix has many eigenvalues zero. To obtain a positive definite matrix we
generated a random vector d with all entries chosen from a uniform distribution on
the interval (0; 1) (d :=rand(n; 1)). As a testmatrix we used A := B T B+diag(d). We
performed numerical experiments similar to those in Experiment 1 above. We only
consider the case with sparsity pattern (2). The error estimator based on the
CG method is not applicable because the sign condition in Lemma 4.2 is not fulfilled.
For the case 900 the eigenvalues of A and of G E l AG T
are shown in Figure 5.2.
For A the smallest and largest eigenvalues are 0:0099 and 5:70, respectively. The
picture on the right in Figure 5.2 shows that for this matrix A sparse approximate
inverse preconditioning results in a very well-conditioned matrix. Related to this, one
can see in Table 5.3 that for this random matrix A the approximation of d(A) based
on the sparse approximate inverse is much better than for the discrete Laplacian in
Experiment 1. For
and respectively. For the
mean of the dimensions of the systems P i AP T
spectively. In all three cases the costs for a matrix-vector multiplication G E l AG E l x
are approximately 4.3 MV. Furthermore, in all three cases the matrix G E l AG T
l is
well-conditioned and the number of Lanczos iterations needed to satisfy the stopping
criterion (5.2) hardly depends on n. Due to this, for increasing n, the growth in the
costs for the error estimator based on Theorem 4.1 (column 5) is much slower than in
Experiment 1. As in the Tables 5.1 and 5.2, in Table 5.3 the error quantities in the
columns 3, 5,6,7 are bounds or estimates for the relative error
Fig. 5.2. Eigenvalues of the matrices A and G E l AG T
l in Experiment 2
Table
Results for MATLAB random sparse matrices with
costs for Thm. 4.1, MC MC
For the values of d(A) are not given (column 2). This has to do
with the fact that for these matrices with very irregular sparsity patterns the Cholesky
factorization suffers from much more fill-in than for the matrices in the Experiments
1 and 3. For the matrix A in this experiment with
10000 we run into storage problems
if we try to compute the Cholesky factorization using the MATLAB function chol.
Experiment 3 (QCD type matrix). In this experiment we consider a complex
Hermitean positive definite matrix with sparsity structure as in Experiment 1. This
matrix is motivated by applications from the QCD field. In QCD simulations the
determinant of the so-called Wilson fermion matrix is of interest. These matrices
and some of their properties are discussed in [4, 5]. The nonzero entries in a Wilson
fermion matrix are induced by a nearest neighbour coupling in a regular 4-dimensional
grid. These couplings consist of 12 \Theta 12 complex matrices M xy , which have a tensor
product structure M
xy\Omega U xy , where P xy 2 R 4\Theta4 is a projector, U xy 2 C 3\Theta3 is
from SU 3 and x and y denote nearest neighbours in the grid. These coupling matrices
M xy strongly fluctuate as a function of x and y. Here we consider a (toy) problem
with a matrix which has some similarities with these Wilson fermion matrices. We
start with a 2-dimensional regular grid as in Experiment 1 (n grid points). For the
couplings with nearest neighbours we use complex numbers with length 1. These
numbers are chosen as follows. The couplings with south and west neighbours at a
grid point x are exp(2iff S (x)) and exp(2iff W (x)), respectively, where ff S (x) and
ff W (x) are chosen from a uniform distribution on the interval (0; 1). The couplings
with the north and east neighbours are taken such that the matrix is hermitean. To
make the comparison with Experiment 1 easier the matrix is scaled by the factor n,
i.e. the couplings with nearest neighbours have length n. For the diagonal we take flI ,
where fl is chosen such that the smallest eigenvalue of the resulting matrix is approximately
1 (this can be realized by using the MATLAB function eigs for estimating
the smallest eigenvalue). We performed numerical experiments as in Experiment 1
with (2). The number of nonzero entries in LA and G E l are the same as
in Experiment 1. For 900 the eigenvalues of the matrices A and G E l AG T
are
shown in Figure 5.3. These spectra are in the intervals
respectively.
The results of numerical experiments are presented in Table 5.4. Note that the error
Fig. 5.3. Eigenvalues of the matrices A and G E l AG T
l in Experiment 3
estimator from x4.1 in which the CG method is used for computing ff can not be used
for this matrix (assumptions in Lemma 4.2 are not satisfied). We did not consider the
case here because then the application of the eig function for computing
the smallest eigenvalue led to memory problems.
Comparison of the results in Table 5.4 with those in Table 5.1 shows that when the
Table
Results
costs for Thm. 4.1, MC MC
method is applied to the QCD type of problem instead of the discrete Laplacian the
performance of the method does not change very much.
Finally, we note that in all measurements of the arithmetic costs we did not take
into account the costs of determining the sparsity pattern E l (k) and of building the
matrices
--R
Cambridge University Press
Bounds on the trace of the inverse and the determinant of symmetric positive definite matrices
Some large scale matrix computation problems
Progress on lattice QCD algorithms
Exploiting structure in Krylov subspace methods for the Wilson fermion matrix
Matrix Computations
Parallel preconditioning with sparse approximate inverses
A stochastic estimator of the trace of the influence matrix for laplacian smoothing splines
An alternative approach to estimating the convergence rate of the CG method
On a family of two-level preconditionings of the incomplete block factorization type
Factorized sparse approximate inverse precondi- tionings I : Theory
Quantum Fields on a Lattice
Convergence of Stochastic Processes
--TR | sparse approximate inverse;determinant;preconditioning |
587814 | Generalized Polar Decompositions for the Approximation of the Matrix Exponential. | In this paper we describe the use of the theory of generalized polar decompositions [H. Munthe-Kaas, G. R. W. Quispel, and A. Zanna, Found. Comput. Math., 1 (2001), pp. 297--324] to approximate a matrix exponential. The algorithms presented have the property that, if $Z \in {\frak{g}}$, a Lie algebra of matrices, then the approximation for exp(Z) resides in G, the matrix Lie group of ${\frak{g}}$. This property is very relevant when solving Lie-group ODEs and is not usually fulfilled by standard approximations to the matrix exponential. We propose algorithms based on a splitting of Z into matrices having a very simple structure, usually one row and one column (or a few rows and a few columns), whose exponential is computed very cheaply to machine accuracy. The proposed methods have a complexity of ${\cal O}(\kappa n^{3})$, with constant $\kappa$ small, depending on the order and the Lie algebra ${\frak{g}}$. % The algorithms are recommended in cases where it is of fundamental importance that the approximation for the exponential resides in G, and when the order of approximation needed is not too high. We present in detail algorithms up to fourth order. | Introduction
With the recent developments in the theory of Lie-group integration schemes for ordinary differential
equations (Iserles, Munthe-Kaas, Nørsett & Zanna 2000), the problem of approximating
the matrix exponential has lately received renewed attention. Most Lie-group methods require
a number of computations of matrix exponentials from a Lie algebra g ⊆ R^{n×n} to a Lie group
G ⊆ GL(n, R), which usually constitutes a bottleneck in the numerical implementation of the
schemes (Celledoni, Iserles, Nørsett & Orel 1999).
The matrix exponentials need only be approximated to the order of the underlying ODE method (hence
exact computation is not an issue); however, it is of fundamental importance that such approximations
reside in G. In general, this property is not fulfilled by many standard approximations
to the exponential function (Moler & van Loan 1978) unless the exponential is evaluated exactly.
In some few cases (usually for small dimension) the exponential of a matrix can be evaluated exactly.
This happens, for instance, for three by three skew-symmetric matrices, whose exponential can be
Institutt for informatikk, University of Bergen, H-yteknologisenteret, Thorm-hlensgate 55, N-5020 Bergen,
Norway. Email: anto@ii.uib.no, hans@ii.uib.no
calculated exactly by means of the well known Euler-Rodrigues formula
exp(X) = I + (sin α / α) X + ((1 − cos α)/α²) X²,
where α² = ½ ‖X‖_F² is the square of the Euclidean norm of the axis vector of the skew-symmetric matrix X ∈ so(3)
(Marsden & Ratiu 1994).
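A quick numerical check of the Euler-Rodrigues formula just quoted, compared against a general dense matrix exponential, is given below; the specific axis vector is arbitrary.

```python
import numpy as np
from scipy.linalg import expm

def expm_so3(X):
    """Exact exponential of a 3x3 skew-symmetric matrix via Euler-Rodrigues."""
    alpha = np.sqrt(0.5) * np.linalg.norm(X, 'fro')      # norm of the axis vector
    if alpha < 1e-15:
        return np.eye(3) + X                             # exp(X) ~ I for tiny X
    return (np.eye(3)
            + (np.sin(alpha) / alpha) * X
            + ((1.0 - np.cos(alpha)) / alpha**2) * (X @ X))

x = np.array([0.3, -1.2, 0.7])
X = np.array([[0.0, -x[2], x[1]],
              [x[2], 0.0, -x[0]],
              [-x[1], x[0], 0.0]])
print(np.linalg.norm(expm_so3(X) - expm(X)))             # ~ machine precision
```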
be derived up to dimension eight making use of the Cayley-Hamilton theorem (Horn & Johnson
1985) with significant savings with respect to approximation techniques (Barut, Zeni & Laufer
1994, Leite & Crouch 1999). However, the algorithms are not practical for larger dimensions, for
several reasons. First, they require high powers of the matrix in question (and each matrix-matrix
multiplication amounts to O
secondly, it is well known that the direct use of
the characteristic polynomial, for large scale matrices, may lead to computational instabilities.
The problem of approximating the exponential of a matrix from a Lie algebra to its corresponding
Lie group has been recently considered by (Celledoni & Iserles 2000, Celledoni & Iserles 1999). In
the first paper, the authors construct the approximation by first splitting the matrix X 2 g as the
sum of bordered matrices. Strang-type splittings of order two are considered, so that one could
apply a Yoshida technique (Yoshida 1990), based on a symmetric composition of a basic scheme
whose error locally expands in odd powers of time only, to increase the order. In the second
paper, the authors consider techniques based on canonical coordinates of the second kind (CCSK)
(Varadarajan 1984). To follow that approach, it is necessary to choose a basis of the Lie algebra g.
The choice of the basis plays a significant role in the computational complexity of the algorithms
(Owren & Marthinsen 1999), and, by choosing Chevalley bases (Carter, Segal & Macdonald 1995)
which entail a large number of zero structure constants, it is possible to reduce significantly the
cost of the methods from O
\Delta to O
In this paper we consider the problem of approximating, to a given order of accuracy,
F(t, Z) ≈ exp(tZ) ∈ G, Z ∈ g,
so that F(t, Z) ∈ G, where g ⊆ gl(R, n) and G ⊆ GL(R, n). The techniques we introduce consist
in a Lie-algebra splitting of the matrix Z by means of an iterated generalized polar decomposition
induced by an appropriate involutive automorphism oe G, as discussed in (Munthe-Kaas
et al. 2000b). We introduce a general technique for approximations of arbitrary high order, and
discuss practical algorithms of order two, three and four. For large n, these algorithms are very
competitive with standard approximations of the exponential function (for example diagonal Pad'e
approximants).
The paper is organized as follows. In Section 2 we discuss the background theory of the polar
decomposition on Lie groups and its symmetric version. Such polar decomposition can be used to
induce splitting in the Lie algebra g. As long as this splitting is practical to compute, together
with the exponential of each 'splitted' part, it leads to splitting methods for the approximation of
the exponential of practical interest.
In Section 3 we use the theory developed in x2 to derive approximations of the exponential function
for some relevant matrix Lie groups as SO(R;n), and SL(R; n). Methods of order two, three and
four are discussed in greater detail, together with their computational complexity. The methods
are based on splittings in bordered matrices, whose exact exponentials are very easy to compute.
Section 4 is devoted to some numerical experiments where we illustrate the results derived in this
paper, and finally Section 5 is devoted to some concluding remarks.
Background theory
It is usual in differential geometry to denote Lie-group elements with lower case letters and Lie-
algebra elements with upper-case letters, whether they represent matrices, vectors or scalars (Helgason 1978). We adopt this convention throughout this section.
Let G be a Lie group with Lie algebra g. We restrict our attention to matrix groups, i.e to the
case when G ' GL(R;n).
It is known that, provided σ : G → G is an involutive automorphism of G, every element z ∈ G
sufficiently close to the identity can be decomposed in the product z = xy, (2.1)
where y belongs to G^σ = {w ∈ G : σ(w) = w}, the subgroup of elements of G fixed under σ, and
x belongs to G_σ = {w ∈ G : σ(w) = w^{-1}}, the subset of anti-fixed points of σ (Lawson 1994, Munthe-Kaas et
al. 2000b). The set G_σ has the structure of a symmetric space (Helgason 1978) and is closed under
the product
as it can be easily verified by application of oe to the right-hand-side of the above relation. The
decomposition (2:1) is called the polar decomposition of z in analogy with the case of real matrices
with the choice of automorphism
g. The automorphism oe induces an involutive automorphims doe
on g in a natural manner,
d
dt
and it defines a splitting of the algebra g into the sum of two linear spaces,
Zg is a subalgebra of g, while \GammaZ g has the
structure of a Lie-triple system, a set closed under the double commutator,
To keep our presentation relevant to the subject matter of this paper, we refer the reader to (Munthe-Kaas et al. 2000b, Munthe-Kaas, Quispel & Zanna 2000a) and references therein for a more extensive treatment of such decompositions. However, it is of fundamental importance to note that the sets k and p possess the following properties:
    [k, k] ⊆ k,    [p, k] ⊆ p,    [p, p] ⊆ k.                        (2.3)
We denote by Π_p(Z) = ½(Z − dσ(Z)) the canonical projection onto the subspace p and by Π_k(Z) = ½(Z + dσ(Z)) the projection onto k. Then,
    Z = P + K,
where P = Π_p(Z) ∈ p and K = Π_k(Z) ∈ k.
Assume that x and y in (2.1) are of the form x = exp(X(t)) and y = exp(Y(t)), and that X(t) and Y(t) can be expanded in series,
    X(t) = Σ_{i≥1} X_i t^i,    Y(t) = Σ_{i≥1} Y_i t^i,
where the X_i and Y_i can be explicitly calculated by means of recurrence relations given in (Zanna 2000). Note that Y(t) expands in odd powers of t only. The first terms in the expansions are X(t) = t Π_p(Z) + O(t²) and Y(t) = t Π_k(Z) + O(t³).
We also consider a symmetric-type generalized polar decomposition,
    z(t) = exp(X(t)) exp(Y(t)) exp(X(t)),                            (2.7)
where, as above, X(t) ∈ p and Y(t) ∈ k. To compute X(t), we apply σ to both sides of (2.7) to obtain
    σ(z(t)) = exp(−X(t)) exp(Y(t)) exp(−X(t)).                       (2.8)
Isolating the y term in (2.8) and (2.7) and equating the results, we obtain
    exp(−X(t)) z(t) exp(−X(t)) = exp(X(t)) σ(z(t)) exp(X(t)).
This leads to a differential equation for X which is very similar to the one obeyed by Y in (2.5) (Zanna 2000). Using the recursions in (Zanna 2000) we obtain recursions for X(t) and Y(t). The first terms are X(t) = (t/2) Π_p(Z) + O(t³) and Y(t) = t Π_k(Z) + O(t³), and both X(t) and Y(t) expand in odd powers of t only.
3 Generalized polar decomposition and its symmetric version for the approximation of the exponential
Assume now that we wish to approximate exp(tZ) for some Z ∈ g, and that σ1 is an involutive automorphism such that the exponentials of terms in p1, as well as analytic functions of ad_P, are easy to compute. Then g = p1 ⊕ k1 and we can approximate
    exp(tZ) ≈ exp(X^[1](t)) exp(Y^[1](t)),                           (3.1)
where X^[1] and Y^[1] obey the order conditions (2.4)-(2.6) to suitable order.
Alternatively, we can approximate
    exp(tZ) ≈ exp(X^[1](t)) exp(Y^[1](t)) exp(X^[1](t)),             (3.2)
where X^[1] and Y^[1] now obey the order conditions (2.10)-(2.11) to given accuracy.
The same mechanism can be applied to split k 1 in p 2 \Phi k 2 by means of a suitable automorphism
oe 2 . The procedure can be iterated and, provided that the exponential of k m is easy to compute,
we have an algorithm to approximate exp(tZ) to a given order of accuracy. In this circumstance,
(3:1) will read
while the analogue of (3:2) is
both corresponding to the algebra splitting
3.1 On the choice of the automorphisms oe i
In what follows, we will consider automorphisms σ of the form
    σ(z) = S z S,    z ∈ G,                                          (3.6)
where S is an idempotent matrix, i.e. S² = I (Munthe-Kaas & Zanna 2000). Clearly, dσ(Z) = S Z S, and for simplicity, we will abuse notation writing σZ in place of dσZ, given that all our computations take place in the space of matrices.
Since S² = I, all the eigenvalues of S are either +1 or −1. Thus, powers of matrices
as well as powers of adP , are easy to evaluate by means of the (+1)- and (\Gamma1)-eigenspace of S
(Munthe-Kaas & Zanna 2000).
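To make this splitting concrete, the following short Python sketch (our own illustration, not code from the paper; the particular S and the test matrix are arbitrary choices) forms dσ(Z) = S Z S and the projections Π_p(Z) = ½(Z − SZS) and Π_k(Z) = ½(Z + SZS), and checks that they reproduce Z.

```python
import numpy as np

def split(Z, S):
    """Splitting Z = P + K induced by the involutive automorphism
    d(sigma)(Z) = S Z S, where S @ S = I."""
    sZ = S @ Z @ S                 # d(sigma)(Z)
    P = 0.5 * (Z - sZ)             # projection onto p (anti-fixed part)
    K = 0.5 * (Z + sZ)             # projection onto k (fixed part)
    return P, K

n = 5
rng = np.random.default_rng(0)
Z = rng.standard_normal((n, n))
S = np.diag([1.0] + [-1.0] * (n - 1))   # one involutive choice: S @ S == I

P, K = split(Z, S)
assert np.allclose(P + K, Z)
# For this S, P is "bordered": nonzero only in the first row and column
# (off the (1,1) entry), while K keeps the (1,1) entry and the trailing block.
print(np.round(P, 2))
print(np.round(K, 2))
```

With this choice of S the p-part is exactly the kind of bordered matrix exploited in the next subsection.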
3.2 Automorphisms that lead to banded matrices splittings
Let Z ∈ gl(n, R) be an n × n matrix and consider the automorphism σ1(z) = S1 z S1, where S1 is the idempotent matrix
    S1 = diag(1, −1, −1, . . . , −1).
It is easy to verify that
    Π_{p1}(Z) = ½(Z − S1 Z S1)
has nonzero entries only in the first row and first column of Z (excluding the (1,1) entry), while
    Π_{k1}(Z) = ½(Z + S1 Z S1)
retains the (1,1) entry and the trailing (n − 1) × (n − 1) block of Z.
In general, assume that, at the j-th step, the space k_{j−1} consists of matrices whose leading (j − 1) × (j − 1) block is diagonal and whose remaining nonzero entries lie in the trailing (n − j + 1) × (n − j + 1) block. Then, the obvious choice is
    S_j = diag(I_{j−1}, ~S),
where I_{j−1} denotes the (j − 1) × (j − 1) identity matrix and ~S = diag(1, −1, . . . , −1), so that the subspace p_j consists of matrices whose nonzero entries lie only in the j-th row and j-th column (excluding the (j, j) entry).
Exponentials of matrices of the form (3:11) are very easy to compute: in effect,
exp
O ~
O exp( ~
where exp( ~
can be computed exactly either with a formula analogous to the Euler-Rodriguez
formula (1:1): denote a
exp( ~
I
~
~
a T
I
I
~
~
\Gammaa T
Note that
~
Another alternative for the exact exponential of ~
is the one proposed in (Celledoni & Iserles
exp( ~
where
a j
e 1 is the vector [1; finally 1)=z. The latter formula (3:13),
as we shall see in the sequel, leads to significant savings in the computation and assembly of the
exponentials.
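The closed-form expressions (3.12)-(3.13) are hard to read in this copy. As an independent illustration of why exponentials of bordered matrices are cheap, the following sketch (ours, covering only the skew-symmetric bordered case relevant to so(n)) uses the Rodrigues-type identity X³ = −‖a‖² X to exponentiate a matrix whose only nonzero entries sit in the first row and column.

```python
import numpy as np
from scipy.linalg import expm

def exp_bordered_skew(a):
    """Exact exponential of the skew-symmetric 'bordered' matrix
        X = [[0, -a^T], [a, 0]],   a in R^{n-1},
    via a Rodrigues-type closed form (X^3 = -||a||^2 X)."""
    a = np.asarray(a, dtype=float)
    n = a.size + 1
    X = np.zeros((n, n))
    X[1:, 0] = a
    X[0, 1:] = -a
    alpha = np.linalg.norm(a)
    if alpha == 0.0:
        return np.eye(n)
    return np.eye(n) + (np.sin(alpha) / alpha) * X \
           + ((1.0 - np.cos(alpha)) / alpha**2) * (X @ X)

a = np.array([0.3, -1.2, 0.7])
E = exp_bordered_skew(a)
# check against a general-purpose matrix exponential and orthogonality
X = np.zeros((4, 4)); X[1:, 0] = a; X[0, 1:] = -a
assert np.allclose(E, expm(X))
assert np.allclose(E.T @ E, np.eye(4))
```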
Moreover, given that
O ~
where
~
w j;j a
Next, if Z 2 g, to obtain an approximation of the exponential in G by these automorphisms, we
shall require that oe i 's, defined by the above matrices S i , map g into g. Clearly, this is the case for
ffl so(n; R), since oe i
Z is a map from so(n) ! so(n) given that each S i is an
orthogonal matrix;
ffl sl(n; R), since oe i leaves the diagonal elements of Z (hence its trace) unchanged;
ffl quadratic Lie algebras, provided that the matrices S_i and the matrix J defining the algebra commute. This is for instance the case when J is diagonal, hence our formulas are valid for so(p, q) but not for the symplectic algebra sp(n; R). In the latter situation,
we consider different choices for the automorphisms oe i , discussed at a greater length in
(Munthe-Kaas & Zanna 2000).
3.3 Splittings of order two to four, their implementation and complexity
In this section we describe in more details the algorithms, the implementation and the complexity
of the splittings induced by the automorphisms described above. The cases of a polar-type repre-
sentation, xy, or a symmetric polar-type representation, z = xyx, are discussed separately.
Algorithm 1 (Polar-type splitting, order two) Based on the iterated generalized polar decomposition
(3:3).
Note that the Π_{p_j} and Π_{k_j} projections need not be stored in separate matrices but can be stored in place of the rows and columns of the matrix Z. We truncate the expansions (2.6) to order two, hence at each step only the p_j-part needs correction. Bearing in mind (3.3), the matrices X^[j] are low-rank matrices with nonzero entries only in the j-th row and column; these entries are stored in place of the corresponding entries of Z. The
matrix Y [n\Gamma1] is diagonal and is stored in the diagonal entries of Z.
Purpose: 2nd order approximation of the splitting (3:3)
overwritten with the nonzero elements of X [i] and Y [m] as:
a
The computation of the splitting requires at each step two matrix-vector multiplications, each
amounting to O(n²) floating point operations (we count both multiplications and additions), as well as two vector updates, which are O(n) operations. Hence, for large n, the
cost of computing the splitting is of the order
ffl 2n 3 for so(n), taking into account that b
Note that both for so(p; q) and so(n) the matrix Y [n\Gamma1] is the zero matrix.
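To fix ideas, here is a minimal sketch (ours) of a single level of the polar-type splitting with an order-two correction of the p-part. The correction X = tP − (t²/2)[P, K] is our reconstruction of what the truncated order conditions amount to, since (2.6) is garbled in this copy; the dense expm calls are used only to verify the order numerically.

```python
import numpy as np
from scipy.linalg import expm

def comm(A, B):
    return A @ B - B @ A

def polar_type_step(Z, S, t):
    """One level of a polar-type splitting  exp(tZ) ~ exp(X) exp(tK), with
    X = t*P - (t**2/2)*[P, K] as an order-two correction of the p-part
    (this correction is an assumption of ours, not the paper's formula)."""
    sZ = S @ Z @ S
    P, K = 0.5 * (Z - sZ), 0.5 * (Z + sZ)
    X = t * P - 0.5 * t**2 * comm(P, K)   # correction stays in p since [p, k] is in p
    return expm(X) @ expm(t * K)

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
Z = A - A.T                                # a matrix in so(6)
S = np.diag([1.0] + [-1.0] * 5)
for t in (1e-1, 1e-2, 1e-3):
    print(t, np.linalg.norm(polar_type_step(Z, S, t) - expm(t * Z)))
    # the local error decays roughly like t**3, i.e. the scheme is of order two
```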
Algorithm 2 (Symmetric polar-type splitting, order two) Based on the iterated generalized
polar decomposition (3:4)
We truncate the expansions (2:10)-(2:11) to order two. The storing of the entries is as above.
Purpose: 2nd order approximation of the splitting (3:4)
overwritten with the nonzero elements of X [i] and Y [m] as:
% Computation of the splitting
a
This splitting costs only
ffl n(n\Gamma1)for so(n), because of skew-symmetry.
Algorithm 3 (Polar-type splitting, order three)
We truncate (2.6)-(2.7) to include the O(t³) terms. Note that the term [K, [P, K]] is of the form (3.15). We also need to include the term of the form [P, [P, K]]. We observe that
Purpose: 3rd order approximation of the splitting (3:3)
overwritten with the nonzero elements of X [i] and Y [m] as:
% Computation of the splitting
a
Analyzing the computations involved, the most costly part is constituted by the matrix-vector
products in the computations in c products
in the update of Z(j j). The computation of c
amounting to 8n 3 in the whole process. For the update of Z(j
need to compute two vector-vector products (O
operations
to update the elements of the matrix. Thus, the whole cost of updating the matrix Z(j
n) is 5n 3 . The update of z j;j requires operations per step, which give a 2n 3
contribution to the total cost of the splitting.
In summary, the total cost of the splitting is
ffl 5n 3 for so(p; q) and sl(n)
ffl for so(n), note that d j need not be calculated as well as z Similarly, we take into
account that b and that only half of the elements of Z(j need be
updated. The total amounts to 2 1
It is easy to modify the splitting above to obtain order four. Note that
which requires the computation of the scalar b T
costing 2=3n 3 operations in the whole pro-
cess. However, all the other powers ad i
~
~
no further computation. Next
can be computed with just two (one) extra matrix-vector computations
for sl(n) (resp. so(n)), which contribute 4n 3 (resp. 2n 3 ) to the cost of the splitting, so that the
splitting of order four costs a total of 7n 3 operations for sl(n) (resp. 4n 3 for so(n)).
Algorithm 4 (Symmetric polar-type splitting, order four)
We truncate (2.10)-(2.11) to include the O(t³) terms. Also in this case, the term [K, [P, K]] is of the form (3.15), while the term [P, [P, K]] is computed according to (3.16).
Purpose: 4th order approximation of the splitting (3:4)
overwritten with the nonzero elements of X [i] and Y [m] as:
a
We need to compute a total of four matrix-vector products, yielding 8
operations. The update
of the block Z(j costs 5n 3 operations, while the update of z(j; costs 2n 3
operations, for a total of
ffl 5n 3 operations for sl(n) and so(p
operations for so(n).
3.4 On higher order splittings
The costs of implementing splittings following (3:3) or (3:4) depend on the type of commutation
involved: commutators of the form [P, K] and [P, [P, K]] contribute an O(n³) term to the total complexity of the splitting; however, commutators of the form [K, [P, K]], for instance, easily contribute an O(n⁴) term to the total complexity of the splittings if the special structure of the terms involved is not taken into consideration. If carefully implemented, also these terms can be computed with only matrix-vector and vector-vector products, contributing O(n³) operations to the total cost of the splitting. For example, let us consider the term
which appears in the O
contribution in the expansion of the Y part,
both for the polar-type and symmetric polar-type splitting. One has
denotes the matrix z j;j I \Gamma -
. The parentheses indicate the correct order in which the operations should be executed to obtain the right complexity (O(n²) per iteration, hence a total of O(n³) for the splitting). Many of the terms are already computed for the lower order
conditions, yet the complexity rises significantly. Therefore we recommend these splitting techniques when a moderate order of approximation is required.
To construct higher order approximations with these splitting techniques, one could use our symmetric
polar-type splittings, together with a Yoshida-type symmetric combination.
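The following sketch (ours) illustrates the Yoshida composition mentioned above on a single-level symmetric (Strang-type) splitting used as the time-symmetric order-two base method; the triple-jump coefficients are the standard ones, and the dense expm calls are only for verification.

```python
import numpy as np
from scipy.linalg import expm

def strang(Z, S, t):
    """Time-symmetric order-two base method: exp(tZ) ~ exp(t/2 P) exp(tK) exp(t/2 P)."""
    sZ = S @ Z @ S
    P, K = 0.5 * (Z - sZ), 0.5 * (Z + sZ)
    E = expm(0.5 * t * P)
    return E @ expm(t * K) @ E

def yoshida4(Z, S, t):
    """Yoshida 'triple jump': composing a symmetric order-2 method at three
    substeps yields order 4 at three times the cost of the base method."""
    g1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
    g0 = 1.0 - 2.0 * g1                      # = -2^(1/3) / (2 - 2^(1/3))
    return strang(Z, S, g1 * t) @ strang(Z, S, g0 * t) @ strang(Z, S, g1 * t)

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5))
Z = A - A.T
S = np.diag([1.0] + [-1.0] * 4)
for t in (2e-1, 1e-1, 5e-2):
    print(t, np.linalg.norm(yoshida4(Z, S, t) - expm(t * Z)))
    # the local error decays roughly like t**5, i.e. a fourth-order approximation
```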
3.5 Assembly of the approximation F (t; Z) to the exponential
For each algorithm that computes the approximation to the exponential, we distinguish two cases:
when the approximation is applied to a vector v and when instead the matrix exponential exp(Z)
is required. Since the matrices X^[j] are never constructed explicitly and are stored as vectors, the computation of the exponentials exp(X^[j]) is also never performed explicitly; it is implemented
as in the case of the Householder reflections (Golub & van Loan 1989) when applied to a vector.
First, let us return to (3:13). It is easy to verify that, if we denote by ff
has the exact form
I
where I is the 2 \Theta 2 identity matrix. Similar remarks hold about the matrix D \Gamma1 . Thus, the
computation of these quantities can be done in very few flops that `do not contribute'
to the total cost of the algorithm. Next, if v; k; the assembly of exp( ~
according to
can be computed in 6j operations. If we let j vary between 1 and n, the total cost of the
multiplications is hence 3n². This is precisely the complexity of the assembly of the exponential for polar-type splittings of the form (3.3).
Algorithm 5 (Polar-type approximation)
Purpose: Computing the approximant (3:3) applied to a vector v
containing the nonzero elements of X [i] and Y [m] as:
a
old
and
new := [a
new .
In the case when the output needs to be applied to an n × n matrix B, we can apply the above algorithm to each column of B, for a total of 3n³ operations. This complexity can be reduced to about 2n³ by taking into account that some of the intermediate vectors can be calculated once and for all, since they depend only on the splitting of the matrix Z and not in any manner on the columns of B; they can be computed once and stored for later use.
Algorithm 6 (Symmetric polar-type approximation)
The approximation to the exponential is carried out in a manner very similar to that described
above in Algorithm 5, except that, being (3:4) based on a Strang-type splitting, the assembly is
also performed in reverse order.
Purpose: Computing the approximant (3:4) applied to a vector v
containing the nonzero elements of X [i] and Y [m] as:
a
old
old
new := [a
new .
a
old
Table 1: Complexity for a polar-type order-two approximant.
Algorithm sl(n); so(p; q) so(n)
1+5 vector matrix vector matrix
splitting 1 1n 3 1 1n 3 2n 3 2n 3
assembly exp 3n 2 2n 3 3n 2 2n 3
new := [a
new
The vectors α, β and γ need be calculated only once and stored for later use in the reverse-order multiplication. The cost of the assembly is roughly twice the cost of the assembly in Algorithm 5, hence it amounts to 5n² operations (we save n² operations by omitting the recomputation of α).
When the result is applied to a matrix B, again we apply the same algorithm to each column of
B, which yields n 3 operations. Also in this case the vector ff does not depend on B and can be
computed once and for all, reducing the cost to 4n 3 operations. The same remark holds for the
vectors fi and fl .
It is important to mention that the matrix D might be singular or close to singular (for example
when a_j and b_j are close to being orthogonal), hence the computation of exp(~X_j) according to (3.13) may lead to instabilities. In this case, it is recommended to use (3.12) instead of (3.13). The
latter choice is twice as expensive (5n 2 for polar-type assemblies and 9n 2 for symmetric assemblies
for F (t; Z) applied to a vector), but deals better with the case when D is nearly singular.
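Although the exact low-rank form (3.13) is not legible here, the principle behind the O(n) assembly cost per factor is that each exp(X^[j]) differs from the identity by a rank-two correction, so it is applied to a vector much like a Householder reflection. A generic sketch of this idea follows (the shapes and names are ours, not the paper's formula).

```python
import numpy as np

def apply_lowrank_exp(v, W1, M, W2):
    """Apply E = I + W1 @ M @ W2.T to a vector v in O(n) work,
    where W1, W2 are n x 2 and M is 2 x 2.  This is the mechanism that lets
    the exponential factors be applied without forming them as dense matrices."""
    return v + W1 @ (M @ (W2.T @ v))

n = 8
rng = np.random.default_rng(4)
W1 = rng.standard_normal((n, 2))
W2 = rng.standard_normal((n, 2))
M = rng.standard_normal((2, 2))
v = rng.standard_normal(n)

dense = np.eye(n) + W1 @ M @ W2.T
assert np.allclose(apply_lowrank_exp(v, W1, M, W2), dense @ v)
```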
4 Numerical experiments
4.1 Non-symmetric polar-type approximations to the exponential
We commence comparing the polar-type order-2 splitting of Algorithm 1 combined with the assembly
of the exponential in Algorithm 5 with the (1; 1)-Pad'e approximant for matrices in sl(n) and
so(n), with corresponding groups SL(n) and SO(n). We choose diagonal Padé approximants as a benchmark because they are easy to implement, they are the rational approximants with the highest order of approximation at the origin, and it is well known that they map quadratic Lie algebras into quadratic Lie groups (but not necessarily other Lie algebras into the corresponding Lie groups).
Table 1 reports the complexity of the method 1+5. A (1,1)-Padé approximant costs O(n³) floating point operations when applied to a vector (essentially the cost of LU-factorising a linear system) and O(n³) operations when applied to n × n matrices (2n³ operations come from the construction of the right-hand side, 2/3 n³ from the LU factorization and 2n³ from the n forward and backward solutions of triangular systems).
In Figure 4.1 we compare the number of floating point operations scaled by n³ for matrices Z up
to size 500 as obtained in Matlab for our polar-type order-two algorithm (method 1+5) and the
both applied to a matrix. We consider the cases when Z is in sl(n) and
so(n). The costs of computing both approximations clearly converge to the theoretical estimates (which in the plot are represented by solid lines) given in Table 1 for large n.
Figure 1: Floating point operations (scaled by n³) versus size for the approximation of the exponential of a matrix in sl(n) and in so(n) applied to a matrix with the order-2 polar-type algorithm (method 1+5) and (1,1)-Padé approximant.
Table 2: Complexity for a polar-type order-three approximant. The numbers in parentheses correspond to the coefficients for an order-four approximation.
Algorithm sl(n); so(p; q) so(n)
3+5 vector matrix vector matrix
splitting 5(7)n 3 5(7)n 3 2 1(4)n 3 2 1(4)n 3
assembly exp 3n 2 2n 3 3n 2 2n 3
total 5(7)n 3 7(9)n 3 2 1(4)n 3 4 1(6)n 3
In Figure 2 we compare the accuracy of the two approximations (left plot) for the exponential of a traceless matrix Z normalized so that ‖Z‖₂ = 1. Both methods show a local truncation error of O(h³), revealing that the order of approximation to the exact exponential is two. The right plot shows the error in the determinant as a function of h: the Padé approximant has an error that behaves like h³, while our method preserves the determinant equal to one to machine accuracy.
In Table 2 we report the complexity of the method 3+5, which yields an approximation to the exponential of order three. The numbers in parentheses refer to the cost of the algorithm with order-four corrections.
4.2 Symmetric polar-type approximations to the exponential
We commence comparing our method 2+6, yielding an approximation of order two, with the (1; 1)
Pad'e approximant. Table 3 reports the complexity of the method 2+6.
Clearly, in the matrix-vector case, our methods are one order of magnitude cheaper than the Pad'e
approximant, and are definitely to be preferred (see Figure 3, for matrices in sl(n)). Furthermore,
Figure 2: Error in the approximation (left) and in the determinant (right) versus h for the approximation of the exponential of a traceless matrix of unit norm with the order-2 polar-type algorithm (method 1+5) and (1,1)-Padé approximant.
Table 3: Complexity for a symmetric polar-type order-two approximant.
Algorithm sl(n); so(p; q) so(n)
2+6 vector matrix vector matrix
splitting
assembly exp 5n 2 4n 3 5n 2 4n 3
total
our method maps the approximation into SL(n), while the Padé approximant does not. When comparing approximations of the matrix exponential applied to a vector, one must also consider Krylov subspace methods (Saad 1992). We compare the method 2+6 with a Krylov subspace method when Z is a matrix in sl(n), normalized so that ‖Z‖₂ = 1, and v is a vector of unit norm. The Krylov subspaces are obtained by Arnoldi iterations, whose computational cost amounts to circa 2mn² floating point operations, counting both multiplications and additions. Here m is the dimension of the subspace K_m = span{v, Zv, . . . , Z^{m−1}v}. To obtain the total cost of a Krylov method, we have to add the O(m³) computations arising from the evaluation of the exponential of the Hessenberg matrix obtained with the Arnoldi iteration, plus 2nm operations arising from the multiplication of the latter with the orthogonal basis. However, when n is large and m ≪ n, these costs are subsumed in that of the Arnoldi iteration, and the leading factor is 2mn². The error,
computed as the distance from the exact exponential, and the floating point operation counts of both approximations are given in Table 4. The Krylov method converges very fast: in all three cases eight to nine iterations are sufficient to obtain almost machine accuracy, while two iterations yield an error which is of the order of that of method 2+6, at about two thirds (0.64, 0.68, 0.69 respectively) of the cost. On the other hand, Krylov methods do not produce an SL(n) approximation to the exponential, unless the computation is performed to machine accuracy, which, in our particular example, is 3.30, 2.84 and 2.85 times (circa three times) more costly than the 2+6 algorithm. As far as the SO(n) case is concerned, it should be noted that, if Z ∈ so(n), then the approximation w ≈ exp(Z)v produced by the Krylov method has the feature that ‖w‖₂ = ‖v‖₂ independently of the number m of iterations: in this case, the Hessenberg matrix produced by the Arnoldi iterations is tridiagonal and skew-symmetric, hence its exponential is orthogonal. Thus, Krylov methods are the method of choice for actions of SO(n) on Rⁿ (Munthe-Kaas & Zanna 1997). One might extrapolate that,
Figure 3: Floating point operations versus size for the approximation of the exponential of a matrix in sl(n) applied to a vector with the order-2 symmetric polar-type algorithm (method 2+6) and the (1,1)-Padé approximant.
Table 4: Krylov subspace approximations versus the method 2+6 for the approximation of exp(Z)v.
            Krylov                 2+6
size n   error   m   flops     error   flops
Table 5: Complexity for a symmetric polar-type order-four approximant.
Algorithm sl(n); so(p; q) so(n)
4+6 vector matrix vector matrix
splitting 5n 3 5n 3 2 1n 3 2 1n 3
assembly exp 5n 2 4n 3 5n 2 4n 3
total 5n 3 9n 3 2 1
if we wish to compute the exponential exp(Z)Q, where Q ∈ SO(n), one could perform only a few iterations of the Krylov method to compute w_i ≈ exp(Z)q_i, the q_i being the columns of Q. Unfortunately, the approximation [w_1, . . . , w_n] ceases to be orthogonal: although each w_i has unit norm, the vectors w_i need not be mutually orthogonal, and the final approximation is not in SO(n). A similar analysis holds for Stiefel manifolds, unless Krylov methods are implemented to approximate the exponential to machine accuracy.
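For reference, here is a textbook sketch (following Saad 1992, not the implementation behind Table 4) of the Arnoldi-based Krylov approximation of exp(Z)v discussed above.

```python
import numpy as np
from scipy.linalg import expm

def krylov_expv(Z, v, m):
    """Arnoldi/Krylov approximation of exp(Z) @ v from the m-dimensional
    subspace span{v, Zv, ..., Z^(m-1) v}:
        exp(Z) v  ~  beta * V_m @ expm(H_m) @ e_1,   beta = ||v||."""
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = Z @ V[:, j]
        for i in range(j + 1):                  # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:                 # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    E = expm(H[:m, :m])
    return beta * V[:, :m] @ E[:, 0]

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 200)) / 20.0
Z = A - A.T                                     # skew, so exp(Z) is orthogonal
v = rng.standard_normal(200); v /= np.linalg.norm(v)
for m in (2, 4, 8):
    print(m, np.linalg.norm(krylov_expv(Z, v, m) - expm(Z) @ v))
```

Note that the returned vector has norm ‖v‖₂ whenever the Hessenberg matrix is skew-symmetric, which is the norm-preservation property used in the discussion above.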
In passing, we recall that our methods based on a symmetric polar-type decomposition are time-
symmetric. Hence it is possible to compose a basic scheme in a symmetric manner, following
a technique introduced by Yoshida (Yoshida 1990), to obtain higher order approximations: two
orders of accuracy can be obtained at three times the cost of the basic method. For instance we
can use the method 2+6 as a basic algorithm to obtain an approximation of order four. Thus an
approximation of order four applied to a vector can be obtained in 17n 2 operations for sl(n) (two
splittings and three assemblies), compared to the O(n³) operations required by the method 4+6.
To conclude our gallery, we compare the method 4+6, an order-four scheme, whose complexity is
described in Table 5, with a (2,2)-Padé approximant, which requires 2 2/3 n³ floating point operations when applied to vectors (2n³ for the assembly and 2/3 n³ for the LU factorization) and 6 2/3 n³ when applied to n × n matrices (since we have to resolve for multiple right-hand sides). The figures obtained by numerical
simulations for matrices in sl(n) and SO(n) clearly agree with the theoretical asymptotic values
(plotted as solid lines), as shown in Figure 4. The cost of both methods is very similar, as is the error from the exact exponential, although, in the SL(n) case, the 4+6 scheme preserves the determinant to machine accuracy while the Padé scheme does not (see Figure 5).
5 Conclusions
In this paper we have introduced numerical algorithms for approximating the matrix exponential.
The methods discussed possess the feature that, if Z 2 g, then the output is in G, the Lie group
of g, a property that is fundamental in the integration of ODEs by means of Lie-group methods.
The proposed methods have a complexity of O(κn³), where n denotes the size of the matrix whose exponential we wish to approximate. Typically, for moderate order (up to order four), the constant κ is less than 10, whereas the exact computation of a matrix exponential in Matlab (which employs scaling and squaring with a Padé approximant) generally costs between 20n³ and 30n³.
Comparing methods of the same order of accuracy applied to a vector v ∈ Rⁿ and to an n × n matrix B, for Z ∈ g with corresponding Lie group G:
ffl For the case F (t; Z)v - exp(tZ)v, where v is a vector: Symmetric polar-type methods are
slightly cheaper than their non-symmetric variant. For the SO(n) case, the complexity of
symmetric methods is very comparable with that of diagonal Pad'e approximants of the same
Figure 4: Floating point operations (scaled by n³) versus size for the approximation of the exponential of a matrix in sl(n) applied to an n × n matrix with the order-4 symmetric polar-type algorithm (method 4+6) and (2,2)-Padé approximant.
Figure 5: Error in the approximation (left) and in the determinant (right) versus h for the approximation of the exponential of a traceless matrix of unit norm with the order-4 symmetric polar-type algorithm (method 4+6) and (2,2)-Padé approximant.
order.
The complexity of the method 2+6 is O(n²), while for the rest of our methods it is O(n³). Krylov subspace methods do, however, have the complexity O(mn²), which is O(n²) if the number of iterations
is independent of n. Thus, if it is important to stay on the group, we recommend Krylov
methods with iteration to machine accuracy for this kind of problems. If convergence of
Krylov methods is slow, our methods might be good alternatives. See (Hochbruck & Lubich
1997) for accurate bounds on the number m of iterations of Krylov methods.
ffl For the case F (t; Z)B - exp(tZ)B, with B an n \Theta n matrix: Non-symmetric polar-type
methods are marginally cheaper than their symmetric counterpart; however the latter should
be preferred when the underlying ODE scheme is time-symmetric. The proposed methods
have a complexity very comparable with that of diagonal Pad'e approximants of the same
order (they require slightly less operations in the SO(n) case) in addition they map sl(n) to
SL(n), a property that is not shared by Pad'e approximants. For these problems our proposed
methods seem to be the best choice.
It should also be noted that significant advantages arise when Z is a banded matrix. For instance,
the cost of method 2+6 scales as O(nr) for F(t, Z) applied to a vector and O(n²r) when applied to a matrix when Z has bandwidth 2r + 1. The savings are less striking for higher order
methods since commutation usually causes fill-in in the splitting.
Our schemes have an implementation cost smaller than those proposed by (Celledoni & Iserles
1999), which also produce an output in G when Z ∈ g. For the SO(n) case, Celledoni et al. propose an order-four scheme whose complexity is 11 1/3 n³; our order-four schemes (method 3+5 with order-four corrections and method 4+6) cost 6n³ and 6 1/3 n³ operations - very comparable with the
diagonal Pad'e approximant of the same order. Furthermore, the implementation of the schemes
of Celledoni et al. requires a precise choice of a basis in g, hence the knowledge of the structure
constants of the algebra. Our approach is instead based on the inclusion relations (2:3) and is
easily expressed in very familiar linear algebra formalism.
--R
Lectures on Lie Groups and Lie Algebras
Methods for the approximation of the matrix exponential in a Lie-algebraic setting
Complexity theory for Lie-group solvers
Matrix Computations
Differential Geometry
Matrix Analysis
'Polar and Ol'shanskii decompositions'
'Closed forms for the exponential mapping on matrix Lie groups based on Putzer's method'
Introduction to Mechanics and Symmetry
Numerical integration of differential equations on homogeneous manifolds
Integration methods based on canonical coordinates of the second kind
Lie Groups
Recurrence relation for the factors in the polar decomposition on Lie groups
--TR
--CTR
Ken'Ichi Kawanishi, On the Counting Process for a Class of Markovian Arrival Processes with an Application to a Queueing System, Queueing Systems: Theory and Applications, v.49 n.2, p.93-122, February 2005
Jean-Pierre Dedieu , Dmitry Nowicki, Symplectic methods for the approximation of the exponential map and the Newton iteration on Riemannian submanifolds, Journal of Complexity, v.21 n.4, p.487-501, August 2005 | lie algebra;matrix exponential;lie-group integrator |
587816 | More Accurate Bidiagonal Reduction for Computing the Singular Value Decomposition. | Bidiagonal reduction is the preliminary stage for the fastest stable algorithms for computing the singular value decomposition (SVD) now available. However, the best-known error bounds on bidiagonal reduction methods on any matrix are of the form \[ A are orthogonal, $\varepsilon_M$ is machine precision, and f(m,n) is a modestly growing function of the dimensions of A.A preprocessing technique analyzed by Higham [Linear Algebra Appl., 309 (2000), pp. 153--174] uses orthogonal factorization with column pivoting to obtain the factorization \[ A=Q \left( \begin{array}{c} C^T \\ 0 \end{array} \right) P^T, \] where Q is orthogonal, C is lower triangular, and P is permutation matrix. Bidiagonal reduction is applied to the resulting matrix C.To do that reduction, a new Givens-based bidiagonalization algorithm is proposed that produces a bidiagonal matrix B that satisfies $C bounded componentwise and $\delta C$ satisfies a columnwise bound (based upon the growth of the lower right corner of C) with U and V orthogonal to nearly working precision. Once we have that reduction, there is a good menu of algorithms that obtain the singular values of the bidiagonal matrix B to relative accuracy, thus obtaining an SVD of C that can be much more accurate than that obtained from standard bidiagonal reduction procedures. The additional operations required over the standard bidiagonal reduction algorithm of Golub and Kahan [J. Soc. Indust. Appl. Math. Ser. B Numer. Anal., 2 (1965), pp. 205--224] are those for using Givens rotations instead of Householder transformations to compute the matrix V, and 2n3/3 flops to compute column norms. | Introduction
. We consider the problem of reducing an m \Theta n matrix A to
bidiagonal form. That is, we find orthogonal matrices U n\Thetan such
that
To denote B in (1.1) we use the shorthand
or use the MATLAB-like form
We will also use MATLAB notation for submatrices. Thus A(i: j; k: ') denotes the
submatrix of A consisting of rows i through j and columns k through '. Likewise,
denotes all of columns k through ' and A(i: all of rows i through
j.
For a matrix X ∈ ℝ^{m×n}, we let σ_i(X) denote the i-th singular value of X, for i ∈ {1, . . . , n}. We also let X† be the Moore-Penrose pseudoinverse of X, and we let J(i, j, θ_ij) be a Givens rotation through an angle θ_ij applied to columns i and j.
Department of Computer Science and Engineering, The Pennsylvania State University, University
Park, PA 16802-6106, e-mail: barlow@cse.psu.edu, URL: http://trantor.cse.psu.edu/~barlow.
The research of Jesse L. Barlow was supported by the National Science Foundation under grants
no. CCR-9201612 and CCR-9424435. Part of this work was done while the author was visiting the
University of Manchester, Department of Mathematics, Manchester M13 9PL UK
The reduction (1.1) is usually done as a preliminary stage for computing the
singular value decomposition of A. There are now a number of very good algorithms
for computing the singular value decomposition of bidiagonal matrices. We know that
the "zero-shift" Q-R algorithm [13], bisection [4], and the dqds algorithm [15] can
compute all of the singular values of B to relative accuracy. We also know that it
is not reasonable to expect any algorithm to compute all of the singular values of a
matrix to relative accuracy unless that matrix has an acyclic graph [11] or is totally
sign compound [12].
Thus, it is not surprising that no algorithm can be expected to produce the
bidiagonal form of a general matrix to relative accuracy in fixed precision arithmetic.
The Jacobi algorithm is a more accurate method for finding the singular values of a
general matrix than any algorithm that requires bidiagonal reduction. Unfortunately,
the Jacobi algorithm is usually slower. For simplicity, assume that m = n. Bidiagonal reduction followed by the Q-R algorithm can produce that SVD in about 20n³ flops.
One Jacobi sweep requires about 7n 3 flops. Thus, for Jacobi to be competitive, it
must converge in about three sweeps, and that rarely happens.
In this paper, we present a bidiagonal reduction method that will often preserve
more of the accuracy in the singular value decomposition.
The reduction is computed in two stages. In the first stage, using a Householder
factorization method of Cox and Higham [10], we reduce A to a lower triangular
n\Thetan . In floating point arithmetic with machine unit " M , the first stage
reduction satisfies
an );
where
Here f(n) is a modestly sized function and ρ_A is a growth factor given in [10]. A
similar reduction is recommended by Demmel and Veseli'c [14] before using the Jacobi
method. Thus the difference in our algorithm is the reduction of C.
In the second stage, we apply a new bidiagonal reduction algorithm to C. This
algorithm produces a bidiagonal matrix B such that for some n \Theta n matrices ~
and some orthogonal matrices U and V , we have
where s is the smallest integer such that kC(:
M kCkF . The growth
ae V is bounded provided that the s \Theta s principal submatrix of the corresponding
Krylov matrix is nonsingular. If that submatrix is singular, the standard backward
error bounds from [17] apply. This is not as good a bound as the Jacobi method
achieves [14, 27, 28], but can be much better than that achieved by the standard algo-
rithm. Moreover, the algorithm can be implemented in less than 2 8
flops than the Lawson-Hanson-Chan SVD [26, pp.107-120],[9]. Using fast versions of
Givens rotations, that additional overhead may be reduced to about 8=27n 3
flops.
Our procedure for bidiagonal reduction has some important differences from the
Golub-Kahan Householder transformation based procedure [17]. They are
Givens transformations are used in the construction of the right orthogonal
matrix V . (Clearly, 2 \Theta 2 Householder transformations could also be used.)
ffl The matrix A is preprocessed by Householder factorization with maximal
column pivoting using a new procedure due to Cox and Higham [10].
ffl The first column of V is not e 1 , the first column of the identity matrix.
ffl The computation of the matrices U and V are interleaved in a different manner
to preserve accuracy in the small columns.
In the next section, we give our algorithm for producing the bidiagonal form. In
x4 we prove the bounds (1.2)-(1.3). In x5, we give some tests and a conclusion.
2. Reduction to triangular form. Before giving the bidiagonal reduction pro-
cedure, we preprocess A by reducing it to lower triangular form using a Householder
transformation based procedure due to Cox and Higham [10]. It is based upon the
row and column pivoting procedure of Powell and Reid [30], but uses a simpler form
of pivoting.
The procedure is as follows.
1. Reorder the rows of A so that
2. Using the maximal column pivoting algorithm of Businger and Golub [8],
factor A into
n\Thetan is
lower triangular.
This particular Householder factorization algorithm has very strong numerical
stability properties. If we let C be the computed lower triangular factor, then Cox
and Higham [10] (based on the analysis of a more complicated pivoting strategy by
Powell and Reid [30]) showed that for some orthogonal matrix U_0,
and
(2.
ae A is a growth factor bounded by p
2.
The column oriented error bound (2.2) holds for standard Householder factorization
[31]. The second rowwise error bound (2.3) can only be shown for algorithms that do
some kind of row and column permutations for stability.
Similar results for Givens based algorithms are given by Barlow [3] and Barlow and
Handy [5]. The Givens based algorithm in [3] could be substituted for the Cox-Higham
algorithm. Gulliksen [21] has given a new framework for orthogonal transformations
with this property. Cox and Higham demonstrate that Householder's original version
of Householder transformation must be used for these bounds to hold. The bound
does not hold if Parlett's [29] version of the Householder transformation is used.
The columns of the matrix C satisfy
In fact, any reduction of A satisfying the property (2.4) will be a suitable preprocessing
step and will lead to the results given here.
We now give algorithms for computing the bidiagonal reduction of C.
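A minimal sketch of this preprocessing step using a generic column-pivoted Householder QR follows (ours; the Cox-Higham variant additionally pre-sorts the rows of A by decreasing infinity norm and relies on a particular Householder implementation, which SciPy's routine does not guarantee).

```python
import numpy as np
from scipy.linalg import qr

def triangular_preprocess(A):
    """Column-pivoted Householder QR:  A @ P = Q @ [R; 0] with R upper triangular,
    so that  A = Q [C^T; 0] P^T  with C = R^T lower triangular."""
    Q, R, piv = qr(A, mode='economic', pivoting=True)
    C = R.T                                   # n x n lower triangular
    P = np.eye(A.shape[1])[:, piv]            # permutation matrix from pivot indices
    return Q, C, P

rng = np.random.default_rng(6)
A = rng.standard_normal((10, 6)) @ np.diag(10.0 ** -np.arange(6))
Q, C, P = triangular_preprocess(A)
assert np.allclose(Q @ C.T, A @ P)            # i.e.  A = Q C^T P^T
print(np.linalg.norm(C, axis=0))              # graded column norms of C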
3. Bidiagonal Reduction Algorithms.
3.1. The Golub-Kahan Bidiagonal Reduction. The usual bidiagonal reduction
algorithm is that given by Golub and Kahan [17, pp.208-210,Theorem 1], see
also Golub and Reinsch [19, pp.404-405]. It is given below for the square matrix C.
Since C is lower triangular, it is exactly rank deficient only if it has zero diagonals.
Minor and obvious modifications to the bidiagonal reduction algorithms in this section
are necessary if C has zero diagonals.
Algorithm 3.1 (Golub-Kahan Bidiagonal Reduction).
1. Find an orthogonal transformation U 1 such that
2. for
(a) Find an orthogonal transformation V k such that
(b) Find an orthogonal transformation U k such that
3.
% The bidiagonal reduction of C is given by
Golub and Kahan [17] used Householder transformations to describe their algo-
rithm. In that case, it requires 4n 3 +O(n 2 ) flops if the matrices U and V are kept in
factored form. If U and V are accumulated, it requires 8n 3 +O(n 2 ) flops.
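For concreteness, here is a dense textbook sketch of Householder bidiagonalization in the spirit of Algorithm 3.1 (ours; it accumulates U and V explicitly and makes no claim about the error bounds discussed later).

```python
import numpy as np

def householder(x):
    """Return v, beta with (I - beta v v^T) x = (+/-)||x|| e_1."""
    v = x.astype(float).copy()
    sigma = np.linalg.norm(x)
    if sigma == 0.0:
        return v, 0.0
    v[0] += np.copysign(sigma, x[0])
    beta = 2.0 / (v @ v)
    return v, beta

def golub_kahan_bidiag(C):
    """Householder bidiagonalization: U.T @ C @ V = B upper bidiagonal."""
    n = C.shape[0]
    B = C.astype(float).copy()
    U, V = np.eye(n), np.eye(n)
    for k in range(n):
        # eliminate below the diagonal in column k
        v, beta = householder(B[k:, k])
        B[k:, k:] -= beta * np.outer(v, v @ B[k:, k:])
        U[:, k:] -= beta * np.outer(U[:, k:] @ v, v)
        if k < n - 2:
            # eliminate to the right of the superdiagonal in row k
            w, beta = householder(B[k, k+1:])
            B[k:, k+1:] -= beta * np.outer(B[k:, k+1:] @ w, w)
            V[:, k+1:] -= beta * np.outer(V[:, k+1:] @ w, w)
    return U, B, V

rng = np.random.default_rng(7)
C = np.tril(rng.standard_normal((6, 6)))       # lower triangular input, as in the paper
U, B, V = golub_kahan_bidiag(C)
assert np.allclose(U.T @ C @ V, B)
assert np.allclose(np.triu(B) - np.triu(B, 2), B)   # B is upper bidiagonal
```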
3.2. A Givens Based Bidiagonal Reduction Algorithm. Below is our new
algorithm for the bidiagonal reduction of C.
Algorithm 3.2 (New Procedure for Bidiagonal Reduction).
We now present a Givens-based bidiagonal reduction procedure for an n \Theta n matrix
C satisfying (2.4). In x4, we show that this new algorithm will achieve error bounds
of the form (1.5)-(1.8).
1. Determine the smallest integer s such that
22 is an empty matrix and we use Algorithm 3.1, complete
steps 2-4.
2. Compute the vector z given by
22
be the product of Givens rotations
such that
Compute
Let U 1 be an orthogonal transformation such that
3. for
(a) Let V k be the product of Givens rotations
that satisfies
(b) Find an orthogonal transformation U k such that
C(k: n; k: n) / U T
4.
% The bidiagonal reduction of C is given by
If we use standard Givens rotations, steps 2-4 of this algorithm require 10n 3
flops. The use of fast Givens rotations as described in Hammarling [22], Gentleman
[16], Bareiss [2], Barlow [3], or Anda and Park [1] would produce an algorithm
with approximately the same complexity as the Golub-Kahan Householder-based procedure.
Step one requires the minimum length solution of the least squares problem
11 is already upper triangular, a procedure in Lawson and Hanson [26, pp.77-
83] would allow us to compute the orthogonal factorization of the above matrix in
flops. The maximum complexity of this step is for
this step never costs more than 8
Table 3.1 summarizes the complexity of the two bidiagonal reduction algorithms.
Table 3.1: Complexity of Bidiagonal Reduction Algorithms
Compute 3.1 Algorithm 3.2(SG) Algorithm 3.2(FG)
Givens rotations
fast Givens rotations
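Since Algorithm 3.2 builds the matrix V from Givens rotations J(i, j, θ_ij), the following generic sketch (ours, not the paper's specific rotation ordering) shows how one such rotation is computed and applied to a pair of columns to zero a target entry.

```python
import numpy as np

def givens(a, b):
    """Return c, s such that [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def apply_givens_cols(A, i, j, c, s):
    """Apply the rotation to columns i and j of A (post-multiplication by J(i, j, theta))."""
    Ai, Aj = A[:, i].copy(), A[:, j].copy()
    A[:, i] = c * Ai + s * Aj
    A[:, j] = -s * Ai + c * Aj

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 4))
c, s = givens(A[0, 2], A[0, 3])
apply_givens_cols(A, 2, 3, c, s)
print(np.round(A, 3))      # entry (0, 3) is now numerically zero
```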
4. Growth Factors and Error Bounds.
4.1. Bounding the columns of C. To show the advantages of Algorithm 3.2
for bidiagonal reduction, we consider the effects of orthogonal transformations from
the right used to form V in Algorithm 3.1.
Let U be the matrix from Algorithm 3.1, then consider
By orthogonal equivalence,
If we let
~
then
has the form
where
and F^(k)_12 is zero except for the last row. Therefore, ~V_k in effect zeros out the lower k × (n − k) block of F.
The following lemma bounds the effect of a large class of orthogonal transformations
from the right.
Lemma 4.1. Let F n\Thetan be partitioned according to
where V 11 is nonsingular. Let
Then
~
22
where
~
Proof: Matching blocks in (4.1) and (4.2) yields
Using the fact that V 11 is nonsingular, block Gaussian elimination yields (4.3).
The result of Lemma 4.1 generalizes to the case given next.
Lemma 4.2. Let F 2 ! m\Thetan be partitioned according to
22 0
and V has the form in (4.1). Let G = FV be partitioned
Then
where
~
Proof: We note that
22 V (1)
If we partition V according to (4.1) then
First we note that V 11 is nonsingular if and only if V (i)
are nonsingular.
Evaluating ~
22 leads to
~
\GammaV (2)
Thus k ~
. Now we have that
22 F 23
where the (1,3) and (2,3) blocks of C are unaffected. We then apply Lemma 4.1 to
F (1)
F (1)
!/
to obtain (4.5).
The following corollary relates the growth of the (2; 3) block to growth in Gaussian
elimination.
Corollary 4.3. Let F , G and V be as in Lemma 4.2. Then
V is the growth factor for k steps of Gaussian elimination on V in the Euclidean
norm.
Proof: If we let
where L is unit lower triangular and R is upper triangular then
Using the fact that V we have that ~
22 solves
'' ~
~
22
22
and that ~
V 22 are just the result of performing k steps of Gaussian elimination
on V . Thus
~
Taking norms yields
22
V is the growth factor from k steps of Gaussian elimination on V .
Unfortunately, the bound on growth for Gaussian elimination for a given row
ordering is no better for orthogonal matrices than it is for all matrices. The following
result is proven by Barlow and Zha [7].
Proposition 4.4. Let n\Thetan be nonsingular and have the P-L-R factoriza-
tion
by Gaussian elimination with the row ordering P, where L is lower triangular and R is upper triangular. For each k, we have the growth factor ρ_X^(k). Let
X have the factorization
where V is orthogonal and Y is upper triangular. Then Gaussian elimination with
partial pivoting on V obtains the P-L-R factorization
where
and for each k, ae (k)
X .
Suppose that C ∈ ℝ^{n×n} is a lower triangular matrix satisfying condition (4.6). Note that the row pivoting procedure used in the previous section assures us that a small block C_22 may be isolated.
n\Thetan be given by
Then X has blocks satisfying
where
22 C 21 =j;
(4.
22 C 22
We can prove some results about the Krylov matrix associated with X. We now give the following lemma about Krylov matrices for X.
Lemma 4.5. Let n\Thetan be given by (4.7)-(4.10) and the matrix X 11 in (4.7)
be nonsingular. Let y be such that
s y
Assume that y is chosen so that kzk
Proof: The proof is just a verification of the formulae. First consider
have that
The first term of both rows satisfy the lemma, and second term may be bounded by
kp (1)
kp (1)
The induction step is simply,
where
For both of the recurrences (4.11) and (4.12), we can bound the norms by
where
Simple substitution shows that i k - k(1
We can now prove the following lemma.
Lemma 4.6. Let z n\Thetan and n\Thetan be as in Lemma 4.5 and let
n\Thetan be the Krylov matrix
Let K be partitioned in the form
s K 11 K 12
where s is as defined in (4.7). If K 11 is nonsingular, then
Proof: First we note that if two lower triangular matrices are given by
s I 0
s I 0
then
s I 0
above be given by
Then
s K 11 K 12
~
K 22
Here
and
s:
Thus,
leading to the bound
If we let
s K 11 K 12
K 22
If we reconstruct this factorization, then
Taking norms yields
which is the desired result.
It is well known that the matrix V from bidiagonal reduction is the orthogonal factor of the Krylov matrix K [18, pp.472-473]. That allows us to find the following bound on the growth factor for bidiagonal reduction. A caveat to all of our results is that the
s \Theta s matrix K 11 be nonsingular.
Proposition 4.7. Let V be as in Lemma 4.2. Let C and s be as in (3.1), let X
be as in (4.7)-(4.10), and let K be as in Lemma 4.6. Assume that K 11 is nonsingular.
Then
where
Proof: From Proposition 4.4, since the matrix K has the orthogonal factorization
where V is the matrix of Lanczos vectors for the bidiagonal reduction of C with the first
we have that
where
and
By a classical equivalence,
Since jX
22
Corollary 4.8. Let C satisfy (4.6) for some value of s. Let z be defined by
s y
for some y 6= 0. Let Algorithm 3.2 be applied to C and let U
be the orthogonal transformations generated by that algorithm. Define
~
where
Let C (k) be defined by
and let ae V be defined by (4.13)-(4.15). Then
Proof: Since ~V_k is a product of Givens rotations in the standard order, we can directly apply the results in Lemma 4.2.
First, let
Then
Since ~
taking advantage of rotations that
commute, we can write
~
where
Y
Y
Y
Y
Thus, -
have the structure in (4.1). Using the terminology of Lemma 4.2, we
have
(2)and that
22
Combining the results of Lemma 4.2 and Proposition 4.7 obtains
Note that the ability to factor ~
2 is a feature of a Givens rotation
based algorithm for bidiagonal reduction.
4.2. Error Bounds and Implications. The error bounds for this paper are
stated in two theorems. The first one is proven in x7, the second is a consequence of
the first.
Theorem 4.9. Let C 2 ! n\Thetan and let n\Thetan be
the bidiagonal matrix computed by Algorithm 3.2 in floating point arithmetic with
machine be the contents of C after k passes thorough
the main loop of Algorithm 3.2. Then there exist U; modestly growing
functions
where
and for
Theorem 4.10. Let C 2 ! n\Thetan satisfy (2.4), and let
n\Thetan be the bidiagonal matrix computed by Algorithm
3.2 in floating point arithmetic with machine unit " M . Let C r be the
contents of C after k passes thorough the main loop of Algorithm 3.2. Let s be the
smallest integer such that
and let ae V be defined by (4.13). Then there exist U; satisfying (4.16) and a
modestly growing function g 5 (\Delta) such that B and C satisfy (4.17), ffiB is as in Theorem
4.9 and
Proof: The proof of this result is simply a matter of bounding
for each value of k.
For k - s, we have that
For k ? s, orthogonal equivalence yields
2:
An application of (4.21) yields (4.22).
The standard error bounds on bidiagonal reduction are of the form (4.17) but
where ffiC only satisfies a bound of the form
For the Golub-Kahan procedure, this bound is probably as good as we can expect.
These lead to error bounds on the singular values of the form
This is satisfactory for large singular values, but of little use for small ones. The following
4 \Theta 4 example illustrates the difference between standard bidiagonal reduction
and the approach advocated here.
Example 4.1. Let A be the 4 \Theta 4 matrix
are small parameters. Using the MATLAB value
we chose That yields a matrix that has two singular values
clustered at 1, and two distinct singular values smaller than ffl.
To the digits displayed,
If we perform Algorithm 3.1 on A without the preprocessing in x2, we obtain
The use of the bisection routine in [4] obtain the singular values
The computed singular vector matrices are
The invariant subspaces for the double singular value at 1 and for singular values 3
and 4 are correct. However, the individual singular vectors for singular values 3 and
4 are wrong.
Algorithm 3.2 used after the reduction in x2 obtains
The computed singular values to the number of digits displayed are
Moreover, these corresponded to those computed by the Jacobi method to about 15
significant digits. The computed singular vector matrices were
\Gamma0:408248 \Gamma0:408248 \Gamma0:553113 0:600611
\Gamma0:408248 \Gamma0:408248 \Gamma0:243588 \Gamma0:779315
\Gamma0:408248 \Gamma0:408248 0:7967 0:178704C C A
Our version of the Jacobi method (coded by P.A. Yoon) obtains the slightly different
singular vector matrices
However, singular vectors for oe 3 and oe 4 are essentially the same, and subspace for
the clustered singular value near 1 is also essentially the same.
Quite recently, there have been a number of papers on the singular values and
vectors of matrices under structured perturbations. We give two results below that
are relevant to singular values for the perturbation given in Theorem 4.9.
The first is due to Kahan [13].
Lemma 4.11. Let
~
n\Thetan , and let - 1. If- ~
~
Lemma 4.11 says the forward errors in the matrix B make only a small relative
difference in the singular values.
The following result shows that the non-orthogonality of U and V in Theorem
4.9 causes only a small relative change in the singular values. This theorem is given
in [24, pp.423-424,problem 18].
Lemma 4.12. Let A 2 ! m\Thetan and let n\Thetap where p - n. Then for
Standard bounds on eigenvalue perturbation [18, Chapter 8] and taking square
roots leads to
The errors in B and non-orthogonality of the transformations U and V , make
only small relative changes in the singular values. The important effect is that of the
error ffiC in Theorems 4.9 and 4.10.
To characterize the effect of the error ffiC in (4.17), we use a generalization of
results that have been published by Barlow and Demmel [4] , Demmel and Veseli'c
[14], and Gu [20]. This version was proven by Barlow and Slapni-car [6] in a work in
preparation.
Consider the family of symmetric matrices
and the associated family of eigenproblems
(4.
Let (- i (i); x i (i)) be the ith eigenpair of (4.25) and define S(ffi) be the set of indices
given by
The set S(ffi) is the set of eigenvalues for which relative error bounds can be found.
The next theorem gives such a bound. Its proof follows that of Theorem 4 in [4, p.773].
Lemma 4.13. Let (- i (i); x i (i)) be the ith eigenpair of the Hermitian matrix in
(4.24). Let S(ffi) be defined by (4.26). If i 2 S(ffi) then
where
jx
jx
Proof: First assume that - i (i) is simple at the point i. Then from standard
eigenvalue perturbation theory for sufficiently small - we have
x
x
x
x
Thus we have
jx
If - i (i) is simple for all i 2 [0; ffi], then the bound (4.29) follows by integrating from
0 to ffi. In Kato[25, Theorem II.6.1,p.139], it is shown that the eigenvalues of H(i) in
are real analytic, even when they are multiple. Moreover, Kato [25, p.143] goes
on to point out that there are only a finite number of i where - i (i) is multiple, so
that - i (i) is continuous and piecewise analytic throughout the interval [0; ffi]. Thus
we can obtain (4.27)-(4.28) by integrating over each of the intervals in which - i (i) is
analytic.
The proof of the following proposition was given by Slapni-car [6].
Proposition 4.14. Let
have the singular
value decomposition
n\Thetan are orthogonal and
Then for each we have that
where
Proof: We prove this lemma is by noting that oe i
of H(i) where
A T (i) 0
Its associated eigenvector is
The matrix EH in (4.24) is given by
Thus from Lemma 4.13, oe i (i) satisfies (4.30) where
A u T
which is the formula for - A
Proposition 4.14 can be used to bound the error in the singular values caused by
both stages of the bidiagonal reduction. For the stage in x2, we have bounds of the
form (1.3)-(1.4). Thus for
From (2.3), the value of ae A is a growth factor bounded by
y is the Moore-Penrose pseudoinverse of A.
Now consider the error bound in Theorem 4.9. In context, we now consider u
v i to be the ith left and right singular vectors of C rather than A as in the preceding
paragraph. If we let
where d i is defined in (1.8). Then for
i is bounded by
A matrix satisfying (2.4) will often have small singular values that have modest values
of - C
. As pointed out by Demmel and Veseli'c [14], only an algorithm with columnwise
backward error bounds can take advantage of that fact.
The error bounds on computed singular vectors of C are also better for a bidi-
agonal reduction satisfying Theorem 4.9. For such bounds see the papers by Veseli'c
and Demmel [14] or Barlow and Slapni-car [6].
5. Numerical Tests. We performed two sets of numerical tests on the bidiag-
onal reduction algorithms. Our tests compared the singular values to those obtained
by the Jacobi method described by Demmel and Veseli'c [14].
The two test sets are as follows
Example 5.1 (Test Set 1). We use the set
where R_k was the Cholesky factor of the Hilbert matrix of dimension k. That is, R_k was the upper triangular matrix with positive diagonals that satisfied R_k^T R_k = H_k, where H_k is the k × k matrix whose (i, j) entry is 1/(i + j − 1).
Example 5.2 (Test Set 2). The set
These are just the transposes of the matrices in Example 5.1. The reduction in x2
produces an upper triangular matrix from that. Thus the resulting bidiagonal matrices
tended to be more graded.
Example 5.3 (Test Set 3). We computed the SVD of L 35 from Example 5.2
using the MATLAB SVD routine to obtain
We then constructed 50 matrices of the form
where F was a 50 × 35 matrix generated by the MATLAB function randn (for generating
matrices with normally distributed entries).
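A sketch of how the factors used in Test Sets 1 and 2 can be generated for small dimensions follows (ours; for the larger dimensions used in the paper the Hilbert matrix is numerically indefinite in double precision, so its Cholesky factor would have to come from a closed-form or extended-precision computation).

```python
import numpy as np
from scipy.linalg import hilbert, cholesky

def test_matrix_R(k):
    """Upper triangular Cholesky factor R_k of the k x k Hilbert matrix,
    R_k^T R_k = H_k (Test Set 1); its transpose L_k = R_k^T gives Test Set 2."""
    Hk = hilbert(k)
    return cholesky(Hk, lower=False)

Rk = test_matrix_R(10)
assert np.allclose(Rk.T @ Rk, hilbert(10))
print(np.linalg.cond(Rk))   # very ill conditioned: singular values span a wide range
```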
Each matrix in both test sets were reduced to a matrix C using the algorithm in
x2. Three separate routines were used to find the SVD of C.
Fig. 5.1. Relative Error Plots from Example 5.1
ffl Algorithm J: The Jacobi method described in [14].
ffl Algorithm G: The bidiagonalization method of Algorithm 3.2 followed by the bisection routine of Demmel and Kahan [13].
ffl Algorithm H: The bidiagonalization method of Algorithm 3.1 followed by the same bisection routine.
For each matrix, we calculated the two ratios
    r_G = max_{1 ≤ i ≤ n} |σ_i^G − σ_i^J| / σ_i^J    and    r_H = max_{1 ≤ i ≤ n} |σ_i^H − σ_i^J| / σ_i^J,
where σ_i^J, σ_i^G, and σ_i^H are the ith singular values as calculated by Algorithms J, G, and
H respectively. Thus we were trying to measure how well the SVDs calculated from
the two bidiagonal reduction algorithms agreed with that from the Jacobi algorithm.
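The ratio computation itself is straightforward; a small sketch (ours, with hypothetical variable names) follows.

```python
import numpy as np

def max_relative_deviation(sigma_test, sigma_ref):
    """max_i |sigma_test_i - sigma_ref_i| / sigma_ref_i, with both sets sorted."""
    s1 = np.sort(sigma_test)[::-1]
    s2 = np.sort(sigma_ref)[::-1]
    return np.max(np.abs(s1 - s2) / s2)

# hypothetical usage: sigma_J from the Jacobi SVD, sigma_G from Algorithm G
# r_G = max_relative_deviation(sigma_G, sigma_J)
```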
The graphs of these values for the three test sets are given in Figures 5.1, 5.2, and 5.3.
A logarithmic plot of the singular values of the matrix R_90 from Example 5.1 is given in Figure 5.4. Clearly the singular values of L_90 in Example 5.2 span the same range.
All three Examples had singular values that spanned a wide range.
As can be seen from the figures, the maximum relative error for any singular value
from either test set for Algorithm G or H is about 10 \Gamma13 or approximately 10 3 times
Fig. 5.2. Relative Error Plots from Example 5.2
machine precision in MATLAB. There seems to be no measurable difference between
Algorithms G and H in the quality of the singular values produced.
The error analysis produced in the x4 indicates that we should expect better
accuracy from Algorithm G, but does not explain why the singular values produced
from both algorithms are so accurate. We suspect that matrix C produced from
the reduction in x2 will almost always have good bidiagonal reduction from either
algorithm, but we know of no analysis to explain why this happens for Algorithm H.
6. Conclusion. We have presented a new bidiagonal reduction algorithm that has four differences from the standard algorithm. Although it costs modestly more flops than the Golub-Kahan algorithm, our analysis shows that the new reduction gives a better guarantee of accurate singular values.
Our numerical tests indicate that when we perform the Cox-Higham Householder factorization routine as a pre-processor, both the new algorithm and the Golub-Kahan routine produce singular values that are even more accurate than any known theory predicts. Thus we strongly recommend this preprocessing step
whenever it is feasible. The matrix C resulting from this reduction is usually highly
graded and we suspect that there is still much to understand about the behavior of
bidiagonal reduction on graded matrices.
The extra cost of the new bidiagonal reduction method is far less than that of
any implementation of the Jacobi method to date. It gives a reasonable guarantee of relative accuracy for singular values larger than ε_M^{3/2}, and tests confirm that it behaves well
Fig. 5.3. Relative Error Plots from Example 5.3
for singular values smaller than ε_M^{3/2}.
--R
Fast plane rotations with dynamic scaling.
Numerical solution of the weighted linear least squares problem by G- transformations
Stability analysis of the G-algorithm and a note on its application to sparse least squares problems
Computing accurate eigensystems of scaled diagonally dominant matrices.
The direct solution of weighted and equality constrained least squares problems.
Optimal perturbation bounds for the Hermitian eigenvalue prob- lem
Growth factors in Gaussian elimination
Linear least squares solutions by Householder transformations.
An improved algorithm for computing the singular value decomposition.
Stability of Householder QR factorization for weighted least squares problems.
On computing accurate singular values and eigenvalues of matrices with acyclic graphs.
Computing the Singular Value Decomposition with High Relative Accuracy
Accurate singular values of bidiagonal matrices.
Jacobi's method is more accurate than QR.
Accurate singular values and differential qd algorithms.
Least squares computations by Givens rotations without square roots.
Calculating the singular values and pseudoinverse of a matrix.
Matrix Computations
Singular value decomposition and least squares solutions.
Studies in Numerical Linear Algebra.
Backward error analysis for the constrained and weighted linear least squares problem when using the weighted QR factorization.
A note on the modifications to the Givens plane rotation.
Accuracy and Stability of Numerical Algorithms.
Matrix Analysis.
A Short Introduction to Perturbation Theory for Linear Operators.
Solving Least Squares Problems.
Fast accurate eigensystem computation by Jacobi methods.
Fast accurate eigenvalue methods for graded positive definite matrices Numer.
Analysis of algorithms for reflectors in bisectors.
On applying Householder's method to linear least squares prob- lems
The Algebraic Eigenvalue Problem.
--TR | singular values;bidiagonal form;orthogonal reduction;accuracy |
587825 | A Fully Asynchronous Multifrontal Solver Using Distributed Dynamic Scheduling. | In this paper, we analyze the main features and discuss the tuning of the algorithms for the direct solution of sparse linear systems on distributed memory computers developed in the context of a long term European research project. The algorithms use a multifrontal approach and are especially designed to cover a large class of problems. The problems can be symmetric positive definite, general symmetric, or unsymmetric matrices, both possibly rank deficient, and they can be provided by the user in several formats. The algorithms achieve high performance by exploiting parallelism coming from the sparsity in the problem and that available for dense matrices. The algorithms use a dynamic distributed task scheduling technique to accommodate numerical pivoting and to allow the migration of computational tasks to lightly loaded processors. Large computational tasks are divided into subtasks to enhance parallelism. Asynchronous communication is used throughout the solution process to efficiently overlap communication with computation.We illustrate our design choices by experimental results obtained on an SGI Origin 2000 and an IBM SP2 for test matrices provided by industrial partners in the PARASOL project. | Introduction
We consider the direct solution of large sparse linear systems on distributed memory computers.
The systems are of the form Ax = b, where A is an n × n symmetric positive definite, general
symmetric, or unsymmetric sparse matrix that is possibly rank deficient, b is the right-hand
side vector, and x is the solution vector to be computed.
The work presented in this article has been performed as Work Package 2.1 within the
PARASOL Project. PARASOL is an ESPRIT IV Long Term Research Project (No 20160) for
"An Integrated Environment for Parallel Sparse Matrix Solvers". The main goal of this Project,
which started in January 1996 and finishes in June 1999, is to build and test a portable library
for solving large sparse systems of equations on distributed memory systems. The final library
will be in the public domain and will contain routines for both the direct and iterative solution
of symmetric and unsymmetric systems.
In the context of PARASOL, we have produced a MUltifrontal Massively Parallel Solver [27, 28]
referred to as MUMPS in the remainder of this paper. Several aspects of the algorithms used in
MUMPS combine to give an approach which is unique among sparse direct solvers. These include:
- classical partial numerical pivoting during numerical factorization, requiring the use of dynamic data structures,
- the ability to automatically adapt to computer load variations during the numerical phase,
- high performance, by exploiting the independence of computations due to sparsity and that available for dense matrices, and
- the capability of solving a wide range of problems, including symmetric, unsymmetric, and rank-deficient systems using either LU or LDL^T factorization.
To address all these factors, we have designed a fully asynchronous algorithm based on a
multifrontal approach with distributed dynamic scheduling of tasks. The current version of
our package provides a large range of options, including the possibility of inputting the matrix
in assembled format either on a single processor or distributed over the processors. Additionally,
the matrix can be input in elemental format (currently only on one processor). MUMPS can also
determine the rank and a null-space basis for rank-deficient matrices, and can return a Schur
complement matrix. It contains classical pre- and postprocessing facilities; for example, matrix
scaling, iterative refinement, and error analysis.
Among the other work on distributed memory sparse direct solvers of which we are aware
[7, 10, 12, 22, 23, 24], we do not know of any with the same capabilities as the MUMPS solver.
Because of the difficulty of handling dynamic data structures efficiently, most distributed
memory approaches do not perform numerical pivoting during the factorization phase. Instead,
they are based on a static mapping of the tasks and data and do not allow task migration
during numerical factorization. Numerical pivoting can clearly be avoided for symmetric positive
definite matrices. For unsymmetric matrices, Duff and Koster [18, 19] have designed algorithms
to permute large entries onto the diagonal and have shown that this can significantly reduce
numerical pivoting. Demmel and Li [12] have shown that, if one preprocesses the matrix using
the code of Duff and Koster, static pivoting (with possibly modified diagonal values) followed
by iterative refinement can normally provide reasonably accurate solutions. They have observed
that this preprocessing, in combination with an appropriate scaling of the input matrix, is a key
issue for the numerical stability of their approach.
The rest of this paper is organized as follows. We first introduce some of the main terms
used in a multifrontal approach in Section 2. Throughout this paper, we study the performance
obtained on the set of test problems that we describe in Section 3. We discuss, in Section 4, the
main parallel features of our approach. In Section 5, we give initial performance figures and we
show the influence of the ordering of the variables on the performance of MUMPS. In Section 6,
we describe our work on accepting the input of matrices in elemental form. Section 7 then
briefly describes the main properties of the algorithms used for distributed assembled matrices.
In Section 8, we comment on memory scalability issues. In Section 9, we describe and analyse
the distributed dynamic scheduling strategies that will be further analysed in Section 10 where
we show how we can modify the assembly tree to introduce more parallelism. We present a
summary of our results in Section 11.
Most results presented in this paper have been obtained on the 35 processor IBM SP2
located at GMD (Bonn, Germany). Each node of this computer is a 66 MHertz processor
with 128 MBytes of physical memory and 512 MBytes of virtual memory. The SGI Cray
Origin 2000 from Parallab (University of Bergen, Norway) has also been used to run some
of our largest test problems. The Parallab computer consists of 64 nodes sharing 24 GBytes
of physically distributed memory. Each node has two R10000 MIPS RISC 64-bit processors
sharing 384 MBytes of local memory. Each processor runs at a frequency of 195 MHertz and
has a peak performance of a little under 400 Mflop/s.
All experiments reported in this paper use Version 4.0 of MUMPS. The software is written in
Fortran 90. It requires MPI for message passing and makes use of BLAS [14, 15], LAPACK
[6], BLACS [13], and ScaLAPACK [9] subroutines. On the IBM SP2, we are currently using a
non-optimized portable local installation of ScaLAPACK, because the IBM optimized library
PESSL V2 is not available.
2 Multifrontal methods
It is not our intention to describe the details of a multifrontal method. We rather just define
terms used later in the paper and refer the reader to our earlier publications for a more detailed
description, for example [3, 17, 20].
In the multifrontal method, all elimination operations take place within dense submatrices,
called frontal matrices. A frontal matrix can be partitioned as shown in Figure 1. In this
matrix, pivots can be chosen from within the block F11 only. The Schur complement matrix
F22 − F21 F11^{-1} F12 is computed and used to update later rows and columns of the overall matrix.
We call this update matrix the contribution block.
Figure 1: Partitioning of a frontal matrix into fully summed and partly summed rows and columns.
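As a concrete toy illustration of the elimination performed at a node (a sketch under simplifying assumptions, not MUMPS code: the frontal matrix and the partition size are made up, and no numerical pivoting is done, whereas MUMPS performs partial pivoting and may delay pivots to the father), the following Python/NumPy fragment forms the contribution block of a small dense frontal matrix.

import numpy as np

def eliminate_front(F, npiv):
    # Partially factorize a dense frontal matrix F: eliminate the npiv fully
    # summed variables (block F11) and return the contribution block
    # F22 - F21 F11^{-1} F12 that is sent to the father node.
    F11, F12 = F[:npiv, :npiv], F[:npiv, npiv:]
    F21, F22 = F[npiv:, :npiv], F[npiv:, npiv:]
    return F22 - F21 @ np.linalg.solve(F11, F12)

rng = np.random.default_rng(0)
F = rng.standard_normal((6, 6)) + 6 * np.eye(6)   # toy, well-conditioned front
cb = eliminate_front(F, npiv=2)
print(cb.shape)                                    # (4, 4) contribution block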
The overall factorization of the sparse matrix using a multifrontal scheme can be described
by an assembly tree, where each node corresponds to the computation of a Schur complement
as just described, and each edge represents the transfer of the contribution block from the son
node to the father node in the tree. This father node assembles (or sums) the contribution
blocks from all its son nodes with entries from the original matrix. If the original matrix is
given in assembled format, complete rows and columns of the input matrix are assembled at
once, and, to facilitate this, the input matrix is ordered according to the pivot order and stored
as a collection of arrowheads. That is, if the permuted matrix has entries in, for example,
columns j and k of row i and in rows l and m of column i (with j, k, l, m coming after i in the
pivot order), then the arrowhead list associated with variable i is {a_ii, a_ij, a_ik, a_li, a_mi}. In the symmetric
case, only entries from the lower triangular part of the matrix are stored. We say that we
are storing the matrix in arrowhead form or by arrowheads. For unassembled matrices,
complete element matrices are assembled into the frontal matrices and the input matrix need
not be preprocessed.
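A small sketch of arrowhead storage (the input format and helper below are hypothetical, not the MUMPS data structure): for a matrix already permuted to the pivot order, each original entry is attached to the arrowhead of the earlier of its two variables, and for a symmetric matrix only the lower triangular part is kept.

from collections import defaultdict

def to_arrowheads(entries, symmetric=False):
    # entries: dict {(i, j): value} of a matrix already permuted to the pivot
    # order.  Arrowhead i collects a_ii together with the entries of row i and
    # column i whose other index comes later in the pivot order, i.e. all
    # original entries first assembled where variable i is eliminated.
    arrow = defaultdict(list)
    for (i, j), v in entries.items():
        if symmetric and i < j:        # keep the lower triangular part only
            continue
        arrow[min(i, j)].append((i, j, v))
    return dict(arrow)

A = {(0, 0): 4.0, (0, 2): 1.0, (2, 0): 1.0, (1, 1): 3.0, (1, 2): 2.0, (2, 2): 5.0}
print(to_arrowheads(A))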
In our implementation, the assembly tree is constructed from the symmetrized pattern of
the matrix and a given sparsity ordering. By symmetrized pattern, we mean the pattern of
the matrix A + A^T, where the summation is symbolic. Note that this allows the matrix to be
unsymmetric.
Because of numerical pivoting, it is possible that some variables cannot be eliminated from
a frontal matrix. The fully summed rows and columns that correspond to such variables are
added to the contribution block that is sent to the father node. The assembly of fully summed
rows and columns into the frontal matrix of the father node means that the corresponding
elimination operations are delayed. This will be repeated until elimination steps on the later
frontal matrices have introduced stable pivots to the delayed fully summed part. The delay
of elimination steps corresponds to an a posteriori modification of the original assembly tree
structure and in general introduces additional (numerical) fill-in in the factors.
An important aspect of the assembly tree is that operations at a pair of nodes where neither
is an ancestor of the other are independent. This gives the possibility for obtaining parallelism
from the tree (so-called tree parallelism). For example, work can commence in parallel on all
the leaf nodes of the tree. Fortunately, near the root node of the tree, where the tree parallelism
is very poor, the frontal matrices are usually much larger and so techniques for exploiting
parallelism in dense factorizations can be used (for example, blocking and use of higher Level
BLAS). We call this node parallelism. We discuss further aspects of the parallelism of the
multifrontal method in later sections of this paper. Our work is based on our experience
of designing and implementing a multifrontal scheme on shared and virtual shared memory
computers (for example, [2, 3, 4]) and on an initial prototype distributed memory multifrontal
version [21]. We describe the design of our resulting distributed memory multifrontal algorithm
in the rest of this paper.
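To make the tree-driven factorization concrete, here is a toy, purely sequential multifrontal sketch in Python (the three-node tree, the index lists, and the determinant check are made up for illustration; MUMPS distributes these tasks over processes and schedules them dynamically). Each node assembles the arrowheads of its fully summed variables and the contribution blocks of its sons, eliminates its pivots, and passes its own contribution block to its father.

import numpy as np

# Hypothetical toy assembly tree: each node lists the global variables of its
# frontal matrix (fully summed variables first, in pivot order) and its sons.
tree = {
    1: dict(idx=[0, 1, 4, 5], nfs=2, sons=[]),
    2: dict(idx=[2, 3, 4, 5], nfs=2, sons=[]),
    3: dict(idx=[4, 5],       nfs=2, sons=[1, 2]),      # root node
}

def multifrontal_det(A, tree, root):
    # Toy sequential multifrontal elimination (no pivoting).  Returns det(A)
    # as the product of the determinants of the assembled pivot blocks F11,
    # which provides a simple consistency check of the assembly process.
    contrib, det = {}, 1.0

    def process(node):
        nonlocal det
        nd = tree[node]
        for s in nd['sons']:                             # postorder traversal
            process(s)
        idx, k = nd['idx'], nd['nfs']
        F = np.zeros((len(idx), len(idx)))
        for p in range(k):                               # assemble the arrowheads of
            v = idx[p]                                   # the fully summed variables
            F[p, p] += A[v, v]
            for q in range(p + 1, len(idx)):
                w = idx[q]
                F[p, q] += A[v, w]
                F[q, p] += A[w, v]
        for s in nd['sons']:                             # extend-add the sons' blocks
            sidx, cb = contrib.pop(s)
            pos = [idx.index(v) for v in sidx]
            F[np.ix_(pos, pos)] += cb
        F11, F12, F21, F22 = F[:k, :k], F[:k, k:], F[k:, :k], F[k:, k:]
        det *= np.linalg.det(F11)
        cb = F22 - F21 @ np.linalg.solve(F11, F12) if k < len(idx) else np.zeros((0, 0))
        contrib[node] = (idx[k:], cb)                    # sent to the father node

    process(root)
    return det

A = np.diag([4.0, 4.0, 4.0, 4.0, 8.0, 8.0])
for i, j in [(0, 4), (1, 5), (2, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
print(multifrontal_det(A, tree, 3), np.linalg.det(A))    # the two values agree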
3 Test problems
Throughout this paper, we will use a set of test problems to illustrate the performance of our
algorithms. We describe the set in this section.
In Tables 1 and 2, we list our unassembled and assembled test problems, respectively.
All except one come from the industrial partners of the PARASOL Project. The remaining
matrix, bbmat, is from the forthcoming Rutherford-Boeing Sparse Matrix Collection [16]. For
symmetric matrices, we show the number of entries in the lower triangular part of the matrix.
Typical PARASOL test cases are from the following major application areas: computational
fluid dynamics (CFD), structural mechanics, modelling compound devices, modelling ships
and mobile offshore platforms, industrial processing of complex non-Newtonian liquids, and
modelling car bodies and engine components. Some test problems are provided in both
assembled format and elemental format. The suffix (rsa or rse) is used to differentiate them.
For those in elemental format, the original matrix is represented as a sum of element matrices, A = Σ_i A_i,
where each A_i has nonzero entries only in those rows and columns that correspond to variables
in the ith element. Because element matrices may overlap, the number of entries of a matrix
in elemental format is usually larger than for the same matrix when assembled (compare the
matrices from Det Norske Veritas of Norway in Tables 1 and 2). Typically there are about twice
the number of entries in the unassembled elemental format.
Real Symmetric Elemental (rse)
Matrix name Order No. of elements No. of entries Origin
t1.rse 97578 5328 6882780 Det Norske Veritas
ship 001.rse 34920 3431 3686133 Det Norske Veritas
ship 003.rse 121728 45464 9729631 Det Norske Veritas
shipsec1.rse 140874 41037 8618328 Det Norske Veritas
shipsec5.rse 179860 52272 11118602 Det Norske Veritas
shipsec8.rse 114919 35280 7431867 Det Norske Veritas
thread.rse 29736 2176 3718704 Det Norske Veritas
x104.rse 108384 6019 7065546 Det Norske Veritas
Table
1: Unassembled symmetric test matrices from PARASOL partner (in elemental format).
In Tables 3, 4, and 5, we present statistics on the factorizations of the various test problems
using MUMPS. The tables show the number of entries in the factors and the number of floating-point
operations (flops) for elimination. For unsymmetric problems, we show both the estimated
number, assuming no pivoting, and the actual number when numerical pivoting is used.
The statistics clearly depend on the ordering used. Two classes of ordering will be considered
in this paper. The first is an Approximate Minimum Degree ordering (referred to as AMD,
see [1]). The second class is based on a hybrid Nested Dissection and minimum degree
technique (referred to as ND). These hybrid orderings were generated using ONMETIS [26]
or a combination of the graph partitioning tool SCOTCH [29] with a variant of AMD (Halo-
AMD, see [30]). For matrices available in both assembled and unassembled format, we used
nested dissection based orderings provided by Det Norske Veritas and denote these by MFR.
Note that, in this paper, it is not our intention to compare the packages that we used to obtain
the orderings; we will only discuss the influence of the type of ordering on the performance of
MUMPS (in Section 5).
The AMD ordering algorithms are tightly integrated within the MUMPS code; the other
orderings are passed to MUMPS as an externally computed ordering. Because of this tight
integration, we observe in Table 3 that the analysis time is smaller using AMD than some
Real Unsymmetric Assembled (rua)
Matrix name Order No. of entries Origin
mixing-tank 29957 1995041 Polyflow S.A.
bbmat 38744 1771722 Rutherford-Boeing (CFD)
Real Symmetric Assembled (rsa)
Matrix name Order No. of entries Origin
oilpan 73752 1835470 INPRO
b5tuer 162610 4036144 INPRO
crankseg 1 52804 5333507 MacNeal-Schwendler
bmw7st 1 141347 3740507 MacNeal-Schwendler
ship 001.rsa 34920 2339575 Det Norske Veritas
ship 003.rsa 121728 4103881 Det Norske Veritas
shipsec1.rsa 140874 3977139 Det Norske Veritas
shipsec5.rsa 179860 5146478 Det Norske Veritas
shipsec8.rsa 114919 3384159 Det Norske Veritas
thread.rsa 29736 2249892 Det Norske Veritas
x104.rsa 108384 5138004 Det Norske Veritas
Table
2: Assembled test matrices from PARASOL partners (except the matrix bbmat).
AMD ordering
                  Entries in factors (×10^6)   Flops             Time for analysis
Matrix            estim.    actual             estim.   actual   (seconds)
mixing-tank       38.5      39.1               64.1     64.4     4.9
inv-extrusion-1   30.3      31.2               34.3     35.8     4.6
bbmat             46.0      46.2               41.3     41.6     8.1

ND ordering
                  Entries in factors (×10^6)   Flops             Time for analysis
Matrix            estim.    actual             estim.   actual   (seconds)
mixing-tank       18.9      19.6               13.0     13.2     12.8
bbmat             35.7      35.8               25.5     25.7     11.3

Table 3: Statistics for unsymmetric test problems on the IBM SP2.
user-defined precomputed ordering (in this paper ND or MFR orderings). In addition, the cost
of computing the external ordering is not included in these tables.
                  AMD ordering                                    ND ordering
Matrix            Entries in factors   Flops   Time for analysis  Entries in factors   Flops
b5tuer            26                   13      15                 24                   12
bmw7st 1

Table 4: Statistics for symmetric test problems on the IBM SP2.
Matrix            Entries in factors   Flops
ship 003          57                   73
shipsec1          37
shipsec5          51                   52
shipsec8          34                   34
thread

Table 5: Statistics for symmetric test problems, available in both assembled (rsa)
and unassembled (rse) formats (MFR ordering).
4 Parallel implementation issues
In this paper, we assume a one-to-one mapping between processes and processors in our
distributed memory environment. A process will thus implicitly refer to a unique processor
and, when we say for example that a task is allocated to a process, we mean that the task is
also mapped onto the corresponding processor.
As we did before in a shared memory environment [4], we exploit parallelism both arising
from sparsity (tree parallelism) and from dense factorization kernels (node parallelism). To
avoid the limitations due to centralized scheduling, where a host process is in charge of
scheduling the work of the other processes, we have chosen a distributed scheduling strategy.
In our implementation, a pool of work tasks is distributed among the processes that participate
in the numerical factorization. A host process is still used to perform the analysis phase (and
identify the pool of work tasks), distribute the right-hand side vector, and collect the solution.
Our implementation allows this host process to participate in the computations during the
factorization and solution phases. This allows the user to run the code on a single processor
and avoids one processor being idle during the factorization and solution phases.
The code solves the system in three main steps:
1. Analysis. The host performs an approximate minimum degree ordering based on the
symmetrized matrix pattern A + A^T and carries out the symbolic factorization. The
ordering can also be provided by the user. The host also computes a mapping of the nodes
of the assembly tree to the processors. The mapping is such that it keeps communication
costs during factorization and solution to a minimum and balances the memory and
computation required by the processes. The computational cost is approximated by the
number of floating-point operations, assuming no pivoting is performed, and the storage
cost by the number of entries in the factors (a rough sketch of such a subtree mapping is given after this list). After computing the mapping, the host
sends symbolic information to the other processes. Using this information, each process
estimates the work space required for its part of the factorization and solution. The
estimated work space should be large enough to handle the computational tasks that
were assigned to the process at analysis time plus possible tasks that it may receive
dynamically during the factorization, assuming that no excessive amount of unexpected
fill-in occurs due to numerical pivoting.
2. Factorization. The original matrix is first preprocessed (for example, converted to
arrowhead format if the matrix is assembled) and distributed to the processes that will
participate in the numerical factorization. Each process allocates an array for contribution
blocks and factors. The numerical factorization on each frontal matrix is performed by
a process determined by the analysis phase and potentially one or more other processes
that are determined dynamically. The factors must be kept for the solution phase.
3. Solution. The right-hand side vector b is broadcast from the host to the other processes.
They compute the solution vector x using the distributed factors computed during the
factorization phase. The solution vector is then assembled on the host.
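As a rough, hypothetical illustration of the static load balancing decided during the analysis (step 1 above maps nodes of the assembly tree to processors; Section 4.1 explains that leaf subtrees are mapped as a whole), the sketch below assigns leaf subtrees with estimated flop counts to processes with a greedy "largest task to least loaded process" rule. The real MUMPS mapping also balances memory and maps type 2 and type 3 nodes, so this conveys only the flavour of the decision.

import heapq

def map_subtrees(subtree_flops, nprocs):
    # Greedy mapping: repeatedly give the largest remaining subtree to the
    # currently least loaded process.  subtree_flops: {subtree_id: flops}.
    load = [(0.0, p) for p in range(nprocs)]
    heapq.heapify(load)
    mapping = {}
    for sub, w in sorted(subtree_flops.items(), key=lambda kv: -kv[1]):
        l, p = heapq.heappop(load)            # least loaded process so far
        mapping[sub] = p
        heapq.heappush(load, (l + w, p))
    return mapping

flops = {'T1': 9e9, 'T2': 7e9, 'T3': 5e9, 'T4': 4e9, 'T5': 3e9, 'T6': 2e9}
print(map_subtrees(flops, nprocs=3))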
4.1 Sources of parallelism
We consider the condensed assembly tree of Figure 2, where the leaves represent subtrees of the
assembly tree.
Figure 2: Distribution of the computations of a multifrontal assembly tree over the four processors P0, P1, P2, and P3 (leaf subtrees and nodes of type 1, type 2, and type 3).
If we only consider tree parallelism, then the transfer of the contribution block from a node
in the assembly tree to its father node requires only local data movement when the nodes
are assigned to the same process. Communication is required when the nodes are assigned
to different processes. To reduce the amount of communication during the factorization and
solution phases, the mapping computed during the analysis phase assigns a subtree of the
assembly tree to a single process. In general, the mapping algorithm chooses more leaf subtrees
than there are processes and, by mapping these subtrees carefully onto the processes, we achieve
a good overall load balance of the computation at the bottom of the tree. We have described
this in more detail in [5]. However, if we exploit only tree parallelism, the speedups are very
disappointing. Obviously it depends on the problem, but typically the maximum speedup is no
more than 3 to 5 as illustrated in Table 6. This poor performance is caused by the fact that
the tree parallelism decreases while going towards the root of the tree. Moreover, it has been
observed (see for example [4]) that often more than 75% of the computations are performed
in the top three levels of the assembly tree. It is thus necessary to obtain further parallelism
within the large nodes near the root of the tree. The additional parallelism will be based on
parallel blocked versions of the algorithms used during the factorization of the frontal matrices.
Nodes of the assembly tree that are treated by only one process will be referred to as nodes
of type 1 and the parallelism of the assembly tree will be referred to as type 1 parallelism.
Further parallelism is obtained by a one-dimensional (1D) block partitioning of the rows of
the frontal matrix for nodes with a large contribution block (see Figure 2). Such nodes will
be referred to as nodes of type 2 and the corresponding parallelism as type 2 parallelism.
Finally, if the frontal matrix of the root node is large enough, we partition it in a two-dimensional
(2D) block cyclic way. The parallel root node will be referred to as a node of type 3 and the
corresponding parallelism as type 3 parallelism.
4.2 Type 2 parallelism
During the analysis phase, a node is determined to be of type 2 if the number of rows in its
contribution block is sufficiently large. If a node is of type 2, one process (called the master)
holds all the fully summed rows and performs the pivoting and the factorization on this block
while other processes (called slaves) perform the updates on the partly summed rows (see Figure 1).
The slaves are determined dynamically during factorization and any process may be selected.
To be able to assemble the original matrix entries quickly into the frontal matrix of a type 2
node, we duplicate the corresponding original matrix entries (stored as arrowheads or element
matrices) onto all the processes before the factorization. This way, the master and slave
processes of a type 2 node have immediate access to the entries that need to be assembled in
the local part of the frontal matrix. This duplication of original data enables efficient dynamic
scheduling of computational tasks, but requires some extra storage. This is studied in more
detail in Section 8. (Note that for a type 1 node, the original matrix entries need only be present
on the process handling this node.)
At execution time, the master of a type 2 node first receives symbolic information describing
the structure of the contribution blocks of its son nodes in the tree. This information is sent
by the (master) processes handling the sons. Based on this information, the master determines
the exact structure of its frontal matrix and decides which slave processes will participate in
the factorization of the node. It then sends information to the processes handling the sons to
enable them to send the entries in their contribution blocks directly to the appropriate processes
involved in the type 2 node. The assemblies for this node are subsequently performed in parallel.
The master and slave processes then perform the elimination operations on the frontal matrix
in parallel. Macro-pipelining based on a blocked factorization of the fully summed rows is used
to overlap communication with computation. The efficiency of the algorithm thus depends on
both the block size used to factor the fully summed rows and on the number of rows allocated
to a slave process. Further details and differences between the implementations for symmetric
and unsymmetric matrices are described in [5].
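The serial NumPy sketch below mimics the division of labour at a type 2 node: the master eliminates the fully summed rows while each slave applies the same updates to its own block of partly summed rows. The way the rows are split, the absence of pivoting, and the unblocked loop are simplifications; MUMPS uses a blocked factorization so that these updates can be pipelined and overlapped with communication.

import numpy as np

def type2_factorize(F, npiv, nslaves):
    # F: frontal matrix with the npiv fully summed rows on top (no pivoting in
    # this toy).  The master eliminates pivot k; each 'slave' then updates its
    # own block of partly summed rows -- the step that runs in parallel in the
    # real code.
    m = F.shape[0]
    slave_rows = np.array_split(np.arange(npiv, m), nslaves)
    for k in range(npiv):
        # master: multipliers and update of the remaining fully summed rows
        F[k+1:npiv, k] /= F[k, k]
        F[k+1:npiv, k+1:] -= np.outer(F[k+1:npiv, k], F[k, k+1:])
        # slaves: the same update restricted to their partly summed rows
        for rows in slave_rows:
            F[rows, k] /= F[k, k]
            F[rows, k+1:] -= np.outer(F[rows, k], F[k, k+1:])
    return F   # the trailing block is now the contribution block

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8)) + 8 * np.eye(8)
F = type2_factorize(A.copy(), npiv=3, nslaves=2)
ref = A[3:, 3:] - A[3:, :3] @ np.linalg.solve(A[:3, :3], A[:3, 3:])
print(np.allclose(F[3:, 3:], ref))   # True: matches the Schur complement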
4.3 Type 3 parallelism
At the root node, we must factorize a dense matrix and we can use standard codes for
this. For scalability reasons, we use a 2D block cyclic distribution of the root node and
we use ScaLAPACK [9] or the vendor equivalent implementation (routine PDGETRF for
general matrices and routine PDPOTRF for symmetric positive definite matrices) for the actual
factorization.
Currently, a maximum of one root node, chosen during the analysis, is processed in parallel.
The node chosen will be the largest root provided its size is larger than a computer dependent
parameter (otherwise it is factorized on only one processor). One process (also called the master)
holds all the indices describing the structure of the root frontal matrix.
We call the root node, as determined by the analysis phase, the estimated root node.
Before factorization, the structure of the frontal matrix of the estimated root node is statically
mapped onto a 2D grid of processes. This mapping fully determines to which process an entry
of the estimated root node is assigned. Hence, for the assembly of original matrix entries
and contribution blocks, the processes holding this information can easily compute the exact
processes to which they must send data.
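For reference, ownership of an entry under a ScaLAPACK-style 2D block cyclic distribution can be computed with a few integer operations; the helper below (hypothetical name, 0-based indices, illustrative block and grid sizes) is the kind of local computation that lets the senders of contribution blocks determine the destination process of each entry of the root frontal matrix.

def owner_2d_block_cyclic(i, j, mb, nb, pr, pc):
    # Process coordinates (and linear rank on a row-major grid) owning entry
    # (i, j) of a matrix distributed in 2D block cyclic fashion with blocks of
    # size mb x nb over a pr x pc process grid.
    prow = (i // mb) % pr
    pcol = (j // nb) % pc
    return prow, pcol, prow * pc + pcol

# entries of an 8 x 8 root front on a 2 x 2 grid with 2 x 2 blocks
for i, j in [(0, 0), (0, 3), (5, 6), (7, 7)]:
    print((i, j), '->', owner_2d_block_cyclic(i, j, 2, 2, 2, 2))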
In the factorization phase, the original matrix entries and the part of the contribution blocks
from the sons corresponding to the estimated root can be assembled as soon as they are available.
The master of the root node then collects the index information for all the delayed variables
(due to numerical pivoting) of its sons and builds the final structure of the root frontal matrix.
This symbolic information is broadcast to all processes that participate in the factorization. The
contributions corresponding to delayed variables are then sent by the sons to the appropriate
processes in the 2D grid for assembly (or the contributions can be directly assembled locally if
the destination is the same process). Note that, because of the requirements of ScaLAPACK,
local copying of the root node is required since the leading dimension will change if there are
any delayed pivots.
4.4 Parallel triangular solution
The solution phase is also performed in parallel and uses asynchronous communications both
for the forward elimination and the back substitution. In the case of the forward elimination,
the tree is processed from the leaves to the root, similar to the factorization, while the back
substitution requires a different algorithm that processes the tree from the root to the leaves.
A pool of ready-to-be-activated tasks is used. We do not change the distribution of the factors
as generated in the factorization phase. Hence, type 2 and 3 parallelism are also used in the
solution phase. At the root node, we use ScaLAPACK routine PDGETRS for general matrices
and routine PDPOTRS for symmetric positive definite matrices.
5 Basic performance and influence of ordering
From earlier studies (for example [25]), we know that the ordering may seriously impact both
the uniprocessor time and the parallel behaviour of the method. To illustrate this, we report
in Table 6 performance obtained using only type 1 parallelism. The results show that using
only type 1 parallelism does not produce good speedups. The results also show (see columns
"Speedup") that we usually get better parallelism with nested dissection based orderings than
with minimum degree based orderings. We thus gain by using nested dissection because of
both a reduction in the number of floating-point operations (see Tables 3 and 4) and a better
balanced assembly tree.
We now discuss the performance obtained with MUMPS on matrices in assembled format that
will be used as a reference for this paper. The performance obtained on matrices provided in
elemental format is discussed in Section 6. In Tables 7 and 8, we show the performance of
MUMPS using nested dissection and minimum degree orderings on the IBM SP2 and the SGI
Origin 2000, respectively. Note that speedups are difficult to compute on the IBM SP2 because
memory paging often occurs on a small number of processors. Hence, the better performance
with nested dissection orderings on a small number of processors of the IBM SP2 is due, in part,
to the reduction in the memory required by each processor (since there are less entries in the
factors). To get a better idea of the true algorithmic speedups (without memory paging effects),
we give, in Table 7, the uniprocessor CPU time for one processor, instead of the elapsed time.
Matrix Time Speedup
AMD ND AMD ND
oilpan 12.6 7.3 2.91 4.45
bmw7st 1 55.6 21.3 2.55 4.87
bbmat 78.4 49.4 4.08 4.00
b5tuer 33.4 25.5 3.47 4.22
Table 6: Influence of the ordering on the time (in seconds) and speedup for the
factorization phase, using only type 1 parallelism, on 32 processors of the IBM SP2.
When the memory was not large enough to run on one processor, an estimate of the Megaflop
rate was used to compute the uniprocessor CPU time. (This estimate was also used, when
necessary, to compute the speedups in Table 6.) On a small number of processors, there can
still be a memory paging effect that may significantly increase the elapsed time. However, the
speedup over the elapsed time on one processor (not given) can be considerable.
Matrix Ordering Number of processors
oilpan AMD 37 13.6 9.0 6.8 5.9 5.8
b5tuer AMD 116 155.5 24.1 16.8 16.1 13.1
crankseg 1 AMD 456 508.3 162.4 78.4 63.3
bmw7st 1 AMD 142 153.4 46.5 21.3 18.4 16.7
ND 104 105.7 36.7 20.2 12.9 11.7
mixing-tank AMD 495 - 288.5 70.7 64.5 61.3
ND 104 32.80 26.1 17.4 14.4 14.8
bbmat AMD 320 276.4 68.3 47.8 44.0 39.8
ND 198 106.4 76.7 35.2 34.6 30.9
Table 7: Impact of the ordering on the time (in seconds) for factorization on the IBM SP2
(estimated CPU time on one processor; "-" means not enough memory).
Table 8 also shows the elapsed time for the solution phase; we observe that the speedups
for this phase are quite good.
In the remainder of this paper, we will use nested dissection based orderings, unless stated
otherwise.
Factorization phase
Matrix Ordering Number of processors
bmw7st 1 AMD 85.7 56.0 28.2 18.5 15.1 14.2
ND 306.6 182.7 80.9 52.9 41.2 35.5
ND 152.1 93.8 52.5 33.0 22.1 17.0
Solution phase
Matrix Ordering Number of processors
crankseg 2 AMD 6.8 5.8 4.4 2.9 2.4 2.3
ND 4.3 2.7 1.8 1.5 1.1 1.8
bmw7st 1 AMD 4.2 2.4 2.3 1.9 1.4 1.6
ND 3.3 2.1 1.7 1.4 1.6 1.5
ND 8.3 4.7 2.7 2.1 1.8 2.0
ND 6.3 3.8 2.9 2.4 2.0 2.4
Table
8: Impact of the ordering on the time (in seconds) for factorization and solve
phases on the SGI Origin 2000.
6 Elemental input matrix format
In this section, we discuss the main algorithmic changes to handle efficiently problems that are
provided in elemental format. We assume that the original matrix can be represented as a sum
of element matrices, A = Σ_i A_i, where each A_i has nonzero entries only in those rows and columns
that correspond to variables in the ith element. A_i is usually held as a dense matrix, but if the
matrix A is symmetric, only the lower triangular part of each A_i is stored.
In a multifrontal approach, element matrices need not be assembled in more than one frontal
matrix during the elimination process. This is due to the fact that the frontal matrix structure
contains, by definition, all the variables adjacent to any fully summed variable of the front. As
a consequence, element matrices need not be split during the assembly process. Note that, for
classical fan-in and fan-out approaches [7], this property does not hold since the positions of
the element matrices to be assembled are not restricted to fully summed rows and columns.
The main modifications that we had to make to our algorithms for assembled matrices to
accommodate unassembled matrices lie in the analysis, the distribution of the matrix, and the
assembly process. We will describe them in more detail below.
In the analysis phase, we exploit the elemental format of the matrix to detect supervariables.
We define a supervariable as a set of variables having the same list of adjacent elements. This
is illustrated in Figure 3 where the matrix is composed of two overlapping elements and has three
supervariables. (Note that our definition of a supervariable differs from the usual definition, see
for example [11]).
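A minimal sketch of this supervariable detection (the input format is hypothetical): variables are grouped by their set of adjacent elements, and the ordering can then be run on the compressed graph of supervariables. On the two overlapping elements of Figure 3 it recovers the three supervariables.

from collections import defaultdict

def supervariables(elements):
    # elements: dict {element_id: list of variables}.  Two variables belong to
    # the same supervariable iff they appear in exactly the same set of elements.
    adj = defaultdict(set)                       # variable -> set of elements
    for e, vars_ in elements.items():
        for v in vars_:
            adj[v].add(e)
    groups = defaultdict(list)
    for v, elems in adj.items():
        groups[frozenset(elems)].append(v)
    return list(groups.values())

elements = {1: [1, 2, 3, 4, 5], 2: [4, 5, 6, 7, 8]}   # two overlapping elements
print(supervariables(elements))                       # [[1, 2, 3], [4, 5], [6, 7, 8]]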
Supervariables have been used successfully in a similar context to compress graphs associated
with assembled matrices from structural engineering prior to a multiple minimum degree
ordering [8]. For assembled matrices, however, it was observed in [1] that the use of
supervariables in combination with an Approximate Minimum Degree algorithm was not more
efficient.
Graph size with
Matrix supervariable detection
OFF ON
t1.rse 9655992 299194
ship 003.rse 7964306 204324
shipsec1.rse
shipsec5.rse 9933236 256976
shipsec8.rse 6538480 171428
thread.rse 4440312 397410
x104.rse 10059240 246950
Table
9: Impact of supervariable detection on the length of the adjacency lists given
to the ordering phase.
Table 9 shows the impact of using supervariables on the size of the graph processed
by the ordering phase (AMD ordering). Graph size is the length of the adjacency lists of
variables/supervariables given as input to the ordering phase. Without supervariable detection,
Graph size is twice the number of off-diagonal entries in the corresponding assembled matrix.
Figure 3: Supervariable detection for a matrix in elemental format (the sum of two overlapping elements): the initial matrix, the initial graph of variables, and the graph of supervariables.
The working space required by the analysis phase using the AMD ordering is dominated by
the space required by the ordering phase and is Graph size plus an overhead that is a small
multiple of the order of matrix. Since the ordering is performed on a single processor, the space
required to compute the ordering is the most memory intensive part of the analysis phase. With
supervariable detection, the complete uncompressed graph need not be built since the ordering
phase can operate directly on the compressed graph. Table 9 shows that, on large graphs,
compression can reduce the memory requirements of the analysis phase dramatically.
Table 10 shows the impact of using supervariables on the time for the complete analysis
phase (including graph compression and ordering). We see that the reduction in time is not
only due to the reduced time for ordering; significantly less time is also needed for building the
much smaller adjacency graph of the supervariables.
Time for analysis
Matrix supervariable detection
OFF ON
t1.rse 4.6 (1.8) 1.5 (0.3)
ship 003.rse 7.4 (2.8) 3.2 (0.7)
shipsec1.rse 6.0 (2.2) 2.6 (0.6)
shipsec5.rse 10.1 (4.6) 3.9 (0.8)
shipsec8.rse 5.7 (2.0) 2.6 (0.5)
thread.rse 2.6 (0.9) 1.2 (0.2)
x104.rse 6.4 (3.5) 1.5 (0.3)
Table
10: Impact of supervariable detection on the time (in seconds) for the analysis
phase on the SGI Origin 2000. The time spent in the AMD ordering is in parentheses.
The overall time spent in the assembly process for matrices in elemental format will differ
from the overall time spent in the assembly process for the equivalent assembled matrix.
Obviously, for the matrices in elemental format there is often significantly more data to assemble
(usually about twice the number of entries as for the same matrix in assembled format).
However, the assembly process of matrices in elemental format should be performed more
efficiently than the assembly process of assembled matrices. First, because we potentially
assemble at once a larger and more regular structure (a full matrix). Second, because most input
data will be assembled at or near leaf nodes in the assembly tree. This has two consequences.
The assemblies are performed in a more distributed way and most assemblies of original element
matrices are done at type 1 nodes. (Hence, less duplication of original matrix data is necessary.)
A more detailed analysis of the duplication issues linked to matrices in elemental format will be
addressed in Section 8. In our experiments (not shown here), we have observed that, despite the
differences in the assembly process, the performance of MUMPS for assembled and unassembled
problems is very similar, provided the same ordering is used. The reason for this is that the extra
amount of assemblies of original data for unassembled problems is relatively small compared to
the total number of flops.
The experimental results in Tables 11 and 12, obtained on the SGI Origin 2000, show the
good scalability of the code for both the factorization and the solution phases on our set of
unassembled matrices.
Matrix Number of processors
ship 003.rse 392 242 156 120 92
shipsec1.rse 174 128
shipsec5.rse 281 176 114 63 43
shipsec8.rse 187 127 68 36
thread.rse 186 120 69 46 37
x104.rse 56 34 20
Table
11: Time (in seconds) for factorization of the unassembled matrices on the SGI
Origin 2000. MFR ordering is used.
Matrix Number of processors
t1.rse 3.5 2.1 1.1 1.2 0.8
ship 003.rse 6.9 3.6 3.3 2.5 2.0
shipsec1.rse 3.8 3.1 2.1 1.6 1.5
shipsec5.rse 5.5 4.2 2.9 2.2 1.9
shipsec8.rse 3.8 3.1 2.0 1.4 1.3
thread.rse 2.3 1.9 1.3 1.0 0.8
x104.rse 2.6 1.9 1.4 1.0 1.1
Table
12: Time (in seconds) for the solution phase of the unassembled matrices on
the SGI Origin 2000. MFR ordering is used.
7 Distributed assembled matrix
The distribution of the input matrix over the available processors is the main preprocessing
step in the numerical factorization phase. During this step, the input matrix is organized into
arrowhead format and distributed according to the mapping provided by the analysis phase. In
the symmetric case, the first arrowhead of each frontal matrix is also sorted to enable efficient
assembly [5]. If the assembled matrix is initially held centrally on the host, we have observed
that the time to distribute the real entries of the original matrix can sometimes be comparable
to the time to perform the actual factorization. For example, for matrix oilpan, the time to
distribute the input matrix on 16 processors of the IBM SP2 is on average 6 seconds whereas
the time to factorize the matrix is 6.8 seconds (using AMD ordering, see Table 7). Clearly,
for larger problems where more arithmetic is required for the actual factorization, the time for
factorization will dominate the time for redistribution.
With a distributed input matrix format we can expect to reduce the time for the
redistribution phase because we can parallelize the reformatting and sorting tasks, and we
can use asynchronous all-to-all (instead of one-to-all) communications. Furthermore, we can
expect to solve larger problems since storing the complete matrix on one processor limits the
size of the problem that can be solved on a distributed memory computer. Thus, to improve
both the memory and the time scalability of our approach, we should allow the input matrix
to be distributed.
Based on the static mapping of the tasks to processes that is computed during the analysis
phase, one can a priori distribute the input data so that no further remapping is required at
the beginning of the factorization. This distribution, referred to as the MUMPS mapping, will
limit the communication to duplications of the original matrix corresponding to type 2 nodes
(further studied in Section 8).
To show the influence of the initial matrix distribution on the time for redistribution, we
compare, in Figure 4, three ways for providing the input matrix:
1. Centralized mapping: the input matrix is held on one process (the host).
2. MUMPS mapping: the input matrix is distributed over the processes according to the static
mapping that is computed during the analysis phase.
3. Random mapping: the input matrix is uniformly distributed over the processes in a
random manner that has no correlation to the mapping computed during the analysis
phase.
The figure clearly shows the benefit of using asynchronous all-to-all communications (required
by the MUMPS and random mappings) compared to using one-to-all communications (for the
centralized mapping). It is even more interesting to observe that distributing the input matrix
according to the MUMPS mapping does not significantly reduce the time for redistribution.
We attribute this to the good overlapping of communication with computation (mainly data
reformatting and sorting) in our redistribution algorithm.
Figure 4: Impact of the initial distribution (centralized matrix, MUMPS mapping, and random mapping) on the distribution time (in seconds) for matrix oilpan on the IBM SP2, as a function of the number of processors.
8 Memory scalability issues
In this section, we study the memory requirements and memory scalability of our algorithms.
Figure 5 illustrates how MUMPS balances the memory load over the processors. The figure
shows, for two matrices, the maximum memory required on a processor and the average over
all processors, as a function of the number of processors. We observe that, for varying numbers
of processors, these values are quite similar.
Figure 5: Total memory requirement per processor in MBytes (maximum and average) during factorization, as a function of the number of processors (ND ordering).
Table 13 shows the average size per processor of the main components of the working space
used during the factorization of the matrix bmw3 2. These components are:
- Factors: the space reserved for the factors; a processor does not know after the analysis phase in which type 2 nodes it will participate, and therefore it reserves enough space to be able to participate in all type 2 nodes.
- Stack area: the space used for stacking both the contribution blocks and the factors.
- Initial matrix: the space required to store the initial matrix in arrowhead format.
- Communication buffers: the space allocated for both send and receive buffers.
- Other: the size of all the remaining workspace allocated per processor.
- Total: the total memory required per processor.
The lines ideal in Table 13 are obtained by dividing the memory requirement on one processor
by the number of processors. By comparing the actual and ideal numbers, we get an idea how
MUMPS scales in terms of memory for some of the components.
Number of processors
Factors 423 211 107 58
ideal - 211 106 53 26
Stack area 502 294 172
Initial matrix 69 34.5 17.3 8.9 5.0 4.0 3.5
ideal - 34.5 17.3 8.6 4.3 2.9 2.2
Communication buffers 0 45 34 14 6 6 5
Other 20 20 20 20 20 20 20
Total 590 394 243 135 82 69 67
ideal - 295 147 74 37 25
Table
13: Analysis of the memory used during factorization of matrix bmw3 2 (ND ordering).
All sizes are in MBytes per processor.
We see that, even if the total memory (sum of all the local workspaces) increases, the average
memory required per processor significantly decreases up to 24 processors. We also see that
the size for the factors and the stack area are much larger than ideal. Part of this difference
is due to parallelism and is unavoidable. Another part, however, is due to an overestimation
of the space required. The main reason for this is that the mapping of the type 2 nodes on
the processors is not known at analysis and each processor can potentially participate in the
elimination of any type 2 node. Therefore, each processor allocates enough space to be able
to participate in all type 2 nodes. The working space that is actually used is smaller and, on
a large number of processors, we could reduce the estimate for both the factors and the stack
area. For example, we have successfully factorized matrix bmw3 2 on 32 processors with a
stack area that is 20% smaller than reported in Table 13.
The average working space used by the communication buffers also significantly decreases
up to 16 processors. This is mainly due to type 2 node parallelism where contribution blocks
are split among processors until a minimum granularity is reached. Therefore, when we increase
the number of processors, we decrease (until reaching this minimum granularity) the size of the
contribution blocks sent between processors. Note that on larger problems, the average size
per processor of the communication buffers will continue to decrease for a larger number of
processors. We see, as expected, that the line Other does not scale at all since it corresponds
to data arrays of size O(n) that need to be allocated on each process. We see that this space
significantly affects the difference between Total and ideal, especially for larger numbers of
processors. However, the relative influence of this fixed size area will be smaller on large matrices
from 3D simulations and therefore does not affect the asymptotic scalability of the algorithm.
The imperfect scalability of the initial matrix storage comes from the duplication of the
original matrix data that is linked to type 2 nodes in the assembly tree. We will study this in
more detail in the remainder of this section. We want to stress, however, that from a user point
of view, all numbers reported in this context should be related to the total memory used by the
MUMPS package which is usually dominated, on large problems, by the size of the stack area.
An alternative to the duplication of data related to type 2 nodes would be to allocate
the original data associated with a frontal matrix to only the master process responsible for
Matrix Number of processors
oilpan Type 2 nodes 0
Total entries 1835 1845 1888 2011 2235 2521
bmw7st
Total entries 3740 3759 3844 4031 4308 4793
Total entries 5758 5767 5832 6239 6548 7120
shipsec1.rsa Type 2 nodes 0 0 4 11 19 21
Total entries 3977 3977 4058 4400 4936 5337
shipsec1.rse Type 2 nodes
Total entries 8618 8618 8618 8627 8636 8655
thread.rsa Type 2 nodes
Total entries 2250 2342 2901 4237 6561 8343
thread.rse Type 2 nodes
Total entries 3719 3719 3719 3719 3719 3719
Table 14: The amount of duplication due to type 2 nodes. "Total entries" is the sum
of the number of original matrix entries over all processors (×10^3). The number of type 2
nodes is also given.
the type 2 node. During the assembly process, the master process would then be in charge
of redistributing the original data to the slave processes. This strategy introduces extra
communication costs during the assembly of a type 2 node and thus has not been chosen.
With the approach based on duplication, the master process responsible for a type 2 node has
all the flexibility to choose collaborating processes dynamically since this will not involve any
data migration of the original matrix. However, the extra cost of this strategy is that, based on
the decision during analysis of which nodes will be of type 2, partial duplication of the original
matrix must be performed.
In order to keep all the processors busy, we need to have sufficient node parallelism near the
root of the assembly tree; MUMPS therefore uses a heuristic that increases the number of type 2 nodes with
the number of processors used. The influence of the number of processors on the amount of
duplication is shown in Table 14. On a representative subset of our test problems, we show the
total number of type 2 nodes and the sum over all processes of the number of original matrix
entries and duplicates. If there is only one processor, type 2 nodes are not used and no data
is duplicated. Figure 6 shows, for four matrices, the number of original matrix entries that are
duplicated on all processors, relative to the total number of entries in the original matrix.
Since the original data for unassembled matrices are in general assembled earlier in the
assembly tree than the data for the same matrix in assembled format, the number of duplications
is often relatively much smaller with unassembled matrices than with assembled matrices.
Matrix thread.rse (in elemental format) is an extreme example since, even on 16 processors,
type 2 node parallelism does not require any duplication (see Table 14).
To conclude this section, we want to point out that the code scales well in terms of memory
usage. On (virtual) shared memory computers, the total memory (sum of local workspaces over
all the processors) required by MUMPS can sometimes be excessive. Therefore, we are currently
investigating how we can reduce the current overestimates of the local stack areas so we can
reduce the total memory required. A possible solution might be to limit the dynamic scheduling
of a type 2 node (and corresponding data duplication) to a subset of processors.
Figure 6: Percentage of entries in the original matrix that are duplicated on all processors due to type 2 nodes, as a function of the number of processors.
9 Dynamic scheduling strategies
To avoid the drawback of centralized scheduling on distributed memory computers, we have
implemented distributed dynamic scheduling strategies. We remind the reader that type 1
nodes are statically mapped to processes at analysis time and that only type 2 tasks, which
represent a large part of the computations and of the parallelism of the method, are involved
in the dynamic scheduling strategy.
To be able to choose dynamically the processes that will collaborate in the processing of a
type 2 node, we have designed a two-phase assembly process. Let Inode be a node of type 2 and
let Pmaster be the process to which Inode is initially mapped. In the first phase, the (master)
processes to which the sons of Inode are mapped, send symbolic data (integer lists) to Pmaster.
When the structure of the frontal matrix is determined, Pmaster decides a partitioning of the
frontal matrix and chooses the slave processes. It is during this phase that Pmaster will collect
information concerning the load of the other processors to help in its decision process. The
slave processes are informed that a new task has been allocated to them. Pmaster then sends
the description of the distribution of the frontal matrix to all collaborative processes of all
sons of Inode so that they can send their contribution blocks (real values) in pieces directly
to the correct processes involved in the computation of Inode. The assembly process is thus
fully parallelized and the maximum size of a message sent between processes is reduced (see
Section 8).
A pool of tasks private to each process is used to implement dynamic scheduling. All tasks
ready to be activated on a given process are stored in the pool of tasks local to the process.
Each process executes the following algorithm:
Algorithm 1
  while ( not all nodes processed )
    if local pool empty then
      blocking receive for a message; process the message
    elseif message available then
      receive and process message
    else
      extract work from the pool, and process it
    endif
  end while
Note that the algorithm gives priority to message reception. The main reasons for this
choice are first that the message received might be a source of additional work and parallelism
and second, the sending process might be blocked because its send buffer is full (see [5]). In
the actual implementation, we use the routine MPI IPROBE to check whether a message is
available.
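The self-contained Python sketch below simulates the control flow of Algorithm 1; plain queues stand in for asynchronous MPI messages and MPI_IPROBE, and the task handler is a stub, so it only shows the priority given to message reception over local work, not the actual MUMPS implementation.

import queue

def process(task, local_pool):
    # Toy task handler; in the real code, processing a message or a node may
    # push new ready-to-be-activated tasks into the local pool.
    print("processed", task)
    return 1

def scheduler_loop(local_pool, inbox, total_tasks):
    # Skeleton of Algorithm 1.  local_pool: local ready tasks (used as a stack);
    # inbox: queue.Queue standing in for incoming asynchronous MPI messages.
    done = 0
    while done < total_tasks:
        if not local_pool:
            done += process(inbox.get(), local_pool)      # blocking receive
        elif not inbox.empty():                           # cf. MPI_IPROBE
            done += process(inbox.get(), local_pool)
        else:
            done += process(local_pool.pop(), local_pool)

pool = ["node-7", "node-3"]
msgs = queue.Queue()
for m in ["contribution-from-P2", "symbolic-data-from-P0"]:
    msgs.put(m)
scheduler_loop(pool, msgs, total_tasks=4)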
We have implemented two scheduling strategies. In the first strategy, referred to as cyclic
scheduling, the master of a type 2 node does not take into account the load on the other
processors and performs a simple cyclic mapping of the tasks to the processors. In the
second strategy, referred to as (dynamic) flops-based scheduling, the master process uses
information on the load of the other processors to allocate type 2 tasks to the least loaded
processors. The load of a processor is defined here as the amount of work (flops) associated
with all the active or ready-to-be-activated tasks. Each process is in charge of maintaining local
information associated with its current load. With a simple remote memory access procedure,
using for example the one-sided communication routine MPI GET included in MPI-2, each
process has access to the load of all other processors when necessary. However, MPI-2 is not
available on our target computers. To overcome this, we have designed a module based only
on symmetric communication tools (MPI asynchronous send and receive). Each process is in
charge of both updating and broadcasting its local load. To control the frequency of these
broadcasts, an updated load is broadcast only if it is significantly different from the last load
broadcast.
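A small sketch of the two ingredients of this strategy (the 10% threshold, the load measure, and the class below are illustrative, not the MUMPS defaults): the master of a type 2 node picks the least loaded processes as slaves, and each process re-broadcasts its load only when it has changed significantly since the last broadcast.

def choose_slaves(loads, master, nslaves):
    # loads: dict {process: current flop load}.  Pick the nslaves least loaded
    # processes other than the master of the type 2 node.
    candidates = sorted((w, p) for p, w in loads.items() if p != master)
    return [p for _, p in candidates[:nslaves]]

class LoadBroadcaster:
    # Keeps the locally maintained load and 'broadcasts' it only when it
    # differs from the last broadcast value by more than a relative threshold.
    def __init__(self, threshold=0.1):
        self.threshold = threshold
        self.last_sent = 0.0
    def maybe_broadcast(self, new_load, send):
        if abs(new_load - self.last_sent) > self.threshold * max(self.last_sent, 1.0):
            send(new_load)                       # MUMPS uses asynchronous MPI sends
            self.last_sent = new_load

loads = {0: 5.0e9, 1: 1.2e9, 2: 3.4e9, 3: 0.8e9}
print(choose_slaves(loads, master=0, nslaves=2))          # -> [3, 1]
LoadBroadcaster().maybe_broadcast(2.0e9, send=lambda w: print("broadcast", w))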
When the initial static mapping does not balance the work well, we can expect that the
dynamic flops-based scheduling will improve the performance with respect to cyclic scheduling.
Tables 15 and 16 show that significant performance gains can be obtained by using dynamic
flops-based scheduling. On more than 24 processors, the gains are less significant because our
test problems are too small to keep all the processors busy and thus lessen the benefits of a good
dynamic scheduling algorithm. We also expect that this feature will improve the behaviour of
the parallel algorithm on a multi-user distributed memory computer.
Another possible use of dynamic scheduling is to improve the memory usage. We have seen,
in Section 8, that the size of the stack area is overestimated. Dynamic scheduling based on
Matrix & Number of processors
scheduling 28
cyclic 79.1 47.9 40.7 41.3 38.9
flops-based 61.1 45.6 41.9 41.7 40.4
cyclic 52.4 31.8 26.2 29.2 23.0
flops-based 29.4 27.8 25.1 25.3 22.6
Table
15: Comparison of cyclic and flops-based schedulings. Time (in seconds) for
factorization on the IBM SP2 (ND ordering).
Matrix & Number of processors
scheduling 4 8
ship 003.rse
cyclic 156.1 119.9 91.9
flops-based 140.3 110.2 83.8
shipsec5.rse
cyclic 113.5 63.1 42.8
flops-based 99.9 61.3 37.0
shipsec8.rse
cyclic 68.3 36.3 29.9
flops-based 65.0 35.0 25.1
Table 16: Comparison of cyclic and flops-based schedulings. Time (in seconds) for
factorization on the SGI Origin 2000 (MFR ordering).
memory load, instead of computational load, could be used to address this issue. Type 2 tasks
can be mapped to the least loaded processor (in terms of memory used in the stack area). The
memory estimation of the size of the stack area can then be based on a static mapping of the
tasks.
10 Splitting nodes of the assembly tree
During the processing of a parallel type 2 node, both in the symmetric and the unsymmetric
case, the factorization of the pivot rows is performed by a single processor. Other processors
can then help in the update of the rows of the contribution block using a 1D decomposition (as
presented in Section 4). The elimination of the fully summed rows can represent a potential
bottleneck for scalability, especially for frontal matrices with a large fully summed block near
the root of the tree, where type 1 parallelism is limited. To overcome this problem, we subdivide
nodes with large fully summed blocks, as illustrated in Figure 7.
Figure 7: Tree before and after the subdivision of a frontal matrix with a large pivot block.
In this figure, we consider an initial node of size NFRONT with NPIV pivots. We replace
this node by a son node of size NFRONT with NPIV_son pivots, and a father node of size
NFRONT − NPIV_son, with NPIV_father = NPIV − NPIV_son pivots. Note that by splitting a node,
we increase the number of operations for factorization, because we add assembly operations.
Nevertheless, we expect to benefit from splitting because we increase parallelism.
We experimented with a simple algorithm that postprocesses the tree after the symbolic
factorization. The algorithm considers only nodes near the root of the tree. Splitting large
nodes far from the root, where sufficient tree parallelism can already be exploited, would only
lead to additional assembly and communication costs. A node is considered for splitting only if
its distance to the root, that is, the number of edges between the root and the node, is not more
than a given threshold dmax.
Let Inode be a node in the tree, and d(Inode) the distance of Inode to the root. For all
nodes Inode such that d(Inode) ≤ dmax, we apply the following algorithm.
Algorithm 2: Splitting of a node
  if NFRONT − NPIV/2 is large enough then
    1. Compute W_master, the number of flops performed by the master of Inode.
    2. Compute W_slave, the number of flops performed by a slave,
       assuming that NPROCS − 1 slaves can participate.
    3. if W_master > W_slave then
       3.1. Split Inode into nodes son and father so that NPIV_son = NPIV/2.
       3.2. Apply Algorithm 2 recursively to nodes son and father.
    endif
  endif
Algorithm 2 is applied to a node only when NFRONT - NPIV/2 is large enough because
we want to make sure that the son of the split node is of type 2. (The size of the contribution
block of the son will be NFRONT - NPIV son .) A node is split only when the amount of
work for the master (W master ) is large relative to the amount of work for a slave (W slave ). To
reduce the amount of splitting further away from the root, we add, at step 3 of the algorithm,
a relative factor to W slave . This factor depends on a machine dependent parameter
and increases with the distance of the node from the root. Parameter p allows us to control
the general amount of splitting. Finally, because the algorithm is recursive, we may divide the
initial node into more than two new nodes.
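A hedged Python sketch of Algorithm 2 (the flop estimates W_master and W_slave are rough operation counts, and the minimum contribution block size, the relaxation factor, and the value of p below are illustrative; the actual criteria in MUMPS are machine dependent): a node with a large pivot block is split recursively while the master's work dominates that of a slave.

def w_master(npiv, nfront):
    # rough cost of factorizing the npiv fully summed rows of the front
    return 2.0 / 3.0 * npiv**3 + npiv**2 * (nfront - npiv)

def w_slave(npiv, nfront, nprocs):
    # rough cost of updating a 1/(nprocs-1) share of the partly summed rows
    return 2.0 * npiv * (nfront - npiv) * nfront / max(nprocs - 1, 1)

def split_node(nfront, npiv, nprocs, min_cb=200, depth=0, p=100.0):
    # Returns the list of pivot-block sizes after recursively splitting a node
    # with front size nfront and npiv pivots (cf. Algorithm 2).
    if nfront - npiv // 2 < min_cb:              # the son must stay a type 2 node
        return [npiv]
    relax = p * (depth + 1) / 100.0              # damp splitting away from the root
    if w_master(npiv, nfront) > relax * w_slave(npiv, nfront, nprocs):
        son = npiv // 2
        father = npiv - son
        # the son keeps the full front; the father front shrinks by son rows
        return split_node(nfront, son, nprocs, min_cb, depth + 1, p) + \
               split_node(nfront - son, father, nprocs, min_cb, depth + 1, p)
    return [npiv]

print(split_node(nfront=3000, npiv=1200, nprocs=16))     # [600, 300, 300]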
The effect of splitting is illustrated in Table 17 on both the symmetric matrix crankseg 2
and the unsymmetric matrix inv-extrusion-1. Ncut corresponds to the number of type 2
nodes cut. A special value of p is used as a flag to indicate no splitting. Flops-based dynamic
scheduling is used for all runs in this section. The best time obtained for a given number of
processors is indicated in bold font. We see that significant performance improvements (of up to
40% reduction in time) can be obtained by using node splitting. The best timings are generally
obtained for relatively large values of p. More splitting occurs for smaller values of p, but the
corresponding times do not change much.
crankseg_2 (symmetric)            Number of processors
   p = 200   Time   37.9   31.4   30.4   29.5   25.4
   p = 150   Time   41.8   31.3   31.0   28.9   27.2
   p = 100   Time   39.8   32.3   28.4   28.6   26.7
             Ncut    9     11     13     14     15
   p =  50   Time   36.7   33.6   31.4   29.6   27.4
             Ncut   28
inv-extrusion-1 (unsymmetric)     Number of processors
   p = 200   Time   25.5   16.7   13.4   12.1   12.4
   p = 150   Time   24.9   16.3   13.5   13.4   12.4
   p = 100   Time   24.9   16.2   13.7   13.1   13.6
   p =  50   Time   24.9   17.0   13.5   13.6   16.6
Table 17: Time (in seconds) for factorization and number of nodes cut for different values of parameter p on the IBM SP2. Nested dissection ordering and flops-based dynamic scheduling are used.
Summary
Tables 18 and 19 show results obtained with MUMPS 4.0 using both dynamic scheduling and
node splitting. Default values for the parameters controlling the efficiency of the package have
been used and therefore the timings do not always correspond to the fastest possible execution
time. The comparison with results presented in Tables 7, 8, and 11 summarizes well the benefits
coming from the work presented in Sections 9 and 10.
Matrix          Number of processors
oilpan           33     11.1    7.5    5.2    4.8    4.6
b5tuer          108     82.1   51.9   13.4   13.1   10.5
bmw7st_1        104      -     29.8   13.7   11.7   11.3
mixing-tank     104     30.8   21.6   16.4   14.7   14.8
bbmat           198    255.4   85.2   34.8   32.8   30.9
Table 18: Time (in seconds) for factorization using MUMPS 4.0 with default options on the IBM SP2. ND ordering is used. Certain entries are estimated CPU times; a dash (-) means swapping or not enough memory.
Matrix          Number of processors
bmw7st_1         62     36
ship_003.rse    392    237    124    108     51
shipsec1.rse    174    125     63
shipsec5.rse    281    181    103     62     37
shipsec8.rse
thread.rse      186    125     70     38     24
x104.rse         56     34     19     12     11
Table 19: Time (in seconds) for factorization using MUMPS 4.0 with default options on the SGI Origin 2000. ND or MFR ordering is used.
The largest problem we have solved to date is a symmetric matrix of order 943695 with
more than 39 million entries. The number of entries in the factors is 1.4 x 10^9 and the number of operations during factorization is 5.9 x 10^12. On one processor of the SGI Origin 2000, the
factorization phase required 8.9 hours and on two (non-dedicated) processors 6.2 hours were
required. Because of the total amount of memory estimated and reserved by MUMPS, we could
not solve it on more than 2 processors. This issue will have to be addressed to improve the
scalability on globally addressable memory computers and further analysis will be performed
on purely distributed memory computers with a larger number of processors. Possible solutions
to this have been mentioned in the paper (limited dynamic scheduling and/or memory based
dynamic scheduling) and will be developed in the future.
Acknowledgements
We are grateful to Jennifer Scott and John Reid for their comments on an early version of this
paper.
| gaussian elimination;dynamic scheduling;multifrontal methods;asynchronous parallelism;sparse linear equations;distributed memory computation
587829 | On Weighted Linear Least-Squares Problems Related to Interior Methods for Convex Quadratic Programming. | It is known that the norm of the solution to a weighted linear least-squares problem is uniformly bounded for the set of diagonally dominant symmetric positive definite weight matrices. This result is extended to weight matrices that are nonnegative linear combinations of symmetric positive semidefinite matrices. Further, results are given concerning the strong connection between the boundedness of weighted projection onto a subspace and the projection onto its complementary subspace using the inverse weight matrix. In particular, explicit bounds are given for the Euclidean norm of the projections. These results are applied to the Newton equations arising in a primal-dual interior method for convex quadratic programming and boundedness is shown for the corresponding projection operator. | Introduction
. In this paper we study certain properties of the weighted linear least-squares problem
(1.1)   min over λ of ‖W^{1/2}(A^T λ - g)‖_2,
where A is an m × n matrix of full row rank and W is a positive definite symmetric n × n matrix whose matrix square root is denoted by W^{1/2}. (See, e.g., Golub and Van Loan [14, p. 149] for a discussion on matrix square roots.) Linear least-squares problems are fundamental within linear algebra, see, e.g., Lawson and Hanson [20], Golub and Van Loan [14, Chapter 5] and Gill et al. [12, Chapter 6]. An individual problem of the form (1.1) can be converted to an unweighted problem by substituting W^{1/2}A^T and W^{1/2}g for A^T and g. However, our interest is in sequences of weighted problems, where the weight matrix W changes and A is constant. The present paper is a continuation of the paper by Forsgren [10], in which W is assumed to be diagonally dominant. Our concern is when the weight matrix is of the form
(1.2)   W = (H + D)^{-1},
where H is a constant positive semidefinite symmetric matrix and D is an arbitrary positive definite diagonal matrix. Such matrices arise in interior methods for convex quadratic programming. See Section 1.1 below for a brief motivation.
The solution of (1.1) is given by the normal equations
(1.3)   A W A^T λ = A W g,
or alternatively as the solution to the augmented system (or KKT system)
(1.4)   ( M    A^T ) ( r )   =   ( g )
        ( A     0  ) ( λ )       ( 0 ),    where M = W^{-1}.
In some situations, we will prefer the KKT form (1.4), since we are interested in the case when M is a positive semidefinite symmetric and singular matrix. In this situation, W = M^{-1} and (1.3) are not defined, but (1.4) is well defined. This would for example be the case in an equality-constrained weighted linear least-squares problem, see, e.g., Lawson and Hanson [20, Chapter 22]. For convenience, we will mainly use the form (1.3).
Mathematically, (1.3) and (1.4) are equivalent. From a computational point of view, this need not be the case. There is a large number of papers giving reasons for solving systems of one type or the other, starting with Bartels et al. [1], followed by, e.g., Duff et al. [9], Bjorck [4], Gulliksson and Wedin [17], Wright [29, 31], Bjorck and Paige [5], Vavasis [26], Forsgren et al. [11], and Gill et al.
[13]. The focus of the present paper is linear algebra, and we will not discuss these
important computational aspects.
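As a concrete illustration of the two formulations, the following NumPy sketch solves the same small weighted least-squares problem once via the normal equations (1.3) and once via the augmented (KKT) system (1.4); the problem data are made up for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 3, 6
    A = rng.standard_normal((m, n))          # full row rank (generically)
    g = rng.standard_normal(n)
    W = np.diag(rng.uniform(0.1, 10.0, n))   # a positive definite weight matrix

    # Normal equations (1.3):  A W A^T lam = A W g
    lam_ne = np.linalg.solve(A @ W @ A.T, A @ W @ g)

    # Augmented (KKT) system (1.4) with M = W^{-1}
    M = np.linalg.inv(W)
    K = np.block([[M, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([g, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    r, lam_kkt = sol[:n], sol[n:]

    print(np.allclose(lam_ne, lam_kkt))      # the two formulations agree
    print(np.allclose(A @ r, 0))             # r lies in the null space of A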
If A has full row rank and if W+ is defined as the set of n × n positive definite symmetric matrices, then for any W ∈ W+, the unique solution of (1.1) is given by
    λ(W) = (A W A^T)^{-1} A W g.
In a number of applications, it is of interest to know if the solution remains in a compact set as the weight matrix changes, i.e., the question is whether
    sup over W ∈ W of ‖(A W A^T)^{-1} A W‖
remains bounded for a particular subset W of W+. It should be noted that boundedness does not hold for an arbitrary subset W of W+; a simple parameterized family of matrices W(θ) ∈ W+, θ > 0, shows that ‖(A W A^T)^{-1} A W‖ is not bounded when W is allowed to vary in W+. See Stewart [24] for another example of unboundedness and related discussion.
For the case where W is the set of positive denite diagonal matrices, Dikin [8] gives
an explicit formula for the optimal in (1.1) as a convex combination of the basic
solutions formed by satisfying m linearly independent equations. From this result, the
boundedness is obvious. If A does not have full row rank, it is still possible to show
boundedness, see Ben-Israel [2, p. 108]. Later, Wei [28] has also studied boundedness
in absence of a full row rank assumption on A, and has furthermore given some
stability results. Bobrovnikova and Vavasis [6] have given boundedness results for
complex diagonal weight matrices. The geometry of the set (AWA T
varies over the set of positive denite diagonal matrices has been studied by Hanke
LEAST-SQUARES PROBLEMS RELATED TO QUADRATIC PROGRAMMING 3
and Neumann [18]. Based on the formula derived by Dikin [8], Forsgren [10] has
given boundedness results when W is the set of positive denite diagonally dominant
matrices.
We show boundedness for the set of weight matrices that are arbitrary nonnegative
combinations of a set of xed positive semidenite symmetric matrices, and the set of
inverses of such matrices. As a special case, we then obtain the set of weight matrices
of the form (1.2), which was our original interest. The boundedness is shown in the
following way. In Section 2, we review results for the characterization of as W
varies over the set of symmetric matrices such that AWA T is nonsingular. Section 3
establishes the boundedness when W is allowed to vary over a set of matrices that are
nonnegative linear combinations of a number of xed positive semidenite matrices
such that AWA T is positive denite. In Section 4, results that are needed to handle
the projection using the inverse weight matrix are given. In Section 5, we combine
results from the previous two sections to show boundedness for the that solves (1.4)
when M is allowed to vary over the nonnegative linear combinations of a set of xed
positive semidenite symmetric matrices.
The research was initiated by a paper by Gonzaga and Lara [15]. The link to
that paper has subsequently been superseded, but we include a discussion relating
our results to the result of Gonzaga and Lara in Appendix A.
1.1. Motivation. Our interest in weighted linear least-squares problems is from interior methods for optimization, and in particular for convex quadratic programming. There is a vast number of papers on interior methods, and here is only given a brief motivation for the weighted linear least-squares problems that arise. Any convex quadratic programming problem can be transformed to the form
(1.6)   minimize   (1/2) x^T H x + c^T x
        subject to A x = b,  x ≥ 0,
where H is a positive semidefinite symmetric n × n matrix and A is an m × n matrix of full row rank. For x ∈ R^n, λ ∈ R^m and s ∈ R^n such that x > 0 and s > 0, an iteration of a primal-dual path-following interior method for solving (1.6) typically takes a Newton step towards the solution of the equations
(1.7a)  A x = b,
(1.7b)  H x - A^T λ - s = -c,
(1.7c)  X S e = μ e,
where μ is a positive barrier parameter, see, e.g., Monteiro and Adler [21, page 46]. Here and similarly below, X = diag(x), S = diag(s), and e is the vector of all ones. Strict positivity of x and s is implicitly required and typically maintained by limiting the step length. If μ is set equal to zero in (1.7) and the implicit requirements x > 0 and s > 0 are replaced by x ≥ 0 and s ≥ 0, the optimality conditions for (1.6) are obtained. Consequently, equations (1.7) and the implicit positivity of x and s may be viewed as a perturbation of the optimality conditions for (1.6). In a primal-dual path-following interior method, the perturbation is driven to zero to make the method converge to an optimal solution.
The equations (1.7) are often referred to as the primal-dual equations. Forming the Newton equations associated with (1.7) for the corrections Δx, Δλ, Δs, and eliminating Δs gives a system of the form
(1.8)   ( H + X^{-1}S   A^T ) ( Δx )   =   ( r_1 )
        (      A         0  ) ( Δλ )       ( r_2 )
for appropriate right-hand side vectors. If x and s are strictly feasible, i.e., x and s are strictly positive and x satisfies A x = b, then a comparison of (1.4) and (1.8) shows that the Newton equations (1.8) can be associated with a weighted linear least-squares problem with a positive definite weight matrix (H + X^{-1}S)^{-1}. A sequence of strictly feasible iterates {x_k}, k = 0, 1, ..., gives rise to a sequence of weighted linear least-squares problems, where the weight matrix changes but A is constant.
In a number of convergence proofs for linear programming, a crucial step is to ensure boundedness of the step (Δx, Δλ, Δs), see, e.g., Vavasis and Ye [27, Lemma 4] and Wright [30, Lemmas 7.2 and A.4]. Since linear programming is the special case of convex quadratic programming where H = 0, we are interested in extending this boundedness result to convex quadratic programming. Therefore, the boundedness of
(1.9)   (A (H + X^{-1}S)^{-1} A^T)^{-1} A (H + X^{-1}S)^{-1}
as X^{-1}S varies over the set of diagonal positive definite matrices is of interest. This boundedness property of (1.9) is shown in Section 5.
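The following sketch assembles, for a small made-up convex QP, the coefficient matrix of the Newton system (1.8), the associated weight matrix (H + X^{-1}S)^{-1}, and the projection operator (1.9), illustrating how each strictly feasible iterate defines a new weighted least-squares problem with the same A. The data are illustrative assumptions only.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n = 2, 5
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((n, n))
    H = B.T @ B                                  # positive semidefinite Hessian
    x = rng.uniform(0.5, 2.0, n)                 # strictly positive iterate
    s = rng.uniform(0.5, 2.0, n)

    X, S = np.diag(x), np.diag(s)
    W = np.linalg.inv(H + np.linalg.inv(X) @ S)  # weight matrix (H + X^{-1}S)^{-1}

    # KKT-type coefficient matrix of the Newton equations (1.8)
    K = np.block([[H + np.linalg.inv(X) @ S, A.T],
                  [A, np.zeros((m, m))]])

    # The projection operator (1.9), whose boundedness is studied in Section 5
    P = np.linalg.solve(A @ W @ A.T, A @ W)
    print(np.linalg.norm(P, 2))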
1.2. Notation. When we refer to matrix norms, and make no explicit reference to what type of norm is considered, it can be any matrix norm that is induced from a vector norm such that ‖(x^T 0^T)^T‖ = ‖x‖ holds for any vector x. To denote the ith eigenvalue and the ith singular value, we use λ_i and σ_i respectively. For symmetric matrices A and B of equal dimension, A ⪰ B means that A - B is positive semidefinite. Similarly, A ≻ B means that A - B is positive definite.
The remainder of this section is given in Forsgren [10]. It is restated here for
completeness. For an m n matrix A of full row rank, we shall denote by J (A) the
collection of sets of column indices associated with the nonsingular mm submatrices
of A. For J 2 J (A), we denote by A J the mm nonsingular submatrix formed by
the columns of A with indices in J . Associated with J 2 J (A), for a diagonal n n
matrix D, we denote by D J the mm diagonal matrix formed by the elements of D
that have row and column indices in J . Similarly, for a vector g of dimension n, we
denote by g J the vector of dimension m with the components of g that have indices in
J . The slightly dierent meanings of A J , D J and g J are used in order not to make the
notation more complicated than necessary. For an example clarifying the concepts,
see Forsgren [10, p. 766].
The analogous notation is used for an m n matrix A of full row rank and an
n r matrix U of full row rank in that we associate J (AU) with the collection of sets
of column indices corresponding to nonsingular mm submatrices of AU . Associated
with J 2 J (AU), for a diagonal r r matrix D, we denote by D J the mm diagonal
matrix formed by the elements of D that have row and column indices in J . Similarly,
for a vector g of dimension r, we denote by g J the vector of dimension m with the
components of g that have indices in J . Since column indices of AU are also column
indices of U , for J 2 J (AU ), we denote by U J the n m submatrix of full column
rank formed by the columns of U with indices in J . Note that each element of J (A)
as well as each element of J (AU) is a collection of m indices.
2. Background. In this section, we review some fundamental results. The following theorem, which states that the solution of a diagonally weighted linear least-squares problem can be expressed as a certain convex combination, is the basis for our results. As far as we know, it was originally given by Dikin [8], who used it in the convergence analysis of the interior point method for linear programming he proposed [7]. The proof of the theorem is based on the Cauchy-Binet formula and Cramer's rule.
Theorem 2.1 (Dikin [8]). Let A be an m × n matrix of full row rank, let g be a vector of dimension n, and let D be a positive definite diagonal n × n matrix. Then,
    (A D A^T)^{-1} A D g = Σ over J ∈ J(A) of ( det(A_J)^2 det(D_J) / Σ over J' ∈ J(A) of det(A_{J'})^2 det(D_{J'}) ) A_J^{-T} g_J,
where J(A) is the collection of sets of column indices associated with nonsingular m × m submatrices of A.
Proof. See, e.g., Ben-Tal and Teboulle [3, Corollary 2.1].
Theorem 2.1 implies that if the weight matrix is diagonal and positive definite, then the solution to the weighted least-squares problem (1.1) lies in the convex hull of the basic solutions formed by satisfying m linearly independent equations. Hence, this theorem provides an expression on the supremum of ‖(A D A^T)^{-1} A D g‖ for D diagonal and positive definite, as the following corollary shows.
Corollary 2.2. Let A be an m × n matrix of full row rank, and let D+ denote the set of positive definite diagonal n × n matrices. Then,
    sup over D ∈ D+ of ‖(A D A^T)^{-1} A D g‖ = max over J ∈ J(A) of ‖A_J^{-T} g_J‖,
where J(A) is the collection of sets of column indices associated with nonsingular m × m submatrices of A.
Proof. See, e.g., Forsgren [10, Corollary 2.2].
The boundedness has been discussed by a number of authors over the years, see, e.g., Ben-Tal and Teboulle [3], O'Leary [22], Stewart [24], and Todd [25]. Theorem 2.1 can be generalized to the case where the weight matrix is an arbitrary symmetric, not necessarily diagonal, matrix such that AWA^T is nonsingular. The details are given in the following theorem.
Theorem 2.3 (Forsgren [10]). Let A be an m × n matrix of full row rank and let W be a symmetric n × n matrix such that AWA^T is nonsingular. Suppose W = U D U^T, where D is diagonal. Then,
    (A W A^T)^{-1} A W g = Σ over J ∈ J(AU) of ( det((AU)_J)^2 det(D_J) / Σ over J' ∈ J(AU) of det((AU)_{J'})^2 det(D_{J'}) ) (A U_J)^{-T} U_J^T g,
where J(AU) is the collection of sets of column indices associated with nonsingular m × m submatrices of AU.
Proof. See Forsgren [10, Theorem 3.1].
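Dikin's convex-combination formula in Theorem 2.1 is easy to check numerically. The sketch below compares the directly computed solution (ADA^T)^{-1}ADg with the weighted combination of basic solutions A_J^{-T} g_J over all nonsingular m × m column submatrices; the small dimensions are chosen only to keep the enumeration cheap.

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(2)
    m, n = 2, 5
    A = rng.standard_normal((m, n))
    g = rng.standard_normal(n)
    d = rng.uniform(0.1, 5.0, n)
    D = np.diag(d)

    lhs = np.linalg.solve(A @ D @ A.T, A @ D @ g)

    num = np.zeros(m)
    den = 0.0
    for J in combinations(range(n), m):
        AJ = A[:, J]
        detAJ = np.linalg.det(AJ)
        if abs(detAJ) < 1e-12:
            continue                                   # skip singular submatrices
        wJ = detAJ**2 * np.prod(d[list(J)])            # det(A_J)^2 det(D_J)
        num += wJ * np.linalg.solve(AJ.T, g[list(J)])  # A_J^{-T} g_J
        den += wJ

    print(np.allclose(lhs, num / den))                 # Theorem 2.1 holds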
3. Nonnegative combinations of positive semidenite matrices. Let A
be an mn matrix of full row rank and assume that we are given an nn symmetric
weight matrix W (), which depends on a vector 2 IR t for some t. If W () can
be decomposed as W does not depend on and D() is
diagonal, Theorem 2.3 can be applied, provided AW ()A T is nonsingular, and the
matrices (AU J
J involved do not depend on . If, in addition D() 0, then the
linear combination of Theorem 2.3 is a convex combination. Consequently, the norm
remains bounded as long as the supremum is taken over a set of values of for which
In particular, we are interested in the case where a set
of positive semidenite and symmetric matrices, W i , t, are given and W ()
is dened as W
. The following two lemmas and associated corollary
concern the decomposition of W (). The rst lemma concerns the set of all possible
decompositions of a positive semidenite matrix W as the relation
between dierent decompositions of this type.
Lemma 3.1. Let W be a symmetric positive semidenite n n matrix of rank r,
and let
U is nonempty and compact. Further, if
U and e
U belong to
U , then there is an r r orthogonal matrix Q such that
UQ.
Proof. It is possible to decompose W as is an n r matrix
of full column rank, for example using a Cholesky factorization with symmetric inter-
changes, see, e.g., Golub and Van Loan [14, Section 4.2.9]. Therefore,
U is nonempty.
If U and e
U T both belong to
U , then
U
U
U
Hence, U T and e
U T have the same null space, which implies that the range spaces of
U and e
U are the same. Therefore, there is a nonsingular r r matrix M such that
UM , from which it follows that e
U T . Premultiplying this equation
by e
U T and postmultiplying it by e
U gives
e
U is nonsingular, (3.1) gives MM I . Compactness is established by
proving boundedness and closedness. Boundedness holds because kU T e
is the ith unit vector. Let fU (i) g 1
be a sequence converging
to U , such that U (i) 2
U for all i. From the continuity of matrix multiplication, U
belongs to
U , and the closedness of
U follows.
A consequence of this lemma is that we can decompose each W i , t, as
stated in the following corollary.
Corollary 3.2. For t, let W i be an nn symmetric positive semidefinite
matrix of rank r i . Let
is a well-dened compact subset of IR nr . Furthermore, if U and e
U belong to U , then,
t, there are orthogonal r i r i matrices Q i , such that U
Proof. The result follows by applying Lemma 3.1 to each W i .
It should be noted that U depends on the matrices W i . This dependence will be
suppressed in order not to make the notation more complicated than necessary. From
Corollary 3.2, we get a decomposition result for matrices that are nonnegative linear
combinations of symmetric positive semidenite matrices, as is stated in the following
lemma. It shows that if we are given a set of positive semidenite and symmetric
matrices, t, and W () is dened as W
then we can
decompose W () into the form W does not depend on
and D() is diagonal.
Lemma 3.3. For 2 IR t , let W
are
symmetric positive semidenite n n matrices. Further, let U be associated with
t, according to Corollary 3.2, and for each i, let r i denote rank(W i )
and let I i be an identity matrix of dimension r i . Then W () may be decomposed as
where U is any matrix in U and
Proof. Corollary 3.2 shows that we may write
where U is an arbitrary matrix in U and
Note that D() is positive semidenite if 0. An application of Theorem 2.3
to the decomposition of Lemma 3.3 now gives the boundedness result for nonnegative
combinations of positive semidenite matrices, as stated in the following proposition.
Proposition 3.4. Let A be an mn matrix of full row rank. For 2 IR t , 0,
let W
are symmetric positive semidenite
nn matrices. If W () is decomposed as W according to Lemma 3.3,
then for 0 and AW ()A T 0,
Furthermore,
sup
0:
U2U
where J (AU) is the collection of sets of column indices associated with nonsingular
submatrices of AU , and U is associated with t, according to
Corollary 3.2.
Proof. If AW ()A T 0, Theorem 2.3 immediately gives
Since 0, it follows that D() 0. Consequently, det(D J ()) 0 for all J 2
J (AU ). Thus, the above expression gives
sup
Since this result holds for all U 2 U , it holds when taking the inmum over U 2 U .
To show that the inmum is attained, let
for every J that is a subset of ng such that jJ m. For a xed J , f J is
continuous at every e
U such that det(A e
Further, at e
U such that A e
U J is
singular, f J is a lower semi-continuous function, see, e.g., Royden [23, p. 51]. Hence,
f J is lower semi-continuous everywhere. Due to the construction of f J (U ),
J:jJj=m
The maximum of a nite collection of lower semi-continuous functions is lower semi-
continuous, see, e.g., Royden [23, p. 51], and the set U is compact by Corollary 3.2.
Therefore, the inmum is attained, see, e.g., Royden [23, p. 195], and the proof is
complete.
Note that Proposition 3.4 as special cases includes two known cases: (i) the
diagonal matrices, where W
and (ii) the diagonally dominant
matrices, where
In both these cases, the supremum bound of (3.2) is sharp. This is because all the
matrices whose nonnegative linear combinations form the weight matrices are of rank
one. In that case, the minimum over U in (3.2) is not necessary since it follows from
Corollary 3.2 that the columns of U are unique up to multiplication by 1. Hence,
D() may be adjusted so as to give weight one to the submatrix AU J for which the
maximum of the right hand side of (3.2) is achieved, and negligible weight to the other
submatrices. In general, when not all matrices whose nonnegative linear combinations
form the weight matrix have rank one, it is an open question if the supremum bound
is sharp.
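A simple way to get a feel for Proposition 3.4 is to sample random nonnegative combinations W(α) = Σ α_i W_i of a few fixed positive semidefinite matrices and record ‖(AW(α)A^T)^{-1}AW(α)‖; the observed values stay bounded even as the coefficients vary over many orders of magnitude. This is only an illustrative experiment, not a proof, and the matrices are random test data.

    import numpy as np

    rng = np.random.default_rng(3)
    m, n, t = 2, 5, 3
    A = rng.standard_normal((m, n))
    Ws = []
    for _ in range(t):
        C = rng.standard_normal((n, n))
        Ws.append(C @ C.T)                       # fixed positive semidefinite W_i

    norms = []
    for _ in range(1000):
        alpha = 10.0 ** rng.uniform(-6, 6, t)    # widely varying nonnegative weights
        W = sum(a * Wi for a, Wi in zip(alpha, Ws))
        P = np.linalg.solve(A @ W @ A.T, A @ W)
        norms.append(np.linalg.norm(P, 2))

    print(max(norms))                            # stays bounded over the samples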
4. Inversion of the weight matrix. For a constant positive semidenite matrix
H , our goal is to obtain a bound on k(A(H
D is an arbitrary positive denite diagonal matrix. One major obstacle in applying
Theorem 2.3 is the inverse in the weight matrix (H +D) 1 . The following proposition
and its subsequent corollary and lemma provide a solution to this problem.
Proposition 4.1. Suppose that an n n orthogonal matrix Q is partitioned as
is an n s matrix, and 2s n. Further, let W be a symmetric
nonsingular n n matrix such that Z T W 1 Z and Y T WY are nonsingular. Then
and
s:
Proof. The orthogonality of Q ensures that Y T I . This
gives
and hence
proving the rst part of the proposition.
are nonsingular, we may write
(Z
I (Z T W 1
(4.2a)
(4.2b)
The orthogonality of Q ensures that
s:
We also have
I (Z T W 1
(Z
combination of (4.2a), (4.3) and (4.4) gives
s:
An analogous argument applied to (4.2b), taking into account that 2s n gives
(4.6a)
s:
(4.6b)
The second part of the proposition follows by a combination of (4.1), (4.5) and (4.6).
In particular, Proposition 4.1 gives the equivalence between the Euclidean norms
of a projection and the projection onto the complementary space using the inverse
weight matrix, given that the matrices used to represent the spaces are orthogonal.
This is shown in the following corollary.
Corollary 4.2. Suppose that an n n orthogonal matrix Q is partitioned as
is an nm matrix. Further, let W be a symmetric nonsingular
matrix such that Z T W 1 Z and Y T WY are nonsingular. Then
Further, let W+ denote the set of n n positive denite symmetric matrices, and let
W W+ . Then,
sup
k(Z
Proof. If m n=2, the rst statement follows by letting Proposition 4.1.
The second statement is a direct consequence of the rst one. If m < n=2, we may
similarly apply Proposition 4.1 after interchanging the roles of Y and Z, and W and
As noted above, Corollary 4.2 states the equality between the Euclidean norms of
two projections, given that the matrices describing the spaces onto which we project
are orthogonal. The following lemma relates the Euclidean norms of the projections
when the matrices are not orthogonal.
Lemma 4.3. Let A be an m n matrix of full row rank, and let N be a matrix
whose columns form a basis for the null space of A. Further, let W be a symmetric
nonsingular n n matrix such that N T W 1 N and A T WA are nonsingular. Then
k(N
Proof. Let be an orthogonal matrix such that the columns of Z form
a basis for the null space of A. Then, there are nonsingular matrices RZ and R Y such
that a matrix norm which is induced from a vector
norm is submultiplicative, see, e.g., Horn and Johnson [19, Thm. 5.6.2], this giveskRZ k
k(N
k(Z
Z k;
Y k:
(4.7b)
If the Euclidean norm is used, the bounds in (4.7) can be expressed in terms of singular
values of A and N since Y and Z are orthogonal matrices, i.e.
(4.8a)
(4.8b)
A combination of Corollary 4.2, (4.7), and (4.8) gives the stated result.
If the weight matrix is allowed to vary over some subset of the positive denite
symmetric matrices, it follows from Lemma 4.3 that the norm of the projection onto
a subspace is bounded if and only if the norm of the projection onto the orthogonal
complement is bounded when using inverses of the weight matrices. This is made
precise in the following corollary.
Corollary 4.4. Let W+ denote the set of n n positive denite symmetric
matrices, and let W W+ . Let A be an m n matrix of full row rank and let N be
a matrix whose columns form a basis for the null space of A. Then
sup
only if sup
k(N
In particular,
sup
k(N
sup
sup
k(N
Proof. The second statement follows by multiplying the inequalities in Lemma 4.3
by k(N and then taking the supremum of the three expressions.
The rst statement of the corollary then follows from the equivalence of matrix norms
that are induced from vector norms, see, e.g., Horn and Johnson [19, Thm. 5.6.18].
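Corollary 4.2 states that, for orthonormal Y and Z spanning complementary subspaces, the weighted projection associated with Y and weight W and the projection associated with Z and weight W^{-1} have the same Euclidean norm. The following sketch checks this identity on random data.

    import numpy as np

    rng = np.random.default_rng(4)
    n, mdim = 6, 2
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    Y, Z = Q[:, :mdim], Q[:, mdim:]                  # orthonormal, complementary ranges
    C = rng.standard_normal((n, n))
    W = C @ C.T + 0.1 * np.eye(n)                    # positive definite weight
    Winv = np.linalg.inv(W)

    PY = np.linalg.solve(Y.T @ W @ Y, Y.T @ W)       # (Y^T W Y)^{-1} Y^T W
    PZ = np.linalg.solve(Z.T @ Winv @ Z, Z.T @ Winv) # (Z^T W^{-1} Z)^{-1} Z^T W^{-1}

    print(np.linalg.norm(PY, 2), np.linalg.norm(PZ, 2))  # equal 2-norms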
5. Inversion and nonnegative combination. Let A be an m n matrix of
full row rank, and let Z be a matrix whose columns form an orthonormal basis for
the null space of A. Further, let
are
given symmetric positive semidenite n n matrices. In Section 3 the weight matrix
was assumed to be the nonnegative combination of symmetric positive semidenite
matrices. This section concerns weight matrices that are the inverse of such combi-
nations, i.e., where the weight matrix is the inverse of M(). Further, if the problem
is originally posed as the KKT-system, cf. (1.4),
it makes sense to study the problem under the assumption that Z T M()Z 0, since
in our situation, Z T M()Z 0 if and only if the matrix of (5.1) is nonsingular,
see Gould [16, Lemma 3.4]. Note that Z T M()Z 0 is a weaker assumption than
M() 0, which is necessary if the least-squares formulation is to be valid. A
combination of Proposition 3.4 and Lemma 4.3 shows that () remains bounded
under the abovementioned assumptions. This is stated in the following theorem,
which is the main result of this paper.
Theorem 5.1. Let A be an m n matrix of full row rank and let g be an n-
vector. Further, let Z be a matrix whose columns form an orthonormal basis for the
null space of A. For 2 IR t , 0, let
are symmetric positive semidenite nn matrices. Further, let r() and () satisfy
Then,
sup
0:
In particular, if Z T M()Z 0, then
Finally, if M() is decomposed according to Lemma 3.3, then
sup
0:
where J (Z T U) is the collection of sets of column indices associated with nonsingular
submatrices of Z T U , and U is associated with M i , t, according to
Corollary 3.2.
Proof. For I 0. Therefore,
is well-dened. By Lemma 4.3 it follows that
(5.
For such that Z T M()Z 0, the matrix in the system of equations dening ()
and r() is nonsingular, see Gould [16, Lemma 3.4]. Then, the implicit function
theorem implies that lim !0 Therefore, letting
(5.3). Taking the supremum over such that 0 and Z T M()Z 0, and using
Proposition 3.4 gives (5.4), from which (5.2) follows upon observing that all norms on
a real nite-dimensional vector space are equivalent, see, e.g., Horn and Johnson [19,
As a consequence of Theorem 5.1, we are now able to prove the boundedness of
the projection operator for the application of primal-dual interior methods to convex
quadratic programming described in Section 1.1.
Corollary 5.2. Let H be a positive semidefinite symmetric n × n matrix, let A be an m × n matrix of full row rank, and let D+ denote the space of positive definite diagonal n × n matrices. Then,
    sup over D ∈ D+ of ‖(A (H + D)^{-1} A^T)^{-1} A (H + D)^{-1}‖ is finite.
Proof. If M() 0, then () of Theorem 5.1 satises
Theorem 5.1 implies that ()
is bounded. This holds for any vector g, and hence
sup
0:M()0
The stated result follows by applying (5.6) with
and letting
For convenience in notation, it has been assumed that all variables of the convex
quadratic program are subject to bounds. It can be observed that the analogous
results hold when some variables are not subject to bounds. In this situation, M of
may be partitioned as
where H is symmetric and positive semidenite and D 11 is diagonal and positive
denite. Let A be partitioned conformally with M as
. Then, (1.4)
has a unique solution as long as there is no nonzero p 2 such that A 2
Gould [16, Lemma 3.4]. Hence, under this additional assumption,
Theorem 5.1 can be applied to bound k()k as D 11 varies over the set of positive
denite diagonal matrices.
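To see Corollary 5.2 at work, one can fix H and A and sample many positive definite diagonal matrices D with entries spread over a huge range, as happens with D = X^{-1}S in an interior method; the norm of (A(H+D)^{-1}A^T)^{-1}A(H+D)^{-1} remains modest throughout. Again this is only an illustrative experiment with random data.

    import numpy as np

    rng = np.random.default_rng(5)
    m, n = 3, 7
    A = rng.standard_normal((m, n))
    B = rng.standard_normal((n, n))
    H = B.T @ B                                   # fixed positive semidefinite H

    worst = 0.0
    for _ in range(2000):
        D = np.diag(10.0 ** rng.uniform(-8, 8, n))
        W = np.linalg.inv(H + D)
        P = np.linalg.solve(A @ W @ A.T, A @ W)
        worst = max(worst, np.linalg.norm(P, 2))

    print(worst)                                  # bounded, as Corollary 5.2 predicts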
6. Summary. It has been shown that results concerning the boundedness of (AWA^T)^{-1}AW for A of full row rank and W diagonal, or diagonally dominant, and symmetric positive definite can be extended to a more general case where W is a nonnegative linear combination of a set of symmetric positive semidefinite matrices such that AWA^T ≻ 0. Further, boundedness has been shown for the projection onto the null space of A using as weight matrix the inverse of a nonnegative linear combination of a number of symmetric positive semidefinite matrices. This result
has been used to show boundedness of a projection operator arising in a primal-dual
interior method for convex quadratic programming.
The main tools for deriving these results have been the explicit formula for the
solution of a weighted linear least-squares problem given by Dikin [8], and the relation
between a projection onto a subspace with a certain weight matrix and the projection
onto the orthogonal complement using the inverse weight matrix.
An interesting question that is left open is whether the explicit bounds that are
given are sharp or not. In the case where all the matrices whose nonnegative linear
combination form the weight matrix are of rank one, the bounds are sharp. In the
general case, this is an open question. On a higher level, an interesting question
is whether the results of this paper can be utilized to give new complexity bounds
for quadratic programming, analogous to the case of linear programming, see, e.g.,
Vavasis and Ye [27, Section 9].
Appendix
A. Relationship to partitioned orthogonal matrices. In this
appendix we review a result by Gonzaga and Lara [15] concerning diagonally weighted
projections onto orthogonally complementary subspaces, and combine this result with
a result concerning singular values of submatrices of orthogonal matrices. It was in
fact these results which lead to the more general results relating weighted projection
onto a subspace and the projection onto its complementary subspace using the inverse
weight matrix, as described in Section 4.
Gonzaga and Lara [15] state that if Y is an n m orthogonal matrix and Z is a
matrix whose columns form an orthonormal basis for the null space of Y T , then
sup
where D+ is the set of positive denite diagonal nn matrices. They use a geometric
approach to prove this result. We note that Corollary 4.2, specialized to the case of
diagonal positive denite weight matrices, allows us to state the same result. Fur-
thermore, we obtain an explicit expression for the supremum by Corollary 2.2. The
following corollary summarizes this result.
Corollary A.1. Suppose that an n n orthogonal matrix Q is partitioned as
is an n m matrix. Let D+ denote the set of diagonal positive
denite n n matrices. Then,
sup
~
where J (Z T ) is the collection of sets of column indices associated with nonsingular
(n m) (n m) submatrices of Z T and J (Y T ) is the collection of sets of column
indices associated with nonsingular mm submatrices of Y T .
Proof. Since D 2 D+ if and only if D 1 2 D+ , Corollary 4.2 shows that
sup
The explicit expressions for the two suprema follow from Corollary 2.2.
Hence, in our setting, we would rather state the result of Gonzaga and Lara [15]
in the equivalent form
sup
with the expressions for the suprema stated in Corollary A.1.
Note that an implication of Corollary A.1 is that if an nn orthogonal matrix Q is
partitioned as
, where Y has m columns, there is a certain relationship
between the smallest singular value of all nonsingular (n m)(n m) submatrices of
Z and the smallest singular value of all nonsingular mm submatrices of Y . This is
in fact a consequence of a more general result, namely that if Q is partitioned further
as
where Z 1 is (n m) (n m), then all singular values of Z 1 and Y 2 that are less
than one are identical. This in turn is a consequence of properties of singular values
of submatrices of orthogonal matrices that can be obtained by the CS-decomposition
of an orthogonal matrix, see, e.g., Golub and Van Loan [14, Section 2.6.4].
This result relating the singular values of Z 1 and Y 2 of (A.1) implies the existence
of J and ~
J , that are complementary subsets of which the maxima in
Corollary A.1 are achieved. This observation lead us to the result that
for any positive denite diagonal D. Subsequently, this result was superseded by the
more general analysis presented in Section 4.
Acknowledgement
. We thank the two anonymous referees for their constructive
and insightful comments, which signicantly improved the presentation.
--R
Numerical techniques in mathematical programming
A Volume
A geometric property of the least squares solution of linear equations
A norm bound for projections with complex weights
Iterative solution of problems of linear and quadratic programming
The factorization of sparse symmetric inde
On linear least-squares problems with diagonally dominant weight matrices
Stability of symmetric ill-conditioned systems arising in interior methods for constrained optimization
Numerical Linear Algebra and Optimization
On the stability of the Cholesky factorization for symmetric quasi-definite systems
Matrix Computations
A note on properties of condition numbers
On practical conditions for the existence and uniqueness of solutions to the general equality quadratic programming problem
The geometry of the set of scaled projections
Matrix Analysis
Interior path-following primal-dual algorithms
On bounds for scaled projections and pseudoinverses
Real Analysis
On scaled projections and pseudoinverses
A Dantzig-Wolfe-like variant of Karmarkar's interior-point linear programming algorithm
Stable numerical algorithms for equilibrium systems
A primal-dual interior point method whose running time depends only on the constraint matrix
Upper bound and stability of scaled pseudoinverses
Stability of linear equations solvers in interior-point methods
--TR | quadratic programming;weighted least-squares problem;interior method;unconstrained linear least-squares problem
587833 | Stability of Structured Hamiltonian Eigensolvers. | Various applications give rise to eigenvalue problems for which the matrices are Hamiltonian or skew-Hamiltonian and also symmetric or skew-symmetric. We define structured backward errors that are useful for testing the stability of numerical methods for the solution of these four classes of structured eigenproblems. We introduce the symplectic quasi-QR factorization and show that for three of the classes it enables the structured backward error to be efficiently computed. We also give a detailed rounding error analysis of some recently developed Jacobi-like algorithms of Fassbender, Mackey, and Mackey [Linear Algebra Appl., to appear] for these eigenproblems. Based on the direct solution of 4 x 4, and in one case 8 x 8, structured subproblems these algorithms produce a complete basis of symplectic orthogonal eigenvectors for the two symmetric cases and a symplectic orthogonal basis for all the real invariant subspaces for the two skew-symmetric cases. We prove that, when the rotations are implemented using suitable formulae, the algorithms are strongly backward stable and we show that the QR algorithm does not have this desirable property. | Introduction
. This work concerns real structured Hamiltonian and skew-
Hamiltonian eigenvalue problems where the matrices are either symmetric or skew-
symmetric. We are interested in algorithms that are strongly backward stable for these
problems. In general, a numerical algorithm is called backward stable if the computed
solution is the true solution for slightly perturbed initial data. If, in addition, this
perturbed initial problem has the same structure as the given problem, then the
algorithm is said to be strongly backward stable.
There are three reasons for our interest in strongly backward stable algorithms.
First, such algorithms preserve the algebraic structure of the problem and hence
force the eigenvalues to lie in a certain region of the complex plane or to occur in
particular kinds of pairings. Because of rounding errors, algorithms that do not
respect the structure of the problem can cause eigenvalues to leave the required region
[26]. Second, by taking advantage of the structure, storage and computation can be
lowered. Finally, structure-preserving algorithms may compute eigenpairs that are
more accurate than the ones provided by a general algorithm.
Structured Hamiltonian eigenvalue problems appear in many scientific and engineering
applications. For instance, symmetric skew-Hamiltonian eigenproblems arise
in quantum mechanical problems with time reversal symmetry [9], [23]. In response
theory, the study of closed shell Hartree-Fock wave functions yields a linear response
eigenvalue equation with a symmetric Hamiltonian [21]. Also, total least squares
problems with symmetric constraints lead to the solution of a symmetric Hamiltonian
problem [17].
The motivation for this work comes from recently developed Jacobi algorithms for
structured Hamiltonian eigenproblems [10]. These algorithms are structure-preserving,
inherently parallelizable, and hence attractive for solving large-scale eigenvalue prob-
lems. Our first contribution is to define and show how to compute structured backward
errors for structured Hamiltonian eigenproblems. These backward errors are useful
for testing the stability of numerical algorithms. Our second contribution concerns
the stability of these new Jacobi-like algorithms. We give a unified description of
the algorithms for the four classes of structured Hamiltonian eigenproblems. This
provides a framework for a detailed rounding error analysis and enables us to show
that the algorithms are strongly backward stable when the rotations are implemented
using suitable formulae.
The organization of the paper is as follows. In section 2 we recap the necessary
background concerning structured Hamiltonians. In section 3 we derive computable
structured backward errors for structured Hamiltonian eigenproblems. In section 4,
we describe the structure-preserving QR-like algorithms proposed in [5] for structured
Hamiltonian eigenproblems. We give a unified description of the new Jacobi-like
algorithms and detail the Jacobi-like update for each of the four classes of structured
Hamiltonian. In section 5 we give the rounding error analysis and in section 6 we
use our computable backward errors to confirm empirically the strong stability of the
algorithms.
2. Preliminaries. A matrix P # R 2n-2n is symplectic if P T
-In
In
I n is the n - n identity matrix.
A matrix H # R 2n-2n is Hamiltonian if Hamiltonian
matrices have the form
where E, F, G # R n-n and F G. We denote the set of real Hamiltonian
matrices by H 2n .
A matrix S # R 2n-2n is skew-Hamiltonian if
Skew-Hamiltonian matrices have the form
where E, F, G # R n-n and F are skew-symmetric. We denote the
set of real skew-Hamiltonian matrices by SH 2n .
Note that if H # H 2n , then P
SH 2n , where P is an arbitrary symplectic matrix. Thus symplectic similarities
preserve Hamiltonian and skew-Hamiltonian structure. Also, symmetric and skew-symmetric
structures are preserved by orthogonal similarity transformations. Therefore
structure-preserving algorithms for symmetric or skew-symmetric Hamiltonian
or skew-Hamiltonian eigenproblems have to use real symplectic orthogonal trans-
formations, that is, matrices U # R 2n-2n satisfying U T As
in [10], we denote by SpO(2n) the group of real symplectic orthogonal matrices.
Any U # SpO(2n) can be written as
I and
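The structural definitions above translate directly into small numerical checks. The sketch below builds J, a random Hamiltonian matrix, a random skew-Hamiltonian matrix, and a symplectic orthogonal matrix of block-diagonal form, and verifies the defining identities; the random construction is purely illustrative.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 4
    I = np.eye(n)
    J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

    def sym(M):  return (M + M.T) / 2
    def skew(M): return (M - M.T) / 2

    E = rng.standard_normal((n, n))
    F = sym(rng.standard_normal((n, n)))
    G = sym(rng.standard_normal((n, n)))
    H = np.block([[E, F], [G, -E.T]])                    # Hamiltonian
    print(np.allclose((J @ H).T, J @ H))                 # (JH)^T = JH

    Fs = skew(rng.standard_normal((n, n)))
    Gs = skew(rng.standard_normal((n, n)))
    S = np.block([[E, Fs], [Gs, E.T]])                   # skew-Hamiltonian
    print(np.allclose((J @ S).T, -(J @ S)))              # (JS)^T = -JS

    P, _ = np.linalg.qr(rng.standard_normal((n, n)))
    U = np.block([[P, np.zeros((n, n))], [np.zeros((n, n)), P]])  # in SpO(2n)
    print(np.allclose(U.T @ U, np.eye(2 * n)), np.allclose(U.T @ J @ U, J))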
In Tables 2.1 and 2.2, we summarize the structure of Hamiltonian and skew-Hamiltonian matrices that are either symmetric or skew-symmetric, their eigenvalue properties, and their symplectic orthogonal canonical form. We use D ∈ R^{n×n} to denote a diagonal matrix and B ∈ R^{n×n} to denote a block-diagonal matrix that is the direct sum of 1 × 1 zero blocks and 2 × 2 blocks of the form [0 b; -b 0]. These canonical forms are consequences of results in [19].
Table 2.1: Properties of structured Hamiltonian matrices H ∈ H_2n. In the symmetric case the eigenvalues are real; in the skew-symmetric case they are pure imaginary and occur in pairs λ, -λ.
Table 2.2: Properties of structured skew-Hamiltonian matrices S ∈ SH_2n. In the symmetric case the eigenvalues are real and of double multiplicity; in the skew-symmetric case they are pure imaginary and of double multiplicity.
Next, we show that the eigenvectors of skew-symmetric Hamiltonian matrices can
be chosen to have structure. This property is important when defining and deriving
structured backward errors.
Lemma 2.1. The eigenvectors of a skew-symmetric Hamiltonian matrix H can
be chosen to have the form [ z
-iz ] with z # C n .
Proof. Let
HU be the canonical form of H with
symplectic orthogonal. The matrix
-iI
I
iI ] is unitary and diagonalizes the
canonical form of H:
# .
Hence
is an eigenvector basis for H and this shows that the eigenvectors can be taken to
have the form [ z
-iz ] with z # C n .
Note that an eigenvector of a skew-symmetric Hamiltonian matrix does not necessarily
have the form [ z
-iz ]. For instance, consider
Table 3.1: t, the number of parameters defining H, for the Hamiltonian and skew-Hamiltonian classes.
is an eigenvector of H, corresponding to the eigenvalue -id, that
is not of the form [ z
3. Structured backward error. We begin by developing structured backward
errors that can be used to test the strong stability of algorithms for our classes of
Hamiltonian eigenproblems.
3.1. Definition. For notational convenience, the symbol H denotes from now
on both Hamiltonian and skew-Hamiltonian matrices. Let (x̂, λ̂) be an approximate eigenpair for the structured Hamiltonian eigenvalue problem Hx = λx, H ∈ R^{2n×2n}. A natural definition of the normwise backward error of an approximate eigenpair is
    η(x̂, λ̂) = min { ε : (H + ΔH)x̂ = λ̂ x̂, ‖ΔH‖ ≤ ε ‖H‖ },
where we measure the perturbation in a relative sense and ‖·‖ denotes any vector norm and the corresponding subordinate matrix norm. Deif [8] derived the explicit expression for the 2-norm,
    η_2(x̂, λ̂) = ‖r‖_2 / ( ‖H‖_2 ‖x̂‖_2 ),
where r = λ̂ x̂ - H x̂ is the residual. This shows that the normwise relative backward error is a scaled residual. The componentwise backward error is a more stringent measure of the backward error in which the components of the perturbation ΔH are measured individually:
    ω(x̂, λ̂) = min { ε : (H + ΔH)x̂ = λ̂ x̂, |ΔH| ≤ ε |H| }.
Here inequalities between matrices hold componentwise. Geurts [12] showed that
    ω(x̂, λ̂) = max over 1 ≤ i ≤ 2n of |r_i| / ( |H| |x̂| )_i.
The componentwise backward error provides a more meaningful measure of the stability
than the normwise version when the elements in H vary widely in magnitude.
However, this measure is not entirely appropriate for our problems as it does not
respect any structure (other than sparsity) in H. Bunch [2] and Van Dooren [25] have
also discussed other situations when it is desirable to preserve structure in definitions
of backward errors.
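The unstructured backward errors just described are cheap to evaluate. The following sketch computes the normwise backward error ‖r‖_2/(‖H‖_2 ‖x̂‖_2) and the componentwise backward error max_i |r_i|/(|H||x̂|)_i for an approximate eigenpair obtained by perturbing an exact one; the test matrix is random and purely illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    N = 8                                     # matrix dimension (2n in the text)
    M = rng.standard_normal((N, N))
    H = M + M.T                               # any real matrix works; symmetric here

    lam, V = np.linalg.eigh(H)
    xhat = V[:, 0] + 1e-8 * rng.standard_normal(N)   # slightly perturbed eigenpair
    xhat /= np.linalg.norm(xhat)
    lhat = lam[0]

    r = lhat * xhat - H @ xhat
    eta = np.linalg.norm(r) / (np.linalg.norm(H, 2) * np.linalg.norm(xhat))
    omega = np.max(np.abs(r) / (np.abs(H) @ np.abs(xhat)))
    print(eta, omega)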
The four classes of structured Hamiltonian matrices we are dealing with are defined by t real parameters that make up E and F (see Table 3.1). We write this dependence as H = H(p) with p ∈ R^t. The notion of componentwise backward error can be extended to allow dependence of the perturbations on a set of parameters, which defines structured componentwise backward errors. Following this idea and notation, we define the structured relative normwise backward error by
(3.1)   μ(x̂, λ̂) = min { ε : (H + ΔH)x̂ = λ̂ x̂, ‖ΔH‖_F ≤ ε ‖H‖_F, ΔH = ΔH(Δp) for some Δp ∈ R^t },
where the requirement ΔH = ΔH(Δp) implies that ΔH has the same structure as H. The structured relative componentwise backward error is defined as in (3.1) but with the constraint ‖ΔH‖_F ≤ ε ‖H‖_F replaced by |ΔH| ≤ ε |H|.
In our case, the dependence of the data on the t parameters is linear. We naturally
require (#x,
#) to have any properties forced upon the exact eigenpairs, otherwise
the backward error will be infinite. In the next subsections, we give algorithms for
computing these backward errors. We start by describing a general approach that
was used in [13] in the context of structured linear systems and extend it to the case
where the approximate solution lies in the complex plane.
3.2. A general approach for the computation of -(# x,
#). Let
and
-+i#. By equating real and imaginary parts, the constraint
#x
in (3.1) becomes
# u
or equivalently #H [
# u
. Applying the vec operator (which stacks the
columns of a matrix into one long vector), we obtain
where# denotes the Kronecker product. We refer to Lancaster and Tismenetsky [18,
Chap. 12] for properties of the vec operator and the Kronecker product. By linearity
we have
-t of full rank and where #p is the t-vector of parameters defining #H.
There exists a diagonal matrix D 1 depending on the structure of H (symmetric/skew-
symmetric Hamiltonian/skew-Hamiltonian) such that
# u
. Using (3.4) we can rewrite (3.3)
as Y BD using (3.5),
-(#x,
y
This shows that the structured backward error is given in terms of the minimal 2-norm
solution to an underdetermined system. If the underdetermined system is consistent,
then the minimal 2-norm solution is given in terms of the pseudo-inverse by
In this case
-(#x,
When H is a symmetric structured Hamiltonian, we can assume that
# and
# x are
real. Therefore
and from (3.2) we have [
Applying the vec operation gives
# u
I 2n # vec(#-I
As
-I - H is also a symmetric structured Hamiltonian, we have by linearity that
vec(#-I
#- is the t-vector of parameters defining
#- lies in the range of Y BD Therefore, the underdetermined
system in (3.6) is consistent for symmetric Hamiltonians and for symmetric skew-
Hamiltonians. For a skew-symmetric Hamiltonian, we can again prove consistency
for pure imaginary approximate eigenvalues and approximate eigenvectors of the form
in Lemma 2.1. We have not been able to prove that the underdetermined system is
consistent for the skew-symmetric skew-Hamiltonian case.
As the dependence on the parameters is linear, in the definition of the structured
relative componentwise backward error #x,
#), we have the equivalence
|#H| #|H| #p| #|p|.
q. Then the smallest # satisfying |#p| #|p| is
#q# . The minimal #-norm solution of Y BD 2 can be approximated by
minimizing in the 2-norm. We have
#).
By looking at each problem individually, it is possible to reduce the size of the
underdetermined system. Nevertheless, solution of the system by standard techniques
still takes O(n 3 ) operations. In the next section, we show that by using a symplectic
quasi-QR factorization of the approximate eigenvector and residual (or some appropriate
parts) we can derive expressions for -(#x,
#) that are cheaper to compute for all
the structured Hamiltonians of interest except for skew-symmetric skew-Hamiltonians.
First, we define a symplectic quasi-QR factorization.
3.3. Symplectic quasi-QR factorization. We define the symplectic quasi-QR
factorization of an 2n -m matrix A by
where Q is real symplectic orthogonal, T 1 # R n-m is upper trapezoidal, and T 2 #
R n-m is strictly upper trapezoidal. Such a symplectic quasi-QR factorization has
also been discussed by Bunse-Gerstner [3, Cor. 4.5(ii)].Before giving an algorithm to
STABILITY OF STRUCTURED HAMILTONIAN EIGENSOLVERS 109
compute this symplectic quasi-QR factorization, we need to describe two types of elementary
orthogonal symplectic matrices that can be used to zero selected components
of a vector.
A symplectic Householder matrix H # R 2n-2n is a direct sum of n-n Householder
matrices:
where
diag # I k-1 , I n-k+1 -v T v
I n otherwise,
and v is determined such that for a given x # R n , P (k,
A symplectic Givens rotation G(k, # R 2n-2n is a Givens rotation where the
rotation is performed in the plane (k, k G(k, #) has the form
where # is chosen such that for a given x # R 2n , G(k,
We use a combination of these orthogonal transformations to compute our symplectic
quasi-QR factorization: symplectic Householder matrices are used to zero large
portions of a vector and symplectic Givens are used to zero single entries.
Algorithm 3.1 (symplectic quasi-QR factorization). Given a matrix
with A 1 , A 2 # R n-m , this algorithm computes the symplectic quasi-QR factorization
(3.8).
For
End
Determine G
End
We illustrate the procedure for a generic 6 - 4 matrix:
-#
-#
G3
-#
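A compact way to realize the factorization (3.8) numerically is sketched below: for each column, a symplectic Householder transformation diag(P, P) clears the subdiagonal of the lower block, a symplectic Givens rotation in the (j, n + j) plane clears its diagonal entry, and a second symplectic Householder clears the subdiagonal of the upper block. This is one straightforward realization consistent with the description above, not necessarily the exact organization of Algorithm 3.1.

    import numpy as np

    def house(x):
        """Householder vector v so that (I - 2 v v^T / v^T v) x is a multiple of e_1."""
        v = x.copy()
        v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        return v

    def apply_sym_house(A, n, j, v):
        # apply diag(P, P), with P acting on rows j:n, to both blocks of A
        if np.linalg.norm(v) == 0:
            return
        for rows in (slice(j, n), slice(n + j, 2 * n)):
            blk = A[rows, :]
            blk -= 2.0 * np.outer(v, v @ blk) / (v @ v)

    def symplectic_quasi_qr(A):
        """Return T = Q^T A with T[:n] upper and T[n:] strictly upper trapezoidal."""
        A = A.astype(float).copy()
        n, m = A.shape[0] // 2, A.shape[1]
        for j in range(m):
            apply_sym_house(A, n, j, house(A[n + j:2 * n, j]))  # zero lower block below j
            c, s = A[j, j], A[n + j, j]                         # symplectic Givens (j, n+j)
            h = np.hypot(c, s)
            if h > 0:
                G = np.array([[c / h, s / h], [-s / h, c / h]])
                A[[j, n + j], :] = G @ A[[j, n + j], :]
            apply_sym_house(A, n, j, house(A[j:n, j]))          # zero upper block below j
        return A

    rng = np.random.default_rng(9)
    n, m = 4, 3
    T = symplectic_quasi_qr(rng.standard_normal((2 * n, m)))
    print(np.allclose(np.tril(T[:n], -1), 0), np.allclose(np.tril(T[n:], 0), 0))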
3.4. Symmetric Hamiltonian eigenproblems. Let
be the residual vector and be the symplectic quasi-QR factorization (3.8)
with Q symplectic orthogonal and
. 0
e n+1,2
We have Q
which is equivalent to
e 110
e 22e n+1,2#
still a symmetric Hamiltonian matrix. Equation (3.10)
defines the first column of #
H. As |e 11
e 22 /e 11 , # - h
E) T
and #
be such that
e 11
e 22 -0 -
e 11
where the -'s are arbitrary real coe#cients. Then, any symmetric Hamiltonian of the
F
F -#
#x. The Frobenius norm of #H is minimized by setting the
-'s to zero in the definition of #
F . We obtain the following lemma.
Lemma 3.2. The backward error of an approximate eigenpair of a symmetric
Hamiltonian eigenproblem is given by
-(#x,
|e 11 |
is the quasi-triangular factor in the symplectic quasi-QR
factorization of [#x r] with
#I -H)#x. We also have
-(#x,
where e 2 is the second unit vector.
3.5. Skew-symmetric Hamiltonian eigenproblems. For skew-symmetric
Hamiltonian eigenproblems the technique developed in section 3.4 needs to be modified
as in this case r,
# x are complex vectors and we want to define a real skew-symmetric
Hamiltonian perturbation
so that (H
#x.
In the definition of the structured backward error (3.1), we now assume that
# is
pure imaginary and that
# x has the form [
(see Lemma 2.1). Taking the
plus sign in
# x, the equation (H
#x can be written as
E)#z.
Multiplying (3.12) by -i gives (3.11). Hence, we carry out the analysis with (3.11)
only. Setting
in (3.11) and equating real and imaginary
parts yields
which is equivalent to
Using
we show that w and s are orthogonal:
For the other choice of sign with
, the equation
#x
is equivalent to
and
we can show that w T
We can now carry on the analysis as in section 3.4. Let be the
symplectic quasi-QR factorization of [w s]. As w T we have that e
obtain #H by solving the underdetermined system
e 110
#e 22e n+1,2#
Lemma 3.3. The backward error of an approximate eigenpair (#x,
#) of a skew-symmetric
Hamiltonian eigenproblem with
pure imaginary and
x of the form
is given by
-(#x,
|e 11 |
is the quasi-triangular factor in the symplectic quasi-QR factorization
of [w s] with
if
We also have
-(#x,
where e 2 is the second unit vector.
3.6. Symmetric skew-Hamiltonian eigenproblems. The analysis for symmetric
skew-Hamiltonian eigenproblems is similar to that in section 3.4. The only
di#erence comes from noting that
F
STABILITY OF STRUCTURED HAMILTONIAN EIGENSOLVERS 113
using v T
#I-E)#x 1 . Instead of computing a symplectic
quasi-QR factorization of [#x r], we compute a symplectic quasi-QR factorization of
# x r] in order to introduce one more zero in the triangular factor R. We summarize
the result in the next lemma.
Lemma 3.4. The backward error of an approximate eigenpair of a symmetric
skew-Hamiltonian eigenproblem is given by
-(#x,
#) =|e 11 |
x r] is the quasi-triangular factor in the symplectic quasi-QR factorization
of [J
x r] with
#I -H)#x. We also have
-(#x,
#H#F .
3.7. Comments. Lemmas 3.2-3.4 provide an explicit formula for the backward
error that can be computed in O(n 2 ) operations.
For skew-symmetric skew-Hamiltonian matrices H, the eigenvectors are complex
with no particular structure. The constraint (H
#x in (3.1) can be written
in the form #H[#x), is the residual.
We were unable to explicitly construct matrices #H satisfying this constraint via a
symplectic QR factorization of [#x), #x), #(r), #(r)]. Thus, in this case, we have to
use the approach described in section 3.2 to compute -(#x,
#), which has the drawback
that it requires O(n 3 ) operations.
4. Algorithms for Hamiltonian eigenproblems. A simple but ine#cient
approach to solve structured Hamiltonian eigenproblems is to use the (symmetric or
unsymmetric as appropriate) QR algorithm on the 2n - 2n structured Hamiltonian
matrix. This approach is computationally expensive and uses 4n 2 storage locations.
Moreover, the QR algorithm does not use symplectic orthogonal transformations and
is therefore not structure-preserving.
Benner, Merhmann, and Xu's method [1] for computing the eigenvalues and invariant
subspaces of a real Hamiltonian matrix uses the relationship between the
eigenvalues and invariant subspaces of H and an extended 4n - 4n Hamiltonian ma-
trix. Their algorithm is structure-preserving for the extended Hamiltonian matrix
but is not structure-preserving for H. Therefore, it is not strongly backward stable
in the sense of this paper.
4.1. QR-like algorithms. Bunse-Gerstner, Byers, and Mehrmann [5] provide
a chart of numerical methods for structured eigenvalue problems, most of them based
on QR-like algorithms. In this section, we describe their recommended algorithms for
our structured Hamiltonian eigenproblems. In the limited case where rank(F
Byer's Hamiltonian QR algorithm [6] based on symplectic orthogonal transformations
yields a strongly backward stable algorithm.
For symmetric Hamiltonian eigenproblems, the quaternion QR algorithm [4] is
suggested. The quaternion QR algorithm is an extension of the Francis QR algorithm
for complex or real matrices to quaternion matrices. This algorithm uses exclusively
quaternion unitary similarity transformations so that it is backward stable. Compared
with the standard QR algorithm for symmetric matrices, this algorithm cuts the
storage and work requirements approximately in half. However, its implementation
requires quaternion arithmetic and it is not clear whether it is strongly backward
stable.
A skew-symmetric Hamiltonian H is first reduced via symplectic orthogonal transformations
to block antidiagonal form [ 0
-T
the blocks are symmetric tridi-
agonal. The complete solution is obtained via the symmetric QR algorithm applied
to T . The whole algorithm is strongly backward stable as it uses only real symplectic
orthogonal transformations that are known to be backward stable.
For symmetric skew-Hamiltonian problems, the use of the "X-trick" is suggested:
# with X =# 2
I I
-iI iI
# .
The eigenvalues of H are computed from the eigenvalue of the Hermitian matrices
using the Hermitian QR algorithm for instance. One drawback of
this approach is that it uses complex arithmetic and does not provide a real symplectic
orthogonal eigenvector basis. Hence the algorithm does not preserve the "realness"
of the original matrix.
Finally, for the skew-symmetric skew-Hamiltonian case, H is reduced to block-diagonal
form via a finite sequence of symplectic orthogonal transformations. The
blocks are themselves tridiagonal and skew-symmetric. Then Paardekooper's Jacobi
algorithm [22] or the algorithm in [11] for skew-symmetric tridiagonal matrices can
be used to obtain the complete solution. The whole algorithm is strongly backward
stable.
4.2. Jacobi-like algorithms. Byers [7] adapted the nonsymmetric Jacobi algorithm
[24] to the special structure of Hamiltonian matrices. The Hamiltonian Jacobi
algorithm based on symplectic Givens rotations and symplectic double Jacobi rotations
of the form
J# I 2n , where J is a 2 - 2 Jacobi rotation, preserves the Hamiltonian
structure. This Jacobi algorithm, when it converges, builds a Hamiltonian
Schur decomposition [7, Thm. 1]. For symmetric H, this Jacobi algorithm converges
to the canonical form [ D0
-D ] and is strongly backward stable. For skew-symmetric
Hamiltonian H, this Jacobi algorithm does not converge as the symplectic orthogonal
canonical form for H is not Hamiltonian triangular.
Recently, Fa-bender, Mackey, and Mackey [10] developed Jacobi algorithms for
structured Hamiltonian eigenproblems that preserve the structure and produce a complete
basis of symplectic orthogonal eigenvectors for the two symmetric cases and a
symplectic orthogonal basis for all the real invariant subspaces for the two skew-symmetric
cases. These Jacobi algorithms are based on the direct solution of 4 - 4,
and in one case 8 - 8, subproblems using appropriate transformations. The algorithms
work entirely in real arithmetic. Note that "realness" of the initial matrix can
be viewed as additional structure that these Jacobi algorithms preserve. We give a
unified description of these Jacobi-like algorithms for the four classes of structured
Hamiltonian eigenproblems under consideration.
Let H # R 2n-2n be a structured Hamiltonian matrix (see Table 2.1 and 2.2).
These Jacobi methods attempt to reduce the quantity (o#-diagonal norm)
STABILITY OF STRUCTURED HAMILTONIAN EIGENSOLVERS 115
where S is a set of indices depending on the structure of the problem using a sequence
of symplectic orthogonal transformations H # SHS T with S # R 2n-2n . The aim
is that H converges to its canonical form. In the following, we note A i,j,i+n,j+n the
restriction to the (i, n) plane of A.
Algorithm 4.1. Given a structured Hamiltonian matrix H # R 2n-2n and a
tolerance tol > 0, this algorithm overwrites H with its approximate canonical form
orthogonal and o#(PHP T
while
Choose (i,
Compute a symplectic orthogonal S
such that (SHS T ) i,j,i+n,j+n is in canonical form.
preserving structure
preserving structure
Note that the pair (i, uniquely determines a 4 - 4 principal submatrix
that also inherits the Hamiltonian or skew-Hamiltonian structure together with the
symmetry or skew-symmetry property. There are many ways of choosing the indices
(i, j) but this choice does not a#ect the rest of the analysis. We refer to n(n - 1)/2
updates as a sweep. Each sweep must be complete, that is, every part of the matrix
must be reached. We see immediately that any complete sweep of the (1, 1) block of
H consisting of 2-2 principal submatrices generates a corresponding complete sweep
of H.
For each 4 - 4 target submatrix, a symplectic orthogonal matrix that directly
computes the corresponding canonical form is constructed and embedded into the
in the same way that the 4 - 4 target has been extracted.
For skew-symmetric skew-Hamiltonians, the 4-4 based Jacobi algorithm does not
converge. The aim of these Jacobi algorithms is to move the weight to the diagonal
of either the diagonal blocks or o#-diagonal blocks. That cannot be done for a skew-symmetric
skew-Hamiltonian because these diagonals are zero. There is no safe place
where the norm of the target submatrix can be kept. However, if an 8 - 8 skew-symmetric
skew-Hamiltonian problem is solved instead, the 2 - 2 diagonal blocks of
H become a safe place for the norm of target submatrices and the resulting 8 - 8
based Jacobi algorithm is expected to converge. The complete sweep is defined by
partitioning blocks along the rightmost and
lower edges when n is odd. Hence, in this case we must also be able to directly solve
subproblems.
Immediately, we see that the di#cult part in deriving these algorithms is to define
the appropriate symplectic orthogonal transformation S that computes the canonical
form of the restriction to the (i, n) plane of H. Fa-bender, Mackey, and
Mackey [10] show that by using a quaternion representation of the 4 - 4 symplectic
orthogonal group, as well as 4 - 4 Hamiltonian and skew-Hamiltonian matrices in the
tensor square of the quaternion algebra, we can define and construct 4 - 4 symplectic
orthogonal matrices R that do the job. These transformations are based on rotations
of the subspace of pure quaternions.
We need to give all the required transformations in a form suitable for rounding
error analysis and also to facilitate the description of the structure preserving Jacobi
algorithms. We start by defining two types of quaternion rotations. This enables us
to encode the formulas in [10] into one. Let e s #= e 1 be a standard basis vector of R 4
and p # R 4 such that p #= 0, e T
(p is a pure quaternion), and p/#p# 2 #= e s . Let
s
# .
We define the left quaternion rotation by
# .
QL is symplectic orthogonal and not di#cult to compute. We have x
and the other components of x are just permutations of the coordinates of p.
We define the right quaternion rotation by
# .
The matrix QR is orthogonal. It is symplectic when s #= 3 and x
R 4 be nonzero. Following [10], we define the 4 - 4
symplectic orthogonal Givens rotation associated with p by
# .
We now have all the tools needed to define the symplectic orthogonal transformations
that directly compute the canonical form for each of the 4 - 4 structured
Hamiltonian eigenproblems of interest. We refer to [10] for more details about how
these transformations have been derived.
4.2.1. Symmetric Hamiltonian. Let H # R 4-4 be a symmetric Hamiltonian
matrix. The canonical form of H is obtained in two steps: first H is reduced to 2 - 2
block diagonal form and then the complete diagonalization is obtained by using a
double Jacobi rotation.
For the first step we consider the singular value decomposition of the 3-3 matrix
Let u 1 and v 1 be the left and right singular vectors corresponding to the largest
singular value # 1 and let
v1 ]. We have A T
STABILITY OF STRUCTURED HAMILTONIAN EIGENSOLVERS 117
so that e T
v, the vector x in (4.3) is such that
which implies that the right quaternion rotation QR (v, 2) is symplectic
orthogonal. As shown in [10], the product diagonalizes
H, that is, QHQ
E). Complete diagonalization is obtained by using
a double Jacobi rotation
is chosen such that
sin #
sin #
diagonalizes
In summary, the symplectic orthogonal transformation S used in Algorithm 4.1 is
equal to the identity matrix except in the (i, plane, where the (i, j, n
j)-restriction matrix is given by
4.2.2. Skew-symmetric Hamiltonian. Let H # R 4-4 be a skew-symmetric
Hamiltonian matrix and let p # R 4 be defined from the elements of H by
It is easy to verify that for
4.2.3. Symmetric skew-Hamiltonian. Let H # R 4-4 be a symmetric skew-
Hamiltonian matrix and let p # R 4 be defined from the elements of H by
diagonalizes H and
4.2.4. Skew-symmetric skew-Hamiltonian. For the convergence of the Jacobi
algorithm to be possible we need to solve an 8 - 8 subproblem. The matrix
H # R 8-8 is block diagonalized with three 4 - 4 symplectic Givens rotations of the
form (4.6) and one symplectic Givens rotation of the form (3.9). Let G be the product
of these rotations. We have
where
tridiagonal and skew-symmetric. The complete 2 - 2 block-
diagonalization is obtained by directly transforming
its real Schur form as
follows. In [20], Mackey showed that the transformation
directly computes the real Schur form of
E, that is,
(Q# I 2 )G is the symplectic orthogonal
transformation that computes the real Schur form of the 8 - 8 skew-symmetric skew-
Hamiltonian H:
# .
When n is odd, we have to solve a 6-6 subproblem for each complete sweep of the
Jacobi algorithm. As for the 8-8 case, the 6-6 skew-symmetric skew-Hamiltonian H
is first reduced to the form (4.7), where
tridiagonal and skew-symmetric.
This is done by using just one 4 - 4 symplectic Givens rotation followed by one
symplectic Givens rotation. Let
and
. Then computes directly the
real Schur form of
Moreover, we have e T
Q) and
# with
5. Error analysis of the Jacobi algorithms. In floating point arithmetic,
Algorithm 4.1 computes an approximate canonical form
T such that
where P is symplectic orthogonal, and an approximate basis of symplectic orthogonal
eigenvectors
P . We want to derive bounds for #H#
- I#, and
5.1. Preliminaries. We use the standard model for floating point arithmetic
[16]
where u is the unit roundo#. We assume that (5.1) holds also for the square roots
operation. To keep track of the higher terms in u we make use of the following result
[16, Lem. 3.1].
Lemma 5.1. If |# i | # u and #
We define
where p denotes a small integer constant whose value is unimportant. In the following,
computed quantities will be denoted by hats.
First, we consider the construction of a 4 - 4 Givens rotation and left and right
quaternion rotations.
Lemma 5.2. Let a 4 - 4 Givens rotation constructed according to
(4.6) with p # R 4 . Then the computed
G satisfies | #
G-G| # 5 |G|.
Proof. This result is a straightforward extension of Lemma 18.6 in [16] concerning
Givens rotations.
The rounding error properties of right and left quaternion rotations require more
attention. When p s < 0, the computation of #p# therefore the computation
of QL (p, s) or QR (p, s) is a#ected by cancellation. This problem can be overcome by
using another formula as shown in the next lemma.
Lemma 5.3. Let 4 - 4 left and right quaternion rotations
constructed according to
where
and
with p # R 4 given. Then the computed
QL and
| #
QL -QL | #
Proof. It is straightforward to verify that the expressions for QL (p, s) and QR (p, s)
in (5.2) and (5.3) agree with the definitions in (4.4) and (4.5).
We have f
As p s # 0, there exists # 5 such that f
using the same argument we have
We also have
f l
##
##
Using [16, Lem. 3.3] we have
and
Hence, we certainly have
In the following we use the term elementary symplectic orthogonal matrix to
describe any double Givens rotation, 4-4 Givens rotation, or left or right quaternion
rotation that is embedded as a principal submatrix of the identity matrix I # R 2n-2n .
We have proved that any computed elementary symplectic orthogonal matrix
used by the Jacobi algorithm satisfies a bound of the form
| #
Lemma 5.4. Let x # R 2n-2n and consider the computation of
Px, where
P is
a computed elementary symplectic orthogonal matrix satisfying (5.5). The computed
y satisfies
where P is the exact elementary symplectic orthogonal matrix.
Proof. The vector
y di#ers from x only in elements We have
We obtain similar results for
y n+i , and
y n+j . Hence,
As
Finally, we define
and note that #x#
Now, we consider the pre- and postmultiplication of a matrix H by an approximate
elementary symplectic orthogonal matrix
Lemma 5.5. Let H # R 2n-2n and P # R 2n-2n be any elementary symplectic
orthogonal matrix such that f l(P ) satisfies (5.5). Then,
f
STABILITY OF STRUCTURED HAMILTONIAN EIGENSOLVERS 121
Proof. Let h i be the ith column of H. By Lemma 5.4 we have
The same result holds for h j , h n+i , and h n+j and the other columns of H are un-
changed. Hence, f
Similarly,
B)P T with #
B#F . Then, with
with #
As a consequence of Lemma 5.5, if H k+1 is the matrix obtained after one Jacobi
update with S k (which is the product up to six elementary symplectic orthogonal
matrices), we have
is the exact transformation for
Up to now, we made no assumption on H. If H is a structured Hamiltonian
matrix, the (i, j)-restriction of RHR T is in canonical form. For instance,
if H is a skew-symmetric Hamiltonian matrix, in a computer implementation the
diagonal elements of H are not computed but are set to zero. Also, h ij , h i,j+n and by
skew-symmetry h ji , h j+n,i are set to zero. But by forcing these elements to be zero,
we are making the error smaller so the bounds still hold.
Because of the structure of the problem, both storage and the flop count can be
reduced by a factor of four. Any structured Hamiltonian matrix needs less than n 2 +n
storage locations. If only the t parameters defining H are computed, the structure in
the error is preserved and #H has the same structure as H. It is easy to see that
the bounds in Lemma 5.6 are still valid with the property that #H has the same
structure as H.
Theorem 5.6. Algorithm 4.1 for structured Hamiltonians H compute a canonical
T such that
where #H has the same structure as H and #H#F # k #H#F , where k is the
number of symplectic orthogonal transformations S i applied for each Jacobi update.
The computed basis of symplectic orthogonal eigenvectors
satisfies
Proof. From (5.6), one Jacobi update of H satisfies
For the second update we have
122 FRANC-OISE TISSEUR
Continuing
in this fashion, we find that, after k updates,
1 . S T
k with #H k #F #
In a similar way, using the first part of Lemma 5.5 we have
After k updates,
readily.
Theorem 5.6 shows that the computed eigenvalues are the exact eigenvalues of a
nearby structured Hamiltonian matrix and that the computed basis of eigenvectors is
orthogonal and symplectic up to machine precision. This proves the strong backward
stability of the Jacobi algorithms.
6. Numerical experiments. To illustrate our results we present some numerical
examples. All computations were carried out in MATLAB, which has unit roundo#
For symmetric Hamiltonians, symmetric skew-Hamiltonians, and skew-symmetric
Hamiltonians with approximate eigenvector
# x of the form [ z
-iz ], computing -(#x,
#) in
involves a symplectic quasi-QR factorization of a 2n - 2 matrix, which can be
done in order n 2 flops, a cost negligible compared with the O(n 3 ) cost of the whole
eigendecomposition.
For skew-symmetric Hamiltonians with approximate eigenvector
x not of the form
-iz ], and for skew-symmetric skew-Hamiltonians, the computation of -(#x,
#) requires
O(n 3 ) flops as we have to find the minimal 2-norm solution of a large underdetermined
system in (3.6). Thus, in this case, -(#x,
#) is not a quantity we would compute
routinely in the course of solving a problem.
Note that in our implementation of the Jacobi-like algorithm for skew-symmetric
Hamiltonians we choose the approximate eigenvectors to be the columns of P [ I
-iI
I
where P is the accumulation of the symplectic orthogonal transformations used by
the algorithm to build the canonical form. In this case, the approximate eigenvectors
x are guaranteed to be of the form [ z
To test the strong stability of numerical algorithms for solving structured Hamiltonian
eigenproblems, we applied the direct search maximization routine mdsmax of
the MATLAB Test Matrix Toolbox [15] to the function
1#i#2n
are the computed eigenpairs. In this way we carried out a search for
problems on which the algorithms performs unstably.
As expected from the theory, we could not generate examples for which the structured
backward error for the Jacobi-like algorithms is large: -(#x,
#) < nu#H#F in all
our tests.
The symmetric QR algorithm does not use symplectic orthogonal transformations
and is therefore not structure-preserving. To our surprise, we could not generate examples
of symmetric Hamiltonian and symmetric skew-Hamiltonian matrices for which
STABILITY OF STRUCTURED HAMILTONIAN EIGENSOLVERS 123
Table
Backward error of the eigenpair for of the 4-4 skew-symmetric Hamiltonian defined
by (6.1).
#) -max (#x,
#)
Jacobi-like algorithm
Table
Backward errors of the approximation of the eigenvalue 0 for a 30-30 random skew-symmetric
skew-Hamiltonian matrix.
|
#)
any of the eigenpairs computed by the symmetric QR algorithm has a large backward
error. However, the QR algorithm does not compute a symplectic orthogonal basis
of eigenvectors and also, it is easy to generate examples for which the -# structure
for symmetric Hamiltonians and eigenvalue multiplicity 2 structure for symmetric
skew-Hamiltonians is not preserved. If we generalize the definition of the structured
backward error of a single eigenpair to a set of k eigenpairs, the symmetric QR algorithm
is likely to produce sets of eigenpairs with an infinite structured backward
error. The QR-like algorithm for symmetric skew-Hamiltonians is likely to provide
eigenvectors that are complex instead of real, yielding an infinite structured backward
error in (3.14).
The good backward stability of individual eigenpairs computed by the QR algorithm
does not hold for the skew-symmetric Hamiltonian case. For instance, we
considered the skew-symmetric Hamiltonian eigenproblem
# , with
whose eigenvalues are distinct: In Table
6.1, we give the normwise, componentwise, and structured normwise backward error
of the eigenpair for computed by the unsymmetric QR algorithm and
the skew-symmetric Jacobi algorithm. The QR algorithm does not use symplectic
orthogonal transformations and the computed eigenvectors do not have the structure
-iz ]. Therefore, for the computation of - max (#x,
#), we use the general formula (3.7).
In the skew-symmetric skew-Hamiltonian case, when n is odd, 0 is an eigenvalue
of multiplicity two and is not always well approximated with the unsymmetric QR
algorithm. We generated a random 15 - 15 E and F . We give in Table 6.2 the
backward errors associated with the approximation of the eigenvalue 0 for both the
QR algorithm and Jacobi algorithm.
7. Conclusion. The first contribution of this work is to extend existing definitions
of backward errors in a way appropriate to structured Hamiltonian eigen-
problems. We provided computable formulae that are inexpensive to evaluate except
for skew-symmetric skew-Hamiltonians. Our numerical experiments showed that for
symmetric structured Hamiltonian eigenproblems, the symmetric QR algorithm computes
eigenpairs with a small structured backward error but the algebraic properties
of the problem are not preserved.
Our second contribution is a detailed rounding error analysis of the new Jacobi
algorithms of Fa-bender, Mackey, and Mackey [10] for structured Hamiltonian eigen-
problems. These algorithms are structure-preserving, inherently parallelizable, and
hence attractive for solving large-scale eigenvalue problems. We proved their strong
stability when the left and right quaternion rotations are implemented using our formulae
(5.2), (5.3). Jacobi algorithms are easy to implement and o#er a good alternative
to QR algorithms, namely, the unsymmetric QR algorithm, which we showed to be
not strongly backward stable for skew-symmetric Hamiltonian and skew-Hamiltonian
eigenproblems, and the algorithm for symmetric skew-Hamiltonians based on applying
the QR algorithm to (4.1), which does not respect the "realness" of the problem.
Acknowledgments
. I thank Nil Mackey for pointing out the open question
concerning the strong stability of the Jacobi algorithms for structured Hamiltonian
eigenproblems and for her suggestion in fixing the cancellation problem when computing
the quaternion rotations. I also thank Steve Mackey for his helpful comments
on an earlier manuscript.
--R
A new method for computing the stable invariant subspace of a real Hamiltonian matrix
The weak and strong stability of algorithms in numerical linear algebra
Matrix factorizations for symplectic QR-like methods
A quaternion QR algorithm
A chart of numerical methods for structured eigenvalue problems
A Hamiltonian QR algorithm
IEEE Trans.
A relative backward perturbation theorem for the eigenvalue problem
Hamilton and Jacobi come full circle: Jacobi algorithms for structured Hamiltonian eigenproblems
Accurately counting singular values of bidiagonal matrices and eigenvalues of skew-symmetric tridiagonal matrices
A contribution to the theory of condition
Backward error and condition of structured linear systems
Structured backward error and condition of generalized eigenvalue problems
The Test Matrix Toolbox for Matlab (version 3.0)
Accuracy and Stability of Numerical Algorithms
Oxford University Press
The Theory of Matrices
Canonical forms for Hamiltonian and symplectic matrices and pencils
Hamilton and Jacobi meet again: Quaternions and the eigenvalue problem
Solution of the large matrix equations which occur in response theory
An eigenvalue algorithm for skew-symmetric matrices
A Jacobi-like algorithm for computing the Schur decomposition of a non-Hermitian matrix
Structured linear algebra problems in digital signal processing
A symplectic method for approximating all the eigenvalues of a Hamiltonian matrix
--TR | quaternion rotation;structure-preserving;symplectic;jacobi algorithm;backward error;skew-Hamiltonian;hamiltonian;symmetric;rounding error;skew-symmetric |
587834 | Structured Pseudospectra for Polynomial Eigenvalue Problems, with Applications. | Pseudospectra associated with the standard and generalized eigenvalue problems have been widely investigated in recent years. We extend the usual definitions in two respects, by treating the polynomial eigenvalue problem and by allowing structured perturbations of a type arising in control theory. We explore connections between structured pseudospectra, structured backward errors, and structured stability radii. Two main approaches for computing pseudospectra are described. One is based on a transfer function and employs a generalized Schur decomposition of the companion form pencil. The other, specific to quadratic polynomials, finds a solvent of the associated quadratic matrix equation and thereby factorizes the quadratic $\lambda$-matrix. Possible approaches for large, sparse problems are also outlined. A collection of examples from vibrating systems, control theory, acoustics, and fluid mechanics is given to illustrate the techniques. | Introduction
. Pseudospectra are an established tool for gaining insight into
the sensitivity of the eigenvalues of a matrix to perturbations. Their use is widespread
with applications in areas such as fluid mechanics, Markov chains, and control theory.
Most of the existing work is for the standard eigenproblem, although attention has also
been given to matrix pencils [4], [23], [33], [40], [46]. The literature on pseudospectra
is large and growing. We refer to Trefethen [41], [42], [43] for thorough surveys of
pseudospectra and their computation for a single matrix; see also the Web site [3].
In this work we investigate pseudospectra for polynomial matrices (or #-matrices)
0: m. We first define the #-pseudospectrum and obtain a computationally
useful characterization. We examine the relation between the backward
error of an approximate eigenpair of the polynomial eigenvalue problem associated
with (1.1), the #-pseudospectrum, and the stability radius. We consider both unstructured
perturbations and structured perturbations of a type commonly used in
control theory.
Existing methods for the computation of pseudospectra in the case
standard and generalized eigenvalue problems) do not generalize straightforwardly to
matrix polynomials. We develop two techniques that allow e#cient computation for
m > 1. A transfer function approach employs the generalized Schur decomposition of
the mn - mn companion form pencil. For the quadratic case an alternative
# Received by the editors May 1, 2000; accepted for publication (in revised form) by M. Chu
February 7, 2001; published electronically June 8, 2001. This work was supported by Engineering
and Physical Sciences Research Council grant GR/L76532.
http://www.siam.org/journals/simax/23-1/37145.html
Department of Mathematics, University of Manchester, Manchester, M13 9PL, England
(ftisseur@ma.man.ac.uk, http://www.ma.man.ac.uk/-ftisseur/, higham@ma.man.ac.uk,
http://www.ma.man.ac.uk/-higham/). The work of the second author was supported by a Royal
Society Leverhulme Trust Senior Research Fellowship.
solvent approach computes a solvent of the associated quadratic matrix equation
thereby factorizes the quadratic #-matrix; it works all
the time with n - n matrices once the solvent has been obtained. We give a detailed
comparison of these approaches and also outline techniques that can be e#ciently
used when n is so large as to preclude factorizations.
In the last section, we illustrate our theory and techniques on applications from
vibrating systems, control theory, acoustics, and fluid mechanics.
2. Pseudospectra.
2.1. Definition. The polynomial eigenvalue problem is to find the solutions
(x, #) of
where P (#) is of the form (1.1). If x #= 0 then # is called an eigenvalue and x the
corresponding right eigenvector; y #= 0 is a left eigenvector if y # P
of eigenvalues of P is denoted by #(P ). When Am is nonsingular P has mn finite
eigenvalues, while if Am is singular P has infinite eigenvalues. Good references for
the theory of #-matrices are [8], [20], [21], [37].
Throughout this paper we assume that P has only finite eigenvalues (and pseu-
doeigenvalues); how to deal with infinite eigenvalues is described in [16].
For notational convenience, we introduce
We define the #-pseudospectrum of P by
with
Here the # k are nonnegative parameters that allow freedom in how perturbations are
measured-for example, in an absolute sense (# k # 1) or a relative sense
By setting # unperturbed. The norm,
here and throughout, is any subordinate matrix norm. Occasionally, we will specialize
to the norm # p subordinate to the H-older vector p-norm.
When reduces to the
standard definition of #-pseudospectrum of a single matrix:
with #A# .
It is well known [43] that (2.4) is equivalent to
In the following lemma, we provide a generalization of this equivalence for the #-
pseudospectrum of P .
Lemma 2.1.
STRUCTURED PSEUDOSPECTRA FOR POLYNOMIAL EIGENPROBLEMS 189
Proof. Let S denote the set on the right-hand side of the claimed equality. We
first show that # (P ) implies # S. If # is an eigenvalue of P this is immediate,
so we can assume that # is not an eigenvalue of P and hence that P (#) is nonsingular.
Since
is singular, we have
so that # S.
Now let # S. Again we can assume that
with #, so that #x# = 1.
Then there exists a matrix H with
Lem. 6.3]). Let
y
y
and
We now apportion E between the A k by defining
where for complex z we define
Then
and #A k # k #, 0: m. Hence # (P ).
The characterization of the #-pseudospectrum in Lemma 2.1 will be the basis of
our algorithms for computing pseudospectra.
We note that for is the root neighborhood of the polynomial P
introduced by Mosier [28], that is, the set of all polynomials obtained by elementwise
perturbations of P of size at most #. This set is also investigated by Toh and Trefethen
[38], who call it the #-pseudozero set.
2.2. Connection with backward error. A natural definition of the normwise
backward error of an approximate eigenpair (x, #) of (2.1) is
and the backward error for an approximate eigenvalue # is given by
#) := min
#(x, #).
190 FRANC-OISE TISSEUR AND NICHOLAS J. HIGHAM
By comparing the definitions (2.3) and (2.6) it is clear that the #-pseudospectrum can
be expressed in terms of the backward error of # as
The following lemma gives an explicit expression for #(x, #) and #). This lemma
generalizes results given in [36] for the 2-norm and earlier in [5], [10] for the generalized
eigenvalue problem.
Lemma 2.2. The normwise backward error #(x, #) is given for x #= 0 by
p(|#x#
If # is not an eigenvalue of P then
#) =p(|#P (#)
Proof. It is straightforward to show that the right-hand side of (2.8) is a lower
bound for #(x, #). That the lower bound is attained is proved using a construction
for #A k similar to that in the proof of Lemma 2.1. The expression (2.9) follows on
using the equality, for nonsingular C # C n-n , min x#=0 #Cx#x#C
We observe that the expressions (2.7) and (2.9) lead to another proof of Lemma 2.1.
2.3. Structured perturbations. We now suppose that P (#) is subject to
structured perturbations that can be expressed as
with . The matrices D
and E are fixed and assumed to be of full rank, and they define the structure of the
perturbations; # is an arbitrary matrix whose elements are the free parameters. Note
that #A 0 , . , #Am in (2.10) are linear functions of the parameters in #, but that
not all linear functions can be represented in this form. We choose this particular
structure for the perturbations because it is one commonly used in control theory
[17], [18], [30] and it leads to more tractable formulae than a fully general approach.
Note, for instance, that the system
(which leads to a polynomial eigenvalue problem with may be interpreted as
a closed loop system with unknown static linear output feedback #; see Figure 2.1.
Note that unstructured perturbations are represented by the special case of (2.10)
with
For notational convenience, we introduce
Corresponding to (2.10) we have the following definition of structured backward error
for an approximate eigenpair (x, #):
STRUCTURED PSEUDOSPECTRA FOR POLYNOMIAL EIGENPROBLEMS 191
Fig. 2.1. Closed loop system with unknown static linear output feedback #.
and the backward error for an approximate eigenvalue is
#(x, #; D,E).
In the next result we use a superscript "+" to denote the pseudo-inverse [9].
Lemma 2.3. The structured backward error #(x, #; D,E) in the Frobenius norm
is given by
#F
if the system
is consistent; otherwise # F (x, #; D,E) is infinite.
Proof. It is immediate that # F (x, #; D,E) is the Frobenius norm of the minimum
Frobenius norm solution to (2.13). The result follows from the fact that
is the solution of minimum Frobenius norm to the consistent system
sect. 3.4.8].
To gain some insight into the expression (2.12) we consider the case of unstructured
but weighted perturbations, as in (2.11) but with
The system (2.13) is now trivially consistent and (2.12) gives
using the fact that #ab . The expression (2.14) di#ers
from that for #(x, #) in (2.8) for the 2-norm only by having the 2-norm of the vector
rather than the 1-norm in the denominator.
Lemma 2.4. If # is not an eigenvalue of P (#) then the structured backward error
#; D,E) is given by
(2.
192 FRANC-OISE TISSEUR AND NICHOLAS J. HIGHAM
Proof. We have
min
The companion form of P (#P (#) is given by
where
. 0
I
-A
I
I
I
Am
and
#Am
# .
As
# .
Then, using the identity det(I both AB and
BA are defined [47, p. 54],
det(P (#P
using [45, Lem. 1],
we have
But it is easily verified that
D,
STRUCTURED PSEUDOSPECTRA FOR POLYNOMIAL EIGENPROBLEMS 193
so that
We define the structured #-pseudospectrum by
Analogously to the unstructured case, #
so from Lemma 2.4 we have
which is a generalization of a result of Hinrichsen and Kelb [17, Lem. 2.2] for the
#-pseudospectrum of a single matrix.
2.4. Connection between backward error and stability radius. In many
mathematical models (e.g., those of a dynamical system) it is required for stability
that a matrix has all its eigenvalues in a given open subset C g # of the complex
plane. Various stability radii have been defined that measure the ability of a matrix
to preserve its stability under perturbations.
We partition the complex plane C into two disjoint subsets C g and C b , with
# an open set.
Consider perturbations of the form in (2.10). Following Pappas and Hinrichsen [30]
and Genin and Van Dooren [7], we define the complex structured stability radius of
the #-matrix P with respect to the perturbation structure (D, E) and the partition
by
r
Let #C b be the boundary of C b . By continuity, we have
r
#C s-t { #(P (#) +D#E(#C b # }
#; D,E).
Thus we have expressed the stability radius as an infimum of the eigenvalue backward
error. Using Lemma 2.4 we obtain the following result.
Lemma 2.5. If # is not an eigenvalue of P then
r
and for unstructured perturbations and the p-norm we have
r
The result for the unstructured case in the second part of this lemma is also
obtained by Pappas and Hinrichsen [30, Cor. 2.4] and Genin and Van Dooren [7,
Thm. 2].
194 FRANC-OISE TISSEUR AND NICHOLAS J. HIGHAM
3. Computation of pseudospectra. In this section, we consider the computation
of # (P ), concentrating mainly on the 2-norm. We develop methods for unstructured
perturbations and show how they can be extended to structured perturbations
of the form in (2.10).
Lemma 2.1 shows that the boundary of # (P ) comprises points z for which the
scaled resolvent norm p(|z|)#P (z) Hence, as for pseudospectra of a
single matrix, we can obtain a graphical representation of the pseudospectra of a
polynomial eigenvalue problem by evaluating the scaled resolvent norm on a grid of
points z in the complex plane and sending the results to a contour plotter. We refer
to Trefethen [42] for a survey of the state of the art in computation of pseudospectra
of a single matrix.
The region of interest in the complex plane will usually be determined by the
underlying application or by prior knowledge of the spectrum of P . In the absence
of such information we can select a region guaranteed to enclose the spectrum. If
Am is nonsingular (so that all eigenvalues are finite) then by applying the result
(A)| #A#" to the companion form (2.16) we deduce that
for any p-norm. Alternatively, we could bound max j |# j (P )| by the largest absolute
value of a point in the numerical range of P [24], but computation of this number
is itself a nontrivial problem. For much more on bounding the eigenvalues of matrix
polynomials see [15].
For the 2-norm, #P (z) denotes the smallest
singular value. If the grid is # and # min is computed using the Golub-Reinsch
SVD algorithm then the whole computation requires roughly
which is prohibitively expensive for matrices of large dimension and a fine grid. Using
the fact that # min (P (z)) is the square root of # min (P (z) # P (z)), we can approximate
with the power iteration or Lanczos iteration applied to P (z)
In the case of a single matrix, Lui [25] introduced the idea of using the Schur form of
A in order to speed up the computation of # min Unfortunately,
for matrix polynomials of degree m # 2 no analogue of the Schur form exists (that
is, at most two general matrices can be simultaneously reduced to triangular form).
We therefore look for other ways to e#ciently evaluate or approximate #P (z) -1 # for
many di#erent z.
3.1. Transfer function approach. The idea of writing pseudospectra in terms
of transfer functions is not new. Simoncini and Gallopoulos [34] used a transfer function
framework to rewrite most of the techniques used to approximate #-pseudospectra
of large matrices, yielding interesting comparisons as well as better understanding of
the techniques. Hinrichsen and Kelb [17] investigated structured pseudospectra of a
single matrix with perturbations of the form in (2.10), and they expressed the structured
#-pseudospectrum in terms of a transfer function.
Consider the equation
STRUCTURED PSEUDOSPECTRA FOR POLYNOMIAL EIGENPROBLEMS 195
It can be rewritten as
wm
where F and G are defined in (2.16). Hence
-I
# u.
Since this equation holds for all u, it follows that
-I
# .
This equality can also be deduced from the theory of #-matrices [21, Thm. 14.2.1].
We have thus expressed the resolvent in terms of a transfer function.
In control theory, P (z) -1 corresponds to the transfer function of the linear time-invariant
multivariate system described by
I
Several algorithms have been proposed in the literature [22], [27] to compute transfer
functions at a large number of frequencies, most of them assuming that G = I. Our
objective is to e#ciently compute the norm of the transfer function, rather than to
compute the transfer function itself.
For structured perturbations we see from (2.17) that the transfer function P (z)
is replaced by
E(z)P (z)
-D
# .
All the methods described below for the dense case are directly applicable with obvious
changes.
We would like a factorization of F - zG that enables e#cient evaluation or application
of di#erent z. There are various possibilities, including,
when G is nonsingular,
196 FRANC-OISE TISSEUR AND NICHOLAS J. HIGHAM
is a Schur decomposition, with W unitary and T upper tri-
angular. However this approach is numerically unstable when G is ill conditioned. A
numerically stable reduction is obtained by computing the generalized Schur decom-
position
where W and Z are unitary and T and S are upper triangular. Then
-I
# .
Hence once the generalized Schur decomposition has been computed, we can compute
x at a cost of O((mn) 2 ) flops, since T - zS is triangular of
dimension mn. For the 2-norm we can therefore e#ciently approximate #P (z)
using inverse iteration or the inverse Lanczos iteration, that is, the power method or
the Lanczos method applied to P (z)
The cost of the computation breaks into two parts: the cost of the initial transformations
and the cost of the computations at each of the # 2 grid points. Assuming
that (3.3) is computed using the QZ algorithm [9, Sec. 7.7] and the average number
of power method or Lanczos iterations per grid point is k, the total cost is about
For the important special case eigenvalue problem), this cost is
Comparing with (3.1) we see that this method is a significant improvement over the
SVD-based approach for a su#ciently fine grid and a small degree m.
For the 2-norm note that, because of the two outer factors in (3.4), we cannot
discard the unitary matrices Z and W , unlike in the analogous expression for the
resolvent of a single matrix in the standard eigenproblem. For the 1- and #-norms
we can e#ciently estimate #P (z) using the algorithm of Higham and Tisseur [14],
which requires only the ability to multiply matrices by P (z) -1 and P (z) -# .
An alternative to the generalized Schur decomposition is the generalized
Hessenberg-triangular form, which di#ers from (3.3) in that one of T and S is upper
Hessenberg. The Hessenberg form is cheaper to compute but more expensive to work
with. It leads to a smaller overall flop count when k# 2 >
# 25mn.
3.2. Factorizing the quadratic polynomial. The transfer function-based method
of the previous section has the drawback that it factorizes matrices of dimension m
times those of the original polynomial matrix. We now describe another method,
particular to the quadratic case, that does not increase the size of the problem.
Suppose we can find a matrix S such that A 2 S 2 +A 1 S+A that is, a solvent
of the quadratic matrix equation A 2
If we compute the Schur decomposition
STRUCTURED PSEUDOSPECTRA FOR POLYNOMIAL EIGENPROBLEMS 197
and the generalized Schur decomposition
then
so a vector can be premultiplied by Q(z) -1 or its conjugate transpose in O(n 2 ) flops
for any z. Moreover, for the 2-norm we can drop the outer Q and W # factors in (3.7),
by unitary invariance, and hence we do not need to form W . For the 2-norm, the
total cost of this method is
where c S is the cost of computing a solvent and we have assumed that we precompute
Z. Comparing this flop count with (3.5) we see that the cost per grid point of the
solvent approach is much lower.
The success of this method depends on two things: the existence of solvents
and being able to compute one at a reasonable cost. Some su#cient conditions for
the existence of a solvent are summarized in [13]. In particular, for an overdamped
problem, one for which A 2 and A 1 are Hermitian positive definite, A 0 is Hermitian
positive semidefinite, and a solvent is
guaranteed to exist.
Various methods are available for computing solvents [12], [13]. One of the most
generally useful is Newton's method, optionally with exact line searches, which requires
a generalized Sylvester equation in n-n matrices to be solved on each iteration,
at a total cost of about 56n 3 flops per iteration. If Newton's method converges within
8 iterations or so, so that c S # 448n 3 flops, this approach is certainly competitive in
cost with the transfer function approach.
When there is a gap between the n largest and n smallest eigenvalues ordered by
modulus, as is the case for overdamped problems [20, Sec. 7.6], Bernoulli iteration is an
e#cient way of computing the dominant or minimal solvent S [13]. If t iterations are
needed for convergence to the dominant or minimal solvent then the cost of Bernoulli
iteration is about c flops. Bernoulli iteration converges only linearly, but
convergence is fast if the eigenvalue gap is large.
A third approach to computing a solvent is to use a Schur method from [13],
based on the following theorem. Let F and G be defined as in (2.16), so that
-A
# .
Theorem 3.1 (Higham and Kim [13]). All solvents of Q(X) are of the form
is a generalized Schur decomposition with Q and Z unitary and T and S upper tri-
angular, and where all matrices are partitioned as block 2 - 2 matrices with n - n
blocks.
The method consists of computing the generalized Schur decomposition (3.9) by
the QZ algorithm and then forming
11 . The generalized Schur decomposition
may need to be reordered in order to obtain a nonsingular Z 11 . Note that the
198 FRANC-OISE TISSEUR AND NICHOLAS J. HIGHAM
unitary factor Q does not need to be formed. For this method, c
where the constant r depends on the amount of reordering required. From (3.8), the
total cost is now
which is much more favorable than the cost (3.5) of the transfer function method.
For higher degree polynomials we can generalize this approach by attempting to
linear factors by recursively computing solvents. However, for degrees
greater than 2 classes of problem for which a factorization into linear factors exists
are less easily identified and the cost of Newton's method (for example) is much higher
than for
3.3. Large-scale computation. All the methods described above are intended
for small- to medium-scale problems for which Schur and other reductions are pos-
sible. For large, possibly sparse, problems, di#erent techniques are necessary. These
techniques can be classified into two categories: those that project to reduce the size
of the problem and then compute the pseudospectra of the reduced problem, and
those that approximate the norm of the resolvent directly.
3.3.1. Projection approach. For a single matrix, A, Toh and Trefethen [39]
and Wright and Trefethen [48] approximate the resolvent norm by the Arnoldi method;
that is, they approximate or by # min (
where Hm is the square Hessenberg matrix of dimension m # n obtained from the
Arnoldi process and
Hm is the matrix Hm augmented by an extra row. Simoncini
and Gallopoulos [34] show that a better but more costly approximation is obtained by
approximating #(A- zI)
is the orthonormal
basis generated during the Arnoldi process. These techniques are not applicable
to the polynomial eigenvalue problem of degree larger than one because of the lack of
a Schur form for the Arnoldi method to approximate.
A way of approximating #P (z) -1 # for all z is through a projection of P (z)
a lower dimensional subspace. Let V k be an n - k matrix with orthonormal columns.
We can apply one of the techniques described in the previous sections to compute
pseudospectra of the projected polynomial eigenvalue problem
A possible choice for V k is an orthonormal basis of k selected linearly independent
eigenvectors of P (#). In this case,
P (#) is the matrix representation of the projection
of P (#) onto the subspace spanned by the selected eigenvectors. The eigenvectors can
be chosen to correspond to parts of the spectrum of interest and can be computed
using the Arnoldi process on the companion form pencil (F, G) or directly on P (#)
with the Jacobi-Davidson method or its variants [26], [35]. In the latter case, the
during the Davidson process.
3.3.2. Direct approach. This approach consists of approximating #P (z)
at each grid point z. Techniques analogous to those used for single matrices can be
applied, such as the Lanczos method applied to P (z) # P (z) or its inverse. We refer
the reader to [42] for more details and further references.
4. Applications and numerical experiments. We give a selection of applications
of pseudospectra for polynomial eigenvalue problems, using them to illustrate
the performance of our methods for computing pseudospectra. All our examples are
for 2-norm pseudospectra.
STRUCTURED PSEUDOSPECTRA FOR POLYNOMIAL EIGENPROBLEMS 199
4.1. The wing problem. The first example is based on a quadratic polynomial
A 0 from [6, Sec. 10.11], with numerical values modified as in
[20, Sec. 5.3]. The eigenproblem for Q(#) arose from the analysis of the oscillations
of a wing in an airstream. The matrices are
17.6 1.28 2.89
1.28 0.824 0.413
7.66 2.45 2.1
# .
The left plot in Figure 4.1 shows the boundaries of #-pseudospectra with perturbations
measured in the absolute sense . The
eigenvalues are plotted as dots. Another way of approximating a pseudospectrum
is by random perturbations of the original matrices [41]. We generated 200 triples
of complex random normal perturbation matrices (#A 1 ,
1: 3. In the right plot of Figure 4.1 are superimposed as small dots the
eigenvalues of the perturbed polynomials # 2
The solid curve marks the boundary of the #-pseudospectrum for
pictures show that the pair of complex eigenvalues are much more
sensitive to perturbations than the other two complex pairs.
The eigenvalues of Q(#) are the same as those of the linearized problem A- #I,
where
-A
# .
Figure
4.2 shows boundaries of #-pseudospectra for this matrix, for the same # as in
Figure
4.1. Clearly, the #-pseudospectra of the linearized problem (4.1) do not give
useful information about the behavior of the eigensystem of Q(#) under perturbations.
This emphasizes the importance of defining and computing pseudospectra for the
quadratic eigenvalue problem in its original form.
4.2. Mass-spring system. We now consider the connected damped mass-spring
system illustrated in Figure 4.3. The ith mass of weight m i is connected to the (i+1)st
mass by a spring and a damper with constants k i and d i , respectively. The ith mass
is also connected to the ground by a spring and a damper with constants # i and # i ,
respectively. The vibration of this system is governed by a second-order di#erential
equation
d
dt
where the mass matrix diagonal, and the damping matrix
C and sti#ness matrix K are symmetric tridiagonal. The di#erential equation leads
to the quadratic eigenvalue problem
In our experiments, we took all the springs (respectively, dampers) to have the same
constant except the first and last, for which the constant
is 2# (respectively, 2# ), and we took m i # 1. Then
200 FRANC-OISE TISSEUR AND NICHOLAS J. HIGHAM
Fig. 4.1. Wing problem. Left: # (Q), for # [10 approximation to #-
pseudospectrum with
Fig. 4.2. Wing problem. # (A), for A in (4.1) with # [10
and the quadratic eigenvalue problem is overdamped. We take an
of freedom mass-spring system over a 100 - 100 grid. A plot of the pseudospectra is
given in Figure 4.4.
For this problem we compare all the methods described. In the solvent approach
exact line searches were used in Newton's method and no reordering was used in the
generalized Schur method. The solvents from the Bernoulli and Schur methods were
refined by one step of Newton's method. The Bernoulli iteration converged in 12
iterations while only 6 iterations were necessary for Newton's method. The Lanczos
inverse iteration converged after 3 iterations on average. In Table 4.1 we give the estimated
flop counts, using the formulae from section 3, together with execution times.
The computations were performed in MATLAB 6, which is an excellent environment
for investigating pseudospectra. While the precise times are not important, the con-
STRUCTURED PSEUDOSPECTRA FOR POLYNOMIAL EIGENPROBLEMS 201
d i-1
d n-1
Fig. 4.3. An n degree of freedom damped mass-spring system.
Fig. 4.4. Pseudospectra of a 250 degree of freedom damped mass-spring system on a 100 - 100
grid.
clusion is clear: in this example, the three solvent-based methods are much faster
than the SVD and transfer function methods. (The high speed of the SVD method
relative to its flop count is attributable to MATLAB's very e#cient svd function.)
4.3. Acoustic problem. Acoustic problems with damping can give rise to large
quadratic eigenvalue problems (4.2), where, again, M is the mass matrix, C is the
damping matrix, and K the sti#ness matrix. We give in Figure 4.5 the sparsity
pattern of the three matrices M , C, and K of order 107 arising from a model of a
speaker box [1]. These matrices are symmetric and the sparsity patterns of M and
K are identical. There is a large variation in the norms: #M#
We plot in Figure 4.6 pseudospectra with perturbations measured in both an
absolute sense a relative sense
together with pseudospectra of the corresponding standard eigenvalue
problem of the form (4.1). The eigenvalues are all pure imaginary and are marked by
dots on the plot. The two first plots are similar, both showing that the most sensitive
Table
Comparison in terms of flops and execution time of di#erent techniques.
Method Estimated cost in flops Execution time
Golub-Reinsch SVD 26747n 3 102 min
Transfer function 3408n 3 106 min
Solvent: Schur 1677n 3 37 min
Matrices M and K
Matrix C
Fig. 4.5. Sparsity patterns of the three 107 - 107 matrices M,C, and K of an acoustic problem.
eigenvalues are located at the extremities of the spectrum; the contour lines di#er
mainly around the zero eigenvalue. The last plot is very di#erent; clearly it is the
eigenvalues close to zero that are the most sensitive to perturbations of the standard
eigenproblem form.
We mention that for this problem we have been unable to compute a solvent.
4.4. Closed loop system. In multi-input and multioutput systems in control
theory the location of the eigenvalues of matrix polynomials determine the stability
of the system. Figure 4.7 shows a closed-loop system with feedback with gains 1 and
The associated matrix polynomial is given by
# .
We are interested in the values of # for which P (z) has all its eigenvalues inside the
unit circle. By direct calculation with det(P (z)), using the Routh array, for example,
it can be shown that P (z) has all its eigenvalues inside the unit circle if and only if
# < 0.875.
The matrix P (z) can be viewed as a perturbed matrix polynomial with structured
perturbations:
STRUCTURED PSEUDOSPECTRA FOR POLYNOMIAL EIGENPROBLEMS 203
-4000 -3000 -2000 -1000 0 1000 2000 3000 4000
-4000 -3000 -2000 -1000 0 1000 2000 3000 4000
-4000 -3000 -2000 -1000 0 1000 2000 3000 4000
Fig. 4.6. Acoustic problem, Perturbations measured in an absolute sense
(top left) and relative sense (top right). Pseudospectra of the equivalent standard eigenvalue problem
are shown at the bottom.
where
I
zI
z 2 I
# .
We show in Figure 4.8 the structured pseudospectra as defined by (2.17). The dashed
lines mark the unit circle. Since the outermost contour just touches the unit circle, this
picture confirms the value of the maximal # that we obtained analytically.
4.5. The Orr-Sommerfeld equation. The Orr-Sommerfeld equation is a linearization
of the incompressible Navier-Stokes equations in which the perturbations
in velocity and pressure are assumed to take the form
$\phi(y)\, e^{i(\alpha x - \omega t)}$,
Fig. 4.7. Closed-loop system with feedback gains 1 and 1
Fig. 4.8. Structured pseudospectra of a closed-loop system with one-parameter feedback.
where α is a wavenumber and ω is a radian frequency. For a given Reynolds number
R, the Orr-Sommerfeld equation may be written in the form (4.3).
We consider plane Poiseuille flow between walls at $y = \pm 1$, with base flow in the
streamwise x direction, for which the boundary conditions are $\phi(\pm 1) = \phi'(\pm 1) = 0$.
For a given real value of R, the boundary conditions will be satisfied only for certain
combinations of values of α and ω. Two cases are of interest.
Case 1. Temporal stability. If α is fixed and real, then (4.3) is linear in the
parameter ω and corresponds to a generalized eigenvalue problem. The perturbations
are periodic in x and grow or decay in time depending on the sign of the imaginary
part of ω. This case has been studied with the help of pseudospectra by Reddy,
Schmid, and Henningson [32].
Case 2. Spatial stability. For most real flows, the perturbations are periodic in
time, which means that ω is real. Then the sign of the imaginary part of α determines
whether the perturbations will grow or decay in space. In this case, the parameter
is α, which appears to the fourth power in (4.3), so we obtain a quartic polynomial
eigenvalue problem. Bridges and Morris [2] calculated the spectrum of (4.3) using a
finite Chebyshev series expansion of φ combined with the Lanczos tau method and
they computed the spectrum of the quartic polynomial by two methods: the QR
algorithm applied to the corresponding standard eigenvalue problem in companion
form, and Bernoulli iteration applied to determine a minimal solvent and hence to
obtain the n eigenvalues of minimal modulus.
For our estimation of the pseudospectra of the Orr-Sommerfeld equation we use
a Chebyshev spectral discretization that combines an expansion in Chebyshev polynomials
and collocation at the Chebyshev points with explicit enforcement of the
boundary conditions. We are interested in the eigenvalues α that are the closest to
the real axis, and we need Im(α) > 0 for stability. The linear eigenvalue problem
(Case 1) has been solved by Orszag [29]. The critical neutral point corresponding
to α and ω both real for minimum R was found by Orszag; in our calculations we set
R and ω to these critical values and computed the modes α. The first few modes are
plotted in Figure 4.9. For the first mode we obtained a value that compares favorably
with the result of Orszag. Figure
4.10 shows the pseudospectra in a region around the first few modes on a 100 x 100
grid, with the weight for A_4 set to zero since A_4 is the identity matrix and is not
subject to uncertainty. The plot shows that the first mode is very sensitive. Interest-
ingly, the second and subsequent modes are almost as sensitive, with relatively small
perturbations in the matrix coefficients being sufficient to move all these modes across
the real axis, making the flow unstable. The pseudospectra thus give a guide to the
accuracy with which computations must be carried out for the numerical approximations
to the modes to correctly determine the location of the modes. For more on the
interpretation of pseudospectra for this problem, see [32] and [44].
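The Chebyshev collocation machinery mentioned above is standard; as a rough sketch (following the familiar construction of the Chebyshev differentiation matrix rather than the exact discretization used for Figures 4.9 and 4.10), the points and first-derivative matrix can be generated as follows, with higher derivatives obtained by repeated multiplication and the boundary conditions imposed by modifying or deleting boundary rows.

    import numpy as np

    def cheb(N):
        # Chebyshev points x_j = cos(j*pi/N) on [-1, 1] and the differentiation matrix D
        if N == 0:
            return np.zeros((1, 1)), np.ones(1)
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.ones(N + 1)
        c[0] = c[N] = 2.0
        c = c * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))          # diagonal entries from the negative row sums
        return D, x

    D, x = cheb(64)        # 65 collocation points
    D2 = D @ D             # second-derivative matrix; repeat for higher derivatives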
Again, for comparison we computed the pseudospectra of the corresponding standard
eigenvalue problem. The picture was qualitatively similar, but the contour levels
were several orders of magnitude smaller, thus not revealing the true sensitivity of the
problem.
Fig. 4.9. The first few modes of the spectrum of the Orr-Sommerfeld equation.
Fig. 4.10. Pseudospectra of the Orr-Sommerfeld equation.
--R
Elementary Matrices and Some Applications to Dynamics and Differential Equations
Stability Radii of Polynomial Matrices
Matrix Polynomials
Matrix Computations
Structured backward error and condition of generalized eigenvalue problems
Accuracy and Stability of Numerical Algorithms
Solving a Quadratic Matrix Equation by Newton's Method with Exact Line Searches
Numerical analysis of a quadratic matrix equation
A block algorithm for matrix 1-norm estimation
Bounds for Eigenvalues of Matrix Polynomials
More on Pseudospectra for Polynomial Eigenvalue Problems and Applications in Control Theory
Spectral value sets: A graphical tool for robustness analysis
Real and complex stability radii: A survey
Numerical solution of matrix polynomial equations by Newton's method
The Theory of Matrices
Numerical range of matrix polynomials
Computation of pseudospectra by continuation
Locking and restarting quadratic eigenvalue solvers
A determinant identity and its application in evaluating frequency response matrices
Root neighborhoods of a polynomial
Accurate solution of the Orr-Sommerfeld stability equation
Robust stability of linear systems described by higher order
Generalized Inverse of Matrices and Its Applications
Pseudospectra of the Orr-Sommerfeld operator
Transfer functions and resolvent norm approximation of large matrices
Backward error and condition of polynomial eigenvalue problems
The quadratic eigenvalue problem
Pseudozeros of polynomials and pseudospectra of companion matrices
Calculation of pseudospectra by the Arnoldi iteration
Portraits Spectraux de Matrices: Un Outil d'Analyse de la Stabilité
Pseudospectra of matrices
Computation of pseudospectra
Spectra and pseudospectra
Hydrodynamic stability without eigenvalues
On stability radii of generalized eigenvalue problems
Pseudospectra for matrix pencils and stability of equilibria
The Algebraic Eigenvalue Problem
--TR
--CTR
Kui Du, Note on structured indefinite perturbations to Hermitian matrices, Journal of Computational and Applied Mathematics, v.202 n.2, p.258-265, May, 2007
Graillat, A note on structured pseudospectra, Journal of Computational and Applied Mathematics, v.191 n.1, p.68-76, 15 June 2006
Kui Du , Yimin Wei, Structured pseudospectra and structured sensitivity of eigenvalues, Journal of Computational and Applied Mathematics, v.197 n.2, p.502-519, 15 December 2006
Kirk Green , Thomas Wagenknecht, Pseudospectra and delay differential equations, Journal of Computational and Applied Mathematics, v.196 n.2, p.567-578, 15 November 2006 | structured perturbations;matrix polynomial;solvent;orr-sommerfeld equation;stability radius;pseudospectrum;quadratic matrix equation;backward error;transfer function;lambda-matrix;polynomial eigenvalue problem |
587858 | A Multilevel Dual Reordering Strategy for Robust Incomplete LU Factorization of Indefinite Matrices. | A dual reordering strategy based on both threshold and graph reorderings is introduced to construct robust incomplete LU (ILU) factorization of indefinite matrices. The ILU matrix is constructed as a preconditioner for the original matrix to be used in a preconditioned iterative scheme. The matrix is first divided into two parts according to a threshold parameter to control diagonal dominance. The first part with large diagonal dominance is reordered using a graph-based strategy, followed by an ILU factorization. A partial ILU factorization is applied to the second part to yield an approximate Schur complement matrix. The whole process is repeated on the Schur complement matrix and continues for a few times to yield a multilevel ILU factorization. Analyses are conducted to show how the Schur complement approach removes small diagonal elements of indefinite matrices and how the stability of the LU factor affects the quality of the preconditioner. Numerical results are used to compare the new preconditioning strategy with two popular ILU preconditioning techniques and a multilevel block ILU threshold preconditioner. | Introduction
This paper is concerned with reordering strategies used in developing robust preconditioners based
on incomplete LU (ILU) factorization of the coefficient matrix of a sparse linear system of the form
$Ax = b$,  (1)
where A is an unstructured matrix of order n. In particular, we are interested in ILU preconditioning
techniques for which A is an indefinite matrix; i.e., a matrix with an indefinite symmetric part.
Indefinite matrices arise frequently from finite element discretizations of coupled partial differential
equations in computational fluid dynamics and from other applications.
Technical Report 285-99, Department of Computer Science, University of Kentucky, Lexington, KY, 1999. This
work was supported in part by the University of Kentucky Center for Computational Sciences and in part by the
University of Kentucky College of Engineering.
y E-mail: jzhang@cs.uky.edu. URL: http://www.cs.uky.edu/~jzhang.
ILU preconditioning techniques have been successful for solving many nonsymmetric and indefinite
matrices, despite the fact that their existence in these applications is not guaranteed. However,
their failure rates are still too high for them to be used as blackbox library software for solving general
sparse matrices of practical interests [9]. In fact, the lack of robustness of preconditioned iterative
methods is currently the major impediment for them to gain acceptance in industrial applications,
in spite of their intrinsic advantage for large scale problems.
For indefinite matrices, there are at least two reasons that make ILU factorization approaches
problematic [9]. The first problem is due to small or zero pivots [23]. Pivots in an indefinite matrix can
be arbitrarily small. This may lead to unstable and inaccurate factorizations. In such cases, the size
of the elements in the LU factors may be very large and these large size elements lead to inaccurate
factorization. The second problem is due to unstable triangular solves [18]. The incomplete factors
of an indefinite matrix are usually not diagonally dominant. An indication of unstable triangular
solves is when $\|L^{-1}\|$ and $\|U^{-1}\|$ are extremely large while the offdiagonal elements of L and U are
reasonably bounded. Such problems are usually caused by very small pivots. They may sometimes
happen without a small pivot. A statistic, condest, was introduced by Chow and Saad [9] to measure
the stability of the triangular solves. It is defined to be $\|(LU)^{-1}e\|_\infty$, where e is a vector of all ones.
This statistic is useful when its value is very large, e.g., on the order of $10^{15}$.
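As an illustration, the statistic is cheap to evaluate once a factorization is available; the sketch below uses SciPy's spilu (an ILU variant that differs in detail from the ILUT and ILUTP codes discussed later), and simply solves LUw = e for the vector of all ones and takes the infinity norm of w.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def condest_of_ilu(A, drop_tol=1e-4, fill_factor=10.0):
        ilu = spla.spilu(A.tocsc(), drop_tol=drop_tol, fill_factor=fill_factor)
        e = np.ones(A.shape[0])
        w = ilu.solve(e)                 # two sparse triangular solves: w = (LU)^{-1} e
        return np.linalg.norm(w, np.inf)

    # artificial example only; a huge value (say 1e15) would signal unstable triangular solves
    A = sp.random(200, 200, density=0.02, format="csc", random_state=0) + 5.0 * sp.eye(200, format="csc")
    print(condest_of_ilu(A))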
Small pivots are usually related to small or zero diagonal elements. It can be argued that by
restricting the magnitude of the diagonal elements, we may be able to alleviate, if not eliminate,
these two problems of ILU factorizations to a certain degree. Such restrictions can be seen in the
form of full or partial pivoting strategies in Gaussian elimination. In ILU factorization, column
pivoting strategy has been implemented with Saad's ILUT, resulting in an ILUTP techniques [32].
However, ILUTP has not always been helpful in dealing with nonsymmetric matrices [3, 9]. As
Chow and Saad pointed [9], a poor pivoting sequence can occasionally trap a factorization into a zero
pivot, even if the factorization would have succeeded without pivoting. In addition, existing pivoting
strategies for incomplete factorization cannot guarantee that a nonzero pivot can always be found,
unlike the case with Gaussian elimination [9].
Another obvious strategy of dealing with small pivots is to replace them by a larger value. The
ILU factorization can continue and the resulting preconditioner may be well conditioned. In such a
way, the ILU factorization is said to be stabilized. However, this strategy alters the values of the
matrix and the resulting preconditioner may be inaccurate. Thus, the choice of the replacing value
for the small pivots is critical for a good performance and a good choice is usually problem dependent
[23]. Too large a value will result in a stable but inaccurate factorization; too small a value will result
in an unstable factorization. A similar strategy is to factor a shifted matrix $A + \alpha I$, where α is a
positive scalar so that $A + \alpha I$ is well conditioned [27, 44]. Such a strategy too obviously has a tradeoff
between stable and accurate factorization. For more studies on the stability of ILU factorizations,
we refer to [19, 29, 42, 13, 45].
It is also possible to reorder the rows of the matrix so that their diagonal dominance in a certain
sense is in decreasing order. In this way, small pivots are in the last rows of the matrix and may not
be used in an ILU factorization. This strategy also has some problems since the values of the pivots
are modified in an unpredictable way, small pivots may still affect the ILU factorization. In addition,
the effect of standard reordering schemes applied to general nonsymmetric sparse matrices is still an
unsettled issue [17, 24, 43].
This paper follows the above idea of putting the rows with small diagonal elements to the last
few rows. However, these small diagonal elements will never be used in the ILU factorization. Instead,
these rows form the rows of a Schur complement matrix and the values of the diagonal elements are
modified in a systematic way. This process is continued for a few times until all small diagonal
elements are removed; or until the last Schur complement matrix is small enough that a complete
pivoting strategy can be implemented inexpensively. With this reordering strategy, we can expect to
obtain a stable and accurate ILU factorization. We also implement a graph based reordering strategy
(minimum degree algorithm) to reduce the fill-in amount during the stable ILU factorization.
This paper is organized as follows. The next section introduces a dual reordering strategy
based on both the values and the graph of the matrix. Section 3 discusses a partial ILU factorization
technique to construct the Schur complement matrix implicitly. Section 4 gives analyses on the values
of the diagonal elements of the Schur complement matrix and shows how the stability of the LU factor
affects the quality of a preconditioner. Section 5 outlines the multilevel dual reordering algorithm.
Section 6 contains numerical experiments. Concluding remarks are included in Section 7.
Reordering Strategy
Most reordering strategies are originally developed for the direct solution of sparse matrices based on
Gaussian elimination. They are mainly used to reduce the fill-in elements in the Gaussian elimination
process or to extract parallelism from LU factorizations [15, 22]. They have also been used in ILU
preconditioning techniques for almost the same reasons [16, 20, 30]. Various reordering strategies were
first studied for preconditioned conjugate gradient methods, i.e., for the cases where the matrix is
symmetric positive definite [1, 4, 5, 10, 11, 26, 31]. They were then extended for treating nonsymmetric
problems [2, 7, 12, 14]. Most of these strategies are based on the adjacency graph but not on the
values of the matrices. They are robust for general sparse matrices only if used with suitable pivoting
strategies, which are based on the values of the matrices, to prevent unstable factorizations. Hence,
reordering strategies based on matrix values are needed to yield robust stable ILU factorizations.
Such an observation has largely been overlooked in ILU techniques for some time, partly because the
early ILU techniques were mainly developed to solve sparse matrices arising from finite difference
discretizations of partial differential equations [28]. In such cases, the diagonal elements of the
matrices usually have nonzero values.
In this paper, we introduce a dual reordering strategy for robust ILU factorization for solving
general sparse indefinite matrices. To this end, we first introduce a strategy to determine the row
diagonal dominance of a matrix. 1 We actually compute a certain measure to determine the relative
strength of the diagonal element with respect to a certain norm of the row in question. Algorithm 2.1
is an example of computing a diagonal dominance measure for each row of the matrix and was
originally introduced in [41] as a diagonal threshold strategy in a multilevel ILU factorization.
1 The reference to row diagonal dominance is due to the assumption that our matrix is stored in a row oriented
format, such as in the compressed sparse row format [34]. The proposed strategy works equally well if the matrix is
stored in a column oriented format with the reference to column diagonal dominance.
Algorithm 2.1 Computing a measure for each row of a matrix.
1. For i = 1, . . . , n do
2.    r_i = a norm of the nonzero part of row i, e.g., r_i = sum_{j in Nz(A_i)} |a_ij|
3.    If r_i != 0, then
4.       ~t_i = |a_ii| / r_i
5.    End if
6. End do
7. t_max = max_{1<=i<=n} ~t_i
8. For i = 1, . . . , n do
9.    t_i = ~t_i / t_max
10. End do
In Line 2 of Algorithm 2.1 the set Nz(A_i) is defined as Nz(A_i) = {j : a_ij != 0},
i.e., the nonzero row pattern for the row i. A row with a small absolute diagonal value will have a
small t_i measure. A row with a zero diagonal value will have an exact zero t_i measure.
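Since the formulas of Algorithm 2.1 are partly lost in this copy, the fragment below shows one plausible realization in Python/SciPy; the choice of the row norm r_i (here the sum of absolute values over the nonzero pattern) is an assumption rather than necessarily the exact norm of the original algorithm, but it reproduces the stated behavior: zero diagonals give measure zero and dominant diagonals give measures near one.

    import numpy as np
    import scipy.sparse as sp

    def diagonal_dominance_measure(A):
        # t_i = |a_ii| / r_i, scaled so that the largest measure equals one
        A = sp.csr_matrix(A)
        diag = np.abs(A.diagonal())
        r = np.asarray(abs(A).sum(axis=1)).ravel()   # assumed row norm: sum of |a_ij|
        t = np.zeros(A.shape[0])
        nz = r != 0
        t[nz] = diag[nz] / r[nz]
        tmax = t.max()
        return t / tmax if tmax > 0 else t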
Let G(A) = (V, K) denote the adjacency graph of the matrix A, where V = {v_1, v_2, . . . , v_n} is the
set of vertices and K is the set of edges. Let (v_j, v_k) denote an edge from vertex v_j to vertex v_k. Since
a node in the adjacency graph of a matrix corresponds to a row of the matrix, we will use the terms
node and row of a matrix interchangeably. Given a diagonal threshold tolerance ε > 0, we divide the
nodes of A into two parts, V_1 and V_2, such that
$V_1 = \{v_i : t_i \ge \varepsilon\}$ and $V_2 = \{v_i : t_i < \varepsilon\}$.
It is obvious that $V_1 \cup V_2 = V$ and $V_1 \cap V_2 = \emptyset$.
For convenience, we assume that a symmetric permutation is performed so that the nodes in V 1
are listed first, followed by the nodes in V 2 . Since the nodes in V 1 are "good" for ILU factorization
in terms of stability, we may further improve the quality of the ILU factorization by implementing
a graph based reordering strategy. The following minimum degree reordering algorithm is just one
example of such graph based reordering strategies to reduce the fill-in elements in the ILU factorization
We denote by deg(v i ) the degree of the node v i , which equals the number of nonzero elements
of the ith row minus one; i.e., deg(v_i) = card(Nz(A_i)) - 1.
The set of the degrees of the rows of the matrix A can be conveniently computed when Algorithm 2.1
is run to compute the diagonal dominance measure of A. For example, in Line 2 of Algorithm 2.1,
the number of nonzero elements of the ith row will be counted.
After the first reordering based on the threshold tolerance ε, we perform a second reordering
based on the degrees of the nodes. But the second reordering is only performed with respect to the
nodes in V 1 . To be more precise, we reorder the nodes in V 1 in a minimum degree fashion; i.e., the
nodes with smaller degrees are listed first, those with larger degrees are listed last. After the two
steps of reorderings, we have
$P_g P_t A P_t^T P_g^T = \begin{pmatrix} D & F \\ E & C \end{pmatrix}$,  (2)
where P_t and P_g are the permutation matrices corresponding to the threshold tolerance reordering
and the minimum degree reordering, respectively. We use P_g here to emphasize that it is just a graph
based reordering strategy, and is not necessarily restricted to the minimum degree reordering. Other
graph based reordering strategies such as the Cuthill-McKee or reverse Cuthill-McKee algorithms [25]
may be used to replace the minimum degree strategy. But their meaning may be slightly changed
since not all neighboring nodes of a node in V 1 belong to V 1 , some of them may be in V 2 . For
simplicity, we use A to denote both the original and the permuted matrices in the sequel so that the
permutation matrices will no longer appear explicitly. We also refer to the two reordering strategies
as threshold reordering and graph reordering for short.
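Put together, the dual reordering amounts to a single symmetric permutation; the sketch below (reusing the measure function from the previous fragment, and with a simple sort by degree standing in for a genuine minimum degree algorithm) is only meant to make the bookkeeping concrete.

    import numpy as np
    import scipy.sparse as sp

    def dual_reordering(A, eps):
        A = sp.csr_matrix(A)
        t = diagonal_dominance_measure(A)              # from the previous sketch
        deg = np.diff(A.indptr) - 1                    # nonzeros per row minus one
        v1 = np.where(t >= eps)[0]
        v2 = np.where(t < eps)[0]
        v1 = v1[np.argsort(deg[v1], kind="stable")]    # graph-based reordering of V1 only
        perm = np.concatenate([v1, v2])
        return perm, len(v1)

    # A[perm, :][:, perm] then has the 2 x 2 block structure of (2), with the V1 block first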
3 Partial ILU Factorization
An incomplete LU factorization process with a double dropping strategy (ILUT) is first applied to
the upper part (D F) of the reordered matrix A in (2). The ILUT algorithm uses two parameters
p and τ to control the amount of fill-in elements caused by the Gaussian elimination process and is
described in detail in [32]. ILUT builds the preconditioning matrix row by row. For each row of
the LU factors, ILUT first drops all computed elements whose absolute values are smaller than τ
times the average nonzero absolute values of the current row. After an (incomplete) row is computed,
ILUT performs a search with respect to the computed current row such that the largest p elements
in absolute values are kept, the rest nonzero elements are dropped again. Thus the resulting ILUT
factorization has at most p elements in each row of the L and U parts. The use of a double dropping
strategy ensures that the memory requirement be met. It is easy to see that the total storage cost
for ILUT is bounded by 2pn for a matrix of order n.
The ILUT process is continued to the second part of the matrix A in (2) with respect to the
C) submatrix. However, the elimination process is only performed with respect to the columns in
E, and linear combinations for columns in C are performed accordingly. In other words, the elements
corresponding to the C submatrix are not eliminated. Such a process is called a partial Gaussian
elimination or a partial LU factorization in [38]. Note that, due to the partial Gaussian elimination,
all rows in the (E C) submatrix can be processed independently (in parallel). This is because all
nodes in the E submatrix that are to be eliminated use only the computed (I)LU factorization of
the (D F ) part. Note also that the diagonal values of the rows of the C submatrix are never used
as pivots. It can be shown [38] that such a partial Gaussian elimination process modifies C into the
(incomplete) Schur complement of A. In exact arithmetic, C would be changed into
$A_1 = C - E(LU)^{-1}F = C - ED^{-1}F$,  (3)
where LU is the standard LU factorization of the D submatrix. Hence, this method constructs the
Schur complement indirectly, in contrast to some alternative methods, e.g., the BILUM preconditioner
in [37], in which the Schur complement is constructed explicitly by matrix-matrix multiplications.
The partial ILU factorization process just described yields a block LU factorization of the matrix
A of the form
$\begin{pmatrix} D & F \\ E & C \end{pmatrix} = \begin{pmatrix} L & 0 \\ EU^{-1} & I \end{pmatrix} \begin{pmatrix} U & L^{-1}F \\ 0 & A_1 \end{pmatrix}$,  (4)
where I and 0 are generic identity and zero matrices, respectively. If the factorization is exact and if
we can solve the Schur complement matrix A 1 , the solution of the original linear system (1) can be
found by a backward substitution. This process is similar to the sparse direct solution method based
on one step cyclic reduction technique [22].
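A small dense experiment makes the block identity explicit. The following Python/NumPy check is not the sparse ILUT-based implementation of the paper (the factorization of D is exact here, with the row permutation absorbed into L), but it confirms that eliminating only the columns of E yields the Schur complement of (3) and the factorization (4):

    import numpy as np
    import scipy.linalg as sla

    rng = np.random.default_rng(1)
    m, k = 6, 3                                       # illustrative sizes of D and C
    D = rng.standard_normal((m, m)) + 5.0 * np.eye(m)
    F = rng.standard_normal((m, k))
    E = rng.standard_normal((k, m))
    C = rng.standard_normal((k, k))

    L, U = sla.lu(D, permute_l=True)                  # D = L U, with the permutation folded into L
    A1 = C - E @ np.linalg.inv(D) @ F                 # Schur complement, as in (3)

    A = np.block([[D, F], [E, C]])
    left = np.block([[L, np.zeros((m, k))], [E @ np.linalg.inv(U), np.eye(k)]])
    right = np.block([[U, np.linalg.solve(L, F)], [np.zeros((k, m)), A1]])
    print(np.allclose(A, left @ right))               # the block factorization (4) holds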
The partial ILU factorization process is the backbone of a domain based multilevel ILU pre-conditioning
technique (BILUTM) described in [38]. Such an ILU factorization with a suitable block
independent set ordering yields a preconditioner (BILUTM) that is highly robust and possesses high
degree of parallelism. However, in this paper, the parallelism due to block independent set ordering is
not our concern, we restrict our attention to the robustness of multilevel ILU factorization resulting
from removing small pivots.
We can heuristically argue that the ILU factorization resulting from applying the above partial
ILU factorization to the reordered matrix is likely to be more stable than that would be generated
by applying ILUT directly to the original matrix. This is because the factorization is essentially
performed with respect to the nodes in V 1 that have a relatively good diagonal dominance. The
partial ILU factorization with respect to the nodes in V 2 never needs to divide any pivot elements.
So there is no reason that large size elements should be produced.
As remarked previously, if we can solve the Schur complement matrix A 1 in (3) to a certain
degree, we can develop a two level preconditioner for the matrix A. An alternative is based on the
observation that A 1 is another sparse matrix and we can apply the same procedures to A 1 that have
been applied to A to yield an even smaller Schur complement A 2 . This is the philosophy of multilevel
ILU preconditioning techniques developed in [33, 37, 38]. However, for this moment, we only discuss
the possible construction of a two level preconditioner.
A two level preconditioner. The easiest way to construct a two level preconditioner is to apply
the ILUT factorization technique to the matrix A 1 . One question will be naturally asked: is the
ILUT factorization more stable when applied to A 1 than when applied to A?
Notice that since the nodes with good diagonal dominance have all been factored out, we tend
to think that the nodes of A 1 are not good for a stable ILUT factorization. This may not always be
true, since the measure of diagonal dominance computed in Algorithm 2.1 is relatively to a certain
norm of the row in question. We need to examine relative changes in size of the diagonal value when
a node is considered as a node in A and when it is considered as a node in A 1 .
4 Analyses
Diagonal submatrix D. For ease of analysis, unless otherwise indicated explicitly, we assume
that the partial LU factorization described above is exact; i.e., no dropping strategy is enforced. We
also assume that, in the reordered matrix, the D submatrix is diagonal. Such a reordering can be
achieved by an independent set search as in a multielimination strategy of Saad [33, 39]. Thus, the
factorization (4) is reduced to
$\begin{pmatrix} D & F \\ E & C \end{pmatrix} = \begin{pmatrix} I & 0 \\ ED^{-1} & I \end{pmatrix} \begin{pmatrix} D & F \\ 0 & A_1 \end{pmatrix}$.  (5)
We now assume that all indices are local to individual submatrices. In other words, when we say the
ith row of the matrix F , we mean the ith row of the submatrix F , not the ith row of the matrix A,
original or permuted. For convenience we assume that D is of dimension m and A_1 is of dimension
n - m. We also use the notations $D = \mathrm{diag}(d_1, \dots, d_m)$, $E = (e_{ij})$, $F = (f_{ij})$, $C = (c_{ij})$, and $A_1 = (s_{ij})$.
It can be shown [22, 38] that, with the partial LU factorization without dropping, an arbitrary
element of the Schur complement matrix A_1 is
$s_{ij} = c_{ij} - \sum_{k=1}^{m} \frac{e_{ik} f_{kj}}{d_k}$.  (6)
Since we assume that the nodes with large diagonal dominance measure are in V 1 and the nodes in
have small or zero diagonal dominance measure, we are interested in knowing how the diagonal
value of a node of A may change when it becomes a node in A 1 .
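With a diagonal D the entrywise formula (6) is easy to check against the full Schur complement; a short numerical sanity check with made-up data:

    import numpy as np

    rng = np.random.default_rng(2)
    m, k = 5, 2
    d = rng.uniform(1.0, 2.0, m)                      # diagonal of D, kept well away from zero
    E = rng.standard_normal((k, m))
    F = rng.standard_normal((m, k))
    C = rng.standard_normal((k, k))

    S = C - E @ np.diag(1.0 / d) @ F                  # Schur complement with diagonal D
    i, j = 1, 0
    s_ij = C[i, j] - sum(E[i, l] * F[l, j] / d[l] for l in range(m))   # formula (6)
    print(np.isclose(S[i, j], s_ij))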
Figure 1: An illustration of the partial LU factorization to eliminate e_ik in the E submatrix.
The following proposition is obvious from Equation (6) and from Figure 1.
Proposition 4.1 If either the jth column of the submatrix F or the ith row of the submatrix E is a
zero vector, then $s_{ij} = c_{ij}$.
Definition 4.2 A node v_i of the vertex set V is said to be independent to a subset V_I of V if and
only if there is no edge between v_i and any node of V_I, i.e., $(v_i, v_j) \notin K$ and $(v_j, v_i) \notin K$ for all $v_j \in V_I$.
An immediate consequence of the independentness is the following corollary that is first proved in
[39].
Corollary 4.3 If a node v_i in V_2 is independent to all the nodes in V_1, then $s_{ij} = c_{ij}$ for all j; i.e., the values
of the ith row of C will not be modified in the partial LU factorization.
We now modify our threshold tolerance reordering strategy slightly to a diagonal threshold
strategy, similar to that discussed in [39]. We assume that the node v_i is in V_1 if $|a_{ii}| \ge \varepsilon$ and D
is still a diagonal matrix. With such a modification, we have $|d_k| \ge \varepsilon$ for $k = 1, \dots, m$. Denote by
$M = \max_{1 \le i, j \le n} |a_{ij}|$ the size of the largest element in absolute value of A.
Proposition 4.4 The size of the elements of the Schur complement matrix A_1 is bounded by $M(1 + mM/\varepsilon)$.
Proof. Starting from Equation (6),
$|s_{ij}| \le |c_{ij}| + \sum_{k=1}^{m} \frac{|e_{ik}||f_{kj}|}{|d_k|} \le M + m\frac{M^2}{\varepsilon} = M(1 + mM/\varepsilon)$.
Proposition 4.4 shows that the size of the elements of the Schur complement matrix cannot
grow uncontrollably if ε is large enough. This result indicates that our first level (I)LU factorization
is stable.
As we hinted previously, we will be interested in recursively applying our strategy to the successive
Schur complement matrices. We may assume that the matrix A is presparsified so that small
nondiagonal elements are removed. To be more specific, for the parameter τ used in the ILUT factorization,
we assume $|a_{ij}| \ge \tau$ for all nonzero elements of A, except for possibly the
diagonal elements. With some additional assumptions, we can have a lower bound on the variation
of the diagonal values of the Schur complement matrix A 1 .
Proposition 4.5 Suppose $|a_{ij}| \ge \tau$ for all nonzero elements of the matrix A, and suppose that either
$e_{ik}f_{ki}/d_k \ge 0$ for all $1 \le k \le m$ or $e_{ik}f_{ki}/d_k \le 0$ for all $1 \le k \le m$. Then
$|s_{ii} - c_{ii}| \ge \mathrm{card}(Nz(E_{i*}) \cap Nz(F_{*i}))\, \tau^2 / M$,
where $Nz(E_{i*})$ and $Nz(F_{*i})$ are the index sets of the nonzero elements of the ith row of the E submatrix
and the ith column of the F submatrix, respectively. card(V) denotes the cardinality of a set V.
Proof. If either $e_{ik}f_{ki}/d_k \ge 0$ or $e_{ik}f_{ki}/d_k \le 0$ holds for all $1 \le k \le m$, we have
$|s_{ii} - c_{ii}| = \Big|\sum_{k=1}^{m} \frac{e_{ik}f_{ki}}{d_k}\Big| = \sum_{k=1}^{m} \frac{|e_{ik}||f_{ki}|}{|d_k|}$.  (7)
The kth term in the right-hand side sum of (7) is nonzero if and only if both $e_{ik}$ and $f_{ki}$ are nonzero.
This happens if and only if $k \in Nz(E_{i*}) \cap Nz(F_{*i})$. Note that $|e_{ik}| \ge \tau$, $|f_{ki}| \ge \tau$ and $|d_k| \le M$ for such k. It follows that
$|s_{ii} - c_{ii}| \ge \mathrm{card}(Nz(E_{i*}) \cap Nz(F_{*i}))\, \tau^2 / M$.
It is implicitly assumed that ε < M. In practice, ε is small so that the set V_1 may be large
enough to avoid constructing a large Schur complement matrix. Denote
$\Delta_i = \mathrm{card}(Nz(E_{i*}) \cap Nz(F_{*i}))\, \tau^2 / M$.  (8)
By the motivation of the diagonal threshold strategy, the value of $|c_{ii}|$ is zero or very small. Thus
the size of $|s_{ii}|$ can be considered as being close to $\Delta_i$.
Corollary 4.6 Under the conditions of Proposition 4.5, if $c_{ii} = 0$, then $|s_{ii}| \ge \Delta_i$.
Corollary 4.6 shows that if the ith diagonal element is zero in A (i.e., $c_{ii} = 0$) and if the set $Nz(E_{i*}) \cap Nz(F_{*i})$
is nonempty, then the size of the ith diagonal element is nonzero in the Schur complement. Thus,
under these conditions, a zero pivot is removed. In fact, the cardinality of $Nz(E_{i*}) \cap Nz(F_{*i})$ seems to
be the key factor to remove zero diagonal elements.
It is difficult to derive more useful bounds for general sparse matrices. If certain conditions
are given to restrict the class of matrices under consideration, it is possible to obtain more realistic
bounds to characterize the size of the elements of the Schur complement matrix, especially the size
of its diagonal elements.
General submatrix D. For general submatrix D corresponding to the factorization (4), it is easy
to see that, if the jth column of the submatrix F is zero, the jth column of the submatrix $L^{-1}F$ is
zero. Hence, Proposition 4.1 carries over to the general case.
At this moment, we are unable to show results analogous to Propositions 4.4 and 4.5 for general
submatrix D. However, it can be argued heuristically that, if D is not a diagonal matrix, the
cardinality of the set $Nz(E_{i*}) \cap Nz((L^{-1}F)_{*i})$ is likely to be larger than that of $Nz(E_{i*}) \cap Nz(F_{*i})$.
Size of $\|(LU)^{-1}\|$. Let us consider the quality of preconditioning in a nonstandard way. Denote by
$R = LU - A$ the error (residual) matrix of the ILU factorization. At each iteration, the preconditioning step solves
for $\bar w$ the system
$LU\,\bar w = r$,  (9)
where r is the residual of the current iterate. In a certain sense, we can consider $\bar w$ as an approximate
to the correction term of the current iterate. The quality of the preconditioning step (9) can be
judged by comparing (9) with the exact or perfect preconditioning step
$A\,w = r$.  (10)
If Equation (10) could be solved to yield the exact correction term w, the preconditioned iterative
method would converge in one step. Of course, solving the Equation (10) is as hard as solving the
original system (1). However, we can measure the relative difference in the correction term when
approximating the Equation (10) by the Equation (9). This difference may tell us how good the
preconditioning step (9) approximates the exact preconditioning step (10). The following proposition
is motivated by the work of Kershaw [23].
Proposition 4.7 Suppose the matrix A and the factor LU from the incomplete LU factorization are
nonsingular. Then the following inequality holds:
$\frac{\|w - \bar w\|}{\|w\|} \le \|(LU)^{-1}\|\,\|R\|$  (11)
for any consistent norm $\|\cdot\|$.
Proof. It is obvious that $r \ne 0$, otherwise the iteration would have converged. The nonsingularity
of A implies that $w \ne 0$. Note that $\bar w = (LU)^{-1}r$. Substituting $r = Aw$ from Equation (10), we have
$w - \bar w = w - (LU)^{-1}Aw = (LU)^{-1}(LU - A)w = (LU)^{-1}Rw$.
It follows that, for any consistent norm,
$\|w - \bar w\| \le \|(LU)^{-1}\|\,\|R\|\,\|w\|$.
The desired result (11) follows immediately by dividing by $\|w\|$ on both sides.
It is well known that the size of the error matrix R directly affects the convergence rate of the
preconditioned iterative methods [16]. Proposition 4.7 shows that the quality of a preconditioning
step is directly related to the size of both $(LU)^{-1}$ and R. A high quality preconditioner must be
accurate; i.e., it must have an error matrix that is small in size. A high quality preconditioner must
also have a stable factorization and stable triangular solves; i.e., the size of $(LU)^{-1}$ must be small.
Since the condition estimate condest $= \|(LU)^{-1}e\|_\infty$ is a lower bound for $\|(LU)^{-1}\|_\infty$, it should
provide some information about the quality of the preconditioner and may be used to measure the
stability of the LU factorization and of the triangular solves.
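Both factors in the bound (11) are inexpensive to estimate once a factorization is in hand. The sketch below again uses SciPy's spilu as a stand-in for the ILU factorizations of this paper; for a SuperLU object the factors satisfy Pr A Pc = L U, so the error matrix is reconstructed with the stored permutations before taking its norm.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def preconditioner_quality(A, drop_tol=1e-3):
        # returns (accuracy indicator ||R||_F with R = LU - A, stability indicator condest)
        A = A.tocsc()
        n = A.shape[0]
        ilu = spla.spilu(A, drop_tol=drop_tol)
        Pr = sp.csc_matrix((np.ones(n), (ilu.perm_r, np.arange(n))))
        Pc = sp.csc_matrix((np.ones(n), (np.arange(n), ilu.perm_c)))
        R = Pr.T @ (ilu.L @ ilu.U) @ Pc.T - A
        condest = np.linalg.norm(ilu.solve(np.ones(n)), np.inf)
        return spla.norm(R, "fro"), condest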
5 Multilevel Dual Reordering and ILU Factorization
Based on our previous analyses, the size of a diagonal element of the matrix A 1 is likely to be
larger than that of the same element in A. 2 We can apply Algorithm 2.1 to A 1 and repeat on
A 1 the procedures that were applied to A. This process may be repeated for a few times until
2 This is obviously false for an M-matrix. However, there will be no Schur complement matrix at all if A is an
all small diagonal elements are modified to large values, or until the last Schur complement matrix
is small enough that an ILU factorization with a complete pivoting strategy can be implemented
inexpensively. Since the number of small or zero pivots in the last Schur complement matrix is small,
a third strategy is to replace them by a large value. This will not introduce too much error to the
overall factorization. Given a maximum level L and denoting $A_0 = A$, the multilevel dual reordering
strategy and ILU factorization can be formulated as Algorithm 5.1.
Algorithm 5.1 Multilevel dual reordering and ILU factorization.
1. Given the parameters τ, p, ε, L
2. For j = 0, 1, . . . , L - 1 do
3.    Run Algorithm 2.1 with ε to find permutation matrices P_{jt} and P_{jg}
4.    Perform matrix permutation A_j := P_{jg} P_{jt} A_j P_{jt}^T P_{jg}^T
5.    If no small pivot has been found, then
6.       Apply ILUT(p, τ) to A_j and exit
7.    Else
8.       Apply a partial ILU factorization to A_j
9.          to yield a Schur complement matrix A_{j+1}
10.   End if
11. End do
12. Apply ILUTP or a stabilized ILUT to A_L if A_L exists
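A Python-style skeleton may help fix the control flow of Algorithm 5.1. The routines it calls (threshold_and_graph_reordering, ilut, partial_ilut, stabilized_ilut) are hypothetical placeholders for the components described in sections 2 and 3, not existing library functions, so this is a sketch of the organization only.

    def mdrilu(A, tau, p, eps, L_max):
        levels = []
        A_j = A
        for j in range(L_max):
            perm, n1, small_pivots = threshold_and_graph_reordering(A_j, eps)   # placeholder
            A_j = A_j[perm, :][:, perm]
            if not small_pivots:
                levels.append(("last", perm, ilut(A_j, p, tau)))                # placeholder
                return levels
            factors, A_next = partial_ilut(A_j, n1, p, tau)                     # placeholder
            levels.append(("level", perm, factors))
            A_j = A_next
        levels.append(("last", None, stabilized_ilut(A_j, p, tau)))             # placeholder
        return levels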
The ILU preconditioner constructed by Algorithm 5.1 is structurally similar to the BILUTM
preconditioner in [38]. The difference is that we do not construct a block independent set for the D j
submatrix. Instead, we set up a diagonal measure constraint and employ a graph reordering scheme
to reduce fill-in. The emphasis of this paper is on solving indefinite matrices by removing small pivots.
It can be seen that, if L levels of reduction are performed, the resulting ILU preconditioner has the
structure of L nested block factorizations of the form (4), one for each level.
The application of the preconditioner can be done by a level by level forward elimination, followed
by a level by level backward substitution. There are also permutations and inverse permutations to
be performed, specific procedures depend on implementations. For detailed descriptions, we refer to
[37, 38].
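In terms of the factorization (4) on each level, the preconditioner application amounts to a forward sweep followed by a backward sweep and a recursion on the Schur complement system; the sketch below assumes hypothetical per-level objects holding the blocks L, U, E, F, generic triangular solvers solve_lower and solve_upper (e.g., scipy.sparse.linalg.spsolve_triangular), and omits the permutations for brevity.

    import numpy as np

    def apply_mdrilu(levels, r, j=0):
        kind, perm, data = levels[j]
        if kind == "last":
            return data.solve(r)                          # last-level ILU solve
        L, U, E, F = data.L, data.U, data.E, data.F       # blocks of the level-j factorization (4)
        n1 = L.shape[0]
        r1, r2 = r[:n1], r[n1:]
        w1 = solve_lower(L, r1)                           # forward elimination
        w2 = r2 - E @ solve_upper(U, w1)
        z2 = apply_mdrilu(levels, w2, j + 1)              # recursion on the Schur complement
        z1 = solve_upper(U, w1 - solve_lower(L, F @ z2))  # backward substitution
        return np.concatenate([z1, z2])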
6 Numerical Experiments
Standard implementations of multilevel preconditioning methods have been described in detail in
[33, 37, 38]. We used full GMRES as the accelerator [35]. We tested three preconditioners: standard
ILUT of [32], a column pivoting variant ILUTP [32], and the multilevel dual reordering preconditioner
designed in this paper, abbreviated as MDRILU (multilevel dual reordering ILU factorization). All
preconditioners used a safeguard (stabilization) procedure by replacing a zero pivot with $(0.0001 + \tau)r_i$,
where r i was computed as the average nonzero values of the row in question. They were used as right
preconditioners for GMRES [34]. The main parameters used in all three preconditioners are the pair
$(p, \tau)$ in the double dropping strategy. ILUTP needs another parameter $0 \le \sigma \le 1$ to control the
actual pivoting. A nondiagonal element $a_{ij}$ is a candidate for a permutation only when $\sigma |a_{ij}| > |a_{ii}|$.
It is suggested that reasonable values of σ are between 0.5 and 0.01, with 0.5 being the best in
many cases [34, p. 295]. MDRILU also needs another parameter ε to enforce the diagonal threshold
reordering as in Algorithm 5.1. The maximum possible level number in MDRILU was set to 10. If after 10
levels of dual reorderings the Schur complement A_10 is not empty, a stabilized ILUT factorization
was employed to factor A_10. 3
For all linear systems, the right-hand side was generated by assuming that the solution is a
vector of all ones. The initial guess was a vector of some random numbers. The iteration was
terminated when the 2-norm of the residual was reduced by a factor of $10^7$. We also set an upper
bound of 100 for the full GMRES iteration. A symbol "-" indicates lack of convergence.
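For readers who want to reproduce the flavor of these experiments without the original Fortran codes, a preconditioned run can be set up in a few lines with SciPy; spilu again plays the role of the ILU factorization, the 10^7 residual reduction is mimicked through the relative tolerance, and SciPy's gmres handles preconditioning internally, which differs in detail from the right-preconditioned full GMRES used here (older SciPy versions take tol= instead of rtol=).

    import numpy as np
    import scipy.sparse.linalg as spla

    def solve_with_ilu(A, b, drop_tol=1e-4, fill_factor=20.0):
        ilu = spla.spilu(A.tocsc(), drop_tol=drop_tol, fill_factor=fill_factor)
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)
        # restart=100, maxiter=1 approximates "full GMRES, at most 100 steps"
        x, info = spla.gmres(A, b, M=M, rtol=1e-7, restart=100, maxiter=1)
        return x, info                                    # info == 0 signals convergence

    # right-hand side chosen so that the exact solution is a vector of all ones, as in the paper:
    # b = A @ np.ones(A.shape[0]); x, info = solve_with_ilu(A, b)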
In all tables with numerical results, "iter" shows the number of preconditioned GMRES iter-
ations; "spar" shows the sparsity ratio which is the ratio between the number of nonzero elements
of the preconditioner to that of the original matrix; "prec" shows the CPU time in seconds spent in
constructing the preconditioners; "cond" shows the condition estimate condest of the preconditioners
as introduced in Section 1. Since these ILU preconditioners approach direct solvers as more fill-in is allowed,
we emphasize robustness with respect to the memory cost (sparsity ratio). We remark
that our codes were not optimized and they computed and outputted information such as the number
of zero diagonals, smallest pivots, etc. Consequently, the CPU times reported in this paper only have
relative meaning. Note that the solution time at each iteration is mainly the cost of the matrix (both
A and the preconditioner) vector products and is thus proportional to the product of the iteration
count and the sparsity ratio; i.e., solution time ∝ iter × spar.
The numerical experiments were conducted on a Power-Challenge XL Silicon Graphics workstation
equipped with 512 MB of main memory, one 190 MHz R10000 processor, and 1 MB secondary
cache. We used Fortran 77 programming language in 64 bit arithmetic computation.
Test matrices. Three test matrices were selected from different applications. Table 1 contains
simple descriptions of the test matrices. They have been used in several other papers [6, 9, 39, 46].
None of the three matrices has a zero diagonal.
Matrix      order    nonzeros    description
RAEFSKY4                         buckling problem for container model
UTM5940     5 940                tokamak simulation
WIGTO966    3 864    238 252     Euler equation model
Table 1: Simple descriptions of the test matrices.
3 We found stabilized ILUT was better than ILUTP for solving the last system. We did not implement an ILUT
factorization with a full pivoting strategy.
WIGTO966 matrices. The WIGTO966 matrix 4 was supplied by L. Wigton from Boeing Com-
pany. It is solvable by ILUT with large values of p [6]. This matrix was also used to compare BILUM
with ILUT in [36], and BILUTM with ILUT in [38], and to test point and block preconditioning
techniques in [8, 9]. Since ILUT requires very large amount of fill-in to converge, the WIGTO966
matrix is ideal to test alternative preconditioners and to show the least memory that is required for
convergence. For example, BILUM (with GMRES(10)) was shown to be 6 times faster than ILUT
with only one-third of the memory required by ILUT [36]. BILUTM (with GMRES(50)) converged
almost 5 times faster and used just about one-fifth of the memory required by ILUT [38]. Table 2 lists
results from several runs to compare MDRILU and ILUT. It shows that MDRILU could converge
with low sparsity ratios, as low as 0.94. The threshold parameter ε was in a fixed range when the
other parameters p and τ changed. For all the values of p and τ tested in Table 2, ILUT did not
converge. We found that there was no very small pivot, the size of the smallest pivot in all tests in
Table
was 1.19e-5. But the condition estimates for ILUT were very large, the smallest condest value
is 1.1e+82, indicating unstable triangular solves had resulted during the factorization and solution
processes.
p  τ  ε  |  MDRILU: iter, prec, spar, cond  |  ILUT: iter, prec, spar, cond
50 1.0e-3 0.38 27 4.92 2.17 4.4e+4 - 9.9e+116
50 1.0e-4 0.38 25 7.48 2.55 2.7e+4 - 2.7e+91
Table 2: Comparison of MDRILU and ILUT for solving the WIGTO966 matrix.
We further compared ILUTP and ILUT and list the results in Table 3. We see that ILUTP
is more robust than ILUT for solving the WIGTO966 matrix. ILUT required high sparsity ratios
to converge. For those cases, ILUTP was able to converge with fewer iterations. When we chose
another parameter setting, ILUT failed to converge, but ILUTP converged in 49 iterations with a sparsity
ratio 3.06. Notice that both ILUTP and ILUT did not converge with some parameters for which
MDRILU could converge. We point out that the condition estimates of ILUTP
are much smaller than those of ILUT. This implies that ILUTP did stabilize the ILU factorization
process with a column pivoting strategy, although there was no very small pivot in the factorization.
The results of Table 3 also show that the additional cost of implementing ILUTP is not high in
this test. However, as far as solving the WIGTO966 matrix is concerned, computing an MDRILU
preconditioner is much cheaper than computing either an ILUT or an ILUTP preconditioner.
RAEFSKY4 matrices. The RAEFSKY4 matrix 5 was supplied by H. Simon from Lawrence
Berkeley National Laboratory (originally created by A. Raefsky from Centric Engineering). This is
4 The WIGTO966 matrix is available from the author.
5 The RAEFSKY4 matrix is available online from the University of Florida Sparse Matrix Collection at
http://www.cise.ufl.edu/~davis/sparse.
ILUTP ILUT
iter prec spar cond iter prec spar cond
100 1.0e-4 0.50 34 22.90 3.08 2.3e+5 - 3.0e+69
300 1.0e-3 0.10 9 44.98 7.39 1.2e+4 74 51.52 7.91 1.3e+8
Table 3: Comparison of ILUTP and ILUT for solving the WIGTO966 matrix.
probably the hardest one in the total of 6 RAEFSKY matrices. Figure 2 shows the convergence
history of the three preconditioners with τ = 1.0e-4. The other parameters were as described above
for MDRILU, and σ = 0.03 for ILUTP. We see that both ILUT and ILUTP did not have much
convergence in 100 iterations; MDRILU converged in 13 iterations.
Figure 2: Convergence history of preconditioned GMRES for solving the RAEFSKY4 matrix (2-norm residual versus iterations; MDRILU dashed, ILUTP dashdot, ILUT solid).
In Figure 3 we plotted the iteration counts (left part) and the values of the condition estimate
(right part) of the MDRILU preconditioner with different values of the threshold parameter ε, keeping
the other parameters fixed. We found that the iteration count and the condition estimate were linked
to each other. A large value of condition estimate is usually accompanied by a large iteration count
of MDRILU. We also see that the convergence rates of MDRILU are not very sensitive to the choice
of the value of ε. For 0.38 ≤ ε ≤ 0.78, MDRILU gave very similar performance.
UTM5940 matrix. The UTM5940 matrix 6 is the largest matrix from the TOKAMAK collection
and was provided by P. Brown of Lawrence Livermore National Laboratory. Table 4 contains a few
runs with MDRILU and ILUT with different sparsity ratios. It is clear that MDRILU is more efficient
than ILUT when the sparsity ratios are low. The results are also consistent with other test results,
6 The UTM5940 matrix is available from online the MatrixMarket of the National Institute of Standards and Tech-
Figure 3: Iteration counts (left) and condition estimates (right) of MDRILU with different values of ε for solving the RAEFSKY4 matrix (horizontal axes: epsilon).
indicating that MDRILU is able to solve this problem with less storage cost than ILUT. If sufficient
memory space is available, ILUT may be efficient in certain cases. Note that if both MDRILU and
ILUT converge with similar iteration counts, MDRILU is more expensive to construct than ILUT.
p  τ  ε  |  MDRILU: iter, prec, spar, cond  |  ILUT: iter, prec, spar, cond
50 1.0e-4 0.30 42 7.49 6.26 2.2e+7 86 3.67 5.72 1.3e+7
Table 4: Comparison of MDRILU and ILUT for solving the UTM5940 matrix.
Figure 4 shows the convergence history of MDRILU with different values of the dropping tolerance
τ for solving the UTM5940 matrix, keeping the other parameters fixed. We note that the number of
iterations did not change very much when τ changed from 1.0e-2 to 1.0e-5 and the sparsity ratio
changed from 2.67 to 4.15. It seems that MDRILU worked quite well with a relatively strict dropping
tolerance.
FIDAP matrices. The FIDAP matrices 7 were extracted from the test problems provided in
the FIDAP package [21]. They were generated by I. Hasbani of Fluid Dynamics International and
B. Rackner of Minnesota Supercomputer Center. The matrices resulted from modeling the incompressible
Navier-Stokes equations and were generated using whatever solution method was specified
in the input decks. However, if the penalty method was used, there is usually a corresponding
7 All FIDAP matrices are available online from the MatrixMarket of the National Institute of Standards and Tech-
Figure 4: Convergence history of MDRILU with different values of the dropping tolerance τ for solving the UTM5940 matrix (2-norm residual versus iterations; the four curves correspond to the four values of τ).
FIDAPM matrix, which was constructed using a fully coupled solution method (mixed u-p formula-
tion). The penalty method gives very ill conditioned matrices, whereas the mixed u-p method gives
indefinite, larger systems (they include pressure variables).
Many of these matrices contain small or zero diagonal values. 8 The zero diagonals are due to
the incompressibility condition of the Navier-Stokes equations [9]. The substantial amount of zero
diagonals makes these matrices indefinite. It is remarked in [6] that the FIDAP matrices are difficult
to solve with ILU preconditioning techniques, which require high level of fill-in to be effective and the
performance of the preconditioners is unstable with respect to the amount of fill-in. Many of them
cannot be solved by the standard BILUM preconditioner and in some cases, even the construction of
BILUM failed due to the occurrence of very ill conditioned blocks. Nevertheless, some of them may
be solved by the enhanced version of BILUM using singular value decomposition based regularized
inverse technique and variable block size [40].
The details of all of the largest 31 FIDAP matrices (n ? 2000) are listed in Table 5 and the
corresponding test results are given in Table 6. The second column of Table 6 lists the number of
zero diagonals of the given matrix. In our tests, we first set
0:5; 0:3; 0:1; 0:01. If none of these ffl values showed any promise, we increased the p value or decreased
the - value. If for a given pair of (p; - ), MDRILU with a certain value of ffl converged or showed some
convergence, we adjusted the value of ffl to get improved convergence rates if possible. However, there
was no effort made to find the best parameters. We stopped refining the parameters when we found
the iteration count was reasonable and the sparsity ratio was not high, or the computations took too
much time in case of large matrices. Once MDRILU was tested, the same pair (p; -) was used to test
ILUTP and ILUT. For ILUTP, we varied the value of oe analogously to what we did to choose the
value of ffl.
Table
6 shows that MDRILU can solve 27 out of the 31 largest FIDAP matrices. To the best of
8 The FIDAP matrices have structural zeros added on the offdiagonals to make them structurally symmetric. Structural
zeros were also added to the diagonals.
Matrix order nonzeros description
developing flow in a vertical channel
impingment cooling
flow over multiple steps in a channel
flow in lid-driven wedge
FIDAP015 6 867 96 421 spin up of a liquid in an annulus
turbulent flow over a backward-facing step
developing pipe flow, turbulent
attenuation of a surface disturbance
coating
convection
two merging liquids with an interior interface
turbulent flow in axisymmetric U-bend
species deposition on a heated plate
FIDAP035 19 716 218 308 turbulent flow in a heated channel
FIDAP036 3 079 53 851 chemical vapor deposition
flow of plastic in a profile extrusion die
flow past a cylinder in free stream
natural convection in a square enclosure
developing flow in a vertical channel
impingment cooling
flow over multiple heat sources in a channel
FIDAPM11 22 294 623 554 3D steady flow, head exchanger
FIDAPM15 9 287 98 519 spin up of a liquid in an annulus
turbulent flow is axisymmetric U-bend
radiation heat transfer in a square cavity
FIDAPM37 9 152 765 944 flow of plastic in a profile extrusion die
Table 5: Description of the largest 31 FIDAP matrices.
our knowledge, this is the first time that so many FIDAP matrices were solved by a single iterative
technique. (20 were solved in [40], in [46], 9 in [39], and 8 in [9].) In Table 6 the term "unstable"
means that convergence was not reached in 100 iterations and the condition estimate was greater than
$10^{15}$. Similarly, the term "inaccurate" means that convergence was not reached, but the condition
estimate did not exceed $10^{15}$. They are categorized according to Chow and Saad's arguments [9]. We
remark that the results of "inaccurate" or "unstable" in Table 6 do not indicate that ILUT or ILUTP
can or cannot solve the given matrices with different parameters. The results only mean that they
did not converge with the parameters that made MDRILU converge. It is worth pointing out that,
in several tests, we observed that ILUTP encountered zero pivots when ILUT did not.
Although we allowed 10 levels of maximum dual reorderings to be performed, there were very
few cases that 10 levels of reorderings were actually needed. In most cases, 3 to 4 levels of dual
reorderings were performed for the FIDAP matrices. In many cases, the first Schur complement
Matrix  zero-d  p  τ  ε  |  MDRILU: iter, spar  |  ILUTP: iter, spar  |  ILUT: iter, spar
unstable unstable
unstable unstable
FIDAP026 457 20 1.0e-4 0.30 84 0.77 unstable unstable
FIDAP036 504 20 1.0e-4 0.10 23 1.75 83 1.91 unstable
FIDAPM07 432 300 1.0e-4 0.20 78 6.66 80 7.71 inaccurate
FIDAPM08 780 20 1.0e-4 0.10 25 1.70 78 2.22 unstable
unstable unstable
43 7.51 21 7.37 unstable
28 14.38 11 7.47 13 7.46
43 3.61 unstable unstable
Table 6: Solving the FIDAP matrices by MDRILU, ILUTP and ILUT.
matrix did not have any zero diagonal, even if the original matrix A did have many zero diagonals.
We listed in Table 7 those matrices that did have zero diagonals in their Schur complement matrices.
For all the FIDAP matrices solved by MDRILU, only the FIDAP026 matrix had 12 zero diagonals
in the last Schur complement A 5 . The test results show that the multilevel dual reordering strategy
does have the effect of removing small and zero pivots from ILU factorizations.
Remarks. Ironically, the four matrices, FIDAP011, FIDAP015, FIDAP018, and FIDAP035, that
were not solved by MDRILU do not have any zero diagonals. They may be solved by ILUT with
small values of - . Some of them may even be solved by GMRES without preconditioning if enough
iterations are allowed. We think this is because these matrices are very nonsymmetric and the
preconditioned matrices were worse conditioned than the original matrices, causing GMRES iteration
to converge extremely slowly. One of our strong feeling in these numerical experiments is that, in
general, MDRILU does not seem to work well when - is very small. Large values of p usually improve
convergence. This observation can be seen in Figure 5 which depicts the convergence history of
Matrix A 0 A 1 A 3 A 4 A 5
Table 7: Number of zero diagonals in the Schur complement matrices.
MDRILU for solving the largest FIDAP matrix, FIDAPM11. We tested two values of the dropping
tolerance τ (one of them 1.0e-3). It is clear that a more accurate (in terms of dropping tolerance)
ILU factorization does not help and sometimes hampers convergence. Good values for the parameter
ε are between 0.1 and 0.5. For most problems, the performance of MDRILU is not very sensitive to
the choice of ε, as long as it is in the range of 0.1 to 0.5.
Figure 5: Convergence history of MDRILU with two values of the dropping tolerance τ for solving the FIDAPM11 matrix (2-norm residual versus iterations; solid and dashed lines correspond to the two τ values).
7 Conclusion
We have proposed a multilevel dual reordering strategy for constructing robust ILU preconditioners
for solving general sparse indefinite matrices. This reordering strategy is combined with a partial
ILU factorization procedure to construct recursive Schur complement matrices. The preconditioner
is a multilevel ILU preconditioner. However, the constructed preconditioner (MDRILU) is different
from all existing multilevel preconditioners in a fundamental concept [37, 47]. MDRILU never intends
to utilize any traditional multilevel property, it uses the Schur complement approach solely for the
purpose of removing small pivots.
We conducted analyses on simplified model problems to find out how the size of the small diagonal
elements and other elements is modified when these elements become the elements of the Schur
complement matrix. We gave an upper bound for the size of general elements of the Schur complement
matrix to show that their size will not grow uncontrollably if a suitable threshold reordering based
on the diagonal dominance measure is implemented. We also showed that under certain conditions,
a zero or very small diagonal element is likely to be modified to favor a stable ILU factorization by
the Schur complement procedure.
We further studied the quality of a preconditioning step. We showed that the quality of a
preconditioning step is directly related to the size of both (LU) \Gamma1 and R (the error matrix). Hence,
a high quality preconditioner must have a stable ILU factorization and stable triangular solves, as
well as a small size error matrix. In other words, both accuracy and stability affect the quality of a
preconditioner.
We performed numerical experiments to compare MDRILU with two popular ILU precondi-
tioners. Our numerical results show that MDRILU is much more robust than both ILUT and ILUTP
for solving most indefinite matrices under current consideration. The most valuable advantage of
MDRILU is that it can construct a sparse high quality preconditioner with low storage cost. The
preconditioners computed by MDRILU are more stable than those computed by ILUT and ILUTP,
thanks to the ability of MDRILU to remove (not replace) the small diagonal values.
Both analytic and numerical results strongly support our conclusion that the multilevel dual
reordering strategy developed in this paper is a very useful strategy to construct robust ILU preconditioners
for solving general sparse indefinite matrices. Due to the time and space limit, we have not
tested other graph reordering algorithms in the multilevel dual reordering algorithm. Some of the
popular reordering strategies such as Cuthill-McKee and reverse Cuthill-McKee algorithms may be
useful in such applications to further improve the quality of the ILU preconditioner. However, we feel the
robustness of MDRILU is mainly a result of using threshold tolerance reordering strategy and partial
ILU factorization to remove small pivots. The difference arising from using different graph algorithm
may be significant in terms of the number of iterations. But such a difference is unlikely to alter the
stability problem in a systematic manner in the ILU factorization.
--R
Comparison of fast iterative methods for symmetric systems.
Incomplete factorization methods for fully implicit simulation of enhanced oil recovery.
Orderings for incomplete factorization preconditioning of nonsymmetric problems.
An incomplete-factorization preconditioning using red-black ordering
Parallel elliptic preconditioners: Fourier analysis and performance on the Connection machine.
On preconditioned Krylov subspace methods for discrete convection-diffusion problems
An object-oriented framework for block preconditioning
Experimental study of ILU preconditioners for indefinite matrices.
Weighted graph based ordering techniques for preconditioned conjugate gradient methods.
Ordering methods for preconditioned conjugate gradient methods applied to unstructured grid problems.
SOR as a preconditioner.
Stability and spectral properties of incomplete factorization.
On parallelism and convergence of incomplete LU factorizations.
Direct Methods for Sparse Matrices.
The effect of reordering on preconditioned conjugate gradients.
The effect of reordering on the preconditioned GMRES algorithm for solving the compressible Navier-Stokes equations
A stability analysis of incomplete LU factorization.
Relaxed and stabilized incomplete factorization for nonselfadjoint linear systems.
Ordering techniques for the preconditioned conjugate gradient method on parallel computers.
FIDAP: Examples Manual
Computer Solution of Large Sparse Positive Definite Systems.
On the problem of unstable pivots in the incomplete LU-conjugate gradient method
Conjugate gradient methods and ILU preconditioning of non-symmetric matrix systems with arbitrary sparsity patterns
Comparative analysis of the Cuthill-McKee and the reverse Cuthill-McKee ordering algorithms for sparse matrices
Ordering strategies for modified block incomplete factorizations.
An incomplete factorization technique for positive definite linear systems.
An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix
On the stability of the incomplete LU-factorization and characterizations of H-matrices
Ordering methods for approximate factorization preconditioning.
Orderings for conjugate gradient preconditionings.
ILUT: a dual threshold incomplete LU preconditioner.
ILUM: a multi-elimination ILU preconditioner for general sparse matrices
Iterative Methods for Sparse Linear Systems.
GMRES: a generalized minimal residual method for solving non-symmetric linear systems
Domain decomposition and multi-level type techniques for general sparse linear systems
BILUM: block versions of multielimination and multilevel ILU preconditioner for general sparse linear systems.
BILUTM: a domain-based multi-level block ILUT preconditioner for general sparse matrices
Diagonal threshold techniques in robust multi-level ILU preconditioners for general sparse linear systems
Enhanced multi-level block ILU preconditioning strategies for general sparse linear systems
A multi-level preconditioner with applications to the numerical simulation of coating problems
On the stability of the incomplete Cholesky decomposition for a singular perturbed problem
Incomplete LU preconditioners for conjugate-gradient-type iterative methods
Iterative solution methods for certain sparse linear systems with a non-symmetric matrix arising from PDE-problems
Stabilized incomplete LU-decompositions as preconditionings for the Tchebycheff iteration
Preconditioned Krylov subspace methods for solving nonsymmetric matrices from CFD applications.
A grid based multilevel incomplete LU factorization preconditioning technique for general sparse matrices.
--TR
--CTR
Wang , Jun Zhang, A new stabilization strategy for incomplete LU preconditioning of indefinite matrices, Applied Mathematics and Computation, v.144 n.1, p.75-87, 20 November
Kai Wang , Jun Zhang , Chi Shen, Parallel Multilevel Sparse Approximate Inverse Preconditioners in Large Sparse Matrix Computations, Proceedings of the ACM/IEEE conference on Supercomputing, p.1, November 15-21,
Jeonghwa Lee , Jun Zhang , Cai-Cheng Lu, Incomplete LU preconditioning for large scale dense complex linear systems from electromagnetic wave scattering problems, Journal of Computational Physics, v.185 n.1, p.158-175, 10 February
Chi Shen , Jun Zhang, Parallel two level block ILU Preconditioning techniques for solving large sparse linear systems, Parallel Computing, v.28 n.10, p.1451-1475, October 2002
Chi Shen , Jun Zhang , Kai Wang, Distributed block independent set algorithms and parallel multilevel ILU preconditioners, Journal of Parallel and Distributed Computing, v.65 n.3, p.331-346, March 2005
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | sparse matrices;multilevel incomplete LU preconditioner;incomplete LU factorization;reordering strategies |
587861 | An Implicitly Restarted Symplectic Lanczos Method for the Symplectic Eigenvalue Problem. | An implicitly restarted symplectic Lanczos method for the symplectic eigenvalue problem is presented. The Lanczos vectors are constructed to form a symplectic basis. The inherent numerical difficulties of the symplectic Lanczos method are addressed by inexpensive implicit restarts. The method is used to compute some eigenvalues and eigenvectors of large and sparse symplectic operators. | Introduction
. We consider the numerical solution of the real symplectic eigenvalue problem
$$Mx = \lambda x, \qquad (1.1)$$
where $M \in \mathbb{R}^{2n \times 2n}$ is large and possibly sparse. A matrix $M$ is called symplectic iff
$$M J M^T = J, \qquad (1.2)$$
or, equivalently, $M^T J M = J$, where
$$J = \begin{bmatrix} 0 & I_n \\ -I_n & 0 \end{bmatrix} \qquad (1.3)$$
and $I_n$ is the $n \times n$ identity matrix. The symplectic matrices form a group under multiplication. The eigenvalues of symplectic matrices occur in reciprocal pairs: if $\lambda$ is an eigenvalue of $M$ with right eigenvector $x$, then $\lambda^{-1}$ is an eigenvalue of $M$ with left eigenvector $(Jx)^T$. The computation of eigenvalues and eigenvectors of such matrices
is an important task in applications like the discrete linear-quadratic regulator
problem, discrete Kalman filtering, or the solution of discrete-time algebraic Riccati
equations. See, e.g., [21, 22, 28] for applications and further references. Symplectic
matrices also occur when solving linear Hamiltonian difference systems [6].
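To make the defining relations concrete, here is a small NumPy sketch (an illustration only, not taken from the paper; the block-diagonal construction is just one convenient way to obtain a symplectic test matrix). It checks $M^T J M = J$ and the reciprocal pairing of the eigenvalues.

```python
import numpy as np

def sympl_J(n):
    """J = [[0, I_n], [-I_n, 0]], the matrix defining the symplectic form (1.3)."""
    I, Z = np.eye(n), np.zeros((n, n))
    return np.block([[Z, I], [-I, Z]])

n = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + n * np.eye(n)          # any nonsingular block works
M = np.block([[A, np.zeros((n, n))],
              [np.zeros((n, n)), np.linalg.inv(A).T]])   # block-diagonal symplectic matrix

J = sympl_J(n)
print(np.allclose(M.T @ J @ M, J))                        # True: M is symplectic

# the moduli of the eigenvalues pair up as |lambda| and 1/|lambda|
mods = np.sort(np.abs(np.linalg.eigvals(M)))
print(np.allclose(mods * mods[::-1], 1.0))
```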
In order to develop fast, efficient, and reliable methods, the symplectic structure
of the problem should be preserved and exploited. Then important properties of symplectic
matrices (e.g., eigenvalues occurring in reciprocal pairs) will be preserved and
not destroyed by rounding errors. Different structure-preserving methods for solving
have been proposed. In [25], Lin introduces the $S + S^{-1}$-transformation, which can be used to compute the eigenvalues of a symplectic matrix by a structure-preserving
method similar to Van Loan's square-reduced method for the Hamiltonian
eigenvalue problem [38]. Flaschka, Mehrmann, and Zywietz show in [14] how to construct
structure-preserving methods based on the SR method [10, 11, 26]. Patel
[34, 33] and Mehrmann [27] developed structure-preserving algorithms for the symplectic
generalized eigenproblem.
Submitted in July 1998.
† Universität Bremen, Fachbereich 3 - Mathematik und Informatik, Zentrum für Technomathematik, 28357 Bremen, FRG. E-mail: benner@math.uni-bremen.de
‡ Corresponding author, Universität Bremen, Fachbereich 3 - Mathematik und Informatik, Zentrum für Technomathematik, 28357 Bremen, FRG. E-mail: heike@math.uni-bremen.de
Recently, Banse and Bunse-Gerstner [2, 3] presented a new condensed form for
symplectic matrices. The $2n \times 2n$ condensed matrix is symplectic, contains only $\mathcal{O}(n)$ nonzero entries, and is determined by $4n-1$ parameters. This condensed form, called symplectic butterfly form, can be depicted as a symplectic matrix whose (1,1) and (2,1) blocks are diagonal and whose (1,2) and (2,2) blocks are tridiagonal; the resulting sparsity pattern gives the form its name.
Once the reduction of a symplectic matrix to butterfly form is achieved, the SR
algorithm [10, 11, 26] is a suitable tool for computing the eigenvalues/eigenvectors of a
symplectic matrix. The SR algorithm preserves the butterfly form in its iterations and
can be rewritten in a parameterized form that works with the $4n-1$ parameters instead of the $(2n)^2$ matrix elements in each iteration. Hence, the symplectic structure, which
will be destroyed in the numerical process due to roundoff errors, can be restored in
each iteration for this condensed form. An analysis of the butterfly SR algorithm can
be found in [2, 4, 5].
In [2, 3] an elimination process for computing the butterfly form of a symplectic
matrix is given which uses elementary unitary symplectic transformations as well as
non-unitary symplectic transformations. Unfortunately, this approach is not suitable
when dealing with large and sparse symplectic matrices as an elimination process can
not make full use of the sparsity. Hence, symplectic Lanczos methods which create
the symplectic butterfly form if no breakdown occurs are derived in [2, 4]. Given a starting vector $v_1 \in \mathbb{R}^{2n}$ and a symplectic matrix $M \in \mathbb{R}^{2n\times 2n}$, these Lanczos algorithms produce a matrix $S^{2n,2k}$ which satisfies a recursion of the form
$$M S^{2n,2k} = S^{2n,2k} B^{2k,2k} + r_{k+1} e_{2k}^T, \qquad (1.4)$$
where $B^{2k,2k}$ is a butterfly matrix of order $2k \times 2k$, and the columns of $S^{2n,2k}$ are orthogonal with respect to the indefinite inner product defined by $J$ (1.3). The latter property will be called J-orthogonality throughout this paper. The residual $r_{k+1}$ depends on $v_{k+1}$ and $w_{k+1}$. Such a symplectic Lanczos
method will suffer from the well-known numerical difficulties inherent to any Lanczos
method for unsymmetric matrices. In [2], a symplectic look-ahead Lanczos algorithm
is presented which overcomes breakdown by giving up the strict butterfly form. Un-
fortunately, so far there do not exist eigenvalue methods that can make use of that
special reduced form. Standard eigenvalue methods as QR or SR algorithms have to
be employed resulting in a full symplectic matrix after only a few iteration steps.
A different approach to deal with the numerical difficulties of the Lanczos process
is to modify the starting vectors by an implicitly restarted Lanczos process (see
the fundamental work in [9, 35]); for the unsymmetric eigenproblem the implicitly
restarted Arnoldi method has been implemented very successfully, see [24]. The
problems are addressed by fixing the number of steps in the Lanczos process at a prescribed
value k which depends upon the required number of approximate eigenvalues.
J-orthogonality of the k Lanczos vectors is secured by re-J-orthogonalizing these
vectors when necessary. The purpose of the implicit restart is to determine initial
vectors such that the associated residual vectors are tiny. Given (1.4), an implicit
Lanczos restart computes the Lanczos factorization
$$M \tilde S^{2n,2k} = \tilde S^{2n,2k} \tilde B^{2k,2k} + \tilde r_{k+1} e_{2k}^T,$$
which corresponds to the starting vector $\tilde v_1 = \rho\, p(M)\, v_1$ (where $p(M) \in \mathbb{R}^{2n\times 2n}$ is a polynomial in $M$ and $\rho$ a scalar) without having to explicitly restart the Lanczos process with the vector $\tilde v_1$. Such an implicit restarting mechanism is derived here analogous to the technique introduced in [4, 18, 35].
Section 2 reviews the symplectic butterfly form and some of its properties that will
be helpful for analyzing the symplectic Lanczos method which reduces a symplectic
matrix to butterfly form. This symplectic Lanczos method is presented in Section 3.
Further, that section is concerned with finding conditions for the symplectic Lanczos
method terminating prematurely such that an invariant subspace associated with
certain desired eigenvalues is obtained. We will also consider the important question
of determining stopping criteria. The implicitly restarted symplectic Lanczos method
itself is derived in Section 4. Numerical properties of the proposed algorithm are
discussed in Section 5. In Section 6, we present some preliminary numerical examples.
2. The Symplectic Butterfly Form. A symplectic matrix
$$B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} \in \mathbb{R}^{2n\times 2n}$$
is called a butterfly matrix if $B_{11}$ and $B_{21}$ are diagonal, and $B_{12}$ and $B_{22}$ are tridiagonal. Banse and Bunse-Gerstner [2, 3] showed that for every symplectic matrix $M$, there exist numerous symplectic matrices $S$ such that $S^{-1}MS$ is a symplectic butterfly matrix. In [2], an elimination process for computing the butterfly form of a
butterfly matrix. In [2], an elimination process for computing the butterfly form of a
symplectic matrix is presented (see also [4]).
In [4], an unreduced butterfly matrix is introduced in which the lower right tridiagonal
matrix is unreduced, that is, the subdiagonal elements of B 22 are nonzero. Using
the definition of a symplectic matrix, one easily verifies that if B is an unreduced
butterfly matrix, then B 21 is nonsingular. This allows the decomposition of B into
two simpler symplectic matrices $B_1$ and $B_2$ (2.1), in which one factor contains a tridiagonal and symmetric block. Hence parameters that determine the symplectic matrix can be read off directly. The unreduced butterfly matrices play a role analogous to that of unreduced Hessenberg matrices in the standard QR theory [2, 4, 5].
We will frequently make use of the decomposition (2.1); in terms of the $4n-1$ parameters $a_j$, $b_j$, $c_j$, and $d_j$, the two factors are denoted as in (2.2) and (2.3), and the resulting parameterized butterfly matrix is written as in (2.4).
Remark 2.1. (See [4].)
a) Any unreduced butterfly matrix is similar to an unreduced butterfly matrix
with
b) We will have deflation if d j. Then the eigenproblem can be
split into two smaller ones with unreduced symplectic butterfly matrices.
Eigenvalues and eigenvectors of symplectic butterfly matrices can be computed
efficiently by the SR algorithm [7], which is a QR like algorithm in which the QR decomposition
is replaced by the SR decomposition. Almost every matrix A 2 IR 2n\Theta2n
can be decomposed into a product A = SR where S is symplectic and R is J -
triangular, that is
R
where all submatrices R ij 2 IR n\Thetan are upper triangular, and R 21 is strictly upper
triangular [12]. In the following a matrix D 2 IR 2n\Theta2n will be called trivial if it is
both symplectic and J-triangular. D is trivial if and only if it has the form
where C and F are diagonal matrices, C nonsingular.
If the SR decomposition A = SR exists, then other SR decompositions of A can
be built from it by passing trivial factors back and forth between S and R. That
is, if D is a trivial matrix, ~
R is another SR
decomposition of A. If A is nonsingular, then this is the only way to create other SR
decompositions. In other words, the SR decomposition is unique up to trivial factors.
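The following small check (an illustration; the diagonal form $D = \begin{bmatrix} C & F \\ 0 & C^{-1}\end{bmatrix}$ with $C$, $F$ diagonal is the form of the trivial factors stated above) verifies numerically that such a $D$ is both symplectic and J-triangular.

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
C = np.diag(rng.uniform(0.5, 2.0, n))            # diagonal, nonsingular
F = np.diag(rng.standard_normal(n))              # diagonal
D = np.block([[C, F], [np.zeros((n, n)), np.linalg.inv(C)]])

J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])

print(np.allclose(D.T @ J @ D, J))               # symplectic
blocks = [D[:n, :n], D[:n, n:], D[n:, :n], D[n:, n:]]
print(all(np.allclose(b, np.triu(b)) for b in blocks)
      and np.allclose(D[n:, :n], 0.0))           # J-triangular
```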
The SR algorithm is an iterative algorithm that performs an SR decomposition
at each iteration. If B is the current iterate, then a spectral transformation function
q is chosen (such that $q(B) \in \mathbb{R}^{2n\times 2n}$) and the SR decomposition of $q(B)$ is formed, if possible:
$$q(B) = SR.$$
Then the symplectic factor $S$ is used to perform a similarity transformation on $B$ to yield the next iterate, which we will call $\hat B$:
$$\hat B = S^{-1} B S. \qquad (2.5)$$
If $q(B)$ is nonsingular and $B$ is an unreduced symplectic butterfly matrix, then so is $\hat B$ in (2.5) [2, 3]. If $q(B)$ is singular, i.e., some of the shifts are eigenvalues of $B$, and $B$ is an unreduced symplectic butterfly matrix, then $\hat B$ in (2.5) decouples as displayed in (2.6) (see [4]): the leading block of $\hat B$ is again a symplectic butterfly matrix, and the eigenvalues of the trailing block are just the shifts that are eigenvalues of $B$.
An algorithm for explicitly computing S and R is presented in [8]. As with explicit
QR steps, the expense of explicit SR steps comes from the fact that q(B) has to be
computed explicitly. A preferred alternative is the implicit SR step, an analogue to
the Francis QR step [15, 17, 20]. As the implicit SR step is analogous to the implicit
QR step, this technique will not be discussed here (see [4, 5] for details).
A natural way to choose the spectral transformation function $q$ is to choose a polynomial whose zeros are the shifts; these choices make use of the symmetries of the spectrum of symplectic matrices. But, as explained in [5], a better choice is a Laurent polynomial to drive the SR step. For example, instead of $p_4(\lambda)$ we will use the Laurent polynomial $p_4(\lambda)/\lambda^2$.
This reduces the size of the bulges that are introduced, thereby decreasing the number
of computations required per iteration. Moreover, the use of Laurent polynomials improves
the convergence and stability properties of the algorithm by effectively treating
each reciprocal pair of eigenvalues as a unit. Using a generalized Rayleigh-quotient
strategy, the butterfly SR algorithm is typically cubic convergent [5].
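As an aside, here is a sketch under the assumption that the Laurent polynomial associated with a real shift pair $\{\mu, \mu^{-1}\}$ is $q_2(\lambda) = \lambda + \lambda^{-1} - \mu - \mu^{-1}$ (one natural choice consistent with the discussion above): evaluating such a Laurent polynomial at a symplectic matrix needs only one extra product with $M^{-1} = J^T M^T J$, and a single factor treats the shift and its reciprocal as a unit.

```python
import numpy as np

def q2_of_matrix(M, mu, J):
    """q2(M) = M + M^{-1} - (mu + 1/mu) I, with M^{-1} = J^T M^T J for symplectic M."""
    Minv = J.T @ M.T @ J
    return M + Minv - (mu + 1.0 / mu) * np.eye(M.shape[0])

# scalar sanity check: q2 vanishes at both mu and 1/mu
mu = 2.5
q2 = lambda lam: lam + 1.0 / lam - (mu + 1.0 / mu)
print(abs(q2(mu)) < 1e-14, abs(q2(1.0 / mu)) < 1e-14)
```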
The right eigenvectors of unreduced butterfly matrices have the following property
which will be helpful when analyzing the symplectic Lanczos method introduced in
the next section.
Lemma 2.2. Suppose that $B \in \mathbb{R}^{2n\times 2n}$ is an unreduced butterfly matrix as in (2.4). If $Bx = \lambda x$ for some $x \neq 0$, then $e_{2n}^T x \neq 0$.
In order to prove this lemma we need the following definition. Let $P_n$ denote the perfect-shuffle permutation matrix (2.7) that reorders the index sequence $(1, 2, \dots, 2n)$ as $(1, n+1, 2, n+2, \dots, n, 2n)$. If the dimension of $P_n$ is clear from the context, we leave off the subscript.
Proof. The proof is by induction on the size of B. The entries of the eigenvector
x will be denoted by x
Suppose that 2. The second and fourth row of
a
Since B is unreduced, we know that a 2 6= 0 and d 2 6= 0. If x then from (2.9) we
obtain
while (2.8) gives b 2 Using (2.10) we obtain x
The third row of
a
Using since B is unreduced, we obtain x
which contradicts the assumption x 6= 0.
Assume that the lemma is true for matrices of order 2(n\Gamma1). Let B 2n;2n 2 IR 2n\Theta2n
be an unreduced butterfly matrix. For simplicity we will consider the permuted
equation B 2n;2n
Partition
2n\Gamma2 an an c n5 ;
~
x
~
is an unreduced butterfly matrix and y 2
This implies
since an 6= 0 as B 2n;2n is unreduced. Further we have
Hence, using (2.11) we get ~ x
-y. Using
~
further obtain from (2.11) y This is a contradiction,
because by induction hypothesis e T
Remark 2.3. Let $y$ be the right eigenvector of $B$ corresponding to $\lambda$; then $(Jy)^T$ is a left eigenvector of $B$ corresponding to $\lambda^{-1}$. Lemma 2.2 states that $e_{2n}^T y \neq 0$; hence the $n$th component of the left eigenvector of $B$ corresponding to $\lambda^{-1}$ is nonzero.
3. A Symplectic Lanczos Method for Symplectic Matrices. In this sec-
tion, we review the symplectic Lanczos method to compute the butterfly form (2.4)
for a symplectic matrix M derived in [4]. The usual unsymmetric Lanczos algorithm
generates two sequences of vectors. Due to the symplectic structure of M it is easily
seen that one of the two sequences can be eliminated here and thus work and storage
can essentially be halved. (This property is valid for a broader class of matrices, see
[16].) Further, this section is concerned with finding conditions for the symplectic
Lanczos method terminating prematurely such that an invariant subspace associated
with certain desired eigenvalues is obtained. Finally we will consider the important
question of determining stopping criteria.
In order to simplify the notation we use in the following permuted versions of M
and $B$ as in the previous section: let $M_P$ and $B_P$ denote the versions of $M$ and $B$ permuted with the permutation matrix $P$ as in (2.7).
3.1. The symplectic Lanczos factorization. We want to compute a symplectic
matrix S such that S transforms the symplectic matrix M to a symplectic
butterfly matrix $B$; in the permuted version this reads $M_P S_P = S_P B_P$. Equivalently, using the factored (parameterized) form of $B_P$, we can consider the corresponding recursions for the columns of $S_P$, from which the parameters $a_j$, $b_j$, $c_j$, $d_j$ are determined.
The structure preserving Lanczos method generates a sequence of permuted symplectic matrices
$$S_P^{2n,2k} = [\,v_1, w_1, v_2, w_2, \dots, v_k, w_k\,]$$
satisfying
$$M_P S_P^{2n,2k} = S_P^{2n,2k} B_P^{2k,2k} + r_{k+1} e_{2k}^T, \qquad (3.5)$$
where $B_P^{2k,2k}$ is a permuted $2k \times 2k$ symplectic butterfly matrix. The vector $r_{k+1} := d_{k+1}(b_{k+1} v_{k+1} + w_{k+1})$ is the residual vector and is $J_P$-orthogonal to the columns of $S_P^{2n,2k}$, the Lanczos vectors. The matrix $B_P^{2k,2k}$ is the $J_P$-orthogonal projection of $M_P$ onto the range of $S_P^{2n,2k}$. Here $J_P^{2k,2k}$ denotes a permuted $2k\times 2k$ matrix $J$ of the form (1.3). Equation (3.5) defines a length $2k$ Lanczos factorization of $M_P$. If the residual vector $r_{k+1}$ is the zero vector, then equation (3.5) is called a truncated Lanczos factorization when $k < n$.
Note that $r_{n+1}$ must vanish since $(S_P^{2n,2n})^T J_P S_P^{2n,2n} = J_P$ and the columns of $S_P^{2n,2n}$ form a $J_P$-orthogonal basis for $\mathbb{R}^{2n}$. In this case the symplectic Lanczos method computes a reduction to permuted butterfly form.
The symplectic Lanczos factorization is, up to multiplication by a trivial matrix, specified by the starting vector $v_1$ (see [4, Theorem 4.1]). Write $S_P = [\,v_1, w_1, \dots, v_n, w_n\,]$. For a given $v_1$, a Lanczos method constructs the matrix $S_P$ columnwise from the equations obtained by comparing columns on both sides of the permuted recursion.
From this we obtain the algorithm given in Table 3.1 (for a more detailed discussion
see [4]).
Table 3.1: Symplectic Lanczos Method. Choose an initial vector $\tilde v_1 \in \mathbb{R}^{2n}$, $\tilde v_1 \neq 0$. For $m = 1, 2, \dots$ the loop body normalizes the current vector, updates $w_m$, computes the parameter $c_m$, and updates $v_{m+1}$; the butterfly parameters $a_m$, $b_m$, $c_m$, $d_m$ are obtained along the way (the explicit update formulas are given in [4]).
Remark 3.1. Using the derived formulae for $w_{k+1}$, the residual term $r_{k+1}$ can be expressed as $d_{k+1}$ times a vector built from $v_{k+1}$ and $w_{k+1}$; in particular, $r_{k+1} = 0$ whenever $d_{k+1} = 0$.
There is still some freedom in the choice of the parameters that occur in this
algorithm. Essentially, the parameters $b_m$ can be chosen freely. Here we set $b_m = 1$. Likewise a different choice of the parameters $a_m$, $d_m$ is possible.
Note that $M^{-1} = J^T M^T J$ since $M$ is symplectic. Thus a multiplication with $M_P^{-1}$ amounts to just a matrix-vector product with the transpose of $M_P$. Hence, only one matrix-vector product is required for each computed Lanczos vector $w_m$ or $v_m$. Thus an efficient implementation of this algorithm requires on the order of $(n + nz)\,k$ flops, where $nz$ is the number of nonzero elements in $M_P$ and $2k$ is the number of Lanczos vectors computed (that is, the loop is executed $k$ times). The algorithm as given in Table 3.1 computes an odd number of Lanczos vectors; for a practical implementation one has to omit the computation of the last vector $v_{k+1}$ (or one has to compute an additional vector $w_{k+1}$).
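A minimal sketch of the observation just made: for a symplectic $M$, a product with $M^{-1}$ can be realized through a single product with $M^T$ (the small example matrix below is hypothetical and only serves to verify the identity).

```python
import numpy as np

def apply_M_inverse(M, x, J):
    """For symplectic M, M^{-1} = J^T M^T J, so M^{-1} x costs one product with M^T."""
    return J.T @ (M.T @ (J @ x))

M = np.array([[2.0, 1.0], [0.0, 0.5]])   # det = 1, hence symplectic for n = 1
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = np.array([1.0, -3.0])
print(np.allclose(apply_M_inverse(M, x, J), np.linalg.solve(M, x)))
```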
In the symplectic Lanczos method as given above we have to divide by parameters
that may be zero or close to zero. If such a case occurs for the normalization parameter
dm+1 , the corresponding vector e v m+1 is zero or close to the zero vector. In this case, a
(good approximation to a) JP -orthogonal invariant subspace of MP or equivalently, a
symplectic invariant subspace of M is detected. By redefining e v m+1 to be any vector
satisfying
m, the algorithm can be continued. The resulting butterfly matrix is no
longer unreduced; the eigenproblem decouples into two smaller subproblems. In case $\tilde w_m$ is zero (or close to zero), an invariant subspace of $M_P$ with dimension $2m-1$ is found (or a good approximation to such a subspace). In this case the parameter $a_m$ will be zero (or close to zero). From Table 3.1 we further obtain that in this case $b_m$ is a real eigenvalue of $M_P$ (and hence of $M$) with a corresponding eigenvector in the span of the Lanczos vectors computed so far. Due to the symmetry of the spectrum of $M$, we also have that $1/b_m$ is an eigenvalue of $M$. Computing an eigenvector $y$ of $M_P$ corresponding to $1/b_m$, we can try to augment the $(2m-1)$-dimensional invariant subspace to an $M_P$-invariant subspace of even dimension. If this is possible, the space can be made $J_P$-orthogonal by $J_P$-orthogonalizing $y$ against the previously computed Lanczos vectors and normalizing appropriately.
Thus if either v m+1 or wm+1 vanishes, the breakdown is benign. If v m+1 6= 0
and wm+1 6= 0 but then the breakdown is serious. No reduction of the
symplectic matrix to a symplectic butterfly matrix with v 1 as first column of the
transformation matrix exists.
A convergence analysis for the symplectic Lanczos algorithm analogous to the one
for the unsymmetric Lanczos algorithm presented by Ye [39] can be given. Moreover,
an error analysis of the symplectic Lanczos algorithm in finite-precision arithmetic
analogous to the analysis for the unsymmetric Lanczos algorithm presented by Bai
[1] can also be derived. These results will be presented in [13]. As to be expected, the
computed Lanczos vectors lose $J$- (respectively $J_P$-) orthogonality when some Ritz values begin to
converge.
3.2. Truncated symplectic Lanczos factorizations. This section is concerned
with finding conditions for the symplectic Lanczos method terminating prema-
turely. This is a welcome event since in this case we have found an invariant symplectic
(Following [17], we define each floating point arithmetic operation together with the associated
integer indexing as a flop.)
subspace S 2n;2k and the eigenvalues of B 2k;2k are a subset of those of M . We will first
discuss the conditions under which the residual vector of the symplectic Lanczos factorization
will vanish at some step k. Then we will show how the residual vector and
the starting vector are related. Finally a result indicating when a particular starting
vector generates an exact truncated factorization is given.
First the conditions under which the residual vector of the symplectic Lanczos
factorization will vanish at some step k will be discussed. From the derivation of the
algorithm it is immediately clear that if no breakdown occurs, then
where K(X; v; vg. Further it is
easy to see that
If dim K(MP
Hence, there exist real scalars ff such that
Using the definition of a k+1 as given in Table 3.1 and the above expression we obtain
because of J-orthogonality,
a
0:
As e
This implies that an invariant subspace of MP with dimension 2k
If dim K(MP
g. Hence
a
for properly chosen ff and from the algorithm in Table 3.1
Therefore e v This implies that the residual vector of the
symplectic Lanczos factorization will vanish at the first step k such that the dimension
of K(M; is equal to 2k and hence is guaranteed to vanish for some k - n.
Next we will discuss the relation between the residual term and the starting vector.
If dim K(M;
and Cn is a generalized companion matrix of the form
. 1
(see [2, proof of Satz 3.6]). Thus,
Define the residual in (3.7) by
Note that
where
We will now show that f k+1 is up to scaling the residual of the length 2k symplectic
Lanczos iteration with starting vector v 1 . Together with (3.9) this reveals the relation
between residual and starting vectors. Since det (C
J-orthogonal columns
(that is, (S 2n;2k ) T JnS is a J-triangular matrix. Then
. The diagonal elements of R are nonzero if and only if the columns of
are linear independent. Choosing
assures that (\GammaJ k (S 2n;2k multiplying (3.7) from the right by
is an unreduced butterfly matrix (see [2, proof of Satz 3.6])
with the same characteristic polynomial as C k . Equation (3.10) is a valid symplectic
Lanczos recursion with starting vector v residual vector f k+1 =r 2k;2k .
By (3.9) and due to the essential uniqueness of the symplectic Lanczos recursion any
symplectic Lanczos recursion with starting vector v 1 yields a residual vector that can
be expressed as a polynomial in M times the starting vector v 1 .
Remark 3.2. From (3.8) it follows that if the Krylov subspace $K(M, v_1, 2k)$ has dimension $2k$, then we can choose $c_1, \dots, c_{2k}$ such that $f_{k+1} = 0$. This shows that if the Krylov subspace $K(M, v_1, 2k)$ forms a $2k$-dimensional $M$-invariant subspace, the residual of the symplectic Lanczos recursion will be zero after $k$ Lanczos steps, such that the columns of $S^{2n,2k}$ span a symplectic basis for the subspace $K(M, v_1, 2k)$.
The final result of this section will give necessary and sufficient conditions for a
particular starting vector to generate an exact truncated factorization in a similar
way as stated for the Arnoldi method in [35]. This is desirable since then the columns
of S 2n;2k form a basis for an invariant symplectic subspace of M and the eigenvalues
of $B^{2k,2k}$ are a subset of those of $M$. Here, $\bar v_j$ and $\bar w_j$ will denote the Lanczos vectors after permuting them back.
Theorem 3.3. Let $M \bar S^{2n,2k} = \bar S^{2n,2k} B^{2k,2k} + \bar r_{k+1} e_{2k}^T$ be the symplectic Lanczos factorization after $k$ steps, with $B^{2k,2k}$ unreduced. Then $\bar r_{k+1} = 0$ if and only if $v_1 = Xy$ for some $y$, where $MX = XJ$ with $J$ a Jordan matrix of order $2k$.
Proof. If d
XJ be the Jordan canonical form of B 2k;2k
and put
X. Then
Suppose now that
it follows that
Hence by (3.6) dim K(M;
unreduced, dim K(M; k. Hence dim K(M;
and therefore, d
A similar result may be formulated in terms of Schur vectors or symplectic Schur
vectors (see, e.g., [28, 29] for the real symplectic Schur decomposition of a symplectic
matrix). These theorems provide the motivation for the implicit restart developed
in the next section. Theorem 3.3 suggests that one might find an invariant subspace
by iteratively replacing the starting vector with a linear combination of approximate
eigenvectors corresponding to eigenvalues of interest. Such approximations are readily
available through the Lanczos factorization.
3.3. Stopping Criteria. Now assume that we have performed $k$ steps of the symplectic Lanczos method and thus obtained the identity (after permuting back)
$$M S^{2n,2k} = S^{2n,2k} B^{2k,2k} + r_{k+1} e_{2k}^T.$$
If the norm of the residual vector is small, the $2k$ eigenvalues of $B^{2k,2k}$ are approximations to the eigenvalues of $M$. Numerical experiments indicate that the norm of the residual rarely becomes small by itself. Nevertheless, some eigenvalues of $B^{2k,2k}$ may be good approximations to eigenvalues of $M$. Let $\lambda$ be an eigenvalue of $B^{2k,2k}$ with the corresponding eigenvector $y$. Then the vector $x = S^{2n,2k} y$ satisfies
$$\|Mx - \lambda x\| = |e_{2k}^T y|\;\|r_{k+1}\|. \qquad (3.11)$$
The vector $x$ is referred to as Ritz vector and $\lambda$ as Ritz value of $M$. If the last component of the eigenvector $y$ is sufficiently small, the right-hand side of (3.11) is small and the pair $\{\lambda, x\}$ is a good approximation to an eigenvalue-eigenvector pair of $M$. Note that by Lemma 2.2, $|e_{2k}^T y| > 0$ if $B^{2k,2k}$ is unreduced. The pair $(\lambda, x)$ is exact for the nearby problem
$$(M - E)\,x = \lambda x.$$
A small $\|E\|$ is not sufficient for the Ritz pair $\{\lambda, x\}$ to be a good approximation to an eigenvalue-eigenvector pair of $M$. The advantage of using the Ritz estimate $|d_{k+1}|\,|e_{2k}^T y|\,\|b_{k+1} v_{k+1} + w_{k+1}\|$ is to avoid the explicit formation of the residual $Mx - \lambda x$ when deciding about the numerical accuracy of an approximate eigenpair.
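A sketch of how Ritz pairs and the inexpensive residual estimate of (3.11) can be formed from a length-$2k$ factorization $MS = SB + r\,e_{2k}^T$ (the names B2k, S2k, and r are hypothetical placeholders for the quantities produced by the Lanczos process):

```python
import numpy as np

def ritz_pairs(B2k, S2k, r):
    """Ritz values/vectors of M from M S2k = S2k B2k + r e_{2k}^T, together with
    the residual norms ||M x - lambda x|| = |e_{2k}^T y| * ||r|| from (3.11)."""
    lam, Y = np.linalg.eig(B2k)
    X = S2k @ Y                                   # Ritz vectors (columns)
    est = np.abs(Y[-1, :]) * np.linalg.norm(r)    # |last component of y| * ||r||
    return lam, X, est
```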
It is well-known that for non-normal matrices the norm of the residual of an
approximate eigenvector is not by itself sufficient information to bound the error in
the approximate eigenvalue. It is sufficient however to give a bound on the distance
to the nearest matrix to which the given approximation is exact. In the following, we
will give a computable expression for the error. Assume that B 2k;2k is diagonalizable
Since MS
2k , it follows that
MS
or
2k Y: Thus
k. The last equation can be re-written as
Using Theorem 2' of [19] we obtain that (- is an eigen-triplet of
where
jjJx k+i jj g:
Furthermore, when jjEjj is small enough, then
is an eigenvalue of M and
Consequently, the symplectic Lanczos algorithm should be continued until both $\|E\|$ is small and $\mathrm{cond}(\lambda_j)\,\|E\|$ is below a given threshold for accuracy.
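The stopping rule thus combines a backward-error quantity with the sensitivity of each Ritz value. A hedged sketch (using the standard first-order eigenvalue condition numbers of the small matrix $B^{2k,2k}$, not the paper's exact quantities):

```python
import numpy as np

def eig_condition_numbers(B):
    """First-order condition numbers of the eigenvalues of a diagonalizable B:
    kappa_j = ||x_j||_2 * ||y_j||_2, where x_j is the j-th right eigenvector and
    y_j^* is the j-th row of inv(X) (so that y_j^* x_j = 1)."""
    lam, X = np.linalg.eig(B)
    Yh = np.linalg.inv(X)                         # rows are scaled left eigenvectors
    kappa = np.linalg.norm(X, axis=0) * np.linalg.norm(Yh, axis=1)
    return lam, kappa

# iterate until both the residual estimate and kappa_j * estimate are below tolerance
```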
4. An Implicitly Restarted Symplectic Lanczos Method. In the previous
sections we have briefly mentioned two algorithms for computing approximations
to the eigenvalues of a symplectic matrix M . The symplectic Lanczos algorithm is
appropriate when the matrix M is large and sparse. If only a small subset of the
eigenvalues is desired, the length k symplectic Lanczos factorization may suffice. The
analysis in the last chapter suggests that a strategy for finding 2k eigenvalues in a
length k factorization is to find an appropriate starting vector that forces the residual
r k+1 to vanish. The SR algorithm, on the other hand, computes approximations to all
eigenvalues and eigenvectors of M . From Theorem 4.1 in [4] (an implicit Q-theorem
for the SR case) we know that in exact arithmetic, when using the same starting
vector, the SR algorithm and the length n Lanczos factorization generate the same
symplectic butterfly matrices (up to multiplication by a trivial matrix). Forcing the
residual for the symplectic Lanczos algorithm to zero has the effect of deflating a subdiagonal
element during the SR algorithm: by Remark 3.1, the residual $r_{k+1}$ obtained from the symplectic Lanczos process is a multiple of $d_{k+1}$. Hence a zero residual implies a zero $d_{k+1}$, such that deflation occurs for the corresponding butterfly matrix.
Our goal in this section will be to construct a starting vector that is a member of
the invariant subspace of interest. Our approach is to implicitly restart the symplectic
Lanczos factorization. This was first introduced by Sorensen [35] in the context of
unsymmetric matrices and the Arnoldi process. The scheme is called implicit because
the updating of the starting vector is accomplished with an implicit shifted SR mechanism
on This allows to update the starting vector by working with a
symplectic matrix in IR 2j \Theta2j rather than in IR 2n\Theta2n which is significantly cheaper.
The iteration starts by extending a length k symplectic Lanczos factorization by
steps. Next, 2p shifts are applied to B 2(k+p);2(k+p) using double or quadruple SR
steps. The last 2p columns of the factorization are discarded resulting in a length k
factorization. The iteration is defined by repeating this process until convergence.
For simplicity let us first assume that $p = 1$ and that a $2n \times 2(k+1)$ symplectic Lanczos factorization
$$M_P S_P^{2n,2k+2} = S_P^{2n,2k+2} B_P^{2k+2,2k+2} + r_{k+2} e_{2k+2}^T \qquad (4.1)$$
as in (3.5) is known. Let $\mu$ be a real shift and consider the Laurent polynomial $q_2$ associated with the shift pair $\{\mu, \mu^{-1}\}$. Then, using the SR decomposition of $q_2(B_P^{2k+2,2k+2})$, the transformed matrix $S_P^{-1} B_P^{2k+2,2k+2} S_P$ will be a permuted butterfly matrix and $S_P$ is an upper triangular matrix with two additional subdiagonals.
With this we can re-express (4.1) as
MP (S 2n;2k+2
P SP this yields
The above equation fails to be a symplectic Lanczos factorization since the columns
of the matrix d k+2 (b k+2 v k+2
2k+2 SP are nonzero. Let
ij be the (i; j)th entry of SP . The residual term in (4.2) is
Rewriting (4.2) as
where Z is blocked as6 6 6 6 4
dk+1e T
dk+1e T
dk+2 bk+2s 2k+2;2k e T
dk+2ak+2s 2k+2;2k e T
we obtain as a new Lanczos identity
r
where
d
a
Here, - a k+1 , - b k+1 , -
d k+1 denote parameters of -
are
parameters of B 2k+2;2k+2
P . In addition, -
w k+1 are the last two column vectors
from -
are the two last column vectors of S 2n;2k+2
As the space spanned by the columns of S
orthogonal, and SP is a permuted symplectic matrix, the space spanned by the
columns of -
is J-orthogonal. Thus (4.3) is a valid symplectic
Lanczos factorization. The new starting vector is $\tilde v_1 = \rho\, q_2(M_P)\, v_1$ with $\rho \in \mathbb{R}$. This can be seen as follows: first note that for unreduced butterfly matrices
B 2k+2;2k+2 we have q 2 (B 2k+2;2k+2
Hence, from q 2 (B 2k+2;2k+2
we obtain q 2 (B 2k+2;2k+2
is an upper triangular
matrix. As q 2 (B 2k+2;2k+2
Using (4.3) it follows that
ae S 2n;2k+2
ae S 2n;2k+2
=ae (MP S 2n;2k+2
=ae (MP S 2n;2k+2
as r k+2 e T
using again (4.3) we get
\Gammaae
as e T
Note that in the symplectic Lanczos process the vectors $v_j$ of $S_P^{2n,2k}$ satisfy the normalization condition $\|v_j\| = 1$ and the parameters $b_j$ are chosen to be one. This is no longer
true for the odd numbered column vectors of SP generated by the SR decomposition
and the parameters - b j from -
P and thus for the new Lanczos factorization (4.3).
Both properties could be forced using trivial factors. Numerical tests indicate that
there is no obvious advantage in doing so.
Using standard polynomials as shift polynomials instead of Laurent polynomials
as above results in the following situation: in the SR decomposition of $p_2(B_P^{2k+2,2k+2})$, the symplectic factor $S_P$ is an upper triangular matrix with four (!) additional subdiagonals.
Hence not the last two, but the last four columns of (4.2) have to be discarded in
order to obtain a new valid Lanczos factorization. That is, we would have to discard
wanted information which is avoided by using Laurent polynomials.
This technique can be extended to the quadruple shift case using Laurent polynomials
as the shift polynomials as discussed in Section 2. The implicit restart can
be summarized as given in Table 4.1. In the course of the iteration we have to choose
shifts in order to apply $2p$ shifts: choosing a real shift $\mu_k$ implies that $\mu_k^{-1}$ is also a shift due to the symplectic structure of the problem. Hence, $\mu_k^{-1}$ is not added to $\Delta$, as the use of the Laurent polynomial $q_2$ guarantees that $\mu_k^{-1}$ is used as a shift once $\mu_k \in \Delta$. In case of a complex shift $\mu_k$ with $|\mu_k| = 1$, the relation $\bar\mu_k = \mu_k^{-1}$ implies that $\bar\mu_k$ is also a shift that is not added to $\Delta$. For complex shifts $\mu_k$ with $|\mu_k| \neq 1$, we include both $\mu_k$ and $\bar\mu_k$ in $\Delta$.
Numerous choices are possible for the selection of the p shifts. One possibility is
the case of choosing p "exact" shifts with respect to B 2(k+p);2(k+p)
. That is, first the
eigenvalues of B 2(k+p);2(k+p)
are computed (by the SR algorithm), then p unwanted
eigenvalues are selected. One choice for this selection might be: sort the eigenvalues
by decreasing magnitude. There will be $k+p$ eigenvalues with modulus greater than or equal to 1 and $k+p$ eigenvalues with modulus less than or equal to 1. Select the $2p$ eigenvalues with modulus closest to 1 as shifts. If $\mu_{k+1}$ is complex with $|\mu_{k+1}| \neq 1$, then we either have to choose $2p+2$ shifts or just $2p-2$, as $\mu_{k+1}$ belongs to a quadruple of eigenvalues of $B_P^{2(k+p),2(k+p)}$ and, in order to preserve the symplectic structure, either $\mu_k$ and $\mu_{k+1}$ have to be chosen or none.
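A sketch of the exact-shift selection just described (a simplification: it only sorts the Ritz values by the distance of their modulus from 1; a production version must additionally keep reciprocal and complex-conjugate partners together, as discussed above):

```python
import numpy as np

def select_exact_shifts(ritz_values, p):
    """Pick the 2p Ritz values whose modulus is closest to 1 as (unwanted) shifts."""
    order = np.argsort(np.abs(np.abs(ritz_values) - 1.0))
    return ritz_values[order[:2 * p]]
```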
Table 4.1: k-step restarted symplectic Lanczos method.
  perform k steps of the symplectic Lanczos algorithm to compute $S^{2n,2k}$ and $B^{2k,2k}$;
  obtain the residual vector $r_{k+1}$;
  while $\|r_{k+1}\| >$ tol
    perform p additional steps of the symplectic Lanczos method to compute $S^{2n,2(k+p)}$ and $B^{2(k+p),2(k+p)}$;
    select p shifts $\mu_i$;
    compute $\tilde S^{2n,2k}$ and $\tilde B^{2k,2k}$ via implicitly shifted SR steps;
    set $S^{2n,2k} = \tilde S^{2n,2k}$, $B^{2k,2k} = \tilde B^{2k,2k}$;
    obtain the new residual vector $r_{k+1}$;
  end while
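Paraphrasing Table 4.1 as a driver loop (a structural sketch only: symplectic_lanczos_extend and implicit_sr_restart are hypothetical placeholders for the extension and contraction steps described in Sections 3 and 4, and select_exact_shifts is the selection sketched above):

```python
import numpy as np

def restarted_symplectic_lanczos(apply_M, v1, k, p, tol, max_restarts=50):
    """k-step restarted symplectic Lanczos method, following the structure of Table 4.1."""
    S, B, r = symplectic_lanczos_extend(apply_M, v1, None, None, steps=k)  # length-k factorization
    for _ in range(max_restarts):
        if np.linalg.norm(r) <= tol:
            break
        S, B, r = symplectic_lanczos_extend(apply_M, None, S, B, steps=p)  # extend to length k+p
        shifts = select_exact_shifts(np.linalg.eigvals(B), p)              # unwanted Ritz values
        S, B, r = implicit_sr_restart(S, B, r, shifts)                     # contract back to length k
    return S, B, r
```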
A different possibility of choosing the shifts is to keep those eigenvalues that are
good approximations to eigenvalues of M . That is, eigenvalues for which (3.11) is
small. Again we have to make sure that our set of shifts is complete in the sense
described above.
Choosing eigenvalues of $B_P^{2(k+p),2(k+p)}$ as shifts has an important consequence for the next iterate. Assume for simplicity that $B_P^{2(k+p),2(k+p)}$ is diagonalizable. Let $\sigma(B_P^{2(k+p),2(k+p)}) = \Omega_w \cup \Omega_u$ be a disjoint partition of the spectrum of $B_P^{2(k+p),2(k+p)}$ into a wanted part $\Omega_w$ and an unwanted part $\Omega_u$. Selecting the exact shifts from $\Omega_u$ in the implicit restart, following the rules mentioned above, yields a matrix $\tilde B^{2k,2k}$ with $\sigma(\tilde B^{2k,2k}) = \Omega_w$. This follows from (2.6).
Moreover, the new starting vector has been implicitly replaced by the sum of $2k$ approximate eigenvectors,
$$\tilde v_1 = \rho\, q(M_P)\, v_1 = \rho \sum_{\lambda_j \in \Omega_w} \gamma_j x_j,$$
with $\rho$ and the coefficients $\gamma_j$ properly chosen and $x_j$ the Ritz vectors associated with $\Omega_w$. The last equation follows since $q(B_P^{2(k+p),2(k+p)})\,e_1$ has no component along an eigenvector of $B_P^{2(k+p),2(k+p)}$ associated with $\Omega_u$. Hence the new starting vector lies in the subspace spanned by the Ritz vectors associated with the wanted part of the spectrum.
It should be mentioned that the k-step restarted symplectic Lanczos method as
in
Table
4.1 with exact shifts builds a J-orthogonal basis for a number of generalized
Krylov subspaces simultaneously. The subspace of length 2(k +p) generated during a
restart using exact shifts contains all the Krylov subspaces of dimension 2k generated
from each of the desired Ritz vectors, for a detailed discussion see [13]. A similar
observation for Sorensen's restarted Arnoldi method with exact shifts was made by
Morgan in [30]. For a discussion of this observation see [30] or [23]. Morgan infers
'the method works on approximations to all of the desired eigenpairs at the same time,
without favoring one over the other' [30, p. 1220,l. 7-8 from the bottom]. This remark
can also be applied to the method presented here.
In the above discussion we have assumed that the permuted SR decomposition
exists. Unfortunately, this is not always true. During the
bulge-chase in the implicit SR step, it may happen that a diagonal element a j of B 1
(2.2) is zero (or almost zero). In that case no reduction to symplectic butterfly form
with the corresponding first column $\tilde v_1$ exists. In the next section we will prove
that a serious breakdown in the symplectic Lanczos algorithm is equivalent to such
a breakdown of the SR decomposition. Moreover, it may happen that a subdiagonal
element d j of the (2; 2)-block of B
2 (2.3) is zero (or almost zero) such that
The matrix -
P is split, an invariant subspace of dimension j is found. If
shifts have been applied, then the iteration is halted. Otherwise we
continue similar to the procedure described by Sorensen in [35, Remark 3].
As the iteration progresses, some of the Ritz values may converge to eigenvalues of $M$ long before the entire set of wanted eigenvalues has. These converged Ritz values
may be part of the wanted or unwanted portion of the spectrum. In either case it
is desirable to deflate the converged Ritz values and corresponding Ritz vectors from
the unconverged portion of the factorization. If the converged Ritz value is wanted
then it is necessary to keep it in the subsequent factorizations; if it is unwanted then
it must be removed from the current and the subsequent factorizations. Lehoucq and
Sorensen develop in [23, 36] locking and purging techniques to accomplish this in the
context of unsymmetric matrices and the restarted Arnoldi method. These ideas can
be carried over to the situation here.
5. Numerical Properties of the Implicitly Restarted Symplectic Lanczos
Method.
5.1. Stability Issues. It is well known that for general Lanczos-like methods
the stability of the overall process is improved when the norm of the Lanczos vectors is
chosen to be equal to 1 [32, 37]. Thus, Banse proposes in [2] to modify the prerequisite
our symplectic Lanczos method to
\Gammaoe
and
For the resulting algorithm and a discussion of it we refer to [2]. It is easy to see that
the modified $B_P$ is no longer a permuted symplectic matrix, but it still has the desired form of a butterfly matrix. Unfortunately, an SR step does not preserve the structure of such a matrix, and thus this modified version of the symplectic Lanczos method cannot be used in connection with our restart approaches.
Without some form of reorthogonalization, any Lanczos algorithm is numerically unstable. Hence we re-$J_P$-orthogonalize each Lanczos vector as soon as it is computed against the previous ones, using the indefinite inner product $\langle x, y \rangle := x^T J_P^n\, y$ implied by $J_P^n$.
This re-$J_P$-orthogonalization is costly: it requires $16n(m-1)$ flops for the vector $w_m$ and $16nm$ flops for $v_{m+1}$. Thus, if $2k$ Lanczos vectors are computed, the re-$J_P$-orthogonalization adds a computational cost of the order of $nk^2$ flops to the overall cost of the symplectic Lanczos method.
For standard Lanczos algorithms, different reorthogonalization techniques have
been studied (for references see, e.g., [17]). Those ideas can be used to design analogous
re-J P -orthogonalizations for the symplectic Lanczos method. It should be noted
that if k is small, the cost for re-J P -orthogonalization is not too expensive.
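One way to realize the re-$J_P$-orthogonalization is an oblique projection with respect to the indefinite inner product (a sketch; the paper's explicit two-term update formulas are not reproduced here):

```python
import numpy as np

def re_J_orthogonalize(x, S, J):
    """Remove from x its J-oblique projection onto range(S), so that S^T J x ~ 0;
    S holds the previously computed Lanczos vectors as columns."""
    G = S.T @ (J @ S)                  # equals the small J-matrix in exact arithmetic
    c = np.linalg.solve(G, S.T @ (J @ x))
    return x - S @ c
```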
Another important issue is the numerical stability of the SR step employed in
the restart. During the SR step on the $2k \times 2k$ symplectic butterfly matrix, all but a few of the transformations used are orthogonal. These are known to be numerically stable. For
the nonorthogonal symplectic transformations that have to be used, we choose
among all possible transformations the ones with optimal (smallest possible) condition
number (see [8]).
5.2. Breakdowns in the SR Factorization. If there is a starting vector $\tilde v_1 = \rho\, q(M)\, v_1$ for which the explicitly restarted symplectic Lanczos method breaks down,
then it is impossible to reduce the symplectic matrix M to symplectic butterfly form
with a transformation matrix whose first column is - v 1 . Thus, in this situation the SR
decomposition of q(B) can not exist.
As will be shown in this section, this is the only way that breakdowns in the
SR decomposition can occur. In the SR step, most of the transformations used are
orthogonal symplectic transformations; their computation can not break down. The
only source of breakdown can be one of the symplectic Gaussian eliminations L j .
For simplicity, we will discuss the double shift case. Only the following elementary
elimination matrices are used in the implicit SR step: elementary symplectic Givens matrices [31], elementary symplectic Householder transformations, and elementary symplectic Gaussian elimination matrices [8].
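For illustration, the orthogonal symplectic Givens rotations of the kind referred to in [31] act in the planes $(j, n+j)$; a small sketch (the Householder and Gaussian transformations are not reproduced here):

```python
import numpy as np

def symplectic_givens(n, j, c, s):
    """Rotation in the (j, n+j) plane with c^2 + s^2 = 1; orthogonal and symplectic."""
    G = np.eye(2 * n)
    G[j, j] = c;       G[j, n + j] = s
    G[n + j, j] = -s;  G[n + j, n + j] = c
    return G

n, j, theta = 3, 1, 0.7
G = symplectic_givens(n, j, np.cos(theta), np.sin(theta))
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
print(np.allclose(G.T @ G, np.eye(2 * n)), np.allclose(G.T @ J @ G, J))
```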
Assume that k steps of the symplectic Lanczos algorithm are performed, then
from (3.5)
Now an implicit restart is to be performed using an implicit double shift SR step. In
the first step of the implicit SR step, a symplectic Householder matrix H 1 is computed
such that
H 1 is applied to B 2k;2k
introducing a small bulge in the butterfly form: additional elements are found in the
positions (2; 1), (1; 2), (n
1). The remaining implicit transformations perform a bulge-chasing
sweep down the subdiagonal to restore the butterfly form. An algorithm for this is
given in [2] or [4]; it can be summarized for the situation here as in Table 5.1, where
~
G j and G j both denote symplectic Givens transformation matrices acting in the same
planes but with different rotation angles.
Table 5.1: Reduction to butterfly form - double shift case. For $\ell = 0, 1, 2, \dots$:
  compute $G_{\ell+1}$ such that the appropriate bulge entry of $G_{\ell+1} B^{2k,2k}$ is annihilated;
  compute $L_{\ell+1}$ such that the appropriate bulge entry of $L_{\ell+1} B^{2k,2k}$ is annihilated;
  compute $\tilde G_{\ell+1}$ such that the appropriate bulge entry of $B^{2k,2k} \tilde G_{\ell+1}$ is annihilated;
  compute $H_{\ell+1}$ such that the appropriate bulge entry of $B^{2k,2k} H_{\ell+1}$ is annihilated.
Suppose that the first exist and that we
have computed
~
In order to simplify the notation, we switch to the permuted version and rewrite the
permuted symplectic matrix b
SP as
I 2n\Gamma2j \Gamma2
making use of the fact that the accumulated transformations
affect only the rows 1 to j and j. The leading (2j
principal submatrix of
is given by
e
x
x
x
x
x
where the hatted quantities denote unspecified entries that would change if the SR
update could be continued. Next, the (2j should be annihilated
by a permuted symplectic Gaussian elimination. This elimination will fail to exist if
the SR decomposition of q(B 2k;2k ) does not exist.
As will be needed later, - a implies that - y This follows as e
P is
From
e
we obtain
- a j
x
x
If - a
(otherwise the last Gaussian transformation
did not exist).
Next we show that this breakdown in the SR decomposition implies a breakdown
in the Lanczos process started with the starting vector -
For this we have to consider (5.1) multiplied from the right by b
SP . From the
derivations in the last section we know that the starting vector of that recursion is
given by -
As the trailing (2n
submatrix of b
SP is the identity, we can just as well consider
multiplied from the right by SP
P SP corresponds to the matrix in (5.2) (no butterfly
w j+1 ]. The
columns of -
are JP -orthogonal
The starting vector of the recursion (5.3) is given by - Deleting the
last four columns of -
P in the same way as in the implicit restart we obtain a
valid symplectic Lanczos factorization of length 2.
In order to show that a breakdown in the SR decomposition of q(B) implies a
breakdown in the above symplectic Lanczos recursion, we need to show
From (5.2) and (5.3) we obtain
and
Further we do know from the symplectic Lanczos algorithm
all of these quantities are already known. Now consider
x3
Obviously, x Using (5.6) we obtain
2. Hence x Using (5.5) end (5.4) will see that x
z3
As - a
From (5.3) we obtain
Hence using (5.4) yields
Similarly, it follows that z
This argumentation has shown that an SR breakdown implies a serious Lanczos
breakdown. The opposite implication follows from the uniqueness of the Lanczos
factorization. The result is summarized in the following theorem.
Theorem 5.1. Suppose the symplectic butterfly matrix B 2k;2k corresponding to
(3.5) is unreduced and let - 2 IR. Let L j be the jth symplectic Gauss transformation
required in the SR step on (B If the first
symplectic Gauss transformations of this SR step exist, then L j fails to exist if and
only if - v T
j as in (4.3).
6. Numerical Experiments. Some examples to demonstrate the properties of
the (implicitly restarted) symplectic Lanczos method are presented. The computational
results are quite promising but certainly preliminary. All computations were
done using Matlab Version 5.1 on a Sun Ultra 1 with IEEE double-precision arithmetic
and machine precision $\epsilon \approx 2.2 \times 10^{-16}$.
Our code implements exactly the algorithm as given in Table 4.1. In order to
detect convergence in the restart process, the rather crude criterion
was used. This ad hoc stopping rule allowed the iteration to halt quite early. Usually,
the eigenvalues largest in modulus (and their reciprocals) of the wanted part of the
spectrum are much better approximated than the ones of smaller modulus. In a black-box
implementation of the algorithm this stopping criterion has to be replaced with
a more rigorous one to ensure that all eigenvalues are approximated to the desired
accuracy (see the discussion in Section 3.3). Benign breakdown in the symplectic
Lanczos process was detected by the criterion
while a serious breakdown was detected by
Our implementation intends to compute the k eigenvalues of M largest in modulus
and their reciprocals. In the implicit restart, we used exact shifts where we chose the
shifts to be the $2p$ eigenvalues of $B^{2(k+p),2(k+p)}$ closest to the unit circle.
Our observations have been the following.
• Re-J-orthogonalization is necessary; otherwise J-orthogonality of the computed
Lanczos vectors is lost after a few steps, and ghost eigenvalues (see,
e.g., [17]) appear. That is, multiple eigenvalues of B 2k;2k correspond to simple
eigenvalues of M .
• The implicit restart is more accurate than the explicit one.
• The leading end of the 'wanted' Ritz values (that is, the eigenvalues largest
in modulus and their reciprocals) converge faster than the tail end (closest to
cut off of the sort). The same behavior was observed in [35] for the implicitly
restarted Arnoldi method. In order to obtain faster convergence, it seems
advisable (similar to the implementation of Sorensen's implicitly restarted
Arnoldi method in Matlab's eigs) to increase the dimension of the computed
Lanczos factorization. That is, instead of computing S 2n;2k
as a basis for the restart, one should compute a slightly larger factorization,
e.g. dimension 2(k instead of dimension 2k. When 2' eigenvalues have
converged, a subspace of dimension 2(k computed as a basis for
the restart, followed by p additional Lanczos steps to obtain a factorization
of length k Using implicit SR steps this factorization is reduced
to one of length k If the symplectic Lanczos method would be implemented
following this approach, the convergence check could be done using
only the k Ritz values of largest modulus (and their reciprocals) or those that
yield the smallest Ritz residual estimate $|d_{k+1}|\,|e_{2k}^T y_j|$, where the $y_j$ are the eigenvectors of $B^{2k,2k}$.
• It is fairly difficult to find a good choice for k and p. Not for every possible
choice of k, there exists an invariant subspace of dimension 2k associated to
the k eigenvalues - i largest in modulus and their reciprocals. If - k is complex
and - then we can not choose the 2p eigenvalues with modulus
closest to the unit circle as shifts as this would tear a quadruple of eigenvalues
apart resulting in a shift polynomial q such that q(B 2(k+p);2(k+p)
we can do is to choose the 2p \Gamma 2 eigenvalues with modulus closest to 1
as shifts. In order to get a full set of 2p shifts we add as the last shift
the real eigenvalue pair with largest Ritz residual. Depending on how good
that real eigenvalue approximates an eigenvalue of M , this strategy worked,
but the resulting subspace is no longer the subspace corresponding to the k
eigenvalues largest in modulus and their reciprocals. If the real eigenvalue
has converged to an eigenvalue of M , it is unlikely to remove that eigenvalue
just by restarting, it will keep coming back. Only a purging technique like the
one discussed by Lehoucq and Sorensen [23, 36] will be able to remove this
eigenvalue. Moreover, there is no guarantee that there is a real eigenvalue of
P that can be used here. Hence, in a black-box implementation
one should either try to compute an invariant subspace of dimension
or of dimension 2(k 1). As this is not known a priori, the algorithm should
adapt k during the iteration process appropriately. This is no problem, if as
suggested above, one always computes a slightly larger Lanczos factorization
than requested.
Example 6.1. The first test performed concerned the loss of J-orthogonality of
the computed Lanczos vectors during the symplectic Lanczos method and the ghost
eigenvalue problem (see, e.g. [17]). To demonstrate the effects of re-J-orthogonali-
zation, a 100 \Theta 100 symplectic matrix with eigenvalues
200; 100; 50;
was used. A symplectic block-diagonal matrix with these eigenvalues on the block-diagonal
was constructed and a similarity transformation with a randomly generated
orthogonal symplectic matrix was performed to obtain a symplectic matrix M .
As expected, when using a random starting vector M 's eigenvalues largest in
modulus (and the corresponding reciprocals) tend to emerge right from the start,
e.g., the eigenvalues of B 10;10 are
199.99997, 100.06771, 48.71752, 26.85083, 8.32399
and their reciprocals. Without any form of re-J-orthogonalization, the J-orthogo-
nality of the Lanczos vectors is lost after a few iterations as indicated in Figure 6.1.
Fig. 6.1. Loss of J-orthogonality after $k$ symplectic Lanczos steps: the J-orthogonality error of $S^{100,2k}$ plotted against the number of Lanczos steps.
The loss of J-orthogonality in the Lanczos vectors results, as in the standard
Lanczos algorithm, in ghost eigenvalues. That is, multiple eigenvalues of B 2k;2k correspond
to simple eigenvalues of M . For example, using no re-J-orthogonalization,
after 17 iterations the 6 eigenvalues largest in modulus of B 34;34 are
Using complete re-J-orthogonalization, this effect is avoided:
200, 100, 49.99992, 47.02461, 45.93018, 42.31199.
The second test performed concerned the question whether an implicit restart
is more accurate than an explicit one. After nine steps of the symplectic Lanczos
method (with a random starting vector), the resulting butterfly matrix $B^{18,18}$ had the eigenvalues (using the Matlab function eig)
200.000000000000, 99.999999841718, 13.344815062428, 3.679215125563 ± 5.750883779240i
and their reciprocals. Removing the 4 complex eigenvalues from B 18;18 using an
implicit restart as described in Section 4, we obtain a symplectic butterfly matrix $B^{14,14}_{\mathrm{impl}}$ whose eigenvalues are
200.000000000000, 99.999999841719, 13.344815062428
and their reciprocals. From (2.6) it follows that these have to be the 14 real eigenvalues
of B 18;18 which have not been removed. As can be seen, we lost one digit during
the implicit restart (indicated by the 'underbar' under the 'lost' digits in the above
table). Performing an explicit restart with the explicitly computed new starting vector, we obtain a symplectic butterfly matrix $B^{14,14}_{\mathrm{expl}}$ whose eigenvalues are
200.000000000000, 99.999999841793
and their reciprocals. This time we lost up to nine digits.
The last set of tests performed on this matrix concerned the k-step restarted
symplectic Lanczos method as given in Table 4.1. As M has only one quadruple
of complex eigenvalues, and these eigenvalues are smallest in magnitude there is no
problem in choosing k - n. For every such choice there exists an invariant symplectic
subspace corresponding to the k eigenvalues largest in magnitude and their reciprocals.
In the tests reported here, a random starting vector was used. Figure 6.2 shows a plot
of jjr k+1 jj versus the number of iterations performed. Iteration Step 1 refers to the
norm of the residual after the first k Lanczos steps, no restart is performed. The three
lines in Figure 6.2 present three different choices for k and p:
Convergence was achieved for all three examples (and many more,
not shown here). Obviously, the choice results in faster convergence
than the choice 8. Convergence is by no means monotonic, during the major
part of the iteration the norm of the residual is changing quite dramatically. But once
a certain stage is achieved, the norm of the residual converges. Although convergence
quite fast, this does not imply that convergence is
as fast for other choices of k and p. The third line in Figure 6.2 demonstrates that
the convergence for does need twice as many iteration steps as for
Example 6.2. Symplectic matrix pencils that appear in discrete-time linear-quadratic
optimal control problems are typically of the form
- I \GammaBB T
(Note: for $F \neq I$, $L$ and $N$ are not symplectic, but $L - \lambda N$ is a symplectic matrix pencil.) Assuming that $L$ and $N$ are nonsingular (that is, $F$ is nonsingular), solving
this generalized eigenproblem is equivalent to solving the eigenproblem for the
symplectic matrix
- I \GammaBB T
If one is interested in computing a few of the eigenvalues of L \Gamma -N , one can use the
Fig. 6.2. k-step restarted symplectic Lanczos method, different choices of k and p ($\|r_{k+1}\|$ versus the number of iterations).
restarted symplectic Lanczos algorithm on In each step of the symplectic
Lanczos algorithm, one has to compute matrix-vector products of the form Mx and
Making use of the special form of $L$ and $N$, this can be done without explicitly inverting $N$. Let us consider the computation of $Mx$: first compute $z = Lx$; then from $Ny = z$ we obtain $y$ by exploiting the block structure of $N$. In order to solve for $y$ we compute the LU decomposition of $F$ and solve the linear system with $F^T$ using backward and forward substitution. Hence, the explicit inversion of $N$ or $F$ is avoided. In case $F$ is a sparse matrix, sparse solvers can be
employed. In particular, if the control system comes from some sort of discretization
scheme, F is often banded which can be used here by computing an initial band LU
factorization of F in order to minimize the cost for the computation of y 2 . Note that
in most applications the number of inputs and outputs is much smaller than $n$, such that the computational cost for $C^TCx_1$ and $BB^Tx_2$ is significantly cheaper than a matrix-vector product with an $n \times n$ matrix. In the case of single-input, single-output systems the corresponding operations come down to two dot products of length $n$ each.
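The products described above only require multiplications with $F$, $B$, $C$ and solves with $F$ and $F^T$. A hedged sketch of the reusable solve step using SciPy's sparse LU (the exact block structure of $L$ and $N$ is not reproduced here):

```python
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_F_solvers(F):
    """Factor the sparse matrix F once; return solvers for F y = z and F^T y = z."""
    lu = spla.splu(sp.csc_matrix(F))
    return lu.solve, (lambda z: lu.solve(z, trans='T'))

# usage: solve_F, solve_FT = make_F_solvers(F); y = solve_FT(z)
```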
Using Matlab's sparse matrix routine sprandn sparse normally distributed random
matrices F; B; C (here, n) of different dimensions and with different
densities of the nonzero entries were generated. Here an example of dimension
presented, where the density of the different matrices was chosen to be
matrix - nonzero entries
Matlab computed the norm of the corresponding matrix to be - 5:3 \Theta
In the first set of tests k was chosen to be 5, and we tested
As can be seen in Figure 6.3, for the first 3 iterations, the norm of the residual
decreases for both choices of p, but then increases quite a bit. During the first step,
the eigenvalues of B 10;10 are approximating the 5 eigenvalues of L \Gamma -N largest in
modulus and their reciprocals. In step 4, a 'wrong' choice of the shifts is done in
both cases. The extended matrices B 20;20 and B 30;30 both still approximate the 5
eigenvalues of L \Gamma -N largest in modulus, but there is a new real eigenvalue coming
in, which is not a good approximation to an eigenvalue of L \Gamma -N . But, due to the
way the shifts are chosen here, this new eigenvalue is kept, while an already good
approximated eigenvalue - a little smaller in magnitude - is shifted away, resulting
in a dramatic increase of jjr k+1 jj. Modifying the choice of the shifts such that the
good approximation is kept, while the new real eigenvalue is shifted away, the problem
is resolved, the 'good' eigenvalues are kept and convergence occurs in a few steps (the
'o'-line in Figure 6.3).
Using a slightly larger Lanczos factorization as a basis for the restart, e.g., a
factorization of length k + 3 instead of length k and using a locking technique to
decouple converged approximate eigenvalues and associated invariant subspaces from
the active part of the iteration, this problem is avoided.
Fig. 6.3. k-step restarted symplectic Lanczos method, different choices of the shifts ($\|r_{k+1}\|$ versus the number of iterations).
Figure 6.4 displays the behavior of the k-step restarted symplectic Lanczos method
for different choices of k and p, where k is quite small. Convergence is achieved in
any case.
So far, in the tests presented, k was always chosen such that there exists a deflating subspace of L − λN corresponding to the k eigenvalues largest in modulus and their reciprocals. For the choice of k considered next there is no such deflating subspace (there is one for neighboring values of k); see Figure 6.5 for a convergence plot. The eigenvalues of B_{2(k+p),2(k+p)} in the first iteration steps approximate the k eigenvalues of largest modulus and their reciprocals quite well. Our choice of shifts is to select the 2p eigenvalues with modulus closest to 1, but as λ_{k+1} is complex, not all shifts can be chosen that way, and the last shift is chosen according to the strategy explained above.

Fig. 6.4. k-step restarted symplectic Lanczos method, different choices of k and p (residual norm versus number of iterations).

This eigenvalue keeps coming back before
it is annihilated. A better idea to resolve the problem is to adapt k appropriately.
Fig. 6.5. k-step restarted symplectic Lanczos method, different choices of k and p (residual norm versus number of iterations).
7. Concluding Remarks. We have investigated a symplectic Lanczos method
for symplectic matrices. Employing the technique of implicitly restarting the method
using double or quadruple shifts as zeros of the driving Laurent polynomials, this
results in an efficient method to compute a few extremal eigenvalues of symplectic
matrices and the associated eigenvectors or invariant subspaces. The residual of the
Lanczos recursion can be made zero by choosing proper shifts; it is an open problem how these shifts should be chosen in an optimal way. The preliminary numerical tests reported here show that good performance is already achieved with exact shifts.
Before implementing the symplectic Lanczos process in a black-box algorithm,
some more details need consideration: in particular, techniques for locking of converged Ritz values, as well as purging of converged but unwanted Ritz values, need to be derived in a similar way as has been done for the implicitly restarted Arnoldi method.
--R
analysis of the Lanczos algorithm for the nonsymmetric eigenvalue problem
Symplektische Eigenwertverfahren zur L-osung zeitdiskreter optimaler Steuerungs- probleme
A condensed form for the solution of the symplectic eigenvalue problem
The symplectic eigenvalue problem
SR and SZ algorithms for the symplectic (butterfly) eigenproblem
Linear Hamiltonian difference systems: Disconjugacy and Jacobi-type conditions
Matrix factorization for symplectic QR-like methods
A symplectic QR-like algorithm for the solution of the real algebraic Riccati equation
An implicitly restarted Lanczos method for large symmetric eigenvalue problems
Sur quelques Algorithmes de recherche de valeurs propres
Numerical linear algorithms and group theory
On some algebraic problems in connection with general eigenvalue algorithms
Symplectic Methods for Symplectic Eigenproblems
An analysis of structure preserving methods for symplectic eigenvalue problems
The QR transformation Part I and Part II
Matrix Computations
Model reduction of state space systems via an implicitly restarted Lanczos method
Residual bounds on approximate eigensystems of nonnormal matrices
On some algorithms for the solution of the complete eigenvalue problem
The Algebraic Riccati Equation
Invariant subspace methods for the numerical solution of Riccati equations
Deflation techniques for an implicitly restarted Arnoldi itera- tion
Solution of large-scale eigenvalue problems with implicitly restarted Arnoldi methods
A new method for computing the closed loop eigenvalues of a discrete-time algebraic Riccati equation
Canonical forms for Hamiltonian and symplectic matrices and pencils
On restarting the Arnoldi method for large nonsymmetric eigenvalue problems
A Schur decomposition for Hamiltonian matrices
Computation of the stable deflating subspace of a symplectic pencil using structure preserving orthogonal transformations
Implicit application of polynomial filters in a k-step Arnoldi method
Analysis of the look ahead Lanczos algorithm
A symplectic method for approximating all the eigenvalues of a Hamiltonian matrix
A convergence analysis for nonsymmetric Lanczos algorithms
--TR | implicit restarting;symplectic Lanczos method;symplectic matrix;eigenvalues |
587865 | Differences in the Effects of Rounding Errors in Krylov Solvers for Symmetric Indefinite Linear Systems. | The three-term Lanczos process for a symmetric matrix leads to bases for Krylov subspaces of increasing dimension. The Lanczos basis, together with the recurrence coefficients, can be used for the solution of symmetric indefinite linear systems, by solving a reduced system in one way or another. This leads to well-known methods: MINRES (minimal residual), GMRES (generalized minimal residual), and SYMMLQ (symmetric LQ). We will discuss in what way and to what extent these approaches differ in their sensitivity to rounding errors.In our analysis we will assume that the Lanczos basis is generated in exactly the same way for the different methods, and we will not consider the errors in the Lanczos process itself. We will show that the method of solution may lead, under certain circumstances, to large additional errors, which are not corrected by continuing the iteration process.Our findings are supported and illustrated by numerical examples. | Introduction
We will consider iterative methods for the construction of approximate solutions x_k, starting with x_0 = 0, for the linear system Ax = b, with A an n by n symmetric matrix, in the k-dimensional Krylov subspace
K^k(A; r_0) ≡ span{r_0, A r_0, . . . , A^{k−1} r_0},
with r_0 = b − A x_0 = b.
With the standard 3-term Lanczos process, we generate an orthonormal basis v_1, v_2, . . . , v_{k+1} for K^{k+1}(A; r_0). The Lanczos process can be recast in matrix formulation as
A V_k = V_{k+1} T_k,   (1)
in which V_j is defined as the n by j matrix with columns v_1, . . . , v_j, and T_k is a (k+1) by k tridiagonal matrix.
(This assumption does not mean a loss of generality, since the case x_0 ≠ 0 can be reduced to this by a simple shift.)
Paige [11] has shown that in finite precision arithmetic, the Lanczos process can be implemented so that the computed V_{k+1} and T_k satisfy
A V_k = V_{k+1} T_k + F_k,   (2)
with, under mild conditions for k,
‖F_k‖_2 ≤ c(k, m_1) u ‖A‖_2,   (3)
where c(k, m_1) is a modest constant (u is the machine precision, m_1 denotes the maximum number of nonzeros in any row of A). Neglecting the small perturbation F_k, we obtain the convenient expression (1) also for the computed quantities.
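For reference, a minimal sketch of this 3-term Lanczos recurrence (plain Python, no reorthogonalization; the function name and interface are ours, not the authors'):

import numpy as np

def lanczos(A, r0, k):
    # Returns V (n x (k+1)) with (locally) orthonormal columns and the
    # (k+1) x k tridiagonal T with A @ V[:, :k] ~= V @ T.
    n = len(r0)
    V = np.zeros((n, k + 1))
    T = np.zeros((k + 1, k))
    V[:, 0] = r0 / np.linalg.norm(r0)
    beta = 0.0
    for j in range(k):
        w = A @ V[:, j]
        if j > 0:
            w = w - beta * V[:, j - 1]
        alpha = V[:, j] @ w
        w = w - alpha * V[:, j]
        beta = np.linalg.norm(w)
        T[j, j] = alpha
        T[j + 1, j] = beta
        if j + 1 < k:
            T[j, j + 1] = beta
        if beta == 0.0:                      # invariant subspace found: stop early
            return V[:, :j + 1], T[:j + 1, :j + 1]
        V[:, j + 1] = w / beta
    return V, T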
Popular Krylov subspace methods for symmetric linear systems can be derived with formula
(1) as a starting point: MINRES, GMRES, 2 and SYMMLQ. The matrix T k can be interpreted
as the restriction of A with respect to the Krylov subspace, and the main idea behind these
Krylov solution methods is that the given system Ax = b is replaced by a smaller system
with T k over the Krylov subspace. This reduced system is solved - implicitly or explicitly
- in a convenient way and the solution is transformed with V k to a solution in the original
n-dimensional space. The main differences between the methods are due to a different way of
solution of the reduced system and to differences in the backtransformation to an approximate
solution of the original system. We will describe these differences in relevant detail in coming
sections.
Of course, these methods have been derived assuming exact arithmetic, for instance, the
generating formulas are all based on an exact orthogonal basis for the Krylov subspace. In
reality, however, we have to compute this basis, as well as all other quantities in the methods,
and then it is of importance to know how the generating formulas behave in finite precision
arithmetic. The errors in the underlying Lanczos process have been analysed by Paige [11, 12].
It has been proven by Greenbaum and Strakos [8], that rounding errors in the Lanczos process
may have a delaying effect on the convergence of iterative solvers, but do not prevent eventual
convergence in general. Usually, this type of error analysis is on a worst case scenario, and as
a consequence the error bounds are pessimistic. In particular, the error bounds cannot very
well be used to explain differences between these methods, so as we observe them in practical
situations.
In this paper, we propose a different way of analysing these methods, different in the way
that we do not attempt to derive sharper upper bounds, but that we try to derive upper
bounds for relevant differences between these processes in finite precision arithmetic. This will
not help us to understand why any of these methods converges in finite precision, but it will
give us some insight in answering practical questions such as:
- When and why is MINRES less accurate than SYMMLQ? This question was already posed
in the original publication [14], but the answer in [14, p.625] is largely speculative.
- Is MINRES suspect for ill-conditioned systems, because of the minimal residual approach
(see [14, p.619])? Although hints are given for the reasons of inaccuracies in MINRES, for
MINRES, it is also stated in [14, p. 625] that it is not as accurate as SYMMLQ for the reason
2 GMRES has been designed in combination with Arnoldi's method for unsymmetric systems, but for symmetric
systems Arnoldi's method and Lanczos' method lead, in exact arithmetic, to the same relation (1)
that the minimal residual method is suspect. In [3, p. 43] an explicit relation is suggested
between MINRES and working with A 2 , and it is argued that for that reason sensitivity to
rounding errors of the solution depends on κ_2(A)^2 (it is even stated: 'the squared condition number of A^2', implying κ_2(A)^4, which seems to be a mistake).
- Why and when is SYMMLQ slower than for instance MINRES or GMRES?
- Why does MINRES sometimes lead to rather large residuals, whereas the error in the approximation
is significantly smaller? See, for instance observations on this, made in [14, p.626].
Most important, understanding the differences between these methods will help us in making
a choice.
We will now briefly characterize the different methods in our investigation:
1. MINRES [14]: determine x_k ∈ K^k(A; r_0) such that ‖b − A x_k‖_2 is minimal. This
minimization leads to a small system with T k , and the tridiagonal structure of T k is
exploited to get a short recurrence relation for x k . The advantage of this is that only
three vectors from the Krylov subspace have to be saved (in fact, MINRES works with
transformed basis vectors; this will be explained in Section 2.3). For the implementation
of MINRES that we have used, see the Appendix.
2. GMRES [16]: This method also minimizes, for y k 2 R k , the residual kb \Gamma Ax k k 2 .
GMRES was designed for unsymmetric matrices, for which the orthogonalisation of the
Krylov basis is done with Arnoldi's method. This leads to a small upper Hessenberg
system that has to be solved. However, when A is symmetric, then, in exact arithmetic,
the Arnoldi method is equivalent to the Lanczos method (see also [7, p.41]). Although
GMRES is commonly presented with an Arnoldi basis, there are various implementations
of it that differ in finite precision, for instance, with Modified Gram-Schmidt, Classical
Gram-Schmidt, Householder, and other variants. We view Lanczos as one way to obtain
an orthogonal basis, and therefore stick to the name GMRES rather than to introduce
a new and possibly confusing acronym. Due to the way of solution in GMRES, all the
basis vectors v j have to be stored, also when A is symmetric.
3. SYMMLQ [14]: determine x_k such that the error x − x_k has minimal Euclidean length. It may come as a surprise that ‖x − x_k‖_2 can be minimized without knowing x, but this can be accomplished by restricting the choice of x_k to A K^k(A; r_0).
Conjugate Gradient approximations can, if they exist, be computed with little effort from
the SYMMLQ information. In the SYMMLQ implementation suggested in [14] this is
used to terminate iterations either at a SYMMLQ iterate or a Conjugate Gradient iterate,
depending on which one is best. For the implementation of SYMMLQ that we have used,
see the Appendix.
Note that these methods can be carried out with exactly the same basis vectors v j and
tridiagonal matrix T j .
Most of our bounds on perturbations in the solutions at the kth iteration step will be
expressed as bounds for corresponding perturbations to the residual in the kth step, relative
to the norm of an initial residual. Since all these iteration methods construct their search
spaces from residual vector information (that is, they all start with kr 0 k 2 ), and since we make
at least errors in the order of u kbk 2 in the computation of the residuals, we may not expect
perturbations of order less than u- 2 (A)kbk 2 in the iteratively computed solutions. So our
bounds can only be expected to show up in the computed residuals, if the errors are larger
than the error induced by the computation of the residuals itself.
Notations: Quantities associated with n dimensional spaces will be represented in bold face,
like A, and v j . Vectors and matrices on low dimensional subspaces are denoted in normal
mode: T , y. Constants will be denoted by Greek symbols, with the exception that we will use
u to denote the relative machine precision.
The absolute value of a matrix refers to elementwise absolute values, that is, (|A|)_{ij} = |a_{ij}|.
2 Differences in round-off error behaviour between MINRES
and GMRES
2.1 The basic formulas for GMRES and MINRES in exact arithmetic
We will first describe the generic formulas for the iterative methods MINRES and GMRES,
and we will assume exact arithmetic in the derivation of these formulas. Without loss of
generality, we may assume that x_0 = 0, so that r_0 = b.
The aim is to minimize ‖b − A x_k‖_2 over the Krylov subspace, and since, for x_k = V_k y_k with y_k ∈ R^k,
b − A x_k = b − A V_k y_k = V_{k+1} (‖r_0‖_2 e_1 − T_k y_k),
we see that for minimizing ‖b − A x_k‖_2 the vector y_k must be the linear least squares solution of the overdetermined system
T_k y ≈ ‖r_0‖_2 e_1.
In GMRES this system is solved with Givens rotations, which leads to an upper triangular reduction of T_k,
T_k = Q_k [R_k; 0],   (6)
in which R_k is k by k upper triangular with bandwidth 3, and Q_k is a (k+1) by (k+1) matrix with orthonormal columns. Using (6), y_k can be solved from
R_k y_k = z_k, with z_k the vector of the first k entries of Q_k^T (‖r_0‖_2 e_1),
and since x_k = V_k y_k, this leads to the generating formula
x_k = V_k (R_k^{-1} z_k).   (8)
The parentheses have been included in order to indicate the order of computation. In the
original publication [16], GMRES was proposed for unsymmetric A, in combination with
Arnoldi's method for an orthonormal basis for the Krylov subspace. However, when A is
symmetric then Arnoldi's method is equivalent to Lanczos' method, so that (8) describes
GMRES for symmetric A. The well-known disadvantage of this approach is that we have to
store all columns of V k for the computation of x k .
MINRES follows essentially the same approach as GMRES for the minimization of the
residual, but it exploits the banded structure of R k , in order to get short recurrences for x k ,
and in order to save on memory storage.
Indeed, the computations in the generating formula (8) can be reordered as
x_k = (V_k R_k^{-1}) z_k ≡ W_k z_k.
For the computation of W_k ≡ V_k R_k^{-1}, it is easy to see that the last column of W_k is obtained from the last two columns of W_{k−1} and v_k. This makes it possible to update x_{k−1} to x_k with a short recurrence, since z_k follows from applying the kth Givens rotation to z_{k−1} extended with one new entry. This interpretation leads to MINRES.
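The two orderings can be made concrete in a few lines; the sketch below uses the same reduced least squares problem for both, with a dense QR factorization in place of the Givens rotations for brevity (names and interface are illustrative, not the original codes):

import numpy as np

def reduced_solutions(V, T, normr0):
    # Solve min_y || normr0*e1 - T y ||_2 and return
    #   x_gmres  = V_k (R^{-1} z)   and   x_minres = (V_k R^{-1}) z.
    k = T.shape[1]
    Q, Rfull = np.linalg.qr(T, mode='complete')   # T = Q [R; 0]
    R = Rfull[:k, :k]
    e1 = np.zeros(T.shape[0]); e1[0] = normr0
    z = (Q.T @ e1)[:k]
    y = np.linalg.solve(R, z)                     # GMRES ordering
    x_gmres = V[:, :k] @ y
    W = np.linalg.solve(R.T, V[:, :k].T).T        # W = V_k R^{-1}, MINRES ordering
    x_minres = W @ z
    return x_gmres, x_minres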
We see that MINRES and GMRES both use V k , R k , T k , Q k , and z k , for the computation
of x k . Of course, we are not dictated to compute these items in exactly the same way for the
two methods, but there is no reason to compute them differently. Therefore, we will compare
implementations of GMRES and MINRES that are based on exactly the same items in floating
point finite arithmetic. From now on we will study in what way MINRES and GMRES differ
in finite precision arithmetic, given exactly the same set V k , R k , T k , Q k
, and z k (computed
in finite precision too) for the two different methods. Hence, the differences in finite precision
between GMRES and MINRES are only caused by a different order of computation of the approximate solution x_k, namely
• for GMRES: x_k = V_k (R_k^{-1} z_k);
• for MINRES: x_k = (V_k R_k^{-1}) z_k.
Of course, we could have tried to get upper bounds for all errors made in each process, but
this would most likely not reveal the differences between the two methods. If we want to study
the differences between the two methods then we have to concentrate on the two generating
formulas.
2.2 Error analysis for GMRES
In order to understand the difference between GMRES and MINRES, we have to study the computational errors in V_k (R_k^{-1} z_k). We will indicate actual computation in floating point finite precision arithmetic by fl(·), and a computed result will be denoted by a hat, as in ŷ_k. Then, according to [5, p. 89], in floating point arithmetic the computed solution ŷ_k of the triangular system R_k y = z_k satisfies a slightly perturbed system
(R_k + Δ_{R_k}) ŷ_k = z_k,
with |Δ_{R_k}| bounded elementwise by a modest multiple of u |R_k|. This implies that, apart from second order terms in u, ŷ_k differs from y_k = R_k^{-1} z_k, the exact value based on the computed R_k and z_k, by −R_k^{-1} Δ_{R_k} y_k. Then we also make errors in the computation of x_k, that is, we compute x̂_k = fl(V_k ŷ_k). With the error bounds for the matrix-vector product [10, p.76], we obtain x̂_k = V_k ŷ_k + Δ_2, with |Δ_2| bounded elementwise by a modest multiple of u |V_k| |ŷ_k|. Hence, the error Δx that can be attributed to differences between MINRES and GMRES has two components:
Δx = −V_k R_k^{-1} Δ_{R_k} y_k + Δ_2.
This error leads to a contribution \Deltar k to the residual, that is \Deltar k is that part of r k that can
be attributed to differences between MINRES and GMRES (ignoring O(u 2 )
\Deltar
Note that in finite precision we have that AV , and that, because of (3), the
leads to a contribution of O(u 2 ) in \Deltar k . This is also the case in forthcoming situations
where we replace AV k by V k+1 T k in the derivation of upper bounds for error contributions.
Using the bound in (10) and the bound for \Delta 2 , we get (skipping higher order terms in u)
k
3
Here we have used that k jR k
from [21, Th. 4.2]; see Lemma 5.1 for details). The factor - 2 denotes the condition number
with respect to the Euclidean norm. 3
Note that we could bound kV k+1 k 2 by
which is, because of the local orthogonality of the v j , a crude overestimate. According to [15,
p. 267 (bottom)], it may be more realistic to replace this factor
m, where
m denotes the number of times that a Ritz value of T k has converged to an eigenvalue of A.
When solving a linear system, this value of m is usually very modest, 2 or 3 say.
Finally, we note that
R T
It has been shown in [6] that the matrix T k that has been obtained in finite precision arithmetic,
may interpreted as the exact Lanczos matrix obtained from a matrix e
A in which eigenvalues of
A are replaced by multiplets. Each multiplet contains eigenvalues that differ by O(u) 1
4 from
an original eigenvalue of A. 4 With e
k we denote the orthogonal matrix that generates T k , in
exact arithmetic, from e
A. Hence,
e
A T e
A e
3 We also have used that the computed Q k
are orthogonal matrices, with errors in the order of u, i.e.,
O(u). These O(u)-errors lead to O(u) 2 -errors in (13).
4 This order of difference is pessimistic; factors proportional to (u) 1
2 , or even u, are more likely, but have not
been proved [7, Sect.4.4.2].
so that
oe min (R T
A T e
and
oe (R T
A T e
which implies - 2 (R k
(ignoring errors proportional to mild orders of u).
This finally results in the upper bound for the error in the residual due to the difference
between GMRES and MINRES:
Note that, even if there were only rounding errors in the matrix-vector multiplication, then
the perturbation \Deltax to A \Gamma1 b would have been (in norm) in the order of u This
corresponds to an error kA\Deltaxk 2 - u- 2 (A)kbk 2 in the residual. Therefore, the stability of
GMRES cannot essentially be improved.
2.3 Error analysis for MINRES
The differences in finite precision between MINRES and GMRES are reflected by
z k .
We will first analyze the floating point errors introduced by the computation of the columns
of
k . The jth row w j;: of W k satisfies
w
which means that in floating point finite precision arithmetic we obtain the solution b
w j;: of a
perturbed system:
with
Note that the perturbation term \Delta R j depends on j. This gives b
w
when we combine the relations for
c
with
We may replace c
k in (18), because this leads only to O(u 2 ) errors.
Finally, we make errors in the computation of x k because of finite precision errors in the
multiplication of c
with The errors made in c
k and the error term are the only
errors that can be held responsible for the difference between MINRES and GMRES. Added
together, they lead to the \Deltax k related to MINRES:
and this leads to the following contribution to the MINRES residual:
\Deltar
If we use the bound (18) for \Delta W , and use for other quantities bounds similar as for GMRES,
then we obtain
3
3
Here we have also used the fact that
and, with kV k k F -
k, the expression can be further bounded.
This finally results in the following upper bound for the error contribution in the residual
due to the differences in the implementation between MINRES and GMRES:
3k
We see that the different implementation for MINRES leads to a relative error in the residual
that is proportional to the squared condition number of A, whereas for the GMRES implementation
the difference led to a relative error proportional to the condition number only.
This means that if we plot the residuals for MINRES and GMRES then we may expect to
see differences, more specifically, the difference between the computed residuals for the two
methods may be expected to be in the order of the square of the condition number. As soon
as the computed residual of GMRES gets below u κ_2(A)^2 ‖b‖_2, the difference may be visible.
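A small experiment along these lines, using the sketches above (lanczos and reduced_solutions); the matrix, its spectrum and the number of steps are illustrative choices, and whether the gap between the two residual floors is visible depends on the particular spectrum and run:

import numpy as np

rng = np.random.default_rng(0)
n = 200
d = np.logspace(-6, 0, n); d[:5] *= -1.0          # indefinite spectrum, kappa ~ 1e6
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q.T @ np.diag(d) @ Q
b = np.ones(n)
V, T = lanczos(A, b, 150)
xg, xm = reduced_solutions(V, T, np.linalg.norm(b))
print(np.linalg.norm(b - A @ xg) / np.linalg.norm(b),
      np.linalg.norm(b - A @ xm) / np.linalg.norm(b))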
2.4 Discussion
In Fig. 1, we have plotted the residuals obtained for GMRES and MINRES. Our analysis
suggests that there may be a difference between both in the order of the square of the condition
number times machine precision relative to kbk 2 . Of course, the computed residuals reflect all
errors made in both processes, and if all these errors together lead to perturbations in the same
order for MINRES and GMRES, then we will not see much difference. However, as we see, all
the errors in GMRES lead to something proportional to the condition number, and now the
effect of the square of the condition number is clearly visible in the error in the residual for
MINRES.
Our analysis implies that one has to be careful with MINRES when solving linear systems
with an ill-conditioned matrix A, specially when eigenvector components in the solution,
corresponding to small eigenvalues, are important.
The residual norm reduction ‖r_k‖_2/‖b‖_2 for the exact (but unknown) MINRES residual can be computed efficiently as the product ρ_k ≡ |s_1 s_2 · · · s_k| of the sines s_j of the Givens rotations. In MINRES (as well as GMRES) this value ρ_k is used to measure the reduction of the residual norm: in practical computations, a residual norm is not often computed explicitly
for x̂_k, the kth floating point approximation.

Figure 1. MINRES (top) and GMRES (bottom): solid line log_10 of the residual norm, dotted line (· · ·) log_10 of the estimated residual norm reduction ρ_k, plotted against the number of iterations. The pictures show the results for a positive definite system (the left pictures) and for a non-definite system (the right pictures), with the same condition number in both cases. In both examples A = Q^T diag(D) Q with Q a Givens rotation in the (1, 30)-plane; in both examples (and others to come) b is the vector with all coordinates equal to 1.

Therefore, it is of interest to know
how much the computed ρ_k may differ from the exact residual norm reduction. The errors made in the computation of ρ_k itself are of order u and can be neglected. Since the computation of ρ_k and of x̂_k are based on the same inexact Lanczos process, (22) implies that the difference between ρ_k and the true residual norm reduction for MINRES can be of the order of the bound in (22).
The situation for GMRES is much better: the difference between ρ_k and the true residual reduction for GMRES can be bounded by the quantity in the right hand side of (14). In fact, as observed at the end of §2.2, except for the moderate constant involved, this is about the most accurate computation that can be expected.
2.5 Diagonal matrices
Numerical Analysts often carry out experiments for (unpreconditioned) iterative solvers with
diagonal matrices, because, at least in exact arithmetic, the convergence behaviour depends
on the distribution of the eigenvalues and the structure of the matrix plays no role in Krylov
solvers. However, the behaviour of these methods for diagonal systems may be quite different
in finite precision, as we will show now, and, in particular for MINRES, experiments with
diagonal matrices may give a too optimistic view on the behaviour of the method.
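This effect can be probed with any MINRES implementation; the snippet below (using SciPy's minres, which differs in details from the implementation analysed here, and an arbitrarily chosen indefinite spectrum) compares the attainable true residual for a diagonal matrix and for an orthogonally rotated matrix with the same spectrum:

import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
n = 400
d = np.concatenate([np.logspace(-6, 0, n - 1), [-1.0]])   # indefinite spectrum
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
for A in (np.diag(d), Q.T @ np.diag(d) @ Q):
    b = np.ones(n)
    try:
        x, info = minres(A, b, rtol=1e-15, maxiter=10 * n)
    except TypeError:                      # older SciPy versions use 'tol'
        x, info = minres(A, b, tol=1e-15, maxiter=10 * n)
    print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))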
Rotating the matrix from diagonal to non-diagonal (i.e., diagonal
and Q orthogonal, instead of A = D) has hardly any influence on the errors in the GMRES
residuals (no results shown here). This is not the case for MINRES: experimental results (cf.
Fig. 2) indicate that the errors in the MINRES residuals for diagonal matrices are of order u κ_2(A), as for GMRES. This can be understood as follows.
If we neglect O(u 2 ) terms, then, according to (15), the error, due to the inversion of R k ,
in the jth coordinate of the MINRES-x k is given by
k
When A is diagonal with (j; j)-entry - j , the error in the jth coordinate of the MINRES
residual is equal to (use (1) and (6))
k
k
Therefore, in view of (16), and including the error term for the multiplication with c
(cf. (19)), we have for MINRES applied to a diagonal matrix:
which is the same upper bound as for the errors in the GMRES residuals in (14).
The perturbation matrix \Delta R j
depends on the row index j. Since, in general, \Delta R j
will
be different for each coordinate j, (23) cannot be expected to be correct for non-diagonal
matrices. In fact, if A = Q^T D Q with Q an orthogonal matrix, then errors of the above order in the jth coordinate of x_k can be transferred by Q to an mth coordinate and may not be damped by a small value |λ_m|. More precisely, if Γ is the maximum size of the off-diagonal elements of A that "couple" small diagonal elements of A to large ones, then the error in the MINRES residual will be of order Γ times the corresponding factor; in the worst case we recover the bound (22).
2.6 The errors in the approximations
In exact arithmetic the error in the approximation and the residual are directly related through A^{-1}. Assuming that, in finite precision, this also gives about the right order of magnitude, the errors related to differences between MINRES and GMRES, for the approximate solutions in (11) and (20), can be bounded by essentially the same upper bound.
Figure 2. MINRES with A = diag(D): solid line log_10 of the residual norm, dotted line (· · ·) log_10 of the estimated residual norm reduction ρ_k, plotted against the number of iterations. The pictures show the results for a positive definite diagonal system (the left picture) and for a non-definite diagonal system (the right picture). Except for the Givens rotation, the matrices in these examples are equal to the matrices of the examples in Fig. 1: here A = diag(D).
This may come as a surprise since the bound for the error contribution to the residual for MINRES is proportional to the square of the condition number.
Based upon our observations for numerical experiments, we think that this can be explained
as follows. The error in the GMRES approximation has mainly large components in the
direction of the small singular vectors of A. These components are relatively reduced by
multiplication with A, and then have less effect to the norm of the residual. On the other
hand the errors in the MINRES approximation are more or less of the same magnitude over
the spectrum of singular values of A and multiplication with A will make error components
associated with larger singular values more dominating in the residual.
We will support our viewpoint by a numerical example. The results in Fig. 3 are obtained
with a positive definite matrix with two tiny eigenvalues. For b we took a random perturbation of Ay of the order of 0.01. This example mimics the situation where the right-hand side vector is affected by errors from measurements. The solution x of the equation Ax = b has huge components in the direction of the two singular vectors with
smallest singular value. In the other directions x is equal to y plus a perturbation of less than
one percent. The coordinates of the vector y in our example form a parabola, which makes
the effects easier visible.
The convergence history of GMRES and of MINRES (not shown here) for this example
is comparable to the ones in the left pictures of Fig. 1, but, because of a higher condition number, the final stagnation of the residual norm in the present example takes place on a higher level.
Fig. 3 shows the solution x k as computed at the 80th step of GMRES (top pictures) and
of MINRES (bottom pictures); the right pictures show the component of x k orthogonal to the
two singular vectors with smallest singular value, while the left pictures show the complete
x k . Note that kx k k . The curve of the projected GMRES solution (top-right picture)
is a slightly perturbed parabola indeed (the irregularities are due to the perturbation p). The
computational errors from the GMRES process are not visible in this picture: these errors are
mainly in the direction of the two small singular vectors.

Figure 3. The pictures show the solution x_k of Ax = b computed with 80 steps of GMRES (top pictures) and of MINRES (bottom pictures). The ith coordinate of x_k (along the vertical axis) is plotted against i (along the horizontal axis). The right pictures show the component of x_k orthogonal to the two singular vectors with smallest singular value (labelled 'x_{GMRES} proj on span(V(3:n))' and 'x_{MINRES} proj on span(V(3:n))', with singular vectors ordered by increasing singular value), while the left pictures show the complete x_k.

In contrast, the irregularities in the
MINRES curve (bottom-right) are almost purely the effect of rounding errors in the MINRES
process.
3 SYMMLQ
In SYMMLQ we minimize the norm of x − x_k over x_k ∈ A K^k(A; r_0), which means that, writing x_k = A V_k y_k, the vector y_k is the solution of the normal equations
(A V_k)^T (A V_k) y_k = (A V_k)^T x = V_k^T b = ‖r_0‖_2 e_1.
This system can be further simplified by exploiting the Lanczos relations (1):
T_k^T T_k y_k = ‖r_0‖_2 e_1.
A stable way of solving this set of normal equations is based on an L Q̃ decomposition of T_k^T, and this is equivalent to the transpose of the Q_k R_k decomposition of T_k (see (6)), which is constructed for GMRES and MINRES:
T_k^T = [L_k 0] Q_k^T, with L_k ≡ R_k^T lower triangular.
This leads to
L_k L_k^T y_k = ‖r_0‖_2 e_1,
from which the basic generating formula for SYMMLQ is obtained:
x_k = V̄_k g_k,   (25)
where V̄_k consists of the first k columns of V_{k+1} Q_k and g_k ≡ L_k^{-1} ‖r_0‖_2 e_1. We will further assume that x_0 = 0.
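The generating formula (25) can be written down directly from the same reduced quantities used for GMRES and MINRES; the sketch below (dense QR instead of the recursive LQ update, illustrative names) gives the formula only, not the actual short-recurrence SYMMLQ implementation of [14]:

import numpy as np
from scipy.linalg import solve_triangular

def symmlq_from_lanczos(V, T, normr0):
    # With T = Q [R; 0] and L = R^T lower triangular, solve L g = normr0*e1 by
    # forward substitution and form x = (V_{k+1} Q)[:, :k] g, cf. (25).
    k = T.shape[1]
    Q, Rfull = np.linalg.qr(T, mode='complete')
    L = Rfull[:k, :k].T
    e1 = np.zeros(k); e1[0] = normr0
    g = solve_triangular(L, e1, lower=True)
    return (V @ Q)[:, :k] @ g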
The actual implementation of SYMMLQ [14] is based on an update procedure for V k+1 Q k ,
and on a three term recurrence relation for kr
Note that SYMMLQ can be carried out with exactly the same computed values for V k+1 , Q k ,
R k , and r 0 , as for GMRES and MINRES. In fact, there is no good reason for using different
values for each of the algorithms. Therefore, differences because of round-off, between the
three methods, must be attributed to the additional rounding errors made in the evaluation
of the right-hand side of (25).
The largest factor in the upper bound for these additional rounding errors in the construction
of the SYMMLQ approximation x k is caused by the inversion of L k . The multiplication
with V_{k+1} Q_k and the assembly of x_k lead to additional factors in the upper bound (similar as for MINRES and GMRES). In order to simplify the much more complicated analysis for SYMMLQ,
we have chosen to study only the effect of the errors introduced by the inversion of L k . The
resulting error Δx_k is written as Δx_k = V̄_k (ĝ_k − g_k), where g_k represents the exact solution and ĝ_k is the value obtained in finite precision arithmetic. Likewise, the coordinates of ĝ_k/‖r_0‖_2 are denoted by γ̂_j (and those of g_k/‖r_0‖_2 by γ_j). These coordinates can be written out explicitly in terms of the entries of L_k, and some manipulation leads to an expression for the perturbations γ̂_j − γ_j.
From (25) it follows that
Hence, the error in the SYMMLQ residual r ME
k can be written as
The first term can be treated as in GMRES:
We define
By combining (29), (27), and the definition for t k , we conclude that
and because of the orthogonality of v k and v k+1 , we have that
The computed residual reduction k b t k k 2 is usually used for monitoring the convergence, in a
stopping criterion. In actual computations with SYMMLQ, no residual vectors are computed.
Expression (30) can now be bounded realistically byp
3
Here we have used that k jL k
Hence
3
A straight-forward estimate is
3
which is much larger than the first term in (33). Experiments indicate that ‖t̂_k‖_2 decreases towards 0 (even below the value u κ_2(A)). Below, we will explain why this is to be expected (cf. (49)). Fig. 4 illustrates that the upper bound in (33) is realistic.
Accuracy. From (33) it follows that
3
is the SYMMLQ residual with respect to the computed SYMMLQ approximate and
r k is the SYMMLQ residual for the exact SYMMLQ approximate (for the finite precision
Lanczos). Apparently, assuming that kr k increases, SYMMLQ is rather accurate
since, for any method, errors in the order u should be expected anyway.
Figure 4. SYMMLQ with A = Q^T diag(D) Q, Q a Givens rotation: solid line log_10 of the residual norm, dotted line (· · ·) log_10 of the estimated residual norm reduction ‖t̂_k‖_2, plotted against the number of iterations. The pictures show the results for the positive definite system (the left picture) and for the non-definite system (the right picture) of Fig. 1; both systems have the same condition number as there.
Convergence. It is not clear yet whether the convergence of SYMMLQ is insensitive to rounding
errors. This would follow from (33) if both t k and b t k would approach 0. It is unlikely that
‖t_k‖_2 will be (much) larger than ‖t̂_k‖_2, that is, it is unlikely that the inexact process converges
faster than the process in exact arithmetic. Therefore, when it is observed that k b t k k 2 is small
(of order u- 2 (A)), it may be concluded that the speed of convergence has not been affected
seriously by rounding errors. In experiments, we see that b t k approaches zero if k increases.
For practical applications, assuming that ‖t_k‖_2 ≲ ‖t̂_k‖_2, it is useful to know that the
computable value k b t k k 2 informs us on the accuracy of the computed approximate and on a
possible loss of speed of convergence. However, it is of interest to know in advance whether
the computed residual reduction will decrease to 0. Moreover, we would like to know whether the same holds for the exact reduction. Of course, it is impossible to prove that SYMMLQ will converge for any
symmetric problem: one can easily construct examples for which ‖r_k‖_2 will be of order 1 for any k < n. But, as we will analyse in the next subsection, the interesting quantities can be
bounded in terms of the MINRES residual. That result will be used in order to show that
the perturbations will be relatively unimportant as soon as MINRES has converged to some
degree.
3.1 A relation between SYMMLQ and MINRES residual norms
In this section we will assume exact arithmetic, in particular the Lanczos process is assumed
to be exact. The residuals r MR
k and r ME
k denote the residuals of MINRES and SYMMLQ,
respectively.
The norm of the residual b \Gamma Ax b , with x b the best approximate of x in K k
can be bounded in terms of the norm of the
residual r MR
kr MR
This follows from the observation that r MR
k where x MR
k is from the same subspace
from which the best approximate x b has been selected, and furthermore that kb \Gamma Ax b k 2 -
Unfortunately, SYMMLQ selects its
approximation x k from a different subspace, namely AK k (A; r 0 ). This makes a comparison
less straight forward.
The following lemma will be used for bounding the SYMMLQ error in terms of the MINRES
error. Its proof uses the fact that r MR
connects K k+1
spanned by r MR
k and AK k (A; r 0 ).
Lemma 3.1 For each z 2 K k+1
kr MR
2: (36)
Proof. For simplicity we will assume that x
By construction x ME
z in the space AK k (A; r 0 ). Hence
implies that
By construction we have that x ME
as a consequence:
From Pythagoras' theorem, with (37), we conclude that
and (36) follows by combining this result with (38).
Unfortunately, a combination of (36) with
k and the obvious estimate jff k j kr MR
, from (37) does not lead to a useful result. An interesting result follows from an
upper bound for jff k j that can be obtained from a relation between two consecutive MINRES
residuals and a Lanczos basis vector. This result is formulated in the next theorem.
Theorem 3.2
Proof. We use the relation
r MR
where r CG
k is the kth Conjugate Gradient residual. The scalars s and c represent the Givens
transformation used in the kth step of MINRES. This relation is a special case of the slightly
more general relation between GMRES and FOM residuals, formulated in [2, 22]. For symmetric
A, GMRES is equivalent with MINRES, and FOM is equivalent with CG. Since
r CG
r MR
kr MR
kr MR
and
. Moreover, since
r MR
k .
Therefore, with e ME
kr MR
r MR
kr MR
kr MR
r MR
kr MR
kr MR
and hence
kr MR
kr MR
A combination of (42) and (36) with
k+1 leads to
kr MR
kr MR
With
and using the minimal residual property kr MR
we obtain the following recursive
upper bound from (43):
kr MR
A simple induction argument shows that fi k - k+1 , and the definition of fi k implies
kr MR
which completes the proof.
For our analysis of the additional errors in SYMMLQ, we also need a slightly more general
result, formulated in the next theorem.
Theorem 3.3 Let y.
For the best approximation y ME
k of y in AK k (A; r 0 ), and for y MR
is the best approximation of c in AK k (A; r 0 ), with - k as in (39), we have
kr MR
i-k
kr MR
Proof. The proof comes along the same lines as the proof of Theorem 3.2.
Replace the quantities x and x MR
k by y and y MR
k . Since the y quantities fulfill the same
orthogonality relations, (36) is valid also in the y quantities. This is also the case for the
upper bound for jff k j kr MR
Hence, with e ME
j , we have
kr MR
kr MR
If we define b
we find that
kr MR
which implies (45).
For the relations between SYMMLQ and MINRES we have assumed exact arithmetic, that is
we have assumed an exact Lanczos process as well as an exact solve of the systems with L k .
However, we can exclude the influence of the Lanczos process by applying Theorem 3.2 right
away to a system with a Lanczos matrix Tm and initial residual kr 0 k 2 e 1 . In this setting, we
have, for k ! m, that ([2, 22])
kr MR
with s j the sine in the jth Givens rotation for the QR decomposition of T is the estimated
reduction of the norms of the MINRES residuals. From relation (44) in combination with (31)
we conclude that
Note that inequality (47) is correct for any symmetric tri-diagonal extension e
Tm of T
(47) holds with e
Tm instead of Tm . It has been shown in [6] that there is an extension e
Tm
of which any eigenvalue is in a O(u) 1
4 -neighborhood of some eigenvalue of A, and therefore
in fairly good precision. This leads to our upper bound
In x3.1.1, we will show that
The upper bound in (49) contains a square of the condition number. However, in the interesting
situation where ae k decreases towards 0, the effect of the condition number squared will be
annihilated eventually.
Remark 3.4. Except for the constants involved, the estimates (48) and (49), respectively, appear to be sharp (see Fig. 5).
Although the maximal values of the ratio ‖t̂_k‖_2/ρ_k in Fig. 5 exhibit slowly growing
behavior, the growth is not of order k 3 . In the proof of (49) (cf. x3.1.1), upper bounds as
in (48) are used in a consecutive number of steps. In view of the irregular convergence of
SYMMLQ, the upper bound (48) will be sharp for at most a few steps. By exploiting this
observation, one can show that a growth of order k 2 , or even less, will be more likely.
Figure 5. Results for the non-definite matrix (with the condition number as in the right pictures) of Fig. 1 and Fig. 4. The left picture ('SYMMLQ versus MINRES') shows log_10 of the ratio ‖t̂_k‖_2/ρ_k of the estimated residual norm reduction of SYMMLQ with the one of MINRES. The right picture ('perturbations in SYMMLQ') models ‖t̂_k − t_k‖_2/ρ_k: it shows log_10 of (e_k^T (L + Δ)^{-1} e_1)/ρ_k for |Δ| ≤ ε|L|, ε = 2.958e-13.
3.1.1 SYMMLQ recurrences
In this section we derive the upper bound (49).
Suppose that the jth recurrence for the fl i 's is perturbed by a relatively small ffi and all
other recurrence relation are exact:
The resulting perturbed quantities are labeled as e.
Then
For is a multiple of the SYMMLQ residual for the Tm -system (m ?
as in the proof of inequality (48), Theorem 3.2 could be applied for estimating k e t . For
the situation where j 6= 1, Theorem 3.3 can be used.
To be more precise, with we have (in the notation of
Theorem 3.3), for
y
and
ae k
with c j the cosine in the jth Givens rotation. Therefore, by Theorem 3.3,
ae k
For this specific situation, the estimate for fi k in the last paragraph of the proof of Theorem
3.2 can be improved. It can be shown that fi j - 1 if fi k - k\Gammaj . Therefore, the - k+1 in
(54) can be replaced by - k\Gammaj .
A combination of (51) with (54) gives
Using the definition of M j and the recurrence relations for the fl j , we can express
\Gamma' jj
Therefore, from (48), we have that
Hence (cf. (50))
and, with (55), this gives
Because the recurrences are linear, the effect of a number of perturbations is the cumulation
of the effects of single perturbations. If each recurrence relation is perturbed as in (50) then the
estimate (49) appears as a cumulation of bounds as in (57). The vector b t k in (49) represents
the result of these successive perturbations due to finite precision arithmetic.
Finally, we will explain that the effect of rounding errors in solving L can be described
as the result of successively perturbed recurrence relations (50), with
First we note that the efl k 's resulting from the perturbation
are the same as those resulting from the perturbation
which means that a perturbation to the second term in the jth recurrence relation can also be
interpreted as a similar perturbation to the first term in the (j \Gamma 1)st recurrence relation.
Now we consider perturbations that are introduced in each recurrence relation due to finite
precision arithmetic errors. Let b actually computed
and this can be rewritten, with different - and - 0 , as
Since the perturbation to the second term in this jth recurrence relation can be interpreted as
a similar perturbation to the first term in the (j \Gamma 1)st recurrence relation (which was already
perturbed with a factor (1 + 3-)), we have that the computed b fl j can be interpreted as the
result of perturbing each leading term with a factor (1
4 Discussion and Conclusions
In Krylov subspace methods there are two main effects of floating point finite precision arithmetic
errors. One effect is that the generated basis for the Krylov subspace deviates from the
exact one. This may lead to a loss of orthogonality of the Lanczos basis vectors, but the main
effect on the iterative solution process is a delay in convergence rather than mis-convergence.
In fact, what happens is that we try to find an approximated solution in a subspace that is
not as optimal, with respect to its dimension, as it could have been.
The other effect is that the determination of the approximation itself is perturbed with rounding
errors, and this is, in our view a serious point of concern; it has been the main theme of
this study. In our study we have restricted ourselves to symmetric indefinite linear systems
b. Before we review our main results, it should be noted that we should expect upper
bounds for relative errors in approximations for x that contain at least the condition number
of A, simply because we can in general not compute Ax k exactly. We have studied the effects
of perturbations to the computed solution through their effect on the residual, because the
residual (or its norm) is often the only information that we get from the process. This residual
information is often obtained in a cheap way from some update procedure, and it is not
uncommon that the updated residual may take values far beyond machine precision (relative
to the initial residual). Our analysis shows that there are limits on the reduction of the true
residual because of errors in the approximated solution.
In view of the fact that we may expect at least a linear factor - 2 (A), when working with
Euclidean norms, GMRES (x2.2) and SYMMLQ (x3) lead to acceptable approximate solutions.
When these methods converge then the relative error in the approximate solution is, apart from
modest factors, bounded by u - 2 (A). SYMMLQ is attractive since it minimizes the norm of
the error, but it does so with respect to A times the Krylov subspace, which may lead to a
delay in convergence with respect to GMRES (or MINRES), by a number of iterations that is
necessary to gain a reduction by a factor of the order of κ_2(A) in the residual, see Theorem 3.2. For ill-conditioned
systems this may be considerable.
As has been pointed out in [14], the Conjugate Gradient iterates can be constructed with
little effort from SYMMLQ information if the they exist. For indefinite systems the Conjugate
Gradient iterates are well-defined for at least every other iteration step, and they can be used
to terminate the iteration if this is advantageous. However, the Conjugate Gradient process
has no minimization property (as for the positive definite case) when the matrix is indefinite
and so there is no guarantee that any of these iterates will be sufficiently close to the desired
solution before SYMMLQ converges.
For indefinite symmetric systems we see that MINRES may lead to large perturbation
errors: for MINRES the upper bound contains a factor κ_2(A)^2. This means that if the
condition number is large, then the methods of choice are GMRES or SYMMLQ. Note that
for the symmetric case, GMRES can be based on the three-term recurrence relation, which
means that the only drawback is the necessity to store all the Lanczos vectors. If storage is at
premium then SYMMLQ is the method of choice.
If the given system is well-conditioned, and if we are not interested in very accurate solu-
tions, then MINRES may be an attractive choice.
Of course, one may combine any of the discussed methods with a variation on iterative
refinement: after stopping the iteration at some approximation x k , we compute the residual
possible in higher precision, and we continue to solve
solution z j of this system is used to correct x . The procedure could be
repeated and eventually this leads to approximations for x so that the relative error in the
residual is in the order of machine precision (for more details on this, see [20]). However, if we
would use MINRES then, after restart, we have to carry out at least a number of iterations
for the reduction by a factor equal to the condition number, in order to arrive at something of
the same quality as GMRES, which may make the method much less effective than GMRES.
For situations where κ_2(A) is of the order of 1/√u or larger, MINRES may even be incapable of getting a sufficient reduction for the iterative refinement procedure to converge.
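A bare-bones version of this refinement loop, with one of the discussed solvers as the inner iteration (here SciPy's GMRES; the parameter choices and names are illustrative, and the residual would ideally be evaluated in higher precision):

import numpy as np
from scipy.sparse.linalg import gmres

def refine(A, b, inner_steps=50, cycles=3):
    x = np.zeros_like(b)
    for _ in range(cycles):
        r = b - A @ x                       # ideally computed in higher precision
        z, _ = gmres(A, r, restart=inner_steps, maxiter=inner_steps)
        x = x + z
    return x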
It is common practice, among numerical analysts, to test the convergence behavior of
Krylov subspace solvers for symmetric systems with well-chosen diagonal matrices. This gives
often a quite good impression of what to expect for non-diagonal matrices with the same
spectrum. However, as we have shown in our x2.5, for MINRES this may lead to a too
optimistic picture, since floating point error perturbations with MINRES lead to errors in the
residual (and the approximated solution) that are a factor of about κ_2(A) smaller than for non-diagonal matrices.
--R
Templates for the solution of linear sys- tems:building blocks for iterative methods
A theoretical comparison of the Arnoldi and GMRES algorithms
A survey of preconditioned iterative methods
Polynomial Based Iteration Methods for Symmetric Linear Systems
Matrix Computations
Behavior of slightly perturbed Lanczos and conjugate-gradient recurrences
Iterative Methods for Solving Linear Systems
Methods of conjugate gradients for solving linear systems
Accuracy and Stability of Numerical Algorithms
analysis of the Lanczos algorithm for tridiagonalizing a symmetric matrix
Accuracy and effectiveness of the Lanczos algorithm for the symmetric eigenproblem
Approximate solutions and eigenvalue bounds from Krylov subspaces
Solutions of sparse indefinite systems of linear equations
The symmetric eigenvalue problem
GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems
Reliable updated residuals in hybrid Bi-CG methods
Relaxiationsmethoden bester Strategie zur L-osung linearer Gleichungssysteme
Efficient High Accuracy Solutions with GMRES(m)
The superlinear convergence behaviour of GMRES
--TR | GMRES;MINRES;linear systems;iterative methods;SYMMLQ;stability |
587872 | Polynomial Instances of the Positive Semidefinite and Euclidean Distance Matrix Completion Problems. | Given an undirected graph G=(V,E) with node set V=[1,n], a subset $S\subseteq V$, and a rational vector $a\in {\rm\bf Q}^{S\cup E}$, the positive semidefinite matrix completion problem consists of determining whether there exists a real symmetric n n positive semidefinite matrix X=(xij) satisfying xii=ai ($i\in S$) and xij=aij ($ij\in E$). Similarly, the Euclidean distance matrix completion problem asks for the existence of a Euclidean distance matrix completing a partially defined given matrix. It is not known whether these problems belong to NP. We show here that they can be solved in polynomial time when restricted to the graphs having a fixed minimum fill-in, the minimum fill-in of graph G being the minimum number of edges needed to be added to G in order to obtain a chordal graph. A simple combinatorial algorithm permits us to construct a completion in polynomial time in the chordal case. We also show that the completion problem is polynomially solvable for a class of graphs including wheels of fixed length (assuming all diagonal entries are specified). The running time of our algorithms is polynomially bounded in terms of n and the bitlength of the input a. We also observe that the matrix completion problem can be solved in polynomial time in the real number model for the class of graphs containing no homeomorph K4. | Introduction
1.1. The matrix completion problem. This paper is concerned with the completion
problem for positive semidefinite and Euclidean distance matrices. The positive
semidefinite matrix completion problem (P) is defined as follows:
Given a graph G = (V, E), a subset S ⊆ V and a rational vector a ∈ Q^{S∪E}, determine whether there exists a real n × n matrix X = (x_{ij}) satisfying
(1.1)   X ⪰ 0,  x_{ii} = a_i (i ∈ S),  x_{ij} = a_{ij} (ij ∈ E).
(The notation X ⪰ 0 means that X is a symmetric positive semidefinite matrix or, for
short, a psd matrix.) In words, problem (P) asks whether a partially specified matrix
can be completed to a psd matrix; the terminology of graphs being used as a convenient
tool for encoding the positions of the specified entries. When problem (P) has a positive
answer, one says that a is completable to a psd matrix; a matrix X satisfying (1.1) is called
a psd completion of a and a positive definite (pd) completion when X is positive definite.
We let (P s ) denote problem (P) when diagonal entries are specified.
If one looks for a pd completion then one can assume without loss of generality that all
diagonal entries are specified (cf. Lemma 2.5); this is however not obviously so if one looks
for a psd completion (although this can be shown to be true when restricting the problem
to the class of chordal graphs; cf. the proof of Theorem 3.5).
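As a numerical (hence only approximate) counterpart of the decision problem (P), the feasibility question can be posed directly as a small semidefinite program, for instance with the cvxpy modelling package; the function and its arguments below are illustrative:

import cvxpy as cp

def psd_completable(n, diag_spec, offdiag_spec):
    # diag_spec: {i: a_i for i in S};  offdiag_spec: {(i, j): a_ij for ij in E}
    X = cp.Variable((n, n), symmetric=True)
    cons = [X >> 0]
    cons += [X[i, i] == v for i, v in diag_spec.items()]
    cons += [X[i, j] == v for (i, j), v in offdiag_spec.items()]
    cons += [X[j, i] == v for (i, j), v in offdiag_spec.items()]
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status, X.value

# Example: path on 3 nodes with unit diagonal and two specified off-diagonal entries
print(psd_completable(3, {0: 1, 1: 1, 2: 1}, {(0, 1): -0.9, (1, 2): -0.9})[0])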
A symmetric n × n matrix Y = (y_{ij})_{i,j=1}^n is called a Euclidean distance matrix (a distance matrix, for short) if there exist vectors u_1, . . . , u_n ∈ R^k (for some k ≥ 1) such that
(1.2)   y_{ij} = ‖u_i − u_j‖^2  for all i, j = 1, . . . , n.
(Here, ‖u‖ denotes the Euclidean norm of vector u ∈ R^k.) A set of vectors u_i satisfying (1.2) is called a realization of Y. Note that all diagonal entries of a distance matrix are equal to zero. The Euclidean distance matrix completion problem (D) is defined as follows:
Given a graph G = (V, E) and a rational vector d ∈ Q^E, determine whether there exists a real matrix Y = (y_{ij}) satisfying
(1.3)   Y is a distance matrix and y_{ij} = d_{ij} (ij ∈ E).
Hence problem (D) asks whether a partially specified matrix can be completed to a distance
matrix.
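For illustration, constructing a distance matrix from a realization as in (1.2) is immediate (Python/NumPy; the points below are an arbitrary example):

import numpy as np

def edm(points):
    # points: n x k array whose rows are a realization u_1, ..., u_n;
    # returns Y with y_ij = ||u_i - u_j||^2, cf. (1.2).
    G = points @ points.T
    g = np.diag(G)
    return g[:, None] + g[None, :] - 2 * G

print(edm(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])))   # zero diagonal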
As will be recalled in Section 2.3, psd matrices and distance matrices are closely related
and, thus, their associated completion problems can often be treated in an analogous
manner. These matrix completion problems have many applications, e.g., to multidimensional
scaling problems in statistics (cf. [28]), to the molecule conformation problem in
chemistry (cf. [10], [17]), to moment problems in analysis (cf. [5]).
1.2. An excursion to semidefinite programming. The psd matrix completion
problem is obviously an instance of the general semidefinite
programming feasibility problem (F):
Given integral n × n symmetric matrices Q_0, Q_1, . . . , Q_m, determine whether there exist real numbers z_1, . . . , z_m satisfying
(1.4)   Q_0 + z_1 Q_1 + · · · + z_m Q_m ⪰ 0.
The complexity status of problem (F) is a fundamental open question in the theory
of semidefinite programming; this is true for both the Turing machine model and the real
number model, the two most popular models of computation used in complexity theory.
In particular, it is not known whether there exists an algorithm solving (F) whose running
time is polynomial in the size L of the data, that is, the total space needed to store the
entries of the matrices Q_0, Q_1, . . . , Q_m.
The Turing machine model (also called rational number model, or bit model; cf. [12])
works on rational numbers and, more precisely, on their binary representations; in par-
ticular, the running time of an elementary operation (+; \Gamma; \Theta; \Xi) depends on the length
of the binary representations of the rational numbers involved. Hence, the size L of the
data of problem (F) in this model can be defined as mn 2 L 0 , where L 0 is the maximum
number of bits needed to encode an entry of a matrix Q i . On the other hand, the real
number model (introduced in [9]) works with real numbers and it assumes that exact real
arithmetic can be performed; in particular, an elementary operation (+; \Gamma; \Theta; \Xi) between
any two real numbers takes unit time. Hence, the size L of the data of (F) in this model
is equal to mn 2 .
Semidefinite programming (SDP) deals with the decision problem (F) and its optimization
version:
s.t.
. SDP can be seen as a generalization of linear programming (LP), obtained
by replacing the nonnegativity constraints of the vector variable in LP by the semidefinite-
ness of the matrix variable in SDP. Information about SDP can be found in the handbook
[40]; cf. also the survey [38], and [3, 16] with an emphasis on applications to discrete
optimization.
A standard result in LP is that every feasible linear system: Ax - b with rational
coefficients has a solution whose size is polynomially bounded in terms of the size of A
and b (cf. [36], corollary 3.2b). This implies that the problem of testing feasibility of
an LP program belongs to NP in the bit model (this fact is obvious for the real number
model). Moreover, any LP optimization problem can be solved in polynomial time in the
bit model using the ellipsoid algorithm of Khachiyan [22] or the interior-point method of
Karmarkar [21]; it is an open question whether LP can be solved in polynomial time in
the real number model (cf. p. 60 in [41]).
The feasibility problem (F) belongs to NP in the real number model (since one can
test in polynomial time whether a matrix is psd, for instance, using Gaussian elimination;
in fact, for a rational matrix the running time is polynomial in its bitlength (cf. p. 295
in [15])). However, it is not known whether problem (F) belongs to NP in the bit model.
Indeed, in contrast with LP, it is not true that if a solution exists then one exists which
is rational and has a polynomially bounded size. Consider, for instance, the following
2 is the unique real for which X - 0; hence, this is an instance where there
is a real solution but no rational solution. Consider now the following matrix (taken from
and
thus any rational solution has exponential bitlength. More examples of 'ill-conditioned'
sdp's can be found in [33].
However, Ramana [33] has developed an exact duality theory for SDP which enables
him to show the following results: Problem (F) belongs to NP " co-NP in the real number
model. In the bit model, (F) belongs to NP if and only if it belongs to co-NP; hence,
is not NP-complete nor co-NP complete unless NP=co-NP.
Algorithms have been found that permit solving the optimization problem (1.5) approximately
in polynomial time; they are based on the ellipsoid method (cf. [15]) and
interior-point methods (cf. [31], [3]).
More precisely, set K := {z | z is feasible for (F)} and, for a rational ε > 0, let
S(K, ε) := {y | ∃ z ∈ K with ||y − z|| ≤ ε} ('the points that are in the ε-neighborhood
of K') and let S(K, −ε) denote the set of points that are at distance at least ε
from the border of K. Let L denote the maximum bit size of the entries of the matrices
Q_0 , Q_1 , . . . , Q_m . Assume that we know a constant R > 0 such that either K = ∅ or there exists z ∈ K
with ||z|| ≤ R. Then, the ellipsoid based algorithm, given rational ε > 0, either finds
y ∈ S(K, ε) or asserts that S(K, −ε) = ∅.
Its running time is polynomial in n, m, L and log(1/ε), and this algorithm is polynomial in the
bit model.
Assume that we know a constant R > 0 such that ||z|| ≤ R for all z ∈ K and a point
z^0 ∈ K which is strictly feasible. There is an interior-point
algorithm which finds a strictly feasible y ∈ K whose objective value c^T y is within ε of the optimum, in
time polynomial in n, m, L, log(1/ε), log R and the bitlength of z^0 . Note, however, that no
polynomial bound has been established for the bitlengths of the intermediate numbers
occurring in the algorithm.
Khachiyan and Porkolab have shown that problem (F) and its analogue in rational
numbers can be solved in polynomial time in the bit model for a fixed number m of
variables.
Theorem 1.1.
(i) [32] Problem (F) can be solved in polynomial time for any fixed m.
(ii) [23] The following problem can be solved in polynomial time for any fixed m:
Given n \Theta n integral symmetric matrices Q rational numbers
satisfying (1.4) or determine that no such numbers exist.
The result from Theorem 1.1 (ii) extends to the context of semidefinite programming the
result of Lenstra [29] on the polynomial solvability of integer LP in fixed dimension.
1.3. Back to the matrix completion problem. As the matrix completion problem
is a special instance of SDP, it can be solved approximately in polynomial time;
specific interior-point algorithms for finding approximate psd and distance matrix completions
have been developed, e.g., in [19],[10],[2],[30]. However, such algorithms are not
guaranteed to find exact completions in polynomial time. This motivates our study in
this paper of some classes of matrix completion problems that can be solved exactly in
polynomial time.
As mentioned earlier, one of the difficulties in the complexity analysis of SDP arises
from the fact that a rational SDP problem might have no rational solution (recall the
example from (1.6)). This raises the following question in the context of matrix completion:
If a rational partial matrix has a psd completion, does a rational completion always
exist ?
We do not know of a counterexample to this statement. On the other hand, we will show
that the answer is positive, e.g., when the graph of specified entries is chordal or has
minimum fill-in 1 (cf. Lemma 4.2). (Note that the answer is obviously positive if a pd
completion exists.)
Motivated by the above discussion, let us define for each of the problems (P) and (D)
its rational analogue (P Q ) and (D Q ). Problem (P Q ) is defined as follows:
Given a graph E), a subset S ' V and a rational vector a 2 Q S[E , find a
rational matrix X satisfying (1.1) or determine that no such matrix exists.
diagonal entries are specified), we denote the problem as (P Q
Problem (D Q ) is defined as follows:
Given a graph E) and a rational vector d 2 rational matrix Y
satisfying (1.3) or determine that no such matrix exists.
The complexity of the problems (P), (D), (P Q ), and (D Q ) is not known; in particular,
it is not known whether they belong to NP in the bit model (they do trivially in the
real number model). In this paper, we present some instances of graphs for which the
completion problems can be solved in polynomial time. All our complexity results apply
for the bit model (unless otherwise specified, as in Section 5.3).
Recall that a graph is said to be chordal if it does not contain a circuit of length ≥ 4
as an induced subgraph. Then, the minimum fill-in of graph G is defined as the minimum
number of edges needed to be added to G in order to obtain a chordal graph. Note that
computing the minimum fill-in of a graph is an NP-hard problem [42]. The following is
the main result of Sections 3 and 4.
Theorem 1.2. For any integer m ≥ 0, problems (P), (P Q ), (D) and (D Q ) can be
solved in polynomial time (in the bit model) when restricted to the class of graphs whose
minimum fill-in is equal to m.
The essential ingredients in the proof of Theorem 1.2 are the subcase m = 0 (the chordal
case), Theorem 1.1, and the link (exposed in Section 2.3) between psd matrices and distance
matrices. In the chordal case, a simple combinatorial algorithm permits solving the
completion problem in polynomial time.
The psd matrix completion problem for chordal graphs has been extensively studied
in the literature (cf. the survey of Johnson [18] for detailed references). In some sense,
this problem has been solved by Grone, Johnson, Sá and Wolkowicz [14] who, building
upon a result of Dym and Gohberg [11], have characterized when a vector a indexed by
the nodes and edges of a chordal graph admits a psd completion; cf. Theorem 3.1. From
this follows the polynomial time solvability of problem (P s ) for chordal graphs. In fact,
the result from Theorem 3.1 is proved in [14] in a constructive manner and, thus, yields
an algorithm permitting to solve problem (P Q
s ) for chordal graphs. This algorithm has
a polynomial running time in the real number model; however, it has to be modified in
order to achieve a polynomial running time in the bit model.
To summarize, the result from Theorem 1.2 also holds in the real number model for
chordal graphs (the case m = 0); it would hold for all graphs having fixed minimum fill-in m ≥ 1
if the result from Theorem 1.1 remained valid in the real number model 1 .
We present in Section 5.1 another class of graphs for which the matrix completion
problem (P s ) can be solved in polynomial time (in the bit model). This class contains
(generalized) circuits and wheels having a fixed length (and fatness); these graphs arise
naturally when considering the polar approach to the psd matrix completion problem.
Then, Section 5.2 contains a brief description of this polar approach, together with some
open questions and remarks. In the final Section 5.3, we consider the matrix completion
problem for the class of graphs containing no homeomorph of K 4 (it contains circuits).
For this class, a condition characterizing the existence of a psd or distance matrix completion
is known, which yields a simple combinatorial algorithm solving the existence and construction
problems in polynomial time in the real number model.
claims to have a proof of this fact.
2. Preliminaries. We recall here some basic facts about Schur complements and Euclidean
distance matrices that will be needed in the paper and we make a few observations
about psd completions.
2.1. Schur complements. For a symmetric matrix M , set In(M) := (p; q; r), where
p (resp. q, r) denotes the number of positive (resp. negative, zero) eigenvalues of M .
When M ⪰ 0, a maximal nonsingular principal submatrix of M is a nonsingular principal
submatrix of M of largest possible order, thus equal to the rank of M .
Lemma 2.1. Let M = [ A  B ; B^T  C ] be a symmetric matrix, where A is nonsingular. Then,
In(M) = In(A) + In(C − B^T A^{−1} B),
where C − B^T A^{−1} B is the matrix known as the Schur complement of A in M . In particular, if
M ⪰ 0 and A is a maximal nonsingular principal submatrix of M , then C = B^T A^{−1} B.
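To make the role of the Schur complement concrete, the following small numerical check (an illustration only; the blocks A, B, C are made-up data and the tolerance is arbitrary) verifies Haynsworth's inertia formula In(M) = In(A) + In(C − B^T A^{−1} B).

import numpy as np

def inertia(M, tol=1e-10):
    # (number of positive, negative, zero eigenvalues)
    w = np.linalg.eigvalsh((M + M.T) / 2.0)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)), int(np.sum(np.abs(w) <= tol)))

A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[5.0]])
M = np.block([[A, B], [B.T, C]])
S = C - B.T @ np.linalg.inv(A) @ B          # Schur complement of A in M
pA, nA, zA = inertia(A)
pS, nS, zS = inertia(S)
print(inertia(M) == (pA + pS, nA + nS, zA + zS))   # expected: True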
As a direct application, we have the following results which will be used at several
occasions in the paper.
Lemma 2.2. Let X be a symmetric matrix having the following block decomposition:
X = [ T  R  Z ; R^T  A  S ; Z^T  S^T  D ],
where T , R, Z, A, S, D are rational matrices of suitable orders, all entries of X being
specified except those of Z that have to be determined in order to obtain X ⪰ 0. Assume
that
[ T  R ; R^T  A ] ⪰ 0  and  [ A  S ; S^T  D ] ⪰ 0.
In the case when n - 1 and A 6= 0, let A 0 be a maximal nonsingular principal submatrix
of A, let
denote the corresponding block decompositions of A and X. Then, X - 0 if we set
Z
when
Proof. The result follows using Lemma 2.1 after noting that the Schur complement of
A 0 in X is given by@ T R T
Indeed, the Schur complement
in A is equal to 0 since A - 0 and
A 0 is a maximal nonsingular principal submatrix of A; as implies that
Lemma 2.3. Let X be a symmetric matrix of the form
R A
where A - 0 and T is a symmetric matrix of order ' whose diagonal entries are all equal
to some scalar N . Let A 0 be a maximal nonsingular principal submatrix of A and let
denote the corresponding block decompositions of A and X. Then, X - 0 if and only if (i)
In particular, X is pd if and only if A and
are pd. Moreover, T
large enough (namely, for N
greater or equal to the largest eigenvalue of R T
diagonal
entries and as off-diagonal entries those of T ).
2.2. Some observations about positive semidefinite completions. Given a
graph G = (V, E), a subset S ⊆ V , a vector a ∈ Q^{S∪E} , and a scalar N > 0, let a^N
denote the extension of a obtained by setting a_i := N for all i ∈ V \ S. Obviously,
Lemma 2.4. a is completable to a psd matrix if and only if a^N is completable to a
psd matrix for some N > 0 (and then for all N' ≥ N ).
Therefore, if one can "guess" a value N to assign to the unspecified diagonal entries, then
one can reduce the problem to the case when all diagonal entries are specified. This can
be done when the graph G of specified off-diagonal entries is chordal as we see later or if
we look for a pd completion as the next result shows.
Lemma 2.5. Given a ∈ Q^{S∪E} , let b denote its
restriction to the subgraph of G induced by S. Then, a has a pd completion if and only if b has
a pd completion.
Proof. Apply Lemma 2.3.
This result does not extend to psd completions (which contradicts a claim from [14] (psd
case in Prop. 1)). Indeed, the following partial matrix
has no psd completion while its lower principal 2 × 2 submatrix is psd.
A final observation is that if a partial matrix contains a fully specified row, then the
completion problem can be reduced to considering a matrix of smaller order. Indeed,
suppose that is a partial symmetric matrix whose first row is fully specified. If
a A is not completable. If a then A is completable if and only if its
first row is identically zero and its lower principal submatrix of order
If a 11 ? 0 then one can reduce to a problem of order considering the Schur
complement of a 11 in A.
2.3. Euclidean distance matrices. The following connection (2.4) between psd
and distance matrices has been established by Schoenberg [35]. Let Y = (Y_ij) be a
square symmetric matrix with zeros on its main diagonal and whose rows and columns are
indexed by a set V , and let i_0 be a given element of V . Then, φ_{i_0}(Y ) denotes the square
symmetric matrix whose rows and columns are indexed by the set V \ {i_0}
and whose entries are given by
(2.3)   φ_{i_0}(Y )_{ij} := (Y_{i,i_0} + Y_{j,i_0} − Y_{ij})/2   (i, j ∈ V \ {i_0}).
Then, one can easily verify that
(2.4)   Y is a distance matrix ⟺ φ_{i_0}(Y ) ⪰ 0.
(Indeed, a set of vectors u_i (i ∈ V ) is a realization of the matrix Y if and only if φ_{i_0}(Y )
is the Gram matrix of the vectors u_i − u_{i_0} (i ∈ V \ {i_0}), which means that its (i, j)-th
entry is equal to (u_i − u_{i_0})^T (u_j − u_{i_0}).) Hence, φ_{i_0} establishes a linear bijection between the
set of distance matrices of order |V | and the set of psd matrices of order |V | − 1.
(2.4) has a direct consequence for the corresponding matrix completion problems. Let
G = (V, E) be a graph and assume that i_0 ∈ V is a universal node, i.e., that i_0 is adjacent
to all other nodes of G. Then, an algorithm solving the psd matrix completion
problem for the graph G \ i_0 can be used for solving the distance matrix completion problem
for graph G and vice versa. Indeed,
(2.5)   Y is a distance matrix completion of d ∈ R^E ⟺ φ_{i_0}(Y ) is a psd
completion of φ_{i_0}(d).
(For the definition of φ_{i_0}(d) use (2.3) restricted to the pairs ij with i = j
or i ≠ j with ij an edge of G.) For more information about connections between the two
problems, see [20],[26].
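A small numerical sketch of this correspondence may be helpful. It uses the map in the form φ_{i0}(Y)_{ij} = (Y[i, i0] + Y[j, i0] − Y[i, j]) / 2; the points, sizes and seed below are arbitrary. The check builds a squared Euclidean distance matrix from random points and confirms that its image under the map is psd.

import numpy as np

def phi(Y, i0):
    # (phi(Y))_{ij} = (Y[i, i0] + Y[j, i0] - Y[i, j]) / 2, indexed by V \ {i0}.
    idx = [i for i in range(Y.shape[0]) if i != i0]
    return np.array([[(Y[i, i0] + Y[j, i0] - Y[i, j]) / 2.0 for j in idx] for i in idx])

rng = np.random.default_rng(0)
U = rng.standard_normal((5, 3))
Y = np.array([[np.sum((U[i] - U[j]) ** 2) for j in range(5)] for i in range(5)])
G = phi(Y, 0)                                   # Gram matrix of the points u_i - u_0
print(np.all(np.linalg.eigvalsh((G + G.T) / 2) >= -1e-9))   # expected: True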
3. The Matrix Completion Problem for Chordal Graphs. We consider here
the matrix completion problems for chordal graphs. First, we recall results from [14] and
[4] yielding a good characterization for the existence of a completion; then, we see how
they can be used for constructing a completion in polynomial time.
3.1. Characterizing existence of a completion. Let G = (V, E) be a graph and
let a ∈ Q^{V∪E} be a vector; in the distance matrix case, the entries of a indexed by V
(corresponding to the diagonal entries of a matrix completion) are assumed to be equal to
zero. If K ⊆ V is a clique in G (i.e., any two distinct nodes in K are joined by an edge
in G), the entries a_{ij} of vector a are well-defined for all nodes i, j ∈ K; we let a(K)
denote the |K| × |K| symmetric matrix whose rows and columns are indexed by K and
with ij-th entry a_{ij} for i, j ∈ K. Obviously, if a is completable to a psd matrix, then a
satisfies:
(3.1)   a(K) ⪰ 0 for every maximal clique K in G.
Similarly, if a is completable to a distance matrix, then a satisfies:
(3.2)   a(K) is a distance matrix for every maximal clique K in G.
The conditions (3.1) and (3.2) are not sufficient in general for ensuring the existence of
a completion. For instance, if G = (V, E) is a circuit and a ∈ Q^{V∪E} has all its entries
equal to 1 except one entry on an edge equal to −1, then a satisfies (3.1) but a is not
completable to a psd matrix. However, if G is a chordal graph, then (3.1) and (3.2) suffice
for ensuring the existence of a completion.
Theorem 3.1. Let G = (V, E) be a chordal graph and let a ∈ R^{V∪E} . If a satisfies
(3.1), then a is completable to a psd matrix [14]; if a satisfies (3.2), then a is completable to
a distance matrix [4]; moreover, if a is rational valued, then a admits a rational completion.
As the maximal cliques in a chordal graph can be enumerated in polynomial time
[37] (cf. below) and as one can check positive semidefiniteness of a rational matrix in
polynomial time (cf. [15], p. 295), one can verify whether (3.1) holds in polynomial time
when G is chordal; in view of (2.4), one can also verify whether (3.2) holds in polynomial
time when G is chordal. This implies:
Theorem 3.2. Problems (P s ) and (D) can be solved in polynomial time for chordal
graphs.
The proof given in [14, 4] for Theorem 3.1 is constructive; thus, it provides an algorithm
for constructing a completion and, as we see below, a variant of it can be shown to have a
polynomial running time. The proof is based on the following properties of chordal graphs.
Let G = (V, E) be a graph.
Then, G is chordal if and only if it has a perfect elimination ordering; moreover,
such an ordering can be found in polynomial time [34]. An ordering v_1 , . . . , v_n of the
nodes of a graph G = (V, E) is called a perfect elimination ordering if, for every
j = 1, . . . , n − 1, the set of nodes v_k with k > j that are adjacent to v_j induces a clique
in G. For j = 1, . . . , n, let K_j denote the clique consisting of node v_j together with
the nodes v_k (k > j) that are adjacent to v_j ; the maximal cliques of a chordal graph G
are then among the cliques K_1 , . . . , K_n .
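Assuming a perfect elimination ordering is already available, the candidate cliques K_j and the maximal cliques among them can be listed directly. The following sketch illustrates this on a made-up toy chordal graph; it does not compute the ordering itself.

def cliques_from_peo(adj, order):
    # 'order' is assumed to be a perfect elimination ordering v_1, ..., v_n.
    # K_j consists of v_j together with its neighbours coming later in the order;
    # the maximal cliques of a chordal graph are among these sets.
    pos = {v: k for k, v in enumerate(order)}
    K = [frozenset([v]) | {w for w in adj[v] if pos[w] > j} for j, v in enumerate(order)]
    return [set(k) for k in K if not any(k < k2 for k2 in K)]

# Made-up chordal graph on {0, 1, 2, 3}: the 4-cycle 0-1-3-2-0 plus the chord 12.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
print(cliques_from_peo(adj, [0, 3, 1, 2]))   # expected: [{0, 1, 2}, {1, 2, 3}]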
Hence, if G is chordal and not a clique, then one can find (in polynomial time) an edge
e ∉ E for which the graph H := G + e (obtained by adding e to G) is chordal. (Indeed,
let i be the largest index in [1, n] for which there exists j > i such that v_i and v_j are
not adjacent in G; then we can choose for e the pair ij, as v_1 , . . . , v_n remains a perfect
elimination ordering for H.)
Moreover, if G is chordal then, for any e ∉ E, there exists a unique maximal clique in
G + e containing the edge e [14] (easy to check).
Therefore, if G is chordal and not a clique, we can order the missing edges in G as
e_1 , . . . , e_p in such a way that the graph G_q := G + {e_1 , . . . , e_q } is chordal for every
q = 1, . . . , p. Let K_q be the unique maximal clique in G_q containing edge
e_q . Given a ∈ Q^{V∪E} satisfying (3.1), set G_0 := G and x_0 := a. We execute the following
step for q = 1, . . . , p:
Find z_q ∈ Q for which the vector x_q := (x_{q−1} , z_q ) satisfies x_q (K_q ) ⪰ 0.
This can be done in view of Lemma 2.2 (applied with a single unspecified entry) to the matrix X :=
x_q (K_q ), and one can choose for z_q the rational value given by (2.2). Then, the final vector
x_p is a rational psd completion of a. This shows Theorem 3.1 in
the psd case (the Euclidean distance matrix case being similar).
As mentioned earlier, the preprocessing step (find the suitable ordering e_1 , . . . , e_p of
the missing edges and the cliques K_q ) can be done in polynomial time. Then, one can
construct the values z_1 , . . . , z_p yielding a psd completion of a in p ≤ n^2 steps. Therefore,
the algorithm is polynomial in the real number model. In order to show polynomiality in
the bit model, one has to verify that the encoding sizes of z_1 , . . . , z_p are polynomially
bounded in terms of n and the encoding size of a. This is, however, not clear. Indeed,
both R_0 and S_0 in the definition of z_q via (2.2) may involve some previously defined z_h for
h < q (in fact, the same may hold for A_0 ); then, we have a quadratic dependence between
z_q and the previously defined z_h , which may cause a problem when trying to
prove that the encoding size of z q remains polynomially bounded. However, as we see
below, the above algorithm can be modified to obtain a polynomial running time. The
basic idea is that, instead of adding the missing edges one at a time, one adds them by
'packets' consisting of edges sharing a common endnode. Then, in view of Lemma 2.2,
one can specify simultaneously all the entries on these edges, which permits achieving a
linear dependency among the z q 's.
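The single completion step behind this construction can be sketched numerically as follows. Under the assumption that the two specified bordered blocks are psd and that A is positive definite, the choice Z = R A^{-1} S makes the full matrix psd; this is a simplified stand-in for the choice (2.2) of Lemma 2.2 (which works with a maximal nonsingular principal submatrix A_0 when A is only psd), and the data below are made up.

import numpy as np

def complete_step(T, R, A, S, D):
    # Sketch of one completion step, assuming A positive definite:
    # if [[T, R], [R.T, A]] and [[A, S], [S.T, D]] are psd, then Z = R A^{-1} S
    # makes X = [[T, R, Z], [R.T, A, S], [Z.T, S.T, D]] psd.
    Z = R @ np.linalg.solve(A, S)
    return np.block([[T, R, Z], [R.T, A, S], [Z.T, S.T, D]])

# Hypothetical data with both specified bordered blocks positive definite.
T = np.array([[2.0]]); R = np.array([[1.0, 0.0]])
A = np.array([[2.0, 0.5], [0.5, 2.0]])
S = np.array([[1.0], [0.0]]); D = np.array([[2.0]])
X = complete_step(T, R, A, S, D)
print(np.linalg.eigvalsh(X).min() >= -1e-10)   # expected: True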
3.2. Constructing a psd completion in polynomial time. Let E) be
a chordal graph and let perfect elimination ordering of its nodes. For
and let denote the elements
set F ' := fi ' denote the graph with node set V and edge set
Hence, we have a sequence of graphs:
where each G ' is chordal remains a perfect elimination ordering of its nodes)
and GL is the complete graph. We now show that G ' has only one maximal clique which
is not a clique in G '\Gamma1 .
Lemma 3.3. For there is a unique maximal clique K ' in G ' which is
not a clique in G '\Gamma1 . Moreover, J(i is a clique in G
and the set K ' n J(i ' ) is a clique in G.
Proof. Let K be a maximal clique in G ' which is not a clique in G
first show that J(i ' ) ' K. For this, assume that
By maximality of K, there exists an element i 2 K such that i and j 0
are not adjacent in G ' . Then, since the set [i ' ; n] is a clique in G ' . Therefore, the
pairs ij and ii ' are edges of G ' and, thus, of G. Since the ordering of the nodes is a perfect
elimination ordering for G, this implies that i ' and j must be adjacent in G, yielding a
contradiction.
Suppose now that K;K 0 are two distinct maximal cliques in G ' such that
and exist nodes i 2 K n K 0 that are not adjacent
in G ' . Given a node j 2 J(i ' ), one can easily verify that (i; i is an induced circuit
in G_{ℓ−1} , which contradicts the fact that G_{ℓ−1} is chordal and, thus, shows the uniqueness of the
clique K ' . It is obvious that K ' n fi ' g is a clique in G '\Gamma1 . We now verify that K ' n J(i ' ) is
a clique in G. For this, note first that i ' is adjacent to every node of K '
G ' and, thus, in G. Suppose now that x 6= y are two nodes in K ' n (J(i that are
not adjacent in G. Then, as xy is an edge of G '\Gamma1 , we have:
y. As i ' is adjacent to both x and y in G this implies that x
and y must be adjacent in G, yielding a contradiction.
We now describe the modified algorithm. Let G = (V, E) be a chordal graph and let
a ∈ Q^{V∪E} satisfy (3.1). Setting x_0 := a, we execute the following step for ℓ = 1, . . . , L:
Find z ' 2 Q F ' for which the vector x ' :=
Then, the final vector rational psd completion of a. For
instance, we can choose for z ' the value given by relation (2.2), applying Lemma 2.2 to
the matrix X := x ' (K ' ). (Indeed, in view of Lemma 3.3,
can be verified by induction.)
We verify that the encoding sizes of z are polynomially bounded in terms
of n and the encoding size of a. For this, we note that z are determined by a
recurrence of the form:
are matrices of (appropriate) orders - n. A crucial observation is that
all entries of R ' and A ' belong to the set, denoted as A, of entries of a (as K ' n J(i ' ) is
a clique in G, by Lemma 3.3), while the entries of S ' belong to the set A [ Z
Z denotes the set of entries of (z
For r 2 Q, let hri denote the encoding size of r, i.e., the number of bits needed
to encode r in binary notation and, for a vector
i). One can verify that, for two vectors
a denote the maximum encoding length of the entries of vector a and, for
We derive from (3.6) that
for all ' (setting S 0 := 0). This implies that
As L ≤ n, we obtain that all encoding sizes of z_1 , . . . , z_L are polynomially bounded in
terms of n and the encoding size of a. (We also use here the fact that the entries of A_ℓ^{−1}
are polynomially bounded in the input size; cf. chap. 1.3 in [15].) Thus, we have shown:
Theorem 3.4. Problem (P Q
s ) can be solved in polynomial time for chordal graphs.
We finally indicate how to solve the general problem when some diagonal entries are
unspecified.
Theorem 3.5. Problems (P) and (P Q ) can be solved in polynomial time for chordal
graphs.
Proof. Let E) be a chordal graph, let S ' V and let a 2 Q S[E satisfying:
for each maximal clique K ' S (else, we can conclude that a is not completable).
Following Lemma 2.4, we search for a scalar N > 0 such that a is completable if and only
if its extension a^N ∈ Q^{V∪E} (assigning value N to the unspecified diagonal entries) is
completable or, equivalently, a^N (K) ⪰ 0 for all maximal cliques K in G. Note that each
matrix a N (K) has the same form as matrix X from Lemma 2.3. Therefore, such N exists
if and only if the linear condition (i) from Lemma 2.3 holds for each clique K and an
explicit value for N can be constructed as indicated in Lemma 2.3. Once N has been
determined, we proceed with completing a N by applying the algorithm presented above.
To conclude note that the algorithm presented in this section outputs a pd completion
if one exists.
3.3. Constructing a distance matrix completion. The distance matrix completion
problem for chordal graphs can be solved in an analogous manner. Namely, let
E) be a chordal graph, let
be the sequence of chordal graphs from (3.4), let K ' be the cliques constructed
in Lemma 3.3, and let a 2 Q Setting a 0 := a, we execute the
following step for
Find z ' 2 Q F ' for which the vector x ' := (a
x ' (K ' ) is a distance matrix.
Then, the final vector provides a distance matrix completion of a.
The above step can be performed as follows. If then we let z ' be defined
by z ' (j) := x is a given element of J(i ' ). Otherwise, let
is a universal node in G[K ' ], the subgraph of G induced by
Therefore, in view of relation (2.5), we can find z ' satisfying (3.7) by applying Lemma
2.2. The polynomial running time of the above algorithm follows from the polynomial
running time of the corresponding algorithm in the psd case. Thus, we have shown:
Theorem 3.6. Problem (D Q ) can be solved in polynomial time for chordal graphs.
4. The Matrix Completion Problem for Graphs with Fixed Minimum Fill-
In. In this section we describe an algorithm solving problems (P), (P Q ), (D)
and (D Q ) in polynomial time for the graphs having minimum fill-in m, where m ≥ 1 is a
given integer. This algorithm is based on Theorems 1.1, 3.1, 3.2, 3.4 and 3.6.
Let G = (V, E) be a graph with minimum fill-in m, let S ⊆ V and let a ∈ Q^{S∪E} be
given. (Again, in the distance matrix case, the entries of a indexed by V are assumed to be zero.) We first
execute the following step.
Step 0. Find edges e_1 , . . . , e_m ∉ E for which the graph H := G + {e_1 , . . . , e_m } is
chordal and find the maximal cliques K_1 , . . . , K_p of H . (Such edges exist since G has
minimum fill-in m, and they can be found in polynomial time, simply by enumeration, as
m is fixed. The maximal cliques in H can also be enumerated in polynomial time since H
is chordal and, moreover, p ≤ n.)
Then, we perform step x in order to solve problem (x) for x=P,P Q ,D,D Q .
Step P. Determine whether there exist real numbers z for which the vector
defined by x i := a
x eh := z h
Step D. Determine whether there exist real numbers z for which the vector x 2
defined by x
are distance matrices.
Then, a has a completion if and only if the answer in step P or D is positive.
Step P Q . Find rational numbers z holds or determine that
no such numbers exist; if they exist, find a rational psd completion of x.
Step D Q . Find rational numbers z holds or determine that no
such numbers exist; if they exist, find a rational distance matrix completion of x.
Steps P and P Q can be executed in the following manner. Let M denote the block
diagonal matrix with the p matrices x(K_1 ), . . . , x(K_p ) as diagonal blocks (and zeros else-
where). Hence, M has order |K_1 | + · · · + |K_p |, and (4.1) holds if and only if M ⪰ 0.
Clearly, the matrix M can be written in the form
M = Q_0 + z_1 Q_1 + · · · + z_m Q_m ,
where Q_1 , . . . , Q_m are symmetric matrices with (0,1)-entries and Q_0 is a symmetric
matrix whose nonzero entries belong to the set of entries of a. Therefore, in view of
Theorem 1.1, one can determine the existence of z_1 , . . . , z_m satisfying (4.1) in polynomial
time. Then, finding a rational psd completion of x in step P Q can be done in polynomial
time in view of Theorem 3.4.
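For a fixed candidate vector z, condition (4.1) can be checked directly by assembling the block-diagonal matrix of clique submatrices and testing its smallest eigenvalue. The sketch below illustrates only this evaluation step (the decision over all real z relies on Theorem 1.1); the helper clique_builder and the toy data are hypothetical.

import numpy as np

def blockdiag_from_cliques(blocks):
    # Assemble the block-diagonal matrix with the clique matrices x(K_1), ..., x(K_p).
    n = sum(B.shape[0] for B in blocks)
    M = np.zeros((n, n)); off = 0
    for B in blocks:
        k = B.shape[0]
        M[off:off + k, off:off + k] = B
        off += k
    return M

def feasible_for(z, clique_builder):
    # clique_builder(z) is a hypothetical helper returning the clique matrices once
    # the unspecified entries have been set to the candidate values z.
    return np.linalg.eigvalsh(blockdiag_from_cliques(clique_builder(z))).min() >= -1e-10

# Toy illustration: two 2x2 clique matrices sharing one unspecified entry z[0].
builder = lambda z: [np.array([[1.0, z[0]], [z[0], 1.0]]),
                     np.array([[2.0, z[0]], [z[0], 1.0]])]
print(feasible_for([0.5], builder), feasible_for([1.5], builder))   # True False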
In the distance matrix case, we use the following construction for distance matrices.
For a be a square symmetric matrix whose rows and columns are indexed
by set V a and let i a be a given element of V a . We construct a new matrix D, denoted as
whose rows and columns are indexed by set and whose entries
are given by
ae D a (i;
D a (i; i a
Lemma 4.1. D is a distance matrix if and only if D are distance
matrices.
Proof. The 'only if' part is obvious. Conversely, assume that D are distance
matrices; we show that D := D 1 is a distance matrix. For a 2 [1; p], let u a
a ) be vectors providing a realization of D a ; we can assume without loss of generality
that u a
i a
we construct a sequence of vectors w i 2 R n1+:::+np (i 2
setting w i := (0
denotes the zero vector in
R n ). One can easily verify that the vectors w i provide a realization of D.
Steps D and D Q can be performed as follows. Let M := x(K 1
the matrix indexed by K constructed as indicated in relation (4.3). Clearly,
M can be written in the form
M = Q_0 + z_1 Q_1 + · · · + z_m Q_m ,
where Q_1 , . . . , Q_m are symmetric matrices with entries in {0, 1} and Q_0 is a symmetric
matrix whose nonzero entries are sums of at most two entries of a. Let i_0 be a given
element of K_1 .
Hence, (4.2) holds if and only if matrix M is a distance matrix (by Lemma 4.1) or,
equivalently, if and only if ' i 0 (M) is positive semidefinite (by relation (2.4)). Therefore,
in view of Theorems 3.2 and 3.6, steps D and D Q can be executed in polynomial time.
This completes the proof of Theorem 1.2.
Lemma 4.2. When the minimum fill-in m is equal to 1, existence of a completion
implies existence of a rational one.
Proof. To see it, suppose first that all diagonal entries are specified; then, steps P and
P Q can be executed in an elementary manner. Indeed, each matrix x(K_i )
has at most one unspecified entry z_1 . Hence, the set of scalars z_1 for which x(K_i ) ⪰ 0 is
an interval of the form I_i = [u_i , v_i ] (this is easy to see from Lemma
2.2). Therefore, (4.1) holds if and only if z_1 ∈ ∩_i I_i = [u, v], where u := max_i u_i
and v := min_i v_i . If there is a completion (i.e., if u ≤ v), then one can
find one with z 1 rational. This is obvious if this follows from the fact
(easy to verify) that
Suppose now some diagonal entries are unspecified. If there is a completion with value z 2
at the unspecified diagonal entries, then we can assume that z 2 is rational (replacing if
necessary z 2 by a larger rational number). Then, by the above discussion, the off-diagonal
unspecified entry z 1 can also be chosen to be rational.
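The interval structure exploited in this proof is easy to observe numerically. The following sketch scans candidate values for a single unspecified off-diagonal entry and reports the approximate feasible interval; it is only an illustration (the closed-form endpoints come from Lemma 2.2), and the example matrix, search range and grid are made up.

import numpy as np

def feasible_interval(partial, i, j, lo=-10.0, hi=10.0, steps=2001):
    # 'partial' is symmetric with entries (i, j) and (j, i) unspecified.  Scan candidate
    # values z and report the (approximate) interval on which the completion is psd;
    # for a single unknown this feasible set is an interval.
    ok = []
    for z in np.linspace(lo, hi, steps):
        X = partial.copy(); X[i, j] = X[j, i] = z
        if np.linalg.eigvalsh(X).min() >= -1e-9:
            ok.append(z)
    return (min(ok), max(ok)) if ok else None

# Hypothetical 3x3 example: only the (0, 2) entry is missing.
P = np.array([[1.0, 0.5, 0.0], [0.5, 1.0, 0.5], [0.0, 0.5, 1.0]])
print(feasible_interval(P, 0, 2))   # roughly (-0.5, 1.0)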
5. Further Results and Open Questions. We present in Section 5.1 another class
of graphs for which the completion problem can be solved in polynomial time (in the bit
model). Then, we discuss in Section 5.2 some open questions arising when considering a
polar approach to the positive semidefinite completion problem. Finally, we describe in
Section 5.3 a simple combinatorial algorithm permitting to solve the completion problem
in polynomial time (in the real number model) for the class of graphs containing no
homeomorph of K 4 .
5.1. Another class of polynomial instances. We present here another class of
graphs for which the positive semidefinite matrix completion problem (P s ) can be solved
in polynomial time. Given two integers p, q ≥ 1, let G_{p,q} be the class consisting of the
graphs G = (V, E) satisfying the following properties: There exist two disjoint subsets
is disjoint from
E, the graph
is chordal, and H has q maximal cliques that are not cliques in G. We show:
Theorem 5.1. Given integers p; q - 1, the positive semidefinite completion problem
can be solved in polynomial time (in the bit model) over the class G p;q .
Examples of graphs belonging to class G p;q arise from circuits, wheels and some gen-
eralizations. A generalized circuit of length n is defined in the following manner: its node
set is U with two nodes being adjacent if and only if
generalized wheel of length n is obtained by adding a set U 0 (the
center of the wheel) of pairwise adjacent nodes to a generalized circuit of length n and
making each node in U 0 adjacent to each node in U Call a generalized circuit
or wheel p-fat if min(jU p-fat generalized circuit or wheel
of length q belongs to G p;q . We will see in Section 5.2 that generalized circuits and
wheels arise as basic objects when studying the matrix completion problem on graphs of
small order.
Fig. 5.1. (a) The wheel of length 4; (b) a 2-fat generalized wheel of length 4
The proof of Theorem 5.1 is based on the following result of Barvinok [8], which shows
that one can test feasibility of a system of quadratic equations in polynomial time for any
fixed number of equations 2 .
Theorem 5.2. For i = 1, . . . , m, let
f_i (x) := x^T A_i x + b_i^T x + c_i
be a quadratic polynomial
in x ∈ R^n , where A_i is an n × n symmetric matrix, b_i ∈ R^n and c_i ∈ R. One
can test feasibility of the system f_i (x) = 0 (i = 1, . . . , m) in polynomial time (in the bit
model) for any given m.
Proof of Theorem 5.1. Let E) be a graph in class G p;q and let a 2 R V [E be
given. We are also given the sets V 1 and V 2 for which, say, adding
to G all edges in F := fij creates a chordal graph H . We show that
deciding whether a can be completed to a psd matrix amounts to testing the feasibility of
a system of m quadratic polynomials where m depends only on p and q. As H is chordal,
a is completable to a psd matrix if and only if there exists a matrix Z of order
for which x := (a; Z) 2 R V [E[F satisfies: x(K) - 0 for each maximal clique K in H . We
assume that for each maximal clique K of H contained in G (else, we
can conclude that a is not completable). Consider now a maximal clique K of H which is
not contained in G. Then, x(K) has the following form:
setting the submatrix of Z with row
indices in the notation of Lemma 2.2, we
obtain that x(K) - 0 if and only if the following matrix
is positive semidefinite (we have assumed that A - 0). We can apply again a Schur
decomposition to matrix MK in order to reformulate the condition on Z. Setting TK :=
In [8] Barvinok considers the homogeneous case, where each equation is of the form: f i
for some symmetric matrix A i . However, the general nonhomogeneous case can be derived from it
(Barvinok, personal communication, 1998).
we have that
. Let D 0
0 be a largest nonsingular submatrix of D 0 and let
denote the corresponding block decompositions of D 0 and MK . Taking the Schur complement
of D 0
0 in MK , we obtain that MK - 0 if and only if
Let YK := Z[V denote the column submatrix of Z with column indices in
and set
Then,
Therefore, the condition x(K) ⪰ 0 can be rewritten as the system:
ae
are matrices depending on input data a. We can reformulate condition
(1K) as an equation by introducing a new square matrix SK of order
namely, rewrite (1K) as
the columns of matrix Z, and let s K
denote the columns of matrix SK for each clique K. Then, condition (1'K) can be expressed
as a system of
equations of the form:
where f is a quadratic polynomial; similarly for condition (2K). The total number of
quadratic equations obtained in this manner depends only on p and q. Therefore, in view
of Theorem 5.2, one can check feasibility of this system in polynomial time when p and q
are fixed.
Let G'_{p,q} denote the subclass of G_{p,q} consisting of the graphs G for which every maximal
clique of H (the chordal extension of G) which is not a clique of G is not contained
in Then, the Euclidean distance matrix completion problem can be solved in
Fig. 5.2. The matrix completion problem for generalized circuits of length 4
polynomial time over the class G 0
p;q for any fixed p and q. The proof is similar to that of
Theorem 5.1, since we can get back to the psd case using relation (2.4) (a matrix and its
image under ' i 0 having same pattern of unknown entries if i 0 belongs to V n In
particular, the Euclidean distance matrix completion problem can be solved in polynomial
time for generalized circuits of length 4 and fixed fatness, or for generalized wheels (with
a nonempty center) of fixed length and fatness.
The complexity of the psd completion problem for generalized wheels and circuits
is not known; in fact, in view of the remark made at the end of Section 2.2, it suffices
to consider circuits. In view of Theorem 5.1, the problem is polynomial if we fix the
length and the fatness of the circuit. It would be particularly interesting to determine the
complexity of the completion problem for generalized circuits of length 4 and unrestricted
fatness. This problem can be reformulated as follows: Determine whether and how one
can fill the unspecified entries in the blocks marked '?' of the matrix X shown in Figure
5.2, so as to obtain X ⪰ 0 (all entries are assumed to be specified in the grey blocks).
Indeed, as will be seen in Section 5.2, these graphs constitute in some sense the next case
to consider after chordal graphs.
5.2. A polar approach to the completion problem. Given a graph G = (V, E),
consider the cone C_G consisting of the psd matrices X of order |V | whose entries X_{ij} vanish
whenever i ≠ j and ij is not an edge of G. A matrix X ∈ C_G lies on an extremal ray
of the cone C_G (i.e., X is extremal) if X cannot be written as a sum of two matrices in C_G
that are not nonnegative multiples of X; we
define the order of G as the maximum rank of an extremal matrix X ∈ C_G . It is shown
in [1] that a ∈ R^{V∪E} is completable to a psd matrix if and only if a satisfies:
(5.1)   Σ_{i∈V} a_i X_{ii} + 2 Σ_{ij∈E} a_{ij} X_{ij} ≥ 0
for every extremal matrix X ∈ C_G . One might suspect that the psd matrix
completion problem is somewhat easier to solve for graphs having a small order since the
extremal matrices in CG have then a small rank. Indeed, the graphs of order 1 are precisely
the chordal graphs, for which the problem is polynomially solvable. On the other hand,
a circuit of length n has order n − 2, which is the highest possible order for a graph on n
nodes. Moreover, if i 0 is a universal node in a graph G, then both graphs G and G n i 0
have the same order, which corroborates the observation made at the end of Section 2.2.
A natural question concerns the complexity of the problem for graphs of order 2.
The graphs of order 2 have been characterized in [27]. It is shown there that, up to a
simple graph operation (clique-sum), they belong to two basic classes G 1 and G 2 . All the
graphs in G 1 have minimum fill-in at most 3; hence, the problem is polynomially solvable
for them (by Theorem 1.2). The graphs in class G 2 are the generalized wheels of length 4
(and unrestricted fatness). Hence, if the psd matrix completion problem is polynomially
solvable for generalized wheels of length 4, then the same holds for all graphs of order 2.
5.3. The matrix completion problem for graphs with no homeomorph of
K 4 . We now discuss the matrix completion problem for the class H consisting of the graphs
containing no homeomorph of K 4 as a subgraph; a homeomorph of K 4 being obtained from
K 4 by replacing its edges by paths (see Fig. 5.3). (Graphs in H are also known as series parallel graphs.)
Fig. 5.3. A homeomorph of K 4
Clearly, H contains all circuits. The case of circuits is certainly interesting to understand
since circuits are the most simple nonchordal graphs.
Similarly to the chordal case, a condition characterizing existence of a psd completion
is known for the graphs in H. Namely, the following is shown in [24] (using a result of [7]).
Given a graph G = (V, E) in H and a ∈ R^{V∪E} satisfying a_i = 1 for all i ∈ V and |a_e | ≤ 1 for all e ∈ E, a has a
psd completion if and only if the scalars x_e := (1/π) arccos a_e (e ∈ E) satisfy the inequalities:
(5.2)   x(F ) − x(C \ F ) ≤ |F | − 1   for each circuit C in G and each F ⊆ C with |F | odd.
Proposition 5.3. [6] Given x ∈ [0, 1]^E , one can test in polynomial time whether x
satisfies the linear system (5.2).
Proof. Consider the graph ~
E)
consists of
the pairs ij;
is easy to see that x satisfies (5.2) if and only if z(P
for every path P from i to i 0 in ~
G and every . The result now follows as one can
compute shortest paths in polynomial time.
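A sketch of such a shortest-path test is given below. It uses one standard doubling construction for separating the odd-cycle inequalities (two copies of every node, same-side arcs of weight x_e, crossing arcs of weight 1 − x_e, and the requirement that every shortest path from a node to its own copy has length at least 1); the details may differ from the graph ~G used in [6], so this is an assumption-laden stand-in rather than a transcription of that proof.

import heapq

def violates_odd_cycle_inequalities(n, edges, x):
    # x maps each edge (i, j) to a value in [0, 1].  Build the doubled graph with nodes
    # (v, 0), (v, 1); edge ij gives same-side arcs of weight x_ij and crossing arcs of
    # weight 1 - x_ij.  Then (5.2) holds iff every shortest (i, 0) -> (i, 1) path has
    # length >= 1 (crossing arcs play the role of the odd set F).
    adj = {(v, s): [] for v in range(n) for s in (0, 1)}
    for (i, j) in edges:
        w = x[(i, j)]
        for s in (0, 1):
            adj[(i, s)] += [((j, s), w), ((j, 1 - s), 1.0 - w)]
            adj[(j, s)] += [((i, s), w), ((i, 1 - s), 1.0 - w)]
    def dist(src, dst):
        d, pq = {src: 0.0}, [(0.0, src)]
        while pq:
            du, u = heapq.heappop(pq)
            if u == dst:
                return du
            if du > d.get(u, float('inf')):
                continue
            for v, w in adj[u]:
                if du + w < d.get(v, float('inf')):
                    d[v] = du + w
                    heapq.heappush(pq, (du + w, v))
        return float('inf')
    return any(dist((i, 0), (i, 1)) < 1.0 - 1e-12 for i in range(n))

# Triangle example: all x_e = 0.9 violates (5.2) (take F = C); all x_e = 0.5 satisfies it.
tri = [(0, 1), (1, 2), (0, 2)]
print(violates_odd_cycle_inequalities(3, tri, {e: 0.9 for e in tri}))   # True
print(violates_odd_cycle_inequalities(3, tri, {e: 0.5 for e in tri}))   # False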
Therefore, problem (P s ) is polynomial time solvable in the real number model for
graphs in H. It is not clear how to extend this result to the bit model since the scalars
(1/π) arccos a_e are in general irrational and, thus, one encounters problems of numerical
stability when trying to check whether (5.2) holds.
Moreover, there is a simple combinatorial algorithm (already briefly mentioned in [25])
permitting the construction of a psd completion in polynomial time in the real number model.
Let G = (V, E) be a graph in H and let a ∈ R^{V∪E} be given satisfying a_i = 1 for all i ∈ V and |a_e | ≤ 1 for all e ∈ E.
The algorithm performs the following steps.
1. Set x_e := (1/π) arccos a_e for e ∈ E and test whether x satisfies (5.2). If not, one can
conclude that a has no psd completion. Otherwise, go to 2.
2. Find a set F of edges disjoint from E for which the graph H := G + F is chordal
and contains no homeomorph of K_4 .
3. Find an extension y 2 [0; 1] E[F of x satisfying the linear system (5.2) with respect to
graph H .
4. Set b_e := cos(π y_e ) for e ∈ E ∪ F . Then b (together with unit diagonal entries) is completable to a psd
matrix (since y satisfies (5.2) with respect to H and H has no homeomorph of K_4 ) and one can compute a
psd completion X of b with the algorithm of Section 3.2 (since H is chordal). Then, X is
a completion of a.
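Steps 1 and 4 are simple entrywise changes of variables, sketched below; the dictionaries and edge labels are arbitrary illustrations.

import numpy as np

def angles_from_entries(a):
    # Step 1: x_e = arccos(a_e) / pi, mapping [-1, 1] entrywise to [0, 1].
    return {e: float(np.arccos(v)) / np.pi for e, v in a.items()}

def entries_from_angles(y):
    # Step 4: b_e = cos(pi * y_e), the inverse change of variables.
    return {e: float(np.cos(np.pi * v)) for e, v in y.items()}

a = {(0, 1): 0.5, (1, 2): -0.25}
x = angles_from_entries(a)
print(all(abs(entries_from_angles(x)[e] - a[e]) < 1e-12 for e in a))   # True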
All steps can be executed in polynomial time. This follows from earlier results for
steps 1 and 4; for step 2 use a result of [39] and, for step 3, one can use an argument
similar to the proof of Proposition 5.3. Namely, given x 2 [0; satisfying (5.2), in order
to extend x to [0; 1] E[feg in such a way that (5.2) remains valid with respect to G+ e, one
has to find a scalar ff 2 [0;
We have:
;. With the notation of the proof of Proposition 5.3, one finds that
is an ab path in ~
is an ab \Gamma path in ~
Hence one can compute α in polynomial time. One can then determine the extension y of x to H
by iteratively applying this procedure.
The distance matrix completion problem for graphs in H can be treated in a similar
manner. Indeed, given G = (V, E) in H and a ∈ R^E_+ , set x_e := √(a_e ) for e ∈ E. Then, a is
completable to a distance matrix if and only if x satisfies the linear inequalities:
(5.3)   x_e ≤ Σ_{f ∈ C\{e}} x_f   for all circuits C in G and all e ∈ C.
(Cf. [26].) Again one can test in polynomial time whether x ≥ 0 satisfies (5.3) (simply,
test for each edge e = ab whether x_e ≤ min( x(P ) : P an ab-path in G \ e )). An
algorithm analogous to the one given in the psd case permits the construction of a distance
matrix completion. Therefore, we have shown:
Theorem 5.4. One can construct a real psd (distance matrix) completion or decide
that none exists in polynomial time in the real number model for the graphs containing no
homeomorph of K 4 .
It is an open question whether the above result extends to the bit model of computa-
tion, even for the simplest case of circuits.
Acknowledgements. We are grateful to A. Barvinok for providing us insight about
Theorem 5.2, to L. Porkolab for bringing [23] to our attention, and to A. Schrijver for
discussions about Section 3. We also thank the referees for their careful reading and for
their suggestions which helped us improve the presentation of the paper.
--R
Positive semidefinite matrices with a given sparsity pattern.
Solving Euclidean distance matrix completion problems via semidefinite programming.
Interior point methods in semidefinite programming with applications in combinatorial optimization.
The Euclidean distance matrix completion problem.
On the matrix completion method for multidimensional moment problems.
The real positive definite completion problem for a simple cycle.
Feasibility testing for systems of real quadratic equations.
On a theory of computation and complexity over the real numbers: NP-completeness
Distance Geometry and Molecular Conformation.
Extensions of band matrices with band inverses.
Computers and Intractability: A Guide to the Theory of NP-Completeness
Algorithmic Theory and Perfect Graphs.
An interior-point method for semidefinite programming
The molecule problem: exploiting structure in global optimization.
Matrix completion problems: a survey.
An interior-point method for approximate positive semidefinite completions
Connections between the real positive semidefinite and distance matrix completion problems.
A new polynomial-time algorithm for linear programming
A polynomial algorithm in linear programming.
Computing integral points in convex semi-algebraic sets
The real positive semidefinite completion problem for series-parallel graphs
Cuts, matrix completions and graph rigidity.
A connection between positive semidefinite and Euclidean distance matrix completion problems.
On the order of a graph and its deficiency in chordality.
Theory of multidimensional scaling.
Interior Point Polynomial Algorithms in Convex Programming: Theory and Algorithms
On the complexity of semidefinite programs.
An exact duality theory for semidefinite programming and its complexity implications
Algorithmic aspects of vertex elimination on graphs.
Remarks to M.
Theory of Linear and Integer Programming.
Decomposition by clique separators.
programming.
Steiner trees
Handbook of Semidefinite Programming: Theory
Computing the minimum fill-in is NP-complete
--TR
--CTR
Henry Wolkowicz , Miguel F. Anjos, Semidefinite programming for discrete optimization and matrix completion problems, Discrete Applied Mathematics, v.123 n.1-3, p.513-577, 15 November 2002 | real number model;chordal graph;positive semidefinite matrix;polynomial algorithm;euclidean distance matrix;order of a graph;minimum fill-in;bit model;matrix completion |
587873 | Preconditioners for Nondefinite Hermitian Toeplitz Systems. | This paper is concerned with the construction of circulant preconditioners for Toeplitz systems arising from a piecewise continuous generating function with sign changes.If the generating function is given, we prove that for any $\varepsilon >0$, only ${\cal O} (\log N)$ eigenvalues of our preconditioned Toeplitz systems of size N N are not contained in $[-1-\varepsilon, -1+\varepsilon] \cup [1 -\varepsilon, 1+\varepsilon]$. The result can be modified for trigonometric preconditioners. We also suggest circulant preconditioners for the case that the generating function is not explicitly known and show that only ${\cal O} (\log N)$ absolute values of the eigenvalues of the preconditioned Toeplitz systems are not contained in a positive interval on the real axis.Using the above results, we conclude that the preconditioned minimal residual method requires only ${\cal O} (N \log ^2 N)$ arithmetical operations to achieve a solution of prescribed precision if the spectral condition numbers of the Toeplitz systems increase at most polynomial in N. We present various numerical tests. | Introduction
Let L_{2π} be the space of 2π-periodic Lebesgue integrable real-valued functions and let C_{2π} be
the subspace of 2π-periodic real-valued continuous functions with norm ||f||_∞ := max_x |f(x)|.
The Fourier coefficients of f ∈ L_{2π} are given by
a_k (f ) := (1/(2π)) ∫_{−π}^{π} f (x) e^{−ikx} dx   (k ∈ Z),
Research supported in part by the Hong Kong-German Joint Research Collaboration Grant from the
Deutscher Akademischer Austauschdienst and the Hong Kong Research Grants Council.
and the sequence {A_N (f )}_{N=1}^∞ of (N, N )-Toeplitz matrices generated by f is defined by
A_N (f ) := ( a_{j−k} (f ) )_{j,k=0}^{N−1} .
Since f is real-valued, the matrices A_N (f ) are Hermitian.
We are interested in the iterative solution of Toeplitz systems
(1.1)   A_N (f ) x = b ,
where the generating function f ∈ L_{2π} . To be more precise, we are looking for good preconditioning
strategies so that Krylov space methods applied to the preconditioned system converge
in a small number of iteration steps. Note that by the Toeplitz structure of A_N each iteration
step requires only O(N log N) arithmetical operations by using fast Fourier transforms.
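With the indexing A_N(f) = (a_{j−k}(f))_{j,k} used above, the O(N log N) matrix-vector product is obtained by embedding A_N into a circulant matrix of order 2N and applying the FFT. The following sketch (random made-up data, not tied to any particular generating function) illustrates this standard trick.

import numpy as np

def toeplitz_matvec(a, x):
    # a: array of length 2N-1 holding a_{-(N-1)}, ..., a_0, ..., a_{N-1} of a Toeplitz
    # matrix A_N with A_N[j, k] = a_{j-k}; x: vector of length N.  Embed A_N into a
    # circulant of order 2N and multiply via FFT in O(N log N).
    N = len(x)
    first_col = a[N - 1:]                 # a_0, a_1, ..., a_{N-1}
    first_row = a[N - 1::-1]              # a_0, a_{-1}, ..., a_{-(N-1)}
    c = np.concatenate([first_col, [0], first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(N)])))
    return y[:N]

# Check against the dense product for a random Hermitian Toeplitz matrix.
rng = np.random.default_rng(1)
N = 8
pos = rng.standard_normal(N - 1) + 1j * rng.standard_normal(N - 1)
a = np.concatenate([np.conj(pos[::-1]), [rng.standard_normal() + 0j], pos])
A = np.array([[a[(j - k) + N - 1] for k in range(N)] for j in range(N)])
x = rng.standard_normal(N)
print(np.allclose(A @ x, toeplitz_matvec(a, x)))   # expected: True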
Preconditioning techiques for Toeplitz systems have been well-studied in the past 10 years.
However, most of the papers in this area are concerned with the case where the generating
function f is either positive or nonnegative, see for instance [4, 3, 18, 6, 16, 9] and the references
therein. In this paper, we consider f that has sign changes. The method we propose here will
also work for generating functions that are positive or nonnegative.
Up to now iterative methods for Toeplitz systems with generating functions having different
signs were only considered in [18, 20] and in connection with non-Hermitian systems in [7, 5].
In [7], we have constructed circulant preconditioners for non-Hermitian Toeplitz matrices with
known generating function of the form
where p is an arbitrary trigonometric polynomial and h is a function from the Wiener class
with jhj ? 0. We proved that the preconditioned matrices have singular values properly
clustered at 1. Then, if the spectral condition number of AN (f) fulfills - 2 (AN
the conjugate gradient method (CG) applied to the normal equation requires only O(log N)
iteration steps to produce a solution of fixed precision. However, in general nothing can be
said about the eigenvalues of the preconditioned matrix.
In this paper, we consider real-valued functions f ∈ L_{2π} of the form
where
Y
is a trigonometric polynomial with a finite number of zeros of even
order 2s_j , and where h ∈ L_{2π} is a piecewise continuous function with simple discontinuities at
τ_1 , . . . , τ_r , i.e., there exist the one-sided limits h(τ_j ± 0) and h(τ_j ) = (h(τ_j + 0) + h(τ_j − 0))/2. Further, we assume that
In particular, we are interested in the Heaviside function h.
A similar setting was also considered in [18]. S. Serra Capizzano suggested the application
of band-Toeplitz preconditioners AN (p s ) in combination with CG applied to the normal
equation. He proved, beyond a more general result which can not directly be used for precon-
ditioning, that at most o(N ) eigenvalues of the preconditioned matrix A_N (p_s )^{−1} A_N (f ) have
absolute values not contained in a positive interval on the real axis.
A result with o(N ) outliers was also obtained in [19], where the application of preconditioned
GMRES was examined.
In the following, we construct circulant preconditioners for the minimal residual method (MIN-
RES). Note that preconditioned MINRES avoids the transformation of the original system
to the normal equation but requires Hermitian positive definite preconditioners. Then, the
preconditioned matrices are again Hermitian, so that the absolute values of their eigenvalues
coincide with their singular values. If the generating function is given, we prove that for any
ε > 0, only O(log N ) singular values of the preconditioned matrices are not contained
in [1 − ε, 1 + ε]. We also construct circulant preconditioners for the case that the generating
function of the Toeplitz matrices is not explicitly known. For this, we use positive reproducing
kernels with special properties previously applied by the authors in [16, 9] and show
that O(log N) singular values of the preconditioned matrices are not contained in a positive
interval on the real axis. Then, if in addition κ_2 (A_N (f )) grows at most polynomially in N , the preconditioned MINRES method
converges in at most O(log N ) iteration steps. In summary, the proposed algorithm requires
only O(N log^2 N ) arithmetical operations.
This paper is organized as follows: In Section 2, we introduce circulant preconditioners for
(1.1) under the assumption that the generating function of the sequence of Toeplitz matrices is
known and prove clustering results for the eigenvalues of the preconditioned matrices. Section
3 deals with the construction of preconditioners if the generating function of the Toeplitz
matrices is not explicitly known. In Section 4, we modify the results of Section 2 with respect
to trigonometric preconditioners. The convergence of MINRES applied to our preconditioned
Toeplitz systems is considered in Section 5. Finally, we present numerical results in Section
6.
Circulant preconditioners involving generating functions
First we introduce some basic notation. By RN (M) we denote arbitrary (N; N)-matrices of
rank at most M . Let M_N (g) be the circulant (N, N )-matrix
M_N (g) := F_N^* diag ( g(2πl/N ) )_{l=0}^{N−1} F_N ,
where F_N denotes the N -th Fourier matrix
F_N := N^{−1/2} ( e^{−2πijk/N} )_{j,k=0}^{N−1} ,
and where F_N^* is the transposed complex conjugate matrix of F_N . For a trigonometric polynomial
k=\Gamman 1
the matrices AN (q) and M N (q) are related by
(see [14]). For a function g with a finite number of zeros we define the set I N (g) by
I N (g) :=
and the points xN;l (g) (l
xN;l (g) :=
where ~ l 2 1g is the next higher index to l so that ~ l 2 I N (g). For N large enough
we can simply choose ~ l By M N;g (f) we denote the circulant matrix
diag (f(x N;l
If g has m zeros, then we have by construction that
Assume now that the sequence {A_N (f )}_{N=1}^∞ of nonsingular Toeplitz matrices is generated by
a known piecewise continuous function f ∈ L_{2π} of the form (1.2) - (1.4). Then we suggest
the Hermitian positive definite circulant matrix M_{N,f} (|f |) as preconditioner for MINRES.
We examine the distribution of the eigenvalues of the preconditioned matrices
M_{N,f} (|f |)^{−1/2} A_N (f ) M_{N,f} (|f |)^{−1/2} .
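As an illustration of how such a circulant preconditioner is applied, the sketch below builds samples of |f| on the grid 2πl/N and applies a power of the circulant F_N^* diag(...) F_N to a vector via the FFT. Flooring the samples away from zero is a crude stand-in for the shifted grid points x_{N,l}(f) used in the actual definition of M_{N,f}(|f|), so this is an assumption-laden sketch rather than the construction above.

import numpy as np

def apply_circulant_power(d, x, power=1.0):
    # M = F*_N diag(d) F_N acting on x; a power of M is applied by taking the same
    # power of the diagonal samples (all entries of d must be positive).
    return np.fft.ifft(np.asarray(d, dtype=float) ** power * np.fft.fft(x))

def preconditioner_samples(f, N, floor=1e-8):
    # Samples |f(2*pi*l/N)|, l = 0, ..., N-1; nearly vanishing samples are floored,
    # a crude stand-in for the shifted grid points x_{N,l}(f).
    return np.maximum(np.abs(f(2 * np.pi * np.arange(N) / N)), floor)

# Example with the generating function f(t) = cos t (one sign change per period).
N = 16
d = preconditioner_samples(np.cos, N)
x = np.random.default_rng(2).standard_normal(N)
y = apply_circulant_power(d, x, power=-0.5)                    # M^{-1/2} x in O(N log N)
print(np.allclose(apply_circulant_power(d, y, power=0.5), x))  # True: M^{1/2} M^{-1/2} = I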
The following theorem is Lemma 10 of [22] written with respect to our notation.
Theorem 2.1 Let h 2 L 2- be a piecewise continuous function having only simple discontinuities
at τ_1 , . . . , τ_r . By F_N we denote the Fejér kernel
F_N (t) := (1/N ) | Σ_{k=0}^{N−1} e^{ikt} |^2
(2.4)        = 1 + 2 Σ_{k=1}^{N−1} (1 − k/N ) cos kt
(2.5)        = (1/N ) ( sin(N t/2) / sin(t/2) )^2 ,
and by F_N * h the cyclic convolution of F_N and h. Then, for any ε > 0, there exist constants
c_1 , c_2 > 0 independent of N so that the number γ(ε; A_N ) of eigenvalues of A_N (h) − M_N (F_N * h) with
absolute value exceeding ε can be estimated by
γ(ε; A_N ) ≤ c_1 log N + c_2 .
In other words, we have by Theorem 2.1 that
(2.6)   A_N (h) = M_N (F_N * h) + R_N (c_1 log N + c_2 ) + V_N ,
where V_N is a matrix of spectral norm ≤ ε and where R_N (c_1 log N + c_2 ) denotes a matrix of rank at most c_1 log N + c_2 .
Using Theorem 2.1, we can prove the following lemma.
Lemma 2.2 Let f be given by (1.2) - (1.4). Then, for any ε > 0 and sufficiently
large N , the number of singular values of M_{N,f} (|h|)^{−1/2} A_N (h) M_{N,f} (|h|)^{−1/2} which are not
contained in the interval [1 − ε, 1 + ε] is at most O(log N ).
Proof. By (2.6) and since the eigenvalues of M_{N,f} (|h|) are restricted from below by h_− , it
remains to show that for any ε > 0 and sufficiently large N , except for O(log N ) eigenvalues,
all eigenvalues of M_{N,f} (|h|)^{−1} M_N (F_N * h) have absolute values in [1 − ε, 1 + ε]. Indeed we
will prove that there are only O(1) outlyers.
For this we follow mainly the lines of proof of Gibb's ph-anomenon. Without loss of generality
we assume that h 2 L 2- has only one jump at
First we examine FN g, where g is given by
By (2.4) and since g has Fourier series
sin
we obtain Z xFN (t)
sin
and further by (2.5)
Z x/
sin Ntsin t! 2
Z x/
sin
sin N t' 2
Z Nx0
sin t
xand by partial integration and definition of g
where si (y) :=
y
Rsin t
dt. We are interested in the behavior of
Here dxe denotes the smallest integer - x. It is well known that lim
. Thus, if
so that
" for all N -
The same holds if we approach 0 from the left, i.e. if we consider 2-l=N for
Next we have by definition of g and h that
is a continuous function. Since FN is a reproducing kernel, for any " ? 0, there exists
~
so that for all l 2
Assume that l = we obtain by (2.7)
and (2.8) that for any " ? 0 there exists
and consequently, since jh
denote the number of zeros of f which are equal to one of the points 2-l=N
1). Then the set
contains at least absolute values of eigenvalues of M N;f (jhj) \Gamma1 M N (FN h) and we
conclude by (2.9) that except for O(1) eigenvalues and sufficiently large N , all eigenvalues of
have absolute values contained in [1 \Gamma This completes the
proof.
Remark 2.3 In a similar way as above we can prove that for any ε > 0 and N sufficiently
large, the number of eigenvalues of A_N (h) with absolute values not in the interval
is O(log N ).
Note that the property that at most o(N) eigenvalues of AN (h) have absolute values not
contained in such an interval follows simply from the fact that the singular values of A_N (h) are
distributed as |h| [13, 19].
Theorem 2.4 Let f be given by (1.2) - (1.4). Then, for any ε > 0 and
sufficiently large N , except for O(log N ) singular values, all singular values of
M_{N,f} (|f |)^{−1/2} A_N (f ) M_{N,f} (|f |)^{−1/2} are contained in [1 − ε, 1 + ε].
Proof. The polynomial p s in (1.3) can be rewritten as
where
Y
and -
p(t) is the complex conjugate of p(t). By straightforward computation it is easy to check
that
where only the first s columns (rows) of R c (r)
N are nonzero columns (rows).
p jhj the eigenvalues of M N;f (jf coincide with the eigenvalues of
BN
Now we obtain by (2.10), (2.1) and (2.3) that
BN
By Lemma 2.2, for any " ? 0 and N sufficiently large, except for O(log N) singular values,
all singular values of M N;f
are contained in [1 \Gamma "; 1+ "]. Now the
assertion follows by (2.12) and Weyl's interlacing theorem [12, p. 184].
3 Circulant preconditioners involving positive kernels
In many applications we only know the entries a k (f) of the Toeplitz matrices AN (f ), but
not the generating function itself. In this case, we use even positive reproducing kernels
. These are trigonometric polynomials of the form
c N;k cos kt; c
satisfying KN - 0,2-
\Gamma-
KN
and the reproducing property
lim
Since
(KN
\Gamma-
a k (f) c N;k e ikx ;
the cyclic convolution of KN and f is determined by the first N Fourier coefficients of f . As
preconditioner which can be constructed from the entries of AN (f) without explicit knowledge
of f we suggest the circulant matrix M N;KN \Lambdaf (jK N f j).
In order to obtain a suitable distribution of the eigenvalues of the preconditioned matrices,
we need kernels with a special property which is related to the order
of the zeros of p s .
The generalized Jackson kernels J m;N of degree - are defined by
determined by (3.1). Here ⌊t⌋ denotes the largest
integer ≤ t. In particular, we have that
i.e. there exist positive constants c 1 ; c 2 so that c 1 N 1\Gamma2m - m;N - c 2 N 1\Gamma2m . See [10, pp.
possibility for the construction of the Fourier coefficients of J m;N is prescribed
in [9].
The B-spline kernels B m;N of degree - are defined by
sinc
where Mm denotes the centered cardinal B-spline of order m and
sinc t := (sin t)/t for t ≠ 0 and sinc 0 := 1.
See [16, 8]. Since
cos kt
the Fourier coefficients of B m;N are given by values of centered cardinal B-splines. Note that
is just the Fejér kernel F_N .
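For the Fejér kernel this convolution is just the Cesàro mean of the Fourier series, Σ_{|k|<N} (1 − |k|/N) a_k e^{ikx}, which can be evaluated directly from the matrix entries a_k(f). The sketch below illustrates this for a toy coefficient sequence; the coefficients c_{N,k} of the generalized Jackson and B-spline kernels would simply replace the factors 1 − |k|/N.

import numpy as np

def fejer_smoothed_symbol(a, x):
    # a: dict k -> a_k(f) for |k| <= N-1 (the entries of A_N(f));
    # returns (F_N * f)(x) = sum_{|k| < N} (1 - |k|/N) a_k e^{ikx}.
    N = max(abs(k) for k in a) + 1
    return sum((1 - abs(k) / N) * v * np.exp(1j * k * x) for k, v in a.items())

# Toy example: f(t) = cos t has a_1 = a_{-1} = 1/2 and all other a_k = 0.
a = {k: 0.0 for k in range(-7, 8)}; a[1] = a[-1] = 0.5
x = np.linspace(0, 2 * np.pi, 5)
print(np.real(fejer_smoothed_symbol(a, x)))   # close to (1 - 1/8) * cos(x)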
The above kernels have the following important property:
Theorem 3.1 be given by (1:2) - (1:4). Assume that for all t j (j 2
exists a neighborhood so that f is a monotone function in this
neighborhood and moreover f(t m;N be given by
(3.2) or (3.3), where
Then there exist so that for N !1, except for O(1) points, all points of the
set
Proof. 1. First we consider the upper bound. Since p s and KN are nonnegative, we obtain
Z
\Gamma-
Z
\Gamma-
In [16, 9], we proved that m - oe implies that for all x 2 I N (p s ) ' I N (f ), there exists a
(KN p s )(x)
Thus, since jh(x)j - h \Gamma for
(KN p s )(x)
2. Next we deal with the lower bound.
2.1. Let x 2 I N (f) be not in the neighborhood of t j (j exist
independent of N so that
since KN is a reproducing kernel and by using the same arguments as in
the proof of Lemma 2.2 if x is in the neighborhood of some we obtain that,
for any " ? 0 there exists N("), so that except for at most a constant number of points, all
considered points x 2 I N (f) satisfy
and thus
2.2. It remains to consider the points
For simplicity we assume that
i.e. p s has only a zero of order 2s at
lim
For any fixed 0
(KN
Z
\Gammab
\Gammab
Z
\Gamma-
Z
Z
\Gammab
b+x
and since f is bounded
(KN
Z
\Gammab
b+x
By definition of KN we see that for any fixed 0 ! ~ b -
Z
KN (t) dt - const N \Gamma2m+1 ; (3.5)
so that we get for small x (e.g. x ! b=2)
(KN
Z
\Gammab
2.2.1. Assume that h has no jump at
that h(t) - h \Gamma or h(t) - \Gammah \Gamma for t 2 [\Gamma"; "]. We restrict our attention to the case h - h \Gamma .
monotone increasing on (0; -), we
obtain for x(N) 2 (0; ") " I N (f) and N sufficiently large that
Z
\Gamma"
Z
Z
with a positive constant c independent of N . On the other hand, we have by definition of
s and since by assumption . Then we
obtain by (3.6) with that for N large enough
(KN f)(x(N))
const
with a positive constant const independent of N .
The proof for x(N) 2 (\Gamma"; follows the same lines.
2.2.2. Finally, we assume that h has a jump at
Without loss of generality let h(0 by assumption on f , there exists
that
Z
We consider points of the form
with lim
in case of Jackson kernels and fl := 1 in case of
B-spline kernels. Then we have for t 2
and consequently for sufficiently small " 1 and y, since sin is odd and monotone increasing on
(0; -=2) that
Further, by definition of the B-spline kernels
m;N (t) := N
sinc
and similarly as in (3.9) we see that
By assumption h does not change the sign in (0; " 1 ). Then we obtain by (3.8), monotonicity
of p s in (0; -) and m - s
Z
Z
y
where K 0
m;N g. Set
there exist
Z
wl
l=r\Gammak
wl
Z
and further by (3.5) and since lim
Z
Straightforward computation yields
const
Z' sin u
Hence we get for N large enough that
Z
const
and by (3.10) that
Z
const (3.11)
with positive constants const independent of N .
Now we consider x(N) 2 I N (f) with y k (N) - x(N) ! y k+1 (N ).
Z
Z
Z
Z
and since f is by assumption monotone increasing on [\Gamma"
Z
Z
Z
Z
Z
Z
and by (3.5) and since f is bounded
Z
Z
By assumption
R
const
R
and since f(y k (N) - const N \Gamma2s and m - s by (3.12), (3.11) that for N large
enough
Z
const
with a nonnegative constant const independent of N . Finally, we use (3.6) with
again to finish the proof.
To show our main result we also need the following lemma.
Lemma 3.2 Let A 2 C N;N be a Hermitian positive definite matrix having
in [a be a Hermitian matrix with
singular values in [b 1. Then at least N \Gamma 4n
of A B are contained in [\Gammaa
Proof. 1. Assume first that n i.e. A has only eigenvalues in [a
the j-th eigenvalue of the matrix B. We consider the eigenvalues of
to t 2 R. By Weyl's interlacing theorem (see [12, p. 184]) we obtain for t - 0 that
and for t ! 0 that
we obtain by (3.13) and (3.14) that - j
. On the other hand, we see by (3.13) and (3.14) that - j
. Thus, since - j
is a continuous function in t 2 R, there exists
This implies that t j
is an eigenvalue of AB. Consequently, every corresponds to an eigenvalue
of AB. (Eigenvalues are counted with multiplicities.)
The examination of the remaining cases follows the same lines.
In summary, N \Gamma n 2 eigenvalues of AB are contained in [\Gammaa
2. Let n 1 eigenvalues of A be outside [a since A is positive definite, the matrix
can be split as
A 1=2
where ~
A 1=2
is Hermitian with all eigenvalues in [a 1=2
of rank n 1 . The eigenvalues of AB coincide with the eigenvalues of A 1=2 BA 1=2 . Hence it
remains to show that at most 4n 1 singular values of A 1=2 BA 1=2 are not contained in
we have
A 1=2 BA
A 1=2
A 1=2 BA 1=2
~
A 1=2
A 1=2
By 1. all but n 2 singular values of ~
A 1=2
A 1=2
are contained in [a
and Weyl's interlacing theorem yield the assertion.
Theorem 3.3 Let be given by (1:2) - (1:4). Assume that for all t j (j 2
exists a neighborhood so that f is a monotone function in this
neighborhood and moreover f(t m;N be given by
(3.2) or (3.3), where
By ff; fi we denote the constants from Theorem 3:1.
Then, for any " ? 0 and sufficiently large N , except for O(log N) singular values, all singular
values of M N (jK N f
are contained in [ff \Gamma
Proof. Let BN (f) be defined by (2.11). Then we obtain by (2.12) that
The distribution of the eigenvalues of M N;f
2 is known by Lemma
2.2. It remains to examine the eigenvalues of the Hermitian positive definite matrix
These eigenvalues coincide with the reciprocal eigenvalues of M N;f (jf
By definition of M N;g and since KN is a reproducing kernel, except for O(1) eigenvalues, all
eigenvalues of M N;f (jf are given by j(K N f)(2-l=N)j=jf(2-l=N)j
(l 2 I N (f )). Thus, by Theorem 3.1, for N !1 only O(1) eigenvalues of
are not contained in [ff; fi]. Consequently, by (3.18), Lemma
2.2, Lemma 3.2 and Weyl's interlacing theorem at most O(log N) singular values of
2 are not contained in [ff \Gamma
4 Trigonometric preconditioners
In addition to the assumptions of Section 2, we suppose that the Toeplitz matrices AN 2 R N;N are symmetric,
i.e. the generating function f 2 L 2- is even. This suggests the application of so-called
trigonometric preconditioners. Note that in the symmetric case the multiplication of a vector
with AN can be realized using fast trigonometric transforms instead of fast Fourier transforms
(see [14]). In this way complex arithmetic can be completely avoided in the iterative solution
of (1.1). This is one of the reasons to look for preconditioners which can be diagonalized by
trigonometric matrices corresponding to fast trigonometric transforms instead of the Fourier
matrix F N .
In practice, four discrete sine transforms (DST I - IV) and four discrete cosine transforms
(DCT I - IV) were used (see [21]). Any of these eight trigonometric transforms can be realized
with O(N log N) arithmetical operations. Likewise, we can define preconditioners with respect
to any of these transforms.
In this paper, we restrict our attention to the so-called discrete cosine transform of type
II (DCT-II) and discrete sine transform of type II (DST-II), which are determined by the
following transform matrices:
where ffl N
1). We propose the preconditioners
diag (jf(~x N;l
diag (jf(~x N;l )j) N
where
~ xN;l :=
l-
l-
~ l-
and where ~ l 2 1g is the next higher index to l such that jf(~x N;l )j ? 0. See [15].
Then we can prove in a completely similar way as in Section 2 that for any " ? 0 and
sufficiently large N except for O(log N) singular values, all singular values of
are contained in [1 \Gamma
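Applying any of these preconditioners reduces to two fast trigonometric transforms and a diagonal scaling. As an illustration (added here, not part of the original text), the following Python sketch applies the inverse of the DCT-II based preconditioner; the orthonormal scaling of C_II, the grid points x~_{N,l} = l*pi/N and the positivity of |f| on that grid are assumptions of the sketch.

```python
import numpy as np
from scipy.fft import dct, idct

def dct2_preconditioner_solve(f, N):
    """Return v -> M^{-1} v for the assumed preconditioner
    M = C_II^T diag(|f(x_l)|) C_II with x_l = l*pi/N (a sketch)."""
    x = np.arange(N) * np.pi / N          # assumed grid points
    d = np.abs(f(x))                      # assumed to be positive on the grid
    def solve(v):
        # C_II v by an orthonormal DCT-II, diagonal scaling, back-transform by C_II^T
        return idct(dct(v, type=2, norm='ortho') / d, type=2, norm='ortho')
    return solve
```

A DST-II based variant is obtained in the same way with the corresponding sine transforms.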
5 Convergence of preconditioned MINRES
In order to describe the convergence behavior of preconditioned MINRES with our preconditioners
of the previous sections, we have to estimate the smaller outliers for increasing
N .
Lemma 5.1 Let f 2 L 2- be defined by (1:2)-(1:4). Assume that - 2 (AN
(ff ? 0). Then the smallest absolute values of the eigenvalues of M N;f (jf
behave for N !1 as O(N \Gammaff ).
Proof. Since
and both kM N;f (jf j)k 2 and kM N;KN \Lambdaf (jK N f j)k 2 are bounded from above, it remains to
show that there exists a constant c ? 0 independent of N so that
kAN
The above inequality follows immediately from the fact that the singular values of AN (f) are
distributed as jf j (see [13, 19]).
We want to combine our knowledge of the distribution of the eigenvalues of our preconditioned
matrices with results concerning the convergence of MINRES.
Theorem 5.2 Let A 2 C N;N be a Hermitian matrix with p and q isolated large and small
singular values, respectively:
Let for the
solution of
iteration steps to achieve precision - , i.e. jjr (k) jj 2
the k-th iterate.
The theorem can be proved by using the same technique as in [1, pp. 569 - 573]. Namely,
based on the known estimate
jjr
k denotes the space of polynomials of degree - k with p k are the
eigenvalues of A, we choose p k as product of the linear polynomials passing through the
outliers and the modified Chebyshev polynomials
The above summand p ln 2 can be further reduced if we use polynomials of higher degree for
the larger outliers.
Note that a similar estimate can be given for the CG method applied to the normal equation
A b. Here we need
iteration steps to achieve precision
jje (0) jj A
Note that the latter
method requires two matrix-vector multiplications in each iteration step.
By Theorem 2.4, Theorem 3.3 and Lemma 5.1 our preconditioned MINRES with both preconditioners
produces a solution of (1.1) of prescribed
precision in O(log N) iteration steps and with O(N log 2 N) arithmetical operations. The
same holds for preconditioned CG applied to the normal equation.
6 Numerical results
In this section, we test our circulant and trigonometric preconditioners in connection with
different iterative methods on a SGI O2 work station. As transform length we use
right-hand side b of (1.1) the vector consisting of N entries "1" and as start vector the zero
vector.
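The structure of these experiments can be illustrated with the short Python sketch below (added here for illustration only): the generating function is a stand-in indefinite trigonometric polynomial rather than the symbols of the examples, the circulant preconditioner M_N(|f|, F_N) is assumed to act as F_N^H diag(|f(2*pi*l/N)|) F_N, and a small floor guards against zeros of |f| on the grid.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, minres

N = 512
f = lambda t: 1.0 - 2.0 * np.cos(t)          # stand-in even, indefinite symbol

# A_N(f): symmetric Toeplitz matrix with Fourier coefficients a_0 = 1, a_{+-1} = -1
col = np.zeros(N); col[0], col[1] = 1.0, -1.0
A = toeplitz(col)

# circulant preconditioner built from |f| sampled at 2*pi*l/N (assumed form)
d = np.maximum(np.abs(f(2.0 * np.pi * np.arange(N) / N)), 1e-12)
Minv = LinearOperator((N, N), matvec=lambda v: np.real(np.fft.ifft(np.fft.fft(v) / d)),
                      dtype=float)

b = np.ones(N)                               # right-hand side of all ones, zero start vector
niter = [0]
def count(xk): niter[0] += 1
x, info = minres(A, b, M=Minv, callback=count)
print("flag", info, "iterations", niter[0], "residual", np.linalg.norm(A @ x - b))
```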
We begin with a comparison of MINRES applied to
N g and CGNE (Craig's method) (cf. [17, p. 239]) applied to
For both algorithms we have used MATLAB implementations of B. Fischer. See also [11]. In
particular, his implementation of preconditioned MINRES avoids the splitting (6.2).
In order to make the following computations with MINRES and CGNE comparable, we have
stopped both computations if
Example 1. We begin with Hermitian Toeplitz matrices AN (f) arising from the generating
function
Table 1 presents the number of iterations for circulant preconditioners. The first row of the
table contains the exponent n of the transform length N = 2^n. According to Theorem 2.4
and Theorem 5.2, the preconditioners M N (jf j; F N ) lead to very good results. As expected,
the preconditioners M N;KN \Lambdaf (jK N f j; F N ) with the Fej'er kernels are not suitable
for (1.1) (cf. also [16]), while the preconditioners built from the Jackson and B-spline kernels do their job.
Further, CGNE needs half the number of iterations but twice the number of matrix-vector
multiplications per iteration compared with MINRES. See also Section 5.
method
MINRES I N 23 71 277 *
43
CGNE I N 11 37 164 *
Table
1:
Example 2. Next, we consider the symmetric Toeplitz matrices AN (f) arising from the
generating function
with
Table 2 presents the number of iterations for trigonometric preconditioners. The results
are similar to those of Example 1, except that CGNE requires nearly the same number of
iterations as MINRES.
method
MINRES I N 9 17
MINRES M N;f (jf j; C II
MINRES M N;f (jf j; S II
MINRES M N;FN \Lambdaf (jF N f j; C II
CGNE I
Table
2: f 2
Acknowledgment. The authors wish to thank B. Fischer for the MATLAB implementations
of PMINRES and CGNE.
--R
Iterative Solution Methods.
preconditioning for Toeplitz matrices.
Toeplitz preconditioners for Toeplitz systems with nonnegative generating functions.
Conjugate gradient methods of Toeplitz systems.
Preconditioners for non-Hermitian Toeplitz systems
Circulant preconditioners from B-splines
The best circulant preconditioners for Hermitian Toeplitz matrices.
Constructive Approximation.
Polynomial Based Iteration Methods for Symmetric Linear Systems.
Matrix Analysis.
On the distribution of singular values of Toeplitz matrices.
Optimal trigonometric preconditioners for nonsymmetric Toeplitz systems.
Preconditioners for ill-conditioned Toeplitz matrices
Preconditioners for ill-conditioned Toeplitz matrices constructed from positive kernels
Iterative Methods for Sparse Linear Systems.
Preconditioning strategies for Hermitian Toeplitz systems with nondefinite generating functions.
A unifying approach to some old and new theorems on distribution and clustering.
Fast algorithms for the discrete W transform and for the discrete Fourier transform.
Circulant preconditioners for Toeplitz matrices with piecewise continuous generating functions.
--TR | circulant matrices;preconditioners;nondefinite Toeplitz matrices;krylov space methods |
587883 | Methods for Large Scale Total Least Squares Problems. | The solution of the total least squares (TLS) problems, $\min_{E,f}\|(E,f)\|_F$ subject to (A+E)x=b+f, can in the generic case be obtained from the right singular vector corresponding to the smallest singular value $\sigma_{n+1}$ of (A, b). When A is large and sparse (or structured) a method based on Rayleigh quotient iteration (RQI) has been suggested by Bjrck. In this method the problem is reduced to the solution of a sequence of symmetric, positive definite linear systems of the form $(A^TA-\bar\sigma^2I)z=g$, where $\bar\sigma$ is an approximation to $\sigma_{n+1}$. These linear systems are then solved by a {\em preconditioned} conjugate gradient method (PCGTLS). For TLS problems where A is large and sparse a (possibly incomplete) Cholesky factor of ATA can usually be computed, and this provides a very efficient preconditioner. The resulting method can be used to solve a much wider range of problems than it is possible to solve by using Lanczos-type algorithms directly for the singular value problem. In this paper the RQI-PCGTLS method is further developed, and the choice of initial approximation and termination criteria are discussed. Numerical results confirm that the given algorithm achieves rapid convergence and good accuracy.} | Introduction
.
The estimation of parameters in linear models is a fundamental problem in
many scientific and engineering applications. A statistical model that is often
realistic is to assume that the parameters x to be determined satisfy a linear
relation
b + f = (A + E) x,   (1.1)
where A ∈ R^{m×n} and b ∈ R^m are known and (E, f) is an error matrix with
rows which are independently and identically distributed with zero mean and
the same variance. (To satisfy this assumption the data (A; b) may need to be
premultiplied by appropriate scaling matrices, see Golub and Van Loan [10].)
In statistics this model is known as the "errors-in-variables model".
The estimate of the true but unknown parameter vector x in the model (1.1)
is obtained from the solution of the total least squares (TLS) problem
min_{E,f} ||(E, f)||_F   subject to   (A + E) x = b + f,   (1.2)
Department of Mathematics, University of Link-oping, S-581 83 Link-oping, Sweden. e-mail:
akbjo@math.liu.se, pontus.matstoms@vti.se. The work of these authors was supported by the
Swedish Research Council for Engineering Sciences, TFR.
y Department of Informatics, University of Bergen, N-5020 Bergen, Norway, email:
pinar@ii.uib.no
where ||·||_F denotes the Frobenius matrix norm. If a minimizing pair (E, f)
has been found for the problem (1.2), then any x satisfying (A + E)x = b + f is
said to solve the TLS problem.
Due to recent advances in data collection techniques LS or TLS problems
where A is large and sparse (or structured) frequently arise, e.g., in signal and
image processing applications. For the solution of the LS problem both direct
methods based on sparse matrix factorizations and iterative methods are well
developed, see [2].
An excellent treatment of theoretical and computational aspects of the TLS
problem is given in Van Huffel and Vandewalle [25]. Solving the TLS problem
requires the computation of the smallest singular value and the corresponding
right singular vector of (A; b). When A is large and sparse this is a much more
difficult problem than that of computing the LS solution. For example, it is
usually not feasible to compute the SVD or any other two-sided orthogonal
factorization of A since the factors typically are not sparse.
Iterative algorithms for computing the singular subspace of a matrix associated
with its smallest singular values, with applications to TLS problems with slowly
varying data, have previously been studied by Van Huffel [24]. In [27, 3] a new
class of methods based on a Rayleigh quotient iteration was developed for the
efficient solution of large scale TLS problems. Related methods for Toeplitz
systems were studied by Kamm and Nagy [14]. In this paper the methods in [3]
are further developed and numerical results given. Similar algorithms for solving
large scale multidimensional TLS problems will be considered in a forthcoming
paper [4].
In Section 2 we recall how the solution to the TLS problem can be expressed
in terms of the smallest singular value and corresponding right singular vector
of the compound matrix (A; b). We discuss the conditioning of the LS and TLS
problems and illustrate how the TLS problem can rapidly become intractable.
Section 3 first reviews a Newton iteration for solving a secular equation. For
this method to converge to the TLS solution strict conditions on the initial approximation
have to be satisfied. We then derive the Rayleigh quotient method,
which ultimately achieves cubic convergence. The choice of initial estimates and
termination criteria are discussed. A preconditioned conjugate gradient method
is developed in Section 4 for the efficient solution of the resulting sequence of
sparse symmetric linear systems. Finally, in Section 5, numerical results are
given which confirm the rapid convergence and numerical stability of this class
of methods.
Preliminaries.
2.1 The TLS problem.
The TLS problem (1.2) is equivalent to finding a perturbation matrix (E; f)
having minimal Frobenius norm, which lowers the rank of the matrix (A; b).
Hence it can be analyzed in terms of the singular value decomposition
where σ_1 ≥ σ_2 ≥ ... ≥ σ_{n+1} are the singular values of (A, b). Note that
by the minmax characterization of singular values it follows that the singular
values σ'_i of A interlace those of (A, b), i.e.,
σ_1 ≥ σ'_1 ≥ σ_2 ≥ σ'_2 ≥ ... ≥ σ'_n ≥ σ_{n+1}.
We assume in the following that A has full rank, that is, σ'_n > 0. Then the minimum is attained for the rank one perturbation
(E, f) = −σ_{n+1} u_{n+1} v_{n+1}^T,
for which ||(E, f)||_F = σ_{n+1}. The TLS solution is then obtained from the right
singular vector v_{n+1} corresponding to σ_{n+1}: writing
v_{n+1} = (z^T, η)^T,   x_TLS = −z/η,   (2.2)
provided that η ≠ 0. If η = 0 the TLS problem is called nongeneric, and there
is no solution. This case cannot occur if σ'_n > σ_{n+1}, and in the following we
always assume that this condition holds.
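For problems of moderate size this characterization can be used directly; the following dense Python sketch (an illustration added here, in contrast to the large sparse setting addressed by the paper) computes x_TLS from the SVD of (A, b) and checks it against the "normal equations" (2.4) below.

```python
import numpy as np

def tls_by_svd(A, b):
    """Generic-case TLS solution from the right singular vector of (A, b)."""
    m, n = A.shape
    U, s, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                                # right singular vector for sigma_{n+1}
    if abs(v[-1]) < np.finfo(float).eps:
        raise ValueError("nongeneric TLS problem")
    return -v[:n] / v[-1], s[-1]              # x_TLS and sigma_{n+1}

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5)); b = rng.standard_normal(30)
x, sig = tls_by_svd(A, b)
print(np.linalg.norm((A.T @ A - sig**2 * np.eye(5)) @ x - A.T @ b))   # should be tiny
```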
From the characterization (2.2) it follows that σ²_{n+1} and (x_TLS^T, −1)^T satisfy
the system of nonlinear equations
[A^T A, A^T b; b^T A, b^T b] (x^T, −1)^T = σ²_{n+1} (x^T, −1)^T.   (2.3)
The first block row of this system of equations can be written
(A^T A − σ²_{n+1} I) x = A^T b,   (2.4)
which can be viewed as "the normal equations" for the TLS problem. Note that
from our assumption that σ'_n > σ_{n+1} it follows that A^T A − σ²_{n+1} I is positive
definite.
2.2 Conditioning of the TLS problem.
For the evaluation of accuracy and stability of the algorithms to be presented
we need to know the sensitivity of the TLS problem to perturbations in data.
We first recall that if x_LS ≠ 0 the condition number for the LS problem is (see
[2, Sec. 1.4])
κ_LS = κ(A) ( 1 + ||r_LS||_2 / (σ'_n ||x_LS||_2) ),   κ(A) = σ'_1/σ'_n,   r_LS = b − A x_LS.   (2.5)
Note that the condition number depends on both A and
b, and that for large residual problems the second term may dominate.
Figure 2.1: Condition numbers κ_LS and κ_TLS as functions of |β|.
Equation (2.4) shows that the TLS problem is always worse conditioned than
the LS problem. From (2.3), multiplying from the left with (x_TLS^T, −1), we obtain
1 + ||x_TLS||²_2 = ||A x_TLS − b||²_2 / σ²_{n+1} ≥ ||r_LS||²_2 / σ'²_n.   (2.6)
This inequality is weak, but it shows that ||x_TLS||_2 will be large when ||r_LS||_2 ≫ σ'_n.
Golub and Van Loan [10] showed that an approximate condition number for
the TLS problem is
the TLS condition number can be much greater than
κ(A). The relation between the two condition numbers (2.5) and (2.7) depends
on the relation between ||r_LS||_2 and σ_{n+1}, which is quite intricate. (For a
study of this relation in another context see Paige and Strako-s [17].)
As an illustration we consider the following small overdetermined system
Trivially, the LS solution is
If we take in (2.8) oe independent
of fi, and hence does not reflect the illconditioning of A. The TLS solution is of
similar size as the LS solution as long as jfij - oe 0 2 . However, when jfij AE oe 0 2
then from (2.6) it follows that kx TLS k 2 is large.
In Fig. 2.1 the two condition numbers are plotted as a function of jfij. We note
that - LS increases proportionally to jfij because of the second term in (2.5). For
the condition number - TLS grows proportionally to jfij 2 . It can be
verified that kx TLS k 2 also grows proportionally to jfij 2 .
3 Newton and Rayleigh Quotient methods.
3.1 A Newton method.
Equation (2.3) constitutes a system of (n equations in x and
-. One way to proceed (see [14]) is to eliminate x to obtain the rational secular
equation for
method applied to (3.1) leads to the
iteration
This iteration will converge monotonically at a rate that is asymptotically quad-
ratic. The convergence of this method can be improved by using a rational
interpolation similar to that in [6] to solve the secular equation. However, in
any case, - will converge to oe 2
n+1 and x (k) to the TLS solution only if the initial
approximation satisfies
In general it is hard to verify this assumption. For the special case of a Toeplitz
TLS problem Kamm and Nagy [14] use a bisection algorithm based on a fast
algorithm for factorizing Toeplitz matrices to find an initial starting value satisfying
(3.4).
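One standard form of the secular equation, obtained by eliminating x from (2.3), is g(λ) = b^T b − λ − b^T A (A^T A − λI)^{-1} A^T b, for which g'(λ) = −(1 + ||x(λ)||²_2) with x(λ) = (A^T A − λI)^{-1} A^T b; whether this is exactly the form (3.1)-(3.3) used above is an assumption of the following dense Python sketch, which is added here only as an illustration.

```python
import numpy as np

def newton_secular_tls(A, b, lam0, maxit=50, tol=1e-14):
    """Newton's method on g(lam) = b^T b - lam - b^T A (A^T A - lam I)^{-1} A^T b.
    lam0 should satisfy sigma_{n+1}^2 <= lam0 < (sigma'_n)^2, cf. (3.4)."""
    AtA, Atb, btb = A.T @ A, A.T @ b, b @ b
    lam = lam0
    for _ in range(maxit):
        x = np.linalg.solve(AtA - lam * np.eye(A.shape[1]), Atb)
        g = btb - lam - Atb @ x
        step = g / (1.0 + x @ x)              # equals -g(lam)/g'(lam)
        lam += step
        if abs(step) <= tol * max(abs(lam), 1.0):
            break
    return lam, x                             # approximates sigma_{n+1}^2 and x_TLS
```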
3.2 The Rayleigh quotient method.
The main drawback of the Newton method above is that unless (3.4) is satisfied
it will converge to the wrong singular value. A different Newton method is
obtained by applying Newton's method to the full system
As remarked in [20] this is closely related to inverse itera-
tion, which is one of the most widely used methods for refining eigenvalues and
eigenvectors. Rayleigh quotient iteration (RQI) is inverse iteration with a shift
equal to the Rayleigh quotient. RQI has cubic convergence for the symmetric
eigenvalue problem, see [18, Sec.4-7], and is superior to the standard Newton
method applied to (3.5).
For the eigenvalue problem (2.3) the Rayleigh quotient equals
ρ(x) = ||A x − b||²_2 / (1 + ||x||²_2).
Let x (k) be the current approximation and ae k the corresponding Rayleigh
quotient. Then the next approximation x (k+1) in RQI and the scaling factor fi k
are obtained from the symmetric linear system
where
If J (k) is positive definite the solution can be obtained by block Gaussian elimi-
nation, '
\Gamma(z
where
It follows that x
In [2] a reformulation was made to express the solution in terms of the residual
vectors of (3.5) '
where r This uses the following formulas to compute
The RQI iteration is defined by equations (3.10)-(3.13).
3.3 Initial estimate and global convergence.
Parlett and Kahan [19] have shown that for almost all initial vectors the
Rayleigh quotient iteration converges to some singular value and vector pair.
However, in general we cannot say to which singular vector RQI will converge.
If the LS solution is known, a suitable starting approximation for - may be
Conditions to ensure that RQI will converge to the TLS solution from the
starting approximation (ae(x LS ); x LS ) are in general difficult to verify and often
not satisfied in practice. However, in contrast to the simple Newton iteration
in Section 3.1, the method may converge to the TLS solution even when
The Rayleigh quotient ae(x LS ) will be a large overestimate of oe 2
n+1 when the
residual norm kr LS k 2 is large and kx LS k 2 does not reflect the illconditioning of
A. Note that it is typical for illconditioned least squares problems that the right-hand
side is such that kx LS k 2 is not large! For example, least squares problems
arising from ill-posed problems usually satisfy a so called Picard condition, which
guarantees that the right-hand side has this property, see [11, Sec. 1.2.3].
Szyld [23] suggested that one or more steps of inverse iteration could be applied
initially before switching to RQI, in order to ensure convergence to the smallest
eigenvalue. Inverse iteration for oe 2
n+1 corresponds to taking oe in the RQI
algorithm. Starting from x = x LS the first step of inverse iteration simplifies as
follows. Using (3.9) and (3.10) with ae
z
and the new approximation becomes
Several steps of inverse iteration may be needed to ensure convergence of RQI to
the smallest singular value. However, since inverse iteration only converges lin-
early, taking more than one step will usually just hold up the rapid convergence
of RQI. We therefore recommend in general steps as the default value.
To illustrate the situation consider again the small 3 \Theta 2 system (2.8) with
. This has the LS solution x
does not reflect the illconditioning of A the initial
Rayleigh quotient approximation equals
By the interlacing property we have that oe 3 - oe 0 2 . Since jfij AE oe 0 2 it is clear that
the Rayleigh quotient fails to approximate oe 2
3 . This is illustrated in Figure 3.1,
where ae(x LS ) 1=2 and oe 3 are plotted as function of jfij. It is easily verified,
however, that after one step of inverse iteration ae(x INV ) will be close to oe 0 2
Figure 3.1: Rayleigh quotient approximation ρ(x_LS)^{1/2} and σ_3 as functions of |β|.
3.4 Termination criteria for RQI.
The RQI algorithm for the TLS problem is defined by (3.10)-(3.13). When
should the RQI iteration be terminated? We suggest two different criteria.
The first is based on the key fact in the proof of global convergence that the
normalized residual norm
always decreases, γ_{k+1} ≤ γ_k, for all k. Thus, if an increase in the norm occurs
this must be caused by roundoff, and then it makes no sense to continue the
iterations. This suggests that we terminate the iterations with x k+1 when
A second criterion is based on the observation that since the condition number
for computing oe n+1 equals 1, we can expect to obtain oe n+1 to full machine
precision. Since convergence of RQI is cubic a criterion could be to stop when
the change in the approximation to oe n+1 is of the order of oe 1 u 1=p , where
similar criterion with used by Kamm and Nagy [14] for terminating
the Newton iteration.) However, as will be evident from the numerical results
in Section 5, full accuracy in x TLS in general requires one more iteration after
oe n+1 has converged. Therefore we recommend to stop when either (3.16) or
is satisfied, where u is the machine unit and C is a suitable constant.
We summarize below the RQI algorithm with one step of inverse iteration (cf.
Algorithm 3.1. Rayleigh Quotient Iteration.
solve A T
solve
solve
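The detailed residual-based organization of Algorithm 3.1 is given by (3.10)-(3.13); as an added illustration, the following dense Python sketch realizes the same iteration directly on the augmented matrix of (2.3), with an LS start and a prescribed number of unshifted inverse iteration steps before the Rayleigh quotient shifts are switched on. It is only an illustration and does not reproduce the sparse organization of the algorithm.

```python
import numpy as np

def rqi_tls(A, b, inverse_steps=1, maxit=10, tol=1e-14):
    """Dense sketch of RQI for the TLS eigenproblem (2.3)."""
    m, n = A.shape
    C = np.block([[A.T @ A, (A.T @ b)[:, None]],
                  [(b @ A)[None, :], np.array([[b @ b]])]])
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # x_LS as starting vector
    v = np.append(x, -1.0)
    rho_old = np.inf
    for k in range(inverse_steps + maxit):
        rho = (v @ C @ v) / (v @ v)                   # Rayleigh quotient ||Ax-b||^2/(1+||x||^2)
        shift = 0.0 if k < inverse_steps else rho     # inverse iteration first, then RQI
        w = np.linalg.solve(C - shift * np.eye(n + 1), v)
        v = w / (-w[-1])                              # rescale so the last component is -1
        if abs(rho - rho_old) <= tol * max(rho, 1.0):
            break
        rho_old = rho
    return v[:n], np.sqrt(max(rho, 0.0))              # x_TLS and approximate sigma_{n+1}
```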
3.5 Rounding errors and stability.
If the RQI iteration converges then f (k) , g (k) , and fi k will tend to zero. Consider
the rounding errors which occur in the evaluation of the residuals (3.11). Let
~
where u is the unit roundoff; see [13, Chap. 3]. Then the computed
residual vector satisfies -
Obviously convergence will cease when the residuals (3.11) are dominated by
roundoff. Assume that we perform one iteration from the exact solution, x TLS ,
r TLS , and
n+1 . Then the first correction to the current approximation is
obtained by solving the linear system in (3.13), which now becomes
For the correction this gives the estimate
This estimate is consistent with the condition estimate for the TLS problem.
We note that the equations (3.18) are of similar form to those that appear in
the corrected semi-normal equations for the LS problem; see [1], [2, Sec. 6.6.5].
A detailed roundoff error analysis similar to that done for the LS problem would
become very complex and is not attempted here. It seems reasonable to conjecture
that if oe 0 2
will suffice to solve the linear equations for
the correction w (k) using the Cholesky factorization of
I). Methods
for the solution of the linear systems are considered in more detail in Section 4.
4 Solving the linear systems.
In the RQI method formulated in the previous section the main work consists
of solving in each step two linear systems of the form
(A^T A − σ² I) z = g.   (4.1)
Here σ is an approximation to σ_{n+1} and varies from step to step. Provided that σ < σ'_n,
the system (4.1) is symmetric and positive definite.
4.1 Direct linear solvers.
then the system (4.1) can be solved by computing the (sparse)
Cholesky factorization of the matrix A T A \Gamma oe 2 I. Note that A T A only has to
be formed once and the symbolic phase of the factorization does not have to be
repeated. However, it is a big disadvantage that a new numerical factorization
has to be computed at each step of the RQI algorithm.
For greater accuracy and stability in solving LS problems it is often preferred
to use a QR factorization instead of a Cholesky factorization. However, since
in the TLS normal equations the term oe 2 I is subtracted from A T A, this is not
straightforward. The Cholesky factor of the matrix A T A \Gamma oe 2 I can be obtained
from the QR factorization of the matrix
A
ioeI
, where i is the imaginary unit.
This is a downdating problem for the QR factorization and can be performed
using stabilized hyperbolic rotations, see [2, pp. 143-144], or hyperbolic Householder
transformations, see [22]. However, in the sparse case this is not an
attractive alternative, since it would require nontrivial modifications of existing
software for sparse QR factorization.
4.2 Iterated deregularization.
To solve the TLS normal equations using only a single factorization of A T A
we can adapt an iterated regularization scheme due to Riley and analyzed by
Golub [9]. In this scheme, we solve the TLS normal equations by the iteration
A T Affi
If lim k!1 x b. This iteration will converge with
linear rate equal to ae
n provided that ae ! 1. This iteration may be
implemented very efficiently if the QR decomposition of A is available. We do
not pursue this method further, since it has no advantage over the preconditioned
conjugate gradient method developed in [3].
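A natural reading of this scheme, consistent with the stated linear rate ρ = (σ_{n+1}/σ'_n)², is to reuse one factorization of A^T A and iterate A^T A x_{k+1} = A^T b + σ² x_k; the following Python sketch (an illustration under that assumption, not the paper's formulation) makes this explicit.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def iterated_deregularization(A, b, sigma, maxit=100, tol=1e-12):
    """Fixed-point iteration A^T A x_{k+1} = A^T b + sigma^2 x_k for
    (A^T A - sigma^2 I) x = A^T b; converges linearly when sigma < sigma'_n."""
    AtA, Atb = A.T @ A, A.T @ b
    c = cho_factor(AtA)                       # one factorization, reused in every step
    x = np.zeros(A.shape[1])
    for _ in range(maxit):
        x_new = cho_solve(c, Atb + sigma**2 * x)
        if np.linalg.norm(x_new - x) <= tol * (1.0 + np.linalg.norm(x_new)):
            return x_new
        x = x_new
    return x
```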
4.3 A preconditioned conjugate gradient algorithm.
Performing the change of variables ŵ = S w, where S is a given nonsingular
matrix, and multiplying from the left with S^{−T}, the system (4.1) becomes
S^{−T}(A^T A − σ² I) S^{−1} ŵ = S^{−T} g.   (4.2)
This system is symmetric positive definite provided that σ < σ'_n, and hence
the conjugate gradient method can be applied. We can use for S the same
preconditioners as have been developed for the LS problem; for a survey see [2,
Ch. 7].
In the following we consider a special choice of preconditioner, the complete
Cholesky factor R of A T A (or R from a QR decomposition of A). Unless A is
huge this is often a feasible choice, since efficient software for sparse Cholesky
and sparse QR factorization are readily available [2, Ch. 7]. Using R^{−T} A^T A R^{−1} = I, the preconditioned system (4.2) with S = R simplifies to
(I − σ² R^{−T} R^{−1}) ŵ = R^{−T} g.   (4.3)
(Note that although A and A T have disappeared from this system of equations
matrix-vector multiplications with these matrices are used to compute the right-hand
side!) In the inverse iteration step used in the initialization, where σ = 0,
the solution is obtained by two triangular solves.
The standard conjugate gradient method applied to the system (4.2) can be
formulated in terms of the original variables w. The resulting algorithm is a
slightly simplified version of the algorithm PCGTLS given in [3] and can be
Algorithm 4.1. PCGTLS
Preconditioned conjugate gradient method for solving (4.1), using the
Cholesky factor R of A^T A as preconditioner.
Initialize: w ks
.
For
ks (j+1) k 2fi
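A Python sketch of PCGTLS in the original variables w is given below (added as an illustration; variable names and the stopping test are not those of Algorithm 4.1). The preconditioner solve M^{-1} r = R^{-1} R^{-T} r costs two triangular solves, and the test on p^T q follows the breakdown discussion below.

```python
import numpy as np
from scipy.linalg import solve_triangular

def pcgtls(A, R, g, sigma, maxit=200, tol=1e-12):
    """PCG for (A^T A - sigma^2 I) w = g with preconditioner M = R^T R = A^T A,
    R being the upper triangular Cholesky factor of A^T A (a sketch)."""
    matvec = lambda v: A.T @ (A @ v) - sigma**2 * v
    msolve = lambda r: solve_triangular(R, solve_triangular(R, r, trans='T'))
    w = np.zeros_like(g)
    r = g - matvec(w)
    z = msolve(r); p = z.copy(); rz = r @ z
    for _ in range(maxit):
        q = matvec(p); pq = p @ q
        if pq <= 0.0:                          # sigma >= sigma'_n: matrix not positive definite
            raise RuntimeError("breakdown: repeat the RQI step with a smaller sigma")
        alpha = rz / pq
        w += alpha * p
        r -= alpha * q
        if np.linalg.norm(r) <= tol * np.linalg.norm(g):
            break
        z = msolve(r); rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return w
```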
Denote the original and the preconditioned matrix by I and
e
respectively. Then a simple calculation shows that for
the condition number of the transformed system is reduced by a
factor of -(A),
!/
The spectrum of ~
C will be clustered close to 1. In particular in the limit when
the eigenvalues of e
C will lie in the interval
(Note the relation to the condition number - TLS !) Hence, unless oe 0
can expect this choice of preconditioner to work very well for solving the shifted
system (4.1).
The matrix A^T A − σ² I is positive definite if σ < σ'_n. In this case the denominators
in PCGTLS are positive, and the division in computing α_k can always be carried out. If
σ ≥ σ'_n then the system (4.2) is not positive definite and a division by zero
can occur. This can be avoided by including a test to ensure that
equivalently kp the CG iterations are considered to
have failed. The RQI step is then repeated with a new smaller value of oe 2
e.g.,
The accuracy of TLS solutions computed by Rayleigh Quotient Iteration will
basically depend on the accuracy of the computed residuals and the stability of the method used
to solve the linear systems (4.1). We note that the cg method CGLS1 for the
LS problem, which is related to PCGTLS, has been shown to have very good
numerical stability properties, see [5].
4.4 Termination criteria in PCGTLS.
The RQI iteration, using PCGTLS as an inner iteration for solving the linear
systems, is an inexact Newton method for solving a system of nonlinear equa-
tions. Such methods have been studied by Dembo, Eisenstat, and Steihaug [7],
who consider the problem of how to terminate the iterative solver so that the
rate of convergence of the outer Newton method is preserved.
Consider the iteration
where r k is the residual error. In [7] it is shown that maintaining a convergence
order of 1 requires that when k !1, the residuals satisfy inequalities
is a forcing sequence.
In practice the above asymptotic result turns out to be of little practical use
in our context. Once the asymptotic cubic convergence is realized, the ultimate
accuracy possible in double precision already has been achieved. A more prac-
tical, ad hoc termination criterion for the PCGTLS iterations will be described
together with the numerical results reported below.
Remark. In the second linear system to be solved in RQI,
the right-hand side converges to x TLS . Hence it is tempting to use the value of
u obtained from the last RQI to initialize PCGTLS in the next step. However,
our experience is that this slows down the convergence compared to initializing
u to zero.
5 Numerical results.
5.1 Accuracy and termination criteria.
Numerical tests were performed in Matlab on a SUN SPARC station 10 using
double precision with unit roundoff . For the initial testing we used
contrived test problems [A; similar to those in [5] and generated
in the following way. 1 Let
e
where Y; Z are random orthogonal matrices and
Further, let
Ax:
This ensures that the norm of the solution does not reflect the illconditioning of
A. We then add random perturbations (E, f) to A and b.
Note that, since σ'_n is small, there is a perturbation E to A with ||E||_2 = σ'_n
which makes A rank deficient. Therefore it is not realistic to consider perturbations
of that order or larger.
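A generator in this spirit can be sketched as follows (Python, added for illustration; the decay of the singular values, the choice x = (1, ..., 1)^T and the scaling of the noise are stand-ins, since the precise choices made for the test problems are not repeated here).

```python
import numpy as np

def tls_test_problem(m, n, cond=1e4, noise=1e-6, seed=0):
    """Contrived TLS test problem: A = Y diag(d) Z^T, b = A x, plus a random
    perturbation of Frobenius norm `noise` (a sketch with assumed parameters)."""
    rng = np.random.default_rng(seed)
    Y, _ = np.linalg.qr(rng.standard_normal((m, n)))   # orthonormal columns
    Z, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthogonal matrix
    d = np.logspace(0, -np.log10(cond), n)             # sigma'_1 = 1, ..., sigma'_n = 1/cond
    A = Y @ np.diag(d) @ Z.T
    x = np.ones(n)                                     # ||x|| does not reflect cond(A)
    b = A @ x
    E = rng.standard_normal((m, n)); e = rng.standard_normal(m)
    s = noise / np.linalg.norm(np.column_stack([E, e]))
    return A + s * E, b + s * e, x

A, b, xtrue = tls_test_problem(30, 15)
```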
Figure 5.1: Errors for problem P(30,15); linear systems solved by PCGTLS.
Figure 5.2: Errors for problem P(30,15) at different error levels; linear systems solved by PCGTLS.
To test the termination criteria for the inner iterations
we used problem P (30; 15), oe 0
The linear systems arising in RQI were solved using PCGTLS with the Cholesky
factor of A T A as preconditioning. The criterion (4.6) shows that the linear systems
should be solved more and more accurately as the RQI method converges.
The rate of convergence depends on the ratio oe n+1 =oe 0 n , see (4.4), and is usually
very rapid. We have used a very simple strategy where in the kth step of RQI
a prescribed number of PCGTLS iterations is performed, depending on k and on a parameter to be
chosen. (These test problems are neither large nor sparse.)
In Figure 5.1 we show results for several values of this parameter. The plots for
the more accurate inner solves are almost indistinguishable, whereas solving the inner systems too crudely delays convergence.
Indeed, for this problem taking iterations in PCGTLS suffices to give
the same result as using an exact (direct) solver. Since no refactorizations are
performed the object should be to minimize the total number of PCGTLS iter-
ations. Based on these considerations and the test results we recommend taking
should work well for problems where the ratio oe n+1 =oe 0 n
is smaller.
Rarely more than 2-3 RQI iterations will be needed. In Figure 5.2 we show
results for problem PS(30,15), and different error levels
Here respectively, were needed to achieve an accuracy
of about 10 \Gamma11 in x TLS . Since oe 0 this is equal to the
best limiting accuracy that can be expected. Note also that the error in oe n+1
converges to machine precision, usually in one less iteration, which supports the
use of the criterion (3.17) to terminate RQI.
5.2 Improvement from inverse iteration.
We now show the improvement resulting from including an initial step of
inverse iteration. In Figure 5.3 we show results for the problem considered
above. For the first two error levels only one RQI iteration now suffices. For the
highest error level oe n+1 converges in two iterations and x TLS in three.
Figure 5.3: Errors with one step of inverse iteration; linear systems solved by PCGTLS with k + 1 iterations.
We now consider the second test problem in [14], which is defined
where A 2 R n\Thetan\Gamma1 . Here e is a vector with entries generated randomly from a
normal distribution with mean 0.0 and variance 1.0, and scaled so that ||e||
. For 0:01 the condition
numbers in (2.5)-(2.7) are
respectively. This problem has features similar to those of the small illcondi-
tioned example discussed previously in Section 2.2, although here the norm of
the solution x LS is large.
Figure 5.4: Second test problem with 0.001; RQI without/with one step of inverse iteration.
Applying the RQI algorithm we obtained the results shown in Figure 5.4. The
initial approximation ae(x LS ) is here far outside the interval [oe n+1 ; oe 0 n ). Thus
the matrix A T A \Gamma oe 2 I is initially not positive definite and we cannot guarantee
the existence of the Cholesky factor. However, the Algorithm PCGTLS still does
not break down, and as shown in Figure 5.4 the limiting accuracy is obtained
after five RQI iterations. This surprisingly good performance of RQI can be
explained by the fact that even though x LS does not approximate x TLS well,
the angle between them is small; the cosine equals 0.98453.
Performing one step of inverse iteration before applying the RQI algorithm
gives much improved convergence. The one initial step of inverse iteration here
suffices to give an initial approximation in the interval [oe n+1 ; oe 0 n ). This can
be compared with 12-23 steps of bisection needed to achieve such a starting
approximation, see [14]! Three RQI iterations now give the solution x TLS with
an error close to the limiting accuracy, see Fig. 5.4.
We note that in both cases we obtained oe n+1 to full machine precision. Also,
the relative error norm in the TLS solution was consistent with the condition number of the problem.
5.3 A problem in signal restoration.
The Toeplitz matrix used in this example comes from an application in signal
restoration, see [14, Example 3]. Specifically, an n \Theta (n \Gamma 2!) convolution matrix
T is constructed to have entries in the first column given by
exp
and zero otherwise. Entries in the first row given by t
zero otherwise, where 8. A Toeplitz matrix T and right-hand
side vector g is then constructed as
e, where E is a
random Toeplitz matrix with the same structure as T , and e is a random vector.
The entries in E and e are generated randomly from a normal distribution with
mean 0.0 and variance 1.0, and scaled so that
In [14] problems with convergence were reported. However, these are due to the
choice of right-hand side -
1 , which was taken to be a vector of all ones. For the
unperturbed problem this vector is orthogonal to the space spanned by
the left singular vector corresponding to the smallest singular value. Therefore
the magnitude of the component in this direction of the initial vector x LS will be
very small, of the order fl. Also, although A is quite well conditioned the least
squares residual is large. The TLS problem is therefore close to a nongeneric
problem and thus very illconditioned.
Because of the extreme illconditioning for this right-hand side, the behavior
of any solution method becomes very sensitive to the particular random perturbation
added. We have therefore instead chosen a right-hand side - g 2 given
by - m. For this the TLS problem is much better
conditioned, see Table 5.1. Convergence is now obtained in just two iterations,
see Figure 5.5.
Table 5.1: Condition numbers for test problem 3 for right-hand sides g-bar_i, i = 1, 2.
Figure 5.5: Third test problem; RQI with one step of inverse iteration.
6 Summary.
We have developed an algorithm for solving large scale TLS problems based
on Rayleigh quotient iteration for computing the right singular vector of
defining the solution. The main work in this method consists of solving a sequence
of linear systems with matrix A T A \Gamma oe 2 I, where oe is the current approximation
to the smallest singular value of oe n+1 of (A; b). For large and sparse
TLS problems these linear systems can be solved by a preconditioned conjugate
gradient method. An efficient preconditioner is given by a (possibly incomplete)
Cholesky factorization of A T A or QR factorization of A.
Termination criteria for the inner and outer iterations have been given. We
conjecture that the described method almost always computes the TLS solution
with an accuracy compatible with a backward stable method. Although
a detailed error analysis is not given this conjecture is supported by numerical
results.
Methods for solving the TLS problem are by necessity more complex than
those for the (linear) LS problem. Our algorithm contains several ad hoc choices.
On the limited set of test problems we have tried it has only failed for almost
singular problems, for which the total least squares model is not relevant and
should not be used.
In our method the perturbation E is a rank one matrix which in general is
dense. Sometimes it is desired to find a perturbation E that preserves the
sparsity structure of A. A Newton method for this more difficult problem has
been developed by Rosen, Park, and Glick [21]. However, the complexity of this
algorithm limits it to fairly small sized problems. Recently a method, which has
the potential to be applied to large sparse problems has been given by Yalamov
and Yun Yuan [26]. Their algorithm only converges with linear rate, which may
suffice to obtain a low accuracy solution.
--R
Improving the accuracy of computed singular values.
Numerical methods for solving least squares problems
An analysis of the total least squares problem
A survey of condition number estimation for triangular matrices
Accuracy and Stability of Numerical Algorithms
A total least squares method for Toeplitz systems of equations
The minimum eigenvalue of a symmetric positive-definite Toeplitz matrix and rational Hermitian interpolation
Sparse QR factorization in MATLAB
The Symmetric Eigenvalue Problem
On the convergence of a practical QR algorithm
Total least norm formulation and solution for structured problems
stability and pivoting
Criteria for combining inverse and Rayleigh quotient iteration
Iterative algorithms for computing the singular subspace of a matrix associated with its smallest singular values
The Total Least Squares Problem: Computational Aspects and Analysis
A successive least squares method for structured total least squares.
Iterative Methods for Least Squares and Total Least Squares Problems
--TR
--CTR
Computing smallest singular triplets with implicitly restarted Lanczos bidiagonalization, Applied Numerical Mathematics, v.49 n.1, p.39-61, April 2004 | rayleigh quotient iteration;singular values;conjugate gradient method;total least squares |
587895 | The Recursive Inverse Eigenvalue Problem. | The recursive inverse eigenvalue problem for matrices is studied, where for each leading principal submatrix an eigenvalue and associated left and right eigenvectors are assigned. Existence and uniqueness results as well as explicit formulas are proven, and applications to nonnegative matrices, Z-matrices, M-matrices, symmetric matrices, Stieltjes matrices, and inverse M-matrices are considered. | Introduction
Inverse eigenvalue problems are a very important subclass of inverse problems
that arise in the context of mathematical modeling and parameter identification.
They have been studied extensively in the last 20 years, see e.g. [3, 5, 7, 11, 12, 14]
and the references therein. In particular, the inverse eigenvalue problem for non-negative
matrices is still a topic of very active research, since a necessary and
sufficient condition for the existence of a nonnegative matrix with a prescribed
spectrum is still an open problem, see [4, 11]. In this paper we study inverse
eigenvalue problems in a recursive manner, which allows one to extend already constructed
solutions when further data become available, as is frequently the case in
inverse eigenvalue problems, e.g. [3].
We investigate the following recursive inverse eigenvalue problem of order n:
Let F be a field, let s_1, ..., s_n ∈ F, and let
l_i = (l_{i,1}, l_{i,2}, ..., l_{i,i})^T ∈ F^i,   r_i = (r_{1,i}, r_{2,i}, ..., r_{i,i})^T ∈ F^i,   i = 1, ..., n,
be vectors with elements in F. Construct a matrix A ∈ F^{n,n} such that
l_i^T Ahii = s_i l_i^T,   Ahii r_i = s_i r_i,   i = 1, ..., n,
where Ahii denotes the i-th leading principal submatrix of A.
In the sequel we shall use the notation RIEP(n) for "the recursive inverse eigenvalue
problem of order n".
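Although the problem is posed over an arbitrary field, checking a candidate matrix against the RIEP(n) data is immediate; the following small Python sketch (an illustration over the reals, added here) verifies the defining relations.

```python
import numpy as np

def solves_riep(A, s, L, R, tol=1e-10):
    """Check whether A solves RIEP(n) for data s[i], l_i = L[i], r_i = R[i]
    (0-based: L[i] and R[i] have length i+1)."""
    n = A.shape[0]
    for i in range(n):
        Ai = A[:i + 1, :i + 1]                         # leading principal submatrix
        li, ri = np.asarray(L[i], float), np.asarray(R[i], float)
        if np.linalg.norm(li @ Ai - s[i] * li) > tol:
            return False
        if np.linalg.norm(Ai @ ri - s[i] * ri) > tol:
            return False
    return True
```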
In Section 2 we study the existence and uniqueness of solutions for RIEP(n) in the
general case. Our main result gives a recursive characterization of the solution
for RIEP(n). We also obtain a nonrecursive necessary and sufficient condition
for unique solvability as well as an explicit formula for the solution in case of
uniqueness.
The results of Section 2 are applied in the subsequent sections to special cases.
In Section 3 we discuss nonnegative solutions for RIEP(n) over the field IR of real
numbers. We also introduce a nonrecursive sufficient condition for the existence
of a nonnegative solution for RIEP(n). Uniqueness of nonnegative solutions for
RIEP(n) is discussed in Section 4. In Section 5 we study Z-matrix and M-matrix
solutions for RIEP(n) over IR. In Section 6 we consider real symmetric solutions
for RIEP(n) over IR. In Section 7 we consider positive semidefinite real symmetric
solutions for RIEP(n) over IR. In Section 8 we combine the results of the previous
two sections to obtain analogous results for Stieltjes matrices. Finally, in Section
9 we investigate inverse M-matrix solutions for RIEP(n).
2 Existence and uniqueness results
In this section we study the existence and uniqueness of solutions for RIEP(n) in
the general case. For this purpose we introduce some further notation. For the
vectors l_i and r_i, i = 2, ..., n, we denote
~l_i = (l_{i,1}, ..., l_{i,i-1})^T,   ~r_i = (r_{1,i}, ..., r_{i-1,i})^T.
The case n = 1 is easy to verify.
Proposition 1 If l_{1,1} = r_{1,1} = 0 then every A ∈ F solves RIEP(1). If
either l_{1,1} ≠ 0 or r_{1,1} ≠ 0 then A = s_1 is the unique solution for RIEP(1).
For n ≥ 2 we have the following recursive characterization of the solution for
RIEP(n).
Theorem 2 Let n ≥ 2. There exists a solution for RIEP(n) if and only if there
exists a solution B for RIEP(n-1) such that
l_{n,n} = 0  implies  ~l_n^T B = s_n ~l_n^T,   (1)
and
r_{n,n} = 0  implies  B ~r_n = s_n ~r_n.   (2)
There exists a unique solution for RIEP(n) if and only if there exists a unique
solution for RIEP(n-1) and l_{n,n} r_{n,n} ≠ 0.
Proof. Let A be an n \Theta n matrix. Partition A as
where B is an (n-1) \Theta (n-1) matrix. Clearly, A solves RIEP(n) if and only if B
solves RIEP(n-1) and
It thus follows that there exists a solution for RIEP(n) if and only if there exists
a solution B for RIEP(n-1) such that the equations (4)-(7) (with unknown x, y
and z) are solvable. We now show that these equations are solvable if and only
if (1) and (2) hold. Distinguish between four cases:
1. r Here (4) is equivalent to (2), (5) is equivalent to
l n;n
and (6) then follows from (4). For every y 2 F n\Gamma1 we can find z 2 F such
that (7) holds.
2. l Here (5) is equivalent to (1), (4) is equivalent to
r n;n
and (7) then follows from (5). For every x 2 F n\Gamma1 we can find z 2 F such
that (6) holds.
3. l Here (4) is equivalent to (2) and (5) is equivalent to (1).
For any x 2 F n\Gamma1 with x T ~ r we have (6), and for any y 2 F n\Gamma1 with
we have (7), where z can be chosen arbitrarily.
4. l n;n 6= 0; r n;n 6= 0. Here (4)-(7) have a unique solution, given by (8),
and
l n;n r n;n
It follows that (4)-(7) are solvable if and only if (1) and (2) hold.
To prove the uniqueness assertion, note that it follows from our proof that if
either l a solution is not unique, since at least one of the
vectors x, y and z can be chosen arbitrarily. If both l n;n 6= 0 and r n;n 6= 0 then
every solution B for RIEP(n-1) defines a unique solution A for RIEP(n). The
uniqueness claim follows.
This result is recursive and allows one to derive a recursive algorithm to compute
the solution, but we do not get explicit nonrecursive conditions that characterize
the existence of solutions. In order to get a necessary and sufficient condition
for unique solvability as well as an explicit formula for the solution in case of
uniqueness, we define the n × n matrix R_n to be the matrix whose columns are
r_1, ..., r_n, appended with zeros at the bottom to obtain n-vectors. Similarly, we
define the n × n matrix L_n to be the matrix whose rows are l_1^T, ..., l_n^T,
appended with zeros at the right to obtain n-vectors. That is, we have
L_n = ( l_{i,j} ) with l_{i,j} = 0 for j > i,   R_n = ( r_{i,j} ) with r_{i,j} = 0 for i > j,   (11)
so that L_n is lower triangular and R_n is upper triangular.
We denote
S_n := ( s_{max(i,j)} )_{i,j=1}^n.   (12)
Also, we denote by ∘ the Hadamard (or elementwise) product of matrices.
Proposition 3 A solution A for RIEP(n) satisfies
L_n A R_n = S_n ∘ (L_n R_n).   (13)
Proof. We prove our claim by induction on n. For n = 1 the claim follows easily.
Assume that the assertion holds for n − 1. Partition A as in (3).
We have
l n;n
R
By the inductive assumption we have L Also, by
(4) we have B~r by (5) we have ~ l T
n , and by (7)
we have ~ l T
n;n . It thus follows that
In general, the converse of Proposition 3 does not hold, that is, a matrix A satisfying
(13) does not necessarily form a solution for RIEP(n), as is demonstrated
by Example 5 below.
Theorem 4 There is a unique solution for RIEP(n) if and only if
l_{1,1} ≠ 0 or r_{1,1} ≠ 0
and
l_{i,i} r_{i,i} ≠ 0,   i = 2, ..., n.
Furthermore, the unique solution is given by
A = L_n^{-1} ( S_n ∘ (L_n R_n) ) R_n^{-1}.   (14)
Proof. The uniqueness claim follows from Proposition 1 and Theorem 2. The
fact that the unique solution for RIEP(n) is given by (14) follows immediately
from Proposition 3.
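Under the conditions of Theorem 4, the proof of Theorem 2 also gives a direct recursive construction of the solution through the formulas (8)-(10); the following Python sketch (an illustration over the reals, added here) implements that recursion, and the resulting matrix coincides with the explicit formula (14).

```python
import numpy as np

def riep_unique_solution(s, L, R):
    """Recursive construction of the unique RIEP(n) solution when l_{1,1} != 0
    or r_{1,1} != 0, and l_{i,i} r_{i,i} != 0 for i >= 2 (cf. Theorems 2 and 4)."""
    A = np.array([[float(s[0])]])
    for i in range(1, len(s)):
        l, r = np.asarray(L[i], float), np.asarray(R[i], float)
        lt, ln = l[:-1], l[-1]                         # ~l_i and l_{i,i}
        rt, rn = r[:-1], r[-1]                         # ~r_i and r_{i,i}
        si = float(s[i])
        x = (si * rt - A @ rt) / rn                    # formula (8)
        y = (si * lt - lt @ A) / ln                    # formula (9)
        z = si - lt @ (si * rt - A @ rt) / (ln * rn)   # formula (10)
        A = np.block([[A, x[:, None]], [y[None, :], np.array([[z]])]])
    return A

# consistency check against the defining relations of RIEP(3)
rng = np.random.default_rng(1)
s = [1.0, 2.0, 3.0]
L = [rng.standard_normal(i + 1) for i in range(3)]
R = [rng.standard_normal(i + 1) for i in range(3)]
A = riep_unique_solution(s, L, R)
print([float(np.linalg.norm(A[:i+1, :i+1] @ R[i] - s[i] * R[i])) for i in range(3)])
```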
In the case that the solution is not unique, that is, whenever l_{1,1} = r_{1,1} = 0
or whenever l_{i,i} or r_{i,i} vanish for some i > 1, the matrices L_n and R_n defined
in (11) are not invertible. Therefore, in this case (14) is invalid. We conclude
this section by an example showing that, in general, a revised form of (14), with
inverses replaced by generalized inverses, does not provide a solution for RIEP(n).
Example 5 Let
and let
hi
"0
We have
be the Moore-Penrose inverses of L and R respectively, see [1].
We have
Since Ah2i does not have an eigenvalue 2, A is not a solution for RIEP(3). Note
that we still have L n AR
In this section we have characterized solvability of RIEP(n) over a general field
F in terms of recursive conditions. We have also given a necessary and sufficient
condition for unique solvability and an explicit formula for the unique solution. In
the following sections we shall discuss the special cases of nonnegative matrices,
Z-matrices, M-matrices, real symmetric matrices, positive semidefinite matrices,
Stieltjes matrices and inverse M-matrices.
3 Existence of nonnegative solutions
In this section we apply the results of Section 2 to nonnegative solutions for
RIEP(n) over the field IR of real numbers. A matrix A 2 IR n;n is said to be
nonnegative [positive] if all elements of A are nonnegative [positive]. In this case
we write A ≥ 0 [A > 0].
In order to state our results we define a vector over IR to be unisign if its nonzero
components have the same sign.
Theorem 6 Let n - 2. There exists a nonnegative solution for RIEP(n) if and
only if we have
l i or r i is a unisign nonzero vector =) s
and there exists a nonnegative solution B for RIEP(n-1)
sn ~ rn
rn;n
rn;n
l n;n
l n;n
and
l n;n r n;n 6= 0 =) s n (
l n;n r n;n
l n;n r n;n
There exists a positive solution for RIEP(n) if and only if there exists a positive
solution B for RIEP(n-1) such that (15)-(18) hold with strict inequalities and
every nonzero unisign vector l i or r i has no zero components.
Proof. Let A 2 IR n;n . As in the proof of Theorem 2, partition A as in (3), and so
A solves RIEP(n) if and only if B solves RIEP(n-1) and (4)-(7) hold. Therefore,
if A is a nonnegative solution for RIEP(n) then we have (16)-(18). Also, it follows
from the nonnegativity of A that (15) holds. Conversely, assume that (15) holds
and that B forms a nonnegative solution for RIEP(n-1) satisfying (16)-(18). We
show that in this case we can find nonnegative solutions x, y and z for (4)-(7).
Distinguish between four cases:
1. r Here x is given by (8), y can be chosen arbitrarily,
and z should be chosen such that (7) holds. It follows from (17) that x is
nonnegative. If s n - 0 then we choose so we
have a nonnegative solution for (4)-(7). If s n ! 0 then, by (15), l n is not
unisign and hence ~ l T
l n;n
has at least one negative component. It follows that
we can find a positive vector y such that ~ l T
l n;n
by (7) we have
l n;n
, it follows that z ? 0, and so again we have a nonnegative
solution for (4)-(7).
2. l Here y is given by (9), x can be chosen arbitrarily, and z
should be chosen such that (6) holds. The proof follows as in the previous
case.
3. l should be chosen such that x T ~
and z can be chosen arbitrarily. In order to obtain a nonnegative solution
we can choose x, y and z to be zero.
4. l n;n 6= 0; r n;n 6= 0. Here x is given by (8), y is given by (9), and z is given
by (10). It follows from (17), (16) and (18) that x, y and z are nonnegative.
Assume now that A is a positive solution for RIEP(n). It is easy to verify that in
this case (15)-(18) should hold with strict inequalities. Also, for every nonzero
unisign vector l i [r i ], the vector l T
has no zero components, implying
that l i , [r i ] has no zero components. Conversely, assume that (15) holds with a
strict inequality, that every nonzero unisign vector l i or r i has no zero components,
and that B forms a positive solution for RIEP(n-1) satisfying (16)-(18) with strict
inequalities. We show that in this case we can find positive solutions x, y and z
for (4)-(7). Note that in Case 1 above, the vector x now becomes positive. Also,
since the inequality in (15) is now strict, we have either s n ? 0, in which case
we can choose positive y sufficiently small such that z is positive, or s n - 0, in
which case y can be chosen positive as before and the resulting z is positive. The
same arguments hold for Case 2. In Case 4, it follows from the strict inequalities
(17)-(18) that x, y and z are positive. Finally, in Case 3, since l n and r n both
have at least one zero component, it follows that both vectors are not unisign.
Hence, we can find positive x and y such that x T ~ r
We assign any
positive number to z to find a positive solution A for RIEP(n).
By the Perron-Frobenius theory, see e.g. [8, 2], the largest absolute value ae(A)
of an eigenvalue of a nonnegative n \Theta n matrix A is itself an eigenvalue of A,
the so called Perron root of A, and it has an associated nonnegative eigenvector.
Furthermore, if A is irreducible, that is, if either there
exists no permutation matrix P such that P T
where B and D
are square, then ae(A) is a simple eigenvalue of A with an associated positive
eigenvector. If A is not necessarily irreducible then we have the following, see
e.g. [2].
Theorem 7 If B is a principal submatrix of a nonnegative square matrix A
then ae(B) - ae(A). Furthermore, ae(A) is an eigenvalue of some proper principal
submatrix of A if and only if A is reducible.
Note that if we require that the s_i are the Perron roots of the principal submatrices
Ahii, i = 1, ..., n, then, by Theorem 7, we have
s_1 ≤ s_2 ≤ ... ≤ s_n.   (19)
If, furthermore, all the leading principal submatrices of A are required to be
irreducible, then
s_1 < s_2 < ... < s_n.   (20)
Condition (19) is not sufficient to guarantee that a nonnegative solution A for
RIEP(n) necessarily has s Perron roots of Ahii,
demonstrated by the following example.
Example 8 Let
and let
hi
"0
The nonnegative matrix 24
In order to see cases in which s are the Perron roots of Ahii,
respectively, we prove
Proposition 9 If the vector l_n or r_n is positive then for a nonnegative solution
A for RIEP(n) we have s_n = ρ(A).
Proof. The claim follows immediately from the known fact that a positive eigenvector
of a nonnegative matrix corresponds to the spectral radius, see e.g. Theorem
2.1.11 in [2, p. 128].
Corollary 10 If for every i ∈ {1, ..., n} we have either l_i > 0 or r_i > 0 then for
every nonnegative solution A for RIEP(n) we have s_i = ρ(Ahii), i = 1, ..., n.
Lemma 11 Assume that there exists a nonnegative solution A for RIEP(n) such
that ae(Ahn-1i) ! s n . If r n 6= 0 or l n 6= 0 then
Proof. Since r n 6= 0 or l n 6= 0 it follows that s n is an eigenvalue of A. Assume that
ae(A). It follows that the nonnegative matrix A has at least two eigenvalues
larger than or equal to s n . By [6, p. 473], see also [10, Corollary 1], it follows
that ae(Ahn-1i) - s n , which is a contradiction. Therefore, we have s
Corollary 12 If for every ng we have either r i 6= 0 or l i 6= 0, and if
holds then for every nonnegative solution A for RIEP(n) we have
Proof. Note that Our result follows using
Lemma 11 repeatedly.
Lemma 13 Assume that r n - 0 and r n;n 6= 0 or that l n - 0 and l n;n 6=
0. Then for every nonnegative solution A for RIEP(n) we have
g.
Proof. Without loss of generality, we consider the case where r n - 0 and r n;n 6= 0.
If r n is positive then, by Proposition 9, we have since by the
Perron-Frobenius theory we have ae(Ahn-1i) - ae(A), the result follows. Other-
wise, r n has some zero components. Let ff be the set of indices i such that r i;n ? 0
and let ff c be the complement of ff in ng. Note that since r n is a nonnegative
eigenvector of the nonnegative matrix A it follows that the submatrix A[ff c jff]
of A, with rows indexed by ff c and columns indexed by ff, is a zero matrix. It follows
that A is a reducible matrix and ae(A[ffjff])g. Note
that the subvector r n [ff] of r n indexed by ff is a positive eigenvector of A[ffjff]
associated with the eigenvalue s n . It thus follows that
it follows that A[ff c jff c ] is a submatrix of Ahn-1i. Thus, by the Perron-Frobenius
theory we have ae(A[ff c jff c ]) - ae(Ahn-1i) - ae(A). Hence, it follows that
g.
Corollary 14 Assume that for every ng we have either r i - 0 and
r i;i 6= 0 or l i - 0 and l i;i 6= 0. Then for every nonnegative solution A for RIEP(n)
we have g.
Proof. Note that Our result follows using
repeatedly.
Corollary 15 Assume that for every
r i;i 6= 0 or l i - 0 and l i;i 6= 0. If (19) holds then for every nonnegative solution A
we have
Another interesting consequence of Theorem 4 is the following relationship between
the matrix elements and the eigenvectors associated with the Perron roots
of the leading principal submatrices of a nonnegative matrix.
Corollary 16 Let A ∈ IR^{n,n} be a nonnegative matrix, let s_i, l_i, r_i, i = 1, ..., n,
be the Perron roots and associated left and right eigenvectors of Ahii,
respectively, and assume that (20) holds. Let S_n, L_n and R_n be defined as in (11) and
(12). Then
Ahii = L_i^{-1} ( S_i ∘ (L_i R_i) ) R_i^{-1},   i = 1, ..., n.   (21)
Proof. Since (20) holds, it follows that s i is not an eigenvalue of Ahi-1i,
Therefore, it follows from (1) and (2) that l i;i r i;i 6= 0. Also, since l 1 and
r 1 are eigenvectors of Ah1i, we have l 1;1 r 1;1 6= 0. It now follows from Theorem 4
that Ahii is the unique solution for RIEP(i), and is given by (21).
While Theorem 6 provides a recursive characterization for nonnegative solvability
of RIEP(n), in general nonrecursive necessary and sufficient conditions for the
existence of nonnegative solution are not known. We now present a nonrecursive
sufficient condition.
Corollary 17 Assume that the vectors l are all positive and
that the numbers s are all positive. Let
r j;i
r
r j;i
r
l
l i;j
l i\Gamma1;j
l i;j
l i\Gamma1;j
If we have
and
then there exists a (unique) nonnegative solution A for RIEP(n).
Furthermore, if all the inequalities (22)-(24) hold with strict inequality then there
exists a (unique) positive solution A for RIEP(n).
Proof. We prove our assertion by induction on n. The case
the inductive assumption we can find a nonnegative solution B for RIEP(n-1).
Note that
Therefore, it follows from (22) that
and so (16) holds. Similarly we prove that (17) holds. To prove that
holds note that by (25) we have B~r n - Bm r
Similarly, we have ~ l T
. Hence, it follows that ~ l T
. By (24) applied to
Theorem 6, there exists a nonnegative solution for RIEP(n). The proof of the
positive case is similar.
The conditions in Corollary 17 are not necessary as is demonstrated by the following
example.
Example
hi
"1
We have m r
2. Note that both (22) and
(23) do not hold for 3. Nevertheless, the unique solution for RIEP(3) is the
nonnegative matrix 2
4 Uniqueness of nonnegative solutions
When considering uniqueness of nonnegative solutions for RIEP(n), observe that
it is possible that RIEP(n) does not have a unique solution but does have a unique
nonnegative solution, as is demonstrated by the following example.
Example 19 Let
and let
hi
"0
"1
By Theorem 2, there is no unique solution for RIEP(2). Indeed, the solutions for
RIEP(2) are all matrices of the form
a \Gammaa
Clearly, the zero matrix is the only nonnegative solution for RIEP(2).
Observe that, unlike in Theorem 2, the existence of a unique nonnegative solution
for RIEP(n) does not necessarily imply the existence of a unique nonnegative
solution for RIEP(n-1), as is demonstrated by the following example.
Example 20 Let
and let
hi
"0
Observe that all matrices of the form
a a
solve RIEP(2), and hence there is no unique nonnegative solution for RIEP(2).
However, the only nonnegative solution for RIEP(3) is the matrix6 4
We remark that one can easily produce a similar example with nonnegative vectors
In order to introduce necessary conditions and sufficient conditions for uniqueness
of nonnegative solutions for RIEP(n) we prove
Lemma 21. Let n ≥ 2 and assume that B forms a nonnegative solution for RIEP(n−1) satisfying (15)–(18). Then there exist unique nonnegative vectors x, y and z such that the matrix A, partitioned as in (3), solves RIEP(n) if and only if either l_{n,n} r_{n,n} ≠ 0, or s_n = 0 and l_n is a unisign vector with no zero components, or s_n = 0 and r_n is a unisign vector with no zero components.
Proof. We follow the proof of Theorem 6. Consider the four cases in that proof.
In Case 1, the vector x is uniquely determined and any nonnegative assignment
for y is valid as long as
l n;n
nonnegative vector
sufficiently small will do. If s as is shown in the proof of Theorem 6,
we can find a positive y such that z ? 0, and by continuity arguments there exist
infinitely many such vectors y. If s a unique such y exists if and only
if there exists a unique nonnegative vector y such that ~ l T
l n;n
l n has
a nonpositive component then every vector y whose corresponding component
is positive and all other components are zero solves the problem. On the other
hand, if ~ l n ? 0, which is equivalent to saying that l n is a unisign vector with no
zero components, then the only nonnegative vector y that solves the problem is
Similarly, we prove that, in case 2, a unique nonnegative solution exists
if and only if s is a unisign vector with no zero components. We
do not have uniqueness in Case 3 since then z can be chosen arbitrarily. Finally,
there is always uniqueness in Case 4.
Lemma 21 yields sufficient conditions and necessary conditions for uniqueness
of nonnegative solutions for RIEP(n). First, observe that if s l n is a
unisign vector with no zero components, or if s is a unisign vector
with no zero components, then the zero matrix is the only nonnegative solution
of the problem. A less trivial sufficient condition is the following.
Corollary 22. Let n ≥ 2, and let A be a nonnegative solution for RIEP(n). If A⟨n−1⟩ forms a unique nonnegative solution for RIEP(n−1) and if l_{n,n} r_{n,n} ≠ 0, then A is the unique nonnegative solution for RIEP(n).
Necessary conditions are given by the following.
Corollary 23. Let n ≥ 2. If there exists a unique nonnegative solution for RIEP(n) then either l_{n,n} r_{n,n} ≠ 0, or s_n = 0 and l_n is a unisign vector with no zero components, or s_n = 0 and r_n is a unisign vector with no zero components.
The condition l_{n,n} r_{n,n} ≠ 0 is not sufficient for the uniqueness of a nonnegative solution for RIEP(n), as is shown in the following example.
Example
and let
hi
"0
Although we have l n;n r n;n 6= 0, all matrices of the
a a 0
a a 07solve RIEP(3), and hence there is no unique nonnegative solution for RIEP(3).
5 The Z-matrix and M-matrix case
A real square matrix A is said to be a Z-matrix if it has nonpositive off-diagonal
elements. Note that A can be written as ff is a real number
and B is a nonnegative matrix. If we further have that ff - ae(B) then we say
that A is an M-matrix.
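The definitions above translate directly into a numerical test. The following sketch is illustrative only (it is not part of the paper) and assumes NumPy; it uses the representation A = αI − B with α the largest diagonal entry, so that A is an M-matrix exactly when ρ(B) ≤ α:

```python
import numpy as np

def is_z_matrix(A, tol=1e-12):
    # nonpositive off-diagonal entries
    off = A - np.diag(np.diag(A))
    return np.all(off <= tol)

def is_m_matrix(A, tol=1e-12):
    # write A = alpha*I - B with B >= 0 by taking alpha = max diagonal entry;
    # then A is an M-matrix iff rho(B) <= alpha
    if not is_z_matrix(A, tol):
        return False
    alpha = float(np.max(np.diag(A)))
    B = alpha * np.eye(A.shape[0]) - A
    return max(abs(np.linalg.eigvals(B))) <= alpha + tol

print(is_m_matrix(np.array([[2.0, -1.0], [-1.0, 2.0]])))   # True
print(is_m_matrix(np.array([[1.0, -2.0], [-2.0, 1.0]])))   # False: rho(B) = 2 > 1
```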
In this section we discuss Z-matrix and M-matrix solutions for RIEP(n) over the
field IR of real numbers. The proofs of the results are very similar to the proofs
of the corresponding results in Sections 3 and 4 and, thus, are omitted in most
cases.
Theorem 25 Let n - 2. There exists a Z-matrix solution for RIEP(n) if and
only if there exists a Z-matrix solution B for RIEP(n-1)
sn ~ rn
rn;n
rn;n
and 8
l n;n
l n;n
Furthermore, if l n or r n is positive then a Z-matrix solution for RIEP(n) is an
M-matrix if and only if s n - 0.
Proof. The proof of the first part of the theorem is similar to the proof of Theorem
6, observing that here the vectors x and y are required to be nonnegative and that
the sign of z is immaterial. The proof of the second part of the Theorem follows,
similarly to Proposition 9, from the known fact that a positive eigenvector of a
Z-matrix corresponds to the least real eigenvalue.
Theorem 26 Let n - 2. Let A 2 IR n;n be a Z-matrix, let s be the
least real eigenvalues and the corresponding left and right eigenvectors of Ahii,
respectively, and assume that
defined as in (11) and (12). Then
For the numbers M r
and m l
, defined in Corollary 17, we have
Theorem 27 Assume that the vectors l are all positive and
that the numbers s are all positive. If we have
and
then there exists a (unique) M-matrix solution A for RIEP(n).
Theorem 28 Let n - 2, let A be a Z-matrix solution for RIEP(n) and assume
that Ahn-1i forms a unique Z-matrix solution for RIEP(n-1). Then A is the
unique Z-matrix solution for RIEP(n) if and only if l n;n r n;n 6= 0.
Here too, unlike in Theorem 2, the existence of a unique Z-matrix solution for
RIEP(n) does not necessarily imply the existence of a unique Z-matrix solution
for RIEP(n-1), as is demonstrated by the following example.
Example 29 Let s
hi
"0
"1
Observe that all matrices of the form
a \Gammaa
solve RIEP(2), and hence there is no unique Z-matrix solution for RIEP(2).
However, it is easy to verify that the zero matrix is the only Z-matrix solution
for RIEP(3).
6 The real symmetric case
The inverse eigenvalue problem for real symmetric matrices is well studied, see
e.g. [3]. In this section we consider symmetric solutions for RIEP(n) over the
field IR of real numbers. We obtain the following consequence of Theorem 2,
characterizing the real symmetric case.
Theorem 30. Let n ≥ 2. There exists a symmetric solution for RIEP(n) if and only if there exists a symmetric solution B for RIEP(n−1) such that the implications (1) and (2) hold, and
l_{n,n} r_{n,n} ≠ 0 ⟹ (s_n I − B) l̃_n / l_{n,n} = (s_n I − B) r̃_n / r_{n,n}.   (26)
Furthermore, if there exists a unique symmetric solution for RIEP(n) then l_{n,n} ≠ 0 or r_{n,n} ≠ 0.
Proof. Let A ∈ ℝ^{n,n}. Partition A as in (3), so that A solves RIEP(n) if and only if B solves RIEP(n−1) and (4)–(7) hold. It was shown in the proof of Theorem 2 that (4)–(7) are solvable if and only if (1) and (2) hold. Therefore, all we have to show is that if B is symmetric then we can find solutions x, y and z for (4)–(7) such that x = y if and only if (26) holds. We go along the four cases discussed in Theorem 2. In Case 1, the vector x is uniquely determined and the vector y can be chosen arbitrarily. Therefore, in this case we set y = x, and z is then uniquely determined. In Case 2, the vector y is uniquely determined and the vector x can be chosen arbitrarily. Thus, in this case we set x = y, and z is then uniquely determined. In Case 3, we can choose any x and y satisfying the corresponding conditions among (4)–(7); in particular, we can choose x = y = 0. Furthermore, z can be chosen arbitrarily. Finally, in Case 4, we have x = y if and only if (26) holds. Note that this is the only case in which, under the requirement that x = y, the vectors x, y and z are uniquely determined.
We remark that, unlike in Theorem 2, the existence of a unique symmetric solution
for RIEP(n) does not necessarily imply the existence of a unique symmetric
solution for RIEP(n-1), as is demonstrated by the following example.
Example
and let
hi
"1
l 4 =6 6 610
\Gamma17 7 7:
It is easy to verify that all symmetric matrices of the form61 1 a
a a b7
solve RIEP(3), while the unique solution for
This example also shows that there may exist a unique solution for RIEP(n) even
if l
Naturally, although not necessarily, one may expect in the symmetric case to have the condition
l_i = r_i, i = 1, . . . , n.   (27)
Indeed, in this case we have the following corollary of Theorems 2 and 30.
Corollary 32. Let n ≥ 2 and assume that (27) holds. The following are equivalent:
(i) There exists a symmetric solution for RIEP(n).
(ii) There exists a solution for RIEP(n).
(iii) There exists a symmetric solution B for RIEP(n−1) such that (1) holds.
(iv) There exists a solution B for RIEP(n−1) such that (1) holds.
Proof. Note that since (27) holds, we always have (26). We now prove the equivalence between the four statements of the corollary.
(i) ⟹ (ii) is trivial.
(ii) ⟹ (iv) by Theorem 2.
(iv) ⟹ (iii). Since (27) holds, it follows that (B + B^T)/2 also solves RIEP(n−1).
(iii) ⟹ (i). Since B is symmetric and since we have (27), the implications (1) and (2) are identical. Our claim now follows by Theorem 30.
For uniqueness we have:
Theorem 33. Let n ≥ 2 and assume that (27) holds. The following are equivalent:
(i) There exists a unique symmetric solution for RIEP(n).
(ii) There exists a unique solution for RIEP(n).
(iii) We have l_{i,i} ≠ 0, i = 2, . . . , n.
Proof. In view of (27), the equivalence of (ii) and (iii) follows from Theorem 4. To see that (i) and (iii) are equivalent note that, by the construction in Theorem 30, for every symmetric solution B for RIEP(n−1) there exists a solution A for RIEP(n) such that A⟨n−1⟩ = B. Furthermore, A is uniquely determined if and only if l_{n,n} ≠ 0. Therefore, it follows that there exists a unique symmetric solution for RIEP(n) if and only if there exists a unique symmetric solution for RIEP(n−1) and l_{n,n} ≠ 0. Our assertion now follows by induction on n.
We conclude this section remarking that a similar discussion can be carried over
for complex Hermitian matrices.
7 The positive semidefinite case
In view of the discussion of the previous section, it would be interesting to find
conditions for the existence of a positive (semi)definite real symmetric solution
for RIEP(n). Clearly, a necessary condition is nonnegativity of the numbers s_i, i = 1, . . . , n. Nevertheless, this condition is not sufficient even if a real symmetric solution exists, as is demonstrated by the
following example.
Example 34 Let
and let
hi
"1
The unique solution for RIEP(3) is the symmetric matrix6 4
which is not positive semidefinite.
The following necessary and sufficient condition follows immediately from Theorem
4.
Theorem 35. Let n ≥ 2 and assume that (27) holds. Assume, further, that r_{i,i} ≠ 0, i = 1, . . . , n. Then the unique solution for RIEP(n) is positive semidefinite [positive definite] if and only if S_n ∘ (R_n^T R_n) is positive semidefinite [positive definite].
Remark 36 By Theorem 33, in the case that r we do not have
uniqueness of symmetric solutions for RIEP(n). Hence, if there exists a symmetric
solution for RIEP(n) then there exist at least two different such solutions A and
B. Note that A a symmetric solution for RIEP(n) for
every real number c. It thus follows that in this a case it is impossible to have
all solutions for RIEP(n) positive semidefinite. Therefore, in this case we are
looking for conditions for the existence of some positive semidefinite solution for
RIEP(n).
The following necessary condition follows immediately from Proposition 3.
Theorem 37. Let n ≥ 2 and assume that (27) holds. If there exists a positive semidefinite real symmetric solution for RIEP(n) then S_n ∘ (R_n^T R_n) is positive semidefinite.
In order to find sufficient conditions for the existence of a positive semidefinite solution for RIEP(n), we denote by σ(A) the least eigenvalue of a real symmetric matrix A.
Lemma 38. Let n ≥ 2 and assume that (27) holds. Assume that there exists a symmetric solution A for RIEP(n) such that σ(A⟨n−1⟩) > s_n. If r_n ≠ 0 then σ(A) = s_n.
Proof. Since r_n ≠ 0 it follows that s_n is an eigenvalue of A. Assume that σ(A) < s_n. It follows that A has at least two eigenvalues smaller than or equal to s_n. By the Cauchy interlacing theorem for Hermitian matrices, e.g. [8, Theorem 4.3.8, p. 185], it follows that σ(A⟨n−1⟩) ≤ s_n, which is a contradiction. Therefore, we have σ(A) = s_n.
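The interlacing inequality invoked in this proof (the eigenvalues of a leading principal submatrix of a real symmetric matrix interlace those of the full matrix) can be checked numerically; the following sketch is ours, not part of the paper, and assumes NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2                                   # real symmetric matrix
eig_full = np.sort(np.linalg.eigvalsh(A))
eig_sub = np.sort(np.linalg.eigvalsh(A[:5, :5]))    # eigenvalues of A<n-1>

# Cauchy interlacing: lambda_k(A) <= lambda_k(A<n-1>) <= lambda_{k+1}(A)
assert np.all(eig_full[:5] <= eig_sub + 1e-12)
assert np.all(eig_sub <= eig_full[1:] + 1e-12)
```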
Corollary assume that (27) holds. If r i 6= 0 for all i,
then every real symmetric solution A for
RIEP(n) is positive semidefinite. If s n ? 0 then every real symmetric solution
for RIEP(n) is positive definite.
Proof. Note that Using Lemma 38 repeatedly
we finally obtain implying our claim.
Remark 40 In view of Remark 36, it follows from Corollary 39 that if r i 6= 0
for all i and if
has a unique (positive semidefinite) solution.
The converse of Corollary 39 is, in general, not true. That is, even if every real
symmetric solution for RIEP(n) is positive semidefinite we do not necessarily
have as is demonstrated by the following example.
Example
and let
hi
"1
The unique solution for RIEP(3) is the positive definite matrix
Nevertheless, we do not have s_1 ≥ s_2.
We conclude this section with a conjecture motivated by Theorems 35 and 37.
One direction of the conjecture is proven in Theorem 37.
Conjecture 42 Let n - 2 and assume that (27) holds. Then there exists a
positive semidefinite [positive definite] real symmetric solution for RIEP(n) if
and only if S n ffi (R T
positive semidefinite [positive definite].
8 The Stieltjes matrix case
In this section we combine the results of the previous two sections to obtain
analogous results for Stieltjes matrices, that is, symmetric M-matrices.
The following theorem follows immediately from Theorems 30 and 25.
Theorem 43 Let n - 2. There exists a symmetric Z-matrix solution for
RIEP(n) if and only if there exists a symmetric Z-matrix solution B for
satisfying 8
sn ~ rn
rn;n
rn;n
l n;n
l n;n
and
l n;n r n;n 6= 0 =) (s n I
~ l n
l n;n
~
r n;n
Furthermore, if l n or r n is positive then a symmetric Z-matrix solution for
RIEP(n) is a Stieltjes matrix if and only if s n - 0.
Corollary 44 Let n - 2, and assume that the vectors l i , are all
positive and that (27) holds. There exists a symmetric Z-matrix solution A
for RIEP(n) if and only if there exists a symmetric Z-matrix solution B for
satisfying s n ~
. The solution A is a Stieltjes matrix if and
only if s n - 0.
The following nonrecursive sufficient condition follows from Theorem 27.
Theorem 45. Let n ≥ 2 and assume that the vectors l_i, r_i, i = 1, . . . , n, are all positive, that (27) holds, and that the numbers s_i, i = 1, . . . , n, are all positive. If the conditions of Theorem 27 hold, then there exists a (unique) Stieltjes matrix solution A for RIEP(n).
Proof. By Theorem 27 there exists a unique M-matrix solution A for RIEP(n). Since A^T also solves the problem, it follows that A = A^T and the result follows.
9 The inverse M-matrix case
It is well known that for a nonsingular M-matrix A we have A^{−1} ≥ 0. Accordingly, a nonnegative matrix A is called an inverse M-matrix if it is invertible and A^{−1} is an M-matrix. An overview of characterizations of nonnegative matrices that are inverse M-matrices can be found in [9]. In this section we discuss, as a final special case, inverse M-matrix solutions for RIEP(n).
The following theorem follows immediately from two results of [9].
Theorem 46 Let A 2 IR n;n be partitioned as in (3). Then A is an inverse M-matrix
if and only if B is an inverse M-matrix and
and
for the diagonal entries: (31)
Proof. By Corollary 3 in [9], if A is an inverse M-matrix then B is an inverse
M-matrix. By Theorem 8 in [9], if B is an inverse M-matrix then A is an inverse
M-matrix if and only if (28)-(31) hold. Our claim follows.
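As an illustrative aside (ours, not the paper's), the inverse M-matrix property can be tested numerically by inverting the matrix and checking the M-matrix conditions on the inverse; a small NumPy sketch could look as follows:

```python
import numpy as np

def is_inverse_m_matrix(A, tol=1e-9):
    """A nonnegative, invertible matrix whose inverse is an M-matrix."""
    A = np.asarray(A, dtype=float)
    if np.any(A < -tol):
        return False
    try:
        Ainv = np.linalg.inv(A)
    except np.linalg.LinAlgError:
        return False
    # M-matrix test: Z-matrix whose eigenvalues all have nonnegative real part
    off = Ainv - np.diag(np.diag(Ainv))
    return np.all(off <= tol) and np.min(np.linalg.eigvals(Ainv).real) >= -tol

# the inverse of a nonsingular M-matrix is entrywise nonnegative, so this is True
print(is_inverse_m_matrix(np.linalg.inv(np.array([[2.0, -1.0], [-1.0, 2.0]]))))
```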
The next result gives necessary and sufficient recursive conditions for the existence
of an inverse M-matrix solution for RIEP(n).
Theorem 2. There exists an inverse M-matrix solution for RIEP(n)
if and only if s n ? 0 and there exists an inverse M-matrix solution B for
satisfying 8
N~rn
rn;n
l n;n
l n;n r n;n 6= 0 =)
l n;n r n;n
and, except for the diagonal entries,
l n;n r n;n 6= 0 =) s n
l n;n r n;n
l n;n r n;n
Proof. As in the proof of Theorem 2, partition A as in (3). If A is an inverse M -
matrix solution for RIEP(n) then, as is well known, its eigenvalues lie in the open
right half plane, and so the real eigenvalue s n must be positive. Furthermore,
by Theorem 46, B is an inverse M-matrix and (28)-(31) hold. Finally, we have
(4)-(7). Distinguish between four cases:
1. r Here x is given by (8), and so it follows from (29) that
l n;n
Theorem 2 we have B~r implying that N ~
2. l Here y is given by (9), and so it follows from (28) that
N~rn
rn;n
Theorem 2 we have ~ l T
3. l Similarly to the previous cases prove that N ~
4. l n;n 6= 0; r n;n 6= 0. Here x is given by (8), y is given by (9), and z is given
by (10). It follows from (28) that N~rn
rn;n
- 0, and from (29) that ~ l T
l n;n
- 0. It
follows from (30) that
l n;n r n;n
l n;n
r n;n
l n;n r n;n
0:
now follows that ~ l T
l n;nrn;n
! 1. Finally, it follows from (31)
that, except for the diagonal entries,
l n;n r n;n
r n;n
l n;n
l n;n r n;n
We have thus proven that if A is an inverse M-matrix solution for RIEP(n)
then is an inverse M-matrix solution B for RIEP(n-1) satisfying
(32)-(35).
Conversely, assume that s n ? 0 and B is an inverse M-matrix solution B for
satisfying (32)-(35). We show that x, y and z can be chosen such
that (28)-(31) hold, and so by Theorem 46, A is an inverse M-matrix. Here too
we distinguish between four cases:
1. r Here x is given by (8), and by (33) we obtain (29). Note
that y can be chosen arbitrarily, and and z should be chosen such that (7)
holds. If we choose It follows that
so we also have (30). Finally, since
is an M-matrix, it follows that (31) holds (except for
the diagonal entries).
2. l Here y is given by (9), and by (32) we obtain (28). The
vector x can be chosen arbitrarily, so we choose The proof follows
as in the previous case.
3. l should be chosen such that x T ~
and z can be chosen arbitrarily. We choose and the proof follows.
4. l n;n 6= 0; r n;n 6= 0. Here x is given by (8), y is given by (9), and z is given
by (10). By (32) and (33) we obtain (28) and (29) respectively. Finally,
similarly to the corresponding case in the proof of the other direction, (34)
implies (30) and (35) implies (31).
Note that Conditions (32)-(33) imply immediately Conditions (16)-(17) by multiplying
the inequality by the nonnegative matrix B. This is not surprising, since
an inverse M-matrix is a nonnegative matrix. The converse, however, does not
hold in general. The following example shows that although (16)-(17) is satisfied,
(32)-(33) do not hold.
Example 48 Let
and let
hi
0:5257
0:8507
0:3859
0:91267
The unique solution for RIEP(3) is the nonnegative matrix
A =6 4
which is not an inverse M-matrix, since
A^{−1} = [  1.6429  −1.5714   0.4286
           −1.5714   2.2857  −0.7143
            0.4286  −0.7143   0.2857 ]
has a positive off-diagonal entry and hence is not an M-matrix.
Indeed, the unique nonnegative solution B for RIEP(2) satisfies (16), as the corresponding vector equals (2.8673, 8.2024)^T, which is nonnegative. However, B does not satisfy (32), since the vector (−1.3688, 2.2816)^T is not nonnegative.
References
Generalized Matrix Inverses: Theory and Applications
Nonnegative Matrices in the Mathematical Sciences
A survey of matrix inverse eigenvalue problems
The spectra of non-negative matrices via symbolic dynamics
On an inverse problem for nonnegative and eventually nonnegative matrices
On some inverse problems in matrix theory
Matrix Analysis
Inverse eigenvalue problems for matrices
A note on an inverse problem for nonnegative matrices
Nonnegative matrices whose inverses are M-matrices
Note on an inverse characteristic value problem
Keywords: M-matrices; recursive solution; Stieltjes matrices; inverse eigenvalue problem; inverse M-matrices; Hermitian matrices; nonnegative matrices; Z-matrices.
587916 | Extremal Properties for Dissections of Convex 3-Polytopes. | A dissection of a convex d-polytope is a partition of the polytope into d-simplices whose vertices are among the vertices of the polytope. Triangulations are dissections that have the additional property that the set of all its simplices forms a simplicial complex. The size of a dissection is the number of d-simplices it contains. This paper compares triangulations of maximal size with dissections of maximal size. We also exhibit lower and upper bounds for the size of dissections of a 3-polytope and analyze extremal size triangulations for specific nonsimplicial polytopes: prisms, antiprisms, Archimedean solids, and combinatorial d-cubes. | Introduction
Let A be a point configuration in R^d with its convex hull
having dimension d. A set of d-simplices with vertices in A is a dissection
of A if no pair of simplices has an interior point in common and their union equals
conv(A). A dissection is a triangulation of A if in addition any pair of simplices
intersects at a common face (possibly empty). The size of a dissection is the number
of d-simplices it contains. We say that a dissection is mismatching when it is not a
triangulation (i.e. it is not a simplicial complex). In this paper we study mismatching
dissections of maximal possible size for a convex polytope and compare them with
maximal triangulations. This investigation is related to the study of Hilbert bases
and the hierarchy of covering properties for polyhedral cones which is relevant in
Algebraic Geometry and Integer Programming (see [5, 10, 24]). Maximal dissections
are relevant also in the enumeration of interior lattice points and its applications (see
[2, 15] and references there).
It was first shown by Lagarias and Ziegler that dissections of maximal size turn
out to be, in general, larger than maximal triangulations, but their example uses
interior points [16]. Similar investigations were undertaken for mismatching minimal
dissections and minimal triangulations of convex polytopes [4]. In this paper we
augment previous results by showing that it is possible to have simultaneously, in
the same 3-polytope, that the size of a mismatching minimal (maximal) dissection
is smaller (larger) than any minimal (maximal) triangulation. In addition, we show
that the gap between the size of a mismatching maximal dissection and a maximal
triangulation can grow linearly on the number of vertices and that this occurs already
for a family of simplicial convex 3-polytopes. A natural question is how different
are the upper and lower bounds for the size of mismatching dissections versus those
bounds known for triangulations (see [21]). We prove lower and upper bounds on their
size with respect to the number of vertices for dimension three and exhibit examples
showing that our technique of proof fails already in dimension four. Here is the first
Dept. of Mathematics, Univ. of California-Davis (deloera@math.ucdavis.edu). The research of
this author partially supported by NSF grant DMS-0073815.
y Depto. de Matematicas, Estad. y Comput., Univ. de Cantabria (santos@matesco.unican.es).
The research of this author was supported partially by grant PB97-0358 of the Spanish Direccion
General de Investigacion Cientca y Tecnica.
z Dept. of Information Science, Univ. of Tokyo (fumi@is.s.u-tokyo.ac.jp).
summary of results:
Theorem 1.1.
1. There exists an infinite family of convex simplicial 3-polytopes with increasing
number of vertices whose mismatching maximal dissections are larger than
their maximal triangulations. This gap is linear in the number of vertices
(Corollary 2.2).
2. (a) There exists a lattice 3-polytope with 8 vertices containing no other lattice
point other than its vertices whose maximal dissection is larger than its
maximal triangulations.
(b) There exists a 3-polytope with 8 vertices for which, simultaneously, its
minimal dissection is smaller than minimal triangulations and maximal
dissection is larger than maximal triangulations.
(Proposition 2.3)
3. If D is a mismatching dissection of a 3-polytope with n vertices, then the size of D is at least n − 2. In addition, the size of D is bounded above by (n − 2)(n − 3)/2 (Proposition 3.2).
A consequence of our third point is that the result of [4], stating a linear gap
between the size of minimal dissections and minimal triangulations, is best possible.
The results are discussed in Sections 2 and 3.
The last section presents a study of maximal and minimal triangulations for
combinatorial d-cubes, three-dimensional prisms and anti-prisms, as well as other
Archimedean polytopes. The following theorem and table summarize the main results:
Theorem 1.2.
1. There is a constant c > 1 such that for every d ≥ 3 the maximal triangulation among all possible combinatorial d-cubes has size at least c^d · d! (Proposition 4.1).
2. For a three-dimensional m-prism, in any of its possible coordinatizations, the size of a minimal triangulation is 2m − 5 + ⌈m/2⌉. For an m-antiprism, in any of its possible coordinatizations, the size of a minimal triangulation is 3m − 5
(Proposition 4.3). The size of a maximal triangulation of an m-prism depends
on the coordinatization, and in certain natural cases it is (m
(Proposition 4.4).
3. The following table specifies sizes of the minimal and maximal triangulations for some Platonic and Archimedean solids. These results were obtained via integer programming calculations using the approach described in [8]. All computations used the canonical symmetric coordinatizations for these polytopes [6]. The number of vertices is indicated in parentheses (Remark 4.5):
Polytope (number of vertices)        minimal   maximal
Icosahedron (12)                     15        20
Dodecahedron (20)                    23        36
Cuboctahedron
Icosidodecahedron
Truncated
Truncated Octahedron
Truncated Cube (24)                  25        48
Small Rhombicuboctahedron
Pentakis Dodecahedron (32)           54        ?
Rhombododecahedron
Table. Sizes of extremal triangulations of Platonic and Archimedean solids.
2. Maximal dissections of 3-polytopes. We introduce some important definitions and conventions: We denote by Q_m a convex m-gon with m an even positive integer. Let v_1v_2 and u_1u_2 be two edges parallel to Q_m, orthogonal to each other, on opposite sides of the plane containing Q_m, and such that the four segments v_iu_j intersect the interior of Q_m. We suppose that v_1v_2 and u_1u_2 are not parallel to any diagonal or edge of Q_m. The convex hull P_m of these points has m + 4 vertices and it
is a simplicial polytope. We will call north (respectively south) vertex of Qm the one
which maximizes (respectively minimizes) the scalar product with the vector v_2 − v_1. Similarly, we will call east (west) the vertex which maximizes (minimizes) the scalar product with u_2 − u_1. We denote these four vertices n, s, e and w, respectively. See Figure 2.1.
Fig. 2.1. North, South, East, and West vertices.
We say that a directed path of edges inside Q_m is monotone in the direction v_1v_2 (respectively u_1u_2) when the vertices of the path appear in the path following the same order given by the scalar product with v_2 − v_1 (respectively u_2 − u_1). An equivalent formulation is that any line orthogonal to v_1v_2 cuts the path in at most one point. We remark that by our choice of v_1v_2 and u_1u_2 all vertices of Q_m are ordered by the values of their scalar products with v_2 − v_1 and also with respect to u_2 − u_1. In the
same way, a sequence of vertices of Q_m is ordered in the direction of v_1v_2 (respectively u_1u_2) if the order is the same as the one provided by using the values of the scalar products of the points with the vector v_2 − v_1 (respectively u_2 − u_1). Consider the
two orderings induced by the directions of v 1 v 2 and u 1 u 2 on the set of vertices of Qm .
Let us call horizontal (respectively vertical) any edge joining two consecutive vertices
in the direction of v 1 v 2 (respectively of u 1 u 2 ). As an example, if Qm is regular then
the vertical edges in Qm form a zig-zag path as shown in Figure 2.2.
Our examples in this section will be based on the following observation and are
inspired by a similar analysis of maximal dissections of dilated empty lattice tetrahedra
in R 3 by Lagarias and Ziegler [16]: Let Rm be the convex hull of the m+2 vertices
consisting of the m-gon Q_m and the edge v_1v_2. R_m is exactly one half of the polytope P_m.
Consider a triangulation T_0 of Q_m and a path γ of edges of T_0 monotone with respect to the direction u_1u_2. Observe that γ divides T_0 in two regions, which we will call the "north" and the "south". Then, the following three families of tetrahedra form a triangulation T of R_m: the edges of γ joined to the edge v_1v_2; the southern triangles of T_0 joined to v_1; and the northern triangles of T_0 joined to v_2 (see Figure 2.3).
Moreover, all the triangulations of Rm are obtained in this way: Any triangulation T
Fig. 2.2. The minimal monotone path (middle) and the maximal monotone path made by the vertical edges (right) in the direction u_1u_2.
Fig. 2.3. Three types of tetrahedra in Rm .
of Rm induces a triangulation T 0 of Qm . The link of v 1 v 2 in T is a monotone path of
edges contained in T_0 and it divides T_0 in two regions, joined respectively to v_1 and v_2. Using the Cayley trick, one can also think of the triangulations of R_m as the fine mixed subdivisions of the Minkowski sum Q_m + v_1v_2 (see [13] and references within).
The size of a triangulation of R_m equals m − 2 + |γ|, where |γ| is the number of edges in the path γ. There is a unique minimal path in Q_m of length one (Figure 2.2, middle) and a unique maximal path of length m − 1 (Figure 2.2, right). Hence the minimal and maximal triangulations of R_m have, respectively, m − 1 and 2m − 3 tetrahedra. The maximal triangulation is unique, but the minimal one is not: after choosing the diagonal forming the minimal path, the rest of the polygon Q_m can be triangulated in many ways. From the above discussion regarding R_m we see that we could independently triangulate each of the two halves of P_m with any number of tetrahedra from m − 1 to 2m − 3. Hence, P_m has dissections of sizes going from 2m − 2 to 4m − 6. Among the triangulations of P_m, we will call halving triangulations those that triangulate the two halves of P_m. Equivalently, the halving triangulations are those which do not contain any of the four edges v_iu_j.
Proposition 2.1. Let Pm be as described above, with Qm being a regular m-gon.
No triangulation of P_m has more than 7m/2 + 1 tetrahedra. On the other hand, there are mismatching dissections of P_m with 4m − 6 tetrahedra.
Proof. Let T be a triangulation of Pm . It is an easy application of Euler's formulas
for the 3-ball and 2-sphere that the number of tetrahedra in a triangulation of any
3-ball without interior vertices equals the number of vertices plus interior edges minus
three (such formula appears for instance in [9]). Hence our task is to prove that T has
at most 5m/2 interior edges. For this, we classify the interior edges according to how many vertices of Q_m they are incident to. There are only four edges not incident to any vertex of Q_m (the edges v_iu_j). T contains at most m − 3 edges incident to two vertices of Q_m (i.e. diagonals of Q_m), since in any family of more than m − 3 such edges there are pairs which cross each other. Thus, it suffices to prove that T contains at most 3m/2 − 1 edges incident to just one vertex of Q_m, i.e. of the form v_ip or u_ip with p a vertex of Q_m.
Let p be any vertex of Q_m. If p equals w or e then the edges pv_1 and pv_2 are both in the boundary of P_m; for any other p, exactly one of pv_1 and pv_2 is on the boundary and the other one is interior. Moreover, we claim that if pv_i is an interior edge in a triangulation T, then the triangle pv_1v_2 appears in T. This is so because there is a plane containing pv_i and having v_{3−i} as the unique vertex on one side. At the same time the link of pv_i is a cycle going around the edge. Hence, v_{3−i} must appear in the link of pv_i. It follows from the above claim that the number of interior edges of the form pv_i in T equals the number of vertices of Q_m other than w and e in the link of v_1v_2. In a similar way, the number of interior edges of the form pu_i in T equals the number of vertices of Q_m other than n and s in the link of u_1u_2. In other words, if
we call in the index
and of the vertices are reversed, because in this way u is monotone with respect to
with respect to v 1 v 2 ), then the number of interior edges in T incident
to exactly one vertex of Qm equals jvertices( v )j Our goal is to
bound this number. As an example, Figure 2.4 shows the intersection of Qm with a
certain triangulation of Pm 12). The link of v 1 v 2 in this triangulation is the
chain of vertices and edges wabu 1 nu 2 ce (the star of v 1 v 2 is marked in thick and grey
in the gure). u consists of the chains wab and ce and the isolated vertex n. In turn,
the link of u 1 u 2 is the chain nv 1 s and v consists of the isolated vertices n and s.
s
e
a
c
Fig. 2.4. Illustration of the proof of Proposition 2.1.
Observe that v has at most three connected components, because it is obtained
by removing from link T the parts of it incident to v 1 and v 2 , if any.
Each component is monotone in the direction of v 1 v 2 and the projections of any two
components to a line parallel to v 1 v 2 do not overlap. The sequence of vertices of Qm
ordered in the direction of v 1 v 2 , can have a pair of consecutive vertices contained in
only where there is a horizontal edge in v or in the at most two discontinuities
of v . This is true because Qm is a regular m-gon.
We denote n hor the number of horizontal edges in v and n 0
hor this number plus
the number of discontinuities in v (hence n 0
hor n hor non-horizontal
edge of v produces a jump of at least two in the v 1 v 2 -ordering of the vertices of Pm ,
hence we have
hor
hor
Analogously, and with the obvious similar meaning for n vert and n 0
vert ,
jvertices(
vert
vert
can be completed to a triangulation of Qm , and exactly four non-
interior edges of Qm are horizontal or vertical, we have n hor
hor
vert m+ 5. Hence,
hor
vert
3:
Thus, there are at most 3m/2 − 1 interior edges in T of the form pv_i or pu_i and at most 5m/2 interior edges in total, as desired.
Corollary 2.2. The polytope P_m described above has the following properties:
It is a simplicial 3-polytope with m + 4 vertices.
Its maximal dissection has at least 4m − 6 tetrahedra.
Its maximal triangulation has at most 7m/2 + 1 tetrahedra.
In particular, the gap between the sizes of the maximal dissection and the maximal triangulation is linear in the number of vertices.
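As a quick sanity check (ours, not part of the paper), one can tabulate the two bounds of Corollary 2.2 and see that the dissection bound overtakes the triangulation bound once m > 14:

```python
for m in range(6, 21, 2):                      # even m, as required for Q_m
    dissection_lb = 4 * m - 6                  # halving dissections (Corollary 2.2)
    triangulation_ub = 7 * m // 2 + 1          # Proposition 2.1
    print(m, dissection_lb, triangulation_ub, dissection_lb - triangulation_ub)
```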
Three remarks are in order. First, the size of the maximal triangulation for P_m may depend on the coordinates or, more specifically, on which diagonals of Q_m intersect the tetrahedron v_1v_2u_1u_2. Second, concerning the size of the minimal triangulation of P_m, we can easily describe a triangulation of P_m with only m + 5 tetrahedra: let the vertices n, s, e and w be as defined above (see Figure 2.1) and let us call northeast, northwest, southeast and southwest the edges in the arcs ne, nw, se and sw in the boundary of Q_m. Then, the triangulation consists of the five central tetrahedra (shown in the left part of Figure 2.5) together with the tetrahedra obtained by joining the edges v_2u_2, v_2u_1, v_1u_2 and v_1u_1, respectively, to the northeast, northwest, southeast and southwest edges of Q_m. The right part of Figure
2.5 shows the result of slicing through the triangulation by the plane containing the
polygon Qm .
Finally, although the corollary above states a difference between maximal dissections and maximal triangulations only for P_m with m > 14, experimentally we have
observed there is a gap already for 8. Now we discuss two other interesting
examples. The following proposition constitutes the proof of Theorem 1.1 (2).
Proposition 2.3.
1. Consider the following eight points in R 3
The vertices of
a square in the plane
The vertices of a horizontal edge above
the square, and
The vertices of a horizontal edge
below the square.
These eight points are the vertices of a polytope P whose only integer points
are precisely its eight vertices and with the following properties:
(a) Its (unique) maximal dissection has 12 tetrahedra. All of them are uni-
modular, i.e. they have volume 1=6.
(b) Its (several) maximal triangulations have 11 tetrahedra.
e
s
e
s
Fig. 2.5. For the triangulation of Pm with its ve central tetrahedra (left)
and the intersection of the triangulation with the polygon Qm (right) are shown. The four interior
vertices are the intersection points of the edges with the plane containing
Qm .
2. For the 3-polytope with vertices
the sizes of its (unique) minimal dissection and (several) minimal triangulations
are 6 and 7 respectively, and the sizes of its (several) maximal triangulations
and (unique) maximal dissection are 9 and 10 respectively.
Proof. The polytopes constructed are quite similar to P_4 constructed earlier, except that Q_4 is non-regular (in part 2) and the segments u_1u_2 and v_1v_2 are longer and are not orthogonal, thus ending with different polytopes. The polytopes are shown in Figure 2.6. Figure 2.7 describes a maximal dissection of each of them, in five parallel slices. Observe that both polytopes have four vertices in one plane and another four in the plane y = 1. Hence, the first and last slices in parts (a) and (b) of Figure 2.7 completely describe the polytope.
Fig. 2.6. The two polytopes in Proposition 2.3.
(1) The vertices in the planes quadrangles whose
only integer points are the four vertices. This proves that the eight points are in
convex position and that the polytope P contains no integer point other than its
vertices. Let us now prove the assertions on maximal dissections and triangulations
of
(a) Consider two paths of length three in the quadrilateral nsew which are monotone respectively in the directions orthogonal to v_1v_2 and u_1u_2. Using them,
Fig. 2.7. Five 2-dimensional slices of the maximal dissections of the polytopes in Proposition 2.3. The first and last slices are two facets of the polytopes containing all the vertices.
we can construct two triangulations of size five of the polytopes conv(nsewv_1v_2) and conv(nsewu_1u_2). Together they do not fill P completely. There is space left for the tetrahedra swv_1u_1 and env_2u_2. This gives a dissection of P with twelve tetrahedra. All the tetrahedra are unimodular, so no bigger dissection is possible.
(b) A triangulation of size 11 can be obtained using the same idea as above, but
with paths v and u of lengths three and two respectively, which can be taken from
the same triangulation of the square nswe.
To prove that no triangulation has bigger size, it suffices to show that P does not have any unimodular triangulation. This means all tetrahedra have volume 1/6. We start by recalling a well-known fact (see Corollary 4.5 in [25]). A lattice tetrahedron has volume 1/6 if and only if each of its vertices v lies in a consecutive lattice plane parallel to the supporting plane of the opposite facet to v. Two parallel lattice planes are said to be consecutive if their equations are ax + by + cz = d and ax + by + cz = d + 1 for integers a, b, c, d.
Suppose that T is a unimodular triangulation of P. We will first prove that
the triangle u 1 u 2 e is in T . The triangular facet u 1 lying in the hyperplane
has to be joined to a vertex in the plane x 1. The two
possibilities are e and v 1 . With the same argument, if the tetrahedron u 1 u 2 sv 1 is in
which lies in the hyperplane 2x will be joined to a
vertex in 2x 2, and the only one is e. This nishes the proof that u 1
is a triangle in T . Now, u 1 u 2 e is in the plane x must be joined to a
vertex in i.e. to w. Hence u 1 u 2 ew is in T and, in particular, T uses the
edge ew. P is symmetric under the rotation of order two on the axis
g.
Applying this symmetry to the previous arguments we conclude that T uses the edge
ns too. But this is impossible since the edges ns and ew cross each other.
(2) This polytope almost fits the description of P_4, except for the fact that the edges v_iu_j intersect the boundary and not the interior of the planar quadrangle nsew. With the general techniques we have described, it is easy to construct halving dissections of this polytope with sizes from 6 to 10. Combinatorially, the polytope is a 4-antiprism. Hence, Proposition 4.3 shows that its minimal triangulation has 7 tetrahedra. The rest of the assertions in the statement were proved using the integer programming approach proposed in [8], which we describe in Remark 4.5. We have also verified them by enumerating all triangulations [19, 29]. It is interesting to
observe that if we perturb the coordinates a little, so that the planar quadrilateral nsew becomes a tetrahedron with the right orientation and without changing the face lattice of the polytope, then one obtains a triangulation with ten tetrahedra.
3. Bounds for the size of a dissection. Let D be a dissection of a d-polytope P. Say two (d − 1)-simplices S_1 and S_2 of D intersect improperly in a (d − 1)-hyperplane H if both lie in H, are not identical, and they intersect with non-empty relative interior. Consider the following auxiliary graph: take as nodes the (d − 1)-simplices of a dissection, and say that two (d − 1)-simplices are adjacent if they intersect improperly in a certain hyperplane. A mismatched region is the subset of R^d that is the union of the (d − 1)-simplices over a connected component of size larger than one in such a graph. Later, in Proposition 3.4 we will show some of the complications that can occur in higher dimensions.
Define the simplicial complex of a dissection as all the simplices of the dissection together with their faces, where only faces that are identical (in R^d) are identified. This construction corresponds intuitively to an inflation of the dissection where for each mismatched region we move the two groups of (d − 1)-simplices slightly apart leaving the relative boundary of the mismatched region joined. Clearly, the simplicial complex of a dissection may be not homeomorphic to a ball.
The deformed d-simplices intersect properly, and the mismatched regions become holes. The numbers of vertices and d-simplices do not change.
Lemma 3.1. All mismatched regions for a dissection of a convex 3-polytope P
are convex polygons with all vertices among the vertices of P . Distinct mismatched
regions have disjoint relative interiors.
Proof. Let Q be a mismatched region and H the plane containing it. Since a
mismatched region is a union of overlapping triangles, it is a polygon in H with a
connected interior. If two triangles forming the mismatched region have interior points
in common, they should be facets of tetrahedra in different sides of H. Otherwise, the two tetrahedra would have interior points in common, contradicting the definition of dissection. Triangles which are facets of tetrahedra in one side of H cover Q. Triangles coming from the other side of H also cover Q.
Now take triangles coming from one side. As mentioned above, they have no
interior points in common. Their vertices are among the vertices of the tetrahedra
in the dissection, thus among the vertices of the polytope P . Hence, the vertices of
the triangles are in convex position, thus the triangles are forming a triangulation of
a convex polygon in H whose vertices are among the vertices of P .
For the second claim, suppose there were distinct mismatched regions having an
interior point in common. Then their intersection should be an interior segment for both of them. Let Q be one of the mismatched regions. It is triangulated in two different ways, each coming from the tetrahedra in one side of the hyperplane. The triangles in either triangulation cannot intersect improperly with the interior segment. Thus the two triangulations of Q have an interior diagonal edge in common. This means the triangles in Q consist of more than one connected component of the auxiliary graph, contradicting the definition of mismatched region.
Proposition 3.2.
1. The size of a mismatching dissection D of a convex 3-polytope with n vertices
is at least n − 2.
2. The size of a dissection of a 3-polytope with n vertices is bounded from above by (n − 2)(n − 3)/2.
Proof. (1) Do an inflation of each mismatched region. This produces as many holes as mismatched regions, say m of them. Each hole is bounded by two triangulations of a polygon. This is guaranteed by the previous lemma. Denote by k_i the number of vertices of the polygon associated to the i-th mismatched region. In each of the holes introduce an auxiliary interior point. The point can be used to triangulate the interior of the holes by filling in the holes with the coning of the vertex with the triangles it sees. We now have a triangulated ball.
Denote by |D| the size of the original dissection. The triangulated ball has then |D| + Σ_{i=1}^{m} 2(k_i − 2) tetrahedra in total. The number of interior edges of this triangulation is the number of interior edges in the dissection, denoted by e_i(D), plus the new additions: for each hole of length k_i we added k_i interior edges. In a triangulation T of a 3-ball with n boundary vertices and n_0 interior vertices, the number of tetrahedra |T| is related to the number of interior edges e_i of T by the formula |T| = e_i + n − n_0 − 3. The proof is a simple application of Euler's formula for triangulated 2-spheres and 3-balls and we omit the easy details.
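As an aside (not part of the paper), this Euler-formula identity is easy to test on concrete triangulations; the sketch below assumes NumPy and SciPy and uses a Delaunay triangulation of random points in convex position, so that there are no interior vertices (n_0 = 0):

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(3)
pts = rng.standard_normal((40, 3))
pts = pts[ConvexHull(pts).vertices]        # keep only points in convex position

tri = Delaunay(pts)                        # a triangulation of the 3-polytope
n = len(pts)
edges = {tuple(sorted(e)) for s in tri.simplices
         for e in ((s[0], s[1]), (s[0], s[2]), (s[0], s[3]),
                   (s[1], s[2]), (s[1], s[3]), (s[2], s[3]))}
hull = ConvexHull(pts)
boundary = {tuple(sorted(e)) for f in hull.simplices
            for e in ((f[0], f[1]), (f[0], f[2]), (f[1], f[2]))}
interior = len(edges - boundary)
# |T| = e_i + n - n_0 - 3 with n_0 = 0
assert len(tri.simplices) == n + interior - 3
```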
Thus, we have the following equation:
|D| + Σ_{i=1}^{m} 2(k_i − 2) = e_i(D) + Σ_{i=1}^{m} k_i + n − m − 3.
This can be rewritten as
|D| = e_i(D) + n − m − 3 + Σ_{i=1}^{m} (4 − k_i).
Taking into account that e_i(D) ≥ Σ_{i=1}^{m} 2(k_i − 3) (because the diagonals used by the two triangulations of each mismatched region are interior edges of the dissection), we get the inequality
|D| ≥ n − m − 3 + Σ_{i=1}^{m} (k_i − 2).
Finally note that in a mismatching dissection we have m ≥ 1 and k_i ≥ 4. This gives the desired lower bound.
(2) Now we look at the proof of the upper bound on dissections. Given a 3-dissection, we add tetrahedra of volume zero to complete it to a triangulation with flat simplices that has the same number of vertices. One can also think we are filling in the holes created by an inflation with (deformed) tetrahedra.
The lemma states that mismatched regions were of the shape of convex polygons. The 2-simplices forming a mismatched region were divided into two groups (those becoming apart by an inflation). The two groups formed different triangulations of a convex polygon, and they had no interior edges in common. In this situation, we can make a sequence of flips (see [17]) between the two triangulations with the property that any edge once disappeared does not appear again (see Figure 3.1). We add one abstract, volume zero tetrahedron for each flip, and obtain an abstract triangulation of a 3-ball.
The triangulation with flat simplices we created is a triangulated 3-ball with n
vertices. By adding a new point in a fourth dimension, and coning from the boundary
2-simplices to the point, we obtain a triangulated 3-sphere containing the original 3-
ball in its boundary. From the upper bound theorem for spheres (for an introduction
Fig. 3.1. Filling in holes with tetrahedra according to flips.
to this topic see [30]), its size is bounded from above by the number of facets of a cyclic 4-polytope minus 2n − 4, the number of 2-simplices in the boundary of D. The 4-dimensional cyclic polytope with n + 1 vertices is well-known to have (n + 1)(n − 2)/2 facets (see [11, page 63]), which completes the proof after a trivial algebraic calculation.
Open Problem 3.3. What is the correct upper bound theorem for dissections of d-dimensional polytopes with d ≥ 4?
In our proof of Proposition 3.2 we built a triangulated PL-ball from a three-dimensional dissection, using the flip connectivity of triangulations of a convex n-gon. Unfortunately the same cannot be applied in higher dimensions as the flip connectivity of triangulations of d-polytopes is known to be false for convex polytopes in general [22]. But even worse, the easy property we used from Lemma 3.1 that mismatched regions are convex polyhedra fails in dimension d ≥ 4.
Proposition 3.4. The mismatched regions of a dissection of a convex 4-polytope
can be non-convex polyhedra.
Proof. The key idea is as follows: suppose we have a 3-dimensional convex polytope
P and two triangulations T 1 and T 2 of it with the following properties: removing
from P the tetrahedra that T 1 and T 2 have in common, the rest is a non-convex
polyhedron P 0 such that the triangulations T 0
2 of it obtained from T 1 and
do not have any interior 2-simplex in common (actually, something weaker would
su-ce: that their common interior triangles, if any, do not divide the interior of the
polytope).
In these conditions, we can construct the dissection we want as a bipyramid over
to one of the apices and T 2 to the other one. The bipyramid over the
non-convex polyhedron P 0 will be a mismatched region of the dissection.
For a concrete example, start with Schönhardt's polyhedron whose vertices are labeled 1, 2, 3 in the lower face and 4, 5, 6 in the top face. This is a non-convex polyhedron made, for example, by twisting the three vertices on the top of a triangular prism. Add two antipodal points 7 and 8 close to the "top" triangular facets (those not breaking the quadrilaterals; see Figure 3.2). For example, take as coordinates
for the points
Let P 0 be this non-convex polyhedron and let T 0
1468g.
1 cones vertex 7 to the rest of the boundary of P 0 , and T 0
vertex 8. Any
common interior triangle of T 0
would use the edge 78. But the link of 78 in
contains only the points 1, 2 and 3, and the link in T 0
contains only 4, 5 and 6.
Let P be the convex hull of the eight points, and let T 1 and T 2 be obtained from
2 by adding the three tetrahedra 1245, 2356 and 1346.
Fig. 3.2. The mismatched region of a four-dimensional dissection.
4. Optimal dissections for specific polytopes. The regular cube has been
widely studied for its smallest dissections [12, 14]. This receives the name of simplexity
of the cube. In contrast, because of the type of simplices inside a regular d-cube, a
simple volume argument shows that the maximal size of a dissection is d!, the same
as for triangulations. On the other hand, we know that the size of the maximal
triangulation of a combinatorial cube can be larger than that: For example, the
combinatorial 3-cube obtained as the prism over a trapezoid (vertices on a parabola
for instance) has triangulations of size 7. Figure 4.1 shows a triangulation with 7
simplices for those coordinatizations where the edges AB and GH are not coplanar.
The tetrahedron ABGH splits the polytope into two non-convex parts, each of which
can be triangulated with three simplices. To see this, suppose that our polytope is
a very small perturbation of a regular 3-cube. In the regular cube, ABGH becomes
a diagonal plane which divides the cube into two triangular prisms ABCDGH and
ABEFGH . In the non-regular cube, the diagonals AH and BG, respectively, become
non-convex. Any pair of triangulations of the two prisms, each using the corresponding
diagonal, together with tetrahedron ABGH give a triangulation of the perturbed cube
with 7 tetrahedra. The boundary triangulation is shown in the flat diagram. It is
worth noticing that for the regular cube the boundary triangulation we showed does
not extend to a triangulation of the interior.
Fig. 4.1. A triangulation of a combinatorial 3-cube into seven tetrahedra.
One can then ask, what is the general growth for the size of a maximal dissection
of a combinatorial cube? To answer this question, at least partially, we use the above
construction and we adapt an idea of M. Haiman, originally devised to produce small
triangulations of regular cubes [12]. The idea is that from triangulations of a d_1-cube and a d_2-cube of sizes s_1 and s_2 respectively we can get triangulations of the (d_1 + d_2)-cube: first subdividing it into s_1 s_2 copies of the product of two simplices of dimensions d_1 and d_2 and then triangulating each such piece. We recall that any triangulation of the Cartesian product of a d_1-simplex and a d_2-simplex has (d_1 + d_2 choose d_1) maximal simplices. Hence, in total we have a triangulation of the (d_1 + d_2)-cube with s_1 s_2 (d_1 + d_2 choose d_1) maximal simplices. Recursively, if one starts with a triangulation of size s of the d-cube, one obtains triangulations for the rd-cube of size (rd)! (s/d!)^r. In Haiman's context one wants s to be small, but here we want it to be big.
More precisely, denote by f(d) the function max_{C a d-cube} (max_{T a triangulation of C} |T|) and call g(d) = (f(d)/d!)^{1/d}. Haiman's argument shows that f(d_1 + d_2) ≥ f(d_1) f(d_2) (d_1 + d_2 choose d_1) or, put differently, that g(d_1 + d_2) ≥ g(d_1)^{d_1/(d_1+d_2)} g(d_2)^{d_2/(d_1+d_2)}. The value on the right hand side is the weighted geometric mean of g(d_1) and g(d_2). In particular, if both g(d_1) and g(d_2) are ≥ 1 and one of them is > 1 then g(d_1 + d_2) > 1.
We have constructed above a triangulation of size 7 for the Klee-Minty 3-cube, which proves g(3) ≥ (7/3!)^{1/3} ≈ 1.0527. Using Haiman's idea we can now construct "large" triangulations of certain 4-cubes and 5-cubes, which prove respectively that g(4) ≥ (7/3!)^{1/4} and g(5) ≥ (7/3!)^{1/5} (take d_1 = 3 and d_2 equal to one and two respectively). Finally, since any d > 5 can be expressed as a sum of 3's and 4's, we have g(d) ≥ min{g(3), g(4)} ≥ 1.039 for any d > 5. Hence:
Proposition 4.1. For the family of combinatorial d-cubes with d > 2 the function f(d) admits the lower bound f(d) ≥ c^d · d!, where c ≥ 1.031.
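The constants quoted above are easy to recompute; the snippet below is just our numerical check of the quantities used in the argument (it is not part of the paper):

```python
# quick numeric check of the constants used above
g3 = (7 / 6) ** (1 / 3)          # from the 7-tetrahedra triangulation of a 3-cube
g4 = (7 / 6) ** (1 / 4)          # 4-cube = 3-cube x 1-cube via the product trick
g5 = (7 / 6) ** (1 / 5)          # 5-cube = 3-cube x 2-cube
print(round(g3, 4), round(g4, 4), round(g5, 4))   # 1.0527 1.0393 1.0313
```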
Exactly as in Haiman's paper, the constant c can be improved (asymptotically)
if one starts with larger triangulations for the smaller dimensional cubes. Using
computer calculations (see Remark 4.5), we obtained a maximal triangulation for the
Klee-Minty 4-cube with 38 maximal simplices, which shows that g(d) ≥ (38/4!)^{1/4} ≈ 1.122 for every d divisible by 4 (see [1] for a complete study of this family of cubes).
We omit listing the triangulation here but it is available from the authors by request.
Open Problem 4.2. Is the sequence g(d) bounded? In other words, is there an upper bound of type c^d · d! for the function f(d)? Observe that the same question for minimal triangulations of the regular d-cube (whether there is a lower bound of type c^d · d! for some c > 0) is open as well. See [26] for the best lower bound known.
We continue our discussion with the study of optimal triangulations for three-dimensional
prisms and antiprisms. We will call an m-prism any 3-polytope with
the combinatorial type of the product of a convex m-gon with a line segment. An m-
antiprism will be any 3-polytope whose faces are two convex m-gons and 2m triangles,
each m-gon being adjacent to half of the triangles. Vertices of the two m-gons are
connected with a band of alternately up and down pointing triangles.
Each such polyhedron has a regular coordinatization in which all the faces are
regular polygons, and a realization space which is the set of all possible coordinatiza-
tions that yield the same combinatorial information [20]. Our first result is valid in
the whole realization space.
Proposition 4.3. For any three-dimensional m-prism, in any of its possible coordinatizations, the number of tetrahedra in a minimal triangulation is 2m − 5 + ⌈m/2⌉.
For any three-dimensional m-antiprism, in any of its possible coordinatizations, the number of tetrahedra in a minimal triangulation is 3m − 5.
Proof. In what follows we use the word cap to refer to the m-gon facets appearing
in a prism or antiprism. We begin our discussion proving that any triangulation of
the prism or antiprism has at least the size we state, and then we will construct
triangulations with exactly that size.
We first prove that every triangulation of the m-prism requires at least 2m − 5 + ⌈m/2⌉ tetrahedra. We call a tetrahedron of the m-prism mixed if it has two vertices on
the top cap and two vertices on the bottom cap of the prism, otherwise we say that
the tetrahedron is top-supported when it has three vertices on the top (respectively
bottom-supported). For example, Figure 4.2 shows a triangulation of the regular 12-
prism, in three slices. Parts (a) and (c) represent, respectively, the bottom and top
caps. Part (b) is the intersection of the prism with the parallel plane at equal distance
to both caps. In this intermediate slice, bottom or top supported tetrahedra appear
as triangles, while mixed tetrahedra appear as quadrilaterals.
(b)
(a) (c)
Fig. 4.2. A minimal triangulation of the regular 12-prism.
Because all triangulations of an m-gon have m − 2 triangles, there are always exactly 2m − 4 tetrahedra that are bottom or top supported. In the rest, we show there are at least ⌈m/2⌉ − 1 mixed tetrahedra. Each mixed tetrahedron marks an edge
of the top, namely the edge it uses from the top cap. Of course, several mixed
tetrahedra could mark the same top edge. Group together top-supported tetrahedra
that have the same bottom vertex. This grouping breaks the triangulated top m-gon
into polygonal regions. Note that every edge between two of these regions must be
marked. For example, in part (c) of Figure 4.2 the top cap is divided into 6 regions
by 5 marked edges (the thick edges in the Figure). Let r equal the number of regions
under the equivalence relation we set. There are r 1 interior edges separating the
r regions, and all of them are marked. Some boundary edges of the top cap may be
marked too (none of them is marked in the example of Figure 4.2).
We can estimate the marked edges in another way: There are m edges on the
boundary of the top, which appear partitioned among some of the regions (it could
be the case some region does not contain any boundary edge of the m-gon). We claim
that no more than two boundary edges per region will be unmarked (∗). This follows
because a boundary edge is unmarked only when the top-supported tetrahedron that
contains it has as its bottom vertex the point of the bottom cap that is directly under one of the vertices
of the edge. In a region, at most two boundary edges can satisfy this. Hence we get
at least m − 2r marked edges on the boundary of the top and at least (r − 1) + (m − 2r) = m − r − 1
marked edges in total. Thus the number of mixed tetrahedra is at
least the maximum of r − 1 and m − r − 1. In conclusion, we get that, indeed, the
number of mixed tetrahedra is bounded below by ⌈m/2⌉ − 1. Note that we only use
the combinatorics and convexity of the prism in our arguments. We will show that
minimal triangulations achieve this lower bound, but then, observe that if m is even, in
a minimal triangulation we must have r = m/2 and no boundary edge can be marked,
as is the case in Figure 4.2. If m is odd, then we must have r ∈ {(m − 1)/2, (m + 1)/2}
and at most one boundary edge can be marked.
The proof that any triangulation of an m-antiprism includes at least 3m − 5 tetrahedra
is similar. There are 2m − 4 top-supported and bottom-supported tetrahedra
in any triangulation and there are r − 1 marked edges between the regions in the
top. The only difference is that, instead of claim (∗), one has at most one unmarked
boundary edge per region. Thus there are at least m − r marked edges in the boundary
of the top, and in total at least (r − 1) + (m − r) = m − 1 marked edges in the top.
Hence there exist at least (2m − 4) + (m − 1) = 3m − 5 tetrahedra in any triangulation.
For an m-antiprism we can easily create a triangulation of size 3m − 5 by choosing
any triangulation of the bottom m-gon and then coning a chosen vertex v of the top
m-gon to the m − 2 triangles in that triangulation and to the 2m − 3 triangular facets of
the m-antiprism which do not contain v. This construction is exhibited in Figure 4.3.
Parts (a) and (c) show the bottom and top caps triangulated (each with its 5 marked
edges) and part (b) an intermediate slice with the 5 mixed tetrahedra appearing as
quadrilaterals.
Fig. 4.3. A minimal triangulation of the regular 6-antiprism.
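To make the coning construction concrete, here is a small sketch that enumerates the 3m − 5 tetrahedra. The labeling u_1, ..., u_m for the bottom cap and v_1, ..., v_m for the top cap with side edges v_i u_i and u_i v_{i+1}, and the fan triangulation of the bottom cap, are choices made only for this illustration.

    def minimal_antiprism_triangulation(m):
        """Cone v_1 over a fan triangulation of the bottom cap and over the
        2m - 3 side triangles of the antiprism that do not contain v_1."""
        U = [f"u{i}" for i in range(1, m + 1)]
        V = [f"v{i}" for i in range(1, m + 1)]
        bottom_fan = [(U[0], U[i], U[i + 1]) for i in range(1, m - 1)]     # m - 2 triangles
        side = [(V[i], U[i], V[(i + 1) % m]) for i in range(m)] + \
               [(U[i], V[(i + 1) % m], U[(i + 1) % m]) for i in range(m)]   # 2m side triangles
        apex = V[0]
        tets = [t + (apex,) for t in bottom_fan] + \
               [t + (apex,) for t in side if apex not in t]
        assert len(tets) == 3 * m - 5
        return tets

For m = 6 this yields 13 tetrahedra, which is the size of the minimal triangulation shown in Figure 4.3.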
For an m-prism, let u_i and v_i, i = 1, ..., m, denote the top and bottom vertices,
respectively, so that the vertices of each cap are labeled consecutively and u_i v_i is
always an edge of the prism.
If m is even we can chop off the vertices u_i for odd i and v_j for even j, so that
the prism is decomposed into m tetrahedra and an (m/2)-antiprism. The antiprism can
be triangulated into 3m/2 − 5 tetrahedra, which gives a triangulation of the prism into
5m/2 − 5 tetrahedra, as desired. Actually, this is how the triangulation of Figure 4.2
can be obtained from that of Figure 4.3.
If m is odd we do the same, except that we chop off only the vertices u_1, u_3, ..., u_{m−2}
and v_2, v_4, ..., v_{m−1} (no vertex is chopped in the edge u_m v_m). This produces m − 1
tetrahedra and an ((m+1)/2)-antiprism. We triangulate the antiprism into (3m+3)/2 − 5
tetrahedra and this gives a triangulation of the m-prism into (5m+1)/2 − 5 tetrahedra.
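The counting in Proposition 4.3 and the chop-off construction above can be checked with a few lines; the function names are ours.

    from math import ceil

    def minimal_prism_size(m):
        """2m - 5 + ceil(m/2): 2m - 4 top/bottom-supported tetrahedra plus
        at least ceil(m/2) - 1 mixed ones."""
        return 2 * m - 5 + ceil(m / 2)

    def minimal_antiprism_size(m):
        """3m - 5 = (m - 2) + (2m - 3), the size of the coning construction."""
        return 3 * m - 5

    # the chop-off construction for the prism reproduces the same numbers:
    for m in range(4, 13):
        chopped = m + minimal_antiprism_size(m // 2) if m % 2 == 0 \
                  else (m - 1) + minimal_antiprism_size((m + 1) // 2)
        assert chopped == minimal_prism_size(m)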
We have seen that the coordinates are not important when calculating minimal
triangulations of the three-dimensional prisms and antiprisms. On the other hand,
the difference in size of the maximal triangulations can be quite dramatic. Below we
prove that in certain coordinatizations it is roughly m²/2 and show experimental data
indicating that for the regular prism it is close to m²/4.
Proposition 4.4. Let A_m be a prism of order m, with all its side edges parallel.
1. The size of a maximal triangulation of A_m lies between ⌊(m² + 6m − 15)/4⌋ and (m² + m − 6)/2.
2. The upper bound is achieved if the two caps (m-gon facets) are parallel and
there is a direction in which the whole prism projects onto one of its side
quadrangular facets. (For a concrete example, let one of the m-gon facets
have vertices on a parabola and let A_m be the product of it with a segment.)
Proof. Let the vertices of the prism be labeled u_1, ..., u_m and v_1, ..., v_m so that
the u_i's and the v_j's form the two caps, vertices in each cap are labeled consecutively,
and u_i v_i is always a side edge.
For the upper bound in part (1), we have to prove that a triangulation of A_m has
at most (m² + m − 6)/2 tetrahedra or, equivalently, at most (m² + m − 6)/2 − (2m − 3) = m(m − 3)/2
diagonals (recall that a triangulation of a 3-polytope with n vertices, none of them interior,
and e interior edges has exactly n + e − 3 tetrahedra). The possible diagonals are
the segments u_i v_j such that the difference of the indices of their endpoints is not in
{−1, 0, 1} modulo m. This gives exactly twice the number we want. But for any such i and j
the diagonals u_i v_j and u_j v_i cross each other, so only one of them can appear in each triangulation.
We now prove that the upper bound is achieved if A_m is in the conditions of part
(2). In fact, the condition on A_m that we will need is that for any 1 ≤ i < j ≤ k < l ≤ m,
the point v_j sees the triangle v_i u_k u_l from the same side as v_k and v_l (i.e. "from
above" if we call top cap the one containing the v_i's). With this we can construct a
triangulation with (m² + m − 6)/2 tetrahedra:
First cone the vertex v_1 to any triangulation of the bottom cap (this gives m − 2
tetrahedra). The m − 2 upper boundary facets of this cone are visible from v_2, and
we cone them to it (again m − 2 tetrahedra). The new m − 2 upper facets are visible
from v_3 and we cone them to it (m − 2 tetrahedra more). Now, one of the upper
facets of the triangulation is a triangle of the upper cap, but the other m − 3 are
visible from v_4, so we cone them and introduce m − 3 tetrahedra. Continuing the
process, we introduce m − k + 1 tetrahedra when coning the vertex v_k, k = 4, ..., m,
which gives a total of 3(m − 2) + ∑_{j=1}^{m−3} j = (m² + m − 6)/2 tetrahedra.
The triangulation we have constructed is the placing triangulation [17] associated
to any ordering of the vertices finishing with v_1, v_2, ..., v_m. A different description of the
same triangulation is that it cones the bottom cap to v_1, the top cap to u_m, and its
mixed tetrahedra are all the possible u_k u_{k+1} v_l v_{l+1} with l < k. This gives
(m − 2) + (m − 2) + (m − 1)(m − 2)/2 = (m² + m − 6)/2 tetrahedra.
We finally prove the lower bound stated in part (1). Without loss of generality,
we can assume that our prism has its two caps parallel (if not, do a projective
transformation keeping the side edges parallel). Then, A_m can be divided into two
prisms in the conditions of part (2) of sizes k and l with k + l = m + 2: take any
two side edges of A_m which possess parallel supporting planes and cut A_m along the
plane containing both edges. By part (2), we can triangulate the two subprisms with
(k² + k − 6)/2 and (l² + l − 6)/2 tetrahedra, respectively, taking care that the two triangulations
use the same diagonal in the dividing plane. This gives a triangulation of A_m with
(k² + k − 6)/2 + (l² + l − 6)/2 tetrahedra. This expression achieves its minimum
when k and l are as similar as possible, i.e. k = ⌈(m+2)/2⌉ and l = ⌊(m+2)/2⌋. Plugging
these values in the expression gives a triangulation of size ⌊(m² + 6m − 15)/4⌋.
Based on an integer programming approach we can compute maximal triangulations
of specific polytopes (see the remark at the end of the article). Our computations
with regular prisms up to m = 12 show that the size of their maximal triangulations
achieves the lower bound stated in part (1) of Proposition 4.4 (see Table 4.1). In
other words, the procedure of dividing them into two prisms of sizes ⌈(m+2)/2⌉
and ⌊(m+2)/2⌋ in the conditions of part (2) of Proposition 4.4 and triangulating the
subprisms independently yields maximal triangulations.
We have also computed maximal sizes of triangulations for the regular m-antiprisms
up to m = 12, which turn out to follow the formula ⌊m²/4⌋ + 2m − 4. A construction
of a triangulation of this size for every m can be made as follows: Let the vertices
of the regular m-antiprism be labeled u_1, ..., u_m and v_1, ..., v_m so that they form
the vertices of the two caps consecutively in this order and v_i u_i and u_i v_{i+1} are side
edges. We let v_{m+1} = v_1. The triangulation is made by placing the vertices in any
ordering finishing with the two apexes u_{⌈m/2⌉} and v_1. The tetrahedra used are the
bottom-supported tetrahedra with apex v_1, the top-supported tetrahedra with apex u_{⌈m/2⌉},
and the corresponding mixed tetrahedra.
We conjecture that these formulas for regular base prisms and antiprisms actually
give the sizes of their maximal triangulations for every m, but we do not have a proof.
m                          3   4   5   6   7   8   9  10  11  12
Prism (regular base)       3   6  10  14  19  24  30  36  43  50
Antiprism (regular base)   4   8  12  17  22  28  34  41  48  56
Table 4.1
Sizes of maximal triangulations of prisms and antiprisms (m = 3, ..., 12).
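The two rows of Table 4.1 follow the closed forms discussed above (the prism row being the lower bound of Proposition 4.4(1), the antiprism row the observed formula); a short sketch reproducing them:

    def max_regular_prism(m):       # lower bound of Proposition 4.4(1), attained for m <= 12
        return (m * m + 6 * m - 15) // 4

    def max_regular_antiprism(m):   # observed formula for m <= 12
        return (m * m) // 4 + 2 * m - 4

    print([max_regular_prism(m) for m in range(3, 13)])      # 3 6 10 14 19 24 30 36 43 50
    print([max_regular_antiprism(m) for m in range(3, 13)])  # 4 8 12 17 22 28 34 41 48 56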
Remark 4.5. How can one find minimal and maximal triangulations in specific
instances? The approach we followed for computing Tables 1.1 and 4.1 and some
of the results in Proposition 2.3 is the one proposed in [8], based on the solution of
an integer programming problem. We think of the triangulations of a polytope as
the vertices of the following high-dimensional polytope: Let A be a d-dimensional
polytope with n vertices. Let N be the number of d-simplices in A. We define P_A
as the convex hull in R^N of the set of incidence vectors of all triangulations of A.
For a triangulation T the incidence vector v_T has coordinates (v_T)_σ = 1 if the d-simplex σ belongs to T
and (v_T)_σ = 0 otherwise. The polytope P_A is the universal polytope defined in
general by Billera, Filliman and Sturmfels [3] although it appeared in the case of
polygons in [7]. In [8], it was shown that the vertices of PA are precisely the integral
points inside a polyhedron that has a simple description in terms of the oriented
matroid of A (see [8] for information on oriented matroids). The concrete integer
programming problems were solved using the CPLEX Linear Solver™. The program to
generate the linear constraints is a small C++ program written by Samuel Peterson
and the first author. Source code, brief instructions, and data files are available via
ftp at http://www.math.ucdavis.edu/~deloera. An alternative implementation
by A. Tajima is also available [27, 28]. He used his program to corroborate some of
these results.
It should be mentioned that a simple variation of the ideas in [8] provides enough
equations for an integer program whose feasible vertices are precisely the 0/1-vectors
of dissections. The incidence vectors of dissections of conv(A), for a point set A,
are just the 0/1 solutions to the system of equations ⟨x, v_T⟩ = 1, where the v_T are the
incidence vectors of the regular triangulations T of the Gale transform of A (regular
triangulations in the Gale transform are the same as chambers of A). Generating all
these equations is as hard as enumerating all the chambers of A. Nevertheless, it is
enough to use those equations coming from placing triangulations (see [23, Section
3.2]), which gives a total of about n^{d+1} equations if A has n points and dimension d.
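To illustrate the integer programming formulation sketched in this remark, the following fragment sets up such a 0/1 program with the PuLP library; the incidence vectors of the placing triangulations are assumed to be given, and all identifiers are illustrative.

    import pulp

    def size_extremal_dissection(num_simplices, triangulation_vectors, maximize=True):
        """triangulation_vectors: 0/1 incidence vectors (length num_simplices) of
        (placing) triangulations.  Solves max/min sum_s x_s subject to
        <x, v_T> = 1 for every given v_T, with x binary."""
        sense = pulp.LpMaximize if maximize else pulp.LpMinimize
        prob = pulp.LpProblem("dissections", sense)
        x = [pulp.LpVariable(f"x_{s}", cat="Binary") for s in range(num_simplices)]
        prob += pulp.lpSum(x)                                   # number of simplices used
        for v in triangulation_vectors:
            prob += pulp.lpSum(v[s] * x[s] for s in range(num_simplices)) == 1
        prob.solve()
        return [s for s in range(num_simplices) if x[s].value() == 1]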
Acknowledgments. We are grateful to Alexander Below and Jürgen Richter-Gebert
for their help and ideas in the proofs of Propositions 3.2 and 3.4. Alexander
Below made Figure 3.2 using the package Cinderella. The authors thank Akira Tajima
and Jörg Rambau for corroborating many of the computational results. We thank
Samuel Peterson for his help with our calculations. Finally, we thank Hiroshi Imai,
Bernd Sturmfels, and Akira Tajima for their support of this project.
--R
Deformed products and maximal shadows of polytopes
An algorithmic theory of lattice points in polyhedra
Constructions and complexity of secondary polytopes
Minimal simplicial dissections and triangulations of convex 3-polytopes
New York
Triangulations (tilings) and certain block triangular matrices
Tetrahedrizing point sets in three dimensions
binary covers of rational polyhedral cones
A simple and relatively efficient triangulation of the n-cube
The Cayley trick
Simplexity of the cube
Triangulations of integral polytopes and Ehrhart polynomials
Subdivisions and triangulations of polytopes
Triangulations for the cube
TOPCOM: a program for computing all triangulations of a point set
On triangulations of the convex hull of n points
A point con
Triangulations of oriented matroids
Optimality and integer programming formulations of triangulations in general dimension
Optimizing geometric triangulations by using integer programming
Enumerating triangulations for products of two simplices and for arbitrary con
--TR
--CTR
Mike Develin, Note: maximal triangulations of a regular prism, Journal of Combinatorial Theory Series A, v.106 n.1, p.159-164, April 2004
Jesus A. De Loera , Elisha Peterson , Francis Edward Su, A polytopal generalization of Sperner's lemma, Journal of Combinatorial Theory Series A, v.100 n.1, p.1-26, October 2002 | archimedean solid;antiprism;lattice polytope;mismatched region;combinatorial d-cube;prism;dissection;triangulation |
587931 | Scheduling Unrelated Machines by Randomized Rounding. | We present a new class of randomized approximation algorithms for unrelated parallel machine scheduling problems with the average weighted completion time objective. The key idea is to assign jobs randomly to machines with probabilities derived from an optimal solution to a linear programming (LP) relaxation in time-indexed variables. Our main results are a $(2+\varepsilon)$-approximation algorithm for the model with individual job release dates and a $(3/2+\varepsilon)$-approximation algorithm if all jobs are released simultaneously. We obtain corresponding bounds on the quality of the LP relaxation. It is an interesting implication for identical parallel machine scheduling that jobs are randomly assigned to machines, in which each machine is equally likely. In addition, in this case the algorithm has running time O(n log n) and performance guarantee 2. Moreover, the approximation result for identical parallel machine scheduling applies to the on-line setting in which jobs arrive over time as well, with no difference in performance guarantee. | Introduction
It is by now well-known that randomization can help in the design of algorithms, cf., e. g., [27, 26]. One way
of guiding randomness is the use of linear programs (LPs). In this paper, we give LP-based approximation algorithms
for problems which are particularly well-known for the difficulties to obtain good lower bounds: machine
(or processor) scheduling problems. Because of the random choices involved, our algorithms are rather randomized
approximation algorithms. A randomized r-approximation algorithm is a polynomial-time algorithm that
produces a feasible solution whose expected value is within a factor of r of the optimum; r is also called the
(expected) performance guarantee of the algorithm. Actually, most often we compare the output of an algorithm
with a lower bound given by an optimum solution to a certain LP relaxation. Hence, at the same time we obtain an
analysis of the quality of the respective LP. All our off-line algorithms can be derandomized with no difference in
performance guarantee, but at the cost of increased (but still polynomial) running times.
We consider the following model. We are given a set J of n jobs (or tasks) and m unrelated parallel machines.
Each job j has a positive integral processing requirement p_{ij} which depends on the machine i that job j will be processed
on. Each job j must be processed for the respective amount of time on one of the m machines, and may
be assigned to any of them. Every machine can process at most one job at a time. In preemptive schedules, a job
may repeatedly be interrupted and continued later on another (or the same) machine. In nonpreemptive schedules,
a job must be processed in an uninterrupted fashion. Each job j has an integral release date r_j ≥ 0 before which
Parts of this paper appeared in a preliminary form in [36, 35].
# M.I.T., Sloan School of Management and Operations Research Center, E53-361,
Fachbereich Mathematik, MA 6-1, Technische Universität Berlin, Straße des 17. Juni 136, D-10623 Berlin, Germany, skutella@math.tu-berlin.de
it cannot be processed. We denote the completion time of job j in a schedule S by C_j^S, or C_j if no confusion is
possible. We seek to minimize the total weighted completion time: a weight w_j ≥ 0 is associated with each job j,
and the goal is to find a schedule S that minimizes ∑_{j∈J} w_j C_j. In scheduling, it is quite convenient to refer to the
respective problems using the standard classification scheme of Graham, Lawler, Lenstra, and Rinnooy Kan [17].
The nonpreemptive problem R | r_j | ∑ w_j C_j just described is strongly NP-hard [23].
Scheduling to minimize the total weighted completion time (or, equivalently, the average weighted completion
time) has recently achieved a great deal of attention, partly because of its importance as a fundamental problem
in scheduling, and also because of new applications, for instance, in compiler optimization [5] or in parallel
computing [3]. There has been significant progress in the design of approximation algorithms for this kind of
problem, which has led to the development of the first constant worst-case bounds in a number of settings. This
progress essentially follows on the one hand from the use of preemptive schedules to construct nonpreemptive
ones [31, 4, 7, 14, 16]. On the other hand, one solves an LP relaxation and then a schedule is constructed by list
scheduling in a natural order dictated by the LP solution [31, 19, 34, 18, 25, 14, 8, 28, 37].
In this paper, we utilize a different idea: random assignments of jobs to machines. To be more precise, we
exploit an LP relaxation in time-indexed variables for the problem R | r_j | ∑ w_j C_j, and we then show that a certain
variant of randomized rounding leads to a (2+ε)-approximation algorithm, for any ε > 0. In the absence of
nontrivial release dates, the performance guarantee can be improved to 3/2 + ε. At the same moment, the corresponding
LP is a 2-relaxation or a 3/2-relaxation of the respective problem, i.e., the true optimum is always within this
factor of the optimal value of the LP relaxation. Our algorithm improves upon a 16/3-approximation algorithm of
Hall, Shmoys, and Wein [19] that is also based on time-indexed variables which have a different meaning, how-
ever. In contrast to their approach, our algorithm does not rely on Shmoys and Tardos' rounding technique for
the generalized assignment problem [39]. We rather exploit the LP by interpreting LP values as probabilities with
which jobs are assigned to machines. For an introduction to and the application of randomized rounding to other
combinatorial optimization problems, the reader is referred to [33, 26].
Using a different approach, the second author has subsequently developed slightly improved approximation
results for the problems under consideration. For the problem R | | ∑ w_j C_j he gives a 3/2-approximation algorithm
[41] that is based on a convex quadratic programming relaxation in assignment variables, which is inspired by the
time-indexed LP relaxation presented in this paper. Only recently, this approach has been generalized to the
problem with release dates for which it yields performance guarantee 2 [42].
For the special case of identical parallel machines, i.e., p_{ij} = p_j for each job j and all machines i,
Chakrabarti et al. [4] obtained a (2.89 + ε)-approximation by refining an online greedy framework of Hall et al.
[19]. The former best known LP-based algorithm, however, relies on an LP relaxation solely in completion time
variables which is weaker than the one we propose. It has performance guarantee 4 − 1/m (see [18] for the
details). For the LP we use here, an optimum solution can greedily be obtained by a certain preemptive schedule
on a virtual single machine which is m times as fast as any of the original machines. The idea of using a preemptive
relaxation on such a virtual machine was employed before by Chekuri, Motwani, Natarajan, and Stein [7]. They
show that any preemptive schedule on such a machine can be converted into a nonpreemptive schedule on the
identical parallel machines such that, for each job j, its completion time in the nonpreemptive schedule is at
most 3 − 1/m times its preemptive completion time. For the problem to minimize the average completion time,
they refine this to a 2.85-approximation algorithm.
For P | r_j | ∑ w_j C_j, the algorithm we propose delivers in time O(n log n) a solution that is expected to be within
a factor of 2 of the optimum. Since the LP relaxation we use is even a relaxation of the corresponding preemptive
problem, our algorithm is also a 2-approximation for P | r_j, pmtn | ∑ w_j C_j, which improves upon a 3-approximation
algorithm due to Hall, Schulz, Shmoys, and Wein [18]. In particular, our result implies that the value of an optimal
nonpreemptive schedule is at most a factor 2 above the value of an optimal preemptive schedule. For the problem without
release dates, P | | ∑ w_j C_j, our algorithm achieves performance guarantee 3/2. Since an optimum solution to
the LP relaxation can be obtained greedily, our algorithm also works in the corresponding online setting where
jobs continually arrive to be processed and, for each time t, we must construct the schedule until time t without
any knowledge of the jobs that will arrive afterwards; the algorithm achieves competitive ratio 2 for both the
nonpreemptive and the preemptive variant of this setting.
Recently, Skutella and Woeginger [43] developed a polynomial-time approximation scheme for the problem
P | | ∑ w_j C_j which improves upon the previously best known (1 + √2)/2-approximation algorithm due to
Kawaguchi and Kyan [22]. Subsequently, Chekuri, Karger, Khanna, Skutella, and Stein [6] gave polynomial-time
approximation schemes for the problem P | r_j | ∑ w_j C_j and its preemptive variant, and also for the
corresponding problems on a constant number of unrelated machines, Rm | r_j | ∑ w_j C_j and Rm | r_j, pmtn | ∑ w_j C_j.
On the other hand, it has been shown by Hoogeveen, Schuurman, and Woeginger [20] that the problems R | r_j | ∑ C_j
and R | | ∑ w_j C_j are MAXSNP-hard and therefore do not have a polynomial time approximation scheme, unless
P = NP.
The rest of the paper is organized as follows. In Section 2, we start with the discussion of our main result:
the algorithm with performance guarantee 2 in the general context of unrelated parallel machine scheduling. In
the next section, we give combinatorial approximation algorithms for identical parallel machine scheduling. We
also show how to use these algorithms in an online setting. Then, in Section 4, we discuss the derandomization
of the previously given randomized algorithms. Finally, in Section 5 we give the technical details of turning the
pseudo-polynomial algorithm of Section 2 into a polynomial-time algorithm with performance guarantee 2 + ε.
We conclude by pointing out some open problems in Section 6.
2 Unrelated Parallel Machine Scheduling with Release Dates
In this section, we consider the problem R | r_j | ∑ w_j C_j. As in [30, 19, 18, 42], we will actually discuss a slightly
more general problem in which the release date of every job j may also depend on the machine. The release
date of job j on machine i is thus denoted by r_{ij}. Machine-dependent release dates are relevant to model network
scheduling in which parallel machines are connected by a network, each job is located at a given machine at time
0, and cannot be started on another machine until sufficient time elapses to allow the job to be transmitted to its
new machine. This model has been introduced in [9, 1].
The problem R | r_j | ∑ w_j C_j is well-known to be strongly NP-hard; in fact, already the special cases
P2 | | ∑ w_j C_j and 1 | r_j | ∑ C_j are NP-hard, see [2, 23]. The first nontrivial approximation algorithm for this problem was
given by Phillips, Stein, and Wein [30]. It has performance guarantee O(log² n). Subsequently, Hall et al. [19]
gave a 16/3-approximation algorithm which relies on a time-indexed LP relaxation whose optimum value serves
as a surrogate for the true optimum in their estimations. We use a somewhat similar LP relaxation, but whereas
Hall et al. invoke the deterministic rounding technique of Shmoys and Tardos [39] to construct a feasible schedule
we randomly round LP solutions to feasible schedules.
Let T := max_{i,j} r_{ij} + ∑_{j∈J} max_i p_{ij} be the time horizon, and introduce for every job j ∈ J, every machine
i = 1, ..., m, and every point in time t = 0, 1, ..., T − 1 a variable y_{ijt} which represents the time job j is processed on
machine i within the time interval (t, t+1]. Equivalently, one can say that a y_{ijt}/p_{ij}-fraction of job j is being
processed on machine i within the time interval (t, t+1]. The LP, which is an extension of a single machine LP
relaxation of Dyer and Wolsey [10], is as follows:
minimize  ∑_{j∈J} w_j C_j^{LP}
subject to
  (1)  ∑_{i=1}^{m} ∑_{t=0}^{T−1} y_{ijt}/p_{ij} = 1                                          for all j ∈ J,
  (2)  ∑_{j∈J} y_{ijt} ≤ 1                                                                   for all i = 1, ..., m and t = 0, ..., T−1,
  (3)  C_j^{LP} = ∑_{i=1}^{m} ∑_{t=0}^{T−1} ( (y_{ijt}/p_{ij}) (t + 1/2) + y_{ijt}/2 )        for all j ∈ J,
  (4)  C_j^{LP} ≥ ∑_{i=1}^{m} ∑_{t=0}^{T−1} y_{ijt}                                           for all j ∈ J,
  (5)  y_{ijt} = 0                                                                            for all i, j and t < r_{ij},
       y_{ijt} ≥ 0                                                                            for all i, j, t.
Equations (1) ensure that the whole processing requirement of every job is satisfied. The machine capacity constraints
(2) simply express that each machine can process at most one job at a time. Now, for (3), consider an
arbitrary feasible schedule S where job j is being continuously processed between time C_j^S − p_{hj} and C_j^S on machine
h. Then, the expression for C_j^{LP} in (3) corresponds to the real completion time C_j^S of j if we assign the values
to the LP variables y_{ijt} as defined above, i.e., y_{hjt} = 1 for t = C_j^S − p_{hj}, ..., C_j^S − 1 and y_{ijt} = 0 otherwise.
The right-hand side of (4) equals the processing time p_{hj} of job j in the schedule S, and is therefore a lower
bound on its completion time C_j^S. Finally, constraints (5) ensure that no job is processed before its release date.
Hence, (LP_R) is a relaxation of the scheduling problem R | r_j | ∑ w_j C_j. In fact, note that even the corresponding
mixed-integer program, where the y-variables are forced to be binary, is only a relaxation.
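As an illustration (not part of the original computations), the relaxation (LP_R) can be set up with the PuLP modelling library as follows; the instance data p, r, w and the function name are placeholders chosen for this sketch.

    import pulp

    def build_lp_r(p, r, w):
        """p[i][j]: processing time of job j on machine i; r[i][j]: release date;
        w[j]: weight.  Returns the PuLP model and its variables."""
        m, n = len(p), len(w)
        T = max(r[i][j] for i in range(m) for j in range(n)) + \
            sum(max(p[i][j] for i in range(m)) for j in range(n))
        prob = pulp.LpProblem("LP_R", pulp.LpMinimize)
        # constraint (5) is enforced by creating variables only for t >= r[i][j]
        y = {(i, j, t): pulp.LpVariable(f"y_{i}_{j}_{t}", lowBound=0)
             for i in range(m) for j in range(n) for t in range(T) if t >= r[i][j]}
        C = [pulp.LpVariable(f"C_{j}", lowBound=0) for j in range(n)]
        prob += pulp.lpSum(w[j] * C[j] for j in range(n))                 # objective
        for j in range(n):                                                # (1)
            prob += pulp.lpSum(y[i, j, t] * (1.0 / p[i][j])
                               for i in range(m) for t in range(T) if (i, j, t) in y) == 1
        for i in range(m):                                                # (2)
            for t in range(T):
                prob += pulp.lpSum(y[i, j, t] for j in range(n) if (i, j, t) in y) <= 1
        for j in range(n):                                                # (3) and (4)
            prob += C[j] == pulp.lpSum(y[i, j, t] * ((t + 0.5) / p[i][j] + 0.5)
                                       for i in range(m) for t in range(T) if (i, j, t) in y)
            prob += C[j] >= pulp.lpSum(y[i, j, t]
                                       for i in range(m) for t in range(T) if (i, j, t) in y)
        # prob.solve() would then compute an optimum solution with the default solver
        return prob, y, C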
The following algorithm takes an optimum LP solution, and then constructs a feasible schedule by using a kind
of randomized rounding.
Algorithm LP ROUNDING
1) Compute an optimum solution y to (LP_R).
2) Assign each job j to a machine-time pair (i, t) independently at random with probability y_{ijt}/p_{ij};
   draw t_j from the chosen time interval (t, t+1] independently at random with uniform distribution.
3) Schedule on each machine i the jobs that were assigned to it nonpreemptively as early as possible
   in order of nondecreasing t_j.
In the last step ties can be broken arbitrarily; they occur with probability zero. For the analysis of the algorithm it
will be sufficient to assume that the random decisions for different jobs are pairwise independent.
Remark 2.1. The reader might wonder whether the seemingly artificial random choice of the t j 's in Algorithm LP
ROUNDING is really necessary. Indeed, it is not, which also implies that we could work with a discrete probability
space: The following results are still true if we simply set t_j := t whenever j is assigned to a machine-time pair (i, t);
ties are broken randomly or even arbitrarily. We mainly chose this presentation for the sake of giving a different
interpretation in terms of so-called a-points in Section 3.
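A sketch of the rounding step itself, assuming a fractional solution y of (LP_R) is given (all names illustrative):

    import random

    def lp_rounding_schedule(y, p, r):
        """Round a fractional solution y[(i, j, t)] of (LP_R) to a schedule.
        Returns, for every machine, the list of (job, start, completion)."""
        m, n = len(p), len(p[0])
        assignment = {}                          # job -> (machine, t_j)
        for j in range(n):
            pairs = [(i, t) for (i, jj, t) in y if jj == j and y[i, jj, t] > 0]
            probs = [y[i, j, t] / p[i][j] for (i, t) in pairs]
            (i, t) = random.choices(pairs, weights=probs)[0]
            assignment[j] = (i, t + random.random())   # t_j drawn from the chosen interval
        schedule = {i: [] for i in range(m)}
        for i in range(m):
            jobs = sorted((t_j, j) for j, (ii, t_j) in assignment.items() if ii == i)
            finish = 0.0
            for t_j, j in jobs:                  # as early as possible, respecting r[i][j]
                start = max(finish, r[i][j])
                finish = start + p[i][j]
                schedule[i].append((j, start, finish))
        return schedule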
The following lemma illuminates the intuition in Algorithm LP ROUNDING by relating the implications of the
second step to the solution y of (LP R ). For the analysis of the algorithm, however, we will only make use of the
second part of the Lemma. Its first part is a generalization of a result due to Goemans [14] for the single machine
case.
Lemma 2.2. Let y be the optimum solution to (LP_R) in the first step of Algorithm LP ROUNDING. Then, for each
job j ∈ J the following holds:
a) The expected value of t_j is equal to ∑_{i=1}^{m} ∑_{t=0}^{T−1} (y_{ijt}/p_{ij}) (t + 1/2).
b) The expected processing time of job j in the schedule constructed by Algorithm LP ROUNDING is equal to
∑_{i=1}^{m} ∑_{t=0}^{T−1} y_{ijt}.
Proof. First, we fix a machine-time pair (i, t) job j has been assigned to. Then, the expected processing time
of j under this condition is p_{ij}. Moreover, the conditional expectation of t_j is equal to t + 1/2. By adding these
conditional expectations over all machines and time intervals, weighted by the corresponding probabilities y_{ijt}/p_{ij}, we
get the claimed results.
Note that the lemma remains true if we start with an arbitrary, not necessarily optimal solution y to (LP R ) in
the first step of Algorithm LP ROUNDING. This is also true for the following results. The optimality of the LP
solution will only be needed to get a lower bound on the value of an optimal schedule.
Lemma 2.3. The expected completion time of each job j in the schedule constructed by Algorithm LP
ROUNDING can be bounded by
  E[C_j] ≤ 2 ∑_{i=1}^{m} ∑_{t=0}^{T−1} (y_{ijt}/p_{ij}) (t + 1/2) + ∑_{i=1}^{m} ∑_{t=0}^{T−1} y_{ijt} ;
this bound is even true if t_j is set to t in the second step of the algorithm and ties are broken arbitrarily, see
Remark 2.1. In the absence of nontrivial release dates the following stronger bound holds:
  (6)  E[C_j] ≤ ∑_{i=1}^{m} ∑_{t=0}^{T−1} (y_{ijt}/p_{ij}) (t + 1/2) + ∑_{i=1}^{m} ∑_{t=0}^{T−1} y_{ijt} ;
this bound also holds if t_j is set to t in the second step of the algorithm and ties are broken uniformly at random.
Proof. We consider an arbitrary, but fixed job j 2 J. To analyze the expected completion time of job j, we first
also consider a fixed assignment of j to a machine-time pair (i; t). Then, the expected starting time of job j under
these conditions precisely is the conditional expected idle time plus the conditional expected amount of processing
of jobs that come before j on machine i.
Observe that there is no idle time on machine i between the maximum release date of jobs on machine i which
start no later than j and the starting time of job j. It follows from the ordering of jobs and constraints (5) that this
maximum release date and therefore the idle time of machine i before the starting time of j is bounded from above
by t. In the absence of nontrivial release dates there is no need for idle time at all.
On the other hand, we get the following bound on the conditional expected processing time on machine i before
the start of j:
  ∑_{k≠j} ∑_{ℓ} y_{ikℓ} · Pr[ t_k < t_j | j → (i,t), k → (i,ℓ) ]
      ≤ ∑_{k≠j} ( ∑_{ℓ<t} y_{ikℓ} + (1/2) y_{ikt} )
      ≤ t + 1/2 = E[ t_j | j → (i,t) ] .
The last inequality follows from the machine capacity constraints (2). However, if t_j is set to t in the second step of
the algorithm and ties are broken arbitrarily, we have to replace E[ t_j | j → (i,t) ] by t + 1 on the right-hand side and get a weaker
bound of t + 1. Putting the observations together we get an upper bound of 2(t + 1/2) for the conditional expectation
of the starting time of j. In the absence of nontrivial release dates it can be bounded by t + 1/2. Unconditioning the
expectation by the formula of total expectation together with Lemma 2.2 b) yields the result.
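For completeness, the unconditioning computation behind the two bounds (it is used again in Theorems 2.4 and 2.5 below) reads:

\begin{align*}
E[C_j] &\le \sum_{i,t} \frac{y_{ijt}}{p_{ij}}\Bigl(2\bigl(t+\tfrac12\bigr)+p_{ij}\Bigr)
        = 2\sum_{i,t}\frac{y_{ijt}}{p_{ij}}\bigl(t+\tfrac12\bigr)+\sum_{i,t}y_{ijt}
        = 2\,C_j^{\mathrm{LP}} \quad\text{by (3);}\\
E[C_j] &\le \sum_{i,t}\frac{y_{ijt}}{p_{ij}}\bigl(t+\tfrac12\bigr)+\sum_{i,t}y_{ijt}
        = C_j^{\mathrm{LP}}+\tfrac12\sum_{i,t}y_{ijt}
        \le \tfrac32\,C_j^{\mathrm{LP}} \quad\text{by (3) and (4),}
\end{align*}

the second line applying in the absence of nontrivial release dates.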
Theorem 2.4. For instances of R | r_j | ∑ w_j C_j, the expected value of the schedule constructed by Algorithm LP
ROUNDING is bounded by twice the value of an optimal solution.
Proof. By Lemma 2.3 and constraints (3) the expected completion time of each job is bounded by twice its LP
completion time C LP
. Since the optimal LP value is a lower bound on the value of an optimal schedule and the
weights are nonnegative, the result follows by linearity of expectations.
Note that Theorem 2.4 still holds if we use the weaker LP relaxation where constraints (4) are missing. How-
ever, this is not true for the following result.
Theorem 2.5. For instances of R | | ∑ w_j C_j, the expected value of the schedule constructed by Algorithm LP
ROUNDING is bounded by 3=2 times the value of an optimal solution.
Proof. The result follows from Lemma 2.3 and the LP constraints (3) and (4).
Independently, the result in Theorem 2.5 has also been obtained by Fabian A. Chudak (communicated to us by
David B. Shmoys, March 1997) after reading a preliminary version of the paper on hand which only contained the
bound of 2 for R | r_j | ∑ w_j C_j given in Theorem 2.4. In the absence of nontrivial release dates, Algorithm LP ROUNDING
can be improved and simplified:
Corollary 2.6. For instances of R | | ∑ w_j C_j the approximation result of Theorem 2.5 also holds for the following
improved and simplified variant of Algorithm LP ROUNDING: In the second step we assign each job j independently
at random with probability ∑_{t=0}^{T−1} y_{ijt}/p_{ij} to machine i. In the last step we apply Smith's ratio rule [44] on each
machine, i.e., we schedule the jobs that have been assigned to machine i in order of nonincreasing ratios w_j/p_{ij}.
Proof. Notice that the random assignment of jobs to machines remains unchanged in the described variant of
Algorithm LP ROUNDING. However, for a fixed assignment of jobs to machines, sequencing the jobs according
to Smith's ratio rule on each machine is optimal. In particular, it improves upon the random sequence used in the
final step of Algorithm LP ROUNDING.
In the analysis of Algorithm LP ROUNDING we have always compared the value of the computed solution to
the optimal LP value which is itself a lower bound on the value of an optimal solution. Therefore we can state the
following result on the quality of the LP relaxation:
Corollary 2.7. The linear program (LP_R) is a 2-relaxation for R | r_j | ∑ w_j C_j (even without constraints (4)) and a
3/2-relaxation for R | | ∑ w_j C_j.
We show in the following section that (LP_R) without constraints (4) is not better than a 2-relaxation, even for
instances of P | | ∑ C_j. On the other hand, the relaxation can be strengthened by adding the constraints
  (7)  ∑_{i=1}^{m} y_{ijt} ≤ 1   for all j ∈ J and t = 0, ..., T−1.
These constraints ensure that in each time period no job can use the capacity of more than one machine. Unfor-
tunately, we do not know how to use these constraints to get provably stronger results on the quality of the LP
relaxation and better performance guarantees for Algorithm LP ROUNDING.
Notice that the results in Theorem 2.4 and Theorem 2.5 do not directly lead to approximation algorithms for
the considered scheduling problems. The reason is that we cannot solve (LP R ) in polynomial time due to the
exponentially many variables. However, we can overcome this drawback by introducing new variables which are
not associated with exponentially many time intervals of length 1, but rather with a polynomial number of intervals
of geometrically increasing size. In order to get polynomial-time approximation algorithms in this way, we have
to pay for with a slightly worse performance guarantee. For any constant e ? 0 we get approximation algorithms
with performance guarantee 2+ e and 3=2+ e for the scheduling problems under consideration. We elaborate on
this in Section 5.
It is shown in [40] that the ideas and techniques presented in this section and Section 5 can be modified to
construct approximation algorithms for the corresponding preemptive scheduling problems. Notice that, although
the LP relaxation (LP_R) allows preemptions of jobs, it is not a relaxation of R | r_j, pmtn | ∑ w_j C_j: it is shown in
[40, Example 2.10.8.] that the right-hand side of (3) can in fact overestimate the actual completion time of a job
in the preemptive schedule corresponding to a solution of (LP R ). However, one can construct an LP relaxation
for the preemptive scheduling problem by replacing (3) with a slightly weaker constraint. This leads to a (3 + ε)-
approximation algorithm for R | r_j, pmtn | ∑ w_j C_j and a (2 + ε)-approximation algorithm for R | pmtn | ∑ w_j C_j.
These results can again be slightly improved by using convex quadratic programming relaxations, see [42].
3 Identical Parallel Machine Scheduling with Release Dates
We now consider the special case of m identical parallel machines. The processing requirement and the release
date of job j no longer depend on the machine job j is processed on and are thus denoted by p j and r j , respectively.
As mentioned above, already the problem P2 | | ∑ w_j C_j is NP-hard.
In this setting, Algorithm LP ROUNDING can be turned into a purely combinatorial algorithm. Taking up an
idea that has been used earlier, e. g., by Chekuri et al. [7], we reduce the identical parallel machine instance to
a single machine instance. However, the single machine is assumed to be m times as fast as each of the original
machines, i.e., the virtual processing time of job j on this virtual single machine is p'_j := p_j/m (we assume
without loss of generality that p_j is a multiple of m). Its weight and its release date remain the same. The crucial
idea for our algorithm is to assign jobs uniformly at random to machines. Then, on each machine, we sequence the
assigned jobs in order of random a-points with respect to a preemptive schedule on the fast single machine.
For 0 < α ≤ 1, the α-point C_j^S(α) of job j with respect to a given preemptive schedule S on the fast single
machine is the first point in time when an α-fraction of job j has been completed, i.e., when j has been processed
for α · p'_j time units. In particular, C_j^S(1) = C_j^S, and for α = 0 we define
C_j^S(0) to be the starting time of job j.
Slightly varying notions of α-points were considered in [31, 19], but their full potential was only revealed when
Chekuri et al. [7] as well as Goemans [14] chose the parameter α at random. The following algorithm may be seen
as an extension of their single machine techniques to identical parallel machines.
Algorithm: RANDOM ASSIGNMENT
1) Construct a preemptive schedule S on the virtual single machine by scheduling at any point in
   time among the available jobs the one with the largest w_j/p'_j ratio.
2) For each job j ∈ J, draw α_j independently at random and uniformly distributed from [0, 1] and
   assign j uniformly at random to one of the m machines.
3) Schedule on each machine i the jobs that were assigned to it nonpreemptively as early as possible
   in nondecreasing order of C_j^S(α_j).
Notice that in the first step whenever a job is released, the job being processed (if any) will be preempted if the
released job has a larger w_j/p'_j ratio. An illustration of Algorithm RANDOM ASSIGNMENT can be found in the
Appendix. The running time of this algorithm is dominated by the effort to compute the preemptive schedule in
the first step. Goemans observed that this can be done in O(n log n) time using a priority queue [14].
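A self-contained sketch of Algorithm RANDOM ASSIGNMENT (preemptive WSPT on the virtual machine, α-points, uniform machine assignment, list scheduling); the data format and helper names are illustrative.

    import heapq, random

    def random_assignment(jobs, m):
        """jobs: list of (r_j, p_j, w_j) with p_j a multiple of m, as in the text.
        Returns a nonpreemptive schedule {machine: [(job, start, completion)]}."""
        n = len(jobs)
        virt = [p / m for (_, p, _) in jobs]                     # p'_j on the fast machine

        # Step 1: preemptive WSPT schedule on the virtual single machine.
        events = sorted(range(n), key=lambda j: jobs[j][0])      # jobs by release date
        remaining = virt[:]
        pieces = [[] for _ in range(n)]                          # processed (from, to) per job
        heap, k, now = [], 0, 0.0
        while k < n or heap:
            if not heap:
                now = max(now, jobs[events[k]][0])
            while k < n and jobs[events[k]][0] <= now:
                j = events[k]; k += 1
                heapq.heappush(heap, (-jobs[j][2] / virt[j], j)) # largest w_j/p'_j first
            _, j = heap[0]
            next_release = jobs[events[k]][0] if k < n else float("inf")
            run = min(remaining[j], next_release - now)
            pieces[j].append((now, now + run))
            remaining[j] -= run; now += run
            if remaining[j] <= 1e-12:
                heapq.heappop(heap)

        # Step 2: alpha-points and uniform machine assignment.
        def alpha_point(j, a):
            need = a * virt[j]
            for s, e in pieces[j]:
                if need <= (e - s) + 1e-12:
                    return s + need
                need -= (e - s)
            return pieces[j][-1][1]
        key = [alpha_point(j, random.random()) for j in range(n)]
        machine = [random.randrange(m) for _ in range(n)]

        # Step 3: list scheduling per machine in order of the alpha-points.
        schedule = {i: [] for i in range(m)}
        for i in range(m):
            finish = 0.0
            for j in sorted((j for j in range(n) if machine[j] == i), key=lambda j: key[j]):
                start = max(finish, jobs[j][0])
                finish = start + jobs[j][1]
                schedule[i].append((j, start, finish))
        return schedule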
In the following we will show that Algorithm RANDOM ASSIGNMENT can be interpreted as a reformulation of
Algorithm LP ROUNDING for the special case of identical parallel machines. One crucial insight for the analysis
is that the above preemptive schedule on the virtual single machine corresponds to an optimum solution to an LP
relaxation which is equivalent to (LP R ). We introduce a variable y jt for every job j and every time period (t; t +1]
that is set to 1/m if job j is being processed on one of the m machines in this period and to 0 otherwise. Notice
that in contrast to the unrelated parallel machine case we do not need machine dependent variables since there is
no necessity to distinguish between the identical parallel machines. We can express the new variables y jt in the old
variables y_{ijt} by setting y_{jt} := (1/m) ∑_{i=1}^{m} y_{ijt}.
This leads to the following simplified LP (ignoring constraints (4) of (LP_R)):
  minimize  ∑_{j∈J} w_j C_j^{LP}
  subject to
    ∑_{t=0}^{T−1} y_{jt}/p'_j = 1                                     for all j ∈ J,
    ∑_{j∈J} y_{jt} ≤ 1                                                for all t = 0, ..., T−1,
    C_j^{LP} = p_j/2 + ∑_{t=0}^{T−1} (y_{jt}/p'_j) (t + 1/2)          for all j ∈ J,
    y_{jt} = 0                                                        for all j ∈ J and t < r_j,
    y_{jt} ≥ 0                                                        for all j ∈ J and t = 0, ..., T−1.
For the special case m = 1, (LP_P) was introduced by Dyer and Wolsey [10]. They also indicated that it follows
from the work of Posner [32] that the program can be solved in O(n log n) time. Goemans [13] showed (also for the
case m= 1) that the preemptive schedule that is constructed in the first step of Algorithm RANDOM ASSIGNMENT
defines an optimum solution to (LP P ). This result as well as its proof can be easily generalized to an arbitrary
number of identical parallel machines:
Lemma 3.1. For instances of the problem P | r_j | ∑ w_j C_j, the relaxation (LP_P) can be solved in O(n log n) time
and the preemptive schedule on the fast single machine in the first step of Algorithm RANDOM ASSIGNMENT
corresponds to an optimum solution.
Theorem 3.2. Algorithm RANDOM ASSIGNMENT is a randomized 2-approximation algorithm for P | r_j | ∑ w_j C_j.
Proof. We show that Algorithm RANDOM ASSIGNMENT can be interpreted as a special case of Algorithm LP
ROUNDING. The result then follows from its polynomial running time and Theorem 2.4.
Lemma 3.1 implies that in the first step of Algorithm RANDOM ASSIGNMENT we simply compute an optimum
solution to the LP relaxation (LP P ) which is equivalent to (LP R ) without constraints (4). In particular, the corresponding
solution to (LP R ) is symmetric with regard to the m machines. Therefore, in Algorithm LP ROUNDING
each job is assigned uniformly at random to one of the machines. The symmetry also yields that for each job j the
choice of t j is not correlated with the choice of i in Algorithm LP ROUNDING.
It can easily be seen that the probability distribution of the random variable t_j in Algorithm LP ROUNDING
exactly equals the probability distribution of C_j^S(α_j) in Algorithm RANDOM ASSIGNMENT. For this, observe that
the probability that C_j^S(α_j) lies in the interval (t, t+1] equals the fraction y_{jt}/p'_j of job j that is being processed in
this time interval. Moreover, since α_j is uniformly distributed in (0, 1], each point in (t, t + 1] is equally likely
to be obtained for C_j^S(α_j). Therefore, the random choice of C_j^S(α_j) in Algorithm RANDOM ASSIGNMENT is
an alternative way of choosing t_j as it is done in Algorithm LP ROUNDING. Consequently, the two algorithms
an alternative way of choosing t j as it is done in Algorithm LP ROUNDING. Consequently, the two algorithms
coincide for the identical parallel machine case. In particular, the expected completion time of each job is bounded
from above by twice its LP completion time and the result follows by linearity of expectations.
At this point, let us briefly compare the approximation results of this section for the single machine case m = 1
with related results. If we only work with one α for all jobs instead of individual and independent α_j's
and if we draw α uniformly from [0, 1], then RANDOM ASSIGNMENT precisely becomes Goemans' randomized
2-approximation algorithm RANDOM-α for 1 | r_j | ∑ w_j C_j [14]. Goemans, Queyranne, Schulz, Skutella, and Wang
have improved this result to performance guarantee 1.6853 by using job-dependent α_j's as in Algorithm RANDOM
ASSIGNMENT together with a nonuniform choice of the α_j's [15]. The same idea can also be applied in the
parallel machine setting to get a performance guarantee better than 2 for Algorithm RANDOM ASSIGNMENT. This
improvement, however, depends on m. We refer the reader to the single machine case for details. A comprehensive
treatment and a detailed overview of the concept of α-points for machine scheduling problems can be found in
[40, Chapter 2].
We have already argued in the last section that (LP R ) and thus (LP P ) is a 2-relaxation of the scheduling problem
under consideration:
Corollary 3.3. The relaxation (LP_P) is a 2-relaxation of the scheduling problem P | r_j | ∑ w_j C_j and this bound is
tight, even for instances of P | | ∑ C_j.
Proof. The positive result follows from Corollary 2.7. For the tightness, consider an instance with m machines
and one job of length m and unit weight. The optimum LP completion time is (m+1)/2, whereas the optimum
completion time is m. When m goes to infinity, the ratio of the two values converges to 2.
Our approximation result for identical parallel machine scheduling can be directly generalized to the corresponding
preemptive scheduling problem. In preemptive schedules a job may repeatedly be interrupted and
continued later on another (or the same) machine. It follows from the work of McNaughton [24] that already
P | pmtn | ∑ w_j C_j is NP-hard since there always exists an optimal nonpreemptive schedule and the corresponding
nonpreemptive problem is NP-hard. We make use of the following observation:
Lemma 3.4. The linear program (LP_P) is also a relaxation of the preemptive problem P | r_j, pmtn | ∑ w_j C_j.
Proof. Since all release dates and processing times are integral, there exists an optimal preemptive schedule where
preemptions only occur at integral points in time. Take such an optimal schedule S and construct the corresponding
feasible solution to (LP_P) by setting y_{jt} := 1/m if j is being processed on one of the m machines within the time
interval (t, t+1] and y_{jt} := 0 otherwise. It is an easy observation that C_j^{LP} ≤ C_j^S and equality holds if and only if job
j is continuously processed in the time interval (C_j^S − p_j, C_j^S].
Thus, the value of the constructed solution to (LP P )
is a lower bound on the value of an optimal schedule.
This observation leads to the following results which generalize Theorem 3.2 and Corollary 3.3.
Corollary 3.5. The value of the (nonpreemptive) schedule constructed by Algorithm RANDOM ASSIGNMENT
is not worse than twice the value of an optimum preemptive schedule. Moreover, the relaxation (LP P ) is a 2-
relaxation of the scheduling problem P | r_j, pmtn | ∑ w_j C_j, and this bound is tight.
The 2-approximation algorithm in Corollary 3.5 improves upon a performance guarantee of 3 due to Hall,
Schulz, Shmoys, and Wein [18]. Another consequence of our considerations is the following result on the power
of preemption:
Corollary 3.6. For identical parallel machine scheduling with release dates so as to minimize the weighted sum
of completion times, the value of an optimal nonpreemptive schedule is at most twice as large as the value of an
optimal preemptive one.
Moreover, the techniques in Algorithm LP ROUNDING can be used to convert an arbitrary preemptive schedule
into a nonpreemptive one such that the value increases at most by a factor of 2: for a given preemptive schedule,
construct the corresponding solution to (LP P ) or (LP R ), respectively. The value of this feasible solution to the LP
relaxation is a lower bound on the value of the given preemptive schedule. Using Algorithm LP ROUNDING, the
solution to (LP R ) can be turned into a nonpreemptive schedule whose expected value is bounded by twice the value
of the LP solution, and thus by twice the value of the preemptive schedule we started with. This improves upon a
bound of 7/3 due to Phillips et al. [29].
Algorithm RANDOM ASSIGNMENT can easily be turned into an online algorithm. There are several different
online paradigms that have been studied in the area of scheduling, see [38] for a survey. We consider the setting
where jobs continually arrive over time and, for each time t, we must construct the schedule until time t without
any knowledge of the jobs that will arrive afterwards. In particular, the characteristics of a job, i. e., its processing
time and its weight become only known at its release date.
In order to apply Algorithm RANDOM ASSIGNMENT in the online setting, note that for each job j its random
variable a j can be drawn immediately when the job is released since there is no interdependency with any other
decisions of the randomized algorithm. The same holds for the random machine assignments. Moreover, the
preemptive schedule in the first step can be constructed until time t without the need of any knowledge of jobs that
are released afterwards. Furthermore, it follows from the analysis in the proof of Lemma 2.3 that we get the same
performance guarantee if job j is not started before time t_j (respectively C_j^S(α_j)). Thus, in the online variant of
Algorithm RANDOM ASSIGNMENT we schedule the jobs as early as possible in order of nondecreasing C_j^S(α_j),
with the additional constraint that no job j may start before time C_j^S(α_j). The following result improves upon the
competitive ratio 2.89 + ε of Chakrabarti et al. [4].
Corollary 3.7. The online variant of Algorithm RANDOM ASSIGNMENT achieves competitive ratio 2.
The perhaps most appealing aspect of Algorithm RANDOM ASSIGNMENT is that the assignment of jobs to
machines does not depend on job characteristics; any job is put with probability 1=m to any of the machines. This
technique also proves useful for the problem without (nontrivial) release dates:
Theorem 3.8. Assigning jobs independently and uniformly at random to the machines and then applying Smith's
ratio rule on each machine is a 3/2-approximation algorithm for P | | ∑ w_j C_j. There exist instances for which this
bound is asymptotically tight.
Proof. First notice that the described algorithm exactly coincides with Algorithm RANDOM ASSIGNMENT (LP
ROUNDING, respectively). Because of the negative result in Corollary 3.3, we cannot derive the bound 3/2 by
comparing the expected value of the computed solution to the optimal value of (LP P ). Remember that we used a
stronger relaxation including constraints (4) in order to derive this bound in the unrelated parallel machine setting.
However, as a result of Lemma 2.3 we get
  E[C_j] ≤ ∑_{t=0}^{T−1} (y_{jt}/p'_j) (t + 1/2) + p_j = C_j^{LP} + p_j/2 ,
since the second term on the right-hand side of (6) is equal to p_j for the case of identical parallel machines. Since
both ∑_{j∈J} w_j C_j^{LP} and ∑_{j∈J} w_j p_j
are lower bounds on the value of an optimal solution, the result follows.
In order to show that the performance guarantee 3/2 is tight, we consider instances with m identical parallel
machines and m jobs of unit length and weight. We get an optimal schedule with value m by assigning one job
to each machine. On the other hand we can show that the expected completion time of a job in the schedule
constructed by random machine assignment is 3/2 − 1/(2m), which converges to 3/2 for increasing m. Since all
jobs are identical, we can without loss of generality schedule on each machine the jobs that were
assigned to it in a random order. Consider a fixed job j and the machine i it has been assigned to. The probability
that a job k ≠ j was assigned to the same machine is 1/m. In this case k is processed before j on the machine with
probability 1/2. We therefore get
E[C_j] = 1 + (m − 1) · (1/m) · (1/2) = 3/2 − 1/(2m).
Quite interestingly, the derandomized variant of this algorithm precisely coincides with the WSPT-rule for
which Kawaguchi and Kyan proved performance guarantee (1 + √2)/2 ≈ 1.21 [22]: list the jobs according to
nonincreasing ratios w_j/p_j and schedule the next job whenever a machine becomes available. Details for the
derandomization are given in Section 4. While the proof given by Kawaguchi and Kyan is somewhat complicated,
our simpler randomized analysis yields performance guarantee 3/2 for their algorithm. However, this weaker
result also follows from the work of Eastman, Even, and Isaacs [11] who gave a combinatorial lower bound for
P | | ∑ w_j C_j which coincides with the lower bound given by (LP_P). The latter observation is due to Uma and Wein
[48] and Williamson [50].
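For reference, the WSPT list-scheduling rule mentioned above can be sketched as follows (illustrative names):

    import heapq

    def wspt(jobs, m):
        """jobs: list of (p_j, w_j); schedule in order of nonincreasing w_j/p_j,
        always on the machine that becomes available first."""
        order = sorted(range(len(jobs)), key=lambda j: -jobs[j][1] / jobs[j][0])
        machines = [(0.0, i) for i in range(m)]        # (time machine becomes free, id)
        heapq.heapify(machines)
        completion = [0.0] * len(jobs)
        for j in order:
            free, i = heapq.heappop(machines)
            completion[j] = free + jobs[j][0]
            heapq.heappush(machines, (completion[j], i))
        return sum(jobs[j][1] * completion[j] for j in range(len(jobs))), completion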
4 Derandomization
Up to now we have presented randomized algorithms that compute a feasible schedule the expected value of which
can be bounded from above in terms of the optimum solution to the scheduling problem under consideration. This
means that our algorithms will perform well on average; however, we cannot give a firm guarantee for the performance
of any single execution. From a theoretical point of view it is perhaps more desirable to have (deterministic)
algorithms that obtain a certain performance in all cases.
One of the most important techniques for derandomization is the method of conditional probabilities. This
method is implicitly contained in a paper of Erdos and Selfridge [12] and has been developed in a more general
context by Spencer [45]. The idea is to consider the random decisions in a randomized algorithm one after another
and to always choose the most promising alternative. This is done by assuming that all of the remaining decisions
will be made randomly. Thus, an alternative is said to be most promising if the corresponding conditional
expectation for the value of the solution is as small as possible.
The randomized algorithms in this paper can be derandomized by the method of conditional probabilities. We
demonstrate this technique for the most general problem R j r Algorithm LP ROUNDING. Making
use of Remark 2.1 and Lemma 2.3 we consider the variant of this algorithm where we set t being
assigned to the machine-time pair (i; t) (ties are broken by prefering jobs with smaller indices). Thus, we have to
construct a deterministic assignment of jobs to machine-time pairs.
Our analysis of Algorithm LP ROUNDING in the proof of Lemma 2.3 does not give a precise expression for
the expected value of the computed solution but only an upper bound. Hence, for the sake of a more accessible
derandomization, we modify Algorithm LP ROUNDING by replacing its last step with the following variant:
3') Schedule on each machine i the jobs that were assigned to it nonpreemptively in nondecreasing
order of t j , where ties are broken by preferring jobs with smaller indices. At the starting time of
job j the amount of idle time on its machine has to be exactly t j .
Since r_{ij} ≤ t_j for each job j that has been assigned to machine i and t_j ≤ t_k if job k is scheduled after job j, Step 3'
defines a feasible schedule. In the proof of Lemma 2.3 we have bounded the idle time before the start of job j
on its machine from above by t j . Thus, the analysis still works for the modified Algorithm LP ROUNDING. The
main advantage of the modification of Step 3 is that we can now give precise expressions for the expectations and
conditional expectations of completion times.
Let y be the optimum solution we started with in the first step of Algorithm LP ROUNDING. Using the same
arguments as in the proof of Lemma 2.3 we get the following expected completion time of job j in the schedule
constructed by our modified Algorithm LP ROUNDING:
  (8)  E[C_j] = ∑_{i=1}^{m} ∑_{t=0}^{T−1} (y_{ijt}/p_{ij}) ( t + p_{ij} + ∑_{k<j} ∑_{ℓ≤t} y_{ikℓ} + ∑_{k>j} ∑_{ℓ<t} y_{ikℓ} ) .
Moreover, we are also interested in the conditional expectation of j's completion time if some of the jobs have
already been assigned to a machine-time pair. Let K ⊆ J be such a subset of jobs. For each job k ∈ K the 0/1-
variable x_{ikt} for t ≥ r_{ik} indicates whether k has been assigned to the machine-time pair (i, t) (x_{ikt} = 1) or not
(x_{ikt} = 0). This enables us to give the following expressions for the conditional expectation of j's completion time.
If j ∉ K we get
  (9)  E_{K,x}[C_j] = ∑_{i=1}^{m} ∑_{t=0}^{T−1} (y_{ijt}/p_{ij}) ( t + p_{ij} + ∑_{k∈K, k<j} p_{ik} ∑_{ℓ≤t} x_{ikℓ} + ∑_{k∈K, k>j} p_{ik} ∑_{ℓ<t} x_{ikℓ} + ∑_{k∉K, k<j} ∑_{ℓ≤t} y_{ikℓ} + ∑_{k∉K, k>j} ∑_{ℓ<t} y_{ikℓ} )
and, if j ∈ K, we get
  (10)  E_{K,x}[C_j] = t + p_{ij} + ∑_{k∈K\{j}, k<j} p_{ik} ∑_{ℓ≤t} x_{ikℓ} + ∑_{k∈K, k>j} p_{ik} ∑_{ℓ<t} x_{ikℓ} + ∑_{k∉K, k<j} ∑_{ℓ≤t} y_{ikℓ} + ∑_{k∉K, k>j} ∑_{ℓ<t} y_{ikℓ} ,
where (i, t) is the machine-time pair job j has been assigned to, i.e., x_{ijt} = 1. The following lemma is the most
important part of the derandomization of Algorithm LP ROUNDING.
Lemma 4.1. Let y be the optimum solution we started with in the first step of Algorithm LP ROUNDING, K ⊆ J,
and x a fixed assignment of the jobs in K to machine-time pairs. Furthermore let j ∈ J \ K. Then, there exists an
assignment of j to a machine-time pair (i, t) (i.e., x_{ijt} := 1) such that
  (11)  E_{K∪{j},x}[ ∑_{k∈J} w_k C_k ] ≤ E_{K,x}[ ∑_{k∈J} w_k C_k ] .
Proof. Using the formula of total expectation, the conditional expectation on the right-hand side of (11) can be
written as a convex combination of conditional expectations E_{K∪{j},x}[ ∑_{k∈J} w_k C_k ] over all possible assignments of
job j to machine-time pairs (i, t) with coefficients y_{ijt}/p_{ij}. In particular, the smallest of these conditional expectations
is bounded from above by the right-hand side of (11).
We therefore get a derandomized version of Algorithm LP ROUNDING if we replace the second step by
2') Set K := ∅ and x := 0; for all j ∈ J do:
   i) for all possible assignments of job j to machine-time pairs (i, t) (i.e., x_{ijt} := 1) compute the
      conditional expectation E_{K∪{j},x}[ ∑_{k} w_k C_k ];
   ii) determine the machine-time pair (i, t) that minimizes the conditional expectation in i);
       set K := K ∪ {j} and x_{ijt} := 1.
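A sketch of Step 2', evaluating the conditional expectations (9) and (10) and greedily fixing one assignment per job; the data structures are illustrative (x maps a job to its chosen machine-time pair).

    def cond_exp(j, K, x, y, p):
        """E[C_j | jobs in K assigned according to x], cf. (9) and (10).
        y[(i, k, t)] is the fractional LP solution."""
        m, n = len(p), len(p[0])
        def load_before(i, t):      # expected processing on machine i before job j starts
            total = 0.0
            for k in range(n):
                if k == j:
                    continue
                if k in K:
                    ik, tk = x[k]
                    if ik == i and (tk < t or (tk == t and k < j)):
                        total += p[i][k]
                else:
                    total += sum(v for (ii, kk, l), v in y.items()
                                 if ii == i and kk == k and (l < t or (l == t and k < j)))
            return total
        if j in K:
            i, t = x[j]
            return t + p[i][j] + load_before(i, t)
        return sum((y[i, jj, t] / p[i][j]) * (t + p[i][j] + load_before(i, t))
                   for (i, jj, t) in y if jj == j)

    def derandomized_rounding(y, p, w):
        n = len(p[0]); K, x = set(), {}
        for j in range(n):
            candidates = [(i, t) for (i, jj, t) in y if jj == j]
            def total(i, t):
                x[j] = (i, t)
                val = sum(w[k] * cond_exp(k, K | {j}, x, y, p) for k in range(n))
                del x[j]
                return val
            i, t = min(candidates, key=lambda it: total(*it))
            x[j] = (i, t); K.add(j)
        return x        # deterministic assignment of jobs to machine-time pairs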
Notice that we have replaced Step 3 of Algorithm LP ROUNDING by 3' only to give a more accessible analysis
of its derandomization. Since the value of the schedule constructed in Step 3 is always at least as good as the one
constructed in Step 3', the following theorem can be formulated for Algorithm LP ROUNDING with the original
Step 3.
Theorem 4.2. If we replace Step 2 in Algorithm LP ROUNDING with 2' we get a deterministic algorithm whose
performance guarantee is at least as good as the expected performance guarantee of the randomized algorithm.
Moreover, the running time of this algorithm is polynomially bounded in the number of variables of the LP relaxation
Proof. The result follows by an inductive use of Lemma 4.1. The computation of (9) and (10) is polynomially
bounded in the number of variables. Therefore, the running time of each of the n iterations in Step 2' is polynomially
bounded in this number.
The same derandomization also works for the polynomial time algorithms that are based on interval-indexed LP
relaxations described in Section 5. Since these LP relaxations only contain a polynomial number of variables, the
running time of the derandomized algorithms is also bounded by a polynomial in the input size of the scheduling
problem. Notice that, in contrast to the situation for the randomized algorithms, we can no longer give job-by-job
bounds for the derandomized algorithms.
An interesting application of the method of conditional probabilities is the derandomization of Algorithm
RANDOM ASSIGNMENT in the absence of release dates. We have already discussed this matter at the end of
Section 3. It essentially follows from the considerations above that the derandomized version of this algorithm
always assigns a job to the machine with the smallest load so far if we consider the jobs in order of nonincreasing
w_j/p_j. Thus, the resulting algorithm coincides with the WSPT-rule analyzed by Kawaguchi and Kyan [22].
5 Interval-Indexed LP Relaxations
As mentioned earlier, our LP-based algorithms for unrelated parallel machine scheduling suffer from the exponential
number of variables in the corresponding LP relaxation (LP R ). However, we can overcome this drawback by
using new variables which are not associated with exponentially many time intervals of length 1, but rather with
a polynomial number of intervals of geometrically increasing size. This idea was earlier introduced by Hall et al.
[19]. We show how Algorithm LP ROUNDING can be turned into a polynomial time algorithm for R | r_j | ∑ w_j C_j
at the cost of an increase in the performance guarantee to 2 + ε. The same technique can be used to derive a
(3/2 + ε)-approximation algorithm for R | | ∑ w_j C_j.
For a given h > 0, L is chosen to be the smallest integer such that (1+h)^L ≥ T. Consequently, L is
polynomially bounded in the input size of the considered scheduling problem. Let I_0 = [0, 1] and for 1 ≤ ℓ ≤ L let
I_ℓ = ((1+h)^{ℓ−1}, (1+h)^ℓ]. We denote with |I_ℓ| the length of the ℓ-th interval, i.e., |I_0| = 1 and |I_ℓ| = h(1+h)^{ℓ−1} for ℓ ≥ 1.
To simplify notation we define (1+h)^{−1} to be 1/2. We introduce variables y_{ijℓ} for every job j ∈ J, every machine
i = 1, ..., m, and every interval index ℓ = 0, 1, ..., L with the following interpretation: y_{ijℓ} is the time job j is processed on machine i within the time
interval I_ℓ or, equivalently, y_{ijℓ}/p_{ij} is the fraction of job j that is being processed on machine i within I_ℓ.
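A small sketch of the interval construction just described (illustrative):

    import math

    def geometric_intervals(h, T):
        """Return the intervals used by (LP_R^h): I_0 = [0, 1] and
        I_l = ((1+h)^(l-1), (1+h)^l] for l = 1, ..., L with (1+h)^L >= T."""
        L = max(0, math.ceil(math.log(T) / math.log(1 + h))) if T > 1 else 0
        breakpoints = [0.0, 1.0] + [(1 + h) ** l for l in range(1, L + 1)]
        return list(zip(breakpoints[:-1], breakpoints[1:]))   # (left, right] pairs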
Consider the following linear program in these interval-indexed variables:
  minimize  ∑_{j∈J} w_j C_j^{LP}
  subject to
    (12)  ∑_{i=1}^{m} ∑_{ℓ=0}^{L} y_{ijℓ}/p_{ij} = 1                                          for all j ∈ J,
    (13)  ∑_{j∈J} y_{ijℓ} ≤ |I_ℓ|                                                             for all i = 1, ..., m and ℓ = 0, ..., L,
    (14)  C_j^{LP} ≥ ∑_{i=1}^{m} ∑_{ℓ=0}^{L} ( (y_{ijℓ}/p_{ij}) (1+h)^{ℓ−1} + y_{ijℓ}/2 )     for all j ∈ J,
          y_{ijℓ} = 0                                                                          for all i, j and ℓ with (1+h)^ℓ ≤ r_{ij},
          y_{ijℓ} ≥ 0                                                                          for all i, j, ℓ.
Consider a feasible schedule and assign the values to the variables y_{ijℓ} as defined above. This solution is
clearly feasible: Constraints (12) are satisfied since a job j consumes p_{ij} time units if it is processed on machine i;
constraints (13) are satisfied since the total amount of processing on machine i of jobs that are processed within
the interval I_ℓ cannot exceed its size. Finally, if job j is continuously being processed between C_j^S − p_{hj} and C_j^S on
machine h, then the right-hand side of (14) is a lower bound on the real completion time. Thus, (LP_R^h) is
a relaxation of the scheduling problem R | r_j | ∑ w_j C_j.
Since (LP_R^h) is of polynomial size, an optimum solution can be computed in polynomial time. We rewrite
Algorithm LP ROUNDING for the new LP:
Algorithm: LP ROUNDING
1) Compute an optimum solution y to (LP_R^h).
2) Assign each job j to a machine-interval pair (i, I_ℓ) independently at random with probability
   y_{ijℓ}/p_{ij}; draw t_j from the chosen time interval I_ℓ independently at random with uniform
   distribution.
3) On each machine i schedule the jobs that were assigned to it in order of nondecreasing t_j.
The following lemma is a reformulation of Lemma 2.2 b) for the new situation and can be proved analogously.
Lemma 5.1. The expected processing time of each job j ∈ J in the schedule constructed by Algorithm LP ROUNDING
is equal to ∑_{i=1}^{m} ∑_{ℓ=0}^{L} y_{ijℓ}.
Theorem 5.2. The expected completion time of each job j in the schedule constructed by Algorithm LP ROUNDING
is at most 2 (1+h) C_j^{LP}.
Proof. We argue almost exactly as in the proof of Lemma 2.3, but use Lemma 5.1 instead of Lemma 2.2 b). We
consider an arbitrary, but fixed job j 2 J. We also consider a fixed assignment of j to machine i and time interval I ' .
Again, the conditional expectation of j's starting time equals the expected idle time plus the expected processing
time on machine i before j is started.
With similar arguments as in the proof of Lemma 2.3, we can bound the sum of the idle time plus the processing
time by 2·(1+h)^ℓ = 2·(1+h)·(1+h)^{ℓ-1}. This, together with Lemma 5.1 and (14), yields the theorem.
For any given ε > 0 we can choose h = ε/2 and obtain a (2 + ε)-approximation
algorithm for the problem R | r_j | Σ_j w_j C_j; in particular, (LP^h_R) is a (2 + ε)-relaxation of this problem.
6 Concluding Remarks and Open Problems
In this paper, we have developed LP-based approximation algorithms for different scheduling problems and, in
doing so, we have also gained some insight into the quality of the employed time-indexed LPs. A number of open
problems arise from this and related research; in the following wrap-up we distinguish between the off-line
and the on-line setting.
Our central off-line result is the (2+ε)-approximation for R | r_j | Σ w_j C_j; there exist instances which show
that the underlying LP relaxation ((LP_R) without inequalities (4)) is indeed not better than a 2-relaxation. However,
it is open whether the quality of (LP_R) (with (4) and/or (7)) is better than 2 and therefore also whether it can be
used to derive an approximation algorithm with performance guarantee strictly less than 2. On the negative side,
the problem is known not to admit a polynomial-time approximation scheme, unless P = NP.
In other words, the best known approximation algorithm for R | r_j | Σ w_j C_j has
performance guarantee 2 (we proved 2 + ε here and [42] gets rid of the ε using a convex quadratic relaxation), but
the only known limit to its approximability is the non-existence of a polynomial-time approximation scheme, unless
P = NP. The situation for R | | Σ w_j C_j is similar: (LP_R) is a 3/2-relaxation, the quality of (LP_R) together with (7) is
unknown, the 3/2-approximation given in [41] (improving upon the (3/2 + ε)-approximation in Section 2) is best
known, and again there cannot be a PTAS, unless P = NP. As far as identical parallel machines are concerned,
one important property of our 2-approximation algorithm for P | r_j | Σ w_j C_j is that it runs in time O(n log n); the
running time of the recent PTAS [6] is considerably higher. The other important feature of the O(n log n)
algorithm is that it is capable of working in an on-line context as well, which brings us to the second set of open
problems.
If jobs arrive over time and if the performance of algorithms is measured in terms of their competitiveness
to optimal off-line algorithms, it is theoretically of the utmost importance to distinguish between deterministic
and randomized algorithms. For identical parallel machine scheduling to minimize total weighted completion
time, there is a significant gap between the best-known deterministic lower bound and the competitive ratio of
the best-known deterministic algorithm. The lower bound of 2 follows from the fact that for on-line single machine
scheduling to minimize total completion time no deterministic algorithm can have competitive ratio less
than 2 [21, 46]. A (4 + ε)-competitive algorithm emerges from a more general framework [19, 18]. For randomized
algorithms, our understanding seems slightly better. The best-known randomized lower bound of e/(e − 1) is
again inherited from the single machine case [47, 49], and there is a randomized 2-competitive algorithm given in
the paper in hand.
Acknowledgements
The authors are grateful to Chandra S. Chekuri, Michel X. Goemans, and David B. Shmoys for helpful comments
on an earlier version of this paper [36].
--R
Competitive distributed job scheduling
Resource scheduling for parallel database and scientific applications
Improved scheduling algorithms for minsum criteria
Approximation schemes for minimizing average weighted completion time with release dates.
Approximation techniques for average completion time scheduling
Approximation algorithms for precedence-constrained scheduling problems on parallel machines that run at different speeds
Deterministic load balancing in computer networks
Formulating the single machine sequencing problem with release dates as a mixed integer program
Bounds for the optimal scheduling of n jobs on m processors
A supermodular relaxation for scheduling with release dates
Single machine scheduling with release dates.
RINNOOY KAN
Scheduling to minimize average completion time: Off-line and on-line approximation algorithms
Scheduling to minimize average completion time: Off-line and on-line algorithms
Optimal on-line algorithms for single-machine schedul- ing
Worst case bound of an LRF schedule for the mean weighted flow-time problem
RINNOOY KAN
Management Science
Randomized approximation algorithms in combinatorial opti- mization
Randomized Algorithms
Approximation bounds for a general class of precedence constrained parallel machine scheduling problems
Improved bounds on relaxations of a parallel machine scheduling problem
Task scheduling in networks
A sequencing problem with release dates and clustered jobs
A technique for provably good algorithms and algorithmic proofs
Scheduling to minimize total weighted completion time: Performance guarantees of LP-based heuristics and lower bounds
New approximations and LP lower bounds
An approximation algorithm for the generalized assignment problem
Approximation and Randomization in Scheduling
A PTAS for minimizing the weighted sum of job completion times on parallel machines
Various optimizers for single-stage production
Ten Lectures on the Probabilistic Method
Cited as personal communication in
How low can't you go?
On the relationship between combinatorial and LP-based approaches to NP-hard scheduling problems
PhD thesis
Cited as personal communication in
--TR
--CTR
Feng Lu , Dan C. Marinescu, An R || Cmax Quantum Scheduling Algorithm, Quantum Information Processing, v.6 n.3, p.159-178, June 2007
Nicole Megow , Marc Uetz , Tjark Vredeveld, Models and Algorithms for Stochastic Online Scheduling, Mathematics of Operations Research, v.31 n.3, p.513-525, August 2006
Martin Skutella, Convex quadratic and semidefinite programming relaxations in scheduling, Journal of the ACM (JACM), v.48 n.2, p.206-242, March 2001 | on-line algorithm;scheduling;linear programming relaxation;randomized rounding;approximation algorithm |
587939 | On Lower Bounds for Selecting the Median. | We present a reformulation of the 2n+o(n) lower bound of Bent and John [Proceedings of the 17th Annual ACM Symposium on Theory of Computing, 1985, pp. 213--216] for the number of comparisons needed for selecting the median of n elements. Our reformulation uses a weight function. Apart from giving a more intuitive proof for the lower bound, the new formulation opens up possibilities for improving it. We use the new formulation to show that any pair-forming median finding algorithm, i.e., a median finding algorithm that starts by comparing $\lfloor n/2\rfloor$ disjoint pairs of elements must perform, in the worst case, at least 2.01 n comparisons. This provides strong evidence that selecting the median requires at least cn+o(n) comparisons for some c> 2. | Introduction
Sorting and selection problems have received extensive attention
by computer scientists and mathematicians for a long time. Comparison based
algorithms for solving these problems work by performing pairwise comparisons between
the elements until the relative order of all elements is known, in the case of
or until the i-th largest element among the n input elements is found, in the
case of selection.
Sorting in a comparison based computational model is quite well understood. Any
deterministic algorithm can be modeled by a decision tree in which all internal nodes
represent a comparison between two elements; every leaf represents a result of the
computation. Since there must be at least as many leaves in the decision tree as there
are possible re-orderings of n elements, all algorithms that sort n elements use at least
⌈log n!⌉ = n log n − n log e + O(log n) comparisons in the worst
case. (All logarithms in this paper are base 2 logarithms.) The best known sorting
method, called merge insertion by Knuth [9], is due to Lester Ford Jr. and Selmer
Johnson [7]. It sorts n elements using at most n log n − 1.33n + O(log n) comparisons.
Thus, the gap between the upper and lower bounds is very narrow in that the error
in the second order term is bounded by 0.11n.
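The estimate behind this information-theoretic bound is the standard Stirling calculation (added here for completeness):

    log2(n!) = Σ_{k=1}^{n} log2(k) = n·log2(n) − n·log2(e) + O(log n),

since n! = sqrt(2πn)·(n/e)^n·(1 + o(1)) by Stirling's formula; as log2(e) ≈ 1.4427, the lower bound is roughly n·log2(n) − 1.44n, while merge insertion uses about n·log2(n) − 1.33n.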
The problem of finding the median is the special case of selecting the i-th largest
in an ordered set of n elements, when i = ⌈n/2⌉. Although much effort has been put
into finding the exact number of required comparisons, there is still an annoying gap
between the best upper and lower bounds currently known.
Knowing how to sort, we could select the median by first sorting, and then selecting
the middle-most element; it is quite evident that we could do better, but how much
better? This question received a somewhat surprising answer when Blum et al. [3]
showed, in 1973, how to determine the median in linear time using at most 5.43n
comparisons. This result was improved upon in 1976 when Schönhage, Paterson, and
Pippenger [13] presented an algorithm that uses only 3n + o(n) comparisons. Its
main invention was the use of factories which mass-produce certain partial orders
that can be easily merged with each other.
This remained the best algorithm for almost 20 years, until Dor and Zwick [5]
pushed down the number of comparisons a little bit further to 2.95n + o(n) by adding
green factories that recycle debris from the merging process used in the algorithm
of [13].
The first non-trivial lower bound for the problem was also presented, in 1973, by
Blum et al. [3] using an adversary argument. Their 1.5n lower bound was subsequently
improved to 1.75n + o(n) by Pratt and Yao [12] in 1973. Then Yap [14], and later
Munro and Poblete [10], improved it to (38/21)n + O(1) and (79/43)n + O(1), respectively. The
proofs of these last two bounds are long and complicated.
In 1979, Fussenegger and Gabow [8] proved a 1.5n + o(n) lower bound for the
median using a new proof technique. Bent and John [2] used the same basic ideas
when they gave, in 1985, a short proof that improved the lower bound to 2n + o(n),
which is currently the best available. Thus, the uncertainty in the coefficient of n is
larger for finding the median than it is for sorting, even though the linear term is the
second order term in the case of sorting.
Since our methods are based on the proof by Bent and John, let us describe it in
some detail. Given the decision tree of a comparison based algorithm, they invented
a method to prune it that yields a collection of pruned trees. Then, lower bounds
for the number of pruned trees and for their number of leaves are obtained. A final
argument saying that the leaves of the pruned trees are almost disjoint then gives a
lower bound for the size of the decision tree.
In Section 2 we reformulate the proof by Bent and John by assigning weights
to each node in the decision tree. The weight of a node v corresponds to the total
number of leaves in subtrees with root v in all pruned trees where v occurs in the
proof by Bent and John. The weight of the root is approximately 2^{2n}; we show that
every node v in the decision tree has a child whose weight is at least half the weight
of v, and that the weights of all the leaves are small.
When the proof is formulated in this way, it becomes more transparent, and one
can more easily study individual comparisons, to rule out some as being bad from the
algorithm's point of view.
For many problems, such as finding the maximal or the minimal element of an
ordered set, and finding the maximal and minimal element of an ordered set, there
are optimal algorithms that start by making ⌊n/2⌋ pairwise comparisons between
singleton elements. We refer to algorithms that start in this way as being pair-
forming. It has been discussed whether there are optimal pair-forming algorithms for
all partial orders, and in particular this question was posed as an open problem by
Aigner [1]. Some examples were then found by Chen [4], showing that pair-forming
algorithms are not always optimal.
It is interesting to note that the algorithms in [5] and [13] are both pair-forming.
It is still an open problem whether there are optimal pair-forming algorithms for
finding the median.
In Section 3 we use our new approach to prove that any pair-forming algorithm
uses at least 2.01227n comparisons to find the median.
Dor and Zwick [6] have recently been able to extend the ideas described here and
obtain a (2+ε)n lower bound, for some tiny ε > 0, on the number of comparisons
performed, in the worst case, by any median selection algorithm.
2. Bent and John revisited. Bent and John [2] proved that 2n + o(n) comparisons
are required for selecting the median. Their result, in fact, is more general
and provides a lower bound for the number of comparisons required for selecting the
i-th largest element, for any 1 ≤ i ≤ n. We concentrate here on median selection,
although our results, like those of Bent and John, can be extended to general i.
Although the proof given by Bent and John is relatively short and simple, we here
present a reformulation. There are two reasons for this: the first is that the proof
gets more transparent; the second is that this formulation makes it easier to study
the effect of individual comparisons.
Theorem 2.1 (Bent and John [2]). Finding the median requires 2n − o(n) comparisons.
Proof.
Any deterministic algorithm for finding the median can be represented by a decision
tree T, in which each internal node v is labeled by a comparison a : b. The
two children of such a node, v_{a<b} and v_{a>b}, represent the outcomes a < b and a > b,
respectively. We assume that decision trees do not contain redundant comparisons
between elements whose relative order has already been established.
We consider a universe U containing n elements. For every node v in T and
subset C of U we make the following definitions:
    max_v(C) = { a ∈ C : every comparison a : b above v with b ∈ C had outcome a > b },
    min_v(C) = { a ∈ C : every comparison a : b above v with b ∈ C had outcome a < b }.
Thus, max_v(C) and min_v(C) contain the elements of C that may still be the maximum or the minimum of C, respectively, given the comparisons made above v.
Before we proceed with the proof that selecting the median requires 2n − o(n) comparisons,
we present a proof of a somewhat weaker result. We assume that U contains
n = 2m elements and show that selecting the two middlemost elements requires
2n − o(n) comparisons. The proof in this case is slightly simpler, yet it demonstrates
the main ideas used in the proof of the theorem.
We define a weight function on the nodes of T. This weight function satisfies the
following three properties: (i) the weight of the root is 2^{2n+o(n)}; (ii) each internal
node v has a child whose weight is at least half the weight of v; (iii) the weight of
each leaf is small.
For every node v in the decision tree, we keep track of subsets A of size m which
may contain the m largest elements with respect to the comparisons already made.
Let A(v) contain all such sets, which are called upper half compatible with v. The A's
are assigned weights which estimate how far from a solution the algorithm is, assuming
that the elements in A are the m largest. The weight of every A ∈ A(v) is defined as
    w^1_v(A) = 2^{|min_v(A)| + |max_v(\bar A)|},
where \bar A = U \ A, and the weight of a node v is defined as
    w(v) = Σ_{A ∈ A(v)} w^1_v(A).
The superscript 1 in w^1_v(A) is used as we shall shortly have to define a second weight
function w^2_v(B).
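To make this bookkeeping concrete, the following brute-force sketch (ours, based on the definitions of min_v, max_v and w^1_v as given above; all names are our own) computes the weight of a node described by the set of comparison outcomes made above it. For tiny instances it can be used to check that the root weight is (2m choose m)·2^{2m} and that some child always retains at least half the weight.

    from itertools import combinations

    def min_set(C, less):
        # a is a candidate minimum of C if it never won a comparison against a member of C
        return {a for a in C if not any((b, a) in less for b in C)}

    def max_set(C, less):
        # a is a candidate maximum of C if it never lost a comparison against a member of C
        return {a for a in C if not any((a, b) in less for b in C)}

    def node_weight(U, m, less):
        # less is a set of pairs (x, y) recording that the comparison of x and y gave x < y
        total = 0
        for A in combinations(sorted(U), m):
            A = set(A)
            comp = set(U) - A
            # A is upper half compatible iff no element of A is known smaller than an element of the complement
            if any((a, b) in less for a in A for b in comp):
                continue
            total += 2 ** (len(min_set(A, less)) + len(max_set(comp, less)))
        return total

    # With U = range(4), m = 2 and no comparisons, node_weight returns C(4,2) * 2**4 = 96;
    # for any comparison a:b, max(node_weight(U, m, less | {(a, b)}), node_weight(U, m, less | {(b, a)}))
    # is at least half of node_weight(U, m, less).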
  case                              w^1_{v_{a<b}}(A)                       w^1_{v_{a>b}}(A)
  a ∈ A,  b ∈ A                     1/2 if b ∈ min_v(A), else 1            1/2 if a ∈ min_v(A), else 1
  a ∈ A,  b ∈ \bar A                0                                      1
  a ∈ \bar A,  b ∈ A                1                                      0
  a ∈ \bar A,  b ∈ \bar A           1/2 if a ∈ max_v(\bar A), else 1       1/2 if b ∈ max_v(\bar A), else 1
Table 2.1
The weight of a set A ∈ A(v) in the children of a node v, relative to its weight in v.
In the root r of T, all subsets of size m of U are upper half compatible with r, so
that |A(r)| = (2m choose m). Also, each A ∈ A(r) has weight 2^{2m}, and we find, as promised,
that
    w(r) = (2m choose m) · 2^{2m} = 2^{2n+o(n)}.
Consider the weight w^1_v(A) of a set A ∈ A(v) at a node v labeled by the comparison
a : b. What are the weights of A in v's children? This depends on which of the
elements a and b belongs to A (and on which of them is minimal in A or maximal
in \bar A). The four possible cases are considered in Table 2.1. The weights given there are
relative to the weight w^1_v(A) of A at v. A zero indicates that A is no longer compatible
with this child and thus does not contribute to its weight. The weight w^1_{v_{a<b}}(A), when
a, b ∈ A and b ∈ min_v(A), for example, is (1/2)·w^1_v(A), and is w^1_v(A) otherwise. As can be
seen, v always has at least one child in which the weight of A is at least half its weight
at v. Furthermore, in each one of the four cases, w^1_{v_{a<b}}(A) + w^1_{v_{a>b}}(A) ≥ w^1_v(A).
Each leaf v of the decision tree corresponds to a state of the algorithm in which
the two middlemost elements were found. There is therefore only one set A left in
A(v). Since we have identified the minimum element in A and the maximum element
in \bar A, we get that w^1_v(A) = 4. So, if we follow a path from the root of the tree and
repeatedly descend to the child with the largest weight, we will, when we eventually
reach a leaf, have performed at least 2n − o(n) comparisons.
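Spelling out this count (our own summary of the argument just given):

    number of comparisons ≥ log2( w(r) / w(leaf) ) = log2( (2m choose m)·2^{2m} ) − log2(4)
                          ≥ 4m − log2(2m+1) − 2 = 2n − o(n),

since each step along the chosen path at most halves the weight, (2m choose m) ≥ 2^{2m}/(2m+1), and the weight of the final leaf is 4.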
We now prove that selecting the median also requires at least 2n − o(n) comparisons.
To make the median well defined we assume that n = 2m − 1. The problem
that arises in the above argument is that the weights of the leaves in T, when the
selection of the median, and not the two middlemost elements, is considered, are not
necessarily small enough: it is possible to know the median without knowing any relations
between elements in \bar A (which now contains m − 1 elements); this is remedied
as follows.
In a node v where the algorithm is close to determining the minimum element
in A, we essentially force it to determine the largest element in \bar A instead. This is done
by moving an element a_0 out of A and creating a set B = \bar A ∪ {a_0}. This set is lower
half compatible with v, and the median is the maximum element in B. By a suitable
choice of a_0, most of max_v(\bar A) is in max_v(B). A set B is lower half compatible with v
if it may contain the m smallest elements in U. We keep track of B's in
the multiset B(v).
For the root r of T, we let A(r) contain all subsets of size m of U as before, and
let B(r) be empty. We exchange some A's for B's as the algorithm proceeds. The
  case                     w^2_{v_{a<b}}(B)                       w^2_{v_{a>b}}(B)
  a ∈ B,  b ∈ B            1/2 if a ∈ max_v(B), else 1            1/2 if b ∈ max_v(B), else 1
  a ∈ B,  b ∈ \bar B       1                                      0
(The case a ∈ \bar B, b ∈ B is symmetric, and comparisons between two elements outside B leave the weight unchanged.)
Table 2.2
The weight of a set B ∈ B(v) in the children of a node v, relative to its weight in v.
weight of a set B is defined as
    w^2_v(B) = 2^{4⌈√n⌉ + |max_v(B)|}.
The weight of B estimates how far the algorithm is from a solution, assuming that
the elements in B are the m smallest elements. The weight of a node v is now defined
to be
    w(v) = Σ_{A ∈ A(v)} w^1_v(A) + Σ_{B ∈ B(v)} w^2_v(B).
In the beginning of an algorithm (in the upper part of the decision tree), the weight
of a node is still the sum of the weights of all A's, and therefore the weight of the root is 2^{2n+o(n)}, exactly as before.
We now define A(v) and B(v) for the rest of T more exactly. For any node v in T,
except the root, simply copy A(v) and B(v) from the parent node and remove all sets
that are not upper or lower half compatible with v, respectively. We ensure that the
weight of every leaf is small by doing the following: If, for some A ∈ A(v), we have
|min_v(A)| = 2⌈√n⌉, we select an element a_0 ∈ min_v(A) which has been compared to
the fewest number of elements in \bar A; we then remove the set A from A(v) and add the
set B = \bar A ∪ {a_0} to B(v).
Note that at the root, |min_r(A)| = m > 2⌈√n⌉, and that this quantity
decreases by at most one for each comparison until a leaf is reached. In a leaf v the
median is known; thus, A(v) is empty.
Lemma 2.2. Let A(v) and B(v) be defined by the rules described above. Then
every internal node v (labeled a : b) in T has a child with at least half the weight of v,
i.e., w(v_{a<b}) ≥ w(v)/2 or w(v_{a>b}) ≥ w(v)/2.
Proof.
Table 2.1 gives the weights of a set A ∈ A(v) at v's children, relative to the
weight w^1_v(A) of A at v. Similarly, Table 2.2 gives the weights of a set B ∈ B(v) in v's
children, relative to the weight w^2_v(B) of B at v. As w^1_{v_{a<b}}(A) + w^1_{v_{a>b}}(A) ≥ w^1_v(A)
and w^2_{v_{a<b}}(B) + w^2_{v_{a>b}}(B) ≥ w^2_v(B), for every A ∈ A(v) and B ∈ B(v), all that remains to
be checked is that the weight does not decrease when a lower half compatible set B
replaces an upper half compatible set A. This is covered by Lemma 2.3.
Lemma 2.3. If A is removed from A(v) and B is added in its place to B(v), and
if fewer than 4n comparisons have been performed on the path from the root to v, then
w^2_v(B) ≥ w^1_v(A).
Proof. A set A ∈ A(v) is replaced by a set B = \bar A ∪ {a_0}
only when |min_v(A)| = 2⌈√n⌉. The element a_0, in such a case, is an element of min_v(A) that
has been compared to the fewest number of elements in \bar A. If a_0 was compared to at
least 2⌈√n⌉ elements in \bar A, we get that each element of min_v(A) was compared to at
least 2⌈√n⌉ elements in \bar A, and at least 4n comparisons have been performed on the path
from the root to v, a contradiction. We get therefore that a_0 was compared to fewer
than 2⌈√n⌉ elements of \bar A and thus |max_v(B)| > |max_v(\bar A)| − 2⌈√n⌉. As a consequence,
we get that 4⌈√n⌉ + |max_v(B)| > 2⌈√n⌉ + |max_v(\bar A)| = |min_v(A)| + |max_v(\bar A)|, and thus
    w^2_v(B) = 2^{4⌈√n⌉ + |max_v(B)|} > 2^{|min_v(A)| + |max_v(\bar A)|} = w^1_v(A),
as required.
We now know that the weight of the root is large, and that the weight does not
decrease too fast; what remains to be shown is that the weights of the leaves are
relatively small. This is established in the following lemma.
Lemma 2.4. For a leaf v (in which the median is known), w(v) ≤ 2m · 2^{4⌈√n⌉}.
Proof. Clearly, the only sets compatible with a leaf of T are the set A containing
the m largest elements and the set B containing the m smallest elements. Since
the median is the maximum element of B and it has been identified at a leaf, we get that w^2_v(B) ≤ 2^{4⌈√n⌉+1}.
Since there are exactly m elements that can be removed from B to obtain a
corresponding \bar A, there can be at most m copies of B in B(v).
Let T be a comparison tree that corresponds to a median finding algorithm. If
the height of T is at least 4n, we are done. Otherwise, by starting at the root and
repeatedly descending to a child whose weight is at least half the weight of its parent,
we trace a path whose length is at least 2n − o(n), and Theorem 2.1 follows.
Let us see how the current formalism gives room for improvement that did not
exist in the original proof. The 2n − o(n) lower bound is obtained by showing that
each node v in a decision tree T that corresponds to a median finding algorithm has
a child whose weight is at least half the weight of v. Consider the nodes v_0, v_1, ..., v_k
along the path obtained by starting at the root of T and repeatedly descending to the
child with the larger weight, until a leaf is reached. If we could show that sufficiently
many nodes on this path have weights strictly larger than half the weights of their
parents, we would obtain an improved lower bound for median selection. If w(v_{i+1}) ≥ 2^{ε_i − 1}·w(v_i), for 0 ≤ i < k, then the length of this path, and therefore the
depth of T, is at least 2n − o(n) + Σ_i ε_i.
3. An improved lower bound for pair-forming algorithms. Let v be a
node of a comparison tree. An element x is a singleton at v if it was not compared
above v with any other element. Two elements x and y form a pair at v if the
elements x and y were compared to each other above v, but neither of them was
compared to any other element.
A pair-forming algorithm is an algorithm that starts by constructing ⌊n/2⌋ disjoint pairs.
By concentrating on comparisons that involve elements that are part of
pairs, we obtain a better lower bound for pair-forming algorithms.
Theorem 3.1. A pair-forming algorithm for finding the median must perform,
in the worst case, at least 2.00691n − o(n) comparisons.
Proof.
It is easy to see that a comparison involving two singletons can be delayed until
just before one of them is to be compared for the second time. We can therefore
restrict our attention to comparison trees in which the partial order corresponding
to each node contains at most two pairs. Allowing only one pair is not enough, as
algorithms should be allowed to construct two pairs {a, b} and {a', b'} and then
compare an element from {a, b} with an element from {a', b'}.
We focus our attention on nodes in the decision tree in which an element of a
pair is compared for the second time and in which the number of non-singletons is at
most αm, for some α < 1. If v is a node in which the number of non-singletons is at
Fig. 3.1. The six possible ways that a, b, and c may be divided between A and \bar A. Note that c
is not necessarily a singleton element; it may be part of a larger partial order.
most αm, for some α < 1, then B(v) is empty and thus w(v) = Σ_{A ∈ A(v)} w^1_v(A);
we do not have to consider Table 2.2 for the rest of the section.
Recall that A(v) denotes the collection of subsets of U of size m that are upper half
compatible with v. If H, L ⊆ U are subsets of U, of arbitrary size, we let
    A_{H/L}(v) = { A ∈ A(v) : H ⊆ A and L ⊆ \bar A }.
We let w_{H/L}(v) be the contribution of the sets of A_{H/L}(v) to the weight of v, i.e.,
    w_{H/L}(v) = Σ_{A ∈ A_{H/L}(v)} w^1_v(A).
For brevity, we write A_{h_1...h_r / l_1...l_s}(v) for A_{{h_1,...,h_r}/{l_1,...,l_s}}(v) and w_{h_1...h_r / l_1...l_s}(v)
for w_{{h_1,...,h_r}/{l_1,...,l_s}}(v).
Before proceeding, we describe the intuition that lies behind the rest of the proof.
Consider Table 2.1 from the last section. If, in a node v of the decision tree, the
two cases a ∈ A, b ∈ \bar A and a ∈ \bar A, b ∈ A are not equally likely, or more precisely,
if the contributions w_{a/b}(v) and w_{b/a}(v) of these two cases to the total weight of v
are not equal, there must be at least one child of v whose weight is greater than half
the weight of v. The difficulty in improving the lower bound of Bent and John lies
therefore at nodes in which the contributions of the two cases a ∈ A, b ∈ \bar A and
a ∈ \bar A, b ∈ A are almost equal. This fact is not so easily seen when looking at the
original proof given in [2].
Suppose now that v is a node in which an element a of a pair {a, b} is compared
with an arbitrary element c and that the number of non-singletons in v is at most
αm. We assume, without loss of generality, that a > b. The weights of a set A ∈ A(v)
in v's children depend on which of the elements a, b, and c belongs to A, and on
whether c is minimal in A or maximal in \bar A. The six possible ways of dividing the
elements a, b, and c between A and \bar A are shown in Figure 3.1. The weights of the
set A in v's children, relative to the weight w^1_v(A) of A at v, in each one of these six
cases are given in Table 3.1. Table 3.1 is similar to Table 2.1 of the previous section,
with c playing the role of b. There is one important difference, however. If a, b, c ∈ A,
as in the first row of Table 3.1, then the weight of A in v_{a>c} is equal to the weight of A
in v. The weight is not halved, as may be the case in the first row of Table 2.1. If the
contribution w_{abc/}(v) of the case a, b, c ∈ A to the weight of v is not negligible, there
must again be at least one child of v whose weight is greater than half the weight of v.
The improved lower bound is obtained by showing that if the contributions of
the cases a ∈ A, c ∈ \bar A and a ∈ \bar A, c ∈ A are roughly equal, and if most elements in
the partial order are singletons, then the contribution of the case a, b, c ∈ A is non-
negligible. The larger the number of singletons in the partial order, the larger is the
relative contribution of the weight w_{abc/}(v) to the weight w(v) of v. Thus, whenever
  case                                      w^1_{v_{a<c}}(A)                       w^1_{v_{a>c}}(A)
  a ∈ A,  b ∈ A,  c ∈ A                     1/2 if c ∈ min_v(A), else 1            1
  a ∈ A,  b ∈ A,  c ∈ \bar A                0                                      1
  a ∈ A,  b ∈ \bar A,  c ∈ A                1/2 if c ∈ min_v(A), else 1            1/2 if a ∈ min_v(A), else 1
  a ∈ A,  b ∈ \bar A,  c ∈ \bar A           0                                      1
  a ∈ \bar A,  b ∈ \bar A,  c ∈ A           1                                      0
  a ∈ \bar A,  b ∈ \bar A,  c ∈ \bar A      1/2 if a ∈ max_v(\bar A), else 1       1/2 if c ∈ max_v(\bar A), else 1
Table 3.1
The weight of a set A ∈ A(v) in the children of a node v, relative to its weight in v, when the
element a of a pair a > b is compared with an arbitrary element c.
an element of a pair is compared for the second time, we make a small gain. The
above intuition is made precise in the following lemma:
Lemma 3.2. If v is a node in which an element a of a pair a > b is compared
with an element c, and if the number of singletons in v is at least m + 2√n, then
    w(v_{a<c}) ≥ w(v)/2 − (1/2)·(w_{a/c}(v) − w_{c/a}(v))   and
    w(v_{a>c}) ≥ w(v)/2 + (1/2)·(w_{a/c}(v) − w_{c/a}(v)) + (1/2)·w_{abc/}(v).
Proof. Both inequalities follow easily by considering the entries in Table 3.1. To
obtain the second inequality, for example, note that w(v_{a>c}) ≥ w_{abc/}(v) + w_{ab/c}(v) + w_{a/bc}(v) + (1/2)·(w_{ac/b}(v) + w_{/abc}(v)).
As w_{c/ab}(v) = w_{c/a}(v) and w_{ab/c}(v) + w_{a/bc}(v) = w_{a/c}(v), the second inequality follows.
It is worth pointing out that in Table 3.1 and in Lemma 3.2, we only need to
assume that a > b; we do not use the stronger condition that a > b is a pair. This
stronger condition is crucial however in the sequel, especially in Lemma 3.4.
To make use of Lemma 3.2 we need bounds on the relative contributions of the
dierent cases. The following lemma is a useful tool for determining such bounds.
Lemma 3.3. Let E) be a bipartite graph. Let - 1 and - 2 be the minimal
degree of the vertices of V 1 and V 2 , respectively. Let 1 and 2 be the maximal degree
of the vertices of V 1 and V 2 , respectively. Assume that a positive weight function w is
dened on the vertices of G such that w(v 1
and (v
r
Proof. Let denote the two vertices connected by the
edge e. We then have
The other inequality follows by exchanging the roles of V 1 and V 2 .
Using Lemma 3.3 we obtain the following basic inequalities.
Lemma 3.4. If v is a node in which a > b is a pair and the number of non-singletons
in v is at most αm, then
    w_{abc/}(v) ≥ ((1−α)/2) · w_{ac/b}(v),
and the analogous inequalities, obtained by moving one of the elements of the pair a > b across
the partition in each of the other configurations of a, b, and c, hold as well.
Each one of these inequalities relates a weight, such as w_{abc/}(v), to a weight, such
as w_{ac/b}(v), obtained by moving one of the elements of the pair a > b from A to \bar A.
In each inequality we 'lose' a factor of 1 − α. When the elements a and b are joined
together, a factor of 2 is introduced. When the elements a and b are separated, a
factor of 1/2 is introduced.
Proof. We present a proof of the inequality w_{abc/}(v) ≥ ((1−α)/2) · w_{ac/b}(v). The proof
of all the other inequalities is almost identical.
Construct a bipartite graph G = (V_1, V_2; E) whose vertex sets are V_1 = A_{abc/}(v) and
V_2 = A_{ac/b}(v). Define an edge (A_1, A_2) ∈ A_{abc/}(v) × A_{ac/b}(v) if and only if there is a singleton d ∈
\bar A_1 such that A_2 = (A_1 \ {b}) ∪ {d}.
Suppose that (A_1, A_2) is such an edge. As a ∉ min_v(A_1) but a ∈ min_v(A_2), and as all
other elements are extremal with respect to A_1 if and only if they are extremal with
respect to A_2 (note that b ∈ min_v(A_1), d ∈ max_v(\bar A_1), d ∈ min_v(A_2), and b ∈ max_v(\bar A_2)), we get that w^1_v(A_1) = (1/2)·w^1_v(A_2).
For every set A of size m, the number of singletons in A is at least (1 − α)m and
at most m. We get therefore that the minimal degrees of the vertices of V_1 and V_2
are at least (1 − α)m − 1, and the maximal degrees of V_1 and V_2 are at most m. The
inequality w_{abc/}(v) ≥ ((1−α)/2) · w_{ac/b}(v)
therefore follows from Lemma 3.3.
Using these basic inequalities we obtain:
Lemma 3.5. If v is a node in which a > b is a pair and the number of non-singletons
is at most αm, for some α < 1, then
Proof. We present the proof of the first inequality. The proof of the other two
inequalities is similar. Using inequalities from Lemma 3.4 we get that
and the first inequality follows.
We are now ready to show that if v is a node in which an element of a pair is
compared for the second time, then v has a child whose weight is greater than half
the weight of v. Combining Lemma 3.2 and Lemma 3.5, we get that (1/2)·(w(v_{a<c}) + w(v_{a>c}))
exceeds w(v)/2 by a term proportional to w(v).
As a consequence, we get that
    max( w(v_{a<c}), w(v_{a>c}) ) ≥ (1/2)·(1 + f_1(α))·w(v).
The coefficient of w(v), on the right hand side, is minimized when the two expressions
whose maximum is taken are equal. This happens when w_{a/c}(v) − w_{c/a}(v) attains a specific value. Plugging
this value into the two expressions yields the bound above, with a function f_1
for which it is easy to check that f_1(α) > 0 for α < 1.
A pair-forming comparison is a comparison in which two singletons are compared
to form a pair. A pair-touching comparison is a comparison in which an element
of a pair is compared for the second time. In a pair-forming algorithm, the number
of singletons is decreased only by pair-forming comparisons. Each pair-forming
comparison decreases the number of singletons by exactly two. As explained above,
pair-forming comparisons can always be delayed so that a pair-forming comparison
is immediately followed by a comparison that touches the pair {a, b}, or by a second
pair-forming comparison a' : b' and then by a comparison that touches both pairs
{a, b} and {a', b'}.
Consider again the path traced from the root by repeatedly descending to the
child with the larger weight. As a consequence of the above discussion, we get that
when the i-th pair-touching comparison along this path is performed, the number
of non-singletons in the partial order is at most 4i. It follows therefore from the
remark made at the end of the previous section that the depth of the comparison tree
corresponding to any pair-forming algorithm is at least
    2n − o(n) + Σ_i log_2(1 + f_1(4i/m)) ≥ 2.00691·n − o(n).
Fig. 3.2. The nine possible ways that a, b, c, and d may be divided between A and \bar A.
Table 3.2
The weight of a set A ∈ A(v) in the children of a node v, relative to its weight in v, when the
element a of a pair a > b is compared with an element of a pair c > d.
This completes the proof of Theorem 3.1.
The worst case in the proof above is obtained when the algorithm converts all the
elements into quartets. A quartet is a partial order obtained by comparing elements
contained in two disjoint pairs. In the proof above, we analyzed cases in which an
element a of a pair a > b is compared with an arbitrary element c. If the element c is
also part of a pair, a tighter analysis is possible. By performing this analysis we can
improve Theorem 3.1.
Theorem 3.6. A pair-forming algorithm for finding the median must perform,
in the worst case, at least 2.01227n − o(n) comparisons.
Proof. Consider comparisons in which the element from a pair a > b is compared
with an element of a pair c > d. The nine possible ways of dividing the elements a,
b, c, and d among A and \bar A are depicted in Figure 3.2. We may assume, without loss
of generality, that the element a is compared with either c or with d.
Let v be a node of the comparison tree in which a > b and c > d are pairs and in
which one of the comparisons a : c or a : d is performed. Let A ∈ A(v). The weights
of a set A in v's children, relative to the weight w^1_v(A) of A at v, in each one of these
nine cases are given in Table 3.2. The two possible comparisons a : c and a : d are
considered separately. The following equalities are easily verified.
Lemma 3.7. If a > b and c > d are pairs in v then
The following inequalities are analogous to the inequalities of Lemma 3.4.
Lemma 3.8. If a > b and c > d are pairs in v and if the number of non-singletons
in v is at most αm, for some α < 1, then
Consider first the comparison a : c. By examining Table 3.2 and using the equalities
of Lemma 3.7, we get a lower bound on w(v_{a<c}) + w(v_{a>c}) in terms of w(v) and the contribution w_{abcd/}(v) of the case a, b, c, d ∈ A.
Minimizing this expression, subject to the equalities of Lemma 3.7, the inequalities of
Lemma 3.8, and the fact that the weights of the nine cases sum up to w(v), amounts
to solving a linear program. By solving this linear program we get that
where
It seems intuitively clear that the comparison a : d is a bad comparison from the
algorithm's point of view. The adversary will most likely answer with a > d. Indeed,
by solving the corresponding linear program, we get that
    w(v_{a>d}) is bounded below in the same way.
As this holds for every 0 ≤ α ≤ 1, we may disregard the comparison a : d from
any further consideration.
It is easy to verify that the factor obtained here is at least (1 + f_1(α)). As a result, we get a lower bound
of
This completes the proof of Theorem 3.6.
4. Concluding remarks. We presented a reformulation of the 2n + o(n) lower
bound of Bent and John for the number of comparisons needed for selecting the
median of n elements. Using this new formulation we obtained an improved lower
bound for pair-forming median finding algorithms. As mentioned, Dor and Zwick [6]
have recently extended the ideas described here and obtained a (2+ε)n lower bound
for general median finding algorithms, for some tiny ε > 0.
We believe that the lower bound for pair-forming algorithms obtained here can
be substantially improved. Such an improvement seems to require, however, some
new ideas. Obtaining an improved lower bound for pair-forming algorithms may be
an important step towards obtaining a lower bound for general algorithms which is
significantly better than the lower bound of Bent and John [2].
Paterson [11] conjectures that the number of comparisons required for selecting
the median is about (log_{4/3} 2)·n ≈ 2.41n.
--R
Producing posets.
Finding the median requires 2n comparisons.
Time bounds for selection.
Partial Order Productions.
Selecting the median.
Median selection requires (2
A tournament problem.
A counting approach to lower bounds for selection problems.
The Art of Computer Programming
A lower bound for determining the median.
Progress in selection.
On lower bounds for computing the i-th largest element
New lower bounds for medians and related problems.
--TR
--CTR
Krzysztof C. Kiwiel, On Floyd and Rivest's SELECT algorithm, Theoretical Computer Science, v.347 n.1-2, p.214-238, November 2005 | median selection;lower bounds;comparison algorithms |
587941 | The Maximum Edge-Disjoint Paths Problem in Bidirected Trees. | A bidirected tree is the directed graph obtained from an undirected tree by replacing each undirected edge by two directed edges with opposite directions. Given a set of directed paths in a bidirected tree, the goal of the maximum edge-disjoint paths problem is to select a maximum-cardinality subset of the paths such that the selected paths are edge-disjoint. This problem can be solved optimally in polynomial time for bidirected trees of constant degree but is APX-hard for bidirected trees of arbitrary degree. For every fixed $\varepsilon >0$, a polynomial-time $(5/3+\varepsilon)$-approximation algorithm is presented. | Introduction
Research on disjoint paths problems in graphs has a long
history [12]. In recent years, edge-disjoint paths problems have been brought into the
focus of attention by advances in the field of communication networks. Many modern
network architectures establish a virtual circuit between sender and receiver in order
to achieve guaranteed quality of service. When a connection request is accepted, the
network must allocate sufficient resources on all links along a path from the sender
to the receiver. Edge-disjoint paths problems are at the heart of the arising resource
allocation problems.
We study the maximum edge-disjoint paths problem (MEDP) for bidirected tree
networks. A bidirected tree is the directed graph obtained from an undirected tree
by replacing each undirected edge by two directed edges with opposite directions.
Bidirected tree networks have been studied intensively because they are a good model
for optical networks with pairs of unidirectional fiber links between adjacent nodes [26,
MEDP in bidirected trees is defined as follows. Given a bidirected tree T = (V, E)
and a set P of simple, directed paths in T, the goal is to find a subset P' ⊆ P such
that the paths in P' are edge-disjoint and the cardinality of P' is maximized. We say
that an algorithm is a ρ-approximation algorithm for MEDP if it always outputs a
subset of edge-disjoint paths whose cardinality is at least a (1/ρ)-fraction of
the cardinality of an optimal solution.
The conflict graph of a set of directed paths in a bidirected tree is an undirected
graph with a vertex for each path and an edge between two vertices if the corresponding
paths intersect (i.e., if they share an edge). One can view MEDP in bidirected
trees as a maximum independent set problem in the conflict graph.
We assume that the given tree is rooted at an arbitrary node. For a node v, we
let p(v) denote the parent of v. The level of a node is then defined as its distance to
the root node. The root has level zero. We say that a path touches a node if it begins
at that node, passes through that node, or ends at that node. The level of a path is
the minimum of the levels of all nodes it touches. The unique node on a path whose
level is equal to the level of the path is the least common ancestor (lca) of the path.
A preliminary version of this article has appeared in the Proceedings of the 9th Annual International
Symposium on Algorithms and Computation ISAAC'98, LNCS 1533, pages 179-188, 1998.
† Institut für Informatik, TU München, 80290 München, Germany (erlebach@in.tum.de).
‡ IDSIA Lugano, Corso Elvezia 36, 6900 Lugano, Switzerland (klaus@idsia.ch).
We denote a path that begins at node u and ends at node v by (u; v) and its lca by
lca(u; v).
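For illustration, levels and lcas can be computed directly from parent pointers (our own sketch; the dictionaries parent and level are assumed to be precomputed by a traversal from the root and are not part of the paper):

    def path_lca_and_level(parent, level, u, v):
        # Walk the deeper endpoint upwards until the two endpoints meet.
        a, b = u, v
        while a != b:
            if level[a] >= level[b]:
                a = parent[a]
            else:
                b = parent[b]
        return a, level[a]    # lca of the path (u, v) and the level of the path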
1.1. Results. First, in §2, we determine the complexity of MEDP in bidirected
trees: MEDP can be solved optimally in polynomial time in bidirected trees of constant
degree and in bidirected stars, but it is MAX SNP-hard in bidirected trees of
arbitrary degree. The main result of this paper is summarized by the following theorem.
Theorem 1.1. For every fixed ε > 0, there is a polynomial-time approximation
algorithm for the maximum edge-disjoint paths problem in bidirected trees with
approximation ratio 5/3 + ε.
The description of the algorithm and a proof that the claimed approximation ratio
is indeed achieved appear in §3. In §4, we discuss how our results can be generalized
to the weighted version of the problem and to the maximum path coloring problem.
1.2. Related work.
Path coloring in bidirected trees. Previous work on bidirected trees has focused
on the path coloring problem: Given a set of directed paths in a bidirected tree, assign
colors to the paths such that paths receive different colors if they share an edge. The
goal is to minimize the total number of colors used. This problem is NP-hard even
for binary trees [8, 24]. The best known approximation algorithms [11, 10] use at most
d(5=3)Le colors, where L is the maximum load (the load of an edge is the number
of paths using that edge) and thus a lower bound on the optimal solution. Previous
algorithms had used (15=8)L colors [26] and (7=4)L colors [18, 25] in the worst case.
For the special case of all-to-all path coloring, it was shown that the optimal number
of colors is equal to the maximum load [14].
Multicommodity flow in trees. Garg et al. [13] studied the integral multicommodity
flow problem in undirected trees, which is a generalization of MEDP in undirected
trees. They showed that the problem with unit edge capacities (equivalent to MEDP
in undirected trees) can be solved optimally in polynomial time. For undirected trees
with edge capacities one or two, they proved the problem MAX SNP-hard. They also
presented a 2-approximation algorithm for integral multicommodity flow in trees. It
works by considering the demands in order of non-increasing levels of their lcas and
by satisfying them greedily. This approximation algorithm can be adapted to MEDP
in bidirected trees, where it also gives a 2-approximation. The main idea that leads
to our improved approximation algorithm for MEDP in bidirected trees is to consider
all paths with the same lca simultaneously instead of one by one.
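As an illustration of this greedy scheme (our own sketch of the adapted rule of Garg et al., not the (5/3+ε)-algorithm of this paper; paths are given as sets of directed edges and lca levels are assumed precomputed):

    def greedy_medp(paths, lca_level):
        # paths: dict mapping a path id to the set of directed edges it uses
        # lca_level: dict mapping a path id to the level of its lca (root has level 0)
        used_edges = set()
        accepted = []
        for pid in sorted(paths, key=lambda q: lca_level[q], reverse=True):  # bottom-up order
            if used_edges.isdisjoint(paths[pid]):
                accepted.append(pid)
                used_edges |= paths[pid]
        return accepted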
Online algorithms for MEDP in trees. MEDP has also been studied in the on-line
scenario, where the paths are given to the algorithm one by one. The algorithm must
accept or reject each path without knowledge about future requests. Preemption is not
allowed. It is easy to see that no deterministic algorithm can have a competitive ratio
better than the diameter of the tree in this case. Awerbuch et al. gave a randomized
algorithm with competitive ratio O(log n) for undirected trees with n nodes [2]. Their
algorithm works also for bidirected trees. An improved randomized algorithm with
competitive ratio O(log d) for undirected trees with diameter d was given in [3].
MEDP for other topologies. If MEDP is studied for arbitrary graphs, the algorithm
must solve both a routing problem and a selection problem. For arbitrary
directed graphs with m edges, MEDP was recently shown to be NP-hard to approximate
within m^{1/2−ε} [16]. Approximation algorithms with approximation ratio
O(√m) are known for the unweighted case [20, 28] and for the weighted case [22]. Better
approximation ratios can be achieved for restricted classes of graphs. For a class
of planar graphs containing two-dimensional mesh networks, an O(1)-approximation
algorithm has been devised in [21].
2. Complexity results. MEDP in bidirected trees is NP-hard in general. This
can be proved by a reduction from 3D-matching that is similar to the reduction
used by Garg et al. to prove the NP-hardness of integral multicommodity flow in
undirected trees with edge capacities one and two [13]. We omit the details, because
the modification is straightforward. If we reduce from the bounded variant of the
3D-matching problem [19], the reduction is an L-reduction and an AP-reduction,
implying that MEDP in bidirected trees is MAX SNP-hard [27] and APX -hard [7].
This shows that there is no polynomial-time approximation scheme for the problem
Nevertheless, MEDP can be solved optimally in polynomial time if the input
is restricted in certain ways. First, consider the case that the maximum degree of
the given tree is bounded by a constant. The optimal solution can be computed by
dynamic programming in this case. We process the nodes of the tree in order of
non-increasing levels. At every node v, we record for each possible subset S of edge-disjoint
paths touching v and its parent (note that |S| ≤ 2) the maximum number of
paths contained in the subtree rooted at v that can be accepted in addition to the
paths in S. Node v is processed only when these values are known for all its children.
We can then enumerate all possible edge-disjoint subsets of paths touching v. For
each such subset, we can look up the corresponding values stored at children of v
and update the values stored at v accordingly. Note that there are only polynomially
many subsets to consider at each node. When the root node has been processed, the
optimal solution can easily be constructed.
Another special case that can be solved optimally in polynomial time is the case
that the given bidirected tree T is a star, i.e., it contains only one node with degree
greater than one. MEDP in bidirected stars can be reduced to the maximum matching
problem in a bipartite graph as follows. First, we can assume without loss of generality
that every given path uses exactly two edges of the star; if a path uses only one
edge, we can add a new node to the star and extend the path by one edge without
changing the set of solutions. Now, observe that every path uses exactly one edge
directed towards the center and one edge directed away from the center of the star.
Construct a bipartite graph G by including a vertex for every edge of the star and
by adding an edge between two vertices u and v in G for every path in T that uses
the edges corresponding to u and v. Two paths in T are edge-disjoint if and only if
the corresponding edges in G do not share an endpoint. Sets of edge-disjoint paths
in T correspond to matchings in G. A maximum matching in G can be computed in
polynomial time [17].
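A compact sketch of this reduction (ours; each path is given as the pair of star edges it uses after the lengthening step, and a standard augmenting-path routine computes a maximum matching):

    def max_edge_disjoint_star(paths):
        # paths: list of (u, w) pairs, where u is the edge directed towards the center
        #        and w is the edge directed away from the center used by the path
        adj = {}
        for idx, (u, w) in enumerate(paths):
            adj.setdefault(u, []).append((w, idx))
        match_right = {}          # outgoing star edge -> (incoming star edge, path index)

        def try_augment(u, visited):
            for (w, idx) in adj.get(u, []):
                if w in visited:
                    continue
                visited.add(w)
                if w not in match_right or try_augment(match_right[w][0], visited):
                    match_right[w] = (u, idx)
                    return True
            return False

        for u in list(adj):
            try_augment(u, set())
        return [idx for (_, idx) in match_right.values()]   # indices of accepted paths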
The latter result can actually be generalized from stars to spiders. A spider is a
bidirected tree in which at most one node (the center) has degree greater than two.
MEDP in a bidirected spider can be solved in polynomial time using an algorithm
for the maximum-weight bipartite matching problem as a subroutine. The bipartite
graph G is constructed as above from the paths touching the center of the spider, and
the weight of an edge e in G specifies how many fewer paths not touching the center
of the spider can be accepted if the path corresponding to e is accepted. The details
are left to the reader.
3. Approximating the optimal solution. Fix any ε > 0. Let an instance of
the maximum edge-disjoint paths problem be given by a bidirected tree T and a set
P of directed paths in T. Denote by P* an arbitrary optimal solution for the given
instance.
The algorithm proceeds in two passes. In the first pass, it processes the nodes
of T in order of non-increasing levels (i.e., bottom-up). Assume that the algorithm
is about to process node v. Let P_v denote the subset of all paths (u, w) ∈ P with
lca(u, w) = v that do not intersect any of the paths that have been accepted by the
algorithm at a previous node and that do not use any edges that have been reserved or
fixed by the algorithm (see below). For the sake of simplicity, we can assume without
loss of generality that we have u ≠ v and w ≠ v for all paths (u, w) ∈ P_v; otherwise, we
could add an additional child to v for each path in P_v starting or ending at v and
make the path start or end at this new child instead. Every path p ∈ P_v uses exactly
two edges incident to v, and we refer to these two edges as the top edges of p. We say
that two paths in P_v are equivalent if they use the same
two edges incident to v, i.e., if their top edges are the same. For a set Q of paths with
the same lca, this defines a partition of Q into different equivalence classes of paths
in the natural way.
While the algorithm processes node v, it tries to determine for the paths in P v
whether they should be included in the solution (these paths are called accepted)
or not (these paths are called rejected ). Sometimes, however, the algorithm cannot
make this decision right away. In these cases the algorithm will leave some paths in
an intermediate state and resolve them later on. The possibilities for paths in such
intermediate states are
(i) undetermined paths,
(ii) groups of deferred paths,
(iii) groups of exclusive paths, and
(iv) groups of 2-exclusive paths.
We refer to undetermined paths and to paths in groups of exclusive paths and in
groups of 2-exclusive paths as unresolved paths and to paths in groups of deferred paths
as deferred paths. The status of unresolved paths is resolved at later nodes during the
first pass. The second pass of the algorithm proceeds top-down and accepts one path
from each group of deferred paths.
3.1. Paths in intermediate states. In the following we give explanations regarding
the possible groups of paths in intermediate states. First, the algorithm will
sometimes leave a single path p of P v in an undetermined state. If P v has only one
equivalence class of paths, accepting a path might cause the algorithm to
miss the chance of accepting two paths of smaller level (than v) later on. Hence, the
algorithm could at best achieve a 2-approximation. Therefore, instead of accepting or
rejecting the paths in P v right away, the algorithm picks one of them and makes it an
undetermined path. All other paths in P v , if any, are rejected, and the undetermined
path will be accepted or rejected at a later node.
A second situation in which the algorithm does not accept or reject all paths in
right away is sketched in Fig. 3.1. (Here and in the following, pairs of oppositely
directed edges are drawn as undirected edges in all figures.) In this situation, the
algorithm decides to accept one of several intersecting paths from P v , but it defers
the decision which one of them to accept. The intersecting paths are called a group
of deferred paths. All paths in a group of deferred paths use the same edge incident
to v and to a child c of v. In the figure, this is the edge (c; v). (The case that the
deferred paths share the edge (v; c) is symmetrical.) Furthermore, each deferred path
uses also an edge (v; c 0 ) connecting v and a child c 0 6= c, and not all of the deferred
Fig. 3.1. A group of deferred paths.
Fig. 3.2. Possible configuration of a group of exclusive paths (left-hand side), and situation in
which both exclusive paths are blocked (right-hand side).
paths use the same such edge. If the algorithm decides to create a new group of
deferred paths, it marks the edge (c; v) as reserved (assuring that no path accepted
at a node processed after v can use the edge), but leaves all edges (v; c 0 ) for children
available. The reserved edge is indicated by a dashed arrow in Fig. 3.1. The
motivation for introducing groups of deferred paths is as follows: first, the reserved
edge blocks at most one path of smaller level that could be accepted in an optimal
solution; second, no matter which path using the edge (p(v); v) is accepted at a node
processed after v, that path uses at most one of the edges (v; c 0 ), and as there is still
at least one deferred path that does not use that particular edge (v; c 0 ), the algorithm
can pick such a deferred path in the second pass. When processing later nodes during
the first pass, the algorithm actually treats the group of deferred paths like a single
accepted path that uses only the reserved edge of the deferred paths.
A group of exclusive paths is sketched in Fig. 3.2 (left-hand side). Such a group
consists of one path q (called the lower path) contained in the subtree rooted at a
child c of v and one path p (called the higher path) with lca v that intersects q. At
most one of the two paths can be accepted, but if the algorithm picks the wrong one
this choice can cause the algorithm to accept only one path while the optimal solution
would accept the other path and one or two additional paths. Hence, the algorithm
defers the decision which path to accept until a later node. For now, it only marks
Fig. 3.3. Group of 2-exclusive paths consisting of a pair of independent groups of exclusive paths.
the top edge of path q that is intersected by p as fixed. (Fixed edges are indicated by
dotted arrows in our figures.) Obviously, a group of exclusive paths has the following
property.
Property (E). If at most one path touching v but not using the fixed edge is
accepted at a later node, either p or q can still be accepted. Only when two paths
touching v are accepted at a later node, they can block p and q from being accepted.
The right-hand side of Fig. 3.2 shows how two paths accepted at a later node can
block both exclusive paths. While processing later nodes, the algorithm will try to
avoid this whenever possible.
The last types of unresolved paths are sketched in Figures 3.3 and 3.4. These
groups of 2-exclusive paths consist of a set of four paths at most two of which can
be accepted. More precisely, the first possibility for a group of 2-exclusive paths is
to consist of two independent groups of exclusive paths (Fig. 3.3), i.e., of two groups
of exclusive paths such that the fixed edge of one group is directed towards the root
and the fixed edge of the other group is directed towards the leaves. Furthermore,
the two groups must either be contained in disjoint subtrees (as shown in Fig. 3.3), or
only their lower paths are contained in disjoint subtrees and their higher paths do not
intersect each other. A pair of independent groups of exclusive paths has two fixed
edges: the fixed edges of both groups.
The second possibility for a group of 2-exclusive paths is to consist of a group of
exclusive paths contained in a subtree rooted at a child of v and two paths p_1 and p_2
with lca v that intersect the exclusive paths (but not their fixed edge) in a way such
that accepting p 1 and p 2 would block both of the exclusive paths from being accepted
(Fig. 3.4). Two edges are marked fixed, namely the top edge of the higher exclusive
path intersected by a path with lca v and the top edge of the lower exclusive path
intersected by a path with lca v. It is not difficult to show by case analysis that a
group of 2-exclusive paths has the following property.
Property (2E). If at most one path touching v but not using a fixed edge is
accepted at a later node, two paths from the group of 2-exclusive paths can still be
accepted. If two paths touching v but not using a fixed edge are accepted at a later
node, at least one path from the group of 2-exclusive paths can still be accepted.
While processing later nodes, the algorithm will try to avoid accepting two paths
touching v such that only one path from the group of 2-exclusive paths can be accepted
Fig. 3.4. Further configurations of groups of 2-exclusive paths.
3.2. Invariants. In x3.4 we will present the details of how the algorithm proceeds
during the first pass. At the same time, we will show that the approximation
ratio achieved by the algorithm is 5=3 ". In order to establish this, we will prove
by induction that the following invariants can be maintained. These invariants hold
before the first node of T is processed, and they hold again each time an additional
node of T has been processed. A node v is called a root of a processed subtree if the
node v has already been processed but its parent has not.
Invariant A. For every root v of a processed subtree, all paths in that subtree
are accepted, rejected, or deferred except if one of the following cases occurs:
(i) The subtree contains one undetermined path. All other paths contained in
the subtree are accepted, rejected, or deferred. No edge in the subtree is marked fixed.
(ii) The subtree contains one group of exclusive paths. All other paths contained
in the subtree are accepted, rejected, or deferred. The only edge marked fixed in the
subtree is the one from the group of exclusive paths.
(iii) The subtree contains one group of 2-exclusive paths. All other paths contained
in the subtree are accepted, rejected, or deferred. The only edges marked fixed
in the subtree are the two from the group of 2-exclusive paths.
All accepted paths are edge-disjoint and do not contain any reserved edges. Every unresolved
path is edge-disjoint from all accepted paths and does not contain any reserved
edges. Every deferred path contains exactly one reserved edge: the reserved edge of
the group of deferred paths to which the path belongs. If a deferred path p intersects
an accepted or unresolved path q, then the level of q is smaller than that of p.
Invariant B. Let A be the set of all paths that have already been accepted by the
algorithm. Let F be the set of all paths in P whose lca has not yet been processed
and which are not blocked by any of the accepted paths, by reserved edges, or by fixed
edges. Let d be the number of groups of deferred paths that are contained in processed
subtrees. Let U be the set of all undetermined paths. Let X be the union of all groups of
exclusive paths and groups of 2-exclusive paths. Then there is a subset O ⊆ F ∪ U ∪ X
of edge-disjoint paths satisfying the following conditions:
(a) |P*| ≤ (5/3 + ε)(|A| + d) + |O|, where P* denotes a fixed optimal solution;
(b) For every group of exclusive paths, O contains one path from that group; for
every group of 2-exclusive paths, O contains two paths from that group.
Intuitively, the set O represents a subset of P containing edge-disjoint paths that
could still be accepted by the algorithm and that has the following property: if the
algorithm accepts at least a 1=(5=3+ ")-fraction of the paths in O (in addition to the
8 T. ERLEBACH AND K. JANSEN
paths it has already accepted), its output is a (5=3 + ")-approximation of the optimal
solution.
Observe that the invariants are satisfied initially with A = ∅, d = 0, U = X = ∅, and
O taken to be an optimal solution P*. While it will be easy to see from the description of
the algorithm that Invariant A is indeed maintained throughout the first pass, special
care must be taken to prove that Invariant B is maintained as well.
3.3. The second pass. If the invariants are satisfied after the root node is
processed, we have F = ∅ and |P*| ≤ (5/3 + ε)(|A| + d) + |O|. At this
time, there can still be one undetermined path (which can, but need not, be contained
in O; therefore, |O| ∈ {0, 1} in this case), one group of exclusive paths (from which
O contains exactly one path, so |O| = 1), or one group of 2-exclusive paths (from which
O contains two edge-disjoint paths, so |O| = 2). If there is an undetermined path, the
algorithm accepts it. If there is a group of exclusive paths, the algorithm accepts one
of them arbitrarily. If there is a group of 2-exclusive paths, the algorithm accepts two
edge-disjoint paths of them arbitrarily. The algorithm accepts at least |O| additional
paths in this way, and the resulting set A' of accepted paths satisfies |A'| ≥ |A| + |O|
and, therefore, |P*| ≤ (5/3 + ε)(|A'| + d).
In the second pass, the algorithm processes the nodes of the tree in reverse order,
i.e., according to non-decreasing levels (top-down). At each node v that is the lca of
at least one group of deferred paths, it accepts one path from each of the groups of
deferred paths such that these paths are edge-disjoint from all previously accepted
paths and from each other. This can always be done due to the definition of groups
of deferred paths. Hence, the number of paths accepted by the algorithm increases
by d in the second pass, and the set A'' of paths that are accepted by the algorithm in
the end satisfies |A''| = |A'| + d ≥ |P*|/(5/3 + ε). This establishes the approximation
ratio claimed in Theorem 1.1.
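For readability, the accounting behind this last step can be written in one line (a sketch,
using the form of Condition (a) reconstructed above):

  |A''| = |A'| + d ≥ |A| + |O| + d
        ≥ \frac{(5/3+\varepsilon)(|A|+d) + |O|}{5/3+\varepsilon}
        ≥ \frac{|P^*|}{5/3+\varepsilon}.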
3.4. Details of the first pass. Assume that the algorithm is about to process
node v. Recall that P_v ⊆ P is the set of all paths with lca v that do not intersect
any previously accepted path nor any fixed or reserved edge. Let U v be the set of
undetermined paths contained in subtrees rooted at children of v. Let X v be the set
of all paths in groups of exclusive paths and groups of 2-exclusive paths contained
in subtrees rooted at children of v. In the following, we explain how the algorithm
processes node v and determines which of the paths in P v [U v [X v should be accepted,
rejected, deferred, or left (or put) in an unresolved state.
Observe that for a given set of paths with lca v the problem of determining a
maximum-cardinality subset of edge-disjoint paths is equivalent to solving MEDP in
a star and can thus be done in polynomial time by computing a maximum matching
in a bipartite graph (cf. §2). Whenever we use an expression like "compute a maximum
number of edge-disjoint paths in S ⊆ P_v" in the following, we imply that the computation
should be carried out by employing this reduction to maximum matching.
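This reduction is easy to implement. The following sketch (not the paper's code; the
path representation, all names, and the treatment of paths that start or end at v via
private dummy children are our own simplifications) computes a maximum edge-disjoint
subset of the paths with lca v by a simple augmenting-path bipartite matching.

  def max_edge_disjoint_at_node(paths):
      """Maximum set of pairwise edge-disjoint paths whose lca is the current node v.

      Each path is given as a pair (c_in, c_out): the child of v through which the
      path ascends to v and the child through which it descends again.  Two such
      paths are edge-disjoint iff they share neither the incoming top edge (c_in, v)
      nor the outgoing top edge (v, c_out), so the problem is a bipartite matching
      between incoming and outgoing top edges of v.
      """
      adj = {}             # c_in -> list of reachable c_out
      representative = {}  # (c_in, c_out) -> index of one such path
      for idx, (cin, cout) in enumerate(paths):
          adj.setdefault(cin, []).append(cout)
          representative.setdefault((cin, cout), idx)

      match_out = {}       # c_out -> matched c_in

      def try_augment(cin, seen):
          for cout in adj.get(cin, []):
              if cout in seen:
                  continue
              seen.add(cout)
              if cout not in match_out or try_augment(match_out[cout], seen):
                  match_out[cout] = cin
                  return True
          return False

      for cin in adj:
          try_augment(cin, set())
      # indices (into `paths`) of a maximum edge-disjoint subset
      return [representative[(cin, cout)] for cout, cin in match_out.items()]

  # Example: four paths with lca v; paths 0 and 2 can be accepted together.
  print(max_edge_disjoint_at_node([("a", "b"), ("a", "c"), ("b", "c"), ("c", "b")]))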
We will use the following property of bipartite graphs: for s ≤ 2, the fact
that a maximum matching in a bipartite graph G has cardinality s implies that there
are s vertices in G such that every edge is incident to at least one of these s vertices.
(The property holds for arbitrary values of s and is known as the König theorem [23];
see, e.g., the book by Berge [4, pp. 132-133].)
Observe that each child of the current node v is the root of a processed subtree,
which can, by Invariant A, contain at most one of the following: one undetermined
path, or one group of exclusive paths, or one group of 2-exclusive paths. Let k be
the number of children of v that have an undetermined path in their subtree, let ℓ be
the number of children of v that have a group of exclusive paths, and let m be
number of children of v that have a group of 2-exclusive paths. We use the expression
subtrees with exclusive paths to refer to all subtrees rooted at children of v with either
a group of exclusive paths or with a group of 2-exclusive paths.
Note that one main difficulty lies in determining which of the paths in U_v ∪ X_v
should be accepted and which should be rejected. If k + ℓ + m is bounded by a
constant, all possible combinations of accepting and rejecting paths in U_v ∪ X_v can be
tried out in polynomial time, but if k + ℓ + m is large, the algorithm must proceed in
a different way in order to make sufficiently good decisions. The exact threshold for
determining when k + ℓ + m is considered large and, consequently, the running-time
of the algorithm depend on the constant ε.
Let F, U, X, A, and d denote the quantities defined in §3.2 at the instant just
before the algorithm processes node v. Let F 0 , U 0 , X 0 , A 0 and d 0 denote the respective
quantities right after node v is processed. Furthermore, denote by a v the number of
paths that are newly accepted while processing v and by d v the number of groups of
deferred paths that are newly created while processing v.
We can assume that there is a set O ⊆ F ∪ U ∪ X of edge-disjoint paths satisfying
Conditions (a) and (b) of Invariant B before v is processed. In every single case of the
following case analysis, we show how to construct a set O 0 that satisfies Invariant B
after v is processed. O 0 is obtained from O by replacing paths, removing paths,
or inserting paths as required. In particular, O 0 must be a set of edge-disjoint paths
satisfying O' ⊆ F' ∪ U' ∪ X'. Therefore, all paths intersecting a newly accepted path or
the reserved edge of a newly created group of deferred paths must be removed from O.
Note that at most two such paths can have smaller level than v, because all such paths
of smaller level must use the edge (v; p(v)) or (p(v); v). Paths that are rejected by the
algorithm must be removed or replaced in O. If a new group of exclusive paths or
group of 2-exclusive paths is created, O 0 must contain one or two paths, respectively,
from that group so that Condition (b) of Invariant B is maintained. Furthermore, we
must ensure that |O'| is smaller than |O| by at most (5/3 + ε)(a_v + d_v). As the value |A| + d
increases by a_v + d_v while v is processed (i.e., we have |A'| + d' = |A| + d + a_v + d_v),
this implies that Condition (a) of Invariant B holds also after v is processed, i.e.,
|P*| ≤ (5/3 + ε)(|A'| + d') + |O'|.
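In symbols, the bookkeeping used in every case below is the following (a sketch, assuming
the reconstructed form of Condition (a)): if |O| − |O'| ≤ (5/3+ε)(a_v + d_v), then

  |P^*| ≤ (5/3+\varepsilon)(|A|+d) + |O|
        ≤ (5/3+\varepsilon)(|A|+d) + |O'| + (5/3+\varepsilon)(a_v+d_v)
        = (5/3+\varepsilon)(|A'|+d') + |O'| .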
Case 1. 2="g. The algorithm can try out all combinations
of accepting or rejecting unresolved paths in the subtrees rooted at children of v:
for undetermined paths there are two possibilities (accepting or rejecting the path),
for groups of exclusive paths there are two possibilities (accepting the lower path or
accepting the higher path), and for groups of 2-exclusive paths there are either four
possibilities (in the case of a pair of independent groups of exclusive paths as shown
in Fig. 3.3: accepting the lower or higher path in one group and the lower
or higher path in the other group) or two relevant possibilities (in the cases shown
in Fig. 3.4: accepting the lower or higher path of the group of exclusive
paths contained in the group of 2-exclusive paths and the edge-disjoint path among
the remaining two paths; note that accepting no path of the group of exclusive paths
and only the remaining two paths blocks more paths from F than any of the other
two possibilities, hence we do not need to consider this third possibility) of accepting
two edge-disjoint paths of the group. Hence, the number of possible combinations is
bounded from above by 2^{k+ℓ} · 4^m = O(1). For each combination γ, the algorithm
computes a maximum number s_γ of edge-disjoint paths in P_v not intersecting
any of the u_γ paths from U_v ∪ X_v that are (tentatively) accepted for this combination.
Let s be the maximum of u_γ + s_γ taken over all combinations γ. Note that s is the
cardinality of a maximum-cardinality subset of edge-disjoint paths in P_v ∪ U_v ∪ X_v.
If s = 0, the algorithm does nothing and proceeds with the next node. Otherwise, we
distinguish the following cases.
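The brute-force step of Case 1 can be sketched as follows (our own illustration, not the
paper's code): the Choice records with fields paths and blocked_edges are a hypothetical
representation of the acceptance options, and max_disjoint is assumed to be the
matching routine sketched earlier, restricted to paths that avoid the blocked top edges.

  from itertools import product

  def best_combination(unresolved_options, pv_paths, max_disjoint):
      """Enumerate all acceptance combinations for the unresolved subtrees of v.

      `unresolved_options` has one list of Choice objects per subtree with an
      undetermined path or a group of (2-)exclusive paths; the number of
      combinations is at most 2^(k+l) * 4^m, a constant in Case 1.
      """
      best = (0, None)
      for combo in product(*unresolved_options):
          accepted = [p for choice in combo for p in choice.paths]   # u_gamma paths
          blocked = set()
          for choice in combo:
              blocked |= set(choice.blocked_edges)
          s_gamma = max_disjoint(pv_paths, blocked)
          total = len(accepted) + len(s_gamma)                       # u_gamma + s_gamma
          if total > best[0]:
              best = (total, (accepted, s_gamma))
      return best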
Case 1.1. s = 1.
Case 1.1.1. k = ℓ = m = 0. If P_v has only one equivalence class of paths, pick
one of them, say p, arbitrarily and make it an undetermined path. (Hence, p becomes
the only unresolved path in the subtree rooted at v.) Reject all other paths in P_v. If O
contains a path p' ≠ p from P_v, replace p' by p in O to obtain O' (in order to ensure
O' ⊆ F' ∪ U' ∪ X'); otherwise, let O' = O. We have a_v = d_v = 0. Obviously, the invariants are satisfied.
If P v has more than one equivalence class of paths, there must be an edge e
incident to v that is shared by all paths in P v (as a consequence of the K-onig theorem).
Make P_v a group of deferred paths with reserved edge e. We have a_v = 0 and d_v = 1.
O can contain at most one path intersecting edge e: either a path from P_v or a path of
smaller level. It suffices to remove this path from O in order to obtain a valid set O',
and we get |O'| ≥ |O| − 1 ≥ |O| − (5/3 + ε)(a_v + d_v), and the
invariants are satisfied.
Case 1.1.2. k = 1, ℓ = m = 0. There is one child c of v that has an undetermined
path p with lca w in its subtree (possibly w = c). If P_v = ∅, the algorithm does
nothing and leaves p in its undetermined state. If P_v ≠ ∅, all paths in P_v must
intersect p in the same edge, say in the edge (u, w). (The case that
they intersect p in an edge (w, u) is symmetrical.) The algorithm picks an arbitrary
path q from P_v and makes {p, q} a group of exclusive paths with fixed edge (u, w).
(Hence, the subtree rooted at c now contains a group of exclusive paths.) All other paths
in P_v are rejected, and we have a_v = d_v = 0. We must ensure that O' contains p or q in order to satisfy Condition (b)
of Invariant B. If O does not contain any path from P_v ∪ U_v, by Property (E) either
p or q can be inserted into O after removing at most one path of smaller level. If
O contains a path p' from P_v ∪ U_v already, this path can be replaced by p or q if it is
not already one of them. In either case |O'| ≥ |O|, and the invariants are satisfied.
Case 1.1.3. ℓ = 1, k = m = 0. There is one child of v that has a group of
exclusive paths in its subtree. As any path from P_v could be combined with a path
from the group of exclusive paths to obtain two edge-disjoint paths and because we
have assumed s = 1, we must have P_v = ∅. Hence, the algorithm does nothing at
node v and leaves the group of exclusive paths in its intermediate state.
Case 1.2. s = 2. Observe that k + ℓ + m ≤ 2. In many of the subcases of
Case 1.2, the algorithm will yield a_v = 2. If O contains at most one path from
P_v ∪ U_v ∪ X_v, removing that path and at most two paths of smaller level is clearly
sufficient to obtain a valid set O' in such subcases. Therefore, we do not repeat this
argument in every relevant subcase; instead, we discuss only the case that O contains
two paths from P_v ∪ U_v ∪ X_v.
Case 1.2.1. m = 1. There is a subtree rooted at a child of v that
contains a group of 2-exclusive paths. We must have P_v = ∅, as any path in P_v
could be combined with two paths from X_v to form a set of three edge-disjoint paths.
Hence, the algorithm does nothing at node v and leaves the group of 2-exclusive paths
in its unresolved state.
Case 1.2.2. ℓ = 2. There are two children of v whose subtrees contain
a group of exclusive paths. Note that P_v = ∅ in this case, as any path
from P v could be combined with one exclusive path from each subtree to obtain a set
of three edge-disjoint paths.
Fig. 3.5. Case 1.2.3.1: P_v contains two edge-disjoint paths (left-hand side); Case 1.2.3.2 (a):
The fixed edge and e have the same direction (right-hand side).
If the fixed edges of both groups of exclusive paths point in the same direction
(i.e., are both directed to the root or to the leaves), the algorithm accepts the lower
paths of both groups of exclusive paths. The higher paths are rejected, and no edge is
marked fixed anymore. We have a_v = 2 and d_v = 0, and at most three paths must be
removed from O to obtain a valid set O 0 : the two paths from the groups of exclusive
paths that are contained in O, and at most one path of smaller level using the edge
between v and p(v) whose direction is opposite to the direction of the formerly fixed
edges.
If the fixed edges of the groups of exclusive paths point in different directions (i.e.,
one is directed towards the root and one towards the leaves), the groups represent a
pair of independent groups of exclusive paths, and the algorithm can create a new
group of 2-exclusive paths. Note that O contains two paths from the new group of
2-exclusive paths already, because it contained one path from each of the two groups
of exclusive paths in X_v due to Condition (b) of Invariant B. Therefore, we can set
O' = O, and the invariants are satisfied.
Case 1.2.3. k = ℓ = 1. There is one child of v that has a group of
exclusive paths in its subtree and one child of v that has an undetermined path in its
subtree. All paths in P v must intersect the undetermined path, because otherwise a
path from P v could be combined with the undetermined path and an exclusive path
to obtain a set of three edge-disjoint paths.
Case 1.2.3.1. There are two edge-disjoint paths in P v . In this case, the situation
must be as shown on the left-hand side of Fig. 3.5: the two edge-disjoint paths from
must intersect the group of exclusive paths in a way that blocks all exclusive paths
from being accepted, and there cannot be any other kinds of paths in P v .
The algorithm accepts the lower path from the group of exclusive paths and the
undetermined path, and it rejects all other paths in P_v ∪ U_v ∪ X_v. No edge is marked
fixed anymore. We have a_v = 2 and d_v = 0. Note that any combination of two
edge-disjoint paths from P v [U v [X v blocks at least three of the four top edges of the
paths accepted by the algorithm. Hence, if O contains two paths from P_v ∪ U_v ∪ X_v,
it can contain at most one path of smaller level intersecting the paths accepted by
the algorithm, and it suffices to remove at most three paths from O to obtain a valid
set O'.
Fig. 3.6. Cases 1.2.3.2 (b) and 1.2.3.2 (c): The fixed edge and e have different directions.
Case 1.2.3.2. All paths in P v intersect the same edge e of the undetermined path.
Case 1.2.3.2 (a). The direction of e is the same as that of the fixed edge of the
group of exclusive paths (see the right-hand side of Fig. 3.5). The algorithm accepts
the undetermined path and the lower path from the group of exclusive paths. All
other paths in P_v ∪ X_v are rejected, and no edge is marked fixed anymore. We have
a_v = 2 and d_v = 0. If O contains two paths from P_v ∪ U_v ∪ X_v, they must use
the fixed edge and the edge e, and at most one further path from O can be blocked by
the paths accepted by the algorithm (because such a path must use the edge between
v and p(v) in the direction opposite to the direction of e). Thus, it suffices to remove
at most three paths from O to obtain a valid set O 0 .
Case 1.2.3.2 (b). The direction of e is different from that of the fixed edge, and
there is a path that does not intersect the higher exclusive path (see the left-hand
side of Fig. 3.6). The algorithm uses X v , p and the undetermined path together
to create a new group of 2-exclusive paths consisting of a pair of independent groups
of exclusive paths. All other paths in P_v are rejected by the algorithm. In addition to
the fixed edge of the old group of exclusive paths, the edge e is marked fixed. Note
that O contains one path from X v due to Condition (b) of Invariant B. If O contains
the undetermined path or the path p, let O' = O. If O contains a path other than p
from P v , replace this path either by p or by the undetermined path (one of these
must be possible). If O does not contain a path from P v [ U v but contains a path p 0
using the edge between v and p(v) in the direction given by edge e, replace p 0 either
by p or by the undetermined path (one of the two must be possible). If O does not
contain a path from P v [ U v and no path using the edge between v and p(v) in the
direction given by edge e, add either p or the undetermined path to O. In any case,
the invariants are satisfied. In particular, jO
Case 1.2.3.2 (c). The direction of e is different from that of the fixed edge, and
all paths in P v intersect the higher exclusive path (see the right-hand side of Fig. 3.6).
The algorithm accepts the undetermined path and the lower path from the group of
exclusive paths, and it rejects all other paths from P_v ∪ X_v. No edge is marked fixed
anymore. We have a_v = 2 and d_v = 0. If O contains two paths from P_v ∪ U_v ∪ X_v, it
must contain at least one of the two paths accepted by the algorithm, and the other
path in O uses a top edge of the other path accepted by the algorithm. O contains at
most one path of smaller level intersecting the paths accepted by the algorithm, and
it suffices to remove at most three paths from O in order to obtain a valid set O 0 .
Fig. 3.7. Case 1.2.4.1: P_v contains two edge-disjoint paths that block the exclusive paths.
Case 1.2.4. ℓ = 1, k = m = 0. There is one child c of v that has a group of
exclusive paths in its subtree. Denote the higher and the lower path in the group of
exclusive paths by p and q, respectively. Assume without loss of generality that the
fixed edge e 0 of the group of exclusive paths is directed towards the root of the tree
(as shown in Fig. 3.7). Note that P_v ≠ ∅ in this case. We distinguish further cases
regarding the maximum number of edge-disjoint paths in P v .
Case 1.2.4.1. There are two edge-disjoint paths p_1 and p_2 in P_v. As s = 2, they
must intersect the exclusive paths in a way that blocks all of them from being
accepted. See Fig. 3.7. Let p 1 intersect p, and let p 2 intersect q. Let c 0 6= c be the
child of v such that p 1 uses the edges (c; v) and (v; c 0 ), and let c 00 6= c be the child
of v such that p 2 uses the edges (c 00 ; v) and (v; c). Note that c
the top edge of p intersected by p 1 be e 1 , and let the top edge of q intersected by p 2
be e 2 . As contains only two edge-disjoint paths, every path p must
either intersect edge e 1 , or intersect edge e 2 , or intersect both p 1 and p 2 . (The latter
case is possible only if c 0 6= c 00 and if all paths in P v that intersect e 1 use the edges
(c; v) and (v; c 0 ) and all paths in P v that intersect e 2 use the edges (c 00 ; v) and (v; c); in
that case, p 0 must use as shown on the right-hand side of Fig. 3.7.)
Case 1.2.4.1 (a). All paths in P v that intersect e 1 use the edges (c; v) and (v; c 0 ),
and all paths in P v that intersect e 2 use the edges
First, assume that all paths in P v intersect either e 1 or e 2 . Note that there are
exactly two equivalence classes of paths in P v in this case. See Fig. 3.7 (left-hand
side). The algorithm uses the group of exclusive paths and one representative from
each of the two equivalence classes of paths in P v to create a group of 2-exclusive
paths. All other paths in P v are rejected. The fixed edge e 0 of the group of exclusive
paths is no longer marked fixed, instead the edges e 1 and e 2 are marked fixed. If O
contains two paths from P v [X v , one of them must be from X v due to Condition (b)
of Invariant B and the other can be replaced by a path in the new group of 2-exclusive
paths. Otherwise, it is possible to remove the path from X v and at most one additional
path from O such that the resulting set contains no path from P v [ X v , at most one
path of smaller level touching v, and no path of smaller level intersecting a fixed edge
of the new group of 2-exclusive paths. By Property (2E), two paths from the new
group of 2-exclusive paths can then be inserted into that set to obtain O'. We have
|O'| ≥ |O|, and the invariants are satisfied.
Now, assume that there is a path p 0 2 P v that intersects neither e 1 nor e 2 . As
noted above, we must have c' ≠ c'' in this case, and p' must use the edges (c'', v) and
(v, c'); see Fig. 3.7 (right-hand side). The algorithm accepts the lower path from the
group of exclusive paths and the path p', and it rejects all other paths in P_v ∪ X_v.
No edge is marked fixed anymore. We have a_v = 2 and d_v = 0. Note that any
combination of two edge-disjoint paths from P_v ∪ X_v blocks at least three of the four
top edges of the paths accepted by the algorithm. Hence, if O contains two paths
from P_v ∪ X_v, it can contain at most one path of smaller level intersecting the paths
accepted by the algorithm, and it suffices to remove at most three paths from O to
obtain a valid set O 0 .
Case 1.2.4.1 (b). There are at least two equivalence classes of paths in P v intersecting
the higher path of the group of exclusive paths. The algorithm accepts the
lower path of the group of exclusive paths and makes the paths in P v intersecting the
higher path a group of deferred paths. All other paths in P v [X v are rejected, and no
edge is marked fixed anymore. The reserved edge of the group of deferred paths is the
top edge shared by all these paths. If O contains two paths from P v [ X v , note that
one of the two paths must be from X v (due to Condition (b) of Invariant B) and that
these two paths also block the top edges of the lower path of the group of exclusive
paths. Hence, O cannot contain any path of smaller level intersecting the lower path,
and it can contain at most one path of smaller level intersecting the reserved edge of
the newly deferred paths. It suffices to remove at most three paths from O to obtain
a valid set O 0 .
Case 1.2.4.1 (c). There is only one equivalence class of paths in P v intersecting
the higher path of the group of exclusive paths, and there are at least two equivalence
classes of paths in P v intersecting the lower path of the group of exclusive paths.
The algorithm accepts the higher path of the group of exclusive paths and makes the
paths in P v intersecting the lower path a group of deferred paths. All other paths
in are rejected, and no edge is marked fixed anymore. The reserved edge of
the group of deferred paths is the top edge shared by all these paths. If O contains
two paths from note that one of the two paths must be from X v (due to
Condition (b) of Invariant B) and that these two paths also block edge e 1 . Hence, O
cannot contain any path of smaller level intersecting e 1 , and it can contain at most
one path of smaller level intersecting the reserved edge of the newly deferred paths or
the top edge of the higher path that is directed towards the leaves, because all such
paths must use the edge (p(v); v). It suffices to remove at most three paths from O
to obtain a valid set O 0 .
Case 1.2.4.2. P v does not contain two edge-disjoint paths. Let e be an edge
incident to v such that all paths in P v use edge e.
Case 1.2.4.2 (a). e = (v, c') for some child c' ≠ c of v, and P_v has at least two different
equivalence classes of paths. The algorithm makes all paths in P v a new group of
deferred paths with reserved edge e and accepts q, the lower path of the group of
exclusive paths. Path p is rejected, and no edge in this subtree is marked fixed
anymore. We have a O contains two paths from P v [ X v , these paths
block two of the three top edges blocked by the algorithm: the fixed edge e 0 of the
group of exclusive paths and edge e. O can contain at most one path of smaller level
Fig. 3.8. Case 1.2.5 (a): All sets of two edge-disjoint paths use the same four top edges (left-
hand side); Case 1.2.5 (b): there is only one equivalence class of paths using edge e 1 , but more than
one class using edge e2 (right-hand side).
that intersects the path accepted by the algorithm or the reserved edge of the new
group of deferred paths, and it suffices to remove at most three paths from O to obtain
a valid set O 0 .
Case 1.2.4.2 (b). e ≠ (v, c') for all children c' ≠ c of v, or P_v has only one
equivalence class of paths. If there is a path p' ∈ P_v that does not intersect q, the
algorithm accepts p' and q. If all paths in P_v intersect q, the algorithm accepts p and
an arbitrary path from P v . In both cases, all other paths in P v [X v are rejected, and
no edge in this subtree is marked fixed anymore. We have d 2. Assume
that O contains two paths from P v [ X v . We will show that it suffices to remove at
most three paths from O to obtain a valid set O 0 .
If the algorithm has accepted p, O must also contain p and a path from P v , thus
blocking at least three of the four top edges of the paths accepted by the algorithm.
At most one further path in O can be blocked by the paths accepted by the algorithm.
Now assume that the algorithm has accepted q. Observe that the two paths from
that are in O must also use the edges e 0 and e, thus blocking two of the four
top edges of paths accepted by the algorithm. If e and e 0 have the same direction,
O can contain at most one path of smaller level intersecting the paths accepted by
the algorithm, because such a path must use the edge (p(v); v). If P v has only one
equivalence class of paths, the paths from P v [X v that are in O block three of the four
top edges of paths accepted by the algorithm, and again it suffices to remove at most
one path of smaller level from O. Finally, consider the case that P v has more than
one equivalence class of paths and that e = (v; c). Since edge e blocks more paths of
smaller level than the top edge of q that is directed towards the leaves, the two paths
from P v [X v that are in O do in fact block at least as many paths of smaller level as
three of the four top edges of the paths accepted by the algorithm.
Case 1.2.5. k = ℓ = m = 0. Since s = 2, there must be two edges incident to v such
that all paths in P_v use at least one of these two edges (by the König theorem). Let
e_1 and e_2 be two such edges.
Case 1.2.5 (a). All possible sets of two edge-disjoint paths from P v use the same
four edges incident to v. See the left-hand side of Fig. 3.8 for an example. The
algorithm picks two arbitrary edge-disjoint paths from P v , accepts them, and rejects
all other paths from P v . We have a O contains two paths from P v ,
removing these two paths is sufficient to obtain a valid set O 0 , because they use the
same top edges as the paths accepted by the algorithm and O cannot contain any
further path intersecting the paths accepted by the algorithm.
In the following, let D be the set of paths in P v that intersect all other paths
from P v . In other words, a path p 2 P v is in D if P v does not contain a path q that is
edge-disjoint from p. Note that if Case 1.2.5 (a) does not apply, it follows that either
the paths in P v n D using edge e 1 or those using edge e 2 must have more than one
Fig. 3.9. Case 1.2.5 (c): Configurations in which two groups of deferred paths can be created.
equivalence class of paths.
Case 1.2.5 (b). There is only one equivalence class C of paths in P_v \ D using
edge e_1, and there is more than one equivalence class of paths in P_v \ D using edge e_2 and not
intersecting a path from C. See the right-hand side of Fig. 3.8. (The case with e 1
and e 2 exchanged is symmetrical. Furthermore, note that the case that there is only
one equivalence class C of paths in P v n D using edge e 1 and only one equivalence
class of paths in P v n D using edge e 2 and not intersecting a path from C satisfies
the condition of Case 1.2.5 (a).) The algorithm picks a path p from C arbitrarily,
accepts p, and makes the paths using edge e 2 and not intersecting p a group of deferred
paths with reserved edge e 2 . All other paths in P v are rejected. We have a
O contains two paths from P v , these paths must also use both top edges of p
and the newly reserved edge, and thus removing these two paths from O is sufficient
to obtain a valid set O 0 .
Case 1.2.5 (c). There is more than one equivalence class of paths in P_v \ D using
edge e_1, there is more than one equivalence class of paths in P_v \ D using edge e_2,
and Case 1.2.5 (a) does not apply. The algorithm makes the paths in P v n D using e 1
a group of deferred paths with reserved edge e 1 and the paths in P v n D using e 2 a
group of deferred paths with reserved edge e 2 . All other paths in P v are rejected. Note
that no matter which paths of smaller level are accepted by the algorithm later on,
there are still two paths, one in each of the two groups of newly deferred paths, that
are edge-disjoint from these paths of smaller level and from each other. (Otherwise,
Case 1.2.5 (a) would apply.) We have a 2. If O contains two paths
from P v , these paths use e 1 and e 2 as well, and removing these two paths from O
is sufficient to obtain a valid set O 0 , because O cannot contain any further path
intersecting a reserved edge of the newly deferred paths.
Case 1.2.6. k = 1, ℓ = m = 0. There is one child of v that has an undetermined
path p in its subtree. Let P'_v
denote the set of paths in P v that do not intersect p.
We begin by making some simple observations. First, P'_v must not contain two edge-disjoint
paths. Hence, there must be an edge e incident to v that is shared by all
paths in P'_v. Second, s = 2 implies that the maximum number of edge-disjoint paths
in P_v is at most two. So there must be two edges e_1 and e_2 incident to v such that
every path in P v uses at least one of these two edges.
Let the lca of the undetermined path be v', and let c be the child of v whose
subtree contains the undetermined path (possibly v' = c). Let v_1 and v_2 be the children
of v' such that the undetermined path uses the edges (v_1, v') and (v', v_2). We distinguish
a number of subcases regarding the number of equivalence classes in P'_v.
Case 1.2.6 (a). P'_v is empty. Let P_1 and P_2 denote the sets of paths in P_v
that intersect p in the edge (v_1, v') and in the edge (v', v_2), respectively. Note that
P_v = P_1 ∪ P_2. For i = 1, 2, the algorithm accepts an arbitrary
path from P_i if P_i has only one equivalence class of paths and creates a new group
of deferred paths from P_i otherwise. The undetermined path p is rejected. We have
a_v + d_v = 2.
Fig. 3.10. Case 1.2.7: v has two children with undetermined paths in their subtrees.
If O contains two paths from P_v ∪ U_v, removing these two paths is
sufficient, because they block at least as many paths of smaller level as the newly
accepted paths or newly reserved edges.
Case 1.2.6 (b). P'_v has one equivalence class of paths. The algorithm accepts
an arbitrary path from P'_v and the undetermined path p. All other paths in P_v are
rejected. We have a_v = 2 and d_v = 0. Assume that O contains two paths from P_v ∪ U_v.
If O contains p, O must also contain a path from P 0
v , and it suffices to remove these
two paths from O to obtain a valid set O 0 . If O does not contain p but contains a
path from P 0
must also contain a path from P v that intersects p; these two paths
block at least three of the four top edges blocked by the algorithm, and it suffices to
remove these two paths and at most one path of smaller level. Finally, if O contains
neither p nor a path from P 0
must contain two paths from P v that intersect p in
different top edges and at least one of which intersects also a top edge of the paths in
suffices to remove at most three paths from O to obtain a valid set O 0 .
Case 1.2.6 (c). P 0
v has more than one equivalence class of paths. Let e be the
edge incident to v that is shared by all paths in P 0
v . The algorithm accepts the
undetermined path p and creates a new group of deferred paths from the paths in
. All other paths in P v are rejected. We have a 1. Assume that O
contains two paths from P v [ U v . If O contains p, O must also contain a path from
v , and it suffices to remove these two paths from O to obtain a valid set O 0 . If O
does not contain p but contains a path from P 0
must contain a path from P v that
intersects these two paths block at least two of the three top edges blocked by the
algorithm, and it suffices to remove these two paths and at most one path of smaller
level. Finally, if O contains neither p nor a path from P 0
must contain two paths
from P v that intersect p in different top edges; again, these two paths block at least
two of the three top edges blocked by the algorithm, and it suffices to remove at most
three paths from O to obtain a valid set O 0 .
Case 1.2.7. k = 2, ℓ = m = 0. Two children of v have undetermined paths
in their subtrees. Denote the undetermined paths by p and q. See Fig. 3.10. As s = 2,
every path in P_v must intersect at least one undetermined path. In addition,
if there are two paths in P v that intersect one undetermined path in different top
edges, at least one of them must also intersect the other undetermined path. Let P 1
and P 2 denote the sets of paths in P v that intersect p and q, respectively. Note that
Case 1.2.7 (a). There are edge-disjoint paths p 1 and p 2 in P v such that p 1 intersects
p in a top edge e 1 but does not intersect q, and p 2 intersects q in a top edge
e 2 but does not intersect p, and such that e 1 and e 2 have different directions (i.e.,
one is directed towards the root, and the other is directed towards the leaves). The
algorithm makes p, q, p 1 and p 2 a group of 2-exclusive paths consisting of a pair of
independent groups of exclusive paths and rejects all other paths from P v . The edges
e 1 and e 2 are marked fixed. If O contains two paths from the new group of 2-exclusive
paths already, let O Otherwise, it is possible to replace paths in O by paths
from the new group of 2-exclusive paths to obtain O 0 . In any case, jO
Case 1.2.7 (b). If the condition for Case 1.2.7 (a) does not hold, the algorithm
accepts p and q and rejects all paths from P v . We have a
that O contains two paths from P v [ U v . If O contains p and q, it suffices to remove
these two paths. If O contains only one of p and q, say p, it must contain a path from
P v that intersects q, and these two paths block three of the four top edges blocked
by the algorithm. If O contains neither p nor q, it must contain two paths from P v .
If at least one of these two paths in O intersects both p and q, these two paths again
block at least three of the four top edges blocked by the algorithm. If both paths in
O intersect only one of p and q, it must be the case that one of them intersects p in
an edge e 1 and one of them intersects q in an edge e 2 . If e 1 and e 2 have the same
direction, O can contain at most one path of smaller level intersecting a path accepted
by the algorithm. If e 1 and e 2 have different directions, the condition of Case 1.2.7 (a)
applies.
Case 1.3. s ≥ 3. The algorithm accepts the s paths and rejects all other paths
from P_v ∪ U_v ∪ X_v. No edge in this subtree is marked fixed anymore. As s is the
maximum number of edge-disjoint paths in P_v ∪ U_v ∪ X_v, O can contain at most s
paths from P_v ∪ U_v ∪ X_v. Furthermore, O can contain at most two paths from F using
the edges (v, p(v)) or (p(v), v), and these are the only two further paths in O that
could possibly be blocked by the s paths accepted by the algorithm. Hence, a valid
set O' can be obtained from O by deleting at most s + 2 paths. As s + 2 ≤ (5/3)s for s ≥ 3,
the invariants are maintained.
Case 2. 2="g. In this case, the algorithm cannot try out all
possibilities of accepting or rejecting unresolved paths in polynomial time. Instead,
it calculates only four candidate sets of edge-disjoint paths from
chooses the largest of them.
For obtaining two of the four sets, we employ a method of removing paths from
an arbitrary set S of edge-disjoint paths in P_v such that ℓ + 2m exclusive paths from
X_v can be accepted in addition to the paths remaining in S. The resulting set of
edge-disjoint paths in S ∪ X_v has cardinality |S| − r + ℓ + 2m, where r is the number
of paths that were removed from S. The details of the method and a proof that
r ≤ (|S| + ℓ + m)/3 will be presented later in Lemma 3.1. With this tool we are ready
to describe the candidate sets S_1, ..., S_4. Let P'_v be the subset of paths
in P_v that do not intersect any undetermined path in U_v.
1. Compute a maximum number s_1 of edge-disjoint paths in P'_v. S_1 is obtained
by taking these paths, all k undetermined paths, and as many additional edge-disjoint
paths from X_v as possible. We have |S_1| ≥ s_1 + k + m, because S_1 contains the k
undetermined paths and at least m paths from groups of 2-exclusive paths in X_v due
to Property (2E).
2. S_2 is obtained from S_1 by removing r of the s_1 paths in S_1 ∩ P'_v from S_1
such that ℓ + 2m exclusive paths can be accepted. S_2 contains ℓ + 2m exclusive paths,
and according to Lemma 3.1 only r ≤ (s_1 + ℓ + m)/3 of the s_1 paths in S_1 ∩ P'_v
were removed to obtain S_2. As S_2 still contains the k undetermined paths, we have
|S_2| ≥ k + m + (2/3)(s_1 + ℓ + m). In addition, we have |S_2| ≥ k + ℓ + 2m,
because S_2 contains all k undetermined paths from U_v and ℓ + 2m exclusive paths.
3. S 3 is obtained by first computing a maximum number s 3 of edge-disjoint
paths in P v and then adding as many edge-disjoint paths from X v [ U v as possible.
We have |S_3| ≥ s_3 + m, because S_3 contains at least m paths from groups of 2-exclusive
paths in X v due to Property (2E).
4. S 4 is obtained from S 3 by removing r of the s 3 paths in S 3 " P v from S 3 such
that '+2m exclusive paths can be accepted, in the same way as S 2 is obtained from S 1 .
according to Lemma 3.1, we have jS 4 j - m+(2=3)(s 3 +'+m).
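Collecting the four lower bounds, in the form reconstructed above:

  |S_1| ≥ s_1 + k + m,            |S_2| ≥ \max\{\, k+\ell+2m,\; k+m+\tfrac{2}{3}(s_1+\ell+m) \,\},
  |S_3| ≥ s_3 + m,                |S_4| ≥ m + \tfrac{2}{3}(s_3+\ell+m).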
The algorithm accepts the paths in that set S_i with maximum cardinality and
rejects all other paths from P_v ∪ U_v ∪ X_v. We have a_v = max_i |S_i| and d_v = 0.
Note that a_v ≥ |S_2| ≥ max{3, 2/ε} and that this implies 2 ≤ ε·a_v.
Let O be the number of paths from P v that are
contained in O v and that intersect at least one of the k undetermined paths. Observe
that O v can contain at most k \Gamma b 0 =2 undetermined paths from U v . Note that the
maximum number of edge-disjoint paths in P v is s 3 and that the maximum number
of edge-disjoint paths in P 0
v is s 1 . Using jO
and using jO
With this upper bound on jO v j and the lower bounds on the cardinalities of the
four sets S i , we can now prove that at least one of the sets S i satisfies jO
it suffices to remove at most jO paths from O in order
to obtain a valid set O 0 , this implies that the invariants are maintained. If
we have jO
the following cases.
Case 2.1. ff ? 3=2. If '
we have a v - jS 3
use (3.1) and a v - jS 4 j to bound
the ratio between jO
a v
a v
Case 2.2.
we have a v - jS 3
a
use (3.1) and a v - jS 2 j to bound the ratio between jO
a v
a v
Fig. 3.11. Set of edge-disjoint paths in P_v.
Case 2.3. ff - 4=3. From (3.1) we get jO 2m, and we have
a
We have shown that |O_v| + 2 ≤ (5/3 + ε)·a_v holds in all subcases of Case 2. To
complete the description of Case 2, we still have to explain the method for removing
paths from S 1 and S 3 in order to obtain S 2 and S 4 , respectively. The method takes
an arbitrary set S of edge-disjoint paths in P v and removes paths from S to obtain
a set S 0 such that every subtree with exclusive paths is touched by at most one path
in S 0 . The motivation for this is that S can cause all paths from a group of exclusive
paths to be blocked only if two paths from S intersect the corresponding subtree
(Property (E)). Similarly, if only one path from a group of 2-exclusive paths can be
accepted, S must contain two paths from P v that intersect the corresponding subtree
(Property (2E)).
The method proceeds as follows. Consider a graph G with the paths in S as its
vertices and an edge between two paths if they touch the same child of v. G has
maximum degree two and consists of a collection of chains and cycles. Note that
every edge of G corresponds to a child of v that is touched by two paths in S. We
are interested in the maximal parts of chains and cycles that consist entirely of edges
corresponding to children of v that are the roots of subtrees with exclusive paths.
There are the following possibilities for such parts:
(i) A cycle such that all paths on the cycle have both endpoints in a subtree
with exclusive paths.
(ii) A chain such that the paths at both ends have only one endpoint in a subtree
with exclusive paths, while the internal paths have both endpoints in subtrees with
exclusive paths.
(iii) A chain such that the path at one end has only one endpoint in a subtree
with exclusive paths, while all other paths have both endpoints in a subtree with
exclusive paths.
(iv) A chain such that all its paths have both endpoints in a subtree with exclusive
paths.
Note that every such maximal part of a cycle or chain has length (number of paths)
at least two, because it contains at least one edge. The method for removing paths
proceeds as follows. Cycles of even length and chains are handled by removing every
other path from S, starting with the second path for chains. Cycles of odd length are
handled by removing two consecutive paths in one place and every other path from
the rest of the cycle.
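The removal rule for one maximal chain or cycle is easy to state in code; the following
sketch is our own illustration (not the paper's code) of exactly this rule.

  def paths_to_remove(segment, is_cycle):
      """Paths to delete from one maximal chain or cycle of the auxiliary graph G.

      `segment` lists the consecutive paths of the chain/cycle (length >= 2).
      Chains and even cycles lose every other path, starting with the second one;
      an odd cycle loses two consecutive paths in one place and then every other
      path of the rest, so no two surviving paths remain adjacent.
      """
      n = len(segment)
      if not is_cycle or n % 2 == 0:
          return [segment[i] for i in range(1, n, 2)]          # second, fourth, ...
      # odd cycle: remove positions 0 and 1, then every other one of the rest
      return [segment[0], segment[1]] + [segment[i] for i in range(3, n, 2)]

For the example discussed next (Figs. 3.11 and 3.12), this rule removes two paths from
the cycle of length three and the middle path of the chain of length three, matching the
counts given in the text.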
Consider the example depicted in Fig. 3.11. The node v has eight children, named
a to h, and six of them (c to h) are roots of subtrees with exclusive paths (indicated
Fig. 3.12. Graph G representing the structure of the paths.
by an exclamation mark). A set S of edge-disjoint paths in P v is sketched. The graph
G obtained from this set is shown in Fig. 3.12, and the label of a vertex in G is u-w
if the corresponding path begins in the subtree rooted at u and ends in the subtree
rooted at w. With respect to (i)-(iv) above, G contains a cycle of type (i) with length
three (containing the paths f -g, g-h, and h-f) and a chain of type (ii) with length
three (containing the paths a-d, d-c, and c-b). According to the rules given above,
three paths would be removed from S: two paths, say f -g and g-h, from the cycle,
and the path d-c from the chain of length three.
It is easy to see that this process always ensures that in the end S contains, for
each subtree with exclusive paths, at most one path with an endpoint in that subtree.
Hence, due to Properties (E) and (2E), S can be filled up with edge-disjoint exclusive
paths until it contains all exclusive paths.
Lemma 3.1. Let v be a node with ℓ + m children with exclusive paths. Let S ⊆ P_v
be a set of edge-disjoint paths. Let S' ⊆ S be the set of paths obtained from S by
removing paths according to the method described above. Let r = |S| − |S'|. Then
r ≤ (|S| + ℓ + m)/3.
Proof. Let a be the number of cycles of type (i), and let a i , 1 - i - a, be
the length of the ith cycle. Denote the number of chains of type (ii) by b and their
lengths by b i , 1 - i - b. Denote the number of chains of type (iii) by c and their
lengths by c i , Denote the number of chains of type (iv) by d and their
lengths by d d. Note that a As the number of
paths contained in the union of all these chains and cycles is at most s, we have
P a
Furthermore, considering the number
of children with exclusive paths covered by each chain or cycle, we obtain
P a
in the latter inequality and adding up the two inequalities, we obtain
P a
Taking into account
that
P a
\Sigma a i\Upsilon
c and that da i =2e - (2=3)a i
for a i - 2, the lemma follows.
In the example displayed in Fig. 3.11, we had |S| = 7 and ℓ + m = 6, and it was
sufficient to remove 3 ≤ (|S| + ℓ + m)/3 paths.
3.5. Running-time of the algorithm. The running-time of our algorithm is
polynomial in the size of the input for fixed " ? 0, but exponential in 1=". Let a
bidirected tree E) with n nodes and a set P containing h directed paths in T
(each path specified by its endpoints) be given. For arbitrary " ? 0, we claim that
our approximation algorithm can be implemented to run in time
O
The details of the implementation as well as experimental results will be reported
in [9].
Note that we can choose ε = 1/log n and still achieve running time polynomial
in the size of the input. The resulting algorithm achieves approximation ratio
5/3 + 1/log n and, therefore, asymptotic approximation ratio 5/3. (If the optimal
solution contains many paths, n must also be large, and the approximation ratio gets
arbitrarily close to 5/3.)
4. Generalizations. There are several generalizations of MEDP. First, it is
meaningful to consider the weighted version of the problem, where each path has a
certain weight and the goal is to maximize the total weight of the accepted paths.
The weighted version of MEDP can still be solved optimally in polynomial time in
bidirected stars and spiders (by reduction to maximum-weight matching in a bipartite
graph) and in bidirected trees of bounded degree (by a minor modification of the
dynamic programming procedure given in §2).
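The star case of the weighted problem can be solved with any maximum-weight bipartite
matching routine. The sketch below is our own illustration (not the paper's code); it
assumes SciPy's linear_sum_assignment from scipy.optimize and nonnegative path weights.

  import numpy as np
  from scipy.optimize import linear_sum_assignment

  def max_weight_paths_in_star(paths):
      """Weighted MEDP in a bidirected star.

      Each path is (c_in, c_out, weight): it enters the center through the edge
      (c_in, center) and leaves through (center, c_out).  Two paths conflict iff
      they share one of these two directed edges, so an optimal solution is a
      maximum-weight bipartite matching between incoming and outgoing edges.
      """
      ins = sorted({p[0] for p in paths})
      outs = sorted({p[1] for p in paths})
      profit = np.zeros((len(ins), len(outs)))
      best_path = {}
      for idx, (cin, cout, w) in enumerate(paths):
          i, j = ins.index(cin), outs.index(cout)
          if w > profit[i, j]:
              profit[i, j] = w
              best_path[(i, j)] = idx
      rows, cols = linear_sum_assignment(profit, maximize=True)
      return [best_path[(i, j)] for i, j in zip(rows, cols) if profit[i, j] > 0]

  # Example: paths 0 and 2 are compatible and have total weight 7.
  print(max_weight_paths_in_star([("a", "b", 5.0), ("a", "c", 4.0), ("b", "c", 2.0)]))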
Another generalization of MEDP is the MaxPC problem. For a given bidirected
tree T = (V, E), a set P of directed paths in T, and a number W of colors, the maximum
path coloring (MaxPC) problem is to compute a subset P' ⊆ P and a W-coloring of P'
(i.e., an assignment of colors 1, ..., W to the paths in P' such that paths of the same
color are edge-disjoint). The goal is to maximize the cardinality of P'. The MaxPC
problem is equivalent to
finding a maximum (induced) W -colorable subgraph in the conflict graph of the given
paths. Studying MaxPC is motivated by the admission control problem in all-optical
WDM (wavelength-division multiplexing) networks without wavelength converters:
every wavelength (color) can be used to establish a set of connections provided that the
paths corresponding to the connections are edge-disjoint, and the number of available
wavelengths is limited [5]. The weighted variant of MaxPC is interesting as well.
MaxPC and weighted MaxPC can both be solved optimally in polynomial time
for bidirected stars by using an algorithm for (the weighted version of) the capacitated
b-matching problem [15, pp. 257-259]. If the number W of colors and the maximum
degree of the bidirected tree are both bounded by constants, MaxPC and weighted
MaxPC can be solved optimally in polynomial time by dynamic programming (similar
to the procedure in x2). MaxPC is NP-hard for arbitrary W in bidirected binary trees
(because path coloring is NP-hard) and for W = 1 in bidirected trees of arbitrary
degree (because it is equivalent to MEDP in this case).
In order to obtain approximation algorithms for MaxPC with arbitrary number W
of colors, a technique due to Awerbuch et al. [1] can be employed. It allows reducing
the problem with W colors to MEDP with only a small increase in the approximation
ratio. The technique works for MaxPC in arbitrary graphs G; we discuss it here only
for trees. Let an instance of MaxPC be given by a bidirected tree
set P of paths in T , and a number W of colors. An approximation algorithm A for
arbitrary number W of colors is obtained from an approximation algorithm A 1 for
one color (i.e., for the maximum edge-disjoint paths problem) by running W copies
of A 1 , giving as input to the ith copy the bidirected tree T and the set of paths that
have not been accepted by the first i − 1 copies of A_1 (see Fig. 4.1). The output of A
is the union of the W sets of paths output by the copies of A 1 , and the paths in the
ith set are assigned color i.
In [1] it is shown that the algorithm A obtained using this technique has approximation
ratio at most ρ + 1 if A_1 has approximation ratio ρ, even if different colors are
associated with different network topologies. For identical networks, which we have
in our application, the approximation ratio achieved by A can even be bounded by
ρ^W / (ρ^W − (ρ − 1)^W), which is smaller than 1/(1 − e^{−1/ρ}) for all W. This bound is
mentioned in the journal version of [1] and can be viewed as an adaptation of a similar
result in [6]. It can be proved easily using the fact that if A has selected p k paths after
Algorithm A
Input: bidirected tree T, set P of paths, number W of colors
Output: disjoint subsets P_1, ..., P_W of P (each P_i is edge-disjoint)
begin
  R := P;
  for i := 1 to W do
  begin
    P_i := A_1(T, R);
    R := R \ P_i;
  end;
end
Fig. 4.1. Reduction from many colors to one color.
running k copies of A_1, there is still a set of at least (|P*| − p_k)/W edge-disjoint paths
among the remaining paths (where P* is an optimal solution), and the next copy of A_1 accepts at least a (1/ρ)-fraction
of this number. The reduction works also for the weighted case.
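The reduction of Fig. 4.1 is straightforward to implement; the following sketch is our
own rendering (names and the representation of paths are assumptions, not the paper's
code), with medp_algorithm standing for any single-color routine such as the exact star
algorithm or the (5/3 + ε)-approximation for arbitrary bidirected trees.

  def maxpc_from_medp(tree, paths, num_colors, medp_algorithm):
      """Run W copies of a one-color algorithm; the ith output gets color i."""
      remaining = list(paths)
      color_classes = []
      for _ in range(num_colors):
          accepted = medp_algorithm(tree, remaining)   # edge-disjoint subset
          color_classes.append(accepted)
          remaining = [p for p in remaining if p not in accepted]
      return color_classes   # color_classes[i] is the set of paths with color i+1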
Since we have an optimal algorithm for MEDP in bidirected trees of bounded
degree and (5=3 ")-approximation algorithms for MEDP in arbitrary bidirected
trees, we can employ the above technique and obtain approximation algorithms with
bidirected trees of bounded degree and with
ratio approximately 2:22 for MaxPC in arbitrary bidirected trees.
Acknowledgments
. The authors are grateful to Stefano Leonardi for pointing
out the reduction from MaxPC with arbitrary number of colors to MaxPC with one
color and to Adi Rosén for informing them about the improved analysis for the ratio
obtained by this reduction in the case of identical networks for all colors and for
supplying a preliminary draft of the journal version of [1].
--R
Competitive non
Graphs and Hypergraphs
Special issue on Dense Wavelength Division Multiplexing Techniques for High Capacity and Multiple Access Communication Systems
Location of bank accounts to optimize float: An analytic study of exact and approximate algorithms
Structure in approximation classes
Call scheduling in trees
Optimal wavelength routing on directed fiber trees
An optimal greedy algorithm for wavelength allocation in directed tree networks
Colouring paths in directed symmetric trees with applications to WDM routing
Efficient wavelength routing on directed fiber trees
Maximum bounded 3-dimensional matching is MAX SNP-complete
Approximation algorithms for disjoint paths problems
Approximating disjoint-path problems using greedy algorithms and packing integer programs
A note on optical routing on trees
Improved access to optical bandwidth in trees
Efficient access to optical bandwidth
Computational Complexity
Improved approximations for edge-disjoint paths
--TR
--CTR
Thomas Erlebach , Klaus Jansen, Implementation of approximation algorithms for weighted and unweighted edge-disjoint paths in bidirected trees, Journal of Experimental Algorithmics (JEA), 7, p.6, 2002
R. Sai Anand , Thomas Erlebach , Alexander Hall , Stamatis Stefanakos, Call control with k rejections, Journal of Computer and System Sciences, v.67 n.4, p.707-722, December
Thomas Erlebach , Klaus Jansen, Conversion of coloring algorithms into maximum weight independent set algorithms, Discrete Applied Mathematics, v.148 n.1, p.107-125, | approximation algorithms;bidirected trees;edge-disjoint paths |
587948 | Improved Algorithms and Analysis for Secretary Problems and Generalizations. | In the classical secretary problem, n objects from an ordered set arrive in random order, and one has to accept k of them so that the final decision about each object is made only on the basis of its rank relative to the ones already seen. Variants of the problem depend on the goal: either maximize the probability of accepting the best k objects, or minimize the expectation of the sum of the ranks (or powers of ranks) of the accepted objects. The problem and its generalizations are at the core of tasks with a large data set, in which it may be impractical to backtrack and select previous choices.Optimal algorithms for the special case of are well known. Partial solutions for the first variant with general k are also known. In contrast, an explicit solution for the second variant with general k has not been known. It seems that the fact that the expected sum of powers of the ranks of selected items is bounded as n tends to infinity has been known to follow from standard results. We derive our results by obtaining explicit algorithms. For each $z \geq 1$, the resulting expected sum of the zth powers of the ranks of the selected objects is at most $k^{z the best possible value at all is kz O(kz). Our methods are very intuitive and apply to some generalizations. We also derive a lower bound on the trade-off between the probability of selecting the best object and the expected rank of the selected object. | Introduction
In the classical secretary problem, n items or options
are presented one by one in random order (i.e., all
n! possible orders being equally likely). If we could
observe them all, we could rank them totally with no
ties, from best (rank 1) to worst (rank n). However,
when the ith object appears, we can observe only
its rank relative to the previous objects; i.e., the
relative rank is equal to one plus the number of the
predecessors of i which are preferred to i. We must
accept or reject each object, irrevocably, on the basis
of its rank relative to the objects already seen, and we
are required to select k objects. The problem has two
main variants. In the first, the goal is to maximize
the probability of obtaining the best k objects. In
the second, the goal is to minimize the expectation of
the sum of the ranks of the selected objects or, more
generally, for a given positive integer z, minimize the
expectation of the sum of the zth powers of the ranks.
Solutions to the classical problem apply also in variety
of more general situations. Examples include (i)
the case where objects are drawn from some probability
distribution; the interesting feature of this variant
is that the decisions of the algorithms may be based
not only on the relative rank of the item but also on
an absolute "grade" that the item receives, (ii) the
number of objects is not known in advance, (iii) objects
arrive at random times, (iv) some limited back-tracking
is allowed: objects that were rejected may
be recalled, (v) the acceptance algorithm has limited
memory, and also combinations of these situations. In
addition to providing intuition and upper and lower
bounds for the above important generalizations of the
problem, solutions to the classical problem also provide
in many cases very good approximations, or even
exact solutions (see [4, 13, 14] for survey and also [8]).
Our methods can also be directly extended to apply
for these generalizations.
The obvious application to choosing a best applicant
for a job gives the problem its common name,
although the problem (and our results) has a number
of other applications in computer science. For
any problem with a very large data set, it may be
impractical to backtrack and select previous choices.
For example, in the context of data mining, selecting
records with best fit to requirements, or retrieving images
from digital libraries. In such applications limited
backtracking may be possible, and in fact this is
one of the generalizations mentioned above. Another
important application is when one needs to choose an
appropriate sample from a population for the purpose
of some study. In other applications the items may
be jobs for scheduling, opportunities for investment,
objects for fellowships, etc.
1.1 Background and Intuition
The problem has been extensively studied in the
probability and statistics literature (see [4, 13, 14]
for surveys and also [10]).
The case of k = 1. Let us first review the case of
k = 1, where only one object has to be selected. Since
the observer cannot go back and choose a previously
presented object which, in retrospect, turns out to be
the best, it clearly has to balance the risk of stopping
too soon and accepting an apparently desirable object
when an even better one might still arrive, against the
risk of waiting for too long and then find that the best
item had been rejected earlier.
It is easy to see that the optimal probability of selecting
the best item does not tend to zero as n tends
to infinity; consider the following stopping rule: reject
the first half of the objects and then select the first
relatively best one (if any). This rule chooses the best
object whenever the latter is among the second half
of the objects while the second best object is among
the first half. Hence, for every n, this rule succeeds
with probability greater than 1/4. Indeed, it has been
established ([7, 5, 2]) (see below) that there exists an
optimal rule that has the following form: reject the
first r objects and then select the first relatively
best one or, if none has been chosen through the end,
accept the last object. When n tends to infinity, the
optimal value of r tends to n/e, and the probability
of selecting the best is approximately 1/e. (Lind-
ley showed the above using backward induction [7].
Later, Gilbert and Mosteller provided a slightly more
accurate bound for r [5]. Dynkin established the result
as an application of the theory of Markov stopping
times [2].)
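The classical rule is easy to check empirically. The following simulation sketch is ours
(not from the paper); it uses only relative comparisons, as the model requires, and the
estimate for n = 100 should come out close to 1/e ≈ 0.368.

  import math
  import random

  def run_secretary_rule(perm, r):
      """Threshold rule for k = 1: reject the first r objects, then accept the
      first relatively best object; if none appears, accept the last object.
      perm[i] encodes the true quality of the (i+1)st arrival (0 = best)."""
      best_so_far = None
      for i, quality in enumerate(perm):
          relatively_best = best_so_far is None or quality < best_so_far
          if relatively_best:
              best_so_far = quality
          if i >= r and relatively_best:
              return i
      return len(perm) - 1

  def estimate_success_probability(n, trials):
      """Monte Carlo estimate of Pr[the accepted object is the overall best]."""
      r = round(n / math.e)
      hits = 0
      for _ in range(trials):
          perm = random.sample(range(n), n)      # uniformly random arrival order
          hits += perm[run_secretary_rule(perm, r)] == 0
      return hits / trials

  print(estimate_success_probability(100, trials=20000))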
It is not as easy to see that the optimal expected
rank of the selected object tends to a finite limit as n
tends to infinity. Observe that the above algorithm
(for maximizing the probability of selecting the best
object) yields an expected rank of n/(2e) for the selected
item; the argument is as follows. With probability
1/e, the best item is among the first n/e items,
and in this case the algorithm selects the last item.
The conditional expectation of the rank of the last
object in this case is approximately n/2. Thus, the
expected rank for the selected object in this algorithm
tends to infinity with n. Indeed, in this paper
we show that, surprisingly, the two goals are in fact
in conflict (see Section 1.2).
It can be proven by backward induction that there
exists an optimal policy for minimizing the expected
rank of selected item that has the following form: accept
an object if and only if its rank relative to the
previously seen objects exceeds a certain threshold
(depending on the number of objects seen so far).
Note that while the optimal algorithm for maximizing
the probability of selecting the best has to remember
only the best object seen so far, the threshold
algorithm has to remember all the previous objects.
(See [11] for solutions where the observer is allowed
to remember only one of the previously presented
items.) This fact suggests that minimizing the expected
rank is harder. Thus, not surprisingly, finding
an approximate solution for the dynamic programming
recurrence for this problem seems significantly
harder than in the case of the first variant of the prob-
lem, i.e., when the goal is to maximize the probability
of selecting the best. Chow, Moriguti, Robbins, and
Samuels, [1] showed that the optimal expected rank
of the selected object is approximately 3.8695. The
question of whether higher powers of the rank of the
selected object tend to finite limits as n tends to infinity
was resolved in [11]. It has also been shown that
if the order of arrivals is determined by an adversary,
then no algorithm can yield an expected rank better
than n/2 [12].
The case of a general k. There has been much interest
in the case where more than one object has to
be selected. It is not hard to see that for every fixed
k, the maximum probability of selecting the best k
objects does not tend to zero as n tends to infinity.
The proof is as follows. Partition the sequence of n
objects into k disjoint intervals, each containing n/k
consecutive items. Apply the algorithm for maximizing
the probability of selecting the best object to each
set independently. The resulting algorithm selects the
best item in each interval with probability e^{-k}. The
probability that the best k objects belong to distinct
intervals tends to k!/k^k as n tends to infinity. For
this first variant of the problem, the case of
was considered in [9]; Vanderbei [16], and independently
Glasser, Holzager, and Barron [6], considered
the problem for general k. They showed that there is
an optimal policy with the following threshold form:
accept an object with a given relative rank if and only
if the number of observations exceeds a critical number
that depends on the number of items selected so
far; in addition, an object which is worse than any of
the already rejected objects need not be considered.
Notice that this means that not all previously seen
items have to be remembered, but only those that
were already selected and the best among all those
that were already rejected. This property is analogous
to what happened in the k = 1 case, where the
goal was to maximize the probability of selecting the
best item. Both papers derive recursive relations using
backward induction. General solutions to their
recurrences are not known, but the authors give explicit
solutions (i.e., critical values and probability)
for the case of
[16] also presents certain asymptotic results as
n tends to infinity and k is fixed, and also as both k
and n tend to infinity so that (2k - n)/√n remains
finite.
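For completeness, the two limits used in the partition argument at the beginning of this subsection combine as follows (a routine calculation we add for the reader; it is not spelled out in the text). Using Stirling's bound k! ≥ (k/e)^k √(2πk),
\[
\Pr[\text{the best $k$ objects are all selected}]
  \;\ge\; \bigl(1+o(1)\bigr)\, e^{-k}\,\frac{k!}{k^{k}}
  \;\ge\; \bigl(1+o(1)\bigr)\, e^{-2k}\sqrt{2\pi k} \;>\; 0,
\]
so for every fixed k the success probability of the simple partition strategy is bounded away from zero as n tends to infinity.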
In analogy to the case of k = 1, bounding the optimal
expected sum of ranks of k selected items appears
to be considerably harder than maximizing the probability
of selecting the best k items. Also, here it is
not obvious to see whether or not this sum tends to
a finite limit when n tends to infinity. Backward induction
gives recurrences that seem even harder to
solve than those derived for the case of maximizing
the probability of selecting the best k. Such equations
were presented by Henke [8], but he was unable
to approximate their general solutions.
Thus, the question of whether the expected sum of
ranks of selected items tends to infinity with n has
been open. There has not been any explicit solution
for obtaining a bounded expected sum. Thus the sec-
ond, possibly more realistic, variant of the secretary
problem has remained open.
1.2 Our Results
In this paper we present a family of explicit algorithms
for the secretary problem such that for each
positive integer z, the family includes an algorithm
for accepting k items, where for all values of n and k,
the resulting expected sum of the zth powers of the
ranks of the accepted items is at most
k^{z+1}/(z+1) + C(z) k^{z+0.5} log k,
where C(z) is a constant that depends only on z.
Clearly, the sum of the zth powers of the ranks of the
best k objects is approximately k^{z+1}/(z+1). Thus, the
sum achieved by our algorithms is not only bounded
by a value independent of n, but also differs from the
best possible sum only by a relatively small amount.
For every fixed k, this expected sum is bounded by a
constant. Thus we resolve the above open questions
regarding the expected sum of ranks and, in general,
zth powers of ranks, of the selected objects.
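As a reference point for the bound just stated, the benchmark value follows from a standard integral comparison (added here for completeness):
\[
\sum_{i=1}^{k} i^{z} \;=\; \int_{0}^{k} x^{z}\,dx \;+\; O(k^{z}) \;=\; \frac{k^{z+1}}{z+1} \;+\; O(k^{z}).
\]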
Our approach is very different from the dynamic
programming approach taken in most of the papers
mentioned above. In addition to being more successful
in obtaining explicit solution to this classical prob-
lem, it can more easily be used to obtain explicit solutions
for numerous generalizations, because it does
not require a completely new derivation for each objective
function.
We remark that our approach does not partition
the items into k groups and select one item in each.
Such a method is suboptimal since with constant
probability, a constant fraction of the best k items
appear in groups where they are not the only ones
from the best k. Therefore, this method rejects a
constant fraction of the best k with constant prob-
ability, and so the expected value of the sum of the
ranks obtained by such an algorithm is greater by at
least a constant factor than the optimal.
Since the expected sums achieved by our algorithms
depend only on k and z and, in addition, the
probability of our algorithms to select an object does
not decrease with its rank, it will follow that the probabilities
of our algorithms to actually select the best
objects depend only on k and z, and hence for fixed
k and z, do not tend to zero when n tends to infin-
ity. In particular, this means that for our
algorithms will select the best possible object with
probability bounded away from zero.
In contrast, for any algorithm for the problem, if
the order of arrival of items is the worst possible (i.e.,
generated by an oblivious adversary), then the algorithm
yields an expected sum of at least k n^z 2^{-(z+1)}
for the zth powers of the ranks of selected items. Our
lower bound holds also for randomized algorithms.
Finally, in Section 1.1 we observed that an optimal
algorithm for maximizing the probability of selecting
the best object results in an unbounded expected
rank of the selected object. As a second part of this
work we show that this fact is not a coincidence: the
two goals are in fact in conflict. No algorithm can
simultaneously optimize the expected rank and the
probability of selecting the best. We derive a lower
bound on the trade-off between the probability of accepting
the best object and the expected rank of the
accepted item.
Due to lack of space, most proofs are omitted or
only sketched.
2. The Algorithms
In this section we describe a family of algorithms for
the secretary problem, such that for each positive integer
z, the family includes an algorithm for accepting
objects, where the resulting expected sum of the zth
powers of the ranks of accepted objects is
In addition, it will follow that the algorithm accepts
the best k objects with positive probability that depends
only on k and z. Let z be the positive integer
that we are given. Denote
For the convenience of exposition, we assume without
loss of generality that n is a power of 2. We partition
the sequence [1; . ; n] (corresponding to the
objects in the order of arrival) into
consecutive intervals I i m), so that
I
fng if
In other words, the first are [1; n
4 ]; . ; each containing a half of the remaining
elements. The mth interval contains the last element.
Note that jI
Let us refer to the first
intervals as the opening ones, and let the rest be the
closing ones. Note that since p - 64, the last five
intervals are closing. For an opening I i , the expected
number of those of the top k objects in I i is
(The latter is not necessarily an integer.) Further-
more, for any d -
(i.e., d is in one of the
opening intervals), the expected number of those of
the top k objects among the first d to arrive is d \Delta k
n .
Let
Observe that pm 0
We will refer to p i as the minimum number of acceptances
required for I i m). Observe that
On the other hand,
Intuitively, during each interval the algorithm attempts
to accept the expected number of top k objects
that arrive during this interval, and in addition
to make up for the number of objects that should
have been accepted prior to the beginning of this
interval but have not. Note that since p
during such intervals the algorithm
only attempts to make up for the number of objects
that should have been accepted beforehand and have
not.
Let us explain this slightly more formally. During
each execution of the algorithm, at the beginning
of each interval, the algorithm computes a threshold
for acceptance, with the goal that by the time the
processing of the last object of this interval is com-
pleted, the number of accepted objects will be at least
the minimum number of acceptances required prior to
this time. In particular, recall that for
denotes the minimum number of acceptances required
for I i . Given a "prefix" of an execution prior
to the beginning of I i
1), be the number of items accepted
in I j . Let D
Roughly speaking, D i\Gamma1 is the difference between the
minimum number of acceptances required prior to the
beginning of I i and the number of items that were
actually accepted during the given prefix. Note that
Given a prefix of an execution prior to the beginning
of I i , let
ae
We refer to A i computed at the beginning of I i as the
acceptance threshold for I i in this execution. Loosely
stated, given a prefix of execution of the algorithm
prior to the beginning of I i , A i is the number of objects
the algorithm has to accept during I i in order to
meet the minimum number required by the end of I i .
The algorithm will aim at accepting at least A i objects
during I i . To ensure that it accepts that many, it
attempts to accept a little more. In particular, during
each opening interval I i , the algorithm attempts to
accept an expected number of A i +6(z +1) p
A i log k.
As we will see, this ensures that the algorithm accepts
at least A i objects during this interval with probability
of at least k \Gamma5(z+1) . During each closing interval
I i , the algorithm attempts to accept an expected
number of 32(z This ensures that the algorithm
accepts at least A i objects during this interval
with probability of at least 2 \Gamma5(z+1)(a i +1) .
We make the distinction between opening and closing
intervals in order to restrict the expected rank of
the accepted objects. If I_i is closing, then A_i may be
much smaller than √(A_i) log k. Let
B_i = A_i + 6(z+1)√(A_i) log k   if I_i is opening, and
B_i = 32(z+1)(A_i + 1)           if I_i is closing.
In order to accept an expected number of B_i objects
during interval I_i, the algorithm will accept the dth
item if it is one of the approximately B_i 2^i d/n top
ones among the first d. Since the order of arrival
of the items is random, the rank of the dth object
relative to the first d ones is distributed uniformly
in the set {1, ..., d}. Therefore, the dth object will
be accepted with probability of approximately B_i 2^i / n, and
since |I_i| = ⌈n/2^i⌉, the expected number of objects
accepted during I_i is indeed B_i.
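Written out, the expectation computed in the last sentence is as follows (assuming, as in the construction of the intervals, that |I_i| is roughly n/2^i):
\[
\Pr[\text{accept the $d$th item}] \;\approx\; \frac{1}{d}\cdot\frac{B_i\,2^{i}\,d}{n} \;=\; \frac{B_i\,2^{i}}{n},
\qquad
\mathbb{E}\bigl[\#\text{acceptances in } I_i\bigr] \;\approx\; \frac{n}{2^{i}}\cdot\frac{B_i\,2^{i}}{n} \;=\; B_i .
\]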
If at some point during the execution of the algo-
rithm, the number of slots that still have to be filled
equals the number of items that have not been processed
yet, all the remaining items will be accepted
regardless of rank. Analogously, if by the time the
dth item arrives all slots have already been filled, this
item will not be accepted.
Finally, the algorithm does not accept any of the
first ⌈n/(8√k)⌉ items, except in executions during
which the number of slots becomes equal to the number
of items before ⌈n/(8√k)⌉ items have been pro-
cessed. Roughly speaking, this modification will allow
us to bound the expected rank of the dth item in
terms of its rank relative to the first d items.
The above leads to our algorithm, which we call
Select.
Algorithm Select: The algorithm processes the
items, one at a time, in their order of arrival. At the
beginning of each interval I i , the algorithm computes
A i as described above. When the dth item (d 2 I i )
arrives, the algorithm proceeds as follows.
(i) If all slots have already been filled then the object
is rejected.
(ii) Otherwise, if d > ⌈n/(8√k)⌉, then
(a) If I_i is opening, the dth item is accepted if it is one
of the top ⌊B_i 2^i d/n⌋
items among the first d.
(b) If I_i is closing, the algorithm accepts the dth item
if it is one of the top ⌊32(z+1)(A_i+1) 2^i d/n⌋
items among the first d.
(iii) Otherwise, if the number of slots that still have
to be filled equals the number of items left (i.e.,
n - d + 1), the dth item is accepted.
We refer to acceptances under (iii), i.e., when the
number of slots that still have to be filled equals the
number of items that remain to be seen, as manda-
tory, and to all other acceptances as elective. For
example, if the dth item arrives during I 1 , and the
latter is opening, then the item is accepted electively
if and only if it is one of the approximately
k=2 log
k=2 log
top objects among the first d. In general, if the dth
object arrives during an opening I i , then the object
is accepted electively if and only if it is one of the
approximately
top objects among the first d.
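The following Python sketch (our own simplification, with illustrative slack constants rather than the exact thresholds 6(z+1)√(A_i) log k and 32(z+1)(A_i+1) used by Select) shows the overall structure of the algorithm: halving intervals, per-interval acceptance thresholds, elective acceptance by relative rank, the skipped prefix of length about n/(8√k), and mandatory acceptances at the end.

import math

def select_sketch(arrival_ranks, k, slack=2.0):
    """Simplified sketch of algorithm Select (illustrative constants, not the
    paper's). arrival_ranks is a permutation of 1..n giving each item's true
    rank in arrival order; only relative ranks of the prefix are used for the
    decisions, and the true ranks are returned only so the selection can be
    inspected afterwards."""
    n = len(arrival_ranks)
    num_intervals = max(1, math.ceil(math.log2(n)))
    prefix_skip = int(n / (8 * math.sqrt(k)))      # initial segment never accepted from
    accepted, seen = [], []
    for i in range(1, num_intervals + 1):
        lo = n - n // 2 ** (i - 1)                 # first arrival position (0-based) of I_i
        hi = n if i == num_intervals else n - n // 2 ** i
        owed = k - k / 2 ** i                      # acceptances "owed" by the end of I_i
        a_i = max(0.0, owed - len(accepted))       # acceptance threshold for I_i
        b_i = a_i + slack * math.sqrt(a_i + 1) * math.log(k + 1)
        for d in range(lo, hi):
            seen.append(arrival_ranks[d])
            rel_rank = sum(1 for r in seen if r <= arrival_ranks[d])
            slots_left = k - len(accepted)
            if slots_left == 0:
                continue
            if slots_left == n - d:                # mandatory acceptance
                accepted.append(arrival_ranks[d])
            elif d + 1 > prefix_skip and rel_rank <= b_i * 2 ** i * (d + 1) / n:
                accepted.append(arrival_ranks[d])  # elective acceptance
    return accepted

For fixed k, the average sum of ranks of the accepted items in such simulations should stay bounded as n grows, which is the qualitative behavior the analysis below establishes (with precise constants) for the actual algorithm.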
3. Analysis of Algorithm Select
Very loosely stated, the proof proceeds as follows.
In Section 3.1 we show that for
Observe that this implies that for
high probability, A i is approximately p i , i.e.,
In Section 3.2 we show that if the dth object arrives
during an opening I i , then the conditional expectation
of the zth power of its rank, given that it is
accepted electively, is not greater than 2 iz 1
z+1 A z
c 4 (z)2 iz A z\Gamma0:5
log k, for some constant c 4 (z) (depend-
ing on z); if I i is closing, this conditional expectation
is not greater than c 6 (z)2 iz A z
c 6 (z). In Section 3.3 these results of Sections 3.1 and
3.2 are combined and it is established that if the dth
object arrives during an opening I i , then its conditional
expected zth power of rank, given that it is
accepted electively, is at most
k z
for some constant c(z). If I i is closing, that conditional
expected zth power of rank is at most c 0 (z)k z ,
for some constant c 0 (z), if approximately
otherwise. From this it will follow that the
expected sum of the zth powers of ranks of the elec-
tively accepted objects is 1
In addition we use the result of Section 3.1 to show
that the expected sum of the zth powers of ranks of
mandatorily accepted objects is O(k z+0:5 log k). Thus
the expected sum of the zth powers of ranks of the
accepted objects is 1
In addition, from the fact that the expected sum
of the zth powers of ranks of the accepted objects is
bounded by a value that depends only on k and z, it
will also follow that the algorithm accepts the top k
objects with probability that depends only on k and
z.
3.1 Bounding the A i s
In this section we show that, for every i, with
high probability A_i is very close to p_i. To this end
we distinguish between 'smooth' and 'nonsmooth' executions
(see below).
3.1.1 Smooth Prefixes. Denote by E i the prefix
of an execution E prior to the end of I i . Note that Em
is E. We say that E_i is smooth if, for every j ≤ i, the acceptance threshold A_j
computed in E_i is at most |I_j|. Denote by ME_i
the event
in which E i is smooth.
In this section we show that for an opening interval
I i , in executions whose prefix prior to the end of
the 1th interval is smooth, the probability that
exponentially with j (Part 1 of
Lemma 3.3). For a closing I i , in executions whose
prefix prior to the end of the i\Gamma1th interval is smooth,
the probability that A i exponentially
both with j and with i (Part 2 of Lemma 3.3).
Part 1 and Part 2 of Lemma 3.3 will follow, respec-
tively, from Lemmas 3.1 and 3.2 that show that in
executions whose prefix prior to the end of the ith
interval is smooth, in I i the algorithm accepts A i objects
with high probability (where A i is computed for
the prefix of the execution). Intuitively, the restriction
to smooth executions is necessary since at most
objects can be selected in I i .
Lemma 3.1 For every any value a i
of A i ,
Sketch of Proof: Note that D i ? 0 only if the
number of objects accepted in I i is less than a i .
Loosely stated, the algorithm accepts the dth object
electively if it is one of the top
A i log
objects among the first d. Since the
objects arrive in a random order, the rank of the dth
object within the set of first d is distributed uniformly
and hence it will be accepted electively with probability
not less than b(a
a i log
c=d.
Moreover, the rank of the dth object within the set
of the first d is independent of the arrival order of the
first d \Gamma 1, and hence is independent of whether or
not any previous object in this interval, say the th
one, is one of the top
objects among the first d 1 . The rest of the proof
follows from computing the expected number of accepted
candidates and Chernoff inequality.
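The concentration step at the end of this proof presumably uses the standard multiplicative Chernoff bound: for a sum S of independent 0/1 random variables with expectation μ,
\[
\Pr\bigl[S < (1-\delta)\,\mu\bigr] \;\le\; e^{-\delta^{2}\mu/2}, \qquad 0<\delta<1 .
\]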
Analogously,
Lemma 3.2 If n - 16, then for every
Lemma 3.3
(i) For
(ii) If n - 16, then for
Sketch of Proof: We outline the proof for Part
(1). Recall that the minimum number of acceptances
required for an opening interval I i is
Thus if A i ? k2 \Gammai , then D
are positive. These events are
dependent and their probabilities are conditioned on
however, it can be shown that both the dependency
and the conditioning are working in our
favour. Lemma 3.1 thus implies that each of the
underlying events {D_q > 0} occurs
with probability less than k^{-5(z+1)}. Hence,
3.1.2 Nonsmooth Executions. Lemma 3.3 implies
that in smooth executions, with high probability,
A i is very close to p i . To complete the proof that A i
is close to p i , we now show that nonsmooth executions
are rare. In particular, Part (1) of Lemma 3.3
is used to show:
Lemma 3.4 If
Analogously,
Lemma 3.5 If n - 16, k - 1
The case of k ≥ n/2 is excluded from Lemma 3.5 and
is thus handled separately later (Section 3.3).
3.2 Expected zth powers of Ranks
Let us denote by R d the random variable of the rank
of the dth object. We define the arrival rank of the
dth object as its rank within the set of the first d
objects, i.e., one plus the number of better objects
seen so far. Denote by S d the random variable of the
arrival rank. Denote by NA d the event in which the
dth object is accepted electively.
Lemma 3.6 There exist constants c 2 (z), c 3 (z) and
c (z) such that for all d - n
k and s,
E(R z
d
d
s z
d
d
Combining the result of Lemma 3.6 with the fact
that given that the object is accepted electively during
an opening interval I i and A
distributed uniformly in the set f1; 2; . ; b(a
a i log k)2 i d=ncg, we will get:
Lemma 3.7 There exist constants c 4 (z) and c 5 (z)
such that for all opening intervals I i (i.e.,
every value a i of A i , if the dth object arrives during
I i and d - n
E(R z
r
d
Analogously,
Lemma 3.8 There exists a constant c 6 (z), such that
for all closing intervals I i (i.e.,
a i of A i , if the dth object arrives during I i , and d -
, then
E(R z
3.3 Expected Sum of Ranks
In this section we show that the expected sum of the
zth powers of ranks of the k accepted objects is
k^{z+1}/(z+1) + O(k^{z+0.5} log k)
(Theorem 3.1). This will follow by adding up the expected
sum of the zth powers of ranks of electively
accepted objects (Lemmas 3.13), and the expected
sum of the zth powers of ranks of mandatorily accepted
objects (Lemma 3.15).
3.3.1 Elective Acceptances. Denote by SUMZ i
the sum of the zth powers of ranks of objects that are
accepted electively during I i .
Lemma 3.9 There exists a constant c 7 (z) such that
for all opening intervals I i and for all values a i of A i ,
a z+1
Lemma 3.10 There exists a constant c 8 (z) such that
for all closing intervals I i , for all acceptance thresholds
a i computed for I i ,
Lemma 3.9 is combined with Part 1 of Lemma 3.3
and with Lemma 3.4 to show:
Lemma 3.11 There exists a constant c 9 (z) such that
for all opening intervals I i ,
Analogously,
Lemma 3.12 If n - 16, then there exists a constant
such that for any closing interval I i ,
The following lemma completes the proof of the
upper bound on the sum of the ranks of the electively
accepted objects. It sums up the expected sum of
ranks of electively accepted objects over all intervals.
Lemma 3.13
3.3.2 Mandatory Acceptances. This section
bounds the expected sum of mandatorily accepted
objects. We first observe:
Lemma 3.14 If the dth object is mandatorily accepted
in execution E during I i , then :ME i+1
Denote by SUMDZ i the sum of the zth powers of
ranks of objects that are accepted mandatorily during
I i .
Lemmas 3.4 and 3.5 of Section 3.1.2 imply that,
for each I i , the probability that a prefix of execution
prior to the end of I i is not smooth, is at
most c(z)n \Gamma2:5(z+1) log n, where c(z) is a constant.
(The case of k ≥ n/2 is handled without the use of
Lemma 3.5, since this lemma excludes it.) Clearly,
this bound applies also for the probability that objects
will be mandatorily accepted in I i . We combine
this bound with the facts that the rank of an object
never exceeds n, and the number of accepted objects
is at most k - n, to show:
Lemma 3.15 There exist constants c 21 (z) and
c 22 (z) such that
Lemmas 3.13 and 3.15 imply:
Theorem 3.1 The expected sum of the zth powers of the ranks of the accepted
objects is at most k^{z+1}/(z+1) + O(k^{z+0.5} log k).
Corollary 3.1 Algorithm Select accepts the best k
objects with positive probability that depends only on
k and z.
4. Trade-Off between Small Expected
Rank and Large Probability of Accepting
the Best
Theorem 4.1 Let p 0 be the maximum possible probability
of selecting the best object. There is a c > 0 so
that for all ε > 0 and all sufficiently large n, if A is an
algorithm that selects one of n objects, and the probability
p_A that A selects the best one is greater than
p_0 - ε, then the expected rank of the selected object is
at least c/ε.
Proof: Suppose that, contrary to our assertion, there
is an algorithm A that selects the best object with
probability of at least p_0 - ε, yet the expected
value of the rank of the selected object is less than
c/ε.
Starting from A, we construct another algorithm R
so that R selects the best object with probability
greater than p_0, which is a contradiction.
Denote by OPT the following algorithm: Let n/e
objects pass, and then accept the first object that is
better than anyone seen so far. If no object was accepted
by the time the last object arrives, accept the
last object. For n sufficiently large, this algorithm accepts
the best object with the highest possible prob-
ability, and hence with probability p 0 [7]. 3
3 In fact, a slightly better approximation to
r than n e^{-1} is known, although the difference is never more than 1 [5].
We ignore this difference for the sake of simplicity.
We define R by modifying A. The definition will
depend on parameters c_1 > d > 0 and on the time t_0 = (1 - c_1 ε) n. We will assume
that d is a sufficiently large absolute constant and c_1
is sufficiently large with respect to d. R will accept
an object if at least one of the following conditions is satisfied:
(i) A accepts the object after time n/d and by time
t_0, and the object is better than anybody
else seen so far;
(ii) OPT accepts the object whereas A accepted earlier
somebody who, at the time of acceptance,
was known not to be the best one (that is there
was a better one before);
(iii) OPT accepts the object and A has already accepted
somebody by time n=d;
(iv) the object comes after time t_0, it is better
than anybody else seen before, and R has not yet
accepted anybody based on rules (i), (ii), or (iii);
(v) the object is the nth object and R has not accepted
yet any object.
Notation: Denote by BA, BR, and BOPT the
events in which A, R and OPT, respectively, accept
the best object. Denote by B1, B2, and B3 the
events in which the best object appears in the intervals
[1, n/d], (n/d, t_0], and (t_0, n], respectively. Denote by IA1, IA2 and IA3 the events
in which A makes a selection in the intervals [1, n/d],
(n/d, t_0], and (t_0, n], respectively.
We distinguish between two cases.
Case I: Prob{IA1} ≥ 3ε/p_0.
4.1
Proof: Suppose that A made a selection by time
n=d. According to rule (3), in this case R will accept
an object that arrives after time n=d if and only if
OPT accepts this object. By choosing d sufficiently
large, we have that objects are accepted by OPT only
after time n=d. Thus, if A made a selection by time
n=d, R will accept the object if and only if OPT accepts
it. Thus,
The second inequality follows since the probability
that OPT accepts the best object is independent of
the order of arrival of the first n=d objects, and hence
independent of whether or not A makes a selection by
time n=d.
On the other hand,
Thus, by choosing d to be sufficiently large the claim
follows.
4.2
Proof: The claim follows immediately from the fact
that if A picks the best object between n=d and t 0 ,
then this object must be the best seen so far, and
hence by rule (1), R picks the same object.
4.3
Proof: If IA3 holds then neither A nor R have accepted
anybody till time t 0 . Let X be the event when
A chooses no later than R. By the definition of R we
have that if X " IA3 holds then either A accepts an
object that already at the moment of acceptance is
known not to be the best, or A and R accept the same
object. Thus,
To complete the proof, it suffices to show that
Suppose that IA3 " :X holds and R accepts an object
at some time t ? t 0 . By definition, A has not
accepted anybody yet, and the object accepted by R
at t is better than anyone else seen earlier. Thus, if a
better object than the one accepted by R arrives after
time t, this means that the best object arrives after
time t. Since the objects arrive in a random order,
the rank of each dth arriving object within the set
of first d is distributed uniformly. Hence, the probability
that the best object will arrive after time t
is at most (n - t)/n ≤ c_1 ε. Notice that this probability
is independent of the ordering of the first t
objects, and hence is independent of the fact that
R has accepted the tth object. Therefore the probability
that the object accepted by R is indeed the
best object is at least 1 - c_1 ε, while the probability
that A accepts the best one later is smaller than
c_1 ε. Thus, for any fixed choice of t and fixed order
of the first t objects (with the property IA3 " :X),
the probability of BR is larger than BA, and hence
Now we can complete the proof of Case I:
ProbfBRg
The second inequality follows from Claims 4.1, 4.2
and 4.3. The fourth inequality follows from (i)
by the theorem assumption and
(ii) ProbfIA1g - 3ffl=p 0 by Case I assumption.
Case II: Prob{IA1} < 3ε/p_0. Denote by BR1,
BR2, and BR3 the events when R picks the
best object and its selections are in the interval
respectively. Denote by
BA1, BA2, and BA3 the corresponding events for A.
Since by the assumption of this case ProbfIA1g !
If A picks the best object between n=d and t 0 , then
this object must be the best seen so far, and hence
by rule (1), R picks the same object. Thus
By choosing d sufficiently large, we have that objects
are accepted by OPT only after time n=d. Observe
that in that case, if the second best comes by
time n=d and the best comes after time t 0 , then R
accepts the best object. The probability that the second
best object arrives by time n=d is 1=d, and the
conditional probability that the best object comes after
given that the second best comes by time
n=d, is at least c 1 ffl. It thus follows:
For bounding ProbfBA3g, we first use the assumption
that the expected rank of the object selected by
A is less than c=ffl, to show:
Proof: Each of the 1=(10dc 1 ffl) objects with a rank
smaller than 1=(10dc 1 ffl) arrives after time t
probability of at most c 1 ffl. Therefore, with
probability of at least 1 \Gamma 1=(10d), all objects that
arrive after time t 0 are of rank larger than 1=(10dc 1 ffl).
Hence, if the probability of IA3 had been greater than
1=(2d), then the expected value of the rank would
have been larger than c 0 =ffl for some absolute constant
the c of the theorem to be equal to c 0 ,
and we get a contradiction to the assumption that the
expected rank of the selected object is at most c=ffl.
Recall that B3 denotes the event in which the best
object arrives in interval
IA3g. But B3 is independent
of the order of arrival of the first t 0 objects
and hence independent on whether or not A has accepted
an object by time t 0 . Thus, Claim 4.4 implies
that ProbfIA3g \Delta ProbfB3
Equations (1) to (4) imply
(The last inequality follows from our assumption that
c 1 is sufficiently large with respect to d.) Therefore
Acknowledgements
We are indebted to James Aspnes, Eugene Dynkin,
John Preater, Yossi Rinott, Mike Saks, Steve
Samuels, and Robert Vanderbei for helpful references.
--R
"secretary problem"
The optimum choice of the instant for stopping a Markov process.
Who solved the secretary prob- lem? Statistical Science <Volume>4</Volume>
The secretary problem and its ex- tensions: A review
Recognizing the maximum of a sequence.
The d Choice secretary problem.
Dynamic programming and decision theory.
Sequentialle Auswahlprobleme bei Unsicherheit.
A generalization of the best choice problem.
On multiple choice secretary prob- lems
The finite memory secretary problem.
Optimal counter strategies for the secretary problem.
Secretary problems.
Secretary problems as a source of benchmark sounds.
Amortized efficiency of list updates and paging rules.
The optimal choice of a sub-set of a population
--TR
--CTR
Andrei Broder , Michael Mitzenmacher, Optimal plans for aggregation, Proceedings of the twenty-first annual symposium on Principles of distributed computing, July 21-24, 2002, Monterey, California
Robert Kleinberg, A multiple-choice secretary algorithm with applications to online auctions, Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, January 23-25, 2005, Vancouver, British Columbia
Mohammad Taghi Hajiaghayi , Robert Kleinberg , David C. Parkes, Adaptive limited-supply online auctions, Proceedings of the 5th ACM conference on Electronic commerce, May 17-20, 2004, New York, NY, USA | optimal stopping;expected rank maximization;dynamic programming |
587953 | Global Price Updates Help. | Periodic global updates of dual variables have been shown to yield a substantial speed advantage in implementations of push-relabel algorithms for the maximum flow and minimum cost flow problems. In this paper, we show that in the context of the bipartite matching and assignment problems, global updates yield a theoretical improvement as well. For bipartite matching, a push-relabel algorithm that uses global updates runs in $O\big(\sqrt n m\frac{\log(n^2/m)}{\log n}\big)$ time (matching the best bound known) and performs worse by a factor of $\sqrt n$ without the updates. A similar result holds for the assignment problem, for which an algorithm that assumes integer costs in the range $[\,-C,\ldots, C\,]$ and that runs in time $O(\sqrt n m\log(nC))$ (matching the best cost-scaling bound known) is presented. | Introduction
.
The push-relabel method [10, 13] is the best currently known way for solving the maximum
flow problem [1, 2, 18]. This method extends to the minimum cost flow problem using cost
scaling [10, 14], and an implementation of this technique has proven very competitive on a wide
class of problems [11]. In both contexts, the idea of periodic global updates of node distances
or prices has been critical to obtaining the best running times in practice.
Several algorithms for the bipartite matching problem run in O(√n m) time (here and
throughout, n and m denote the number of nodes and edges, respectively). Hopcroft
and Karp [15] first proposed an algorithm that achieves this bound. Karzanov [16] and Even
and Tarjan [5] proved that the blocking flow algorithm of Dinitz [4] runs in this time when
applied to the bipartite matching problem. Two phase algorithms based on a combination of
the push-relabel method [13] and the augmenting path method [7] were proposed in [12, 19].
Feder and Motwani [6] give a "graph compression" technique that combines with the algorithm
of Dinitz to yield an O(√n m log(n²/m)/log n) algorithm. This is the best time bound known
for the problem.
The most relevant theoretical results on the assignment problem are as follows. The best
currently known strongly polynomial time bound of O(n(m + n log n)) is achieved by the
classical Hungarian method of Kuhn [17]. Under the assumption that the input costs are
integers in the range [-C, ..., C], Gabow and Tarjan [9] use cost scaling and blocking flow
techniques to obtain an O(√n m log(nC)) time algorithm. An algorithm using an idea similar
to global updates with the same running time appeared in [8]. Two-phase algorithms with the
same running time appeared in [12, 19]. The first phase of these algorithms is based on the
push-relabel method and the second phase is based on the successive augmentation approach.
We show that algorithms based on the push-relabel method with global updates match
the best bounds for the bipartite matching and assignment problems. Our results are based
on new selection strategies: the minimum distance strategy in the bipartite matching case
and the minimum price change in the assignment problem case. We also prove that the
algorithms perform significantly worse without global updates. Similar results can be obtained
for maximum and minimum cost flows in networks with unit capacities. Our results are a step
toward a theoretical justification of the use of global update heuristics in practice.
This paper is organized as follows. Section 2 gives definitions relevant to bipartite matching
and maximum flow. Section 3 outlines the push-relabel method for maximum flow and shows
its application to bipartite matching. In Section 4, we present the time bound for the bipartite
matching algorithm with global updates, and in Section 5 we show that without global updates,
the algorithm performs poorly. Section 6 gives definitions relevant to the assignment problem
and minimum cost flow. In Section 7, we describe the cost-scaling push-relabel method for
minimum cost flow and apply the method to the assignment problem. Sections 8 and 9 gen-
eralize the bipartite matching results to the assignment problem. In Section 10, we give our
conclusions and suggest directions for further research.
2. Bipartite Matching and Maximum Flow
Let G = (X ∪ Y, E) be an undirected bipartite graph.
A matching in G is a subset of edges M ⊆ E no two of which have a node in common. The cardinality
of the matching is |M|. The bipartite matching problem is to find a maximum cardinality
matching.
The conventions we assume for the maximum flow problem are as follows: Let G =
(V, E) be a digraph with an integer-valued capacity u(a) associated with each arc a ∈ E. We
assume that a ∈ E implies a^R ∈ E (where a^R denotes the reverse of arc a). A pseudoflow is a
function f satisfying the following for each a ∈ E: f(a) ≤ u(a) and f(a) = -f(a^R).
The antisymmetry constraints are for notational convenience only, and we will often take
advantage of this fact by mentioning only those arcs with nonnegative flow; in every case, the
antisymmetry constraints are satisfied simply by setting the reverse arc's flow to the appropriate
value. For a pseudoflow f and a node v, the excess flow into v, denoted e_f(v), is defined by
e_f(v) = Σ_{(u,v) ∈ E} f(u, v). A preflow is a pseudoflow with the property that the excess of every
node except s is nonnegative. A node v 6= t with e f (v) ? 0 is called active.
A flow is a pseudoflow f such that, for each node v 2 V , e f that a preflow
f is a flow if and only if there are no active nodes. The maximum flow problem is to find a
flow maximizing e f (t).
3. The Push-Relabel Method for Bipartite Matching
We reduce the bipartite matching problem to the maximum flow problem in a standard way.
For brevity, we mention only the "forward" arcs in the flow network; to each such arc we give
unit capacity. The "reverse" arcs have capacity zero. Given an instance G = (X ∪ Y, E)
of the bipartite matching problem, we construct an instance (G = (V, E), s, t, u) of the
maximum flow problem by
• setting V = X ∪ Y ∪ {s, t};
• for each node v ∈ X placing arc (s, v) in E;
• for each node v ∈ Y placing arc (v, t) in E;
Sometimes we refer to an arc a by its endpoints, e.g., (v; w). This is ambiguous if there are multiple
arcs from v to w. An alternative is to refer to v as the tail of a and to w as the head of a, which is
precise but inconvenient.
• for each edge {v, w} ∈ E with v ∈ X and w ∈ Y, placing arc (v, w) in E.
Figure 1. Reduction from Bipartite Matching to Maximum Flow (reverse arcs not shown).
A graph obtained by this reduction is called a matching network. Note that if G is a matching
network, then for any integral pseudoflow f and for any arc a 2 E, u(a); f(a) 2 f0; 1g. Indeed,
any integral flow in G can be interpreted conveniently as a matching in G: the matching is
exactly the edges corresponding to those arcs a 2 X \Theta Y with 1. It is a well-known
fact [7] that a maximum flow in G corresponds to a maximum matching in G.
For a given pseudoflow f, the residual capacity of an arc a ∈ E is u_f(a) = u(a) - f(a).
The set of residual arcs E_f contains the arcs a ∈ E with f(a) < u(a). The residual graph
is the graph induced by the residual arcs.
A distance labeling is a function d : V → Z_+. We say a distance labeling d is valid with
respect to a pseudoflow f if d(t) = 0, d(s) = n, and d(v) ≤ d(w) + 1 for every residual arc (v, w) ∈ E_f.
Those residual arcs (v, w) with the property that d(v) = d(w) + 1 are called admissible arcs.
We begin with a high-level description of the generic push-relabel algorithm for maximum
flow specialized to the case of matching networks. The algorithm starts with the zero flow,
then sets f(s, v) = 1 for every v ∈ X. For an initial distance labeling, the algorithm sets
d(s) = n and d(v) = 0 for every other node v. Then the algorithm applies push
and relabel operations in any order until the current pseudoflow is a flow. The push and relabel
operations, described below, preserve the properties that the current pseudoflow f is a preflow
and that the current distance labeling d is valid with respect to f .
push(v, w).
    send a unit of flow from v to w.
end.

relabel(v).
    replace d(v) by min_{(v,w) ∈ E_f} d(w) + 1.
end.

Figure 2. The push and relabel operations.
The push operation applies to an admissible arc (v; w) whose tail node v is active. It consists
of "pushing" a unit of flow along the arc, i.e., increasing f(v; w) by one, increasing e f (w) by
one, and decreasing e f (v) by one. The relabel operation applies to an active node v that is not
the tail of any admissible arc. It consists of changing v's distance label so that v is the tail of
at least one admissible arc, i.e., setting d(v) to the largest value that preserves the validity of
the distance labeling. See Figure 2.
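As a concrete illustration (our own sketch, not code from the paper), the two operations for the unit-capacity case can be written as follows; residual_out(v), which returns the heads of the residual arcs leaving v, is an assumed helper.

def push(f, excess, v, w):
    """Push one unit of flow along the admissible arc (v, w)."""
    f[(v, w)] = f.get((v, w), 0) + 1
    f[(w, v)] = -f[(v, w)]            # antisymmetry constraint
    excess[v] -= 1
    excess[w] += 1

def relabel(d, v, residual_out):
    """Set d(v) to the largest value that keeps the labeling valid:
    one more than the smallest label over residual arcs leaving v."""
    d[v] = 1 + min(d[w] for w in residual_out(v))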
Our analysis of the push-relabel method is based on the following facts. See [13] for details;
note that arcs in a matching network have unit capacities and thus push(v; w) saturates the
arc (v; w).
(2) Distance labels do not decrease during the computation.
(3) relabel(v) increases d(v).
(4) The number of relabel operations during the computation is O(n) per node.
(5) The work involved in relabel operations is O(nm).
(6) If a node v is relabeled t times during a computation segment, then the number of
pushes from v is at most (t + 1) · degree(v).
(7) The number of push operations during the computation is O(nm).
The above lemma implies that any push-relabel algorithm runs in O(nm) time given that
the work involved in selecting the next operation to apply does not exceed the work involved in
applying these operations. This can be easily achieved using simple data structures described
in [13].
4. Global Updates and the Minimum Distance Discharge Algorithm
In this section, we specify an ordering of the push and relabel operations that yields certain
desirable properties. We also introduce the idea of a global distance update and show that the
algorithm resulting from our operation ordering and global update strategy runs in O(√n m) time.
For any nodes v; w, let dw (v) denote the breadth-first-search distance from v to w in the
residual graph of the current preflow. If w is unreachable from v in the residual graph, dw (v)
is infinite. Setting d(v) to d_t(v) if t is reachable from v in the residual graph, and to
n + d_s(v) otherwise, for every node v, is called a global update
operation. Such an operation can be accomplished with O(m) work that amounts to two
breadth-first-search computations.
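A sketch of such a global update, using two breadth-first searches as described above (our own illustration; residual_pred(v), which yields the tails of the residual arcs entering v, is an assumed helper):

from collections import deque

def bfs_dist(start, residual_pred):
    """Breadth-first-search distances to start over residual arcs,
    traversed backwards from start."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for u in residual_pred(v):
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def global_update(nodes, s, t, n, residual_pred):
    """Set d(v) = d_t(v) when t is reachable from v in the residual graph,
    and d(v) = n + d_s(v) otherwise."""
    d_t = bfs_dist(t, residual_pred)
    d_s = bfs_dist(s, residual_pred)
    d = {}
    for v in nodes:
        if v in d_t:
            d[v] = d_t[v]
        elif v in d_s:
            d[v] = n + d_s[v]
        # nodes reaching neither s nor t keep their old label (not shown)
    return d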
The ordering of operations we use is called Minimum Distance Discharge, and it consists of
repeatedly choosing an active node whose distance label is minimum among all active nodes
and, if there is an admissible arc leaving that node, pushing a unit of flow along the admissible
arc, otherwise relabeling the node. For convenience, we denote by Γ(f, d) (or simply Γ) the
minimum distance label of an active node with respect to the pseudoflow f and the distance
labeling d. We let Γ_max denote the maximum value reached by Γ during the algorithm so far.
Whenever Γ attains a new maximum, we perform a global update operation.
Our analysis hinges on a parameter k in the range 2 - k - n, to be chosen later. We divide
the execution of the algorithm into four stages: In the first two stages, excesses are moved to
t; in the final two stages, excesses that cannot reach the sink return to s. We analyze the first
stage of each pair using the following lemma.
Lemma 4.1. The Minimum Distance Discharge algorithm uses O((j - i + 1) m) work during the
period beginning when Γ first exceeds i and ending when Γ first exceeds j.
Proof: The number of relabelings that can occur when Γ_max lies in the interval [i, j] is at most
n(j - i + 1). Thus the relabelings and pushes require O((j - i + 1) m)
work. The observations that
a global update requires O(m) work and during the period there are O(j - i) global updates
complete the proof.
Lemma 4.1 allows us to account for the periods when Γ_max ≤ k and when Γ_max ∈ [n, n+k]. The
algorithm expends O(km) work during these periods. To study the behavior of the algorithm
during the remainder of its execution, we introduce a combinatorial lemma that is a special
case of a well-known decomposition theorem [7] (see also [5]).
Lemma 4.2. Any integral pseudoflow f 0 in the residual graph of an integral preflow f in
a matching network can be decomposed into cycles and simple paths that are pairwise node-disjoint
except at the endpoints of the paths. Each path takes one of the following forms:
ffl from s to t;
ffl from a node v with e f (v) ? 0 to a node w with e f+f 0
(w can be t);
ffl from a node v with e f (v) ? 0 to s.
Lemma 4.2 allows us to show that when \Gamma max is outside the intervals covered by Lemma 4.1,
the amount of excess the algorithm must process is small.
Lemma 4.3. If Γ(f, d) ≥ k > 2, the total excess that can reach the sink is at most n/(k - 1).
Proof: Let f be a maximum flow in G, and let f is a pseudoflow in G f , and
therefore can be decomposed into paths as in Lemma 4.2. Because \Gamma - k and d is a valid
distance labeling with respect to f , any path from an active node to t in G f must contain at
least nodes. In particular, the excess-to-sink paths of Lemma 4.2 contain at least k
nodes each, and are node-disjoint except for their endpoints. Since G contains only n+2 nodes,
there can be no more than n=(k \Gamma 1) such paths. Since f is a maximum flow, the amount of
excess that can reach the sink in G f is no more than n=(k \Gamma 1).
The proof of the next lemma is similar.
Lemma 4.4. If Γ(f, d) ≥ n + k, the total excess at nodes in V is at most n/(k - 1).
Lemma 4.3 and Lemma 4.4 show that outside the intervals covered by Lemma 4.1, the
total excess processed by the algorithm is at most 2n=(k \Gamma 1). To complete the bound on the
work expended by the algorithm outside these intervals, we use the following lemma and the
fact that at most O(m) work takes place between consecutive global updates to deduce that
O(nm/k) time suffices to process the excess outside the intervals covered by Lemma 4.1.
Lemma 4.5. Between any two consecutive global update operations, at least one unit of excess
reaches the source or the sink.
Proof: For every node v, at least one of d s (v), d t (v) is finite. Therefore, immediately after a
global update operation, at least one admissible arc leaves every node, by the definition of a
global update. Hence the first unit of excess processed by the algorithm immediately after a
global update arrives at t or at s before any relabeling occurs.
The time bound for the Minimum Distance Discharge algorithm is O(km + nm/k). Choosing
k = Θ(√n) to balance the two terms, we see that the Minimum Distance Discharge
algorithm with global updates runs in O(√n m) time.
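The balancing step, written out for completeness:
\[
km + \frac{nm}{k}\ \text{ is minimized when }\ km=\frac{nm}{k}\ \Longleftrightarrow\ k=\sqrt{n},
\qquad\text{giving a total of}\qquad O\!\left(\sqrt{n}\,m\right).
\]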
Feder and Motwani [6] give an algorithm that runs in o(
time and produces a "com-
pressed" version G
) of a bipartite graph in which all adjacency information is
preserved, but that has asymptotically fewer edges if the original graph E) is dense.
This graph consists of all the original nodes of X and Y , as well as a set of additional nodes W .
If an edge fx; yg appears in E, either fx; yg 2 E
or G
contains a length-two path from x to y
through some node of W . It is possible to show that an analogue to Lemma 4.2 holds in such
a graph; the paths in the decomposition may not be node-disjoint at nodes of W , but remain
so at nodes of X and Y , and this is enough to show that the Minimum Distance Discharge
algorithm with graph compression runs in O(√n m log(n²/m)/log n)
time. This bound matches the
bound of Feder and Motwani for Dinitz's algorithm.
1. Initialization establishes jX j units of excess, one at each node of X ;
2. Nodes of X are relabeled one-by-one, so all v 2 X have
3. While e f (t) ! jY j,
3.1. a unit of excess moves from some node v 2 X to some node w 2 Y with
3.2. w is relabeled so that
3.3. The unit of excess moves from w to t, increasing e f (t) by one.
4. A single node, x 1 with e f relabeled so that d(x 1 2.
5. ' / 1.
6. While ' - n,
Remark: All nodes v 2 V now have with the exception of the one node
which has d(x ' are at nodes of X ;
6.1. All nodes with excess, except the single node x ' , are relabeled one-by-one so that
all
6.2. While some node y 2 Y has
6.2.1. A unit of excess is pushed from a node in X to
6.2.2. y is relabeled so
6.2.3. The unit of excess at y is pushed to a node x 2 X with
6.2.4. x is relabeled so that if some node in Y still has distance label ',
otherwise
6.3. '
7. Excesses are pushed one-by-one from nodes in X (labeled
Figure 3. The Minimum Distance Discharge execution on bad examples.
5. Minimum Distance Discharge Algorithm without Global Updates
In this section we describe a family of graphs on which the Minimum Distance Discharge
algorithm without global updates performs Ω(nm) work
(for values of m between Θ(n) and Θ(n²)).
This shows that the updates improve the worst-case running time of the algorithm.
Given ~ n and ~
we construct a graph G as follows: G is the complete bipartite
graph with
~
~
It is straightforward to verify that this graph has
m+ O( ~
edges. Note that jX j ? jY j.
Figure 3 describes an execution of the Minimum Distance Discharge algorithm on G, the
matching network derived from G, that requires Ω(nm)
time. With more complicated analysis,
it is possible to show that every execution of the Minimum Distance Discharge algorithm on
G requires Ω(nm) time.
It is straightforward to verify that in the execution outlined, all processing takes place at
active nodes with minimum distance labels among the active nodes. Another important fact
is that during the execution, no relabeling changes a distance label by more than two. Hence
the execution uses Θ(nm) work in the course of its Θ(n²) relabelings.
6. Minimum Cost Circulation and Assignment Problems
Given a weight function c and a set of edges M , we define the weight of M to be
the sum of weights of edges in M . The assignment problem is to find a maximum cardinality
matching of minimum weight. We assume that the costs are integers in the range [-C, ..., C],
where C ≥ 1. (Note that we can always make the costs nonnegative by adding an appropriate
number to all arc costs.)
For the minimum cost circulation problem, we adopt the following framework. We are given
a graph G = (V, E), with an integer-valued capacity function as before. In addition to the
capacity function, we are given an integer-valued cost c(a) for each arc a ∈ E.
We assume c(a) = -c(a^R) for every arc a. A circulation is a pseudoflow f with the property
that e_f(v) = 0 for every node v ∈ V. (The absence of a distinguished source and sink accounts
for the difference in nomenclature between a circulation and a flow.)
The cost of a pseudoflow f is given by c(f) = Σ_{a : f(a) > 0} c(a) f(a). The minimum cost circulation
problem is to find a circulation of minimum cost.
7. The Push-Relabel Method for the Assignment Problem
We reduce the assignment problem to the minimum cost circulation problem as follows. As in
the unweighted case, we mention only "forward" arcs, each of which we give unit capacity. The
"reverse" arcs have zero capacity and obey cost antisymmetry. Given an instance
G = (X ∪ Y, E, c) of the assignment problem, we construct an instance
(G = (V, E), u, c) of
the minimum cost circulation problem by
• creating special nodes s and t, and setting V = X ∪ Y ∪ {s, t};
• for each node v ∈ X placing arc (s, v) in E and defining c(s, v) = 0;
• for each node v ∈ Y placing arc (v, t) in E and defining c(v, t) = 0;
• for each edge {v, w} ∈ E with v ∈ X and w ∈ Y, placing arc (v, w) in E and defining the cost of the arc to be
c(v, w);
• placing n/2 arcs (t, s) in E and giving each of them a large negative cost (see Figure 4).
If G is obtained by this reduction, we can interpret an integral circulation in G as a matching
in G just as we did in the bipartite matching case. Further, it is straightforward to verify that
a minimum cost circulation in G corresponds to a maximum matching of minimum weight in
G.
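A small construction sketch of this reduction (our own code, not the paper's; in particular, the exact value of the "large negative cost" on the (t, s) arcs is our choice, and any value below -nC works):

def assignment_to_circulation(X, Y, edges, cost):
    """Build the circulation instance of this section from an assignment
    instance. edges is a set of pairs (x, y) with x in X, y in Y, and
    cost[(x, y)] is the edge weight. Only forward arcs are listed, each with
    unit capacity; reverse arcs get capacity 0 and negated cost."""
    s, t = "s", "t"
    n = len(X) + len(Y)
    C = max((abs(c) for c in cost.values()), default=1)
    BIG = n * C + 1                              # our choice of "large negative cost"
    arcs = []                                    # (tail, head, capacity, cost)
    for x in X:
        arcs.append((s, x, 1, 0))                # zero-cost source arcs
    for y in Y:
        arcs.append((y, t, 1, 0))                # zero-cost sink arcs
    for (x, y) in edges:
        arcs.append((x, y, 1, cost[(x, y)]))     # original costs
    for _ in range(n // 2):
        arcs.append((t, s, 1, -BIG))             # large negative cost arcs
    return arcs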
Figure 4. Reduction from Assignment to Minimum Cost Circulation (the X-Y arcs carry the given costs, the source and sink arcs carry zero costs, and the (t, s) arcs carry large negative costs).
A price function is a function p : V → R. For a given price function p, the reduced cost of
an arc (v, w) is c_p(v, w) = c(v, w) + p(v) - p(w). Define U = X ∪ {t}. Note that all arcs in E have one endpoint in U and one endpoint in its
complement. Define E_U to be the set of arcs whose tail node is in U.
For a constant ffl - 0, a pseudoflow f is said to be ffl-optimal with respect to a price function
if, for every residual arc a 2 E f , we have
ae a
A pseudoflow f is ffl-optimal if f is ffl-optimal with respect to some price function p. If the arc
costs are integers and ffl ! 1=n, any ffl-optimal circulation is optimal.
For a given f and p, an arc a 2 E f is admissible iff
ae a 2 E U and c p (a) ! ffl or
The admissible graph is the graph induced by the admissible arcs.
Our asymmetric definitions of ffl-optimality and admissibility are natural in the context of
the assignment problem. They have the benefit that the complementary slackness conditions
are violated on O(n) arcs (corresponding to the matched arcs). For the symmetric definition,
complementary slackness can be violated on Ω(m) arcs.
procedure min-cost(V, E, u, c);
    ε ← C; p ← 0; f ← 0;
    while ε ≥ 1/n do
        (ε, f, p) ← refine(ε, f, p);
    return f;
end.

Figure 5. The cost scaling algorithm.
procedure refine(ε, f, p);
    ε ← ε/α;
    for all a ∈ E_f with c_p(a) < 0 do f(a) ← u(a);
    while f is not a circulation
        apply a push or a relabel operation;
    return (ε, f, p);
end.

Figure 6. The generic refine subroutine.
First we give a high-level description of the successive approximation algorithm (see Figure
5). The algorithm starts with ε = C, the zero flow, and the zero price function. At
the beginning of every iteration, the algorithm divides ε by a constant factor α and saturates
all arcs a with c_p(a) < 0. The iteration modifies f and p so that f is a circulation that is
(ε/α)-optimal with respect to p. When ε < 1/n, f is optimal and the algorithm terminates.
The number of iterations of the algorithm is ⌈log_α(nC)⌉.
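The iteration count follows from the geometric decrease of ε (assuming, as above, that the initial value is ε = C):
\[
\varepsilon_i \;=\; \frac{C}{\alpha^{i}} \;<\; \frac{1}{n}
\;\Longleftrightarrow\;
\alpha^{i} \;>\; nC
\;\Longleftrightarrow\;
i \;>\; \log_{\alpha}(nC),
\]
so ⌈log_α(nC)⌉ executions of refine suffice.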
Reducing ε is the task of the subroutine refine. The input to refine is ε, f, and p such
that (except in the first iteration) circulation f is ε-optimal with respect to p. The output
from refine is ε' = ε/α, a circulation f, and a price function p such that f is ε'-optimal with
respect to p. At the first iteration, the zero flow is not C-optimal with respect to the zero
price function, but because every simple path in the residual graph has length of at least -nC,
standard results about refine remain true.
The generic refine subroutine (described in Figure 6) begins by decreasing the value of ffl,
and setting f to saturate all residual arcs with negative reduced cost.
This converts f into an ffl-optimal pseudoflow (indeed, into a 0-optimal pseudoflow). Then the
subroutine converts f into an ffl-optimal circulation by applying a sequence of push and relabel
operations, each of which preserves ffl-optimality. The generic algorithm does not specify the
order in which these operations are applied. Next, we describe the push and relabel operations
push(v; w).
send a unit of flow from v to w.
end.
relabel(v).
then replace p(v) by
else replace p(v) by max (u;v)2Ef f p(u)
end.
Figure 7. The push and relabel operations for the unit-capacity case.
As in the maximum flow case, a push operation applies to an admissible arc (v; w) whose tail
node v is active, and consists of pushing one unit of flow from v to w. A relabel operation applies
to an active node v. The operation sets p(v) to the smallest value allowed by the ffl-optimality
constraints, namely max (v;w)2Ef
otherwise.
The analysis of cost scaling push-relabel algorithms is based on the following facts [12, 14].
During a scaling iteration
(1) no node price increases;
(2) every relabeling decreases a node price by at least ffl;
(3) for any v 2 V , p(v) decreases by O(nffl).
8. Global Updates and the Minimum Change Discharge Algorithm
In this section, we generalize the ideas of minimum distance discharge and global updates to
the context of the minimum cost circulation problem and analyze the algorithm that embodies
these generalizations.
We analyze a single execution of refine, and to simplify our notation, we make some assumptions
that do not affect the results. We assume that the price function is identically zero
at the beginning of the iteration. Our analysis goes through without this assumption, but the
required condition can be achieved at no increased asymptotic cost by replacing the arc costs
with their reduced costs and setting the node prices to zero in the first step of refine.
Under the assumption that each iteration begins with the zero price function, the price
change of a node v during an iteration is \Gammap(v). By analogy to the matching case, we define
denote the maximum value attained by \Gamma(f; p) so
far in this iteration. The minimum change discharge strategy consists of repeatedly choosing
a node v with applying a push or relabel operation at v.
In the weighted context, a global update takes the form of setting each node price so that
there is a path in GA from every excess to some deficit (a node v with e f (v) ! 0) and every
node reachable in GA from a node with excess lies on such a path. This amounts to a modified
shortest-paths computation, and can be done in O(m) time using ideas from Dial's work [3].
We perform a global update every time \Gamma max has increased by at least ffl since the last global
update. We developed global updates from an implementation heuristic for the minimum cost
circulation problem [11], but in retrospect, they prove similar in the assignment context to the
one-processor Hungarian Search technique developed in [8].
We use essentially the same argument as for the unweighted case to analyze the part of the
algorithm's execution when \Gamma max is small.
Lemma 8.1. The Minimum Change Discharge algorithm uses O
during the
period beginning when \Gamma first exceeds ending when \Gamma first exceeds j.
Proof: Similar to Lemma 4.1.
large, the argument we used in the unweighted case does not generalize because
it is not true that \Gammap(v) gives a bound on the breadth-first-search distance from v to a deficit
in the residual graph. Let E(f) denote the total excess in pseudoflow f , i.e.,
The following lemma is analogous to Lemma 4.2.
Lemma 8.2. Given a matching network G and a circulation g, any pseudoflow f in G g can
be decomposed into
ffl cycles and
ffl paths, each from a node u with e f (u) ! 0 to a node v with e f (v) ? 0,
where all the elements of the decomposition are pairwise node-disjoint except at the endpoints
of the paths, and each element carries one unit of flow.
We denote a path from node u to node v in such a decomposition by (u / v).
The following lemma is similar in spirit to those in [8] and [12], although the single-phase
push-relabel framework of our algorithm changes the structure of the proof.
Lemma 8.3. At any point during refine, E(f) · Γ_max = O(nε).
Proof: Let c denote the (reduced) arc cost function at the beginning of this execution of
refine, and let E) denote the residual graph at the same instant. For simplicity in the
following analysis, we view a pseudoflow as an entity in this graph G. Let f , p be the current
pseudoflow and price function at the most recent point during the execution of refine when
Γ(f, p) attained the value Γ_max. Then we have that the current total excess is at most
E(f), since the total excess never increases during refine.
We will complete our proof by showing that E(f) · Γ_max ≤ c_p(f) − c(f),
and then deriving an upper bound on this quantity.
By the definition of the reduced costs, c_p(f) − c(f) equals the sum of p(u) − p(v) over the arcs (u, v) that carry flow.
Letting P be a decomposition of f into paths and cycles according to Lemma 8.2 and noting
that cycles make no contribution to the sum, we can rewrite this expression as a sum over the paths of P,
where a path from u to v contributes p(u) − p(v). Since nodes u with e_f(u) < 0 are never relabeled, p(u) = 0 for such a node, and we have
that c_p(f) − c(f) is the sum of −p(v) over the path endpoints v.
Because the decomposition P must account for all of f's excesses and deficits, we can rewrite this as c_p(f) − c(f) ≥ E(f) · Γ_max.
Now we derive an upper bound on c p (f) \Gamma c(f ). It is straightforward to verify that for any
matching network G and integral circulation g, G g has exactly n arcs
and so from the
fact that the execution of refine begins with the residual graph of an (αε)-optimal circulation,
we deduce that there are at most n negative-cost arcs in E. Because each of these arcs has
cost at least −αε, we have c(f) ≥ −αnε. Hence c_p(f) − c(f) ≤ c_p(f) + αnε.
Now consider c_p(f). The ε-optimality of f with respect to p
says that c_p(a) ≥ −ε for every residual arc a.
Now by Lemma 8.2, f can be decomposed into cycles and paths from deficits to excesses. Let P
denote this decomposition, and observe that c_p(f) is the sum of the reduced costs of the arcs of the elements of P. Let int(P) denote the interior
of a path P, i.e., the path minus its endpoints and initial and final arcs, and let ∂(P) denote
the set containing the initial and final arcs of P. If P is a cycle, ∂(P) = ∅ and int(P) = P. So we
can write
c_p(f) = Σ_{P∈P} ( Σ_{a∈int(P)} c_p(a) + Σ_{a∈∂(P)} c_p(a) ).
The total number of arcs in the cycles and path interiors is at most n+2, by node-disjointness.
Also, the total excess is never more than n, so the initial and final arcs of the paths number
no more than 2n. And because each arc carrying positive flow has reduced cost at most ε, we
have c_p(f) ≤ (n + 2)ε + 2nε.
Therefore, c_p(f) − c(f) ≤ (α + 3)nε + 2ε = O(nε), and the lemma follows.
Now to complete our time bound, we use the following lemma.
Lemma 8.4. Between any two consecutive global update operations, at least one unit of excess
reaches a deficit.
Proof: This lemma is a simple consequence of the ffl-optimality of f with respect to p. In
particular, the definition of ffl-optimality implies that no push operation can move a unit of
excess from a node to another node with higher price change, and indeed, two consecutive push
operations on any given unit of excess suffice to move the excess to some node with strictly
lower price change. By the definition of a global update operation, these properties suffice to
ensure that a unit of excess reaches some deficit immediately after a global update, and before
any relabeling occurs.
Lemma 8.3 shows that when Γ_max ≥ kε, the total excess remaining is O(n/k). Lemma 8.4
shows that O(m) work suffices to cancel each unit of excess remaining. As in the unweighted
case, the total work in an execution of refine is O(mk + nm/k); choosing k = √n
gives an O(√n · m) time bound on an execution of refine. The overall time bound follows from
the O(log(nC)) bound on the number of scaling iterations.
Graph compression methods [6] do not apply to graphs with weights because the compressed
graph preserves only adjacency information and cannot encode arbitrary edge weights. Hence
the Feder-Motwani techniques do not apply in the assignment problem context.
9. Minimum Change Discharge Algorithm without Global Updates
We present a family of assignment instances on which we show refine without global updates
performs Ω(nm) work in the first scaling iteration, under the minimum distance discharge
selection rule. Hence this family of matching networks suffices to show that global updates
account for an asymptotic difference in running time.
The family of assignment instances on which we show refine without global updates takes
Ω(nm) time is structurally the same as the family of bad examples we used in the unweighted
case, except that they have two additional nodes and one additional edge. The costs of the
edges present in the unweighted example are zero, and there are two extra nodes connected
only to each other, sharing an edge with cost α.
At the beginning of the first scaling iteration, ff. The execution starts by setting
1. From this point on, the execution of refine restricted to the nodes and arcs present
in the unweighted example parallels the execution of the maximum flow algorithm detailed in
Section 5.
10. Conclusions and Open Questions
We have given algorithms that achieve the best time bounds known for bipartite matching,
namely O(√n · m · log(n²/m) / log n), and for the assignment problem in the cost scaling context, namely
O(√n · m · log(nC)). We have also given examples to show that without global updates, the
algorithms perform worse. Hence we conclude that global updates can be a useful tool in
theoretical development of algorithms.
We have shown a family of assignment instances on which refine performs poorly, but our
proof seems to hinge on details of the reduction, and so it applies only in the first scaling
iteration. An interesting open question is the existence of a family of instances of the assignment
problem on which refine uses Ω(nm) time in every scaling iteration.
--R
Goldberg's Algorithm for the Maximum Flow in Perspective: a Computational Study.
Implementing Goldberg's Max-Flow Algorithm - A Computational Investigation
Algorithm 360: Shortest Path Forest with Topological Ordering.
Algorithm for Solution of a Problem of Maximum Flow in Networks with Power Estimation.
Network Flow and Testing Graph Connectivity.
Clique Partitions
Faster Scaling Algorithms for Network Problems.
Efficient Graph Algorithms for Sequential and Parallel Computers.
An Efficient Implementation of a Scaling Minimum-Cost Flow Algorithm
A New Approach to the Maximum Flow Problem.
Finding Minimum-Cost Circulations by Successive Approximation
O nakhozhdenii maksimal'nogo potoka v setyakh spetsial'nogo vida i nekotorykh prilozheniyakh [On finding a maximum flow in networks of special form, and some applications].
The Hungarian Method for the Assignment Problem.
Implementations of Goldberg-Tarjan Maximum Flow Algorithm
New Scaling Algorithms for the Assignment and Minimum Cycle Mean Problems.
--TR | push-relabel algorithm;dual update;assignment problem;cost scaling;zero-one flow;bipartite matching |
587956 | Realizing Interval Graphs with Size and Distance Constraints. | We study the following problem: given an interval graph, does it have a realization which satisfies additional constraints on the distances between interval endpoints? This problem arises in numerous applications in which topological information on intersection of pairs of intervals is accompanied by additional metric information on their order, distance, or size. An important application is physical mapping, a central challenge in the human genome project. Our results are (1) a polynomial algorithm for the problem on interval graphs which admit a unique clique order (UCO graphs). This class of graphs properly contains all prime interval graphs. (2) In case all constraints are upper and lower bounds on individual interval lengths, the problem on UCO graphs is linearly equivalent to deciding if a system of difference inequalities is feasible. (3) Even if all the constraints are prescribed lengths of individual intervals, the problem is NP-complete. Hence, problems (1) and (2) are also NP-complete on arbitrary interval graphs. | Introduction
A graph G(V, E) is an interval graph if one can assign to each vertex v an interval
I_v on the real line, so that two intervals have a non-empty intersection if and only if their vertices are
adjacent. The set of intervals {I_v}_{v∈V} is called a realization of G. The problems which we study here are
concerned with the existence of an interval realization of a graph, subject to various types of distance
(or difference) constraints on interval endpoints. These are inequalities of the form x − y ≤ C_xy or
x − y < C_xy, for interval endpoints x, y and a constant C_xy. Specifically, we study the following problems (we defer
further definitions to section 2):
Distance-Constrained Interval Graph (DCIG):
INSTANCE: A graph G = (V, E) and a system S of distance constraints on the
variables {l_v, r_v}_{v∈V}.
QUESTION: Does G have a closed interval realization whose endpoints satisfy S?
That is, is there a set of intervals {[l_v, r_v]}_{v∈V} which forms a realization of G and
whose endpoints satisfy S?
A special case is DCIG in which all constraints are lower and upper bounds on interval lengths:
Bounded Interval Graph Recognition (BIG):
INSTANCE: A graph G = (V, E) and, for each vertex v, a lower bound L(v) and an upper bound U(v) on its interval length.
QUESTION: Is there a closed interval realization of G such that for each vertex v: L(v) ≤ |I_v| ≤ U(v)?
In the following problem, each interval must have a prescribed length:
Measured Interval Graph Recognition (MIG ):
INSTANCE: A graph G = (V, E) and a length function L : V → R+.
QUESTION: Is there a closed interval realization of G in which, for every v ∈ V, |I_v| = L(v)?
We shall prove here that even MIG , the most restricted problem of the three, is strongly NP-
complete. Unlike the situation with interval graphs, the fact that the intervals must be closed causes
some loss in generality. In contrast, we show that when the interval graph admits a unique consecutive
clique order (up to complete reversal), DCIG is polynomial, and hence, so are the other two problems.
The class of graphs satisfying this property (which we call UCO graphs) properly contains the class of
prime interval graphs, and is recognizable in linear time. Our solution is based on reducing the problem
to a system of difference constraints. We also prove that we cannot do better, by showing that the
problem of solving a system of difference constraints and the problem BIG on UCO graphs are linearly
equivalent.
Interval graphs have been intensively studied, due to their central role in many applications (cf. [33,
17, 11]). They arise in many practical problems which require the construction of a time line where
each particular event or phenomenon corresponds to an interval representing its duration. Among
the applications are planning [3], scheduling [22, 31], archaeology [26], temporal reasoning [2], medical
diagnosis [29], and circuit design [36]. There are also non-temporal applications in genetics [6] and
behavioral psychology [9]. In the Human Genome Project, a central problem which bears directly on
interval graphs is the physical mapping of DNA [8, 25]: It calls for the reconstruction of a map (a
realization) for a collection of DNA segments, based on information on the pairwise intersections of
segments.
In the applications above, size and distance constraints on the intervals may occur naturally: The
lengths of events (intervals) may be known precisely, or may have upper and lower bounds. The order
or distance between two events may be known. This is often the case in scheduling problems and
temporal reasoning. In physical mapping, certain experiments provide information on the sizes of the
DNA segments [21]. Our goal here is to study how to combine those additional constraints with precise
intersection data.
Green and Xu (cf. [20]) developed and implemented a program (called SEGMAP) for construction
of physical maps of DNA, which utilizes intersection and size data. The intersection data is obtained
by experimentally testing whether each of the segments contain a sequence of DNA (called STS) which
appears in a unique, unknown location along the chromosome. Hence, two segments which contain a
common STS must intersect. Their algorithm works in two phases: the first phase ignores the size data.
It obtains a partition of the STSs into groups, and a linear order on the groups. The second phase uses
the partial order of phase 1 together with the size data to obtain the map using linear programming
algorithms. Our results in section 3 imply that faster algorithms (utilizing network flow techniques) can
be used under certain conditions on the data. However, the results in section 5 imply that the general
problem tackled by SEGMAP is intractable (unless P=NP) even with perfect data.
Recognizing interval graphs (i.e., deciding if a graph has an interval realization) can be done in linear
time [7, 28, 23]. Surprisingly, much less is known about the realization problem when the input contains
additional constraints on the realization. The special case of MIG where all intervals have equal
length corresponds to recognizing unit interval graphs [33], which can be done in linear time [10]. The
special case of DCIG where all distance constraints have the form r_u ≤ l_v is the problem
of seriation with side constraints [27, 19] (also called interval graph with order constraints), which can also
be solved in linear time [32]. When DCIG is further restricted to the special case where for each pair u, v
with (u, v) ∉ E we have either the constraint r_u ≤ l_v or the constraint r_v ≤ l_u, the problem is equivalent to
recognizing an interval order, which can be done in linear time [4]. Fishburn and Graham [12] discussed
a special case of BIG where all intervals have the same pair p and q of upper and lower bounds. For
each p and q, they characterized the resulting class of interval graphs (and interval orders), in terms
of the family of minimal forbidden induced subgraphs (respectively, suborders). They proved that such
a family is finite if and only if p/q is rational. In this case, for integer p and q, their characterization
yields an exponential-time n^{O(pq)} algorithm for identification of such graphs (orders), where n is the
number of vertices. Isaak [24] studied a variant of BIG in which the input is an interval order, there are
upper and lower integer bounds on individual interval lengths, and the question is whether there exists a
realization in which all endpoints are integers. Using Bellman's notion of a distance graph, Isaak gave
an O(min(n³, n^{2.5} log C)) time algorithm for that problem, where C is the sum of the bounds on the lengths.
He also posed the more general problem of BIG, which we answer here. We generalize distance graphs
to handle both strict and weak inequalities on endpoints, in order to solve DCIG on a particular class
of graphs.
There have been other studies on the realization of a set of intervals based on partial information on
their intersection, length and order. Those are different from our problems here inasmuch the information
on intersection is incomplete, i.e., the underlying interval graph is not completely known. Among these
are studies on interval sandwich [18], interval satisfiability [19, 37, 32], on interval graphs and orders
which have realizations with at most k different lengths [11, chapter 9], on the smallest interval orders
whose representation requires at least k different lengths [11, chapter 10], and on the number of distinct
interval graphs and orders on n vertices which have a realization with k given lengths [35].
The paper is organized as follows: Section 2 contains some preliminaries and background. Section 3
studies problem DCIG on UCO graphs, and proves its linear equivalence to solving systems of difference
constraints. This implies in particular an O(min(n³, n^{2.5} log(nC))) time algorithm for all three problems
on UCO graphs. In section 4 we sketch a simple proof that DCIG is strongly NP-complete. Section 5
proves the stronger result that MIG is strongly NP-complete. The reduction (performed in two steps)
is rather involved, but we feel it gives insight on the interplay between the topological side of the problem
(i.e., intersection, open or closed intervals) and its metric aspect (i.e., the intervals sizes).
2. Preliminaries. A graph G = (V, E) is called an intersection graph of a family of sets
S = {S_v}_{v∈V} if uv ∈ E exactly when S_u ∩ S_v ≠ ∅. G is called an interval graph if it is an intersection graph of a family
of intervals on the real line. In that case, S is called a realization of G. Depending on
the convention, each interval may be either closed or open, with no loss of generality. For simplicity, we
sometimes use the same names for the intervals and for the corresponding vertices.
For an interval I denote its left and right endpoints by l(I) and r(I), respectively. The length of I,
denoted |I|, is r(I) − l(I). If G has a realization in which all the intervals are of equal length, then it is
called a unit interval graph.
Let C_1, ..., C_k be the maximal cliques in a graph G = (V, E), V = {v_1, ..., v_n}. The clique
matrix of G is the n × k zero-one matrix C(G) = (c_ij), with c_ij = 1 if and only if v_i ∈ C_j. If the
columns in C(G) can be permuted so that the ones in each row are consecutive, then we say that
C(G) has the consecutive ones property, and we call such a permutation of the columns a consecutive (clique)
order. According to Gilmore and Hoffman [16], G is an interval graph if and only if C(G) has the
consecutive ones property.
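For intuition only, the consecutive ones property can be checked by brute force on tiny clique matrices (the linear-time method is the PQ-tree algorithm cited below); the function here is an illustrative sketch, not the algorithm used in this paper.

    from itertools import permutations

    def has_consecutive_ones_property(matrix):
        """Brute-force check whether the columns of a 0/1 matrix can be permuted
        so that the ones in every row are consecutive. Exponential in the number
        of columns; fine only for tiny examples."""
        if not matrix:
            return True
        k = len(matrix[0])
        for perm in permutations(range(k)):
            ok = True
            for row in matrix:
                ones = [j for j, col in enumerate(perm) if row[col] == 1]
                if ones and ones[-1] - ones[0] + 1 != len(ones):
                    ok = False
                    break
            if ok:
                return True
        return False

    # Example: the clique matrix of a path v1-v2-v3 (maximal cliques {v1,v2}, {v2,v3}).
    print(has_consecutive_ones_property([[1, 0], [1, 1], [0, 1]]))   # True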
For two non-intersecting intervals x, y where x is completely to the left of y, we write x ≺ y or,
equivalently, y ≻ x. Let P = (V, <) be a partial order. Call < an interval order if there exists a set of
intervals S = {I_v}_{v∈V} such that v < u if and only if I_v ≺ I_u. S is called a realization for P. Call G = (V, E)
the incomparability graph of P if for each u, v ∈ V, uv ∈ E if and only if u and v are incomparable in
P. Hence, G is an interval graph if and only if it is the incomparability graph of
some interval order. In this case we will say that the graph G admits the order <.
For a vertex v in the graph G = (V, E), N(v) = {u | uv ∈ E} and N[v] = N(v) ∪ {v}.
For a vertex set U ⊆ V denote N[U] = ∪_{v∈U} N[v]. A set M ⊆ V is called a module
in G = (V, E) if for each x, y ∈ M and for each u ∈ V \ M, ux ∈ E if and only if uy ∈ E. Surely, V is a module, and
for each v ∈ V, {v} is a module. Such modules are called trivial. If all modules in G are trivial, then G
is called prime. For a subset X ⊂ V define G_X = (X, {uv ∈ E | u, v ∈ X}). For a module M in the graph
G, the graph obtained from G by replacing M with a single new vertex v adjacent to N(M) \ M is said to
be obtained from G by contracting M to v. We usually denote by n and m the number of vertices and
edges, respectively, in the graph.
3. Distance Constraints in UCO graphs. We call an interval graph uniquely clique-orderable
(UCO for short) if it has a unique consecutive clique order, up to complete reversal, in every realization.
An interval graph G is UCO if and only if the only non-trivial modules in it are cliques [34]. Note that
G is UCO if and only if the interval order admitted by G is unique, up to complete reversal, because
an interval order of the vertices of G uniquely determines a linear order of the maximal cliques in G,
and vice versa. Denote this order by OE G . Note also that the class of UCO graphs properly contains the
class of prime interval graphs. UCO graphs can be recognized in linear time by applying the PQ-tree
algorithm of Booth and Lueker [7], and noting that G is UCO if and only if the final tree consists of a
single internal Q-node and the leaves. This procedure also computes OE G in O(m + n) time.
In this section we study the problem DCIG when the input graph is UCO. We show how to reduce
this problem, in linear time, to the problem of deciding whether a system of difference constraints is
feasible. Hence, DCIG, BIG and MIG are all polynomial on UCO graphs. We also prove that for BIG
and DCIG, we cannot do any better, since deciding the feasibility of a system of difference constraints
can be reduced in linear time to an instance of BIG with a UCO graph.
3.1. A Polynomial Algorithm for DCIG on UCO Graphs. Let P = (G, A) be an instance
of DCIG, where G = (V, E) is UCO and A is a set of difference inequalities on the interval endpoints.
Construct two systems T and T̄ of difference constraints on the variables {l_v, r_v}_{v∈V}, as follows: Both
systems include all inequalities in A. In addition, for each pair x, y with xy ∉ E and x ≺_G y, T contains an
inequality r_x < l_y, and T̄ contains an inequality r_y < l_x. If xy ∈ E then both T and T̄ contain an
inequality r_x ≥ l_y (and r_y ≥ l_x). With these definitions we prove:
Lemma 3.1. P has a realization if and only if either T or T̄ has a feasible solution.
Proof. If X = {~l_v, ~r_v}_{v∈V} is a feasible solution to T or to T̄, then X is a solution to A, and
{[~l_v, ~r_v]}_{v∈V} realizes G. On the other hand, let {[~l_v, ~r_v]}_{v∈V} be a realization of G whose endpoints
satisfy A. Then the order of the intervals {[~l_v, ~r_v]}_{v∈V} on the real line is either ≺_G or its reversal.
Therefore, {~l_v, ~r_v}_{v∈V} is a feasible solution to either T or T̄.
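The two systems can be generated directly from the unique interval order; in the sketch below, constraints are stored as tuples (p, q, c, strict) meaning p − q < c when strict and p − q ≤ c otherwise, and the function name and representation are our own illustrative choices.

    def build_T_and_Tbar(vertices, edges, order_index, A):
        """Build the two difference-constraint systems T and T-bar of section 3.1.

        order_index[v] is the position of v in the unique interval order of the UCO
        graph; A is a list of user constraints (x, y, c, strict) meaning
        x - y < c (strict) or x - y <= c. Endpoint variables are ('l', v) and ('r', v)."""
        T, Tbar = list(A), list(A)
        edge_set = {frozenset(e) for e in edges}
        for i, x in enumerate(vertices):
            for y in vertices[i + 1:]:
                if frozenset((x, y)) in edge_set:
                    # Overlap: r_x >= l_y and r_y >= l_x, i.e. l_y - r_x <= 0 and l_x - r_y <= 0.
                    for S in (T, Tbar):
                        S.append((('l', y), ('r', x), 0, False))
                        S.append((('l', x), ('r', y), 0, False))
                else:
                    u, v = (x, y) if order_index[x] < order_index[y] else (y, x)
                    # In T the earlier vertex ends strictly before the later one starts;
                    # T-bar encodes the completely reversed order.
                    T.append((('r', u), ('l', v), 0, True))      # r_u - l_v < 0
                    Tbar.append((('r', v), ('l', u), 0, True))   # r_v - l_u < 0
        return T, Tbar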
Hence, we can solve our problem by deciding whether system T or T̄ is feasible. We shall prove now
that a system S of weak and strict difference constraints on n variables is reducible in linear time to a
system S' which consists of weak difference constraints, with numbers only O(n) times larger. (Standard
transformation techniques [14] would give numbers O(2^L) times larger for binary input length L.)
Assume all constants in S to be integral, and fix ε ≤ 1/n. Define S' to include every weak inequality
in S, and a weak inequality x − y ≤ c − ε for every strict inequality x − y < c in S. Note that
the number of variables and the number of inequalities in the two systems are the same, and the constants in
S' (after multiplying by an appropriate factor to restore integrality) are larger than the constants in S
by a factor of Θ(n).
Lemma 3.2.
S has a feasible solution if and only if S 0 has one.
Proof. The 'if' direction is trivial, since a feasible solution to S 0 also satisfies S.
To prove the 'only if', we generalize the notion of a distance graph (cf. [1, p. 103]), to handle
strict and weak inequalities: For a system T of difference constraints, construct a directed weighted
graph weights and arc labels, as follows: For every constraint x \Gamma y - C xy or
add an arc (y; x) to D(T ) with weight C xy and label the arc - or !, respectively. D(T ) is
called the distance graph of the system T . The weight of a path (or a cycle) in this graph is the sum of
the weights of its arcs. Bellman has shown that when all inequalities in T are weak, T is feasible if and
only if D(T ) contains no negative cycle ([5], see also [1, p. 103]).
Suppose S' is not feasible. Then D(S') must contain a negative-weight cycle c. Let w(c) and w'(c)
be the total weight of c in D(S) and D(S'), respectively. Distinguish two cases:
• All arcs in c have labels ≤. Then w(c) = w'(c) < 0. On the other hand, any feasible solution of S, summed along the cycle, would have to satisfy
0 = Σ_{(y,x)∈c} (x − y) ≤ Σ_{(y,x)∈c} C_xy = w(c).    (1)
Hence, S is infeasible.
• c contains an arc marked <. Since the weight of each arc of c in D(S) differs from its weight in
D(S') by no more than ε, we get w(c) ≤ w'(c) + nε < 1.
Since the weights in D(S) are integral, it follows that w(c) ≤ 0. Since the cycle c in D(S)
contains an arc marked <, the inequality (1) is strict, namely a feasible solution would require w(c) > 0; so S is infeasible.
Corollary 3.3.
A system T is feasible if and only if the weight of every cycle in its distance graph D(T ) is either
positive, or it is zero and the cycle consists of - arcs only.
We now show that addition of identical strict inequalities to the equivalent systems S and S 0 above
maintains the equivalence between them. (We will need this property in section 5.3): For constants
define the following systems
2 and S 3 on the set of variables
Lemma 3.4. Let
are integers and ffl ! 1
n . S
has a feasible solution if and only if S 0 has one.
Proof. The proof is by induction on the size of I 3 . For I this is lemma 3.2. Suppose both
S and S 0 have feasible solutions, and consider adding a single strict inequality E: x \Gamma y ! C to both
systems. This implies adding an arc labeled ! with to both distance graphs D(S)
and D(S 0 ). By corollary 3.3, it suffices to prove that there exist a cycle of non-positive weight passing
through e in D(S [E) if and only if such a cycle exists in D(S 0 [E). But for every simple path p from
x to y, w S 0 is an integer. Hence, dwS 0 since
C is integral, wS
By lemma 3.1 and lemma 3.2, solving an instance of DCIG reduces in linear time to determining if at
least one of two systems of difference constraints is feasible. Using the distance graph reformulation, the
feasibility of such a system with M weak inequalities on N variables, with sum of absolute values of arc
weights C, can be decided in O(min(NM, √N · M · log(NC))) time [13, 30]. In our instance (G, A) there
are n vertices, so N = O(n) and M = O(n² + |A|).
Corollary 3.5. Deciding if a UCO graph with difference constraints has a realization can be done
in O(min(n³, n^{2.5} log(nC))) time.
Note that the algorithms of [30, 13] for deciding the feasibility of a system also produce a feasible
solution if one exists. This enables construction of a realization (if one exists) in O(min(n³, n^{2.5} log(nC)))
time.
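The simpler of the two feasibility tests, Bellman–Ford on the distance graph, is easy to sketch. The code below first weakens strict inequalities with a slack ε (as in Lemma 3.2) and then relaxes; the constraint representation matches the sketch above, and the routine is not the scaling algorithm of [13, 30].

    def feasible_difference_system(constraints, eps):
        """Decide feasibility of weak/strict difference constraints via Bellman-Ford.

        constraints: list of (x, y, c, strict) meaning x - y < c (strict) or x - y <= c.
        Strict inequalities are replaced by x - y <= c - eps (Lemma 3.2 style).
        Returns a satisfying assignment (dict) or None if the system is infeasible."""
        arcs = [(y, x, c - eps if strict else c) for (x, y, c, strict) in constraints]
        variables = {v for (x, y, _, _) in constraints for v in (x, y)}
        # Distances from a virtual source connected to every variable with weight 0.
        dist = {v: 0 for v in variables}
        for _ in range(len(variables)):
            changed = False
            for (y, x, w) in arcs:          # arc y -> x with weight w encodes x - y <= w
                if dist[y] + w < dist[x]:
                    dist[x] = dist[y] + w
                    changed = True
            if not changed:
                return dist                 # shortest distances satisfy every constraint
        return None                          # still relaxing after |V| rounds: negative cycle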
3.2. Reducing a System of Difference Constraints to BIG on UCO graphs. Given a system
of weak difference constraints, we shall show how to reduce it, in linear time, to an equivalent instance
of BIG, in which the graph is UCO. According to lemma 3.2, the assumption that all constraints are
weak can be made without loss of generality.
Let P be the following system of weak difference constraints in the variables
xN g:
a new system P 0 of difference constraints
on the same variable set X:
Note that the choice of C guarantees that c 0
can be
rewritten as:
where all right hand side terms are larger than one.
We call a solution f~x i g N
to P 0 monotone if for each
Lemma 3.6. P has a solution if and only if P 0 has a monotone solution. Moreover, if -
is a feasible solution to P for which is minimal, then -
is a monotone feasible solution to P 0 .
Proof. Suppose P 0 has a monotone solution x
N . Let ~ x
for each
Therefore, the i-th inequality in P is satisfied by f~x i g N
Hence, P has a feasible solution.
Let ~ x
xN be a solution of P , for which is minimal. (P defines
an intersection of closed half-spaces, which is a closed set, therefore there is a solution attaining this
minimal value). By [5], \Delta is the sum of arc weights along some simple path in the distance graph
By P we get, for each 1 -
is a feasible solution of
. For each 1
and X 0 is monotone.
For the above system P , define to be the following BIG instance (compare figure 1):
ffl G is the intersection graph of the set of intervals A defined as follows:
i=0 where a
ffl The length constraints are as follows:
- For integral i: U (b i
a 1
a 2
a 3
a 4
a 0
a 5
a 1
a 2
a 3
a 4
a 0
a 5
1Fig. 1. The graph G used in the reduction (top) and a realization for it (bottom)
Lemma 3.7. G is UCO.
Proof. Let G' be the intersection graph of A ∪ B. It is easy to see that G' is prime, and hence, it
has a unique clique order [34]. Moreover, G' has exactly 2N cliques, each one containing
(among other vertices) a unique and distinct b_i. The set of maximal cliques in G is {N[b_x] | x ∈ B}, namely,
each clique is distinguished by a single b_i. Since G' is UCO, its unique clique order determines a unique
linear order on {b_x | x ∈ B}, and hence, also on the maximal cliques of G. Hence, G is UCO.
Theorem 3.8. P has a feasible solution if and only if J has a realization.
Proof. Only if: Suppose f~x i g N
is a feasible solution to P , for which is
minimal. By lemma 3.6, fx 0
iC is a monotone solution to P 0 . Choose arbitrary
1. Define the following set R [ T [ S of intervals:
is monotone, the intersection graph of R[T [S is isomorphic to G. The length bounds on
vertices of T and R are trivially satisfied. If
i as required. If
satisfying the length bounds on the vertices of
S.
Suppose J has a realization. Let fy i g N
be the points in a realization of J which correspond
to the intervals fb i g N
(which have length zero). W.l.o.g. y N ? y 1 , because otherwise we can reverse
the realization. Since G is UCO, the order of the intervals in J is identical to the order of the intervals
A[B [W in the definition of G. Therefore y due to the length constraint
on be the interval corresponding to w
in the
realization. Define a system P 00 of difference constraints as follows:
i . It follows that fy i g N
is a monotone solution to P 00 . A proof similar to lemma 3.2 implies that P 0 and P 00 are equivalent, so P 0
is feasible. We would like to show that P 0 has a monotone solution. Let Q 0 be the system of constraints
have only monotone solutions. According to
lemma 3.4, adding Q 0 to both P 0 and P 00 maintains the equivalence between them. But a monotone
solution of P 00 realizes has a monotone solution and according to lemma 3.6 P is
feasible.
Corollary 3.9. The problem of deciding whether there exists a feasible solution to a system of
difference constraints is linearly reducible to the problem BIG on a UCO graph.
4. DCIG is NP-complete. We will now show, that although DCIG is polynomial when restricted
to UCO graphs, it is NP-complete in general. A stronger result will be proven in the next section, but
we include a sketch of this proof as it is much more transparent.
Theorem 4.1.
DCIG is strongly NP-complete.
Proof. We show a pseudo-polynomial reduction from the problem 3-PARTITION which is known
to be strongly NP-complete (see, e.g., [15]).
An instance of 3-PARTITION is a set X of 3k positive integers x_1, ..., x_{3k} and a bound B, with
B/4 < x_i < B/2 for every i and Σ_i x_i = kB. The question is to determine whether there exists a partition of X into k subsets (which
therefore have to be triplets) so that for each 1 ≤ j ≤ k the subset X_j satisfies Σ_{x_i∈X_j} x_i = B.
Let X = {x_1, ..., x_n}, n = 3k, be an instance of 3-PARTITION. Define an instance of DCIG, I = (G, S),
where G is the empty graph on the vertices fv j g n
, and S consists of the following three
types of constraints:
We shall see that I is satisfiable if and only if X is a "yes" instance (see figure 2). Assume for now that
all intervals in X must be open.
Suppose there exists a partition X
Examine the set of intervals
1-j-k;1-i-3 where I a j
I v i
). The intervals in T are disjoint, and their endpoints trivially satisfy S, hence,
T is a realization of I.
Fig. 2. The v_i's can be squeezed between the a_i's if and only if a 3-partition exists.
Conversely, suppose fI a i
is a realization of I. For each
g. According to the constraints, l(a
the I j 's do not intersect each other, and therefore the sets X j are disjoint. Moreover, every x i is a
member of some X j . Therefore is a partition of X. For each 1 - j - n: Since G is empty
all the I v i
's are disjoint, hence,
We assumed here that all intervals in the realization are open. To form a closed realization, it
suffices to modify the reduction by allowing an interval of length 1 (instead of length 1) for each
'gap' interval [r(a sufficiently small. (If each a
are integers, then
Since 3-partition is strongly NP-complete, and the reduction is pseudo-polynomial, our problem is
strongly NP-complete.
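Because the three families of constraints did not survive extraction above, the following sketch encodes the reduction in one plausible way: unit-length anchor intervals a_0, ..., a_k separated by gaps of size exactly B, and an interval of length x_i for every element, confined to the anchors' span; on the empty graph these constraints are satisfiable exactly when a 3-partition exists. The specific constants are our assumption, not a quotation of the paper. Constraints are triples (p, q, c) meaning p − q ≤ c.

    def three_partition_to_dcig(xs, B):
        """One plausible encoding of 3-PARTITION as a DCIG instance (empty graph plus
        difference constraints); the exact constants of the paper's reduction are
        assumed, not quoted."""
        k = len(xs) // 3
        S = []
        def eq(p, q, c):                     # p - q = c, written as two inequalities
            S.extend([(p, q, c), (q, p, -c)])
        # Anchors a_0..a_k: unit-length intervals separated by gaps of exactly B.
        for j in range(k + 1):
            eq(('r', f'a{j}'), ('l', f'a{j}'), 1)
            if j < k:
                eq(('l', f'a{j+1}'), ('r', f'a{j}'), B)
        # Elements v_i: length x_i, confined to the span of the anchors.
        for i, x in enumerate(xs):
            eq(('r', f'v{i}'), ('l', f'v{i}'), x)
            S.append((('l', 'a0'), ('l', f'v{i}'), 0))     # l(a_0) <= l(v_i)
            S.append((('r', f'v{i}'), ('r', f'a{k}'), 0))  # r(v_i) <= r(a_k)
        vertices = [f'a{j}' for j in range(k + 1)] + [f'v{i}' for i in range(len(xs))]
        return vertices, [], S               # empty edge set: all intervals must be disjoint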
5. Recognizing Measured Interval Graphs is NP-Complete. In this section we prove the
NP-completeness of the problem MIG , introduced in section 1. The main part in this proof is a
hardness result for the following, slightly more general problem, in which we specify in advance for each
interval whether it should be closed or open:
Recognizing a Measured Interval Graph with Specified Endpoints (MIG):
INSTANCE: A graph G = (V, E), a non-negative length L(v) for every v ∈ V, and a
function φ : V → {open, closed}.
QUESTION: Is there a realization of G in which the length of I_v is exactly L(v), and
I_v is open if and only if φ(v) = open?
We shall denote such an instance by P = (G, L, φ). When P is a "yes" instance, we say that P is a
measured interval graph (with endpoint specification). We shall first prove that MIG is NP-complete,
and then reduce MIG to MIG .
The issue of endpoint specification seems unnatural at first sight. It is well known that for interval
graphs in general the endpoint specification can be arbitrary, namely, a graph is interval if and only if
it has a realization for any possible specification of endpoints. This is not the case in the presence of
length constraints. For example: a K 1;3 graph with length 1 assigned to all vertices has no realization if
all intervals are open (or all closed), but it has a realization precisely if the degree-3 vertex and two of
the others are closed, as in figure 3.
Fig. 3. The K_{1,3} graph shown (right) has a realization (left) if all intervals but c are closed.
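To see concretely why the open/closed specification matters, the following small script searches a grid of candidate placements for the K_{1,3} example of figure 3 (all lengths 1): a realization exists when exactly one leaf is open, and none exists when all four intervals are closed. The brute-force search is purely illustrative.

    from itertools import product

    def intersects(i1, i2):
        """Intersection test for intervals given as (left, right, closed)."""
        (l1, r1, c1), (l2, r2, c2) = i1, i2
        lo, hi = max(l1, l2), min(r1, r2)
        if lo < hi:
            return True
        if lo == hi:                      # touching endpoints: both must be closed there
            return c1 and c2
        return False

    def realizes_k13(closed, step=0.5, span=3):
        """Search a small grid for a K_{1,3} realization with all lengths 1.
        Vertex 0 is the center; 1, 2, 3 are leaves. closed[v] says if I_v is closed."""
        grid = [x * step for x in range(int(-span / step), int(span / step) + 1)]
        for lefts in product(grid, repeat=4):
            ivs = [(l, l + 1, closed[v]) for v, l in enumerate(lefts)]
            ok = all(intersects(ivs[0], ivs[v]) for v in (1, 2, 3))
            ok = ok and not any(intersects(ivs[u], ivs[v])
                                for u in (1, 2) for v in (2, 3) if u < v)
            if ok:
                return ivs
        return None

    print(realizes_k13([True, True, False, True]))   # one leaf open: a realization exists
    print(realizes_k13([True, True, True, True]))    # all closed: None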
We shall often use the following implicit formulation for the problem, by representing G and φ by a set of
intervals, and using L to modify their lengths:
MIG: Implicit formulation:
INSTANCE: A pair (T, L) where T = {I_x}_{x∈V} is a set of intervals, and L : V → R+
is a length function.
QUESTION: Is there a set of intervals {I'_x}_{x∈V} such that |I'_x| = L(x) for every x, I'_x ∩ I'_y ≠ ∅ if and only
if I_x ∩ I_y ≠ ∅, and I'_x is closed if and only if I_x is closed?
This formulation is sometimes more convenient as it suggests a possible realization. We need the
following notations and definitions:
Definition 5.1.
Let P = (G, L, φ) be a measured interval graph with endpoint specification, and let U ⊆ V be a set
of its vertices. Define the measured interval graph P_U induced by P on U to be (G_U, L_U, φ_U), where
G_U is the subgraph of G induced on U, and L_U, φ_U are the restrictions of L and φ, respectively, to U.
Call two MIG instances P = (G, L, φ) and P' = (G', L', φ') isomorphic if there is a graph isomorphism
f between G and G', and for each v ∈ V(G), L'(f(v)) = L(v) and φ'(f(v)) = φ(v); namely, the
length and closure properties of intervals are preserved by f. In this case denote P ≅ P'.
Definition 5.2.
OE) be an instance of MIG. Let be a realization of P . Define
realization of Pg.
5.1. Basic Structures. We now describe three "gadgets" which are building blocks in our NP-completeness
construction, and prove some of their properties. The structure of these gadgets assures
us that their realization has very few degrees of freedom. To formalize this we introduce the following
notion:
Definition 5.3. Two realizations of the same interval graph are isometric if they are identical up
to reversal and an additive shift. Namely, there exists a function
j for all j. Let OE) be an instance of MIG. We call U ' V (G) rigid
in P if in any two realizations of P , the sets of intervals realizing U are isometric. In particular, all
endpoints are located at fixed distances from the leftmost endpoint, including the rightmost one. Thus in
every realization U has the same length. If V (G) is rigid in P , we call P rigid.
Note that the fact that U is rigid in P does not imply that PU is rigid. For example, the instance P
defined implicitly by the intervals in figure 3 is rigid, and in particular fb; c; dg is rigid in P , but P fb;c;dg
is not rigid.
5.1.1. The Switch. We first define the switch, a gadget which will be used as a toggle in larger
structures. For the parameter real value a - 1, define the MIG instance
(compare figure 4): G is the graph on the five vertices
assigns lengths 0; 1
respectively, and OE(v) specifies v 3 to be open, and all the
other vertices to be closed.
A realization fI of a Switch(a) will be called straight if I 1 is to the left of I 5 . Otherwise,
it will be called reversed. We say that such a realization is located at I 3 . For a straight realization U
of a Switch(a) located at by \GammaU the reverse realization located at
Hence, \GammaU is a "mirror image" of U , covering the same interval [x; x along the real line.
Lemma 5.4. Switch(a) is rigid. In particular,
Fig. 4. The Switch (bottom), a straight realization (top left) and a reversed realization (top right).
Proof. Let S be a straight realization, as in the top left of figure 4. Suppose S' is another realization
such that both leftmost endpoints, I_1 and I'_1,
are identical. The intersection graph of a Switch(a) is prime,
hence, I 3 is between I 1 and I 5 , and l(I 5
therefore all inequalities hold as equalities. In particular,
a.
Note that lemma 5.4 implies that a realization of a straight Switch(a) located at (x; x+1) is unique.
The same is true for a reversed Switch.
Lemma 5.5.
Let P = (G, L, φ) be an instance of MIG, and let {I(v)}_{v∈V} be a realization for it. Let
U ⊆ V be a module such that P_U ≅ Switch(a). Then |I(x) ∩ I(y)| ≥ 1 for every two vertices x, y ∉ U that are adjacent to U.
Proof. Let v_1, ..., v_5 be the vertices of U, numbered in the same order as in the definition of a Switch.
According to lemma 5.4, I(v_1) and I(v_5) are one unit apart, but both of them intersect I(x) and I(y).
Therefore I(x) ∩ I(y) contains the unit-length interval between I(v_1) and I(v_5), yielding |I(x) ∩ I(y)| ≥ 1.
5.1.2. The Fetters. Our second gadget binds two Switches and imposes a prescribed distance
between them. For positive real parameters, d; r and two sets of vertices, U 1 and U 2 , define a
five-parameter instance of MIG, F etters(d; r or in short, F etters = (G; L; OE), as follows:
are modules in G, and each of them induces a Switch. More precisely, there exist
constants a 1 ; a 2 such that F etters U1
• The graph G̃ = (Ṽ, Ẽ), constructed from G by contracting U_1 and U_2 to v_1 and v_2, respectively,
is as shown in figure 5; besides v_1 and v_2, its vertices include v_1^short, v_2^short, v_1^long, v_2^long, v_1^end, v_2^end and v^tot.
Fig. 5. The Fetters (bottom) and a realization of it (top). The distance between the Switches v_1 and v_2 is fixed.
• The lengths of the remaining intervals are as indicated in the realization of figure 5.
• φ specifies v^tot to be open, and all the other intervals outside U_1 ∪ U_2 to be closed.
When there is no confusion, we shall use the vertex and the corresponding interval in the realization
interchangeably. For example, l(v short
i ) is the position of the left endpoint of the interval corresponding
to v short
i in the realization, jv short
its length, etc.
Call a realization of F etters straight if v 1 is to the left of v 2 . Otherwise call the realization reversed.
A realization of F etters is said to be located at the interval corresponding to v tot . The F etters instance
fixes the distance between its two Switches. To formalize this notion we need the following definition:
OE) be a MIG instance. Let M;M 0 ' V (G) be modules in G where
For a realization of P , in which I and I 0 are the intervals
corresponding to the middle vertices in M and M 0 , respectively, define Dist(M;
Lemma 5.7. ~
is rigid in the F etters. In particular, in every realization of the F etters,
Proof. Recall that ~
G is the graph constructed from G by contracting U 1 and U 2 into v 1 and v 2 ,
respectively. It is easy to see that ~
G is prime. Prime interval graphs have an interval order which
is unique, up to complete reversal [34]. Hence, let us refer to the order in figure 5, where w.l.o.g.
tot is between v end
1 and v end
according to the length constraints: l(v end
long
By lemma 5.5, r(v long
long
long
yielding l(v end
long
In a similar way we prove v short
long
and the result follows.
By lemma 5.7, for a realization of the F etters(d; r which is straight (or reversed) and
located at 1), the only degrees of freedom are reversals of the Switches.
5.1.3. The Frame. We now construct an element which divides an interval into sub-intervals of
prescribed lengths. Each sub-interval is characterized by a distinct set of intervals which contain it.
This element will be used as a frame, into which the moving and toggling elements will fit, and have the
desired degrees of freedom.
be a sequence of real positive numbers, whose sum is s.
to be an instance
consists of 3r+3 vertices, where V
and g.
ffl The edges in G are:
k such that j is odd and ji \Gamma jj - 1.
are odd and ji \Gamma jj - 2.
ffl The lengths are:
ffl OE specifies ff i to be open if i is odd, and fl 4 to be open, but all other intervals to be closed.
A realization of a F rame is said to be straight if fl 1 is to the left of fl 3 . Otherwise it is called reversed.
Such a realization is located at the interval corresponding to fl 4 .
Fig. 6. A graph of a Frame (top) and its realization (bottom). The Frame structure is rigid and divides an interval
into smaller intervals of prescribed lengths and positions.
is rigid in a F rame.
Proof. Let G 0 be the subgraph of G induced on V ff [ V fi . It is easy to see that G 0 is prime, and
hence, has a unique clique order [34]. Moreover, G 0 has exactly k maximal cliques, each one containing
(among other vertices) a unique and distinct ff i . The set of maximal cliques in G is fN [ff i
namely,
each clique is distinguished by a single ff i . Since G 0 is UCO, its unique clique order determines a unique
linear order on V ff , and hence, also on the maximal cliques of G. Hence, G is UCO.
Let S and S 0 be two realizations of the same F rame. Suppose
are their leftmost endpoints,
respectively. The F rame graph is UCO, hence, the order of the ff-intervals is identical in both S and S 0 .
Moreover,
all the ff-intervals are disjoint, and must be between
1 and fl 0
3 , which
are at distance exactly s. Thus, the position of all ff endpoints is uniquely determined. It is easy to see
that also all fi-intervals except must have identical position in both realizations.
By lemma 5.8, for any straight (or reversed) realization of a F rame(x located at (x; x+ s),
the positions of all intervals except are uniquely determined.
In the sequel, when we use a realization of such a F rame to implicitly define a MIG instance, we
shall assume that fi 1 and fi k are contained in [x; x+s], so the realization has the shortest possible length.
In addition, when we use any gadget in the implicit definition, and we describe its intervals by saying
that "the gadget is located at we mean that "a straight realization of the gadget is located at
5.2. The Reduction. The realization of a MIG instance is a polynomial witness for a "yes"
instance, hence, MIG is in NP. We describe a reduction from 3-Coloring, which is NP-complete (see,
e.g., [15]). Let E) be an instance of 3-Coloring. We construct an instance P= (T; L) of MIG
(in implicit form), and prove that P is a "yes" instance if and only if G is 3-colorable.
The general plan is as follows: We construct measured interval sub-instances for each vertex and for
each edge of G. The sub-instance of a vertex is designed so that it can be realized only in three possible
ways, which will correspond to its color. The sub-instance for each edge will prevent the vertices at its
endpoints from having the same color.
5.2.1. The Vertex Sub-instance. Let 1. Define the
following set of intervals (compare figures 7 and 8):
1), located at (0; 11).
, and each of ffi 1 and ffi 2 is a Switch(3), located at (1; 2) and (7; 8),
respectively. The superscripts match the vertex numbers in each Switch.
long
long
2 g, such that i [
located at (\Gamma2M; 2M ).
Note that the intersection graph of ! [ i is prime.
For each interval I 2 S let jIj. The sub-instance of each vertex is isomorphic to (S; L), and
the F rames of the n vertices are layed out contiguously as follows: For an interval J and a real number
1g. For each
Denote
be the measured interval graph defined implicitly
by S.
Fig. 7. The set of intervals S: the Switches δ can be positioned in the frame, and the distance between δ_1 and δ_2
is enforced by the Fetters.
Fig. 8. A sketch of the structure of a vertex. The Switches δ can be positioned in the frame ω. The distance
between δ_1 and δ_2 is enforced by the Fetters.
A realization of a vertex sub-instance is called straight (respectively, reversed) if the realization of
its ! is straight (respectively, reversed).
Lemma 5.9. Let [ i2V S(i) 0 be a realization of P V
, with . Then either every
S(i) 0 is straight, or every S(i) 0 is reversed.
Proof. It suffices to prove that l(fl 1 (i)
follows from the identity of the zero-length intersecting intervals and the disjointness
of are both, by lemma 5.8, at distance 11 from the former pair, respectively.
be a straight realization of P V
. Define the function
(2)
Call Col the coloring defined by S 0
. We now show that each vertex subgraph can be realized in exactly
three distinct colors. This is also demonstrated in figure 9.
Lemma 5.10.
For each
Fig. 9. The three possible positions (Color 0, Color 1, Color 2) of the Switches δ in a vertex sub-instance, which will
correspond to the three possible colors of the vertex.
Proof. According to lemma 5.8, in each straight S(i) 0 , the positions of the intervals in ff(i) 0 relative
to l(fl 1 (i) 0 ), are fixed. Assume w.l.o.g. For each J 2 for each
But according to lemma 5.7:
Therefore,
and
5.2.2. The Edge Sub-instance. Let the edges of G be
an edge in E, where j. For each edge e k we construct an edge sub-instance, that forces the colors of
the vertices i and j to be different.
moving
frame
fixed
frame
Y
Z
Fig. 10. This is a functional sketch of the edge sub-instance. The two Switches D can only reverse in their relative
fixed positions inside the moving frame. Their distances from the corresponding Switches in the vertices respectively,
are fixed. The moving frame itself can have different positions along the fixed frame.
We first give an overview of this construction (compare Figure 10): Each edge is assigned a fixed
rame w which contains two Switches, D 1 and D 2 , which are the heart of its sub-instance. The F rames
of the edges are layed out contiguously to the right of the vertex F rames. The sub-instance is a collection
of intervals fA designed so that:
1. D 1 and are kept at a fixed distance (this is done by the Y intervals).
2. D 2 and are kept at a fixed distance (this is done by Z).
3. D 1 and D 2 are restricted to be in one of four possible relative positions, allowing the four possible
color differences between the vertices i and j (this is done by W ).
4. D 1 and D 2 together can undergo a translation, allowing the six possible color combinations of
the vertices i and j, as demonstrated in figure 12 (this is done by A 0 and w).
We now describe the construction in detail (compare Figure 11): Define the following set
of intervals. Let 11n. For readability, we
omit the parameter k whenever possible.
is a F rame(5; 12; 1), located at (0;
1), located at (7; 16) Base(k).
located at (8;
is a Switch(4) located at (11; 12) Base(k).
long
long
2 g, such that Y [ is the
located at
long
long
2 g, such that Z [ is the
located at
The length function on the sub-instance (X; L) is defined so that
the Y -s in which we set:
L(Y short
long
long
This change, together with the +1 in the first parameter of the F etters of Y , forces a +1 shift on the
location of ffi 1 (i). This shift will be crucial in forcing the vertices i and j to have different colors.
Note that the intersection graph of X(k) \Gamma D(k) is prime. Note also that the left and right ends
of the F etters sub-instances Y and Z are positioned way beyond the contiguous F rames, in all edge
sub-instances and vertex sub-instances. This allows every F etters to move independently, and no vertex
or edge subgraph is a module.
J 0 be a set of intervals with the same intersection graph
as of X, which satisfy the corrected length constraints, where X(i)
straight (respectively, reversed) realization if the frame w(i) 0 is straight (respectively, reversed). A proof
similar to that of lemma 5.9 implies:
Lemma 5.11. For each 1 only if X(j) 0 is straight.
The complete constructed instance is P= (S[ X;L), where the interval lengths L are implicit in
each of the two types of subgraphs, and the only exception is the corrected length in the Y (k)-s. Due
to this exception simple super-imposition of Sand X does not give a realization.
Lemma 5.12. If S 0
is a realization of P for which S 0
is straight, then X 0
is straight.
Proof. Recall that c 1 (1) and fl 3 (n) are the leftmost and the rightmost zero length intervals in the
leftmost edge sub-instance, and the rightmost vertex sub-instance, respectively. Suppose, to the contrary,
that X 0
is not straight. The zero-length intersecting intervals c 1 (1) 0 and fl 3 (n) 0 must be identical. W.l.o.g.
these two intervals intersect,
in contrary to our constructed interval graph.
Fig. 11. The edge sub-instance: a moving frame can be positioned inside the fixed frame. The Switches D_1 and D_2
are positioned inside the moving frame. Each of D_1 and D_2 is connected to its vertex sub-instance via Fetters.
Lemma 5.13.
then for every realization S 0
of P: Col(i) 6= Col(j).
Proof. Assume w.l.o.g that the realization is straight, and that l(c 1 Base(k). Again, we omit
the parameter k whenever possible. Surely
The first inequality follows since C 0
1 and A 0
must intersect, the second inequality follows since A 0
0 and a 0should intersect, and the last equality follows since w 0 is rigid (lemma 5.8). Since Length(W
we conclude that W straight.
According to lemma 5.8 the relative positions of all the intervals in the F rame (except B 0
are fixed relative to l(C 0
The realization for each of D 1 and D 2 can be either straight or reversed,
giving rise to four possible combinations of positions (Any of these combinations fixes the positions of
with respect to l(C 0
1 )). In particular:
Due to lemma 5.7, and the realization being straight:
Therefore:
Corollary 5.14.
If P is a "yes" instance, then G is 3-colorable.
Proof. If P is a measured interval graph, then it has a straight realization (since the realization can
reversed completely). Define the coloring as described in (2). By lemma 5.13, Col
is a proper 3-Coloring of G.
Let us now prove the converse:
Lemma 5.15.
If G is 3-colorable, then P admits a realization.
Proof. Let be a proper 3-coloring of G. We build a realization S 0
for the
instance P as follows:
1. For the vertex position its Switches
2 as follows (compare figure 9):
ffl If
ffl If
ffl If
The rest of the intervals in the vertex subgraph are positioned accordingly (cf. lemma 5.10).
2. For the edge e the directions of the Switches D 1 (k) 0 and
in the realization are determined by y, thus fixing the distance between D 3
. The absolute position of these Switches is determined according to Col(i) and Col(j),
as follows (compare figure 12):
and S[ X have the same intersection graph, all interval lengths match the prescribed
lengths, and their endpoints meet the specification.
From lemma 5.15 and corollary 5.14 we can finally conclude:
Theorem 5.16. MIG is NP-complete.
In fact, the same reduction implies strong NP-completeness, as 3-Coloring is strongly NP-complete
and the reduction is also pseudo-polynomial.
5.3. Closing the Open Intervals. We have proved that recognizing a measured interval graph
with specified endpoints is NP-complete. We now show that this problem is hard even where all the
intervals are closed. Given an instance MIG, define a new instance P
MIG (in which all intervals are closed), as follows: Let
L(v) if v is closed
Let P be an instance generated by the reduction in section 5.2. We shall prove that P has a realization
if and only if P 0
has one.
First, we observe that the construction introduced in the proof of theorem 5.16 has a special property:
Let S be a realization in which the shortest non-zero length of an interval is C. S is called discrete if all
the endpoints of its intervals are integer multiples of C. In that case, C is called the grid size of S.
Fig. 12. The relative position of D_1^3 and D_2^3 forces the colors of the vertices to be different.
Remark 5.17. By the proofs of lemma 5.15 and corollary 5.14, P has a realization if and only if it has a discrete
realization, whose grid size is fixed by the construction.
Lemma 5.18.
If P has a realization, then P 0
has one.
Proof. If P has a realization, then by remark 5.17, it has a discrete realization fI v g v2V (G) with grid
size 1
. Construct the set of closed intervals fI 0
defined as follows: If I v is closed, let I 0
I v is open, let I 0
We claim that this set is a realization of P'. Since the realization
is discrete, the intervals I_v and I_u intersect
if and only if I 0
v and I 0
u intersect, since if one (or both) of I v ; I u is open, then their overlap is at least 1
.
Furthermore, clearly jI 0
Unfortunately, the converse of lemma 5.18 does not always hold for arbitrary MIG instances, as
demonstrated in figure 13. We shall prove that the converse does hold for instances generated by the
reduction in section 5.2.
Fig. 13. In the MIG instance P on the left, the numbers denote lengths, and the four intervals corresponding to
the marked vertices should be open. P has no realization, but P' has one, as shown on the right.
Define the following order-oriented analogs of MIG and its all-closed restriction, respectively:
Recognizing a Measured Interval Order with Specified Endpoints (MIO):
INSTANCE: A partial order ≺ on a set V, a non-negative length L(v) for every v ∈ V,
and a function φ : V → {open, closed}.
QUESTION: Is there an interval realization of (V, ≺) in which the length of I_v is
exactly L(v), and I_v is open if and only if φ(v) = open?
The restriction of MIO to instances in which all intervals are closed can be solved in
polynomial time [19, 24, 32], and that solution can be generalized to deal with open intervals and solve
MIO as well.
We need to generalize the notion of rigidness in the following manner:
Definition 5.19. For a real p - 0, two realizations fI j g and fI 0
j g of the same interval graph are
p-isometric if there exists a function
that
j for all j. We call U ' V (G) p-rigid in a MIO instance if in any two realizations
of the instance, the sets of intervals realizing U are p-isometric. Note that in this case all endpoints of
U are located at fixed distances from the leftmost endpoint, up to \Sigmap. Hence, every realization has the
same length, up to \Sigmap.
For an instance Q of MIO, define the following system of inequalities S(Q), on the variables {l_v}_{v∈V}:
• If x ≺ y, and both x, y are closed: l_x + L(x) < l_y.
• If x ≺ y, and at least one of x, y is open: l_x + L(x) ≤ l_y.
• If x and y are incomparable, and both x, y are closed: l_x ≤ l_y + L(y) and l_y ≤ l_x + L(x).
• If x and y are incomparable, and at least one of x, y is open: l_x < l_y + L(y) and l_y < l_x + L(x).
Q has a realization if and only if S(Q) has a feasible solution, since the left endpoints of the
realization satisfy S(Q), and vice versa. Recall that D(S(Q)) is the distance graph of S(Q), as in the
proof of lemma 3.2, and denote it D(Q), for short.
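The case analysis above can be transcribed directly; in the sketch below an MIO instance is given by a set of ordered pairs (x, y) with x strictly before y, and constraints are again tuples (p, q, c, strict) meaning p − q < c or p − q ≤ c, so the feasibility test sketched in section 3 can be reused. The representation is our own.

    def build_S_of_Q(V, precedes, L, is_closed):
        """Build the difference-constraint system S(Q) for a MIO instance Q.

        precedes is a set of ordered pairs (x, y) with x strictly before y;
        pairs related in neither direction are incomparable. Variables are the
        left endpoints l_x; a constraint (p, q, c, strict) means p - q < c or <= c."""
        S = []
        for x in V:
            for y in V:
                if x == y:
                    continue
                both_closed = is_closed[x] and is_closed[y]
                if (x, y) in precedes:
                    # x entirely before y: l_x + L(x) (<|<=) l_y, i.e. l_x - l_y (<|<=) -L(x).
                    S.append((x, y, -L[x], both_closed))
                elif (y, x) not in precedes and x < y:      # incomparable: intervals must meet
                    # l_x - l_y (<=|<) L(y) and l_y - l_x (<=|<) L(x).
                    strict = not both_closed
                    S.append((x, y, L[y], strict))
                    S.append((y, x, L[x], strict))
        return S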
Lemma 5.20. Let Q be an instance of MIO. If U is a strongly connected component in the union
of all zero-weight cycles in D(Q), then U is rigid in Q.
Proof. For vertices there is a zero-weight cycle c in D(Q) passing through both x and y.
Let d (resp. \Gammad) be the weight of the path from x to y (resp. y to x) along c. Summing the inequalities
in S(Q) along the two paths we get l y - l x respectively, implying l y \Gamma l
every realization must satisfy S(Q), for every realization l y \Gamma l so U is rigid.
The converse holds as well:
Lemma 5.21. Let Q be a realizable instance of MIO, with U rigid in Q. Then for each x; y 2 U
there is a zero-weight cycle in D(Q) containing both x and y.
Proof. Suppose to the contrary x; y 2 U and there is no zero-weight cycle in D(Q) containing both
x and y. Either there is a cycle in D(Q) through x and y, or there is no such cycle.
If there is no cycle in D(Q) through x and y, then w.l.o.g. there is no path in D(Q) from x to y.
fxg be the set of all vertices in V to which there is a path from y in D(Q) (including y
itself). Then Q does not contain any inequalities v
Let fI v g v2V be a realization for Q. Then fI realizes Q, with a different
distance between the intervals corresponding to x and to y, contradicting the rigidness of U in Q.
If there exists a cycle in D(Q) through x and y, then let c be such a cycle of minimum weight,
and let l = w(c). By assumption l 6= 0, and since Q has a realization, by corollary 3.3 l ? 0. Let d
be the weight of the path from x to y along c. For every \Delta, d d, consider a new directed
graph D 0 (\Delta) obtained from D(Q) by adding two arcs xy and yx, both labeled -, with weights \Delta and
\Gamma\Delta, respectively. Observe that adding the two arcs does not introduce any cycles of negative-weight
or zero-weight cycles with strict arcs, into the graph. Hence, the augmented system corresponding to
0 (\Delta) has a realization. Moreover, in every realization of D 0 (\Delta), the distance between the left endpoints
of x and y is \Delta. By choosing different values of \Delta, we contradict the rigidness of U .
We can now generalize lemma 5.20:
Lemma 5.22. Let Q be a realizable instance of MIO. For a non-negative p, let C be a union of
cycles in D(Q), each of weight less than p. If U is a strongly connected component in C then U is
jU jp-rigid in Q.
Proof. For vertices by the definition of U , there is a simple path
in C. Every edge x i x i+1 is in C, therefore there exists a path P i in C from x i+1 to x i s.t. x i x i+1 and
a cycle in C. The concatenation of P path P from y to x in C. Moreover,
the concatenation of P 0 and P is a cycle c in C (not necessarily simple) of weight at most (k \Gamma 1)p. Let
we have established that w(c) = w(P
d. Summing the inequalities in S(Q)
along the two paths P 0 and P we get l y - l x realization
of Q satisfies S(Q), so \Gammad - l x any two realizations are q-isometric.
Lemma 5.23. Let OE) be a MIO instance, and let Q be the corresponding
MIO instance (obtained by the transformation in (18)). Suppose U ' V (OE) is rigid in Q, and let
then U is 2n 2 ffl-rigid in Q 0 .
Proof. The weight of each arc in D(Q 0 ) changes by no more than 2ffl, compared to D(Q). Hence,
the weight of every simple cycle changes by at most 2nffl. U is rigid in Q, hence, by lemma 5.21 it is
contained in a strongly connected component W of a union of zero-weight cycles in D(Q). W is also
a union of simple zero-weight cycles in D(Q). The weight of each such cycle in D(Q 0 ) is at most 2nffl.
Hence, by lemma 5.22, W is 2jW jnffl-rigid in Q 0 , and so is U .
We now return to the instance P generated by the reduction in the proof of theorem 5.16. Recall
that P 0
is the instance obtained from P by the transformation (18). Suppose P 0
has a realization. Let
OE be the corresponding interval order, and let
Consider each of our
gadgets: By lemma 5.4, every Switch is rigid in Q . A slight modification of lemma 5.7 shows that every
F etters must be rigid in Q (since the directions of the Switches are set). By lemma 5.8 every F rame
is rigid in Q, with the exception of its end fi-intervals. Hence, each of these gadgets is 2n 2 ffl-rigid in
, by lemma 5.23. This imposes, up to small additive shifts, the relative positions of the intervals in
each vertex (or edge) sub-instance. Define the function Col as in (2). We shall show that the choice of
ffl makes these shifts sufficiently small so that the properties of the coloring are preserved.
Lemma 5.24. For each i there exists an integer C_i such that |Col(i) − C_i| ≤ 4n²ε.
Proof. The proof is analogous to Lemma 5.10: the relations (3)–(7) hold up to ±2n²ε. Hence, (8) holds up to ±4n²ε.
Lemma 5.25. For every edge (i, j), |Col(i) − Col(j)| ≥ 1 − 8n²ε.
Proof. The proof is analogous to Lemma 5.13: the underlying relations hold up to ±2n²ε each, and the resulting bounds hold up to ±8n²ε, as they involve up to four differences of endpoint distances.
Let round(x) be the integer closest to x. Recall that ε < 1/(20n²), so 4n²ε < 0.2. By Lemma 5.25, for every edge (i, j): |Col(i) − Col(j)| ≥ 1 − 8n²ε ≥ 0.6. Hence, round(Col(i)) ≠ round(Col(j)). This proves that if there exists a realization of P′, then by rounding the colors to the nearest integer we obtain a proper 3-coloring. By Lemma 5.15 this implies the existence of a realization of P. Thus, P has a realization if and only if P′ has one. Since the transformation described in (18) is polynomial, we conclude:
Theorem 5.26. MIG is NP-complete.
5.4. Related Problems. In Section 1 we introduced the recognition problem of interval graphs with individual lower and upper bounds on interval lengths (the BIG problem). Since MIG is a restriction of both BIG and DCIG:
Corollary 5.27. BIG and DCIG are NP-complete.
When restricted to interval graphs with depth 0 decomposition trees (see [23] for a definition of the
decomposition tree), i.e., to prime interval graphs, the MIG problem can be solved in polynomial time,
using the algorithm devised in section 3 for UCO graphs. This depth bound is indeed tight, namely,
when allowing deeper decomposition trees the problem is NP-complete:
Proposition 5.28.
MIG is NP-complete even when restricted to interval graphs with decomposition tree of depth 1.
Proof. We shall see that, besides the Switches δ_i(j) and D_i(k), and the K_2 modules {c_3(k), c_1(k+1)}, there are no non-trivial modules in the interval graph constructed by the reduction in the proof of Theorem 5.26. Let H be the graph obtained by contraction of the above modules. Suppose to the contrary that H contains a non-trivial module M, and suppose v, u ∈ M. If v, u are in the same vertex subgraph (or in the same edge subgraph) H_U, then M ∩ U is a non-trivial module in H_U, contradicting the primality of the vertex subgraph (and the edge subgraph). Hence, v and u are in different vertex/edge subgraphs. In this case, there are intervals in these subgraphs which intersect only one of u, v, in contradiction to M being a module.
6. Acknowledgments. We thank Phil Green for illuminating conversations on the physical mapping problems which motivated this work. We also thank Garth Isaak for helpful discussions. We further thank the referees for their useful remarks.
--R
Network Flows: Theory
Maintaining knowledge about temporal intervals.
Reasoning about plans.
A linear time and space algorithm to recognize interval orders.
On a routing problem.
On the topology of the genetic fine structure.
Testing for the consecutive ones property
Establishing the order of human chromosome-specific DNA fragments
On the detection of structures in attitudes and developmental processes.
Linear time representation algorithms for proper circular arc graphs and proper interval graphs.
Interval Orders and Interval Graphs.
Classes of interval graphs under expanding length restrictions.
Faster scaling algorithms for network problems.
Khachiyan's algorithm for linear programming.
Computers and Intractability: A Guide to the Theory of NP-Completeness
A characterization of comparability graphs and of interval graphs.
Algorithmic Graph Theory and Perfect Graphs.
Graph sandwich problems.
Complexity and algorithms for reasoning about time: A graph-theoretic approach
Chromosomal region of the Cystic Fibrosis gene in yeast artificial chromosomes: a model for human genome mapping.
Substitution decomposition on chordal graphs and applications.
Discrete interval graphs with bounded representation.
Mapping the genome: some combinatorial problems arising in molecular biology.
Incidence matrices
Transitive orientation of graphs with side constraints.
An incremental linear time algorithm for recognizing interval graphs.
Temporally Distributed Symptoms in Technical Diagnosis.
New scaling algorithms for the assignment and minimum cycle mean problems.
Scheduling interval ordered tasks.
Satisfiability problems on intervals and unit intervals.
Discrete Mathematical Models
Partially ordered sets and their comparability graphs.
Hyperplane arrangements
Computation Structures.
Proof of an interval satisfiability conjecture.
--TR
--CTR
Peter Damaschke, Point placement on the line by distance data, Discrete Applied Mathematics, v.127 n.1, p.53-62, April | size constraints;distance constraints;interval graphs;NP-completeness;graph algorithms;computational biology |
587957 | Stack and Queue Layouts of Posets. | The stacknumber (queuenumber) of a poset is defined as the stacknumber (queuenumber) of its Hasse diagram viewed as a directed acyclic graph. Upper bounds on the queuenumber of a poset are derived in terms of its jumpnumber, its length, its width, and the queuenumber of its covering graph. A lower bound of $\Omega(\sqrt n)$ is shown for the queuenumber of the class of n-element planar posets. The queuenumber of a planar poset is shown to be within a small constant factor of its width. The stacknumber of n-element posets with planar covering graphs is shown to be $\Theta(n)$. These results exhibit sharp differences between the stacknumber and queuenumber of posets as well as between the stacknumber (queuenumber) of a poset and the stacknumber (queuenumber) of its covering graph. | Introduction
. Stack and queue layouts of undirected graphs appear in a variety
of contexts such as VLSI, fault-tolerant processing, parallel processing, and sorting
networks (Pemmaraju [13]). In a new context, Heath, Pemmaraju, and Ribbens [8, 13]
use queue layouts as the basis of an efficient scheme to perform matrix computations
on a data driven network. Bernhart and Kainen [1] introduce the concept of a stack
layout, which they call book embedding. Chung, Leighton, and Rosenberg [3] study
stack layouts of undirected graphs and provide optimal stack layouts for a variety of
classes of graphs. Heath and Rosenberg [10] develop the notion of queue layouts and
provide optimal queue layouts for many classes of undirected graphs. Heath, Leighton,
and Rosenberg [7] study relationships between queue and stack layouts of undirected
graphs. In some applications of stack and queue layouts, it is more realistic to model
the application domain with directed acyclic graphs (dags) or with posets, rather than
with undirected graphs. Various questions that have been asked about stack and queue
layouts of undirected graphs acquire a new flavor when there are directed edges (arcs).
This is because the direction of the arcs imposes restrictions on the node orders that
can be considered. Heath, Pemmaraju, and Trenk [9, 13] initiate the study of stack
and queue layouts of dags and provide optimal stack and queue layouts for several
classes of dags.
In this paper, we focus on stack and queue layouts of posets. Posets are ubiquitous
mathematical objects and various measures of their structure have been defined.
Some of these measures are bumpnumber, jumpnumber, length, width, dimension,
and thickness [2, 6]. Nowakowski and Parker [12] define the stacknumber of a poset
as the stacknumber of its Hasse diagram viewed as a dag. They derive a general lower
bound on the stacknumber of a planar poset and an upper bound on the stacknumber
of a lattice. Nowakowski and Parker conclude by asking whether the stacknumber of
the class of planar posets is unbounded. Hung [11] shows that there exists a planar
poset with stacknumber 4; moreover, no planar poset with stacknumber 5 is known.
Sysło [15] provides a lower bound on the stacknumber of a poset in terms of its bump-
number. He also shows that, while posets with jumpnumber 1 have stacknumber at
most 2, posets with jumpnumber 2 can have an arbitrarily large stacknumber.
The organization of this paper is as follows. Section 2 contains definitions. In
Section 3, we derive upper bounds on the queuenumber of a poset in terms of its
jumpnumber, its length, its width, and the queuenumber of its covering graph. In
Section 4, we show that the queuenumber of the class of planar posets is unbounded.
In a complementary upper bound result, we show that the queuenumber of a planar
poset is within a small constant factor of its width. In Section 5, we show that the
stacknumber of the class of n-element posets with planar covering graphs is \Theta(n).
In Section 6, the decision problem of recognizing a 4-queue poset is defined; Heath,
Pemmaraju, and Trenk [9, 13] show that the problem is NP-complete. In Section 7,
we present several open questions and conjectures concerning stack and queue layouts
of posets.
2. Definitions. This section contains the definitions of stack and queue layouts
of undirected graphs, dags, and posets. Other measures of the structure of posets are
also defined.
Let G = (V, E) be an undirected graph without multiple edges or loops. A k-stack layout of G consists of a total order σ on V along with an assignment of each edge in E to one of k stacks, s_1, ..., s_k. Each stack s_j operates as follows. The vertices of V are scanned in left-to-right (ascending) order according to σ. When a vertex v is encountered, any edges assigned to s_j that have v as their right endpoint must be at the top of the stack and are popped. Any edges that are assigned to s_j and have left endpoint v are pushed onto s_j in descending order (according to σ) of their right endpoints. The stacknumber SN(G) of G is the smallest k such that G has a k-stack layout. G is said to be a k-stack graph if SN(G) ≤ k. The stacknumber of a class of graphs C, denoted by SN_C(n), is the function of the natural numbers that equals the least upper bound of the stacknumber of all graphs in C with at most n vertices. We are interested in the asymptotic behavior of SN_C(n) or in whether SN_C(n) is bounded above by a constant.
A k-queue layout of G consists of a total order σ on V along with an assignment of each edge in E to one of k queues, q_1, ..., q_k. Each queue q_j operates as follows. The vertices of V are scanned in left-to-right (ascending) order according to σ. When a vertex v is encountered, any edges assigned to q_j that have v as their right endpoint must be at the front of the queue and are dequeued. Any edges that are assigned to q_j and have left endpoint v are enqueued into q_j in ascending order (according to σ) of their right endpoints. The queuenumber QN(G) of G is the smallest k such that G has a k-queue layout. The queuenumber of a class of graphs C, denoted by QN_C(n), is the function of the natural numbers that equals the least upper bound of the queuenumber of all graphs in C with at most n vertices. We are interested in the asymptotic behavior of QN_C(n) or in whether QN_C(n) is bounded above by a constant.
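The queue discipline just described is easy to simulate directly. The following Python sketch (ours, not from the paper; all names are illustrative) checks whether a given vertex order and edge-to-queue assignment obeys the FIFO rule, scanning the vertices and dequeuing/enqueuing exactly as in the definition.

    from collections import deque

    def is_valid_queue_layout(order, edge_queue):
        # order: list of vertices; edge_queue: dict mapping an edge (u, v) to a queue label.
        pos = {v: i for i, v in enumerate(order)}
        starts = {}                        # left endpoint -> list of (right endpoint, queue)
        for (u, v), q in edge_queue.items():
            l, r = (u, v) if pos[u] < pos[v] else (v, u)
            starts.setdefault(l, []).append((r, q))
        queues = {q: deque() for q in set(edge_queue.values())}
        for v in order:
            for q in queues:               # edges ending at v must be at the front of their queue
                while queues[q] and queues[q][0] == v:
                    queues[q].popleft()
                if v in queues[q]:         # an edge ending at v is buried: two edges nest
                    return False
            for r, q in sorted(starts.get(v, []), key=lambda t: pos[t[0]]):
                queues[q].append(r)        # enqueue in ascending order of right endpoints
        return all(not q for q in queues.values())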
For a fixed order σ on V, we identify sets of edges that are obstacles to minimizing the number of stacks or queues. A k-rainbow is a set of k edges {(u_i, v_i) : 1 ≤ i ≤ k} such that u_1 <_σ u_2 <_σ ... <_σ u_k <_σ v_k <_σ ... <_σ v_2 <_σ v_1; i.e., a rainbow is a nested matching. Any two edges in a rainbow are said to nest. A k-twist is a set of k edges {(u_i, v_i) : 1 ≤ i ≤ k} such that u_1 <_σ u_2 <_σ ... <_σ u_k <_σ v_1 <_σ v_2 <_σ ... <_σ v_k; i.e., a twist is a fully crossing matching. Any two edges in a twist are said to cross.
A rainbow is an obstacle for a queue layout because no two edges that nest can
be assigned to the same queue, while a twist is an obstacle for a stack layout because
no two edges that cross can be assigned to the same stack. Intuitively, we can think
of a stack layout or a queue layout of a graph as a drawing of the graph in which the
vertices are laid out on a horizontal line and the edges appear as arcs above the line.
In a stack layout no two edges that intersect can be assigned to the same stack, while
in a queue layout no two edges that nest can be assigned to the same queue. Clearly,
the size of the largest twist (rainbow) in a layout is a lower bound on the number
of stacks (queues) required for that layout. Heath and Rosenberg [10] show that the
size of the largest rainbow in a layout equals the minimum queue requirement of the
layout.
Proposition 2.1. (Heath and Rosenberg, Theorem 2.3 [10]) Suppose G = (V, E) is an undirected graph, and σ is a fixed total order on V. If G has no rainbow of more than k edges with respect to σ, then G has a k-queue layout with respect to σ.
In contrast, the size of the largest twist in a layout may be strictly less than the
minimum stack requirement of the layout (see [10], Proposition 2.4).
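By Proposition 2.1, the number of queues needed for a fixed order σ is exactly the size of a largest rainbow under σ. A largest rainbow can be computed in O(|E| log |E|) time: after sorting the edges by left endpoint (ties broken by ascending right endpoint), a rainbow is a strictly decreasing subsequence of right endpoints. The sketch below is ours (the function name is illustrative) and uses the standard patience-sorting technique.

    import bisect

    def largest_rainbow(order, edges):
        pos = {v: i for i, v in enumerate(order)}
        spans = sorted((min(pos[u], pos[v]), max(pos[u], pos[v])) for u, v in edges)
        tails = []                             # minimal tails of decreasing runs of right endpoints
        for _, r in spans:
            i = bisect.bisect_left(tails, -r)  # longest strictly decreasing == LIS of -r
            if i == len(tails):
                tails.append(-r)
            else:
                tails[i] = -r
        return len(tails)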
The definitions of stack and queue layouts are now extended to dags by requiring
that the layout order be a topological order. Following a common distinction, we
use vertices and edges for undirected graphs, but nodes and arcs for directed graphs.
Suppose that G = (V, E) is an undirected graph and that ~G = (V, ~E) is a dag whose arc set ~E is obtained by directing the edges in E. A topological order of ~G is a total order σ on V such that (u, v) ∈ ~E implies u <_σ v. A k-stack (k-queue) layout of the dag ~G = (V, ~E) is a k-stack (k-queue) layout of the graph G such that the total order is a topological order of ~G. As before, SN(~G) is the smallest k such that ~G has a k-stack layout and QN(~G) is the smallest k such that ~G has a k-queue layout.
A partial order is a reflexive, transitive, anti-symmetric binary relation. A poset P = (V, ≤_P) is a set V with a partial order ≤_P (see Birkhoff [2] or Stanton and White [14]). The cardinality |P| of a poset P equals |V|. We only consider posets with finite cardinality in this paper. We write u <_P v when u ≤_P v and u ≠ v. The Hasse diagram ~H(P) = (V, ~E) of a poset is a dag with arc set ~E = {(u, v) : u <_P v and there is no w such that u <_P w <_P v} (see Stanton and White [14]). A Hasse diagram is a minimal representation of a poset because it contains none of the arcs implied by transitivity of ≤_P. The stacknumber SN(P) of a poset P is SN(~H(P)), the stacknumber of its Hasse diagram. Similarly, the queuenumber QN(P) of a poset P is QN(~H(P)), the queuenumber of its Hasse diagram. Fig. 1 gives an example of a 2-stack poset, while Fig. 2 gives an example of a 2-queue poset. The underlying undirected graph, H(P), of ~H(P) is called the covering graph of P. Clearly, for any poset P, we have SN(H(P)) ≤ SN(P) and QN(H(P)) ≤ QN(P). The stacknumber and the queuenumber of the covering graphs of the posets in both Fig. 1 and Fig. 2 are 1. A poset P is planar if its Hasse diagram ~H(P) has a planar
Fig. 1. A 2-stack poset.
Fig. 2. A 2-queue poset.
embedding in which all arcs are drawn as straight line segments with the tail of each
arc strictly below its head with respect to a Cartesian coordinate system; call such
an embedding of any dag an upwards embedding. Without loss of generality, we may
always assume that no two nodes of ~
H(P ) are on the same horizontal line. (If two
nodes are on the same horizontal line, a slight vertical perturbation of either of them
yields another upwards embedding with the nodes on different horizontal lines). Given
an upwards embedding of a dag, the y coordinates of the nodes give a topological order
on the nodes from lowest to highest called the vertical order. Note that the covering
graph H(P ) may be planar even though the poset P is not. Fig. 3 shows an example
Fig. 3. A non-planar poset whose covering graph is planar.
of a non-planar poset whose covering graph is planar.
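The Hasse diagram and the covering graph defined above can be computed mechanically from the order relation. The Python sketch below is ours and is for illustration only; leq is any comparison oracle for ≤_P. It keeps an arc u → v exactly when u <_P v and no element lies strictly between them.

    def hasse_arcs(elements, leq):
        # Arcs of ~H(P): drop every arc implied by transitivity.
        def lt(a, b):
            return leq(a, b) and a != b
        arcs = set()
        for u in elements:
            for v in elements:
                if lt(u, v) and not any(lt(u, w) and lt(w, v) for w in elements):
                    arcs.add((u, v))
        return arcs

    def covering_graph(elements, leq):
        # Covering graph H(P): forget the arc directions.
        return {frozenset(a) for a in hasse_arcs(elements, leq)}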
Let γ be a fixed topological order on ~H(P). Elements u and v are adjacent in γ if there is no w such that u <_γ w <_γ v. A spine arc in ~H(P) with respect to γ is an arc (u, v) in ~H(P) such that u and v are adjacent in γ. A break in ~H(P) with respect to γ is a pair (u, v) of adjacent elements such that u <_γ v and (u, v) is not an arc in ~H(P). A connection C in ~H(P) with respect to γ is a maximal sequence of consecutive elements of γ joined by spine arcs; in other words, a connection is a maximal path of spine arcs without a break. Since ~H(P) contains no transitive arcs, there can be no non-spine arcs between nodes in a connection. The breaknumber BN(γ, P) of a topological order γ of ~H(P) is the number of breaks in ~H(P) with respect to γ. The jumpnumber of P, denoted by JN(P), is the minimum of BN(γ, P) over all topological orders γ on ~H(P).
A chain in a poset P is a set of elements {x_1, x_2, ..., x_k} such that x_1 <_P x_2 <_P ... <_P x_k. The length L(P) of a poset P is the maximum cardinality of any chain in P. An antichain in a poset P is a subset of elements of V that does not contain a chain of size 2. The width W(P) of a poset P is the maximum cardinality of any antichain in P.
3. Upper Bounds on Queuenumber. In this section we derive upper bounds
on the queuenumber of a poset in terms of its jumpnumber, its length, its width, and
the queuenumber of its covering graph.
3.1. Jumpnumber and Queuenumber. Sysło [15] proves the following relationship between the jumpnumber and the stacknumber of posets.
Proposition 3.1. (Sysło [15]) For any poset P with JN(P) = 1, SN(P) ≤ 2. If J_2 is the infinite class of posets having jumpnumber 2, then SN_{J_2}(n) grows unboundedly with n.
In contrast, we show that, for any poset P , the queuenumber of P is at most the
jumpnumber of P plus 1. Moreover, we show that this bound is tight within a small
constant factor.
Theorem 3.2. For any poset P, QN(P) ≤ JN(P) + 1. Moreover, for every n ≥ 2, there exists a poset P with |P| = n for which QN(P) is within a small constant factor of JN(P) + 1.
Proof. For the upper bound on queuenumber, suppose that P is any poset and that JN(P) = k. Let γ be a topological order on ~H(P) that has exactly k breaks and hence k + 1 connections. Lay out ~H(P) according to γ, and label these connections C_1, C_2, ..., C_{k+1} from left to right. Let (u_1, v_1) and (u_2, v_2) be any two nonspine arcs such that u_1 and u_2 are in C_i and v_1 and v_2 are in C_j, where 1 ≤ i < j ≤ k + 1. If (u_1, v_1) and (u_2, v_2) nest, then one of them (the arc that nests over the other arc) is a transitive arc. Since ~H(P) contains no transitive arcs, (u_1, v_1) and (u_2, v_2) do not nest. This suggests the following assignment of arcs to queues. Assign all non-spine arcs between pairs of connections C_i and C_j, where j − i = ℓ, to queue q_ℓ. Assign all the spine arcs to a queue q_0. Hence, we use k queues for non-spine arcs and one queue for spine arcs, for a total of k + 1 queues.
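The assignment used in this upper-bound argument can be written out directly. In the sketch below (ours; conn_index maps each node to the index of its connection under γ, and spine is the set of spine arcs), spine arcs go to queue 0 and a non-spine arc whose endpoints lie in connections i and j goes to queue |j − i|, for JN(P) + 1 queues in all.

    def queue_assignment_by_connections(arcs, spine, conn_index):
        # Queue 0 holds the spine arcs; queue l holds non-spine arcs spanning l connections.
        assignment = {}
        for (u, v) in arcs:
            if (u, v) in spine:
                assignment[(u, v)] = 0
            else:
                assignment[(u, v)] = abs(conn_index[v] - conn_index[u])
        return assignment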
For the lower bound on queuenumber, construct the Hasse diagram of a poset
P from the complete bipartite graph K E) by directing all the edges
from vertices in V 1 to vertices in V 2 . All topological orders on ~
layouts. Hence, JN(P
The lower bound follows. Proposition 3.1 and Theorem 3.2 lead to the following
corollary.
Corollary 3.3. There exists a class of posets P for which the ratio SN_P(n)/QN_P(n) is unbounded.
Looking ahead, Theorem 4.2 shows the existence of a class of posets P for which the reciprocal ratio QN_P(n)/SN_P(n) is unbounded.
3.2. Length and Queuenumber. To prove the next theorem, we need the
following lemma that gives a bound on the queuenumber of a layout of a graph whose
vertices have been rearranged in a limited fashion.
Lemma 3.4. (Pemmaraju [13]) Suppose that σ is an order on the vertices of an m-partite graph G = (V_1 ∪ V_2 ∪ ... ∪ V_m, E) that yields a k-queue layout of G. Let σ′ be an order on the vertices of G in which the vertices in each set V_i, 1 ≤ i ≤ m, appear consecutively and in the same order as in σ. Then σ′ yields a layout of G in 2(m − 1)k queues.
Theorem 3.5, the main result of this section, gives an upper bound on the queue-
number of a poset in terms of its length and the queuenumber of its covering graph.
Theorem 3.5. For any poset P, QN(P) ≤ 2(L(P) − 1) · QN(H(P)). There exists an infinite class of posets P, with L_P(n) = 2, for which this bound is tight to within an additive constant.
Proof. Suppose P is any poset, ~H(P) = (V, ~E), and QN(H(P)) = k. Let σ be a total order on V that yields a k-queue layout of H(P). The nodes of ~H(P) can be labeled by a function l : V → {1, 2, ..., L(P)}, with l(u) < l(v) for every arc (u, v), as follows. Let ~H_0 = ~H(P). Label all the nodes with indegree 0 in ~H_0 with the label 1. Delete all the labeled nodes in ~H_0 to obtain ~H_1. In general, label the nodes with indegree 0 in ~H_i with the label i + 1. Delete the labeled nodes in ~H_i to obtain ~H_{i+1}. By an inductive proof, it can be checked that the labeling so obtained satisfies the required conditions. Let V_i = {v ∈ V : l(v) = i}. For any arc (u, v) ∈ ~E, l(u) < l(v); hence ~H(P) is an L(P)-partite dag. Define the total order γ on the nodes of ~H(P) as follows:
1. The elements in each set V_i, 1 ≤ i ≤ L(P), occur contiguously and in the order prescribed by σ.
2. The elements in V_i occur before the elements in V_{i+1} for all i, 1 ≤ i < L(P).
Since every arc in ~H(P) is from a node in V_i to a node in V_j, 1 ≤ i < j ≤ L(P), γ is a topological order on ~H(P). By Lemma 3.4, γ yields a layout that requires no more than 2(L(P) − 1)k queues.
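The labeling l and the reordered layout γ from this proof are straightforward to compute. The sketch below is ours (names are illustrative): it peels off indegree-0 nodes round by round and then sorts the vertices first by label and then by their position in σ.

    def level_labels(nodes, arcs):
        indeg = {v: 0 for v in nodes}
        out = {v: [] for v in nodes}
        for u, v in arcs:
            indeg[v] += 1
            out[u].append(v)
        label = {}
        frontier = [v for v in nodes if indeg[v] == 0]
        level = 1
        while frontier:
            nxt = []
            for v in frontier:
                label[v] = level
                for w in out[v]:
                    indeg[w] -= 1
                    if indeg[w] == 0:
                        nxt.append(w)
            frontier, level = nxt, level + 1
        return label

    def reorder_by_levels(sigma, label):
        pos = {v: i for i, v in enumerate(sigma)}
        return sorted(sigma, key=lambda v: (label[v], pos[v]))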
We now prove the second part of the theorem. For each n ≥ 2, let p = ⌊n/2⌋ and q = ⌈n/2⌉. Let the complete bipartite graph K_{p,q} = (V_1, V_2, E) be such that |V_1| = p and |V_2| = q. We get the Hasse diagram of a poset P of size n by directing the edges in K_{p,q} from V_1 to V_2. Clearly, L(P) = 2. Heath and Rosenberg [10] and Pemmaraju [13] present different proofs of the following formula that gives the precise queuenumber of an arbitrary complete bipartite graph: QN(K_{p,q}) = min(⌈p/2⌉, ⌈q/2⌉). Therefore, QN(H(P)) = ⌈⌊n/2⌋/2⌉, while every topological order of ~H(P) places all of V_1 before all of V_2 and hence contains a rainbow of ⌊n/2⌋ arcs, so QN(P) = ⌊n/2⌋. Let P be the class of all posets constructed in the manner described above. The second part of the theorem follows. Note that Theorem 3.5 holds for dags as well as for posets, as its proof does not rely on the absence of transitive arcs. Theorem 3.5 leads to the following corollary.
Corollary 3.6. For any poset P, QN(P) = O(L(P) · QN(H(P))). In particular, if P is a class of posets such that there exists a constant K with L(P) ≤ K for all P ∈ P, then QN_P(n) = O(QN_{H(P)}(n)).
We conjecture, but have been unable to show, that the upper bound in Theorem
3.5 is tight, within constant factors, for larger values of L(P ) also.
3.3. Width and Queuenumber. In this section, we establish an upper bound
on the queuenumber of a poset in terms of its width. We need the following result of
Dilworth.
Lemma 3.7. (Dilworth [4]) Let P = (V, ≤_P) be a poset. Then V can be partitioned into W(P) chains.
For a poset P = (V, ≤_P), let {Z_1, Z_2, ..., Z_{W(P)}} be a partition of V into W(P) chains. Define an i-chain arc as an arc in ~H(P), both of whose end points belong to chain Z_i. An (i, j)-cross arc, i ≠ j, is an arc whose tail belongs to chain Z_i and whose head belongs to chain Z_j.
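Dilworth's theorem is constructive: a partition of V into W(P) chains can be obtained from a maximum matching in the bipartite reachability graph of the poset (a minimum path cover of its comparability dag). The Python sketch below is ours and only illustrates the chain partition used in this section; lt is any oracle for the strict order <_P, and plain recursive augmenting paths are used for simplicity.

    def chain_partition(elements, lt):
        n = len(elements)
        succ = [[j for j in range(n) if lt(elements[i], elements[j])] for i in range(n)]
        match_right = [None] * n          # match_right[j] = i means i is followed by j in its chain

        def augment(i, seen):
            for j in succ[i]:
                if j in seen:
                    continue
                seen.add(j)
                if match_right[j] is None or augment(match_right[j], seen):
                    match_right[j] = i
                    return True
            return False

        for i in range(n):                # Kuhn's algorithm; recursion depth is O(n)
            augment(i, set())

        next_of = {i: j for j, i in enumerate(match_right) if i is not None}
        starts = set(range(n)) - set(next_of.values())
        chains = []
        for s in starts:
            chain, cur = [], s
            while True:
                chain.append(elements[cur])
                if cur not in next_of:
                    break
                cur = next_of[cur]
            chains.append(chain)
        return chains                     # exactly W(P) chains, by Dilworth's theorem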
Theorem 3.8. The largest rainbow in any layout of a poset P is of size no greater than W(P)². Hence, the queuenumber of any layout of P is at most W(P)².
Proof. Fix an arbitrary topological order of ~H(P), and let {Z_1, ..., Z_{W(P)}} be a partition of V into W(P) chains. For any i, no two i-chain arcs nest, since ~H(P) contains no transitive arcs. Therefore, the largest rainbow of chain arcs has size no greater than W(P). If i ≠ j, then no two (i, j)-cross arcs can nest without one of them being a transitive arc. Therefore, the largest rainbow of cross arcs has size no greater than W(P)(W(P) − 1). The size of the largest rainbow is at most W(P) + W(P)(W(P) − 1) = W(P)². By Proposition 2.1, the theorem follows. The
bound established in the above theorem is not known to be tight. In fact, we believe
that the queuenumber of a poset is bounded above by its width (see Conjecture 1 in
Section 7).
4. The Queuenumber of Planar Posets. In this section, we first show that
the queuenumber of the class of planar posets is unbounded. We then establish an
upper bound on the queuenumber of a planar poset in terms of its width.
4.1. A Lower Bound on the Queuenumber of Planar Posets. We construct a sequence of planar posets P_n with |P_n| = 3n + 3 and QN(P_n) = Θ(√n). In fact, we determine the queuenumber of P_n almost exactly. To prove the theorem, we need the following result of Erdős and Szekeres.
Lemma 4.1. (Erdős and Szekeres [5]) Let x_1, x_2, ..., x_m be a sequence of distinct elements from a set X. Let δ be a total order on X. Then x_1, x_2, ..., x_m either contains a monotonically increasing subsequence of size ⌈√m⌉ or a monotonically decreasing subsequence of size ⌈√m⌉ with respect to δ.
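Lemma 4.1 is easy to check numerically. The sketch below is ours: it computes the longest strictly increasing and strictly decreasing subsequences of a sequence of distinct values and verifies that one of them has size at least ⌈√m⌉.

    from math import ceil, sqrt
    from bisect import bisect_left

    def longest_monotone(seq):
        inc, dec = [], []                 # minimal tails of increasing / decreasing runs
        for x in seq:
            i = bisect_left(inc, x)
            inc[i:i + 1] = [x]
            j = bisect_left(dec, -x)
            dec[j:j + 1] = [-x]
        return len(inc), len(dec)

    seq = [4, 7, 2, 9, 1, 5, 8, 3, 6]     # a permutation of 1..9, so m = 9
    up, down = longest_monotone(seq)
    assert max(up, down) >= ceil(sqrt(len(seq)))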
The proof of Theorem 4.2 constructs the desired sequence of posets.
Theorem 4.2. For each n ≥ 1, there exists a planar poset P_n with 3n + 3 elements such that ⌈√(n+1)⌉ ≤ QN(P_n) ≤ ⌈√(n+1)⌉ + 1.
Proof. Suppose n ≥ 1. Define three disjoint sets U, V, and W as follows: U = {u_i : 0 ≤ i ≤ n}, V = {v_i : 0 ≤ i ≤ n}, and W = {w_i : 0 ≤ i ≤ n}; let S = U ∪ V ∪ W. The planar poset P_n on S is given by the cover relations shown, for n = 4, in Fig. 4. Let σ be an arbitrary
order on the elements of S. The elements of U appear in the order
in oe, and all elements of W appear between u n and
v n . Define a total order ffi on the elements of W by w
Fig. 4. The planar poset P_4.
is an increasing sequence of nodes in W with respect to ffi . Since w
appears after u
in any topological order of ~
the following sequence of nodes is a subsequence of
oe:
Therefore, the set f(u i j
kg is a k-rainbow in oe. Similarly, if
is a decreasing sequence of nodes in W with respect to ffi, then the set f(w
k} is a k-rainbow in σ. By Lemma 4.1, in σ there is an increasing subsequence of size ⌈√(n+1)⌉ or a decreasing subsequence of size ⌈√(n+1)⌉ with respect to δ. Thus there is a rainbow of size ⌈√(n+1)⌉ in any topological order on ~H(P_n), and hence QN(P_n) ≥ ⌈√(n+1)⌉. This is the desired lower bound.
To prove the upper bound, we give a layout of P n in d
queues. Let
e, and let
e. Partition W \Gamma fw 0 g into s nearly equal-sized
subsets
as follows:
ng
Construct an order oe on the elements of S by first placing the elements in U [
in the order
Now place the elements of W \Gamma fw 0 g between u 0 and v 0 such that the elements
belonging to each set W i appear contiguously and the sets themselves appear in the
order
Within each set W i place the elements in increasing order with respect to
Fig. 5 schematically represents the constructed order. The arcs from U to W form
Fig. 5. Schematic layout of planar poset Pn .
s mutually intersecting rainbows each of size at most t. Therefore t queues suffice for
these arcs. The arcs from W to V form s nested twists each of size at most t. Therefore
s queues suffice for these arcs. Since no two arcs, one from U to W and the other
from W to V nest, they can all be assigned to the same set of s queues. An additional
queue is required for the remaining arcs. This is a layout of P n in d
queues.
Therefore,
We believe that the upper bound
in the above proof can be tightened to exactly match the lower bound. In fact, we
have been able to show that for
l p
The situation for stacknumber of planar posets is somewhat different in that there
is no known example of a sequence of planar posets with unbounded stacknumber. Two
observations about the sequence P n in Theorem 4.2 are in order. The first observation
is that SN(P_n) ≤ 2. A 2-stack layout of ~H(P_4) is shown in Fig. 6. The second observation is that the stacknumber and the queuenumber of H(P_n) are 2. A 2-queue
layout of H(P 4 ) is shown in Fig. 7. Theorem 4.2 and the above observations imply
the following corollaries.
Fig. 6. A 2-stack layout of the planar poset P_4.
Fig. 7. A 2-queue layout of the covering graph of P_4.
Corollary 4.3. There exists a class P of planar posets such that QN_P(n)/SN_P(n) = Ω(√n).
Corollary 4.4. There exists a class P of planar posets such that QN_P(n)/QN_{H(P)}(n) = Ω(√n).
While Theorem 4.2 establishes a lower bound of Ω(√n) on the queuenumber of the class of n-element planar posets, a matching upper bound is not known (see Conjecture 2 in Section 7).
4.2. An Upper Bound on the Queuenumber of Planar Posets. In this
section, we show that the queuenumber of a planar poset is bounded above by a small
constant multiple of its width. The bound is a consequence of the following theorem,
the proof of which occupies the remainder of the section.
Theorem 4.5. For any planar poset P where ~H(P) contains at least one arc and for any upward embedding of ~H(P), the layout of ~H(P) determined by the vertical order σ has queuenumber less than 4W(P).
Before the proof of Theorem 4.5, we present some definitions, some observations,
and a series of three lemmas. First, we fix notation and terminology to use through-out
the section. Suppose that P = (V, ≤_P) is a poset with a given upwards embedding of ~H(P), and let σ be the vertical order on V. Now suppose that the size of a largest rainbow in the vertical order of ~H(P) is k ≥ 1. By Proposition 2.1, the queuenumber of this layout is k. Focus on a particular k-rainbow whose arcs are (a_1, b_1), ..., (a_k, b_k); call these arcs the rainbow arcs; in particular, the arc (a_i, b_i) is the rainbow arc of a_i and of b_i. The nodes in the set A = {a_1, ..., a_k} are bottom nodes, and the nodes in the set B = {b_1, ..., b_k} are top nodes. Let y(v) denote the y-coordinate of a node v in the upwards embedding. Suppose (a_i, b_i) and (a_j, b_j) are distinct rainbow arcs. Since these arcs nest in the vertical order σ, we know that min{y(b_i), y(b_j)} > max{y(a_i), y(a_j)}. More generally, min_{1≤i≤k} y(b_i) > max_{1≤i≤k} y(a_i). The horizontal line defined by the equation y = y_0, for any y_0 with max_i y(a_i) < y_0 < min_i y(b_i), intersects every rainbow arc. In moving along this line from left to right, we encounter these intersections in a definite
Fig. 8. An example of rainbow arcs.
order. By re-indexing the rainbow arcs, we may assume that these intersections are encountered in the order (a_1, b_1), (a_2, b_2), ..., (a_k, b_k); call this the left-to-right order of the rainbow arcs. Fig. 8 illustrates an upwards embedding of a Hasse diagram with k = 6. The arcs are indexed in left-to-right order.
Define the left-to-right total order ≤_LR on A (respectively, B) by a_i ≤_LR a_j (respectively, b_i ≤_LR b_j) if i ≤ j. If a_i ≤_LR a_j, we say that a_i is to the left of a_j and that a_j is to the right of a_i. These notions of left and right do not always correspond to our normal understanding of these notions when looking at an upwards embedding. For example, in Figure 8, the x-coordinate of a_1 is greater than that of a_2, though a_1 <_LR a_2 and hence a_1 is to the left of a_2. We consistently use left and right with respect to the order ≤_LR.
A bottom chain is any chain of bottom nodes, and a top chain is any chain of top nodes. In Figure 8, the set {a_1, a_3, a_4} is a bottom chain, while the set {a_2, a_3, a_5} is not. If C is a chain of P and u, v ∈ V, then the closed interval from u to v is the subchain C[u, v] = {w ∈ C : u ≤_P w ≤_P v}, and the open interval from u to v is the subchain C(u, v) = {w ∈ C : u <_P w <_P v}. Subchains C(u, v] and C[u, v), the corresponding half-open intervals, are defined analogously. For any bottom chain C, the extent of C is ⟨C⟩ = max_{a_j ∈ C} j − min_{a_j ∈ C} j; that is, the extent is the distance from the leftmost node in C to the rightmost node in C, measured in rainbow arcs. The extent of a top chain is defined analogously.
Suppose C is any chain. We say that C covers the nodes it contains. If D is a path
in ~
H(P ) that contains every node of C, then D covers C. Note that there must be at
least one path in ~
H(P ) that covers C.
In what follows, we show that more than k/4 chains are required to cover the set A ∪ B. Since W(P) is the minimum number of chains required to cover all the nodes in the poset, it follows that k/4 < W(P) and therefore QN(P) < 4W(P). As the proof is long and tedious, we give here an informal overview. Start with a partition C_A of A into bottom chains and a partition C_B of B into top chains. Because each element of C_A ∪ C_B is a chain, there is a path in ~H(P) covering it. Thinking of each such path as a vertex, we construct a graph G that contains an edge connecting a pair of vertices if the corresponding paths in ~H(P) are connected by a rainbow arc. It is easy to see that G is planar if the paths in ~H(P) covering the chains in C_A ∪ C_B are pairwise non-intersecting. The construction of a collection of pairwise non-intersecting paths that cover the chains of C_A ∪ C_B is not always possible. This leads us to the weaker notion of a crossing of two chains and to the construction of G from chains rather than paths. Since the final step of the proof requires G to be planar, we first show (Lemmas 4.7 and 4.8) that all crossings between pairs of chains can be eliminated. Applying Euler's formula to the resulting planar G finally yields the bound in Theorem 4.5.
At this point, we restrict our argument to bottom nodes, as the corresponding
argument for top nodes is similar. If C is any bottom chain, the order in which its
elements appear with respect to - P is constrained by the rainbow arcs. In particular,
we make the following observation.
Observation 1. Suppose C is a bottom chain whose nodes occur in the following
order with respect to -
For any i with Similarly,
for any i with
Intuitively, if the chain starts going to the right after c i , then the remainder of
the chain must be to the right of the rainbow arc of c i . The rainbow arc of c i is a
barrier to the chain reaching a bottom node to the left of c i . For example, in Fig. 8,
the rainbow arc (a 5 ; b 5 ) is a barrier to any path originating at a 6 . Since a 5 ! P a 6 and
a 5 !LR a 6 , no bottom chain containing both a 5 and a 6 has a node a i ? P a 6 to the
left of a 5 .
By Lemma 3.7, there is a partition of A into at most W(P) chains. Let C_A be such a partition. Let C_1 ∈ C_A have the order c_1 <_P c_2 <_P ... <_P c_m, and let C_2 ∈ C_A have the order d_1 <_P d_2 <_P ... <_P d_n. These two bottom chains cross if there exist c_p, c_q ∈ C_1 and d_r, d_s ∈ C_2 such that c_p <_LR d_r <_LR c_q <_LR d_s or c_p >_LR d_r >_LR c_q >_LR d_s; in such a case, the 4-tuple (c_p, d_r, c_q, d_s) is a crossing of C_1 and C_2. Since c_p and c_q are related by ≤_P, there is a directed path D_1 in ~H(P) between c_p and c_q. Similarly, there is a directed path D_2 in ~H(P) between d_r and d_s.
Lemma 4.6. D 1 and D 2 have at least one node in common.
Proof. Without loss of generality, assume that c_p <_LR d_r <_LR c_q <_LR d_s. Consider the polygonal path consisting of the horizontal ray from c_p to −∞, followed by the line segments c_p d_r, d_r c_q, and c_q d_s, completed by the horizontal ray from d_s to +∞. Let R be the region of the plane consisting of this polygonal path and all points below it. (Fig. 9 illustrates the region R derived from Fig. 8 with one of its crossings.)
Fig. 9. The region R.
rays. Also, neither path can pass above the rainbow arc of c q or d r , because every top
node is higher than any bottom nodes in the upwards embedding of ~
either path crosses one of the three line segments of the polygonal path and proceeds
outside of R, then that path must return to the polygonal path at a higher point on
the same line segment. In essence, we can disregard any excursions outside of R and
assume, from a topological viewpoint, that both paths remain within R. The nodes
of D 1 and D 2 alternate along the polygonal path. Hence, these paths must intersect
topologically, and D 1 and D 2 must have at least one node in common.
A node that D 1 and D 2 have in common is an intersection of C 1 and C 2 . Note
that an intersection need not be a bottom node. In Fig. 8, the chains fa 1 ; a 3 ; a 4 g and
cross and have the intersection v, which is not a bottom node.
Observation 2. Since, with respect to ≤_P, an intersection v associated with the crossing (c_p, d_r, c_q, d_s) lies between c_p and c_q and between d_r and d_s, we have these relations: c_p ≤_P v ≤_P c_q and d_r ≤_P v ≤_P d_s.
The following observation is helpful in constructing pairs of non-crossing chains.
Observation 3. Suppose C do not cross. If no d
c with respect to -LR or if no d with respect to
-LR , then C 1 and C 2 do not cross.
We wish to be able to assume that CA does not contain a pair of crossing chains.
The first of two steps in justifying that assumption is to show that we can replace two
crossing chains with two non-crossing chains according to the following lemma. The
replacing pair is further constrained to satisfy the 5 properties in the lemma. The need
for Properties 1, 2, and 3 is clear. Property 4 states that, if the original pair crosses,
then the replacing pair is smaller, in a precise technical sense, than the original
hence the process of replacement of a crossing pair by a noncrossing pair cannot be
repeated forever. Property 5 allows us to identify the minima in the replacing
this property is a technical condition useful only within the inductive proof of the
lemma.
Lemma 4.7. Suppose C 1 and C 2 are disjoint bottom chains. Then there exists
a function NC that yields a pair of bottom chains (C 0
properties:
1. C 0
2. C 0
are disjoint ;
3. C 0
2 do not cross ;
4. The sum of extents does not increase:
if equality holds and if C 1 and C 2 cross, then the minimum extent decreases:
and
5. Chain minima are preserved:
Proof. In addition to our previous notation for C 1 and C 2 , we define
By Observation 1, either c
choose a path D 1 from c 1 to fi that covers the subchain C 1 choose a
path D 1 from c 1 to ff that covers the subchain C 1 Similarly, if d choose
a path D 2 from d 1 to ffi that covers the subchain C 2 [d choose a path
D 2 from d 1 to fl that covers the subchain C 2 [d 1 ; fl]. By Observation 1, both paths are
monotonic with respect to -LR .
We proceed to show the lemma by induction on the pair (m; n). Recall that m
is the cardinality of C 1 and n is the cardinality of C 2 . The base cases are all pairs
(m; n) with either In these cases, C 1 and C 2 do not cross, and setting
yields the desired pair of bottom chains.
For the inductive case, we assume that m - 2, that n - 2, and that the lemma
holds for (m
show that the lemma then holds for C 1 and C 2 . Without loss of generality, we assume
ff !LR fl. There are now three main cases depending on the relative order of ff, fi, fl,
and ffi with respect to !LR .
Case 1: ff !LR fi !LR fl !LR ffi . In this case, C 1 and C 2 do not cross and the
lemma trivially holds.
Case 2: ff !LR fl !LR fi !LR ffi . In this case, C 1 and C 2 necessarily cross. There
are four subcases.
Case 2.1: necessarily contain at least one
intersection. Let v be the intersection that occurs first in going from ff to fi on D 1 .
The subpath D 0
1 of D 1 from c 1 to v does not meet the subpath D 0
2 of D 2 from d 1 to
v until v. Hence, unless D 0
consists only of d 1 (that is, d one of D 0
1 and D 0is above the other in the upwards embedding. D 0
1 cannot be above D 0
because the
rainbow arc of d 1 is a barrier to D 0
going above d 1 . Hence, either D 0
consists only of
d 1 or D 0
2 is above D 0
1 . There are two subcases, depending on the relative order of c 2
and v according to P .
Case 2.1.1: 2 is on D 0
1 and the rainbow arc of c 2 must not be a
barrier for D 0
2 , we have c 2 !LR d 1 . Let (C 0
we set
0). For this case only, we provide a full proof that
the lemma holds for leaving the details for the remaining cases to the
reader. We employ the properties that hold for (C 0
by the inductive hypothesis.
By Property 5 of the inductive hypothesis, C 0
are bottom chains with c
2 . We must show that C 0
is a bottom chain. If d
have since any path in ~
between c 1 and d j must cross D 0
and v. In any case, for any d j 2 C 0
2 ) is a pair of bottom chains, as required.
We now establish that properties.
1. By Property 1 of the inductive hypothesis, C 0
2. By Property 2 of the inductive hypothesis, C 0and C 0are disjoint. Since
are disjoint.
3. By Property 3 of the inductive hypothesis, C 0
2 do not cross. Since
there is no node of C 2 between c 1 and c 2 . Also, by Observation 1
there is no node in C 1 that is between c 1 and c 2 . Therefore there is no node
in C 0
hence by Observation 3, C 0
2 do not
cross. Since there is no node of C 0
with respect to -LR ,
2 do not cross by Observation 3.
4. To be definite, let
of the induction hypothesis and the fact that c 1 !LR c 2 !LR fi, we have
and, if equality holds and if C
If hC 0
then we are done. So assume that hC 0
1 , a contradiction to C 0
not
crossing. Hence
Then we have
Hence Property 4 holds for (C 0
5. By Property 5 of the inductive hypothesis, c
and
This completes the full proof for the case c 2 ! P v.
Case 2.1.2: . For this case, let c
a y . We have y.
Consider the relative left-to-right positions of c 2 and d 2 .
First suppose that d 2 !LR c 2 . Since d 2 !LR c 2 !LR ffi , no node in C 2 (d 2 ; d n ] is
between d 1 and d 2 . Since the subpath of D 2 from d 2 to ffi must go below or through
must be above d 2 in the vertical order. Hence no node of C 1 is between d 1
and d 2 . Let (C 0
pair of noncrossing chains. Set
We need to show that
4. By Property 4 of the inductive hypothesis,
Calculate
If hC 0
holds. So assume hC 0
(that is C 0
2. Since d 2 2 C 0and C 0
2 do not cross,
1 . Hence hC 0
We have
Hence Property 4 holds.
Now suppose that c 2 !LR d 2 . There are finally three subcases to consider.
Case 2.1.2.1: d 2 !LR ffi and c 2 !LR fi. Let (C 0
]). There are no nodes of C 1 [ C 2 between d 1 and c 2 . So set
2 is a chain that does not
cross
1 . By Property 4 of the inductive hypothesis,
and, if equality holds, then either fc 1 do not cross, or
We proceed to show that Property 4 holds for C 0and fd 1
and hence
If this inequality is strict, then Property 4 holds. If equality holds, then one of two
possibilities holds. First suppose that fc 1 do not cross. In
that case, we have fi !LR d 2 !LR ffi and
Second suppose that
Then
For both possibilities, Property 4 holds. We conclude that
the desired pair of chains.
Case 2.1.2.2: d 2 !LR ffi and c
are chains, and they do not cross. Setting
give the desired pair of chains.
Since
and
Property 4 holds.
Case 2.1.2.3: d
is leftmost and d 2 rightmost in C 1 [ C 2 , the pair (C 0
2 ) is also noncrossing.
Let a
of the inductive hypothesis, we have
We proceed to show Property 4 for (C 0
If this inequality is strict, then we are done. Otherwise, hC 0
We have
Hence Property 4 holds for (C 0
Case 2.2: . In this case, C 1 and C 2 always cross. If we succeed
in replacing these with two non-crossing chains C 0
2 having the same nodes,
then maxLR C 0
2 . Hence, Property 4 follows easily for every (C 0
constructed for this case.
Again, let v be the first intersection of D 1 and D 2 . If v 2 A, then all of C 1 (v; c m ]
is to the right of v, and all of C 2 (v; d n ] is to the left of v. If v 62
setting gives the desired pair
of chains. If v 2 setting
gives the desired pair of chains. In either case, C 0
2 do not cross.
If v 62 A, then the argument is a bit more involved. Otherwise, if c 2 !LR fl, then
let (C 0
the
desired pair of chains. If fi !LR d 2 , then let (C 0
Setting
gives the desired pair of chains. Hence suppose fl !LR c 2
and d 2 !LR fi. Since the rainbow arcs of c 2 and d 2 are barriers, we have
. By Observation 1, there are four possibilities.
Case 2.2.1: C 1 is to the left of c 2 and C 2 (d 2 ; d n ] is to the left of d 2 . If
remains to the right of d 2 , then set
for all j - 2. Hence is a chain, and there are no nodes of
c 2 and d 1 . Let (C 0
gives the desired pair of chains.
Case 2.2.2: C 1 is to the left of c 2 and C 2 (d 2 ; d n ] is to the right of d 2 . Let
gives the desired pair of chains.
Case 2.2.3: C 1 is to the right of c 2 and C 2 (d 2 ; d n ] is to the left of d 2 . Here
do not cross. Setting
gives the desired pair of chains.
Case 2.2.4: C 1 is to the right of c 2 and C 2 (d 2 ; d n ] is to the right of d 2 .
This is the left-to-right mirror image of 2.2.1. The same argument applies, mutatis
mutandis.
Case 2.3: This case cannot occur because the rainbow arcs
of c 1 and d 1 are barriers to the paths D 1 and D 2 . It would require both D 1 to go
below d 1 and D 2 to go below c 1 , which is impossible.
Case 2.4: This case is the left-to-right mirror image of Case
2.1. The same argument applies, mutatis mutandis.
Case 3: ff !LR fl !LR ffi !LR fi. In this case, C 1 and C 2 may cross. There are
again four subcases.
Case 3.1: First suppose c 2 !LR d 1 . Let (C 0
desired pair of chains is
Suppose d 1 !LR c 2 !LR ffi . Then D 1 and D 2 necessarily have an intersection before
c 2 and before ffi . This is handled as in Case 2.1. Suppose ffi !LR c 2 and c 2 6= fi.
is to the right of c 2 , C 2 is between c 1 and c 2 , and C 1 and C 2 do not
cross. Finally, suppose ffi !LR c 2 and c
Since all of C 1 with respect to -LR , and since
2 do not cross. The desired pair of chains is
2 ). It is necessary to justify Property 4. Let
where min is taken with respect to -LR . There are three subcases.
Case 3.1.1: - !LR fl. Note that all of C is to the right of d
and fl are unrelated with respect to - P or if
2 , since
.
being to the left of fl, is unrelated to every node in C 2 [d 2 ; d n ]; again
1 , we have hC 0
Applying Property 4, we must have
cross. It follows that hC 0
if C 1 and C 2 cross.
Case 3.1.2: case is the same as Case
2.2. For all the possibilities in that case, we get that hC 0
as desired.
Case 3.1.3: . In this case, C 0
do no cross.
Hence, neither do C 0
and C 0Case 3.2: c . In this case, C 1 and C 2 do not cross, as the
rainbow arc of d 1 is a barrier to D 1 crossing D 2 .
Case 3.3: This case is the left-to-right mirror image of Case
3.2.
Case 3.4: This case is the left-to-right mirror image of Case
3.
The second and last step in justifying the assumption converts any C_A into a C'_A that has no pair of crossing chains.
Lemma 4.8. Suppose C_A is a set of disjoint bottom chains of minimum cardinality that covers A. Then there exists a set C'_A of disjoint bottom chains that covers A such that |C'_A| = |C_A| and no pair of chains in C'_A cross.
Proof. If C_A contains no pair of crossing chains, then C'_A = C_A is the set required for the lemma. Otherwise, let C_1, C_2 ∈ C_A be a pair of chains that cross. By Lemma 4.7, there exist chains C'_1 and C'_2 such that by substituting these chains for C_1 and C_2, we get the set C''_A = (C_A \ {C_1, C_2}) ∪ {C'_1, C'_2}, which is also a set of bottom chains of minimum cardinality that covers A. By Property 4, either
(i) the sum of the extents of chains in C''_A is strictly less than the sum of the extents of chains in C_A, or
(ii) the two sums are equal and the minimum extent of the substituted pair strictly decreases: min(⟨C'_1⟩, ⟨C'_2⟩) < min(⟨C_1⟩, ⟨C_2⟩).
Since every chain has extent at least 0, repeated substitution of a pair of crossing chains by a pair of non-crossing chains must eventually reduce the sum of the extents of the chains. Again, since every chain has extent at least 0, the sum of the extents of the chains cannot decrease infinitely, and hence we must eventually arrive at a set C'_A that contains no pair of crossing chains. This set C'_A is the set required for the lemma.
We are finally prepared to prove our main result.
Proof of Theorem 4.5. By Lemma 4.8, we may assume that CA contains no pair of
crossing chains. Now let CB be a partition of B into at most W (P ) chains. Similarly,
we may assume that CB contains no pair of crossing chains.
Consider an arbitrary bottom chain C and an arbitrary top chain C 0 . It is possible
that a rainbow arc connects a node in C to a node in C 0 . However, it is not possible
for more than one rainbow arc to connect C and C 0 , for then one of the rainbow arcs
(the "longest" one) would be a transitive arc in ~
H(P ). For example, in Figure 8, we
cannot have a bottom chain and a top chain C for then there
is a path from b 2 to b 1 and (a
We now construct a bipartite graph G = (C_A ∪ C_B, E_G) that contains an edge between C ∈ C_A and C' ∈ C_B if and only if there is a rainbow arc connecting C to C'. Since every rainbow arc connects exactly one bottom chain to exactly one top chain, there is exactly one edge in G for every rainbow arc; that is, |E_G| = k. Since there is no pair of crossing bottom chains and no pair of crossing top chains, G is planar. As an example, Figure 10 illustrates the bipartite planar graph G obtained from the poset of Figure 8.
Fig. 10. A bipartite planar graph G corresponding to the poset in Figure 8.
According to Euler's formula for planar graphs, we have
|C_A| + |C_B| − k + f = 1 + t,    (1)
where f is the number of faces in a planar embedding of G and t is its number of connected components. If G consists of a single edge, then k = 1 < 4W(P). Otherwise, since G is bipartite, every face is bounded by at least 4 edges, and we have the following inequality:
f ≤ k/2.    (2)
Combining Equations 1 and 2, we obtain
k ≤ 2(|C_A| + |C_B|) − 2 − 2t.    (3)
We know that t ≥ 1 and that both |C_A| and |C_B| are at most W(P). Substituting into Equation 3, we obtain k ≤ 4W(P) − 4 < 4W(P). Hence, the queuenumber of ~H(P) with respect to σ is less than 4W(P).
Corollary 4.9. For any planar poset P where ~H(P) contains at least one arc, QN(P) < 4W(P).
We believe that this result can be improved to show that, for any poset P, there exists a W(P)-queue layout of ~H(P). See Conjecture 1 in Section 7.
5. Stacknumber of Posets with Planar Covering Graphs. In this section,
we construct, for each n - 1, a 3n-element poset R n such that H(R n ) is planar and
hence has stacknumber at most 4 (see Yannakakis [19]), but such that the stacknumber
of the class {R_n : n ≥ 1} is not bounded.
Theorem 5.1. For each n ≥ 1, there exists a poset R_n such that |R_n| = 3n, H(R_n) is planar, and ⌈n/2⌉ ≤ SN(R_n) ≤ n.
Proof. Suppose n ≥ 1. Define three disjoint sets U, V, and W as follows: U = {u_i : 1 ≤ i ≤ n}, V = {v_i : 1 ≤ i ≤ n}, and W = {w_i : 1 ≤ i ≤ n}. The poset R_n on U ∪ V ∪ W is given by its cover relations; Fig. 11 shows H(R_4).
Aside. While the covering graph H(R n ) is clearly planar, the poset R n is not
planar. This can be seen as follows. In any upward embedding of ~
in the plane,
the nodes
have increasing y-coordinates. Thus, any point in the plane whose y-coordinate is
between the y-coordinates of u 1 and v 2 lies either on the left or on the right of the
path
Now add the nodes w 1 and w 2 to the embedding. Their y-coordinates are between
the y-coordinates of u 1 and v 2 because of u
Fig. 11. The covering graph of R_4.
If both w 1 and w 2 are embedded on the same side of D, then the paths
must cross somewhere. If w 1 and w 2 are embedded on different sides of D,
then the line segment (w line segment in D. End Aside.
To prove the lower bound on SN(R_n), let σ be any topological order on ~H(R_n). The order σ contains the elements of U ∪ V in the order forced by ~H(R_n), and the elements of W in the order w_1, w_2, ..., w_n. The elements of W are mingled among the elements of U ∪ V. Suppose k of the elements of W occur before u_n in σ, while the remaining n − k occur after u_n. Then the k elements of W before u_n give rise to a k-twist, while the n − k elements after u_n give rise to an (n − k)-twist. Hence, SN(R_n) ≥ max(k, n − k). Therefore, SN(R_n) ≥ ⌈n/2⌉, as desired.
The proof of the upper bound is constructive. An n-stack layout of R n is obtained
by laying out the elements of U [ V in the only possible order, and then placing each
immediately after u i for all n. The assignment of arcs to stacks is as
follows. Assign each arc in the set f(u
assign each arc in the set f(u stack s n . Note
that no two arcs assigned to the same stack intersect. The only arcs remaining to be
assigned are the arcs in the set
The arcs (v do not intersect any other arc and can be
assigned to any stack. Each arc assigned to stack s i+1 and
arc assigned to stack s 1 . An n-stack layout of R n is obtained. The upper
bound follows.
Two observations about the poset R n constructed in the above proof are in order.
The first observation is that QN(R_n) ≤ 2. A 2-queue layout of R_4 is shown in Fig. 12.
In general, the total order used in the n-stack layout of R n described in the above
Fig. 12. A 2-queue layout of R_4.
Fig. 13. A 2-stack layout of the covering graph of R_4.
proof yields a 2-queue layout of R n . The second observation is that the stacknumber
and the queuenumber of the covering graph H(R n ) is 2. A 2-stack layout of H(R 4 )
is shown in Fig. 13. In general, a 2-stack layout of H(R_n) can be obtained because H(R_n) is a hamiltonian planar graph [1].
Theorem 5.1 and the above observations lead to the following corollaries.
Corollary 5.2. There exists a class R of posets such that SN_R(n)/QN_R(n) = Ω(n).
Corollary 5.3. There exists a class R of posets such that SN_R(n)/SN_{H(R)}(n) = Ω(n).
6. NP-completeness Results. Heath and Rosenberg [10] show that the problem
of recognizing a 1-queue graph is NP-complete. Since a 1-stack graph is an outerplanar
graph, it can be recognized in linear time (Sysło and Iri [16]). But Wigderson
[17] shows that the problem of recognizing a 2-stack graph is NP-complete. Heath,
Pemmaraju, and Trenk [9, 13] show that the problem of recognizing a 4-queue poset
is NP-complete.
Formally, the decision problem for queue layouts of posets is POSETQN.
POSETQN
INSTANCE: A poset P .
QUESTION: Does P have a 4-queue layout?
Theorem 6.1. (Heath, Pemmaraju, and Trenk [9, 13]) The decision problem
POSETQN is NP-complete.
Since the Hasse diagram of a poset is a dag, this result holds for dags in gen-
eral. This result is in the spirit of the result of Yannakakis [18] that recognizing a
3-dimensional poset is NP-complete.
7. Conclusions and Open Questions. In this paper, we have initiated the
study of queue layouts of posets and have proved a lower bound result for stack
layouts of posets with planar covering graph. The upper bounds on the queuenumber
of a poset in terms of its jumpnumber, its length, its width, and the queuenumber of
its covering graph, proved in Section 3, may be useful in proving specific upper bounds
on the queuenumber of various classes of posets. We believe that the upper bound
of W (P ) 2 on the queuenumber of an arbitrary poset P , proved in Section 3, and the
upper bound of 3W (P on the queuenumber of any planar poset P , proved in
Section 4 are not tight. We conjecture that:
Conjecture 1. For any poset P, QN(P) ≤ W(P).
We have established a lower bound of Ω(√n) on the queuenumber of the class of
planar posets. We believe that this bound is tight and conjecture that:
Conjecture 2. For any n-element planar poset P, QN(P) = O(√n).
We conjecture that another upper bound on the queuenumber of a planar poset P
is given by its length L(P ). We believe that it is possible to embed a planar poset in an
"almost" leveled-planar fashion with L(P ) levels. (See Heath and Rosenberg [10] for
a definition of leveled-planar embeddings.) From such an embedding, a queue layout
of P in L(P ) queues should be obtainable. Therefore we conjecture that:
Conjecture 3. For any planar poset P, QN(P) ≤ L(P).
In Section 5, we show that the stacknumber of the class of n-element posets having
planar covering graphs is \Theta(n). However the stacknumber of the more restrictive class
of planar posets is still unresolved.
Acknowledgments. This research was partially supported by National Science
Foundation Grant CCR-9009953. We thank Praveen Paripati for his helpful com-
ments. We are also grateful for the helpful comments of the referees, especially the
elucidation of an error in the original statement of Theorem 4.5.
--R
The book thickness of a graph
Lattice Theory
Embedding graphs in books: a layout problem with applications to VLSI design
A decomposition theorem for partially ordered sets
Thickness of ordered sets
Comparing queues and stacks as mechanisms for laying out graphs
Sparse matrix-vector multiplication on a small linear array
Stack and queue layouts of directed acyclic graphs
Laying out graphs using queues
A planar poset which requires 4 pages.
Ordered sets
Exploring the Powers of Stacks and Queues via Graph Layouts
Springer Verlag
The complexity of the Hamiltonian circuit problem for maximal planar graphs
The complexity of the partial order dimension problem
--TR
--CTR
Josep Daz , Jordi Petit , Maria Serna, A survey of graph layout problems, ACM Computing Surveys (CSUR), v.34 n.3, p.313-356, September 2002 | jumpnumber;poset;hasse diagram;stack layout;book embedding;queue layout |
587958 | Task Scheduling in Networks. | Scheduling a set of tasks on a set of machines so as to yield an efficient schedule is a basic problem in computer science and operations research. Most of the research on this problem incorporates the potentially unrealistic assumption that communication between the different machines is instantaneous. In this paper we remove this assumption and study the problem of network scheduling, where each job originates at some node of a network, and in order to be processed at another node must take the time to travel through the network to that node.Our main contribution is to give approximation algorithms and hardness proofs for fully general forms of the fundamental problems in network scheduling. We consider two basic scheduling objectives: minimizing the makespan and minimizing the average completion time. For the makespan, we prove small constant factor hardness-to-approximate and approximation results. For the average completion time, we give a log-squared approximation algorithm for the most general form of the problem. The techniques used in this approximation are fairly general and have several other applications. For example, we give the first nontrivial approximation algorithm to minimize the average weighted completion time of a set of jobs on related or unrelated machines, with or without a network. | Introduction
Scheduling a set of tasks on a set of machines so as to yield an efficient schedule is a basic problem
in computer science and operations research. It is also a difficult problem and hence, much of the
research in this area has incorporated a number of potentially unrealistic assumptions. One such
assumption is that communication between the different machines is instantaneous. In many application
domains, however, such as a network of computers or a set of geographically-scattered repair
shops, decisions about when and where to move the tasks are a critical part of achieving efficient
resource allocation. In this paper we remove the assumption of instantaneous communication from
the traditional parallel machine models and study the problem of network scheduling, in which each
job originates at some node of a network, and in order to be processed at another node must take
the time to travel through the network to that node.
Until this work, network scheduling problems had either loose [2, 4] or no approximation algo-
rithms. Our main contribution is to give approximation algorithms and hardness proofs for fully
general forms of the fundamental problems in network scheduling. Our upper bounds are robust,
as they depend on general characteristics of the jobs and the underlying network. In particular,
our algorithmic techniques to optimize average completion time yield other results, such as the
first nontrivial approximation algorithms for a combinatorial scheduling question: minimization of
average weighted completion time on unrelated machines, and the first approximation algorithm for
a problem motivated by satellite communication systems. (To differentiate our network scheduling
models from the traditional parallel machine models, we will refer to the latter as combinatorial
scheduling models.)
Our results not only yield insight into the network scheduling problem, but also demonstrate
contrasts between the complexity of certain combinatorial scheduling problems and their network
variants, shedding light on their relative difficulty.
An instance of the network scheduling problem consists of a network G = (V, E), |V| = m, with non-negative edge lengths ℓ; we define ℓ_max to be the maximum edge length. At each vertex v_i in the network is a machine M_i. We are also given a set of n jobs, J_1, ..., J_n. Each job J_j originates, at time 0, on a particular origin machine M_{o_j}, and has a processing requirement p_j; we define p_max to be max_{1 ≤ j ≤ n} p_j. Each job must be processed on one machine without interruption. Job J_j is not available to be processed on a machine M_k until time d(M_{o_j}, M_k), where d(M_i, M_k) denotes the length of the shortest path in G between M_i and M_k. We assume that the M_i are either identical (J_j takes time p_j on every machine) or that they are unrelated (J_j takes time p_ij on M_i, and the p_ij may all be different). In the unrelated machines setting we define p_max = max_{i,j} p_ij. The
identical and unrelated machine models are fundamental in traditional parallel machine scheduling
and are relatively well understood [3, 10, 11, 12, 15, 17, 25]. Unless otherwise specified, in this
paper the machines in the network are assumed to be identical.
An alternative view of the network scheduling model is that each job J j has a release date,
a time before which it is unavailable for processing. In previous work on traditional scheduling
models a job's release date was defined to be the same on all machines. The network model can
be characterized by allowing a job J j 's release date to be different on different machines; J j 's
release date on M_k is d(M_{o_j}, M_k). One can generalize further and consider problems in which a
job's release date can be chosen arbitrarily for all m machines, and need not reflect any network
structure. Almost all of our upper bounds apply in this more general setting, whereas our lower
bounds all apply when the release dates have network structure.
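To make this view concrete, the following short Python sketch (our own illustration; the function and array names are hypothetical) derives the machine-dependent release dates of a network instance by computing all-pairs shortest paths with Floyd-Warshall:

from itertools import product

def release_dates(num_machines, edges, origins):
    """Compute r[i][j] = d(M_{origin of job j}, M_i) from the network structure."""
    INF = float("inf")
    d = [[0 if i == k else INF for k in range(num_machines)] for i in range(num_machines)]
    for u, v, length in edges:          # edges: (u, v, length), 0-based machine indices
        d[u][v] = min(d[u][v], length)
        d[v][u] = min(d[v][u], length)
    # Floyd-Warshall: the intermediate vertex is the outermost loop variable.
    for mid, i, k in product(range(num_machines), repeat=3):
        if d[i][mid] + d[mid][k] < d[i][k]:
            d[i][k] = d[i][mid] + d[mid][k]
    # r[i][j]: earliest time job j can start on machine i
    return [[d[origins[j]][i] for j in range(len(origins))] for i in range(num_machines)]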
We study algorithms to minimize the two most basic objective functions. One is the makespan
or maximum completion time of the schedule; that is, we would like all jobs to finish by the earliest
time possible. The second is the average completion time. We define an ff-approximation algorithm
to be a polynomial-time algorithm that gives a solution of cost no more than ff times optimal.
1.1 Previous Work
The problem of network scheduling has received some attention, mostly in the distributed setting.
Deng et. al. [4] considered a number of variants of the problem. In the special case in which each
edge in the network is of unit length, all job processing times are the same, and the machines are
identical, they showed that the off-line problem is in P . It is not hard to see that the problem is
NP-Complete when jobs are allowed to be of different sizes; they give an off-line O(log(m ℓ_max))-
approximation algorithm for this. They also give a number of results for the distributed version of
the problem when the network topology is completely connected, a ring or a tree.
Awerbuch, Kutten and Peleg [2] considered the distributed version of the problem under a novel
notion of on-line performance, which subsumes the minimization of both average and maximum
completion time. They give distributed algorithms with polylogarithmic performance guarantees
in general networks. They also characterize the performance of feedback-based approaches. In
addition they derived off-line approximation results similar to those of Deng et. al [2, 20]. Alon
et. al. [1] proved an Ω(√m) lower bound on the performance of any distributed scheduler that
is trying to minimize schedule length. Fizzano et. al. [5] give a distributed 4:3-approximation
algorithm for schedule length in the special case in which the network is a ring.
Our work differs from these papers by focusing on the centralized off-line problem and by giving
approximations of higher quality. In addition, our approximation algorithms work in a more general
setting, that of unrelated machines.
1.2 Summary of Results
We first focus on the objective of minimizing the makespan, and give a 2-approximation algorithm
for scheduling jobs on networks of unrelated machines; the algorithm gives the same performance
guarantee for identical machines as a special case. The 2-approximation algorithm matches the best
known approximation algorithm for scheduling unrelated machines with no underlying network [17].
Thus it is natural to ask whether the addition of a network to a combinatorial scheduling problem
actually makes the problem any harder. We resolve this question by proving that the introduction of
the network to the problem of scheduling identical machines yields a qualitatively harder problem.
We show that for the network scheduling problem, no polynomial-time algorithm can do better
than a factor of 4/3 times optimal unless P = NP, even in a network in which all edges have length
one. Comparing this with the polynomial approximation scheme of Hochbaum and Shmoys [10] for
parallel machine scheduling, we see that the addition of a network does indeed make the problem
harder.
Although the 2-approximation algorithm runs in polynomial time, it may be rather slow [21]. We
thus explore whether a simpler strategy might also yield good approximations. A natural approach
to minimizing the makespan is to construct schedules with no unforced idle time. Such strategies
provide schedules of length a small constant factor times optimal, at minimal computational cost,
for a variety of scheduling problems [6, 7, 15, 24]. We call such schedules busy schedules, and show
that for the network scheduling problem their quality degrades significantly; they can be as much
as an Ω(√(log m / log log m)) factor longer than the optimal schedule.
This is in striking contrast to the combinatorial model (for which Graham showed that a busy
strategy yields a 2-approximation algorithm [6]). In fact, even when release dates are introduced
into the identical machine scheduling problem, if each job's release date is the same on all machines,
busy strategies still give a
)-approximation guarantee [8, 9]. Our result shows that when the
Combinatorial Network
min. makespan, identical machines ff
min. makespan, identical machines,
log log m
log log m
Busy schedules
min. makespan, unrelated machines 3=2
min. avg. completion time
unrelated machines
n)
min. avg. wtd. completion time
unrelated machines, release dates ff - O(log 2
n)
Figure
1: Summary of main algorithms and hardness results. The notation x ! ff - y means that
we can approximate the problem within a factor of y, but unless
the problem within a factor of x. Unreferenced results are new results found in this paper.
release dates of the jobs are allowed to be different on different machines busy scheduling degrades
significantly as a scheduling strategy. This provides further evidence that the introduction of a
network makes scheduling problems qualitatively harder. However, busy schedules are of some
quality; we show that they are of length a factor of O(log m / log log m) longer than optimal. This analysis gives a better bound than the O(log(m ℓ_max)) bound of previously known approximation algorithms
for identical machines in a network [2, 4, 20].
We then turn to the NP-hard problem of the minimization of average completion time. Our
major result for this optimality criterion is a O(log 2 n)-approximation algorithm in the general
setting of unrelated machines. It formulates the problem as a hypergraph matching integer program
and then approximately solves a relaxed version of the integer program. We can then find an integral
solution to this relaxation, employing as a subroutine the techniques of Plotkin, Shmoys and Tardos
[21]. In combinatorial scheduling, a schedule with minimum average completion time can be found
in polynomial time, even if the machines are unrelated.
The techniques for the average completion time algorithm are fairly general, and yield an
O(log 2 n)-approximation for minimizing the average weighted completion time. A special case of
this result is an O(log 2 n)-approximation algorithm for the NP-hard problem of minimizing average
weighted completion time for unrelated machines with no network; no previous approximation
algorithms were known, even in the special case for which the machines are just of different speeds [3,
15]. Another special case is the first O(log 2 n)-approximation algorithm for minimizing the average
completion time of jobs with release dates on unrelated machines. No previous approximation
algorithms were known, even for the special case of just one machine [15]. The technique can also
be used to give an approximation algorithm for a problem motivated by satellite communication
systems [18, 26].
We also give a number of other results, including polynomial-time algorithms for several special
cases of the above-mentioned problems and a 5/2-approximation for a variant of network scheduling
in which each job has not only an origin, but also a destination.
A summary of some of these upper bounds and hardness results appears in Figure 1.
A line of research which is quite different from ours, yet still has some similarity in spirit, was
started by Papadimitriou and Yannakakis [19]. They modeled communication issues in parallel
machine scheduling by abstracting away from particular networks and rather describing the communication
time between any two processors by one network-dependent constant. They considered
the scheduling of precedence-constrained jobs on an infinite number of identical machines in this
model; the issues involved and the sorts of theorems proved are quite different from our results.
Although all of our algorithms are polynomial-time algorithms, they tend to be rather inefficient.
Most rely on the work of [21] as a subroutine. As a result we will not discuss running times explicitly
for the rest of the paper.
Makespan
In this section we study the problem of minimizing the makespan for the network scheduling
problem. We first give an algorithm that comes within a factor of 2 of optimal. We then show that
this is nearly the best we can hope for, as it is NP-hard to approximate the minimum makespan
within a factor of better than 4
3 for identical machines in a network. This hardness result contrasts
sharply with the combinatorial scenario, in which there is a polynomial approximation scheme [10].
The 2-approximation algorithm is computationally intensive, so we consider simple strategies that
typically work well in parallel machine scheduling. In another sharp contrast to parallel machine
scheduling, we show that the performance of such strategies degrades significantly in the network
setting; we prove an Ω(√(log m / log log m)) lower bound on the performance of any such algorithm. We also
show that greedy algorithms do have some performance guarantee, namely O(log m / log log m). Finally we consider a variant of the problem in which each job has not only an origin, but also a destination, and give a 5/2-approximation algorithm.
2.1 A 2-Approximation Algorithm For Makespan
In this section we describe a 2-approximation algorithm to minimize the makespan of a set of jobs
scheduled on a network of unrelated machines; the same bound for identical machines follows as a
special case. Let U be an instance of the unrelated network scheduling problem with
optimal schedule length D. Assuming that we know D, we will show how to construct a schedule
of length at most 2D. This can be converted, via binary search, into a 2-approximation algorithm
for the problem in which we are not given D [10].
In the optimal schedule of length D, we know that the sum of the time each job spends travelling
and being processed is bounded above by D. Thus, job J j may run on machine M i in the optimal
schedule only if:

    d(M_{o_j}, M_i) + p_ij ≤ D.    (1)
In other words, the length of an optimal schedule is not altered if we allow job J j to run only on
the machines for which (1) is satisfied. Formally, for a given job J j , we will denote by Q(J j ) the
set of machines that satisfy (1). If we restrict each J j to only run on the machines in Q(J j ), the
length of the optimal schedule remains unchanged.
We now define a combinatorial unrelated machines scheduling problem (Z) as follows:

    p'_ij = p_ij if M_i ∈ Q(J_j), and p'_ij = ∞ otherwise.    (2)
If the optimal schedule for the unrelated network scheduling problem has length D, then the
optimal solution to the unrelated parallel machine scheduling problem (2) is at most D. We will
use the 2-approximation algorithm of Lenstra, Shmoys and Tardos [17] to assign jobs to machines.
The following theorem is easily inferred from [17].
Theorem 2.1 (Lenstra, Shmoys, Tardos [17]) Let Z be an unrelated parallel machine scheduling
problem with optimal schedule of length D. Then there exists a polynomial-time algorithm that
finds a schedule S of length 2D. Further, S has the property that no job starts after time D.
Theorem 2.2 There exists a polynomial-time 2-approximation algorithm to minimize makespan
in the unrelated network scheduling problem.
Proof: Given an instance of the unrelated network scheduling problem, with shortest schedule
of length D, form the unrelated parallel machine scheduling problem Z defined by (2) and use
the algorithm of [17] to produce a schedule S of length 2D. This schedule does not immediately
correspond to a network schedule because some jobs may have been scheduled to run before their
release dates. However, if we allocate D units of time for sending all jobs to the machines on which
they run, and then allocate 2D units of time to run schedule S, we immediately get a schedule of
length 3D for the network problem.
By being more careful, we can create a schedule of length 2D for the network problem. In
schedule S, each machine M_i is assigned a set of jobs S_i. Let |S_i| be the sum of the processing times of the jobs in S_i and let S^max_i be the job in S_i with largest processing time on machine M_i; call its processing time p^max_i. By Theorem 2.1 and the fact that the last job run on machine i is no longer than the longest job run, we know that |S_i| − p^max_i ≤ D. Let S'_i denote the set of jobs S_i − {S^max_i}. We form the schedule for each machine i by running job S^max_i at time D − p^max_i, followed by the jobs in S'_i.
In this schedule the jobs assigned to any machine clearly finish by time 2D; it remains to be
shown that all jobs can be routed to the proper machines by the time they need to run there. Job
S^max_i must start at time D − p^max_i; conditions (1) and (2) guarantee that it arrives in time. The
remaining jobs need only arrive by time D; conditions (1) and (2) guarantee this as well. Thus we
have produced a valid schedule of length 2D.
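The following Python sketch (our own illustration) shows how the per-machine schedule of length 2D is assembled; the job-assignment step itself is assumed to have been produced by the Lenstra-Shmoys-Tardos rounding and is not reimplemented here.

def assemble_schedule(assignment, p, dist, origin, D):
    """Turn an assignment of jobs to machines into a network schedule of length <= 2D.

    assignment : dict machine -> list of jobs assigned to it, where every job j
                 satisfies dist[origin[j]][machine] + p[machine][j] <= D (condition (1))
    Returns a dict job -> (machine, start_time).
    """
    schedule = {}
    for machine, jobs in assignment.items():
        if not jobs:
            continue
        # The longest job on this machine runs first, finishing exactly at time D;
        # condition (1) guarantees it has arrived by time D - p.
        longest = max(jobs, key=lambda j: p[machine][j])
        schedule[longest] = (machine, D - p[machine][longest])
        # The remaining jobs (total length <= D) run back to back starting at time D.
        t = D
        for j in jobs:
            if j == longest:
                continue
            schedule[j] = (machine, t)
            t += p[machine][j]
    return schedule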
Observe that this approach is fairly general and can be applied to any problem that can be
characterized by a condition such as (2). Consider, for example the following very general problem,
which we call generalized network scheduling with costs. In addition to the usual unrelated network
scheduling problem, the time that it takes for job J j to travel over an edge is dependent not only
on the endpoints of the edge, but also on the job. Further, there is a cost c ij associated with
processing job J_j on machine M_i. Given a schedule in which job J_j runs on machine M_{π(j)}, the cost of the schedule is Σ_{j=1}^{n} c_{π(j) j}.
Given any target cost C, we define s(C) to be the minimum length
schedule of cost at most C.
Theorem 2.3 Given a target cost C, we can, in polynomial time, find a schedule for the generalized
network scheduling problem with makespan at most 2s(C) and of cost C if a schedule of cost C exists.
Proof: We use similar techniques to those used for Theorem 2.2. We first modify Condition (1)
so that d(\Delta; \Delta) depends on the job as well. We then use a generalization of the algorithm of Lenstra,
Shmoys and Tardos for unrelated machine scheduling, due to Shmoys and Tardos [25] which, given
a target cost C finds a schedule of cost C and length at most twice that of the shortest schedule of
cost C. The schedule returned also has the property that no job starts after time D, so the proof
of Theorem 2.2 goes through if we use this algorithm in place of the algorithm of [17].
2.2 Nonapproximability
Theorem 2.4 It is NP-complete to determine if an instance of the identical network scheduling
problem has a schedule of length 3, even in a network with ℓ_max = 1.
Proof: See Appendix.
Corollary 2.5 There does not exist an α-approximation algorithm for the network scheduling problem with α < 4/3 unless P = NP, even in a network with ℓ_max = 1.
Proof: Any algorithm with α < 4/3 would have to give an exact answer on an instance whose optimal schedule has length 3, since returning a schedule of length 4 or more would exceed the ratio 4/3.
It is not hard to see, via matching techniques, that it is polynomial-time decidable whether there
is a schedule of length 2. We can show that this is not the case when the machines in the network
can be unrelated. Lenstra, Shmoys and Tardos proved that it is NP-Complete to determine if
there is a schedule of length 2 in the traditional combinatorial unrelated machine model [17]. If we
allow multiple machines at one node, their proof proves Theorem 2.6. If no zero length edges are
allowed, i.e. each machine is forced to be at a different network node, this proof does not work,
but we can give a different proof of hardness, which we do not include in this paper.
Theorem 2.6 There does not exist an ff-approximation algorithm for the unrelated network scheduling
problem with α < 3/2 unless P = NP, even in a network with ℓ_max = 1.
2.3 Naive Strategies
The algorithms in Section 2.1 give reasonably tight bounds on the approximation of the schedule
length. Although these algorithms run in polynomial time, they may be rather slow [21]. We thus
explore whether a simpler strategy might also yield good approximations.
A natural candidate is a busy strategy: construct a busy schedule, in which, at any time t there
is no idle machine M i and idle job J j so that job J j can be started on M i at time t. Busy strategies
and their variants have been analyzed in a large number of scheduling problems (see [15]) and have
been quite effective in many of them. For combinatorial identical machine scheduling, Graham
showed that such strategies yield a 2-approximation [6]. In this section we analyze
the effectiveness of busy schedules for identical machine network scheduling. Part of the interest of
this analysis lies in what it reveals about the relative hardness of scheduling with and without an
underlying network; namely, the introduction of an underlying network can make simple strategies
much less effective for the problem.
2.3.1 A Lower Bound
We construct a family of instances of the network scheduling problem, and demonstrate, for each
instance, a busy schedule which is a factor of Ω(√(log m / log log m)) longer than the shortest schedule for that instance.
The network G = (V, E) consists of ℓ levels of nodes, with level i containing ρ^{i−1} nodes. Each node in level i is connected to every node in level i + 1 by an edge of length 1. Each machine in levels 1, ..., ℓ − 1 receives ρ jobs of size 1 at time 0. The machines in level ℓ initially receive no jobs. The optimal schedule length for this instance is 2 and is achieved by each machine in level i, 2 ≤ i ≤ ℓ, taking exactly one job from level i − 1. We call this instance I. See Figure 2.
The main idea of the lower bound is to construct a busy schedule in which machine M always
processes a job which originated on M , if such a job is available. This greediness "prevents" the
scheduler from making the much larger assignment of jobs to machines at time 2 in which each job
is assigned to a machine one level away.
To construct a busy schedule S, we use algorithm B, which in Step t constructs the subschedule
of S at time t.
Step t:
Phase 1: Each machine M processes one job that originated at M , if any such jobs remain. We
call such jobs local to machine M .
Figure 2: Lower bound instance for Theorem 2.8 (levels 1 through ℓ). Circles represent processors, and the numbers inside the circles are the number of jobs which originate at that processor at time 0. Levels i and i + 1 are completely connected to each other. The optimal schedule is of length 2 and is achieved by shifting each job to a unique processor one level to its right.
Phase 2: Consider the bipartite graph G_t = (X, Y, A), where X has one vertex representing each job that is unprocessed after Phase 1 of Step t, Y contains one vertex representing each machine which has not had a job assigned to it in Phase 1 of Step t, and (x, y) ∈ A if and only if job x originated a distance no more than t − 1 from machine y. Complete the construction of S at time t by processing jobs on machines based on any maximum matching in G_t. It is clear that S is busy.
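For concreteness, here is a small Python sketch (our own code, unit-size jobs only, hypothetical names) of algorithm B, with Phase 2 implemented by a standard augmenting-path maximum bipartite matching:

def busy_schedule_B(num_machines, dist, local_jobs):
    """local_jobs[i]: unit jobs originating at machine i; dist[i][k]: shortest-path distance."""
    remaining = list(local_jobs)
    schedule, t = [], 0
    while sum(remaining) > 0:
        t += 1
        free = set(range(num_machines))
        # Phase 1: every machine processes one of its own (local) jobs, if any remain.
        for i in range(num_machines):
            if remaining[i] > 0:
                remaining[i] -= 1
                schedule.append((t, i, i))
                free.discard(i)
        # Phase 2: maximum matching between leftover jobs and idle machines reachable by time t-1.
        origins = [i for i in range(num_machines) for _ in range(remaining[i])]
        adj = [[k for k in free if dist[o][k] <= t - 1] for o in origins]
        for j, k in enumerate(_max_matching(len(origins), adj)):
            if k is not None:
                remaining[origins[j]] -= 1
                schedule.append((t, origins[j], k))
    return schedule

def _max_matching(n_left, adj):
    """Kuhn's augmenting-path maximum bipartite matching; returns left -> right or None."""
    match_right, match_left = {}, [None] * n_left
    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or try_augment(match_right[v], seen):
                match_right[v], match_left[u] = u, v
                return True
        return False
    for u in range(n_left):
        try_augment(u, set())
    return match_left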
When we apply algorithm B to instance I, the behavior follows a well-defined pattern. In Phase
2 of Step 2, all unprocessed jobs that originated in level ℓ − 1 are processed by distinct processors in level ℓ. During Phase 2 of Step 3, all unprocessed jobs that originated in levels ℓ − 3 and ℓ − 2 are processed by machines in levels ℓ − 1 and ℓ. This continues, so that at Step i an additional (i − 1)
levels pass their jobs to higher levels and all these jobs are processed. This continues until either
level 1 passes its jobs, or processes its own jobs. We characterize the behavior of the algorithm
more formally in the following lemma.
Lemma 2.7 Let j(i; t) be the number of local jobs of processor i still unprocessed after Phase 2 of
Step t and let lev(i) be the level number of processor i. Then for all times t ≥ 2, if ρ ≥ t, then

    j(i, t) = 0 if lev(i) ≥ ℓ − t(t − 1)/2, and j(i, t) = ρ − t otherwise.    (3)
Proof: We prove the lemma by induction on t. During Phase 2 of Step 2, the only edges in the
graph G_2 connect levels ℓ and ℓ − 1. There are ρ^{ℓ−1} nodes in level ℓ and ρ^{ℓ−2}(ρ − 1) remaining jobs local to machines in level ℓ − 1, so the matching assigns all the unprocessed jobs in level ℓ − 1 to level ℓ. Machines in levels 1 to ℓ − 1 process local jobs during Phase 1. As a result, all the neighbors of machines in levels 1 to ℓ − 2 are busy in Phase 1 and cannot process jobs local to these machines during Phase 2. The number of local jobs on these machines, therefore, decreases only by 1. Thus the base case holds.
Assume the lemma holds for all
greater than b as well. We now show that j(i; t 0
level b+x has ae b+x\Gamma1 processors. Level
has at most ae \Delta ae b+x\Gamma(t 0 local jobs remaining. If t 0 - 2 then there are enough
machines on level b + x to process all the remaining jobs local to level b
another of the highest-numbered levels have their local jobs completed during time t 0 . Thus
at time t 0 we have
Since we assumed sufficiently large initial workloads on all processors on levels
by the induction hypothesis, for all machines in levels less than
distance them have local jobs remaining after time will be assigned a local job
during Phase 1 of Step t 0 . Therefore all machines i such that lev(i)
any jobs to higher levels and j(i; t 0
Depending on the relative values of ae and ', either the machine in level 1 processes all of the
jobs which originated on it, or some of those jobs are processed by machines in higher-numbered
levels. Balancing these two cases we get the following theorem:
Theorem 2.8 For the family of instances of the identical machine network scheduling problem
defined above, there exist busy schedules of length a factor of Ω(√(log m / log log m)) longer than optimal.
Proof: The first case in (3) will apply to level 1 when 1 ≥ ℓ − t(t − 1)/2. This inequality does not hold when t < √(2ℓ), but it does hold once t is slightly larger than √(2ℓ). If ρ ≥ √(2ℓ) then the schedule length is about √(2ℓ), while if ρ < √(2ℓ) then the jobs in level 1 will be totally processed in their level, which takes ρ time. Therefore the makespan of S is, up to constant factors, min(c√ℓ, ρ). Given that the total number of machines is m = Θ(ρ^{ℓ−1}), calculation reveals that min(c√ℓ, ρ) is maximized at Θ(√(log m / log log m)). Thus S is a busy schedule of length a factor of Θ(√(log m / log log m)) longer than optimal.
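The balancing in the last step can be made explicit; the following back-of-the-envelope calculation (our own working under the reconstruction above, constants suppressed) shows where the bound comes from:

  m = \Theta(\rho^{\ell-1}) \;\Rightarrow\; \ell \log \rho \approx \log m.

Setting \rho = c\sqrt{\ell}, so that the two terms of \min(c\sqrt{\ell}, \rho) balance, gives

  \tfrac{\ell}{2}\log \ell \approx \log m \;\Rightarrow\; \ell = \Theta\!\left(\frac{\log m}{\log\log m}\right),

so the busy schedule is a factor of \Theta(\sqrt{\ell}) = \Theta\big(\sqrt{\log m/\log\log m}\big) longer than the optimal schedule of length 2.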
Note that this example shows that several natural variants of busy strategies, such as scheduling
a job on the machine on which it will finish first, or scheduling a job on the closest available
processor, also perform poorly.
2.3.2 An Upper Bound
In contrast to the lower bound of the previous subsection, we can prove that busy schedules are
of some quality. Given an instance I of the network scheduling problem, we define C*_max(I) to be the length of a shortest schedule for I and C^A_max(I) to be the length of the schedule produced by algorithm A; when it causes no confusion we will drop the I and use the notation C*_max and C^A_max.
Definition 2.9 Consider a busy schedule S for an instance I of the identical machines network
scheduling problem. Let p j (t) be the number of units of job J j remaining to be processed in schedule
S at time t, and W_t = Σ_j p_j(t) be the total work remaining to be processed in schedule S at time
t.
Lemma 2.10 W_{iC*_max} ≤ mC*_max / (2 · i!).
Proof: We partition schedule S into consecutive blocks B_1, B_2, ..., each of length C*_max, and relate what happens in each block of schedule S to an optimal schedule S* of length C*_max for instance I.
Consider a job J_j that was not started by time C*_max in schedule S, and let M_j be the machine on which job J_j is processed in schedule S*. This means that in block B_1 machine M_j is busy for p_j units of time during job J_j's slot in schedule S* - the period of time during which job J_j was processed on machine M_j in schedule S*. Hence for every job J_j that is not started in block
there is an equal amount of unique work which we can identify that is processed in block B 1 ,
implying that W_{C*_max} ≤ W_0/2 ≤ mC*_max/2. Successive applications of this argument yield W_{iC*_max} ≤ mC*_max/2^i, which proves the lemma for i ≤ 2.
To obtain the stronger bound W_{iC*_max} ≤ mC*_max/(2 · i!), we increase the amount of processed work
which we identify with each unstarted job. Choose i - 3 and consider a job J j which is unstarted
in schedule S at the start of block B i+1 , namely at time iC
. Assume for the sake of simplicity
that in every block B k of schedule S, only one job is processed in job J j 's slot (the time during
which job J j would be processed if block B k was schedule S ). Assume also that this job is exactly
of the same size as job J multiple jobs are processed the argument is essentially the same. Let
J r be the job that took job J j 's slot in block B r , for r - 2. We will show that J j could have
been processed in J r 's slot in block B i for all 2. Figure 2.3.2 illustrates the network
structure used in this argument.
Assume that job J j originated on machine M
, and that job J r originated on machine M or , and
that job J j was processed on machine M j in schedule S . Then d(M
since job J j
was processed on machine M j in schedule S , and d(M or ; M j ) - rC
since job J r was processed
in job J j 's slot in block B r . Thus d(M
consequently J j could have run
in job J r 's slot in any of blocks B We focus on block B i . Since J j was not processed in
and schedule S is busy, some job must have been processed during job J r 's slot in block
We identify this work with job J note that no work is ever identified with
more than one job.
When we consider the (i\Gamma2) different jobs which were processed in J j 's slot in blocks
and consider the jobs that were processed in their slots in B i , we see that with each job J j unstarted
at time iC
, we can uniquely identify units of work that was processed in block
O
<_
<_
O r
r
<_
Figure
3: If J r takes J j 's slot in B r , then the machine on which J j originates, M
, is at most a
distance of (r
r , the machine on which J r runs in S . Thus J j could have been
run in J r 's slot in block i,
. If all these slots were not full in block B i , then job J j would have been started in one of them.
Including the work processed during job J_j's slot in block B_i, we obtain W_{iC*_max} ≤ W_{(i−1)C*_max}/i, and the lemma follows.
Corollary 2.11 During the time interval [iC*_max, (i + 1)C*_max], at most m/(2 · i!) machines are completely busy.
Proof: We have W_0 ≤ mC*_max. Therefore, by Lemma 2.10, we have W_{iC*_max} ≤ mC*_max/(2 · i!). A machine that is completely busy from time iC*_max to time (i + 1)C*_max does C*_max work during that time, and therefore at most m/(2 · i!) machines can be completely busy.
To get a stopping point for the recurrence, we require the following lemma:
Lemma 2.12 In any busy schedule, if at time t all remaining unprocessed jobs originated on the
same machine, the schedule is no longer than t + 2C*_max.
Proof: Let M be the one machine with remaining local jobs. Let W_i be the amount of work from machine M that is done by machine M_i in the optimal schedule. Clearly Σ_i W_i equals the amount of work that originated on machine M. Because there is no work left that originated on machines other than M, each machine M_i can process at least W_i work from machine M in the next C*_max steps. If after C*_max steps all the work originating on machine M is done, then we have finished. Otherwise, some machine M_i processed less than W_i work during this time, which means there was no more work for it to take. Therefore after C*_max steps all the jobs that originated on machine M have started. Because no job is longer than C*_max, another C*_max steps suffices to finish all the jobs that have started.
We are now ready to prove the upper bound:
Theorem 2.13 Let A be any busy scheduling algorithm and I an instance of the identical machine
network scheduling problem. Then C^A_max ≤ O(log m / log log m) · C*_max.
Proof: If a machine ever falls idle, all of its local work must be started. Otherwise it would process
remaining local work. Thus by Corollary 2.11, in O((log m / log log m) · C*_max) time, the number of processors
with local work remaining is reduced to 1. By Lemma 2.12, when the number of processors with
remaining local work is down to one, a constant number of extra blocks suffice to finish.
2.4 Scheduling with Origins and Destinations
In this subsection we consider a variant of the (unrelated machine) network scheduling problem in
which each job, after being processed, has a destination machine to which it must travel. Specif-
ically, in addition to having an origin machine M
. In addition to having an origin machine M_{o_j}, job J_j also has a terminating machine M_{t_j}. J_j begins at machine M_{o_j}, travels distance d(M_{o_j}, M_{d_j}) to machine M_{d_j}, the machine on which it gets processed, and then proceeds to travel for d(M_{d_j}, M_{t_j}) units of time to machine M_{t_j}. We call
Theorem 2.14 There exists a polynomial-time 5/2-approximation algorithm to minimize makespan
in the point-to-point scheduling problem.
Proof: We construct an unrelated machines scheduling problem as in the proof of Theorem 2.2.
In this setting the condition on when a job J j can run on machine M i depends on the time for J j
to get to M i , the time to be processed there, and the time to proceed to the destination machine.
Thus a characterization of when job J_j is able to run on machine M_i in the optimal schedule is

    d(M_{o_j}, M_i) + p_ij + d(M_i, M_{t_j}) ≤ D.    (4)
Now, for a given job J j , we define Q(J j ) to be the set of machines that satisfy (4). We can then
form a combinatorial unrelated machines scheduling problem as follows:

    p'_ij = p_ij if M_i ∈ Q(J_j), and p'_ij = ∞ otherwise.    (5)
We then approximately solve this problem using [17] to obtain an assignment of jobs to machines.
Pick any machine M i and let J i be the set of jobs assigned to machine M i . By Theorem 2.1 we
know that the sum of the processing times of all of the jobs in J i except the longest is at most
D. We partition the set of jobs J i into three groups, and place each job into the lowest numbered
group which is appropriate:
1. J^0_i contains the job in J_i with the longest processing time,
2. J^1_i contains jobs for which d(M_{o_j}, M_i) ≤ D/2,
3. J^2_i contains jobs for which d(M_i, M_{t_j}) ≤ D/2.
(By condition (4), every job not placed in J^1_i satisfies the condition for J^2_i.) Let p(J^k_i) be the sum of the processing times of the jobs in group J^k_i, k = 0, 1, 2. As noted above, p(J^1_i) + p(J^2_i) ≤ D. We will always schedule J^1_i and J^2_i in a block of D consecutive time steps, which we call B. The first p(J^1_i) steps will be taken up by jobs in J^1_i while the last p(J^2_i) steps will be taken up by jobs in J^2_i. Note that there may be idle time in the interior of the block.
We consider two possible scheduling strategies based on the relative sizes of p(J^1_i) and p(J^2_i).
Case 1: (p(J^1_i) ≤ D/2). In this case we first run the long job in J^0_i; by condition (4) it finishes by time D. We then run block B from time D to 2D. Since p(J^1_i) ≤ D/2, the jobs in J^1_i all finish by time 3D/2 and by condition (4) reach their destinations by time 5D/2. By the definition of J^2_i, any job in J^2_i that is scheduled to complete processing by time 2D will arrive at its destination by time 5D/2.
Case 2: (p(J^1_i) > D/2). We first run block B from time D/2 to 3D/2. We then start the long job in J^0_i at time 3D/2; by condition (4) it arrives at its destination by time 5D/2. Since p(J^2_i) < D/2, machine M_i need not start processing any job in J^2_i before time D; hence we are guaranteed that they have arrived at machine M_i by that time. By definition of J^1_i all of its jobs are available by time D/2; it is straightforward from condition (4) that all jobs arrive at their destinations by time 5D/2.
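A compact Python sketch of the per-machine placement used in this proof (our own code; the grouping into J^0_i, J^1_i, J^2_i is assumed to have been done already, and all names are ours):

def schedule_machine(D, long_job, group1, group2, p):
    """Place one machine's jobs as in the proof of Theorem 2.14; returns {job: start}."""
    starts = {}
    p1 = sum(p[j] for j in group1)
    block_start = D if p1 <= D / 2 else D / 2          # Case 1 vs. Case 2
    if long_job is not None:
        # Case 1: long job ends exactly at D; Case 2: it starts right after block B.
        starts[long_job] = (D - p[long_job]) if p1 <= D / 2 else block_start + D
    t = block_start                                    # first part of block B: group 1
    for j in group1:
        starts[j] = t
        t += p[j]
    t = block_start + D - sum(p[j] for j in group2)    # last part of block B: group 2
    for j in group2:
        starts[j] = t
        t += p[j]
    return starts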
We can also show that the analysis of this algorithm is tight, for algorithms in which we assign
jobs to processors using the linear program defined in [17] using the processing times specified by
Equation 5. Let D be the length of the optimal schedule. Then we can construct instances for
which any such schedule S has length at least 5=2D \Gamma 1. Consider a set of k+1 jobs and a particular
machine M i . We specify the largest of these jobs to have size D and to have M i as both its origin
and destination machine. We specify that each of the other k jobs are of size D=k and have distance
to both their origin and destination machines. The combinatorial unrelated
machines algorithm may certainly assign all of these jobs to M i , but it is clear that any schedule
adopted for this machine will have competion time at least ( 5
2k )D.
3 Average Completion Time
3.1 Background
We turn now to the network scheduling problem in which the objective is to minimize the average
completion time. Given a schedule S, let C^S_j be the time that job J_j finishes running in S. The average completion time of S is (1/n) Σ_j C^S_j, whose minimization is equivalent to the minimization of Σ_j C^S_j. Throughout this section we assume without loss of generality that n ≥ m.
We have noted in Section 1 that our network scheduling model can be characterized by a set of
machines and a set of release dates r_ij, where J_j is not available on m_i until time r_ij. We noted
that this is a generalization of the traditional notion of release dates, in which r_ij = r_j for all i. We
will refer to the latter as traditional release dates; the unmodified phrase release date will refer to
the general r ij .
The minimization of average completion time when the jobs have no release dates is polynomial-time
solvable [3, 12], even on unrelated machines. The solution is based on a bipartite matching
formulation, in which one side of the bipartition has jobs and the other side (machine, position)
pairs. Matching J j to (m corresponds to scheduling J j in the kth-from-last position on m i ; this
edge is weighted by kp ij , which is J j 's contribution to the average completion time if J j is kth from
last.
When release dates are incorporated into the scheduling model, it seems difficult to generalize
this formulation. Clearly it can not be generalized precisely for arbitrary release dates, since even
the one machine version of the problem of minimizing average completion time of jobs with release
dates is strongly NP-hard [3]. Intuitively, even the approximate generalization of the formulation
seems difficult, since if all jobs are not available at time 0, the ability of J j to occupy position
on m i is dependent on which jobs precede it on m i and when. Release dates associated with
a network structure do not contain traditional release dates as a subclass even for one machine,
so the NP-completeness of the network scheduling problem does not follow immediately from the
combinatorial hardness results; however, not surprisingly, minimizing average completion time for
a network scheduling problem is NP-complete.
Theorem 3.1 The network scheduling problem with the objective of minimum average completion
time is NP-complete even if all the machines are identical and all edge lengths are 1.
Proof: See Appendix.
In what follows we will develop an approximation algorithm for the most general form of this
problem. We will follow the basic idea of utilizing a bipartite matching formulation; however we
will need to explicitly incorporate time into the formulation. In addition, for the rest of the section
we will consider a more general optimality criterion: average weighted completion time. With each
J j we associate a weight w j , and the goal is to minimize
. All of our algorithms handle
this more general case; in addition they allow the nm release dates r ij to be arbitrary and not
necessarily derived from the network structure.
3.2 Unit-Size Jobs
We consider first the special case of unit-size jobs.
Theorem 3.2 There exists a polynomial-time algorithm to schedule unit-size jobs on a network of
identical machines with the objective of minimizing the average weighted completion time.
Proof: We reduce the problem to minimum-weight bipartite matching. One side of the bipartition
will have a node for each job J_j, 1 ≤ j ≤ n, and the other side will have a node [m_i, t] for each machine m_i and each time t in a set T_i to be described below. An edge (J_j, [m_i, t]) is included if J_j is available on m_i at time t, and the inclusion of that edge in the matching will represent the scheduling of J_j on m_i from time t to t + 1; such an edge is given weight w_j(t + 1), job J_j's contribution to the weighted completion time. Release dates are included in the model by excluding an edge (J_j, [m_i, t]) if J_j will not be available on m_i by time t.
To determine the necessary sets T i , we observe that there is no advantage in unforced idle time.
Since each job is only one unit long, there is no reason to make it wait for a job of higher weight
that is about to be released. It is clear, therefore, that setting T_i = {0, 1, ..., max_{i,j} r_ij + n} would suffice, since no job would need to be scheduled more than n time units later than its release date. The number of time slots needed can in fact be reduced to O(n), but we omit the details for the sake of brevity.
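A small illustrative implementation of this reduction (our own sketch, using SciPy's assignment routine for the minimum-weight matching; all names are hypothetical):

import numpy as np
from scipy.optimize import linear_sum_assignment

def schedule_unit_jobs(release, weights, horizon):
    """release[i][j]: earliest start of job j on machine i; horizon: number of time slots.
    Requires n <= m * horizon. Returns a list of (job, machine, start_time)."""
    m, n = len(release), len(weights)
    slots = [(i, t) for i in range(m) for t in range(horizon)]
    BIG = 10 ** 9                          # large penalty forbids slots before the release date
    cost = np.full((n, len(slots)), BIG, dtype=float)
    for j in range(n):
        for s, (i, t) in enumerate(slots):
            if t >= release[i][j]:
                cost[j, s] = weights[j] * (t + 1)   # weighted completion time of job j
    rows, cols = linear_sum_assignment(cost)        # one distinct slot per job, minimum total weight
    return [(j, slots[s][0], slots[s][1]) for j, s in zip(rows, cols)]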
By excluding edges which do not give job J j enough time to travel between the machine on which
runs and the destination machine M d j
, we can prove a similar theorem for the point-to-point
scheduling problem, defined in Section 2.4.
Theorem 3.3 There exists a polynomial-time algorithm to solve the point-to-point scheduling problem
with the objective of minimizing the average weighted completion time of unit-size jobs.
3.3 Polynomial-Size Jobs
We now turn to the more difficult setting of jobs of different sizes and unrelated machines. The
minimization of average weighted completion time in this setting is strongly NP-hard, as are
many special cases. For example, the minimization of average completion time of jobs with release
dates on one machine is strongly NP-hard [16]; no approximation algorithms were known for this
special case, to say nothing of parallel identical or unrelated machines, or weighted completion
times. If there are no release dates, namely all jobs are available at time 0, then minimization of
average weighted completion time is NP-hard for parallel identical machines. A small constant
factor approximation algorithm was known for this problem [14], but no approximation algorithms
were known for the more general cases of machines of different speeds or unrelated machines. We
introduce techniques which yield the first approximation algorithms for several other problems as
well, which we discuss in Section 3.5.
Our approximation algorithm for minimum average completion time begins by formulating the
scheduling problem as a hypergraph matching problem. The set of vertices will be the union of two
sets, J and M , and the set of hyperedges will be denoted by F . J will contain n vertices J j , one for
each job, and M will contain mT vertices, where T is an upper bound on the number of time units
that will be needed to schedule this instance. The time units will range over
g. M will have a node for each (machine, time) pair; we will denote the node that
corresponds to machine M i at time t as [m i ; t]. A hyperedge e 2 F represents scheduling a job J j
on machine M i from time t 1 to t 2 by including nodes J The cost
of an edge e, denoted by c e , will be the weighted completion time of job J j if it is scheduled in the
manner represented by e. There will be one edge in the hypergraph for each feasible scheduling of
a job on a machine; we exclude edges that would violate the release date constraints.
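The edge set is easy to enumerate explicitly; the following Python sketch (our own illustration, with hypothetical names) lists one hyperedge per feasible placement of a job on a machine, together with its cost:

def build_hyperedges(p, r, weights, horizon):
    """p[i][j], r[i][j]: processing time and release date of job j on machine i.
    Returns a list of (cost, job_node, machine_time_nodes)."""
    edges = []
    for i in range(len(p)):
        for j in range(len(weights)):
            for t in range(horizon):
                finish = t + p[i][j]
                if t >= r[i][j] and finish <= horizon:
                    nodes = [(i, u) for u in range(t, finish)]     # occupied (machine, time) pairs
                    edges.append((weights[j] * finish, j, nodes))  # weighted completion time
    return edges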
The problem of finding the minimum cost matching in the hypergraph can be phrased as the
following integer program I. We use decision variable x e 2 f0; 1g to denote whether hyperedge e
is in the matching.
    minimize Σ_e c_e x_e
    subject to Σ_{e : J_j ∈ e} x_e = 1 for each job J_j,
               Σ_{e : [m_i, t] ∈ e} x_e ≤ 1 for each machine-time node [m_i, t],
               x_e ∈ {0, 1} for all e ∈ F.
Two considerations suggest that this formulation might not be useful. The formulation is not
of polynomial size in the input size, and in addition the following theorem suggests that calculating
approximate solutions for this integer program may be difficult.
Theorem 3.4 Consider an integer program in the form I which is derived from an instance of the
network scheduling problem with identical machines, with the c e allowed to be arbitrary. Then there
exists no polynomial-time algorithm A to approximate I within any factor unless P = NP.
Proof: For an arbitrary instance of the network scheduling problem construct the hypergraph
matching problem in which an edge has weight W >> n if it corresponds to a job being completed later than time 3, and give all other edges weight 1. If there is a schedule of length 3 then the minimum weight hypergraph matching is of weight n; otherwise the weight is at least W; therefore an α-approximation algorithm with α < W/n would give a polynomial-time algorithm to decide if there was a schedule of length 3 for the network scheduling problem, which by Theorem 2.4 would imply P = NP.
In order to overcome this obstacle, we need to seek a different kind of approximation to the
hypergraph matching problem. Typically, an approximate solution is a feasible solution, i.e. one
that satisfies all the constraints, but whose objective value is not the best possible. We will look for
a different type of solution, one that satisfies a relaxed set of constraints. We will then show how to
turn a solution that satisfies the relaxed set of constraints into a schedule for the network scheduling
problem, while only introducing a bounded amount of error into the quality of the approximation.
We will assume for now that p max - n 3 . This implies that the size of program I is polynomial
in the input size. We will later show how to dispense with the assumption on the size of p max via
a number of rounding and scaling techniques.
We begin by turning the objective function of I into a constraint. We will then use the standard
technique of applying bisection search to the value of the objective function. Hence for the
remainder of this section we will assume that C, the optimal value to integer program I, is given.
We can now construct approximate solutions to the following integer linear program (J):

    Σ_{e : J_j ∈ e} x_e = 1 for each job J_j,    (7)
    Σ_{e : [m_i, t] ∈ e} x_e ≤ 1 for each machine-time node [m_i, t],    (8)
    Σ_e c_e x_e ≤ C,    (9)
    x_e ∈ {0, 1} for all e ∈ F.
This integer program is a packing integer program, and as has been shown by Raghavan [22],
Raghavan and Thompson [23] and Plotkin, Shmoys and Tardos [21], it is possible to find provably
good approximate solutions in polynomial time. We briefly review the approach of [21], which
yields the best running times.
Plotkin, Shmoys and Tardos [21] consider the following general problem.
The Packing Problem: Does there exist x ∈ P such that Ax ≤ b, where A is an m × n nonnegative matrix, b > 0, and P is a convex set in the positive orthant of R^n?
They demonstrate fast algorithms that yield approximately optimal integral solutions to this
linear program. All of their algorithms require a fast subroutine to solve the following problem.
The Separation Problem: Given an m-dimensional vector y ≥ 0, find x̃ ∈ P such that y^T A x̃ = min_{x ∈ P} y^T A x.
The subroutine to solve this problem will be called the separating subroutine.
An approximate solution to the packing problem is found by considering the relaxed problem of finding x ∈ P with Ax ≤ λb, and approximating the minimum λ such that this is true. Here the value λ characterizes the "slack"
in the inequality constraints, and the goal is to minimize this slack.
Our integer program can be easily put in the form of a packing problem; the equality constraints
(7) define the polytope P and the inequality constraints (8,9) make up Ax - b. The quality of the
integral solutions obtained depends on the width of P relative to Ax - b, which is defined by
    ρ = max_i max_{x ∈ P} (a_i x / b_i).    (10)
It also depends on d, where d is the smallest integer such that any solution returned by the
separating routine is guaranteed to be an integral multiple of 1/d.
Applying equation (10) to compute ae for polytope P (defined by (7)) yields a value that is at
least n, as we can create matchings (feasible schedules) whose cost (average completion time) is
much greater than C, the optimal average completion time.
In fact, many other packing integer programs considered in [21] also, when first formulated,
have large width. In order to overcome this obstacle, [21] gave several techniques to reduce the
width of integer linear programs. We discuss and then use one such technique here, namely that of
decomposing a polytope into n lower-dimensional polytopes, each of which has smaller width. The
intuition is that all the non-zero variables in each equation of the form (7) are associated with only
one particular job. Thus we will be able to decompose the polytope into n polytopes, one for each
job. We will then be able to optimize individually over each polytope and use only the inequality
constraints (8) and (9) to describe the relationships between different jobs.
We now proceed in more detail. We say that a polytope P can be decomposed into a product of n polytopes P^1, ..., P^n if the coordinates of each vector x can be partitioned into n groups, one for each P^l. If our polytope can be decomposed in this way, and we can solve the separation problem for each polytope P^l, then we can apply a
theorem of [21] to give an approximately optimal solution in polynomial time. In particular, let λ* be the optimum value of J. The following theorem is a specialization of Theorem 2.11 in [21] to
our problem, and describes the quality of integral solutions that can be obtained for such integer
programs.
Theorem 3.5 [21] Let ρ_l be the width of P^l and ρ = max_l ρ_l. Let γ be the number of constraints in Ax ≤ b. Given a polynomial-time separating subroutine for each of the P^l, there exists a polynomial-time algorithm for J which gives an integral solution with λ = O(λ* + (ρ/d) log γ).
We will now show how to reformulate J so that we will be able to apply this theorem. Polytope
P (from (7)) can indeed be decomposed into n different polytopes: P_j corresponds to those equality constraints which include only J_j. In order to keep the width of the P_j small, we also include into the definition of P_j the constraint x_e = 0 for each edge e which includes J_j and has c_e > C; this does not increase the optimal value of the integer program. We integrate each of these new constraints into the appropriate polytope P_j, and decompose x so that x^j consists of those components of x which represent edges that include J_j. In other words, P^l is defined by

    Σ_{e : J_l ∈ e} x_e = 1, with x_e = 0 for each e containing J_l that has c_e > C, and x_e ≥ 0 otherwise.
This yields the following relaxation L:
    minimize λ
    subject to Σ_{e : [m_i, t] ∈ e} x_e ≤ λ for each machine-time node [m_i, t],
               Σ_e c_e x_e ≤ λC,
               x = (x^1, ..., x^n) ∈ P^1 × ... × P^n.
To apply Theorem 3.5 we must (1) demonstrate a polynomial-time separating subroutine and
(2) bound ρ, d and γ. The decomposition of P into n separate polytopes makes this task much
easier. The separating subroutine must find x l 2 P l that minimizes cx l ; however, since the vector
that is 1 in the eth component and 0 in all other components is in P l for all e such that J l 2 e
and c e - C, the separating routine reduces merely to finding the minimum component c e 0 of c and
returning the vector with a 1 in position e 0 and 0 everywhere else. An immediate consequence of
this is that d = 1. Recall as well that the assumption that p max - n 3 implies that fl is upper
bounded by a polynomial in n.
It is not hard to see that -
ae is 1; HERE IT IS. therefore
(-ae=d) log fl(-ae=d) log(flnd))
By employing binary search over C and the knowledge that the optimal solution has
can obtain an invalid "schedule" in which as many as O(-) jobs are scheduled at one time. If p max
is polynomial in n and m then we have a polynomial-time algorithm; therefore we have proven the
following lemma.
Lemma 3.6 Let C be the solution to the integer program I and assume that jM j is bounded by
mn 4 . There exists a polynomial-time algorithm that produces a solution x such that
    Σ_{e : J_j ∈ e} x_e = 1 for each job J_j,
    Σ_{e : [m_i, t] ∈ e} x_e ≤ O(log n) for each machine-time node [m_i, t],
    Σ_e c_e x_e ≤ O(log n) · C,
    x_e ∈ {0, 1} for all e ∈ F.
This relaxed solution is not a valid schedule, since O(log n) jobs are scheduled at one time;
however, it can be converted to a valid schedule by use of the following lemma.
Lemma 3.7 Consider an invalid schedule S for a set of jobs with release dates on m unrelated
parallel machines, in which at most λ jobs are assigned to each machine at any time. If W is the average weighted completion time of S, then there exists a schedule of average weighted completion time at most λW, in which at most one job is assigned to each machine at any time.
Proof: Consider a job J_j scheduled in S; let its completion time be C^S_j. If we schedule the jobs on each machine in the order of their completion times in S, never starting one before its release date, then in the resulting schedule
1. J_j is started no earlier than its release date,
2. J_j finishes by time at most λC^S_j.
Statement 1 is true by design of the algorithm. Statement 2 is true since at most λC^S_j work from other jobs can complete no later than C^S_j in schedule S, and jobs run simultaneously in schedule S can run back-to-back with no intermediate idle time in our expanded schedule. Therefore, job J_j is started by time λC^S_j − p_j and completed by time λC^S_j.
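The expansion step of Lemma 3.7 is straightforward to implement; here is a short Python sketch (our own illustration, hypothetical names):

def flatten_schedule(pseudo, release, p):
    """Convert a pseudo-schedule with up to lambda simultaneous jobs per machine into a feasible one.

    pseudo[i] : list of (completion_time_in_S, job) pairs assigned to machine i
    release[i][j], p[i][j] : release date and processing time of job j on machine i
    Returns {job: (machine, start, finish)}.
    """
    result = {}
    for i, jobs in pseudo.items():
        t = 0
        for _, j in sorted(jobs):            # process in order of completion time in S
            start = max(t, release[i][j])    # never start a job before its release date
            finish = start + p[i][j]
            result[j] = (i, start, finish)
            t = finish
    return result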
Combining the last two lemmas with the observation that p max - n 3 implies jM j - mn 4 yields
the following theorem.
Theorem 3.8 There is a polynomial-time O(log 2 n)-approximation algorithm for the minimization
of average weighted completion time of a set of jobs with machine-varying release dates on unrelated
machines, under the assumption that the maximum job sizes are bounded by p_max ≤ n^3.
3.4 Large Jobs
Since the p ij are input in binary and in general need not be polynomial in n and m, the technique
of the last section can not be applied directly to all instances, since it would yield superpolynomial-
size formulations. Therefore we must find a way to handle very large jobs without impacting
significantly on the quality of solution.
It is a standard technique in combinatorial scheduling to partition the jobs into a set of large
jobs and a set of small jobs, schedule the large jobs, which are scaled to be in a polynomially-
bounded range, and then schedule the small jobs arbitrarily and show that their net contribution is
not significant, (see e.g. [24]). In the minimization of average weighted completion time, however,
we must be more careful, since the small jobs may have large weights and can not be scheduled
arbitrarily.
We employ several steps, each of which increases the average weighted completion time by a
small constant factor. With more care we could reduce the constants introduced by each step;
however since our overall bound is O(log 2 n) we dispense with this precision for the sake of clarity
of exposition.
The basic idea is to characterize each job by the minimum value, taken over all machines, of its
(release date processing time) on that machine. We then group the jobs together based on the
size of their minimum . The jobs in each group can be scaled down to be of polynomial
size and thus we can construct a schedule for the scaled down versions of each group. We then
scale the schedules back up, correct for the rounding error, and show that this does not affect the
quality of approximation by more than a constant factor. We then apply Lemma 3.9 (see below)
to show that the makespan can be kept short simultaneously.
The resulting schedules will be scheduled consecutively. However, since we have kept the
makespan from growing too much, we have an upper bound on the start time of each subsequent
schedule and thus we can show that the the net disturbance of the initial schedules to the latter
schedules will be minimal.
We now proceed in greater detail. Let m(J_j) = min_i { r_ij + p_ij }, and let J_k = { J_j : n^{k−1} ≤ m(J_j) < n^k }. Note that there are at most n nonempty J_k, one for each of the n jobs. We will employ the
following lemma in order to keep the makespan from growing too large.
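The grouping itself is a one-pass computation; a small Python sketch (our own illustration, hypothetical names):

import math

def group_jobs(r, p, n):
    """Partition jobs into the classes J_k: job j goes to the k with n^(k-1) <= m(j) < n^k,
    where m(j) = min_i (r[i][j] + p[i][j]). Returns {k: [jobs]}."""
    groups = {}
    for j in range(n):
        mj = min(r[i][j] + p[i][j] for i in range(len(p)))
        k = 1 if mj < n else math.floor(math.log(mj, n)) + 1
        groups.setdefault(k, []).append(j)
    return groups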
Lemma 3.9 A schedule S for J k can be converted, in polynomial time, to a schedule T of makespan
at most 2n^{k+1} such that C^T_j ≤ 2C^S_j for each job J_j.
Proof: Remove all jobs from S that complete later than time n k+1 , and, starting at time n k+1 ,
schedule them arbitrarily on the machine on which they run most quickly. This will take at most
n^{k+1} time, so therefore any rescheduled job J_j satisfies C^T_j ≤ 2n^{k+1} ≤ 2C^S_j.
We now turn to the problem of scheduling each J l with a bounded guarantee on the average
completion time.
Lemma 3.10 There exists an O(log 2 n)-approximation algorithm to schedule each J l . In addition
the schedule for J l has makespan at most 2n l+1 .
Proof: Let A be the algorithm referred to in Theorem 3.8. We will use A to find an approximately
optimal solution S l for each J l . A can not be applied directly to J l since the sizes of the jobs
involved may exceed n 3 , so we apply A to a scaled version of J l .
For all j such that J_j ∈ J_l and for all i, set p'_ij = ⌊p_ij / n^{l−2}⌋ and r'_ij = ⌊r_ij / n^{l−2}⌋. Note that on at least one machine i, for each job J_j, r'_ij + p'_ij ≤ n^2.
We use A to obtain an approximate solution to the scaled version of J l of average weighted
completion time W. Although some of the p'_ij may still be large, Lemma 3.9 indicates that restricting the hypergraph formulation constructed by A to allow completion times no later than time 2n^3 will only affect the quality of approximation by at most a factor of 2. Therefore |M|,
the number of (machine, time) pairs, is O(mn 3 ). Note that some of the p 0
ij may be 0, but it is still
important to include an edge in the hypergraph formulation for each job of size 0.
Now we argue that interpreting the solution of the scaled instance as a solution to the original
instance J l does not degrade the quality of approximation by more than a constant factor. The
conversion from the scaled instance to the original instance is carried out by multiplying each p'_ij and r'_ij by n^{l−2} (which has no impact on the quality of approximation) and then adding back to each r_ij and p_ij the residual amount that was lost due to the floor operation.
The additional residual amounts of the release dates contribute at most a total of n l\Gamma1 time to
the makespan of the schedule, since |r_ij − n^{l−2} r'_ij| ≤ n^{l−2}; therefore the entire contribution to the makespan is bounded above by n × n^{l−2} = n^{l−1}. By a similar argument, the entire contribution of
the residual amounts of the processing times to the makespan is bounded above by n l\Gamma1 .
So in the conversion from p'_ij back to p_ij (and r'_ij back to r_ij) we add at most 2n^{l−1} to the makespan of the schedule
for J l . However, n l\Gamma1 is a lower bound on the completion time of any job in J l . Therefore, even if
this additional time were added to the completion time of every job, the restoration of the residual
amounts of the r ij and p ij degrades the quality of the approximation to average completion time
by at most a constant factor. Finally, to satisfy the makespan constraint, we apply Lemma 3.9.
We now construct two schedules S_o and S_e. In S_o we consecutively schedule J_1, J_3, J_5, ...; in S_e we consecutively schedule J_2, J_4, J_6, .... For the sake of clarity our schedule will have time
of length 2n i+1 dedicated to each S i even if S i has no jobs.
Lemma 3.11 Let J o be the set of jobs scheduled in S o and J e the set of jobs scheduled in S e .
The average weighted completion time of S o is within a factor of O(log 2 n) of the best possible for
J_o, and similarly for S_e and J_e.
Proof: The subschedule for any set J_i scheduled in S_o or S_e begins by time O(n^{i−1}), since J_i is scheduled after J_{i−2}, J_{i−4}, ..., and the makespan of each J_l is at most 2n^{l+1}. Since n^{i−1} is
a lower bound on the completion time of any job in J i , in the combined schedule S o or S e , each
job completes within a small constant factor of its completion time in S i .
We now combine S_o and S_e by superimposing them over the same time slots. This creates an
infeasible schedule in which the sum of completion times is just the sum of the completion times in
S_o and S_e, but in which there may be two jobs scheduled simultaneously. We then use Lemma 3.7
to combine S_o and S_e into a single feasible schedule for all the jobs, whose average weighted completion
time is within a factor of O(log^2 n) of optimal.
Theorem 3.12 There is a polynomial-time O(log^2 n)-approximation algorithm for the minimization
of average weighted completion time of a set of jobs with machine-varying release dates on
unrelated machines.
3.5 Scheduling with Periodic Connectivity
The hypergraph formulation of the scheduling problem can model time-varying connectivity between
jobs and machines; e.g. a job can only be processed during certain times on each machine.
In this section we show how to apply our techniques to scheduling problems of periodic connectivity
under some modest assumptions on the length of the period and job sizes.
Definition 3.13 The periodic scheduling problem is defined by n jobs, m unrelated machines, a
period P , and for each time unit of P a specification of which jobs are allowed to run on which
machines at that time.
Theorem 3.14 Let I be an instance of the periodic scheduling problem in which p max is polynomial
in n and m, and let the optimum makespan of I be L. There exists a polynomial-time algorithm
which delivers a schedule of makespan O(log n)(L + P).
Proof:
As above, we assume that L is known in advance, and then use binary search to complete the
algorithm.
We construct the hypergraph integer program as before, with a variable x_e for each edge e and with
edges restricted to (machine, time) pairs (i, t) with t <= L. We include an edge in the formulation if and only
if it is valid with respect to the connectivity conditions. We then use Theorem 3.8 to produce a
relaxed solution that satisfies
   sum_{e : j in e} x_e = 1             for every job j,
   sum_{e : (i,t) in e} x_e = O(log n)  for every (machine, time) pair (i, t),
   x_e in {0, 1}.
Let the length of this relaxed schedule be L'; L' <= L. We construct a valid schedule of length
O(log n)(L' + P) by concatenating O(log n) blocks of length L'. At the end of each block we will
have to wait until the start of the next period to begin the next block; hence we obtain an overall
bound of O(log n)(L + P).
Note that we are assuming that the entire connectivity pattern of P is input explicitly; if it is
input in some compressed form then we must assume that P is polynomial in n and m.
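The "binary search to complete the algorithm" step mentioned in the proof follows a standard pattern. The sketch below is only an illustration: schedule_with_target(instance, L) is a hypothetical routine that is assumed to return a schedule whenever a schedule of makespan at most L exists, and None otherwise.

```python
def find_schedule(instance, makespan_upper_bound, schedule_with_target):
    """Binary search over the (integer) target makespan L.

    schedule_with_target(instance, L) is assumed to succeed whenever the
    optimum makespan of `instance` is at most L, and may fail otherwise.
    Returns the schedule found for the smallest feasible target.
    """
    lo, hi = 1, makespan_upper_bound
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        s = schedule_with_target(instance, mid)
        if s is not None:
            best, hi = s, mid - 1    # feasible: try a smaller target
        else:
            lo = mid + 1             # infeasible: target too small
    return best
```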
One motivation for such problems is the domain of satellite communication systems [18, 26].
One is given a set of sites on Earth and a set of satellites (in Earth orbit). Each site generates a
sequence of communication requests; each request is potentially of a different duration and may
require communication with any one of the satellites. A site can only transmit to certain satellites at
certain times, based on where the satellite is in its orbit. The connectivity pattern of communication
opportunities is periodic, due to the orbiting nature of the satellites.
The goal is to satisfy all communication requests as quickly as possible. We can use our
hypergraph formulation technique to give an O(log n)-approximation algorithm for the problem
under the assumption that the p_j are bounded by a polynomial; the rounding techniques that would
remove this assumption do not generalize to this setting.
Acknowledgments
We are grateful to Phil Klein for several helpful discussions early in this
research, to David Shmoys for several helpful discussions, especially about the upper bound for
average completion time, to David Peleg and Baruch Awerbuch for explaining their off-line approximation
algorithm to us, and to Perry Fizzano for reading an earlier draft of this paper.
--R
Lower bounds on the competitive ratio for mobile user tracking and distributed job scheduling.
Competitive distributed job scheduling.
Deterministic load balancing in computer networks.
Job scheduling in rings.
Bounds for certain multiprocessor anomalies.
Bounds on multiprocessing anomalies.
Bounds for naive multiple machine scheduling with release times and deadlines.
Approximation schemes for constrained scheduling problems.
Using dual approximation algorithms for scheduling problems: theoretical and practical results.
A polynomial approximation scheme for machine scheduling on uniform processors: using the dual approximation approach.
Minimizing average flow time with parallel machines.
Reducibility among combinatorial problems.
Worst case bound of an lrf schedule for the mean weighted flow-time problem
Rinnooy Kan
Rinnooy Kan
Mobile satellite communication systems: Toward global personal communications.
Towards an architecture-independent analysis of parallel algo- rithms
Private communication
Fast approximation algorithms for fractional packing and covering problems.
Probabilistic construction of deterministic algorithms: approximating packing integer programs.
Randomized rounding: a technique for provably good algorithms and algorithmic proofs.
Improved approximation algorithms for shop scheduling problems.
Scheduling parallel machines with costs.
Mobile satellite services for travelers.
--TR
--CTR
Dekel Tsur, Improved scheduling in rings, Journal of Parallel and Distributed Computing, v.67 n.5, p.531-535, May, 2007
Cynthia A. Phillips , R. N. Uma , Joel Wein, Off-line admission control for general scheduling problems, Proceedings of the eleventh annual ACM-SIAM symposium on Discrete algorithms, p.879-888, January 09-11, 2000, San Francisco, California, United States
S. Muthukrishnan , Rajmohan Rajaraman, An adversarial model for distributed dynamic load balancing, Proceedings of the tenth annual ACM symposium on Parallel algorithms and architectures, p.47-54, June 28-July 02, 1998, Puerto Vallarta, Mexico
Martin Skutella, Convex quadratic and semidefinite programming relaxations in scheduling, Journal of the ACM (JACM), v.48 n.2, p.206-242, March 2001 | approximation algorithm;networks;NP-completeness;scheduling |
587963 | Determining When the Absolute State Complexity of a Hermitian Code Achieves Its DLP Bound. | Let g be the genus of the Hermitian function field $H/{\mathbb F}_{q^2}$ and let $C_{\cal L}(D,mQ_{\infty})$ be a typical Hermitian code of length n. In [Des. Codes Cryptogr., to appear], we determined the dimension/length profile (DLP) lower bound on the state complexity of $C_{\cal L}(D,mQ_{\infty})$. Here we determine when this lower bound is tight and when it is not. For $m\leq \frac{n-2}{2}$ or $m\geq \frac{n-2}{2}+2g$, the DLP lower bounds reach Wolf's upper bound on state complexity and thus are trivially tight. We begin by showing that for about half of the remaining values of m the DLP bounds cannot be tight. In these cases, we give a lower bound on the absolute state complexity of $C_{\cal L}(D,mQ_{\infty})$, which improves the DLP lower bound.Next we give a "good" coordinate order for $C_{\cal L}(D,mQ_{\infty})$. With this good order, the state complexity of $C_{\cal L}(D,mQ_{\infty})$ achieves its DLP bound (whenever this is possible). This coordinate order also provides an upper bound on the absolute state complexity of $C_{\cal L}(D,mQ_{\infty})$ (for those values of $m$ for which the DLP bounds cannot be tight). Our bounds on absolute state complexity do not meet for some of these values of m, and this leaves open the question whether our coordinate order is best possible in these cases.A straightforward application of these results is that if $C_{\cal L}(D,mQ_{\infty})$ is self-dual, then its state complexity (with respect to the lexicographic coordinate order) achieves its DLP bound of $\frac {n}{2}-\frac{q^2}{4}$, and, in particular, so does its absolute state complexity. | Introduction
Let C be a linear code of length n. Many soft-decision decoding algorithms for C (such as the
Viterbi algorithm and lower complexity derivatives of it) take place along a minimal trellis for
C. The complexity of trellis decoding algorithms can be measured by various trellis complexities.
The most common one is the state complexity s(C) of C, which varies with the coordinate order
of C. Since the number of operations required for Viterbi decoding of C is proportional to s(C),
it is desirable that s(C) be small. A classical upper bound for s(C) is the Wolf bound
W(C) := min{dim(C), n - dim(C)}, [9]. It is well-known that if C is a Reed-Solomon code, then
s(C) = W(C).
Let [C] denote the set of codes equivalent to C by a change of coordinate order. We write s[C] for
the minimum of s(C) over all coordinate orders of C and call it the absolute state complexity of C.
(Footnote: Research supported by the U. K. Engineering and Physical Sciences Research Council under Grant L88764 at
the Algebraic Coding Research Group, Centre for Communications Research, University of Bristol. Copyright 2000,
Society for Industrial and Applied Mathematics.)
(We note that state-complexity notation and terminology varies in the literature. For example,
state complexity is called minimal trellis size in [2]; absolute state complexity is called absolute
minimal trellis size in [2] and minimal state complexity in [13].) Finding a coordinate order of C
that achieves s[C] is called the 'art of trellis decoding' in [10] since exhaustive computation of s(C)
over all possible coordinate orders of C is infeasible, even for quite short codes. An important step
towards attaining this goal is determining good lower bounds on s[C].
The dimension/length profile (DLP) of C is a deep property which is equivalent to the generalised
weight hierarchy (GWH) of C. (For a survey of GWH, see [15].) The DLP of C is independent of
the coordinate order of C and provides a natural lower bound r(C) for s[C]. For example, if C is
a Reed-Solomon code, then r(C) = W(C), so that s[C] is as bad as possible and uninteresting.
However, determining when s[C] = r(C) is important. An obvious and useful way of doing
this is to find a coordinate order of C for which s(C) = r(C). In particular this provides one
route to the art of trellis decoding. It is also important to develop methods for determining when
r(C) < s[C], and in these cases to improve on r(C).
Geometric Goppa codes generalise Reed-Solomon codes. Hermitian codes are widely studied geometric
Goppa codes which are longer than Reed-Solomon codes and have very good parameters
for their lengths. Let q be a fixed prime power, n = q^3 and g = q(q-1)/2;
we write CL(D, mQ_infty) for a typical Hermitian code of length n defined over F_{q^2}. In [5], we
determined r(CL(D, mQ_infty)) using some of the GWH of Hermitian codes obtained in [11, 16].
(The complete GWH of Hermitian codes has subsequently appeared in [1].) From [5], we have
s(CL(D, mQ_infty)) = r(CL(D, mQ_infty)) = W(CL(D, mQ_infty)) for m <= (n-2)/2 or m >= (n-2)/2 + 2g, so we restrict ourselves to
the interesting Hermitian codes, i.e. to CL(D, mQ_infty) with (n-2)/2 < m < (n-2)/2 + 2g.
Here we determine precisely when r(CL(D, mQ_infty)) = s[CL(D, mQ_infty)]. In the process, we exhibit
a good coordinate order which often gives s(CL(D, mQ_infty)) < W(CL(D, mQ_infty)). We also improve
on the DLP bound (when it is strictly less than the state complexity).
'Points of gain and fall' were introduced in [3, 4, 6, 7] to help determine the state complexity
of certain generalisations of Reed-Muller codes. For these codes, the points of gain and fall had
particularly nice characterisations. For Hermitian codes however, their characterisation is not
quite as nice and so our approach is slightly different. We describe a coordinate order giving
and characterise the points of gain and fall of Cm . We also characterise these
points of gain and fall in terms of runs. This has the advantage of greatly reducing (from n to
the number of trellis depths needed to find s(Cm).
The paper is arranged as follows. Section 2 contains terminology, notation and some previous
results that will be used throughout the paper. The paper proper begins with Section 3. Here we
show that for m 2 I(n; g), just under half of the Hermitian codes cannot attain their DLP bound.
In these cases we give an improvement of the DLP bound, written r { (CL (D; mQ1 )).
The main goal of Section 4 is to characterise the points of gain and fall of Cm in runs. In Section
5 we determine s(Cm ) using Section 4. We show that s(Cm just over half the
Thus we have determined precisely when the DLP bound for Hermitian codes is tight. Furthermore
{ (Cm ) for around a further quarter (respectively 1=q) of m 2 I(n; g) when q is odd
(respectively even).
In conclusion, we have found s[Cm ] for three quarters (respectively one half) of the m 2 I(n; g)
when q is odd (respectively even). For the remaining m 2 I(n; g), we do not know a better
coordinate order (than that described in Section 4) nor a better bound (than that given in Section
3). Thus, although we have reduced the possible range of s[Cm ], some of its actual values remain
open. Finally, our method of characterising points of gain and fall is essentially the same as the
one used to determine r(CL (D; mQ1 )) in [5] and may be able to be used quite generally in
determining DLP bounds and state complexity.
We would like to thank Paddy Farrell for his continued interest and support of our work. An initial
account of some of these results was given in [8].
The state complexity of Hermitian codes has also been studied in [13]. For a stronger version of
[13, Proposition 1] (an application of Clifford's theorem), see [5, Proposition 3.4]. Also, Example
5.11 below generalizes the main result of [13] to arbitrary self-dual Hermitian codes.
Notation and Background
State complexity. Let C be a linear code of length n and 0 <= i <= n. The state space dimension
of C at depth i is

    s_i(C) = dim(C) - dim(C_{i,-}) - dim(C_{i,+}),                      (1)

where C_{i,-} = {c in C : c_{i+1} = ... = c_n = 0} and C_{i,+} = {c in C : c_1 = ... = c_i = 0}. The state
complexity of C is s(C) = max{s_i(C) : 0 <= i <= n}. It is well known that s(C^perp) = s(C). A simple
upper bound on s(C) (and hence on s[C]) is the Wolf bound W(C) = min{dim(C), n - dim(C)}.
We write [C] for the set of codes equivalent to C by a change of coordinate order, i.e. C' in [C]
if and only if there exists a permutation (l_1, ..., l_n) of (1, ..., n) such that C' = {(c_{l_1}, ..., c_{l_n}) : (c_1, ..., c_n) in C}.
Then we define the absolute state complexity of C to be s[C] = min{s(C') : C' in [C]}.
The dimension/length profile (DLP) of C is (k_i(C))_{0 <= i <= n}, where k_i(C) is the largest dimension
of a subcode of C whose support lies in some set of i coordinate positions. Clearly dim(C_{i,-}) <= k_i(C)
and dim(C_{i,+}) <= k_{n-i}(C). The DLP bound on s_i(C) is r_i(C) = dim(C) - k_i(C) - k_{n-i}(C),
and the DLP bound on s(C) is r(C) = max{r_i(C) : 0 <= i <= n}. We will use DLP bound to mean r_i(C) or r(C)
for some C. It is well known that r(C^perp) = r(C).
Since r(C) is independent of the coordinate order of C, r(C) <= s[C]. If r(C) = s(C), we say that C
is DLP-tight; e.g. if r(C) = W(C) then C is DLP-tight.
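To make these definitions concrete, the following Python sketch computes s_i(C) and s(C) for a small binary code given by a generator matrix, using the identity s_i(C) = rank(G restricted to the first i columns) + rank(G restricted to the last n - i columns) - dim(C), which follows directly from the definitions of C_{i,-} and C_{i,+}. It is only an illustration; the codes in this paper live over F_{q^2}, but the computation is field-independent.

```python
def gf2_rank(vectors):
    """Rank over GF(2); each vector is an int whose bits are the entries."""
    pivots = {}                        # highest set bit -> basis vector
    for v in vectors:
        while v:
            b = v.bit_length() - 1
            if b not in pivots:
                pivots[b] = v
                break
            v ^= pivots[b]             # eliminate that bit and continue
    return len(pivots)

def state_profile(G):
    """s_i(C) for i = 0..n, and s(C), for a binary code with generator rows G."""
    n = len(G[0])
    def rank_of_columns(cols):
        return gf2_rank([sum(row[c] << j for j, c in enumerate(cols)) for row in G])
    k = rank_of_columns(range(n))      # dim(C)
    s = [rank_of_columns(range(i)) + rank_of_columns(range(i, n)) - k
         for i in range(n + 1)]
    return s, max(s)

# the [7,4] Hamming code: in this order its state complexity equals the
# Wolf bound min(k, n - k) = 3
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
print(state_profile(G))
```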
Hermitian codes. Our terminology and notation for Hermitian codes for the most part follow [14].
We write H/F_{q^2} for the Hermitian function field. Thus H = F_{q^2}(x, y), where x is transcendental
over F_{q^2} and y^q + y - x^{q+1} is the minimal polynomial of y over F_{q^2}[x]. The genus of H/F_{q^2} is
g = q(q-1)/2. We write P_H for the set of places of H/F_{q^2} and D_H for the divisor group of H/F_{q^2}.
For z in H \ {0} and Q in P_H, we write v_Q(z) for the valuation of z at Q. Thus v_Q(z) < 0 if and
only if Q is a pole of z and v_Q(z) > 0 if and only if Q is a zero of z. Also (z) in D_H is given by
(z) = sum_{Q in P_H} v_Q(z) Q, and for A in D_H, L(A) = {z in H \ {0} : (z) >= -A} union {0}.
There are q^3 + 1 places of degree one in P_H. One of these is the place at infinity, which we denote
Q_infty. We denote the others as Q_1, ..., Q_n, where n = q^3. For the rest of the paper, unless otherwise stated,
n = q^3. We put D = Q_1 + ... + Q_n. For an integer m, L(mQ_infty) is the space of the divisor mQ_infty as above.
The Hermitian codes over F_{q^2} are CL(D, mQ_infty) = {(z(Q_{l_1}), ..., z(Q_{l_n})) : z in L(mQ_infty)} for a
permutation (l_1, ..., l_n) of (1, ..., n). Strictly speaking the code CL(D, mQ_infty) depends on the
permutation (l_1, ..., l_n) and may be better denoted CL(Q_{l_1} + ... + Q_{l_n}, mQ_infty), but
this notation is cumbersome and CL(D, mQ_infty) is standard. Unless otherwise stated, when we write
CL(D, mQ_infty) we have some fixed but arbitrary coordinate order in mind.
From the usual expression for the dimension of geometric Goppa codes,
dim(CL (D; mQ1
When m is understood, stated otherwise. The abundance of
CL (D; mQ1 ) is dim(mQ1 D). For m < n, the abundance is 0 and the code is non-abundant.
For
, so we restrict
our attention to m 2 m, the dual of CL (D; mQ1 ) is
be the pole number sequence of Q1 . Also, for
ig. Thus [1; 1) is the set of pole numbers
of Q1 , (r) is the rth pole number and 1 [R 1 g. We note that
From [14, Proposition
VI.4.1] we deduce that
We note that, for m < n,
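The pole number sequence of Q_infty used throughout can be generated concretely: since x and y have pole orders q and q+1 at Q_infty, the pole numbers of Q_infty form the numerical semigroup generated by q and q+1, and the number of gaps equals the genus g = q(q-1)/2. These are standard facts about the Hermitian function field [14]; the short Python check below is only an illustration.

```python
def pole_numbers(q, limit):
    """Elements of the numerical semigroup generated by q and q+1, up to `limit`.

    For the Hermitian function field these are exactly the pole numbers of
    the place at infinity Q_infty.
    """
    return sorted({a * q + b * (q + 1)
                   for a in range(limit // q + 1)
                   for b in range(limit // (q + 1) + 1)
                   if a * q + b * (q + 1) <= limit})

q = 4
g = q * (q - 1) // 2                   # genus
limit = 2 * g                          # every integer >= 2g is a pole number
poles = pole_numbers(q, limit)
gaps = [i for i in range(1, 2 * g) if i not in poles]
print(len(gaps) == g)                  # the number of gaps equals the genus
```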
State complexity of Hermitian codes. For 0 i n we put D
(where (l xed but arbitrary permutation of We deduce that
In particular
These identities yield s(CL (D; mQ1
Thus we will almost exclusively be interested in m 2 I(n;
In fact, since
restrict our attention
to
deducing results for
It is convenient to put J(n;
Using results of [11, 16], [5, Proposition 5.1] shows
that for m 2 I(n; g),
which is used to prove
Theorem 2.1 ([5, Theorem 5.5]) For
is attained at m 2g
2 cq and equals
min
l um
If CL (D; mQ1 ) is DLP-tight then we just say m is DLP-tight.
3 When the DLP bound is not tight
by [5, Proposition 4.3, Example 4.9], we have
r(CL (D; mQ1
r(CL (D; mQ1
where CL (D; mQ1 ) can have any coordinate order. Such m are therefore DLP-tight and we are
reduced to determining which m 2 I(n; g) are DLP-tight. We note that n 3+ 2g < n, so that the
codes that we are interested in are non-abundant.
In this section we determine the m 2 I(n; g) which are not DLP-tight, i.e with s[CL (D; mQ1 )] >
r(CL (D; mQ1 )): The coordinate order of CL (D; mQ1 ) is arbitrary, so it suces to show that
Table
1: Table of New Notation
xed prime power
r { (CL (D; mQ1 Improved DLP bound for m 2 I(n; g) Denition 3.11
{ (CL (D; mQ1 (Theorem 3.9 and Corollary 3.10)
Finite places of degree one in PH
ab Elements of F q 2 such that q+1
ac Elements of F q 2 such that q
Element of P 1
H such that x(Q a;b;c ab and y(Q a;b;c
Cm Element of [CL (D; mQ1 )] with coordinate order given in Section 4
of points of gain and fall of Cm
fall (m) jP gain (m) \ [1; i]j and jP fall (m) \ [1; i]j
by (j;
gain 0; q q2; q depending on M : dened before Proposition 4.8
fall 0; q+q2
depending on M : dened before Proposition 4.8
3:
Our approach has three steps.
(i) we prove the key lemma, Lemma 3.2, and indicate how this can be used to show that m is not
DLP-tight (Example 3.3)
(ii) a generalisation of the key lemma (Lemma 3.4) and an application 3.5. We indicate how this
can be used to improve on the DLP bound by more than one (Example 3.6).
(iii) an application of Proposition 3.5 to improve the DLP bound for m 2 I(n; g), Theorem 3.9
and Corollary 3.10.
We conclude Section 3 with a table of the improved DLP bound for small values of q and an
analysis of the proportion of those m 2 I(n; g) for which our bound is strictly better than the DLP
bound (Proposition 3.12).
The key lemma. We begin with a clarication of Equations (3) and (4).
Lemma 3.1 For
dim(mQ1 D
and s i (CL (D; mQ1 only if there is equality in both.
Proof. The rst part follows from [5, Lemma 4.1] and the fact that the gonality sequence of
the pole number sequence of Q1 by [12, Corollary 2.4]). The second part then
follows from (3) and (4). 2
We note that Lemma 3.1 implies that a coordinate order is inecient, in the sense of [9], if and only
if there exists an i, 0
dim(mQ1 D i;+ ). To show the stronger result that s(CL (D; mQ1
require a stronger condition on i, namely that it satises
so that s i (CL (D; mQ1 This stronger condition is clearly more likely to hold
attains or is close to attaining r(CL (D; mQ1 )).
For now, we concentrate on determining when the equalities in (5) cannot hold. For these equalities
to hold, dim(mQ1 D must change with
respectively. We shall see that it is possible for both
(i.e. it is possible that both
are pole numbers of Q1 ).
Lemma 3.2 For m n 2+g, it is not possible that dim(mQ1 D
and dim(mQ1 D i;+
Proof. We assume that dim(mQ1 D
derive a contradiction. Suppose we have z
is a principal divisor of H=F q 2 (e.g.
as in the proof of [14, Proposition VII.4.2]), say nQ1
and vQ l i
so that by [14,
Lemma I.4.8])
dim((2m n)Q1 +Q l i
Now (2g 2)Q1 is a canonical divisor of H=F q 2 (e.g. by [14, Lemma VI.4.4] or because 2g 2
is the gth pole number of Q1 and [14, Proposition I.6.2]). Thus dim((2m n)Q1
by the Riemann-Roch theorem, so from (6),
Again by the Riemann-Roch
theorem dim((2g so that
and hence L((2g 2 2m+n)Q1 Q l i
giving the
required contradiction. 2
Example 3.3 Let 3. We show is not DLP-tight.
From (2), we have [1; 11. From Theorem 2.1,
r(CL (D; mQ1
and similarly, r 14 (CL (D; mQ1
implies that s i (CL (D; mQ1 3.1 then implies that
dim(mQ1 D 13;
dim(mQ1 D 14;+
and since the coordinate order of CL (D; mQ1 ) is arbitrary, m is not DLP-tight.
We will see in Section 5 that 14 and 15 are DLP-tight.
Generalisation of the key lemma. Since dim(mQ1 D i
dim(mQ1 Lemma 3.2 can be restated
as: for m n 2+ g, either dim(mQ1 D i
dim(mQ1 D i 1;+ ). This generalises as
Lemma 3.4 For
or (ii) dim(mQ1 D i;+ ) dim(mQ1 D i t;+
Proof. Suppose that dim(mQ1 D
tc. So there are
such that dim(mQ1 D
dim(mQ1 so that, since jfi t
contradicting Lemma 3.2. 2
The following application of Lemmas 3.1, 3.4 is a straightforward consequence of (3),(4).
Proposition 3.5 For
Example 3.6 Let
We have (e.g. by the Riemann-Roch Theorem). From (2), the rst few pole numbers of
Q1 are [1; 16g. From Theorem 2.1, we have r(CL (D; mQ1
so that, from (4),
8g. Thus Proposition 3.5 gives
We shall see in Section 5 that s[CL (D; mQ1
Improvement on the DLP bound. We show how Proposition 3.5 can be used to improve on
the DLP bound generally. First, we introduce some useful notation: q is even and q
if q is odd. For a xed m 2 J(n; g), we put
We easily deduce:
Lemma 3.7 (i) for q odd, 0 M q 3
We begin by reinterpreting Theorem 2.1 in terms of M and M .
Lemma 3.8 For m 2 J(n; g), the DLP bound is attained at
Proof. If u; v are dened as in Theorem 2.1, then
The result now follows from the fact that the DLP bound is attained at m 2g
Next we give our improvement on the DLP bounds for m 2 J(n; g). The size of the improvement
is given by
We note that (m) > 0 if and only if q q2
2 or q M M q q 2 .
Theorem 3.9 For
Proof. First assume that q q2
is attained at
We have two subcases.
(a) For q q2
2 we have 0 < t M +q 2 . Now, from (2), M
1), so that j 1 [m t, and Proposition 3.5 gives
2:
(b) For q M M q 2+q2we have From (2),
2 (N) since M q 2, so that j 1 [m+i n t+1; m+i
Suppose now that q M M q q 2 . From Lemma 3.8, r(CL (D; mQ1 )) is attained at
3.5. Now again we have two subcases.
(a) For q M M q M +1
2 we have 0 < t M +1 q 2 . From (2) (M +1 q
t, and Proposition
3.5 gives s[CL (D; mQ1 )] r(CL (D; mQ1
(b) For q M M q q 2 we have M From (2)
so that j 1 [m so that from Proposition 3.5,
Corollary 3.10 For
Proof. Easy consequence of Theorem 3.9, the denition of (m). 2
Definition 3.11 For m 2 I(n; g), we put r { (CL (D; mQ1
We note that for m 2 I(n; g),
r { (CL (D; mQ1
In
Table
2 we have written r { (m) for r { (CL (D; mQ1 ) and the DLP bound is calculated using
Theorem 2.1. The bold face entries are those for which r { (CL (D; mQ1
(The values of r { (CL (D; mQ1
can of course be deduced from (7).)
Table
2: r { (CL (D; mQ1
r { (m) 3
r { (m) 11 11 11
r { (m) 26 27 27 28 28 28
r { (m) 53 53 54 54 55 56 56 56 56 56
r { (m) 151 151 152 153 153 154 155 156 156 156
r { (m) 157 157 157 158 159 159 159 159 159 159 159
r { (m) 228 229 230 231 231 232 233 234 234 234 235 236 236 236
r { (m) 237 238 238 238 238 239 239 239 239 240 240 240 240 240
We conclude this section by calculating the proportion of m 2 I(n; g) for which (m) > 0.
Proposition 3.12
if q is odd2
Proof. We note rst that jI(n; 1. Recall from the denition of (m) that
Next we note that j 1 (0;
This follows from the denition of (m)
for when q is odd and from n 2+
when q is even. Now,
xing
We note that the restriction M q 1
does not aect
this. We also note that for q even and M
2 , the restriction of M
Thus the result follows from
is even:Thus, for large q at least, r { (CL (D; mQ1 )) improves on r(CL (D; mQ1 )) for just under half the
We shall see in Section 5 that m is DLP-tight when r { (CL (D; mQ1 )) fails to improve
on r(CL (D; mQ1 )).
4 A Good Coordinate Order
We describe a 'good' coordinate order for Hermitian codes, denoting the code in [CL (D; mQ1 )]
with this coordinate order by Cm . After recalling the notions of points of gain and fall for a linear
code, we give the most natural description of the points of gain and fall of Cm in Propositions 4.2
and 4.4. We conclude by characterising the points of gain and fall of Cm as 'runs' in Theorem 4.10
(which we will use in Section 5 to derive a formula for s(Cm ) .)
The good coordinate order. As noted at the beginning of Section 3, for m outside I(n, g) all
coordinate orders of CL(D, mQ_infty) are equally bad with regard to state complexity.
Thus we are interested in m in I(n, g).
Recall that H/F_{q^2} has q^3 + 1 places of degree one, viz. Q_infty and the finite places of degree one,
which we write as P^1_H; here D = Q_{l_1} + ... + Q_{l_n} for some fixed but arbitrary ordering
(Q_{l_1}, ..., Q_{l_n}) of P^1_H. Thus the order of P^1_H determines the
coordinate order of CL(D, mQ_infty). As in [14], for each (α, β) in F_{q^2} x F_{q^2} with
β^q + β = α^{q+1}, there exists a unique place Q_{α,β} in
P^1_H such that x(Q_{α,β}) = α and y(Q_{α,β}) = β.
We now describe an order of P^1_H giving Cm in [CL(D, mQ_infty)]. First we relabel the elements of P^1_H
as Q a;b;c for certain integers a; b; c. We write f0; Now for each
a 2 F q nf0g there exist such that q
for q. Thus for each a 2 F q n f0g, 0 c q 1 and 0 b q, there
exists
H , such that x(Q a;b;c
H .
For exist
Thus the remaining q elements of P 1
H , which we write as Q 0;0;c for 0 c q 1, are such that
We note that Q
When a, b or c takes any of its possible values we write Q ;b;c , Q a;;c or Q a;b; . Note that for a
we have q. Thus there are q places of the form Q 0;;
and for 1 a q 1 there are q 2 1 places of the form Q a;; .
We rst describe the ordering of P 1
H giving Cm 2 [CL (D; mQ1 )] for m 2 J(n; g). This uses
lexicographic order of t-tuples of integers: (i only if there exists u
such that
is dened by simply using the order
of
H . For q M 1
, Cm is dened by the'Order O2' of P 1
H into
three sets
Then Order O2 of P 1
H is given by putting P 1
3 by Q 1;b;c < Q
2 by Q a;b;c < Q a 0 ;b 0 ;c 0 if (a; b; c) < (a
For
the coordinate order of Cm is dened to be that of Cm ? .
From now on, Q_i denotes the ith element of P^1_H ordered as above.
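The relabelling just described rests on the q^3 affine places Q_{α,β}, i.e. the pairs (α, β) in F_{q^2} x F_{q^2} with β^q + β = α^{q+1}. As a concrete illustration (the specific (a, b, c) relabelling above is garbled in the source, so this is only a sketch of the underlying enumeration and one possible ordering), the following Python fragment lists the affine places for q = 2, with F_4 coded as {0, 1, 2, 3}, and sorts the pairs lexicographically.

```python
# F_4 = {0, 1, w, w+1} encoded as 0, 1, 2, 3: addition is XOR, and the
# nonzero elements form a cyclic group of order 3 generated by w (= 2).
LOG = {1: 0, 2: 1, 3: 2}
EXP = [1, 2, 3]

def mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 3]

def power(a, e):
    r = 1
    for _ in range(e):
        r = mul(r, a)
    return r

q = 2
points = [(alpha, beta)
          for alpha in range(q * q)
          for beta in range(q * q)
          # Hermitian curve: beta^q + beta = alpha^(q+1)  ('+' is XOR here)
          if power(beta, q) ^ beta == power(alpha, q + 1)]

points.sort()                     # one possible coordinate order
print(len(points) == q ** 3)      # q^3 affine places of degree one
```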
The points of gain and fall of Cm. Points of gain and fall were introduced in [3, 6]. For this
paragraph, C is a length n linear code with dimension k. We note that dim(C_{i,-}) (as defined in
Section 2) increases in unit steps from 0 to k and dim(C_{i,+}) decreases in unit steps from k to 0 as
i increases from 0 to n. If 1 <= i <= n then
  i is a point of gain of C if dim(C_{i,+}) = dim(C_{i-1,+}) - 1;
  i is a point of fall of C if dim(C_{i,-}) = dim(C_{i-1,-}) + 1.
These definitions are motivated by (1). We note that there are k points of gain and k points of fall.
Points of gain and fall describe the local behaviour of a minimal trellis, [6], and being able to give
a succinct characterisation of them for particular families of codes has been useful in calculating
their state complexity, e.g. [3, 6]. The same proves to be the case here. We note that,
as in [6], i is a point of gain of Cm if and only if i is the 'initial point' of a codeword of Cm, i.e. if
and only if there exists z in L(mQ_infty) such that z(Q_i) != 0 and z(Q_j) = 0 for all j < i.
Similarly i is a point of fall of Cm if and only if i is the 'final point' of a codeword of Cm, i.e. if
and only if there exists z in L(mQ_infty) such that z(Q_i) != 0 and z(Q_j) = 0 for all j > i.
We write P_gain(C) and P_fall(C) for the sets of points of gain and fall of C.
We also write P_gain(m) := P_gain(Cm) and P_fall(m) := P_fall(Cm).
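Because a point of gain (respectively fall) is simply the initial (respectively final) support position of some codeword, both sets can be read off by brute force for any small code, and the running count of gains minus falls recovers the state profile s_i. The sketch below (plain Python, over GF(2) for simplicity, and not specific to Hermitian codes) illustrates this.

```python
from itertools import product

def gain_and_fall(G):
    """Points of gain / fall of a small binary code given by generator rows G.

    A point of gain is the first nonzero position (1-based) of some codeword,
    a point of fall is the last nonzero position of some codeword.
    """
    k, n = len(G), len(G[0])
    gains, falls = set(), set()
    for coeffs in product([0, 1], repeat=k):        # all 2^k codewords
        word = [sum(c * row[j] for c, row in zip(coeffs, G)) % 2
                for j in range(n)]
        support = [j + 1 for j, bit in enumerate(word) if bit]
        if support:
            gains.add(support[0])
            falls.add(support[-1])
    # state space dimension at depth i: gains seen so far minus falls seen so far
    s = [sum(1 for p in gains if p <= i) - sum(1 for p in falls if p <= i)
         for i in range(n + 1)]
    return sorted(gains), sorted(falls), s

# tiny example: a [4,2] code with gains {1,2}, falls {3,4} and s = [0,1,2,1,0]
print(gain_and_fall([[1, 1, 1, 0],
                     [0, 1, 0, 1]]))
```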
We will need a function closely related to . Dene
We have [1; Im() from [14]. We note that
and for m < Proposition VII.4.3]. For 0 a q 1, we put
and
Also we
put
a=0 A(a) and
We will determine the initial and nal points of certain z 2 H=F q 2 of the form
that (x ab )(Q a 0 ;b only if a =
a only if a = a course, we are interested in
when
Lemma 4.1 If (j; l) 2 1 [0; m],
Proof. We put z Using the facts
that (i) vQ1
fQ1g. Hence (j; l) 2 1 [0; m] implies that z jl 2 L(mQ1 ). 2
Proposition 4.2 (O1 ordering of P 1
1. P gain
2. P fall
Proof. We order the set A by ab < a 0 b 0 if and only if (a; b) < (a
and only if Q a;b; < Q a 0 ;b 0 ; . For 0 d q 2 1, we write d for the (d 1)st element of A.
Thus
a(d) by
Thus
and
(y ac
We begin with P gain (m). For (j; l) 2 1 [0; m] we put
We note that jq (j; l) m n 2
which implies that j < q 2 +q 1
that u gain
jl and z gain
jl are well-dened for all (j; l) 2 1 [0; m]. Now u gain
only
Hence the initial point
of z gain
jl is jq Also, by Lemma 4.1, z gain
Finally,
each (j; l) 2 1 [0; m] gives a dierent point of gain of Cm and, since j 1 [0; are
all the points of gain. Similarly for points of fall. 2
We use Proposition 4.2 to determine s(Cm ) for
To do this we use
and so we put
gain
fall
fall (Cm
Example 4.3 If
C 4 is our rst example of a geometric Goppa code with
where the latter is given by Theorem 2.1.)
Proof. The coordinate order of C 4 is Q
In the notation of Proposition 4.2, we have
Now P gain (4) is the set of initial points of z gain
jl , where (j; l) 2 1 [0; 4]. These are given in the table
below. The third column in the table gives the 'initial place' i.e. the Q a;b;c such that Q
where i is the initial point.
(j; l) z gain
jl Initial Place Initial Point
Thus P gain given by the nal points of z fall
jl such that (j; l) 2
(j; l) z fall
jl Final Place Final Point
Thus P fall using (9) we have
gain
giving
For
2:
Proposition 4.4 (O2 ordering of P 1
gain (m) and P fall
fall (m) where
Proof. We recall that P 1
3 were dened in (8). We note that
1 , so that writing
and
for gain (q
2 and
for gain (q
3 .
We begin by showing that P 1
gain (m) P gain (m). For (j; l) 2 1 [0; m] such that 0 j q and
we exhibit an element of L(mQ1 ) with initial point
Thus v gain
l (Q a;;c only if a = 1, and 0 c l 1 and u gain (Q a;b; only if
l
(taking
q). Hence the initial point of z gain
jl is 1. Also, from Lemma 4.1,
z gain
gain (m) P gain (m).
Next we show that P 2
gain (m) P gain (m). We order A n A(1) by ab < a 0 b 0 if and only if (a; b) <
for the (d+1)st element of AnA(1), where 0 d q 2 q 2. (This is dierent
from the labelling in the proof of Proposition 4.2 since we do not include A(1) in the relabelling.)
We dene a(d) by
writing
set
z gain
We note that jq (j; l) m gain (q which implies that j q 2 q 2.
Thus
jl and z gain
are well-dened for all (j; l) 2
jl (Qd a(d)c l 1.
Thus
Therefore the
initial point of z gain
is gain (q
and (z gain
Hence z gain
gain P gain (m) and
gain (m) P gain (m).
it remains to show that jP 1
To do this we exhibit a bijection 1 [0; m] !
gain (m). First, for (j; l) 2 1 [0; m] we map (j; l) to l(q
gain (m) if
Now we are left with dening a bijection F :
gain (m) by
F (j;
(j
It is easy to check that F maps into P 2
gain (m) and F is one-to-one since for gain l q 1 and
Finally we prove F is onto.
For
gain (m), such that
(j
(j
It is straightforward to see that (i) (j
This completes the proof for P gain (m). Similarly for the points of fall. 2
Example 4.5 If
{ (CL (D; 13Q1 using Theorem 3.9, but s(C 13
Proof. The coordinate order of C 13 is
We use the notation of the proof of Proposition 4.4. We note that gain = 1. Thus for 0 j q
and 0 l gain 1, jq
gain (13) is the set
of initial points of z gain
which are as follows.
(j; l) z gain
jl Initial Place Initial Point
(3;
Thus
gain (13). Now we have
so that 2. Then P 2
gain (13) is the set of initial points of
z gain
such that (j; l) 2 1 [0; 13 gain (q
giving the following.
Initial Place Initial Point
(0;
(0;
Thus
for P fall (13). We have P gain
From Propositions 4.2 and 4.4 we have, if (i) 0 M q M 2or (ii) q M M q or (iii)
In these cases the
following useful property holds.
Remark 4.6 For a length n code C, if P fall
In particular, for m 2 J(n; g), if (i) q is odd and 0 M q M 2or q M M q
or (ii) q is even and 0 M q, then s i (Cm n. The same holds for
Proof. The proof is similar to that of [6, Proposition 2.5], and in fact can be modied to hold
for branch complexity as in [6, Proposition 2.5]. We put P i;+
Of course, with
gain (C) and P i;+
fall (C)
for any linear code C. The condition P fall implies that also
fall (C) and P
gain (C):
Thus, from (9), we have
gain (C)
is ordered by O1 then as in the proof of Proposition 4.2
so that
by O2, P gain
Thus O2 is strictly better than O1 for m=14.
If C 2 [CL (D; 15Q1 )] is ordered by O1, then then as in the proof of Proposition 4.2, P gain
But if C is ordered by O2, we get P gain
Thus O1 is strictly better than O2 for m=15.
To summarise, for { (CL (D; mQ1 )). Thus, in these
cases s(Cm and the coordinate order for Cm is optimal with regard to s(Cm ).
In fact, except for
Another characterisation of the points of gain and fall of Cm . We now characterise
gain (m) and P fall (m) as runs i.e. as sequences of non-contiguous intervals of integers. This is
useful since s(Cm ) must be attained at the end of a run of points of gain. Thus to determine
s(Cm ), we only need to nd the maximum of s i (Cm ) over those i that end a run of points of gain,
i.e. over those i such that
We begin by combining Propositions 4.2 and 4.4 for a common development of the cases (i) 0
2 or q M
. First we extend the
denitions of gain and fall as follows:
and
Proposition 4.8 For
gain (m) and P fall
fall (m) where
Proof. From the examples above and Remark 4.7, we can assume that q 4. For q M 2
M q M +1, the result is just a restatement of Proposition 4.4. Also, for
the result states that P gain
in agreement with Proposition 4.2.
So we are reduced to m such that q M
Rewriting
see that P 1
qg.
We claim that P 2
1g. Firstly, if 0 j q and
4. Thus we need to show that
If k is in the left-hand side,
(l
In either case, 0 j q, 0 l q 1 and l(q that k is in the
right-hand side. The reverse inclusion is similar.
The result now follows from Proposition 4.2 since for q M
gain (m)+1
and
Lemma 4.9 If
2.
Proof. Straightforward using Lemma 3.7. 2
Theorem 4.10 For
1. P gain (m) is the union of
(b) fm 2g gain
(c) fm 2g gain
and
2. P fall (m) is the union of
(a) [n m+ 2g
(b) fn m+ 2g
(c) fn m+ 2g
Proof. As in the proof of 4.8, we assume that q 4. We will use the fact that
For convenience we put R 1
e q
We show that R 1
gain (m) P gain (m) in two steps. First we note that P 1
since for q 4, 0 j q and 0 l gain 1 q 1, (j; l)
Next we show that [ gain (q
gain (m). Now from (10) we have for
and 0 l q 1:
Also, if 0 j
and 0 l q 1 then, again using (10),
(j; l)
so that (j; l) 2
gain (m). Next we show that R 2
gain
gain 1. Then, from (10),
and using (10),
gain (m) if (e
that R 2
gain
gain (m). If q 1 gain e q 1 and 0 f q 1 e then
and f gain , so that R 3
gain
gain (m).
Thus
gain (m) P gain (m) and it suces to show that j
that jP gain
[
gain (m)
The proof for P fall (m) is similar and we omit the details. 2
5 When the DLP bound is tight
Here we use Theorem 4.10 to determine s(Cm ). We know (from Corollary 3.10 and Proposition
3.12) that s[CL (D; mQ1 )] > r(Cm ) for just under half of the m in the range I(n; g). We show
that for the remaining m in this range, s(Cm As a consequence, we have determined
s[CL (D; mQ1 )] and a coordinate order that achieves s[CL (D; mQ1 )] for such m. For those m
with s(Cm ) > r(CL (D; mQ1 )) we compare the upper bound, s(Cm ), on s[CL (D; m1 )] with the
lower bound r { [(C L (D; mQ1 )] given in Corollary 3.10. When q is odd, these bounds meet for
over three-quarters of those m in I(n; g), but when q is even, the bounds meet for only a little over
one half of those m in I(n; g).
Determining s(Cm ). As discussed in Section 4, it suces to nd the maximum of s i (Cm ) over
those i such that
(m). From Theorem 4.10, there are only q
i. Thus concentrating on these i is signicantly simpler. So we calculate s
values of i (in Proposition 5.5) by determining P
gain (m) and P
fall (m) (in Lemmas 5.1 and 5.4).
We determine which of these i gives the largest s i (Cm ) (in Lemma 5.6). This enables us to write
Theorem 5.7).
Early on we introduce a variable which plays a crucial role in the proofs and statements
of many of the results and we end with a table of s(Cm
8g.
We begin by determining s(Cm We note rst that
were dened just before Proposition 4.8.
As noted above, s i (Cm
From
Theorem 4.10 such i are either (i) of the form m 2g gain
or (ii) of the form m 2g
Thus putting
we have
From
gain (m); so we wish to determine P
gain (m) and P
fall (m) for
1. The rst of these is straightforward.
Lemma 5.1 For
Proof. Since 1. For 1 e q 2 gain , Theorem 4.10
gives
e
The rst case follows since
In the
second case,
q e
fall (m) it is convenient to introduce some more notation. For xed m we put
Thus norm is 0, 1 or 2 depending on whether
Also we put
3:
In Lemma 5.4 and Propositions 5.5 we will see a symmetry between the roles of e in P
gain (m)
and e in P
fall (m). We will see in Lemma 5.6 that s i e (Cm ) is maximised near and hence
appears naturally in our formula for s(Cm ).
Lemma 5.2 q 1 2q 3.
Proof. First, it follows from Lemma 3.7 that
is even and M > 0
is even and M
Next, clearly 2q 2, with equality only if M 1. However, from Lemma
2 so that norm 1. 2
Now, in order to use Theorem 4.10 to calculate P
fall (m), we need to write i e as n m+2g+ fall
preferably non-negative, integer e 0 and 0 f q 1. We could then determine
an expression for
fall (m) in terms of e 0 and f in a similar way to the proof of Lemma 5.1, except
that f would add complications. This would give us an expression for s i e (Cm ) in terms of e, e 0
and f . To maximise this over 1 e q 1 we would need to relate e 0 and f to e. Fortunately
these relationships are reasonably simple.
Lemma 5.3 Let m 2 I(n; g) and 1 e q 1. If we write
then e
e for q 1 gain e q 1.
In particular e 0 0. Also if e q
Proof. For
Now
giving norm )q which implies that
e. Similarly, for q 1 gain e q 1 we get
For the second part we have q 1 (from Lemma 5.2) and f e (from the rst part). Thus
We show that, for e q
it is not possible that Firstly implies that e q 1 gain . Also
imply that e fall . Thus q 1 gain e fall so that,
adding gain to both sides,
Now, as in (12), implies that either (i)
2 and
Each of these clearly contradicts (13). 2
Lemma 5.4 For
Proof. We as in Lemma 5.3 and work from
Theorem 4.10.
First, if e 0 q, i.e. if e q, then P
We note also that, for e q, q
Next, if q 1 fall e 0 q 1, i.e. if q
the last equality following from the second part of Lemma 5.3. Finally (since e 0 0 by Lemma
and q since q 1 and f e, by Lemma 5.3. 2
We use the convention that, for b 0, a
In particular
a
a for a 0
0 for a 0,
a
1 for a 0
0 for a < 0,
a
a 1
a 1
where b 1. Lemmas 5.1 and 5.4, together with (9), give
Proposition 5.5 For
Now we determine for which e, 1 e q 1, s i e (Cm ) is maximised.
Lemma 5.6 For
1. at
2. at
Proof. From Proposition 5.5, with
q e
we have s i e (Cm maximising s i e (Cm ) is equivalent to minimising (e) over 1
e q 1. Now, for 0 e q 1,
q e
Thus, since 0 q 2 gain e 1, we have
First, for 0 e q implies that (e) (e 1) 2q so that (e) is
minimised over 1 e q + 1 at 1. Thus it is sucient to determine where (e)
is minimised over q We note that, since 2q 3 (Lemma 5.2),
l m
Similarly, for q
that if e b
1. if d e q
2. if b
e.
This leaves the case b
. In this case, the
above analysis implies that (e) is minimised at either b
d
Also we have
fall gain
so that (e) is minimised at d e.
Finally we note that if 2q 3 2 fall then 2q so that adding
sides and dividing by 2 gives
l m
and we are in case 1 above. Also if we have d
we are in case 1. Similarly for 2q 6 2 fall we are in case 2 above. 2
Proposition 5.5 and Lemma 5.6 give us
Theorem 5.7 For
Proof. The result follows since
1. for 2q 6 2 fall , q 2 gain b
2. for
3. for 2q 4 2 gain , q 2 gain d
For example, 2q 6 2 fall implies that b
so that
The other equalities and inequalities follow similarly. 2
Of course, Theorem 5.7 essentially gives the values of s(Cm ) for I(n; g) since
Table
Comparing these values of s(Cm )
with the values of r { (CL (D; mQ1 )) given in Table 2 (where r { (CL (D; mQ1 )) is as dened in
Denition 3.11), we have s(Cm { (CL (D; mQ1 )) except for
281g. In particular, s(Cm ) achieves
the DLP bound for Cm for q 2 f2; 3; 4; 5; 7; 8g and m 2 I(n; g) when this is not excluded by
Corollary 3.10 i.e. whenever the entry for m or m ? in Table 2 is not in boldface.
Table
3: s(Cm ) for q 2 f2; 3; 4; 5; 7; 8g and m 2 J(n; g)
28 28 28
Comparing s(Cm ) with r { (CL (D; mQ1 )) We start by reinterpreting r(CL (D; mQ1 )) in terms
of in Theorem 5.8. We use this to calculate (in Proposition 5.9) and hence to show (in Corollary
5.10) that s(Cm this is not excluded by Corollary 3.10 . This means
that s(Cm ) achieves the DLP bound for Cm for just over half of those m in the range [
We then compare s(Cm ) with r { (CL (D; mQ1 )) in Table 4 and see that s(Cm ) achieves the bound
r { (CL (D; mQ1 )) for approximately a further quarter of those m in [ n 1; n 3+2g] if q is odd but
only for about a further 1=q of those m in [ n 1; n 3+ 2g] if q is even.
Previously we partitioned J(n; g) into three subintervals, according to whether 0 M q M 2
Now we consider a ner partition and say that
according to whether (A) 0 M q 2
2 or
We compare s(Cm ) with r { (CL (D; mQ1 )), by reinterpreting Theorems 3.9 and 5.7 using (A){(E).
Theorem 5.8 If m 2 J(n; g), then
r(CL (D; mQ1
Proof. Take u and v as in the statement of Theorem 2.1. It is straightforward to show, using
the characterisation of (u; v) given in the proof of Lemma 3.8, that if m satises (A), (C) or
2. Thus Theorem 2.1 implies that, for m satisfying (A), (C) or (E),
r(CL (D; mQ1
min
and for m satisfying (B) or (D),
r(CL (D; mQ1
min
l m
First, for m satisfying (A), (C) or (E) we have (i) d +1e q M 1 if norm 2 f0; 1g or (ii)
2. Also gain
or (iii) for . Thus, for m satisfying (A), (C)
or (E), r(CL (D; mQ1 )) is equal to
as required. Similarly, for m satisfying (B) or (D) (so that norm 1) it is easy to see that
(by considering the cases that
1. Thus, for m satisfying (B) or (D),
r(CL (D; mQ1
as required. 2
Before comparing s(Cm ) with r { (CL (D; mQ1 )), we compare it with r(CL (D; mQ1 )). To do this
we rene (A){(E) as follows: if m satises (C) then we say that m satises (C1), (C2) or (C3) if
Proposition 5.9 For
Proof. Using it is straightforward to see that if
1. if m satises (A), (B), (D) or (C3), then 2q 2 fall 4,
2. if m satises (C1) or (E), then 2q 6 fall or
3. if m satises (C2), then
Also, if
The result then follows from Theorems 5.7 and 5.8 noting that, for cases (B) and (D), gain
cases (C1) and (E) with M q 1,
It follows from Proposition 5.9 that s(Cm ) achieves the DLP bound for Cm as often as this is
possible. We state this as
Corollary 5.10 For only if
Proof. Since for
and s(Cm suces to show the result for m 2 J(n; g). It follows from the denition
of (m) for such m that only if (i) m satises (A) or (ii) m satises (C3) or (iii)
q. These are exactly the values of M for which Proposition 5.9 implies that
Example 5.11 If Cm is self-dual, then r(Cm) = s(Cm) = n/2 - q^2/4, where Cm has the lexicographic
coordinate order. In particular, s[Cm] = n/2 - q^2/4. We know that q is a power of 2,
m = (n + 2g - 2)/2 and m is in J(n, g). From the
definitions,
4 by Theorem 5.8. The result
now follows since
We remark that the main result of [13] is Example 5.11 with q 4. Corollary 5.10 and Proposition
3.12 imply that r(Cm ) is attained for just over half the m 2 I(n; g). Explicitly, the proportion of
these m for which the DLP bound is attained is 1+ 1
for q odd and 1+ 3q 5
for q even. Of
course Corollary 5.10 implies that if m satises (A), (C3) or M = q is odd, then
s[CL (D; mQ1
The bounds on s[CL (D; mQ1 )] given by Theorem 3.9 and Proposition 5.9 for all m in J(n; g)
(and hence implicitly also for
are given in Table 4. The lower bound is
Table
4: Table of Bounds on s[CL (D; mQ1
Lower Bound Upper Bound
satises r(CL (D; mQ1))+ r(CL (D; mQ1))+ Range
(D) M +M
r { (CL (D; mQ1 )) and the upper bound is s(Cm ). The entries for both bounds are the amount by
which they exceed r(CL (D; mQ1 )). The range is the upper bound minus the lower bound.
As well as those m for which s(Cm implies that
{ (CL (D; mQ1
for those m 2 J(n; g) such that
if q is odd
Hence (15) also holds for those
In all these
cases except M 2 and M
s[CL (D; mQ1
For
3 we have
s[CL (D; mQ1 2:
For q odd, this gives q 2 1values of m 2 I(n; g) for which s[CL (D; mQ1 )] is determined but is
strictly greater than r(CL (D; mQ1 )). Thus, for q odd, the total proportion of those m in I(n; g)
for which we have determined s[CL (D; mQ1 )] is2
For q even, it gives q 2 values of m 2 I(n; g) for which s[CL (D; mQ1 )] is determined but is
strictly greater than r(CL (D; mQ1 )). Thus, for q even, the total proportion of those m 2 I(n; g)
for which we have determined s[CL (D; mQ1
Thus we have determined s[CL (D; mQ1 )] for over three quarters of those m in I(n; g) when q is
odd but only for something over one half of those m in I(n; g) when q is even. For q odd, the rst
m for which s[CL (D; mQ1 )] is not determined is (when it is either 56 or 57),
and for q even the rst m for which s[CL (D; mQ1 )] is not determined is
it is either 236 or 237).
--R
The Weight Hierarchy of Hermitian codes.
On the state complexity of some long codes.
On the trellis structure of GRM codes.
Lower bounds on the state complexity of geometric Goppa codes.
On trellis structures for Reed-Muller codes
On a family of abelian codes and their state complexities.
Bounds on the state complexity of geometric Goppa codes.
Foundation and methods of channel encoding.
On the generalized Hamming weights of geometric Goppa codes.
On special divisors and the two variable zeta function of algebraic curves over
Bounds on the State Complexity of Codes from the Hermitian Function Field and its Sub
Algebraic Function Fields and Codes.
Geometric approach to higher weights.
On the weight hierarchy of geometric Goppa codes.
--TR
--CTR
T. Blackmore , G. H. Norton, Lower Bounds on the State Complexity of Geometric Goppa Codes, Designs, Codes and Cryptography, v.25 n.1, p.95-115, January 2002 | hermitian code;dimension/length profile bound;state complexity |
587964 | Efficiency of Local Search with Multiple Local Optima. | The first contribution of this paper is a theoretical investigation of combinatorial optimization problems. Their landscapes are specified by the set of neighborhoods of all points of the search space. The aim of the paper consists of the estimation of the number N of local optima and the distributions of the sizes $(\alpha_j)$ of their attraction basins. For different types of landscapes we give precise estimates of the size of the random sample that ensures that at least one point lies in each attraction basin. A practical methodology is then proposed for identifying these quantities ($N$ and $(\alpha_j)$ distributions) for an unknown landscape, given a random sample of starting points and a local steepest ascent search. This methodology can be applied to any landscape specified with a modification operator and provides bounds on search complexity to detect all local optima. Experiments demonstrate the efficiency of this methodology for guiding the choice of modification operators, eventually leading to the design of problem-dependent optimization heuristics. | Introduction
In the field of stochastic optimization, two search techniques
have been widely investigated during the last decade: Simulated Annealing [25] and
Evolutionary Algorithms (EAs) [6, 7]. These algorithms are now widely recognized
as methods of order zero for function optimization as they impose no condition on
function regularity. However, the efficiency of these search algorithms, in terms of
the time they require to reach the solution, is strongly dependent on the choice of
the modification operators used to explore the landscape. These operators in turn
determine the neighborhood relation of the landscape under optimization.
This paper provides a new methodology for estimating the number and the
sizes of the attraction basins of a landscape specified in relation to some modification
operator. This allows one to derive bounds on the probability that one samples a
point in the basin of the global optimum, for example. Further, this method could be
used for guiding the choice of efficient problem-dependent modification operators or
representations.
Formally, a landscape L is specified by the function f to optimize together with a
modification operator that is applied to elements of the search
space E. The structure of the landscape heavily depends on the choice of the modification
operators, which in turn may depend on the choice of the representation (the
coding of the candidate solutions into binary or Gray strings, for example). Hence,
before the optimization process can be started, there are a number of practical choices
(representation and operators) that determine the landscape structure. Consequently,
these choices are often crucial for the success of stochastic search algorithms.
Some research has studied how the fitness landscape structure impacts the potential
search difficulties [13, 21, 22, 26]. It is shown that every complex fitness landscape
can be represented as an expansion of elementary landscapes (one term in the Fourier
expansion), which are easier to search in most cases. This result has been applied to
solve a difficult NP-complete problem [20] (the identification of a minimal finite k-state
automaton for a given input-output behavior), using evolutionary algorithms. Other
theoretical studies of search feasibility consider the whole landscape as a tree of local
optima, with a label describing the depth of the attraction basin at each node
[16, 19]. Such a construction naturally describes the inclusion of the local attraction
basins present in the landscape. These studies investigate tree structures that ensure
a minimal correlation between the strength of the local optima and their proximity to
the global optimum, with respect to an ultra-metric distance on the tree. However,
from a practical point of view, the tree describing the repartition of local optima is
unknown and too expensive in terms of computational cost to determine for a given
landscape.
The lack of an efficient method at reasonable cost that allows one to characterize
a given landscape motivates the construction of heuristics for extracting a priori
statistical information about landscape difficulty, for example based on random sampling
of the search space. We cite from the field of evolutionary algorithms: Fitness
Distance relations, first proposed in [8] and successfully used to choose problem-dependent
random initialization procedures [11, 14]; Fitness Improvement of evolution
operators, first proposed in [5], then extended and successfully used to choose binary
crossover operators [12] and representations [9]. However, even if such heuristics can
guide the a priori choice of some EA parameters, they do not give significant information
about landscape structure; for instance, recent work suggests that very different
landscapes (leading to different EA behaviors) can share the same fitness distance
relation [18, 10]. Further, the efficiency of such summary statistics is limited to the
sampled regions of the space, and therefore does not necessarily help the long-term
convergence results, as implicitly illustrated in [12] for example. This gives strong
motivation for developing tools that allow one to derive more global (beyond the
sampled regions) information on the landscape at hand, relying on an implicit assumption
of stationarity of the landscape. Along that line, this paper proposes a new
method to identify the number and the repartition of local optima with respect to
a given neighborhood relation of a given landscape. The proposed method applies
to any neighborhood relation specified with a modification operator, and hence provides
a practical tool to compare landscapes obtained with different operators and
representations.
The framework is the following. We assume that the search space E can be split
into the partition E_1, ..., E_N of subspaces which are attraction basins of local maxima
m_1, ..., m_N of the fitness function. We also assume that there exists a local search
algorithm (for example a steepest ascent) which is able to find, from any point of the
search space, the corresponding local maximum: it maps every x in E_j to m_j.
The basic problem consists in detecting all local maxima m_j. This is equivalent
to finding a way to put a point in every attraction basin, because the local search
algorithm will complete the job. We shall develop the following strategy. First we
shall study the direct problem, which consists in studying the covering of the search
space by a collection of randomly distributed points when the partition (E_j) is known.
Second we shall deal with the inverse problem, which consists in estimating the number
of local maxima from information deduced from the covering.
Direct problem (Section 4): One puts M points randomly in the search space.
The question is the following: Given the statistical distribution of the relative sizes of
the attraction basins and their number N, what is the probability p_{N,M} that at least
one point lies in every attraction basin? This probability is very important. Indeed,
using the local search algorithm, it is exactly equal to the probability of detecting all
local maxima of the function.

Fig. 1. Schematic representations of the search space E with its attraction basins. Points
have been randomly placed on both pictures. As a result there is at least one point in each
attraction basin in the left picture, but not in the right picture, where E_4 is empty.

Inverse problem (Section 5): The statistical distribution of the relative sizes of
the attraction basins and their number are assumed to be known for computing p_{N,M}
in Section 4. Unfortunately, this is rarely the case in practical situations, and one
wants to estimate both. The strategy is to put M initial points randomly in the search
space and to detect the corresponding local maxima by the local search algorithm.
The data we collect is the set (β_j)_{j>=1} of the numbers of maxima detected with exactly j initial
points. Of course β_0 is unknown (the number of local maxima of the landscape that have
not been detected). The question is the following: How can the total number of local
maxima N be efficiently estimated from the set (β_j)_{j>=1}? A lower bound
is of course the number of detected maxima, sum_{j>=1} β_j,
but we aim at constructing a better estimator.
The paper is divided into three parts. First, Section 4 addresses the direct problem
of sample sizing, in the case of basins of random sizes and then in the case of basins of
equal sizes. Second, Section 5 is devoted to the estimation of the distribution of
the relative basin sizes for an unknown landscape, using a random sample from the
search space. This is achieved by a two-step methodology: Section 5.2 starts by
considering a parametrized family of laws for the relative sizes of basins, for which
it derives the corresponding covering of the search space (law of (β_j)). Then Section
5.3 comments on how these results can be practically used for characterizing the
sizes of basins of an unknown landscape. For instance, it proposes to compare the
covering of an unknown landscape (given by the empirically observed (β_j) values) to
the coverings studied in Section 5.2. Finally, the last part of the paper (Section 6) is
devoted to some experiments that validate (Section 6.1) and illustrate (Section 6.2)
the methodology: First, a landscape is purposely designed to test the reliability of the
method according to the size of the random sample, and to the number of local optima
(recall the theoretical results are asymptotic with respect to N and M). Second, the
method is used to investigate some problems known to be difficult to optimize for
EAs. For each problem, we also compare the landscapes related to different mutation
operators.
2. Notations and Denitions. Consider a tness f R, and a neighborhood
relation induced by a modication operator , such that the number of dierent
-neighbors (neighbors that can be obtained by one application of to x) of x
is 'bounded'. In the following, we denote by N the number of local optima of L,
4 J. Garnier and L. Kallel
and by ( j ) the random variables describing the sizes of the attraction basins of L
(normalized to the average size). As shown in [23, 24], a local improvement algorithm
is e-cient to nd quickly a local optimum starting from some given point. Among the
possible algorithms we present the Steepest Ascent (SA) also called optimal adjacency
algorithm in [23]:
Steepest Ascent Algorithm (SA).
Input: A tness R, an operator and a point X 2 E.
Algorithm: Modify X by repeatedly performing the following steps:
- Record, for all -neighbors of X denoted by i (X): (i; f( i (X)))
chosen such that f( i (X)) reaches the highest
possible value (this is the steepest ascent).
- Stop when no strictly positive improvement in -neighbors tnesses has been
found.
Output: The point X, denoted by SA (X).
The SA algorithm thus consists in selecting the best neighbor after the entire
neighborhood is examined. An alternative algorithm, the so-called First Improvement (FI),
consists in accepting the first favorable neighbor as soon as it is found, without
further searching. Note that in the FI case there are extra free parameters, namely
the order in which the neighborhood is searched. As pointed out in [15, p. 470], the
steepest ascent is often not worth the extra computation time, although it is sometimes
much quicker. Nevertheless, our focus in this paper is not a complete optimization of
the computational time, so we leave this problem as an open question.
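For concreteness, the following is a minimal Python sketch of the SA algorithm (our own illustration, not from the paper) on binary strings with the 1-bit-flip neighborhood; fitness is any user-supplied function on tuples of bits.

```python
def steepest_ascent(x, fitness):
    """Follow the best 1-bit-flip neighbor until no strict improvement exists."""
    x = tuple(x)
    while True:
        best, best_f = None, fitness(x)
        for i in range(len(x)):
            y = x[:i] + (1 - x[i],) + x[i + 1:]   # flip bit i
            fy = fitness(y)
            if fy > best_f:                        # strict improvement only
                best, best_f = y, fy
        if best is None:
            return x                               # x is a local optimum
        x = best

# Example: the one-max function has a single optimum (1, ..., 1).
print(steepest_ascent((0, 1, 0, 1), fitness=sum))
```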
Definition 2.1. Attraction basin: The attraction basin of a local optimum m_j is
the set of points {x_1, ..., x_k} of the search space such that a steepest ascent algorithm
starting from x_i ends at the local optimum m_j. The normalized size of
the attraction basin of the local optimum m_j is then equal to k/|E|.
Remarks.
1. This definition of the attraction basins yields a partition of the search space into
different attraction basins, as illustrated in Figure 1. The approach proposed in this
paper is based on this representation of the search space as a partition of attraction
basins, and could be generalized to partitions defined with alternative definitions of
attraction basins.
2. In the presence of local constancy in the landscape, the above definition of the
steepest ascent (and hence also the related definition of the attraction basins) is not
rigorous. For instance, if the fittest neighbors of a point p have the same fitness value,
then the steepest ascent algorithm at point p has to make a -random or user defined-
choice. Nevertheless, even in the presence of local constancy, the comparison of the
results (distribution of (β_j)) obtained with different steepest ascent choices may give
useful information about the landscape and guide the best elitism strategy: 'move' to
fitter points, or 'move' to strictly fitter points only.
3. Summary of the results. Given a distribution of (α_j), we determine M_min,
the minimal size of a random sample of the search space required to sample at least one
point in each attraction basin of the landscape. Two particular cases are investigated.
1. Deterministic configuration: all the attraction basins have the same size (the (α_j)
are deterministic).
2. Random configuration: the sizes of the attraction basins are completely random
(the (α_j) are uniformly distributed over the simplex).
In both configurations, we give the value of M_min as a function of the number of
local optima N. For instance, a random sample of size M = N ln(Na) in
Fig. 2. Schematic representation of the search space E with its attraction basins and
the 4 corresponding local maxima m_1, ..., m_4. In the left picture, M = 5 randomly chosen
initial points have been placed. We apply the search algorithm and detect 3 maxima,
as shown in the right picture.
the deterministic configuration (resp. M = aN^2 in the random configuration)
ensures that a point is sampled in each attraction basin with probability exp(-1/a).
We then address the inverse problem of identifying the distribution of the normalized
sizes (α_j) of the attraction basins for an unknown landscape. Some
direct analysis is first required, as discussed below.
Direct analysis. Consider a random sample (X_1, ..., X_M) uniformly chosen in the
search space. For each i, the steepest ascent starting from X_i (with the
modification operator(s) at hand) ends at the local optimum SA(X_i). Define β_j
as the number of local optima (m_k) that are reached by exactly j points from (X_i)
(see an example in Figure 2).
Proposition 5.1 gives the distribution of (β_j) for a family of parametrized distributions
of (α_j), asymptotically with respect to N and M. More precisely, if (Z_j)_{j=1,...,N}
denotes a family of positive real-valued independent random variables with Gamma
distributions whose densities are
p_γ(z) = (γ^γ / Γ(γ)) z^(γ-1) e^(-γz),
and if (α_1, ..., α_N) = (Z_1/(Z_1+...+Z_N), ..., Z_N/(Z_1+...+Z_N)), then the expected number β_{j,γ} := E[β_j]
satisfies, as N ≫ 1 and M ≫ 1 with a = M/N,
β_{j,γ} ≈ N (Γ(j+γ)/(j! Γ(γ))) a^j γ^γ / (a+γ)^(j+γ).
Moreover, the ratio a = M/N is the unique solution of:
r = a / (1 - (γ/(a+γ))^γ),  where r := (Σ_{j≥1} j β_{j,γ}) / (Σ_{j≥1} β_{j,γ}).   (1)
The latter equation is then used to find a good estimator of N, with observed values
of the variables β_j, as explained below.
Inverse problem. Given an unknown landscape, we then propose to characterize
the distribution of (α_j) through the empirical estimation of the distribution of the
random family (β_j). In fact, by construction, the distribution of (α_j) and that of (β_j)
are tightly related: We experimentally determine the observed values taken by (β_j)
(random sampling and steepest ascent search). Then, for each γ
value, we use a χ² test to compare the observed law for (β_j) to the law it should (theoretically) obey if
the law of (α_j) were Law_γ. Naturally, we find a (possible) law for (α_j) only if
one of the latter tests is positive. Otherwise, we only gain the knowledge that (α_j)
does not obey the law Law_γ. Note also that the method can be used to determine
sub-parts of the search space with a given distribution for (α_j). In case the law of
(α_j) is identified, Eq. (1) is used to find a good estimator of N.
Last, Section 6 validates the methodology of Section 5 by considering known landscapes
with random and deterministic sizes of basins, showing that the estimations of
the number of local optima N are accurate, even if M is much smaller than N. Further,
we apply the methodology to unknown landscapes, and show that the Hamming
binary and gray F1 landscapes contain many more local optima than the 3-bit-flip
landscapes.
4. Direct problem. We assume that the search space E can be split into the
partition of subspaces E_1, ..., E_N which are the attraction basins of local maxima m_1, ..., m_N
of the fitness function. Let us put a sample of M points randomly in the search space.
We aim at computing the probability p_{N,M} that at least one point of the random
sample lies in each attraction basin.
Proposition 4.1. If we denote by α_j := |E_j|/|E| the normalized size of the j-th
attraction basin, then:
p_{N,M} = Σ_{k=0}^{N} (-1)^k Σ_{1≤j_1<...<j_k≤N} (1 - α_{j_1} - ... - α_{j_k})^M.   (2)
Proof. Let us denote by A_j the event:
"there is no point of the sample in E_j".
The probability of the intersection of a collection of events A_j is easy to compute. For
any 1 ≤ j_1 < ... < j_k ≤ N, if there is one initial point chosen uniformly in E,
then we have P(A_{j_1} ∩ ... ∩ A_{j_k}) = 1 - α_{j_1} - ... - α_{j_k}.
If there are M initial points chosen uniformly and independently in E, then:
P(A_{j_1} ∩ ... ∩ A_{j_k}) = (1 - α_{j_1} - ... - α_{j_k})^M.
On the other hand, 1 - p_{N,M} is the probability that at least one of the attraction
basins contains no point, which reads as: 1 - p_{N,M} = P(A_1 ∪ ... ∪ A_N).
The result thus follows from the inclusion-exclusion formula [28, Formula 1.4.1a].
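Formula (2) can be evaluated directly for small N. The following Python sketch (our own illustration) implements the inclusion-exclusion sum for arbitrary normalized sizes; it is exponential in N and therefore only useful as a check against simulations.

```python
from itertools import combinations

def p_detect_all(alpha, M):
    """Probability that M uniform points hit every basin, Eq. (2); alpha sums to 1."""
    N = len(alpha)
    total = 0.0
    for k in range(N + 1):
        for subset in combinations(range(N), k):
            missing = sum(alpha[j] for j in subset)
            total += (-1) ** k * (1.0 - missing) ** M
    return total

print(p_detect_all([0.25, 0.25, 0.25, 0.25], M=20))   # D-configuration with N = 4
```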
Proposition 4.1 gives an exact expression for p_{N,M} which holds true whatever
the sizes of the attraction basins, but it is quite complicated. The following corollaries show that the
expression of p_{N,M} is much simpler in some particular configurations.
Corollary 4.2. 1. If the attraction basins all have the same size α_j ≡ 1/N (the
so-called D-configuration), then:
p_{N,M} = Σ_{k=0}^{N} (-1)^k C(N,k) (1 - k/N)^M.   (3)
2. If moreover the numbers of attractors and initial points are large, N ≫ 1 and M ≫ 1,
with N e^(-M/N) → a, then p_{N,M} → exp(-a).
3. Let us denote by M_D the number of points which are necessary to detect all local
maxima. Then in the asymptotic framework N ≫ 1, M_D obeys the distribution of
N ln N - N ln Z, where Z is an exponential variable with mean 1.
An exponential variable with mean 1 is a random variable whose density with respect
to the Lebesgue measure over R_+ is z ↦ e^(-z).
Proof. The first point is a straightforward application of Proposition 4.1. It is
actually referenced in the literature as the "coupon-collector" problem. The fact that
M_D/(N ln N) converges in probability to 1 is also well known. The corollary goes
one step further by exhibiting the statistical distribution of M_D - N ln N. Let us
assume that N ≫ 1 and M ≫ 1 with N e^(-M/N) → a. We begin by establishing an estimate
of p_{N,M,k} := C(N,k) (1 - k/N)^M.
First note that C(N,k) ≤ N^k / k!.
Second, ln(1-x) ≤ -x for any 0 ≤ x ≤ 1, so that (1 - k/N)^M ≤ e^(-kM/N).
As a consequence, uniformly with respect to k ≤ N,
p_{N,M,k} ≤ (N e^(-M/N))^k / k!.
We thus have
p_{N,M,k} ≤ e a^k / k! for all k, uniformly with respect to N.
Choosing some K ≥ 1, we can write from Eq. (3):
| p_{N,M} - Σ_{k=0}^{K} (-1)^k p_{N,M,k} | ≤ e Σ_{k>K} a^k / k!.
It is easy to check that, for any fixed k, p_{N,M,k} = C(N,k) (1 - k/N)^M → a^k/k!
as N → ∞, so that:
lim sup_{N→∞} | p_{N,M} - Σ_{k=0}^{K} (-1)^k a^k/k! | ≤ e Σ_{k>K} a^k/k!.
This holds true for any K, so we take the limit K → ∞, which gives the result
of the second point: p_{N,M} → Σ_{k≥0} (-1)^k a^k/k! = e^(-a). The third point then follows readily from the identity P(M_D ≤ M) = p_{N,M},
applied with M = [N ln N - N ln z], where [x] stands for the integral part of the real number x.
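Corollary 4.2 is easy to check numerically. The sketch below (our own illustration, not from the paper) simulates the D-configuration, records the number of draws M_D needed to see all N basins, and looks at (M_D - N ln N)/N, whose mean should be close to Euler's constant ≈ 0.577 under the stated asymptotics.

```python
import math, random

def coupon_collector(N, rng=random):
    """Number of uniform draws needed to observe all N equally likely basins."""
    seen, draws = set(), 0
    while len(seen) < N:
        seen.add(rng.randrange(N))
        draws += 1
    return draws

N, runs = 1000, 200
shifted = [(coupon_collector(N) - N * math.log(N)) / N for _ in range(runs)]
print(sum(shifted) / runs)   # should be close to Euler's constant 0.577...
```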
Corollary 4.3. 1. If the sizes of the attraction basins are random (the so-called
R-configuration), in the sense that their joint distribution is uniform over the simplex
S_N = {(α_1, ..., α_N) : α_j ≥ 0, Σ_j α_j = 1},
and the numbers of attractors and initial points are large, N ≫ 1 and M/N^2 → a > 0, then:
p_{N,M} → exp(-1/a).
2. Let us denote by M_R the number of points which are necessary to detect all local
maxima. Then in the asymptotic framework N ≫ 1, M_R obeys the distribution of
N^2/Z, where Z is an exponential variable with mean 1.
A construction of the R-configuration is the following. Assume that the search
space E is the interval [0, 1). Choose N - 1 points (a_i)_{i=1,...,N-1} uniformly over [0, 1]
and independently. Consider the order statistics (a_(i))_{i=1,...,N-1} of this sample, that
is to say, permute the indices of these points so that a_(0) := 0 ≤ a_(1) ≤ ... ≤ a_(N-1) ≤ a_(N) := 1.
Denote the spacings by α_j := a_(j) - a_(j-1) for j = 1, ..., N.
Note that α_j is also called the j-th coverage. If the j-th attraction basin E_j is the
interval [a_(j-1), a_(j)), then the sizes of the attraction basins (α_j)_{j=1,...,N}
obey a uniform distribution over the simplex S_N.
Proof. From Eq. (2) and the exchangeability of the coverages,
p_{N,M} = Σ_{k=0}^{N} (-1)^k p_{N,M,k},  with  p_{N,M,k} := C(N,k) E[(1 - α_1 - ... - α_k)^M],   (8)
where E stands for the expectation with respect to (α_j)_{j=1,...,N}, whose distribution is
uniform over S_N. As pointed out in Ref. [28, Section 9.6a], the probability distribution
of the sum of any k of the N coverages α_j is described by the repartition function
given by Formula 9.6.1 of [28], which shows that it admits a density q_{N,k}(·) with respect
to the Lebesgue measure over [0, 1]:
q_{N,k}(x) = ((N-1)! / ((k-1)! (N-k-1)!)) x^(k-1) (1-x)^(N-k-1).
We can thus write a closed-form expression for p_{N,M,k}:
p_{N,M,k} = C(N,k) ∫_0^1 (1-x)^M q_{N,k}(x) dx = C(N,k) (N-1)! (M+N-1-k)! / ((N-k-1)! (M+N-1)!).   (9)
We shall first prove an estimate of p_{N,M,k}.
Step 1. p_{N,M,k} ≤ (N^2/M)^k / k!.
We have N!/(N-k)! ≤ N^k and (N-1)!/(N-k-1)! ≤ (N-1)^k ≤ N^k. For
any k = 0, ..., N we also have (M+N-1-k)!/(M+N-1)! ≤ M^(-k). Substituting
these inequalities into Eq. (9) establishes the desired estimate.
Step 2. For any fixed k, if N ≫ 1 and M/N^2 → a, then p_{N,M,k} → a^(-k)/k!.
This follows from Eq. (9), since C(N,k) (N-1)!/(N-k-1)! ~ N^(2k)/k! and
(M+N-1-k)!/(M+N-1)! ~ M^(-k) in this asymptotic regime; the bounded and convergent integrand
also allows an argument by dominated convergence, which yields the result by Eq. (8).
Step 3. Convergence of p_{N,M} when N ≫ 1 and M/N^2 → a.
We first choose some K ≥ 1. We have from the result of Step 1:
| p_{N,M} - Σ_{k=0}^{K} (-1)^k p_{N,M,k} | ≤ Σ_{k>K} (N^2/M)^k / k!.   (10)
Substituting the result of Step 2 into Eq. (10) shows that:
lim sup_{N→∞} | p_{N,M} - Σ_{k=0}^{K} (-1)^k a^(-k)/k! | ≤ Σ_{k>K} a^(-k)/k!.
This inequality holds true for any K, so letting K → ∞ completes the proof of the
corollary: p_{N,M} → exp(-1/a).
It follows from the corollaries that about N ln N points are needed in the D-configuration
to detect all maxima, while about N^2 points are needed to expect the
same result in the R-configuration. This is due to the fact that there exist very small
attraction basins in the R-configuration. Actually, it can be proved that the smallest
attraction basin in the R-configuration has a relative size which obeys an exponential
distribution with mean N^(-2) (for more detail about the asymptotic distributions of
order statistics we refer to [28]). That is why a number of points of the order
of N^2 is required to detect this very small basin.
Mean values. The expected value of M_D is:
E[M_D] ≈ N (ln N + C),
where C is Euler's constant, whose value is C ≈ 0.58. The expected value of
M_R/N^2 is equal to infinity. This is due to the fact that the tail corresponding to
exceptionally large values of M_R is very important:
P(M_R ≥ N^2 a) = 1 - exp(-1/a), which decays only like 1/a.
Standard deviations. The normalized standard deviation (the standard
deviation divided by the mean) of the number of points necessary to detect
all local maxima in the D-configuration is of order 1/ln N,
which goes to 0 as N → ∞; this proves in particular that M_D/(N ln N) converges
to 1 in probability. This is of course not surprising. The D-configuration
has a deterministic environment, since all basins have a fixed size, so that we can
expect an asymptotically deterministic behavior. The situation is very different in the
R-configuration, which has a random environment, and it may happen that the smallest
attraction basin is much smaller than its expected size N^(-2). That is why the
fluctuations of M_R, and especially the tail corresponding to exceptionally large values,
are very important.
5. Inverse problem.
5.1. Formulation of the problem. We now focus on the inverse problem. We
look for the number N of local maxima of the fitness function and also some pieces
of information on the distribution of the sizes of the corresponding attraction basins.
We assume that we can use an algorithm that is able to associate to any point of the
search space the corresponding local maximum. In order to detect all local maxima,
we should apply the algorithm to every point of the search space. Nevertheless this
procedure is far too long since the search space has a large cardinality. Practically
we shall apply the algorithm to M points that will be chosen randomly in the search
space E. The result of the search process can consequently be summed up by the
following set of observed values (j ≥ 1):
β_j := number of maxima detected with exactly j points.   (11)
Our arguments are based upon the following observations. First note that the total count
Σ_{j≥1} β_j is the number of detected maxima. It is consequently a lower bound on the
total number of local maxima N, but a very rough estimate, in the sense that it may
happen that many maxima are not detected, especially those whose attraction basins
are small. Besides, this count represents less information than the complete set (β_j)_{j≥1}. By
a clever treatment of this information, we should be able to find a better estimate of
N than the number of detected maxima.
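In practice the observed values (11) are obtained by simple bookkeeping over the M runs of the local search. A minimal Python sketch (our own illustration; steepest_ascent is assumed to be a routine, such as the one sketched in Section 2, returning the local optimum reached from a point):

```python
from collections import Counter

def observed_beta(sample, steepest_ascent):
    """Return {j: beta_j}: number of local optima reached by exactly j sample points."""
    # optimum -> number of points whose steepest ascent ends there
    hits = Counter(steepest_ascent(x) for x in sample)
    # beta_j = number of optima that were hit exactly j times
    return dict(Counter(hits.values()))
```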
5.2. Analysis. The key point is that the distribution of the set (β_j) is closely
related to the distribution of the sizes of the attraction basins. Let us assume that the
relative sizes (α_j) of the attraction basins can be described by a distribution
parametrized by some positive number γ, as follows. Let (Z_j)_{j=1,...,N} be a sequence of
independent random variables whose common distribution has density p_γ with respect
to the Lebesgue measure over (0, ∞):
p_γ(z) = (γ^γ / Γ(γ)) z^(γ-1) e^(-γz),   (12)
where Γ is Euler's Gamma function Γ(γ) = ∫_0^∞ t^(γ-1) e^(-t) dt. The density p_γ is the
so-called Gamma density with parameters (γ, γ). If γ is a positive
integer, then the associated law of the β_j's is a negative-binomial distribution.
Fig. 3. Probability density p_γ of the sizes of the attraction basins under H_γ, for different values of γ.
For any γ, the expected value of Z_1 is 1 and its standard deviation is 1/√γ. In the following we
shall say that we are under H_γ if the relative sizes of the attraction basins (α_1, ..., α_N)
can be described as (Z_1/T_N, ..., Z_N/T_N), where T_N := Z_1 + ... + Z_N and the distribution
of each Z_j has density p_γ. Note that the large deviations principle (Cramer's theorem
[1, Chapter 1]) applied to the sequence (Z_j) yields that for any x > 0 there exists
c_γ(x) > 0 such that
P(|T_N/N - 1| ≥ x) ≤ exp(-N c_γ(x)),
which shows that, in the asymptotic framework N ≫ 1, the ratio Z_j/N stands for the
relative size α_j up to a negligible correction. The so-called D and R configurations
described in Section 4 are particular cases of this general framework:
- For γ = +∞ we have Z_j ≡ 1, so that we get back the deterministic D-configuration.
- For γ = 1, the Z_j's obey independent exponential distributions with mean 1, and
the family (Z_j/T_N) obeys the uniform distribution over S_N [17].
The important statement is the following one.
Proposition 5.1. Under H_γ, the expected values β_{j,γ} := E[β_j] of the β_j's can
be computed for any N, M, and γ:
β_{j,γ} = N C(M,j) Γ(γ+j) Γ((N-1)γ + M - j) Γ(Nγ) / [Γ(γ) Γ((N-1)γ) Γ(Nγ + M)].   (14)
In the asymptotic framework N ≫ 1 and M ≫ 1 with M/N → a, β_{j,γ} can be expanded as:
β_{j,γ} ≈ N (Γ(j+γ)/(j! Γ(γ))) a^j γ^γ / (a+γ)^(j+γ),  where a = M/N.   (15)
Proof. Under H_γ, the probability that j of the M points lie in the k-th attraction
basin can be computed explicitly:
P(j points in E_k) = C(M,j) E[α_k^j (1 - α_k)^(M-j)],
where E stands for the expectation with respect to the Z_j's with distribution p_γ.
Accordingly, in terms of the Z_i's this expression reads:
P(j points in E_k) = C(M,j) E[(Z_k/(Z_k + W_k))^j (W_k/(Z_k + W_k))^(M-j)],
where W_k := Σ_{i≠k} Z_i. The random variables Z_k and W_k are independent. The probability
density of Z_k is p_γ, given by Eq. (12). The random variable W_k is the sum of N - 1
independent random variables with densities p_γ, so that its probability density
is [4, p. 47, Formula 2.3]:
(γ^((N-1)γ) / Γ((N-1)γ)) z^((N-1)γ - 1) e^(-γz).
Accordingly, P(j points in E_k) is a double integral over (z, w) against these two densities.
By the change of variables u = z/(z+w) and v = z + w, the integral with respect to v is
straightforward by definition of the Gamma function, and the integral with respect to u
can be obtained via tabulated formulae [4, p. 47, Formula 2.5].
This gives the explicit formula (14) for β_{j,γ}, since
β_{j,γ} = N P(j points in E_1).
If N ≫ 1 and M = aN, then Γ(Nγ)/Γ((N-1)γ) ≈ (Nγ)^γ,
Γ((N-1)γ + M - j)/Γ(Nγ + M) ≈ (Nγ + M)^(-(γ+j)), and M!/(M-j)! ≈ M^j as
N → ∞. This proves the asymptotic formula (15).
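As a sanity check on Proposition 5.1, the following Python sketch (our own illustration, not from the paper) simulates the H_γ model directly: it draws N Gamma-distributed basin sizes, throws M uniform points into the resulting partition of [0, 1), and compares the empirical counts β_j with the geometric form of (15) for γ = 1.

```python
import random
from collections import Counter

def simulate_beta(N, M, gamma, rng=random):
    """Empirical beta_j under H_gamma: basins with normalized Gamma(gamma, 1/gamma) sizes."""
    z = [rng.gammavariate(gamma, 1.0 / gamma) for _ in range(N)]
    total = sum(z)
    bounds, acc = [], 0.0
    for zi in z:                       # cumulative basin boundaries inside [0, 1)
        acc += zi / total
        bounds.append(acc)
    hits = Counter()
    for _ in range(M):
        u = rng.random()
        k = next(i for i, b in enumerate(bounds) if u < b)   # basin containing u
        hits[k] += 1
    return Counter(hits.values())      # j -> number of basins hit exactly j times

emp = simulate_beta(N=2000, M=1000, gamma=1.0)
a = 1000 / 2000
for j in range(1, 5):
    expected = 2000 * a ** j / (1 + a) ** (j + 1)   # geometric form of Eq. (15) for gamma = 1
    print(j, emp[j], round(expected, 1))
```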
In particular, the distribution of the β_j's under the D-configuration is Poisson in
the asymptotic framework N ≫ 1:
β_{j,∞} ≈ N e^(-a) a^j / j!,
while it is geometric under the R-configuration:
β_{j,1} ≈ N a^j / (1+a)^(j+1).
From Eq. (15) we can deduce the following relation satisfied by the ratio
r := (Σ_{j≥1} j β_{j,γ}) / (Σ_{j≥1} β_{j,γ}):
r = a / (1 - (γ/(a+γ))^γ).   (16)
5.3. Estimator of the number of local maxima. We have now sufficient
tools to exhibit a good estimator of the number of local maxima. We remind the reader
of the problem at hand. We assume that some algorithm is available to determine from
any given point the corresponding local maximum. We choose randomly M points in
the search space and detect the corresponding local maxima. We thus obtain a set
of values (β_j)_{j≥1} as defined by (11). We can then determine from the set of values
(β_j) which hypothesis H_γ0 is the most probable, or at least which H_γ0 is the
closest configuration to the real underlying distribution of the relative sizes of the
attraction basins. The statistics used to compare observed and expected results is the
so-called χ² goodness of fit test [27, Section 8.10], which consists first in calculating,
for each γ,
T_γ = Σ_{j∈J} (β_j - β_{j,γ})² / β_{j,γ},
where J is the set of the indices j for which β_j ≥ 1. Obviously a
large value for T_γ indicates that the corresponding β_{j,γ} are far from the observed ones,
that is to say H_γ is unlikely to hold. Conversely, the smaller T_γ, the more likely H_γ
holds true. In order to determine the significance of various values of T_γ, we need the
distribution of the statistic. A general result states that if the hypothesis H_γ0 does
hold true, then the distribution of T_γ0 is approximately the so-called χ²-distribution
with degrees of freedom equal to the cardinality of the set J minus 1. Consequently
we can say that the closest configuration to the real underlying distribution of the
relative sizes of the attraction basins is H_γ0, where γ0 is given by:
γ0 := argmin_γ T_γ.   (17)
Furthermore, we can estimate the accuracy of the configuration H_γ0 by referring T_γ0
to tables of the χ²-distribution with |J| - 1 degrees of freedom. A value of T_γ0 much
larger than the one indicated in the tables means that none of the configurations H_γ
hold true. Nevertheless, H_γ0 is the closest of the considered distributions to the real one.
Remark. The distribution theory of the χ² goodness of fit statistic can be found in
[3, Chapter 30]. The result is in any case approximate, and all the poorer as there are
many expected β_{j,γ} less than five. These cases must be avoided by combining cells.
But power is then lost in the tail regions, where differences are more likely to show up.
Defining γ0 as in (17), we denote by r̂ the quantity:
r̂ := (Σ_{j≥1} j β_j) / (Σ_{j≥1} β_j).
From Eq. (16), under H_γ0 the ratio a = M/N is the unique solution of:
r̂ = a / (1 - (γ0/(a+γ0))^γ0).   (18)
Consequently, once we have determined the solution â of Eq. (18), M/â is a good estimator of the
number of local maxima N.
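The estimation step is easy to implement. The following Python sketch (our own illustration; estimate_N is a hypothetical helper name) takes the observed counts, a chosen γ, and the sample size M, solves Eq. (18) for a = M/N by bisection (the right-hand side is increasing in a), and returns M/a.

```python
def estimate_N(beta, gamma, M):
    """Solve Eq. (18) for a = M/N by bisection and return the estimate of N."""
    detected = sum(beta.values())                              # number of distinct optima found
    r_hat = sum(j * b for j, b in beta.items()) / detected     # observed ratio r̂

    def rhs(a):
        # right-hand side of Eq. (18): a / (1 - (gamma/(a+gamma))^gamma)
        return a / (1.0 - (gamma / (a + gamma)) ** gamma)

    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if rhs(mid) < r_hat:
            lo = mid
        else:
            hi = mid
    a_hat = 0.5 * (lo + hi)
    return M / a_hat

# Example with invented counts from a sample of M = 500 points.
beta_obs = {1: 180, 2: 60, 3: 25, 4: 10}
print(estimate_N(beta_obs, gamma=1.0, M=500))
```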
Fig. 4. Basins with random uniform sizes: The left figures plot the χ² test results (i.e., the
values of T_γ) comparing the empirically observed distribution to the family of γ-parametrized
distributions. The right figures plot, for different γ values, the estimation of the number of local
optima computed by Eq. (18). These estimations are very robust (only one estimation is plotted)
and are accurate for γ ≥ 1. The same figures also show the numbers of optima actually
visited by the steepest ascent. The numerical simulations exhibit unstable results
for the χ² test for small N values.
6. Experiments. Given a landscape L, the following steps are performed in
order to identify a possible law for the number and sizes of the attraction basins of
L, among the family of laws Law_γ studied above.
1. Choose a random sample (X_1, ..., X_M) uniformly in E.
2. Perform a steepest ascent starting from each X_i up to SA(X_i).
3. Compute β_j, defined as the number of local optima reached by exactly j initial
points X_i.
4. Compare the observed law of (β_j) to the laws of (β_{j,γ}) for different γ values, using
the χ² test.
To visualize the comparison of the last item, we propose to plot the obtained χ²
value T_γ for different γ values. We also plot the corresponding χ² value below which the
test is positive with a confidence of 95%.
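To make step 4 concrete, here is a small Python sketch (our own illustration, with scipy assumed available; expected_beta and chi2_scan are hypothetical helper names). It computes T_γ over a grid of γ values using the asymptotic formula (15) and reports the 95% threshold; in practice the value of N fed to expected_beta would itself be re-estimated for each γ via Eq. (18).

```python
import math
from scipy.stats import chi2

def expected_beta(j, gamma, N, M):
    """Asymptotic expected count beta_{j,gamma} from Eq. (15), with a = M/N."""
    a = M / N
    log_val = (math.log(N) + math.lgamma(j + gamma) - math.lgamma(j + 1) - math.lgamma(gamma)
               + j * math.log(a) + gamma * math.log(gamma) - (j + gamma) * math.log(a + gamma))
    return math.exp(log_val)

def chi2_scan(beta_obs, N_hat, M, gammas):
    """Return the list of (gamma, T_gamma) pairs and the 95% chi-square threshold."""
    js = [j for j, b in beta_obs.items() if b >= 1]
    scan = []
    for g in gammas:
        T = sum((beta_obs[j] - expected_beta(j, g, N_hat, M)) ** 2
                / expected_beta(j, g, N_hat, M) for j in js)
        scan.append((g, T))
    return scan, chi2.ppf(0.95, df=len(js) - 1)

beta_obs = {1: 180, 2: 60, 3: 25, 4: 10}
scan, thr = chi2_scan(beta_obs, N_hat=600, M=500, gammas=[0.5, 1.0, 2.0, 5.0, 100.0])
best_gamma = min(scan, key=lambda t: t[1])[0]
print(scan, "best gamma:", best_gamma, "95% threshold:", thr)
```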
6.1. Experimental validation. The results obtained in Section 5 are asymptotic
with respect to the number of local optima N and the size of the random sample
M. Hence, before the methodology can be applied, some experimental validation is
required in order to determine practical values of M and N for which the method is
reliable. This is achieved by applying the methodology to determine the distribution
of (α_j) (normalized sizes of the attraction basins) in two known, purposely constructed
landscapes: the first contains basins with random sizes, the second contains basins
with equal sizes.
Results are plotted in Figures 4-5 and 6. Samples with smaller sizes than those
shown in these figures yield β_j values which are not rich enough to allow a significant
Fig. 5. The same as in Figure 4 with different values for N and M. Stable results are obtained
when N increases and M is bounded (M = min(2000, 3N) here). The estimation of N corresponding
to the smallest χ² value is very accurate.
χ² test comparison. For instance, the χ² test requires that the observed β_j are non-null for
some j > 1 at least (some initial points are sampled in the same attraction basin). In
case all initial points are sampled in different attraction basins, the χ² test comparison
is not significant.
These experiments give practical bounds on the sample sizes (in relation to the
number of local optima) for which the methodology is reliable: The numerical simulations
exhibit unstable results for the χ² test for small values of N and M (Figure 4). When N increases
and M is bounded (M = min(2000, 3N) in the experiments),
results become stable and accurate (Figure 5). Further, we demonstrate that the estimation
of the number of local optima is accurate, even when initial points visit a small
number of attraction basins of the landscape (Figure 6). This situation is even more
striking in the experiments of the following section on Baluja F1 problems.
6.2. The methodology at work. Having seen that the methodology is a powerful
tool, provided that the information obtained for (β_j) is rich enough, we apply it to
investigate the landscape structure of the difficult gray and binary coded F1 Baluja
problems [2], for the 1-bit-flip and 3-bit-flip neighborhood relations.
Fig. 6. Basins with deterministic equal sizes: The χ² results are stable for smaller sample
sizes than those of the random configuration. The bottom figures correspond to the case N = 10^5
and M = 500, where the χ² test is not significant, yet the predicted number of local optima is very
accurate! With 500 initial points, 497 local optima have been visited, while there are actually 10^5
optima. Yet, formula (18) is able to estimate the true number with an error of less than 30% when
the adequate γ value is used.
Gray-coded Baluja F1 functions. Consider the Baluja F1 function of k variables.
It reaches its maximum value of 10^7 at the point (0, ..., 0). The Gray-encoded (F1g)
and binary (F1b) versions, with respectively 2, 3 and 4 variables encoded on 9 bits each, are
considered. This encoding consequently corresponds to binary search spaces with
l = 18, 27 and 36 bits.
Considering the 1-bit-flip mutation (Hamming landscape), Figure 7 shows that
the distribution of the sizes of the basins is closer to the random configuration than
to the deterministic one, and that the estimated number of local optima is similar for
the binary and gray codings. On the other hand, considering the 3-bit-flip mutation
(Figure 8), the estimated number of local optima drops significantly for both problems:
less than 250 for both binary and gray landscapes, whereas the Hamming landscape
Fig. 7. The difficult Baluja 27-bit F1 gray (F1g) and binary (F1b) landscapes with a 1-bit-flip
mutation. Experiments with samples of several sizes show the same results for
the χ² test, and the corresponding estimations of the number of local maxima converge to a stable
value around 4000.
contains thousands of local optima (Figure 7).
Experiments at the other problem sizes (l = 18 and l = 36) were carried out in addition
to the plotted ones (l = 27), leading to similar results for both F1g and F1b problems:
The number of local optima of the 3-bit-flip landscape is significantly smaller than
that of the Hamming landscape. For example, for the smallest problem size there are less than 10
local optima in the 3-bit-flip landscape versus hundreds in the Hamming landscape.
For the largest problem size, the estimations for the Hamming landscape show about two times more
local optima for the gray than for the binary encoding (resp. 45 000 and 25 000). Still
for the largest size, but for the 3-bit-flip landscape, the estimated number of local optima
drops respectively to 1400 and 1000.
A new optimization heuristic. A simple search strategy for solving difficult problems
naturally follows from the methodology presented in this paper: Once the number
N and the distribution of the attraction basins are estimated following the guidelines
summarized at the beginning of Section 6, generate a random sample whose size is set
to M = N ln(Na) if the sizes of the basins are close to the deterministic configuration
(M = aN^2 if the sizes of the basins are close to random). Then a simple steepest
ascent starting from each point of the sample ensures that the global optimum is
found with probability exp(-1/a).
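The strategy just described can be wired together from the earlier sketches. The following Python outline (our own illustration; steepest_ascent refers to the hypothetical helper sketched in Section 2, and N_hat to an estimate obtained as in Section 5.3) sizes the final sample according to the deterministic or random regime.

```python
import math

def resample_and_optimize(space_sampler, fitness, steepest_ascent,
                          N_hat, random_like=True, a=2.0):
    """Size a sample so the global basin is hit with probability about exp(-1/a)."""
    if random_like:
        M = int(a * N_hat ** 2)                # R-like basins: M = a * N^2
    else:
        M = int(N_hat * math.log(N_hat * a))   # D-like basins: M = N * ln(N * a)
    best = None
    for _ in range(M):
        opt = steepest_ascent(space_sampler(), fitness)
        if best is None or fitness(opt) > fitness(best):
            best = opt
    return best
```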
In the 27-bit F1 problem, this heuristic proves to be very robust and efficient
in solving the problem with the 3-bit-flip operator. Using a 3-bit-flip mutation
steepest ascent, an initial random sample of 5 points (versus 100 with 1-bit-flip mutation)
is enough to ensure that at least one point lies in the global attraction basin
(experiments based on 50 runs)! This is due to the fact that the basin of the global
optimum is larger than the basins of the other local optima. In order to detect all
attraction basins, we can estimate the required sample size to 62500 (250 x 250), using
Corollary 4.3 and the estimation of N in the experiments of Figure 8.
Fig. 8. The difficult Baluja 27-bit F1 gray (F1g) and binary (F1b) landscapes with a 3-bit-flip
mutation: the number of local optima drops significantly compared to the Hamming 1-bit-flip
landscape. These results are confirmed by experiments using samples of size 2000,
which give the same estimation for the number of local optima.
6.3. Discussion. This paper provides a new methodology allowing one to estimate
the number and the sizes of the attraction basins of a landscape specified in relation to
some modification operator. This allows one to derive bounds on the probability that
one samples a point in the basin of the global optimum, for example. Further, it allows one
to compare landscapes related to different modification operators or representations,
as illustrated with the Baluja problem.
The efficiency of the proposed method is certainly dependent on the class of laws
of (α_j) (sizes of the attraction basins) for which the distribution of (β_j) is known. We
have chosen a very particular family of distributions p_γ for representing all possible
distributions of the relative sizes of attraction basins. The constraints for this choice
are twofold and contradictory. On the one hand, a large family of distributions is
required to be sure that at least one of them is sufficiently close to the observed
repartition. On the other hand, if we choose an over-large family, then we need a
lot of parameters to characterize the distributions. It is then very difficult to estimate
all parameters and consequently to decide which distribution is the closest to the
observed one. That is why the choice of the family is very delicate and crucial.
We feel that the particular family p_γ that has been chosen in (12) fulfills the determinant
conditions. First, it contains two very natural distributions, the so-called D and R
configurations, that we have studied in great detail. Second, it is characterized by
a single parameter, easy to estimate. Third, it contains distributions with a complete
range of variances, from 0 (the D-configuration) to infinity, by going through 1 (the
R-configuration).
However, the experiments with the Baluja problem appeal for refining the class
of laws of (α_j) around basins with random sizes. We may propose
α_j = Z_j / (Z_1 + ... + Z_N),
where the Z_j are independent and identically distributed with one of the distributions of
the bidimensional family p_{γ,δ}(·).
The parameter δ characterizes the distribution of the sizes of the small basins, since
p_{γ,δ}(z) ~ z^(δ-1) as z → 0, while γ characterizes the distribution of the sizes of the
large basins, since the decay of p_{γ,δ}(z) as z → ∞ is essentially governed by e^(-γz). The
density p_{γ,δ} is the so-called Gamma density with parameters (γ, δ) [4, Section 2.2].
This family presents more diversity than the family p_γ(·) we have considered in
Section 5.2. The expected value of β_j under p_{γ,δ} admits, in the asymptotic framework
N ≫ 1 and M = aN with a = M/N, an expansion analogous to Eq. (15).
The method of estimating the number of local maxima described in Section 5.3 could
then be applied with this family.
To apply our method we have also made a crucial choice, which consists in executing
the local search algorithm from randomly distributed points. We do so because we
have no a priori information on the landscape at hand. However, assume for a while
that we have some a priori information about the fitness function, for instance its
average value. Consequently we could hope that starting with points whose fitnesses
are better than average would improve the detection of the local maxima. Nevertheless,
extensive computer investigations of some particular problems have shown that
this is not the case [15, p. 456], possibly because a completely random sampling of
starting points allows one to get a wider sample of local optima.
A first application of the methodology presented in this paper is to compare
landscapes obtained when different operators are used (k-bit-flip binary mutations for
different k values, for example). However, the complexity of this method is directly
related to the size of the neighborhood of a given point. Hence, its practical usefulness
for studying k-bit-flip landscapes is limited when the value of k increases. Hence, it seems
most suited to investigating different representations. Its extension to non-binary representations
is straightforward, provided that a search algorithm that leads to the
corresponding local optimum can be provided for each representation. Further, this
methodology can be used to determine sub-parts of the search space such that the (α_j)
obey a particular law, hence guiding a hierarchical search in different subparts of the
space.
Note finally that the distributions of the sizes of basins do not fully characterize
landscape difficulty. Depending on the relative positions of the attraction basins, the
search may still range from easy to difficult. Additional information is necessary
to compare landscape difficulties. Further work may address such issues to extract
additional significant information in order to guide the choice or the design of problem-dependent
operators and representations.
--R
Grandes d
An empirical comparison of seven iterative and evolutionary function optimization heuristics
Mathematical methods of statistics
An introduction to probability theory and its applications
Genetic algorithms in search
Adaptation in natural and arti
Fitness distance correlation as a measure of problem difficulty for genetic algorithms
Convergence des algorithmes g
On functions with a
Alternative random initialization in genetic algorithms
A priori predictions of operator efficiency
The genetic algorithm and the structure of the
Memetic algorithms and the
Algorithms and complexity
Spin glass theory and beyond
Ultrametricity for Physicists
Genetic algorithms
Complex systems and binary networks
Landscapes and their correlation functions
Local improvement on discrete structures
Hill climbing with multiple local optima
Theory and Applications
Correlated and uncorrelated
Chapman and Hall
Mathematical statistics
--TR
--CTR
Sheldon H. Jacobson , Enver Ycesan, Analyzing the Performance of Generalized Hill Climbing Algorithms, Journal of Heuristics, v.10 n.4, p.387-405, July 2004 | combinatorial complexity;neighborhood graph;local search;randomized starting solution |
587975 | Reserving Resilient Capacity in a Network. | We examine various problems concerning the reservation of capacity in a given network, where each arc has a per-unit cost, so as to be "resilient" against one or more arc failures. For a given pair (s,t) of nodes and demand T, we require that, on the failure of any k arcs of the network, there is sufficient reserved capacity in the remainder of the network to support an (s,t) flow of value T. This problem can be solved in polynomial time for any fixed k, but we show that it is NP-hard if we are required to reserve an integer capacity on each arc. We concentrate on the case where the reservation has to consist of a collection of arc-disjoint paths: here we give a very simple algorithm to find a minimum cost fractional solution, based on finding successive shortest paths in the network. Unlike traditional network flow problems, the integral version is NP-hard: we do, however, give a polynomial time $\frac{15}{14}$-approximation algorithm in the case k=1 and show that this bound is best possible unless | Introduction
1.1
Overview
A commonly encountered network design problem is that of reserving capacities in a network so as to
support some given set of pairwise traffic demands. Algorithms for this network capacity allocation
problem have been developed by a number of groups, see for example [6, 8, 9, 21, 22, 23]. These are
primarily based on polyhedral methods. One significant drawback to the capacity reservation problem
discussed, and especially to the successive shortest path approach, is that if we simply reserve capacity
along a single path, we make ourselves totally vulnerable to the failure of any arc (or node) along our
chosen path. In many practical settings, this is not acceptable, and we wish to reserve our capacities
so as to allow for the failure of any one arc of the network.
Several groups have recently addressed this issue of "resilience" or "survivability" in network design
problems, e.g., [2, 3, 5, 10, 14, 15, 20, 24, 25, 26], to name a few. As with the preceding batch
of references, these are also primarily based on polyhedral or branch and cut methods (although
computationally, these problems prove to be dramatically more difficult to solve in practice (cf [10]))
and hence, if they terminate, they usually produce an optimality certificate. This aspect of this
approach is very desirable in situations where (i) costs and data are 'certain' and (ii) there is time
available to solve the optimization problem off-line. These conditions are often not met, however,
and consequently many of the network planning tools in the telecommunication industry solve the
'vanilla' form of the capacity allocation problem by successively solving a shortest path problem for
each demand pair incrementing loads on the arcs of the shortest path by the demand
for that pair. 1 In its favour, this approach is fast and allows for trivial implementation in software.
Unfortunately, one easily concocts examples where this approach produces solutions arbitrarily far
from the optimum. On the other hand, one may also produce examples where the exact methods do
not terminate or exhibit poor running times; in addition, they may require substantial mathematical
sophistication on the part of its implementors. Another situation where single source-destination pair
algorithms are sought is in on-line settings. This area is becoming increasingly important as network
management becomes a key concern of network operators. This is largely driven by the changing
nature of demands from their customers. In particular, bursty or short-term requests for connectivity
are becoming increasingly common.
The present paper is dedicated to adapting the successive shortest path heuristic to be able to find
resilient capacity reservations. Thus we restrict ourselves throughout to the special case of the problem
with a single source-destination pair of nodes; we show that even this case presents some surprising
difficulties. This case may be equally viewed as an extension of finding a minimum cost flow for a
single source-destination pair. (Overviews of previous computational and theoretical work on such
survivability and augmentation problems can be found in [18] and [17].)
We adopt the viewpoint that the network, with specified nodes s and t, is given to us, along with a
per-unit cost c a for each arc a, and that we are free to reserve, once and for all, as much capacity as
we like on whatever arcs we choose. Our objective is to find a "reservation vector" x minimizing the
total cost
a c a x a , subject to supporting a given target amount T of traffic from s to t, even if any
one (or, more generally, any k) of the arcs in the network fails.
This rough description of the problem admits many different versions, depending on the type of
network we are dealing with, the way we are required to recover from arc failures, and especially on
any structure imposed on the vector of reserved capacities itself. In this paper, we consider two types
of constraints on the capacity reservation vector.
1 Costs are often altered so that they exponentially increase as the load on an edge approaches its capacity.
1. Integrality: We may be forced to reserve capacity in discrete amounts, so that our reservation vector
must be integral.
2. Structural: It may be required that our reservation vector be formed by selecting a collection of
arc-disjoint (s; t)-paths (i.e., directed paths from s to t in the network), and assigning a capacity to
each path - we call such a reservation a diverse-path reservation.
We begin Section 2 by studying resilient diverse-path reservations. This version arose out of a problem
encountered by British Telecom which was solved by two of the authors in a consultancy report [13]
- this work forms the basis of Section 2.1. The research described in this paper is partly an attempt
to determine conditions under which such diverse-path solutions will be optimal in general.
Diverse-path solutions have several features which are attractive to network planners. For a start, a
diverse-path routing may be "hardwired" at the terminating nodes, thus decreasing routing complexity.
If a traffic flow control is centralized, then this allows load balancing of traffic over the collection of
diverse paths. Even if this is not the case, as in a noncooperative network, the restoration phase is
much simpler since an arc failure may be treated as a path failure, and traffic routed along the path
may be shifted to the remaining non-failed paths. Finally, like shortest paths, they are conceptually
simple to visualize. In Section 1.3 we give an overview of different types of resilience and restoration
and discuss some of the issues involved.
In Sections 2.1 and 2.2 we describe an extremely simple algorithm to find a minimum cost resilient
diverse-path reservation. If the paths are pre-specified, this may be used to find the optimal integral
reservation. If the paths are not specified, the integer version of this problem turns out to be NP-hard.
Here instead we give a polytime 1
14 -approximate algorithm in the case of resilience against the failure
of one arc, and show this bound is the best possible (if P 6= NP). Similar results hold if k takes other
values, or is unrestricted.
In Section 3, we discuss the general problem of resilience with no side-constraints. We give several
examples of basic optimal solutions, showing that we do not always obtain anything resembling a
diverse-path reservation in the general case.
We show that the integral resilience problem is NP-hard. However, we do show that the naive algorithm
that allocates capacity T to the arcs of a cheapest pair of arc-disjoint (s; t)-paths gives a solution with
at most k times the cost of an optimal solution.
1.2 Notation and Definitions
We start with basic definitions and facts concerning flows in networks.
Throughout, we suppose (sometimes implicitly) that we are given a directed graph (network) D = (V, A)
with node set V and arc set A. We shall always assume that D comes with two nodes permanently
fixed as the source (or origin) s and the destination (or sink) t. We also suppose that we
are given a rational number T (usually an integer) representing the required traffic flow from s to t
through the network D in the case of failure. Finally, we are also given a vector (c_a) of non-negative
rational (again, usually integer) costs on the arcs a of D. Even though there may be parallel arcs, we
often use the notation (u, v) when no confusion arises.
For any S ⊆ V, let δ⁺_D(S) (or simply δ⁺(S) if the context is clear) denote the set of arcs with tail in
S and head in V − S, and let δ⁻(S) denote the set of arcs with head in S and tail in V − S. For a node v ∈ V, we
write δ⁺(v) := δ⁺({v}) and δ⁻(v) := δ⁻({v}); for S, S' ⊆ V, we denote by [S, S'] the set of arcs with tail in S and head in S'. We call
S ⊆ V an (s, t)-set if s ∈ S and t ∉ S. An (s, t)-set S is called s-minimal if the graph induced by
S contains a directed path from s to each node of S.
Let Q₊ denote the set of non-negative rational numbers, so that Q₊^A is the set of all assignments of
a non-negative rational to each member of the arc-set A, which we will frequently view as a vector.
For any vector x ∈ Q₊^A and A' ⊆ A, we denote by x(A') the sum Σ_{a∈A'} x_a. We let I(x) denote the
support of x, that is, I(x) := {a ∈ A : x_a > 0}.
A vector x ∈ Q₊^A is an (s, t) flow vector (or simply a flow) if x(δ⁺(v)) = x(δ⁻(v)) for every node v
other than s and t, and (therefore) x(δ⁺(s)) − x(δ⁻(s)) = x(δ⁻(t)) − x(δ⁺(t)). In this case, the value of the flow is
x(δ⁺(s)) − x(δ⁻(s)).
For any rational M and (s, t)-set S, the M-cut constraint for S is x(δ⁺(S)) ≥ M, i.e., the total
capacity of arcs leaving S is at least M. We define the M-cut polyhedron as
C(M, D) := { x ∈ Q₊^A : x(δ⁺(S)) ≥ M for each (s, t)-set S }.   (1)
Network flow theory asserts that for a given vector x ∈ Q₊^A, there is a flow vector w ≤ x of value M if
and only if x ∈ C(M, D). It is an exercise to show that in (1) we need only include the cut constraints
corresponding to s-minimal sets.
A vertex of a polyhedron P ' R n is an extreme point of P, i.e., a vector in P that is not a convex
combination of two other vectors in P. For a polyhedron P in R n , a vector y 2 P is a vertex if and
only if there is some linearly independent set of n inequalities defining P that are satisfied by y with
equality. Since we consider only polyhedra in R₊^n, if a linear function is bounded below on P, then it
attains its minimum at a vertex; such a vertex is called a basic optimal solution to the minimization
problem.
1.3 Types of Resilience
We consider vectors x ∈ Q₊^A of reserved capacities on the arcs of D such that if any k arcs of D are
deleted, then the remaining arcs have sufficient capacity. A vector x ∈ Q₊^A is (T, k)-resilient if, for each
set K of k arcs, the capacities on arcs in A − K are sufficient to admit an (s, t) flow of value T. As in
the previous section, this is equivalent to requiring that x satisfies the constraint x(δ⁺(S) − K) ≥ T
for each (s, t)-set S and each subset K ⊆ δ⁺(S) of size k. In fact, it is easily shown that we need only
require these constraints for s-minimal sets S.
We also call a (T, 1)-resilient reservation T-resilient, and for the remainder of this section we do indeed
restrict ourselves to the case k = 1. Everything here goes through in much the same way for the case of general k.
As with the standard network flow problem, the problem of finding a minimum cost T -resilient reservation
vector can be expressed as an optimization problem over a certain polyhedron, which we now
describe.
For any rational T, (s, t)-set S, and arc e ∈ δ⁺(S), the partial T-cut constraint associated with the
pair (S, e) is the constraint x(δ⁺(S) − {e}) ≥ T. The resilience polyhedron is defined by the system of
all partial cut constraints:
R(T, D) := { x ∈ Q₊^A : x(δ⁺(S) − {e}) ≥ T for each (s, t)-set S and e ∈ δ⁺(S) }.   (2)
Note that R(T, D) is empty if there is an (s, t)-set S with δ⁺(S) of size at most 1, and otherwise
R(T, D) is full-dimensional. Also note that if x ∈ R(T, D), then setting any x_e = 0 results in a
vector in C(T, D). Thus if we reserve the capacities x, then even if an arc fails, there is still enough
capacity to build an (s, t) flow of value T. Conversely, any vector x not in R(T, D) fails the partial
T-cut constraint for some pair (S, e), and hence if e fails, there is not sufficient capacity in the network
to support a flow of value T. Therefore R(T, D) consists exactly of the T-resilient vectors, and the
problem of finding a minimum cost T-resilient reservation is that of minimizing the linear function
Σ_{a∈A} c_a x_a over R(T, D).
A consequence of this formulation is that there is a polynomial-time algorithm to find a minimum
cost T -resilient vector. Indeed, it is easily seen that the separation problem for R(T; D) amounts to
solving jAj maximum flow problems. Still better, as noted previously by several researchers, we can
rephrase the problem as that of finding one (s, t) flow vector y^a of value T for each failing arc a (i.e., with
y^a_a = 0), along with a common upper bound x ≥ y^a whose cost is to be minimized: this formulation
constitutes a linear program with a polynomially bounded number of variables and constraints. This
is not offered as a practical approach however, and the remainder of the paper addresses the task of
finding more direct combinatorial algorithms.
Let us give an example, which also illustrates a possible limitation of our notion of resilience. Consider
the network D in Figure 1. For appropriate costs, the arc values in Figure 1(a) form a (unique)
minimum cost 6-resilient reservation vector: in other words, the vector shown is a vertex of the
polytope R(6; D). A "working flow" of value 6 for this network is displayed in Figure 1(b), where the
numbers in brackets denote the "spare capacity" for this flow. Note however that if the arc (u; t) were
to fail, and we are asked to redirect the one unit of traffic currently flowing along the path s !
then this cannot be done without altering flow values on paths which were unaffected by the arc's
failure. This might not be acceptable in some practical situations.
Figure 1: A resilient reservation vector with T = 6, shown in parts (a) and (b).
It is easily seen that this situation does not arise if we demand that our reservation vector x consist
of a collection of arc-disjoint paths, which is one motivation for considering that restriction.
Another way to refine the notion of resilience so as to address this issue is to replace the partial T-cut
constraints by stronger inequalities, the net partial T-cut constraints, one for each pair (S, e). A vector x
is net T-resilient if it satisfies all the net partial T-cut
constraints; we denote by N(T, D) the set of all such vectors. For instance, the reservation vector in
Figure 1(a) fails the net partial 6-cut constraint for a suitable pair (S, e). It is
not hard to show that, given a net T -resilient reservation vector x, and any flow of value T , thought
of as a combination of traffic flowing down (s; t)-paths, on the failure of any arc, flow of value T can
be restored without altering the routing of any unaffected traffic. Net resilience is of course a much
stronger requirement than resilience, but it does share most of the good computational properties.
For more details, see the technical report [11], which is a longer version of the present paper.
We close this section with a brief discussion of other types of resilience. We have concentrated on the
case where we require resilience against one or more arc failures. One may also wish to guard against
node failures. We remark only that this problem can be formulated using our previous models by
applying standard splitting operations on nodes other than s and t.
Another possibility is that we may wish to guard against losing some fraction α ∈ [0, 1] of the flow on
each arc, giving, for each pair (S, e), the constraint: x(δ⁺(S) − {e}) + (1 − α) x_e ≥ T.
A vector x satisfying these constraints will be called (α, T)-resilient. Note that if α = 0, this leads
to a formulation for traditional network flows. If α = 1, it is the partial T-cut constraint again. The
effect of intermediate values of α can essentially be analyzed by duplicating arcs.
To be precise, suppose that ff = p=q is rational, and that we require an (ff; T )-resilient reservation x
in a network D. To model this, we form D 0 by taking q copies of each arc of D, each with the same
cost as in D. It is easy to check that a minimum cost reservation vector for D 0 that is qT -resilient
against the loss of any p arcs can be constructed from a minimum cost (ff; T )-resilient reservation in
D by duplicating the reservation on each set of multiple arcs.
One final possibility is that the structure of the reservation vector x when all arcs are functional is
of paramount importance. A likely scenario is that there is some value M such that the vector x is
required to admit an M -flow, that is, there is an (s; t) flow f of value M with f - x, or maybe even
that x itself is required to be an M -flow. We don't discuss such constraints in this paper, but we plan
to discuss this situation in a future paper [12], where we also consider the effect of imposing upper
bounds on the capacities that can be reserved on each arc.
Resilient Diverse-Path Reservations
As we have mentioned, our aim with resilience problems is often to reduce to the case where the
reservation has to form a collection of arc-disjoint paths, and we term such a reservation a diverse-
path reservation. We thus need to consider how we solve the problem once it has been reduced to this
special case.
We start with the case where the collection of diverse paths is fixed ahead of time. This forms the
basis for much of what follows and so we present it in detail. The material in the first two parts of
this section, written for a non-technical audience, appears as [13]. A more thorough handling of the
polyhedron considered herein (including a complete linear description of the integer hull) has been
given by Bienstock and Muratore - see [10]. The case can also be regarded as a special case of
a problem treated by Bartholdi, Orlin and Ratliff [7]; our methods give a somewhat simpler solution
in this special case.
2.1 Reservations on a Fixed Set of Paths
Suppose that we are given a network and two integers T and k, together with a source s and a
destination t, along with a fixed set of diverse paths P_1, ..., P_m from s to t on which capacity may
be reserved for (s, t) traffic. We want to ensure that, in the absence of any k of the paths P_i, there
is sufficient capacity on the remaining paths to carry a given amount T of traffic from s to t. To
accomplish this requires us to fix a capacity x_i for each path P_i, and give each arc on path P_i capacity
x_i. The cost of the reservation can be calculated from the per-unit total costs c_i of the paths P_i. Thus
in practice we can think of the network as consisting of just the two nodes s and t, with the P_i being
single arcs of cost c_i from s to t.
A diverse-path reservation as above is (T, k)-resilient if the total amount of reserved capacity, excluding
any k arcs, is at least T. We may assume that the paths P_i are numbered in increasing order of per-unit
cost c_i, so we can state the problem formally as follows.
k-Failure Allocation Problem.
Given a demand T, and a sequence of costs
c_1 ≤ c_2 ≤ ... ≤ c_m,   (4)
find non-negative real numbers x_1, ..., x_m minimizing Σ_{i=1}^{m} c_i x_i, subject to the
conditions:
For any k-set S ⊆ {1, ..., m}:  Σ_{i∉S} x_i ≥ T.
We shall also consider the integral version of this problem.
A result that is fundamental in much of the rest of the paper is that the optimal allocation of capacities
(for any costs satisfying (4)) is always achieved by some vector z^{j,k} of the form
z^{j,k}_i = T/(j − k) for i ≤ j,  and  z^{j,k}_i = 0 for i > j,
for some j between k + 1 and m. Thus we need only find the appropriate j for which the cost is
minimized.
Of course, the single-failure problem is just the special case of the k-failure problem with k = 1; in
this case we denote by z^j the vector z^{j,1}.
Theorem 1 An optimal solution to the k-Failure Allocation Problem is obtained at one of the solutions
z^{j,k}.
Proof. Because of the symmetry of the situation, we know that there is an optimal solution such
that x_1 ≥ x_2 ≥ ... ≥ x_m. Thus we lose nothing by including these inequalities as constraints. Once
we do this, we see that, if the constraint x_{k+1} + x_{k+2} + ... + x_m ≥ T holds, then all the other
resilience constraints given by the removal of k of the paths are automatically satisfied. Thus we may
reformulate the problem as follows.
Given a demand T, and a sequence of costs c_1 ≤ c_2 ≤ ... ≤ c_m, find non-negative real numbers
x_1 ≥ x_2 ≥ ... ≥ x_m ≥ 0 minimizing Σ_{i=1}^{m} c_i x_i subject to the constraint x_{k+1} + ... + x_m ≥ T.
We note for future reference that the same reformulation goes through if the x_i are all constrained to
be integers.
Consider a basic optimal solution for the resulting linear program, which necessarily satisfies m linearly
independent inequalities with equality. If there are j non-zero variables at the optimum, then the
only possibility is that the inequalities x_1 ≥ x_2, ..., x_{j−1} ≥ x_j, the non-negativity constraints x_i ≥ 0 for i > j,
and the constraint x_{k+1} + ... + x_m ≥ T are all satisfied with equality, i.e., that x_1 = ... = x_j = T/(j − k) and x_{j+1} = ... = x_m = 0.
This is just the solution z^{j,k} and the result follows.
Choosing amongst the various solutions z^{j,k} is evidently not hard, and in fact the structure of the
problem allows a particularly simple procedure for doing this. The cost of the solution z^{j,k} is
A_{j,k} := T (c_1 + c_2 + ... + c_j) / (j − k).
We are trying to minimize A_{j,k} over the range of possible values j = k + 1, ..., m.
Note that the A_{j,k} are decreasing in j up to the minimum, which is attained for the last j where
A_{j,k} ≤ A_{j−1,k}, and increasing thereafter: this unimodality property will be a recurring theme. Thus,
for each j = k + 1, k + 2, ... in turn, we check whether A_{j+1,k} ≥ A_{j,k}, i.e., whether (j − k) c_{j+1} ≥ c_1 + ... + c_j. If this is the case, then we terminate and
z^{j,k} is optimal. If we reach j = m, then z^{m,k} is the optimal solution.
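A minimal Python sketch of this procedure (our own illustration, not from the paper); best_fractional_allocation is a hypothetical helper name, and the early break relies on the unimodality of the A_{j,k} noted above.

```python
def best_fractional_allocation(costs, T, k):
    """costs: per-unit path costs sorted increasingly. Returns (j, per-path capacity, cost)."""
    m = len(costs)
    best = None
    prefix = 0.0
    for j in range(1, m + 1):
        prefix += costs[j - 1]
        if j <= k:
            continue
        cost_j = T * prefix / (j - k)        # cost A_{j,k} of the candidate solution z^{j,k}
        if best is None or cost_j < best[2]:
            best = (j, T / (j - k), cost_j)
        elif cost_j > best[2]:
            break                             # costs are unimodal in j, so we can stop
    return best

print(best_fractional_allocation([1, 1, 3, 10], T=6, k=1))   # -> (2, 6.0, 12.0)
```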
k-Failure Allocation Problems with integer capacity allocations
To find the optimal solution in the case where the allocated capacities x_i are required to be integers
(and the demand T is an integer), we will show that the following procedure suffices.
First find the optimal solution z^{j,k} to the original (non-integral) k-Failure Allocation Problem. Then
(if the z^{j,k}_i are not already integers) consider the two integer solutions "nearest" to z^{j,k}, as follows.
(a) Set r equal to either ⌊T/(j − k)⌋ or ⌈T/(j − k)⌉. (Here ⌈a⌉ denotes the next integer above the real
number a, and ⌊a⌋ the next integer below.) Note that r may be zero if T < j − k.
(b) If r is one of the two chosen values and r is nonzero, we attempt to construct a solution x with
all the non-zero x_i, except possibly one, equal to r. To do this, we set l := ⌈T/r⌉ + k,
x_1 = ... = x_{l−1} = r, x_l = T − (l − 1 − k)r, and x_i = 0 for i > l.
(The choice of l ensures that 0 < x_l ≤ r. With r = ⌊T/(j − k)⌋ we
could have l > m, but this is not possible with r = ⌈T/(j − k)⌉.)
Note that this is a feasible solution, since removing any k of the x_i leaves capacity at least (l − 1 − k)r + x_l,
which is constructed to be at least T.
(c) We now have either one or two candidate integral solutions, corresponding to the two choices of
r in (a). We denote by z^{j,k,+} the solution with r = ⌊T/(j − k)⌋ (when it is feasible), and by z^{j,k,−} the
solution with r = ⌈T/(j − k)⌉ (which is always feasible). To finish, just calculate the costs of the two
solutions, and choose the lower.
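The two candidate integral solutions are straightforward to build. A Python sketch (our own illustration; integral_candidates is a hypothetical helper name) following steps (a)-(c), where j is the index of the optimal fractional solution z^{j,k}:

```python
import math

def integral_candidates(costs, T, k, j):
    """Construct the candidate integer solutions of steps (a)-(c) and return the cheaper one."""
    m = len(costs)
    candidates = []
    for r in {math.floor(T / (j - k)), math.ceil(T / (j - k))}:
        if r == 0:
            continue
        l = math.ceil(T / r) + k              # number of paths actually used
        if l > m:
            continue                          # this rounding is infeasible
        x = [r] * (l - 1) + [T - (l - 1 - k) * r] + [0] * (m - l)
        candidates.append((sum(c * xi for c, xi in zip(costs, x)), x))
    return min(candidates)

# Fractional optimum for these costs is j = 3 (capacity 3.5 per path).
print(integral_candidates([1, 1, 1, 10], T=7, k=1, j=3))   # -> (11, [4, 4, 3, 0])
```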
Theorem 2 Suppose we have an instance of the k-Failure Allocation Problem in which the optimal
fractional solution is z^{j,k}. Then the optimal solution x with all the x_i integer is either z^{j,k,+} or z^{j,k,−}.
Proof. We work with the reformulation of the problem as in the beginning of the proof of Theorem 1,
which, as we noted, is also valid for the integer case. Suppose that x is an optimal integer solution.
Clearly, since the only constraints on the first k variables are that x 1 - x 2 - x k+1 , we will
have at the optimum. Now suppose that some x j is non-zero, but that not all
of x are equal. Then let i be the minimum index with x
the previous observation. Also let x l be the last non-zero variable, so
increasing x i by one, and decreasing x l by one. This keeps the solution feasible, since all the variables
remain in the right order, and x unaltered. Also, this operation does not increase the
cost.
We have thus shown that one may restrict attention to integral solutions with the following form: for
some j with k we have all of x 1 , . , all of x equal to 0. If
the value of x j is q, then the common value of the earlier x i is which will be an
integer, at least q.
At this point, there are still potentially a large number of candidate solutions; in fact, there is one for
each integer value of x 1 at least T=m, namely to set e, and
We next observe that the integer solution given above is a convex combination of the two "fractional"
solutions z j;k and z j \Gamma1;k . To be precise, our solution is
z
Thus each of our candidate integral solutions is a convex combination of two consecutive z i;k s. Let
A be the set of all such convex combinations; we think of A as a "path" with vertices corresponding
to z We note that any vector on this path gives a feasible solution. Next observe
that the first coordinate values are decreasing along A, and the values for any vector in A are thus
determined by the first coordinate x 1 . Moreover, this solution will be integral if and only if x 1 is an
integer. Also, since the costs of the vertices z i;k are unimodal, and cost is a linear function between
each pair of vertices, the solution cost is unimodal along A, with the minimum obtained at the vertex
corresponding to some z j;k , with first coordinate equal to, say, r. These arguments show that the
lowest cost integral solution among our candidates, and so the overall optimal integer solution, is the
solution on A corresponding to taking x 1 to be either dre or brc, i.e., taking either z j;k;+ or z j;k;\Gamma .
This completes the proof.
We close this section with a bound relating the costs of the optimal integral and fractional solutions.
Proposition 3 For any j > k ≥ 1, let O be the cost of a solution z^{j,k} and O' be the cost of the integral solution z^{j,k,−}. Then O' ≤ (1 + k/T)·O.
Proof. Recall that, in the solution z^{j,k}, the j cheapest paths are chosen, each with capacity T/(j−k). The "rounded" solution z^{j,k,−} is obtained from this by taking the l − 1 cheapest paths with capacity x = ⌈T/(j−k)⌉, and the next cheapest path with capacity x_l = T − (l − 1 − k)x.
The first observation is that the average cost per unit of reservation in z^{j,k,−} is no greater than that in z^{j,k}. Thus O'/O is at most the ratio of the total numbers of units of capacity reserved in the two allocations.
In z^{j,k}, a total of jT/(j−k) units of capacity are allocated, while in z^{j,k,−}, the total is (l−1)x + x_l = T + kx. Hence we have
O'/O ≤ (T + kx)(j − k)/(jT).
Now (j − k)x ≤ T + (j − k) by the definition of x, so we have the estimate as claimed.
Corollary 4 If O_I is the cost of the optimal integral solution to an instance of the k-failure allocation problem with target flow T, and O_F the cost of the optimal fractional solution to the same problem, then O_I ≤ (1 + k/T)·O_F.
Consideration of the proof of Proposition 3 shows that the ratio (1 + k/T) cannot in general be improved. Indeed, if our network consists of a large number M of paths of cost 1, then it is easy to see that O_I = T + k while O_F tends to T as M grows.
2.2 Diverse-Path Reservations without Specified Paths
We now consider what happens when we are still required to find a resilient reservation consisting of
a set of diverse paths in a given network D, but we are not restricted as to what paths we may use.
We give a fast algorithm for the fractional case, but show that the integral case is NP-hard.
We start with the fractional case. The results of Section 2.1 imply that the optimal solution will have as support the arcs of some j > k diverse paths, with each arc in the support given capacity T/(j−k). Of course, j can only take integer values up to the (s,t)-connectivity λ(D) of D. We may take advantage of this structure and apply the successive shortest path method - c.f. [1] - for minimum cost flow problems, thus only needing to solve λ(D) shortest path problems.
For an arc a = (u, v) ∈ A, we let a^− denote an "artificial" arc (v, u), not present in D. In the course of the following algorithm, we construct a series of auxiliary digraphs D_j, each of which will contain exactly one of each pair a, a^−. We assume that we are given a digraph D with λ(D) > k.
Algorithm Paths
Set j := 0, D_0 := D, c^0 := c, P_0 := ∅.
While (D_j contains a directed (s,t)-path)
    Let Q_j be a minimum c^j-cost directed (s,t)-path in D_j.
    Let F be the set of original arcs of D traversed by Q_j and R the set of original arcs whose artificial reverse a^− is traversed by Q_j; set P_{j+1} := (P_j ∪ F) \ R.
    If j + 1 > k, let z^{j+1,k} be the vector obtained by assigning T/(j + 1 − k) to each arc in P_{j+1}.
    If j ≥ k + 1 and cost(z^{j+1,k}) ≥ cost(z^{j,k}), then Output(z^{j,k}) and Quit.
    D_{j+1} and c^{j+1} are the same as D_j and c^j, except that for each arc a with a or a^− on Q_j:
        if a ∈ R, remove a^−, and include a with cost c^{j+1}_a = c_a;
        if a ∈ F, remove a, and include a^− with cost c^{j+1}_{a^−} = −c_a.
    Set j := j + 1.
EndWhile
We will also refer to the version of the algorithm which does not terminate early and thus generates a reservation vector z^{j,k} for every j with k < j ≤ λ(D).
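The following Python sketch outlines one way the Paths procedure can be realized, using a Bellman-Ford search so that the negative costs on reversed arcs are handled; the dictionary representation of arcs, the assumption that D has no pair of anti-parallel original arcs, and all function names are choices of this sketch rather than part of the paper.

    def bellman_ford(nodes, arcs, s):
        # arcs: dict {(u, v): cost}; handles negative arc costs (no negative cycles)
        dist = {v: float('inf') for v in nodes}
        pred = {v: None for v in nodes}
        dist[s] = 0
        for _ in range(len(nodes) - 1):
            for (u, v), c in arcs.items():
                if dist[u] + c < dist[v]:
                    dist[v], pred[v] = dist[u] + c, u
        return dist, pred

    def paths(nodes, arcs, s, t, T, k):
        # arcs: dict {(u, v): nonnegative cost}; returns (cost, reservation) or None
        residual = dict(arcs)        # current auxiliary digraph D_j with costs c^j
        support = set()              # arcs of D carrying the j diverse paths (P_j)
        best, j = None, 0
        while True:
            dist, pred = bellman_ford(nodes, residual, s)
            if dist[t] == float('inf'):
                break
            v = t                    # trace the min-cost path Q_j and update the residual
            while v != s:
                u = pred[v]
                if (v, u) in arcs and (v, u) in support:   # reverse arc used: cancel (v, u)
                    support.discard((v, u))
                    residual[(v, u)] = arcs[(v, u)]
                else:                                      # forward arc of D used
                    support.add((u, v))
                    residual[(v, u)] = -arcs[(u, v)]
                del residual[(u, v)]
                v = u
            j += 1
            if j > k:
                cap = T / (j - k)
                cost = sum(arcs[a] for a in support) * cap
                if best is not None and cost >= best[0]:
                    return best      # unimodality: no cheaper solution follows
                best = (cost, {a: cap for a in support})
        return best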
Proposition 5 (c.f. [1]) Let c be a nonnegative vector of arc costs in a network D = (N, A). The algorithm Paths finds a minimum cost (T, k)-resilient diverse-path reservation.
To establish correctness, we need two facts. First, for each j, the collection P j induces a minimum
cost collection of j diverse (s; t)-paths; this follows from the correctness of the successive shortest
path method. This implies that each solution z j;k is the minimum cost solution using j paths, and
hence the minimum cost (T; k)-resilient vector is among these vectors z j;k . Moreover, traditional flow
theory implies that for each j > k, z^{j,k} is a minimum cost flow of value jT/(j−k) subject to the capacity T/(j−k) on each arc. Second, as we now show, the sequence cost(z^{j,k}) is unimodal for j > k, so the early termination is justified.
Proposition 6 Let h, i and j be such that k < h < i < j. Then cost(z^{i,k}) ≤ max{cost(z^{h,k}), cost(z^{j,k})}.
Proof. Suppose the contrary: there exist h, i, j with k < h < i < j, cost(z^{i,k}) > cost(z^{h,k}) and cost(z^{i,k}) > cost(z^{j,k}). Choose λ ∈ (0, 1) such that z' = λ·z^{h,k} + (1−λ)·z^{j,k} is a flow of value iT/(i−k); the same choice of λ guarantees that z' does not exceed T/(i−k) on any arc. Thus by the remarks preceding the proposition, cost(z^{i,k}) ≤ cost(z') ≤ max{cost(z^{h,k}), cost(z^{j,k})}, a contradiction. 2
integer diverse-path reservations
We now consider the problem of finding a minimum cost T -resilient integral reservation whose support
is a collection of diverse paths. Thus we denote by idp the problem each instance of which consists
of a network with two specified nodes s; t together with nonnegative integer costs on the arcs and an
integer T . An optimal solution for an instance will be a minimum cost T -resilient reservation obtained
by reserving integer capacities on a collection of diverse (s; t)-paths.
The results of the previous section once again show that the support of an optimal such solution will consist of a collection of diverse (s,t)-paths P_1, ..., P_l; the arcs of the first l − 1 paths will reserve a common amount, r, of capacity, and the last path's arcs will reserve capacity at most r. 2 We now show that the subproblem of idp with T = 3, denoted by 3-idp, is NP-hard.
Let 2div-paths denote the problem of determining whether a given digraph D, with four distinct nodes s_1, s_2, t_1, t_2, contains a pair of arc-disjoint paths P_1 and P_2, where P_i runs from s_i to t_i. Fortune, Hopcroft and Wyllie [16] show that this problem is NP-complete.
Suppose that we are given an instance of 2div-paths as above. Construct a digraph obtained from
D by adding new nodes s; t as well as the arcs (s;
2, 1 and 2 respectively. All remaining arcs will have cost zero. This is our instance of 3-idp. From
the preceding section, we deduce that an optimal 3-resilient reservation on diverse paths will either
have support on (i) 2 diverse paths, in which case capacity 3 is reserved on each of the arcs of these
paths, or (ii) 3 diverse paths in which case two of the paths will have reserved capacity 2 and the third
capacity 1.
Note that the cheapest collection of 2 diverse paths has cost 5 and hence any solution of the form (i)
will have cost at least 15. Next note that if there exists a positive solution to the instance of 2div-
paths, with P i a path between s then by assigning capacity 2 to the arc (s; t) and the
arcs of P 1 , and capacity 1 to the arcs of P 2 we obtain a solution to 3-idp of cost 14. Conversely, if
the instance of 2div-paths has no solution, then any "3-path" solution to 3-idp will use only paths
of cost 3, from which we deduce that the reservation will cost at least 15. Thus the optimal solution
to the instance of 3-idp is 14 if and only if the instance of the 2-disjoint path problem has a positive
solution.
Note that the above result shows that approximating 3-idp to within a factor of (1 + 1/14) is NP-hard. On the other hand, Proposition 3 implies that applying the rounding procedure to an optimal fractional solution will yield a (1 + 1/3)-approximation to the optimal integral solution. We now improve this latter bound.
We continue to restrict attention to the case allow T to take arbitrary integer values.
Consider the polynomial time algorithm A (based on Paths) that finds, for each value of j, the
fractional solution z j;1 based on some cheapest set of j diverse paths, and the two "rounded" integer
solutions z j;1;\Gamma and z j;1;+ , and chooses the best among all of the integer solutions. The algorithm A
In essence we thus need to solve an integer 2-multicommodity flow problem where both commodities have the same
origin and destination.
can fail to find the optimal integer solution because it may use a minimum cost set of l paths in which
the costs are distributed "more evenly" between the paths (in particular, the most expensive path is
cheaper) than in some other (not necessarily even minimum cost) set of l paths. All we know is that
an optimal solution for an instance of idp will have the same form as either z l;1;\Gamma or z l;1;+ for some
l, since it will arise from a similar rounding process applied to some collection of diverse paths.
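A compact sketch of the algorithm A follows; it assumes a routine diverse_path_costs(j) that returns the individual costs of the paths in some cheapest set of j diverse (s,t)-paths (for instance as produced by the Paths procedure), which is an assumption of this illustration.

    import math

    def algorithm_A(T, diverse_path_costs, j_max):
        """Sketch of algorithm A for idp with k = 1; j_max is the (s,t)-connectivity."""
        best = None
        for j in range(2, j_max + 1):
            costs = sorted(diverse_path_costs(j))
            for r in {T // (j - 1), math.ceil(T / (j - 1))}:    # the two rounded values
                if r == 0:
                    continue
                l = 1 + math.ceil(T / r)                        # paths actually used
                if l > j:
                    continue
                caps = [r] * (l - 1) + [T - (l - 2) * r]        # integer reservations
                cost = sum(c * x for c, x in zip(costs, caps))  # larger capacity on cheaper paths
                if best is None or cost < best:
                    best = cost
        return best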
Let OF be the cost of a fractional optimum solution, O I the cost of an integer optimum solution, and
OA the best solution among those considered by the algorithm, i.e., the value returned by A. Clearly
we have OF - O I - OA . We prove the following result.
Theorem 7 The algorithm A is a 1/14-approximate algorithm for idp with k = 1; that is, for each instance: O_A ≤ (15/14)·O_I. Moreover, there is no ε-approximate algorithm for idp with ε < 1/14, unless P = NP. The latter statement, of course, follows from the previous example showing that the decision version of idp with T = 3 is NP-hard. We note that the quality of approximation by the algorithm depends
greatly on the input T. If we view A as an infinite collection of algorithms {A_T}, T = 1, 2, ..., each restricted to instances with a fixed value of T, then many of these are ε-approximate algorithms with ε < 1/14. Indeed, Proposition 3 tells us that, for each T, O_A ≤ (1 + 1/T)·O_I. Furthermore, we note in the course of the proof that, for certain small values of T, A exactly solves the subproblem T-idp.
Proof. Suppose an optimum integer solution z uses paths each with reserved capacity
r, and path P l+1 , with reserved capacity y. We may assume that 0 - y - r, l - 2, and that r
is either bT=(l \Gamma 1)c or dT =le, with y, since otherwise there is a solution of one of
the given forms with no greater cost using the same set of paths, by Theorem 2. We also have that
Clearly we can assume that O I 6= OA . In particular, this means that otherwise one of
z l+1;1 or z l;1 would be an integral solution found by A whose cost was at most that of z . Thus
The values of T and r determine y and l and, as we noted before the proof, we may assume that
7. There are thus only a finite number of possible forms of z (in
just 23 pairs (T; r) satisfy all the restrictions mentioned so far), and we shall rule all these out
using the same basic method. At this point, let us observe that there are no cases with
or 12; for these values of T , any r not dividing T exactly is not of the form bT=(l \Gamma 1)c or dT =le for
any integer l. In particular, we may assume that we have T - 3.
We require a lower bound on the cost of z . Notice that z can be written as y times the characteristic
vector of some set of l times the characteristic vector of some set of l
diverse paths. Let C i be the cost of reserving one unit of capacity on the arcs in a cheapest collection
of i diverse paths. So we have O I = C(z
Our algorithm A considers some integer solution of the same form as z (i.e., the same values of
using some set of l +1 paths of cost C l+1 . The cost of this solution is at most what it would
be if all the l +1 paths had the same cost, which is (lr +y)C l+1 =(l +1). So OA is at most this quantity,
i.e.,
We aim for a similar bound on C l , and to get this we need to look at a solution produced by A on at
most l paths. Accordingly, let r there is some integer solution with reserved capacity
r 0 on the first m ! l paths from C l , and v - r 0 on one further path, with total of reserved capacities
on all the paths equal to T Our algorithm will have looked at an integer solution with a cost at
least as good as some solution of this form, and the average cost of a path in any solution of this form
is at most C l =l, as in the proof of Proposition 3.
So we have OA -
We conclude that
O I -
After a little manipulation, this becomes
O I
G:
We could now run through all the 23 cases separately and show that G - 1=15 in each case. Let us
proceed slightly more systematically.
First, we consider all cases with l = 2. In this case, we have r which implies
that r
Next, if This gives
l - 2. Assume from now on that l; r - 3. This implies that T - 7 and that r ! T=2.
If r we have G 1), and we are
done if 4. On the other hand, if
thus the only two cases with
are
From here on in, there seems to be no great saving on dealing with all the cases individually. Here
are all the cases not so far ruled out.
All the values for G above are less than 1=15, so the theorem is proved.
It is clear that this technique can be used to prove similar results for other values of k. If k is
unrestricted, it turns out that the algorithm Paths is a 1
5 -approximation algorithm for idp, and that
this is best possible, i.e., there is no ffl-approximation algorithm for idp with
omit the details.
Resilient Reservations with no Restrictions
3.1 Examples of Vertices of R(T;D)
Although the problem of finding minimum cost resilient reservations can be solved in polynomial
time, we have been unable to find a truly practical algorithm for the problem. It is natural to believe
that there might be an algorithm which uses some generalization of the pivot operation (i.e., cycle
augmentations) for standard minimum cost flows. In order to explain why it is likely to be difficult
to find any such algorithm, we consider various examples of vertices of polyhedra R(T; D) - these are
basic optimal solutions to the resilience problem, and we also term these basic resilient reservations -
which are far from the pleasant diverse-path reservations we have been working with so far. We have
already seen one such example in Figure 1: two more are given in Figures 2 and 3 below. Here and
throughout this section, we restrict attention to the single-failure case.
Figure
2: A basic resilient reservation with a cycle
Figure
3: A vertex of both R(T; D) and N (T; D)
By definition, x is a vertex of R(T; D) if (i) x 2 R(T; D) and (ii) there is some subset of jAj linearly
independent constraints, from the system of non-negativity and partial T -cut constraints, which are
satisfied with equality by x. If the partial T -cut inequality for (S; e), where S is an (s; t)-set and e an
arc in is called a critical arc, and S is called a tight set,
for the reservation vector x.
One hope of understanding the structure of the vertices of R(T; D) is to understand the role of the
critical arcs. One may apply uncrossing techniques to show that if a is a critical arc, then there is no
tight set S for which a 2 For suppose this were the case, let e be a critical arc for S and let S 0
be a second (s; t)-set for which x(ffi
the left hand side is also equal to x(ffi In particular,
a Consider the case where e we have that
contradiction. The other cases follow similarly. This
immediately implies the following.
Proposition 8 If x 2 R(T; D) and all arcs in I(x) are critical, then I(x) is acyclic.
Figure
above shows a vertex of R(T; D) for the given digraph D in which the reservation vector
contains a cycle. The two arcs forming the cycle are not critical. Figure 4 shows another vertex x of
a polyhedron R(T; D) for which there are non-critical arcs a (the two arcs not incident with s and
with x a ? 0. (This example is also a basic optimal solution for the node-deletion version of the
problem; this reservation is resilient against the failure of any internal node.)
s
Figure
4: A basic resilient reservation for both standard and node-deletion versions
Further understanding of the structure of the vertices has eluded us; we offer two conjectures.
Conjecture 3.1 Let x be a vertex of R(T; D).
ffl There exists a directed path from s to t consisting only of critical arcs.
ffl There exists an (s; t)-set S such that the reserved capacities on all arcs in I(x) " have the
same value, and each arc in this set is critical.
Finally, we observe that basic resilient reservations may have components of the form pT
q for any
rational Indeed for any such p; q, Figure 5 displays such a vertex.
3.2 NP-completeness of Integer Resilience Problems, and an Approximation
Algorithm
We next show that the problem of finding a minimum cost integral T -resilient vector is NP-hard, even
in the single-failure case. We couch the problem as a decision problem.
Integer Resilience
Instance: a digraph D, with integer costs c ij on the arcs, with a single source s and destination t, a
target resilience T (integer), and a target cost C (integer).
q-p arcs
s
(pT)/q
Figure
5: A vertex of R(T; D) and N (T; D) with a component a given rational multiple of T
Question: is there an integer reservation vector x on the arcs of D such that c \Delta x - C, and such that
x is T -resilient?
Theorem 9 Integer Resilience is strongly NP-complete.
Proof. Certainly the problem is in NP, since checking T -resilience simply involves finding flows of
value T in the networks obtained by removing individual edges.
To prove the problem is NP-complete, we give a reduction from 3d-matching. Recall that an instance
of 3d-matching consists of three sets A, B, C of size n, and a collection T of m "triangles" each
containing exactly one element from each of A, B and C; the question is whether A can be
written as the disjoint union of n triangles from T .
Suppose that we are given an instance of 3d-matching as above. We show how to construct an
instance of Integer Resilience with m + 3n + 2 nodes and arcs each of cost 1, n, or 2n, such that there is a (4m + 3n − 1)-resilient integer reservation of cost at most (2n + 1)(4m + 3n) + n if and only if the original instance did possess a 3D-matching.
We take one node of D for each triangle abc 2 T , one node for each element of A
nodes s and t. We take four parallel arcs, each of cost 1, from s to each node corresponding to an
element of T . Each node abc has seven arcs leaving it: four, of cost 2n, go directly to t and one, of
cost n, to each of the constituent elements a, b and c. Finally, there is a single arc of cost n from each
element of A t.
Consider any T-resilient integer reservation vector x of cost at most (2n + 1)(T + 1) + n. First note that the reservation x must dominate a flow of value T + 1 from s to t, since otherwise there is a cut of capacity at most T, and deleting a reserved arc of that cut would leave capacity below T. Look first at the set A_0 of "expensive" arcs not incident with s. A flow of value T + 1 necessarily costs 2n(T + 1) on A_0. Thus the cost of x on A_0 is at least 2n(T + 1), and equality is only possible if every arc in A_0 has reserved capacity just 1. Thus on A_0, the reservation looks like the characteristic vector of a set of T + 1 diverse paths.
Such a set of paths must include every arc arriving at t, and uses between 4 and 7 arcs from each node of T. For v ∈ T, let d(v) be the number of reserved arcs leaving v. The cost of any integer T-resilient reservation not of this form must be strictly greater than 2n(T + 1) on these arcs.
Now consider the reservation vector x on the set A 1 of arcs leaving s. Certainly the total cost of
the reservation on A 1 is greater than T must have reserved capacity at least 2.
Thus we see that, since the total cost of x is at most (2n + 1)(T + 1) + n, the reservation on A_0 does indeed form a set of T + 1 diverse paths, with d(v) ∈ {4, 5, 6, 7} starting from each v ∈ T.
For the reservations on the four arcs between s and v must total at least d(v), and the sum
of any three of them must be at least after deleting an arc there still exists a T -flow).
The minimum cost of such a reservation between s and v is just 4 if since then only 1 need
be reserved on each arc. For d(v) ? 4, one routinely checks that such a reservation costs at least
since arcs of capacity at least 2 are needed. So the total cost of x on A 1 consistent with the
values d(v) is at least T + 1 plus the number N of elements v for which d(v) > 4. We know that the d(v) sum to T + 1 = 4m + 3n over the m elements v of T, so the excesses d(v) − 4 sum to 3n. Therefore N ≥ n, with equality if and only if just n elements of T have d(v) = 7, and d(v) = 4 for the remainder. In this case, the cost of x on A_1 is T + 1 + n; otherwise it is at least T + 2 + n.
Hence, if there is such a reservation vector x of total cost just (2n + 1)(T + 1) + n, there are just n nodes v with d(v) = 7, while the remainder have d(v) = 4. A collection of T + 1
paths on A 0 subject to these conditions must involve n elements of T for which the 3n arcs from these
elements to A [ B [C go to distinct nodes, i.e., a 3D-matching in the original instance.
Conversely, given a 3D-matching U , we can find a T -resilient reservation by reversing the argument.
We reserve capacity 1 on all arcs entering t, and all arcs leaving elements of U ; we further reserve
capacity 1 on arcs from s to elements of T n U , and capacity 2 on arcs from s to elements of U . It is
easy to check that this reservation is T -resilient, and has the required cost.
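The gadget of this reduction can be generated mechanically; the sketch below follows the construction as reconstructed above, so the exact target value T and cost budget should be treated as assumptions, as should the node naming and the multigraph encoding of parallel arcs.

    def build_reduction(A, B, C, triples):
        """Build the Integer Resilience instance from a 3d-matching instance.

        A, B, C are disjoint element sets of size n; triples is a list of (a, b, c)
        triangles.  Returns (arcs, T, budget) where arcs maps a node pair to the
        list of costs of its parallel arcs."""
        n, m = len(A), len(triples)
        arcs = {}
        for idx, (a, b, c) in enumerate(triples):
            v = ('tri', idx)
            arcs[('s', v)] = [1, 1, 1, 1]            # four parallel unit-cost arcs from s
            arcs[(v, 't')] = [2 * n] * 4             # four expensive arcs directly to t
            for x in (a, b, c):
                arcs[(v, ('el', x))] = [n]           # one arc of cost n to each element
        for x in list(A) + list(B) + list(C):
            arcs[(('el', x), 't')] = [n]             # single arc of cost n to t
        T = 4 * m + 3 * n - 1
        budget = (2 * n + 1) * (T + 1) + n
        return arcs, T, budget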
Of course, the requirement that all reservations be integers is crucial. However, notice that the
reservation we emerge with is actually net T -resilient as well, so the corresponding problem for net
resilience is also NP-hard.
On the positive side, there is a simple k-approximate algorithm for (T, k)-resilience in a network D - in the fractional or integral case - namely to find and output the minimum cost diverse-path reservation using k + 1 paths, which of course reserves capacity T on each arc of a cheapest set of k + 1 paths. The following result states that this is indeed a k-approximate algorithm.
Proposition 10 If x is a (T, k)-resilient vector, then cx ≥ (1/(k+1))·c z^{k+1,k}.
Proof. Let x be a minimum cost (T; k)-resilient vector. Define x 0 by setting, for each arc a,
Let S be any (s; t)-set. We claim that x
Suppose first that there is a set K of
k+1 , and so
If instead x a ! T
k+1 for all arcs a those in a set K of k arcs. Then x(ffi
and so x This proves our claim.
Now, since x for each (s; t)-set S, there exists an (s; t) flow x 00 of value (k+1)T such
that x 00 - x 0 , so in particular no arc has reserved capacity more than T ; thus x 00 is a (T; k)-resilient
vector.
As we remarked earlier, the diverse path reservation z k+1;k is a minimum cost (s; t) flow of value
subject to an upper bound T on the flow through any given arc. It follows that cz k+1;k -
Thus we have that the minimum cost diverse-path reservation has at most k + 1 times the cost of an optimal (T, k)-resilient reservation. Of course, this is of greatest interest in the case k = 1. The bound k + 1 is best possible: consider the following example. The network D has three nodes s, u and t. There are k + 1 arcs of cost 1 from s to u, and r arcs of large cost c from u to t. The minimum cost (1, k)-resilient reservation involves giving capacity 1 to the arcs between s and u, and capacity 1/(r−k) to the arcs between u and t, at a cost of k + 1 + rc/(r−k). However, the minimum cost diverse-path reservation, z^{k+1,k}, uses only k+1 of the arcs from u to t, at a total cost of (k+1)(1+c). For large c and r, the ratio between these two costs can be made arbitrarily close to k + 1; for k = 1 the optimal diverse-path reservation is almost twice as expensive as the optimal (1, k)-resilient reservation.
As usual, one would expect that the normal situation is even better than suggested by Proposition 10,
and in many practical settings the optimal diverse-path reservation (which can be found easily, as
shown in Section 2.2) will not be very far from the overall optimal solution in cost. Given the
simplicity of a diverse-path solution, it is likely that this is a good solution to adopt.
4 Applications to More than one Source-Destination Pair
We now consider the problem where we are given a collection of node pairs (s_i, t_i), i = 1, ..., q, as well as a collection of failure rates T_i and a collection of integers k_i, i = 1, ..., q. Each commodity i must reserve capacity in a network D which is (T_i, k_i)-resilient for the source-destination pair (s_i, t_i).
One approach to tackling this problem is to insist on diverse-path reservation vectors. We have
seen that such reservations may cost more, even in the 1-commodity case, but also have several
practical advantages. This approach also allows us to formulate the problem in a similar manner to
the well-known multicommodity capacity allocation problem where there is a flow demand between
each commodity pair and one must find routings of the flow such that capacity cost in the network is
minimized.
Suppose that, for each commodity i, we look for a reservation using j_i diverse paths, each with capacity T_i/(j_i − k_i). If each arc a is equipped with an integer upper bound M_a on the total capacity that can be reserved on a, and a cost c_a, then one may pose this as a mixed integer program as follows.
minimize Σ_a c_a y_a
subject to Σ_i (T_i/(j_i − k_i)) f^i_a ≤ y_a ≤ M_a for each arc a,
f^i_a ∈ {0, 1} for each arc a and each commodity i,
where, for each commodity i, the binary variables f^i_a are constrained (by the usual flow-conservation equalities) to select j_i arc-disjoint (s_i, t_i)-paths.
Of course, there are still many choices of j_i for each commodity, and we must branch on these in
order to find the global optimum.
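As a concrete illustration, the following sketch states one plausible version of this mixed integer program in Python using the PuLP package; the flow-conservation constraints used to force each f^i to select j_i arc-disjoint paths are the standard encoding and are an assumption here, since they are not spelled out in the text above.

    from pulp import LpProblem, LpMinimize, LpVariable, lpSum

    def diverse_path_mip(nodes, arcs, commodities):
        """arcs: dict {(u, v): (cost, M)}; commodities: list of dicts with keys
        's', 't', 'T', 'k', 'j' (j = number of diverse paths sought)."""
        prob = LpProblem("multicommodity_diverse_reservation", LpMinimize)
        y = {a: LpVariable(f"y_{a}", lowBound=0) for a in arcs}
        f = {(i, a): LpVariable(f"f_{i}_{a}", cat="Binary")
             for i, _ in enumerate(commodities) for a in arcs}
        prob += lpSum(arcs[a][0] * y[a] for a in arcs)            # reservation cost
        for a in arcs:
            # coupling: reserved capacity covers all commodities and respects M_a
            prob += lpSum(c['T'] / (c['j'] - c['k']) * f[(i, a)]
                          for i, c in enumerate(commodities)) <= y[a]
            prob += y[a] <= arcs[a][1]
        for i, c in enumerate(commodities):
            for v in nodes:                                       # j_i arc-disjoint paths
                out_f = lpSum(f[(i, a)] for a in arcs if a[0] == v)
                in_f = lpSum(f[(i, a)] for a in arcs if a[1] == v)
                if v == c['s']:
                    prob += out_f - in_f == c['j']
                elif v == c['t']:
                    prob += in_f - out_f == c['j']
                else:
                    prob += out_f - in_f == 0
        prob.solve()
        return {a: y[a].value() for a in arcs}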
Acknowledgements
: The research of the first and second authors is supported by the EU-HCM
grant TMRX-CT98 0202 DONET. They would also like to acknowledge support from DIMACS during
extended visits to Bell Labs. Some of the first author's research was also carried out while visiting
the University of Memphis. The authors are grateful for insightful remarks and encouragement from
Gautam Appa, Dan Bienstock, Fan Chung, Michele Conforti, Bharat Doshi, Susan Powell, Paul
Seymour and Mihalis Yannakakis.
A major inspiration for our work on this paper was Dr. Ewart Lowe, of British Telecom, who tragically
died in a diving accident on May 22nd, 1998, off the coast of Normandy. Ewart introduced the authors
to many mathematical problems in telecommunications. He also acted as mentor to the final author
during his projects for British Telecom. We dedicate this paper to the memory of his inspiration,
generosity, and his unbounded enthusiasm which will be greatly missed by all who knew him.
--R
Network Flows - Theory
Capacity and survivability models for telecommunications networks
Combinatorial online optimization in practice
Modeling and solving the single facility line restoration problem
Network design using cut inequalities
Cyclic scheduling via integer programs with circular ones
Minimum cost capacity installation for multicommodity network flows
Capacitated network design - polyhedral structure and computation
Strong inequalities for capacitated survivable network design prob- lems
Some strategies for reserving resilient capacity
Reserving resilient capacity with upper bound constraints
Resilience strategy for a single source-destination pair
An optimal spare-capacity assignment model for survivable networks with hop limits
The directed subgraph homeomorphism problem
Connectivity and network flows
Design of survivable networks
Optimal capacity placement for path restoration in mesh survivable networks
Modelling and solving the capacitated network loading problem
The convex hull of two core capacitated network design problems
Modelling and solving the two facility capacitated network loading problem
Polyhedral properties of the network restoration problem - with the convex hull of a special case Working Paper OR 323-97
Two strategies for spare capacity placement in mesh restorable networks
--TR
--CTR
Friedrich Eisenbrand , Fabrizio Grandoni, An improved approximation algorithm for virtual private network design, Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, January 23-25, 2005, Vancouver, British Columbia | network flows;resilience;capacity reservation |
587996 | Graphs with Connected Medians. | The median set of a graph G with weighted vertices comprises the vertices minimizing the average weighted distance to the vertices of G. We characterize the graphs in which, with respect to any nonnegative vertex weights, median sets always induce connected subgraphs. The characteristic conditions can be tested in polynomial time (by employing linear programming) and are immediately verified for a number of specific graph classes. | Introduction
. Given a (finite, connected) graph G one is sometimes interested in finding the vertices x minimizing the total distance F(x) = Σ_u d(u,x) to the vertices u of G, where the distance d(u,x) between u and x is the length of a shortest path connecting u and x. The subgraph induced by all minima of F need not be connected:
actually every (possibly disconnected) graph can be realized as such a "median" subgraph of
another graph (Slater [23]); see Hendry [18] for further information and pertinent references.
Here we will focus on the weighted version of the median problem, which arises with one of
the basic models in discrete facility location [25] and with majority consensus in classification
and data analysis [3, 9, 10, 11, 22]. A weight function is any mapping - from the vertex set to
the non-negative real numbers, which is not the zero function (in order to avoid trivialities).
The total weighted distance of a vertex x in G is given by F_π(x) = Σ_u π(u) d(u,x).
A vertex x minimizing this expression is a median (vertex) of G with respect to -; and the
set of all medians is the median set Med(-): By a local median one means a vertex x such
that F - (x) does not exceed F - (y) for any neighbour y of x: Denote by Med loc
(-) the set of
all local medians. We will consider the following questions:
Research supported in part by the Alexander von Humboldt Stiftung
ffl When are all median sets Med(-) of a graph connected?
ffl When does Med loc
(-) hold for all weight functions - ?
If we allow only 0-1 weight functions -; then recognition of graphs with Med loc
is an NP -complete problem [2]. We will show that connectivity of median sets with respect
to arbitrary weight functions turns out to be equivalent to the condition that for each function
F - all local minima are global. This property can also be formulated as a certain convexity
condition.
In the next section we investigate weakly convex functions on graphs, and in Section 3
we then obtain the basic characterizations of graphs with connected medians. One of those
equivalent conditions, weak convexity of F - for each weight function -, can be formulated
as a linear programming problem, thus allowing to recognize graphs with connected medians
in polynomial time. This LP condition, however, entails a lot of redundancy and cannot
be read off from other graph properties, which are known to imply median connectedness.
Striving for less redundancy in the requirements, we establish the main result in Section 4 by
employing LP duality to a particular instance of the original LP formulation. This theorem
can conveniently be used to derive median properties in several specific classes of graphs, as
is demonstrated in the final section.
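For concreteness, medians and local medians can be computed directly from the definitions; the following Python sketch assumes the graph is given as an adjacency-list dictionary and is only meant as an illustration of the definitions above.

    from collections import deque

    def all_distances(adj):
        """adj: dict mapping each vertex to a list of neighbours (connected graph).
        Returns dist[u][v], computed by breadth-first search from every vertex."""
        dist = {}
        for s in adj:
            d = {s: 0}
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in d:
                        d[v] = d[u] + 1
                        queue.append(v)
            dist[s] = d
        return dist

    def median_sets(adj, weight):
        """Return (Med(pi), Med_loc(pi)) for the weight function 'weight'."""
        dist = all_distances(adj)
        F = {x: sum(weight[u] * dist[u][x] for u in adj) for x in adj}
        fmin = min(F.values())
        med = {x for x in adj if F[x] == fmin}
        med_loc = {x for x in adj if all(F[x] <= F[y] for y in adj[x])}
        return med, med_loc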
2. Weakly convex functions. A real-valued function f defined on the vertex set V
of a graph G is said to be weakly convex if for any two vertices u, v, and a real number λ between 0 and 1 such that λ·d(u,v) and (1−λ)·d(u,v) are integers, there exists a vertex x such that d(u,x) = λ·d(u,v), d(v,x) = (1−λ)·d(u,v), and f(x) ≤ (1−λ)·f(u) + λ·f(v).
Weakly convex functions were introduced by Arkhipova and Sergienko [1] under the name
"r-convex functions"; see also Lebedeva, Sergienko, and Soltan [19].
The interval I(u; v) between two vertices u and v consists of all vertices on shortest
u; v-paths, that is:
For convenience we will use the short-hand
I
to denote the "interior" of the interval between u and v:
Lemma 1. For a real-valued function f defined on the vertex set of a graph G the following
conditions are equivalent:
(i) f is weakly convex;
(ii) for any two non-adjacent vertices u and v there exists w ∈ I°(u,v) such that d(u,v)·f(w) ≤ d(w,v)·f(u) + d(u,w)·f(v);
(iii) any two vertices u and v at distance 2 have a common neighbour w with 2f(w) ≤ f(u) + f(v).
Proof. (i)
Consider two vertices u and v at distance d(u; shortest paths
connecting u and v select a path
is as small as possible. Condition (iii) implies that 2f(w i
as points in the plane R Connecting the consecutive points by segments, we will get a
graph of a piecewise-linear function. This function is necessarily convex (in the usual sense),
because it coincides on P with the function f: From this we conclude that
Recall that a real-valued function f defined on a path
holds only
say that a function f defined on the vertex set of a graph
G is pseudopeakless if any two vertices of G can be joined by a shortest path along which f
is peakless [15]. Equivalently, f is pseudopeakless if for any two non-adjacent vertices u; v
there is a vertex w 2 I ffi (u; v) such that f(w) - maxff(u); f(v)g and equality holds only if
The key property of pseudopeakless functions is their unimodality, that is, every
local minimum of f is global. The proof is simple: let u be a global minimum of f and v a
local minimum of G: Consider a shortest path P between u and v along which f is peakless.
Then for any neighbour w of v we have f(w) - maxff(u); f(v)g; whence
as required. Note that every weakly convex function is pseudopeakless. The following result
constitutes a partial converse.
Remark 1. A real-valued function f defined on the vertex set of a graph G is pseudopeakless
if and only if the composition ffffif is weakly convex for some strictly isotone transformation
ff of the reals.
Proof. The property of being pseudopeakless is clearly invariant under strictly isotone transformations
of the range. Conversely, let f be a pseudopeakless function taking n distinct
values a 1 ! a ff be a strictly isotone map which assigns to each a i the
We assert that the composition ffffif is weakly convex. For given vertices u and
v at distance 2, let w be their common neighbour such that f is peakless along the path
consequently
as required. 2
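Condition (iii) of Lemma 1 also gives a direct finite test for weak convexity of a given function; a minimal sketch, reusing the all_distances helper from the previous sketch (an assumption of this illustration), could look as follows.

    def is_weakly_convex(adj, f):
        """Check condition (iii) of Lemma 1: every pair u, v at distance 2 has a
        common neighbour w with 2 f(w) <= f(u) + f(v)."""
        dist = all_distances(adj)
        for u in adj:
            for v in adj:
                if dist[u][v] != 2:
                    continue
                common = set(adj[u]) & set(adj[v])
                if not any(2 * f[w] <= f[u] + f[v] for w in common):
                    return False
        return True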
In what follows we will apply Lemma 1 to the functions F - of G: The simplest instance is
given by the weight function - assigning 1 to a distinguished vertex w and 0 otherwise: then
just measures the distance in G to that vertex. We say that G is meshed [6] if
the function d(\Delta; w) is weakly convex for every choice of w: A somewhat weaker property is
used in subsequent results. Three vertices u; v; w of a G are said to form a metric triangle
uvw if the intervals I(u; v); I(v; w); and I(w; u) pairwise intersect only in the common end
vertices. If d(u; triangle is called equilateral of
size k:
Remark 2. Every metric triangle in a meshed graph G is equilateral.
To see this, suppose the contrary: let uvw be a metric triangle in G with d(u; w) ! d(v; w):
The above definition of weak convexity applied to the pair u; v; and the number
provides us with a neighbour x of v which necessarily belongs to
3. Basic characterizations. We commence by giving first answers to the questions
raised in the introduction.
Proposition 1. For a graph G the following conditions are equivalent:
(i) Med loc
weakly convex for all -;
is pseudopeakless for all -;
(iv) all level sets fx : F - (x) -g induce isometric subgraphs;
(v) all median sets Med(-) induce isometric subgraphs;
(vi) all median sets Med(-) are connected.
Any of the conditions (i) to (vi) is equivalent to the analogous condition with the additional
requirement that - be positive.
The following observation is basic for the proof of Proposition 1.
Lemma 2. If the function F - is not weakly convex on the vertex set V of a graph G for some
weight function -; then there exist a positive weight function - + and vertices u; v at distance
2 such that Med(-
Proof. If F - is not weakly convex, then by Lemma 1 there exist two vertices u and v at
distance 2 such that 2F - (w) ? F - (u) +F - (v) for all (common neighbours) w 2 I ffi (u; v): This
inequality can be maintained under sufficiently small positive perturbations of - : viz., add
any ffi satisfying
to all weights, yielding the new weights - 0
for all w 2 I ffi (u; v); that is, the initial inequality remains valid with respect to the thus
perturbed weight function. We may therefore assume that - is actually positive.
We stipulate that denote the maximum value of
Define a new positive weight function - + by
for every vertex x outside I(u; v): For w 2 I ffi (u; v) one obtains
Therefore consists only of u and v: This concludes the proof. 2
Proof of Proposition 1. The implications (ii)
are trivial, while (vi) ) (ii) is covered by Lemma 2. It remains to verify that (i) implies
(ii). Suppose by way of contradiction that some function F - is not weakly convex. By
Lemma 2 there exist a positive weight function - + and vertices u; v at distance 2 such that
Pick any ffl satisfying
and define a new weight function - 0 by
for all x Therefore both u and v are local minima of F - 0 ; but
establishes the implication (i) ) (ii).
The same arguments can be applied to prove that the analogous conditions (i + ) to (vi
additionally requiring the weight functions to be positive are all equivalent. Since (i)) (i
is trivial and (vi covered by Lemma 2, the proof is complete. 2
In view of Lemma 1(iii) and Proposition 1, all median sets are connected if and only if for each pair u, v of vertices at distance 2 the following system of linear inequalities is unsolvable in π:
Σ_x π(x)·(2d(w,x) − d(u,x) − d(v,x)) > 0 for all w ∈ I°(u,v), and π(x) ≥ 0 for all vertices x.
Since LP problems can be solved in polynomial time, we thus obtain the following result.
Corollary 1. The problem to decide whether all median sets Med(-) of a graph G are
connected is solvable in polynomial time.
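The linear-programming test behind Corollary 1 can be phrased, for one fixed pair u, v at distance 2, as a feasibility problem; the sketch below uses scipy.optimize.linprog, scales the strict inequalities to "at least 1", and again reuses the all_distances helper introduced earlier; these choices are assumptions of the illustration.

    import numpy as np
    from scipy.optimize import linprog

    def disconnecting_weights_exist(adj, u, v):
        """For vertices u, v at distance 2, decide whether some weight function
        pi >= 0 makes every common neighbour w strictly worse, i.e.
        2 F_pi(w) > F_pi(u) + F_pi(v) for all w in I°(u,v)."""
        dist = all_distances(adj)
        vertices = list(adj)
        common = [w for w in adj[u] if w in set(adj[v])]
        # row for w:  sum_x pi(x) * (2 d(w,x) - d(u,x) - d(v,x)) >= 1
        A_ub = -np.array([[2 * dist[w][x] - dist[u][x] - dist[v][x] for x in vertices]
                          for w in common], dtype=float)
        b_ub = -np.ones(len(common))
        res = linprog(c=np.zeros(len(vertices)), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * len(vertices), method="highs")
        return res.success          # feasible <=> F_pi fails weak convexity at (u, v)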
4. The main result. In order to obtain a convenient characterization of median con-
nectedness, we will restrict the supports of the weight functions - to be tested and then take
advantage of LP duality in the following form:
Remark 3. Let u; v be any vertices of a graph G with d(u; W be a nonempty
subset of I ffi (u; v) and X be a nonempty subset of the vertex set of G: Then for every weight
function - with support included in X there exists some w 2 W such that
if and only if there exists a weight function j with support included in W such that for every
Proof. Let D denote the submatrix of D uv (as just defined) with rows and columns corresponding
to W and X; respectively. The asserted equivalence is then a particular instance
of LP duality, as formulated by J.A.Ville (cf. [16, p.248]) in terms of systems of linear
inequalities:
For vertices x and y define the following superset of I(x;
Proposition 2. The median sets Med(-) in a graph G are connected for all weight functions
- if and only if (1) all metric triangles are equilateral and (2) for any vertices u; v with
every weight function - whose support fx included in the
set J(u; v) there exists a vertex w 2 I ffi (u; v) such that 2F - (w) - F - (u)
Proof. Necessity immediately follows from Remark 2 and Proposition 1. To prove sufficiency
assume that (1) and (2) hold but there exists a weight function - for which F - is not weakly
convex. Then by Lemma 1 there are vertices u and v at distance 2 such that
fixing u and v; we may assume that - was chosen so that this inequality still holds and
the number of vertices from the support of - outside J(u; v) is as small as possible. By (2)
we can find a vertex x with -(x) ? 0 and
To arrive at a contradiction, we will
slightly modify - by setting the weight of x to zero (and possibly transferring its old weight
to some vertex in J(u; v)): We distinguish three cases.
Case 1: jd(u; 2:
Then 2d(w; Hence the weight function - 0
defined by
contrary to the minimality assumption on -:
Case 2: jd(u;
Without loss of generality assume that d(v; be a vertex in I(u;
I(v; x) at maximal distance to x: Then x From the
latter equality and d(u; t be a vertex
in I(u; v) " I(v; x 0 ) at maximal distance to v: Since
the vertices u; t; x 0 must
constitute an equilateral metric triangle. Therefore d(u; x 2:
Define a new weight function -
For every w 2 I ffi (u; v) we have either
or
2:
Hence
and further
again a contradiction to the choice of -:
Case 3:
Let x 0 be a vertex in I(u; x) " I(v; x) at maximal distance to x: Since all metric triangles
of G are equilateral and d(u; we obtain that
2:
Define the new weight function - 0 as in Case 2. Then
and for every w 2 I ffi (u; v)
a final contradiction. 2
In view of Remark 2 we may require in Proposition 2 (as well as the Theorem below) that
G be meshed instead that all metric triangles in G be equilateral.
Let M(u; v) denote the set of those vertices of J(u; v) which are equidistant to u and v :
The neighbourhood N(x) of a vertex x consists of all vertices adjacent to x: Note that if
Lemma 3. If all median sets of a graph G are connected, then for any vertices u and v at
distance 2 there exist (not necessarily distinct) vertices s; t 2 I ffi (u; v) such that
for all vertices y 2 M(u; v):
Proof. We define a weight function - for which the hypothesis that F - be pseudopeakless
implies the asserted inequality: let
and
Since weights are zero outside M(u; v); we have F - v) such that
F - is peakless along the (shortest) path (u; s; v); that is, F -
We claim that 2: Indeed, otherwise holds and either there are two distinct
vertices there exists some vertex z 2 M(u;
I 3: In the first case we would get
and in the second case
both giving a contradiction. Now, if all vertices in M(u; v) \Gamma fsg are equidistant to s and u
(as well as v), then
and thus is the required solution to the asserted inequality. Else, there exists some
whence 2: Consequently,
and therefore s; t constitutes the required vertex pair in this case. 2
Lemma 4. Let u and v be vertices at distance 2 in a graph G; for which all median sets
are connected. Select a maximal subset S of I ffi (u; v) with the property that for each vertex
there exists a vertex t 2 S (not necessarily distinct from s) such that d(s; y)
weight function - with support included in
J(u; v) there exists some vertex w
Proof. By Lemma 3, the set S is nonempty. Now, assume the contrary and among all weight
functions violating our assertion, choose a function - for which the set
is as small as possible. This set contains some vertex x because F - is weakly convex. Pick
First suppose that d(x; distinct vertices
new weight function - 0 by
-(w) otherwise.
Then
Hence
and
for all w 2 I ffi (u; v) distinct from x; contrary to the choice of -: Therefore, x is adjacent to
all vertices possibly one vertex.
Now, suppose that d(x; for the modified weight
function - 0 defined by
-(w) otherwise
we have
Then we obtain the same inequalities as in the preceding case and thus again arrive at a
contradiction. We conclude that d(x;
If x is adjacent to all other vertices of I ffi (u; v); then 2d(x;
z 2 M(u; v); z 6= x; and hence we could adjoin x to S; contrary to the maximality of S:
Therefore, by what has been shown, I ffi (u; v) contains exactly one vertex y distinct from
x which is not adjacent to x: Since
2 there exists a vertex z 2 M(u; v) such that
necessarily
new weight function - 0 by
-(w) otherwise.
Then
giving the same contradiction as before. This concludes the proof. 2
Now we are in a position to formulate the principal result.
Theorem. For a graph G all median sets are connected if and only if the following conditions
are satisfied:
(i) all metric triangles of G are equilateral;
(ii) for any vertices u and v at distance 2 there exist a nonempty subset S of I ffi (u; v) and
a weight function j with support included in S having the two properties:
(ff) every vertex s 2 S has a companion t 2 S (not necessarily distinct from s) such
that
and
(fi) the joint weight of the neighbours of any x 2 J(u; v) \Gamma M(u; v) from S is always
at least half the total weight of S :
Proof. First assume that G is a graph with connected medians. By Proposition 2 all metric
triangles of G are equilateral. Let u; v be a pair of vertices at distance 2. The nonempty
subset S of I ffi (u; v) provided by Lemma 4 satisfies the inequality in (ff) for each s 2 S
and companion t: Moreover, Lemma 4 guarantees that for every weight function - with
support included in J(u; v) there exists a vertex w 2 S with F - (u)
duality as formulated in Remark 3 for yields a weight
function j with support included in S such that for every x 2 J(u; v) the weighted sum of
all
and therefore (fi) holds. If x 2 M(u; v); then d(s; x) - 2 for all s 2 S and
In the latter case is the companion of This yields the inequality
0; and by interchanging the role of s and t; we infer that 2:
Conversely, if conditions (i) and (ii) are satisfied, then by virtue of LP duality (Remark
conditions (1) and (2) of Proposition 2 are fulfilled, whence G has connected medians. 2
Corollary 2. All median sets in a meshed graph G are connected whenever the following
condition is satisfied:
any two vertices u and v with #I ffi (u; v) - there exist (not necessarily
vertices s; t 2 I ffi (u; v) such that
If, moreover, G satisfies the stronger condition requiring in addition that d(s; t) - 1; then
Med(-) induces a complete subgraph for every positive weight function -:
Proof. First observe that the inequality of ( ) also holds in the case that u and v are
at distance 2 with a unique common neighbour s: Indeed, if x 2 J(u; v); then necessarily
since G is meshed. To see that ( ) implies condition (ii) of the
Theorem, put trivially satisfied. For x 2 J(u;
we have d(u; x) +d(v; is meshed, and therefore x is adjacent to at least one
of s; t; thus establishing (fi):
To prove the second statement, observe that the inequality in ( ) actually holds for all
vertices x of G: Indeed, for each vertex x select a vertex x 0 from I(u; x) " I(v; x) at maximal
distance to x: Then x 0 belongs to J(u; v) and
Adding up all these inequalities each multiplied by the corresponding weight -(x) then yields
where strict inequality holds exactly when one of the former inequalities is strict for x with
positive, u and v cannot both belong to
Med(-) under these circumstances. 2
In general, we can neither dispense with weight functions nor impose any fixed upper
bound on the cardinalities of the sets S occuring in the Theorem, as the following example
shows.
Example. Let G be the chordal graph with 2
vertices comprising a set
of mutually adjacent vertices, two vertices u and v adjacent to all of
and vertices wZ associated with certain subsets Z of S; namely the sets fx 1
and fx k-subsets Y of fy such that each vertex wZ is
adjacent to u and all vertices from Z: Then a weight function j with support included in
I for the pair u; v if and only if for each of the above sets Z (which
come in complementary pairs) one has
z2Z
which is equivalent to
Thus, in order to satisfy (fi); we are forced to take a weight function j having a large support
on which it is not constant. Note that (ff) for u; v is trivially fulfilled with any choice of
from S: Finally, every other pair of vertices at distance 2 meets condition ( ) of Corollary
when either replacing u by wZ and selecting from Z or replacing u; v by wZ ; w Z 0
setting This shows that G has connected medians.
5. Particular cases. A number of graph classes [12] consist of particular meshed
graphs, for instance, the classes of chordal graphs (and more generally, bridged graphs),
Helly graphs, and weakly median graphs (see below), respectively. In the case of chordal
graphs the first statement of Corollary 2 is due to Wittenberg [26, Theorem 1.1]. Whenever
a chordal graph satisfies condition ( ), then any pair s; t selected in ( ) necessarily satisfies
comes into full action for the class of Helly graphs. A Helly graph
G (alias "absolute retract of reflexive graphs") has the characteristic "Helly property" that
for any vertices non-negative integers r the system of inequalities
admits a solution x whenever
holds [8]. Since this Helly property for (as formulated in
Lemma 1(iii)) of the function d(\Delta; z); all Helly graphs are meshed. To verify condition ( )
(with d(s; t) - 1), first observe that the Helly property for
N(v) and thus M(u; 2: Now, all vertices in
are pairwise at distance - 2; whence the Helly property guarantees
a common neighbour s: Similarly, the vertices from (N(v) " J(u; v)) [ fug have a common
neighbour t: Necessarily s; t 2 I ffi (u; v) and d(s; t) - 1 hold, and we have therefore established
the following observation, which implies the result of Lee and Chang [20] on strongly chordal
graphs (being special Helly graphs):
Corollary 3. All median sets Med(-) of a Helly graph G are connected, and moreover, if
- is positive, then Med(-) induces a complete graph in G:
In some particular classes of meshed graphs the pair s; t meeting ( ) can always be selected
by the following trivial rule: given vertices u; v at distance 2, choose any pair s; t from I ffi (u; v)
for which d(s; t) is as large as possible. Evidently, a meshed graph G satisfies condition ( )
with this selection rule provided that the following two requirements are met:
is a complete subgraph for vertices u and v at distance 2, then d(s;
induce a 4-cycle where d(s;
Notice that in (xx) we could replace the last equality by an inequality because the inequality
would imply J(s; in a meshed graph and thus the reverse inequality would follow
by symmetry. We will now show that the so-called weakly median graphs satisfy (x) and
(xx). A graph is weakly median [14] if (i) any three distinct common neighbours of any two
distinct vertices u and v always induce a connected subgraph and (ii) all functions d(\Delta; z)
satisfy the following condition, stronger than weak convexity: if u; v are at distance 2 and
Condition (ii)
is also referred to weak modularity; see [4]. Trivially, weakly modular graphs are meshed.
Remark 4. A weakly modular graph G satisfies conditions (x) and (xx) if and only if G does
not contain any of the graphs of Fig.1 as an induced subgraph. In particular, weakly median
graphs have connected medians.
@
@
@
@
@
@
@
@
@ @
\Theta
\Theta
\Theta
\Theta \Theta
@
@
@
@
@
@
@
@
@ @
\Theta
\Theta
\Theta
\Theta \Theta
@
@
@
@
@
@
@ @
s
x
s
x
s
(a) (b) (c)
Fig. 1
Proof. If G includes an induced subgraph from Fig. 1, then s; t; u; v; x violate either (xx)
or (x). Conversely, assume that G is weakly modular but violates (x) or (xx) for some
vertices s; t; u; v; x: Then, as x 2 J(u; v); we have d(u; x)+d(v; x) - 3 by weak modularity. If
necessarily s; t; u; v; x induce one of the first two graphs in Fig. 1. Otherwise,
say, As G is meshed, u; v; x have some common
neighbour w: If w is not adjacent to both s and t; then we are back in the preceding case
with w playing the role of x: Therefore we may assume that w is a common neighbour of s
and t: If s and t are not adjacent, then s; t; u; v; w induce a subgraph isomorphic to Fig. 1b.
Otherwise, s and t are adjacent, and we obtain the graph of Fig. 1c as an induced subgraph.
Finally, note that the latter graph minus v as well as Fig. 1a,b are all forbidden in a weakly
median graph. 2
Inasmuch as pseudo-median graphs and quasi-median graphs are weakly median, Remark
4 (in conjunction with Proposition 1) generalizes some results from [3, 7, 17, 24]. Another
class of meshed graphs fulfilling (x) and (xx) is associated with matroids. A matroid M can be
defined as a finite set E together with a collection B of subsets (referred to as the bases of M)
such that for there exists some e
The basis graph of M is the (necessarily connected) graph whose vertices are the bases of M
and edges are the pairs B; B 0 of bases differing by a single exchange. We immediately infer
from the characterization established by Maurer [21] that the basis graph of every matroid
is meshed (but not weakly modular in general) and satisfies (x) and (xx).
Corollary 4. The basis graph of every matroid has connected medians.
In view of Corollary 4 and Proposition 1 one can solve the median problem in the basis
graph of a matroid with the greedy algorithm. Given a weight function π on the set B of bases, assign a weight to each element e ∈ E by w(e) = Σ{π(B) : B ∈ B, e ∈ B}. Applying the greedy algorithm one finds a base B* maximizing the function B ↦ Σ_{e∈B} w(e). We assert that B* is a minimum for the function F_π in the basis graph. Indeed, if B = (B* \ {e}) ∪ {e'} is any neighbour of B*, then F_π(B) − F_π(B*) = w(e) − w(e') ≥ 0. Hence B* ∈ Med_loc(π) = Med(π), as required.
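The greedy procedure just described is easy to state in code when the matroid is given by an explicit list of its bases (which also yields an independence oracle); the example matroid and all names below are illustrative assumptions.

    def greedy_median_basis(ground_set, bases, pi):
        """Median of the basis graph of a matroid given by its list of bases.

        pi maps each basis (a frozenset) to a nonnegative weight.  Element
        weights are w(e) = sum of pi(B) over bases B containing e; the greedy
        algorithm then builds a maximum-weight basis, which is a median."""
        w = {e: sum(pi[B] for B in bases if e in B) for e in ground_set}

        def is_independent(S):
            return any(S <= B for B in bases)        # oracle derived from the list

        basis = frozenset()
        for e in sorted(ground_set, key=lambda e: -w[e]):   # greedy by element weight
            if is_independent(basis | {e}):
                basis = basis | {e}
        return basis

    # Tiny example: the uniform matroid U_{2,3}; its basis graph is a triangle.
    bases = [frozenset({1, 2}), frozenset({1, 3}), frozenset({2, 3})]
    pi = {bases[0]: 3.0, bases[1]: 1.0, bases[2]: 1.0}
    print(greedy_median_basis({1, 2, 3}, bases, pi))        # frozenset({1, 2})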
--R
On conditions of coincidence of local and global extrema in optimization problems (in Russian)
A Helly theorem in weakly modular space
decomposition via amalgamation and Cartesian multiplication
Weak Cartesian factorization with icosahedra
Graph Theory
Dismantling absolute retracts of reflexive graphs
The Geometry of Geodesics
Separation of two convex sets in convexity structures
Discrete Math.
Medians of pseudo-median graphs (in Russian)
On graphs with prescribed median I
On conditions of coincidence of local and global minima in problems of discrete optimization
Matroid basis graphs I
The median procedure in a formal theory of consensus
Medians of arbitrary graphs
The solution of the Weber problem for discrete median metric spaces (in Russian)
a survey I
Local medians in chordal graphs
--TR
--CTR
Victor Chepoi , Clmentine Fanciullini , Yann Vaxs, Median problem in some plane triangulations and quadrangulations, Computational Geometry: Theory and Applications, v.27 n.3, p.193-210, March 2004 | majority rule;local medians;LP duality;medians;graphs |
587999 | Single Machine Scheduling with Release Dates. | We consider the scheduling problem of minimizing the average weighted completion time of n jobs with release dates on a single machine. We first study two linear programming relaxations of the problem, one based on a time-indexed formulation, the other on a completion-time formulation. We show their equivalence by proving that a O(n log n) greedy algorithm leads to optimal solutions to both relaxations. The proof relies on the notion of mean busy times of jobs, a concept which enhances our understanding of these LP relaxations. Based on the greedy solution, we describe two simple randomized approximation algorithms, which are guaranteed to deliver feasible schedules with expected objective function value within factors of 1.7451 and 1.6853, respectively, of the optimum. They are based on the concept of common and independent $\alpha$-points, respectively. The analysis implies in particular that the worst-case relative error of the LP relaxations is at most 1.6853, and we provide instances showing that it is at least $e/(e-1) \approx 1.5819$. Both algorithms may be derandomized; their deterministic versions run in O(n2) time. The randomized algorithms also apply to the on-line setting, in which jobs arrive dynamically over time and one must decide which job to process without knowledge of jobs that will be released afterwards. | Introduction
We study the single-machine scheduling problem with release dates in which the objective is to
minimize a weighted sum of completion times. It is defined as follows. A set N = {1, ..., n} of n jobs has to be scheduled on a single disjunctive machine. Job j has a processing time p_j > 0 and is released at time r_j ≥ 0. We assume that release dates and processing times are integral. The completion time of job j in a schedule is denoted by C_j. The goal is to find a non-preemptive schedule that minimizes Σ_j w_j C_j, where the w_j's are given positive weights.
M.I.T., Department of Mathematics, Room 2-351, 77 Massachusetts Avenue, Cambridge, MA 02139, USA.
y University of British Columbia, Faculty of Commerce and Business Administration, Vancouver, B.C., Canada
V6T 1Z2. Email: maurice.queyranne@commerce.ubc.ca
z M.I.T., Sloan School of Management, Room E53-361, 77 Massachusetts Avenue, Cambridge, MA 02139, USA.
x Technische Universitat Berlin, Fakultat II { Mathematik und Naturwissenschaften, Institut fur Mathematik,
MA 6-1, Strae des 17. Juni 136, 10623 Berlin, Germany. Email: skutella@math.tu-berlin.de
{ PeopleSoft Inc., San Mateo, CA, USA 94404. Email: Yaoguang Wang@peoplesoft.com
In the classical scheduling notation [12], this problem is denoted by 1|r_j|Σ w_j C_j. It is strongly NP-hard, even if w_j = 1 for all jobs j.
One of the key ingredients in the design and analysis of approximation algorithms as well
as in the design of implicit enumeration methods is the choice of a bound on the optimal value.
Several linear programming based as well as combinatorial lower bounds have been proposed
for this well studied scheduling problem, see for example, Dyer and Wolsey [9], Queyranne [22],
and Queyranne and Schulz [23], as well as Belouadah, Posner and Potts [4]. The LP relaxations
involve a variety of different types of variables which, e. g., either express whether job j is completed
at time t (non-preemptive time-indexed relaxation), or whether it is being processed at
time t (preemptive time-indexed relaxation), or when job j is completed (completion time relax-
ation). Dyer and Wolsey show that the non-preemptive time-indexed relaxation is stronger than
the preemptive time-indexed relaxation. We will show that the latter relaxation is equivalent to
the completion time relaxation that makes use of the so-called shifted parallel inequalities. In
fact, it turns out that the polyhedron defined by these inequalities is supermodular and hence
one can optimize over it by using the greedy algorithm. A very similar situation arises in [24].
The greedy solution may actually be interpreted in terms of the following preemptive schedule,
which we call the LP schedule: at any point in time it schedules among the available jobs one
with the largest ratio of weight to processing time. Uma and Wein [38] point out that the value
of this LP solution coincides with one of the combinatorial bounds of Belouadah, Posner and
Potts based on the idea of allowing jobs to be split into smaller pieces that can be scheduled
individually.
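The LP schedule itself is easy to compute by an event-driven simulation; the following Python sketch assumes jobs are given as (release date, processing time, weight) triples and is only an illustration of the rule just described.

    import heapq

    def lp_schedule(jobs):
        """Preemptive LP schedule: whenever the machine is free or a job arrives,
        run the available unfinished job with the largest ratio w_j / p_j.
        Returns the list of (job, start, end) pieces."""
        order = sorted(range(len(jobs)), key=lambda j: jobs[j][0])   # by release date
        remaining = [p for _, p, _ in jobs]
        available = []                       # max-heap on w/p via negated keys
        pieces, t, i = [], 0, 0
        while i < len(order) or available:
            if not available:
                t = max(t, jobs[order[i]][0])
            while i < len(order) and jobs[order[i]][0] <= t:
                j = order[i]
                heapq.heappush(available, (-jobs[j][2] / jobs[j][1], j))
                i += 1
            _, j = heapq.heappop(available)
            next_release = jobs[order[i]][0] if i < len(order) else float('inf')
            run = min(remaining[j], next_release - t)
            pieces.append((j, t, t + run))
            remaining[j] -= run
            t += run
            if remaining[j] > 0:             # interrupted by a newly released job
                heapq.heappush(available, (-jobs[j][2] / jobs[j][1], j))
        return pieces

The α-point of a job, used later in this section, can then be read off the returned pieces as the first time at which an α-fraction of its processing time has been completed.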
We show that the optimal value of 1|r_j|Σ w_j C_j is at most 1.6853 times the lower bound
given by any of these three equivalent relaxations | the preemptive time-indexed relaxation,
the completion time relaxation or the combinatorial relaxation in [4]. We prove this result on the
quality of these relaxations by converting the (preemptive) LP schedule into a non-preemptive
schedule. This technique leads to approximation algorithms for 1j r j j
Recall that a {
approximation algorithm is a polynomial-time algorithm guaranteed to deliver a solution of cost
at most times the optimal value. A randomized {approximation algorithm is a polynomial-time
algorithm that produces a feasible solution whose expected objective function value is within
a factor of of the optimal value.
The technique of converting preemptive schedules to non-preemptive schedules in the design
of approximation algorithms was introduced by Phillips, Stein and Wein [21]. More specifically,
they showed that list scheduling in order of the completion times of a given
preemptive schedule produces a non-preemptive schedule while increasing the total weighted
completion time by at most a factor of 2. In the same paper they also introduced a concept of
α-points. This notion was also used by Hall, Shmoys and Wein [13], in connection with the non-preemptive
time-indexed relaxation of Dyer and Wolsey, to design approximation algorithms in
various scheduling environments. For our purposes, the α-point of job j in a given preemptive
schedule is the first point in time at which an α-fraction of j has been completed. When
one chooses different values of α, sequencing in order of non-decreasing α-points in the same
preemptive schedule may lead to different non-preemptive schedules. This increased flexibility
led to improved approximation algorithms: Chekuri, Motwani, Natarajan and Stein [6] for
1| r_j | Σ C_j and Goemans [11] for 1| r_j | Σ w_j C_j chose α at random and analyzed the expected
performance of the resulting randomized algorithms. We will show that, using a common value
of α for all jobs and an appropriate probability distribution, sequencing in order of α-points of
the LP schedule has expected performance no worse than 1.7451 times the optimal preemptive
time-indexed LP value. We also prove that by selecting a separate value α_j for each job j,
one can improve this bound to a factor of 1.6853. Our algorithms are inspired by and partly
resemble the algorithms of Hall et al. [13] and Chekuri et al. [6]. In contrast to Hall et al.
Reference and/or type of schedule (off-line and on-line, deterministic and randomized):
Phillips et al. [21]
Hall et al. [13]: 4, 4
Schulz [26]: 3
Hall et al. [14]: 3, 3
Chakrabarti et al. [5]: 2.8854
Combining [5] and [14]: 2.4427
α-schedule: 1.7451
(α_j)-schedule: 1.6853
Table 1: Summary of approximation bounds for 1| r_j | Σ w_j C_j. An α-schedule is obtained by sequencing
the jobs in order of non-decreasing α-points of the LP schedule. The use of job-dependent α_j's yields an
(α_j)-schedule. The results discussed in this paper are below the second double line. Subsequently, Anderson
and Potts [2] gave a deterministic 2-competitive algorithm. For the unit-weight problem 1| r_j | Σ C_j,
the first constant-factor approximation algorithm is due to Phillips, Stein and Wein [21]. It has performance
ratio 2, and it also works on-line. Further deterministic 2-competitive algorithms were given by
Stougie [36] and Hoogeveen and Vestjens [15]. All these algorithms are optimal for deterministic on-line
algorithms [15]. Chekuri, Motwani, Natarajan and Stein [6] gave a randomized e/(e−1)-approximation
algorithm, which is optimal for randomized on-line algorithms [37, 39].
we exploit the preemptive time-indexed LP relaxation, which, on the one hand, provides us
with highly structured optimal solutions and, on the other hand, enables us to work with mean
busy times. We also use random α-points. The algorithm of Chekuri et al. starts from an
arbitrary preemptive schedule and makes use of random α-points. They relate the value of the
resulting α-schedule to that of the given preemptive schedule, and not to that of an underlying
LP relaxation. While their approach gives better approximations for 1| r_j | Σ C_j and yields
insights on limits of the power of preemption, the link of the LP schedule to the preemptive time-indexed
LP relaxation helps us to obtain good approximations for the total weighted completion
time.
Variants of our algorithms also work on-line when jobs arrive dynamically over time and, at
each point in time, one has to decide which job to process without knowledge of jobs that will be
released afterwards. Even in this on-line setting, we compare the value of the computed schedule
to the optimal (off-line) schedule and derive the same bounds (competitive ratios) as in the off-line
setting. See Table 1 for an account of the evolution of off-line and on-line approximation
results for the single machine problem under consideration.
The main ingredient to obtain the results presented in this paper is the exploitation of the
structure of the LP schedule. Not surprisingly, the LP schedule does not solve the strongly NP-hard
[16] preemptive version of the problem, 1| r_j, pmtn | Σ w_j C_j. However, we show that the LP
schedule solves optimally the preemptive problem with the related objective function Σ w_j M_j,
where M_j is the mean busy time of job j, i.e., the average point in time at which the machine
is busy processing j. Observe that, although 1| r_j | Σ w_j C_j and 1| r_j | Σ w_j M_j are
equivalent optimization problems in the non-preemptive case (since C_j = M_j + p_j/2 whenever
job j is not preempted), they are not equivalent when considering preemptive schedules.
The approximation techniques presented in this paper have also proved useful for more
general scheduling problems. For the problem with precedence constraints 1| r_j, prec | Σ w_j C_j,
sequencing jobs in order of random α-points based on an optimal solution to a time-indexed
relaxation leads to a 2.7183-approximation algorithm [27]. A 2-approximation algorithm
for identical parallel machine scheduling is given in [28]; the result is based on
a time-indexed LP relaxation an optimal solution of which can be interpreted as a preemptive
schedule on a fast single machine; jobs are then assigned randomly to the machines and sequenced
in order of random α_j-points of this preemptive schedule. For the corresponding scheduling
problem on unrelated parallel machines R | r_j | Σ w_j C_j, a performance guarantee of 2 can be
obtained by randomized rounding based on a convex quadratic programming relaxation [33],
which is inspired by time-indexed LP relaxations like the one discussed herein [28]. We refer to
[32] for a detailed discussion of the use of α-points for machine scheduling problems.
Significant progress has recently been made in understanding the approximability of scheduling
problems with the average weighted completion time objective. Skutella and Woeginger [34]
developed a polynomial-time approximation scheme for scheduling identical parallel machines
in the absence of release dates. Subsequently, several research groups have found
polynomial-time approximation schemes for problems with release dates, including the single
machine problem studied here; see the resulting joint conference proceedings publication [1] for details.
We now briefly discuss some practical consequences of our work. Savelsbergh, Uma and
Wein [25] and Uma and Wein [38] performed experimental studies to evaluate, in part, the
quality of the LP relaxation and approximation algorithms studied herein, for 1| r_j | Σ w_j C_j
and related scheduling problems. The first authors report that, except for instances that were
deliberately constructed to be hard for this approach, the present formulation and algorithms
"deliver surprisingly strong experimental performance." They also note that "the ideas that led
to improved approximation algorithms also lead to heuristics that are quite effective in empirical
experiments; furthermore they can be extended to give improved heuristics for more
complex problems that arise in practice." While the authors of the follow-up study [38] report
that when coupled with local improvement the LP-based heuristics generally produce the
best solutions, they also find that a simple heuristic often outperforms the LP-based heuristics.
Whenever the machine becomes idle, this heuristic starts non-preemptively processing an available
job of largest w_j/p_j ratio. By analyzing the differences between the LP schedule and this
heuristic schedule, Chou, Queyranne and Simchi-Levi [7] have subsequently shown the asymptotic
optimality of this on-line heuristic for classes of instances with bounded job weights and
bounded processing times.
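The simple heuristic just described is easy to state in code. The following Python sketch is only an illustration (not the implementation used in the cited experimental studies); it assumes jobs are given as (release date, processing time, weight) triples with positive processing times.

def greedy_nonpreemptive(jobs):
    """Whenever the machine falls idle, start (non-preemptively) an available,
    not yet scheduled job of largest w_j / p_j ratio; return completion times."""
    n = len(jobs)
    unscheduled = set(range(n))
    t, C = 0.0, [0.0] * n
    while unscheduled:
        available = [j for j in unscheduled if jobs[j][0] <= t]
        if not available:
            t = min(jobs[j][0] for j in unscheduled)  # idle until the next release
            continue
        j = max(available, key=lambda j: jobs[j][2] / jobs[j][1])
        t += jobs[j][1]
        C[j] = t
        unscheduled.remove(j)
    return C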
The contents of this paper are as follows. Section 2 is concerned with the LP relaxations and
their relationship. We begin with a presentation and discussion of the LP schedule. In Section 2.1
we then review a time-indexed formulation introduced by Dyer and Wolsey [9] and show that it is
solved to optimality by the LP schedule. In Section 2.2 we present the mean busy time relaxation
(or completion time relaxation) and prove, among other properties, its equivalence to the time-indexed
formulation. Section 2.3 explores some polyhedral consequences, in particular the fact
that the mean busy time relaxation is (up to scaling by the job processing times) a supermodular
linear program and that the \job-based" method for constructing the LP schedule is equivalent to
the corresponding greedy algorithm. Section 3 then deals with approximation algorithms derived
from these LP relaxations. In Section 3.1 we present a method for constructing ( j )-schedules,
which allows us to analyze and bound the job completion times in the resulting schedules. In
Section 3.2 we derive simple bounds for -schedules and ( j )-schedules, using a deterministic
common or uniformly distributed random j 's. Using appropriate probability distributions,
we improve the approximation bound to the value of 1.7451 for -schedules in Section 3.3 and to
the value of 1.6853 for ( j )-schedules in Section 3.4. We also indicate how these algorithms can
be derandomized in O(n 2 ) time for constructing deterministic schedules with these performance
guarantees. In Section 3.5 we show that our randomized approximations also apply in an on-line
setting and, in Section 3.6 we present a class of \bad" instances for which the ratio of the
optimal objective function value and our LP bound is arbitrarily close to e
1:5819. This
constant denes a lower bound on the approximation results that can be obtained by the present
approach. We conclude in Section 4 by discussing some related problems and open questions.
2 Relaxations
In this section, we present two linear programming relaxations for 1| r_j | Σ w_j C_j. We show their
equivalence and discuss some polyhedral consequences.
For both relaxations, the following preemptive schedule plays a crucial role: at any point in
time, schedule (preemptively) the available job with highest w_j/p_j ratio. We assume (throughout
the paper) that the jobs are indexed in order of non-increasing ratios, w_1/p_1 ≥ w_2/p_2 ≥ ... ≥ w_n/p_n,
and ties are broken according to this order. Therefore, whenever a job is released, the job
being processed (if any) is preempted if the released job has a smaller index. We refer to this
preemptive schedule as the LP schedule. See Figure 1 for an example of an LP schedule.
Figure 1: An LP schedule for a 4-job instance. Higher rectangles represent jobs with larger weight to
processing time ratio. Time is shown on the horizontal axis.
Notice that this LP schedule does not in general minimize Σ w_j C_j over the preemptive
schedules. This should not be surprising since the preemptive problem 1| r_j, pmtn | Σ w_j C_j is
(strongly) NP-hard [16]. It can be shown, however, that the total weighted completion time of
the LP schedule is always within a factor of 2 of the optimal value, and this bound is tight; see [29].
The LP schedule can be constructed in O(n log n) time. To see this, we now describe an
implementation, which may be seen as \dynamic" (event-oriented) or, using the terminology
of [19], \machine-based" and can even be executed on-line while the jobs dynamically arrive
over time. The algorithm keeps a priority queue [8] of the currently available jobs that have not
yet been completely processed, with the ratio w j =p j as the key and with another eld indicating
the remaining processing time. A scheduling decision is made at only two types of events: when
a job is released, and when a job completes its processing. In the former case, the released job is
added to the priority queue. In the latter case, the completed job is removed from the priority
queue. Then, in either case, the top element of the priority queue (the one with highest w_j/p_j
ratio) is processed; if the queue is empty, then move on to the next job release; if there is none,
then all jobs have been processed and the LP schedule is complete. This implementation results
in a total of O(n) priority queue operations. Since each such operation can be implemented in
O(log n) time [8], the algorithm runs in O(n log n) time.
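For concreteness, the following Python sketch mirrors this event-driven construction. It is only an illustration under the assumption that jobs are given as (release date, processing time, weight) triples with positive processing times; ties are broken by job index, which matches the rule above when the input is indexed by non-increasing w_j/p_j.

import heapq

def lp_schedule(jobs):
    """Event-driven ("machine-based") construction of the preemptive LP schedule.
    Returns, for each job, the list of half-open intervals (start, end) in which it runs."""
    n = len(jobs)
    by_release = sorted(range(n), key=lambda j: jobs[j][0])
    intervals = [[] for _ in range(n)]
    remaining = [p for (_, p, _) in jobs]
    heap = []          # available, unfinished jobs keyed by (-w/p, job index)
    t, i = 0.0, 0      # current time, index of the next release event
    while i < n or heap:
        if not heap:   # machine idle: jump to the next release date
            t = max(t, jobs[by_release[i]][0])
        while i < n and jobs[by_release[i]][0] <= t:
            j = by_release[i]
            heapq.heappush(heap, (-jobs[j][2] / jobs[j][1], j))
            i += 1
        _, j = heap[0]
        next_release = jobs[by_release[i]][0] if i < n else float("inf")
        run = min(remaining[j], next_release - t)   # run until completion or next event
        intervals[j].append((t, t + run))
        remaining[j] -= run
        t += run
        if remaining[j] == 0:
            heapq.heappop(heap)
    return intervals

Each job is pushed and popped once, and every release or completion event triggers a constant number of priority queue operations, in line with the O(n log n) bound.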
The LP schedule can also be defined in a somewhat different manner, which may be seen as
"static" or "job-based" [19]. Consider the jobs one at a time in order of non-increasing w_j/p_j.
Schedule each job j as early as possible starting at r j and preempting it whenever the machine
is busy processing another job (that thus came earlier in the w j =p j ordering). This point-of-view
leads to an alternate O(n log n) construction of the LP schedule, see [10].
2.1 Time-Indexed Relaxation
Dyer and Wolsey [9] investigate several types of relaxations of 1| r_j | Σ w_j C_j, the strongest ones
being time-indexed. We consider the weaker of their two time-indexed formulations, which
they call formulation (D). It uses two types of variables: y_{jτ} = 1 if job j is being processed
during the time interval [τ, τ+1), and zero otherwise; and t_j, which represents the start time of job j. For
simplicity, we add p_j to t_j and replace the resulting expression by C_j; this gives an equivalent
relaxation.
  Z_D = min Σ_{j∈N} w_j C_j
subject to
  Σ_{j∈N} y_{jτ} ≤ 1                                        for τ = 0, 1, ..., T−1,        (D)
  Σ_{τ=r_j}^{T−1} y_{jτ} = p_j                              for j ∈ N,
  C_j = p_j/2 + (1/p_j) Σ_{τ=r_j}^{T−1} y_{jτ} (τ + 1/2)    for j ∈ N,                     (1)
  y_{jτ} ≥ 0                                                for j ∈ N and τ = r_j, ..., T−1,
where T is an upper bound on the makespan of an optimal schedule (for example, T = max_j r_j + Σ_j p_j).
We refer to this relaxation as the preemptive time-indexed relaxation.
The expression for C_j given in (1) corresponds to the correct value of the completion time if
job j is not preempted; an interpretation in terms of mean busy times is given in the next section
for the case of preemptions. Observe that the number of variables of this formulation is pseudo-polynomial.
If we eliminate C_j from the relaxation by using (1), we obtain a transportation
problem [9] and, as a result, y_{jτ} can be assumed to be integral:
Lemma 2.1. There exists an optimal solution to (D) for which y_{jτ} ∈ {0, 1} for all j and τ.
As indicated in [9], (D) can be solved in O(n log n) time. Actually, one can derive a feasible
solution to (D) from the LP schedule by letting y^LP_{jτ} be equal to 1 if job j is being processed in
the interval [τ, τ+1), and 0 otherwise.
Theorem 2.2. The solution y LP derived from the LP schedule is an optimal solution to (D).
Proof. The proof is based on an interchange argument. Consider any optimal 0/1-solution y to
(D). Suppose there exist jobs j < k and time units σ < τ with σ ≥ r_j such that y_{jτ} = 1 and
y_{kσ} = 1, i.e., a unit of the lower-priority job k is processed before a unit of the higher-priority
job j although j is already released. Replacing y_{jτ} and y_{kσ} by 0, and y_{jσ} and y_{kτ} by 1,
we obtain another feasible solution; by (1), the objective function value changes by
(τ − σ)(w_k/p_k − w_j/p_j) ≤ 0. The resulting solution must therefore also be optimal.
By repeating this interchange argument, we derive that there exists an optimal solution y such
that there do not exist j < k and σ < τ with σ ≥ r_j and y_{jτ} = y_{kσ} = 1. This implies that the
solution y must correspond to the LP schedule.
In particular, despite the pseudo-polynomial number of variables in the LP relaxation (D),
an optimal solution can be obtained efficiently. We will make use of this fact as well as of the
special structure of the LP schedule in the design and analysis of the approximation algorithms,
see Section 3. We note again that, in spite of its nice properties, the preemptive time-indexed
LP relaxation (D) solves neither 1| r_j | Σ w_j C_j nor 1| r_j, pmtn | Σ w_j C_j. In the former case, the
processing of a job in the LP solution may fail to be consecutive; in the latter case equation (1)
does not necessarily define the completion time of a job in the preemptive LP schedule, as will
be shown in the next lemma.
2.2 Mean Busy Time Relaxation
Given any preemptive schedule, let I j be the indicator function of the processing of job j at time t,
i. e., I j (t) is 1 if the machine is processing j at time t and 0 otherwise. To avoid pathological
situations, we require that, in any preemptive schedule, when the machine starts processing a
job, it does so for a positive amount of time. Given any preemptive schedule, we dene the
mean busy time M_j of job j to be the average time at which the machine is processing j, i.e.,
  M_j := (1/p_j) ∫_0^T I_j(t) · t dt.
For instance, in the example given in Figure 1, which will be used throughout this paper, the
mean busy time of job 4 is 5.5.
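As an illustration (not part of the original paper), the mean busy times can be read off directly from the processing intervals of a preemptive schedule, for example those produced by the lp_schedule sketch above:

def mean_busy_times(intervals, jobs):
    """M_j = (1/p_j) * integral of t over the union of j's processing intervals;
    the integral of t over [s, e) is (e^2 - s^2) / 2."""
    M = []
    for j, ivs in enumerate(intervals):
        p = jobs[j][1]
        M.append(sum((e * e - s * s) / 2.0 for (s, e) in ivs) / p)
    return M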
We first establish some important properties of M_j in the next two lemmas.
Lemma 2.3. For any preemptive schedule, let C_j and M_j denote the completion and mean busy
time, respectively, of job j. Then for any job j, we have C_j ≥ M_j + p_j/2, and equality holds
if and only if job j is not preempted.
Proof. If job j is processed without preemption, then I_j(t) = 1 exactly for t ∈ [C_j − p_j, C_j), so
M_j = (1/p_j) ∫_{C_j − p_j}^{C_j} t dt = C_j − p_j/2. If job j is preempted, i.e., if it is not processed
during some interval(s) of total length L > 0 between times C_j − p_j and C_j, then, since
∫_0^T I_j(t) dt = p_j, it must be processed during some time interval(s) of the same total length L
before C_j − p_j. Therefore, M_j < C_j − p_j/2, and the proof is complete.
Let S ⊆ N denote a set of jobs and define p(S) := Σ_{j∈S} p_j and r_min(S) := min_{j∈S} r_j.
Let I_S(t) := Σ_{j∈S} I_j(t). Since, by the machine capacity constraint, I_S(t) ∈ {0, 1} for all t, we
may view I_S as the indicator function for job set S. We can thus define the mean busy time of
set S as M_S := (1/p(S)) ∫_0^T I_S(t) · t dt. Note that we have
  p(S) M_S = Σ_{j∈S} p_j M_j.
So, unlike its start and completion time, the mean busy time of a job set is a simple weighted
average of the mean busy times of its elements. One consequence of this observation is the
validity of the shifted parallel inequalities (3) (see, e.g., [10, 23, 24]) below.
Lemma 2.4. For any set S of jobs and any preemptive schedule with mean busy time vector M,
we have
  Σ_{j∈S} p_j M_j ≥ p(S) ( r_min(S) + p(S)/2 ),                                       (3)
and equality holds if and only if all the jobs in S are scheduled without interruption from r_min(S)
to r_min(S) + p(S).
Proof. Note that p(S) M_S = ∫_0^T I_S(t) · t dt = ∫_{r_min(S)}^T I_S(t) · t dt; that I_S(t) = 0 for
t < r_min(S) and I_S(t) ≤ 1 for t ≥ r_min(S); and that ∫_{r_min(S)}^T I_S(t) dt = p(S). Hence
∫_{r_min(S)}^T I_S(t) · t dt is minimized when I_S(t) = 1 exactly for t ∈ [r_min(S), r_min(S) + p(S)). Thus
M_S is uniquely minimized among all feasible preemptive schedules when all the jobs in S are
continuously processed from r_min(S) to r_min(S) + p(S). This minimum value of M_S is r_min(S) + p(S)/2,
and therefore p(S)(r_min(S) + p(S)/2) is a lower bound for Σ_{j∈S} p_j M_j in any feasible preemptive schedule.
As a result of Lemma 2.4, the following linear program provides a lower bound on the optimal
value of 1| r_j, pmtn | Σ w_j C_j and hence on that of 1| r_j | Σ w_j C_j:
  Z_R := min Σ_{j∈N} w_j ( M_j + p_j/2 )
  subject to   Σ_{j∈S} p_j M_j ≥ p(S) ( r_min(S) + p(S)/2 )   for all S ⊆ N.          (R)
The proof of the following theorem and later developments use the notion of canonical
decompositions [10]. For a set S of jobs, consider the schedule which processes jobs in S as
early as possible, say, in order of their release dates. This schedule induces a partition of S
into subsets S_1, ..., S_k such that the machine is busy processing jobs in S exactly in the disjoint
intervals [r_min(S_ℓ), r_min(S_ℓ) + p(S_ℓ)), ℓ = 1, ..., k. We refer to this partition as the canonical
decomposition of S. A set is canonical if it is identical to its canonical decomposition, i.e., if its
canonical decomposition is {S}. Thus a set S is canonical if and only if it is feasible to schedule
all its jobs in the time interval [r_min(S), r_min(S) + p(S)). Note that the whole job set of our
example is canonical whereas the subset {1, 2, 3} is not; it has the decomposition {{3}, {1, 2}}.
Let
  h(S) := Σ_{ℓ=1}^{k} p(S_ℓ) ( r_min(S_ℓ) + p(S_ℓ)/2 ),                               (4)
where {S_1, ..., S_k} is the canonical decomposition of S ⊆ N. Then Lemma 2.4 implies that
Σ_{j∈S} p_j M_j ≥ h(S) is a valid inequality for the mean busy time vector of
any preemptive schedule. In other words, relaxation (R) may be written as:
  Z_R = min { Σ_{j∈N} w_j ( M_j + p_j/2 ) : Σ_{j∈S} p_j M_j ≥ h(S) for all S ⊆ N }.
Theorem 2.5. Let M^LP_j be the mean busy time of job j in the LP schedule. Then M^LP is an
optimal solution to (R).
Proof. By Lemma 2.4, M^LP is a feasible solution for (R).
To prove optimality of M^LP, we construct a lower bound on the optimal value of (R) and
show that it is equal to the objective function value of M^LP. Recall that the jobs are indexed
in non-increasing order of the ratios w_j/p_j. For i = 1, ..., n, let S^i_1, ..., S^i_{k(i)} denote
the canonical decomposition of [i] := {1, ..., i}. Observe that for any vector M,
  Σ_{j∈N} w_j M_j = Σ_{i=1}^{n} ( w_i/p_i − w_{i+1}/p_{i+1} ) Σ_{j∈[i]} p_j M_j,      (5)
where we let w_{n+1}/p_{n+1} := 0. We have therefore expressed Σ_j w_j M_j as a nonnegative
combination of expressions Σ_{j∈[i]} p_j M_j over the sets [i] and their canonical sets. By construction
of the LP schedule, the jobs in any of these canonical sets S^i_ℓ are continuously processed from
r_min(S^i_ℓ) to r_min(S^i_ℓ) + p(S^i_ℓ) in the LP schedule. Thus, for any feasible solution M to (R)
and any such canonical set S^i_ℓ we have
  Σ_{j∈S^i_ℓ} p_j M_j ≥ p(S^i_ℓ) ( r_min(S^i_ℓ) + p(S^i_ℓ)/2 ) = Σ_{j∈S^i_ℓ} p_j M^LP_j,
where the last equation follows from Lemma 2.4. Combining this with (5), we derive a lower
bound on Σ_j w_j M_j valid for any feasible solution M to (R), and this lower bound is attained by the
LP schedule.
From Theorems 2.2 and 2.5, we derive that the values of the two relaxations (D) and (R)
are equal.
Corollary 2.6. The LP relaxations (D) and (R) of 1| r_j | Σ w_j C_j yield the same optimal objective
function value, i.e., Z_D = Z_R, for all nonnegative weights w ≥ 0. This value can be computed in
O(n log n) time.
Proof. For the equivalence of the lower bounds, note that the mean busy time M^LP_j of any job j
in the LP schedule can be expressed as
  M^LP_j = (1/p_j) Σ_{τ} y^LP_{jτ} ( τ + 1/2 ),                                        (6)
where y^LP is the solution to (D) derived from the LP schedule. The result then follows directly
from Theorems 2.2 and 2.5. We have shown earlier that the LP schedule can be constructed in
O(n log n) time.
Although the LP schedule does not necessarily minimize Σ w_j C_j over the preemptive schedules,
Theorem 2.5 implies that it minimizes Σ w_j M_j over the preemptive schedules. In addition,
by Lemma 2.3, the LP schedule is also optimal for both the preemptive and the non-preemptive problem
whenever it does not preempt any job. For example,
this is the case if all processing times are equal to 1 or if all jobs are released at the same date.
Thus, the LP schedule provides an optimal solution to the corresponding problems with unit
processing times and with equal release dates.
This was already known. In the latter case it coincides with Smith's ratio rule [35];
see Queyranne and Schulz [24] for the former case.
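For reference, Smith's ratio rule is a one-line algorithm; the sketch below is an illustration only and uses the same job representation as the earlier snippets.

def smith_rule(jobs):
    """Smith's ratio rule for the case of equal release dates: sequence the jobs
    in non-increasing order of w_j / p_j and return the completion times."""
    order = sorted(range(len(jobs)), key=lambda j: -jobs[j][2] / jobs[j][1])
    t, C = 0.0, [0.0] * len(jobs)
    for j in order:
        t += jobs[j][1]
        C[j] = t
    return C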
2.3 Polyhedral Consequences
We now consider some polyhedral consequences of the preceding results. Let P^∞_D be the feasible
region defined by the constraints of relaxation (D) when the time horizon T is unbounded.
In addition, we denote by P_R := { M ∈ R^N : Σ_{j∈S} p_j M_j ≥ h(S) for all S ⊆ N } the polyhedron
defined by the constraints of relaxation (R).
Theorem 2.7.
(i) Polyhedron PR is the convex hull of the mean busy time vectors M of all preemptive sched-
ules. Moreover, every vertex of PR is the mean busy time vector of an LP schedule.
(ii) Polyhedron P_R is also the image of P^∞_D in the space of the M-variables under the linear
mapping M(·) defined by
  M(y)_j := (1/p_j) Σ_{τ ≥ r_j} y_{jτ} ( τ + 1/2 )   for all j ∈ N.
Proof. (i) Lemma 2.4 implies that the convex hull of the mean busy time vectors M of all feasible
preemptive schedules is contained in P_R. To show the reverse inclusion, it suffices to show that
(a) every extreme point of P_R corresponds to a preemptive schedule; and (b) every extreme ray
of P_R is a direction of recession for the convex hull of mean busy time vectors. Property (a) and
the second part of statement (i) follow from Theorem 2.5 and the fact that every extreme point
of P_R is the unique minimizer of Σ_j w_j M_j for some choice of weights w. For (b), note that the extreme
rays of P_R are the n unit vectors of R^N. An immediate extension to preemptive schedules and
mean busy times of results in Balas [3] implies that these unit vectors of R^N are directions of
recession for the convex hull of mean busy time vectors. This completes the proof of (i).
(ii) We first show that the image M(P^∞_D) is contained in P_R. For this, let y be a
vector in P^∞_D and S ⊆ N with canonical decomposition {S_1, ..., S_k}. By definition of M(y)_j,
we have
  Σ_{j∈S} p_j M(y)_j = Σ_{j∈S} Σ_{τ ≥ r_j} y_{jτ} ( τ + 1/2 ) ≥ h(S).
The inequality follows from the constraints defining P^∞_D and the interchange argument which
we already used in the proof of Theorem 2.2. This shows M(y) ∈ P_R and thus M(P^∞_D) ⊆ P_R.
To show the reverse inclusion, we use the observation from the proof of part (i) that P_R can
be represented as the sum of the convex hull of the mean busy time vectors of all LP schedules
and the nonnegative orthant. Since, by equation (6), the mean busy time vector M^LP of any LP
schedule is the projection of the corresponding 0/1-vector y^LP, it remains to show that every
unit vector e_j is a direction of recession for M(P^∞_D). For this, fix an LP schedule and let y^LP and
M^LP denote the associated 0/1 y-vector and mean busy time vector, respectively.
For any job j ∈ N and any real ε > 0, we need to show that M^LP + ε e_j ∈ M(P^∞_D).
Let max := argmaxfy LP
Ng. Choose such that y LP
choose an integer
k otherwise. In the
associated preemptive schedule, the processing of job j that was done in interval [; +1) is now
postponed, by time units, until interval [ +; ++1). Therefore, its mean busy time vector
k for all k 6= j. Let 0 := =p j , so
. Then the vector M LP + e j is a convex combination of M
and y be the corresponding convex combination of y LP and y 0 . Since P 1
D is
convex then y
D and, since the mapping M is linear, M LP
In view of earlier results for single machine scheduling with identical release dates [22], as
well as for parallel machine scheduling with unit processing times and integer release dates [24],
it is interesting to note that the feasible set PR of the mean busy time relaxation is, up to scaling
by the job processing times, a supermodular polyhedron:
Proposition 2.8. The set function h defined in (4) is supermodular.
Proof. Consider any two distinct jobs j and k and any subset S ⊆ N \ {j, k}. We may construct
an LP schedule minimizing Σ_{i∈S∪{k}} p_i M_i using the job-based method by considering first the
jobs in S and then job k. (Note that considering the jobs in any sequence leads to a schedule
minimizing Σ p_i M_i, because jobs are weighted by their processing times in this objective
function.) By definition (4) the resulting mean busy times M^LP satisfy Σ_{i∈S} p_i M^LP_i = h(S)
and Σ_{i∈S∪{k}} p_i M^LP_i = h(S ∪ {k}), so p_k M^LP_k = h(S ∪ {k}) − h(S).
Note that job k is scheduled, no earlier than its release
date, in the first p_k units of idle time left after the insertion of all jobs in S. Thus M^LP_k is the
mean of all these p_k time units. Similarly, we may construct an LP schedule, whose mean busy
time vector will be denoted by M', by considering first the jobs
in S, then job j, and finally job k, so that Σ_{i∈S∪{j}} p_i M'_i = h(S ∪ {j}) and
p_k M'_k = h(S ∪ {j, k}) − h(S ∪ {j}). Since job j has been inserted after subset S was
scheduled, job k cannot use any idle time interval that is earlier than those it used in the former
schedule M^LP, and some of the previously available idle time may now be occupied by job
j, causing a delay in the mean busy time of job k; thus we have M'_k ≥ M^LP_k and therefore
  h(S ∪ {j, k}) − h(S ∪ {j}) ≥ h(S ∪ {k}) − h(S).
This suffices to establish that h is supermodular.
An alternate proof of the supermodularity of h can be derived, as in [10], from the fact, observed
by Dyer and Wolsey and already mentioned above, that relaxation (D) becomes a transportation
problem after elimination of the C j 's. Indeed, from an interpretation of Nemhauser,
Wolsey and Fisher [20] of a result of Shapley [31], it then follows that the value of this transportation
problem as a function of S is supermodular. One of the consequences of Proposition 2.8
is that the job-based method to construct an LP schedule is just a manifestation of the greedy
algorithm for minimizing Σ_j w_j M_j over the supermodular polyhedron P_R.
We finally note that the separation problem for the polyhedron P_R can be solved combinatorially.
One can separate over the family of inequalities Σ_{j∈S} p_j M_j ≥ h(S)
by trying all possible values for r_min(S) (of which there are at most n) and then applying an
O(n log n) separation routine of Queyranne [22] for the problem without release dates. The
overall separation routine can be implemented in O(n^2) time by observing that the bottleneck
step in Queyranne's algorithm, sorting the mean busy times of the jobs, needs to be done
only once for the whole job set.
3 Provably Good Schedules and LP Relaxations
In this section, we derive approximation algorithms for 1| r_j | Σ w_j C_j that are based on converting
the preemptive LP schedule into a feasible non-preemptive schedule whose value can
be bounded in terms of the optimal LP value Z_D = Z_R. This yields results on the quality of
both the computed schedule and the LP relaxations under consideration, since the value of the
computed schedule is an upper bound and the optimal LP value is a lower bound on the value
of an optimal schedule.
In Section 3.6 below, we describe a family of instances for which the ratio between the
optimal value of the 1| r_j | Σ w_j C_j problem and the lower bounds Z_R and Z_D is arbitrarily
close to e/(e−1) ≈ 1.5819. This lower bound of e/(e−1) sets a target for the design of approximation
algorithms based on these LP relaxations.
In order to convert the preemptive LP schedule into a non-preemptive schedule we make
use of so-called α-points of jobs. For 0 < α ≤ 1 the α-point t_j(α) of job j is the first point in
time when an α-fraction of job j has been completed in the LP schedule, i.e., when j has been
processed for α · p_j time units. In particular, t_j(1) is equal to the completion time, and we define
t_j(0^+) to be the start time of job j. Notice that, by definition, the mean busy time M^LP_j of
job j in the LP schedule is the average over all its α-points:
  M^LP_j = ∫_0^1 t_j(α) dα.                                                            (7)
We will also use the following notation: For a fixed job j and 0 < α ≤ 1 we denote the fraction
of job k that is completed in the LP schedule by time t_j(α) by η_k(α); in particular, η_j(α) = α.
The amount of idle time that occurs between time 0 and the start of job j in the LP schedule is
denoted by t_idle. Note that η_k and t_idle implicitly depend on the fixed job j. By construction,
there is no idle time between the start and completion of job j in the LP schedule; therefore we
can express j's α-point as
  t_j(α) = t_idle + Σ_{k∈N} η_k(α) p_k.                                                (8)
For a given 0 < α ≤ 1, we define the α-schedule as the schedule in which jobs are processed
non-preemptively as early as possible and in the order of non-decreasing α-points. We denote
the completion time of job j in this schedule by C^α_j. The idea of scheduling non-preemptively
in the order of α-points in a preemptive schedule was introduced by Phillips, Stein and Wein
[21], and used in many of the subsequent results in the area.
This idea can be further extended to individual, i.e., job-dependent α_j-points t_j(α_j), where
0 < α_j ≤ 1. We denote the vector consisting of all α_j's by (α_j) := (α_1, ..., α_n).
Then, the (α_j)-schedule is constructed by processing the jobs as early as possible and in non-decreasing
order of their α_j-points; the completion time of job j in the (α_j)-schedule is denoted
by C^{(α_j)}_j.
Figure 2 compares an α-schedule to an (α_j)-schedule, both derived from the LP schedule
in Figure 1.
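Both conversions are straightforward to implement from the processing intervals of the LP schedule. The sketch below is an illustration rather than the authors' code; it accepts a job-dependent vector of α_j's, so a common α is simply the special case of a constant vector, and it reuses the job representation of the earlier snippets.

def alpha_points(intervals, jobs, alphas):
    """t_j(alpha_j): first time by which an alpha_j-fraction of job j has been
    processed in the (preemptive) schedule given by `intervals`."""
    points = []
    for j, ivs in enumerate(intervals):
        target = alphas[j] * jobs[j][1]
        done = 0.0
        for (s, e) in ivs:
            if done + (e - s) >= target:
                points.append(s + (target - done))
                break
            done += e - s
        else:
            points.append(ivs[-1][1])   # guard against floating-point rounding
    return points

def alpha_schedule(jobs, points):
    """Non-preemptive list schedule: process the jobs as early as possible in order of
    non-decreasing alpha-points (ties broken by job index); returns completion times."""
    order = sorted(range(len(jobs)), key=lambda j: (points[j], j))
    t, C = 0.0, [0.0] * len(jobs)
    for j in order:
        t = max(t, jobs[j][0]) + jobs[j][1]   # a job cannot start before its release date
        C[j] = t
    return C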
In the sequel we present several results on the quality of α-schedules and (α_j)-schedules.
These results also imply bounds on the quality of the LP relaxations of the previous section.
The main result is the construction of a random (α_j)-schedule whose expected value is at most
a factor 1.6853 of the optimal LP value Z_D = Z_R. Therefore the LP relaxations (D) and
(R) deliver a lower bound which is at least 0.5933 (≈ 1/1.6853) times the optimal value. The
corresponding randomized algorithm can be implemented on-line; it has competitive ratio 1.6853
and running time O(n log n); it can also be derandomized to run off-line in O(n^2) time. We also
investigate the case of a single common α and show that the best α-schedule is always within a
factor of 1.7451 of the optimum.
Figure 2: A non-preemptive α-schedule and an (α_j)-schedule, shown above and below the
LP schedule, respectively. Notice that there is no common value α that would lead to the latter schedule.
3.1 Bounding the completion times in (α_j)-schedules
To analyze the completion times of jobs in (α_j)-schedules, we consider non-preemptive schedules
of similar structure that are, however, constructed by a slightly different conversion routine which
we call (α_j)-Conversion:
Consider the jobs j ∈ N in order of non-increasing α_j-points t_j(α_j) and iteratively
change the preemptive LP schedule to a non-preemptive schedule by applying the
following steps:
i) remove the α_j p_j units of job j that are processed before t_j(α_j) and leave the
machine idle during the corresponding time intervals; we say that this idle time
is caused by job j;
ii) delay by p_j the whole processing that is done later than t_j(α_j);
iii) remove the remaining (1 − α_j)-fraction of job j from the machine and shrink
the corresponding time intervals; shrinking a time interval means to discard the
interval and move earlier, by the corresponding amount, any processing that
occurs later;
iv) process job j in the released time interval [t_j(α_j), t_j(α_j) + p_j).
Figure 3 contains an example illustrating the action of (α_j)-Conversion starting from the LP
schedule of Figure 1. Observe that in the resulting schedule jobs are processed in non-decreasing
order of α_j-points and no job j is started before time t_j(α_j). The latter property will be
useful in the analysis of on-line (α_j)-schedules.
Figure 3: Illustration of the individual iterations of (α_j)-Conversion.
Lemma 3.1. The completion time of job j in the schedule constructed by (α_j)-Conversion is
equal to
  t_j(α_j) + Σ_{k: t_k(α_k) ≤ t_j(α_j)} ( 1 + α_k − η_k(α_j) ) p_k.
Proof. Consider the schedule constructed by (α_j)-Conversion. The completion time of job j
is equal to the idle time before its start plus the sum of processing times of jobs that start no
later than j. Since the jobs are processed in non-decreasing order of their α_j-points, the amount
of processing before the completion of job j is
  Σ_{k: t_k(α_k) ≤ t_j(α_j)} p_k.                                                      (9)
The idle time before the start of job j can be written as the sum of the idle time t_idle that
already existed in the LP schedule before j's start plus the idle time before the start of job j
that is caused in steps i) of (α_j)-Conversion; notice that steps iii) do not create any additional
idle time since we shrink the affected time intervals. Each job k that is started no later than j,
i.e., such that t_k(α_k) ≤ t_j(α_j), contributes α_k p_k units of idle time; all other jobs k only contribute
η_k(α_j) p_k units of idle time. As a result, the total idle time before the start of job j can be
written as
  t_idle + Σ_{k: t_k(α_k) ≤ t_j(α_j)} α_k p_k + Σ_{k: t_k(α_k) > t_j(α_j)} η_k(α_j) p_k.   (10)
The completion time of job j in the schedule constructed by (α_j)-Conversion is equal to the
sum of the expressions in (9) and (10); the result then follows from equation (8).
It follows from Lemma 3.1 that the completion time C_j of each job j in the non-preemptive
schedule constructed by (α_j)-Conversion is at least t_j(α_j) + p_j ≥ r_j + p_j; hence the result is a feasible
schedule. Since the (α_j)-schedule processes the jobs as early as possible and in the same order
as the (α_j)-Conversion schedule, we obtain the following corollary.
Corollary 3.2. The completion time of job j in an (α_j)-schedule can be bounded by
  C^{(α_j)}_j ≤ t_j(α_j) + Σ_{k: t_k(α_k) ≤ t_j(α_j)} ( 1 + α_k − η_k(α_j) ) p_k.
3.2 Bounds for α-schedules and (α_j)-schedules
We start with a result on the quality of the α-schedule for a fixed common value of α.
Theorem 3.3. For fixed α, (i) the value of the α-schedule is within a factor max{1 + 1/α, 1 + 2α}
of the optimal LP value; in particular, for α = 1/√2 the bound is 1 + √2. Simultaneously, (ii)
the length of the α-schedule is within a factor of 1 + α of the optimal makespan.
Proof. While the proof of (ii) is an immediate consequence of (8) and Corollary 3.2, it follows
from the proof of Theorem 2.5 that for (i) it is sufficient to prove that, for any canonical set S,
we have
  Σ_{j∈S} p_j C^α_j ≤ max{1 + 1/α, 1 + 2α} · p(S) ( r_min(S) + p(S)/2 ).              (11)
Indeed, using (5) and Lemma 2.4 it would then follow that the objective function value of the
α-schedule is at most max{1 + 1/α, 1 + 2α} times Z_R, proving the result.
Consider now any canonical set S and let us assume that, after renumbering the jobs,
S = {1, 2, ..., s} with t_1(α) ≤ ... ≤ t_s(α) (so the ordering is not necessarily anymore in
non-increasing order of w_j/p_j). Fix now any job j ∈ S. From Corollary 3.2, we derive that
  C^α_j ≤ t_j(α) + Σ_{k: t_k(α) ≤ t_j(α)} ( 1 + α − η_k ) p_k,                         (12)
where η_k := η_k(α) represents the fraction of job k processed in the LP schedule before t_j(α).
Let R denote the set of jobs k ∉ S such that t_k(α) < r_min(S) (and thus η_k ≥ α). Since S is a
canonical set, the jobs in S are processed continuously in the LP schedule between r_min(S) and
r_min(S) + p(S); therefore every job k with t_k(α) ≤ t_j(α) is either in S or in R. Observe that
η_k ≥ α for all k ∈ R implies that p(R) ≤ (1/α) · r_min(S). We can thus simplify (12).
Since the jobs in S are scheduled with no gaps in [r min (S); r min (S) + p(S)], we have that
Combining (13) and (14), we derive that
Multiplying by p j and summing over S, we get:
which implies (11).
In the sequel we will compare the completion time C^α_j of every job j with its "completion time"
M^LP_j + p_j/2 in the LP schedule. However, for any fixed common value of α, there exist instances
which show that this type of job-by-job analysis cannot give a better bound.
One can also show that, for any given value of α, there exist instances for which the objective
function value of the α-schedule can be as bad as twice the LP lower bound.
In view of these results, it is advantageous to use several values of α, as it appears that no
instance can be simultaneously bad for all choices of α. In fact, the α-points develop their full
power in combination with randomization, i.e., when a common α or even job-dependent α_j's are
chosen randomly from (0, 1] according to an appropriate density function. This is also motivated
by equation (7), which relates the expected α-point of a job under a uniform distribution of α
to the LP variable M^LP_j. For random values α_j, we analyze the expected value of the resulting
(α_j)-schedule and compare it to the optimal LP value. Notice that a bound on the expected
value proves the existence of a vector (α_j) such that the corresponding (α_j)-schedule meets this
bound. Moreover, for our results we can always compute such an (α_j) in polynomial time by
derandomizing our algorithms with standard methods; see Propositions 3.8 and 3.13.
Although the currently best known bounds can only be achieved for (α_j)-schedules with
job-dependent α_j's, we investigate α-schedules with a single common α as well. On the one
hand, this helps to better understand the potential advantages of (α_j)-schedules; on the other
hand, the randomized algorithm that relies on a single α admits a natural derandomization. In
fact, we can easily compute an α-schedule of least objective function value over all α between
0 and 1; we refer to this schedule as the best-α-schedule. In Proposition 3.8 below, we will
show that there are at most n different α-schedules. The best-α-schedule can be constructed
in O(n^2) time by evaluating all these different schedules.
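A direct way to carry out this enumeration, sketched below purely for illustration (again reusing the earlier helper functions and job representation), is to collect the fractions of each job completed at its preemption points; these are exactly the breakpoints at which the α-schedule can change, and evaluating the α-schedule at each of them covers all distinct α-schedules.

def best_alpha_schedule(intervals, jobs):
    """Evaluate the alpha-schedule at every breakpoint of alpha and return the best one.
    Candidate breakpoints: for each job, the fraction of it completed at each of its
    preemption points in the LP schedule, plus alpha = 1; the alpha-schedule is constant
    between consecutive breakpoints (taking the right endpoint of each interval)."""
    n = len(jobs)
    candidates = {1.0}
    for j, ivs in enumerate(intervals):
        done = 0.0
        for (s, e) in ivs[:-1]:              # every piece except the last ends in a preemption
            done += e - s
            candidates.add(done / jobs[j][1])
    best = None
    for a in sorted(candidates):
        C = alpha_schedule(jobs, alpha_points(intervals, jobs, [a] * n))
        value = sum(jobs[j][2] * C[j] for j in range(n))
        if best is None or value < best[0]:
            best = (value, a, C)
    return best                               # (objective value, alpha, completion times)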
As a warm-up exercise for the kind of analysis we use, we start by proving a bound of 2 on
the expected worst-case performance ratio of uniformly generated (α_j)-schedules in the following
theorem. This result will then be improved by using more intricate probability distributions and
by taking advantage of additional insights into the structure of the LP schedule.
Theorem 3.4. Let the random variables α_j be pairwise independently and uniformly drawn
from (0, 1]. Then, the expected value of the resulting (α_j)-schedule is within a factor 2 of the
optimal LP value Z_D = Z_R.
Proof. Remember that the optimal LP value is given by Σ_{j∈N} w_j ( M^LP_j + p_j/2 ). To get the claimed
result, we prove that E_U[ C^{(α_j)}_j ] ≤ 2 ( M^LP_j + p_j/2 ) for every job j; here, E_U[·] denotes the
expectation of a function F of the random variable α when the latter is uniformly distributed.
The overall performance follows from this job-by-job bound by linearity of expectations.
Consider an arbitrary, but fixed job j ∈ N. To analyze the expected completion time of j,
we first keep α_j fixed, and consider the conditional expectation E_U[ C^{(α_j)}_j | α_j ]. Since the random
variables α_j and α_k are independent for each k ≠ j, Corollary 3.2 and equation (8) yield
  E_U[ C^{(α_j)}_j | α_j ] ≤ t_j(α_j) + p_j + Σ_{k≠j} η_k(α_j) p_k ≤ 2 t_j(α_j) + p_j.
To obtain the unconditional expectation E_U[ C^{(α_j)}_j ], we integrate over all possible choices of α_j:
  E_U[ C^{(α_j)}_j ] ≤ ∫_0^1 ( 2 t_j(α) + p_j ) dα = 2 M^LP_j + p_j = 2 ( M^LP_j + p_j/2 );
the last equation follows from (7).
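The uniform variant of Theorem 3.4 is easy to try out with the sketches given earlier; the instance data below is made up purely for illustration.

import random

# A hypothetical 4-job instance, one (release date, processing time, weight) triple per job.
example_jobs = [(0, 4, 3.0), (2, 3, 6.0), (3, 2, 5.0), (6, 1, 1.0)]
ivs = lp_schedule(example_jobs)
alphas = [1.0 - random.random() for _ in example_jobs]          # uniform on (0, 1]
C = alpha_schedule(example_jobs, alpha_points(ivs, example_jobs, alphas))
print(sum(w * C[j] for j, (_, _, w) in enumerate(example_jobs)))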
We turn now to deriving improved results. We start with an analysis of the structure of
the LP schedule. Consider any job j, and assume that, in the LP schedule, j is preempted at
time s and its processing resumes at time t > s. Then all the jobs which are processed between
s and t have a smaller index; as a result, these jobs will be completely processed between times
s and t. Thus, in the LP schedule, between the start time and the completion time of any job j,
the machine is constantly busy, alternating between the processing of portions of j and the
complete processing of groups of jobs with smaller index. Conversely, any job that is preempted
at the release date of a job j will have to wait at least until job j is complete before its processing
can be resumed.
We capture this structure by partitioning, for a fixed job j, the set of jobs N \ {j} into two
subsets N_1 and N_2. Let N_2 denote the set of all jobs that are processed between the start and
completion of job j. All remaining jobs are put into subset N_1. Notice that the function η_k is
constant for jobs k ∈ N_1; to simplify notation we write η_k := η_k(α_j) for those jobs. For k ∈ N_2,
let μ_k denote the fraction of job j that is processed before the start of job k; the
function η_k is then given by η_k(α_j) = 1 if α_j > μ_k, and η_k(α_j) = 0 otherwise.
We can now rewrite equation (8) as
  t_j(α_j) = t_idle + α_j p_j + Σ_{k∈N_1} η_k p_k + Σ_{k∈N_2: μ_k < α_j} p_k.          (15)
Plugging (15) into equation (7) yields
  M^LP_j = t_idle + p_j/2 + Σ_{k∈N_1} η_k p_k + Σ_{k∈N_2} (1 − μ_k) p_k,               (16)
and Corollary 3.2 can be rewritten as
  C^{(α_j)}_j ≤ t_j(α_j) + p_j + Σ_{k∈N_1: α_k ≤ η_k} (1 + α_k − η_k) p_k + Σ_{k∈N_2: α_j > μ_k} α_k p_k,   (17)
where, for k ∈ N_2, we have used the fact that α_k ≤ η_k(α_j) is equivalent to α_j > μ_k. The
expressions (15), (16), and (17) reflect the structural insights that we need for proving stronger
bounds for (α_j)-schedules and α-schedules in the sequel.
As mentioned above, the second ingredient for an improvement on the bound of 2 is a more
sophisticated probability distribution of the random variables α_j. In view of the bound on C^{(α_j)}_j
given in (17), we have to cope with two contrary phenomena: On the one hand, small values of
the α_k keep the terms of the form (1 + α_k − η_k) p_k on the right-hand side of (17) small;
on the other hand, choosing larger values decreases the number of terms in the first sum on the
right-hand side of (17). The balancing of these two effects contributes to reducing the bound
on the expected value of C^{(α_j)}_j.
3.3 Improved bounds for α-schedules
In this subsection we prove the following theorem.
Theorem 3.5. Define δ ≈ 0.8511 (the precise value is determined by the optimization of the
analysis below) and c := 1 + 1/(e^δ − 1) < 1.7451, so that (c − 1)(e^δ − 1) = 1. If α is chosen
according to the density function
  f(α) = (c − 1) e^α   if α ∈ (0, δ],   and   f(α) = 0   otherwise,
then the expected value of the resulting random α-schedule is bounded by c times the optimal LP
value Z_R = Z_D.
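Sampling from this truncated exponential density is straightforward by inversion of its distribution function; the snippet below is an illustration using the approximate constants stated above.

import math, random

DELTA = 0.8511                                   # truncation point from Theorem 3.5 (approximate)
C_CONST = 1.0 + 1.0 / (math.exp(DELTA) - 1.0)    # approx. 1.7451; makes f integrate to 1

def sample_alpha():
    """Draw alpha with density f(a) = (C_CONST - 1) * e^a on (0, DELTA].
    The CDF is F(a) = (C_CONST - 1) * (e^a - 1), so inversion gives
    alpha = ln(1 + u / (C_CONST - 1)) for u uniform on (0, 1)."""
    u = random.random()
    return math.log(1.0 + u / (C_CONST - 1.0))

Using a single value drawn by sample_alpha for all jobs in alpha_points and alpha_schedule gives the randomized α-schedule analyzed in Theorem 3.5.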
Before we prove Theorem 3.5 we state two properties of the density function f that are
crucial for the analysis of the corresponding random α-schedule.
Lemma 3.6. The function f given in Theorem 3.5 is a density function with the following
properties:
Property (i) is used to bound the delay to job j caused by jobs in N_1, which corresponds to
the first summation on the right-hand side of (17). The second summation reflects the delay to
job j caused by jobs in N_2 and will be bounded by property (ii).
Proof of Lemma 3.6. A short computation shows that (c − 1)(e^δ − 1) = 1. The function f is a density
function since
  ∫_0^δ (c − 1) e^α dα = (c − 1)(e^δ − 1) = 1.
In order to prove property (i), observe that it holds for η ∈ [0, δ]; for η ∈ (δ, 1] we therefore obtain it as well.
Property (ii) holds for μ ∈ (δ, 1] since the left-hand side is 0 in this case. For μ ∈ [0, δ] it follows by a
direct computation. This completes the proof of the lemma.
Proof of Theorem 3.5. In Lemma 3.6, both property (i) for η = 1 and property (ii) for μ = 0 yield
E_f[α] ≤ c − 1; here, E_f[·] denotes the expected value of a random variable that is distributed according to
the density f given in Theorem 3.5. Thus, using inequality (17) and Lemma 3.6 we derive that
  E_f[ C^α_j ] ≤ c ( M^LP_j + p_j/2 ),
where the final step uses the definition of N_1 and η_k together with equation (16).
Notice that any density function satisfying properties (i) and (ii) of Lemma 3.6 for some
value c' directly leads to the job-by-job bound E_f[ C^α_j ] ≤ c' ( M^LP_j + p_j/2 ) for the corresponding
random α-schedule. It is easy to see that the unit function satisfies Lemma 3.6 with c' = 2,
which establishes the following variant of Theorem 3.4.
Corollary 3.7. Let the random variable α be uniformly drawn from (0, 1]. Then, the expected
value of the resulting α-schedule is within a factor 2 of the optimal LP value Z_D = Z_R.
The use of an exponential density function is motivated by the first property in Lemma 3.6;
notice that the function α ↦ (c − 1) e^α satisfies it with equality. On the other hand, the exponential
function is truncated in order to reduce the term E_f[α] in the second property.
In fact, the truncated exponential function f in Theorem 3.5 can be shown to minimize c'; it
is therefore optimal for our analysis. In addition, there exists a class of instances for which the
ratio of the expected cost of an α-schedule, determined using this density function, to the cost
of the optimal LP value is arbitrarily close to 1.745; this shows that the preceding analysis is
essentially tight in conjunction with truncated exponential functions.
Theorem 3.5 implies that the best-α-schedule has a value of at most 1.7451 · Z_R. The following
proposition shows that the randomized algorithm that yields the α-schedule can be easily
derandomized because the sample space is small.
Proposition 3.8. There are at most n different α-schedules; they can be computed in O(n^2)
time.
Proof. As α goes from 0^+ to 1, the α-schedule changes only whenever an α-point, say of job j,
reaches a time at which job j is preempted. Thus, the total number of changes in the α-schedule
is bounded from above by the total number of preemptions. Since a preemption can occur in the
LP schedule only whenever a job is released, the total number of preemptions is at most n − 1,
and the number of α-schedules is at most n. Since each of these α-schedules can be computed
in O(n) time, the result on the running time follows.
3.4 Improved bounds for (α_j)-schedules
In this subsection, we prove the following theorem.
Theorem 3.9. There are constants γ ≈ 0.4835 (the unique solution, with 0 < γ < 1, of a
transcendental equation arising from the analysis), δ, and c < 1.6853, and an associated density
function g over (0, 1] defined in terms of these constants, with the following property. If the
α_j's are chosen pairwise independently from the probability distribution with density g,
then the expected value of the resulting random (α_j)-schedule is bounded by c times the optimal
LP value Z_D = Z_R.
The bound in Theorem 3.9 yields also a bound on the quality of the LP relaxations:
Corollary 3.10. The LP relaxations (D) and (R) deliver in O(n log n) time a lower bound
which is at least 0.5933 (≈ 1/1.6853) times the objective function value of an optimal schedule.
Following the lines of the last subsection, we state two properties of the density function g
that are crucial for the analysis of the corresponding random (α_j)-schedule.
Lemma 3.11. The function g given in Theorem 3.9 is a density function with the following
properties (i) and (ii), where E_g[·] denotes the expected value of a random variable that is
distributed according to g.
Notice the similarity of Lemma 3.11 and Lemma 3.6 of the last subsection. Again, properties
(i) and (ii) are used to bound the delay to job j caused by jobs in N_1 and N_2, respectively,
in the right-hand side of inequality (17). Property (i) for η = 1 and property (ii) for μ = 0
yield E_g[α] ≤ c − 1.
Proof of Lemma 3.11. A short computation establishes the normalizing identity for g. It thus follows from the
same arguments as in the proof of Lemma 3.6 that g is a density function and that property (i)
holds. In order to prove property (ii), we first compute the relevant truncated integral of g.
Property (ii) certainly holds for μ ∈ (δ, 1]. For μ ∈ [0, δ] it follows by a direct computation.
This completes the proof of the lemma.
Proof of Theorem 3.9. Our analysis of the expected completion time of job j in the random
(α_j)-schedule follows the line of argument developed in the proof of Theorem 3.4. First we
consider a fixed choice of α_j and bound the corresponding conditional expectation E_g[ C^{(α_j)}_j | α_j ].
In a second step we bound the unconditional expectation E_g[ C^{(α_j)}_j ] by integrating the product
g(α_j) · E_g[ C^{(α_j)}_j | α_j ] over the interval (0, 1].
For a fixed job j and a fixed value α_j, the bound in (17) and Lemma 3.11 (i) yield a bound on
E_g[ C^{(α_j)}_j | α_j ], whose last inequality follows from (15) and E_g[α] ≤ c − 1. Using property (ii) and
equation (16) then yields
  E_g[ C^{(α_j)}_j ] ≤ c ( M^LP_j + p_j/2 ).
The result follows from linearity of expectations.
While the total number of possible orderings of the jobs is n! = 2^{O(n log n)}, we show in the
following lemma that the maximum number of (α_j)-schedules is at most 2^{n−1}. We will use the
following observation. Let q_j denote the number of different pieces of job j in the LP schedule;
thus q_j represents the number of times job j is preempted plus 1. Since there are at most n − 1
preemptions, we have that Σ_{j∈N} q_j ≤ 2n − 1.
Lemma 3.12. The maximum number of (α_j)-schedules is at most 2^{n−1} and this bound can be
attained.
Proof. The number of (α_j)-schedules is given by Π_{j∈N} q_j. Note that q_1 = 1 since job 1
is never preempted in the LP schedule. Thus, Σ_{j≥2} q_j ≤ 2(n − 1). By the
arithmetic-geometric mean inequality, we have that
  Π_{j∈N} q_j = Π_{j≥2} q_j ≤ ( Σ_{j≥2} q_j / (n − 1) )^{n−1} ≤ 2^{n−1}.
Furthermore, this bound is attained if q_j = 2 for all jobs but one.
Therefore, and in contrast to the case of random α-schedules, we cannot afford to derandomize
the randomized 1.6853-approximation algorithm by enumerating all (α_j)-schedules. We
instead use the method of conditional probabilities [18].
From inequality (17) we obtain for every vector (α_j) an upper bound on the objective
function value of the corresponding (α_j)-schedule, Σ_j w_j C^{(α_j)}_j ≤ UB(α), where
UB(α) denotes the right-hand side of inequality (17), weighted by w_j and summed over all jobs j.
Taking expectations and using Theorem 3.9, we have already shown that E_g[ UB(α) ] ≤ c · Z_R,
where c < 1.6853. For each job j ∈ N let Q_j = { Q_{j1}, ..., Q_{j q_j} } denote the set of the intervals
for α_j corresponding to the q_j pieces of job j in the LP schedule. We consider the jobs one by
one in arbitrary order, say, j = 1, ..., n. Assume that, at step j of the derandomized algorithm,
we have identified intervals Q^d_1, ..., Q^d_{j−1} such that
  E_g[ UB(α) | α_1 ∈ Q^d_1, ..., α_{j−1} ∈ Q^d_{j−1} ] ≤ c · Z_R.
Using conditional expectations, the left-hand side of this inequality is
  Σ_{ℓ=1}^{q_j} Pr[ α_j ∈ Q_{jℓ} ] · E_g[ UB(α) | α_1 ∈ Q^d_1, ..., α_{j−1} ∈ Q^d_{j−1}, α_j ∈ Q_{jℓ} ].
Since these probabilities sum to one,
there exists at least one interval Q_{jℓ} ∈ Q_j such that
  E_g[ UB(α) | α_1 ∈ Q^d_1, ..., α_{j−1} ∈ Q^d_{j−1}, α_j ∈ Q_{jℓ} ] ≤ c · Z_R.        (18)
Therefore, it suffices to identify such an interval Q^d_j := Q_{jℓ} satisfying (18), and we may conclude
that the invariant is maintained.
Having determined in this way an interval Q^d_j for every job j ∈ N, note that the
(α_j)-schedule is the same for all α ∈ Q^d_1 × Q^d_2 × ... × Q^d_n. The (now deterministic) objective
function value of this (α_j)-schedule is at most c · Z_R,
as desired. For every job j, checking whether an interval Q_{jℓ} satisfies (18)
amounts to evaluating O(n) terms, each of which may be computed in constant time. Since, as
observed just before Lemma 3.12, we have a total of Σ_j q_j ≤ 2n − 1 intervals, it
follows that the derandomized algorithm runs in O(n^2) time.
Proposition 3.13. The randomized 1.6853-approximation algorithm can be derandomized; the
resulting deterministic algorithm runs in O(n^2) time and has performance guarantee 1.6853 as
well.
3.5 Constructing provably good schedules on-line
In this subsection we show that our randomized approximation results also apply in an on-line
setting. There are several different on-line paradigms that have been studied in the area
of scheduling; we refer to [30] for a survey. We consider the setting where jobs continually
arrive over time and, for each time t, we must construct the schedule until time t without any
knowledge of the jobs that will arrive afterwards. In particular, the characteristics of a job, i.e.,
its processing time and its weight, become known only at its release date.
It has already been shown in Section 2 that the LP schedule can be constructed on-line.
Unfortunately, for a given vector (α_j), the corresponding (α_j)-schedule cannot be constructed
on-line. We only learn about the position of a job k in the sequence defined by non-decreasing
α_j-points at time t_k(α_k); therefore we cannot start job k at an earlier point in time in the on-line
setting. On the other hand, however, the start time of k in the (α_j)-schedule can be earlier than
its α_k-point t_k(α_k).
Although an (α_j)-schedule cannot be constructed on-line, the above discussion reveals that
the following variant, which we call on-line-(α_j)-schedule, can be constructed on-line: For a
given vector (α_j), process the jobs as early as possible in the order of their α_j-points, with the
additional constraint that no job k may start before time t_k(α_k). See Figure 4 for an example.
We note that this idea of delaying the start of jobs until sufficient information for a good decision
is available was introduced in this setting by Phillips, Stein and Wein [21].
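In code, the on-line variant differs from alpha_schedule only in the extra lower bound on each job's start time; the sketch below is again only an illustration built on the earlier helper functions.

def online_alpha_schedule(jobs, points):
    """On-line-(alpha_j)-schedule: list scheduling in order of non-decreasing alpha-points,
    where job j additionally may not start before its own alpha-point points[j]."""
    order = sorted(range(len(jobs)), key=lambda j: (points[j], j))
    t, C = 0.0, [0.0] * len(jobs)
    for j in order:
        t = max(t, jobs[j][0], points[j]) + jobs[j][1]
        C[j] = t
    return C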
Notice that the non-preemptive schedule constructed by (α_j)-Conversion observes these constraints;
its value is therefore an upper bound on the value of the on-line-(α_j)-schedule. Our
analysis in the last subsections relies on the bound given in Corollary 3.2, which also holds for
the schedule constructed by (α_j)-Conversion by Lemma 3.1. This yields the following results.
Theorem 3.14. For any instance of the scheduling problem 1| r_j | Σ w_j C_j,
a) choosing α = 1/√2 and constructing the on-line-α-schedule yields a deterministic on-line
algorithm with competitive ratio 1 + √2 ≈ 2.4143 and running time O(n log n);
b) choosing the α_j's randomly and pairwise independently from (0, 1] according to the density
function g of Theorem 3.9 and constructing the on-line-(α_j)-schedule yields a randomized
on-line algorithm with competitive ratio 1.6853 and running time O(n log n).
The competitive ratio 1.6853 in Theorem 3.14 beats the deterministic on-line lower bound 2
for the unit-weight problem 1| r_j | Σ C_j [15, 36]. For the same problem, Stougie and Vestjens
proved the lower bound e/(e−1) for randomized on-line algorithms [37, 39].
Figure 4: The on-line-(α_j)-schedule for the previously considered instance and α_j-points. The LP schedule
is shown above for comparison.
3.6 Bad instances for the LP relaxations
In this subsection, we describe a family of instances for which the ratio between the optimal
value of the 1| r_j | Σ w_j C_j problem and the lower bounds Z_R and Z_D is arbitrarily close to
e/(e−1). These instances I_n have n ≥ 2 jobs, as follows: one large job, denoted job n, and n − 1
small jobs, denoted 1, ..., n − 1. The large job has processing time p_n, weight w_n, and release
date r_n = 0. Each of the n − 1 small jobs j has zero processing time, weight w_j, and a positive
release date r_j.
Throughout the paper, we have assumed that processing times are non-zero. In order to
satisfy this assumption, we could impose a processing time of 1/k for all small jobs, multiply
all processing times and release dates by k to make the data integral, and then let k tend to
infinity. For simplicity, however, we just let the processing time of all small jobs be 0.
The LP schedule has job n start at time 0, preempted by each of the small jobs at its release
date; hence the mean busy time of each small job equals its release date, while M^LP_n is determined
by the resulting interrupted processing of job n. Its objective function value is Z_R.
Notice that the completion time of each job j in this preemptive schedule is in fact equal
to M^LP_j + p_j/2, so that the actual value of the preemptive schedule is equal to Z_R.
Now consider an optimal non-preemptive schedule C and let k be the number of small jobs that are
processed before job n. It is then optimal to process all these
small jobs at their release dates, and to start processing job n at date r_k, just after
job k. It is also optimal to process all remaining jobs k + 1, ..., n − 1 just after
job n. Let C^k denote the resulting schedule, that is, C^k_j = r_j for j ≤ k, and C^k_j equals the
completion time of job n otherwise. Comparing the objective function values of the schedules C^k
shows that the optimal schedule is C^{n−1}. As n grows large, the LP
objective function value approaches e − 1 while the optimal non-preemptive cost approaches e.
Even though polynomial-time approximation schemes have now been discovered for the problem
1| r_j | Σ w_j C_j [1], the algorithms we have developed, or variants of them, are likely to be superior
in practice. The experimental studies of Savelsbergh et al. [25] and Uma and Wein [38] indicate
that LP-based relaxations and scheduling in order of α_j-points are powerful tools for a variety
of scheduling problems.
Several intriguing questions remain open. Regarding the quality of linear programming
relaxations, it would be interesting to close the gap between the upper (1.6853) and lower
(1.5819) bound on the quality of the relaxations considered in this paper. We should point out
that the situation for the strongly NP-hard [16] preemptive problem 1| r_j, pmtn | Σ w_j C_j is similar. It is
shown in [29] that the completion time relaxation is in the worst case at least a factor of 8/7 and
at most a factor of 4/3 off the optimum; the latter bound is achieved by scheduling preemptively
by LP-based random α-points. Chekuri et al. [6] prove that the optimal non-preemptive value
is at most e/(e−1) times the optimal preemptive value; our example in Section 3.6 shows that
this bound is tight.
Dyer and Wolsey [9] propose also a (non-preemptive) time-indexed relaxation which is
stronger than the preemptive version studied here. This relaxation involves variables for each job
and each time representing whether this job is being completed (rather than simply processed)
at that time. This relaxation is at least as strong as the preemptive version, but its worst-case
ratio is not known to be strictly better.
For randomized on-line algorithms, there is also a gap between the known upper and lower
bounds on the competitive ratio that are given at the end of Section 3.5. For deterministic
on-line algorithms, the 2-competitive algorithm of Anderson and Potts [2] is optimal.
Acknowledgements
The research of the first author was performed partly when the author was at C.O.R.E., Louvain-la-Neuve,
Belgium, and supported in part by NSF contract 9623859-CCR. The research of the
second author was supported in part by a research grant from NSERC (the Natural Sciences
and Engineering Research Council of Canada) and by the UNI.TU.RIM. S.p.a. (Società per l'Università
nel riminese), whose support is gratefully acknowledged. The research of the third author was
performed partly when he was with the Department of Mathematics, Technische Universität
Berlin, Germany. The fourth author was supported in part by DONET within the frame of
the TMR Programme (contract number ERB FMRX-CT98-0202) while staying at C.O.R.E.,
Louvain-la-Neuve, Belgium, for the academic year 1998/99. The fifth author was supported by
a research fellowship from the Max-Planck Institute for Computer Science, Saarbrücken, Germany.
We are also grateful to an anonymous referee whose comments helped to improve the presentation
of this paper.
--R
Introduction to Algorithms
Rinnooy Kan
Rinnooy Kan
Rinnooy Kan and P.
Randomized Algorithms
", Mathematical Programming, 82, 199-223 (1998). An extended abstract appeared under the title \Scheduling jobs that arrive over time"
" in R. Burkard and
cited as personal communication in
--TR
--CTR
Jairo R. Montoya-Torres, Competitive Analysis of a Better On-line Algorithm to Minimize Total Completion Time on a Single-machine, Journal of Global Optimization, v.27 n.1, p.97-103, September
Leah Epstein , Rob van Stee, Lower bounds for on-line single-machine scheduling, Theoretical Computer Science, v.299 n.1-3, p.439-450,
R. N. Uma , Joel Wein , David P. Williamson, On the relationship between combinatorial and LP-based lower bounds for NP-hard scheduling problems, Theoretical Computer Science, v.361 n.2, p.241-256, 1 September 2006
Martin W. P. Savelsbergh , R. N. Uma , Joel Wein, An Experimental Study of LP-Based Approximation Algorithms for Scheduling Problems, INFORMS Journal on Computing, v.17 n.1, p.123-136, Winter 2005
Martin Skutella, Convex quadratic and semidefinite programming relaxations in scheduling, Journal of the ACM (JACM), v.48 n.2, p.206-242, March 2001
F. Afrati , I. Milis, Designing PTASs for MIN-SUM scheduling problems, Discrete Applied Mathematics, v.154 n.4, p.622-639, 15 March 2006
Rolf H. Mhring , Andreas S. Schulz , Frederik Stork , Marc Uetz, Solving Project Scheduling Problems by Minimum Cut Computations, Management Science, v.49 n.3, p.330-350, March | on-line algorithm;LP relaxation;approximation algorithm;scheduling |
588006 | Binary Clutters, Connectivity, and a Conjecture of Seymour. | A binary clutter is the family of odd circuits of a binary matroid, that is, the family of circuits that intersect with odd cardinality a fixed given subset of elements. Let A denote the 0,1 matrix whose rows are the characteristic vectors of the odd circuits. A binary clutter is ideal if the polyhedron $\{ x \geq {\bf 0}: \; Ax \geq {\bf 1} \}$ is integral. Examples of ideal binary clutters are st-paths, st-cuts, T-joins or T-cuts in graphs, and odd circuits in weakly bipartite graphs. In 1977, Seymour [J. Combin. Theory Ser. B, 22 (1977), pp. 289--295] conjectured that a binary clutter is ideal if and only if it does not contain ${\cal{L}}_{F_7}$, ${\cal{O}}_{K_5}$, or $b({\cal{O}}_{K_5})$ as a minor. In this paper, we show that a binary clutter is ideal if it does not contain five specified minors, namely the three above minors plus two others. This generalizes Guenin's characterization of weakly bipartite graphs [J. Combin. Theory Ser., 83 (2001), pp. 112--168], as well as the theorem of Edmonds and Johnson [ Math. Programming, 5 (1973), pp. 88--124] on T-joins and T-cuts. | INTRODUCTION
A clutter H is a finite family of sets, over some finite ground set E(H), with the property that no set of
H contains, or is equal to, another set of H. A clutter is said to be ideal if the polyhedron
{x ∈ ℝ_+^{|E(H)|} : ∑_{i∈S} x_i ≥ 1 for all S ∈ H} is an integral polyhedron, that is, all its extreme points have 0,1 coordinates. A
clutter H is trivial if H = {} or H = {∅}. Given a nontrivial clutter H, we write A(H) for the 0,1 matrix whose
columns are indexed by E(H) and whose rows are the characteristic vectors of the sets S ∈ H. With this
notation, a nontrivial clutter H is ideal if and only if {x ≥ 0 : A(H)x ≥ 1} is an integral polyhedron.
Given a clutter H, a set T ⊆ E(H) is a transversal of H if T intersects all the members of H. The clutter
b(H), called the blocker of H, is defined as follows: E(b(H)) = E(H) and b(H) is the set of inclusion-wise
minimal transversals of H. It is well known that b(b(H)) = H. Hence we say that H, b(H) form a
blocking pair of clutters. Lehman [14] showed that, if a clutter is ideal, then so is its blocker. A clutter is said
to be binary if, for any S1, S2, S3 ∈ H, the symmetric difference S1 △ S2 △ S3 contains, or is equal to, a
set of H.
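As a concrete illustration of the blocking operation (our own sketch, not part of the paper), the following brute-force routine computes b(H) as the inclusion-wise minimal transversals of a small clutter and confirms b(b(H)) = H; the function name blocker is ours.

```python
from itertools import combinations

def blocker(H, ground):
    """Inclusion-wise minimal transversals of the clutter H over 'ground'."""
    transversals = []
    for k in range(1, len(ground) + 1):
        for T in combinations(sorted(ground), k):
            T = frozenset(T)
            if all(T & S for S in H):
                # keep T only if it contains no smaller transversal found earlier
                if not any(U <= T for U in transversals):
                    transversals.append(T)
    return set(transversals)

H = {frozenset(S) for S in [{1, 2}, {2, 3}, {1, 3}]}   # toy clutter
ground = {1, 2, 3}
bH = blocker(H, ground)
print(bH)                              # here b(H) = {{1,2},{1,3},{2,3}} = H
print(blocker(bH, ground) == H)        # True: b(b(H)) = H
```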
Given a clutter H and i ∈ E(H), the contraction H/i and deletion H \ i are clutters defined as follows:
E(H/i) = E(H \ i) = E(H) − {i}; the family H/i is the set of inclusion-wise minimal members of
{S − {i} : S ∈ H}, and H \ i = {S : S ∈ H, i ∉ S}. Contractions and deletions can be performed sequentially,
and the result does not depend on the order. A clutter obtained from H by a set of deletions J_d and a set of
contractions J_c (where J_c ∩ J_d = ∅) is called a minor of H and is denoted by H \ J_d / J_c. It is a proper minor
if J_c ∪ J_d ≠ ∅. A clutter is said to be minimally nonideal (mni) if it is not ideal but all its proper minors are
ideal.
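The contraction and deletion operations are easy to mirror in code; the sketch below (ours, with hypothetical helper names minimal, contract and delete) follows the definitions above literally.

```python
def minimal(sets):
    """Inclusion-wise minimal members of a family of sets."""
    sets = [frozenset(S) for S in sets]
    return {S for S in sets if not any(T < S for T in sets)}

def contract(H, i):
    """H/i: minimal members of { S - {i} : S in H }."""
    return minimal(S - {i} for S in H)

def delete(H, i):
    """H \\ i: members of H avoiding i."""
    return {frozenset(S) for S in H if i not in S}

H = {frozenset(S) for S in [{1, 2}, {1, 3}, {2, 3, 4}]}
print(contract(H, 1))   # {{2}, {3}}: the set {2,3,4} is dropped, it contains {3}
print(delete(H, 1))     # {{2, 3, 4}}
```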
Date: March 2000, revised December 2001.
Key words and phrases. Ideal clutter, signed matroid, multicommodity flow, weakly bipartite graph, T -cut, Seymour's conjecture.
Classification: 90C10, 90C27, 52B40.
This work was supported in part by NSF grants DMI-0098427, DMI-9802773, DMS-9509581, ONR grant N00014-9710196, and DMS 96-32032.
The clutter OK5 is defined as follows: E(OK5) is the set of 10 edges of the complete graph K5 and OK5 is
the set of odd circuits of K5 (the triangles and the circuits of length 5). The 10 constraints corresponding to the
triangles define a fractional extreme point (1/3, ..., 1/3) of the associated polyhedron {x ≥ 0 : A(OK5)x ≥ 1}.
Thus OK5 is not ideal and neither is its blocker. The clutter LF7 is the family of circuits of length three
of the Fano matroid (or, equivalently, the family of lines of the Fano plane), i.e. E(LF7) = {1, ..., 7} and
LF7 = {{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}}.
The fractional point (1/3, ..., 1/3) is an extreme point of the associated polyhedron, hence LF7 is not ideal.
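As a sanity check (a sketch of ours, using the labelling of the Fano lines given above), one can verify that the all-1/3 vector satisfies every covering constraint of LF7 with equality and has value 7/3, while every 0,1 transversal has at least 3 elements; this exhibits the fractional behaviour showing that LF7 is not ideal.

```python
from fractions import Fraction
from itertools import combinations

lines = [{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}]
third = Fraction(1, 3)

# The fractional point x = (1/3,...,1/3) satisfies every constraint with equality.
assert all(sum(third for i in L) == 1 for L in lines)
print("objective value of (1/3,...,1/3):", 7 * third)         # 7/3

# Every 0,1 point of the polyhedron (i.e. every transversal) costs at least 3.
tau = min(k for k in range(1, 8)
          for T in combinations(range(1, 8), k)
          if all(set(T) & L for L in lines))
print("minimum transversal size:", tau)                        # 3 > 7/3
```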
The blocker of LF7 is LF7 itself. The following excluded minor characterization is predicted.
Seymour's Conjecture [Seymour [23] p. 200, [26] (9.2), (11.2)]
A binary clutter is ideal if and only if it has no LF7 , no OK5 , and no b(OK5 ) minor.
Consider a clutter H and an arbitrary element t ∉ E(H). We write H+ for the clutter with E(H+) = E(H) ∪ {t}
and H+ = {S ∪ {t} : S ∈ H}. The clutter Q6 is defined as follows: E(Q6) is the set of edges
of the complete graph K4 and Q6 is the set of triangles of K4. The clutter Q7 is defined by a 0,1 matrix
A(Q7) with seven columns; note that the first six columns of A(Q7) form the matrix A(b(Q6)).
The main result of this paper is that Seymour's Conjecture holds for the class of clutters that do not have
Q6+ and Q7 minors.
Theorem 1.1. A binary clutter is ideal if it does not have LF7, OK5, b(OK5), Q6+ or Q7 as a minor.
Since the blocker of an ideal binary clutter is also ideal, we can restate Theorem 1.1 as follows.
Corollary 1.2. A binary clutter is ideal if it does not have LF7, OK5, b(OK5), b(Q7) or b(Q6+) as a minor.
We say that H is the clutter of odd circuits of a graph G if E(H) is the set of edges of G and H the set of
odd circuits of G. A graph is said to be weakly bipartite if the clutter of its odd circuits is ideal. This class of
graphs has a nice excluded minor characterization.
Theorem 1.3 (Guenin [10]). A graph is weakly bipartite if and only if its clutter of odd circuits has no OK5
minor.
The class of clutters of odd circuits is closed under minor taking (Remark 8.2). Moreover, one can easily
check that OK5 is the only clutter of odd circuits among the five excluded minors of Theorem 1.1 (see
Remark 8.3 and [20]). It follows that Theorem 1.1 implies Theorem 1.3. It does not provide a new proof of
Theorem 1.3 however, as we shall use Theorem 1.3 to prove Theorem 1.1.
Consider a graph G and a subset T of its vertices of even cardinality. A T -join is an inclusion-wise
minimal set of edges J such that T is the set of vertices of odd degree of the edge-induced subgraph G[J ].
A T-cut is an inclusion-wise minimal set of edges δ(U) := {uv ∈ E(G) : u ∈ U, v ∉ U}, where U is a set of
vertices of G that satisfies |U ∩ T| odd. T-joins and T-cuts generalize many interesting special cases. If
T = {s, t}, then the T-joins (resp. T-cuts) are the st-paths (resp. inclusion-wise minimal st-cuts) of G. If
T = V(G), then the T-joins of size |V|/2 are the perfect matchings of G. The case where T is identical to the
set of odd-degree vertices of G is known as the Chinese postman problem [6, 12]. The families of T-joins
and T-cuts form a blocking pair of clutters.
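To make the T-cut definition concrete, the following brute-force sketch (ours, not from the paper) lists the T-cuts of a small graph by scanning vertex sets U with |U ∩ T| odd and keeping inclusion-wise minimal edge sets δ(U); the function name t_cuts is ours.

```python
from itertools import combinations

def t_cuts(vertices, edges, T):
    """All T-cuts of the graph: inclusion-wise minimal sets delta(U)
    over vertex sets U with |U ∩ T| odd."""
    cuts = []
    for k in range(1, len(vertices)):
        for U in combinations(sorted(vertices), k):
            U = set(U)
            if len(U & T) % 2 == 1:
                d = frozenset(e for e in edges if len(set(e) & U) == 1)
                cuts.append(d)
    return {d for d in cuts if d and not any(c < d for c in cuts if c)}

V = {1, 2, 3, 4}
E = {(1, 2), (2, 3), (3, 4), (1, 4)}          # a 4-cycle
T = {1, 3}
for cut in sorted(t_cuts(V, E, T), key=sorted):
    print(sorted(cut))                         # four T-cuts, each of size 2
```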
Theorem 1.4 (Edmonds and Johnson [6]). The clutters of T -cuts and T -joins are ideal.
The class of clutters of T -cuts is closed under minor taking (Remark 8.2). Moreover, it is not hard to check
that none of the five excluded minors of Theorem 1.1 are clutters of T -cuts (see Remark 8.3 and [20]). Thus
Theorem 1.1 implies that the clutter of T -cuts is ideal, and thus that its blocker, the clutter of T -joins, is ideal.
Hence Theorem 1.1 implies Theorem 1.4. However, we shall also rely on this result to prove Theorem 1.1.
The paper is organized as follows. Section 2 considers representations of binary clutters in terms of
signed matroids and matroid ports. Section 3 reviews the notions of lifts and sources, which are families of
binary clutters associated to a given binary matroid [20, 29]. Connections between multicommodity flows
and ideal clutters are discussed in Section 4. The material presented in Sections 2, 3 and 4 is not all new. We
present it here for the sake of completeness and in order to have a unified framework for the remainder of the
paper. In Sections 5, 6, 7 we show that minimally nonideal clutters do not have small separations. The proof
of Theorem 1.1 is given in Section 8. Finally, Section 9 presents an intriguing example of an ideal binary
clutter.
2. BINARY MATROIDS AND BINARY CLUTTERS
We assume that the reader is familiar with the basics of matroid theory. For an introduction and all
undefined terms, see for instance Oxley [21]. Given a matroid M, the set of its elements is denoted by
E(M) and the set of its circuits by Ω(M). The dual of M is written M*. The deletion minor M \ e of M
is the matroid defined as follows: E(M \ e) = E(M) − {e} and Ω(M \ e) = {C : C ∈ Ω(M), e ∉ C}.
The contraction minor M/e of M is defined as (M* \ e)*. Contractions and deletions can be performed
sequentially, and the result does not depend on the order. A matroid obtained from M by a set of deletions
J_d and a set of contractions J_c is a minor of M and is denoted by M \ J_d / J_c.
A matroid M is binary if there exists a 0,1 matrix A with column set E(M) such that the independent
sets of M correspond to independent sets of columns of A over the two-element field. We say that A is a
representation of M. Equivalently, a 0,1 matrix A is a representation of a binary matroid M if the rows of A
span the circuit space of M*. If C1 and C2 are two cycles of a binary matroid M, then C1 △ C2 is also a cycle
of M. In particular this implies that every cycle of M can be partitioned into circuits. Let M be a binary
matroid and Σ ⊆ E(M). The pair (M, Σ) is called a signed matroid, and Σ is called the signature of M. We
say that a circuit C of M is odd (resp. even) if |C ∩ Σ| is odd (resp. even).
The results in this section are fairly straightforward and have appeared explicitly or implicitly in the
literature [8, 13, 20, 23]. We include some of the proofs for the sake of completeness.
Proposition 2.1 (Lehman [13]). The following statements are equivalent for a clutter H: (i) H is binary; (ii) for
every S1, ..., Sk ∈ H with k odd, the set S1 △ ⋯ △ Sk contains, or is equal to, an element of H; (iii) for every
S ∈ H and T ∈ b(H), |S ∩ T| is odd.
Proposition 2.2. The odd circuits of a signed matroid (M, Σ) form a binary clutter.
Proof. Let C1, C2, C3 be odd circuits of (M, Σ). Then L := C1 △ C2 △ C3 is a cycle of M. Since
each of C1, C2, C3 intersects Σ with odd parity, so does L. Since M is binary, L can be partitioned into a
family of circuits. One of these circuits must be odd since |L ∩ Σ| is odd. The result now follows from the
definition of binary clutters (see Section 1).
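In the graphic case Proposition 2.2 can be checked mechanically; the sketch below (ours) enumerates the odd circuits of a small signed graph by brute force and verifies that the symmetric difference of any three of them contains an odd circuit.

```python
from itertools import combinations, product

V = [1, 2, 3, 4]
E = [(1,2), (1,3), (1,4), (2,3), (2,4), (3,4)]        # K4
Sigma = {(1,2), (2,3), (3,4)}                          # signature: "odd" edges

def even_degrees(F):
    return all(sum(v in e for e in F) % 2 == 0 for v in V)

cycles = [frozenset(F) for k in range(1, len(E)+1)
          for F in combinations(E, k) if even_degrees(F)]
circuits = [C for C in cycles if not any(D < C for D in cycles)]
odd = [C for C in circuits if len(C & Sigma) % 2 == 1]

# Proposition 2.2 in the graphic case: C1 ^ C2 ^ C3 contains an odd circuit.
for C1, C2, C3 in product(odd, repeat=3):
    L = C1 ^ C2 ^ C3
    assert any(C <= L for C in odd)
print(len(odd), "odd circuits; binary property verified")
```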
Proposition 2.3. Let F be a clutter such that ∅ ∉ F. Consider the following properties: (i) for all C1, C2 ∈
F and e ∈ C1 ∩ C2 there exists C3 ∈ F such that C3 ⊆ (C1 ∪ C2) − {e}; (ii) for all distinct C1, C2 ∈ F there exists C3 ∈ F such
that C3 ⊆ C1 △ C2. If property (i) holds then F is the set of circuits of a matroid. If property (ii) holds then
F is the set of circuits of a binary matroid.
Property (i) is known as the circuit elimination axiom. Circuits of matroids satisfy this property. Note that
property (ii) implies property (i). Both results are standard, see Oxley [21].
Proposition 2.4. Let H be a binary clutter such that ∅ ∉ H. Let F be the clutter consisting of all inclusion-
wise minimal, non-empty sets obtained by taking the symmetric difference of an arbitrary number of sets of
H. Then H ⊆ F and F is the set of circuits of a binary matroid.
Proof. By definition, F satisfies property (ii) in Proposition 2.3. Thus F is the set of circuits of a binary
matroid M. Suppose for a contradiction there is S ∈ H − F. Then there exists S′ ∈ F such that S′ ⊊ S.
Thus S′ is the symmetric difference of a family of, say t, sets of H. If t is odd then Proposition 2.1 implies
that S′ contains a set of H. If t is even then Proposition 2.1 implies that S′ △ S contains a set of H. In both
cases S properly contains a set of H, a contradiction since H is a clutter.
Consider a binary clutter H such that ∅ ∉ H. The matroid defined in Proposition 2.4 is called the up matroid
and is denoted by u(H). Proposition 2.1 implies that every circuit of u(H) is either an element of H or the
symmetric difference of an even number of sets of H. Since H is a binary clutter, sets of b(H) intersect with
odd parity exactly the circuits of u(H) that are elements of H. Hence,
Remark 2.5. A binary clutter H such that ∅ ∉ H is the clutter of odd circuits of (u(H), Σ) where Σ ∈ b(H).
Moreover, this representation is essentially unique.
Proposition 2.6. Suppose that the clutters of odd circuits of the signed matroids (N, Σ) and (N′, Σ′) are the
same and are not trivial. If N and N′ are connected then N = N′.
To prove this, we use the following result (see Oxley [21] Theorem 4.3.2).
Theorem 2.7 (Lehman [13]). Let e be an element of a connected binary matroid M. The circuits of M not
containing e are of the form C1 △ C2, where C1 and C2 are circuits of M containing e.
We shall also need the following observation which follows directly from Proposition 2.3.
Proposition 2.8. Let (M, Σ) be a signed matroid and e an element not in E(M). Let F := {C ∪ {e} : C an odd
circuit of (M, Σ)} ∪ {C : C a circuit of M with |C ∩ Σ| even}. Then F is the set of circuits of a binary matroid.
Proof of Proposition 2.6. Let M (resp. M′) be the matroid constructed from (N, Σ) (resp. (N′, Σ′)) as in
Proposition 2.8. By construction the circuits of M and M′ using e are the same. Since N is connected and
H is not trivial, M and M′ are connected. It follows from Theorem 2.7 that M = M′, and in particular
N = N′. By the same argument and Remark 2.5, Σ and Σ′ intersect the circuits of N with the same parity.
In a binary matroid, any circuit C and cocircuit D have an even intersection. So, if D is a cocircuit, the
clutters of odd circuits of (M, Σ) and (M, Σ △ D) are the same (see Zaslavsky [28]). Let e ∈ E(M). The
deletion (M, Σ) \ e of (M, Σ) is defined as (M \ e, Σ − {e}). The contraction (M, Σ)/e of (M, Σ) is defined
as follows: if e ∉ Σ then (M, Σ)/e := (M/e, Σ); if e ∈ Σ and e is not a loop then there exists a cocircuit D
of M with e ∈ D and (M, Σ)/e := (M/e, Σ △ D). Note that if e ∈ Σ is a loop of M, then H/e is a trivial clutter.
A minor of (M, Σ) is any signed matroid which can be obtained by a sequence of deletions and contractions.
A minor of (M, Σ) obtained by a sequence of J_c contractions and J_d deletions is denoted (M, Σ)/J_c \ J_d.
Remark 2.9. Let H be the clutter of odd circuits of a signed matroid (M, Σ). If J_c does not contain an odd
circuit, then H/J_c \ J_d is the clutter of odd circuits of the signed matroid (M, Σ)/J_c \ J_d.
Let M be a binary matroid and e an element of M. The clutter Port(M, e), called a port of M, is defined
as follows: E(Port(M, e)) = E(M) − {e} and Port(M, e) = {C − {e} : C ∈ Ω(M), e ∈ C}.
Proposition 2.10. Let M be a binary matroid. Then Port(M, e) is a binary clutter.
Proof. By definition S ∈ Port(M, e) if and only if S ∪ {e} is an odd circuit of the signed matroid (M, {e}).
We may assume Port(M, e) is nontrivial, hence in particular e is not a loop of M. Therefore, there exists a
cocircuit D that contains e. Thus Port(M, e) is the clutter of odd circuits of the signed matroid (M/e, D △
{e}). Proposition 2.2 states that these odd circuits form a binary clutter.
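For intuition (our illustration, not from the paper): when M is the graphic matroid of a graph G and e = uv, the circuits through e are e together with the u–v paths of G − e, so Port(M(G), e) is the clutter of u–v paths. A brute-force sketch:

```python
from itertools import combinations

def even_degrees(F, V):
    return all(sum(v in f for f in F) % 2 == 0 for v in V)

def port_of_graphic(V, E, e):
    """Port(M(G), e): the sets S with S + {e} a circuit of the graphic matroid,
    computed as minimal nonempty even-degree edge sets through e."""
    rest = [f for f in E if f != e]
    cycles = [frozenset(F) | {e} for k in range(1, len(rest) + 1)
              for F in combinations(rest, k)
              if even_degrees(list(F) + [e], V)]
    circuits = [C for C in cycles if not any(D < C for D in cycles)]
    return {C - {e} for C in circuits}

V = [1, 2, 3, 4]
E = [(1,2), (1,3), (1,4), (2,3), (2,4), (3,4)]
for S in port_of_graphic(V, E, (1, 2)):
    print(sorted(S))      # the 1-2 paths of K4 minus the edge (1,2)
```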
Proposition 2.11. Let H be a binary clutter. Then there exists a binary matroid M with an element e ∈ E(M) −
E(H) such that Port(M, e) = H.
Proof. If H = {∅}, define M to have element e as a loop. If ∅ ∉ H, we can represent H as the set of odd
circuits of a signed matroid (N, Σ) (see Remark 2.5). Construct a binary matroid M from (N, Σ) as in
Proposition 2.8. Then Port(M, e) = H.
Proposition 2.12 (Seymour [23]). Port(M, e) and Port(M*, e) form a blocking pair.
Proof. Proposition 2.10 implies that Port(M, e) and Port(M*, e) are both binary clutters. Consider T ∈
Port(M*, e), i.e. T ∪ {e} is a circuit of M*. For all S ∈ Port(M, e), S ∪ {e} is a circuit of M. Since
T ∪ {e} and S ∪ {e} have an even intersection, |S ∩ T| is odd. Thus we proved: for all T ∈ Port(M*, e)
and S ∈ Port(M, e), |S ∩ T| is odd; in particular T is a transversal of Port(M, e). To complete the proof
it suffices to show: for all T′ ∈ b(Port(M, e)) there is T ∈ Port(M*, e) with T ⊆ T′. Since Port(M, e) is
binary, |T′ ∩ S| is odd for all S ∈ Port(M, e) (Proposition 2.1). Thus T′ ∪ {e} intersects every circuit of M using e with even parity. It follows from
Theorem 2.7 that T′ ∪ {e} is orthogonal to the space spanned by the circuits of M, i.e. T′ ∪ {e} is a cycle of
M*. It follows that there is a circuit of M* of the form T ∪ {e} where T ⊆ T′. Hence T ∈ Port(M*, e), as
required.
3. LIFTS AND SOURCES
Let N be a binary matroid. For any binary matroid M with an element e such that N = M/e, the binary
clutter Port(M, e) is called a source of N. Note that H is a source of its up matroid u(H). For any binary
matroid M with an element e such that N = M \ e, the binary clutter Port(M, e) is called a lift of N. Note
that a source or a lift can be a trivial clutter.
Proposition 3.1. Let N be a binary matroid. H is a lift of N if and only if b(H) is a source of N*.
Proof. Let H be a lift of N, i.e. there is a binary matroid M with M \ e = N and Port(M, e) = H. Then
M*/e = (M \ e)* = N* and, by Proposition 2.12, b(H) = Port(M*, e). Hence b(H) is a source of
N*. Moreover, the implications can be reversed.
It is useful to relate a description of H in terms of excluded clutter minors to a description of u(H) in
terms of excluded matroid minors.
Theorem 3.2. Let H be a binary clutter such that its up matroid u(H) is connected, and let N be a connected
binary matroid. Then u(H) does not have N as a minor if and only if H does not have H 1 or H
2 as a minor,
a source of N and H 2 is a lift of N .
To prove this we will need the following result (see Oxley [21] Proposition 4.3.6).
Theorem 3.3 (Brylawski [3], Seymour [25]). Let M be a connected matroid and N a connected minor of
M . For any i 2 E(M ) E(N ), at least one of M n i or M=i is connected and has N as a minor.
Proof of Theorem 3.2. Let M := u(H) and let 2 b(H). Remark 2.5 states that H is the clutter of odd
circuits of (M; ). Suppose first that H has a minor H 1 that is a source of N . Remark 2.9 implies that H 1
is the clutter of odd circuits of a signed minor (N 0 ; 0 ) of (M; ). Since N is connected, H 1 is nontrivial
and therefore Proposition 2.6 implies In particular N is a minor of M . Suppose now that H has a
is a lift of N . Let e be the element of E(H
implies that H +is the clutter of odd circuits of a signed minor
of (M; ). Since H 2 is a lift of N there is a connected
M 0 with element e such that ^
2 is the clutter of odd
circuits of
M 0 is a minor of M and so is
Now we prove the converse. Suppose that M has N as a minor and does not satisfy the theorem. Let H be
such a counterexample minimizing the cardinality of E(H). Clearly, N is a proper minor of M as otherwise
is a source of N . By Theorem 3.3, for every i 2 E(M ) E(N ), one of M n i and M=i is
connected and has N as a minor. Suppose M=i is connected and has N as a minor. Since i is not a loop of M ,
it follows from Remark 2.9 that H=i is nontrivial and is a signed minor (M=i; 0 ) of (M; ). Proposition 2.6
implies contradicts the choice of H minimizing the cardinality of E(H). Thus,
for every connected and has an N minor. Suppose for some
because of Remark 2.9 and Proposition 2.6 u(H n a contradiction to the
choice of H. Thus for every or equivalently, all odd circuits of (M; )
use i. As even circuits of M do not use i. We claim that E(M ) E(N
not and let j 6= i be an element of E(M ) E(N ). The set of circuits of (M; ) using j is exactly the set
of odd circuits. It follows that the elements must be in series in M . But then M n i is not connected,
a contradiction. Therefore E(M . As the circuits of (M; ) using i are
exactly the odd circuits of (M; ), it follows that column i of A(H) consists of all 1's. Thus
is a lift of N .
Next we define the binary matroids F 7 ; F
7 and R 10 . For any binary matroid N , let BN be a 0,1 matrix
whose rows span the circuit space of N (equivalently BN is a representation of the dual matroid N ). Square
identity matrices are denoted I . Observe that R
I
Given a binary matroid N , let M be a binary matroid with element e such that M=e. The circuit space
of M is spanned by the rows of a matrix of the form [BN jx], where x is a 0,1 column vector indexed by e.
Assuming M is connected, we have (up to isomorphism), the following possible columns x for each of the
three aforementioned matroids
x
x
x
Note that (1),(2) are easy and (3) can by found in [24] (p. 357). The rows of the matrix [BF7 jx b ] (resp.
span the circuit space of a matroid known as AG(3; 2) (resp. S 8 ). If [BN jx] is a matrix whose
rows span the circuits of M , then by definition of sources, P ort(M; e) is a source of N . Thus,
Remark 3.4.
F7* has a unique source, namely Q6.
F7 has three sources: b(Q7), LF7 and b(Q6)+.
R10 has six sources, including b(OK5) (obtained for one of the columns x above).
Luetolf and Margot [16] have enumerated all minimally nonideal clutters with at most 10 elements (and many
more). Using Remark 3.4, we can then readily check the following.
Proposition 3.5. Let H be the clutter of odd circuits of a signed matroid (M; ).
or H is ideal.
7 , then H is ideal.
4. MULTICOMMODITY FLOWS
In this section, we show that a binary clutter H is ideal exactly when certain multicommodity flows exist
in the matroid u(H). This equivalence will be used in Sections 6 and 7 to show that minimally nonideal
binary clutters do not have small separations. Given a set S, a function p : S → Q+ and T ⊆ S, we write
p(T) for ∑_{i∈T} p(i). Consider a signed matroid (M, F). The set of circuits of M that have exactly one
element in common with F is denoted Ω_F. Let p : E(M) → Q+ be a cost function on the elements of M.
Seymour [26] considers the following two statements about the triple (M, F, p).
(4.1) For any cocircuit D of M: p(D ∩ F) ≤ p(D − F).
(4.2) There exists a function λ : Ω_F → Q+ such that ∑{λ(C) : C ∈ Ω_F, e ∈ C} ≤ p(e) for every
e ∈ E(M) − F, and ∑{λ(C) : C ∈ Ω_F, f ∈ C} = p(f) for every f ∈ F.
We say that the cut condition holds if inequality (4.1) holds for all cocircuits D. We say that M is F-flowing
with costs p if statement (4.2) holds; the corresponding solution λ is an F-flow satisfying costs p. M is F-
flowing [26] if, for every p for which the cut condition holds, M is F-flowing with costs p. Elements in F
(resp. in E(M) − F) are called demand (resp. capacity) elements. It is helpful to illustrate the aforementioned
definitions in the case where M is a graphic matroid [9]. For a demand edge f , p(f) is the amount of flow
required between its endpoints. For a capacity edge e, p(e) is the maximum amount of flow that can be carried
by e. Then M is F -flowing with costs p when a multicommodity flow meeting all demands and satisfying all
capacity constraints exists. The cut condition requires that for every cut the demand across the cut does not
exceed its capacity. When F consists of a single edge f and when M is graphic then M is f-flowing [7].
The cut condition states p(D ∩ F) ≤ p(D − F) for every cocircuit D; adding p(F − D) to both
sides, we obtain p(F) ≤ p(D △ F). Hence,
Remark 4.1. The cut condition holds if and only if p(F) ≤ p(D △ F) for all cocircuits D.
Let H be the clutter of odd circuits of (M, F). We define:
(a) τ(H, p) := min{p · x : A(H)x ≥ 1, x ∈ {0,1}^{E(H)}}, the minimum cost of a transversal of H;
(b) ν(H, p) := max{1 · y : y ≥ 0, A(H)^T y ≤ p}.
By linear programming duality we have ν(H, p) ≤ τ(H, p). When p = 1 we
write τ(H) for τ(H, p) and ν(H) for ν(H, p).
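To make (a) and (b) concrete, here is a sketch of ours evaluating both quantities for the clutter Q6 of triangles of K4 (the odd circuits of K4 when every edge is odd) with p = 1: τ by brute force over 0,1 transversals and ν by solving the packing LP. It assumes numpy and scipy are available.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

edges = [(1,2), (1,3), (1,4), (2,3), (2,4), (3,4)]
triangles = [{(1,2),(1,3),(2,3)}, {(1,2),(1,4),(2,4)},
             {(1,3),(1,4),(3,4)}, {(2,3),(2,4),(3,4)}]
A = np.array([[1 if e in S else 0 for e in edges] for S in triangles])
p = np.ones(len(edges))

# (a) tau(H, p): cheapest 0,1 transversal, by brute force.
tau = min(k for k in range(1, len(edges) + 1)
          for T in combinations(range(len(edges)), k)
          if all(A[s, list(T)].sum() >= 1 for s in range(len(triangles))))

# (b) nu(H, p): max 1*y subject to A^T y <= p, y >= 0 (a linear program).
res = linprog(c=-np.ones(len(triangles)), A_ub=A.T, b_ub=p, bounds=(0, None))
print("tau =", tau, " nu =", round(-res.fun, 3))   # tau = 2, nu = 2.0
```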
Proposition 4.2. Let H be the clutter of odd circuits of a signed matroid (M, F) and let p : E(M) → Q+.
(i) τ(H, p) = p(F) if and only if the cut condition holds.
(ii) ν(H, p) = p(F) if and only if M is F-flowing with costs p.
(iii) If λ(C) > 0 for a solution λ to (4.2), then |C ∩ F′| = 1 for all F′ ∈ b(H) with p(F′) = p(F).
Proof. We say that a set X E(M ) is a (feasible) solution for (a) if its characteristic vector is. Consider
(i). Suppose (H; We can assume that F is an inclusion-wise minimal solution of (a) and thus
D be any cocircuit of M and consider any S 2 H. Since S is a circuit of M , jD \ Sj is even
and since H is binary, jF \Sj is odd. Thus j(D 4F ) \Sj is odd. It follows that D4F is a transversal of H.
Therefore, D 4 F is a feasible solution to (a) and we have p(F ) p(D 4 F ). Hence, by Remark 4.1, the
cut condition holds. Conversely, assume the cut condition holds and consider any set X that is feasible for
(a). We need to show p(F ) p(X). We can assume that X is inclusion-wise minimal, i.e. that X 2 b(H).
Observe that F and X intersect circuits of M with the same parity. Thus D := F 4 X is a cocycle of M .
Since the cut condition holds, by Remark 4.1, p(F ) p(D 4 F
Consider (ii). Suppose (H; it follows from linear
programming duality that F is an optimal solution to (a). Let y be an optimal solution to (b). Complementary
slackness states: if jF \ Cj > 1, then the corresponding dual variable y
F
y C , for all e 2 E(M ). Complementary slackness states: if e 2 F , then
F
Hence, choosing every CF satisfies (4.2). Conversely, suppose is a solution to (4.2).
For each e 2 F such that
C:
F
reduce the values C on the left hand side until equality
holds. Since C contains no element of F other than e, we can get equality for every e 2 F . So we may
assume
F
Now y is a feasible solution to (b) and F; y satisfy all complementary slackness conditions. Thus F and y
must be a pair of optimal solutions to (a) and (b) respectively.
Finally, consider (iii). From (ii) we know there is an optimal solution y to (b) with y C > 0. By complementary
slackness, it follows that jF \ F that are optimal solutions to (a).
The last proposition implies in particular that, if M is F -flowing with costs p, then the cut condition is
satisfied. We say that a cocircuit D is tight if the cut condition (4.1) holds with equality, or equivalently
(Remark 4.1) if p(F
Proposition 4.3. Suppose M is F -flowing with costs p and let D be a tight cocircuit. If C is a circuit with
Proof. We may assume C\D 6= ;. As CF , it follows that ffg. Moreover, C\D 6= ffg, since
M is binary. To complete the proof, it suffices to show that there is no pair of elements
Suppose for a contradiction that we have such a pair and let F As D is tight, p(F
It follows from Proposition 4.2(iii) that CF 0 . But
Corollary 4.4. Let H be the clutter of odd circuits of a signed matroid (M, F). (i) If H is ideal then M
is F-flowing with costs p, for every p : E(M) → Q+ that satisfies the cut condition. (ii) If H
is nonideal then M is not F′-flowing with costs p, for some p : E(M) → Q+ and some F′ ∈ b(H) that
minimizes p(F′).
Proof. Consider (i). Proposition 4.2(i) states τ(H, p) = p(F). Because H is ideal, ν(H, p) = τ(H, p) = p(F).
This implies by Proposition 4.2(ii) that M is F-flowing with costs p. Consider (ii). If H
is nonideal then ν(H, p) < τ(H, p) for some p : E(M) → Q+. Let F′ ∈ b(H) be an optimal solution to (a).
Then ν(H, p) < p(F′), and Proposition 4.2(ii) states M is not F′-flowing with costs p.
We leave the next result as an easy exercise.
Corollary 4.5. A binary clutter H is ideal if and only if u(H) is F -flowing for every F 2 b(H).
Consider the case where H = OK5. Let F be a set of edges of K5 such that E(K5) − F induces a K2,3.
Then F ∈ b(OK5), and M(K5) (the graphic matroid of K5) is not F-flowing.
5. CONNECTIVITY, PRELIMINARIES
Let (E1, E2) be a partition of the elements E of a matroid M and let r be the rank function. M
is said to have a k-separation (E1, E2) if |E1| ≥ k, |E2| ≥ k, and r(E1) + r(E2) ≤ r(E) + k − 1. If |E1| ≥ k + 1
and |E2| ≥ k + 1, then the separation is said to be strict. A matroid M has a k-separation only if its dual M* does
(Oxley [21], 4.2.7). A matroid is k-connected if it has no (k − 1)-separation and is internally k-connected if
it has no strict (k − 1)-separation. A 2-connected matroid is simply said to be connected. We now follow
Seymour [24] when presenting k-sums. Let M1, M2 be binary matroids whose element sets E(M1), E(M2)
may intersect. We define M1 △ M2 to be the binary matroid on E(M1) △ E(M2) whose cycles are all
the subsets of E(M1) △ E(M2) of the form C1 △ C2, where Ci is a cycle of Mi, i = 1, 2. The following
special cases will be of interest to us:
Definition 5.1.
E(M1) ∩ E(M2) = ∅ and E(M1), E(M2) ≠ ∅. Then M1 △ M2 is the 1-sum of M1 and M2.
E(M1) ∩ E(M2) = {f}, where f is not a loop of M1 or M2. Then M1 △ M2 is the 2-sum of M1 and M2.
E(M1) ∩ E(M2) = C0, where C0 is a circuit of cardinality 3 of both M1 and M2. Then M1 △ M2 is the
3-sum of M1 and M2.
We denote the k-sum of M1 and M2 as M1 ⊕k M2. The elements in E(M1) ∩ E(M2) are called the markers
of Mi. As an example, for k = 2, 3, the k-sum of two graphic matroids corresponds to taking two graphs,
choosing a k-clique from each, identifying the vertices in the clique pairwise and deleting the edges in the
clique. The markers are the edges in the clique. We have the following connection between k-separations
and k-sums.
Theorem 5.2 (Seymour [24]). Let M be a k-connected binary matroid and k ∈ {1, 2, 3}. Then M has a
k-separation if and only if it can be expressed as M1 ⊕k M2. Moreover, M1 (resp. M2) is a minor of M
obtained by contracting and deleting elements in E(M2) − E(M1) (resp. E(M1) − E(M2)).
We say that a binary clutter H has a (strict) k-separation if u(H) does.
Remark 5.3. H has a 1-separation if and only if A(H) is a block diagonal matrix. Moreover, H is ideal if and
only if the minors corresponding to each of the blocks are ideal.
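Remark 5.3 is easy to test mechanically: the blocks of A(H) are the connected components of the ground set under the relation "two elements appear together in some member". A small union-find sketch (ours, with the hypothetical helper blocks):

```python
def blocks(H, ground):
    """Partition of the ground set into the blocks of A(H):
    connected components of 'i ~ j iff some member contains both'."""
    parent = {i: i for i in ground}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for S in H:
        S = sorted(S)
        for j in S[1:]:
            parent[find(j)] = find(S[0])
    comps = {}
    for i in ground:
        comps.setdefault(find(i), set()).add(i)
    return list(comps.values())

H = [{1, 2}, {2, 3}, {4, 5}]
print(blocks(H, {1, 2, 3, 4, 5}))   # two blocks, {1,2,3} and {4,5}: a 1-separation
```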
Recall (Proposition 2.11) that every binary clutter H can be expressed as P ort(M; e) for some binary
matroid M with element e. So we could define the connectivity of H to be the connectivity of the associated
matroid M. The two notions of connectivity are not equivalent, as the clutter LF7 illustrates. The matroid
AG(3,2) has a strict 3-separation while F7 does not, but Port(AG(3,2), e) = LF7 and LF7 is the clutter of
odd circuits of the signed matroid (F7, E(F7)).
Chopra [4] gives composition operations for matroid ports and sufficient conditions for maintaining ide-
alness. This generalizes earlier results of Bixby [1]. Other compositions for ideal (but not necessarily binary
clutters) can be found in [19, 17, 18]. Novick and Sebő [20] give an outline of how to show that mni binary
clutters do not have 2-separations; the argument is similar to that used by Seymour [26](7.1) to show that
k-cycling matroids are closed under 2-sums. We will follow the same strategy (see Section 6). Proving that
mni binary clutters do not have 3-separations is more complicated and requires a different approach (see
Section 7). In closing observe that none of LF7 ; OK5 and b(OK5 ) have strict 4-separations. So if Seymour's
Conjecture holds, then mni binary clutters are internally 5-connected.
6. 2-SEPARATIONS
Let (M;F ) be a signed matroid with a 2-separation
We say that is a part of (M;F ) if it is a signed minor
of (M; F ). It is not hard to see that at most two choices of F i can give distinct signed matroids
Therefore (M;F ) can have at most four distinct parts. In light of Remark 2.5 we can identify binary clutters
with signed matroids. The main result of this section is the following.
Proposition 6.1. A binary clutter with a 2-separation is ideal if and only if all its parts are ideal.
To prove this, we shall need the following results.
Proposition 6.2 (Seymour [24]). If connected if and only if M 1 and M 2 are
connected.
Proposition 6.3 (Seymour [24]). Let M be a binary matroid with a 2-separation
two circuits of M . If C 1
Proposition 6.4 (Seymour [24]). Let choose any circuit C of M such that C\E 1
and C \E 2 6= ;. Let j. For any f 2 C
Proof of Proposition 6.1. Let H be a binary clutter with a 2-separation,
without loss of generality that M is connected. Remark 2.5 states that H is the clutter of odd circuits of
(M; F ). If H is ideal, then so are all its parts by Remark 2.9. Conversely, suppose all parts of (M; F ) are
ideal. Consider any p : E(M Because of Corollary 4.4(ii),
it suffices to show that M is F -flowing with costs p. Observe that the cut condition is satisfied because of
Proposition 4.2(i).
Since M has a 2-separation, it can be expressed as M2 M 2 . Throughout this proof,
denote arbitrary distinct elements of f1; 2g. Define F be the marker of M i . Since f i is
not a loop, there is a cocircuit D i of M i using f i . Let i denote the smallest value of
where D i is any cocircuit of M i using f i . In what follows, we let D i denote some cocircuit where the
minimum is attained. Expression (*) gives the difference between the sum of the capacity elements and the
sum of the demand elements in D i , excluding the marker f i . Thus
is a cocycle of M and the cut condition is satisfied, we must have:
Claim 1. If i > 0, then there is an even circuit of (M uses marker f i .
Proof of Claim: Suppose for a contradiction that all circuits C of M i that use f i , satisfy jC \ F i j odd. Then
intersects all these circuits with even parity. By hypothesis M is connected and, because of
Proposition 6.2, so is M i . We know from Theorem 2.7 that all circuits that do not use the marker f i are the
symmetric difference of two circuits that do use f i . It follows that D intersects all circuits of M i with even
parity. Thus D is a cocycle of M i . But expression (*) is nonpositive for cocycle D. D can be partitioned
into cocircuits. Because the cut condition holds, expression (*) is nonpositive for the cocircuit that uses f i , a
contradiction as i > 0. 3
2. If i < 0, then there is an odd circuit of (M uses marker f i .
Proof of Claim: Suppose, for a contradiction, that all circuits C of M i that use f i , satisfy jC \ F i j even. By
the same argument as in Claim 1, we know that in fact so do all circuits of M i . This implies that F and F j
intersect each circuit of M with the same parity. As F is inclusion-wise minimal must have
;. But this implies that expression (*) is non negative, a contradiction. 3
a part of (M; F ).
Proof of Claim: From Claim 2 (resp. Claim 1), there is an odd (resp. even) circuit C using f j of (M
Proposition 6.3 implies that elements are in series in M n (E j C). Proposition 6.4 implies that M i
is obtained from M n (E j C) by replacing series elements of C \E j by a unique element f j . The required
signed minor is (M; F any element of C
Because suffices to consider the following cases.
Case 1: 1 0; 2 0.
We know from Proposition 6.4 that M i is a minor of M (where no loop is contracted) say M n J d =J c .
For
be the signed minor (M; F ) n J d =J c . Since (M
a part of (M; F ), it is ideal.
So in particular (M
be defined as follows:
Let D be a cocircuit of M i n f i . The inequality p(D \ F i ) p(D F i ) follows
from i 0 when D [ f i is a cocircuit of M i and it follows from the fact that the cut condition holds for
when D is a cocircuit of M i . Therefore the cut condition holds for It follows from
Corollary 4.4(i) that each of these signed matroids has an F i -flow satisfying costs p i . Let i
be the corresponding function satisfying (4.2). By scaling p, we may assume i for each circuit in
. Let L i be the multiset where each circuit C
in
appears i (C) times. Define L j similarly. The union
(with repetition) of all circuits in L i and L j correspond to an F -flow of M satisfying costs p.
Case 2: i < 0; j > 0.
Because of Claim 3, there are parts (M
defined as follows: p i (f be defined as follows:
Since we can scale p, we can assume that the F i -flow of M i
satisfying costs p i is a multiset L i of circuits and that the F j [ff j g-flow of M j satisfying costs p j is a multiset
l can be partitioned into L l
is a demand element for the flow L j , jL j
is a capacity element for the
flow
j. Let us define a collection of circuits of M
as follows: include all circuits of L i
. Pair each circuit C
1 with a different circuit C
1 , and
add to the collection the circuit included in C i 4C j that contains the element of F . The resulting collection
corresponds to a F -flow of M satisfying costs p.
7. 3-SEPARATIONS
The main result of this section is the following.
Proposition 7.1. A minimally nonideal binary clutter H has no strict 3-separation.
The proof follows from two lemmas, stated next and proved in sections 7.1 and 7.2 respectively.
Lemma 7.2. Let H be a minimally nonideal binary clutter with a strict 3-separation There exists a
set F 2 b(H) of minimum cardinality such that F E 1 or F E 2 .
Let (M; F ) be a signed matroid with a strict 3-separation
be the triangle common to both M 1 and M 2 .
Let
be obtained by deleting from M i a (possibly empty) set of elements of C 0 . We call
of (M; F ) if it is a signed minor of (M; F ).
Lemma 7.3. Let (M; F ) be a connected signed matroid with a strict 3-separation suppose
-Flowing with costs p if the cut condition is satisfied and all parts of (M; F ) are ideal.
Proof of Proposition 7.1. Suppose H is a mni binary clutter that is connected and has a strict 3-separation. Remark
2.5 states that H is the clutter of odd circuits of a signed matroid (M, F). Consider p : E(M) → Q+
defined by p = 1. We know (see Remark 7.5) that τ(H, p) > ν(H, p). From
Lemma 7.2 and Remark 2.5, we may assume F ⊆ E1 and p(F) = τ(H, p). It follows from Proposition
4.2(i) that the cut condition holds. Since the separation of (M; F ) is strict, all parts of (M; F ) are proper
minors, and hence ideal. It follows therefore from Lemma 7.3 that M is F -flowing with costs p. Hence,
because of Proposition 4.2(ii), (H;
7.1. Separations and blocks. In this section, we shall prove Lemma 7.2. But first let us review some results
on minimally nonideal clutters. For every clutter H, we can associate a 0; 1 matrix A(H). Hence we shall
talk about mni 0; 1 matrices, blocker of 0; 1 matrices, and binary 0; 1 matrices (when the associated clutter is
binary). The next result on mni 0,1 matrices is due to Lehman [15] (see also Padberg [22], Seymour [27]).
We state it here in the binary case.
Theorem 7.4. Let A be a minimally nonideal binary 0,1 matrix with n columns. Then the 0,1 matrix B of its
blocker is minimally nonideal binary as well; the matrix A (resp. B) has a square, nonsingular row submatrix
Ā (resp. B̄) with r (resp. s) ones in every row and every column, where rs > n. Rows of A (resp. B) not in
Ā (resp. B̄) have at least r (resp. s) ones. Moreover, Ā B̄^T = J + (rs − n)I, where J denotes the n × n
matrix filled with ones.
It follows that (1/r, ..., 1/r) is a fractional extreme point of the polyhedron {x ∈ R^n_+ : Ax ≥ 1}. Hence,
Remark 7.5. If H is a minimally nonideal binary clutter, then τ(H) > ν(H).
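The identity Ā B̄^T = J + (rs − n)I of Theorem 7.4 can be observed directly on LF7, whose core is the point–line incidence matrix of the Fano plane with r = s = 3 and n = 7; since LF7 equals its own blocker, the same matrix serves as both cores. A sketch of ours, using the labelling of the lines from Section 1:

```python
import numpy as np

lines = [{1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}]
A = np.array([[1 if i in L else 0 for i in range(1, 8)] for L in lines])

r = s = 3
n = 7
# LF7 equals its own blocker, so the core of the blocker is A itself.
lhs = A @ A.T
rhs = np.ones((n, n), dtype=int) + (r * s - n) * np.eye(n, dtype=int)
print(np.array_equal(lhs, rhs))   # True: two lines meet in one point, a line has three
```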
The submatrix
A is called the core of A. Given a mni clutter H with we define the core of H to
be the clutter
H for which A(
A. Let H and be binary and mni. Since H;G are binary, for all
H and T 2
G, we have jS \T j odd. As
A
(rs n)I , for every S 2
H, there is exactly one set
G called the mate of S such that jS \ T (rs n). Note that if A is binary then rs
Proposition 7.6. Let A be a mni binary matrix. Then no column of
A is in the union of two other columns.
Proof. Bridges and Ryser [2] proved that square 0; 1 matrices
B that satisfy
A
commute, i.e.
A T
(rs n)I. Thus col(
rs ng.
Hence there is no ng fig such that col(
A; i), for otherwise
contradicting the equation
A T
(rs n)I.
Proposition 7.7 (Guenin [10]). Let H be a mni binary clutter and e 2 E(H). There exists
such that S 1
Proposition 7.8 (Guenin [10]). Let H be a mni binary clutter and S 1
H. If S
then either
Proposition 7.9. Let H be a mni binary clutter and let
2.
Proof. Let T be the mate of S. Then jT \ Sj 3 and jT \ S
Proposition 7.10 (Luetolf and Margot [16]). Let H be a mni binary clutter. Then
H). Further-
more, if T is a transversal of
H and jT
then T is a transversal of H.
We shall also need,
Proposition 7.11 (Seymour [24]). Let M be a binary matroid with 3-separation . Then there exist
circuits such that every circuit of M can be expressed as the symmetric difference of a subset of
circuits in fC 2
g.
Throughout this section, we shall consider a signed matroid (M; F ) with a 3-separation
will denote the corresponding circuits of Proposition 7.11. Let H be the clutter of odd circuits of (M; F ). We
shall partition b(H) into sets
Proposition 7.12. If S 1 contains a set of b(H).
Proof. Let Note that since S 1 circuits C of M , jS 1 \ Cj
and jS 2 \ Cj have the same parity. This implies that if C is a circuit where C
intersects S 0 and S 1 with the same parity. It also implies, together with the definition of B i , that S 0 intersects
with the same parity as S 1 . It follows from Proposition 7.11 that S 0 and S 1 intersect all
circuits of M with the same parity.
Proof of Lemma 7.2. Let G denote the blocker of H and let be the sets partitioning G. We
will denote by
G the core of G. It follows that
G can be partitioned into sets
. Assume for a contradiction that for all S 2
We will say that a set
with forms an E 1 -block if, for all pairs of sets
Similarly we define E 2 -blocks.
Claim 1. For each nonempty
is either an or an E 2 -block.
Proof of Claim: Consider S 1
Proposition 7.12 states that (S 1 contains a set
G. Proposition 7.8 implies that S
Moreover, by hypothesis neither S 0 \E 1 nor S 0 \E 2 is empty. Since
chosen arbitrarily, the result follows. 3
For any nonempty
. We define E(
to be equal to S \
is an
and to S \
is an E 2 -block. Let r (resp. s) be the cardinality of the members of
H)j. As H is binary r 3 and s 3.
2. Let U E(
G) be a set that intersects E(
. Then U is a transversal of
G
and jU j
Proof of Claim: Clearly U is a transversal of
G, thus jU j (
G). Proposition 7.10 states (
3. Let U; U 0 be distinct transversals of
G. If (
2.
Proof of Claim: Proposition 7.10 imply that U and U 0 are minimum transversals of G. Hence, U; U
The result now follows from Corollary 7.9. 3
4. None of the
Proof of Claim: Let U be a minimum cardinality set that intersects E(
r 3, it follows from Claim 2 that at most one of the
can be empty. Assume for a contradiction that
one of the
is empty. It follows from Claim 2 and the choice of U that each of E(
E(
are pairwise disjoint (otherwise U contains an element common to at least 2 of E(
E(
1 be distinct elements of E(
E(
E(
contradict Claim 3. Thus jE(
similarly, jE(
are not all E 1 -blocks and not
-blocks. Thus w.l.o.g. we may assume
-blocks and
3 is an E 2 -block. Let t 1 be any
element in E 1 E(
E(
be the unique element in E(
the column of A(
indexed
by t 1 is included in the column of A(
indexed by t 2 , a contradiction to Proposition 7.6. 3
Consider first the case where every
is an E 1 -block. Suppose that no two E(
has four columns that add up to the vector of all ones. By Theorem 7.4, each of these columns has s ones
and therefore 4s. Furthermore the four elements that index these columns form a transversal of
G
and therefore r 4 (see Claim 2). This contradicts Theorem 7.4 stating that rs > n. Thus two E(
intersect, say
. For otherwise a contradiction to rs > n. Let t be any element of
E(
any element of E(
E(
4 g. It follows
from Claim 2 that r = 3. It follows from Claim 3 that each of E(
E(
cardinality one, and
E(
E(
contains a unique element e. Since there are no dominated columns in A(
we have that
E(
E(
a contradiction to the hypothesis that the 3-separation is
strict.
Consider now the case where
-blocks and
-blocks. Suppose there exists
that is not in any of E(
4g. Assume without loss of generality that e
Then column e of A(
is included in the union of any two columns f 1 2 E(
E(
contradiction to Proposition 7.6. Thus every element of E(H) is in E(
there is e 2 E(
E(
g. Then implies that
partitioned into E(
E(
partitioned into E(
E(
4, then we can use Claim 3 to show that for each i 2
contradiction as then jE 2. Thus
be a minimum transversal of
G. Suppose both u; v 2 E(
E(
is a transversal. It then follows that T intersects all sets of
parity, a contradiction as H is binary. Thus we may assume w 2 E(
all sets in
It follows that, for any x 2 E(
E(
3 ), the set fw; x; yg is a transversal of
G, a contradiction to
3. Hence for any transversal each element of T is in a different E(
may
assume E(
E(
E(
It follows that for any x 2 E(
wg is a transversal
and thus by Claim 3 E(
contains a unique element t. Since jE 1 j > 2, we cannot have a transversal
E(
E(
E(
would imply jE(
Hence every minimum transversal
contains t, a contradiction to Theorem 7.4.
Finally, consider the case where
-blocks and
B 4 is an E 2 -block. Note that every t
is in some E(
3g. Otherwise the corresponding column t of A(
is dominated by any
E(
Suppose there is t 2 E(
E(
are distinct elements in
3g. Proposition 7.7 states there exist three sets of
G that intersect exactly in t. This implies jE(
Now since E(
E(
there is a column in say E(
E(
Similarly,
a contradiction to jE 1 j > 3. Thus E(
E(
and therefore either (1) for some distinct E(
E(
is a partition of E(
E(
E(
for each distinct 3g. By considering sets U containing one element of
E(
intersecting each of E(
E(
E(
use Claim 3 to show that jE 1 j 2 in
Case (1) and jE 1 j 3 in Case (2), a contradiction.
7.2. Parts and minors. In this section, we prove Lemma 7.3. Consider the matroid with exactly three elements
which form a circuit C 0 . Let I 0 ; I 1 be disjoint subsets of C 0 . We say that a signed matroid
(N; ) is a fat triangle is obtained from C 0 by adding a parallel element for every
signed binary matroid with a circuit C
of M is a simple circuit of type We say that a
cocircuit D has a small intersection with a simple circuit C if either: D \ and the
unique element in C \ is in D.
Lemma 7.13. Let (M; ) be a signed binary matroid with a circuit C
(1) Let I C 0 be such that for all i 2 I there is a simple circuit C i of type i. Suppose for all distinct
small intersection with
the simple circuits in fC Ig. Then the fat triangle (;; I) is a minor of (M; ).
(2) Let C 1 be a simple circuit of type 1. Suppose we have a cocircuit D 12 where D 12 \C
D 12 has a small intersection with C 1 . If C 1 f1g [ f2g is dependent then C 1 f1g [ f2g contains
an odd circuit using 2 and the fat triangle (f3g; f2g) is a minor of (M; ).
(3) Suppose for each we have a simple circuit C i of type i. Suppose we have a cocircuit D 12
where D 12 \C small intersection with C 1 and C 2 . If both C 1 f1g [ f2g
and C 2 f2g [ f1g are independent, then the fat triangle (;; f1; 2g) is a minor of (M; ).
Proof. Throughout the proof distinct elements of C 0 .
Let us prove (1). For each i 2 I let f i be the unique element in C i \. For each D jk either: D jk \C
or D jk \ C is an element not in . Let E 0 be the set of elements in C 0 or in any of C i
(a) If g i exists then f i is in each of D 12 ; D 13 ; D 23 and g i is in D jk but not D ij ; D ik .
(b) If g i does not exists but f i does then f i is in D ij ; D ik but not in D jk .
. Observe that (a) and (b) imply respectively (a') and (b').
(a') If g i exists then f i 62 and g i 2 .
(b') If g i does not exist then f i 2 .
Let (N; ) be the minor of (M; ) obtained by deleting the elements not in E 0 and then contracting the
elements not in C 0 [ . It follows from (a') and (b') that if C 0 is a circuit then (N; ) is the fat triangle
(;; I). Otherwise some element i 2 C 0 is a loop of N , say 1. Then there is a circuit C of M such that
does not intersect D 12 and D 23 with the same
parity. Consider any e 2 C C 0 such that e is in some cocircuit D ij . Since e 62 , it follows from (a') and
(b') that I and that g i exists. But then (a) implies that e 2 D 12 \ D 13 \ D 23 . It follows
that C cannot intersect D 12 and D 23 with the same parity, a contradiction.
Let us prove (2). Let f be the unique element in C 1 \. By hypothesis there is a circuit C in C 1 f1g[
f2g. Since C 1 is a circuit 2 2 C. Since D 12 has a small intersection C 1 \ D fg. It follows that
be the circuit using 3 in C 1 4C4C 0 . Since C 0 is a circuit, 3 is not a loop, hence
contains at least one element say g.
Observe that f2; fg is an odd cycle of (N; ) and that f3; gg and C 0 are even cycles of (N; ). Hence, if C 0 is
a circuit of N then (N; ) is the fat triangle (f3g; f2g). Because D 12 is a cocircuit of M , f1; 2; fg is a cocycle
of N , in particular 1; 2; f are not loops. If 3 is a loop of N then there is a circuit S C 1 f1; 2; f; gg[ f3g
of (M; ). But C 0 4 S is a cycle and C 0 4 S C 1 , a contradiction as C 1 is a circuit.
Let us prove (3). Let M 0 be obtained from M by deleting all elements not in C 0 [
small intersection with C 1 and C 2 we have
is a signed minor of (M; ). Choose a minor N of M 0 which is minimal and satisfies the
following properties:
(i) C 0 is a circuit of N ,
there exist circuits C i of N such that C i \C
Note that by hypothesis M satisfies properties (i)-(iv) and thus so does M 0 . Hence N is well defined. We
will show that jC 1 in N . Then (N; f1; 2g) is a minor of (M; ) and after resigning on the
cocircuit containing we obtain the fat triangle (;; f1; 2g). There is no circuit S C 1 f1g [ f3g of N ,
for otherwise there exists a cycle C 1 4 S 4C 0 C 1 f1g [ f2g, a contradiction with (iii). Hence,
2. C 1 \ C
Proof of Claim: Otherwise define N 0 := N=(C 1 \ C 2 ). Note that N 0 satisfies (ii)-(iv). Suppose (i) does not
hold for N 0 , i.e. C 0 is a cycle but not a circuit of N 0 . Then 3 is a loop of N 0 . Thus there is S C 1 \C 2 such
that S [ f3g is a circuit of N , contradicting Claim 1. 3
Assume for a contradiction jC
3. There exists a circuit S of N .
Proof of Claim: Let e 2 C 1 f1g and consider (N is not a circuit of N 0 .
Then 2 or 3 is a loop of N . But then either f2; eg or f3; eg is a circuit of N . In the former case it contradicts
(iii), in the latter it contradicts Claim 1. Hence (i) holds for N 0 . Trivially (iii) holds for N 0 as well. Suppose
(ii) does not hold, then C 2 is not a circuit of N 0 . It implies there exists a circuit S C 2 [ feg f2g of N .
Then S is the required circuit. Suppose (iv) does not hold. Then there is a circuit S
N , and S 4 C 1 contains the required circuit. 3
Let S be the circuit in the previous claim. Since C are circuits, S \C 1 are non-empty. Let C 0
2 be
the circuit in C 2 4 S which uses 2. Note that N n (E(N
properties (i)-(iii) using
2 instead of C 2 . Thus, by minimality, (iv) is not satisfied for C 0
contains a circuit C 0
1 .
2 is a circuit, 1 2 C 0
. By the same argument as above, (iii) is not satisfied for C 0
contains a circuit C 00
using 2. Since C 00
it follows that C 00
2 is a circuit). Therefore,
f1g. But the cycle C 0
contradicts the fact that C 0 is a circuit.
Lemma 7.14. Let (M; ) be an ideal signed binary matroid with a circuit C
Suppose we have such that the cut condition is satisfied. Then there exists
Q+ which satisfies the following properties: (i) p 0 satisfies the cut condition; (ii) p 0
and 0g. There is a
-flow
(1) The fat triangle (;; I) is a signed minor of (M; ) and
circuits C such that (C) > 0.
Or after possibly relabeling elements of C 0 we have p 0
(3) The fat triangle (f3g; f2g) is a signed minor of (M; ) and
(4) for all odd circuits C with (C) > 0 and C \ C
contains an odd circuit using 2.
Proof.
Claim 1. We can assume that there exists Q+ such that properties (i)-(iii) hold. For distinct
be the minimum of p 0 (D jg. We then have
(after possibly relabeling the elements of C 0 ) the following cases, either: (a)
Proof of Claim: Choose which satisfies the following prop-
erties: the cut condition holds for
(i)-(iii) holds for p 0 . Suppose (a) does not hold. Then we may assume (after relabeling) that 23 > 0 and that
Consider first the case where 12 > 0. Then 2 is in no tight cocircuit, it follows from the choice of p 0
that then for all circuits C such that
holds. Moreover, (1) is satisfied since (M; )n(E(M ) C 0 ) is the
Thus we may assume relabeling 2 and 3 we satisfy (b).
Hence we can assume is in no tight cocircuit, thus p 0
holds. Thus we may assume defined as
hold for ^
p. Suppose
p(3). Thus we may assume for each distinct there is a
cocircuit D where D \ C which is tight for
p. It follows that (a) holds. 3
Throughout the proof distinct elements of C 0 . Let p 0 be the costs given in Claim 1.
implies that there is a -flow,
ij be the cocircuits of M for which D ij \ C
Consider first case (a) of Claim 1, i.e. D ij is tight for all distinct We will show that (1) and
(2) hold. Let C be any circuit with (C) > 0. Then jC 1. Suppose there is an element i in C 0 \ C .
Proposition 4.3 states C is the unique element in C \ .
Thus C \ C is in a tight cocircuit, thus if p 0 (i) > 0 then
there is a circuit C i with Moreover, (2) implies that C i is a simple circuit of type i.
Proposition 4.3 implies that D 12 ; D 13 ; D 23 all have small intersections with each of the simple circuits. Then
(1) follows from Proposition 7.13(1).
Consider case (b) of Claim 1, i.e.
2. Let C be a circuit with (C) > 0. If i 2 C \ f1; 2g, then C i is a simple circuit of type i.
This follows from the fact that 3 62 C (as p 0 and that jC \ f1; (because of Proposition 4.3
and the fact that D 12 is tight). The case where p 0 has already been considered (see proof
of Claim 1). Suppose for some f be the unique element in
. The minor (M; )n(E(M is the fat triangle (;; fig) and both (1) and
(2) hold. Thus p 0 (1) > 0; p 0 (2) > 0. Suppose now for all i 2 f1; 2g there exists a circuit C i with
states that these circuits are simple circuits
of type i. Then (2) holds and Proposition 7.13(3) implies that (M; ) contains the fat triangle (;; f1; 2g), i.e.
(1) holds. Thus we may assume, for some i 2 f1; 2g that for all circuits C i such that (C i
dependent. If interchange the labels 2 and 1. Since we had
we get in that case p 0 (2)
Proposition 7.13(2) implies that for all circuits C 1 with contains an
odd circuit using 1 and that (M; ) contains the fat triangle (f3g; f2g) as a minor. Together with
this implies (3) and (4) hold.
We are now ready for the proof of the main lemma.
Proof of Lemma 7.3. Since M has a strict 3-separation,
a triangle. Throughout this proof i; j; k will denote distinct elements of C 0 . Recall that F E 1 . Let 1
denote the smallest value of
some cocircuit of M 1 with D ij \ C jg. Expression (*) gives the difference between
the sum of the capacity elements and the sum of the demand elements in D ij , excluding the marker C 0 .
Denote by D 1
ij the cocircuit for which the minimum is attained in (*). Let 2
ij denote the smallest value of
some cocircuit of M 2 with jg. In what follows, we let D 2
the cocircuit for which p(D 2
ij . For each
Proof of Claim: We have 2
jk . Thus
2. 1
Proof of Claim: 1
But the last expression is non negative since the cut condition holds for (M; F; p). 3
signed minor of (M; F ).
Proof of Claim: Theorem 5.2 implies that M 1 is a minor of M obtained by contracting and deleting elements
in
4. The cut condition is satisfied for
Proof of Claim: Since the cut condition holds for (M; F; p) the cut condition is satisfied for all cocircuits of
1 disjoint from C 0 . Let D be a cocircuit of M 1 such that D\C
ij . It follows
from Claim 2 that the previous expression is non-negative. 3
implies that (M 1 of (M; F ) and hence its clutter of odd circuits is ideal. Together with
implies that (M 1 the hypothesis of Lemma 7.14. It follows that M 1 is F -flowing
with costs p 0
1 is as described in the lemma) and either case 1 or case 2 occurs.
Case 1: Statements (1) and (2) hold.
We define I :=
5. (M 0
is a signed minor of (M; F ).
Proof of Claim: Statement (1) says that the fat triangle (;; I) is a signed minor of (M is equal to
showed that (M3 M 2 ) nJ d =J
6. The cut condition is satisfied for (M 0
Proof of Claim: It suffices to show the cut condition holds for cocircuits D that intersect C 0 . Suppose D \
states that p 0
ij .
Thus
implies that (M 0
is a part of (M; F ) and hence its clutter of odd circuits is ideal. It follows
from Claim 6 and Corollary 4.4(i) that M 0
2 is I-flowing with costs p 2 . Since we can scale p (and hence p 0
1 and
may assume that the F -flow of M 1 satisfying costs p 0
1 is a multiset L 1 of circuits and that the I-flow
of M 0
satisfying costs p 2 is a multiset L 2 of circuits. Because of Statement (2), L 1 can be partitioned into L 1and L 1
. Because
can be partitioned into L 2
1 (i) for each i 2 I, jL 1
us define a collection of circuits of M
as follows: include all circuits of L 1
, and for every i 2 I pair each circuit C
i with a different circuit
and add to the collection the circuit included in C 1 4C 2 that contain the element of F . The resulting
collection corresponds to a F -flow of M satisfying costs p.
Case 2:
statements (3) and (4) hold (after possibly relabeling C 0 ).
that the fat triangle (f3g; f2g) is a signed minor of (M 1 ; F ).
Proceeding as in the proof of Claim 5 we obtain the following.
7. (M 0
is a signed minor of (M; F ).
1 (1) and
8. The cut condition is satisfied for (M 0
Proof of Claim: Consider first a cocircuit D of M 0
2 such that D \ C 3g. Let us check D does not
violate the cut condition. The following expression should be non negative: p 2 (D f2g) p 2 (D \
states
Consider a cocircuit D of M 0
2 such that 2 2 D but 3 62 D. Let us check D does not violate the cut condition.
The following expression should be non negative: p 2 (D f2g) p 2 (D \
is a cocircuit of M 2 ,
It follows that p(D C 0
imply that M 0
2 is f2g-flowing with costs p 2 . We may assume that
the F -flow of M 1 satisfying costs p 0
1 is a multiset L 1 of circuits. Because of Statement (4), L 1 can be
partitioned into L 1
. We may assume that the f2g-flow of M 0
satisfying costs p 2 is a multiset
L 2 of circuits. Since C 2 L 2 implies Cf2g and since 1 62 E(M 0
can be partitioned into L 2
, and L 2
9. (i) jL 1
j.
Proof of Claim: Let us prove (i). 2 is a demand element for the flow L 2 , thus jL 2
(2). 3 is a capacity element for the flow L 2 , thus jL 2
where the last inequality follows from the fact that 2 is a capacity element for the flow L 1 . Let us prove (ii).
Let us define a collection of circuits of M as follows: (a) include all circuits of L 1
every circuit
2 with a different circuit C
- such a pairing exists because of Claim 9(i) - and add to the
collection pair as many circuits C 1 of L 1
1 to as many different circuits C 2 of L 2
1 as possible, and
add to the collection C 1 remaining circuits C 1 of L 1
1 to circuits of L 2
not already used in (b).
Such a pairing exists because of Claim 9(ii). Statement (4) says that C 1 f1g [ f2g contains an odd circuit
1 . For every pair C add to the collection the cycle C 0
for each cycle C in the collection
only keep the circuit included in C that contains the element of F . The resulting collection corresponds to an
F -flow of M satisfying costs p.
8. SUFFICIENT CONDITIONS FOR IDEALNESS
We will prove Theorem 1.1 in this section, i.e. that a binary clutter is ideal if it has none of the following
minors: LF7 , OK5 , b(OK5
6 and Q
7 . The next result is fairly straightforward.
Proposition 8.1 (Novick and Seb-o [20]).
H is a clutter of odd circuits of a graph if and only if u(H) is graphic.
H is a clutter of T -cuts if and only if u(H) is cographic.
Remark 8.2. The class of clutters of odd circuits and the class of clutters of T-cuts are closed under minor
taking.
This follows from the previous proposition, Remark 2.9, and the fact that the classes of graphic and cographic
matroids are closed under taking (matroid) minors. We know from Remark 3.4 that b(Q 6
is a source of F 7 , and Q
6 is a source of F
7 . Thus Proposition 8.1 implies,
Remark 8.3. Q
7 and Q
6 are not clutters of odd circuits or clutters of T -cuts.
We use the following two decomposition theorems.
Theorem 8.4 (Seymour [24]). Let M be a 3-connected and internally 4-connected regular matroid. Then
is graphic or M is cographic.
Theorem 8.5 (Seymour [24, 26]). Let M be a 3-connected binary matroid with no F
Then M is regular or
Corollary 8.6. Let H be a binary clutter such that u(H) has no F
7 minor. If H is 3-connected and internally
4-connected, then H is one of b(Q 7 or one of the 6 lifts of R 10 , or a clutter of odd circuits or
a clutter of T-cuts.
Proof. Since H is 3-connected, u(H) is 3-connected. So, by Theorem 8.5, u(H) is regular or In
the latter case, Remark 3.4 implies that H is one of b(Q 7 . Thus we can assume that u(H) is
regular. By hypothesis, u(H) is internally 4-connected and therefore, by Theorem 8.4,
is graphic or u(H) is cographic. Now the corollary follows from Proposition 8.1 and Remark 3.4.
We are now ready for the proof of the main result of this paper.
Proof of Theorem 1.1. We need to prove that, if H is nonideal, then it contains LF7 , OK5 , b(OK5
6 or
7 as a minor. Without loss of generality we may assume that H is minimally nonideal. It follows from
Remark 5.3 and propositions 6.1 and 7.1 that H is 3-connected and internally 4-connected. Consider first the
case where u(H) has no F
7 minor. Then, by Corollary 8.6 either: (i) H is one of b(Q 7
(ii) H is one of the 6 lifts of R 10 , or (iii) H is a clutter of odd circuits, or (iv) H is a clutter of T-cuts. Since H
is minimally nonideal, it follows from Proposition 3.5 that if (i) occurs then occurs then
occurs then, by Theorem 1.3, occur because of Theorem 1.4.
Now consider the case where u(H) has an F
7 minor. It follows by Theorem 3.2 that H has a minor H 1
a source of F
7 and H 2 is a lift of F
7 . Proposition 3.1 states that the lifts of F
7 are the
blockers of the sources of F 7 . Remark 3.4 states that the sources of F 7 are b(Q 7 ), LF7 or b(Q 6 that
F
7 has only one source, namely Q
6 . This implies that H
6 and H
7 or b(LF7 )
has an LF7 minor and b b(Q
has a Q
6 minor, the proof of the theorem is complete.
One can obtain a variation of Theorem 1.1 by modifying Corollary 8.6 as follows: Let H be a binary
clutter such that u(H) has no F 7 minor. If H is 3-connected and internally 4-connected, then H is b(Q 7 ),
6 ) or one of the 6 lifts of R 10 or a clutter of odd circuits or a clutter of T-cuts. Following the proof
of Theorem 1.1, this yields: A binary clutter is ideal if it does not have an LF7 , OK5 , b(OK5
minor. But this result is weaker than Corollary 1.2. Other variations of Theorem 1.1 can be obtained
by using Seymour's Splitter Theorem [24] which implies, since u(H) is 3-connected and u(H) 6= F
7 , that
u(H) has either S 8 or AG(3; 2) as a minor. Again, by using Proposition 3.2, we can obtain a list of excluded
minors that are sufficient to guarantee that H is ideal.
9. SOME ADDITIONAL COMMENTS
Corollary 8.6 implies the following result, using the argument used in the proof of Theorem 1.1.
Theorem 9.1. Let H be an ideal binary clutter such that u(H) has no F
7 minor. If H is 3-connected and
internally 4-connected, then H is one of b(Q 7 or one of the 5 ideal lifts of R 10 , or a clutter of odd
circuits of a weakly bipartite graph, or a clutter of T-cuts.
A possible strategy for resolving Seymour's Conjecture would be to generalize this theorem by removing
the assumption that u(H) has no F
7 minor, while allowing in the conclusion the possibility for H to also be
a clutter of T-joins or the blocker of a clutter of odd circuits in a weakly bipartite graph. However, this is not
possible as illustrated by the following example.
Let T 12 be the binary matroid given by a partial representation matrix which is omitted here.
This matroid first appeared in [11]. It is self dual and satisfies the following properties:
(i) For every element t of T 12 , T 12 =t is 3-connected and internally 4-connected.
(ii) For every element t of T 12 , T 12 =t is not regular.
We are indebted to James Oxley (personal communication) for bringing to our attention the existence of
the matroid T 12 and pointing out that it satisfies properties (i) and (ii). Let t be any element of T 12 and let
Because of (i), T 12 3-connected and internally 4-connected and thus so is H.
Because of (ii), T 12 not graphic or cographic thus Proposition 8.1 implies that H is not a clutter
of T -cuts and not a clutter of odd circuits. We know from Proposition 2.12 that
Thus, b(H) is also 3-connected, internally 4-connected, and H is not the clutter of T -joins or
the blocker of the clutter of odd circuits. However, it follows from the results of Luetolf and Margot [16] that
the clutter H is ideal.
--R
On the length-width inequality for compound clutters
Combinatorial designs and related systems.
A decomposition for combinatorial geometries.
Composition for matroids with the Fulkerson property.
Euler tours and the chinese postman.
Blocking and anti-blocking pairs of polyhedra
Graphs and polyhedra
A characterization of weakly bipartite graphs.
A generalization of a graph result of D.
Graphic programming using odd or even points (in chinese).
A solution of the Shannon switching game.
On the width-length inequality
On the width-length inequality and degenerate projective planes
A catalog of minimally nonideal matrices.
The anti-join composition and polyhedra
Polyhedral properties of clutter amalgam.
Composition operations for clutters and related polyhedra.
On combinatorial properties of binary spaces.
Matroid Theory.
Lehman's forbidden minor characterization of ideal 0 1 matrices.
The Matroids with the Max-Flow Min-Cut property
Decomposition of regular matroids.
A note on the production of matroid minors.
European J.
On Lehman's width-length characterization
Signed graphs.
Biased graphs.
--TR
--CTR
Bertrand Guenin, Integral Polyhedra Related to Even-Cycle and Even-Cut Matroids, Mathematics of Operations Research, v.27 n.4, p.693-710, November 2002
Grard Cornujols , Bertrand Guenin, Ideal clutters, Discrete Applied Mathematics, v.123 n.1-3, p.303-338, 15 November 2002 | ideal clutter;signed matroid;connectivity;multicommodity flow;t-cut;seymour's conjecture;weakly bipartite graph;separation |
588327 | Geometric Computation of Curvature Driven Plane Curve Evolutions. | We present a new numerical scheme for planar curve evolution with a normal velocity equal to $F(\kappa)$, where $\kappa$ is the curvature and F is a nondecreasing function such that F(0)=0 and either $x\mapsto F(x^3)$ is Lipschitz with Lipschitz constant less than or equal to 1 or $F(x)=x^\gamma$ for $\gamma\geq 1/3$. The scheme is completely geometrical and avoids some drawbacks of finite difference schemes. In particular, no special parameterization is needed and the scheme is monotone (that is, if a curve initially surrounds another one, then this remains true during their evolution), which guarantees numerical stability. We prove consistency and convergence of this scheme in a weak sense. Finally, we display some numerical experiments on synthetic and real data. | Introduction
In this paper, we investigate the evolution of a closed smooth plane curve, when each point of
the curve moves with a normal velocity depending on the curvature of the curve at this point.
More precisely, we study evolution of a curve C obeying the equation
$$\frac{\partial C}{\partial t}(s,t) = F(\kappa(s,t))\, N(s,t), \qquad (1)$$
where N(s; t) is the inner normal vector to the curve at the point with parameter s at evolution
time t. Equations of type (1) model phenomena in physics or material science. They also play
an important role in digital image analysis. Indeed, it was proved in [1] that any image analysis
process satisfying some reasonable properties and invariance (essentially causality, stability, invariance
with respect to isometries and contrast changes) is described by an equation of type (1),
or more precisely by the corresponding grey level evolution described by the scalar equation
$$\frac{\partial u}{\partial t} = |Du|\, F(\mathrm{curv}(u)), \qquad (2)$$
which has to be considered in the viscosity sense [6]. In this equation
$\mathrm{curv}(u)(x) = \mathrm{div}\!\left(\frac{Du}{|Du|}\right)(x)$
is the curvature of the level line passing through x. Equation (2) means that the level lines of u
move with respect to Equation (1). The case $F(x)=x^{1/3}$ (with the convention $x^{\alpha}=\mathrm{sign}(x)|x|^{\alpha}$
for $\alpha\in\mathbb{R}$ and $x\in\mathbb{R}$) is particularly interesting since it is the only contrast invariant equation
that also commutes with affine transformations preserving the area (this set is named the special
affine group, and is composed of the mappings of the type $x\mapsto Ax+b$, where $A$
is a linear transformation such that $\det A = 1$). The case $F(x)=x$
has extensively been studied ([8, 9]), and existence, regularity, and vanishing in finite time have
been proved. The affine invariant case $F(x)=x^{1/3}$ has also been studied in [2, 22, 23]. For more general speed functions F,
results have been exposed in [25]. The equivalence between curve motion
and contrast invariant smoothing was proved in [7].
In this paper, we focus on a possible numerical algorithm solving (1). According to [7], if u
is a continuous real-valued function in R 2 , we can solve Equation (2) by applying this algorithm
to all level lines of u. Of course, this implies that these level lines must not intersect during their
evolution. We thus require that, like Equation (1) the algorithm satisfies an inclusion principle,
meaning that the order (with respect to inclusion) is respected. Among the first attempts to
solve Equation (1), Osher and Sethian ([19, 24]) solve (2) by introducing the signed distance
function to the curve at time $t = 0$. Unfortunately, this algorithm satisfies neither inclusion
principle nor rotation invariance. In addition, the evolution a priori depends on the chosen
distance function, since the scalar algorithm is not contrast invariant. Moreover, the data (and
thus the CPU time) becomes rapidly huge when a high precision is requested. Some attempts
on curves by finite difference schemes have also been made ([15, 16]) with interesting results,
but the nongeometric nature of the scheme still prevents inclusion principle from being satisfied.
Moreover, for evolutions driven by a power of the curvature larger than 1, the discretization of the
curve is inclined to become sparser around points with high curvature, thus preventing
good accuracy. On the contrary, a completely different scheme has been implemented in [17, 18]
for the affine invariant case. It is fully geometrical and also satisfies inclusion principle (implying
numerical stability). In [11], a theoretical algorithm for moving hypersurface by a power of Gauss
Curvature has also been studied. We generalize and implement this algorithm in the plane for
nonconvex curves and for more general functions of the curvature. We just mention that a
numerical approach on curves has the advantage that the resolution is not limited by the pixel
size, which allows a very high precision. Moreover, the computation time is far shorter than
in a scalar approach. On the other hand, topology changes (e.g. a single curve breaking into
two connected components) are automatically handled in a scalar approach. This should not
be a real problem for plane curves evolution since it is likely that no topological changes occur.
This has been proved at least in [9] for the Mean Curvature Flow and in [2] for Affine Curve
evolution. On the contrary, it is known that topology changes do occur in higher dimension. In
addition, it seems difficult (at least not trivial) to generalize our algorithm in higher dimensions
even for hypersurfaces in a three dimensional space.
The paper is as follows. We first give in section x2 some preliminary definitions and introduce
some operators on sets. This provides operators on curves which are boundaries of sets. These
operators are consistent with curvature dependent differential operators. They satisfy some
monotonicity and continuity properties, allowing to extend them to real valued functions. We
prove some consistency results and give the proof of convergence without entering too much
into details since we prefer to focus on numerical applications. In x3, we adapt the operator
previously defined to the special case of evolution driven by a power of the curvature and in x4,
we give an algorithm with the same scaling covariance properties as the curve Scale Space. We
then show in x5 some numerical experiments in the convex case as well as in the nonconvex case.
2 Definition and Properties
We first give some notations (previously used in [18]). Let C a semi-closed curve in R 2 (that is
an oriented simple curve dividing the plane in exactly two connected components) and K the
interior of C (which is the bounded component of R 2 nC when C is closed). We suppose that C
is oriented such that K lies on "the left" when C is positively described. More rigorously, if we
assume that the plane is counterclockwise oriented, the inner normal is such that the tangent
vector and the inner normal form an orthonormal direct basis. We assume that a smooth
parameterization is defined on C (for example piecewise C 1 ).
A chord is a segment of the form ]C(s); C(t)[ that does not intersect any point of C with
parameter between s and t. A chord set C s;t is the connected set enclosed by a chord ]C(s); C(t)[
and the curve C(]s; t[). We say that C s;t is a oe-chord set and (s; t) is a oe-chord if L 2 (C s;t
and if the area of any chord set strictly included in C s;t is strictly less than oe. We denote by
K oe (C) the set of oe-chord sets.
a oe-chord of K and C s;t the associated oe-chord set. We call chord arc
distance of C (or of C s;t ) the number ffi(C([s; t]); [C(s); C(t)]) where ffi is the Hausdorff semi-
distance (in particular it is not commutative) defined by
For x in R 2 , we also denote by ffi s;t (x) the signed distance from x to the oriented line ]C(s); C(t)[,
that is
are in R 2 , we denote by [x; y] the determinant of the 2 \Theta 2 matrix with columns x and
y). We also denote by K
oe (C) the sets of positive oe-chords i.e the chord-sets C s;t satisfying
In the same way, we can define
oe the set of negative chord sets. Remark that for positive
chord sets, the chord-arc distance is nothing but sup ffi s;t (x) for x 2 C([s; t]) and for negative
chord sets, the chord-arc distance is
To finish with notations, we set $\omega = \frac{1}{2}\left(\frac{3}{2}\right)^{2/3}$, the constant appearing in the chord-arc distance of a parabola (see (5) below).
We now define a mapping on the set of plane sets that we shall assume piecewise regular for
sake of simplicity. Let G be a nondecreasing 1-Lipschitz function defined in $\mathbb{R}_+$ and such that
$G(0)=0$. Let K be a set in $\mathbb{R}^2$ whose oriented boundary is C, and
assume that $L^2(K) > \sigma$. For $C_{s,t}$ a chord set of K with chord-arc distance
h, we write
$$\varepsilon_\sigma(C_{s,t}) = \left\{ x\in C_{s,t} \;:\; \delta_{s,t}(x) \,\geq\, h - \omega\sigma^{2/3}\, G\!\left(\frac{h}{\omega\sigma^{2/3}}\right) \right\}. \qquad (3)$$
In the sequel, we shall briefly say that oe (C s;t ) is a modified chord set. We remark that the
right-hand term of the inequality above is nonnegative because of the Lipschitz assumption on
G. Hence, a modified oe-chord set is always included in its associated oe-chord set. On Figure 1,
we represent a oe-chord and its modified chord. The modified oe-chord set is filled.
Figure
1: A oe-chord set before and after transform.
Definition 1. Let K be the interior of a piecewise $C^1$ semi-closed curve. We define
$$E_\sigma(K) = K \setminus \bigcup_{A \in K_\sigma^+(C)} \varepsilon_\sigma(A). \qquad (4)$$
We will refer to E oe as an erosion operator and to E oe (K) as the eroded of K at scale oe.
Remark 2. The algorithm in [18] corresponds to the case $G(x)=x$, that is, to the affine erosion.
Remark 3. We can generalize Definition 1 to sets with several connected components by applying
the erosion to each component.
Lemma 4 Let K be a convex set of R 2 . Then E oe (K) is also convex.
Proof. We can write
Kn oe (A);
which proves that E oe K is convex as an intersection of convex sets (each of them is the intersection
between K and a half plane).
Lemma 5 If K is a smooth compact set, then E oe (K) is a compact set.
Proof. This is obvious since $E_\sigma(K)$ is an intersection of compact sets.
The proposition below is crucial in a theoretical point of view as well in a numerical one since
it is necessary to obtain a stable numerical algorithm.
Proposition 6 (Inclusion Principle) Let $K_1 \subset K_2$. Assume that G is nondecreasing and
1-Lipschitz. Then $E_\sigma(K_1) \subset E_\sigma(K_2)$.
Proof. Assume that x 2 K 2 and x 62 E oe (K 2 ). We prove that x 62 E oe (K 1 ). If x 62 K 1 then
Assume now that x 2 K 1 . By assumption, there exists a
oe 0 -chord of K 2 (that we denote by C) with oe 0 oe such that x belongs to the modified chord-set.
The Lipschitz condition on G implies that x also belongs to the oe 0 -chord set. The same oe 0 -chord
delimits in K 1 a unique chord-set containing x. Let oe 00 be its area : we have oe 00 oe 0 and thus
oe 00 oe. It then suffices to prove that this chord excludes x from E oe (K 1 ). Consider the situation
illustrated in Figure 2.
are the chord-arc distances of C in K 1 and K 2 .
are the chord-arc distances of the associated modified chords.
(iii) l is the difference of length between K 1 and K 2 in the direction that is orthogonal to
the chords, i.e.
It is enough to prove that l + l 1 l 2 . But, we know that l
oe 2=3 !G( h 2
is 1-Lipschitz, we conclude.
Figure
2: Inclusion Principle.
Proposition 7 Assume that C is of class $C^2$. Let $M \in C$ such that the curvature $\kappa(M)$ of C
at M is not equal to 0. Then
$$\lim_{\sigma\to 0}\; \sigma^{-2/3}\, d\big(M, E_\sigma(K)\big) \;=\; \omega\, G\big((\kappa^+(M))^{1/3}\big),$$
where $\kappa^+$ is the positive part of $\kappa$, defined by $\kappa^+ = \max(\kappa, 0)$.
Proof. Assume that C is concave at M (that is the curvature at M is strictly negative in
a neighborhood of M ). Then for oe small enough, any oe-chord (s; t) such that M 2 C([s; t]) is
a (strictly) negative oe-chord. Hence M 2 E oe (K) and the proposition follows from
Assume now that C is strictly convex at M (thus in a neighborhood of M ). Then for any chord
parallel to the tangent at M enclosing a oe-chord set containing M , the chord-arc distance h
satisfies
$$h = \omega\,\sigma^{2/3}\,\kappa(M)^{1/3} + o(\sigma^{2/3}), \qquad (5)$$
which can be easily established for a parabola, then for any regular curve by approximation (see
[18] for example). Then the result follows from (3) and (4), which imply
$$d\big(M, E_\sigma(K)\big) = \omega\sigma^{2/3}\, G\!\left(\frac{h}{\omega\sigma^{2/3}}\right) + o(\sigma^{2/3}),$$
and the fact that G is continuous.
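To make the asymptotic relation (5) concrete, the following Python lines (a small check of ours, not part of the implementation used in section 5; the value of $\omega$ is the parabola constant assumed above) compare the exact chord-arc distance of a $\sigma$-chord of a circle of radius R with $\omega\sigma^{2/3}\kappa^{1/3}$, where $\kappa = 1/R$.

```python
import math

def segment_area(R, h):
    # area of the circular segment of height h cut from a circle of radius R
    return R * R * math.acos(1.0 - h / R) - (R - h) * math.sqrt(2.0 * R * h - h * h)

def chord_arc_distance(R, sigma):
    # solve segment_area(R, h) = sigma for h by bisection (the map is increasing)
    lo, hi = 0.0, R
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if segment_area(R, mid) < sigma:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

omega = 0.5 * 1.5 ** (2.0 / 3.0)   # assumed value of the constant in (5)
R = 1.0
for sigma in (1e-2, 1e-3, 1e-4):
    h = chord_arc_distance(R, sigma)
    approx = omega * sigma ** (2.0 / 3.0) * (1.0 / R) ** (1.0 / 3.0)
    print(sigma, h, approx)        # the two values agree as sigma tends to 0
```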
The following property is a continuity property allowing to extend E oe to any plane set, and
then to define an operator acting on functions with real values.
Proposition 8 (Continuity) Let K n a sequence of compact smooth sets. Let
Assume that K is also a smooth set. Then
Proof. Since K ae K n for any n, by monotonicity we have also
the first part of the equality. In order to prove the reverse inclusion, we can assume that the
family K n is nonincreasing. Without loss of generality, we also suppose that (K n ) converges to
K for the Hausdorff distance between compact sets. Assume that x 62 E oe (K). By definition,
there is a chord (s; t) with area not more than oe such that the modified chord excludes x. Since
oe (K) is closed, its complementary is an open set; thus we can assume that the area of C s;t is
strictly less than oe. In K n it also defines a chord and for n large enough, the area of this chord
is also less than oe (by using convergence of measures). Moreover, as K n tends to K for the
Hausdorff distance, the chord-arc distance also converges. Since G is continuous, this implies
that the chord excludes x in K n for n large enough. Hence x 62 "E oe (K n ) and this ends the
proof.
Remark 9. The compactness assumption in the proposition above is far from necessary. It suffices
for example that the boundary of the sets is locally convex or concave. This ensures that the
erosion is local when oe is small. We can then conclude by the same kind of arguments. Note
also that the continuity property allows to define the erosion on any closed set by approximating
closed sets by smooth closed sets.
We can now extend $E_\sigma$ to real-valued functions in $\mathbb{R}^2$. First, if $u : \mathbb{R}^2 \to \mathbb{R}$, we define as usual
the level set with value $\lambda$ as the set $\{x \in \mathbb{R}^2 : u(x) \geq \lambda\}$.
Applying a theorem by Matheron (see [14]) yields the following
F a set of real valued functions in R 2 such that the level sets of the elements
of F are compact and smooth. Then, we can extend E oe to elements of F by setting
It is equivalent to define E oe (u) by its level sets,
This uniquely defines a monotone, translation invariant operator commuting with nondecreasing
continuous functions.
We also define a dual operator $D_\sigma$ (called dilation operator) by $D_\sigma(K) = \left(E_\sigma(K^c)\right)^c$,
the subscript designing the complementary set in R 2 . This operator satisfies the same properties
as E oe except the consistency result where the positive part of the curvature has to be replaced
by the negative part.
By standard arguments ([4], [10], [20]), we can derive the following consistency result on the
operator acting on functions.
Proposition 11 Let u be a C 3 function and x a point such that Du(x) 6= 0 and (u)(x) 6= 0.
Then
Proof. The whole proof is not very difficult but a bit technical and long. Thus, we do
not enter into all details and we shall skip some points. The aim is to prove that E oe (u) only
depends on local features of u. Choose ff such that oe
oe tends to 0 (note that there is no incompatibility; choose r = oe 1=4 for instance). By using
translation invariance and contrast invariance, we assume that
of the proof is the following. If r is small enough, the curvature of the level lines of u has a
strict sign in D(0; r). As a consequence, for any oe-chord, we can estimate the chord-arc distance
by Equation (5). Moreover the same kind of approximation (made on a circle or a parabola)
shows that the length of the chord is of order ( oe
) 1=3 . Hence, any oe-chord intersecting D(0; r
asymptotically included in D(0; r(1 (because of the choice of r). Assume
first that (u)(0) ? 0. Define u+ and u \Gamma by
and u+ elsewhere (we can replace the infinite value by very large
numbers). The global inequalities
enough, the level lines of u are uniformly strictly concave in D(0; r). Thus, there is no positive
oe-chord of level sets of u \Gamma and u+ intersecting D(0; r
Hence
r
Assume now that ! 0. Define
in D(0; r) and
in D(0; r) and elsewhere. The constant k is chosen such that for oe small enough,
we have
Figure
3: Case $\kappa > 0$. On the left, a level set of $u_-$. The oriented boundary is the bold line. The
level set is the bounded connected component delimited by the curve. The dashed line is the
boundary of the eroded set. On the right, a level set of u+ : the boundary is the bold oriented
line and the level set is the unbounded component. In both cases, there is no positive oe-chord
set intersecting D(0; r
Hence, the erosion has no effect.
This is possible since we assumed that u is C 3 . Monotonicity yields
As v and w have trivial level sets out of D(0; r), it is quite easy to estimate their image by E oe .
We now use the consistency result (Proposition 7). The only trick is that the level lines of u
and v are not parabola in the canonical form. Nevertheless, with some few arguments we can
be led back to this situation ([4, 10, 20]). The computation of the eroded level sets of v and w
is drawn on Figure 4. From this, it is no longer difficult to prove that
and
are constants depending on D 2 u(0). We can apply the same result to the
dilation operator D oe to obtain the second part of the proposition.
For the next proposition, we first extend G to make it odd, that is, if $x < 0$, we set
$G(x) = -G(-x)$.
Proposition 12 Let u be a C 3 function. Suppose that Du(x) 6= 0 and (u)(x) 6= 0. Then
Proof. This follows from the fact that : 1) E oe and D oe are monotone and commute with
addition of constants, 2) near a point with gradient and curvature different from zero, the
arguments developed in the previous proposition are uniform.
In order to be complete, we just give without proof (it is simple when introducing the affine
erosion operator ([18])) a lemma controlling the behavior of E oe at critical points.
Figure
4: Case $\kappa < 0$. On the left, a level line of v (oriented bold line). The eroded line is the
dashed line. On the right, the same thing with w.
lim
oe 2=3
the limit being taken when y and oe tend to 0 independently.
This and the consistency result above allow to deduce the convergence result we now enounce
(see [21]). We denote T
Theorem 14 Let u 0 bounded and uniformly continuous in R 2 . For oe ? 0, let define
R 2 \Theta R! R by
Then, when oe tends to 0, u oe converges locally uniformly to the unique viscosity solution of the
equation
@t
The usual definition of viscosity solution ([6]) is no longer valid here because of the possible
singularity of the operator at critical points. An extended notion of solution is defined in [3]
and the solution still exists and is unique.
3 Curvature Power
3.1 Approximation of power functions
In this section, we show that the previous study can be adapted to the particular case of power
functions $F(x)=x^{\gamma}$ with $\gamma > 1/3$, for which $x \mapsto F(x^3) = x^{3\gamma}$ is not 1-Lipschitz on the whole real line. For $x \geq 0$
we define
$$G(x)=\begin{cases} x^{3\gamma} & \text{if } 0 \leq x \leq \beta,\\ \beta^{3\gamma} + (x-\beta) & \text{if } x > \beta, \end{cases}$$
where
$$\beta = (3\gamma)^{1/(1-3\gamma)}$$
is the largest positive number at which the power function $x^{3\gamma}$ has a derivative less than 1. As
G is 1-Lipschitz, we can then apply E oe to this function. We could think that this scheme is not
consistent with motion by curvature power. Indeed, if the curvature is too large for fixed oe then
it may happen that the chord arc distance is also very large and the erosion is then given by the
linear part of G. Nevertheless, we shall see that by an adequate scaling, G is not evaluated in
its linear part. From now on, we slightly change the notations in the definition of E oe . This will
simplify the statements in the case of power functions. For a oe-set C s;t , we now set
oe (C s;t
and we still define the erosion operator by
oe (A): (23)
When the scale tends to 0, h also tends to 0 and G is not taken in its linear part. The fact that
we get an operator consistent with a power function is due to the homogeneity properties of
power functions (this implies that except for power functions, this new definition of the erosion
operator makes no sense). We can adapt the proof of the inclusion principle 6 to prove that
the modified erosion operator E oe still satisfies this inclusion principle. Consistency on curves
(circles is enough !) is easy to establish. Continuity is not a problem as well. Thus, by using
Matheron's Theorem, we can extend this erosion operator to an operator acting on functions.
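For reference, here is a one-function Python sketch of the saturated power function G reconstructed above (the function name and the sample values are ours): it coincides with $x^{3\gamma}$ up to $\beta$ and is extended with slope 1 beyond, so it is nondecreasing and 1-Lipschitz on $[0,+\infty)$.

```python
def G(x, gamma):
    """Saturated power function for gamma > 1/3, as reconstructed above:
    x**(3*gamma) below beta, linear with slope 1 above."""
    beta = (3.0 * gamma) ** (1.0 / (1.0 - 3.0 * gamma))
    if x <= beta:
        return x ** (3.0 * gamma)
    return beta ** (3.0 * gamma) + (x - beta)

# example: for gamma = 1 (motion by curvature), beta = 3 ** (-1/2)
print(G(0.2, 1.0), G(2.0, 1.0))
```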
The following proposition asserts that this operator is consistent with a power of the curvature.
Proposition 15 Let u be a C 3 function. Let x be such that Du(x) 6= 0 and (u)(0) 6= 0. Then
Moreover, consistency is locally uniform.
Proof. As usual, we assume that In a first time, assume that (u)(0) ? 0.
We use the same locality argument as in the first proof of consistency in this chapter. Since the
curvature of u in a small ball with radius r ? 0 is bounded from below by a positive constant,
say the level sets of u have no positive oe-chord in D(0; r). We set
We use the same locality argument as in the first
proof of consistency as above: For oe small enough, the level sets of u \Gamma have no positive oe-chord
in D(0; r). Thus the erosion has no effect upon u \Gamma in D(0; r). By using monotonicity, we have
and the result is proved in the case ? 0.
Let us now come to the most difficult case: Assume that
cxy be the Taylor expansion of u at the origin with
parameter and let
If r is chosen small enough, we have v(x) u(x) in D(0; r). By extending v by \Gamma1 out of
D(0; r) this remains true everywhere. Moreover, we can assume that the curvature of the level
lines of v is still strictly negative. Indeed, its value is is the value at the
origin. We want to approximate E oe (v)(0). Let now j ? 0 be also small, such that the curvatures
of the level lines of v is larger than that in this part of the proof the curvatures
are all negative, hence a circle with a small curvature will also have a small radius). We can
again invoke the same locality property of the erosion in the case of a curve with a strictly
negative curvature: We know that the chord-arc distance is equal to O(oe 2=3
the length of the chord is a We deduce from this that the oe-chord sets
containing 0 must be included in a ball with radius O(oe 1=3 ). A small computation shows that
the modified corresponding oe-chord sets are included in a ball with radius O(oe fl ). The constant
in these terms are clearly uniform because the curvature is bounded from above by a negative
constant. In particular, they do not depend on " and j. Let now x be in a D(0; r). We call
C \Gamma"\Gammaj (x) the disk of curvature that is tangent to the level line of v at x and that is
in the same side as v(x) (x). Because of the comparison of the curvatures, we can still assume
that r is small enough such that we have the inclusion
Now, since the erosion operator is local, we also have
Assume that E oe (v)(0) . By definition, this means that 0 62 E oe ( (u)). By using inclusion
principle, we deduce that 0 62 E oe (C \Gamma"\Gammaj (x)) for any point such that . On the level line
of v with the same value , we can find a unique point x such that x and the normal at x
are colinear (this is due to the strict convexity of the level lines and the theorem of intermediary
values). This point is also characterized by the fact that the distance between the origin and
the tangent to the level line is minimal. Hence, the modified oe-chord set of C \Gamma"\Gammaj
also contains the origin. Let be the coordinates of x . A simple calculation gives
Since x and Du(x ) are colinear, we have
By using consistency on disks, we have jx
From this, we finally deduce
where we have approximated jDv(x )j by p up to a O(oe 2fl ) term. This analysis can be performed
since eventhough the constant were not explicited, we already stress
that they do not depend upon " and j. We then deduce that
Let now search an upper bound to E oe (u)(0). We do not repeat all the arguments since there
will be some similarity with the research of a lower bound. We approximate u by its Taylor
expansion and define 2cxy such that u w is a small ball
of radius r with
enough, the curvature of the level lines of w is
smaller than which can also be chosen negative if r is small enough. The locality of the
oe-chords still holds. We now define C +"+j (x) as above; its radius is equal
r is small enough, for any x the level set w(x) (w) is included in C +"+j (x) inside D(0; r). The
rest of the proof is still an application of comparison principle and the asymptotic behavior on
erosion on disks. Assume that E oe (w)(0) . This means that In particular
is a above. This implies that x ! 3fl
We use the characterization (24) of x and deduce that we must have
Since, this is true for any " ? 0 and j ? 0, we also obtain
Again, we can pass to the limit since the o(1) term is uniform in " and j, since all the curvatures
may be taken bounded by above by a strictly negative constant. Thus, if ! 0, we have
With the case 0, this gives the result.
The case of the dilation can be deduced by the relation D oe (\Gammau). The uniform
consistency follows from the fact that the oe-chords are uniformly bounded in some ball with
radius O(oe 2=3 ) and the constants of these terms are bounded as soon as the curvature have an
absolute value strictly more that a positive constant.
Uniform consistency yields consistency for the alternate operator.
Corollary
At critical points, we need to describe the behavior of the erosion in the following manner. Let
R be a C 2 function such that
tends to 0.
Lemma any sequence of points x n tending to 0, we have
lim
oe 2fl
where the limit is taken as n and oe tends to their respective limit independently and T h designs
either E oe or D oe or D oe
This lemma can be easily established by finding some estimates on the radius of the circle after
erosion. We can then prove the following convergence theorem.
Theorem
3 . Let T the alternate dilation-erosion for the curvature
power function x fl . Let u 0 in BUC(R 2 ) and define u oe by
Then, when oe tends to 0, u oe tends locally uniformly to the unique viscosity solution of the
equation
@t
with initial value u 0 .
As soon as the power fl is more than 1, then the usual notion of viscosity solution is not
appropriate since the elliptic operator jDuj(curv u) fl is singular at critical points. Ishii and
Souganidis proved in [12] that existence and uniqueness where still true if test functions were
restricted to a class of functions with flat critical point. This flatness is given by the same
conditions in the previous lemma. This point apart, the scheme of the proof is standard so we
do not explicit it. The only new point is the previous lemma in the case where test functions
are stationnary. In this case, the lemma directly gives the solution and we leave the rest of the
proof to the reader.
3.2 Scale Covariance
We denote by $H_\lambda$ the dilation with ratio $\lambda$, that is $H_\lambda x = \lambda x$. Let $S_t$ be the evolution
semi-group of
$$\frac{\partial C}{\partial t} = \kappa^{\gamma}\, N, \qquad (25)$$
that is, $S_t C(0) = C(t)$ is the curve evolving according to Equation (25) above (the solution exists
and is unique at least for short times for smooth initial data, see [25]). The semi-group $S_t$
satisfies the relation
$$H_\lambda\, S_t = S_{\lambda^{1+\gamma} t}\, H_\lambda. \qquad (26)$$
Indeed, let $C_1(t)$ be the evolving curve defined by $C_1(t) = H_\lambda\, C(\lambda^{-(1+\gamma)}\, t)$.
Then, it is simple to check that C 1 satisfies Equation (25) with initial condition H C(0). This
is exactly what Equation (26) asserts. The erosion operator E oe does not satisfy the same
covariance property. We thus define a modified operator
O
a 2
where a and oe are positive parameters depending upon t (and possibly on C). We want O t to
satisfy covariance Property (26), like S t . This will be true as soon as
and oe(H C;
We also want O t to be consistent with S t , that is, for any convex set K with C 3 boundary,
O t
the term o(t) being measured with the Hausdorff distance (we also use an abusive notation by
denoting S t (K) the set whose boundary is S t (@K)). By using consistency result above, we see
that a, oe and t must be linked by the relation
Assume that oe ? 0 is fixed. If a is chosen large enough such that, for any oe chord of K with
chord-arc distance h the inequality
a
ff fl (30)
holds, then the modified chord-arc distance is then given by G near the origin, that is Equation
(21) is not involved. Precisely, for M 2 @K (with smooth boundary) consistency writes
a
a
a
Notice that it is interesting to take the smallest possible value of a (given by the case of equality
in (30)) in order to get the largest possible scale step t from (29). We can summarize these
results in the following
Proposition 19 Let h(A) denote the chord-arc distance of a chord A. Then, the operator O t
defined by (27) with
sup
t! \Gamma3fl a 3fl \Gamma1 \Delta 1=2fl (32)
is consistent with (25) and satisfy the same scaling property as S t in (26).
Remark 20. (Error Analysis) In order to obtain consistency, it is not necessary that h
a
ff fl for
all (oe; a); for instance, if we fix a and define O t by (27) and (29), we still have consistency since
inequality (30) holds for oe small enough. However if a and oe do not satisfy (30), the difference
between O t C and S t C is of the same magnitude as t (since Equation(21) is involved). On the
contrary, this difference is O(t 2 ) if a and oe satisfy (30) (because of Consistency 5).
4 Algorithm
4.1 General method
Each iteration of the operator defined above involves three parameters: the scale step t (that can
be viewed as a time step), the erosion area oe and the saturation length a. These three quantities
have to satisfy (32), which leaves only one degree of freedom. A usual numerical scheme would
consider t as the free parameter (the time step, related to the required precision), and then
define a and oe from t. In the present case, this would not be a good choice for two reasons.
First, a is defined as an explicit function of oe but as an implicit function of t, which suggests
that oe may be a better (or at least simpler) free parameter than t. Second, t has no geometrical
interpretation in the scheme we defined, contrary to oe which corresponds in some way to "the
scale at which we look at the curve". In particular, oe is constrained by the numerical precision at
which the curve is known: roughly speaking, if C is approximated by a polygon with a precision
" (corresponding, for example, to the Hausdorff distance between the both of them), then we
must have oe AE " 2 in order that the effect of the erosion at each iteration overcomes the effect
of the spatial quantization. For all these reasons, we choose to fix oe as the free parameter, and
then compute t and a using (32). If the scale step t obtained this way is too large, we can simply
adjust it by reducing oe while keeping the same value of a. We propose the following algorithm
for the evolution of a convex set K at final scale T with area precision oe.
1. Let $t = 0$ and $K_t = K$.
2. While $t < T$:
- For each $\sigma$-chord set of $K_t$, compute the chord-arc distance.
- Set a to the maximal value of these distances.
- Compute the scale step $\delta t$ from (32); if $t + \delta t$ exceeds T, set $\delta t = T - t$ and
decrease $\sigma$ in order to keep the previous equality.
- Apply the operator $O_{\delta t}$ to $K_t$, yielding $K_{t+\delta t}$.
- Increment t by $\delta t$.
In practice, it is of course impossible to deal with all the oe-chords. In fact, the curve is a
polygonal line and we take the chords with an end point equal to a vertex of the polygon.
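The only nontrivial geometric computation in this loop is, for each vertex of the (convex, counterclockwise) polygon, the $\sigma$-chord issued from that vertex and its chord-arc distance; a is then the maximum of these distances. The following Python sketch is a minimal illustration of this step (function names are ours, and $\sigma$ is assumed much smaller than the area of the polygon); it is not the implementation used for the experiments of section 5.

```python
import numpy as np

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def sigma_chord(P, i, sigma):
    """Endpoint Q of the sigma-chord of the convex, counterclockwise polygon P
    issued from vertex P[i], and the arc vertices lying strictly between P[i]
    and Q.  The area enclosed by the chord [P[i], Q] and the boundary arc is
    sigma (fan triangulation from P[i]; the area is affine in Q on the last edge)."""
    n = len(P)
    area, arc, j = 0.0, [], (i + 1) % n
    while True:
        k = (j + 1) % n
        tri = 0.5 * cross(P[i], P[j], P[k])      # >= 0 for a ccw convex polygon
        if area + tri >= sigma:
            t = (sigma - area) / tri
            Q = (1.0 - t) * np.asarray(P[j], float) + t * np.asarray(P[k], float)
            return Q, arc + [np.asarray(P[j], float)]
        area += tri
        arc.append(np.asarray(P[j], float))
        j = k

def chord_arc_distance(p, q, arc):
    """Largest distance from the arc vertices to the chord line [p, q]."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    dx, dy = q - p
    length = np.hypot(dx, dy)
    return max(abs(dx * (v[1] - p[1]) - dy * (v[0] - p[0])) / length for v in arc)

# one pass of step 2 of the algorithm: a is the largest chord-arc distance
P = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]    # a rectangle, ccw
sigma = 0.5
a = max(chord_arc_distance(P[i], *sigma_chord(P, i, sigma)) for i in range(len(P)))
print(a)
```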
4.2 Computation of the erosion
The boundary of O t (K) is included in the envelop of all the modified chords. To obtain an
approximation of this set, we explicitly determine the position of the unique point of each chord
belonging to the envelop. This result is a generalization of the middle point property exposed
in [18],
Lemma 21 (Middle point Property) Let K be a strictly convex set. Let A 1 be a oe-set of
K with $\sigma$-chord $C_1$. Let $A_2$ be another $\sigma$-chord set, and let $C_2$ be its $\sigma$-chord. Then, when $d_H(A_1, A_2)$
tends to 0, the intersection point of $C_1$ and $C_2$ tends to the middle point of $C_1$.
Proof. (quoted from [18]). Let ' be the geometrical angle between C 1 and C 2 . If A 1 and A 2
are close enough, C 1 and C 2 intersect at a point that we call I('). We also call r 1 (') and r 2 (')
the length of the part of the chord C 2 on each side of I('). Since C 1 and C 2 are oe-chords, we
Figure
5: The middle point property.
This implies that $\lim\,(r_1(\theta) - r_2(\theta)) = 0$ when $\theta$ tends to 0.
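As a direct illustration of the middle point property, one discrete erosion step in the affine case $\gamma = 1/3$ (where, as recalled below, the modified chords are the $\sigma$-chords themselves) can be approximated by taking the midpoint of the $\sigma$-chord issued from each vertex. The sketch below is only an illustration and reuses the hypothetical sigma_chord helper of section 4.1.

```python
import numpy as np

def affine_erosion_step(P, sigma):
    """Approximate eroded polygon in the affine case (gamma = 1/3): one chord
    midpoint per vertex of the convex, counterclockwise polygon P."""
    return [0.5 * (np.asarray(P[i], float) + sigma_chord(P, i, sigma)[0])
            for i in range(len(P))]
```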
The boundary of O t (K) is included in the envelop of the "modified" oe-chords. In the case
these chords are the oe-chords themselves. For all other values of fl, this is no longer
true and we have to compute the position of the intersection of closer and closer chords. This is
the purpose of
Proposition 22 Let K be a strictly convex set, and a oe-chord of K. Consider P the
farthest point of C([s; t]) from C. If (L; h) are the coordinates of P in the direct orthonormal
referential whose origin is the middle point of C and whose first axis is directed by C(t)-C(s),
then the contribution of the modified chord arising from C is either void or the unique point with
coordinates
a
a
in the same referential.
Proof. Examine the situation on Figure 6. Let C ' the oe-chord making an angle ' with C. We
search the coordinates of the intersection point of the modified chord of C and C ' when ' tends
to 0. We set x(') the abscissa of the point . By the middle point property, we know that
' the modified chord of C ' . The distance between these two chords
is
a
is the chord-arc distance of C ' . Let (L('); H(0)) be the
coordinates of the common point of C 0 and C 0
' . Elementary but a bit fastidious geometry proves
Figure
The modified middle point property.
that
a
implying that the limit point we are looking for has coordinates given by (33).
Remark 23. In the case of the general function of the curvature and without introducing the
scaling preventing saturation phenomenon, the coordinates of the point are
!oe 2=3
!oe 2=3
As stated in the proposition, the limit point may not belong to the boundary of $O_t(K)$. Indeed,
$\partial O_t(K)$ is in general strictly included in the envelope of the modified $\sigma$-chords. In general, this
envelope is not even the boundary of a convex set! Nevertheless, if we know that C is a convex
curve then it is simple to decide whether a point has to be kept or not by comparing its position
with adjacent modified chord. Hence we can remove the bad points and obtain a convex set.
For example, on Figure 7, we display the envelop of the modified oe-chords of a square. If the set
is a "corner", the explicit computation can be made and shows the same behavior in the corner
as in
Figure
7. The eroded set is obtained by removing the parts with cusps in the corners of
the square.
5 Numerical Experiments
To finish, we display numerical experiments, first in the case of convex sets. By a change of
scale variable, we implemented an approximation of the equation
$$\frac{\partial C}{\partial t} = t^{\gamma}\,\kappa^{\gamma}\, N. \qquad (36)$$
For this rescaling, scale and space are homogeneous (precisely, if T t maps C to C(t) by this
equation, we check that $T_{\lambda t}\, H_\lambda = H_\lambda\, T_t$).
Figure
7: Envelop of the modified oe-chords for a square 2). The result is not convex. The
bold line is what is to be kept.
5.1 Closed convex curves
The first example is the case of circles. The radius is explicitly computable for the scale
space since
$$R(t) = \left( R(0)^{1+\gamma} - t^{1+\gamma} \right)^{1/(1+\gamma)}.$$
Remark that the extinction scale for a circle with initial radius R(0) is R(0) for any fl. On
Figure
8, we display the evolution of a circle with radius 10, for 9.
Figure
8: Evolution of circles for displayed at scale 0, 1,., 9 (fast computation).
In
Table
1, we give the theoretical and computed radius for several values of fl and for a circle
with initial radius equal to 10. We performed two sets of experiments with different precisions,
and for each of them we give the CPU time on a Pentium II 366MHz, the number of performed
iterations and the final obtained radius. In Figures 9 and 10, we display the evolution of convex
closed polygons. On each figure, the display scales are the same for all the different values of fl.
5.2 Unclosed curves evolution
Until now, we have only studied the evolution of compact convex sets. The boundary of such
sets is a closed convex curve. It is possible to make nonclosed convex curves evolve by fixing their
end points as in [5]. This is equivalent to symmetrize and periodize the curve. The steady state
to this evolution is a segment if the two ends are disjoint. If they are equal, then a singularity
γ       R_theo    fast: R_comp / #iter / CPU    slow: R_comp / #iter / CPU
2.0     6.47      6.39 / 112 / 1.22             6.45 / 421 / 27
3.0     7.65      7.56 / 108 / 1.38             7.64 / 400 / 28
10.0    9.66      9.62 / 100 / 1.19             9.66 / 214 / 15
Table
1: The shortening at scale 9 of an initial circle with radius 10 is considered. For different
values of fl, we give the theoretical value of the final radius (R theo ), the corresponding values
obtained with different precisions (fast or slow computation), the number of performed iterations
and the used CPU time.
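The R_theo column of Table 1 follows directly from the radius formula above; the following lines reproduce it (up to rounding in the last digit).

```python
def circle_radius(R0, t, gamma):
    # radius at scale t of a circle of initial radius R0 under (36)
    p = 1.0 + gamma
    return (R0 ** p - t ** p) ** (1.0 / p)

for gamma in (2.0, 3.0, 10.0):
    print(gamma, circle_radius(10.0, 9.0, gamma))   # compare with the R_theo column
```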
Figure
9: Evolution of a triangle. Up-left:
occurs in finite time. Such an evolution is displayed on Figure 11 for a nonclosed convex curve
for several values of fl.
As can be derived from [8, 9, 2], if C moves by Equation 25 and is locally the graph of a
function $y(x,t)$, then y satisfies the equation
$$\frac{\partial y}{\partial t} = \frac{(y_{xx})^{\gamma}}{\left(1+y_x^2\right)^{(3\gamma-1)/2}}. \qquad (37)$$
We use this equation to determine whether a convex curve with distinct fixed end points becomes
a straight line in finite time.
Proposition 24 Let $u_0$ be a strictly convex function on $[-1,1]$ with $u_0(-1) = u_0(1) = 0$, and let u
Figure
10: Evolution of a pentagon. Up-left:
be the solution of
Then, if $\gamma < 1$, u becomes identically zero in finite time. If $\gamma \geq 1$, then the steady state is
attained for infinite time.
Proof. From Equation (37), we deduce that u is subsolution of the following equation
On the other hand, we can derive from Equation (37) an equation for u 0 . This equation is also
parabolic. By maximum principle, the supremum of u 0 is attained at time
that u is supersolution of
For both Equations (38), (39), we can compute separable solutions of the type g(t)f(x). Both
functions f and g then satisfy an ordinary differential equation that is explicitely solvable for
g and can be expressed in terms of elliptic functions for f . We can then check that the time
component becomes null in finite time if and only if $\gamma < 1$. In order to conclude, it suffices to
bound $u_0$ from above if $\gamma < 1$ and from below if $\gamma \geq 1$ by an adequate f (the spatial part of the
separable solution) and to apply the maximum principle.
Figure
11: Evolution of an angle for different powers of the curvature
2:0). The displayed curves correspond to scales that are integer multiples of
a same fixed value.
Let us interpret this experiment in terms of image processing. Imagine that we process the
filtering of a grey level image by applying (25) to its level lines, as done in [13] for
This will have smoothing effects, and in particular one can expect to remove pixellization effects.
The periodic structure corresponding to the initial state of Figure 5.2 is a "staircase" line
corresponding to the discretization of a perfect straight line (oriented at 45°). As the evolution
scale increases, this infinite staircase is smoothed and eventually becomes a perfect straight line
in finite time if $\gamma < 1$. A natural question now arises: how to choose $\gamma$ in order to smooth these
staircase effects with the smallest possible damaging effects on the image? In other terms, what
power of the curvature regularizes discrete lines in the shortest time? The previous proposition
asserts that only powers smaller than 1 can straighten a staircase in finite time. Experiments of
Figure
5.2 corroborate this result and indicate that the straightening time increases with fl. But
this result has to be counterbalanced in the following. By smoothing an image, we would like
to remove small undesirable details while keeping the rest of the image unchanged. The results
and experiments on circles (Figure 8) tend to prove that large powers can do this in a better
way than small powers (i.e. when a circle with initial radius 9 disappears at scale 9, a circle
with radius 10 is less changed for large values of $\gamma$).
5.3 Generalization to nonconvex sets
An algorithm for nonconvex curves has been proposed in [18] for the affine erosion, and we
apply the same method in the general case. It consists in splitting a curve into its convex and
concave components. This decomposition is unique and well defined. By this method, we do
not find inflexion points but inflexion segments and we define an inflexion point as the middle
point of an inflexion segment. We then apply the erosion operator to all the convex components
by fixing the end points (which are the inflexion points defined above). Once this is done we
gather all the parts to form a new curve. We reapply the decomposition and the erosion to
this new curve. Notice that near an inflexion point, the ending segments of the joining convex
components do not stay parallel in general; thus, inflexion points have no reason to stay
still. Practically, this is what we observe. Moreover, as fl increases, they seem to move more
and more slowly, which also seems logical. On Figures 12 and 13, we display the result of the
algorithm on nonconvex curves. Nevertheless, we do not have any strict justification for the
convex component decomposition and the displacement of inflexion points should be studied
more carefully.
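A minimal sketch of this decomposition for a closed polygonal curve (ours; flat vertices and runs of collinear edges are ignored for simplicity): each vertex gets the sign of the turn at that vertex, maximal runs of equal sign are the convex and concave components, and the midpoints of the sign-changing edges play the role of inflexion points.

```python
import numpy as np

def turn_signs(P):
    """Sign of the cross product of consecutive edges at each vertex of the
    closed polygon P (+1 convex vertex, -1 concave, 0 flat)."""
    P = np.asarray(P, float)
    n = len(P)
    signs = []
    for i in range(n):
        u = P[i] - P[i - 1]
        v = P[(i + 1) % n] - P[i]
        signs.append(int(np.sign(u[0] * v[1] - u[1] * v[0])))
    return signs

def inflexion_points(P):
    """Midpoints of the edges across which the turn sign changes; the arcs
    between two consecutive inflexion points are the convex/concave components."""
    s = turn_signs(P)
    n = len(P)
    pts = []
    for i in range(n):
        j = (i + 1) % n
        if s[i] != 0 and s[j] != 0 and s[i] != s[j]:
            pts.append(0.5 * (np.asarray(P[i], float) + np.asarray(P[j], float)))
    return pts
```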
Acknowledgements
. The authors would like to thank Jean-Michel Morel for all valuable
conversations and advice.
Figure
12: Scale space of a "T" shape: the curves are displayed at the same evolution scale.
From left to right and top to bottom: fl =0.4, 1, 2, 3. CPU times are respectively 5, 9, 23 and
Figure
13: Scale space of a hand curve: the curves are displayed at the same evolution scale.
From left to right and top to bottom: initial curve,fl =0.4, 1, 2. CPU times are respectively 6,
--R
Axioms and fundamental equations of image processing.
On the affine heat equation for nonconvex curves.
Convergence of approximation schemes for fully nonlinear second order equations.
Partial differential equations and mathematical morphology.
Is scale space possible
User's guide to viscosity solution of second order partial differential equations.
Motion of level sets by mean curvature.
The heat equation shrinking convex plane curves.
The heat equation shrinks embedded plane curves to round points.
Partial Differential Equations and Image Iterative Filtering.
An approximation scheme for gauss curvature flow and its convergence.
Generalized motion of noncompact hypersurfaces with velocity having arbitrary growth on the curvature tensor.
Geometric multiscale representation of numerical images.
Random Sets and Integral Geometry.
Solution of nonlinear curvature driven evolution of plane convex curves.
Traitement num'erique d'images et de films
Affine plane curve evolution: A fully consistent scheme.
Fronts propagating with curvature dependent speed: Algorithms based on Hamilton-Jacobi formulation
Approximation de propagation de fronts avec ou sans termes locaux.
Approximation of viscosity solution by morphological filters.
Affine invariant scale space.
On affine plane curve evolution.
Curvature and the evolution of fronts.
The generalized curve shortening problem.
--TR
--CTR
L. Alvarez , A.-P. Blanc , L. Mazorra , F. Santana, Geometric Invariant Shape Representations Using Morphological Multiscale Analysis, Journal of Mathematical Imaging and Vision, v.18 n.2, p.145-168, March | curve evolution;image processing;level set methods;viscosity solutions;numerical approximation |
588362 | On the Accuracy of the Finite Volume Element Method Based on Piecewise Linear Polynomials. | We present a general error estimation framework for a finite volume element (FVE) method based on linear polynomials for solving second-order elliptic boundary value problems. This framework treats the FVE method as a perturbation of the Galerkin finite element method and reveals that regularities in both the exact solution and the source term can affect the accuracy of FVE methods. In particular, the error estimates and counterexamples in this paper will confirm that the FVE method cannot have the standard O(h2) convergence rate in the L2 norm when the source term has the minimum regularity, only being in L2, even if the exact solution is in H2. | Introduction
. In this paper, we consider the accuracy of finite volume element
methods for the following elliptic boundary value problem: Find $u$
such that
$$-\nabla\cdot\big(A(x)\nabla u\big) = f(x) \ \ \text{in } \Omega, \qquad u = 0 \ \ \text{on } \partial\Omega, \qquad (1.1)$$
where $\Omega$ is a bounded convex polygon in $\mathbb{R}^2$ with boundary $\partial\Omega$, $A(x)$ is a symmetric and
uniformly positive definite matrix in $\Omega$, and the source term $f$
has enough regularity so that this boundary value problem has a unique
solution in a certain Sobolev space.
Finite volume (FV) methods have a long history as a class of important numerical
tools for solving di#erential equations. In the early literature [26, 27] they were
investigated as the so-called integral finite di#erence methods, and most of the results
were given in one-dimensional cases. FV methods also have been termed as box
schemes, generalized finite di#erence schemes, and integral-type schemes [20]. Generally
speaking, FV methods are numerical techniques that lie somewhere between finite
di#erence and finite element methods; they have a flexibility similar to that of finite
element methods for handling complicated solution domain geometries and boundary
conditions; and they have a simplicity for implementation comparable to finite di#er-
ence methods with triangulations of a simple structure. More important, numerical
solutions generated by FV methods usually have certain conservation features that
# Received by the editors March 10, 2000; accepted for publication (in revised form) May 25,
2001; published electronically January 30, 2002. This work was partially supported by NSF grant
DMS-9704621, U.S. Army grant ARO-39676-MA, and by the NSERC of Canada.
http://www.siam.org/journals/sinum/39-6/36887.html
Institute for Scientific Computation, Texas A&M University, College Station,
(ewing@isc.tamu.edu). This author was supported by NSF grants DMS-9626179, DMS-9706985,
DMS-9707930, NCR9710337, DMS-9972147, INT-9901498; EPA grant 825207; two generous awards
from Mobil Research and Development; and Texas Higher Education Coordinating Board Advanced
Research and Technology Program grants 010366-168 and 010366-0336.
# Department of Mathematics, Virginia Polytechnic Institute and State University, Blacksburg,
VA 24061 (tlin@math.vt.edu).
- Department of Mathematics, University of Alberta, Edmonton, AL T6G 2G1 Canada
(ylin@hilbert.math.ualberta.ca).
are desirable in many applications. However, the analysis of FV methods lags far
behind that of finite element and finite di#erence methods. Readers are referred to
[3, 6, 17, 21, 22, 25] for some recent developments.
The FVE method considered in this paper is a variation of the FV method, which
also can be considered as a Petrov-Galerkin finite element method. Much has been
published on the accuracy of FVE methods using conforming linear finite elements.
Some early work published in the 1950s and 1960s can be found in [26, 27]. Later, the
authors of [20] and their colleagues obtained optimal-order H 1 error estimates and
superconvergence in a discrete H 1 norm. They also obtained L 2 error estimates of
the following form:
where u and u h comprise the solution of (1.1) and its FVE solution, respectively. Note
that the order in this estimate is optimal, but its regularity requirement on the exact
solution seems to be too high compared with that for finite element methods having
an optimal-order convergence rate when the exact solution is in $W^{2,p}$ or $H^2$.
Optimal-order H 1 estimates and superconvergence in a discrete H 1 norm also have
been given in [3, 17, 21, 22, 25] under various assumptions on the above form for
equations or triangulations.
More recently, the authors of [7, 8] presented a framework based on functional
analysis to analyze the FVE approximations. The authors in [11] obtained some new
error estimates by extending the techniques of [20]. The authors of [14, 15] considered
FVE approximations for parabolic integrodi#erential equations, covering the above
boundary value problems as a special case, in both one and two dimensions. All the
authors obtained optimal-order H 1 and W 1,# error estimates and superconvergence
in H 1 and W 1,# norms. In addition, they found an optimal-order L # error estimate
in the following form:
which is in fact an error estimate without any logarithmic factor. However, all the
estimates obtained by these authors require that the exact solution have H 3 regularity.
To the best of our knowledge, there have been no results indicating whether the
above W 3,p
regularity is necessary for the FVE solution with conforming linear
finite elements to have the optimal-order convergence rate. On the other hand, it
is well known that in many applications the exact solution of the boundary value
problem cannot have W 3,p or H 3 regularity. In fact, the regularities of the source
f , the coe#cient, and the solution domain all can abate the regularity of the
exact solution. A typical case is the regularity of the solution domain that may force
the exact solution not to be in W 3,p or H 3 even for the best possible coe#cient A
and source term f , such as constant functions.
It has been noticed that the regularity of the source term may a#ect the convergence
rate of an FVE solution. The counterexample in [18] showed that the FVE
solution with the conforming linear elements cannot have the optimal L 2 convergence
rate if the exact solution is in H 2 but the source term f is only in L 2 . On the other
hand, the author of [6] found an optimal error estimate for the FVE solution with the
nonconforming Crouzeix-Raviart linear element under the assumption that the exact
solution is in H 2 and the source term f is in H 1 , but did not state whether this H 1
regularity of f is necessary for the FVE method presented there.
The central aim of this paper is to show, by both error estimates and counterexamples,
how the regularity of the source term f can affect the convergence rate of the
FVE solution with conforming linear elements. The results indicate that, unlike the
finite element method, the H^2 regularity of the exact solution alone cannot guarantee
the optimal convergence rate of the conforming linear FVE method if the source
term has a regularity worse than H^1, assuming that the coefficient is smooth enough.
Namely, we will present the following error estimate:
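    \|u - u_h\|_0 \le C h^{1+\beta} \big( \|u\|_2 + \|f\|_{\beta} \big), \qquad 0 < \beta \le 1
(reconstructed here in the form suggested by Theorem 3.5 and Corollary 3.6 below),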
which leads to the optimal convergence rate of the FVE method only if f \in H^{\beta} with
\beta \ge 1, i.e., only if f is at least in H^1. Note first that, except for special cases such as
when the dimension of \Omega is one or the solution domain has a smooth enough boundary,
the H^1 regularity of the source term does not automatically imply the H^3 regularity of the
exact solution. On
the other hand, the H 3 regularity of the exact solution will lead to the H 1 regularity
of the source term when the coe#cient is smooth enough, and this error estimate
reduces to one similar to estimates obtained in [11, 20]. Also, this error estimate is
optimal from the point of view of the best possible convergence rate and the regularity
of the exact solution. Moreover, counterexamples given in this paper indicate that
the regularity of the source term cannot be reduced. Hence, we believe this is a more
general error estimate than those in the literature.
In fact, the FVE method is a Petrov-Galerkin finite element method in which
the test functions are piecewise constant. As we will see later, the nonsmoothness in
the test function demands a stronger regularity of the source term than the Galerkin
finite element method. Also, our view of the FVE method as a Petrov-Galerkin finite
element method suggests that we treat the FVE method as a perturbation of the
Galerkin finite element method [6, 20] so that we can derive optimal-order L 2 , H 1 ,
and L # error estimates with a minimal regularity requirement just like finite element
methods except for the additional smoothness assumption on the source term f . This
error estimation framework also enables us to investigate superconvergence of the FVE
method in both H 1 and W 1,# norms using the regularized Green's functions [23, 29]
and to obtain the uniform convergence of the FVE method similar to that in [24]
for the finite element method. To summarize, we observe that the FVE method not
only preserves the local conservation of certain quantities of the solution (problem
but also has optimal-order convergence rates in all usual norms. The
additional smoothness requirement on the source term f is necessary due to the
formulation of the method.
The results of this paper can easily be extended to cover more complicated models.
For example, most of the results and the analysis framework remain valid if the differential
equation contains a convection term \nabla \cdot (b\,u) (see [21] and [22]), and the symmetry
of the tensor coefficient A(x) is not critical. Also, one may consider Neumann and
Robin boundary conditions on the whole or a part of the boundary \partial\Omega. In fact, the
FVE method was introduced in [2] as a consistent and systematic way to handle the
flux boundary conditions for finite difference methods. We also refer readers to [1, 19]
for FVE approximations of nonlinear problems, to [12] for an immersed FVE method
to treat boundary value problems with discontinuous coefficients, and to [13] for the
mortar FVE methods with domain decomposition.
This paper is organized as follows. In section 2, we introduce some notation,
formulate our FVE approximations in piecewise linear finite element spaces defined
on a triangulation, and recall some basic estimates from the literature. All error
estimates are presented in the pertinent subsections of section 3. Section 4 is devoted
to counterexamples demonstrating that smoothness of the source term is necessary in
order for the FVE method to have the optimal-order convergence rate.
2. Preliminaries.
2.1. Basic notation. We will use the standard notation W^{s,p}(\Omega) for Sobolev spaces
consisting of functions that have generalized derivatives of order s in the space L^p(\Omega).
The norm of W^{s,p}(\Omega) is defined by
    \|v\|_{s,p,\Omega} := \Big( \sum_{|\alpha| \le s} \|D^{\alpha} v\|^p_{L^p(\Omega)} \Big)^{1/p},
with the standard modification for p = \infty. In order to simplify the notation, we denote
H^s(\Omega) := W^{s,2}(\Omega) and skip the indices p = 2 and \Omega whenever possible; i.e., we will use
\|v\|_s := \|v\|_{s,2,\Omega}. We denote by H^1_0(\Omega) the subspace of H^1(\Omega) of functions
vanishing on the boundary \partial\Omega in the sense of traces. Finally, H^{-1}(\Omega) denotes the
space of all bounded linear functionals on H^1_0(\Omega). For a functional f \in H^{-1}(\Omega), its
action on a function u \in H^1_0(\Omega) is denoted by (f, u), which represents the duality
pairing between H^{-1}(\Omega) and H^1_0(\Omega). To avoid confusion, we use (\cdot,\cdot) to denote both
the L^2(\Omega) inner product and the duality pairing between H^{-1}(\Omega) and H^1_0(\Omega).
For the polygonal domain \Omega we now consider a quasi-uniform triangulation T_h
consisting of closed triangular elements K such that \bar\Omega = \cup_{K \in T_h} K. We will use N_h to
denote the set of all nodes or vertices of T_h,
    N_h := \{ p : p \text{ is a vertex of an element } K \in T_h \},
and we let N_h^0 := N_h \cap \Omega denote the set of interior vertices.
For a vertex x_i \in N_h, we denote by \Pi(i) the index set of
those vertices that, along with x_i, are in some element of T_h.
We then introduce a dual mesh T #
h based on T h ; the elements of T #
h are called
control volumes. There are various ways to introduce the dual mesh. Almost all
approaches can be described by the following general scheme: In each element K # T h
consisting of vertices x i , x j , and x k , select a point q in K, and select a point x ij on
each of the three edges x i x j of K. Then connect q to the points x ij by straight lines
# ij,K . Then for a vertex x i we let V i be the polygon whose edges are # ij,K in which x i
is a vertex of the element K. We call V i a control volume centered at x i . Obviously,
we have
and the dual mesh T #
h is then defined as the collection of these control volumes.
Figure
1 gives a sketch of a control volume centered at a vertex x i .
We call the control volume mesh T #
h regular or quasi-uniform if there exists a
positive constant C > 0 such that
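    C^{-1} h^2 \le \operatorname{meas}(V_i) \le C h^2 \quad \text{for all } V_i \in T_h^*
(presumably an inequality of this form);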
here h is the maximum diameter of all elements
There are various ways to introduce a regular dual mesh T #
h depending on the
choices of the point q in an element K # T h and the points x ij on its edges. In
Fig. 1. Control volumes with barycenter as internal point and interface # ij of V i and V j .
this paper, we use a popular configuration in which q is chosen to be the barycenter
of an element K # T h , and the points x ij are chosen to be the midpoints of the
edges of K. This type of control volume can be introduced for any triangulation
T h and leads to relatively simple calculations for both two- and three-dimensional
problems. In addition, if T h is locally regular, i.e., there is a constant C such that
dual mesh
h is also locally regular. Other dual meshes also may be used. For example, the
analysis and results of this paper for all the error estimates in the H 1 norm are still
valid if the dual mesh is of the so-called Voronoi type [21].
2.2. The FVE method. We now let S_h be the standard linear finite element
space defined on the triangulation T_h,
    S_h := \{ v \in C(\bar\Omega) : v|_K \text{ is linear for all } K \in T_h \text{ and } v|_{\partial\Omega} = 0 \},
and its dual volume element space S_h^*,
    S_h^* := \{ v \in L^2(\Omega) : v|_V \text{ is constant for all } V \in T_h^*, \ v|_V = 0 \text{ if the vertex of } V \text{ lies on } \partial\Omega \}.
Obviously, S_h = \operatorname{span}\{ \phi_i : x_i \in N_h^0 \} and S_h^* = \operatorname{span}\{ \chi_i : x_i \in N_h^0 \}, where
\phi_i are the standard nodal basis functions associated with the node x_i, and \chi_i are the
characteristic functions of the volume V_i. Let I_h : C(\bar\Omega) \to S_h and I_h^* : C(\bar\Omega) \to S_h^*
be the usual interpolation operators, i.e.,
    I_h u := \sum_{x_i \in N_h^0} u(x_i)\,\phi_i(x), \qquad I_h^* u := \sum_{x_i \in N_h^0} u(x_i)\,\chi_i(x).
Then, the FVE approximation u h of (1.1) is defined as a solution to the following
problem: Find u h # S h such that
or
h .
Here the bilinear form a(u, v) is defined as follows:
where n is the outer-normal vector of the involved integration domain. Note that the
bilinear form a(u, v) has different defining formulas according to the function spaces
involved. We hope that this will not lead to serious confusion but rather will simplify
tremendously the notation and the overall exposition of the material.
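(For orientation, the scheme and the bilinear form are presumably of the standard form used for linear FVE methods: find u_h \in S_h such that
    a(u_h, I_h^* v_h) = (f, I_h^* v_h) \quad \text{for all } v_h \in S_h,
with
    a(u, v) := - \sum_{x_i \in N_h^0} v(x_i) \int_{\partial V_i} (A \nabla u) \cdot n \, ds, \qquad v \in S_h^*,
where n is the outer normal of the control volume V_i.)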
To describe features of the bilinear forms defined in (2.3), we first define some
discrete norms on S h and S # h ,
is the distance between x i and x j .
In the lemmas below, we assume that the lines of discontinuity (if any) of the
matrix A(x) are aligned with edges of the elements in the triangulation T h and that
the entries of the matrix A(x) are C 1 -functions over each element of T h .
Lemma 2.1 (see, e.g., [7, 21]). There exist two positive constants C 0 , C 1 > 0,
independent of h, such that
Lemma 2.2 (see, e.g., [7, 21]). There exist two positive constants C 0 , C 1 > 0,
independent of h and h 0 > 0, such that for all 0 < h # h 0 ,
3. Error estimates for the FVE method.
3.1. Optimal-order H 1 error estimates. We first consider the error of the
FVE solution u h in the H 1 norm. We start with the following two lemmas.
Lemma 3.1. For any u h , v h # S h , we have
with
j#Nh
and
Moreover, if A is in W
1,# , then there is a positive constant C > 0, independent
of h, such that
Proof. For the proof, see [12, 13].
Lemma 3.2. Assume that u h is the FVE solution defined by (2.1). Then we have
Proof. The proof follows directly from Lemma 3.1.
Theorem 3.3. Assume that u and u_h are the solutions of (1.1) and (2.1), respectively,
that u \in H^{1+\alpha}(\Omega), f \in H^{-1+\alpha}(\Omega) with 0 < \alpha \le 1, and A \in W^{1,\infty}.
Then we have
    |u - u_h|_1 \le C h^{\alpha} \big( \|f\|_{-1+\alpha} + \|u\|_{1+\alpha} \big).
Proof. By (3.1) and (1.1), we see that for #
Notice that from Lemma 2.2 and the approximation theory we have
the proof is then completed by combining these inequalities.
Remark. The main idea in the proof above is motivated by [6], and is somewhat
different from the ideas in [3, 17, 20, 21, 25]. The approach is also more direct and
simpler because the key identity (3.2) allows us to employ the standard error estimation
procedures developed for finite element methods. In particular, the estimate for
||I_h u - u_h|| is not needed in this proof. Moreover, the estimate here describes how
the regularities of the exact solution and the source term can independently affect the
accuracy of the FVE solution.
3.2. Optimal-order L 2 error estimates. In this section, we derive an optimal-
order L 2 error estimate for the FVE method with the minimal regularity assumption
for the exact solution u. This error estimate also will show how the error in the L 2
norm depends on the regularity of the source term.
The following lemma gives another key feature of the bilinear form in the FVE
method.
Lemma 3.4. Assume that u h , v h # S h . Then we have
Proof. It follows from Green's formula that
and
j#Nh
Then the proof is completed by taking the di#erence of these two identities.
Theorem 3.5. Assume that u and u_h are the solutions of (1.1) and (2.1), respectively,
that u \in H^2(\Omega), f \in H^1(\Omega), and A \in W^{2,\infty}. Then there exists a
positive constant C > 0 such that
    \|u - u_h\|_0 \le C h^2 \big( \|u\|_2 + \|f\|_1 \big).
Proof. Let
0 be the solution of
# , and
# .
Then we have ||w|| 2 # ||u - u h || 0 . By Theorem 3.3 we have
Then by Lemma 3.4,
where the J i 's are defined for u h , w h # S h by
and the continuity of #u - n on each #K is used.
Since the dual mesh is formed by the barycenters, we have
(w h - I # h w h
so that
where f K is the average value of f on K. Similarly, using the fact that A # W 2,# ,
we have
For J 3 , according to the continuity of #u - n and the shape of the control volume,
we have
AK is a function designed in a piecewise manner such that for any edge E of a
triangle
and x c is the middle point of E. Since |A(x)-
AK | # h||A|| 1,# , we have from Theorem
3.3 that
Thus, it follows by taking w
therefore, we have
and the proof is completed.
Corollary 3.6. Assume that u \in H^2(\Omega), f \in H^{\beta}(\Omega) with 0 < \beta \le 1, and
A \in W^{2,\infty}. Then we have
    \|u - u_h\|_0 \le C h^{1+\beta} \big( \|u\|_2 + \|f\|_{\beta} \big).    (3.7)
Proof. Let f h be the L 2 projection of f into S h and consider S(u,
linear operator from H s
s > 0. For
any (u, f) # H s
-H -1+s , we let
-1+s .
Then, by Theorem 3.5, we have
Hence, according to the theory of interpolation spaces [4, 5], we have
which in fact is (3.7).
Remark. When the source term f is in H 1 , the order of convergence in Theorem
3.5 is optimal with respect to the approximation capability of finite element space.
Note that, in many applications, the H^1 regularity of f does not imply the W^{3,p} or
H^3 regularity of the exact solution required by the L^2-norm error estimates in the
literature. Moreover, the counterexamples presented in section 4 indicate that
the regularity assumption on f cannot be reduced. The result in Theorem 3.5 reveals
how the regularities of the exact solution and the source term can affect the error of
the FVE solution in the L 2 norm, and this is a more general result than those in the
literature.
3.3. Superconvergence in the H 1 norm. In a way similar to the finite element
solution with linear elements, we can show that the FVE solution has a certain
superconvergence in the H 1 norm when the exact solution has a stronger regularity
and the partition used has a better quality. Specifically, throughout this subsection
we assume that the involved partition for the FVE solution is uniform or piecewise
uniform without any interior meeting points. This requirement might be relaxed (see,
for example, [29]), but we would rather use this simpler assumption to present our
basic idea.
We first recall the following superconvergence estimates for the Lagrange interpolation
[9, 28, 29, 30] from finite element theory.
Lemma 3.7. Assume that u # W 3,p
0(# . We have
Theorem 3.8. Assume that f # H
2,# .
Then we have
Proof. It follows from Lemma 3.7 that
Following a similar argument used in the proof of Theorem 3.5, we see that
because I h u - u h is in S h . The result of this theorem follows by combining these two
inequalities.
We can use one of the applications of the above superconvergence property of the
FVE solution to obtain a maximum norm error estimate.
Corollary 3.9. Under the assumptions of Theorem 3.8 and u # W 2,#
3(# , we have
logh
Proof. The proof follows from Theorem 3.8 and from the approximation theory
stating that
logh
logh
We remark that this result is not optimal with respect to the regularity required
on the exact solution u. This excessive regularity can be removed according to the
result in the following subsection.
3.4. Error estimates in maximum norm. Now we turn to the L # norm and
W 1,# norm error estimates for the FVE solution. First, we recall from [10, 16, 23, 29]
the definition and estimates on the regularized Green's functions.
For a point
2(# to be the
solution of the equation
in# ,
is a smoothed #-function associated with the point z, which has the
following properties:
Let G z
h be the finite element approximation of the regularized Green's function,
i.e.,
a(G z
-G z
Following [29], for a given point z
# we define # z G z by
G z+#z
-G z
|#z|
for any fixed direction L in R 2 , where #z//L means that #z is parallel to L. Clearly,
# z G z satisfies
The finite element approximation # z G z
h of # z G z is then defined by
It is well known that the functions G z and # z G z have the following properties [29]:
For any w # H 1
where P h is an L 2 -projection operator on S h , i.e.,
Moreover, the following estimates have been established in the literature
[10, 16, 23, 29]:
#G z
-G z
# z G z
||G z
||# z G z
#u# 2,#
with constant C > 0 independent of h and z.
First, let us consider the W 1,# norm error estimate.
Theorem 3.10. Assume that u # W
# , and A # W
1,# .
Then there exist positive constants C > 0 and h 0 > 0 independent of u such that for
Proof. It follows from (3.8) that
For the second term on the right-hand side, we have
(f, # z G z
logh
For the third term, by the definition of E h given in Lemma 3.1 and the fact that # z G z
is a piecewise linear polynomial, we have
logh ||u|| 1,# .
Thus, we obtain
logh
so that we have for some h 0 > 0, such that 0 < h # h 0 ,
logh
Applying this inequality and (3.13) in
leads to the result of this theorem.
The following theorem gives a maximum norm error estimate for the FVE
solution.
Theorem 3.11. Assume that u # W
1,# , and A # W
2,# .
Then there exist constants C > 0 and h 0 > 0, independent of u, such that for all
Proof. We follow an idea similar to the proof of the previous theorem, but we
now use the regularized Green's function G z and its finite element approximation G z
as follows:
-G z
-G z
-G z
The functionals J 1 , J 2 , and J 3 above are defined in the same way as given in the proof
of Theorem 3.5. For J 1 (u h , G z
h ), from (3.6) we have
||G z
logh ||f || 1,# .
Similarly, we have
||G z
We know by Theorem 3.10 that
# Ch logh
Therefore, there exists a small h 0 > 0 such that for 0 < h # h 0 ,
logh
As for J 3 (u h , G z
), we note that G z
is a piecewise linear polynomial and
Thus, it is easy to see from Theorem 3.10 and (3.11) that
||G z
logh
Combining the estimates obtained above for the J i 's, we have
logh
This together with (3.13) completes the proof.
The following theorem gives a superconvergence property in the maximum norm
for the FVE solution.
Theorem 3.12. Under the same conditions as in Theorem 3.11, we have
Proof. It follows from the properties of # z G z
h and # z G z and from Lemma 3.7 that
||u|| 3,# z G z
We see from (3.6) and (3.11) that
logh ||f || 1,# .
When h > 0 is small, we also have
logh
For
h ), we have
|# z G z
because # z G z
h is piecewise linear in each element K # T h . Finally, the proof is
completed by combining the above estimates.
3.5. Uniform convergence for u in H^1_0. In many applications, the exact
solution u of (1.1) may be in the space H^1_0(\Omega) but not in H^{1+\epsilon}(\Omega) for any \epsilon > 0. In
this situation, the authors of [24] showed that for any \epsilon > 0, there exists h_0 > 0
such that for all 0 < h \le h_0, we have
for the Galerkin finite element solution u h # S h (or the Ritz projection of u into S h
of the exact solution of (1.1)). This implies that u h converges to u uniformly even
though there is no order of convergence for u h .
The following theorem shows that the FVE solution also has this uniform convergence
feature.
Theorem 3.13. Assume that A is uniformly continuous and f # L
. Let
0(# and u h # S h be the solutions of (1.1) and (2.1), respectively. Then for any
# > 0, there exists h such that for all 0 < h # h # , the following holds:
Proof. As in the proof in Theorem 3.3, we have
uniformly continuous in #
for any # 0 > 0, there exists h
. Thus, by Lemma 3.7 we can take h # (0, h 0 ) to obtain
is defined in Lemma 3.1. By Lemma 3.2, we have
Thus it follows from the triangle inequality that
Lemma 2 of [24] indicates that for any # 1 > 0, there exists h
Notice that the constant C > 0 above is independent of u, f , and A; therefore, the
theorem follows from the last two inequalities.
4. Counterexamples. In this section, we will present two examples to show
that, when the source term f(x, y) is only in L
2(#7 the FVE solution generally cannot
have the optimal second-order convergence rate even if the exact solution u(x, y) has
the usual H 2 regularity. The first example is based on theoretical error estimates,
while the second is presented through numerical computations. We also provide an
example to corroborate the optimal error estimate obtained in this paper under the
condition that the exact solution u is in H 2 and the source term f is in H 1 .
4.1. A one-dimensional example. First, we consider an example in one dimension:
    -u''(x) = f(x) \quad \text{in } (0,1), \qquad u(0) = u(1) = 0,    (4.1)
with a source term behaving like f(x) \sim x^{-\beta}, which is in L^2(0,1)
but is not in H^1(0,1) if 0 < \beta < 1/2. Clearly this problem has an
exact solution behaving like x^{2-\beta} near the origin (for instance,
u(x) = (x - x^{2-\beta})/((1-\beta)(2-\beta)) when f(x) = x^{-\beta}),
which is in the space H^2(0,1).
Let T_h be the uniform partition of the interval [0,1] with nodes x_i = ih,
i = 0, 1, \dots, N, where h = 1/N. Let S_h be the piecewise
linear finite element space, let u_f \in S_h be the finite element solution of (4.1),
and let u_h be the FVE solution. Then we have
with e
#h
Our main task is to show that there exists a constant C > 0 such that
This inequality and the optimal-order bound for the finite element error u - u_f
together imply that the FVE solution cannot have the optimal L^2-norm convergence
rate for 0 < \beta < 1/2.
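The reduced rate can also be observed numerically. The following Python sketch (our own script, written under the assumptions of this subsection; the function names are ours) assembles the one-dimensional linear FVE scheme for -u'' = x^{-\beta}, where the right-hand side entries are the exact control-volume integrals of f, and prints the L^2 errors for a sequence of mesh sizes so that the observed order can be compared with O(h^2).

import numpy as np

def fve_solve(N, beta):
    # linear FVE scheme for -u'' = x**(-beta) on (0,1), u(0) = u(1) = 0, uniform mesh
    h = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    xl, xr = x[1:N] - 0.5 * h, x[1:N] + 0.5 * h                    # control volumes V_i
    rhs = (xr ** (1.0 - beta) - xl ** (1.0 - beta)) / (1.0 - beta)  # int_{V_i} x**-beta dx
    A = (2.0 * np.eye(N - 1) - np.eye(N - 1, k=1) - np.eye(N - 1, k=-1)) / h
    u = np.zeros(N + 1)
    u[1:N] = np.linalg.solve(A, rhs)                                # flux balance on each V_i
    return x, u

def l2_error(x, u, beta, m=40):
    # exact solution u(x) = (x - x**(2-beta)) / ((1-beta)(2-beta))
    exact = lambda t: (t - t ** (2.0 - beta)) / ((1.0 - beta) * (2.0 - beta))
    err2 = 0.0
    for i in range(len(x) - 1):
        t = np.linspace(x[i], x[i + 1], m)
        uh = u[i] + (u[i + 1] - u[i]) * (t - x[i]) / (x[i + 1] - x[i])
        err2 += (x[i + 1] - x[i]) * np.mean((exact(t) - uh) ** 2)
    return np.sqrt(err2)

beta = 0.25
errors = [(N, l2_error(*fve_solve(N, beta), beta)) for N in (64, 128, 256, 512)]
for (N1, e1), (N2, e2) in zip(errors, errors[1:]):
    print(N2, e2, "observed order:", np.log(e1 / e2) / np.log(2.0))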
We start with the estimates of the error function e(x) at the nodes. Let G(x, y)
be the Green's function defined by
Then, we have
x k-1/2
xk
Now we will estimate the J l 's one by one under the assumptions that
# .
For J 1 and J 6 it easily follows from a simple calculation that
For J 5 , using the definition of x #
h and integration by parts, we have
x -# dx #
x 1-#
x k+1/2
x 1-# dx
# .
Note that
Thus there is a positive constant C 5 independent of h such that
because of the error estimate for the trapezoidal quadrature formula. Now consider
J 3 and J 4 . First rewrite J 3
x k-1/2
xk
x k-1/2
x k-1/2
xk
x k-1/2
x k-1/2
)xdx
Clearly, we have
and
Hence
For calculation similar to that for J 5 we have
x 1-#
x 1/2
x 1-# dx
# .
Letting applying the error formula for the trapezoidal quadrature
rule, we have
x 1/2
x 1/2
x
x 1/2
x 1/2
Hence
For J 7 we have
x k-1/2
xk
- 1. It is obvious that
Hence
Finally, it follows from the above estimates for the J i 's that there is a positive constant
independent of h, such that for all x k # [1/3, 2/3],
which in turn implies that
for all small h > 0 due to the equivalence of the discrete and continuous norms on
S h given in Lemma 2.1. This clearly indicates that the convergence rate of the FVE
solution for this example cannot be O(h 2
On the other hand, our discussion in subsection 3.2 shows that the FVE solution
can have the optimal convergence rate when the exact solution u is in H 2 and the
source term f is in H 1 . This is supported by the following example. We consider the
following boundary value problem:
where
a
a
Table
errors of the FVE solutions for various partition sizes h.
The boundary conditions are chosen so that
dt
is the exact solution to this boundary value problem. Note that u is piecewise smooth,
u' is continuous, but u'' is discontinuous at an interior point. Hence, in this example,
the right-hand side function f is in H^1(0, 1), but the exact solution to the boundary
value problem is only in H^2(0, 1).
corresponding to various mesh sizes h are listed in Table 1. The involved calculations
were carried out such that is one of the mesh points in the partitions used.
Linear regression indicates that the data in this table satisfy a relation of the form
\|u - u_h\|_0 \approx C h^{p} with p close to 2, which suggests the optimal convergence rate,
and the data are in agreement with the error estimate obtained in subsection 3.2.
4.2. A two-dimensional example. We consider the following boundary value
problem:
where \Omega is the unit square (0, 1) \times (0, 1). It is easy to see that the exact solution to
this boundary value problem is in H^2 but not in H^3(\Omega); on the other hand, the source
term f is just in L^2(\Omega).
We have applied the FVE method (2.1) to generate the FVE solution u h (x, y)
to this boundary value problem by the usual uniform partition T h of the unit square
with the partition size h. Due to the lack of regularity in the source term, an exact
Table
Errors of the FVE solutions for various partition sizes h.
integration formula is used to carry out all the quadratures in (2.1) that involve the
source term f(x, y). In fact, for each triangle \triangle A_1 A_2 A_3 with vertices A_1, A_2, A_3,
the integral \int_{\triangle A_1 A_2 A_3} f(x)\,dx can be evaluated in closed form.
Note that this closed-form formula is valid only if the vertices of the triangle \triangle A_1 A_2 A_3 have
distinct coordinate values. This is true when \triangle A_1 A_2 A_3 is a triangle used in the
integration over a control volume.
Table
2 contains the errors of the FVE solutions for this boundary problem with
various typical partition sizes h. In this table, the reported quantity is the usual L^2
error \|u - u_h\|_0 of the FVE solution u_h(x, y). Obviously, the FVE solutions
in these computations do not seem to have the standard second-order convergence
because the error is not reduced by a factor of 4 when the partition size is reduced by
a factor of 2. Also see the counterexample in [18].
5. Conclusion. In this paper, we have considered the accuracy of FVE methods
for solving second-order elliptic boundary value problems. The approach presented
herein treats the FVE method, which combines ideas from traditional finite element and
finite difference methods, as a variation of the Galerkin finite element method, and it
reveals how the regularities of the exact solution and of the source term affect the
accuracy of FVE methods. Optimal-order error estimates and superconvergence also have been discussed. The
examples presented above show that the FVE method cannot have the standard O(h 2 )
convergence rate in the L 2 norm when the source term has the minimum regularity
in L 2 , even if the exact solution is in H 2 .
--R
A box scheme for coupled systems resulting from microsensor thermistor problems
A new finite-element formulation for convection-di#usion problems
Some error estimates for the box method
Notes Math.
The Analysis of
A finite Volume
On the finite Volume
On the accuracy of the finite Volume
Element analysis method and superconvergence
High Accuracy Theory of Finite Element Methods
estimates in L 2
The immersed finite Volume
The Mortar Finite Volume
Finite Volume
Finite Volume
Eine
On first and second order box schemes
On the finite Volume
Piecewise linear Petrov-Galerkin error estimates for the box method
Generalized Di
Finite Volume
Finite Volume
Some optimal error estimates for piecewise linear finite element approximations
Some new estimates for Ritz-Galerkin methods with minimal regularity assumptions
Homogeneous di
Homogeneous di
Superconvergence in Galerkin Finite Element Methods
Superconvergence Theory for Finite Element Methods
A survey of superconvergence techniques in finite element methods
--TR
--CTR
Zhoufeng Wang , Zhiyue Zhang, The characteristic finite volume element methods for the two-dimensional generalized nerve conduction equation, Neural, Parallel & Scientific Computations, v.15 n.1, p.27-44, March 2007
Rajen K. Sinha , Jrgen Geiser, Error estimates for finite volume element methods for convection-diffusion-reaction equations, Applied Numerical Mathematics, v.57 n.1, p.59-72, January 2007
Chunjia Bi , Hongxing Rui, Uniform convergence of finite volume element method with Crouzeix-Raviart element for non-self-adjoint and indefinite elliptic problems, Journal of Computational and Applied Mathematics, v.200 n.2, p.555-565, March, 2007
Guoliang He , Yinnian He, The finite volume method based on stabilized finite element for the stationary Navier-Stokes problem, Journal of Computational and Applied Mathematics, v.205 n.1, p.651-665, August, 2007 | finite volume;error estimates;elliptic;counterexamples |
588405 | A Mortar Finite Element Method Using Dual Spaces for the Lagrange Multiplier. | The mortar finite element method allows the coupling of different discretization schemes and triangulations across subregion boundaries. In the original mortar approach the matching at the interface is realized by enforcing an orthogonality relation between the jump and a modified trace space which serves as a space of Lagrange multipliers. In this paper, this Lagrange multiplier space is replaced by a dual space without losing the optimality of the method. The advantage of this new approach is that the matching condition is much easier to realize. In particular, all the basis functions of the new method are supported in a few elements. The mortar map can be represented by a diagonal matrix; in the standard mortar method a linear system of equations must be solved. The problem is considered in a positive definite nonconforming variational as well as an equivalent saddle-point formulation. | Introduction
. Discretization methods based on domain decomposition techniques
are powerful tools for the numerical approximation of partial differential equa-
tions. The coupling of different discretization schemes or of nonmatching triangulations
along interior interfaces can be analyzed within the framework of the mortar
methods [6, 7]. In particular, for time dependent problems, diffusion coefficients with
jumps, problems with local anisotropies as well as corner singularities, these domain
decomposition techniques provide a more flexible approach than standard conforming
formulations. One main characteristic of such methods is that the condition of
pointwise continuity across the interfaces is replaced by a weaker one. In a standard
primal approach, an adequate weak continuity condition can be expressed by appropriate
orthogonality relations of the jumps of the traces across the interfaces of the
decomposition of the domain [6, 7]. If a saddle point formulation arising from a mixed
finite element discretization is used, the jumps of the normal components of the fluxes
are relevant [29]. To obtain optimal results, the consistency error should be at least of
the same order as the best approximation error. Most importantly, the quality of the
a priori error bounds depends strongly on the choice of weak continuity conditions at
the interfaces.
Section 2 contains a short overview of the mortar finite element method restricted
to the coupling of P 1 -Lagrangian finite elements and a geometrically conforming sub-division
of the given region. We briefly review the definition of the discrete Lagrange
multiplier space and the weak continuity condition imposed on the product space as
it is given in the literature. In Section 3, we introduce local dual basis functions,
which span the modified Lagrange multiplier space. We also give an explicit formula
of projection-like operators and establish stability estimates as well as approximation
properties. Section 4 is devoted to the proof of the optimality of the modified
nonconforming variational problem. It is shown that we can define a nodal basis
function satisfying the constraints at the interface and which at the same time has
local support. This is a great advantage of this modified method compared with the
Math. Institut, Universitat Augsburg, Universitatsstr. 14, D-86 159 Augsburg, Germany.
Email: wohlmuth@math.uni-augsburg.de, http://wwwhoppe.math.uni-augsburg.de/~wohlmuth
standard mortar methods. Central results such as uniform ellipticity, approximation
properties and consistency error are given in separate lemmas. A saddle point formu-
lation, which is equivalent to these nonconforming variational problems is considered
in Section 5. Here, the weak continuity condition at the interface enters explicitly
in the variational formulation. As in the standard mortar case, we obtain a priori
estimates for the discretization error for the Lagrange multiplier. Here, we analyze
the error in the dual norm of H 1=2
00 , as well as in a mesh dependent L 2 -norm. Finally,
in Section 6, numerical results indicate that the discretization errors are comparable
with the errors obtained when using the original mortar method.
2. Problem setting. We consider the following model problem
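    -\operatorname{div}(a \nabla u) + b\, u = f \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega,    (2.1)
(presumably of this form, consistent with the coefficients a and b and the data f introduced next),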
where \Omega is a bounded, polygonal domain in \mathbb{R}^2 and f \in L^2(\Omega). Furthermore,
we assume a \in L^{\infty}(\Omega) to be a uniformly positive function and 0 \le b \in L^{\infty}(\Omega).
We will consider a non-overlapping decomposition
of\Omega into polyhedral subdo-
Each
subdomain\Omega k is associated with a family of shape regular simplicial triangulations
is the maximum of the diameters of the elements in
T hk . The sets of vertices and edges of the
subdomains\Omega k and
of\Omega are denoted by P hk ,
respectively. We use P 1 -conforming finite elements
individual subdomains and enforce the homogeneous Dirichlet boundary conditions
on
@\Omega k .
We restrict ourselves to the geometrical conforming situation where the intersection
between the boundary of any two different subdomains
l, is either
empty, a vertex, or a common edge. We only call it an interface in the latter case. The
mortar method is characterized by introducing Lagrange multiplier spaces given on
the interfaces. A suitable triangulation on the interface is necessary for the definition
of a discrete Lagrange multiplier space. Each interface
@\Omega k is associated with
a one dimensional triangulation, inherited either from T hk or from T h l . In general,
these triangulations do not coincide. The interface in question will be denoted by
its triangulation is given by that
and\Omega l , respectively. We call
the inherited one dimensional triangulation on \Gamma kl and \Gamma lk , \Sigma kl and \Sigma lk , respectively
with the elements of \Sigma kl and \Sigma lk being edges of T hk and T h l , respectively. We remark
that geometrically \Gamma lk and \Gamma kl are the same.
Thus, each
@\Omega k can be decomposed, without overlap, into
where
denotes the subset of f1; Kg such that
@\Omega k is an interface
for l 2
M(k): The union of all interfaces S can be decomposed uniquely in
A Mortar Method Using Dual Spaces 3
Here,
M(k) such that for each set fk; lg, 1 k K, l 2
l 2 M(k) or k 2 M(l) but not both. The elements of f\Gamma kl j 1 k K; l 2 M(k)g are
called the mortars and those of f\Gamma lk j 1 k K; l 2 M(k)g the non-mortars. The
choice of mortars and non-mortars is arbitrary but fixed. We note that the discrete
Lagrange multiplier space will be associated with the non-mortars. To simplify the
analysis, we will assume that the coefficients a and b are constant in each subdomain,
with a k := a
It is well known that the unconstrained product space
Y
is not suitable as a discretization of (2.1). We also note that in case of non-matching
meshes at the interfaces, it is in general not possible to construct a global continuous
space with optimal approximation properties. It is shown [6, 7] that weak constraints
across the interface are sufficient to guarantee an approximation and consistency error
of order h if the weak solution u is smooth enough. The nonconforming formulation
of the mortar method is given by:
Find \tilde u_h \in \tilde V_h such that
    a(\tilde u_h, v) = (f, v)_0, \qquad v \in \tilde V_h,    (2.2)
where a(v, w) := \sum_{k=1}^{K} \int_{\Omega_k} a \nabla v \cdot \nabla w + b\, v\, w \, dx,
for v, w \in \prod_{k=1}^{K} H^1(\Omega_k).
Here, the global space \tilde V_h is defined by
    \tilde V_h := \{ v \in X_h \mid b(v, \mu) = 0, \ \mu \in \tilde M_h \},
where the bilinear form b(\cdot, \cdot) is given by the duality pairing on S,
    b(v, \mu) := \sum_{k=1}^{K} \sum_{l \in M(k)} \langle [v], \mu \rangle_{\Gamma_{lk}}, \qquad
    v \in \prod_{k=1}^{K} H^1(\Omega_k), \ \mu \in \prod_{k=1}^{K} \prod_{l \in M(k)} \big( H^{1/2}_{00}(\Gamma_{lk}) \big)',
and [v] := v|_{\Omega_k} - v|_{\Omega_l} on \Gamma_{lk}. Here, \big( H^{1/2}_{00}(\Gamma_{lk}) \big)'
denotes the dual space of H^{1/2}_{00}(\Gamma_{lk}).
Of crucial importance is the suitable choice of \tilde M_h in (2.2),
    \tilde M_h := \prod_{k=1}^{K} \prod_{l \in M(k)} \tilde M_h(\Gamma_{lk}),
where in general the local space \tilde M_h(\Gamma_{lk}) is chosen as a modified trace space of the
finite element functions on the non-mortar subdomain \Omega_l; W_h(\Gamma_{lk}) denotes this trace
space on \Gamma_{lk}. The space \tilde M_h(\Gamma_{lk}) is a subspace of W_h(\Gamma_{lk}) of codimension 2, given by
    \tilde M_h(\Gamma_{lk}) := \{ \mu \in W_h(\Gamma_{lk}) \mid \mu|_\sigma \in P_0(\sigma)
    \ \text{if } \sigma \in \Sigma_{lk} \text{ contains an endpoint of } \Gamma_{lk} \},
and N_{lk} := \dim \tilde M_h(\Gamma_{lk}) = N_e - 1, where N_e denotes the number of elements in \Sigma_{lk}.
Here, we assume that N_e \ge 2. The nodal basis functions \{\phi_i\}_{i=1}^{N_{lk}} of \tilde M_h(\Gamma_{lk}) are
associated with the interior vertices of \Sigma_{lk}. The space \tilde M_h(\Gamma_{lk}) and
its nodal basis functions \{\phi_i\}_{i=1}^{N_{lk}}
are illustrated in Figure 2.1;
for a detailed analysis of f
Fig. 2.1. Lagrange multiplier space
Let us remark that continuity was imposed at the vertices of the decomposition
in the first papers about mortar methods. However, this condition can be removed
without loss of stability. Both these settings guarantee uniform ellipticity of the
bilinear form a(\Delta; \Delta) on e
as well as a best approximation error and a consistency
error of O(h) [6, 7]. Combining the Lemmas of Lax Milgram and Strang, it can be
shown that a unique solution of (2.2) exists and that the discretization error is of
order h if the solution of (2.1) is smooth; see [6, 7].
In a second, equivalent approach the space f
explicitly plays the role of a
Lagrange multiplier space. This approach is studied in [4] and used further in [11, 25,
26]. The resulting variational formulation gives rise to a saddle-point problem:
Find
M h such that
In particular, it can be easily seen that the first component of the solution of (2.3)
is the unique solution of (2.2). Observing that ~
h is an approximation of the normal
derivative of u on the interface, it makes sense to consider a priori estimates for
suitable norms. Here n lk is the outer unit normal
restricted to
lk . This issue was first addressed in [4] where a priori estimates in the (H 1=2
were established. Similar bounds are given in [26] for a weighted L 2 -norm. As in the
general saddle-point approach [13], the essential point is to establish adequate inf-sup
conditions; such bounds have been established with constants independent of h for
both these norms; see [4, 26].
In the following, all constants are generic depending on the local
ratio between the coefficients b and a, the aspect ratio of the elements and subdomains
but not on the mesh size and not on a. We use standard Sobolev notations, and \|\cdot\|_1
and |\cdot|_1 stand for the broken H^1-norm and semi-norm on \prod_{k=1}^{K} H^1(\Omega_k).
The dual space of a Hilbert space X
is denoted by X' and the associated dual norm is defined by
    \|\mu\|_{X'} := \sup_{v \in X,\, v \ne 0} \frac{\langle \mu, v \rangle}{\|v\|_X}.    (2.4)
A Mortar Method Using Dual Spaces 5
3. Dual basis functions. The crucial point for the unique solvability of (2.2)
and (2.3) is the definition of the discrete space f
M h . As we have seen, the discrete
space of Lagrange multipliers is closely related to the trace space in the earlier work on
mortar methods; these spaces are only modified in the neighborhood of the interface
boundaries where the degree of the elements of the test space is lower. We note that
it has been shown only recently, see [23], that for Pn -conforming finite elements the
finite dimensional space of piecewise polynomials of only degree can be used
instead of degree n in the definition of the Lagrange multiplier space without losing
the optimality of the discretization error . However, in none of these studies has
duality been used to construct an adequate finite element space for the approximation
of the Lagrange multiplier. We recall that the Lagrange multiplier in the continuous
setting represents the flux on the interfaces. Even if the weak solution of (2.1) is
does not have to be continuous on the interfaces. This observation has
motivated us to introduce a new type of discrete Lagrange multiplier space. We note
that local dual basis functions have been used in [22] to define global projection-like
operators which satisfy stability and approximation properties; in this paper we use
the same dual basis functions to define the discrete Lagrange multiplier space.
Let \sigma be an edge and \tilde Q(\sigma) be a polynomial space satisfying P_0(\sigma) \subset \tilde Q(\sigma).
Let \{\phi_{\sigma,i}\}_{i=1}^{N}, N \ge 2, be a basis satisfying
\int_\sigma \phi_{\sigma,i} \, ds \ne 0. We can then
define a dual basis \{\psi_{\sigma,i}\}_{i=1}^{N} by the following relation:
    \int_\sigma \psi_{\sigma,i}\, \phi_{\sigma,j} \, ds \;=\; \delta_{ij} \int_\sigma \phi_{\sigma,j} \, ds,
    \qquad 1 \le i, j \le N.    (3.5)
The definition (3.5) guarantees that f/ oe;i g N
is well defined. Each / oe;i can be written
as a linear combination of the OE oe;i , 1 i N and the coefficients are obtained by
solving a N \Theta N mass matrix system. Furthermore (3.5) yields
Z
oe
ds
and thus
1. The f/ oe;i g N
also form a linearly independent set. To see
this, let us assume that
R
oe
ds
R
oe
0:
As a consequence, we obtain e
Ng.
Let us consider the case that \{\phi_{\sigma,i}\}_{i=1}^{2} are the nodal basis functions of P_1(\sigma).
Then, the dual basis is given by
    \psi_{\sigma,1} = 2\phi_{\sigma,1} - \phi_{\sigma,2}, \qquad \psi_{\sigma,2} = 2\phi_{\sigma,2} - \phi_{\sigma,1}.
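As a concrete check of the biorthogonality relation (3.5) for this pair, the following small Python sketch (our own illustration; the helper names and the numerical quadrature are assumptions, not taken from the paper) verifies that the dual functions above satisfy \int_\sigma \psi_{\sigma,i} \phi_{\sigma,j} \, ds = \delta_{ij} \int_\sigma \phi_{\sigma,j} \, ds on a reference edge \sigma = (0, h).

import numpy as np

def integrate(y, x):
    # composite trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def check_biorthogonality(h=0.7, n=4001):
    x = np.linspace(0.0, h, n)
    phi = [1.0 - x / h, x / h]                        # P1 nodal basis on sigma = (0, h)
    psi = [2 * phi[0] - phi[1], 2 * phi[1] - phi[0]]  # dual basis functions
    for i in range(2):
        for j in range(2):
            lhs = integrate(psi[i] * phi[j], x)       # int_sigma psi_i phi_j ds
            rhs = (i == j) * integrate(phi[j], x)     # delta_ij * int_sigma phi_j ds
            assert abs(lhs - rhs) < 1e-6, (i, j, lhs, rhs)
    print("relation (3.5) holds for the dual pair on an edge of length", h)

check_biorthogonality()

The same edge-by-edge computation is what makes the coupling conditions diagonal in the modified method.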
Based on these observations, we introduce a global space M h (\Gamma lk ) on each non-
6 BARBARA I. WOHLMUTH
be the nodal basis function of f
introduced in Section 2. Then,
each OE i can be written as the sum of its local contributions
=: OE oe;i
where the local contributions are linearly independent. We set e
g. In particular by construction, it is guaranteed that P 0 (oe) ae
e
Using the local dual
basis functions on each oe, the global basis functions of M h (\Gamma lk ) are defined as
The support of / i is the same as that of OE i and the f/ i g N lk
form a linear independent
system. Figure 3.2 depicts the two different types of dual basis functions.
Fig. 3.2. Dual basis functions in the neighborhood of the boundary of \Gamma lk (left) and in the
interior (right)
Remark 3.1. The following global orthogonality relation holds
Z
Z
oe
Z
oe
Z
We note the similarity with (3.5).
The central point in the analysis of the consistency and approximation error will
be the construction of adequate projection-like operators. We refer to [6, 7] for the
standard mortar approach. Here, we use different operators associated with M_h(\Gamma_{lk}),
\tilde M_h(\Gamma_{lk}), and W_h(\Gamma_{lk}). The operator P_{lk} is defined by
    P_{lk} v := \sum_{i=1}^{N_{lk}} \frac{\int_{\Gamma_{lk}} v\, \phi_i \, ds}
    {\int_{\Gamma_{lk}} \phi_i\, \psi_i \, ds}\; \psi_i \in M_h(\Gamma_{lk}).    (3.7)
A dual operator Q_{lk}, mapping into \tilde M_h(\Gamma_{lk}), is now given by
    Q_{lk} v := \sum_{i=1}^{N_{lk}} \frac{\int_{\Gamma_{lk}} v\, \psi_i \, ds}
    {\int_{\Gamma_{lk}} \phi_i\, \psi_i \, ds}\; \phi_i.    (3.8)
A detailed discussion of this type of operator can be found in [22]. It is easy to see
that P_{lk} and Q_{lk}, restricted to M_h(\Gamma_{lk}) and \tilde M_h(\Gamma_{lk}),
respectively, are the identity.
A Mortar Method Using Dual Spaces 7
In addition, using (3.6), (3.7) and (3.8), we find that for v; w 2 L 2 (\Gamma lk ),
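    \int_{\Gamma_{lk}} (P_{lk} v)\, w \, ds \;=\; \int_{\Gamma_{lk}} v\, (Q_{lk} w) \, ds    (3.9)
(presumably of this form),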
and it therefore makes sense to call Q lk a dual operator to P lk . Furthermore, the
operators are L 2 -stable. We have
0;oe
R
ds
R
ds
R
where the domain D oe is defined by
Here, we have used the fact that D oe contains at most three elements and that
independently of the length of the edges. The same
type of estimate holds true for Q lk
0;oe
R
ds
R
ds
R
0;D oe
Thus, P lk and Q lk are L 2 -projection-like operators which preserve the constants.
Using (3.9) and (3.10), it is easy to establish an approximation result.
Lemma 3.2. There exist constants such that for
C
Proof. The proof of (3.12) follows by applying the Bramble-Hilbert Lemma and
using the stability (3.10) and the identity (3.9); it is important that the constants are
contained in the space M h (\Gamma lk ). For each v we define a constant c v in the following
way
Z
where jD oe j is the length of D oe . We remark that the constant c v depends only on the
values of v restricted on D oe . Now, by means of P lk c
oe jvj s;D oe :
The global estimate (3.12) is obtained by summing over all local contributions and
observing that each oe 0 is only contained in a fixed number of D oe .
Although dim f
we get the same type of estimate as (3.14)
for Q lk instead of P lk by using (3.11).
For the estimate (3.13) in the dual norm, we use the definition (2.4)
R
ds
R
ds
In a next step, we consider the last integral in more detail. Using (3.14) for Q lk
instead of P lk and setting
0;oe CjOEj 2
(D oe )
Summing over all oe 2 \Sigma lk and using that the sum over jOEj 2
is bounded by
Combining this upper bound with (3.14) gives (3.13).
4. Nonconforming formulation. Replacing the space f
M h in the definition of
e
we get a new nonconforming space
The original nonconforming variational problem (2.2) is then replaced by:
Find h such that
In what follows, we analyze the structure of an element v h 2 V h . Each v 2 X h
restricted to a non-mortar side \Gamma lk can be written as
are defined in Section 2 and OE 0 and OE N lk +1 are the nodal basis
functions of W h (\Gamma lk ) associated with the two endpoints of \Gamma lk . The following lemma
characterizes the elements of V h .
Lemma 4.1. Let v 2 X h restricted on \Gamma lk be given as in (4.17). Then,
and only if for each non-mortar \Gamma lk
R
(v
ds
R
ds
The proof follows easily from (4.17) and the global orthogonality relation (3.6).
As in case of e
h the values of a function v 2 V h at the nodal Lagrange interpolation
points in the interior, p i , 1 i N lk , of any non-mortar \Gamma lk are uniquely determined
by its values on the corresponding mortar side \Gamma kl and the values at the endpoints of
A Mortar Method Using Dual Spaces 9
. The nodal values in the interior of the non-mortars \Gamma lk are obtained by combining
(4.18) with a basis transformation. In particular, these values can be directly obtained
by the simple formula
    v|_{\Omega_l}(p_i) \;=\; \frac{\int_{\Gamma_{lk}} v|_{\Omega_k}\, \psi_i \, ds}
    {\int_{\Gamma_{lk}} \phi_i \, ds}, \qquad 2 \le i \le N_{lk} - 1.    (4.19)
For the two interior nodal points p 1 and pN lk next to the endpoints of \Gamma lk , we get
j\Omega l
R
(v
ds
R
ds
j\Omega l
(p N lk
R
(v
j\Omega l (pN lk +1 )OE N lk +1 ) ds
R
ds
Here, we have used that v
j\Omega l
j\Omega l
(p N lk +1 and that
are identically 1 on the edges next to the endpoints of \Gamma lk . We note that
by definition of the basis functions there exist
Z
Z
Z
Z
OE N lk ds:
If we have a closer look at the nodal basis functions of e
we realize that
there is a main difference in the structure of the basis functions. Figures 4.3 and
4.4 illustrate this difference for the special situation that we have a uniform but
nonmatching triangulation on the mortar and the non-mortar side.
Fig. 4.3. Nodal basis function on a mortar side (left) and on the non-mortar side in e
and in V h (right)
In
Figure
4.3, the mortar side is associated with the finer triangulation whereas
in
Figure
4.4 it is associated with the coarser one.
As in the standard finite element context, nodal basis functions can be defined for
contained in a circle of diameter Ch: This is in general not possible
for e
h . In the latter case, the support of a nodal basis function associated with a
nodal point on the mortar side is a strip of length j\Gamma lk j and width h, see Figure 4.5,
and the locality of the basis functions is lost.
We conclude this section, by establishing a priori bounds for the discretization
error. As in [6, 7] a mortar projection will be a basic tool in the analysis of the
best approximation error. We now use the new Lagrange multiplier space M h in
Fig. 4.4. Nodal basis function on a mortar side (left) and on the non-mortar side in e
and in V h (right)
Fig. 4.5. Support of a nodal basis function in e
the definition of suitable projection-like operators. For each non-mortar side the new
mortar projection \Pi_{lk} is given by: \Pi_{lk} v \in W_h(\Gamma_{lk}) vanishes at the two
endpoints of \Gamma_{lk} and satisfies
    \int_{\Gamma_{lk}} (v - \Pi_{lk} v)\, \mu \, ds = 0, \qquad \mu \in M_h(\Gamma_{lk}).
By using (4.19) and (4.20), it can be easily seen that the operator \Pi lk is well defined.
To analyze the approximation error, it is sufficient to show that the mortar projection
is uniformly stable in suitable norms. The stability in the L 2 - and H 1 -norms is given
in the following lemma.
Lemma 4.2. The mortar projection \Pi lk is L 2 - and H 1 -stable
Proof. Using the explicit representation (4.19) and (4.20) where v
has to be
replaced by v and v
j\Omega l
j\Omega l
(p N lk +1 ) have to be set to zero, (4.22) is obtained.
It can be easily seen that even the local estimate
holds true. By means of an inverse inequality, we find for each p 2 W h (\Gamma lk
satisfying
const. and
R
ds
R
ds
We remark that if @D oe " @ \Gamma lk 6= ;, then p was set to zero. However, due to the
boundary conditions of v we obtain kvk 0;D oe
Ch oe jvj 1;D oe in this case.
A Mortar Method Using Dual Spaces 11
The mortar projection can be extended to the space H 1=2
in the following
way:
Z
Then, an interpolation argument together with Lemma 4.2 gives the H 1=2
-stability
Ckvk
It is of interest to compute the stability constant in (4.22) in the special case of
a uniform triangulation of \Gamma lk with h := joej. Then,
are the two endpoints of oe. Using the mortar definition and summing
over all elements in \Sigma lk , we get
In general, the constants in a priori estimates depend on the coefficients. Here,
we will give a priori estimates which depends explicitly on the coefficient a. For each
ff k in the following way
sup
min
min
a j
~
sup
a j
We note that ff k and ~
ff k are bounded by 2 if the non-mortar side is chosen as that
with the smaller value of a.
The uniform ellipticity of the bilinear form a(\Delta; \Delta) on V h \Theta V h is important for
the a priori estimates. For the standard mortar space, it is well known, see [6, 7, 8].
Moreover in [8], it is shown that the bilinear form a(\Delta; \Delta) is uniform elliptic on Y \Theta Y ,
where
Y
1(\Omega
Z
The starting point of the proof is a suitable Poincar'e-Friedrichs type inequality. For
general considerations on Poincar'e-Friedrichs type inequalities in the mortar situation,
we refer to [24]. In [17, Theorem IV.1], it is shown that the ellipticity constant does
not depend on the number of subdomains. A similar estimate is given for the three
field formulation in [14]. We refer to [17] for a detailed analysis of the constants
in the a priori estimates in terms of the number of subdomains and their diameter.
Observing that V h is a subspace of Y , it is obvious that that the bilinear form a(\Delta; \Delta)
on V h \Theta V h is uniform elliptic.
4.1. Approximation property. To establish an approximation property for
h , we follow [6, 7]. One central point in the analysis is an extension theorem. In
[9], a discrete extension is used such that the H 1 -norm of the extension
on\Omega k is
bounded by a constant times the H 1=2 -norm on the boundary
@\Omega k . The support of
such an extension is in
general\Omega k and it is assumed that the triangulation is quasi-
uniform. However, it can be generalized to the locally quasi-uniform case. Combining
the approximation property of
using the mortar projection \Pi lk ,
we obtain the following lemma.
Lemma 4.3. Under the assumption that u 2
the best
approximation error is of order h s ,
vh 2Vh
k a k kuk 2
where the ff k are defined in (4.25).
Proof. The proof follows exactly the same lines as for e
h and the Laplace operator;
we therefore omit the details and refer to [6, 7]. For each
subdomain\Omega k , we use the
Lagrange interpolation operator I k . Then, we define w
I k v. We
note that w h is not in general, contained in V h . To obtain an element in V h , we
have to add appropriate corrections. For each interface \Gamma lk , we consider the jump
apply the mortar projection. The result is extended as a discrete
harmonic function into the interior
of\Omega l . Finally, we define
H l (\Pi lk [w h ])
where H l denotes the discrete harmonic extension operator
kH l
vk1;\Omega l Ckvk H 1(@\Omega l )
see [9, Lemma 5.1]. Here, \Pi lk [w h ] is extended by zero onto
vanishes
outside\Omega l . By construction, we have
Z
Z
and thus v h 2 V h .
A coloring argument yields,
a l k\Pi lk [w h ]k 2
C
a k h 2s
a l k\Pi lk [w h ]k 2
C
a k h 2s
a l
C
a k h 2s
a l
l kuk 2
C
ff k a k h 2s
A Mortar Method Using Dual Spaces 13
Here, we have used the stability of the harmonic extension; see [9], the stability of the
mortar projection (4.24) and the approximation property of the Lagrange interpolant.
4.2. Consistency error. The space V h is in general not a subspace of H 1
Therefore, we are in a nonconforming setting and in addition to uniform ellipticity and
the approximation property we need to consider the consistency error [10] to obtain
a stable and convergent finite element discretization. In Strang's second Lemma, the
discretization error is bounded by the best approximation error and the consistency
error [10].
Lemma 4.4. The consistency error for
[arun lk is of order h s
sup
R
a @u
@n lk
ds
k a k kuk 2
1where ff k is defined in (4.25).
Proof. The proof generalizes that given for e
V h in [6, 7]. Here, the Lagrange
multiplier space M h is used and we also consider the effect of discontinuous coefficients.
By the definition of V h , we have
Z
and thus
Z
a
@n lk
Z
(a
@n lk
@n lk
ds:
where P lk is defined in (3.7). Using a duality argument and the continuity of the
trace, we get
Z
a
@n lk
ds
where := a @u
@n lk
. To replace, in the last inequality, the H 1 -norm by the H 1 -semi-
norm, we take into account that
j\Omega l
where \Pi lk denotes the L 2 -projection onto piecewise constant functions on \Gamma lk . In the
duality argument, the H 1=2 -norm can therefore be replaced by the H 1=2 -semi-norm
kw
hj\Omega l
j\Omega l
C
j\Omega l
j\Omega l
Finally, Lemma 3.2, which states the approximation property of P lk in the (H 1=2
norm, yields
l jj H
l min(a l juj
14 BARBARA I. WOHLMUTH
Remark 4.5. In case that the coefficient a is smaller on the non-mortar side
then ff k is bounded by 2 independently of the jumps in a. Otherwise the upper bound
for ff k depends on the jumps in a. A possibly better bound might depend on the ratio of
the mesh size across the interface; see (4.25). However, numerical results have shown
that in case of adaptive mesh refinement controlled by an a posteriori error estimator,
remains bounded independently of the jump in the coefficients; see [26, 27].
Using Lemmas 4.3 and 4.4, we obtain a standard a priori estimate for the modified
mortar approach (4.16). Under the assumptions that [arun lk
ff k a k h 2(s\Gamma1)
Remark 4.6. The a priori estimates in the literature [6, 7] are often given in
the following form
This is weaker than the estimate (4.26), since for generally we only have
K
4.3. A priori estimates in the L 2 -norm. Finite element discretizations pro-
vide, in general, better a priori estimates in the L 2 -norm than in the energy norm.
In particular, if we assume H 2 -regularity, we have the following a priori estimate for
in the L 2 -norm
The proof can be found in [11] and is based on the Aubin-Nitsche trick. In addition,
the nonconformity of the discrete space has to be taken into account. An essential
role in the proof of the a priori bound is the following lemma. It shows a relation
between the jumps of an element across the interfaces and its nonconformity.
The same type of result for v 2 e
V h can be found in [26].
Lemma 4.7. The weighted L 2 -norm of the jumps of an element v 2 V h is bounded
by its nonconformity
a l
0;oe inf
~
Proof. The proof follows the same ideas as in case for v h 2 e
use
the orthogonality of the jump and the definition (3.8) to obtain
j\Omega l
Now, it is sufficient to consider an interface \Gamma lk at a time, and we find
a l
a l
j\Omega l
j\Omega l
(v
A Mortar Method Using Dual Spaces 15
Using (3.15) and the continuity of the trace operator, we get for each w
a l
a l jv
j\Omega l
a l jv
Summing over the subdomains and using the definition for ~
ff k give the assertion.
Using the dual problems:
Find h such that
gives
a @w
@n lk
ds
Z
a @u
@n lk
ds
Then, the H 2 -regularity, Lemma 3.2, Lemma 4.7, and observing that the jump of an
element in V h is orthogonal on M h yield
a l
0;oe
a l
0;oe
a l
0;oe
a l
0;oe
:= a @w
@n lk
. Using the a priori estimate for the energy norm (4.26), we
obtain an a priori estimate for the L 2 -norm. The following lemma gives the a priori
estimate for the modified mortar approach.
Lemma 4.8. Assuming H 2 -regularity, the discretization error in the L 2 -
norm is of order h 2 .
5. Saddle point formulation. A saddle point formulation for mortar methods
was introduced in [4]. In particular, a priori estimates involving the (H 1=2
for the Lagrange multiplier were established in that paper whereas estimates in a
weighted L 2 -norm were given in [26]. Here, we analyze the error in the Lagrange
multiplier for both norms and obtain a priori estimates of the same quality as for the
standard mortar approach.
The norm for the Lagrange multiplier is defined by
l2M(k)a l
Y
Y
The weight a \Gamma1
l is related to the fact that we use the energy norm for in the a
priori estimates.
Working within the saddle point framework, the approximation property on V h ,
which is given in Lemma 4.3, is a consequence of the approximation property on X h ,
the continuity of the bilinear form b(\Delta; \Delta), and an inf-sup condition [13]. A discrete
inf-sup condition is necessary to obtain a priori estimates for the Lagrange multiplier.
The saddle point problem associated with the new nonconforming formulation
(4.16) involves the space (X
We get a new saddle point
problem, with exactly the same structure as (2.3):
Find
The inf-sup condition, established in [4] for the pairing (X
Lemma 5.1. There exists a constant independent of h such that
sup
c:
Proof. Using the definition of the dual norm (2.4), we get
OE2H2
OE2H2
C sup
OE2H2
The maximizing element in W h (\Gamma lk
called
and a v lk 2 X h is defined in the following way
j\Omega n\Omega l
\Omega l
where OE h is extended by zero on
We then find
and a(v lk
21;\Omega
l
. Finally, we set
a l
and observe that a(v h ; v h only if coloring argument gives
Summing over all interfaces yields
C
l2M(k)a l
By construction, we have found for each
C
A Mortar Method Using Dual Spaces 17
The proof of the inf-sup condition (5.28) together with the approximation Lemma
3.2 and the first equation of the saddle point problem gives an a priori estimate similar
to (4.26) for the Lagrange multiplier.
Lemma 5.2. Under the assumptions u 2 Q K
[arun lk the following a priori estimate for the Lagrange multiplier holds true
C
ff k a k h 2(s\Gamma1)
Proof. Following [4] and using the first equation of the saddle point problem, we
get
Taking (5.29) into account, we find that the inf-sup condition even holds if the supremum
over X h is replaced by the supremum over a suitable subspace of X h . For the
proof of (5.30), we start with (5.29) and not with the inf-sup condition (5.28)
constructed as in the proof of Lemma 5.1. We recall
that w is defined as a linear combination of discrete harmonic functions
a l
where w
1. A coloring argument shows that
the energy norm of w is bounded by the H 1=2
00 -dual norm of h \Gamma h , moreover we
find
a l
a l
K
l2M(k)a l
combining (5.31) and (5.32), we obtain
kwk
Applying the triangle inequality, choosing h
using
we find that (3.13) yields, for
C
a l
C
ff k a k h 2
Here, we have used that restricted on \Gamma lk is arun lk and a trace theorem.
We note that in spite of Lemma 3.2 we cannot obtain a priori estimates of order h
for the norm of the dual of H 1=2 (S). This is due to the fact that the inf-sup condition
(5.28) cannot be established for that norm.
Remark 5.3. The a priori estimate (5.30) also holds if we replace the (H 1=2
norm by the weighted L 2 -norm
a l
Using (3.12) and the techniques of the proof of Lemma 5.2, it is sufficient to have
a discrete inf-sup condition similar to (5.28) for the weighted L 2 -norm, i.e.
sup
c:
The only difference in the proof is the definition of v lk . Instead of using a discrete
harmonic extension
onto\Omega l , we use a trivial extension by zero, i.e. we set all nodal
values on
@\Omega l n\Gamma lk and
on\Omega l to zero. Then, v lk is non zero only on a strip of length j\Gamma lk j
and width h l and a(v lk ; v lk ) is bounded form below and above by
a l
0;oe .
6. Numerical results. We get a priori estimates of the same quality for the
error in the weak solution and the Lagrange multiplier as in the standard mortar case
[4, 6, 7]. In contrast to e
, we can define nodal basis functions for V h which have local
supports. Efficient iterative solvers for linear equation systems arising from mortar
finite element discretization are very often based on the saddle point formulation or
work with the product space X h instead of the nonconforming mortar space. Different
types of efficient iterative solvers are developed in [1, 2, 3, 11, 15, 16, 19, 20, 18, 25].
However, most of these techniques require that each iterate satisfies the constraints
exactly. In most studies of multigrid methods, these constraints have to be satisfied
even in each smoothing step [11, 12, 18, 25]. If we replace e
the constraints
are much easier to satisfy, since instead of solving a mass matrix system, the nodal
values on the non-mortar side can be given explicitly.
Fig. 6.6. Decomposition and initial triangulation (left) and solution (right) (Example 1)
Here, we will present some numerical results illustrating the discretization errors
for the standard and the new mortar methods in the case of P 1 Lagrangian finite
elements. We recall that in the standard mortar approach the Lagrange multipliers
belong to f
M h whereas we use M h in the new method. We have used a multigrid
method which satisfies the constraints in each smoothing step; see [11, 25] for a
A Mortar Method Using Dual Spaces 19
discussion of the standard mortar case. This multigrid method can be also applied
without any modifications to our modified mortar setting. It does not take advantage
of the diagonal mass matrix on the non-mortar side of the new formulation. To obtain
a speedup in the numerical computations, special iterative solvers for the new mortar
setting have to be designed. We will address this issue in a forthcoming paper [28].
We start with an initial triangulation T 0 , and obtain the triangulation T l on level l by
uniform refinement of T l\Gamma1 .
Both discretization techniques have been applied to the following test example:
where the right hand side f and the Dirichlet boundary conditions
are chosen so that the exact solution is
yy. The solution and the initial triangulation
are given in Figure 6.6. The domain is decomposed into nine subdomains defined by
and the triangulations do not
match at the interfaces. We observe two different situations at the interface, e.g. the
isolines of the solution are almost parallel at
@\Omega 12 whereas at
@\Omega 21 the
angle between the isolines and the interface is bounded away from zero. In case that
the isolines are orthogonal on the interface the exact Lagrange multiplier will be zero.
Table
Discretization errors (Example 1)
standard approach modified approach
Lagrange multiplier f
level # elem. L 2 -err. energy err. L 2 -err. energy err.
In
Table
6.1, the discretization errors are given in the energy norm as well as in the
for the two different mortar methods. We observe that the energy error is
of order h whereas the error in the L 2 -norm is of order h 2 . There is no significant
difference in the accuracy between the two mortar algorithm. The discretization errors
in the energy norm as well as in the L 2 -norm are almost the same.
Fig. 6.7. Decomposition and initial triangulation (left) and solution (right) (Example 2)
In our second example, we consider the union square with a slit decomposed
into four subdomains, see Figure 6.7. Here, the right hand side f and the Dirichlet
boundary conditions of are chosen so that the exact solution is given by
sin OE. The solution
has a singularity in the center of the domain. We do not have H 2 -regularity, and we
therefore cannot expect an O(h) behavior for the discretization error in the energy
norm.
Table
Discretization errors (Example 2), Energy error in 1e \Gamma 01
standard approach modified approach
Lagrange multiplier f
level # elem. L 2 -err. energy err. L 2 -err. energy err.
2:069586
The discretization errors are compared in Table 6.2. In this case, we observe a difference
in the performance of the different mortar methods. The L 2 -error of the modified
mortar method is asymptotically better than that of the standard method. The situation
is different for the energy error; the standard mortar approach gives slightly
better results. A non-trivial difference can only be observed in this example where
there is no H 2 -regularity. In that case, the modified mortar method gives better
results in the L 2 -norm.
Our last example illustrates the influence of discontinuous coefficients. We consider
the diffusion equation \Gammadiv the coefficient a is dis-
continuous. The unit
square\Omega is decomposed into four
as in Figure 6.8.
Fig. 6.8. Decomposition and initial triangulation (left) and solution (right) (Example
The coefficients on the subdomains are given by a
The right hand side f and the Dirichlet boundary conditions are chosen to match
a given exact solution,
solution is continuous with vanishing [arun] on the interfaces. Because of the discontinuity
of the coefficients, we use a highly non-matching triangulation at the interface,
see
Figure
6.8.
A Mortar Method Using Dual Spaces 21
The discretization errors in the energy norm as well as in the L 2 -norm are given
for the two different mortar algorithms in Table 6.3. We observe that the energy error
is of order h. As in Example 1, there is only a minimal difference in the performance
of the two mortar approaches.
Table
Discretization errors (Example 3), Energy error in 1e \Gamma 01
standard approach modified approach
Lagrange multiplier f
level # elem. L 2 -err. energy err. L 2 -err. energy err.
The following two figures illustrate the numbers given in Tables 6.1 - 6.3. In
Figure
6.9, the errors in the energy norm are visualized whereas in Figure 6.10 the
errors in the L 2 -norm are shown. In each figure a straight dashed line is drawn below
the obtained curves to indicate the asymptotic behavior of the discretization errors.1100 1000 10000 100000
in
the
energy
norm
Number of elements
Example 1
standard
modified
in
the
energy
norm
Number of elements
Example 2
standard
modified
in
the
energy
norm
Number of elements
Example 3
standard
modified
Fig. 6.9. Discretization errors in the energy norm versus number of elements0.0010.1100 1000 10000 100000
in
the
norm
Number of elements
Example 1
standard
modified
in
the
norm
Number of elements
Example 2
standard
modified
100 1000 10000 100000
in
the
norm
Number of elements
Example 3
standard
modified
Fig. 6.10. Discretization errors in the L 2 -norm versus number of elements
In Examples 1 and 2, almost from the beginning on the predicted order h for the energy
norm and the order h 2 for the L 2 -norm can be observed. In these two examples only
one plotted curve for the standard and the new mortar approach can be seen. The
numerical results are too close to see a difference in the pictures. In Example 2,
where we have no full H 2 -regularity, the asymptotic starts late. We observe for both
22 BARBARA I. WOHLMUTH
mortar methods an O(h 1=2 ) behavior for the discretization error in the energy norm.
During the first refinement steps the error decreases more rapidly. For the L 2 -norm
the asymptotic rate is given by O(h 3=2 ). Moreover, it seems to be the case that the
new mortar method performs asymptotically better than the standard one. However,
this cannot be observed for other examples without full regularity.
Acknowledgment
The author would like to thank Professor Olof B. Widlund
for his continuous help and for fruitful discussions as well as the referees for their
valuable comments.
--R
Substructuring preconditioners for finite element methods on nonmatching grids.
Substructuring preconditioners for the Q 1 mortar element method.
Iterative substructuring preconditioners for mortar element methods in two dimensions.
The mortar finite element method with Lagrange multipliers.
The mortar element method for three dimensional finite elements.
Domain decomposition by the mortar element method.
A new nonconforming approach to domain de- composition: The mortar element method
Raffinement de maillage en elements finis par la methode des joints.
Iterative methods for the solution of elliptic problems on regions partitioned into substructures.
A multigrid algorithm for the mortar finite element method.
Stability estimates of the mortar finite element method for 3- dimensional problems
Mixed and hybrid finite element methods.
estimates for the three-field formulation with bubble stabilization submitted to Math
A hierarchical preconditioner for the mortar finite element method.
Adaptive macro-hybrid finite element methods
On the mortar finite element method.
Multigrid for the mortar finite element method.
Analysis and parallel implementation of adaptive mortar finite element methods.
Domain decomposition with nonmatching grids: Augmented Lagrangian approach.
Finite element interpolation of nonsmooth functions satisfying boundary conditions.
Convergence results for non-conforming hp methods: The mortar finite element method
Poincar'e and Friedrichs inequalities for mortar finite element methods.
The coupling of mixed and conforming finite element dis- cretizations
Hierarchical a posteriori error estimators for mortar finite element methods with Lagrange multipliers.
Mortar finite element methods for discontinuous coefficients.
Multigrid methods based on the unconstrained product space arising from mortar finite element discretizations.
A mixed finite element discretization on non-matching multiblock grids for a degenerate parabolic equation arising in porous media flow
--TR
--CTR
Q. Hu, Numerical integrations and unit resolution multipliers for domain decomposition methods with nonmatching grids, Computing, v.74 n.2, p.101-129, March 2005
Dan Stefanica, Parallel FETI algorithms for mortars, Applied Numerical Mathematics, v.54 n.2, p.266-279, July 2005
Ralf Unger , Matthias C. Haupt , Peter Horst, Application of Lagrange multipliers for coupled problems in fluid and structural interactions, Computers and Structures, v.85 n.11-14, p.796-809, June, 2007
S. Heber , M. Mair , B. I. Wohlmuth, A priori error estimates and an inexact primal-dual active set strategy for linear and quadratic finite elements applied to multibody contact problems, Applied Numerical Mathematics, v.54 n.3-4, p.555-576, August 2005
Barbara I. Wohlmuth, An a Posteriori Error Estimator for Two-Body Contact Problems on Non-Matching Meshes, Journal of Scientific Computing, v.33 n.1, p.25-45, October 2007 | a priori estimates;lagrange multiplier;nonmatching triangulations;dual norms;mortar finite elements |
588410 | Modified Adaptive Algorithms. | It is well known that the adaptive algorithm is simple and easy to program but the results are not fully competitive with other nonlinear methods such as free knot spline approximation. We modify the algorithm to take full advantages of nonlinear approximation. The new algorithms have the same approximation order as other nonlinear methods, which is proved by characterizing their approximation spaces. One of our algorithms is implemented on the computer, with numerical results illustrated by figures and tables. | Introduction
. It is common knowledge that nonlinear approximation methods
are better, in general, than their linear counterparts. In the case of splines,
nonlinear approximation puts more knots where the function to be approximated
changes rapidly, which results in dramatic improvements in approximating functions
with singularities. There are various satisfactory results on free knot spline approxi-
mation, in which knots are chosen at one's will. Most related theorems are proved by
showing the existence of certain balanced partitions (a more accurate description will
be given later). This may cause di#culties in practice, since it is often numerically
expensive to find such balanced partitions. Then, there is so-called adaptive approximation
by piecewise polynomial (PP) functions, in which only dyadic intervals are
used in the partition. Adaptive approximation draws great attention because of its
simplicity in nature. As a price to pay for the simplicity, its approximation power
is slightly lower than that of its free knot counterpart. Moreover, it is not known
exactly what kind of functions can be approximated to a prescribed order; that is,
there is no characterization of adaptive approximation spaces. We point out here that
when we say adaptive algorithms in this paper, we mean those that approximate a
given (univariate) function by PP functions/splines. There are other kinds of adaptive
algorithms; some are characterized in the literature (see [10] for an example).
In this paper, we shall modify the existing adaptive algorithms in two ways.
The resulting algorithms have the same approximation power as free knot spline
approximation while largely keeping the simplicity of adaptive approximation. In the
next section, we shall state some known results on free knot spline approximation.
After describing our algorithms in section 3, in section 4 we shall give our main results,
which are parallel to those on free knot spline approximation given in the next section.
Numerical implementation and examples will be the contents of the last section.
# Received by the editors March 17, 1999; accepted for publication (in revised form) February 9,
2000; published electronically August 29, 2000.
http://www.siam.org/journals/sinum/38-3/35356.html
Department of Mathematics and Computer Science, Georgia Southern University, Statesboro,
GA 30460 (yhu@gasou.edu).
# Department of Mathematics, University of Manitoba, Winnipeg, MB, Canada R3T 2N2
(kkopotun@math.vanderbilt.edu). This author was supported by NSF grant DMS 9705638.
- Department of Mathematics, Southwest Missouri State University, Springfield, MO 65804
(xmy944f@mail.smsu.edu).
1014 Y.-K. HU, K. A. KOPOTUN, AND X. M. YU
We emphasize that we consider only the univariate case in this paper. The idea
of merging cubes was initially introduced and used by Cohen et al. in their recent
paper [11] on multivariate adaptive approximation. The resulting partition consists
of rings, which are cubes with a (possibly empty) subcube removed. Their algorithm
produces near minimizers in extremal problems related to the space BV (R 2 ). The
authors further explored this algorithm in [21]. In particular, we were able to obtain
results on extremal problems related to the spaces V #,p (R d ) of functions of "bounded
variation" and Besov spaces B # (R d ). This algorithm is ready to implement for some
settings, depending on the value of p (if L p norm is chosen) and order of local polynomials
(it is more di#cult for r > 1), though the bookkeeping may be messy. On
the other hand, this algorithm is designed for the multivariate case. Its univariate
version would not only be much more complex than necessary, but would also produce
one-dimensional rings, that is, unions of the subintervals not necessarily neighboring,
which are unpleasant and, as it turned out, unnecessary. Our modified algorithms
take advantage of the simplicity of the real line topology, simply merging neighboring
intervals, thus resulting in partitions consisting of only intervals. These algorithms
cannot be easily generalized to multivariate setting, since a procedure of emerging
neighboring cubes may generate very complicated and undesirable sets in a partition.
This also makes it much more di#cult to establish Jackson inequalities for local ap-
proximants. We refer the interested reader to [21], where one can find that the proof
of Jackson inequality on a ring is already di#cult enough. For these reasons, we
strongly believe that simpler and more e#cient univariate algorithms are necessary.
2. Preliminaries. Throughout this paper, when we say that f, the function to
be approximated, belongs to L p (I), we mean f # L p (I) if 0 < p < #, and f # C(I)
r is an integer, 0 < # < r and 0 < p, q #, then the Besov space
is the set of all functions f # L p (I) such that the semi-(quasi)norm
sup
is finite, where # r is the usual rth modulus of smoothness. The (quasi)norm for
defined by
We also define a short notation for a special case that is used frequently in the theory:
If there is no potential confusion, especially in the case I = [0, 1], the interval I will be
omitted in the notation for the sake of simplicity. For example, L p stands for L p [0, 1]
are quasi-normed, complete, linear spaces continuously embedded
in a Hausdor# space X, then the K-functional for all f #
defined as
K(f, t, X 0 ,
This can be generalized if we replace # X1 by a quasi-seminorm | - | X1 on
K(f, t, X 0 ,
MODIFIED ADAPTIVE ALGORITHMS 1015
The interpolation space (X 0 , consists of all functions
< #, where
|f | (X0,X1 ) #,q
sup
When studying an approximation method, it is very revealing to know its approximation
spaces, which we now define. Let functions in a quasi-normed linear space X
be approximated by elements of its subsets # n , . , which are not necessarily
linear but are required to satisfy the assumptions
any a #= 0;
does not depend on n;
(v) # n=0 # n is dense in X;
(vi) Any f # X has a best approximation from each # n .
All approximant sets in this paper satisfy these assumptions. Denoting
we define the approximation space
A #
to be the set of all f # X for which E n (f) is of order n -# in the sense that the
following seminorm is finite:
|f | A #
sup
The general theorem below enables one to characterize an approximation space by
merely proving the corresponding Jackson and Bernstein inequalities (see [13, sections
7.5 and 7.9], [9], and [15]).
Theorem A. Let Y := Y # > 0, be a linear space with a semi-(quasi)norm | - | Y
that is continuously embedded in X. If {# n } satisfies the six assumptions above, and
Y satisfies the Jackson inequality
and the Bernstein inequality
then for all 0 < #, 0 < q #, the approximation space
A #
By a partition
of the interval [0, 1] we mean a finite set of subintervals
whose union # n
I . The (nonlinear)
Y.-K. HU, K. A. KOPOTUN, AND X. M. YU
spaces # n,r of all PP functions of order r on [0, 1] with no more than n > 0 pieces
are defined by
I#P
I (x)# I (x), |P| # n},
where P I are in P r-1 , the space of polynomials of degree < r, and # I are the characteristic
functions on I. # 0,r is defined as {0}. These #
all assumptions (i)-(vi) on {# n } (see p. 3). The degree of best approximation of a
function f by the elements of # n,r is denoted by # n,r (f) p := E(f, # n,r ) p .
Remark . Some authors use the notation # (n-1)r,r in place of # n,r , since PP
functions can be viewed as special kinds of splines with each interior break point x i ,
a knot of multiplicity r. Also in use is PP n,r . Following
general notation in nonlinear approximation, we use the first subscript for the number
of coe#cients in the approximant. See [13], [14], [17], [26]. Strictly speaking, all n-
piece PP function of order r only form a proper subset of the free knot spline space
(n-1)r,r , but this subset has the same approximation power in L p as the whole space
(see Theorem 12.4.2 of [13]).
In his 1988 paper [23] (also see [24] and [13, section 12.8]), Petrushev characterized
the approximation space A #
using the Besov spaces; see the
following theorem.
Theorem B. Let 0 < p < #, n > 0, and 0 < # < r. Then we have
and
Therefore for 0 < q # and 0 < # < r
A #
In particular, if #
A #
The inequality (2.4) can be proved by finding a balanced partition
according to the function
s (f, x)| # dsdt
in the sense that
(see [13] for details of the proof). In fact, many Jackson-type inequalities can be
proved by showing the existence of a balanced partition (see, e.g., Theorems 12.4.3,
5, and 6 in [13], Theorem 1.1 in [19], and parts of Theorems 2.1 and 4.1 in [17]). We
state here Theorem 12.4.6 of [13], given by Burchard [8] in 1974 for the case
(see also de Boor [3]).
MODIFIED ADAPTIVE ALGORITHMS 1017
Theorem C. Let r and n be positive integers, and let # := (r
monotone function, then
#,p be the
space of functions f # L p [0, 1] for which the variation
|f | V #,p
I#P
is finite, where the sup is taken over all finite partitions P of [0, 1]. Following [17]
(see also Brudnyi [7] and Bergh and Peetre [1]), we define a modulus of smoothness
f,
0<h#t
sup
The following theorem, which is due to DeVore and Yu [17], provides characterization
of A #
using interpolation spaces involving V #,p .
Theorem D. Let 0 < p #, 0 < # < r, and #
approximation by elements from {# n,r } # 0 , we have the Jackson inequality
and the Bernstein inequality
Therefore
A #
In particular, if p < #,
A #
The Jackson inequality (2.12) follows from the definition of # f, t) #,p and the existence
for any f # V #,p of an S # n,r with n :=
which can be proved (see [17]) by showing the existence of a balanced partition
such that
and then defining S by
where P i are best L p approximations to f on I i from the space P r-1 .
Y.-K. HU, K. A. KOPOTUN, AND X. M. YU
3. Adaptive algorithms.
3.1. The original adaptive algorithm. More than likely it will be hard to
find an exactly balanced partition numerically. An algorithm of this sort by Hu
[20], for instance, uses two nested loops (there is another level of loop that increases
the number of knots). This is probably one of the reasons why much attention is
paid to adaptive approximation, which selects break points by repeatedly cutting the
intervals into two equal halves, and produces PP functions with dyadic break points,
which can be represented by finite binary numbers of the form m - 2 -k ,
. Denote the spaces of such PP functions by # d
n,r and their approximation
errors E(f, # d
n,r (f) p . We now describe the original adaptive algorithms in
the univariate setting.
Let E be a nonnegative set function defined on all subintervals of [0, 1] which
satisfies
E(I) # E(J) if I # J ;
uniformly as |I| # 0.
Given a prescribed tolerance # > 0, we say that an interval I is good if E(I) #;
otherwise it is called bad. We want to generate a partition G := G(#, E) of [0, 1] into
good intervals. If [0, 1] is good, then is the desired partition; otherwise
we put [0, 1] in B, which is a temporary pool of bad intervals. We then proceed with
this B and divide every interval in it into two equal pieces and test whether they are
good, in which case they are moved into G, or bad, in which case they are kept in B.
The procedure terminates when resulting intervals are good
and are in G), which is guaranteed to happen by (3.2).
The set function E(I) usually depends on the function f that is being approximated
and measures the error of approximation of f on I, such as # I G(x) dx in (2.9),
thus will be called the (error) measure of I throughout this paper. In the simplest
case, E(I) is taken as the local approximation error of f on I # [0, 1] by polynomials
of degree < r:
E(I)
and the corresponding approximant on G is defined by (2.15). This gives an error
where |G| is the number of intervals in G. One can estimate in di#erent ways
a n (f) p := a n (f, E) p := inf |G| 1/p #,
where the infimum is taken over all # > 0 such that
and Solomjak [2] and DeVore [12] for estimates for functions f in Sobolev spaces.
Other estimates can be found in Rice [25], de Boor and Rice [6], and DeVore and
Yu [18] and the references therein. We only mention the following two results.
Theorem E (see [18, Theorem 5.1]). Let
If f # C r (0, 1) with |f (r) (x)| #(x), where # L # is a monotone function such that
I
MODIFIED ADAPTIVE ALGORITHMS 1019
where C 1 is an absolute constant, then we have
a n (f) # Cn -r # .
Note that compared with Theorem C with
theorem has an extra requirement (3.5) on #.
Theorem F (see [18, Corollary 3.3]). Let 0 < p < # > 0, and q > # :=
a
# (L q ) , we see (3.7) is weaker than (2.4), which is for free
knot spline approximation. The reason for this is not hard to see: adaptive algorithms
not only select break points from a smaller set of numbers (that is, the set of all finite
binary numbers), but they also do it in a special order. Consider
as an example, a good free knot approximant will have most knots very close to 0 (see
examples in [20] and Table 5.2 later in this paper). However, an adaptive algorithm
needs at least n - 1 knots, 2 before it can put one at 2 -n and thus
needs more knots than a free knot spline algorithm. Although one classifies adaptive
approximation as a special kind of free knot spline approximation (since the knots
sequence depends on the function to be approximated), one is far from free when
choosing knots. It is considered "more restrictive" (DeVore and Popov [14]) than free
knot spline approximation.
We should point out that all theorems mentioned in this subsection are of a
Jackson-type, that is, so-called direct theorems. Bernstein inequalities (closely related
to inverse theorems, sometimes referred to also as inverse theorems themselves) for free
knot splines, such as (2.5) and (2.13), are valid for all splines, including PP functions
produced by adaptive algorithms. The problem is that all Jackson inequalities for
the original adaptive algorithms are not strong enough to match those Bernstein
inequalities in the sense of Theorem A. From this point of view, Theorems E and
F are weaker than they look. We do not know exactly what kind of functions can
be approximated by the original adaptive algorithms to a prescribed order, that is,
we can not characterize their approximation spaces A #
q . They do not fully exploit the
power of nonlinear approximation, and sometimes they generate too many intervals,
many of which may have an error measure much smaller than #.
As mentioned above, there are two major aspects in which adaptive approximation
is di#erent from free knot spline approximation: (a) a smaller set of numbers to choose
knots from and (b) a special, and restrictive, way to select knots from the set. It turns
out that (b) is the reason for its drawback. Although it is also the reason why adaptive
approximation is simple (and we want to keep it that way), it does not mean we have to
keep all the knots it produces. In this paper, we modify the usual adaptive algorithm
in two ways. The idea is that of splitting AND merging intervals/cubes used in a
recent paper by Cohen et al. [11]. The two new algorithms generate partitions of
[0, 1] with fewer dyadic knots which are nearly balanced in some sense. In section 4,
we prove that they have the same approximation order as that of free knot splines.
3.2. Algorithm I. We start with the original adaptive procedure with some
# > 0, which generates a partition
of [0, 1] into good intervals. The
number N # may be much larger than it has to be. To decrease it, we merge some
of the intervals I #
i . We begin with I #
1 and check the union of I #
1 and I # 2 . If it is still
a good interval, that is, if its measure E(I #
#, we add I # 3 to the union and
1020 Y.-K. HU, K. A. KOPOTUN, AND X. M. YU
check whether E(I #
3 ) #, and we proceed until we find the largest good union
I #
k in the sense that
but
E(I #
We name I # 1 # I #
k as I 1 . If k < N # , we continue with I #
k+1 and find the next
largest good union as I 2 . At the end of this procedure, we obtain a modified partition
consisting of N # N # good intervals
for which each union J i := I i # I i+1
is bad,
This partition is considered nearly balanced. For the size of N we have
# .
3.3. Algorithm II. Our second algorithm generates a nearly balanced partition
in another way. It does not make heavy use of prescribed tolerance #; rather, it merges
intervals with relatively small measures while dividing those with large ones. As in
the ordinary adaptive algorithms, we start with dividing [0, 1] into two intervals I 1
and I 2 of equal length. However, this is where the similarity ends. We then compare
measures E(I 1 ) and E(I 2 ) and divide the interval with larger measure into two equal
pieces. In the case of equal measure, we divide, rather randomly, the one on the left.
Now we have three intervals and are ready for the three-step loop below.
Step 1. Assume there is currently a partition {I i } k
I j has the largest
measure among all I i . If E(I j+1
is a fixed parameter, we check the union of I j+1 # I j+2 to see whether its measure
E(I j+1 # I j+2 ) < M . If so, add the next interval I j+3 into the union and check its
measure again. We continue until we get a largest union # j+m1
I i whose measure is
less than M, and replace this union by the intervals it contains. Then, if j +m 1 < k,
we find the next largest union # j+m1+m2
I i in the same manner and replace these
intervals by their union. Furthermore, we do the same to the intervals to the left of
I j (but keep I j intact). In this way we obtain a new partition with (the old) I j still
having the largest measure. This partition is nearly balanced in the sense that the
measure of the union of any two consecutive new intervals is no less than #
(because these new intervals were largest unions of old intervals). At the end of this
step we renumber the new intervals and update the value of k.
Step 2. Check whether the new partition produced in Step 1 is satisfactory using
an application-specific criterion, for instance, whether k has reached a prescribed value
n or the error is reduced to a certain level. If not, continue with Step 3; otherwise
define the final spline by (2.15) and terminate the algorithm.
Step 3. Divide the interval with the largest measure into two equal pieces,
renumber the intervals, update the values of k and M, and then go back to Step 1.
Remark. In Step 1, if I l and I l+1 are the two newest intervals (two "brothers"
with equal length), one needs only to check I l-1 # I l if l - 1, l #= j, and/or I l+1 # I l+2
since other unions of two consecutive intervals have measures no
MODIFIED ADAPTIVE ALGORITHMS 1021
less than the value of M in the previous iteration, which is, in turn, no less than the
current M . We stated it in the way above only because it shows the purpose of the
step more clearly.
It should be pointed out that one needs to be careful about the stopping criterion
in Algorithm II. For example, if it is applied to the characteristic function
after two iterations we will always
have The break point # 2/2 in this example can be
replaced by any number in (0, 1) which does not have a finite binary representation
such as 0.4. If k # used as the sole stopping criterion, the algorithm will fall
into infinite loop. Fortunately, the error in this example still tends to 0; therefore,
infinite loop can be avoided by adding error checking in the criterion. The next lemma
shows this is the case in general.
Lemma 3.1. Let E be an interval function satisfying (3.1) and (3.2), and let
prescribed. Then the criterion
will terminate Algorithm II.
Proof. We show that if k never exceeds n, then as the number
of iterations goes to #. Let 0 < # < 1 be fixed. Let
with the
max taken at one moment. Fix this
M and denote the group of all subintervals in
the partition with "large" errors by G
be as in Step 1, changing from iteration to iteration. We have
# M from now on.
We first make a few observations. Since the interval currently having the largest
measure is always in G
M , each iteration cuts a member of G
M . However, the algorithm
will not merge any member I i # G
M with another interval because E(I i
any union of I i with another interval would have even larger measure by
(3.1). By (3.2), there exists # > 0 such that |I i | > # for any I i # G
M . Note all
intervals in a partition are disjoint, thus the total length of the intervals in G
M is no
larger than 1, and its cardinal number |G #
M | # 1/#.
From these observations, we conclude the following. When an iteration cuts a
member I i of G
M into two "children" of equal length, one of the three cases will
happen: (a) neither child of I i belongs to G
M , thus |I i | > # is removed from the total
length of G
exactly one of the children belongs to it (hence having a length
> #) and the other child, with the same length |I i |/2 > #, is removed from G
or (c)
both children belong to it. The case (a) decreases |G #
M | by 1, (b) keeps it unchanged,
and (c) increases it by 1. Now one can see that at most # 3/# +1 iterations will empty
G
M , since at least one third of them will be cases (a) or (b) to keep |G #
M | # 1/#, which
will remove all the total length of G
M , thus emptying it. This reduces the maximum
error by a factor # < 1. Repeat this enough times and the maximum error
will eventually tend to 0.
Although (3.2) does not say anything about the convergence rate of E(I) as |I| #
0, and the proof of the above lemma may make it sound extremely slow, one can expect
a fairly fast convergence in most cases. For example, in the case
if f is in the generalized Lipschitz space Lip #, p) := Lip #, L p [a, b]), 0 < # < r,
that is, if
|f | Lip # := |f | Lip #,p) := sup
1022 Y.-K. HU, K. A. KOPOTUN, AND X. M. YU
then for any I # [a, b]
|f | Lip # .
We feel it is safe to say that most functions in applications belong to Lip #, p) with
an # reasonably away from 0, at least on subintervals not containing singularities,
thus halving an interval often reduces its error by a factor of 2 # .
A natural question that may arise here is: How complex are the new algorithms?
We give brief comparisons below to answer this question. Algorithm I is straight for-
ward. It is the original adaptive algorithm with a second (merging) phase added.
This phase consists of no more than merging attempts, where N # is
the number of subintervals the original algorithm generates, and N that of the final
subintervals. As for Algorithm II, there are two major di#erences from the original
version. The first one, as mentioned in the remark after the algorithm description,
is: up to two merging attempts are made after cutting each interval. The other one
is in the book-keeping. In the original version, a vector is needed to record errors
on all intervals (or to indicate which intervals are bad), while Algorithm II keeps the
index of the interval that has the largest error E(I) in a scalar variable, in addition
to the vector containing all errors. This requires a search for the largest element in
the vector after each cutting or merging operation.
One can see from above that the new algorithms are not much more complex
in terms of programming steps. The added CPU time, in terms of the number of
results mainly from the evaluations of the error measure E(I)
required by merging operations. Our estimate is that either algorithm uses two or
three times as much CPU time as the original algorithm. More information on CPU
time will be given in section 5 together with numerical details.
4. Approximation power of the algorithms. We now show that our modified
adaptive algorithms have the full power of nonlinear approximation. More precisely,
we prove that they produce piecewise polynomials satisfying the very same Jackson
inequalities for free knot spline approximation (with possibly larger constants on the
right-hand side since the partitions are not exactly balanced). As we mentioned
earlier, the corresponding Bernstein inequalities hold true for all splines; therefore we
are really proving that the approximation spaces for the modified adaptive algorithms
are the same as those for free knot spline approximation.
We state below our results as three main theorems, parallel to Theorems B, C,
and D, respectively. In fact, we can prove most results of this kind for our algorithms,
such as Kahane's theorems and its generalization [13, Theorems 12.4.3 and 5], but
the proofs would be too similar to the ones below.
We recall that throughout this paper, I j denotes the interval with largest measure
among all I i in the partition, the union of any two consecutive intervals J
has a measure E(J i ) > E(I j ), and J i is called bad in Algorithm I. All PP functions
on the resulting partitions are defined by (2.15).
Theorem 4.1. Let n and r be positive integers, and let 0 < p < #, 0 < # < r,
then the two modified adaptive algorithms (with
defined in (2.8) or (ii)
functions S of (2.15) that satisfy
the Jackson inequality
(4.
MODIFIED ADAPTIVE ALGORITHMS 1023
From Theorem A we obtain the approximation space A #
product. It turns out to be the same as A #
which is not surprising
since # d
n,r is dense in # n,r . The surprising part is that one can get such an approximant
using a simple adaptive algorithm.
Corollary 4.2. Let 0 < p < #, 0 < q #, 0 < # < r, and
. For approximation by PP functions in # d
n,r , we have
A #
In particular,
A #
Proof of Theorem 4.1. The proofs of the theorem in the cases (i) and (ii) are
very similar. We only consider (i) and remark that, in the case (ii), the inequality
plays the major role.
PP approximants produced by Algorithm I. Let E(I) := # I G(x) dx, where G is as
in (2.8), and # :=
We claim that the number N of intervals it produces is no greater than 2n+1. Indeed,
by (3.8)
The rest of the proof of (4.1) is similar to that of (2.4) (cf. section 12.8, p. 386 of
[13]); we sketch it here for completeness. It is proved in [13] that for any f # B # [0, 1],
M is equivalent to |f | B # [0, 1] with constants of equivalence depending only on r and
#, and that for such an f
Define the approximant S by (2.15) and we have
here in the fifth step we have used the equality # and in the last step
we have used the equivalence of M and |f | B # .
PP approximants produced by Algorithm II. Let E(I), M, and # be the same as
above, and use (3.9) as stopping criterion in Step 2. If the algorithm terminates due
1024 Y.-K. HU, K. A. KOPOTUN, AND X. M. YU
to (thus giving less than n pieces), it is the same situation as with
Algorithm I. Otherwise we have n pieces when it terminates, and (4.1) follows:
Cn
Theorem 4.3. Under the conditions of Theorem C, the modified adaptive algorithms
(with E(I) := # I #(x) # dx and # := n
produce PP approximants
S in # d
n,r that satisfy the Jackson inequality:
Proof of Theorem 4.3.
PP approximants produced by Algorithm I. Let
E(I) := # I
as the Taylor polynomial for f of degree r - 1 at the point x i+1 (not best
we have (see equation (4.15) in Chapter 12 of [13])
Using (4.6) in place of (4.4), then (4.5) for p < # can be proved by arguments very
similar to those in the proof of Theorem 4.1 by Algorithm I. We also refer the reader
to the proof of Theorem C in [13]. For #, the estimate of N is the same and we
need only to replace # N
by
PP approximants produced by Algorithm II. Let E(I), #, and M be the same as
above. Use (3.9) again as the stopping criterion in Step 2. If the algorithm terminates
because it is the same situation as in Algorithm I. Otherwise, for
p/#
MODIFIED ADAPTIVE ALGORITHMS 1025
where we have used the inequality (4.6) in the second step, and
the last one. For #, we make similar changes to those in Algorithm I:
Theorem 4.4. Let n and r be positive integers, and let 0 < p, #, 0 < # < r,
then the two modified adaptive algorithms (with
E(I)
#,p ) produce PP functions S of (2.15) that
satisfy the Jackson inequality
Using Theorems 4.4 and A we have the following characterization of A #
Corollary 4.5. For approximation by PP functions in # d
n,r we have
A #
In particular, if p < #,
A #
Proof of Theorem 4.4. It su#ces to show (2.14) since (4.7) immediately follows
from it with any t > 0 and n := (see the end of section 2). We only prove it
for p < #. The case of can be verified by making changes similar to those in
the proof of the L# case in the previous theorem.
PP approximants produced by Algorithm I. Let E(I)
p and
#,p . From (3.8), the number N of intervals the algorithm produces
can be estimated as
Indeed, if N > 2n (otherwise, it's done) we have
#,p
Cn #/p
#,p
where we have used the definition (2.11) of # f, t) #,p . Since 1 - #/p, this gives
Cn. Now (2.14) follows, since
#,p .
PP approximants produced by Algorithm II. We set E(I) := E r (f, I) #
, and use
#,p in the stopping criterion (3.9). If it stops
because have exactly the same situation as with Algorithm I, with
1026 Y.-K. HU, K. A. KOPOTUN, AND X. M. YU
the same partition; otherwise there are n intervals when it terminates. In the latter
case, we have
5. Numerical implementation and examples. Theoretically, the two algorithms
have the same approximation power. However, when it comes to numerical
implementation, we prefer Algorithm II since it directly controls the number of polynomial
pieces n, while # in Algorithm I is neither a power of n nor a tolerance for
(though it is closely related to both). We implemented Algorithm II on the
computer, using Fortran 90 and mainly for 2. The error measure used in the code
is
2 unless we have a better one to use, such as # I # I |f (r)
| # for
the square root function in the first example in this section. The L 2 norm of f on the
interval I estimated by the composite Simpson rule for integral
and its L# norm is estimated by
are equally distributed nodes, n p is a program
parameter roughly set as 6 times r, and . The best L 2 polynomial
approximant on I i , discretized by (5.1) as an overdetermined n p - r system of linear
equations for the least squares method, is calculated by either QR decomposition or
singular value decomposition by calling LINPACK subroutines Sqrdc and Sqrsl, or
Ssvdc (or their double precision counterparts). The latter takes longer but we did
not see any di#erence in the first four or five digits of the local approximation errors
they computed; thus we did not test it extensively.
The L# version of algorithm is basically the same, except that we use
estimated by (5.2). The local polynomials P I (and the global smooth splines)
are still obtained by the least squares method, that is, still best L 2 approximants.
This is common in the literature, and it is justified by the fact that the best L 2
polynomial approximant is also a near-best L# polynomial on the same interval; see
Lemma 3.2 of DeVore and Popov [16].
The number of polynomial pieces is used as the main termination criterion, while
# in (3.9) is set to a small value mainly to protect the program from falling into infinite
loops, rather than the sophisticated ones as in proofs in the previous section. It turned
out that infinite loop is not a problem. A nonfull rank matrix in the least squares
method is a problem, which happens far before it falls into an infinite loop. This
is because if I i is too small, the machine will have di#culties distinguishing the n p
MODIFIED ADAPTIVE ALGORITHMS 1027
points needed in (5.1). Therefore, we added a third condition to protect the program
from failing: stop the program when
We also added a second part in the code, namely, finding an L 2 smooth spline
approximation to the function with the knot sequence {t i } n+r
i=2-r , where the interior
knots a < t 2 < t 3 < - < t n < b are the break points of the PP function obtained
by Algorithm II, used as single knots, and the auxiliary knots are set as t
b. Despite the fact that the
partitions are guaranteed to be good only for PP functions, they usually work well
for smooth splines, too. De Boor gave some theoretical justification in the discussion
of his subroutine Newnot [4, Chapter XII].
The least square objective function for finding this smooth spline -
S is
set as 5r+1, is the number of equal pieces into which we cut each subinterval
I are the points resulted from such cutting, and the weights w j are chosen so
that (5.4) becomes a composite trapezoidal rule for the integral # b
a # f(x) -
dx:
The actual calculation
of the B-spline coe#cients of
are the B-splines with the knot sequence {t i } scaled
so that
done by de Boor's subroutine L2Appr in [4, Chapter XIV].
We used the source code of all the subroutines in the book from the package PPPACK
on the Internet.
We tested our code on a SUN UltraSparc, with a clock frequency 167MHz, 128MB
of RAM, and running Solaris 2.5.1. The speed is so fast that it is not an issue here:
for finding break points, it is somewhere from 0.015 second for to 0.1 second for
printing minimum amount of messages on the screen, and it is less than
10% of these for computing smooth splines. We also tested the code on a 300 MHz
Pentium II machine with 64 MB of RAM running Windows NT 4.0. The speed is at
least three times as fast. None of the problems we tested used more than 0.1 second.
(The reason for the great di#erence in speed may be that the SUN we used is a file
server, not ideal for numerical computation.) There is still room for improvement in
e#ciency. For example, one can use a value of n p , larger than what we use, at the
beginning and decrease it as n increases (and the error on each subinterval decreases).
The value of n s should be related to n, too, for the same reason.
The main cost of CPU time is the evaluation of the error measure E(I) for each
subinterval I. We use
estimated by QR decomposition, as an exam-
ple. Each such problem involves n p function evaluations, and (n p - r
operations required in QR decomposition, plus some more for estimating the error
from the resulting matrices. Each cutting of intervals requires two E(I) evaluations,
1028 Y.-K. HU, K. A. KOPOTUN, AND X. M. YU
Table
Approximation order of
and each merging attempt requires one. Our numerical experiments show that a typical
run resulting in n subintervals cuts intervals about 2n time. Each cutting results
in up to two attempts of merging subintervals. That gives about 8n least squares
problems, each of which involves n p function evaluations plus about n p r 2 arithmetic
operations. In view of the approximation order we proved in the previous section, and
the fact that n p is roughly a multiple of r, we think it pays to use a relatively large r,
at least 4 or 5. For 5, the error will reach the machine epsilon (single precision)
when n is somewhere between 30 and 70 in most cases.
We use the square root function to test the PP function approximation
order. This function is only in the Lipschitz space Lip( 1
thus the approximation
order is only 1/2 for splines with equally spaced knots in the L# norm, no matter
what their order r is. By Theorem 4.3, we should have e n := #f -S n # p # Cn -r , where
S n is the function consisting of n polynomial pieces computed by Algorithm II using
I |f (r) (x)| # dx, and we have combined # in the theorem
into the constant C. After the knot sequence has been found, QR decomposition is
used at the end of the program on each subinterval to estimate e n . Since the error
decreases fast for double precision had to be used in QR decomposition for
large values of n. Assume that what we actually obtain from the code is e
where # is the approximation order. Since log e plot the
points in the plane, they should form a line. Since such
a plot zigzags very much, we calculated the least squares line fitting
to find the order. Table 5.1 gives values of # for di#erent r using both L 2 and
L# norms. We should mention that the points values of n are too
low and ruin the obvious line pattern formed by those for larger n, thus we give two
values of #, one from the points for and the other from
As can be seen from the table, the latter values are right around or even exceed r.
Remark. We tried some power of E r (f, I) p for E(I) and felt, in view of (4.6),
it would yield a better balance of subintervals, thus a higher order. But the orders so
obtained were well below r (4.46 for e.g. The reason might be that
I # is additive, but (power of) E r (f, I) p is not.
To illustrate the advantage of interval merging, we compare the original adaptive
algorithm and our modified ones with the function
log 2
-m
This function is in C # , and is decreasing and convex on [0, 1] with
. Note that
since f is decreasing on [0, 1]. Table 5.2 shows
comparison in numbers of knots produced for the same approximation error by the
original adaptive algorithm and our Algorithm II. Both programs try to put first knots
near where the graph is very steep. The original algorithm has to, as pointed
out early, lay down knots 2 one by one before reaching an error of
MODIFIED ADAPTIVE ALGORITHMS 1029
Table
Comparison in numbers of interior knots produced by the original and modified adaptive algorithms
for the same error in approximating
Original
Alg.
while Algorithm II, after trying all these knots one at a time and merging all
but the last interval, puts the very first knot at 2 -23 .
It is interesting to watch how Algorithm II moves a knot toward a better position
in successive iterations without increasing the total number of pieces. The following
screen output shows that in iterations 1 and 2 the program moves the break point 0.5
to 0.25 and then to 0.125, while the error decreases form 0.5 to 0.47; in iterations 3-22
it moves the break point all the way to 2 -23 # with the error decreased
to 0.27. What happened internally is, in iteration 1, e.g., it cuts the interval [0, 0.5]
into [0, 0.25] and [0.25, 0.5]. Since the error on the union of [0.25, 0.5] and [0.5, 1] is
smaller than that on [0, 0.25], it then merges the two intervals into [0.25, 1]. The net
e#ect of these steps is moving the break point 2 -1 to 2 -2 .
Iteration 0: # of
errors=
L_\infty error on [a,
Iteration 1: # of
errors=
L_\infty error on [a,
Iteration 2: # of
errors=
L_\infty error on [a,
(Many lines deleted.)
Iteration 22: # of
errors=
2.70000E-01 2.30000E-01
L_\infty error on [a,
Y.-K. HU, K. A. KOPOTUN, AND X. M. YU
Table
Approximation errors to the Runge function on [-5, 5].
9
We now consider the infamous Runge function, which is also in C # but, on the
other hand, is hard to interpolate or approximate. Lyche and M-rken [22] approximated
it by the knot removal algorithm, and Hu [20] approximated it by balancing
the rth derivative of the function on subintervals in two nested loops. Here and in
the rest of the paper, we use 4. In Table 5.3, we compare our results with those
of Lyche and M-rken (LM) [22] and Hu [20]. For the same number of knots (that is,
list our errors measured in # 2 / # b - a for the PP function S n and the
smooth spline -
also that of -
measured in L# norm. We divide the L 2 norm by
since it is more comparable to the L# norm, which is what LM and Hu used.
The errors by LM are estimated from figures in [22]. Because of the simple nature of
our algorithm, we only expected to compete with their results by splines with two or
three times as many knots. It turns out that our approximation errors are almost as
good as theirs, which were produced by more sophisticated methods.
By now, the reader may begin to wonder: what is the e#ect of the parameter
of Algorithm II, used in Lemma 3.9 to guarantee the termination
of Algorithm II. We tried functions we tested, it worked excellently
except that the number of polynomial pieces went up and down a few times with the
square root function using dx, in which case used
instead. It is true that in theory it might get into an infinite loop, but since our goal
is to find a nearly balanced partition, better in this aspect, provided
infinite loop does not happen. It did not. As a matter of fact, sometimes we feel
the need for a value slightly larger than 1, e.g., with symmetric functions such as the
Runge function. What happens with # 1 is that if there are two subintervals having
the same largest measure at the moment, symmetric about the center of the interval,
then the outcome of the next iteration, which processes the subinterval on the left,
will very often interfere with the processing of the subinterval on the right later. It
may not make the approximation error worse, at least not by much, it is just that the
knot sequence becomes unsymmetrical, thus unnatural and unpleasant. Furthermore,
most algorithms in the literature produce symmetric knots for symmetric functions;
it would be hard to compare our results with theirs. For these minor reasons, we
used in preparation of Table 5.3. In the next example, we consider the PP
function
which has a jump at # 2/2. As we mentioned in the discussion before Lemma 3.1, since
# 2/2 has no finite binary representation, this function can never be approximated
exactly by a PP function with dyadic break points. The program (with
cutting and merging around the jump (since the number of pieces is always 3 after
two iterations), until it is stopped by the criterion (5.3), resulting in t
MODIFIED ADAPTIVE ALGORITHMS 1031
500 600 700 800 900 1000 11000.611.41.82.2-.04
Temperature
Fig. 5.1. Titanium Heat Data (circles). The final spline (solid line) has 15 interior knots. The
errors for preapproximation (dotted) and for the final spline (dashed) use scales on the right.
and 0.70710754. The PP function matches f exactly on the computer screen
since the two points are indistinguishable. One can very well combine them into a
single break point, thus virtually reproducing f . The original adaptive algorithm, in
contrast, would put many many knots around the jump while trying to narrow the
subinterval containing the jump: 0.5, 0.75, 0.625, 0.6875, 0.71875, . All these knots
are useless except the newest two.
In practice, one often wants to approximate discrete data points other than known
functions as in the previous examples. In this case, we preapproximate the points by
a spline with as many parameters as we wish to use, then apply our algorithm to this
spline. For smooth-looking data, we interpolate the data by a C 1 cubic spline with
knots at the data points, using de Boor's subroutine Cubspl in [4]. This worked very
well. We produced some sample data points from the Runge function and square root
function and applied this approach to them. It resulted in virtually the same knot
sequences as those generated by directly approximating the original functions.
In the real world, however, it is likely that the data will contain errors. If the
data points are interpolated, one can see small wiggles in the graph, which tricks the
program laying knots in areas where the curve is otherwise flat. One such example
is the Titanium Heat Data (experimentally determined), see [4, Chapter XIII], and
also LM [22] and Hu [20]. In Figure 5.1 the reader can see wiggles on both the left
and right. De Boor [4, Chapter XIV] suggests that the data be approximated by a
less smooth spline. We absolutely agree. For the same reason, we used fewer knots
for preapproximating spline in the flat parts at both ends, than we did near the high
1032 Y.-K. HU, K. A. KOPOTUN, AND X. M. YU
peak around 900 # , trying to ignore the wiggles. In fact, we used almost the same knot
sequence for preapproximating spline as in Figure 4 of [20].
Table
Approximation errors to the Titanium Heat Data.
Obtained by Order # of knots Error
Alg. II 4 11 0.070
Alg. II 4 15 0.031
Since de Boor, LM, and Hu all used L# norm for approximating these data, we
also used the L# version of our program. Figure 5.1 shows a cubic spline approximation
to the Titanium Data obtained by this method. It has 15 interior knots with
an error of 0.031. Table 5.4 gives a comparison of our results with those by others on
the same data.
Acknowledgments
. We are deeply indebted to Professor Ron DeVore, who
inspired us by discussing the excellent ideas in [11] during our visit to the University
of South Carolina. We want to thank him and Professors Pencho Petrushev and
Albert Cohen for providing us with drafts of their manuscript [11]. Credit is also due
to Professor Dietrich Braess, the editor of this paper, and the referees, whose opinions
and suggestions helped very much in improving the manuscript. As a matter of fact,
we reshaped the last section during the communication with them.
--R
On the space Vp (0
Piecewise polynomial approximation of functions of classes W
Good approximation by splines with variable knots
A Practical Guide to Splines
Least squares cubic spline approximation II-Variable knots
An adaptive algorithm for multivariate approximation giving optimal convergence rates
Spline approximation and functions of bounded variation
Splines with optimal knots are better
Jackson and Bernstein-type inequalities for families of commutative operators in Banach spaces
Adaptive wavelet methods for elliptic operator equations-Convergence rates
Nonlinear approximation and the space BV (R 2
A note on adaptive approximation
in Function Spaces and Applications
Interpolation of Besov spaces
Degree of adaptive approximation
Convexity preserving approximation by free knot splines
An algorithm for data reduction using splines with free knots
On multivariate adaptive approximation
A data reduction strategy for splines with applications to the approximation of functions and data
Direct and converse theorems for spline and rational approximation and Besov spaces
Rational Approximation of Real Functions
Basic Theory
--TR | adaptive algorithms;nonlinear approximation;data reduction;besov spaces;degree of approximation;piecewise polynomials;splines;modulus of smoothness;approximation spaces |
588414 | The Best Circulant Preconditioners for Hermitian Toeplitz Systems. | In this paper, we propose a new family of circulant preconditioners for ill-conditioned Hermitian Toeplitz systems A x= b. The preconditioners are constructed by convolving the generating function f of A with the generalized Jackson kernels. For an n-by-n Toeplitz matrix A, the construction of the preconditioners requires only the entries of A and does not require the explicit knowledge of f. When f is a nonnegative continuous function with a zero of order 2p, the condition number of A is known to grow as O(n2p). We show, however, that our preconditioner is positive definite and the spectrum of the preconditioned matrix is uniformly bounded except for at most 2p+1 outliers. Moreover, the smallest eigenvalue is uniformly bounded away from zero. Hence the conjugate gradient method, when applied to solving the preconditioned system, converges linearly. The total complexity of solving the system is therefore of O(n log n) operations. In the case when f is positive, we show that the convergence is superlinear. Numerical results are included to illustrate the effectiveness of our new circulant preconditioners. | Introduction
An n-by-n matrix A_n with entries a_{ij} is said to be Toeplitz if a_{ij} = a_{i−j}. Toeplitz systems
of the form A_n x = b occur in a variety of applications in mathematics and engineering
E-mail: rchan@math.cuhk.edu.hk. Department of Mathematics, The Chinese University of Hong
Kong, Shatin, Hong Kong. Research supported in part by Hong Kong Research Grants Council Grant No.
CUHK 4207/97P and CUHK DAG Grant No. 2060143.
y
E-mail: mhyipa@hkusua.hku.hk. Department of Mathematics, The University of Hong Kong, Pokfu-
lam Road, Hong Kong.
z E-mail: mng@maths.hku.hk. Department of Mathematics, The University of Hong Kong, Pokfulam
Road, Hong Kong. Research supported in part by HKU CRCG grant no. 10201939.
[7]. In this paper, we consider the solution of Hermitian positive definite Toeplitz systems.
There are a number of specialized fast direct methods for solving such systems in O(n^2)
operations, see for instance [22]. Faster methods requiring O(n log^2 n) operations have
also been developed, see [1].
Strang in [21] proposed using the preconditioned conjugate gradient method with circulant
matrices as preconditioners for solving Toeplitz systems. The number of operations
per iteration is of order O(n log n) as circulant systems can be solved efficiently by fast
Fourier transforms. Several successful circulant preconditioners have been introduced and
analyzed; see for instance [11, 5]. In these papers, the given Toeplitz matrix A n is assumed
to be generated by a generating function f , i.e., the diagonals a j of A n are given by the
Fourier coefficients of f . It was shown that if f is a positive function in the Wiener class
(i.e., the Fourier coefficients of f are absolutely summable), then these circulant preconditioned
systems converge superlinearly [5]. However, if f has zeros, the corresponding
Toeplitz systems will be ill-conditioned. In fact, for the Toeplitz matrices generated by a
function with a zero of order 2p, their condition numbers grow like O(n^{2p}), see [19]. Hence
the number of iterations required for convergence will increase like O(n^p), see [2, p.24].
Tyrtyshnikov [23] has proved that the Strang [21] and the T. Chan [11] preconditioners
both fail in this case.
To tackle this problem, non-circulant type preconditioners have been proposed, see
[6, 4, 18, 16]. The basic idea behind these preconditioners is to find a function g that
matches the zeros of f . Then the preconditioners are constructed based on the function
g. These approaches work when the generating function f is given explicitly, i.e., all
Fourier coefficients {a_k}, k = 0, ±1, ±2, . . ., of f are available. However, when we are given only a finite
n-by-n Toeplitz system, i.e., only {a_j}_{|j|<n} are given and the underlying f is unknown,
then these preconditioners cannot be constructed. In contrast, most well-known circulant
preconditioners, such as the Strang and the T. Chan preconditioners, are defined using only
the entries of the given Toeplitz matrix. Di Benedetto in [3] has proved that the condition
numbers of the matrices preconditioned by sine transform preconditioners are uniformly
bounded. However, the preconditioners themselves may be singular or indefinite in general.
Our aim in this paper is to develop a family of positive definite circulant preconditioners
that work for ill-conditioned Toeplitz systems and do not require the explicit knowledge
of f, i.e., they require only {a_j}_{|j|<n} for an n-by-n Toeplitz system.
Our idea is based on the unified approach proposed in Chan and Yeung [9], where
they showed that circulant preconditioners can be derived in general by convolving the
generating function f with some kernels. For instance, convolving f with the Dirichlet
kernel D_{⌊n/2⌋} gives the Strang preconditioner. They proved that for any positive
2π-periodic continuous function f, if C_n is a kernel such that the convolution product C_n * f
tends to f uniformly on [−π, π], then the corresponding circulant preconditioned matrix
C_n[C_n * f]^{-1} A_n will have clustered spectrum. In particular, the conjugate gradient method will
converge superlinearly when solving the preconditioned system. This result turns the
problem of finding a good preconditioner into the problem of approximating f with C_n * f.
Notice that D_{⌊n/2⌋} * f, being the partial sum of f, depends solely on the first ⌊n/2⌋
Fourier coefficients {a_j}_{|j|<⌊n/2⌋} of f. Thus the Strang preconditioner, and similarly
other circulant preconditioners constructed through kernels, does not require explicit
knowledge of f.
In this paper, we construct our preconditioners by approximating f with the convolution
product K_{m,2r} * f that matches the zeros of f and depends only on {a_j}_{|j|<n}. Here
K_{m,2r} is chosen to be a generalized Jackson kernel, see [15]. Since the K_{m,2r} are positive
kernels, our preconditioners are positive definite for all n. In comparison, the Dirichlet
kernel D_n is not positive and hence the Strang preconditioner is indefinite in general. We
will prove that if f has a zero of order 2p, then K_{m,2r} * f matches the zero of f when
r > p. Using this result, we can show that the spectra of the circulant preconditioned
matrices are uniformly bounded except for at most 2p outliers, and that their smallest
eigenvalues are bounded uniformly away from zero. It follows that the conjugate gradient
method, when applied to solving these circulant preconditioned systems, will converge
linearly. Since the cost per iteration is O(n log n) operations, see [7], the total complexity
of solving these ill-conditioned Toeplitz systems is of O(n log n) operations. In the case
when f is positive, we show that the spectra of the preconditioned matrices are clustered
around 1 and thus the method converges superlinearly. The case where f has multiple
zeros is more involved and will be considered in a future paper.
This paper is an expanded version of the proceedings paper [10] where some of the
preliminary results were reported. Recently Potts and Steidl [17] have proposed skew-
circulant preconditioners for ill-conditioned Toeplitz systems. Their idea is also to use
convolution products that match the zeros of f to construct the preconditioners. In par-
ticular, they have used the generalized Jackson kernels and the B-spline kernels proposed
in [8] in their construction. However, in order to guarantee that the preconditioners are
positive definite, the positions of the zeros of f are required, which in general may not be
readily available. In contrast, our circulant preconditioners can be constructed without
any explicit knowledge of the zeros of f .
The outline of the paper is as follows. In §2, we give an efficient method for computing
the eigenvalues of the preconditioners. In §3 we show that K_{m,2r} * f matches the zeros of f.
We then analyze the spectrum of the preconditioned matrices in §4. Numerical results are
given in §5 to illustrate the effectiveness of our preconditioners in solving ill-conditioned
Toeplitz systems. Concluding remarks are given in §6.
2 Construction of Circulant Preconditioners
Let C_{2π} be the space of all 2π-periodic continuous real-valued functions. The Fourier
coefficients of a function f in C_{2π} are given by

   a_k = (1/2π) ∫_{−π}^{π} f(θ) e^{−ikθ} dθ,   k = 0, ±1, ±2, . . . .

Since f is real-valued, a_{−k} = ā_k for all k. Let A_n[f] be the n-by-n Hermitian Toeplitz matrix with the
(i, j)th entry given by a_{i−j}, 1 ≤ i, j ≤ n. We will use C⁺_{2π} to denote the space of all
nonnegative functions in C_{2π} which are not identically zero. We remark that the Toeplitz
matrices A_n[f] generated by f ∈ C⁺_{2π} are positive definite for all n, see [6, Lemma 1].
Conversely, if f ∈ C_{2π} takes both positive and negative values, then A_n[f] will be non-definite.
In this paper, we only consider f ∈ C⁺_{2π}, so that the A_n[f] are positive definite
Hermitian Toeplitz matrices.
We say that θ_0 is a zero of f of order p if f(θ_0) = 0, p is the smallest positive
integer such that f^{(p)}(θ_0) ≠ 0, and f^{(p+1)}(θ) is continuous in a neighborhood of θ_0. By
Taylor's theorem,

   f(θ) = c (θ − θ_0)^p + O(|θ − θ_0|^{p+1}),   where c = f^{(p)}(θ_0)/p!,

for all θ in that neighborhood. Since f is nonnegative, c > 0 and p must be even. We
remark that the condition number of A_n[f] generated by such an f grows like O(n^p), see
[19]. In this paper, we will consider f having a single zero. The general case where f has
multiple zeros is more complicated and will be considered in a future paper.
The systems A_n[f] x = b will be solved by the preconditioned conjugate gradient
method with circulant preconditioners. It is well known that all n-by-n circulant matrices
can be diagonalized by the n-by-n Fourier matrix F_n, see [7]. Therefore, a circulant matrix
is uniquely determined by its set of eigenvalues. For a given function f, we define the
circulant preconditioner C_n[f] to be the n-by-n circulant matrix with its jth eigenvalue
given by

   λ_j(C_n[f]) = f(2πj/n),   j = 0, 1, . . . , n − 1.                     (1)

We note that C_n[f] = F_n* Λ_n F_n, where Λ_n is the diagonal matrix of the eigenvalues in (1).
Hence the matrix-vector multiplication C_n^{-1}[f] v, which is required in each iteration of the preconditioned conjugate
gradient method, can be done in O(n log n) operations by fast Fourier transforms. Clearly
if f is a positive function, then C_n[f] is positive definite.
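The role of (1) is easy to realize with FFTs. The following sketch (in Python with NumPy; it is an illustration only and not code from the paper, and the function name is our own) applies a circulant matrix with prescribed eigenvalues, and applies its inverse, in O(n log n) operations:

```python
import numpy as np

def circulant_from_eigenvalues(lam):
    """Return closures that apply C and C^{-1}, where C is the circulant
    matrix diagonalized by the FFT with eigenvalues lam[j]."""
    lam = np.asarray(lam, dtype=complex)

    def apply(v):
        # transform, scale by the eigenvalues, transform back
        return np.fft.ifft(lam * np.fft.fft(v))

    def solve(v):
        # inverse of C, valid when no eigenvalue is zero
        return np.fft.ifft(np.fft.fft(v) / lam)

    return apply, solve
```

The solve closure is exactly what one PCG iteration needs; each call costs two length-n FFTs.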
In the following, we will use the generalized Jackson kernel functions

   K_{m,2r}(θ) ≡ k_{m,2r} [ sin(mθ/2) / sin(θ/2) ]^{2r}                    (2)

to construct our circulant preconditioners. Here k_{m,2r} is a normalization constant such
that ∫_{−π}^{π} K_{m,2r}(θ) dθ = 1. It is known that k_{m,2r} is bounded above and below by constants
independent of m, see [15, p.57] or (11) below. We note that K_{m,2}(θ) is the Fejér kernel
and K_{m,4}(θ) is the Jackson kernel [15, p.57].
For any m, the Fejér kernel K_{m,2}(θ) can be expressed as

   K_{m,2}(θ) = Σ_{|k|<m} b_k^{(m,2)} e^{ikθ},                             (3)

where

   b_k^{(m,2)} = (1/2π)(1 − |k|/m),   |k| < m,

see for instance [9]. Note that ∫_{−π}^{π} K_{m,2}(θ) dθ = 2π b_0^{(m,2)} = 1. By (2), we see that K_{m,2r}(θ)
is the r-th power of K_{m,2}(θ) up to a scaling. Hence we have

   K_{m,2r}(θ) = Σ_{|k|≤r(m−1)} b_k^{(m,2r)} e^{ikθ},                      (4)

where the coefficients b_k^{(m,2r)} can be obtained by convolving the vector (b_{−(m−1)}^{(m,2)}, . . . , b_{m−1}^{(m,2)})
with itself r − 1 times, and this can be done by fast Fourier transforms, see
[20, pp.294-296]. Thus the cost of computing the coefficients {b_k^{(m,2r)}} for all |k| ≤ r(m−1)
is of order O(rm log m) operations. In order to guarantee that ∫_{−π}^{π} K_{m,2r}(θ) dθ = 1, we
can normalize b_0^{(m,2r)} to 1/(2π) by dividing all the coefficients b_k^{(m,2r)} by 2π b_0^{(m,2r)}.
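As a concrete illustration of this coefficient computation (a sketch only, not code from the paper; the 1/(2π) normalization follows the convention used above and should be checked against one's own setup), the Fejér coefficients can be convolved with themselves via zero-padded FFTs:

```python
import numpy as np

def jackson_coefficients(m, r):
    """Coefficients b_k^{(m,2r)}, |k| <= r*(m-1), of the generalized Jackson
    kernel, normalized so that b_0 = 1/(2*pi)."""
    # Fejer coefficients b_k^{(m,2)} = (1/(2*pi)) * (1 - |k|/m), |k| < m
    k = np.arange(-(m - 1), m)
    fejer = (1.0 - np.abs(k) / m) / (2.0 * np.pi)

    # r-fold linear self-convolution via a zero-padded FFT
    L = 2 * r * (m - 1) + 1                      # length of the final sequence
    nfft = int(2 ** np.ceil(np.log2(max(L, 2))))
    F = np.fft.fft(fejer, nfft)
    coeffs = np.real(np.fft.ifft(F ** r))[:L]    # index i corresponds to k = i - r(m-1)

    # renormalize so that b_0 = 1/(2*pi), i.e. the kernel integrates to 1
    b0 = coeffs[r * (m - 1)]
    coeffs /= (2.0 * np.pi * b0)
    return np.arange(-r * (m - 1), r * (m - 1) + 1), coeffs
```

The cost is dominated by the FFTs of length O(rm), in agreement with the O(rm log m) estimate above.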
The convolution product of two arbitrary functions g, h ∈ C_{2π} is defined as

   (g * h)(θ) ≡ ∫_{−π}^{π} g(θ − t) h(t) dt.

When we are given an n-by-n Toeplitz matrix A_n[f], our proposed circulant preconditioner
is C_n[K_{m,2r} * f]. By (3) and (4), since f(θ) = Σ_k a_k e^{ikθ}, the convolution product K_{m,2r} * f is given by

   (K_{m,2r} * f)(θ) = 2π Σ_k ã_k b_k^{(m,2r)} e^{ikθ},                    (6)

where

   ã_k = a_k if |k| < n, and ã_k = 0 otherwise.

Clearly, K_{m,2r} * f depends only on a_k for |k| < n, i.e., only on the entries of the
given n-by-n Toeplitz matrix A_n[f]. Notice that by (1), to construct our preconditioner
we only need the values of K_{m,2r} * f at 2πj/n for j = 0, 1, . . . , n − 1. By (6), these
values can be obtained by taking one fast Fourier transform of length n. Thus the cost of
constructing C_n[K_{m,2r} * f] is of O(n log n) operations.
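Continuing the sketch above (again illustrative Python reusing the jackson_coefficients routine; the interface and the storage of the a_k are our own assumptions), the n eigenvalues (K_{m,2r} * f)(2πj/n) can be formed from the Toeplitz entries with one length-n FFT:

```python
import numpy as np

def jackson_preconditioner_eigenvalues(a, n, m, r):
    """Eigenvalues lambda_j = (K_{m,2r}*f)(2*pi*j/n), j = 0,...,n-1, from the
    array a of length 2n-1 holding a_{-(n-1)}, ..., a_{n-1}."""
    ks, b = jackson_coefficients(m, r)        # sketch given earlier in this section

    c = np.zeros(n, dtype=complex)            # fold 2*pi*a_k*b_k onto residues mod n
    for k, bk in zip(ks, b):
        if -n < k < n:
            c[k % n] += 2.0 * np.pi * a[k + (n - 1)] * bk

    lam = n * np.fft.ifft(c)                  # lambda_j = sum_k c_k e^{2*pi*i*j*k/n}
    return lam.real                           # Hermitian data: eigenvalues are real
```

These values can then be passed directly to the circulant solve sketched in this section.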
We remark that the Strang [21] and the T. Chan [11] circulant preconditioners for
A_n[f] are just equal to C_n[D_{⌊n/2⌋} * f] and C_n[K_{n,2} * f] respectively, where D_{⌊n/2⌋} is the
Dirichlet kernel and K_{n,2}(θ) is the Fejér kernel, see [9].
3 Properties of the Kernel K_{m,2r}
In this section, we study some properties of K_{m,2r} in order to see how good the approximation
of f by K_{m,2r} * f will be. These properties are useful in the analysis of our circulant
preconditioners in §4. First we claim that our preconditioners are positive definite.
Lemma 3.1. Let f ∈ C⁺_{2π}. The preconditioner C_n[K_{m,2r} * f] is positive definite for all
positive integers m, n and r.
Proof: By (2), K_{m,2r}(θ) is positive except at finitely many points in [−π, π]. Since
f ∈ C⁺_{2π} is nonnegative and not identically zero, the function

   (K_{m,2r} * f)(θ) ≡ ∫_{−π}^{π} K_{m,2r}(θ − t) f(t) dt

is clearly positive for all θ ∈ [−π, π]. Hence by (1), the preconditioners C_n[K_{m,2r} * f] are
positive definite.
In the following, we will use θ to denote the function θ ↦ θ defined on the whole real line R.
For clarity, we will use θ_{2π} to denote the 2π-periodic extension of θ restricted to [−π, π],
i.e., θ_{2π}(θ) = θ for θ ∈ [−π, π] (see Figure 1 below). It is clear that θ^{2p}_{2π} ∈ C⁺_{2π}. We
first show that K_{m,2r} * θ^{2p}_{2π} matches the order of the zero of θ^{2p}_{2π} at θ = 0.
Lemma 3.2 Let p and r be positive integers with r ? p. Then
\Gamma-
where
Proof: The first two equalities in (7) are trivial by the definition of ' 2- . For the last
equality, since '=- sin('=2) - '=2 on [0; -], we have by (2)
\Gamma-
Z -sin 2r
Z m-
0sin 2r u
aeZ 1sin 2r u
Z 1sin 2r u
oe
aeZ 1u 2p du
Z 11
oe
On the other hand, we also have
\Gamma-
Z -sin 2r
Z 1sin 2r u
By setting
\Gamma-
Putting (11) back into (9) and (10), we then have (8).
We remark that, using the same arguments as in (10), we can show that K_{m,2} * θ^{2p}_{2π} does
not match the order of the zero of θ^{2p}_{2π} at θ = 0 when p ≥ 1, i.e., the T. Chan kernel fails
to match the zero; see (12). We will see in §5 that the T. Chan preconditioner does not work for Toeplitz
matrices generated by functions with zeros of order greater than or equal to 2.
Next we estimate (K m;2r ' 2p
2- )(OE) for OE 6= 0. In order to do so, we first have to replace
the function ' 2p
2- in the convolution product by ' 2p defined on R.
Lemma 3.3 Let p be a positive integer. Then
\Theta
and
Proof: To prove (13), we first claim that
By the definition of
2- , we have (see Figure 1)
For \Gamma-=2], we have
For \Gamma-=2], we have
Thus we have (16).
By (16), we see that
\Gamma-
\Gamma-
\Theta K m;2r ' 2p ('
Similarly, we also have
5-
Thus, we have (13).
To prove (14), we just note that
As for (15), we have
With Lemmas 3.2 and 3.3, we show that K m;2r ' 2p
2- and ' 2p
are essentially the same
away from the zero of ' 2p
2- .
Figure 1: The functions θ and θ_{2π}.
Theorem 3.4. Let p and r be positive integers with r > p, and let m = ⌈n/r⌉. Then there
exist positive numbers α and β independent of n such that for all sufficiently large n,

   α ≤ (K_{m,2r} * θ^{2p}_{2π})(φ) / θ^{2p}_{2π}(φ) ≤ β   for all π/n ≤ |φ| ≤ π.     (17)
Proof: We see from Lemma 3.3 that for different values of OE, (K m;2r ' 2p
2- )(OE) can be
replaced by different functions. Hence, we proceed the proof for different ranges of values
of OE.
We first consider OE 2 [-=n; -=2]. By the binomial expansion,
\Gamma-
\Gamma-
For odd k,
\Gamma- K m;2r (t)t k
OE \Gamma2k
\Gamma-
K m;2r (t)t 2k dt:
By (7), we then have \Gamma K m;2r ' 2p
where by (8), c k;2r are bounded above and below by positive constants independent of m
by (5), -=r -m=n - OEm, we have
Thus by (18),
Hence by (14), (17) follows for OE 2 [-=n; -=2].
The case with OE 2 [\Gamma-=2; \Gamma-=n] is similar to the case where OE 2 [-=n; -=2].
Next we consider the case OE 2 [-=2; -]. Note that
\Theta K m;2r ' 2p
\Gamma-
\Gamma-
dt
where
is a degree 4p polynomial without the constant term. By (7), we have
\Gamma-
c 2j;2r
Thus
\Theta
c 2j;2r
2- for OE 2 [-=2; -], we have
\Theta
which is clearly bounded independent of n. For the lower bound, we use the fact that
2- for OE 2 [-=2; -] in (19), then we have
\Theta K m;2r ' 2p
c 2j;2r
c 2j;2r
for sufficiently large n (and hence large m), the last expression is bounded uniformly
from below say by - 2p =2. Combining (20), (21) and (15), we see that (17) holds
for OE 2 [-=2; -] and for n sufficiently large.
The case where OE 2 [\Gamma-; \Gamma-=2] can be proved in a similar way as above.
Using the fact that convolution with K_{m,2r} commutes with translation, i.e.,
(K_{m,2r} * g(· − γ))(φ) = (K_{m,2r} * g)(φ − γ),
we obtain the following corollary, which deals with functions having a zero at γ ≠ 0.
Corollary 3.5. Let γ ∈ [−π, π], and let p and r be positive integers with r > p and m = ⌈n/r⌉.
Then there exist positive numbers α and β, independent of n, such that for all sufficiently
large n,

   α ≤ (K_{m,2r} * θ^{2p}_{2π}(· − γ))(φ) / θ^{2p}_{2π}(φ − γ) ≤ β   for all π/n ≤ |φ − γ| ≤ π.
Now we can extend the results in Theorem 3.4 to any function in C⁺_{2π} with a single
zero of order 2p.
Theorem 3.6. Let f ∈ C⁺_{2π} have a zero of order 2p at γ ∈ [−π, π]. Let r > p be any
integer and m = ⌈n/r⌉. Then there exist positive numbers α and β, independent of n,
such that for all sufficiently large n,

   α ≤ (K_{m,2r} * f)(φ) / f(φ) ≤ β   for all π/n ≤ |φ − γ| ≤ π.
Proof: By the definition of zeros (see §2), f(θ) = θ^{2p}_{2π}(θ − γ) g(θ) for some positive continuous
function g(θ) on [−π, π]. Write
(K m;2r f) (OE)
\Deltag(OE)
Clearly the last factor is uniformly bounded above and below by positive constants. By
Corollary 3.5, the same holds for the second factor when -=n As for the
first factor, by the Mean Value Theorem for integrals, there exists a i 2 [\Gamma-] such that
Hence
where g min and g max are the minimum and maximum of g respectively. Thus the theorem
follows.
So far we have considered only the range π/n ≤ |φ − γ| ≤ π. We
now show that the convolution product K_{m,2r} * f matches the order of the zero of f at
the zero of f.
Theorem 3.7. Let f ∈ C⁺_{2π} have a zero of order 2p at γ ∈ [−π, π]. Let r > p be any
integer and m = ⌈n/r⌉. Then for any |φ − γ| ≤ π/n, we have

   (K_{m,2r} * f)(φ) = O(1/n^{2p}).
Proof: We first prove the theorem for the function
. By the binomial theorem,
\Gamma-
\Gamma-
Since
\Gamma- K m;2r (t)t j we have for jOEj -=n,
\Gamma-
\Gamma-
By (7), (8) and (5), we then have
Hence by (14),
On the other hand, from (22), (8) and (5),
we have
(K m;2r ' 2p )(OE) -
\Gamma-
O
Hence by (14) again,
Thus the theorem holds for
2- .
In the general case where
2- g(') for some positive function g 2 C 2- , by
the Mean Value Theorem for integrals, there exists a i 2 [\Gamma-] such that
(K m;2r
Hence
min \Delta (K m;2r ' 2p
for all OE 2 [\Gamma-]. Here g min and g max are the minimum and maximum of g respectively.
From the first part of the proof, we already see that (K m;2r ' 2p
is of O
1=n 2p
for all jOE \Gamma flj -=n, hence the theorem follows.
4 Spectral Properties of the Preconditioned Matrices
4.1 Functions with a Zero
In this subsection, we analyze the spectra of the preconditioned matrices when the generating
function has a zero. We will need the following lemma.
Lemma 4.1 [4, 16]. Let f ∈ C⁺_{2π}. Then A_n[f] is positive definite for all n. Moreover, if
g ∈ C⁺_{2π} is such that 0 < α ≤ f/g ≤ β for some constants α and β, then for all n,

   α ≤ (x* A_n[f] x) / (x* A_n[g] x) ≤ β   for all x ≠ 0.
Next, we have our first main theorem which states that the spectra of the preconditioned
matrices are essentially bounded.
Theorem 4.2. Let f ∈ C⁺_{2π} have a zero of order 2p at γ. Let r > p and m = ⌈n/r⌉.
Then there exist positive numbers α < β, independent of n, such that for all sufficiently
large n, at most 2p + 1 eigenvalues of the preconditioned matrix C_n[K_{m,2r} * f]^{-1} A_n[f]
are outside the interval [α, β].
Proof: For any function g 2 C 2- , we let ~
[g] to be the n-by-n circulant matrix with the
j-th eigenvalue given by
there is at most one j such that j2-j=n \Gamma flj ! -=n, by (1),
~
is a matrix of rank at most 1.
By assumption, positive function g in C 2- . We
use the following decomposition of the Rayleigh quotient to prove the theorem:
x A n [f ]x
x A n
sin 2p
x
x A n
sin 2p
x
x ~
sin 2p
x
x ~
sin 2p
x
x ~
x ~
x ~
x ~
We remark that by Lemma 4.1 and the definitions (1) and (23), all matrices in the factors
in the right hand side of (24) are positive definite.
As g is a positive function in C 2- , by Lemma 4.1, the first factor in the right hand
side of (24) is uniformly bounded above and below. Similarly, by (23), the third factor is
also uniformly bounded. The eigenvalues of the two circulant matrices in the fourth factor
differ only when j2-j=n \Gamma flj -=n. But by Theorem 3.6, the ratios of these eigenvalues
are all uniformly bounded when n is large. The eigenvalues of the two circulant matrices
in the last factor differ only when j2-j=n -=n. But by Theorem 3.7, their ratios
are also uniformly bounded.
It remains to handle the second factor. Define s 2p (') j sin 2p ( '\Gammafl
i.e., s 2p (') is a p-th degree trigonometric polynomial in '. Recall that for any function
the convolution product of the Dirichlet kernel D n with h is just
equal to the nth partial sum of h, i.e., (D n
j=\Gamman b j e ij' . Hence for n - 2p,
(D bn=2c s 2p
Since C n [D bn=2c s 2p (')] is the Strang preconditioner for A n [s 2p (')], see [9], C n [s 2p (')]
will be the Strang preconditioner for A n [s 2p (')] when n - 2p. As s 2p (') is a p-th degree
trigonometric polynomial, A n [s 2p (')] is a band Toeplitz matrix with half bandwidth p+ 1.
Therefore when n - 2p, by the definition of the Strang preconditioner,
R
where R p is a p-by-p matrix, see [21]. Thus A n [s 2p
where the n-by-n
matrix R n is of rank at most 2p + 1.
Putting this back into the numerator of the second factor in (24), we have
x A n [f ]x
x ~
x ~
x ~
x ~
x ~
x A n [f ]x
x R n x
Notice that for all sufficiently large n, except for the last factor, all factors above are
uniformly bounded below and above by positive constants. We thus have
x A n [f ]x
when n large, where
Hence for large n,
x
If R n has q positive eigenvalues, then by Weyl's theorem [13, p.184], at most q eigenvalues
of C n [K m;2r f are larger than ff max . By using a similar argument, we can prove
that at most 2p are less than ff min . Hence the
theorem follows.
Finally we prove that all the eigenvalues of the preconditioned matrices are bounded
from below by a constant independent of n. Hence the computational cost for solving this
class of n-by-n Toeplitz systems will be of O(n log n) operations.
Theorem 4.3. Let f ∈ C⁺_{2π} have a zero of order 2p at γ. Let r > p and m = ⌈n/r⌉.
Then there exists a constant c > 0, independent of n, such that for all n sufficiently large, all
eigenvalues of the preconditioned matrix C_n[K_{m,2r} * f]^{-1} A_n[f]
are larger than c.
Proof: In view of the proof of Theorem 4.2, it suffices to get a lower bound of the
second Rayleigh quotient in the right hand side of (24). Equivalently, we have to get
an upper bound of ae(A \Gamma1
denotes the spectral radius and
We note that by the definition (23), ~
the zero matrix or is given by
F
diag
for some j such that j2-j=n \Gamma flj ! -=n. Thus
By Lemma 4.1, A \Gamma1
positive definite. Thus the matrix
A
is similar to the symmetric matrix
A \Gamma1=2
Hence we have
ae
A
A \Gamma1=2
A \Gamma1=2
A \Gamma1=2
A
By [6, Theorem 1], we have
Hence the last term in (26) is of
O(1).
It remains to estimate the first term in (26). According to (25), we partition A \Gamma1
as
A
are p-by-p matrices. Then by (25),
ae
A
'-
where the last equality follows because the 3-by-3 block matrix in the equation has vanishing
central column blocks. In [3, Theorem 4.3], it has been shown that R p , B 11 , B 13 and
all have bounded ' 1 -norms and ' 1 -norms. Hence using the fact that ae(\Delta) - k
we see that (27) is bounded and the theorem follows.
By combining Theorems 4.2 and 4.3, the number of preconditioned conjugate gradient
iterations required for convergence is of O(1), see [3]. Since each PCG iteration
requires O(n log n) operations (see [7]), and so does the construction of the preconditioner
(see §2), the total complexity of the PCG method for solving Toeplitz systems generated
by f ∈ C⁺_{2π} is of O(n log n) operations.
4.2 Positive Functions
In this subsection, we consider the case where the generating function is strictly positive.
We note that the spectrum of A_n[f] is contained in [f_min, f_max], where f_min and f_max
are the minimum and maximum values of f, see [6, Lemma 1]. Since f_min > 0, A_n[f]
is well-conditioned. In [9], it was shown that for such f, the spectrum of C_n[K_{n,2} * f]^{-1} A_n[f]
is clustered around 1 and the PCG method converges superlinearly. Recall that
C_n[K_{n,2} * f] is just the T. Chan circulant preconditioner. In the following, we generalize
this result to other generalized Jackson kernels. First, it is easy to show that
f_min ≤ (K_{m,2r} * f)(φ) ≤ f_max. Thus the whole spectrum of C_n[K_{m,2r} * f]^{-1} A_n[f] is contained in
[f_min/f_max, f_max/f_min], i.e., the preconditioned system is also well-conditioned. We now
show that its spectrum is clustered around 1.
Theorem 4.4. Let f ∈ C_{2π} be positive. Then the spectrum of C_n[K_{m,2r} * f]^{-1} A_n[f] is
clustered around 1 for sufficiently large n. Here m = ⌈n/r⌉.
Proof: We first prove that K_{m,2r} * f converges to f uniformly on [−π, π]. For δ > 0,
let ω(f, δ) denote the modulus of continuity of f. It has the
property that ω(f, λδ) ≤ (1 + λ) ω(f, δ) for λ, δ > 0, see [15, p.43].
By the uniform continuity of f, for each ε > 0, there exists a δ > 0 such that ω(f, δ) < ε.
\Gamma-
\Gamma-
\Gamma-
\Gamma-
\Gamma-
is bounded by a constant independent of n (cf. the proof of
Lemma 3.2). Therefore, K_{m,2r} * f converges uniformly to f. By [9, Theorem
1], the spectrum of C_n[K_{m,2r} * f]^{-1} A_n[f] is
clustered around 1 for sufficiently large n.
As an immediate consequence, we can conclude that when f is positive and C_n[K_{m,2r} * f]
is used as the preconditioner, the PCG method converges superlinearly, see for instance
[5].
5 Numerical Experiments
In this section, we illustrate by numerical examples the effectiveness of the preconditioner
C_n[K_{m,2r} * f] in solving Toeplitz systems. For comparisons, we also test the Strang [21]
and the T. Chan [11] circulant preconditioners. In the following, m is set to ⌈n/r⌉.
Example 1: The first set of examples is on mildly ill-conditioned Toeplitz systems where
the condition numbers of the systems grow like O(n^ν) for some ν > 0. They correspond
to Toeplitz matrices generated by functions having zeros of order ν, see [19]. Because of
the ill-conditioning, the conjugate gradient method will converge slowly and the number
of iterations required for convergence grows like O(n^{ν/2}) [2, p.24]. However, we will see
that using our preconditioner C_n[K_{m,2r} * f] with 2r > ν, the preconditioned system will
converge linearly, i.e., the number of iterations required for convergence is independent of
n.
We solve Toeplitz systems A_n[f] x = b by the preconditioned conjugate gradient
method for twelve nonnegative test functions. Since the functions are nonnegative, the
matrices A_n[f] so generated are all positive definite. We remark that if f takes negative values, then
A_n[f] will be non-definite for large n. As mentioned in §2, the construction of our preconditioners
for an n-by-n Toeplitz matrix requires only the diagonal entries {a_j}_{|j|<n}
of the given Toeplitz matrix. No explicit knowledge of f is required. In the tests, the
right-hand side vectors b are formed by multiplying A_n[f] by random vectors. The initial
guess is the zero vector, and the stopping criterion is based on ||r_q||_2/||r_0||_2, where r_q is the
residual vector after q iterations.
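For readers who wish to reproduce experiments of this kind, the following self-contained sketch (not the authors' code; the tolerance and interfaces are placeholders) embeds A_n[f] in a circulant of order 2n for O(n log n) matrix-vector products and runs preconditioned conjugate gradients with a circulant preconditioner applied by FFTs:

```python
import numpy as np

def toeplitz_matvec(col, row, v):
    """Multiply the Toeplitz matrix with first column `col` and first row
    `row` by v, via embedding in a circulant matrix of order 2n."""
    n = len(v)
    c = np.concatenate([col, [0.0], row[:0:-1]])          # first column of the embedding
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(v, 2 * n))[:n]

def pcg(matvec, b, prec_solve, tol=1e-7, maxit=500):
    """Preconditioned conjugate gradients for Hermitian positive definite systems."""
    x = np.zeros_like(b, dtype=complex)
    r = b - matvec(x)
    z = prec_solve(r)
    p = z.copy()
    rz = np.vdot(r, z)
    r0 = np.linalg.norm(r)
    for it in range(maxit):
        Ap = matvec(p)
        alpha = rz / np.vdot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) / r0 < tol:
            return x, it + 1
        z = prec_solve(r)
        rz_new = np.vdot(r, z)
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit
```

Here matvec would be lambda v: toeplitz_matvec(col, row, v), and prec_solve the FFT-based circulant solve sketched in §2, with eigenvalues given, for example, by jackson_preconditioner_eigenvalues.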
Tables 1-4 show the numbers of iterations required for convergence for different choices
of preconditioners. In the tables, I denotes no preconditioner, S the Strang preconditioner
[21], K_{m,2r} the preconditioners from the generalized Jackson kernel K_{m,2r} defined in
(2), and T the T. Chan preconditioner [11]. Iteration numbers of more than 3,000
are denoted by "†". We note that S in general is not positive definite, as the Dirichlet
kernel D_n is not positive, see [9]. When some of its eigenvalues are negative, we denote
the iteration number by "-", as the PCG method does not apply to non-definite systems
and the solution thus obtained may be inaccurate.
The first two test functions in Table 1 are positive functions and therefore correspond
to well-conditioned systems. Notice that the iteration number for the non-preconditioned
systems tends to a constant when n is large, indicating that the convergence is linear.
In this case, we see that all preconditioners work well and the convergence is fast, see
Theorem 4.4 and [9].
Table 1: Numbers of iterations for well-conditioned systems.
The four test functions in Table 2 are nonnegative functions with single or multiple
zeros of order 2 on [−π, π]. Thus the condition numbers of the Toeplitz matrices are
growing like O(n^2), and hence the numbers of iterations required for convergence without
using any preconditioners is increasing like O(n). We see that for these functions, the T.
Chan preconditioner does not work. This is to be expected from the fact that the order of
K_{m,2} * θ^2 does not match that of θ^2 at θ = 0, see (12). However, we see that K_{m,4}, K_{m,6},
and K_{m,8} all work very well as predicted from our convergence analysis in §4.
When the order of the zero is 4, like the two test functions in Table 3, the condition
number of the Toeplitz matrices will increase like O(n^4) and the matrices will be very
ill-conditioned even for moderate n. We see from the table that both the Strang and the
T. Chan preconditioners fail. For the T. Chan preconditioner, the failure is also to be
expected from the fact that the order of K_{m,2} * θ^4 does not match that of θ^4 at θ = 0, see
(12). As predicted by our theory, K_{m,6} and K_{m,8} still work very well. The numbers of
iterations required for convergence are roughly constant, independent of n.
Table 2: Numbers of iterations for functions with order 2 zeros.
Table 3: Numbers of iterations for functions with order 4 zeros.
Table 4: Numbers of iterations for other functions.
In Table 4, we test functions that our theory does not cover. The first two functions are
not differentiable at their zeros. The last two functions have slowly decaying Fourier
coefficients; we found numerically that their minimum values (the second of these functions
being Σ_{|k|<1024} e^{ikθ}/(|k|^{0.5} + 1)) are approximately equal to 0.3862 and 0.4325 respectively. Hence
the last two test functions are approximately zero at some points in [−π, π]. Table 4
shows that the K_{m,2r} preconditioners still perform better than the Strang and the T.
Chan preconditioners.
To further illustrate Theorems 4.2 and 4.3, we give in Figures 2 and 3 the spectra of
the preconditioned matrices for all five preconditioners for two representative test functions.
We see that the spectra of the preconditioned matrices for K_{m,6} and K_{m,8} are in a small
interval around 1 except for one to two large outliers, and that all the eigenvalues are well
separated away from 0. We note that the Strang preconditioned matrices in both cases
have negative eigenvalues, and they are not depicted in the figures.
Example 2: In image restoration, because blurring is an averaging process, the
resulting matrix is usually strongly ill-conditioned in the sense that its condition number
grows exponentially with respect to its size n. In contrast, the condition numbers of the
mildly ill-conditioned matrices considered in Example 1 increase only polynomially
in n. Regularization techniques have been used for some time in mathematics and
engineering to treat these strongly ill-conditioned systems. The idea is to restrict the
solution to some smooth function space [14]. This approach has been adopted in the
circulant preconditioned conjugate gradient method and is very successful when applied
to ground-based astronomy [7].
To illustrate the idea, we use a "prototype" image restoration problem given in [12].
Figure 2: Spectra of preconditioned matrices (panels: Strang preconditioner (has negative eigenvalues), T. Chan preconditioner, K_{m,4}, K_{m,6}, and K_{m,8} Jackson preconditioners).
Figure 3: Spectra of preconditioned matrices (panels: Strang preconditioner (has negative eigenvalues), T. Chan preconditioner, K_{m,4}, K_{m,6}, and K_{m,8} Jackson preconditioners).
Consider a 100-by-100 Toeplitz matrix A with (i, j)th entries given by a truncated Gaussian:
a_{ij} = 0 if |i − j| exceeds a prescribed bandwidth, and a_{ij} is proportional to
exp(−(i − j)^2/(2σ^2)) otherwise, with the bandwidth and σ as in [12].
Blurring matrices of this form (called the truncated Gaussian blur) occur in many image
restoration contexts and are used to model certain degradations in the recorded image.
The condition number of A is approximately 2.3 × 10^6. Thus if no regularization is used,
the result obtained will be very inaccurate.
In our experiment, we solve the regularized least squares problem min_x { ||b − Ax||_2^2 + α||x||_2^2 }
as suggested in [12]. The problem is equivalent to the normal equations (αI + AᵀA)x = Aᵀb,
which we solve by the preconditioned conjugate gradient method. We choose
the solution vector x with its entries given by (28), see [12], and then we compute b = Ax. A noise vector is added to b, where each component
of the noise vector is taken from a normal distribution with mean zero. The stopping
criterion is again based on ||r_q||_2/||r_0||_2, where r_q is the residual
vector after q iterations.
We choose the optimal regularization parameter α such that it minimizes the relative
error between the computed solution x(α) of the normal equations and the original solution
x given in (28), i.e., α minimizes ||x(α) − x||_2/||x||_2. By trial and error, it is found to be
of the order of 10^{−6} (up to one digit of accuracy). The preconditioner we used for the normal equations
is of the form αI + C*C, where the circulant matrix C is chosen to be S, T, K_{m,4}, K_{m,6}, and K_{m,8}. The
corresponding numbers of iterations required for convergence are equal to 21, 33, 22, 22,
and 23 respectively. The number of iterations without preconditioning is 171. The relative
error of the regularized solution is about 3.1 × 10^{−1}. In contrast, it is about 6.9 × 10^{+2}
if no regularization is used. Thus we see that our preconditioners also work for strongly
ill-conditioned systems after they are regularized.
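A minimal sketch of this regularized solve (illustrative Python reusing the pcg routine sketched earlier; apply_A, alpha, and the circulant eigenvalues c_eig are assumed to be supplied by the user, and A is assumed symmetric, as the truncated Gaussian blur is):

```python
import numpy as np

def regularized_solve(apply_A, c_eig, b, alpha):
    """Solve (alpha*I + A^T A) x = A^T b by PCG with the circulant-based
    preconditioner alpha*I + C*C, where C has eigenvalues c_eig."""
    normal_matvec = lambda v: alpha * v + apply_A(apply_A(v))   # uses A = A^T
    prec_eigs = alpha + np.abs(c_eig) ** 2                      # eigenvalues of alpha*I + C*C
    prec_solve = lambda v: np.fft.ifft(np.fft.fft(v) / prec_eigs)
    return pcg(normal_matvec, apply_A(b), prec_solve)           # pcg: sketch from Example 1
```

The preconditioner solve still costs only two FFTs per iteration, so the regularized system retains the O(n log n) cost per PCG step.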
6 Concluding Remarks
We remark that even for mildly ill-conditioned matrices with condition number of order
O(n^{2p}), if p is large then the matrix A_n will be very ill-conditioned already for moderate
n, and regularization is also needed in this case. Once the system is
regularized, our preconditioner C_n[K_{m,8} * f] will work even if the order 2p of the zero
exceeds 6, cf. Example 2 in §5
for instance. Hence in general, the circulant preconditioner C_n[K_{m,8} * f] should be able
to handle all cases, whether the matrix A_n is well-conditioned, mildly ill-conditioned, or
very ill-conditioned but regularized.
--R
Superfast solution of real positive definite Toeplitz systems
Finite Element Solution of Boundary Value Problems
Analysis of preconditioning techniques for ill-conditioned Toeplitz matrices
Circulant Preconditioners for Hermitian Toeplitz Systems
Toeplitz Preconditioners for Toeplitz Systems with Nonnegative Generating Functions
Conjugate Gradient Methods for Toeplitz Systems
Circulant Preconditioners from B-Splines
Circulant Preconditioners Constructed from Kernels
Circulant Preconditioners for Ill-Conditioned Hermitian Toeplitz Matrices
An Optimal Circulant Preconditioner for Toeplitz Systems
An algorithm for the regularization of ill-conditioned
Matrix Analysis
The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind
Approximation of Functions
Preconditioners for Ill-Conditioned Toeplitz Matrices
Preconditioners for Ill-Conditioned Toeplitz Systems Constructed from Positive Kernels
Preconditioning Strategies for Hermitian Toeplitz Systems with Nondefinite Generating Functions
On the extreme eigenvalues of Hermitian (block) Toeplitz matrices
Introduction to Applied Mathematics
A Proposal for Toeplitz Matrix Calculations
An Algorithm for the Inversion of Finite Toeplitz Matrices
Circulant Preconditioners with Unbounded Inverses
--TR
--CTR
Weiming Cao , Ronald D. Haynes , Manfred R. Trummer, Preconditioning for a Class of Spectral Differentiation Matrices, Journal of Scientific Computing, v.24 n.3, p.343-371, September 2005 | preconditioned conjugate gradient method;kernel functions;toeplitz systems;circulant preconditioner |
588420 | Existence Verification for Singular Zeros of Complex Nonlinear Systems. | Computational fixed point theorems can be used to automatically verify existence and uniqueness of a solution to a nonlinear system of n equations in n variables ranging within a given region of n-space. Such computations succeed, however, only when the Jacobi matrix is nonsingular everywhere in this region. However, in problems such as bifurcation problems or surface intersection problems, the Jacobi matrix can be singular, or nearly so, at the solution. For n real variables, when the Jacobi matrix is singular, tiny perturbations of the problem can result in problems either with no solution in the region, or with more than one; thus no general computational technique can prove existence and uniqueness. However, for systems of n complex variables, the multiplicity of such a solution can be verified. That is the subject of this paper.Such verification is possible by computing the topological degree, but such computations heretofore have required a global search on the (n-1)-dimensional boundary of an n-dimensional region. Here it is observed that preconditioning leads to a system of equations whose topological degree can be computed with a much lower-dimensional search. Formulas are given for this computation, and the special case of rank-defect one is studied, both theoretically and empirically.Verification is possible for certain subcases of the real case. That will be the subject of a companion paper. | Introduction
. Given an approximate solution -
x to a nonlinear system of
equations F
is useful in various contexts to construct bounds
around -
x in which it is proven that there exists a unique solution x # , F
continuously di#erentiable F for which the Jacobian det(F # (x #= 0 and for which
that Jacobian is well conditioned, interval computations have no trouble proving that
there is a unique solution within small boxes with x # reasonably near the center; see
[8], [16], [23]. However, if F # conditioned or singular, such computations
necessarily must fail. In the singular case, for some classes of systems F
arbitrarily small perturbations of the problem can lead to no solutions
or an even number of solutions, so multiplicity verification is not logical. In contrast,
verification is always possible if F maps C n into C n . Here, algorithms are developed
for the multiplicity of such solutions for F
The algorithms are presented in the context of solutions that lie near the real line
of complex extensions of real systems. (Such solutions arise, for example, in bifurcation
problems.) However, the algorithms can be generalized to arbitrary solutions
z # C n with z not necessarily near the real line.
Also, verification is possible for singular solutions of particular general classes of
We will cover this in a separate paper.
# Received by the editors September 10, 1999; accepted for publication (in revised form) February
21, 2000; published electronically July 19, 2000. This work was supported by National Science
Foundation grant DMS-9701540.
http://www.siam.org/journals/sinum/38-2/36107.html
Department of Mathematics, University of Louisiana at Lafayette, Lafayette, LA 70504 (rbk@
louisiana.edu, dian@louisiana.edu).
# Institut für Mathematik, Universität Wien, Strudhofgasse 4, A-1050 Wien, Austria (neum@cma.univie.ac.at).
1.1. Previous work, related material, and references. The emphasis in
this paper is on rigorous verification of existence of a zero of a system of nonlinear
equations in a small region containing an approximate, numerically computed solu-
tion. Verification for F : R n
with the Jacobi matrix of F nonsingular at points
x with F done with computational fixed point theorems based on interval
Newton methods. Such methods are introduced, for example, in the books [2], [8],
[11], [16], [21], and [23].
The techniques in this paper for handling singularities are based on the topological
degree. Introductions to degree theory include parts of [3] (in German) or [20]. A
basic computational procedure for the degree over large regions appears in Stenger
[27]. Stynes [28], [29] and Kearfott [12], [13], [14] derived additional formulas and
algorithms based on Stenger's results. These degree computation procedures, however,
involved heuristics, and the result was not guaranteed to be correct. Aberth [1] based
a verified degree computation method on interval Newton methods and a recursive
degree-computation formula such as Theorem 2.2 below. The work here differs from
this previous work in two important aspects:
. The algorithms here execute in polynomial time with respect to the number
of variables and equations, 1 and
. the algorithms here assume at least second-order smoothness, and are meant
to compute the degree over small regions containing the solution, over which
certain asymptotic approximations are valid.
The treatment of verified existence represented in this paper involves computation
of the topological degree in n-dimensional complex space. In loosely related work,
Vrahatis et al. develop an algorithm for computing complex zeros of a function of a
complex variable in [31].
Finally, most of the literature we know on specialized methods for finding complex
zeros, verified or otherwise, of equations and systems of equations deals with
polynomial systems. Along these lines, continuation methods, as introduced in [6]
and [22], figure prominently. The article [4] contains methods for determining the
complex zeros of a single polynomial, while [7] and [9] contain verified methods for
determining the complex zeros of a single polynomial.
1.2. Notation. We assume familiarity with the fundamentals of interval arith-
metic; see [16, 23] for an introduction in the present context. (The works [2], [8], [24]
also contain introductory material.)
Throughout, scalars and vectors will be denoted by lower case, while matrices
will be denoted by upper case. Intervals, interval vectors (also called "boxes"), and
interval matrices will be denoted by boldface. For instance,
an interval vector, denotes an
interval matrix. Real n-space will be denoted by R n , while the set of n-dimensional
interval matrices will be denoted by IR n-n . Similarly, complex n-space will be denoted
by C n . The midpoint of an interval or interval vector x will be denoted by m(x). The
nonoriented boundary of a box x will be denoted by #x while its oriented boundary
will be denoted by b(x) (see section 2).
1.3. Traditional computational existence and uniqueness. Computational
existence and uniqueness verification rests on interval versions of Newton's method.
Typically, such computations can be described as evaluation of a related interval
1 The general degree computation problem is NP-complete; see [26].
operator implies existence and uniqueness of the solution of
To describe these, we review the following definition.
Definition 1.1 (see [23, p. 174], etc. Let F : R n
. The matrix A is said
to be a Lipschitz matrix for F over x provided for every x # x and y # x, F (x) -
A.
Most interval Newton methods for F : R n
abstractly, are of the general
where v is computed to contain the solution set to the interval linear system
and where, for initial uniqueness verification, A is generally a Lipschitz matrix 2 for F
over the box (interval vector) x and - x # x is a guess point. We sometimes write F # (x)
in place of A, since the matrix can be an interval extension of the Jacobi matrix of
F . Uniqueness verification traditionally depends on regularity of the matrix A. We
have the following lemma.
Lemma 1.2. (see [16], [23]). Suppose -
is the image under the interval
Newton method (formula (1.1)), where v is computed by any method that bounds the
solution set to the interval linear system (1.2), and -
x # x. Then A is regular.
The method of bounding the solution set of (1.2) to be considered here is the
interval Gauss-Seidel method, defined by the following definition.
Definition 1.3. The preconditioned interval Gauss-Seidel image GS(F ; x, - x) of
a box x is defined as GS(F ; x, -
x i is defined sequentially for
to n by
where
and where -
is an initial guess point, Y A # IR n-n and Y F (-x) are the
matrix and right-hand-side vector for the preconditioned interval system Y A(x-
-Y F (-x), Y # R n-n is a point preconditioning matrix, Y i denotes the ith row of Y ,
and A j denotes the jth column of A.
Lemma 1.2 applies when N(F
x), provided we specify that
x) be in the interior 3 int(x) of x. In particular, we have the following
theorem.
Theorem 1.4 (see [16], [23]). Suppose F : x # R n
A is a Lipschitz
matrix such as an interval extension F # (x) of the Jacobi matrix. If -
x is the image
under an interval Newton method as in formula (1.1) and -
x # int(x), then there is a
Various authors have proven Theorem 1.4; see [16], [23]. In particular, Miranda's
theorem can be used to easily prove Theorem 1.4 for
see
[19], [30], or [16, p. 60]. For worked-out examples, see [18, p. 3] or [17].
However, see [16, 25] for techniques for using slope matrices.
3 We must specify the interior because of the intersection step in Definition 1.3.
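To make Definition 1.3 and Theorem 1.4 concrete, the following is a minimal sketch (illustrative Python, not the authors' code; it uses ordinary floating point rather than outward rounding, so it is not itself rigorous) of one preconditioned interval Gauss-Seidel sweep:

```python
class Interval:
    """Toy interval type (no directed rounding, so not rigorous)."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (lo if hi is None else hi)
    def __add__(self, o):
        o = _iv(o); return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        o = _iv(o); return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        o = _iv(o)
        p = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
        return Interval(min(p), max(p))
    def __truediv__(self, o):
        o = _iv(o)
        assert o.lo > 0 or o.hi < 0, "division only when 0 is excluded"
        return self * Interval(1.0/o.hi, 1.0/o.lo)
    def intersect(self, o):
        return Interval(max(self.lo, o.lo), min(self.hi, o.hi))

def _iv(x):
    return x if isinstance(x, Interval) else Interval(x)

def gauss_seidel_sweep(YA, YF, x, x_check):
    """One preconditioned interval Gauss-Seidel sweep in the spirit of
    Definition 1.3: YA is an n-by-n list of Intervals (Y times the Lipschitz
    matrix), YF the preconditioned point residual Y F(x_check), x a list of
    Intervals, x_check the guess point (floats)."""
    n = len(x)
    xt = list(x)
    for i in range(n):
        s = _iv(YF[i])
        for j in range(n):
            if j != i:
                s = s + YA[i][j] * (xt[j] - x_check[j])
        cand = _iv(x_check[i]) - s / YA[i][i]
        xt[i] = cand.intersect(x[i])        # intersect with the original box
    return xt
```

In a rigorous implementation the arithmetic would use directed (outward) rounding; if the returned box lies in the interior of x, Theorem 1.4 then certifies existence and uniqueness of a root in x.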
Inclusion in the interval Gauss-Seidel method is possible because the inverse
midpoint preconditioner reduces the interval Jacobi matrix to approximately a diagonal
matrix. In the singular case, an incomplete factorization for the preconditioner
leads to an approximate diagonal matrix in the upper (n-1)- (n-1) submatrix, but
with approximate zeros in the last row. We discovered the methods in this paper by
viewing the interval Gauss-Seidel method on this submatrix, then applying special
techniques to the preconditioned nth function.
1.4. A simple singular example. Consider the following example.
Example 1. Take
and
Even though there is a unique root x
F is as in Example 1, the interval Gauss-Seidel method cannot prove this, since the
In fact, the interval Jacobi matrix is computed to
be
,
and the midpoint matrix is m(F #
). The midpoint matrix, often used
as the preconditioner Y , is singular. 4
Symbolic methods can be used to show that Example 1 has a unique solution at
arbitrarily small perturbations of the problem result in
either no solutions or two solutions. Consider the following example.
Example 2. Take
and
. Here, |#| is very small.
The system in Example 2 has two solutions for # < 0 and no solutions for # > 0.
Roundout in computer arithmetic and, perhaps, uncertainties in the system itself due
to modelling or measurement uncertainties, however, make it impossible to distinguish
systems such as in Example 2 for di#erent #, especially when computer arithmetic is
used as part of the verification process. In such instances, no verification is possi-
ble. However, if F is viewed as a complex function of two variables, then, for all #
su#ciently small, F has two solutions in a small box in C 2 containing the real point
(0, 0).
More generally, we can extend an n-dimensional box in R n to an n-dimensional
box in C n by adding a small imaginary part to each variable. If the system can
be extended to an analytic function in complex n-space (or if it can be extended to
a function that can be approximated by an analytic function), then the topological
degree gives the number of solutions, counting multiplicities, within the small region
in complex space. (See section 2 for an explanation of multiplicity.) For example,
Alternate preconditioners can nonetheless be computed; see [16]. However, it can be shown that
uniqueness cannot be proven in this case; see [16], [23].
the degree of the system in Example 2 within an extended box in complex space
can be computed to be 2, regardless of whether # is negative, positive, or zero. (See
the numerical results in section 8.) The topological degree corresponds roughly to
algebraic degree in one dimension; for example, the degree of z n in a small region in
containing 0 is n.
1.5. Organization of this paper. A review of properties of the topological
degree, to be used later, appears in section 2. The issue of preconditioning appears
in section 3. Construction of the box in the complex space appears in section 4.
Several algorithms have previously been proposed for computing the topological
degree [1], [12], [28], but these require computational e#ort equivalent to finding all
solutions to 4n (2n-1)-dimensional nonlinear systems within a given box, or worse. In
section 5, a reduction is proposed that allows computation of the topological degree
with a search in a space of dimension equal to the rank defect of the Jacobian matrix.
A theorem is proven that further simplifies the search.
In section 6, the actual algorithm is presented and its computational complexity is
given. Test problems and the test environment are described in section 7. Numerical
results appear in section 8. Future directions appear in section 9.
2. Review of some elements of degree theory. The topological degree or
Brouwer degree, well known within algebraic topology and nonlinear functional anal-
ysis, is both a generalization of the concept of a sign change of a one-dimensional
continuous function and of the winding number for analytic functions. It can be used
to generalize the concept of multiplicity of a root. The fundamentals will not be
reviewed here, but we refer to [3], [5], [12]. We present only the material we need.
Here we explain what we mean by "multiplicity." Actually, there is a more general
concept index (see [5, Chapter I]) for an isolated zero. The topological degree is equal
to the sum of the indices of zeros in the domain. The index is always positive in
our context. For this reason, we use the more suggestive term multiplicity as an
alternative term for index.
Suppose that F : D # C n
# C n is analytic. Then the real and imaginary
components of F and its argument z # C n may be viewed as real components in R 2n .
by -
R 2n by -
we have the following property of topological
degree
D, 0), and relationships between
D, 0) and the solutions of the system
Theorem 2.1 (see [5], [20], etc. Suppose F : D # C n
# C n is analytic, with
F (z) #= 0 for any z #D, and suppose -
D and -
D # R 2n are defined as above.
Then
D,
D, only if there is a solution z # D, F (z #
D, is equal to the number of solutions z # D, F (z # counting
multiplicities.
(4) If the Jacobi matrix F # (z # ) is nonsingular at every z # D with F (z #
then
D, is equal to the number of solutions z # D, F (z #
The following three theorems lead to the degree computation formula in Theorem
5.1 in section 5, the formula used in our computational scheme.
Theorem 2.2. (see [27, section 4.2]). Let D be an n-dimensional connected,
oriented region in R n and are continuous
functions defined in D. Assume F #= 0 on the oriented boundary b(D) of D, b(D) can
be subdivided into a finite number of closed, connected (n - 1)-dimensional oriented
subsets # k
and there is a
on the oriented boundary b(# k
n-1 ) of
has the same sign at all solutions of there are any, on # k
Choose s # {-1, +1} and let K 0 denote the subset of the integers k # {1, . , r}
such that has solutions on # k
n-1 and sgn(f p at each of those solutions.
Then
The formula in Theorem 2.2 is a combination of formulas (4.15) and (4.16) in
[27]. The orientation of D is positive and the orientations of # k
positive
or negative, are induced by the orientation of D. If we assume that the Jacobi matrices
of F-p are nonsingular at all solutions of
depending on whether # k
n-1 has positive orientation or
negative orientation, and JF-p (x) is the determinant of the Jacobi matrix of F-p at
x. (See Theorem 5.2 and Theorem 7.2 in Chapter I of [5].) Thus we can simplify the
formula in Theorem 2.2 as follows.
Theorem 2.3. Suppose the conditions of Theorem 2.2 are satisfied and, addi-
tionally, the Jacobi matrix of F-p is nonsingular at each solution of
for each k # K 0 (s). Then
depending on whether # k
n-1 has positive orientation or
negative orientation, and JF-p (x) is the determinant of the Jacobi matrix of F-p at
x.
In our context, the region D is an n-dimensional box
The boundary #x of x consists of 2n (n - 1)-dimensional
boxes
The following theorem, necessary for the main characterization used in our algo-
rithm, is a basic property of oriented domains in n-space and follows from definitions
such as in [3]. See [18, pp. 7-8] for a detailed derivation in terms of oriented simplices.
Theorem 2.4. If x is positively oriented, then the induced orientation of x k is
and the induced orientation of x k
is
The oriented boundary b(x) can be divided into x k and x k
the associated orientations. Also, F #= 0 on b(x) is the same as F #= 0 on #x.
Fig. 3.1. A singular system of rank n - p preconditioned with an incomplete LU factorization,
where "#" represents a nonzero element.
Now fix a p between 1 and n. Then F-p
) is the same
as F-p
. For this fixed p, let K 0 denote the subset of the
integers k # {1, . , n} such that has solutions on x k and sgn(f p at these
solutions, and let K 0 denote the subset of the integers k # {1, . , n} such that
has solutions on x k
and sgn(f p at these solutions, where s # {-1, +1}.
Then, by Theorem 2.3, we have the following theorem.
Theorem 2.5. Suppose F #= 0 on #x, and suppose there is
that
(1) F-p #= 0 on #x k or #x k ,
has the same sign at all solutions of there are any, on x k or
(3) the Jacobi matrices of F-p are nonsingular at all solutions of
Then
sgn
#F-p
x#x
sgn
#F-p
3. On preconditioning. The inverse midpoint preconditioner approximately
diagonalizes the interval Jacobi matrix when F # (and well enough
conditioned). This preconditioner can be computed with Gaussian elimination with
partial pivoting. We can compute (to within a series of row permutations) an LU
factorization of the midpoint matrix m F # (x) . The factors L and U may then be
applied to actually precondition the interval linear system.
When the rank of F # Gaussian elimination with
full pivoting can be used to reduce F # (x) to approximately the pattern shown in
Figure
3.1. Actually, an incomplete factorization based on full pivoting will put the
system into a pattern that resembles a permutation of the columns of the pattern
in
Figure
3.1. However, for notational simplicity, there is no loss here in assuming
exactly the form in Figure 3.1.
In the analysis to follow, we assume that the system has already been precondi-
tioned, so that it is, to within second-order terms with respect to w(x), of the form
in
Figure
3.1. Here we concentrate on the case p=1, although the idea can be applied
to the general case.
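The following sketch (illustrative Python; the tolerance and interface are our own assumptions, and it acts on the point midpoint matrix only) indicates how Gaussian elimination with full pivoting can expose a numerical rank defect and produce a preconditioner of the kind assumed in Figure 3.1:

```python
import numpy as np

def incomplete_full_pivot_preconditioner(J, tol=1e-8):
    """Eliminate with full pivoting on the (point) Jacobi matrix J, stopping
    when the remaining block is numerically zero. Returns the accumulated
    row-operation matrix Y, the column permutation, and the detected rank,
    so that Y @ J[:, col_perm] is approximately of the Figure 3.1 form."""
    A = np.array(J, dtype=float)
    n = A.shape[0]
    Y = np.eye(n)
    col_perm = np.arange(n)
    rank = 0
    for k in range(n):
        sub = np.abs(A[k:, k:])
        i, j = np.unravel_index(np.argmax(sub), sub.shape)
        if sub[i, j] <= tol * np.abs(A).max():
            break                                   # rank defect detected
        # swap rows k, k+i (also in Y) and columns k, k+j (record permutation)
        A[[k, k + i], :] = A[[k + i, k], :]
        Y[[k, k + i], :] = Y[[k + i, k], :]
        A[:, [k, k + j]] = A[:, [k + j, k]]
        col_perm[[k, k + j]] = col_perm[[k + j, k]]
        # eliminate above and below the pivot, then scale the pivot row to 1
        piv = A[k, k]
        for rr in range(n):
            if rr != k:
                factor = A[rr, k] / piv
                A[rr, :] -= factor * A[k, :]
                Y[rr, :] -= factor * Y[k, :]
        A[k, :] /= piv
        Y[k, :] /= piv
        rank += 1
    return Y, col_perm, rank
```

For rank defect one (the case p = 1 considered below), the last row of the reduced system consists of numerically small entries, as in the last row of Figure 3.1.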
4. The complex setting and system form. Below, we assume
R n can be extended to an analytic function in C n .
small box that will be constructed
centered at the approximate solution -
x is near a point x # with F such that #-x - x # is much smaller than
the width of the box x, and width of the box x is small enough so that mean
value interval extensions lead, after preconditioning, to a system like Figure
3.1, with small intervals replacing the zeros.
(4) F has been preconditioned as in Figure 3.1, and F # space of
dimension 1.
The following representation is appropriate under these assumptions:
.
to complex space: x iy, with y in a small box
is centered at (0, . , 0). Define
z #
iy)), and v k (x, y) #(f k
Then, if preconditioning based on complete factorization of the midpoint matrix for
F # (x) is used, the first-order terms are eliminated in the pattern of Figure 3.1, and,
,
#xn (-x)y n +O #(x -
,
and
(-x)y k y l +O #(x -
,
.
5. Simplification of a degree computation procedure. To use Theorem 2.5
to compute the topological degree
directly in a verification algorithm would
require a global search of the 4n (2n-1)-dimensional faces of the 2n-dimensional box
z for zeros of -
F-p . This is an inordinate amount of work for a verification process
that would normally require only a single step of an interval Newton method in the
nonsingular case. However, if the system is preconditioned and in the form described
in section 3 and section 4, the verification can be reduced to 4n-4 interval evaluations
and four one-dimensional searches.
To describe the simplification, define
Similarly define y k and y k
. Also define
F-un
To compute the degree
F , z, 0), we will consider -
F-un on the boundary of z. The
boundary of z consists of the 4n faces x 1 , x 1
Observe that, for
F-un
#xn
|#fk /#xn (-x)| #
w(x n ). Similarly, -
F-un
F-un
F-un
on y k
implies w(y k )/|#f k /#x n (-x)| # w(y n ). Thus if x n is chosen so that
min
,
then it is unlikely that u k (x,
. Similarly, if y n is chosen so
that
min
,
then it is unlikely that v k (x, on either y k or y k . Here, the coe#cient 1
2 is
to take into consideration the fact that u k (x, y) #
#xn
#xn
(-x)y n are only approximate equalities. (When #f k /#x n
there is no restriction on w(x n ) or w(y n ) due to w(x k ) or w(y k ).)
By constructing the box z in this way, we can eliminate search of 4n - 4 of the 4n faces of the boundary of z, since we have arranged to verify F̄_un(x, y) ≠ 0 on each of these faces. Elimination of these 4n - 4 faces needs only 4n - 4 interval evaluations. Then, we need only search the four faces x_n, x̄_n, y_n, and ȳ_n for solutions of F̄_un(x, y) = 0, regardless of how large n is. This reduces the total computational cost dramatically, since searching a face is expensive. Based on this, the following theorem underlies our algorithm in section 6.1.
Theorem 5.1. Suppose
(1) u_k ≠ 0 on x_k and x̄_k, and v_k ≠ 0 on y_k and ȳ_k, for 1 ≤ k ≤ n - 1;
(2) F̄_un(x, y) = 0 has a unique solution on each of x_n and x̄_n, with y_n in the interior of y_n, and F̄_un(x, y) = 0 has a unique solution on each of y_n and ȳ_n, with x_n in the interior of x_n;
(3) u_n ≠ 0 at the four solutions of F̄_un(x, y) = 0 in condition 2; and
(4) the Jacobi matrices of F̄_un are nonsingular at the four solutions of F̄_un(x, y) = 0 in condition 2.
Then d(F̄, z, 0) is obtained by summing, over the four faces x_n, x̄_n, y_n, and ȳ_n, the contribution of each face: the sum of sgn det F̄_un'(x, y) over the solutions of F̄_un(x, y) = 0 on that face at which u_n(x, y) > 0, taken with a sign determined by the orientation of that face.
Proof. Condition 1 implies F̄ ≠ 0 on x_k, x̄_k, y_k, and ȳ_k for 1 ≤ k ≤ n - 1, and conditions 2 and 3 imply F̄ ≠ 0 on x_n, x̄_n, y_n, and ȳ_n; hence F̄ ≠ 0 on ∂z.
Condition 1 also implies F̄_un ≠ 0 on those parts of ∂x_n that lie in the faces x_k, x̄_k, y_k, and ȳ_k, 1 ≤ k ≤ n - 1: ∂x_n consists of (2n - 2)-dimensional boxes, each of which is either embedded in some x_k, x̄_k, y_k, or ȳ_k, 1 ≤ k ≤ n - 1, or is embedded in y_n or ȳ_n. Thus, by conditions 2 and 3, F̄_un ≠ 0 on ∂x_n. Similarly, F̄_un ≠ 0 on ∂x̄_n, ∂y_n, and ∂ȳ_n. Thus condition 1 in Theorem 2.5 is satisfied.
Condition 2 in Theorem 2.5 is automatically satisfied since F̄_un = 0 either has no solutions or a unique solution on each face of z.
Then, with condition 4, the conditions of Theorem 2.5 are satisfied. The formula is thus obtained with s = +1.
The conditions of Theorem 5.1 will be satisfied when the system is as described in section 3 and section 4, the box z is constructed as in (5.1) and (5.2), and the quadratic model is accurate. (See Theorem 5.2 and its proof for the results when all the approximations are exact.)
In Theorem 5.1, the degree consists of contributions of the four faces we search. We can compute the degree contribution of each of the four faces, then add them to get the degree.
In Theorem 5.1 we choose s = +1. We can also choose s = -1. That doesn't make any difference in our context if we ignore higher order terms in the values of u_n at the solutions of F̄_un = 0 on the four faces x_n, x̄_n, y_n, and ȳ_n. To be specific, the four values of u_n are those given in (5.14), (5.16), (5.24), and (5.26) in the proof of Theorem 5.2 below, where δ is defined in (5.3). When we choose w(y_k) the same (or roughly the same) as w(x_k), the values of u_n as a function of y_n (or ȳ_n) will be the same (or roughly the same) as the values of u_n as a function of x_n (or x̄_n). Thus, if we ignore higher order terms, the cost of verifying u_n < 0 and searching for solutions of F̄_un = 0 is approximately the same as the cost of verifying u_n > 0 and searching for solutions of F̄_un = 0.
Next we will give a theorem that will further reduce the search cost by telling us how we should search. Define δ and the auxiliary quantities (5.3)-(5.7) used in the statement and proof below.
Theorem 5.2. If the approximations of (4.1) and (4.2) are exact, if we construct the box z as in (5.1) and (5.2), and if δ ≠ 0, then d(F̄, z, 0) = 2.
Proof. Under the assumptions, the expansions (4.1) and (4.2) hold exactly. Due to the construction of the box z, u_k ≠ 0 on x_k and x̄_k and v_k ≠ 0 on y_k and ȳ_k for 1 ≤ k ≤ n - 1. Next we locate the solutions of F̄_un = 0 on the four remaining faces.
(1) On x_n, the coordinate x_n is fixed at the lower endpoint of x_n, so (5.8) and (5.9) hold. Plugging (5.8) and (5.9) into (5.6) and (5.7), we get (5.10) and (5.11). Then y_n = 0 is forced, since δ ≠ 0. Thus by (5.9), the remaining coordinates of the solution are determined as well; this gives (5.12). Therefore F̄_un = 0 has a unique solution on x_n. Plugging (5.12) into (5.10), we get the u_n value at this solution, which is (5.14).
Next we compute the determinant of the Jacobi matrix of F̄_un at this solution. Noting (5.4), (5.5), and (5.7), we obtain (5.15).
(2) Similarly, on x̄_n, F̄_un = 0 has a unique solution. The u_n value at this solution is (5.16). The determinant of the Jacobi matrix of F̄_un at this solution is (5.17).
(3) On y_n, the coordinate y_n is fixed at the lower endpoint of y_n, so (5.18) and (5.19) hold. Plugging (5.18) and (5.19) into (5.6) and (5.7), we get (5.20) and (5.21). Then x_n = x̂_n is forced, since δ ≠ 0. Thus by (5.18), the remaining coordinates of the solution are determined as well; this gives (5.22). Therefore F̄_un = 0 has a unique solution on y_n. Plugging (5.22) into (5.20), we get the u_n value at this solution, which is (5.24).
Next, as in (5.15), we compute the determinant of the Jacobi matrix of F̄_un at this solution. Noting (5.4), (5.5), and (5.7), we obtain (5.25).
(4) Similarly, F̄_un = 0 has a unique solution on ȳ_n. The u_n value at this solution is (5.26). The determinant of the Jacobi matrix of F̄_un at this solution is (5.27).
Finally, we can use the formula in Theorem 5.1 to compute the topological degree d(F̄, z, 0). If δ > 0, then we know from (5.14), (5.16), (5.24), and (5.26) that u_n > 0 at the solutions of F̄_un = 0 on x_n and x̄_n. We also know the signs of the determinants of the Jacobi matrices at these two solutions from (5.15) and (5.17). Therefore, d(F̄, z, 0) = 2. If δ < 0, then we know from (5.14), (5.16), (5.24), and (5.26) that u_n > 0 at the solutions of F̄_un = 0 on y_n and ȳ_n. We also know the signs of the determinants of the Jacobi matrices at these two solutions from (5.25) and (5.27). Therefore d(F̄, z, 0) = 2 also in this case.
The proof of Theorem 5.2 tells us approximately where we can expect to find the solutions of F̄_un = 0 on the four faces we search and the value of the degree we can expect when the approximations (4.1) and (4.2) are accurate.
From (4.1), we know that if x_n is known precisely, formally solving u_k(x, y) = 0 for x_k gives sharper bounds x̃_k, 1 ≤ k ≤ n - 1. Similarly, if y_n is known precisely, formally solving v_k(x, y) = 0 for y_k gives sharper bounds
ỹ_k, 1 ≤ k ≤ n - 1. Thus when we search x_n (or x̄_n) for solutions of F̄_un = 0, we can first get sharper bounds for x_k, 1 ≤ k ≤ n - 1, since x_n is known precisely. Then, for a small subinterval y^0_n of y_n, we can solve v_k(x, y) = 0 for y_k to get sharper bounds ỹ_k, 1 ≤ k ≤ n - 1. Thus we get a small subface of x_n (or x̄_n) over which we can either use an interval Newton method to verify the existence and uniqueness of the zero of F̄_un or use mean-value extensions to verify that F̄_un has no zeros, depending on whether y^0_n is in the middle of y_n or not. Thus the process reduces to searching over a one-dimensional interval y_n. This further reduces the search cost. We can similarly search y_n and ȳ_n.
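A minimal sketch (ours) of this formal solution step, under the assumption that the preconditioned components obey the linear model u_k ≈ (x_k - x̂_k) + c_k(x_n - x̂_n) and v_k ≈ y_k + c_k y_n of (4.1), with c_k = ∂f_k/∂x_n(x̂); the function name sharper_bounds is invented for this illustration.

def sharper_bounds(x_hat, c, x_n_value, y_n_lo, y_n_hi):
    # For each k < n, solve u_k = 0 formally for x_k (a point value, since x_n
    # is fixed on the face) and v_k = 0 for y_k (an interval, since y_n ranges
    # over the subinterval [y_n_lo, y_n_hi]).  Returns x_k values and (lo, hi)
    # bounds for y_k.  Based on the linear model of (4.1) only.
    n = len(x_hat)
    xs, ys = [], []
    for k in range(n - 1):
        xs.append(x_hat[k] - c[k] * (x_n_value - x_hat[n - 1]))
        lo, hi = sorted((-c[k] * y_n_lo, -c[k] * y_n_hi))
        ys.append((lo, hi))
    return xs, ys

# Example: 3 variables, face x_n fixed at its lower endpoint, y_n in a tiny subinterval.
x_hat = [0.0, 0.0, 0.0]
c = [0.5, -2.0]
print(sharper_bounds(x_hat, c, x_n_value=-5e-4, y_n_lo=-1e-5, y_n_hi=1e-5))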
6. The algorithm and its computational complexity.
6.1. Algorithm. The algorithm consists of three phases. In the box-setting phase, we set the box z. In the elimination phase, we verify that u_k ≠ 0 on x_k and x̄_k, and v_k ≠ 0 on y_k and ȳ_k, for 1 ≤ k ≤ n - 1, using mean-value interval extensions (see the sketch after the algorithm listing). In the search phase, we verify that F̄_un = 0 has a unique solution on x_n and x̄_n with y_n in the interior of y_n, and on y_n and ȳ_n with x_n in the interior of x_n, compute the signs of u_n and the signs of the determinants of the Jacobi matrices of F̄_un at the four solutions of F̄_un = 0, compute the degree contributions of the four faces x_n, x̄_n, y_n, and ȳ_n according to the formula in Theorem 5.1, and finally add the contributions to get the degree.
Algorithm
Box-setting phase
1. Compute the preconditioner of the original system, using Gaussian elimination
with full pivoting.
2. Set the widths of x k and y k (see explanation below), for 1 # k # n - 1.
3. Set the widths of x n and y n as in (5.1) and (5.2).
Elimination phase
1. Do for k = 1, . . . , n - 1:
(a) Do for x_k and x̄_k:
i. Compute the mean-value extension of u_k over that face.
ii. If 0 ∈ u_k, then stop and signal failure.
(b) Do for y_k and ȳ_k:
i. Compute the mean-value extension of v_k over that face.
ii. If 0 ∈ v_k, then stop and signal failure.
Search phase
1. Do for x_n and x̄_n:
(a) i. Use mean-value extensions for u_k(x, y) = 0, 1 ≤ k ≤ n - 1, to solve for x_k to get sharper bounds x̃_k with width O(‖(x - x̂, y)‖²).
ii. If the sharper bounds are inconsistent with x_k for some k, return the degree contribution of that face as 0.
iii. Update x_k.
(b) i. Compute the mean-value extension of u_n over that face.
ii. If u_n < 0, then return the degree contribution of that face as 0.
(c) Construct a small subinterval y^0_n of y_n which is centered at 0.
(d) i. Use mean-value extensions for v_k(x, y) = 0, 1 ≤ k ≤ n - 1, to solve for y_k to get sharper bounds ỹ_k, thus getting a subface x^0_n (or x̄^0_n) of x_n (or x̄_n).
ii. If the resulting bounds are empty, then stop and signal failure.
(e) i. Set up an interval Newton method for F̄_un to verify existence and uniqueness of a zero in the subface x^0_n (or x̄^0_n).
ii. If the zero cannot be verified, then stop and signal failure.
(f) Inflate y^0_n as much as possible subject to verification of existence and uniqueness of the zero of F̄_un over the corresponding subface, and thus get a subinterval y^1_n of y_n.
(g) In this step, we verify that F̄_un = 0 has no solutions when y_n lies in y_n \ y^1_n.
y_n \ y^1_n has two separate parts; we denote the lower part by y^l_n and the upper part by y^u_n. We present only the processing of the lower part. The upper part can be processed similarly.
i. Do
A. Use mean-value extensions for v_k(x, y) = 0 to solve for y_k to get sharper bounds for y_k, 1 ≤ k ≤ n - 1, and thus to get a subface of x_n (or x̄_n).
B. Compute the mean-value extensions of F̄_un over the subface obtained in the last step.
C. If 0 ∈ F̄_un, then bisect y^l_n, update the lower part as a new y^l_n, and cycle.
D. If 0 ∉ F̄_un, then exit the loop.
ii. Do
A. If the subintervals already processed, together with y^1_n, cover the lower part of y_n \ y^1_n, exit the loop.
B. Let y^l_n be the remaining part of y_n \ y^1_n between the subintervals already processed and y^1_n.
C. Use mean-value extensions for v_k(x, y) = 0 to solve for y_k to get sharper bounds for y_k, 1 ≤ k ≤ n - 1, and thus to get a subface of x_n (or x̄_n).
D. Compute the mean-value extensions of F̄_un over the subface obtained in the last step.
E. If 0 ∉ F̄_un, then cycle.
F. If 0 ∈ F̄_un, then replace y^l_n by its lower half [y^l_n's lower endpoint, mid(y^l_n)] and cycle.
(h) i. Compute the mean-value extension of u_n over x^0_n (or x̄^0_n).
ii. If u_n < 0, then return the degree contribution of that face as 0.
(i) i. Compute an interval extension of the determinant of the Jacobi matrix of F̄_un over x^0_n (or x̄^0_n).
ii. If 0 is contained in that extension, stop and signal failure.
(j) Use the formula in Theorem 5.1 to compute the degree contribution of that face.
2. Do for y_n and ȳ_n:
(a) Same as step 1(a) except change x_k to y_k, x̃_k to ỹ_k, and u_k to v_k.
(b) Same as step 1(b).
(c) Same as step 1(c) except change y^0_n to x^0_n, y_n to x_n, and 0 to x̂_n.
(d) Same as step 1(d) except change y_k to x_k, ỹ_k to x̃_k, v_k to u_k, x^0_n to y^0_n, x̄^0_n to ȳ^0_n, x_n to y_n, and x̄_n to ȳ_n.
(e) Same as step 1(e) except change x^0_n to y^0_n and x̄^0_n to ȳ^0_n.
(f) Same as step 1(f) except change y^0_n to x^0_n, y^1_n to x^1_n, and y_n to x_n.
(g) Same as step 1(g) except change y_n \ y^1_n to x_n \ x^1_n.
(h) Same as step 1(h) except change x^0_n to y^0_n and x̄^0_n to ȳ^0_n.
(i) Same as step 1(i), with the Jacobi matrix of F̄_un evaluated over the corresponding subface y^0_n (or ȳ^0_n).
(j) Same as step 1(j).
3. Add the degree contributions of the four faces obtained in steps 1 and 2 to
get the degree.
End of algorithm
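The elimination and search phases above repeatedly compute mean-value interval extensions. As a reminder of what such an extension is (a generic sketch of ours, not the Fortran 90 routines of [15], [16]), the mean-value extension of a scalar function g over a box X with midpoint c is g(c) + g'(X)(X - c), where g'(X) encloses the gradient over the box; here intervals are (lo, hi) pairs and directed rounding is ignored.

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def mean_value_extension(g, grad_enclosure, box):
    # Centered-form enclosure of g over 'box' (a list of (lo, hi) pairs):
    # g(c) + sum_j G_j * (X_j - c_j), where G_j encloses dg/dx_j over the box
    # and c is the midpoint of the box.
    c = [0.5 * (lo + hi) for lo, hi in box]
    G = grad_enclosure(box)
    enc = (g(c), g(c))
    for j, (lo, hi) in enumerate(box):
        enc = iadd(enc, imul(G[j], (lo - c[j], hi - c[j])))
    return enc

# Example: g(x) = x1*x2 over the box [0.9, 1.1] x [1.9, 2.1].
g = lambda x: x[0] * x[1]
grad = lambda box: [box[1], box[0]]      # dg/dx1 = x2, dg/dx2 = x1 (enclosures)
print(mean_value_extension(g, grad, [(0.9, 1.1), (1.9, 2.1)]))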
An explanation of the algorithm
1. In the box-setting phase, in step 2, the width w(x_k) of x_k depends on the accuracy of the approximate solution x̂ of the system F(x) = 0: w(x_k) should be much larger than |x̂_k - x*_k|. At the same time, w(x_k) should not be too large, since the quadratic model needs to be accurate over the box.
2. In the search phase, in step 1(b) (or 2(b)), we check the sign of u_n on that face and discard that face at the earliest possible time if u_n < 0 on that face, since we know the degree contribution of that face is 0 according to the formula in Theorem 5.1. This will save time significantly if it happens that u_n < 0 on that face. It did happen for all the test problems. (See section 8 for the test results.)
3. In the search phase, in step 1(e) (or 2(e)), we precondition the system F̄_un before we use an interval Newton method, so that the method will succeed (see section 1.3 and section 3). The system F̄_un is nonsingular over the subfaces under consideration.
4. In the search phase, in step 1(f) (or 2(f)), we first expand the subinterval y^0_n (or x^0_n) by a small amount ε at both ends. If existence and uniqueness of the zero of F̄_un can be verified over the corresponding subface, then we expand the subinterval by 2ε at both ends, then 4ε, and so on, until existence and uniqueness verification fails. (A sketch of this inflation loop is given after this list.)
5. In the search phase, in step 1(g) (or 2(g)), the underlying idea is that the farther away the interval y^l_n is from the interval y^0_n whose corresponding subface of x_n (or x̄_n) contains a unique solution of F̄_un = 0, or the narrower the interval y^l_n is, the more probable it is that we can verify that F̄_un ≠ 0 over the subface of x_n (or x̄_n) corresponding to y^l_n.
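Item 4 describes an epsilon-inflation strategy. The following sketch (ours) shows only the control flow; verify_subface stands in for the interval Newton existence and uniqueness test that the actual algorithm performs over the subface corresponding to the trial interval.

def inflate(y0_lo, y0_hi, verify_subface, eps, y_lo, y_hi, max_steps=60):
    # Grow the subinterval [y0_lo, y0_hi] by eps, then 2*eps, 4*eps, ... at both
    # ends, as long as verify_subface(lo, hi) still succeeds and the interval
    # stays inside [y_lo, y_hi].  Returns the last successfully verified
    # subinterval (compare y^1_n) and the number of inflations performed
    # (compare N_infl in Table 8.2).
    lo, hi, step, count = y0_lo, y0_hi, eps, 0
    while count < max_steps:
        trial_lo = max(y_lo, lo - step)
        trial_hi = min(y_hi, hi + step)
        if not verify_subface(trial_lo, trial_hi):
            break
        lo, hi = trial_lo, trial_hi
        count += 1
        step *= 2.0
    return (lo, hi), count

# Toy stand-in: "verification" succeeds while the trial interval is narrower than 6e-4.
ok = lambda lo, hi: (hi - lo) < 6e-4
print(inflate(-1e-5, 1e-5, ok, eps=1e-5, y_lo=-5e-4, y_hi=5e-4))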
6.2. Computational complexity.
Derivation of the computational complexity.
Box-setting phase: Step 1 is of order O(n³). Step 2 is of order O(n). Step 3 is of order O(n²). Thus, the order of this phase is O(n³).
Elimination phase: Steps 1(a)i and 1(b)i are of order O(n²). Steps 1(a)ii and 1(b)ii are of order O(1). Thus, the order of this phase is O(n³).
Search phase: Steps 1(a) and 2(a) are of order O(n³). Steps 1(b) and 2(b) are of order O(n²). Steps 1(c) and 2(c) are of order O(1). Steps 1(d) and 2(d) are of order O(n³). Steps 1(e) and 2(e) are of order O(n³). Steps 1(f) and 2(f) are of order N_infl * O(n³). (See explanation below.) Steps 1(g) and 2(g) are of order N_proc * O(n³). (See explanation below.) Steps 1(h) and 2(h) are of order O(n²). Steps 1(i) and 2(i) are of order O(n³). Steps 1(j) and 2(j) are of order O(1). The last step of this phase is of order O(1) too. Thus, the order of this phase is O(n³).
The order of the overall algorithm is thus O(n³).
Remark. The order of the algorithm cannot be improved, since computing preconditioners of the original system and of the system F̄_un is necessary, and computing each preconditioner is of order O(n³).
7. Test problems and test environment.
7.1. Test problems. Before describing the test set, we introduce one more problem. Motivated by [10, Lemma 2.4], we considered systems of the following form.
Example 3. Set h(x, t) to be a homotopy defined in terms of the matrix A corresponding to central difference discretization of a boundary value problem. The parameter t was chosen to be equal to t_1, where t_1 is the largest eigenvalue of A.
The homotopy h in Example 3 has a simple bifurcation point at t = t_1, where the two paths cross obliquely. That is, there are two solutions to h(x, t) = 0 for all t near t_1 and on either side of t_1. Furthermore, the quadratic terms in the Taylor expansion for f do not vanish at the bifurcation point.
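Only the outline of Example 3 is given above, so we do not reproduce the boundary value problem or the homotopy h here. As an illustration of the two ingredients that are stated explicitly, a standard central-difference discretization matrix and its largest eigenvalue t_1 can be obtained as follows (the scaling and boundary conditions are assumptions of ours).

import numpy as np

def central_difference_matrix(n, h=None):
    # Standard central-difference discretization of a second derivative on n
    # interior grid points of (0, 1): tridiagonal with -2 on the diagonal and 1
    # on the off-diagonals, scaled by 1/h^2.  (The actual boundary value problem
    # of Example 3 is not reproduced here.)
    if h is None:
        h = 1.0 / (n + 1)
    return (np.diag(-2.0 * np.ones(n)) +
            np.diag(np.ones(n - 1), 1) +
            np.diag(np.ones(n - 1), -1)) / h**2

n = 5
A = central_difference_matrix(n)
t1 = np.max(np.linalg.eigvalsh(A))   # the largest eigenvalue, playing the role of t_1
print(t1)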
The test set consists of Example 1, Example 2 with two choices of n, and Example 3 with n ranging from 5 to 320. For all the test problems, we used (0, 0, . . . , 0) as a good approximate solution to the problem F(x) = 0. Actually, it's the exact solution in Example 1 and Example 3. w(x_k) and w(y_k) were set to 10^-3 for 1 ≤ k ≤ n - 1, and w(x_n) and w(y_n) were computed automatically by the algorithm. In fact, w(x_k) and w(y_k), 1 ≤ k ≤ n - 1, can also be computed automatically by the algorithm, depending on the accuracy of the approximate solution. At present, we used the known true solutions to Example 1 and Example 3 and the known approximate solution to Example 2 to test the algorithm and set the widths apparently small but otherwise arbitrary.
For all the problems, the algorithm succeeded and returned a degree of 2.
7.2. Test environment. The algorithm in section 6.1 was programmed in the
Fortran 90 environment developed and described in [15], [16]. Similarly, all the functions
in the test problems were programmed using the same Fortran 90 system, and
internal symbolic representations of the functions were generated prior to execution of
the numerical tests. In the actual tests, generic routines then interpreted the internal
representations to obtain both floating point and interval values.
The LINPACK routines DGECO and DGESL were used in step 1 of the box-setting
phase, and in step 1(e), 2(e), 1(f), and 2(f) of the search phase to compute the
preconditioners. (See the algorithm and its explanation in section 6.1.)
The Sun Fortran 90 compiler version 1.2 was used on a Sparc Ultra model 140
with optimization level 0. Execution times were measured using the routine DSECND.
All times are given in CPU seconds.
8. Numerical results. We present the numerical results in Table 8.1 and some
statistical data in Table 8.2.
The column labels of Table 8.1 are as follows.
Problem: names of the problems identified in section 7.1.
n: number of independent variables.
Success: whether the algorithm was successful.
Degree: topological degree returned by the algorithm.
CPU time: CPU time in seconds of the algorithm.
Time ratio: This applies only to Example 3. It's the ratio of two successive CPU
times.
Table 8.1. Numerical results.
Problem n Success Degree CPU time Time ratio
Example 2
Example 2
Example 3 5
Example 3 20
Example 3
Example 3 160
Example 3 320
Table 8.2. Statistical data.
Problem
Example
Example 2
Example 2
Example
Example
Example 3
Example 3 160
Example 3
The column labels of Table 8.2 are as follows.
Problem: names of the problems identified in section 7.1.
n: number of independent variables.
N_infl: number of inflations the algorithm did in step 1(f) or 2(f) for the indicated face x_n, x̄_n, y_n, or ȳ_n.
N_proc: number of subintervals of y_n \ y^1_n the algorithm processed in step 1(g), or number of subintervals of x_n \ x^1_n the algorithm processed in step 2(g), i.e., the number of y^l_n's plus the number of y^u_n's in step 1(g), or the number of x^l_n's plus the number of x^u_n's in step 2(g), for the indicated face x_n, x̄_n, y_n, or ȳ_n.
We can see from Table 8.1 that the algorithm was successful on each problem in the test set. The overall algorithm is O(n³), but there are many O(n³) and O(n²) steps. Some steps have many O(n³) and O(n²) substeps, and some of the substeps still have many O(n²) structures. Thus, when n was small, those lower order structures had significant influence on the CPU time. However, for the larger n in the examples tried, the O(n³) terms dominated. We can see this from the time ratios of Example 3 in Table 8.1.
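As a rough check of this, under a pure O(n³) cost model, doubling the dimension from n = 160 to n = 320 should multiply the CPU time by about (320/160)³ = 8, while going from n = 5 to n = 20 should multiply it by 4³ = 64 only once the cubic terms dominate; the measured ratios approach the cubic prediction only for the larger n.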
In Table 8.2, in each problem there were two faces among x_n, x̄_n, y_n, and ȳ_n for which N_infl = 0. This is because the algorithm verified that u_n < 0 on each of those two faces in step 1(b) or 2(b), and returned a degree contribution of each of those two faces as 0. Thus, the algorithm didn't proceed to step 1(f) or 2(f). For the same reason, N_proc = 0 for those two faces. For the remaining two faces, for which the algorithm did proceed to step 1(f) or 2(f), N_infl is small.
In step 1(g) or 2(g), which immediately follows the inflations, N_proc = 0 for Example 1 and Example 2. This is because the inflations had covered the whole interval y_n. More significant is that N_proc = 2 in Example 3 regardless of small n or large n. This is because only one interval was processed to verify that F̄_un = 0 has no solutions when x_n lies in the lower part x^l_n, and only one interval was processed to verify that F̄_un = 0 has no solutions when x_n lies in the upper part x^u_n. This means that the algorithm was quite efficient.
9. Conclusions and future work. When we tested the algorithm, we took advantage of knowing the true solutions (see section 7.1). For this reason, we set the widths w(x_k) and w(y_k), 1 ≤ k ≤ n - 1, apparently small but otherwise arbitrarily. But we plan to have the algorithm eventually compute these, based on the accuracy of the approximate solution obtained by a floating point algorithm and the accuracy of the quadratic model.
We presented an algorithm which was designed to work for the case that the rank deficiency of the Jacobian matrix at the singular solution is one. But the analysis in section 5 and the algorithm in section 6.1 can be generalized to general rank deficiency. Also, at present, it is assumed that the second derivatives ∂²f_n/∂x_k∂x_l don't vanish simultaneously at the singular solution. In fact, the analysis in section 5 and the algorithm in section 6.1 can be generalized to the general case that the derivatives of f_n of order 1 through r (r ≥ 2) vanish simultaneously at the singular solution. Computing higher order derivatives, however, may be expensive. Those two generalizations can also be combined, i.e., any rank deficiency and any order of derivatives of f_n that vanish. We will pursue these generalizations in the future.
Modification of the algorithm to verify complex roots that do not lie near the real axis is possible.
Another future direction of this study is to apply the algorithms to bifurcation
problems and other physical models.
Finally, verification is possible in a multidimensional analogue of odd-multiplicity roots. We are presently writing up theoretical and experimental results for this situation.
REFERENCES
Computation of topological degree using interval arithmetic
Introduction to Interval Computations, New York
Direkte Verfahren zur Berechnung der Nullstellen von Polynomen
Fixed Points and Topological Degree in Nonlinear Analysis
Fixed Points
Circular arithmetic and the determination of polynomial zeros
Global Optimization Using Interval Analysis
Applied and Computational Complex Analysis.
Computing the Degree of Maps and a Generalized Method of Bisection
A summary of recent experiments to compute the topological degree
A Fortran 90 environment for research and prototyping of enclosure algorithms for nonlinear equations and global optimization
Continuous Problems
Rigorous global optimization and the GlobSol package.
Existence Verification for Singular Zeros of Nonlinear Systems
The Poincar-e-Miranda theorem
Degree Theory
Methods and Applications of Interval Analysis
Solving Polynomial Systems Using Continuation for Engineering and Scientific Problems
Interval Methods for Systems of Equations
New Computer Methods for Global Optimization
Verification methods for dense and sparse systems of equations
Optimal Solution of Nonlinear Equations
An algorithm for the topological degree of a mapping in R n
An Algorithm for the Numerical Calculation of the Degree of a Mapping
An algorithm for numerical calculation of the topological degree
A short proof and a generalization of Miranda's existence theorem
The topological degree theory for the location and computation of complex zeros of Bessel functions