510987 | Bounded Model Checking Using Satisfiability Solving.

Abstract: The phrase model checking refers to algorithms for exploring the state space of a transition system to determine if it obeys a specification of its intended behavior. These algorithms can perform exhaustive verification in a highly automatic manner, and, thus, have attracted much interest in industry. Model checking programs are now being commercially marketed. However, model checking has been held back by the state explosion problem, which is the problem that the number of states in a system grows exponentially in the number of system components. Much research has been devoted to ameliorating this problem. In this tutorial, we first give a brief overview of the history of model checking to date, and then focus on recent techniques that combine model checking with satisfiability solving. These techniques, known as bounded model checking, do a very fast exploration of the state space, and for some types of problems seem to offer large performance improvements over previous approaches. We review experiments with bounded model checking on both public domain and industrial designs, and propose a methodology for applying the technique in industry for invariance checking. We then summarize the pros and cons of this new technology and discuss future research efforts to extend its capabilities.

Fulltext:
Figure
must cause the initial state to be visited infinitely often. While there is no particular reason a counter
should work this way, we use the example to illustrate how fairness constraints are imposed in bounded model
checking. Given such a fairness constraint, a counterexample to the liveness property $AF(b \wedge a)$ would then need to include a transition to the (0, 0) state. This is a path with a loop as before, but with the additional constraint that $\neg a \wedge \neg b$ has to hold somewhere on the loop. This changes the generated Boolean formula as follows. For each backloop, $T(s_2, s_3)$, where the state $s_3$ is required to be equivalent to either $s_0$, $s_1$ or $s_2$, we add a term that requires $\neg a \wedge \neg b$ to hold on the loop. For example, for the possible loop from $s_2$ to $s_0$ (the case where $s_3 = s_0$), we would replace
$(a_3 \leftrightarrow a_0) \wedge (b_3 \leftrightarrow b_0)$
by
$(a_3 \leftrightarrow a_0) \wedge (b_3 \leftrightarrow b_0) \wedge (c_0 \vee c_1 \vee c_2),$
with $c_i$ defined as $\neg a_i \wedge \neg b_i$. As there is no counterexample that would satisfy this fairness constraint, in this case the resulting propositional formula would be unsatisfiable.
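As a hedged illustration only (this is the standard BMC lasso encoding in general form, not a formula quoted from this paper), a length-k counterexample loop subject to a single fairness predicate $c_i$ can be sketched as

\[
I(s_0) \;\wedge\; \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \;\wedge\; \bigvee_{l=0}^{k-1}\Big( (s_k = s_l) \;\wedge\; \bigvee_{i=l}^{k-1} c_i \Big),
\]

where $I$ is the initial-state predicate and $T$ the transition relation; in the two-bit counter example above, $k = 3$ and $c_i = \neg a_i \wedge \neg b_i$, and the constraints contributed by the liveness property itself are not shown.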
3.4 Conversion to CNF
Satisfiability testing for propositional formulae is known to be an NP-complete problem, and all known decision
procedures are exponential in the worst case. However, they may use different heuristics in guiding their search
and, therefore, exhibit different average complexities in practice. Precise characterization of the hardness of a
certain propositional problem is difficult and is likely to be dependent on the specific decision procedure used.
Many propositional decision procedures assume the input problem to be in CNF (conjunctive normal form).
Usually, it is a goal to reduce the size of the CNF version of the formula, although this may not always reduce the
complexity of the search. Our experience has been, however, that reducing the size of the CNF does reduce the
time for the satisfiability test as well.
A propositional formula in CNF is represented as a set of clauses. Each clause is a set of literals, and each literal is either a positive or negative propositional variable. In other words, a formula is a conjunction of clauses, and a clause is a disjunction of literals. For example, $(a \vee \neg b \vee c) \wedge (d \vee \neg e)$ is represented as $\{\{a, \neg b, c\}, \{d, \neg e\}\}$. CNF is also referred to as clause form.
Given a Boolean formula f, one may replace Boolean operators in f with $\neg$, $\wedge$ and $\vee$ and apply the distributivity rule and De Morgan's law to convert f to CNF. The size of the converted formula can be exponential with respect to the size of f, the worst case occurring when f is in disjunctive normal form. To avoid the exponential explosion, we use a structure preserving clause form transformation [28].
procedure bool-to-cnf( f, vf )
case
  cached( f ) == v :
    return clause( v ↔ vf );
  f is a propositional variable:
    return clause( f ↔ vf );
  f == ( g ∘ h ):
    introduce new variables vg and vh for g and h;
    C1 := bool-to-cnf( g, vg );
    C2 := bool-to-cnf( h, vh );
    cache( f ) := vf ;
    return C1 ∪ C2 ∪ clause( vf ↔ ( vg ∘ vh ) );

Fig. 2. An algorithm for generating conjunctive normal form. f, g and h are Boolean formulas. v, vg and vh are Boolean
variables. '∘' represents a Boolean operator.
Figure 2 outlines our procedure. Statements which are underlined represent the different cases considered, the symbol := denotes assignment, while the symbol == denotes equality. Given a Boolean formula f, bool-to-cnf( f, true ) returns a set of clauses C which is satisfiable if and only if the original formula, f, is satisfiable. Note
that C is not logically equivalent to the original formula, but, rather, preserves its satisfiability. The procedure
traverses the syntactical structure of f, introduces a new variable for each subexpression, and generates clauses
that relate the new variables. In Figure 2, we use symbols g and h to denote subexpressions of the Boolean
formula f, and we use vf , vg and vh to denote new variables introduced for f, g and h. C1 and C2 denote sets
of clauses. If a subexpression, q, has been cached, the call to cached(q) returns the variable vq introduced for
q. The procedure, clause(), translates a Boolean formula into clause form. It replaces Boolean connectives such
as implication, $\rightarrow$, or equality, $\leftrightarrow$, etc., by combinations of and, or and negation operators and subsequently converts the derived formula into conjunctive normal form. It does this in a brute force manner, by applying the distributivity rule and De Morgan's law. As an example, if u and v are Boolean variables, clause() called on $u \leftrightarrow v$ returns $\{\{\neg u, v\}, \{u, \neg v\}\}$. It should be noted that clause() will never be called by bool-to-cnf with more than 3 literals, and so, in practice, the cost of this conversion is quite acceptable. If $v_f$, $v_g$ and $v_h$ are Boolean variables and '∘' is a Boolean operator, $v_f \leftrightarrow (v_g \circ v_h)$ has a logically equivalent clause form, clause($v_f \leftrightarrow (v_g \circ v_h)$), with no more than 4 clauses, each of which contains no more than 3 literals.
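The following Python sketch illustrates the kind of structure-preserving (Tseitin-style) transformation described above. The formula representation, variable numbering and helper names are illustrative assumptions, not the actual bool-to-cnf implementation of the BMC tool.

```python
# Structure-preserving (Tseitin-style) CNF conversion sketch.
# Formulas are nested tuples: ('and', f, g), ('or', f, g), ('not', f),
# or a variable name (str).  Clauses are lists of signed integers (DIMACS style).

def bool_to_cnf(formula):
    var_of = {}        # cache: subformula -> CNF variable
    clauses = []
    counter = [0]

    def fresh():
        counter[0] += 1
        return counter[0]

    def encode(f):
        if f in var_of:                   # cached subexpression: reuse its variable
            return var_of[f]
        if isinstance(f, str):            # propositional variable
            v = fresh()
            var_of[f] = v
            return v
        op = f[0]
        if op == 'not':
            vg = encode(f[1])
            vf = fresh()
            clauses += [[-vf, -vg], [vf, vg]]          # vf <-> not vg
        elif op == 'and':
            vg, vh = encode(f[1]), encode(f[2])
            vf = fresh()
            clauses += [[-vf, vg], [-vf, vh], [vf, -vg, -vh]]   # vf <-> (vg and vh)
        elif op == 'or':
            vg, vh = encode(f[1]), encode(f[2])
            vf = fresh()
            clauses += [[-vf, vg, vh], [vf, -vg], [vf, -vh]]    # vf <-> (vg or vh)
        else:
            raise ValueError(op)
        var_of[f] = vf
        return vf

    root = encode(formula)
    clauses.append([root])                # assert the whole formula
    return clauses

# Example: (a or not b) and c -- the result is satisfiable iff the original is.
print(bool_to_cnf(('and', ('or', 'a', ('not', 'b')), 'c')))
```

As in the procedure of Figure 2, each subterm of the shared DAG receives exactly one variable and a constant number of at-most-3-literal clauses, so the output has O(|f|) variables and clauses.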
Internally, we represent a Boolean formula f as a directed acyclic graph (DAG), i.e., common subterms of
f are shared. In the procedure bool-to-cnf(), we preserve this sharing of subterms, in that for each subterm in f,
only one set of clauses is generated. For any Boolean formula f, bool-to-cnf( f, true ) generates a clause set C with O(|f|) variables and O(|f|) clauses, where |f| is the size of the DAG for f.
In Figure 2, we assume that f only involves binary operators; however, the unary operator, negation, can be handled similarly. We have also extended the procedure to handle operators with multiple operands. In particular, we treat conjunction and disjunction as N-ary operators. For example, let us assume that $v_f$ represents the formula $\bigwedge_{i=0}^{n} t_i$. The clause form for $v_f \leftrightarrow \bigwedge_{i=0}^{n} t_i$ is then:
$\{\{\neg v_f, t_0\}, \{\neg v_f, t_1\}, \ldots, \{\neg v_f, t_n\}, \{v_f, \neg t_0, \ldots, \neg t_n\}\}$
If we treat $\wedge$ as a binary operator, we need to introduce $n - 1$ new variables for the subterms in $\bigwedge_{i=0}^{n} t_i$. With this optimization, the comparison between two registers r and s occurring as a subformula, $\bigwedge_{i=0}^{15}(r[i] \leftrightarrow s[i])$, can be converted into clause form without introducing new variables.
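A small sketch of the N-ary optimization for conjunction (the helper name and literal encoding are illustrative, not the tool's code): given DIMACS-style literals $t_0, \ldots, t_n$ and a fresh variable $v_f$, it emits the n+2 clauses shown above instead of introducing n-1 intermediate variables.

```python
def nary_and_clauses(vf, ts):
    """Clauses for vf <-> (t0 and t1 and ... and tn), literals as signed ints."""
    clauses = [[-vf, t] for t in ts]          # vf implies each ti
    clauses.append([vf] + [-t for t in ts])   # all ti together imply vf
    return clauses

# vf <-> (t1 and t2 and t3), with variables numbered 1..4
print(nary_and_clauses(4, [1, 2, 3]))
```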
4 Experimental Results
At Carnegie-Mellon University, a model checker has been implemented called BMC, based on bounded model
checking. Its input language is a subset of the SMV language [26]. It takes in a circuit description, a property to be
proven, and a user supplied time bound, k. It then generates the type of propositional formula described in Section
3.1. It supports both the DIMACS format [20] for CNF formulae, and the input format for the PROVER Tool [5]
which is based on Stålmarck's Method [35]. In our experiments, we have used the PROVER tool, as well as two
public domain SAT solvers, SATO [39] and GRASP [33], both of which use the DIMACS format.
We first discuss experiments on circuits available in the public domain, that are known to be difficult for BDD-based
approaches. First we investigated a sequential multiplier, the shift and add multiplier of [12]. We specified
that when the sequential multiplier is finished, its output is the same as the output of a certain combinational
multiplier, the C6288 circuit from the ISCAS'85 benchmark set, when the same input words are applied to both
multipliers. The C6288 multiplier is a 16x16 bit multiplier, but we only allowed 16 output bits as in [12], together
with an overflow bit. We checked the above property for each output bit individually, and the results are shown
in Table 1. For BDD-based model checkers, we used a manually chosen variable ordering where the bits of the
registers are interleaved. Dynamic reordering, where the application tries to change reorderings on the fly, failed
to find a considerably better ordering in a reasonable amount of time. The proof that the multiplier is finished after
a finite number of steps involves the verification of a simple liveness property which can be checked instantly both
with BDD based methods and bounded model checking.
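As a hedged aid to intuition only, the following plain Python sanity model expresses the property being checked (it is not the SMV/BMC encoding used in the experiments): when the sequential shift-and-add multiplier finishes, its 16 output bits and overflow flag agree with the combinational product.

```python
def shift_add_multiply(a, b, width=16):
    """Sequential shift-and-add multiplier: (low `width` bits, overflow flag)."""
    acc = 0
    for i in range(width):          # one conditional add per step, as in the circuit
        if (b >> i) & 1:
            acc += a << i
    return acc & ((1 << width) - 1), acc >> width != 0

def combinational_multiply(a, b, width=16):
    prod = a * b                    # role of the C6288 array multiplier
    return prod & ((1 << width) - 1), prod >> width != 0

# The property checked bit by bit in Table 1: both models agree.
for a, b in [(3, 5), (65535, 2), (40000, 40000)]:
    assert shift_add_multiply(a, b) == combinational_multiply(a, b)
```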
In [25] an asynchronous circuit for distributed mutual exclusion is described. It consists of n cells for n users
that want to have exclusive access to a shared resource. We proved the liveness property that a request for using
the resource will eventually be acknowledged. This liveness property is only true if each asynchronous gate does
not delay execution indefinitely. This assumption is modeled by a fairness constraint (fairness constraints were explained in Section 3.3). Each cell has a fixed number of gates, and therefore the size of the model grows linearly in n,

Table 1. 16x16 bit sequential shift and add multiplier with overflow flag and 16 output bits. (Columns report time in seconds and memory in megabytes per output bit 1, 3, 5, ..., 15 for each solver, including SATO and PROVER, together with a sum row.)

where n is the number of cells. Since we do not have a bound for the maximal length of a counterexample for
the verification of this circuit, we could not verify the liveness property completely; rather, we showed that there are no counterexamples of particular length k. To illustrate the performance of bounded model checking we chose k = 5 and k = 10. The results can be found in Table 2.
We repeated the experiment with a buggy design, by simply removing several fairness constraints. Both
PROVER and SATO generate a counterexample (a 2 step loop) nearly instantly (see Table 3).
Table 2. Liveness for one user in the DME. (For each number of cells 3, 5, ..., 15, the table reports time in seconds and memory in megabytes for each tool, including SATO and PROVER, at the bounds k = 5 and k = 10.)
Table 3. Counterexample for liveness in a buggy DME. (For each number of cells 3, 5, ..., 15, time in seconds and memory in megabytes for each tool, including SATO and PROVER.)
5 Experiments on Industrial Designs
In this section, we will discuss a series of experiments on industrial designs, checking whether certain predicates
were invariants of these designs. First, we explain an optimization for bounded model checking that was used in
these experiments.
5.1 Bounded Cone of Influence
The Cone of Influence Reduction is a well known technique3 that reduces the size of a model if the propositional
formulae in the specification do not depend on all state variables in the structure. The basic idea of the Cone
of Influence (COI) reduction is to construct a dependency graph of the state variables in the specification. In
building the dependency graph, a state variable is represented by a node, and that node has edges emanating out to
nodes representing those state variables upon which it combinationally depends. The set of state variables in this
dependency graph is called the COI of the variables of the specification. In this paper, we call this the classical
COI, to differentiate it from the bounded version. The variables not in the classical COI can not influence the
validity of the specification and can therefore be removed from the model.
This idea can be extended to what we call the Bounded Cone of Influence. The formal definition for the
bounded COI is given in [4], and we give, here, an intuitive explanation. The intuition is that, over a bounded
time interval, we need not consider every state variable in the classical COI at each time point. For example, if
we were to check EF p, where p is a propositional formula, for a time bound of k = 0, we would need to consider
only those state variables upon which p combinationally depends. If the initial values for these were consistent
with p holding, then EF p would evaluate to true, without needing to consider any additional state variables in
the classical COI. Let us, for convenience, call the set of state variables upon which p combinationally depends
its initial support. If we could not prove EF p true for k = 0, and wanted to check it for k = 1, we would need to consider the set of state variables upon which those in the initial support depend. These may include some already in the initial support set, if feedback is present in the underlying circuit. Clearly, the set union of the initial support set plus this second support set contains the only state variables upon which the truth value of EF p depends for time bound k = 1. Again, this will always be a subset of the state variables in the entire classical COI. If we restrict
ourselves to expanding formula 1 of Section 3.1 only for those variables in the bounded COI for a particular k, we
will get a smaller CNF formula, in general, than if we were to expand it for the entire, classical COI. This is the
main idea behind the Bounded Cone of Influence.
3 The cone of influence reduction seems to have been discovered and utilized by a number of people, independently. We note
that it can be seen as a special case of Kurshan's localization reduction [23].
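A minimal sketch of the bounded COI computation, assuming the combinational dependencies of each state variable are given as a dictionary (the data structure and names are illustrative, not BMC's internals):

```python
def bounded_coi(deps, initial_support, k):
    """Support sets at successive unrolling depths, up to depth k.

    deps[v] = set of state variables that v depends on combinationally.
    Growth stops early once the classical-COI fixpoint is reached.
    """
    levels = [set(initial_support)]
    for _ in range(k):
        nxt = set(levels[-1])            # feedback keeps earlier variables relevant
        for v in levels[-1]:
            nxt |= deps.get(v, set())
        levels.append(nxt)
        if nxt == levels[-2]:            # reached the classical cone of influence
            break
    return levels

deps = {'p': {'x'}, 'x': {'y'}, 'y': {'y', 'z'}, 'z': set()}
print(bounded_coi(deps, {'x'}, 3))
```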
5.2 PowerPC Circuit Experiments
We ran experiments on subcircuits from a PowerPC microprocessor under design at Motorola's Somerset design
center, in Austin, Texas. While a processor is under design at Somerset, designers insert assertions into the register
transfer level (RTL) simulation model. These Boolean expressions are important safety properties, i.e., properties
which should hold at all time points. If an assertion is ever false during simulation, an immediate error is flagged. In
our experiments, we checked, using BMC and two public domain SAT checkers, SATO and GRASP, 20 assertions
chosen from 5 different processor design blocks. We turned each into an AG p property, where p was the original
assertion. For each of these, we:
1. Checked whether p was a tautology.
2. Checked whether p was otherwise an invariant.
3. Checked whether AG p held for various time bounds, k, from 0 to 20.
The gate level netlist for each of the 5 design blocks was translated into an SMV file, with each latch represented
by a state variable having individual next state and initial state assignments. For the latter, we assigned the 0
or 1 values we knew the latches would have after a designated power-on-reset sequence.4 Primary inputs to design
blocks were modeled as unconstrained state variables, i.e., having neither next state nor initial state assignments.
For combinational tautology checking we eliminated all initialization statements and ran BMC with a bound of k = 0, checking the inner, propositional formula, p, from each of the AG p specifications. Under these conditions,
the specification could hold only if p was true for all assignments to the state variables in its support.
Invariance checking entails checking whether a propositional formula holds in all initial states and is preserved
by the transition relation, the latter meaning that all successors of states satisfying the formula also satisfy it.
If these conditions are met, we call the predicate an inductive invariant. We ran BMC on input files with all
initialization assignments intact, for each design block and each p in each AG p specification, with a time bound
of k = 0. This determined whether each formula, p, held in the single, valid initial state of each design. We then
ran BMC in a mode in which, for each design block and each AG p specification, all initialization assignments
were removed from the input file, and, instead, an initial states predicate was added that indicated the initial
states should be all those states satisfying p. Note that we did not really believe the initial states actually were
those satisfying p. This technique was simply a way of getting the BMC tool to check all successors of all states
satisfying p, in one time step. The time bound, k, was set to 1, and the AG p specification was checked. If the
specification held, this showed p was preserved by the transition relation, since AG p could only hold, under these
circumstances, if the successors of every state satisfying p also satisfied p. Note that AG p not holding under these
conditions could possibly be due exclusively to behaviors in unreachable states. For instance, if an unreachable
state, s, existed which satisfied p but had a successor, s′, which did not, then the check would fail. Therefore,
because of possible bad behaviors in unreachable states, this technique can only show that p is an invariant, but
cannot show that it is not. However, we found this type of inductive invariance checking to be very inexpensive
with bounded model checking, and, therefore, very valuable. In fact, we made it a cornerstone of the methodology
we recommend in Section 6.
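The same two-part test can be phrased in a few lines. The sketch below runs the check on an explicit-state toy model rather than on SAT formulas over latches, which is how the paper actually performs it; the model and names are illustrative assumptions.

```python
def is_inductive_invariant(init_states, successors, p):
    """p holds in all initial states and is preserved by every transition.
    Sufficient, not necessary, for p being an invariant: unreachable states
    may break the preservation step."""
    if any(not p(s) for s in init_states):            # fails in an initial state
        return False
    for s, succs in successors.items():
        if p(s) and any(not p(t) for t in succs):      # p-state with a bad successor
            return False                               # possibly an unreachable state
    return True

# Toy 2-bit counter: p = "not both bits set" is violated by (1,0)->(1,1), so not inductive.
succ = {(a, b): [(a ^ (b == 1), b ^ 1)] for a in (0, 1) for b in (0, 1)}
print(is_inductive_invariant([(0, 0)], succ, lambda s: s != (1, 1)))
```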
In these experiments, we used both the GRASP [33] and SATO [38] satisfiability solvers. When giving results,
however, we do not indicate from which solver they came, rather, we just show the best results from the two. There
is actually an interesting justification for this. In our experience, the time needed for satisfiability solving is often
just a few seconds, and usually no more than a few minutes. However, there are problem instances for which a
particular SAT tool will labor far longer, until a timeout limit is reached. We have quite often found that when
one SAT solving tool needs to be aborted on a problem instance, another such tool will handle it quickly; and,
additionally, the same solvers often switch roles on a later problem instance, the former slow solver suddenly
becoming fast, the former fast one, slow. Since the memory cost of satisfiability solving is usually slight, it makes
sense to give a particular SAT problem, in parallel, to several solvers, or to versions of the same solvers with
different command line arguments, and simply take the first results that come in. So, this method of running
multiple solvers, as we did, on each job, is something which we recommend.
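A minimal sketch of this "race several solvers" tactic on a DIMACS file; the solver command lines are placeholders, not the exact SATO/GRASP/PROVER invocations used at Somerset, and a production version would also cancel the losing processes.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor, as_completed

def race_solvers(cnf_path, commands, timeout=900):
    """Run several SAT solver command lines on one CNF; return the first answer."""
    def run(cmd):
        out = subprocess.run(cmd + [cnf_path], capture_output=True,
                             text=True, timeout=timeout)
        return cmd[0], out.stdout
    with ThreadPoolExecutor(max_workers=len(commands)) as pool:
        futures = [pool.submit(run, cmd) for cmd in commands]
        for fut in as_completed(futures):
            try:
                return fut.result()          # first solver to finish wins
            except Exception:
                continue                     # timeout or crash: wait for the others
    return None

# Placeholder command lines -- substitute the real solver binaries and options.
result = race_solvers("problem.cnf", [["sato"], ["grasp"], ["prover"]])
```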
The SMV input files were given to a recent version of the SMV model checker (the SMV1 version referred
to earlier) to compare to BDD based model checking. We did 20 SMV runs, checking each of the AG p specifi-
cations, separately. When running SMV, we used command line options that enabled the early detection, during
4 Microprocessors are generally designed with specified reset sequences. In PowerPC designs, the resulting values on each
latch are known to the designers, and this is the appropriate initial state for model checking.
reachability analysis, of false AG p properties. In this mode, the verifier did not need to compute a fixpoint if a
counterexample existed, which made the comparison to BMC more appropriate. We also enabled dynamic variable
ordering when running SMV.
All experiments were run with wall clock time limits. The satisfiability solvers were given 15 minutes wall
clock time, maximum, to complete each run, while SMV was given an hour for each of its runs. BMC, itself, was
never timed, as its task of translating the design description and the specification is usually done quite quickly.
The satisfiability solving and SMV runs were done on RS6000 model 390 workstations, having 256 Megabytes
of local memory.
5.3 Environment Modeling
We did not model the interfaces between the subcircuits on which we ran our experiments and the rest of the
microprocessor or the external computer system in which the processor would eventually be placed. This is commonly
referred to as environment modeling. One would ideally like to do environment modeling on subcircuits
such as we experimented on, since these are not closed systems. Rather, they depend for their correct functioning
upon input constraints, i.e., certain input combinations or sequences not occurring. The rest of the system
must guarantee this [21]. However, in the type of invariant checking we did, one would always be assured of
true positives, since if a safety property holds with a totally unconstrained environment, then it holds in the real
environment (this is proven in [13, 18]).
It is likely that an industrial design team would first check safety properties with unconstrained environments,
since careful environment modeling can be time consuming. They would then decide, on an individual basis, what
to do about properties that failed: invest in the environment modeling for more accurate model checking, in order
to separate false failures from real ones, or hope that digital simulation will find any real violations that exist.
Importantly, the model checker's counterexamples could provide hints as to which simulations, on the complete
design not just the subcircuit, may need to be run. For instance, the counterexample may indicate that certain
instructions need to be in execution, certain exceptions occurring, etc. The properties that pass the invariance test
need no more digital simulation, and thus conserve CPU resources.
In the examples we did run, all the negatives proved, upon inspection with designers, to be false negatives.
The experiments still yield, however, useful information on the capacity and speed of bounded model checking.
Further, in Section 6, we describe a methodology that can reduce or eliminate false negatives.
5.4 Experimental Results
As mentioned, we checked 20 safety properties, distributed across 5 design blocks from a single PowerPC micro-
processor. These were all control circuits, having little or no datapath elements. Their sizes were as follows:
Before COI reduction:
Circuit | Latches | PIs | Gates
bbc | 209 | 479 | 4852
ccc | 371 | 336 | 4529
cdc | 278 | 319 | 5474
dlc | 282 | 297 | 2205

After classical COI reduction (per specification):
Circuit | Spec | Latches | PIs
dlc | 7 | 119 | 153

Table 4. Before and After Classical COI (PIs = Primary Inputs)
In table 4, we report the sizes of the circuits before and after classical COI reduction has been applied.
Each AG p specification is given an arbitrary numeric label, on each circuit. These do not relate specifications on
different design blocks, e.g., specification 2 of dlc is in no way related to specification 2 of sdc. Many properties
involved much the same cone of circuitry on a design block, as can be seen by the large number of specifications
having cones of influence with the same number of latches and PIs. However, these reduced circuits were not
identical, from one specification to another, though they shared much circuitry.
Table 5 gives the results of tautology and inductive invariance checking for each p from each AG p specification. These runs were done with bounded COI enabled. There are columns for tautology checking, for preservation by the transition relation and for preservation in initial states. The last two conditions must both hold for a Boolean formula to be an inductive invariant. A Y in the leftmost part of a column indicates the condition holding, an N that it does not. When a Y is recorded, time and memory usage may appear after it, separated by slashes. These are recorded only for times of at least 1 second and memory usage of at least 5 megabytes; otherwise a - appears for insignificant time and memory. As can be seen, tautology and invariance checking can be remarkably inexpensive.
This is an extremely important finding, as these can be quite costly with BDD based methods, and are at the heart
of the verification methodology we propose in Section 6.
We were surprised by the small number of assertions that were combinational tautologies. We had expected
that designers would try to ensure safety properties held by relying on combinational, as opposed to sequential
circuitry. However, the real environment may, in fact, constrain inputs to design blocks combinationally such that
these are combinational tautologies. See Section 6 for a discussion of this.
As stated above, many of our examples exhibited false negatives, and they did so at low time bounds. Others
of our examples were found to be inductive invariants. Satisfiability solving went quickly at high values of k if
counterexamples existed at low values of k or if the property was an invariant. The more difficult SAT runs are
those for which neither counterexamples nor proofs of correctness were found. Table 6 shows the four examples
which were of this type, bbc specs 1, 3 and 4, and sdc spec 1. All results, again, were obtained using bounded
COI. We also ran these examples using just classical COI, and we observed that the improvement that bounded
COI brings relative to classical COI wears off at higher k values, specifically, at values near to 10. Intuitively, this
is due to the fact that, as we extend further in time, we eventually compute valuations for all the state variables in
the classical cone of influence. However, since we expect bounded model checking to be most effective at finding
short counterexamples, bounded COI is helping augment the system's strengths.
In Table 6, long k is the highest k value at which satisfiability solving was accomplished, and vars and clauses list the number of variables and clauses in the CNF file at that highest k level. The time column gives
CPU time, in seconds, for the run at that highest k value. Regarding memory usage, this usually does not exceed
a few tens of megabytes, and is roughly the storage needed for the CNF formula, itself.
Table
7 lists the circuits and specifications which were either shown to be inductive invariants or for which
counterexamples were found. Under the column holds, a Y indicates a finding of being an inductive invariant,
an N the existence of a counterexample. For the counterexamples, the next column, fail k, gives the value of k for which the counterexample was found. Since all counterexamples were found with k values of at most 2, we did not list time and memory usage, as this was extremely slight. In each case, the satisfiability solving took less than a second, and memory usage never exceeded 5 megabytes.
Lastly, the results of BDD-based model checking are that SMV was given each of the 20 properties separately,
but completed only one of these verifications. The 19 others all timed out at one hour wall clock time. SMV was
run when the Somerset computer network allowed it unimpeded access to the CPU it was running on; and still,
under these circumstances, SMV was only able to complete the verification of sdc, specification 3. Classical COI
for this specification gave a very small circuit, having only 23 latches and 15 PIs. SMV found the specification
false in the initial state, in approximately 2 minutes. Even this, however, can be contrasted to BMC needing 2
seconds to translate the specification to CNF, and the satisfiability solver needing less than 1 second to check it!
5.5 Comparison to BDD Based Model Checking
It is useful to reflect on what the experiments on PowerPC microprocessor circuits show and what they do not.
First, the experiments should not be interpreted as evidence that BDD based model checkers cannot handle circuits
of the size given. There are approximation techniques, for instance where certain portions of a circuit are deleted
or approximated with simpler Boolean functions that still yield true positives for invariance checking, and these
could have been employed. Some of the verifications may have gone through under these circumstances. However,
the experiments, as run, do give a measure of the size limits of BDD based and SAT based model checking.
Without accurate input constraints it proved easy to reach states that violated the purported invariants. It has been
noted, empirically, by many users of BDD based tools, that it is much harder to build BDDs for incorrect designs
than it is for correct designs. There is no theoretical explanation of why this is so, but it may very well be that
Circuit | Spec | Tautology | Tran Rel'n | Init State
dlc | 6 | N - | N - | Y -
Table 5. Tautology and Invariance Checking Results
circuit | spec | long k | vars | clauses | time
Table 6. Size Measures for Difficult Examples
circuit | spec | holds | fail k
dlc | 6 | N | 2
dlc | 7 | N | 0
Table 7. Invariants and Counterexamples
SMV, or another BDD based model checker, could have successfully completed many of the property checks on
versions of these designs having accurate input constraints. However, in a way this is to the credit of bounded
model checking, in that it seems able to handle problem instances which are difficult for BDDs.
Another observation is that when a design has a large number of errors, random digital simulation can
find counterexamples quickly. Many commercial formal verification tools first run random digital simulations
on a design, to see if property violations can be detected easily. While we did not do this in our experiments,
we feel it is likely this, too, would have found quick counterexamples. However, this only shows that bounded
model checking is at least as powerful as this method on buggy designs; yet, bounded model checking has the
additional capability of conducting exhaustive searches, within certain limits.
As to those limits, a big question with bounded model checking is whether it can, or will, find long coun-
terexamples. Clearly, it is to the advantage of BDD based model checking that if the BDDs can be built and
manipulated, all infinite computation paths, i.e., all loops through the state graph, can be examined. But, all too
often, as mentioned, the BDDs cannot be built or manipulated. In those cases, even if bounded model checking
cannot be run over many time steps, it does give exhaustive verification at each time step, and certainly is worth
running. Most of our experiments did not produce information that would answer the question as to the expected
length of counterexamples, but a few did. Out of the verifications attempted, 4 yielded neither counterexamples
nor proofs of correctness, and simply timed out. This means for the property being checked, these designs were
not buggy, up to the depth checked. Of these four, bbc specs 1, 3 and 4, sdc spec 1, BMC was able to go out to
4, 10, 5 and 4 time steps, respectively (see Table 6). Thus, we expect that with current technology, we might be limited to between 5 and 10 time steps on large designs. Of course, we could have let the SAT tools run longer,
and undoubtedly we would have extended some of these numbers. But, that was not the goal of our experiment.
We tried to see what one could expect running large numbers of designs through a verifier, where not much time
could be spent on any individual verification, as we felt this would replicate conditions that would occur in indus-
try. Still even if we end up limited, in the end, to explorations within 5 to 10 time steps of initial states, if such
explorations can be done quickly and are exhaustive, it is certain they will aid in finding design errors in industry.
And, of course, we hope to extend these limits by further research.
Lastly, the results for invariance checking speak for themselves. We believe the performance would only
improve given accurate input constraints. There is no logical reason to believe otherwise. Yet, it is hard to improve
on the existing performance, since nearly every invariance check completed in under 1 second!
6 A Verification Methodology
Our experimental results lead us to propose an automated methodology for checking safety properties on industrial
designs. In what follows, we assume a design divided up into separate blocks, as is the norm with hierarchical
VLSI designs. Our methodology is as follows:
1. Annotate each design block with Boolean formulae required to hold at all time points. Call these the block's
inner assertions.
2. Annotate each design block with Boolean formulae describing constraints on that block's inputs. Call these
the block's input constraints.
3. Use the procedure outlined in Section 6.2 to check each block's inner assertions under its input constraints,
using bounded model checking with satisfiability solving.
This methodology could be extended to include monitors for satisfaction of sequential constraints, in the
manner described in [21], where input constraints were considered in the context of BDD based model checking.
6.1 Incorporating Constraints
Let us consider propositional input constraints with which the valuations of circuit inputs must always be consistent.
We discussed Kripke structures in Section 2, and how these can be used to model digital hardware systems.
We defined the unrolled transition relation of a Kripke structure in formula 1, of Section 3.1. We can incorporate
input constraints into the unrolled transition relation as shown below, where we assume the input constraints are
given by a propositional formula, c, over state variables representing inputs.
Below, when we speak of checking invariants under input constraints, we mean using formula 2 in place of
formula 1 for the unrolled transition relation, $[[M]]_k$.
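The displayed formula is not reproduced in this copy. Under the standard BMC unrolling, with formula 1 taken as $[[M]]_k \equiv I(s_0) \wedge \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1})$, a constrained unrolling in the spirit of formula 2 would conjoin the input constraint c at every time point; the exact form below is an assumption for illustration, not a quotation:

\[
[[M]]^{c}_k \;\equiv\; I(s_0) \wedge c(s_0) \wedge \bigwedge_{i=0}^{k-1}\big(T(s_i, s_{i+1}) \wedge c(s_{i+1})\big).
\]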
6.2 Safety Property Checking Procedure
The steps for checking whether a block's inner assertion, p, is an invariant under input constraints, c, are:
1. Check whether p is a combinational tautology in the unconstrained K, using formula 1. If it is, exit.
2. Check whether p is an inductive invariant for the unconstrained K, using formula 1. If it is, exit.
3. Check whether p is a combinational tautology in the presence of input constraints, using formula 2. If it is,
go to step 6.
4. Check whether p is an inductive invariant in the presence of input constraints, using formula 2. If it is, go to
step 6.
5. Check if a bounded length counterexample exists to AG p in the presence of input constraints, using formula 2.
If one is found, there is no need to examine c, since the counterexample would exist without input constraints.5
If a counterexample is not found, go to step 6. The input constraints may need to be reformulated and this
procedure repeated from step 3.
6. Check the input constraints, c, on pertinent design blocks, as explained below.
Inputs that are constrained in one design block, A, will, in general, be outputs of another design block, B. To check
A's input constraints, we turn them into inner assertions for B, and check them with the above procedure. One must
take precautions against circular reasoning while doing this. Circular reasoning can be detected automatically,
however, and should not, therefore, be a barrier to this methodology.
The ease with which we carried out tautology and invariance checking indicates the above is entirely feasible.
Searching for a counterexample, step 5, may become costly at high k values; however, this can be arbitrarily
limited. It is expected that design teams would set limits for formal verification and would complement its use
with simulation, for the remainder of available resources.
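A compact sketch of the step 1-6 procedure as a driver loop; the three callables stand for SAT-based checks of the kinds described above and are hypothetical, not functions of the BMC tool.

```python
def check_block_assertion(p, c, k_max, check_tautology, check_inductive, bounded_cex):
    """Steps 1-6 of the safety-property checking procedure (sketch).
    check_tautology(p, c) / check_inductive(p, c) / bounded_cex(p, c, k) are
    placeholders for SAT queries on formulas (1) and (2)."""
    if check_tautology(p, None):                         # step 1: unconstrained tautology
        return "holds (combinational tautology)"
    if check_inductive(p, None):                         # step 2: unconstrained induction
        return "holds (inductive invariant)"
    if check_tautology(p, c) or check_inductive(p, c):   # steps 3-4: with constraints
        return "holds under c; check c on the driving blocks (step 6)"
    for k in range(k_max + 1):                           # step 5: bounded search
        if bounded_cex(p, c, k):
            return f"fails: counterexample of length {k} (valid without c too)"
    return "open: no proof, no counterexample up to k_max"

# Demo with trivial stand-ins for the SAT-based checks.
print(check_block_assertion("p", "c", 5,
                            lambda p, c: False,
                            lambda p, c: c is not None,
                            lambda p, c, k: False))
```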
7 Conclusions
We can summarize the advantages of bounded model checking as follows. Bounded model checking entails only
slight memory and CPU usage, especially if the user is willing to not push the time bound, k, to its limit. But there
are some encouraging results for larger values of k as well [32]. The technique is extremely fast for invariance
checking. Counterexamples and witnesses are of minimal length, which make them easy to understand. The
technique lends itself well to automation, since it needs little by-hand intervention. The disadvantages of bounded
model checking are that, at present, the implementations are limited as to the types of properties that can be
checked, and there is no clear evidence the technique will consistently find long counterexamples or witnesses.
From this discussion it follows that, at the current stage of development, bounded model checking alone cannot entirely replace traditional symbolic model checking techniques based on BDDs. However, in combination with traditional techniques, bounded model checking is able to handle more verification tasks consistently. Particularly for larger designs where BDDs explode, bounded model checking is often still able to find design errors or, as in our experiments, violations of certain environment assumptions.
Since bounded model checking is a rather recent technique there are a lot of directions for future research:
1. The use of domain knowledge to guide search in SAT procedures.
2. New techniques for approaching completeness, especially in safety property checking, where it may be the
most possible.
3. Combining bounded model checking with other reduction techniques.
5 This is implied by the theorems in [13, 18], mentioned in Section 5.2
4. Lastly, combining bounded model checking with a partial BDD approach.
The reader may also refer to [32], which presents successful heuristics for choosing decision variables for SAT
procedures in the context of bounded model checking of industrial designs. In [37] early results on combining
BDDs with bounded model checking are reported. See also [1] for a related approach.
While our efforts will continue in these directions, we expect the technique to be successful in the industrial
arena as presently constituted, and this, we feel, will prompt increased interest in it as a research area. This is all
to the good, as it will impel us faster, towards valuable solutions.
--R
Automatic verification of finite-state concurrent systems using temporal logic specifications
Verification of the Futurebus+ Cache Coherence Protocol
Model Checking and Abstraction.
Model Checking and Abstraction.
Model Checking.
Verifying Temporal Properties of Sequential Machines Without Building their State Diagrams.
A Computing Procedure for Quantification Theory.
Building Decision Procedures for Modal Logics from Propositional Decision Procedures - the case study of modal K
Model Checking and Modular Verification.
An Intermediate Design Language and its Analysis.
The second DIMACS implementation challenge
Design constraints in symbolic model checking.
Pushing the envelope: Planning, Propositional Logic and Stochastic Search
Test Generation using Boolean Satisfiability.
The design of a self-timed circuit for distributed mutual exclusion
Symbolic Model Checking: An Approach to the State Explosion Problem.
A Computational Theory and Implementation of Sequential Hardware Equivalence.
A structure-preserving clause form translation
Specification and Verification of Concurrent Systems in CESAR.
Analyzing a PowerPC 620 Microprocessor Silicon Failure using Model Checking.
Efficient BDD Algorithms for FSM Synthesis and Verification.
Tuning SAT Checkers for Bounded Model Checking
Search Algorithms for Satisfiability Problems in Combinational Switching Circuits.
Algorithms for Solving Boolean Satisfiability in Combinational Circuits.
Combinational Test Generation Using Satisfiability.
Combining decision diagrams and sat procedures for efficient symbolic model checking.
A Decision Procedure for Propositional Logic.
SATO: An Efficient Propositional Prover.
--TR
Automatic verification of finite-state concurrent systems using temporal logic specifications
Graph-based algorithms for Boolean function manipulation
A structure-preserving clause form translation
Representing circuits more efficiently in symbolic model checking
Model checking and abstraction
Symbolic model checking
Model checking and modular verification
Model checking and abstraction
Computer-aided verification of coordinating processes
An intermediate design language and its analysis
Algorithms for solving Boolean satisfiability in combinational circuits
Symbolic model checking using SAT procedures instead of BDDs
A Computing Procedure for Quantification Theory
Symbolic Model Checking
Symbolic Model Checking without BDDs
Symbolic Reachability Analysis Based on SAT-Solvers
The Industrial Success of Verification Tools Based on Stålmarck's Method
Design Constraints in Symbolic Model Checking
Verifying Safety Properties of a PowerPC Microprocessor Using Symbolic Model Checking without BDDs
Combining Decision Diagrams and SAT Procedures for Efficient Symbolic Model Checking
Tuning SAT Checkers for Bounded Model Checking
Introduction to a Computational Theory and Implementation of Sequential Hardware Equivalence
Verifying Temporal Properties of Sequential Machines Without Building their State Diagrams
Analyzing a PowerPC 620 Microprocessor Silicon Failure Using Model Checking
Design and Synthesis of Synchronization Skeletons Using Branching-Time Temporal Logic
Verification of the Futurebus+ Cache Coherence Protocol
Building Decision Procedures for Modal Logics from Propositional Decision Procedure - The Case Study of Modal K
SATO
--CTR
Wojciech Penczek , Alessio Lomuscio, Verifying epistemic properties of multi-agent systems via bounded model checking, Fundamenta Informaticae, v.55 n.2, p.167-185, May
M. Kacprzak , A. Lomuscio , W. Penczek, From Bounded to Unbounded Model Checking for Temporal Epistemic Logic, Fundamenta Informaticae, v.63 n.2-3, p.221-240, April 2004
Alex Aiken , Suhabe Bugrara , Isil Dillig , Thomas Dillig , Brian Hackett , Peter Hawkins, An overview of the saturn project, Proceedings of the 7th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering, p.43-48, June 13-14, 2007, San Diego, California, USA
Stephanie Kemper , Andr Platzer, SAT-based Abstraction Refinement for Real-time Systems, Electronic Notes in Theoretical Computer Science (ENTCS), 182, p.107-122, June, 2007
Wojciech Penczek , Alessio Lomuscio, Verifying Epistemic Properties of Multi-agent Systems via Bounded Model Checking, Fundamenta Informaticae, v.55 n.2, p.167-185, April
Liang Zhang , Mukul R. Prasad , Michael S. Hsiao , Thomas Sidle, Dynamic abstraction using SAT-based BMC, Proceedings of the 42nd annual conference on Design automation, June 13-17, 2005, San Diego, California, USA
W. Penczek , A. Lomuscio, Verifying epistemic properties of multi-agent systems via bounded model checking, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, July 14-18, 2003, Melbourne, Australia
Liang Zhang , M. R. Prasad , M. S. Hsiao, Incremental deductive & inductive reasoning for SAT-based bounded model checking, Proceedings of the 2004 IEEE/ACM International conference on Computer-aided design, p.502-509, November 07-11, 2004
Clark Barrett , Leonardo Moura , Aaron Stump, Design and results of the 2nd annual satisfiability modulo theories competition (SMT-COMP 2006), Formal Methods in System Design, v.31 n.3, p.221-239, December 2007
Indradeep Ghosh , Mukul R. Prasad, A Technique for Estimating the Difficulty of a Formal Verification Problem, Proceedings of the 7th International Symposium on Quality Electronic Design, p.63-70, March 27-29, 2006
Matti Jrvisalo , Tommi Junttila , Ilkka Niemel, Unrestricted vs restricted cut in a tableau method for Boolean circuits, Annals of Mathematics and Artificial Intelligence, v.44 n.4, p.373-399, August 2005
Dionisio de Niz , Peter H. Feiler, Aspects in the industry standard AADL, Proceedings of the 10th international workshop on Aspect-oriented modeling, p.15-20, March 12-12, 2007, Vancouver, Canada
Boena Wona, ACTLS properties and Bounded Model Checking, Fundamenta Informaticae, v.63 n.1, p.65-87, January 2004
Panagiotis Manolios , Sudarshan K. Srinivasan , Daron Vroon, Automatic memory reductions for RTL model verification, Proceedings of the 2006 IEEE/ACM international conference on Computer-aided design, November 05-09, 2006, San Jose, California
Nadia Creignou , Herv Daud , John Franco, A sharp threshold for the renameable-Horn and the q-Horn properties, Discrete Applied Mathematics, v.153 n.1, p.48-57, 1 December 2005
Harald Rue , Leonardo de Moura, Simulation and verification I: from simulation to verification (and back), Proceedings of the 35th conference on Winter simulation: driving innovation, December 07-10, 2003, New Orleans, Louisiana
K. Subramani , John Argentieri, Chain programming over difference constraints, Nordic Journal of Computing, v.13 n.4, p.309-327, December 2006
Carsten Sinz, Visualizing SAT Instances and Runs of the DPLL Algorithm, Journal of Automated Reasoning, v.39 n.2, p.219-243, August 2007
Schafer , Heike Wehrheim, The Challenges of Building Advanced Mechatronic Systems, 2007 Future of Software Engineering, p.72-84, May 23-25, 2007
Miroslav N. Velev , Randal E. Bryant, Effective use of boolean satisfiability procedures in the formal verification of superscalar and VLIW microprocessors, Journal of Symbolic Computation, v.35 n.2, p.73-106, February
Alur , Thao Dang , Franjo Ivani, Predicate abstraction for reachability analysis of hybrid systems, ACM Transactions on Embedded Computing Systems (TECS), v.5 n.1, p.152-199, February 2006
Tobias Schuele , Klaus Schneider, Bounded model checking of infinite state systems, Formal Methods in System Design, v.30 n.1, p.51-81, February 2007
Henry Kautz , Bart Selman, The state of SAT, Discrete Applied Mathematics, v.155 n.12, p.1514-1524, June, 2007
Lucas Bordeaux , Youssef Hamadi , Lintao Zhang, Propositional Satisfiability and Constraint Programming: A comparative survey, ACM Computing Surveys (CSUR), v.38 n.4, p.12-es, 2006

Keywords: model checking; cone of influence reduction; processor verification; bounded model checking; satisfiability
511261 | Complexity Analysis of Successive Convex Relaxation Methods for Nonconvex Sets.

Abstract: This paper discusses computational complexity of conceptual successive convex relaxation methods proposed by Kojima and Tunçel for approximating a convex relaxation of a compact subset F of the n-dimensional Euclidean space $R^n$. Here, $C_0$ denotes a nonempty compact convex subset of $R^n$, and $P_F$ a set of finitely or infinitely many quadratic functions. We evaluate the number of iterations which the successive convex relaxation methods require to attain a convex relaxation of F with a given accuracy $\epsilon$, in terms of $\epsilon$, the diameter of $C_0$, the diameter of F, and some other quantities characterizing the Lipschitz continuity, the nonlinearity, and the nonconvexity of the set $P_F$ of quadratic functions.

1 Introduction.
In their paper [2], Kojima and Tunçel proposed a class of successive convex relaxation
methods for a general nonconvex quadratic program:
maximize $c^T x$ subject to $x \in F$,   (1)
where
$c \in R^n$: a constant column vector in the n-dimensional Euclidean space $R^n$;
$F \equiv F(C_0, P_F) \equiv \{x \in C_0 : p(x) \le 0\ (\forall p(\cdot) \in P_F)\}$ (we often write F instead of $F(C_0, P_F)$);
$C_0$: a nonempty compact convex subset of $R^n$;
$P_F$: a set of finitely or infinitely many quadratic functions on $R^n$.
(2)
Their methods generate a sequence $\{C_k\}$ of compact convex sets satisfying
(a) $\mathrm{c.hull}(F) \subseteq C_{k+1} \subseteq C_k$ $(k = 0, 1, 2, \ldots)$;
(b) $\bigcap_{k=0}^{\infty} C_k = \mathrm{c.hull}(F)$; in particular, if $F = \emptyset$, then $C_k = \emptyset$ for some finite k (detecting infeasibility).
Here c.hull(F) denotes the convex hull of F.
In connection with the successive convex relaxation methods, Kojima [4] pointed out that
a wide class of nonlinear programs can be reduced to a nonconvex quadratic program of the
form (1). More generally, it is known that any closed subset $G \subseteq R^n$ can be represented as the solution set of a single inequality built from a convex function $g(\cdot) : R^n \to R$ and a quadratic term. See, for example, Corollary 3.5 of [10]. Thus, given any closed subset G of $R^n$ and any compact convex subset $C_0$ of $R^n$, we can rewrite maximization of a linear function $c^T x$ over a compact set $G \cap C_0$ as
maximize $c^T x$ subject to $(x, t) \in F$,
where $g(\cdot)$ is a convex function on $R^n$ and $\bar{t} \ge 0$ is a sufficiently large number such that the set playing the role of $C_0$ in the lifted variables $(x, t)$ turns out to be compact and convex, and the resultant problem is a special case of the general nonconvex quadratic program (1). Although this construction is not implementable because an explicit algebraic representation of such a convex function $g(\cdot)$ is usually impossible, it certainly shows theoretical potential of the
successive convex relaxation methods for quite general nonlinear programs.
The successive convex relaxation methods are extensions of the lift-and-project procedure, which was proposed by Lovász and Schrijver [5] for 0-1 integer programs, to a general quadratic program of the form (1). At each iteration of the methods, we first generate a set $P_k$ of finitely or infinitely many quadratic functions such that each $p(x) \le 0$ $(p(\cdot) \in P_k)$ forms a valid inequality for the kth iterate $C_k$. Since $C_k$ was chosen to include F in the previous iteration, each $p(x) \le 0$ serves as a (redundant) valid inequality for F; hence F is represented as
$F = \{x \in C_0 : p(x) \le 0\ (\forall p(\cdot) \in P_F \cup P_k)\}.$
We then apply the SDP (semidefinite programming) relaxation or the SILP (semi-infinite linear programming) relaxation to the set F with the above representation in terms of the set $P_F \cup P_k$ of quadratic functions to generate the next iterate $C_{k+1}$. (The latter relaxation is also called the reformulation-convexification approach in the literature [7].) See also [1, 8].
The successive convex relaxation methods outlined above are conceptual in the sense
that we need to solve "infinitely many semidefinite programs (or linear programs) having
infinitely many linear inequality constraints" at each iteration. In their succeeding paper
[3], Kojima and Tunçel presented two types of techniques which discretize and localize
"infinitely many semidefinite programs (or linear programs) having infinitely many linear
inequality constraints" to implement their conceptual methods under a certain assumption
on a finite representation of the feasible region F . They established that, for any given
$\epsilon > 0$, we can discretize and localize the conceptual methods to generate a sequence $\{C_k\}$ of compact convex sets satisfying the features (a), (b) above and
(c)' $C_k \subseteq \mathrm{c.hull}(F(\epsilon))$ for some finite number k.
Each iteration of these discretized-localized versions of successive convex relaxation methods
requires solving finitely many semidefinite programs (or linear programs) having finitely many linear inequality constraints, so that these versions are implementable on computer. However, they are still impractical because, as a higher accuracy $\epsilon$ is required, not only the
number of the semidefinite programs (or linear programs) to be solved at each iteration but
also their sizes explode quite rapidly.
More recently, Takeda, Dai, Fukuda and Kojima [9] presented practical versions of successive
convex relaxation methods by further slimming down the discretized-localized versions
to overcome the rapid explosion in the number of the semidefinite programs (or linear programs) to be solved at each iteration and their sizes. Although these versions no longer enjoy the feature (c)', numerical results reported in the paper [9] look promising.
This paper investigates computational complexity of the conceptual successive convex
relaxation methods given in the paper [2]. When they are applied to 0-1 integer programs,
they work as the Lovász and Schrijver lift-and-project procedure [5], and they terminate in at most n iterations, where n denotes the number of 0-1 variables. (See Section 6 of [2] and Section 7 of [3].) In the general case where $P_F$ consists of arbitrary quadratic functions, however, the convergence of $\{C_k\}$ to c.hull(F) is slower than linear in the worst case. (See an example in
Section 8.3 of [2]. ) In this paper, we bound the number k of iterations required to generate
an approximation of c.hull(F ) with a given "accuracy." To begin with, we need to clarify
the following issues.
• Input data of the conceptual successive convex relaxation methods.
• Output of the methods and its quality or "accuracy."
• What are assumed to be possible in the computation?
To discuss these issues in more detail, we need some notation.
$S^n$: the set of $n \times n$ symmetric matrices;
$qf(x; \gamma, q, Q) \equiv \gamma + 2q^T x + x^T Q x$: the quadratic function having the constant term $\gamma$, the linear term $2q^T x$, and the quadratic term $x^T Q x$,   (3)
where $\gamma \in R$, $q \in R^n$ and $Q \in S^n$.
Our input data are a nonempty compact convex subset C 0 of R n and a set P F of finitely
or infinitely many quadratic functions. We do not care about how we represent the compact convex set $C_0$; it may be represented in terms of finitely or infinitely many linear inequalities,
nonlinear convex inequalities and/or linear matrix inequalities. Although it seems nonsense
to try to define the size of such input data, we extract some quantity and quality from the
input data. The diameter diam($C_0$) of $C_0$ and the diameter diam(F) of F are relevant to our complexity analysis since $C_0$ serves as an initial approximation of c.hull(F) which we want to compute. Concerning quality or difficulty of the input data, we introduce a common Lipschitz constant $\lambda_{lip}(P_F, C_0)$ for all $p(\cdot) \in P_F$ on $C_0$, a nonlinearity measure $\lambda_{nl}(P_F)$, and a nonconvexity measure $\lambda_{nc}(P_F)$ of the set $P_F$. Here $\lambda_{\min}(Q)$ denotes the minimum eigenvalue of Q. Note that all of $\lambda_{lip}(P_F, C_0)$, $\lambda_{nl}(P_F)$ and $\lambda_{nc}(P_F)$ take either a finite nonnegative value or $+\infty$. We will assume that they are finite. If $P_F$ consists of a finite number of quadratic functions, this assumption is satisfied. We may regard $\lambda_{nc}(P_F)$ as a nonconvexity measure of the set $P_F$ of quadratic functions; all quadratic functions in $P_F$ are convex if and only if $\lambda_{nc}(P_F) = 0$, and $P_F$ involves more nonconvexity as $\lambda_{nc}(P_F)$ gets larger. On the other hand, we may regard $\lambda_{nl}(P_F)$ as a nonlinearity measure of the set $P_F$ of quadratic functions; all functions in $P_F$ are linear if and only if $\lambda_{nl}(P_F) = 0$, and $P_F$ involves more nonlinearity as $\lambda_{nl}(P_F)$ gets larger. The quantities $\lambda_{lip}(P_F, C_0)$, $\lambda_{nl}(P_F)$ and $\lambda_{nc}(P_F)$ directly affect the upper bounds that we will derive for the number k of iterations required to generate an approximation of c.hull(F) with a given accuracy.
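The displayed definitions of these measures are not reproduced in this copy. As an illustrative assumption consistent with the text (where $\lambda_{\min}(Q)$ is the minimum eigenvalue), a natural per-function nonconvexity measure is $\max(0, -\lambda_{\min}(Q))$, which is zero exactly when the quadratic term $x^T Q x$ is convex; the set-level measure would then be a supremum of this quantity over $P_F$.

```python
import numpy as np

def nonconvexity(Q):
    """max(0, -lambda_min(Q)) for symmetric Q: 0 iff x^T Q x is convex."""
    return max(0.0, -np.linalg.eigvalsh(Q).min())

convex_Q = np.array([[2.0, 0.0], [0.0, 1.0]])
indefinite_Q = np.array([[1.0, 0.0], [0.0, -3.0]])
print(nonconvexity(convex_Q), nonconvexity(indefinite_Q))   # 0.0 3.0
```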
Our output is a compact set $C_k \subseteq R^n$, which is an approximation of c.hull(F). Again we don't care about its representation and size. In order to evaluate the quality or accuracy of the approximation, we introduce the notation
$F(\epsilon) \equiv \{x \in C_0 : p(x) \le \epsilon\ (\forall p(\cdot) \in P_F)\}$ for every $\epsilon \ge 0$.
By definition, $F(0) = F$. We say that a compact convex subset C of $C_0$ is an $\epsilon$-convex-relaxation of F if it satisfies
$F \subseteq C \subseteq \mathrm{c.hull}(F(\epsilon)).$
We set up an $\epsilon$-convex-relaxation of F as our goal of the successive convex relaxation methods.
Now we discuss what are assumed to be possible in the computation. First we assume
precise real arithmetic operations. At each iteration, we need to choose a set P k of finitely or
infinitely many quadratic functions that induce valid inequalities for C k before performing
the SDP or SILP relaxation. Secondly we assume that such a P k is available. Kojima
and Tunçel [2] proposed and studied several candidates for P k . In this paper, we focus our
attention on the following three models of successive convex relaxation methods.
• Spherical-SDP Model: We take P k to be a set of spherical quadratic functions, and we
perform the SDP relaxation (Section 3).
• Rank-2-SDP Model: We take P k to be a set of rank-2 quadratic functions, and we
perform the SDP relaxation (Section 4).
• Rank-2-SILP Model: We take P k to be a set of rank-2 quadratic functions, and we
perform the semi-infinite LP relaxation (Section 4).
In each model, we assume that a set of infinitely many quadratic functions chosen for P k is
available.
Our complexity analysis of the Spherical-SDP Model is much simpler than that of the
latter two models, and the former analysis helps an easier understanding of the latter two
models, which are of more practical interest. The latter two models correspond to extensions
of the Lov'asz-Schrijver lift-and-project procedure with the use of the SDP relaxation and
the use of the SILP relaxation, respectively. See Section 6 of [2] and Section 7 of [3]. The
Rank-2-SDP and the Rank-2-SILP Models themselves are not implementable yet. Implementable
versions of successive convex relaxation methods given in the paper [3] correspond
to discretization and localization of these two models. See also [9].
We summarize our main results: For each of the models mentioned above, given an
arbitrary positive number $\epsilon$, we bound the number k of iterations which the successive convex relaxation methods require to attain an $\epsilon$-convex-relaxation of $F(C_0, P_F)$, in terms of the quantities $1/\epsilon$, diam($C_0$), $1/\mathrm{diam}(F)$, $\lambda_{lip}(P_F, C_0)$, $\lambda_{nl}(P_F)$ and $\lambda_{nc}(P_F)$. The upper bound derived there itself might not be so significant, and might be improved by a
more sophisticated analysis. It should be emphasized, however, that the upper bound is
polynomial in these quantities, and that this paper provides a new way of analyzing the
computational complexity of the successive convex relaxation methods for general nonlinear
programs.
2 Preliminaries.
After introducing notation and symbols which we will use throughout the paper, we describe
the SSDP and SSILP algorithms in Section 2.2. In Section 2.3, we present another accuracy
measure of convex relaxation which will be utilized in establishing our main results.
2.1 Notation and Symbols.
Let
S^n = the set of n × n symmetric matrices;
S^n_+ = the set of n × n symmetric positive semidefinite matrices;
A • B = the trace of A^T B;
Q = the set of quadratic functions on R^n (see (3) for the definition of qf(·; γ, q, Q));
Q_+ = the set of convex quadratic functions on R^n;
L = the set of linear functions on R^n;
c.cone(P) = the convex cone generated by P ⊆ Q;
c.hull(G) = the convex hull of G ⊆ R^n;
B(ξ, ρ) = the (n-dimensional closed) ball with a center ξ ∈ R^n and a radius ρ > 0;
ρ(ξ; G) = the radius of the minimum ball with a center ξ ∈ R^n that contains G ⊆ R^n;
diam(G) = the diameter of G ⊆ R^n.
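For readability, we recall the parametrization of a quadratic function that the cited definition (3) refers to. This is a reconstruction of the standard form used in this line of work and should be read as an assumption rather than a verbatim quotation of the article:

```latex
% Quadratic function with constant term gamma, linear term q and symmetric matrix Q:
\[
  \mathrm{qf}(x;\gamma,q,Q) \;=\; \gamma + 2\,q^{T}x + x^{T}Q\,x ,
  \qquad \gamma\in\mathbb{R},\;\; q\in\mathbb{R}^{n},\;\; Q\in S^{n}.
\]
```

With this convention, Q denotes the set of all such functions, Q_+ those with a positive semidefinite matrix part, and L those whose matrix part vanishes.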
2.2 SSDP and SSILP Relaxation Methods.
Let C ⊆ R^n and p(·) ∈ Q. If p(x) ≤ 0 for all x ∈ C, we say that p(x) ≤ 0 is a (quadratic) valid inequality for C, and that p(·) induces a quadratic valid inequality for C.
Let P be a nonempty subset of quadratic functions, i.e., ∅ ≠ P ⊆ Q. (In the algorithms below, we will take P to be the union of P_F and a set P_k of quadratic functions which induce quadratic valid inequalities for the kth iterate C_k.) We use the notation F̂(C_0; P) for the SDP relaxation, and the notation F̂_L(C_0; P) for the SILP relaxation, applied to the set C_0 with the functions in P; in the former an auxiliary matrix variable X ∈ S^n is introduced, the bordered matrix built from (1, x, X) is required to be positive semidefinite, and every quadratic constraint is replaced by its linearization in (x, X), while in the latter the positive semidefiniteness requirement is dropped.
The lemma below provides a fundamental characterization of b
F L
which played an essential role in the global convergence analysis of the successive convex
relaxation methods in the papers [2, 3]. We will use the lemma in Sections 3 and 4.
Lemma 2.1. Let ; 6= P ae Q.
F L
Proof: See Theorem 4.2 and Corollary 4.3 of [2].
Algorithms 2.2 and 2.3 below are slight variants of the SSDP relaxation method given
in Section 3 of [2] and the SSILP relaxation method given in Section 7 of [2], respectively.
See also Section 3 of [3].
Algorithm 2.2 (SSDP relaxation method).
Step 0: Let C_0 be as given, and set k = 0.
Step 1: Choose a set P_k ⊆ Q that induces quadratic valid inequalities for C_k.
Step 2: Let C_{k+1} = F̂(C_k; P_F ∪ P_k).
Step 3: Let k = k + 1 and go to Step 1.
Algorithm 2.3 (SSILP relaxation method).
Step 0: Let C_0 be as given, and set k = 0.
Step 1: Choose a set P_k ⊆ Q that induces quadratic valid inequalities for C_k.
Step 2: Let C_{k+1} = F̂_L(C_k; P_F ∪ P_k).
Step 3: Let k = k + 1 and go to Step 1.
In Section 3 where we discuss the complexity of the Spherical-SDP Model, we will take
k to be a set of spherical functions, while in Section 4 where we discuss the complexity of
the Rank-2-SDP and the Rank-2-SILP Models, we take P k to be a set of rank-2 quadratic
functions. By definition, F̂(C; P) ⊆ F̂_L(C; P). This inclusion makes it possible for us to apply our complexity analysis to the Rank-2-SILP Model and, simultaneously, to the Rank-2-SDP Model in Section 4.
2.3 Accuracy Measures of Convex Relaxation.
In Section 1 we introduced the notion of an ε-convex-relaxation of F, which uses c.hull(F(ε)) to measure the quality or accuracy of an approximation C_k of c.hull(F), the object which we want to compute. In our complexity analysis of Algorithms 2.2 and 2.3, this notion is not easy to manipulate directly. In this section, we introduce another notion, a (ψ, Ξ)-convex-relaxation of F, which is easier to manipulate, and relate it to the ε-convex-relaxation of F.
Let ψ > 0, and let Ξ ⊆ R^n be a nonempty compact convex set. Define c.relax(F(ψ); Ξ) as the intersection, over all ξ ∈ Ξ, of the balls B(ξ, ρ(ξ; F(ψ))). We say that a compact convex subset C of C_0 is a (ψ, Ξ)-convex-relaxation of F if C ⊆ c.relax(F(ψ); Ξ).
The definition of c.relax(F(ψ); Ξ) is quite similar to that of a (ψ, ρ)-approximation of c.hull(F) given in Section 5 of [3]. Note that B(ξ, ρ(ξ; F(ψ))) in the definition of c.relax(F(ψ); Ξ) is the minimum ball with the center ξ that contains F(ψ). It is easily verified that, given an arbitrary open convex set U containing F, if ψ > 0 is sufficiently small and if Ξ is a sufficiently large ball with its center in C_0, then F ⊆ c.relax(F(ψ); Ξ) ⊆ U. See Lemma 5.1 of [3] and its proof. By definition, we also see that c.hull(F(ψ)) ⊆ c.relax(F(ψ); Ξ).
We assume in the remainder of this section that there exists a finite common Lipschitz constant κ_lip for all p(·) ∈ P_F over C_0. If P_F consists of a finite number of quadratic functions, then this assumption is satisfied.
Lemma 2.4. Let - Choose positive numbers /, - and a compact convex
set \Xi such that
where
Proof: If F (/) is empty, then the desired inclusion relation holds with
So we will deal with the case that F (/) is not empty.
(i) Define a ffi-neighborhood G of F (/) within C 0 such that
We show that c.hull(G) ae c.hull(F (ffl)). Suppose that x 2 G. Then x 2 C 0 and there
exists a y 2 F (/) such that
(by y 2 F (/) and the definition of - lip )
Thus we have shown that G ae F (ffl), which implies that c.hull(G) ae c.hull(F (ffl)).
(ii) In view of (i), it suffices to prove that
Assuming that -
x 62 c.hull(G), we show that -
x 62 c.relax(F (/); \Xi). We will construct a
ball B( -) ae R n such that
x 62 c.relax(F (/); \Xi) follows from the definition of c.relax(F (/); \Xi). Let -
be the point that minimize the distance k- x \Gamma yk over y 2 c.hull(F (/)).
yk and -
fy
yg forms a supporting
hyperplane of c.hull(F (/)) such that
y for 8y 2 c.hull(F (/)):
We will show that - ffi ? ffi. Assume on the contrary that -
y lies in c.hull(F (/)),
there are y
Let
Then we see that
d
fy
dg
This implies that -
which contradicts to the hypothesis that -
x 62 c.hull(G).
Thus we have shown that -
(iii) Now defining -
d, we show that - and the ball B( -) satisfies
(8). From the definition of - above and -
first observe that
-) (i.e., the 2nd relation of (8));
Next we will show that the third relation of (8), i.e., c.hull(F (/)) ae B( - ). Suppose
that y 2 c.hull(F (/)). Then
d) (by (10))
Note that - d -
and
are orthogonal projection matrices onto the one dimensional
line f-
Rg in R n and its orthogonal complement, respectively. Therefore, from the
above equations, we see that
which derive that
(since both
Furthermore
y
(by (10))
(by
(by (7))
(by (7))
0:
Hence
d
(by (10))
Thus we have seen that c.hull(F (/)) ae B( -):
(iv) Finally we will show that the first relation of (8), i.e.,, - 2 \Xi. From the definition
of - , We know form (6) and (7) that
Hence if we choose a y 2 c.hull(F
This implies that -
Corollary 2.5. Assume that
Let
and
satisfies the inequality (7) of Lemma 2.4. We also see
that
(by
Hence if we take
, then - satisfies (6) and the desired result follows.
3 The Spherical-SDP Model.
Throughout this section, we assume that - lip
In the Spherical-SDP Model, we take P_k = P_S(C_k) (the set of quadratic functions that induce spherical valid inequalities for C_k) at Step 1 of Algorithm 2.2. Here we say that a quadratic valid inequality p(x) ≤ 0 for C_k is spherical if p(·) : R^n → R is of the form p(x) = ||x − ξ||^2 − ρ^2 for some ξ ∈ R^n and some ρ > 0. Let {C_k} be the sequence of compact convex sets generated by Algorithm 2.2 with P_k = P_S(C_k) at every iteration. Then the sequence {C_k} enjoys the properties of (a) monotonicity, (b) detecting infeasibility, and (c) asymptotic convergence, which we stated in the Introduction. See Theorem 3.1 of [2]. Among these properties, only the monotonicity property is relevant to the succeeding discussions.
Let / be an arbitrary positive number, and \Xi an arbitrary nonempty compact convex
set containing C 0 . In Lemma 3.2, we first present an upper bound -
k for the number k of
iterations at which C k attain an (/; \Xi)-convex-relaxation of F , i.e., C k ae c.relax(F (/); \Xi)
holds for the first time. Then in Theorem 3.3, we will apply Corollary 2.5 to this bound to
derive an upper bound k for the number k of iterations at which C k attains an ffl-convex-
relaxation of F , i.e., C k ae c.hull(F (ffl)) for the first time. Here ffl denotes an arbitrarily given
positive number.
Let j=diam(\Xi). Define
1otherwise:
By definition, we see that 0 -
Lemma 3.1. Let k 2 f0; g. For 8- 2 \Xi, define
ae (-; F
Then
Proof: It suffices to show that C k+1 ae B(-; ae 0 (-)) for 8- 2 \Xi. For an arbitrarily fixed
ae (-; F
we see that
which will be used later. If ae (-; F (/)) - ae k then ae . In this case
the desired result follows from the monotonicity, i.e., C k+1 ae C k . Now suppose that
ae (-; F . Assuming that -
obviously see that -
Hence we only need to deal with the
case that
We will show that
for 9qf(\Delta; - fl; - q; -
in 3 steps (i), (ii) and
(iii) below. Then -
x 62 C k+1 follows from Lemma 2.1.
(i) The relations in (14) imply that -
x 62 F (/). Hence there exists a
quadratic function qf(\Delta; - fl; - q; -
(ii) By the definition of ae k , we also know that the quadratic function
is a member of P S (C k ). Let p(\Delta). By the definition of - nc , all the eigenvalues of
the Hessian matrix of the quadratic function qf(\Delta; -
are not less than \Gamma- nc .
Hence (15) follows.
(iii) Finally we observe that
(by the definition of -
Thus we have shown (16).
Lemma 3.2. Suppose that diam(F ) ? 0. Define
otherwise:
k, then C k ae c.relax(F (/); \Xi).
Proof: For every - 2 \Xi and
It suffices to show that if k - k then
ae k (- ae (-; F (/)) for 8- 2 \Xi: (17)
In fact, if (17) holds, then
By Lemma 3.1,
ae k+1 (- max
ae (-; F
for
This implies that
ae k (- max
ae (-; F
for
Hence, for each - 2 \Xi, if k satisfies the inequality
then ae k (- ae (-; F (/)). When - we see that -
by definition. Hence (18)
holds for Now assume that - nc
we see that
Hence, if -
equivalently, if
then (18) holds. We also see from the definition of - ffi that
Therefore if k/
holds. Consequently we have shown that if
holds.
Now we are ready to present our main result in this section.
Theorem 3.3. Assume (11) as in Corollary 2.5 and diam(F
otherwise:
It is interesting to note that the bound k̄ is proportional to the nonconvexity of P_F, and that it becomes small when the quadratic functions in P_F are almost convex.
Proof of Theorem 3.3: Choose positive numbers /, - 0 and a compact convex set
\Xi ae R n as in (12) of Corollary 2.5. Then c.relax(F (/); \Xi) ae c.hull(F (ffl)). Let
as in Lemma 3.2. By Lemma 3.2, if k - k then
On the other hand, we see that
Hence the inequality - appeared in the definition of - k can be rewritten as
We also see that
ffl=2
Therefore k
k.
4 The Rank-2 Models.
We discuss the Rank-2-SDP and the Rank-2-SILP Models simultaneously. Let
the ith unit coordinate vector in R n ;
for 8d 2 D and 8nonempty compact C ae R n ;
We define
e
Suppose that C is a subset of C_0. Then the two terms (α(d_0; C_0) − d_0^T x) and (α(d; C) − d^T x) appearing in the definition of r2f(·; d_0, d, C) are linear supporting functions for C_0 and C, respectively, and induce linear valid inequalities for C_0 ⊇ C and for C, respectively. Hence the minus of their product forms a rank-2 quadratic supporting function, and induces a rank-2 quadratic valid inequality for C:
r2f(x; d_0, d, C) = −(α(d_0; C_0) − d_0^T x)(α(d; C) − d^T x) ≤ 0.
In the Rank-2-SDP and the Rank-2-SILP Models of successive convex relaxation methods, we take P_k to be the corresponding set of rank-2 quadratic functions at Step 1 of Algorithms 2.2 and 2.3, respectively. Let {C_k}
be the sequence of compact convex sets generated by Algorithm 2.2 or Algorithm 2.3 with
Then the sequence fC k g enjoys properties (a) monotonicity, (b) detecting
infeasibility, and (c) asymptotic convergence, which we stated in the Introduction. We can
prove this fact by similar arguments used in the proof of Theorem 3.3 of [3]. See also Remark
3.8 of [3]. Let ffl be an arbitrarily given positive number. We will derive an upper bound k
in Theorem 4.4 at the end of this section for the number k of iterations at which C k attains
an ffl-convex-relaxation of F . The argument of the derivation of the bound k will proceed
in a similar way as in the Spherical-SDP Model, although it is more complicated. In the
derivation of the bound in the Spherical-SDP Model, it was a key to show the existence
of a quadratic function g(\Delta) 2 c.cone
satisfying the relations (15)
and (16). We need a sophisticated argument, which we develop in Section 4.1, to construct
a quadratic function g(\Delta) 2 c.cone
~
satisfying the corresponding
relations (25) and (26) in the Rank-2-SDP and the SILP Models.
4.1 Convex Cones of Rank-2 Quadratic Supporting Functions for
n-dimensional Balls.
In this subsection, we are concerned with c.cone
e
Note that if C k ae B(-; ae), r2f(\Delta; d
(B(-; ae)) and r2f (\Delta; d
then r2f(\Delta; d 0 ; d; B(-; ae)) induces a weaker valid inequality for C k than r2f (\Delta; d
the sense that
This fact will be utilized in our complexity analysis of the Rank-2 Models in the next
subsection. For simplicity of notation, we only deal with the case that
For
D, and
sin '
sin '
For
D,
\Gammae T
\Gammae T
Essentially the same functions as the quadratic functions f
were introduced in the paper [3], and the lemma below is a further
elaboration of Lemma 4.4 of [3] on their basic properties.
Lemma 4.1. Let ae ? 0, ng.
e
The Hessian matrix of the quadratic function f
coincides with
the n \Theta n matrix
.
\Gamma The Hessian matrix of the quadratic function f \Gamma
coincides with
the n \Theta n matrix \Gamma
.
(iii) Suppose that ffi 2 [0; 1] and ffiaew 2 C 0 . Then
(iv) Suppose that - 0,
Proof: We will only show the relations on f
because we can derive the
corresponding relations on
similarly. First, assertion (i) follows directly
from the definitions (19).
. The Hessian matrix of the quadratic function f
turns out
to be the symmetric part of the matrix
sin '
\Gamma(w
sin '
\Gamma2(sin ')(e i e T
Thus we have shown (ii)
(iii) By definition, we have that
(w
\Gammae T
(w cos
We will evaluate each term appeared in the right hand side. We first observe that
where the last inequality follows from
Hence
Similarly
Since ffiaew 2 C 0 , we also see that
Therefore
sin '
sin '
sin '
Here ~
(iv) We see from (iii) that
4.2 Complexity Analysis.
In the remainder of the section, we assume that - lip
1. Let fC k g be the sequence of compact convex sets generated by either Algorithm 2.2 or
Algorithm 2.3 with taking
k ) at each iteration. Let / be an arbitrary positive
number, and \Xi an arbitrary compact convex set containing C 0 . In Lemma 4.3, we derive an
upper bound - k for the number k of iterations at which C k attains a (/; \Xi)-convex-relaxation
of F , i.e., C k ae c.relax(F (/); \Xi) holds for the first time. Then we will combine Lemmas 2.4
and 4.3 to derive an upper bound k for the number k of iterations at which C k attains an
ffl-convex-relaxation of F . Here ffl denotes an arbitrarily given positive number.
Let j=diam(\Xi). Define
2/
It should be noted that -
- and -
' are not greater than 8
and -
8 , respectively, and also that
By definition, we see that
Lemma 4.2. Let k 2 f0; g. For 8- 2 \Xi, define
ae (-; F
Then
Proof: It suffices to show that C k+1 ae B(-; ae 0 (-)) for 8- 2 \Xi. For an arbitrarily fixed
ae (-; F
We may assume without loss of generality that Algorithms 2.2 and 2.3 with
taking
k ) at each iteration are invariant under any parallel transformation. See
[3] for more details. Since we see that
which will be used later. If ae (0; F (/)) - ae k then ae . In this case the
desired result follows from C k+1 ae C k . Now suppose that ae (0; F Assuming
that -
derive that -
x 62 C k , we obviously see -
because C k+1 ae C k . Hence we only need to deal with the case that
We will show that
for 9qf(\Delta; - fl; - q; -
steps (i), (ii) and (iii) below. Since
x 62 C k+1 in both cases of the Rank-2-SDP Model (Algorithm
2.2) and the SILP Model (Algorithm 2.3).
(i) The relations in (24) imply that -
x 62 F (/). Hence there exists a
quadratic function qf(\Delta; - fl; - q; -
x=k- xk, and . Then we see that
We will represent the symmetric matrix -
Q as the difference of two n \Theta n symmetric
matrices -
nonnegative elements such that
For 8x 2 R n ,
\Gammae T
\Gammae T
w) and f \Gamma
are defined as in (19). By construction, g(\Delta) 2
e
We also see that g(\Delta) has the same Hessian matrix -
Q as f(\Delta) does.
4.1. Hence the Hessian matrix of the quadratic function g(\Delta)
vanishes. Thus we have shown (25).
(iii) Finally we observe that
(by (iv) of Lemma 4.1)
- \Gamma/ (by (22)):
Therefore
Lemma 4.3. Suppose that diam(F ) ? 0. Define
k, then C k ae c.relax(F (/); \Xi).
Proof: For every - 2 \Xi and
It suffices to show that if k - k then
ae k (- ae (-; F (/)) for 8- 2 \Xi: (28)
In fact, if (28) holds, then
By Lemma 4.2,
ae k+1 (- max
ae (-; F
This implies that
ae k (- max
ae (-; F
for
Hence, for each - 2 \Xi, if k satisfies the inequality
then ae k (- ae (-; F (/)). When - nl - /
by (21) that -
Hence
(28) holds for Now assume that - nl ?
we see that
Hence, if -
equivalently, if
then (29) holds. We also see by the definition of -
and (20) that
0:
Therefore if k
holds. Consequently we have
shown that if k -
holds.
Theorem 4.4. Assume (11) as in Corollary 2.5 and diam(F
Note that the bound k̄ is proportional to the square of the nonlinearity of P_F, and that it becomes small when every function in P_F is almost linear.
Proof of Theorem 4.4: Choose positive numbers /, - 0 and a compact convex set
\Xi ae R n as in (12) of Corollary 2.5. Then c.relax(F (/); \Xi) ae c.hull(F (ffl)). Let
as in Lemma 4.3. By Lemma 4.3, if k - k then
On the other hand, we see that
(by
Hence the inequality - nl - /
appeared in the definition of - k can be rewritten
as
We also see that
ffl=2
Therefore k
k.
5 Concluding Discussions.
(A) In the Spherical-SDP Model, we have assumed that every ball with a given center that contains the kth iterate C_k is available. This assumption is equivalent to the assumption that, for a given ξ ∈ R^n, we can solve the norm maximization problem
maximize ||x − ξ|| subject to x ∈ C_k.   (30)
In fact, if we denote the maximum value of this problem by ρ(ξ; C_k), then we can represent the set of all balls containing C_k as {B(ξ, ρ) : ξ ∈ R^n, ρ ≥ ρ(ξ; C_k)}, and P_S(C_k) consists of the corresponding spherical quadratic functions. We can also observe from the argument in Section 3 that, among the spherical quadratic functions in P_S(C_k), only the supporting spherical quadratic functions with radius ρ(ξ; C_k) are relevant in constructing the next iterate C_{k+1}.
In the Rank-2-SDP and the Rank-2-SILP Models, we have assumed that the maximum
value ff(d; C k ) of the linear function d T x over x 2 C k is available for every d 2 D. From
the practical point of view, the latter assumption on the Rank-2-SDP and the Rank-2-
SILP Models looks much weaker than the former assumption on the Spherical-SDP Model
because a maximization of a linear function d T x over C k is a convex program while the
norm maximization problem (30) is a nonconvex program. But the latter assumption still requires knowing the maximum value α(d; C_k) for every d in the set D, which consists of infinitely many directions, so that these models are not yet implementable. Kojima and Tunçel [3] proposed discretization and localization techniques to implement the Rank-2-SDP and the Rank-2-SILP Models, and Takeda, Dai, Fukuda and Kojima [9] reported numerical results on practical versions of the Rank-2-SILP Model.
In Section 4, we have focused on the Rank-2-SILP Model and diverted its complexity analysis to the Rank-2-SDP Model, since the SDP relaxation is at least as tight as the semi-infinite LP relaxation. But this is somewhat loose. In particular, we should mention one important difference between the upper bounds required to attain an ε-convex-relaxation of F in Theorems 4.4 and 3.3: the upper bound in Theorem 4.4 depends on the nonlinearity of P_F, while the upper bound in Theorem 3.3 depends on the nonconvexity of P_F but not on the nonlinearity. This difference is critical when all the quadratic functions in P_F are convex but nonlinear. Here we state a complexity analysis of the Rank-2-SDP Model which leads to an upper bound depending on the nonconvexity of P_F but not on the nonlinearity of P_F.
' and ffi as
':
By definition, we see that - 0, 1 -
It is easily
seen that the assertion of Lemma 4.2 remains valid with the definition above. The proof is
quite similar except replacing the quadratic function g(\Delta) in (27) by
ii
By using similar arguments as in Lemma 4.3, Theorem 4.4 and their proofs, we consequently
obtain the following result: Assume (11) as in Corollary 2.5 and diam(F
Note that now the bound k̄ is proportional to the square of the nonconvexity of P_F, and that it becomes small when the quadratic functions in P_F are almost convex.
--R
"A lift-and-project cutting plane algorithm for mixed 0-1 programs,"
"Cones of matrices and successive convex relaxations of nonconvex sets,"
"Discretization and localization in successive convex relaxation methods for nonconvex quadratic optimization problems,"
"Moderate
"A reformulation-convexification approach for solving nonconvex quadratic programming problems, "
"Dual quadratic estimates in polynomial and boolean programming,"
"Towards the Implementation of Successive Convex Relaxation Method for Nonconvex Quadratic Optimization Prob- lems,"
Convex analysis and
--TR
--CTR
Akiko Takeda , Katsuki Fujisawa , Yusuke Fukaya , Masakazu Kojima, Parallel Implementation of Successive Convex Relaxation Methods for Quadratic Optimization Problems, Journal of Global Optimization, v.24 n.2, p.237-260, October 2002 | global optimization;convex relaxation;complexity;nonconvex quadratic program;semidefinite programming;sdp relaxation;lift-and-project procedure |
511444 | Cross-entropy and rare events for maximal cut and partition problems. | We show how to solve the maximal cut and partition problems using a randomized algorithm based on the cross-entropy method. For the maximal cut problem, the proposed algorithm employs an auxiliary Bernoulli distribution, which transforms the original deterministic network into an associated stochastic one, called the associated stochastic network (ASN). Each iteration of the randomized algorithm for the ASN involves the following two phases:(1) Generation of random cuts using a multidimensional Ber(p) distribution and calculation of the associated cut lengths (objective functions) and some related quantities, such as rare-event probabilities.(2) Updating the parameter vector p on the basis of the data collected in the first phase.We show that the Ber(p) distribution converges in distribution to a degenerated one, Ber(pd*), in the sense that someelements of pd*, will be unities and the rest zeros. The unity elements of pd* uniquely define a cut which will be taken as the estimate of the maximal cut. A similar approach is used for the partition problem. Supporting numerical results are given as well. Our numerical studies suggest that for the maximal cut and partition problems the proposed algorithm typically has polynomial complexity in the size of the network. | Introduction
Most combinatorial optimization problems are NP-hard; for example, deterministic and stochastic (noisy) scheduling, the traveling salesman problem (TSP), the maximal cut in a network, the longest path in a network, optimal buffer allocation in a production line, optimal routing in deterministic and stochastic networks and flow control, and optimization of topologies and configuration of computer communication and traffic systems. Well-established stochastic methods for combinatorial optimization problems are simulated annealing [1], [2], [7], [37], initiated by Metropolis [31] and later generalized in [19], [22] and [26], tabu search [16] and genetic algorithms [17]. For some additional references on both deterministic and stochastic combinatorial optimization see [3]-[4], [23]-[25], [30], [32] and [33]-[36].
Recent works on stochastic combinatorial optimization, which is also a subject of this
paper, include the method of Andradottir [5], [6], the nested partitioning method (NP)
[42], [43], the stochastic comparison method [18], and the ant colony optimization (ACO)
meta heuristic of Dorigo and colleagues [12], [15]. In most of the above methods a Markov
chain is constructed and almost sure convergence is proved by analyzing the stationary
distribution of the Markov chain.
We shall next briefly review the ACO meta heuristic algorithms of [8]-[15], [20], [41], [44]-[47], which try to mimic the behavior of ant colonies. It is known that ant colonies are able to solve shortest-path problems in their natural environment by relying on a rather simple biological mechanism: while walking, ants deposit on the ground a chemical substance called pheromone. Ants have a tendency to follow these pheromone trails. Within a fixed period, shorter paths between nest and food can be traversed more often than longer paths, and so they obtain a higher amount of pheromone, which, in turn, tempts a larger number of ants to choose them and thereby to reinforce them again. This behavior of real ants has inspired many researchers to use ant system models and algorithms in which a set of artificial ants cooperate via pheromone deposited either on the edges or on the vertices of the graph. Consider, for example, the ACS (ant colony system) approach of Dorigo, Maniezzo and Colorni [15] for solving the TSP, which can be described as follows: first, a number of "artificial ants", also called agents, are positioned randomly at some node of the graph. Then, each agent performs a series of random moves to neighbor nodes, controlled by suitably defined transition probabilities. Once an agent has visited all nodes, the length of the tour is evaluated, and the pheromone values assigned to the arcs of the path are increased by an amount depending on the length of the tour. This procedure is repeated many times. The probability of a transition along a specific arc is computed based on the pheromone value assigned to this arc and the length of the arc. The higher the pheromone value and the shorter the length of the arc, the higher the probability that the agent will follow this arc in its first move. Note also that, while updating the transition probabilities at each iteration of the ACS algorithm, Dorigo, Maniezzo and Colorni [15] also introduce the so-called evaporation mechanism, which discounts the pheromone values obtained at the previous iteration. Diverse modifications of the ACS algorithm, which presents a natural generalization of stochastic greedy heuristics, have been applied efficiently to many different types of discrete optimization problems and have produced very good results. Recently, the approach has been extended by Dorigo and Di Caro [12] to a full discrete optimization meta heuristic, called the Ant Colony Optimization (ACO) meta heuristic, which covers most of the well known combinatorial optimization problems. Gutjahr [20], [21] was the first to prove the convergence of the ACS algorithm.
This paper deals with the application of the cross-entropy (CE) method to the maximal cut and the partition problems. The CE method was first introduced in [28] for estimating probabilities of rare events in complex stochastic networks and was then applied in [38] to solving continuous multi-extremal and combinatorial optimization problems (COPs), namely, the shortest path (between two given nodes in a deterministic network or graph, where each edge has a given length), the longest path and the TSP. The CE method for combinatorial optimization employs an auxiliary random mechanism equipped with a set of parameters, which transforms the deterministic network into a stochastic one, called the associated stochastic network (ASN). Each iteration of the CE algorithm based on the ASN involves the following two phases:
1. Generation of random trajectories (walks) using an auxiliary random mechanism, like an auxiliary Markov chain with a transition probability matrix, and then calculation of the associated objective function.
2. Updating the parameters, like updating the elements of the transition probability matrix, on the basis of the data collected in the first phase.
Let the original deterministic network be denoted by the graph G = (V, E), where V is the set of nodes and E is the set of edges. Depending on the particular problem, we introduce the randomness into the ASN by associating some form of randomness either with (a) the edges E or with (b) the nodes V. More specifically, we distinguish between the so-called (a) stochastic edge networks (SEN) and (b) stochastic node networks (SNN).
(a) Stochastic edge networks (SEN). Here the trajectories are typically generated using a Markov chain with a transition probability matrix such that the transition from i to j in the chain uniquely defines the edge (i, j) in the network. To SEN one can readily reduce the TSP, the quadratic assignment problem, the deterministic and stochastic flow shop models, as well as some others. SEN were considered and treated earlier in [38].
(b) Stochastic node networks (SNN). Here the trajectories (walks) are generated using an n-dimensional discrete distribution, like the n-dimensional Bernoulli (Ber(p)) distribution, such that each component X_k of the random vector (rv) X = (X_1, ..., X_n) uniquely defines the node V_k of the network. To SNN one can readily reduce the maximal cut problem, the partition problem, the clique problem, the optimal buffer allocation in a production line, as well as some others. The application of CE to SNN, and in particular to the maximal cut and partition problems, is the subject of this paper.
Notice that terminology similar to SNN and SEN exists in Wagner, Lindenbaum and Bruckstein [47] for the graph covering problem, called vertex ant walk (VAW) and edge ant walk (EAW), respectively.
It is crucial to understand that the CE Algorithm 3.2 in [38] for SEN and Algorithm 4.1 in this paper for SNN are very similar. Their main difference is in the generation of the sample trajectories. As mentioned, in the former [38] the trajectories are typically generated using a Markov chain, while in the latter they are generated using, say, an n-dimensional Ber(p) distribution.
We shall show that the SNN Algorithm 4.1 for the maximal cut problem has the following properties (similar properties apply to the maximal partition problem):
1. The multi-dimensional Bernoulli distribution Ber(p) converges to a degenerated one, Ber(p*_d), in the sense that some parameters of p*_d will be unities and the rest will be zeros.
2. The unity elements of p*_d uniquely define a cut, which will be taken as an estimate of the maximal cut.
This perfectly matches the SEN Algorithm 3.2 in [38], where
1. The transition probability matrix converges to a degenerated one in the sense that only a single element in each row equals unity, while the remaining elements are equal to zero.
2. The unity elements of the degenerated matrix uniquely define the shortest tour, say in a TSP problem.
We shall finally show that Algorithm 4.1 in fact presents a simple modification of Algorithm 2.1 for the estimation of probabilities of rare events. Algorithm 2.1 is the same as Algorithm 1.2 in [38], and was adapted for the estimation of rare-event probabilities in SEN problems, like the stochastic TSP (Algorithm 3.1 in [38]); Algorithm 1.2 in [38] also formed the basis of Algorithm 3.2 of that paper.
To elaborate more on this, let
ℓ(x) = P{M(X) ≥ x} = E I_{M(X) ≥ x}   (1.1)
represent the probability of the rare event {M(X) ≥ x}, where M(X) is the sample performance of a stochastic system, X = (X_1, ..., X_n) is a random vector with a known distribution, and x is a fixed number chosen such that the probability ℓ(x) is very small.
In order to give an example of M(X) in (1.1), consider a graph whose edges have random lengths given by the X_i's. Then M(X) may be the length of the shortest path between two designated nodes of the graph, called the source and the sink. More formally, M(X) can then be defined as the minimum, over the complete paths from the source to the sink, of the total length of the edges on the path, where the minimum is taken over all p complete paths of the graph.
Similar to Algorithm 3.1 in [38], Algorithm 2.1 (in this paper) is an adaptive algorithm for the estimation of rare-event probabilities using importance sampling and cross-entropy. A distinguishing feature of both algorithms is that, when x is not fixed in advance, they automatically generate a sequence of tuples {(γ_t, v_t)} (see (2.5) and (2.8) below) ensuring that the levels γ_t increase towards x while the associated probabilities ℓ(γ_t) remain not too small, where t is the iteration number of Algorithm 2.1. The algorithm stops when γ_t ≥ x.
Turning to COPs, note that as soon as a deterministic COP is transformed into a stochastic one and M(X) (called the sample performance of the ASN, e.g., the cut value) is available, we can cast our ASN into the rare-event framework (1.1). (Recall that in the original formula (1.1) X is a natural random vector, while in the ASN it is an artificially constructed random vector, say a Bernoulli random vector.) We shall show that, in analogy to Algorithm 2.1, Algorithm 4.1 generates a sequence of tuples {(γ_t, p_t)} (see (4.6) and (4.7) below), which converges in distribution to a stationary point (γ*, p*_d), where γ* is the true maximal cut value and p*_d is the degenerated vector determining the cut corresponding to γ*. In the language of rare events this also means that Algorithm 4.1 is able to identify, with very high probability, a very small subset of the largest cut values. In what follows, we shall show that COPs can be solved simultaneously with the estimation of the probabilities of rare events for the ASN. This framework enables us to establish tight connections between rare events and combinatorial optimization.
As was also the case in [38], it is not our goal here to compare the efficiency of the proposed method with other well-established alternatives, such as simulated annealing, tabu search and genetic algorithms. This will be done elsewhere. Our goal is merely to establish some theoretical foundation for the proposed CE method, to demonstrate, both theoretically and numerically, the high speed of convergence of the proposed algorithm, and to promote our approach for further applications.
In Section 2 we review the adaptive algorithm for the estimation of rare-event probabilities, citing some material from [28], [39] and Section 1.2 of [38]. In Section 3 we present the maximal cut and partition problems and the ASN, for which we define a probability of a rare event exactly as in (1.1). We also present algorithms for the generation of random cuts and partitions. Section 4 presents our main CE Algorithm 4.1 for the maximal cut and partition problems. Here we also prove a theorem stating that, under certain conditions, the sequence of tuples {(γ_t, p_t)} associated with the ASN converges to the stationary point (γ*, p*_d). In Section 5 we give some modifications of our main CE algorithm; an important modification that we present is the fully automated CE algorithm. In Section 6 supporting numerical results are presented, and in Section 7 concluding remarks and directions for further research are given.
2 The Cross-Entropy Method for Probability of Rare
Events Estimation
Let f(z; v) be a multivariate density with parameter vector v. Consider estimating ℓ(x) = E_v I_{M(X) ≥ x}. The importance sampling estimate of ℓ(x) is given by
ℓ_N(x) = (1/N) Σ_{i=1}^{N} I_{M(X_i) ≥ x} W(X_i),   (2.1)
where W(z) = f(z; v) / f(z; v_new) is the likelihood ratio, the sample X_1, ..., X_N is drawn from f(z; v_new), and v_new is called the reference parameter.
To find the optimal reference parameter v*_new one can either minimize the variance of the importance sampling estimate ℓ_N(x) (see [29], [39]) or maximize the following cross-entropy criterion (see [38]):
max_{v_new} E_v [ I_{M(X) ≥ x} ln f(X; v_new) ].   (2.2)
Below we do the latter. Given a sample X_1, ..., X_N from f(·; v), we can estimate the optimal solution v*_new of (2.2) by the optimal solution of the program
max_{v_new} (1/N) Σ_{i=1}^{N} I_{M(X_i) ≥ x} ln f(X_i; v_new).   (2.3)
It is readily seen that the programs (2.2) and (2.3) are useful only in the case where ℓ(x) is not very small. In a rare-event context, (2.2) and (2.3) are useless, since owing to the rarity of the events {M(X_i) ≥ x}, the random variables I_{M(X_i) ≥ x}, i = 1, ..., N, and the associated derivatives of the objective of (2.3) will be zeros with high probability, provided the sample size N is small relative to the reciprocal of the rare-event probability ℓ(x). To overcome this difficulty we introduce an auxiliary sequence {γ_t}. We start by choosing an initial level γ_0 such that, under the original pdf f(z; v), the probability ℓ(γ_0) = E_v I_{M(X) ≥ γ_0} is not too small. More specifically, we set v_0 = v and sequentially iterate in both v_t and γ_t as outlined below.
(a) Adaptive estimation of γ_t. For a fixed v_{t-1}, derive γ_t from the following simple one-dimensional root-finding program:
E_{v_{t-1}} I_{M(X) ≥ γ_t} = ρ,   (2.4)
where ρ is a fixed, not too small, quantity. The stochastic counterpart of (2.4) is as follows: for fixed v_{t-1}, derive γ_t from the program
(1/N) Σ_{j=1}^{N} I_{M(X_j) ≥ γ_t} = ρ,   (2.5)
where X_1, ..., X_N is a sample from f(·; v_{t-1}). It is readily seen that the solution of (2.5) is
γ_t = M_{t,(⌈(1−ρ)N⌉)},   (2.6)
where M_{t,(j)} is the j-th order statistic of the sequence M_{t,j} = M(X_j), j = 1, ..., N.
(b) Adaptive estimation of v_t. For fixed γ_t, derive v_t from the solution of the program
max_v E_{v_{t-1}} [ I_{M(X) ≥ γ_t} W(X; v_0, v_{t-1}) ln f(X; v) ],   (2.7)
where W(X; v_0, v_{t-1}) = f(X; v_0)/f(X; v_{t-1}) and v_0 = v.
The stochastic counterpart of (2.7) is as follows: for fixed γ_t and v_{t-1}, derive v_t from the program
max_v (1/N) Σ_{j=1}^{N} I_{M(X_j) ≥ γ_t} W(X_j; v_0, v_{t-1}) ln f(X_j; v),   (2.8)
where X_1, ..., X_N is a sample from f(·; v_{t-1}).
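As a concrete illustration of the quantile estimate (2.6), the following short Python sketch (our own illustration, not code from the article) computes γ_t from a sample of performance values:

```python
import numpy as np

def level_estimate(performances, rho):
    """Return the (1 - rho) sample quantile, i.e. the order statistic
    M_{(ceil((1 - rho) * N))} of the performances, as in (2.6)."""
    m = np.sort(np.asarray(performances))          # increasing order statistics
    idx = int(np.ceil((1.0 - rho) * len(m))) - 1   # 1-based index -> 0-based
    return m[idx]

# Example: gamma_t for a sample of 10 performance values with rho = 0.1
print(level_estimate([3, 7, 2, 9, 5, 6, 4, 8, 1, 10], rho=0.1))
```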
The resulting algorithm for estimating ℓ(x) can be written as follows.
Algorithm 2.1:
1. Set v_0 = v. Generate a sample X_1, ..., X_N from the pdf f(x; v_0) and deliver the solution (2.6) of the program (2.5). Denote the initial solution by γ_0. Set t = 1.
2. Use the same sample X_1, ..., X_N as in (2.5) and solve the stochastic program (2.8). Denote the solution by v_t.
3. Generate a new sample X_1, ..., X_N from the pdf f(x; v_t) and deliver the solution of (2.6) for the program (2.5). Denote the solution by γ_t.
4. If γ_t ≥ x, set γ_t = x, solve the stochastic program (2.8) for γ_t = x, denote the solution by v_{t+1}, and stop; otherwise set t = t + 1 and reiterate from Step 2. After stopping, estimate the rare-event probability ℓ(x) using the estimate (2.1), with the reference parameter replaced by v_{t+1}. (A code sketch of this loop is given below.)
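To make the flow of Algorithm 2.1 concrete, here is a hedged Python sketch for the special case in which X has independent Bernoulli components; for this family the program (2.8) has a closed-form solution given by a likelihood-ratio-weighted average of the samples, in analogy with (4.4) below. The performance function M, the target level x and all numeric parameters are placeholders chosen for illustration only:

```python
import numpy as np
rng = np.random.default_rng(0)

def ce_rare_event(M, v, x, N=1000, rho=0.1, max_iter=50):
    """Sketch of Algorithm 2.1 for X with independent Ber(v_i) components."""
    n = len(v)
    v0 = np.array(v, dtype=float)          # original parameter vector
    v_t = v0.copy()                        # current reference parameter
    for _ in range(max_iter):
        X = (rng.random((N, n)) < v_t).astype(float)
        perf = np.array([M(xi) for xi in X])
        gamma_t = np.sort(perf)[int(np.ceil((1 - rho) * N)) - 1]   # (2.6)
        gamma_t = min(gamma_t, x)          # Step 4: never overshoot the target x
        I = (perf >= gamma_t).astype(float)
        # Likelihood ratio W(X; v0, v_t) for independent Bernoulli components
        W = np.prod(np.where(X == 1, v0 / v_t, (1 - v0) / (1 - v_t)), axis=1)
        w = I * W
        v_t = (w @ X) / w.sum()            # closed-form solution of (2.8)
        v_t = np.clip(v_t, 1e-6, 1 - 1e-6) # keep the parameters away from 0 and 1
        if gamma_t >= x:
            # final estimate (2.1) with the updated reference parameter
            Xf = (rng.random((N, n)) < v_t).astype(float)
            pf = np.array([M(xi) for xi in Xf])
            Wf = np.prod(np.where(Xf == 1, v0 / v_t, (1 - v0) / (1 - v_t)), axis=1)
            return np.mean((pf >= x) * Wf)
    return None   # the level x was not reached within max_iter iterations
```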
The monotonicity of the sequence {γ_t} is crucial for the convergence of Algorithm 2.1. It is proved in [27] that if X is a one-dimensional Gamma-distributed random variable with parameter v and M(·) is a monotonically increasing positive function of one variable, then the sequence {γ_t} generated by Algorithm 2.1 monotonically increases, provided N → ∞ at each iteration t. This theorem can be readily extended to multidimensional X and to some other distributions from the exponential family, such as the Normal, Beta, Poisson and discrete distributions. For more details see [27].
The following theorem, due to [28], states that if the sequence {γ_t} is monotonically increasing, then under some mild regularity conditions, as N → ∞, the sequence {γ_t} reaches x in a finite number of iterations.
Theorem 2.1. Let x be such that ℓ(x) > 0, and let h be the mapping which corresponds to an iteration of Algorithm 2.1, so that v_t = h(v_{t-1}); write γ(v) for the level associated with the reference parameter v through (2.4). Assume that the following conditions hold:
1. The sequence {γ(v_t)}, t = 0, 1, ..., is monotonically increasing.
2. The mapping v → γ(v) is continuous.
3. The mapping v → γ(v) is proper, i.e., if γ(v) belongs to some closed interval then v belongs to a compact set.
4. The mapping v → γ(v) is lower semi-continuous.
Then there exists a t < ∞ such that
lim_{N→∞} P{γ_t ≥ x} = 1.
Proof: Given in [27].
It readily follows from the above that if x is not fixed in advance, Algorithm 2.1 will automatically generate two sequences, {γ_t} and {v_t}, such that the levels γ_t keep increasing while the associated probabilities ℓ(γ_t) remain not too small.
3.1 Cuts, Partition and the Associated Stochastic Networks
The maximal cut problem in a graph can be formulated as follows. Given a graph
E) with set of nodes set of edges E between the nodes,
partition the nodes of the graph into two arbitrary subsets V 1 and V 2 such that the sum of
the weights of the edges going from one subset to the other is maximized. Mathematically
it can be written as
f ~
where
~
and denotes the symmetric matrix of weights (distances) of the edges, which is
assumed to be known.
The partition problem can be dened similarly. The only dierence between the maximal
cut and the partition problem is that in the former the length, say , of the vector
while in the latter it is xed.
Note that solving (3.1), for each one needs to decide whether
Since the matrix (L ij ) is symmetric, that is in order to avoid
duplication we shall assume without loss of generality that
The program (3.1) can also be written as
max_{k} M_k,   (3.3)
where M_k = Σ_{i ∈ V_1^k} Σ_{j ∈ V_2^k} L_ij is the length (value) of the k-th cut, called the objective function, (V_1^k, V_2^k) is the k-th cut, X is the set of all possible cuts in the graph, and |X| is the cardinality of the set X. We denote the maximal cut and the maximal cut value (the optimal value of the objective function) by (V_1*, V_2*) and γ*, respectively.
It is readily seen that, with node 1 fixed in V_1, the total number of cuts is |X| = 2^{n−1}. Similarly, the total number of partitions with the size of V_1 being fixed and equal to n/2 (for simplicity assume n to be even) is given by the corresponding binomial coefficient.
Figure 3.1: A 6-node network.
As an example, consider Figure 3.1 and the associated symmetric 6 × 6 distance matrix (L_ij), with the corresponding cut and partition cardinalities |X| given by the formulas above. For any particular cut (V_1, V_2) of this network, the function value (cut length) M is obtained by summing the weights L_ij over all pairs with i ∈ V_1 and j ∈ V_2, so that different cuts generally yield different values.
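A minimal sketch of this computation in Python, using a small illustrative weight matrix (the matrix below is hypothetical and is not the actual example matrix of Figure 3.1):

```python
import numpy as np

def cut_value(x, L):
    """Cut length M(X) = sum of L[i, j] over i in V1 (x_i = 1) and j in V2 (x_j = 0)."""
    x = np.asarray(x, dtype=float)
    return float(x @ L @ (1.0 - x))

# Hypothetical symmetric 6x6 weight matrix and the cut V1 = {1, 3, 4}, V2 = {2, 5, 6}
L = np.array([[0, 2, 1, 3, 0, 0],
              [2, 0, 4, 0, 1, 0],
              [1, 4, 0, 2, 0, 3],
              [3, 0, 2, 0, 2, 1],
              [0, 1, 0, 2, 0, 2],
              [0, 0, 3, 1, 2, 0]], dtype=float)
x = np.array([1, 0, 1, 1, 0, 0])   # x_i = 1 means node i belongs to V1
print(cut_value(x, L))
```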
As mentioned before, in order to generate the stationary tuple (γ*, p*_d) (see Algorithm 4.1 below), we need to transform the original (deterministic) network into an associated stochastic one. To do so for the maximal cut problem, we associate with the n-dimensional vector representing the cut an n-dimensional random vector (rv) X = (X_1, ..., X_n). Each component of X is independent and Bernoulli distributed, i.e., X_k ~ Ber(p_k), and has the interpretation that node k belongs to V_1 if X_k = 1 and to V_2 if X_k = 0. If not stated otherwise, we set p_1 = 1 and p_k = 1/2, k = 2, ..., n.
Then, for the maximal cut problem, each iteration of our main Algorithm 4.1 comprises the following two phases:
(a) Generation of random cuts (see Algorithm 3.1 below) from the ASN using the Ber(p) distribution, and calculation of the associated sample performance M(X).
(b) Updating the sequence of tuples {γ_t, p_{t+1}} at each iteration of Algorithm 4.1, where p_{t+1} is the parameter vector of a Ber(p_{t+1}) distribution having independent components. This is the same as updating the sequence of tuples {γ_t, v_{t+1}} at each iteration of Algorithm 2.1. Note that as soon as the auxiliary discrete distribution is defined, the sequence {γ_t, p_{t+1}} can be viewed as a particular case of the sequence {γ_t, v_{t+1}} with v_{t+1} = p_{t+1}.
For the case of the partition problem, the generation of random partitions is performed by a different algorithm (Algorithm 3.2 below), but Algorithm 4.1 for updating the sequence of tuples {γ_t, p_{t+1}} is the same for both problems.
We consider the two phases (a) and (b) separately. More specifically, the rest of this section deals with phase (a), while Section 4 deals with phase (b), where Algorithm 4.1 is presented.
3.2 Random Cut Generation
The algorithm for generating random cuts in the ASN is based on the Ber(p) distribution with independent components and can be written as follows.
Algorithm 3.1 (Random cut generation):
1. Generate an n-dimensional random vector X = (X_1, ..., X_n) with independent components X_k ~ Ber(p_k).
2. From X construct two vectors, V_1 and V_2, such that V_1 contains the set of indices corresponding to unities and V_2 contains the set of indices corresponding to zeros. Note that the number of unities (between 0 and n) is a random variable.
3. Calculate the sample function M(X) (see also (3.1)) associated with the random cut (V_1, V_2). (A short code sketch of this procedure is given below.)
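A short Python sketch of Algorithm 3.1 (our illustration; the Bernoulli parameters and the random weight matrix below are placeholders, with p_1 = 1 as in the text):

```python
import numpy as np
rng = np.random.default_rng(1)

def random_cut(p, L):
    """Algorithm 3.1: draw X ~ Ber(p), split the nodes into V1/V2 and return M(X)."""
    x = (rng.random(len(p)) < p).astype(float)
    V1 = np.flatnonzero(x == 1)            # indices with X_k = 1
    V2 = np.flatnonzero(x == 0)            # indices with X_k = 0
    M = float(x @ L @ (1.0 - x))           # cut length, as in (3.1)
    return x, V1, V2, M

n = 6
p = np.array([1.0] + [0.5] * (n - 1))      # p_1 = 1, p_k = 1/2 otherwise
L = rng.integers(0, 5, size=(n, n)).astype(float)
L = np.triu(L, 1); L = L + L.T             # a random symmetric weight matrix
print(random_cut(p, L))
```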
Consider the example in Fig. 3.1 and assume that a particular cut (V_1*, V_2*) is the maximal one. In this case, starting from an arbitrary 6-dimensional vector p with p_1 = 1 and p_k = 1/2, k = 2, ..., 6, the goal of Algorithm 4.1 is to converge, after a finite number of iterations, to the degenerated Bernoulli distribution with a parameter vector p*_d whose unity components correspond exactly to the nodes of V_1*.
Algorithm 3.1 (along with Algorithm 4.1 below) can be readily extended to randomly partitioning the nodes V of the graph G = (V, E) into r ≥ 2 subsets such that the sum of the total weights of all edges going from one subset to another is maximized. In this case one can follow the basic steps of Algorithm 3.1 using n r-point distributions instead of n 2-point Ber(p_j) distributions.
3.3 Random Partition Generation
Here, unlike the independent Bernoulli case, the sample will be generated using a sequence of m (recall that m is the number of nodes we want in V_1) dependent discrete distributions, denoted collectively as F_m(p).
The goal of F_m(p) is to generate an associated random walk of length m, i.e., (i_1, ..., i_m), through the nodes of the network (each i_k takes values in the set {1, ..., n}), such that the nodes visited in the random walk are not repeated. These nodes will then constitute the set V_1.
Let M^(1) denote the discrete distribution defined by the probability vector π^(1) = (π_1^(1), ..., π_n^(1)) obtained by normalizing the parameter vector p.   (3.11)
Clearly, M^(1) is the discrete distribution in which the probability of selecting node i is π_i^(1). The node thus selected is denoted by i_1. The sequence of distributions M^(2), ..., M^(m) will be derived recursively starting from M^(1) and will be used for generating i_2, ..., i_m.
Algorithm 3.2 (Random partition generation):
1. Generate i_1 from the discrete pdf M^(1) and set X_{i_1} = 1.
2. Derive M^(2) from M^(1) as follows. First eliminate the element π_{i_1}^(1) from π^(1), and then normalize the remaining (n−1)-dimensional vector. Call the resulting vector π^(2), and let M^(2) be the discrete distribution defined by π^(2).
3. Generate i_2 from M^(2) and set X_{i_2} = 1.
4. Proceed with Steps 2 and 3 recursively m − 2 more times, setting X_{i_k} = 1 for each k = 3, ..., m. Here i_k is generated from M^(k), which is derived from M^(k−1) as follows. Eliminate the element corresponding to the node i_{k−1} from π^(k−1), and then normalize the remaining (n − k + 1)-dimensional vector. Call the resulting vector π^(k), and let M^(k) be the discrete distribution defined by π^(k).
5. Set the remaining n − m elements of X equal to 0.
6. Calculate the objective function M(X) as in (3.8). (A code sketch follows below.)
Note that the algorithm does not assume X_1 = 1. Its modification with X_1 = 1 is straightforward: one only needs to replace (3.11) by a distribution that selects node 1 with probability one, while the rest is similar.
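The following Python sketch mirrors Algorithm 3.2: sequential sampling of m distinct nodes with probabilities proportional to the remaining entries of p (our illustration, with placeholder parameters):

```python
import numpy as np
rng = np.random.default_rng(2)

def random_partition(p, m, L):
    """Algorithm 3.2: draw m distinct nodes for V1 and return X and M(X)."""
    n = len(p)
    x = np.zeros(n)
    weights = np.array(p, dtype=float)
    for _ in range(m):
        pi = weights / weights.sum()       # current (renormalized) distribution M^(k)
        i = rng.choice(n, p=pi)            # node i_k
        x[i] = 1.0
        weights[i] = 0.0                   # eliminate the chosen node
    M = float(x @ L @ (1.0 - x))           # objective value of the partition
    return x, M

n, m = 6, 3
p = np.full(n, 0.5)
L = rng.integers(0, 5, size=(n, n)).astype(float)
L = np.triu(L, 1); L = L + L.T
print(random_partition(p, m, L))
```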
4 The Main Algorithm
We assume below that we are given algorithms for generating random cuts and random partitions and that we are able to calculate the sample function M(X). As mentioned before, we shall cast the ASN into the rare-event context.
4.1 The Rare-Event Framework
Consider (1.1) for the ASN. Assume for a moment that x is "close" to the unknown true maximal cut value γ*, which represents the unknown optimal solution of the programs (3.1) and (3.3). With this in mind, we shall adapt the basic single-iteration and multiple-iteration programs (see (2.2)-(2.3) and (2.4)-(2.8), respectively) used in Algorithm 2.1 to the ASN, and in particular to the maximal cut, bearing in mind that X ~ Ber(p). Similar to (2.2) and (2.3) (the single-iteration program), we have
max_{p_new} E_p [ I_{M(X) ≥ x} ln f(X; p_new) ]   (4.1)
and
max_{p_new} (1/N) Σ_{k=1}^{N} I_{M(X_k) ≥ x} [ Σ_{i: X_{i,k}=1} ln p_{new,i} + Σ_{i: X_{i,k}=0} ln(1 − p_{new,i}) ],   (4.2)
respectively. Here it is assumed that the expectation is taken with respect to Ber(p) and that X_1, ..., X_N is a sample of X.
The optimal solutions of (4.1) and (4.2) can be derived by a straightforward application of the Lagrange multipliers technique. They are
p*_{new,r} = E_p [ I_{M(X) ≥ x} X_r ] / E_p [ I_{M(X) ≥ x} ]   (4.3)
and
p_{new,r} = Σ_{k=1}^{N} I_{M(X_k) ≥ x} X_{r,k} / Σ_{k=1}^{N} I_{M(X_k) ≥ x},   (4.4)
respectively. The expectation in the numerator of (4.3) is the expectation over all possible cuts for which the node V_r belongs to V_1, and similarly for the numerator of (4.4) (there the sum is over all generated cuts). Also, in (4.4), we set p_{new,r} to some arbitrary value, say 1/2, if Σ_{k=1}^{N} I_{M(X_k) ≥ x} = 0, realizing that the chance of the latter shrinks to 0 as N → ∞. Let p*_new = (p*_{new,1}, ..., p*_{new,n}).
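In code, the sample update (4.4) is a single weighted average; the sketch below (our illustration) assumes the samples are stored row-wise in a matrix X and their cut values in a vector perf:

```python
import numpy as np

def bernoulli_update(X, perf, x, default=0.5):
    """Estimate p_new by (4.4): average of the samples whose value reaches the level x."""
    I = (np.asarray(perf) >= x)
    if I.sum() == 0:
        return np.full(X.shape[1], default)   # convention used in the text
    return X[I].mean(axis=0)
```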
Assume that the optimal solution (V_1*, V_2*) of the program (3.1) is unique and consider the probability ℓ(x) in (1.1). It is obvious that if x > γ*, then ℓ(x) = 0 irrespective of the choice of the parameter vector p in the Bernoulli distribution. We shall first present an important observation for the case x = γ*. Let p*_d denote a degenerated probability vector, i.e., one containing a combination of unities and zeros. Moreover, let the components of p*_d define the unique maximal cut, in the sense that the unity components of p*_d correspond to the components of V_1* and the zero components of p*_d correspond to the components of V_2*. We shall call p*_d the optimal degenerated vector (ODV).
Proposition 4.1. Assume that the maximal cut (V_1*, V_2*) is unique. Let X be a random vector with independent components distributed Ber(p). Then for x = γ* the optimal vector p*_new in (4.3) reduces to the ODV p*_d irrespective of p, provided every cut is generated with positive probability; the corresponding statement for (4.4) holds as N → ∞.
Proof: The first part follows immediately from (4.3). To clarify, let X* be the vector from the degenerated Bernoulli distribution Ber(p*_d), which uniquely defines the maximal cut (V_1*, V_2*). Then for any random vector X and the corresponding cut (V_1, V_2) we must have I_{M(X) ≥ γ*} = 1 only if the cut coincides with the unique maximal one, that is, only if X = X*; substituting this into (4.3) proves the first part of the proposition. For the second part, note that 0 < P{M(X) ≥ γ*}, since the maximal cut is generated with positive probability; the second part then follows from the fact that (by the same reasoning as for the first part) p_{new,r} in (4.4) equals either p*_d,r or 1/2 (using our convention), together with the fact that the probability of the event Σ_k I_{M(X_k) ≥ γ*} = 0 vanishes as N → ∞.
It is not difficult to verify that the variance of the estimate ℓ_N(x; p_new) in (2.1) (note that v is now replaced by p, and similarly for v_new), with x = γ* and p_new = p*_d, equals zero.
As we already mentioned, we shall approximate the unknown true solution (γ*, p*_d) by the sequence of tuples {(γ_t, p_t)} generated by Algorithm 4.1 below.
4.2 Main Algorithm
To proceed, note again that the single-iteration program (4.2) and its optimal solution p_{new,r} in (4.4) are of little practical use, since for an arbitrary vector p in Ber(p) all the indicators I_{M(X_j) ≥ x}, j = 1, ..., N, will be zeros with very high probability, provided x is close to the optimal value γ* and the sample size N is small relative to the reciprocal of the rare-event probability P{M(X) ≥ x}.
To overcome this difficulty we use Algorithm 2.1, with p_t playing the role of v_t.
(a) Adaptive estimation of γ_t. For a fixed p_{t−1}, derive γ_t from the root-finding program
E_{p_{t−1}} I_{M(X) ≥ γ_t} = ρ,   (4.6)
whose stochastic counterpart is solved exactly as in (2.5)-(2.6).
(b) Adaptive estimation of p_t. For a fixed γ_t, derive p_t = (p_{t,1}, ..., p_{t,n}) from the solution of the program
max_p (1/N) Σ_{j=1}^{N} I_{M(X_j) ≥ γ_t} [ Σ_{i: X_{i,j}=1} ln p_i + Σ_{i: X_{i,j}=0} ln(1 − p_i) ].   (4.7)
Note again that both programs (4.6) and (4.7) can be solved analytically. The solution of (4.6) is given by (2.6). The solution of (4.7) (see (4.4)) can be written as
p_{t,r} = Σ_{j=1}^{N} I_{M(X_j) ≥ γ_t} X_{r,j} / Σ_{j=1}^{N} I_{M(X_j) ≥ γ_t},   (4.8)
provided Σ_j I_{M(X_j) ≥ γ_t} > 0; as before, we set p_{t,r} = 1/2 otherwise, but this case never happened in the examples we tried.
The resulting algorithm for estimating γ* and the ODV p*_d is as follows.
Algorithm 4.1:
1. Choose an initial parameter vector p in the ASN (if not stated otherwise, p_1 = 1 and p_k = 1/2, k = 2, ..., n). Generate N random vectors X_1, ..., X_N and the corresponding cuts using Algorithm 3.1. Deliver the solution (2.6) of the program (4.6). Denote the initial solution by γ_0 and set t = 1.
2. Use the same N vectors X_1, ..., X_N and deliver the solution (4.8) of the program (4.7). Denote the solution by p_t.
3. Generate N new random vectors X_1, ..., X_N from Ber(p_t), with the corresponding cuts obtained using Algorithm 3.1, and deliver the solution γ_t of (2.6) for the program (4.6).
4. If, for some t ≥ k and some fixed small integer k,
γ_t = γ_{t−1} = ... = γ_{t−k},   (4.9)
stop and deliver γ_t as an estimate of γ*. Otherwise, set t = t + 1 and go to Step 2.
As an alternative to the estimate γ_t of γ* and to the stopping rule in (4.9), one can consider the following:
4'. If, for some t ≥ k and some fixed small integer k, the running maximum max_{0 ≤ s ≤ t} γ_s has not changed during the last k iterations, deliver
max_{0 ≤ s ≤ t} γ_s   (4.10)
as an estimate of γ*. Otherwise, set t = t + 1 and go to Step 2. (A code sketch of the whole loop is given below.)
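Putting the pieces together, here is a hedged Python sketch of Algorithm 4.1 for the maximal cut problem. The stopping constant k_stop, the sample size N and the quantile parameter rho are placeholders, and the smoothing of Remark 4.1 below is omitted here for clarity:

```python
import numpy as np
rng = np.random.default_rng(3)

def ce_maxcut(L, N=500, rho=0.1, k_stop=5, max_iter=200):
    """Sketch of Algorithm 4.1: adaptive levels (4.6) and Bernoulli updates (4.8)."""
    n = L.shape[0]
    p = np.array([1.0] + [0.5] * (n - 1))                  # p_1 = 1, p_k = 1/2
    gammas = []
    for _ in range(max_iter):
        X = (rng.random((N, n)) < p).astype(float)
        X[:, 0] = 1.0                                      # keep node 1 in V1
        perf = np.einsum('ij,jk,ik->i', X, L, 1.0 - X)     # cut values M(X_j)
        gamma = np.sort(perf)[int(np.ceil((1 - rho) * N)) - 1]   # (2.6)/(4.6)
        I = perf >= gamma
        p = X[I].mean(axis=0)                              # update (4.8)
        gammas.append(gamma)
        if len(gammas) > k_stop and len(set(gammas[-k_stop - 1:])) == 1:   # (4.9)
            break
    best = X[np.argmax(perf)]
    return gamma, best, p
```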
Remark 4.1 (Smoothed probability vectors). Instead of p_t (see (4.8)) we typically use its smoothed version
~p_t = α p_t + (1 − α) ~p_{t−1},   (4.11)
where 0.5 < α ≤ 1. Clearly, for α = 1 we have ~p_t = p_t. The reason for using ~p_t instead of p_t is twofold: (a) to smooth out the values of p_t, and (b) to reduce the probability that some components of p_t will be zeros or unities, especially in the early iterations. It can be readily seen that, starting with, say, p_k = 1/2, k = 2, ..., n, and using α < 1, we have 0 < ~p_{t,i} < 1 for all t, while with α = 1 it may happen that p_{t,i} equals 0 or 1 for some indices i ≥ 2. We also found empirically that for 0.7 ≤ α ≤ 0.9 Algorithm 4.1 is typically more accurate than for α = 1, in particular for noisy COPs. In our numerical studies we used α = 0.9. Note that, according to the ant-based terminology [12], [15], we can call α and ~p_t the evaporation parameter and the pheromone vector, respectively.
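In the sketch above, (4.11) amounts to one extra line applied after the update (4.8); a minimal illustration:

```python
import numpy as np

def smoothed_update(p_new, p_prev, alpha=0.9):
    """Smoothed update (4.11): convex combination of the new and previous vectors."""
    return alpha * np.asarray(p_new) + (1.0 - alpha) * np.asarray(p_prev)
```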
Remark 4.2 (Relation to root finding). As mentioned, Algorithm 4.1 may be viewed as a simple modification of Algorithm 2.1. More precisely, it is similar to Algorithm 2.1 in the sense of finding the root x = γ* (which is associated with the optimal solution p*_d) rather than the rare-event probability ℓ(x) itself. This in turn implies that Algorithm 4.1 involves neither likelihood ratio calculations nor estimation of probabilities of rare events. For that reason, Algorithm 2.1 and Algorithm 4.1 have different stopping rules.
At this point it is worth mentioning the following theorem.
Theorem 4.1. Assume that the maximal cut (V_1*, V_2*) is unique. If the conditions of Theorem 2.1 hold, then there exists a t < ∞ such that the sequence of tuples {(γ_t, p_t)} generated by Algorithm 4.1 converges in distribution to the constant tuple (γ*, p*_d) as N → ∞, irrespective of the choice of p, provided every cut is generated with positive probability.
Proof: According to Theorem 2.1, setting x = γ*, there exists a t < ∞ such that lim_{N→∞} P{γ_t = γ*} = 1. Using (4.8) and a reasoning similar to that of Proposition 4.1, we get that p_{t+1} coincides with p*_d whenever γ_t = γ*. Combining the two previous facts, we conclude that there exists a t < ∞ such that lim_{N→∞} P{p_{t+1} = p*_d} = 1, thus proving the statement of the theorem.
Remark 4.3. To apply this theorem to the maximal cut and partition problems one further needs to prove that the conditions of Theorem 2.1 hold. For such results, in settings other than the maximal cut and partition problems, one is referred to [27] (see, e.g., Proposition 3.1 in [27]).
Remark 4.4. It follows from Theorem 4.1 and Proposition 4.1 that (irrespective of the initial choice of p) the multiple-iteration procedure involving the sequence {p_t} converges to the same degenerated parameter vector p*_d as does the single-iteration procedure.
5 Modifications and Enhancements to the Main Algorithm
5.1 Alternative Sample Functions
A natural modification of the two-stage iterative procedure of Algorithm 4.1 would be to update p_t (in the second stage; see (4.7)) using some alternative sample functions rather than the indicators I_{M_i ≥ γ_t}, where M_i is the abbreviated notation for M(X_i) that is used, for example, in (4.7).
Consider first a maximization problem. As alternatives to I_{M_i ≥ γ_t} one could use (in (4.7)):
1. A power (polynomial) function of M_i, governed by a parameter β > 0.
2. A Boltzmann-type function of the form exp(M_i / β), where β > 0.
3. A linear loss function with an insensitive zone.
4. The Huber loss function.
We found that the above modifications typically increase the convergence speed of Algorithm 4.1 by a factor of two to four. The reason is that using indicators we put an equal weight on each of the top ⌈ρN⌉ values of M_1, ..., M_N (see (4.8)), while in the modified versions we put a weight proportional to the respective value of the sample function.
Consider now a minimization problem. Recall that in this case we use in Algorithm 4.1 the bottom ⌈ρN⌉ values of M_1, ..., M_N instead of the top ones, since now we use the indicator function I_{M < γ_t}. As a simple modification of the sample function I_{M < γ_t} we could use the top ⌈ρN⌉ values of 1/M_1, ..., 1/M_N (or, say, the top ⌈ρN⌉ values of their powers). We found, however, that this policy (modification) typically results in worse performance of the CE Algorithm 4.1, regardless of β. The intuitive explanation is that the function 1/M is quite nonlinear and "non-symmetric" relative to M (used in the maximization). To find a "symmetric" counterpart of the M function for minimization, we can use, instead of the sample M_1, ..., M_N, the reflected sequence [M_(N) + M_(1)] − M_1, ..., [M_(N) + M_(1)] − M_N. We can then use the top ⌈ρN⌉ samples of this new sequence (or, for example, raise each element of this sequence to the power β, etc.).
One can also use a Boltzmann-type function, which can be written (in analogy to (5.2)) as exp(−M_i / β), where again β > 0. Similarly for the functions (5.3) and (5.4).
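Replacing the indicator in (4.7) by such a score turns the closed-form update (4.8) into a score-weighted average. A hedged sketch follows; the Boltzmann weight and the value of beta are illustrative choices, not parameters prescribed by the article:

```python
import numpy as np

def weighted_update(X, perf, beta=1.0):
    """Score-weighted analogue of (4.8) with Boltzmann weights exp(M_i / beta)."""
    perf = np.asarray(perf, dtype=float)
    w = np.exp((perf - perf.max()) / beta)   # shift by the maximum for numerical stability
    return (w @ X) / w.sum()
```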
5.2 Single-Stage versus Two-Stage CE Algorithms
Here the term single-stage means that at each iteration the CE algorithm updates p_t alone, i.e., it does not involve the program (4.6) and thus the sequence {γ_t}. In particular, in a maximization problem this would imply updating the vector p_t (similarly to (4.8)) but taking the entire (rather than the truncated) sample M_1, ..., M_N (or the entire sample of their transformed values). For example, using the entire sample M_1, ..., M_N we obtain, instead of (4.7), the stochastic program
max_p (1/N) Σ_{j=1}^{N} M_j [ Σ_{i: X_{i,j}=1} ln p_i + Σ_{i: X_{i,j}=0} ln(1 − p_i) ].
Such a single-stage version would simplify Algorithm 4.1 substantially. The disadvantage of using single-stage sample functions is that it typically takes too long for the algorithm to converge, since the large number of "not important" (untruncated) trajectories slows down dramatically the convergence of {p_t} to p*_d. We found numerically that the single-stage CE algorithm is much worse than its two-stage counterpart, in the sense that it is both less accurate and more time consuming than the original two-stage Algorithm 4.1. Practically, we found that it does not work for maximal cut problems of size n > 30.
Hence it is important for the CE method to use both stages, as in Algorithm 4.1. This is also one of the major differences between CE and the ant-based methods of Dorigo, Maniezzo and Colorni [15] and others, where single-stage sample functions (for updating p_t alone) are used.
5.3 Fully Automated CE Algorithm
Here we present a modification of Algorithm 4.1 in which both ρ and N in (4.6) and (4.7) are updated adaptively in t. We call this modification of CE the fully automated CE (FACE) algorithm. In addition, the FACE algorithm is able to identify some "difficult" and pathological problems on which it fails. For such problems, the FACE algorithm stops if the sample size N_t becomes prohibitively large while there is still no improvement in the objective function.
Let M_{t,(1)} ≥ M_{t,(2)} ≥ ... ≥ M_{t,(N_t)} be the order statistics corresponding to M(X_1), ..., M(X_{N_t}) (that is, to the values used while updating the tuple {γ_t, p_{t+1}}), arranged in decreasing order (note that this is a departure from the convention we used earlier; we have done this to simplify the notation used below). Notice from (4.6) that γ_t = M_{t,(⌈ρ_t N_t⌉)}.
The main assumption in the FACE algorithm is that ⌈ρ_t N_t⌉ = cn for all t, where c is a fixed positive constant in the interval 0.01 ≤ c ≤ 1. Thus cn corresponds to the number of samples among the M_{t,(j)} that lie in the upper 100ρ_t % of the samples. We will refer to the latter as elite samples. Our second parameter is the smoothing constant α used in (4.11). Note that for c close to 1, α may be chosen close to unity, say 0.99; in contrast, if c is chosen close to its lower end, α should be smaller. We found numerically that several combinations of (c, α) in these ranges are good choices. If not stated otherwise, we shall use one such fixed combination, and we bear in mind a maximization problem, in particular the maximal cut problem.
As compared to Algorithm 4.1, we introduce into the FACE algorithm the following two enhancements:
1. Prior to the t-th iteration we keep a fixed portion (between 0 and 1) of the elite samples of the previous iteration and we incorporate them into the current iteration (with N and ρ replaced by N_t and ρ_t, respectively). More precisely, while implementing (4.6) and (4.7), in addition to the current sample M(X_1), ..., M(X_{~N_t}) of size ~N_t generated from Ber(~p_t), we also include the retained elite samples from the (t − 1)-th iteration. It is readily seen that when the retained portion equals 1, the sequence {γ_t, p_{t+1}} in (4.6) and (4.7) is based on the elite samples accumulated during all t iterations. We use the portion 0.2 in the numerical experiments reported in this paper, even though other values in this range gave similar results.
2. Let ~r be a constant such that ~r ≥ 1. For each iteration t of the FACE algorithm we design a sampling plan which ensures that
M_{t,(1)} > M_{t−~r,(1)}.   (5.8)
Note that for ~r = 1, (5.8) implies improvement of the maximal order statistic M_{t,(1)} at each iteration. If not stated otherwise, we shall take ~r = 1.
The first ~r iterations (i.e., iteration number 0 to iteration number ~r − 1) of the FACE algorithm coincide with those of Algorithm 4.1, where as before we take the initial parameter vector p and a moderate initial sample size N_0. We then proceed as follows.
Algorithm 5.1 (FACE algorithm):
1. At iteration t, t = ~r, ~r + 1, ..., generate a sample of size ~N_t from Ber(~p_t). Combine these samples with the retained elite samples obtained up to iteration t − 1, giving a total sample size of N_t. Denote the combined values by M_{t,j}, j = 1, ..., N_t, and their decreasing order statistics by M_{t,(j)}.
2. If (5.8) holds, proceed with (4.6)-(4.7) using the N_t samples mentioned in Step 1.
3. If (5.8) is violated, check whether or not the stopping condition (5.9) holds. If it does, stop and deliver M_{t,(1)} as an estimate of the optimal solution; we call such an M_{t,(1)} a reliable estimate of the optimal solution. Otherwise, go to Step 4.
4. Increase the sample size ~N_t until either (5.8) holds or ~N_t becomes very large (exceeding a prescribed maximal sample size N**). In the former case, set N_t accordingly and proceed again with (4.6) and (4.7). In the latter case, stop and deliver M_{t−1,(1)} as an estimate of the optimal solution. We call such an M_{t−1,(1)} an unreliable estimate of the optimal solution. (A code sketch of this sampling plan is given below.)
Remark 5.1 In Step 3, we use c in the numerical experiments reported in this
paper, even though other values of c 1 in the given range gave similar results.
Remark 5.2 To save sampling eort, we terminate Step 4 in a slightly dierent manner
than as stated in Algorithm 5.1. Let N be such that N 0 < N < N , say N
(implicitly, we are assuming that 20n < 10 6 which is usually the case in practice; also,
for SEN networks we take N In any iteration t, while increasing ~
N t as given
in Step 4, if we obtain that ~
and (5.8) is still violated, then we interrupt Step
4, and directly proceed with updating (
do (4.6) and (4.7) (before proceeding with the next iteration). However,
if with this slight change in Step 4, Algorithm 5.1 keeps generating samples of size N
for several iterations in turn, say for then we complete Step 4, as given
in Algorithm 5.1.
Remark 5.3 Zig-zag policy Let ~ k be a given constant such that ~ k 1. If for some t
then we can decrease N 0 by some factor starting from iteration t + 1. Similarly, if (5.8)
does not hold for ~ k consecutive iterations, then we can increase N 0 by some factor. Note
that we generate at least ~ k after the latest increase or decrease in N 0 , before
we start to check (5.11) or start to check the (5.8)'s (for the previous ~ k iterations). In the
numerical experiments reported in this paper we use ~ we increase or decrease
by a factor of 2.
Note that Algorithm 5.1 uses a maximal order-statistics based stopping criterion given
by (5.9). This seems more natural compared to the quantile based stopping criterion used
in Algorithm 4.1. Note also that (5.9) is based on the fact that while approaching the
degenerating solution
d , more and more trajectories will follow the path associated with
d and thus, the number of dierent trajectories will be less than ~
Note nally that if (5.8) holds for all t ~ r we automatically obtain that N
8t. In that case Algorithm 5.1 reduces to the original Algorithm 4.1, provided
6 Numerical Results
Below we present the performance of Algorithm 4.1 and Algorithm 5.1 for both the maximal
cut and partition problems. By the performance of the algorithms, we mean the
convergence of estimators
t and
t (see, e.g., (4.9), (4.10)) to the true unknown optimal
value
. As mentioned before, we choose in the respective ASNs.
Since the maximal cut and partition are NP hard problems, no exact method is known
for verifying the accuracy of our method except for the naive total enumeration routine,
which is feasible for small graphs, say for those with n nodes. To overcome this
di-culty, we construct an articial graph such that the solution is available in advance
and then verify the accuracy of our method. As an example, we consider the following
symmetric distance matrix
| {z }
| {z }
where all components of the upper left-hand and lower right-hand quadrants (of sizes
m) (n m), respectively, where 0 < m < n) are equal and generated
from a given distribution, such as U(a; b) (uniformly distributed on the interval (a; b)),
etc., and the remaining components equal We choose
C 2 such that the partition V
will be the optimal one. More precisely, assume that the random variable Z has a pdf
with bounded support, say Z C 1 . Let, for example, clearly for
2 ) to be the optimal solution, it su-ces that C 2 > C 1 (similarly for m < n=2).
In all our examples we set = 0:01 in (4.6), took stopped
Algorithm 4.1 according to the stopping rule (4.9) with the parameter 5. We found
that for r 10, the relative error, dened as
equalled zero in all our experiments. Here T corresponds to the stopping time of Algorithm
4.1. The running time in seconds of Algorithm 4.1 on a Sun Enterprise 4000 workstation
(12 CPU, 248 MHz) is reported as well.
Table
6.1 presents the relative errors (and the associated stopping times
as function of the sample size for the maximal cut problem
with for the following 6 cases of Z in (6.1): Z
Here Beta(; ; a; b) is the Beta distribution whose probability density function is given
by
()() (b a
Table
6.1 The relative errors (and the associated stopping times
as functions of the sample size for the maximal cut problems
with for the above 6 cases of Z given in (6.1).
Table
6.2 The CPU times T as functions of the sample size
the same data as in Table 6.1
The results of Tables 6.1 - 6.2 are self-explanatory. Similar results were obtained with
Algorithm 4.1 for the partition problems.
The rest of our numerical results are for the partition problems. Tables 6.3 - 6.4
presents the performance of
t along with as functions of t, where
are dened as
t;s 0:5; ng
and
t;s < 0:5;
In particular Table 6.3 corresponds to
Table
6.4 presents data similar to Table 6.3 for In both
cases we obtained a relative error of 0 with CPU times
respectively. The results of Tables 6.3 - 6.4 are self-explanatory.
Table
6.3 Performance of Algorithm 4.1 for
Z U(4:5; 5).
6 1.220356e+06 0.510000 0.490000
9 1.224679e+06 0.510000 0.490000
13 1.238418e+06 0.510000 0.470000
14 1.240769e+06 0.590000 0.420000
19 1.249318e+06 0.890000 0.090000
Table
6.4 Performance of Algorithm 4.1 for
Z U(4:5; 5).
6 1.308071e+06 0.516667 0.483333
9 1.410264e+06 0.833333 0.466667
14 1.456611e+06 1.000000 0.233333
19 1.458000e+06 1.000000 0.000000
Table
6.5 presents the dynamics of p
t;14 ) for another smaller example.
Table
6.5 Dynamics of
t for for
3 1:00 0:97 0:95 0:99 0:87 0:99 0:98 0:01 0:03 0:00 0:11 0:00 0:02 0:00
4 1:00 1:00 1:00 0:99 1:00 0:99 0:98 0:01 0:00 0:00 0:00 0:00 0:01 0:00
5 1:00 1:00 1:00 0:99 1:00 0:99 0:99 0:01 0:00 0:00 0:00 0:00 0:00 0:00
Table
6.6 presents the performance of Algorithm 5.1 for the same input data as Table
6.4 with N that M t;(N) denotes the best (elite) sample value
of M(X) obtained at iteration t. We found that Algorithm 5.1 is at least as accurate as
Algorithm 4.1 and is typically 2-3 times faster than Algorithm 4.1.
Table
6.6 Performance of Algorithm 5.1 for the same input data as Table 6.4 with
9 1.42e+06 1.39e+06 3000 0.01 0.87 0.43
At this point we would like to note that we have performed extensive simulations case
studies with Algorithm 4.1 for dierent SNN models. We found that in approximately
99% of the cases, Algorithm 4.1 performs well in the sense that the relative error dened
in (6.3) does not exceed 1%. These results will be reported somewhere else.
6.1 Empirical Computational Complexity
Let us nally discuss the computational complexity of Algorithm 4.1 for the maximal cut
and the partition problems, which can be dened as
Here T n is the total number of iterations needed before Algorithm 4.1 stops; N n is the
sample size, that is the total number of maximal cuts and partitions generated at each
iteration; G n is the cost of generating from n independent Bernoulli distributions (needed
in Algorithm 3.1) or from the sequence of distributions Fm (p) (needed in Algorithm 3.2);
is the cost of updating the tuple (
). The latter follows from the
fact that computing M(X) in (3.8) is a O(n 2 ) operation.
For the model in (6.1) we found empirically that T
1000. For the maximal cut problem, considering that we take n N n 10n and that G n
is O(n) , we obtain In our experiments, the complexity we observed was
more like
The partitition problem has similar computational characteristics.
It is important to note that these empirical complexity results are solely for the model
with the distance matrix (6.1).
7 Concluding Remarks and Directions for Further Research
This paper presents an application of the cross-entropy (CE) method [38] to the maximal
cut and the partition problems. The proposed algorithm employs an auxiliary discrete dis-
tribution, which transforms the original deterministic network into an associated stochastic
one, called the associated stochastic network (ASN). Each iteration of the CE method
involves two major steps: (a) generation of trajectories (cuts and partitions) using an
auxiliary discrete distribution with a parameter vector p and calculation of the associated
objective function M(X) and some related quantities, such as indicator functions associated
with rare-event probabilities, and (b) updating the parameter vector p on the basis
of the data collected in the rst step. Our numerical studies with the maximal cut and the
partition problem, as well as some other COPs (not reported in this paper) suggest that
the proposed algorithms typically perform well in the sense that in approximately 99% of
the cases the relative error dened in (6.3) does not exceed 1%. In addition, experiments
suggest that Algorithm 4.1 and Algorithm 5.1 have polynomial complexity in the size of
the network. Some further topics for investigation are listed below.
1. Establish convergence of Algorithm 4.1 for nite sampling (i.e., N < 1) with emphasis
on the complexity and the speed of convergence under the suggested stopping
rules.
2. Establish condence intervals (regions) for the optimal solution.
3. Apply the proposed methodology to a wide variety of combinatorial optimization
problems, such as the quadratic assignment problem, minimal cut, vehicle routing,
graph coloring and communication networks optimization problems.
4. Apply the proposed methodology to noisy (simulation based) networks. Our preliminary
studies suggest that Algorithm 4.1 performs well in the case of noisy networks.
5. Apply parallel optimization techniques to the proposed methodology.
Acknowledgments
I would like to thank Alexander Podgaetsky for performing the numerical
experiments. I would also like to thank three anonymous referees, Pieter-Tjerk
de Boer and Victor Nicola (Guest Editor) at University of Twente, The Netherlands, and
Perwez Shahabuddin (Guest Editor) at Columbia University, U.S.A., for many valuable
suggestions.
--R
John Wiley
Local search in combinatorial optimization
Network ows theory
Genetic algorithms in search
Introduction to global optimization
Global optimization in action
Modern heuristic search methods
Discrete event systems: Sensitivity analysis and stochastic optimization via the score function method
--TR
Discrete optimization
Simulated annealing and Boltzmann machines: a stochastic approach to combinatorial optimization and neural computing
Approximating the permanent
Network flows
Polynomial-time approximation algorithms for the Ising model
search
A method for discrete stochastic optimization
Ant-based load balancing in telecommunications networks
New parallel randomized algorithms for the traveling salesman problem
The ant colony optimization meta-heuristic
ACO algorithms for the quadratic assignment problem
Ant algorithms for discrete optimization
A Graph-based Ant system and its convergence
Genetic Algorithms in Search, Optimization and Machine Learning
Local Search in Combinatorial Optimization
Simulated Annealing
Efficiently searching a graph by a smell-oriented vertex process
Nested Partitions Method for Global Optimization
--CTR
Victor F. Nicola , Tatiana S. Zaburnenko, Efficient importance sampling heuristics for the simulation of population overflow in Jackson networks, Proceedings of the 37th conference on Winter simulation, December 04-07, 2005, Orlando, Florida
Victor F. Nicola , Tatiana S. Zaburnenko, Efficient simulation of population overflow in parallel queues, Proceedings of the 37th conference on Winter simulation, December 03-06, 2006, Monterey, California
P. T. de Boer , D. P. Kroese , R. Y. Rubinstein, Rare event simulation and combinatorial optimization using cross entropy: estimating buffer overflows in three stages using cross-entropy, Proceedings of the 34th conference on Winter simulation: exploring new frontiers, December 08-11, 2002, San Diego, California
Zdravko Botev , Dirk P. Kroese, Global likelihood optimization via the cross-entropy method with an application to mixture models, Proceedings of the 36th conference on Winter simulation, December 05-08, 2004, Washington, D.C.
Victor F. Nicola , Tatiana S. Zaburnenko, Efficient heuristics for the simulation of population overflow in series and parallel queues, Proceedings of the 1st international conference on Performance evaluation methodolgies and tools, October 11-13, 2006, Pisa, Italy
Victor F. Nicola , Tatiana S. Zaburnenko, Efficient importance sampling heuristics for the simulation of population overflow in Jackson networks, ACM Transactions on Modeling and Computer Simulation (TOMACS), v.17 n.2, p.10-es, April 2007 | cross-entropy;importance sampling;rare event simulation;combinatorial optimization |
511466 | Accelerated focused crawling through online relevance feedback. | The organization of HTML into a tag tree structure, which is rendered by browsers as roughly rectangular regions with embedded text and HREF links, greatly helps surfers locate and click on links that best satisfy their information need. Can an automatic program emulate this human behavior and thereby learn to predict the relevance of an unseen HREF target page w.r.t. an information need, based on information limited to the HREF source page? Such a capability would be of great interest in focused crawling and resource discovery, because it can fine-tune the priority of unvisited URLs in the crawl frontier, and reduce the number of irrelevant pages which are fetched and discarded. | (Note: The HTML version of this paper is best viewed using
Microsoft Internet Explorer. To view the HTML version using
Netscape, add the following line to your ~/.Xdefaults or
~/.Xresources le:
Netscape*documentFonts.charset*adobe-fontspecific: iso-8859-1
For printing use the PDF version, as browsers may not print the
mathematics properly.)
y
Contact author, email soumen@cse.iitb.ac.in
Copyright is held by the author/owner(s).
WWW2002, May 7{11, 2002, Honolulu, Hawaii, USA.
ACM 1-58113-449-5/02/0005
If Pr(c*|u) is large enough
then enqueue all outlinks v of u
with priority Pr(c*|u)Frontier URLS
priority queue
Pick
best
Seed
URLs
Class models
Dmoz
consisting of
topic
stats
taxonomy
Baseline learner
Newly fetched
Submit page for classification page u
Crawler
Crawl
database
Figure
1: A basic focused crawler controlled by one topic
classier/learner.
as well. Support for surng is limited to the basic interface
provided by Web browsers, except for a few notable research
prototypes.
While surng, the user typically has a topic-specic
information need, and explores out from a few known
relevant starting points in the Web graph (which may be
query responses) to seek new pages relevant to the chosen
topic/s. While deciding for or against clicking on a specic
link (u; v), humans use a variety of clues on the source
page u to estimate the worth of the (unseen) target page
v, including the tag tree structure of u, text embedded in
various regions of that tag tree, and whether the link is
relative or remote. \Every click on a link is a leap of faith"
[19], but humans are very good at discriminating between
links based on these clues.
Making an educated guess about the worth of clicking
on a link (u; v) without knowledge of the target v is
central to the surng activity. Automatic programs which
can learn this capability would be valuable for a number
of applications which can be broadly characterized as
personalized, topic-specic information foragers.
Large-scale, topic-specic information gatherers are
called focused crawlers [1, 9, 14, 28, 30]. In contrast to giant,
all-purpose crawlers which must process large portions of
the Web in a centralized manner, a distributed federation of
focused crawlers can cover specialized topics in more depth
and keep the crawl more fresh, because there is less to cover
for each crawler.
In its simplest form, a focused crawler consists of a
supervised topic classier (also called a 'learner') controlling
the priority of the unvisited frontier of a crawler (see
Figure
1). The classier is trained a priori on document
samples embedded in a topic taxonomy such as Yahoo!
or Dmoz. It thereby learns to label new documents as
belonging to topics in the given taxonomy [2, 5, 21]. The
goal of the focused crawler is to start from nodes relevant
to a focus topic c in the Web graph and explore links to
selectively collect pages about c, while avoiding fetching
pages not about c.
Suppose the crawler has collected a page u and
encountered in u an unvisited link to v. A simple crawler
(which we call the baseline) will use the relevance of u
to topic c (which, in a Bayesian setting, we can denote
Pr(cju)) as the estimated relevance of the unvisited page
v. This reects our belief that pages across a hyperlink
are more similar than two randomly chosen pages on the
Web, or, in other words, topics appear clustered in the
Web graph [11, 23]. Node v will be added to the crawler's
priority queue with priority Pr(cju). This is essentially a
\best-rst" crawling strategy. When v comes to the head
of the queue and is actually fetched, we can verify if the
gamble paid o, by evaluating Pr(cjv). The fraction of
relevant pages collected is called the harvest rate. If V
is the set of nodes collected, the harvest rate is dened
as (1=jV Alternatively, we can measure
the loss rate, which is one minus the harvest rate, i.e., the
(expected) fraction of fetched pages that must be thrown
away. Since the eort on relevant pages is well-spent,
reduction in loss rate is the primary goal and the most
appropriate gure of merit.
For focused crawling applications to succeed, the \leap
of faith" from u to v must pay In other words,
if Pr(cjv) is often much less than the preliminary estimate
Pr(cju), a great deal of network trac and CPU cycles
are being wasted eliminating bad pages. Experience with
random walks on the Web show that as one walks away
from a xed page u0 relevant to topic c0, the relevance of
successive nodes u1; to c0 drops dramatically within
a few hops [9, 23]. This means that only a fraction of out-links
from a page is typically worth following. The average
out-degree of the Web graph is about 7 [29]. Therefore, a
large number of page fetches may result in disappointment,
especially if we wish to push the utility of focused crawling
to topic communities which are not very densely linked.
Even w.r.t. topics that are not very narrow, the
number of distracting outlinks emerging from even fairly
relevant pages has grown substantially since the early
days of Web authoring [4]. Template-based authoring,
dynamic page generation from semi-structured databases,
ad links, navigation panels, and Web rings contribute many
irrelevant links which reduce the harvest rate of focused
crawlers. Topic-based link discrimination will also reduce
these problems.
1.1 Our contribution: Leaping with more faith
In this paper we address the following questions:
How much information about the topic of the HREF
target is available and/or latent in the HREF source page,
its tag-tree structure, and its text? Can these sources be
exploited for accelerating a focused crawler?
Our basic idea is to use two classiers. Earlier, the regular
baseline classier was used to assign priorities to unvisited
frontier nodes. This no longer remains its function. The role
of assigning priorities to unvisited URLs in the crawl frontier
is now assigned to a new learner called the apprentice, and
the priority of v is specic to the features associated with
the (u; v) link which leads to it1. The features used by the
apprentice are derived from the Document Object Model or
1If many u's link to a single v, it is easiest to freeze the priority of
v when the rst-visited u linking to v is assessed, but combinations
of scores are also possible.
. submit (u,v)
to the apprentice
Online Pr(c|u) for
training all classes c
Class
Apprentice learner
If Pr(c*|u) is
large enough.
Apprentice v
assigns more Pr(c*|v)
accurate priority
to node v Frontier URLS An instance (u,v)
priority queue for the apprentice
Pick
best
Class models
Dmoz
consisting of
topic
stats
taxonomy
Baseline learner (Critic)
Crawler
Crawl
database
Newly fetched
Submit page for classification page u
Figure
2: The apprentice is continually presented with
training cases (u; v) with suitable features. The apprentice
is interposed where new outlinks (u; v) are registered with
the priority queue, and helps assign the unvisited node v a
better estimate of its relevance.
DOM (http://www.w3.org/DOM/) of u. Meanwhile, the role
of the baseline classier becomes one of generating training
instances for the apprentice, as shown in Figure 2. We may
therefore regard the baseline learner as a critic or a trainer,
which provides feedback to the apprentice so that it can
improve \on the job."
The critic-apprentice paradigm is related to reinforcement
learning and AI programs that learn to play games
[26, x1.2]. We argue that this division of labor is natural
and eective. The baseline learner can be regarded as
a user specication for what kind of content is desired.
Although we limit ourselves to a generative statistical model
for this specication, this can be an arbitrary black-box
predicate. For rich and meaningful distinction between
Web communities and topics, the baseline learner needs
to be fairly sophisticated, perhaps leveraging
annotations on the Web (such as topic directories). In
contrast, the apprentice specializes in how to locate pages
to satisfy the baseline learner. Its feature space is more
limited, so that it can train fast and adapt nimbly to
changing fortunes at following links during a crawl. In
Mitchell's words [27], the baseline learner recognizes \global
regularity" while the apprentice helps the crawler adapt
to \local regularity." This marked asymmetry between
the classiers distinguishes our approach from Blum and
Mitchell's co-training technique [3], in which two learners
train each other by selecting unlabeled instances.
Using a dozen topics from a topic taxonomy derived
from the Open Directory, we compare our enhanced crawler
with the baseline crawler. The number of pages that are
thrown away (because they are irrelevant), called the loss
rate, is cut down by 30{90%. We also demonstrate that
the ne-grained tag-tree model, together with our synthesis
and encoding of features for the apprentice, are superior to
simpler alternatives.
1.2 Related work
Optimizing the priority of unvisited URLs on the crawl
frontier for specic crawling goals is not new. FishSearch
by De Bra et al. [12, 13] and SharkSearch by Hersovici
et al. [16] were some of the earliest systems for localized
searches in the Web graph for pages with specied keywords.
In another early paper, Cho et al. [10] experimented with a
variety of strategies for prioritizing how to fetch unvisited
URLs. They used the anchor text as a bag of words to
guide link expansion to crawl for pages matching a specied
keyword query, which led to some extent of dierentiation
among out-links, but no trainer-apprentice combination was
involved. No notion of supervised topics had emerged at
that point, and simple properties like the in-degree or the
presence of specied keywords in pages were used to guide
the crawler.
Topical locality on the Web has been studied for a few
years. Davison made early measurements on a 100000-
node Web subgraph [11] collected by the DiscoWeb system.
Using the standard notion of vector space TFIDF similarity
[31], he found that the endpoints of a hyperlink are much
more similar to each other than two random pages, and that
HREFs close together on a page link to documents which are
more similar than targets which are far apart. Menczer has
made similar observations [23]. The HyperClass hypertext
classier also uses such locality patterns for better semi-supervised
learning of topics [7], as does IBM's Automatic
Resource Compilation (ARC) and Clever topic distillation
systems [6, 8].
Two important advances have been made beyond the
baseline best-rst focused crawler: the use of context graphs
by Diligenti et al. [14] and the use of reinforcement learning
by Rennie and McCallum [30]. Both techniques trained
a learner with features collected from paths leading up to
relevant nodes rather than relevant nodes alone. Such paths
may be collected by following backlinks.
Diligenti et al. used a classier (learner) that regressed
from the text of u to the estimated link distance from u to
some relevant page w, rather than the relevance of u or an
outlink (u; v), as was the case with the baseline crawler.
This lets their system continue expanding u even if the
reward for following a link is not immediate, but several
links away. However, they do favor links whose payos
are closest. Our work is specically useful in conjunction
with the use of context graphs: when the context graph
learner predicts that a goal is several links away, it is crucial
to oer additional guidance to the crawler based on local
structure in pages, because the fan-out at that radius could
be enormous.
Rennie and McCallum [30] also collected paths leading
to relevant nodes, but they trained a slightly dierent
classier, for
An instance was a single HREF link like (u; v).
The features were terms from the title and headers
( . etc.) of u, together with the text
in and 'near' the anchor (u; v). Directories and
pathnames were also used. (We do not know the
precise denition of 'near', or how these features were
encoded and combined.)
The prediction was a discretized estimate of the
number of relevant nodes reachable by following (u; v),
where the reward from goals distant from v was
geometrically discounted by some factor < 1=2 per
hop.
Rennie and McCallum obtained impressive harvests of
research papers from four Computer Science department
sites, and of pages about ocers and directors from 26
company Websites.
Lexical proximity and contextual features have been
used extensively in natural language processing for disambiguating
word sense [15]. Compared to plain text, DOM
trees and hyperlinks give us a richer set of potential features.
Aggarwal et al. have proposed an \intelligent crawling"
framework [1] in which only one classier is used, but similar
to our system, that classier trains as the crawl progresses.
They do not use our apprentice-critic approach, and do not
exploit features derived from tag-trees to guide the crawler.
The \intelligent agents" literature has brought forth
several systems for resource discovery and assistance to
browsing [19]. They range between client- and site-level
tools. Letizia [18], Powerscout, and WebWatcher [17] are
such systems. Menczer and Belew proposed InfoSpiders
[24], a collection of autonomous goal-driven crawlers without
global control or state, in the style of genetic algorithms. A
recent extensive study [25] comparing several topic-driven
crawlers including the best-rst crawler and InfoSpiders
found the best-rst approach to show the highest harvest
rate (which our new system outperforms).
In all the systems mentioned above, improving the
chances of a successful \leap of faith" will clearly reduce
the overheads of fetching, ltering, and analyzing pages.
Furthermore, whereas we use an automatic rst-generation
focused crawler to generate the input to train the apprentice,
one can envisage specially instrumented browsers being used
to monitor users as they seek out information.
We distinguish our work from prior art in the following
important ways:
Two classiers: We use two classiers. The rst one is
used to obtain 'enriched' training data for the second one.
breadth-rst or random crawl would have a negligible
fraction of positive instances.) The apprentice is a simplied
reinforcement learner. It improves the harvest rate, thereby
'enriching' the data collected and labeled by the rst learner
in turn.
path collection: Our two-classier frame-work
essentially eliminates the manual eort needed to
create reinforcement paths or context graphs. The input
needed to start o a focused crawl is just a pre-trained topic
taxonomy (easily available from the Web) and a few focus
topics.
Online training: Our apprentice trains continually, acquiring
ever-larger vocabularies and improving its accuracy
as the crawl progresses. This property holds also for the
\intelligent crawler" proposed by Aggarwal et al., but they
have a single learner, whose drift is controlled by precise
relevance predicates provided by the user.
notions of proximity between text and hyperlinks, we encode
the features of link (u; v) using the DOM-tree of u, and
automatically learn a robust denition of 'nearness' of a
textual feature to (u; v). In contrast, Aggarwal et al
use many tuned constants combining the strength of text-
and link-based predictors, and Rennie et al. use domain
knowledge to select the paths to goal nodes and the word
bags that are submitted to their learner.
2 Methodology and algorithms
We rst review the baseline focused crawler and then
describe how the enhanced crawler is set up using the
apprentice-critic mechanism.
2.1 The baseline focused crawler
The baseline focused crawler has been described in detail
elsewhere [9, 14], and has been sketched in Figure 1. Here
we review its design and operation briey.
There are two inputs to the baseline crawler.
A topic taxonomy or hierarchy with example URLs
for each topic.
One or a few topics in the taxonomy marked as the
of focus.
Although we will generally use the terms 'taxonomy' and
'hierarchy', a topic tree is not essential; all we really need is
a two-way classier where the classes have the connotations
of being 'relevant' or `irrelevant' to the topic(s) of focus.
A topic hierarchy is proposed purely to reduce the tedium
of dening new focused crawls. With a two-class classier,
the crawl administrator has to seed positive and negative
examples for each crawl. Using a taxonomy, she composes
the 'irrelevant' class as the union of all classes that are not
relevant. Thanks to extensive hierarchies like Dmoz in the
public domain, it should be quite easy to seed topic-based
crawls in this way.
The baseline crawler maintains a priority queue on the
estimated relevance of nodes v which have not been visited,
and keeps removing the highest priority node and visiting it,
expanding its outlinks and checking them into the priority
queue with the relevance score of v in turn. Despite its
extreme simplicity, the best-rst crawler has been found to
have very high harvest rates in extensive evaluations [25].
Why do we need negative examples and negative classes
at all? Instead of using class probabilities, we could maintain
a priority queue on, say, the TFIDF cosine similarity
between u and the centroid of the seed pages (acting as an
estimate for the corresponding similarity between v and the
centroid, until v has been fetched). Experience has shown
[32] that characterizing a negative class is quite important to
prevent the centroid of the crawled documents from drifting
away indenitely from the desired topic prole.
In this paper, the baseline crawler also has the implicit
job of gathering instances of successful and unsuccessful
\leaps of faith" to submit to the apprentice, discussed next.
2.2 The basic structure of the apprentice
learner
In estimating the worth of traversing the HREF (u; v), we
will limit our attention to u alone. The page u is modeled
as a tag tree (also called the Document Object Model or
DOM). In principle, any feature from u, even font color and
site membership may be perfect predictors of the relevance
of v. The total number of potentially predictive features will
be quite staggering, so we need to simplify the feature space
and massage it into a form suited to conventional learning
algorithms. Also note that we specically study properties
of u and not larger contexts such as paths leading to u,
meaning that our method may become even more robust and
useful in conjunction with context graphs or reinforcement
along paths.
Initially, the apprentice has no training data, and passes
judgment on (u; v) links according to some xed prior
obtained from a baseline crawl run ahead of time (e.g., see
the statistics in x3.3). Ideally, we would like to train the
apprentice continuously, but to reduce overheads, we declare
a batch size between a few hundred and a few thousand
pages. After every batch of pages is collected, we check if any
page u fetched before the current batch links to some page
v in the batch. If such a (u; v) is found, we extract suitable
features for (u; v) as described later in this section, and add
h(u; v); Pr(cjv)i as another instance of the training data for
the apprentice. Many apprentices, certainly the simple naive
Bayes and linear perceptrons that we have studied, need not
start learning from scratch; they can accept the additional
training data with a small additional computational cost.
2.2.1 Preprocessing the DOM tree
First, we parse u and form the DOM tree for u. Sadly,
much of the HTML available on the Web violates any
HTML standards that permit context-free parsing, but
a variety of repair heuristics (see, e.g., HTML Tidy,
available at http://www.w3.org/People/Raggett/tidy/)
let us generate reasonable DOM trees from bad HTML.
ul
li li li li
tt TEXT a
TEXT HREF TEXT em
TEXT font TEXT
Figure
3: Numbering of DOM leaves used to derive oset
attributes for textual tokens. '@' means \is at oset".
Second, we number all leaf nodes consecutively from left
to right. For uniformity, we assign numbers even to those
DOM leaves which have no text associated with them. The
specic <a href.> which links to v is actually an internal
node av, which is the root of the subtree containing the
anchor text of the link (u; v). There may be other element
tags such as or in the subtree rooted at av. Let
the leaf or leaves in this subtree be numbered '(av) through
r(av) '(av). We regard the textual tokens available from
any of these leaves as being at DOM oset zero w.r.t. the
link. Text tokens from a leaf numbered , to the left of
'(av), are at negative DOM oset '(av). Likewise, text
from a leaf numbered to the right of r(av) are at positive
DOM oset r(av). See Figure 3 for an example.
2.2.2 Features derived from the DOM and text
tokens
Many related projects mentioned in x1.2 use a linear notion
of proximity between a HREF and textual tokens. In the
ARC system, there is a crude cut-o distance measured
in bytes to the left and right of the anchor. In the
Clever system, distance is measured in tokens, and the
importance attached to a token decays with the distance.
In reinforcement learning and intelligent predicate-based
crawling, the exact specication of neighborhood text is not
known to us. In all cases, some ad-hoc tuning appears to be
involved.
We claim (and show in x3.4) that the relation between
the relevance of the target v of a HREF (u; v) and the
proximity of terms to (u; v) can be learnt automatically. The
results are better than ad-hoc tuning of cut-o distances,
provided the DOM oset information is encoded as features
suitable for the apprentice.
One obvious idea is to extend the Clever model: a page
is a linear sequence of tokens. If a token t is distant x from
the HREF (u; v) in question, we encode it as a feature ht; xi.
Such features will not be useful because there are too many
possible values of x, making the ht; xi space too sparse to
learn well. (How many HREFS will be exactly ve tokens
from the term 'basketball'?)
Clearly, we need to bucket x into a small number of
ranges. Rather than tune arbitrary bucket boundaries by
hand, we argue that DOM osets are a natural bucketing
scheme provided by the page author. Using the node
numbering scheme described above, each token t on page u
can be annotated w.r.t. the link (u; v) (for simplicity assume
there is only one such link) as ht; di, where d is the DOM
oset calculated above. This is the main set of features
used by the apprentice. We shall see that the apprentice
can learn to limit jdj to less than most cases,
which reduces its vocabulary and saves time.
A variety of other feature encodings suggest themselves.
We are experimenting with some in ongoing work (x4),
but decided against some others. For example, we do not
expect gains from encoding specic HTML tag names owing
to the diversity of authoring styles. Authors use ,
, and nested tables for layout control in
non-standard ways; these are best deated to a nameless
DOM node representation. Similar comments apply to
HREF collections embedded in , , and
. Font and lower/upper case information is useful
for search engines, but would make features even sparser
for the apprentice. Our representation also attens two-dimensional
tables to their \row-major" representation.
The features we ignore are denitely crucial for other
applications, such as information extraction. We did not
see any cases where this sloppiness led to a large loss rate.
We would be surprised to see tables where relevant links
occurred in the third column and irrelevant links in the fth,
or pages where they are rendered systematically in dierent
fonts and colors, but are not otherwise demarcated by the
DOM structure.
2.2.3 Non-textual features
Limiting d may lead us to miss features of u that may be
useful at the whole-page level. One approach would be to use
larger in magnitude than some threshold.
But this would make our apprentice as bulky and slow to
train as the baseline learner.
Instead, we use the baseline learner to abstract u for
the apprentice. Specically, we use a naive Bayes baseline
learner to classify u, and use the vector of class probabilities
returned as features for the apprentice. These features can
help the apprentice discover patterns such as
\Pages about /Recreation/Boating/Sailing often
link to pages about /Sports/Canoe_and_Kayaking."
This also covers for the baseline classier confusing between
classes with related vocabulary, achieving an eect similar
to context graphs.
Another kind of feature can be derived from co-citation.
If v1 has been fetched and found to be relevant and HREFS
(u; v1) and (u; v2) are close to each other, v2 is likely to
be relevant. Just like textual tokens were encoded as ht; di
pairs, we can represent co-citation features as h; di, where
is a suitable representation of relevance.
Many other features can be derived from the DOM tree
and added to our feature pool. We discuss some options
in x4. In our experience so far, we have found the ht; di
features to be most useful. For simplicity, we will limit our
subsequent discussion to ht; di features only.
2.3 Choices of learning algorithms for the
apprentice
Our feature set is thus an interesting mix of categorical,
ordered and continuous features:
tokens ht; di have a categorical component t and
a discrete ordered component d (which we may like to
smooth somewhat). Term counts are discrete but can
be normalized to constant document length, resulting
in continuous attribute values.
Class names are discrete and may be regarded as
synthetic terms. The probabilities are continuous.
The output we desire is an estimate of Pr(cjv), given all the
observations about u and the neighborhood of (u; v) that
we have discussed. Neural networks are a natural choice
to accommodate these requirements. We rst experimented
with a simple linear perceptron, training it with the delta
rule (gradient descent) [26]. Even for a linear perceptron,
convergence was surprisingly slow, and after convergence,
the error rate was rather high. It is likely that local
optima were responsible, because stability was generally
poor, and got worse if we tried to add hidden layers or
sigmoids. In any case, convergence was too slow for use
as an online learner. All this was unfortunate, because the
direct regression output from a neural network would be
convenient, and we were hoping to implement a Kohonen
layer for smoothing d.
In contrast, a naive Bayes (NB) classier worked very
well. A NB learner is given a set of training documents,
each labeled with one of a nite set of classes/topic. A
document or Web page u is modeled as a multiset or bag
of words, fh; n(u; )ig where is a feature which occurs
times in u. In ordinary text classication (such as
our baseline learner) the features are usually single words.
For our apprentice learner, a feature is a ht; di pair.
NB classiers can predict from a discrete set of classes,
but our prediction is a continuous (probability) score. To
bridge this gap, We used a simple two-bucket (low/high
relevance) special case of Torgo and Gama's technique of
using classiers for discrete labels for continuous regression
[33], using \equally probable intervals" as far as possible.
Torgo and Gama recommend using a measure of centrality,
such as the median, of each interval as the predicted value of
that class. Rennie and McCallum [30] corroborate that 2{3
bins are adequate. As will be clear from our experiments, the
medians of our 'low' and `high' classes are very close to zero
and one respectively (see Figure 5). Therefore, we simply
take the probability of the 'high' class as the prediction from
our naive Bayes apprentice.
The prior probability of class c, denoted Pr(c) is the
fraction of training documents labeled with class c. The NB
model is parameterized by a set of numbers c; which is
roughly the rate of occurrence of feature in class c, more
exactly,
where Vc is the set of Web pages labeled with c and T is the
entire vocabulary. The NB learner assumes independence
between features, and estimates
Y
Nigam et al. provide further details [22].
3 Experimental study
Our experiments were guided by the following requirements.
We wanted to cover a broad variety of topics, some 'easy' and
some 'dicult', in terms of the harvest rate of the baseline
crawler. Here is a quick preview of our results.
The apprentice classier achieves high accuracy in
predicting the relevance of unseen pages given ht; di
features. It can determine the best value of dmax to
use, typically, 4{6.
Encoding DOM osets in features improves the
accuracy of the apprentice substantially, compared
to a bag of ordinary words collected from within the
same DOM oset window.
Compared to a baseline crawler, a crawler that is
guided by an apprentice (trained oine) has a 30%
to 90% lower loss rate. It nds crawl paths never
expanded by the baseline crawler.
Even if the apprentice-guided crawler is forced to
stay within the (inferior) Web graph collected by the
baseline crawler, it collects the best pages early on.
The apprentice is easy to train online. As soon as it
starts guiding the crawl, loss rates fall dramatically.
Compared to ht; di features, topic- or cocitation-based
features have negligible eect on the apprentice.
To run so many experiments, we needed three highly
optimized and robust modules: a crawler, a HTML-to-DOM
converter, and a classier.
We started with the w3c-libwww crawling library from
http://www.w3c.org/Library/, but replaced it with our
own crawler because we could eectively overlap DNS
lookup, HTTP access, and disk access using a select over
all socket/le descriptors, and prevent memory leaks visible
in w3c-libwww. With three caching DNS servers, we could
achieve over 90% utilization of a 2Mbps dedicated ISP
connection.
We used the HTML parser libxml2 library to extract
the DOM from HTML, but this library has memory leaks,
and does not always handle poorly written HTML well. We
had some stability problems with HTML Tidy (http://www.
w3.org/People/Raggett/tidy/), the well-known HTML
cleaner which is very robust to bad HTML. At present we
are using libxml2 and are rolling our own HTML parser and
cleaner for future work.
We intend to make our crawler and HTML parser code
available in the public domain for research use.
For both the baseline and apprentice classier we used
the public domain BOW toolkit and the Rainbow naive
Bayes classier created by McCallum and others [20]. Bow
and Rainbow are very fast C implementations which let us
classify pages in real time as they were being crawled.
3.1 Design of the topic taxonomy
We downloaded from the Open Directory (http://dmoz.
org/) an RDF le with over 271954 topics arranged in a
tree hierarchy with depth at least 6, containing a total of
about 1697266 sample URLs. The distribution of samples
over topics was quite non-uniform. Interpreting the tree as
an is-a hierarchy meant that internal nodes inherited all
examples from descendants, but they also had their own
examples. Since the set of topics was very large and many
topics had scarce training data, we pruned the Dmoz tree
to a manageable frontier by following these steps:
1. Initially we placed example URLs in both internal and
leaf nodes, as given by Dmoz.
2. We xed a minimum per-class training set size of
300 documents.
3. We iteratively performed the following step as long
as possible: we found a leaf node with less than k
example URLs, moved all its examples to its parent,
and deleted the leaf.
4. To each internal node c, we attached a leaf
subdirectory called Other. Examples associated
directly with c were moved to this Other subdirectory.
5. Some topics were populated out of proportion, either
at the beginning or through the above process. We
made the class priors more balanced by sampling
down the large classes so that each class had at most
300 examples.
The resulting taxonomy had 482 leaf nodes and a total
of 144859 sample URLs. Out of these we could successfully
fetch about 120000 URLs. At this point we discarded the
tree structure and considered only the leaf topics. Training
time for the baseline classier was about about two hours
on a 729MHz Pentium III with 256kB cache and 512MB
RAM. This was very fast, given that 1.4GB of HTML text
had to be processed through Rainbow. The complete listing
of topics can be obtained from the authors.
3.2 Choice of topics
Depending on the focus topic and prioritization strategy,
focused crawlers may achieve diverse harvest rates. Our
early prototype [9] yielded harvest rates typically between
0.25 and 0.6. Rennie and McCallum [30] reported recall
and not harvest rates. Diligenti et al. [14] focused on very
specic topics where the harvest rate was very low, 4{6%.
Obviously, the maximum gains shown by a new idea in
focused crawling can be sensitive to the baseline harvest
rate.
To avoid showing our new system in an unduly positive
or negative light, we picked a set of topics which were fairly
diverse, and appeared to be neither too broad to be useful
(e.g., /Arts, /Science) nor too narrow for the baseline
crawler to be a reasonable adversary. We list our topics
in
Figure
4. We chose the topics without prior estimates of
how well our new system would work, and froze the list
of topics. All topics that we experimented with showed
visible improvements, and none of them showed deteriorated
performance.
3.3 Baseline crawl results
Topic
/Arts/Music/Styles/Classical/Composers
/Arts/Performing_Arts/Dance/Folk_Dancing
/Business/Industries./Livestock/Horses.
/Computers/Artificial_Intelligence
/Games/Board_Games/C/Chess
/Health/Conditions_and_Diseases/Cancer
/Home/Recipes/Soups_and_Stews
/Recreation/Outdoors/Fishing/Fly_Fishing
/Recreation/Outdoors/Speleology
/Science/Astronomy
/Science/Earth_Sciences/Meteorology
/Sports/Basketball
/Sports/Canoe_and_Kayaking
/Sports/Hockey/Ice_Hockey
Figure
4: We chose a variety of topics which were neither
too broad nor too narrow, so that the baseline crawler
was a reasonable adversary. #Good (#Bad) show the
approximate number of pages collected by the baseline
crawler which have relevance above (below) 0.5, which
indicates the relative diculty of the crawling task.
We will skip the results of breadth-rst or random crawling
in our commentary, because it is known from earlier work
on focused crawling that our baseline crawls are already
far better than breadth-rst or random crawls. Figure 5
shows, for most of the topics listed above, the distribution
of page relevance after running the baseline crawler to
collect roughly 15000 to 25000 pages per topic. TheExpected #pages
baseline crawler used a standard naive Bayes classier on 100000
the ordinary term space of whole pages. We see that the
relevance distribution is bimodal, with most pages being
very relevant or not at all. This is partly, but only partly, a 10000
result of using a multinomial naive Bayes model. The naive
Bayes classier assumes term independence and multiplies 1000
together many (small) term probabilities, with the result
that the winning class usually beats all others by a largemargin in probability. But it is also true that many outlinks
lead to pages with completely irrelevant topics. Figure 5
gives a clear indication of how much improvement we can 10
expect for each topic from our new algorithm.
AI
Astronomy
Basketball0.2Cancer
Chess
Composers
FlyFishing
FolkDance
Horses
IceHockey
Kayaking
Meteorology
3.4 DOM window size and feature selection
A key concern for us was how to limit the maximum window
width so that the total number of synthesized ht; di features
remains much smaller than the training data for the baseline
classier, enabling the apprentice to be trained or upgraded
in a very short time. At the same time, we did not want
to lose out on medium- to long-range dependencies between
signicant tokens on a page and the topic of HREF targets
in the vicinity. We eventually settled for a maximum DOM
window size of 5. We made this choice through the following
experiments.
The easiest initial approach was an end-to-end cross-validation
of the apprentice for various topics while
increasing dmax. We observed an initial increase in the
validation accuracy when the DOM window size was
increased beyond 0. However, the early increase leveled
or even reversed after the DOM window size was
increased beyond 5. The graphs in Figure 6 display these
results. We see that in the Chess category, though the
validation accuracy increases monotonically, the gains are
less pronounced after dmax exceeds 5. For the AI category,
accuracy fell beyond
Relevance probability0.8
Soups
Tobacco
Figure
5: All of the baseline classiers have harvest rates
between 0.25 and 0.6, and all show strongly bimodal
relevance score distribution: most of the pages fetched are
very relevant or not at all.
It is important to notice that the improvement in
accuracy is almost entirely because with increasing number
of available features, the apprentice can reject negative
(low relevance) instances more accurately, although the
accuracy for positive instances decreases slightly. Rejecting
unpromising outlinks is critical to the success of the
enhanced crawler. Therefore we would rather lose a little
accuracy for positive instances rather than do poorly on the
negative instances. We therefore chose dmax to be either 4
or 5 for all the experiments.
We veried that adding oset information to text tokens
was better than simply using plain text near the link [8].
One sample result is shown in Figure 7. The apprentice
accuracy decreases with dmax if only text is used, whereas
it increases if oset information is provided. This highlights
Chess9585
Negative
Positive
Average
AI85%Accuracy70Negative
Positive
Average
Figure
There is visible improvement in the accuracy
of the apprentice if dmax is made larger, up to about 5{
7 depending on topic. The eect is more pronounced on
the the ability to correctly reject negative (low relevance)
outlink instances. 'Average' is the microaverage over all
test instances for the apprentice, not the arithmetic mean
of 'Positive' and `Negative'.
AI84%Accuracy78Text
Offset
Figure
7: Encoding DOM oset information with textual
features boosts the accuracy of the apprentice substantially.
the importance of designing proper features.
To corroborate the useful ranges of dmax above, we
compared the value of average mutual information gain for
terms found at various distances from the target HREF.
The experiments revealed that the information gain of terms
found further away from the target HREF was generally
lower than those that were found closer, but this reduction
was not monotonic. For instance, the average information
Chess
d_max=8
d_max=5
d_max=4
d_max=3
d
d_max=8
d_max=5
d_max=4
d_max=3
d
Figure
8: Information gain variation plotted against
distance from the target HREF for various DOM window
sizes. We observe that the information gain is insensitive to
dmax.
gain at higher than that at Figure 8.
For each DOM window size, we observe that the information
gain varies in a sawtooth fashion; this intriguing observation
is explained shortly. The average information gain settled
to an almost constant value after distance of 5 from the
target URL. We were initially concerned that to keep the
computation cost manageable, we would need some cap on
dmax even while measuring information gain, but luckily,
the variation of information gain is insensitive to dmax, as
Figure
8 shows. These observations made our nal choice of
easy.
In a bid to explain the occurrence of the unexpected
saw-tooth form in Figure 8 we measured the rate ht;di at
which term t occurred at oset d, relative to the total count
of all terms occurring at oset d. (They are roughly the
multinomial naive Bayes term probability parameters.) For
xed values of d, we calculated the sum of values of terms
found at those osets from the target HREF. Figure 9(a)
shows the plot of these sums to the distance(d) for various
categories. The values showed a general decrease as the
distances from the target HREF increased, but this decrease,
like that of information gain, was not monotonic. The
values of the terms at odd numbered distances from the
target HREF were found to be lower than those of the
terms present at the even positions. For instance, the sum
of values of terms occurring at distance 2 were higher
than that of terms at position 1. This observation was
explained by observing the HTML tags that are present
at various distances from the target HREF. We observed
that tags located at odd d are mostly non-text tags, thanks
to authoring idioms such as <a.> <a.> and
<a.> <a.> etc. A plot of the frequency of
HTML tags against the distance from the HREF at which
d
-5 -4 -3 -2
Tags at various DOM offsetsAI
Chess
Horses
Cancer
IceHockey
Bball-800060004000
Number of occurrences2000
font
td
img
br
tr
li
comment
div
table
center
span
-5 -4 -3 -2
Figure
9: Variation of (a) relative term frequencies and
(b) frequencies of HTML tags plotted against d.
they were found is shown in Figure 9(b). (The <a.> tag
obviously has the highest frequency and has been removed
for clarity.)
These were important DOM idioms, spanning many
diverse Web sites and authoring styles, that we did not
anticipate ahead of time. Learning to recognize these
idioms was valuable for boosting the harvest of the enhanced
crawler. Yet, it would be unreasonable for the user-supplied
baseline black-box predicate or learner to capture crawling
strategies at such a low level. This is the ideal job of
the apprentice. The apprentice took only 3{10 minutes
to train on its (u; v) instances from scratch, despite a
simple implementation that wrote a small le to disk for
each instance of the apprentice. Contrast this with several
hours taken by the baseline learner to learn general term
distribution for topics.
3.5 Crawling with the apprentice trained
o-line
In this section we subject the apprentice to a \eld test" as
part of the crawler, as shown in Figure 2. To do this we
follow these steps:
1. Fix a topic and start the baseline crawler from all
example URLs available from the given topic.
2. Run the baseline crawler until roughly 20000{25000
pages have been fetched.
3. For all pages (u; v) such that both u and v have
been fetched by the baseline crawler, prepare an
instance from (u; v) and add to the training set of
the apprentice.
4. Train the apprentice. Set a suitable value for dmax.
Expected #pages lost0
Baseline
Apprentice
#Pages fetched
Ice Hockey0
#Pages fetched
Figure
10: Guidance from the apprentice signicantly
reduces the loss rate of the focused crawler.
Expected #pages lostBaseline
Apprentice
5. Start the enhanced crawler from the same set of pages
that the baseline crawler had started from.
6. Run the enhanced crawler to fetch about the same
number of pages as the baseline crawler.
7. Compare the loss rates of the two crawlers.
Unlike with the reinforcement learner studied by Rennie
and McCallum, we have no predetermined universe of URLs
which constitute the relevant set; our crawler must go
forth into the open Web and collect relevant pages from
an unspecified number of sites. Therefore, measuring recall
w.r.t. the baseline is not very meaningful (although we do
report such numbers, for completeness, in §3.6). Instead, we
measure the loss (the number of pages fetched which had to
be thrown away owing to poor relevance) at various epochs
in the crawl, where time is measured as the number of pages
fetched (to elide fluctuating network delay and bandwidth).
At epoch n, if the pages fetched are v_1, ..., v_n, then the total
expected loss is (1/n) Σ_i (1 − Pr(c|v_i)).
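For instance, this running loss can be computed as follows; relevance(v) is an assumed stand-in for the baseline classifier's estimate of Pr(c|v).

    def expected_loss(fetched_pages, relevance):
        # fetched_pages: the pages v_1, ..., v_n crawled so far;
        # relevance(v) returns the baseline classifier's Pr(c|v).
        n = len(fetched_pages)
        return sum(1.0 - relevance(v) for v in fetched_pages) / n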
Figure 10 shows the loss plotted against the number of
pages crawled for two topics: Folk dancing and Ice hockey.
The behavior for Folk dancing is typical; Ice hockey is
one of the best examples. In both cases, the loss goes up
substantially faster with each crawled page for the baseline
crawler than for the enhanced crawler. The reduction of loss
for these topics is 40% and 90%, respectively; typically, this
number is between 30% and 60%. In other words, for most
topics, the apprentice reduces the number of useless pages
fetched by one-third to two-thirds.
In a sense, comparing loss rates is the most meaningful
evaluation in our setting, because the network cost of
fetching relevant pages has to be paid anyway, and can be
regarded as a fixed cost. Diligenti et al. show significant
improvements in harvest rate, but for their topics, the loss
rates for both the baseline crawler and the context-focused
crawler were much higher than ours.
3.6 URL overlap and recall
The reader may feel that the apprentice crawler has an
unfair advantage because it is first trained on DOM-derived
features from the same set of pages that it has to crawl
again. We claim that the set of pages visited by the baseline
crawler and the (off-line trained) enhanced crawler have
small overlap, and the superior results for the crawler guided
by the apprentice are in large part because of generalizable
learning. This can be seen from the examples in Figure 11.
              Baseline   Apprentice   Intersect
Basketball       27220        26280        2431
FolkDance        14011         8168        2199
IceHockey        34121        22496        1657
FlyFishing       19252        14319        6834
Figure 11: The apprentice-guided crawler follows paths
which are quite different from the baseline crawler because
of its superior priority estimation technique. As a result
there is little overlap between the URLs harvested by these
two crawlers.
Given that the overlap between the baseline and the
enhanced crawlers is small, which is 'better'? As per the
verdict of the baseline classifier, clearly the enhanced crawler
is better. Even so, we report the loss rate of a different
version of the enhanced crawler which is restricted to visiting
only those pages which were visited by the baseline learner.
We call this crawler the recall crawler. This means that in
the end, both crawlers have collected exactly the same set
of pages, and therefore have the same total loss. The test
then is how long can the enhanced learner prevent the loss
from approaching the baseline loss. These experiments are a
rough analog of the 'recall' experiments done by Rennie and
McCallum. We note that for these recall experiments, the
apprentice does get the benefit of not having to generalize,
so the gap between baseline loss and recall loss could be
optimistic. Figure 12 compares the expected total loss of
the baseline crawler, the recall crawler, and the apprentice-
guided crawler (which is free to wander outside the baseline
collection) plotted against the number of pages fetched, for a
few topics. As expected, the recall crawler has loss generally
somewhere between the loss of the baseline and the enhanced
crawler.
Figure 12: Recall for a crawler using the apprentice but
limited to the set of pages crawled earlier by the baseline
crawler (topics shown: Ice Hockey and Kayaking).
3.7 Effect of training the apprentice online
Next we observe the effect of a mid-flight correction when
the apprentice is trained some way into a baseline crawl and
switched into the circuit. The precise steps were (a sketch
in code follows the list):
1. Run the baseline crawler for the first n page fetches,
then stop it.
2. Prepare instances and train the apprentice.
3. Re-evaluate the priorities of all unvisited pages v in
the frontier table using the apprentice.
4. Switch in the apprentice and resume an enhanced
crawl.
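A sketch of this mid-flight switch is shown below; the crawler, apprentice, and frontier interfaces are hypothetical.

    def midflight_correction(crawler, apprentice, frontier, n):
        crawler.run(max_pages=n)                          # step 1: baseline phase
        apprentice.train(crawler.collected_instances())   # step 2: train apprentice
        for v in frontier.unvisited():                    # step 3: re-score frontier
            frontier.set_priority(v, apprentice.score(v))
        crawler.use_guidance(apprentice)                  # step 4: resume, enhanced
        crawler.run()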
We report our experience with "Folk Dancing." The baseline
crawl was stopped after 5200 pages were fetched. Re-evaluating
the priority of frontier nodes led to radical
changes in their individual ranks as well as the priority
distributions. As shown in Figure 13(a), the baseline learner
is overly optimistic about the yield it expects from the
frontier, whereas the apprentice already abandons a large
fraction of frontier outlinks, and is less optimistic about
the others, which appears more accurate from the Bayesian
perspective.
Figure 13: The effect of online training of the apprentice.
(a) The apprentice makes sweeping changes in the
estimated promise of unvisited nodes in the crawl frontier.
(b) Resuming the crawl under the guidance of the
apprentice immediately shows significant reduction in the
loss accumulation rate.
Figure 13(b) shows the effect of resuming an enhanced
crawl guided by the trained apprentice. The new (u, v)
instances are all guaranteed to be unknown to the apprentice
now. It is clear that the apprentice's prioritization
immediately starts reducing the loss rate. Figure 14 shows
an even more impressive example. There are additional mild
gains from retraining the apprentice at later points. It may
be possible to show a more gradual online learning effect
by retraining the classifier at a finer interval, e.g., every
100 page fetches, similar to Aggarwal et al. In our context,
however, losing a thousand pages at the outset because of
the baseline crawler's limitation is not a disaster, so we need
not bother.
3.8 Effect of other features
We experimented with two other kinds of features, which we
call topic and cocitation features.
Our limiting dmax to 5 may deprive the apprentice of
important features in the source page u which are far from
the link (u, v). One indirect way to reveal such features
to the apprentice is to classify u, and to add the names
of some of the top-scoring classes for u to the instance
(u, v). §2.2.3 explains why this may help. This modification
resulted in a 1% increase in the accuracy of the apprentice.
A further increase of 1% was observed if we added all
prefixes of the class name. For example, the full name
for the Linux category is /Computers/Software/Operating_Systems/Linux.
We added all of the following to the feature set of the source
page: /, /Computers, /Computers/Software,
/Computers/Software/Operating_Systems and
/Computers/Software/Operating_Systems/Linux. We also
noted that various class names and some of their prefixes
appeared amongst the best discriminants of the positive and
negative classes.
Figure 14: Another example (topic: Classical Composers) of
training the apprentice online followed by starting to use it
for crawl guidance. Before guidance, the loss accumulation
rate is over 30%; after, it drops to only 6%.
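Generating the prefix features for a topic path is straightforward; the helper below reproduces the Linux example above and is only an illustrative sketch.

    def class_prefixes(path):
        # "/Computers/Software/Operating_Systems/Linux" ->
        # ["/", "/Computers", "/Computers/Software",
        #  "/Computers/Software/Operating_Systems",
        #  "/Computers/Software/Operating_Systems/Linux"]
        parts = path.strip("/").split("/")
        prefixes = ["/"]
        for i in range(1, len(parts) + 1):
            prefixes.append("/" + "/".join(parts[:i]))
        return prefixes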
Cocitation features for the link (u, v) are constructed by
looking for other links (u, w) within a DOM distance of dmax
such that w has already been fetched, so that Pr(c|w) is
known. We discretize Pr(c|w) to two values, high and low,
as in §2.3, and encode the feature as ⟨low, d⟩ or ⟨high, d⟩.
The use of cocitation features did not improve the accuracy
of the apprentice to any appreciable extent.
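One possible encoding of these cocitation features is sketched below; the 0.5 discretization threshold and the data layout are our assumptions.

    def cocitation_features(cocited_links, relevance, dmax=5, threshold=0.5):
        # cocited_links: (w, d) pairs, where w is an already-fetched page linked
        # near the target HREF and d is its DOM distance from that HREF.
        feats = []
        for w, d in cocited_links:
            if abs(d) <= dmax:
                level = "high" if relevance(w) >= threshold else "low"
                feats.append((level, d))  # e.g., ("high", -2)
        return feats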
For both kinds of features, we estimated that random
variations in crawling behavior (because of fluctuating
network load and tie-breaking frontier scores) may prevent
us from measuring an actual benefit to crawling under
realistic operating conditions. We note that these ideas may
be useful in other settings.
We have presented a simple enhancement to a focused
crawler that helps assign better priorities to the unvisited
URLs in the crawl frontier. This leads to a higher rate of
fetching pages relevant to the focus topic and fewer false
positives which must be discarded after spending network,
CPU and storage resources processing them. There is no
need to manually train the system with paths leading to
relevant pages. The key idea is an apprentice learner which
can accurately predict the worth of fetching a page using
DOM features on pages that link to it. We show that the
DOM features we use are superior to simpler alternatives.
Using topics from Dmoz, we show that our new system can
cut down the fraction of false positives by 30-90%.
We are exploring several directions in ongoing work.
We wish to revisit continuous regression techniques for the
apprentice, as well as more extensive features derived from
the DOM. For example, we can associate with a token t the
length ℓ of the DOM path from the text node containing t to
the HREF to v, or the depth of their least common ancestor [14]
in the DOM tree. We cannot use these in lieu of DOM offset,
because regions which are far apart lexically may be close
to each other along a DOM path. ⟨t, ℓ, d⟩ features will be
more numerous and sparser than ⟨t, d⟩ features, and could
be harder to learn. The introduction of large numbers of
strongly dependent features may even reduce the accuracy [15]
of the apprentice. Finally, we wish to implement some form
of active learning where only those instances (u, v) with the
largest |Pr(c|u) − Pr(c|v)| are chosen as training instances [16]
for the apprentice.
Acknowledgments
Thanks to the referees for suggesting that we present Figure 7.
--R
Intelligent crawling on the World Wide Web with arbitrary predicates.
Scalable feature selection
Topic distillation
Efficient crawling through URL ordering.
Topical locality in the Web.
Information retrieval in the world-wide web: Making client-based searching feasible
Searching for arbitrary information in the WWW: The fish search for Mosaic.
Focused crawling using context graphs.
th International Conference on Very Large Data Bases
September 10-14
papers/focus-vldb00/focus-vldb00
A method for
disambiguating word senses in a large corpus.
the Humanities
Web site mapping.
guide for the web.
Letizia: An agent that assists Web browsing.
edu/people/lieber/Lieberary/Letizia/Letizia.
Exploring the Web
Bow: A toolkit for statistical language modeling
classification and clustering.
A comparison of event models for
naive Bayes text classification.
Workshop on Learning for Text Categorization.
Also technical report WS-98-05
Links tell us about lexical and semantic
Web content.
Adaptive retrieval agents:
Internalizing local context and scaling up to the Web.
Technical Report CS98-579
Papers/MLJ.
Machine Learning.
Mining the Web.
WTMS: a system for collecting and analyzing
Stochastic models for the Web graph.
In FOCS
http://www.
Using reinforcement learning to
spider the web efficiently.
Introduction to Modern
Information Retrieval.
Focused crawling using TFIDF centroid.
and Mining (CS610) class project
Regression by classication.
of Lecture Notes in Artificial Intelligence
--TR
Automated learning of decision rules for text categorization
Information retrieval in the World-Wide Web: making client-based searching feasible
Enhanced hypertext categorization using hyperlinks
Combining labeled and unlabeled data with co-training
Automatic resource compilation by analyzing hyperlink structure and associated text
Efficient crawling through URL ordering
The shark-search algorithm. An application
Focused crawling
Topic Distillation and Spectral Filtering
Topical locality in the Web
WTMS
Adaptive Retrieval Agents
Intelligent crawling on the World Wide Web with arbitrary predicates
Integrating the document object model with hyperlinks for enhanced topic distillation and information extraction
Exploring the Web with reconnaissance agents
Evaluating topic-driven web crawlers
Machine Learning
Introduction to Modern Information Retrieval
Using Reinforcement Learning to Spider the Web Efficiently
Regression by Classification
Focused Crawling Using Context Graphs
Scalable feature selection, classification and signature generation for organizing large text databases into hierarchical topic taxonomies
Stochastic models for the Web graph
--CTR
Qingyang Xu , Wanli Zuo, First-order focused crawling, Proceedings of the 16th international conference on World Wide Web, May 08-12, 2007, Banff, Alberta, Canada
Sunita Sarawagi , V. G. Vinod Vydiswaran, Learning to extract information from large domain-specific websites using sequential models, ACM SIGKDD Explorations Newsletter, v.6 n.2, p.61-66, December 2004
Rashmin Babaria , J. Saketha Nath , Krishnan S , Sivaramakrishnan K R , Chiranjib Bhattacharyya , M. N. Murty, Focused crawling with scalable ordinal regression solvers, Proceedings of the 24th international conference on Machine learning, p.57-64, June 20-24, 2007, Corvalis, Oregon
Hongyu Liu , Evangelos Milios , Jeannette Janssen, Probabilistic models for focused web crawling, Proceedings of the 6th annual ACM international workshop on Web information and data management, November 12-13, 2004, Washington DC, USA
Gautam Pant , Padmini Srinivasan, Learning to crawl: Comparing classification schemes, ACM Transactions on Information Systems (TOIS), v.23 n.4, p.430-462, October 2005
Jingru Dong , Wanli Zuo , Tao Peng, Focused crawling guided by link context, Proceedings of the 24th IASTED international conference on Artificial intelligence and applications, p.365-369, February 13-16, 2006, Innsbruck, Austria
Martin Ester , Hans-Peter Kriegel , Matthias Schubert, Accurate and efficient crawling for relevant websites, Proceedings of the Thirtieth international conference on Very large data bases, p.396-407, August 31-September 03, 2004, Toronto, Canada
Gautam Pant, Deriving link-context from HTML tag tree, Proceedings of the 8th ACM SIGMOD workshop on Research issues in data mining and knowledge discovery, June 13-13, 2003, San Diego, California
Mrcio L. A. Vidal , Altigran S. da Silva , Edleno S. de Moura , Joo M. B. Cavalcanti, Structure-driven crawler generation by example, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Prasad Pingali , Jagadeesh Jagarlamudi , Vasudeva Varma, WebKhoj: Indian language IR from multiple character encodings, Proceedings of the 15th international conference on World Wide Web, May 23-26, 2006, Edinburgh, Scotland
Gautam Pant , Kostas Tsioutsiouliklis , Judy Johnson , C. Lee Giles, Panorama: extending digital libraries with topical crawlers, Proceedings of the 4th ACM/IEEE-CS joint conference on Digital libraries, June 07-11, 2004, Tuscon, AZ, USA
Jiansheng Huang , Jeffrey F. Naughton, K-relevance: a spectrum of relevance for data sources impacting a query, Proceedings of the 2007 ACM SIGMOD international conference on Management of data, June 11-14, 2007, Beijing, China
Weizheng Gao , Hyun Chul Lee , Yingbo Miao, Geographically focused collaborative crawling, Proceedings of the 15th international conference on World Wide Web, May 23-26, 2006, Edinburgh, Scotland
Christos Makris , Yannis Panagis , Evangelos Sakkopoulos , Athanasios Tsakalidis, Category ranking for personalized search, Data & Knowledge Engineering, v.60 n.1, p.109-125, January, 2007
L. K. Shih , D. R. Karger, Using urls and table layout for web classification tasks, Proceedings of the 13th international conference on World Wide Web, May 17-20, 2004, New York, NY, USA
Luciano Barbosa , Juliana Freire, An adaptive crawler for locating hidden-web entry points, Proceedings of the 16th international conference on World Wide Web, May 08-12, 2007, Banff, Alberta, Canada
Luciano Barbosa , Juliana Freire, Combining classifiers to identify online databases, Proceedings of the 16th international conference on World Wide Web, May 08-12, 2007, Banff, Alberta, Canada
Soumen Chakrabarti , Mukul M. Joshi , Kunal Punera , David M. Pennock, The structure of broad topics on the web, Proceedings of the 11th international conference on World Wide Web, May 07-11, 2002, Honolulu, Hawaii, USA
P. Srinivasan , F. Menczer , G. Pant, A General Evaluation Framework for Topical Crawlers, Information Retrieval, v.8 n.3, p.417-447, May 2005
Using HMM to learn user browsing patterns for focused web crawling, Data & Knowledge Engineering, v.59 n.2, p.270-291, November 2006
Qingyang Xu , Wanli Zuo, Extracting Precise Link Context Using NLP Parsing Technique, Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence, p.64-69, September 20-24, 2004
Panagiotis G. Ipeirotis , Eugene Agichtein , Pranay Jain , Luis Gravano, To search or to crawl?: towards a query optimizer for text-centric tasks, Proceedings of the 2006 ACM SIGMOD international conference on Management of data, June 27-29, 2006, Chicago, IL, USA
Filippo Menczer, Lexical and semantic clustering by web links, Journal of the American Society for Information Science and Technology, v.55 n.14, p.1261-1269, December 2004
Deepayan Chakrabarti , Christos Faloutsos, Graph mining: Laws, generators, and algorithms, ACM Computing Surveys (CSUR), v.38 n.1, p.2-es, 2006 | reinforcement learning;document object model;focused crawling |
511488 | Extracting query modifications from nonlinear SVMs. | When searching the WWW, users often desire results restricted to a particular document category. Ideally, a user would be able to filter results with a text classifier to minimize false positive results; however, current search engines allow only simple query modifications. To automate the process of generating effective query modifications, we introduce a sensitivity analysis-based method for extracting rules from nonlinear support vector machines. The proposed method allows the user to specify a desired precision while attempting to maximize the recall. Our method performs several levels of dimensionality reduction and is vastly faster than searching the combination feature space; moreover, it is very effective on real-world data. | INTRODUCTION
When searching the WWW, users often desire results from a
specific category, such as personal home pages or conference an-
nouncements. Unfortunately, many search engines return results
from multiple categories, thus forcing the user to manually filter
the results. One solution is to use an automated classifier that identifies
whether a given result is close to the user's desired category.
However, since search engines often have low precision for a given
category, it may be necessary to retrieve a large number of documents
from the engines, which may be expensive or impossible
(search engines typically only allow retrieving a certain maximum
number of hits for a query). A single query modification (such as
+"home page") can improve precision, but often at the expense
of recall. To allow for both high precision and high recall, we introduce
a method to generate a collection of query modifications
that satisfies the user's desired precision and attempts to maximize
recall.
Copyright is held by the author/owner(s).
WWW2002, May 7-11, 2002, Honolulu, Hawaii, USA.
ACM 1-58113-449-5/02/0005.
We use a nonlinear support vector machine (SVM) to initially
classify all documents. The problem of building classifiers that
generalize well is especially important in the domain of text classification
because typical problem instances have high-dimensional
input spaces (mapping to a word vector) with a relatively small
number of positive examples. The scarcity of positive examples
is related to the cost of having humans hand-classify examples.
SVMs are specifically designed to generalize well in high-dimensional
spaces with few examples. Moreover, they have been shown
to be extremely accurate and robust on text classification problems
[8, 9], which is why we use SVMs in favor of other classification
methods.
We also know of no other work that extracts symbolic information
from SVMs; hence, we are also motivated by the desire to use
the high accuracy of SVMs to extract valuable symbolic features,
similar to earlier works that did the same for multilayer perceptrons
[14].
Our feature extraction process uses sensitivity analysis on the underlying
SVM to identify a linear model and a single query modification
that explains a subset of the desired documents. We extract
additional query modifications by iterating the procedure with
a new dataset composed of the false negatives of the linear model
and all negative examples. The procedure repeats, generating additional
query modifications, until no further progress is made. In
this way, our feature extraction method is similar to RIPPER [1] in
that we iteratively extract rules; however, our method differs in that
we use the SVM to guide our choice in rule selection.
In the end, we can identify a set of query modifications whose
combination "covers" a large portion of the positive documents, but
greatly reduces the number of false positives. Our method yields
precise search results and can be built on top of existing search
engines in the form of a metasearch engine. Moreover, by using
SVMs to guide the rule search, our extracted rules are predisposed
to have many of the same generalization qualities that the originating
SVM possesses.
This approach differs from our other work on learning query
modifications [6] by producing a set of query modifications that
work together to improve recall, as opposed to a set of individually
effective modifications, that may or may not work well together.
This paper is divided into five sections. In Section 2, we discuss
WWW search engines and metasearch engines, with an emphasis
on how our results can be used to improve metasearch engines. In
Section 3, we describe our method for producing effective query
modifications, which uses text preprocessing, SVM classification,
sensitivity analysis, and a dataset deflation procedure. Section 4
gives experimental results for identifying personal home pages and
conference pages and shows that our method gives both high precision
and improved recall. Finally, Section 5 summarizes our work
with a discussion on how web page patterns are exploited by our
method's dimensionality reduction properties.
2. WWW SEARCH AND METASEARCH
The primary tool for accessing data on the Web is a search engine
such as AltaVista, Northern Light, Google, Excite, and others. All
search engines accept keyword queries and return a list of relevance
ranked results, where relevance is usually a topical measure. As
every search engine user has experienced, typical search queries
often return thousands of URLs, placing a burden on the user to
manually filter the results.
One common trick for combating this problem is to "spike" a
search query with a modification that is designed to narrow the results
to a specific context or category. For example, if searching for
the personal home pages of machine learning researchers, adding
the term "home page" can increase the density of valuable results
by encouraging a search engine to rank personal home pages
higher than non-home pages. However, using any single query
modification increases the search precision at the expense of re-
call, i.e., many home pages (e.g., those that do not contain "home
page") may be missed.
The coverage of a search query can be improved by using a meta-search
engine, such as DogPile, SavvySearch [7], MetaCrawler [13],
or Profusion [3]. Each submits the user's query to multiple search
engines and fuses the results, allowing for potentially higher recall.
Most simple metasearch engines fuse results by considering only
the titles, URLs and short summaries returned from the underlying
search engines. As a result, it is more difficult for them to assess
topical relevance or to determine if a particular result is of the type
desired by the user. In addition, since metasearch engines combine
many search engine results, there is a risk of one search engine
causing the total precision to drop.
Unlike typical metasearch engines, content-based metasearch en-
gines, such as Inquirus [10], download all possible results and consider
the full HTML of each page when ranking them. Although
this strategy may improve the accuracy of relevance ranking, it does
not allow users to control the desired category of results and, at
best, only organizes existing results. Just as with any metasearch
engine, Inquirus can have low precision if the underlying search engines
also have low precision. As a result, the improved coverage
does not guarantee improved recall.
Some search and metasearch engines, such as Northern Light
and SavvySearch, allow users to specify a desired category from a
limited set of categories. Northern Light constrains user searches
to results known to fall into the chosen cluster, while SavvySearch
only submits the query to search engines known to be of the desired
category, such as MP3 or news sites. Each approach offers the user
improved control, but both are still limited.
A problem arises when a user's desired document category does
not exactly match the provided choices, or the user wishes to submit
his query to a general purpose search engine (to improve re-
call). There is no way to provide custom categories to these search
tools, neither of which have a category for "personal home pages"
or "conference pages." Nevertheless, users often desire documents
in a well-defined category that is not easily localized like those covered
by specialized metasearch engines.
Our work is motivated by the desire to build a metasearch engine
that can adaptively specialize to many different document cat-
egories. Our prototype system, Inquirus 2 [5, 4], currently allows
users to focus a search on categories such as "personal home
pages," "research papers," and "product reviews." Previous versions
of Inquirus 2 used hand-coded query modifications to improve
the search performed by the search engines. This work gives
a method by which effective query modifications that have both
high precision and high recall can be generated automatically.
3. FINDING QUERY MODIFICATIONS
Our method for automatically identifying effective query modifications
is a five-step process that uses labeled examples. The first
stage is to preprocess the textual content of an HTML document
into n-gram features that appear to have discriminatory power. With
the document features and labels, we then train a nonlinear SVM
to classify the documents. Sensitivity analysis on the SVM is used
to estimate the importance of document features to the SVM clas-
sifier. The most important features are exhaustively analyzed to
find the combination of query modifications that yields the highest
recall for a desired level of precision.
Documents that are labeled as true positives by the best query
modification are removed from the dataset. The process then repeats
with a new SVM classifier until no new query modifications
are found. The entire method is described in greater detail in the
next four subsections.
3.1 Preprocessing
Our preprocessing extracts the full text and title text of a document
and converts it into features that consist of up to three consecutive
words. All non-letter characters are converted to whitespace
and all capital letters are converted to lower case.
After every page in the training set is converted, two feature histograms
are constructed for positive and negative exemplars. Since
the number of features per document can be on the order of thou-
sands, we reduce the number of features by eliminating those features
that are too rare (such as proper names). Common features,
such as stop words, are removed by the feature scoring process as
well. However, since we distinguish between title text and the full
document text, a particular stop word such as "s" in the title (from
apostrophe "s"), could be a strong feature, even though it is common
in the text.
This dimensionality reduction considers the relative ability of
any given feature to distinguish between positive and negative exemplars
by assigning a score to each feature. The score is generated
via the following four sets:
P = { positive examples }
N = { negative examples }
P_f = { p ∈ P : p contains feature f }
N_f = { n ∈ N : n contains feature f }
Ignoring higher order correlations, the best features for classifying
are those which occur only in one set and the worst features are
those which occur in an equal percentage of each set, or occur very
frequently in both sets. A simple scoring function, score(f), that
captures this notion is score(f) = max(|P_f|, |N_f|) / (|P_f| + |N_f|),
which is the probability (given equal sizes for P and N) that you
know which set a document containing feature f came from. The
worst one can do is random or 0.5, and the best is absolute cer-
tainty, or 1. Unfortunately, this equation would predict all features
which occur only once as being perfect classifiers. To remedy this
problem, we add a requirement that a feature occur in at least some
threshold percentage of documents from either P or N . Any feature
occurring less than the threshold (7.5% in our experiments) is
removed from consideration.
After each feature is scored the top N are taken. For our experiments
we used N equal to 100 for the personal home pages, and
300 for conference pages. After the top N features are determined,
each page is converted to a binary vector, where each feature is
assigned a value in {1, -1}, with -1 indicating that the feature is absent.
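A sketch of this preprocessing step is given below; the exact normalization of score(f) and the data structures are our assumptions, chosen to match the description above (0.5 for uninformative features, 1 for perfect separators, a minimum document-frequency threshold, and a {1, -1} encoding).

    from collections import Counter

    def select_features(pos_docs, neg_docs, top_n=100, min_frac=0.075):
        # pos_docs, neg_docs: lists of per-document sets of n-gram features.
        pf, nf = Counter(), Counter()
        for d in pos_docs:
            pf.update(d)
        for d in neg_docs:
            nf.update(d)
        scores = {}
        for f in set(pf) | set(nf):
            # keep only features seen in at least min_frac of P or of N
            if pf[f] < min_frac * len(pos_docs) and nf[f] < min_frac * len(neg_docs):
                continue
            scores[f] = max(pf[f], nf[f]) / float(pf[f] + nf[f])
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    def to_vector(doc_features, selected):
        # +1 if the feature is present in the document, -1 if absent
        return [1 if f in doc_features else -1 for f in selected]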
3.2 SVM Classification
Consider a set of data, {(x_1, y_1), ..., (x_N, y_N)}, such that x_i
is an input and y_i ∈ {-1, 1} is a target output. A support vector
machine is a model that is calculated as a weighted sum of kernel
function outputs. The kernel function of an SVM is written as
K(x_a, x_b), and can be an inner product, Gaussian, polynomial,
or any other function that obeys Mercer's condition [15].
In the simplest case, where K(x_a, x_b) = x_a · x_b and the training
data is linearly separable, computing an SVM for the data corresponds
to minimizing ||w|| such that y_i (w · x_i + b) ≥ 1 for all i.
Thus, an SVM yields the lowest complexity linear classifier that
correctly classifies all data. The solution for w is found by solving
the quadratic programming problem defined by the objective
function and the constraints. In the linear case, the solution is
w = Σ_i α_i y_i x_i, with α_i ≥ 0.
The α_i terms are the Lagrange multipliers found
in the primal and dual Lagrangians. In many cases, most of the
Lagrange multipliers will be zero. The only nonzero multipliers
correspond to data points that lie closest to the decision boundary.
The formalism behind SVMs has been generalized to accommodate
nonlinear kernel functions and slack variables for misclassifications.
We write the output of a nonlinear SVM as:
f(x) = Σ_i α_i y_i K(x_i, x) + b.     (1)
Thus, K(x_a, x_b) = Φ(x_a) · Φ(x_b) is a dot product in a nonlinear
feature space Φ(x). The
objective function (which should be minimized) for Equation 1 is:
W(α) = (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j) − Σ_i α_i,     (2)
subject to the box constraint 0 ≤ α_i ≤ C for all i and the linear constraint
Σ_i α_i y_i = 0. C is a user-defined constant that represents a
balance between the model complexity and the approximation error;
in the Lagrangian, C is multiplied by the sum of the magnitudes
of the slack variables used for absorbing misclassifications.
Equation 2 will always have a single minimum with respect to
the Lagrange multipliers, α. The minimum to Equation 2 can be
found with any of a family of algorithms, all of which are based on
constrained quadratic programming. We used a faster variation [2]
of the Sequential Minimal Optimization algorithm [11, 12] in all of
our experiments.
When Equation 2 is minimal, Equation 1 will have a classification
margin that is maximized for the training set. For the case of
a linear kernel function (K(x_a, x_b) = x_a · x_b), an SVM finds a
decision boundary that is balanced between the class boundaries
of the two classes. In the nonlinear case, the margin of the classifier
is maximized in the nonlinear feature space, which results in a
nonlinear classification boundary.
Table 1: Procedure to find effective query modification given
an SVM and a working dataset.
for all positive training examples x_i with a non-zero Lagrange multiplier:
1. calculate sensitivity, ∂f/∂x evaluated at x = x_i
2. find largest d magnitude components, c, of the sensitivity
3. for all 2^d − 1 combinations of c
(a) test query modification and note statistics
(b) if precision rate is above desired value and
recall is greater than best found so far, then save
query modification.
return best found query modification
In our experiments we used a Gaussian kernel function of the
form K(x_a, x_b) = exp(−||x_a − x_b||² / (2σ²)).     (3)
The choice of σ is usually made to reflect the smoothness of the
feature space and the density of the training data. As there is no
general method for selecting σ, our choice is admittedly ad hoc;
we heuristically set σ to 15. However, one may have success
with using a cross-validation procedure for choosing σ.
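For illustration, a comparable model can be fit with an off-the-shelf SVM library; the authors used their own SMO variant, so the scikit-learn call below is only a stand-in, and the gamma = 1/(2*sigma^2) mapping assumes the kernel form written above.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.choice([-1, 1], size=(40, 100)).astype(float)  # toy {-1,+1} vectors
    y = np.sign(X[:, 0] + 0.001)                            # toy labels in {-1,+1}

    sigma, C = 15.0, 10.0
    svm = SVC(kernel="rbf", gamma=1.0 / (2.0 * sigma ** 2), C=C)
    svm.fit(X, y)
    # svm.support_ holds the indices of the support vectors;
    # svm.dual_coef_ holds alpha_i * y_i for those vectors.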
3.3 Sensitivity Analysis
After training a nonlinear SVM to classify our labeled training
data, we use a form of sensitivity analysis to identify components
of the document feature space that are important to the classifier.
Suppose that our classifier is linear in the input space, i.e.,
f(x) = w · x + b.
In the linear case, input features that are important are those with
the largest coefficient magnitudes, |w_i|. In the nonlinear case, we
can linearize the model about a point in input space with a Taylor
expansion to obtain
f̂(x) = f(v) + ∂f/∂x |_{x=v} · (x − v).
The function f̂(x) is a linear approximation to f(x) that is locally accurate
in the vicinity of v. The largest components of ∂f/∂x |_{x=v}
are those features that are important to the linear approximation.
Thus, if we want to know which inputs are most critical to computing
f(x) for choices of x near v, we can restrict our attention to
the inputs that have relatively large values of |∂f/∂x| evaluated at
x = v, which happen to be the inputs that, when changed, produce
the largest change in f(x).
But what value should v take? An interesting aspect of SVM
learning is that the non-zero Lagrange multipliers correspond to
data points that are crucial to determining the maximum margin
classifier. Any data point with a zero Lagrange multiplier is redundant
in the sense that its removal would not alter the solution. All
other data points, with Lagrange multipliers at the solution taking
non-zero values, are the so-called "support vectors" which are the
only data points strictly needed in order to build the SVM model.
For many real-world problems (text classification among them), the
number of support vectors may be dramatically fewer than the number
of data points. Thus, our search for useful values of v can be
simplified by only searching those points in the input space that
are also support vectors (i.e., have non-zero Lagrange multipliers).
Moreover, since we are more interested in discovering rules that
identify positive members of the document category, we restrict
our attention to those positive support vectors that have y_i equal to
1. In this way, our search is restricted to the few data points that are
actually important to the SVM model.
The input sensitivity of the SVM from Equation 1 with the Gaussian
kernel from Equation 3 is easily derived as:
∂f/∂x = Σ_i α_i y_i K(x_i, x) (x_i − x) / σ².
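Under this parameterization, the gradient can be computed directly from a trained model; the sketch below reuses the scikit-learn stand-in from the previous snippet and our assumed 1/(2*sigma^2) kernel form.

    import numpy as np

    def svm_gradient(svm, x, sigma):
        # df/dx for an RBF-kernel SVM evaluated at the input point x
        sv = svm.support_vectors_          # support vectors x_i
        coef = svm.dual_coef_.ravel()      # alpha_i * y_i
        diff = sv - x                      # rows are (x_i - x)
        k = np.exp(-np.sum(diff ** 2, axis=1) / (2.0 * sigma ** 2))
        return (coef * k) @ diff / sigma ** 2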
Figure 1: Rules for finding multiple regions of positive documents
can be extracted by dataset deflation. Subsequent iterations
find regions in input space that are explained by simple
linear rules. Each identified rule/region will be mostly orthogonal
(and, therefore, complementary) to previous rules/regions.
In the high-dimensional input spaces typical of text classification
problems, there are many such orthogonal regions (unlike
this 2-dimensional example).
To find an effective query modification, we use the procedure described
in Table 1. The procedure iterates over all positive training
examples that have a non-zero Lagrange multiplier. A family of
query modifications is found by identifying the largest d components
of the sensitivity vector. Since no mainstream search engine
allows the user to specify weights for individual terms (i.e., they
only allow +term and -term), we cannot use all of the information
in the coefficients of the sensitivity vector.
Instead, we treat positive coefficients as having a weight of 1,
negative with -1, and all others 0. In this way, a query modification
can be thought of as a simple threshold classifier with weights that
are all in {-1, 0, 1}. At line 3 in Table 1, we examine all possible
variations of a query modification that optionally zero out one
or more terms. This restricted search is necessary because of the
implicit thresholding that we are performing on the weights. How-
ever, the search is not as expensive as the bounds for the second for
loop suggest because the sensitivity analysis may produce duplicate
suggestions. In this case, we can hash the test query modifications,
and only evaluate those that have not been evaluated thus far.
In the end, we find a query modification that (almost always)
satisfies a pre-specified desired precision, but tends to maximize re-
call. If no query modification is found by the procedure, the search
for effective query modifications is finished.
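Putting these pieces together, the search of Table 1 might be sketched as follows; feature_names, evaluate_modification (which scores a candidate against the labeled training set, not a live search engine), and the reuse of svm_gradient from the earlier sketch are assumptions.

    import numpy as np
    from itertools import combinations

    def find_modification(svm, X, y, feature_names, sigma, d=5, min_precision=0.5):
        best, best_recall = None, -1.0
        pos_sv = [i for i in svm.support_ if y[i] == 1]  # positive support vectors
        for i in pos_sv:
            grad = svm_gradient(svm, X[i], sigma)
            top = np.argsort(-np.abs(grad))[:d]          # d most sensitive features
            signs = np.sign(grad[top])                   # +term / -term
            for r in range(1, d + 1):
                for subset in combinations(range(d), r):  # 2^d - 1 combinations
                    mod = [(feature_names[top[j]], int(signs[j])) for j in subset]
                    precision, recall = evaluate_modification(mod, X, y)
                    if precision >= min_precision and recall > best_recall:
                        best, best_recall = mod, recall
        return best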
3.4 Rinse and Repeat
After finding the first effective query modification, we find additional
query modifications by altering the training dataset for the
SVM. All true positive training exemplars are removed from the
dataset, leaving all negative exemplars and false negatives.
By deflating the dataset in this manner, we can find multiple local
linear rules that are all effective, especially when fused together.
Figure 1 shows a stylized example of how the method can work. In
the example, we have three regions that are mostly disjoint from
one another. For each region, the sensitivity analysis discovers a
linear rule with the following properties. First, the location of the
decision boundary will always be on the edge of the region that is
closest to the other regions (which is guaranteed by only searching
over support vectors). Second, the decision boundary will be
parallel to the direction of greatest sensitivity of the region (which
also implies that the decision boundary will be perpendicular to the
region's direction of greatest variance). As a result of these two
properties, each decision boundary can be viewed as the tightest
"slice" that removes a region from the rest of the training examples
Composing multiple "slices" in this way allows us to potentially
separate large regions of positive examples with a small number of
rules. Moreover, the procedure is structured so that each rule is a
conjunction of terms while the composition of the rules is formed
as a disjunction (i.e., disjunctive normal form); as a result, the rules
can be submitted to a search engine as multiple queries.
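The outer deflation loop can then be written in a few lines; train_svm and apply_modification are hypothetical wrappers around the steps described above.

    def extract_all_modifications(X, y, feature_names, sigma, min_precision=0.5):
        mods = []
        while True:
            svm = train_svm(X, y, sigma)        # retrain on the deflated dataset
            mod = find_modification(svm, X, y, feature_names, sigma,
                                    min_precision=min_precision)
            if mod is None:                     # no rule satisfies the precision
                return mods
            mods.append(mod)
            matched = apply_modification(mod, X)   # boolean mask of matched docs
            keep = ~(matched & (y == 1))           # drop the true positives only
            X, y = X[keep], y[keep]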
4. EXPERIMENTAL RESULTS
We now describe how our method has been used to generate effective
query modifications for identifying personal home pages
and conference pages. For our results, we use the notation ti-
tle:term to indicate that term should occur in the title of a doc-
ument. Many search engines, such as AltaVista, support queries of
this form. All other query modifications specify terms that should
appear in the title or the full text of the document.
4.1 Personal Home Pages
The training set for this experiment consisted of 250 personal
home pages and 999 random URLs that were given negative labels.
All of the documents were preprocessed into vectors of 100 fea-
tures. The negative exemplars contained approximately 1% false
positives, based on human inspection of random samples. We set
the desired precision to be at least 50%. The SVM was optimized
with σ set to 7, C set to 5, and d from the extraction procedure was
set to 5. The extracted query modifications are listed in Table 2.
In the table, one line represents a single query modification. Notice
that all of the query modifications contain less than four additional
search terms even though our procedure searched for as
many as five terms in a single modification.
To measure the effectiveness of the query modifications, we tested
three queries without any modifications and with the first four generated
query modifications. The results are presented in Table 4.
Whenever a search engine returned more than 50 results, we only
manually classified the first 50 pages; performing the statistics in
this manner is consistent with our goal since we are using the query
modifications to bias the search engine's ranking function.
Table 2: Extracted query modifications from personal home
page data with desired precision set to 50%.
QM# extracted query modification
"i am" "my favorite" "sign my"
3 "is my" "my page" "interests"
4 title:home "s home"
5 interests "my page" "me at"
6 "sign my" title:home
7 "my page" interests "welcome to my"
Table 3: Extracted query modifications from conference page
data with desired precision set to 50%.
QM# extracted query modification
3 conference hotel workshops
4 "this workshop"
5 "fax email" "the conference"
"sponsored by"
As can be seen in Table 4, the fourth query modification for home
pages gives better precision in all cases, but the merged results of
all four query modifications gives the highest recall as indicated by
the low overlap between returned results. Moreover, in each case,
the combined precision is at least as high as the desired precision
of 50%.
We also note that our test produced many pages that were listings
of links to personal home pages, "about me" type pages, or CV
and resume pages; thus, while these pages were labeled as false
positives for our statistics, they are not too far from being desirable
results.
4.2 Conference Pages
The conference web page data consisted of 529 negative examples
and 150 positive examples. All of the documents were preprocessed
into vectors of 300 features. As before, we set the desired
precision to be 50%. The SVM was optimized with σ set to 15, C
set to 10, and d from the extraction procedure was set to 5 as in the
earlier test. The first five extracted query modifications are shown
in
Table
3.
As in the earlier test case, we used the top four query modifications
to augment three test search queries: "neural networks",
"learning", and "linux". The test results are summarized
in Table 5. For this experiment, we manually classified the
first 20 web pages for all queries (classifying conference pages by
hand is more difficult than home pages).
As can be seen, the merged results maintained a precision level
that was close to the desired precision, while increasing recall compared
to the individual query modifications. Moreover, the combined
query modifications offer some insurance in case any single
query modification fails.
5. DISCUSSION AND CONCLUSIONS
As stated in Section 2, our motivation for this work is to automate
the process of generating effective query modifications for
the Inquirus 2 metasearch engine. Previously, our query modifications
were human-generated and selected based on the very unscientific
notion that they seemed to "make sense." While some of
the human-generated queries were supported by our experiments,
our automated method discovered many novel query modifications,
which, in turn, suggest some interesting facts about documents on
the Web and about the method we propose in this paper.
5.1 Category Patterns
As part of this work, we performed many experiments on personal
home pages that revealed interesting trends for the types of
home pages that exist, and the different usages of language on the
pages, which can be used to distinguish among them. For example,
in one experiment we found geocities to be a very effective
modifier because of the large number of free personal web pages
hosted by GeoCities. We also found that children's home pages
often made reference to their favorite things, while academics
often wrote of their research interests. Thus, our method for
finding multiple query modifications appears to identify language
trends that occur in subgroups of a category.
While it is unlikely to derive a set of query modifications that has
100% recall and 100% precision, we believe that our work supports
the use of multiple query modifications for increasing recall while
maintaining a desired precision.
5.2 "Eigenqueries" & Data Deflation
One key facet of our proposed method is the dataset deflation
step, which eliminates all true positives from the training data, and
retrains the SVM. The procedure in Table 1 performs a constrained
and limited search for an effective query modification. To make
an analogy, the query modification found in this step is akin to
an "eigenquery" in that it spans a large portion of the training data.
By removing the true positives from the dataset, at the next iteration
we force a procedure to find query modifications that complement
the previously found query modifications. Continuing the
analogy, data deflation is similar to factoring out an eigenvector
from a matrix so that the next eigenvector can be found.
In this way, our proposed method attempts to span a large portion
of document space by identifying as many "eigenqueries" as
possible.
5.3 Dimensionality Reduction
Another benefit of our approach is that it performs dimensionality
reduction in at least four different ways. During preprocessing,
we eliminate n-grams that are either too common or too uncom-
mon. While it is possible for this preprocessing to eliminate features
that are key to identifying a subset of the positive examples,
we believe that reducing the feature space via our preprocessing is
warranted considering that without it the feature space would consist
of hundreds of thousands of words or phrases.
Our search for sensitive positive exemplars is also assisted by the
use of SVMs as a nonlinear model because the discovered support
vectors (exemplars with positive Lagrange multipliers) are exactly
the training points whose removal would alter the solution. Thus,
by only searching sensitivities for positive support vectors, we reduce
the number of documents which must be considered.
The sensitivity analysis also provides a ranking of features in the
input space. By only considering the largest d components, we can
efficiently search for the optimal d-dimensional query modification
that uses those d input features.
Finally, and as previously mentioned, the dataset deflation step
eliminates a significant portion of the training data at each step,
which has the effect of speeding training time for the SVM and
Table 4: Test results for using extracted home page query modifications: "Total Pages" refers to the number examined, which were
always the top results, with a maximum of 50 pages examined at most.
query    home pages    total pages    precision
+"information retrieval" 0 50 0%
+"information retrieval" +title:s +title:page +"about me" 3 3 100%
+"information retrieval" +"i am" +"my favorite" +"sign my" 0 1 0%
+"information retrieval" +"is my" +"my page" +"interests" 0 12 0%
+"information retrieval" +title:home +"s home" 46 50 96%
(NO DUPLICATES)
+"beagles" +title:s +title:page +"about me" 5 7 71%
+"beagles" +"is my" +"my page" +"interests" 0 9 0%
+"beagles" +title:home +"s home" 39 50 78%
(4 POSITIVE DUPLICATES)
+"starcraft" +title:s +title:page +"about me" 24 34 71%
+"starcraft" +"i am" +"my favorite" +"sign my" 27 50 54%
+"starcraft" +"is my" +"my page" +"interests" 15
+"starcraft" +title:home +"s home" 44 50 88%
Table 5: Test results for using extracted conference page query modifications: "Total Pages" refers to the number examined, which
were always the top results, with a maximum of 20 pages examined at most.
query    conference pages    total pages    precision
+"neural networks" 1 20 5%
+"neural networks" +"call for"
+"neural networks" +"program committee" 12 20 60%
+"neural networks" +conference +hotel +workshops 17 20 85%
+"neural networks" +"this workshop"
(4 POSITIVE DUPLICATES, 1 NEGATIVE DUPLICATE)
+"learning" +conference +hotel +workshops 15 20 75%
(NO DUPLICATES)
+"linux" +conference +hotel +workshops 12 20 60%
simplifying the document space for which we are searching.
5.4 Summary and Conclusions
Two alternative extremes for identifying query modifications would
be to use linear classifiers or an exhaustive search in the feature
space. The former method fails because it cannot work effectively
in the linearly inseparable case. The latter approach is problematic
because of the exponential number of binary classifiers that
would need to be considered. Our approach exploits the underlying
regularity of the documents that make up the WWW. We can
discover higher-order correlations in the form of query modifications
that contain multiple terms-even with a linearly inseparable
feature space-but we can also guarantee that our procedure will
terminate in a reasonable amount of time.
It is also interesting to compare our approach to a "straw man"
approach represented by a more aggressive procedure that attempts
to construct a large set of ridiculously complicated query modifica-
tions, each of which identifies only one or two pages. Such a set
essentially acts as a lookup table for the training data. In this case,
one would have little hope of generalizing to real-world data. By
enforcing dimensionality reduction at many steps, we find query
modifications that appear to generalize to real-world data despite
the relatively small size of our training sets.
This last issue is especially important for very restricted searches,
say, one in which we are trying to find the personal home page of a
particular person (with a very common name) instead of one about
a particular topic. By restricting the complexity of our query modifications
and striving for maximum recall, our approach of using
multiple query modifications can further increase the likelihood of
finding very narrow search results which might normally be ranked
beyond the limit of a given search engine.
6. ACKNOWLEDGEMENTS
We thank Frans Coetzee for insightful discussions, Sven Heinicke
and Andrea Ples for help in performing experiments, and the anonymous
reviewers for many helpful comments.
7. REFERENCES
--R
Fast effective rule induction.
Efficient SVM regression training with SMO.
Intelligent fusion from multiple
Architecture of a metasearch engine that supports user information needs.
Recommending web documents based on user preferences.
Improving category specific web search by learning query modifications.
Text categorization with support vector machines: learning with many relevant features.
Transductive inference for text classification using support vector machines.
Context and page analysis for improved web search.
Fast training of support vector machines using sequential minimal optimization.
Using sparseness and analytic QP to speed training of support vector machines.
The MetaCrawler architecture for resource aggregation on the Web.
Extracting refined rules from knowledge-based neural networks
The Nature of Statistical Learning Theory.
--TR
Extracting Refined Rules from Knowledge-Based Neural Networks
The nature of statistical learning theory
Fast training of support vector machines using sequential minimal optimization
Architecture of a metasearch engine that supports user information needs
Efficient SVM Regression Training with SMO
Context and Page Analysis for Improved Web Search
Text Categorization with Suport Vector Machines
Transductive Inference for Text Classification using Support Vector Machines
Improving Category Specific Web Search by Learning Query Modifications
--CTR
Hai Zhuge, Fuzzy resource space model and platform, Journal of Systems and Software, v.73 n.3, p.389-396, November-December 2004
Masayuki Okabe , Kyoji Umemura , Seiji Yamada, Query expansion with the minimum user feedback by transductive learning, Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, p.963-970, October 06-08, 2005, Vancouver, British Columbia, Canada
Gabriel L. Somlo , Adele E. Howe, Using web helper agent profiles in query generation, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, July 14-18, 2003, Melbourne, Australia
Aris Anagnostopoulos , Andrei Z. Broder , Kunal Punera, Effective and efficient classification on a search-engine model, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
Luis Gravano , Vasileios Hatzivassiloglou , Richard Lichtenstein, Categorizing web queries according to geographical locality, Proceedings of the twelfth international conference on Information and knowledge management, November 03-08, 2003, New Orleans, LA, USA
Luis Gravano , Panagiotis G. Ipeirotis , Mehran Sahami, QProber: A system for automatic classification of hidden-Web databases, ACM Transactions on Information Systems (TOIS), v.21 n.1, p.1-41, January
Bernard J. Jansen , Tracy Mullen , Amanda Spink , Jan Pedersen, Automated gathering of Web information: An in-depth examination of agents interacting with search engines, ACM Transactions on Internet Technology (TOIT), v.6 n.4, p.442-464, November 2006 | query modification;sensitivity analysis;support vector machine;rule extraction |
511496 | Choosing reputable servents in a P2P network. | Peer-to-peer information sharing environments are increasingly gaining acceptance on the Internet as they provide an infrastructure in which the desired information can be located and downloaded while preserving the anonymity of both requestors and providers. As recent experience with P2P environments such as Gnutella shows, anonymity opens the door to possible misuses and abuses by resource providers exploiting the network as a way to spread tampered with resources, including malicious programs, such as Trojan Horses and viruses.In this paper we propose an approach to P2P security where servents can keep track, and share with others, information about the reputation of their peers. Reputation sharing is based on a distributed polling algorithm by which resource requestors can assess the reliability of perspective providers before initiating the download. The approach nicely complements the existing P2P protocols and has a limited impact on current implementations. Furthermore, it keeps the current level of anonymity of requestors and providers, as well as that of the parties sharing their view on others' reputations. | INTRODUCTION
In the world of Internet technologies, peer-to-peer (P2P)
solutions are currently receiving considerable interest [6].
P2P communication software is increasingly being used to
allow individual hosts to anonymously share and distribute
various types of information over the Internet [15]. While
systems based on central indexes such as Napster [13] collapsed
due to litigations over potential copyright infringe-
ments, the success of 'pure' P2P products like Gnutella [20]
and Freenet [4] fostered interest in defining a global P2P infrastructure
for information sharing and distribution. Several
academic and industrial researchers are currently involved
in attempts to develop a common platform for P2P
applications and protocols [5, 9, 14, 17]. Still, there are
several thorny issues surrounding research on P2P architectures
[3].
First of all, popular perception still sees P2P tools as a
way to trade all kinds of digital media, possibly without the
permission of copyright owners, and the legacy of early underground
use of P2P networks is preventing the full acceptance
of P2P technologies in the corporate world. Indeed,
P2P systems are currently under attack by organizations
like the RIAA (Recording Industry Association of Amer-
ica) and MPAA (Motion Picture Association of America),
which intend to protect their intellectual property rights
that they see violated by the exchange of copyright materials
permitted by P2P systems. This opposition is testified
by the recent lawsuit filed against P2P software distributors
Grokster, KaZaA and MusicCity by the RIAA and MPAA,
and by the previous successful lawsuit against Napster filed
by the RIAA. Of course, with this work we do not intend to
support the abuse of intellectual property rights. Our interest
arises from the observation that P2P solutions are seeing
an extraordinary success, and we feel that a self-regulating
approach may be a way to make these architectures compliant
with the ethics of the user population and isolate from
the network the nodes offering resources that are deemed
inappropriate by the users.
Secondly, a widespread security concern is due to the complete
lack of peers' accountability on shared content. Most
systems protect peers' anonymity allowing them to use
self-appointed opaque identifiers when advertising shared information
(though they require peers to disclose their IP address
when downloading). Also, current P2P systems neither
have a central server requiring registration nor keep
track of the peers' network addresses. The result of this
approach is a kind of weak anonymity, which does not fully
avoid the risks of disclosing the peers' IP addresses, prevents
the use of conventional web of trust techniques [11],
and allows malicious users to exploit the P2P infrastructure
to freely distribute Trojan Horse and Virus programs.
Some practitioners contend that P2P users are no more exposed to viruses than when downloading files from the Internet
through conventional means such as FTP and the Web,
and that virus scanners can be used to prevent infection
from digital media downloaded from a P2P network. How-
ever, using P2P software undeniably increases the chances
of being exposed, especially for home users who cannot rely
on a security policy specifying which anti-virus program to
use and how often to update it; moreover, with FTP and
the Web, users most typically execute downloaded programs
only when they trust the site where the programs have been
downloaded from. We believe that the future development
of P2P systems will largely depend on the availability of
novel provisions for ensuring that peers obtain reliable information
on the quality of the resources they are retrieving.
In the P2P scenario, such information can only be obtained
by means of peer review , that is, relying on the peers' opinions
to establish a digital reputation for information sources
on the P2P network.
Our digital reputations can be seen as the P2P counterparts of client-server digital certificates [7, 8], but present two major differences that require them to be maintained and processed very differently. First of all, reputations must be associated with self-appointed opaque identifiers rather than with externally obtained identities. Therefore, keeping a stable identifier (and its good reputation) through several transactions must provide a considerable benefit for peers wishing to contribute information to the network, while continuously re-acquiring newcomer status must not be too much of an advantage for malicious users changing their identifier in order to avoid the effect of a bad reputation. Secondly, while digital certificates have a long life-cycle, the semantics of digital reputations must allow for easily and consistently updating them at each interaction; in our approach, reputations simply certify the experience accumulated by other peers when interacting with an information source, and smoothly evolve over time via a polling procedure. As we shall see, our technique can be easily integrated
with existing P2P protocols.
2. ARCHITECTURES FOR
PEER-TO-PEER NETWORKS
The term peer-to-peer is a generic label assigned to network architectures where all the nodes offer the same services and follow the same behavior. In Internet jargon, the P2P label represents a family of systems where the users of the network overcome the passive role typical of Web navigation and acquire an active role, offering their own resources. We focus on P2P networks for file exchange, where the P2P label clarifies that nodes have flexible roles and may function at the same time as clients and servers. Typically, P2P applications offer a default behavior: they immediately make available as servers all the files they retrieved as clients. Because of this dual nature of server and client, a node in a P2P network is called a servent.
The use of a P2P network for information exchange involves two phases. The first phase is the search for the servent where the requested information resides. The second phase, which occurs when a servent has identified another servent exporting a resource of interest, requires establishing a direct connection to transfer the resource from the exporting servent to the searching servent.
While the exchange phase is rather direct and its behavior is relatively constant across different architectures, the first phase is implemented in many different ways and most characterizes the different solutions. We identify three
main alternatives: centralized indexes, pure P2P architec-
tures, and intermediate solutions.
The best representative of centralized solutions is Napster,
a system dedicated to the exchange of audio files. Napster was the first P2P application to gain considerable success and recognition. It used a centralized indexing service that described what each servent of the network was offering to the other nodes. Based on its indexes, Napster was able to efficiently answer search queries originating from servents in the network and direct them to the servent offering the requested resource. Napster reached a peak of 1.5 million users connected at the same time, before being forced to activate filters on the content that users were offering, to eliminate copyrighted materials from the indexes. Combined with the introduction of a paid subscription mechanism, this forced a rapid decline in the number of Napster users, and currently the service is not operational. Other solutions were quick to emerge to fill the void left by Napster, avoiding the centralization that permitted Napster to offer good performance but also forced it to take responsibility for the content
that users were exchanging.
The best known representatives of pure P2P architectures
are Gnutella and Freenet. Gnutella was originally designed
by Nullsoft, owned by America OnLine, but was immediately
abandoned by AOL and is currently maintained by
a number of small software producers. Gnutella is a distributed
architecture, where all the servents of the network
establish a connection with a variable number of servents,
creating a grid where each servent is responsible for transferring queries and their answers. Freenet is an open source architecture explicitly designed for the robust anonymous diffusion of information. Each resource is identified by a key, and support for searches is strictly based on this key. Freenet is designed to offer sophisticated services for the protection of the integrity and the automatic distribution of files near to the servents where requests are more frequent. Freenet currently offers a low degree of usability, which
limits its use to a relatively restricted number of adopters,
compared with the other solutions.
Intermediate architectures have recently emerged. The
best representative of this family is the product developed by
FastTrack (www.fasttrack.nu), a company originally based
in the Netherlands, and now owned by an Australian com-
pany. The FastTrack's software has been licensed to companies
KaZaA, MusicCity, and Grokster. FastTrack distinguishes
its servents in supernodes and nodes: supernodes
are servents which are responsible for indexing the network
content and in general have a major role in the organization
of the network.
Figure 1: Locating resources in a Gnutella-like P2P environment
A node is eligible to become a supernode
only if it is characterized by adequate resources, in terms of
bandwidth and computational power. Files shared on the
network are enriched with metadata, automatically generated
or input by the user, that permit more precise searches.
This solution is particularly successful: at the time of writ-
ing, February 2002, there are reports of 80 million downloads
of the application, 1.5 million users connected on average at
any time, and almost 2 billion files expected to be exchanged
in the month.
We will use Gnutella throughout the paper as a reference,
because it is an open protocol and simple open source implementations
are available that make it possible to experiment with
our protocol variants.
2.1 Basic description of Gnutella
Gnutella offers a fully peer-to-peer decentralized infrastructure
for information sharing. The topology of a Gnutella
network graph is meshed, and all servents act both as clients
and servers and as routers propagating incoming messages
to neighbors. While the total number of nodes of a network
is virtually unlimited, each node is linked dynamically to a
small number of neighbors, usually between 2 and 12. Messages, which can be broadcast or unicast, are labeled by a unique identifier, used by the recipient to detect where the message comes from. This feature allows replies to broadcast messages to be unicast when needed. To reduce network congestion, all the packets exchanged on the network are characterized by a given TTL (Time To Live) that creates a horizon of visibility for each node on the network. The horizon is defined as the set of nodes residing on the
network graph at a path length equal to the TTL and reduces
the scope of searches, which are forced to work on
only a portion of the resources globally oered.
To search for a particular file, a servent p sends a broadcast Query message to every node linked directly to it (see Figure 1). The fact that the message is broadcast through the P2P network implies that nodes not directly connected with p will receive this message via intermediaries; they do not know the origin of the request. Servents that receive the query and have the requested file in their repository answer with a QueryHit unicast packet that contains a ResultSet plus their IP address and the port number of a server process from which the files can be downloaded
using the HTTP protocol. Although p is not known to the
responders, responses can reach p via the network by following
in reverse the same connection arcs used by the query.
Servents can gain a complete vision of the network within
the horizon by broadcasting Ping messages. Servents within
the horizon reply with a Pong message containing the number
and size of the files they share. Finally, communication
with servents located behind firewalls is ensured by means
of Push messages. A Push message behaves more or less
like passive communication in traditional protocols such as
FTP, inasmuch as it requires the "pushed" servent to initiate
the connection for downloading.
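The TTL-limited flooding just described can be illustrated with a small simulation. The sketch below is only a toy model under our own assumptions (a dictionary-based topology and an abstract Query), not the actual Gnutella wire protocol; it shows how the TTL bounds the horizon within which a broadcast is seen.

```python
from collections import deque

def flood_query(graph, origin, ttl):
    """Return the set of nodes reached by a TTL-limited broadcast.

    graph  : dict mapping node -> list of neighbour nodes
    origin : node issuing the Query
    ttl    : hop limit (the 'horizon' of the search)
    """
    reached = set()
    queue = deque([(origin, ttl)])
    seen = {origin}
    while queue:
        node, hops = queue.popleft()
        if hops == 0:
            continue
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                reached.add(neighbour)
                queue.append((neighbour, hops - 1))
    return reached

# Toy topology: node 0 is the requester; with ttl=2 node 4 stays outside the horizon.
topology = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(flood_query(topology, origin=0, ttl=2))   # {1, 2, 3}
```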
2.2 Security threats to Gnutella
Gnutella is a good testbed for our security provisions, as
it is widely acknowledged that its current architecture provides
an almost ideal environment for the spread of self-replicating
malicious agents. This is due to two main features
of Gnutella's design: anonymous peer-to-peer communication
(searches are made by polling other Gnutella clients
in the community and clients are anonymous as they are
only identified by an opaque, self-appointed servent id), and the variety of the shared information (the files authorized to be shared can include all media types, including executable and binary files). The former feature involves a weakness due to
the combination of low accountability and trust of the individual
servents. In an ordinary Internet-based transaction, if
malicious content is discovered on a server the administrator
can be notified. On Napster, if a user was caught distributing
malicious content, his account could be disabled. In
Gnutella, anyone can attach to the network and provide malicious
content tailored to specific search requests with relatively
small chance of detection; even blacklisting hostile IPs
is not a satisfactory countermeasure, as there are currently
no mechanisms to propagate this information to the network
servents (in a pure distributed architecture there is no central
authority to trust), and in many situations servents use
dynamically assigned IPs. Gnutella clients are then more
easily compromised than Napster clients and other file sharing
tools. For instance, the well-known VBS.Gnutella worm
(often mis-called the Gnutella virus) spreads by making a
copy of itself in the Gnutella program directory; then, it
modifies the Gnutella.ini file to allow sharing of .vbs files
in the Gnutella program folder. Other attacks that have
been observed rely on the anonymity of users: under the
shield of anonymity, malicious users can answer virtually any query, providing tampered-with information. As we shall see in Section 5, these attacks can be prevented, or their effects mitigated, by increasing the accountability of
Gnutella servents.
3. SKETCH OF THE APPROACH
Each servent has an associated, self-appointed servent id,
which can be communicated to others when interacting, as
established by the P2P communication protocol used. The
servent id of a party (intuitively a user connected at a ma-
chine) can change at any instantiation or remain persis-
tent. However, persistence of a servent id does not affect the anonymity of the party behind it, as the servent id works only as an opaque identifier. 1 Our approach encourages persistence as the only way to maintain the history of a servent id
across transactions.
As illustrated in the previous section, in a Gnutella-like
environment, a servent p looking for a resource broadcasts
a query message, and selects, among the servents responding
to it (which we call offerers), the one from which to execute the download. This choice is usually based on the offer quality (e.g., the number of hits and the declared connection
speed) or on preference criteria based on its past
experiences.
Our approach, called P2PRep, is to allow p, before deciding
from where to download the resource, to enquire about
the reputation of offerers by polling its peers. The basic idea
is as follows. After receiving the responses to its query, p
can select a servent (or a set of servents) based on the quality
of the oer and its own past experience. Then, p polls
its peers by broadcasting a message requesting their opinion
about the selected servents. All peers can respond to
the poll with their opinions about the reputation of each of
such servents. The poller p can use the opinions expressed
by these voters to make its decision. We present two flavors of our approach. In the first solution, which we call basic polling, the servents responding to the poll do not provide their servent id. In the second solution, which we call enhanced polling, voters also declare their servent id, which
can then be taken into account by p in weighting the votes
received (p can judge some voters as being more credible
than others).
The intuitive idea behind our approach is therefore very
simple. A little complication is introduced by the need to
prevent exposure of polling to security violations by malicious
peers. In particular, we need to ensure authenticity of
servents acting as offerers or voters (i.e., preventing imper-
sonation) and the quality of the poll. Ensuring the quality
of the poll means ensuring the integrity of each single vote
(e.g., detecting modifications to votes in transit) and ruling out the possibility of dummy votes expressed by servents
acting as a clique under the control of a single malicious
party. In the next section we describe how these issues are
addressed in our protocols.
4. REPUTATION-BASED SOURCE
SELECTION PROTOCOLS
Both our protocols assume the use of public key encryption
to provide integrity and confidentiality of message ex-
changes. Whether permanent or fresh at each interaction,
we require each servent id to be a digest of a public key, obtained
using a secure hash function [2] and for which the servent
knows the corresponding private key. This assumption
allows a peer talking to a servent id to ensure that its counterpart
knows the private key whose corresponding public key the servent id is a digest of. A pair of keys is also generated on the fly for each poll. In the following we will use (pk_i, sk_i) to denote a pair of public and private keys associated
with i, where i can be a servent or a poll request.
We will use {M}_K and [M]_K to denote the encryption and
signature, respectively, of a message M under key K.
(Footnote 1: It must be noted that, while not compromising anonymity, persistent identifiers introduce linkability, meaning that transactions coming from the same servent can be related to each other.)
Also, in illustrating the protocols, we will use p to denote the pro-
tocol's initiator, S to denote the set of servents connected to
the P2P network at the time p sends the query, O to denote
the subset of S responding to the query (offerers), and V
to denote the subset of S responding to p's polling (voters).
A message transmission from servent x to servent y via the P2P network will be represented as x → y, where "*" appears instead of y in the case of a broadcast transmission. A direct message transmission (outside the P2P network) from servent x to servent y will be represented as x →D y (the superscript D denoting a direct connection).
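As a minimal sketch of the identity scheme described above, the following fragment derives a servent id as the digest of a public key and performs the challenge-response check a poller can run before interacting with a peer. It assumes the third-party `cryptography` package, and the choice of RSA with SHA-1 for the id digest and SHA-256 for signatures is ours; the paper does not prescribe specific algorithms at this point.

```python
import os
import hashlib
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization

def make_identity():
    """Generate a key pair and derive the servent id as a digest of the public key."""
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_der = private_key.public_key().public_bytes(
        serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
    servent_id = hashlib.sha1(public_der).hexdigest()
    return private_key, public_der, servent_id

def counterpart_is_genuine(declared_id, public_der, sign_fn):
    """Poller side: check that the peer knows the private key behind declared_id."""
    if hashlib.sha1(public_der).hexdigest() != declared_id:
        return False                         # public key does not match the servent id
    challenge = os.urandom(16)               # the random string r of the protocol
    signature = sign_fn(challenge)           # peer signs the challenge with its private key
    public_key = serialization.load_der_public_key(public_der)
    try:
        public_key.verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())
        return True
    except Exception:
        return False

# Servent s builds its identity; poller p verifies it before downloading.
sk_s, pk_s_der, servent_id_s = make_identity()
sign = lambda msg: sk_s.sign(msg, padding.PKCS1v15(), hashes.SHA256())
print(counterpart_is_genuine(servent_id_s, pk_s_der, sign))   # True
```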
4.1 Basic polling
The basic polling solution, illustrated in Figure 2, works
as follows. As in the conventional Gnutella protocol, the servent p looking for a resource sends a Query indicating the resource it is looking for. Every servent receiving the query and willing to offer the requested resource for download sends back a QueryHit message stating how it satisfies the query (i.e., the number of query hits, the set of responses, and the speed in Kb/second) and providing its servent id and its ⟨IP, port⟩ pair, which p can use for downloading. Then, p
selects its top list of servents T and polls its peers about the
reputations of these servents. In the poll request, p includes
the set T of servent ids about which it is enquiring and a
public key generated on the fly for the poll request, with
which responses to the poll will need to be encrypted. 2 The
poll request is sent through the P2P network and therefore p
does not need to disclose its servent id or its IP to be able to
receive back the response. Peers receiving the poll request
and wishing to express an opinion on any of the servents
in the list, send back a PollReply expressing their votes
and declaring their ⟨IP, port⟩ pair (like when responding to
queries). The poll reply is encrypted with the public key
provided by p to ensure its confidentiality (of both the vote
and the voters) when in transit and to allow p to check its
integrity. Therefore, as a consequence of the poll, p receives
a set of votes, where, for each servent in T , some votes can
express a good opinion while some others can express a bad
opinion. To base its decision on the votes received, p needs
to trust the reliability of the votes. Thus, p first uses decryption to detect tampered-with votes and discards them. Second, p detects votes that appear suspicious, for example because they come from IPs suspected of representing a clique (we will elaborate more on this in Section 4.4). Third, p selects a set of voters that it directly contacts (by using the ⟨IP, port⟩ pair they provided) to check whether they actually
expressed that vote. For each selected voter v_j, p directly sends a TrueVote request reporting the votes it has received from v_j, and expects back a confirmation message TrueVoteReply from v_j confirming the validity of the vote.
This forces potential malicious servents to pay the cost of
using real IPs as false witnesses. Note that, of course, nothing forbids malicious servents from completely throwing away the votes in transit (but, if so, they could have done this by blocking the QueryHit in the first place). Also note that servents
will not be able to selectively discard votes, as their recipient
is not known and their content, being encrypted with
p's poll public key, is not visible to them. Upon assessing
correctness of the votes received, p can finally select the offerer it judges as its best choice. Different criteria can be adopted, and any servent can use its own. For instance, p can choose the offerer with the highest number of positive votes, the one with the highest number of positive votes among the ones for which no negative vote was received, the one with the highest difference between the number of positive and negative votes, and so on.
(Footnote 2: In principle, p's key could be used for this purpose, but this choice would disclose the fact that the request is coming from p.)
Figure 2: Sequence of messages and operations in the basic polling protocol (a) and download of files from the selected servent (b)
At this point, before actually initiating the download, p
challenges the selected offerer s to assess whether it corresponds to the declared servent id. Servent s will need to respond with a message containing its public key pk_s and the challenge signed with its private key sk_s. If the challenge-response exchange succeeds and the digest of pk_s corresponds
to the servent id that s has declared, then p will know that
it is actually talking to s. Note that the challenge-response
exchange is done via direct communication, like the down-
load, in order to prevent impersonation by which servents
can offer resources using the servent id of other peers. With
the authenticity of the counterpart established, p can initiate
the download and, depending on its satisfaction for the
operation, update its reputation information for s.
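To make the flow of the basic protocol concrete, here is a deliberately simplified sketch of the poller's decision logic. Encryption, P2P routing, and the exact message formats are abstracted behind plain method calls (`ask_opinion` and `confirm_vote` are illustrative names, not part of the protocol), and the selection rule at the end is just one of the criteria mentioned above.

```python
import random
from collections import Counter

def basic_polling(offerers, peers, sample_size=3):
    """Pick a download source among `offerers` using votes gathered from `peers`.

    offerers : servent ids that answered the Query
    peers    : objects exposing ask_opinion(servent_id) -> 1, 0 or None,
               and confirm_vote(servent_id, vote) -> bool (the TrueVote check)
    """
    votes = []                                    # (voter, servent_id, vote)
    for peer in peers:                            # Poll broadcast / PollReply
        for sid in offerers:
            opinion = peer.ask_opinion(sid)
            if opinion is not None:
                votes.append((peer, sid, opinion))

    # Directly re-contact a random sample of voters (TrueVote / TrueVoteReply).
    sample = random.sample(votes, min(sample_size, len(votes)))
    confirmed = [v for v in votes
                 if v not in sample or v[0].confirm_vote(v[1], v[2])]

    # One possible selection rule: the offerer with most positive confirmed votes.
    score = Counter(sid for _, sid, vote in confirmed if vote == 1)
    return max(offerers, key=lambda sid: score.get(sid, 0))

class Peer:
    """Toy peer that answers polls from a fixed opinion table and never lies."""
    def __init__(self, opinions): self.opinions = opinions
    def ask_opinion(self, sid): return self.opinions.get(sid)
    def confirm_vote(self, sid, vote): return True

peers = [Peer({"s1": 1, "s2": 0}), Peer({"s1": 1}), Peer({"s2": 1})]
print(basic_polling(["s1", "s2"], peers))         # s1
```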
4.2 Enhanced polling protocol
The enhanced polling protocol differs from the basic solution
by requesting voters to provide their servent id. Intu-
itively, while in the previous approach a servent only maintains a local record of its peers' reputations, in the enhanced solution each servent also keeps track of the
credibility of its peers, which it will use to properly weight
the votes they express when responding to a polling request.
The approach, illustrated in Figure 3, works as follows. Like
for the basic case, after receiving the QueryHit responses
and selecting its top list T of choice, p broadcasts a poll
request enquiring its peers about the reputations of servents
in T . A servent receiving the poll request and wishing to
express an opinion on any of the servents in T can do so by
responding to the poll with a PollReply message in which,
unlike for the basic case, it also reports its servent id. More
precisely, PollReply reports, encrypted with the public key pk_poll, the public key pk_i of the voter and its vote declarations signed with the corresponding private key sk_i. The vote declaration contains the ⟨IP, port⟩ pair and the set of votes, together with the servent id of the voter.
Figure 3: Sequence of messages and operations in the enhanced polling protocol (a) and interactions with the selected servent (b)
Once more, the fact that votes are encrypted with pk_poll protects their confidentiality and allows the detection of integrity violations. In addition, the fact that votes are signed with the
voter's private key guarantees the authenticity of their ori-
gin: they could have been expressed only by a party knowing
the private key associated with the servent id. Again, after collecting all the
replies to the poll, p carries out an analysis of the votes received
removing suspicious votes and then selects a set of
voters to be contacted directly to assess the correct origin
of votes. This time, the direct contact is needed to prevent a servent id from declaring fake IPs (there is no longer any need to check the integrity of the vote, as the vote's signature guarantees it). Selected voters are then directly contacted, via the ⟨IP, port⟩ pair they provided, with a message AreYou reporting the servent id that was associated with this pair
in the vote. 3 Upon this direct contact, the voter responds
(Footnote 3: Note that it is not sufficient to determine that the ⟨IP, port⟩ pair is alive, since any servent id could abuse ⟨IP, port⟩ pairs that are available but disconnected from the P2P network.)
with an AreYouReply message confirming its servent id. Servent
p can now evaluate the votes received in order to select,
within its top list T, the servent it judges best according to i) the connection speed, ii) its own reputation information about the servents, and iii) the reputations expressed in the votes received. While in the basic polling all votes were considered equal (after the removal of suspicious votes and the aggregation of suspected cliques), the knowledge about the servent ids
of the voters allows p to weight the votes received based on
who expressed them. This distinction is based on credibility
information maintained by p and reporting, for each servent
s that p wishes to record, how much p trusts the opinions
expressed by s (see Subsection 4.3).
As in the basic case, we assume that, before downloading, a challenge-response exchange is executed to verify that the contacted servent s knows the private key sk_s whose corresponding public key pk_s has a digest equal to the declared servent id. After the download, and depending on the
success of the download, p can update the reputation and
credibility information it maintains.
4.3 Maintaining servents' reputations and
credibilities
When illustrating the protocols we simply assumed that
each servent maintains some information about how much
it trusts others with respect to the resources they offer (reputation) and the votes they express (credibility). Different
approaches can be used to store, maintain, and express such
information, as well as to translate it in terms of votes and
vote evaluation. Here, we illustrate the approach we adopted
in our current implementation.
4.3.1 Representing Reputations
Each servent s maintains an experience repository as a
set of triples (servent_id, num_plus, num_minus) associating with each servent id the number of successful (num_plus) and unsuccessful (num_minus) downloads s experienced.
Servent s can judge a download as unsuccessful, for example,
if the downloaded resource was unreadable, corrupted or included
malicious content. The experience repository will be
updated after each download by incrementing the suitable
counter, according to the download outcome. Keeping two
separate counters (for bad and good experiences) provides
the most complete information.
4.3.2 Translating Local Reputations into Votes
The simplest form of vote is a binary value by which a vote
can be either positive (1) or negative (0). Whether to express
a positive or a negative opinion can be based on different criteria that each voter can independently adopt. For instance, a peer may decide to vote positively only for servents with which it never had bad experiences (num_minus = 0),
while others can adopt a more liberal attitude balancing
bad and good experiences.
While we adopted simple binary votes, it is worth noting
that votes need not be binary and that servents need not
agree on the scale on which to express them. For instance,
votes could be expressed on an ordinal scale (e.g., from A to D, or as a number of stars) or on a continuous one (e.g., a servent
can consider a peer reliable at 80%). The only constraint
for the approach to work properly is that the scale on which
one expresses votes should be communicated to the poller.
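A minimal rendering of the experience repository and of one possible vote-translation rule is sketched below; the storage layout and the conservative num_minus = 0 threshold are implementation choices of ours, not requirements of the protocol.

```python
from collections import defaultdict

class ExperienceRepository:
    """Per-servent counters of successful and unsuccessful downloads."""

    def __init__(self):
        self.records = defaultdict(lambda: [0, 0])    # servent_id -> [num_plus, num_minus]

    def record_download(self, servent_id, success):
        self.records[servent_id][0 if success else 1] += 1

    def vote(self, servent_id):
        """Binary vote: positive only if we never had a bad experience (one possible rule)."""
        num_plus, num_minus = self.records[servent_id]
        if num_plus == 0 and num_minus == 0:
            return None                               # no opinion about this servent
        return 1 if num_minus == 0 else 0

repo = ExperienceRepository()
repo.record_download("a1b2c3", success=True)
repo.record_download("ffee00", success=False)
print(repo.vote("a1b2c3"), repo.vote("ffee00"))       # 1 0
```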
4.3.3 Representing Credibilities
Each servent x maintains a credibility repository as a set
of triples (servent_id, num_agree, num_disagree) associating with each servent id its accuracy in casting votes. Intuitively, num_agree represents the number of times the servent id's opinion on another peer s (within a transaction in which s was then selected for downloading) matched the outcome of the download. Conversely, num_disagree represents
the number of times the servent id's opinion on another peer
s (again, within a transaction in which s was then selected
for downloading) did not match the outcome of the down-
load. A simple approach to the credibility repository maintenance
is as follows. At the end of a successful transaction,
the initiator p will increase by one the num_agree counter of all those servents that had voted in favor of the selected servent s, and will increase by one the num_disagree counter of all those servents that had voted against s. The converse happens for unsuccessful transactions.
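The update rule just described can be written down directly; the weight function at the end is our own illustrative way of turning the two counters into a value usable for weighting votes, and is not specified by the paper.

```python
from collections import defaultdict

class CredibilityRepository:
    """Tracks how often each voter's opinion matched the observed outcome."""

    def __init__(self):
        self.records = defaultdict(lambda: [0, 0])    # servent_id -> [num_agree, num_disagree]

    def update(self, votes, download_ok):
        """votes: iterable of (voter_id, vote) about the servent we downloaded from;
        vote is 1 (positive) or 0 (negative); download_ok is the actual outcome."""
        for voter_id, vote in votes:
            agreed = (vote == 1) == download_ok
            self.records[voter_id][0 if agreed else 1] += 1

    def weight(self, voter_id):
        """One possible credibility weight in [0, 1]; the formula is our assumption."""
        agree, disagree = self.records[voter_id]
        total = agree + disagree
        return 0.5 if total == 0 else agree / total

cred = CredibilityRepository()
cred.update([("v1", 1), ("v2", 0)], download_ok=True)
print(cred.weight("v1"), cred.weight("v2"))           # 1.0 0.0
```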
4.4 Removing suspects from the poll
PollReply messages need to be verified in order to prevent malicious users from creating or forging a set of peers with the sole purpose of sending in positive votes to enhance their reputation. We base our verification on a suspect-identification procedure, trying to reduce the impact of forged
voters. Our procedure relies on computing clusters of voters
whose common characteristics suggest that they may
have been created by a single, possibly malicious, user. Of
course, nothing can prevent a malicious user aware of the
clustering technique from forging a set of voters all belonging
to different clusters; this is, however, discouraged by the
fact that some of the voting peers will be contacted in the
following check phase. In principle, voters clustering can be
done in a number of ways, based on application-level param-
eters, such as the branch of the Gnutella topology through
which the votes were received, as well as on network-level
parameters, such as IP addresses. At first sight, IP-address clustering based on net id appears an attractive choice, as it is extremely fast and does not require generating additional network traffic. An alternative, more robust approach, currently
used by many tools, such as IP2LL [10] and NetGeo
[12], computes IP clustering by accessing a local Whois
database to obtain the IP address block that includes a given
IP address. 4 We are well aware that, even neglecting the effects
of IP spoofing, both IP clustering techniques are far
from perfect, especially when clients are behind proxies or
rewalls, so that the \client" IP address may actually correspond
to a proxy. For instance, the AOL network has
a centralized cluster of proxies at one location for serving
client hosts located all across the U.S., and the IP addresses
of such cluster all belong to a single address block [16]. In
other words, while a low number of clusters may suggest
that a voters' set is suspicious, it does not provide conclusive
evidence of forgery. For this reason, we do not use the number of clusters to conclude for or against the voters; rather, we compute an aggregation (e.g., the arithmetic mean) of the votes expressed by voters in the same cluster. Then, we use the resulting cluster votes to obtain the final poll outcome, computed as a weighted average of the clusters' votes, where weights are inversely related to cluster
sizes. After the outcome has been computed, an explicit IP
checking phase starts: a randomized sample of voters are
contacted via direct connections, using their alleged IP ad-
dresses. If some voters are not found, the sample size is
enlarged. If no voter can be found, the whole procedure is
aborted.
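A compact sketch of the cluster-based aggregation, under our own simplifying assumptions: clusters are formed from the first two octets of the voters' IP addresses (a crude net id), cluster votes are arithmetic means, and the weight of a cluster is taken to be 1/size as one reading of "inversely related to cluster sizes".

```python
from collections import defaultdict

def poll_outcome(votes):
    """Aggregate (ip, vote) pairs into a single score in [0, 1].

    Votes are grouped into clusters by a crude net id (first two octets);
    each cluster contributes its mean vote, weighted by 1/cluster_size so
    that large, possibly forged, clusters cannot dominate the outcome.
    """
    clusters = defaultdict(list)
    for ip, vote in votes:
        net_id = ".".join(ip.split(".")[:2])
        clusters[net_id].append(vote)

    weighted_sum, weight_total = 0.0, 0.0
    for members in clusters.values():
        cluster_vote = sum(members) / len(members)   # aggregate vote of the cluster
        weight = 1.0 / len(members)                  # inversely related to its size
        weighted_sum += weight * cluster_vote
        weight_total += weight
    return weighted_sum / weight_total if weight_total else 0.0

# Ten positive votes from one subnet count far less than two independent negatives.
suspicious = [("10.0.0.%d" % i, 1) for i in range(10)]
independent = [("66.12.4.9", 0), ("131.175.3.2", 0)]
print(round(poll_outcome(suspicious + independent), 2))   # 0.05
```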
5. P2PREP IMPACT ON GNUTELLA-LIKE SYSTEMS
The impact of P2PRep on a real world P2P system based
on Gnutella depends on several factors, some of them related
to the original design of Gnutella itself. First of all,
in the original design there is no need for a servent to keep
a persistent servent identifier across transactions; indeed, several Gnutella clients generate their identifiers randomly each time they are activated. P2PRep encourages servents keen on distributing information to preserve their identifiers,
(Footnote 4: In our current prototype, a local Whois query is generated for all the IP addresses in the voters' set, and the query results are used for computing the distinct address blocks that include the voters' IP addresses.)
thus contributing to a cooperative increase of the P2P com-
munity's ethics. Secondly, efficiency considerations led Gnutella's designers to impose a constraint on the network
horizon, so that each servent only sees a small portion of
the network. This influences P2PRep's impact, since in real
world scenarios a poller may be able to get a reasonable
number of votes only for servents that have a high rate of
activity. In other words, P2PRep will act as an adaptive selection
mechanism of reliable information providers within
a given horizon, while preserving the 'pure' P2P nature of a
Gnutella network. Another major impact factor for P2PRep
is related to performance, as Gnutella is already a verbose
protocol [18] and the amount of additional messages required
could discourage the use of P2PRep. However, the protocol
operation can be easily tuned to the needs of congested
network environments. For instance, in Section 4 we have
assumed that peers express votes on others upon explicit
polling request by a servent. Intuitively, we can refer to this
polling approach as client-based , as peers keep track of good
and bad experiences they had with each peer s they used
as a source. In low-bandwidth networks, P2PRep message
exchanges can be reduced by providing a server-based functionality
whereby servents keep a record of (positive) votes
for them stated by others. We refer to these "reported"
votes as credentials, which the servent can provide in the
voting process. Obviously, credentials must be signed by
the voter that expressed them, otherwise a servent could
fake as many as it likes. Credentials can be coupled with
either of our polling processes: in the basic protocol case,
the servent ids of the direct voters remain anonymous while
the ones of those that voted indirectly are disclosed. Finally,
when a P2P system is used as a private infrastructure for information
sharing (e.g., in corporate environments), P2PRep
vote semantics can easily be tuned by adopting a rating system for evaluating the quality of the different information items provided by a servent, rather than its reliability or malicious
attitude.
5.1 Security improvements
Of course, the major impact of a reputation based protocol
should be on improving the global security level. P2PRep
has been designed in order to alleviate or resolve some of the
current security problems of P2P systems like Gnutella [3].
Also, P2PRep tries to minimize the effects of some well-known flaws usually introduced by poll-based distributed
algorithms. In this section, we discuss the behavior
of our protocol with respect to known attacks. Throughout
the section, we assume Alice to be a Gnutella user searching for a file, Bob to be a user who has the file Alice wants, Carl to be a user located behind a firewall who also has the file Alice wants, and David to be a malicious user.
5.1.1 Distribution of Tampered with Information
The simplest version of this attack is based on the fact
that there is virtually no way to verify the source or contents
of a message. A particularly nasty attack is for David to simply
respond providing a fake resource with the same name as
the real resource Alice is looking for. The actual file could be
a Trojan Horse program or a virus (like the Gnutella virus
mentioned in Section 2.2). Currently, this attack is particularly
common as it requires virtually no hacking of the
software client. Both the simple and the enhanced versions of our
protocol are aimed at solving the problem of impersonation
attacks. When Alice discovers the potentially harmful content
of the information she downloaded from David, she will
update David's reputation, thus preventing further interaction
with him. Also, Alice will become a material witness
against David in all polling procedures called by others. Had
David previously made an effort to acquire a good reputation, he will now be forced to drop his identifier, reverting to
newcomer status and dramatically reducing his probability
of being chosen for future interactions.
5.1.2 Man in the Middle
This kind of attack takes advantage of the fact that the
malicious user David can be in the path between Alice and
Bob (or Carl). The basic version of the attack goes as follows:
1. Alice broadcasts a Query and Bob responds.
2. David intercepts the QueryHit from Bob and rewrites
it with his IP address and port instead of Bob's.
3. Alice receives David's reply.
4. Alice chooses to download the content from David.
5. David downloads the original content from Bob, infects
it and passes it on to Alice.
A variant of this attack relies on push-request interception:
1. Alice generates a Query and Carl responds.
2. Alice attempts to connect but Carl is rewalled, so she
generates a Push message.
3. David intercepts the push request and forwards it with
his IP address and port.
4. Carl connects to David and transfers his content.
5. David connects to Alice and provides the modified content.
While both flavors of this attack require substantial hacking of the client software, they are very effective, especially because they do not involve IP spoofing and therefore cannot
be prevented by network security measures. Our protocols
address these problems by including a challenge-response
phase just before downloading. In order to impersonate Bob
(or Carl) in this phase, David should know Bob's private key
and be able to design a public key whose digest is Bob's identifier. Therefore, both versions of this attack are successfully
prevented by our protocols.
6. IMPLEMENTING P2PREP IN THE
GNUTELLA ENVIRONMENT
We are nearing completion of an implementation of our
protocol as an extension to an existing Gnutella system. In
this section, we describe how the P2PRep protocol is implemented
and the modifications it requires to a standard
Gnutella servent's architecture.
Figure 4: A description of P2PRep messages (Query: MinimumSpeed, SearchCriteria; QueryHit: NumberOfHits, Port, IP, Speed, FileEntry..., ServentID; PollReply: Signature, EncryptedPayload)
6.1 P2PRep messages
To keep the impact of our proposed extension to a min-
imum, we use a piggyback technique: all P2PRep messages
are carried as payload inside ordinary Query and QueryHit
messages. P2PRep messages are summarized in Figure 4,
which also shows their structure. Specifically, P2PRep encapsulation relies on the SearchCriteria field (a set of null-terminated strings) in the Query message and on the FileEntry fields of QueryHit. By carefully choosing message
encoding, P2PRep broadcast messages (e.g., Poll) are stored
in the SearchCriteria field of a Query message and will be understood by all P2PRep-compliant servents, while others will consider them as requests for (unlikely) filenames and
simply ignore them. In turn, the QueryHit standard message
is composed of NumberOfHits elements of a FileEntry containing
a set of triples (FileSize, FileIndex, FileName).
We use these triples for encoding P2PRep unicast messages
(e.g., PollReply) sent as replies to previous broadcasts. In
order to ensure that our piggybacked messages are easily
distinguished from standard QueryHits and, at the same
time, that they are safely ignored by standard Gnutella ser-
vents, P2PRep unicasts are encoded into the FileName field, while the FileIndex and FileSize fields specify the type
of the message and the encoding of the payload. The internal
structure of P2PRep messages is very simple: Poll is
an anonymous broadcast message that contains a servent id
(or a set of them) and a session public key which is generated
for each poll session. When a servent needs to poll
the net about a peer, it generates a temporary key pair,
and sends the poll public key with the Poll message it-
self. The PollReply message is encrypted and signed by
the sender, with a persistent servent key. In our current de-
sign, the actual structure of the PollReply message depends
on a parametric encoding function, stored in the FileIndex
field of the QueryHit carrier. However, all PollReply messages
contain an EncryptedPayload composed of a set of encrypted
strings. Each of these strings holds a ⟨ServentID, ...⟩ entry.
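The piggybacking idea can be sketched as follows. The marker string, separator characters, and base64 encoding are all illustrative choices of ours; the actual prototype relies on the FileSize/FileIndex fields and a parametric encoding function, which are not reproduced here.

```python
import base64

POLL_MARKER = "P2PREPPOLL"        # illustrative tag, not the actual wire format

def encode_poll(servent_ids, poll_public_key_pem):
    """Pack a Poll message into a Query's SearchCriteria string."""
    payload = ";".join(servent_ids) + "|" + base64.b64encode(poll_public_key_pem).decode()
    return POLL_MARKER + " " + payload

def decode_poll(search_criteria):
    """Return (servent_ids, public_key_pem) if the Query carries a Poll, else None."""
    if not search_criteria.startswith(POLL_MARKER + " "):
        return None                               # ordinary Query: ignore
    ids, _, key_b64 = search_criteria[len(POLL_MARKER) + 1:].partition("|")
    return ids.split(";"), base64.b64decode(key_b64)

msg = encode_poll(["aa11", "bb22"], b"-----BEGIN PUBLIC KEY-----...")
print(decode_poll(msg)[0])   # ['aa11', 'bb22']
```

A non-P2PRep servent receiving such a Query simply finds no matching filename and ignores it, which is the behavior the piggyback technique relies on.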
6.2 The architecture
Although several implementations are available, most
Gnutella servents share a common architectural pattern,
that can be better understood by looking at the information flow represented in Figure 5.
Figure 5: Gnutella's information flow with protocol extensions (the standard GRouter, Packet Processor, Locator Agent, Server/Client, and Shared Resources components, plus the P2PRep additions: Reputation Manager, Crypto Agent, and the Experience & Credibility repositories)
In a standard architecture,
two components are directly connected to the net: the
http server/client, used for uploads and downloads, and
the GRouter, a software component dedicated to message
routing. This component carries messages from the net to
the Packet Processor, a switch able to unpack messages,
identify their type and deliver them to the right manager
component. For instance, Query messages are delivered to a
Locator Agent that verifies the presence of the requested file in the local repository of shared files (Shared Resources).
All the other messages are rerouted on the net. If the
Locator Agent finds a match in the Shared Resources, the
Gnutella servent sends a QueryHit message through the
GRouter specifying its location to the requestor. Our protocol
requires complementing this architecture with three additional
components (enclosed by a dotted line in Figure 5).
The Reputation Manager is notified when query hits occur
and receives all Query and QueryHit messages carrying
P2PRep extensions. Those messages are processed in order
to choose the best servent based on the reputation and credibility
data stored in the Experience and Credibility repos-
itories. In order to assess peers' reputations, the Reputation
Manager sends and receives Poll and PollReply messages
via the GRouter, as well as service messages for key handling
(not shown in Figure 5). The Reputation Manager is
linked to a CryptoAgent component encapsulating the set
of encryption functions required by P2PRep. P2PRep requires
only standard encryption facilities: all that is needed is a public/private key pair generation scheme, an encryption function, and a digital signature scheme. For ease of implementation,
we have chosen to use the most popular schemes providing
the desired functionalities. Namely, RSA for keys and MD5
message digest of the payload for digital signatures.
7. COMMENTS AND DISCUSSION
We describe here a few additional aspects that may clarify
the potential of our solution and its possible integration with
current P2P technologies.
Limited cost: The implementation of our polling service requires a certain amount of resources, in terms of both storage capacity and bandwidth, but this cost is limited and justified in most situations. The amount
of storage capacity is proportional to the number of
servents with which the servent has interacted. For
the basic protocol, this will require adding at most a few bytes to the experience repository, for an exchange that may have required the local storage of a file with a size of several million bytes. The enhanced version
is more expensive in terms of local storage, but nor-
mally, the limiting resource in P2P networks is network
bandwidth rather than storage. The most network
intensive phase of our protocol is the polling phase,
where a Poll request is broadcast to the network and
PollReply responses are transmitted back by nodes
participating in the poll. The checking phase may also be quite heavy on network bandwidth when the check
has to analyze all the votes, but in normal situations,
when votes are not forged, the random selection of a
limited set of checks makes it a modest addition to the
network load. In conclusion, the most expensive operation
is the polling phase, which operates in the same
way as a search. We can then assume that our service
would approximately double the traffic in a Gnutella
network.
Concentration of servents: as we have already ob-
served, the Gnutella protocol limits the portion of the
network that each node can see. This means that servents
will have a high probability of exhibiting a sufficient
number of votes supporting their reputation in
the portion of the network that a node in a particular
instant sees only if they have a considerably greater
number of votes globally. We do not consider this a
strong limitation of the approach. As some studies
have indicated [1, 19], current P2P solutions show a
clear distinction between participants to the network,
with a relatively small portion of servents offering a
great number of resources, and a great number of servents
(free riders) which do not share resources but
only exploit what is offered by other participants. In
this situation, it should be possible to identify, even
in small portions of the network, servents that will exhibit
an adequate reputation.
Overload avoidance: Even if polling does not introduce
an overload in the P2P network, our reputation
service presents a considerable risk of focusing transfer
requests on the servents that have a good reputation,
reducing the degree of network availability. A possible
solution to this problem is to consider reputable nodes
as the sources of file identifiers of correct resources. The idea is to associate with every file an MD5 signa-
ture, which is returned with the resource description.
When a node identifies a resource it is interested in downloading, it first has to verify the offerers' reputation. As soon as a reputable offerer is identified, the requestor can interact directly with the offerer only to
check the association between its servent id and the
MD5 signature. It can then request a download from
any of the nodes that are exporting the resource with
the same MD5 signature. Once the file transfer is completed, the signature is checked.
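A small sketch of that final check, assuming the resource digest announced by the reputable offerer is available as a hex string (the helper names and the temporary file used in the demo are ours):

```python
import hashlib
import os
import tempfile

def file_digest(path, chunk_size=1 << 16):
    """MD5 digest of a downloaded resource, computed in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
    return md5.hexdigest()

def accept_download(path, announced_digest):
    """Accept a file fetched from any exporting servent only if it matches
    the digest that the reputable offerer associated with the resource."""
    return file_digest(path) == announced_digest

# Tiny self-contained check with a temporary file standing in for a download.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"some shared resource")
print(accept_download(tmp.name, hashlib.md5(b"some shared resource").hexdigest()))  # True
os.remove(tmp.name)
```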
Integration with intermediate P2P solutions: Intermediate
solutions (like FastTrack) identify nodes of
the network characterized by an adequate amount of
CPU power and network bandwidth, assigning to them
the role of indexing what is offered on the network. The visible effect is a P2P network where response
time and network congestion are greatly reduced, and
users are not limited to searches on a portion of the
resources offered on the network. In this situation, as in centralized solutions, when users connect to the network they are required to immediately transfer to the
indexing nodes the description of the resources they
are sharing. For the implementation of our reputation
mechanism, the votes on servents that each node has
built and its experience should also be transferred to
the indexing node at the start of the session. A great
opportunity in this context derives from a possible pre-
processing, done on the indexing node, to associate a
reputation with each servent. In this way, the reputation
could be returned immediately in the result of a
search. Since we have no access to a public description
of this architecture, we have not considered this solution for the moment.
8. CONCLUSIONS
We described a reputation management protocol for
anonymous P2P environments that can be seen as an extension
of generic services offered for the search of resources.
The protocol is able to reconcile two aspects, anonymity and
reputation, that are normally considered as conflicting. We
demonstrated our solution on top of an existing Gnutella
network. This paper represents a first step towards the development
of a self-regulating system for preventing malicious
behavior on P2P networks.
9.
ACKNOWLEDGMENTS
The work reported in this paper was partially supported
by the Italian MURST DATA-X project and by the European
Community within the Fifth (EC) Framework Programme
under contract IST-1999-11791 - FASTER project.
10.
--R
Free riding on gnutella.
Security aspects of Napster and Gnutella.
A distributed anonymous information storage and retrieval system.
The free haven project: Distributed anonymous storage service.
A large-scale persistent peer-to-peer storage utility
SPKI certificate theory.
Digital signatures
JXTA: A network programming environment.
IP to latitude/longitude server.
Web Security - A Matter of Trust
Where in the world is netgeo.
An investigation of geographic mapping techniques for internet hosts.
P2P networking: An information-sharing alternative
A measurement study of peer-to-peer file sharing systems
The Gnutella Protocol Specification.
--TR
Freenet
An investigation of geographic mapping techniques for internet hosts
Handbook of Applied Cryptography
Peer-to-Peer Networking
--CTR
Mudhakar Srivatsa , Ling Liu, Securing decentralized reputation management using TrustGuard, Journal of Parallel and Distributed Computing, v.66 n.9, p.1217-1232, September 2006
Thomas Repantis , Vana Kalogeraki, Decentralized trust management for ad-hoc peer-to-peer networks, Proceedings of the 4th international workshop on Middleware for Pervasive and Ad-Hoc Computing (MPAC 2006), p.6, November 27-December 01, 2006, Melbourne, Australia
Mudhakar Srivatsa , Li Xiong , Ling Liu, TrustGuard: countering vulnerabilities in reputation management for decentralized overlay networks, Proceedings of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan
Kevin Walsh , Emin Gün Sirer, Fighting peer-to-peer SPAM and decoys with object reputation, Proceedings of the 2005 ACM SIGCOMM workshop on Economics of peer-to-peer systems, August 22-22, 2005, Philadelphia, Pennsylvania, USA
Limited reputation sharing in P2P systems, Proceedings of the 5th ACM conference on Electronic commerce, May 17-20, 2004, New York, NY, USA
Kevin Walsh , Emin Gün Sirer, Experience with an object reputation system for peer-to-peer filesharing, Proceedings of the 3rd conference on 3rd Symposium on Networked Systems Design & Implementation, p.1-1, May 08-10, 2006, San Jose, CA
trust model of p2p system based on confirmation theory, ACM SIGOPS Operating Systems Review, v.39 n.1, p.56-62, January 2005
Sepandar D. Kamvar , Mario T. Schlosser , Hector Garcia-Molina, The Eigentrust algorithm for reputation management in P2P networks, Proceedings of the 12th international conference on World Wide Web, May 20-24, 2003, Budapest, Hungary
Jinsong Han , Yunhao Liu, Dubious feedback: fair or not?, Proceedings of the 1st international conference on Scalable information systems, May 30-June 01, 2006, Hong Kong
Zhengqiang Liang , Weisong Shi, Enforcing cooperative resource sharing in untrusted P2P computing environments, Mobile Networks and Applications, v.10 n.6, p.971-983, December 2005
reputation-based approach for choosing reliable resources in peer-to-peer networks, Proceedings of the 9th ACM conference on Computer and communications security, November 18-22, 2002, Washington, DC, USA
PeerTrust: Supporting Reputation-Based Trust for Peer-to-Peer Electronic Communities, IEEE Transactions on Knowledge and Data Engineering, v.16 n.7, p.843-857, July 2004
Loubna Mekouar , Youssef Iraqi , Raouf Boutaba, Peer-to-peer's most wanted: malicious peers, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.4, p.545-562, 15 March 2006
Mayank Bawa , Brian F. Cooper , Arturo Crespo , Neil Daswani , Prasanna Ganesan , Hector Garcia-Molina , Sepandar Kamvar , Sergio Marti , Mario Schlosser , Qi Sun , Patrick Vinograd , Beverly Yang, Peer-to-peer research at Stanford, ACM SIGMOD Record, v.32 n.3, September
Arno Bakker , Maarten Van Steen , Andrew S. Tanenbaum, A wide-area Distribution Network for Transactions on Internet Technology (TOIT), v.6 n.3, p.259-281, August 2006
Providing witness anonymity in peer-to-peer systems, Proceedings of the 13th ACM conference on Computer and communications security, October 30-November 03, 2006, Alexandria, Virginia, USA
Audun Jøsang , Roslan Ismail , Colin Boyd, A survey of trust and reputation systems for online service provision, Decision Support Systems, v.43 n.2, p.618-644, March, 2007
Krit Wongrujira , Aruna Seneviratne, Monetary incentive with reputation for virtual market-place based P2P, Proceedings of the 2005 ACM conference on Emerging network experiment and technology, October 24-27, 2005, Toulouse, France
Yuh-Jzer Joung , Jiaw-Chang Wang, Chord2: A two-layer Chord for reducing maintenance overhead via heterogeneity, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.3, p.712-731, February, 2007
Audun Jøsang , Elizabeth Gray , Michael Kinateder, Simplification and analysis of transitive trust networks, Web Intelligence and Agent System, v.4 n.2, p.139-161, April 2006
Jennifer Golbeck, Trust on the world wide web: a survey, Foundations and Trends in Web Science, v.1 n.2, p.131-197, January 2006
Bogdan C. Popescu , Bruno Crispo , Andrew S. Tanenbaum , Arno Bakker, Design and implementation of a secure wide-area object middleware, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.10, p.2484-2513, July, 2007
Stefan Schmidt , Robert Steele , Tharam S. Dillon , Elizabeth Chang, Fuzzy trust evaluation and credibility development in multi-agent systems, Applied Soft Computing, v.7 n.2, p.492-505, March, 2007
Sergio Marti , Hector Garcia-Molina, Taxonomy of trust: categorizing P2P reputation systems, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.4, p.472-484, 15 March 2006
Donovan Artz , Yolanda Gil, A survey of trust in computer science and the Semantic Web, Web Semantics: Science, Services and Agents on the World Wide Web, v.5 n.2, p.58-71, June, 2007 | reputation;credibility;P2P network;polling protocol |
511502 | Evaluating strategies for similarity search on the web. | Finding pages on the Web that are similar to a query page (Related Pages) is an important component of modern search engines. A variety of strategies have been proposed for answering Related Pages queries, but comparative evaluation by user studies is expensive, especially when large strategy spaces must be searched (e.g., when tuning parameters). We present a technique for automatically evaluating strategies using Web hierarchies, such as Open Directory, in place of user feedback. We apply this evaluation methodology to a mix of document representation strategies, including the use of text, anchor-text, and links. We discuss the relative advantages and disadvantages of the various approaches examined. Finally, we describe how to efficiently construct a similarity index out of our chosen strategies, and provide sample results from our index. | INTRODUCTION
The goal of Web-page similarity search is to allow users
to find Web pages similar to a query page [12]. In particular, given a query document, a similarity-search algorithm should provide a ranked listing of documents similar to that document.
(Footnotes: This work was supported by the National Science Foundation under Grant IIS-0085896. † Supported by an NSF Graduate Research Fellowship. ‡ Supported by NSF Grant IIS-0118173 and a Microsoft Research Graduate Fellowship. § Supported by an NSF Graduate Research Fellowship. Copyright is held by the author/owner(s). WWW2002, May 7-11, 2002, Honolulu, Hawaii, USA. ACM 1-58113-449-5/02/0005.)
Given a small number of similarity-search strategies, one
might imagine comparing their relative quality with user
feedback. However, user studies can have significant cost in
both time and resources. Moreover, if, instead of comparing
a small number of options, we are interested in comparing
parametrized methods with large parameter spaces, the
number of strategies can quickly exceed what can be evaluated
using user studies. In this situation, it is extremely
desirable to automate strategy comparisons and parameter
selection.
The "best" parameters are those that result in the most
accurate ranked similarity listings for arbitrary query doc-
uments. In this paper, we develop an automated evaluation
methodology to determine the optimal document representation
strategy. In particular, we view manually constructed
directories such as Yahoo! [26] and the Open Directory
Project (ODP) [21] as a kind of precompiled user
study. Our evaluation methodology uses the notion of document
similarity that is implicitly encoded in these hierarchical
directories to induce "correct", ground truth orderings
of documents by similarity, given some query document.
Then, using a statistical measure ([13]), we compare similarity
rankings obtained from different parameter settings of
our algorithm to the correct rankings. Our underlying assumption
is that parameter settings that yield higher values
of this measure correspond to parameters that will produce
better results.
To demonstrate our evaluation methodology, we applied
it to a reasonably sized set of parameter settings (includ-
ing choices for document representation and term weighting
schemes) and determined which of them is most effective for
similarity search on the Web.
There are many possible ways to represent a document
for the purpose of supporting effective similarity search. The
following briefly describes the representation axes we considered
for use with the evaluation methodology just described.
Three approaches to selecting the terms to include in the
vector (or equivalently, multiset) representing a Web page u are:
1. Words appearing in u (a content-based approach)
2. Document identiers (e.g. urls) for each document v
that links to u (a link-based approach)
3. Words appearing inside or near an anchor in v, when
the anchor links to u (an anchor-based approach)
The usual content-based approach ignores the available
hyperlink data and is susceptible to spam. In particular,
it relies solely on the information provided by the page's
author, ignoring the opinions of the authors of other Web
pages [3]. The link-based approach, investigated in [12],
suffers from the shortcoming that pages with few inlinks
will not have sufficient citation data, either to be allowed in
queries or to appear as results of queries. This problem is especially
pronounced when attempting to discover similarity
relations for new pages that have not yet been cited sufficiently.
As we will see in Section 5, under a link-based approach,
the vectors for most documents (even related ones)
are in fact orthogonal to each other.
The third approach, which relies on text near anchors,
referred to as the anchor-window [9], appears most useful
for the Web similarity-search task. Indeed, the use of
anchor-windows has been previously considered for a variety
of other Web IR tasks [2, 1, 9, 11]. The anchor-window
often constitutes a hand-built summary of the target document
[1], collecting both explicit hand-summarization and
implicit hand-classication present in referring documents.
We expect that when aggregating over all inlinks, the frequency
of relevant terms will dominate the frequency of irrelevant
ones. Thus, the resulting distribution is expected
to be a signature that is a reliable, concise representation
of the document. Because each anchor-window contributes
several terms, the anchor-based strategy requires fewer citations
than the link-based strategy to prevent interdocument
orthogonality. However, as a result of reducing orthogonal-
ity, the anchor-based strategy is nontrivial to implement efficiently
[14]. We discuss later how a previously established
high-dimensional similarity-search technique based on hashing
can be used to efficiently implement the anchor-based
strategy.
These three general strategies for document representation
involve additional specific considerations, such as term
weighting and width of anchor-windows, which we discuss
further in Section 3.
Note that there are many additional parameters that could
be considered, such as weighting schemes for font sizes, font
types, titles, etc. Our goal was not to search the parameter
space exhaustively. Rather, we chose a reasonable set
of parameters to present our evaluation methodology and
to obtain insight into the qualitative effects of these basic
parameters.
Once the best parameters, including choice of document
representation and term weighting schemes, have been determined
using the evaluation methodology, we must scale
the similarity measure to build a similarity index for the Web
as a whole. We develop an indexing approach relying on the
Min-hashing technique [10, 5] and construct a similarity-search
index for roughly 75 million urls to demonstrate the
scalability of our approach. Because each stage of our algorithm
is trivially parallelizable, our indexing approach can
scale to the few billion accessible documents currently on
the Web (commercial search engines generally have several hundred or even thousands of machines at their disposal).
2. EVALUATION METHODOLOGY
The quality of the rankings returned by our system is
determined by the similarity metric and document features
used. Previous work [12] has relied on user studies to assess
query response quality. However, user studies are time-
consuming, costly, and not well-suited to research that involves
the comparison of many parameters. We instead use
an automated method of evaluation that uses the orderings
implicit in human-built hierarchical directories to improve
the quality of our system's rankings.
In the clustering literature, numerous methods of automatic
evaluation have been proposed [17]. Steinbach et
al. [25] divide these methods into two broad classes. Internal
quality measures, such as average pairwise document simi-
larity, indicate the quality of a proposed cluster set based
purely on the internal cluster geometry and statistics, without
reference to any ground truth. External quality mea-
sures, such as entropy measures, test the accordance of a
cluster set with a ground truth. As we are primarily investigating
various feature selection methods and similarity
metrics themselves in our work, we restrict our attention to
external measures.
The overall outline of our evaluation method is as follows.
We use a hierarchical directory to induce sets of correct,
ground truth similarity orderings. Then, we compare the
orderings produced by a similarity measure using a particular
set of parameters to these correct partial orderings, using
a statistical measure outlined below. We claim that parameter
settings for our similarity measure that yield higher
values of this statistical measure correspond to parameters
that will produce better results from the standpoint of a
user of the system.
2.1 Finding a Ground Truth Ordering
Unfortunately, there is no available ground truth in the
form of either exact document-document similarity values
or correct similarity search results.
Problem 1. SimilarDocument (notion of similarity):
Formalize the notion of similarity between Web documents
using an external quality measure.
There is a great deal of ordering information implicit in the
hierarchical Web directories mentioned above. For example,
a document in the recreation/aviation/un-powered class
is on average more similar to other documents in that same
class than those outside of that class. Furthermore, that
document is likely to be more similar to other documents in
other recreation/aviation classes than those entirely outside
of that region of the tree. Intuitively, the most similar
documents to that source are the other documents in the
source's class, followed by those in sibling classes, and so
on.
There are certainly cases where location in the hierarchy
does not accurately reflect document similarity. Consider
documents in recreation/autos, which are almost certainly
more similar to those in shopping/autos than to those in
recreation/smoking. In our sample, these cases do not affect
our evaluation criteria since we average over the statistics
of many documents.
To formalize the notion of distance from a source document
to another document in the hierarchy we define familial
distance.
Definition 1. Let the familial distance d_f(s, d) from a
source document s to another document d in a class hierarchy
be the distance from s's class to the most specific class
dominating both s and d. 2
2 We treated the hierarchy as a tree, ignoring the "soft-links" denoted
with an "@" suffix.
Figure 1: Mapping a hierarchy onto a partial ordering, given a source document.
In our system, however, we have collapsed the directory
below a fixed depth of three and ignored the (relatively few)
documents above that depth. Therefore, there are only four
possible values for familial distance, as depicted in Figure 1.
We name these distances as follows:
Distance 0: Same - Documents are in the same class.
Distance 1: Siblings - Documents are in sibling classes.
Distance 2: Cousins - Documents are in classes which are first cousins.
Distance 3: Unrelated - The lowest common ancestor of the documents' classes is the root.
Given a source document, we wish to use familial distances
to other documents to construct a partial similarity ordering
over those documents. Our general principle is:
On average, the true similarity of documents to
a source document decreases monotonically with
the familial distance from that document.
Given this principle, and our definition of familial distance,
for any source document in a hierarchical directory we can
derive a partial ordering of all other documents in the direc-
tory. Note that we do not give any numerical interpretation
to these familial distance values. We only depend on the
above stated monotonicity principle: a source document is
on average more similar to a same-class document than to
a sibling-class document, and is on average more similar to
a sibling-class document than a cousin-class document, and
so on.
Definition 2. Let the familial ordering ≺_f(s) of all
documents with respect to a source document s be:
≺_f(s) = {(a, b) | d_f(s, a) < d_f(s, b)}.
This ordering is very weak in that for a given source, most
pairs of documents are not comparable. The majority of the
distinctions that are made, however, are among documents
that are very similar to the source and documents that are
much less similar. The very notion of a correct total similarity
ordering is somewhat suspect, as beyond a certain
point, pages are simply unrelated. Our familial ordering
makes no distinctions between the documents in the most
distant category, which forms the bulk of the documents in
the repository.
Of course our principle that true similarity decreases monotonically
with familial distance does not always hold. However
it is reasonable to expect that, on average, a ranking
system 3 that accords better with familial ordering will be
better than one that accords less closely.
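To make Definitions 1 and 2 concrete, the sketch below derives familial distances and the induced partial ordering from depth-three directory category paths. It is a minimal illustration only: the category strings, helper names, and data are our own assumptions, not the authors' code.

def familial_distance(src_cat, other_cat):
    """Distance from src's class to the most specific class dominating both.
    Categories are '/'-separated paths collapsed to depth three; the result is
    0 (same), 1 (siblings), 2 (cousins) or 3 (unrelated)."""
    a = src_cat.strip('/').split('/')[:3]
    b = other_cat.strip('/').split('/')[:3]
    common = 0
    for x, y in zip(a, b):
        if x != y:
            break
        common += 1
    return len(a) - common

def familial_order(src_cat, cats):
    """Pairs (a, b) of documents where a is strictly closer to the source than b."""
    d = {doc: familial_distance(src_cat, c) for doc, c in cats.items()}
    return {(a, b) for a in d for b in d if d[a] < d[b]}

# Example categories relative to a source in /home/gardens/clubs_and_associations.
cats = {
    'd1': '/home/gardens/clubs_and_associations',   # same class     -> distance 0
    'd2': '/home/gardens/plants',                    # sibling class  -> distance 1
    'd3': '/home/apartment_living/gardening',        # cousin class   -> distance 2
    'd4': '/recreation/travel/reservations',         # unrelated      -> distance 3
}
print(familial_order('/home/gardens/clubs_and_associations', cats))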
2.2 Comparing Orderings
At this point, we have derived a partial ordering from a
given hierarchical directory and query (source) document s,
that belongs in the hierarchy. We then wish to use this partial
ordering to evaluate the correctness of an (almost) total
ordering produced by our system. 4 Perhaps the most common
method of comparing two rankings is the Spearman
rank correlation coefficient. This measure is best suited
to comparing rankings with few or no ties, and its value
corresponds to a Pearson coefficient [24]. There are two
main problems with using the Spearman correlation coefficient
for the present work. First, as mentioned, there are a
tremendous number of ties in one of the rankings (namely
the ground truth ranking), and second, since we are more
concerned with certain regions of the rankings than others
(e.g., the top), we would like a natural way to measure
directly how many of the "important" ranking choices are
being made correctly. Given these goals, a more natural
measure is the Kruskal-Goodman Gamma statistic, Γ [13].
Definition 3. For orderings ≺_a and ≺_b, Γ(≺_a, ≺_b) is
(#concordant pairs - #discordant pairs) / (#concordant pairs + #discordant pairs),
computed over the pairs of documents that both orderings judge.
Intuitively, there are a certain number of document pairs,
and a given ordering only makes judgments about some of
those pairs. When comparing two orderings, we look only at
the pairs of documents that both orderings make a judgment
about. A value of 1 is perfect accord, 0 is the expected
value of a random ordering, and -1 indicates perfect reversed
accord. We claim that if two rankings ≺_a and ≺_b differ in
their Γ values with respect to a ground truth ≺_t, then the
ordering with the higher Γ will be the better ranking.
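As a concrete illustration of the statistic, the sketch below computes Γ between a system's similarity scores and a familial ground truth by counting concordant and discordant pairs among those judged by both orderings. The function signature and the example values are our own assumptions.

def gamma(sim_scores, familial_dist):
    """sim_scores: doc -> similarity to the source (higher = more similar).
    familial_dist: doc -> familial distance to the source (lower = more similar)."""
    docs = list(sim_scores)
    concordant = discordant = 0
    for i, a in enumerate(docs):
        for b in docs[i + 1:]:
            # Only pairs that BOTH orderings judge are counted, per Definition 3.
            if familial_dist[a] == familial_dist[b] or sim_scores[a] == sim_scores[b]:
                continue
            truth_prefers_a = familial_dist[a] < familial_dist[b]
            system_prefers_a = sim_scores[a] > sim_scores[b]
            if truth_prefers_a == system_prefers_a:
                concordant += 1
            else:
                discordant += 1
    total = concordant + discordant
    return 0.0 if total == 0 else (concordant - discordant) / total

# Perfect accord between the system scores and the familial distances gives 1.0.
print(gamma({'d1': 0.9, 'd2': 0.4, 'd3': 0.1}, {'d1': 0, 'd2': 1, 'd3': 3}))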
2.3 Regions of the Orderings
Thus, given a directory, a query document s, and a similarity
measure sim, we can construct two orderings (over
documents in the directory): the ground truth familial ordering
≺_f(s) and the ordering induced by our similarity
measure, ≺_sim(s). We can then calculate the corresponding Γ
value. This Γ value gives us a measure of the quality of the
ranking for that query document with respect to that similarity
measure and directory. However, we need to give a
sense of how good our rankings are across all query docu-
ments. In principle, we can directly extend the Γ statistic as
follows. We iterate s over all documents, aggregating all the
concordant and discordant pairs, and dividing by the total
number of pairs.
In order to more precisely evaluate our results, however,
we calculated three partial-Γ values that emphasized different
regions of the ordering. Each partial-Γ is based on the
fraction of correct comparable pairs of a certain type. Our
types are:
3 Of course the ranking system cannot make use of the directory
itself for this statement to hold.
4 Our ordering produces ties when two documents d1 and d2 have
exactly the same similarity to the source document s. When this
happens, it is nearly always because s is orthogonal to both d1
and d2 (similarity 0 to both).
Figure 2: Orderings obtained from two different parameter
settings with respect to the same source document. For contrast,
we give the best and the worst settings. For each document shown,
we give the rank, the similarity to the source document, and the
category (we omit the url of the document).
Source document: http://www.aabga.org
Source title: American Assoc. of Botanical Gardens and Arboreta
Source category: /home/gardens/clubs_and_associations
Settings: window size = 32, stem, dist and term weighting -
the top-ranked documents are all in /home/gardens/clubs_and_associations;
rank 50 (sim 0.07) is in /home/gardens/plants and
rank 100 (sim 0.06) is in /home/apartment_living/gardening.
Settings: window size = 0, no stem, no term weighting -
the top result is in /home/gardens/clubs_and_associations, followed by
business/industries/construction_and_maintenance and /recreation/travel/reservations;
rank 50 (sim 0.13) is in /recreation/travel/reservations and
rank 100 (sim 0.13) is in business/industries/construction_and_maintenance.
Sibling-Γ: Calculated from only pairs of documents (d1, d2)
where d1 was from the same class as the source document
and d2 was from a sibling class.
Cousin-Γ: Calculated from only pairs of documents (d1, d2)
where d1 was from the same class as the source document
and d2 was from a cousin class.
Unrelated-Γ: Calculated from only pairs of documents (d1, d2)
where d1 was from the same class as the source
document and d2 was from an unrelated class.
These partial-Γ values allowed us to inspect how various
similarity measures performed on various regions of the
rankings. For example, sibling-Γ performance indicates how
well fine distinctions are being made near the top of the familial
ranking, while unrelated-Γ performance measures how
well coarser distinctions are being made. Unrelated-Γ being
unusually low in relation to sibling-Γ is also a good indicator
of situations when the top of the list is high-quality
from a precision standpoint but many similar documents
have been ranked very low and therefore omitted from the
top of the list (almost always because the features were too
sparse, and documents that were actually similar appeared
to be orthogonal).
In Figure 2, we show an example that reflects our assumption
that larger values of the Γ statistic correspond to
parameter settings that yield better results.
3. DOCUMENT REPRESENTATION
In this section we will discuss the specific document representation
and term weighting options we chose to evaluate
using the technique outlined above. Let the Web document
u be represented by a bag B_u = {(w_u^1, f_u^1), (w_u^2, f_u^2), ...},
where the w_u^i are terms used in representing u (e.g., terms found
in the content and anchor-windows of u, or links to u), and
the f_u^i are corresponding weights. It now remains to discuss
which words should be placed in a document's bag, and with
what weight.
3.1 Choosing Terms
For both the content and anchor-based approaches, we
chose to remove all HTML comments, Javascript code, tags
(except 'alt' text), and non-alphabetic characters. A stop-word
list containing roughly 800 terms was also applied.
For the anchor-based approach, we must also decide how
many words to the left and right of an anchor A_vu (the
anchor linking from page v to page u) should be included in
B_u. We experimented with three strategies for this decision.
In all cases, the anchor-text itself of A_vu is included, as well
as the title of document u. The three strategies follow:
Basic: We choose some fixed window size W, and always
include W words to the left, and W words to the right,
of A_vu. 5 Specifically, we use W ∈ {0, 4, 8, 16, 32}. (A small
extraction sketch for this strategy follows this list.)
Syntactic: We use sentence, paragraph, and HTML-region-
detection techniques to dynamically bound the region
around Avu that gets included in Bu . The primary
document features that are capable of triggering a window
cut-off are paragraph boundaries, table cell boundaries,
list item boundaries, and hard breaks which follow
sentence boundaries. This technique resulted
in very narrow windows that averaged close to only 3
words in either direction.
Topical: We use a simple technique for guessing topic boundaries
at which to bound the region that gets included.
The primary features that trigger this bounding are
heading beginnings, list ends, and table ends. A particularly
common case handled by these windows was
that of documents composed of several regions, each
beginning with a descriptive header and consisting of a
list of urls on the topic of that header. Regions found
by the Topical heuristics averaged about 21 words in
size to either side of the anchor.
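The sketch below (referenced from the Basic strategy above) illustrates fixed-window anchor-text extraction: W non-stopword tokens on each side of the anchor, plus the anchor text itself. The tokenisation, the tiny stand-in stoplist, and the function name are assumptions for illustration only.

STOPWORDS = {'the', 'a', 'of', 'and', 'to', 'in', 'for'}   # stand-in for the ~800-term stoplist

def anchor_window(tokens, anchor_start, anchor_end, w):
    """tokens: tokenised source page v; [anchor_start, anchor_end) spans the anchor A_vu."""
    def take(seq, n):
        out = []
        for t in seq:
            if len(out) == n:
                break
            if t.lower() not in STOPWORDS:   # stopwords do not count toward the window
                out.append(t)
        return out
    left = take(reversed(tokens[:anchor_start]), w)
    right = take(tokens[anchor_end:], w)
    anchor_text = tokens[anchor_start:anchor_end]
    return list(reversed(left)) + anchor_text + right

tokens = "see the botanical garden association page for a list of member gardens".split()
print(anchor_window(tokens, 2, 6, w=2))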
3.2 Stemming Terms
We explored the effect of three different stemming variations:
Nostem: The term is left as is. If it appears in the stoplist,
it is dropped.
Stem: The term is stemmed using Porter's well known stemming
algorithm [22] to remove word endings. If the
stemmed version of the term appears in the stemmed
version of our stoplist, it is dropped.
Stopstem: The term is stemmed as above, for the purposes
of checking whether the term stem is in the stoplist.
If it is, the term is dropped, otherwise the original
unstemmed term is added to the bag.
The Stopstem variant is beneficial if it is the case that the
usefulness of a term can be determined by the properties of
its stem more accurately than by the properties of the term
itself.
5 Stopwords do not get counted when determining the window
cutoff.
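A minimal sketch of the three stemming policies follows; a trivial suffix-stripper stands in for Porter's algorithm and the three-word stoplist is only illustrative.

STOPWORDS = {'use', 'the', 'a'}           # stand-in for the real stoplist

def stem(word):                           # placeholder standing in for Porter stemming
    for suffix in ('ing', 'ed', 's'):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

STEMMED_STOPWORDS = {stem(w) for w in STOPWORDS}

def bag_term(term, policy):
    """Return the term to place in the bag under the given policy, or None if dropped."""
    if policy == 'nostem':
        return None if term in STOPWORDS else term
    if policy == 'stem':                  # stem the term; drop it if its stem is a stopword stem
        s = stem(term)
        return None if s in STEMMED_STOPWORDS else s
    if policy == 'stopstem':              # stem only to test stopword-ness; keep the original form
        return None if stem(term) in STEMMED_STOPWORDS else term
    raise ValueError(policy)

for policy in ('nostem', 'stem', 'stopstem'):
    print(policy, [bag_term(t, policy) for t in ('uses', 'gardens', 'gardening')])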
3.3 Term Weighting
A further consideration in generating document bags is
how a term's frequency should be scaled. A clear benefit of
the TF.IDF family of weighting functions is that they attenuate
the weight of terms with high document frequency.
These monotonic term weighting schemes, however, amplify
the weight of terms with very low document frequency. This
amplification is in fact good for ad-hoc queries, where a rare
term in the query should be given the most importance. In
the case where we are judging document similarities, rare
terms are much less useful as they are often typos, rare
names, or other nontopical terms that adversely affect the
similarity measure. Therefore, we also experimented with
nonmonotonic term-weighting schemes that attenuate both
high and low document-frequency terms. The idea that mid-frequency
terms have the greatest "resolving power" is not
new [23, 20]. We call such schemes nonmonotonic document
frequency (NMDF) functions.
Another component of term weighting that we consider,
and which has a substantial impact on our quality metric, is
distance weighting. When using an anchor-based approach
of a given window size, instead of treating all terms near an
anchor Avu equally, we can weight them based on their distance
from the anchor (with anchor-words themselves given
distance 0). As we will see in Section 5, the use of a distance-based
attenuation function in conjunction with large anchor-
windows significantly improves results under our evaluation
measure.
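The sketch below illustrates distance-based attenuation of anchor-window terms. The specific 1/log2(2 + d) decay is our assumption for illustration; the paper only states that weights are scaled by a log-based function of distance, with anchor words at distance 0.

import math

def distance_weight(d):
    """Attenuation for a term d words away from the anchor (anchor words have d = 0)."""
    return 1.0 / math.log2(2 + d)

def weighted_window(window_terms):
    """window_terms: (term, distance) pairs from one anchor-window; returns a weighted bag."""
    bag = {}
    for term, d in window_terms:
        bag[term] = bag.get(term, 0.0) + distance_weight(d)
    return bag

print(weighted_window([('botanical', 0), ('gardens', 0), ('association', 1), ('list', 9)]))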
4. DOCUMENT SIMILARITY METRIC
The metric we use for measuring the similarity of document
bags is the Jaccard coefficient. The Jaccard coefficient
of two sets A and B is defined as
sim_J(A, B) = |A ∩ B| / |A ∪ B|.
In the previous section we explained how we represent Web
documents using bags (i.e. multisets). For the purposes of
this paper we extend Jaccard from sets to bags by applying
bag union and bag intersection. This is done by taking
the max and min multiplicity of terms, for the union and
intersection operations, respectively.
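For concreteness, a minimal implementation of this bag-level Jaccard coefficient (min multiplicities for intersection, max for union) might look as follows; the dictionary representation of bags is an assumption.

def jaccard_bags(bag_a, bag_b):
    """Bags are dicts mapping term -> multiplicity (or weight)."""
    terms = set(bag_a) | set(bag_b)
    inter = sum(min(bag_a.get(t, 0), bag_b.get(t, 0)) for t in terms)
    union = sum(max(bag_a.get(t, 0), bag_b.get(t, 0)) for t in terms)
    return inter / union if union else 0.0

print(jaccard_bags({'garden': 2, 'plant': 1}, {'garden': 1, 'plant': 1, 'shop': 1}))  # 2/4 = 0.5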
The reasons that we focus on the Jaccard measure rather
than the classical cosine measure are mainly scalability con-
siderations. For scaling our similarity-search technique to
massive document datasets we rely on the Min-Hashing tech-
nique. The main idea here is to hash the Web documents
such that the documents that are similar, according to our
similarity measure, are mapped to the same bucket with a
probability equal to the similarity between them. Creating
such a hash function for the cosine measure is to our knowledge
an open problem. On the other hand, creating such
hashes is possible for the Jaccard measure (see [5]).
We used our evaluation methodology to verify that the
Jaccard coefficient and the cosine measure yield comparable
results. 6 Further evidence for the intuitive appeal of our
measure is provided in [19], where the Jaccard coefficient
outperforms all competitor measures for the task of defining
similarities between words. Note that the bulk of the work
presented here does not depend on whether Jaccard or cosine
is used; only in Section 7 do we require the use of the Jaccard
coefficient.
6 We omit the description of these experiments as it is not the
focus of our work.
5. EXPERIMENTAL RESULTS OF
PARAMETER EVALUATION
For evaluating the various strategies discussed in Section
3, we employ the methodology described in Section 2.
We sampled Open Directory [21] to get 300 pairs of clusters
from the third level in the hierarchy, as depicted previously
in Figure 1. 7 As our source of data, we used a Web crawl
from the Stanford WebBase containing 42 million pages [15].
Of the urls in the sample clusters, 51,469 of them were linked
to by some document in our crawl, and could thus be used
by our anchor-based approaches. These test-set urls were
linked to by close to 1 million pages in our repository, all
of which were used to support the anchor based strategy
we studied. 8 This section describes the evaluation of the
strategies suggested in Section 3.
We verified that all three of our measures yield, with
very few exceptions, the same relative order of parameter
settings. In a sense, this agreement is an indication of the
robustness of our measures. Here we report the results
only for the sibling-Γ statistic. The graphs for the cousins-Γ
and unrelated-Γ measures behave similarly.
For some of the graphs shown in this section the difference
of scores between different parameter settings might seem
quite small, i.e., in the second decimal digit. Notice, however, that
in each graph we explore the effect of a single "parameter dimension"
independently, so when we add up the effect on all
"parameter dimensions" the difference becomes substantial.
5.1 Results: Choosing Terms
The sibling-Γ values when bags are generated using various
anchor-window sizes, using Topical and Syntactic window
bounding, using purely links, and using purely page
contents, are given in Figure 3.
The results for an anchor-based approach using large windows
provide the best results according to our evaluation
criteria. This may seem counterintuitive; by taking small
windows around the anchor, we would expect fewer spurious
words to be present in a document's bag, providing a
more concise representation. Further experiments revealed
why, in fact, larger windows provide benefit. Figure 4 shows
the fraction of document pairs within the same Open Directory
cluster that are orthogonal (i.e., no common words) under
a given representation. We see that with smaller window
sizes, many documents that should be considered similar are
in fact orthogonal. In this case, no amount of reweighting
or scaling can improve results; the representations simply do
not provide enough accessible similarity information about
these orthogonal pairs. We also see that, under the content
and link approaches, documents in the same cluster are
largely orthogonal. Under the link-based approach, most of
the documents within a cluster are pairwise orthogonal, revealing
a serious limitation of a purely link-based approach.
Incoming links can be thought of as being opaque descriptors.
7 Any urls present below the third level were collapsed into their
third level ancestor category.
8 ODP pages themselves were of course excluded from the data set
to avoid bias. Furthermore, the high orthogonality figures for the
link-based approach, shown in Figure 4, show that partial ODP
mirrors could not have had a significant impact on our results.
Figure 3: Document representations. Larger fixed
anchor windows always gave better results, but topical
dynamic windows achieved similar results with
shorter average window size.
Figure 4: Intracluster orthogonality (fraction of pairs that are
orthogonal) for various anchor window types. Small windows and
pure links resulted in document bags which were largely orthogonal,
making similarity hard to determine.
If two pages have many inlinks, but the intersection of
their inlinks is empty, we can say very little about these two
pages. 9 It may be that they discuss the same topic, but because
they are new, they are never cocited. In the case of the
anchor-window-based approach, the chance that the bags
for the two pages are orthogonal is much lower. Each inlink,
instead of being represented by a single opaque url, is represented
by the descriptive terms that are the constituents of
the inlink. Note that the pure link based approach shown is
very similar to the Cocitation Algorithm of [12]. 10
We also experimented with dynamically sized Syntactic
and Topical windows, as described in Section 3. These window
types behave roughly according to their average window
size, both in Γ values and orthogonality. Surprisingly,
although the dynamic-window heuristics appeared to be effective
in isolating the desired regions, any increase in region
quality was overwhelmed by the trend of larger windows providing
better results. 11
9 Using the SVD we could potentially glean some information in a
pure link approach despite orthogonality, assuming enough linkage [12].
10 Furthermore, we verified that the Cocitation Algorithm as described
in [12] yields similar scores to the scores for the 'links'
strategy shown above.
Figure 5: Hybrid bag types. Adding documents'
own contents gave better results than anchor-windows
alone, though adding link IDs lowered gamma.
In addition to varying window size, we can also choose to
include terms of multiple types (anchor, content, or links,
as described in Section 3) in our document representation.
Figure 5 shows that by combining content and anchor-based
bags, we can improve the sibling-Γ score. 12 The intuition for
this variation is that if a particular document has very few
incoming links then the document's contents will dominate
the bags. Otherwise, if the document has many incoming
links the anchor-window-based terms will dominate. In this
way, the document's bag will automatically depend on as
much information as is available.
5.2 Results: Term Weighting
In the previous section, we saw that the anchor-based
approach with large windows performs the best. Our initial
intuition, however, that smaller windows would provide
a more concise representation is not completely without
merit. In fact, we can improve performance substantially
under our evaluation criteria by weighting terms based on
their distance from the anchor. We prevent ourselves from
falling into the trap of making similar documents appear orthogonal
(small windows), while at the same time, not giving
spurious terms too much weight (large windows). Figure 6
shows the results when term weights are scaled by a log-based
attenuation function of each term's distance from the anchor.
The results for frequency based weighting, shown in Figure
7, suggest that attenuating terms with low document
frequency, in addition to attenuating terms with high document
frequency (as is usually done), can increase performance.
Let tf be a term's frequency in the bag, and df be
the term's overall document frequency. Then in Figure 7,
log refers to weighting tf with a logarithmic attenuation in df,
sqrt refers to weighting tf with a square-root attenuation in df,
and NMDF refers to weighting tf with a gaussian of log(df) (see Figure 8).
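The sketch below shows one way such an NMDF weight could be implemented: a gaussian in log document frequency, so that both very rare and very common terms are attenuated. The centre and width parameters are illustrative assumptions, not the paper's fitted values.

import math

def nmdf_weight(tf, df, mu=math.log(1000.0), sigma=2.0):
    """Scale term frequency tf by a gaussian in log(df); mu and sigma are illustrative."""
    return tf * math.exp(-0.5 * ((math.log(df) - mu) / sigma) ** 2)

for df in (2, 100, 1_000, 100_000, 5_000_000):
    print(df, round(nmdf_weight(1.0, df), 3))   # mid-frequency terms get the largest weight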
5.3 Results: Stemming
11 However, the gap was substantially closed for high inlink pages.
12 All values in Figure 5 were generated with the distance-based
weighting scheme to be described.
Figure 6: Frequency and distance weighting improved results,
and further improved results when combined.
Figure 7: Types of frequency weighting: sqrt gave
the best results of the monotonic frequency weighting
schemes; NMDF gave slightly better results.
We now investigate the effects of our three stemming approaches.
Figure 9 shows the sibling-Γ values for the Nostem,
Stopstem, and Stem parameter settings. We see that
Stopstem improves the Γ value, and that Stem provides
an additional (although much less statistically significant 13)
improvement. As mentioned in Section 3.2, the effect of
Stopstem over Nostem is to increase the effective reach of
the stopword list. Words that are not themselves detected
as stopwords, yet share a stem with another word that was
detected as a stopword, will be removed. The small additional
impact of Stem over Stopstem is due to collapsing
word variants into a single term.
6. SCALING TO LARGE REPOSITORIES
We assume that we have selected the parameters that
maximize the quality of our similarity measure as explained
in Section 2. We now discuss how to efficiently find similar
documents from the Web as a whole.
13 The Nostem-Stopstem and Stem-Stopstem average
differences are of the same approximate magnitude; however, the
pairwise variance of the Stem-Stopstem differences is extremely high in comparison
to the other.
Figure 8: Non-monotonic document frequency (NMDF) weighting,
as a function of document frequency.
Figure 9: Stemming variants: stemming gave the best results.
Definition 4. Two documents are ρ-similar if the Jaccard
coefficient of their bags is greater than ρ.
Problem 2. SimilarDocument (efficiency considerations):
Preprocess a repository of the Web W so that for
each query Web-document q in W all Web documents in W
that are ρ-similar to q can be found efficiently.
In this section, we develop a scalable algorithm, called IndexAllSimilar,
to solve the above problem for a realistic
Web repository size.
In tackling Problem 2, there is a tradeoff between the
work required during the preprocessing stage and the work
required at query time to find the documents ρ-similar to
q. We have explored two approaches. Note that since q is
chosen from W, all queries are known in advance. Using
this property, we showed in previous work ([14]) how to efficiently
precompute and store the answers for all possible
queries. In this case, the preprocessing stage is compute-
intensive, while the query processing is a trivial disk lookup.
An alternative strategy, which we discuss in detail in this
section, builds a specialized index during preprocessing, but
delays the similarity computation until query time. As we
will describe, the index is compact, and can be generated
very efficiently, allowing us to scale to large repositories with
modest hardware resources. Furthermore, the computation
required at query time is reasonable.
Figure 10: Schematic view of our approach. In the preprocessing
stage, the Web repository is parsed into bag fragments, which are
merged into bags; MH-signatures are then extracted into index H
and an inverted index I. These indexes are used during query processing.
A schematic view of the IndexAllSimilar algorithm is
shown in Figure 10. In the next two sections, we explain
IndexAllSimilar as a two stage algorithm. In the first
stage we generate bags for each Web document in the repository.
In the second stage, we generate a vector of signatures,
known as Min-hash signatures, for each bag, and index them
to allow efficient retrieval both of document ids given signatures,
and of the signatures given document ids.
6.1 Bag Generation
As we explained in the previous sections, the bag of each
document contains words (i) from the content text of the
document and (ii) from anchor-windows of other documents
that point to it. Our bag generation algorithm scans through
the Web repository and produces bag fragments for each doc-
ument. For each document there is at most one content bag
fragment and possibly many anchor bag fragments. After all
bag fragments are generated, we sort and collapse them to
form bags for the urls, apply our NMDF scaling as discussed
in Section 3.3, and finally normalize the frequencies to sum
to a constant.
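A minimal sketch of this merge-scale-normalise step is given below; the fragment representation, the injected weighting function, and the normalisation constant are assumptions for illustration.

from collections import defaultdict

def merge_fragments(fragments, df, weight, total=1000.0):
    """fragments: iterable of (url, {term: freq}); df: term -> document frequency;
    weight: callable (tf, df) -> scaled weight (e.g. an NMDF function)."""
    bags = defaultdict(lambda: defaultdict(float))
    for url, frag in fragments:                       # collapse all fragments per url
        for term, freq in frag.items():
            bags[url][term] += freq
    merged = {}
    for url, bag in bags.items():
        scaled = {t: weight(f, df.get(t, 1)) for t, f in bag.items()}
        s = sum(scaled.values()) or 1.0
        merged[url] = {t: total * v / s for t, v in scaled.items()}   # normalise to a constant sum
    return merged

# Example: one content fragment and one anchor fragment for the same url.
print(merge_fragments([('u', {'garden': 2.0}), ('u', {'garden': 1.0, 'plant': 1.0})],
                      {'garden': 100, 'plant': 10}, weight=lambda tf, df: tf))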
6.2 Generation of the Document Similarity Index
For the description of the Document Similarity Index (DSI),
recall that each document u is represented by a bag of words
B_u = {(w_u^1, f_u^1), (w_u^2, f_u^2), ...},
where the w_u^i are the words found in the content and anchor text
of the document, and the f_u^i are the corresponding normalized
frequencies (after scaling with the NMDF function).
There exists a family H of hash functions (see [7]) such
that for each pair of documents u, v we have
Pr[h(u) = h(v)] = sim_J(u, v),
where the hash function h is chosen at
random from the family H and sim_J(u, v) is the Jaccard
similarity between the two documents' bags. The family H
is defined by imposing a random order on the set of all words
and then representing each url u by the lowest rank (according
to that random order) element from B_u. In practice, it is
quite inefficient to generate a fully random permutation of all
words. Therefore, Broder et al. [7] use a family of random
linear functions of the form h(x) = (a·x + b) mod P. We use
the same approach (see Broder et al. [6] and Indyk [16] for
the theoretical background of this technique).
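The following sketch illustrates the construction: random linear hash functions of the form (a·x + b) mod P applied to (unweighted) term sets, with the Jaccard similarity estimated as the fraction of matching signature positions. The prime P, the term-to-integer mapping, and the use of plain sets rather than weighted bags are simplifying assumptions.

import random

P = 2_147_483_647                      # a large prime (2**31 - 1), an illustrative choice

def make_hash_functions(m, seed=0):
    rng = random.Random(seed)
    return [(rng.randrange(1, P), rng.randrange(0, P)) for _ in range(m)]

def mh_signature(terms, hash_functions):
    ids = [hash(t) % P for t in terms]               # map terms to integers (assumption)
    return [min((a * x + b) % P for x in ids) for a, b in hash_functions]

def estimated_jaccard(sig_u, sig_v):
    return sum(x == y for x, y in zip(sig_u, sig_v)) / len(sig_u)

hfs = make_hash_functions(m=80)
sig_u = mh_signature({'botanical', 'garden', 'association', 'plants'}, hfs)
sig_v = mh_signature({'botanical', 'garden', 'plants', 'shop'}, hfs)
print(estimated_jaccard(sig_u, sig_v))               # close to the true set Jaccard 3/5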
Based on the above property, we can compute for each bag
a vector of Min-hash signatures (MH-signatures) such that
the same value of the i-th MH-signature of two documents
indicates similar documents. In particular, if we generate a
vector mh_u of m MH-signatures for each document u, the
expected fraction of the positions in which the two documents
share the same MH-signatures is equal to the Jaccard
similarity of the document bags.
Algorithm: ProcessQuery
Input: Query document q
Output: Similar documents
mh_q := H[q]                       /* fetch the MH-vector for q */
For each j from 1 to m             /* iterate over mh_q */
  For each doc_u in I[j][mh_q[j]]  /* documents with the same j'th MH-signature as q */
    sim[doc_u] := sim[doc_u] + 1
Sort the set of docids {doc_i} by their sim scores sim[doc_i]
Output the sorted list
Figure 11: Query Processing.
We generate two data structures on disk. The first, H,
consecutively stores mh_u for each document u (i.e., the m
4-byte MH-signatures for each document). Since our document
ids are consecutively assigned, fetching these signatures
for any document, given the document id, requires
exactly 1 disk seek to the appropriate offset in H, followed
by a sequential read of m 4-byte signatures. The second
structure, I, is generated by inverting the first. For each
position j in an MH-vector, and each MH-signature h that
appears in position j in some MH-vector, I[j][h] is a list containing
id's for every document u such that mh_u[j] = h.
The algorithm for retrieving the ranked list of documents
ρ-similar to the query document q, using the indexes H and
I, is given in Figure 11.
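A compact rendering of the query-time step in Figure 11 is sketched below, with H as a map from document id to signature vector and I as a list of per-position dictionaries; the threshold handling and data-structure choices are our assumptions.

from collections import Counter

def process_query(q, H, I, m, rho):
    """H: doc id -> list of m MH-signatures; I: per-position dict, I[j][h] -> doc ids."""
    sig_q = H[q]
    shared = Counter()
    for j in range(m):
        for doc in I[j].get(sig_q[j], ()):            # docs sharing the j'th signature with q
            if doc != q:
                shared[doc] += 1
    # shared[doc] / m estimates the Jaccard similarity between doc and q
    return [(doc, c / m) for doc, c in shared.most_common() if c / m >= rho]

# Tiny hand-built demonstration with m = 3 signature positions.
H = {'q': [5, 9, 2], 'd1': [5, 9, 7], 'd2': [1, 9, 2]}
I = [{}, {}, {}]
for doc, sig in H.items():
    for j, h in enumerate(sig):
        I[j].setdefault(h, []).append(doc)
print(process_query('q', H, I, m=3, rho=0.5))         # d1 and d2 each share 2/3 positions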
When constructing the indexes H and I, the choice of m
needed to ensure w.h.p. that documents that are ρ-similar
to the query document are retrieved by ProcessQuery depends
solely on ρ; in particular, it is shown in [7] that the
choice of m is independent of the number of documents, as
well as the size of the lexicon. Since we found in previous
experiments that documents within an Open Directory category
have similarity of at least 0.15, we chose ρ = 0.15; one
can safely choose this value of ρ [10]. 14
7. EXPERIMENTAL RESULTS
We employed the strategies that produced the best values
(see Section 5) in conjunction with the scalable algorithm
we described above (see Section 6) to run an experiment
on a sizable web repository. In particular we used
anchor-windows with distance and frequency term
weighting, stemming, and with content terms included. We
provide a description of our dataset and the behavior of our
algorithms, as well as a few examples from the results we
obtained.
7.1 Efficiency Results
The latest Stanford WebBase repository comes from a crawl performed in January 2001.
For our large scale experiment, we used a 45 million page
subset, which generated bags for 75 million urls. After
merging all bag fragments, we generated 80 MH-signatures,
each 4 bytes long, for each of the 75 million document bags.
14 We chose ρ and m heuristically; the properties of the Web as
a whole differ from those of Open Directory. Given additional
resources, decreasing ρ and increasing m would be appropriate.
Figure 12: Timing results and space usage.
Algorithm step                            Time
Generation of bag fragments               24 hours
Merging of anchor-bag fragments           8 hours
MH-signature generation                   22 hours
Query processing                          < 3 seconds
Type of data                              Space
Web repository (45M pages, compressed)    100 GB
Merged bags                               42 GB
MH-signatures (H)                         24 GB
Inverted MH-signatures (filtered) (I)     5 GB
Three machines, each AMD-K6 550MHz, were used to
process the web repository in parallel to produce the bag
fragments. The subsequent steps (merging of fragments,
MH-signature generation, and query processing) took place
on a dual Pentium-III 933 MHz with 2 GB of main memory.
The timing results of the various stages and index sizes are
given in Figure 12. The query processing step is dominated
by the cost of accessing I, the smaller of the on-disk indexes.
To improve performance, we filtered I to remove urls of low
indegree (3 or fewer inlinks). Note that these urls remain
in H, so that all urls can appear as queries; some simply
will not appear in results. Of course at a slight increase in
query time (or given more resources), I need not be filtered
in this way. Also note that if I is maintained wholly in
main-memory (by partitioning it across several machines,
for instance), the query processing time drops to a fraction
of a second.
7.2 Quality of Retrieved Documents
Accurate comparisons with existing search engines are difficult,
since one needs to make sure both systems use the
same web document collection. We have found however,
that the "Related Pages" functionality of commercial search
engines often returns navigationally, as opposed to topically,
similar results. For instance, www.msn.com is by some criteria
similar to moneycentral.msn.com. They are both part of
Microsoft MSN; however the former would not be a very useful
result for someone looking for other nancial sites. We
claim that the use of our evaluation methodology has led us
to the use of strategies that reflect the notion of "similarity"
embodied in the popular ODP directory. For illustration, we
have provided some sample queries in Figure 13. In Figure 14
we have given the top 10 words (by weight) in the bags for
these query urls. 15
8. RELATED WORK
Most relevant to our work are algorithms for the "Related
Pages" functionality provided by several major search
engines. Unfortunately, the details of these algorithms are
not publicly available. Dean et al. [12] propose algorithms,
which we discussed in Sections 1 and 5.1, for finding related
pages based on the connectivity of the Web only and not
on the text of pages. The idea of using hyperlink text for
document representation has been exploited in the past to
attack a variety of IR problems [1, 3, 8, 9, 11, 18].
15 For display, the terms were unstemmed with the most commonly
occurring variant.
The novelty of our paper, however, consists in the fact that we
do not make any a priori assumption about what are the
best features for document representation. Rather, we develop
an evaluation methodology that allows us to select
the best features from among a set of different candidates.
Approaches algorithmically related to the ones presented in
Section 6 have been used in [7, 4], although for the different
problem of identifying mirror pages.
9.
ACKNOWLEDGMENTS
We would like to thank Professor Chris Manning, Professor
Jeff Ullman, and Mayur Datar for their insights and
invaluable feedback.
10.
--R
Using Common Hypertext Links to Identify the Best Phrasal Description of Target Web Documents.
Categorization by context.
The Anatomy of a Large-Scale Hypertextual Web Search Engine
Filtering Near-duplicate Documents
On the Resemblance and Containment of Documents.
Syntactic Clustering of the Web.
Enhanced Hypertext Categorization Using Hyperlinks.
Automatic Resource Compilation by Analyzing Hyperlink Structure and Associated Text.
Finding Interesting Associations without Support Pruning.
Topical Locality in the Web.
Finding Related Pages in the World Wide Web.
Measures of association for cross classifications.
Scalable Techniques for Clustering the Web.
A Repository of Web Pages.
A Small Minwise Independent Family of Hash Functions.
Data clustering: A review.
Authoritative sources in a hyperlinked environment.
Measures of Distributional Similarity.
The Automatic Creation of Literature Abstracts.
Open Directory Project (ODP).
An Algorithm for Suffix Stripping.
Introduction to Modern Information Retrieval.
Nonparametric Statistics for the Behavioral Sciences.
A comparison of document clustering techniques.
--TR
Enhanced hypertext categorization using hyperlinks
Min-wise independent permutations (extended abstract)
Syntactic clustering of the Web
Automatic resource compilation by analyzing hyperlink structure and associated text
The anatomy of a large-scale hypertextual Web search engine
Finding related pages in the World Wide Web
Authoritative sources in a hyperlinked environment
Data clustering
Topical locality in the Web
Introduction to Modern Information Retrieval
On the Resemblance and Containment of Documents
--CTR
Ullas Nambiar , Subbarao Kambhampati, Answering imprecise database queries: a novel approach, Proceedings of the 5th ACM international workshop on Web information and data management, November 07-08, 2003, New Orleans, Louisiana, USA
Ullas Nambiar , Subbarao Kambhampati, Providing ranked relevant results for web database queries, Proceedings of the 13th international World Wide Web conference on Alternate track papers & posters, May 19-21, 2004, New York, NY, USA
Ana G. Maguitman , Filippo Menczer , Heather Roinestad , Alessandro Vespignani, Algorithmic detection of semantic similarity, Proceedings of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan
Kumiko Tanaka-Ishii , Hiroshi Nakagawa, A multilingual usage consultation tool based on internet searching: more than a search engine, less than QA, Proceedings of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan
Gang Luo , Chunqiang Tang , Ying-li Tian, Answering relationship queries on the web, Proceedings of the 16th international conference on World Wide Web, May 08-12, 2007, Banff, Alberta, Canada
Wang Da-Zhen , Chen Yu-Hui, Near-replicas of web pages detection efficient algorithm based on single MD5 fingerprint, Proceedings of the 8th Conference on 8th WSEAS International Conference on Automation and Information, p.318-320, June 19-21, 2007, Vancouver, British Columbia, Canada
Dmitri Roussinov , Leon J. Zhao , Weiguo Fan, Mining context specific similarity relationships using the world wide web, Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, p.499-506, October 06-08, 2005, Vancouver, British Columbia, Canada
Ullas Nambiar , Subbarao Kambhampati, Mining approximate functional dependencies and concept similarities to answer imprecise queries, Proceedings of the 7th International Workshop on the Web and Databases: colocated with ACM SIGMOD/PODS 2004, June 17-18, 2004, Paris, France
Jaroslav Pokorny, Web Searching and Information Retrieval, Computing in Science and Engineering, v.6 n.4, p.43-48, July 2004
Ya Zhang , Chao-Hsien Chu , Xiang Ji , Hongyuan Zha, Correlating summarization of multi-source news with k-way graph bi-clustering, ACM SIGKDD Explorations Newsletter, v.6 n.2, p.34-42, December 2004
Ronald Fagin , Ravi Kumar , Mohammad Mahdian , D. Sivakumar , Erik Vee, Comparing and aggregating rankings with ties, Proceedings of the twenty-third ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 14-16, 2004, Paris, France
Moisés G. de Carvalho, Marcos André Gonçalves, Alberto H. F. Laender, Altigran S. da Silva, Learning to deduplicate, Proceedings of the 6th ACM/IEEE-CS joint conference on Digital libraries, June 11-15, 2006, Chapel Hill, NC, USA
Quanzhi Li , Yi-fang Brook Wu, People search: Searching people sharing similar interests from the Web, Journal of the American Society for Information Science and Technology, v.59 n.1, p.111-125, January 2008
Carlo Bellettini , Alessandro Marchetto , Andrea Trentini, WebUml: reverse engineering of web applications, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
Baoning Wu , Vinay Goel , Brian D. Davison, Topical TrustRank: using topicality to combat web spam, Proceedings of the 15th international conference on World Wide Web, May 23-26, 2006, Edinburgh, Scotland
Sariel Har-Peled , Vladlen Koltun , Dezhen Song , Ken Goldberg, Efficient algorithms for shared camera control, Proceedings of the nineteenth annual symposium on Computational geometry, June 08-10, 2003, San Diego, California, USA
Dániel Fogaras, Balázs Rácz, Scaling link-based similarity search, Proceedings of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan
M. Eirinaki , M. Vazirgiannis , I. Varlamis, SEWeP: using site semantics and a taxonomy to enhance the Web personalization process, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
Steven M. Beitzel , Eric C. Jensen , Abdur Chowdhury , David Grossman, Using titles and category names from editor-driven taxonomies for automatic evaluation, Proceedings of the twelfth international conference on Information and knowledge management, November 03-08, 2003, New Orleans, LA, USA
Filippo Menczer, Mapping the Semantics of Web Text and Links, IEEE Internet Computing, v.9 n.3, p.27-36, May 2005
Maria Halkidi , Benjamin Nguyen , Iraklis Varlamis , Michalis Vazirgiannis, THESUS: Organizing Web document collections based on link semantics, The VLDB Journal The International Journal on Very Large Data Bases, v.12 n.4, p.320-332, November
Junghoo Cho , Hector Garcia-Molina , Taher Haveliwala , Wang Lam , Andreas Paepcke , Sriram Raghavan , Gary Wesley, Stanford WebBase components and applications, ACM Transactions on Internet Technology (TOIT), v.6 n.2, p.153-186, May 2006
Mayank Bawa , Tyson Condie , Prasanna Ganesan, LSH forest: self-tuning indexes for similarity search, Proceedings of the 14th international conference on World Wide Web, May 10-14, 2005, Chiba, Japan
Gurmeet Singh Manku , Arvind Jain , Anish Das Sarma, Detecting near-duplicates for web crawling, Proceedings of the 16th international conference on World Wide Web, May 08-12, 2007, Banff, Alberta, Canada
Ping Li , Kenneth W. Church, A Sketch Algorithm for Estimating Two-Way and Multi-Way Associations, Computational Linguistics, v.33 n.3, p.305-354, September 2007
P. Ferragina , A. Gulli, A personalized search engine based on Web-snippet hierarchical clustering, SoftwarePractice & Experience, v.38 n.2, p.189-225, February 2008 | evaluation;open directory project;related pages;search;similarity search |
511509 | An event-condition-action language for XML. | XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting such functionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of ECA rules is how to statically predict their run-time behaviour. In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases. |
XML is becoming a dominant standard for storing and exchanging information. With its increasing
use in areas such as data warehousing and e-commerce [13, 14, 18, 22, 28], there is a rapidly growing
need for rule-based technology to support reactive functionality on XML repositories. Event-condition-
action (ECA) rules are a natural candidate to support such functionality. In contrast to implementing
reactive functionality directly within a programming language such as Java, ECA rules have a high
level, declarative syntax and are thus more easily analysed. Furthermore, many commercial and research
systems based on ECA rules have been successfully built and deployed, and thus their implementation
within system architectures is well-understood.
ECA rules automatically perform actions in response to events provided stated conditions hold.
They are used in conventional data warehouses for incremental maintenance of materialised views, for
validation and cleansing of the input data streams, and for maintaining audit trails of the data. By
analogy, ECA rules can also be used as an integrating technology for providing this kind of reactive
functionality on XML repositories. They can also be used for checking key and other constraints on
XML documents, and for performing automatic repairs when violations are detected. For a 'push' type
environment, they are a mechanism for automatically broadcasting information to subscribers as the
contents of relevant documents change. They can also be employed as a flexible means for maintaining
statistics about document and web site usage and behaviour. In this paper, we present a language in
which ECA rules on XML can be defined.
ECA rules have been used in many settings, including active databases [26, 29], workflow manage-
ment, network management, personalisation and publish/subscribe technology [3, 13, 14, 17, 27], and
specifying and implementing business processes [2, 16, 22]. However, one of the key recurring themes
regarding the successful deployment of ECA rules is the need for techniques and tools for analysing their
behaviour [15, 23]. When multiple ECA rules are defined within a system, their interactions can be
difficult to analyse, since the execution of one rule may cause an event which triggers another rule or set
of rules. These rules may in turn trigger further rules and there is indeed the potential for an infinite
cascade of rule firings to occur. Thus, the second part of this paper explores techniques for analysing the
behaviour of a set of ECA rules defined in our language.
Other ECA rule languages for XML have been proposed in [13, 14, 22] but none of these focus on
analysing the behaviour of the ECA rules. It is proposed in [13] that analysis techniques developed for
conventional active databases can be applied in an XML setting too, but details are not given.
A more recent paper [12] also defines an active rule language for XML, but is not concerned with rule
analysis. The rule syntax it describes is similar to the one we define here, the rule format being based
on the definition of triggers in SQL3. Its rule execution semantics is rather different from the model we
adopt, however. Generally speaking, insertions and deletions of XML data (so-called bulk statements)
may involve document fragments of unbounded size. [12] describes a semantics whereby each (top-level)
update is decomposed into a sequence of smaller updates (which depend on the contents of the fragment
being inserted/deleted) and then trigger execution is interleaved with the execution of these smaller
updates. In contrast, we treat each top-level update as atomic and trigger execution is invoked only after
completion of the top-level update. In general, these semantics may produce different results for the
same top-level update and it is a question of future research to determine their respective suitability in
different applications.
In an earlier paper [7] we described some initial proposals for analysis, and also optimisation, of
XML ECA rules. The language we discussed there was less expressive than the language we propose
here in that it did not allow disjunction or negation in rule conditions. Moreover, here we examine more
deeply the triggering and activation relationships between rules and derive more precise tests for
determining both of these relationships than the tests we described in [7].
2 The ECA Rule Language
An XML database consists of a set of XML documents. Event-condition-action (ECA) rules on XML
databases take the following form:
on event
if condition
do actions
Rather than introducing yet another query language for XML, we use the XPath [32] and XQuery [33]
languages to specify events, conditions and actions within our ECA rules. XPath is used in a number
of W3C recommendations, such as XPointer, XSLT and XQuery itself, for selecting and matching parts
of XML documents and so is well-suited to the requirements of ECA rules. XQuery is used in our ECA
rules only where there is a need to be able to construct new fragments of XML. We define each of the
components of our ECA rule language below, give some example rules, and describe the rule execution
semantics.
2.1 Rule Events
The event part of an ECA rule is an expression of the form
INSERT e
or
DELETE e
where e is a simple XPath expression (defined in Section 2.4 below) which evaluates to a set of nodes.
The rule is said to be triggered if this set of nodes includes any node in a new sub-document, in the case
of an insertion, or in a deleted sub-document, in the case of a deletion.
The system-defined variable $delta is available for use within the condition and actions parts of the
rule (see below), and its set of instantiations is the set of new or deleted nodes returned by e.
2.2 Rule Conditions
The condition part of an ECA rule is either the constant TRUE, or one or more simple XPath expressions
connected by the boolean connectives and, or, not.
The condition part of an ECA rule is evaluated on each XML document in the database which has
been changed by an event of the form specified in the rule's event part. If the condition references the
system-defined variable $delta, it is evaluated once for each instantiation of $delta for each document.
Otherwise, the condition is evaluated just once for each document.
2.3 Rule Actions
The actions part of an ECA rule is a sequence of one or more actions:
action_1 ; . . . ; action_n
These actions are executed on each XML document which has been been changed by an event of the
form specified in the rule's event part and for which the rule's condition query evaluates to True - we
call this set of documents the rule's set of candidate documents.
An ECA rule is said to fire if its set of candidate documents is non-empty.
Each action_i above is an expression of the form
INSERT r BELOW e [BEFORE | AFTER q]
or
DELETE e
or an XPath qualifier - see Sections 2.4 and 2.5 below for definitions of the italicised terms.
In an INSERT action, the expression e specifies the set of nodes, N , immediately below which new
sub-document(s) will be inserted. These sub-documents are specified by the expression r 1 . If e or r
references the $delta variable then one sub-document is constructed for each instantiation of $delta for
which the rule's condition query evaluates to True. If neither e nor r references $delta then a single
sub-document is constructed 2 .
q is an optional XPath qualifier which is evaluated on each child of each node n 2 N . For insertions
of the form AFTER q, the new sub-document(s) are inserted after the last sibling for which q is True,
while for insertions of the form BEFORE q, the new sub-document(s) are inserted before the first sibling
for which q is True. The order in which new sub-documents are inserted is non-deterministic.
In a DELETE action, expression e specifies the set of nodes which will be deleted (together with their
sub-documents). Again, e may reference the $delta variable.
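As an illustration of these execution semantics (candidate documents, document-level versus instance-level triggering), the sketch below models a rule whose condition and actions are Python callables over a document and a $delta binding. It is a schematic rendering of the semantics described above, not the authors' implementation; real rules would evaluate XPath/XQuery expressions, and the class and helper names are ours.

from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass
class ECARule:
    event_kind: str                                   # 'INSERT' or 'DELETE'
    event_match: Callable[[object], Iterable]         # doc -> new/deleted nodes matched by e ($delta set)
    condition: Callable[[object, Optional[object]], bool]
    actions: list                                     # callables taking (doc, delta)
    uses_delta: bool = True                           # instance-level vs document-level triggering

def execute(rule, changed_docs):
    candidates = []
    for doc in changed_docs:
        deltas = list(rule.event_match(doc))          # the rule is triggered if this is non-empty
        if not deltas:
            continue
        if rule.uses_delta:
            bindings = [d for d in deltas if rule.condition(doc, d)]
        else:
            bindings = [None] if rule.condition(doc, None) else []
        if bindings:
            candidates.append(doc)                    # doc is one of the rule's candidate documents
            for delta in bindings:                    # actions run once per binding
                for action in rule.actions:
                    action(doc, delta)
    return candidates                                 # the rule fires iff this list is non-empty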
Example 1 Consider an XML database consisting of two documents, s.xml and p.xml. The document
s.xml contains information on stores, including which products are sold in each store:
<store id="s1">
<product id="p1"/>
<product id="p2"/>
</store>
1 We observe that using the phrase BELOW e to indicate where the update should happen is significant. Without it the
placement of new sub-documents would be restricted to occurring only at the root node of documents.
2 Thus, both document-level and instance-level triggering are supported in our ECA rule language. If there is no
occurrence of the $delta variable in a rule action, the action is executed at most once on each document each time the rule
fires - this is document-level triggering. If there is an occurrence of $delta in an action part, the action is executed once
for each possible instantiation of $delta on each document - this is instance-level triggering.
The document p.xml holds information on each product, including which stores sell each product:
<product id="p1">
<store id="s1"/>
<store id="s2"/>
</product>
The following ECA rule updates the p.xml document whenever one or more products are added
below an existing store in s.xml:
on INSERT document('s.xml')/stores/store/product
if not(document('p.xml')/products/
product[@id=$delta/@id]/store[@id=$delta/../@id])
do INSERT <store id='{$delta/../@id}'/>
BELOW document('p.xml')/products/
product[@id=$delta/@id] AFTER TRUE
Here, the system-defined $delta variable is bound to the newly inserted product nodes detected by the
event part of the rule. The rule's condition checks that the store which is the parent of the inserted
products in s.xml is not already a child of those products in p.xml. The action then adds the store as a
child of those products in p.xml.
In a symmetrical way, the following ECA rule updates the s.xml document whenever one or more
stores are added below an existing product in p.xml:
on INSERT document('p.xml')/products/product/store
if not(document('s.xml')/stores/
store[@id=$delta/@id]/product[@id=$delta/../@id])
do INSERT <product id='{$delta/../@id}'/>
BELOW document('s.xml')/stores/
store[@id=$delta/@id] AFTER TRUE
The two rules ensure that the information in the two documents is kept mutually consistent.
Example 2 This example is taken from [1] which discusses view updates on semi-structured data. The
XML database consists of two documents, g.xml and m.xml. g.xml contains a restaurant guide, with
information about restaurants, including the entrees they serve, and the ingredients of each entree:
<name>Baghdad Cafe</name>
<rating>Four stars</rating>
<name>Cheeseburger Club</name>
The document m.xml is a view derived from g.xml, and contains a list of those entrees at the Baghdad
Cafe where one of the ingredients is Mushroom:
Suppose now an ingredient element with value Mushroom is added to the second (unnamed) entree of
the Baghdad Cafe in g.xml. The following ECA rule performs the view maintenance of m.xml:
on INSERT document('g.xml')/guide/restaurant/
entree/ingredient
if $delta[.='Mushroom'] and
$delta/../..[name='Baghdad Cafe']
do INSERT $delta/.. BELOW document('m.xml')/entrees
AFTER TRUE
The resulting m.xml document is:
Note that inserting $delta/.. results in the complete entree being inserted, while AFTER TRUE causes
it to be inserted after the last child of entrees.
2.4 Simple XPath Expressions
The XPath and XQuery expressions appearing in our ECA rules are restrictions of the full XPath [32]
and XQuery [33] languages, to what we term simple XPath and XQuery expressions. These represent
useful and reasonably expressive fragments which have the advantage of also being amenable to analysis.
The XPath fragment we use disallows a number of features of the full XPath language, most notably
the use of any axis other than the child, parent, self or descendant-or-self axes and the use of all functions
other than document(). Thus, the syntax of a simple XPath expression e is given by the following
grammar, where s denotes a string and n denotes an element or attribute name:
Expressions enclosed in '[' and ']' in an XPath expression are called qualifiers. So a simple XPath
expression starts by establishing a context, either by a call to the document function followed by a path
expression p, or by a reference to the variable $delta (the only variable allowed) followed by optional
qualifiers q and an optional path expression p. Note that a qualifier q can comprise a simple XPath
expression e.
If we delete all qualifiers (along with the enclosing brackets) from an XPath expression, we are left
with a path of nodes. We call this path the distinguished path of the expression and the node at the end
of the distinguished path the distinguished leaf of the expression.
The result of an XPath expression e is a set of nodes, namely, those matched by the distinguished
leaf of the expression. The (simple) result type of e, denoted type(e), is one of string, element name n or
*, where * denotes any element name. The result type can be determined as follows.
Let p be the distinguished path of e. If the leaf of p is @n or @*, type(e) is string. If the leaf of p is
n or *, type(e) is n or *, respectively. If the leaf is '.' or '..', type(e) is determined from the leaf of a
modified distinguished path 3 which is defined below.
The modified distinguished path is constructed from the distinguished path p of expression e by
replacing each occurrence of '.' and '..' from left to right in p as follows. If p starts with $delta, then
we substitute for $delta the distinguished path of the XPath expression which occurs in the event part
of the rule. If the step is '..' and it is preceded by 'a/' (where a must be either an element name or '*'),
then replace 'a/..' with '.'. If the separator preceding the occurrence of '..' or '.' is '//', then replace
the step with '*'. If the step is '.' and the separator which precedes it is '/', then delete the step and its
preceding separator.
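As an illustration only, this rewriting can be sketched in Python as follows; the representation of a path as a list of (separator, step) pairs, the function name, and the assumption that $delta has already been substituted are our own, not part of the original definition.

```python
def modified_distinguished_path(context, steps):
    # steps is a list of (separator, step) pairs; $delta is assumed to have
    # been substituted already, e.g. for Example 3:
    # [('/', 'guide'), ('/', 'restaurant'), ('/', 'entree'),
    #  ('/', 'ingredient'), ('/', '..'), ('/', '..')]
    i = 0
    while i < len(steps):
        sep, step = steps[i]
        if step in ('.', '..') and sep == '//':
            steps[i] = (sep, '*')            # '//.' or '//..' becomes '//*'
            i += 1
        elif step == '..' and i > 0 and steps[i - 1][1] not in ('.', '..'):
            # 'a/..' becomes '.', where a is an element name or '*'
            steps[i - 1:i + 1] = [(steps[i - 1][0], '.')]
            i -= 1                           # re-examine the new '.' step
        elif step == '.':
            del steps[i]                     # a '/.' step is simply deleted
        else:
            i += 1
    return context + ''.join(sep + step for sep, step in steps)

# Example 3: prints "document('g.xml')/guide/restaurant"
print(modified_distinguished_path("document('g.xml')",
      [('/', 'guide'), ('/', 'restaurant'), ('/', 'entree'),
       ('/', 'ingredient'), ('/', '..'), ('/', '..')]))
```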
Example 3 Consider the condition from the ECA rule of Example 2, namely,
if $delta[.='Mushroom'] and
$delta/../..[name='Baghdad Cafe']
The result type of the first conjunct is ingredient because the event part of the rule is
on INSERT document('g.xml')/guide/restaurant/
entree/ingredient
The result type of the second conjunct is restaurant, determined as follows. The distinguished path
after substituting $delta is
document('g.xml')/guide/restaurant/entree/
ingredient/../..
So we replace 'ingredient/..' with '.', we delete '/.', we replace 'entree/..' with '.', and finally we
delete '/.', leaving the modified distinguished path
document('g.xml')/guide/restaurant
3 The modified distinguished path is simply used to determine the result type; it may not be equivalent to the original
path p.
Type inference is part of the XQuery formal semantics defined in [34]. This allows an implementation
of XQuery to infer at query compile time the output type of a query on documents conforming to a given
input type (DTD or schema). Since XPath expressions are part of XQuery, their result types can also
be inferred. Thus, in the presence of a DTD or XML schema, it is possible to infer more accurate result
types for XPath expressions using the techniques described in [34] 4 .
2.5 Simple XQuery Expressions
The XQuery fragment we use disallows the use of full so-called FLWR expressions (involving keywords
'for,' `let,' 'where' and `return'), essentially permitting only the 'return' part of an expression [33].
The syntax of a simple XQuery expression r is given by the following grammar:
r ::= e | c
c ::= '<' n a ('/>' | ('>' t* '</' n '>'))
a ::= (n '="' (s | e') '"' a)?
t ::= s | c | e'
e' ::= '{' e '}'
Thus, an XQuery expression r is either a simple XPath expression e (as defined in Section 2.4) or an
element constructor c. An element constructor is either an empty element or an element with a sequence
of element contents t. In each case, the element can have a list of attributes a. An attribute list a can
be empty or is a name equated to an attribute value followed by an attribute list. An attribute value is
either a string s or an enclosed expression e 0 . Element contents t is one of a string, an element constructor
or an enclosed expression. An enclosed expression e 0 is an XPath expression e enclosed in braces. The
braces indicate that e should be evaluated and the result inserted at the position of e in the element
constructor or attribute value.
The result type of an XQuery expression r, denoted type(r), is a tree, each of whose nodes is of type
n (for element name n), @n (for attribute name n), *, n//*, *//*, or string. The types with a suffix
//* indicate that the corresponding node can be the root of an arbitrary subtree. This is necessary to
capture the fact that the results of XPath expressions embedded in r return sets of nodes which may be
the roots of subdocuments. The tree for type(r) can be determined as follows. If r is an XPath expression
e, then type(r) comprises a single node whose type is type(e)//* if type(e) is n or *, or string if type(e)
is string. If r is an element constructor c, then we form a document tree T from c in the usual way,
except that some nodes will be labelled with enclosed expressions. For each such enclosed expression e 0 ,
we determine its result type type(e 0 ) in the same way as for the single XPath expression above. We then
replace e 0 in T by type(e 0 ). Now type(r) is given by the modified tree T .
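The construction of type(r) can be sketched as follows; the tuple representation of element constructors and the stand-in type_of_xpath helper (the result-type computation of Section 2.4) are assumptions of this sketch, not part of the original formulation.

```python
def type_of_xquery(r, type_of_xpath):
    # r is either ('xpath', e) for a simple XPath expression e, or a
    # constructor (name, attrs, children); attrs is a list of
    # (attr_name, value) pairs where value is a string or ('enclosed', e);
    # children are strings, nested constructors, or ('enclosed', e).
    def enclosed_type(e):
        t = type_of_xpath(e)
        return t if t == 'string' else t + '//*'

    if r[0] == 'xpath':                       # bare XPath expression
        return enclosed_type(r[1])
    name, attrs, children = r
    parts = []
    for attr, value in attrs:
        t = 'string' if isinstance(value, str) else type_of_xpath(value[1])
        parts.append('@%s(%s)' % (attr, t))
    for child in children:
        if isinstance(child, str):
            parts.append('string')
        elif child[0] == 'enclosed':
            parts.append(enclosed_type(child[1]))
        else:
            parts.append(type_of_xquery(child, type_of_xpath))
    if not parts:
        return name
    return '%s(%s)' % (name, ')('.join(parts))

# Example 4: <store id='{$delta/../@id}'/> with a string-typed enclosed
# expression yields 'store(@id(string))'.
```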
The result type of an XQuery expression r denotes a set of trees S such that every tree returned by r
is in S (the converse does not necessarily hold because we do not type the results of enclosed expressions
as tightly as possible). We call each tree in S an instance of type(r). Given an XPath expression e and a
tree T, T satisfies e if e(T) ≠ ∅. We say that type(r) may satisfy e if some instance of type(r) satisfies e.
Given XPath expression e and XQuery expression r, it is straightforward to test whether or not
type(r) may satisfy e. The test essentially involves checking whether the evaluation of e on the tree
of type(r) is empty or not. However, since type(r) denotes a set of trees rather than a single tree, the
evaluation needs to be modified as indicated in the following informal description: a node of type string
in type(r) may satisfy any string in e; a node of type * in type(r) may satisfy any element name in e; a
node of type n//* (respectively, *//*) in type(r) may satisfy any expression in e which tests attributes
or descendants of an element name n (respectively, any element name).
Example 4 Let r be the XQuery expression in the action of the first rule in Example 1, namely
<store id='{$delta/../@id}'/>.
The result type of $delta/../@id is string, so type(r) is store(@id(string)).
As another example, let r be the XQuery expression
4 Although the parent function in [34], which corresponds to a step of '..', always returns the type anyElement?.
and assume that the result type of {$delta/..} is *. Then type(r) is a(b(*//*))(c).
2.6 ECA Rule Execution
In this section we describe informally the ECA rule execution semantics, giving sufficient details for our
purposes in this paper. We refer the interested reader to [7] for a fuller discussion.
The input to ECA rule execution is an XML database and a schedule. The schedule is a list of
updates to be executed on the database. Each such update is a pair
(a_{i,j}, docsAndDeltas_i)
The component a_{i,j} is an action from the actions part of some rule r_i. The component docsAndDeltas_i
is a set of pairs (d, deltas_{d,i}), where d is the identifier of a document upon which a_{i,j} is to be applied and
deltas_{d,i} is the set of instantiations for the $delta variable generated by the event and condition part of
rule r_i with respect to document d.
The rule execution begins by removing the update at the head of the schedule and applying it to the
database. For each rule r_i, we then determine its set of candidate documents generated by this update,
together with the set deltas_{d,i} for each candidate document d. For all rules r_i that have fired (i.e. whose
set of candidate documents is non-empty) we place their list of actions a_{i,1}, ..., a_{i,n_i} at the head of the
schedule, placing the actions of higher-priority rules ahead of the actions of lower-priority rules 5. Each
such action a_{i,j} is paired with the set docsAndDeltas_i consisting of the set of candidate documents for
rule r_i with the set of instantiations deltas_{d,i} for each such document d.
The execution proceeds in this fashion until the schedule becomes empty. Non-termination of rule
execution is a possibility and thus rule analysis techniques are important for developing sets of 'well-
behaved' rules.
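As an illustration only, the scheduling loop just described might be sketched as follows in Python; the rule representation and the helpers apply_update and candidates_and_deltas are assumed stand-ins for the evaluation machinery of Sections 2.1-2.3.

```python
from collections import deque

def execute_rules(initial_schedule, database, rules,
                  apply_update, candidates_and_deltas):
    # initial_schedule: list of (action, docsAndDeltas) updates.
    # rules: assumed sorted by decreasing priority, each with an `actions`
    # attribute; candidates_and_deltas(rule, database) returns the rule's
    # candidate documents with the $delta instantiations for each.
    schedule = deque(initial_schedule)
    while schedule:                              # may not terminate
        action, docs_and_deltas = schedule.popleft()
        apply_update(database, action, docs_and_deltas)
        new_updates = []
        for rule in rules:
            cands = candidates_and_deltas(rule, database)
            if cands:                            # the rule has fired
                new_updates.extend((a, cands) for a in rule.actions)
        # actions of higher-priority rules go ahead of lower-priority ones,
        # all placed at the head of the schedule
        schedule.extendleft(reversed(new_updates))
```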
3 Analysing ECA Rule Behaviour
Analysis of ECA rules in active databases is a well-studied topic and a number of analysis techniques
have been proposed, e.g. [4, 5, 6, 8, 9, 10, 11, 16], mostly in the context of relational databases. Analysis
is important, since within a set of ECA rules, unpredictable and unstructured behaviour may occur.
Rules may mutually trigger one another, leading to unexpected (and possibly infinite) sequences of rule
executions.
Two important analysis techniques are to derive triggering [4] and activation [10] relationships
between pairs of rules. This information can then be used to analyse properties such as termination
or confluence of a set of ECA rules, or reachability of individual rules. The triggering and activation
relationships between pairs of rules are defined as follows:
A rule r i may trigger a rule r j if execution of the action of r i may generate an event which triggers
r j .
A rule r i may activate another rule r j if r j 's condition may be changed from False to True after the
execution of r i 's action.
A rule r i may activate itself if its condition may be True after the execution of its action.
Thus, two key analysis questions regarding ECA rules are:
1. Is it possible that a rule r i may trigger a rule r j ?
2. Is it possible that a rule r i may activate a rule r j ?
Once triggering and activation relationships have been derived, one can construct graphs which are
useful in analysing rule behaviour:
5 In common with the SQL3 standard for database triggers [24] we assume that no two rules can have the same priority.
This, together with our use of restricted sub-languages of XPath/XQuery, ensures that rule execution is deterministic in
our language, up to the order in which new sub-documents are inserted below a common parent.
A triggering graph [4] represents each rule as a vertex, and there is a directed arc from a vertex r i
to a vertex r j if r i may trigger r j . Acyclicity of the triggering graph implies definite termination of rule
execution. Triggering graphs can also be used for deriving rule reachability information, by examination
of the arcs in the graph.
An activation graph [10] again represents rules as vertices and there is a directed arc from a vertex r_i
to a vertex r j if r i may activate r j . Acyclicity of this graph also implies definite termination of rule
execution.
The determination of triggering and activation relationships between ECA rules is more complex
in an XML setting than for relational databases, because determining the effects of rule actions is not
simply a matter of matching up the names of updated relations with potential events or with the bodies
of rule conditions. Instead, the associations are more implicit and semantic comparisons between sets of
path expressions are required. We develop some techniques below.
3.1 Triggering Relationships between XML
ECA rules
In order to determine triggering relationships between our XML ECA rules, we need to be able to
determine whether an action of some rule may trigger the event part of some other rule. Clearly, INSERT
actions can only trigger INSERT events, and DELETE actions can only trigger DELETE events.
3.1.1 Insertions
For any insertion action a of the form
INSERT r BELOW e_1 [BEFORE|AFTER q]
in some rule r_i, and any insertion event ev of the form
INSERT e_2
in some rule r_j, we need to know whether ev is independent of a, that is, e_2 can never return any of the
nodes inserted by a.
The simple XQuery r defines which nodes are inserted by a, while the simple XPath expression e_1
defines where these nodes are inserted. So, informally speaking, if it is possible that some initial part of
e_2 can specify the same path through some document as e_1, and the remainder of e_2 "matches" r, then
ev is not independent of a. We formalise these notions below, based on tests for containment between
XPath expressions [19, 20, 30].
XPath expressions [19, 20, 30].
A prefix of a simple XPath expression e is an expression e' such that e can be written as e'e'' for some (possibly empty) expression e''. We
call e'' the suffix of e corresponding to e'. Recall from Section 2.5 that, for XQuery r, type(r) denotes the result type
of r, and we can test whether or not type(r) may satisfy an XPath expression e.
Given XPath expressions e_1 and e_2, we say that e_1 and e_2 are independent if, for all possible XML
documents d, e_1(d) ∩ e_2(d) = ∅.
Now let us return to the action a and event ev defined above. Event ev is independent of action a
if for all prefixes e_2' of e_2, either e_1 and e_2' are independent, or type(r) may not satisfy e_2''.
Equivalently, we can say that rule r_i (containing action a) may trigger rule r_j (containing event ev) if for
some prefix e_2' of e_2, e_1 and e_2' are not independent and type(r) may satisfy e_2''.
From arbitrary simple XPath expressions e_1 and e_2, we can construct an XPath expression e_{1∩2}
such that for all documents d, e_{1∩2}(d) = e_1(d) ∩ e_2(d). This is done by converting the distinguished
paths of e_1 and e_2 to regular expressions, finding their intersection using standard techniques [21], and
converting the intersection back to an XPath expression with the qualifiers from e_1 and e_2 correctly
associated with the merged steps in the intersection. The resulting expression for e_{1∩2} may have to
use a union of path expressions (denoted |) at the top level, as permitted by XPath [32].
We can test whether e_{1∩2} is unsatisfiable, and hence whether e_1 and e_2 are independent, by
checking whether e_{1∩2} is contained in an unsatisfiable expression, using the containment test developed
in [19] (which allows unions of path expressions).
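The prefix-based trigger test can be sketched as follows in Python; independent and may_satisfy stand in for the containment-based tests described above, and the independent shown handles only the special case of child-axis steps without qualifiers (the general case needs the regular-expression intersection).

```python
def independent(path1, path2):
    # Special case: both paths are lists of child steps whose names are
    # element names or '*'; absolute paths of different lengths select
    # nodes at different depths and so cannot overlap.
    if len(path1) != len(path2):
        return True
    return not all(a == b or a == '*' or b == '*'
                   for a, b in zip(path1, path2))

def may_trigger(insert_below_path, result_type, event_path, may_satisfy):
    # Rule r_i (INSERT r BELOW e_1 ...) may trigger rule r_j (on INSERT e_2)
    # if, for some prefix e_2' of e_2, e_1 and e_2' are not independent
    # and type(r) may satisfy the corresponding suffix e_2''.
    for k in range(1, len(event_path) + 1):
        prefix, suffix = event_path[:k], event_path[k:]
        if not independent(insert_below_path, prefix) and \
           may_satisfy(result_type, suffix):
            return True
    return False
```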
Example 5 Recall the two rules from Example 1. Let us call them Rule 1 and Rule 2. For i = 1, 2, the
form of rule i is
on INSERT e_i
do INSERT r_i BELOW f_i AFTER TRUE
where e_1 is document('s.xml')/stores/store/product,
r_1 is <store id='{$delta/../@id}'/> and
f_1 is document('p.xml')/products/product[@id=$delta/@id],
while e_2 is document('p.xml')/products/product/store,
r_2 is <product id='{$delta/../@id}'/> and
f_2 is document('s.xml')/stores/store[@id=$delta/@id].
Now let e_1' be document('s.xml')/stores/store, so that
e_1'' is product, and let e_2' be document('p.xml')/products/product, so that e_2'' is store. So f_2
and e_1' are not
independent. Furthermore, type(r_2) is product(@id(string)) which may satisfy e_1''. We conclude that
Rule 2 may trigger Rule 1. A similar argument shows that Rule 1 may trigger Rule 2.
On the other hand, if event e_1 were modified to
document('s.xml')/stores/store/product[name]
so that the inserted product had to have a name child, then f_2 and e_1' are still not independent. However,
now type(r_2) may not satisfy e_1'', since the product node in type(r_2)
does not contain a name child. In this
case, we would detect that Rule 2 could not trigger the modified Rule 1.
3.1.2 Deletions
Similarly to insertions, for any deletion action a of the form
DELETE e_1
belonging to a rule r_i, and any deletion event ev of the form
DELETE e_2
belonging to a rule r_j, we have that r_i may trigger r_j if ev is not independent of a.
The test for independence of an action and an event in the case of deletions is simpler than for
the insertion case above. Let e be the XPath expression e_1//*. Event ev is independent of action a if
expressions e and e_2 are independent (which can be determined as in Section 3.1.1).
3.2 Activation Relationships between XML
ECA rules
In order to determine activation relationships between ECA rules, we need to be able to determine
(a) whether an action of some rule r i may change the value of the condition part of some other rule r j
from False to True, in which case r i may activate r j ; and
(b) whether all the actions of a rule r i will definitely leave the condition part of r i False; if not, then r i
may activate itself.
Without loss of generality, we can assume that rule conditions are in disjunctive normal form, i.e.
they are of the form
(l_{1,1} and l_{1,2} and ... and l_{1,k_1})
or (l_{2,1} and l_{2,2} and ... and l_{2,k_2})
or ... or (l_{m,1} and l_{m,2} and ... and l_{m,k_m})
where each l i;j is either a simple XPath expression c, or the negation of a simple XPath expression, not
c.
3.2.1 Simple XPath expressions
The following table illustrates the transitions that the truth-value of a condition consisting of a simple
XPath expression can undergo. The first column shows the condition's truth value before the update,
and the subsequent columns its truth value after a non-independent insertion (NI) and a non-independent
deletion
before | after NI | after ND
True | True | True or False
False | True or False | False
For case (a) above, i.e. when r i and r j are distinct rules, it is clear from this table that r i can
activate r j only if one of the actions of r i is an insertion which is non-independent of the condition of r j .
Let the condition of r j be the simple XPath expression c.
For c to be True, we require that it returns a non-empty result. Thus, in a sense, the distinguished
path of c plays the same role as an XPath qualifier in that we are interested in the existence of some
path in the document matching the distinguished path. In addition, the insertion of an element whose
name occurs only in a qualifier of c can turn c from False to True. For example, the insertion of a d
element below /a/b can turn condition /a/b[d]/e from False to True. Thus we need to consider all the
qualifiers and the distinguished path in c in a similar way in any test for case (a). Moreover, the use of
'..' in a condition is analogous to introducing a qualifier, so we need to rewrite conditions accordingly.
For example, the condition a/b/../d is equivalent to a[b]/d. This condition can be turned from False
to True if either a d element is added below an a element which has a b element as a child, or a b element
is added below an a element which has a d element as a child.
The procedure for determining non-independence of an insertion from a condition, c, involves constructing
from c a set C of conditions, each of which is an XPath expression without any qualifiers i.e.
a distinguished path. The objective is that condition c can change from False to True as a result of an
insertion only if at least one of the conditions in C can change from False to True as a result of the
insertion. We start with the set C = {c} and proceed to decompose c into a number of conditions without
qualifiers, adding each one to C.
A single step of the decomposition is as follows:
1. For any u and w, and v an element name n or *,
• if condition c_i is of the form uv/..w, then replace c_i in C by u.[v]w;
• if c_i is of the form uv//..w, then replace c_i in C by u.[v]w and uv//*.
2. We can delete from c_i steps of the form '/.' and './', as well as replacing occurrences of '//.//' by
'//', thereby ensuring that '.' can occur only at the end of c_i preceded by '//'.
3. If c_i ∈ C is of the form u[v]w, where u, v and w are all non-empty, then replace c_i in C by u[v] and
uw.
4. If c_i ∈ C is of the form u[v], where u and v are non-empty, then delete c_i from C and add to C the
conditions specified by one of the cases below, depending on the structure of the qualifier v:
• if v matches nonterminal p from the grammar for simple XPath expressions, then add u/v to C;
• if v matches nonterminal e, then add e to C;
• if v matches x 'or' y (where x and y must be qualifiers), then add u[x] and u[y] to C;
• if v matches x 'and' y (where x and y must be qualifiers), then add u[x] and u[y] to C;
• if v matches x = y, then if x or y match nonterminal p, add u/x or u/y, respectively, to C,
while if x or y match nonterminal e, add x or y, respectively, to C.
The decomposition process continues until all conditions in C are qualifier-free.
Now let one of the actions a from rule r i be
INSERT r BELOW e 1
[BEFORE|AFTER q]
As in Section 3.1.1, we determine type(r) and consider prefixes and suffixes of each condition c_i ∈ C.
The set C of conditions is independent of a if for each c_i ∈ C and for each prefix c_i' of c_i,
either e_1 and c_i' are independent, or type(r) may not satisfy c_i''.
If so, then action a cannot change the truth value of condition c in rule r_j from False to True. Equivalently,
we can say that rule r_i may activate rule r_j if for some prefix c_i' of some c_i ∈ C, e_1 and c_i' are not
independent and type(r) may satisfy c_i''.
Example 6 The conjunction of conditions from the ECA rule in Example 2 can be rewritten as the
single condition c:
$delta[.='Mushroom']/../..[name='Baghdad Cafe']
The set C initially comprises just condition c, which, after substituting for $delta (and dropping document('g.xml')
for simplicity), is decomposed into the conditions
c_1: /guide/restaurant/entree/ingredient
and a second condition c_2.
Condition c_2
is further decomposed into
/guide/restaurant/entree/ingredient
and further qualifier-free conditions c_3 to c_5.
Conditions c_3 to c_5
can be interpreted as stating that the only way an insertion can change condition
c from False to True is if an ingredient is inserted as a child of an entree, an entree is inserted as a
child of a restaurant, or a name is inserted as a child of a restaurant. The test described above will
detect these possibilities and will correctly infer the possible activation relationships.
For case (b) above, a rule r i activates itself if it may leave its own condition True. From the above
table, we see that with the analysis that we have used so far this will be the case for all rules. To obtain
more precision, we need to develop the notion of self-disactivating rules, by analogy to this property of
ECA rules in a relational database setting [9]. A self-disactivating rule is one where the execution of its
action makes its condition False.
If the condition part of r i is a simple XPath expression c, the rule will be self-disactivating if all its
actions are deletions which subsume c. For each deletion action DELETE e_1
we thus need to test if c is contained in e_1//*.
For the simple XPath expressions to which our ECA rules are constrained in this paper, and provided
additionally that the only operator appearing in qualifiers is '=', it is known that containment is
decidable [19]. Thus, it is possible to devise a test for determining whether rules are self-disactivating.
The decidability of containment for larger fragments of the XPath language is an open problem [19].
However, even if a fragment of XPath is used for which this property is undecidable, it may still be
possible to develop conservative approximations, and this is an area of further research.
3.2.2 Negations of Simple XPath expressions
The following table illustrates the transitions that the truth-value of a condition of the form not c, where
c is a simple XPath expression, can undergo. The first column shows the truth value of the condition
before the update, and the subsequent columns its truth value after a non-independent insertion (NI)
and a non-independent deletion (ND):
before | after NI | after ND
True | True or False | True
False | False | True or False
For case (a), where rules r i and r j are distinct, it is clear from this table that r i can activate r j only
if one of the actions of r i is a deletion which is non-independent of the condition of r j .
Let the condition of r j be not c. We construct the set of conditions C from c as in Section 3.2.1.
Now let the action from rule r_i be DELETE e_1,
and let e be the query e_1//*. We again use the intersection test from Section 3.1.1 in order to check
whether e is independent of each of the conditions in C. If so, then e cannot change the truth value of c
from False to True. Otherwise, e is deemed to be non-independent of c, and r i may activate r j .
For case (b) above, a rule r i activates itself if it may leave its own condition True. We again need the
notion of a self-disactivating rule. If the condition part of r i is not c, the rule will be self-disactivating if
all its actions are insertions which guarantee that c will be True after the insertion.
Let an insertion action a from rule r i be
INSERT r BELOW e_1 [BEFORE|AFTER q]
and let condition c comprise prefix c' and suffix c''. Action a guarantees that c will be True after the
insertion if e_1 is contained in c'
and each of the trees in the set of trees denoted by r satisfies c''. As a result, we need a stronger concept
than the fact that type(r) may satisfy expression c 00 .
Recall the construction of the tree type(r) from Section 2.5. We modify the construction of type(r)
to leave enclosed expressions which are of type string in type(r) instead of replacing them by string. We
then need to define what it means for a node in type(r) to satisfy (rather than may satisfy) part of an
XPath expression e. A node of type n, @n or * in type(r) satisfies element name n, attribute name @n
or expression *, respectively, in e. A node of type n//* (respectively, *//*) in type(r) satisfies element
name n (respectively, expression *). A node labelled with an enclosed expression e 0 in type(r) satisfies
e 0 in e.
Example 7 Recall the first rule of Example 1. A prefix of the negated condition is identical to the
XPath expression
document('p.xml')/products/product[@id=$delta/@id]
and so clearly contains it. The corresponding suffix of the negated condition, namely
store[@id=$delta/../@id]
is satisfied by each of the trees denoted by the XQuery expression
<store id='{$delta/../@id}'/>
since the value for the id attribute is defined by the same expression as used in the suffix. Hence the
rule is self-disactivating. The second rule of Example 1 is similarly self-disactivating.
3.2.3 Conjunctions
For case (a), if the condition of a rule r j is of the form
l_{i,1} and l_{i,2} and ... and l_{i,k}
we can use the tests described in the previous two sections for conditions that are simple XPath expressions
or negations of simple XPath expressions to determine if a rule r i may turn any of the l i;j from False to
True. If so, then r i may turn r j 's condition from False to True, and may thus activate r j .
For case (b), suppose the condition of rule r_i is of the form l_{i,1} and l_{i,2} and ... and l_{i,k}. There are three
possible cases:
(i) All the l i;j are simple XPath expressions. In this case, r i will be self-disactivating if each of its
actions is a deletion which subsumes one or more of the l i;j .
(ii) All the l i;j are negations of simple XPath expressions. In this case, r i will be self-disactivating if
each of its actions is an insertion which falsifies one or more of the l i;j .
(iii) The l i;j are a mixture of simple XPath expressions and negations thereof. In this case, r i may or
may not be self-disactivating.
3.2.4 Disjunctions
For case (a), if the condition of a rule r j is of the form
(l_{1,1} and l_{1,2} and ... and l_{1,k_1})
or (l_{2,1} and l_{2,2} and ... and l_{2,k_2})
or ... or (l_{m,1} and l_{m,2} and ... and l_{m,k_m})
we can use the test described in Section 3.2.3 above to determine if a rule r i may turn any of the disjuncts
l_{i,1} and l_{i,2} and ... and l_{i,k_i}
from False to True. If so, then r i may turn r j 's condition from False to True and may thus activate r j .
For case (b), suppose the condition of rule r i is of the form
(l_{1,1} and l_{1,2} and ... and l_{1,k_1})
or (l_{2,1} and l_{2,2} and ... and l_{2,k_2})
or ... or (l_{m,1} and l_{m,2} and ... and l_{m,k_m})
Then r i will be self-disactivating if it leaves False all the disjuncts of its condition. This will be so if
(i) all the l i;j are simple XPath expressions and r i disactivates all the disjuncts of its condition as in
case (i) of Section 3.2.3 above; or
(ii) all the l i;j are negations of simple XPath expressions and r i disactivates all the disjuncts of its
condition as in case (ii) of Section 3.2.3 above.
In all other cases, r i may or may not be self-disactivating.
4 Conclusions
In this paper we have proposed a new language for defining ECA rules on XML, thus providing reactive
functionality on XML repositories, and we have developed new techniques for analysing the triggering
and activation dependencies between rules defined in this language. Our language is based on reasonably
expressive fragments of the XPath and XQuery standards.
The analysis information that we can obtain is particularly useful in understanding the behaviour
of applications where multiple ECA rules have been defined. Determining this information is non-trivial,
since the possible associations between rule actions and rule events/conditions are not syntactically
derivable and instead deeper semantic analysis is required.
One could imagine using XSLT to transform source documents and materialise the kinds of view
documents we have used in the examples in this paper. However, XSLT would have to process an
entire source document after any update to it in order to produce a new document, whereas we envisage
detecting updates of much finer granularity. Also, using ECA rules allows one to update a document
directly, whereas XSLT requires a new result tree to be generated by applying transformations to the
source document.
The simplicity of ECA rules is another important factor in their suitability for managing XML data.
ECA rules have a simple syntax and are automatically invoked in response to events - the specification
of such events is indeed a part of the Document Object Model (DOM) recommendation by the W3C.
Also, as is argued in [13], the simple execution model of ECA rules makes them a promising means for
rapid prototyping of a wide range of e-services.
The analysis techniques we have developed are useful in a context beyond ECA rules. Our methods
for computing rule triggering and activation relationships essentially focus on determining the effects of
updates upon queries - the 'query independent of update' problem [25]. We can therefore use these
techniques for analysing the effects of other (i.e. not necessarily rule-initiated) updates made to an XML
database, e.g. to determine whether integrity constraints have been violated or whether user-defined views
need to be re-calculated. Query optimisation strategies are also possible: e.g. given a set of pre-defined
queries, one may wish to retain in memory only documents which are relevant to computing these queries.
As updates to the database are made, more documents may need to be brought into memory and these
documents can be determined by analysing the effects of the updates made on the collection of pre-defined
queries.
For future work there are two main directions to explore. Firstly, we wish to understand more fully
the expressiveness and complexity of the ECA language that we have defined. For example, we wish to
look at what types of XML Schema constraints can be enforced and repaired using rules in the language.
Secondly, we wish to further develop and gauge the effectiveness of our analysis methods. Techniques such
as incorporating additional information from document type definitions may help obtain more precise
information on triggering and activation dependencies [31]. We also wish to investigate the use of these
dependencies for carrying out optimisation of ECA rules.
--R
Incremental maintenance for materialized views over semistructured data.
Relational transducers for electronic com- merce
Push technology personalization through event correlation.
Static analysis techniques for predicting the behavior of active database rules.
An abstract interpretation framework for termination analysis of active rules.
A dynamic approach to termination analysis for active database rules.
Analysis and optimisation for event- condition-action rules on XML
Improved rule analysis by means of triggering and activation graphs.
An algebraic approach to rule analysis in expert database systems.
An algebraic approach to static analysis of active database rules.
Active XQuery.
Active rules for XML: A new paradigm for e-services
Pushing reactive services to XML repositories using active rules.
Practical applications of triggers and constraints: Success and lingering issues.
Designing Database Applications with Objects and Rules: The IDEA Methodology.
Views in a large scale XML repository.
Containment and integrity constraints for XPath fragments.
Query containment for conjunctive queries with regular expressions.
Introduction to Automata Theory
An active web-based distributed database system for e-commerce
Active database systems: Expectations
Active database features in SQL3.
Queries independent of updates.
Active Rules in Database Systems.
Efficient matching for web-based publish/subscribe systems
Updating XML.
Active Database Systems.
On the equivalence of XML patterns.
Minimising simple XPath expressions.
World Wide Web Consortium.
World Wide Web Consortium.
World Wide Web Consortium.
--TR
Static analysis techniques for predicting the behavior of active database rules
Query containment for conjunctive queries with regular expressions
Relational transducers for electronic commerce
An algebraic approach to static analysis of active database rules
Pushing reactive services to XML repositories using active rules
Updating XML
Introduction To Automata Theory, Languages, And Computation
Compile-Time and Runtime Analysis of Active Behaviors
Incremental Maintenance for Materialized Views over Semistructured Data
One-To-One Web Site Generation for Data-Intensive Applications
Push Technology Personalization through Event Correlation
Practical Applications of Triggers and Constraints
Views in a Large Scale XML Repository
Queries Independent of Updates
An Algebraic Approach to Rule Analysis in Expert Database Systems
Improving Rule Analysis by Means of Triggering and Activation Graphs
Efficient Matching for Web-Based Publish/Subscribe Systems
A Dynamic Approach to Termination Analysis for Active Database Rules
On the Equivalence of XML Patterns
An Abstract Interpretation Framework for Termination Analysis of Active Rules
Active rules for XML: A new paradigm for E-services
--CTR
S. Swamynathan , A. Kannan , T. V. Geetha, Composite event monitoring in XML repositories using generic rule framework for providing reactive e-services, Decision Support Systems, v.42 n.1, p.79-88, October 2006
Martin Bernauer , Gerti Kappel , Gerhard Kramler, Composite events for xml, Proceedings of the 13th international conference on World Wide Web, May 17-20, 2004, New York, NY, USA
Ce Dong , James Bailey, Static analysis of XSLT programs, Proceedings of the fifteenth Australasian database conference, p.151-160, January 01, 2004, Dunedin, New Zealand
George Papamarkos , Alexandra Poulovassilis , Peter T. Wood, Event-condition-action rules on RDF metadata in P2P environments, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.10, p.1513-1532, 14 July 2006
James Bailey, Transformation and reaction rules for data on the web, Proceedings of the sixteenth Australasian database conference, p.17-23, January 01, 2005, Newcastle, Australia
Shuichi Kurabayashi , Yasushi Kiyoki, An adaptive active rule system for automatic service discovery and cooperation, Proceedings of the 24th IASTED international conference on Database and applications, p.115-122, February 13-15, 2006, Innsbruck, Austria
Using the composite act frame technique to model 'Rules of Origin' knowledge representations in e-government services, Electronic Commerce Research and Applications, v.6 n.2, p.128-138, Summer 2007 | reactive functionality;XML repositories;event-condition-action rules;XML;rule analysis |
511520 | Using web structure for classifying and describing web pages. | The structure of the web is increasingly being used to improve organization, search, and analysis of information on the web. For example, Google uses the text in citing documents (documents that link to the target document) for search. We analyze the relative utility of document text, and the text in citing documents near the citation, for classification and description. Results show that the text in citing documents, when available, often has greater discriminative and descriptive power than the text in the target document itself. The combination of evidence from a document and citing documents can improve on either information source alone. Moreover, by ranking words and phrases in the citing documents according to expected entropy loss, we are able to accurately name clusters of web pages, even with very few positive examples. Our results confirm, quantify, and extend previous research using web structure in these areas, introducing new methods for classification and description of pages. | Introduction
The Web is a large collection of heterogeneous documents. Recent estimates predict the size of the indexable
web to be more than 4 billion pages. Web pages, unlike standard text collections, can contain both multimedia
(images, sounds, flash, etc.) and connections to other documents (through hyperlinks). Hyperlinks are
increasingly being used to improve the ability to organize, search and analyze the web.
Hyperlinks (or citations) are being actively used to improve web search engine ranking [4], improve web
crawlers [6], discover web communities [8], organize search results into hubs and authorities [13], make
predictions about similarity between research papers [16] and even to classify target web pages [20, 9, 2,
5, 3]. The basic assumption made by citation or link analysis is that a link is often created because of a
subjective connection between the original document and the cited, or linked to document. For example, if
I am making a web page about my hobbies, and I like playing scrabble, I might link to an online scrabble
game, or to the home page of Hasbro. The belief is that these connections convey meaning or judgments
made by the creator of the link or citation.
On the web, a hyperlink has two components: The destination page, and associated anchortext describing
the link. A page creator determines the anchortext associated with each link. For example, a user could
create a link pointing to Hasbro's home page, and that user could define the associated anchortext to be
"My favorite board game's home page''. The personal nature of the anchortext allows for connecting words
to destination pages, as shown in Figure 1. Anchortext has been utilized in this way by the search engine
Google to improve web search. Google allows pages to be returned based on keywords occurring in inbound
anchortext, even if the words don't occur on the page itself, such as returning http://www.yahoo.com/
for a query of "web directory."
Typical text-based classification methods utilize the words (or phrases) of a target document, considering
the most significant features. The underlying assumption is that the page contents effectively describe the
page to be classified. Unfortunately, very often a web page might contain no obvious clues (textually) as
to its intent. For example, the home page of Microsoft Corporation (http://www.microsoft.com/)
provides no mention of the fact they sell operating systems. Or the home page of General Motors (http:
//www.gm.com/flash_homepage/) does not state that they are a car company (except for the word
"motors" in the title or the word "automotive" inside of a form field. To make matters worse, like a majority
of web pages, the General Motors home page does not have any meaningful metatags [15].
Determining if a particular page belongs to a given class, even though the page itself does not have any
obvious clues, or the words do not capture the higher-level notion can be a challenge - i.e. that GM is a car
manufacturer, or Microsoft designs and sells operating systems, or Yahoo is a directory service. Anchortext,
since it is decided by people who are interested in the page, may better summarize the contents of the page
- such as indicating that Yahoo is a web directory, or Excite@home is an Internet Service Provider 1 . Other
works have proposed and/or utilized in-bound anchortext to help classify target web pages. For example
Blum compared two classifiers for several computer science web pages (from the WebKB dataset), one for
full-text, and one for the words on the links pointing in to the target pages (inbound anchortext) [3]. From
their results, anchortext words alone were slightly less powerful than the full-text alone, and the combination
was better. Other work including work by f?rnkranz expanded this notion to include words beyond the
anchortext that occur near (in the same paragraph) and nearby headings. f?rnkranz noted a significant
improvement in classification accuracy when using the link-based method as opposed to the full-text alone
[9], although adding the entire text of "neighbor documents" seemed to harm the ability to classify pages
[5].
The web is large, and one way to help people find useful pages is a directory service, such as Yahoo
(http://www.yahoo.com/), or The Open Directory Project (http://www.dmoz.org/). Typically
directories are manually created, and the judgments of where a page goes is done by a human. For exam-
ple, Yahoo puts "General Motors" into several categories: "Auto Makers", "Parts", "Automotive", "B2B -
Auto Parts", and "Automotive Dealers". Yahoo puts itself "Yahoo" in several categories including "Web
Their homepage: http://www.home.com/index flash.html has no text, and no metatags. On a text-browser such as Lynx, the
rendered page is blank.
Directories." Unfortunately large web directories are difficult to manually maintain, and may be slow to
include new pages. It is therefore desirable to be able to learn an automatic classifier that tests membership
in a given category. Unfortunately, the makeup of a given category may be arbitrary. For example Yahoo
decided that Anthropology and Archaeology should be grouped together under "social sciences", while The
Open Directory Project (dmoz) separated archaeology into its own category (also under Social Sciences). A
second problem is that initially a category may be defined by a small number of pages, and classification
may be difficult. A third problem is naming of a category. For example given ten random botany pages,
how would you know that the category should be named botany, or that it is related to biology? Only two
of six random pages selected from the Yahoo category of Botany mentioned the word "botany" anywhere in
the text (although some had it in the URL, but not the body text). For human-generated clusters it may be
reasonable to assume a name can be found, however, for automatically generated clusters, naming may be
more difficult.
This work attempts to utilize inbound anchortext and surrounding words to classify pages accurately, and to
name (potentially very small) clusters of web pages. We make no assumptions about having a web-crawl.
We also quantify the effectiveness of using just a page's full-text, inbound anchortext, and what we call
extended anchortext (the words and phrases occurring near a link to a target page, as shown in Figure 1),
and propose two methods for improving the accuracy, a combination method and uncertainty sampling. We
also extract important features that can be used to name the clusters, and compare the ability of using only
a document's full-text with using in-bound anchortexts and extended anchortexts.
Our approach to basic text-classification is based on a simple four-step procedure, described in Figure 2:
First, obtain a set of positive and negative training documents. Second, extract all possible features from
these documents (a feature in this case is a word or phrase). Third, perform entropy-based dimensionality
reduction. Fourth, train an SVM classifier. Naming of clusters can be done by examining the top ranked
features after the entropy-based dimensionality reduction. The learned classifier can then be evaluated on
test data.
In comparison to other work on using link-structure to classify web pages, we demonstrate very high accu-
racy, more than 98% on average for negative documents, and as high as 96% for positive documents, with
an average of about 90% 2 . Our experiments described in this paper used about 100 web pages from each
of several Yahoo categories for positive training and test data, and random web pages as negative examples
(significantly fewer than other methods). Positive pages were obtained by choosing all web documents
listed in the chosen category, plus all documents from several sub-categories. The set of positive and neg-
2 Accuracy of one class is the recall of that class.
Step 1: Obtain positive and negative document sets
Step 2: Generate a positive and negative histogram of all features
Step 3: Select significant features using expected entropy loss
Step 4: Train an SVM using the selected features
Figure 2: Basic procedure for learning a text-classifier
ative documents was randomly split between training and test. We also evaluated the ability to name the
clusters, using small samples from several Yahoo categories as positive examples. In every case the name
of the Yahoo category was listed as the top ranked or second ranked feature, and the name of the parent category
was listed in the top 10 in every case but one. In addition, many of the top ranked features described
the names of the sub-categories (from which documents were drawn).
2 Our Method
First, we describe our method for extracting important features and training a full-text classifier of web
pages. Second, we describe our technique for creating "virtual documents" from the anchortext and inbound
extended anchortext. We then use the virtual documents as a replacement for the full-text used by our
original classifier. Third, we describe our method for combining the results to improve accuracy. Fourth, we
describe how to name a cluster using the features selected from the virtual documents.
2.1 Full-Text Classifier
In our earlier works, we described our algorithm for full-text classification of web pages [10, 11]. The basic
algorithm is to generate a feature histogram from training documents, select the "important features" and
then to train an SVM classifier. Figure 2 summarizes the high-level procedure.
2.1.1 Training Sets and Virtual Documents
To train a binary classifier it is essential to have sets of both positive and negative documents. In the simplest
case, we have a set of positive web pages, and a set of random documents to represent negative pages. The
assumption is that few of the random documents will be positive (our results suggested less than 1% of the
Figure 3: A virtual document is comprised of anchortexts and nearby words from pages that link to the target document. (In the figure, fragments such as "My favorite search engine is yahoo" and "Search engine yahoo is powered by google" from two citing pages form the virtual document.)
3: A virtual document is comprised of anchortexts and nearby words from pages that link to the target document
random pages we used were positive). In our first case documents are the full-text found by downloading
the pages from various Yahoo categories.
Unfortunately, the full-text of a document is not necessarily representative of the "description" of the doc-
uments, and research has shown that anchortext can potentially be used to augment the full-text of a document
[20, 9, 3]. To incorporate anchortexts and extended anchortexts, we replaced actual downloaded
documents with virtual documents. We define a virtual document as a collection of anchortexts or extended
anchortexts from links pointing to the target document. Our definition is similar to the concept of "blurbs"
described by Attardi, et al. [2]. This is similar to what was done by Fürnkranz [9]. Anchortext refers to the
words occurring inside of a link as shown in Figure 1. We define extended anchortext as the set of rendered
words occurring up to 25 words before and after an associated link (as well as the anchortext itself). Figure 1
also shows an example of extended anchortext. Fürnkranz considered the actual anchortext, plus headings
occurring immediately preceding the link, and the paragraph of text containing the link. Our approach is
similar, except it made no distinction between other HTML structural elements. Our goal was to compare
the ability to classify web pages based on just the anchortext or extended anchortext, just the full-text, or
a combination of these. Figure 3 shows a sample virtual document. For our work, we limited the virtual
document to 20 inbound links, always excluding any Yahoo pages, to prevent the Yahoo descriptions or
category words from biasing the results.
To generate each virtual document, we queried the Google search engine for backlinks pointing into the
target document. Each backlink was then downloaded, the anchortext, and words before and after each
anchortext were extracted. We generated two virtual documents for each URL. One consisting of only the
anchortexts and the other consisting of the extended anchortexts, up to 25 words on each side of the link,
(both limited to the first 20 non-Yahoo links). Although we allowed up to 20 total inbound links, only
about 25% actually had 20 (or more). About 30% of the virtual documents were formed with three or fewer
inbound links. If a page had no inbound links, it was not considered for this experiment. Most URLs
extracted from Yahoo pages had at least one valid non-Yahoo link.
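A simplified sketch of extracting extended anchortexts from a citing page is shown below; the parser, the naive URL comparison and the 25-word window are our own simplifications, and fetching the backlink pages themselves (via the search engine's backlink query) is not shown.

```python
from html.parser import HTMLParser

class AnchorContextParser(HTMLParser):
    """Collect the rendered word stream of a page and the word positions
    of each hyperlink, so that +/- 25 word windows can be extracted."""
    def __init__(self):
        super().__init__()
        self.words = []
        self.anchors = []          # list of (href, start_idx, end_idx)
        self._open = None          # (href, start_idx) of the current <a>

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            href = dict(attrs).get('href')
            if href:
                self._open = (href, len(self.words))

    def handle_endtag(self, tag):
        if tag == 'a' and self._open:
            href, start = self._open
            self.anchors.append((href, start, len(self.words)))
            self._open = None

    def handle_data(self, data):
        self.words.extend(data.split())

def extended_anchortexts(html, target_url, window=25):
    # Return the extended anchortexts (anchortext plus up to `window`
    # words on each side) for links pointing at target_url; real use
    # would also normalise URLs before comparing them.
    p = AnchorContextParser()
    p.feed(html)
    out = []
    for href, start, end in p.anchors:
        if href == target_url:
            out.append(' '.join(p.words[max(0, start - window):end + window]))
    return out
```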
2.1.2 Features and Histograms
For this experiment, we considered all words and two or three word phrases as possible features. We used
no stopwords, and ignored all punctuation and HTML structure (except for the Title field of the full-text
documents). Each document (or virtual document) was converted into a set of features that occurred and
then appropriate histograms were updated.
For example: If a document had the sentence: "My favorite game is scrabble", the following features are
generated: my, my favorite, my favorite game, favorite, favorite game, favorite game is, game, game is,
game is scrabble, is, is scrabble, scrabble. From the generated features an appropriate histogram is updated. There is one histogram for
the positive set and one for the negative set.
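For illustration, the feature generation and document-frequency histograms can be sketched as follows (the tokenisation regex is a simplifying assumption of ours):

```python
from collections import Counter
import re

def features(text, max_n=3):
    # All word uni-, bi- and tri-grams of `text`; punctuation is ignored
    # and no stop-word list is used, matching the description above.
    words = re.findall(r"[a-z0-9]+", text.lower())
    feats = set()
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            feats.add(' '.join(words[i:i + n]))
    return feats

def document_frequencies(documents, max_n=3):
    # Histogram mapping each feature to the number of documents containing it.
    hist = Counter()
    for doc in documents:
        hist.update(features(doc, max_n))
    return hist

# positive_hist = document_frequencies(positive_docs)
# negative_hist = document_frequencies(negative_docs)
```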
Unfortunately, there can be hundreds of thousands of unique features, most that are not useful, occurring
in just hundreds of documents. To improve performance and generalizability, we perform dimensionality
reduction using a two-step process. This process is identical to that described in our earlier works [10, 11].
First, we perform thresholding, by removing all features that do not occur in a specified percentage of
documents as rare words are less likely to be useful for a classifier. A feature f is removed if it occurs in
less than the required percentage (threshold) of both the positive and negative sets, i.e.,
|A_f|/|A| < T_A and |B_f|/|B| < T_B
Where:
A = the set of positive examples.
B = the set of negative examples.
A_f = documents in A that contain feature f.
B_f = documents in B that contain feature f.
T_A = threshold for positive features.
T_B = threshold for negative features.
Second, we rank the remaining features based on entropy loss. No stop word lists are used.
2.1.3 Expected Entropy Loss
Entropy is computed independently for each feature. Let C be the event indicating whether the document is
a member of the specified category (e.g., whether the document is about "biology"). Let f denote the event
that the document contains the specified feature (e.g., contains "evolution" in the title). The prior entropy of
the class distribution is e = −Pr(C) lg Pr(C) − Pr(¬C) lg Pr(¬C). The posterior entropy of the class when
the feature is present is e_f = −Pr(C|f) lg Pr(C|f) − Pr(¬C|f) lg Pr(¬C|f); likewise, the posterior entropy
of the class when the feature is absent is e_¬f = −Pr(C|¬f) lg Pr(C|¬f) − Pr(¬C|¬f) lg Pr(¬C|¬f). Thus, the
expected posterior entropy is e_f Pr(f) + e_¬f Pr(¬f), and the expected entropy loss is
e − (e_f Pr(f) + e_¬f Pr(¬f)).
If any of the probabilities are zero, we use a fixed value. Expected entropy loss is synonymous with expected
information gain, and is always non-negative [1].
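A sketch of this computation in Python, using document counts and a small fixed value in place of zero probabilities, is:

```python
from math import log2

def entropy(p):
    # Binary entropy; a small fixed value replaces zero probabilities,
    # as described in the text.
    eps = 1e-6
    p = min(max(p, eps), 1 - eps)
    return -p * log2(p) - (1 - p) * log2(1 - p)

def expected_entropy_loss(pos_with, pos_total, neg_with, neg_total):
    # Expected entropy loss (information gain) of a feature, given the
    # number of positive/negative documents that contain it.
    n = pos_total + neg_total
    prior = entropy(pos_total / n)
    p_f = (pos_with + neg_with) / n                       # Pr(f)
    p_c_given_f = pos_with / max(pos_with + neg_with, 1)  # Pr(C|f)
    without_pos = pos_total - pos_with
    without_neg = neg_total - neg_with
    p_c_given_not_f = without_pos / max(without_pos + without_neg, 1)
    posterior = p_f * entropy(p_c_given_f) + (1 - p_f) * entropy(p_c_given_not_f)
    return prior - posterior
```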
All features meeting the threshold are sorted by expected entropy loss to provide an approximation of the
usefulness of the individual feature. This approach assigns low scores to features that, although common in
both sets, are unlikely to be useful for a binary classifier.
One of the limitations of using this approach is the inability to consider co-occurrence of features. Two or
more features individually may not be useful, but when combined may become highly effective. Coetzee et
al. discuss an optimal method for feature selection in [7]. Our method, although not optimal, can be run in
constant time per feature with constant memory per feature, plus a final sort, 3 both significantly less than the
optimal method described by Coetzee. We perform several things to reduce the effects of possible feature
co-occurrence. First, we consider both words and phrases (up to three terms). Considering phrases reduces
the chance that a pair of features will be missed. For example, the word "molecular" and the word "biology"
individually may be poor at classifying a page about "molecular biology", but the phrase is obviously useful.
A second approach to reducing the problem is to consider many features, with a relatively low threshold
for the first step. The SVM classifier will be able to identify features as important, even if individually
3 We assume that the histogram required for computation is generated separately, and we assume a constant time to look up data
for each feature from the histogram.
they might not be. As a result, considering a larger number of features can reduce the chance that a feature
is incorrectly missed due to low individual entropy. For our experiments, we typically considered up to a
thousand features for each classifier, easily handled by an SVM. We set our thresholds at 7% for both the
positive and negative sets.
2.1.4 Using Entropy Ranked Features to Name Clusters
Ranking features by expected entropy loss (information gain) allows us to determine which words or phrases
optimally separate a given positive cluster from the rest of the world (random documents). As a result, it
is likely that the top ranked features will meaningfully describe the cluster. Our earlier work on classifying
web pages for Inquirus 2 [10, 11] considered document full-text (and limited structural information) and
produced features consistent with the "contents" of the pages, not necessarily with the "intentions" of them.
For example, for the category of "research papers" top ranked features included: "abstract", "introduction",
"shown in figure". Each of these words or phrases describe "components" of a research paper, but the phrase
"research paper" was not top ranked. In some cases the "category" is similar to words occurring in the pages,
such as for "reviews" or "calls for papers". However, for arbitrary Yahoo categories, it is unclear that the
document text (often pages are blank) is as good an indication of the "description" of the category.
To name a cluster, we considered the features extracted from the extended anchortext virtual documents.
We believe that the words near the anchortexts are descriptions of the target documents, as opposed to
"components of them" (such as "abstract" or "introduction"). For example, a researcher might have a link
to their publications saying "A list of my research papers can be found here". The top ranked features by
expected entropy loss are those which occur in many positive examples and few negative ones, suggesting
that they represent a consensus description of the cluster and are least common in random documents.
2.1.5 SVMs and Web Page Classification
Categorizing web pages is a well researched problem. We chose to use an SVM classifier [19] because it
is resistant to overfitting, can handle large dimensionality, and has been shown to be highly effective when
compared to other methods for text classification [12, 14]. A brief description of SVMs follows.
Consider a set of data points, {(x_1, y_1), ..., (x_N, y_N)}, such that x_i is an input and y_i is a target output.
An SVM is calculated as a weighted sum of kernel function outputs. The kernel function of an SVM is
written as K(x_a, x_b) and it can be an inner product, Gaussian, polynomial, or any other function that obeys
Mercer's condition.
In the case of classification, the output of an SVM is defined as:
y(x) = Σ_i α_i y_i K(x_i, x) + b    (1)
The objective function (which should be minimized) is:
E(α) = (1/2) Σ_i Σ_j α_i α_j y_i y_j K(x_i, x_j) - Σ_i α_i    (2)
subject to the box constraint 0 ≤ α_i ≤ C, ∀i, and the linear constraint Σ_i α_i y_i = 0. C is a user-defined
constant that represents a balance between the model complexity and the approximation error. Equation 2
will always have a single minimum with respect to the Lagrange multipliers, α. The minimum of Equation
2 can be found with any of a family of algorithms, all of which are based on constrained quadratic
programming. We used a variation of Platt's Sequential Minimal Optimization algorithm [17, 18] in all of
our experiments.
When Equation 2 is minimal, Equation 1 will have a classification margin that is maximized for the training
set. For the case of a linear kernel function (K(x_a, x_b) = x_a · x_b), an SVM finds a decision boundary that is
balanced between the class boundaries of the two classes. In the nonlinear case, the margin of the classifier
is maximized in the kernel function space, which results in a nonlinear classification boundary.
When using a linear kernel function, the final output is a weighted feature vector with a bias term. The
returned weighted vector can be used to quickly classify a test document by simply taking the dot product
of the features.
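The following Python sketch (ours; weights, bias, and feature_index are hypothetical names) shows this final classification step for a linear kernel: a dot product of the returned weight vector with the document's feature vector, plus the bias. A positive score is read as a positive classification; scores in (-1, +1) fall in the "uncertain region" discussed in the next section.

import numpy as np

def linear_svm_score(weights, bias, doc_features, feature_index):
    # doc_features maps a feature string to its value (e.g. 1.0 for presence);
    # feature_index maps a feature string to its position in the weight vector.
    x = np.zeros(len(weights))
    for feat, value in doc_features.items():
        idx = feature_index.get(feat)
        if idx is not None:
            x[idx] = value
    return float(np.dot(weights, x) + bias)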
2.2 Combination Method
This experiment compares three different methods for classifying a web page: full-text, anchortext only,
and extended anchortext only. Section 3 describes the individual results. Although of the three, extended
anchortext seems the most effective, there are specific cases for which a document's full-text may be more
accurate. We wish to meaningfully combine the information to improve accuracy. The result from an SVM
classifier is a real number, where negative numbers correspond to a negative classification
and positive numbers correspond to a positive classification. When the output lies in the interval (-1, +1) it is
less certain than when it lies outside that interval. The region (-1, +1) is called the "uncertain
region".
We describe two ways to improve the accuracy of the extended anchortext classifier. The first is through
uncertainty sampling, where a human judges the documents in the "uncertain region." The hope is that
the human judges are always correct, and that only a small percentage of documents fall in the
uncertain region. Our experimental results confirm that for the classifiers based on the extended anchortext,
on average about 8% of the total test documents (originally classified as negative) were considered uncertain,
and separating them out demonstrated a substantial improvement in accuracy.
The second method is to combine results from the extended anchortext based classifier with the less accurate
full-text classifier. Our observations indicated that the negative class accuracy was approaching 100% for
the extended anchortext classifier, and that many false negatives were classified as positive by the full-text
classifier. As a result, our combination function only considered the full-text classifier when a document was
classified as negative, but uncertain, by the extended anchortext classifier. For those documents, a positive
classification would result if the full-text classifier resulted in a higher magnitude (but positive) classification.
Our automatic method resulted in a significant improvement in positive class accuracy (average increase
from about 83% to nearly 90%), but had more false positives, lowering negative class accuracy by about a
percentage point from 98% to about 97%.
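A hedged sketch of this combination rule in Python (the function and the exact tie-breaking are our reading of the description above, not the authors' code):

def combined_decision(ext_anchor_score, fulltext_score):
    # Consult the full-text classifier only when the extended-anchortext
    # classifier says "negative, but uncertain" (score in (-1, 0)).
    if ext_anchor_score >= 0:
        return True
    if ext_anchor_score > -1:
        # Override only if the full-text score is positive and of larger magnitude.
        return fulltext_score > 0 and abs(fulltext_score) > abs(ext_anchor_score)
    return False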
Our goal was to compare three different sources of features for training a classifier for web documents: full-
text, anchortext and extended anchortext. We also wished to compare the relative ability to name clusters of
web documents using each source of features.
To compare these methods, we choose several Yahoo categories (and sub-categories) and randomly chose
documents from each. The Yahoo classified documents formed the respective positive classes, and random
documents (found from outside Yahoo) comprised the negative class. In addition, the Yahoo assigned category
names were used as a benchmark for evaluating our ability to name the clusters. In all cases virtual
documents excluded links from Yahoo to prevent using their original descriptions to help name the clusters.
3.1 Text-Classification
The categories we chose for classification, and the training and test sizes are listed in Table 1. For each
case we chose the documents listed in the category itself (we did not follow Yahoo links to other Yahoo
categories) and if there were insufficient documents, we chose several sub-categories to add documents.
Yahoo Category                     Parent                          Training   Test
Biology                            Science                         100/400    113/300
Archaeology                        Anthropology and Archaeology    100/400    145/300
Wildlife                           Animals, Insects, and Pets      100/400    120/300
Museums, Galleries, and Centers    Arts                            75/500     100/300
Management Consulting              Consulting                      300/500    100/300

Table 1: Yahoo categories used to test classification accuracy; numbers are positive / negative.
Yahoo Category Full-Text Anchortext Extended-AT Combined Sampled % Sampled
Biology 51.3/90 55.1/97.3 72.9/98 80.4/97.3 83.1/98 9.8
Archaeology 65.5/92.7 72.2/98.3 83.2/99.2 91.6/98.4 94.4/99.2 8.7
Museums 57/93.7 80/98 87/98.7 89/98.3 94/98.7 6.3
Mgmt Consulting 74/88.7 56.7/95 81.1/95 88.9/92.3 92.2/95 9.5
Average 66.2/92.5 68.3/97.5 82.2/98 89.3/97.1 92.1/98.0 7.7
Table 2: Percentage accuracy of five different methods (pos/neg); "sampled" refers to the uncertainty-sampled case.
Table 2 lists the results for each of the classifiers from Table 1.
When evaluating the accuracy, it is important to note several things. First, the negative accuracy is a lower bound: since negative pages were chosen at random, some could actually be positive. We did not have time to manually
examine all random pages. However, a cursory examination of the pages classified as positive, but from the
random set, showed about 1 in 3 were actually positive - suggesting negative class accuracy was more than
99% in many cases. It is also important to note the relatively small set sizes used for training. Our positive
sets typically had 100 examples, relatively small considering there were as many as 1000 features used for
training. Positive accuracy is also a lower bound since sometimes pages may be misclassified by Yahoo.
Other works comparing the accuracy of full-text to anchortext have shown either no clear difference in classification
ability or a slight loss when using anchortext alone [9]. Our results suggest that anchortext alone is comparable
for classification purposes with the full-text. Several papers agree that features on linking documents,
in addition to the anchortext (but less than the whole page) can provide significant improvements. Our work
is consistent with these results, showing significant improvement in classification accuracy when using the
extended anchortext instead of the document full-text.
Our combination method is also highly effective for improving positive-class accuracy, but reduces negative
class accuracy. Our method for uncertainty sampling required examining only 8% of the documents on
biology biology biology archaeology archaeology archaeology
(full-text) (anchortext) (extended anchortext) (full-text) (anchortext) (extended anchortext)
biology http biology archaeology archaeology archaeology
dna http www science archaeological archaeological archaeological
biological edu molecular ancient museum ancient
cell html biological archaeologists the museum
university biology university stone museum of anthropology
molecular the university of Title:archaeology of history
research human human excavation archeology of archaeology
protein cell research of archaeology http research
human of molecular biology museum university prehistoric
Table 3: Top 10 ranked features by expected entropy loss. Bold indicates a category word; underline indicates a parent category word.
wildlife wildlife wildlife museums museums museums
(full-text) (anchortext) (extended anchortext) (full-text) (anchortext) (extended anchortext)
wildlife wildlife wildlife museum art museum
Title:wildlife species conservation museum of museum museum of
species org species art contemporary art
endangered endangered animals of art museum of of art
wild conservation wild gallery contemporary art gallery
conservation endangered species endangered contemporary art gallery contemporary art
habitat sanctuary animal contemporary org contemporary
animals http nature art museum museums art museum
endangered species refuge and wildlife arts of arts
Table 4: Top 10 ranked features by expected entropy loss. Bold indicates a category word; underline indicates a parent category word.
average, while providing an average positive class accuracy improvement of almost 10 percentage points.
The automatic combination also provided substantial improvement over the extended anchortext or the full-text
alone for positive accuracy, but caused a slight reduction in negative class accuracy as compared to the
extended anchortext case.
management consulting management consulting management consulting
(full-text) (anchortext) (extended anchortext)
management consulting management
consulting inc consulting
clients management associates
Title:management group consultants
strategic associates business
business com group
Title:consulting consulting group firm
consultants group inc consulting firm
services com www management consulting
Table 5: Top 10 ranked features by expected entropy loss. Bold indicates a category word; underline indicates a parent category word.
biology (20) botany (8) wildlife (4) conservation and research (5) isps (6)
biology plant wildlife wildlife internet service
science botany animals conservation isps
biological of plant conservation endangered modem
molecular the plant insects natural earthlink
genetics botanical endangered species broadband
human plants the conservation research center providers
evolution and biology facts society http www service provider
genomics internet directory wild wildlife trust prodigy
anatomy botanic bat society http internet service provider
paleontology botanical garden totally wildlife society atm
Figure 4: Ranked list of features from extended anchortext by expected entropy loss. Number in parentheses is the number of positive examples.
3.2 Features and Category Naming
The second goal of this research is to automatically name various clusters. To test our ability to name
clusters we compared the top ranked features (by expected entropy loss) with the Yahoo assigned names.
We performed several tests, with as few as 4 positive examples. Tables 3, 4 and 5 show the top 10 ranked
features for each of the five categories above for the full-text, the anchortext only and extended anchortext.
The full-text appears comparable to the extended anchortext: in all five cases, the current category
name appears as the top or second ranked feature, and the parent category name (or at least one word from it)
appears in the top 10. The extended anchortext appears to perform similarly, with
an arguable advantage, with the parent name appearing more highly ranked. The anchortext alone appears
to do a poor job of describing the category, with features like "and" or "http" ranking highly. This is likely
due to the fact that people often put the URL or the name of the target page as the anchortext. The relatively
high thresholds (7%) removed most features from the anchortext only case. From the five cases there was an
average of about 46 features surviving the threshold cutoffs for the anchortext-only case. For the full-text
and extended anchortext, usually there were more than 800 features surviving the thresholds. Figure 4 shows
the results for small clusters for the same categories and several sub-categories. In every case the category
name was ranked first or second, with the parent name ranked highly 4 . In addition, most of the other top
ranked features described names of sub-categories. The ISP example was one not found in Yahoo. For this
experiment, we collected the home pages of six ISPs, and attempted to discover the commonality between
them. The full-text based method reported features common to the portal home pages, current news, "sign
in", "channels" "horoscopes", etc. However, the extended anchortext method correctly named the group
"isps" or "internet service provider", despite the fact that none of the pages mentioned either anywhere on
their homepage, with only Earthlink and AT&T Worldnet mentioning the phrase "internet service provider"
in a metatag. A search on Google for "isp" returned none of the ISPs used for this experiment in the top 10.
A search for "internet service provider" returned only Earthlink in the top 10.
4 Summary and Future Work
This paper describes a relatively simple method for learning a highly-accurate web page classifier, and using
the intermediate feature-set to help name clusters of web pages. We evaluated our approach on several Yahoo
categories, with very high accuracy for both classification and for naming. Our work supports and extends
other work on using web structure to classify documents, and demonstrates the usefulness of considering
inbound links, and words surrounding them. We also show that anchortext alone is not significantly better
(arguably worse) than using the full-text alone. We also present two simple methods for improving the
accuracy of our extended anchortext classifier. Combining the results from the extended anchortext classifier
with the results from the full-text classifier produces nearly a 7 percentage point improvement in positive
class accuracy. We also presented a simple method for uncertainty sampling, where documents that are
uncertain are manually evaluated, improving the accuracy nearly 10 percentage points, while requiring on-
4 In the case of "conservation and research", the Yahoo-listed parent category was "organizations", which did not appear as a
top ranked feature; there were only three top-level sub-categories under wildlife, suggesting that conservation and research could
be promoted.
average less than 8% of the documents to be examined.
Utilizing only extended anchortext from documents that link to the target document, average accuracy of
more than 82% for positive documents, and more than 98% for negative documents was achieved, while
just considering the words and phrases on the target pages (full-text) average accuracy was only 66.2% for
positive documents, and 92.5% for negative documents. Combining the two resulted in an average positive
accuracy of almost 90%, with a slight reduction in average negative accuracy. The uncertainty sampled case
had an average positive accuracy of more than 92%, with the negative accuracy averaging 98%.
Using samples of as few as four positive documents, we were able to correctly name the chosen Yahoo
category (without using knowledge of the Yahoo hierarchy) and in most cases rank words that occurred in
the Yahoo assigned parent category in the top 10 features. The ability to name clusters comes for free from
our entropy-based feature ranking method, and could be useful in creating automatic directory services.
Our simple approach considered only up to 25 words before and after (plus the words of) an in-bound
link. We wish to expand this to include other features on the inbound web pages, such as structural
information (e.g., is a word in a link or heading), as well as experiment with including headings of the
inbound pages near the anchortext, similar to work done by Fürnkranz [9]. We also wish to examine the
effects of the number of inbound links, and the nature of the category by expanding this to thousands of
categories instead of only five. The effects of the positive set size also need to be studied.
We also intend to apply our methods to standard test collections, such as the WebKB database, to support
comparisons with other research using the same collections and to quantify the improvements from using our
entropy-based feature selection.
--R
Information Theory and Coding.
Antonio Gull-
Combining labeled and unlabeled data with co-training
The anatomy of a large-scale hypertextual web search engine
Enhanced hypertext categorization using hyperlinks.
Hector Garcia-Molina
Feature selection in web applications using ROC inflections.
Efficient identification of web communities
Improving category specific web search by learning query modifications.
Using Extra-Topical User Preferences To Improve Web-Based Metasearch
Text categorization with support vector machines: Learning with many relevant features.
Authoritative sources in a hyperlinked environment.
Automated text categorization using support vector machine.
Accessibility of information on the web.
Digital libraries and Autonomous Citation Indexing.
Fast training of support vector machines using sequential minimal optimization.
Using sparseness and analytic QP to speed training of support vector machines.
The Nature of Statistical Learning Theory.
A study of approaches to hypertext categorization.
--TR
The nature of statistical learning theory
Enhanced hypertext categorization using hyperlinks
Combining labeled and unlabeled data with co-training
The anatomy of a large-scale hypertextual Web search engine
Efficient crawling through URL ordering
Fast training of support vector machines using sequential minimal optimization
Authoritative sources in a hyperlinked environment
Using analytic QP and sparseness to speed training of support vector machines
Efficient identification of Web communities
A Study of Approaches to Hypertext Categorization
Digital Libraries and Autonomous Citation Indexing
Text Categorization with Suport Vector Machines
Exploiting Structural Information for Text Classification on the WWW
Feature Selection in Web Applications By ROC Inflections and Powerset Pruning
Improving Category Specific Web Search by Learning Query Modifications
Using extra-topical user preferences to improve web-based metasearch
--CTR
Glover , David M. Pennock , Steve Lawrence , Robert Krovetz, Inferring hierarchical descriptions, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA
unified model of literal mining and link analysis for ranking web resources, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Adam Jatowt, Web page summarization using dynamic content, Proceedings of the 13th international World Wide Web conference on Alternate track papers & posters, May 19-21, 2004, New York, NY, USA
Rui Fang , Alexander Mikroyannidis , Babis Theodoulidis, A Voting Method for the Classification of Web Pages, Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology, p.610-613, December 18-22, 2006
Pucktada Treeratpituk , Jamie Callan, Automatically labeling hierarchical clusters, Proceedings of the 2006 international conference on Digital government research, May 21-24, 2006, San Diego, California
Vincenzo Loia , Sabrina Senatore , M. I. Sessa, LearnMiner: deductive, tolerant agents for discovering didactic resources on the web, Proceedings of the 14th international conference on Software engineering and knowledge engineering, July 15-19, 2002, Ischia, Italy
Zheng Chen , Shengping Liu , Liu Wenyin , Geguang Pu , Wei-Ying Ma, Building a web thesaurus from web link structure, Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, July 28-August 01, 2003, Toronto, Canada
Mukhopadhyay , Debasis Giri , Sanasam Ranbir Singh, An approach to confidence based page ranking for user oriented Web search, ACM SIGMOD Record, v.32 n.2, p.28-33, June
Qingyang Xu , Wanli Zuo, Extracting Precise Link Context Using NLP Parsing Technique, Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence, p.64-69, September 20-24, 2004
Aixin Sun , Ee-Peng Lim , Wee-Keong Ng, Web classification using support vector machine, Proceedings of the 4th international workshop on Web information and data management, November 08-08, 2002, McLean, Virginia, USA
Pvel Calado , Marco Cristo , Edleno Moura , Nivio Ziviani , Berthier Ribeiro-Neto , Marcos Andr Gonalves, Combining link-based and content-based methods for web document classification, Proceedings of the twelfth international conference on Information and knowledge management, November 03-08, 2003, New Orleans, LA, USA
Xiaoguang Qi , Brian D. Davison, Knowing a web page by the company it keeps, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
Shen , Jian-Tao Sun , Qiang Yang , Zheng Chen, A comparison of implicit and explicit links for web page classification, Proceedings of the 15th international conference on World Wide Web, May 23-26, 2006, Edinburgh, Scotland
Gui-Rong Xue , Yong Yu , Dou Shen , Qiang Yang , Hua-Jun Zeng , Zheng Chen, Reinforcing Web-object Categorization Through Interrelationships, Data Mining and Knowledge Discovery, v.12 n.2-3, p.229-248, May 2006
Bill Kules , Jack Kustanowitz , Ben Shneiderman, Categorizing web search results into meaningful and stable categories using fast-feature techniques, Proceedings of the 6th ACM/IEEE-CS joint conference on Digital libraries, June 11-15, 2006, Chapel Hill, NC, USA
Jian-Tao Sun , Ben-Yu Zhang , Zheng Chen , Yu-Chang Lu , Chun-Yi Shi , Wei-Ying Ma, GE-CKO: A Method to Optimize Composite Kernels for Web Page Classification, Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence, p.299-305, September 20-24, 2004
Philipp Cimiano , Siegfried Handschuh , Steffen Staab, Towards the self-annotating web, Proceedings of the 13th international conference on World Wide Web, May 17-20, 2004, New York, NY, USA
Shen , Zheng Chen , Qiang Yang , Hua-Jun Zeng , Benyu Zhang , Yuchang Lu , Wei-Ying Ma, Web-page classification through summarization, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Thierson Couto , Marco Cristo , Marcos Andr Gonalves , Pvel Calado , Nivio Ziviani , Edleno Moura , Berthier Ribeiro-Neto, A comparative study of citations and links in document classification, Proceedings of the 6th ACM/IEEE-CS joint conference on Digital libraries, June 11-15, 2006, Chapel Hill, NC, USA
Andrej Bratko , Bogdan Filipi, Exploiting structural information for semi-structured document categorization, Information Processing and Management: an International Journal, v.42 n.3, p.679-694, May 2006
Valter Crescenzi , Paolo Merialdo , Paolo Missier, Clustering web pages based on their structure, Data & Knowledge Engineering, v.54 n.3, p.279-299, September 2005
Baoping Zhang , Yuxin Chen , Weiguo Fan , Edward A. Fox , Marcos Gonalves , Marco Cristo , Pvel Calado, Intelligent GP fusion from multiple sources for text classification, Proceedings of the 14th ACM international conference on Information and knowledge management, October 31-November 05, 2005, Bremen, Germany
Jianhan Zhu , Jun Hong , John G. Hughes, PageCluster: Mining conceptual link hierarchies from Web log files for adaptive Web site navigation, ACM Transactions on Internet Technology (TOIT), v.4 n.2, p.185-208, May 2004
Using web structure and summarisation techniques for web content mining, Information Processing and Management: an International Journal, v.41 n.5, p.1225-1242, September 2005
Z. Cui , G. Ducatel , M. Thint , B. Assadian , B. Azvine, Towards automated customer self-help, BT Technology Journal, v.24 n.1, p.96-106, January 2006
Ronald Fagin , Ravi Kumar , Kevin S. McCurley , Jasmine Novak , D. Sivakumar , John A. Tomlin , David P. Williamson, Searching the workplace web, Proceedings of the 12th international conference on World Wide Web, May 20-24, 2003, Budapest, Hungary
Maria Halkidi , Benjamin Nguyen , Iraklis Varlamis , Michalis Vazirgiannis, THESUS: Organizing Web document collections based on link semantics, The VLDB Journal The International Journal on Very Large Data Bases, v.12 n.4, p.320-332, November
Adam Jatowt , Mitsuru Ishizuka, Temporal multi-page summarization, Web Intelligence and Agent System, v.4 n.2, p.163-180, April 2006
Einat Amitay , David Carmel , Adam Darlow , Ronny Lempel , Aya Soffer, The connectivity sonar: detecting site functionality by structural patterns, Proceedings of the fourteenth ACM conference on Hypertext and hypermedia, August 26-30, 2003, Nottingham, UK
Tien Nhut Nguyen , Ethan Vincent Munson , Cheng Thao, Fine-grained, structured configuration management for web projects, Proceedings of the 13th international conference on World Wide Web, May 17-20, 2004, New York, NY, USA | evaluation;anchortext;classification;SVM;entropy based feature extraction;web directory;web structure;cluster naming |
511526 | Translating XSLT programs to Efficient SQL queries. | We present an algorithm for translating XSLT programs into SQL. Our context is that of virtual XML publishing, in which a single XML view is defined from a relational database, and subsequently queried with XSLT programs. Each XSLT program is translated into a single SQL query and run entirely in the database engine. Our translation works for a large fragment of XSLT, which we define, that includes descendant/ancestor axis, recursive templates, modes, parameters, and aggregates. We put considerable effort in generating correct and efficient SQL queries and describe several optimization techniques to achieve this efficiency. We have tested our system on all 22 SQL queries of the TPC-H database benchmark which we represented in XSLT and then translated back to SQL using our translator. | INTRODUCTION
XSLT is an increasingly popular language for processing XML
data. Based on a recursive paradigm, it is relatively easy to use
for programmers accustomed to a functional recursive style of pro-
gramming. While originally designed to serve as a stylesheet, to
map XML into HTML, it is increasingly used in other applications,
such as querying and transforming XML data.
Today most of the XML data used in enterprise applications
originates from relational databases, rather than being stored na-
tively. There are strong reasons why this will not change in the
near future. Relational database systems offer transactional guar-
antees, which make them irreplaceable in enterprise applications,
and come equipped with high-performance query processors and
optimizers. There exists considerable investment in today's relational
database systems as well as the applications implemented on
top of them. The language these systems understand is SQL.
Techniques for mapping relational data to XML are now well
understood. Research systems in XML publishing [2, 8, 10, 15]
have shown how to specify a mapping from the relational model to
XML and how to translate XML queries expressed in XML-QL [7]
or XQuery [3] into SQL.
In this paper we present an algorithm for translating XSLT programs
into efficient SQL queries. We identify a certain subset of
XSLT for which the translation is possible and which is rich enough
to express databases-like queries over XML data. This includes
recursive templates, modes, parameters (with some restrictions),
Copyright is held by the author/owner(s).
WWW2002, May 7-11, 2002, Honolulu, Hawaii, USA.
ACM 1-58113-449-5/02/0005.
aggregates, conditionals, and a large fragment of XPath. One important
contribution of this paper is to identify a new class of optimizations
that need to be done either by the translator, or by the
relational engine, in order to optimize the kind of SQL queries that
result from such a translation.
We argue that the XSLT fragment described here is sufficient
for expressing database-like queries in XSLT. As part of our experimental
evaluation we have expressed all 22 SQL queries in the
TPC-H benchmark [6] in this fragment, and translated them into
SQL using our system. In all cases we could express these queries
in our fragment, but in some cases the query we generated from the
XSLT program turned out to be significantly more complex than
the original TPC-H counterpart.
Translations from XML languages to SQL have been considered
before, but only for XML query languages, like XML-QL and
XQuery. The distinction is significant; since XSLT is not a query
language, its translation to SQL is significantly more complex. The
reason is the huge paradigm gap between XSLT's functional, recursive
paradigm, and SQL's declarative paradigm. An easy translation
is not possible, and, in fact, it is easy to construct programs in
XSLT that have no SQL equivalent.
As an alternative to translation, it is always possible to interpret
any XSLT program outside the relational engine, and use the
RDBMS only as an object repository. For example, the XSLT interpreter
could construct XML elements on demand, by issuing one
SQL query for every XML element that it needs. We assume, that
we can formulate a SQL query to retrieve an XML element with a
given ID. Such an implementation would end up reading and materializing
the entire XML document most of the time. Also, this
approach would need to issue multiple SQL queries for a single
XSLT program. This slows down the interpretation considerably
because of the ODBC or JDBC connection overhead. In contrast,
our approach generates a single SQL query for the entire XSLT
program, thus pushing the entire computation inside the relational
engine. This is the preferred solution, both because it makes a single
connection to the database server and because it enables the
relational engine to choose the best execution strategy for that particular
program.
As an example, consider the XSLT program below:
<xsl:template match="*">
<xsl:apply-template/>
</xsl:template>
<xsl:template match="person[name=='Smith']">
<xsl:value-of select="phone/text()"/>
</xsl:template>
The program makes a recursive traversal of the XML tree, looking
for a person called Smith and returning his phone. If we
interpret this program outside the relational engine we need to is-
Beers (name, price)
Drinkers (name, age)
Astrosign (drinkers, sign)
Frequents (drinker, bar)
Bars (name)
Serves (bar, beer)
Likes (drinker, beer)
Figure
1: Relational Schema for Beers/Drinkers/Bars.
doc
drinkers
astrosign beers
price barname name beername
name
bars
name
age
Figure
2: XML View for Beers/Drinkers/Bars.
sue a SQL query to retrieve the root element, then one SQL query
for each child, until we find a person element, etc. This naive
approach to XSLT interpretation ends up materializing the entire
document.
Our approach is to convert the entire XSLT program into one
SQL query. The query depends on the particular mapping from the
relational data to XML; assuming such a mapping, the resulting
SQL query is:
SELECT person.phone
FROM person
WHERE person.name = 'Smith'
This can be orders of magnitude faster than the naive ap-
proach. In addition, if there exists an index on name in the database,
then the relational engine can further improve performance.
The organization of the paper is as follows. In Section 2 we
provide some examples of XSLT to SQL translation to illustrate the
main issues. Section 3 describes the architecture of our translator,
while Section 4 describes the various components in detail. We
discuss the optimizations done to produce efficient SQL queries in
Section 5. Section 6 presents the results of the experiments on TPC-H
benchmark queries. Sections 7 and 8 discuss related work and
conclusions.
2. EXAMPLES OF XSLT TO SQL TRANSLATION
We illustrate here some example of XSLT to SQL translations,
highlighting the main issues. As we go along, the fragment of
XSLT translated by us will become clear. Throughout this section
we illustrate our queries on the well-known beers/drinkers/
bars database schema adapted from [17], shown in Figure 1. We
will assume it is exported in XML as shown in Figure 2. Notice
that there is some redundancy in the exported XML data, for example
bars are accessible either directly from drinkers or under
beers.
We assume the XML document to be unordered, and do not support
any XSLT expressions that check order in the input. For example
the beers under drinkers form an unordered collection. It
is possible to extend our techniques to ordered XML views, but this
Q2: SELECT Beers.name
FROM Beers
Q2': SELECT Beers.name
FROM Drinkers, Likes, Beers
WHERE Drinkers.name=Likes.drinker AND
Likes.beer=Beers.name
Figure
3: Find all beers liked by some drinker.
is beyond our scope here. Furthermore, we will consider only ele-
ment, attribute, and text nodes in the XML tree, and omit from our
discussion other kinds of nodes such as comments or processing
instructions.
2.1 XPath
XPath [5] is a component of XSLT, and the translation to SQL
must handle it. For example the XPath /doc/drinkers/name/
returns all drinkers. The equivalent SQL is:
SELECT drinkers.name FROM drinkers
For a less obvious example, consider the query in Figure 3 with
two SQL queries Q2 and Q2'. Q2 is not a correct translation of
P2, because it returns all beers, while P2 returns only beers liked
by some drinker. Indeed P2 and Q2' have the same semantics. In
particular, Q2' preserves the multiplicities of the beers in the same
way as P2.
Q2' is much more expensive than Q2, since it performs two
joins, while Q2 is a simple projection. In some cases we can optimize
Q2' and replace it with Q2, namely when the following conditions
are satisfied: every beer is liked by at least one drinker, and
the user specifies that the duplicates in the answer have to be re-
moved. In this case P2 and Q2 have the same semantics, and our
system can optimize the translation and construct Q2 instead of
Q2'. This is one of the optimizations we consider in Section 5.
The XPath fragment supported by our system includes the entire
language except constructs dealing with order and reference traver-
sals. For example a navigation axis like ancestor-or-self is
supported, while following-sibling is not.
2.2 XSLT Templates and Modes
A basic XSLT program is a collection of template rules. Each
template rule specifies a matching pattern and a mode. Presence of
modes allows different templates to be chosen when the computation
arrives on the same node.
Figure 4 shows an XSLT program that returns, for every drinker
with age less than 25, pairs of (drinker name, beers having
price less than 10 that she likes). The program has 3 modes. In the
first mode (the default mode), drinkers with age less than 25
are selected. In the second mode (mode=1), for those drinkers all
beers priced less than 10 are selected. In the third mode the result
elements are created.
In general templates and modes are also used to modularize the
program. The corresponding SQL query is also shown.
2.3 Recursion in XSLT
Both XSLT and XPath can traverse the XML tree recursively.
Consider the XPath expression //barname that retrieves all bar-
names. In absence of XML schema information it is impossible to
express this query in SQL, because we need to navigate arbitrarily
deep in the XML document 1 . However, in the case of XML data
1 Some SQL implementations support recursive queries and can
< xsl:template match="drinkers[age < 25 ]" >
< xsl:template match="beers[price
< result>
< xsl:apply-template select= "./name" mode=2 />
< xsl:template match="name" mode=2 >
SELECT drinkers.name, likes.beer
FROM drinkers, likes, beers
WHERE drinkers.age < 25 AND
      drinkers.name = likes.drinker AND
      likes.beer = beers.name AND
      beers.price < 10
Figure
4: XSLT program using modes: For every drinker with
age less than 25, return all pairs (drinker name, beers having
price less than 10 that she likes)
< xsl:template match="drinkers[name == 'Brian']">
< xsl:apply-template select="/drinkers" mode=1>
< xsl:param name='sign' select="astrosign"/>
< xsl:template match="drinkers" mode=1>
< xsl:param name='sign'/>
< xsl:variable name='currSign' select="astrosign"/>
< xsl:if test="$sign == $currSign">
< result>
< xsl:value-of select="name"/>
SELECT drinkers2.name
FROM drinkers as drinkers1, drinkers as drinkers2,
     astrosign as astrosign1, astrosign as astrosign2
WHERE drinkers1.name = 'Brian' AND
      astrosign1.drinkers = drinkers1.name AND
      astrosign2.drinkers = drinkers2.name AND
      astrosign1.sign = astrosign2.sign
Figure
5: All drinkers with the same astrosign as Brian
generated from relational databases, the resulting XML document
has a non-recursive schema, and we can unfold recursive programs
into non-recursive ones. Using the schema in Fig. 2, the unfolded
XPath expression is /drinkers/beers/barname
Recursion can also be expressed in XSLT through templates.
Given a non-recursive XML schema, this recursion can also be
eliminated, by introducing additional XSLT templates and modes.
We describe the general technique in Section 4.1.
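To make the unfolding concrete, the following Python sketch (ours; the child lists are our reading of the XML view in Figure 2) enumerates all label paths that a descendant step such as //barname can match in a non-recursive schema:

def unfold_descendant(schema, root, target):
    # Rewrite //target into explicit parent/child paths by walking the schema.
    paths = []
    def walk(node, prefix):
        for child in schema.get(node, []):
            step = prefix + [child]
            if child == target:
                paths.append("/" + "/".join(step))
            walk(child, step)
    walk(root, [])
    return paths

schema = {
    "doc":      ["drinkers"],
    "drinkers": ["name", "age", "astrosign", "beers", "bars"],
    "beers":    ["beername", "price", "barname"],
    "bars":     ["name", "beername"],
}
print(unfold_descendant(schema, "doc", "barname"))   # ['/drinkers/beers/barname']

Because the schema generated from a relational database is non-recursive, this enumeration always terminates, which is what makes the elimination of // possible.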
2.4 Variables and Parameters in XSLT
In XSLT one can bind some value to a parameter in one part of
the tree, then use it in some other part. In SQL this becomes a
join operation, correlating two tables. For example, consider the
query in Figure 5, which finds all drinkers with the same astrosign
as "Brian". A parameter is used to pass the value of "Brian's'' as-
trosign, which is matched against every drinker's astrosign.
In this example, the value stored in variable and parameters was
a single node. In general, they can store node-sets (specified us-
express such XSLT programs; we do not generate recursive SQL
queries in this work.
< xsl:template match="drinkers">
< result>
< xsl:value-of select="min(beers/price)"/>
SELECT drinkers2.name, min(beers4.price)
FROM drinkers as drinkers2, likes as likes3,
beers as beers4
WHERE drinkers2.name = likes3.drinker AND
      likes3.beer = beers4.name
GROUP BY drinkers2.name
Figure 6: For every drinker find the minimum price of beer she likes.
ing XPath, for instance), and also results of another template call
(analogous to temporary tables in SQL). Our translation of XSLT
to SQL supports all possible values taken on by variables.
2.5 Aggregation
Both XSLT and SQL support aggregates, but there is a significant
difference: in XSLT the aggregate operator is applied to a subtree
of the input, while in SQL it is applied to a group using a Group
By clause. Consider the query in Figure 6, which finds for every
drinker the minimum price of all beers she likes. In XSLT we
simply apply min to a subtree. In SQL we have to Group By
drinkers.name.
For a glimpse at the difficulties involved in translating aggre-
gates, consider the query in Figure 7, which, for every age, returns
the cheapest price of all beers liked by people of that age. In XSLT
we first find all ages, and then for each age apply min to a node-
set, which in this case in not a sub-tree. The correct SQL translation
for the XSLT program is shown next followed by an incorrect
translation. The difference is subtle. In XSLT we collect all ages,
with their multiplicities. That is, if three persons are 29 years old,
then there will be three results with 29. The wrong SQL query
contains a single such entry. The correct SQL query has an additional
GroupBy attribute (name) ensuring that each age occurs
the correct number of times in the output. See also our discussion
in Section 6.
2.6 Other XSLT Constructs
Apart from those already mentioned, our translation also supports
if-[else], for-each, and case constructs. The for-each construct
is equivalent to iteration using separate template rules. The
case construct is equivalent to multiple if statements.
2.7 Challenges
The translation from this XSLT fragment into SQL poses some
major challenges. First, we need to map from a functional programming
style to a declarative style. Templates correspond to
functions, and their call graph needs to be converted into SQL state-
ments. Second, we need to cope with general recursion, both at the
XPath level and in XSLT templates. This is not possible in gen-
eral, but it is always possible when the XML document is generated
from a relational database, which is our case. Third, parameters
add another source of complexities, and they typically need
to be converted into joins between values from different parts of
the XML tree. Finally, XSLT-style aggregation needs to be converted
into SQL-style aggregation. This often involves introducing
Group By clauses and, sometimes, complex conditions in the
Having clause.
Figure
8 illustrates a more complex example with aggregation
< xsl:template match="age">
< xsl:variable name="currage">
< result>
< xsl:value-of select=min("/doc/drinkers[age ==$currage ]/
beers/price")/>
Correct SQL:
SELECT drinkers2.age, min(beers.price)
FROM drinkers, likes, beers, drinkers2
likes.beer AND
GROUP BY drinkers2.age, drinkers2.name
Incorrect SQL:
SELECT drinkers.age, min(beers.price)
FROM drinkers, likes, beers
WHERE drinkers.name = likes.drinker AND
      likes.beer = beers.name
GROUP BY drinkers.age
Figure
7: For every age find the minimum price of beer liked
by some drinker of that age.
< xsl:template match="drinkers">
< xsl:apply-template select="beers/price" mode=1>
< xsl:with-param name="priceSet" select="beers/price"/>
< xsl:with-param name="drinkerName select="name"">
< xsl:template match="price" mode=1>
< xsl:param name="priceSet" select="default1"/>
< xsl:param name="drinkerName" select="default2"/>
< xsl:variable name="currPrice">
< xsl:variable name="currBeer">
< xsl:value-of select=. />
< xsl:if test=$currPrice==min($priceSet)/>
< result>
< xsl:value-of select=$drinkerName>
< xsl:value-of select=$currBeer>
< xsl:value-of select=$currPrice>
SELECT drinkers.name, likes3.beer, beers4.price
FROM drinkers, beers as beers6, likes as likes5,
beers as beers4, likes as likes3
likes3.name AND
likes3.beer AND
likes5.name AND
likes5.beer
GROUP BY beers4.name, drinkers2.name, likes3.beer,
beers4.price, likes3.name
Figure
8: Cheapest beer and price for every drinker
and parameters. The query finds for every drinker the cheapest
beer she likes and its price. Notice the major stylistic difference
between XSLT and SQL. In XSLT we compute the minimum price,
bind it to a parameter, then search for the beer with that price and
retrieve its name. In SQL we use the Having clause.
Orthogonal to the translation challenge per se, we have to address
the quality of the generated SQL queries. Automatically
generated SQL queries tend to be redundant and have unneces-
Querier
Tagger
RDB Schema
View Tree
Query
SQL
Tuples
SQL
Generator
Optimizer
QTree
Generator
Parser
View
Output
Tree
Figure
9: Architecture of the Translator
sary joins (typically self-joins) [16]. An optimizer for eliminating
redundant joins is difficult to implement since the general prob-
lem, called query minimization, is NP-complete [4]. Commercial
databases systems do not do query minimization because it is expensive
and because users do not write SQL queries that require
minimization. In the case of automatically generated SQL queries
however, it is all too easy to overshoot, and create too many joins.
Part of the challenge in any such system is to avoid generating redundant
joins.
3. ARCHITECTURE
Figure
9 shows the architecture of the translator. An XML view
is defined over the relational database using a View Tree [10]. The
XML view typically consists of the entire database, but can also
be a subset to export a subset view of the relational database. It
can also include redundant information. The view never computed,
but instead is kept virtual. Once the View Tree has been defined,
the system accepts XSLT programs over the virtual XML view, and
translates them to SQL in several steps.
First, the parser translates the XSLT program into an intermediate
representation (IR). The IR is a DAG (directed acyclic graph) of
templates with a unique root template (default mode template that
matches '/'). Each leaf node contributes to the program's result,
and each path from the root to a leaf node corresponds to a SQL
query: the final SQL query is a union of all such queries. Each
such path is translated first into a Query Tree (QTree) by the QTree
generator. A QTree represents multiple, possible overlapping navigations
through the XML document, together with selection, join,
and aggregate conditions at various nodes. It is explained in Section
4.2.
The SQL generator plus optimizer takes a QTree as input, and
generates an equivalent SQL query using the XML schema, RDB
schema, and View Tree. The SQL generator is described in Section
4.4, and the optimizations are discussed in Section 5.
The querier has an easy task; it takes the generated SQL query
and gets the resulting tuples from the RDB. The result tuples are
passed onto the tagger, similar to [15], which produces the output
for the user in a format dictated by the original query. The functionality
of the querier and the tagger is straightforward and not our
focus, and hence is not discussed further.
< xsl:template match="drinkers">
< xsl:variable name='namevar'>
< xsl:value-of select="name"/>
< xsl:if test="$namevar=='Brian'">
< xsl:apply-template select="/drinkers" mode=1>
< xsl:param name='beerSet' select="beers"/>
< xsl:template match="drinkers" mode=1>
< xsl:param name='beerSet' select="defaultBeerSet"/>
< result>
< xsl:value-of select="name"/>
< xsl:value-of select=count(beers/[name == $beerSet])/>
Figure
10: Find (drinker, n) pairs, where n is the number of
beers that both Brian and drinker likes
4. TRANSLATION
We will use as a running example the program in Figure 10,
which retrieves the number of beers every drinker likes in common
with Brian. We begin by describing how the XSLT program is
parsed into an internal representation (IR) that reflects the semantics
of the program in a functional style. We proceed to describe the
QTree, which is an abstract representation of the paths traversed by
the program on the View Tree. A QTree represents a single such
path traversal, and is a useful intermediate representation for purposes
of translating XML tree traversals into SQL. We describe our
representation of the XML view over relational data (the View Tree,
and finally show how we combine information from the QTree and
View Tree to generate an equivalent SQL Query.
4.1 Parser
The output of the parser is an Intermediate Representation (IR)
of the XSLT query. Besides the strictly syntactic parsing, this module
also performs a sequence of transformations to generate the IR.
First it converts the XSLT program into a functional representation,
in which each template mode is expressed as a function. Figure 11
(a) shows this for our running example. We add extra functions to
represent the built-in XSLT template rules (Figure 11(b)), then we
"match" the resulting program against the XML Schema (extracted
from the View Tree). During the match all wildcards (*) are in-
stantiated, all navigations other than parent/child are expanded into
simple parent/child navigation steps, and only valid navigations are
retained. This is shown in sequence in Figures 11 (c), (d) and (e). In
some cases there may be multiple matches: Figure 12 (a) illustrates
such an example, with the expansion in Figure 12 (b).
The end result for our running example is the IR shown in Figure
13. In this case the result is a single call graph. In some cases,
a template calls more than one template conditionally (if-then-else
or case constructs) or unconditionally (as shown in Figure 14). The
semantics of such queries is the union of all possible paths that lead
from the start template to a return node, as shown in Figure 14.
At the end of the above procedure, we have one or more inde-
pendent, straight-line call graphs. In what follows, we will demonstrate
how to convert a straight-line call graph into a SQL query.
The SQL query for the whole XSLT program is the union of the
individual SQL queries.
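The enumeration of straight-line call graphs can be sketched as a simple depth-first traversal of the template-call DAG (Python; the toy graph below is our own abbreviation of the shape of Figure 13):

def straight_line_paths(call_graph, start):
    # Each root-to-leaf path yields one SQL query; the final query is their union.
    paths = []
    def dfs(node, path):
        succs = call_graph.get(node, [])
        if not succs:
            paths.append(path)
        for nxt in succs:
            dfs(nxt, path + [nxt])
    dfs(start, [start])
    return paths

g = {"f0_root": ["f0_drinkers"], "f0_drinkers": ["f1_drinkers"],
     "f1_drinkers": ["return"]}
print(straight_line_paths(g, "f0_root"))
# [['f0_root', 'f0_drinkers', 'f1_drinkers', 'return']]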
4.2 QTree
The QTree is a simulation from the XML schema, and succinctly
describes the computation being done by the query. The QTree
abstraction captures the three components of an XML query: (a) the
f1(/drinkers, "beers");
count(beers[name == $beerSet]));
(a) Simplified Functional Form
f1(/drinkers, "beers");
count(beers[name == $beerSet]));
(b) Extended with Built-in Templates
f1(/drinkers, "beers");
count(beers[name == $beerSet]));
(c) Function Duplication
f1(/drinkers, "beers");
count(beers[name == $beerSet]));
(d) XPATH Expansion
f1 drinkers(/drinkers, "beers");
count(beers[name == $beerSet]));
(e) Function Call Matching
Figure
11: The various stages leading to IR generation for the
query in Figure 10.
path taken by the query in the XML document, (b) the conditions
placed on the nodes or data values along the path (c) the parameters
passed between function calls. Corresponding to the three elements
of the XML query above, a QTree has the following components.
(a) Query Fragment
fN beers($default/beers,
(b) After XPATH Expansion
Figure
12: A query fragment with complex XPATH expansion.
f0_root
f0_drinkers
f1_drinkers
return
select: ./drinkers
condition: ./name == 'Brian'
select: /drinkers
params: beers
arguments: param1
select: ./name
params:
Figure
13: The IR for the query in Figure 10
Figure
14: Complex call graph decoupling
1. Tree: The tree representing the traversal of the select XPath
expressions (with which apply-template is used). Nodes in
this tree are labeled by the tag of the XPath component. Hence
each node in this tree is associated with a node in the XML
schema. Entities that are part of the output are marked with #.
2. Condition set: The collection of all conditions in the query.
It not only includes conditions specified explicitly using the
xsl:if construct, but also includes predicates in the XPath expressions
3. Mapping for parameters: A parameter can be the result of
another XSLT query, a node-set given by an XPath expres-
sion, or a scalar value. A natural way of representing this
is by using nested QTrees, which is the approach we take.
Note that the conditions inside the nested QTree might refer
to entities (nodes or other parameters) in the outer QTree.
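One possible in-memory encoding of a QTree (our own Python sketch, not the system's actual classes) captures exactly these three components:

from dataclasses import dataclass, field
from typing import List

@dataclass
class QTreeNode:
    tag: str                                    # label of an XML schema node
    children: List["QTreeNode"] = field(default_factory=list)
    is_output: bool = False                     # marked with '#' in the figures

@dataclass
class QTree:
    root: QTreeNode
    conditions: List[str] = field(default_factory=list)   # explicit predicates
    params: dict = field(default_factory=dict)             # name -> scalar or nested QTree

# Shape of Q1 from Figure 15 (conditions abbreviated).
d1 = QTreeNode("drinkers", [QTreeNode("name")])
d2 = QTreeNode("drinkers", [QTreeNode("name", is_output=True)])
q1 = QTree(QTreeNode("root", [d1, d2]), conditions=["d1.name == 'Brian'"])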
Figure 15 (a) shows a QTree for the call graph in Figure 10.
There are three QTrees in the figure. Q1 is the main QTree corresponding
to the XSLT program. It has pointers to two other
QTrees Q2 and Q3, which correspond to the two node-set parameters
passed in the program.
The logic encapsulated by the XSLT program is as follows:
1. start at the "root" node.
2. traverse down to a "drinkers" named Brian; "./beers/beername"
is passed as a parameter at this point.
3. starting from the "root", traverse down to "drinkers" again.
4. traverse one level down to "name", and perform an aggregation
on the node-set "beers[name == $beerSet]".
These steps correspond to the main QTree for the query Q1. Note
that in step 3, when the query starts at the root to go to drinkers
again, a separate drinker node is instantiated since the query could
be referring to a drinker that is different from the current one (the
root has multiple instances of "drinker" child nodes). QTrees are
also created for every node-set. For example, the second parameter
of the return call, count(beers[name == $P1]), is represented as the
QTree Q3. The predicate condition in the XPath for this parameter
is represented in the QTree and refers to P1, defined in Q1.
As an abstraction, QTree is general enough that it can also be
used for other XML query languages like XQuery and XML-QL.
QTree is a powerful and succinct representation of the query computation
independent of the language in which the query was expressed
in. Moreover, the conversion from QTree to SQL is also
independent of the query language.
4.3 The View Tree
The View Tree defines a mapping from the XML schema to the
relational tables. Our choice of the View Tree representation has
been borrowed from SilkRoute [10]. The View Tree defines a SQL
query for each node in the XML schema. Figure 16 shows the View
Tree for the beers/drinkers/bars schema. The right hand side of
each rule should be interpreted as a SQL query. The rule heads
(e.g., Drinkers) denote the table name, and the arguments denote
the column name. Same argument in two tables represents a join
on that value.
The query for an XML schema node depends on all its ances-
tors. For example, in Figure 16 the SQL for beername depends
on drinkers and bars. Correspondingly, the SQL query for a child
node is always a superset of the SQL query of its parent. Put another
way, given the SQL query for a parent node, one can construct
the query for its child node by adding appropriate FROM tables
and WHERE constraints. As discussed later, such representation is
crucial to avoid redundant joins, and hence generate efficient SQL
queries.
4.4 SQL Generation
This section explains how we generate SQL from a QTree using
the View Tree. As explained before, a QTree represents a traversal
of nodes in the original query and constraints placed by query on
these nodes. The idea is to generate the SQL query clauses corresponding
to those traversals and constraints. This is a three step
process. First, nodes of the QTree are bound to instances of relational
tables. Second, the appropriate WHERE constraints are generated
using the binding in the first step. Intuitively, the first step
generates the FROM part of the SQL query and join constraints due
to tree traversal. The second step generates all explicitly specified
constraints. Finally, the bindings for the return nodes are used to
generate the SELECT part. We next describe each of these steps.
name
drinkers drinkers
beers
name
Q3
beers
name
root
#name
Condition:
Q1:
SELECT drinkers.name, count(Q2)
FROM drinkers, drinkers2
Q2:
likes.beer
FROM likes
AND likes.beer in Q3
Q3:
likes.beer
FROM likes
Condition:
Figure
15: QTree for the example query (left) and mappings to SQL (right)
1. drinkers(name, age,
Astrosign(name,astrosign)
2. beers(beerName, price,
Beers(beerName, price),
3. bars(barName,
Bars(barName),
Frequents(drinkerName, barName)
4.
Beers(beerName, ,drinkerName),
5. beername(beerName, barName,
Frequents(drinkerName, barName),
Bars(beerName, drinkerName),
Figure 16: View Tree for the beers/drinkers/bars schema in Figure 2.
4.4.1 Binding the QTree nodes
A binding associates a (relational table, column) pair with a QTree
node. This (table, column) pair can be treated as its "value". The
binding step updates the list of tables required in the FROM clause
and implicit tree traversal constraints in WHERE clause.
We carry out this binding in a top down manner to avoid redundant
joins. Before a node is bound, all its ancestors should be
bound. To bind a node, we instantiate new versions of each table
present in the View Tree SQL query for the child. Tables and constraints
presented in the SQL of parent are not repeated again. The
node can now be bound to an appropriate table name (using table
renamings if required) and field using the SQL information from
the View Tree.
The end result of binding a node n is bindings for all nodes
that lie on the path from the root to n, a value association for n
of the form tablename.fieldname, a list of tables to be included in
the FROM clause, and the implicit constraints due to traversal.
4.4.2 Generating the WHERE clause
Recall that all explicit conditions encountered during query traversal
are stored in the QTree. In this step, these conditions are ANDed
SELECT drinkers.name, count(
likes.beer
FROM likes
likes.beer IN (
likes.beer
FROM likes
FROM drinkers, drinkers2
Figure
17: SQL for the QTree Q1 in Figure 15
together along with the constraints generated in the binding step.
A condition is represented in the QTree as a boolean tree with expressions
at leaves. These expressions are converted to constraints
by recursively traversing the expression, and at each step doing the
1. Constant expression are used verbatim.
2. Pointer to a QTree node is replaced by its binding.
3. Pointer to a QTree (i.e., the expression is a node set) is replaced
by a nested SQL query which is generated by calling the conversion
process recursively on the pointed QTree.
4.4.3 Generating the SELECT clause
The values (columns of some table) bound to the return nodes
form the SELECT part of the SQL query. If the return node is a
pointer to a QTree, it is handled as mentioned above and the query
generated is used as a subquery.
Figure
15 shows the mapping of three QTrees in our example to
SQL after these steps. Figure 17 shows the SQL generated by our
algorithm for Q1 after these steps.
4.5 Eliminating Join Conditions on Intersecting
We now briefly explain how our choice of a View Tree representation
helps in eliminating join conditions. For any two paths in the QTree ,
nodes that lie on both paths must have the same value. One simple
approach would be two bind the two paths independently and then
for each common node add equality conditions to represent the fact
that values from both paths are the same. For example, consider
a very simple query that retrieves all all drinkers younger than 25.
Figure
shows the QTree for this query.
If we take the approach of binding the nodes independently, and
then adding the SQL constraints we will have the following SQL
drinkers
age #name
Condition:
age < 25
Figure
18: QTree for all drinkers with age less than 25
query:
SELECT drinkers3.name
FROM drinkers2, drinkers3
WHERE drinkers2.age < 25 AND
In our approach however we first iterate over the common node,
which is the drinkers node, and then add the conditions. This
leads to a better SQL query, shown below.
SELECT drinkers.name
FROM drinkers
WHERE drinkers.age < 25
This redundant join elimination becomes more important for complex
queries, when there are many nodes that lie on multiple paths
from the root to leaves.
5. OPTIMIZATIONS
Automated query generation is susceptible to generating inefficient
queries with redundant joins and nested queries. Our optimizations
unnest subqueries and eliminate joins that are not nec-
essary. Most (but not all) of the optimizations described here are
general-purpose SQL query rewritings that could be done by an op-
timizer. There are three reasons why we address them here. First,
these optimizations are specific to the kind of SQL queries that result
from our translations, and therefore may be missed by a general
purpose optimizer. Second, our experience with one popular, commercial
database system showed that, indeed, the optimizer did not
perform any of them. Finally, some of the optimizations described
here do not preserve semantics in general. The semantics are preserved
only in the special context of the XSLT to SQL translation,
and hence cannot be done by a general-purpose optimizer.
5.1 Nested IN queries
This optimization applies to predicate expressions of the form a
in b, where b is a node-set (subquery). It can be applied only when
the expression is present as a conjunction with other conditions. By
default our SQL generation algorithm (Section 4.4) will generate a
SQL query for the node-set b. This optimization would unnest such
a subquery. Whether or not the query can be unnested depends on
the properties of the node-set b. There are three possibilities:
1. b is a singleton set
This is the simplest case. One can safely unnest the query as
it will not change the multiplicity of the whole query. Figure
19 illustrates this case. Note that astrosign in the XPath
expression is a node-set.
To determine if the node-set is a singleton set, we use the
following test. The View Tree has information regarding
whether a node can have multiple values relative to its parent
(by specifying a '*'). If no node in the QTree for the node-set
has a '*' in the XML schema, then it must be a singleton set.
Query: Find name of drinkers which are 'Leo'
Unoptimized:
FROM drinkers
WHERE 'Leo' in (SELECT astrosign.sign
FROM astrosign
Optimized:
FROM drinkers, astrosign
Figure
19: Unnesting subquery representing singleton set
Query: The subquery Q2 in Figure 15
Unoptimized:
likes.beer
FROM likes
likes.beer IN (SELECT likes.beer
FROM likes
WHERE likes.drinker= drinkers2.name)
Optimized:
likes.beer
FROM likes, likes2
likes2.beer AND
Figure
20: Unnesting subquery having no duplicates
2. b has no duplicates
If the subquery has no duplicates, the query will evaluate to
'true' at most once for all the values in the set b. Hence one
can unnest the query without changing multiplicity. Figure
20 illustrates this case, for the example query used in the
previous section.
To determine if the node-set has no duplicates, we use the
following test. If the QTree for the nodeset has no node with
a '*' except at the leaf node, then it is a distinct set. The
intuition is that the siblings nodes (with the same parent) in
the document are unique. So if there is a '*' at an edge other
than the leaf edge, uniqueness of the leaves returned by the
query is not guaranteed.
3. b can have duplicates
When a node-set can have duplicates for example, //beers, as
discussed in Section 2.1, unnesting the query might change
the semantics. This is because the multiplicity of the resultant
query will change if the condition a in b evaluates to true
more than once. We do not unnest such a query.
5.2 Unnesting Aggregate Operators Using
GROUP-BY
This optimization unnests a subquery that uses aggregation by
using GROUP-BY at the outer level. The optimization is applied
for expressions of the form op b, where b is a node-set, and op is
an aggregate operator like sum, min, max, count, avg. The observation
is that the nested query is evaluated once for every iteration
of outer query. We get the same semantics if we unnest the query,
and GROUP-BY on all iterations of outer query. To GROUP-BY
on all iterations of outer query we add keys of all the tables in the
Query: Q1 in Figure 15.
Q2 (used below) is the optimized version shown in Figure 20
Unoptimized SQL for Q1:
SELECT drinkers.name, count(Q2)
FROM drinkers, drinkers2
Optimized SQL for Q1:
SELECT drinkers.name, count(likes2.drinker)
FROM drinkers, drinkers2, likes, likes2
likes2.beer AND
GROUP BY drinkers.name, drinkers2.name
Figure
21: Illustrating GROUP-BY Optimization
from clause of outer query to the GROUP-BY clause. The aggregate
condition is moved into the HAVING clause. In SQL, the
GROUP-BY clause must have all the fields which are selected by
the query. Hence all the fields in SELECT clause are also added to
the GROUP-BY clause.
If the outer query already uses GROUP-BY then the above optimization
can not be applied. This also implies that for a QTree this
optimization can be used only once. In our implementation we take
the simple choice of applying the optimization the very first time
we can.
Figure
21 illustrates this case, for the example query used in the
previous section.
5.3 QTree Reductions
In this optimization, we transform the QTree itself. Long paths
with unreferenced intermediate nodes are shortened, as shown in
Figure
22. This helps in eliminating some redundant joins. The
optimization is done during the binding phase. Before binding a
node, we checked to see if a short-cut path from root to that node
exists. A short-cut path is possible if no intermediate node in the
path is referred to by any other part of the QTree except the immediate
child and parent of that node on that path. If a condition
is referring to an intermediate node or if an intermediate node has
more than one child, it is incorrect to create a short-cut path. Also
the View Tree must specify that such a short-cut is possible, and
what rules to use to bind the node if a short-cut is taken. Once the
edge has been shortened and nodes bound, rest of the algorithm
proceeds as before.
We observed that this optimization also helps in making the final
SQL query less sensitive to input schema. For example, if
our beers/drinkers/bars schema had "beers" as a top level
node, instead of being as a child node of Drinkers, then the same
query would had been obtained without the reduction optimization.
6. EXPERIMENTS
In this section we try to understand how well our algorithm translates
XSLT queries to SQL queries. We implemented our algorithm
in Java using the JavaCC parser [11]. Evaluation is done using the
TPC-H benchmark [6] queries. We manually translate the benchmark
SQL queries to XSLT, and then generating SQL queries from
the XSLT queries using our algorithm. In the process we try to
gauge the strengths and limitations of our algorithm, study the impact
of optimizations described in Section 5, and observe the effects
of semantic differences between XSLT and SQL.
The TPC-H benchmark is established by the Transaction Processing
Council (TPC). It is an industry-standard Decision Support
QTree QTree after reduction
root
beers
drinkers
#name
root
beers
#name
Figure
22: QTree Reduction for query //beers/name.
test designed to measure systems capability to examine large volumes
of data and execute queries with a high degree of complex-
ity. It consists of 22 business oriented ad-hoc queries. The queries
heavily use aggregation and other sophisticated features of SQL.
The TPC-H specification points out that these queries are more
complex than typical queries.
Out of the 22 queries, 5 require creation of an intermediate table
followed by aggregation on its fields. The equivalent XSLT translation
would require writing two XSLT programs, the second one
using the results of first. While this is possible in our framework as
described in Section 4, our current implementation only supports
parameters that are bound to fragments of the input tree, or to computed
atomic values. It does not support parameters bound to a
constructed tree. For such queries we translated the SQL query for
the intermediate table, which in most cases was the major part of
the overall query, to XSLT. Another modification we made was that
aggregates on multiple fields like sum(a*b) were taken as aggregate
on a single new field sum(c).
our algorithm generated efficient SQL queries in most
cases, some of which were quite complex. A detailed table describing
the result of translation for individual queries is presented
in
Appendix
A. We present a summary of results here.
Queries with at most single aggregation at any level and
non-leaf group-by were converted to TPC-H like (same nesting
structure, same number of joins) queries.
2. In 3 queries, the only reason for inefficiency was extra joins
because of the GROUP-BY semantic mismatch between XSLT
and SQL, as discussed below.
3. 1 query was inefficient because of GROUP-BY semantic mis-match
and presence of a nested IN query.
4. 1 query used CASE statement in SQL select. We generated
a UNION of two independent SQL queries.
5. 2 queries required aggregation on the XSLT output for trans-
lation, This is not fully supported by our current implementation
but a hand generation led to similar SQL queries.
6. 5 queries required temporary tables as mentioned above. We
observed that we were able to convert the XSLT for the temporary
tables to efficient SQL.
Many queries that were not translated as efficiently as their original
SQL version required grouping by intermediate output. This is
not an artifact of our translation algorithm, but due to a language-level
mismatch between XSLT and SQL. An XSLT query with
identical result cannot be written for these queries. With appropriate
extensions to XSLT to support GROUP-BY, one can generate
queries with identical results. It is no coincidence that this issue is
mentioned in the future requirements draft for XSLT [12].
6.1 Utility of Optimizations
In this section, we describe the utility of each of the three opti-
mizations, mentioned in Section 5, in obtaining efficient queries.
1. Unnesting IN subqueries (Section 5.1): 4 queries benefitted
from this optimization. All of those were the case when the
node-set had multiple but distinct values.
2. Unnesting aggregations (Section 5.2): 21 queries had some
form of aggregation, for which this optimization was useful.
3. QTree reduction (Section 5.3): For 13 queries, QTree reduction
was useful. This optimization was frequent because
many queries would be related to a node deep in the schema,
without placing a condition on the parent nodes. With QTree
reduction, efficient queries can be generated independent of
the XML schema.
7. RELATED WORK
SilkRoute [10, 8] is an XML publishing system that defines an
XML view over a relational database, then accepts XML-QL [7]
queries over the view and translates them into SQL. The XML
view is defined by a View Tree, an abstraction that we borrowed for
our translation. Both XML-QL and SQL are declarative languages,
which makes the translation somewhat simpler than for XSLT. A
translation from XQuery to SQL is described in [14] and uses a
different approach based on an intermediate representation of SQL.
A generic technique for processing structurally recursive queries
in bulk mode is described in [1]. Instead of using a generic tech-
nique, we leveraged the information present in the XML schema.
This elimination is related to the query pruning described in [9].
SQL query block unnesting for an intermediate language has been
discussed in [13] in the context of the Starburst system.
8. CONCLUSIONS
We have described an algorithm that translates XSLT into SQL.
By necessity our system only applies to a fragment of XSLT for
which the translation is possible. The full language can express
programs which have no SQL equivalent: in such cases the program
needs to be split into smaller pieces that can be translated
into SQL.
Our translation is based on a representation of the XSLT program
as a query tree, which encodes all possible navigations of the
program through the XML tree. We described a number of optimization
techniques that greatly improve the quality of the generated
SQL queries. We also validated our system experimentally on
the TPC-H benchmark.
Acknowledgments
We are thankful to Jayant Madhavan and Pradeep Shenoy for helpful
discussions and feedback on the paper.
Dan Suciu was partially supported by the NSF CAREER Grant
0092955, a gift from Microsoft, and an Alfred P. Sloan Research
Fellowship.
9.
--R
Unql: A query language and algebra for semistructured data based on structural recursion.
XPERANTO: publishing object-relational data as XML
XQuery 1.0: An XML Query LanguageXML Path Language (XPath).
Optimal implementation of conjunctive queries in relational data bases.
XML Path Language (XPath).
A query language for XML.
Efficient evaluation of XML middle-ware queries
Optimizing regular path expressions using graph schemas.
SilkRoute: Trading Between Relations and XML.
Requirements.
Extensible rule-based query rewrite optimization in Starburst
Efficiently publishing relational data as xml documents.
On database theory and xml.
A First Course in Database Systems.
--TR
Extensible/rule based query rewrite optimization in Starburst
A first course in database systems
A query language for XML
SilkRoute
Efficient evaluation of XML middle-ware queries
On database theory and XML
Optimizing Regular Path Expressions Using Graph Schemas
Efficiently Publishing Relational Data as XML Documents
Querying XML Views of Relational Data
UnQL: a query language and algebra for semistructured data based on structural recursion
Optimal implementation of conjunctive queries in relational data bases
--CTR
Zhen Hua Liu , Agnuel Novoselsky, Efficient XSLT processing in relational database system, Proceedings of the 32nd international conference on Very large data bases, September 12-15, 2006, Seoul, Korea
Chengkai Li , Philip Bohannon , P. P. S. Narayan, Composing XSL transformations with XML publishing views, Proceedings of the ACM SIGMOD international conference on Management of data, June 09-12, 2003, San Diego, California
Ce Dong , James Bailey, Static analysis of XSLT programs, Proceedings of the fifteenth Australasian database conference, p.151-160, January 01, 2004, Dunedin, New Zealand
Rajasekar Krishnamurthy , Raghav Kaushik , Jeffrey F Naughton, Unraveling the duplicate-elimination problem in XML-to-SQL query translation, Proceedings of the 7th International Workshop on the Web and Databases: colocated with ACM SIGMOD/PODS 2004, June 17-18, 2004, Paris, France
Rajasekar Krishnamurthy , Raghav Kaushik , Jeffrey F. Naughton, Efficient XML-to-SQL query translation: where to add the intelligence?, Proceedings of the Thirtieth international conference on Very large data bases, p.144-155, August 31-September 03, 2004, Toronto, Canada
Wenfei Fan , Jeffrey Xu Yu , Hongjun Lu , Jianhua Lu , Rajeev Rastogi, Query translation from XPATH to SQL in the presence of recursive DTDs, Proceedings of the 31st international conference on Very large data bases, August 30-September 02, 2005, Trondheim, Norway
Mustafa Atay , Artem Chebotko , Dapeng Liu , Shiyong Lu , Farshad Fotouhi, Efficient schema-based XML-to-Relational data mapping, Information Systems, v.32 n.3, p.458-476, May, 2007
Jixue Liu , Millist Vincent, Querying relational databases through XSLT, Data & Knowledge Engineering, v.48 n.1, p.103-128, January 2004
Artem Chebotko , Mustafa Atay , Shiyong Lu , Farshad Fotouhi, XML subtree reconstruction from relational storage of XML documents, Data & Knowledge Engineering, v.62 n.2, p.199-218, August, 2007
Sven Groppe , Stefan Bttcher , Georg Birkenheuer , Andr Hing, Reformulating XPath queries and XSLT queries on XSLT views, Data & Knowledge Engineering, v.57 n.1, p.64-110, April 2006
James Bailey, Transformation and reaction rules for data on the web, Proceedings of the sixteenth Australasian database conference, p.17-23, January 01, 2005, Newcastle, Australia | SQL;XSLT;query optimization;translation;XML;virtual view |
512165 | Fuzzy topological predicates, their properties, and their integration into query languages. | For a long time topological relationships between spatial objects have been a main focus of research on spatial data handling and reasoning. They have especially been integrated into query languages of spatial database systems and geographical information systems. One of their fundamental features is that they operate on spatial objects with precisely defined, sharp boundaries. But in many geometric and geographic applications there is a need to model spatial phenomena and their topological relationships rather through vague or fuzzy concepts due to indeterminate boundaries. This paper presents a model of fuzzy regions and focuses on the definition of topological predicates between them. Moreover, some properties of these predicates are shown, and we demonstrate how the predicates can be integrated into a query language. | INTRODUCTION
Representing, storing, quering, and manipulating spatial information
is important for many non-standard database applications.
Specialized systems like geographical information systems (GIS),
spatial database systems, and image database systems to some extent
provide the needed technology to support these applications.
For these systems the development of formal models for spatial objects
and for topological relationships between these objects is a
topic of great importance and interest, since these models exert a
great influence on the efficiency of spatial systems and on the expressiveness
of spatial query languages.
In the past, a number of data models and query languages for
spatial objects with precisely defined boundaries, so-called crisp
spatial objects, have been proposed with the aim of formulating
and processing spatial queries in databases (e.g., [8, 9]). Spatial
data types (see [9] for a survey) like point, line, or region are the
central concept of these approaches. They provide fundamental abstractions
for modeling the structure of geometric entities, their re-
lationships, properties, and operations. Topological predicates [6]
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
between crisp objects have been studied intensively in disciplines
like spatial analysis, spatial reasoning, and artificial intelligence.
Increasingly, researchers are beginning to realize that the current
mapping of spatial phenomena of the real world to exclusively crisp
spatial objects is an insufficient abstraction process for many spatial
applications and that the feature of spatial vagueness is inherent to
many geographic data [3]. Moreover, there is a general consensus
that applications based on this kind of indeterminate spatial data are
not covered by current information systems. In this paper we focus
on a special kind of spatial vagueness called spatial fuzziness.
Fuzziness captures the property of many spatial objects in reality
which do not have sharp boundaries or whose boundaries cannot
be precisely determined. Examples are natural, social, or cultural
phenomena like land features with continuously changing properties
(such as population density, soil quality, vegetation, pollution,
temperature, air pressure), oceans, deserts, English speaking areas,
or mountains and valleys. The transition between a valley and a
mountain usually cannot be exactly ascertained so that the two spatial
objects "valley" and "mountain" cannot be precisely separated
and defined in a crisp way. We will designate this kind of entities
as fuzzy spatial objects. In the GIS community, a number of models
based on fuzzy sets (e.g., [1, 5, 13, 14]) have been proposed,
but these are not suitable for an integration into a database system,
because they do not provide data types for fuzzy spatial data. The
author himself has started to work on this topic and to design a system
of fuzzy spatial data types including operations and predicates
[10, 11] that can be embedded into a DBMS.
The goal of this paper is to give a definition of topological predicates
on fuzzy regions, which is currently an open problem, and
to discuss some properties of these predicates. Besides, we show
the integration of these predicates into a query language. Section 2
presents a formal model of very generally defined crisp regions and
sketches the design of a well known collection of topological predicates
on crisp regions. In Section 3 we employ the concept of a
crisp region for a definition of fuzzy regions. Fuzzy regions are described
as collections of so-called crisp a-level regions. In practice,
this enables us to transfer the whole formal framework and later all
the well known implementation methods available for crisp regions
to fuzzy regions. In Section 4 we present an approach for designing
topological predicates between fuzzy regions that is based on fuzzy
set theory. Section 5 discusses some properties of these predicates,
and Section 6 deals with their integration into a query language.
Finally, Section 7 draws some conclusions.
2. CRISP REGIONS AND TOPOLOGICAL
PREDICATES
Our definition of crisp regions is based on point set theory
and point set topology [7]. Regions are embedded into the two-
Figure
1: Example of a (complex) region object.
dimensional Euclidean space IR 2 and are thus point sets. Unfortu-
nately, the use of pure point set theory for their definition causes
problems. If regions are modeled as arbitrary point sets, they can
suffer from undesired geometric anomalies. These degeneracies relate
to isolated or dangling line and point features as well as missing
lines and points in the form of cuts and punctures. A process called
regularization avoids these anomalies.
We assume that the reader is familiar with some needed, well-known
concepts of point set topology like topological space, open
set, closed set, interior, closure, exterior, and boundary. The concept
of regularity defines a point set S as regular closed if
We define a regularization function reg c which associates a set S
with its corresponding regular closed set as reg c (S) := S ffi . The
effect of the interior operation is to eliminate dangling points, dangling
lines, and boundary parts. The effect of the closure operation
is to eliminate cuts and punctures by appropriately supplementing
points and to add the boundary. We are now already able to give a
general definition of a type for complex crisp regions:
is bounded and regular closedg
In fact, this very "structureless" and implicit definition models (com-
plex) crisp regions possibly consisting of several components and
possibly having holes (Figure 1). As a special case, a simple region
is a region that does not have holes and does not consist of multiple
components.
An important approach to designing topological predicates on
simple crisp regions rests on the well-known 9-intersection model
[6] from which a complete collection of mutually exclusive topological
relationships can be derived. The model is based on the
nine possible intersections of boundary (-A), interior
of a spatial object A with the corresponding components
of another object B. Each intersection is tested with regard to
the topologically invariant criteria of emptiness and non-emptiness.
different configurations are possible from which only a
certain subset makes sense depending on the combination of spatial
objects just considered. For two simple regions eight meaningful
configurations have been identified which lead to the eight
predicates of the set T cr = fdisjoint, meet, overlap, equal, inside,
contains, covers, coveredByg. Each predicate is associated with a
unique combination of nine intersections so that all predicates are
mutually exclusive and complete with regard to the topologically
invariant criteria of emptiness and non-emptiness. We explain the
meaning of these predicates only informally here. Two crisp regions
A and B are disjoint if their point sets are disjoint. They meet
if their boundaries share points and their interiors are disjoint. They
are equal if both their boundaries and their interiors coincide. A is
contains A is a proper subset of B and if their
boundaries do not touch. A is covered by B (B covers A is a
proper subset of B and if their boundaries touch. Otherwise, A and
overlap. Generalizations to topological predicates for complex
crisp regions leading to the same (clustered) collection T cr of predicates
have been given in [12, 2]. We will base our definition of
topological predicates for fuzzy regions on these topological predicates
for complex crisp regions.
3. MODELING FUZZY REGIONS
A "structureless" definition of fuzzy regions in the sense that
only "flat" point sets are considered and no structural information
is revealed has been given in [10]. For our purposes we deploy a
"semantically richer" characterization and approximation of fuzzy
regions and define them in terms of special, nested a-cuts. A fuzzy
region -
F is described as a collection of crisp a-level regions 1 [10],
F , which is the level set of
F , and where
F a
We call F a an a-level region. Clearly, F a is a complex crisp region
whose boundary is defined by all points with membership value a i .
In particular, F a i can have holes. The kernel of -
F is then equal
to F 1:0 . An essential property of the a-level regions of a fuzzy
region is that they are nested, i.e., if we select membership values
We here describe the finite, discrete case. If L -
F is infinite, then
there are obviously infinitely many a-level regions which can only
be finitely represented within this view if we make a finite selection
of a-values. In the discrete case, if jL -
if we take
all these occurring membership values of a fuzzy region, we can
even replace "'" by "ae" in the inclusion relationships above. This
follows from the fact that for any p 2F a \Gamma F a
F (p) =a i . For the continuous case, we get -
As a result, we obtain:
A fuzzy region is a (possibly infinite) set of a-level re-
gions, i.e., -
F jg with a i ? a
F a ' F a
4. TOPOLOGICAL PREDICATES ON
FUZZY REGIONS
In this section we introduce a concept of topological predicates
for fuzzy regions. To clarify the nature of a fuzzy (topological)
predicate, we can draw an analogy between the transition of a crisp
set to a fuzzy set and the transition of a crisp predicate to a fuzzy
predicate. In a similar way as we can generalize the characteristic
function f0;1g to the membership function -
[0; 1], we can generalize a (binary) predicate
a (binary) fuzzy predicate
Hence, the value of
a fuzzy predicate expresses the degree to which the predicate holds
for its operand objects. In case of topological predicates, in this
paper the sets X and Y are both equal to the type region, and the set
f0;1g is equal to the type bool. The sets -
Y are both equal to
the type fregion for fuzzy regions, and for the set [0; 1] we need a
type fbool for fuzzy booleans.
For the definition of fuzzy topological predicates, we take the
view of a fuzzy region as a collection of a-level regions (Section 3),
which are complex crisp regions (Section 2), and assume the set T cr
of topological predicates on these regions. This preparatory work
now enables us to reduce topological predicates on fuzzy regions
to topological predicates on collections of crisp regions.
The approach presented in this section is generic in the sense
that any meaningful collection of topological predicates on crisp
regions could be the basis for our definition of a collection of topological
predicates on fuzzy regions. If the former collection additionally
fulfils the properties of completeness and mutual exclusion
Other structured characterizations given in [10] describe fuzzy regions
as multi-component objects, as three-part crisp regions, and
as a-partitions.
(which is the case for T cr ), the latter collection automatically inherits
these properties.
The open question now is how to compute the topological relationships
of two collections of a-level regions, each collection
describing a fuzzy region. We use the concept of basic probability
assignment [4] for this purpose. A basic probability assignment
m(F a i ) can be associated with each a-level region F a and can be
interpreted as the probability that F a i is the "true" representative of
F . It is defined as
m is built from the differences of successive a i 's. It is easy to see
that the telescoping sum - n
be the value that represents a (binary) property p f
between two fuzzy regions F and G. For reasons of simplicity,
we assume that L -
G =: L. Otherwise, it is not difficult to
F and L -
G by forming their union L := L -
G
and by reordering and renumbering all a-levels. Based on the work
in [4] property p f of F and G can be determined as the summation
of weighted predicates by
cr
cr yields the value of the corresponding property
cr for two crisp a-level regions F a i and Ga j . This formula
is equivalent to
cr
If p f is a topological predicate of T
equal between two fuzzy
regions, we can compute the degree of the corresponding relationship
with the aid of the pertaining crisp topological predicate p cr 2
cr . The value of p cr Once
this value has been determined for all combinations of a-level regions
from F and G, the aggregated value of the topological predicate
can be computed as shown above. The more fine-grained
the level set L for the fuzzy regions F and G is, the more
precisely the fuzziness of topological predicates can be determined.
It remains to show that 0 - p f really
a fuzzy predicate. Since a
since p cr holds. We
can show the other inequality by determining an upper bound for
cr
cr
(since
Hence,
An alternative definition of fuzzy topological predicates, which
pursues a similar strategy like the one discussed so far, is based
on the topological predicates p srh on simple regions with holes
(but without multiple components). If F a is an a-level region, let
us denote its connected components by F a Similarly,
we denote the connected components of an a-level region Ga j by
Ga . We can then define a topological predicate p 0
f as
f
It is obvious that p 0
holds since all factors have a value
greater than or equal to 0. We can also show that p 0
the following transformations:
f
Hence,
holds. As a rule, the predicates p f and p 0
f do
not yield the same results. Assume that F a i and Ga fulfil a predicate
cr . This fact contributes once to the summation process
for p f . But it does not take into account that possibly several
faces F a ik (at least one) of F a i satisfy the corresponding predicate
least one) of Ga . This fact
contributes several times (at most f i \Delta to the summation process
for
f . Hence, the evaluation process for p 0
f is more fine-grained
than for p f .
Both generic predicate definitions reveal their quantitative
character. If the predicate p cr and the predicate
respectively, is never fulfilled, the predicate
respectively, yields false. The more a-level
regions of F and G (simple regions with holes of F a and Ga j ) fulfil
the predicate p cr the more the validity
of the predicate p f (p 0
increases. The maximum is reached if all
topological predicates are satisfied.
5. PROPERTIES
An interesting issue relates to the effect of the number of a-level
regions on the computation results for p f (F;G). What can we expect
if we supplement L with an additional membership value? Can
we make a general statement saying that the value for
then always increase or decrease or stagnate? To answer this ques-
tion, let p n
f denote the predicate p f (F;G) if for its computation
contains n labels except for a We now extend L
by an additional label a rearranging
the indices of the membership values. Without loss of generality
we assume an l 2 ng such that
a enables us to compute the difference
f (F;G) and to investigate whether this difference
is always greater than, less than, or equal to 0. The computation
is simplified by the observation that all addends (a
a cr
f
f
neutralize each other. That is, for p n+1
disjoint cr disjoint cr cr cr
overlap cr cr
contains cr contains cr covers cr cr
Table
1: Evaluation of the predicate comparisons of the first class.
we have only to consider those addends having factors with an index
equal to k. For p n
f we have only to consider addends
having factors with
we obtain:
(p cr cr
cr
cr
(p cr cr
(p cr cr
(p cr cr
cr
(p cr cr
The first line computes the sum of all addends having the factor
a l \Gamma a k . The fourth line does the same for all addends having the
factor a k \Gamma a l+1 . Unfortunately, these sums include addends having
the factor a l \Gamma a l+1 so that these addends have to be subtracted
(second and fifth line). The third, sixth, and seventh line insert
the correct addends. The eighth line subtracts all those addends of
having the factor a l \Gamma a l+1 . The whole expression can be
restructured as follows:
(p cr cr
(p cr cr
(p cr cr
cr
(p cr cr
cr
\Theta n
cr cr
cr cr
cr cr
cr cr
cr cr
cr cr
For a comparison of D with 0 we can observe that the values
of the factors a k \Gamma a l+1 , a
a l+1 are all greater than 0. Hence, the result only depends
on the predicate values. Analyzing the six differences of predi-
cates, we can group them into two classes. The first class contains
the differences p cr cr cr
cr cr cr
structure is p cr . The second class
contains the remaining differences p cr cr
cr cr cr cr
Their common structure is p cr
Table
1 shows the result of comparing the predicates involved
in the differences of the first class. For each predicate combination
cr we first set p cr (F;G) and then
cr both to 1 (true) and 0 (false) (written in bold font). Afterwards
we determine the result of the respective other predicates
and thus obtain four pairs of values. For instance, if disjoint cr
is equal to 0, disjoint cr either equal to 0 or to 1 (in-
dicated by the expression "0 j 1"). Next, we assign the correct
comparison operator =, -, or - reflecting the relationship
of a pair of values to each of the four cases. In the end, we
form the combination of the four comparison operators and obtain
the relationship between p cr (F;G) and p cr G). The symbol
Q indicates that the equality or inequality of the two predicates
cannot be generally decided. The only solution here is to compute
it for each single case. We have omitted the comparisons
for inside and coveredBy, since they are inverse to contains and
covers, respectively. That is, inside cr
coveredBy cr cr
For an investigation of the second class we do not have to consider
the predicates disjoint cr , meet cr , overlap cr , and equal cr ; they
are symmetric in their arguments, that is, p cr cr (G;F).
We also need not consider the predicates covers cr and coveredBy cr ,
since already their predicate combinations in the first class cannot
be decided generally. Thus, only the predicates contains cr and the
inverse inside cr remain. For contains cr we obtain Table 2.
contains cr contains cr
Table
2: Evaluation of the contains comparison of the second
class.
In summary, we obtain a rather negative result. Since the differences
for meet cr , overlap cr , equal cr , covers cr , and coveredBy cr cannot
be decided in general, no general statement can be made about
the difference between p n+1
f
cates. The computation of this difference also fails for contains cr
and inside cr . The problem is that the behavior of both predicates
in the first and second class is opposite to each other. That is, in
the first class contains cr contains cr and in the
second class contains cr contains cr hold. For the
computation of D these two differences are added, and we cannot
generally decide whether the result is less than, greater than, or
equal to 0. Hence, all these predicates do not satisfy some kind of
"monotonicity criterion". A positive exception is only the predicate
disjoint cr for which we obtain disjoint n+1
6. QUERYING WITH FUZZY TOPOLOGICAL
PREDICATES
In this section we demonstrate how fuzzy topological predicates
can be integrated into an SQL-like spatial query language. The fact
that the membership degree yielded by a fuzzy topological predicate
is a computationally determined quantification between 0 and
1, i.e., a fuzzy boolean, impedes a direct integration. First, it is not
very comfortable and user-friendly to use such a numeric value in
a query. Second, spatial selections and spatial joins expect crisp
predicates as filter conditions and are not able to cope with fuzzy
predicates.
As a solution, we propose to embed adequate qualitative linguistic
descriptions of nuances of topological relationships as appropriate
interpretations of the membership values into a spatial
query language. For instance, depending on the membership value
yielded by the predicate inside f , we could distinguish between not
inside, a little bit inside, somewhat inside, slightly inside, quite in-
side, mostly inside, nearly completely inside, and completely inside.
These fuzzy linguistic terms can then be incorporated into spatial
queries together with the fuzzy predicates they modify. We call
these terms fuzzy quantifiers, because their semantics lies between
the universal quantifier for all and the existential quantifier there
exists. It is conceivable that a fuzzy quantifier is either predefined
and anchored in the query language, or user-defined.
We know that a fuzzy topological predicate p f is defined as
fregion \Theta fregion ! [0; 1]. The idea is now to represent each fuzzy
quantifier g 2G= fnot, a little bit, somewhat, slightly, quite, mostly,
nearly completely, completelyg by an appropriate fuzzy set with a
let gp f be a quantified fuzzy predicate (like somewhat inside with
somewhat and Then we can define:
That is, only for those values of p f (F;G) for which - g yields 1, the
predicate gp f is true. A membership function that fulfils this quite
strict condition is, for instance, the crisp partition of [0; 1] into jGj
disjoint or adjacent intervals completely covering [0; 1] and the assignment
of each interval to a fuzzy quantifier. If an interval [a; b]
is assigned to a fuzzy quantifier g, the intended meaning is that
otherwise. For exam-
ple, we could select the intervals [0:0;0:02] for not, [0:02;0:05] for
a little bit, [0:05;0:2] for somewhat, [0:2;0:5] for slightly, [0:5;0:8]
for quite, [0:8;0:95] for mostly, [0:95;0:98] for nearly completely,
and [0:98;1:00] for completely.
Alternative membership functions are shown by the fuzzy sets
in
Figure
2. While we can always find a fitting fuzzy quantifier
for the partition due to the complete coverage of the interval [0; 1],
this is not necessarily the case here. Each fuzzy quantifier is associated
with a fuzzy number having a trapezoidal-shaped membership
function. The transition between two consecutive fuzzy
quantifiers is smooth and here modeled by linear functions. Within
a fuzzy transition area - g yields a value less than 1 which makes
the predicate gp f false. Examples in Figure 2 can be found at 0:2,
0:5, or 0:8. Each fuzzy number associated with a fuzzy quantifier
can be represented as a quadruple (a;b;c;d) where the membership
function starts at (a;0), linearly increases up to (b;1), remains
constant up to (c;1), and linearly decreases up to (d;0). Figure 2
assigns (0:0;0:0;0:0;0:02) to not, (0:01;0:02;0;03;0:08) to a little
bit, (0:03;0:08;0:15;0:25) to somewhat, (0:15;0:25;0:45;0:55)
to slightly, (0:45;0:55;0:75;0:85) to quite, (0:75;0:85;0:92;0:96)
to mostly, (0:92;0:96;0:97;0:99) to nearly completely, and
(0:97;1:0;1:0;1:0) to completely.
So far, the predicate gp f is only true if - g yields 1. We can relax
this strict condition by defining:
In a crisp spatial database system this gives us the chance also to
take the transition zones into account and to let them make the predicate
evaluating a fuzzy spatial selection or join in
a fuzzy spatial database system, we can even set up a weighted
ranking of database objects satisfying the predicate gp f at all and
being ordered by descending membership degree 1 - g ? 0.
A special, optional fuzzy quantifier, denoted by at all, represents
the existential quantifier and checks whether a predicate p f can be
fulfilled to any extent. An example query is: "Do regions A and B
(at all) overlap?" With this quantifier we can determine whether
The following few example queries demonstrate how fuzzy spatial
data types and quantified fuzzy topological predicates can be
integrated into an SQL-like spatial query language. It is not our objective
to give a full description of a specific language. We assume
a relational data model where tables may contain fuzzy regions as
attribute values.
What we need first is a mechanism to declare user-defined fuzzy
quantifiers and to activate predefined or user-defined fuzzy quanti-
fiers. This mechanism should allow to specify trapezoidal-shaped
and triangular-shaped membership functions as well as crisp parti-
tions. In general, this means to define a classification, which could
be expressed in the following way:
create classification fq
(not (0:00;0:00;0:00;0:02);
a little bit (0:01;0:02;0;03;0:08);
somewhat (0:03;0:08;0:15;0:25);
slightly (0:15;0:25;0:45;0:55);
quite (0:45;0:55;0:75;0:85);
mostly (0:75;0:85;0:92;0:96);
nearly completely (0:92;0:96;0:97;0:99);
completely (0:97;1:0;1:0;1:0))
slightly quite mostly
somewhat
a little bit
not
completely
nearly completely
Figure
2: Membership functions for fuzzy quantifiers.
Such a classification could then be activated by
set classification fq
Assuming that we have a relation pollution, which stores among
other things the blurred geometry of polluted zones as fuzzy re-
gions, and a relation areas, which keeps information about the use
of land areas and which stores their vague spatial extent as fuzzy
regions. A query could be to find out all inhabited areas where people
are rather endangered by pollution. This can be formulated in
an SQL-like style as (we here use infix notation for the predicates):
select areas.name
from pollution, areas
inhabited and
pollution.region quite overlaps areas.region
This query and the following two ones represent fuzzy spatial joins.
Another query could ask for those inhabited areas lying almost
entirely in polluted areas:
select areas.name
from pollution, areas
inhabited and
areas.region nearly completely inside
pollution.region
Assume that we are given living spaces of different animal species
in a relation animals and that their vague extent is represented as a
fuzzy region. Then we can search for pairs of species which share
a common living space to some degree:
select A.name, B.name
from animals A, animals B
where A.region at all overlaps B.region
As a last example, we can ask for animals that usually live on land
and seldom enter the water or for species that never leave their land
area (the built-in aggregation function sum is applied to a set of
fuzzy regions and aggregates this set by repeated application of
fuzzy geometric union):
select name
from animals
where (select sum(region) from areas)
nearly completely covers or
completely covers region
7. CONCLUSIONS
We have presented a definition of topological predicates on fuzzy
regions. We have shown that all these predicates with one exception
do not fulfil some kind of "monotonicity criterion" which
documents the independence of topological and metric properties.
Moreover, we have sketched the integration of these predicates into
fuzzy spatial query languages. For that purpose, fuzzy quantifiers
are used that can be incorporated into spatial queries.
8.
--R
Fuzzy Set Theoretic Approaches for Handling Imprecision in Spatial Analysis.
Topological Relationships of Complex Points and Complex Regions.
Geographic Objects with Indeterminate Boundaries.
A General Approach to Parameter Evaluation in Fuzzy Digital Pictures.
Qualitative Spatial Reasoning: A Semi-Quantitative Approach Using Fuzzy Logic
A Topological Data Model for Spatial Databases.
Point Set Topology.
An Introduction to Spatial Database Systems.
Spatial Data Types for Database Systems - Finite Resolution Geometry for Geographic Information Systems
Uncertainty Management for Spatial Data in Databases: Fuzzy Spatial Data Types.
Finite Resolution Crisp and Fuzzy Spatial Objects.
A Design of Topological Predicates for Complex Crisp and Fuzzy Regions.
Incorporating Fuzzy Logic Methodologies into GIS Operations.
--TR
A general approach to parameter evaluation in fuzzy digital pictures
A topological data model for spatial databases
Qualitative spatial reasoning: a semi-quantitative approach using fuzzy logic
An introduction to spatial database systems
Uncertainty Management for Spatial Data in Databases
A Design of Topological Predicates for Complex Crisp and Fuzzy Regions
Topological Relationships of Complex Points and Complex Regions | fuzzy region;fuzzy spatial query language |
512181 | Improving min/max aggregation over spatial objects. | We examine the problem of computing MIN/MAX aggregates over a collection of spatial objects. Each spatial object is associated with a weight (value), for example, the average temperature or rainfall over the area covered by the object. Given a query rectangle, the MIN/MAX problem computes the minimum/maximum weight among all objects intersecting the query rectangle. Traditionally such queries have been performed as range search queries. Assuming that the objects are indexed by a spatial access method, the MIN/MAX is computed as objects are retrieved. This requires effort proportional to the number of objects intersecting the query interval, which may be large. A better approach is to maintain aggregate information among the index nodes of the spatial access method; then various index paths can be eliminated during the range search. In this paper we propose four optimizations that further improve the performance of MIN/MAX queries. Our experiments show that the proposed optimizations offer drastic performance improvement over previous approaches. Moreover, as a by-product of this work we present an optimized version of the MSB-tree, an index that has been proposed for the MIN/MAX computation over 1-dimensional interval objects. | Introduction
Computing aggregates over objects with non-zero extents has received a lot of attention recently
([YW01, ZMT+01, PKZ+01, ZTG+01]). Formally, the general box-aggregation problem is dened
as: \given n weighted rectangular objects and a query rectangle r in the d-dimensional space, nd
the aggregate weight over all objects that intersect r". In this paper we examine the problem of
computing the MIN and MAX aggregates (box-max) over spatial objects. Each object is represented
by its Minimum Bounding Rectangle (MBR) and is associated with a weight (value) that we want to
aggregate. A rectangle is also called a box and thus the name \box-aggregation". Since computing
the MIN is symmetric, in the following discussion we focus on MAX aggregation. Moreover, we
assume that objects are indexed by a spatial access method (SAM) like the R-tree or its variants
Computer Science Department, University of California, Riverside, CA 92521. donghui@cs.ucr.edu
y Computer Science Department, University of California, Riverside, CA 92521, tsotras@cs.ucr.edu. This work has
ben supported by NSF grants IIS-9907477, EIA-9983445 and by the Department of Defense.
[Gut84, BKS+90, SRF87].
The box-max problem has many real-life applications. For example, consider a database that keeps
track of rainfall over geographic areas. Each area is represented by a 2-dimensional rectangle and a
query is: \nd the max precipitation in the Los Angeles district ". The database may also
keep track of the time intervals of each rainfall, in which case we store 3-dimensional rectangles (one
dimension representing the rainfall duration). A box-max query is then: \nd the max precipitation
in the Los Angeles district during the interval [1999-2000] ".
There have been three approaches towards solving box-aggregation queries. The straightforward
approach is to simply perform a range search on the SAM that indexes the objects, and compute the
aggregation as objects are retrieved. While readily available, this solution requires eort proportional
to the number of objects that intersect the query rectangle, which can be large. Performance is
improved if the SAM maintains additional aggregate information ([JL98, LM01, PKZ+01]). For
example, the Aggregation R-tree (aR-tree) ([PKZ+01]) is an R-tree that stores the aggregate value
of each sub-tree in the index record pointing to this sub-tree. While traversing the index, the
aggregation information eliminates various search paths, thus improving query performance.
The third approach uses a specialized aggregate index built explicitly for computing the aggregate
in question [YW01, ZMT+01, ZTG+01]. This index maintains the aggregate incrementally. While
it is an additional index, it is usually rather compact (since it does not index the actual data but in
practice a much smaller representative set) and provides the best query performance.
The main contributions of this paper are:
We propose four optimizations for improving the MIN/MAX aggregation. One of our optimizations
(the k-max) attempts to eliminate more paths from the index traversal when the aggregate
is computed. As such, it can be used either on the SAM that indexes the objects, or, on a specialized
aggregate index. The other optimizations (union, box-elimination and area-reduction)
eliminate or resize object MBRs when they do not aect the MIN/MAX computation. Thus
they apply only to specialized MIN/MAX aggregate indices.
We present a specialized aggregate index, the Min/Max R-tree (MR-tree) that uses all four
optimizations. We further present an experimental comparison among a plain R-tree, the aR-
tree, the aR-tree with the k-max optimization and the MR-tree. Our experiments show drastic
improvements when the proposed optimizations are used.
As a by-product of this research, we discuss how a specialized aggregate index, the MSB-
tree [YW00], can be optimized by applying the box-elimination optimization. The MSB-tree
e-ciently solves the MIN/MAX problem for the special case of one-dimensional interval objects.
Its original version needs to be frequently reconstructed.
The rest of the paper is organized as follows. Section 2 discusses related previous work. Section
3 identies the special characteristics of the box-max problem and presents the optimization tech-
niques. Section 4 summarizes the MR-tree while section 5 presents the results from our experimental
comparisons. Section 6 applies one proposed optimization technique to on the MSB-tree. Finally,
section 7 provides conclusions and problems for further research.
Related Work
There are two variations of the box-aggregation problem, depending on whether objects have zero
extent (point objects) or not. Aggregation over point objects is a special case of the orthogonal
range searching which has received vast attention in the past 20 years in the eld of computational
geometry. For more details, we refer to the surveys [Meh84, PS85, Mat94, AE98]. Most of the
solutions utilize some variation of the range-tree ([Ben80]) following the multi-dimensional divide-
and-conquer technique. In the database eld, [JL98] proposed the R
a tree which stores aggregated
results in the index. [Aok99] proposed to selectively traverse a multi-dimensional index for the
problem of selectivity estimation (corresponding to the COUNT aggregate). [LM01] proposed the
Multi-Resolution Aggregate Tree (MRA-tree) which augments the index records of an R-tree with
aggregate information for all the points in the record's sub-tree. The MRA-tree also uses selective
traversal to provide an estimate aggregation result. The result can be progressively rened. [JL99]
proposes a performance model to estimate the performance of index structures with and without
aggregated data.
A special case of the point aggregation problem is the work on data cube aggregation for OLAP
applications. A data cube ([GBL+96]) can be thought of as a multi-dimensional array. [RKR97]
proposed the cubetree as a storage abstraction of the cube and realized it using packed R-trees to
e-ciently support cube and group-by aggregations. [HAM+97] addressed both the box-max and the
box-sum (for SUM, COUNT and AVG) queries over data cubes. The solution to the box-max query
was based on storing precomputed max values in a balanced hierarchical structure. This solution was
further improved by [HAM+97b]. The solution to the box-sum query was based on pre-computing
the prex sum, which is the aggregate over a range covering the smallest cell of the array. This
solution was improved by [GAE+99, CI99, GAE00]. Specically, [GAE00] proposed the dynamic
data cube which has the best update cost. Recently, [CCL+01] proposes the dynamic update cube
which further improves the update cost to O(log k u ) where k u is the number of changed array cells.
For aggregations over objects with non-zero extents, [YW01] presented the SB-tree which solved the
box-sum query in the special case of one-dimensional time intervals. The SB-tree was extended to the
Multi-version SB-tree (MVSB-tree) in [ZMT+01] to e-ciently support temporal box-sum aggregation
queries with key-range predicates. [ZTG+01] addressed box-sum aggregation over spatiotemporal
objects. Furthermore, [YW00] presented the MSB-tree for the box-max query over one-dimensional
interval objects. The aR-tree ([PKZ+01]) was originally proposed to index the spatial dimension in
a spatial data warehouse environment, but can be used to solve both the box-sum and the box-max
queries over spatial data with non-zero extent. The aR-tree is an R-tree which stores for each index
record the aggregate value for all objects in its sub-tree. Since the aR-tree was built for the support of
both box-sum and box-max queries, it is not fully optimized towards box-max queries. The aR-tree
is used here as a starting point for our optimizations and it is included in our experimentation for
comparison purposes. Also related is [AS90] which answers window queries on top of the pyramid
data structure. Aggregations are used for the existence/non-existence of image features and the
visibility in terrain data. Last, in the spatial-temporal data warehouse environment, [PKZ+01b]
proposed the aggregate R-B-tree (aRB-tree) which uses an R-tree to index the spatial dimension and
each record r in the R-tree has a pointer to a B-tree which keeps historical data about r.
3 The Proposed Optimizations
In this paper we focus our discussion on the MAX aggregate. The discussion for the MIN aggregates
is symmetric and is omitted. Our goal is to solve the box-max problem, where we have a set of
objects, each of which has a box and a value; given a query box q, we want to nd the maximum
value of all objects intersecting q. Assume the objects are indexed by a tree-like structure (e.g. the
R-tree) where the objects are stored in leaf nodes and where the MBR of an internal node contains
the MBRs of all its children. Using such an index, a box-max query can be answered by performing
a range search. In this section we propose four optimizations that improve the performance.
We rst introduce some notations. An index/leaf record is an entry in an internal/leaf node of the
tree. Given an leaf record r, let r:box and r:value denote the MBR and the value of the record,
respectively. Given an index record r, let r:box denote its MBR, r:value denote the maximum value
of all records in subtree(r) and r:child denote the child page pointed by r.
3.1 The k-max optimization
The aR-tree is an R-tree where each index record stores the maximum value of all leaf records in
the sub-tree. If a query box contains the MBR of an index record, the value stored at the record
contributes to the query answer and the examination of the sub-tree is omitted. However, note that
at higher levels of the aR-tree, the index records have large MBRs. So the box-max query is not
likely to stop at higher levels of the aR-tree. The k-max optimization is an extension that keeps
constant number of leaf objects along with each index record such that even if the query box does
not contain the MBR of an index record, the examination of the sub-tree may be omitted.
The k-max optimization Along with each index record r, store the k objects (for a small constant
which are in subtree(r) and have the largest values among the objects in the subtree. When
examining record r during a box-max query, if the query box intersects with any of the k max-value
objects in r, the examination of subtree(r) is omitted.
Clearly, the k-max optimization allows for more paths to be omitted during the index traversal.
However, the benet of k-max on the query performance is not provided for free. The overall space
is increased (since each node stores more information) as well as the update time (eort is needed to
maintain the k objects). Hence in practice the constant k should be kept small. In our experiments,
we found large improvement in query time even for a small
As pointed out, the next three optimizations apply for an index explicitly maintained for the
MIN/MAX aggregation (to avoid confusion we call such an explicit index the MIN/MAX index).
Since the MIN/MAX problem is not incrementally maintainable when tuples are deleted from the
database [YW01], the following discussion assumes an append-only database (i.e., spatial objects are
inserted in the database but never deleted). When a spatial object o with MBR o:box and value
o:value is inserted in the database, o:box accompanied by o:value is inserted as a leaf record in the
MIN/MAX index as well. However, as we will describe, some of these insertions may not be applied
to the MIN/MAX index, or may cause existing MBRs to be deleted or altered from the MIN/MAX
index. As such, we can use an R -tree to implement the MIN/MAX index. The result after applying
all four optimizations will be the MR-tree.
3.2 The box-elimination optimization
Consider two leaf records There is
no need to maintain in the MIN/MAX index since it will not contribute to any MAX query. We
thus say that becomes obsolete due to
The box-elimination optimization If during the insertion of an object o, a (leaf or index) record
r is found such that o:box contains r:box and o:value r:value, remove r from the MIN/MAX index;
if r is an index record, remove subtree(r) as well.
The above optimization will reduce the size of the MIN/MAX index, since sub-trees may be removed
during an insertion. There is a tradeo between the time to update the MIN/MAX index and the
overall space occupied by this index. A newly inserted object may make obsolete more than one
existing records which are on dierent paths from the root to leaves. The MIN/MAX index can be
made very compact if all these obsolete records (and their sub-trees) are removed. However, this
may result in expensive update processing. If the update is to be kept fast, we can choose to remove
only the obsolete records met along the insertion path (which is a single path since we use a R -tree
to implement the MIN/MAX index). The complexity of the insertion algorithm remains O(log (n))
where n is the number of MBRs in the MIN/MAX index (which in practice is much smaller than the
total number of spatial objects in the collection). Another choice is to choose c paths and remove the
obsolete records met along these paths, where c is a constant. The space occupied by the obsolete
sub-trees can be re-used.
3.3 The union optimization
While the box-elimination optimization focuses on making obsolete existing records in the index, the
union optimization focuses on making obsolete objects before they are inserted in the MIN/MAX
tree. First we note that the MBR of an object should not be inserted in the MIN/MAX index if
there is an existing leaf object in the index whose MBR contains it and has a larger value. Such
an insertion can be safely ignored for the purposes of MIN/MAX computation. To fully implement
this test, all the paths that may contain this object have to be checked; at worst this may check all
leaf objects in the MIN/MAX tree. A better heuristic is to use the k max-value MBRs. If the new
object is contained by any of the k max-value MBRs found along the index nodes in the insertion
path, and has a smaller value, then there is no need to perform the insertion.
Moreover, we observe that even if the MBR of an object to be inserted is not fully contained by any
existing leaf object, we still might safely ignore it. This is the case when the new object's MBR is
contained in the union of MBRs of several existing objects. As illustrated in gure 1, the shadowed
box represents the new object to be inserted and the other two rectangles represent two objects
already in the MIN/MAX index. Since the new object is contained in the union of the two existing
objects with a smaller value, its insertion can be safely ignored.7
Figure
1: The new object becomes obsolete by the union of two existing objects.
To implement this technique, each index record r in the MIN/MAX index stores (1) the union
(denoted by r:union) of MBRs of all the leaf objects in subtree(r), and (2) the minimum value
(denoted by r:low) of all the objects in the subtree(r). The overall optimization is described below:
The union optimization If during the insertion of object o, an index/leaf record r is found such that
o:box is covered by r:union/r:box and o:value r:low/r:value, the insertion is ignored. Moreover,
check whether some max-value object stored in r covers o:box and has a value no smaller than o:value;
this also makes
A remaining question with the above optimization is how to store the union of all leaf objects under
an index record. At worst, this union may need space proportional to the number of leaf objects
that create it. Given that each index record has limited space, we store an approximate union. In
particular, we store a good approximation that can be represented with t boxes (MBRs), where t is
a small constant. What is important is that the approximation should aect only the query time,
but not the correctness of the query result. If the approximate union covers some area not covered
by the actual union, it may erroneously make obsolete some new object. So the approximate union
should be completely covered by the actual union. We formally state the problem as follows.
Denition 1 Given constant t and a set of n boxes t. The covered
t-union of S is dened as a set of t boxes such that (1)
a i is maximal, i.e. there does not exist another set of t boxes fa 0
satisfying the rst
condition such that S t
covers larger space than S t
a i .
To nd the exact answer with an exhaustive search algorithm in the two-dimensional case takes
O(n 8t ) time, which is clearly unacceptable. So we need to nd an e-cient algorithm to compute a
good approximation of the covered t-union. Again, in order for the box-max query to give correct
result, we require that the approximate covered t-union be completely covered by the original n
boxes. We hereby propose a O(n log n) algorithm.
Algorithm CoveredUnion(Boxes src[1.n], Number n) Given a set of n boxes src, return an approximate covered
t-union of src. Note that the algorithm uses small constants t, c and max try.
1. seeds = the c t boxes from src whose areas are the largest;
2. Initialize the set of destination boxes dest to be empty;
3. for every dimension d i
4. Project all src boxes to dimension d i and sort the projected end points into array proj[d i ][1::2n];
5. endfor
6. for i from 1 to t
7. Pick the box b from seeds which has the largest area not covered by the union of boxes in dest;
8. loop max try times
9. for each dimension d i
11. TryExtend( b, d i , false );
12. endfor
13. if b cannot be enlarged by extending to any direction, break;
14. else b = the extended box with the largest area;
15. endloop
16. Add b to set;
17. endfor
18. return dest;
Algorithm TryExtend(Box b, Dimension d i , Boolean positive) Given a box b, a dimension d i and a boolean
variable positive denoting whether b should be extended to the positive or negative direction along dimension d i , try
to extend b and return it.
1. if positive = true then // try to extend to the positive direction
2. Find the smallest number w in proj[d i ] which is larger than the projection of b to dimension d
3. Try to extend b to w in dimension d i ; shrink in the other dimensions if needed to make sure that b is covered
by the union of src boxes; the extension is successful if the area of b grows.
4. else
5. // try to extend to the negative direction; similar; omit;
6. endif
7. return b;
The idea of the algorithm CoveredUnion is to pick t boxes from the original n boxes and try to
expand each one of them as much as possible. To choose the i th box (1 i t), choose the one
which has the largest area not covered by the i 1 boxes computed so far. To expand a chosen box,
try to expand along all directions parallel to the axes. Trivial analysis shows that the complexity of
the algorithm is O(n log n).
In the following discussion the term union means the approximate covered t-union.
3.4 The area-reduction optimization
The last optimization we propose dynamically reduces the box area of the object to be inserted.
The area-reduction optimization If during the insertion of object o, an index record r is found
such that r:union intersects with o:box and r:low o:value, we reduce the size of o:box by subtracting
the area covered by r:union from it before inserting it to the lower levels. If the insertion reaches
a leaf object which intersects the new object and has an equal or larger value, the area of the new
object is reduced accordingly. The box of the new object is similarly reduced if some max-value object
stored in an index record r exists whose box intersects with o:box and whose value is no smaller than
o:value.
This optimization reduces the MBR of an object only if the reduced part is covered by some existing
records in the tree with larger or equal values. Hence the correctness of the MIN/MAX aggregates
is not aected. One benet of this optimization is that overlapping among sibling records in the tree
is reduced. Figure 2 shows an example. The two large boxes represent two index records r 1 and r 2 .
Assume r 1 :union is equal to the MBR of r 1 . The combination of the light-shadowed and the dark-
shadowed boxes represents an object to be inserted with value 8. The object should be recursively
inserted into subtree(r 2 ). Without applying the area-reduction optimization, r 2 :box would need to
be expanded to fully contain the new object. On the other hand, if we apply the optimization,
the light-shadowed area is subtracted and thus we insert in subtree(r 2 ) a much smaller box (the
dark-shadowed area) which is fully contained in r 2 :box and thus no expansion for r 2 is needed.r2 (min=7)
Figure
2: The area-reduction optimization helps to reduce overlaps.
Another benet of the area-reduction optimization is that it can help to make new records obsolete.
As an example, consider gure 2 again. It is possible that at some lower level in subtree(r 2 ), the
dark-shadowed area is found obsolete. Since the light-shadowed area is already made obsolete by r 1
due to the optimization, there's no need to insert the record at all.
Note that the result of a box when some areas are subtracted from it may be a set of boxes rather
than a single box. So an object to be inserted may be fragmented into several smaller boxes by this
optimization. One choice to handle this is to follow the R + -tree ([SRF87]) approach, i.e. to insert
every small box as a separate copy. But this choice increases the space overhead. Another choice is
to maintain the list of small boxes in the execution of the insertion algorithm. As we go down the
tree, some small boxes may become smaller or obsolete. Eventually at the leaf level, the MBR of the
smaller boxes is inserted. Note that the MBR is at most as large as the original box to be inserted
and in many cases, much smaller.
4 The Min/Max R -tree
The MR-tree is a dynamic, disk-based, height-balanced tree structure. There are two types of
pages: leaf pages and index pages. All pages have the same size. Since the MR-tree is based on
the R -tree, each page except the root has at most M records and at least m records. Each leaf
record has the form hbox; v 1 i, where v 1 is the value of the record. Each index record has the form
lowi. Here box and child has their usual meanings. The list
(b is the k max-value leaf objects in the sub-tree of this index record, sorted by
decreasing order of value. Here b i stands for the MBR and v i for the value of the leaf object with
the i th largest value. The union stores t boxes (the approximate t-union over all leaf MBRs), and
low is the minimum value over all leaf objects in the sub-tree.
Algorithm BoxMax(Page N , Box b, Value v) Given a tree node N , a query box b and a running value v, the
algorithm returns the box-max query result for the sub-tree rooted by N .
1. for every record r in N where r:box intersects with b
2. if r:v1 > v then
3. if N is leaf then
4.
5. else if there exists i in [1, k] such that r:b i intersects with b
6. Let i be the smallest one satisfying this condition;
7. if r:v i > v, set
8. else
9. v =BoxMax(Page(r:child), b, v);
10. endif
11. endif
12. endfor
13. return v;
BoxMax is a straightforward recursive algorithm. To nd the box-max over box b, we should
call BoxMax(root page, b, -1). The main dierence between this algorithm and the range query
algorithm in an R-tree is in steps 5-7, which corresponds to the k-max optimization. For an index
record r, the algorithm checks the k max-value objects stored in r. If any of them intersects with b,
there is no need to examine subtree(r).
Algorithm Insert(Tree T , Box b, Value v) Given tree index T , a box b and a value v, the algorithm inserts an
object with b and v into T .
1.
2. N =root page of T ;
3. while ( N is not leaf ) do
4. for every record r in N where r:box intersects with b
5. if r:box is contained in b and r:v1 v then
6. Remove subtree(r);
7. else
8. for every i such that r:v i v
9. Modify each box in S by subtracting r:b i from it;
10. endfor
11. if r:low v, modify every box in S by subtracting r:union from it;
12. endif
13. endfor
14. if N has zero record, goto step 19;
15. if S is empty, goto step 20;
17. endwhile
18. Optimizations for a leaf page; similar to steps 4 through 13; omit;
19. if S is not empty, insert hMBR(S), vi into N ;
20. while ( N is not root ) do
21. if N over
ows then
22. Split(N );
23. else if N under
ows then
24. Remove N and reinsert the records from N into the tree at N 's level;
25. endif
26. Adjust the entry in Parent(N) pointing to N ;
27. set N =Parent(N );
28. endwhile
29. if N over
ows then
30. Split old root and create a new root;
31. else if N has only one record and N is not leaf then
32. Remove N and set N 's child as the new root;
33. endif
Generally, the insertion algorithm follows a single path from the root down to a leaf. Reorganizations
may follow the path back up to the root. The optimizations are applied when going down the tree.
Steps 5 and 6 correspond to the box-elimination optimization which removes a sub-tree if the newly
inserted object has a larger value and spatially contains the sub-tree. Steps 8 to 11 correspond to the
area-reduction optimization which tries to reduce the size of the box to be inserted. Step 14 deals
with the rare case when all sub-trees in some index page N become obsolete due to the insertion of
an object. This may occur only when N is a root page, since otherwise the index record pointing to
N in the parent page would be obsolete before N has a chance to be examined. For this case, the
algorithm results in a tree with a single page and a single record. Step 15 means that the object to
be inserted is obsolete and no recursive insertion into lower levels are needed. Step 16 chooses a child
page to recursively insert into. We use the same algorithm as in the R*-tree. So the ChooseChild
procedure is not discussed in detail here. In steps 20 to 28, the path of pages is examined backwards.
The way to split an over
owed page and to reinsert entries in an under
owed page are identical to the
approaches in the R*-tree, plus the maintenance of the additional information kept along with each
index record. Steps 29 through 33 handle over
ow/under
ow of the root page. As a consequence,
the tree height may increase/decrease.
Performance
We compare the performance of the proposed MR-tree against the plain R -tree, the aR-tree and the
aR-tree with the k-max optimization (denoted as aR-tree kmax ). All the algorithms were implemented
in C++ using GNU compilers. The programs run on a Sun Enterprise 250 Server machine with two
300MHz UltraSPARC-II processors using Solaris 2.8. The page size is 4KB. For space limitations
we report the performance of the MR-tree only for is the number of max-value
objects kept in each index record and t is the number of boxes used to represent a union. Similarly,
the aR kmax uses 3. Each index utilizes an LRU buer and a path buer, which buers the
most recently accessed path. The total memory buer we used for each program has 256 pages.
We present results with two datasets, each containing 5 million square objects randomly selected
in a two-dimensional space. The space in both dimensions is [1, 1 million]. The rst data set was
used to test the performance in the presence of small objects. The size of each object was randomly
chosen from 10 to 1000. The second data set contains medium sized objects. The size of each object
was randomly chosen from 10 to 10,000.
R* aR kaR MR2575125Query Rectangle Area (%)
Query
Time
(#sec)
(a) Index Sizes. (b) Query performance.
Figure
3: Comparing the performance for small object dataset.
Figure
3 compares the performance for the small object dataset. In the gure we use R*, aR, kaR,
MR to represent the R*-tree, the aR-tree, the aR-tree kmax and the proposed MR-tree, respectively.
The MR-tree uses about 25% less space (gure 3a) than the other methods. This is because some
obsolete records were removed from the index. The aR-tree kmax occupies the most space, since
compared with the R*-tree and the aR-tree it stores more information in each index record.
To evaluate the query performance, the query rectangle area varies from 0.0001% to 50% of the
whole space. For each query rectangle size, we randomly generated 100 square queries and measured
the total running time. This running time was obtained by multiplying the number of I/Os by the
average disk page read access time (10ms), and then adding the measured CPU time. The CPU
time was measured by adding the time spent in user and system mode as returned by the getrusage
system call. Figure 3b shows the average time per query for various query sizes. While all methods
are comparable for small query rectangles (since few objects satisfy the query anyways), the MR-tree
is clearly the best as the query rectangle size increases. Note that the query time scale is logarithmic,
so the actual dierence in query speeds is drastic (for example, with query size of 1% the MR-tree is
about 20 times faster than the aR-tree). The reason is that for large query rectangles, the MIN/MAX
query has more chances to stop at higher levels in the MR-tree. In particular, the aR-tree will decide
to omit examining a sub-tree only if the query rectangle contains the box of the whole sub-tree. On
the other hand, the MR-tree search may omit traversing a sub-tree even if the query rectangle partly
intersects with it. The usefulness of the k-max optimization can be seen when comparing the aR-
tree kmax with the plain aR-tree. The MR-tree performs better than the aR-tree kmax for two reasons.
First, due to the additional optimizations, the MR-tree stores fewer objects. Second, objects in the
MR-tree have smaller area (the area reduction optimization), and thus achieve better clustering.
R* aR kaR MR2575125Query Rectangle Area (%)
Query
Time
(#sec)
(a) Index Sizes. (b) Query performance.
Figure
4: Comparing the performance over medium objects.
For medium-size objects, the performance improvement of the MR-tree over the other structures is
even better (gure 4). The reason is that larger objects have more chances to contain other objects
and thus make them obsolete; as a result, the MR-tree becomes more compact. This trend continued
with datasets with larger objects (results not shown for brevity).
We also compared the index generation time. For small objects (gure 5a), the MR-tree needs more
CPU time to generate (about 2.5 times more), but a little less I/O time. This is to be expected.
Although the MR-tree occupies less space, the generation of it needs more CPU time to maintain the
extra information stored in the index records. For medium objects (gure 5b), the MR-tree takes
R* aR kaR MR2575125IO
CPU
Creation
Time
kilo
sec)
R* aR kaR MR2575125
IO
CPU
Creation
Time
kilo
sec)
(a) Small objects. (b) Medium objects.
Figure
5: Comparing the index creation time.
about the same CPU time but less I/O time as compared with the other structures, since it is much
smaller.
6 Optimize the MSB-tree
We rst review the MSB-tree which is proposed in [YW00] to answer the box-max query for 1-
dimensional interval data. We then show how we can use one of the proposed optimization techniques
to improve it.
6.1 The MSB-tree
The MSB-tree is a tree structure where any record r in a tree node has an interval r:i and a value
r:v. The intervals of all records in a tree node do not intersect and the union of them is the interval
of the index record which points to this node. An object o to be inserted also has an interval o:i and
a value o:v. An index record r has an extra value r:u to be explained shortly.
Again we focus on the discussion of the MAX aggregate. Initially, there is only one node which is
a leaf node. To insert an object o into a leaf node L, every record r in L where r:i intersects o:i
is examined. If o:v r:v, the insertion has no eect; otherwise, if o:i contains r:i, set
otherwise, split r in two (or three) records: one corresponding to the intersection of r:i and o:i, while
the other(s) corresponding to the rest of r:i. If a leaf node L over
ows, it is split into two pages and
the records in it are evenly distributed into two pages based on their interval positions. To perform
an box-max query in an leaf page is easy. We just locate all records intersecting with the query
interval and return the maximum value of these records.
Besides an interval i, an value v and a pointer child to the child page, an index record also has a value
u. The meanings of u and v for an index record r are as follows. If subtree(r) were a stand-alone
tree, r:v would be the lower bound of any box-max query, while r:u would be the upper bound. Since
during a query, subtree(r) may not a stand-alone tree, we need to maintain a running value which
corresponds to the box-max we get so far by examining the path from root to the page pointed by
r. The nal query result is the larger value between the running value and the box-max query result
for subtree(r). Thus to perform a box-max query with interval q:i on an index page I where the
running value is current, we check all records r in I which intersects q:i. If r:u current, nothing
is done; otherwise, if q:i contains r:i, set current = r:u; otherwise, set current to be the query result
on subtree(r).
Now we discuss how to insert an object o into an index page I. Every record r in I where r:i intersects
o:i is examined. First of all, if r:u < o:v, we need to set to keep r:u as the upper bound.
Next, we check o:v against r:v. If o:v r:v, nothing needs to be done; otherwise, if o:i contains r:i,
simply set otherwise, we recursively insert
Note that for any given level l of the MSB-tree and for any given interval i, there can be at most two
records at level l which intersect i. So both the insertion algorithm and the query algorithm examine
two paths of the tree, and thus their complexities are both O(log B m), where B is the page capacity
in number of records, and m is the number of leaf records. Since each insertion splits at most two
leaf records, we have is the number of updates ever performed. But m can be
much smaller than n. To observe this, consider the insertion of object is the whole space
and o:v is larger than any existing value in the tree. Obviously, after this insertion, although n can
be arbitrarily large, the most compact tree will have only one leaf record, i.e.
An MSB-tree is called compact if the number of leaf records in it is minimum. The MSB-tree
update algorithm does not ensure the compactness of the tree. Since it is ideal to maintain a small
m, [YW00] proposes to periodically reconstruct the MSB-tree. To reconstruct, the whole tree is
browsed in a depth-rst manner to report every interval together with the aggregation value during
this interval. The intervals thus reported are continuous to one another. Adjacent intervals with the
same value are merged. All the intervals are then inserted into a second, initially empty MSB-tree
which eventually replaces the original tree.
6.2 Optimize the MSB-tree
During the reconstruction phase, the MSB-tree is o-line, i.e. no new insertion can be made. We
discuss how to apply the box-elimination optimization we proposed in section 3 to the MSB-tree to
get a relatively much smaller tree while it remains on-line.
The idea is that whenever the u and v values of an index record r becomes equal during an insertion
process, remove the whole subtree(r). To implement this, for the modied insertion algorithm to
insert an object o to an index page I, add the following steps to the original insertion algorithm: for
each record r in I such that o:i contains r:i and r:u o:v, remove subtree(r) and mark r as obsolete,
i.e. it points to an obsolete page; combine adjacent obsolete records; propagate the obsolete records
to leaf pages. It remains to discuss how to propagate an obsolete index record r to the leaf level. If
r is the only record in a page, merge the page with a sibling page; otherwise, there exists a sibling
record s which is not obsolete. Without lose of generality, suppose s:i is to the right of r:i. Merge
r with s by extending s:i to contain r:i. Also, the rst record of every node in the leftmost path
starting from s:child needs to be extended as well.
Now let's analyze the complexity of the modied algorithm. At each level of the tree, the algorithm
examines at most two pages. In each page, there are O(B) obsolete records. For each of these
records, we need to follow a single path from the page containing the record to some leaf. Thus
the worst case update complexity is O(B log B m) where m is the number of leaf records. Note that
this discussion does not count the cost to free up the space occupied by the sub-trees pointed to by
obsolete records. In fact, since these sub-trees are not needed in any update or query to be performed
later, the garbage collection can be performed by a background process. Compared with the original
MSB-tree update algorithm, the modied update algorithm is a little more expensive in the worst
case. However, this does not happen often, since each time the modied algorithm spends more time
in update, the tree is shrinked.
The major benet of the optimized algorithm over the original one is that the optimized one results in
a much smaller tree without periodic reconstruction. As an example, consider the previous example
to insert an object o where o:i is the whole space and o:v is larger than that of every existing record.
The original algorithm simply updates the u and v values of all root-level records and thus the
number of leaf records does not change. On the other hand, the optimized algorithm immediately
decides that the whole tree is obsolete and thus results in a very compact tree: a tree with only one
leaf record.
Conclusions
We examined the problem of computing MIN/MAX aggregation queries over spatial objects with
non-zero extents. We proposed four optimization techniques for improving the query performance.
We introduced the MR-tree, a new index explicitly designed for the maintenance of MIN/MAX
aggregates. The MR-tree combines all proposed optimizations. An experimental comparison showed
that our approach provides drastic improvement especially when query sizes increase. As a by-
product, we showed how one of the optimizations can be applied on an existing aggregation index
(the MSB-tree).
Acknowledgements
We would like to thank D. Gunopulos for many helpful discussions. We also thank D. Papadias for
providing us valuable input on related work including their own. Finally, we are grateful to B. Seeger
for the R -tree code.
--R
Computational Geometry: An Introduction
--TR
Computational geometry: an introduction
The R*-tree: an efficient and robust access method for points and rectangles
Geometric range searching
Range queries in OLAP data cubes
Cubetree
Efficient processing of window queries in the pyramid data structure
Multidimensional divide-and-conquer
Efficient computation of temporal aggregates with range predicates
Progressive approximate aggregate queries with a multi-resolution tree structure
R-trees
The Dynamic Data Cube
Data Cube
Incremental Computation and Maintenance of Temporal Aggregates
Hierarchical Prefix Cubes for Range-Sum Queries
The R+-Tree
Dynamic Update Cube for Range-sum Queries
Efficient OLAP Operations in Spatial Data Warehouses
The R-tree
PISA
How to Avoid Building DataBlades(r) That Know the Value of Everything and the Cost of Nothing
Relative Prefix Sums
--CTR
Jie Zhang , Michael Gertz , Demet Aksoy, Spatio-temporal aggregates over raster image data, Proceedings of the 12th annual ACM international workshop on Geographic information systems, November 12-13, 2004, Washington DC, USA
Zhang , J. Tsotras, Optimizing spatial Min/Max aggregations, The VLDB Journal The International Journal on Very Large Data Bases, v.14 n.2, p.170-181, April 2005
Ines Fernando Vega Lopez , Richard T. Snodgrass , Bongki Moon, Spatiotemporal Aggregate Computation: A Survey, IEEE Transactions on Knowledge and Data Engineering, v.17 n.2, p.271-286, February 2005 | min/max;indexing;spatial aggregates |
512447 | Dynamic memory management for programmable devices. | The paper presents the design and implementation of a novel dynamic memory-management scheme for ESP---a language for programmable devices. The firmware for programmable devices has to be fast and reliable. To support high performance, ESP provides an explicit memory-management interface that can be implemented efficiently. To ensure reliability, ESP uses a model checker to verify memory safety.The VMMC firmware is used as a case study to evaluate the effectiveness of this memory-management scheme. We find that the Spin model checker is able to exhaustively verify memory safety of the firmware; the largest process took 67.6 seconds and used 34.45 Mbytes of memory to verify. We also find that the runtime overhead to maintain the reference counts is small; the additional overhead accounts for 7.35% of the total message processing cost (in the worst case) over a malloc/free interface. | INTRODUCTION
Traditionally, devices implement simple functionality that
is usually implemented in hardware. All the complex functionality
is implemented in device drivers running on the
main processor. However, as devices get faster, it is increasingly
harder for software running on the main CPU to keep
up with the devices. This is because the main CPU has to
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
ISMM'02, June 20-21, 2002, Berlin, Germany.
go across the memory and I/O buses to reach the device
and incurs several hundreds of cycles for each access. In
these situations, better performance can be achieved by implementing
some of the functionality on the device instead
of on the main CPU [2, 22, 9, 21, 18, 20, 25, 1, 24].
Programmable devices can be used to implement the increasingly
sophisticated functionality that has to be supported
by the devices. These devices are equipped with a
programmable processor and memory (Figure 1). Since the
processor resides on the card, it incurs a much smaller overhead
to access the control registers on the device.
Writing firmware for the devices is di#cult for two rea-
sons. First, the code running on the device has to be fast.
The processing power and memory on the device tends to
be at least an order of magnitude less than the main CPU
and the main memory. Migrating code from the main CPU
to the device involves a tradeo# between running the code
on a faster processor that incurs higher overhead to access
the device, and running it on a slower processor that has
faster access to the device. The slower the code runs on the
device, the smaller the benefit of migrating code to devices.
Second, device firmware has to be reliable, as it is trusted
by the operating system. It has the ability to write to any
location in the physical memory. A stray memory write resulting
from a bug can corrupt critical data structures in the
operating system and can crash the entire machine.
Firmware for programmable devices is usually written using
event-driven state machines in C. This is because concurrency
is an e#ective way of structuring firmware for programmable
devices; the multiple threads of control provide
a convenient way of keeping track of the progress of several
events at the same time. And event-driven state machines
support low-overhead concurrency. However, programming
with event-driven state machines in C su#ers from a number
of problems [17]. Consequently, while the device firmware
can deliver good performance, it is often di#cult to write
and debug.
ESP [17, 16] is a language for writing firmware for programmable
devices using event-driven state machines. It is
designed to meet three goals: ease of programming, ease of
debugging, and high performance.
This paper focuses on the novel memory-management scheme
supported by ESP. An earlier paper [17] included a brief
description of this scheme. This paper presents a detailed
description of design and implementation of ESP's dynamic
memory management scheme. This paper also provides detailed
measurements in VMMC firmware to evaluate the effectiveness
of our approach.
is challenging because the firmware has to be fast as well
as reliable. The problem is compounded by the fact that
the firmware is implemented using concurrency. Traditional
memory management schemes fall into two categories: automatic
and explicit memory management. On one hand, automatic
memory management using garbage collection techniques
[26] provides safety but usually involves high overhead
(both in terms of the amount of memory and processing
time). On the other hand, explicit memory management
involves lower overhead but is hard to program correctly.
Section 6 discusses the related work in more detail.
To keep the dynamic memory management overhead low,
ESP provides an explicit memory management interface.
It then uses a model checker (Spin [15]) to ensure memory
safety. The key observation is that allocation bugs are
di#cult to find because memory allocation correctness is a
global property of a program-the property cannot be inferred
by looking only at a single module of the program.
To rectify this, ESP is designed to make memory allocation
correctness a local property of each process. This not
only promotes modular programming but also allows model
checkers to verify safety. This is because model checking involves
an exponential search. Making memory safety a local
property results in smaller models that are more amenable
to model checking.
The ESP runtime maintains reference counts on objects
to manage the dynamic allocation e#ciently. To make memory
allocation correctness a local property of each process,
objects sent over channels are passed by value in ESP. Se-
mantically, this means that a copy of the object being sent
over a channel is delivered to the receiving process. This
ensures that no two processes share an object. However,
copying objects being sent over channels at runtime can
be expensive. The ESP runtime avoids copying objects by
maintaining reference counts on objects so that the objects
are actually shared by multiple processes under the covers.
To demonstrate the e#ectiveness of our approach, we use
the VMMC firmware as a case study. The VMMC firmware
runs on the Myrinet [4] network interface cards and was programmed
using ESP. We found that the Spin model checker
was able to exhaustively verify memory safety of each of
the ESP processes in the VMMC firmware implementation.
The largest process took 67.6 seconds and 34.45 Mbytes of
memory to verify. We also found that the reference counting
used by the ESP runtime incurs a fairly small overhead.
Our measurements indicate that the additional bookkeeping
necessary to maintain the reference count results in a 7.35 %
increase (in the worst case) in message processing cost over
a malloc/free interface that is supported by C.
The rest of this paper is organized as follows. Section 2
presents a brief introduction to model checking. Section 3
presents an overview of ESP. Section 4 describes the design
and implementation of ESP's dynamic memory management
scheme. Section 5 uses the VMMC firmware to evaluate the
e#ectiveness of our scheme. Section 6 discusses the related
work. Finally, Section 7 presents our conclusions.
2. BACKGROUND
ESP uses model checkers to debug and extensively test the
device firmware. Model checking is a technique for verifying
a system composed of concurrent finite-state machines.
Given a concurrent finite-state system, a model checker ex-
CPUCPU
BUS
Main Memory
Main CPU
Network
DMA
DMA
CPU
MEM Card
Network
DMA
CPU
Figure
1: A machine with programmable devices
plores all possible interleaved executions of the state machines
and checks if the property being verified holds. A
global state in the system is a snapshot of the entire system
at a particular point in execution. The state space of the
system is the set of all the global states reachable from the
initial global state. Since the state space of such systems
is finite, the model checkers can, in principle, exhaustively
explore the entire state space.
Model checking verifiers can check for a variety of prop-
erties. These properties are traditionally divided into safety
and liveness properties. Safety properties are properties that
have to be satisfied in specific global states of the system.
Assertion checking and deadlock are safety properties. Assertions
are predicates that have to hold at a specified point
in one of the state machines. This corresponds to the set
of global states where that state machine is at the specified
point and the predicate holds. A deadlock situation corresponds
to the set of all the global states that do not have a
valid next state. Liveness properties are ones that refer to
sequence of states. Absence of livelocks is a liveness property
because it corresponds to a sequence of global states where
no useful work gets done. Liveness properties are usually
specified using temporal logics.
The advantage of using model checking is that it is auto-
matic. Given a specification for the system and the property
to be verified, model checkers automatically explore the
state space. If a violation of the property is discovered, it
can produce an execution sequence that causes the violation
and thereby helps in finding the bug.
The disadvantage of using model checking is that it is
computationally expensive. The state space to be explored
is exponential in the number of processes and the amount
of memory used by the program. As a result, the resources
required (CPU as well as memory resources) by the model
checker to explore the entire state space can quickly grow
beyond the capacity of modern machines.
3. ESP
The Event-driven State-machines Programming (ESP) [17,
16] is a language for programmable devices. It is designed to
three goals: ease of programming, ease of debugging,
and high performance.
In this section, we begin with a description of the approach
ESP takes to meet its goals. We then present an overview
of the ESP language.
3.1 Approach
ESP meets its three goals as follows (Figure 2):
Ease of programming. To support ease of programming,
ESP allows programs to be expressed in a concise modular
fashion using processes and channels. In addition, it provides
a number of features including pattern matching to
support dispatch on channels, a flexible external interface
to C, and a novel memory management scheme that is e#-
cient and safe.
Ease of debugging. To support ease of debugging, ESP
allows the use of a model checker like Spin [15] to extensively
test the program. The ESP compiler (Figure 2) not only
generates an executable but also extracts Spin models from
the ESP programs [16]. This minimizes the e#ort required in
using a model checker to debug the program. Often, the ESP
program is debugged entirely using the model checker before
being ported to run on the device. This avoids the slow and
painstaking process involved in debugging the programs on
the device itself.
High performance. To support high performance, the ESP
language is designed to be fairly static so that the compiler
can aggressively optimize the programs. In languages like
C, event-driven state machines are specified using function
pointers. This makes it di#cult for the C compiler to optimize
the programs. This forces the programmers to hand
optimize the program to get good performance. In contrast,
ESP is designed to support event-driven state machines. It
allows the ESP compiler to generate e#cient code.
3.2 Language Overview
The ESP language adopts several structures from CSP [14]
and has a C-style syntax. ESP supports event-driven state-machines
programming.
Concurrency in ESP is expressed using processes and chan-
nels. An ESP program consists of a set of processes communicating
with each other over channels. Each process represents
a sequential flow of control in the concurrent program
and implicitly encodes state machines.
Processes communicate with each other over channels.
Messages are sent over the channel using the out operation
and received using the in operation. Communication
over channels is synchronous 1 or unbu#ered-a process has
to be attempting to perform an out operation on a channel
concurrently with another process attempting to perform an
in operation on that channel before the message can be successfully
transferred over the channel. Consequently, both in
and out are blocking operations. The alt statement allows a
process to wait on in and out operations on several di#erent
channels till one of them becomes ready to complete.
In addition to basic types like int and bool, ESP supports
mutable and immutable versions of complex data types like
record, union and array. However, ESP does not support
recursive data types for two reasons. First, specification
languages for model checkers do not support recursive data
types. sending recursive data types by-value over
channels involves additional run-time overhead.
In ESP, processes and channels are static and are not first-class
objects-they can neither be created dynamically nor
Also known as rendezvous channels.
stored in variables nor sent over other channels. This design
allows the compiler to perform optimizations more effectively
ESP supports pure message passing communication over
the channels. Allowing processes to communicate over shared
memory (using shared mutable data structures) would require
ESP to provide additional mechanism (like locks) to
avoid race conditions. To avoid this, ESP does not allow
processes to share data structures.
Two aspects of ESP prevent sharing of data structures.
First, ESP disallows global variables. Each variable is local
to a single process. Second, objects sent over channels are
passed by value. To support this e#ciently, ESP allows only
immutable objects to be sent over channels. This applies not
only to the object specified in the out operation but also to
all objects recursively pointed to by that object.
4. DYNAMICMEMORYMANAGEMENTIN
4.1 Design
The design of the memory management scheme in ESP
was driven by two goals. First, the programs should be
safe. Bugs stemming from lack of safety are di#cult to find.
The problem is compounded by the fact that these programs
are concurrent and run on devices with minimal debugging
support. Second, the memory management overhead has to
be small.
ESP provides a novel memory management scheme that
provides safety as well as low overheads. To manage dynamically
allocated memory, it provides an explicit malloc/free-
style interface that incurs low overheads. It ensures safety
using a model checker. The only unsafe aspect of ESP is its
explicit memory management scheme. The memory allocation
bugs are eliminated using a model checker. This results
in safe ESP program that incur low memory-management
overhead at runtime.
The key observation is that allocation bugs are di#cult
to find because memory allocation correctness is a global
property of a program-the property cannot be inferred by
looking only at a single module of the program. A programmer
has to examine the entire program to make sure that all
allocated objects are eventually freed and are not accessed
once freed.
To rectify this, ESP makes memory allocation correctness
a local property of each process. Section 3.2 describes the
design choices that ensure that no two processes share any
data structure. It should be noted that to support pure
message passing-style communication, it would have been
su#cient to ensure that no two processes share any mutable
data structures. However, to make memory allocation
correctness a local property, ESP disallows sharing of even
immutable data structures.
Making memory allocation correctness a local property allows
the model checker to verify the memory safety of each
process separately. The ability to check each process separately
ensures that the size of model to be checked remains
small. The largest model that has to be checked depends
only on the size of the largest process and not on the size
of the entire program or the number of processes. Con-
sequently, we were able to exhaustively check the memory
safety of all ESP processes in the VMMC firmware (Sec-
pgm.ESP ESP Compiler
pgm.C help.C Generate Firmware
using C Compiler
pgmN.SPIN
Verify Property 1
Verify Property N
using SPIN
using SPIN
Figure
2: The ESP approach. The ESP compiler generates models (pgm[1-N].SPIN) that can be used by the Spin
model checker to debug the ESP program (pgm.ESP). The compiler generates three types of models: detailed (retains all the
details in the program), memory-safety (to check memory safety), and abstract (to generate more compact model by dropping
some irrelevant details). The compiler also generates a C file (pgm.C) that can be compiled into an executable. The shaded
regions represent code that has to be provided by the programmer. The test code (test[1-N].SPIN) is used to check di#erent
properties in the ESP program. It includes code to generate external events such as network message arrival as well as to
specify the property to be verified. The programmer-supplied C code (help.C) implements simple low-level functionality like
accessing special device registers, dealing with volatile memory, and marshalling packets that have to be sent out on the
network.
tion 5.2). In addition, making memory allocation correctness
a local property promotes modular programming.
Objects sent over channels are passed by value. The
means that a deep copy of the object is delivered to the receiving
process. 2 Objects received over a channel are treated
like newly allocated objects and have to be later freed by
that process. One possible complication occurs when an object
contains multiple links to another object. If pointer
sharing were preserved, the receiving process would need to
know about the sharing to check that the data structure was
correctly freed. However, it cannot determine the sharing
because pointer comparisons are not allowed in ESP. The
example in Figure 3 illustrates the problem with copying
semantics that preserves pointer sharing.
To avoid this, the deep copy performed when a data structure
is sent over a channel does not preserve pointer sharing.
This allows a receiving process to simply perform a recursive
arriving over channels.
ESP provides a malloc/free-style interface to manage dynamically
allocated memory. Since objects are not shared
between the processes, each process is responsible for freeing
its objects. Two primitives (free and rfree 3 ) allow
processes to free the allocated objects. In addition, ESP
provides two other primitives(pfree and prfree) which free
the object after evaluation of the current expression. This
allows the compiler to perform an optimization (Section 4.3).
The following code fragment
out( chan1, prfree(v));
is equivalent to
out( chan1, v);
ESP supports immutable as well as mutable data struc-
tures. An immutable object arriving on a channel can be
2 This is true only semantically. The ESP runtime never has
to actually copy objects sent over channels (Section 4.3).
3 which performs free recursively
mutated by first applying a cast operation to obtain a mutable
version of the object. Semantically, the cast operation
causes a new object to be allocated and the corresponding
values to be copied into the new object. However, the compiler
can avoid creating a new object in a number of cases.
For instance, if the compiler can determine that the object
being cast will be freed immediately after the cast, it can
reuse that object and avoid allocation.
ESP allows dangling pointers (pointers to objects that
have been already freed) during program execution. If dangling
pointers were not allowed, the program would have to
delete all pointers to a given object before that object could
be freed. This would require additional bookkeeping and
would place unnecessary burden on the programmer. Although
ESP allows dangling pointers, it disallows the use
of these pointers to access memory. This ensures memory
safety. In contrast, the usual approach to ensure memory
safety is to reclaim an object only if no pointers point to it.
This avoids dangling pointers. The only other approach that
we are aware of that provides safety while allowing dangling
pointers is region-based memory management [19]. It uses
the type system to guarantee that the dangling pointers are
not used at run time.
Memory allocation in ESP is a nonblocking operation. In
a concurrent program, making memory allocation blocking
has some advantages. It allows a memory allocation request
in one process that does not find any memory available
to block till another process frees up some memory.
Although this would lead to better memory utilization, it introduces
additional synchronization between the processes.
This forces the programmer to treat each allocation as potentially
blocking and make sure that it does not cause the
program to deadlock.
4.2 Extracting Memory-Safety Models
Currently, ESP used the Spin model checker [15]. Spin is
a flexible and powerful model checker designed for software
systems. Spin supports high-level features like processes,
rendezvous channels, arrays and records. Most other ver-
record of { v: int}
channel shareC: array of entryT
process process1 {
in( shareC, $a);
assert( length(a) == 2);
process process2 {
{ 11 };
out( shareC, { -> p1, p2});
process process3 {
{ 5 };
out( shareC, { -> p, p});
Figure
3: An example to illustrate the problems with copying semantics that preserves pointer sharing.
Process process1 expects an array of two elements on the channel shareC. Once it receives it, the process frees one of the
entries and proceeds to use the other entry. If process process2 sends an array on the channel, process1 would execute
correctly because the two entries point to di#erent objects. However, if process process3 sends an array on the channel,
process1 will try to access the record after it has freed it resulting in an error.
ifiers target hardware systems and provide a fairly di#er-
ent specification language. Although ESP can be translated
into these languages, additional state would have to be introduced
to implement features like the rendezvous channels
using primitives provided in the specification language. This
would make the state explosion problem worse. In addition,
the semantic information lost during translation would make
it harder for the verifiers to optimize the state-space search.
allows verification of safety as well as liveness prop-
erties. The liveness properties in Spin are specified using
Linear Temporal Logic (LTL).
The ESP compiler generates three types of models: de-
tailed, memory-safety, and abstract [16]. The detailed models
contain all the details from the original ESP program.
These models are useful during the development and de-bugging
of the firmware using the simulation mode in Spin.
The memory-safety models are used to check for memory allocation
bugs in the program. These models are essentially
detailed models with some additional Spin code inserted to
check for validity of memory accesses. The abstract models
omit some of the details that are irrelevant to the particular
property being verified. These models can have significantly
smaller state than the detailed models and are useful for
checking larger systems. In this paper, we will discuss in
detail only the memory-safety models.
The memory-safety models generated by the ESP compiler
can be used to check for memory allocation bugs in
the program. These models are essentially detailed models
with some additional Spin code inserted to check for validity
of memory accesses. Therefore, they contain even more
state than the detailed models. In spite of this, these models
can be usually used to exhaustively explore the state space
for allocation bugs. This is because the memory safety of
each individual process can be checked separately using the
verifier (Section 4.1).
Variables in ESP store pointers to data objects. For in-
stance, in the following code, variables b1 and b2 point to
the same array. Therefore, the update (b1[1]) should be
visible to variable b2.
$b1: #array of int = #{ -> 5, 11}; // Allocate
Since Spin does not support pointers, each object in the
generated model is assigned an objectId at allocation time.
The objectId is stored as an additional field in the object
itself. When an object gets copied due to an assignment
operation, the objectId field also gets copied. This ensures
that all objects in the translated Spin code that share the
same objectId represent a single object in the original ESP
code. When a mutable object gets updated in ESP code,
the translated Spin code includes code to check and update
all other objects with the same objectId.
The memory-safety model includes additional code (asser-
tions) that checks the validity of each object that is accessed.
When a new object is allocated, an unused objectId is assigned
to the object. Before every object access, code is
inserted in the model to check that the object is live. Array
accesses include additional code to check that the array
index is within bounds. Union references include code to
check that the field being accessed is valid. When an object is freed,
all objects in the model with that same objectId are
marked as invalid by changing the objectId field to -1.
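To make this mechanism concrete, the following C-style sketch mirrors the checks that the generated memory-safety model performs. The actual generated code is Promela, and the names (assert_live, free_object, MAX_ENTRIES) and the entry layout are illustrative assumptions, not the ESP compiler's output.

#include <assert.h>

#define MAX_ENTRIES 8              /* bound computed by the ESP compiler */

struct entry { int objectId; int v; };

struct entry objs[MAX_ENTRIES];    /* every copy of every object in one process */

/* inserted before each object access */
void assert_live(struct entry *e)
{
    assert(e->objectId != -1);     /* fails if the object was already freed */
}

/* inserted for each free: invalidate every copy with the same objectId */
void free_object(int id)
{
    int i;
    for (i = 0; i < MAX_ENTRIES; i++) {
        if (objs[i].objectId == id)
            objs[i].objectId = -1;
    }
}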
The memory-safety model checks for bugs like accessing
an object after it has been freed, double freeing an object,
and using an invalid array index. In addition, it can also
find most memory leaks. This is because a process in the
generated model has a bounded number of objects and the
compiler can determine this bound. Arrays are the only
source of unbounded allocation in an ESP process, since
ESP does not support recursive data types. However, the
ESP compiler imposes a bound on the maximum lengths of
the arrays during model extraction [17], thereby bounding
the number of objects in the model. By constraining the
model to only pick objectIds within this bound, any steady
memory leak can be detected. A steady leak will cause the
model to run out of objectIds during model checking.
Currently, the objectIds are a source of unnecessary increase
in state space to be explored in models generated by
the ESP compiler. The problem stems from the fact that a
given object in the program can get assigned different objectIds
depending on the scheduling decisions made prior
to its allocation. The result is that a single "state" manifests
itself as several different states in the state space. This
problem can be alleviated by using a separate objectId table
for each distinct type in each process. This is because
two pointers can point to the same object only if they have
the same type and belong to the same process. This should
reduce the number of different objectIds a given object can
get assigned.
4.3 Code Generation
From the programmer's perspective, each process has its
own set of objects that have to be managed separately for
each process (Section 4.1). Each process allocates its objects
and explicitly frees them (using free). Objects sent
over the channels are deep copied before being handed to
the receiving processes. Therefore, objects arriving over a
channel are treated like newly allocated objects that have
to be later freed by that process.
The implementation uses a reference-counting scheme to
manage the objects. Although semantically, processes do
not share objects, the implementation shares objects between
processes for efficiency: copying objects is computationally
expensive. The runtime system maintains reference
counts to keep track of the number of processes sharing
the object. Recursive increment and decrement operations
on cyclic data structures require additional bookkeeping to
avoid infinite loops. However, ESP does not support recursive
data types. Consequently, these operations can be
implemented efficiently.
Normal allocation causes objects to be allocated and their
reference count initialized to one. When an object is sent
over a channel, the reference count of the object is recursively
incremented (thereby avoiding the expensive deep copy)
before giving it to the receiving process. When
a process frees an object, its reference count is decremented
by one. The object is actually deallocated only when all the
processes have freed it and the reference count is zero.
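A minimal sketch of these operations in C follows. The names (esp_alloc, esp_share, esp_release) and the object layout are assumptions, the recursive walk over an object's contained objects is elided, and malloc/free stand in for the runtime's per-size free lists described in Section 5.3.

#include <stdlib.h>

struct obj {
    int refcount;
    /* ... object data ... */
};

struct obj *esp_alloc(size_t size)
{
    struct obj *o = malloc(size);
    o->refcount = 1;            /* owned by the allocating process */
    return o;
}

/* applied (recursively, to every contained object) when sending on a channel */
void esp_share(struct obj *o)
{
    o->refcount++;
}

/* applied (recursively) when a process frees its reference */
void esp_release(struct obj *o)
{
    if (--o->refcount == 0)
        free(o);                /* last reference gone: actually deallocate */
}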
The following simple optimization is fairly effective in reducing
the number of reference count increments and decre-
ments. In the following code fragment
out( chan1, prfree(v));
the reference count will be recursively incremented before
sending the object v on the channel. After sending the
object, the reference count will be recursively decremented
because of the prfree. In this case, the reference count
increments and decrements can be optimized away by the
compiler.
The deep copy performed when a data structure is sent
over a channel does not preserve pointer sharing (Section 4.1).
This has two benefits. First, it allows the copying semantics
to be implemented efficiently; a simple recursive increment
of reference count suffices. An object that is pointed to multiple
times within the data structure will have its reference
count incremented multiple times. Second, it allows the correctness
of the memory allocation to be a local property of
each process (Section 4.1). The receiving process does not
have to worry about the pointer sharing on objects arriving
over channels.
A cast of an immutable object into a mutable object can
require copying the object. This is because a program can
detect object sharing by mutating it at one location and
observing the change at another location. However, the cast
operation is fairly uncommon in ESP programs. In addition,
the copying is not always necessary. The copy can often be
avoided when a cast is necessary but the program is written
carefully (to allow the compiler to optimize it). For instance,
if the reference count of the immutable object is one (no
other process is holding that object) and the object is freed
immediately after the cast, the compiler can avoid the copy
and use the same object.
    record of { v: int}
    channel countC: array of entryT
    process processA {
        { 3 };
        { -> p1, p2};
        out( countC, a);
    process processB {

Figure 4: An example that shows that the traditional reference counting scheme is not sufficient for ESP.
Several design choices in the ESP language allow the implementation
to share objects while providing the illusion
of the disjoint set of objects. First, only immutable objects
can be sent over channels. Therefore, the program cannot
detect that the object is being shared by mutating it in one
process and observing the change in another process. Sec-
ond, objects cannot be compared for pointer equality. This
prevents the program from comparing the pointer for two
different objects and detecting that the implementation is
using the same object to represent both of them. Finally,
ESP does not support recursive data types, and therefore
the program cannot have cyclic data structures. This means
that recursive reference count increments do not have to
deal with infinite loops due to cyclic data structures, and
can therefore be implemented efficiently.
Traditional reference counting schemes maintain the counts
on the objects differently from the way it is done in ESP. In
the traditional scheme, the reference counts are incremented
only at the root and decremented recursively only when the
reference count of the object becomes zero. Our earlier paper
[17] suggested that this would be sufficient for ESP too.
It turns out that this is not sufficient. Consider the example
in Figure 4. Until the point when the objects are sent over
the channel countC, both schemes would have kept the same
reference counts on all the objects; each of the three objects
(pointed to by the variables p1, p2, and a) would have a
reference count of one. However, on performing the send
operation on the channel countC, the traditional scheme will
increment the reference count of only the array object while
the ESP scheme will increment the reference count of each
of the three objects. If the scheduler chooses to schedule
the process processB first, then the free statement will be
executed. With the traditional scheme, this will cause the
reference count of the object pointed to by p1 to go to zero,
thereby freeing the object. This will generate an error when
the process processA is scheduled to run and it tries to access
the variable p1. With the ESP scheme, the reference
count of the object pointed to by p1 will be decremented
from two to one, and so the object will not be freed. This
allows the process processA to later access it.
4.4 Limitations
One of the main limitations of this approach is that it has
problems dealing with recursive data types. The problem is
that recursive data types introduce cyclic data structures. In
the presence of cyclic data structures, the deep copy semantics
of ESP does not make any sense. One possible approach
is to allow only noncyclic data structures on channels. This
might require additional checks at run time. However, these
checks would be necessary only on channels that allow recursive
data types.
5. EXPERIMENTAL RESULTS
This section presents measurements to demonstrate the
effectiveness of ESP's memory management scheme. The
measurements were performed on the VMMC firmware that
runs on the Myrinet [4] network interface cards. Our measurements
are designed to investigate the following issues:
- The programmer effort required to verify memory safety.
- The effectiveness of using the model checker to verify
memory safety.
- The extra performance overhead incurred at runtime
to maintain reference counts.
- The allocation pattern exhibited by the firmware. In
particular, we measure the object lifetimes.
Before answering these questions, this section presents a
brief overview of the VMMC firmware.
5.1 VMMC Firmware
The Virtual Memory-Mapped Communication (VMMC)
architecture [9] delivers high performance on gigabit networks
by using sophisticated network cards. It allows data
to be directly sent to and from the application memory
(thereby avoiding memory copies) without involving the operating
system (thereby avoiding system call overhead). The
operating system is usually involved only during connection
setup and disconnect.
The VMMC implementation [9] uses the Myrinet [4] net-work
interface cards. Myrinet is a packet-switched gigabit
network. The Myrinet network card is connected to the net-work
through two unidirectional links of 160 Mbytes/s peak
bandwidth each. The actual node-to-network bandwidth
is usually constrained by the PCI bus (133 Mbytes/s) on
which the network card sits. The network card has a programmable
memory and three DMA engines to transfer data: one to
transfer data to and from the host memory; one to send data
out onto the network; one to receive data from the network.
The card has a number of control registers including a status
register that checks for data arrival, watchdog timers
and DMA status.
The VMMC software (Figure 5) has three components: a
library that links to the application; a device driver that is
used mainly during connection setup and disconnect; and
firmware that runs on the network card. Most of the software
complexity is concentrated in the firmware code, which
was implemented using event-driven state-machines in C.
Significant effort [10, 9, 3, 6] has been spent on imple-
menting, performance tuning, and debugging the VMMC

Figure 5: VMMC Software Architecture. The shaded
regions are the VMMC components.
Process         ESP Program   Generated Model   Test Code
reliableSend
reliableRecv        152             664              41
localReq            172             742              67
remoteReq           167             882              85
remoteReply         177             715             104

Table 1: Sizes (in lines) of the various files used
to check memory safety of the various processes in
the VMMC firmware. The three remaining processes
are not listed in the table because they did not involve any
allocation. The second column shows the size of the portion
of program relevant for the particular model. The third
column shows the size of the model generated by the ESP
compiler. The last column shows the number of lines of Spin
test code that was required.
firmware. In spite of this, we continue to encounter bugs in
the firmware.
The VMMC firmware was reimplemented using ESP. The
ESP version of the VMMC firmware required significantly
fewer lines of code than the C version. The ESP version
has 500 lines of ESP code together with around 3000 lines
of C code. All the complex state machine interactions are
restricted to the ESP code, which uses 8 processes and 19
channels. The C code performs only simple operations like
packet marshalling and handling device registers. This is a
significant improvement over the C version where the complex
interactions were scattered throughout the 15600 lines
of code.
5.2 Verifying Memory Safety in VMMC Firmware
The ESP compiler extracts memory-safety models that
can be used to verify the safety of each of the processes separately
(Section 4.2). To use these models to verify safety
involves two steps. First, the programmer has to provide
test code (test[1-N].SPIN in figure 2) to check each of the
processes. Then, the model checker is used to perform a
state-space exploration to verify safety. The former involves
programmer effort while the latter is performed automatically
and is constrained by the available computational resources.
Programmer effort required. The test code for checking
memory safety, which has to be provided by the programmer,
simulates external events such as network message arrival.
Unlike with other models, the test does not have to include
any additional code to check the safety; the code to check
for memory safety is included in the generated model in the
form of assertions. Table 1 presents the sizes of the test
code that had to be written to verify memory safety in the
firmware. 4 In each case, the size of the test code is
fairly small. The table also shows the size of the relevant
portion of the ESP code and the size of the models generated
by the ESP compiler.
Each test code has to be written only once but can be used
repeatedly to recheck the system as the software evolves.
Since the models are extracted automatically, rechecking the
software requires little programmer effort.
Effectiveness of model checking. For every process in
the VMMC firmware, the entire state space could be explored
exhaustively using Spin. Table 2 presents the amount
of state that had to be explored to verify memory safety of
each of the processes. The biggest process (reliableSend)
required only 67.6 seconds of processor time and 34.45 Mbytes
of memory. This shows the effectiveness of the model checker
to verify safety.
This is in contrast with our experience with checking the
VMMC firmware for global properties like deadlocks [16].
The ESP compiler used abstraction techniques to generate
smaller models that would require fewer resources to explore
the state space. This approach allowed the model checker to
identify several hard-to-find bugs in the firmware that can
cause the firmware to deadlock. However, the state space
was still too big. As a result, Spin could only perform a
partial search due to resource constraints. This illustrates
the importance of making memory safety a local property
of each process.
The memory-safety model generated by the ESP compiler
catches not only all bugs due to invalid memory accesses but
also most of the memory leaks (Section 4.2). The memory
safety bugs in the VMMC firmware had already been eliminated
by the time the ESP compiler was modified to support
the memory-safety models. Spin was used to check an
earlier version of the firmware that had an allocation bug.
The verifier easily identified the bug. To further check the
effectiveness of using the memory-safety models, a variety
of memory allocation bugs were inserted manually in the
program. These bugs either access objects after they were
freed or use an invalid array index or introduce memory
leaks. Spin was able to quickly find the bug in every case.
In ESP, the model checker was used throughout the program
development process. Traditionally, model checking
is used to find hard-to-find bugs in working systems. Since
developing firmware on the network interface card involves
a slow and painstaking process, we used the Spin simulator
to implement and debug it. Once debugged, the firmware
was ported to the network interface card with little effort.
5.3 Performance
Reference counting overhead. We measure the performance
overhead incurred by the ESP runtime to manage
dynamic memory in the VMMC firmware. To put the overhead
in perspective, we estimate the additional overhead that
ESP's scheme would incur over a malloc/free interface like
the one provided by C.
4 The processes not listed in the table did not involve any
dynamic allocation.
VMMC provides two types of operations to transfer data
between two machines. The remote-write operation transfers
data from the local machine to a remote machine. The
remote-read operation fetches data from a remote machine
to the local machine. A remote-read operation behaves like
two remote-write operations. It requires two messages to be
sent over the network-a request message sent by the local
machine and a reply message (with the requested data) sent
by the remote machine. Therefore, in this section, we report
only the measurements from the remote-write operations.
Measuring the memory management overhead on the firmware
poses a problem. The granularity of the clock available on
the Myrinet network card is fairly large (0.5 μs). There-
fore, we cannot simply instrument the firmware to measure
the fraction of time spent in the memory management rou-
tines. Consequently, we estimate the memory management
overhead in three steps.
First, we measure the overhead of each of the memory
management operations (second column in Table 3). When
a reference count decrement operation is performed, the object
is freed or not depending on whether the reference count
is zero. Therefore, the overhead in both cases is mea-
sured. The overhead of all the memory management operations
should have little variance. This is because ESP uses
a simple scheme to manage the free memory. It keeps a set
of lists of free blocks; all blocks in a particular list have the
same size. Consequently, allocating (or freeing) an object
involves removing from (or adding to) the head of a list.
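A sketch of such a segregated free-list allocator is shown below. The names, the number of size classes, and the block header layout are illustrative assumptions rather than the actual ESP runtime code.

#include <stddef.h>

#define NUM_SIZE_CLASSES 8

struct free_block {
    struct free_block *next;
};

static struct free_block *free_lists[NUM_SIZE_CLASSES];  /* one list per block size */

void *alloc_block(int size_class)
{
    struct free_block *b = free_lists[size_class];
    if (b != NULL)
        free_lists[size_class] = b->next;   /* pop the head of the list */
    return b;                               /* NULL if the pool is exhausted */
}

void free_block(void *p, int size_class)
{
    struct free_block *b = p;
    b->next = free_lists[size_class];       /* push onto the head of the list */
    free_lists[size_class] = b;
}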
Second, we measure the number of times each of these operations
is executed on a remote-write operation (Table 3).
Using the numbers from these two steps, we estimate the
memory management overhead involved in each remote-write
operation.
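For example, using the per-operation times and operation counts in Table 3, the sender-side estimate works out to 3 x 0.59 + 1 x 0.15 + 1 x 0.26 + 3 x 0.48 = 3.62 μs, and the receiver-side estimate to 3 x 0.59 + 3 x 0.15 + 3 x 0.26 + 3 x 0.48 = 4.44 μs.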
Finally, we instrument the VMMC firmware to measure
the total time spent to process each remote-write request
(third column in Table 4). Then we compute the fraction of
the total processing time spent in managing dynamic memory
(Table 4).
ESP performs an optimization commonly used by reference-counting
systems. The reference counts of newly allocated
objects are set to zero instead of one. Free objects are identified
by their presence in a free list. This avoids the need
for incrementing the counter before allocation and decrementing
it before freeing. Consequently, the actual cost of
maintaining the reference counts (Table 4) can be obtained
by adding the execution times in rows two and three in
Table 3.
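Continuing the earlier reference-counting sketch, the optimized variant looks roughly as follows (again with assumed names): the count records only the additional references, so allocation and the final free touch the count at most once, and free objects are recognized by their presence in a free list rather than by their count.

#include <stdlib.h>

struct obj { int refcount; /* ... object data ... */ };

struct obj *esp_alloc_opt(size_t size)
{
    struct obj *o = malloc(size);
    o->refcount = 0;            /* no extra references yet */
    return o;
}

void esp_release_opt(struct obj *o)
{
    if (o->refcount == 0)
        free(o);                /* no other process holds it */
    else
        o->refcount--;
}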
Table 4 shows that the overhead of maintaining the reference
counts is a fairly small fraction of the total memory-management
cost (27.7% in the worst case). This is because very few reference
count increments (and decrements) were necessary in
the firmware. Only one reference count increment was necessary
on the sending side and only three on the receiving
side (Table 3). The remaining memory management overhead
is the cost of allocating and freeing memory. This cost
would be incurred even by a simple malloc/free interface
provided in C.
One advantage of an explicit memory management scheme
is that the programmer can control when an object is freed.
This allows the programmer to get better performance by
Process Name    No. of States         Time          Memory Used (in Mbytes)
                Stored     Matched    (in Seconds)  Stack   Hash table   States Store   Total
reliableSend    11118      316725     67.6          24.0    1.0          9.45           34.45
localReq
remoteReq        2315        3510      0.9          24.0    1.0          1.87           26.87
remoteReply      8565        7312      2.3          24.0    1.0          5.55           30.55

Table 2: Checking for memory safety in the VMMC firmware using Spin. In each case, the entire state space
was explored in the exhaustive mode in Spin. The stored column shows the number of unique states encountered while the
matched column shows the number of states encountered that had already been visited before. The memory usage is broken
down into space used for the stack, the hash table, and the visited states. The space used by the stack and the hash table
are statically allocated by Spin.
Operation                                Operation        Sender                              Receiver
                                         Execution Time   Operation Count  Execution Time*    Operation Count  Execution Time*
Allocation                               0.59 μs          3                1.77 μs            3                1.77 μs
Increment Ref Count                      0.15 μs          1                0.15 μs            3                0.45 μs
Decrement Ref Count (object not freed)   0.26 μs          1                0.26 μs            3                0.78 μs
Decrement Ref Count (object freed)       0.48 μs          3                1.44 μs            3                1.44 μs
Total                                    -                -                3.62 μs            -                4.44 μs

Table 3: Estimate of the memory management overhead in the VMMC firmware. The table computes
the amount of time spent in the memory management primitives in the firmware when a machine sends a
message to another machine. It shows the time spent on both the sending as well as the receiving machines.
*These values were computed using the measurements in the other columns.
Machine            Total             Memory Management Overhead        Reference Counting Overhead
                   Execution Time    Execution Time    % of Total      Execution Time    % of Total
Sender   Small     19.50 μs          3.62 μs           18.56 %         0.41 μs           2.10 %
         Rest      28.47 μs          3.62 μs           12.72 %         0.41 μs           1.50 %
Receiver           16.74 μs          4.44 μs           26.52 %         1.23 μs           7.35 %

Table 4: Comparison of the memory management overhead with the total time spent by the VMMC firmware
to process a message being sent over the network (both on the sender and the receiver machines). Since
messages of up to 32 bytes are treated differently than larger messages on the Sender, the overheads are shown separately for
the two categories: Small (<= 32 byte messages) and Rest. The total execution time was measured by instrumenting the
firmware. The memory management overheads were obtained from Table 3. The reference counting overhead is the overhead
of maintaining the reference counts in the object. It is obtained by adding the execution times in rows two and three in
Table 3.
Application      Problem Size
LUContiguous     2048 x 2048 Matrix
WaterSpatial     15625 Molecules
BarnesSpatial    8192 Particles
WaterNsquared    1000 Molecules
Volrend          head

Table 5: SPLASH2 Applications
moving some of the allocation overhead out of the critical
path.
Object lifetimes. We measure the lifetime of the allocated
objects in the VMMC firmware using SPLASH2 applications
(Table 5). These applications run on top of the
Shared Virtual Memory (SVM) [3] library that, in turn, runs
on top of the VMMC library. All application measurements
were made using a cluster of four SMP PCs. Each PC
has four 200 MHz Pentium processors, 1 GB memory and a
Myrinet network interface card with a LANai 4.x 33 MHz
processor and 1 MB on-board SRAM memory. The nodes
are connected by a Myrinet crossbar switch. The PCs run
Table 6 shows the lifetimes of the objects allocated in the
firmware. The lifetime is measured in the number of allocations
and not the execution time. During each allocation,
a counter is incremented. As can be seen from the table,
most objects are freed very quickly: over 99% of objects
are freed within 128 allocations. A small number of tables
are allocated when the firmware is started. These objects
are never freed.
6. RELATED WORK
Explicit Memory Management. Traditionally, dynamic
memory on programmable devices is managed using an interface
that allows the program to allocate and free memory
maintained in buffer pools. When required, the programmer
explicitly maintains reference counts on the objects. These
interfaces do not provide memory safety. They often result
in memory allocation bugs that are notoriously difficult
to find; they lead to memory corruption that manifests as
faulty behavior at a location in the program different from
the site of the bug.
A number of tools [5] use static and runtime techniques
to find memory allocation bugs in unsafe languages like C.
For instance, the Purify [13] tool inserts code in the executable
that checks for a number of bugs like invalid indices
in array accesses and memory leaks. This allows it to detect
an error when it happens at run time. However, it is the
programmer's responsibility to run the executable with different
inputs so as to exercise every possible program path.
LCLint [11] combines static analysis with program annotations
to identify a broad class of allocation bugs. A different
approach [23] to find a more limited class of bugs
(buffer overruns) is to formulate the buffer overrun problem
as an integer constraint problem and statically check
for constraint satisfaction. A limitation of these static approaches
is that they can produce false positives as well as false
negatives.
Automatic Memory Management. Automatic memory
management in safe programming languages is implemented
using a garbage collector that is responsible for reclaiming
unused memory [26]. Garbage collection often involves run-time
overhead (both in terms of processor overheads as well
as additional memory requirements) that make garbage collection
difficult to use on programmable devices. Copying garbage collectors
usually use only half of the available memory. This is
a problem on programmable devices which have relatively
small amounts of memory. Mark-and-sweep collectors do
not waste memory but incur overhead proportional to the
size of the heap. Although some techniques [7] can be used
to reduce the cost during the sweep phase, even the
cost during the mark phase can be significant. This is because
the firmware maintains a few large tables that have
to be scanned during the mark phase. This is a problem
on programmable devices where the collector would be triggered
frequently because of the limited memory available.
Memory Management using Regions. Vault [8] and
Cyclone [12] use regions [19] to provide safe memory man-
agement. Region-based memory management techniques
free all the objects in a region together, typically when
exiting dynamic contexts like procedures.
This makes them unsuitable for a language like ESP that
does not have any dynamic context.
7. CONCLUSIONS
This paper presented the design and implementation of
a novel memory-management scheme for ESP. ESP provides
an explicit interface to manage dynamic memory. This
interface can be implemented efficiently using a reference-counting
technique. ESP's design makes memory-allocation
correctness a local property of each process. This allows a
model checker to be used to ensure the safety of the pro-
gram. This approach results in safe programs that incur
low runtime overheads to manage the memory.
The effectiveness of ESP's scheme was evaluated using the
VMMC firmware as a case study. We found that the Spin
model checker is able to exhaustively verify memory safety of
each of the ESP processes in the firmware. Verifying memory
safety took between 0.1 and 67.6 seconds. It required
less than 35 Mbytes of memory. We also found that the
runtime overhead to maintain the reference counts is small.
The additional overhead to maintain the reference counts
(when compared to a simple malloc/free interface) varied
between 1.5 % and 7.35 % of the total message processing
cost.
Acknowledgments
This work was supported in part by the National Science
Foundation (CDA-9624099,EIA-9975011,ANI-9906704,EIA-
9975011), the Department of Energy (DE-FC02-99ER25387),
California Institute of Technology (PC-159775, PC-228905),
Sandia National Lab (AO-5098.A06), Lawrence Livermore
Laboratory (B347877), Intel Research Council, and the Intel
Technology 2000 equipment grant.
8.
--R
Using Network Interface Support to Avoid Asynchronous Protocol Processing in Shared Virtual Memory Systems.
A Gigabit-per-Second Local Area Network
Porting a User-Level Communication Architecture to NT: Experiences and Performance
Reducing Sweep Time for a Nearly Empty Heap.
Enforcing High-Level Protocols in Low-Level Software
Design and Implementation of Virtual Memory-Mapped Communication on Myrinet
Static Detection of Dynamic Memory Errors.
Fast Detection of Memory Leaks and Access Errors.
Communicating Sequential Processes.
The Spin Model Checker.
ESP: A Language for Programmable Devices.
ESP: A Language for Programmable Devices.
High Performance Messaging on Workstations: Illinois Fast Messages (FM) for Myrinet.
Implementation of the Typed Call-by-Value lambda-Calculus Using a Stack of Regions
Active Messages: A Mechanism for Integrated Communication and Computation.
Evolution of Virtual Interface Architecture.
A First Step Towards Automated Detection of Buffer Overrun Vulnerabilities.
Virtual Log Based File Systems for a Programmable Disk.
Uniprocessor Garbage Collection Techniques.
The SPLASH-2 Programs: Characterization and Methodological Considerations
--TR
Active messages
Implementation of the typed call-by-value lambda-calculus using a stack of regions
The SPLASH-2 programs
U-Net
High performance messaging on workstations
Static detection of dynamic memory errors
The Model Checker SPIN
Active disks
Virtual log based file systems for a programmable disk
Using network interface support to avoid asynchronous protocol processing in shared virtual memory systems
Reducing sweep time for a nearly empty heap
Communicating sequential processes
Enforcing high-level protocols in low-level software
Region-based memory management in cyclone
High-Speed Data Paths in Host-Based Routers
User-Level Network Interface Protocols
Evolution of the Virtual Interface Architecture
Myrinet
Design and Implementation of Virtual Memory-Mapped Communication on Myrinet
Uniprocessor Garbage Collection Techniques
--CTR
Sanjeev Kumar, Kai Li, Using model checking to debug device firmware, Proceedings of the 5th symposium on Operating systems design and implementation, December 09-11, 2002, Boston, Massachusetts
Sanjeev Kumar , Kai Li, Using model checking to debug device firmware, ACM SIGOPS Operating Systems Review, v.36 n.SI, Winter 2002 | programmable devices;reference counting;dynamic memory management;model checking |
512449 | Accurate garbage collection in an uncooperative environment. | Previous attempts at garbage collection in uncooperative environments have generally used conservative or mostly-conservative approaches. We describe a technique for doing fully type-accurate garbage collection in an uncooperative environment, using a "shadow stack" to link structs of pointer-containing variables, together with the data or code needed to trace them. We have implemented this in the Mercury compiler, which generates C code, and present preliminary performance data on the overheads of this technique. We also show how this technique can be extended to handle multithreaded applications. | INTRODUCTION
New programming language implementations usually
need to support a variety of di#erent hardware architectures,
because programmers demand portability. And because programmers
also demand e#ciency, new programming language
implementations often need to eventually generate
native code, whether at compile time, as in a traditional
compiler, or run time, for a JIT compiler.
However, implementing compiler back-ends or JITs that
generate e#cient native code for a variety of di#erent hardware
architectures is a di#cult and time-consuming task.
Furthermore, a considerable amount of ongoing maintenance
is required to generate e#cient code for each new chip with
di#erent performance characteristics or a new architecture.
Because of this, few programming language implementors
try to implement their own native-code generators. Instead,
many programming language implementors reuse one of the
existing back-end frameworks, in one of several ways: by
generating another more-or-less high-level language, such as
Java or C; by generating an intermediate language such as
C-- [16], Java byte-code, or MSIL (the intermediate language
of the .NET Common Language Runtime); or by interfacing
directly with a reusable back-end, such as GCC (the GNU
Compiler Collection back-end) or ML-RISC.
Unfortunately, however, most of these systems, and especially
most of the more mature and popular of them, do
not have any direct support for garbage collection, and the
ones that do, such as Java and MSIL, have their own draw-
backs, such as poor performance or being available only on
a restricted set of platforms.
Implementing garbage collection in systems that do not
have direct support for it poses some di#cult challenges; in
particular, it is hard for the garbage collector to trace the
system stack.
One widely-used solution to the problem of garbage collection
in uncooperative environments is the approach of
using conservative [4] or mostly-conservative [1, 22] collec-
tion. This approach can often deliver good performance.
However, conservative collection has some drawbacks; the
most significant of these is that the probabilistic nature of
conservative collection makes it unsuitable for very high-reliability
applications, but it also requires a small degree
of cooperation from the back-end, because certain compiler
optimizations are not safe in the presence of conservative
collection.
Another solution which has been used when compiling to
C [20] is to entirely avoid storing data on the C stack. This
can be done by implementing a virtual machine, with its
own stack and registers (which can be implemented as e.g.
C global variables). The source language is then compiled
to code which manipulates the virtual machine state; procedure
calls and parameter passing are handled by explicitly
manipulating the virtual machine stack and registers, rather
than using C function calls.
With this approach, the collector only needs to trace the
virtual machine stack, not the C stack. Since the source
language compiler has full control over the virtual machine
stack, tracing that stack is relatively straight-forward, and
so traditional techniques for accurate garbage collection can
be used. This approach also has the advantage that it can
overcome some of the other drawbacks of C, e.g. the lack of
support for proper tail recursion optimization [5]. However,
this approach discards many of the advantages of compiling
to a high-level language [13]. The source language compiler
must do its own stack slot and register allocation, and
the generated C code is very low-level. To make the code
e#cient, non-portable features must be used [14], and the
resulting system is complex and fragile. Furthermore, the
use of a di#erent calling convention makes interoperability
more di#cult.
We propose an alternative approach that allows fully type-
accurate and liveness-accurate garbage collection, thus allowing
the use of a normal copying collector, without requiring
any support from the back-end target language, and
while still generating code that uses the normal C function
calling mechanism. We describe this approach in the context
of compiling to C, although it would also work equally well
when interfacing directly to a compiler back-end framework,
such as the GCC back-end.
Our technique is formulated as a transformation on the
generated C code, which modifies the C code in such a way
as to insert calls to perform garbage collection when nec-
essary, and to provide the garbage collector with sufficient
information to trace and if necessary update any pointers on
the C stack. (The transformation is not entirely independent
of the front-end language, however; it requires information
from the source language front-end compiler about how to
trace each local variable.)
We have implemented this technique in the Mercury com-
piler, and we have run a number of benchmarks of this implementation
to investigate the overheads and performance
of our technique.
Section 2 describes our technique, showing how the code
is transformed and explaining how this avoids the difficul-
ties with compiler optimizations that can cause problems for
conservative collection. Section 3 describes how the technique
can be extended to support multithreaded applica-
tions. Section 4 evaluates the performance of our technique.
Section 5 discusses related work.
2. THE GC TRANSFORMATION
The basic idea is to put all local variables that might
contain pointers in structs, with one struct for each stack
frame, and chain these structs together as a linked list. We
keep a global variable that points to the start of this chain.
At GC time, we traverse the chain of structs. This allows
us to accurately scan the C stack.
For each function, we generate a struct for that function.
Each such struct starts with a sub-struct containing a couple
of fixed fields, which allow the GC to traverse the chain:
struct <function name>_frame {
    struct StackChain fixed_fields;
    /* the function's pointer-containing parameters and locals follow */
};
The fixed fields are as follows:
struct StackChain {
    struct StackChain *prev;
    void (*trace)(void *locals);
};
The 'prev' field holds a link to the entry for this function's
caller. The 'trace' field is the address of a function to trace
everything pointed to by this stack frame.
To ensure that the garbage collector does not try to traverse
uninitialized fields, we insert code to zero-initialize any
uninitialized fields of each struct before inserting it into the
chain.
We need to keep a link to the topmost frame on the stack.
There are two possible ways that this could be handled. One
way is to pass it down as a parameter. Each function would
get an extra parameter 'stack chain' which points to the
caller's struct. An alternative approach is to just have a
global variable 'stack chain' that points to the top of the
stack.
extern void *stack_chain;
We insert extra code to set this pointer when entering and
returning from functions. To make this approach thread-
safe, the variable would actually need to be thread-local
rather than global. This approach would probably work best
if the variable is a GNU C global register variable, which
would make it both efficient and thread-safe. If GNU C extensions
are not available, the function parameter approach
is probably best.
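As an illustration, with GNU C the declaration could replace the plain global shown above as follows; the choice of register is illustrative and architecture-specific, and this is not what the current Mercury implementation does.

/* GNU C global register variable: fast to access, and effectively
   thread-local because each thread has its own register set */
register void *stack_chain asm ("r12");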
In our implementation in the Mercury compiler, we are
currently just using a global variable, for simplicity.
2.1 Example
If we have a function
RetType
foo(Arg1Type arg1, Arg2Type arg2, ...)
{   Local1Type local1;
    Local2Type local2;
    ...
    local1 = new_object(...);
    bar(arg1, arg2, local1, &local2);
}
where new object() is the allocation primitive that allocates
garbage collected objects, and where say Arg1Type
and Local1Type might contain pointers, but Arg2Type and
Local2Type don't, then we would transform it as follows:
struct foo_frame {
    struct StackChain fixed_fields;
    Arg1Type arg1;
    Local1Type local1;
};
static void
foo_trace(void *frame) {
    struct foo_frame *locals = (struct foo_frame *) frame;
    /* code to trace locals->arg1
       and locals->local1, using the type-specific routines
       trace_Arg1Type() and trace_Local1Type() */
}
RetType
foo(Arg1Type arg1, Arg2Type arg2, ...)
{   struct foo_frame locals;
    Local2Type local2;
    locals.fixed_fields.prev = stack_chain;
    locals.fixed_fields.trace = foo_trace;
    locals.arg1 = arg1;
    locals.local1 = NULL;    /* zero-initialize before chaining */
    stack_chain = &locals;
    ...
    locals.local1 = new_object(...);
    bar(locals.arg1, arg2,
        locals.local1, &local2);
    ...
    stack_chain = ((struct StackChain *) stack_chain)->prev;
}
Here we are following Goldberg's approach to tag-free
garbage collection for strongly typed languages [11];
trace Arg1Type() and trace Local1Type() are type-specific
garbage collection routines that are generated by
the front-end compiler. However, unlike Goldberg, we are
primarily interested in adapting this approach to an uncooperative
environment; our technique would apply equally
well if using a tagged representation (in that case, only a
single trace object() function would be needed, instead of
one for each type) or if using a table-driven approach [7]
rather than a function per frame (in that case, the 'trace'
field of the 'StackChain' struct would be a data table rather
than a function pointer).
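As an illustration, a compiler-generated type-specific trace routine might look like the sketch below, for a hypothetical record type holding two heap-allocated strings. The names trace_NamePair() and copy_object() are assumptions (copy_object() stands for the collector's copying primitive, one possible shape of which is sketched at the end of this section), not part of the Mercury runtime. For a copying collector the routine both traces and updates each pointer field.

struct NamePair {
    char *first_name;      /* pointer into the collected heap */
    char *last_name;       /* pointer into the collected heap */
    int id;                /* non-pointer field: not traced */
};

extern void *copy_object(void *obj);   /* copy to the "to" heap, return new address */

void trace_NamePair(struct NamePair *p)
{
    /* the strings contain no further pointers, so copying them is all
       that is needed; for pointer-containing fields the generated code
       would recurse into the corresponding trace_<Type>() routine */
    p->first_name = copy_object(p->first_name);
    p->last_name  = copy_object(p->last_name);
}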
Finally, the code in the runtime system to traverse the
stack frames is as follows:
void
traverse_stack(struct StackChain *stack_chain)
{   while (stack_chain != NULL) {
        (*stack_chain->trace)(stack_chain);
        stack_chain = stack_chain->prev; }
}
The details of new object() will depend on the particular
garbage collection algorithm chosen. In our implementation,
new object() first checks for heap exhaustion, and then allocates
memory by incrementing a global heap pointer:
extern byte *heap_pointer;
#define new_object(type, size) \
    ( GC_check(size), \
      heap_pointer += (size), \
      (type *) (heap_pointer - (size)) )
Here GC check() compares the global heap pointer to another
global variable which points to the end of the heap,
and calls garbage collect() if the heap is near exhaustion.
extern byte *heap_gc_threshold;
#define GC_check(size) \
    ( (heap_pointer + (size) > heap_gc_threshold) \
      ? garbage_collect() : (void) 0 )
As is well known, it is possible to reduce the overhead of
checking for heap exhaustion by combining multiple checks
into a single check. However, our current implementation
does not perform that optimization.
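Using the macros above, a combined check would look roughly like the following sketch; new_object_unchecked() is a hypothetical variant of new_object() with the embedded GC_check() removed, and the function make_pair() is purely illustrative.

#define new_object_unchecked(type, size) \
    ( heap_pointer += (size), (type *) (heap_pointer - (size)) )

void
make_pair(int **a, int **b)
{
    /* one check covers both allocations */
    GC_check(2 * sizeof(int));
    *a = new_object_unchecked(int, sizeof(int));
    *b = new_object_unchecked(int, sizeof(int));
}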
The garbage collect() routine in our current implementation
implements a very simple copying collector [21].
byte *from_heap;
byte *to_heap;

void
garbage_collect(void)
{
    /* swap the "to" heap
       with the "from" heap */
    byte *tmp;
    tmp = from_heap;
    from_heap = to_heap;
    to_heap = tmp;
    /* reset the "to" heap */
    heap_pointer = to_heap;
    /* copy the live objects
       from the "from" heap
       to the "to" heap */
    traverse_stack((struct StackChain *) stack_chain);
    /* (tracing the global root list and resetting
       heap_gc_threshold for the new heap are not shown) */
}
Note that we keep a separate list of global roots, which is
used for global variables that might contain pointers, in addition
to the 'stack chain' list.
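The copying primitive itself is not shown in the code above; one possible shape for it is sketched below. The object header carrying a size field and a forwarding pointer is an assumption about object layout made for the sake of the sketch, not something prescribed by the technique, and byte is assumed to be unsigned char as elsewhere in this section.

#include <stddef.h>
#include <string.h>

typedef unsigned char byte;
extern byte *heap_pointer;        /* allocation pointer into the "to" heap */

struct ObjectHeader {
    size_t size;                  /* total size in bytes, including header */
    void *forward;                /* new address once copied, else NULL */
};

void *
copy_object(void *obj)
{
    struct ObjectHeader *hdr;
    void *copy;

    if (obj == NULL)
        return NULL;
    hdr = (struct ObjectHeader *) obj;
    if (hdr->forward != NULL)
        return hdr->forward;       /* already copied in this collection */
    copy = heap_pointer;
    heap_pointer += hdr->size;
    memcpy(copy, obj, hdr->size);
    ((struct ObjectHeader *) copy)->forward = NULL;
    hdr->forward = copy;           /* leave a forwarding pointer behind */
    return copy;
}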
2.2 Safety
Why, you might ask, doesn't this technique suffer from the
same problems with back-end compiler optimization that
cause trouble with conservative collection?
The reason that this technique works is that we are not
going behind the back-end compiler's back; everything is
done in strictly conforming C code. Conservative collectors
use non-portable techniques to trace the stack, but with our
approach the collector can trace the stack just by traversing
our linked list of structs. Although the code contains some
pointer casts, the behaviour of these casts is defined by the
C standard.
The back-end compiler cannot do any unsafe optimizations
on pointer variables such as locals.arg1 and
locals.local1, because we've stored the address of locals
in a global variable, and so it must assume that any call to
a function whose body is not known might update the fields
of locals.
Of course, inhibiting such optimizations in this way is a
two-edged sword. The advantage is that it ensures correct-
ness. The disadvantage is that it hurts performance.
The back-end compiler is free to perform function inlining
(or outlining, for that matter); our own "shadow stack"
of linked structs need not have any direct relationship with
the stack frames in the underlying machine code. Further-
more, the back-end compiler can do whatever fancy optimizations
it wants on non-pointer variables, such as arg2
and local2, and it can cache the values of pointer variables
such as locals.arg1 and locals.local1 in registers
between function calls (if it does appropriate alias analysis).
But in general it cannot cache the values of pointer variables
in registers across calls to non-inlined functions. This can
have a significant impact on performance.
A key contribution of this work is to measure the impact
on performance that this transformation has.
2.3 Improvements
The scheme described above is naive in certain respects.
There are several ways in which this scheme can be opti-
mized. Rather than putting all variables that might contain
pointers in the shadow-stack structs, it is sufficient to do
this only for such variables which are going to be live across
a point where garbage collection could occur, i.e. live across
an allocation or function call.
Another optimization is to not bother allocating a struct
for leaf functions that do not contain any function calls or
memory allocations.
Another possible optimization would be to use local variables
to cache the fields of the shadow-stack struct, if they
are referenced multiple times in a sequence of code where no
garbage collection could occur, so that the C compiler could
then allocate the local variables in registers.
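A sketch of what this caching might look like follows, with a hypothetical frame type and routine; inside a stretch of code with no calls or allocations, the field can live in an ordinary local (and hence a register), as long as any update is written back before the next potential collection point.

struct sum_frame {
    struct StackChain fixed_fields;
    int *data;                   /* pointer-containing local */
};

int
sum_example(void)
{
    struct sum_frame locals;
    /* ... chain locals in and allocate locals.data as in Section 2.1 ... */
    int *data = locals.data;     /* cache the field in a plain local */
    /* no calls or allocations in this stretch, so no collection
       can occur and the cached copy cannot go stale */
    return data[0] + data[1] + data[2];
}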
The only one of these optimizations implemented in our
current implementation is not allocating structs for functions
that do not have any pointer-containing variables.
2.4 Nested scopes and liveness-accuracy
An issue that we have not yet discussed is how to handle
variables declared in nested scopes within a function.
One way to handle them is to ignore the nesting and just
put all pointer-containing variables in the struct, regardless
of their scope. This requires first ensuring that no function
contains two declarations of the same variable name with
di#erent types in di#erent scopes (e.g. by renaming apart if
needed).
This approach is the one that we have used in our current
implementation. But it has two drawbacks: first, by extending
the lifetime of such variables to the whole function,
we may increase stack usage; and second, since the collector
will scan them, this may in turn also lead to unnecessary
heap retention.
The second drawback could be solved by inserting code to
zero out these variables when their scope is exited, or when
they are otherwise statically known to be dead. This would
ensure that the collector is "liveness-accurate" with respect
to the liveness of the local variables in the C code before
applying our transformation. Note that if such variables
were represented just as local variables in the final C code,
as would be the case when using a conservative collector,
it might not help to add code to assign zero to them at
the end of their lifetime, since the C compiler could just
"optimize" away the assignments. But with our approach,
where such variables are represented as fields of the 'locals'
struct, whose address has been taken and stored in a global
variable, we're safe from such unwanted optimizations.
Another way to handle nested scopes would be to use
unions to ensure that the storage for pointer-containing variables
declared in non-overlapping scopes is shared. However,
in order for the collector to be able to trace these unions,
we would need to store a discriminant for the union, which
recorded which scope (if any) was active, so that the trace()
function would know which union element to trace. Code
would need to be inserted to initialize these discriminants
on function entry, and to set the corresponding discriminant
on entry/exit from each nested scope.
A third way to handle nested scopes would be to treat each
nested scope as a separate stack frame, with the pointer-
containing variables allocated in a separate struct for each
nested scope. This would require adding code at the entry
of each nested scope to link the corresponding struct
into the stack chain list, and adding code at the exit point(s)
to remove it from the chain.
For both the second and third alternatives, the collector
would be to some degree liveness-accurate, in the sense that
variables would definitely not be scanned after their scope
has exited. However, if proper static liveness-accuracy is
desired, then for any pointer-containing variables which are
statically known to die before their scope has exited, additional
code would need to be inserted to zero out such
variables at the point of their death. (If the last use of such
a variable is as a function call argument, this implies copying
the variable from the stack frame struct to a local tempo-
rary, zeroing out the stack frame struct field, and then using
the local temporary in the call; zeroing out the stack frame
struct field after the call would be too late.)
3. MULTITHREADING
This system can also be extended to support multi-threading
(using a "stop the world" approach to collection)
in a fairly straight-forward manner, with little additional
overhead.
The "stop the world" approach means that when garbage
collection occurs, every mutator thread must advance to a
safe point, and stop executing the program; when all mutator
threads are stopped at a safe point, garbage collection
can begin. The collector itself can be either sequential or
parallel (single- or multi-threaded). The important thing
about this approach is that the collector never runs in parallel
with the mutator.
In our case, the "safe points" are the calls to
garbage collect().
To avoid the need for synchronization at allocations,
there should be a separate area of free heap space
for each thread. The 'stack chain', `heap pointer',
and 'heap gc threshold' variables all need to be made
thread-local. When one thread runs out of free heap
space, it can schedule a garbage collection by setting
the 'heap gc threshold' variables for every other
thread to point to a sentinel value (such as the start of
the heap) which will cause those threads to enter the
garbage collection() function when they next invoke
GC check(). The garbage collection() function can handle
the necessary synchronization with other threads.
To ensure that each thread will invoke GC check() within
a bounded amount of time, the compiler will need to insert
an additional call to GC check() in the body of any
long-running loops that do not do any heap allocation.
Operating system calls that can block will also need special
handling (space limits prevent us from elaborating on
that point here).
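A sketch of the per-thread declarations and of how one thread could request a collection is given below. All names are assumptions, __thread is the GNU C thread-local extension, the threads[] table is assumed to be filled in by each thread at start-up, and the pointer-sized store is assumed to be atomic on the target platform, as discussed below.

typedef unsigned char byte;

extern __thread byte *heap_pointer;                  /* per-thread allocation pointer */
extern __thread byte * volatile heap_gc_threshold;   /* may be written by other threads */

struct thread_info {
    byte * volatile *gc_threshold;  /* address of that thread's threshold variable */
    byte *heap_start;               /* sentinel: start of that thread's heap */
};

extern struct thread_info threads[];
extern int num_threads;

void
request_global_gc(void)
{
    int i;
    for (i = 0; i < num_threads; i++) {
        /* pointer-sized store, assumed atomic */
        *threads[i].gc_threshold = threads[i].heap_start;
    }
    /* every thread now enters garbage_collect() at its next GC_check();
       garbage_collect() performs the "stop the world" synchronization */
}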
As mentioned earlier, 'stack chain' and `heap pointer'
can be GCC global register variables; this remains the case
even with multithreading. However, 'heap gc threshold'
cannot be put in a register; it needs to be addressable so
that it can be assigned to by other threads. In addition,
because it can be modified by other threads, it needs to be
declared 'volatile'.
To summarize, the changes required to support multi-threading
are these: the extra synchronization code inside
garbage collection(), the 'volatile' qualifier on the
'heap gc threshold' variables, the extra calls to GC check()
for loops that don't do any heap allocation, and special
handling for OS calls that may block.

Program    Lines     Number of    Execution time (in seconds)    Ratios
           of code   iterations   hlc     hlc.gc   hlc.agc       hlc.gc/hlc   hlc.agc/hlc   hlc.agc/hlc.gc
cqueens     91        35000       0.70    1.29     1.28          1.84         1.83          0.99
crypt      132        20000       2.72    5.06     5.00          1.86         1.84          0.99
deriv      126        70000       0.15    0.75     0.79          5.00         5.27          1.05
poly       259         1200       0.57    2.89     3.37          5.07         5.91          1.17
primes      78        30000       1.12    2.03     2.58          1.81         2.30          1.27
qsort       64       100000       0.83    2.68     3.11          3.23         3.75          1.16
queens      85          100       2.31    4.78     4.55          2.07         1.97          0.95
query       96        30000       3.92    3.95     4.09          1.01         1.04          1.04
tak
Harmonic mean                                                    1.99         2.08          1.06

Figure 1: Benchmark results

While the extra synchronization
code in garbage collection() may be somewhat
costly, collections should not be too frequent, so the amortized
cost is small. Loops that don't do any heap allocation
are also likely to be rare, and in most cases the cost of an
extra GC check per loop iteration will be relatively small; if
the loop is small, the cost of checking can be amortized over
multiple loop iterations by performing loop unrolling prior
to inserting the extra GC checks. Finally, the cost of the
special handling needed for blocking OS calls is likely to be
small in comparison to the cost of a system call. Hence we
expect that overall, this technique should have little additional
overhead compared with the single-threaded version.
Since the C standard does not support multithread-
ing, it is not possible to do all this in strictly conforming
C. In addition to functions for creating and synchronizing
threads, such as provided by Posix threads,
the approach described above also requires that assignments
to the 'volatile', thread-local pointer variables
'heap gc threshold' be atomic. Technically this is not guaranteed
by the C or Posix standards, but most current platforms
do make assignments to appropriately aligned pointers
atomic, so it should be pretty portable in practice.
4. PERFORMANCE
The benchmark machine was a Gateway Select 1200
PC, with a 1200MHz AMD Athlon CPU, 64kb L1 instruction
cache, 64kb L1 data cache, 256k L2 cache,
and 256Mb RAM, running Debian GNU/Linux Woody
(testing), GNU libc 2.1, gcc 2.95.4, and Mercury rotd-
2002-04-19. Each benchmark was compiled with 'mmc -O5
-no-reclaim-heap-on-failure -no-deforestation'.
Times shown are each the best of three successive runs, on
an unloaded machine.
We used a set of small benchmarks that has previously
been used in benchmarking Mercury implementations [18].
Since the time for many of these benchmarks is very small,
we ran each benchmark for many iterations, and measured
the total time to execute all iterations.
(Ideally, it would be better to test with larger benchmarks.
But our current implementation does not yet support the
full Mercury language - higher-order code and type classes
are not supported because tracing of closures is not yet im-
plemented. This makes it difficult to find large benchmark
programs that work with our implementation.)
We compared three variants ("grades") of the Mercury
compiler. The 'hlc' grade allocates memory by just incrementing
a heap pointer, as described in this paper, but the
code does not test for heap overflow, and instead of performing
garbage collection, memory is reclaimed by just resetting
the heap pointer after each iteration of the benchmark. This
is not a realistic memory management strategy for most real
applications, but it represents a useful baseline.
The 'hlc.gc' grade uses the Boehm (et al) conservative
collector [4].
The 'hlc.agc' grade uses the accurate garbage collection technique
described in this paper. The collector is a simple
two-space copying collector. The heap size used was 128k
per space (i.e. 256k in total).
The results are shown in Figure 1.
On most of these benchmarks, the conservative collector
is a little faster than the accurate collector. On some, the
accurate collector is slightly faster. When averaged over
all the benchmarks, the conservative collector is 6% faster.
Both the conservative collector and the accurate collector
are much slower than the version which just resets the heap
pointer after each benchmark iteration.
Profiling indicates that for the 'hlc.agc' grade, very little
time (less than 1%) is spent in the garbage collection()
function. The slow-down compared to the 'hlc' grade probably
results from decreased locality and from the overhead
of storing local pointer variables on the stack rather than in
registers.
Given that the Boehm collector has had years of tweaking
and tuning, whereas our implementation is not yet well
optimized (e.g. it does not yet use GCC global register
variables), we consider this to be a reasonable result. We
conjecture that with additional work on optimization, this
approach can achieve performance as good as the Boehm
collector on most benchmarks. But the increased portability
and reliability of our approach make it desirable for some
uses regardless of whether there is a performance advantage
one way or the other.
In our current implementation, we have not yet obtained
the main benefit for logic programming languages of a copying
collector, namely that copying collectors can allow cheap
heap reclamation on backtracking, by just saving and restoring
the heap pointer. Doing this requires some additional
care in the garbage collector, to update the saved heap pointers
after garbage collection, which we have not yet implemented
properly.
5. RELATED WORK
The accepted wisdom of the community is that conservative
collection is the only approach that works in uncooperative
environments. For example, the highly experienced
and respected language implementor Robert Dewar
wrote "you can't do any kind of type accurate GC without
information from the compiler back end" (noting that
tagged architectures such as the CDC 6000 were an excep-
tion) [6]. Similarly, in the Garbage Collection FAQ [10],
David Chase states that when compiling to C or C++, re-location
of objects is "not generally possible" even if active
pointers are registered, "because compiler-generated temporaries
may also reference objects". However, this is certainly
not a problem for our technique, as explained in Section 2.2,
so this section of the FAQ is at best misleading.
A big part of the contribution of this paper is to dispute
that accepted wisdom, by demonstrating that it is possible
to implement fully type-accurate garbage collection within
such an environment.
A lot of earlier work in the literature addresses issues
which are quite close to the issues that we address, but using
different techniques, or uses very similar techniques, but for
a different purpose.
Boehm [4], Bartlett [1], and Yip [22] address the issue
of uncooperative environments, but use conservative or
mostly-conservative collection, rather than accurate collec-
tion. Boehm and Chase [2, 3] address the issue of safety of
conservative garbage collection in the presence of compiler
optimizations.
Shadow stacks have been used for debugging in the Berkeley
Sather [19] implementation, and in cdb [12]. But these
systems do not use shadow stacks for garbage collection;
Berkeley Sather uses the Boehm (et al) conservative collector.
Shadow stacks have been used for hand-coded garbage
collection in several systems that we are aware of, in particular
Emacs, GCC, and RT++ [8, 9, 17]. In these systems,
unlike ours, the code to register local variables is inserted
manually, and the shadow stack is a list or array of individual
variables rather than a list of frames. These systems are perhaps
closest to ours, but we are interested in automating this
technique as part of a programming language implementa-
tion, rather than using it to implement a garbage collection
library for a language with manual memory management.
As far as we know there has been no published work comparing
the performance of these hand-coded shadow stack
approaches with conservative collection. And as far as we
are aware, none of the published work on these addresses
multithreading.
Tarditi et al [20] describe an ML to C compiler that supports
accurate garbage collection. However, this compiler
works by emulating a virtual machine, rather than using
the normal C calling convention; this has the drawbacks
mentioned in the introduction.
The Glasgow Haskell compiler [15] has accurate garbage
collection and can compile via GNU C; but it relies on highly
non-portable techniques that involve munging the generated
assembler file after the GNU C back-end has finished with
it. The success of such techniques relies on continued cooperation
from the back-end compiler.
C-- [16], a portable assembly language that supports
garbage collection, is another approach that relies on cooperation
from the back-end compiler.
Wilson's survey describes a variety of different
garbage collection techniques, such as mark-sweep collection
and copying collection, but does not address the issue of how
to locate roots on the stack.
6. CONCLUSIONS
We have presented a scheme for performing accurate
garbage collection in an uncooperative environment, by a
simple transformation on the code passed to the uncooperative
compiler back-end framework. We have implemented
this scheme in the Mercury compiler, and measured its per-
formance, which is similar to that of the Boehm (et al)
conservative collector [4] on most of our benchmarks, even
though there are several important optimizations that we
have not yet implemented.
We have also described how this scheme can be extended
to handle multithreaded applications.
The source code for our system is freely available on
the web at <http://www.cs.mu.oz.au/mercury/download/
rotd.html>.
Acknowledgments
I would like to thank Tom Lord, for his comments on the
gcc mailing list about the advantages of precise collection;
Zoltan Somogyi, Andreas Rossberg, Tyson Dowd, Ralph
Becket, Jon Dell'Oro and the anonymous referees for their
comments on earlier drafts of this paper; and Microsoft and
the Australian Research Council, for their financial support.
7.
--R
Simple garbage-collector-safety
A proposal for garbage-collector-safe C compilation
Garbage collection in an uncooperative environment.
Proper tail recursion and space efficiency
Mail to the GCC mailing list
David Je
A copying collector for C
Dynamic storage reclamation in C
A machine-independent debugger
Compiling Mercury to high-level C code
Compiling logic programs to C using GNU C as a portable assembler.
Peyton Jones
The execution algorithm of Mercury
Sather revisited: A high-performance free alternative to C++
No assembly required: Compiling Standard ML to C.
Uniprocessor garbage collection techniques.
--TR
Garbage collection in an uncooperative environment
Tag-free garbage collection for strongly typed programming languages
Simple garbage-collector-safety
A machine-independent debugger
Proper tail recursion and space efficiency
Uniprocessor Garbage Collection Techniques
Run Time Type Information in Mercury
C--
Compiling Mercury to High-Level C Code
DYNAMIC STORAGE RECLAMATION IN C++ (M.S. Thesis)
--CTR
Martin Hirzel , Amer Diwan , Johannes Henkel, On the usefulness of type and liveness accuracy for garbage collection and leak detection, ACM Transactions on Programming Languages and Systems (TOPLAS), v.24 n.6, p.593-624, November 2002
Andreas Bauer, Creating a portable programming language using open source software, Proceedings of the USENIX Annual Technical Conference 2004 on USENIX Annual Technical Conference, p.40-40, June 27-July 02, 2004, Boston, MA | multithreading;garbage collection;programming language implementation |
512510 | Ordered binary decision diagrams as knowledge-bases. | We consider the use of ordered binary decision diagrams (OBDDs) as a means of realizing knowledge-bases, and show that, from the view point of space requirement, the OBDD-based representation is more efficient and suitable in some cases, compared with the traditional CNF-based and/or model-based representations. We then present polynomial time algorithms for the two problems of testing whether a given OBDD represents a unate Boolean function, and of testing whether it represents a Horn function. | Introduction
Logical formulae are one of the traditional means of representing knowledge in AI [11]. However,
it is known that deduction from a set of propositional clauses is co-NP-complete and abduction
is NP-complete [13]. Recently, an alternative way of representing knowledge, i.e., by a subset
of its models, which are called characteristic models, has been proposed (see e.g., [6, 7, 8, 9]).
Deduction from a knowledge-base in this model-based approach can be performed in linear time,
and abduction is also performed in polynomial time [6].
In this paper, we propose yet another method of knowledge representation, i.e., the use of
ordered binary decision diagrams (OBDDs) [1, 2, 12]. An OBDD is a directed acyclic graph
representing a Boolean function, and can be considered as a variant of decision trees. By
restricting the order of variable appearances and by sharing isomorphic subgraphs, OBDDs have
the following useful properties: 1) When a variable ordering is given, an OBDD has a reduced
canonical form for each Boolean function. 2) Many Boolean functions appearing in practice
can be compactly represented. 3) There are efficient algorithms for many Boolean operations
on OBDDs. As a result of these properties, OBDDs are widely used for various applications,
especially in computer-aided design and verification of digital systems (see e.g., [4, 14]). The
manipulation of knowledge-bases by OBDDs (e.g. deduction and abduction) was first discussed
by Madre and Coudert [10].
We first compare the above three representations, i.e., formula-based, model-based, and
OBDD-based, on the basis of their sizes. In particular, we show that, in some cases, OBDDs
require exponentially smaller space than the other two representations, while there are also cases
in which each of the other two requires exponentially smaller space. In other words, these three
representations are mutually incomparable with respect to space requirement.
It is known that OBDDs are efficient for knowledge-base operations such as deduction
and abduction [10]. We investigate two fundamental recognition problems of OBDDs, that is
testing whether a given OBDD represents a unate Boolean function, and testing whether it
represents a Horn function. We often encounter these recognition problems, since a knowledge-base
representing some real phenomenon is sometimes required to be unate or Horn, from the
hypothesis posed on the phenomenon and/or from the investigation of the mechanism causing
the phenomenon. For example, if the knowledge-base represents the data set of test results on
various physical measurements (e.g., body temperature, blood pressure, number of pulses and
so on), it is often the case that the diagnosis of a certain disease is monotonically depending
on each test result (we allow changing the polarities of variables if necessary). Also in artificial
intelligence, it is common to consider Horn knowledge-bases as they can be processed efficiently
in many respects (for example, deduction from a set of Horn clauses can be done in linear
time [5]). We show that these recognition problems for OBDDs can be solved in polynomial
time for both the unate and Horn cases.
The rest of this paper is organized as follows. The next section gives fundamental definitions
and concepts. We compare the three representations in Section 3, and consider the problems of
recognizing unate and Horn OBDDs in Sections 4 and 5, respectively.
2 Preliminaries
2.1 Notations and Fundamental Concepts
We consider a Boolean function f : {0,1}^n → {0,1}. An assignment is a vector a ∈ {0,1}^n,
whose i-th coordinate is denoted by a_i. A model of f is a satisfying assignment a of f, i.e.,
f(a) = 1, and the theory Σ(f) representing f is the set of all models of f. Given a, b ∈ {0,1}^n,
we denote by a ≤ b the usual bitwise (i.e., componentwise) ordering of assignments; a_i ≤ b_i for
i = 1, 2, ..., n. Given a subset E ⊆ {1, 2, ..., n}, χ^E denotes the characteristic
vector of E; the i-th coordinate of χ^E is 1 if i ∈ E, and 0 otherwise.
Let x_1, x_2, ..., x_n be the n variables of f, where each x_i corresponds to the i-th coordinate
of assignments and evaluates to either 0 or 1. The negation of a variable x_i is denoted by x̄_i.
Variables and their negations are called literals. A clause is a disjunction of some literals, and a
conjunction of clauses is called a conjunctive normal form (CNF). We say that f is represented
by a CNF φ if φ(a) = f(a) holds for all a ∈ {0,1}^n. Any Boolean function can be represented
by some CNF, which may not be unique.
We sometimes do not make a distinction among a function f, its theory Σ(f), and a
CNF φ that represents f, unless confusion arises. We define a restriction of f by replacing
a variable x_i by a constant a_i ∈ {0,1}, and denote it by f|_{x_i = a_i}.
Restriction may be applied to many variables. We also write f ≤ g if f(a) ≤ g(a) holds for all a ∈ {0,1}^n.
Lemma 2.1 The relation ≤ has the following properties:
(1) f ≤ g holds if and only if f|_{x_i = a_i} ≤ g|_{x_i = a_i} holds for both a_i = 0 and a_i = 1.
(2) f ∨ g ≤ h holds if and only if f ≤ h and g ≤ h hold.
For an assignment p ∈ {0,1}^n, we define a ≤_p b if (a ⊕_bit p) ≤ (b ⊕_bit p) holds, where ⊕_bit
denotes the bitwise (i.e., componentwise) exclusive-or operation. A Boolean function f is unate
with polarity p if f(a) ≤ f(b) holds for all assignments a and b such that a ≤_p b. A theory Σ is
unate if Σ represents a unate function. A clause is unate with polarity p if it contains only positive
literals x_i with p_i = 0 and negative literals x̄_i with p_i = 1. A CNF is unate with polarity p if
it contains only unate clauses with polarity p. It is known that a theory is unate if and only
if it can be represented by some unate CNF. A unate function is positive (resp., negative) if its
polarity is (00...0) (resp., (11...1)).
A theory Σ is Horn if Σ is closed under the operation ∧_bit, where a ∧_bit b is the bitwise AND of
models a and b. For example, if a = (0011) and b = (0101), then a ∧_bit b = (0001). The closure of
a theory Σ with respect to ∧_bit, denoted by Cl_∧(Σ), is defined as the smallest set that contains Σ
and is closed under ∧_bit. We also use the operation ∧_bit as a set operation; Σ(f) ∧_bit Σ(g) =
{b ∧_bit c | b ∈ Σ(f) and c ∈ Σ(g)}. We often denote Σ(f) ∧_bit Σ(g) by f ∧_bit g for
convenience. Note that the two functions f ∧ g and f ∧_bit g are different.
A Boolean function f is Horn if Σ(f) is Horn; equivalently, if f ∧_bit f ≤ f holds (as sets of
models). A clause is Horn if the number of positive literals in it is at most one, and a CNF is
Horn if it contains only Horn clauses. It is known that a theory is Horn if and only if it can be
represented by some Horn CNF. By definition, a negative function is Horn, but not conversely.
For any Horn theory Σ, a model a ∈ Σ is called characteristic if it cannot be produced by
bitwise AND of other models in Σ; that is, a ∉ Cl_∧(Σ \ {a}). The set of all characteristic models of
a Horn theory Σ, which we call the characteristic set of Σ, is denoted by Char(Σ). Note that
every Horn theory Σ has a unique characteristic set Char(Σ), which satisfies Cl_∧(Char(Σ)) = Σ.
The set of minimal models of f with respect to p ∈ {0,1}^n is defined as
min_p(Σ(f)) = {a ∈ Σ(f) | there exists no b ∈ Σ(f) satisfying b <_p a},
where b <_p a denotes that b ≤_p a and b ≠ a hold. The following lemma gives an upper bound
on the size (i.e., cardinality) of the characteristic set.
Lemma 2.2 [9] Let f be a Horn function on n variables. Then, the characteristic set of f has
size at most
ng and E n;i is the characteristic
vector of the set E n;i f0; ng given by
ng for
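For concreteness, the following small Python sketch (not from the paper) computes the closure under bitwise AND and the characteristic models of a theory represented as a set of n-bit integers; it reuses the vectors (0011), (0101) and (0001) mentioned above.

```python
from itertools import combinations

def and_closure(models):
    """Smallest superset of `models` closed under bitwise AND."""
    closure = set(models)
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closure), 2):
            c = a & b
            if c not in closure:
                closure.add(c)
                changed = True
    return closure

def characteristic_models(models):
    """Models that cannot be produced as the AND of the other models."""
    models = set(models)
    return {a for a in models if a not in and_closure(models - {a})}

# The theory {0011, 0101, 0001} is closed under AND (hence Horn), and
# 0001 = 0011 & 0101 is not characteristic.
theory = {0b0011, 0b0101, 0b0001}
assert and_closure(theory) == theory
assert characteristic_models(theory) == {0b0011, 0b0101}
```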
2.2 Ordered Binary Decision Diagrams
An ordered binary decision diagram (OBDD) is a directed acyclic graph that represents a Boolean
function. It has two sink nodes 0 and 1, called the 0-node and the 1-node, respectively (which
are together called the constant nodes). Other nodes are called variable nodes, and each variable
node v is labeled by one of the variables x_1, x_2, ..., x_n; we let var(v) denote the label of node v.
Each variable node has exactly two outgoing edges, called a 0-edge and a 1-edge, respectively.
One of the variable nodes becomes the unique source node, which is called the root node. Let
X = {x_1, x_2, ..., x_n} denote the set of n variables. A variable ordering is a total ordering
(x_σ(1), x_σ(2), ..., x_σ(n)) associated with each OBDD, where σ is a permutation of {1, 2, ..., n}.
The level of a node v, denoted by level(v), is defined by its label; if node v has
label x_σ(i), level(v) is defined to be n − i + 1. That is, the root node is in level n and has
label x_σ(1), the nodes in level n − 1 have label x_σ(2), and so on. The level of the constant nodes
is defined to be 0. On every path from the root node to a constant node in an OBDD, each
variable appears at most once in the decreasing order of their levels.
Every node v of an OBDD also represents a Boolean function f_v, defined by the subgraph
consisting of those edges and nodes reachable from v. If node v is a constant node, f_v equals
its label. If node v is a variable node, f_v is defined as ¬var(v)·f_{0-succ(v)} ∨ var(v)·f_{1-succ(v)}
by Shannon's expansion, where 0-succ(v) and 1-succ(v), respectively, denote the nodes pointed
to by the 0-edge and the 1-edge of node v. The function f represented by an OBDD is the one
represented by the root node. Figure 1 illustrates three OBDDs representing x_3 x_2 ∨ x_1 with a
3 This definition of level may be different from its common use.
Figure 1: Three OBDDs (a), (b) and (c) representing x_3 x_2 ∨ x_1 (legend: 0-edge, 1-edge, constant node, variable node).
variable ordering (x_3, x_2, x_1). Given an assignment a, the value of f(a) is determined by following
the corresponding path from the root node to a constant node in the following manner: at a
variable node v, one of the outgoing edges is selected according to the assignment a_{var(v)} to the
variable var(v). The value of the function is the label of the final constant node.
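The following Python sketch (an illustrative encoding, not the paper's data structure) shows one way to represent such an OBDD and to evaluate it by path following; for simplicity the label index of a node is taken to be its level, which matches the ordering (x_3, x_2, x_1) of Fig. 1.

```python
class Node:
    __slots__ = ("level", "lo", "hi")
    def __init__(self, level, lo, hi):
        self.level = level   # label x_level (label index = level here)
        self.lo = lo         # node reached by the 0-edge
        self.hi = hi         # node reached by the 1-edge

# constant nodes are represented by the integers 0 and 1

def evaluate(node, assignment):
    """Follow the path selected by `assignment` (a dict index -> 0/1)."""
    while not isinstance(node, int):
        node = node.hi if assignment[node.level] else node.lo
    return node

# The reduced OBDD of Fig. 1(c) for x3*x2 + x1:
n1 = Node(1, 0, 1)       # tests x1
n2 = Node(2, n1, 1)      # tests x2
root = Node(3, n1, n2)   # tests x3
assert evaluate(root, {3: 1, 2: 1, 1: 0}) == 1
assert evaluate(root, {3: 1, 2: 0, 1: 0}) == 0
```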
When two nodes u and v in an OBDD represent the same function, and their levels are the
same, they are called equivalent. A node whose 0-edge and 1-edge both point to the same node
is called redundant. An OBDD is called dense if every variable node v satisfies
level(0-succ(v)) = level(1-succ(v)) = level(v) − 1 (i.e.,
all paths from the root node to constant nodes visit n + 1
nodes). A dense OBDD which has no equivalent nodes is quasi-reduced. A quasi-reduced OBDD
which has no redundant nodes is reduced. The OBDDs (a), (b) and (c) in Fig. 1 are dense, quasi-
reduced and reduced, respectively. In the following, we assume that all OBDDs are reduced,
unless otherwise stated. The size of an OBDD is the number of nodes in the OBDD. Given
a function f and a variable ordering, its reduced OBDD is unique and has the minimum size
among all OBDDs with the same variable ordering. The minimum sizes of OBDDs representing
a given Boolean function depend on the variable orderings [2].
Given an OBDD that represents f, the OBDDs of f|_{x_i=0} and f|_{x_i=1} can be obtained in
O(|f| log |f|) time, where |f| denotes the size of the OBDD of f. The size does not increase by
a restriction. Given two OBDDs representing f and g, applying fundamental logic operators,
such as f ∧ g and f ∨ g, can be performed in O(|f|·|g|) time, and the property f ≤ g
can be also checked in O(|f|·|g|) time.
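As an illustration of restriction, here is a sketch reusing the Node class from the evaluation example above (it performs only the redundant-node elimination, not full reduction):

```python
def restrict(node, level, value, cache=None):
    """OBDD of f|_{x_level = value}; the result is never larger than the input."""
    cache = {} if cache is None else cache
    if isinstance(node, int):
        return node
    if id(node) not in cache:
        if node.level == level:
            cache[id(node)] = restrict(node.hi if value else node.lo,
                                       level, value, cache)
        else:
            lo = restrict(node.lo, level, value, cache)
            hi = restrict(node.hi, level, value, cache)
            # drop the node if it became redundant (both edges coincide)
            cache[id(node)] = lo if lo == hi else Node(node.level, lo, hi)
    return cache[id(node)]
```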
A partition for f is a pair of sets (L, R) satisfying L, R ⊆ X, L ∪ R = X
and L ∩ R = ∅; L is called a left partition and R is called a right partition. Let l denote an
assignment to the variables in L, and r denote an assignment to the variables in R. Then, l·r
denotes the complete assignment obtained by combining l and r. Let X′ be a subset of X, and
ω be a positive number satisfying 0 < ω < 1. Then, a partition (L, R) is called ω-balanced for
X′ if ω·|X′| ≤ |L ∩ X′| ≤ (1 − ω)·|X′|. Given a partition (L, R), a set A = {l_1·r_1, l_2·r_2, ..., l_h·r_h} of assignments,
with l_i for L and r_i for R, is called a fooling set if it satisfies
(i) f(l_i·r_i) = a for i = 1, 2, ..., h, and (ii) f(l_i·r_j) ≠ a or f(l_j·r_i) ≠ a for every pair i ≠ j,
for some a ∈ {0,1}. The next lemma tells that the size h of a fooling set gives a lower bound
on the size of an OBDD that represents f.
Lemma 2.3 [3] Let f be a Boolean function on n variables, X′ be a subset of the variables and
ω be a positive number satisfying 0 < ω < 1. If f has a fooling set of size at least h for every
ω-balanced partition (L, R) for X′, then the size of an OBDD representing f is at least h for any
variable ordering.
3 Three Approaches for Knowledge-Base
Representation
In this section, we compare three knowledge-base representations: CNF-based, model-based, and
OBDD-based. It is known that CNF-based and model-based representations play orthogonal
roles with respect to space requirement. Namely, each of them sometimes allows exponentially
smaller sizes than the other, depending on the functions. We show that OBDD-based representation
is incomparable to the other two in the same sense.
We start with relations between OBDD and CNF representations.
Lemma 3.1 There exists a negative theory on n variables, for which OBDD and CNF both
require size O(n), while its characteristic set requires size Ω(2^{n/2}).
Proof: Consider the function f_A = ⋀_{i=1,...,m} (x̄_{2i−1} ∨ x̄_{2i}), where n = 2m. The size of this CNF is obviously O(n). The characteristic set is given by
{a ∈ {0,1}^n | exactly one of a_{2i−1} or a_{2i} is 0 for all i = 1, 2, ..., m}, whose size is 2^{n/2}.
The OBDD representing f_A is illustrated in Fig. 2, with a variable ordering (x_1, x_2, ..., x_n).
The size of this OBDD is O(n). □
Figure 2: OBDD representing f_A.
Lemma 3.2 There exists a negative theory on n variables, for which OBDD requires size O(n)
and the characteristic set requires size O(n²), while CNF requires size Ω(2^{n/2}).
Proof: Consider a function
_
to f A . The smallest CNF representation of f B , which is given above, has
clauses. The
characteristic set is f f1;2;.;2mgS 2 f0; 1g 2m
whose size is O(n 2 ) [6]. The OBDD representing
f B is illustrated in Fig. 3, with a variable ordering
f B is dual to f A , this OBDD is obtained by negating input variables (i.e., exchanging the roles
of 0-edges and 1-edges) and negating output (i.e., exchanging the roles of the 0-node and the
1-node) of the OBDD in Fig. 2. The size of this OBDD is O(n). 2
By combining Lemmas 3.1 and 3.2, we show that, for some theory, OBDD can be exponentially
smaller than its characteristic set and CNF representations.
Theorem 3.1 There exists a negative theory on n variables, for which OBDD requires size
O(n), while both of the characteristic set and CNF require sizes Ω(2^{n/4}).
Figure 3: OBDD representing f_B.
Proof: Consider a function
^@
2m _
As shown in Lemma 3.1, the characteristic set requires size Ω(2^{n/4}) to represent
the first half. Also by Lemma 3.2, CNF representation always requires size Ω(2^{n/4}) to represent
the second half. Note the first and second halves are independent since the variables in the first
half do not appear in the second half and vice versa. Therefore, the above lower bounds of the
characteristic set and CNF are valid also for f C . An OBDD that represents f C is illustrated in
Fig. 4, with a variable ordering The size of this OBDD is O(n). 2
We now turn to the opposite direction, i.e., CNF and the characteristic set can be exponentially
smaller than the size of OBDD.
Lemma 3.3 The size of the characteristic set is O(n) for the following Horn function on n
variables x i;j , 1
f D =@
_
x i;jAA ^@
_
x i;j
as the set B n dened in Lemma 2.2,
for convenience, where E is the set E n;0 f(i; j)g
Figure 4: OBDD representing f_C.
corresponding to variable x i;j . f D holds for the characteristic vector E n;0 . Thus,
Similarly, jmin E n;i;j (f D
since f D
Next, since f D implied by
we enumerate all minimal models for each E n;i;m+1 . By denition, we obtain E n;0 by
ipping the (i; m+1)-th coordinate of E n;i;m+1 . This E n;0 is a minimal model for E n;i;m+1 since
When the (i; m+ 1)-th coordinate is xed to 0, the clause
is
satised by
ipping at least one of the (i; j)-th coordinates among
two or more (i; j)-th coordinates are
ipped, the corresponding vector is not minimal. Thus, we
have jmin E n;i;m+1 (f D m. Similarly, we have jmin E n;m+1;j (f D
We also enumerate all minimal models for En;m+1;m+1 since f D ( En;m+1;m+1
obtain E n;0 by
ipping the (m+1;m+1)-th coordinate. When the (m+1;m+1)-th coordinate
is xed to 0, minimal models are obtained by
ipping exactly one of the (i; m+1)-th coordinates
among exactly one of the (m coordinates among
Thus, we have jmin E n;m+1;m+1 (f D In total, we have P
a2Bn jmin a (f D
i.e., O(n). By Lemma 2.2, this means that the size of the characteristic set of f D is O(n). 2
Lemma 3.4 [15] Let f be a Boolean function on n variables x i;j , 1
Then, for any partition (L; R) satisfying either of the following properties
holds:
(1) There are at least m=
dierent i's satisfying fx
(2) There are at least m=
dierent j's satisfying fx
Lemma 3.5 The size of OBDD representing the following negative function f E on n variables
variable ordering:
_
x i;jAA ^@
_
x i;j
Proof: We prove this by Lemma 2.3 in Section 2.2. Let us consider that the set X 0 in
Lemma 2.3 is given by the set of all variables, and for every balanced partition
assuming case (1) of Lemma 3.4 without loss of generality, we have at least m=
pdierent i's satisfying fx We select
2 of these i's, I
g. For every i k 2 I, we can select two variables x i k ;l k 2 L
and x i k ;r k 2 R. We construct a set A of assignments such that each assignment satises the
following restrictions:
(1) For every assigned either (0; 1) or (1; 0).
(2) For every are assigned 1.
(3) Other variables are assigned 0.
The size of the set A is 2 m=
2 since there are choices in restriction (1). Let l h r h denote the
assignment satisfying
.
Now, we prove that set A is a fooling set, dened just before Lemma 2.3. First, we show
assigned 0,
all are assigned 0, we have W m
mg. Thus, we have f E (l h r h
Next, we show that f E (l h r h 0
there exists at least one
variable x which is assigned 1 by l h r h and 0 by l h 0
. By restriction (1), x i k ;r k is then
assigned 0 by l h r h and 1 by l h 0
. Therefore, x i k ;l k and x i k ;r k are assigned 1 by assignment
l h r h 0
, implying that
holds. This proves that A is a fooling set. Since the size of
this fooling set is at least 2^{Ω(m)} for any balanced partition, this lemma follows from Lemma 2.3. □
Theorem 3.2 There exists a Horn theory on n variables, for which CNF requires size O(n)
and the characteristic set requires size O(n), while the size of the smallest OBDD representation
is 2^{Ω(√n)}.
Proof: Consider the function f D in Lemma 3.3. As stated in Lemma 3.3, the size of its
characteristic set is O(n). Also the size of the CNF is obviously O(n). The function f E in
Lemma 3.5 is obtained by restricting x 1;m+1 ; . ; xm;m+1 , xm+1;1 ; . ; xm+1;m and xm+1;m+1 of
f D to 0. Since the size of OBDD does not increase by a restriction, the size of the smallest
OBDD of f D is
The above results show that none of the three representations can always dominate the
other two. OBDDs can nd a place in knowledge-bases as they can represent some theories
more e-ciently than others.
Unfortunately, by combining Theorems 3.1 and 3.2, the following negative result is obtained.
Corollary 3.1 There exists a Horn function on n variables, for which both of the characteristic
set and CNF require sizes 2^{Ω(n)}, and the size of the smallest OBDD representation is 2^{Ω(√n)}.
Proof: Consider a function which consists of two parts, where the rst one (resp., second
one) corresponds to f C in Theorem 3.1 (resp., f D in Theorem 3.2). Both have n=2 variables
respectively, and share none of the variables. Similarly to the case of Theorem 3.1, the lower
bounds for the three representations are easily obtained. 2
4 Checking Unateness of OBDD
In this section, we discuss the problem of checking whether a given OBDD represents a unate
function. We assume, without loss of generality, that the variable ordering is always (x_n, x_{n−1}, ..., x_1).
The following well-known property will show that this problem can be solved in polynomial
time.
Property 4.1 Let f be a Boolean function on n variables x_1, x_2, ..., x_n. Then f is unate with
polarity p if and only if f|_{x_i = p_i} ≤ f|_{x_i = p̄_i} holds for every i = 1, 2, ..., n.
As noted in subsection 2.2, an OBDD representing f|_{x_i = a_i} can be obtained
in O(|f| log |f|) time from the OBDD representing f, where |f| denotes its size. The size does
not increase by a restriction f|_{x_i = 0} or f|_{x_i = 1}. Since the property g ≤ h can be checked in
O(|g|·|h|) time, the unateness of f can be checked in O(n|f|²) time by checking the conditions of Property 4.1.
The following well-known property is useful to reduce the computation time.
Property 4.2 Let f be a Boolean function on n variables x_1, x_2, ..., x_n. Then f is unate with
polarity p if and only if (i) both f|_{x_n=0} and f|_{x_n=1} are unate with polarity (p_1, p_2, ..., p_{n−1}), and
(ii) f|_{x_n = p_n} ≤ f|_{x_n = p̄_n} holds.
The unateness of the functions f|_{x_n=0} and f|_{x_n=1} can be checked by applying Property 4.2
recursively, with an additional condition that f|_{x_n=0} and f|_{x_n=1} have the same polarity. Note
that the property f|_{x_n=0} ≤ f|_{x_n=1} (resp., f|_{x_n=0} ≥ f|_{x_n=1}) can be also checked recursively,
since it holds if and only if f|_{x_n=0, x_{n−1}=a_{n−1}} ≤ f|_{x_n=1, x_{n−1}=a_{n−1}} holds for both a_{n−1} = 0 and 1.
Algorithm CHECK-UNATE in Fig. 5 checks the above conditions in the bottom-up manner
(i.e., from level 1 to the root node). We use an array p[ℓ] to denote the polarity of f with
respect to x_ℓ in level ℓ; each element stores 0, 1 or − (not checked yet). We also use a two-dimensional
array imp[u, v] to denote whether f_u ≤ f_v holds or not; each element stores YES,
NO or − (not checked yet). In Step 2, the unateness with the polarity specified by array p is
checked for the functions of the nodes in level ℓ. More precisely, the unateness of the functions
is checked in Step 2-1, and the consistency of their polarities is checked in Step 2-2. In Step 3,
imp[u, v] are computed for the functions f_u and f_v in levels up to ℓ.
The unateness check of f_v in Step 2-1 can be easily done, since both f_{0-succ(v)} (i.e., f_v|_{x_ℓ=0})
and f_{1-succ(v)} (i.e., f_v|_{x_ℓ=1}) have already been checked to be unate with polarity (p[1], p[2], ...,
p[ℓ−1]), and f_{0-succ(v)} and f_{1-succ(v)} have been compared in Step 3 of the previous iteration.
Note that constant functions 0 and 1 are considered to be unate. The polarity of f_v with respect
to x_ℓ in level ℓ is temporarily stored in pol in Step 2-1.
In Step 2-2, the polarity consistency with respect to x_ℓ is checked by comparing the polarity
of node v (which is pol) and p[ℓ]. If p[ℓ] = − (i.e., v is the first node checked in level ℓ), we store
Algorithm CHECK-UNATE
Input: An OBDD representing f with a variable ordering (x_n, x_{n−1}, ..., x_1).
Output: "yes" and its polarity p if f is unate; otherwise, "no".
Step 1 (initialize). Set imp[u, v] := YES if u and v are constant nodes with (label of u) ≤ (label of v);
imp[u, v] := NO if u and v are constant nodes otherwise; imp[u, v] := − otherwise. Set p[ℓ] := − for all ℓ, and
ℓ := 1.
Step 2 (check unateness in level ℓ and compute p[ℓ]). For each node v in level ℓ (i.e.,
labeled with x_ℓ), apply Steps 2-1 and 2-2.
Step 2-1. Set pol := 0 if imp[0-succ(v), 1-succ(v)] = YES; set pol := 1 if imp[1-succ(v), 0-succ(v)] = YES; otherwise
output "no" and halt.
Step 2-2. If p[ℓ] = −, set p[ℓ] := pol; otherwise, if p[ℓ] ≠ pol, output
"no" and halt.
Step 3 (compute imp in level ℓ). For each pair of nodes (u, v) (where (u, v) and (v, u)
are considered different) such that level(u) ≤ ℓ and level(v) ≤ ℓ, and at least one of
level(u) and level(v) is equal to ℓ, set imp[u, v] := YES if both imp[0-succ′(u), 0-succ′(v)] and
imp[1-succ′(u), 1-succ′(v)] are YES; otherwise, set imp[u, v] := NO.
Step 4 (iterate). If ℓ is the level of the root node, then output "yes" and polarity p, and
halt. Otherwise set ℓ := ℓ + 1 and return to Step 2.
Figure 5: Algorithm CHECK-UNATE to check the unateness of an OBDD.
pol in p[ℓ]. Otherwise, pol is checked against p[ℓ] and "no" is output if they are not consistent.
Note that CHECK-UNATE outputs p[ℓ] = − if there are no nodes in level ℓ (i.e., f does not
depend on x_ℓ).
In Step 3, the comparison between f_u and f_v is also performed easily, since the comparisons
between f_u|_{x_ℓ=a_ℓ} and f_v|_{x_ℓ=a_ℓ} for both a_ℓ = 0 and 1 are already available. Here we
use the convention that 0-succ′(v) (resp., 1-succ′(v)) denotes 0-succ(v) (resp., 1-succ(v)) if
level(v) = ℓ, and v itself if level(v) < ℓ. This is because f_v|_{x_ℓ=0} = f_{0-succ(v)} and f_v|_{x_ℓ=1} = f_{1-succ(v)}
hold if level(v) = ℓ, while f_v|_{x_ℓ=0} = f_v|_{x_ℓ=1} = f_v holds if level(v) < ℓ. Note
that f_u = f_v holds if and only if u and v are the same node. After Step 3 is done for some ℓ, we
know imp[u, v] for all pairs of nodes u and v such that level(u) ≤ ℓ and level(v) ≤ ℓ. We store
all the results, although some of them may not be needed.
Next, we consider the computation time of this algorithm. In Step 2, the computation for
each v is performed in constant time from the data already computed in the previous Step 3.
Thus the total time of Step 2 is O(|f|). In Step 3, the comparison between f_u and f_v for each
pair (u, v) is performed in constant time. The number of pairs compared in Step 3 during the
entire computation is O(|f|²). The time for the rest of the computation is minor.
Theorem 4.1 Given an OBDD representing a Boolean function f, checking whether f is unate
can be done in O(|f|²) time, where |f| is the size of the given OBDD.
If we start Algorithm CHECK-UNATE with the initial condition p[ℓ] := 0 (resp., p[ℓ] := 1)
for all ℓ, we can check the positivity (resp., negativity) of f. This is because f is
positive (resp., negative) if and only if the polarities of all nodes are 0 (resp., 1).
Corollary 4.1 Given an OBDD representing a Boolean function f, checking whether f is positive
(resp., negative) can be done in O(|f|²) time, where |f| is the size of the given OBDD.
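To make the idea concrete, here is a Python sketch in the spirit of CHECK-UNATE. It is a simplified recursive variant with memoization rather than the paper's bottom-up arrays, it assumes a reduced OBDD, and it reuses the Node class from the sketch in Section 2.2 (a node's level equals the index of its label variable).

```python
from functools import lru_cache

def succ(v, level, which):
    # the 0-succ'/1-succ' convention: a node below `level` is its own successor
    if isinstance(v, int) or v.level < level:
        return v
    return v.hi if which else v.lo

@lru_cache(maxsize=None)
def implies(u, v):
    """Does f_u <= f_v hold?  (Recursive analogue of the imp array.)"""
    if isinstance(u, int) and isinstance(v, int):
        return u <= v
    level = max(x.level for x in (u, v) if not isinstance(x, int))
    return (implies(succ(u, level, 0), succ(v, level, 0)) and
            implies(succ(u, level, 1), succ(v, level, 1)))

def check_unate(root):
    """Return {level: polarity bit} if the OBDD is unate, otherwise None."""
    polarity, seen = {}, set()
    def visit(v):
        if isinstance(v, int) or id(v) in seen:
            return True
        seen.add(id(v))
        if implies(v.lo, v.hi):
            pol = 0                      # non-decreasing in this variable
        elif implies(v.hi, v.lo):
            pol = 1                      # non-increasing in this variable
        else:
            return False
        if polarity.setdefault(v.level, pol) != pol:
            return False                 # inconsistent polarity within one level
        return visit(v.lo) and visit(v.hi)
    return polarity if visit(root) else None
```

For the OBDD of Fig. 1(c) built in the earlier sketch, check_unate(root) maps every level to polarity 0, confirming that x_3 x_2 ∨ x_1 is positive.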
5 Checking Horness of OBDD
In this section, we discuss the problem of checking whether a given OBDD represents a Horn
function. After examining the condition for Horness in the next subsection, an algorithm will
be given in subsection 5.2.
5.1 Conditions for Horness
We assume, without loss of generality, that the variable ordering is always (x_n, x_{n−1}, ..., x_1).
Denoting f|_{x_n=0} and f|_{x_n=1} by f_0 and f_1 for simplicity, f is given by f = x̄_n f_0 ∨ x_n f_1,
where f_0 and f_1 are Boolean functions on x_1, x_2, ..., x_{n−1}. Similarly to the case
of unateness, we check the Horness of f in the bottom-up manner.
Lemma 5.1 Let f be a Boolean function on n variables x_1, x_2, ..., x_n, which is expanded as
f = x̄_n f_0 ∨ x_n f_1. Then f is Horn if and only if both f_0 and f_1 are Horn and f_0 ∧_bit f_1 ≤ f_0
holds.
Proof: We first prove the identity
by considering all models:
Now, the if-part of the lemma is immediate from (1), because the Horness of f 0 and f 1 (i.e.,
and the property f imply
Next, we consider the imply
Equality (2) implies that f 1 is Horn. Also f holds since a ^ bit a = a holds
for any model a in (f 0 ). Thus, we have
By combining (3) and (4), we have f holds if and
only if g f holds, we also have f
The Horness of f_0 and f_1 can be checked by applying Lemma 5.1 recursively. The following
lemma says that the condition f_0 ∧_bit f_1 ≤ f_0 in Lemma 5.1 can be also checked in the bottom-up
manner.
Lemma 5.2 Let f, g and h be Boolean functions on n variables, which are expanded as
f = x̄_n f_0 ∨ x_n f_1, g = x̄_n g_0 ∨ x_n g_1 and h = x̄_n h_0 ∨ x_n h_1. Then f ∧_bit g ≤ h
holds if and only if f_1 ∧_bit g_1 ≤ h_1, f_1 ∧_bit g_0 ≤ h_0, f_0 ∧_bit g_1 ≤ h_0 and f_0 ∧_bit g_0 ≤ h_0 hold.
Proof: The identity
f ∧_bit g = x̄_n((f_0 ∧_bit g_0) ∨ (f_0 ∧_bit g_1) ∨ (f_1 ∧_bit g_0)) ∨ x_n(f_1 ∧_bit g_1)
can be proved in a manner similar to (1) by considering all models. Then, since f ∧_bit g ≤ h
holds if and only if (f ∧_bit g)|_{x_n=0} ≤ h_0 and (f ∧_bit g)|_{x_n=1} ≤ h_1
hold, we can prove this lemma by Lemma 2.1(2). □
Note that the condition of type f ∧_bit g ≤ f in Lemma 5.1 requires to check the condition
of a more general type (i.e., checking of type f ∧_bit g ≤ h for three functions f, g and h). The
last condition can be checked recursively by Lemma 5.2.
5.2 Algorithm to Check Horness
Applying Lemmas 5.1 and 5.2 recursively, the Horness of a Boolean function f can be checked
by scanning all nodes in a given OBDD in the bottom-up manner. Namely, for each node v in
level ℓ, we check the condition of Lemma 5.1, i.e., whether both f_v|_{x_ℓ=0} and f_v|_{x_ℓ=1} are Horn
and f_v|_{x_ℓ=0} ∧_bit f_v|_{x_ℓ=1} ≤ f_v|_{x_ℓ=0} holds. Lemma 5.2 gives the condition how f_v|_{x_ℓ=0} ∧_bit f_v|_{x_ℓ=1} ≤ f_v|_{x_ℓ=0}
can be checked in the bottom-up manner.
Algorithm CHECK-HORN in Fig. 6 checks the Horness of a given OBDD in the above
manner. We use an array horn[v] to denote whether each node v represents a Horn function
or not, and a three-dimensional array bit-imp[u, v, w] to denote whether f_u ∧_bit f_v ≤ f_w holds
or not. Each element of the arrays stores YES, NO or − (not checked yet); horn[v] = YES
says that f_v is Horn and bit-imp[u, v, w] = YES says that f_u ∧_bit f_v ≤ f_w holds. We note here
that, since the OBDD is reduced, the condition f_u ∧_bit f_v ≤ f_w may be checked for functions
in different levels; in such a case, all functions are considered to have l_max variables by adding
dummy variables, where l_max denotes the maximum level of the nodes u, v and w.
In Step 2 of Algorithm CHECK-HORN, horn[v] for each v can be computed in constant time
by Fig. 7, which corresponds to Lemma 5.1, since f_v|_{x_level(v)=0} = f_{0-succ(v)} and f_v|_{x_level(v)=1} =
f_{1-succ(v)} hold, and horn[0-succ(v)], horn[1-succ(v)] and bit-imp[0-succ(v), 1-succ(v), 0-succ(v)]
in Fig. 7 have already been computed in the previous iterations.
Similarly, bit-imp[u, v, w] in Step 3 for each triple (u, v, w) can be computed in constant
time by Fig. 8, which corresponds to Lemma 5.2. As in the case of Algorithm CHECK-UNATE,
0-succ′(v) (resp., 1-succ′(v)) denotes 0-succ(v) (resp., 1-succ(v)) if level(v) = ℓ, and v
itself if level(v) < ℓ. Upon completing Step 3 for ℓ, we have the results bit-imp[u, v, w] for all
triples (u, v, w) such that level(u) ≤ ℓ, level(v) ≤ ℓ and level(w) ≤ ℓ. These contain all the
information required in the next iteration, although some of them may not be needed.
Now, we consider the computation time of Algorithm CHECK-HORN. In Step 2, since
horn[v] for each node v is computed in constant time, O(|f|) time is required for checking all
Algorithm CHECK-HORN
Input: An OBDD representing f with a variable ordering (x_n, x_{n−1}, ..., x_1).
Output: "yes" if f is Horn; otherwise, "no".
Step 1 (initialize). Set
horn[v] := YES if v is a constant node 0 or 1; − otherwise;
bit-imp[u, v, w] := YES if u, v and w are constant nodes with (label of u)·(label of v) ≤ (label of w);
NO if u, v and w are constant nodes otherwise; − otherwise;
ℓ := 1.
Step 2 (check Horness in level ℓ). For each node v in level ℓ (i.e., labeled with x_ℓ), check
whether the function f_v is Horn according to Fig. 7, and set its result YES or NO to horn[v]. If
there exists at least one node in level ℓ which is not Horn, output "no" and halt.
Step 3 (compute bit-imp in level ℓ). For each triple (u, v, w) of nodes such that level(u) ≤ ℓ,
level(v) ≤ ℓ and level(w) ≤ ℓ, and at least one of level(u), level(v) and level(w) is equal to ℓ,
check whether f_u ∧_bit f_v ≤ f_w holds according to Fig. 8, and set its result YES or NO to
bit-imp[u, v, w].
Step 4 (iterate). If ℓ is the level of the root node, output "yes" and halt. Otherwise set ℓ := ℓ + 1 and return to
Step 2.
Figure 6: Algorithm CHECK-HORN to check the Horness of an OBDD.
horn[v] :=
YES if all of horn[0-succ(v)], horn[1-succ(v)] and bit-imp[0-succ(v), 1-succ(v), 0-succ(v)]
are YES.
NO otherwise.
Figure 7: Checking horn[v] for a node v in Step 2.
bit-imp[u, v, w] :=
YES if all of bit-imp[1-succ′(u), 1-succ′(v), 1-succ′(w)], bit-imp[0-succ′(u), 1-succ′(v), 0-succ′(w)],
bit-imp[0-succ′(u), 0-succ′(v), 0-succ′(w)] and bit-imp[1-succ′(u), 0-succ′(v), 0-succ′(w)] are YES.
NO otherwise.
Figure 8: Checking bit-imp[u, v, w] (i.e., f_u ∧_bit f_v ≤ f_w)
for a triple of nodes (u, v, w) in Step 3.
nodes in the OBDD. In Step 3, bit-imp[u, v, w] for each triple (u, v, w) is also computed in
constant time. The number of triples to be checked in Step 3 during the entire computation is
O(|f|³). The time for the rest of the computation is minor.
Theorem 5.1 Given an OBDD representing a Boolean function f, checking whether f is Horn
can be done in O(|f|³) time, where |f| is the size of the given OBDD.
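In the same style, here is a memoized recursive sketch of the Horn test (again not the paper's bottom-up three-dimensional array; it reuses Node and succ from the CHECK-UNATE sketch above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def bit_imp(u, v, w):
    """Does f_u AND_bit f_v <= f_w hold?  (Recursion following Lemma 5.2.)"""
    if all(isinstance(x, int) for x in (u, v, w)):
        return (u & v) <= w
    level = max(x.level for x in (u, v, w) if not isinstance(x, int))
    u0, u1 = succ(u, level, 0), succ(u, level, 1)
    v0, v1 = succ(v, level, 0), succ(v, level, 1)
    w0, w1 = succ(w, level, 0), succ(w, level, 1)
    return (bit_imp(u1, v1, w1) and bit_imp(u1, v0, w0) and
            bit_imp(u0, v1, w0) and bit_imp(u0, v0, w0))

@lru_cache(maxsize=None)
def is_horn(v):
    """Lemma 5.1 applied recursively: constant nodes are Horn."""
    if isinstance(v, int):
        return True
    return is_horn(v.lo) and is_horn(v.hi) and bit_imp(v.lo, v.hi, v.lo)
```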
6 Conclusion
In this paper, we considered the use of OBDDs to represent knowledge-bases. We have shown that
the conventional CNF-based and model-based representations, and the new OBDD representation
are mutually incomparable with respect to space requirement. Thus, OBDDs can find their
place in knowledge-bases, as they can represent some theories more efficiently than others.
We then considered the problem of recognizing whether a given OBDD represents a unate
Boolean function, and whether it represents a Horn function. It turned out that checking
unateness can be done in time quadratic in the size of the OBDD, while checking Horness can be
done in cubic time.
OBDDs are dominantly used in the field of computer-aided design and verification of
digital systems. The reason for this is that many Boolean functions which we encounter in
practice can be compactly represented, and that many operations on OBDDs can be efficiently
performed. We believe that OBDDs are also useful for manipulating knowledge-bases. Developing
efficient algorithms for knowledge-base operations such as deduction and abduction should
be addressed in future work.
Acknowledgement
The authors would like to thank Professor Endre Boros of Rutgers University for his valuable
comments. This research was partially supported by the Scientific Grant-in-Aid from the Ministry
of Education, Science, Sports and Culture of Japan.
--R
Theoretical Studies on Memory-Based Parallel Computation and Ordered Binary Decision Diagrams
--TR
A theory of the learnable
Graph-based algorithms for Boolean function manipulation
On the Complexity of VLSI Implementations and Graph Representations of Boolean Functions with Application to Integer Multiplication
Efficient implementation of a BDD package
Sequential circuit verification using symbolic model checking
Shared binary decision diagram with attributed edges for efficient Boolean function manipulation
Structure identification in relational data
An empirical evaluation of knowledge compilation by theory approximation
The complexity of logic-based abduction
Horn approximations of empirical data
Reasoning with models
Logic synthesis for large pass transistor circuits
Fast exact minimization of BDDs
Approximation and decomposition of binary decision diagrams
Doing two-level logic minimization 100 times faster
BDS
OBDDs of a Monotone Function and of Its Prime Implicants
On Horn Envelopes and Hypergraph Transversals
Reasoning with Ordered Binary Decision Diagrams
Translation among CNFs, Characteristic Models and Ordered Binary Decision Diagrams
--CTR
Takashi Horiyama , Toshihide Ibaraki, Translation among CNFs, characteristic models and ordered binary decision diagrams, Information Processing Letters, v.85 n.4, p.191-198, 28 February | unate functions;recognition problems;ordered binary decision diagrams OBDDs;automated reasoning;horn functions;knowledge representation |
512523 | Objects and classes in Algol-like languages. | Many object-oriented languages used in practice descend from Algol. With this motivation, we study the theoretical issues underlying such languages via the theory of Algol-like languages. It is shown that the basic framework of this theory extends cleanly and elegantly to the concepts of objects and classes. Moreover, a clear correspondence emerges between classes and abstract data types, whose theory corresponds to that of existential types. Equational and Hoare-like reasoning methods and relational parametricity provide powerful formal tools for reasoning about Algol-like object-oriented programs. 2002 Elsevier Science (USA) | Introduction
Object-oriented programming first developed in the context
of Algol-like languages in the form of Simula 67 [17]. The
majority of object-oriented languages used in practice claim
either direct or indirect descent from Algol. Thus, it seems
entirely appropriate to study the concepts of object-oriented
programming in the context of Algol-like languages. This
paper is an effort to formalize how objects and classes are
used in Algol-like languages and to develop their theoretical
underpinnings.
The formal framework we adopt is the technical notion
of "Algol-like languages" defined by Reynolds [51]. The
Idealized Algol of Reynolds is a typed lambda calculus with
base types that support state-manipulation (for expressions,
commands, etc. The typed lambda calculus framework
gives a "mathematical" flavor to Idealized Algol and sets
it within the broader programming language research. Yet,
the base types for state-manipulation make it remarkably
close to practical programming languages. This combination
gives us an ideal setting for studying various programming
language phenomena of relevance to practical languages like
C++, Modula-3, Java etc.
Reynolds also argued [50, Appendix] that object-oriented
programming concepts are implicit in his Idealized Algol.
The essential idea is that classes correspond to "new" operators
that generate instances every time they are invoked.
This obviates the need for a separate "class" concept. The
idea has been echoed by others [46, 2]. In contrast, we
take here the position that there is significant benefit to
directly representing object-oriented concepts in the formal
system instead of encoding them by other constructs. While
the effect of classes can be obtained by their corresponding
"new" operators, not all properties of classes are exhibited
by the "new" operators. Thus, classes form a specialized
form of "new" operators that are of independent interest.
In this paper, we define a language called IA + as an
extension of Idealized Algol for object-oriented programming
and study its semantics and formal properties. An
important idea that comes to light is that classes are abstract
data types whose theory corresponds to that of existential
types [35]. In a sense, IA + is to Idealized Algol what SOL is
to polymorphic lambda calculus. However, while SOL can
be faithfully encoded in polymorphic lambda calculus [45],
IA + is more constrained than Idealized Algol. The corresponding
encoding does not preserve equivalences. Thus,
IA + is a proper extension.
Related work A number of papers [19, 1, 11, 18] discuss
object-oriented type systems for languages with side effects.
It is not clear what contribution these type systems make
to reasoning principles for programs. A related direction is
that of "object encodings." Pierce and Turner [44] study
the encoding of objects as abstract types, which bears some
similarity to the parametricity semantics in this paper. More
recent work along this line is [12]. Fisher and Mitchell [20]
also relate classes to data abstraction. This work assumes
a functional setting for objects, but some of the ideas deal
with "state." Work on specification of stateful objects includes
[5, 28, 29, 30] in addressing subtyping issues and [3, 6]
in addressing self-reference issues. The major developments
in the research on Algol-like languages are collected in [43].
Tennent [58] gives a gentle introduction to the concepts as
of 1994.
2 The language IA+
The language IA+ is an extension of Idealized Algol with
classes. Thus, it is a typed lambda calculus with base types
corresponding to imperative programming phrases. The
base types include:
- comm, the type of commands or state-transformers,
- exp[δ], the type of state-dependent expressions giving
δ-typed values, and
- val[δ], the type of phrases that directly denote δ-typed
values (without any state-dependence).
Here, δ ranges over a collection of "data types" such as
int(eger) and bool(ean) whose values are storable in variables.
The "types" like exp[δ] and comm are called "phrase
types" to distinguish them from data types. Values of arbitrary
phrase types are not storable in variables. 1
The collection of phrase types (or "types," for short) is
given by the following syntax:
θ ::= β | θ1 → θ2 | θ1 × θ2 | {ι1: θ1, ..., ιk: θk} | cls θ
where β ranges over base types (exp[δ], comm and val[δ]).
Except for cls θ types, the remaining type structure is that
of simply typed lambda calculus with record types and subtyping.
See, for instance, Mitchell [34, Ch. 10] for details.
The basic subtypings include
- val[δ] <: exp[δ],
- (val[δ] → θ) <: (exp[δ] → θ), for a collection of types θ
called "state-dependent" types, and
- the standard record subtyping ("width" as well as "depth"
subtyping).
Our interpretation of subtyping is by coercions [34, Sec. 10.4.2].
The parameter passing mechanism of IA+ is call-by-name
(as is usual with typed lambda calculus). The second coercion
above makes available Algol's notion of call-by-value.
An "expression" argument can be supplied where a "value"
is needed.
The type cls ' is the type of classes that describe the
behavior of '-typed objects. An "object" is an abstraction
that encapsulates some internal state represented by "fields"
and provides externally visible operations called "methods."
A class defines the fields and methods for a collection of
objects, which are then called its "instances." The distinction
between classes and instances arises because objects are
stateful. (If a class is stateless, then there is no observable
difference between its instances and there would be little
point in making the class-instance distinction.) Classes represent
the abstract (or "mathematical") concept of a behavior
instances represent the concrete (or "physical")
realizations of the behavior.
For defining classes, we use a notation of the form:
class θ
fields C1 x1, ..., Ck xk
methods M
init A
The various components of the description are as follows:
- θ is a type (the type of all instances of this class), called
the signature of the class,
- x1, ..., xk are identifiers (for the fields),
- C1, ..., Ck are terms denoting classes (of the respective
fields),
- M is a term of type θ (defining the methods of the
class), and
1 It is possible to postulate a data type of references (or pointers)
ref θ, for every phrase type θ, whose values are storable in variables.
This obtains the essential expressiveness that the object-oriented
programmer desires. Unfortunately, our theoretical understanding
of references is not well-developed. So, we omit them from the main
presentation and mention issues relating to them in Sec. 4.3.
- A is a comm-typed term (for initializing the fields).
Admittedly, this is a complex term form but it represents
quite closely the term forms for classes in typical programming
languages. Moreover, we will see that much of this
detail has a clear type-theoretic basis.
It is noteworthy that we cannot define nontrivial classes
without first having some primitive classes (needed for defining
fields). We will assume a single primitive class for (mutable)
variables via the constant:
Var[δ] : cls {get: exp[δ], put: val[δ] → comm}
If x is an instance of Var[δ] (a "variable"), then x.get is a
state-dependent expression that gives the value stored in x
and x.put(k) is a command that stores the value k in x. 2
We often use the abbreviation:
var[δ] = {get: exp[δ], put: val[δ] → comm}
for the signature type of variables. We assume the subtypings:
var[δ] <: exp[δ]   and   var[δ] <: (val[δ] → comm)
whose coercion interpretations are the corresponding field
selections.
Note that the type var[δ] is different from the class Var[δ].
Values of type var[δ] need not be, in general, instances of
Var[δ]. For instance, the following (trivial) class has instances
of type var[int]:
class var[int]
fields (none)
methods {get = 0, put = λk. skip}
init skip
Instances of this class always give 0 for the get message and
do nothing in response to a put message. Yet they have
the type var[int]. In essence, the type of an object merely
gives its signature (the types of its methods), whereas its
class defines its behavior. A tighter integration of classes
and types would certainly be desirable. We return to this
issue in Sec. 4.1.
As an example of a nontrivial class, consider the following
class of counter objects:
Counter = class {inc: comm, val: exp[int]}
fields Var[int] cnt
methods {inc = (cnt.put := cnt.get + 1), val = cnt.get}
init cnt.put(0)
A counter has a state variable for keeping a count; the inc
method increments the count and the val method returns
the count. (The definition of the inc method could have
also been written as cnt := cnt + 1 using the subtypings of
var[δ]. We use explicit coercions for clarity.)
2 We assume that all new variables come initialized to some specific
initial value init_δ. It is also possible to use a modified primitive of type
val[δ] → cls var[δ] that allows explicit initialization via a
parameter.
One would want a variety of combinators for classes. The
following "product" combinator for making pairs of objects
is an essential primitive:
⊗ : cls θ1 × cls θ2 → cls (θ1 × θ2)
An instance of a class C1 ⊗ C2 is a pair consisting of an
instance of C1 and an instance of C2. Other useful combinators
abound. For instance, the following combinator is
motivated by the work on "fudgets" [14]:
!? : (θ2 → cls θ1) × (θ1 → cls θ2) → cls (θ1 × θ2)
An instance of F1 !? F2 is a pair (a, b) where a is an instance
of F1(b) and b an instance of F2(a). The two objects are
thus interlinked at creation time using mutual recursion.
Common data structures in programming languages such
as arrays and records also give rise to class combinators.
The array data structure can be regarded as a combinator
of type:
array : cls θ × val[int] → cls (val[int] → θ)
so that (array C n) is equivalent to an n-fold product C ⊗ · · · ⊗
C, viewed as a (partial) function from integers to C-objects.
The record construction
record C1 x1, ..., Cn xn
is essentially like C1 ⊗ · · · ⊗ Cn except that its instances are
records instead of tuples.
For creating instances of classes, we use the notation:
new C
which is a value of type (θ → comm) → comm, where θ is
the signature of class C. For example,
new Counter λa. B
creates an instance of Counter, binds it to a and executes
the command B. The scope of a extends as far to the right
as possible, often delimited by parentheses or begin-end
brackets.
The type of new C illustrates how the "physical" nature
of objects is reconciled with the "mathematical" character of
Algol. If new C were to be regarded as a value of type θ, then
the mathematical nature of Algol would prohibit stateful
objects entirely. For example, a construction of the form
let a = new Counter
in a.inc; print a.val
would be useless because it would be equivalent, by β-reduction,
to:
(new Counter).inc; print (new Counter).val
thereby implying that every use of a gives a new counter and
no state is propagated. The higher-order type of new C gives
rise to no such problems. This insight is due to Reynolds [51]
and has been used in several other languages [37, 56].
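The point can be made concrete with a small Python analogy (Python closures standing in for IA+ terms; this only models the typing idea, not a semantics): a class is a thunk that builds a fresh object, and new runs a client block with that single object, so state propagates through the block.

```python
def Counter():
    cnt = {"value": 0}                            # the private field
    return {                                       # the record of methods
        "inc": lambda: cnt.__setitem__("value", cnt["value"] + 1),
        "val": lambda: cnt["value"],
    }

def new(cls, block):
    instance = cls()     # one fresh instance per use of `new`, initialized here
    block(instance)      # the block sees the same instance throughout

out = []
new(Counter, lambda a: (a["inc"](), a["inc"](), out.append(a["val"]())))
assert out == [2]        # both increments acted on one shared counter
```

Because new takes the whole client block as an argument, there is no way to duplicate the instance by β-reduction the way the let construction above would.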
2.1 The formal system
We assume a standard treatment for the typed lambda calculus
aspects of IA + . The type rules for cls types are shown
in Fig. 1. Note that we have one rule for the introduction
of cls types and one for elimination. We show a single
field in a class term for simplicity. This is obviously not
fields C x methods M init
cls Intro
cls Elim
Figure
1: Type rules for cls types
cls '1 \Theta cls '2 ! cls ('1 \Theta '2 )
(where
Figure
2: Essential constants of IA
a limitation because the combinator of classes can be used
to instantiate multiple classes. It is significant that the
initialization command is restricted to acting on the field x.
We do not allow it to alter arbitrary non-local objects. The
methods term M , on the other hand, can act on non-local
objects. This is useful, for instance, to obtain the effect
of "static" fields in languages like C++ and Java. If a class
term does not have any free identifiers, we call it a "constant
class."
The restriction that the initialization command should
have no free identifiers other than x is motivated by reasoning
considerations. Programmers typically want to assume
that the order of instance declarations is insignificant. If the
initializations were to have global effects, the order would
become significant. However, the restriction as stated in
the rule is too stringent. One would want the initialization
command to be able to at least read global variables. In
Appendix
A, we outline a more general type system based
on the ideas of [50, 48] that allows read-only free identifiers.
The important constants of IA are shown in Fig. 2.
(The constants for expression and value types are omitted.)
The constant skip denotes the do-nothing command and ";"
denotes sequential composition. The letval operator sequences
the evaluation of an expression with that of another
expression or command. More precisely, letvalef evaluates
e in the current state to obtain a value x and then evaluates
f x. (Note that this would not make sense if letval e f were
of type val[δ′].) The infix operator ":=" is a variant of letval
defined by:
a := e  =  letval e a
For example, the command (cnt.put := cnt.get + 1) in the
definition of the Counter class involves such sequencing. The
letval operator is extended to higher types as follows:
letval
(fst
letval
Thus, all "state-dependent" types (as defined in Appendix
have letval operators, and we have a coercion:
which serves to interpret the subtyping (val[δ] → θ) <: (exp[δ] → θ).
The equational calculus for the typed lambda calculus
part of IA + is standard. For cls type constructs, we have
the following laws:
(β) new (class θ fields C x methods M init A) p  =  new C λx. (A; p(M))
(η) (class θ fields C x methods x init skip)  =  C
(γ) new C1 λx. new C2 λy. M  =  new C2 λy. new C1 λx. M
The (β) law specifies the effect of an Intro-Elim combination.
The (η) law specifies the effect of an Elim-Intro combination
where the "Elim" is the implicit elimination in field declarations.
The (γ) law allows one to reorder new declarations.
Note that it is important for initializations to be free from
global effects for the (γ) law to hold.
The interaction of new declarations with various constants
is axiomatized by the following equational axioms: 3
new c λx. skip  =  skip      (1)
new c λx. (a; g x)  =  a; new c g      (2)
new c λx. (g x; a)  =  (new c g); a      (3)
new c λx. letval e λz. h x z  =  letval e λz. new c λx. h x z      (4)
new c λx. if p (f x) (g x)  =  if p (new c f) (new c g)      (5)
(In the presence of nontermination, the first equation must
be weakened to an inequality new c λx. skip ⊑ skip.) These
equations state that the new operator commutes with all the
operations of IA + . Any computation that is independent of
the new instance can be moved out of its scope. Notice
that we can derive from the second equation, by setting g = λx. skip,
the famous equation:
new c λx. a  =  a      (6)
which has been discussed in various papers on semantics of
local variables [31, 32, 40]. Compilers (implicitly) use these
kinds of equations to enlarge or contract the scope of local
variables and to eliminate "dead" variables. By formally
introducing classes as a feature, we are able to generalize
them for all classes.
In [50, Appendix], Reynolds suggests encoding classes
as their corresponding "new" operators. This involves a translation
from type cls θ to type (θ → comm) → comm, which maps
(class θ fields C x methods M init A) to the operator
λp. new C λx. A; p(M),
and maps each primitive class c to new c.
3 Note that these axioms are equations of lambda calculus, not
equational schemas. The symbols c, a, g, h, e, p and f are free identifiers, and
can never be substituted by terms that capture bound identifiers. For
instance, in equation (2), a cannot be substituted by a term that has
x occurring free.
For instance, the class Counter would be encoded as an
operator of type ({inc: comm, val: exp[int]} → comm) → comm. Unfortunately,
arbitrary functions of this type do not satisfy the
axioms of new listed above. (This means that Reynolds's
encoding does not give a fully abstract translation from
IA + to Idealized Algol.) Our treatment can be seen as a
formalization of the properties intrinsic to "new" operators
of classes.
2.2 Specifications
An ideal framework for specifying classes in IA + is the specification
logic of Reynolds [52]. Specification logic is a theory
within (typed) first-order intuitionistic logic (and, hence, its
name is somewhat a misnomer). We use the intuitionistic
connectives "&", "⇒", "∀" and "∃". The types include
those of Idealized Algol and an additional base type assert
for assertions (state-dependent classical logic formulas). The
atomic formulas of specification logic include:
- Hoare triples, {P} A {Q}, for command A and assertions
P and Q, and
- non-interference formulas, A # B, where A and B are
terms of arbitrary types.
Note that assertions form a "logic within logic." One can
use classical reasoning for them even though the outer logic
is intuitionistic. A non-interference formula A # B means
intuitively that A and B do not access any common storage
locations except in a read-only fashion. (The definition
of the property uses a possible-world semantics [41].) We use
a symmetric non-interference predicate (from [38]), which is
somewhat easier to use than the original Reynolds's version.
The proof rules for the non-interference predicate are the following:
1. A # B if x_i # y_j holds for all i and j (where x_1, ..., x_m
and y_1, ..., y_n are the free identifiers of A and B respectively).
2. A # B if both A and B are of "passive" types.
3. A # B if either A or B is of a "constant" type.
Passive types are those that give exp[ffi]-typed values and
constant types are those that give val[ffi]-typed values. See
Appendix
A for further discussion. The effect of the non-interference
predicate is best illustrated by the proof rule
A1 # A2  ⟹  (A1; A2  =  A2; A1)
which states that two non-interfering commands can be freely
reordered. The survey article of Tennent [58] has a detailed
description of specification logic.
For handling IA + , we extend specification logic with cls
types and a new formula of the form:
Inst C x. φ(x)
where C is a class, x an identifier (bound in the formula)
and φ(x) is a formula. The meaning is that all instances
x of class C satisfy the formula φ(x). An example is the
following specification of the variable class:
Inst Var[ffi] x.
Inst Queue q.
8x,y: val[int]. 8g: exp[int] ! comm. g # q =)
Figure
3: Equational specification of a queue class
Inst Queue q.
9elems: list val[int] ! assert.
8k: val[int]. 8s: list val[int].
ftrueg q.init felems([ ])g
Figure
4: Hoare-triple specification of queues
Thus, the Hoare logic axiom for assignment becomes an
axiom of the variable class. One can also write equational
specifications for classes. For example, consider the specification
of counters by:
Inst Counter x.
  ∀g: exp[int] → comm. g # x ⇒ (x.inc; g(x.val) = g(x.val + 1); x.inc)
The quantified function identifier g plays the role of a "conversion" function, to convert expressions into commands. As
a less trivial example, an equational specification of a Queue
class is shown in Fig. 3. Its structure is similar to that of
the Counter specification.
Specification logic allows the use of both equational reasoning
and reasoning via Hoare-triples. The choice between
them is a matter of preference, but Hoare-like reasoning
is better understood and is often simpler. As illustration,
we show in Fig. 4, a Hoare-triple specification of Queue.
The specification asserts the existence of an elems predicate
representing an abstraction of the internal state of the queue
as a list. (We are using an ML-like notation for lists.) Note that the logical facilities of specification logic allow us to specify the existence of an abstraction function, which would be implementation-dependent.
For example, Fig. 5 shows an implementation of the
Queue class using "unbounded" arrays. 4 To show that it
meets the Hoare-triple specification, we pick the predicate elems(s), which holds in a Queue-state exactly when the list of array elements stored between positions f + 1 and r is s. Note
that the predicate incorporates both the "representation
invariant" and the "representation function" in America's
terminology [5]. In fact, all of America's theory for class
specifications is implicit in specification logic.
4 We are using "unbounded" arrays as an abstraction to finesse
the technicalities of bounds. Clearly, both the specification and
the implementation of Queue can be modified to deal with bounded
queues.
class queue
  fields (UnboundedArray Var[int]) a; Var[int] f; Var[int] r
  methods . . .
  init (f := 0; r := 0)
Figure 5: An implementation of queues
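To make this concrete for readers who think in code, here is a small OCaml sketch (our own illustration, not the paper's Idealized Algol) of a queue "class" as a generator of instances: each call allocates fresh fields and returns the method suite, and an elems-style function relates the hidden representation to the abstract list of elements. The field and method names are our own.

  (* A "class" as an instance generator: each call to new_queue allocates
     fresh fields (the hidden state) and returns the method suite. *)
  type queue = { enq : int -> unit; deq : unit -> int; size : unit -> int }

  let new_queue () : queue =
    let a = Hashtbl.create 16 in        (* plays the role of the unbounded array *)
    let f = ref 0 and r = ref 0 in      (* init (f := 0; r := 0) *)
    { enq = (fun x -> incr r; Hashtbl.replace a !r x);
      deq = (fun () -> incr f; Hashtbl.find a !f);   (* assumes a non-empty queue *)
      size = (fun () -> !r - !f) }

  (* The "elems" abstraction used in the specification: the queue's contents
     are the array entries between f+1 and r.  Shown separately because the
     fields are hidden inside the closure. *)
  let elems (a : (int, int) Hashtbl.t) f r =
    List.init (r - f) (fun i -> Hashtbl.find a (f + 1 + i))

  let () =
    let q = new_queue () in
    q.enq 1; q.enq 2;
    assert (q.deq () = 1);
    assert (q.size () = 1)

The point of the sketch is that nothing outside the closure can observe f and r directly, which is exactly what lets the implementation be exchanged for any other one related by a simulation.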
Specification logic is also able to express "history proper-
ties" recommended by Liskov and Wing [30]. For example,
here is a formula that states that a counter's value can only
increase over time:
Inst Counter x.
Using Inst-specifications, we formulate the following proof
rule for new declarations:
Inst C x. φ(x)        φ(x), x # A_1, ..., x # A_n  ⊢  ψ(g x)
------------------------------------------------------------
ψ(new C g)
where x does not occur free in any undischarged assumptions, the terms A_i, or the formula ψ(−). This states that, to prove a property ψ for (new C g), we need to prove ψ for (g x), where x is an arbitrary instance of C, assuming the specification φ(x) and the fact that x does not interfere with anything unless C interferes with it. The terms A_i can be any terms whatever but, in a typical usage of the rule, they are the free identifiers of ψ(g x). These non-interference assumptions arise from the fact that x is a "new" instance.
The rule for inferring Inst-specifications is:
Inst C z. ψ(z)        φ(M)
---------------------------------------------------
Inst (class θ fields C z init A methods M) x. φ(x)
where z does not occur free in any undischarged assumptions, the term A, or the formula φ(−).
Inst-specifications are not always adequate for capturing
the entire behavior of class instances. Since they specify
the behavior of instances in arbitrary states, they miss the
specification of initial state and the final state transforma-
tions. Additional axioms involving new-terms are necessary
to capture these aspects. For example, the Counter class
satisfies the following "initialization" axiom:
new Counter λx. (g(x.val); h(x))  =  new Counter λx. (g(0); h(x))
which specifies that the initial value of a counter is 0. The
"finalization" axiom:
new Counter λx. (g(x); h(x.inc))  =  new Counter λx. (g(x); h(skip))
states that any increment operations done just before deallocation
are redundant.
3 Semantics
The denotational semantics of IA + brings out important properties of classes and objects. We consider two styles of
semantics: parametricity semantics along the lines of [42],
which highlights the data abstraction aspects of classes, and
object-based semantics along the lines of [49], which highlights
the class-instance relationship.
3.1 Parametricity semantics
As pointed out by Reynolds [53], parametricity has to do fundamentally with data abstraction. Since classes incorporate data abstraction, one expects parametricity to play a role in their interpretation. We follow the presentation of [42, Sec. 2] in our discussion. In particular, we ignore recursion and curried functions. The later discussion in [42] on handling these features applies immediately.
A type operator T over a small collection of sets S is a pair of mappings:
- the "set part" T_set assigns to each set X ∈ S a set T_set(X), and
- the "relation part" T_rel assigns to each binary relation R: X ↔ X′ (with X, X′ ∈ S) a relation T_rel(R): T_set(X) ↔ T_set(X′).
(We normally write both T_set and T_rel as simply T, using the context to disambiguate the notation.) Similarly, n-ary type operators with n type variables can be defined.
The type operators for constant types, variable types, product and function space constructors are standard. For example, for the function space constructor, we have the relation part: f [R → S] f′ iff, for all x [R] x′, f(x) [S] f′(x′). The relation part for a constant type K is the identity relation, denoted Δ_K. We define quantified type operators for a universal quantifier ∀ and an existential quantifier ∃:
- The type operator ∀Z. T(X, Z) represents parametrically polymorphic functions p with components p_Z ∈ T(X, Z). Formally, its set part consists of all S-indexed families {p_Z}_{Z ∈ S} such that, for all relations S: Z ↔ Z′, p_Z [T(Δ_X, S)] p_{Z′}. Its relation part, which can be written as ∀S. T(R, S) for any R: X ↔ X′, is defined by {p_Z} [∀S. T(R, S)] {p′_Z} iff p_Z [T(R, S)] p′_{Z′} for all relations S: Z ↔ Z′.
- The operator ∃Z. T(X, Z) represents data abstractions that implement an abstract type Z with operations of type T(X, Z). To define it formally, consider "implementation" pairs of the form ⟨Z, p⟩ where Z ∈ S and p ∈ T(X, Z). Two such implementations are said to be similar, ⟨Z, p⟩ ∼ ⟨Z′, p′⟩, if there exists a relation S: Z ↔ Z′ such that p [T(Δ_X, S)] p′. (Any such relation S is termed a simulation.) The set part ∃Z. T(X, Z) consists of equivalence classes of implementations under the equivalence relation ∼. Write the equivalence class of ⟨Z, p⟩ as ⟨|Z, p|⟩. The relation part ∃S. T(R, S) for any relation R: X ↔ X′ is the least relation such that p [T(R, S)] p′ implies ⟨|Z, p|⟩ [∃S. T(R, S)] ⟨|Z′, p′|⟩.
The basic reference for parametricity is Reynolds [53], while
Plotkin and Abadi [45] define a logic for reasoning about
parametricity. The notion of existential quantification is
from [35], but the parametricity semantics is not mentioned
there. The idea of simulation relations for abstract type
implementations dates back to Milner [33] and appears in
various sources including [9, 27, 25, 36, 54].
The types θ of IA are interpreted as type operators [[θ]] in the above sense. The parameters for the type operators are state sets. Typically they capture the states involved in the representation of objects. The relation parts of the operators specify how two values of type θ are related under change of representation. Here is the interpretation:
Note that the meaning of a class is a data abstraction. It involves a state set Z for the internal state of the instances, a component of type [[θ]](Q × Z) for the methods of the class, and a component of type Z for the initial state. Two such implementations with internal state sets Z and Z′ are similar (and, hence, equivalent) if, for some relation S: Z ↔ Z′, the initial states are related by S and their methods "preserve" S according to the relation [[θ]](Δ_Q × S).
For example, consider the following class as an alternative to Counter:
  Counter2 = class counter
    fields Var[int] st
    methods { inc = (st := st − 1), val = −st }
    init (st := 0)
The meanings of Counter and Counter2 can be calculated as data abstractions whose internal state sets are both Int. The two implementations are similar because there is a simulation relation S: Int ↔ Int given by
  n S m  ⇔  m = −n (9)
which is preserved by the two implementations. Hence, the two abstractions (equivalence classes) are equal: [[Counter]] = [[Counter2]].
Thus, the parametricity semantics gives an
extremely useful proof principle for reasoning about equivalence
of classes.
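As a concrete and purely illustrative rendering of this proof principle, the following OCaml sketch represents the two counter implementations by their state-transformer methods over the concrete state set Int, and checks on a few sample states that the relation S relating n to −n is preserved. The record fields and the bounded test are our own choices, not the paper's notation.

  (* A concrete counter implementation: internal state 'z, with an initial
     state and methods acting on the state. *)
  type 'z impl = { init : 'z; inc : 'z -> 'z; value : 'z -> int }

  let counter  = { init = 0; inc = (fun n -> n + 1); value = (fun n -> n) }
  let counter2 = { init = 0; inc = (fun n -> n - 1); value = (fun n -> -n) }

  (* The simulation relation S from the text: n is related to -n. *)
  let s n m = (m = -n)

  (* A bounded check that S relates the initial states and is preserved by
     the methods, with val giving equal answers on related states. *)
  let preserved =
    s counter.init counter2.init
    && List.for_all
         (fun n ->
           let m = -n in
           s (counter.inc n) (counter2.inc m)
           && counter.value n = counter2.value m)
         [ -3; -1; 0; 1; 2; 5 ]

  let () = assert preserved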
The interpretation of terms is as follows. A term M of type θ in context Γ denotes a polymorphic function [[M]] with components [[M]]_Q : [[Γ]](Q) → [[θ]](Q). We write [[M]]_Q for the component of [[M]] at Q. The semantics
of Algol phrases is as in [42]. An important point to
recall from that paper, Sec. 3.2, is the fact that parametricity
makes available certain "expand" functions:
  expand_θ[Q, Z] : [[θ]](Q) → [[θ]](Q × Z)
For every value v ∈ [[θ]](Q), there is a unique expanded value in [[θ]](Q × Z) that acts the "same way" as v does. We use the abbreviated notation v ↑^{Q×Z}_Q to denote expand_θ[Q, Z](v).
For example, if c ∈ [[comm]](Q), the expanded command c ↑^{Q×Z}_Q acts on the Q component as c does and leaves the Z component unchanged. These expand functions
play a crucial role in interpreting instance declarations and
inheritance. They also have significance in interpreting con-
stants. A "constant value" in [[θ]](Q) is a value of the form v ↑^{Q}_{1}, obtained by expanding a value v at the unit state set 1. So, we only need to specify the interpretation of a constant at the unit state set.
The semantics of class constructs is as follows. A class term (class θ fields C x methods M init A) denotes the data abstraction obtained by packaging the state set of C with the meanings of M and A. The meaning of new C P is, roughly, λq. fst(p_Z(m)(q, z0)), where ⟨|Z, (m, z0)|⟩ is the meaning of the class C and p is the meaning of the client P.
A class definition builds an abstract type as illustrated with
Counter above. The new operator "opens" the abstract
type and passes to the client procedure P the representation
and the method suite of the class. Thus, an "instance"
is created. Note that, in the normal case where P is an
abstraction -x: M , its meaning is \LambdaZ: -m:
the body term M will now use the expanded
state set Q \Theta Z. Every time the class C is instantiated, a
new Z component is added to the state set in this fashion.
Thus, every "opening" of the abstract type gives rise to a
new instance with its own state component that does not
interfere with the others.
In comparing this operation with the object encoding
proposed by Pierce, Turner and others [44, 12], we note that
they treat objects as abstract types whereas we treat classes
as abstract types. Thus, some of the bureaucratic opening-
closing code that appears in their model is finessed here.
Message send in our model is simply the field selection of a
record. Nevertheless, the idea of abstract types appears in
both the models, and the implications of this commonality
should be explored further.
The class constants have the following interpretation. The Var[δ] class denotes a state set [[δ]] with get and put operations on it. The combination operator joins two classes with state sets Z1 and Z2 into one with state set Z1 × Z2; the method suites of the individual classes are expanded to operate on the combined state set.
Theorem 1 The parametricity model satisfies all the equivalences
and axioms of Sec. 2.1.
The plain parametricity semantics described above does
not handle the equality relation in a general fashion. In
implementing data abstractions, it is normal to allow the
same abstract value to be represented by multiple concrete
representations. In our context, this means that the equality
relation for abstract states is, in general, not the same as
the equality relation for concrete states. It corresponds to a
partial equivalence relation (per) for concrete states [24].
For example, in the Queue implementation of Fig. 5, an
empty queue is represented by any state in which f and r
are equal. The second axiom of the equational specification (Fig. 3) does not hold in this implementation: the left-hand side and the right-hand side yield states with different values of f and r, even though both represent the same queue.
This can be remedied by modifying the parametricity
semantics to a parametric per semantics, where each type
carries its own notion of equality. 5 More formally, A "type"
in the new setting (called a per-type) is a pair
where X is a set and EX is a per over X representing the
notion of equality for X. All the above ideas can be modified
to work with per-types. (See Appendix B.)
The per semantics influences reasoning about programs
as follows. Suppose we obtain a package ⟨|⟨Z, Δ_Z⟩, p|⟩ ∈ ∃Z. T(X, Z) as the meaning of a class. If p preserves some per E_Z, in the sense that p [T(Δ_X, E_Z)] p, then we have ⟨|⟨Z, Δ_Z⟩, p|⟩ = ⟨|⟨Z, E_Z⟩, p|⟩.
Thus, we are at liberty to make up any per EZ that is
preserved by p and use it as the equality relation for the
representation.
For example, for the Queue class of Fig. 5, the state set Z
consists of triples ha; f; ri where a: Int ! Int and f; r 2 Int .
We pick the equivalence relation E_Z given by:
  ⟨a, f, r⟩ E_Z ⟨a′, f′, r′⟩  ⇔  [a(f+1), ..., a(r)] = [a′(f′+1), ..., a′(r′)]
to represent the intuition that only the portion of the array
between f +1 and r contains meaningful values. In verifying
the axioms of queues, we interpret =comm as the per for
5 It does not seem possible to obtain the information of this
semantics from the plain parametricity semantics because quantified
type operators do not map per's to per's in general.
Figure 6: Trace set of a counter object (nodes are states and edges are events such as inc.*, val.1, and val.2).
[[comm]](Q × Z), viz., [E_Q × E_Z → E_Q × E_Z]. (E_Q is some per for Q respected by the other variables like g.) Here is the verification of the problematic second axiom: the two sides of the equation denote the respective state transformations on Q × Z, and it is clear that they are equivalent by the relation [E_Q × E_Z → E_Q × E_Z].
3.2 Object-based semantics
The object-based semantics [49, 39] (see also [4]) treats objects
as state machines and describes them purely by their
observable behavior. The observable behavior is given in
terms of event traces whose structure is determined by the
type of the object. This is similar to how processes are
described in the semantics of CSP or CCS. Since no internal
states appear in the denotations, proving the equivalence
of two classes reduces to proving the equality of their trace
sets.
Before looking at formal definitions, we consider an example
Figure
6 depicts the trace set of a counter object
in its initial state. The events for this object are "inc.*", denoting a successful completion of the inc method, and
"val.i" denoting a completion of the val method with the
result i (an integer). The nodes can be thought of as states
and events as state transitions. Note that a val event does
not change the state whereas an inc event takes the object
to a state with a higher val value. For discussion purposes,
we can label each node with an integer (which might well
be the same integer given by val ). The trace set can then
be described mathematically by a recursive definition:
  CNT(n) = {ε} ∪ inc.* · CNT(n+1) ∪ val.n · CNT(n)
The parameter of the CNT function is the label of the state.
Note that these labels can be anything we make up, but it
often makes sense to use labels that correspond to states in
an implementation. For instance, here is another description
of the same trace set using negative integers for labels:
  CNT′(n) = {ε} ∪ inc.* · CNT′(n−1) ∪ val.(−n) · CNT′(n)
This description corresponds to the class Counter2. While
it is obvious that the two trace sets are the same, a formal
proof would use the simulation relation S defined in (9). We
can show by fixed point induction that CNT(n) = CNT′(−n) for all n, and it follows that the two descriptions denote the same trace set.
Note that in this description there is virtually no difference
between classes and instances. A class determines a
trace set which is then shared by all instances of the class.
The specification equations of classes can be directly verified
in the trace sets. For example, the equation x.inc; g(x.val) = g(x.val + 1); x.inc of the Counter class is verified by noting that both sides give rise to the same trace sets for all states n.
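The equality of the two trace-set descriptions can also be checked mechanically on bounded traces. The following OCaml sketch is our own (the helper names are invented): it enumerates all traces of length at most k for a counter whose labels evolve as in CNT and as in the negative-label description, and checks that the two sets coincide.

  type event = Inc | Val of int

  (* All traces of length at most k, where "obs" gives the value observable
     in a state label and "next" gives the label after an inc. *)
  let rec traces k obs next label =
    if k = 0 then [ [] ]
    else
      let shorter = traces (k - 1) obs next in
      ([] :: List.map (fun t -> Inc :: t) (shorter (next label)))
      @ List.map (fun t -> Val (obs label) :: t) (shorter label)

  (* CNT(n): the label is the counter's value itself. *)
  let cnt k n = traces k (fun n -> n) (fun n -> n + 1) n

  (* The negative-label description, corresponding to Counter2. *)
  let cnt_neg k n = traces k (fun n -> -n) (fun n -> n - 1) n

  let () =
    let norm l = List.sort compare l in
    (* Starting from the initial state, the two descriptions agree on all
       traces of length at most 3. *)
    assert (norm (cnt 3 0) = norm (cnt_neg 3 0))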
The object-based semantics, described in [47, 39], makes
these ideas work for Idealized Algol. For simplicity, we consider
a version of Idealized Algol with "Syntactic Control of
Interference", where functions are only applied to arguments
that they do not interfere with.
We start with the notion of a coherent space [22], which
is a simple form of event structure [59]. A coherent space
is a pair A = (|A|, ≍_A), where |A| is a (countable) set and ≍_A is a reflexive-symmetric binary relation on |A|. The elements of |A| are to be thought of as events for the objects of a particular type. The relation ≍_A, called the coherence relation, states whether two events can possibly be observed from the same object in the same state.
The free object space generated by A is a coherent space A† where |A†| is the set of finite sequences over |A| ("traces"), and two traces are coherent if, at the first position i where they differ (if any), their events are coherent in A. This states that, after carrying out a common sequence of events, the two traces must have coherent events at position i. If a_i = b_i then the same condition applies to position i + 1; if a_i ≠ b_i then the two events lead to distinct states and, so, there is no coherence condition on future events.
An element of a coherent space A is a pairwise coherent subset x ⊆ |A|. So, the elements of object spaces denote trace sets for objects. Functions appropriate for object
spaces are what are called regular maps. It turns out that they can be described more simply in terms of linear maps F : A → B. We actually define "multiple-argument linear maps" because they are needed for the semantics. A linear map F : A1, ..., An → B is a set of tuples (~s, b) with ~s ∈ |A1| × · · · × |An| and b ∈ |B|, subject to a coherence condition: whenever (~s, b), (~s′, b′) ∈ F and ~s, ~s′ are componentwise coherent, then b and b′ are coherent. Every such linear map denotes a multiple-argument regular
Coherent spaces for the events of various Idealized Algol
types are shown in Figure 7. The trace sets for objects of type θ are elements of the free object space over [[θ]]. Since we have a state-free description of objects, there is virtually no difference between
Figure 7: Coherent spaces of events for IA types
objects and classes. The only difference is that a class can
be used repeatedly to generate new instances. So, a trace of
a class is a sequence of object traces, one for each instance
generated. Therefore, we define the events of a class type to be such sequences of object traces.
The meaning of a term x1 : θ1, ..., xn : θn ⊢ M : θ is a multiple-argument linear map over the corresponding object spaces. We regard a vector of traces ~s ∈ |[[θ1]]| × · · · × |[[θn]]| as a record η indexed by the identifiers x_i. So, the linear map [[M]] is a set of pairs (η, a), each of which indicates that, to produce an event a for the result, the term M carries out the event traces η on the objects for the free identifiers.
The interpretation of interference-controlled Algol terms
is as in [49]. The interpretation of a class term (class θ fields C x methods M init A) and of new C P is as follows. The meaning of the class term says that the trace set of C must have a trace s0 s1, where s0 represents the effect of the initialization command A. If the methods term M maps the trace s1 to a trace s ∈ |[[θ]]|, then s is a possible
trace for the new class. The meaning of new C P finds a
trace s supported by C such that P is ready to accept an
object with this trace. Of course, C supports many traces.
But, P will use at most one of these traces.
Theorem 2 The object-based model satisfies all the equivalences
and axioms of Sec. 2.1, adapted to a version of IA
with Syntactic Control of Interference.
4 Modularity issues
In this section, we briefly touch upon the higher-level modularity
issues relevant to object-oriented programming. Further
work is needed in understanding these issues.
4.1 Types and classes
In most object-oriented languages, the notion of types and
classes is fused into one. Such an arrangement is not feasible
in IA because classes are first-class values and their equality
is not decidable. For example, the classes (array c n)
and (array c′ n′) are equal only if n and n′ are equal. Such
comparisons are neither feasible nor desirable. However,
a tighter integration of classes with types can be achieved
using opaque subtypes as in Modula-3, also called "partially
abstract" types [21]. For example, the counter class may be
defined as:
newtype counter <: {inc: comm, val: exp[int]}
in
class counter . . .
A client program only knows that counter is some subtype
of the corresponding signature type and that Counter is of
type cls counter . The class Counter, on the other hand, is
inside the abstraction boundary of the abstract type counter,
and regards it as being equal to the signature type.
We can specify requirements for partially abstract types.
For example, the specification:
∀x: counter. . . .
states that every value of type counter - not just an instance
of some class - is monotonically increasing. All
reveal blocks of the type counter get a proof obligation to
demonstrate that their use of the type counter satisfies the
specification. For example, if we use reveal blocks to define
classes Counter and Counter2, we have the job of showing
that their instances are monotonically increasing. Note that
such partially abstract types correspond to what America [5]
calls "types."
4.2 Inheritance
Since IA + is a typed lambda calculus with records, most inheritance models in the literature can be adapted to it. For illustration, we show the recursive record model [13, 15, 46]. A class that uses self-reference is defined to be of type cls (θ → θ) instead of cls θ, so that the method suite is parameterized by "self." We have a combinator
  close : cls (θ → θ) → cls θ
  close c = class θ fields c f methods (fix f) init skip
which converts a self-referential class c to a class whose instances are ordinary objects.
Let c be of type cls (θ → θ). To define a derived class of type cls (θ′ → θ′), where θ′ is a record type extension of θ, we use a construction of the form:
  class (θ′ → θ′)
    fields c f; . . .
    methods λself. (f self) with[θ′] { . . . }
    init A
where with is a record-combination operator [16]. (The
with operator is qualified by the type - to indicate the
record fields that get updated. This is needed for coherence
of subtyping.)
As an example, suppose we define a variant of the Counter
class that provides a set method in a "protected" fashion:
type protected counter =
  counter ⊕ {set: val[int] → comm}
Counter =
  class (counter → counter)
    fields Var[int] cnt
    methods
      λself. . . .
We can then define a derived class that issues warnings whenever the counter reaches a specified limit:
WarnCounter lim =
  class (counter → counter)
    fields Counter f
    methods
      λself. (f self) with[set: λk.
        . . . print "Limit reached"; (f self).set k]
    init skip
Note that both (close Counter) and (close Warn Counter)
are of type cls counter. Their instances satisfy the specification
of counter, including its history property. The set
method does not cause a problem because it is inaccessible
to clients.
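The recursive-record reading of classes with self-reference can be mimicked directly in OCaml. The sketch below is our own encoding: close here ties the recursive knot with a lazy value rather than an explicit fix combinator, and the derived class simply overrides inc with a warning rather than adding a protected set method. OCaml's record-update syntax plays the role of the with operator.

  type counter = { inc : unit -> unit; value : unit -> int }

  (* A self-referential "pre-class": allocating the fields yields a method
     suite parameterized by (a lazy) self. *)
  type 'a pre_class = unit -> ('a Lazy.t -> 'a)

  (* close ties the recursive knot, turning a pre-class into an instance
     generator (the analogue of cls theta). *)
  let close (c : 'a pre_class) : unit -> 'a =
    fun () ->
      let mk = c () in                  (* allocate this instance's fields *)
      let rec self = lazy (mk self) in
      Lazy.force self

  (* The base class: a single field cnt. *)
  let counter_class : counter pre_class =
    fun () ->
      let cnt = ref 0 in
      fun _self -> { inc = (fun () -> incr cnt); value = (fun () -> !cnt) }

  (* A derived class: the parent is a field; its methods are wrapped, with
     inc overridden to warn once a limit has been reached. *)
  let warn_counter_class limit : counter pre_class =
    fun () ->
      let parent = counter_class () in
      fun self ->
        let p = parent self in
        { p with inc = (fun () ->
              if (Lazy.force self).value () >= limit then
                print_endline "Limit reached";
              p.inc ()) }

  let () =
    let c = close counter_class () in
    c.inc (); c.inc ();
    assert (c.value () = 2);
    let w = close (warn_counter_class 1) () in
    w.inc (); w.inc ()      (* the second inc prints "Limit reached" *)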
The proof principle for self-referential classes is derived from fixed-point induction:
  Inst c f. (∀x. φ(x) ⇒ φ(f(x)))
  -------------------------------
  Inst (close c) x. φ(x)
For example, both
lim) satisfy:
Inst (close C) x.
8k: val[int]. 8p: exp[int] ! assert.
4.3 Dynamic Objects
Typical languages of the Algol family provide dynamic storage
via Hoare's [26] concept of ``references'' (pointers). An
object created in dynamic storage is accessed through a
reference, which is then treated as a data value and becomes
storable in variables. Some of the modern languages, like
Modula-3, treat references implicitly (assuming that every
object is automatically a reference). But it seems preferable
to make references explicit because the reasoning principles
for them are much harder and not yet well-understood.
To provide dynamic storage in IA + , we stipulate that,
for every type ', we have a data type ref '. The operations
for references are roughly as follows:
The rule for newref is not sound in general. Since references
can be stored in variables and exported out of their scope,
they should not refer to any local variables that obey the
stack discipline. If and when the local variables are deallo-
cated, these references would become "dangling references".
A correct type rule for newref is given in Appendix A.
Our knowledge of semantics for dynamic storage is rather
incomplete. While some semantic models exist [55, 56], it
is not yet clear how to integrate them with the reasoning
principles presented here.
5 Conclusion
Reynolds's Idealized Algol is a quintessential foundational
system for Algol-like languages. By extending it with objects
and classes, we hope to provide a similar foundation
for object-oriented languages based on Algol. In this paper,
we have shown that the standard theory of Algol, including
its equational calculus, specification logic and the major
semantic models, extends to the object-oriented setting. In
fact, much of this has been already implicit in the Algol
theory but perhaps in a form accessible only to specialists.
Among the issues we leave open for future work are
a more thorough study of inheritance models, reasoning
principles for references, and investigation of call-by-value
Algol-like languages.
Acknowledgments
It is a pleasure to acknowledge Peter
O'Hearn's initial encouragement in the development of
this work as well as his continued feedback. Bob Tennent,
Hongseok Yang and the anonymous referees of FOOL 5
provided valuable observations that led to improvements in
the presentation. Thanks to Martin Abadi for explaining the
intricacies of per semantics. This research was supported
by the NSF grant CCR-96-33737.
Appendix
A Reflective type classes
In the type rules of section 2.1, the initialization command
of a class was restricted to only the local fields of the class.
While this restriction leads to clean reasoning principles:
the (γ) law and equations (2-6), it is too restrictive to be
practical. For instance, a counter class parameterized by
an initial value n does not type-check under this restriction
because its init command has free occurrences of n.
A reasonable relaxation of the restriction is to allow the
initialization command to read storage locations, but not to
write to them. This kind of restriction is also useful in other
contexts, e.g for defining "function procedures" that read
global variables but do not modify them [58, 56].
The use of dynamic storage involves a similar restriction.
A class used to instantiate a dynamic storage object should
not have any references to local store. We define a general
notion that is useful for formalizing such restrictions.
A reflective type class is a set of type terms
T such that
1.
2.
3.
The terminology is motivated by the fact that these classes
can be interpreted in reflective subcategories of the semantic
category [48].
We define several reflective type classes based on the
following intuitions. Constant types involve values that are
state-independent; they neither read nor write storage loca-
tions. (Such values have been called by various qualifications
such as "applicative" [56], "pure" [37], and "chaste" [57]).
Dually, state-dependent types involve values that necessarily
depend on the state. Values of passive types only read
storage locations, but do not write to them (one of the senses
of "const" in C++). Values of dynamic types access only
dynamic storage via references.
We add three new type constructors Const, Pas and Dyn
which identify the values with these properties even if they
are of general types.
A value of type Const θ is a θ-typed value that has been built
using only constant-typed information from the outside. So
it can be regarded as a constant value.
We define the following classes as the least reflective
classes satisfying the respective conditions:
1. Constant types include val[δ] and Const θ types.
2. State-dependent types include exp[δ] and comm, and are closed under the Const, Pas and Dyn type constructors.
3. Passive types include val[δ], exp[δ], Const θ and Pas θ types.
4. Dynamic types include val[δ], Const θ and Dyn θ types.
An identifier x is said to be T -used in M if every free occurrence of x is
in a subterm of M with a T -type. (In particular, we say
"constantly used", "passively used", and "dynamically used"
for the three kinds of usages.)
The introduction rules for Const, Pas, and Dyn are as follows: a term M may be given type Const θ if every identifier in its context is constantly used in M and there are no occurrences of the dereference operator; it may be given type Pas θ if the identifiers in its context are passively used in M; and it may be given type Dyn θ if they are dynamically used in M. The dereference operator is treated as if it were an identifier of a state-dependent type. Here, a context is T-used when every identifier in it is T -used. For the elimination of these type constructors, we
use the subtypings (for all types θ):
  Const θ <: Pas θ <: θ
  Const θ <: Dyn θ <: θ
Note that any closed term can be given a type of the form Const θ. For example, the counter class of Section 2 has the type Const (cls counter).
Application to class definitions. The type rule for classes is now modified so that, in
  class θ fields C x methods M init A
the free identifiers of the context must be passively used in the initialization command A. This allows the free identifiers Γ to be used in A, but in a read-only fashion. The parametricity interpretation of the cls type must be modified to [[cls θ]](Q) = ∃Z. [[θ]](Q × Z) × [Q → Z]. The rest of the theory remains the same, except that equation (2) becomes conditional on non-interference, carrying the hypothesis c # a:
  c # a ⇒ new c λx. a; . . .
Application to references We use the following rule for
creating references:
The rule ensures that the class instantiated in the dynamic
store does not use any locations from the local store, so the
instance will not use them either. This avoids the "dangling
reference" problem.
B Semantics of specifications
In this section, we consider the issue of interpreting specifi-
cations. This raises two issues. First, the non-interference
formulas in specifications require a sophisticated functor category
interpretation [57, 41] whose relationship to the parametricity
interpretation is not yet well-understood. It is
however possible to interpret restricted versions of specifi-
cations, those in which 8-quantified identifiers are restricted
not to interfere with any other free identifiers. Note that
the queue specification in Fig. 3 is of this form. The second
issue, discussed in Section 3.1, is that the equality relation
of specifications must be general enough to be refined by
implementations.
To allow for equality relations to be refined in implemen-
tations, we define a parametric per semantics for IA + . The
basic ideas are from Bainbridge et al. [7]. (See also [8].) We
adapt them to a predicative polymorphic context. A per E
over a set X is a symmetric and transitive relation. (It differs
from an equivalence relation in that it need not be reflexive.)
The domain of E is defined by x 2 dom(E)
Note that E reduces to a (total) equivalence relation over
dom(E). The set of equivalence classes under E is denoted
Q(E). See [34, Sec. 5.6] for discussion of per's.
A "type" in the new setting (called a per-type) is a pair
is a set and EX is a per over X.
The per specifies the notion of "equality" for the type. A
is an ordinary relatin
(called a saturated
relation).
A "type operator" is a pair hTper ; T rel i of mappings for
per-types and saturated relations. The per-type operators
for products and function spaces are as follows:
R \Theta
Assume that S is a small collection of per-types. We are
interested in per-type operators over S. These operators
Figure 8: Interpretation of specifications (including the clause interpreting Inst C x. φ via the existence of a representation ⟨Z, ⟨p, z0⟩⟩ of C).
inherit product, sum and function space constructors from
the above notions. We define type quantifiers as follows:
- The per-type operator ∀Z. T(X, Z) maps a per-type
X to the per-type h
The set consists of families indexed by Z 2 S. The
per equates two families p and p 0 if for all saturated
relations S: Z
Z 0 . The
relation part of the operator maps a saturated relation
- The per-type operator ∃Z. T(X, Z) maps a per-type
X to the per-type h
given by
The relation part of the operator maps a saturated
relation R: X
Comparing this to the plain parametricity semantics of Section
3.1, we note that per's take the place of the identity
relations.
Theorem 5 Every type operator of the language maps per-types to per-types and saturated relations R_i to saturated relations T(R_1, ..., R_n).
The proof is similar to that in [8].
The interpretation of IA + is exactly the same as in plain
parametricity semantics except that the type operators are
now understood to be per-type operators. The interpretation
of specifications is shown in Fig. 8. A judgment of the form Q, η ⊨ φ means that the formula φ, with free identifiers from Γ, holds at the state set Q and environment η ∈ dom([[Γ]](E_Q)).
--R
An imperative object calculus.
A Theory of Objects.
A logic of object-oriented programs
Linearity, sharing and state.
Designing an object-oriented programming language with behavioural subtyping
Functorial polymorphism.
Refinement of concurrent object-oriented programs
Mathematical Foundations of Programming Semantics: Eleventh Annual Conference
Comparing object encodings.
A semantics of multiple inheritance.
FUDGETS: A graphical user interface in a lazy functional language.
A Denotational Semantics of Inheritance.
Inheritance is not subtyping.
An Algol-based simulation language
A calculus for concurrent objects.
An interpretation of typed OOP in a language with state.
On the relationship between classes
Proofs and Types.
Theoretical Aspects of Object-Oriented Programming
Abstract data types and software validation.
Data refinement refined.
Record handling.
An axiomatic approach to binary logical relations with applications to data refinement.
Reasoning and refinement in object-oriented specification languages
Modular specification and verification of object-oriented programs
A behavioral notion of subtyping.
Axiomatizing operational equivalence in the presence of side effects.
Towards fully abstract semantics for local variables.
An algebraic definition of simulation between programs.
Foundations of Programming Languages.
Abstract types have existential types.
On the Refinement Calculus.
Call by name
Syntactic control of interference revisited.
Objects, interference and Yoneda embedding.
Semantics of local variables.
Semantical analysis of specification logic
Simple type-theoretic foundations for object-oriented programming
A logic for parametric polymorphism.
Objects as closures: Abstract semantics of object-oriented languages
Global state considered unnecessary: Semantics of interference-free imperative programming
Passivity and independence.
Global state considered unnecessary: An introduction to object-based semantics
Syntactic control of interference.
The essence of Algol.
Idealized Algol and its specification logic.
Types, abstraction and parametric poly- morphism
Behavioral correctness of data representations.
Categorical models for local names.
Assignments for applicative languages.
Semantical analysis of specification logic.
Denotational semantics.
An introduction to event structures.
--TR
A semantics of multiple inheritance.
Structure and interpretation of computer programs
Communicating sequential processes
On understanding types, data abstraction, and polymorphism
Event structures
Abstract types have existential type
Objects as closures: abstract semantics of object-oriented languages
Communication and concurrency
Proofs and types
Inheritance in smalltalk-80: a denotational definition
Towards fully abstract semantics for local variables
A sound and complete axiomatization of operational equivalence of programs with memory
Semantical analysis of specification logic
Behavioural correctness of data representations
Inheritance is not subtyping
Call by name, assignment, and the lambda calculus
Theoretical aspects of object-oriented programming
Two semantic models of object-oriented languages
A behavioral notion of subtyping
Parametricity and local variables
Denotational semantics
An interpretation of typed OOP in a language
Positive subtyping
An imperative object calculus
ALGOL-like languages (v.2)
Assignments for applicative languages
On the relationship between classes, objects, and data abstraction
Semantics of dynamic variables in Algol-like languages
SIMULA: an ALGOL-based simulation language
Syntactic control of interference
A Theory of Objects
On the Refinement Calculus
Modular Specification and Verification of Object-Oriented Programs
Data Refinement Refined
Comparing Object Encodings
A Logic for Parametric Polymorphism
Reasoning and Refinement in Object-Oriented Specification Languages
An Imperative, First-Order Calculus with Object Extension
A Logic of Object-Oriented Programs
A Calculus for Concurrent Objects
An introduction to event structures
Designing an Object-Oriented Programming Language with Behavioural Subtyping
Abstraction
A denotational semantics of inheritance
--CTR
Bernhard Reus , Thomas Streicher, Semantics and logic of object calculi, Theoretical Computer Science, v.316 n.1-3, p.191-213, 28 May 2004
Uday S. Reddy , Hongseok Yang, Correctness of data representations involving heap data structures, Science of Computer Programming, v.50 n.1-3, p.129-160, March 2004
Bernhard Reus , Jan Schwinghammer, Denotational semantics for a program logic of objects, Mathematical Structures in Computer Science, v.16 n.2, p.313-358, April 2006
Matthew Parkinson , Gavin Bierman, Separation logic and abstraction, ACM SIGPLAN Notices, v.40 n.1, p.247-258, January 2005
Anindya Banerjee , David A. Naumann, Ownership confinement ensures representation independence for object-oriented programs, Journal of the ACM (JACM), v.52 n.6, p.894-960, November 2005 | relational parametricity;semantics;algol-like languages;object-oriented programming;specification logic |
512531 | Flow-sensitive type qualifiers. | We present a system for extending standard type systems with flow-sensitive type qualifiers. Users annotate their programs with type qualifiers, and inference checks that the annotations are correct. In our system only the type qualifiers are modeled flow-sensitively---the underlying standard types are unchanged, which allows us to obtain an efficient constraint-based inference algorithm that integrates flow-insensitive alias analysis, effect inference, and ideas from linear type systems to support strong updates. We demonstrate the usefulness of flow-sensitive type qualifiers by finding a number of new locking bugs in the Linux kernel. | INTRODUCTION
Standard type systems are flow-insensitive, meaning a
value's type is the same everywhere. However, many important
program properties are flow-sensitive. Checking such
properties requires associating different facts with a value at
di#erent program points.
This paper shows how to extend standard type systems
with user-specified flow-sensitive type qualifiers, which are
atomic properties that refine standard types. In our system
users annotate programs with type qualifiers, and inference
checks that the annotations are correct. The critical feature
of our approach is that flow-sensitivity is restricted to the
type qualifiers that decorate types-the underlying standard
types are unchanged-which allows us to obtain an e#cient
type inference algorithm. Type qualifiers capture a natural
class of flow-sensitive properties, while e#cient inference of
the type qualifiers allows us to apply an implementation to
large code bases with few user annotations.
As an example of type qualifiers, consider the type File
used for I/O operations on files. In most systems File operations
can only be used in certain ways: a file must be
opened for reading before it is read, it must be opened for
writing before it is written to, and once closed a file cannot
be accessed. We can express these rules with flow-sensitive
type qualifiers. We introduce qualifiers open, read, write,
readwrite, and closed. The type open File describes a
file that has been opened in an unknown mode, the type
read File (respectively write File) is a file that is open
for reading (respectively writing), the type readwrite File
is a file open for both reading and writing, and the type
closed File is a closed file. These qualifiers capture inherently
flow-sensitive properties. For example, the close()
function takes an open File as an argument and changes
the file's state to closed File.
These five qualifiers have a natural subtyping relation:
readwrite # read # open and readwrite # write # open.
The qualifier closed is incomparable to other qualifiers because
a file may not be both closed and open. Qualifiers that
introduce subtyping are very common, and our framework
supports subtyping directly; in addition to a set of qualifiers,
users can define a partial order on the qualifiers.
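As a concrete illustration of such a user-defined partial order, here is a minimal OCaml sketch (our own, not part of the system described in the paper) of the five file qualifiers and their ordering; a check succeeds exactly when the inferred qualifier is at most the annotated one.

  (* The five file qualifiers from the example. *)
  type qual = ReadWrite | Read | Write | Open | Closed

  (* User-supplied partial order: readwrite <= read <= open,
     readwrite <= write <= open; closed is incomparable to the rest. *)
  let leq (a : qual) (b : qual) : bool =
    match a, b with
    | x, y when x = y -> true                   (* reflexivity *)
    | ReadWrite, (Read | Write | Open) -> true
    | (Read | Write), Open -> true
    | _ -> false

  (* check(e, Q) succeeds only if e's inferred qualifier is at most Q. *)
  let check inferred annot = leq inferred annot

  let () =
    assert (check ReadWrite Read);    (* a readwrite file may be read *)
    assert (not (check Open Read));   (* an open file is not known to be readable *)
    assert (not (check Closed Open))  (* a closed file is not open *)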
Our results build on recent advances in flow-sensitive type
systems [5, 7, 25] as well as our own previous work on flow-insensitive
type qualifiers [16, 24]. The main contribution
of our work is a practical, flow-sensitive type inference al-
gorithm, in contrast to the type checking systems of [5, 7, 25]. Our flow-sensitive type inference algorithm is made practical
by solving constraints lazily. As in any flow-sensitive
analysis, explicitly forming a model of the store at every program
point is prohibitively expensive for large code bases.
By generating a constraint system linear in the size of the
type-annotated program and solving only the portion of the
constraints needed to check qualifier annotations, our algorithm
is able to scale to large examples.
Finally, our system is designed to be sound; we aim to
prove the absence of bugs, not just to be heuristically good
at finding bugs. For example, we believe that our system
could be integrated into Java in a sound manner. We have
shown soundness for restrict (Section 4), a key new construct
in our system (see technical report [15]). Since the
remainder of our system can be viewed as a simplification
of [25], we believe it is straightforward to prove soundness
for our full type system using their techniques.
In Section 5 we report on experience with two applica-
tions, analyzing locking behavior in the Linux kernel and
analyzing C stream library usage in application code. Our
system found a number of new locking bugs, including some
that extend across multiple functions or even, in one case,
across multiple files.
1.1 System Architecture
Our flow-sensitive qualifier inference algorithm has several
interlocking components. We first give an overview of the
major pieces and how they fit together.
We expect programmers to interact with our type sys-
tem, both when adding qualifier annotations and when reviewing
the results of inference. Thus, we seek a system
that supports efficient inference and is straightforward for a programmer to understand and use. Our type inference system integrates alias analysis, effect inference, and ideas from linear type systems.
- We use a flow-insensitive alias analysis to construct a
model of the store. The alias analysis infers an abstract
location for the result of each program expression; expressions
that evaluate to the same abstract location
may be aliased.
- We use effect inference [20] to calculate the set of abstract locations an expression e might use during e's evaluation. These effects are used in analyzing function calls and restrict (see below). Effect inference is done simultaneously with alias analysis.
store, which is a mapping from abstract locations to
types. We can use the abstract locations from the flow-insensitive
alias analysis because we allow only the
type qualifiers, and not the underlying standard types,
to change during execution. We represent abstract
stores using a constraint formalism. Store constructors
model allocations, updates, and function calls,
and store constraints C1 # C2 model a branch from
the program point represented by store C1 to the program
point represented by store C2 .
. We compute a linearity [25] for each abstract location
at each program point. Informally, an abstract location
is linear if the type system can prove that it corresponds
to a single concrete location in every execution;
otherwise, it is non-linear. We perform strong updates
[4] on locations that are linear and weak updates on
locations that are non-linear. A strong update can
change the qualifier on a location's type arbitrarily.
Weak updates cannot change qualifiers. Computing
linearities is important because most interesting flow-sensitive
properties require strong updates.
- The system described so far has a serious practical
weakness: Type inference may fail because a location
on which a strong update is needed may be inferred
to be non-linear. We address this with a new annotation
restrict. The expression restrict x = e in e′ introduces a new name x bound to the value of e. The name x is given a fresh abstract location, and among all aliases of e, only x and values derived from x may be used within e′. Thus the location of x may be linear, and hence may be strongly updated, even if the location of e is non-linear. We use effects to enforce the correctness of restrict expressions: soundness requires that the location of e does not appear in the effect of e′.
- We use effects to increase the precision of the analysis. If an expression e does not reference location ρ, which we can determine by examining the effect of e, then it does not access the value stored at ρ, and the analysis of ρ can simply flow from the store preceding e to the one immediately after e without passing through e. If e is an application of a function called in many different contexts, then this idea makes e fully polymorphic in all the locations that e does not reference.
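The strong/weak update distinction mentioned in the list above can be made concrete with a small OCaml sketch (ours, with invented qualifier names): an abstract store maps each location to a qualifier and a linearity, and an assignment may change the qualifier only when the location is linear.

  type linearity = Zero | One | Omega             (* 0, 1, and "many" *)
  type qual = Locked | Unlocked                   (* example qualifiers *)

  module LocMap = Map.Make (String)               (* abstract locations named by strings *)

  type store = (qual * linearity) LocMap.t

  (* Strong updates may change the qualifier arbitrarily; weak updates may not. *)
  let update (rho : string) (q : qual) (st : store) : store =
    match LocMap.find_opt rho st with
    | None | Some (_, Zero) -> failwith ("unallocated location " ^ rho)
    | Some (_, One) -> LocMap.add rho (q, One) st        (* strong update *)
    | Some (old_q, Omega) ->
        (* weak update: keep the old qualifier; a full implementation would
           instead require q to be compatible with old_q *)
        LocMap.add rho (old_q, Omega) st

  let () =
    let st = LocMap.(empty |> add "l" (Unlocked, One) |> add "m" (Unlocked, Omega)) in
    let st = update "l" Locked st in
    let st = update "m" Locked st in
    assert (fst (LocMap.find "l" st) = Locked);    (* linear: qualifier changed *)
    assert (fst (LocMap.find "m" st) = Unlocked)   (* non-linear: unchanged *)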
2. RELATED WORK
We discuss three threads of related work: type systems,
dataflow analysis, and tools for finding bugs in software.
Type Systems. Our type system is inspired by region and
alias type checking systems designed for low-level programs
[5, 25, 29]. Two recent language proposals, Vault [7] and
Cyclone [17], adapt similar ideas for checking high-level pro-
grams. Both of these languages are based on type checking
and require programmers to annotate their programs with
types. In contrast, we propose a simpler and less expressive
monomorphic type system that is designed for efficient type inference. Our system incorporates effect inference [20, 32]
to gain a measure of polymorphism. Recent work on Vault
[12] includes a construct focus that is similar to restrict.
The type state system of NIL [27] is one of the earliest
to incorporate flow-sensitive type checking. Xu et al [33]
use a flow-sensitive analysis to check type safety of machine
code. Type systems developed for Java byte code [22, 26]
also incorporate flow-sensitivity to check for initialization
before use and to allow reuse of the same local variable with
di#erent types.
Igarashi and Kobayashi [18] propose a general framework
for resource usage analysis, which associates a trace with
each object specifying valid accesses to the object and checks
that the program satisfies the trace specifications. They
provide an inference algorithm, although it is unclear how
efficient it is in practice since it invokes as a sub-step an
unspecified algorithm to check that a trace set is valid.
Flanagan and Freund [13] use a type checking system to
verify Java locking behavior. In Java locks are acquired and
released according to a lexical discipline. To model locking
in the Linux kernel (as in Section 5) we must allow non-
lexically scoped lock acquires and releases.
The subset of our system consisting of alias analysis and
effect inference can be seen as a monomorphic variant of
region inference [28]. The improvements to region inference
reported in [2] are a much more expensive and precise
method for computing linearities.
Dataflow Analysis. Although our type-based approach is
related to dataflow analysis [1], it differs from classical dataflow
analysis in several ways. First, we generate constraints over
stores and types to model the program. Thus there is no distinction
between forward and backward analysis; information
may flow in both directions during constraint resolution,
depending on the specified qualifier partial order. Second,
we explicitly handle pointers, heap-allocated data, aliasing,
and strong/weak updates. Third, there is no distinction between
interprocedural and intraprocedural analysis in our
system.
The strong/weak update distinction was first described
by Chase et al [4]. Several techniques that allow strong updates
have been proposed for dataflow-based analysis of programs
with pointers, among them [3, 8, 31]. Jagannathan
et al [19] present a system for must-alias analysis of higher-order
languages. The linearity computation in our system
corresponds to their singleness computation, and they use
a similar technique to gain polymorphism by flowing some
bindings around function calls.
Another recent system for checking typestate properties is
ESP [6]. Like our proposal, ESP incorporates a conservative
alias analysis. There are also significant di#erences: ESP is
more directly based on dataflow analysis and incorporates a
path-sensitive symbolic execution component. ESP has been
used to check the correctness of C stream library usage in
gcc.
Bug-Finding Tools. The AST Toolkit provides a framework for posing user-specified queries on abstract syntax
trees annotated with type information. The AST Toolkit
has been successfully used to uncover many bugs [30].
Meta-level compilation [9] is a system for finding bugs in
programs. The programmer specifies a flow-sensitive property
as a finite state automaton. A program is analyzed by
traversing control paths and triggering state transitions of
the automata on particular actions in program statements.
The system warns of potential errors when an automaton enters
an error state. In [9] an intraprocedural analysis of lock
usage in the Linux kernel uncovered many local locking bugs.
Our type-based system found interprocedural locking bugs
that extended across multiple functions or even, in one case,
across multiple files (Section 5). 1 Newer work on meta-level
compilation [10] includes some interprocedural dataflow, but
it is unclear how their interprocedural dataflow analysis handles
aliasing.
LCLint [11] is a dataflow-based tool for checking properties
of programs. To use LCLint, the programmer adds
extra annotations to their program. LCLint performs flow-sensitive
intraprocedural analysis, using the programmer's
1 The bugs were found in a newer version of the Linux kernel
than examined by [9], so a direct comparison is not possible,
though these bugs cannot be found by purely intraprocedural
analysis.
annotations at function calls.
ESC/Java [14] is a tool for finding errors in Java programs.
ESC/Java uses sophisticated theorem-proving technology to
verify program properties, and it includes a rich language for
program annotations.
3. TYPE SYSTEM
We describe our type system using a call-by-value lambda
calculus extended with pointers and type qualifier annota-
tions. The source language is
e ::= x | n | λx.e | e1 e2 | ref e | !e | e1 := e2
    | assert(e, Q) | check(e, Q)
Here x is a variable, n is an integer, λx.e is a function with argument x and body e, the expression e1 e2 is the application
of function e1 to argument e2 , the expression ref e
allocates memory and initializes it to e, the expression !e
dereferences pointer e, and the expression e1 := e2 assigns
the value of e2 to the location e1 points to.
We introduce qualifiers into the source language by adding
two new forms [16]. The expression assert(e, Q) asserts
that e's top-level qualifier is Q, and the expression check(e, Q)
type checks only if e's top-level qualifier is at most Q.
Our type inference algorithm is divided into two steps.
First we perform an initial flow-insensitive alias analysis and
effect inference. Second we generate and solve store and
qualifier constraints and compute linearities.
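For concreteness, here is one plausible OCaml rendering of the source-language grammar (our own constructor names; the paper presents the language only as a grammar):

  (* Constant qualifiers are user-supplied; we leave them abstract here. *)
  type qual = string

  (* The source language: lambda calculus with references and qualifier
     annotations.  assert(e, Q) asserts e's top-level qualifier is Q;
     check(e, Q) requires that qualifier to be at most Q. *)
  type exp =
    | Var of string                (* x *)
    | Int of int                   (* n *)
    | Lam of string * exp          (* \x. e *)
    | App of exp * exp             (* e1 e2 *)
    | Ref of exp                   (* ref e *)
    | Deref of exp                 (* !e *)
    | Assign of exp * exp          (* e1 := e2 *)
    | Assert of exp * qual         (* assert(e, Q) *)
    | Check of exp * qual          (* check(e, Q) *)

  (* Example: allocate a cell, assert a qualifier on it, then check it. *)
  let example =
    App (Lam ("x", Check (Var "x", "open")),
         Assert (Ref (Int 0), "open"))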
3.1 Alias Analysis and Effect Inference
We present the flow-insensitive alias analysis and effect inference as a translation system rewriting source expressions to expressions decorated with locations, types, and effects.
The target language is
e ::= x | n | λ^L x:t. e | e1 e2 | ref^ρ e | !e | e1 := e2
    | assert(e, Q) | check(e, Q)
The target language extends the source language syntax in two ways. Every allocation site ref^ρ e is annotated with the abstract location ρ that is allocated, and each function λ^L x:t. e is annotated with both the type t of its parameter and the effect L of calling the function. Effects are unions and intersections of effect variables, which represent unknown sets of effects that effect inference solves for, and effect constants, each of which stands for either a read, write, or allocation of a location ρ. For simplicity in this paper we do not distinguish which of the three possible effects a constant ρ stands for, although we do so in our implementation.
Foreshadowing flow-sensitive analysis, pointer types are written ref(ρ), and we maintain a separate global abstract store C_I mapping locations ρ to types; C_I(ρ) = t if location ρ contains data of type t. If type inference requires ref(ρ) = ref(ρ′), we also require C_I(ρ) = C_I(ρ′). Function types t1 →^L t2 contain the effect L of calling the function.
Figure 1 gives rules for performing alias analysis and effect inference while translating source programs into our target language. This translation system proves judgments of the form Γ ⊢ e ⇒ e′ : t; L, meaning that in type environment Γ, expression e translates to expression e′, which has type t, and the evaluation of e may have effect L.
Figure 1: Type, alias, and effect inference (the rules (Var), (Int), (Ref), (Deref), (Assign), (Lam), (App), (Assert), (Check), and (Down), discussed below).
The set of locations appearing in a type, locs(t), is
  locs(int) = ∅
  locs(ref(ρ)) = {ρ} ∪ locs(C_I(ρ))
  locs(t1 →^L t2) = locs(t1) ∪ L ∪ locs(t2)
We assume that locs(α) is empty until the type variable α is equated with a constructed type. We define locs(Γ) to be the union of locs(Γ(x)) over the identifiers x bound in Γ.
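A direct recursive reading of locs(·) is easy to code. The OCaml sketch below is our own rendering: it assumes a concrete representation of target types and of the global store C_I, guards against cycles through C_I, and includes the latent effect of a function arrow in the result, which is our reading of the definition above.

  module S = Set.Make (String)
  module M = Map.Make (String)

  type typ =
    | TInt
    | TVar of string                  (* uninstantiated type variable: no locations *)
    | TRef of string                  (* ref(rho), rho an abstract location *)
    | TFun of typ * S.t * typ         (* t1 -L-> t2, L a set of location names *)

  (* locs(t): the locations appearing in t, following C_I under ref types. *)
  let locs (c_i : typ M.t) (t : typ) : S.t =
    let rec go visited t =
      match t with
      | TInt | TVar _ -> S.empty
      | TRef rho ->
          if S.mem rho visited then S.singleton rho
          else
            let inner =
              match M.find_opt rho c_i with
              | Some t' -> go (S.add rho visited) t'
              | None -> S.empty
            in
            S.add rho inner
      | TFun (t1, eff, t2) ->
          S.union eff (S.union (go visited t1) (go visited t2))
    in
    go S.empty t

  let () =
    let c_i = M.add "rho1" (TRef "rho2") (M.add "rho2" TInt M.empty) in
    assert (S.elements (locs c_i (TRef "rho1")) = [ "rho1"; "rho2" ])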
We briefly discuss the rules in Figure 1:
- (Var) and (Int) are standard. In lambda calculus, a variable is an r-value, not an l-value, and accessing a variable has no effect.
- (Ref) allocates a fresh abstract location ρ. We add the effect {ρ} of the allocation to the effect set and record in C_I the type to which the location ρ points.
- (Deref) evaluates e, which yields a value of type t. As is standard in type inference, to compute the location e points to we create a fresh location ρ and equate the type t with the type ref(ρ). We look up the type of location ρ in C_I and add ρ to the effect set.
- (Assign) writes a location. Note that the type of e2
and the type that e1 points to are equated. Because
types contain locations, this forces potentially aliased
locations to be modeled by one abstract location.
- (Lam) defines a function. We annotate the function with the effect of the function body and the type of its parameter. Function types always have an effect
Figure 2: Example alias and effect analysis: (a) a source program that writes to x's cell through an alias, calls f z, and ends with check(!y, qc); (b) the corresponding target program, together with the resulting entries of C_I such as C_I(ρx).
variable on the arrow, which makes effect inference easier. Notice that creating a function has no effect (the potential allocation of a closure does not count as an effect, because a closure cannot be updated).
- (App) applies a function to an argument. The effect of applying e1 to e2 includes the latent effect of calling the function e1 represents. Notice that e1's argument type is constrained to be equal to the type of e2. As before, this forces possibly-aliased locations to have the same abstract location.
- (Assert) and (Check) are translated unchanged into
the target language. Qualifiers are flow-sensitive, so
we do not model them during this first, flow-insensitive
step of the algorithm.
- (Down) hides effects on purely local state. If evaluating e produces an effect on some location ρ that appears neither in Γ nor in t, then ρ cannot be accessed in subsequent computation. Thus we can conservatively approximate the set of effects that may be visible as locs(Γ) ∪ locs(t). By intersecting the effects L with the set of effects that may be visible, we increase the precision of effect inference, which in turn increases the precision of flow-sensitive type qualifier inference. Although (Down) is not a syntactic rule, it only needs to be applied once per function body [15].
Figure
2 shows an example program and its translation.
We use some syntactic sugar; all of these constructs can be
encoded in our language (e.g., by assuming a primitive Y
combinator of the appropriate type). In this example the
constant qualifiers qa, qb, and qc are in the discrete partial order (the qualifiers are incomparable). Just before f returns, we wish to check that y has the qualifier qc. This
check succeeds only if we can model the update to y as a
strong update.
In
Figure
2, we assign x, y, and z distinct locations ρx, ρy, and ρz, respectively. Because f is called with argument z and our system is not polymorphic in locations, our alias analysis requires that the types of z and w match, and thus w is given the type ref(ρz). Finally, notice that since x and y are purely local to the body of f, using the rule (Down) our analysis hides all effects on ρx and ρy. The effect of f is {ρz} because f writes to its parameter w, which has type ref(ρz). (More precisely, f's effect is an effect variable of which {ρz} is a lower bound.)
Let n be the size of the input program. Applying the
rules in Figure 1 generates a constraint system of size O(n),
using a suitable representation of locs(Γ) ∪ locs(t) (see [15]). Resolving the type equality constraints in the usual way with unification takes O(n α(n)) time, where α(·) is the inverse Ackermann function. The remaining constraints are effect constraints, each giving a lower bound L on an effect variable. We solve these constraints on demand: in the next step of the algorithm we ask queries of the form ρ ∈ L. We can answer all such queries for a single location ρ in O(n) time [15].
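One way to picture the lazy solving of effect constraints is as a least-fixed-point query over the constraint set. The OCaml sketch below is our own naive formulation (it ignores the data structures needed for the stated O(n) bound): effects are union/intersection terms over location constants and effect variables, constraints give lower bounds to variables, and a query asks whether a given location occurs in an effect under the least solution.

  type eff =
    | Loc of string            (* the effect constant for a location rho *)
    | EVar of string           (* an effect variable *)
    | Union of eff * eff
    | Inter of eff * eff
    | Empty

  (* Constraints of the form  lower <= upper  (the variable must include lower). *)
  type constr = { lower : eff; upper : string }

  (* Does rho occur in e, given a current guess for which variables contain rho? *)
  let rec occurs rho guess e =
    match e with
    | Empty -> false
    | Loc r -> r = rho
    | EVar v -> List.mem v guess
    | Union (a, b) -> occurs rho guess a || occurs rho guess b
    | Inter (a, b) -> occurs rho guess a && occurs rho guess b

  (* Least solution: iterate "rho is in a variable if rho is in one of its
     lower bounds" until the set of variables known to contain rho is stable. *)
  let solve_query rho (constraints : constr list) (l : eff) : bool =
    let step guess =
      List.fold_left
        (fun g c ->
          if (not (List.mem c.upper g)) && occurs rho g c.lower then c.upper :: g
          else g)
        guess constraints
    in
    let rec fix guess =
      let guess' = step guess in
      if List.length guess' = List.length guess then guess else fix guess'
    in
    occurs rho (fix []) l

  let () =
    (* eps2 >= {rho1} u eps1 and eps1 >= {rho2}: is rho2 in eps2?  Yes. *)
    let cs = [ { lower = Union (Loc "rho1", EVar "eps1"); upper = "eps2" };
               { lower = Loc "rho2"; upper = "eps1" } ] in
    assert (solve_query "rho2" cs (EVar "eps2"));
    assert (not (solve_query "rho3" cs (EVar "eps2")))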
3.2 Stores and Qualified Types
Next we perform flow-sensitive analysis to check the qualifier-
related annotations. In this second step of the algorithm we
take as input a program that has been decorated with types, locations, and effects by the inference algorithm of Figure 1. Throughout this step we treat the abstract locations ρ and effects L from the first step as constants. We analyze the input program using the extended types shown below:
  τ ::= Q σ
  σ ::= int | ref(ρ) | (C, τ) →^L (C′, τ′)
  C ::= . . . | Alloc(C, ρ) | Merge(C, C′, L) | Filter(C, L) | Assign(C, ρ: τ)
Here qualified types τ are standard types with qualifiers inserted at every level. Qualifiers Q are either qualifier variables, which stand for currently unknown qualifiers, or constant qualifiers B, specified by the user. We assume a supplied partial order ≤ among constant qualifiers.
The flow-sensitive analysis associates a store C with each
program point. This is in contrast to the flow-insensitive
step, which uses one global store CI to give types to locations. Function types are extended to (C, τ ) →L (C′ , τ′ ), where C describes the store in which the function is invoked and C′ describes the store when the function returns.
Each location in each store has an associated linearity. There are three linearities: 0 for unallocated locations, 1 for linear locations (these admit strong updates), and ω for non-linear locations (which admit only weak updates). The three linearities form a lattice 0 < 1 < ω. Addition on linearities is as expected: 0 + x = x, 1 + 1 = ω, and ω + x = ω. A store is a vector that assigns a type τi and a linearity to every abstract location ρi computed by the alias analysis. We call such a vector a ground store. If G is a ground store, we write G(ρ) for ρ's type in G, and we write Glin (ρ) for ρ's linearity in G.
Rather than explicitly associating a ground store with every program point, we represent stores using a constraint formalism. As the base case, we model an unknown store using a store variable ε. We relate stores at consecutive program points either with store constructors (see below), which build new stores from old stores, or with store constraints C1 ≤ C2 , which are generated at branches from the program point represented by store C1 to the program point represented by store C2 .
A solution to a system of store constraints is a mapping from store variables to ground stores, and from Assign(·) stores (see below) to types. A solution S satisfies a system of store constraints if for each constraint C1 ≤ C2 the ground stores S(C1 ) and S(C2 ) are compatible according to the rules in Figure 3, and the solution satisfies the rules in Figure 4.
Figure 3: Store compatibility rules
In
Figure
3, constraints between stores yield constraints
between linearities and types, which in turn yield constraints
between qualifiers and between stores. In our constraint
resolution algorithm, we exploit the fact that we are only
interested in qualifier relationships to solve as little of the
expensive store constraints as possible (see Section 3.4).
In (Ref≤ ) we require that the locations on the left- and right-hand sides of the ≤ are the same. Alias analysis enforces this property, which corresponds to the standard requirement that subtyping becomes equality below a pointer constructor. We emphasize that in this step we treat abstract locations ρ as constants, and we will never attempt (or need) to unify two distinct locations to satisfy (Ref≤ ).
In (Fun≤ ) we require that the effects of the constrained function types match exactly. It would also be sound to allow the effect of the left-hand function to be a subset of the effect of the right-hand function.
Figure 4 formalizes the four kinds of store constructors by showing how a solution S behaves on constructed stores. The store Alloc(C, ρ) is the same as store C, except that location ρ has been allocated once more. Allocating location ρ does not affect the types in the store but increases the linearity of location ρ by one.
The store Merge(C, C′ , L) combines stores C and C′ according to effect L. If ρ ∈ L, then Merge(C, C′ , L) assigns ρ the type it has in C, otherwise Merge(C, C′ , L) assigns ρ the type it has in C′ . The linearity definition is similar.
The store Filter(C, L) assigns the same types and linearities as C for all locations ρ such that ρ ∈ L. The types of all other locations are undefined, and the linearities of all other locations are 0.
Finally, the store Assign(C, ρ : τ ) is the same as store C, except that location ρ has been updated to a type τ′ where τ ≤ τ′ (we allow a subtyping step here). If ρ is non-linear in C, then in Figure 4(c) we require that the type of ρ in Assign(C, ρ : τ ) be at least the type of ρ in C; this corresponds to a weak update. (In our implementation we require equality here.) Putting these together, intuitively if ρ is linear then its type in Assign(C, ρ : τ ) is τ , otherwise its type is τ ⊔ S(C)(ρ), where ⊔ is the least upper bound.
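To picture the constructors, the following sketch is our own illustration (the representation and helper names are invented, not the authors'): stores form a small tagged union, and looking up S(C)(ρ) deconstructs the store as in Figure 4(a). Note that for a non-linear Assign the weak-update requirement of Figure 4(c) is imposed as a separate constraint, not inside the lookup.
/* Sketch of store constructors and the recursive lookup S(C)(rho).
   Types are opaque; effects are bit sets over at most 64 locations. */
#include <stdbool.h>
#include <stddef.h>

typedef struct Type Type;            /* qualified type: details omitted */
typedef unsigned Loc;                /* abstract location rho           */
typedef unsigned long long Effect;   /* effect L as a bit set           */

typedef enum { VAR, ALLOC, MERGE, FILTER, ASSIGN } StoreKind;

typedef struct Store {
    StoreKind kind;
    struct Store *c, *c2;   /* sub-stores (c2 only used by MERGE)       */
    Loc loc;                /* location for ALLOC / ASSIGN              */
    Effect eff;             /* effect for MERGE / FILTER                */
    Type *assigned;         /* type written by ASSIGN                   */
    Type **solution;        /* for VAR: S(eps), indexed by location     */
} Store;

static bool in_effect(Loc l, Effect e) { return (e >> l) & 1ULL; }

/* Deconstruct C until we hit a store variable or an assignment to rho,
   mirroring the equations of Figure 4(a). */
Type *store_type(Store *c, Loc rho) {
    switch (c->kind) {
    case VAR:    return c->solution[rho];
    case ALLOC:  return store_type(c->c, rho);          /* types unchanged */
    case ASSIGN: return c->loc == rho ? c->assigned     /* strong view     */
                                      : store_type(c->c, rho);
    case MERGE:  return in_effect(rho, c->eff) ? store_type(c->c, rho)
                                               : store_type(c->c2, rho);
    case FILTER: return in_effect(rho, c->eff) ? store_type(c->c, rho)
                                               : NULL;  /* undefined       */
    }
    return NULL;
}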
Figure 4: Extending a solution to constructed stores ((a) types, (b) linearities, (c) weak updates)
3.3 Flow-Sensitive Constraint Generation
Figure 5 gives the type inference rules for our system. In this system judgments have the form Γ, C ⊢ e : τ, C′ , meaning that in type environment Γ and with initial store C, evaluating e yields a result of type τ and a new store C′ .
We write C(ρ) for the type associated with ρ in store C; we discuss the computation of C(ρ) in Section 3.4. We use the function sp(t) to decorate a standard type t with fresh qualifier and store variables.
We briefly discuss the rules in Figure 5:
. (Var) and (Int) are standard. For (Int), we pick a fresh qualifier variable κ to annotate n's type.
. (Ref) adds a location ρ to the store C′ , yielding the store Alloc(C′ , ρ). The type τ of e is constrained to be compatible with ρ's type in C′ . 2
. (Deref) looks up the type of e's location ρ in the current store C′ . In this rule, any qualifier may appear on e's type; qualifiers are checked only by (Check), see below.
. (Assign) produces a new store representing the assignment of type τ to location ρ.
. (Lam) type checks the function body e in a fresh initial store ε and with parameter x bound to a type with fresh qualifier variables.
2 An alternative formulation is to track the type τ of e as part of the constructed store Alloc(·), and only constrain τ to be compatible with C′ (ρ) if after the allocation ρ is non-linear.
Figure 5: Constraint generation rules
. (App) constrains τ2 ≤ τ to ensure that e2 's type is compatible with e1 's argument type. A constraint between a Filter of the current store and the function's initial store ensures that the current state of the locations that e1 uses, which are captured by its effect set L, is compatible with the state function e1 expects. The final store Merge(ε′ , C′ , L) joins the store C′ before the function call with the result store ε′ of the function. Intuitively, this rule gives us some low-cost polymorphism, in which functions do not act as join points for locations they do not use.
. (Assert) adds a qualifier annotation to the program, and (Check) checks that the inferred top-level qualifier of e is compatible with the expected qualifier Q.
Figure 6 shows the stores and store constraints generated for our example program. We have slightly simplified the graph for clarity. Here ε is f 's initial store and ε′ is f 's final store. We use undirected edges for store constructors and a directed edge from C1 to C2 for the constraint C1 ≤ C2 .
We step through constraint generation. We model the allocation of ρx with the store Alloc(·, ρx ). Location ρx is initialized to 0, which is given the type κ0 int for fresh qualifier variable κ0 . (Ref) generates the constraint κ0 int ≤ Alloc(·, ρx )(ρx ), requiring that the type of 0 be compatible with ρx 's type. We model the allocation and initialization of ρy and ρz similarly. Then we construct three Assign stores to represent the assignment statements. We give 3 and 4 the types κ3 int and κ4 int, respectively, where κ3 and κ4 are fresh qualifier variables.
Figure 6: Store constraints for the example
For the recursive call to f , we construct a Filter and add a constraint on ε. The Merge store represents the state when the recursive call to f returns. We join the two branches of the conditional by making edges to ε′ . Notice the cycle, due to recursion, in which state from ε can flow to the Merge, which in turn can flow to ε′ . Finally, the qualifier check requires that the type of ρy has qualifier qc .
3.4 Flow-Sensitive Constraint Resolution
The rules of Figure 5 generate three kinds of constraints: qualifier constraints Q ≤ Q′ , subtyping constraints τ ≤ τ′ , and store constraints C ≤ ε (the right-hand side of a store constraint is always a store variable). A set of m type and qualifier constraints can be solved in O(m) time using well-known techniques [16, 23], so in this section we focus on computing a solution S to a set of store constraints.
Our analysis is most precise if as few locations as possible are non-linear. Recall that linearities naturally form a partial order 0 < 1 < ω. Thus, given a set of constructed stores and store constraints, we perform a least fixpoint computation to determine S(C)lin (ρ). We initially assume that in every store, location ρ has linearity 0. Then we exhaustively apply the rules in Figure 4(b), together with the rule that C1 ≤ C2 implies S(C1 )lin (ρ) ≤ S(C2 )lin (ρ), until we reach a fixpoint. This last rule is derived from Figure 3.
In our implementation, we compute S(C)lin (ρ) in a single pass over the store constraints using Tarjan's strongly-connected components algorithm to find cycles in the store constraint graph. For each such cycle containing more than one allocation of the same location ρ we set the linearity of ρ to ω in all stores on the cycle.
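As an illustration, the following is our own sketch of the naive least-fixpoint version of the linearity computation (not the one-pass SCC algorithm used in the implementation); every constructor and constraint other than an allocation is expanded here into per-location flow edges.
/* Naive fixpoint for linearities 0 < 1 < omega.  lin[s][l] starts at 0. */
#include <stdbool.h>

#define MAX_STORES 1024
#define MAX_LOCS   256

enum { L0 = 0, L1 = 1, LOMEGA = 2 };

static int lin_add(int a, int b) { int s = a + b; return s > LOMEGA ? LOMEGA : s; }

typedef struct { int src, dst, loc; } AllocEdge;  /* dst = Alloc(src, loc)      */
typedef struct { int src, dst, loc; } LinFlow;    /* lin(dst)(loc) >= lin(src)(loc):
                                                     covers store constraints and
                                                     the non-allocating cases of
                                                     Alloc, Merge, and Filter    */

int lin[MAX_STORES][MAX_LOCS];                    /* zero-initialized */

void compute_linearities(AllocEdge *alloc, int na, LinFlow *flow, int nf) {
    bool changed = true;
    while (changed) {                             /* iterate to a fixpoint */
        changed = false;
        for (int i = 0; i < na; i++) {            /* Figure 4(b): Alloc bumps loc */
            int v = lin_add(lin[alloc[i].src][alloc[i].loc], L1);
            if (v > lin[alloc[i].dst][alloc[i].loc]) {
                lin[alloc[i].dst][alloc[i].loc] = v; changed = true;
            }
        }
        for (int i = 0; i < nf; i++) {            /* monotone propagation */
            int v = lin[flow[i].src][flow[i].loc];
            if (v > lin[flow[i].dst][flow[i].loc]) {
                lin[flow[i].dst][flow[i].loc] = v; changed = true;
            }
        }
    }
}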
Given this algorithm to compute S(C)lin (ρ), in principle we can then solve the implied typing constraints using the following simple procedure. For each store variable ε, initialize S(ε) to a map
{ρ1 : sp(CI (ρ1 )), . . . , ρn : sp(CI (ρn ))}
and for each store Assign(C, ρ : τ ) initialize S(Assign(C, ρ : τ ))(ρ) to sp(CI (ρ)), thereby assigning fresh qualifiers to the type of every location at every program point. Replace uses of C(ρ) in Figure 5 with S(C)(ρ), using the logic in Figure 4(a). Apply the following two closure rules until no more constraints are generated:
C1 ≤ C2 ⟹ S(C1 )(ρ) ≤ S(C2 )(ρ) for all ρ
S(C)lin (ρ) = ω ⟹ S(C)(ρ) ≤ S(Assign(C, ρ : τ ))(ρ) for all stores Assign(C, ρ : τ )
Given a program of size n, in the worst case this naive algorithm requires at least n^2 space and time to build S(·) and generate the necessary type constraints. This cost is too high for all but small examples. We reduce this cost in practice by taking advantage of several observations.
Many locations are flow-insensitive. If a location # never
appears on the left-hand side of an assignment, then #'s type
cannot change. Thus we can give # one global type instead of
one type per program point. In imperative languages such as
C, C++, and Java, function parameters are a major source
of flow-insensitive locations. In these languages, because
parameters are l-values, they have an associated memory
location that is initialized but then often never subsequently
changed.
Adding extra store variables trades space for time. To compute S(C)(ρ) for a constructed store C, we must deconstruct C recursively until we reach a variable store or an assignment to ρ (see Figure 4(a)). Because we represent the effect constraints compactly (in linear space), deconstructing Filter(·, L) or Merge(·, ·, L) may require a potentially linear-time computation to check whether ρ ∈ L. We recover efficient lookups by replacing C with a fresh store variable ε and adding the constraint C ≤ ε. Then rather than computing S(C)(ρ) we compute S(ε)(ρ), which requires only a map lookup. Of course, we must use space to store ρ in S(ε). However, as shown below, we often can avoid this cost completely. We apply this transformation to each store constructed during constraint inference.
Not every store needs every location. Rather than assuming S(ε) contains all locations, we add needed locations lazily. We add a location ρ to S(ε) the first time the analysis requests S(ε)(ρ) and whenever there is a constraint C ≤ ε or ε ≤ C such that ρ ∈ S(C). Stores constructed with Filter and Merge will tend to stop propagation of locations, saving space (e.g., if Filter(C, L) ≤ ε and ρ ∈ S(ε), but ρ ∉ L, then we do not propagate ρ to C).
We can extend this idea further. For each qualifier variable κ, inference maintains a set of possible qualifier constants that are valid solutions for κ. If that set contains every constant qualifier, then κ is uninteresting (i.e., κ is constrained only by other qualifier variables), otherwise κ is interesting. A type τ is interesting if any qualifier in τ is interesting, otherwise τ is uninteresting. We then modify the closure rules as follows:
C1 ≤ C2 ⟹ S(C1 )(ρ) ≤ S(C2 )(ρ) for all ρ ∈ S(C1 ) or S(C2 ) s.t. S(C1 )(ρ) or S(C2 )(ρ) is interesting
S(C)lin (ρ) = ω ⟹ S(C)(ρ) ≤ S(Assign(C, ρ : τ ))(ρ) for all Assign(C, ρ : τ ) s.t. S(C)(ρ) or S(Assign(C, ρ : τ ))(ρ) is interesting
In this way, if a location ρ is bound to an uninteresting type, then we need not propagate ρ through the constraint graph.
Figure 7 gives an algorithm for lazy location propagation. We associate a mark with each ρ in each S(ε) and with ρ in each Assign(C, ρ : τ ). Initially this mark is not set, indicating that location ρ is bound to an uninteresting type.
If a qualifier variable κ appears in S(ε)(ρ), we associate the pair (ε, ρ) with κ, and similarly for Assign stores. If during constraint resolution the set of possible solutions of κ changes, we call Propagate(ρ, C) to propagate ρ, and in turn κ, through the store constraint graph.
If Propagate(ρ, C) is called and ρ is already marked in C, we do nothing. Otherwise, Back-prop() and Forward-prop() make appropriate constraints between S(C)(ρ) and S(C′ )(ρ) for every store C′ reachable from C. This step may add ρ to C′ if C′ is a store variable, and the type constraints that Back-prop() and Forward-prop() generate may trigger subsequent calls to Propagate().
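The following is a much-simplified sketch (ours, not the code of Figure 7): stores are nodes of a constraint graph, each edge records which locations it lets through (the effect L for Filter and Merge edges, and nothing past a strong update; everything for ordinary constraint edges), and marks ensure each (store, location) pair is visited at most once. A real implementation would also deduplicate the generated constraints.
/* Simplified lazy location propagation over the store constraint graph. */
#include <stdbool.h>
#include <stdio.h>

#define NSTORES 8
#define NLOCS   8

typedef struct { int from, to; unsigned passes; } Edge;  /* bit i: loc i passes */

static Edge edges[] = {
    {0, 1, ~0u},   /* ordinary constraint edge: every location flows      */
    {1, 2, 0x2},   /* e.g. a Filter edge: only location 1 flows through   */
};
static const int nedges = sizeof edges / sizeof edges[0];

static bool marked[NSTORES][NLOCS];

static void note_constraint(int s1, int s2, int loc) {
    /* record the type constraint induced along this edge */
    printf("S(C%d)(rho%d) <= S(C%d)(rho%d)\n", s1, loc, s2, loc);
}

/* Called when the qualifier bound to (store, loc) becomes interesting. */
void propagate(int store, int loc) {
    if (marked[store][loc]) return;
    marked[store][loc] = true;
    for (int i = 0; i < nedges; i++) {
        if (!((edges[i].passes >> loc) & 1u)) continue;  /* edge stops loc */
        if (edges[i].from == store) {                    /* forward        */
            note_constraint(edges[i].from, edges[i].to, loc);
            propagate(edges[i].to, loc);
        } else if (edges[i].to == store) {               /* backward       */
            note_constraint(edges[i].from, edges[i].to, loc);
            propagate(edges[i].from, loc);
        }
    }
}

int main(void) { propagate(0, 1); return 0; }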
Consider again our running example. Figure 8 shows how
locations and qualifiers propagate through the store constraint
graph. Dotted edges in this graph indicate inferred
constraints (discussed below). For clarity we have omitted
the Alloc edges (summarized with a dashed line) and the
base types.
The four type constraints in Figure 6 are shown as directed edges in Figure 8. For example, the constraint κ0 int ≤ C(ρx ) reduces to the constraint κ0 ≤ κx , which is a directed edge from κ0 to κx . Adding this constraint does not cause any propagation; this constraint is among variables. Notice that the assignment of type κ3 int to ρx also does not cause any propagation.
The constraint qa int ≤ Alloc(·, ρx )(ρy ) reduces to a constraint on ρy 's type, which reduces to qa ≤ κy . This constraint does trigger propagation. Propagate first pushes ρy backward to the Filter store. But since ρy ∉ L, propagation stops. Next we push ρy forward through the graph and stop when we reach the store Assign(·, ρy : qc int); forward propagation assumes that this is a strong update.
Because Assign(·, ρy : qc int) contains an interesting type, ρy is propagated from this store forward through the graph. On one path, propagation stops at the Filter. The other paths yield a constraint qc ≤ κ′y . Notice that the constraint system remains satisfiable.
The constraint qb ≤ κz triggers a propagation step as before. However, this time ρz ∈ L, and during backward propagation when we reach the Filter we must continue. Eventually we reach Assign(·, ρz : κ4 int) and add the constraint κ4 ≤ κz . This in turn triggers propagation from Assign(·, ρz : κ4 int). This propagation step reaches ε′ , adds ρz to S(ε′ ), and generates the constraint κ4 ≤ κ′z . Finally, we determine that in the Assign stores ρx and ρy are linear and ρz is non-linear. (The linearity computation uses the Alloc(·) stores, which are not shown.) Thus the update to ρz is a weak update, which yields a constraint κz ≤ κ4 .
This example illustrates three kinds of propagation. The location ρx is never interesting, so it is not propagated through the graph. The location ρy is propagated, but propagation stops at the strong update to ρy and also at the Filter, because the (Down) rule in Figure 1 was able to prove that ρy is purely local to f . The location ρz , on the other hand, is not purely local to f , and thus all instances of ρz are conflated, and ρz admits only weak updates.
Figure 7: Lazy location constraint propagation
4. RESTRICT
As mentioned in the introduction, type inference may fail
because a location on which a strong update is needed may
be non-linear. In practice a major source of non-linear locations
is data structures. For example, given a linked list
l, our alias analysis often cannot distinguish l->lock from
l->next->lock, hence both will likely be non-linear.
Figure 8: Constraint propagation
Our solution to this problem is to add a new form
restrict x = e1 in e2
to the language. Intuitively, this declares that of all aliases of e1 , only x and copies derived from x will be used within e2 . For example, consider
restrict x = y in {
    x := . ;  /* valid */
    y := . ;  /* invalid */
}
The first assignment through x is valid, but the assignment
through y is forbidden by restrict.
We check restrict using a type rule (Restrict), which is integrated into the first inference pass of Figure 1 and produces the translated form restrict ρ′ x = e′1 in e′2 . Here we bind x to a type with a fresh abstract location ρ′ to distinguish dereferences of x from dereferences of other aliases of e1 . The constraint ρ ∉ L2 forbids location ρ from being dereferenced in e2 ; notice dereferences of ρ′ within e2 are allowed. We require that ρ′ not escape the scope of e2 with ρ′ ∉ locs(Γ) ∪ locs(CI (ρ)) ∪ locs(t2 ), and we also add ρ to the effect set. We translate restrict into the target language by annotating it with the location ρ′ that x is bound to. A full discussion of restrict, including a soundness proof, can be found in a technical report [15].
We use restrict to locally recover strong updates. The key observation is that the location ρ of e1 and the location ρ′ of x can be different. Thus even if the linearity of ρ is ω, the linearity of ρ′ can be 1. Therefore within the body of e2 we may be able to perform strong updates on ρ′ . When the scope of restrict ends, we may need to do a weak update from ρ′ to ρ.
For example, suppose that we wish to type check a state change of some lock deep within a data structure, and the location of the lock is non-linear. The following is not atypical of Linux kernel code:
    . /* non-linear loc */
Assuming the type system determines that the . above contains no accesses to aliases of the lock and does not alias the lock to a non-linear location, we can modify the code to type check as follows:
    restrict lock = &a->b[c].d->lock in {
        .
    }
In our flow-sensitive step, we use an inference rule (Restrict) for the annotated form restrict ρ′ x = e1 in e2 . In this rule, we infer a type for e1 , which is a pointer to some location ρ. Then we create a new store in which the location ρ′ of x is both allocated and initialized to ρ's type in the current store. In this store, with x added to the type environment, we evaluate e2 . Finally, the result store is the store produced by e2 , with a potentially weak update assigning the contents of ρ′ to ρ.
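As a concrete illustration (ours; the types, fields, and stand-in lock functions below are invented, and the real tool takes restrict as an annotation in its own syntax rather than plain C), the kernel pattern above amounts to funnelling every access to the lock through a single local pointer, whose abstract location is linear even though the lock's own location is not:
/* The lock lives inside a list node, so the alias analysis gives every
   node's lock the same non-linear abstract location.  Scoping the accesses
   through one local pointer, in the spirit of the restrict binding, lets
   the checker perform a strong update on that pointer's linear location. */
#include <stddef.h>

typedef struct { int locked; } spinlock_t;
typedef struct node { struct node *next; spinlock_t lock; } node_t;

void spin_lock(spinlock_t *l)   { l->locked = 1; }   /* stand-ins */
void spin_unlock(spinlock_t *l) { l->locked = 0; }

void poke(node_t *n) {
    /* In the paper's notation:  restrict lock = &n->lock in { ... } */
    spinlock_t *lock = &n->lock;   /* only 'lock' is used below      */
    spin_lock(lock);
    /* ... work that touches no other alias of n->lock ...           */
    spin_unlock(lock);
}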
5. EXPERIMENTS
To test our ideas in practice we have built a tool Cqual
that implements our inference algorithm. To use Cqual,
programmers annotate their C programs with type quali-
fiers, which are added to the C syntax in the same way as
const [16]. The tool Cqual can analyze a single file or a
whole program. As is standard in type-based analysis, when
analyzing a single file, the programmer supplies type signatures
for any external functions or variables.
We have used Cqual to check two program properties:
locking in the 2.4.9 Linux kernel device drivers and uses
of the C stream library. Our implementation is sound up
to the unsafe features of C: type casts, variable-argument
functions, and ill-defined pointer arithmetic. We currently
make no attempt to track the effect of any of these features
on aliasing, except for the special case of type casting the
result of malloc-like functions. In combination with a system
for enforcing memory safety, such as CCured [21], our
implementation would be sound.
In our implementation, we do not allow strong updates
on locations containing functions. This improves efficiency because we never need to recompute S(C)lin (ρ); weak updates will not add constraints between stores. Additionally, observe that allocations affect linearities but not types, and reads and writes affect types but not linearities. Thus in our implementation we also improve the precision of the analysis by distinguishing read, write, and allocation effects. We omit details due to space constraints.
The analysis results are presented to the user with an
emacs-based user interface. The source code is colored according
to the inferred qualifiers. Type errors are hyper-linked
to the source line at which the error first occurred,
and the user can click on qualifiers to view a path through
the constraint graph that shows why a type error was de-
tected. We have found the ability to visualize constraint
solutions in terms of the original source syntax not just use-
ful, but essential, to understanding the results of inference.
More detail on the ideas in the user interface can be found
in [24].
5.1 Linux Kernel Locking
The Linux kernel includes two primitive locking functions,
which are used extensively by device drivers:
void spin_lock(spinlock_t *lock);
void spin_unlock(spinlock_t *lock);
We use three qualifiers locked, unlocked, and ⊤ (unknown) to check locking behavior. The subtyping relation is locked < ⊤ and unlocked < ⊤. We assign spin_lock the type
    (C, ref (ρ)) →{ρ} (Assign(C, ρ : locked spinlock_t), . )
    where C(ρ) = unlocked spinlock_t
We omit the function qualifier since it is irrelevant. The type of spin_lock requires that the lock passed as the argument be unlocked (see the where clause) and changes it to locked upon returning. The signature for spin_unlock is the same with locked and unlocked exchanged.
In practice we give spin_lock this type signature by supplying Cqual with the following definition:
    void spin_lock($unlocked spinlock_t *lock) {
        change_type(*lock, $locked spinlock_t);
    }
Here change_type(x, t) is just like the assignment
    x = <something of type t>;
except that rather than give an explicit right-hand side we just give the type of the right-hand side. In this case the programmer needs to supply the body of spin_lock because it is inline assembly code.
Since our implementation currently lacks parametric polymorphism, we inline calls to spin_lock and spin_unlock. Using these type signatures we can check for three kinds of errors: deadlocks from acquiring a lock already held by the same thread, attempts to release a lock already released by the same thread, and attempts to acquire or release a lock in an unknown (⊤) state.
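Schematically, the three error classes look like this (our own, invented fragments, with the driver context elided; the qualifier signatures are the ones given above):
typedef struct spinlock spinlock_t;
void spin_lock(spinlock_t *lock)   { (void)lock; /* body irrelevant here */ }
void spin_unlock(spinlock_t *lock) { (void)lock; }
spinlock_t *l;                   /* some lock, assumed unlocked on entry    */

void double_acquire(void) {
    spin_lock(l);
    spin_lock(l);                /* error: argument must be unlocked        */
}
void double_release(void) {
    spin_lock(l);
    spin_unlock(l);
    spin_unlock(l);              /* error: lock already released            */
}
void unknown_state(int c) {
    if (c) spin_lock(l);
    spin_unlock(l);              /* error: lock is in the unknown state     */
}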
We analyzed 513 whole device driver modules (a whole
module includes all the files that make up a single driver).
A module must meet a well-specified kernel interface, which
we model with a main function that non-deterministically
calls all possible driver functions registered with the kernel.
We also separately analyzed each of the 892 driver files
making up the whole modules. In these experiments we
removed the ⊤ qualifier so that locked and unlocked are
incomparable, and we made optimistic assumptions about
the environment in which each file is invoked.
We examined the results for 64 of the 513 whole device
driver modules and for all of the 892 separately analyzed
driver files. We found 14 apparently new locking bugs, including
one which spans multiple files. In five of the apparent
bugs a function tries to acquire a lock already held by
a function above it in the call chain, leading to a deadlock.
For example, the emu10k1 module contains a deadlock (we omit the void return types):
    emu10k1_mute_irqhandler(struct emu10k1_card *card) {
        struct patch_manager *mgr = . ;
        . spin_lock_irqsave(&mgr->lock, flags);
        emu10k1_set_oss_vol(card, . );
        .
    }
    emu10k1_set_oss_vol(struct emu10k1_card *card, . ) {
        . emu10k1_set_volume_gpr(card, . );
        .
    }
    emu10k1_set_volume_gpr(struct emu10k1_card *card, . ) {
        struct patch_manager *mgr = . ;
        . spin_lock_irqsave(&mgr->lock, flags); .
    }
Note that detecting this error requires interprocedural analysis.
One of our goals is to understand how often, and why,
our system fails to type check real programs. We have categorized
every type error in the separate file analysis of the
driver files. In this experiment, of the 52 files that fail
to type check, 11 files have locking bugs (sometimes more
than one) and the remaining 41 files have type errors. Half
of these type errors are due to incorrect assumptions about
the interface for functions; these type errors are eliminated
by moving to whole module analysis. The remaining type
errors fall into two main categories.
In many cases the problem is that our alias analysis is not
strong enough to type check the program. Another common
class of type errors arises when locks are conditionally
acquired and released. In this case, a lock is acquired if a
predicate P is true. Before the lock is released, P is tested
again to check whether the lock is held. Our system is not
path sensitive, and our tool signals a type error at the point
where the path on which the lock is acquired joins with the
path on which the lock is not acquired (since we did not
use ⊤ in these single-file experiments; in the whole-module analysis, this error is detected later on, when there is an attempt to acquire or release the lock in the ⊤ state). Most of these examples could be rewritten with little effort to pass our type system. In our opinion, this would usually make the code clearer and safer: the duplication of the test on P invites new bugs when the program is modified.
Even after further improvements, we expect some dynamically
correct programs will not type check. As future work,
we propose the following solution. The qualifier ⊤ represents an unknown state. We can use the information in the constraints to automatically insert coercions to and from ⊤
where needed. During execution these coercions perform
runtime tests to verify locks are in the correct state. Thus,
our approach can introduce dynamic type checking in situations
where we cannot prove safety statically.
Of the 513 whole modules, 196 contain type errors, many
of which are duplicates from shared code. We examined 64
of the type error-containing modules and discovered that a
major source of type errors is when there are multiple aliases
of a location, but only one alias is actually used in the code
of interest. Not surprisingly, larger programs, such as whole
modules, have more problems with spurious aliasing than
the optimistic single-file analysis. We added restrict annotations
by hand to the 64 modules we looked at, including
the emu10k1 module, which yielded the largest number of
such false positives. Using restrict, we eliminated all of
the false positives in these modules that occurred because
non-linear locations could not be strongly updated. This
supports our belief that restrict is the right tool for dealing
with (necessarily) conservative alias analysis. Currently
adding restrict by hand is burdensome, requiring a relatively
large number of annotations. We leave the problem
of automatically inferring restrict annotations as future
work.
5.2 C Stream Library
Figure 9: Resource usage for whole module analysis (time in seconds and space in Mbytes versus size in preprocessed lines of code, for the parsing, flow-insensitive, and flow-sensitive phases)
As mentioned in the introduction, the C stream library interface contains certain sequencing constraints. For ex-
ample, a file must be opened for reading before being read.
A special property of the C stream library is that the result
of fopen must be tested against NULL before being used, because
fopen may or may not succeed. The class of C stream
library usage errors our tool can detect includes files used
without having been opened and checked against NULL, files
opened and then accessed in an incompatible mode, and files
accessed after being closed. We omit the details due to space
constraints.
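Schematically (our own, invented examples), the three classes of stream errors look like this:
/* Stream-library misuse of the kind the qualifier system is designed to catch. */
#include <stdio.h>

void use_before_check(const char *path) {
    FILE *f = fopen(path, "r");
    fgetc(f);                  /* error: f not yet tested against NULL */
    if (f) fclose(f);
}

void wrong_mode(const char *path) {
    FILE *f = fopen(path, "r");
    if (!f) return;
    fprintf(f, "x");           /* error: f was opened read-only        */
    fclose(f);
}

void use_after_close(const char *path) {
    FILE *f = fopen(path, "w");
    if (!f) return;
    fclose(f);
    fputc('x', f);             /* error: f accessed after being closed */
}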
We tried our tool on two application programs, man-1.5h1
and sendmail-8.11.6. We were primarily interested in the
performance of our tool on a more complex application (see
below), as we did not expect to find any latent stream library
usage bugs in such mature programs. However, we did find
one minor bug in sendmail, in which an opened log file is
never closed in some circumstances.
5.3 Precision and Efficiency
The algorithm described in Section 3.4 is carefully designed
to limit resource usage. Figure 9 shows time and
space usage of whole module analysis versus preprocessed
lines of code for 513 Linux kernel modules. All experiments
were done on a dual processor 550 MHz Pentium III with
2GB of memory running RedHat 6.2.
We divide the resource usage into C parsing and type
checking, flow-insensitive analysis, and flow-sensitive analy-
sis. Flow-insensitive analysis consists of the alias and effect
inference of Figure 1 together with flow-insensitive qualifier
inference [16]. Flow-sensitive analysis consists of the constraint
generation and resolution described in Sections 3.3-
3.4, including the linearity computation. In the graphs, the
reported time and space for each phase includes the time
and space for the previous phases.
The graphs show that the space overhead of flow-sensitive
analysis is relatively small and appears to scale well to large
modules. For all modules the space usage for the flow-sensitive
analysis is within 31% of the space usage for the
flow-insensitive analysis. The running time of the analysis
is more variable, but the absolute running times are within
a factor of 1.3 of the flow-insensitive running times.
The analysis of sendmail-8.11.6, with 175,193 preprocessed
source lines, took 28.8 seconds and 264MB; man-1.5h1,
with 16,411 preprocessed source lines, took 1.85 seconds and
32MB. These results suggest that our algorithm also behaves
efficiently when checking C stream library usage.
6. CONCLUSION
We have presented a system for extending standard type
systems with flow-sensitive type qualifiers. We have given
a lazy constraint resolution algorithm to infer type qualifier
annotations and have shown that our analysis is effective in practice by finding a number of new locking bugs in the Linux kernel.
7.
--R
Better Static Memory Management: Improving Region-Based Analysis of Higher-Order Languages
An Extended Form of Must Alias Analysis for Dynamic Allocation.
Analysis of Pointers and Structures.
Typed Memory Management in a Calculus of Capabilities.
Enforcing High-Level Protocols in Low-Level Software
Checking System Rules Using System-Specific, Programmer-Written Compiler Extensions.
Bugs as Deviant Behavior: A General Approach to Inferring Errors in Systems Code.
Static Detection of Dynamic Memory Errors.
Adoption and Focus: Practical Linear Types for Imperative Programming.
Extended Static Checking for Java.
Checking Programmer-Specified Non-Aliasing
A Theory of Type Qualifiers.
Cyclone user's manual.
Resource Usage Analysis.
A Simple, Comprehensive Type System for Java Bytecode Subroutines.
Tractable Constraints in Finite Semilattices.
Detecting Format String Vulnerabilities with Type Qualifiers.
Alias Types.
A Type System for Java Bytecode Subroutines.
A Programming Language Concept for Enhancing Software Reliability.
Implementation of the Typed Call-by-Value λ-Calculus using a Stack of Regions
Alias Types for Recursive Data Structures.
Personal communication.
Typing References by Effect Inference
Typestate Checking of Machine Code.
--TR
Compilers: principles, techniques, and tools
Typestate: A programming language concept for enhancing software reliability
Polymorphic effect systems
Analysis of pointers and structures
Typing references by effect inference
Implementation of the typed call-by-value λ-calculus using a stack of regions
Context-sensitive interprocedural points-to analysis in the presence of function pointers
An extended form of must alias analysis for dynamic allocation
Efficient context-sensitive pointer analysis for C programs
Better static memory management
Static detection of dynamic memory errors
A type system for Java bytecode subroutines
Single and loving it
A simple, comprehensive type system for Java bytecode subroutines
Typed memory management in a calculus of capabilities
A theory of type qualifiers
Type-based race detection for Java
Enforcing high-level protocols in low-level software
Bugs as deviant behavior
CCured
Resource usage analysis
Adoption and focus
Extended static checking for Java
Alias Types
Typestate Checking of Machine Code
Tractable Constraints in Finite Semilattices
Alias Types for Recursive Data Structures
Cyclone User's Manual, Version 0.1.3
Checking Programmer-Specified Non-Aliasing
--CTR
David Greenfieldboyce , Jeffrey S. Foster, Visualizing type qualifier inference with Eclipse, Proceedings of the 2004 OOPSLA workshop on eclipse technology eXchange, p.57-61, October 24-24, 2004, Vancouver, British Columbia, Canada
Futoshi Iwama , Naoki Kobayashi, A new type system for JVM lock primitives, Proceedings of the ASIAN symposium on Partial evaluation and semantics-based program manipulation, p.71-82, September 12-14, 2002, Aizu, Japan
Gary Wassermann , Zhendong Su, Sound and precise analysis of web applications for injection vulnerabilities, ACM SIGPLAN Notices, v.42 n.6, June 2007
Brian Chess , Gary McGraw, Static Analysis for Security, IEEE Security and Privacy, v.2 n.6, p.76-79, November 2004
Vincent Simonet, An extension of HM(X) with bounded existential and universal data-types, ACM SIGPLAN Notices, v.38 n.9, p.39-50, September
Timothy Fraser , Nick L. Petroni, Jr. , William A. Arbaugh, Applying flow-sensitive CQUAL to verify MINIX authorization check placement: 3, Proceedings of the 2006 workshop on Programming languages and analysis for security, June 10-10, 2006, Ottawa, Ontario, Canada
Futoshi Iwama , Atsushi Igarashi , Naoki Kobayashi, Resource usage analysis for a functional language with exceptions, Proceedings of the 2006 ACM SIGPLAN symposium on Partial evaluation and semantics-based program manipulation, January 09-10, 2006, Charleston, South Carolina
Ranjit Jhala , Rupak Majumdar, Bit level types for high level reasoning, Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering, November 05-11, 2006, Portland, Oregon, USA
David Koes , Mihai Budiu , Girish Venkataramani, Programmer specified pointer independence, Proceedings of the 2004 workshop on Memory system performance, June 08-08, 2004, Washington, D.C.
Benjamin Chelf , Dawson Engler , Seth Hallem, How to write system-specific, static checkers in metal, ACM SIGSOFT Software Engineering Notes, v.28 n.1, p.51-60, January
Thomas A. Henzinger , Ranjit Jhala , Rupak Majumdar, Permissive interfaces, ACM SIGSOFT Software Engineering Notes, v.30 n.5, September 2005
Yanhong A. Liu , Tom Rothamel , Fuxiang Yu , Scott D. Stoller , Nanjun Hu, Parametric regular path queries, ACM SIGPLAN Notices, v.39 n.6, May 2004
Yoann Padioleau , Julia L. Lawall , Gilles Muller, Understanding collateral evolution in Linux device drivers, ACM SIGOPS Operating Systems Review, v.40 n.4, October 2006
Eran Yahav , G. Ramalingam, Verifying safety properties using separation and heterogeneous abstractions, ACM SIGPLAN Notices, v.39 n.6, May 2004
Samuel Z. Guyer , Calvin Lin, Error checking with client-driven pointer analysis, Science of Computer Programming, v.58 n.1-2, p.83-114, October 2005
Junfeng Yang , Ted Kremenek , Yichen Xie , Dawson Engler, MECA: an extensible, expressive system and language for statically checking security properties, Proceedings of the 10th ACM conference on Computer and communications security, October 27-30, 2003, Washington D.C., USA
Yichen Xie , Alex Aiken, Scalable error detection using boolean satisfiability, ACM SIGPLAN Notices, v.40 n.1, p.351-363, January 2005
Atsushi Igarashi , Naoki Kobayashi, Resource usage analysis, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.2, p.264-313, March 2005
Naoki Kobayashi, Time regions and effects for resource usage analysis, ACM SIGPLAN Notices, v.38 n.3, March
Atsushi Igarashi , Naoki Kobayashi, A generic type system for the Pi-calculus, Theoretical Computer Science, v.311 n.1-3, p.121-163, 23 January 2004
J. Field , D. Goyal , G. Ramalingam , E. Yahav, Typestate verification: abstraction techniques and complexity results, Science of Computer Programming, v.58 n.1-2, p.57-82, October 2005
system for resource protocol verification and its correctness proof, Proceedings of the 2004 ACM SIGPLAN symposium on Partial evaluation and semantics-based program manipulation, p.135-146, August 24-25, 2004, Verona, Italy
Yitzhak Mandelbaum , David Walker , Robert Harper, An effective theory of type refinements, ACM SIGPLAN Notices, v.38 n.9, p.213-225, September
Kevin W. Hamlen , Greg Morrisett , Fred B. Schneider, Certified In-lined Reference Monitoring on .NET, Proceedings of the 2006 workshop on Programming languages and analysis for security, June 10-10, 2006, Ottawa, Ontario, Canada
Seth Hallem , Benjamin Chelf , Yichen Xie , Dawson Engler, A system and language for building system-specific, static analyses, ACM SIGPLAN Notices, v.37 n.5, May 2002
Ranjit Jhala , Rupak Majumdar, Path slicing, ACM SIGPLAN Notices, v.40 n.6, June 2005
Thomas A. Henzinger , Ranjit Jhala , Rupak Majumdar , Kenneth L. McMillan, Abstractions from proofs, ACM SIGPLAN Notices, v.39 n.1, p.232-244, January 2004
Yichen Xie , Alex Aiken, Saturn: A scalable framework for error detection using Boolean satisfiability, ACM Transactions on Programming Languages and Systems (TOPLAS), v.29 n.3, p.16-es, May 2007
Alex Aiken , Jeffrey S. Foster , John Kodumal , Tachio Terauchi, Checking and inferring local non-aliasing, ACM SIGPLAN Notices, v.38 n.5, May
Adrian Birka , Michael D. Ernst, A practical type system and language for reference immutability, ACM SIGPLAN Notices, v.39 n.10, October 2004
Christian Skalka, Trace effects and object orientation, Proceedings of the 7th ACM SIGPLAN international conference on Principles and practice of declarative programming, p.139-150, July 11-13, 2005, Lisbon, Portugal
Nurit Dor , Stephen Adams , Manuvir Das , Zhe Yang, Software validation via scalable path-sensitive value flow analysis, ACM SIGSOFT Software Engineering Notes, v.29 n.4, July 2004
Ted Kremenek , Ken Ashcraft , Junfeng Yang , Dawson Engler, Correlation exploitation in error ranking, ACM SIGSOFT Software Engineering Notes, v.29 n.6, November 2004
Junfeng Yang , Paul Twohey , Dawson Engler , Madanlal Musuvathi, Using model checking to find serious file system errors, Proceedings of the 6th conference on Symposium on Opearting Systems Design & Implementation, p.19-19, December 06-08, 2004, San Francisco, CA
Murali Krishna Ramanathan , Ananth Grama , Suresh Jagannathan, Static specification inference using predicate mining, ACM SIGPLAN Notices, v.42 n.6, June 2007
Wei-Ngan Chin , Siau-Cheng Khoo , Shengchao Qin , Corneliu Popeea , Huu Hai Nguyen, Verifying safety policies with size properties and alias controls, Proceedings of the 27th international conference on Software engineering, May 15-21, 2005, St. Louis, MO, USA
Polyvios Pratikakis , Jaime Spacco , Michael Hicks, Transparent proxies for java futures, ACM SIGPLAN Notices, v.39 n.10, October 2004
Jeffrey Fischer , Ranjit Jhala , Rupak Majumdar, Joining dataflow with predicates, ACM SIGSOFT Software Engineering Notes, v.30 n.5, September 2005
Karl Chen , David Wagner, Large-scale analysis of format string vulnerabilities in Debian Linux, Proceedings of the 2007 workshop on Programming languages and analysis for security, June 14-14, 2007, San Diego, California, USA
Matthew S. Tschantz , Michael D. Ernst, Javari: adding reference immutability to Java, ACM SIGPLAN Notices, v.40 n.10, October 2005
Junfeng Yang , Paul Twohey , Dawson Engler , Madanlal Musuvathi, Using model checking to find serious file system errors, ACM Transactions on Computer Systems (TOCS), v.24 n.4, p.393-423, November 2006
Madanlal Musuvathi , Dawson R. Engler, Model checking large network protocol implementations, Proceedings of the 1st conference on Symposium on Networked Systems Design and Implementation, p.12-12, March 29-31, 2004, San Francisco, California
Manuvir Das , Sorin Lerner , Mark Seigle, ESP: path-sensitive program verification in polynomial time, ACM SIGPLAN Notices, v.37 n.5, May 2002
Nic Volanschi, Condate: a proto-language at the confluence between checking and compiling, Proceedings of the 8th ACM SIGPLAN symposium on Principles and practice of declarative programming, July 10-12, 2006, Venice, Italy
Gregor Snelting , Torsten Robschink , Jens Krinke, Efficient path conditions in dependence graphs for software safety analysis, ACM Transactions on Software Engineering and Methodology (TOSEM), v.15 n.4, p.410-457, October 2006
Tian Zhao , Jens Palsberg , Jan Vitek, Type-based confinement, Journal of Functional Programming, v.16 n.1, p.83-128, January 2006
Brian Chin , Shane Markstrum , Todd Millstein, Semantic type qualifiers, ACM SIGPLAN Notices, v.40 n.6, June 2005
Polyvios Pratikakis , Jeffrey S. Foster , Michael Hicks, LOCKSMITH: context-sensitive correlation analysis for race detection, ACM SIGPLAN Notices, v.41 n.6, June 2006
Xiaolan Zhang , Larry Koved , Marco Pistoia , Sam Weber , Trent Jaeger , Guillaume Marceau , Liangzhao Zeng, The case for analysis preserving language transformation, Proceedings of the 2006 international symposium on Software testing and analysis, July 17-20, 2006, Portland, Maine, USA
Todd Millstein, Practical predicate dispatch, ACM SIGPLAN Notices, v.39 n.10, October 2004
Cristian Cadar , Vijay Ganesh , Peter M. Pawlowski , David L. Dill , Dawson R. Engler, EXE: automatically generating inputs of death, Proceedings of the 13th ACM conference on Computer and communications security, October 30-November 03, 2006, Alexandria, Virginia, USA
John Tang Boyland , William Retert, Connecting effects and uniqueness with adoption, ACM SIGPLAN Notices, v.40 n.1, p.283-295, January 2005
David Hovemeyer , William Pugh, Finding bugs is easy, ACM SIGPLAN Notices, v.39 n.12, December 2004
Philip W. L. Fong, Pluggable verification modules: an extensible protection mechanism for the JVM, ACM SIGPLAN Notices, v.39 n.10, October 2004
Yao-Wen Huang , Fang Yu , Christian Hang , Chung-Hung Tsai , Der-Tsai Lee , Sy-Yen Kuo, Securing web application code by static analysis and runtime protection, Proceedings of the 13th international conference on World Wide Web, May 17-20, 2004, New York, NY, USA
Chris Andreae , James Noble , Shane Markstrum , Todd Millstein, A framework for implementing pluggable type systems, ACM SIGPLAN Notices, v.41 n.10, October 2006
Jeffrey S. Foster , Robert Johnson , John Kodumal , Alex Aiken, Flow-insensitive type qualifiers, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.6, p.1035-1087, November 2006
M. Pistoia , S. Chandra , S. J. Fink , E. Yahav, A survey of static analysis methods for identifying security vulnerabilities in software systems, IBM Systems Journal, v.46 n.2, p.265-288, April 2007 | constraints;effect inference;linux kernel;restrict;locking;flow-sensitivity;type qualifiers;types;alias analysis |
512542 | Profile-guided code compression. | As computers are increasingly used in contexts where the amount of available memory is limited, it becomes important to devise techniques that reduce the memory footprint of application programs while leaving them in an executable form. This paper describes an approach to applying data compression techniques to reduce the size of infrequently executed portions of a program. The compressed code is decompressed dynamically (via software) if needed, prior to execution. The use of data compression techniques increases the amount of code size reduction that can be achieved; their application to infrequently executed code limits the runtime overhead due to dynamic decompression; and the use of software decompression renders the approach generally applicable, without requiring specialized hardware. The code size reductions obtained depend on the threshold used to determine what code is "infrequently executed" and hence should be compressed: for low thresholds, we see size reductions of 13.7% to 18.8%, on average, for a set of embedded applications, without excessive runtime overhead. | INTRODUCTION
In recent years there has been an increasing trend towards the
incorporation of computers into a wide variety of devices, such as
palm-tops, telephones, embedded controllers, etc. In many of these
devices, the amount of memory available is limited, due to considerations
such as space, weight, power consumption, or price. For
example, the widely used TMS320-C5x DSP processor from Texas
Instruments has only 64 Kwords of program memory for executable
code [23]. At the same time, there is an increasing desire to use
more and more sophisticated software in such devices, such as encryption
software in telephones, speech/image processing software
in palm-tops, fault diagnosis software in embedded processors, etc.
Since these devices typically have no secondary storage, an application
that requires more memory than is available will not be able
to run. This makes it desirable to reduce the application's runtime
memory requirements for both instructions and data - its memory
footprint - where possible. We focus in this work on reducing the
overall memory footprint by reducing the space required for instructions
The intuition underlying our work is very simple. Most programs
obey the so-called "80-20 rule," which states, in essence,
that most of a program's execution time is spent in a small portion
of its code (see [17]); a corollary is that the bulk of a program's
code is generally executed infrequently. Our work aims at exploiting
this aspect of programs by using compression techniques that
yield smaller compressed representations, but may require greater
decompression effort at runtime, on infrequently executed portions
of programs. The expectation is that the increased compression for
the infrequently executed code will contribute to a significant improvement
in the overall size reduction achieved, but that the concomitant
increase in decompression effort will not lead to a significant
runtime penalty because the code affected by it is infrequently
executed.
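One plausible way to realize the selection step (our sketch; the paper's exact threshold criterion may differ) is to rank functions by profiled execution count and compress the coldest functions whose combined count stays within the threshold fraction of the total dynamic count:
/* Mark the coldest functions, up to a threshold fraction of the total
   profiled instruction count, as candidates for compression.  The profile
   data below is hypothetical. */
#include <stdio.h>

#define NFUNCS 5

typedef struct { const char *name; unsigned long count; int compress; } Func;

int main(void) {
    Func f[NFUNCS] = {
        {"init", 100, 0}, {"parse", 900, 0}, {"kernel", 90000, 0},
        {"report", 50, 0}, {"cleanup", 10, 0},
    };
    double threshold = 0.01;               /* compress the coldest 1% */

    unsigned long total = 0;
    for (int i = 0; i < NFUNCS; i++) total += f[i].count;

    unsigned long budget = (unsigned long)(threshold * total), taken = 0;
    for (;;) {                             /* greedily take the coldest */
        int coldest = -1;
        for (int i = 0; i < NFUNCS; i++)
            if (!f[i].compress && (coldest < 0 || f[i].count < f[coldest].count))
                coldest = i;
        if (coldest < 0 || taken + f[coldest].count > budget) break;
        f[coldest].compress = 1;
        taken += f[coldest].count;
        printf("compress %s\n", f[coldest].name);
    }
    return 0;
}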
This apparently simple idea poses some interesting implementation
challenges and requires non-trivial design decisions. These
include the management of memory used to hold decompressed
functions (discussed in Section 2); the design of an effective com-
pression/decompression scheme so that the decompressor code is
small and quick (Section 3); identification of appropriate units for
compression and decompression (Section 4); as well as optimizations
that improve the overall performance of the system (Section
6). Our work combines aspects of profile-directed optimization,
runtime code generation/modification, and program compression.
We discuss other related work in Section 8.
Figure 1: Code Organization: Before and After Compression ((a) original code with call sites; (b) compressed code with stubs, the decompressor, the function offset table, the compressed code, and the runtime buffer)
2. THE BASIC APPROACH
2.1
Overview
Figure 1 shows the basic organization of code in our system. Consider a program with three infrequently executed functions,1 f, g, and h, as shown in Figure 1(a).
compression is shown in Figure 1(b). The code for each of these
functions is replaced by a stub (a very short sequence of instruc-
tions) that invokes a decompressor whose job is to decompress the
code for a function into the runtime buffer and then to transfer control
to this decompressed code. A function offset table specifies the
location within the compressed code where the code for a given
function starts. The stub for each compressed function passes an
argument to the decompressor that is an index into this table; this
argument is indicated in Figure 1(b) by the label ([0], [1], etc.)
on the edge from each stub to the decompressor. The decompressor
uses this argument to index into the function offset table, retrieve
the start address of the compressed code for the appropriate func-
tion, and start generating uncompressed executable code into the
runtime buffer. Decompression stops when the decompressor encounters
a sentinel (an illegal instruction) that is inserted at the end
of the code for each function. The decompressor then (flushes the
instruction cache, then) transfers control to the code it has generated
in the runtime buffer. When this decompressed function finishes
its execution, it returns to its caller in the usual way. Since the
control transfers from the stubs to the decompressor, and from the
decompressor to the runtime buffer do not alter the return address
transmitted from the original call site, no special action is necessary
to return from a decompressed function to its call site.
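A rough sketch (ours, not the paper's implementation) of this stub-to-decompressor protocol is shown below; decode_insn and flush_icache are assumed helpers, and unlike the real scheme, which branches into the buffer without disturbing the caller's return address, this C sketch simply calls the decompressed code like a function.
/* Expand the compressed function at func_offset_table[index] into the
   runtime buffer and transfer control to it at the given word offset. */
#include <stdint.h>
#include <stddef.h>

#define BUF_WORDS 4096
extern const uint32_t *func_offset_table;   /* assumed: start of each fn  */
extern const uint8_t  *compressed_code;     /* assumed: compressed region */

static uint32_t runtime_buffer[BUF_WORDS];

extern int  decode_insn(const uint8_t **src, uint32_t *dst); /* assumed:
                               decodes one instruction, returns 0 at the
                               sentinel (an illegal instruction)          */
extern void flush_icache(void *start, size_t len);           /* assumed  */

void decompress_and_call(unsigned index, unsigned entry_offset) {
    const uint8_t *src = compressed_code + func_offset_table[index];
    uint32_t *dst = runtime_buffer;
    while (decode_insn(&src, dst))
        dst++;
    flush_icache(runtime_buffer, (size_t)(dst - runtime_buffer) * sizeof *dst);
    /* implementation-defined cast from data pointer to function pointer */
    ((void (*)(void))(runtime_buffer + entry_offset))();
}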
This method partitions the original program code into two parts.
Infrequently executed functions (such as f, g, and h) are placed
in a compressed code part, while frequently executed functions remain
in a never-compressed part. The stub code that manages control
transfers to compressed functions must also lie in the never-
compressed part.
It is important to note that when comparing the space usage of the
original and compressed programs, the latter must take into account
the space occupied by the stubs, the decompressor, the function
offset table, the compressed code, the runtime buffer, and the never-
compressed original program code.
Our implementation uses a notion of "function" that is somewhat
more general than the usual connotation of this term in source language
programs. We discuss exactly what constitutes such a "func-
tion" in Section 4.
2.2 Buffer Management
The scheme described above is conceptually fairly straightforward
but fails to mention several issues whose resolution determines
its performance. The most important of these is the issue
of function calls in the compressed code. Suppose that in Figure 1,
the code for f contains a call to g. Since f is compressed, the call
site is in the runtime buffer when the call is executed. As described
above, this call will be to the stub for g, and the code for g will
be decompressed and executed as expected. What happens when g
returns? The return address points to the instruction following the
call in f. This is a problem: the instructions for f were over-written
when g was decompressed. The return address points to a
location in the runtime buffer that now contains g's code.
The question that we have to address, therefore, is: If a function
call is executed from the runtime buffer, how can we guarantee that
the correct code will be executed when the call returns? The answer
to this question is inextricably linked with the way we choose
to manage the runtime buffer. We have the following options for
buffer management:
1. We may simply avoid the problem by refusing to compress
any function whose body contains any function calls, since
these may result in a function call from within the runtime
buffer. We reject this option because it severely limits the
amount of code that can be subjected to compression.
2. We may choose to ensure that the decompressed code for
a function is never overwritten until after all function calls
within its body have returned. The simplest way to do this is
never to discard the decompressed code for a function. In this
case, the compressed code for a function is decompressed at
most once-the first time it is called-with subsequent calls
bypassing the decompressor and entering the decompressed
code directly. This conceptually resembles the behavior of
just-in-time compilers that translate interpretable code to native
code [1, 22].
An alternative is to discard the decompressed code for a function
when it is no longer on the call stack, since at this point
we can be certain that any function called by it has returned
to it already. This is the approach taken by Lucco [19],
though rather than immediately discarding a function after
execution, he caches the function in the hope that it might be
re-executed. The Smalltalk-80 system also extracts an executable
version of a function from an intermediate representation
when the procedure is first invoked [8]. It caches the
executable code, and only discards it to prevent the system from running out of memory.
Figure 2: Managing Function Calls Out of the Runtime Buffer ((a) original code for f; (b) code as transformed at runtime, after CreateStub has created the restore stub)
The main drawback with this approach is that the runtime
buffer must be made large enough to hold all of the decompressed
functions that can possibly coexist on the call stack.
In the worst case, this is the entire program. The resulting
memory footprint - which includes the space needed for the
runtime buffer as well as the stubs, the decompressor, and the
function offset table - will therefore be bigger than that of the
original program. This approach is therefore not suitable for
limited-memory devices.
3. When a decompressed function f calls a function g from
within the runtime buffer, we may choose to allow the decompressor
to overwrite f's code within the buffer. This is
the approach used in our implementation. This has the benefit
that we only need a runtime buffer large enough to hold
the code for the largest compressed function. As pointed out
above, however, this means that when the call from g returns,
the runtime buffer may no longer hold the correct instructions
for it to return to. This problem can be solved if we
can ensure that the code for f is restored into the runtime
buffer between the point where the callee g returns and the
point where control is transferred to the caller f. We discuss
below how this can be done.
Suppose that a function f within the runtime buffer calls a compressed
function g. In our scheme, this causes the decompressor to
overwrite f's code in the buffer with g's code. For correctness, we
have to restore f's code to the buffer after the call to g returns but
before control is transferred to the appropriate instruction within
f. Since we don't have any additional storage area where f's code
could be cached, restoring f's code to the runtime buffer requires
that it be decompressed again. This means that when control returns
from g, it must first be diverted to the decompressor, which
can then decompress f and transfer control to it. The decompressor
must also be given an additional argument specifying to where control
should be transferred in the decompressed function, since the
program may (re-)enter f at some instruction other than its entry
point.
One option is to create a stub at compile time that contains the
function call to g followed by code to call the decompressor to
restore f to the runtime buffer and transfer control to the instruction
after f's call to g. This stub obviously cannot be placed in the
runtime buffer, since it may be overwritten there; it must be placed
in the never-compressed portion of the program. Since every call
from a compressed function requires its own stub, these restore
stubs amount to a large fraction of the final executable's size (e.g.,
if we only compress code that is never executed during profiling,
we create restore stubs that occupy 13%, on average, and for some
programs 20% of the never-compressed code; if we compress code
that accounts for at most 1% of the instructions executed during
profiling, the average percentage rises to 27%).
Rather than creating all restore stubs at compile time, we instead
create at runtime, when g is called, a temporary restore stub that exists
only until g returns. The transfer to g is prefaced with code that
generates the restore stub and makes the return address of the original
call point to this stub. Then an unconditional jump or branch
is made to g.
If every control transfer from compressed code created a restore
stub, we would, in effect, be maintaining a call stack of calls from
compressed code. If the compressed code is recursive, this could
require an arbitrarily large amount of additional space. Instead, we
create only one restore stub for a particular call site in compressed
code and maintain a usage count for that restore stub to determine
when the stub is no longer needed. When asked to create a restore
stub, we first check to see if a stub for that call site already exists
and, if it does, increase its usage count and use its address for the
return address; otherwise we create a new restore stub with usage
count equal to 1. In effect, this implements a simple reference-
count-based garbage collection scheme for restore stubs. The text
area of memory for a program now conceptually consists of three
parts: the never-compressed code; the runtime stub list; and the
runtime decompression buffer (Figure 2(b)).
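To make the bookkeeping concrete, the following C sketch shows one way the runtime stub list and its usage counts could be managed. The type and function names, the fixed-size table, and the 16-byte stub area are our own illustrative assumptions, not the actual implementation.

    #include <stddef.h>

    typedef struct {
        void          *call_site;   /* call site in compressed code that owns this stub */
        unsigned char  code[16];    /* space where the stub's instructions would be emitted */
        int            refcount;    /* outstanding calls that will return through this stub */
    } StubEntry;

    #define MAX_STUBS 32
    static StubEntry stub_list[MAX_STUBS];

    /* Called on a call from compressed code; returns the address the callee
       should use as its return address (i.e., the restore stub). */
    void *create_restore_stub(void *call_site, int fn_index, int return_offset) {
        int free_slot = -1;
        for (int i = 0; i < MAX_STUBS; i++) {
            if (stub_list[i].refcount > 0 && stub_list[i].call_site == call_site) {
                stub_list[i].refcount++;              /* stub already exists: reuse it */
                return stub_list[i].code;
            }
            if (stub_list[i].refcount == 0 && free_slot < 0)
                free_slot = i;
        }
        if (free_slot < 0)
            return NULL;    /* stub list full; a real implementation would grow it */
        stub_list[free_slot].call_site = call_site;
        stub_list[free_slot].refcount  = 1;
        /* a real implementation would emit, into code[], the call to the
           decompressor and the <fn_index, return_offset> tag (Section 2.3) */
        (void)fn_index; (void)return_offset;
        return stub_list[free_slot].code;
    }

    /* Called by the decompressor when control returns through a restore stub. */
    void release_restore_stub(void *stub_code) {
        for (int i = 0; i < MAX_STUBS; i++)
            if (stub_list[i].refcount > 0 && stub_list[i].code == stub_code) {
                stub_list[i].refcount--;
                return;
            }
    }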
On return from g, the restore stub invokes the decompressor
which recognizes that it has been called by a restore stub, decrements
the stub's usage count, restores f to the runtime buffer, and
transfers control to the appropriate instruction.
This runtime scheme never creates more restore stubs than the
compile-time scheme, though it does require an additional 8 bytes
per stub in order to maintain the count. In fact, the maximum number
of restore stubs that exist at one time in our test suite is 9 for a
very aggressive profile threshold of θ = 0.01, where the code
considered for compression accounts for 1% of the total dynamic
instruction count of the profiled program (see Section 5).
Figure 2 illustrates how this is done. Figure 2(a) shows a function
f whose body contains a function call, at callsite cs0, that
calls g. The instruction 'bsr r, Label' puts the address of the
next instruction (the return address) into register r and branches to
Label. Callsite cs0 is at offset 96 within the body of f (relative
to the beginning of f's code), and the return address it passes to
its callee is that of the following instruction, which is at offset 97.
Figure 2(b) shows the result of transforming this code so that the
decompressor is called when the call to g returns. The function call
to g at cs0 is replaced by a function call to CreateStub using
the same return address register $ra. CreateStub creates a restore
stub for this call site (or uses the existing restore stub for this
call site if it exists) and changes $ra to contain the stub's address.
It then transfers control to an unconditional branch at offset 97 that
transfers control to g. Note that the single original instruction bsr
becomes two instructions in the runtime buffer. To save
space in the compressed code, these two instructions are created by
the decompressor from the single bsr $ra, g when filling the
runtime buffer.
When g returns, the instructions in the restore stub are executed.
This causes the decompressor to be invoked with the argument pair
<index(f), 98>, where index(f) is f's index within the
function offset table, and 98 is the offset within f's code where
control should be transferred after decompression. The overall effect
is that when control returns from the function call, f's code is
decompressed, after which control is transferred to the instruction
following the function call in the original code.
It is important to note that, in the scheme described above, the
call stack of the original and compressed program are exactly the
same size at any point in the program's execution. In fact, there is
no need to modify the return sequence of any function. A function
g may be called from either the runtime buffer or never-compressed
code and, in general, may have call sites in both. If the call site
is in a never-compressed function, CreateStub is not invoked
and g returns to the instruction following the call instruction in the
usual way. If the call site is in compressed code, then the return
address passed to g is that of the corresponding restore stub, and
control transfers to this stub when g returns. It is not hard to see, in
fact, that the control transfers happen correctly regardless of how g
uses the return address passed to it: for example, g may save this
address in its environment at entry and restore it on exit; or keep
it in a register, if it is a leaf function; or pass the return address to
some other function, if tail-call optimization is carried out.
In some cases, such as control transfers through longjmp, a
function may be returned from without a corresponding call. This
means that the usage count for the callsite's restore stub may be
inaccurate or, even worse, the restore stub may no longer exist. For
this reason, functions that call setjmp are not compressed.
2.3 Decompressor Interface
The decompressor is invoked with two arguments: an index in
the function offset table, indicating the function to be decompressed;
and an offset in the runtime buffer, indicating the location in the
runtime buffer where control should be transferred after decom-
pression. Rather than pass these arguments to the decompressor in
a register, we put them in a dummy instruction, called a tag, that
follows the call to the decompressor: the low 16 bits contain the
offset and the high 16 bits the function index. Since the decompressor
never returns to its caller (instead it transfers control to the
function it decompresses into the runtime buffer), this "instruction"
is never executed. We can, however, access it via the return address
set by the call to the decompressor.
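The packing and unpacking of the tag word can be sketched in C as follows; the helper names are ours, and the tag is assumed to be a 32-bit word laid out exactly as described above.

    #include <stdint.h>

    /* Pack a function-offset-table index and a runtime-buffer offset into one
       32-bit tag: low 16 bits hold the offset, high 16 bits the function index. */
    static uint32_t make_tag(uint16_t fn_index, uint16_t offset) {
        return ((uint32_t)fn_index << 16) | offset;
    }

    /* The decompressor finds the tag through the return address left by the
       call that invoked it: the tag is the word immediately after that call. */
    static void read_tag(const uint32_t *return_address,
                         uint16_t *fn_index, uint16_t *offset) {
        uint32_t tag = *return_address;          /* the never-executed "instruction" */
        *fn_index = (uint16_t)(tag >> 16);
        *offset   = (uint16_t)(tag & 0xFFFFu);
    }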
Various registers may be used as the return address register on a
call to the decompressor. For a restore stub, the register that was
used in the original call instruction can be used; it is guaranteed to
be free. For an entry stub, any free register will do. (If no register is
free, we push the value of a register $ra, use $ra, and then restore
it at the end of the decompressor.) The decompressor, however,
must know which register contains the return address when it is
called. We accomplish this by giving the decompressor multiple
entry points, one per possible return address register. The entry
point for register r pushes r onto the stack and then jumps to the
body of the decompressor. The decompressor now knows that the
return address is at the top of the stack. The decompressor then
1. saves all registers that it will use on the stack,
2. places an instruction at the start of the runtime buffer that
unconditionally jumps to the offset provided by the tag,
3. fills the rest of the runtime buffer by decompressing the function
indicated by the tag,
4. restores all saved registers, and
5. unconditionally jumps to the start of the runtime buffer (which
immediately jumps to the appropriate offset).
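In outline, the body of the decompressor corresponding to the five steps above might look like the C sketch below. The machine-specific pieces (register save/restore, emitting and taking the jump, the stream decoder itself) are left as declarations and comments, and all names are illustrative.

    #include <stdint.h>

    #define BUFFER_WORDS 1024
    extern uint32_t runtime_buffer[BUFFER_WORDS];                      /* the single runtime buffer */
    extern void     decompress_function(int fn_index, uint32_t *dest); /* stream decoder */
    extern uint32_t encode_jump_to(uint32_t *target);                  /* build a jump instruction */

    void decompressor_body(const uint32_t *return_address) {
        /* step 1: registers were already saved by the per-register entry point */

        /* decode the tag word that follows the call to the decompressor */
        uint32_t tag      = *return_address;
        int      fn_index = (int)(tag >> 16);
        int      offset   = (int)(tag & 0xFFFFu);

        /* step 2: place a jump at the start of the buffer to the requested offset
           (offsets in the tag are assumed to already account for this jump word) */
        runtime_buffer[0] = encode_jump_to(&runtime_buffer[offset]);

        /* step 3: fill the rest of the buffer with the decompressed function */
        decompress_function(fn_index, &runtime_buffer[1]);

        /* step 4: restore the saved registers (target-specific epilogue) */

        /* step 5: jump to the start of the runtime buffer, which immediately
           jumps to the appropriate offset (target-specific control transfer) */
    }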
By creating the unconditional jump instruction in the runtime
buffer, we avoid the need for a register to do the control transfer
from the end of the decompressor to the offset within the runtime
buffer. We insert one other instruction before this jump instruction
that sets the return register to the address of a restore stub (when
creating a stub) or restores $ra (when an entry stub has no free
register). We note that CreateStub and Decompress are contained
in the same function. This saves having multiple entry points
(one per possible return address register) in two functions, and it is
easy to determine from the return address whether the function was
called from inside the runtime buffer (when it should act as CreateStub)
or outside (when it should act as Decompress).
3. COMPRESSION & DECOMPRESSION
Our primary consideration in choosing a compression scheme is
minimizing the size of the compressed functions. We would like
to achieve good compression even on very short sequences of instructions
since the functions we may want to compress can be very
small. A second consideration is the size of the decompressor itself
since it becomes part of the memory footprint of the program. Fi-
nally, the decompressor must be fast since it is invoked every time
control transfers to a compressed function that is not already in the
runtime buffer. Since the functions that we choose to compress
have a low execution count, we don't expect to invoke the decompressor
too often during execution. A faster decompressor, how-
ever, means we can tolerate the compression of more frequently
executed code which, in turn, leads to greater compression opportunities.
The compression technique that we use is a simplified version
of the "splitting streams" approach [9]. The data to be compressed
consists of a sequence of machine code instructions. Each instruction
contains an opcode field and several operand fields, classified
by type. For example, in our test platform, a branch instruction
consists of a 6-bit opcode field, a 5-bit register field, and a 21-bit
displacement field [2]. In order to compress a sequence of instruc-
tions, we first split the sequence into separate streams of values,
one per field type, by extracting, for each field type, the sequence
of field values of that type from successive instructions. We then
compress each stream separately. For our test platform, we split the
instructions into 15 streams. Note that no instruction contains all
field types.
To reconstruct the instruction sequence, we decompress an op-code
from the opcode stream. This tells us the field types of the
instruction, and we obtain the field values from the corresponding
streams. We repeat this process until the opcode stream is empty.
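The reassembly loop can be sketched in C as below. The format table and three-field limit are simplifying assumptions of ours (the real splitter uses 15 streams and the full Alpha instruction formats), and the per-stream decoding described next is abstracted away: the streams are assumed to hold already-decoded field values.

    #include <stdint.h>

    enum FieldType { FIELD_REG_A, FIELD_REG_B, FIELD_DISP, NUM_FIELD_TYPES };

    typedef struct {                /* one stream of already-decoded field values */
        const uint32_t *values;
        int next;                   /* index of the next unread value */
    } Stream;

    typedef struct {                /* which fields an opcode has, and where they sit */
        int nfields;
        enum FieldType field[3];
        int shift[3];
    } Format;

    /* Rebuild 'count' instructions: read an opcode, look up its format, then pull
       one value from the stream of each field type the format calls for. */
    void reassemble(const Format *formats, Stream *opcodes, int count,
                    Stream fields[NUM_FIELD_TYPES], uint32_t *out) {
        for (int n = 0; n < count; n++) {
            uint32_t op   = opcodes->values[opcodes->next++];
            uint32_t insn = op << 26;                 /* 6-bit opcode field */
            const Format *f = &formats[op];
            for (int k = 0; k < f->nfields; k++) {
                Stream *s = &fields[f->field[k]];
                insn |= s->values[s->next++] << f->shift[k];
            }
            out[n] = insn;
        }
    }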
We compress each stream by encoding each field value in the
stream using a Huffman code that is optimal for the stream. This
is a two-pass process. The first pass calculates the frequency of
the field values and constructs the Huffman code. The second pass
encodes the values using the code. Since the Huffman code is designed
for each stream, it must be stored along with the encoded
stream in order to permit decompression.
We use a variant of Huffman encoding called canonical Huffman
encoding that permits fast decompression yet uses little memory
[5]. Like a Huffman code, a canonical Huffman code is an optimal
character-based code (the characters in this case are the field val-
ues). In fact, the length of the canonical Huffman codeword for a
character is the same as the length of the Huffman codeword for that
character. Thus the number N [i] of codewords of length i in both
encodings is the same. The codewords of length i in the canonical
Huffman code are the N[i] consecutive i-bit numbers b_i, b_i + 1,
..., b_i + N[i] - 1, where b_1 = 0 and b_i = 2(b_{i-1} + N[i-1]) for i >= 2.
Notice that the codewords are completely determined given the
number of codewords of each length, i.e., the N [i]'s.
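A short C sketch of this construction, using the array names from the text (maxlen, the longest codeword length, is assumed given):

    /* Given N[1..maxlen], the number of codewords of each length, compute b[i],
       the smallest codeword of length i.  The codewords of length i are then
       b[i], b[i] + 1, ..., b[i] + N[i] - 1. */
    void first_codewords(const int N[], int maxlen, int b[]) {
        b[1] = 0;
        for (int i = 2; i <= maxlen; i++)
            b[i] = 2 * (b[i - 1] + N[i - 1]);
    }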
We store the n characters to be encoded in an array D,
ordered by their codeword value. The advantage of the canonical
Huffman code is that a codeword can be rapidly decoded using the
arrays N [i] and D[j].
    i = 1; v = nextbit(); b = 0; d = 0;
    while (v >= b + N[i]) {                /* the codeword is longer than i bits */
        d = d + N[i];  b = 2 * (b + N[i]);  v = 2 * v + nextbit();  i = i + 1;
    }
    return D[d + v - b];
The compressed program consists of the codeword sequence,
code representation (the array N [i]), and value list (the array D[j])
for each stream. In fact, since every instruction begins with an
opcode that completely specifies the remaining fields of the in-
struction, we can merge the codeword sequences of the individual
streams into one sequence. We simply interpret the first bits of the
codeword sequence using the Huffman code for the opcode stream,
and use the decoded opcode to specify the appropriate Huffman
codes to use for the remaining fields. For example, when decoding
a branch instruction, we would read a codeword from the sequence
using first the opcode code, then the register code, and finally the
displacement code. The total space required by the compressed
program is approximately 66% of its original size.
We can achieve somewhat better compression for some streams
using move-to-front coding prior to Huffman coding. This has the
undesirable effect of increasing the code size and running time of
the decompression algorithm. Other approaches that decompress
larger parts of an instruction, or multiple instructions, in one decompression
operation may result in better and faster decompres-
sion, but these approaches typically require a more complex decompression
algorithm, or one that requires more space for data
structures.
4. COMPRESSIBLE REGIONS
The "functions" that we use as a unit of compression and decompression
may not agree with the functions specified by the program.
It is often the case that a program-specified function will contain
some frequently-executed code that should not be compressed, and
some infrequently-executed (cold) code that should be compressed.
If the unit of compression is the program-specified function then
the entire function cannot be compressed if it contains any code
that cannot be considered for compression. As a result, the amount
of code available for compression may be significantly less than the
total amount of cold code in the program.
In addition, the runtime buffer must be large enough to hold the
largest decompressed function. A single large function may often
account for a significant fraction of the cold code in a program.
Having a runtime buffer large enough to contain this function can
offset most of the space-savings due to compression.
To address this issue, we create "functions" from arbitrary code
regions and allow these regions to be compressed and decompressed.
This means that control transfers into and out of a compressed region
of code may no longer follow the call/return model for func-
tions. For example, we may have to contend with a conditional
branch that goes from one compressed region of code to another,
different, compressed region. Since the runtime buffer holds the
code of at most one such region at any time, a branch from one region
to another must now go through a stub that invokes the decom-
pressor. This is not a terrible complication. A compressed region
might have multiple entry points, each of which requires an entry
stub, but in all other ways it is the same as an original function. For
instance, function calls from within a compressed region are still
handled as discussed in Section 2.
We now face the problem of how to choose regions to com-
press. We want these regions to be reasonably small so that the
runtime buffer can be small, yet we want few control transfers between
different regions so that the number of entry stubs is small.
This is an optimization problem. The input is a control flow graph
G = (V, E) for a program in which a vertex b represents a basic
block and has size |b| equal to the number of instructions in the
block, and an edge (a, b) represents a control transfer from a to
b. In addition, the input specifies a subset U of the vertices that
can be compressed. The output is a partition of a subset S of the
compressible vertices U into regions R_1, ..., R_k so that the
following cost is minimized:
    sum_{b in V-S} |b| + 2|Y|                     (never-compressed code, including entry stubs)
    + sum_{i=1..k} s(R_i)                         (compressed regions)
    + k                                           (function offset table)
    + max_{1<=i<=k} ( c_i + sum_{b in R_i} |b| )  (runtime buffer)
where s(R i ) is the size of the region R i after compression, Y is
the set of blocks requiring an entry stub, i.e.,
    Y = { b in R_i : (a, b) in E, a not in R_i, for some i };
the constant 2 is the number of words required for an entry stub, and
(Figure 3 plots normalized code size against the buffer size bound, for individual benchmarks and their mean.)
Figure 3: Effect of Buffer Size Bound on Code Size
c i is the number of external function calls within R i (the decompressor
creates an additional instruction for each such call). Note
that we have not included the size of the restore stub list (calculat-
ing its size, even given a partition, is an NP-hard problem).
In practice, we cannot afford to calculate s(R) for all possible
regions R, so we assume that a fixed compression factor α
applies to all regions (i.e., s(R) = α · sum_{b in R} |b|). Unfortunately, the
resulting simplified problem is NP-hard (PARTITION reduces to it).
We resort to a simple heuristic to choose the compressible regions.
We first decide which basic blocks can be compressed. Our criteria
for this decision are discussed in more detail in Section 5. We
also fix an upper bound K on the size of the runtime buffer (our current
implementation uses an empirically chosen value; this is
determined as described below). We create an initial
set of regions by performing depth-first search in the control flow
graph. We limit the depth-first search so that it produces a tree that
contains at most K instructions and is composed of compressible
blocks from a single function. If it is profitable to compress the
set of blocks in the tree, we make this tree a compressible region;
otherwise, we mark the root of the tree so that we never re-initiate a
depth-first search from it (though it might be visited in a subsequent
depth-first search starting from a different block). We continue the
depth-first search until all compressible blocks have been visited.
To decide if a region containing I instructions is profitable to
compress, we compare (1 - α)I, the number of instructions saved
by compressing the region, with the number of instructions E added
for entry stubs. If E < (1 - α)I, the region is profitable to compress.
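The region-forming pass can be summarized with the C sketch below. The control-flow-graph representation, the limit of two successors per block, and the helper entry_stub_cost (left as a declaration) are our own simplifications; alpha stands for the assumed fixed compression factor.

    #include <stdbool.h>

    #define MAX_REGION_BLOCKS 1024

    typedef struct Block {
        int  ninstrs;
        int  nsucc;
        struct Block *succ[2];
        int  func_id;                 /* enclosing original function */
        bool compressible;            /* cold and eligible (Section 5) */
        bool visited, no_root;        /* DFS bookkeeping */
        int  region;                  /* assigned region id, or -1 */
    } Block;

    /* Instructions added for entry stubs into this set of blocks (not shown). */
    extern int entry_stub_cost(Block **members, int n);

    /* Grow a candidate region by DFS from b, staying inside one function,
       inside compressible blocks, and inside the K-instruction bound. */
    static int grow(Block *b, int func_id, int K, int size, Block **members, int *n) {
        if (b->visited || !b->compressible || b->func_id != func_id) return size;
        if (size + b->ninstrs > K || *n == MAX_REGION_BLOCKS) return size;
        b->visited = true;
        members[(*n)++] = b;
        size += b->ninstrs;
        for (int i = 0; i < b->nsucc; i++)
            size = grow(b->succ[i], func_id, K, size, members, n);
        return size;
    }

    void select_regions(Block *blocks, int nblocks, int K, double alpha) {
        int next_region = 0;
        Block *members[MAX_REGION_BLOCKS];
        for (int r = 0; r < nblocks; r++) {
            Block *root = &blocks[r];
            if (!root->compressible || root->visited || root->no_root) continue;
            int n = 0;
            int I = grow(root, root->func_id, K, 0, members, &n);
            if (entry_stub_cost(members, n) < (1.0 - alpha) * I) {
                for (int i = 0; i < n; i++) members[i]->region = next_region;
                next_region++;                      /* profitable: keep the region */
            } else {
                for (int i = 0; i < n; i++) members[i]->visited = false;
                root->no_root = true;               /* never start a DFS here again */
            }
        }
    }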
As mentioned above, we use an empirically determined upper
bound K on the size of the runtime buffer to guide the partitioning
of functions into compressible regions. If we choose too small
a value for K, we get a large number of small compressible re-
gions, with a correspondingly large number of entry stubs and function
offset table entries. These tend to offset the space benefits of
having a small runtime buffer, resulting in a large overall memory
footprint. If the value of K is too large, we get a smaller number
of distinct compressible regions and function offset table entries,
but the savings there are offset by the space required for the run-time
buffer. Our empirical observations of the variation of overall
code size, as K is varied, are shown in Figure 3, for three different
thresholds of cold code as well as the mean for each of these
thresholds (other values of θ yield similar curves). It can be seen
that, for these benchmarks at least, the smallest overall code size is
obtained at two neighboring values of the bound K; we prefer the larger of these because
the larger runtime buffer means that we get somewhat larger
regions and correspondingly fewer inter-region control transfers;
this results in fewer calls to the decompressor at runtime and yields
somewhat better performance.
The partition obtained by depth-first search, in practice, typically
contains many small regions. This is partly due to the presence of
small functions in user and library code, and partly due to frag-
mentation. This incurs overheads from two sources: first, each
compressible region requires a word in the function offset table;
and second, inter-region control transfers require additional code
in the form of entry or restore stubs to invoke the decompressor.
These overheads can be reduced by packing several small regions
into a single larger one that still contains at most K instructions.
To pack regions, we start with the set of regions created by the
depth-first search and repeatedly merge the pair that yields the most
savings (without exceeding the instruction bound K) until no such
pairs exist. For the pair of regions {R, R'} (and for R swapped
with R' in the following), we save an entry stub for every basic
(Figure 4 plots, against the threshold θ, the fraction of code that is cold and the fraction that is compressible.)
Figure 4: Amount of Cold and Compressible Code (Normalized)
block in region R that has incoming edges from R' (and possibly
from R) but from no other region. For every call from region R to
R', we save a restore stub. We may also save a jump instruction for
every fall-through edge from region R to R'.
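A minimal sketch of the greedy packing loop, assuming the per-pair savings described above are computed by a helper (left as a declaration):

    /* Entry stubs, restore stubs and fall-through jumps saved by merging
       regions r1 and r2 (not shown). */
    extern int savings_of_merging(int r1, int r2);

    /* Repeatedly merge the pair of regions with the largest savings, as long
       as the merged region still fits in the K-instruction bound. */
    void pack_regions(int size[], int nregions, int K) {
        for (;;) {
            int best_i = -1, best_j = -1, best_save = 0;
            for (int i = 0; i < nregions; i++)
                for (int j = i + 1; j < nregions; j++) {
                    if (size[i] == 0 || size[j] == 0) continue;   /* already absorbed */
                    if (size[i] + size[j] > K) continue;
                    int s = savings_of_merging(i, j);
                    if (s > best_save) { best_save = s; best_i = i; best_j = j; }
                }
            if (best_i < 0) break;            /* no profitable pair remains */
            size[best_i] += size[best_j];     /* merge j into i */
            size[best_j] = 0;
        }
    }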
In principle, the packing of regions in this way involves a space-time
tradeoff: packing saves space, but since each region is decompressed
in its entirety before execution, the resulting larger regions
incur greater decompression cost at runtime. However, given
that only infrequently-executed code is subjected to runtime de-
compression, the actual increase in runtime cost is not significant.
5. IDENTIFYING COLD CODE
The discussion so far has implicitly assumed that we have identified
portions of the program as "cold" and, therefore, candidates
for compression. The determination of which portions of the program
are cold is carried out as follows. We start with a threshold θ,
0.0 <= θ <= 1.0, that specifies the maximum fraction of the total
number of instructions executed at runtime (according to the execution
profile for the program) that cold code can account for. Thus,
θ = 0.25 means that all of the code identified as cold should account
for at most 25% of the total number of instructions executed
by the program at runtime.
Let the weight of a basic block be the number of instructions
in the block multiplied by its execution frequency, i.e., the block's
contribution to the total number of instructions executed at runtime.
Let tot_instr_ct be the total number of instructions executed by the
program, as given by its execution profile. Given a value of θ,
we consider all basic blocks b in the program in increasing order of
execution frequency, and determine the largest execution frequency
N such that
    sum_{b : freq(b) <= N} weight(b)  <=  θ · tot_instr_ct.
Any basic block whose execution frequency is at most N is considered
to be cold.
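A direct C implementation of this selection, assuming each basic block is available with its size and profile count, might be:

    #include <stdlib.h>

    typedef struct { long freq; int ninstrs; } BlockProfile;

    static int by_freq(const void *a, const void *b) {
        long fa = ((const BlockProfile *)a)->freq;
        long fb = ((const BlockProfile *)b)->freq;
        return (fa > fb) - (fa < fb);
    }

    /* Return the largest frequency N such that all blocks with freq <= N
       together account for at most theta * tot_instr_ct executed instructions;
       a block is then cold iff its frequency is at most the returned value. */
    long cold_cutoff(BlockProfile *blocks, int n, double theta, double tot_instr_ct) {
        qsort(blocks, n, sizeof *blocks, by_freq);
        double budget = theta * tot_instr_ct, spent = 0.0;
        long N = -1;                           /* -1 means: nothing is cold */
        for (int i = 0; i < n; ) {
            long f = blocks[i].freq;           /* take whole groups of equal frequency */
            double group = 0.0;
            int j = i;
            while (j < n && blocks[j].freq == f) {
                group += (double)blocks[j].freq * blocks[j].ninstrs;  /* weight(b) */
                j++;
            }
            if (spent + group > budget) break;
            spent += group;
            N = f;
            i = j;
        }
        return N;
    }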
Figure 4 shows (the geometric mean of) the relative amount of
cold and compressible code in our programs at different thresholds.
It can be seen, from Figure 4, that the amount of cold code varies
from about 73% of the total code, on average, when the threshold
θ = 0.0 (where only code that is never executed is considered cold),
to about 94% at θ = 0.01 (where cold code accounts for 1% of the
total number of instructions executed by the program at runtime),
to 100% at θ = 1.0. However, not all of this cold code can be
compressed: the amount of compressible code varies from about
69% of the program at θ = 0.0 to about 90% at θ = 0.01, to
about 96% at θ = 1.0. The reason not all of the cold code is
compressible, at any given threshold, is that, as discussed in Section
4, a region of code may not be considered for compression even if
it is cold, because it is not profitable to do so.
6. OPTIMIZATIONS
6.1 Buffer-Safe Functions
As discussed earlier, function calls within compressed code cause
the creation, during execution, of a restore stub and an additional
instruction in the runtime buffer. This overhead can be avoided if
the callee is buffer-safe, i.e., if it and any code it might call will not
invoke the decompressor. If the callee is buffer-safe, then the run-time
buffer will not be overwritten during the callee's execution, so
the return address passed to the callee can be simply the address of
the instruction following the call instruction in the runtime buffer:
there is no need to create a stub for the call or to decompress the
caller when the call returns. In other words, a call from within a
compressed region to a buffer-safe function can be left unchanged.
This has two benefits: the space cost associated with the restore
stub and the additional runtime buffer instruction is eliminated, and
the time cost for decompressing the caller on return from the call is
avoided.
We use a straightforward iterative analysis to identify buffer-safe
functions. We first mark all regions that are clearly not buffer-safe:
i.e., those that have been identified as compressible, and those that
contain indirect function calls whose possible targets may include
non-buffer-safe regions. This information is then propagated iteratively
to other regions: if R is a region marked as non-buffer-safe,
and R' is a region from which control can enter R, either through
a function call or via a branch operation, then R' is also marked
as being non-buffer-safe. This is repeated until no new region can
be marked in this way. Any region that is left unmarked at the end
of this process is buffer-safe.
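This propagation is a standard backward closure over a region-level graph; a worklist formulation in C might look like the following (the predecessor lists, covering both calls and branches, are assumed precomputed).

    #include <stdbool.h>
    #include <stdlib.h>

    typedef struct { int npred; const int *pred; } RegionNode;   /* regions 0..n-1 */

    /* On entry, unsafe[i] is true for regions initially marked non-buffer-safe
       (compressible regions and regions with risky indirect calls).  On return,
       unsafe[i] is false exactly for the buffer-safe regions. */
    void propagate_unsafe(const RegionNode *node, int n, bool unsafe[]) {
        int *worklist = malloc((size_t)n * sizeof *worklist);
        int  top = 0;
        if (!worklist) return;
        for (int i = 0; i < n; i++)
            if (unsafe[i]) worklist[top++] = i;
        while (top > 0) {
            int r = worklist[--top];
            for (int k = 0; k < node[r].npred; k++) {
                int p = node[r].pred[k];          /* control can flow from p into r */
                if (!unsafe[p]) {
                    unsafe[p] = true;
                    worklist[top++] = p;
                }
            }
        }
        free(worklist);
    }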
For the benchmarks we tested, this analysis identifies on the
average, about 12.5% of the compressible regions as buffer-safe;
the gsm and g721 enc benchmarks have the largest proportion of
buffer-safe regions, with a little over 20% and 19%, respectively,
of their compressible regions inferred to be buffer-safe.
6.2 Unswitching
If a code region contains indirect jumps through a jump table, it
is necessary to process any such code to ensure that runtime control
transfers within the decompressed code in the runtime buffer
are carried out correctly. We have two choices: we can either up-date
the addresses in the jump table to point into the runtime buffer,
at the locations where the corresponding targets would reside when
the region is decompressed; or we can "unswitch" the region to use
a series of conditional branches instead of an indirect jump through
a table. Note that in either case, we have to know the size of the
jump table: in the context of a binary rewriting implementation
such as ours, this may not always be possible. If we are unable to
determine the extent of the jump table, the block containing the indirect
jump through the table and the set of possible targets of this
jump must be excluded from compression. For the sake of sim-
plicity, our current implementation uses unswitching to eliminate
the indirect jump, after which the space for the jump table can be
reclaimed.
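As a source-level illustration of the effect (squash performs the equivalent rewrite on the binary), unswitching turns an indirect jump through a table into a chain of conditional branches:

    extern int handle0(void), handle1(void), handle2(void), handle_other(void);

    /* Before: typically compiled to an indirect jump through a jump table. */
    int dispatch_table(int x) {
        switch (x) {
        case 0:  return handle0();
        case 1:  return handle1();
        case 2:  return handle2();
        default: return handle_other();
        }
    }

    /* After unswitching: only conditional branches remain, so there is no
       table of addresses that would have to be retargeted into the runtime
       buffer, and the table's space can be reclaimed. */
    int dispatch_unswitched(int x) {
        if (x == 0) return handle0();
        if (x == 1) return handle1();
        if (x == 2) return handle2();
        return handle_other();
    }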
7. EXPERIMENTAL RESULTS
Our ideas have been implemented in the form of a binary-rewriting
tool called squash that is based on squeeze, a compactor of Compaq
Alpha binaries [7]. Squeeze is based on alto, a post-link-time code
optimizer [20]. Squeeze alone compacts binaries that have already
Program      Profiling Input                   Timing Input
             file name            size (KB)    file name                   size (KB)
adpcm        clinton.pcm          295.0        mlk IHaveADream.pcm         1475.2
             clinton.adpcm        73.8         mlk IHaveADream.adpcm       182.1
             lena.tif             262.4
g721 dec     clinton.g721         73.8         mlk IHaveADream.g721        368.8
gsm          clinton.pcm          295.0        mlk IHaveADream.pcm         1475.2
jpeg dec     testimg.jpg          5.8          roses17.jpg                 25.1
jpeg enc     testimg.ppm          101.5        roses17.ppm                 681.1
pgp          compression.ps       717.2        TI-320-user-manual.ps       8456.6
rasta        ex5 c1.wav           17.0         phone.pcmle.wav             83.7
Figure 5: Inputs used for profiling and timing runs
(Figure 6 plots the code size reduction for each benchmark at different cold-code thresholds.)
Figure 6: Reduction due to Profile-Guided Code Compression at Different Thresholds
been space optimized by about 30% on average. Squash, using
the runtime decompression scheme outlined in this paper, compacts
squeezed binaries by about another 14-19% on average.
To evaluate our work we used eleven embedded applications
from the MediaBench benchmark suite (available at www.cs.ucla.
edu/~leec/mediabench): adpcm, which does speech compression
and decompression; epic, an image data compression utility;
g721 dec and g721 enc, which are reference implementations
from Sun Microsystems of the CCITT G.721 voice compression
decoder and encoder; gsm, an implementation of the European
GSM 06.10 provisional standard for full-rate speech transcoding;
jpeg dec and jpeg enc, which implement JPEG image decompression
and compression; mpeg2dec and mpeg2enc, which implement
MPEG-2 decoding and encoding respectively; pgp, a popular cryptographic
encryption/decryption program; and rasta, a speech-analysis
program. The inputs used to obtain the execution profiles used to
guide code compression, as well as those used to evaluate execution
speed (Figure 7(b)), are described in Figure 5: the profiling
inputs refer to those used to obtain the execution profiles that were
used to carry out compression, while the timing inputs refer to the
inputs used to generate execution time data for the uncompressed
and compressed code. Details of these benchmarks are given in the
Appendix.
These programs were compiled using the vendor-supplied C compiler
cc V5.2-036, invoked as cc -O1, with additional flags instructing
the linker to retain relocation information and to produce
statically linked executables. 2 The vendor-supplied compiler cc
produces the most compact code at optimization level -O1: it carries
out local optimizations and recognition of common subexpres-
sions; global optimizations including code motion, strength reduc-
tion, and test replacement; split lifetime analysis; and code schedul-
2 The requirement for statically linked executables is a result of the
fact that alto relies on the presence of relocation information to distinguish
addresses from data. The Tru64 Unix linker ld refuses to
retain relocation information for executables that are not statically
linked.
(Figure 7 has two panels, per benchmark and threshold: (a) code size reduction in percent, and (b) execution time normalized to uncompressed code.)
Figure 7: Effect of Profile-Guided Compression on Code Size and Execution Time
ing; but not size-increasing optimizations such as inlining; integer
multiplication and division expansion using shifts; loop unrolling;
and code replication to eliminate branches.
The programs were then compacted using squeeze. Squeeze eliminates
redundant, unreachable, and dead code; performs interprocedural
strength reduction and constant propagation; and replaces
multiple similar program fragments with function calls to a single
representative function (i.e., it performs procedural abstraction).
Squeeze is very effective at compacting code. If we start with an executable
produced by cc -O1 and remove unreachable code and
no-op instructions, squeeze will reduce the number of instructions
that remain by approximately 30% on average.
The remaining instructions were given to squash along with profile
information obtained by running the original executable on sample
inputs to obtain execution counts for the program's basic blocks.
Squash produces an executable that contains never-compressed code,
entry stubs, the function offset table, the runtime decompressor, the
compressed code, the buffer used to hold dynamically generated
stubs, and the runtime buffer. All of this space is included in the
code size measurement of squashed executables.
Figure 6 shows how the amount of code size reduction obtained
using profile-guided compression varies with the cold code threshold
θ. With θ = 0.0, only code that is never executed is considered
to be cold; in this case, we see size reductions ranging
from 9.0% (g721 enc) to 22.1% (pgp), with a mean reduction of
13.7%. The size reductions obtained increase as we increase θ,
which makes more and more code available for compression. Thus,
at θ = 0.00001 we see reductions ranging from 12.1% (adpcm)
to 23.7% (pgp), with a mean reduction of 16.8%. At the
extreme, with θ = 1.0, where all code is considered cold, the code
size reductions range from 21.5% (adpcm) to 31.8% (pgp), with a
mean of 26.5%. It is noteworthy that much of the size reductions
are obtained using quite low thresholds, and that the rate at which
the reduction in code size increases with θ is quite small. For ex-
ample, increasing θ by five orders of magnitude, from 0.00001 to
1.0, yields only an additional 10% benefit in code size reduction.
However, as θ is increased, the runtime overhead associated with
repeated dynamic decompression of code quickly begins to make
itself felt. Our experience with this set of programs (and others)
indicates that beyond θ = 0.0001 the runtime overhead becomes
quite noticeable. To obtain a reasonable balance between code size
improvements and execution speed, we focus on values of θ up to
0.00005.
Execution time data were obtained on a workstation with a 667
MHz Compaq Alpha 21264 EV67 processor with a split two-way
set-associative primary cache (64 Kbytes each of instruction and
data cache) and 512 MB of main memory running Tru64 Unix. In
each case, the execution time was obtained as the smallest of 10
runs of an executable on an otherwise unloaded system.
Figure 7 examines the performance of our programs, both in
terms of size and speed, for θ ranging from 0.0 to 0.00005. The final
set of bars in this figure shows the mean values for code size reduction
and execution time, respectively, relative to squeezed code;
the number at the top of each bar gives the actual value of the geometric
mean for that case. It can be seen that at low cold-code
thresholds, the runtime overhead incurred by profile-guided code
compression is small: at θ = 0.0 the compressed code is about
the same speed, on average, as the code without compression; at
θ = 0.00001 we incur an average execution time overhead of
4%; and at θ = 0.00005 the average overhead is 24%. Given
the corresponding size reductions obtained, ranging from 13.7%
to 18.8%, these overheads do not seem unreasonably high. (Note
that these reductions in size are on top of the roughly 30% code
size reduction we obtain using our prior work on code compaction [7].)
It is important to note, in this context, that the execution speed
of compressed code can suffer dramatically if the timing inputs,
i.e., inputs used to measure "actual" execution speed, cause a large
number of calls to the decompressor. This can happen for two rea-
sons. First, a code fragment that is cold in the profile may occur
in a cycle, which can be either a loop within a procedure, or an
inter-procedural cycle arising out of recursion. Second, the region
partitioning algorithm described in Section 4 may split a loop into
multiple regions. In either case, if the loop or cycle is executed
repeatedly in the timing inputs, the repeated code decompression
can have a significant adverse effect on execution speed. An example
of the first situation occurs in the SPECint-95 benchmark li,
where an interprocedural cycle, that is never executed in the pro-
file, is executed many times with the timing input. An example of
the second situation occurs in the benchmark mpeg2dec when the
runtime buffer size bound K is small.
8. RELATED WORK
Our work combines aspects of profile-directed optimization, run-time
code generation/modification, and program compression. Dynamic
optimization systems, such as Dynamo [4], collect profile
information and use it to generate or modify code at runtime. These
systems are not designed to minimize the memory footprint of the
executable, but rather to decrease execution time. They tend to
focus optimization effort on hot code, whereas our compression efforts
are most aggressive on cold code.
More closely related is the work of Hoogerbrugge et al., who
compile cold code into interpreted byte code for a stack-based machine
[14]. By contrast, we use Huffman coding to compress cold
code, and dynamically uncompress the compressed code at runtime
as needed. Thus, our system does not incur the memory cost of a
byte-code interpreter.
There has been a significant amount of work on architectural extensions
for the execution of compressed code: examples include
Thumb for ARM processors [3], CodePack for PowerPC processors
[15], and MIPS16, for MIPS processors [16]. Special hardware
support is used to expand each compressed instruction to its
executable form prior to execution. While such an approach has the
advantage of not incurring the space overheads for control stubs
and time overheads for software decompression, the requirement
for special hardware limits its general applicability. Lefurgy et
al. describe a hybrid system where decompression is carried out
mostly in software, but with the assistance of special hardware
instructions to allow direct manipulation of the instruction cache
[18]; decompression is carried out at the granularity of individual
cache lines.
Previous work in program compression has explored the compressibility
of a wide range of program representations: source
programs, intermediate representations, machine codes, etc. [24].
The resulting compressed form either must be decompressed (and
perhaps compiled) before execution [9, 10, 11] or it can be executed
(or interpreted [13, 21]) without decompression [6, 12]. The
first method results in a smaller compressed representation than the
second, but requires the time and space overhead of decompression
before execution. We avoid requiring a large amount of additional
space to place the decompressed code by choosing to decompress
small pieces of the code on demand, using a single, small
runtime buffer. Similar techniques of partial decompression and
decompression-on-the-fly have been used under similar situations
[9, 19], but these techniques require altering the runtime operation
or the hardware of the computer.
Most of the earlier work on code compression to yield smaller
executables treated an executable program as a simple linear sequence
of instructions, and used a suffix tree construction to identify
repeated code fragments that could be abstracted out into functions
[6, 12]. We have recently shown that it is possible to obtain results
that are as good, or better, by using aggressive inter-procedural
size-reducing compiler optimizations applied to the control flow
graph of the program, instead of using a suffix-tree construction
over a linear sequence of instructions [7].
9. CONCLUSIONS AND FUTURE WORK
We have described an approach to use execution profiles to guide
code compression. Infrequently executed code is compressed using
data compression techniques that produce compact representations,
and is decompressed dynamically prior to execution if needed. This
has several benefits: the use of powerful compression techniques
allows significant improvements in the amount of code size reduction
achieved; for low execution frequency thresholds the runtime
overheads are small; and finally, no special hardware support is
needed for runtime decompression of compressed code. Experimental
results indicate that, with the proper choice of cold code
thresholds, this approach can be effective in reducing the memory
footprint of programs without significantly compromising execution
speed: we see code size reductions of 13.7%
to 18.8%, on average, for a set of embedded applications,
relative to the code size obtained using our prior work
on code compaction [7]; the concomitant effect on execution time
ranges from a very slight speedup for θ = 0.0 to a 27% slowdown,
on average, at the highest threshold we considered.
We are currently looking into a number of ways to enhance this
work further. These include other algorithms for compression and
decompression, as well as other algorithms for constructing compressible
regions within a program.
Acknowledgements
We gratefully acknowledge the loan of equipment by Karen Flat-
Richard Flower, and Robert Muth of Compaq Corp.
10. REFERENCES
--R
Effective Code Generation in a Just-in-Time Java Compiler
Alpha Architecture Handbook
A Transparent Runtime Optimization System.
Managing Gigabytes: Compressing and Indexing Documents and Images
Enhanced code compression for embedded RISC processors.
Compiler Techniques for Code Compaction.
Efficient implementation of the Smalltalk-80 system
Code compression.
Adaptive compression of syntax trees and iterative dynamic code optimization: Two basic technologies for mobile-object systems
binaries.
Custom instruction sets for code compression.
A Code Compression System Based on Pipelined Interpreters.
A Decompression Core for PowerPC.
An Empirical Study of FORTRAN Programs.
Reducing Code Size with Run-Time Decompression
Optimizing an ANSI C interpreter with superoperators.
Overview of the IBM Java Just-in-Time Compiler
Texas Instruments Inc.
The 'Code Compaction' Bibliography.
--TR
Optimizing an ANSI C interpreter with superoperators
Code compression
binaries
Fast, effective code generation in a just-in-time Java compiler
Enhanced code compression for embedded RISC processors
A code compression system based on pipelined interpreters
Compiler techniques for code compaction
dictionary program compression
Alto
Analyzing and compressing assembly code
Managing Gigabytes
Adaptive Compression of Syntax Trees and Iterative Dynamic Code Optimization
Efficient implementation of the smalltalk-80 system
--CTR
Saumya Debray , William S. Evans, Cold code decompression at runtime, Communications of the ACM, v.46 n.8, August
Arvind Krishnaswamy , Rajiv Gupta, Dynamic coalescing for 16-bit instructions, ACM Transactions on Embedded Computing Systems (TECS), v.4 n.1, p.3-37, February 2005
Stacey Shogan , Bruce R. Childers, Compact Binaries with Code Compression in a Software Dynamic Translator, Proceedings of the conference on Design, automation and test in Europe, p.21052, February 16-20, 2004
Yuan Xie , Wayne Wolf , Haris Lekatsas, Profile-Driven Selective Code Compression, Proceedings of the conference on Design, Automation and Test in Europe, p.10462, March 03-07,
Karine Heydemann , Francois Bodin , Henri-Pierre Charles, A software-only compression system for trading-offs between performance and code size, Proceedings of the 2005 workshop on Software and compilers for embedded systems, p.27-36, September 29-October 01, 2005, Dallas, Texas
E. Wanderley Netto , R. Azevedo , P. Centoducatte , G. Araujo, Multi-profile based code compression, Proceedings of the 41st annual conference on Design automation, June 07-11, 2004, San Diego, CA, USA
Shao-Yang Wang , Rong-Guey Chang, Code size reduction by compressing repeated instruction sequences, The Journal of Supercomputing, v.40 n.3, p.319-331, June 2007
Israel Waldman , Shlomit S. Pinter, Profile-driven compression scheme for embedded systems, Proceedings of the 3rd conference on Computing frontiers, May 03-05, 2006, Ischia, Italy
John Gilbert , David M. Abrahamson, Adaptive object code compression, Proceedings of the 2006 international conference on Compilers, architecture and synthesis for embedded systems, October 22-25, 2006, Seoul, Korea
Rajeev Kumar , Amit Gupta , B. S. Pankaj , Mrinmoy Ghosh , P. P. Chakrabarti, Post-compilation optimization for multiple gains with pattern matching, ACM SIGPLAN Notices, v.40 n.12, December 2005
Taweesup Apiwattanapong , Mary Jean Harrold, Selective path profiling, ACM SIGSOFT Software Engineering Notes, v.28 n.1, January
Jeremy Lau , Stefan Schoenmackers , Timothy Sherwood , Brad Calder, Reducing code size with echo instructions, Proceedings of the international conference on Compilers, architecture and synthesis for embedded systems, October 30-November 01, 2003, San Jose, California, USA
Shukang Zhou , Bruce R. Childers , Mary Lou Soffa, Planning for code buffer management in distributed virtual execution environments, Proceedings of the 1st ACM/USENIX international conference on Virtual execution environments, June 11-12, 2005, Chicago, IL, USA
Mario Latendresse , Marc Feeley, Generation of fast interpreters for Huffman compressed bytecode, Proceedings of the workshop on Interpreters, virtual machines and emulators, p.32-40, June 12-12, 2003, San Diego, California
Hongxu Cai , Zhong Shao , Alexander Vaynberg, Certified self-modifying code, ACM SIGPLAN Notices, v.42 n.6, June 2007
Haifeng He , John Trimble , Somu Perianayagam , Saumya Debray , Gregory Andrews, Code Compaction of an Operating System Kernel, Proceedings of the International Symposium on Code Generation and Optimization, p.283-298, March 11-14, 2007
Steve Haga , Andrew Webber , Yi Zhang , Nghi Nguyen , Rajeev Barua, Reducing code size in VLIW instruction scheduling, Journal of Embedded Computing, v.1 n.3, p.415-433, August 2005
Mario Latendresse , Marc Feeley, Generation of fast interpreters for Huffman compressed bytecode, Science of Computer Programming, v.57 n.3, p.295-317, September 2005
Bjorn De Sutter , Bruno De Bus , Koen De Bosschere, Sifting out the mud: low level C++ code reuse, ACM SIGPLAN Notices, v.37 n.11, November 2002
O. Ozturk , G. Chen , M. Kandemir , I. Kolcu, Compiler-Guided data compression for reducing memory consumption of embedded applications, Proceedings of the 2006 conference on Asia South Pacific design automation, January 24-27, 2006, Yokohama, Japan
Guilin Chen , Mahmut Kandemir, Optimizing Address Code Generation for Array-Intensive DSP Applications, Proceedings of the international symposium on Code generation and optimization, p.141-152, March 20-23, 2005
Marc L. Corliss , E. Christopher Lewis , Amir Roth, The implementation and evaluation of dynamic code decompression using DISE, ACM Transactions on Embedded Computing Systems (TECS), v.4 n.1, p.38-72, February 2005
Zhang , Chandra Krintz, The design, implementation, and evaluation of adaptive code unloading for resource-constrained devices, ACM Transactions on Architecture and Code Optimization (TACO), v.2 n.2, p.131-164, June 2005
Mary J. Irwin, Exploiting frequent field values in java objects for reducing heap memory requirements, Proceedings of the 1st ACM/USENIX international conference on Virtual execution environments, June 11-12, 2005, Chicago, IL, USA
Bjorn De Sutter , Bruno De Bus , Koen De Bosschere, Link-time binary rewriting techniques for program compaction, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.5, p.882-945, September 2005
Chang Hong Lin , Yuan Xie , Wayne Wolf, Code compression for VLIW embedded systems using a self-generating table, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.15 n.10, p.1160-1171, October 2007 | dynamic decompression;code compression;code size reduction;code compaction |
512550 | A compiler approach to fast hardware design space exploration in FPGA-based systems. | The current practice of mapping computations to custom hardware implementations requires programmers to assume the role of hardware designers. In tuning the performance of their hardware implementation, designers manually apply loop transformations such as loop unrolling. For example, loop unrolling is used to expose instruction-level parallelism at the expense of more hardware resources for concurrent operator evaluation. Because unrolling also increases the amount of data a computation requires, too much unrolling can lead to a memory bound implementation where resources are idle. To negotiate inherent hardware space-time trade-offs, designers must engage in an iterative refinement cycle, at each step manually applying transformations and evaluating their impact. This process is not only error-prone and tedious but also prohibitively expensive given the large search spaces and long synthesis times. This paper describes an automated approach to hardware design space exploration, through a collaboration between parallelizing compiler technology and high-level synthesis tools. We present a compiler algorithm that automatically explores the large design spaces resulting from the application of several program transformations commonly used in application-specific hardware designs. Our approach uses synthesis estimation techniques to quantitatively evaluate alternate designs for a loop nest computation. We have implemented this design space exploration algorithm in the context of a compilation and synthesis system called DEFACTO, and present results of this implementation on five multimedia kernels. Our algorithm derives an implementation that closely matches the performance of the fastest design in the design space, and among implementations with comparable performance, selects the smallest design. We search on average only 0.3% of the design space. This technology thus significantly raises the level of abstraction for hardware design and explores a design space much larger than is feasible for a human designer. | INTRODUCTION
The extreme flexibility of Field Programmable Gate Arrays (FPGAs)
has made them the medium of choice for fast
hardware prototyping and a popular vehicle for the realization
of custom computing machines. FPGAs are composed
of thousands of small programmable logic cells dynamically
interconnected to allow the implementation of any
logic function. Tremendous growth in device capacity has
made possible implementation of complex functions in FP-
GAs. For example, FPGA implementations can sometimes
yield even faster solutions than conventional hardware, up
to 2 orders of magnitude on encryption [18]. In addition
FPGAs offer a much faster time to market for time-critical
applications and allow post-silicon in-field modification to
prototypical or low-volume designs where an Application
Specific Integrated Circuit (ASIC) is not justified.
Despite growing importance of application-specific FPGA
designs, these devices are still difficult to program, making
them inaccessible to the average developer. The standard
practice requires developers to express the application in
a hardware-oriented language such as Verilog or VHDL,
and synthesize the design to hardware using a wide variety
of synthesis tools. As optimizations performed by
synthesis tools are very limited, developers must perform
high-level and global optimizations by hand. For example,
no commercially-available high-level synthesis tool handles
multi-dimensional array variables (several claim to support this
feature, but only for simulation purposes, not actual hardware
synthesis) nor automatic selection of loop unroll factors.
Because of the complexity of synthesis, it is difficult to
predict a priori the performance and space characteristics of
the resulting design. For this reason, developers engage in
an iterative refinement cycle, at each step manually applying
transformations, synthesizing the design, examining the
results, and modifying the design to trade off performance
and space. Throughout this process, called design space
exploration, the developer carries the responsibility for the
correctness of the application mapping.
We believe the way to make programming of FPGA-based
systems more accessible is to offer a high-level imperative
programming paradigm, such as C, coupled with compiler
technology oriented towards FPGA designs. In this way,
developers retain the advantages of a simple programming
model via the high-level language but rely on powerful compiler
analyses and transformations to optimize the design as
well as automate most of the tedious and error-prone mapping
tasks. We make the observation that, for a class of
FPGA applications characterized as highly parallel array-based
computations (e.g., multimedia codes), many hand
optimizations performed by developers are similar to transformations
used in parallelizing compilers. For example, developers
parallelize computations, optimize external memory
accesses, explicitly manage storage and perform loop
transformations. For this reason, we argue that parallelizing
compiler technology can be used to optimize FPGA designs.
In this paper, we describe an automated approach to design
space exploration, based on a collaboration between a
parallelizing compiler and high-level synthesis tools. Completely
synthesizing a design is prohibitively slow (hours to
days) and further, the compiler must try several designs to
arrive at a good solution. For these reasons, we exploit estimation
from behavioral synthesis to determine specific hardware
parameters (e.g., size and speed) with which the compiler
can quantitatively evaluate the application of a transformation
to derive an optimized and feasible implementation
of the loop nest computation. Since the hardware implementation
is bounded in terms of capacity, the compiler
transformations must also consider space constraints. This
compiler algorithm effectively enables developers to explore
a potentially large design space, which without automation
would not be feasible.
In previous work, we presented an overview of DEFACTO,
the system upon which this work is based, which combines
parallelizing compiler technology in the Stanford SUIF compiler
with hardware synthesis tools [9]. In this paper, we
present a detailed algorithm for design space exploration
and results demonstrating its effectiveness. While there are
a few systems that automatically synthesize hardware designs
from C specifications [24], to our knowledge there is no
other system that automatically explores the design space in
collaboration with behavioral synthesis estimation features.
Our current infrastructure largely supports the direct mapping
of computations to multiple FPGAs [26]. However, the
work in this paper describes an implementation and experimental
results for designs that are mapped to a single FPGA
and multiple memories. We thus focus on the algorithmic
aspects of design space exploration under simpler data and
computation partitioning strategies.
This paper makes the following specific contributions.
. Describes the integration of behavioral synthesis tools
and parallelizing compiler technology to map computations
to FPGA-based architectures. We present a
compiler algorithm for design space exploration that
relies on behavioral synthesis estimates. The algorithm
applies loop transformations to explore a space-time
trade-off in the realization of hardware designs.
. Defines a balance metric for guiding design space explo-
ration, which suggests when it is profitable to devote
more resources to storage or computation. The design
space exploration algorithm exploits monotonicity
properties of the balance metric to effectively prune
large regions of the search space, thereby allowing the
compiler to consider a wider range of transformations
that otherwise would not be feasible.
. Presents experimental results for five multimedia ker-
nels. Our algorithm derives an implementation that
closely matches the performance of the fastest design
in the design space, and among implementations with
comparable performance, selects the smallest design.
We search on average only 0.3% of the design space.
As technology advances increase the density of FPGA devices,
tracking Moore's law for conventional logic,
devices will be able to support more sophisticated
functions. With the future trend towards on-chip integration
of internal memories, FPGAs with special-purpose
functional units are becoming attractive as a replacement
for ASICs and for custom embedded computing architec-
tures. We foresee a growing need to combine the strengths
of high-level program analysis techniques, to complement
the capabilities of current and future synthesis tools. Devices
and consequently designs will become more complex,
demanding an efficient solution to exploring even larger design
spaces.
The remainder of the paper is organized as follows. In the
next section we present some background on FPGAs and
behavioral synthesis. Section 3 describes the optimization
goal of our design space exploration algorithm in mapping
loop nest computations to hardware. In Section 4 we discuss
the analyses and transformation our algorithm uses. In Section
5 we present the design space exploration algorithm. In
Section 6 we present experimental results for the application
of this algorithm to 5 image processing computations. We
survey related work in Section 7 and conclude in Section 8.
2. BACKGROUND
We now describe FPGAs and synthesis, and compare with
optimizations performed in parallelizing compilers. We also
discuss features of our target application domain.
2.1 Field-Programmable-Gate-Arrays
FPGAs are a popular vehicle for rapid prototyping or as
a way to implement simple logic interfaces. FPGAs are implemented
as (re)programmable seas-of-gates with distinct
internal architectures. For example, the Xilinx Virtex family
of devices consists of 12,288 device slices, where each slice in
turn is composed of 2 look-up tables (LUTs), each of which
can implement an arbitrary logic function of 4 boolean inputs
and 1 output [15]. Two slices form a configurable logic
block (CLB), and these blocks are interconnected in a 2-
dimensional mesh via programmable static routing switches.
To configure an FPGA, designers have to download a bit-stream
file with the configuration of all slices in the FPGA
as well as the routing. Other programmable devices, for
example the APEX II devices from Altera, have a more hierarchical
routing approach to connecting the CLBs in their
FPGAs, but the overall functionality is similar [6].
As with traditional architectures, bandwidth to external
memory is a key performance bottleneck in FPGAs, since
it is possible to compute orders of magnitude more data
in a cycle than can be fetched from or stored to memory.
However, unlike traditional architectures, an FPGA has the
flexibility to devote its internal configurable resources either
to storage or to computation.
2.2 FPGA Synthesis Flow
Synthesis flow for FPGAs is the term given to the process
of translating functional logic specifications to a bitstream
description that configures the device. This functional specification
can be done at multiple levels. Using hardware
description languages such as VHDL or Verilog, designers
can specify the functionality of their datapath circuits (e.g.,
adders, multipliers, etc.) as a diagram of design blocks. This
structural specification defines the input/output interface for
each block and allows the designers to describe finite state
machines (FSMs) to control the temporal behavior of each of
the blocks. Using this approach designers can control every
single aspect of operations in their datapaths. This is the
preferred approach when maximum performance is sought,
but requires extremely high design times.
The process that takes a structural specification and targets
a particular architecture's programmable units (LUTs
in the case of Xilinx devices) is called RTL-level synthesis.
The RTL-level synthesis generates a netlist representation of
the intended design, used as the input of low-level synthesis
steps such as the mapping and place-and-route (P&R) to
ultimately generate the device bitstream configuration file.
2.3 Behavioral Synthesis vs. Compilers
Behavioral specifications in VHDL or Verilog, as opposed
to lower level structural specifications, express computations
without committing to a particular hardware implementation
structure. The process of taking a behavioral specification
and generating a hardware implementation is called
behavioral synthesis. Behavioral synthesis performs three
core functions (illustrated by the sketch after this list):
. binding operators and registers in the specification
to hardware implementations (e.g., selecting a ripple-carry
adder to implement an addition);
. resource allocation (e.g., deciding how many ripple-carry
adders are needed); and,
. scheduling operations in particular clock cycles.
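As a purely illustrative sketch of these three functions (ours, not taken from any particular tool; the operator choices and cycle assignments in the comments are hypothetical), consider the following simple C loop.
/* Illustrative only: a loop body on which behavioral synthesis performs
   binding, allocation and scheduling. */
void mac_loop(int n, const int a[], const int b[], const int c[], int d[])
{
    /* one multiply and one add per iteration */
    for (int j = 0; j < n; j++)
        d[j] = a[j] * b[j] + c[j];
}
/* For this body, a behavioral synthesis tool would, conceptually:
   - bind:     the '*' to a concrete multiplier implementation and the '+'
               to, say, a ripple-carry adder;
   - allocate: decide how many multipliers and adders to instantiate,
               subject to the designer's area and cycle constraints;
   - schedule: assign the loads of a[j], b[j], c[j], the multiply, the add
               and the store of d[j] to specific clock cycles. */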
To generate a particular implementation, behavioral synthesis
requires the programmer to specify the target design
requirements in terms of area, clock rate, number of clock
cycles, number of operators, or some combination. For ex-
ample, the designer might request a design that uses two
multipliers and takes at most 10 clock cycles. Behavioral
synthesis tools use this information to generate a particular
implementation that satisfies these constraints.
In addition, behavioral synthesis supports some optimiza-
tions, but relies heavily on the developer to direct some of
the mapping steps. For example, current behavioral synthesis
tools allow the specification of which loops to unroll.
After loop unrolling, the tool will perform extensive optimizations
on the resulting inner loop body, such as parallelizing
and pipelining operations and minimizing registers
and operators to save space. However, deciding the unroll
factor is left up to the programmer.
Behavioral Synthesis                         Parallelizing Compilers
Optimizations only on scalar variables       Optimizations on scalars and arrays
Optimizations only inside loop body          Optimizations inside loop body and
                                             across loop iterations
Supports user-controlled loop unrolling      Analyses guide automatic loop
                                             transformations
Manages registers and inter-operator         Optimizes memory accesses; evaluates
communication                                trade-offs of different storage
                                             on- and off-chip
Considers only single FPGA                   System-level view: multiple FPGAs,
                                             multiple memories
Performs allocation, binding and             No knowledge of hardware
scheduling of hardware resources             implementation of computation
Table 1: Comparison of Behavioral Synthesis and Parallelizing Compiler Technologies.
While there are some similarities between the optimizations
performed by synthesis tools and parallelizing com-
pilers, in many ways they offer complementary capabilities,
as shown in Table 1. The key advantage of parallelizing
compiler technology over behavioral synthesis is the ability
to perform data dependence analysis on array variables,
used as a basis for parallelization, loop transformations and
optimizing memory accesses. This technology permits optimization
of designs with array variables, where some data
resides in off-chip memories. Further, it enables reasoning
about the benefits of code transformations (such as loop
unrolling) without explicitly applying them. In addition,
parallelizing compilers are capable of performing global program
analysis, which permits optimization across the entire
system.
2.4 Target Application Domain
Because of their customizability, FPGAs are commonly
used for applications that have significant amounts of fine-grain
parallelism and possibly can benefit from non-standard
numeric formats (e.g., reduced data widths). Specifically,
multimedia applications, including image and signal processing
on 8-bit and 16-bit data, respectively, offer a wide
variety of popular applications that map well to FPGAs.
For example, a typical image processing algorithm scans
a multi-dimensional image and operates on a given pixel
value and all its neighbors. Images are often represented as
multi-dimensional array variables, and the computation is
expressed as a loop nest. Such applications exhibit abundant
concurrency as well as temporal reuse of data. Examples
of computations that fall into this category include image
correlation, Laplacian image operators, erosion/dilation
operators and edge detection.
Fortunately, such applications are a good match for the
capabilities of current parallelizing compiler analyses, which
are most effective in the affine domain, where array subscript
expressions are linear functions of the loop index variables
and constants [25]. In this paper, we restrict input programs
to loop nest computations on array and scalar variables (no
pointers), where all subscript expressions are affine with a
fixed stride. The loop bounds must be constant. (Non-constant
bounds could potentially be supported by the algorithm, but the
generated code and resulting FPGA designs would be much more
complex; for example, behavioral synthesis would transform a for
loop with a non-constant bound into a while loop in the hardware
implementation.) We support
loops with control flow, but to simplify control and
scheduling, the generated code always performs conditional
memory accesses.
3. OPTIMIZATION GOAL AND BALANCE
Simply stated, the optimization criteria for mapping a single
loop nest to FPGA-based systems are as follows: (1) the
design must not exceed the capacity constraints of the sys-
tem; (2) the execution time should be minimized; and, (3)
for a given level of performance, FPGA space usage should
be minimized. The motivation for the first two criteria
should be obvious, but the third criterion is also needed for
several reasons. First, if two designs have equivalent perfor-
mance, the smaller design is more desirable, in that it frees
up space for other uses of the FPGA logic, such as to map
other loop nests. In addition, a smaller design usually has
less routing complexity, and as a result, may achieve a faster
target clock rate. Moreover, the third criterion suggests a
strategy for selecting among a set of candidate designs that
meet the first two criteria.
With respect to a particular set of transformations, which
are described in the next section, our algorithm attempts
to select the best design that meets the above criteria. The
algorithm uses two metrics to guide the selection of a design.
First, results of estimation provide space usage of the design,
related to criterion 1 above. Another important metric used
to guide the selection of a design, related to criteria 2 and
3, is Balance, defined by the equation Balance = F / C,
where F refers to the data fetch rate, the total data bits
that memory can provide per cycle, and C refers to the data
consumption rate, total data bits the computation can consume
during the computational delay. If balance is close to
one, both memories and FPGAs are busy. If balance is less
than one, the design is memory bound; if greater than one,
it is compute bound. When a design is not balanced, this
metric suggests whether more resources should be devoted
to improving computation time or memory time.
We borrow the notion of balance from previous work for
mapping array variables to scalar registers to balance the
floating point operations and memory accesses [5]. Because
we have the flexibility in FPGAs to adjust time spent in
either computation or memory accesses, we use the data
fetch rate and data consumption rate, and compare them
under different optimization assumptions.
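A rough sketch of how this metric can be computed from synthesis and memory-system estimates follows; this is our illustration only, and all parameter names are assumptions rather than the system's actual interface.
/* Balance = F / C (illustrative).
   F: data fetch rate, the bits the external memories can supply per cycle.
   C: data consumption rate, the bits one (unrolled) iteration consumes
      divided by its computational delay in cycles, as estimated by
      behavioral synthesis. */
double balance(int num_memories, int bits_per_memory_per_cycle,
               int bits_consumed_per_iteration, int cycles_per_iteration)
{
    double F = (double)num_memories * bits_per_memory_per_cycle;
    double C = (double)bits_consumed_per_iteration / cycles_per_iteration;
    return F / C;   /* < 1: memory bound; > 1: compute bound; about 1: balanced */
}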
4. ANALYSES AND TRANSFORMATIONS
This section describes at a high level the code transformations
performed by our system, as illustrated by the FIR
filter example in Figure 1.
Unroll-and-Jam. The first code transformation, unroll-
and-jam, involves unrolling one or more loops in the iteration
space and fusing inner loop bodies together, as shown in
Figure
1(b). Unrolling exposes operator parallelism to high-level
synthesis. In the example, all of the multiplies can
be performed in parallel. Two additions can subsequently
be performed in parallel, followed by two more additions.
Unroll-and-jam can also decrease the dependence distances
for reused data accesses, which, when combined with scalar
replacement discussed below, can be used to expose opportunities
for parallel memory accesses.
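The fragment below is our own FIR-like illustration of unroll-and-jam (it parallels, but does not reproduce, the code of Figure 1), with the outer loop unrolled by a factor of 2.
/* Original 2-deep loop nest (illustrative only). */
void fir_like(int out[64], const int coef[32], const int in[96])
{
    for (int j = 0; j < 64; j++)
        for (int i = 0; i < 32; i++)
            out[j] += coef[i] * in[j + i];
}
/* After unrolling the j loop by 1 (unroll factor 2) and jamming the two
   copies of the i loop into a single body: the two multiplies are
   independent, so high-level synthesis can schedule them in parallel. */
void fir_like_unroll_jam(int out[64], const int coef[32], const int in[96])
{
    for (int j = 0; j < 64; j += 2)
        for (int i = 0; i < 32; i++) {
            out[j]     += coef[i] * in[j + i];
            out[j + 1] += coef[i] * in[j + i + 1];
        }
}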
Scalar Replacement. Scalar replacement replaces array
references by accesses to temporary scalar variables, so
that high-level synthesis will exploit reuse in registers [5].
Our approach to scalar replacement closely matches previous
work, which eliminates true dependences when reuse
is carried by the innermost loop, for accesses in the affine
domain with consistent dependences (i.e., constant dependence
distances) [5]. There are, however, two differences:
(1) we also eliminate unnecessary memory writes on output
dependences; and, (2) we exploit reuse across all loops
in the nest, not just the innermost loop. The latter differ-
ence stems from the observation that many, though not all,
algorithms mapped to FPGAs have sufficiently small loop
bounds or small reuse distances, and the number of registers
that can be configured on an FPGA is sufficiently large.
A more detailed description of our scalar replacement and
register reuse analysis can be found in [9].
In the example in Figure 1(c), we see the results of scalar
replacement, which illustrates some of the above differences
int S[96];
int C[32];
int D[64];
for (j=0; j<64; j++)
(a) Original code.
for (j=0; j<64; j+=2)
(b) After unrolling j loop and i loop by 1 (unroll
factor 2) and jamming copies of i loop together.
for (j=0; j<64; j+=2) { /* initialize D registers */
for (i=0; i<32; i+=2) {
if (j==0) { /* initialize C registers */
rotate registers(c 0 0, . ,c 0 15);
rotate registers(c 1 0, . ,c 1 15);
} (c) After scalar replacement of accesses to C and D across
both i and j loop.
for (j=0; j<32; j++) { /* initialize D registers */
for (i=0; i<16; i++) {
if (j==0) { /* initialize C registers */
rotate registers(c 0 0, . ,c 0 15);
rotate registers(c 1 0, . ,c 1 15);
} (d) Final code generated for FIR, including loop
normalization and data layout optimization.
Figure
1: Optimization Example: FIR.
from previous work. Accesses to arrays C and D can all
be replaced. The D array is written back to memory at
the end of the iteration of the j loop, but redundant writes
are eliminated. Only loop-independent accesses to array S
are replaced because the other accesses to array S do not
have a consistent dependence distance. Because reuse on
array C is carried by the outer loop, to exploit full reuse of
data from C involves introducing extra registers that hold
values of C across all iterations of the inner loop. The rotate
operation shifts the registers and rotates the last one into
the first position; this operation can be performed in parallel
in hardware.
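Continuing the illustrative fir_like nest introduced above (ours, and deliberately simpler than Figure 1(c)), scalar replacement of the accumulator looks roughly as follows; exploiting full reuse of the coefficient array across j iterations would additionally require the rotating registers of Figure 1(c), which we omit.
/* Scalar replacement of out[j] (illustrative): the accumulator is kept in
   a scalar that synthesis maps to an on-chip register, so only one load
   and one store of out[j] remain per j iteration. */
void fir_like_scalar_replaced(int out[64], const int coef[32], const int in[96])
{
    for (int j = 0; j < 64; j++) {
        int acc = out[j];                  /* one load per j iteration   */
        for (int i = 0; i < 32; i++)
            acc += coef[i] * in[j + i];    /* no memory traffic on out[] */
        out[j] = acc;                      /* one store per j iteration  */
    }
}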
Loop Peeling and Loop-Invariant Code Motion.
We see in Figure 1(c) and (d) that values for the c registers
are loaded on the first iteration of the j loop. For clarity
it is not shown here, but the code generated by our compiler
actually peels the first iteration of the j loop instead
of including these conditional loads so that other iterations
of the j loop have the same number of memory loads and
can be optimized and scheduled by high-level synthesis ac-
cordingly. Although at first glance the code size appears to
be doubled by peeling, high-level synthesis will usually reuse
the operators between the peeled and original loop body, so
that the code growth does not correspond to a growth in
the design. Memory accesses to array D are invariant with
respect to the i loop, so they are moved outside the loop
using loop-invariant code motion. Within the main unrolled
loop body, only memory accesses to array S remain.
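A minimal sketch of the peeling step on the same illustrative kernel (ours; the coefficient-register detail is simplified relative to the generated code):
/* Without peeling, every j iteration would carry an if (j == 0) test that
   performs the one-time coefficient loads; peeling the first iteration
   removes the test, so every steady-state iteration has the same number
   of memory accesses and can be scheduled uniformly. */
void fir_like_peeled(int out[64], const int coef[32], const int in[96])
{
    int c[32], acc, i, j;

    for (i = 0; i < 32; i++)               /* peeled iteration j == 0:   */
        c[i] = coef[i];                    /* one-time coefficient loads */
    acc = out[0];
    for (i = 0; i < 32; i++)
        acc += c[i] * in[i];
    out[0] = acc;

    for (j = 1; j < 64; j++) {             /* steady state: no conditionals */
        acc = out[j];
        for (i = 0; i < 32; i++)
            acc += c[i] * in[j + i];
        out[j] = acc;
    }
}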
Data Layout and Array Renaming. Another code
transformation lays out the data in the FPGA's external
memory so as to maximize memory parallelism. Custom
data layout is separated into two distinct phases. In the
first phase, which we call array renaming, the compiler performs a
one-to-one mapping between array access expressions and virtual
memory ids, to customize accesses to each array according
to their access patterns. Array renaming can only be performed
if all accesses to the array within the loop nest are
uniformly generated. Two affine array references
A[a1*i1 + b1, ..., an*in + bn] and A[c1*i1 + d1, ..., cn*in + dn], where
a1, ..., an, b1, ..., bn, c1, ..., cn, and d1, ..., dn are constants and
i1, ..., in are loop index variables, are uniformly generated
if ak = ck for every k, that is, if their subscript expressions differ
only in the constant terms. For example, S[i+1] and S[i+3] are uniformly
generated, while S[i] and S[2*i] are not. If an array's accesses are not uniformly
generated, then it is mapped to a single memory. The result
of array renaming is an even distribution of data across the
virtual memories.
The second phase, called memory mapping, binds virtual
memory ids to physical ids, taking into consideration accesses
by other arrays in the loop nest to avoid scheduling
conflicts. As shown in Figure 1(d), the effect of data layout
is that even elements of S and C are mapped to memory 0,
and odd elements are mapped to memory 1, with accesses
renamed to reflect this layout. D is similarly distributed to
memories 2 and 3.
This approach is similar in spirit to the modulo unrolling
used in the RAW compiler [3]. However, as compared to
modulo unrolling, which is a loop transformation that assumes
a fixed data layout, our approach is a data trans-
formation. Further, our current implementation supports
a more varied set of custom data layouts. A typical lay-out
is cyclic in at least one dimension of an array, possibly
more, but more customized data layouts arise from packing
small data types, strided accesses, and subscript expressions
with multiple induction variables (i.e., subscripts of the form
a1*i1 + ... + an*in + b in which more than one ai
is non-zero). A full discussion of the data layout algorithm
is beyond the scope of this paper, but further discussion can
be found in [9].
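The sketch below (ours) conveys the flavor of a cyclic layout by distributing a one-dimensional array over two memories; Figure 1(d) does the analogous renaming for S, C and D over four memories. In generated hardware the bank selection is resolved statically by the compiler; the run-time test here only keeps the C illustration executable.
/* Illustrative only: cyclic (even/odd) layout of a 96-element array S
   across two physical memories, mem0 (even elements) and mem1 (odd
   elements), with accesses renamed to match. */
static int load_S(const int mem0[48], const int mem1[48], int k)
{
    return (k & 1) ? mem1[k >> 1] : mem0[k >> 1];   /* S[k] after renaming */
}

void fir_like_custom_layout(int out[64], const int coef[32],
                            const int mem0[48], const int mem1[48])
{
    for (int j = 0; j < 64; j += 2)
        for (int i = 0; i < 32; i++) {
            /* S[j+i] and S[j+i+1] always have opposite parity, so they lie
               in different memories and can be fetched in the same cycle. */
            out[j]     += coef[i] * load_S(mem0, mem1, j + i);
            out[j + 1] += coef[i] * load_S(mem0, mem1, j + i + 1);
        }
}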
Summary. To summarize, the algorithm evaluates a focused
set of possible unroll factors for multiple loops in the
loop nest. Data reuse is exploited within and across the
loops in the nest, as a result of scalar replacement by the
compiler, eliminating unnecessary memory accesses. Operator
parallelism is exposed to high-level synthesis through
the unrolling of one or more loops in the nest; any independent
operations will be performed in parallel if high-level
synthesis deems this beneficial.
Thus, we have defined a set of transformations, widely
used in conventional computing, that permit us to adjust
parallelism and data reuse in FPGA-based systems through
a collaboration between parallelizing compiler technology
and high-level synthesis. To meet the optimization criteria
set forth in the previous section, we have reduced the
optimization process to a tractable problem, that of selecting
the unroll factors for each loop in the nest that leads to
a high-performance, balanced, e#cient design. In the next
section, we present the algorithm in detail.
Although our algorithm focuses on a fixed set of compiler
transformations, the notion of using balance to guide the
performance-space tradeoff in design space exploration can
be used for other optimizations as well.
5. OPTIMIZATION ALGORITHM
The discussion in this section defines terms and uses these
to describe the design space exploration algorithm. The
algorithm is presented assuming that scalar replacement will
exploit all reuse in the loop nest on affine array accesses. The
resulting design will store all reused data internally on the
FPGA, which is feasible for many applications with short
reuse distances, but may require too many on-chip registers
in the general case. We address this problem by limiting the
number of registers in Section 5.4.
5.1 Definitions
We define a saturation point as a vector of unroll factors
where the memory parallelism reaches the bandwidth of the
architecture, such that the following property holds for the
resulting unrolled loop body:
sum_{i in Reads} width_i  = C1 * sum_{l <= NumMemories} width_l , and
sum_{j in Writes} width_j = C2 * sum_{l <= NumMemories} width_l .
Here, C1 and C2 are integer constants. To simplify this
discussion, let us assume that the access widths match the
memory width, so that we are simply looking for an unroll
factor that results in a multiple of NumMemories read and
write accesses for the smallest values of C1 and C2 . The
saturation set, Sat, can then be determined as a function of
the number of read and write accesses, R and W , in a single
iteration of the loop nest and the unroll factor for each loop
in the nest. We consider reads and writes separately because
they will be scheduled separately.
We are interested in determining the saturation point after
scalar replacement and redundant write elimination. For the
purposes of this discussion, we assume that for each array
accessed in the main loop body, all accesses are uniformly
generated, and thus a customized data layout will be ob-
tained; modifications to the algorithm when this does not
hold are straightforward, but complicate the calculation of
the saturation point. R is defined as the number of uniformly
generated read sets. W is the number of uniformly
generated write sets. That is, there is a single memory read
and single write access for each set of uniformly generated
references because all others will be removed by scalar replacement
or redundant write elimination.
We define an unroll factor vector as U = <u1, ..., un>,
where ui corresponds to the unroll factor for loop i, and a
function P(U) = u1 * u2 * ... * un, the product over 1 <= i <= n of the
unroll factors. Let Psat denote the smallest such product at which the
saturation property above holds.
The saturation set Sat can then be defined as the set of vectors whose
product is Psat, where, for every ui != 1, array subscript expressions
for memory accesses are varying with respect to loop i. That
is, the saturation point considers unrolling only those loops
that will introduce additional memory parallelism. Since
loop peeling and loop-invariant code motion have eliminated
memory accesses in the main loop body that are invariant
with respect to any loop in the nest, from the perspective of
memory parallelism, all such unroll factor vectors are equiv-
alent. A particular saturation point Sat i refers to unrolling
the i loop by the factor Psat , and using an unroll factor of
1 for all other loops.
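The following is our own rendering of the computation implied by this definition, under the simplifying assumption above that all access widths equal the memory width; the function and variable names are ours.
static int gcd(int a, int b) { while (b) { int t = a % b; a = b; b = t; } return a; }
static int lcm(int a, int b) { return a / gcd(a, b) * b; }

/* Smallest unroll product Psat for which the unrolled body contains a
   multiple of NumMemories read accesses and of write accesses, given R
   uniformly generated read sets and W write sets per original iteration. */
int saturation_product(int R, int W, int num_memories)
{
    int need_reads  = num_memories / gcd(R, num_memories);
    int need_writes = num_memories / gcd(W, num_memories);
    return lcm(need_reads, need_writes);
}
/* Example: with 4 memories and one read set and one write set remaining
   after scalar replacement (R = W = 1), saturation_product(1, 1, 4) == 4,
   i.e., the body must be unrolled until it contains 4 reads and 4 writes. */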
5.2 Search Space Properties
The optimization involves selecting unroll factors for the
loops in the nest. Our search is guided by the following
observations about the impact of unrolling a single loop in
the nest, which depend upon the assumptions about target
applications in Section 2.4.
Observation 1. The data fetch rate is monotonically non-decreasing
as the unroll factor increases by multiples of Psat ,
but it is also nonincreasing beyond the saturation point.
Intuitively, the data fetch rate increases as there are more
memory accesses available in the loop body for scheduling
in parallel. This observation requires that the data is laid
out in memory and the accesses are scheduled such that the
number of independent memory accesses on each memory
cycle is monotonically nondecreasing as the unroll factor
increases. Here, the unroll factor must increase by multiples
of Psat , so that each time a memory operation is per-
formed, there are NumMemories accesses in the main loop
body that are available to schedule in parallel. This is true
whenever data layout has successfully mapped each array
to multiple memories. (If data layout is not successful, as
is the case when not all accesses to the same array are uniformly
generated, a steady state mapping of data to memories
can guarantee monotonicity even when there are less
than NumMemories parallel accesses, but we will ignore
this possibility in the subsequent discussion.)
Data layout and mapping data to specific memories is
controlled by the compiler. Given the property of array renaming
from Section 4, that the accessed data is evenly distributed
across virtual memory ids, this mapping derives a
solution that, in the absence of conflicting accesses to other
arrays, exposes fully parallelizable accesses. To prevent conflicting
read or write accesses in mapping virtual memory ids
to physical ones, we must first consider how accesses will be
scheduled. The compiler component of our system is not
directly responsible for scheduling; scheduling memory accesses
as well as computation is performed by behavioral
synthesis tools such as Monet.
The scheduling algorithm used by Monet, called As Soon
As Possible, first considers which memory accesses can occur
in parallel based on comparing subscript expressions and
physical memory ids, and then rules out writes whose results
are not yet available due to dependences [10]. In performing
the physical memory id mapping, we first consider read ac-
cesses, so that we maximize the number of read operations
that can occur in parallel. The physical id mapping matches
the read access order, so that the total number of memory
reads in the loop is evenly distributed across the memories
for all arrays. As an added benefit, operands for individual
writes are fetched in parallel. Then the physical mapping for
write operations is also performed in the same order, evenly
distributing write operations across the memories.
With these properties of data layout and scheduling, at
the saturation point, we have guaranteed through choice of
unroll factor that the data fetch rate increases up to the
saturation point, but not beyond it.
Observation 2. The consumption rate is monotonically
non-decreasing as the unroll factor increases by multiples of
Psat , even beyond the saturation point.
Intuitively, as the unroll factor increases, more operator
parallelism is enabled, thus reducing the computation time
and increasing the frequency at which data can be con-
sumed. Further, based on Observation 1, as we increase the
data fetch rate, we eliminate idle cycles waiting on memory
and thus increase the consumption rate. Although the parallelism
exploited as a result of unrolling a loop may reach
a threshold, performance continues to improve slightly due
to simpler loop control.
Observation 3. Balance is monotonically nondecreasing
before the saturation point and monotonically nonincreasing
beyond the saturation point as the unroll factor increases by
multiples of Psat .
That balance is nondecreasing before the saturation point
relies on Observation 1. The data fetch rate is increasing
as fast as or faster than the data consumption rate because
memory accesses are completely independent, whereas operator
parallelism may be restricted. Beyond the saturation
point, the data fetch rate is not increasing further, and the
consumption rate is increasing at least slightly.
5.3 Algorithm Description
The algorithm is presented in Figure 2. Given the above
described monotonicity of the search space for each loop in
the nest, we start with a design at the saturation point, and
we search larger unroll factors that are multiples of Psat ,
looking for the two points between which balance crosses
over from compute bound to memory bound, or vice versa.
In fact, ignoring space constraints, we could search each loop
in the nest independently, but to converge to a near-optimal
design more rapidly, we select unroll factors based on the
data dependences, as described below.
The algorithm first selects U init , the starting point for the
search, which is in Sat. We select the most promising unroll
factors based on the dependence distance vectors. A dependence
distance vector is a vector
represents the vector di#erence between two accesses to the
same array, in terms of the loop indices in the nest [25].
Since we are starting with a design that maximizes memory
parallelism, then either the design is memory bound and we
stop the search, or it is compute bound and we continue. If
it is compute bound, then we consider unroll factors that
provide increased operator parallelism, in addition to memory
parallelism. Thus, we first look for a loop that carries
no dependence (i.e., for all d in D, di = 0); unrolled iterations of
such a loop can be executed in parallel. If such a loop i is
found, then we set the unroll factor to Sat i , assuming this
unroll factor is in Sat.
If no such loop exists, then we instead select an unroll factor
that favors loops with the largest dependence distances,
because such loops can perform in parallel computations between
dependences. The details of how our algorithm selects
the initial unroll factor in this case is beyond the scope of
this paper, but the key insight is that we unroll all loops
in the nest, with larger unroll factors for the loops carrying
larger minimum nonzero dependence distances. The monotonicity
property also applies when considering simultaneous
unrolling for multiple loops as long as unroll factors for all
loops are either increasing or decreasing.
If the initial design is space constrained, we must reduce
the unroll factor until the design size is less than the
size constraint Capacity, resulting in a suboptimal design.
The function FindLargestFit simply selects the largest unroll
factor between the baseline design corresponding to no
unrolling (called U base ), and U init , regardless of balance, because
this will maximize available parallelism.
Assuming the initial design is compute bound, the algorithm
increases the unroll factors until it reaches a design
that is (1) memory bound; (2) larger than Capacity; or,
(3) represents full unrolling of all loops in the nest (i.e.,
each ui equals the iteration count of loop i).
The function Increase(Uin) returns an unroll factor vector
Uout such that P(Uout) is the smallest candidate product greater than P(Uin).
If there are no such remaining unroll factor vectors, then
Increase returns Uin .
If either a space-constrained or memory bound design is
found, then the algorithm will select an unroll factor vector
between the last compute bound design that fit, and the
current design, approximating binary search, as follows.
The function SelectBetween(Usmall , U large ) returns the unroll
factor vector Uout such that P(Usmall) < P(Uout) < P(Ularge).
If there are no such remaining unroll factor vectors, then
SelectBetween returns Usmall , a compute bound design.
5.4 Adjusting Number of On-Chip Registers
For designs where the reuse distance is large and many
registers are required, it may become necessary to reduce the
number of data items that are stored on the FPGA. Using
fewer on-chip registers means that less reuse is exploited,
which in turn slows down the fetch rate and, to a lesser
extent, the consumption rate. The net effect is that, in the
first place, the design will be smaller and more likely to fit
on chip, and secondly, space is freed up so that it can be
used to increase the operator parallelism for designs that
are compute bound.
To adjust the number of on-chip registers, we can use loop
tiling to tile the loop nest so that the localized iteration
space within a tile matches the desired number of registers,
and exploit full register reuse within the tile.
Search Algorithm:
Input: Code /* an n-deep loop nest */
Output: <u1, ..., un> /* a vector of unroll factors */
Ucurr = Uinit ; Ufit = Ubase ; Ubig = none ; ok = false
while (!ok) do
   Unext = Ucurr
   obtain the space estimate and the balance B for Ucurr
   /* first deal with space-constrained designs */
   if (Estimate.Space > Capacity) then
      if (Ucurr == Uinit) then
         Unext = FindLargestFit(Ubase, Uinit) ; ok = true
      else
         /* Balanced solution is between earlier size and this */
         Ubig = Ucurr ; Unext = SelectBetween(Ufit, Ucurr)
   else if (B < 1) then /* memory bound */
      if (Ucurr == Uinit) then
         ok = true
      else
         /* Balanced solution is between earlier size and this */
         Ubig = Ucurr ; Unext = SelectBetween(Ufit, Ucurr)
   else if (B > 1) then /* compute bound */
      Ufit = Ucurr
      if (Ubig == none) then
         /* Have only seen compute bound so far */
         Unext = Increase(Ucurr)
      else
         /* Balanced solution is between earlier size and this */
         Unext = SelectBetween(Ucurr, Ubig)
   else /* B == 1: balanced */
      ok = true
   /* Check if no more points to search */
   if (Unext == Ucurr) then ok = true else Ucurr = Unext
return Ucurr
Figure 2: Algorithm for Design Space Exploration.
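A small illustration (ours) of the register-limiting tiling described in Section 5.4, applied to the same illustrative kernel:
/* Illustrative only: tiling the i loop with tile size TILE so that only
   TILE coefficient registers are live at a time instead of all 32.  The
   price of the smaller register set is extra memory traffic: out[j] is now
   loaded and stored once per tile per j iteration rather than once per
   j iteration. */
#define TILE 8
void fir_like_tiled(int out[64], const int coef[32], const int in[96])
{
    for (int ii = 0; ii < 32; ii += TILE) {
        int c[TILE];
        for (int k = 0; k < TILE; k++)     /* load one tile of coefficients */
            c[k] = coef[ii + k];
        for (int j = 0; j < 64; j++) {
            int acc = out[j];
            for (int i = 0; i < TILE; i++)
                acc += c[i] * in[j + ii + i];
            out[j] = acc;                  /* partial sums accumulate in out[] */
        }
    }
}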
6. EXPERIMENTAL RESULTS
This section presents experimental results that characterize
the effectiveness of the previously described design space
exploration algorithm for a set of kernel applications. We describe
the applications, the experimental methodology and
discuss the results.
6.1 Application Kernels
We demonstrate our design exploration algorithm on five
multimedia kernels, namely:
. Finite Impulse Response (FIR) filter, integer multiply-accumulate
over consecutive elements of a 64 element
array.
. Matrix Multiply (MM), integer dense matrix multiplication
of a 32-by-16 matrix by a 16-by-4 matrix.
. String Pattern Matching (PAT), character matching
operator of a string of length 16 over an input string
of length 64.
. Jacobi Iteration (JAC), 4-point stencil averaging computation
over the elements of an array.
. Sobel (SOBEL) Edge Detection (see, e.g., [22]), 3-by-3 window
Laplacian operator over an integer image.
Figure 3: Compilation and Synthesis Flow (application; compiler analyses:
data reuse, scalar replacement, unroll & jam, tiling, data layout, array
renaming; unroll factor determination; transformed SUIF; behavioral VHDL;
Monet behavioral synthesis; metrics: area and number of clock cycles;
balance calculation; balanced-design decision).
Each application is written as a standard C program where
the computation is a single loop nest. There are no pragmas,
annotations or language extensions describing the hardware
implementation.
6.2 Methodology
We applied our prototype compilation and synthesis system
to analyze and determine the best unrolling factor for
a balanced hardware implementation. Figure 3 depicts the
design flow used for these experiments. First, the code is
compiled into the SUIF format along with the application
of standard compiler optimizations. Next, our design space
exploration algorithm iteratively determines which loops in
the loop nest should be unrolled and by how much. To
make this determination the compiler starts with a given
unrolling factor and applies a sequence of transformations
as described in Sections 4 and 5. Next, the compiler translates
the SUIF code resulting from the application of the
selected set of transformations to behavioral VHDL using
a tool called SUIF2VHDL. The compiler next invokes the
Mentor Graphics' Monet TM behavioral synthesis tool to obtain
space and performance estimates for the implementation
of the behavioral specification. In this process, the compiler
currently fixes the clock period to be 40ns. The Monet TM
synthesis estimation yields the amount of area used by the
implementation and the number of clock cycles required to
execute to completion the computation in the behavioral
specification. Given this data, the compiler next computes
the balance metric.
This system is fully automated. The implementation of
the compiler passes specific to this experiment, namely data
reuse analysis, scalar replacement, unroll&jam, loop peel-
ing, and customized data layout, constitutes approximately
14,500 lines of C++ source code. The algorithm executed
in less than 5 minutes for each application, but to fully synthesize
each design would require an additional couple of
hours.
6.3 Results
In this section, we present results for the five previously
described kernels in Figures 4 through 10. The graphs show
a large number of points in the design space, substantially
more than are searched by our algorithm, to highlight the
relationship between unroll factors and metrics of interest.
The first set of results in Figures 4 through 7 plots bal-
ance, execution cycles and design area in the target FPGA
as a function of unroll factors for the inner and outer loops
of FIR and MM. Although MM is a 3-deep loop nest, we
only consider unroll factors for the two outermost loops,
since through loop-invariant code motion the compiler has
eliminated all memory accesses in the innermost loop. The
graphs in the first two columns have as their x-axis unroll
factors for the inner loop, and each curve represents a specific
unroll factor for the outer loop.
For FIR and MM, we have plotted the results for pipelined
and non-pipelined memory accesses to observe the impact of
memory access costs on the balance metric and consequently
in the selected designs. In all plots, a squared box indicates
the design selected by our search algorithm. For pipelined
memory accesses, we assume a read and write latency of
1 cycle. For non-pipelined memory accesses, we assume a
read latency of 7 cycles and a write latency of 3 cycles,
which are the latencies for the Annapolis WildStar TM [13]
board, a target platform for this work. In practice, memory
latency is somewhere in between these two as some but not
all memory accesses can be fully pipelined. In all results we
are assuming 4 memories, which is the number of external
memories that are connected to each of the FPGAs in the
Annapolis WildStar TM board.
In these plots, a design is balanced for an unrolling factor
when the y-axis value is 1.0. Data points above the y-axis
value of 1.0 indicate compute-bound designs whereas points
with the y-axis value below 1.0 indicate memory-bound de-
signs. A compute-bound design suggests that more resources
should be devoted to speeding up the computation component
of the design, typically by unrolling and consuming
more resources for computation. A memory-bound design
suggests that less resources should be devoted to computation
as the functional units that implement the computation
are idle waiting for data. The design area graphs represent
space consumed (using a log scale) on the target Xilinx Virtex
1000 FPGAs for each of the unrolling factors. A vertical
line indicates the maximum device capacity. All designs to
the right side of this line are therefore unrealizable.
With pipelined memory accesses, there is a trend towards
compute-bound designs due to low memory latency. Without
pipelining, memory latency becomes more of a bottle-neck
leading, in the case of FIR, to designs that are always
memory bound, while the non-pipelined MM exhibits
compute-bound and balanced designs.
The second set of results, in Figures 8 through 10, show
performance of the remaining three applications, JAC, PAT
and SOBEL. In these figures, we present, as before, balance,
cycles and area as a function of unroll factors, but only for
pipelined memory accesses due to space limitations.
Figure 4: Balance, Execution Time and Area for Non-pipelined FIR.
Figure 5: Balance, Execution Cycles and Area for Pipelined FIR.
Figure 6: Balance, Execution Cycles and Area for Non-pipelined MM.
Figure 7: Balance, Execution Cycles and Area for Pipelined MM.
Figure 8: Balance, Execution Time and Area for Pipelined JAC.
Figure 9: Balance, Execution Cycles and Area for Pipelined PAT.
Figure 10: Balance, Execution Time and Area for Pipelined SOBEL.
(Each of Figures 4 through 10 plots (a) balance, (b) execution cycles, and (c) design area
against the inner loop unroll factor, with one curve per outer loop unroll factor and the
design selected by the algorithm marked.)
We make several observations about the full results. First,
we see that Balance follows the monotonicity properties described
in Observation 3, increasing until it reaches a saturation
point, and then decreasing. The execution time is
also monotonically nonincreasing, related to Observation 2.
In all programs, our algorithm selects a design that is close
to best in terms of performance, but uses relatively small
unroll factors. Among the designs with comparable perfor-
mance, in all cases our algorithm selected the design that
consumes the smallest amount of space. As a result, we
have shown that our approach meets the optimization goals
set forth in Section 3. In most cases, the most balanced
design is selected by the algorithm. When a less balanced
design is selected, it is either because the more balanced design
is before a saturation point (as for non-pipelined FIR),
or is too large to fit on the FPGA (as for pipelined MM).
Table
2 presents the speedup results of the selected design
for each kernel as compared to the baseline, for both
pipelined and non-pipelined designs. The baseline is the
loop nest with no unrolling (unroll factor is 1 for all loops)
but including all other applicable code transformations as
described in Section 4.
Program   Non-Pipelined   Pipelined
FIR             7.67        17.26
MM              4.55        13.36
JAC             3.87         5.56
PAT             7.53        34.61
Table 2: Speedup on a single FPGA.
Although in these graphs we present a very large number
of design points, the algorithm searches only a tiny fraction
of those displayed. Instead, the algorithm uses the pruning
heuristics based on the saturation point and balance,
as described in Section 5. This reveals the effectiveness of
the algorithm as it finds the best design point having only
explored a small fraction, only 0.3%, of the design space consisting
of all possible unroll factors for each loop. For larger
design spaces, we expect the number of points searched relative
to the size to be even smaller.
6.4 Accuracy of Estimates
To speed up design space exploration, our approach relies
on estimates from behavioral synthesis rather than going
through the lengthy process of fully synthesizing the design,
which can be anywhere from 10 to 10,000 times slower for
this set of designs. To determine the gap between the behavioral
synthesis estimates and fully synthesized designs,
we ran logic synthesis and place-and-route to derive implementations
for a few selected design points in the design
space for each of the applications. We synthesized the base-line
design, the selected designs for both pipelined and non-pipelined
versions, and a few additional unroll factors beyond
the selected design.
In all cases, the number of clock cycles remains the same
from behavioral synthesis to implemented design. However,
the target clock rate can degrade for larger unroll factors due
to increased routing complexity. Similarly, space can also
increase, slightly more than linearly with the unroll factors.
These factors, while present in the output of logic synthesis
and place-and-route, were negligible for most of the designs
selected by our algorithm. Clock rates degraded by less
than 10% for almost all the selected designs as compared
with the baseline, and the speedups in terms of reduction in
clock cycles more than made up for this. In the case of FIR
with pipelining, the clock degraded by 30%, but it met the
target clock of 40ns, and because the speedup was 17X, the
performance improvement was still significant. The space
increases were sublinear as compared to the unroll factors,
but large designs tended to be more space constrained
than suggested by the output of behavioral synthesis.
The very large designs that appear to have the highest
performance according to behavioral synthesis estimates
show much more significant degradations in clock and increases
in space. In these cases, performance would be worse
than designs with smaller unroll factors. Our approach does
not suffer from this potential problem because we favor small
unroll factors, and only increase the unrolling factor when
there is a significant reduction in execution cycles due to
memory parallelism or instruction-level parallelism.
For this set of applications, these estimation discrepan-
cies, while not negligible, never influenced the selected de-
sign. While this accuracy issue is clearly orthogonal to the
design space algorithm described in this paper, we believe
that estimation tools will improve their ability to deliver accurate
estimates given the growing pressures for accuracy in
simulation for increasingly larger designs.
7. RELATED WORK
In this section we discuss related work in the areas of automatic
synthesis of hardware circuits from high-level language
constructs and design space exploration using high-level
loop transformations.
7.1 Synthesizing High-Level Constructs
The gap between hardware description languages such
as VHDL or Verilog and applications in high-level imperative
programming languages prompted researchers to develop
hardware-oriented high-level languages. These new
languages would allow programmers to migrate to configurable
architectures without having to learn a radically new
programming paradigm while retaining some level of control
about the hardware mapping and synthesis process.
One of the first efforts in this direction was the Handel-C
parallel programming language. Handel-C is heavily
influenced by the OCCAM CSP-like parallel language but
has a C-like syntax. The mapping from Handel-C to hardware
is compositional where constructs, such as for and
while loops, are directly mapped to predefined template
hardware structures [20].
Other researchers have developed approaches to mapping
applications to their own reconfigurable architectures that
are not FPGAs. These efforts, e.g., the RaPiD [7] reconfigurable
architecture and the PipeRench [12], have developed
an explicitly parallel programming language and/or
developed a compilation and synthesis flow tailored to the
features of their architecture.
The Cameron research project is a system that compiles
programs written in a single-assignment subset of C called
SA-C into dataflow graphs and then synthesizable VHDL [23].
The SA-C language includes reduction and windowing operators
for two-dimensional array variables which can be
combined with doall constructs to explicitly expose parallel
operations in the computation. Like in our approach,
the SA-C compiler includes loop-level transformations such
as loop unrolling and tiling, particularly when windowing
operators are present in a loop. However, the application
of these transformations is controlled by pragmas, and is
not automatic. Cameron's estimation approach builds on
their own internal data-flow representation using curve fitting
techniques [17].
Several other researchers have developed tools that map
computations expressed in a sequential imperative programming
language such as C to reconfigurable custom computing
architectures. Weinhardt [24] describes a set of program
transformations for the pipelined execution of loops
with loop-carried dependences onto custom machines using
a pipeline control unit and an approach similar to ours. He
also recognizes the benefit of data reuse but does not present
a compiler algorithm.
The two projects most closely related to ours, the Nimble
compiler [19] and work by Babb et al. [2], map applications
in C to FPGAs, but do not perform design space exploration.
They also do not rely on behavioral synthesis, but in fact
the compiler replaces most of the function of synthesis tools.
7.2 Design Space Exploration
In this discussion, we focus only on related work that has
attempted to use loop transformations to explore a wide
design space. Other work has addressed more general issues
such as finding a suitable architecture (either reconfigurable
or not) for a particular set of applications [1].
In the context of behavioral VHDL [16], current tools such
as Monet TM [14] allow the programmer to control the application
of loop unrolling for loops with constant bounds.
The programmer must first specify the application in behavioral
VHDL, linearize all multi-dimensional arrays, and then select
the order in which the loops must execute. Next the
programmer must manually determine the exact unroll factor
for each of the loops and determine how the unrolling is
going to affect the required bandwidth and the computation.
Given the effort and the interaction between the transformations
and the data layout options available, this approach to design
space exploration is extremely awkward and error-prone.
Other researchers have also recognized the value of exploiting
loop-level transformations in the mapping of regular
loop computations to FPGA-based architectures. Der-
rien/Rajopadhye [8] describe a tiling strategy for doubly
nested loops. They model performance analytically and select
a tile size that minimizes the iteration's execution time.
7.3 Discussion
The research presented in this paper differs from the efforts
mentioned above in several respects. First the focus of
this research is in developing an algorithm that can explore
a wide number of design points, rather than selecting a single
implementation. Second, the proposed algorithm takes
as input a sequential application description and does not
require the programmer to control the compiler's transfor-
mations. Third, the proposed algorithm uses high-level compiler
analysis and estimation techniques to guide the application
of the transformations as well as evaluate the various
design points. Our algorithm supports multi-dimensional
array variables absent in previous analyses for the mapping
of loop computations to FPGAs. Finally, we use a commercially
available behavioral synthesis tool to complement
the parallelizing compiler techniques rather than creating an
architecture-specific synthesis flow that partially replicates
the functionality of existing commercial tools. Behavioral
synthesis allows the design space exploration to extract more
accurate performance metrics (time and area used) rather
than relying on a compiler-derived performance model. Our
approach greatly expands the capability of behavioral synthesis
tools through more precise program analysis.
8. CONCLUSION
We have described a compiler algorithm that balances
computation and memory access rates to guide hardware
design space exploration for FPGA-based systems. The experimental
results for five multimedia kernels reveal the algorithm
quickly (in less than five minutes, searching less
than 0.3% of the search space) derives a design that closely
matches the best performance within the design space and
is smaller than other designs with comparable performance.
This work addresses the growing need for raising the level
of abstraction in hardware design to simplify the design pro-
cess. Through combining strengths of parallelizing compiler
and behavioral synthesis, our system automatically performs
transformations typically applied manually by hardware de-
signers, and rapidly explores a very large design space. As
technology increases the complexity of devices, designs will
consequently become more complex, and further automation
of the design process will become crucial.
Acknowledgements. This research has been supported
by DARPA contract # F30602-98-2-0113. The authors wish
to thank contributors to the DEFACTO project, upon which
this work is based, in particular Joonseok Park, Heidi Ziegler,
Yoon-Ju Lee, and Brian Richards.
9. REFERENCES
Parallelizing Applications into Silicon.
A compiler-managed memory system for raw machines
Improving register allocation for subscripted variables.
Improving the ratio of memory operations to floating-point operations in loops
Altera Corp.
Specifying and compiling applications for RaPiD.
Loop tiling for reconfigurable accelerators.
Bridging the gap between compilation and synthesis in the DEFACTO system.
Understanding Behavioral Synthesis: A Practical Guide to High-Level Design
Evaluation of the Streams-C C-to-FPGA compiler: an applications perspective
A coprocessor for streaming multimedia acceleration.
Annapolis MicroSystems WildStar tm manual
Mentor Graphics Monet TM user's manual (release r42).
XILINX Virtex-II 1.5V FPGA data sheet
Behavioral Synthesis.
Fast area estimation to support compiler optimizations in FPGA-based reconfigurable systems
Structured hardware compilation of parallel programs.
Compiling OCCAM into FPGAs.
Digital Signal Processing: Principles
An automated process for compiling dataflow graphs into reconfigurable hardware.
Compilation and pipeline synthesis for reconfigurable architectures.
Optimizing Supercompilers for Supercomputers.
| design space exploration;loop transformations;reuse analysis;data dependence analysis
513071 | Stack and Queue Integrity on Hostile Platforms. | When computationally intensive tasks have to be carried out on trusted, but limited, platforms such as smart cards, it becomes necessary to compensate for the limited resources (memory, CPU speed) by off-loading implementations of data structures on to an available (but insecure, untrusted) fast coprocessor. However, data structures, such as stacks, queues, RAMs, and hash tables, can be corrupted (and made to behave incorrectly) by a potentially hostile implementation platform or by an adversary knowing or choosing data structure operations. This paper examines approaches that can detect violations of datastructure invariants, while placing limited demands on the resources of the secure computing platform. | Introduction
Smart cards, set-top boxes, consumer electronics
and other forms of trusted hardware [2, 3, 16] have
been available (or are being proposed [1]) for applications
such as electronic commerce. We shall refer to
these devices as T . These devices are typically composed
of a circuit card encased in epoxy or a similar
substance, that has been strewn with various electronic
tamper-detection devices. The physical design
constraints on these devices include heat dissipation
difficulties, size (often a credit card or PCMCIA format),
very low power consumption requirements, etc.
This leads (particularly in the credit card format) to
devices with exceedingly low data transfer rates, memory,
and computing resources. These resources are not
a limitation in a few application areas such as cash
cards, identity cards, etc. However, for general purpose
multi-application cards, these resource limitations are
significant.
We have been exploring the use of trusted hardware
in software engineering [7, 6] (see Section 6). In this
context, it becomes necessary to store large amounts of
data in the form of various data structures (stacks,
queues, arrays, dynamic/static symbol tables, various
types of trees, etc.). Such data structures cannot fit
into the limited resources on the T device. However,
T devices are usually used in concert with a larger,
more powerful (and presumably adverse) host computer
(H). Such data structures can be stored on H:
But how can their integrity be assured? Data structures
have invariants; for each instance of a data struc-
ture, these invariants can be stored in a "digested"
form as signatures within the T device. Data structure
operations are performed in conjunction with modifications
on the signatures to maintain a "digested"
form of the invariants.
This approach differs from earlier work [4, 5]. Our
protocols are much simpler. We use O(1) memory in
the trusted computer, and transfer only O(1) amount
of data for each push and pop operation in an on-line
mode. Previous approaches used O(log n) trusted
memory and O(log n) data transfer for each operation
(where n is the size of the stack or queue). However,
previous work [4, 5] assumed extremely powerful
adversaries (information-theoretic bounds). We
follow this line of work, but with quite different
techniques that are applicable with computationally
bounded adversaries.
This paper is organized as follows. In Section 2, we
present goals, threats, and related work. Section 3 describes
our protocol for stacks and an evaluation of it.
In Section 4, we describe our protocol for queues and
an evaluation of it. Section 5 describes previous work
to handle random access memory and discusses the
relationship of stacks and queues to this work. Section
7 describes extensions to our protocols. Section
6 describes applications using our protocols. Section
8 presents some concluding remarks.
Background
In this section we discuss the goals of the work,
threats, and related work.
The goal of this work is to maintain integrity of
stacks and queues maintained by a potentially hostile
platform. Also, we wish to do this in a manner that
is extensible for other desired properties described in
Section 7.
We need to make some assumptions about the adversary
and the environment. So that we may focus
our description on the security of the data structure
application, we assume the channel between T and
H is authenticated. We allow for an adversary that
may learn information on this channel. We assume
H is dishonest and need not follow the protocol. We
assume the adversary can also submit high level commands
to T . Thus, the data structure protocols need
to be secure against chosen and known attacks. In this
context, a chosen attack is one where an adversary has
complete control over data and operations to the data
structure. A known attack is one where the attacker
is assumed to know operations and data. We assume
a computationally bounded adversary who is limited
in the number of operations that can be submitted
to the data structure, the amount of information that
may be stored, and the number of operations required
to process data and/or fill storage with data.
We assume the adversary may try to replay data
from other instances of data structures. These replays
may be due to multiple concurrent runs of the protocol
and should not lead to a vulnerability. We assume
the adversary may try to compose new messages using
message fragments from other sessions.
Some issues are beyond the scope of this paper.
This paper does not address how to recover from corrupt
or lost data. Thus, we do not attempt to replicate
data structures and operations. The sharing of data
structures by multiple secure processors is also beyond
the scope of this paper.
2.1 Related Work
This work follows the memory protection investigations
of [4, 5], which considered the problem of verifying
the correctness of a large memory of size n bits
maintained by an all-powerful adversary P , subject to
update requests from originator V with only a limited
amount of trusted memory. (Most of these schemes
are based on Merkle signature trees [12] which is described
in further detail in Section 5.) It is shown
that P can fool V with an incorrect memory whenever
V has access to less than log(n) bits of trusted
memory. They also describe implementations of stacks
and queues [5]. The stack implementation uses log(H)
memory accesses for operations on a stack of height
H. Our approach also relates to, but differs from the
work of Lamport [10]. His one-time password scheme
precomputes a chain of hashes on a secret w, yielding the
sequence w, F(w), F(F(w)), ..., F^t(w). The password
for the i-th identification session, 1 ≤ i ≤ t, is
defined to be F^(t-i)(w). In Lamport's scheme,
the chain decreases with each usage. We also use a
chain of "digests" (signatures and/or hashes) in our
protocols; however, our scheme computes the chain
differently; in addition, our chain grows and shrinks
based on the change in state of the data structure.
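As an illustration of the kind of hash chain Lamport's scheme relies on, the following Python sketch precomputes the chain and indicates how a verifier would check each revealed password; the function names and parameter values are illustrative assumptions, not part of [10].

```python
import hashlib

def F(b: bytes) -> bytes:
    # One-way function used to build the chain.
    return hashlib.sha256(b).digest()

def lamport_chain(w: bytes, t: int):
    """Precompute w, F(w), ..., F^t(w); the password for session i is F^(t-i)(w)."""
    chain = [w]
    for _ in range(t):
        chain.append(F(chain[-1]))
    return chain

chain = lamport_chain(b"secret", t=100)
# The verifier initially stores chain[100]; in session i the prover reveals
# chain[100 - i], which the verifier checks by hashing it once and comparing.
```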
In our approach, we expect that P is a constant factor
faster than V. We use only a constant number
of bits of trusted memory, irrespective of the size of
the stack and the queue. We also perform only a constant
number of untrusted memory operations for each
stack push and pop. We assume a "signature scheme"
that is collision/computation resistant and 2nd pre-image
resistant.
Goldreich [8] and Ostrovsky [14] give solutions for
oblivious machines. A machine is oblivious if the sequence
in which it accesses memory locations is equivalent
for any two programs with the same running
time. This work solves a different problem yet relies
on techniques for protecting the integrity of memory
(e.g., Ostrovsky uses sequence numbers for protecting
RAMs), but does not address methods for protecting
the integrity of stacks and queues.
3 Stacks
We begin by defining a stack by its interface (the operations
new, push, pop, and delete) and its invariants. We
propose to implement this using a secure processor T
(for trusted) and an insecure processor H
(for hostile), using the following algorithm. Actions
taken by T are shown so prefixed, others in italics. r, r', ...
are random numbers generated by T; σ(x) is a signature
on datum x by the trusted processor. In the basic
protocol (i.e., where there is only one trusted host),
σ can represent a cryptographic hash function (either
keyed or unkeyed) or a public-key based digital
signature. We assume σ(x) is collision/computation
resistant and 2nd pre-image resistant.
We assume the probability of a signature collision
can be made arbitrarily small by changing the parameters
of the signature scheme. We also assume that
there is a good source of random numbers; such functionality
is starting to become available from hardware
devices. If σ(x) is keyed, a separate key is generated
for each instance. The key is destroyed upon a delete
and the key never leaves T .
We assume an authenticated channel with message
stream integrity between T and H. Also, entries on
the stack can be simple strings. Arrows indicate direction
of transmission. Below, we use σ' to represent a new
value for σ. Data structure operation
requests from T to H are shown in double quotes
(e.g., "push ...") followed by the relevant operands.
For new():
T : selects random r_init ∈_R {0,1}^l
T → H : "new stack", σ(r_init)
To initialize a stack, T generates a new random
r_init, signs it and sends it off to H with the "new
stack" command. Henceforth, this signature σ(r_init)
is used by T to identify the stack, and it is retained by
T as the current stack signature.
For push(x, S):
T → H : "push", x, σ_top
H : (stores the above two on top of stack)
T : σ_top := σ' = σ(x ‖ σ_top)
Here, a "push" request is sent to H along with the
current stack signature and the new value. The stack
signature is updated by signing the string formed by
appending the pushed value to the current signature.
This signature is always retained in T, and we refer
to it below as σ_top.
For pop(S):
T → H : "pop"
H → T : x, σ_prev (from top of stack), or
σ(r_init) and "error" if stack underflow.
T : if not "error", computes σ(x ‖ σ_prev)
and compares it with the stored σ_top;
if "error", compares the returned value
with σ(r_init);
T : if the above comparisons fail, terminate with
error; otherwise set σ_top := σ_prev.

Figure 1: A resource-limited, secure implementation
of stacks.
Upon a "pop", the H returns the value ostensibly at
the top of the stack (x), and the signature of the
stack (oe top ) when that value was pushed. The stored,
current stack signature is recomputed and checked as
shown above. T verifies an empty stack claim by re-computing
using the local copy of r init and
comparing it to the returned copy of oe.
For
The "delete" command resets the stack protocol; associated
signatures held by T are discarded.
The stack protocol is illustrated in Figure 1. The
arrows show the inputs to the computed signature.
There is always a signature of the stack maintained in
the T device. Prior to executing the push, the signature
σ_i of the stack is in the T device; when an item x_i
needs to be pushed on, as the i'th member of the stack,
T computes a new signature σ_{i+1} as shown above and
in the figure. Then the new item x_i and the old signature
σ_i are given to the H stack implementation, with
a request to execute a push. Normally, a push takes
one argument, but since we are using a fixed length
signature, these two arguments (the new item and the
old signature) can just be represented as a single bit
string. The inclusion of the signature adds a constant
amount of external storage and transmission overhead
for each operation. The new signature σ_{i+1} is retained
in the T device's memory as defense against tampering
by H. Thus, when a pop command is issued, H is
expected to return the top item x, and the signature of the
rest of the stack, σ_i. Then the original signature σ_{i+1} is
recomputed and checked against the value stored in
T. It is infeasible for H to spoof T by forging the
values of x_i or σ_i so long as T retains σ_{i+1}. Thus the
stack invariants are preserved. This is argued in more
detail in the following section.
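To make this bookkeeping concrete, the following Python sketch implements the trusted-side state of the stack protocol, with HMAC-SHA256 standing in for the keyed signature σ and a plain in-memory list standing in for the untrusted host H; these instantiations, and all identifiers below, are illustrative assumptions rather than the implementation discussed in this paper.

```python
import hmac, hashlib, os

def sig(key: bytes, data: bytes) -> bytes:
    # sigma(x): a keyed signature, instantiated here with HMAC-SHA256.
    return hmac.new(key, data, hashlib.sha256).digest()

class TrustedStack:
    """Trusted-side state: one key and one signature, O(1) memory."""
    def __init__(self, host):
        self.host = host                      # untrusted H (here: a plain list)
        self.key = os.urandom(16)             # per-instance signing key
        r_init = os.urandom(16)               # random stack identifier
        self.sigma_init = sig(self.key, r_init)
        self.sigma_top = self.sigma_init      # current stack digest

    def push(self, x: bytes):
        self.host.append((x, self.sigma_top))            # H stores (x, old digest)
        self.sigma_top = sig(self.key, x + self.sigma_top)

    def pop(self) -> bytes:
        if not self.host:                                # H claims underflow
            assert self.sigma_top == self.sigma_init, "bogus underflow claim"
            raise IndexError("stack underflow")
        x, sigma_prev = self.host.pop()                  # H returns the top pair
        if sig(self.key, x + sigma_prev) != self.sigma_top:
            raise ValueError("integrity violation detected")
        self.sigma_top = sigma_prev
        return x

# Example: tampering by H is detected.
s = TrustedStack(host=[])
s.push(b"a"); s.push(b"b")
s.host[-1] = (b"evil", s.host[-1][1])   # H swaps the top value
try:
    s.pop()
except ValueError as e:
    print(e)                             # integrity violation detected
```

The last few lines show that a value substituted by H is rejected because the recomputed signature no longer matches the digest retained in T.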
3.1 Evaluation
We now argue that our stack integrity checking protocols
work provided the signature schemes we use are
collision resistant and 2nd pre-image resistant.
Definition 1 An ideal stack is one which works correctly
according to the standard specification of a stack.
Specifications of a stack can be found on page 2, and
in [9] (page 170). Non-ideal stacks only show their
flaws when a pop is executed that returns a value other
than what should be on the top of an ideal stack.
Definition 2 We define an incorrect stack as
one which, after some series of operations Ω = o_1, ..., o_n,
returns a value different from
the one that would be on the top of an ideal stack after
the operations Ω.
Definition 3 A protocol for checking stack implementations
is secure if an incorrect stack is always
detected whenever it returns the wrong value on a pop.
We can now present the main claim of correctness of
our stack protocol.
Theorem 1 Our stack protocol is secure, as long as
the signature scheme on which it is based is collision
resistant and 2nd pre-image resistant.
To prove this theorem, we first define the notion of
a correct digest of a stack. Next, we argue that our
protocols ensure that T, after any series of operations
Ω = o_1, ..., o_n, will have a correct digest of the ideal stack after
Ω. We then argue that if T has the correct
digest of the ideal stack, then an incorrect stack
operation by H will be detected.
Definition 4 A correct digest of a stack with initializing
value s_0 and items s_1, ..., s_n is the signature σ_n defined as follows:
σ_0 = σ(s_0), and σ_i = σ(s_i ‖ σ_{i-1}) for 1 ≤ i ≤ n.
We are now ready to state the main claim about the
digest maintained by our protocol.
Claim 1 For any series of operations Ω = o_1, ..., o_n,
our protocol always maintains the correct digest of an
ideal stack in T, providing a) T operates correctly according
to our protocol, and b) the underlying signature
scheme is collision resistant and 2nd pre-image
resistant.
This is shown by induction, assuming that the T works
according to the stack protocol given above. The correctness
of the digest's initial state is trivial. Now suppose
that we have a correct digest σ_{i-1} of the ideal stack after
the first i-1 operations. There are two significant
cases: o_i may be a push or a pop.
Push For push(x), the T will compute a signature
thus: σ_i = σ(x ‖ σ_{i-1}),
which is correct by definition (see description of
"push" in Protocol 1) and by inductive assumption.
Pop Upon a pop, the H is expected to return two
values: the item at the top of the stack x; and
the signature of the rest of the stack, σ_r. T checks
that the following holds: σ(x ‖ σ_r) = σ_{i-1}.
Since we assume a collision resistant and 2nd pre-image
resistant signature scheme, it would be infeasible
for H to find other values of x, σ_r so as to
satisfy the above constraint. So x and σ_r are indeed
the same values that were used to compute
σ_{i-1} originally. This means that if σ_{i-1} correctly
digests the ideal stack, then so does σ_r after the
pop is executed.
The operation may also be a delete or a new; in either
case, the effect will be either to create a new, inde-
pendent, (correctly initialized) digest of a new stack,
and/or to terminate the current stack instance (even
if there are elements in it).
Our final claim is shown below; when this claim is
established, the theorem is proven.
Claim 2 If the T always stores the correct digest of
the ideal stack after every sequence of operations
Ω = o_1, ..., o_n, then an incorrect stack operation can always
be detected, provided the underlying signature scheme
is collision resistant and 2nd pre-image resistant.
Without loss of generality, assume that after the
operations Ω above, we execute a pop. Assume the T
correctly digests the ideal stack after the
operations Ω into σ_n, which was (during some operation)
computed as σ_n = σ(x ‖ σ_r),
where x is the item currently on the top of the stack.
Now, because of the collision resistance of the signature
scheme, H cannot feasibly substitute another x or
σ_r. Therefore, an incorrect stack operation will be
detected via a bad signature.
4 Queues
Queues are implemented by keeping two items in
trusted memory - the signature of the entire queue,
including the items that used to be at the rear of the
queue, and a signature of all the items that have been
removed from the queue.
We begin with a brief description of the interface
of a queue, Queue⟨T⟩, which supports the operations
new, nq (enqueue), dq (dequeue), and delete.
Axiomatization is not provided; it can be found in
standard texts on formal specification, such as Guttag
and Horning [9].
messages between T and H are sent over an authenticated
channel having message stream integrity, bit-string
entries on the queue, and (for simplicity) a new
signing key for each queue instance.
For new(Q):
T : selects random r_init ∈_R {0,1}^l
T → H : "new queue", σ(r_init)
As in the case of the stack, T generates a new random
r_init, signs it, and sends it to H as an identifier
to initialize a new queue; T retains σ_q = σ_r = σ(r_init).

Figure 2: A resource-limited, secure implementation
of queues.

For nq(Q, x):
T : σ'_q := σ(x ‖ σ_q)
T → H : "nq", x, σ'_q
T : σ_q := σ'_q
On an enqueue, T computes a new signature by signing
the string formed by appending the new item with
the current signature of the entire queue. This signature
is sent to H along with the current item; this
signature also updates the current queue signature after
the enqueue operation.
For dq(Q):
T → H : "dq"
H → T : x, σ_front
T : computes σ'_r = σ(x ‖ σ_r) and checks that σ'_r = σ_front;
if so, approves the operation and sets σ_r := σ_front.
NB: If H says "queue empty", ensure that σ_q = σ_r.
When the H gets a dequeue request, it returns the
item ostensibly at the front of the queue, and a signature
σ_front. T appends the returned item to its stored
σ_r, signs the result, and compares the signature with
σ_front. If the signatures match, it approves the operation
and updates σ_r to σ_front.
The queue protocol is illustrated in Figure 2. T
retains two signatures: σ_q, which is a digest of the
entire queue, and σ_r, which is a digest of all the items
that have been removed. With each enqueue request,
the T updates σ_q to include the item in the queue
digest. The H is asked to store the item and the current
digest. We assume σ(x) represents a keyed cryptographic
hash or public key signature. Upon a dequeue
request, H is asked to return the item at the
front of the queue x, and the associated signed digest
σ_front. T now uses the σ_r value stored in
trusted memory to authenticate the dequeued value:
this signature represents all the items that have ever
been removed from the queue. T compares the returned
signature σ_front to the result of signing the string obtained by
appending the item x claimed to be at the front of the
queue to the old signature, σ_r. The following section
examines the correctness of our protocol more closely.
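The two-digest scheme can be sketched in the same style as the stack protocol; again, HMAC-SHA256 and the in-memory deque standing in for H are illustrative assumptions, not the paper's implementation.

```python
import hmac, hashlib, os
from collections import deque

def sig(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

class TrustedQueue:
    """Trusted side keeps two digests: sigma_q (all items ever enqueued)
    and sigma_r (all items removed so far)."""
    def __init__(self, host: deque):
        self.host = host
        self.key = os.urandom(16)
        self.sigma_q = self.sigma_r = sig(self.key, os.urandom(16))  # sigma(r_init)

    def nq(self, x: bytes):
        self.sigma_q = sig(self.key, x + self.sigma_q)   # new digest
        self.host.append((x, self.sigma_q))              # H stores (x, new digest)

    def dq(self) -> bytes:
        if not self.host:
            assert self.sigma_q == self.sigma_r, "bogus empty-queue claim"
            raise IndexError("queue empty")
        x, sigma_front = self.host.popleft()
        if sig(self.key, x + self.sigma_r) != sigma_front:
            raise ValueError("integrity violation detected")
        self.sigma_r = sigma_front
        return x
```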
4.1 Evaluation
We now argue that the queue protocol detects incorrect
operation of queues by H.
Definition 5 An ideal queue is one which works according
to the usual FIFO discipline.
As before, incorrect queues are ones which, (after
some series of operations) on a dequeue, return an
item other than the one which would be at the head
of the ideal queue after the same set of operations.
Definition 6 We define an incorrect queue as
one which, after some series of operations Ω = o_1, ..., o_n,
returns a value different
from the one that would be at the head of an ideal
queue after the operations Ω.
Definition 7 A protocol for checking queue implementations
is secure if an incorrect queue is always
detected whenever it returns a wrong value on a dequeue.
Here is the main claim of correctness of our queue
protocol.
Theorem 2 Our queue protocol is secure, as long as
the signature scheme on which it is based is collision
resistant and 2nd pre-image resistant.
We use the notion of a correct digest here as well;
however, in the case of the queue, there are two pieces
to the digest: σ_q, which represents the entire "historical"
queue, including all items that have ever been enqueued,
and σ_r, which represents just the items
that have been dequeued (see Figure 2).
Definition 8 A correct digest of a queue with initializing
value r_0, items q_1, ..., q_n that are currently
on the queue (with q_1 being the item to be next dequeued),
and items r_1, ..., r_m that have been removed
(r_1 the first item removed, r_m the item most recently
removed) consists of two signatures, σ_q, σ_r, which are
computed as follows:
σ_r = σ(r_m ‖ σ(r_{m-1} ‖ ... σ(r_1 ‖ σ(r_0)) ...)),
σ_q = σ(q_n ‖ σ(q_{n-1} ‖ ... σ(q_1 ‖ σ_r) ...)).
Claim 3 For any series of operations Ω = o_1, ..., o_n,
our protocol always maintains the correct digest of an
ideal queue in T, providing a) T operates correctly
according to our protocol, and b) the underlying signature
scheme is collision resistant and 2nd pre-image
resistant.
We show this by induction: at initialization, the
claim holds trivially. Now consider each queue operation o_i.
Enqueue On an enqueue(x) request, σ_r is unchanged
(by Protocol 2); this is specified by Definition
8. The σ_q is computed as follows (again,
as specified by Definition 8): σ'_q = σ(x ‖ σ_q).
Dequeue On a dequeue(x) request, σ_q is unchanged
(by Protocol 2), as specified by Definition
8 above. The σ_r is updated as described in
Protocol 2. The H returns a signature σ'_r and an
item x. The following equality is checked:
σ(x ‖ σ_r) = σ'_r;
if the equality holds, then (assuming that the signature
scheme that is used has the desired properties)
setting σ_r to σ'_r will correctly update σ_r.
Note that H does not compute σ'_r (since the signing
key is secret and we assume collision resistant
and 2nd pre-image resistant signature schemes).
This value is given to H at the time x is enqueued:
if H returns that value, and it checks out, the correctness
of the digest is preserved. Note that the
way signatures are used here is different from the
stack protocol. In the stack protocol, the H returns
an item and an old signature (i.e., inputs
to the signature algorithm) which when signed
should yield a value identical to the digest held in
T. In the case of queues, the H should return an
item and an output signature; the item and the
σ_r digest, when signed together, should match the
output signature returned by H.
The operations delete and new will create a new, independent,
(correctly initialized) digest of a new queue,
and/or terminate the current queue instance.
When the following claim is established, the theorem
is proven.
Claim 4 If the T always stores the correct digest of
the ideal queue after every sequence of operations
Ω = o_1, ..., o_n, then an incorrect queue can always be
detected, provided the underlying signature scheme is
collision resistant and 2nd pre-image resistant.
Assume that after the operations Ω above, we execute
a dequeue. Assume the T correctly digests the
ideal queue after the operations Ω into σ_q and σ_r. Now
on the dequeue, H returns an item x and a new signature
σ'_r, which is verified as σ(x ‖ σ_r) = σ'_r,
where x is the item currently at the head of the queue.
Now, because of the collision resistance of the signature
scheme, and the cryptographic assumption that H is
unable to compute the signature, H cannot feasibly
substitute another x or σ'_r. Therefore, an incorrect
queue operation will be detected via a bad signature.
5 Schemes for RAM and Trees
There have been several schemes proposed in
the literature to handle a random access memory
(RAM) [5]. Most of these schemes are based on
Merkle signature trees [12]. We describe the signature
tree and discuss the tradeoffs in implementing secure
stacks and queues using them.
5.1 Prior Work on RAMs
Given an n-bit address space, one can construct a secure
RAM M as a binary tree with 2^n leaves and 2^n
interior nodes, using 2^{n+1} data elements on an insecure
memory array M_i (where "i" stands for insecure).
Each bit of the address selects a branch on the binary
tree.

Figure 3: RAM: the address 1010 is shown.

Figure 3 shows a (tiny) RAM with a 4-bit address
space. Each node in the RAM is unambiguously designated
by a bit substring of the address. The leaves
store the values of the RAM. That is, given a (complete)
n-bit address string a and the buggy RAM M_i,
M_i[a] is the actual value of the memory cell at address
a. Tampering with these values by the adversary is
deterred (with high probability) by storing signatures
in the interior nodes. These signatures are computed
as follows. For a given interior node with address a,
where |a| < n,
M_i[a] = σ(M_i[a ‖ 0] ‖ M_i[a ‖ 1]).
(Note: we treat each bit string as a different
index into the (insecure) memory array. Thus
0011 is not the same as 11. This can be accomplished
by a simple transformation of the bit string into an
integer array index.)
The root value M_i[0] is kept in the T. When an
address a is accessed, all n values on the interior nodes
along the address path a, as well as the n additional
values needed to compute the signatures on the interior
nodes (as well as the root value), are also accessed.
When an address is modified, all the signature values
on the interior nodes along the address path are
recomputed.
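The following Python sketch illustrates the read check for such a signature tree; the dictionary representation of M_i and the hash choice are illustrative assumptions, and writes would simply recompute the signatures along the modified path.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_read(mi: dict, addr: str, root: bytes) -> bytes:
    """mi maps bit strings (interior nodes and full n-bit leaves) to stored
    values; root is the trusted copy of the top-level signature kept in T."""
    # Recompute every interior signature along the path from the root to the leaf.
    for i in range(len(addr)):
        prefix = addr[:i]
        expected = h(mi[prefix + "0"] + mi[prefix + "1"])
        stored = root if prefix == "" else mi[prefix]
        if stored != expected:
            raise ValueError("integrity violation at node " + repr(prefix))
    return mi[addr]

# Tiny example: a 2-bit address space with values at leaves '00', '01', '10', '11'.
mi = {a: bytes([int(a, 2)]) for a in ("00", "01", "10", "11")}
for p in ("0", "1"):
    mi[p] = h(mi[p + "0"] + mi[p + "1"])
root = h(mi["0"] + mi["1"])
assert verify_read(mi, "10", root) == bytes([2])
```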
5.2 Can we just do Stacks & Queues
on RAM?
Since RAMs are the most general type of memory,
the question naturally arises, can we not simply implement
secure stacks and queues using the secure RAM?
The answer to this question in a given specific situation
depends on a set of engineering design issues.
The main consideration in using a RAM is the number
of signature computations. Any time any particular
(leaf) value is addressed in the binary tree represen-
tation, n signature computations are required. This
could be avoided by partitioning the RAM into large
pages, thereby reducing the effective number of signature
computations. Another approach is to retain a
certain number of pages in the T and "swap in" pages
from H only when needed. This would amount to a
secure virtual memory.
There are several complications in implementing secure
paging systems based on the RAM schemes described
above. Limited program (text) space on a T
device (e.g., a smart card) may preclude implementation
of a secure virtual memory. Limitations in the
data memory will limit the amount of "trusted" pages
that can be kept within T . Bandwidth limitations will
force high penalties for page faults. Decreasing the
page size too much will increase the number of signature
computations on each page fault. With an n bit
address space, there will be O(n) signature computations
on a page fault: signatures will have to be checked
when an "outdated" dirty page is read, and recomputed
upon write out. If another page is swapped in,
another set of signatures will have to be checked.
If the application primarily uses stacks and queues,
and the above mentioned complications dominate,
then our stack and queue schemes will be useful. How-
ever, if a high-quality implementation of a virtual secure
memory is available, then it would be reasonable
to use that. However, the authors are not aware of
any such implementations for T devices currently on
the market. Unlike secure virtual memory schemes,
which must be carefully implemented and tuned, our
schemes are relatively simple, and can be built by an
application programmer.
6 Applications
The problem of checking large data structures with
limited memory was motivated by new applications for
software tools [7, 6]. The goal of this work is to place
trusted software tools (static analyzers, type check-
ers, proof checkers, compilers/instrumenters, etc.) in
trusted hardware; the output of these tools would be
attested by a signature in a public-key crypto system.
One particular application concerns Java TM byte-code
verification [11]. This is a process similar to type-checking
that is carried out on Java TM virtual machine
(JVM) programs. The JVM is a stack-oriented
machine. The typechecking process ensures that for
every control flow path leading to a given point in a
JVM program, the types of the stack entries are com-
patible: e.g. an object of type NetSocket is never conflated
with an object of type Circle. There are other
properties that bytecode verification ensures, but the
key typesafety property is at the core of the security
policies of the JVM. Currently, this process is carried
out by browsers such as Netscape TM prior to the execution
of mobile Java TM code such as applets. Perfor-
mance, security, configuration management, and intellectual
property protection advantages are claimed
when bytecode verification and similar static analysis
processes are conducted by a trusted T machine.
The result of the analysis is attested by a (public-key
based) cryptographic signature on the mobile code.
The bytecode verification algorithm [11] involves,
inter alia, a) maintaining an agenda of control flow
paths to be expanded, b) computing the state of the evaluation
stack, c) looking up the typing rules for various
types of instructions, and d) maintaining symbol tables
of variables. This resource-intensive data usage taxes
the resources of all but the most powerful (and expensive)
devices. Another application discussed in [6]
is placing a proof checker in a T device. Necula [13]
suggests that mobile code should carry with it a proof
of an applicable safety property. Unfortunately, such
proofs reveal a great deal about data structure and
layouts, loop invariants and algorithms. Vendors may
balk at revealing such intimate details of their prod-
ucts. With a proof checker in a T device, the vendor
can check the proof at their site. The bare binary
(sans proof) can be signed by the T device to attest
to the correctness of the proof, which can then remain
secret. Yet another application is the creation of
trusted, signed analysis products (control dependency
graphs, data dependency graphs, slices etc.) to accompany
mobile code. Such trusted analysis products can
be used by security environments to optimally "sand-
box" [15] mobile code.
Techniques such as the ones we suggest will form
a useful implementation technique in placing trusted
software analysis tools in trusted hardware. While
trusted hardware devices can be expected to become
more powerful, the inherent physical design constraints
(form factor, energy usage, heat dissipation)
are likely to prevent the performance gap (with conventional
machines) from narrowing; so implementation
techniques such as ours will remain applicable.
7 Extensions and Future Work
The techniques we have discussed above simply ensure
the integrity of stacks and queues. We now discuss
some extensions and future work to address some
related issues such as confidentiality, key expiration,
and data structure sharing.
Providing Confidentiality Confidentiality may
be a concern in applications of stacks and queues.
Our protocol designs allow for layering (or integrating)
confidentiality mechanisms on top of the basic protocols.
Updating Keys Since we assume a computationally
bounded adversary, cryptographic keys used to
protect the integrity of the data structure have a limited
lifetime. Relying on keys beyond their lifetime
could compromise the integrity of an instance of the
data structure. We now discuss how to manage keys
for instances of data structures.
One general approach of replacing keys is the most
obvious: complete rewrite. That is to temporarily suspend
normal usage of the data structure to remove all
elements from one instance of the data structure and
add them to a new instance with a new key. Some
issues arise from this approach. It may not be possible
to go directly from one stack to another since
the order of the elements would be reversed. However,
this problem can be corrected by repeating the process
with another stack. Alternatively the operation
can be performed in a single pass using a queue.
A second general approach for updating keys is
gradual transition from one key to another. This can
eliminate the need for the data structure to be unavailable
during key updates. That is, multiple signatures
using different keys are maintained until the
entire structure has been updated with the new key.
Sharing Data Structures Multiple entities may
wish to share operations to a data structure. Issues
involved in sharing the data structure concern sharing
the most recent signatures and keys associated with
the structure. Such schemes may be built on top of secure
quorum schemes. However, it is unclear whether
such schemes satisfy the security and performance requirements
for sharing data structures. This area is a
topic of future research.
8 Conclusion
We have described protocols by which resource-
limited, trusted computers can store stacks and queues
on untrusted hosts while retaining only a constant
amount of memory in the trusted machine. This approach
differs from earlier work [4, 5]. Our protocols
are much simpler. We use O(1) memory in the trusted
computer, and transfer only O(1) amount of data for
each push and pop operation in an on-line mode. Previous
approaches used a O(log(n)) trusted memory
and O(log(n)) data transfer for each operation (where
n is the size of the stack or queue). However, unlike
previous approaches, which use information theoretic
bounds, we assume computationally limited ad-
versaries. We present arguments to show that our
protocols will detect attacks which return incorrect
values.
9
Acknowledgements
This work has greatly benefited as a result of early
discussions with Dave McAllester on RAM trees, more
recent discussions with Philip Fong on associative ar-
rays, as well as feedback from the anonymous reviewers
of this conference.
--R
Javacard 2.0 Application Programming Inter- faces
The Mondex Magazine
Checking linked data structures.
Checking the correctness of memories.
Techniques for trusted software engineering.
Cryptographic verification of test coverage claims.
Towards a theory of software protection and simulation by oblivious rams.
LARCH: Languages and Tools for Formal Specification.
Password Identification with Insecure Communications.
The Java TM Virtual Machine specification.
A certified digital signature.
Efficient computations on oblivious rams.
Efficient software-based fault isolation
Secure coprocessors in electronic commerce applications.
--TR
Towards a theory of software protection and simulation by oblivious RAMs
Efficient computation on oblivious RAMs
A certified digital signature
languages and tools for formal specification
Efficient software-based fault isolation
Access control and signatures via quorum secret sharing
Proof-carrying code
Cryptographic verification of test coverage claims
Techniques for trusted software engineering
Password authentication with insecure communication
Java Virtual Machine Specification | software protection;correctness of memories;oblivious ram;data structures;security |
513337 | On Computing Structural Changes in Evolving Surfaces and their Appearance. | As a surface undergoes a one-parameter family of deformations, its shape and its appearance change smoothly except at certain critical parameter values where abrupt structural changes occur. This paper considers the case of surfaces defined as the zero set of smooth density functions undergoing a Gaussian diffusion process and addresses the problem of computing the critical parameter values corresponding to structural changes in the parabolic curves of a surface and in its aspect graph. An algorithm based on homotopy continuation and curve tracing is proposed in the case of polynomial density functions, whose zero set is an algebraic surface. It has been implemented and examples are presented. | Introduction
This article is concerned with the interplay of scale and shape in computer vision. Con-
cretely, imagine a smooth density function defined over some volume. The points where
the density exceeds a given threshold form a solid whose boundary is a level set of this
function. Smoothing the density changes its level sets and the solid shapes they bound.
What are the "important" changes that may occur? For example, can a solid break
into two pieces? Or for that matter, can two pieces merge into a single one? When?
Answering this question is relevant in medical imaging for example, where anatomical
surfaces are routinely identified in volumetric CT, MR, or ultrasound images as level
sets of smoothed density functions (Lorensen and Cline, 1987). Consider now a moving
object, say a train, and a static camera, say the old-fashioned large-format kind. As
the train speeds by, its silhouette on the ground glass of the camera will change, mostly
smoothly, but at times abruptly as well. For perfectly focussed cameras, a complete
catalogue of the corresponding discontinuities has been established by mathematicians
(Arnol'd, 1969; Platonova, 1981; Kergosien, 1981), leading to the introduction of aspect
graphs in computer vision by Koenderink and Van Doorn (1976, 1979). When the
camera is not perfectly focussed, the appearance of our (now blurry) train will still go
through occasional discontinuities, but these will be of a slightly different nature, and
they will depend on the amount of smoothing introduced by the camera optics. The
situation becomes even more complicated for digital cameras since the image plane in
this case is split into pixels that can only resolve points whose angular separation is
large enough. Characterizing and computing the scale, i.e., the amount of blur and/or
image resolution at which the discontinuities in appearance are themselves suddenly
changing is of course pertinent in line-drawing interpretation and more generally in
object recognition (Eggert et al., 1993; Shimshoni and Ponce, 1997).
As already hinted at, classifying the abrupt changes -called critical events or singularities
from now on- of the shape and/or appearance of a surface as it evolves
under certain types of deformations is the business of mathematicians, and it involves
tools from differential geometry and singularity theory (Whitney, 1955; Thom,
1972; Arnol'd, 1984). This article is aimed at the simpler problem of actually computing
the singularities for a particular class of surfaces, namely the zero sets of polynomial
densities, or algebraic surfaces, evolving under one-parameter families of deformations,
mostly those associated with Gaussian smoothing of the polynomial density function.
For the sake of simplicity, we will further restrict our study of appearance changes to
surfaces of revolution defined by rotationally-symmetric density functions, and model
the dependency of appearance on scale using perfectly focussed pictures of blurred
objects (blurry images of perfectly sharp objects would of course provide a more
realistic model). In particular, the evolution process will not only depend on the shape
of the object, but on the actual density function used to define its surface, which
is only justified when this density function is physically significant, e.g., the X-ray
attenuation stored at each voxel of a CT image (Kergosien, 1991; Noble et al., 1997).
Despite these obvious limitations, we believe that the study undertaken in this paper
can bring some insight into the more general role of scale in vision, for example, give
an idea of the complexity of scale-dependent representations of shape and appearance,
and of their adequacy to the organization of objects into classes. We also believe that
some of the proposed techniques can be generalized without too much difficulty to more
general settings, for example the detection of singularities in medical images, or the
construction of scale-space aspect graphs for algebraic surfaces that are not rotationally
symmetric. Our presentation can be thought of as a preliminary experimental foray
into the role of scale in computational differential geometry (Koenderink, 1990). Specif-
ically, we address the problem of computing the critical values of a Gaussian diffusion
process corresponding to structural changes in (1) the parabolic curves of an algebraic
surface and (2) the aspect graph of algebraic surfaces of revolution. Section 2 gives a
brief survey of related work and presents an overview of our approach. We derive the
equations characterizing the singularities of evolving surfaces and their aspect graph
in Sections 3 and 4. These equations are solved using homotopy continuation (Morgan,
1987) and curve tracing (see Appendix A and (Kriegman and Ponce, 1991)), and a
number of experiments are presented in Sections 3.3 and 4.3. We conclude in Section 5
with a brief discussion of our results and perspectives on future research. Preliminary
versions of this work have appeared in (Pae and Ponce, 1999; Pae, 2000).
2. Background and Approach
2.1. Background
The idea of capturing the significant changes in the structure of a geometric pattern as
this pattern evolves under some family of deformations has a long history in computer
vision. For example, Marr (1982) advocated constructing the primal sketch representation
of an image by keeping track of the patterns that do not change as the image is
blurred at a variety of scales. The idea of recording the image changes under blurring
lead to the scale-space approach to image analysis, which was first proposed by Witkin
(1983) in the case of inflections of a one-dimensional signal, and has since been applied
to many domains, including the evolution of curvature patterns on curves (Asada and
Brady, 1986; Leyton, 1988; Mackworth and Mokhtarian, 1988) and surfaces (Ponce
and Brady, 1987; Monga et al., 1994), and more recently, the evolution of curves and
level sets under non-linear diffusion processes (Kimia et al., 1995; Sethian, 1996).
The aspect graph of Koenderink and Van Doorn (1976, 1979) provides another
example of an evolving geometric pattern characterized by its singularities. This time,
the objective is to enumerate the possible appearances of an object. The range of
viewpoints is partitioned into maximal regions where the (qualitative) appearance of
an object remains the same, separated by critical boundaries where the topology of
the silhouette changes according to some visual event. The maximal regions (labelled
by the object appearance at some sample point) are the nodes of the aspect graph,
and the visual events separating them are its arcs. Approaches to the construction of
the orthographic or perspective aspect graph of a polyhedral object include (Castore,
1984; Stewman and Bowyer, 1988; Plantinga and Dyer, 1990; Wang and Freeman,
1990; Gigus et al., 1991) and initial attempts at constructing scale-space aspect graphs
(Eggert et al., 1993; Shimshoni and Ponce, 1997) for this class of surfaces. Approximate
aspect graphs of polyhedral objects have also been successfully used in recognition
tasks (Chakravarty, 1982; Hebert and Kanade, 1985; Ikeuchi and Kanade, 1988).
For smooth surfaces, a complete catalogue of visual events is available from singularity
theory (Arnol'd, 1969; Platonova, 1981; Kergosien, 1981; Rieger, 1987), and it has
been used to compute the aspect graphs of surfaces of revolution (Eggert and Bowyer,
1989; Kriegman and Ponce, 1990), algebraic surfaces (Petitjean et al., 1992; Rieger,
1992), and empirical surfaces defined as level sets of volumetric density functions (Ker-
gosien, 1991; Noble et al., 1997) (see (Thirion and Gourdon, 1993) for related work).
In his book Solid Shape (1990), Koenderink addressed the problem of understanding
the structural changes of the latter type of surfaces as the density function undergoes a
diffusion process. He focussed on the evolution of certain surface attributes, namely, the
parabolic curves and their images via the Gauss map, which are significant for vision
applications: for example, the intersection of a parabolic curve with the occluding
contour of an object yields an inflection of the silhouette (Koenderink, 1984), and
the asymptotic directions along the parabolic curves give birth to the lip and beak-
to-beak events of the aspect graph (Kergosien, 1981). Koenderink proposed to define
morphological scripts that record the possible transformations of a surface and use
these as a language for describing dynamic shape. Bruce, Giblin and Tari (1996a)
have used singularity theory to expand Koenderink's work and establish a complete
catalogue of the singularities of the parabolic curves under one-parameter families of
deformations, and their work is the theoretical foundation for the approach presented
in Section 3. The same authors have also identified additional events involving the
ridges of a surface as well as multilocal singularities where a plane is bi- or tritangent
to the surface (Bruce et al., 1996b), but these will not be investigated in this paper.
2.2. Approach
The classical characterization of critical events in singularity theory is in terms of local
surface models defined by a height function z = f(x, y, t), where t is the parameter
controlling the surface evolution. In this article we are interested instead in global
surface models defined by some implicit equation F(x, y, z, t) = 0. Our approach to
the problem of computing the singularities of the evolution process relies on deriving
the equations that govern it and solving these equations for the corresponding critical
parameter values. Concretely, we will show in Sections 3 and 4 that the parabolic
curves of the surface formed by the zero set of a scale-dependent volumetric density
and its visual events can always be defined by a system of equations of the form
P_1(x_1, ..., x_{n+1}; t) = 0, ..., P_n(x_1, ..., x_{n+1}; t) = 0, (1)
where P_1 to P_n are non-linear combinations of F and its partial derivatives. For a
given value of t, these equations define a curve in IR^{n+1}. As will be shown in Sections
3 and 4, when t changes and the surface evolves, critical events can be characterized
by two additional equations, adding to a total of n + 2 equations in n + 2 unknowns
that admit, in general, a finite number of solutions.
To find these solutions, it is necessary to restrict our attention to classes of density
functions F for which (numerical or symbolic) algorithms guaranteed to find all zeros
of (1) exist. Specifically, we focus in this presentation on algebraic surfaces, defined as
the zero set of a volumetric polynomial density:
F(x, y, z) = Σ_{i+j+k ≤ d} a_{ijk} x^i y^j z^k.
Examples of algebraic surfaces include planes and quadric surfaces (i.e., ellipsoids,
hyperboloids and paraboloids) as well as the zero set of higher-degree polynomial
densities. Most importantly, as will be shown in the following sections, the equations
defining the singularities of evolving algebraic surfaces are polynomials in the unknowns
of interest. This is the key to computing the critical events associated with
surface evolution: as noted earlier, these events are characterized by systems of n + 2
equations in n + 2 unknowns, and they can be found using homotopy continuation
(Morgan, 1987), a global root finder that finds all the (real and complex) solutions of
square systems of polynomial equations. Between singularities, the structure of interest
(parabolic curves or aspect graph) does not change. An explicit representation for this
structure can be constructed using the curve tracing algorithm presented in (Kriegman
and Ponce, 1991). This algorithm is outlined in Appendix A for completeness.
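For readers without access to Appendix A, the following NumPy sketch shows the generic predictor-corrector idea behind tracing an implicit curve {G(x) = 0} in IR^3; it is only an illustration of the principle, not the algorithm of Kriegman and Ponce (1991), and the step size, iteration counts, and example curve are arbitrary choices.

```python
import numpy as np

def trace_curve(G, JG, x0, step=0.01, n_steps=200):
    """Trace a curve {G(x) = 0} in R^3, where G: R^3 -> R^2.
    Euler predictor along the tangent, Newton corrector back onto the curve."""
    x = np.asarray(x0, dtype=float)
    pts = [x.copy()]
    t_prev = None
    for _ in range(n_steps):
        J = JG(x)                       # 2x3 Jacobian
        t = np.cross(J[0], J[1])        # tangent = grad G1 x grad G2
        t /= np.linalg.norm(t)
        if t_prev is not None and np.dot(t, t_prev) < 0:
            t = -t                      # keep a consistent orientation
        t_prev = t
        x = x + step * t                # predictor step
        for _ in range(5):              # corrector: Newton with the pseudo-inverse
            x = x - np.linalg.pinv(JG(x)) @ G(x)
        pts.append(x.copy())
    return np.array(pts)

# Example: a circle, the unit sphere intersected with the plane z = 0.3.
G  = lambda x: np.array([x @ x - 1.0, x[2] - 0.3])
JG = lambda x: np.array([2.0 * x, [0.0, 0.0, 1.0]])
pts = trace_curve(G, JG, x0=[np.sqrt(1 - 0.3**2), 0.0, 0.3])
```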
3. Parabolic Curve Evolution
3.1. Problem Geometry
This section gives a brief overview of the differential geometry concepts needed to
understand the surface evolution process. The presentation is deliberately informal
and written with the hope that the reader's geometric intuition will compensate for
the lack of mathematical details. These can be found in classical texts (Hilbert and
Cohn-Vossen, 1952; Arnol'd, 1969; Platonova, 1981; Koenderink, 1990) and in the
paper by Bruce et al. (1996a) that has inspired the work presented here.
Consider a solid with a smooth boundary. Its surface can take three different forms
in the neighborhood of one of its points, depending on how it intersects its tangent
plane there (Figure 1): in the vicinity of an elliptic point, the surface is cup-shaped
(Figure 1(a)): it does not cross its tangent plane and looks like the surface of an egg or
the inside surface of its broken shell. We say that the point is convex in the former case
and concave in the latter one. At a hyperbolic point, the surface is saddle-shaped and
crosses its tangent plane along two curves (Figure 1(b)). The tangents to these curves
are the asymptotic directions of the surface at the point. The elliptic and hyperbolic
points form patches on a surface, and they are separated by curves formed by parabolic
points where the intersection of the surface and its tangent plane has a cusp (there
are actually two types of parabolic points; we will come back to those in a minute).
The cusp direction is also an asymptotic direction in that case.
Figure 1. Local shape of a surface: (a) an elliptic point, (b) a hyperbolic point, and (c) a parabolic
point.
We define the Gaussian (or spherical) image of a smooth surface by mapping each
one of its points onto the place where the associated unit normal pierces the unit
sphere. It can be shown that the Gauss map is one-to-one in the neighborhood of
elliptic or hyperbolic points. The orientation of a small closed curve centered at an
elliptic point is preserved by the Gauss map, but the orientation of a curve centered
at a hyperbolic point is reversed (Figure 2).
Figure 2. Left: a surface in the shape of a kidney bean. It is formed of a convex area, a hyperbolic
region, and the parabolic curve separating them. Right: the corresponding Gaussian image. Darkly
shaded areas indicate hyperbolic areas, lightly shaded ones indicate elliptic ones.
The situation is a bit more complicated at a parabolic point: in this case, any small
neighborhood will contain points with parallel normals, indicating a double covering of
the sphere there (Figure 2): we say that the Gaussian image folds along the parabolic
curve.
Figure 3 shows an example, with a single covering of the sphere on one side of
the parabolic curve's image and a triple covering on the other side. The easiest way of
thinking about the creation of such a fold is to grab (in your mind) a bit of the rubber
skin of a deflated balloon, pinch it, and fold it over. As illustrated by the figure, this
process will in general introduce not only a fold of the spherical image, but two cusps
as well, whose preimages are aptly named cusps of Gauss. Cusps and inflections of the
spherical image of the parabolic curve always come in pairs. The inflections split the
fold of the Gauss map into convex and concave parts, and their preimages are called
gutterpoints (Figure 3).
Figure 3. Folds and cusps of the Gauss map. To clarify the structure of the fold, it is drawn in the left
and right sides of the figure as a surface folding in space. The changes in topology of the intersection
between a great circle and the Gaussian image of the surface as the circle crosses the fold are illustrated
in the far right of the figure.
As shown in (Bruce et al., 1996a), there are a few more possibilities in the case
of one-parameter families of deforming surfaces: indeed, three types of higher-order
contact may occur at isolated values of the parameter controlling the deformation.
The corresponding singularities are called A3, A4 and D4 transitions following Arnold's
conventions (Arnol'd, 1969), and they affect the structure of the parabolic curve as well
as its image under the Gauss map (Figure 4). There are four types of A3 transitions:
in the first case, a parabolic loop disappears from the surface, and an associated
loop with two cusps disappears on the Gauss sphere (this corresponds to a lip event
in catastrophe theory jargon (Thom, 1972; Arnol'd, 1984), see Figure 4(a)). In the
second case, two smooth parabolic branches join, then split again into two branches
with a different connectivity, while two cusping branches on the Gauss sphere merge
then split again into two smooth branches (this is a beak-to-beak event, see Figure
4(b)). Two additional singularities are obtained by reversing these transitions. At an A4
event, the parabolic curve remains smooth but its Gaussian image undergoes a
swallowtail transition, i.e., it acquires a higher-order singularity that breaks off into
two cusps and a crossing (Figure 4(c)). Again, the transition may be reversed. Finally,
there are four D4 transitions. In the first one (Figure 4(d)), two branches of a parabolic
curve meet then split again immediately; a similar phenomenon occurs on the Gauss
sphere, with a cusp of Gauss "jumping" from one branch to the other. In the second
transition (Figure 4(e)), a parabolic loop shrinks to a point then expands again into a
loop; on the Gauss sphere, a loop with three cusps shrinks to a point then reappears.
The transitions can as usual be reversed.
Figure 4. The singularities of evolving surfaces. The events are shown on both the surface (left) and
the Gauss sphere (right). The actual events are shown as black disks, and the (generic) cusps of Gauss
are shown as white disks. See text for more details.
3.2. Computational Approach
Bruce, Giblin and Tari (1996a) give explicit equations for all the singularities in the
case of a surface defined by a height function z = f(x, y, t), where t is the parameter
controlling the surface evolution. Here we are interested in surfaces defined by
the implicit equation F(x, y, z, t) = 0, where F can be thought of as a volumetric
density depending on t. Although it is possible to use the chain rule to rederive the
corresponding equations in this case, this is a complex process (Thirion and Gourdon,
1993) and we have chosen instead to first construct the equations characterizing the
parabolic curves and the cusps of Gauss, which turn out to have a very simple form,
then exploit the fact that the parabolic curve or its image under the Gauss map are
singular at critical events to characterize these events.
Let us start by considering a static surface defined by F(x, y, z) = 0. Using subscripts
to denote partial derivatives, the normal to this surface is given by the gradient
∇F = (F_x, F_y, F_z)^T. As shown in (Weatherburn, 1927), the parabolic curves are
defined by
F(x, y, z) = 0 and P(x, y, z) = ∇F^T A ∇F = 0,
where A denotes the symmetric matrix of cofactors of the Hessian, whose first diagonal entry is
F_yy F_zz − F_yz^2 (and similarly for the other entries). Note that A H = det(H) Id, where H denotes the Hessian matrix associated with F.
The cusps of Gauss are points where the asymptotic direction along the parabolic
curve is tangent to this curve (Koenderink, 1990). Let us show that the asymptotic
direction at a parabolic point is a = A∇F. Asymptotic tangents can be defined as
vectors that (1) lie in the tangent plane and (2) are self-conjugated. The first condition
is obviously satisfied at a parabolic point since a · ∇F = ∇F^T A ∇F = P = 0. The
second condition is also obviously satisfied since a^T H a = det(H) ∇F^T A ∇F = det(H) P = 0. Since
the tangent to the parabolic curve is given by ∇P × ∇F, it follows that the cusps of
Gauss are given by the equations
F(x, y, z) = 0, P(x, y, z) = 0, and C(x, y, z) = ∇P · A∇F = 0.
Note that ∇P has a simple form:
∇P = (∇F^T A_x ∇F, ∇F^T A_y ∇F, ∇F^T A_z ∇F)^T + 2 det(H) ∇F, (2)
since (∂∇F/∂x)^T A ∇F = det(H) F_x (and similarly for y and z). In particular, this simplifies the expression
of C since the second term in (2) cancels in the dot product. Similar simplifications
occur during the computation of the non-generic singularities below.
We are now ready to characterize these singularities. Note that in the case of a
surface undergoing a family of deformations parameterized by some variable t, the
functions F , P and C also depend on t. Let us first consider A3 and D4 transitions.
Since they yield singular parabolic curves, they must satisfy
F(x, y, z, t) = 0, P(x, y, z, t) = 0, and ∇_x F × ∇_x P = 0,
where ∇_x denotes the gradient operator with respect to x = (x, y, z)^T, and the third equation simply
states that the normals to the original surface and the "parabolic surface" defined
by P = 0 are parallel. This is a vector equation with three scalar components, but only
two of these components are linearly independent. It follows that the singularities of
the parabolic curves are characterized by four equations in four unknowns. The case of A4
singularities is a little more complicated since the parabolic curve is smooth there.
On the other hand, the curve defined in IR^4 by the cusps of Gauss is singular, and the A4
singularities can thus also be found by solving a system of four equations in four
unknowns, namely
F(x, y, z, t) = 0, P(x, y, z, t) = 0, C(x, y, z, t) = 0, and det(∇_x F, ∇_x P, ∇_x C) = 0.
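The quantities P and C, and the square systems above, are straightforward to build symbolically. The paper's implementation uses MATHEMATICA; the following SymPy sketch is an independent illustration, and the example density F is a hypothetical quartic chosen only to exercise the code.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

def parabolic_machinery(F):
    """Given a polynomial density F(x, y, z, t), return (P, C):
    P = grad(F)^T adj(H) grad(F)   (parabolic-curve polynomial) and
    C = grad(P) . (adj(H) grad(F)) (cusp-of-Gauss polynomial)."""
    g = sp.Matrix([F.diff(v) for v in (x, y, z)])     # gradient of F
    H = sp.hessian(F, (x, y, z))                      # Hessian matrix
    A = H.adjugate()                                  # matrix of cofactors
    P = sp.expand((g.T * A * g)[0])
    gP = sp.Matrix([P.diff(v) for v in (x, y, z)])
    C = sp.expand((gP.T * (A * g))[0])
    return P, C

# A3/D4 candidates: F = P = 0 together with grad_x F parallel to grad_x P
# (two generically independent components of the cross product),
# i.e., four equations in the four unknowns x, y, z, t.
F = x**4 + y**4 + z**4 - 1 + t*(x**2 + y**2 + z**2)   # hypothetical example density
P, C = parabolic_machinery(F)
gF = sp.Matrix([F.diff(v) for v in (x, y, z)])
gP = sp.Matrix([P.diff(v) for v in (x, y, z)])
cross = gF.cross(gP)
system_A3_D4 = [F, P, cross[0], cross[1]]
```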
3.3. Implementation and Results
We have constructed a MATHEMATICA implementation of the curve tracing algorithm
described in Appendix A and used the parallel implementation of homotopy
continuation described in (Stam, 1992) to solve all relevant polynomial equations.
This section demonstrates the computation of the singularities of two quartic surfaces
evolving under a Gaussian diffusion process. As shown in Appendix B, convolving a
polynomial density with a Gaussian kernel yields a new polynomial surface with the
same degree and coefficients that are polynomials in the scale σ of this kernel. The
calculation of these coefficients has been implemented in MATHEMATICA as well.
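The fact that Gaussian smoothing keeps a polynomial density polynomial, with coefficients that are themselves polynomials in σ, can be checked with a few lines of SymPy; this sketch is not the derivation of Appendix B, and it assumes an isotropic Gaussian kernel.

```python
import sympy as sp

x, y, z, s, ux, uy, uz = sp.symbols('x y z sigma u_x u_y u_z')

def gaussian_moment(k):
    # E[u^k] for a standard normal variable: 0 if k is odd, (k-1)!! if k is even.
    return 0 if k % 2 else sp.factorial2(k - 1)

def smooth(F):
    """Convolve the polynomial density F(x, y, z) with an isotropic Gaussian of
    standard deviation sigma, computed as E[F(x + sigma*U)] with U ~ N(0, I)."""
    shifted = sp.expand(F.subs({x: x + s*ux, y: y + s*uy, z: z + s*uz}))
    poly = sp.Poly(shifted, ux, uy, uz)
    out = 0
    for (a, b, c), coeff in poly.terms():
        out += coeff * gaussian_moment(a) * gaussian_moment(b) * gaussian_moment(c)
    return sp.expand(out)

# Example: smoothing x**2*y keeps the degree and makes coefficients polynomial in sigma.
print(smooth(x**2 * y))   # x**2*y + sigma**2*y
```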
Figure 5 illustrates the effect of the diffusion process on the "squash" surface, a quartic
surface defined in (Petitjean et al., 1992).
The figure shows, from left to right, the original surface and its Gaussian image, the
first critical event (an A3 singularity where two components of the parabolic curve
merge into a single one), a snapshot of the surface after this event, and finally a
second A3 singularity where the parabolic curve vanishes. After that, the surface is
convex and no further singularity occurs except for the disappearance of the surface
itself.
Singularities
Type A3 A3 Vanish
σ 0.2487 0.3325 0.5498
Figure 5. Diffusion of the squash. Top: the evolving surface and its parabolic curves (as before, generic
cusps of Gauss are shown as white discs and singularities are shown as black discs). Middle: the
corresponding Gaussian image. Bottom: type and scale of the singularities. Here, as in the following
figure, the event corresponding to the disappearance of the surface is not shown.
Figure 6 shows the "dimple" surface, another quartic surface defined in (Petitjean et al., 1992),
as it evolves under a similar Gaussian diffusion process. There are four singularities in
this case, including the disappearance of the surface.
Dimple Singularities
Type D4 A3 A3 Vanish
σ 0.1486 0.1826 0.2532 0.3590
Figure 6. Evolution of the dimple surface (rows 1 and 2) and its Gaussian image (rows 3 and 4).
Our approach also handles other one-parameter families of deformations whose
dependence on the evolution parameter is polynomial: for example, Figure 7 shows the
singularities found when the dimple is linearly morphed into the squash as follows: if
S(x, y, z) = 0 and D(x, y, z) = 0 respectively denote the squash and dimple equations,
the morphed surface is defined by (1 − t) S(x, y, z) + t D(x, y, z) = 0. There are five
critical events in this case, but, as in our other examples, no A4 singularity.
Singularities
Type D4 D4 A3 A3 A3
Figure 7. Morphing a dimple into a squash. The evolving surface is shown in columns 1 and 2, and
its Gaussian image is shown in columns 3 and 4.
4. Aspect Graph Evolution
4.1. Problem Geometry
We consider once again in this section a solid bounded by a smooth surface undergoing
some deformation process, but this time we study the evolution of its appearance as
both the deformation parameter and the viewpoint change. We assume orthographic
projection and model the range of viewpoints by a unit viewing sphere. As mentioned
in Section 2, the aspect graph of our object partitions this sphere into maximal regions
where the topological structure of its silhouette remains the same. This section
presents (informally once again) the geometric principles that determine the visual
events separating these regions and govern their evolution under the diffusion process.
Let us first consider the case of a static solid object. For a given camera position, the
contour of this object is defined by the set of curves where the image plane intersects
a viewing cylinder that grazes the object along a second set of curves, forming the
occluding contour. The occluding contour is in general smooth and it consists of fold
points, where the viewing ray is tangent to the surface, and a discrete set of cusp
points where the ray is tangent to the occluding contour as well. The image contour is
piecewise smooth, and its only singularities are a discrete set of cusps, formed by the
projection of cusp points, and T-junctions, formed by the transversal superposition
of pairs of fold points (Figure 8(b)). This is for transparent solids: for opaque ones,
contours terminate at cusps, and one of the two contour branches forming a T-junction
ends there.
Figure
8. The elements forming the contours of solids bounded by smooth surfaces. From left to right:
a smooth piece of image contour, a cusp, and a T-junction. Reprinted from (Petitjean et al., 1992,
Figure
3).
Critical changes in object appearance as a function of viewpoint are called visual
events. They bear the same colorful names, inherited from catastrophe theory (Thom,
1972; Arnol'd, 1984), as the singularities of evolving surfaces introduced in the previous
section, but have of course a different geometric meaning. Their occurrence can be
understood in terms of the Gauss map. Let us first consider what happens to the
occluding contour as the great circle formed by its Gaussian image becomes tangent
to, then crosses the spherical image of the parabolic curve. As can be seen by examining
again
Figure
3, there are two cases: when the tangency occurs along a convex fold of
the Gauss map, we have a lip event: an isolated point appears on the spherical image
of the occluding contour before exploding into a small closed loop on the unit sphere
(Figure 3, bottom right). In the image, there is no contour before the event, and an
isolated point appears out of nowhere at the singularity before exploding into a closed
loop consisting of a pair of contour branches meeting at two cusps (Figure 9). One of
the branches has two inflections and it is formed by the projection of both elliptic and
hyperbolic points, while the other one is formed by the projection of hyperbolic points
only. For opaque objects, one of the branches is always occluded by the object.
Figure
9. The lip event.
When the great circle associated with the occluding contour crosses the spherical
image of the parabolic curve along a concave part of the fold, a beak-to-beak event
occurs. Two separate loops merge then separate with a different connectivity (Figure
3, top right). In the picture, two distinct portions of the contour, each having a cusp
and an inflection, meet at a singularity (Figure 10). Before the event, each of the
branches is divided by the associated cusp into a purely hyperbolic portion and a
mixed elliptic-hyperbolic arc, one of which is always occluded. After the event, two
cusps and two inflections disappear as the contour splits into two smooth branches
with a different connectivity. One of these arcs is purely elliptic and the other one is
purely hyperbolic, one of the two always being occluded in the case of opaque objects.
The reverse transition is of course also possible, as for all other visual events.
Figure
10. The beak-to-beak event.
The lip and beak-to-beak events occur when the Gaussian image of the occluding
contour becomes tangent to the spherical image of a parabolic curve. It is easily shown
that this happens when the viewpoint is along the asymptotic tangent at a parabolic
point. A swallowtail event occurs when the viewing direction is along an asymptotic
tangent at a flecnodal point. As shown in Figure 11(a)-(b), the intersection of the
surface and its tangent plane at a flecnodal point consists of two curves, one of which
has an inflection at the point. Unlike ordinary asymptotic rays (Figure 11(c)), that are
blocked by the observed solid (this is what gives rise to contour terminations at cusps),
this one intersects the surface at a single point: it "sees through it" (Koenderink, 1990),
producing a sharp V on the image contour at the singularity. The contour is smooth
before the transition but it acquires two cusps and a T-junction after it (Figure 11,
bottom). All surface points involved in the event are hyperbolic. For opaque objects,
one branch of the contour ends at the T-junction and the other one ends at a cusp.
(a)
(b)
(c)
Figure
11. A swallowtail event. Top: (a) surface shape in the neighborhood of a flecnodal point, and
comparison of the intersection of the associated solid and its tangent plane (b) near such a point and
(c) an ordinary hyperbolic point. Bottom: the event itself.
Flecnodal points form curves on generic smooth surfaces. The visual events described
so far occur when the viewpoint crosses the surface ruled by asymptotic
tangents along these curves and their parabolic cousins. These events are said to
be local because each ruling grazes the surface at a single point. Multilocal events
occur when the viewpoint crosses the surface ruled by certain families of bitangent
and tritangent rays that graze the surface along multiple curves. Let us first discuss
tangent crossings. A limiting bitangent occurs when the tangent plane itself becomes
bitangent to the surface. The limiting bitangents sweep a ruled surface, called the
limiting bitangent developable. A tangent crossing occurs when the line of sight crosses
this surface (Figure 12, top), with two separate pieces of contour becoming tangent
to each other at the event before crossing transversally at two T-junctions (Figure 12,
bottom). For opaque objects, either a previously hidden part of the contour becomes
visible after the transition, or (as in the figure) another branch disappears due to
occlusion.
There are two more types of singular bitangents: the asymptotic bitangents, that
intersect the surface along an asymptotic direction at one of their endpoints, and the
Figure
12. A tangent crossing. The ordering of spatially distinct parts of the occluding contour changes
when the viewpoint crosses the limiting bitangent developable surface in B.
tritangents, that graze the surface in three distinct points. The corresponding visual
events occur when the line of sight crosses one of the associated developable surfaces,
and they also involve the appearance or disappearance of a pair of T-junctions: a cusp
crossing occurs when a smooth piece of the image contour crosses another part of the
contour at a cusp (or endpoint for an opaque object) of the latter (Figure 13(a)). Two
T-junctions are created (or destroyed) in the process, only one of which is visible for
opaque objects. A triple point is formed when three separate pieces of the contour
momentarily join at non-zero angles (Figure 13(b)). For transparent objects, three
T-junctions merge at the singularity before separating again. For opaque objects, a
portion of the contour disappears (or appears) after the transition, along with two
T-junctions, while a third T-junction appears (or disappears).
We have enumerated the visual events associated with static objects. When a surface
is allowed to deform, higher-order singularities will also occur. A catalogue of these
critical events can (in principle) be constructed using the tools of singularity theory.
As far as we know, however, this has not been done yet, and the associated image
singularities have not been analyzed either. Thus, we have not relied on an explicit
catalogue of scale-space critical events in the approach described in the next section:
instead, we have characterized numerically the singularities of the surfaces swept by
the parabolic and flecnodal curves and the bitangent and tritangent developables as
the parameter controlling surface evolution changes.
4.2. Computational Approach
In our implementation, we have decided for simplicity to focus on compact solids of
revolution bounded by algebraic surfaces with an equation of the form x^2 + y^2 − R(z) =
0, where R(z) is a polynomial of degree d. These surfaces are smooth by construction,
Figure 13. More multilocal events: (a) a cusp crossing; (b) a triple point. After (Petitjean et al., 1992,
Figure 6).
with a radius function r(z) = √R(z) in the range of z values where R(z) ≥ 0.¹ As
shown earlier, convolving an algebraic surface with a Gaussian kernel yields another
algebraic surface. Furthermore, if the original surface is rotationally symmetric, so is
the smoothed surface. This simplifies the aspect graph construction since, for a fixed
scale, visual events occur along parallels (in the usual sense of constant-latitude circles)
of the viewing sphere in this case, and they can be characterized by a single number:
the tangent of the angle β between the axis of symmetry of the solid of revolution
and the corresponding viewing direction. In the rest of this section, we will generalize
the visual event equations derived in (Kriegman and Ponce, 1990) for static solids of
revolution to the case where r(z) = √R(z), and use polynomial curve tracing techniques
to trace these events in the (σ, β) plane when the surface is allowed to evolve under a
Gaussian diffusion process governed by the scale parameter σ.
4.2.1. Local Events
As suggested by Section 4.1 and demonstrated by (Kriegman and Ponce, 1990; Petit-
jean et al., 1992), the computation of the local visual events of a (static) smooth
surface can be decomposed into two steps: (1) the identification of the parabolic
and flecnodal curves on the surface, and (2) the construction of the curves traced by
the corresponding asymptotic directions on the viewing sphere. The case of algebraic
surfaces of revolution is particularly simple, since the parabolic and flecnodal curves
are parallels (circular cross-sections perpendicular to the axis in this context), the
corresponding z values being the roots of univariate equations in z, and, as already
noted, the corresponding viewing directions (i.e., the asymptotic tangents along these
1 Using instead an arbitrary polynomial radius function r, as was done in (Kriegman and Ponce,
1990), would have forced us to either tackle the more complicated case of piecewise-smooth surfaces
when the range of z values is restricted to a compact interval where r(z) ≠ 0, or to handle the
singularities occurring at points where r(z) = 0 but the tangent of the profile defining the solid of
revolution is not parallel to the cross-sections.
curves) also trace parallels on the viewing sphere. The equations for the parabolic
and flecnodal curves and the corresponding asymptotic directions were derived in
(Kriegman and Ponce, 1990). They can be expressed in terms of the radius function
r and its derivatives, and are also easily rewritten in terms of the function R used in
this presentation (Table I).
Table I. Equations defining the local events for surfaces of revolution.
Both the original equations derived in (Kriegman and Ponce, 1990) and
the corresponding equations when r(z) = √R(z) are given in the table.
                     (Kriegman and Ponce, 1990)    New equation
Parabolic points     r
Flecnodal points     3r'r''
Viewing direction    tan² β
The same equations hold for algebraic surfaces of revolution undergoing a Gaussian
diffusion process, but this time R also depends on the scale σ of the Gaussian kernel.
The curves formed in the (σ, z) plane by the pairs (σ, z) satisfying these equations can be
traced using the algorithm given in Appendix A, and the corresponding visual event
curves are then easily constructed in the (σ, β) plane by substituting the corresponding
σ and z values in the viewing direction equation.
4.2.2. Multilocal Events
The construction of multilocal events is similar to the characterization of their local
counterparts, but it involves more unknowns. Indeed, the ruled surfaces associated with
multilocal events for solids of revolution touch their surfaces along a discrete set of
parallels determined by square systems of polynomial equations, and the directions of
the corresponding bitangents and tritangents trace parallels on the viewing sphere and
determine the structure of the aspect graph. The equations determining the limiting
bitangents, asymptotic bitangents and tritangents were derived in (Kriegman and
Ponce, 1990), and they are given in Table II, where they are also rewritten in terms
of the function R and its derivatives evaluated at the points z i where a bitangent
grazes the surface.
Note that the ruled surface defined by the limiting bitangents (resp. the asymptotic
bitangents) is defined by two equations in the z 1 and z 2 unknowns, i.e., the bitangency
condition given in the first row of Table II and the equation given in the second (resp.
third) row of this table. Likewise, the surface swept by the tritangents is defined by
three equations in the z 1 , z 2 and z 3 unknowns, namely the bitangency equation and the
two constraints given in the fourth row of the table. These equations can be simplified
by rewriting them in terms of the derivatives of R in z_1. More precisely, any analytic
function F can be written as a convergent Taylor series F(z_2) = Σ_{j≥0} F^(j)(z_1) (z_2 − z_1)^j / j!.
Table II. Equations defining the multilocal events for surfaces of revolution. Both the original equations
derived in (Kriegman and Ponce, 1990) and the corresponding equations when r(z) = √R(z)
are given in the table. The bitangency condition is verified by all multilocal events. Note that the
shorthand F_i is used in Table II and the rest of this section to denote the value of a function F
evaluated in z_i, e.g., R''_1 = R''(z_1).
(Kriegman and Ponce, 1990) New equation
Bitangents r 2
(R 0+R
Limiting
bitangents
Asymptotic
bitangents
R
Tritangents
(R 0+R
R
Viewing
direction tan 2
tan
In the case of a polynomial R of degree d, the series can be truncated since all the
derivatives of order greater than the degree of the polynomial are zero, and we have
R(z_2) = Σ_{j=0}^{d} R^(j)(z_1) (z_2 − z_1)^j / j!.   (3)
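As a quick sanity check of this truncation, the following SymPy fragment (an illustration only, with a hypothetical quartic R) verifies that a degree-d polynomial is reproduced exactly by its order-d expansion about z_1.

```python
# Verify that a polynomial equals its truncated Taylor expansion about z1.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
R = 3*z1**4 - 2*z1**2 + z1 - 5            # hypothetical degree-4 example, in z1
d = sp.degree(R, z1)

taylor = sum(R.diff(z1, j) * (z2 - z1)**j / sp.factorial(j) for j in range(d + 1))
assert sp.simplify(taylor - R.subs(z1, z2)) == 0
print("R(z2) reproduced from derivatives of R at z1 up to order", d)
```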
Using these remarks it is a simple matter to simplify the equations defining the
multilocal visual events. The new equations are given in Table III, where A_{d−4}, B_{2d−3},
C_{d−4} and D_{d−3} are respectively polynomials of degree d − 4, 2d − 3, d − 4 and d − 3 in
two variables. The coefficients are easily computed from Eq. (3) and Table II. Similar
calculations can also be done in the case where r is an arbitrary polynomial, see (Pae,
2000) for details.
Table
III. Simplified equations defining the multilocal events for surfaces of
revolution.
Simplified equation
Bitangents R 000+
Limiting
bitangents 2R1R
Asymptotic
bitangents R 000+
Tritangents
Viewing
direction tan 2 fi =2 R 00+
It should be noted that the new equations have a lower degree than the old ones. At
least as importantly, their form clarifies the geometric relationship between the surface
curves associated with local and multilocal events. By comparing Tables I and III we
see for example that the equation giving the viewing direction in the multilocal case
encompasses the corresponding equation for local events when z_2 = z_1. Likewise, the
equations characterizing the limiting bitangents reduce to the equations characterizing
the parabolic and flecnodal points when z_2 = z_1. This occurs when a ruling of the
limiting bitangent developable reduces to a single point (a cusp of Gauss) where the
parabolic and flecnodal curves intersect tangentially (Koenderink, 1990).
Along the same lines, the equations defining the asymptotic bitangents both reduce,
in the limit z_2 = z_1, to the equation of a flecnodal curve (this corresponds to the
intersection of the asymptotic bitangent developable and the flecnodal curve at a
biflecnode (Koenderink, 1990)). In this case the two equations given in Table III are
actually redundant and contain the flecnodal curve as an extraneous component. It
is easy to eliminate this redundancy (and lower the overall degree of the equations
determining cusp crossings) by replacing the second equation by
Finally, note that the equations defining the tritangents reduce to the equations
defining the asymptotic bitangents when z and to the equation defining the
flecnodal curve when z To remove these limit cases, we introduce
It is clear that both E d\Gamma4 (z are polynomials of degree d \Gamma 4,
and that
is a polynomial of degree
Thus tritangents can be characterized by the conditions
and the last equation determining the tritangents in Table III.
4.3. Implementation and Results
Armed with the equations derived in the previous section, we can now use the algorithm
described in Appendix A to trace the curves defined by these equations in IR^k,
with k = 2 for the (σ, z) pairs associated with parabolic and flecnodal curves,
k = 3 for the (σ, z_1, z_2) triples associated with limiting and asymptotic bitangent rays, and
k = 4 for the (σ, z_1, z_2, z_3) quadruples associated with tritangents. We then construct
the corresponding scale-space visual event curves in the (σ, β) plane by substituting
the corresponding values in the equations defining the singular viewing directions.
The regions delimited by these curves form a scale-space aspect graph, and sample
aspects for each one of them can also be constructed via curve tracing, as described in
(Kriegman and Ponce, 1990). The proposed approach has once again been implemented
using MATHEMATICA and homotopy continuation. We have conducted experiments
using two solids of revolution S_1 and S_2. The first one is defined by a polynomial R(z),
and the corresponding profile curve is shown in Figure 14.
Figure 14. The profile curve r(z) = √R(z) associated with S_1.
The second solid of revolution, S_2, is defined by another polynomial,
and the corresponding profile is shown in Figure 15.
Figure 15. The profile curve associated with S_2.
Figure 16 shows the scale-space aspect graph of S_1 plotted in the (σ, β) plane, along
with a close-up of a key area, and the corresponding aspects. The vertical lines drawn
in Figure 16 are scale-space critical events, where the number of aspects changes.
Figure
17 shows the same plots for S 2 . These figures demonstrate that the number of
aspects generally decreases as the polynomial density is smoothed: indeed, although
the aspect graph of S 2 is initially quite complex with several almost indistinguishable
aspects near a triple point and the butterfly event associated with a biflecnode on the
surface (see (Koenderink, 1990) and Figure 17), it simplifies a great deal after this
event. It should be noted however that the number of aspects may locally increase for
a while as σ increases (e.g., after crossing the first vertical line in the close-up shown
in
Figure
16(top right)).
[Figure 16 in-figure labels: Swallowtail, Beak-to-Beak, Lip, Tangent crossing, Separation of the surface, Disappearance of parabolic lines, Disappearance of the smaller part of the surface.]
Figure
16. Top: the scale-space aspect graph of S1 and a close-up of the region marked by a rectangle;
Bottom: the aspects of S1 . The "aspect" (I) is a drawing of S1 in the area of the close-up shown in
the top-right part of the figure; close-ups of the actual aspects in this region are labeled (j) to (p).
[Figure 17 in-figure labels: disappearance of parabolic lines; merging of a pair of swallowtails (butterfly); aspect labels.]
Figure 17. Top: the aspect graph of S_2, together with close-ups of the four regions marked by rectangles.
The visual events are indicated by numbers: 1 = beak-to-beak, 2 = lip, 3 = swallowtail, 4 = tangent
crossing, 5 = cusp crossing and 6 = triple point. Bottom: the aspects of S_2. The aspect (b) is at the
butterfly singularity shown above in the bottom-right rectangular region. The bottom two rows show
close-ups of the aspects near a triple point (GH) as well as two close-ups (b') and (b'') of the butterfly
and close-ups of the aspects near it.
5. Discussion
We have presented the first (to the best of our knowledge) implemented algorithm for
computing the critical events of evolving smooth surfaces and of their aspect graph.
Along the way, we have derived simple equations for the cusps of Gauss of implicit
surfaces (although those were without doubt known previously, it is difficult to find
such equations in the literature), and simpler, lower-degree equations for the multilocal
events of (static) solids of revolution than those given in (Kriegman and Ponce, 1990).
At this point, it is worth thinking about what the experiments conducted in this
paper have taught us: first, they suggest that algebraic surfaces evolve under parameterized
one-parameter families of deformations in a relatively simple and intuitive
manner. Thus there may be hope for using the sequence of critical events associated
with an evolving surface as a guide to constructing object taxonomies, individual
instances with similar sequences being grouped in the same class, 2 even if this involves
surface models and computational approaches completely different from those
presented in this article. Our (limited) experiments have shown, on the other hand,
that the scale-space aspect graph has a very complex structure at fine scales, with new
events sometimes occurring as scale increases, even though a great deal of simplification
appears to take place at coarse scales. In this context, the causality of the diffusion
process with respect to the various singularities should also be investigated (e.g., can
new parabolic curves appear as scale increases?), as has been done for zero-crossings
of the Laplacian of Gaussian-smoothed two-dimensional images (Yuille and Poggio,
1986).
It would be very interesting to generalize the results obtained in this paper to more
general settings and surface models, for example to surfaces defined as the zero sets of
empirical density functions (Kergosien, 1991; Noble et al., 1997), or, in the case of scale-space
aspect graphs, to algebraic surfaces that are not of revolution. This would require
efficient methods for monitoring the sign of the equations defining critical events at
each voxel of the empirical data in the first case, and, in the second case, trying to
simplify the multilocal visual event equations derived in (Petitjean et al., 1992), as
was done here for solids of revolution, to keep their degree and the computational
complexity of the corresponding scale-dependent algebraic calculations under control.
Along the same lines, it would be very interesting to assess the size of the various
scale-dependent structures discussed in this paper as a function of the degree of the
corresponding algebraic surfaces, using for example tools from enumerative geometry
and intersection theory (Petitjean, 1995). Surface evolution and aspect graph evolution
are obviously related, and their exact relationship should also be spelled out. Finally,
non-linear evolution processes independent of the underlying density (Kimia et al.,
1995; Sethian, 1996) are of course also of interest, as are better models of the imaging
process (i.e., blurring the image instead of the object), and they should be investigated.
Acknowledgments. This work was partially supported by the National Science Foundation
under grant IRI-9634312 and by the Beckman Institute at the University of
Illinois at Urbana-Champaign.
These evolutionary sequences are clearly related to the idea of morphological scripts in
(Koenderink, 1990).
Appendix
A. Curve Tracing Algorithm
We recall in this section the curve tracing algorithm of (Kriegman and Ponce, 1991;
Petitjean et al., 1992). We consider an algebraic curve Γ, defined as in Eq. (1) by n
polynomial equations in n+1 unknowns (we assume that t is fixed here). The algorithm
traces Γ in four stages (Figure 18):
1. Compute all extremal points (including singular points) of Γ in some direction,
say x_0. (These are marked E_1 and E_2 in Figure 18.)
2. Compute all intersections of Γ with the hyperplanes orthogonal to the x_0 axis at
the extremal points. (In the figure, the hyperplanes are simply vertical lines, and the
only intersections in this case are E_1 and E_2.)
3. For each interval of the x_0 axis delimited by these hyperplanes, intersect Γ and
the hyperplane passing through the mid-point of the interval to obtain one sample for
each real branch. (The samples are denoted S_1 to S_4 in the figure.)
4. March numerically from the sample points found in step 3 to the intersection
points found in step 2 by predicting new points through Taylor expansion and
correcting them through Newton iterations.
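The following is a minimal numerical sketch of the prediction/correction idea of step 4, written for a plane curve F(x_0, x_1) = 0 with first-order prediction and Newton correction in x_1. The example curve is a hypothetical circle; the paper's implementation works on the full system (1) and inverts the Jacobian J.

```python
# Minimal predictor/corrector marching along a plane curve F(x0, x1) = 0.
import numpy as np

def F(x0, x1):                 # hypothetical example: unit circle
    return x0**2 + x1**2 - 1.0

def dF_dx0(x0, x1):
    return 2.0 * x0

def dF_dx1(x0, x1):
    return 2.0 * x1

def march(x0, x1, x0_end, steps=100, newton_iters=5):
    """March from a sample point toward x0_end while staying on F = 0."""
    h = (x0_end - x0) / steps
    points = [(x0, x1)]
    for _ in range(steps):
        # predict: follow the tangent dx1/dx0 = -F_x0 / F_x1
        x1 += h * (-dF_dx0(x0, x1) / dF_dx1(x0, x1))
        x0 += h
        # correct: Newton iterations in x1 with x0 frozen
        for _ in range(newton_iters):
            x1 -= F(x0, x1) / dF_dx1(x0, x1)
        points.append((x0, x1))
    return np.array(points)

branch = march(0.0, 1.0, 0.9)                            # upper branch from (0, 1)
print(np.max(np.abs(F(branch[:, 0], branch[:, 1]))))     # residual stays tiny
```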
Figure
18. An example of curve tracing in IR 2 . This curve has two extremal points E1 ; E2 , and four
regular branches with sample points S1 to S4 . E2 is singular. (Reprinted from (Petitjean et al., 1992).)
Step 1 requires the computation of the extrema of Γ in the x_0 direction. These
points are characterized by a system of n+1 polynomial equations in n+1 unknowns,
obtained by adding the equation |J| = 0 to (1), where J denotes the Jacobian matrix
(∂P_i/∂x_j) for i, j = 1, ..., n. They are found using the homotopy continuation method
(Morgan, 1987). Steps 2 and 3 require computing the intersections of a curve with a
hyperplane. Again, these points are the solutions of polynomial equations, and they
are found using homotopy continuation. The curve is actually traced (in the classical
sense of the term) in step 4, using a classical prediction/correction approach based on a
Taylor expansion of the P i 's. This involves inverting the matrix J which is guaranteed
to be non-singular on extrema-free intervals.
B. Gaussian Diffusion of Polynomial Densities
We give in this section explicit formulas for convolving a polynomial density with a
Gaussian kernel. We start with the univariate case, where
G_σ(x) = (1/(σ√(2π))) exp(−x²/(2σ²))
is the Gaussian kernel with standard deviation σ, and P(x) = Σ_{i=0}^{d} a_i x^i is a polynomial
of degree d. The convolution is given by
(G_σ ⊗ P)(x) = Σ_{i=0}^{d} a_i Σ_{j=0}^{i} C(i, j) (−1)^j M_j x^{i−j},
where C(i, j) is the binomial coefficient and M_j is the moment of order j of the Gaussian kernel, defined by
M_j = ∫_{−∞}^{+∞} y^j G_σ(y) dy.
This moment is zero for odd values of j. For even values j = 2p, it is easily shown that
M_{2p} = (2p − 1) σ² M_{2p−2}, and proceeding by induction we finally obtain
M_{2p} = σ^{2p} ∏_{i=1}^{p} (2i − 1).
Now let us switch to the trivariate case of interest, with
G_σ(x, y, z) = (2πσ²)^{−3/2} exp(−(x² + y² + z²)/(2σ²)).
The moments of this kernel are now given by
M_{ijk} = ∫∫∫ x^i y^j z^k G_σ(x, y, z) dx dy dz.
If one of the indices i, j or k is odd, M_{ijk} is equal to zero. When all indices are even,
we use the separability of the Gaussian distribution to write
M_{ijk} = M_i M_j M_k = σ^{i+j+k} ∏_{r=1}^{i/2} (2r − 1) ∏_{s=1}^{j/2} (2s − 1) ∏_{u=1}^{k/2} (2u − 1).
This gives the following procedure for computing P_σ = G_σ ⊗ P:
1. Expand P(x + u, y + v, z + w) into an expression of the form Σ_{i,j,k} C_{ijk}(x, y, z) u^i v^j w^k,
where C_{ijk}(x, y, z) is polynomial in x, y, and z.
2. Calculate M_{ijk} using the formula given above and let the result be d_{ijk} σ^{i+j+k}.
3. The desired convolution is
P_σ(x, y, z) = Σ_{i,j,k} d_{ijk} σ^{i+j+k} C_{ijk}(x, y, z).
Obviously, P_0 = P. In addition, it is easy to show that P_σ is a polynomial in x,
y, z, and σ with the same degree as P. As an example, the polynomial P_σ associated
with the density function 4x² + ⋯ that defines the surface
of the dimple in Section 3 can be computed in this fashion.
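For concreteness, here is a SymPy sketch of the univariate case of this computation, using only the even moments M_{2p} = σ^{2p}(2p − 1)!!. The example polynomial is hypothetical, and the code is an illustration rather than the MATHEMATICA implementation mentioned in Section 3.3.

```python
# Gaussian smoothing of a univariate polynomial via the moment formula above.
import sympy as sp

x, sigma = sp.symbols('x sigma', real=True)

def gaussian_smooth(P):
    P = sp.Poly(P, x)
    d = P.degree()
    coeffs = P.all_coeffs()[::-1]                 # a_0, a_1, ..., a_d
    def moment(j):                                # E[y^j] for y ~ N(0, sigma^2)
        return 0 if j % 2 else sigma**j * sp.factorial2(j - 1)
    smoothed = sum(coeffs[i] * sum(sp.binomial(i, j) * (-1)**j * moment(j) * x**(i - j)
                                   for j in range(i + 1))
                   for i in range(d + 1))
    return sp.expand(smoothed)

P = x**4 - 3*x**2 + 2                             # hypothetical degree-4 density
print(gaussian_smooth(P))
# Same degree as P, coefficients polynomial in sigma; at sigma = 0 it reduces to P:
print(sp.simplify(gaussian_smooth(P).subs(sigma, 0) - P))   # -> 0
```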
--R
Catastrophe Theory.
of Comp.
IEEE Trans.
Geometry and the Imagination.
'La famille des projections orthogonales d'une surface et ses singularit'es'.
Solid Shape.
Curves and Surfaces.
Solving Polynomial Systems using Continuation for Engineering and Scientific Problems.
IEEE Conf.
'G'eom'etrie 'enum'erative et contacts de vari'et'es lin'eaires: application aux graphes d'aspects d'objets courbes'.
of Comp.
Level set methods: evolving interfaces in geometry
Master's thesis
Structural Stability and Morphogenesis.
Differential geometry.
--TR
The curvature primal sketch
Scaling theorems for zero crossings
On the classification of views of piecewise smooth objects
Marching cubes: A high resolution 3D surface construction algorithm
A process-grammar for shape
Solid shape
Computing exact aspect graphs of curved objects
Visibility, occlusion, and the aspect graph
A new curve tracing algorithm and some applications
Efficiently Computing and Representing Aspect Graphs of Polyhedral Objects
Global bifurcation sets and stable projections of nonsingular algebraic surfaces
Computing exact aspect graphs of curved objects
Shapes, shocks, and deformations I
Parabolic curves of evolving surfaces
Ridges, crests and sub-parabolic lines of evolving surfaces
Finite-Resolution Aspect Graphs of Polyhedral Objects
On computing aspect graphs of smooth shapes from volumetric data
Generic Sign Systems in Medical Imaging
The Scale Space Aspect Graph | algebraic surfaces;differential geometry;aspect graphs;shape representation;scale space |
513398 | Affine Type A Crystal Structure on Tensor Products of Rectangles, Demazure Characters, and Nilpotent Varieties. | Answering a question of Kuniba, Misra, Okado, Takagi, and Uchiyama, it is shown that certain higher level Demazure characters of affine type A, coincide with the graded characters of coordinate rings of closures of conjugacy classes of nilpotent matrices. | Introduction
In [9, Theorem 5.2] it was shown that the characters of certain level one Demazure
modules of type A (1)
decomposed as linear combinations of irreducible
characters of type A_{n−1}, have coefficients given by Kostka-Foulkes polynomials in
the variable q = e^{−δ}, where δ is the null root. The key steps in the proof are that
1. The Demazure crystals are isomorphic to tensor products of classical b
sl n crystals
indexed by fundamental weights [10].
2. The generating function over these crystals by weight and energy function is
equal to the generating function over column-strict Young tableaux by weight
and charge [22].
3. The Kostka-Foulkes polynomial is the coefficient of an irreducible sl n -character
in the tableau generating function [1] [17].
The main result of this paper is that for a Demazure module of arbitrary level
whose lowest weight is a multiple of one of those in [9], the corresponding coefficient
polynomial is the Poincar'e polynomial of an isotypic component of the coordinate
ring of the closure of the conjugacy class of a nilpotent matrix. The Poincar'e
polynomial is the q-analogue of the multiplicity of an irreducible gl(n)-module in a
tensor product of irreducible gl(n)-modules in which each factor has highest weight
given by a rectangular (that is, a multiple of a fundamental) weight. These poly-
nomials, which possess many properties generalizing those of the Kostka-Foulkes,
have been studied extensively using algebro-geometric and combinatorial methods
[8] [13] [24] [25] [30] [31].
The connection between the Demazure modules and the nilpotent adjoint orbit
closures can be explained as follows. Let X - be the Zariski closure of the conjugacy
class of the nilpotent Jordan matrix with block sizes given by the transpose partition
- t of -, that is,
Lusztig gave an embedding of the variety X_λ as an open dense subset of a P-stable
Schubert variety Y_λ in the affine flag variety of SL_n, where P is the parabolic subgroup
given by "omitting the reflection r_0" [19]. The desired level l Demazure module,
viewed as an sl_n-module, is isomorphic to the dual of the space of global sections
H^0(Y_λ, L_0^{⊗l}), where L_0 is the restriction to Y_λ of the homogeneous line bundle
affording the fundamental weight Λ_0.
The proof of the main result entails generalizations of the three steps in the proof
of [9, Theorem 5.2]. First, the methods of [9] may be used to show that the Demazure
crystal is isomorphic to a classical b
sl crystal B that is a tensor product of crystals
of the form B k;l (notation as in [7]). We call B k;l a rectangular crystal since it
is indexed by the weight lΛ_k that corresponds to the rectangular partition with k
rows and l columns.
Second, it is shown that the crystal B is indexed by sequences of Young tableaux
of rectangular shape equipped with a generalized charge map. In particular, we
give explicit descriptions in terms of tableaux and the Robinson-Schensted-Knuth
correspondence, of
ffl The zero-th crystal raising operator ee 0 acting on B, which involves the generalized
cyclage operators of [24] on LR tableaux and a promotion operator
on column-strict tableaux.
ffl The combinatorial R-matrices on a tensor product of the form B k1 ;l
which are given by a combination of the generalized automorphism of conjugation
[24] and the energy function.
ffl The energy function, which equals the generalized charge of [13] [24].
Moreover, it is shown that every generalized cocyclage relation [24] on LR tableaux
may be realized by ee 0 . The formula for the corresponding change in the energy
function by ee 0 was known [22] in the case that all rectangles are single rows or all
are single columns.
Third, it must be shown that the tableau formula coincides with the Poincar'e
polynomial. This was accomplished in [24], where it is shown that the tableau
formula satisfies a defining recurrence of Weyman [31] [32] for the Poincar'e polynomials
that is closely related to Morris' recurrence for Kostka-Foulkes polynomials
[21].
As an application of the formula for the energy function, we give a very simple
proof of a monotonicity property for the Poincar'e polynomials (conjectured by A. N.
Kirillov) that extends the monotonicity property of the Kostka-Foulkes polynomials
that was proved by Han [4].
Thanks to M. Okado for pointing out the reference [26] which has considerable
overlap with this paper and [24].
2. Notation and statement of main result
2.1. Quantized universal enveloping algebras. For this paper we only require
the following three algebras: U q (sl n
sl n ).
Let us recall some definitions for quantized universal enveloping algebras taken
from [5] and [6]. Consider the following data: a finitely generated Z-module P
(weight lattice), a set I (index set for Dynkin diagram), elements
roots) and f h (basic coroots) such that
is a generalized Cartan matrix, and a symmetric form (\Delta; \Delta) : P \Theta P ! Q such that
Q\Omega P .
This given, let U q ( b
be the quantized universal enveloping algebra, the Q(q)-
algebra with generators fe and relations as in [6,
Section 2].
For U q ( b
1g be the index set for the Dynkin diagram,
the Cartan matrix of type A (1)
(fundamental weights) and let P have dual basis fh
Define the elements
so that (hh i ; ff j i) i;j2I is the Cartan matrix of type A (1)
the symmetric Q-valued form (\Delta; \Delta) by (ff
the quantized universal enveloping algebra for
this data is U q ( b
sl
be the dominant weights. For
be the irreducible integrable highest weight U q ( b
sl n )-module of
highest weight , B( ) its crystal graph, and u 2 B( ) the highest weight vector.
For U 0
sl n ), let I and (a ij ) be as above, but instead of P use the "classical
weight lattice" P cl
i2I Z i (where by abuse of notation the image of
in P cl is also denoted i ). The basic coroots a Z-basis of P
cl .
The basic roots are fff denotes the image of ff i in P cl for
Note that the basic roots are linearly dependent. The pairing and symmetric form
are induced by those above. The algebra for this data is denoted U 0
may
be viewed as a subalgebra of U q ( b
its generators are a subset of those of
U
its relations map to relations. Let P
cl
the U q ( b
sl n)-module V ( ) is a U 0
sl n)-module by restriction and weights are taken
modulo ffi .
For U q (sl n ), let I be the index set for the Dynkin diagram,
the restriction of the above Cartan matrix to J \Theta J , and P cl = P cl =Z 0
the weight lattice. The basic coroots
cl Jg. form a Z-basis of
cl
. The basic roots are fff cl Jg. The algebra for this data
is U q (sl n ), which can be viewed as a subalgebra of U 0
sl
cl
be the dominant integral weights. For
cl let V - be the irreducible U q (sl n )-
module of highest weight -, and B - its crystal graph.
Denote by W the Weyl group of the algebra b
sl n and by W that of sl n . W is the
subgroup of automorphisms of P generated by the simple reflections fr
where
cl . W acts faithfully on the affine subspace
cl . For for the map X ! X given by translation by -. Let
cl . Then the action of r i on X for i 2 J is given by
is the reflection through the hyperplane orthogonal to '. Then W
the element - 2 Q acts by - .
For the Demazure module of lowest weight w is defined
by Vw ( is a generator of the (one dimensional) extremal
weight space in V ( ) of weight w and U q (b) is the subalgebra of U q ( b
by the e i and h 2 P .
2.2. Main result. Let λ be a partition of n. The coordinate ring C[X_λ] of the
nilpotent adjoint orbit closure X_λ has a graded sl_n-action induced by matrix conjugation
on X_λ. For ν ∈ P̄_cl^+, define the Poincaré polynomial of the ν-th isotypic
component of C[X_λ] by
K_{ν λ}(q) = Σ_{d≥0} q^d [V_ν : C[X_λ]_d],
where C[X_λ]_d is the homogeneous component of degree d and [V_ν : C[X_λ]_d] denotes the
multiplicity of V_ν in C[X_λ]_d.
Partitions with at most n parts are projected to dominant integral weights of
sl_n by λ ↦ wt_sl(λ), where
wt_sl(λ) = Σ_{i=1}^{n−1} (λ_i − λ_{i+1}) Λ̄_i ∈ P̄_cl^+.
Remark 1. Warning: this is not the Kostka polynomial, but a generalization;
see [31]. In the special case that - is a partition of n with at most n parts then
which is a renormalization of the Kostka-Foulkes polynomial
with indices - t and -.
Theorem 2. Let l be a positive integer, λ a partition of n, and w_λ ∈ W the
translation by the antidominant weight −w_0 wt_sl(λ^t) ∈ P̄_cl, where w_0 is the longest
element of the finite Weyl group. Then
e^{−lΛ_0} ch V_{w_λ}(lΛ_0) = Σ_μ K_{wt_sl(μ) λ}(q) ch V_{wt_sl(μ)},
where μ runs over the partitions of the multiple ln of n with at most n parts.
3. Crystal structure on tensor products of rectangles
The goal of this section is to give explicit descriptions of the classical b
sl n crystal
structure on tensor products of rectangular crystals and their energy functions.
This is accomplished by translating the theory of sl n crystals and classical b
crystals in [6] [7] [11] [22] into the language of Young tableaux and the Robinson-
Schensted-Knuth (RSK) correspondence.
3.1. Crystals. This section reviews the definition of a weighted crystal [6] and
gives the convention used here for the tensor product of crystals.
A P-weighted I-crystal is a weighted I-colored directed graph B, that is, a set
equipped with a weight function wt : B → P and directed edges colored by the set
I, satisfying the following properties.
(C1) There are no multiple edges; that is, for each i ∈ I and b, b' ∈ B there is at
most one edge colored i from b to b'.
If such an edge exists, this is denoted b →_i b', or equivalently b' = f̃_i(b) and b = ẽ_i(b'),
by abuse of the notation of a function B → B. It is said that f̃_i(b) is defined, or equivalently
that ẽ_i(b') is defined, if the edge exists.
(C2) ε_i(b) = max{m ≥ 0 : ẽ_i^m(b) is defined} and
φ_i(b) = max{m ≥ 0 : f̃_i^m(b) is defined}.
(C3) If f̃_i(b) is defined then wt(f̃_i(b)) = wt(b) − α_i.
Equivalently, wt(ẽ_i(b)) = wt(b) + α_i whenever ẽ_i(b) is defined.
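As a toy illustration of these axioms (not part of the original text), the following Python fragment checks (C3) on the two-element sl_2 crystal with the single edge 1 → 2, recording weights as contents in Z².

```python
# Check wt(f_i(b)) = wt(b) - alpha_i on a tiny explicit crystal.
f_edges = {(1, 1): 2}                 # (i, b) -> f_i(b): the edge 1 --f_1--> 2

def wt(b):                            # weight (content) of the one-letter element b
    return (1, 0) if b == 1 else (0, 1)

alpha = {1: (1, -1)}                  # simple root alpha_1 in the same coordinates

for (i, b), fb in f_edges.items():
    rhs = tuple(w - a for w, a in zip(wt(b), alpha[i]))
    assert wt(fb) == rhs, "axiom (C3) fails"
print("wt(f_i(b)) = wt(b) - alpha_i holds on this example")
```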
is a P-weighted I-crystal for 1 - j - m, the Cartesian product Bm \Theta \Delta \Delta \Delta\ThetaB 1
can be given a crystal structure as follows; this crystal is denoted
The convention used here is opposite that in much of the literature but is convenient
for the tableau combinatorics used later. Let b = b_m ⊗ ⋯ ⊗ b_1 ∈ B.
The weight function on B is given by wt(b) = wt(b_m) + ⋯ + wt(b_1).
The root operators ẽ_i and f̃_i
and the functions ε_i and φ_i are defined by the "signature rule".
Given b = b_m ⊗ ⋯ ⊗ b_1, construct a biword (sequence of pairs of letters) consisting
of φ_i(b_j) copies of the biletter (j, −)
and ε_i(b_j) copies of the biletter (j, +),
for all j,
sorted in weakly increasing order with respect to a fixed order on biletters.
This biword is now repeatedly reduced by removing adjacent biletters whose lower
letters are +− in that order. If + and − are viewed as left and right parentheses
then this removes matching pairs of parentheses. At the end one obtains a biword
whose lower word has the form −^s +^t. If s > 0 (resp. t > 0),
let j_− (resp. j_+) be the upper letter corresponding to the rightmost −
(resp. leftmost +) in the reduced biword, and define
f̃_i(b) = b_m ⊗ ⋯ ⊗ b_{1+j_−} ⊗ f̃_i(b_{j_−}) ⊗ b_{j_−−1} ⊗ ⋯ ⊗ b_1,
respectively,
ẽ_i(b) = b_m ⊗ ⋯ ⊗ b_{1+j_+} ⊗ ẽ_i(b_{j_+}) ⊗ b_{j_+−1} ⊗ ⋯ ⊗ b_1.
A morphism of P-weighted I-crystals is a map g that preserves weights
and satisfies g( e
I and b 2 B, that is, if e
f i (b) is defined
then e
is, and the above equality holds.
It is easily verified that the P-weighted I-crystals form a tensor category.
We only require the following kinds of crystals.
1. The crystal graphs of integrable U q ( b
sl n )-modules are P-weighted I-crystals
and are called b
sl n -crystals.
2. The crystal graphs of U 0
sl n)-modules that are either integrable or are finite-dimensional
and have a weight space decomposition, are P cl -weighted I-
crystals and are called classical b
sl n -crystals.
3. The crystal graphs of integrable U q (sl n )-modules are P cl -weighted J-crystals
and are called sl n -crystals.
3.2. Crystal reflection operator and Weyl group action. Let B be an sl n
crystal and i 2 J . Write
e
e
e
e \Gammap
The Weyl group W acts on B by r i er i (b) for i 2 J . It is obvious that er i is
an involution and that e r i and e r j commute for but not at all obvious
6 MARK SHIMOZONO
that the e
r i satisfy the other braid relation. A combinatorial proof of this fact is
indicated in [18] for the action of W on an irreducible sl n crystal.
3.3. Irreducible sl n crystals. Let - 1
be a partition
of length at most n. Let V - be the irreducible U q (sl n )-module of highest weight
wt sl (-) and B - its crystal. In [11] the structure of the sl n crystal B - is determined
explicitly. The crystal B - may be indexed by the set CST(-) of column-strict
tableaux of shape - with entries in the set ng. The combinatorial
construction yielding the action of the crystal operators e e i and e
already known. In a 1938 paper, in the course of proving
the Littlewood-Richardson rule, G. de B. Robinson gave a form of the Robinson-
Schensted-Knuth (RSK) correspondence which is defined by giving the value of the
map on sl n -highest weight vectors and then extending it by via canonical sequences
of raising operators e
f i [23, Section 5]; see also [20, I.9] where Robinson's proof is
cleaned up.
Suppose first that is the crystal of the defining representation of
sl n . This crystal is indexed by the set ng and e
defined if and
only if and in that case e
Next consider the tensor product (B (1)
)\Omega m . It may be indexed by words
in the alphabet [n], where u Its sl n crystal structure
is defined by the signature rule. This case of the signature rule is given in [18].
Now it is necessary to introduce notation for Young tableaux.
Some definitions are required. The Ferrers diagram D(-) is the set of pairs of
integers
g. A skew shape is the set
difference of the Ferrers diagrams D(-) and D(-) of the partitions - and -. If D
and E are skew shapes such that D has c columns and E has r rows, then define
the skew shape
In other words,
D\Omega E is the union of a translate of D located to the southwest of
a translate of E.
A tableau of (skew) shape D is a function
is depicted as a partial matrix whose (i; j)-th entry is T (i;
Denote by shape(T ) the domain of T . The tableau T is said to be column-strict
if T (i;
CST(D) be the set of column-strict tableaux of shape D.
The content of a tableau is the sequence
is the number of occurrences of the letter i in T . Let CST(D; fl)
denote the set of column-strict tableaux of shape D and content fl. The (row-
reading) word of the tableau T is the word given by word(T
is the word obtained by reading the i-th row of T from left to right. Say that the
word u fits the skew shape D if there is a column-strict tableau (necessarily unique)
whose row-reading word is u.
Remark 3. Let D be a skew shape, u a word in the alphabet [n] and 1
It is well-known and easy to verify that if ee i (u) is defined, then u fits D if and only
if ee i (u) does. This given, if T is a column-strict tableau of shape D and ee i (word(T
is defined, then let ee i (T ) be the unique column-strict tableau of shape D such that
)). The same can be done for e
Thus the set CST(D) is an sl n crystal; call it B D . When this is the
crystal B - .
3.4. Tensor products of irreducible sl n crystals and RSK. Let D be a skew
diagram and B D the sl n crystal defined in the previous section. The RSK correspondence
yields a combinatorial decomposition of B into irreducible sl n crystals.
The RSK map can be applied to tensor products of irreducible sl n crystals. The
goal of this section is to review a well-known parametrizing set for the multiplicity
space of such a tensor product, by what we shall call Littlewood-Richardson (LR)
tableaux.
be a sequence of positive integers summing to n and
sequence of partitions such that R i has j i parts, some of
which may be zero. Let A 1 be the first j 1 numbers in [n], A 2 the next j 2 , and so
on. Recall the skew shape
embedded in the plane so that A i gives
the set of row indices for R i . Let
is the length of
the i-th row of
\Delta\Omega R 1 , that is, fl is obtained by juxtaposing the partitions
. The tensor product crystal may be viewed as a skew crystal:
\Delta\Omega
for the weakly
increasing word (of length fl r ) comprising the r-th row of b, for 1 - r - n. The
word of b regarded as a skew column-strict tableau, is given by
Recall Knuth's equivalence on words [12]. Say that a skew shape is normal
(resp. antinormal) if it has a unique northwest (resp. southeast) corner cell [3]. A
normal skew shape is merely a translation in the plane of a partition shape, and
an antinormal shape is the 180-degree rotation of a normal shape. For any word
v, there is a unique (up to translation) column-strict tableau P (v) of normal shape
such that v is Knuth equivalent to the word of P (v). There is also a unique (up to
skew column-strict tableau P& (v) of antinormal shape such that v is
Knuth equivalent to the word of P& (v).
The tableau P (v) may be computed by Schensted's column-insertion algorithm
[27]. For a subinterval A ae [n] and a (skew) column-strict tableau T , define T j A to
be the skew column-strict tableau obtained by restricting T to A, that is, removing
from T the letters that are not in A. Define the pair of column-strict tableaux
By definition shape(Q(b)). It is easy to show that Q(b) is column-strict
and of content fl. This gives an embedding
CST(-) \Theta CST(-; fl)
(3.
It is well-known that this is a map of sl n crystals. That is, if g is any of e e i , e
For the case this fact is in [18].
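For readers who want to experiment, the fragment below computes P(v) by Schensted insertion. The paper cites column insertion; the row-bumping variant used here also yields a tableau whose word is Knuth equivalent to v, and P(v) is the unique such tableau of normal shape. The recording tableau Q is not computed in this sketch.

```python
# Schensted row insertion: P(v) for a word v over positive integers.
import bisect

def insert(tableau, x):
    """Row-insert the letter x into a semistandard tableau (list of rows)."""
    rows = [list(r) for r in tableau]
    for row in rows:
        pos = bisect.bisect_right(row, x)   # leftmost entry strictly greater than x
        if pos == len(row):
            row.append(x)
            return rows
        row[pos], x = x, row[pos]           # bump and continue in the next row
    rows.append([x])
    return rows

def P(word):
    tab = []
    for x in word:
        tab = insert(tab, x)
    return tab

print(P([3, 1, 2, 2, 1]))   # -> [[1, 1, 2], [2], [3]]
```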
Let us describe the image of the map (3.1). For the partition -
and a permutation w in the symmetric group Sn , the key tableau Key(w-) of
content w-, is the unique column-strict tableau of shape - and content w-. In the
above notation for the sequence of partitions R, for
in the alphabet A j .
Say that a word u in the alphabet [n] is R-LR (short for R-Littlewood-Richardson)
is the restriction of the word u to the
subalphabet A j ae [n]. Say that a (possibly skew) column-strict tableau is R-LR
if its row-reading word is. Denote by LRT(-; R) the R-LR tableaux of partition
shape - and
The following theorem is essentially a special case of [32, Theorem 1] which is a
strong version of the classical rule of Littlewood and Richardson [16].
Theorem 4. The map (3.1) gives a bijection
CST(-) \Theta LRT(-; R):
3.5. Kostka crystals. In the case that
, we call
R a Kostka crystal. The set LRT(-; R) is merely the set CST(-; fl).
be a sequence of rectangles as usual. Define the Kostka
crystal B rows(R) by the sequence of one row partitions rows(R) whose r-th partition
is given by (fl r ) where fl r is the length of the r-th row of the skew shape
Letting r the r-th row of b, there is the obvious sl n crystal
embedding
\Delta\Omega
In fact, the RSK correspondence (3.3) may be defined using the commutativity of
the diagram in (3.2) for
f i and all i 2 J , and giving its values on the sl n -highest
weight elements in B [23]. Suppose b 2 B is such that word(b) is of sl n -highest
weight, that is, ffl Such words are said to possess the lattice
property. In this case content(word(b)) must be a partition, say -, and
Key(-). For the recording tableau, write
r is the r-th row of b viewed as a tableau of the skew shape
Then Q(b) is the column-strict tableau of shape equal to content(b), whose i-th row
contains m copies of the letter j if and only if the word v j contains m copies of the
letter i, for all i and j. In particular, if elements the same sequences
of raising operators, then
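A direct way to test the lattice property of a word, under the same word convention as in the earlier sketch, is to check that every letter i+1 is matched by a later letter i, so that ε_i vanishes for all i:

```python
# Lattice (highest-weight) test: epsilon_i(word) = 0 for all i in {1,...,n-1}.
def epsilon(word, i):
    unmatched = 0
    for x in word:
        if x == i + 1:
            unmatched += 1
        elif x == i and unmatched > 0:
            unmatched -= 1
    return unmatched

def is_lattice(word, n):
    return all(epsilon(word, i) == 0 for i in range(1, n))

print(is_lattice((2, 1, 1, 3, 2, 1), 3))   # True: content (3, 2, 1), a partition
print(is_lattice((1, 2), 3))               # False: epsilon_1 = 1
```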
3.6. Rectangle-switching bijections. From now on we consider only crystals
is the partition with j j rows and - j columns for 1 - j - m.
Consider the case
. Since the U q (sl n )-module V
R2\Omega
follows that there is a unique sl n crystal
isomorphism
It is defined explicitly in terms of the RSK correspondence as follows. By the above
multiplicity-freeness, for any partition -,
(R 1
Thus there is a unique bijection
For later use, extend - to a bijection from the set of (R 1 words to the set
of (R 2 words by
where Q(u) is the standard column-insertion recording tableau (that is,
Q(u) with u regarded as a word in the tensor product (B (1)
)\Omega N where N is the
length of u).
- is the rectangular generalization of an automorphism of conjugation. oe is
defined by the commutative diagram
R1\Omega
\Gamma\Gamma\Gamma\Gamma!
oe
R2\Omega
\Gamma\Gamma\Gamma\Gamma!
In other words, for all
Now the tensor product of rectangular
crystals. Let w 2 Sm be a permutation in the symmetric group on m letters. Write
where wR is the sequence of rectangles
Write oe R i ;R j for the action of the above sl n crystal isomorphism at consecutive
tensor positions in
\Delta\Omega
. Then the isomorphisms oe R i ;R j
satisfy a Yang-Baxter identity
j\Omega id)
This is a consequence of the corresponding difficult identity for bijections - R i ;R j
on recording tableaux, defined and conjectured in [13] and proven in [24, Theorem
9 (A5)]. By composing maps of the form oe R i ;R j , it is possible to well-define sl n
crystal isomorphisms
such that oe These bijections satisfy
P(oe R;wR
Q(oe R;wR
(3.
is the shape-preserving bijection defined in
[24].
Remark 5. Suppose is a Kostka crystal, b 2 B, and
with with R weakly increasing word of length fl j for 1
\Delta\Omega b 0
1 . In this case - R i+1 ;R i is
an automorphism of conjugation acting in multiplicity space. By definition b 0
It follows that b 0 can be computed from
b by a jeu-de-taquin on the two row skew tableau with word b i+1 b i .
3.7. b
sl n crystal structure on rectangular crystals. Suppose the sequence R
consists of a single rectangular partition with k rows and l columns, so that B
k;l . In [7] the existence of a unique classical b
sl n crystal structure on B k;l was
proved. The sl n crystal structure has already been described in Section 3.3. Using
the properties of the perfect crystal B k;l given in [7], an explicit tableau construction
for e
e 0 is presented.
The Dynkin diagram of b
sl n admits the rotation automorphism that sends i to
It follows that there is a bijection such that the
following diagram commutes for all i 2 I:
e
where subscripts are taken modulo n. Of course it is equivalent to require that /
satisfy the diagram with ee i replacing e
Lemma 6. / is unique and rotates content in the sense that for all
for all is equal to mn by convention.
Proof. Let b be the sl n -highest weight vector in B k;l , given explicitly by Key((l k )).
By the definition of / and the connectedness of B k;l it is enough to show that /(b)
is uniquely determined. By definition ffl i . Recall from [7] that
for all b
is said to be minimal if equality holds,
and that for any sequence of numbers (a
that sum to l, there is a
unique minimal vector b 0 such that ffl i (b
These facts imply that ffl 0 l. From the definition of / it follows that
Thus /(b) is minimal and hence uniquely defined.
Next it is shown that / is uniquely defined by a weaker condition than (3.9).
Lemma 7. / is uniquely defined by (3.9) for
Proof. Let the restriction of b to the subinterval [2;
[3;n] . By abuse of notation we shall occasionally identify a
(skew) tableau with its row-reading word.
If u is a word or tableau and p is an integer, denote by u+p the word or tableau
whose entries are obtained from those of u by adding p. The first goal is to show
that b b and b b are Knuth-equivalent, that is, P 1). By the assumption
on /, b b admits a sequence of lowering operators e
only if b b admits the sequence e i 1
are words in the
alphabet proves that P characterization
of the RSK map (see section 3.5).
Now the shape of the tableau b 0 is a rectangle, so its restriction b b
[3;n] to
a final subinterval of [n], has antinormal shape. Hence b b
uniquely determined by b b.
It only remains to show that the subtableau b 0 j [1;2] is uniquely specified. Its
shape must be the partition shape given by the complement in the rectangle (l k )
with the shape of b 0
[3;n] . Now b 0 j [1;2] is a column-strict tableau of partition shape and
contains only ones and twos, so it has at most two rows and is therefore uniquely
determined by its content. But its content is specified by (3.10).
The map / is explicitly constructed by exhibiting a map that satisfies the conditions
in Lemma 7.
The following operation is Sch-utzenberger's promotion operator, which was defined
on standard tableaux but has an obvious extension to column-strict tableaux
[3] [29]. Let D be a skew shape and b 2 CST(D). The promotion operator applied
to b is computed by the following algorithm.
1. Remove all the letters n in b, which removes from D a horizontal strip H
(skew shape such that each column contains at most one cell).
2. Slide (using Sch-utzenberger's jeu-de-taquin [3] [28]) the remaining subtableau
bj [n\Gamma1] to the southeast into the horizontal strip H, entering the cells of H
from left to right.
3. Fill in the vacated cells with zeros.
4. Add one to each entry.
The resulting tableau is denoted pr(b) 2 CST(D) and is called the promotion of
the tableau b.
Proposition 8. The map / of (3.9) is given by pr.
Proof. pr is content-rotating (satisfies (3.10)) and satisfies (3.9) for
since pr(t)j commutes with sl n crystal operators. By
Lemma
In light of (3.9), the operators ẽ_0 and
f̃_0 on B^{k,l} are given explicitly by ẽ_0 = pr^{−1} ∘ ẽ_1 ∘ pr and f̃_0 = pr^{−1} ∘ f̃_1 ∘ pr.   (3.11)
ee
e
Remark 9. Consider again the map pr on b 2 B k;l . The tableau b
partition shape - := (l Its row-reading word has
Schensted tableau pair P( b Key(-). Let
which has antinormal shape and whose complementary shape inside the rectangle
(l k ) must be a single row of length p, that is, shape( b b 0
and Q( b b 0 ) is a column-strict tableau of shape - and content (l \Gamma
is the longest element of the symmetric group S k . So
Let D be the skew shape
R be the tensor product of
rectangular crystals,
Note that the operator pr on
B may be described by
\Delta\Omega
By the definition of e e 0 on a rectangular crystal and the signature rule, it follows
that
ee
e
as operators on
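For the single-row crystal B^{1,l} (k = 1) the promotion operator reduces to adding 1 cyclically to every entry and re-sorting, so ẽ_0 = pr^{−1} ∘ ẽ_1 ∘ pr can be computed directly. The sketch below illustrates exactly this special case; for general k the jeu-de-taquin slides are needed and are not reproduced here.

```python
# e_0 on the one-row crystal B^{1,l} via promotion.
def pr(b, n):
    return tuple(sorted(x % n + 1 for x in b))          # n -> 1, otherwise x -> x+1

def pr_inv(b, n):
    return tuple(sorted((x - 2) % n + 1 for x in b))    # 1 -> n, otherwise x -> x-1

def e1(b):
    """e_1 on a one-row tableau: lower one letter 2 to 1 (None if undefined)."""
    if 2 not in b:
        return None
    c = list(b)
    c[c.index(2)] = 1
    return tuple(sorted(c))

def e0(b, n):
    image = e1(pr(b, n))
    return None if image is None else pr_inv(image, n)

b = (1, 1, 3, 4)             # an element of B^{1,4} for n = 4
print(pr(b, 4))              # -> (1, 2, 2, 4)
print(e0(b, 4))              # -> (1, 3, 4, 4): e_0 turns a letter 1 into an n
```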
Example 10. Let given by the
following skew tableau of shape R
\Theta \Theta \Theta \Theta \Theta \Theta 1 1
\Theta \Theta \Theta \Theta \Theta \Theta 2 2
\Theta \Theta \Theta 1 1 3
\Theta \Theta \Theta 2 3 4
\Theta \Theta \Theta 3 4 5
The element pr(b) is given by
\Theta \Theta \Theta \Theta \Theta \Theta 2 2
\Theta \Theta \Theta \Theta \Theta \Theta 3 3
\Theta \Theta \Theta 2 2 4
\Theta \Theta \Theta 3 4 5
\Theta \Theta \Theta 4 5 6
The signature for calculating ee 1 on pr(b) is
must be applied to the second tensor position. Then ee 1 (pr(b)) equals
\Theta \Theta \Theta \Theta \Theta \Theta 2 2
\Theta \Theta \Theta \Theta \Theta \Theta 3 3
\Theta \Theta \Theta 1 2 4
\Theta \Theta \Theta 3 4 5
\Theta \Theta \Theta 4 5 6
Finally ee 0
\Theta \Theta \Theta \Theta \Theta \Theta 1 1
\Theta \Theta \Theta \Theta \Theta \Theta 2 2
\Theta \Theta \Theta 1 3 3
\Theta \Theta \Theta 2 4 4
\Theta \Theta \Theta 3 5 7
3.8. Action of e e 0 on the tableau pair. In this section an algorithm is given
to compute the tableau pair (P(ee 0 (b)); Q(ee 0 (b))) of e e 0 (b) directly in terms of the
tableau pair (P(b); Q(b)) of b. In light of (3.12) and (3.2) with
to give P(pr(b)) and Q(pr(b)) in terms of P(b) and Q(b).
\Delta\Omega pr(b 1 ) can be constructed by applying Remark 9 to each
tensor factor. The element b is regarded as a skew tableau of shape
\Delta\Omega b b
denote the shape of b b j , so that b b has shape
\Delta\Omega D 1 .
Next, let b b 0
j be the tableau of skew shape D 0
obtained from b b j
as in Remark 9. Write b b
1 , a skew column-strict tableau of shape
\Delta\Omega D 0
1 . Finally, pr(b) is obtained by adjoining zeros to b b 0 at the
vacated positions of the shape
\Delta\Omega R 1 that are not in D 0 , and then adding
one to each entry.
In other words, P( b b) is obtained from P(b) by removing the letters n, which occupy
a horizontal strip (call it H). It is well-known that Q( b b) is obtained from Q(b) by
reverse column insertions at the cells of H starting with the rightmost cell of H
and proceeding to the left, ejecting a weakly increasing word v of length mn (b).
Another way to say this is that there is a unique weakly increasing word v of length
mn (b) such that P (vQ( b Q(b). So the content of v is the difference of the
contents of Q(b) and Q( b b). Since Q(b) 2 LRT(R), it follows that v is the weakly
increasing word comprised of mn (b j ) copies of the maximum letter of A j for all j.
In light of Remarks 5 and 9,
where w R
0 is the automorphism of conjugation corresponding to the longest element
in the Young subgroup SA1 \Theta \Delta \Delta \Delta \Theta SAm in the symmetric group Sn . Recall that
be the skew shape given by the difference of the shapes
of P(pr(b)) and P(pr(b)j [2;n] ). It is well-known that Q(pr(b)j [2;n] ) is obtained from
Q(pr(b)) by reverse row insertions at the cells of H 1 starting from the rightmost
and proceeding to the left. Let u be the ejected word. Then using an argument
similar to that above, P (Q(pr(b)j [2;n] and u is the weakly increasing
word comprised of mn (b j ) copies of the minimal letter of A j for all j. Since u and
are both weakly increasing words it is easy to calculate directly that
v.
Therefore
Remark 11. In summary, the tableau pair (P(pr(b)); Q(pr(b))) is constructed from
the tableau pair (P(b); Q(b)) by the following steps. Let
1. Let H be the horizontal strip given by the positions of the letters n in P .
2. Let v be the weakly increasing word and b
Q the tableau such that shape( b
H, such that
Q). These may be produced by reverse
column insertions on Q at H from right to left.
3. Then
4. Let H 1 be the horizontal strip
Q).
5. Let P 1 be the column-strict tableau given by adjoining to P j [n\Gamma1] the letters
n at the cells of H 1 .
By [24, Proposition 15],
(w Rb
R (v b
where -R is the LR analogue of the right circular shift of a word by positions defined
in [24].
All of these steps are invertible, so a description of pr \Gamma1 is obtained as well.
Example 12. Continuing the previous example, the image of b under the map
(3.3) is given by the tableau pair
Then H is the skew shape given by the single cell (7; 1),
and
consists of the single cell (1; 5) and
6
3.9. The R-cocyclage and e
e 0 . In [24] the R-cocyclage relation was defined on the
set LRT(R). In the Kostka case this is a weak version of the dual of the cyclage poset
of Lascoux and Schützenberger [18]. It is shown that every covering R-cocyclage
relation, realized as recording tableaux, is induced by e
e 0 on some element of B R .
Theorem 13. Let ux be an R-LR word with x a letter. Then there is an element
R such that the
cell is not in the n-th row. In particular, every
R-cocyclage covering relation is realized by the action of e e 0 in this way.
Proof. By definition (see [24]), every covering relation in the R-cocyclage has the
form that P (v) covers P (-R (v)) where v is an R-LR word. It follows from [24,
Proposition 23] that if P is an R-cocyclage then x ? 1. If s is in
the n-th row, then by the column-strictness of P (ux) and the fact that all letters
are in the set [n], x = 1. So no R-cocyclage covering relations are excluded by the
restriction that s not be in the n-th row.
and the automorphism of conjugation w R
preserves shape, without loss
of generality it may be assumed that u is the row-reading word of a column-strict
tableau U of partition shape b shape(P(ux)). Now a skew
tableau t has partition shape if and only if is the word of the
column-strict tableau U
exists since Q 2 LRT(-; R)
and (3.3) is a bijection. It must be shown that Q(ee 0 This shall be
verified by applying the formula (3.12) and Remark 11.
Now n. The horizontal strip
H given by the cells of Key(w-) containing the letter n, is merely the n-th row
of the shape -. Since Q 2 CST(-) (and all tableaux are in the alphabet [n]), the
subtableau given by the first -n columns of Q is equal to Key((- n
the rest of Q and R 0 the sequence of rectangles obtained by removing the first
-n columns from each of the rectangles in R. Since Q is R-LR and the column-
reading word of Q equals that of Key((- n
y be the minimal element of the last interval Am . In the notation of Remark 11,
since Q r is R 0 -LR, it follows that
The right hand side is a column-strict tableau of shape
so that the horizontal strip H entirely in the first row,
and P(pr(b)) is formed from 1 by pushing the first row
to the right by -n cells and placing 1's in the vacated positions. The tableau
ones. Hence P(pr(b)) contains - t twos in its
first row, that is, ffl 1
which changes the letter 2 at the cell (1; mn (b) + 1) to a
1. By (3.2),
Now pr \Gamma1 is applied, reversing the algorithm in Remark 11 starting with the tableau
1 and H 0 for the analogues of H 1 and
H. By Remark 11 and direct calculation,
In particular H 0
[fsg. The reverse row insertions on Q(ee 1
at
merely remove the -n copies of y from the first row by (3.14). The final
reverse row insertion (at the cell s) stays within the subtableau Q r and changes it
to the tableau Q r ", say, and ejects the letter x, since x is obtained from Q
by reverse row insertion at s and
. The result of the reverse row
insertion on Q(ee 1 (pr(b))) at H 0
1 is
with ejected word xy -n . Writing U
r and using the fact that Q r is
Q(ee
Remark 14. Suppose B R is a Kostka crystal with R
n. Then Theorem 13 shows that every covering relation in the cyclage poset
- CST(-; fl) is realized in the recording tableaux by e e 0 acting on some b 2 B R .
Remark 15. Let the maximum number of columns among the
rectangles R i in R. Suppose is an sl n -highest weight vector such that Q(b)
has shape -, such that covering relation in
the R-cocyclage.
To see this, let us adopt the notation of the proof of Theorem 13. Let
be the corner cell in the last column of -. Then - so that w-. By the
choice of b,
To apply Theorem 13 it must be shown that t ! n. Suppose not. Then - n
and Q(b) is an R-LR tableau of shape (- n
the total of the heights of the
rectangles in R is n, it follows that all of the rectangles in R must have exactly - 1
columns, contradicting the assumption that - 1 ? M .
Apply the reverse row insertion on Q(b) at the cell s, obtaining the column-strict
tableau U of shape - \Gamma fsg and ejecting the letter x. Then P
is an R-cocyclage, by [24, Remark 17].
Example 16. Continuing the example, e e 0 (b) is computed below. Q(ee 1
Q(pr(b)) and
6
Applying pr \Gamma1 to the tableau pair of ee 1 (pr(b)) and denoting by P 0
the
corresponding tableaux and word, we obtain
and finally
P(ee
Remark 17. Suppose that is such that each R j has a common
number of columns, say l. Then B R , being the tensor product of perfect crystals
of level l, is perfect of level l and therefore connected by [6]. In this case, more
is true. Using Remark 15, every element can be connected to the unique sl n -highest
weight vector in B R of charge zero, by applying last column R-cocyclages on the
recording tableau.
For R a general sequence of rectangles, B R is still connected. However it is not
necessarily possible to use e
e 0 to connect every sl n -component to the zero energy
component in such a way that the energy always drops by one. For example,
and the sl 3 -component with Q-tableau 1 1
. The
applications of e e 0 on the five elements of this component that admit e e 0 , all produce
elements in the component with Q-tableau 1 1 3
2 which has the same energy.
3.10. Rectangle-switching bijections and ee 0 .
Proposition 18. Let R 1 and R 2 be rectangles. Then the rectangle-switching bijec-
tion
is an isomorphism of classical b
sl n crystals.
Proof. Since is known to be an isomorphism of sl n -crystals, it only
remains to show that oe commutes with ee 0 . Let b 2 B
. By the bijectivity
of (3.3) it is enough to show that
P(ee
Q(ee
Consider first the process in passing from (P(b); Q(b)) to (P(ee 0 (b)); Q(ee 0 (b))). Let
H and H 1 be the horizontal strips, v the weakly increasing word and b
the tableau as in Remark 11. Let b
the analogous
objects in passing from (P(b 0
shape(P(b)). Observe that P(b 0 by (3.6), so that
This implies that the increasing words v 0 and v
have the same length mn (b) and shape( b
call their common shape b -.
Now
is the unique element in LRT(-; (R 2
is the unique element in LRT(-; (R 1
On the other hand, consider the word v b
Q, identifying b
Q with its row-reading word.
The tableau Q(v b
Q) has shape -. Let p and q be the number of cells in b
respectively. Since b
Q is a column-strict tableau of shape b -, Q(v b
Q)j [p] is the rowwise
standard tableau of shape b -, the unique standard tableau of shape b - in which
is located immediately to the right of i not in the first column.
Now Q(v b
Q)jH is filled from left to right by the numbers since it
records the column-insertion of the weakly increasing word v. The same argument
applies to Q(v 0 b
that - (v b
0 and -R 0 for the corresponding
constructions for R 0 . By [24, Theorem 16],
(w R 0b
R (v b
Applying the P tableau part of (3.5) to the word (w Rb
By this and the Q tableau part of (3.6) for the word e e 0 (b),
Q(oe(ee
This proves the equality of the Q-tableaux in (3.16).
For the P-tableaux, let us recall the process that leads from (P(b); Q(b)) to
(P(ee 0 (b)); Q(ee 0 (b))) and from (P(b 0
Recall that P(b 0 they have the same restriction to the alphabet
On the other hand, since it has been shown that the tableaux Q(ee 0 (b))
and Q(ee 0 (b 0 )) have the same shape, it follows that the horizontal strips H 1 and H 0 1 coincide. So P(ee 0 (b)) and P(ee 0 (b 0 )) agree when restricted to the alphabet [2; n].
Since both tableaux also have the same partition shape they must coincide. This,
together with the P-tableau part of (3.6) applied to e e 0 (b), shows that
P(ee
3.11. Energy function. In this section it is shown that the energy function of the
classical b
sl n crystal B R is given by the generalized charge of [24] on the Q-tableau.
The definition of energy function follows [7] and [22].
Consider the unique classical b
sl n crystal isomorphism
An energy function
!Zis a function that satisfies the following
axioms:
such that e
and similarly for ee i .
(H2) For all
such that e
e 0 (b) is defined,
H(ee
defined in the same way then
Proof. Without loss of generality assume that - 1 - 2 . Let
be the partition whose Ferrers diagram is obtained by placing the shape R 1 atop R 2 .
Let QR be the unique element of the singleton set LRT(fl(R); (R 1
that fl(R) is the only shape admitting an R-LR tableau that has at most - 1 columns.
Let R be the unique sl n -highest weight vector of weight fl(R); it is given
explicitly by v
in the notation of the definition of R-LR, and satisfies
It is shown that every element b 2 B R is connected to v R . First, using sl n -raising
operators, it may be assumed that b is an sl n -highest weight vector. If Q(b) has
at most - 1 columns then Otherwise Q(b) has more than - 1 columns, and
Remark 15 applies. But e
e 0 (b) is closer to v R in the sense that Q(ee 0 (b)) has one
fewer cells to the right of the - 1 -th column than Q(b) does [24, Proposition 38], so
induction finishes the proof.
By Lemma 19 H is uniquely determined up to a global additive constant. H is
normalized by the condition
with v R as in Lemma 19. Equivalently,
where Key(R i ) is the highest-weight vector in B R i and the value of H is the size
of the rectangle R 1 " R 2 .
For a tableau Q 2 LRT(-; (R 1 to be the number of cells
in the shape of Q that are strictly east of the max(-
Proposition 20. Let be a pair of rectangles. Then for all
Proof. Follows immediately from the proof of Lemma 19.
Now consider
The energy function for B is given as follows. Denote by H Zgiven
by the value of the energy function H (wB) i+1 ;(wB) i at the (i + 1)-st and i-th tensor
positions (according to our convention). Recall the isomorphisms of classical b
crystals (3.7). Define the cyclic permutation w
1!j-m
Call the inner sum E (j)
R (b).
The following version of [9, Lemma 5.1] holds for ER with no additional difficulty.
Lemma 21. Let
\Delta\Omega b
Example 22. Let b be as in the running example. Then
\Theta \Theta \Theta \Theta \Theta \Theta 1 1
\Theta \Theta \Theta \Theta \Theta \Theta 2 2
\Theta \Theta \Theta 1 3 3
\Theta \Theta \Theta 2 4 4
so that
so that d 2 2. Finally
so that d 1 (- 2
3.12. Energy and generalized charge. Define the map ER
ER (b) for any b 2 B R such that This map is well-defined
since ER is constant on sl n -components and the map (3.3) is a bijection. It follows
immediately from the definitions that
1-i!j-m
The Kostka case of the following result was first proven by K. Kilpatrick and
D. White. In the further special case that - is a partition it was shown in [22]
that ER (Q) is the charge. Now in the Kostka case the generalized charge statistic
charge R specializes to the formula of charge in [15].
Theorem 23. charge
Proof. Let Q 2 LRT(R) of shape -, say. It will be shown by induction on the
number of rectangles m and then on charge R (Q), that ER : LRT(R) ! Z
satisfies the intrinsic characterization of charge R by the properties (C1) through
(C4) [24, Theorem 21]. Let
First, (C2) need only be checked when Q last column R-cocyclage,
and (C4) need only be verified when - To see this, if then there is a
last column R-cocyclage Q 0 !R Q and in this case charge R (Q
. If (C3) does not apply, then one may apply (C4) several times
to switch the widest rectangle closer to the beginning of the sequence R and then
apply (C3), which decreases the number of rectangles m.
(C1) is trivial. For (C2), let Q 0 !R Q be a last-column R-cocyclage with
b such that
and as in Theorem 13 and Remark 15. By the proof of Theorem 13 in
this case, ffl 0
However, Q(ee 0 and (C2) has been
verified.
To check (C3), let b
of
R). It follows that
for all j ? 1. Therefore
1-i!j-m
which verifies (C3).
For (C4), the proof may be reduced to the case m = 3. By abuse of notation we
suppress the notation for the sequence of rectangles, writing - p (Q) for the operator
that acts on the restriction of an LR tableau to the p-th and (p + 1)-st alphabets,
and similarly for the function d p . Write w i;j := -
d i;j := d i (w i;j Q):
The value d 0
i;j is computed using a case by case analysis.
1. In this case it is clear that d 0
2.
3.
so that
4. is the identity, and
5. so that
since the restriction of w i;j Q to the i-th and (i + 1)-st subalphabets is not
affected by - p+1 .
7.
8. p. In this case it is clear that d
i;j .
Based on these computations, the difference in energies E -p R (-
as follows. In cases 1, 4, 5, and 8, d 0
so these terms cancel. The sum
of the terms in cases 6 and 7 cancel. So it is enough to show that the sum of terms
in 2 and 3 cancel, that is,
Rewriting d p (w p;j observe that without loss of generality it
may be assumed that must be shown that
Recall that in verifying (C4) it may be assumed that - . In this case [24,
Remark 39] applies. Say - There
are three cases, namely four terms in
are zero. If then the first and fourth terms are zero and the second
and third agree, while if then the second and third are zero and the first and
fourth agree.
Corollary 24.
chV wt sl (-) K -;R (q);
where - runs over partitions of length at most n.
Proof. The equality follows immediately from Theorem 23, the weight-preserving
bijection (3.3), and [24, Theorem 11].
4. Tensor product structure on Demazure crystals
The tensor product structure for the Demazure crystals, is a consequence of an
inhomogeneous version of [10, Theorem 2.3] that uses Lemma 21.
Theorem 25. Let - be a partition of n. Then
Bw- (l 0
;l\Omega u l 0
as classical b
sl n crystals, where the affine b
sl n Demazure crystal is viewed as a classical
sl n -crystal by composing its weight function wt with the projection cl .
Moreover, if v 7!
b\Omega u l 0
then
where the left hand side is the distance along the null root ffi of v from the highest
weight vector u l 0 2 V(l 0 ) and R is defined by R
5. Proof of Theorem 2
Theorem 2 follows from Theorem 25 and Corollary 24.
6. Generalization of Han's monotonicity for Kostka-Foulkes
polynomials
The following monotonicity property for the Kostka-Foulkes polynomials was
proved by G.-N. Han [4]:
denotes the partition obtained by adding a row of length a to -.
Here is the generalization of this result for the polynomials K -;R (q) that was
conjectured by A. N. Kirillov.
Theorem 26. Let R be a dominant sequence of rectangles and
tangle. Then
is the partition obtained by adding m rows of size k to - and R[(k m )
is any dominant sequence of rectangles obtained by adding the rectangle (k m ) to R.
Proof. Write
the map
the letters of Y 0 are smaller than those of Q
follows that shape(i R since it is Knuth
equivalent to a shuffle of Y 0 and the tableau Q+m, which is R-LR in the alphabet
+m]. Thus the map i R is well-defined. Let B represent the union of the
zero-th and first subalphabets for w Y be the key tableau for the first
subalphabet of w
by the Knuth invariance of d 0 , the fact that w 0;j doesn't touch letters in the zero-th
subalphabet, the definition of d 0 , the fact that w 0;j Q 2 LRT(w 0;j R), and direct
calculation of the shape of P (Y Y 0 ) combined with Proposition 20. If i ? 0 then
From (6.1) and (6.2) it follows that E R +(i R
--R
Combinatorial properties of partially ordered sets associated with partitions and finite Abelian groups
Dual equivalence with applications
Croissance des polynômes de Kostka
Infinite dimensional Lie algebras
Modern Phys.
Perfect crystals of quantum affine Lie algebras
On the Grothendieck group of modules supported in a nilpotent orbit in the Lie algebra gl(n)
Crystal graphs for representations of the q-analogue of classical Lie algebras
A generalization of the Kostka-Foulkes polynomials
Cyclic permutations on words
Crystal graphs and q-analogues of weight multiplicities for root systems of type An
Group characters and algebra
Sur une conjecture de H.
in Noncommutative structures in algebra and geometric combinatorics
Green Polynomials and Singularities of Unipotent Classes
The characters of the group GL(n
A cyclage poset structure for Littlewood-Richardson tableaux
generalized Kostka-Foulkes poly- nomials
Math 13
correspondance de Robinson
Promotion des morphismes d'ensembles ordonnés
Bases of coordinate rings of closures of conjugacy classes of nilpotent matrix
Characters of modules supported in the conjugacy class of a nilpotent matrix
The equations of conjugacy classes of nilpotent matrices
Some connections between the Littlewood-Richardson rule and the construction of Schensted
--TR
Dual equivalence with applications, including a conjecture of Proctor
Graded characters of modules supported in the closure of a nilpotent conjugacy class
A cyclage poset structure for LittlewoodMYAMPERSANDmdash;Richardson tableaux
Multi-atoms and monotonicity of generalized Kostka polynomials
--CTR
Lipika Deka , Anne Schilling, New fermionic formula for unrestricted Kostka polynomials, Journal of Combinatorial Theory Series A, v.113 n.7, p.1435-1461, October 2006
Anne Schilling , Philip Sternberg, Finite-dimensional crystals B2,s for quantum affine algebras of type D(1)n, Journal of Algebraic Combinatorics: An International Journal, v.23 n.4, p.317-354, June 2006
S. Ole Warnaar, The Bailey Lemma and Kostka Polynomials, Journal of Algebraic Combinatorics: An International Journal, v.20 n.2, p.131-171, September 2004 | tableau;littlewood-richardson coefficient;crystal graph;kostka polynomial |
513418 | Guaranteed. | We describe a distributed memory parallel Delaunay refinement algorithm for polyhedral domains which can generate meshes containing tetrahedra with circumradius to shortest edge ratio less than 2, as long as the angle separating any two incident segments and/or facets is between 90 and 270 degrees. Input to our implementation is an element--wise partitioned, conforming Delaunay mesh of a restricted polyhedral domain which has been distributed to the processors of a parallel system. The submeshes of the distributed mesh are then independently refined by concurrently inserting new mesh vertices.Our algorithm allows a new mesh vertex to affect both the submesh tetrahedralizations and the submesh interfaces induced by the partitioning. This flexibility is crucial to ensure mesh quality, but it introduces unpredictable and variable latencies due to long delays in gathering remote data required for updating mesh data structures. In our experiments, more than 80% of this latency was masked with computation due to the fine--grained concurrency of our algorithm.Our experiments also show that the algorithm is efficient in practice, even for certain domains whose boundaries do not conform to the theoretical limits imposed by the algorithm. The algorithm we describe is the first step in the development of much more sophisticated guaranteed--quality parallel mesh generation algorithms. | INTRODUCTION
A recent trend in many control systems is to connect distributed elements of a control system via a shared
broadcast bus instead of using point-to-point links [1]. However, there are fundamental differences between a
shared bus and point-to-point links. Firstly, because the bus is shared between a number of subsystems, there is
contention for access to the bus, which must be resolved using a protocol. Secondly, transmission of a signal
or data is not virtually instantaneous; different signals will be able to tolerate different latencies. Therefore
there is a fundamental need for scheduling algorithms to decide how contention is resolved in such a way that
all latency requirements are met.
There are a number of existing bus technologies, but in this paper we are concerned with Controller Area
Network (CAN) [2], and for comparison the Time Triggered Protocol (TTP) [3]. These two buses differ in the
way that they are scheduled: CAN takes a dynamic approach, using a priority-based algorithm to decide which
of the connected stations is permitted to send data on the bus. TTP uses a static approach, where each station
is permitted a fixed time slice in which to transmit data. A common misconception within the automotive
industry is that while CAN is very good at transmitting the most urgent data, it is unable to provide guarantees
that deadlines are met for less urgent data [3, 5]. This is not the case: the dynamic scheduling algorithm used
by CAN is virtually identical to scheduling algorithms commonly used in real-time systems to schedule
computation on processors. In fact, the analysis of the timing behaviour of such systems can be applied almost without change to the problem of determining the worst-case latency of a given message queued for transmission on CAN.
1 The authors can be contacted via e-mail as ken@minster.york.ac.uk; copies of York technical reports cited in this paper are available via FTP from minster.york.ac.uk in the directory /pub/realtime/papers
This paper reproduces the existing processor scheduling analysis, and shows how this analysis is applied to
CAN. In order for the analysis to remain accurate, details of the implementation of CAN controllers must be
known, and therefore in this paper we assume an existing controller (the Intel 82527) to illustrate the
application of the analysis. We then apply the SAE 'benchmark' for class C automotive systems (safety critical
control applications) [4]. We then extend the CAN analysis to deal with some fault tolerance issues.
The paper is structured as follows: the next section outlines the behaviour of CAN (as implemented by the
Intel 82527) and the assumed system model. Section 3 applies the basic processor scheduling analysis to the
82527. Section 4 then applies this analysis to the standard benchmark, using a number of approaches. Section
5 extends the basic analysis to deal with error recovery on CAN, and shows how the revised analysis can be
re-applied to the benchmark when there are certain fault tolerance requirements. Finally, section 6 discusses
some outstanding issues, and offers conclusions.
2. SYSTEM MODEL
CAN is a broadcast bus where a number of processors are connected to the bus via an interface (Figure 1).
Figure 1: CAN architecture (each station's host CPU is attached to the shared CAN bus through a network controller).
A data source is transmitted as a message, consisting of between 1 and 8 bytes ('octets'). A message may be
transmitted periodically, sporadically, or on-demand. So, for example, a data source such as 'road speed'
could be encoded as a 1 byte message and broadcast every 100 milliseconds. The data source is assigned a
unique identifier, represented as an 11 bit number (giving 2032 identifiers - CAN prohibits identifiers with
the seven most significant bits equal to '1'). The identifier serves two purposes: filtering messages upon
reception, and assigning a priority to the message. As we are concerned with closed control systems we
assume a fixed set of messages, each having a unique priority and temporal characteristics such as rate and latency requirements.
A station on a CAN bus is able to receive a message based on the message identifier: if a particular host CPU
needs to obtain the road speed (for example) then it indicates the identifier to the bus controller. Only
messages with desired identifiers are received and presented to the host CPU. Thus in CAN a message has no
destination.
The use of the identifier as priority is the most important part of CAN regarding real-time performance. In any
bus system there must be a way of resolving contention: with a TDMA bus, each station is assigned a pre-determined
time slot in which to transmit. With Ethernet, each station waits for silence and then starts
transmitting. If more than one station try to transmit together then they all detect this, wait for a randomly
determined time period, and try again the next time the bus is idle. Ethernet is an example of a carrier-sense
broadcast bus, since each station waits until the bus is idle (i.e. no carrier is sensed), and monitors its own
traffic for collisions. CAN is also a carrier-sense broadcast bus, but takes a much more systematic approach to
contention. The identifier field of a CAN message is used to control access to the bus after collisions by taking
advantage of certain electrical characteristics.
With CAN, if multiple stations are transmitting concurrently and one station transmits a '0' bit, then all
stations monitoring the bus will see a '0'. Conversely, only if all stations transmit a '1' will all processors
monitoring the bus see a '1'. In CAN terminology, a '0' bit is termed dominant, and a '1' bit is termed
recessive. In effect, the CAN bus acts like a large AND-gate, with each station able to see the output of the
gate. This behaviour is used to resolve collisions: each station waits until bus idle (as with Ethernet). When
silence is detected each station begins to transmit the highest priority message held in its queue whilst
monitoring the bus. The message is coded so that the most significant bit of the identifier field is transmitted
first. If a station transmits a recessive bit of the message identifier, but monitors the bus and sees a dominant
bit, then a collision is detected. The station knows that the message it is transmitting is not the highest priority
message in the system, stops transmitting, and waits for the bus to become idle. If the station transmits a
recessive bit and sees a recessive bit on the bus then it may be transmitting the highest priority message, and
proceeds to transmit the next bit of the identifier field. Because CAN requires identifiers to be unique within
the system, a station transmitting the last bit (least significant bit) of the identifier without detecting a collision
must be transmitting the highest priority queued message, and hence can start transmitting the body of the
message (if identifiers were not unique then two stations attempting to transmit different messages with the
same identifier would cause a collision after the arbitration process has finished, and an error would occur).
There are some general observations to make on this arbitration protocol. Firstly, a message with a smaller
identifier value is a higher priority message. Secondly, the highest priority message undergoes the arbitration
process without disturbance (since all other stations will have backed-off and ceased transmission until the bus
is next idle). The whole message is thus transmitted without interruption.
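The arbitration phase can be mimicked with a few lines of code: the bus behaves as a wired-AND over the bits the contending stations drive, and a station drops out as soon as it sends a recessive '1' but reads back a dominant '0'. The sketch below is purely illustrative and assumes unique 11-bit identifiers.

    # Illustrative simulation of CAN identifier arbitration.  A lower identifier
    # value means a higher priority; identifiers are assumed unique.

    def arbitrate(identifiers, width=11):
        contenders = list(identifiers)
        for bit in reversed(range(width)):            # most significant bit first
            sent = [(ident >> bit) & 1 for ident in contenders]
            bus = min(sent)                           # wired-AND: any dominant 0 wins
            # Stations that sent a recessive 1 but observe a dominant 0 back off.
            contenders = [c for c, s in zip(contenders, sent) if s == bus]
        assert len(contenders) == 1
        return contenders[0]

    print(hex(arbitrate([0x123, 0x100, 0x7FF])))      # 0x100, the smallest identifier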
The overheads of a CAN frame amount to a total of 47 bits (including 11 bits for the identifier field, 4 bits for
a message length field, 16 bits for a CRC field, 7 bits for the end-of-frame signal, and 3 bits for the
intermission between frames). Some of these fields are 'bit stuffed': when five consecutive bits of the same
polarity are sent, the controller inserts an extra 'stuff bit' of opposite polarity into the stream (this bit stuffing
is used as part of the error signalling mechanism). Out of the 47 overhead bits, 34 are subject to bit-stuffing.
The data field in a message (between 0 and 8 bytes) is also bit-stuffed. The smallest CAN message is 47 bits,
and the largest 130 bits.
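The worst-case frame length implied by these figures can be computed with the usual bound of one stuff bit per five bits, applied to the 34 stuffable overhead bits plus the data field; the small sketch below reproduces the 130-bit maximum quoted above (being a worst-case bound, its zero-byte value is larger than the 47-bit minimum).

    # Worst-case number of bits in a CAN data frame carrying s data bytes:
    # 47 fixed overhead bits, 8*s data bits, and at most one stuff bit for every
    # 5 bits of the 34 stuffable overhead bits plus the data field.
    def frame_bits(s):
        return 47 + 8 * s + (34 + 8 * s) // 5

    print(frame_bits(0), frame_bits(8))   # 53 and 130 bits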
CAN has a number of other features, most important of which are the error recovery protocol, and the
'remote transmission request' messages. CAN is able to perform a number of checks for errors, including the
use of a 16 bit CRC (which applied over just 8 bytes of data provides a high error coverage), violation of the
'bit stuffing' rule, and failure to see an acknowledge bit from receiving stations. When any station (including
the sending station) detects an error, it immediately signals this by transmitting an error frame. This consists of
six bits of the same polarity, and causes all other stations to detect an error. These stations concurrently
transmit error frames, after which all stations are re-synchronised. Most controllers automatically re-enter
arbitration to re-transmit the failed message. In order to recover from a single error, the protocol requires the
transmission of at most 29 bits of error frame, and the re-transmission of the failed message. The error
recovery overheads can be bounded by knowing the expected upper bound on the number of errors in an
interval.
As well as data messages, there are also 'remote transmission request' (RTR) messages. These messages are
contentless (i.e. are zero bytes long) and have a special meaning: they instruct the station holding a data
message of the same identifier to transmit that message. RTR messages are intended for quickly obtaining
infrequently used remote data. However, the 'benchmark' [4] does not require RTR messages, and so we do not
discuss these types of messages further.
We can only apply the analysis to a particular controller, since different controllers can have different
behaviours (the Philips 82C200 controller can have a much worse timing performance than the 'ideal'
behaviour [6]). A controller is connected to the host processor via dual-ported RAM (DPRAM), whereby the
CPU and the controller can access the memory simultaneously. A perfect CAN controller would contain 2032
'slots' for messages, where a given message to be sent is simply copied into the slot corresponding to the
message identifier. A received message would also be copied into a corresponding slot. However, since a slot
requires at least 8 bytes, a total of at least 16256 bytes of memory would be required. In an automotive
environment such an amount of memory would be prohibitively expensive (an extra 50¢ per station multiplied
over ten stations per vehicle and a million vehicles is $5M!), and the Intel 82527 makes the compromise of
giving just 15 slots. One of these slots is dedicated to receiving messages; the remaining 14 slots can be set to
either transmit or receive messages. Each slot can be mapped to any given identifier; slots programmed to
receive can be set to receive any message matching an identifier mask. Each slot can be independently
programmed to generate an interrupt when receiving a message into the slot, or sending a message from the
slot. This enables 'handshaking' protocols with the CPU, permitting a given slot to be multiplexed between a
number of messages. This is important when controlling the dedicated receive slot 15: this special slot is
'double buffered' so that the CPU has time to empty one buffer whilst the shadow buffer is available to the
controller. In this paper we assume that slots are statically allocated to messages, with slot 15 used to receive
messages that cannot be fitted into the remaining slots. The 82527 has the quirk that messages stored in the
slots are entered into arbitration in slot order rather than identifier (and hence priority) order. Therefore it is
important to allocate the messages to the slots in the correct order.
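A sketch of the allocation rule this implies: a station's transmit messages should be placed into its slots in increasing identifier order, so that slot order and priority order coincide. The identifiers below are placeholders.

    # Place a station's transmit messages into 82527 slots so that slot order
    # matches priority order (smallest identifier = highest priority = lowest slot).
    tx_identifiers = [0x3A0, 0x120, 0x255, 0x0F0]        # hypothetical identifiers
    slot_map = {slot: ident
                for slot, ident in enumerate(sorted(tx_identifiers), start=1)}
    print({slot: hex(ident) for slot, ident in slot_map.items()})
    # {1: '0xf0', 2: '0x120', 3: '0x255', 4: '0x3a0'}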
We now outline a 'system model' for message passing that we are able to analyse. A system is deemed to be
composed of a static set of hard real-time messages, each statically assigned to a set of stations connected to
the bus. These hard real-time messages are typically control messages, and have deadlines that must be met, or
else a serious error is said to occur. Messages will typically be queued by a software task running on the host
CPU (the term 'task' encompasses a number of activities, ranging from interrupt handlers, to heavyweight
processes provided by an operating system). A given task is assumed to be invoked by some event, and to then
take a bounded time to queue the message. Because this time is bounded instead of fixed, there is some
variability, or jitter, between subsequent queuings of the message; we term this queuing jitter. For the
purposes of this paper, we assume that there is a minimum time between invocations of a given task; this time
is termed the period 2 . If the given task sends a message once every few invocations of the task, then the
message inherits a period from the sending task.
A given message is assigned a fixed identifier (and hence a fixed priority). We assume that each given hard
real-time message must be of bounded size (i.e. contain a bounded number of bytes). Given a bounded size,
and a bounded rate at which the message is sent, we effectively bound the peak load on the bus, and can then
apply scheduling analysis to obtain a latency bound for each message.
We assume that there may also be an unbounded number of soft real-time messages: these messages have no
hard deadline, and may be lost in transmission (for example, the destination processor may be too busy to
receive them). They are sent as 'added value' to the system (i.e. if they arrive in reasonable time then some
quality aspect of the system is improved). In this paper we do not discuss special algorithms to send these, and
for simplicity instead assume that they are sent as "background" traffic (i.e. assigned a priority lower than all
hard real-time messages) 3 .
As mentioned earlier, the queuing of a hard real-time message can occur with jitter (variability in queuing
times). The following diagram illustrates this:
Figure 2: Message queuing jitter (the shaded boxes are the windows in which the sending task may queue the message; the time instants a and b are referred to in the text below).
The shaded boxes in the above diagram represent the 'windows' in which a task on the host CPU can queue
the message. Jitter is important because to ignore it would lead to insufficient analysis. For example, ignoring
jitter in Figure 2 would lead to the assumption that message m could be queued at most once in an interval of
duration (b - a). In fact, in the interval (a, b] the message could be queued twice: once at a (as late as possible in the first queuing window), and once at b (as early as possible in the next queuing window).
2 Of course, the task could be invoked once only (perhaps in response to an emergency), and would therefore have an infinite period.
3 There are a number of algorithms that could potentially lead to very short average response times for soft real-time messages; the application of these algorithms to CAN bus is the subject of on-going research at York.
Queuing jitter can be defined as the difference between the earliest and latest possible times a given message
can be queued. In reality, it may be possible to reduce the queuing jitter if we know where in the execution of
a task a message is queued (for example, there may be a minimum amount of computation required before the
task could queue a message, and therefore event b in the diagram would occur later than the start of the
task); other work has addressed this [13].
The diagram above also shows how the period of a message can be derived from the task sending the
message. For example, if the message is sent once per invocation of the task, then the message inherits a
period equal to the period of the task.
To keep the queuing jitter of a message small, we might decompose the task generating the message into two
tasks: the first task calculates the message contents, and the second 'output' task merely queues the message.
The second task is invoked a fixed time after the first task, such that the first task will always have completed
before the second task runs. Since the second task has very little work to do, it can typically have a short
worst-case response time, and the queuing jitter inherited by the message will therefore be small (this general
technique is discussed in more detail elsewhere [14]).
We briefly discuss how a message is handled once received. At a destination station the results of an incoming
message must be made available. If the message is a sporadic one (i.e. sent as the result of a 'chance' event)
then there is a task which should be invoked by the message arrival. In this case, the message arrival should
raise an interrupt on the host CPU (and hence be assigned to slot 15 on the Intel 82527 bus controller). Of
course, it is possible for the arrival of the message to be polled for by the task, but if the required end-to-end
latency is small then the polling rate may have to be unacceptably high. The arrival of a periodic message
can be dealt with without raising an interrupt: the message can be statically assigned to a slot in the 82527 and
then be picked up by the application task. This task could be invoked by a clock (synchronised to a notional
global clock) so that the message is guaranteed to have arrived when the task runs.
As can be seen, the sole requirement on the communications bus is that messages have bounded latencies. We
now proceed to develop analysis to give these. Clearly, this analysis will form a key part of a wider analysis of
the complete system to give end-to-end timing guarantees; such end-to-end analysis is the subject of on-going
research at York.
Having established the basic model for CAN and the system we are now able to give analysis bounding the
timing behaviour of a given hard real-time message.
3. ANALYSIS OF 82527 CAN
In this section we present analysis that bounds the worst-case latency of a given hard real-time message type.
The analysis is an almost direct application of processor scheduling theory [7, 8, 9]. However, there are some
assumptions made by this analysis: Firstly, the deadline of a given message m (denoted D m ) must not be more
than the period of the message (denoted T m ). Secondly, the bus controller must not release the bus to lower
priority messages if there are higher priority messages pending (i.e. the controller cannot release the bus
between sending one message and entering any pending message into the arbitration phase; note that the
Philips 82C200 CAN controller fails to meet this assumption).
The worst-case response time of a given message m is denoted by R m , and defined as the longest time between
the start of a task queuing m and the latest time that the message arrives at the destination stations. Note that
this time includes the time taken for the sender task to execute and queue message m, and is at first sight a
curious definition (measuring the time from the queuing of the message to the latest arrival might seem better).
However, the contents of the message reflects the results of some action undertaken by the task (itself
triggered in response to some event), and it is more desirable to measure the wider end-to-end time associated
with an event.
The jitter of a given message m is denoted J m , and inherited from the response time of the tasks on the host
CPU. If these tasks are scheduled by fixed priority pre-emptive scheduling then related work can bound the
time taken to queue the message [10, 7] and hence determine the queuing jitter.
We mentioned earlier how CAN operates a fixed priority scheduling algorithm. However, a message is not
fully pre-emptive, since a high priority message cannot interrupt a message that is already transmitting 4 . The
work of Burns et al [9] allows for this behaviour, and from other processor scheduling work [8] we can bound
the worst-case response time of a given hard real-time message m by the following:

    R_m = J_m + w_m + C_m                                                    (1)
The term J m is the queuing jitter of message m, and gives the latest queuing time of the message, relative to
the start of the sending task. The term w m represents the worst-case queuing delay of message m (due to both
higher priority messages pre-empting message m, and a lower priority message that has already obtained the
bus).
The term C_m represents the longest time taken to physically send message m on the bus. This time includes the time taken by the frame overheads, the data contents, and extra stuff bits (recall from section 2 that the message contents and 34 bits of the overheads are subject to bit stuffing with a stuff width of 5). The following equation gives C_m:

    C_m = ( \lfloor (34 + 8 s_m) / 5 \rfloor + 47 + 8 s_m ) t_bit

The term s_m denotes the bounded size of message m in bytes. The term t_bit is the bit time of the bus (on a bus running at 1 Mbit/sec this is 1 μs).
The queuing delay is given by:

    w_m = B_m + \sum_{j \in hp(m)} \lceil (w_m + J_j + t_bit) / T_j \rceil C_j        (2)

The set hp(m) is the set of messages in the system of higher priority than m. T_j is the period of a given message j, and J_j is the queuing jitter of the message. B_m is the longest time that the given message m can be delayed by lower priority messages (this is equal to the time taken to transmit the largest lower priority message), and can be defined by:

    B_m = \max_{k \in lp(m)} C_k

Where lp(m) is the set of lower priority messages. Note that if there are an unbounded number of soft real-time messages of indeterminate size, then B_m is equal to 130 t_bit.

Notice that in equation 2 the term w_m appears on both the left and right hand sides, and the equation cannot be re-written in closed form for w_m. A simple solution is possible by forming a recurrence relation:

    w_m^{n+1} = B_m + \sum_{j \in hp(m)} \lceil (w_m^n + J_j + t_bit) / T_j \rceil C_j

A value of zero for w_m^0 can be used. The iteration proceeds until convergence (i.e. until w_m^{n+1} = w_m^n).

4 It is commonly understood in the computing field that pre-emption can include stopping one activity to start or continue with another.
The above equations do not assume anything about how identifiers (and hence priorities) are chosen.
However, from work on processor scheduling [11, 8] we know that the optimal ordering of priorities is the
deadline monotonic one: a message with a smaller value of D - J should be assigned a higher priority.
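A minimal sketch of the whole calculation, under the equations reconstructed above: priorities are assigned in D - J order, C_m is computed from the message size, and the queuing delay w_m is found by fixed-point iteration (abandoned once the deadline can no longer be met). The message set, field names and bit rate below are illustrative placeholders, not the benchmark data.

    import math

    # Sketch of the response time analysis of equations (1) and (2).  Each message
    # is a dict with size s (bytes), queuing jitter J, period T and deadline D (ms).

    def frame_time(s_bytes, t_bit):
        # Worst-case transmission time C_m, including worst-case stuff bits.
        return (47 + 8 * s_bytes + (34 + 8 * s_bytes) // 5) * t_bit

    def response_times(messages, bit_rate_bits_per_ms):
        t_bit = 1.0 / bit_rate_bits_per_ms
        # Deadline-monotonic ordering: smaller D - J means higher priority.
        order = sorted(messages, key=lambda m: m["D"] - m["J"])
        results = {}
        for idx, m in enumerate(order):
            C_m = frame_time(m["s"], t_bit)
            hp, lp = order[:idx], order[idx + 1:]
            B_m = max((frame_time(k["s"], t_bit) for k in lp), default=0.0)
            w = 0.0
            while True:                               # fixed-point iteration for w_m
                w_next = B_m + sum(
                    math.ceil((w + j["J"] + t_bit) / j["T"]) * frame_time(j["s"], t_bit)
                    for j in hp)
                if w_next + C_m > m["D"] - m["J"]:    # deadline cannot be guaranteed
                    w = None
                    break
                if w_next == w:
                    break
                w = w_next
            results[m["name"]] = None if w is None else m["J"] + w + C_m  # eq. (1)
        return results

    msgs = [                                          # hypothetical message set
        {"name": "brake",  "s": 1, "J": 0.1, "T": 5.0,   "D": 5.0},
        {"name": "speed",  "s": 2, "J": 0.2, "T": 10.0,  "D": 10.0},
        {"name": "status", "s": 8, "J": 1.0, "T": 100.0, "D": 100.0},
    ]
    print(response_times(msgs, 250.0))                # 250 bits/ms = 250 Kbit/s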
We can now apply this analysis to the SAE benchmark [4].
4. THE SAE 'BENCHMARK'
The SAE report describes a set of signals sent between seven different subsystems in a prototype electric car.
Although the car control system was engineered using point-to-point links, the set of signals provides a good
example to illustrate the application of CAN bus to complex distributed real-time control systems.
The seven subsystems are: the batteries ('Battery'), the vehicle controller ('V/C'), the inverter/motor
controller ('I/M C'), the instrument panel display ('Ins'), driver inputs ('Driver'), brakes ('Brakes'), and the
transmission control ('Trans'). The network connecting these subsystems is required to handle a total of 53
messages, some of which contain sporadic signals, and some of which contain control data sent periodically. A
periodic message has a fixed period, and implicitly requires the latency to be less than or equal to this period.
Sporadic messages have latency requirements imposed by the application: for example, all messages sent as a
result of a driver action have a latency requirement of 20ms so that the response appears to the driver to be
instantaneous.
The reader is referred to the work of Kopetz [3] for a more detailed description of the benchmark. Note that
Kopetz is forced to 'interpret' the benchmark specification, giving sensible timing figures where the
benchmark fails to specify them (for example, the latency requirement of 20ms for driver-initiated messages is
a requirement imposed by Kopetz rather than the benchmark). There is still some unspecified behaviour in the
benchmark: the system model assumed by this paper requires that even sporadic messages are given a period
(representing the maximum rate at which they can occur), but no periods for the sporadic messages in the
benchmark can be inferred (Kopetz implicitly assumes that sporadic messages have a period of 20ms). Like
Kopetz, we are forced to assume sensible values. We also hypothesise queuing jitter values.
The following table details the requirements of the messages to be scheduled. There are a total of 53
messages, some simple periodic messages, and some 'chance' messages (i.e. queued sporadically in response
to an external event).
Signal Number | Signal Description | Size /bits | J /ms | T /ms | P/S | D /ms | From | To
Traction Battery Voltage 8 0.6 100.0 P 100.0 Battery V/C
Traction Battery Current 8 0.7 100.0 P 100.0 Battery V/C
3 Traction Battery Temp, Average 8 1.0 1000.0 P 1000.0 Battery V/C
Auxiliary Battery Voltage 8 0.8 100.0 P 100.0 Battery V/C
5 Traction Battery Temp, Max. 8 1.1 1000.0 P 1000.0 Battery V/C
6 Auxiliary Battery Current 8 0.9 100.0 P 100.0 Battery V/C
9 Brake Pressure, Line 8
Transaxle Lubrication Pressure 8 Trans V/C
Transaction Clutch Line Pressure 8 Trans V/C
Traction Battery Ground Fault 1 1.2 1000.0 P 1000.0 Battery V/C
14 Hi&Lo Contactor Open/Close 4 0.1 50.0 S 5.0 Battery V/C
Key Switch Start 1 0.3 50.0 S 20.0 Driver V/C
19 Emergency Brake 1 0.5 50.0 S 20.0 Driver V/C
Motor/Trans Over Temperature 2 0.3 1000.0 P 1000.0 Trans V/C
22 Speed Control 3 0.7 50.0 S 20.0 Driver V/C
26 Brake Mode (Parallel/Split) 1 0.8 50.0 S 20.0 Driver V/C
28 Interlock 1 0.5 50.0 S 20.0 Battery V/C
29 High Contactor Control 8 0.3 10.0 P 10.0 V/C Battery
Battery
Reverse and 2nd Gear Clutches 2 0.5 50.0 S 20.0 V/C Trans
Battery
Battery
34 DC/DC Converter Current Control 8 0.6 50.0 S 20.0 V/C Battery
Battery
36 Traction Battery Ground Fault Test
38 Backup Alarm 1 0.9 50.0 S 20.0 V/C Brakes
Warning Lights 7 1.0 50.0 S 20.0 V/C Ins.
42 Torque Command 8
43 Torque Measured 8
44 FWD/REV 1 1.2 50.0 S 20.0 V/C I/M C
48 Shift in Progress 1 1.4 50.0 S 20.0 V/C I/M C
Processed Motor Speed 8
50 Inverter Temperature Status 2 0.6 50.0 S 20.0 I/M C V/C
52 Status/Malfunction (TBD) 8 0.8 50.0 S 20.0 I/M C V/C
53 Main Contactor Acknowledge 1 1.5 50.0 S 20.0 V/C I/M C
A simple attempt at implementing the problem on CAN is to map each of these messages to a CAN message.
Sporadic messages generally require a latency of 20 ms or less (although Kopetz gives a required latency of
5ms for one sporadic message). These messages may be queued infrequently (for example, it is reasonable to
assume that at least 50 ms elapses between brake pedal depressions). The benchmark does not give
periods for these messages, and so we assume a period of 50ms for all sporadic messages.
We mentioned in section 2 how a special 'output task' could be created for each message with the job of
merely queuing the pre-assembled message, and we assume this model is adopted for the benchmark system
analysed here. The following table lists the messages in order of priority (i.e. in D - J order), and gives the
worst-case latencies as computed by the analysis of the previous section. The signal numbers in bold indicate
that the signal is a sporadic one. The symbol - indicates that the message fails to meet its latency requirements
(i.e. the message is not guaranteed to always reach its destinations within the time required); the symbol '-'
indicates that no valid response time can be found because the message is not guaranteed to have been sent
before the next is queued (i.e. R > D - J).
Signal | N /bytes | J /ms | T /ms | D /ms | R (125Kbit/s) /ms | R (250Kbit/s) /ms | R (500Kbit/s) /ms | R (1Mbit/s) /ms
43 1 0.1 5.0 5.0 4.568 2.284 1.142 0.571
44 1 1.2 50.0 20.0 - 39.848 4.300 2.150 1.075
26 1 0.8 50.0 20.0 - 8.080 3.032 1.516
28 1 0.5 50.0 20.0 - 12.868 4.166 2.083
There is a problem with the approach of mapping each signal to a CAN message: the V/C subsystem
transmits more than 14 message types, and so the 82527 cannot be used (recall that there are 15 slots in the
82527, one of which is a dedicated receive slot). We will return to this problem shortly.
As can be seen, at a bus speed of 125Kbit/s the system cannot be guaranteed to meet its timing constraints. To
see the underlying reason why, consider the following table:
Bus Speed | Message Utilisation | Bus Utilisation | a
500 Kbit/s 3.98% 31.32% 3.09
1Mbit/s 1.99% 15.66% 5.79
The 'message utilisation' is calculated using the number of data bytes in a given CAN message. The 'bus
utilisation' is calculated by using the total number of bits (including overhead) in a given CAN message. The
column headed a details the breakdown utilisation [12] of the system for the given bus speed. The breakdown
utilisation is the largest value of a such that when all the message periods are divided by a the system remains
schedulable (i.e. all latency requirements are met). It is an indication of how much slack there is in the system:
a value of a close to but greater than 1 indicates that although the system is schedulable, there is little room
for increasing the load. The symbol '-' in the column headed a for the bus speed of 125Kbit/s indicates that no value for
breakdown utilisation can be found, since even a = 0 still results in an unschedulable system.
As can be seen, there is a large difference between the message and bus utilisations. This is because of the
relatively large overhead of a CAN message. At a bus speed of 125Kbit/s the bus utilisation is greater than
100%, and it is therefore no surprise that the bus is unschedulable.
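For reference, the two utilisation figures in these tables can be computed as below (breakdown utilisation additionally requires repeating the schedulability test with all periods divided by a trial value of a, for example under a binary search, and is not shown). The message set here is a placeholder, not the benchmark.

    # 'Message utilisation' counts only data bits; 'bus utilisation' counts whole
    # frames including worst-case stuff bits.  Sizes s are in bytes, periods T in
    # ms, and the bit rate is in bits per ms.

    def utilisations(messages, bit_rate_bits_per_ms):
        msg_util = sum(8 * m["s"] / m["T"] for m in messages) / bit_rate_bits_per_ms
        bus_util = sum((47 + 8 * m["s"] + (34 + 8 * m["s"]) // 5) / m["T"]
                       for m in messages) / bit_rate_bits_per_ms
        return msg_util, bus_util

    m_u, b_u = utilisations([{"s": 1, "T": 5.0}, {"s": 8, "T": 100.0}], 125.0)
    print(f"message {m_u:.1%}  bus {b_u:.1%}")        # message 1.8%  bus 11.1%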
One way of reducing the bus utilisation (and the message utilisation) is to 'piggyback' messages sent from the
same source. For example, consider the Battery subsystem: this periodically sends four single byte messages
each with a period of 100 ms (message numbers 1, 2, 4, and 6). If we were to collect these into a single
message then we could send one four byte message at the same rate. This would reduce the overhead, and
hence the bus utilisation. Another advantage with piggybacking is that the number of slots required in the bus
controller is reduced (we have only 14 slots available with the 82527 CAN controller, and the V/C subsystem
has more than 14 signals).
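As a rough illustration of the saving for the Battery example, using the worst-case frame-length bound from section 2 (the arithmetic below is our own, not from the benchmark report):

    # Four separate one-byte frames every 100 ms versus one four-byte frame every
    # 100 ms, using the worst-case frame length 47 + 8*s + (34 + 8*s)//5 bits.
    def frame_bits(s):
        return 47 + 8 * s + (34 + 8 * s) // 5

    print(4 * frame_bits(1), frame_bits(4))   # 252 bits vs 92 bits per 100 ms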
We can piggyback the following periodic messages:
New message name | Size /bytes | T /ms | Composed from signals
Battery high rate 4 100.0 1,2,4,6
Battery low rate 3 1000.0 3,5,13
Brakes high rate 2 5.0 8,9
I/M C high rate 2 5.0 43,49
V/C high rate 4 5.0
V/C low rate 1 1000.0 33,36
The piggybacking would be implemented by the application tasks computing the message contents as before,
but where several signals are piggybacked on a single message there would be a single task created with the
simple job of queuing the message periodically. Because there are fewer messages to send from a given node,
the queuing jitter of a given message may be slightly less.
Note that signals 29 and are required to be sent with a period of 10ms, but that they are piggybacked into a
message ('V/C high rate') sent once every 5ms (and thus every other message sent would contain null data for
these signals). We need to do this so that there are not more than 14 message types sent from the V/C subsystem.
The following table gives the timing details of all the messages in this new message set:
Signal | N /bytes | J /ms | T /ms | D /ms | R (125Kbit/s) /ms | R (250Kbit/s) /ms | R (500Kbit/s) /ms | R (1Mbit/s) /ms
28
26 1 0.3 50.0 20.0 19.240 6.708 2.626 1.313
44 1 0.5 50.0 20.0 - 38.448 11.944 4.516 2.258
Notice how the timing requirements are met for many more messages than with the simple approach. We also
are able to send the above messages using the Intel 82527 CAN controller, since no subsystem sends more
than 14 message types.
For comparison, the following table gives the utilisations of the above set of messages, and the breakdown
utilisation of the system:
Bus Speed | Message Utilisation | Bus Utilisation | a
500 Kbit/s 3.80% 18.94% 4.12
1Mbit/s 1.9% 9.47% 7.61
As can be seen, the piggybacking of messages leads to a reduction in overheads, and hence a reduction in bus
utilisation. In general, this in turn leads to increased real-time performance. However, by piggybacking signals
into a single message we require that the signals are always generated together; this may restrict the
applicability of piggybacking.
It is possible to piggyback signals that are not necessarily generated together (for example, sporadic signals).
The approach we take is to send a 'server' message periodically. A sporadic signal to be sent is stored in the
memory of the host CPU. When the 'server' message is to be sent, the sender task polls for signals that have
occurred, and fills the contents of the message appropriately. With this approach, a sporadic signal may be
delayed for up to a polling period plus the worst-case latency of the 'server' message. So, to piggyback a
number of sporadic signals with latency requirements of 20ms or longer, a server message with a period of
10ms and a worst-case response time of 10ms would be sufficient. Alternatively, a server message with a
period of 15ms and a worst-case response time of 5ms could be used.
We transform the set of messages to include these server messages. We choose server messages with periods
of 10ms and latency requirements of 10ms. The following table lists the messages in the system:
Signal | N /bytes | J /ms | T /ms | D /ms | R (125Kbit/s) /ms | R (250Kbit/s) /ms | R (500Kbit/s) /ms | R (1Mbit/s) /ms
There are two sporadic signals that remain implemented by sporadic messages: Signal 14 has a deadline that is
too short to meet by polling. Signal is the only sporadic sent from the Brakes subsystem, and cannot
therefore be piggybacked with other sporadic signals.
The following table gives the utilisation and breakdown utilisation of the above system:
Bus Speed | Message Utilisation | Bus Utilisation | a
500 Kbit/s 4.62% 21.11% 3.812
1Mbit/s 2.31% 10.55% 7.082
Notice that the breakdown utilisation figure for the system running over a 125Kbit/s bus is very close to 1,
showing that the system is only just schedulable, and probably cannot accommodate any more urgent signals.
Notice also how the bus utilisation has increased, despite the system now being schedulable at all four bus
speeds. Although the overheads are now higher (due to polling for sporadics), the peak load in the system is
lower. This illustrates that 'utilisation' is a figure valid only over sizeable intervals; in real-time systems the
peak load is far more important, and can be independent of the overall bus utilisation.
5. EXTENDING THE ANALYSIS: ERROR RECOVERY
We have so far assumed that no errors can occur on the bus. However, the analysis of section 3 can be easily
extended to handle this. Equation 2 is amended to:

    w_m = E_m(w_m + C_m) + B_m + \sum_{j \in hp(m)} \lceil (w_m + J_j + t_bit) / T_j \rceil C_j

Where the worst-case response time is given by equation 1. E_m(t) is termed the error recovery overhead function, and gives the expected upper bound on the overheads due to error recovery that could occur in an interval of duration t.
This function can be determined either from observation of the behaviour of CAN under high noise conditions,
or by building a statistical model. In this paper we use a very simple error function for illustration:
Let n_error be the number of burst errors that could occur in an arbitrarily small interval (i.e. n_error errors could occur back-to-back). Let T_error be the residual error period (i.e. after the initial n_error errors, further errors cannot occur before T_error has elapsed, and then at a rate not higher than one every T_error). The number of errors in an interval of duration t is therefore bounded by:

    n_error + \lceil t / T_error \rceil
With the CAN protocol, each error can give rise to at most 29 bits of error recovery overhead, followed by
the re-transmission of a message. Only messages of higher priority than message m and message m itself can
be re-transmitted and delay message m (a station attempting to re-transmit a lower priority message after
message m is queued will lose the arbitration). The largest of these messages has transmission time:

    \max_{k \in hp(m) \cup \{m\}} C_k

Therefore, a bound on the overheads in an interval, and hence the error function, is given by:

    E_m(t) = ( n_error + \lceil t / T_error \rceil ) ( 29 t_bit + \max_{k \in hp(m) \cup \{m\}} C_k )
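A sketch of this error function follows; in the amended fixed-point iteration it is evaluated at t = w_m + C_m on every pass. The numeric values below are illustrative only.

    import math

    # E_m(t): up to n_error back-to-back errors plus one further error per T_error,
    # each costing 29 bits of error frame plus re-transmission of the largest frame
    # of priority m or higher (passed in as C_max_hp_and_m).

    def error_overhead(t, n_error, T_error, t_bit, C_max_hp_and_m):
        return (n_error + math.ceil(t / T_error)) * (29 * t_bit + C_max_hp_and_m)

    # Example: n_error = 4, T_error = 10 ms, 250 Kbit/s bus (t_bit = 0.004 ms),
    # largest relevant frame 130 bits.
    print(error_overhead(2.5, 4, 10.0, 0.004, 130 * 0.004))   # ≈ 3.18 ms of overhead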
We can now return to the SAE benchmark and see how CAN performs under certain error conditions. Let
n error be 4, and let T error be 10ms (this is a very pessimistic assumption: at a bit rate of 1 Mbit/s this is
equivalent to an error rate of 1 bit in 10 000; measurements of CAN suggest a typical rate of 1 bit in 10^5).
The following table gives the results when the updated analysis is applied to the previously schedulable
message set:
[Table: per-signal worst-case response times under the error model, with columns for size /bytes, the timing parameters /ms, and R /ms at 125 Kbit/s, 250 Kbit/s, 500 Kbit/s and 1 Mbit/s; the per-signal rows were lost in extraction.]
The results show how a CAN network running at 125Kbit/s cannot tolerate the assumed error rate. At a speed
of 250Kbit/s or more the system can tolerate such an error rate. Note that we do not need to transmit each
message multiple times, and can instead rely on the CAN protocol to detect errors and retransmit failed
messages.
An alternative fault tolerance strategy is to disable the CAN error recovery mechanism and to use a replicated
bus where a message is sent on both buses concurrently (this approach is taken by Kopetz with the TTP bus).
However, we can see that this solution is far less flexible than using the dynamic error recovery approach,
since the bandwidth is effectively wasted under normal conditions. If assumptions about the rate of errors are
used (indeed, for any system we must make such assumptions) then we can apply these assumptions to the
dynamic recovery approach.
6. DISCUSSION AND CONCLUSIONS
The analysis reported in this paper enables the CAN protocol to be used in a wide range of real-time
applications. Indeed its use of system-wide priorities to order message transmissions makes it an ideal control
network. The use of a global approach to priorities also has the advantage that the wealth of scheduling
analysis developed for fixed priority processor scheduling can be easily adapted for use with CAN. Tools that embody this processor scheduling analysis already exist; similar tools could be developed for CAN. These would not
only accurately predict the worst case message latencies (for all message classes in the system) but could also
be used, by the systems engineer, to ask "what if" questions about the intended application.
By applying the analysis to an existing benchmark an assessment of its applicability has been made. However,
the benchmark does not illustrate all of the advantages the full flexibility of CAN can provide when supported
by priority based analysis. In particular, sporadic messages with tight deadlines but long inter-arrival times can easily be accommodated. It is also possible to incorporate many different failure models and to predict the
message latencies when different levels of failure are being experienced.
7. REFERENCES
--R
"Electronic Exit from Spaghetti Junction"
"Road Vehicles - Interchange of Digital Information - Controller Area Network (CAN) for High Speed Communication"
"A Solution to an Automotive Control System Benchmark"
"Class C Application Requirement Considerations"
"Survey of Known Protocols"
"Guaranteeing Message Latencies on Controller Area Network"
"Fixed Priority Scheduling of Hard Real-Time Systems"
"Applying New Scheduling Theory to Static Priority Pre-emptive Scheduling"
"Allocating and Scheduling Hard Real-Time Tasks on a Point-to-Point Distributed System"
"Fixed Priority Scheduling with Deadlines Prior to Completion"
"On The Complexity of Fixed-Priority Scheduling of Periodic Real-Time Tasks"
"The Rate Monotonic Scheduling Algorithm: Exact Characterisation and Average Case Behaviour"
"Analysis of Hard Real-Time Communications"
"Holistic Schedulability Analysis for Distributed Hard Real-Time Systems"
Benot Hudson , Gary L. Miller , Todd Phillips, Sparse parallel Delaunay mesh refinement, Proceedings of the nineteenth annual ACM symposium on Parallel algorithms and architectures, June 09-11, 2007, San Diego, California, USA | guaranteed-quality mesh generation;delaunay triangulation;parallel mesh generation |
513453 | Towards Intelligent Semantic Caching for Web Sources. | An intelligent semantic caching scheme suitable for web sources is presented. Since web sources typically have weaker querying capabilities than conventional databases, existing semantic caching schemes cannot be directly applied. Our proposal takes care of the difference between the query capabilities of an end user system and web sources. In addition, an analysis on the match types between a user's input query and cached queries is presented. Based on this analysis, we present an algorithm that finds the best matched query under different circumstances. Furthermore, a method to use semantic knowledge, acquired from the data, to avoid unnecessary access to web sources by transforming the cache miss to the cache hit is presented. To verify the effectiveness of the proposed semantic caching scheme, we first show how to generate synthetic queries exhibiting different levels of semantic localities. Then, using the test sets, we show that the proposed query matching technique is an efficient and effective way for semantic caching in web databases. | Introduction
Web databases allow users to pose queries to distributed and heterogeneous
web sources. Such systems usually consist of three components
(Adali et al., 1996; García-Molina et al., 1995): 1) mediators to provide distributed, heterogeneous data integration, 2) wrappers to provide local translation and extraction, and 3) web sources containing raw data to be queried and extracted. In the virtual approach (et al., 1998), the queries are posed to a uniform interface and
submitted to multiple sources at runtime. Such querying can be very
costly due to run-time costs. An effective way to reduce costs in such
an environment is to cache the results of prior queries and to reuse
them (Alonso et al., 1990; Franklin et al., 1993).
Let us first consider a motivating example for semantic caching.
Example 1: Two queries, Q1 and Q2, are asked against a relation
emp(name,age,title,gender) using Datalog notation of Zaniolo et
al. (1997) and saved in the cache as follows:
Q1: male(name) <- emp(name,-,-,'m').
Q2: 50s-mngr(name) <-
emp(name,age,'manager',-), 50<=age<60.
When a new query Q3 asking "male manager's name in his fifties'' is
given, it can be evaluated against either the emp relation or the stored
queries Q1 and Q2 in the cache as follows:
E1: 50s-male-manager(name) <- emp(name,age,'manager','m'), 50<=age<60.
E2: 50s-male-manager(name) <- male(name), 50s-mngr(name).
Both evaluations yield identical results. However, when the emp relation
is stored remotely or temporarily unavailable due to network
partition, using the evaluation E2 against the stored queries is much
more efficient. 2
Semantic caching (Dar et al., 1996; Keller and Basu, 1996; Larson
and Yang, 1985; Ren and Dunham, 1998; Sellis, 1988) exploits
the semantic locality of the queries by caching a set of semantically
associated results, instead of tuples or pages which are used in conventional
caching. Semantic caching can be particularly effective in
improving performance when a series of semantically associated queries
are asked whose results are likely to overlap or contain one another. Applications such as the cooperative database system (Chu et al., 1994) and geographical information systems are examples.
So far most semantic caching schemes in client-server architectures
are based on the assumption that all participating components are
full-fledged database systems. If a client sends a query A but its cache
contains answers for A ∧ B, then the client has to send a modified query A ∧ ¬B to the server to retrieve the remaining answers. In web
databases, however, web sources such as plain web pages or form-based
systems have very limited querying capabilities and cannot easily
support such complicated (e.g., negation) queries.
Our proposed semantic caching scheme is based upon the following
three key ideas:
1. Since querying capabilities of web sources are weaker than those of
queries from end users, query translation and capability mapping
are necessary in semantic caching.
2. With an efficient method to locate the best matched query from the
set of candidates, semantic caching for web sources can significantly
improve system performance.
3. Semantic knowledge can be used to transform a cache miss in conventional caching into a cache hit.
The rest of the paper is organized as follows. In Section 2, we introduce
background information and related work for semantic caching.
In Section 3, we describe our proposed intelligent semantic caching
in detail. In Section 4, the query matching technique is presented. Then,
experimental results follow in Section 5. Finally, concluding remarks
are discussed in Section 6.
2. Background
2.1. Preliminaries
Our caching scheme is implemented in a web database test-bed called
CoWeb (Cooperative Web Database) at UCLA. The architecture consists
of a network of mediator and wrapper components (Adali et al.,
1996; Garc'ia-Molina et al., 1995). The focus of the system is to use
knowledge for providing cooperative capabilities such as conceptual and
approximate web query answering, knowledge-based semantic caching,
and web triggering with fuzzy threshold conditions. The input query
is expressed in the SQL 1 language based on the mediator schema. The
mediator decomposes the input SQL into sub-queries for the wrappers
by converting the WHERE clause into disjunctive normal form, DNF
(the logical OR of the logical AND clauses), and dis-joining conjunctive
predicates. CoWeb handles selection and join predicates with any of the standard comparison operators.
Our semantic caching approach is closely related to the query satisfiability
and query containment problems (Guo et al., 1996; Ullman,
1988). Given a database D and query Q, applying Q on D is denoted as
Q(D). Then, ⟨Q(D)⟩, or ⟨Q⟩ for short, is the n-ary relation obtained by evaluating the query Q on D. Given two n-ary queries, Q1 and Q2, if ⟨Q1(D)⟩ ⊆ ⟨Q2(D)⟩ for any database D, then the query Q1 is contained in the query Q2, that is Q1 ⊑ Q2. If two queries contain each other, they are equivalent, that is Q1 ≡ Q2.
The solutions to both query satisfiability and containment problems
vary depending on the exact form of the predicate. If a conjunctive query has only selection predicates with the five comparison operators {<, <=, =, >=, >}, the query satisfiability problem can be solved in O(|Q|) time for the query Q (Guo et al., 1996). On the other hand, the conjunctive query containment problem is shown to be NP-complete (Chandra and Merlin, 1977), although in the common case where no predicate appears more than twice there appears to be a linear-time algorithm (Saraiya, 1991; Ullman, 1997). (Footnote 1: The current CoWeb implementation supports only SPJ (Select-Project-Join) type SQL.)
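To make the satisfiability test concrete, the sketch below checks a conjunction of simple range predicates over numeric attributes by intersecting per-attribute intervals; it is a simplified illustration of the linear-time case (closed bounds only, no joins), not the full algorithm of Guo et al.

    def satisfiable(conjuncts):
        """conjuncts: iterable of (attribute, op, value) with op in {'<=', '>=', '='}.

        Returns True iff the conjunction of these range predicates over numeric
        attributes has at least one satisfying assignment.
        """
        low, high = {}, {}
        for attr, op, val in conjuncts:
            if op in ("=", ">="):
                low[attr] = max(low.get(attr, float("-inf")), val)
            if op in ("=", "<="):
                high[attr] = min(high.get(attr, float("inf")), val)
        attrs = set(low) | set(high)
        return all(low.get(a, float("-inf")) <= high.get(a, float("inf")) for a in attrs)

    # (50 <= age and age <= 60 and age = 55) is satisfiable; (age >= 50 and age <= 40) is not.
    print(satisfiable([("age", ">=", 50), ("age", "<=", 60), ("age", "=", 55)]))   # True
    print(satisfiable([("age", ">=", 50), ("age", "<=", 40)]))                     # False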
2.2. Related Work
Past research areas related to semantic caching include conventional
caching (Alonso et al., 1990; Franklin et al., 1993), query satisfiability
and containment problems (Guo et al., 1996; Ullman, 1988),
view materialization (Levy et al., 1995; Larson and Yang, 1985), query
folding (Qian, 1996), and semantic query optimization (Chu et al.,
1994). Recently, semantic caching in a client-server or multi-database
architecture has received attention (Ashish et al., 1998; Dar et al.,
1996; Godfrey and Gryz, 1999; Keller and Basu, 1996; Ren and Dun-
ham, 1998; Chidlovskii and Borghoff, 2000). Deciding whether a query
is answerable or not is closely related to the problem of finding complete
rewritings of a query using views (Levy et al., 1995; Qian, 1996). The
main difference is that semantic caching techniques evaluate the given
query against the semantic views, while query rewriting techniques
rewrite a given query based on the views (Cluet et al., 1999). Further,
our proposed technique is also more suitable for web databases where
the querying capability of the sources is not compatible with that of
the clients. In such settings, it is generally impossible to get a snapshot
of the given views to materialize them since query interfaces simply do
not allow it.
Semantic caching and the corresponding indexing techniques which
require that the cached results be exactly matched with the input query
are presented in Sellis (1988). In our approach, the cached results do not
have to be exactly matched with the input query in order to compute
answers. Chen and Roussopoulos (1994) approaches semantic caching
from the query planning and optimization point of view. Dar et al. (1996) maintains cache space by coalescing or splitting semantic regions, while we maintain cache space by reference counters to allow
overlapping in the semantic regions. Further, we provide techniques to
find the best matched query under different circumstances via extended
and knowledge-based matching. In Keller and Basu (1996), predicate
descriptions derived from previous queries are used to match an input
query with the emphasis on updates in the client-server environment.
In Chidlovskii and Borghoff (2000), a semantic caching scheme for
conjunctive keyword-based web queries is introduced. Here, to quickly
process a comparison of an input query against the semantic views,
a binary signature method is used. Issues such as probe vs. remainder query and region coalescing, originally dealt with in Dar et al. (1996), are further explored there with real-life experiments.
In Ashish et al. (1998), selectively chosen sub-queries are stored in
the cache and are treated as information sources in the domain model.
To minimize the expensive cost for containment checking, the number
of semantic regions is reduced by merging them whenever possible. Ren
and Dunham (1998) defines a semantic caching formally and addresses
query processing techniques derived from Larson and Yang (1985). A
comprehensive formal framework for semantic caching is introduced
in Godfrey and Gryz (1999) illustrating issues, such as when answers
are in the cache, when answers in the cache can be recovered, etc. Adali
et al. (1996) discusses semantic caching in the mediator environment
with knowledge called invariants. Although the invariants are powerful
tools, due to their support of arbitrary user-defined functions as con-
ditions, they are mainly used for substituting a domain call. In contrast, we propose a simpler and easier way (i.e., FIND) to express
a fragment containment relationship on relations that can be acquired
(semi)-automatically.
3. Semantic Caching Technique
3.1. Semantic Caching Model
A semantic cache is essentially a hash table where an entry consists
of a (key, value) pair. The key is the semantic description based on
the previous queries. The value is a set of answers that satisfy the key.
The semantic description made of a prior query is denoted as semantic
view, V. Then, an entry in the semantic cache is denoted as (V, ⟨V⟩), using the notation ⟨V⟩ from Section 2.
To represent a submitted SQL query in a cache, we need: 1) relation names, 2) attributes used in the WHERE clause, 3) projected attributes, and 4) conditions in the WHERE clause (Larson and Yang, 1985; Ren and Dunham, 1998). For semantic caching in CoWeb, we use only "conditions in the WHERE clause" for the following reasons. In our settings,
since there is one wrapper covering one web source, and thus 1-to-1
mapping between the wrapper and the web source, the relation names
are not needed. In addition, the majority of the web sources has a fixed
output page format from which the wrapper (i.e., extractor) extracts
the specified data. That is, whether or not the input SQL query wants to
project some attributes, the output web page that the wrapper receives
always contains a set of pre-defined attribute values. Since retrieval cost
is the dominating factor in web databases, CoWeb chooses to store all
attribute values contained in the output web page in the cache. Thus
the attributes used in conditions and the projected attributes do not
need to be stored.
Furthermore, by storing all attribute values in the cache, CoWeb
can completely avoid the un-recoverability problem, which can occur
when query results cannot be recovered from the cache even if they
are found in the cache, due to the lack of certain logical information
(Godfrey and Gryz, 1999). As a result, queries stored in the semantic
cache of the CoWeb have the form "SELECT * FROM web source
WHERE condition", where the WHERE condition is a conjunctive pred-
icate. Hereafter, user queries are represented by the conditions in the WHERE clause.
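A minimal sketch of such a cache, assuming a semantic view is represented as a frozenset of (attribute, operator, value) conjuncts and the stored value is the list of full result tuples (the class and method names here are illustrative, not CoWeb's actual implementation):

    class SemanticCache:
        """Hash table of (semantic view, answers); a view is a conjunctive predicate."""

        def __init__(self):
            self.entries = {}  # frozenset of (attr, op, value) -> list of result tuples

        def store(self, condition, answers):
            # condition: iterable of (attr, op, value) conjuncts from the WHERE clause
            self.entries[frozenset(condition)] = list(answers)

        def lookup(self, condition):
            # exact matching only; extended matching is introduced in Section 4
            return self.entries.get(frozenset(condition))

    cache = SemanticCache()
    cache.store([("airline", "=", "UA"), ("stp", "<=", 1)],
                [("LAX", "JFK", "UA", 0, "B757", 12, "lunch")])
    print(cache.lookup([("stp", "<=", 1), ("airline", "=", "UA")]))  # order-insensitive hit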
3.2. Query Naturalization
Different web sources use different ontologies. Due to security or performance concerns (et al., 1998), web sources often provide
different query processing capabilities. Therefore, wrappers need to perform
the following pre-processing of an input query before submitting
it to the web source:
1. Translation: To provide a 1-to-1 mapping between the wrapper
and the web source, the wrapper needs to schematically translate the
input query.
2. Generalization & Filtration: If there is no 1-to-1 mapping between
the wrapper and the web source, the wrapper can generalize the
input query to return more results than requested and filter out the
extra data. For instance, a predicate (name='tom') can be generalized
into the predicate (name LIKE '%tom%') with an additional filter
(name='tom').
3. Specialization: When there is no 1-to-1 mapping between the wrapper
and the web source, the wrapper can specialize the input query with
multiple sub-queries and then merge the results. For instance, a predicate (1998<year<2001) can be specialized to the disjunctive predicate (year=1999 ∨ year=2000), given that year is an integer type; these operations are sketched below.
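A rough sketch of the generalization and specialization steps, assuming predicates are (attribute, operator, value) triples and that the wrapper already knows which operators the source supports (illustrative only, not CoWeb's actual code):

    def generalize(pred):
        """Turn an unsupported equality into a supported LIKE plus a filter predicate."""
        attr, op, val = pred
        if op == "=":
            native = (attr, "LIKE", f"%{val}%")   # broader query sent to the source
            return native, pred                    # original predicate kept as the filter
        return pred, None

    def specialize(attr, low, high):
        """Rewrite an integer range as a disjunction of equalities for '='-only sources."""
        return [(attr, "=", v) for v in range(low + 1, high)]  # exclusive bounds

    print(generalize(("name", "=", "tom")))
    print(specialize("year", 1998, 2001))   # [('year', '=', 1999), ('year', '=', 2000)]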
The original query from the mediator is called input query . The
generated query after pre-processing the input query is called native
query , as it is supported by the web source in a native manner (Chang
et al., 1996). Such pre-processing is called query naturalization. The
query used to filter out irrelevant data from the native query results
is called filter query (Chang et al., 1996). When the translation is not
applicable due to the lack of 1-to-1 mapping, CoWeb applies generalization
or specialization based on the knowledge regarding the querying
capability of the web source. This information is pre-determined by a
domain expert such as wrapper developer. CoWeb carries a capability
[Figure 1: architecture diagram showing the input query flowing from the mediator to the wrapper (naturalization and filtration), the cache manager and cache, and the native query flowing to the web source; see the caption below.]
Figure 1. The control flow among the mediator, wrapper, and web source. An input
query from the mediator is naturalized in the wrapper and converted to a native
query. A filter query can be generated if needed. The cache manager then checks
the native query against the semantic views stored in the cache to find a match. If a
match is found but no filter query was generated for the query, results are retrieved
from the cache and returned to the mediator. If there was a filter query generated,
then the results need to be filtered to remove the extra data. If no match is found, the
native query is submitted to the web source. After obtaining native results from the
web source, the wrapper performs post-processing and returns the final results to the
mediator. Finally the proper form of the native query (e.g., disjunctive predicates
are broken into conjunctive ones) is saved in the cache for future use.
description vector (CDV), a 5-tuple vector, to describe the querying
capability of the web source. For each attribute of the web source,
the associated 5-tuple vector carries: 1) in: describes whether the web source must be given a binding for this attribute or not; it can have two values, man for mandatory and opt for optional. 2) out: describes whether this attribute will be shown in the results or not; it has the same values as in. 3) the set of operators supported by the attribute. 4) a string value to be used as a wild card. 5) domain: represents the complete domain values of the attribute; currently three types (list, interval, and set) are supported.
The expressive power of the CDV is less than that of Vassalos and Papakonstantinou (1997), but equivalent to that of Levy et al. (1996). Unlike Levy et al. (1996), where query capabilities are described
for a whole web source, each attribute in CoWeb carries its own
description. Figure 1 illustrates the control flow among the mediator,
wrapper and web source in detail.
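A capability description vector could be represented as below; the field names for the third and fourth components are placeholders, since their labels were lost in extraction, and the example values are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class CDV:
        """Per-attribute capability description vector (5-tuple), as described in the text."""
        in_binding: str   # 'man' (mandatory) or 'opt' (optional) input binding
        out: str          # 'man' or 'opt': whether the attribute appears in results
        operators: set    # operators the source supports for this attribute
        wildcard: str     # string used as a wild card, e.g. '%'
        domain: object    # complete domain: a list, an interval, or a set

    usair_org = CDV(in_binding="man", out="man", operators={"="}, wildcard="",
                    domain=["LAX", "SFO", "JFK"])  # illustrative values only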
Example 2: Imagine a web source that supports queries on the relation
employee(name,age,title) with only "=" operator. Then, an input
query Q:(20<=age<=22 ∧ title='manager') needs to be naturalized (i.e., specialized) into a native query V:((age=20 ∧ title='manager') ∨ (age=21 ∧ title='manager') ∨ (age=22 ∧ title='manager')). Further, since semantic views use only conjunctive predicates, the native query V is partitioned into three conjunctive parts, (age=20 ∧ title='manager'), (age=21 ∧ title='manager'), and (age=22 ∧
[Figure 2: a table with columns Time, Semantic View, Semantic Index and Physical Storage, showing the cache contents at t0 (initial, with views Q1, Q2: x=1, y=1 and Q3: y=1), t1 (Q1 evicted) and t2 (Q4: x=2 inserted), with a reference counter (rc) kept per stored tuple; see the caption below.]
Figure 2. A cache replacement example. When Q1 is evicted at time t1, the corresponding reference counters are decremented. Tuple b is deleted since its reference counter drops to 0, but tuples a and c remain in the physical storage. When Q4 is inserted at time t2, a new tuple d is inserted into the physical storage and the reference counter of tuple c is increased.
title='manager'). Thus, three entries
are inserted as semantic views into the cache. 2
3.3. Semantic View Overlapping
A semantic view creates a spatial object 2 in an n-dimensional hyper-
space, which creates overlapping. For instance, two range queries over age, such as (10<=age<=20) and another overlapping range, create an overlapping region. Since excessive overlapping of the semantic views may waste the cache space for duplicate answers, the overlapped portions can be coalesced into new semantic views with the remaining semantic views modified appropriately, or they can be kept as completely separate semantic views. For details, refer to Lee and
Chu (1999). In CoWeb, unlike these approaches, the overlapping of the
semantic views is allowed to retain the original form of the semantic
views. By using a reference counter to keep track of the references of
the answer tuples in implementing the cache, the problem of storing
redundant answers in the cache is avoided (Keller and Basu, 1996).
3.4. Cache Replacement Policy
According to pre-determined evaluation functions (e.g., LRU, semantic
distance), the corresponding replacement values (e.g., access order, distance value) are computed and added to the semantic view.
(Footnote 2: This is called a semantic region in Dar et al. (1996) and a semantic segment in Ren and Dunham (1998).)
Table I. Query match types and their properties. V is a semantic view and Q is a user query.
Match type | Properties | Answers from cache | Answers from web source
Exact match | V ≡ Q | all | none
Containing match | Q ⊑ V, V ⋢ Q | all | none
Contained match | V ⊑ Q, Q ⋢ V | partial | partial
Overlapping match | V ⋢ Q, Q ⋢ V, V ∩ Q ≠ ∅ | partial | partial
Disjoint match | V ∩ Q = ∅ | none | all
Individual
tuples stored in the physical storage contain a reference counter to
keep track of the number of references. After the semantic view for
replacement has been decided, all tuples belonging to the semantic
view are found via the semantic index and their reference counters are
decremented by 1. The tuples with counter value 0 are removed from
the physical storage. The corresponding semantic view and semantic
index are then removed from the cache entries. An example is illustrated in Figure 2. Note that the objects in the semantic index can overlap, but those in the physical storage cannot. Also, there is no coalescing among overlapping or containing semantic indices.
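The eviction step can be sketched as follows, assuming each semantic view maps (via the semantic index) to the keys of its tuples and each stored tuple carries a reference counter; this is a simplified illustration of the policy described above, not CoWeb's actual code.

    def evict(view, semantic_index, physical_storage):
        """Remove a semantic view; delete tuples whose reference counter drops to zero.

        semantic_index  : view -> set of tuple keys belonging to the view
        physical_storage: tuple key -> [tuple, reference_counter]
        """
        for key in semantic_index.pop(view, set()):
            entry = physical_storage[key]
            entry[1] -= 1
            if entry[1] == 0:
                del physical_storage[key]

    storage = {"a": [("a-tuple",), 2], "b": [("b-tuple",), 1]}
    index = {"Q1": {"a", "b"}, "Q2": {"a"}}
    evict("Q1", index, storage)
    print(storage)   # tuple b removed, tuple a kept with counter 1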
3.5. Match Types
When a query is compared to a semantic view, there can be five different
match types. Consider a semantic view V in the cache and a user query
Q. When V is equivalent to Q, V is an exact match of Q. When V contains Q, V is a containing match of Q. In contrast, when V
is contained in Q, V is a contained match of Q. When V does not
contain, but intersects with Q, V is an overlapping match of Q.
Finally, when there is no intersection between Q and V, V is a disjoint
match of Q. The exact match and containing match are complete
matches since all answers are in the cache, while the overlapping and
contained match are partial matches since some answers need to be
retrieved from the web sources. The detailed properties of each match
type are shown in Table I. Note that for the contained and overlapping
matches, computing answers requires the union of the partial answers
from the cache and from the web source.
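For conjunctive range predicates over numeric attributes, the five match types can be decided by comparing per-attribute intervals; the sketch below is a simplified stand-in for the MatchType algorithm (closed <=-style bounds only, no join predicates), not the paper's exact procedure.

    import math

    def intervals(conjuncts):
        # Map each attribute to the closed interval implied by its '<=', '>=' and '=' conjuncts.
        box = {}
        for attr, op, val in conjuncts:
            lo, hi = box.get(attr, (-math.inf, math.inf))
            if op in ("=", ">="):
                lo = max(lo, val)
            if op in ("=", "<="):
                hi = min(hi, val)
            box[attr] = (lo, hi)
        return box

    def contained(a, b):
        # True iff the region described by box a lies inside the region described by box b.
        for attr, (blo, bhi) in b.items():
            alo, ahi = a.get(attr, (-math.inf, math.inf))
            if alo < blo or ahi > bhi:
                return False
        return True

    def disjoint(a, b):
        # True iff the two regions have an empty intersection.
        return any(max(a.get(attr, (-math.inf, math.inf))[0], lo) >
                   min(a.get(attr, (-math.inf, math.inf))[1], hi)
                   for attr, (lo, hi) in b.items())

    def match_type(query, view):
        q, v = intervals(query), intervals(view)
        q_in_v, v_in_q = contained(q, v), contained(v, q)
        if q_in_v and v_in_q:
            return "exact"
        if q_in_v:
            return "containing"   # all answers are already in the cache
        if v_in_q:
            return "contained"    # the cache holds a subset of the answers
        return "disjoint" if disjoint(q, v) else "overlapping"

    print(match_type([("age", ">=", 50), ("age", "<=", 59)], [("age", ">=", 40)]))  # containing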
The MatchType(Q,V) algorithm then can be derived from Table I
in a straightforward manner. Using algorithms developed for solving the satisfiability and containment problems (Saraiya, 1991; Guo et al., 1996; Ullman, 1997), the MatchType algorithm can be implemented in time linear in the sizes of Q and V (for the limited containment case when no predicate appears more than twice).
[Figure 3. The flow in the query matching technique: an input query passes through exact matching, extended matching, and knowledge-based matching to produce the query answer.]
4. Query Matching Technique
Let us now discuss the process of finding the best matched query from
the semantic views, called query matching, which consists of three
steps: exact , extended , and knowledge-based matching , as depicted in
Figure 3.
4.1. Exact Matching
Traditional caching considers only exact matches between input queries
and semantic views. If there is a semantic view that is identical to the
input query, then it is a cache hit. Otherwise, it is a cache miss.
4.2. Extended Matching
Extended matching extends the exact matching for those cases where
an input query is not exactly matched with a semantic view. Other
than the exact matching, the containing match is the next best case
since it only contains some extra answers. Then, between the contained
and overlapping matches, the contained match is slightly better. This
is because the contained match does not contain extra answers in the
cache, although both have only partial answers (see Table I). Note
that for an input query, there can be many containing or contained
matches. In the following subsections, we present how to find the best
match among the different candidates in a cache.
4.2.1. The BestContainingMatch & BestContainedMatch
Algorithms
Intuitively, we want to find the most specific semantic view which
would incur the least overhead cost to answer the user's query (i.e.,
the smallest superset of the input query). Without loss of generality,
[Figure 4. Example query containment lattice. The semantic views are built from predicates over attributes x, y, z and w (e.g., x=2 ∧ y=1 ∧ z='C' ∧ w='D', z LIKE 'C'); given the input query there are seven containing matches, two of which are minimally-containing matches, i.e., the smallest supersets of the given input query.]
we discuss the case of the containing match only (the contained match
case can be defined similarly). We first define the query containment
lattice.
Definition 3 (Query Containment Lattice)
Suppose Q is a query and the set U_Q corresponds to the set of all the containing/contained matches of Q found in the cache. Then, the query containment lattice is defined to be a partially ordered set ⟨U_Q, ⊑⟩ where the ordering ⊑ forms a lattice over the set U_Q ∪ {⊥}. For the containing match case, the greatest lower bound (glb) of the lattice is the special symbol ⊥ and the least upper bound (lub) of the lattice is the query Q itself. For the contained match case, the least upper bound (lub) of the lattice is the special symbol ⊥ and the greatest lower bound (glb) of the lattice is the query Q itself.
Definition 4 (Minimality and Maximality)
A containing match A of Q is called a minimally-containing match of Q, denoted by M(C;Q)_min, if and only if there is no other containing match B of Q such that Q ⊑ B ⊑ A in Q's query containment lattice. Symmetrically, a contained match A of Q is called a maximally-contained match of Q if and only if there is no other contained match B of Q such that A ⊑ B ⊑ Q in Q's query containment lattice.
An example of the query containment lattice is shown in Figure 4. Note
that for a given query Q, there can be several minimally-containing
matches found in the cache as illustrated in Figure 4. In such cases, the
best minimally-containing match can be selected based on such heuristics
as the number of answers associated with the semantic view, the
[Figure 5. The BestContainingMatch algorithm (pseudocode): scan the semantic views, place every containing match of the query into a bucket, reduce the bucket to the minimally-containing matches, and if more than one remains pick one heuristically and return it.]
number of predicate literals in the query, etc. The BestContainingMatch
algorithm is shown in Figure 5. It first finds the minimally-containing
matches using the containment lattice and if there are several minimally-
containing matches, then pick one heuristically.
With |V_max| being the length of the longest containing match and k being the number of containing matches, the running time of the BestContainingMatch algorithm becomes O(k^2 |V_max|) without indexing on the semantic views. Observe that the BestContainingMatch algorithm is only justified when finding the best containing match is better than selecting an arbitrary containing match followed by fil-
tering. This occurs often in web databases with a large number of
heterogeneous web sources or in multi-media databases with expensive
operations for image processing. The BestContainedMatch algorithm
is similar to the case of the BestContainingMatch algorithm.
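A hedged sketch of this selection step is given below; the containment test and the answer-count heuristic are passed in as parameters (for instance, a test like the one sketched in Section 3.5), and the function names are illustrative rather than the paper's.

    def best_containing_match(query, views, contains, num_answers):
        """Pick the best containing match for `query` among cached `views`.

        contains(a, b) : True iff semantic view/query a contains b
        num_answers(v) : number of cached answers for view v (heuristic tie-breaker)
        """
        bucket = [v for v in views if contains(v, query)]
        # Keep only the minimally-containing matches: no other candidate sits
        # strictly between the query and the candidate in the containment lattice.
        minimal = [v for v in bucket
                   if not any(u is not v and contains(v, u) and not contains(u, v)
                              for u in bucket)]
        return min(minimal, key=num_answers) if minimal else None

The double scan over the bucket gives the O(k^2 |V_max|) behaviour mentioned above; BestContainedMatch is symmetric.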
4.2.2. The BestOverlappingMatch Algorithm
For the overlapping matches, we cannot construct the query containment
lattice. Thus, in choosing the best overlapping match, we use
a simple heuristic: choose the overlapping match which overlaps most
with the given query. There are many ways to determine the meaning
of overlapping. One technique is to compute the overlapped region
between two queries in n-dimensional space, or to compare the numbers of associated answers and select the view with the maximum number of answers.
4.3. Knowledge-based Matching
According to our experiments in Section 5, partial matches (i.e., overlapping
and contained matches) constitute about 40% of all match
types for the given test sets (see Table IV). Interestingly, a partial
match can be a complete match in certain cases. For instance, for the
employee relation, a semantic view V:(gender='m') is the overlapping
match of a query Q:(name='john'). If we know that john is in fact
a male employee, then V is a containing match of Q since Q ' V.
Since complete matches (i.e., exact and containing matches) eliminate
the need to access the web source, transforming a partial match into a
complete match can improve the performance significantly.
Obtaining semantic knowledge from the web source and maintaining
it properly are important issues. In general, such knowledge can
be obtained by human experts from the application domain. In addi-
tion, database constraints, such as inclusion dependencies, can be used.
Knowledge discovery and data mining techniques are useful in obtaining
such knowledge (semi-)automatically. For instance, the association
rules tell if the antecedent in the rule is satisfied, then the consequent
of the rule is likely to be satisfied with certain confidence and support.
How to manage the obtained knowledge under addition, deletion, or
implications is also an important issue. Since the focus of this paper
is to show how to utilize such knowledge for semantic caching, the
knowledge acquisition and management issues are beyond the scope of
this paper. We assume that the semantic knowledge that we are in need
of was already acquired and was available to the cache manager. We
use a generic notation derived from Chu et al. (1994) to denote the
containment relationship between two fragments of relations.
A fragment inclusion dependency (FIND) says that values in columns
of one fragment must also appear as values in columns of other frag-
ment. Formally, a FIND states that the projection of one selection fragment σ_P(R) onto attributes A_1, ..., A_n is contained in (⊑) or equivalent to (≡) the projection of another selection fragment σ_Q(S) onto attributes B_1, ..., B_n, where P and Q are valid SELECT conditions, R and S are valid relations, and the A_i and B_i are pairwise compatible attributes. Often LHS or RHS is used to denote the left or right hand side of the FIND. A set of FINDs is denoted by Δ and is assumed to be closed under its consequences.
Definition 6 (Query Δ-Containment)
Given two n-ary queries, Q1 and Q2, if ⟨Q1(D)⟩ ⊆ ⟨Q2(D)⟩ for an arbitrary relation D obeying the fragment inclusion dependencies, then the query Q1 is Δ-contained in the query Q2, denoted by Q1 ⊑_Δ Q2. If two queries Δ-contain each other, they are Δ-equivalent, denoted by Q1 ≡_Δ Q2.
Now, using the FIND framework, we can easily denote various pieces of semantic knowledge. For instance, let us consider the classical inclusion dependency. An inclusion dependency is a formal statement of the form R[X] ⊆ S[Y], where R and S are relation names, X is an ordered list
of attributes of R, Y is an ordered list of attributes of S of the same
length as X (Johnson and Klug, 1984). For instance, the inclusion
dependency "every manager is also an employee" can be denoted as a FIND whose left-hand side selects the manager fragment of the relation and whose right-hand side selects the entire relation; a σ with no selection condition means selecting every tuple in the relation. In
addition, FIND can easily capture association rules found via data
mining techniques.
4.3.1. Transforming Partial Matches to Complete Matches
Our goal is to transform as many partial matches (i.e., overlapping
and contained matches) to complete matches (i.e., exact and containing
matches) as possible with the given FIND set Δ. The overlapping
match can be transformed into four other match types, while the
contained match can only be transformed into the exact match.
1. Overlapping Match: Given a query Q, its overlapping match V and a dependency set Δ:
- if Δ implies both Q ⊑_Δ V and V ⊑_Δ Q, then V is the exact match of Q.
- if {LHS ⊑ RHS} ∈ Δ, Q ⊑ LHS and RHS ⊑ V, then V is the containing match of Q.
- if {LHS ⊑ RHS} ∈ Δ, V ⊑ LHS and RHS ⊑ Q, then V is the contained match of Q.
- if Q ∧ V is unsatisfiable under Δ, then V is the disjoint match of Q.
Proof: Here, we only show the proof for the second case of the overlapping
match transformation. Others follow similarly as well. For the
overlapping match, from Table I, we have Q ⋢ V and V ⋢ Q. If the condition part is satisfied, then we have Q ⊑ LHS ⊑ RHS ⊑ V, thus Q ⊑ V since ⊑ is a transitive operator. This overwrites the original property Q ⋢ V. As a result, we end up with the property Q ⊑ V ∧ V ⋢ Q, which is the property of the containing match. (q.e.d.)
2. Contained Match: Given a query Q, its contained match V and a
Δ, if {LHS ≡ RHS} ∈ Δ, Q ⊑ LHS and RHS ⊑ V, then V is the exact
match of Q.
Example 7: Suppose we have a query Q:(salary=100k) and a semantic
view V:(title='manager'). Given Δ: {σ_(80k<=salary<=120k) ⊑ σ_(title='manager')} (i.e., every employee earning between 80k and 120k is a manager), V becomes a containing match of Q since Q ⊑_Δ V.
4.3.2. The Δ-MatchType Algorithm
Let us first define an augmented MatchType algorithm in the presence
of the dependency set Δ. The Δ-MatchType algorithm can be
implemented by modifying the MatchType algorithm in Section 3.5
by adding the additional input Δ and changing all ≡ to ≡_Δ and ⊑ to ⊑_Δ. The computational complexity of the Δ-MatchType algorithm includes the cost of applying each single FIND in Δ. Let |L_max| and |R_max| denote the lengths of the longest LHS and RHS in Δ and let |Δ| denote the number of FINDs in Δ; then the total computational complexity of the Δ-MatchType algorithm is O(|Δ|(|Q| + |V| + |L_max| + |R_max|)) in the worst case, when all semantic views in the cache are either overlapping or contained matches. Since the gain from transforming partial matches to complete matches is I/O-bounded and the typical length of a conjunctive query is relatively short, it is a good performance trade-off to pay the overhead cost of the CPU-bounded Δ-MatchType algorithm in many applications.
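A sketch of how a FIND can upgrade a partial match is given below; it assumes a FIND is a pair (lhs, rhs) of conjunctive fragments with lhs contained in rhs, uses a caller-supplied containment test, and covers only the containment-based upgrades discussed above (names and structure are illustrative, not the paper's pseudocode).

    def delta_match_type(query, view, base_type, finds, contains):
        """Refine a plain match type using fragment inclusion dependencies (FINDs).

        finds          : iterable of (lhs, rhs) pairs meaning lhs is contained in rhs
        contains(a, b) : True iff fragment a contains fragment b
        """
        if base_type not in ("overlapping", "contained"):
            return base_type
        for lhs, rhs in finds:
            # Q contained in LHS, LHS contained in RHS, RHS contained in V  =>  Q contained in V
            if contains(lhs, query) and contains(view, rhs):
                return "exact" if base_type == "contained" else "containing"
            if base_type == "overlapping" and contains(lhs, view) and contains(query, rhs):
                return "contained"
        return base_type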
4.4. The QueryMatching Algorithm: Putting It All
Together
The QueryMatching algorithm shown in Figure 6 finds the best semantic
view in the cache for a given input query in the order of the exact
match, containing match, contained match and overlapping match. If
all semantic views turn out to be disjoint matches, it returns a null
answer. It takes into account not only exact containment relationship
but also extended and knowledge-based containment relationships. Let |V_max| denote the length of the longest semantic view. Then the for loop takes at most O(k|Δ|(|Q| + |V_max| + |L_max| + |R_max|)) time. Assuming that in general |V_max| is longer than the others, the complexity can be simplified to O(k|Δ||V_max|). In addition, the BestContainingMatch and BestContainedMatch algorithms take at most O(k^2 |V_max|). Therefore, the total computational complexity of the QueryMatching algorithm is O(k|Δ||V_max| + k^2 |V_max|).
5. Performance Evaluation via Experiments
The experiments were performed on a Sun Ultra 2 machine with 256
MB RAM. Each test run was scheduled as a cron job and executed
between midnight and 6am to minimize the effect of the load at the
web site. The test-bed, CoWeb, was implemented in Java using jdk1.1.7.
[Figure 6. The QueryMatching algorithm (pseudocode): for each semantic view V_i in the cache, switch on Δ-MatchType(Q, V_i, Δ); return immediately on an exact match, add containing, contained and overlapping matches to their respective buckets, and skip disjoint matches; finally return the BestContainingMatch if its bucket is non-empty, else the BestContainedMatch, else the BestOverlappingMatch, else null.]
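Putting the pieces together, the overall matching loop described above can be sketched as follows; the classification and best-match selection are assumed to be provided by functions like those sketched earlier, and the structure is an illustration rather than the paper's exact pseudocode.

    def query_matching(query, views, classify, best):
        """Return (match_type, view) for the best cached view, or (None, None).

        classify(query, view) : one of 'exact', 'containing', 'contained',
                                'overlapping', 'disjoint' (possibly Delta-refined)
        best[match_type]      : function choosing the best view from a bucket
        """
        buckets = {"containing": [], "contained": [], "overlapping": []}
        for view in views:
            kind = classify(query, view)
            if kind == "exact":
                return "exact", view
            if kind in buckets:
                buckets[kind].append(view)
        for kind in ("containing", "contained", "overlapping"):
            if buckets[kind]:
                return kind, best[kind](query, buckets[kind])
        return None, None        # all views were disjoint matches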
We used the following schema, available from the USAir web site (see Footnote 3). Among its 7 attributes, both org and dst are mandatory attributes, and thus they must always be bound in a query.
USAir(org, dst, airline, stp, aircraft, flt, meal)
5.1. Generating Synthetic Test Queries
Semantic caching inevitably behaves very sensitively according to the
semantic locality (i.e., the similarity among queries) of the test queries.
Because of the difficulty of obtaining real-life test queries from such web
sources, synthetic test sets with different semantic localities were generated
to evaluate our semantic caching scheme. Two factors of the query generator were manipulated using distributions of the form D = {N_0:P_0, N_1:P_1, ...}, where N_i is the number of attributes used in the WHERE condition and P_i is the percentage assigned to the i-th item.
1. The number of the attributes used in the WHERE condition (NUM): A test query with a large number of attribute conditions (e.g., age=20 ∧ 40k<sal<50k ∧ title='manager') is more specific
than that of a small number of attribute conditions (e.g., age=20).
Therefore, a test set with many such specific queries is likely to perform
badly in semantic caching since there are not many exact or containing
(Footnote 3: Flight schedule site at http://www.usair.com/. At the time of writing, we noticed that the web site has slightly changed its web interface and schema since then.)
matches. Let us denote the number of attributes used in the WHERE
condition as N i (i.e., N 3 means that 3 attributes are used in the WHERE
condition). For instance, the input distribution D = {N_0:30%, N_1:20%, N_2:15%, N_3:13%, N_4:12%, ...} can be read as "generate more queries with short conditions than ones with long conditions; the probabilities of using 0, 1, 2, 3, 4, ... attributes are 30%, 20%, 15%, 13%, 12%, ..., respectively".
2. The name of the attributes used in the WHERE condition (NAME): A test set containing many queries asking about common
attributes is semantically skewed and is likely to perform well with
respect to semantic caching. Therefore, different semantic localities
can be generated by manipulating the name of the attributes used
in the WHERE condition. For instance, the following input distribution
D = {org:14.3%, dst:14.3%, airline:14.3%, stp:14.3%, aircraft:14.3%, flt:14.3%, meal:14.3%} can be read as "all 7 attributes are equally likely to be used in the test set". As another example, the fact that flight number
information is more frequently asked than meal information can be
represented by assigning a higher percentage value to the flt attribute
than the meal attribute.
5.2. Query Space Effect
Another important aspect in generating synthetic test queries is the
Query Space that is the sum of all the possible test queries. For instance,
for the NUM-factor input distribution D = {N_0:0%, N_1:0%, N_2:100/6%, N_3:100/6%, N_4:100/6%, N_5:100/6%, N_6:100/6%, N_7:100/6%}, the effects
of applying this distribution to 100 query space and 1 million query
space are different. That is, the occurrence of a partial or full match in
the case with 100 query space is much higher than the occurrence of
those in the case with 1 million query space. To take into account this
effect, we need to adjust the percentage distribution.
Formally, given the n-attribute list {A_1, ..., A_m | A_{m+1}, ..., A_n}, among which A_1, ..., A_m are mandatory attributes and the rest are optional attributes, together with their respective domain value lists D_1, ..., D_n, the total number of possible query combinations (the query space) T satisfies:
T = ( ∏_{i=1..m} |D_i| ) × ( ∏_{j=m+1..n} (|D_j| + 1) )    (1a)
where |D_j| is the cardinality of the values in domain D_j (the +1 accounts for leaving an optional attribute unbound).
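The count in Equation 1a can be computed directly; the sketch below assumes mandatory attributes must be bound to one of their domain values while optional attributes may also be left unbound (the domain sizes in the example call are invented, not the USAir schema's actual domains, which yield the 32,400 figure used in the paper).

    def query_space(mandatory_domains, optional_domains):
        """Total number of distinct conjunctive queries per Equation 1a."""
        total = 1
        for d in mandatory_domains:
            total *= len(d)            # a mandatory attribute is always bound
        for d in optional_domains:
            total *= len(d) + 1        # an optional attribute may be omitted
        return total

    print(query_space([range(10), range(10)],
                      [range(3), range(2), range(4), range(5), range(2)]))   # 108000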
According to the calculation using Equation 1a, for instance, a total
of 32,400 different SQL queries (i.e., query space) can be generated
Table II. Breakdown between the number of SELECT conditions and the query space. Columns: number of attributes (NUM), query space size, query space percentage. Only the following rows were recovered:
6 attributes: 12672 (39.2%)
Total: 32400 (100%)
from the given USAir schema. The breakdown of query space is shown
in Table II. Using the query space size and percentage, now we can
adjust the percentage distribution for the input of the query generator.
For instance, to make a uniform distribution in terms of the number
of attributes, we give the following input distribution to the generator: D = {N_0:0%, N_1:0%, N_2:100/6%, N_3:100/6%, N_4:100/6%, N_5:100/6%, N_6:100/6%, N_7:100/6%}.
5.3. Test Sets
The four test sets (uni-uni, uni-sem, sem-uni, and sem-sem) were
generated by assigning different values to the two input parameters
(NUM and NAME) after adjusting the query space effect. They are
shown in Table III. uni and sem stand for uniform and semantic
distribution, respectively. The total query space was set to 32,400.
Each test set with 1,000 queries was randomly picked based on the
two inputs. The sem values for the input NUM were set to mimic the
Zipf distribution (Zipf, 1949), where it is shown that humans tend to
ask short and simple questions more often than long and complex ones.
The sem values for the input NAME were set arbitrarily, assuming that
airline or stopover information would be more frequently asked than
others. Figure 7 shows the different access patterns of the uniform and
semantic distribution in terms of the chosen attribute names.
The following is an example of a typical test query generated.
SELECT org, dst, airline, stp, aircraft, flt, meal
[Figure 7: two panels plotting, for each test query number, which of the attributes (org, dst, airline, stp, aircraft, flt, meal) appear in the query; panel a shows the uniform distribution and panel b the semantic distribution.]
Figure 7. Attribute name access patterns. Shaded area means the attribute is being
used in the test query. Since both org and dst attributes are mandatory, they are
always chosen (thus completely shaded). Since case b has more semantics, its access pattern is more skewed (e.g., the flt attribute is seldom accessed in b while it is accessed as frequently as the other attributes in a). Case b also shows that the airline attribute is favored in the test queries (its row is mostly shaded) over the aircraft attribute.
Table III. Uniform and semantic distribution values used for generating the four test sets.
Scheme | Number of attributes used (NUM): 0, 1, 2, 3, 4, 5, 6, 7
uni | 0% 0% 16.7% 16.7% 16.7% 16.7% 16.7% 16.7%
sem | 0% 0% 40% 25% 15% 10% 5% 5%
Scheme | Name of attribute used (NAME): org, dst, airline, stp, aircraft, flt, meal
uni | 100% 100% 20% 20% 20% 20% 20%
sem | 100% 100% 40% 25% 10% 5% 20%
FROM USAir
WHERE ... AND 6<=flt AND 1<=stp<=2 AND meal='supper' AND aircraft='Boeing 757-200'
5.4. Performance Metrics
1. Average Response Time T: T is the sum of the response times of the n queries in the test set divided by n. To eliminate the initial noise when an experiment first
starts, we can use T from the k queries of the sliding window instead
of n queries in the query set.
2. Cache Coverage Ratio R c : Since the traditional cache "hit ratio"
does not measure the effect of partial matching in semantic caching, we
propose to use a cache coverage ratio as a performance metric. Given
a query set consisting of n queries q_1, ..., q_n, let N_i be the number of answers found in the cache for the query q_i, and let M_i be the total number of answers for the query q_i. Then R_c = (1/n) Σ_i R_{q_i}, where the query coverage ratio is R_{q_i} = N_i / M_i. For instance, the query coverage ratio R_q of the exact match and containing
match is 1 since all answers must come from the cache. Similarly, R q
of the disjoint match is 0 since all answers must be retrieved from the
web source.
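The cache coverage ratio can be computed as below, assuming each query's coverage is the fraction of its answers found in the cache (N_i / M_i), which reduces to 1 for exact and containing matches and 0 for disjoint matches; the function is a minimal sketch under that assumption.

    def cache_coverage(found, total):
        """R_c over a query set: found[i] answers came from the cache out of total[i]."""
        ratios = [n / m if m else 0.0 for n, m in zip(found, total)]
        return sum(ratios) / len(ratios)

    # Three queries: a containing match (all 8 answers cached), a partial match
    # (3 of 10 cached) and a disjoint match (0 of 5 cached).
    print(cache_coverage([8, 3, 0], [8, 10, 5]))   # about 0.433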
5.5. Experimental Results
In Figure 8, we compared the performance of three caching cases: 1) no caching (NC), 2) conventional caching using exact matching (CC), and 3) semantic caching using the extended matching (SC). Both
cache sizes were set to 200KB. Regardless of the types of test set,
NC shows no difference in performance. Since the number of exact
matches was very small in all the test sets, CC shows only a little
improvement in performance as compared to the NC case. Due to the
randomness of the test sets and large number of containing matches in
our experiments, SC exhibits a significantly better performance than
CC. The more semantics the test set has (thus the more similar queries
are found in the cache), the less time it takes to determine the answers.
Next, we studied the behavior of semantic caching with respect to
cache size. We set the replacement algorithm as LRU and ran four test
sets with cache sizes equal to 50KB, 100KB, 150KB, and unlimited.
Because the number of answers returned from the USAir web site is,
on average, small, the cache size was set to be small. Each test set
contained 1,000 synthetic queries. Figure 9.a and Figure 9.b show the
T and R c for semantic caching with selected cache sizes. The graphs
show that the T decreases and the R c increases proportionally as
cache size increases. This is due to the fact that there are fewer cache
4 In our experiments, c was set to 0.5 for the overlapping and contained match
when
[Figure 8. Performance comparison of semantic caching with conventional caching: average response time (millisec) for the four test sets (uni-uni, uni-sem, sem-uni, sem-sem) under NC, CC and SC.]
[Figure 9. Performance comparison of the four test sets with selected cache sizes (50KB, 100KB, 150KB, unlimited): panel a shows the average response time T and panel b the cache coverage ratio R_c.]
replacements. The degree of the semantic locality in the test set plays
an important role. The more semantics the test set has, the better it
performs. Due to no cache replacements, there is only a slight difference
for the unlimited cache size in the R c graph. The same behavior occurs
in the R c graph for the cache size with 150KB for the sem-uni and
sem-sem test sets.
Next, we compared the performance difference between the LRU
(least recently used) and MRU (most recently used) replacement algorithms. Due to limited space, we only show the uni-uni and sem-sem
test set results. For this comparison, 10,000 synthetic queries were
generated in each test set and the cache size was fixed to be 150KB.
Figure 10.a shows the T of the two replacement algorithms. For both
test sets, LRU outperformed MRU. Further, the difference of the T
between LRU and MRU increased as the semantic locality increased.
This is because when there is a higher semantic locality, it is very
likely that there is also a higher temporal locality. Figure 10.b shows
the R c of the two replacement algorithms. Similar to the T case, LRU
[Figure 10. Performance comparison of the uni-uni and sem-sem test sets under the LRU and MRU replacement algorithms over 10,000 queries: panel a shows the average response time T and panel b the cache coverage ratio R_c, each plotted against the number of queries.]
Table IV. Distribution of match types for four test sets.
Test sets Exact Containing Contained Overlapping Disjoint
uni-sem 0.5% 27.7% 12.8% 36.8% 22.2%
sem-uni 5.1% 44.6% 12.0% 25.1% 13.2%
sem-sem 6.1% 52.0% 15.1% 18.0% 13.6%
Average 3.025% 34.35% 11.925% 28.0% 23.8%
outperformed MRU in the R c case as well. Note that the sem-sem case
in the R c graph of the LRU slightly increased as the number of test
queries increased while it stayed fairly flat in the uni-uni case. This is
because when there is a higher degree of semantic locality in the test
set such as in sem-sem case, the replacement algorithm does not lose its
querying pattern (i.e., semantic locality). That is, the number of exact
and containing matches is so high (i.e., 58.1% combined in Table IV)
that most answers are found in the cache, as opposed to a web source.
On the other hand, in the sem-sem case, the R c graph of the MRU
decreased as the number of test queries increased. This is due to the fact that MRU loses its querying pattern by swapping out the most recently used items from the cache.
Table IV shows the average percentages of the five match types
based on 1,000 queries for four test sets. The fact that partial matches
(contained and overlapping matches) constitute about 40% shows the
potential usage of the knowledge-based matching technique.
[Figure 11. Performance comparison of the knowledge-based matching: knowledge-based matching ratio plotted against the semantic knowledge size (%) for the test sets.]
Figure 11 shows an example of the knowledge-based matching using semantic knowledge. We used a set of induced rules acquired by techniques developed in Chu et al. (1994) as semantic knowledge. Figure 11 shows the knowledge-based matching ratios (the number of knowledge-based matches relative to the number of partial matches) with selected semantic knowledge sizes. The semantic knowledge size is
represented as a percentage against the number of semantic views. For
instance, a size of 100% means that the number of induced rules used as
semantic knowledge equals the number of semantic views in the cache.
Despite a large number of partial matches in the uni-uni and uni-sem
sets shown in Table IV, it is interesting to observe that the knowledge-based
matching ratios were almost identical for all test sets. This is due
to the fact that many of the partially matched semantic views in the
uni-uni and uni-sem sets have very long conditions and thus fail to
match the rules. Predictably, the effectiveness of the knowledge-based
matching depends on the size of the semantic knowledge.
6. Conclusions
Semantic caching via query matching techniques for web sources is
presented. Our scheme utilizes the query naturalization to cope with the
schematic, semantic, and querying capability differences between the
wrapper and web source. Further, we developed a semantic knowledge-based
algorithm to find the best matched query from the cache. Even
if the conventional caching scheme yields a cache miss, our scheme can
potentially derive a cache hit via semantic knowledge. Our algorithm
is guaranteed to find the best matched query among many candidates,
based on the algebraic comparison of the queries and semantic context
of the applications. To prove the validity of our proposed scheme, a
set of experiments with different test queries and with different degrees
of semantic locality were performed. Experimental results confirm the
effectiveness of our scheme for different cache sizes, cache replacement
algorithms and semantic localities of test queries. The performance
improves as the cache size increases, as the cache replacement algorithm
retains more querying patterns, and as the degree of the semantic
locality increases in the test queries. Finally, an additional 15 to 20 %
improvement in performance can be obtained using knowledge-based
matching. Therefore, our study reveals that our semantic caching technique
can significantly improve the performance of semantic caching in
web databases.
Semantic caching at the mediator-level requires communication with
multiple wrappers and creates horizontal and vertical partitions as well
as joining of input queries (Godfrey and Gryz, 1999), which result
in more complicated cache matching. Further research in this area is
needed. Other cache issues that were not covered in this paper, such
as selective materializing, consistency maintenance and indexing, also
need to be further investigated. For instance, due to the autonomous
and passive nature of web sources, wrappers and their semantic caches
are not aware of web source changes. More techniques need to be developed
to incorporate such web source changes into the cache design
in web databases.
--R
Query Caching and Optimization in Distributed Mediator Systems.
Data Caching Issues in an Information Retrieval System.
Intelligent Caching for Information Mediators: A KR Based Approach.
Semantic Caching of Web Queries.
Optimal Implementation of Conjunctive Queries in Relational Databases.
Boolean Query Mapping Across Heterogeneous Information Sources.
Using LDAP Directory Caches.
A Scalable and Extensible Cooperative Information System.
Semantic Data Caching and Replacement.
Caching for Client-Server Database Systems
Database Techniques for the World-Wide Web: A Survey.
Integrating and Accessing Heterogeneous Information Sources in TSIMMIS.
Query Caching for Heterogeneous Databases.
Answering Queries by Semantic Caches.
Query Folding with Inclusion Dependencies.
Solving Satisfiability and Implication Problems in Database Systems.
Testing Containment of Conjunctive Queries under Functional and Inclusion Dependencies.
A Predicate-based Caching Scheme for Client-Server Database Architectures
Caching via Query Matching for Web Sources.
Answering Queries Using Views.
Querying Heterogeneous Information Sources Using Source Descriptions.
Query Folding.
Semantic Caching and Query Processing.
Subtree Elimination Algorithms in Deductive Databases.
Intelligent Caching and Indexing Techniques For Relational Database Systems.
Principles of Database and Knowledge-Base Systems
Information Integration Using Logical Views.
Describing and Using Query Capabilities of Heterogeneous Sources.
Advanced Database Systems.
Human Behaviour and the Principle of Least Effort.
--TR
Intelligent caching and indexing techniques for relational database systems
Data caching issues in an information retrieval system
Subtree-elimination algorithms in deductive databases
Query answering via cooperative data inference
The implementation and performance evaluation of the ADMS query optimizer
Answering queries using views (extended abstract)
Solving satisfiability and implication problems in database systems
Query caching and optimization in distributed mediator systems
CoBase: a scalable and extensible cooperative information system
Advanced database systems
Database techniques for the World-Wide Web
Using LDAP directory caches
caching via query matching for web sources
Principles of Database and Knowledge-Base Systems
Testing containment of conjunctive queries under functional and inclusion dependencies
Boolean Query Mapping Across Heterogeneous Information Sources
Query Folding with Inclusion Dependencies
Query Folding
Information Integration Using Logical Views
Caching for Client-Server Database Systems
Semantic Data Caching and Replacement
Querying Heterogeneous Information Sources Using Source Descriptions
Describing and Using Query Capabilities of Heterogeneous Sources
Answering Queries by Semantic Caches
Semantic caching of Web queries
A predicate-based caching scheme for client-server database architectures
Optimal implementation of conjunctive queries in relational data bases
--CTR
Kai-Uwe Sattler , Ingolf Geist , Eike Schallehn, Concept-based querying in mediator systems, The VLDB Journal The International Journal on Very Large Data Bases, v.14 n.1, p.97-111, March 2005
Björn Þór Jónsson, María Arinbjarnar, Bjarnsteinn Þórsson, Michael J. Franklin, Divesh Srivastava, Performance and overhead of semantic cache management, ACM Transactions on Internet Technology (TOIT), v.6 n.3, p.302-331, August 2006 | web database;semantic caching;query matching;semantic locality
513542 | Robust Learning with Missing Data. | This paper introduces a new method, called the robust Bayesian estimator (RBE), to learn conditional probability distributions from incomplete data sets. The intuition behind the RBE is that, when no information about the pattern of missing data is available, an incomplete database constrains the set of all possible estimates and this paper provides a characterization of these constraints. An experimental comparison with two popular methods to estimate conditional probability distributions from incomplete data (Gibbs sampling and the EM algorithm) shows a gain in robustness. An application of the RBE to quantify a naive Bayesian classifier from an incomplete data set illustrates its practical relevance. | Introduction
Probabilistic methods play a central role in the development of Artificial
Intelligence (AI) applications because they can deal with the degrees
of uncertainty so often embedded in real-world problems within a sound
mathematical framework. Unfortunately, the number of constraints needed
to define a probabilistic model, the so called joint probability distribution,
grows exponentially with the number of variables in the domain, thus making
the direct use of these methods infeasible.
However, the assumption that some of the variables are stochastically
independent, given a subset of the remaining variables in the domain, dramatically
reduces the number of constraints needed to specify the probabilistic
model. In order to exploit this property, researchers developed a
new formalism to capture, in graphical terms, the assumptions of independence
in a domain, thus reducing the number of probabilities to be assessed.
This formalism is known as Bayesian Belief Networks (bbns) [Pearl, 1988].
bbns provide a compact representation for encoding probabilistic information
and they may be easily extended into a powerful decision-theoretic
formalism called Influence Diagrams. More technically, a bbn is a directed
acyclic graph in which nodes represent stochastic variables and links represent
conditional dependencies among variables. A variable bears a set of
possible values and may be regarded as a set of mutually exclusive and exhaustive
states, each representing the assignment of a particular value to the
variable. A conditional dependency links a child variable to a set of parent
variables and it is defined by the set of conditional probabilities of each state
of the child variable given each combination of states of the parent variables
in the dependency.
In the original development of bbns, domain experts were supposed to
be their main source of information: their independence assumptions, when
coupled with their subjective assessment of the conditional dependencies
among the variables, produce a sound and compact probabilistic representation
of their domain knowledge. However, bbns have a strong statistical
root and, during the past few years, this root prompted for the development
of methods able to learn bbns directly from databases of cases rather than
from the insight of human domain experts [Cooper and Herskovitz, 1992,
Buntine, 1994, Heckerman and Chickering, 1995]. This choice can be extremely
rewarding when the domain of application generates large amounts
of statistical information and aspects of the domain knowledge are still unknown
or controversial, or too complex to be encoded as the subjective probabilities
of a few domain experts.
Given the nature of bbns, there are two main tasks involved in the learning
process of a bbn from a database: the induction of the graphical model
of conditional independence best fitting the database at hand, and the extraction
of the conditional probabilities defining the dependencies in a given
graphical model from the database. Under certain constraints, this second
task can be accomplished exactly and efficiently when the database is complete
[Cooper and Herskovitz, 1992, Heckerman and Chickering, 1995]. We
can call this assumption the "Database Completeness" assumption. Un-
fortunately, databases are rarely complete: unreported, lost, and corrupted
data are a distinguished feature of real-world databases. In order to move
on applications, methods to learn bbns have to face the challenge of learning
from databases with unreported data. During the past few years, several
methods have been proposed to learn conditional probabilities in bbns
from incomplete databases, either deterministic, such as sequential updating
[Spiegelhalter and Lauritzen, 1990], and [Cowel et al., 1996], or the EM-algorithm
[Dempster et al., 1977], or stochastic such as the Gibbs Sampling
[Neal, 1993]. All these methods share the common assumption that unreported
data are missing at random. Unfortunately this assumption is as
unrealistic as the "Database Completeness" assumption because in the real
world there is often a reason why data are missing.
This paper introduces a new method to learn conditional probabilities
in bbns from incomplete databases which does not rely on the "Missing at
Random" assumption. The major feature of our method is the ability to
learn bbns which are robust with respect to the pattern of missing data
and to return sets of estimates, instead of point estimates, whose width is a
monotonically increasing function of the amount of missing information in the database.
The remainder of this paper is structured as follows. In the next Section we
will start by establishing some notation and reviewing the background and
the motivation of this research. In Section 3 we will outline the theoretical
framework of our robust approach to learn probabilities in bbns and in Section
4 we will describe in detail the algorithms we used to implement such
an approach. In Section 5 we will compare the behavior of the system implemented
using our method with an implementation of the Gibbs Sampling
and finally we will draw our conclusions.
Figure
1: The graphical structure of a bbn.
Background
A bbn is defined by a set of variables X = {X_1, ..., X_n} and a network
structure S defining a graph of conditional dependencies among the elements
of X. We will limit attention to variables which take a finite number
of values and therefore, for each variable X_i, we can define a set of
mutually exclusive and exhaustive states x_i1, ..., x_im representing
the assignments to X_i of all its m possible values.
Figure 1 shows a bbn we will use to illustrate our notation. In this case
we have X = {X_1, X_2, X_3, X_4} and we assume that each variable may take
two values, true (1) and false (0).
The network structure S defines a set of conditional dependencies among
the variables in X. In the case reported in Figure 1, there are two dependencies:
one linking the variable X_3 to its parents X_1 and X_2, which are
marginally independent, and the other one linking the variable X_4 to its
parent X_3. Since there are no direct links between X_4 and the nodes X_1 and X_2,
and X_3 separates X_4 from these nodes, we have that X_4 is conditionally independent
of X_1 and X_2 given X_3. The conditional dependencies are defined by a
set of twelve conditional probabilities: eight are of the form p(X_3 | X_1, X_2)
and four of the form p(X_4 | X_3). These conditional probabilities and the marginal
probabilities of the parent nodes X_1 and X_2, that is p(X_1) and p(X_2),
are then used to write down the joint probability of each
network configuration, that is, of each complete assignment of values to X_1, X_2, X_3 and X_4.
Figure
2: The graphical structure of a bbn with the associated parameters.
Using a generic structure S, the joint probability of a particular set of values
of the variables in X, say x = (x_1, ..., x_n), can be decomposed as
p(x) = prod_i p(x_i | x_pa(X_i)),
where pa(X_i) are the parent nodes of X_i, and x_pa(X_i) denotes the combination
of states of pa(X_i) in x. If X_i is a root node, then p(x_i | x_pa(X_i)) = p(x_i).
In the following we will denote the event X_i = x_ij simply as x_ij and a combination
of states of pa(X_i) as pa(x_i).
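To make the factorization concrete, the short Python sketch below computes the joint probability of a configuration of the network in Figure 1 from its conditional probability tables. The numeric table values are hypothetical placeholders chosen only for illustration; they are not the probabilities used elsewhere in the paper.

    # Joint probability of a configuration in the bbn of Figure 1:
    # p(x1, x2, x3, x4) = p(x1) p(x2) p(x3 | x1, x2) p(x4 | x3).
    # The CPT values below are hypothetical, for illustration only.
    p_x1 = {1: 0.3, 0: 0.7}                       # marginal of X1
    p_x2 = {1: 0.6, 0: 0.4}                       # marginal of X2
    p_x3 = {(1, 1): 0.9, (1, 0): 0.5,             # p(X3 = 1 | X1, X2)
            (0, 1): 0.4, (0, 0): 0.1}
    p_x4 = {1: 0.8, 0: 0.2}                       # p(X4 = 1 | X3)

    def joint(x1, x2, x3, x4):
        # Each factor is the conditional probability of a node given its parents.
        f3 = p_x3[(x1, x2)] if x3 == 1 else 1.0 - p_x3[(x1, x2)]
        f4 = p_x4[x3] if x4 == 1 else 1.0 - p_x4[x3]
        return p_x1[x1] * p_x2[x2] * f3 * f4

    print(joint(1, 0, 1, 0))   # p(X1=1, X2=0, X3=1, X4=0)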
We are given a database of cases D = {C_1, ..., C_l}, each case C_k
being a set of entries. We will assume that the cases are
mutually independent. Given a network structure S, the task here is to learn
the conditional probabilities defining the dependencies in the bbn from D.
We will consider the conditional probabilities defining the bbn as parameters
theta, so that the joint probability of each case in the database, say C_k, is
p(C_k | theta) = prod_i p(x_i | pa(x_i), theta_i),
where theta_i parameterizes the probability of x_ij given the parent configuration
pa(x_i). For instance, in the network in Figure 1 we need only eight parameters
to describe all the dependencies: two of them are needed to define
the probability distributions of X_1 and X_2, say theta_1 and theta_2,
and the other six to define the probability distributions of X_3 given X_1 and X_2
and of X_4 given X_3; the correspondence is given in Figure 2.
Table 1: Parameters defining the conditional probabilities of the bbn displayed
in Figure 1.
Our task is to assess the set of parameters induced by the database D
over the network structure S. The classical statistical parameter estimation
provides the basis to learn these parameters. When the database is complete,
the common approach is Maximum Likelihood which returns the parameter
values that make the database most likely. Given the values of X in the
database, the joint probability of D is
p(D | theta) = prod_{k=1}^{l} p(C_k | theta).
This is a function of the parameters theta only, usually called the likelihood
l(theta). The Maximum Likelihood estimate of theta is the value which
maximizes l(theta). For discrete variables the Maximum Likelihood estimates
of the conditional probabilities are the observed relative frequencies of the
relevant cases in the database. Let n(x_ij | pa(x_i)) be the observed frequency
of cases in the database with x_ij, given the parent configuration pa(x_i), and
let n(pa(x_i)) be the observed frequency of cases with pa(x_i). The Maximum
Likelihood estimate of the conditional probability of x_ij given pa(x_i) is simply
n(x_ij | pa(x_i)) / n(pa(x_i)).
Table 2: An artificial database for the bbn displayed in Figure 1.
Consider for instance the database given in Table 2. The Maximum
Likelihood estimate of theta_3 is 1/3.
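As a concrete illustration of the relative-frequency estimate, the following sketch counts the joint occurrences of a child state and a parent configuration in a complete database and divides by the count of the parent configuration. The toy cases are hypothetical and only mimic the structure of Table 2; they are chosen so that the estimate of p(X3 = 1 | X1, X2) comes out to 1/3, matching the worked example.

    # Each case is a dict {variable: value}; the database is assumed complete.
    # Hypothetical cases, only mimicking the structure of Table 2.
    database = [
        {"X1": 1, "X2": 0, "X3": 1, "X4": 0},
        {"X1": 1, "X2": 0, "X3": 0, "X4": 1},
        {"X1": 1, "X2": 0, "X3": 0, "X4": 0},
    ]

    def ml_estimate(db, child, child_value, parents, parent_values):
        # n(x_ij | pa(x_i)) / n(pa(x_i)): the relative frequency in the database.
        n_pa = sum(1 for c in db
                   if all(c[p] == v for p, v in zip(parents, parent_values)))
        n_child = sum(1 for c in db
                      if c[child] == child_value
                      and all(c[p] == v for p, v in zip(parents, parent_values)))
        return n_child / n_pa if n_pa > 0 else None

    # Estimate of p(X3 = 1 | X1 = 1, X2 = 0): prints 1/3 for the toy cases above.
    print(ml_estimate(database, "X3", 1, ["X1", "X2"], [1, 0]))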
The Bayesian approach extends the classical parameter estimation technique
in two ways: (i) the set of parameters theta are regarded as random
variables, and (ii) the likelihood is augmented with a prior, say pi(theta), representing
the observer's belief about the parameters before observing any
data. Given the information in the database, the prior density is updated
into the posterior density using Bayes' theorem, and hence
pi(theta | D) = pi(theta) p(D | theta) / integral pi(theta) p(D | theta) d theta.
The Bayesian estimate of theta is then the expectation of theta under the posterior
distribution.
The common assumptions in the Bayesian approach to learn a bbn with
discrete variables are that the parameters (i) are independent of each
other, and (ii) have a Dirichlet distribution, which simplifies to a Beta distribution
for binary variables. Assumption (i) allows us to factorise the joint
prior density of theta as
pi(theta) = prod_i pi(theta_i),
thus allowing "local computations", while (ii) facilitates computation of
the posterior density by taking advantage of conjugate analysis. Full details
are given for instance by [Spiegelhalter and Lauritzen, 1990]. Here we
just outline the standard conjugate analysis with a Dirichlet prior. Consider
for instance the variable X_i and let theta_i = (theta_i1, ..., theta_im) be the parameters associated
to the conditional probabilities p(x_ij | pa(x_i)), so that sum_j theta_ij = 1.
A Dirichlet prior for theta_i, denoted as D(alpha_i1, ..., alpha_im), is
a continuous multivariate distribution with density function proportional to
prod_j theta_ij^(alpha_ij - 1).
The hyper-parameters alpha_ij have the following interpretation: alpha_i+ = sum_j alpha_ij
can be regarded as an imaginary sample size needed to formulate this prior
information about theta_i, and the mean of theta_ij is alpha_ij / alpha_i+, j = 1, ..., m. Note that
this prior mean is also the probability of x_ij given the parent configuration
pa(x_i). Hence, a uniform prior with alpha_ij = 1 for all j would assign uniform
probabilities to each x_ij, given the parent configuration pa(x_i).
With complete data, the posterior distribution of the parameters can
be computed exactly using standard conjugate analysis, yielding a posterior
density D(alpha_i1 + n(x_i1 | pa(x_i)), ..., alpha_im + n(x_im | pa(x_i))), and the posterior
means represent their estimates. Thus the Bayes estimate of the conditional
probability of x_ij is (alpha_ij + n(x_ij | pa(x_i))) / (alpha_i+ + n(pa(x_i))).
Consider again the example in Table 2. If the prior distribution of theta_3 is
D(1, 1), so that a priori the conditional probability of X_3 = 1 given its parent configuration
is 0.5, the posterior distribution would be updated into D(2, 3), yielding a
Bayesian estimate 2/5.
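The conjugate update described above can be written in a few lines: with a Dirichlet prior and the observed counts for one parent configuration, the posterior is again Dirichlet and the Bayes estimates are its means. The sketch below assumes this standard form and reproduces the D(2, 3) example.

    def dirichlet_posterior(prior, counts):
        # Posterior hyper-parameters: alpha_ij + n(x_ij | pa(x_i)) for each state.
        return [a + n for a, n in zip(prior, counts)]

    def bayes_estimates(posterior):
        # Posterior means: (alpha_ij + n_ij) / sum_j (alpha_ij + n_ij).
        total = sum(posterior)
        return [a / total for a in posterior]

    # Worked example for theta_3: uniform prior D(1, 1), one case with X3 = 1 and
    # two cases with X3 = 0 for the same parent configuration.
    posterior = dirichlet_posterior([1, 1], [1, 2])   # -> D(2, 3)
    print(bayes_estimates(posterior))                 # -> [0.4, 0.6], i.e. 2/5 for X3 = 1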
Unfortunately, the situation is quite different when some of the entries
in the database are missing. When a datum is missing, we have to face a
set of possible complete databases, one for each possible value of the variable
for which the datum is missing. Exact analysis would require to compute
the joint posterior distribution of the parameters given each possible
completion of the database, and then to mix these over all possible com-
pletions. This is apparently infeasible. A deterministic method, proposed
by [Spiegelhalter and Lauritzen, 1990] and improved by [Cowel et al., 1996],
provides a way to approximate the exact posterior distribution by processing
data sequentially. However, [Spiegelhalter and Cowel, 1992] show that
this method is not robust enough to cope with systematically missing data:
in this case the estimates rely heavily on the prior distribution.
When deterministic methods fail, current practice has to resort to an
estimate of the posterior means using stochastic methods. Here we will
describe only the most popular stochastic method for Bayesian inference:
the Gibbs Sampling. The basic idea of the Gibbs Sampling algorithm is (i)
for each parameter ' i sample a value from the conditional distribution of ' i
given all the other parameters and the data in the database, (ii) repeat this
for all the parameters, and (iii) iterates these steps several times. It can be
proved that, under broad conditions, this algorithm provides a sample from
the joint posterior distribution of ' given the information in the database.
This sample can then be used to compute empirical estimates of the posterior
means or any other function of the parameters. In practical applications,
the algorithm iterates a number of times and then, when stability seems
to be reached, a final sample from the joint posterior distribution of the
parameters is taken [Buntine, 1996].
When some of the entries in the database are missing, the Gibbs Sampling
treats the missing data as unknown parameters, so that, for each
missing entry, a values is sampled from the conditional distribution of the
corresponding variables, given all the parameters and the available data.
The algorithm is then iterated to reach stability, and then a sample from
the joint posterior distribution is taken which can be used to provide empirical
estimates of the posterior means [Thomas et al., 1992]. This approach
relies on the assumption that the unreported data are missing at random.
When the "Missing at random" assumption is violated, as when data are
systematically missing, such methods suffer a dramatic decrease in accuracy.
The completion of the database using the available information in
the database itself leads the learning system to ascribe the missing data
to known values in the database and, in the case of systematically missing
data, to distort the estimates of the probabilities in the database. It is apparent
that this behavior can prevent the applicability of the learning method
to generate bbns because, for the general case, it can produce unreliable
estimates.
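As a minimal illustration of the idea of treating missing entries as additional unknowns, the sketch below runs a Gibbs-style sampler for a single Bernoulli parameter with a Beta(1, 1) prior: missing observations are imputed from the current parameter value, and the parameter is then resampled from its posterior given the completed data. This is only a toy, single-parameter illustration, not the BUGS model used in the experiments, and the numbers passed to it are made up.

    import random

    def gibbs_bernoulli(observed, n_missing, iterations=5000, burn_in=1000):
        # observed: list of 0/1 values; n_missing: number of unreported entries.
        # Missing entries are imputed from the current theta (missing-at-random view),
        # then theta is resampled from its Beta posterior given the completed data.
        theta = 0.5
        samples = []
        for it in range(iterations):
            imputed = [1 if random.random() < theta else 0 for _ in range(n_missing)]
            ones = sum(observed) + sum(imputed)
            zeros = len(observed) + n_missing - ones
            theta = random.betavariate(1 + ones, 1 + zeros)   # Beta(1, 1) prior
            if it >= burn_in:
                samples.append(theta)
        return sum(samples) / len(samples)   # empirical posterior mean

    print(gibbs_bernoulli(observed=[1, 0, 0, 1, 0], n_missing=3))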
Theory
The solution we propose is a robust method to learn parameters in a bbn.
Our method computes the set of possible posterior distributions consistent
with the available information in the database and proceeds by refining this
set as more information becomes available. The set contains all possible
estimates of the parameters in the bbn and can be efficiently computed by
calculating the extreme posterior distributions consistent with the database
D. In this Section, we will first define some basic concepts about convex
sets of probabilities and probability intervals, and then we will describe the
theoretical basis of our learning method.
3.1 Probability Intervals
Traditionally, a probability function p(x) assigns a single real number to the
probability of x. This requirement, called Credal Uniqueness Assumption,
is one of the most controversial points of Bayesian theory. The difficulty
of assessing precise, real-valued quantitative probability measures is a long
standing challenge for probability and decision theory and it motivated the
development of alternative formalisms able to encode probability distributions
as intervals rather than real numbers. The original interest for the notion
of belief functions proposed by Dempster [Dempster, 1967] and further
developed by Shafer [Shafer, 1976] was mainly due to its ability to represent
credal states as intervals rather than point-valued probability measures
[Grosof, 1986].
Several efforts have been addressed to interpret probability intervals
within a coherent Bayesian framework, thus preserving the probabilistic
soundness and the normative character of traditional probability and decision
theory [Levi, 1980, Kyburg, 1983, Stiling and Morrel, 1991]. This approach
regards probability intervals as a convex sets of standard probability
distributions and is therefore referred to as Convex Bayesianism. Convex
Bayesianism subsumes standard Bayesian probability and decision theory
as a special case in which all credal states are evaluated by convex sets of
probability functions containing just one element.
Convex Bayesianism relaxes the Credal Uniqueness Assumption by replacing
the real-valued function p(x) with a convex set of these functions.
A convex set of probability functions P(x) is the set {p_alpha(x) = alpha p_1(x) + (1 - alpha) p_2(x)},
where p_1(x) and p_2(x) are two probability functions and 0 <= alpha <= 1 is a real
number. We will use p_* and p^* to denote the minimum and the maximum
probability of the set P, respectively.
The intuition behind Convex Bayesianism is that even if an agent cannot
choose a real-valued probability p(x), his information can be enough to constrain
p(x) within a set P (x) of possible values. An appealing feature of this
approach is that each probability function p ff fulfills the requirements
of a standard probability function, thus preserving the probabilistic
soundness and the normative character of traditional Bayesian theory.
The most conservative interpretation of Convex Bayesianism is the so
called Sensitivity Analysis or Robust Bayesian approach [Berger, 1984], within
which the set P (x) is regarded as a set of precise probability functions and
inference is carried out over the possible combinations of these probability
functions. Good [Good, 1962] proposes to model this process as a "black
box" which translates these probability functions into a set of constraints of
an ideal, point-valued probability function. A review of these methods may
be found in [Walley, 1991]. This theoretical framework is especially appealing
when we are faced with the problem of assessing probabilities from an
incomplete database. Indeed, in this case we can safely assume the existence
of such an ideal, point valued probability as the probability value we would
assess if the database was complete.
The suitability of Convex Bayesianism to represent incomplete probabilistic
information motivated the development of computational methods
to reason on the basis of probability intervals [White, 1986, Snow, 1991]. A
natural evolution of this approach led to the combination of the computational
advantages provided by conditional independence assumptions with
the expressive power of probability intervals [van der Gaag, 1991]. There-
fore, some efforts have been addressed to extend bbns from real-valued probabilities
to interval probabilities [Breeze and Fertig, 1990, Ramoni, 1995],
thus combining the advantages of conditional independence assumptions
with the explicit representation of ignorance. These efforts provide different
methods to use the bbns we can learn with our method.
3.2 Learning
We will describe the method we propose by using the artificial database
in Table 2 for the bbn given in Figure 1. Table 3 reports the database
given in Table 2 where some of the entries, denoted by ?, are missing. We
will proceed by analyzing the database sequentially, just to show how we
can derive our results. We stress here that our method does not require a
sequential updating.
We start by assuming total ignorance, and hence the parameters theta_i,
i = 1, ..., 8, are all given a prior distribution D(1, 1). We also assume that
the theta_i are independent. Consider the first entry in the database: this is a
complete case, so that we update the distributions of theta_1, theta_2, theta_3 and theta_8. The
distributions of theta_4, theta_5, theta_6 and theta_7 are not changed since the entries in C_1 do
not have any information about them. After processing the first case we
thus have that the Bayes estimates of the parameters are:
Table 3: The incomplete database displayed in Table 2.
Consider now case C 2 . The observation on X 3 is missing, and we know that
it can be either 1 or 0. Thus the possible completions of case C 2 are
The two possible completions would yield ' 1
would not be updated, whichever is the com-
pletion, since only the parent configuration 0; 0 is observed. Consider now
the updating of the distributions of ' 3 , ' 7 and ' 8 . We have that ' 3
D(1; 2) and this completion would lead to ' 7
1). Instead C 2 c2
would lead to ' 3
1). Hence the possible Bayes estimates are:
Instead of summarizing this information somehow, we can represent it via
intervals, whose extreme points are the minimum and the maximum Bayes
estimates that we would have from the possible completions of the database.
Thus, for instance, from the two possible posterior distributions of ' 3
we learn
and similarly from the posterior distributions of ' 7 and ' 8 we
learn
3=4. Clearly, we also have
Consider now the third case in the database. Again the observation
on X 3 is missing, and as before we can consider the two possible completions
of this case, and proceed by updating the relevant distributions. Now
however we need to "update intervals", and this can be done by updating
the distributions corresponding to the extreme points of each interval.
This would yield four distributions, and hence 4 Bayes estimates of the relevant
conditional probabilities from which we can extract the extreme ones.
For instance, the completion C 3 c1
would lead to the updating,
among others, of the distribution of ' 7 , so that the two distributions obtained
after processing the first two cases in the database would become
- D(2; 1), and ' 8 is not up-
dated. If we consider the completion C 3 c2
7 is not updated,
1). If we sort
the corresponding estimates in increasing order and find the minimum and
the maximum we obtain p ffl (X
Consider 3=4. The
minimum probability is achieved by assuming that all the completions in
the database assign 1 to X 3 ; the maximum is achieved by assuming that all
the completions in the database assign 0 to X 3 . If we now process the fourth
case, for instance the distribution of ' 7 would be updated by considering the
completions 0; leading to extreme probabilities
4=5. Thus the minimum is
obtained by assigning 0 to X 4 and the maximum by assigning 1 to X 4 .
This can be generalized as follows. Let X_i be a binary variable in the
bbn and denote by n*(1 | pa(x_i)) the frequency of cases with X_i = 1, given
the parent configuration pa(x_i), which have been obtained by completing incomplete
cases. Similarly, let n*(0 | pa(x_i)) denote the frequency of cases with X_i = 0,
given the parent configuration pa(x_i), which have been obtained by
completion of incomplete cases. Suppose further that we start from total
ignorance, thus the parameter theta_i which is associated to p(1 | pa(x_i)) is assigned
a D(1, 1) prior, before processing the information in the database. Then
p_*(1 | pa(x_i)) = (1 + n(1 | pa(x_i))) / (2 + n(pa(x_i)) + n*(0 | pa(x_i)))
and
p^*(1 | pa(x_i)) = (1 + n(1 | pa(x_i)) + n*(1 | pa(x_i))) / (2 + n(pa(x_i)) + n*(1 | pa(x_i))).
The minimum and maximum probability of the complementary event are
such that p_*(1 | pa(x_i)) + p^*(0 | pa(x_i)) = 1 and p^*(1 | pa(x_i)) + p_*(0 | pa(x_i)) = 1.
This result can be easily generalized to discrete variables with k states
leading to
and
Note that in this case the sum of the maximum probability of x_ij | pa(x_i)
and the minimum probabilities of x_ih | pa(x_i), h != j, is one.
It is worth noting that the bounds depend only on the frequencies of
complete entries in the database, and the "artificial" frequencies of the completed
entries, so that they can be computed in batch mode.
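One possible reading of these bounds for a binary variable, consistent with the two possible posteriors D(1, 2) and D(2, 1) obtained for theta_3 after case C_2 (interval [1/3, 2/3]), is sketched below: the minimum is obtained by completing every ambiguous case against the state of interest, the maximum by completing every ambiguous case in its favour, with a uniform D(1, 1) prior contributing the constants. The exact counter bookkeeping of the paper may differ in detail.

    def rbe_interval(n1, n0, v1, v0):
        # n1, n0: complete cases with X = 1 / X = 0 for a given parent configuration.
        # v1, v0: "virtual" counts n*(1|pa), n*(0|pa) of incomplete cases that could
        #         be completed as 1 or 0 for that parent configuration.
        # Uniform D(1, 1) prior; binary variable, so the prior adds 2 to the denominator.
        p_min = (1 + n1) / (2 + n1 + n0 + v0)        # all ambiguous cases completed as 0
        p_max = (1 + n1 + v1) / (2 + n1 + n0 + v1)   # all ambiguous cases completed as 1
        return p_min, p_max

    # Example: no complete case and one incomplete case that could be completed
    # either way gives the interval [1/3, 2/3].
    print(rbe_interval(n1=0, n0=0, v1=1, v0=1))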
This Section is devoted to the description of the algorithms implementing
the method outlined in Section 3. We will first outline the overall procedure
to extract the conditional probabilities from a database given a network
structure S. Then, we will describe the procedure to store the observations
in the database, and we will analyze the computational complexity of the
algorithms. Finally, we will provide details about the current implementation.
4.1 Overview
Section 3 described the process of learning conditional probabilities as a
sequential updating of a prior probability distribution. The nature of our
method allows us to implement the method as a batch procedure which
first parses the database and stores the observations about the variables
and then, as a final step, computes the conditional probabilities needed to
specify a bbn from these observations.
The procedure takes as input a database D, as defined in Section 2,
and a network structure S. The network structure S is identified by a set
of conditional dependencies fd X 1
associated to each variable in
. A dependency dX i
is an ordered tuple (X is a child
variable in X with parent nodes pa(X i ).
The learning procedure takes each case in the database as a statistical
unit and parses it using the dependencies defining the network structure S.
Therefore, for each entry in the case, the procedure recalls the dependency
within which the entry variable appears as a child and identifies the states
of its parent variables in the case. In this way, for each case in the database,
the procedure detects the configuration of states for each dependency in S.
Recall that the probabilities of the states of the child variables given the
states of the parent variables are the parameters ' we want to learn from
the database.
For each configuration the procedure maintains two counters,
say n(x_ij | pa(x_i)) and n*(x_ij | pa(x_i)). When the detected configuration
does not contain any missing datum, the first counter n(x_ij | pa(x_i))
is increased by one. When the datum for one or more variables in the
combination is missing, the procedure increases by one the second counter
n*(x_ij | pa(x_i)) for each configuration of states of the variables with missing
entries. In other words, the procedure uses the counter n*(x_ij | pa(x_i))
to ascribe a sort of virtual observation to each possible state of a variable
whose value is missing in the case. Once the database has been parsed, we
just need to collect the counters of each configuration and compute the two
extreme probabilities: the lower bound using formula (3), and the upper bound using formula
(4).
Let D be a database and X the set of variables in a bbn. We will denote
as states(X_i) the set of states for the variable X_i and as pa(X_i)
the set of its immediate predecessors in the bbn. The learning procedure is
defined as follows.
procedure learn(D;X )
while do
while do
while k - jP j do
while l - jX j do
if
The procedure store stores the counters for each configuration of parent
states and it will be described in the next subsection. The procedure collect
simply collects all the counters for each state in each variable in X .
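A simplified, self-contained sketch of this batch pass is given below; it uses nested dictionaries of counters in place of the discrimination trees introduced in the next subsection, and the representation of cases and dependencies is an assumption made for the example.

    from collections import defaultdict
    from itertools import product

    def learn(database, dependencies, states):
        # dependencies: {child: [parent, ...]}; states: {variable: [possible values]}.
        # For every (child state, parent configuration) two counters are kept:
        # counts[child][config]["n"] for complete observations and
        # counts[child][config]["n_star"] for virtual observations from incomplete cases.
        counts = defaultdict(lambda: defaultdict(lambda: {"n": 0, "n_star": 0}))
        for case in database:                      # case: {variable: value or None}
            for child, parents in dependencies.items():
                variables = [child] + parents
                values = [case.get(v) for v in variables]
                # Enumerate every completion of the unreported entries.
                options = [[v] if v is not None else states[var]
                           for var, v in zip(variables, values)]
                completions = list(product(*options))
                key = "n" if len(completions) == 1 else "n_star"
                for child_value, *parent_values in completions:
                    counts[child][(child_value, tuple(parent_values))][key] += 1
        return counts

After the single pass over the database, the collected counters can be plugged into the interval formulas of Section 3 to obtain the extreme estimates for every dependency.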
4.2 Storing
It is apparent that the procedure store plays a crucial role for the efficiency
of the procedure. In order to develop an efficient algorithm, we used discrimination
trees to store the parameter counters, following a slightly modified
version of the approach proposed by [Ramoni et al., 1995].
Along this approach, each state of each variable of the network is assigned
to a discrimination tree. Each level of the discrimination tree is defined by
the possible states of a parent variable. Each path in the discrimination
tree represents a possible configuration of parent variables for that state. In
this way, each path is associated to a single parameter in the network. Each
leaf of the discrimination tree holds the pair of counters n(x_ij | pa(x_i)) and n*(x_ij | pa(x_i)).
Figure 3: The discrimination tree associated to the state X_3 = 1.
For each entry, if there is no missing datum, we just need to
follow a path in the discrimination tree to identify the counters to update.
In order to save memory storage, the discrimination trees are incrementally
built: each branch in the tree is created the first time the procedure needs
to walk through it.
The procedure store uses ordered tuples to identify the variable states
of interest. A state is identified by an ordered tuple (X; x), where X is the
variable and x the reported value. If the value is not reported, x will be
?. The procedure store takes as input three arguments: an ordered tuple
c representing the current state of the child variable, the set A of ordered
tuples for each state of the parent variables of c in the current case, and a
flag dictating whether the update is induced by a missing datum or not.
procedure store(c; A; v)
if c(2) =? then
while do
else
if p(2) =? then
while do
else
counter ffl (p)/counter ffl (p)
else
return
else
The functions counter and counter* identify the counters for n and for n*,
respectively, associated with each leaf node in the tree.
In order to illustrate how these procedures work, let's turn back to the
example described in Section 2. The procedure learn starts by parsing the
first line of the database reported in Table 3. It uses the network structure
S depicted in Figure 1 to partition the case into four relevant elements,
each corresponding to a parameter in the network structure.
Now, suppose the third entry has to be stored in the discrimination tree
associated to the state X_3 = 1, which is displayed in Figure 3. The procedure
walks along the solid line in Figure 3 and updates the counter n at the end
of that path, because the entry does not include any missing datum.
Suppose now that an entry with an unreported datum has to be stored.
Figure 3 identifies with dashed lines the paths followed by
the procedure in this case: when the procedure hits a state of a variable
whose datum is not reported, it walks through all the possible branches
following it and updates the n* counters at the end of each path.
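A minimal discrimination-tree version of the store step might look as follows: the tree is built lazily, one level per parent variable, with the pair of counters held at the leaves, and an unreported value makes the procedure walk every branch at that level and update the virtual counters. The node layout and the function signature are assumptions, not the era implementation.

    class Node:
        # One discrimination tree per child state; one level per parent variable.
        def __init__(self):
            self.children = {}          # parent value -> Node
            self.n = 0                  # complete observations
            self.n_star = 0             # virtual observations from incomplete cases

    def store(node, parent_values, states, level=0, missing=False):
        # parent_values: observed values of the parent variables, None if unreported.
        # If the child value itself is unreported, the same call is made with
        # missing=True on the tree of every state of the child variable.
        if level == len(parent_values):
            if missing:
                node.n_star += 1
            else:
                node.n += 1
            return
        value = parent_values[level]
        branches = states[level] if value is None else [value]
        for v in branches:
            child = node.children.setdefault(v, Node())   # build branches lazily
            store(child, parent_values, states, level + 1,
                  missing or value is None)

    # Example: storing X3 = 1 with X1 = 1 observed and X2 unreported (binary parents).
    root = Node()
    store(root, [1, None], states=[[0, 1], [0, 1]])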
4.3 Computational Complexity
The learning procedure takes advantage of the modular nature of bbns: it
partitions the search space using the dependencies in the network. This partitioning
is the main task performed by the procedure learn. learn starts
by scanning the database D and, for each element of a row, it scans the
row again to identify the patterns of the dependencies in the bbn. Suppose
the database D contains n columns and m rows; an upper bound
on the execution time of this part of the algorithm is O(gmn^2), where g is
the maximum number of parents for a variable in the bbn. Note that the
number of columns is equal to the number of variables in the bbn, that is
The procedure collect scans the generated discrimination trees and is
applied once during the procedure. The number of discrimination trees generated
during the learning process is
discrimination
tree has a number of leaves
are
the parents of the variable of the state associated to the discrimination tree.
Note that this is the number of conditional distributions we need to learn
to define a conditional dependency.
The main job of the algorithm is left to the procedure store. Using discrimination
trees, the time required by the procedure store to ascribe one
entry to the appropriate counter is linear in the number of parent variables
in the dependency, when the reported entry about the parameter does not
contain any missing datum. When data are missing, the procedure is, in the
worst case, exponential in the number of parent variables with unreported
data in the row. It is worth noting that the main source of complexity in
the algorithm does not depend on the size of the database but on the
topology of the bbn.
Our previous example makes clear that the order of variables in the tree
plays a crucial role in the performance of the algorithm: if the positions of
the states of variables X_1 and X_2 were exchanged, the procedure would have to walk
through the whole tree rather than just half of it. Efficiency may be gained
by a careful sorting of the variable states in the tree using precompilation
techniques to estimate in advance those variables whose values are missing
more often in the database. This task can be accomplished using common
techniques in the machine learning community to estimate the information
structure of classification trees [Quinlan, 1984].
4.4 Implementation
This method has been implemented in Common Lisp on a Macintosh Performa
6300 under Macintosh Common Lisp. A port to CLISP running
on a Sun Sparc Station is under development. The system has been implemented
as a module of a general environment for probabilistic inference
called Refinement Architecture (era) [Ramoni, 1995]. era has
been originally developed on a Sun Sparc 10 using the Lucid Common Lisp
development environment. We were very careful to use only standard Common
Lisp resources in order to develop code conforming to the new established
ANSI standard. Therefore, the code should be easily portable on any
Common Lisp development environment.
The implementation of the learning system deeply exploits the modularity
allowed by the Common Lisp Object System (CLOS) protocol included
in the ANSI standard: for instance, the learning algorithm uses the CLOS
classes implemented in the architecture to represent the elements of the
network structure. This strategy allows a straightforward integration of the
learning module within the reasoning modules of era. In this way, the
results of the learning process are immediately available to the reasoning
modules of era to draw inferences and make decisions.
5 Experimental Evaluation
Gibbs Sampling is currently the most popular stochastic method for Bayesian
inference in complex problems, such as learning when some of the data are
missing, although its limitations are well known: convergence is
slow and resource-consuming. However, given its popularity, we have compared
the accuracy of our method with one of its implementations. In this
Section, we will report the results of two sets of experimental comparisons,
one using a real-world problem and one using an artificial example. The
aim of these experiments is to compare the accuracy of the parameter estimates
provided by the Gibbs Sampling and our method as the available
information in the database decreases.
Figure
4: The network structure of the bbn used for the first set of experiments
5.1 Materials
In order to experimentally compare our method to the Gibbs Sampling,
we chose the program BUGS [Thomas et al., 1992], which is commonly regarded
as a reliable implementation of such a technique [Buntine, 1996]. In
the following experiments, we used the implementation of BUGS version 0.5
running on a Sun Sparc 5 under SunOS 5.5 and the era implementation of
our method running on a Macintosh PowerBook 5300 under Macintosh
Common Lisp version 3.9. All the experiments reported in this Section share
these materials. Different materials used for each set of experiments will be
illustrated during the description of each experiment.
5.2 Experiment 1: The CHILD Network
In the first set of experiments, we used a well-known medical problem. This
problem has already been used in the bbns literature [Spiegelhalter et al., 1993]
and concerns the early diagnosis of congenital heart disease in newborn babies.
# Name States
1 Birth Asphyxia yes no
2 Disease PFC TGA Fallot PAIVS TAPVD Lung
3 Age 0-3-days 4-10-days 11-30-days
4 LVH yes no
5 Duct flow Lt-to-Rt None Rt-to-Lt
6 Cardiac mixing None Mild Complete Transp
7 Lung parenchyma Normal Congested Abnormal
8 Lung flow Normal Low High
9 Sick yes no
11 Hypoxia in O2 Mild Moderate Severe
Chest X-ray Normal Oligaemic Plethoric Grd-Glass Asy/Patchy
14 Grunting yes no
19 X-ray Report Normal Oligaemic Plethoric Grd-Glass Asy/Patchy
Grunting Report yes no
Table
4: Definition of the variables in the CHILD bbn: the first column
reports the numeric index used in Figure 4, the second the variable name,
and the third its possible states.
5.2.1 Materials
Figure
4 displays the network structure of our medical problem. Table 4
associates each number in the bbn to the name of a variable and reports its
possible values. The clinical problem underlying the bbn depicted in Figure
4 is described in [Frankin et al., 1989]. The task of the bbn is to diagnose
the occurrence of congenital heart disease in pediatric patients using clinical
data reported over the phone by referring pediatricians. The authors report that
the referral process generates a large amount of clinical information but this
information is often incomplete and includes missing and unreported data.
These features, together with the reasonable but realistic size of the
bbn, make this problem an ideal testbed for our experiments: the bbn in
Figure 4 is defined by 344 conditional probabilities (although the minimal set
is slightly more than 240 since some of the variables in the bbn are binary)
and the number of cases reported in the original database was more than 100.
Figure 5: Plots of estimates against amount of information for the parameters.
Since the aim of our experiment is to test the accuracy of the competing
methods, we cannot use the original database because we need a reliable
measure of the probabilities we want the systems to assess. Therefore, still
using the original network, we generated a complete random sample of 100
cases from a known distribution and we used this sample as the database
to learn from. Using this database and the bbn in Figure 4, we ran two
different tests.
5.2.2 Test 1: Missing at Random
The goal of the first test is to characterize the behavior of our method with
respect to the Gibbs Sampling when the "Missing at Random" assumption
holds.
Method. We start with a complete database, where all the parameters are
independent and uniformly distributed, and run both our learning algorithm
and the Gibbs Sampling on it. Then, we proceed by randomly deleting
20% of the entries in the database, and by running the two methods on
the incomplete database, until the database is empty. For each incomplete
database we run 10,000 iterations of the Gibbs Sampling, which appeared
to be enough to reach stability, and the estimates returned are based on a
final sample of 5,000 values.
Results. Figure 5 shows the estimates of the conditional probabilities
defining the dependency linking the variable n2 (disease) to the variable
inferred by the two learning methods, for 5 different proportions
of completeness. We report this dependency as an example of the overall
behavior of the systems during the test. Stars report the point estimates
given by the Gibbs Sampling, while error bars indicate 95% confidence intervals
about the estimates. Solid lines indicate the lower and upper bounds
of the probability intervals inferred by our method. We choose to represent
the outcomes of the Gibbs Sampling and our system in two different ways
because there is a basic semantic clash between the Gibbs Sampler confidence
intervals and the intervals returned by our system. The estimates
given by the Gibbs Sampling, when the database is incomplete, are based
on the most likely reconstruction of the missing entries, and this relies on
the prior belief about the parameters and the complete data. The 95% confidence
intervals represent the posterior uncertainty about the parameters,
which depend on the inferred database and the prior uncertainty. Thus the
larger the intervals, the less reliable are the estimates. Intervals returned by
our method represent the set of posterior estimates of the parameters that
we would obtain in considering all possible completions of the database. In
particular, both the estimates based on the original database and the estimates
returned by the Gibbs Sampling will be one of them. The width of
our intervals is then a measure of the uncertainty in considering all possible
completions of the database.
When the information in the database is complete, the intervals of our
systems degenerate to a single point, and this point coincides with the exact
estimates. The semantic difference between the intervals returned by the two
systems accounts for some of the slightly tighter intervals returned by the
Gibbs Sampling, together with the fact that, by assuming that the data are
missing at random, the Gibbs Sampling can exploit a piece of information
Figure
Plots of estimates against amount of information for the parameters
a)
not available to our system. Nonetheless, the width of the intervals returned
by the two systems is overall comparable in the case reported in Figure 5,
as well as for all the remaining parameters in the bbn.
The main difference was in the execution time: in the worst case, Gibbs
Sampling took over 37 minutes to run to completion on a Sun Sparc 5, while
our system ran to completion in less than 0.20 seconds on a Macintosh
PowerBook 5300.
5.2.3 Test 2: Systematically Missing
What happens when the data are not missing at random and therefore
the Gibbs Sampling is using a wrong guess? The aim of the second test is
to compare the behavior of the two systems when the data are not missing
at random, but the value of a variable is systematically removed from the
database.
Method. The procedure used for this test is a slightly modified version of
the test for the randomly missing data. In this case, we iteratively deleted
1% of the database by systematically removing the entries reporting the
value normal for the variable n7 (lung parenchyma) and we ran the two
learning algorithms.
Figure 7: Plots of estimates against amount of information for the parameters.
Each run of the Gibbs Sampling is based on 10,000
iterations, and a final sample of 5,000 cases. This procedure was iterated
until no value normal was reported for the variable n7 in the database.
Results. The local independencies induced by the network structure are
such that the modification of values of the variable n7 in the database only
affects the estimates of the conditional probabilities for its immediate predecessors
and successors. Therefore, we will focus on the estimates of the
parameters affected by the changes. Figure 6 shows the estimates provided
by the two systems for the conditional probabilities of the states of variable
n7 given the state of its immediate predecessor in the
bbn. Note that the information percentage reported on the x-axis starts
at 98%, meaning that the entries for the state n7 = normal account
for 2% of the complete database. At 98%, the database contains no
entry for normal. The results show that the point-valued estimates of
the Gibbs Sampling always fall within the intervals calculated by our sys-
tem. However, plots a and b display the behavior of the Gibbs Sampling
estimates, jumping from the lower bound to the upper bound of the set
calculated by our method as the missing data are replaced with entries n7 =
normal. The jump indicates a 30% error in the point estimate provided
by the Gibbs Sampling when all the normal entries are missing. The
behavior of both systems does not change when the data are missing on the
parent variables rather than on the child variable. Figure 7 plots the estimates
of some parameters of the dependency linking the parent variables n7
(lung-parench) and n6 (Cardiac Mixing) to the child variable n11 (Hypoxia
in O2). In this case, the initial error of the Gibbs Sampling goes up to
45%, as shown in plot b. This error is even more remarkable once we
realize that 98% of the overall information is still available.
The difference in execution time was comparable to that of the previous
test: in the worst case, Gibbs Sampling took over 16 minutes to run
to completion on a Sun Sparc 5, while our system ran to completion in less
than 0.20 seconds on a Macintosh PowerBook 5300.
5.3 Experiment 2: An Artificial Network
Results from experiments on the CHILD network show a bias of the point
estimates given by the Gibbs Sampling, although the associated confidence
intervals are always large enough to include the estimates calculated from
the complete database. Since the width of the confidence intervals is a
function of the sample size, we are left with the doubt that a large database
could give tighter intervals around biased estimates. These results prompt
for a more accurate investigation of this behavior as the size of the database
increases. Thus, in this Section, we will use an artificial - and therefore
more controllable - example in order to amplify this bias. The rationale
behind this second experiment is twofold: (i) to display the effect on the
learning process of the bias induced by the "Missing at Random" assumption
when in fact data are systematically missing and the size of the database is
large, and (ii) to show the effect of this bias on the predictive performance
on the bbn.
Figure
8: The simple network used for the second set of experiments.
5.3.1 Materials
Figure
8 shows the simple bbn we used for our second experiment: two
binary variables linked by a dependency. We generated a database of 1000
random cases from the following probability distribution:
The parameters in the bbn were all assumed to be independent and uniformly
distributed.
5.3.2 Learning
Using the bbn displayed in Figure 8, we tried to make clearer the bias effect
detected in the previous set of experiments.
Method. We followed a procedure analogous to that used in the second
test of the previous experiment: we iteratively deleted 10% of the entries
reporting a given value in the database. We then
ran our method and the Gibbs Sampling on each incomplete database. Each
run of the Gibbs Sampling is based on 1,000 iterations to reach stability, and
a final sample of 2,000 values.
Results. Figure 9 shows the parameter estimates given by the two sys-
tems. The bias of the Gibbs Sampling is absolutely clear: when 75% of
the information is available in the database but all the entries of
are missing, the estimate given by the Gibbs Sampling for lies
on the lower extreme of the interval estimated by our method. However,
the sample size tightens the confidence interval around the estimate 0.0034,
so the "true" estimate 0.4955, which would be computed from the complete
database, is definitely excluded, with an error exceeding 40%.
Figure 9: Plots of estimates against amount of information for the parameters.
The estimate of p(X moves from 0.8919 in the complete
database to 0.662 when no such entries are left in the database. However,
the wide interval associated with the estimate reflects the low confidence
of the Gibbs Sampler in it.
More dramatic is the effect of the Missing at Random assumption on
the estimate of the conditional probability, which moves
from 0.1286 in the complete database to 0.5059, with a very narrow confidence
interval: [0.4752, 0.5367]. This narrow interval overestimates the reliability
of the inferred value and excludes the true value inferred from the
complete database.
Execution time was 10 minutes for the Gibbs Sampling and less than
seconds for our system.
5.4 Prediction
The goal of learning bbns is to use them to perform different reasoning tasks,
such as prediction. The aim of this second test is to evaluate the reliability
of the predictions given by the learned bbn.
Materials. We used the bbn learned by the two systems in the previous
test to predict the value of X 2 given an observation about X 1 .
Results. The effect of the strong bias in the estimates returned by the
Gibbs Sampling is remarkable in the predictive performance of the network.
Suppose that X_1 is observed and we want to predict the value of X_2.
Since in this case the prediction reduces to the conditional probability of X_2
given X_1, Figure 9 c) plots this probability. Suppose that we use
the estimates learned by the Gibbs Sampling with 75% of the complete
data, when all the entries are missing. The prediction of the Gibbs
Sampling is 0.5059, against the value
0.1286 that we would have inferred from the complete database. Instead,
our method returns the probability interval [0.12, 0.54], thus including the
"true" value.
The results of these experiments match our expectations. The accuracy of
the two systems is overall comparable when data are missing at random.
However, the estimates given by the Gibbs Sampling are prone to bias when
data are systematically missing. The reason for this behavior is easy to
identify. Consider for instance the artificial example in Figure 8. In the
reconstruction of the original database, the Gibbs Sampling exploits the
available information to ascribe the missing entries mainly to X
hence the high confidence on the estimate of the conditional distribution of
These results also show that our method is robust because, instead of
"betting" on the most likely complete database that we could infer from the
available information in the incomplete database at hand, our method
returns results which make the problem solver aware of his own ignorance.
This feature is even more important when we consider the remarkable effect
of the strong bias in the estimates returned by the Gibbs Sampling on the
predictive performance of the bbn.
A last word about the execution time. The computational cost of Gibbs
Sampling is a well known issue and the results of our experiments show the
computational advantages of our deterministic method with respect to the
stochastic simulator.
6 Conclusions
Incompleteness is a common feature of real-world databases and the ability
to learn from databases with incomplete data is a basic challenge
researchers have to face in order to move their methods to applications. A
key issue for a method able to learn from incomplete databases is the reliability
of the knowledge bases it will generate. The results of our investigation
show that the common Missing at Random assumption exploited by current
learning methods can dramatically affect the accuracy of their results.
This paper introduced a robust method to learn conditional probabilities
in a bbn which does not rely on this assumption. In order to drop this
assumption, we had to change the overall learning strategy with respect
to traditional Bayesian methods: rather than guessing the value of missing
data on the basis of the available information, our method bounds the set
of all posterior probabilities consistent with the database and proceeds by
refining this set as more information becomes available.
The main feature of this method is its robustness with respect to the distribution
of missing data: it does not rely on the assumption that data are
missing at random because it does not try to infer them from the available
information. The basic intuition behind our method is that we are better
off if, rather than trying to complete the database by guessing the value of
missing data, we regard the available information as a set of constraints on
the possible distributions in the database and we reason on the basis of the
set of probability distributions consistent with the database at hand.
An experimental comparison between our method and a powerful stochastic
method shows a remarkable difference in accuracy between the two methods
and the computational advantages of our deterministic method with
respect to the stochastic one.
Acknowledgments
The authors thank Greg Cooper, Pat Langley, and Zdenek Zdrahal for their useful
suggestions during the development of this research. Equipment has been
provided by generous donations from Apple Computers and Sun Microsystems
--R
The robust bayesian viewpoint.
Decision making with interval influence diagrams.
Operations for learning with graphical models.
A guide to the literature on learning probabilistic networks from data.
A bayesian method for the induction of probabilistic networks from data.
A comparison of sequential learning methods for incomplete data.
Maximum likelihood from incomplete data via the em algorithm.
Upper and lower probabilities induced by multivalued mapping.
Combining clinical judgments and clinical data in expert systems.
Subjective probability as a measure of a non-measurable set
An inequality paradigm for probabilistic knowledge.
Learning bayesian networks: The combinations of knowledge and statistical data.
Rational belief.
The Enterprise of Knowledge.
Probabilistic inference using markov chain monte carlo methods.
Probabilistic Reasoning in Intelligent Systems: Networks of plausible inference.
Learning efficient classification procedures and their application to chess and games.
An ignorant belief network to forecast glucose concentration from clinical databases.
Ignorant influence diagrams.
A Mathematical Theory of Evidence.
Improved posterior probability estimates from prior and linear constraint system.
Learning in probabilistic expert systems.
Sequential updating of conditional probabilities on directed graphical structures.
Bayesian analysis in expert systems.
Convex Bayesian decision theory.
Bugs: A program to perform bayesian inference using gibbs sampling.
Computing probability intervals under independency constraints.
Statistical Reasoning with Imprecise Probabilities.
A posteriori representations based on linear inequality descriptions of a priori conditional probabilities.
--TR
Probabilistic reasoning in intelligent systems: networks of plausible inference
A Bayesian Method for the Induction of Probabilistic Networks from Data
C4.5: programs for machine learning
The EM algorithm for graphical association models with missing data
Learning Bayesian Networks
Irrelevance and parameter learning in Bayesian networks
Bayesian classification (AutoClass)
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
Bayesian Network Classifiers
Expert Systems and Probabiistic Network Models
Probability Intervals Over Influence Diagrams
Bayesian methods
--CTR
Marco Zaffalon , Marcus Hutter, Robust inference of trees, Annals of Mathematics and Artificial Intelligence, v.45 n.1-2, p.215-239, October 2005
Gert de Cooman , Marco Zaffalon, Updating beliefs with incomplete observations, Artificial Intelligence, v.159 n.1-2, p.75-125, November 2004
Ferat Sahin, M. Çetin Yavuz, Ziya Arnavut, Önder Uluyol, Fault diagnosis for airplane engines using Bayesian networks and distributed particle swarm optimization, Parallel Computing, v.33 n.2, p.124-143, March 2007 | bayesian learning;bayesian networks;bayesian classifiers;missing data;probability intervals
513803 | Routing performance in the presence of unidirectional links in multihop wireless networks. | We examine two aspects concerning the influence of unidirectional links on routing performance in multihop wireless networks. In the first part of the paper we evaluate the benefit from utilizing unidirectional links for routing as opposed to using only bidirectional links. Our evaluations are based on three transmit power assignment models that reflect some realistic network scenarios with unidirectional links. Our results indicate that the marginal benefit of using a high-overhead routing protocol to utilize unidirectional links is questionable. Most common routing protocols, however, simply assume that all network links are bidirectional and thus may need additional protocol actions to remove unidirectional links from route computations. In the second part of the paper we investigate this issue using a well-known on-demand routing protocol, Ad hoc On-demand Distance Vector (AODV), as a case study. We study the performance of three techniques for AODV for efficient operation in the presence of unidirectional links, viz., BlackListing, Hello, and ReversePathSearch. While the BlackListing and Hello techniques explicitly eliminate unidirectional links, the ReversePathSearch technique exploits the greater network connectivity offered by the existence of multiple paths between nodes. Performance results using ns-2 simulations under a varying number of unidirectional links and node speeds show that all three techniques improve performance by avoiding unidirectional links, with the ReversePathSearch technique being the most effective. | INTRODUCTION
A unidirectional link arises between a pair of nodes in a network
when only one of the two nodes can directly communicate with
the other node. In multihop wireless networks (also known as ad
hoc networks), unidirectional links originate because of several rea-
sons. These include differences in radio transceiver capabilities of
nodes, the use of transmission range control, and differences in wireless
channel interference experienced by different nodes. Depending on
such conditions, unidirectional links can be quite common.
This paper addresses routing in the presence of unidirectional
links. Most of the previous work on this problem concentrated on
developing routing protocols [1, 10, 39, 31, 33, 2], or techniques
such as tunneling [23] to allow the use of unidirectional links. But
the resulting performance advantages and tradeoffs are not well un-
derstood. Our approach in this work is to empirically study the
influence of unidirectional links on routing performance.
Utilizing unidirectional links along with bidirectional links for
routing has two conceivable advantages over using only bidirectional
links. First, they can improve the network connectivity. For
example, removal of unidirectional links in Figure 1 partitions the
network. Second, they can provide better, i.e., shorter, paths. In
Figure 1, node B can communicate with node C directly in one hop
by using the unidirectional link from B to C, as the alternate
bidirectional path B → E → C requires two hops.
But routing using unidirectional links is complex and entails high
overheads. The main difficulty comes from the asymmetric knowledge
about a unidirectional link at its end nodes. A node downstream of a
unidirectional link (for example, node F in Figure 1) immediately
knows about the incoming unidirectional link (link E → F) upon
hearing a transmission from the upstream node (node E); but the
upstream node may not know about its outgoing unidirectional link
until the downstream node explicitly informs it over a multihop
reverse path (say, from F through A back to E). Learning about the
unidirectional link thus incurs higher overhead than when the link
is bidirectional.
There is evidence in the literature that routing protocols finding
unidirectional paths (paths with one or more unidirectional links)
are subject to higher overheads than those finding only bidirectional
paths. For distance-vector protocols, Gerla et al. [10] and
Prakash [33] independently make this observation. In the realm of
on-demand protocols for ad hoc networks, a similar observation
can be made. DSR [15] requires two route discoveries to discover
unidirectional paths - one from the source and the other from the
destination, as opposed to a single route discovery to find bidirectional
paths. Although pure link-state protocols such as OSPF
[21] may be able to support unidirectional links with the least additional
overhead, they already have very high overheads compared
to other competing protocols for ad hoc networks [6].
Figure 1: A network with unidirectional links.
Exactly for
this reason, efficient variants of link-state protocols (e.g., TBRPF
[24], OLSR [5]) have been developed. But these protocols work
only with bidirectional links.
Besides, the use of unidirectional links poses problems to existing
link-layer protocols. Many common link-layer protocols for
medium access and address resolution not only assume bidirectional
links, but also very much depend on two-way handshakes
and acknowledgments for their operation. For example, the well-known
IEEE 802.11 DCF MAC protocol [14] depends on the exchange
of RTS-CTS control packets between the sender and the
receiver to prevent hidden terminal collisions, and also expects acknowledgment
from the receiver to judge correct packet reception
of unicast transmissions.
Thus it is important to evaluate the potential benefit of unidirectional
links to know whether employing a seemingly high overhead
unidirectional link routing protocol is justified. We evaluate this
benefit in the first part of the paper. We compare two routing ap-
proaches: (i) using both unidirectional and bidirectional links; (ii)
using only bidirectional links. We look at the idealized routing performance
obtained from these two approaches - independent of
specific routing protocols or associated overheads - to find out
any performance advantages of utilizing unidirectional links. To
accomplish this goal, we simulate a large number of random multihop
network topologies with unidirectional links. We study the
connectivity and path cost metrics of these topologies when uni-directional
links are used, and when they are ignored. In order to
create unidirectional links, we use three models that assign variable
transmission ranges to nodes. These models reflect some realistic
wireless network scenarios having unidirectional links. Our results
show that the connectivity advantage using unidirectional links is
almost non-existent, but shortest path costs show some improvement
with unidirectional links. However, these improvements too
go away when hop-by-hop acknowledgment costs are accounted for.
Utilizing unidirectional links for routing purposes may not be ef-
ficient, and most routing protocols indeed work with only bidirectional
links. But these protocols still need additional mechanisms
to "eliminate" unidirectional links from route computations
when they are present. We investigate the importance of such mechanisms
using a well-known on-demand routing protocol called Ad
hoc On-demand Distance Vector (AODV) [29, 28]. The basic AODV
protocol works only with bidirectional links. We propose a new
technique called ReversePathSearch to handle unidirectional links
in AODV. This technique takes advantage of multiple paths between
nodes to overcome unidirectional links. We also consider
two other techniques - BlackListing and Hello - that explicitly
eliminate unidirectional links in AODV. Using ns-2 simulations, we
evaluate the performance of these three techniques relative to basic
AODV.
The rest of the paper is organized as follows. Section 2 evaluates
the idealized performance advantage of routing using unidirectional
links. Section 3 investigates the issue of avoiding unidirectional
links from route computations in the context of AODV.
Here we present and evaluate three techniques to improve basic
AODV performance in networks with unidirectional links. Section
4 reviews related work. Finally, we present our conclusions in Section
5.
2. BENEFIT OF UNIDIRECTIONAL LINKS
2.1 Unidirectional Link Scenarios
Generally speaking, there are two principal reasons behind the
presence of unidirectional links in multihop wireless networks. First,
differences in radio transmission power levels (or receiver sensitivities)
of the nodes give rise to unidirectional links. When two nodes
(say A and B) have widely different radio transmission ranges 1 so
that node A can transmit to node B but not the other way, a uni-directional
link forms from node A to node B. Nodes may naturally
have different transmission ranges in a heterogeneous network
where there is an inherent difference in radio capabilities. Alter-
natively, they may have different transmission ranges when power
control algorithms are used for energy savings or topology control.
Second, differences in interference (or noise) at different nodes
cause asymmetric links. Asymmetric links occur between a pair of
nodes if the link quality is different in each direction. An extreme
case of link asymmetry leads to unidirectional links. Nodes may
experience different interference levels because of wireless channel
imperfections such as multipath fading and shadowing. Hidden
terminals can be another cause of wide variation in interference
levels (as also mentioned in [33]).
To study the benefit of unidirectional links for routing, we only
consider unidirectional links arising from difference in transmission
ranges. Note that for unidirectional links to be effective for
routing, they should exist long enough for the routing protocol to
compute routes through them and to later use such routes to forward
some data. Unidirectional links caused by variation in interference
levels presumably happen on much smaller time-scales
than would be needed for routing. So here we limit ourselves to
unidirectional links from variable transmission ranges. However,
later in the paper, we do take into account the issue of interference
to some extent, specifically from hidden terminals, when studying
the negative influence of unidirectional links on routing.
Among the power control algorithms, only a particular class is
relevant here. Some power control algorithms prescribe a common
power level for all nodes in the network (e.g., [22]). These
algorithms do not create any unidirectional links. Other algorithms
that do allow variable power levels either assign power levels on
a per-transmission basis (e.g., [41, 20]), or assign power levels independent
of any single transmission, but may be used for several
successive transmissions (e.g., [13, 37, 34, 40]). Since the former
set of algorithms may result in short-term unidirectional links, we
limit our attention to the latter class only.
We will sometimes use transmission range instead of transmission
power to simplify description. Note that it is usually straightforward
to compute the "nominal" transmission range of a node given
the transmission power, large-scale path loss model, and the radio
parameters.
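For instance, under the widely used two-ray ground reflection model, received power falls off as the fourth power of distance, so the nominal range follows directly from the transmit power and the receiver sensitivity threshold. The small Python sketch below illustrates this; the numeric defaults are the commonly quoted ns-2/WaveLAN-like values and are given only as an assumption, not as parameters taken from this paper.

def two_ray_range(pt_w, rx_thresh_w, gt=1.0, gr=1.0, ht=1.5, hr=1.5):
    """Nominal range under the two-ray ground model:
    Pr = Pt * Gt * Gr * ht^2 * hr^2 / d^4, solved for d at the receive threshold."""
    return (pt_w * gt * gr * ht ** 2 * hr ** 2 / rx_thresh_w) ** 0.25

# Commonly quoted ns-2 defaults (Pt = 0.2818 W, RXThresh = 3.652e-10 W,
# 1.5 m antennas) give a nominal range of roughly 250 m.
print(round(two_ray_range(pt_w=0.2818, rx_thresh_w=3.652e-10)))  # ~250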
2.2 Models for Variable Transmit Power Assignment
Based on the above discussion, we use the following models in
our evaluations to create networks with unidirectional links.
TwoPower In this model, the transmission range of a node can
take one of two possible values with equal probability. A
node can have either a short or a long range corresponding
to low and high transmission power levels, respectively. The
fraction of low power nodes is the variable parameter. A
similar model was used in [38, 26]. This model represents a
heterogeneous network with two widely different radio capa-
bilities. For example, transmission power levels of vehicular
and man-pack radios in a battlefield scenario can differ by as
much as 10dB.
RandomPower With this model, each node is assigned a random
transmission range that is uniformly distributed between minimum
and maximum range values. This model is representative
of two practical scenarios where unidirectional links
might occur: (i) a generalization of TwoPower level model
described above, i.e., a network of nodes with multiple different
power levels; (ii) a snapshot of a network in which
each node adjusts its transmit power based on the available
energy supply to conserve its battery power.
Rodoplu and Meng (R&M) This model is based on the distributed
topology control algorithm proposed by Rodoplu and Meng
[37]. Topology control algorithms (e.g., [13, 37, 34, 40])
adjust node transmit powers in order to obtain a topology
that optimizes a certain objective such as network capacity,
network reliability or network lifetime. Almost all topology
control algorithms in the literature try to guarantee some
form of network connectivity while optimizing one or all of
the above criteria. We have chosen the R&M algorithm because
it is the only algorithm in our knowledge that considers
unidirectional links and ensures strong connectivity possibly
using some unidirectional links. This feature of the algorithm
provides a favorable case for the use of unidirectional
links for routing to potentially provide better network con-
nectivity. All other algorithms guarantee connectivity using
bidirectional links alone and thus are not good candidates for
our evaluation.
Here we briefly review the R&M algorithm for the benefit
of the readers. The algorithm aims at achieving energy
efficiency through transmit power adjustment. Because of
the nature of wireless communication, it is sometimes energy
efficient for a node to use a lower transmit power and
communicate with a farther node using intermediate relaying
nodes than to use higher power and communicate directly.
The algorithm uses this observation to advantage. Central to
the algorithm is the notion of an enclosure. Enclosure of a
node represents its immediate locality. As long as each node
maintains links with nodes in its enclosure, strong connectivity
is guaranteed. So each node reduces its transmit power
from the maximum value to a level where it can reach only
nodes in its enclosure. The algorithm assumes that each node
knows its position. Every node computes its enclosure set
by exchanging position information with all reachable nodes
(using maximum power).
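To make the first two models above concrete, the following Python sketch shows one way random topologies with unidirectional links could be generated under the TwoPower and RandomPower assignments (the R&M model is omitted, since it requires the full enclosure computation). All function names and parameter defaults are illustrative and not taken from the paper's simulation code.

import math
import random

def place_nodes(n, side_m):
    """Place n nodes uniformly at random in a side_m x side_m square (meters)."""
    return [(random.uniform(0, side_m), random.uniform(0, side_m)) for _ in range(n)]

def two_power_ranges(n, frac_low, short_range=125.0, long_range=250.0):
    """TwoPower model: each node is low power with probability frac_low."""
    return [short_range if random.random() < frac_low else long_range for _ in range(n)]

def random_power_ranges(n, min_range, max_range=250.0):
    """RandomPower model: each node's range is uniform in [min_range, max_range]."""
    return [random.uniform(min_range, max_range) for _ in range(n)]

def build_links(positions, ranges):
    """A directed link i -> j exists iff node j lies within node i's range."""
    links = set()
    for i, pi in enumerate(positions):
        for j, pj in enumerate(positions):
            if i != j and math.dist(pi, pj) <= ranges[i]:
                links.add((i, j))
    # A link is unidirectional when its reverse is absent.
    unidirectional = {(i, j) for (i, j) in links if (j, i) not in links}
    return links, unidirectional

# Example: 100 nodes at roughly 100 nodes/sq. km, i.e., a 1000 m x 1000 m field.
pos = place_nodes(100, 1000.0)
rng = two_power_ranges(100, frac_low=0.5)
links, uni = build_links(pos, rng)
print(len(links), "directed links,", len(uni), "of them unidirectional")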
2.3 Evaluation Methodology
Our goal here is to assess the benefit attainable from using uni-directional
links for routing in multihop wireless networks. To accomplish
this goal, we evaluate the idealized routing performance
of two approaches: (i) utilizing unidirectional as well as bidirectional
links; (ii) utilizing only bidirectional links. Our evaluation
process involves static simulation of a large number (over a thousand
for each data point) of random multihop network topologies containing
unidirectional links, and comparing the average connectivity
and path cost metrics with and without unidirectional links. The
three models for transmit power assignment described in the previous
subsection are used to generate networks with unidirectional
links.
To measure connectivity, we compute the average number of
strongly connected components and the size of the largest strongly connected component
over all random graph samples. For comparing the path
quality, we consider the average shortest path cost, per-hop acknowledgment
cost and the total communication cost. Note that
all path costs are in number of hops and are averaged over all node
pairs having a bidirectional path between them, for each random
graph sample, and then averaged over all such samples. The per-hop acknowledgment
cost is computed as the average cost of traversing
a shortest path between a pair of nodes hop-by-hop in the reverse
direction. The total communication cost is simply the sum of the
shortest path and acknowledgment costs.
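As a rough illustration of how these metrics could be computed on one generated topology, the sketch below assumes the networkx library (the paper does not specify its own tooling). It compares the full directed graph against its bidirectional-only subgraph; its averaging rule (pairs whose forward path exists and whose every hop can be acknowledged in reverse) is a close approximation of, but not necessarily identical to, the rule described above.

import networkx as nx

def topology_metrics(directed_links, num_nodes):
    """directed_links: set of (i, j) pairs meaning node i can transmit to node j."""
    full = nx.DiGraph()
    full.add_nodes_from(range(num_nodes))
    full.add_edges_from(directed_links)

    bidir = nx.DiGraph()
    bidir.add_nodes_from(range(num_nodes))
    bidir.add_edges_from((u, v) for (u, v) in directed_links
                         if (v, u) in directed_links)

    out = {}
    for name, g in (("with_unidirectional", full), ("bidirectional_only", bidir)):
        sccs = list(nx.strongly_connected_components(g))
        lengths = dict(nx.all_pairs_shortest_path_length(g))
        fwd, ack = [], []
        for s in g.nodes:
            for d in g.nodes:
                if s == d or d not in lengths.get(s, {}):
                    continue
                path = nx.shortest_path(g, s, d)
                # Hop-by-hop acknowledgment: each hop (u, v) is acked over the
                # shortest v -> u route (1 hop when the link is bidirectional).
                hop_acks = [lengths[v].get(u) for u, v in zip(path, path[1:])]
                if any(a is None for a in hop_acks):
                    continue  # some hop cannot be acknowledged; skip this pair
                fwd.append(len(path) - 1)
                ack.append(sum(hop_acks))
        avg = lambda xs: sum(xs) / len(xs) if xs else float("nan")
        out[name] = {
            "strongly_connected_components": len(sccs),
            "largest_component": max(len(c) for c in sccs),
            "avg_shortest_path_cost": avg(fwd),
            "avg_ack_cost": avg(ack),
            "avg_total_cost": avg(fwd) + avg(ack),
        }
    return out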
We experiment with a wide variety of node densities and radio
ranges. All our experiments are for 100 node networks. Each
random network topology consists of nodes randomly placed in a
square field. To vary node density (measured as nodes/sq. km), the
dimensions of the field are varied. In all three range assignment
models, a fixed maximum transmission range of 250 m is used. In
the TwoPower model, the fraction of low power nodes is varied to
get different range assignments. The long range is the same as the maximum
range, while the short range is always set to 125 m. Note that
we experimented with different short range values, but the results
are not very sensitive to these values. In the RandomPower model,
the minimum range is changed for variation in ranges. In the R&M
model, the radio range of a node is controlled by the algorithm, and
cannot be artificially varied.
2.4 Simulation Results
2.4.1 Variation in Node Density
Here we study the effect of node density on connectivity and path
cost metrics in all three range assignment models. The fraction of
low power nodes in the TwoPower model is set to 0.5 as this value
results in the largest number of unidirectional links. In the Random-
Power model, the minimum range value is kept constant at 125 m
which is chosen somewhat arbitrarily. We did experiments with
other values of minimum range, but the results did not vary much.
Node density is varied so that all network configurations are covered
starting from very sparse and disconnected networks to highly
dense and connected networks. Note that the number of unidirectional
and bidirectional links (data not shown) increases with increasing
density in all three models. However, the relative number of these
links and their rate of increase with density is very much dependent
on the specific model used. Also, we noticed that the mean radio
range in the R&M model shrank as nodes became denser. This is
expected, however, given the nature of the underlying algorithm.
The first set of plots (Figure 2) studies the network connectivity
properties with and without using unidirectional links. The number
of strongly connected components and the size of the largest components
are very similar regardless of whether or not unidirectional
links are used. Note that unidirectional links do not improve connectivity
in the R&M model even though they are explicitly taken
into account by the algorithm.
Figure 2: Connectivity metrics in all three models with varying density. Panels (a)-(c) plot the number of strongly connected components and panels (d)-(f) the size of the largest strongly connected component against density (nodes/sq. km) for the TwoPower, RandomPower, and R&M models, with and without unidirectional links.
Furthermore, we found that connectivity metrics in this model are exactly identical to the case where
all nodes use the maximum range. Both these observations suggest
that it may be somewhat unlikely in random topologies for two sub-components
to be connected by two unidirectional links (between
different node pairs).
The second set of plots (Figure 3 (a, b, c)) shows the average cost
of the shortest path. The initial hump in the plots is because of the
sharp transition from disconnected to connected networks within a
small range of densities. Observe that ignoring unidirectional links
only marginally increases the shortest path cost in TwoPower and
RandomPower models (Figure 3 (a, b)) except when the density is
between 50-100 nodes/sq. km, where the increase is more signifi-
cant. In the R&M model (Figure 3 (c)), the increase is marginal for
lower density, but increases with increasing density.
However, note that the shortest path cost is only a part of the
overall picture. Many ad hoc network protocols use some sort of
per-hop acknowledgment either in the network or the link layer to
guarantee reliable transmission and also to detect link breaks. Use
of unidirectional links will cause such acknowledgments to traverse
multiple hops - possibly in the network layer (see [23] for an idea
based on tunneling). This will increase the overall communication
cost. As expected, the hop-by-hop acknowledgement costs are
higher in all three models when unidirectional links are used (Figure
3 (d, e, f)). The overall communication cost in TwoPower and Ran-
domPower models (Figure 3 (g, h)) is approximately the same with
or without unidirectional links. In the R&M model (Figure 3 (i)),
they are still similar for lower density, but the use of unidirectional
links brings down the cost a bit (up to 10%) when the node density
is very high.
2.4.2 Variation in Radio Range
Till now, in TwoPower and RandomPower models, the variability
in range has been fixed and node density has been varied. Here
we study the effect of variation in ranges for a fixed density. We set
the node density to 100 nodes/sq. km, which yields a connected network
when all nodes use the maximum range. Using this density
value allows us to meaningfully evaluate the connectivity advantage
from unidirectional links. A higher value of density will produce
more bidirectional links and thus benefit the case
without unidirectional links. On the other hand, a lower density
value will not allow us to explore the whole range of connectivi-
ties, as the network will not get connected for any range assignment.
Figure
4 shows all metrics with varying fraction of low power
nodes in the TwoPower model. Note that connectivity (Figure 4 (a, b))
improves only slightly (by less than a few percent) by using
unidirectional links. On the other hand, the average total communication
cost (Figure 4 (e)) improves (up to about 7%, but mostly
lower) when a large fraction of nodes is low power; the costs are
similar when a small fraction of nodes is low power.
The effect of variability in node ranges in the RandomPower
model is shown in Figure 5 for different values of the minimum
range. There is some noticeable improvement (about 15%) in the
largest components (Figure 5 (b)) with unidirectional links when
minimum range is very small. However, for higher values of minimum
range, the improvements start to drop. This is somewhat
expected because the variability in node ranges decreases with increase
in the minimum range. Similar observation applies for communication
cost (Figure 5 (e)) as well; there is up to 10% improvement
with unidirectional links when the minimum range is small.
The general observation from the foregoing evaluations is that
unidirectional links provide only incremental benefit. They do not
improve connectivity in most cases. They do improve shortest path
cost in general. But with per-hop acknowledgments, the overall
benefit is small and is restricted to only certain densities and radio
ranges.
Figure 3: Path cost metrics in all three models with varying density. Panels (a)-(c) show the average shortest path cost, (d)-(f) the average hop-by-hop acknowledgment cost, and (g)-(i) the average total communication cost for the TwoPower, RandomPower, and R&M models, with and without unidirectional links.
3. ELIMINATION OF UNIDIRECTIONAL LINKS
The majority of the protocols developed for multihop wireless networks
assume bidirectional links (e.g., [17], DSDV [27], AODV [28],
TBRPF [24], OLSR [5]). But for correct operation in the presence
of unidirectional links, they require additional mechanisms to eliminate
unidirectional links from route computations. Our goal in this
section is to understand the importance of such mechanisms and
their effect on the overall performance of the routing protocol. We
investigate this issue using AODV as a case study. While general
comments are difficult to make, we do believe that other protocols
will also benefit from the mechanisms we develop and our observations
from the performance evaluation.
3.1 AODV Overview
AODV is an on-demand routing protocol. It is loosely based on
the distance-vector concept. In on-demand protocols, nodes obtain
routes on an as needed basis via a route discovery procedure. Route
discovery works as follows. Whenever a traffic source needs a route
to a destination, it initiates a route discovery by flooding a route request
(RREQ) for the destination in the network and then waits for
a route reply (RREP). When an intermediate node receives the first
copy of a RREQ packet, it sets up a reverse path to the source using
the previous hop of the RREQ as the next hop on the reverse
path. In addition, if there is a valid route available for the desti-
nation, it unicasts a RREP back to the source via the reverse path;
otherwise, it re-broadcasts the RREQ packet. Duplicate copies of
the RREQ are immediately discarded upon reception at every node.
The destination on receiving the first copy of a RREQ packet forms
a reverse path in the same way as the intermediate nodes; it also
unicasts a RREP back to the source along the reverse path. As the
RREP proceeds towards the source, it establishes a forward path to
the destination at each hop. AODV also includes mechanisms for
erasing broken routes following a link failure, and for expiring old
and unused routes. We do not discuss them, as they are not relevant
here.
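A schematic, heavily simplified rendition of this RREQ handling at a node is sketched below. The message fields, the Node record, and the network send primitives are all invented for illustration; details such as sequence numbers, route freshness checks, and route expiry are omitted.

from dataclasses import dataclass, field

@dataclass
class RREQ:
    source: int
    destination: int
    request_id: int            # together with source, identifies one discovery

@dataclass
class Node:
    address: int
    seen_rreqs: set = field(default_factory=set)
    routes: dict = field(default_factory=dict)   # destination -> next hop

    def handle_rreq(self, rreq, prev_hop, network):
        """Basic AODV: act only on the first copy of each (source, id) RREQ."""
        key = (rreq.source, rreq.request_id)
        if key in self.seen_rreqs:
            return                                 # duplicate copies are discarded
        self.seen_rreqs.add(key)
        # Reverse path to the source goes through whoever delivered this copy.
        self.routes[rreq.source] = prev_hop
        if self.address == rreq.destination or rreq.destination in self.routes:
            # Unicast a RREP back along the (possibly unidirectional) reverse path.
            network.unicast(self.address, prev_hop, ("RREP", rreq.destination))
        else:
            network.broadcast(self.address, rreq)  # keep flooding toward the destination

The final unicast is exactly where a unidirectional reverse-path link breaks the basic protocol, as discussed next.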
The above route discovery procedure requires bidirectional links
for correct operation. Only then can a RREP traverse back to the
source along a reverse path and form a forward path to the destination
at the source. Many common MAC protocols check link
bidirectionality only for unicast transmissions. For example, IEEE
802.11 DCF MAC [14] protocol uses an RTS-CTS-Data-ACK exchange
for unicast transmissions; receipt of CTS following an RTS
or ACK following the data transmission on a link ensures that it is
bidirectional. Broadcast transmissions, however, cannot detect the
presence of unidirectional links. Since AODV RREQ packets typically
use link-layer broadcast transmissions, some unidirectional
links can go undetected and as a result reverse paths may contain
unidirectional links (directed away from the source). RREP transmissions
along such reverse paths will fail, as they are unicast.
Route discovery fails when none of the RREPs reach the source.
Figure 4: TwoPower model: connectivity and path cost metrics with varying fraction of low power nodes (strongly connected components, largest strongly connected component, average shortest path cost, average hop-by-hop ack cost, and average communication cost, with and without unidirectional links).
Figure 5: RandomPower model: connectivity and path cost metrics with varying values of minimum transmission range (same metrics as Figure 4, with and without unidirectional links).
Figure 6: All links are bidirectional except the one with an arrow. A receives the first copy of the RREQ from S for D via the S → A path and forms a reverse path A → S; the subsequent RREP transmission from A to S will fail. This scenario will repeat for later route discovery attempts from S. The alternate, longer path will never be discovered.
It can fail even when there is a bidirectional path between the source
and the destination. This is because only the first copy of a RREQ
packet - which may arrive via a unidirectional path from the source
- is considered by intermediate nodes and the destination to form
reverse paths and send back RREPs; later copies are simply discarded
even if they take bidirectional paths. See Figure 6 for an illustration.
In the worst case, this scenario will result in repeated route discovery
failures. Thus additional mechanisms are needed to avoid
the above problem in networks with unidirectional links.
3.2 Techniques for Handling Unidirectional
Links in AODV
In the following we describe three techniques to alleviate this
problem. The first two techniques - "BlackListing" and "Hello" -
are known techniques. The third technique, "ReversePathSearch",
is our contribution in this paper.
BlackListing This technique reactively eliminates unidirectional
links. It is included in the latest AODV specification [29].
Here, whenever a node detects a RREP transmission failure,
it inserts the next hop of the failed RREP into a "blacklist"
set. The blacklist set at a node indicates the set of nodes
from which it has unidirectional links. For example, in Figure
6 node A will blacklist node S. Later when a node receives
a RREQ from one of the nodes in its blacklist set,
it discards the RREQ to avoid forming a reverse path with
unidirectional link. This gives a chance for RREQ from an
alternate path (e.g., via C in Figure 6) to provide a different
reverse path. BLACKLIST TIMEOUT specifies the period
for which a node remains in the blacklist set. By default, this
period is set to the upper bound of the time it takes to perform
the maximum allowed number of route discovery attempts by
a source.
This technique is simple and has little overhead when there
are few unidirectional links. However, when there are many
unidirectional links, this approach is inefficient because these
links are blacklisted iteratively one at a time. Several route
discoveries may be needed before a bidirectional path, if one ex-
ists, is found. Another difficulty with this technique is in setting
an appropriate value for the BLACKLIST TIMEOUT.
Setting it to a small value may reduce the effectiveness of the
technique. On the other hand, setting it to a very large value
affects connectivity when there are many short-term unidirectional
links.
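A minimal sketch of the blacklist bookkeeping this implies is shown below; the class and the timeout value are illustrative (the specification derives BLACKLIST_TIMEOUT from the maximum number of route discovery retries, as noted above).

import time

BLACKLIST_TIMEOUT = 10.0   # illustrative value only

class BlackList:
    """Neighbors from which this node apparently has only a unidirectional link."""
    def __init__(self):
        self._expires = {}                    # neighbor -> expiry timestamp

    def add(self, neighbor):                  # called when a RREP unicast to neighbor fails
        self._expires[neighbor] = time.time() + BLACKLIST_TIMEOUT

    def contains(self, neighbor):             # consulted before accepting a RREQ
        expiry = self._expires.get(neighbor)
        if expiry is None:
            return False
        if time.time() > expiry:
            del self._expires[neighbor]       # entry has lapsed
            return False
        return True

In the RREQ handler sketched earlier, the only change needed is an initial check of the form "if blacklist.contains(prev_hop): return".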
Hello In contrast to the BlackListing technique, this technique
proactively eliminates unidirectional links by using periodic
one-hop Hello packets. A similar idea has also been used in
OLSR [5] to record only bidirectional links. In each Hello
packet, a node includes all nodes from which it can hear Hellos
(i.e, its set of neighbors). If a node does not find itself in
the Hello packet from another node, it marks the link from
that node as unidirectional. Just as in the BlackListing tech-
nique, every node ignores RREQ packets that come via such
unidirectional links. Note that these hello packets are identical
to the AODV hello packets [29] except for the additional
neighborhood information.
The advantage of this technique is that it automatically detects
unidirectional links by exchanging Hello packets. But
the periodic, large Hello packets can be a significant over-
head. Although the size of the Hello packets may be reduced
by using "differential" Hellos [24], the periodic packet overhead
is still a concern. However, in situations when Hellos
must be used for maintaining local neighborhood and to detect
link failures (e.g., when the link layer cannot provide
any feedback about link failures), incremental overhead for
unidirectional link detection may not be significant.
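The link-classification step itself is simple; a hedged sketch follows, assuming each node caches the neighbor set advertised in the last Hello heard from each neighbor (all names are illustrative).

def classify_incoming_links(my_address, heard_from, last_hello_neighbors):
    """Split incoming links into bidirectional and unidirectional.

    heard_from:            nodes whose Hello packets this node can hear.
    last_hello_neighbors:  map neighbor -> set of nodes listed in its last Hello.
    A link from n is treated as bidirectional only if n also lists us.
    """
    bidirectional, unidirectional = set(), set()
    for n in heard_from:
        if my_address in last_hello_neighbors.get(n, set()):
            bidirectional.add(n)
        else:
            unidirectional.add(n)   # RREQs arriving from n will be ignored
    return bidirectional, unidirectional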
ReversePathSearch Unlike the above two techniques, this technique
does not explicitly remove unidirectional links. In-
stead, it takes a completely different approach. Each unidirectional
link is viewed as a "fault" in the network and multiple
paths between the nodes are discovered to perform fault-tolerant
routing. The basic idea is as follows. During the
RREQ flood, multiple loop-free reverse paths to the source
are formed at intermediate nodes and the destination. Using
a distributed search procedure, multiple RREPs explore
this multipath routing structure in an attempt to find one or
more bidirectional paths between the source and the destina-
tion. This search procedure is somewhat similar to the well-known
depth first search algorithm. When RREP fails at a
node, the corresponding reverse path is erased and the RREP
is retried along an alternate reverse path, if one is available;
when all reverse paths fail at a node during this process, the
search backtracks to upstream nodes 2 of that node with respect
to the source and they too follow the same procedure.
This continues until either one or more bidirectional paths
are found at the source, or all reverse paths are explored.
This technique is described in more detail in the following
subsection.
3.3 Reverse Path Search Technique
In a prior work [19], we have investigated a multipath extension
to AODV, called AOMDV, where route update rules to maintain
multiple loop-free paths are described. We use the same AOMDV
route update rules here in the ReversePathSearch technique to maintain
multiple loop-free paths to a destination at every node.
In the ReversePathSearch algorithm, all RREQ copies including
duplicates are examined at intermediate nodes and the destination
for possible alternate reverse paths to the source.
2 A node X is upstream of a node Y with respect to a node D if Y appears on a path from X to D. Conversely, Y is a downstream node of X for D.
Figure 7: Demonstration of reverse path search. Multiple reverse paths to the source S formed during the RREQ flood are shown; some of the reverse paths contain unidirectional links, such as C → A → S. Consider the RREP propagation via C → A → S. The transmission from A to S fails, causing a BRREP transmission at A. C erases the reverse path via A and transmits a RREP to B in order to explore the reverse path via B. This also fails, causing C to transmit a BRREP. E then erases the reverse path via C and transmits a RREP to F in order to explore the path E → F → G → H → S. This will be successful.
However, reverse paths are formed only from those copies that satisfy route update
rules and provide loop-free paths [19]. Other copies are simply
discarded. Note that this is different from basic AODV where only
the first copy is looked at.
When a RREQ copy at an intermediate node creates or updates
a reverse path, and the intermediate node has no valid path to the
destination, the RREQ copy is re-broadcasted provided it is the first
copy that yields a reverse path; this is somewhat similar to basic
AODV where only first copies of RREQ are forwarded to prevent
looping during the flood. On the contrary, if the intermediate node
does have a valid path to the destination, it checks whether or not a
RREP has already been sent for this route discovery. If not, it sends
back a RREP along the newly formed reverse path and remembers
the next hop used for this RREP; otherwise, the RREQ copy is
dropped.
When the destination receives copies of RREQs, it also forms
reverse paths in the same way as intermediate nodes. However, unlike
intermediate nodes, the destination sends back a RREP along
each new reverse path. Multiple replies from the destination allow exploration
of multiple reverse paths concurrently - thus speeding
up the search for a bidirectional path. In contrast, allowing multiple
destination replies in basic AODV has little benefit unless those
replies take non-overlapping paths to the source. This is because intermediate
nodes have at most one reverse path back to the source.
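The RREQ-copy handling just described can be summarized in the following sketch. The MultipathNode record, the send primitives, and the gives_loop_free_path predicate (which stands in for the AOMDV route update rules of [19]) are all assumptions made for illustration, and per-discovery state is simplified to per-source state.

from dataclasses import dataclass, field

@dataclass
class MultipathNode:
    address: int
    reverse_paths: dict = field(default_factory=dict)   # source -> set of reverse next hops
    rrep_next_hop: dict = field(default_factory=dict)   # source -> next hop of last RREP sent
    forward_routes: dict = field(default_factory=dict)  # destination -> next hop

def handle_rreq_copy(node, rreq, prev_hop, network, gives_loop_free_path):
    """ReversePathSearch: every RREQ copy may contribute an alternate reverse path."""
    if not gives_loop_free_path(node, rreq, prev_hop):
        return                                           # copy is simply discarded
    paths = node.reverse_paths.setdefault(rreq.source, set())
    first_reverse_path = not paths
    paths.add(prev_hop)

    if node.address == rreq.destination:
        # The destination replies along every new reverse path.
        node.rrep_next_hop[rreq.source] = prev_hop
        network.unicast(node.address, prev_hop, ("RREP", rreq.source))
    elif rreq.destination in node.forward_routes:
        # An intermediate node with a valid route replies at most once per discovery.
        if rreq.source not in node.rrep_next_hop:
            node.rrep_next_hop[rreq.source] = prev_hop
            network.unicast(node.address, prev_hop, ("RREP", rreq.source))
    elif first_reverse_path:
        network.broadcast(node.address, rreq)            # forward only the first useful copy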
When an intermediate node receives a RREP, it follows route
update rules in [19] to form a loop-free forward path to the desti-
nation, if possible; else, the RREP is dropped. Supposing that the
intermediate node forms the forward path and has one or more valid
reverse paths to the source, it checks if any of those reverse paths
was previously used to send a RREP for this route discovery. If not,
it chooses one of those reverse paths to forward the current RREP,
and also remembers the next hop for that reverse path; otherwise,
the RREP is simply dropped.
RREP transmission failure (as a result of transmission over a
unidirectional link, for example) at an intermediate node results in
that node erasing the corresponding reverse path, and retrying an
alternate reverse path. If no such alternate path is available, the intermediate
node transmits (broadcast transmission) a new message
called the "backtrack route reply" (BRREP) to inform its upstream
nodes (with respect to the source) to try other reverse paths at those
nodes. A BRREP is also generated by an intermediate node, if that
node does not have any reverse path upon a RREP reception. On
receiving a BRREP, an intermediate node upstream of the BRREP
source (meaning it has last sent a RREP to the BRREP source for
this route discovery) takes a similar action as on a RREP failure;
nodes that are not upstream of the BRREP source simply discard
the packet on reception. When the destination encounters a RREP
failure, or receives a BRREP, it only erases corresponding reverse
paths. See Figure 7 for an illustration.
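Continuing the previous sketch (and reusing its MultipathNode record and assumed send primitives), the retry-and-backtrack behaviour on a RREP failure might look roughly as follows.

def on_rrep_failure(node, source, failed_next_hop, network):
    """Retry the RREP over an alternate reverse path; backtrack with a BRREP if none is left."""
    node.reverse_paths.setdefault(source, set()).discard(failed_next_hop)
    alternates = node.reverse_paths[source]
    if alternates:
        next_hop = next(iter(alternates))      # any remaining loop-free reverse path
        node.rrep_next_hop[source] = next_hop
        network.unicast(node.address, next_hop, ("RREP", source))
    else:
        # No reverse path left at this node: ask upstream nodes (with respect to
        # the source) to explore their own alternate reverse paths.
        network.broadcast(node.address, ("BRREP", source))

def on_brrep(node, source, brrep_sender, network):
    """Only a node that last sent its RREP to the BRREP sender reacts; others discard it."""
    if node.rrep_next_hop.get(source) == brrep_sender:
        on_rrep_failure(node, source, brrep_sender, network)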
Note that the above procedure is guaranteed to terminate. To
see this, observe that every RREP failure erases the corresponding
reverse path. So reverse paths cannot be explored indefinitely since
there are only a finite number of them. On the other hand, alternate
reverse paths are not explored at the intermediate nodes as long
as RREPs successfully go through. Also note that in the above
description, some details have been omitted for the sake of brevity.
For instance, the algorithm's actions to cope with BRREP loss are not
mentioned.
Multiple loop-free reverse paths used by the above algorithm are
in general a subset of all possible reverse paths. Thus, sometimes it
is possible that the multiple reverse paths explored by the algorithm
do not include a bidirectional path between source and destination
although such a path exists. But in dense networks that we con-
sider, often there is more than one bidirectional path and the above
possibility is rare.
Finally, multiple replies from the destination in the algorithm
yield multiple forward paths at intermediate nodes and the source.
This ability to compute multiple bidirectional paths in a single route
discovery is highly beneficial in mobile networks for efficient recovery
from route breaks.
3.4 Performance Evaluation
In this section we evaluate the performance of the three techniques
described in the previous subsection relative to basic AODV
under varying number of unidirectional links and node speeds. Two
primary goals from this evaluation are: (i) to understand the impact
of unidirectional links on basic AODV performance; and (ii)
to evaluate the effectiveness of the three techniques in handling uni-directional
links.
3.4.1 Simulation Environment
We use a detailed simulation model based on ns-2 [8]. The
Monarch research group at CMU developed support for simulating
multi-hop wireless networks complete with physical, data link
and MAC layer models [4] on ns-2. The distributed coordination
function (DCF) of IEEE 802.11 [14] for wireless LANs is used
as the MAC layer. The radio model uses characteristics similar to
a commercial radio interface, Lucent's WaveLAN. WaveLAN is a
shared-media radio with a nominal bit-rate of 2 Mb/s and a nominal
radio range of 250 meters. More details about the simulator
can be found in [4, 8]. This simulator has been used for evaluating
performance of earlier versions of the AODV protocol (e.g., [4,
30]).
The AODV model in our simulations is based on the latest protocol
specification [29], except that the expanding ring search is
disabled for all protocol variations. Note that the expanding ring
search introduces an additional reason for route discovery failures,
i.e., when a smaller ring size (TTL value) is used for the search.
This makes analysis somewhat difficult. So in our AODV model,
a source does network-wide route discoveries appropriately spaced
in time until either a route is obtained or a maximum retry limit
(3) is reached; when the limit is exceeded, the source reports to the application
that the destination is unreachable and drops all buffered
packets for the destination - we term this event a "route search
failure" in our evaluations. In addition, link layer feedback is used
to detect link failures in all protocol variations. The 802.11 MAC
layer reports a link failure when it fails to receive CTS after several
RTS attempts, or to receive ACK after several retransmissions of
DATA. Note that in the Hello technique, link failures are detected
using hello messages as well as the feedback, whichever detects the
link breakage first.
The TwoPower model is used to create unidirectional links, primarily
because it does not perform power control. Use of power control
makes the analysis here difficult as the number of unidirectional
links then will become heavily dependent on the choice of the actual
power control algorithm and the frequency at which the algorithm
is invoked. Note that the frequency of invocation did not play
any role in our earlier evaluations as we used only static topolo-
gies. We modified the ns-2 simulator to allow variable transmission
ranges for nodes. In all our experiments, short and long ranges
in the TwoPower model are set to 125 m and 250 m, respectively.
We vary the fraction of low power nodes to vary the number of
unidirectional links.
We consider 100 node networks with nodes initially placed at
random in a rectangular field of dimensions 575 m x 575 m. The
field size is chosen to guarantee a connected network across the parameter
space. We use random waypoint model [4] to model node
movements. Pause time is always set to zero and the maximum
speed of the nodes is changed to change mobility.
The traffic pattern in our experiments consists of a fixed number of
CBR connections (20) between randomly chosen source-destination
pairs; each connection starts at a random time at the beginning
of the simulation and stays until the end. Each CBR source sends
packets (each of size 512 bytes) at a fixed rate of 4 packets/s.
All our experiments use 500 second simulation times. In the
case of static networks, each data point in the plots is an average
of at least 50 runs with different randomly generated initial node
positions and range assignments in each run. In the mobility exper-
iments, we average over 25 randomly generated mobility scenarios
and range assignments. Identical scenarios are used across all protocol
variations.
3.4.2 Performance Metrics
We evaluate four key performance metrics: (i) Packet delivery
fraction - ratio of the data packets delivered to the destination
to those generated by the CBR sources; (ii) Average end-to-end
delay of data packets - this includes all possible delays caused
by buffering during route discovery, queuing delay at the interface,
retransmission delays at the MAC, propagation and transfer times;
(iii) Route search failures - total number of route search failure
events (see previous subsection) at sources; (iv) Normalized routing
load - the number of routing packets "transmitted" per data packet
"delivered" at the destination. Each hop-wise transmission of a
routing packet is counted as one transmission.
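These four metrics reduce to simple ratios over per-run totals; for concreteness, a sketch (all names are illustrative):

def summarize_run(data_sent, data_delivered, routing_transmissions,
                  total_delay_s, route_search_failures):
    """Aggregate the four metrics for one simulation run from its raw counters."""
    return {
        "packet_delivery_fraction": data_delivered / data_sent,
        "avg_end_to_end_delay_s": total_delay_s / data_delivered,
        "route_search_failures": route_search_failures,
        # Each hop-wise forwarding of a routing packet counts as one transmission.
        "normalized_routing_load": routing_transmissions / data_delivered,
    }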
3.4.3 Simulation Results
We present two sets of experiments. In the first set, the network
is static and the number of unidirectional links is varied. In the
second set, node mobility is considered. Figure 8 shows the packet
delivery fraction, average delay, and the route search failures as
a function of the number of unidirectional links. We change the
fraction of low power nodes from 0 to 0.5 to increase the number
of unidirectional links. With increase in unidirectional links, basic
AODV drops the highest number of packets (as many as 20%) and
also experiences the most route search failures (Figure 8 (a,
c)). This is because the basic AODV protocol does not take notice
of the unidirectional links and repeatedly performs route discoveries
without any benefit. Note that after every route search failure,
all packets buffered for the destination at the source are dropped.
The drop in packet delivery is less drastic for BlackListing compared
to basic AODV. But it still drops as many as 14% of the packets
because of its slowness in eliminating unidirectional links one
by one. It still has a large number of route search failures; but performs
somewhat better than basic AODV. The delay performance
of AODV and BlackListing (Figure 8 (b)) is similar as route discovery
latency dominates the delay in both cases.
Both Hello and ReversePathSearch always deliver almost all packets
(Figure 8 (a)), as both are able to successfully find routes
always (Figure 8 (c)). However, their delay performance (Figure 8
(b)) is quite different. Delay for the Hello technique is similar to basic
AODV and BlackListing, while ReversePathSearch has significantly
lower delay than others. This shows that ReversePathSearch
can effectively overcome unidirectional links by exploring multiple
reverse paths. However, the delay performance of the Hello technique
is counter-intuitive. One would normally expect the performance
of Hello to be independent of unidirectional links because it
proactively eliminates unidirectional links in the background without
burdening the AODV route discovery mechanism. Below we will
analyze the reason behind this unexpected behavior.
By additional instrumentation, we found that the sharp increase
in route discovery attempts with increase in unidirectional links
(Figure
9 (a)) explains the delay performance of the Hello tech-
nique. What is more interesting is the reason behind the rise in
route discoveries itself. We found this was due to an undesirable
interaction of this protocol with the 802.11 MAC layer. We
noticed that in general MAC collisions increased with increase in
the number of unidirectional links. This is because of the hidden
terminal interference via unidirectional links, and the insufficiency
of the RTS-CTS handshake in 802.11 MAC to avoid such hidden
terminals (See [32] for a similar observation). For the Hello tech-
nique, however, the increase in collisions is quite dramatic (Figure
9 (c)) because of large and periodic (every second) broadcast hello
packets. Consequently, the efficiency of the MAC layer is negatively
affected, resulting in a larger number of unsuccessful transmissions
and triggering of link failure signals to the routing protocol.
Such link failure signals cause route breaks (Figure 9 (b)). Note
that in the basic AODV protocol, every route break will result in a
new route discovery attempt. Since the Hello technique is identical
to basic AODV except for the additional hello exchanges, it makes
more route discovery attempts as unidirectional links
grow in number. Note that basic AODV and BlackListing are not
affected very much by the above phenomenon, as they spend most
of their effort finding a route. Even though ReversePathSearch is
subject to this problem to some extent, the availability of redundant
forward paths prevents a big drop in its performance.
Figure 10 shows the routing load comparison for the four proto-
cols. Overall, ReversePathSearch has the lowest overhead followed
by BlackListing, basic AODV and Hello. The high overhead of the
Hello technique is expected because of the periodic hello messages.
Also, low routing overhead in ReversePathSearch indicates that the
higher per route discovery costs in ReversePathSearch due to more
route replies are well offset by the significant reduction
in route discoveries (Figure 9(a)). The relative performance in
terms of byte overhead (not shown) is the same as above. In sum-
mary, the ReversePathSearch technique allows for a much more effective
elimination of unidirectional links from route computations
compared to BlackListing, and with a much lower overhead cost than Hello.
Figure 8: Performance with varying number of unidirectional links: (a) packet delivery fraction, (b) average delay, and (c) route search failures, for Basic AODV, BlackListing, Hello, and ReversePathSearch.
Figure 9: Route discoveries (a), route breaks (b), and MAC collisions (c) for the Hello technique with varying number of unidirectional links.
Figure 10: Routing load with varying number of unidirectional links.
The effect of node mobility on all metrics is shown in Figure 11.
Here we vary the maximum node speed between 0 and 20 m/s. To
stress the protocols, we set the fraction of low power nodes to 0.5
which results in the largest number of unidirectional links in static
networks. Mobility also affects the number and duration of unidirectional
links. But these are somewhat hard to quantify in mobile
networks. As expected, performance in terms of packet delivery,
delay and overhead degrades for all protocols with increase in mobility
(Figure 11). However, observations about the relative
performance of the four protocols remain pretty much the same as in
the static network case. Note that the difference in routing load
between basic AODV, BlackListing, and Hello shrinks as mobility
starts to play a dominant role. ReversePathSearch, however, continues
to perform significantly better than the rest. With the built-in
redundancy in the route discovery process, it not only eliminates
unidirectional links from the route computations by exploring alternate
reverse paths, but also, in a similar vein, avoids broken reverse
paths due to mobility (Figure 11 (c)). In addition, by computing
multiple forward paths at the source and intermediate nodes, it obviates
the need for frequent route discovery attempts in response to
route breaks caused by node mobility.
4. RELATED WORK
Although many common routing protocols assume bidirectional
links, there is still a considerable amount of literature available on
routing using unidirectional links (e.g., [1, 10, 39, 31, 2]). These
protocols are mainly targeted towards two network environments,
namely, mixed satellite and terrestrial networks, and multihop wireless
networks, where unidirectional links commonly occur. Of the
several unicast routing proposals for multihop wireless networks
within the IETF MANET working group [18], only DSR [15] and
FSR [9] can fully support unidirectional links, while ZRP [12, 11]
and TORA [25] can partially support unidirectional links. There
have also been attempts to extend existing protocols to support uni-directional
links [33, 38, 16]. But none of the above efforts contain
any simulation or experimental evaluation of the impact of unidirectional
links on routing performance in realistic scenarios.
Support for unidirectional links below the network layer has also received
some attention. The link-layer tunneling approach has been explored
in [7, 23]. The main motivation behind this approach is
to hide the unidirectional nature of a link from higher layer protocols
so that they can operate over unidirectional links without
any modifications. This is basically achieved by forming a reverse
tunnel (possibly via a multihop path) for each unidirectional link
using information gathered by the routing protocol. In [36], an
alternative approach to tunneling, but with a similar goal, is pro-
posed. The idea here is to introduce a sub-layer beneath the net-work
layer to find and maintain multihop reverse routes for each
unidirectional link. There is also some work on using multihop
acknowledgements to discover unidirectional links [26], and GPS-based
approaches for enabling link-level acknowledgements [16]
over unidirectional links.
Work on channel access protocols for multihop wireless networks
with unidirectional links is starting to get attention [35, 3].
Ramanathan [35] makes an important observation that many uni-directional
links hurt channel access protocol performance. This
is somewhat related to our observation that utilizing unidirectional
links does not provide any significant additional benefit.
5. CONCLUSIONS
Unidirectional links commonly occur in wireless ad hoc networks
because of the differences in node transceiver capabilities
or perceived interference levels. Unidirectional links can presumably
benefit routing by providing improved network connectivity
and shorter paths. But prior work indicates that routing over uni-directional
links usually causes high overheads. With this in mind,
we evaluated performance advantages, in terms of connectivity and
path costs, of routing using unidirectional links under ideal con-
ditions. Our evaluations were done with respect to three variable
transmission range assignment models that reflect some realistic
network scenarios with unidirectional links. The main conclusion from
this study is that unidirectional links provide only incremental ben-
efit. Thus, protocols that avoid unidirectional links demand a closer
look.
Many common routing protocols, however, simply assume that
links are bidirectional. They will require some additional protocol
operations to detect and eliminate unidirectional links from
route computations. To assess the difficulty of doing this, we have
presented a case study with the AODV protocol where three such
techniques are presented and evaluated. It is observed that the Re-
versePathSearch technique performs the best because of its ability
to explore multiple paths. It exhibits a dual advantage, both in terms
of immunity from unidirectional links and from mobility-induced
link failures. While this case study has been performed only for
AODV, we expect that other protocols that share certain characteristics
with AODV, such as the on-demand nature or the distance
vector framework, will also benefit from these ideas.
In addition, our performance study revealed that 802.11 MAC
performance degrades in the presence of unidirectional links. A
similar observation was also made in [32]. These observations suggest
the need for more efficient MAC protocols to handle unidirectional
links, as such links may be inevitable in certain ad hoc
network scenarios (for example, a network of nodes with heterogeneous
powers).
Acknowledgments
This work is partially supported by NSF CAREER grant ACI-00961-
86 and NSF networking research grant ANI-0096264. Mahesh
Marina is supported by an OBR computing research award in the
ECECS department, University of Cincinnati.
6. REFERENCES
--R
Distributed Algorithms for Unidirectional Networks.
A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols
Optimized Link State Routing Protocol.
Fisheye State Routing Protocol (FSR) for Ad Hoc Networks.
A Distributed Routing Algorithm for Unidirectional Networks.
The Interzone Routing Protocol (IERP) for Ad Hoc Networks.
The Intrazone Routing Protocol (IARP) for Ad Hoc Networks.
Topology Control for Multihop Packet Radio Networks.
IEEE Standards Department.
The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks (DSR).
On supporting Link Asymmetry in Mobile Ad Hoc Networks.
Mobile Ad hoc Networks (MANET).
A Power Controlled Multiple Access Protocol for Wireless Packet Networks.
OSPF version 2.
Power Control in Ad-Hoc Networks: Theory
A Tunneling Approach to Routing with Unidirectional Links in Mobile Ad-Hoc Networks
Topology Broadcast Based on Reverse-Path Forwarding (TBRPF)
Using Multi-Hop Acknowledgements to Discover and Reliably Communicate over Unidirectional Links in Ad Hoc Networks
Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers
Ad Hoc On-Demand Distance Vector Routing
Ad hoc On-Demand Distance Vector (AODV) Routing
Performance Comparison of Two On-demand Routing Protocols for Ad Hoc Networks
A Distributed Routing Algorithm for Multihop Packet Radio Networks with Uni- and Bi-Directional Links
Medium Access Control in a Network of Ad Hoc Nodes with Heterogeneous Power Capabilities.
A Routing Algorithm for Wireless Ad Hoc Networks with Unidirectional Links.
Topology Control of Multihop Wireless Networks using Transmit Power Adjustment.
A Unified Framework and Algorithm for Channel Assignment in Wireless Networks.
Providing a Bidirectional Abstraction for Unidirectional Ad Hoc Networks.
Minimum Energy Mobile Wireless Networks.
Scalable Unidirectional Routing with Zone Routing Protocol (ZRP) Extensions for Mobile Ad-hoc Networks
Directed Network Protocols.
Distributed Topology Control for Power Efficient Operation in Multihop Wireless Ad Hoc Networks.
On the Construction of Energy-Efficient Broadcast and Multicast Trees in Wireless Networks
--TR
Highly dynamic Destination-Sequenced Distance-Vector routing (DSDV) for mobile computers
Distributed Algorithms for Unidirectional Networks
Packet-radio routing
A performance comparison of multi-hop wireless ad hoc network routing protocols
A unified framework and algorithm for channel assignment in wireless networks
Simulation-based performance evaluation of routing protocols for mobile ad hoc networks
Channel access scheduling in Ad Hoc networks with unidirectional links
A routing algorithm for wireless ad hoc networks with unidirectional links
Directed Network Protocols
Ad-hoc On-Demand Distance Vector Routing
On-Demand Multi Path Distance Vector Routing in Ad Hoc Networks
--CTR
Jun-Beom Lee , Young-Bae Ko , Sung-Ju Lee, EUDA: detecting and avoiding unidirectional links in ad hoc networks, ACM SIGMOBILE Mobile Computing and Communications Review, v.8 n.4, October 2004
Venugopalan Ramasubramanian , Daniel Moss, BRA: a bidirectional routing abstraction for asymmetric mobile ad hoc networks, IEEE/ACM Transactions on Networking (TON), v.16 n.1, p.116-129, February 2008
David Kotz , Calvin Newport , Robert S. Gray , Jason Liu , Yougu Yuan , Chip Elliott, Experimental evaluation of wireless simulation assumptions, Proceedings of the 7th ACM international symposium on Modeling, analysis and simulation of wireless and mobile systems, October 04-06, 2004, Venice, Italy
Douglas M. Blough , Mauro Leoncini , Giovanni Resta , Paolo Santi, Topology control with better radio models: Implications for energy and multi-hop interference, Performance Evaluation, v.64 n.5, p.379-398, June, 2007
Eleonora Borgia , Franca Delmastro, Effects of unstable links on AODV performance in real testbeds, EURASIP Journal on Wireless Communications and Networking, v.2007 n.1, p.32-32, January 2007
Douglas M. Blough , Mauro Leoncini , Giovanni Resta , Paolo Santi, The lit K-neigh protocol for symmetric topology control in ad hoc networks, Proceedings of the 4th ACM international symposium on Mobile ad hoc networking & computing, June 01-03, 2003, Annapolis, Maryland, USA
Christian Bettstetter , Christian Hartmann, Connectivity of wireless multihop networks in a shadow fading environment, Wireless Networks, v.11 n.5, p.571-579, September 2005
Handling asymmetry in power heterogeneous ad hoc networks, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.51 n.10, p.2594-2615, July, 2007
Wesam Al Mobaideen , Hani Mahmoud Mimi , Fawaz Ahmad Masoud , Emad Qaddoura, Performance evaluation of multicast ad hoc on-demand distance vector protocol, Computer Communications, v.30 n.9, p.1931-1941, June, 2007
Christian Bettstetter , Christian Hartmann, Connectivity of wireless multihop networks in a shadow fading environment, Proceedings of the 6th ACM international workshop on Modeling analysis and simulation of wireless and mobile systems, September 19-19, 2003, San Diego, CA, USA
Rendong Bai , Mukesh Singhal, Salvaging route reply for on-demand routing protocols in mobile ad-hoc networks, Proceedings of the 8th ACM international symposium on Modeling, analysis and simulation of wireless and mobile systems, October 10-13, 2005, Montral, Quebec, Canada
Liran Ma , Qian Zhang , Xiuzhen Cheng, A power controlled interference aware routing protocol for dense multi-hop wireless networks, Wireless Networks, v.14 n.2, p.247-257, March 2008
Paolo Santi, Topology control in wireless ad hoc and sensor networks, ACM Computing Surveys (CSUR), v.37 n.2, p.164-194, June 2005 | routing;asymmetric links;multihop wireless networks;ad hoc networks;on-demand routing;unidirectional links;multipath routing |
513809 | Priority scheduling in wireless ad hoc networks. | Ad hoc networks formed without the aid of any established infrastructure are typically multi-hop networks. Location dependent contention and "hidden terminal" problem make priority scheduling in multi-hop networks significantly different from that in wireless LANs. Most of the prior work related to priority scheduling addresses issues in wireless LANs. In this paper, priority scheduling in multi-hop networks is discussed. We propose a scheme using two narrow-band busy tone signals to ensure medium access for high priority source stations. The simulation results demonstrate the effectiveness of the proposed scheme. | INTRODUCTION
With advances in wireless communications and the growth
of real-time applications, wireless networks that support quality
of service (QoS) have recently drawn a lot of attention.
In order to provide differentiated service to real-time and
non-real-time packets, the medium access control protocol
must provide certain mechanisms to incorporate differentiated
priority scheduling, such that higher priority traffic can
be transmitted in preference to lower priority traffic.
* This research is supported in part by National Science
Foundation grant ANI-9973152.
There are two standards for wireless networks that cover
MAC sub-layer: the European Telecommunications Standards
Institute (ETSI) High Performance European Radio
LAN (HIPERLAN) [4] and the IEEE 802.11 WLAN [6].
HIPERLAN explicitly supports QoS for packet delivery in
wireless LANs. IEEE 802.11 may carry traffic with time-bounded
requirements using PCF (Point Coordination Func-
tion), which needs the coordination of an "Access Point".
Neither of them can provide effective priority scheduling in
ad hoc networks.
By wireless LAN, we mean a network in which all stations (we
use the terms station and node interchangeably) are within each other's transmission range. On the
other hand, in multi-hop networks, two stations that cannot
hear each other may still compete with each other for
the channel due to the "hidden terminal" problem. In such
environments, in addition to local channel information, the
channel status near the neighboring nodes also has to be
considered to ensure priority scheduling.
Another difference between wireless LANs and multi-hop
networks with respect to priority scheduling is that different
flows in multi-hop networks have different degrees of contention.
Here, we define the contention degree for a flow
as the number of flows with which it is competing for the
channel. While each flow competes for the channel with all
other flows in wireless LANs, in multi-hop networks, different
flows may experience different situations depending on
the network topology and flow pattern. For example, in a
multi-hop network, it is possible that flow A has contention
degree of 10 while flow B just competes with flow A. Under
such a circumstance, it might be easier for flow B to access
the channel.
While there is some research related to priority scheduling
in wireless networks [1] [9] [10] [2] [3] [7] [11] [14], most of
these schemes can only work well in a wireless LAN. In this
paper, we propose a scheme using two narrow-band busy
tone signals to achieve effective priority scheduling in ad
hoc networks.
The rest of this paper is organized as follows. Section 2
presents the related work. The problem of priority scheduling
in multi-hop networks is discussed in section 3. Section
4 describes the proposed busy tone priority scheduling
(BTPS) protocol. Performance evaluation is presented in
section 5. Finally, we present our conclusions in section 6.
2. RELATED WORK
Medium Access Control (MAC) protocols that aim to provide
differentiated services should be able to meet requirements
of traffic with different priority classes. If a high priority
flow's traffic pattern satisfies the behavior described
in the service agreement, its packets should be delivered in
preference to other packets with lower priorities. On the
other hand, flows with lower priorities should use as much
bandwidth as possible after the transmission requirements
of higher priority flows have been satisfied.
In general, there are two directions in wireless MAC protocols
to facilitate channel access privilege of high priority
traffic: reservation based schemes and contention based
schemes.
Reservation based schemes usually make some assumptions
about high priority traffic. For example, high priority
traffic is assumed to be periodic with fixed arrival rate. For
reservation based schemes, when resources are reserved but
unused, they are often wasted. A typical example of a reservation
based MAC protocol is GAMA/PS [1]. GAMA-PS
divides time into a sequence of cycles; each cycle begins with
a contention period and ends with a "group-transmission"
period. The group-transmission period is divided into a set
of zero or more individual transmission periods, each for a
station in the "transmission group". A station with data to
send competes for membership in the "transmission group"
during the contention period; also, by listening to the chan-
nel, a group member becomes aware of how many stations
are in the group and of its own position within the group.
In this case, members of the transmission group take turns
transmitting data, and collision is avoided. However, a basic
requirement for this protocol is that each station can hear
the transmissions of other stations, which limits the use of
the protocol to wireless LANs.
The MACA/PR protocol [2] extends the reservation based
scheme to multi-hop networks. The first data packet of
a high priority flow makes reservations along the route to
the destination. Each station maintains a reservation table
(RT) which keeps track of the transmitting and receiving
"reserved windows" for neighbors within a two-hop neigh-
borhood. Low priority sources are only allowed to fill in
empty windows. In order for the reservation scheme to work,
the size of high priority packets must be pre-specified for
each connection, and the size of low priority packets must
be bounded so as not to interfere with the reservation constraints.
Unlike the reservation based schemes, contention based
schemes are probabilistic. Flow scheduling decision is made
locally, and contention is resolved probabilistically. As an
example, reference [9] uses "black burst" to help high priority
flows contend for the channel. After channel becomes
idle, a high priority flow has shorter waiting time before it
transmits the "black burst", other low priority flows which
have longer waiting time will drop out of contention once
they hear the "black burst" during their waiting time. This
scheme thus provides a way for the high priority source stations
in a wireless LAN to reserve the channel by occupying
the channel with "black burst". Reference [10] further generalizes
this scheme to "ad hoc carrier sense multiple access
wireless network", which is defined as a wireless network
without hidden nodes. That is, each source station in such
a network can always sense the possible interfering transmis-
sions. However, this is not the case in most ad hoc networks.
More often, "hidden terminals" do exist in ad hoc networks,
and nodes cannot always sense each other's transmissions.
Thus, the scheme in [10] cannot be applied to general ad
hoc networks.
Several researchers propose some simple modifications to
the IEEE 802.11 Distributed Coordination Function (DCF)
to incorporate differentiated service. IEEE 802.11 DCF defines
a collision avoidance mechanism to resolve contention
among different stations willing to access the medium. Each
station chooses a random number between zero and a given
"Contention Window" as the backoff duration. After sensing
the channel to be idle for a suitable "interframe space"
duration, each station waits until the backoff timer has been
counted down to zero before accessing the channel. A station
freezes its backoff timer if it senses a busy channel, and
then continues to count down the backoff timer when the
channel becomes idle for "interframe space" duration again.
If collisions occur, the colliding stations will exponentially
increase their "Contention Window" by a factor of 2. The
value of "Contention Window" is constrained to be between
CWmin and CWmax . A source station sends a "RTS" (re-
quest to send) first. If it gets a "CTS" (clear to send) back
from the receiver, the data packet will be sent, followed by
an "ACK" from the receiver. In the case that a "RTS" is
not followed by a "CTS", or "Data" is not followed by an
"ACK", collision is assumed to have occurred.
Summarizing, there are two "waiting stages" in IEEE
802.11 before the station accesses the channel.
. The "interframe space"(IFS) stage.
. The backo# stage, whose duration is a random value
between zero and the "Contention Window".
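To make the interaction of these two waiting stages concrete, the following is a minimal sketch (ours, in Python; it is not taken from the standard or from any of the cited papers) of the backoff-stage counting rule described above. The sequence of idle/busy slots is passed in, and the IFS wait is abstracted away.

import random

CW_MIN, CW_MAX = 31, 1023   # assumed 802.11 DSSS contention window bounds

def backoff_countdown(idle_slots, contention_window):
    # Pick a random backoff and count it down over a sequence of channel
    # states (True = idle slot, False = busy slot). The counter is frozen
    # on busy slots, exactly as described for the backoff stage above.
    counter = random.randint(0, contention_window)
    for slot_index, idle in enumerate(idle_slots):
        if idle:
            counter -= 1
            if counter <= 0:
                return slot_index      # the station may transmit in this slot
        # busy slot: counter frozen until the channel is idle again
    return None                        # backoff did not finish in this window

def widen_contention_window(cw):
    # Exponential increase by a factor of 2 after a collision.
    return min(2 * (cw + 1) - 1, CW_MAX)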
In [3], [7], [11] and [14], various schemes have been proposed
to modify the backoff stage so that different priority
source stations use different "Contention Window" generation
functions. For example, [3] proposes that high priority
source stations randomly choose the backoff interval from
[0, 2^(i+2) - 1] and low priority source stations choose from
[2^(i+2), 2^(i+3) - 1], where i is the number of consecutive times
a station attempts to send a packet. [11] proposes to set
different values of CWmin and CWmax for different priority
classes. [7] proposes that instead of using the exponential
factor of 2 after a collision, different priority classes use
different exponential increase factors. Stations with lower
priority increase their "Contention Window" much faster
than the stations with higher priority. One drawback faced
by [3], [11] and [7] is that high priority flows may possibly
experience more collisions compared to their low priority
counterparts in multi-hop networks. As a result, high priority
flows cannot be ensured to have a smaller "Contention Window";
hence, the priority of channel access cannot be
ensured either. In order to adapt better to multi-hop net-
works, in [14], a packet's priority information is piggybacked
in the RTS/CTS/Data/ACK frames. Based on overheard
packets, each station maintains a scheduling table, which
records priority information of flows that are within two-hop
neighborhood. The backo# duration is generated based
on the scheduling table. However, this scheme suffers from
incomplete scheduling table which is caused by collisions,
location dependent errors, node mobility and partially overlapping
transmission regions.
All of the above schemes, which propose to modify the
backoff interval of IEEE 802.11 to incorporate differentiated
service, suffer from one major drawback as described below:
As the backoff timer for a low priority packet is frozen only
when the channel becomes busy, it will continue to count
down each time when the channel becomes idle again. Thus,
eventually a low priority packet that arrived earlier might
have the shortest backoff interval. In such cases, "priority
reversal" occurs in that the low priority packet has a shorter
backoff interval than backlogged high priority packets, and
grabs the channel. An example is illustrated in Figure 1.
Figure 1: Priority reversal
Suppose that two ranges of backoff interval, [0, 15] and
[16, 31], are now used respectively by high priority and low
priority packets. Nodes 1 and 4 have high priority packets
(flows 1, 2) to node 2 while nodes 3 and 5 have low priority
packets for node 2 (flows 3, 4). At time t1, nodes 1, 3 and 5
had packets backlogged with backoff intervals 10, 17 and 18
respectively. Node 1 began its transmission at time t2, so
nodes 3 and 5 froze their backoff counters with the remaining
values of 7 and 8. During node 1's transmission, at time
t3, a high priority packet arrived at node 4. Node 4 chose 9
slots as the backoff interval. When node 1 finished its transmission
at time t4, nodes 4, 3 and 5 began to count down their
backoff intervals after the "interframe space" duration. Hence,
node 4, the high priority source node, had the largest backoff
counter. Consequently, node 4 lost the channel access to
nodes 3 and 5.
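The outcome of this example can be replayed with a few lines of arithmetic (our own sketch; the counter values are the ones quoted above, with 18 for node 5 being consistent with its stated remaining value of 8):

# Backoff counters (in slots) backlogged at time t1.
counters = {"node1": 10, "node3": 17, "node5": 18}
elapsed = counters["node1"]                 # node 1 reaches zero first
remaining = {n: c - elapsed for n, c in counters.items() if n != "node1"}
remaining["node4"] = 9                      # high priority arrival during node 1's transmission
print(remaining)                            # {'node3': 7, 'node5': 8, 'node4': 9}
print(min(remaining, key=remaining.get))    # 'node3': a low priority source wins the channel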
As we mentioned earlier, IEEE 802.11 requires each station
to wait for the channel to be idle for the "interframe space"
duration before counting down the backoff interval.
IEEE 802.11 defines 4 types of IFS, which are used to provide
different priorities for different transmissions. Packets
with shorter IFS have higher priority. SIFS is the minimum
interframe space, which is used to separate transmissions belonging
to a single dialog, i.e., CTS, DATA and ACK trans-
missions, thus giving them highest priority. PIFS is used
by the PCF (Point Coordination Function) to give the Access
Point higher priority over other stations. DIFS is used
by a station willing to start a new transmission. EIFS is
the longest IFS used by a station that has received a packet
that it could not understand; this is needed to prevent the
station from colliding with a future packet belonging to an
on-going dialog.
Unlike IEEE 802.11 Distributed Coordination Function
(DCF) in which all new transmissions use DIFS as the "in-
terframe space(IFS)", [3] and [7] propose that di#erent priority
source stations can apply di#erent IFS. Specifically,
one of the schemes proposed in [7] works in the following
way. Assume that there are two priority classes: one is high
priority and the other is low priority. Then, the IFS of a
low priority flow is defined as the sum of IFS and maximum
contention window of high priority flows. The high priority
packets are constrained to increase their contention window
no larger than the above maximum value. That is, let LIFS
represent the IFS for low priority flows, HIFS represent the
IFS for high priority flows, and Cwh represent the maximum
contention window of high priority flows. Then we have LIFS = HIFS + Cwh.
This scheme sacrifices available network capacity to ensure
the transmissions of high priority flows. Since the entire
IFS duration is enforced before each station can continue to
count down the backoff interval, this scheme avoids the "priority
reversal" problem mentioned earlier. However, there
is a critical trade-off between making full use of bandwidth
and ensuring priority. If the maximum contention window
of high priority flows is constrained to be too small, they will
experience a high degree of contention. On the other hand, if
this parameter is chosen to be too large, significant bandwidth
will be wasted by making low priority flows wait very
long unnecessarily when high priority flows are not backlogged.
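To give a feel for the magnitude of this trade-off, the fragment below plugs in the 802.11 DSSS timing (20 μs slot, 50 μs DIFS) together with the Cwh = 32 slots used for PMAC in Section 5; these specific numbers are illustrative rather than prescribed by [7].

SLOT_US, DIFS_US = 20, 50      # 802.11 DSSS slot time and DIFS
CWH_SLOTS = 32                 # assumed maximum high priority contention window (Cwh)

HIFS_US = DIFS_US                          # IFS of high priority sources
LIFS_US = DIFS_US + CWH_SLOTS * SLOT_US    # IFS of low priority sources = DIFS + Cwh
print(HIFS_US, LIFS_US)                    # 50 vs. 690: idle time a low priority source
                                           # must observe before every single access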
Among several choices of modifying IEEE 802.11 DCF,
[7] shows that the scheme using different IFS for different
priority classes, as described above, works best. For this
reason, this scheme is chosen to be the one that we compare
our scheme's performance with. Considering only two priority
classes, this scheme is implemented in the following way:
high priority flows use DIFS as the IFS. The sum of DIFS
and Cwh (as defined above) is used as the IFS of low priority
flows. Throughout the rest of this paper, this scheme is
called "PMAC"(Priority MAC) for convenience.
3. PRIORITY SCHEDULING IN
MULTI-HOP NETWORKS
We consider two priority classes: high priority and low
priority.

Figure 2: Impact of "hidden terminals" on priority scheduling
3.1 Impact of "Hidden Terminals" on Priority
Scheduling
Consider a very simple three-hop scenario in Figure 2.
Node 0 has high priority packets for node 1 (flow 1) and node
2 has low priority packets for node 3 (flow 2). Flow 1 and
flow 2 conflict with each other since node 2's transmission
will interfere with node 1's reception of any other packets.
When both flows are backlogged, how to ensure the channel
access priority of flow 1?
The scheme proposed in [7], which we refer to as "PMAC"
in section 2, tries to solve this problem by forcing node 2 to
wait for a longer IFS after the channel becomes idle. How-
ever, as we mentioned earlier, there is a critical trade-off
between making full use of bandwidth and ensuring priority.
The key point here is that, when node 0 has a high priority
packet backlogged, node 2 should be aware of that and
defer its transmission; on the other hand, if node 0 is not
backlogged, node 2 should maximize its own throughput.
This objective can be achieved by using two narrow-band
busy tone signals (BT1 and BT2) as proposed in this paper.
The basic idea (as elaborated later) is that whenever a high
priority packet is backlogged at node 0, it will send a BT1
every M slots before it acquires the channel, where M is a
parameter of the proposed scheme. In Figure 2, when node
1 hears this BT1, it will send a BT2. All nodes with low
priority packets that hear either BT1 or BT2 will defer their
transmissions for some duration. In this way, channel access
priority of node 0 can be ensured. Certainly, if there is no
high priority packet backlogged at node 0, node 2 will not
hear any busy tone signal, hence, its channel access will not
be affected at all. The details of this protocol are described
in Section 4.
3.2 Impact of Collisions on Priority
Scheduling
Figure 3: Impact of collisions on priority scheduling

In Figure 3, nodes 0 and 2 have high priority packets for
node 1 while there is a low priority flow from node 3 to node
4. When node 3 transmits to node 4, node 1 cannot receive
any packet from node 0 or 2 during that transmission. Now
suppose the transmissions of nodes 0 and 2 collide at node
1 (this can occur with non-negligible frequency). The time
period in which nodes 0 and 2 detect the collision and resolve
the channel contention could be long. Unless node 3 defers
its transmission during this entire period, nodes 0 and 2 are
likely to lose the channel access to node 3. However, how can
node 3 know that collision occurred between high priority
nodes 0 and 2? Similarly, how can node 3 know that the
contention between nodes 0 and 2 has been resolved and
both of them have finished the transmissions of backlogged
high priority packets?
In multi-hop networks, under severe contention amongst
high priority flows it is a challenge to ensure their priority
over low priority flows. The major difficulty is that every
node can only sense its local channel status. In the example
above, even if nodes 0 and 2 are experiencing continuous
collisions, node 3 still may sense its channel as free and start
its transmission.
The scheme proposed in this paper solves this problem as
follows. During the procedure of channel access of nodes
0 and 2, they will send BT1 signal every M slots until the
packet is sent on the data channel, where M is a parameter
to be set as mentioned earlier. Node 1 will send BT2 after
sensing BT1. If the transmissions of nodes 0 and 2 collide
at node 1, they will detect the collision after some duration,
which is called "CTS-Timeout" in the case of IEEE 802.11
DCF using RTS/CTS handshake. After the collision is de-
tected, the channel access procedure will start once again,
during which BT1 and BT2 will again be sent periodically.
We require low priority source nodes that sense BT1 or BT2
signal to defer their transmissions for the "CTS-Timeout"
duration. This ensures channel access of high priority packets
as elaborated in Section 4.
4. PROPOSED BUSY TONE PRIORITY SCHEDULING (BTPS) PROTOCOL
The scheme proposed in this paper is a contention based
protocol. The proposed scheme makes use of two busy tone
signals, and borrows some mechanisms from IEEE 802.11
DCF. The proposed protocol is called "Busy Tone Priority
Scheduling(BTPS)". We now describe the protocol, followed
by an example in Figure 5.
4.1 Channel Requirement
In the proposed BTPS scheme, two narrow-band busy
tone signals named BT1 and BT2 are used. Reference [8]
previously proposed the use of two busy tone signals to provide
higher network utilization. The work in [8] has a different
objective and different mechanism compared to the
priority scheduling protocol proposed in this paper.
Low priority source stations determine the presence of
high priority packets by sensing the carrier on the busy-tone
channel. According to [5], the time period of 5 μs is sufficient
for the busy tone signal to be detected if 1% of total channel
frequency spectrum 2 is assigned to each busy tone channel
(including guard band). To ensure adequate spectral separation
between two busy tone channels, they can be put
at the two ends of the channel spectrum as Figure 4 shows.
Now, the total available channel bandwidth is divided into
three parts: BT1 channel, Data channel and BT2 channel,
with respective bandwidth percentage of 1%, 98%, and 1%.
The resulting data channel has a bit rate of 1.96 Mbps.
Figure 4: Channel Spectrum Division (BT1 channel, Data channel with 98% of the bandwidth, BT2 channel)
2 The channel frequency spectrum is 22 MHz with IEEE 802.11.
In general, it is hard to require a node to have the capability
of receiving while it is transmitting, or transmitting
to more than one channel at the same time. The proposed
BTPS protocol only requires that stations be able to monitor
the carrier status of the data channel as well as two
busy tone channels while the station is idle and lock onto
the signal on the data channel as desired. Here, a station is
defined to be idle when it is not transmitting to any channel,
and it is not receiving a packet from the data channel. Since
we only need to detect the existence of a busy tone without
decoding it, it should not be difficult for a station to have such
capabilities. Once stations begin to receive from or transmit
to the data channel, the status of busy tone channel can be
ignored. The busy tone channel's sensing threshold is set
the same as data channel's sensing threshold.
4.2 Channel Access Procedure with the use of
dual busy tone
In BTPS, busy tone serves as the indication of backlogged
high priority packets. All packets are transmitted on the
data channel. Each dialog begins with RTS/CTS hand-
shake, followed by the transmissions of Data/ACK packets.
As in IEEE 802.11, each station, before accessing the
channel, needs to wait for the channel to be idle for the
period of "interframe space (IFS)", then enter the backo#
stage. The length of backo# interval is randomly chosen between
zero and the value of "Contention Window". When
collision occurs, the "Contention Window" will be exponentially
increased by the factor of 2. Stations will freeze their
backo# timers once they sense data channel is busy. At
the end of backo# stage, stations are allowed to acquire the
channel. Time is slotted and each unit is called one Slot-
Time.
The difference between IEEE 802.11 Distributed Coordination
Function (DCF) and BTPS is that high priority and
low priority source stations behave differently during "IFS"
and "backoff" stages in BTPS.
. High Priority Source Stations: The DIFS is used as the
interframe space for high priority source stations. During
DIFS and backoff stages, the high priority source
stations send a BT1 pulse (5 μs duration) every M slots.
Between two consecutive busy tone pulse transmissions,
there should be at least one Slot-Time interval
so that these stations have a chance to listen to the data
channel. Therefore, M could be any value that is larger
than 2, depending on the choice of IFS for low priority
source stations. The principle is that the IFS of low
priority stations should be larger than M slots, so that
they can always sense the busy tones before they attempt
to acquire the channel. In our implementation,
M is set to 3.
. Stations that sense BT1: High priority source stations
will disregard BT1. Any other station that senses a BT1
will send a BT2 pulse (5 μs duration) if it is not
receiving a packet from the data channel. It will also
defer its transmission of a low priority packet. Specif-
ically, RTS for a low priority packet is deferred for
"CTS-Timeout" duration after receipt of a BT1. Special
attention also needs to be paid to the transmission
interval of BT2. Between two consecutive BT2 pulses,
there should be at least one Slot-Time interval to make
sure that the stations, which transmit BT2 after sensing
BT1, have a chance to receive packet from data
channel. That is, a station will send BT2 pulse at
most once every two slots.
. Stations that sense BT2: High priority source stations
will disregard BT2. Any other station that senses a BT2
will defer its RTS for a low priority packet for
"CTS-Timeout" duration.
. Low priority source stations: DIFS plus one Slot-Time
is used as the "interframe space" for low priority source
stations. In the case of IEEE 802.11 DSSS [6], DIFS
lasts for two and a half Slot-Times. Since busy tone will
be initiated every three slots by high priority stations,
low priority source nodes that wait for at least three
and a half Slot-Times will sense the busy tone and defer
their transmissions.
4.3 Occupancy of Data Channel using black
burst
During the channel access procedure described above, a
station may transmit BT1 or BT2. However, the same station
could be the receiver of a high priority packet for which
an RTS may be transmitted while it is sending BT1 or BT2.
Since a station cannot receive while it is transmitting, the
high priority packet intended for this station will be missed
during its busy tone transmission. The scenario in Figure 3
can be used to illustrate the situation. After sensing BT1
from node 0, node 1 will send BT2 correspondingly. But
when node 1 is transmitting BT2, node 2, another high priority
source node, could possibly be sending RTS to node
1 on the data channel. Without taking care of such a situ-
ation, node 1 will miss the high priority packet from node
2.
Taking into account several factors including data channel
carrier detection time, turnaround time of stations from
receiving mode to transmitting mode as well as the transmission
time of BT2, BTPS requires that each high priority
source station send a two slot duration of "black burst" before
the transmission of RTS packet on data channel. The
"black burst" is used to occupy the channel. With "black
burst" ahead of useful data, the receivers will either detect
that data channel is busy before turning to transmit busy
tone, or be able to correctly receive packet from data channel
after the transmission of a busy tone.
Now, back to our example. After the transmission of BT2,
node 1 will sense the carrier on data channel and begin to
receive the signal. Because of the two slot duration of "black
burst" ahead of RTS packet, node 1 can still receive the RTS
packet correctly from node 2.
There is no need to add "black burst" before CTS, Data
or ACK packets.
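The per-station behaviour described in Sections 4.2 and 4.3 can be summarized in a few lines of Python-style pseudocode (our own sketch, not the authors' implementation; send_bt1, send_bt2, send_black_burst, send_rts and defer are placeholder callbacks, and the DIFS/backoff counting of Section 4.2 is collapsed into a simple slot loop):

M_SLOTS = 3                  # BT1 is sent every M = 3 slots during DIFS/backoff
BLACK_BURST_SLOTS = 2        # data-channel occupancy sent just before the RTS
CTS_TIMEOUT_US = 355         # low priority deferral after sensing BT1 or BT2

def high_priority_access(wait_slots, send_bt1, send_black_burst, send_rts):
    # During the DIFS + backoff wait, emit a 5 us BT1 pulse every M_SLOTS slots,
    # then occupy the data channel with a black burst before the RTS.
    for slot in range(wait_slots):
        if slot % M_SLOTS == 0:
            send_bt1()
    send_black_burst(BLACK_BURST_SLOTS)
    send_rts()

def on_bt1_sensed(is_high_priority_source, receiving_from_data, send_bt2, defer):
    if is_high_priority_source:
        return                       # high priority sources disregard BT1
    if not receiving_from_data:
        send_bt2()                   # relay a 5 us BT2 pulse
    defer(CTS_TIMEOUT_US)            # postpone any pending low priority RTS

def on_bt2_sensed(is_high_priority_source, defer):
    if not is_high_priority_source:
        defer(CTS_TIMEOUT_US)        # low priority RTS deferred again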
4.4 Summary of BTPS protocol
The behavior of the BTPS protocol is summarized in Figure
5. The high priority source station in Figure 5(a) will
send BT1 every 3 slots during DIFS and backoff stage. Once
the backoff counter is counted down to zero and the channel
is idle, a "black burst" which lasts two Slot-Time long will
be sent first, followed by a RTS packet. After getting a CTS
reply, the data packet will be sent, followed by the reception
of an ACK. From the point of sending "black burst" to
the time of receiving the ACK, no busy tone signal will be
transmitted. Any other station that senses BT1, as shown
in Figure 5(b), will transmit BT2 provided that it is not receiving from the data channel.

Figure 5: Behavior of BTPS protocol. One Slot-Time is 20 μs. The duration between consecutive ticks, shown as short bars in the figure, is 10 μs. Black boxes represent received signal, and white boxes represent transmitted signal. Figure (a) shows the behavior of a high priority source station which has a packet backlogged. Figure (b) shows behavior of stations that sense BT1, while figure (c) shows behavior of stations that sense BT2. After BT1 or BT2 is sensed, transmission of a low priority packet is deferred for the CTS-Timeout duration (355 μs). Note: RTS, CTS, Data and ACK are not represented in their actual lengths due to space limitation.

Each time stations sense a
busy tone (BT1 or BT2), the transmissions of low priority
packets will be deferred for "CTS-Timeout" duration, which
is shown in Figure 5(b) and Figure 5(c).
5. PERFORMANCE EVALUATION
In this section, simulation results are presented to demonstrate
the effectiveness of the proposed BTPS protocol. The
simulation results for PMAC [7] are also shown for comparison.
Recall that PMAC is a modified version of IEEE
802.11 DCF, which chooses IFS for low and high priority
flows differently to attempt to achieve priority scheduling.
As we mentioned earlier, Cwh is a critical parameter for
PMAC, and it is difficult to adapt this parameter to dynamic
network situations. However, in the simulation, being
aware of the number of high priority flows and traffic load,
we try to choose a suitable value to demonstrate a reasonable
performance for PMAC. In some scenarios, results of
IEEE 802.11 DCF are also presented to show the baseline.
The performance metrics we use include "Delivery Ratio of
High Priority Packets", which is the ratio of high priority
flows' throughput over their sending rate; and "Aggregate
Throughput", which is the aggregate throughput of all high
and low priority flows. For PMAC [7], a higher value of Cwh
improves the first metric; but degrades aggregate through-
put, and vice versa. Our scheme can improve on PMAC
with respect to both metrics.
5.1 Simulation Model
All the simulation results are based on a modified version
of ns-2 network simulator from USC/ISI/LBNL [13],
with wireless extensions from the CMU Monarch project
[12]. The extensions provide a wireless protocol stack including
IEEE 802.11. The radio interface model approximates
the first generation WaveLan radio interface with 2
Mbps bit rate and 250 meter transmission range using omnidirectional
antenna. The traffic sources are chosen to be
constant bit rate (CBR) sources using packet size of 512
bytes. Cwh for PMAC is set to 32 slots. The simulation
results are averages over runs, and each simulation run
is for 6 second duration.
Since our objective is to demonstrate MAC protocol's performance
to deliver high priority packets, mobile situations
are not simulated here. However, the behavior of BTPS
protocol itself will not be impacted by mobility.
5.2 Scenario 1
In this scenario, 24 nodes are arranged in a 4 x 6 grid with
a grid spacing of 200 meters. The flow pattern is as shown
in Figure 6. Figure 7 plots the flows' conflict graph. The
conflict graph is defined as G = (V, E), in which V is the set
of all flows, and an edge (f_i, f_j) belongs to E if and only if
Table 1: The high priority flows in scenario 1
Number of high priority flows | High priority flow IDs
1 | flow 4
3 | flows 4, 5, 6
4 | flows 4, 5, 6, 8
5 | flows 4, 5, 6, 7, 8
6 | flows 4, 5, 6, 7, 8, 9
flows f_i and f_j conflict with each other (i.e., they cannot be scheduled
simultaneously). Among all flows, flows 5 and 8
have the highest contention degree, while flows 1, 3, 10, 12
have the lowest contention degree.
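Contention degree is simply the vertex degree in this conflict graph. The snippet below (ours) computes it for the handful of Figure 7 edges that are spelled out later in the text (flow 4 conflicting with flows 1, 5, 7 and flow 5 with flows 2, 4, 6, 8); the complete graph contains additional edges that we do not attempt to reconstruct here.

from collections import defaultdict

# Partial edge set of the conflict graph of Figure 7 (only the edges
# explicitly mentioned in Section 5.2).
edges = [(4, 1), (4, 5), (4, 7), (5, 2), (5, 6), (5, 8)]

contention_degree = defaultdict(int)
for a, b in edges:
    contention_degree[a] += 1
    contention_degree[b] += 1

print(dict(contention_degree))   # flow 5 already has degree 4 in this partial graph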
Figure 6: Network topology of scenario 1
Figure 7: Conflict graph for flows in scenario 1
The number of high priority flows is increased from 0 to
6 in our simulations. The corresponding high priority flows
for each case are given in Table 1. The traffic sending rate
for each high priority flow in each case is 180 Kbps, while
all remaining low priority flows have aggressive sending rate
of 1500 Kbps.
Figure 8 plots the delivery ratio of high priority packets
versus the number of high priority flows. The proposed
BTPS protocol can deliver most of the high priority packets
in each case, while the delivery ratio of PMAC [7] begins
to drop when the number of high priority flows is 3. When
there are six high priority flows, the performance gap between
BTPS and PMAC in terms of high priority packets'
delivery ratio is 12.6%. Because IEEE 802.11 DCF does not
provide differentiated service and the high priority flows simulated
have higher contention degree, IEEE 802.11 delivers
very few high priority packets.

Figure 8: Delivery ratio of high priority packets: comparison between BTPS, PMAC and IEEE 802.11

Figure 9 presents the aggregate throughput for BTPS,
PMAC and IEEE 802.11. When all flows are low priority
(i.e., number of high priority flows is 0), the aggregate
throughput achieved by PMAC is only 83.4% of that
achieved by IEEE 802.11. In PMAC, for each packet's trans-
mission, the waiting time of low priority source nodes in
"interframe space" stage is slots more than the
corresponding waiting time in IEEE 802.11. This causes
the 16.6% loss of aggregate throughput. Furthermore, the
larger the value of Cwh, the more is the loss in aggregate
throughput. If we reduce the value of Cwh, the throughput
loss can be smaller, but the delivery ratio for high priority
packets would be worse. On the other hand, the aggregate
throughput achieved by BTPS is 97.4% of that achieved by
802.11. The loss of throughput is mainly caused by the 2%
bandwidth given to busy tone channels in BTPS.
When there are high priority flows, IEEE 802.11 schedules
a different set of flows compared to priority scheduling
protocols BTPS and PMAC, hence achieves much more
throughput at the cost of starving high priority flows. The
situation is elaborated below.
For the scenario with six high priority flows, we plot each
individual flow's throughput for BTPS, PMAC and IEEE
802.11 DCF in Figure 10. The highest throughput in this
situation can be achieved by scheduling flows 1, 3, 10, and
12 at all times since they have the lowest contention degree
and most aggressive sending rate. However, this maximum
throughput is achieved at the cost of starving other flows,
particularly, the high priority flows 4, 5, 6, 7, 8 and 9. From
the results shown in Figure 10, IEEE 802.11 DCF performs
in this way and achieves the highest aggregate throughput
of 4820 Kbps 3 . On the other hand, BTPS and PMAC give
channel access preference to high priority flows, but at the
cost of decreased aggregate throughput. BTPS achieves aggregate
throughput of 2645 Kbps and PMAC achieves 2311
3 Recall that our proposed scheme can achieve the aggregate
throughput comparable to IEEE 802.11 when there are no
high priority flows.
Figure 9: Aggregate throughput comparison between BTPS, PMAC and IEEE 802.11
Kbps. The reason why both BTPS and PMAC lose throughput
in comparison with IEEE 802.11 is because high priority
flows have higher contention degree in the simulated sce-
nario. As shown in Figure 7, for example, when flow 4 is
transmitting on the data channel, flows 1, 5, 7 cannot be
scheduled. Similarly, when flow 5 is transmitting, flows 2,
4, 6, 8 cannot use the channel either.
Note the two high priority flows (flows 5 and 8) with the
highest contention degree. PMAC just delivers 61% of packets
for flow 5, and 57.3% for flow 8 compared to proposed
BTPS. PMAC is unable to deliver many high priority packets
due to contention among the high priority flows. With
PMAC, the problem illustrated in section 3.2 occurs often,
resulting in low priority traffic gaining channel access instead
of high priority traffic. Thus, PMAC [7] delivers more
low priority packets from flows 2 and 11 but fewer high priority
packets from flows 5 and 8 than the proposed BTPS
protocol.

Figure 10: Throughput of each flow in scenario 1 with six high priority flows
5.3 Scenario 2: Random Topology
We generate eight random topologies in a 1000m x 1000m
area. The total number of nodes in this area is increased
Table 2: The number of high priority flows in random topologies
Num. of nodes | Total num. of flows | Num. of high priority flows
50 | 43 | 22
from 10 to 80 with a step size of 10, and flows are randomly
chosen between two nodes which are one hop away. Among
all flows, half are high priority flows with sending rate of
120 Kbps, the remaining low priority flows have aggressive
sending rate of 1500 Kbps. Table 2 shows the number of
high priority flows for each simulated topology.
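A sketch of how such random scenarios can be generated follows (our own reconstruction of the stated methodology, not the authors' scripts; the 250 m range comes from the radio model of Section 5.1, and the number of flows per topology is only approximated):

import math
import random

AREA_M, RANGE_M = 1000.0, 250.0            # 1000m x 1000m field, 250 m radio range

def random_scenario(num_nodes, num_flows, rng=random):
    nodes = [(rng.uniform(0, AREA_M), rng.uniform(0, AREA_M)) for _ in range(num_nodes)]
    # candidate flows: pairs of distinct nodes that are one hop apart
    candidates = [(i, j) for i in range(num_nodes) for j in range(num_nodes)
                  if i != j and math.dist(nodes[i], nodes[j]) <= RANGE_M]
    rng.shuffle(candidates)
    flows = candidates[:num_flows]
    half = len(flows) // 2                 # half of the flows are high priority
    high = [{"src": s, "dst": d, "rate_kbps": 120} for s, d in flows[:half]]
    low = [{"src": s, "dst": d, "rate_kbps": 1500} for s, d in flows[half:]]
    return nodes, high, low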
The delivery ratio of high priority packets is shown in
Figure 11, from which we can see that BTPS delivers more
high priority packets than PMAC in most cases. Only when
there are only 10 or 20 nodes and the corresponding numbers
of high priority flows are 4 or 7 respectively, does PMAC
deliver as many high priority packets as BTPS. In the case
of 80 nodes, the delivery ratio difference between BTPS and
PMAC reaches 20.5%. The simulation results demonstrate
that severe contention among high priority flows will cause
significant performance degradation with PMAC.

Figure 11: Delivery ratio of high priority packets in random scenarios: comparison between BTPS and PMAC

Figure 12 presents the aggregate throughput comparison
between BTPS and PMAC for the generated random sce-
narios. With an increase in the number of flows in the
1000m x 1000m area, the contention degree for each flow
tends to become higher. BTPS ensures high priority packets'
delivery first, then low priority packets use as much
bandwidth as possible after satisfying requirements of the
high priority flows. On the other hand, PMAC lacks the
capability to resolve contention among high priority flows
efficiently under situations with high degree of contention;
also the low priority packets cannot make full use of available
bandwidth due to larger "interframe space" duration. For
these reasons, it is not surprising that the proposed BTPS
protocol provides higher aggregate throughput than PMAC
[7]. In the case of 80 nodes, BTPS gains 51.5% aggregate
throughput over PMAC.

Figure 12: Aggregate throughput comparison between BTPS and PMAC in random scenarios
6. CONCLUSION
We present a priority scheduling MAC protocol (BTPS)
for ad hoc networks. With the use of two narrow-band busy
tone signals, BTPS ensures channel access of high priority
packets. Furthermore, in the absence of high priority
packets, low priority flows can make full use of available
bandwidth in BTPS. Simulation results demonstrate the effectiveness
of BTPS protocol with respect to "delivery ratio
of high priority packets" and "aggregate throughput".
7.
--R
A Priority Scheme for IEEE 802.11 DCF Access Method.
Radio Equipment and Systems(RES)
Packet Switching in Radio Channels: Part II- The Hidden Terminal Problem in Carrier Sense Multiple-Access and the Busy-Tone Solution
Dual busy tone multiple access(DBTMA): A new medium access control for packet radio networks.
Distributed Control Algorithms for Service Differentiation.
The CMU Monarch Project.
VINT Group.
--TR
An efficient packet sensing MAC protocol for wireless networks
Real-time support in multihop wireless networks
Distributed multi-hop scheduling and medium access with delay and throughput constraints
--CTR
Xue Yang , Nitin Vaidya, Priority scheduling in wireless ad hoc networks, Wireless Networks, v.12 n.3, p.273-286, May 2006
Yang Xiao , Yi Pan, Differentiation, QoS Guarantee, and Optimization for Real-Time Traffic over One-Hop Ad Hoc Networks, IEEE Transactions on Parallel and Distributed Systems, v.16 n.6, p.538-549, June 2005
Marco Caccamo , Lynn Y. Zhang, The capacity of an implicit prioritized access protocol in wireless sensor networks, Journal of Embedded Computing, v.1 n.2, p.195-207, April 2005
Kamal Jain , Jitendra Padhye , Venkata N. Padmanabhan , Lili Qiu, Impact of interference on multi-hop wireless network performance, Proceedings of the 9th annual international conference on Mobile computing and networking, September 14-19, 2003, San Diego, CA, USA
Luciano Bononi , Luca Budriesi , Danilo Blasi , Vincenzo Cacace , Luca Casone , Salvatore Rotolo, A differentiated distributed coordination function MAC protocol for cluster-based wireless ad hoc networks, Proceedings of the 1st ACM international workshop on Performance evaluation of wireless ad hoc, sensor, and ubiquitous networks, October 04-04, 2004, Venezia, Italy
Ming Li , B. Prabhakaran, MAC Layer admission control and priority re-allocation for handling QoS guarantees in non-cooperative wireless LANs, Mobile Networks and Applications, v.10 n.6, p.947-959, December 2005
Kamal Jain , Jitendra Padhye , Venkata N. Padmanabhan , Lili Qiu, Impact of interference on multi-hop wireless network performance, Wireless Networks, v.11 n.4, p.471-487, July 2005
J. Jobin , Michalis Faloutsos , Satish K. Tripathi, The case for a systematic approach to wireless mobile network simulation, Journal of High Speed Networks, v.14 n.3, p.243-262, July 2005
Thomas Kunz, Multicasting in mobile ad-hoc networks: achieving high packet delivery ratios, Proceedings of the conference of the Centre for Advanced Studies on Collaborative research, p.156-170, October 06-09, 2003, Toronto, Ontario, Canada
Luciano Bononi , Marco Di Felice , Lorenzo Donatiello , Danilo Blasi , Vincenzo Cacace , Luca Casone , Salvatore Rotolo, Design and performance evaluation of cross layered MAC and clustering solutions for wireless ad hoc networks, Performance Evaluation, v.63 n.11, p.1051-1073, November 2006 | ad hoc network;busy tone;medium access control;priority scheduling |
513818 | Minimum energy paths for reliable communication in multi-hop wireless networks. | Current algorithms for minimum-energy routing in wireless networks typically select minimum-cost multi-hop paths. In scenarios where the transmission power is fixed, each link has the same cost and the minimum-hop path is selected. In situations where the transmission power can be varied with the distance of the link, the link cost is higher for longer hops; the energy-aware routing algorithms select a path with a large number of small-distance hops. In this paper, we argue that such a formulation based solely on the energy spent in a single transmission is misleading --- the proper metric should include the total energy (including that expended for any retransmissions necessary) spent in reliably delivering the packet to its final destination.We first study how link error rates affect this retransmission-aware metric, and how it leads to an efficient choice between a path with a large number of short-distance hops and another with a smaller number of large-distance hops. Such studies motivate the definition of a link cost that is a function of both the energy required for a single transmission attempt across the link and the link error rate. This cost function captures the cumulative energy expended in reliable data transfer, for both reliable and unreliable link layers. Finally, through detailed simulations, we show that our schemes can lead to upto 30-70% energy savings over best known current schemes, under realistic environments. | INTRODUCTION
Multi-hop wireless networks typically possess two important characteristics:
1. The battery power available on the constituent lightweight mobile
nodes (such as sensor nodes or smart-phones) is relatively
limited.
2. Communication costs (in terms of transmission energy required)
are often much higher than computing costs (on individual
devices).
Energy-aware routing protocols (e.g., [14, 13, 1]) for such networks
typically select routes that minimize the total transmission power
over all the nodes in the selected path.
In constant-power scenarios, where the transmission power of a
node is chosen independent of the distance of the link, conventional
routing [9, 11] will be most energy efficient when the
links are error free. In alternative variable-power scenarios, where
the nodes can dynamically vary their transmitter power levels, the
chosen transmission power depends on the distance between the transmitter
and receiver nodes. For wireless links, a signal transmitted
with power P_t over a link with distance D gets attenuated and is
received with power
P_r ∝ P_t / D^K,
where K is a constant that depends on the propagation medium and
antenna characteristics 1 . Therefore, the transmission power for these
links is chosen proportional to D^K. In these scenarios, proposals
for energy-efficient routing protocols (e.g., [14, 6]) typically aim to
choose a path with a large number of small-range hops, since they
consume less power than an alternative route that has a smaller number
of hops, but a larger distance for individual hops. In general,
most formulations for computing energy efficient paths employ algorithms
for computing minimum-cost paths, with the link metric
determined by the energy required to transmit a single packet over
that link. Setting this link cost to 1 (and thus computing minimum
hop paths) suffices in constant-power scenarios, since the transmission
energy is the same for all links.
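The appeal of many short hops in the variable-power case follows directly from the D^K attenuation just described: splitting a distance D into n equal hops cuts the single-attempt transmission energy by roughly a factor of n^(K-1). A small sketch (ours) makes this explicit; note that it deliberately ignores link error rates, which is precisely the omission this paper argues against.

K = 2.0                        # path-loss exponent (about 2 for short links, up to 4 for long ones)

def hop_energy(distance, k=K, c=1.0):
    # Energy for one transmission attempt over a single hop, up to a constant c.
    return c * distance ** k

def path_energy(total_distance, num_hops, k=K):
    # Total single-attempt energy when the distance is split into equal hops.
    return num_hops * hop_energy(total_distance / num_hops, k)

print(path_energy(100.0, 1), path_energy(100.0, 5))   # 10000.0 vs 2000.0 for K = 2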
In this paper, we discuss why such a formulation of the link cost
fails to capture the actual energy spent in reliable packet delivery -
a more accurate formulation needs to consider the link error rates
to account for the potential cost of retransmissions needed for reliable
packet delivery. Wireless links typically employ link-layer
frame recovery mechanisms (e.g., link-layer retransmissions, or forward
error correcting codes) to recover from packet losses.
1 K is typically around 2 for short distances and omni-directional
antennae, and around 4 for longer distances.
Additionally, protocols such as TCP or SCTP employ additional source-initiated
retransmission mechanisms to ensure a reliable transport
layer. Therefore, the energy cost associated with a candidate path
should thus reflect not merely the energy spent in just transmitting
a single packet across the link, but rather the "total effective energy"
spent in packet delivery, which includes the energy spent in potential
retransmissions as well 2 .
We first consider how the error rate of individual links affects the
overall number of transmissions needed to ensure reliable packet
delivery. Such an analysis helps to clearly delineate how the energy
associated with the reliable delivery of a packet differs from the energy
associated with simply transmitting the packet. As part of this
analysis, we consider two different operating models:
a) End-to-End Retransmissions (EER): where the individual
links do not provide link-layer retransmissions and error recovery-
reliable packet transfer is achieved only via retransmissions
initiated by the source node.
b) Hop-by-Hop Retransmissions (HHR): where each individual
link provides reliable forwarding to the next hop using localized
packet retransmissions.
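Before the detailed analysis, it is worth noting the standard expected-retransmission counts under independent per-attempt packet errors; the sketch below (ours, with p_i denoting the packet error rate of hop i) uses the exact geometric expectation for HHR and a common simplification for EER in which every end-to-end attempt is charged the full hop count.

def hhr_expected_transmissions(error_rates):
    # HHR: each hop retransmits locally, so hop i needs 1/(1 - p_i) attempts
    # on average; the path total is the sum over hops.
    return sum(1.0 / (1.0 - p) for p in error_rates)

def eer_expected_transmissions(error_rates):
    # EER: an end-to-end attempt succeeds only if every hop succeeds, so
    # roughly hops / prod(1 - p_i) transmissions are needed (an approximation
    # that charges each failed attempt the full path length).
    success = 1.0
    for p in error_rates:
        success *= (1.0 - p)
    return len(error_rates) / success

print(hhr_expected_transmissions([0.1, 0.1, 0.1]))   # ~3.33
print(eer_expected_transmissions([0.1, 0.1, 0.1]))   # ~4.12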
We shall see that, in both cases, it is important to consider the
link's error rate as part of the route selection algorithm, since the
choice of links with relatively high error rates can significantly increase
the effective energy spent in reliably transmitting a single
packet. This is true in both the constant-power and variable-power
scenarios - in either scenario, ignoring the error rate of the link
leads to the selection of paths with high error rates and consequently,
high retransmission overhead. The analysis of the effects of link
error rates on the effective energy consumption is more interesting
for the variable-power case: we shall see that the choice between
a path with many short-range hops and another with fewer long-range
hops is non-trivial, but involves a tradeoff between the reduction
in the transmission energy for a single packet and the potential
increase in the frequency of retransmissions. Our analysis
of the variable-power scenarios shows that schemes which consider
the link-error rates would perform significantly better than currently
proposed minimum-energy routing protocols, which do not.
We then study how routing algorithms can be used to minimize
our new objective function: the energy required to reliably transmit
a packet to the destination, the effective transmission energy. Since
most decentralized ad-hoc routing protocols (e.g., AODV [12], DSR [7])
attempt, at least approximately, to select a minimum-cost path (where
the path cost is a sum of the individual link costs), we define a new
link cost as a function of both the link distance and the link error
rate. We shall show that such a link cost can be exactly defined to
obtain optimal solutions only for the HHR scenario; for the EER
framework, we can only devise an approximate cost function. By
using simulation studies, we also demonstrate how the choice of
parameters in the approximate EER cost formulation represents a
tradeoff between energy efficiency and the achieved throughput.
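For the HHR case this amounts to running an ordinary shortest-path computation with each link weighted by its expected reliable-delivery energy. The sketch below (ours) assumes a link cost of the form E_{i,j} / (1 - p_{i,j}); the exact cost functions, including the approximate EER variant, are developed later in the paper.

import heapq

def min_energy_reliable_path(graph, src, dst):
    # graph[u] = list of (v, energy_per_attempt, packet_error_rate) edges.
    # HHR link cost = energy per attempt x expected number of transmissions.
    # Assumes dst is reachable from src.
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, energy, p in graph.get(u, []):
            cost = d + energy / (1.0 - p)
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]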
While the link quality has been previously suggested as a routing
metric to reduce queuing delays and loss rates, its implicit effect on
the energy efficiency has not been studied before. By incorporating
the link error rates in the link cost, energy savings of 30% to 70%
can often be achieved under realistic operating conditions.
2 This is especially relevant in multi-hop wireless networks, where
variable channel conditions often cause packet error rates as high as
The rest of the paper is organized as follows. Section 2 provides
an overview of previous related work. Section 3 formulates the effective
transmission energy problem as a function of the number of
hops, and the error rates of each hop, for both the EER and HHR
case and analyses its effect on the optimum number of hops in the
variable-power scenario. It also demonstrates the agreementbetween
our idealized energy computation and real TCP behavior. Section
4 shows how to form link costs that lead to the selection of minimum
effective energy paths. In Section 5 we present the results
of our simulation studies on certain ad-hoc topologies, for both the
fixed-power and the variable-power scenarios. Finally, Section 6
concludes the paper.
2. RELATED WORK
Metrics used by conventional routing protocols for the wired Internet
typically do not need to consider any energy-related parame-
ters. Thus, RIP [9] uses hop count as the sole route quality metric,
thereby selecting minimum-hop paths between the source and des-
tinations. OSPF [11], on the other hand, can support additional link
metrics such as available bandwidth, link propagation delay etc.-
there is, however, no well-defined support for using link-error rates
as a metric in computing the shortest cost path. Clearly, in fixed-
power scenarios, the minimum-hop path would also correspond to
the path that uses the minimum total energy for a single transmission
of a packet.
In contrast, energy-aware routing protocols for variable-power scenarios
aim to directly minimize the total power consumed over the
entire transmission path. PAMAS [14] is one such minimum total
transmission energy protocol, where the link cost was set to the
transmission power and Dijkstra's shortest path algorithm was used
to compute the path that uses the smallest cumulative energy. In the
case where nodes can dynamically adjust their power based on the
link distance, such a formulation often leads to the formation of a
path with a large number of hops. A link cost that includes the receiver
power as well is presented in [13]. By using a modified form
of the Bellman-Ford algorithm, this approach resulted in the selection
of paths with smaller number of hops than PAMAS.
In contrast to the routing protocols for the wired Internet, the routing
protocols for wireless ad-hoc environments (e.g. AODV, DSR)
contain special features to reduce the signaling overheads and con-
vergence problems caused by node mobility and link failures. While
these protocols do not necessarily compute the absolute minimum-cost
path, they do aim to select paths that have lower cost (in terms
of metrics such as hop count or delay). Such protocols can in principle
be adapted to yield energy-efficient paths simply by setting the
link metric to be a function of the transmission energy. In contrast,
other ad-hoc routing protocols have been designed specifically to
minimize transmission energy cost. For example, the Power-Aware
Route Optimization (PARO) algorithm [6, 5] is designed for scenarios
where the nodes can dynamically adjust their transmission
power - PARO attempts to generate a path with a larger number
of short-distance hops. According to the PARO protocol, a candidate
intermediary node monitors an ongoing direct communication
between two nodes and evaluates the potential for power savings by
inserting itself in the forwarding path- in effect, replacing the direct
hop between the two nodes by two smaller hops through itself. For
any frequency-hopping wireless link, Gass et al. [4] have proposed
a transmission power adaptation scheme to control the link quality.
Researchers in energy-aware routing have also considered other
objective functions, besides the one of minimum total energy. One
alternative approach considers the battery capacity of individual nodes;
such battery-aware routing algorithms typically aim to extend the
lifetime of the network by distributing the transmission paths among
nodes that currently possess greater battery resources. Such algorithms
are based on the observation that minimum-energy routes can
often unfairly penalize a subset of the nodes, e.g., if several minimum
energy routes have a common node in the path, the battery of
that node will be exhausted quickly. Among such battery-aware al-
gorithms, [15] formulated a node metric, where the cost of each node was a decreasing function of the residual battery capacity. A
minimum cost path selection algorithm then helps to steer routes
away from paths where many of the intermediate nodes are facing
battery exhaustion. Since this mechanism could still lead to the
choice of a path having a node that was nearing exhaustion (espe-
cially if the other nodes on the path had high residual capacity), the
basic MMBCR algorithm and its CMMBCR variant [16] formulate
path selection as a min-max problem. In this approach, the capacity
of a route is defined as the battery level of the critical (most drained)
node; the algorithm then selects the path with the highest capacity.
All these protocols and algorithms do not, however, consider the
effect of the link error rates on the overall number of retransmis-
sions, and thus the energy needed for reliable packet delivery. Our
problem formulation and routing solution implicitly assume that each node in the ad-hoc network is aware of the packet error rate on
its outgoing links. Sensing the channel noise conditions can be done
either at the link layer, a capability that is built into most commercial
wireless 802.11 interfaces available today, or through higher layer
mechanisms such as periodic packet probes or aggregated packet reception
reports from the receiver 3 .
3. ENERGY COST ANALYSIS
In this section, we demonstrate how the error rate associated with
a link affects a) the overall probability of reliable delivery, and con-
sequently, b) the energy associated with the reliable transmission of
a single packet. For any particular link (i; j) between a transmitting
node i and a receiving node j, let T_{i,j} denote the transmission power and p_{i,j} represent the packet error probability. Assuming that all packets are of a constant size, the energy involved in a packet transmission is simply a fixed multiple of T_{i,j}.
Any signal transmitted over a wireless medium experiences two
different effects: attenuation due to the medium, and interference
with ambient noise at the receiver. Due to the characteristics of the
wireless medium, the transmitted signal suffers an attenuation proportional
to D^K, where D is the distance between the receiver and the transmitter. The ambient noise at the receiver is independent of the distance between the source and the receiver, and depends purely
on the operating conditions at the receiver. The bit error rate associated
with a particular link is essentially a function of the ratio of this
received signal power to the ambient noise. In the constant-power
scenario, T_{i,j} is independent of the characteristics of the link (i, j) and is essentially a constant. In this case, a receiver located farther
away from a transmitter will suffer greater signal attenuation (pro-
portional to D^K) and will, accordingly, be subject to a larger bit-error
rate. In the variable-power scenario, a transmitter node essentially
adjusts T_{i,j} to ensure that the strength of the (attenuated) signal received by the receiver is independent of D and is above a certain threshold level Th. Accordingly, the optimal transmission power
associated with a link of distance D in the variable-power scenario
is given by:
T_opt(D) = γ · Th · D^K        (2)
where γ is a proportionality constant and K is the coefficient of attenuation (K ≥ 2). Since Th is typically a technology-specific constant, we can see that the optimal transmission energy over such a link varies as:
E_opt(D) ∝ D^K        (3)
3 Similar ideas were proposed for link sensing in the Internet MANET Encapsulation Protocol [2].
It is now easy to understand, at least qualitatively, the impact of
neglecting the link error rates while determining a specific path between
the source and destination nodes. For the fixed-power case,
the minimum-hop path may not be the most energy-efficient, since
an alternative path with more hops may prove to be better if its over-all
error rate is sufficiently low. Similarly, for the variable-power
case, a path with a greater number of smaller hops may not always
be better; the savings achieved in the individual transmission energies
(given by Equation (3)) may be nullified by a larger increase in
link errors and consequently retransmissions.
We now analyze the interesting consequences of this behavior for
the variable-power scenario (for both the EER and HHR cases); we
omit the analysis for the fixed-power scenario which is simpler, and
a special case of our ensuing analysis.
3.1 Optimal Routes in EER Case
In the EER case, a transmission error on any link leads to an end-
to-end retransmission over the path. Given the variable-power formulation
of E_opt in Equation (3), it is easy to see why placing an intermediate node along the straight line between two adjacent nodes (breaking up a link of distance D into two shorter links of distance D1 and D2 such that D1 + D2 = D) reduces the total E_opt.
In fact, PARO works using precisely such an estimation. From a reliable
transmission energy perspective, such a comparison is, how-
ever, inadequate since it does not include the effect on the overall
probability of error-free reception.
To understand the energy-tradeoff involved in choosing a path
with multiple short hops over one with a single long hop, consider
communication between a sender (S) and a receiver (R) located at
a distance D. Let N represent the total number of hops between S
and R, so that N - 1 represents the number of forwarding nodes between the end-points. For notational ease, let the nodes be indexed i = 1, ..., N + 1, with node i referring to the (i - 1)th intermediate hop in the forwarding path; also, node 1 refers to S and node N + 1 refers to R. In this case, the total optimal energy spent
in simply transmitting a packet once (without considering whether
or not the packet was reliably received) from the sender to the receiver
over the forwarding nodes is:
E_total = Σ_{i=1}^{N} E_opt(D_{i,i+1})
or, on using Equation (3),
E_total = Σ_{i=1}^{N} α · D_{i,i+1}^K
where D_{i,j} refers to the distance between nodes i and j and α is a proportionality constant. To understand the transmission energy characteristics associated with the choice of intermediate nodes, we compute the lowest possible value of E_total for any given layout of N - 1 nodes. Using very simple optimality arguments, it is easy to see that the minimum transmission energy case occurs when each of the hops is of equal length D/N. In that case, E_total is given by:
E_total = N · α · (D/N)^K = α · D^K / N^{K-1}        (6)
For computing the energy spent in reliable delivery, we now consider
how the choice of N affects the probability of transmission errors and the consequent need for retransmissions. Clearly, increasing the number of intermediate hops increases the likelihood of transmission errors over the entire path.
Assuming that each of the N links has an independent packet error rate of p_link, the probability of a transmission error over the entire path, denoted by p, is given by
p = 1 - (1 - p_link)^N
The number of transmissions (including retransmissions) necessary to ensure the successful transfer of a packet between S and D is then a geometrically distributed random variable X, such that
Prob{X = k} = p^{k-1} · (1 - p),  k = 1, 2, ...
The mean number of individual packet transmissions for the successful transfer of a single packet is thus 1/(1 - p). Since each such transmission uses total energy E_total given by Equation (6), the total expected
energy required in the reliable transmission of a single packet
is given by:
E^{EER}_{total rel} = α · D^K / ( N^{K-1} · (1 - p_link)^N )        (8)
The equation clearly demonstrates the effect of increasing N on the total energy necessary; while the term N^{K-1} in the denominator increases with N, the error-related term (1 - p_link)^N decreases with N. By treating N as a continuous variable and taking derivatives, it is easy to see that the optimal value of the number of hops, N_opt, is given by:
N_opt = -(K - 1) / ln(1 - p_link)
Thus a larger value of plink corresponds to a smaller value for the
optimal number of intermediate forwarding nodes. Also, the optimal
value for N increases linearly with the attenuation coefficient
K . There is thus clearly an optimal value of N ; while lower values
of N do not exploit the potential reduction in the transmission en-
ergy, higher values of N cause the overhead of retransmissions to
dominate the total energy budget.
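The EER trade-off above is easy to reproduce numerically. The sketch below is only an illustrative calculation under the idealized model of Equations (6) and (8) (α = 1 and D = 10 are arbitrary scaling constants, as in Figure 1); it also evaluates the continuous optimum N_opt = -(K-1)/ln(1-p_link) derived above.

```python
import math

def eer_energy(n_hops, p_link, D=10.0, K=2.0, alpha=1.0):
    """Expected energy for reliable end-to-end delivery with N equal hops (Equation (8))."""
    per_attempt = alpha * D ** K / n_hops ** (K - 1)   # Equation (6)
    p_success = (1 - p_link) ** n_hops                 # the whole path must be error-free
    return per_attempt / p_success

def n_opt(p_link, K=2.0):
    """Continuous optimum of the hop count for the EER case (natural log)."""
    return -(K - 1) / math.log(1 - p_link)

for p_link in (0.01, 0.1, 0.2):
    # Discrete search is capped at 20 hops, so very low error rates hit the cap.
    best_n = min(range(1, 21), key=lambda n: eer_energy(n, p_link))
    print(f"p_link={p_link}: best N in 1..20 = {best_n}, continuous N_opt = {n_opt(p_link):.2f}")
```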
To study these tradeoffs graphically, we plot E^{EER}_{total rel} against varying N (for different values of p_link) in Figure 1. For this graph, α and D (which are really arbitrary scaling constants) in the analysis are kept at 1 and 10 respectively, and K = 2. The graph shows
that for low values of the link error rates, the probability of transmission
errors is relatively insignificant; accordingly, the presence
of multiple short-range hops leads to a significant reduction
in the total energy consumption. However, when the error rates are
higher than around 10%, the optimal value of N is fairly small; in
such scenarios, any potential power savings due to the introduction
of an intermediate node are negated by a sharp increase in the number
of transmissions necessary due to a larger effective path error
rate. In contrast to earlier analyses, a path with multiple shorter
hops is thus not always more beneficial than one with a smaller number
of long-distance hops.
3.1.1 Energy Costs for TCP Flows
Our formulation (Equation (8)) provides the total energy consumed
per packet using an ideal retransmission mechanism. TCP's flow
control and error recovery algorithms could potentially lead to different
values for the energy consumption, since TCP behavior during
loss-related transients can lead to unnecessary retransmissions.
Figure 1: Total Energy Costs vs. Number of Forwarding Nodes (EER). (Total effective transmission energy per packet, with retransmissions, vs. number of intermediate (relay) nodes; K = 2.0.)
Figure 2: Idealized vs. TCP Effective Transmission Energy per Packet vs. Number of Forwarding Nodes (EER).
While the effective TCP throughput (or goodput) as a function of
the end-to-end loss probability has been derived in several analyses
(see [8, 3]), there exists no model to predict the total number
of packet transmissions (including retransmissions) for a TCP flow
subject to a variable packet loss rate. We thus use simulation studies
using the ns-2 simulator 4 , to measure the energy requirements
for reliable TCP transmissions. Figure 2 plots the energy consumed
by a persistent TCP flow, as well as the ideal values computed using
Equation (8), for varying N and for different values of p_link (up to 0.05). The
remarkably close agreement between our analytical predictions and
TCP-driven simulation results verifies the practical utility of our analytical
model.
3.2 Optimal Routes in HHR Case
In the case of the HHR model, a transmission error on a specific
link implies the need for retransmissions on that link alone. This
is a better model for multi-hop wireless networking environments,
which typically always employ link-layer retransmissions. In this
case, the link layer retransmissions on a specific link essentially ensure
that the transmission energy spent on the other links in the path
is independent of the error rate of that link. For our analysis, we
do not bound the maximum number of permitted retransmissions: a
transmitter continues to retransmit a packet until the receiving node
acknowledges error-free reception. (Clearly, practical systems would
typically employ a maximum number of retransmission attempts to bound the forwarding latency.)
4 Available at http://www.isi.edu/nsnam/ns
Figure 3: Total Energy Costs vs. Number of Forwarding Nodes (HHR). (Total effective transmission energy per packet, with retransmissions, vs. number of intermediate (relay) nodes; K = 2.0.)
Since our primary focus is on energy-efficient
routing, we also do not explicitly consider the effect of such
retransmissions on the overall forwarding latency of the path in this
paper.
Since the number of transmissions on each link is now independent
of the other links and is geometrically distributed, the total energy
cost for the HHR case is
E^{HHR}_{total rel} = Σ_{i=1}^{N} α · D_{i,i+1}^K / (1 - p_{i,i+1})
In the case of N intermediate nodes, with each hop being of distance D/N and having a link packet error rate of p_link, this reduces to:
E^{HHR}_{total rel} = α · D^K / ( N^{K-1} · (1 - p_link) )
Figure 3 plots the total energy for the HHR case, for
different values of N and plink . In this case, it is easy to see that the
total energy required always decreases with increasing N, following the 1/N^{K-1} decrease. Of course, the logarithmic scale for the energy cost compresses the differences in the value of E^{HHR}_{total rel} for
different plink . By itself, this result is not very interesting: if all
links have the same error rate, it is beneficial to substitute a single
hop with multiple shorter hops.
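For comparison, the sketch below evaluates the corresponding HHR expression under the same assumed constants as before (α = 1, D = 10, K = 2, all invented); the energy now falls monotonically with N, and the error rate only contributes the 1/(1 - p_link) factor.

```python
def hhr_energy(n_hops, p_link, D=10.0, K=2.0, alpha=1.0):
    """Expected energy with per-hop (link-layer) retransmissions and N equal hops."""
    # Each hop needs on average 1/(1 - p_link) transmissions of energy alpha*(D/N)^K.
    return n_hops * alpha * (D / n_hops) ** K / (1 - p_link)

for n in (1, 2, 5, 10):
    row = [f"{hhr_energy(n, p):7.2f}" for p in (0.1, 0.2, 0.3)]
    print(f"N={n:2d}:", *row)
# Unlike the EER case, increasing N always helps here; the error rate only
# scales the cost by 1/(1 - p_link).
```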
A more interesting study is to observe the total energy consump-
tion, for a fixed N , for different values of plink . Clearly, for moderately
large values of plink , the number of total transmissions (and
hence, the energy consumption) increases super-linearly with an increase
in the link error rate. The graph thus shows the importance
of choosing links with appropriate link error rates, even in the HHR
case. (In the EER case, Figure 1 clearly demonstrates that the effect
of larger link error rates is much more drastic: for example, increasing the loss probability from 0.1 to 0.2 can increase
the energy consumption ten-fold.) An energy-aware algorithm that
does not consider the error rates of associated links would not distinguish
between two paths, each of 10 nodes having the same D values
but different packet error rates. However, our analysis clearly
shows that the effective energy consumed by a path consisting of
links with higher packet error rates would be much larger than a path
with smaller error rates.
We obtain another meaningful observation by comparing the values of E_{total rel} for the EER and HHR cases (Figures 1 and 3), for
identical values of N and K . It is easy to see that, for moderate to
high values of plink , the EER framework results in at least an order
of magnitude higher energy consumption than the HHR case.
By avoiding the end-to-end retransmissions, the HHR approach can
significantly lower the total energy consumption. These analyses
reinforce the need for link-layer retransmissions in any radio
technology used in multi-hop, ad-hoc wireless networks.
4. ASSIGNING LINK COSTS
In contrast to traditional Internet routing protocols, energy-aware
routing protocols typically compute the shortest-cost path, where
the cost associated with each link is some function of the transmission
(and/or reception) energy associated with the corresponding nodes.
To adapt such minimum cost route determination algorithms (such
as Dijkstra's or the Bellman-Ford algorithm) for energy-efficient reliable
routing, the link cost must now be a function of not just the
associated transmission energy, but the link error rates as well. Using
such a metric would allow the routing algorithm to select links
that present the optimal tradeoff between low transmission energies
and low link error rates. As we shall shortly see, defining such a link
cost is possible only in the HHR case; approximations are needed to
define suitable cost metrics in the EER scenario.
Before presenting the appropriate link costs, it is necessary to define
the graph used for computing the shortest cost paths. Consider a graph, with the set of vertices representing the communication nodes and a link l_{i,j} representing the direct hop between nodes i and j. For generality, assume an asymmetric case where l_{i,j} is not the same as l_{j,i}; moreover, l_{i,j} refers to the link used by node i to transmit to node j. A link is assumed to exist between node pair (i, j) as long
as node j lies within the transmission range of node i. This transmission
range is uniquely defined for the constant-power case; for
the variable-power case, this range is really the maximum permissible
range corresponding to the maximum transmission power of a
sender. Let E_{i,j} be the energy associated with the transmission of a packet over link l_{i,j}, and p_{i,j} be the link packet error probability associated with that link. (In the fixed-power scenario, E_{i,j} is independent of the link characteristics; in the variable-power scenario, E_{i,j} is a function of the distance between nodes i and j.) Now, the
routing algorithm's job is to compute the shortest path from a source
to the destination that minimizes the sum of the transmission energy
costs over each constituent link.
4.1 Hop-by-Hop Retransmissions (HHR)
Consider a path P from a source node S (indexed as node 1) to
node D (indexed as node N + 1) that consists of N - 1 intermediate nodes.
Then, choosing path P for communication between S and D implies that the total energy cost is given by:
E^{HHR}_P = Σ_{i=1}^{N} E_{i,i+1} / (1 - p_{i,i+1})        (11)
Choosing a minimum-cost path from node 1 to node N + 1 is thus equivalent to choosing the path P that minimizes Equation (11). It is thus easy to see that the corresponding link cost for link l_{i,j}, denoted by C_{i,j}, is given by:
C_{i,j} = E_{i,j} / (1 - p_{i,j})        (12)
Some ad-hoc routing protocols, such as DSR or AODV, can then
use this link cost to compute the appropriate energy-efficient routes.
Other ad-hoc routing protocols, such as PARO, can also be easily
adapted to use this new link cost formulation to compute minimum-energy
routes. Thus, in the modified version of the PARO algo-
rithm, an intermediate node C would offer to interject itself between two nodes A and B if the sum of the link costs C_{A,C} + C_{C,B} is less than the 'direct' link cost C_{A,B}.
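The following sketch illustrates how such a link cost plugs into a standard shortest-path computation. It is only a toy example: the three-node topology, the energy values and the error rates are invented, and the Dijkstra routine stands in for whatever route computation the underlying ad-hoc protocol actually uses.

```python
import heapq

def dijkstra(links, src, dst):
    """Shortest path on a directed graph given per-link costs {(i, j): cost}."""
    adj = {}
    for (i, j), c in links.items():
        adj.setdefault(i, []).append((j, c))
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Per-link transmission energy E and packet error probability p (toy values).
E = {("S", "A"): 4.0, ("A", "D"): 4.0, ("S", "D"): 16.0}
p = {("S", "A"): 0.30, ("A", "D"): 0.30, ("S", "D"): 0.05}

hhr_cost = {l: E[l] / (1 - p[l]) for l in E}   # Equation (12)
print(dijkstra(hhr_cost, "S", "D"))
```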
4.2 End-to-End Retransmissions (EER)
In the absence of hop-by-hop retransmissions, the expression for
the total energy cost along a path contains a multiplicative term involving
the packet error probabilities of the individual constituent
links. In fact, assuming that transmission errors on a link do not stop
downstream nodes from relaying the packet, the total transmission
energy can be expressed as:
E^{EER}_P = ( Σ_{i=1}^{N} E_{i,i+1} ) / ( Π_{i=1}^{N} (1 - p_{i,i+1}) )        (13)
Given this form, the total cost of the path cannot be expressed as a
linear sum of individual link costs 5 , thereby making the exact formulation
inappropriate for traditional minimum-cost path computation
algorithms. We therefore concentrate on alternative formulations
of the link cost, which allow us to use conventional distributed
shortest-cost algorithms to compute "approximate" minimum energy
routes.
A study of Equation (13) shows that using a link with a high p can
be very detrimental in the EER case: an error-prone link effectively
drives up the energy cost for all the nodes in the path. Therefore,
a useful heuristic function for link cost should have a super-linear
increase with increase in link error rate; by making the link cost for
error-prone links prohibitively high, we can ensure that such links
are usually excluded during shortest-cost path computations.
In particular, for a path consisting of k identical links (i.e., links that have the same link error rate and link transmission cost), Equation (13) will reduce to
E^{EER}_P = k · E / (1 - p)^k        (14)
where p is the link error rate and E is the transmission cost across each of these links. This leads us to propose a heuristic cost function for a link, as follows:
C_{i,j} = E_{i,j} / (1 - p_{i,j})^L        (15)
where the parameter L is chosen to be identical for all links 6.
Clearly, if the exact path length is known and all nodes on the path
have the identical link error rates and transmission costs, L should
be chosen equal to that path length. However, we require that a link
should advertise a single cost for that link for distributed route com-
putation, in accordance with current routing schemes. Therefore,
we need to fix the value of L, independent of the different paths
that cross a given link. If better knowledge of the network paths is
available, then L should be chosen to be the average path length of
this network. Higher values of L impose progressively stiffer penalties
on links with non-zero error probabilities.
Given this formulation of the link cost, the minimum-cost path
computation effectively computes the path with the minimum "approximate" energy cost given by:
Σ_{i=1}^{N} E_{i,i+1} / (1 - p_{i,i+1})^L
5 We do not consider solutions that require each node or link to separately advertise two different metrics. If such advertisements were allowed, we can indeed compute the optimal path accurately. For example, if we considered two separate metrics, a) E_{i,j} and b) the link error probability p_{i,j}, then a node can accurately compute the next hop neighbor (using a distance-vector approach) to a destination D by using the cumulative values of these metrics advertised by its neighbor set.
6 There should be an L factor in the numerator too (as in Equation (14)), but since this is identical for all links, it can effectively be ignored.
As before, regular ad-hoc routing protocols, or newer ones such as
PARO, can use this new link cost function C approx to evaluate their
routing decisions.
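A sketch of the heuristic itself, using the same invented link values as in the earlier toy topology, shows how the L parameter acts as a penalty knob: small L favours many low-energy (but lossy) hops, while larger L pushes the route computation towards cleaner links.

```python
def c_approx(E_ij, p_ij, L=2):
    """Heuristic EER link cost of Equation (15); L is a network-wide parameter."""
    return E_ij / (1 - p_ij) ** L

# Invented link values: a larger L penalizes lossy links more heavily.
for L in (1, 2, 5):
    lossy = c_approx(4.0, 0.30, L)
    clean = c_approx(16.0, 0.05, L)
    print(f"L={L}: two lossy hops={2 * lossy:.1f}, one clean hop={clean:.1f}")
```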
As with our theoretical studies in Section 3, the analysis here does
not directly apply to TCP-based reliable transport, since TCP's loss
recovery mechanism can lead to additional transients. In the next
section, we shall use simulation-based studies to study the performance
of our suggested modifications to the link cost metric in typical
ad-hoc topologies.
5. PERFORMANCE EVALUATION
The analysis of the previous section provides a foundation for devising
energy-conscious protocols for reliable data transfer. In this
section, we report on extensive simulation-based studies on the performance
impacts of our proposed modifications in the ns-2 simu-
lator. The traffic for our simulation studies consists of two types:
1. For studies using the EER framework, we used TCP flows implementing
the NewReno version of congestion control.
2. For studies using the HHR framework, we used both UDP
and TCP flows. In UDP flows, packets are inserted by the
source at regular intervals.
To study the performance of our suggested schemes, we implemented
and observed three separate routing algorithms:
1. The minimum-hop routing algorithm, where the cost of all
links is identical andindependent of both the transmission energy
and the error rate.
2. The Energy-Aware (EA) routing algorithm, where the cost
associated with each link is the energy required to transmit a
single packet (without retransmission considerations) across
that link.
3. Our Retransmission-Energy Aware (RA) algorithm, where the
link cost includes the packet error rates, and thus considers
the (theoretical) impact of retransmissions necessary for reliable
packet transfer. For the HHR scenario, we use the link
cost of Equation (12); for the EER model, we use the 'ap-
proximate' link cost of Equation (15) with 2. In Section
5.4.2, we also study the effect of varying the L-parameter.
In the fixed-power scenario, the minimum-hop and EA algorithms
exhibit identical behavior; accordingly, it suffices to compare our
RA algorithm with minimum-hop routing alone. For our experi-
ments, we used different topologies having up to 100 nodes randomly distributed on a square region, to study the effects of various schemes
on energy requirements and throughputs achieved. In this section,
we discuss in detail results from one representative topology, where
49 nodes were distributed over a 70x70 unit grid, equi-spaced 10 units apart (Figure 4). The maximum transmission radius of a node is 45 units, which implies that each node has between 14 and 48 neighbors on this topology.
Each of the routing algorithms (two for the fixed-power scenario,
three for the variable-power scenario) were then run on these static
topologies to derive the least-cost paths to each destination node.
Figure 4: The 49-node topology. The shaded region marks the maximum transmission range for the corner node, A. There are three flows from each of the 4 corner nodes, for a total of 12 flows.
To simulate the offered traffic load typical of such ad-hoc wireless topologies, each of the corner nodes on the grid had 3 active flows, providing a total of 12 flows. Since our objective was to study the
transmission energies alone, we did not consider other factors such
as link congestion, buffer overflow, etc. Thus, each link had an infinitely large transmit buffer; the link bandwidths for all links (point to point) were set to 11 Mbps. Each of the simulations was run for a
fixed duration.
5.1 Modeling Link Errors
The relation between the bit-error-rate (pb ) over a wireless channel
and the received power level Pr is a function of the modulation
scheme. However, in general, several modulation schemes exhibit
the following generic relationship between p_b and P_r:
p_b ∝ erfc( √( constant · P_r / N ) )        (16)
where N is the noise spectral density (noise power per Hz) and erfc(x) is defined as the complementary function of erf(x) and is given by
erfc(x) = 1 - erf(x) = 1 - (2/√π) ∫_0^x e^{-t^2} dt
As a specific example, the bit error rate is given by
p_b = (1/2) · erfc( √( P_r / (N · f) ) )        (17)
for BPSK (binary phase-shift keying), where f is the transmission bit-rate.
Since we are not interested in the details of a specific modulation
scheme but merely want to study the general dependence of the error
rate on the received power, we make the following assumptions:
i) The packet error rate p equals S · p_b, where p_b is the bit error
rate and S is the packet size. This is an accurate approximation
for small error rates pb ; thus, we assume that the packet
error rate increases/decreases in direct proportion to pb .
ii) The received signal power is inversely proportional to D^K, where D is the link distance, and K is the same constant as used in Equation (2). Thus P_r can be replaced by T/D^K, where T is the transmitter power. We choose BPSK as our representative candidate and hence use Equation (17) to derive the bit-error-rate (see the sketch below).
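As an illustration of how assumptions (i) and (ii) combine, the sketch below derives a distance-dependent packet error rate from the BPSK expression using Python's standard math.erfc; the transmit power, noise density, bit-rate and packet size are arbitrary placeholder values, not parameters taken from the simulations.

```python
import math

def bit_error_rate(received_power, noise_density=1.0, bit_rate=1.0):
    """BPSK-style BER, p_b = 0.5 * erfc(sqrt(P_r / (N * f))), as in Equation (17)."""
    return 0.5 * math.erfc(math.sqrt(received_power / (noise_density * bit_rate)))

def packet_error_rate(distance, tx_power=1200.0, K=2.0, packet_bits=1000):
    """Fixed-power case: P_r = T / D^K (assumption ii), and p ~ S * p_b (assumption i).
    The product S * p_b is capped at 1 since the approximation only holds for small p_b."""
    p_r = tx_power / distance ** K
    return min(1.0, packet_bits * bit_error_rate(p_r))

# Longer links see weaker received power and hence sharply higher error rates.
for d in (10, 12, 15, 20):
    print(f"D={d}: packet error rate ~ {packet_error_rate(d):.4f}")
```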
We study both the fixed and variable power scenarios in our simulations.
Fixed transmission power: In this case, all the nodes in the
network use a fixed power for all transmissions, which is independent
of the link distance. While such an approach is
clearly inefficient for wireless environments, it is representative
of several commercial radio interfaces that do not provide
the capability for dynamic power adjustment. From Equation
17, it is clear that links with larger distances have higher
packet error rates.
For our experiments in this case, we first chose a maximum
error rate (p_max) for a unit hop along the axes for the grid
topology given in Figure 4. Using Equations (2) and (17), it is
then possible to calculate the corresponding maximum error
rates on the other links.
To add the effect of random ambient noise in the channel, we
chose the actual packet error rate on each link uniformly at
random from the interval (0; pmax ), where pmax is the maximum
packet error rate computed for that link. For different
experiments, we varied the pmax for the unit hop links (and
correspondingly the maximum error rates for the other links).
Variable transmission power: In this case, we assume that
all the nodes in the network are dynamically able to adjust
transmission power across the links. Each node chooses the
transmission power level for a link so that the signal reaches
the destination node with the same constant received power.
Since we assume that the attenuation of signal strength is given by Equation (1), the energy requirements for transmitting across links of different lengths are given by Equation (3).
Since all nodes now receive signals with the same power, the
bit error rate, given by Equation 17, is the same for all links
(by using the flexibility of adjusting the transmission power
based on link distances). Therefore, for this scenario, we only
need to model the additional link error rate due to ambient
noise at the receiver. We chose the maximum error rate for
a link due to ambient noise (p_ambient) for the different experiments in this case, and chose the actual error rate for a link uniformly at random from the interval (0, p_ambient).
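The per-link error model used in the two scenarios can be summarized in a few lines. In this sketch the ceilings are placeholder numbers: in the fixed-power case each link's ceiling would be computed from its distance via the BER model above, whereas in the variable-power case a single ambient ceiling applies to every link.

```python
import random

def draw_link_error(p_ceiling):
    """Actual per-link packet error rate: uniform in (0, p_ceiling) to model
    random ambient noise at the receiver."""
    return random.uniform(0.0, p_ceiling)

random.seed(1)
# Fixed power: the ceiling grows with link distance (computed per link from the
# BER model of the previous sketch); longer links are lossier.
fixed_ceilings = {"unit link": 0.25, "diagonal link": 0.45}   # illustrative values only
print({name: round(draw_link_error(c), 3) for name, c in fixed_ceilings.items()})
# Variable power: received power is equalized, so one common ceiling p_ambient.
print([round(draw_link_error(0.5), 3) for _ in range(3)])
```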
5.2 Metrics
To study the energy efficiency of the routing protocols, we observed
two different metrics:
1. Normalized energy: We first compute the average energy
per data packet by dividing the total energy expenditure (over
all the nodes in the network) by the total number of unique
packets received at any destination (sequence numbers for TCP and packets for UDP). We defined the normalized energy of a scheme as the ratio of the average energy per data packet for that scheme to the average energy per data packet required by the minimum-hop routing scheme. Since the minimum-hop routing scheme clearly consumes the maximal energy, the normalized energy parameter provides an easy representation of the percentage energy savings achieved by the other (EA and RA) routing algorithms.
Figure 5: UDP flows with link layer re-transmissions (HHR) for fixed transmission power scenario. (Normalized energy required vs. maximum error rate on the unit links, for UDP flows on the 49-node topology.)
2. Effective Reliable Throughput: This metric counts the number of packets that were reliably transmitted from the source to the destination over the simulated duration. Since all the
plots show results of runs of different schemes over the same
time duration, we do not actually divide this packet count by
the simulation duration. Different routing schemes will differ
in the total number of packets that the underlying flows
are able to transfer over an identical time interval.
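The normalized-energy metric is simply a ratio of per-packet averages. The sketch below computes it for invented totals; the only structural point is that both the numerator and the baseline divide total expended energy by the number of unique packets delivered.

```python
def normalized_energy(total_energy, packets_delivered, baseline_energy_per_packet):
    """Average energy per delivered packet, normalized to the minimum-hop scheme."""
    return (total_energy / packets_delivered) / baseline_energy_per_packet

# Invented numbers: the minimum-hop scheme defines the baseline (normalized energy 1.0).
baseline = 5000.0 / 1000                           # energy per packet under minimum-hop routing
print(normalized_energy(5000.0, 1000, baseline))   # 1.0 by construction
print(normalized_energy(4100.0, 1080, baseline))   # < 1.0, e.g. for a more efficient scheme
```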
5.3 Fixed Transmission Power Scenario
We first present results for the case where each node uses a fixed
and constant transmission power for all links. In this case, it is obvious
that the EA routing scheme degenerates to the minimum-hop
routing scheme.
5.3.1 HHR Model
We first present the results for the case where each link implements
its own localized retransmission algorithm to ensure reliable
delivery to the next node on the path.
HHR with UDP: Figure 5 shows the total energy consumption for the routing schemes under link-layer retransmissions (HHR
case). We experimented with a range of link error rates to obtain
these results. As can be seen, the RA scheme shows a significant improvement over the minimum-hop (identical in this environment to the EA) scheme, as expected. The normalized energy requirement of the minimum-hop and the EA schemes is unity in this case. With
increasing link error rates, the benefits of using our re-transmission
aware scheme become more significant. For example, at a maximum
link error rate for the unit hop links (pmax ) of 0.25, the RA
scheme consumes about 24% lower energy than the other two schemes.
Note, that in this case, 0.25 is only the maximum link error rate for
the unit links; typical unit links will have actual error rates varying
between 0.0 and 0.25.
It is perhaps important to emphasize that it is only the normalized
energy for the RA scheme which decreases with increasing link error
rate. The absolute energy expenditure will obviously increase
with an increasing value of pmax for all routing algorithms.
HHR with TCP: In Figure 6, we observe the same metric for
TCP flows. As can be seen, the trends for both UDP and TCP flows,
in terms of energy requirements are similar, when link-layer retransmissions
are present. However, it is more interesting to observe the
consequences of using these different schemes on the number of
data packets transmitted reliably to the destinations of the flows.
Figure 6: Energy required for TCP flows with link layer re-transmissions (HHR) for fixed transmission power scenario. (Normalized energy required vs. maximum error rate on the unit links, for TCP flows on the 49-node topology.)
Figure 7: Reliable packet transmissions for TCP flows with link layer re-transmissions (HHR) for fixed transmission power scenario. (Sequence numbers transmitted reliably vs. maximum error rate on the unit links; Energy Aware and Retransmission Aware schemes.)
This is shown in Figure 7. The RA scheme consistently delivers a larger volume of data packets to the destination within the same simulated duration, even while it is consuming less energy per sequence number transmitted. This is because of two reasons. First, the RA scheme
often chooses paths with lower error rates. Thus the number of link-layer retransmissions seen for TCP flows using the RA scheme is lower, and hence the round trip time delays are lower. The
throughput, T, of a TCP flow with round trip delay τ and loss rate p varies as [10]:
T ∝ (1/τ) · (1/√p)
The RA scheme has smaller values of both p and τ and so has a higher throughput.
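Under this Mathis-style scaling, the relative throughput advantage can be estimated directly; the loss rates and round-trip times below are hypothetical, chosen only to illustrate that improving both factors compounds.

```python
import math

def relative_tcp_throughput(rtt, loss_rate):
    """Throughput scaling T ~ 1/(rtt * sqrt(p)); only ratios between schemes are meaningful."""
    return 1.0 / (rtt * math.sqrt(loss_rate))

# Hypothetical values: the RA path sees both a lower loss rate and a smaller RTT.
ra = relative_tcp_throughput(rtt=0.8, loss_rate=0.02)
min_hop = relative_tcp_throughput(rtt=1.0, loss_rate=0.05)
print(f"RA vs. minimum-hop throughput ratio ~ {ra / min_hop:.2f}")
```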
5.3.2 EER Model
We now provide the results of our experiments under the EER
scheme.
EER with TCP: We looked at the energy requirements when end-
to-end TCP re-transmissions are the sole means of ensuring reliable
data transfer. The minimum-hop algorithm always chooses a small
number of larger distance links. However, in this fixed transmission
power case, the received signal strength over larger distance links is
lower, and consequently, by Equation 17, has a higher bit error rate.
Figure 8: UDP flows with link layer re-transmissions (HHR) for variable transmission power scenario. (Normalized energy required vs. maximum error rate on the links, for UDP flows on the 49-node topology; Energy Aware and Retransmission Aware schemes.)
Since there are no link layer retransmissions, the loss probability for each data segment is fairly high. Therefore this scheme achieves a
very low TCP throughput (less than 1% of that achieved by the RA
scheme) and still used 10-20% more energy. Hence it was difficult
to do meaningful simulation comparisons of the RA scheme with
this minimum-hop algorithm.
5.4 Variable Transmission Power Scenario
In this case, the nodes are capable of adapting the transmission
power, so that the received signal strength is identical across all links.
To achieve this, clearly, links with larger distances require a higher
transmission power than links with smaller distances. In this situa-
tion, we varied the link error rate due to ambient noise at the receiver
of the links to compare the different schemes.
Unlike the fixed transmission power case, the EA routing algorithm
in this case chooses paths with a large number of small hops,
and has lower energy consumption than the minimum hop routing
algorithm. Therefore, in these results, we compare our RA scheme
with both EA and minimum-hop routing.
5.4.1 HHR Model
We first present the results for the case where each link implements
its own localized retransmission algorithm to ensure reliable
delivery to the next node on the path.
HHR with UDP: Figure 8 shows the total energy consumption for the routing schemes under link-layer retransmissions (HHR
case). We experimented with a range of channel error rates to obtain
these results. Both EA and RA schemes are a significant improvement
over the minimum-hop routing scheme, as expected. How-
ever, with increasing channel error rates, the difference between the
normalized energy required per reliable packet transmission for the
RA and the EA schemes diverges. At some of the high channel error
rates (p_ambient = 0.5), the energy requirement of the RA scheme
is about 25% lower than the EA scheme. It is again useful to note,
that this error rate is only the maximum error rate for the link. The
actual link error rate is typically much smaller.
Once again, it is only the normalized energy for the RA scheme
which decreases. The absolute energy required obviously increases
with an increasing value of p_ambient.
HHR with TCP: In Figure 9, we observe the same metric for
TCP flows. As before, the energy requirement of the RA scheme is much lower than that of the EA scheme. Additionally, we can again observe
(Figure 10) that the number of data packets transmitted reliably for the RA scheme is much higher than that of the EA scheme.
Figure 9: Energy required for TCP flows with link layer re-transmissions (HHR) for variable transmission power scenario. (Normalized energy required vs. maximum error rate on the links; Energy Aware and Retransmission Aware schemes.)
Figure 10: Reliable packet transmissions for TCP flows with link layer re-transmissions (HHR) for variable transmission power scenario. (Sequence numbers transmitted reliably vs. maximum error rate on the links; Energy Aware and Retransmission Aware schemes.)
5.4.2 EER Model
Finally, we provide the results of our experiments under the EER
framework.
EER with TCP: For the EER case, like before, it was often difficult
to simulate links with high error rates; even with a small number
of hops, each TCP packet is lost with a high probability and no
data ever gets to the destinations.
The energy savings achieved by the RA algorithm are more pronounced when no link-layer retransmission mechanisms are present. For some of the higher link error rates simulated in this environment (e.g., 0.22), the energy savings of the RA scheme were nearly 65% relative to the EA scheme, as can be seen in Figure 11.
Again, it is interesting to observe the data packets transmitted reliably
by the EA and the RA schemes, simulated over the same duration
(Figure 12). For lower error rates (p_max between 0.1 and 0.14)
the RA scheme transmits nearly an order of magnitude more TCP
sequence numbers than the EA scheme. While the total TCP goodput
approaches zero for both schemes as the link error rates increase,
the rate of decrease in the TCP goodput is much higher for the EA
scheme than the RA scheme.
Figure 11: TCP flows with no link layer re-transmissions (EER) for variable transmission power scenario. (Normalized energy required vs. maximum error rate on the links; Energy Aware and Retransmission Aware (L=2) schemes.)
Figure 12: TCP flows with no link layer re-transmissions (EER) for variable transmission power scenario. (Sequence numbers transmitted reliably vs. maximum error rate on the links; Energy Aware and Retransmission Aware (L=2) schemes.)
Varying L: In Figure 13, we varied the L-parameter of Equation (15) for a specific error rate on the links (i.e., 0.175). The number of reliably transmitted packets increased monotonically with
the value of L. However, the curve in the figure has a minimum "en-
ergy per reliably transmitted packet", corresponding to
this example 7 . Varying the L-value from this optimal value leads
to poorer energy-efficiency (higher energy/packet). There is thus
clearly a trade-off between the achieved throughput, and the effective
energy expended. To achieve a higher throughput, it is necessary
to prefer fewer hops, as well as links with low error rates
(higher error rate links will cause higher delays due to re-transmissions).
This plot illustrates the following important point: it is possible to
tune the L-parameter to choose an appropriate operating point that captures the tradeoff between a) the achieved TCP throughput, and b) the effective energy expended per sequence number received reliably.
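The effect of the L knob on route selection can be seen even without a simulator. The sketch below scores two hypothetical candidate paths with the approximate cost of Equation (15) summed over their links; the per-link energies and error rates are invented, but the qualitative switch from many short lossy hops to fewer cleaner hops as L grows mirrors the trade-off described above.

```python
def approx_path_cost(links, L):
    """Sum of the heuristic link costs E/(1-p)^L over a candidate path."""
    return sum(E / (1 - p) ** L for E, p in links)

# Two hypothetical candidate paths: many short lossy hops vs. fewer cleaner hops.
many_short = [(2.0, 0.175)] * 6
few_long = [(7.0, 0.05)] * 2
for L in (1, 2, 3, 5, 10):
    pick = "many short hops" if approx_path_cost(many_short, L) < approx_path_cost(few_long, L) else "fewer hops"
    print(f"L={L}: route selection prefers {pick}")
```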
6. CONCLUSION
In this paper, we have shown why the effective total transmission
energy, which includes the energy spent in potential retrans-
missions, is the proper metric for reliable, energy-efficient commu-
nications. The energy-efficiency of a candidate route is thus critically
dependent on the packet error rate of the underlying links, since they directly affect the energy wasted in retransmissions.
7 Finer measurements with many more L-values would yield the exact L that minimizes this curve.
Figure 13: Varying the L parameter to tradeoff normalized energy and number of reliably transmitted sequence numbers. (Normalized energy required vs. sequence numbers transmitted reliably, for TCP flows on the 49-node topology.)
Our
analysis of the interplay between error rates, number of hops and
transmission power levels reveals several key results:
1. Even if all links have identical error rates, it is not always
true that splitting a large-distance (high-power) hop into multiple
small-distance (low-power) hops results in overall energy
savings. Our analysis shows that if the number of hops
exceeds an optimal value (which can be quite small
in realistic scenarios), the rise in the overall error probability
negates any apparent reduction in the transmission power.
2. Any routing algorithm must evaluate a candidate link (and the
path) on the basis of both its power requirements and its error
rate. Even in the HHR framework, where retransmissions
are typically localized to a specific hop, the choice of an error-prone
link can lead to significantly higher effective energy expended
per packet.
3. Link-layer retransmission support (HHR) is almost mandatory
for a wireless, ad-hoc network, since it can reduce the
effective energy consumption by at least an order of magnitude.
4. The advantages of using our proposed re-transmission aware routing scheme are significant irrespective of whether fixed or
variable transmission power is used by the nodes to transmit
across links.
We also studied modifications to the link cost that would enable
conventional minimum-cost path algorithms to select optimal "ef-
fective energy" routes. While the appropriate cost for link (i;
turned out to be E i;j
for the HHR framework, it was not possible
to define an exact link cost for the EER case. For the EER scenario,
we studied the performance of approximate link costs of the form
various values of L. Our simulation studies show that
the incorporation of the error rate in the link cost leads to significant
energy savings (potentially as high as 70%) compared to existing
minimum-energy algorithms. It also turns out that, in the HHR
model, the L parameter in the link cost provides a knob to trade off
energy efficiency with network throughput (capacity). While larger
values of L always lead to the selection of shorter-hop routes and
larger session throughput, the energy-efficiency typically increases
and then decreases with increasing L.
As part of future research, we intend to extend our analyses (which
assumed each link to be operating independently of other links) to
scenarios, such as IEEE 802.11-based networks, where the logical
links share the same physical channel and hence, interfere with one
another. Indeed, since an energy-aware routing protocol defines the
next-hop node (and hence, implicitly defines the associated transmission
power), the choice of the routing algorithm is expected to
affect both the overall network capacity and individual session through-
puts in such scenarios.
7.
--R
Energy conserving routing in wireless ad-hoc networks
An Internet MANET encapsulation protocol (IMEP) specification
Connections with multiple congested gateways in packet-switched networks part 1: One-way traffic
PARO: A power-aware routing optimization scheme for mobile ad hoc networks
Dynamic source routing in ad hoc wireless networks.
The macroscopic behavior of the TCP congestion avoidance algorithm.
Routing and channel assignment for low power transmission in PCS.
Performance evaluation of battery-life-aware routing schemes for wireless ad hoc networks
--TR
Connections with multiple congested gateways in packet-switched networks part 1
The macroscopic behavior of the TCP congestion avoidance algorithm
Power-aware routing in mobile ad hoc networks
PAMAS - power aware multi-access protocol with signalling for ad hoc networks
An adaptive-transmission protocol for frequency-hop wireless communication networks
Ad-hoc On-Demand Distance Vector Routing
Window-Based Error Recovery and Flow Control with a Slow Acknowledgement Channel
--CTR
Chao Gui , Prasant Mohapatra, SHORT: self-healing and optimizing routing techniques for mobile ad hoc networks, Proceedings of the 4th ACM international symposium on Mobile ad hoc networking & computing, June 01-03, 2003, Annapolis, Maryland, USA
Chao Gui , Prasant Mohapatra, A framework for self-healing and optimizing routing techniques for mobile ad hoc networks, Wireless Networks, v.14 n.1, p.29-46, January 2008
Bansal , Rajeev Shorey , Rajeev Gupta , Archan Misra, Energy efficiency and capacity for TCP traffic in multi-hop wireless networks, Wireless Networks, v.12 n.1, p.5-21, February 2006
Wook Choi , Sajal K. Das , Jiannong Cao , Ajoy K. Datta, Randomized dynamic route maintenance for adaptive routing in multihop mobile ad hoc networks, Journal of Parallel and Distributed Computing, v.65 n.2, p.107-123, February 2005
Qunfeng Dong , Suman Banerjee , Micah Adler , Archan Misra, Minimum energy reliable paths using unreliable wireless links, Proceedings of the 6th ACM international symposium on Mobile ad hoc networking and computing, May 25-27, 2005, Urbana-Champaign, IL, USA
Anand Srinivas , Eytan Modiano, Finding minimum energy disjoint paths in wireless ad-hoc networks, Wireless Networks, v.11 n.4, p.401-417, July 2005
Liran Ma , Qian Zhang , Xiuzhen Cheng, A power controlled interference aware routing protocol for dense multi-hop wireless networks, Wireless Networks, v.14 n.2, p.247-257, March 2008
Constandinos X. Mavromoustakis , Helen D. Karatza, Adaptive Energy Conservation Model using Dynamic Caching for Wireless Devices, Proceedings of the 37th annual symposium on Simulation, p.257, April 18-22, 2004
Budhaditya Deb , Badri Nath, On the node-scheduling approach to topology control in ad hoc networks, Proceedings of the 6th ACM international symposium on Mobile ad hoc networking and computing, May 25-27, 2005, Urbana-Champaign, IL, USA
Constandinos X. Mavromoustakis , Helen D. Karatza, Handling Delay Sensitive Contents Using Adaptive Traffic-Based Control Method for Minimizing Energy Consumption in Wireless Devices, Proceedings of the 38th annual Symposium on Simulation, p.295-302, April 04-06, 2005
Constandinos X. Mavromoustakis , Helen D. Karatza, Quality of Service Measures of Mobile Ad-hoc Wireless Network using Energy Consumption Mitigation with Asynchronous Inactivity Periods, Simulation, v.83 n.1, p.107-122, January 2007
Budhaditya Deb , Sudeept Bhatnagar , Badri Nath, Information assurance in sensor networks, Proceedings of the 2nd ACM international conference on Wireless sensor networks and applications, September 19-19, 2003, San Diego, CA, USA
Anand Srinivas , Eytan Modiano, Minimum energy disjoint path routing in wireless ad-hoc networks, Proceedings of the 9th annual international conference on Mobile computing and networking, September 14-19, 2003, San Diego, CA, USA
Seungjoon Lee , Bobby Bhattacharjee , Suman Banerjee, Efficient geographic routing in multihop wireless networks, Proceedings of the 6th ACM international symposium on Mobile ad hoc networking and computing, May 25-27, 2005, Urbana-Champaign, IL, USA
Pierpaolo Bergamo , Alessandra Giovanardi , Andrea Travasoni , Daniela Maniezzo , Gianluca Mazzini , Michele Zorzi, Distributed power control for energy efficient routing in ad hoc networks, Wireless Networks, v.10 n.1, p.29-42, January 2004
Jian Li , Prasant Mohapatra, PANDA: a novel mechanism for flooding based route discovery in ad hoc networks, Wireless Networks, v.12 n.6, p.771-787, November 2006 | routing;energy efficiency;ad-hoc networks |
513826 | Weak duplicate address detection in mobile ad hoc networks. | Auto-configuration is a desirable goal in implementing mobile ad hoc networks. Specifically, automated dynamic assignment (without manual intervention) of IP addresses is desirable. In traditional networks, such dynamic address assignment is often performed using the Dynamic Host Configuration Protocol (DHCP). Implementing DHCP, however, requires access to a DHCP server. In mobile ad hoc networks, it is difficult to guarantee access to a DHCP server, since ad hoc networks can become partitioned due to host mobility. Therefore, alternative mechanisms must be employed. One plausible approach is to allow a node to pick a tentative address randomly (or using some locally available information), and then use a "duplicate address detection" (DAD) procedure to detect duplicate addresses. The previously proposed DAD procedures make use of timeouts and do not always perform correctly in presence of partitions. In networks where message delays cannot be bounded, use of timeouts can lead to unreliability. Therefore, we propose an alternative approach (which can be used in conjunction with previously proposed schemes). We refer to the proposed approach as "weak" duplicate address detection. The goal of weak DAD is to prevent a packet from being routed to the "wrong" destination node, even if two nodes in the network happen to have chosen the same IP address. We also propose an enhanced version of the weak DAD scheme, which removes a potential shortcoming of the weak DAD approach. | INTRODUCTION
Auto-configuration is a desirable goal in implementing
mobile ad hoc networks [16]. Specifically, automated dynamic
assignment of IP addresses is desirable. In traditional
networks, such dynamic address assignment is often
performed using Dynamic Host Configuration Protocol
(DHCP) [5]. Implementing DHCP, however, requires access
to a DHCP server. In mobile ad hoc networks, it is di#cult
to guarantee access to a DHCP server, since ad hoc networks
can become partitioned due to host mobility. There-
fore, alternative mechanisms must be employed. One plausible
approach is to allow a node to pick a tentative address
randomly (or using some locally available information), and
then use a "duplicate address detection" (DAD) procedure
to detect duplicate addresses. Such duplicate address detection
mechanisms have been proposed previously [2, 13,
16]. The previously proposed DAD procedures make use
of timeouts. In networks where message delays cannot be
bounded, use of timeouts cannot reliably detect absence of a
message. Such unreliability can lead to a situation wherein
existence of duplicate addresses goes undetected. Therefore,
we propose an alternative approach (which can be used in
conjunction with an existing DAD scheme such as [16]).
We refer to the proposed approach as "weak" duplicate
address detection. The goal of weak DAD is to prevent a
packet from being delivered to a "wrong" destination node,
even if two nodes in the network happen to have chosen the
same IP address [19]. We believe that weak DAD can significantly
improve robustness in presence of partitions and unpredictable
message delays as compared to existing schemes.
The rest of this paper is organized as follows. Related
work is summarized in Section 2. To motivate weak DAD, in
Section 3, we first define "strong DAD," and make some simple
observations on impossibility of strong DAD. These observations
motivate weak DAD. Proposed weak DAD scheme
is presented in Section 4. Section 5 presents an enhancement
to the weak DAD scheme to avoid unexpected behavior in
upper layers of the protocol stack. Section 6 explains how
weak DAD can be performed in conjunction with Dynamic
Source Routing [8]. A hybrid DAD scheme is discussed in
Section 7. A problem that arises when flooding is used as
the routing protocol is discussed in Section 8. Section 9
touches on the issue of address reuse. Section 10 presents
the conclusions.
2. RELATED WORK
An alternative to an automated dynamic address assignment
is to use a manual procedure that somehow ensures
unique addresses. However, manual assignment is cumbersome
in general. As noted previously, DHCP is commonly
used for the purpose of dynamically assigning unique IP addresses
in traditional networks [5]. Since access to a DHCP
server cannot be guaranteed in mobile ad hoc networks,
DHCP may not be suitable for such environments.
When the IP address size is large, as in case of IP version
6, a unique IP address can potentially be created by embedding
MAC address (provided that MAC addresses are
into the IP address. However, we consider the case
when this is not feasible. For instance, a 48-bit IEEE 802.11
MAC address cannot be embedded in a 32-bit IP version 4
address.
Perkins et al. [16] propose a simple DAD mechanism that
works correctly provided that message delays between all
pairs of nodes in the ad hoc network are bounded. The DAD
protocol in [16] is based on IPv6 stateless address autoconfiguration
mechanism [18]. We now describe the mechanism
in [16] briefly. A node, say node X, first picks a random
address (perhaps from an address space reserved for this
purpose). To determine if the address is already assigned
to another node, node X issues a "route request" for that
randomly selected address. The purpose of the route request
is to find a route to a node with the selected address.
If the chosen address is indeed already assigned to another
node, then with the routing protocols assumed in [16] the
route request will result in a "route reply" being sent back
to node X. Thus, absence of a route reply can be used as
an indication that no other node has assigned the address
chosen by node X. The scheme in [16] defines a timeout
period, and if no route reply is received by node X during
this period, the route request is sent again (this procedure
is repeated up to RREQ RETRIES times, for a suitably defined
RREQ RETRIES parameter). [16] suggests that the
values for the timeout and RREQ RETRIES parameters for
the route request messages issued during DAD should be the
same as their "usual" values for similar messages used in the
base routing protocol. Clearly, if message delays between all
pairs of nodes are bounded, then it should be possible to determine
a suitable timeout interval such that the above DAD
mechanism can detect the presence of duplicate addresses.
A disadvantage of the above scheme is that it does not work
as intended when unbounded delays due to partitions can
occur. Particularly, when partitions merge, the resulting
network may contain nodes with duplicate addresses. For
correct behavior, this scheme must be augmented with a
procedure that detects merging of partitions, and then takes
suitable actions to detect duplicate addresses in the merged
partitions. An advantage of the proposed weak DAD approach
is that it does not require use of an explicit procedure
for detecting merging partitions.
Boleng [2] presents another address assignment scheme
which uses timeouts in the procedure for detecting duplicate
addresses. As such, this scheme shares some of the short-comings
of [16]. [14] presents an address assignment scheme
for ad hoc networks which relies on having a leader in each
partition, and bears some similarities to [2]. The leader is
used to assign addresses in its partition. [12] suggests the
use of unique identifiers to allow distinction between packets
belonging to two di#erent mobile ad hoc networks. Such
identifiers are also used in IEEE 802.11 networks under a
variety of names. [12] suggests that unique identifier be
included in each message. These identifiers have some similarity
to the key used in our proposed scheme (as described
later), however, the manner in which the key is used is quite
di#erent from the use of unique identifier suggested in [12].
In particular, [12] suggests that the unique identifier be included
in each message. As a design decision, we do not
modify IP packet headers (so the keys are not used to make
routing decisions), and the keys are not included in most of
the IP packets. Our keys are only included in network layer
control packets.
[13] present an address assignment scheme for ad hoc networks
based on a distributed mutual exclusion algorithm
that treats IP addresses as a shared resource. Some aspects
of [13] are similar to [2, 14]. Assignment of a new address
in [13] requires an approval from all other known nodes in
the network. This scheme also needs a mechanism to detect
when partitions occur, and when partitions merge. When
partitions occur, the nodes in each partition determine identity
of the node with the lowest IP address in their partition.
The node with the lowest IP address then floods a unique
identifier to its partition. This information can be later used
to detect merging partitions. When two partitions merge,
nodes in the two partitions are required to exchange the set
of allocated IP addresses in each partition. [13] also requires
the use of timeouts for several operations. As compared to
[13], benefit of our approach is that it is much simpler, it
can work in presence of partitions (without requiring any
special procedure for detection of partitions or merging of
partitions), and the proposed approach can be integrated
with many di#erent routing protocols (use with link state
routing [9, 10] and dynamic source routing [8] is illustrated
in this paper).
[3] presents a dynamic configuration scheme for choosing
IP version 4 (IPv4) link-local addresses. This scheme allows
nodes on di#erent network links to choose the same
link-local addresses. However, when two such network links
are later connected, now duplicate addresses may occur on
the new link. [3] suggests a modification to the Address
Resolution Protocol (ARP), such that ARP replies are sent
by a broadcast, as opposed to unicast, to facilitate detection
of duplicate addresses. This approach may potentially
be extended to ad hoc networks, by considering the ad hoc
network to be a "link". However, this will also require a
mechanism for detecting when partitions merge, and each
ARP request will have to be flooded through the entire ad
hoc network (since the network is now considered to be a
single "link" for ARP purposes). However, the proposed
Weak DAD scheme behaves similarly to the scheme in [3] in
that both may delay duplicate detection until it becomes
necessary to ensure correct behavior.
Schurgers et al. [17] present a scheme for distributed assignment
of encoded MAC addresses in sensor networks.
Their objective is to assign MAC addresses containing a
small number of bits, using neighborhood information. They
observe that MAC addresses need not be unique on a network-wide
basis, and it is adequate to ensure that the MAC address
assigned to a node is unique within its two-hop neighborhood.
3. STRONG DUPLICATE ADDRESS DETECTION
This paper considers the problem of duplicate address detection
(DAD) in mobile ad hoc networks. We identify two
versions of duplicate address detection: strong DAD and
weak DAD. The definition of strong DAD attempts to capture
the intuitive notion of a "correct" or desirable behavior
of a DAD scheme. We later show that strong DAD is not
always achievable.
Before proceeding further, we would like to state two
simplifying assumptions, which can be relaxed with simple
changes to the proposed protocol:
. Presently, we ignore the issue of address reuse in our
discussion. However, the proposed schemes can be modified
easily to incorporate limited-time "leases" of IP
addresses, as elaborated later in Section 9.
. For simplicity of discussion, we assume that each node
in the wireless ad hoc network has a single interface,
and we refer to the address assigned to this interface
as the "node address". When a node is equipped with
multiple interfaces, the protocols presented here can
be easily adapted.
Informally, strong DAD allows detection of a duplicate
address "soon after" more than one node chooses a given
address. With strong DAD, if multiple nodes have chosen
a particular address at a given time, then at least one of
these nodes will detect the duplicate within a fixed interval
of time. 1 An alternative would be to require all nodes
to detect the duplication. We chose the less demanding re-
quirement, because even this requirement can be shown to
be impossible to satisfy (implying that the stronger requirement
is impossible as well). The proposed requirement of strong
DAD is defined more formally below:
Strong DAD: Let Ai(t) be the address assigned (tentatively
or otherwise) to node i at time t. Ai(t) is undefined
when node i has not chosen any address at time t. For each
address a ≠ undefined, define the set Sa(t) = { i : Ai(t) = a }.
That is, Sa(t) is the set of nodes that are assigned address a
at time t. A strong DAD algorithm must ensure that, within
a finite bounded time interval after t, at least one node in
Sa(t) will detect that |Sa(t)| > 1.
1 By repeated application of this requirement, when more
than two nodes have chosen the same address, eventually
all but one (if not all) nodes that have chosen the duplicate
address will detect the duplication.
We now argue that strong DAD is impossible under
certain conditions.
A Simple Observation: If partitions can occur for unbounded
2 intervals of time, then strong DAD is impossible.
The observation above is obvious and intuitive. The impossibility
result applies to the protocol in [16] as well. To
elaborate on the claim, let the ad hoc network be partitioned
into, say, two partitions, and remain so for an unbounded
interval of time. In this case, if two nodes in the two partitions
choose the same address a, no algorithm can detect
these duplicates within a bounded time interval, since the
nodes in the two partitions cannot communicate with each
other in a timely manner.
Perkins et al. [16] suggest that when partitions merge,
their DAD algorithm should be executed again in order
to detect duplicate addresses. They also suggest that this
should be performed in a way that avoids congestion caused
by messages sent for the purpose of duplicate address detection
(note that, when using [16], when partitions merge,
addresses of all nodes must be checked for duplicates). [16],
however, does not indicate how merging of partitions should
be detected, or how the congestion caused by DAD messages
may be reduced. Also, there remains a period of vulnerability
(after the partitions merge) because detection of merging
partitions, and the subsequent detection of duplicate
addresses requires some amount of time. During this time,
nodes that are assigned the same address may potentially receive
packets intended for each other. [13] handles partitions
by incorporating a procedure for detecting when partitions
occur, and when partitions merge. Detection of partitions,
and merging partitions, can be expensive and prone to delays
when message delays are unpredictable. The proposed
approach described in Section 4 does not need the use of
such partition detection procedures.
The theorem below generalizes the impossibility observation above.
Theorem: Strong DAD cannot be guaranteed if message
delays between at least one pair of nodes in the network are
unbounded.
Proof: Assume that there exists a node pair between which
message delay is unbounded. In particular, assume that the
delay between nodes X and Y is unbounded. Suppose that
nodes X and Y choose addresses a and b, respectively, at
time t. Assume that all other nodes have already chosen
unique addresses distinct from a and b. Now there are two
possibilities:
. a ≠ b: In this case, |Sa(t)| = |Sb(t)| = 1.
. a = b: In this case, |Sa(t)| = 2.
2 An unbounded time interval is one on which no bound exists.
Since message delays between nodes X and Y are unbounded,
node X cannot be guaranteed to receive within a bounded
interval of time a message from node Y (and vice versa). Now
consider any node Z such that delay in sending a message
from Y to Z is bounded. Then, the delay along any path
between Z and X must be unbounded; otherwise, a message
from Y to X could be delivered via Z in a bounded interval
of time, contradicting the above assumption. This implies
that node X cannot receive a message (within bounded time)
from any node whose state is causally dependent on node
Y's state at time t. In summary, the above argument implies
that node X cannot receive a message (within bounded time)
from node Y, or from another node whose state causally depends
on node Y's state. Essentially, node X will not receive
any message that will help it determine (within bounded
time) whether node Y is assigned address a or not. Sim-
ilarly, node Y cannot know whether node X has assigned
address a or not. In other words, nodes X and Y cannot
distinguish between the two possible cases (i.e., a ≠ b and
a = b). Therefore, strong DAD cannot be guaranteed if
message delays between a node pair are unbounded. □
The theorem above states that unbounded delays preclude
strong DAD. However, when all message delays are bounded,
strong DAD can in fact be achieved. Note that the bounded
delay assumption requires that any partitions last only for
a bounded duration of time (i.e., for this assumption to be
valid, the message delays need to be bounded despite any
temporary partitions that may occur). Under the bounded
delay assumption, the DAD schemes proposed in [2, 16] can
perform strong DAD, by choosing a large enough timeout
interval (the scheme in [16] is described in Section 2). However,
in practice, particularly in the presence of partitions, it
may not be possible to bound message delays. The impossibility
of strong DAD under conditions that are likely to
occur in a practical ad hoc network motivates us to consider
a weaker version of duplicate address detection.
4. WEAK DUPLICATE ADDRESS DETECTION
Delays in ad hoc networks are not always bounded. Even
if the message delays were bounded, determining the bound
is non-trivial (particularly when the size of the network may be
large and possibly unknown). The impossibility of strong DAD
in the presence of unbounded delays implies that timeout-based
duplicate address detection schemes such as [2, 16] will not
always detect duplicate addresses.
Motivated by the above observations, we propose Weak
Duplicate Address Detection as an alternative to strong
DAD. Weak DAD, unlike strong DAD, can be achieved despite
unbounded message delays. The proposed weak DAD
mechanism can be used either independently, or in conjunction
with other schemes, such as [16].
Weak DAD relaxes the requirements on duplicate address
detection by not requiring detection of all duplicate ad-
dresses. Informally, weak DAD requires that packets "meant
for" one node must not be routed to another node, even if
the two nodes have chosen the same address (we will soon
make the definition more formal). This is illustrated now by
an example. In Figure 1(a), the nodes belong to two par-
titions. In the partition on the left, node A has chosen IP
address a, and in the other partition, node K has chosen IP
address a as well. Initially, as shown in Figure 1(a), packets
from node D to node A travel via nodes E and C. Note
that the packets from node D to node A are routed using
the destination IP address a included in the IP packet header.
Figures 1(b) and 1(c) both show the network after the
two partitions have merged. However, the behavior in the
two figures is different. In Figure 1(b), after the partitions
merge, packets from node D with destination address a get
routed to node K (previously they were routed to node A),
whereas in Figure 1(c), the packets continue being routed to
node A even after the partitions merge.
We suggest that the situation in Figure 1(b) is not ac-
ceptable, while the situation in Figure 1(c) may be toler-
ated. Essentially, we suggest that duplicate addresses may
be tolerated so long as packets reach the destination node
"intended" by the sender, even if the destination's IP address
is also being used by another node. We now present a
somewhat more formal definition of weak DAD (note that,
in a subsequent section, we will present a modified version,
named Enhanced Weak DAD, which removes a shortcoming
of the Weak DAD approach, as elaborated later):
Weak DAD: Let a packet sent by some node, say node
X, at time t to destination address a be delivered to node
Y that has chosen address a. Then the following condition
must hold even if other nodes also choose address a:
. After time t, packets from node X with destination
address a are not delivered to any node other than
node Y.
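The weak DAD condition above can also be phrased as a simple check over a trace of packet deliveries. The following is a minimal illustration of the property, not part of the proposed protocol; the trace format is our own assumption, used only to make the definition concrete.

def satisfies_weak_dad(trace):
    """trace: list of (sender, dest_addr, delivered_to_node) events in time order.
    Weak DAD requires that, for a given sender and destination address, packets
    are never delivered to two different nodes over the course of the trace."""
    first_receiver = {}
    for sender, dest_addr, receiver in trace:
        key = (sender, dest_addr)
        if key in first_receiver and first_receiver[key] != receiver:
            return False
        first_receiver.setdefault(key, receiver)
    return True

# Figure 1(c): D's packets to address a keep reaching A after the merge -> OK.
print(satisfies_weak_dad([("D", "a", "A"), ("D", "a", "A")]))   # True
# Figure 1(b): after the merge they start reaching K instead -> violation.
print(satisfies_weak_dad([("D", "a", "A"), ("D", "a", "K")]))   # False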
Using a weak DAD mechanism, it can be guaranteed that
packets sent by a given node to a particular address are not
delivered over time to two different nodes even if both are
assigned the same address. For instance, in Figure 1(c),
packets from node D sent to address a will reach node A
after the partitions merge, as they did before the partitions
merged. We now present a weak DAD scheme with the
following design goals:
. Address size cannot be made arbitrarily large. There-
fore, for instance, the MAC address cannot be embedded
in the IP address.
. IP header format should not be modified. For instance,
we do not want to add new options to the IP header.
. Contents of routing-related control packets (such as
link state updates, route requests, or route replies)
may be modified to include information pertinent to
DAD.
. No assumptions should be made about protocol layers
above the network layer.
The proposed approach for weak DAD is implemented by making
some simple changes to the routing protocol. The weak
DAD scheme described below is based on link state routing
[9, 10]. Section 6 later explains how weak DAD can be
performed in conjunction with the Dynamic Source Routing
(DSR) protocol [8]. Note that weak DAD can potentially
be performed in conjunction with other routing protocols as
well; we use link state routing and DSR only as examples.
Figure 1: An Example: Nodes A and K choose the same IP address. In our figures, a link shown between a pair of nodes implies that they can communicate with each other on the wireless channel. (Panels (a), (b), and (c), showing the partitioned network and the two possible behaviors after the partitions merge, are not reproduced here.)
Intuition Behind Weak DAD Implementation
We assume that each node is pre-assigned a unique "key".
When the MAC address of an interface is guaranteed to be
unique, the MAC address may be used as the key. Alternatively,
each node may pick a random key containing a
sufficiently large number of bits so as to make the probability
of two nodes choosing the same key acceptably small.
Another alternative is to derive the key using some other
information (for instance, manufacturer's name and device
serial number together may form a key, provided the serial
numbers are unique). 3
Given a unique key, a unique IP address can be created
simply by embedding the key in the IP address. A similar
approach has been suggested previously for IP version 6.
However, in IP version 4, the number of address bits is relatively
small, and it may not be possible to embed the key
3 If public-key cryptography is being employed at the network
layer, then the public key of a node, presumably
unique, may also be used in our DAD procedure. Any mechanism
to derive a unique key will suffice.
in the IP address. In this paper, we assume that it is not
possible to embed the key in an IP address. The proposed
weak DAD scheme uses the key for the purpose
of detecting duplicate IP addresses, without actually embedding
the key in the IP address itself. Note that, we do
not make any changes to the IP header, and forwarding decisions
are, as usual, made using the IP destination address
in the header of IP packets.
Weak DAD with Link State Routing
In its simplest version, the link state routing protocol maintains
a routing table at each node with an entry for each
known node in the network. For each destination node, the
entry contains the "next hop" or the neighbor node on a
route to that destination. The next hop neighbor may be
determined to minimize a suitable cost function. To help
determine the next hops, each node periodically broadcasts
the status of all its links which is propagated to all the nodes
in the network (several optimizations may be used to reduce
the overhead of link state updates [7]). Each node uses the
link status information received from other nodes to determine
the network topology, and in turn, the next hop on
the shortest path (i.e., lowest cost) route to the destination
(for instance, using Dijkstra's shortest path algorithm [9]).
Figure 2(a) shows an example link state packet that may be
transmitted by node D in that figure, and also an example of
the routing table at node D. In the figure, IP X denotes an
IP address. In the link state packet, each row corresponds
to the status of one link.
We augment the link state routing algorithm as follows.
In each link state packet, each node's address is tagged by its
key. Thus, if the link state packet includes cost information
for link (IP X, IP Y), then the keys K X and K Y of nodes
with address IP X and IP Y, respectively, are also included
in the link state packet. Figure 2(b) shows the link state
packet obtained by adding these keys to the link state packet
shown in Figure 2(a).
Let some node Z receive a link state packet that includes
the (node address, key) pair (IP X, K X). Then node Z
checks if any entry in its routing state (i.e., routing table
and cached link status information) contains address IP X.
Assume that an entry for address IP X is found in Z's routing
state, and the key associated with this entry is k. Now, if
k ≠ K X, then node Z concludes that there must exist two
nodes whose address is IP X; the fact that there are two
different nodes with the same address is identified due to the
differences in their keys. At this point, node Z invalidates
the routing state associated with address IP X, and takes
additional steps to inform other nodes about the duplicate
addresses. 4 If, however, k = K X, or an entry for IP X is not
found in the routing state at node Z, then normal processing
of link state information occurs at node Z, as required by
the link state routing algorithm.
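The per-entry check that node Z performs can be stated compactly. The sketch below is a minimal model of that check, not the paper's implementation; the routing-state layout and the invalidate_routes and announce_duplicate hooks are hypothetical placeholders.

class LinkStateRouter:
    # Routing state is modeled as a dictionary mapping an IP address to the
    # key last associated with it in received link state packets.
    def __init__(self):
        self.key_of = {}          # ip_addr -> key cached from earlier updates
        self.duplicates = set()   # addresses detected as duplicated

    def process_entry(self, ip_addr, key):
        """Handle one (address, key) pair carried in a link state packet."""
        cached = self.key_of.get(ip_addr)
        if cached is not None and cached != key:
            # Two different keys seen for the same address: two distinct
            # nodes must be using ip_addr. Invalidate and report.
            self.duplicates.add(ip_addr)
            self.invalidate_routes(ip_addr)
            self.announce_duplicate(ip_addr)
            return False
        # First time this address is seen, or the key matches: normal
        # link state processing continues.
        self.key_of[ip_addr] = key
        return True

    def invalidate_routes(self, ip_addr):
        # Placeholder: drop routing table entries and cached link state for ip_addr.
        self.key_of.pop(ip_addr, None)

    def announce_duplicate(self, ip_addr):
        # Placeholder: inform other nodes, e.g., via a control message.
        pass

# Example: node Z first learns (a, K_W) from node W's links, and later
# receives (a, K_X) from node X's links; the mismatch flags address a.
z = LinkStateRouter()
z.process_entry("a", "K_W")
assert z.process_entry("a", "K_X") is False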
With the above modification, a node, say, node Z, that
has previously forwarded a packet for destination address a
towards one node, say, node W, will never forward a packet
for destination address a towards another node, say, node
X, even if W and X are both assigned address a. To see
this, observe that, initially node Z's routing table entry for
address a leads to node W. Thus, initially, node Z must have
the key of node W associated with address a in its routing
table (because node Z would have previously received the
status of links at node W). Now, before Z's routing table
entry for address a is updated to lead to node X, a link state
update containing the status of links at node X (and, therefore,
also node X's key) will have to be received by node Z. When
such a packet is received by Z, the mismatch in the keys of
nodes W and X will allow node Z to detect duplication of
address a.
With the above modification to link state routing, weak
DAD can be achieved provided that MAC addresses assigned
to the nodes are unique. However, if two nodes are assigned
the same MAC address, the above protocol may fail
to achieve weak DAD. Although an attempt is made to assign
unique MAC addresses to each wireless device, MAC
4 Invalidating the relevant routing state at node Z is sufficient
to satisfy the Weak DAD requirement. In fact, it is also
sufficient to simply ignore the information received with a
mismatching key for node X. However, invalidating the routing
state for IP X and informing other nodes about the detected
duplicates can speed up duplicate detection at other
nodes, as well as reassignment of new addresses to the nodes
with duplicate addresses.
addresses may sometimes be duplicated on multiple devices
[13, 4, 11]. Therefore, it is worthwhile to consider this pos-
sibility. Our protocol described above can run into trouble
if two nodes (within two hops of each other) are assigned
identical MAC address, and two nodes (possibly, but not
necessarily, the same two nodes) have identical IP address.
We now illustrate this problem, and then suggest some solutions
to alleviate this.
Consider Figure 3(a). In this case, initially the nodes are
partitioned into two partitions. Nodes P and Q both have
MAC address m. Similarly, nodes M and R both have IP
address a. The next hop for destination IP address a in node
A's routing table is node P with IP address b and MAC
address m. Figure 3(a) shows the corresponding entry in
the routing table maintained at node A. Now, let us assume
that the two partitions come closer as shown in Figure 3(b).
Assume that node A wants to send a packet destined to
node M (the destination IP address in this packet's header
will be a). Recall that the next hop for destination a as per
A's routing table is b. Now, node A knows that the MAC
address corresponding to IP address b is m (node A has
learned this previously). Therefore, node A will transmit a
frame for MAC address m. This frame transmitted by node
A is intended for node P (with IP address b). However,
now node Q is in the vicinity of node A, and node Q also
has MAC address m. Thus, nodes P and Q will both accept
the frame and forward it to their corresponding network layers.
The network layers at nodes P and Q will thereafter forward
the packet (according to their local routing table entries)
towards nodes M and R, respectively. In this manner, both
nodes M and R will receive the packet sent by node A, which
is unacceptable as per the requirements of weak DAD.
Note that the above problem arises because two nodes (M
and R) have the same IP address, and two nodes (P and
Q) have the same MAC address - in particular, the two
nodes sharing the same MAC address are neighbors to a
common node (node A). It is easy to see that, for the above
problem to occur, two nodes having the same MAC address
must be within two hops of each other (i.e., they must have
a common neighbor). Therefore, if a node's MAC address
can be guaranteed to be unique within two hops, then the
problem described above will not occur. We now suggest
two solutions to overcome the above problem.
. Detect duplicate MAC addresses: Since the problem
described above only occurs if two nodes within two
hops of each other use the same MAC address, a simple
solution can be devised to detect the duplicate
MAC addresses (similar to a solution in [17]). This
solution requires each node to periodically broadcast
a "beacon" containing information useful for detecting
duplicate MAC addresses within two hops. For the
purpose of detecting duplicate MAC addresses, each
node chooses a MAC-key. The MAC-key is not necessarily
the same as the key used for weak DAD - to
distinguish between the two, the term "key" (without
the prefix "MAC-") refers to the key used previously in
the weak DAD procedure. (A sketch of the beacon-based check
described in this option appears after this list.)
Figure 2: Link State Routing. (The two panels, not reproduced here, show the link state packet transmitted by node D and the routing table at node D: panel (a) without keys, and panel (b) with each address tagged by its key.)
Two options exist for choosing the MAC-key: (a) the
first option is to somehow choose a unique MAC-key
for each device (for instance, as a function of the
manufacturer's serial number); (b) the second option is
to choose the MAC-key randomly. In the former case,
when the MAC-key is known to be unique, the MAC-key
assigned to a device need not change over time. In
the second case, where the MAC-key is chosen ran-
domly, a new MAC-key is chosen by a node when
transmitting each of its beacons. When a node transmits
a beacon, it includes the pair (own MAC address,
own MAC-key) in the beacon. When a node X hears
a beacon from another node Y, node X records the
(MAC address, MAC-key) pair for node Y. When node X
transmits a beacon, in addition to its own MAC address
and MAC-key, node X also includes the (MAC address,
MAC-key) pair for each of its known neighbors.
Note that, when MAC-keys are chosen randomly, each
beacon contains a new MAC-key for the sender, which
must be recorded by nodes that receive the beacon.
When some node, say P, hears a beacon containing a
(MAC address, MAC-key) pair that contains its own
MAC address but a MAC-key that it has not used
recently, 5 node P detects presence of a duplicate MAC
address within two hops.
In the case when MAC-keys are unique by design, the
MAC-keys of two nodes are guaranteed to be different.
In the latter case, where MAC-keys are chosen
randomly, even if the MAC-keys of two nodes happen
to be identical for a given beacon interval, over time,
they will become different, since each node picks a new
random MAC-key for each of its beacons.
The above procedure for detecting duplicate MAC addresses
has a "window of vulnerability" of the order of
the beacon interval during which nodes with duplicate
addresses may receive each other's packets. Once a
duplicate MAC address is detected, one of the nodes
with the duplicate address must choose a new MAC
address.
5 "Recently" may be defined as within the last k beacon
intervals, for some small k.
. The second solution does not satisfy our goal that IP
headers remain unchanged. This approach changes the
procedure for calculating and verifying the IP header
checksum. Specifically, this approach would utilize the
unique key of the destination node in calculating the
header checksum for IP packets. Thus, in the scenario
illustrated in Figure 3, the IP header checksum for the
packet sent by node A to node M will be calculated
using node M's unique key K M. Now, as discussed
earlier, this packet may indeed be delivered to node
R, since node M and R both have the same IP ad-
dress. However, the checksum will likely fail at node
R resulting in packet discard. Thus, only the intended
destination, node M, will actually deliver the packet
to upper layers. Note that the key of the destination
node is available in the routing table, as seen in the
example of link state routing protocol.
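To make the beacon-based option above concrete, here is a minimal sketch of the check with randomly chosen MAC-keys. It is only an illustration under our own simplifying assumptions; in particular, the beacon layout and the fixed window of two remembered MAC-keys are not taken from the paper.

import random

RECENT_WINDOW = 2  # MAC-keys from the last k beacon intervals count as "recent" (assumption)

class Node:
    def __init__(self, mac):
        self.mac = mac
        self.recent_keys = []        # own MAC-keys used recently
        self.neighbor_pairs = {}     # neighbor MAC -> last heard MAC-key
        self.duplicate_detected = False

    def make_beacon(self):
        """Pick a fresh random MAC-key and advertise own pair plus known neighbors'."""
        key = random.getrandbits(32)
        self.recent_keys = (self.recent_keys + [key])[-RECENT_WINDOW:]
        return [(self.mac, key)] + list(self.neighbor_pairs.items())

    def hear_beacon(self, pairs):
        """Process a beacon heard from a one-hop neighbor."""
        sender_mac, sender_key = pairs[0]
        self.neighbor_pairs[sender_mac] = sender_key
        for mac, key in pairs:
            # A pair carrying our own MAC address but a MAC-key we did not use
            # recently means another node within two hops shares our MAC address.
            if mac == self.mac and key not in self.recent_keys:
                self.duplicate_detected = True

# Example: P and Q share MAC address "m" and have a common neighbor A.
p, q, a = Node("m"), Node("m"), Node("a")
a.hear_beacon(q.make_beacon())   # A records ("m", Q's MAC-key)
p.make_beacon()                  # P uses its own, almost surely different, random key
p.hear_beacon(a.make_beacon())   # A relays ("m", Q's key); P sees a foreign key for "m"
print("P detected duplicate MAC:", p.duplicate_detected)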
5. ENHANCED WEAK DUPLICATE ADDRESS DETECTION
Weak DAD described above suffers from one shortcoming
which may manifest itself in unexpected behavior of upper
layer protocols. The shortcoming is now illustrated using
Figure 3. For this discussion, let us ignore the MAC addresses
shown in Figure 3, and pretend that all MAC addresses
are unique. Consider Figure 3(a). Let us assume
that the application layer at node R provides a certain service
called Foo. While the network is partitioned, as in Figure
3(a), node E performs a service discovery for service Foo,
and discovers that the node with IP address a (i.e., node R)
provides this service. Application layer at node E records
the mapping between service Foo and IP address a. Now
Figure 3: Problem caused by duplicate MAC and duplicate IP addresses. (Panels (a) and (b), not reproduced here, show the network before and after the partitions merge; the entry in node A's routing table for destination a is (Dest a, Key K_M, Next Hop b).)
the two partitions merge as shown in Figure 3(b). After the
partitions merge, the application layer at node A performs
a service discovery for service Foo, and learns from node E
that this service is available at a node with IP address a.
Thereafter, node A sends a service request to IP address a.
This request is delivered to node M (recall that node A's
routing table entry for address a leads the packet along the
route A-P-F-M to node M). Thus, the request is not delivered
to node R that actually provides service Foo. In this
case, node A would not receive the requested service. The
above scenario can occur despite Weak DAD. 6 This scenario
could potentially be dealt with by the application software
(i.e., by the service client) or the service discovery mecha-
nism. However, in the following, we consider an approach
at the network layer to address this problem. 7
The unintended outcome above occurs because node A relies
on information provided by node E, but nodes A and E
know two different nodes that are both assigned address a.
Thus, the state at nodes A and E is inconsistent. Note that
6 Node A may eventually discover duplication of address a.
For instance, in this example, when using link state routing,
node A may eventually receive a link state update from node
R. At that time, node A would detect duplication of address
a. However, until node A learns of the duplication, requests
from A may still reach the wrong node. Also, when using
other routing protocols, duplication of address a may not be
detected for longer intervals of time.
7 The problem described here in the context of Weak DAD
can occur with other duplicate address detection schemes as
well. Past work has not considered this issue. For instance,
when using [16], let us assume that two partitions of nodes
exist. Also assume that IP address a is in use in each
partition. Now suppose that node X learns of service Foo
at a node with IP address a while being in one partition.
Thereafter, node X moves to the second partition. Now,
if node X attempts to use service Foo, its request would
be delivered to a node in the second partition, while the
service is actually provided by a node in the first partition.
The enhancement described here avoids such problems.
scenarios similar to the above can also occur when information
is propagated through several nodes (for instance, node
F in Figure 3(b) may later learn of service Foo from node
A).
To avoid the above situation, if any layer above the network
layer at some node, say node X, is delivered a packet
from another node (potentially several hops away), then the
network layer at node X must be aware of all (IP address,
key) pairs known to the sender of the packet. 8 This will ensure
that protocol layers above the network layer (i.e., above
routing protocol) at node X will not use a packet sent by
another node whose (IP address, key)-pair database is inconsistent
with that at node X. A modified version of Weak
DAD which can satisfy the above condition is referred to
as "Enhanced Weak DAD". Enhanced Weak DAD can be
implemented by taking the following step in addition to the
Weak DAD scheme described earlier.
. An IP packet sent by a node, say X, is said to be
an "upper layer packet" if it encapsulates upper layer
data. That is, an upper layer packet contains data
generated by a layer above the network layer, either at
node X or another node. In the latter case, the packet
is simply being forwarded by node X to the next hop
on the route to its destination.
Link state packets sent by the network layer at a node
(when using link state routing) are not upper layer
packets. In some protocols, route discovery is performed
by flooding the network with route requests.
In such cases, a node transmits a route request which
is received by the network layer at neighbor nodes.
8 In other words, if the state at node X above the network layer
causally depends on the state of node Y, then the (IP address,
key) pairs known to node Y should be consistent with
the (IP address, key) pairs known to node X. The
enhanced weak DAD procedure described here attempts to
achieve this goal.
The network layer at such a neighbor node, say node
X, may decide to send (i.e., forward) the route request
to X's neighbors. These route requests are also not
considered upper layer packets at node X, since the IP
packet containing the forwarded request is sent by the
network layer at node X (i.e., not by an upper layer).
For each neighbor, node X keeps track of any new (IP
address, key) pairs it may have learned since last sending
an upper layer packet to that neighbor (a sketch of this
bookkeeping appears after this list). One possible
approach for implementing this would be to maintain
a sequence number at node X, which would be incremented
each time node X learns a new (IP address, key)
pair. The (IP address, key) pairs cached at node
X should be tagged by this sequence number when
the pair was received by node X. Also, for each neighbor
node, node X would record the sequence number
when node X last updated the neighbor with the (IP
address, key) database at node X. Before sending an
upper layer packet to a neighbor Y, node X first verifies
whether it has updated node Y with all known (IP
address, key) entries: if the sequence number SY when
node Y was last updated is smaller than the current
sequence number at node X, then node X first sends
to node Y all (IP address, key) entries in its database
which are tagged with a sequence number greater than
SY .
Since the frequency with which new (IP address, key)
pairs are learned is likely to be low, most IP packets'
processing will not incur any additional overhead, except
to compare the current sequence number at node
X with that recorded for the neighbor to which the
packet is being sent. 9
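The sequence-number bookkeeping described in the bullet above can be sketched as follows. This is a minimal model under our own assumptions; the send_pairs and send_packet transport hooks are hypothetical placeholders, and real packets would carry more than shown here.

class EnhancedWeakDadNode:
    # Per-neighbor synchronization of the (IP address, key) database,
    # driven by a single local sequence number.
    def __init__(self):
        self.seq = 0                 # incremented on every newly learned pair
        self.pairs = {}              # ip_addr -> (key, seq at which it was learned)
        self.neighbor_seq = {}       # neighbor id -> seq when it was last updated

    def learn_pair(self, ip_addr, key):
        """Record a newly learned (IP address, key) pair, tagged by the sequence number."""
        self.seq += 1
        self.pairs[ip_addr] = (key, self.seq)

    def send_upper_layer_packet(self, neighbor, packet):
        """Before sending, push any pairs the neighbor has not yet been told about."""
        last = self.neighbor_seq.get(neighbor, 0)
        if last < self.seq:
            delta = {ip: key for ip, (key, s) in self.pairs.items() if s > last}
            self.send_pairs(neighbor, delta)          # placeholder transmission
            self.neighbor_seq[neighbor] = self.seq
        self.send_packet(neighbor, packet)            # placeholder transmission

    def send_pairs(self, neighbor, delta):
        print(f"update to {neighbor}: {delta}")

    def send_packet(self, neighbor, packet):
        print(f"packet to {neighbor}: {packet}")

x = EnhancedWeakDadNode()
x.learn_pair("a", "K_M")
x.send_upper_layer_packet("Y", "data1")   # pushes {"a": "K_M"} first
x.send_upper_layer_packet("Y", "data2")   # no new pairs, packet only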
With the above modifications, the problem described in the
context of service discovery (or other similar problems) can
be prevented. One concern with the enhanced version is the
size of the (IP address, key) database to be exchanged between
neighbors. This database may grow in size over time,
potentially increasing the overhead of enhanced weak DAD.
Limited-term leases on IP addresses (as discussed in Section
9) can help reduce the overhead somewhat. However,
it is worthwhile investigating other approaches as well.
In the above, we discussed implementation of weak DAD
in conjunction with link state routing. The next section
9 An alternative to keeping track of (IP address, key) updates
on a per-neighbor basis would be as follows: Whenever
a node discovers a new neighbor, it should exchange all
known (IP address, key) pairs with this neighbor. Also, before
sending any upper layer packet to a neighbor, the node
should ensure that all neighbors are updated with any new
(IP address, key) pairs it learned since sending the last upper
layer packet. With this modification, it is not necessary to
maintain a sequence number as described earlier. However,
the node now needs to exchange (IP address, key) pairs with
more nodes than in the previous approach, thereby trading
the overhead of maintaining the sequence numbers with the
overhead of additional traffic between neighbors. Since a
node X may not always know all its neighbor nodes, when
packets are promiscuously received by some neighbor Z, it
is also necessary to incorporate a mechanism to allow neighbor
Z to determine whether node X has previously sent all
(IP address, key) pairs known to X. A mechanism based on
sequence numbers may potentially be used.
briefly discusses implementation of weak DAD in conjunction
with Dynamic Source Routing [8].
6. WEAK DUPLICATE ADDRESS DETECTION WITH DSR
Dynamic Source Routing (DSR) [8] is a reactive protocol
that discovers routes on an as-needed basis. DSR contains
several optimizations to reduce the overhead of route discovery;
however, for brevity, we describe a simplified version of
DSR. The simplified version suffices to illustrate how Weak
DAD may be achieved in conjunction with DSR and other
similar protocols. With DSR, packets are source-routed;
thus, the source node includes the entire route in the packet
header. When a node X needs to send a packet to a node
Z, but does not know a route to node Z, node X performs
a route discovery, by flooding the network with a route request
packet. When a node Y receives a route request packet
associated with a route discovery for node Z, node Y adds
its own IP address to the packet and forwards the packet
to its neighbors (node X initially stores its address in the
route request packet). In this manner, if node Z is reachable
from node X, eventually the route request will reach
node Z (perhaps along multiple routes). A route request
received by node Z will contain the route taken by the route
request. When links are bi-directional, node Z can send a
route reply to node X by reversing the route included in the
route request. On receipt of the route reply, node X learns
a route to node Z (the route is included in the route reply).
DSR also incorporates other mechanisms by which a node
may learn routes (for instance, a node may learn a route by
overhearing packets transmitted by other nodes, since the
source route is included in each packet). The routing information
known to a node is stored in its local route cache.
DSR uses route error messages to inform nodes about link
failures. To incorporate Enhanced Weak DAD into DSR,
the following steps should be taken:
. Each route request accumulates a route (from X to Z
in the above example) as it makes progress through the
network. When a node adds its own IP address to the
route request, it should also include its key. Thus, for
each node on the route taken by the route request, it
will contain the (IP address, key) pair for that node.
Similarly, for each IP address appearing in the body
(not IP header) of any routing-related message, such
as route reply or route error, the key associated with
the IP address should also be included.
. The additional step described in Section 5 should be
incorporated as well. Recall that the procedure in Section
5 requires the use of a sequence number to track
new (IP address, key) information at a node. To work
correctly with DSR, the current sequence number at a
node should be included by a node when sending DSR
routing-related messages (Route Request, Route Reply
or Route Error). When a node Y receives, say, a Route
Request from its neighbor X (possibly by promiscuous
listening), node Y determines if its last known sequence
number for node X matches the sequence number
received with the route request (a sketch of this check
appears after this list). If the sequence numbers
do not match, then node Y should first request an
update of the (IP address, key) database from node X.
Note that node Y may or may not be the intended recipient
of the Route Request from X, since DSR allows
promiscuous listening of such messages.
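The following sketch illustrates how a route request could accumulate (IP address, key) pairs and how a receiving node could check the sender's sequence number before using the request. It is only an illustration; the message fields shown are our own simplification, not the actual DSR packet format.

def make_route_request(src_ip, src_key, src_seq, target_ip):
    return {"target": target_ip,
            "route": [(src_ip, src_key)],   # accumulated (address, key) pairs
            "sender_seq": src_seq}          # sequence number of the transmitting node

def forward_route_request(req, my_ip, my_key, my_seq):
    fwd = dict(req)
    fwd["route"] = req["route"] + [(my_ip, my_key)]
    fwd["sender_seq"] = my_seq              # sequence number of the forwarding node
    return fwd

def receive_route_request(req, sender_id, known_pairs, last_seq_of):
    """Returns 'request-update' if the sender's database must be fetched first,
    'duplicate' if a key mismatch reveals a duplicate address, else 'ok'."""
    if last_seq_of.get(sender_id, -1) != req["sender_seq"]:
        return "request-update"             # ask sender for its (IP, key) database
    for ip, key in req["route"]:
        if ip in known_pairs and known_pairs[ip] != key:
            return "duplicate"              # same address, mismatching keys
        known_pairs[ip] = key
    return "ok"

# Example: Y already knows address "a" with key K_R; a request carrying
# (a, K_M) from an up-to-date neighbor X reveals the duplication.
known = {"a": "K_R"}
req = forward_route_request(make_route_request("a", "K_M", 7, "z"), "b", "K_P", 7)
print(receive_route_request(req, "X", known, {"X": 7}))   # -> duplicate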
When a node becomes aware of two (IP address, key) pairs
with identical IP addresses but mismatching keys, it detects
address duplication. With the above modifications, DSR
will function correctly, with the exception of promiscuous
listening of source routes included in data packets (promis-
cuous listening of routing-related packets is taken into con-
sideration, as discussed above). If promiscuous listening of
source routes in data packets is also to be enabled, then either
the sequence numbers should be included in the source
routes in the data packets, or each node must update all its
neighbors with all its (IP address, key) pairs before sending
an upper layer (data) packet. The first solution requires
modification to the packet format to include keys along with
the source route, and the second solution is also difficult to
implement. In particular, in DSR, data packets may be
promiscuously received by all neighbors of the transmitter.
Due to mobility, a node is not always aware of all its
neighbors, and it is difficult to guarantee that the (IP address, key)
database is updated at all neighbors at all times. Thus, for
DSR we recommend that source routes learned by promiscuous
listening of routing-related packets be utilized, but
the routes learned by promiscuous listening of data packets
should not be used, unless it can be verified that the overhearing
host is aware of the (IP address, key) database at
the transmitting node.
7. A HYBRID DAD SCHEME
A hybrid DAD scheme may be obtained by combining
the (enhanced) weak DAD scheme described above with the
timeout-based mechanism in [16] or other DAD schemes.
In particular, as proposed in [16] (and as summarized in
Section 2), a node wishing to assign itself an address may
choose an address randomly (or using locally available infor-
mation). Then, the node may send a "route request" message
for the randomly chosen address. If a "route reply" is received
within a timeout interval, then it is determined that
the chosen address is already in use. If a route reply is not
received, then the route request may optionally be sent a few
more times. If after this, no route reply is still received, then
the node may assign itself the chosen address. However, in
the hybrid scheme, the provisions of (enhanced) weak DAD
are also incorporated. Thus, the timeout-based mechanism
will detect duplicate addresses within a single partition with
a high probability. For any duplicate addresses that escape
this mechanism, the provisions of the weak DAD scheme will
ensure that the duplicate addresses will be detected eventually.
The hybrid scheme may potentially detect some duplicate
addresses sooner than using weak DAD alone, and the
use of weak DAD makes it robust to partitions and large
message delays unlike the scheme in [16]. Thus, the hybrid
scheme can provide benefits of both the component schemes.
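A minimal sketch of how the two components of the hybrid scheme might be combined at address-assignment time is shown below. Only the overall flow (probe with timeout and retries, then rely on weak DAD regardless) follows the description above; the retry and timeout parameters, the address space, and the network primitives are our own placeholders.

import random, time

PROBE_RETRIES = 3      # route requests sent per tentative address (our assumption)
PROBE_TIMEOUT = 0.1    # seconds waited for a route reply per attempt (our assumption)

def choose_tentative_address():
    return random.randint(1, 254)          # placeholder address space

def hybrid_assign_address(send_route_request, route_reply_received):
    """Timeout-based probe in the spirit of [16]; weak DAD remains active afterwards."""
    while True:
        addr = choose_tentative_address()
        in_use = False
        for _ in range(PROBE_RETRIES):
            send_route_request(addr)
            time.sleep(PROBE_TIMEOUT)      # placeholder for waiting on replies
            if route_reply_received(addr):
                in_use = True              # some node in this partition answered
                break
        if not in_use:
            # Assign the address; duplicates that escaped the probe (for example,
            # in another partition) are later caught by the weak DAD provisions.
            return addr

# Example with stub network primitives: address 42 is already taken locally.
taken = {42}
print("assigned address:", hybrid_assign_address(lambda a: None, lambda a: a in taken))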
8. FLOODING-BASED ROUTING PROTOCOLS
We believe that Enhanced Weak DAD can be implemented
in conjunction with most routing protocols, with one notable
exception. Typically, when a packet is forwarded from
a source to a destination, on each hop it is unicast to the
next node on the route to the destination. For instance, in
Figure
3(a) where link state routing is assumed, when node
A sends a packet to address a, it will be unicast by node
A to node P, then unicast by node P to node F, and finally
unicast by node F to node M. When a packet is forwarded on
each hop by unicasting, the Enhanced Weak DAD scheme
works satisfactorily.
Enhanced Weak DAD does not work as expected when
flooding is used as the routing protocol - here we refer
to flooding of data packets, not flooding of routing-related
control packets (such as "route requests" used in DSR [8]
or AODV [15] routing protocols). Weak DAD works correctly
even if routing-related control packets (such as route requests)
are flooded, but not if data packets are flooded.
When data packets are flooded (without using hop-by-hop
forwarding via unicasts), no explicit routing information
(e.g., routing table [15, 7] or route cache [6]) is used for the
purpose of routing the data packets to their destinations.
Therefore, the weak DAD mechanism, which relies on mismatching
keys associated with routing information, cannot
help to detect duplicate addresses.
As such, due to its high overhead, flooding is an unlikely
choice for routing data packets to their destinations. How-
ever, some researchers have proposed selectively piggybacking
data packets on routing control packets, such as route
requests, which may be flooded. Other protocols which
use limited flooding to deliver data have also been proposed
[1]. In such protocols, some data packets may be flooded
through a part (or all) of the network. This may result in
the data packet being delivered to multiple nodes that may
have assigned themselves the same IP address, violating the
requirements of weak DAD.
The problem of developing a satisfactory duplicate address
detection scheme that works despite unbounded message
delays when using flooding-like routing protocols remains
open. When using such flooding-based schemes for
data delivery, a partial solution is to require each node to
flood its (IP address, key) pair through the network period-
ically. This scheme, however, has two drawbacks: (a) high
overhead due to periodic flooding of keys, and (b) a period
of vulnerability (during which packets may be delivered to
multiple nodes) that grows with the time interval between
consecutive floods of (IP address, key) pairs. Thus, an attempt
to reduce the flooding overhead increases the period
of vulnerability.
To reiterate a point mentioned earlier, Enhanced Weak
DAD does perform correctly when routing-related control
packets are flooded, and therefore, the proposed technique
is adequate for most existing routing protocols.
9. ADDRESS REUSE
One issue of practical interest has not been considered
in this paper so far. With DHCP [5], a host is assigned
(or leased) an IP address for a finite interval of time. In
our discussion above, we did not consider the issue of such
finite-time leases. For instance, if host X is assigned address
a at time t1 for the duration td, then some other host
should be able to use address a from time t1 + td onwards
(perhaps after some additional delay to allow clean-up of associated
network state). Without some modifications, the weak DAD
scheme above will consider the reassignment of the address
to another host after time t1 + td as a duplicate assignment.
Let us assume that each node voluntarily decides to give up
its "lease" on its IP address after some duration t d (which
may be di#erent for di#erent nodes). We now briefly suggest
a procedure for allowing reuse of an address after its current
lease has expired.
To implement the finite-time lease, we can associate "re-
maining lifetime" with the keys advertised in routing-related
packets (such as link state updates). The remaining lifetime
of a key is initially set by a host to be equal to the duration of
its address lease. These remaining lifetimes are also recorded
by the hosts (along with the keys). The remaining lifetime
associated with each key stored at any node decreases with
time, and when it reaches 0, any routing state associated
with the corresponding IP address (e.g., link state, route
cache entries) is invalidated. This approach can be used to
allow a node to assign itself an address for a finite duration
of time, and allow re-use of this address thereafter. Thus,
the weak DAD scheme will only detect an address as a duplicate
if it is reused before a previous assignment of that
address has expired.
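The remaining-lifetime mechanism can be sketched as follows. The per-entry bookkeeping shown here is our own minimal model; the data structures and the periodic expire sweep are assumptions, not the paper's implementation.

import time

class LeasedRoutingState:
    """Routing state in which every (address, key) entry carries a remaining lifetime."""

    def __init__(self):
        self.entries = {}   # ip_addr -> (key, absolute expiry time)

    def install(self, ip_addr, key, remaining_lifetime):
        # remaining_lifetime is initialized by the owner to its address lease duration.
        self.entries[ip_addr] = (key, time.time() + remaining_lifetime)

    def expire(self):
        # When an entry's remaining lifetime reaches 0, the associated routing
        # state is invalidated, which permits later reuse of the address.
        now = time.time()
        for ip_addr in [a for a, (_, t) in self.entries.items() if t <= now]:
            del self.entries[ip_addr]

    def check(self, ip_addr, key):
        """Duplicate only if the address is reused while a previous lease is still live."""
        self.expire()
        if ip_addr in self.entries and self.entries[ip_addr][0] != key:
            return "duplicate"
        return "ok"

state = LeasedRoutingState()
state.install("a", "K_X", remaining_lifetime=0.1)   # X leases address a briefly
print(state.check("a", "K_Y"))                      # -> duplicate (lease still live)
time.sleep(0.2)
print(state.check("a", "K_Y"))                      # -> ok (lease expired, reuse allowed)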
10. CONCLUSIONS
This paper defines the notion of strong and weak duplicate
address detection (DAD). We argue that strong DAD is impossible
under conditions that may occur in practice. This
motivates our definition of weak DAD. The paper presents a
weak DAD scheme (and an enhanced version) that can detect
duplicate addresses even if the nodes that are assigned
duplicate addresses initially belong to different partitions.
An advantage of the proposed scheme is that it works despite
unbounded message delays. The proposed scheme can
be combined with existing mechanisms such as [16] to obtain
a dynamic address assignment scheme for ad hoc networks
that is responsive as well as robust to partitions and arbitrary
message delays.
Acknowledgements
The author thanks referees of this paper for their constructive
comments. Thanks are also due to Gaurav Navlakha
for his comments on the paper.
11.
--R
"A distance routing e#ect algorithm for mobility (DREAM),"
"E#cient network layer addressing for mobile ad hoc networks,"
"Dynamic configuration of IPv4 link-local addresses,"
"Duplicate MAC addresses on Cisco 3600 series,"
"Dynamic host configuration protocol,"
"Caching strategies in on-demand routing protocols for wireless ad hoc networks,"
"Optimized link state routing protocol,"
"Dynamic source routing in ad hoc wireless networks,"
An Engineering Approach to Computer Networking.
Computer Networking: A Top-Down Approach Featuring the Internet
"How to troubleshoot duplicate MAC address conflicts,"
"Issues pertaining to service discovery in mobile ad hoc networks,"
"MANETconf: Configuration of hosts in a mobile ad hoc network,"
"Dynamic address allocation protocols for mobile ad hoc networks,"
"Ad-hoc on demand distance vector routing,"
"IP address autoconfiguration for ad hoc networks,"
"Distributed assignment of encoded MAC addresses in sensor networks,"
"IPv6 stateless address autoconfiguration,"
"Weak duplicate address detection in mobile ad hoc networks,"
--TR
An engineering approach to computer networking
A distance routing effect algorithm for mobility (DREAM)
Caching strategies in on-demand routing protocols for wireless ad hoc networks
Distributed assignment of encoded MAC addresses in sensor networks
Computer Networking
Ad-hoc On-Demand Distance Vector Routing
--CTR
Dongkeun Lee , Jaepil Yoo , Keecheon Kim , Kyunglim Kang, IPv6 stateless address auto-configuration in mobile ad-hoc network (T-DAD) and performance evaluation, Proceedings of the 2nd ACM international workshop on Performance evaluation of wireless ad hoc, sensor, and ubiquitous networks, October 10-13, 2005, Montreal, Quebec, Canada
C. N. Ojeda-Guerra , C. Ley-Bosch , I. Alonso-Gonzlez, Using an updating of DHCP in mobile ad-hoc networks, Proceedings of the 24th IASTED international conference on Parallel and distributed computing and networks, p.58-63, February 14-16, 2006, Innsbruck, Austria
Dongkeun Lee , Jaepil Yoo , Hyunsik Kang , Keecheon Kim , Kyunglim Kang, Distributed IPv6 addressing technique for mobile ad-hoc networks, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Jun Luo , Jean-Pierre Hubaux , Patrick Th. Eugster, PAN: providing reliable storage in mobile ad hoc networks with probabilistic quorum systems, Proceedings of the 4th ACM international symposium on Mobile ad hoc networking & computing, June 01-03, 2003, Annapolis, Maryland, USA
Tsan-Pin Wang , Jui-Hsien Chuang, Fast Duplicate Address Detection for Seamless Inter-Domain Handoff in All-IPv6 Mobile Networks, Wireless Personal Communications: An International Journal, v.42 n.2, p.263-275, July 2007
Venkata C. Giruka , Mukesh Singhal, A localized IP-address auto-configuration protocol for wireless ad-hoc networks, Proceedings of the 4th international workshop on Wireless mobile applications and services on WLAN hotspots, September 29-29, 2006, Los Angeles, CA, USA
Namhoon Kim , Soyeon Ahn , Younghee Lee, AROD: An address autoconfiguration with address reservation and optimistic duplicated address detection for mobile ad hoc networks, Computer Communications, v.30 n.8, p.1913-1925, June, 2007
M. Fazio , M. Villari , A. Puliafito, IP address autoconfiguration in ad hoc networks: design, implementation and measurements, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.50 n.7, p.898-920, 15 May 2006
Yi-an Huang , Wenke Lee, Hotspot-based traceback for mobile ad hoc networks, Proceedings of the 4th ACM workshop on Wireless security, September 02-02, 2005, Cologne, Germany
Marco Gruteser , Dirk Grunwald, Enhancing location privacy in wireless LAN through disposable interface identifiers: a quantitative analysis, Proceedings of the 1st ACM international workshop on Wireless mobile applications and services on WLAN hotspots, September 19-19, 2003, San Diego, CA, USA
Marco Gruteser , Dirk Grunwald, Enhancing location privacy in wireless LAN through disposable interface identifiers: a quantitative analysis, Mobile Networks and Applications, v.10 n.3, p.315-325, June 2005 | mobile ad hoc networks;auto-configuration;duplicate address detection |
513853 | Efficient register and memory assignment for non-orthogonal architectures via graph coloring and MST algorithms. | Finding an optimal assignment of program variables into registers and memory is prohibitively difficult in code generation for application specific instruction-set processors (ASIPs). This is mainly because, in order to meet stringent speed and power requirements for embedded applications, ASIPs commonly employ non-orthogonal architectures which are typically characterized by irregular data paths, heterogeneous registers and multiple memory banks. As a result, existing techniques mainly developed for relatively regular, orthogonal general-purpose processors (GPPs) are obsolete for these recently emerging ASIP architectures. In this paper, we attempt to tackle this issue by exploiting conventional graph coloring and maximum spanning tree (MST) algorithms with special constraints added to handle the non-orthogonality of ASIP architectures. The results in our study indicate that our algorithm finds a fairly good assignment of variables into heterogeneous registers and multi-memories while it runs extremely faster than previous work that employed exceedingly expensive algorithms to address this issue. | INTRODUCTION
As embedded system designers strive to meet cost and performance
goals demanded by the applications, processors are increasingly
optimized for certain application
domains in embedded systems. Such optimizations
need a process of design space exploration [8] to find hardware
configurations that meet the design goals. The final configuration
of a processor resulting from a design space exploration
usually has an instruction set and the data path that are highly
tuned for specific embedded applications. In this sense, they
are collectively called application specific instruction-set processors
(ASIPs).
An ASIP typically has a non-orthogonal architecture which
can be characterized by irregular data paths containing heterogeneous
registers and multiple memory banks. As an example
of such an architecture, Figure 1 shows the Motorola
DSP56000, a commercial off-the-shelf ASIP specifically designed
for digital signal processing (DSP) applications. Note
from the data path that the architecture lacks a large number
of centralized general-purpose homogeneous registers; in-
stead, it has multiple small register files where different files
are distributed and dedicated to different sets of functional
units. Also, note that it employs a multi-memory bank architecture
which consists of program and data memory banks.
In this architecture, two data memory banks are connected
through two independent data buses, while a conventional von
Neumann architecture has only a single memory bank. This
type of memory architecture is supported by many embedded
processors, such as Analog Device ADSP2100, DSP Group
PineDSPCore, Motorola DSP56000 and NEC uPD77016. One
obvious advantage of this architecture is that it can access two
data words in one instruction cycle.
Multi-memory bank architectures have been shown to be
effective for many operations commonly found in embedded
applications, such as N real multiplies, which compute the
element-wise products x(i) * y(i) for i = 1, ..., N.
From this example, we can see that the application can operate
at an ideal rate if a processor has two data memory banks
Figure 1: Motorola DSP56000 data path with dual data memory banks X and Y. (The block diagram, not reproduced here, shows the multiplier, ALU, 56-bit accumulator, shifter/limiter, the AGU with its two address ALUs, the global data bus, and the X and Y memory banks.)
so that two variables, x(i) and y(i), can be fetched simultane-
ously. But, we also can see that this ideal speed of operation
is only possible with one condition: the variables should be
assigned to different data memory banks. For instance, in the
following DSP56000 assembly code implementing the N real
multiplies, arrays x and y are assigned respectively to the two
memory banks X and Y.
move x:(r0)+,x0 y:(r4)+,y0
mpyr x0,y0,a x:(r0)+,x0 y:(r4)+,y0
do #N-1,end
mpyr x0,y0,a a,x:(r1)+ y:(r4)+,y0
move x:(r0)+,x0
move a,x:(r1)+
Unfortunately, several existing vendor-provided compilers
that we tested were not able to exploit this hardware feature of
dual data memory banks efficiently; thereby failing to generate
highly optimized code for their target ASIPs. This inevitably
implies that the users for these ASIPs should hand-optimize
their code in assembly to fully exploit dual memory banks,
which makes programming the processors quite complex and
time consuming.
In this paper, we describe our implementation of two core
techniques in the code generation for non-orthogonal ASIPs:
register allocation and memory bank assignment. Our register
allocation is decoupled into two phases to handle the heterogeneous
register architecture of an ASIP as follows.
1. Physical registers are classified into a set of register classes,
each of which is a collection of registers dedicated to the
same machine instructions; and, our register classification
algorithm allocates each temporary variable to one
of the register classes.
2. A conventional graph coloring algorithm is slightly modified
to assign each temporary a physical register within
the register class previously allocated to it (a simplified sketch
of such class-constrained assignment appears after this list).
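The sketch below only illustrates the class constraint of the second phase; it is a greedy simplification of graph coloring under our own assumptions, not the modified coloring algorithm used in this work, and the register class names are illustrative.

def color_registers(temps, interferes, reg_class_of, registers_in):
    """Greedy assignment of physical registers, restricted to each temporary's
    register class; temporaries with no free register in their class are spilled."""
    assignment = {}
    for t in temps:                                   # a real allocator would order by degree/cost
        taken = {assignment[u] for u in interferes.get(t, ()) if u in assignment}
        candidates = [r for r in registers_in[reg_class_of[t]] if r not in taken]
        assignment[t] = candidates[0] if candidates else "SPILL"
    return assignment

# Toy example with two DSP56000-style classes (names are illustrative).
registers_in = {"XY": ["X0", "X1", "Y0", "Y1"], "AB": ["A", "B"]}
reg_class_of = {"t1": "XY", "t2": "XY", "t3": "AB"}
interferes   = {"t1": ["t2"], "t2": ["t1"], "t3": []}
print(color_registers(["t1", "t2", "t3"], interferes, reg_class_of, registers_in))
# e.g. {'t1': 'X0', 't2': 'X1', 't3': 'A'}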
Our memory bank assignment, whose goal is to efficiently assign
variables to multi-memory banks for ASIPs, is also decoupled
into two phases as follows.
1. A maximum spanning tree (MST) algorithm is used to
find a memory bank assignment for variables (a sketch of
one way this step can work appears after this list).
2. The initial bank assignment by the MST-based algorithm
is improved by the graph coloring algorithm that was
also used for register assignment.
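As an illustration of the first phase, the sketch below builds a variable affinity graph whose edge weights count how often two variables could be accessed in the same cycle, extracts a maximum spanning tree, and then 2-colors the tree so that heavily related variables land in different banks. This is only our own reading of how an MST-guided bank partition can work; the weight model and the tree 2-coloring step are assumptions, not necessarily the algorithm as specified later in the paper.

from collections import defaultdict, deque

def max_spanning_forest(vars_, edges):
    """Kruskal-style construction that keeps the heaviest edges first."""
    parent = {v: v for v in vars_}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    forest = defaultdict(list)
    for w, a, b in sorted(edges, reverse=True):        # heaviest edge first
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            forest[a].append(b)
            forest[b].append(a)
    return forest

def assign_banks(vars_, edges):
    """2-color the maximum spanning forest: adjacent (heavily paired) variables
    are placed in different banks X and Y."""
    forest = max_spanning_forest(vars_, edges)
    bank = {}
    for root in vars_:
        if root in bank:
            continue
        bank[root] = "X"
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for u in forest[v]:
                if u not in bank:
                    bank[u] = "Y" if bank[v] == "X" else "X"
                    queue.append(u)
    return bank

# Example: weight = number of times two arrays are (potentially) fetched
# in the same instruction, as in the N real multiplies loop.
variables = ["x", "y", "acc_buf"]
edges = [(10, "x", "y"), (1, "x", "acc_buf")]
print(assign_banks(variables, edges))   # e.g. {'x': 'X', 'y': 'Y', 'acc_buf': 'Y'}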
Our algorithms differ from previous work in that they assign
variables to heterogeneous registers and multi-memory banks
in separate, decoupled code generation phases, as shown above,
while previous work did it in a single, tightly-coupled phase [13,
14]. As will be reported later in this paper, our performance
results were quite encouraging. First of all, we found that the
code generation time was dramatically reduced by up to
four orders of magnitude. This result was somewhat
expected because our decoupled code generation phases
greatly simplified the register and bank assignment problem
overall. Meanwhile, the benchmarking results also showed
that we generated code that is nearly identical in quality to
the code generated by the coupled approach in almost every
case.
Section 2 discusses the ASIP architecture that we are targeting
in this work. Section 3 presents our algorithms, and Section
4 presents our experiments with additional results that we
have recently obtained since our earlier preliminary study [5].
Section 5 concludes our discussion.
2. TARGET MACHINE MODEL
In this section, we characterize the non-orthogonal architecture
of ASIPs with two properties.
2.1 Heterogeneous Registers
To more formally define this register architecture, we start
by presenting the following definitions.
DEFINITION 1. Given a target machine M, let I = {i_1, i_2, ..., i_n}
be the set of all the instructions defined on M, and R = {r_1, r_2, ..., r_m}
be the set of all its registers. For instruction i_j ∈ I, we define the set
of all its operands, Op(i_j) = {O_j1, O_j2, ..., O_jk}. Assume C_jl is the
set of all the registers that can appear at the position of some operand
O_jl, 1 ≤ l ≤ k. Then we say here that C_jl forms a register class for
instruction i_j.
DEFINITION 2. From Definition 1, we define S_j, a collection
of distinct register classes for instruction i_j, as follows:
S_j = ∪_{1 ≤ l ≤ k} {C_jl}.  (1)
From this, we in turn define S as follows:
S = ∪_{1 ≤ j ≤ n} S_j.  (2)
We say that S is the whole collection of register classes for
machine M.
To see the difference between homogeneous and heterogeneous
register architectures, first consider the SPARC as an example
of a processor with homogeneous registers. A typical instruction
of the SPARC has three operands,
op code reg_i, reg_j, reg_k
where all the registers in the register file can
appear as the first operand reg_i. In this case, the set of all these
registers forms a single register class for op code. Since
for the other operands reg_j and reg_k, the same registers
can appear, they again form the same class for the instruction.
Thus, we have only one register class defined for the instruction
op code. On the other hand, the DSP56000 has an instruction
of the form
mpya reg_i, reg_j, reg_k
which multiplies the first two operands and places the product
in the third operand. The DSP56000 restricts reg_i and reg_j to
be input registers X0, X1, Y0, Y1, and reg_k to be accumulator
A or B. In this case, we have two register classes defined
for mpya: {X0, X1, Y0, Y1} at reg_i and reg_j and {A, B} at
reg_k.
In the above examples, Sj for op_code and mpya are, respectively,
{{r1, r2, ..., rm}} and {{X0, X1, Y0, Y1}, {A, B}}.
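As a small illustration of Definitions 1 and 2 (this sketch is ours and not part of the compiler described in this paper), register classes can be encoded as bitmasks over the physical registers named above, and the collection Sj for mpya is then the set of distinct operand classes:

#include <stdio.h>

/* Physical registers of interest, encoded as bit positions. */
enum { X0, X1, Y0, Y1, A, B, NUM_REGS };

typedef unsigned int RegClass;        /* a register class is a set (bitmask) */
#define REG(r) (1u << (r))

/* Register classes C_jl for the operand positions of mpya. */
static const RegClass MPYA_INPUT = REG(X0) | REG(X1) | REG(Y0) | REG(Y1);
static const RegClass MPYA_ACC   = REG(A)  | REG(B);

int main(void) {
    /* S_j for mpya collects the classes of its three operands. */
    RegClass S_mpya[] = { MPYA_INPUT, MPYA_INPUT, MPYA_ACC };
    int distinct = 0;
    for (int i = 0; i < 3; i++) {
        int seen = 0;
        for (int k = 0; k < i; k++)
            if (S_mpya[k] == S_mpya[i]) seen = 1;
        if (!seen) distinct++;
    }
    printf("mpya has %d distinct register classes\n", distinct); /* prints 2 */
    return 0;
}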
We say that a typical processor with n general purpose registers
like the SPARC has a homogeneous register architecture.
This is mainly because S is usually a set of a single element
consisting of the n registers for the processor, which, by Definitions
1 and 2, equivalently means that the same n registers
are homogeneous in all the machine instructions. In the
case of DSP56000, however, its registers are dedicated differently
to the machine instructions, which makes them only partially
homogeneous within subsets of the machine instruc-
tions. For example, we can see that even one instruction like
mpya of the DSP56000 has two different sets of homogeneous registers:
{X0, X1, Y0, Y1} and {A, B}. We list the whole collection of register
classes defined for DSP56000 in Table 1. In general, we say
that a machine with such complex register classes has a heterogeneous
register architecture.
ID  Register Class  Physical Registers
9   Y               Y0, Y1
Table 1: The register classes for Motorola DSP56000
2.2 Multiple Data Memory Banks
As an example of multi-memory bank ASIPs, we will use
the DSP56000 whose data path was shown in Figure 1, where
the ALU operations are divided into data operations and address
operations. Data ALU operations are performed on a
data ALU with data registers which consist of four 24-bit input
registers (X0, X1, Y0 and Y1) and two 56-bit accumulators
(A and B). Address ALU operations are performed in
the address generation unit (AGU), which calculates memory
addresses necessary to indirectly address data operands in
memory. Since the AGU operates independently from the data
ALU, address calculations can occur simultaneously with data
ALU operations.
As shown in Figure 1, the AGU is divided into two identical
halves, each of which has an address ALU and two sets of
16-bit register files. One set of the register files has four address
registers (R0 - R3), and the other also has four address
registers (R4 - R7). The address output multiplexers select
the source for the XAB and YAB address buses. The source of each effective
address may be the output of the address ALU for indexed addressing
or an address register for register-indirect addressing.
At every cycle, the addresses generated by the ALUs can be
used to access two words in parallel in the X and Y memory
banks, each of which consists of 512-word 24-bit memory.
Possible memory reference modes of the DSP56000 are of
four types: X, Y, L and XY. In X and Y memory reference
modes, the operand is a single word either from X or Y memory
bank. In L memory reference mode, the operand is a long
word (two words each from X and Y memories) referenced by
one operand address. In XY memory reference mode, two independent
addresses are used to move two word operands to memory
simultaneously: one word operand is in X memory,
and the other word operand is in Y memory. Such independent
moves of data in the same cycle are called a parallel move. In
Figure
1, we can see two data buses XDB and YDB that connect
the data path of the DSP56000 to two data memory banks
X and Y, respectively. Through these buses, a parallel move is
made between memories and data registers.
These architectural features of the DSP56000, like most other
ASIPs with multi-memory banks, allow a single instruction
to perform one data ALU operation and two move operations
in parallel per cycle, but only under certain conditions due to
hardware constraints. In the case of the DSP56000, the following
parallel move conditions should be met to maximize
the utilization of the dual memory bank architecture: (1) the
two words should be addressed from different memory banks;
(2) memory indirect addressing modes using address registers are
used to address the words; and (3) each address register involved
in a parallel move must be from a different set among the two
register files in the AGU. In this implementation, we attempt
to make the code meet the parallel move conditions so that
as many parallel moves as possible can be generated.
3. REGISTER ALLOCATION AND MEMORY BANK ASSIGNMENT
In this section, we detail the code generation phases for
register allocation and memory bank assignment, which were
briefly described in Section 1. To explain step by step how
our code generator produces the final code, we will use the example
of DSP56000 assembly code shown in Figure 2. This
code can be obtained immediately after the instruction selection
phase. Note that it is still in a sequential and unoptimized
form. This initial code will be given to the subsequent phases,
and optimized for the dual memory architecture of DSP56000,
as described in this section.
MOVE a, r0
MOVE b, r1
MOVE c, r2
MAC r0, r1, r2
MOVE low(r2), low(v)
MOVE high(r2), high(v)
MOVE d, r3
MOVE e, r4
MOVE f, r5
MAC r3, r4, r5
MOVE low(r5), low(w)
MOVE high(r5), high(w)
Figure
2: Example of uncompacted DSP56000 assembly code
produced after instruction selection
3.1 Register Class Allocation
In our compiler, instruction selection is decoupled from register
allocation and all other subsequent phases. In fact, many
conventional compilers such as gcc, lcc and Zephyr that have
been targeting GPPs, also separate these two phases. Separating
register allocation from instruction selection is relatively
straightforward for a compiler targeting GPPs because GPPs
have homogeneous registers within a single class, or possibly
just a few classes, of registers; that is, in the instruction selection
phase, instructions that need registers are assigned symbolic
temporaries which, later in the register allocation phase,
are mapped to any available registers in the same register class.
In ASIPs, however, the register classes for each individual instruction
may differ, and a register may belong to many different
register classes (see Table 1).
What all this implies is that the relationship between registers
and instructions is tightly coupled so that when we select
an instruction, somehow we should also determine from
which register classes registers are assigned to the instruction.
Therefore, phase-coupling [9], a technique to cleverly combine
these closely related phases, has been the norm for most
compilers generating code for ASIPs. However, this phase-
coupling may create too many constraints for code generation,
thus increasing the compilation time tremendously, as in the
case of previous work which will be compared with our approach
in Section 4.1.
To relieve this problem in our decoupled approach and still
handle a heterogeneous register structure, we implemented a
simple scheme that enforces a relationship that binds these
two separate phases by inserting another phase, called register
class allocation, between them. In this scheme, we represent
a register in two notions: a register class and a register number
in the class. In the register class allocation phase, temporaries
are not allocated physical registers, but a set of possible registers
(that is, a register class) which can be placed as operands
of an instruction. Physical registers are selected among the
register class for each instruction in a later phase, which we
call register assignment. Since the focus of this paper is not
on register class allocation, we do not discuss the whole
algorithm here; refer to [6] for more details.
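A minimal sketch of the two-notion representation used by register class allocation is given below; the struct and field names are our own illustration, not the actual data structures of the compiler.

/* A temporary carries a register class first; the physical register
   number inside that class is filled in only at register assignment. */
typedef struct {
    int reg_class;   /* index into the table of register classes (cf. Table 1) */
    int reg_number;  /* index within the class; -1 until register assignment   */
} SymbolicReg;

/* Register class allocation binds only the class. */
void allocate_class(SymbolicReg *t, int reg_class) {
    t->reg_class  = reg_class;
    t->reg_number = -1;   /* deferred to the register assignment phase */
}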
The register classes that are allocated for the code in Figure 2
are associated with each temporary ri referenced in the code
(r0, for instance, is allocated register class 1, as used later in Section 3.3).
Between register class allocation and register assignment,
the code compaction phase results in not only reduced code
size, but also in exploitation of machine instructions that perform
parallel operations, such as the one with an add plus a
parallel move. Figure 3 shows the resulting instructions after
the code in Figure 2 is compacted. We can see in the
compacted code that one MAC (multiply-and-add) and two
moves are now combined into a single instruction word, and
two moves are combined into one parallel move instruction.
We use the traditional list scheduling algorithm for our code
compaction.
MOVE a,r0 b,r1
MOVE c,r2 d,r3
MAC r0, r1, r2 e,r4 f,r5
MAC r3, r4, r5 low(r2),low(v)
MOVE high(r2),high(v) low(r5),low(w)
MOVE high(r5),high(w)
Figure
3: Code sequence after compacting the code in Figure 2
3.2 Memory Bank Assignment
After register class allocation and code compaction, each
variable in the resulting code is assigned to one of a set of
memory banks (in this example, banks X or Y of the DSP56000).
In this section, we present our memory bank assignment technique
using two well-known algorithms.
3.2.1 Using a MST Algorithm
In the memory bank assignment phase, we use an MST algorithm.
The first step of this basic phase is to construct a
weighted undirected graph, which we call the simultaneous
reference graph (SRG). The graph contains the variables referenced
in the code as nodes. An edge (vj, vk) in the SRG
means that both variables vj and vk are referenced within the
same instruction word in the compacted code. Figure 4(a)
shows an SRG for the code from Figure 3. The weight on
an edge between two variables represents the number of times
the variables are referenced within the same word.
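The SRG construction can be sketched in C as follows, assuming each compacted instruction word supplies the list of variables it references (the names below are ours, not the compiler's):

#define MAX_VARS 64

/* srg[i][j] counts how many instruction words reference variables i and j together. */
static int srg[MAX_VARS][MAX_VARS];

/* Record one compacted instruction word that references the given variables. */
void add_word(const int *vars, int nvars) {
    for (int i = 0; i < nvars; i++)
        for (int j = i + 1; j < nvars; j++) {
            srg[vars[i]][vars[j]]++;   /* undirected: keep the matrix symmetric */
            srg[vars[j]][vars[i]]++;
        }
}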
(a) SRG
MOVE X:a,r0 Y:b,r1
MOVE X:c,r2 Y:d,r3
MAC r0, r1, r2 X:e,r4 Y:f,r5
MAC r3, r4, r5 low(r2),X:low(v)
MOVE high(r2),X:high(v) low(r5),Y:low(w)
MOVE high(r5),Y:high(w)
(c) Memory Bank Assignment
(b) Assigned Memory Banks: X = {a, c, e, v}, Y = {b, d, f, w}
Figure
4: Code result after memory bank assignment determined
from its SRG built for the code in Figure 3
According to the parallel move conditions, two variables
referenced in an instruction word must be assigned to different
memory banks in order to fetch them in a single instruction
cycle. Otherwise, an extra cycle would be needed to access
them. Thus, the strategy that we take to maximize the memory
throughput is to assign a pair of variables referenced in
the same word to different memory banks whenever it is pos-
sible. If a conflict occurs between two pairs of variables, the
variables in one pair that appear more frequently in the same
words shall have a higher priority over those in the other pair.
Notice here that the frequency is denoted by the weight in the
SRG.
Figure
4(b) shows that the variables a, c, e, and v are assigned
to X memory, and the remaining ones b, d, f , and w
are to Y memory. This is optimal because all pairs of variables
connected via edges are assigned to different memories
X and Y, thus avoiding extra cycles to fetch variables, as can
be seen from the resulting code in Figure 4(c). In the case of
variables v and w, we still need two cycles to move each of
them because they are long type variables with double-word
length. However, they also benefit from the optimal memory
assignment as each half of the variables is moved together in
the same cycle.
The memory bank assignment problem that we face in reality
is not always as simple as the one in Figure 4. To illustrate
a more realistic and complex case of the problem, consider
Figure
5 where the SRG has five variables.
(a) SRG   (b) Maximum Spanning Tree
Figure
5: More complex example of a simultaneous reference
graph and the maximum spanning tree constructed from it
We view the process of assigning n memory banks as that of
dividing the SRG into n disjoint subgraphs; that is, all nodes
in the same subgraph are assigned a memory bank that corresponds
to the subgraph. In our compiler, therefore, we try to
obtain an optimal memory bank assignment for a given SRG
by finding a partition of the graph with the minimum cost according
to Definition 3.
DEFINITION 3. Let G = (V, E) be a connected, weighted
graph where V is a set of nodes and E is a set of edges. Let
w(e) be the weight on an edge e in E. Suppose that a partition
P = <G1, G2, ..., Gn> of the graph G divides G into n
disjoint subgraphs G1, G2, ..., Gn. Then the cost of the partition
P is defined as the sum of w(e) over all edges e = (u, v) in E
whose end nodes u and v belong to the same subgraph Gi.
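Given a bank assignment, the cost of Definition 3 can be computed directly from the SRG edge weights; the following C sketch (our own illustration) assumes a two-way partition and an adjacency-matrix representation of the SRG:

#define MAX_VARS 64

/* bank[v] is 0 (X) or 1 (Y); weight[u][v] is the SRG edge weight (0 if no edge).
   The cost counts only uncut edges, i.e., edges whose ends share a bank. */
int partition_cost(int nvars, int weight[MAX_VARS][MAX_VARS], const int bank[]) {
    int cost = 0;
    for (int u = 0; u < nvars; u++)
        for (int v = u + 1; v < nvars; v++)
            if (bank[u] == bank[v])
                cost += weight[u][v];
    return cost;
}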
Finding such an optimal partition with the minimum cost is
another NP-complete problem. So, we developed a greedy approximation
algorithm with O(|E| + |V| lg |V|) complexity, as shown in Figure 6.
Since in practice |E| is close to |V| for our
problem, the algorithm usually runs fast, in O(|V| lg |V|) time.
In the algorithm, we assume n = 2 banks since virtually no existing
ASIPs have more than two data memory banks. But
this algorithm can be easily extended to handle the cases for
n > 2.
In our memory bank assignment algorithm, we first identify
a maximum spanning tree (MST) of the SRG. Given a connected
graph G, a spanning tree of G is a connected acyclic
subgraph that covers all nodes of G. An MST is a spanning tree
whose total edge weight is not less than that of any
other spanning tree of G. One interesting property of a spanning
tree is that it is a bipartite graph as any tree is actually
bipartite. So, given a spanning tree T for a graph G, we can
obtain a partition P =<G1 ; G2> from T by, starting from an
arbitrary node, say u, in T , assigning to G1 all nodes an even
distance from u and to G2 those an odd distance from u.
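This parity-based partition can be obtained with a simple breadth-first traversal of the spanning tree; the C sketch below is our own illustration and assumes the tree is stored as an adjacency matrix:

#define MAX_VARS 64

/* tree[u][v] != 0 iff (u, v) is an edge of the spanning tree T.
   bank[v] becomes 0 (X) for nodes at an even distance from start
   and 1 (Y) for nodes at an odd distance. */
void two_color_tree(int nvars, int tree[MAX_VARS][MAX_VARS],
                    int start, int bank[]) {
    int queue[MAX_VARS], head = 0, tail = 0, visited[MAX_VARS] = {0};

    bank[start] = 0;              /* arbitrary: the start node goes to bank X */
    visited[start] = 1;
    queue[tail++] = start;
    while (head < tail) {
        int u = queue[head++];
        for (int v = 0; v < nvars; v++)
            if (tree[u][v] && !visited[v]) {
                bank[v] = 1 - bank[u];  /* every tree edge crosses the partition */
                visited[v] = 1;
                queue[tail++] = v;
            }
    }
}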
Based on this observation, our algorithm is designed to first
identify a spanning tree from the SRG, and then, to compute
a partition from it. But we here use a heuristic that chooses
not an ordinary spanning tree but a maximum spanning tree.
The rationale for the heuristic is that, if we build a partition
from an MST, every edge of the MST crosses the partition, so its
heavy-weighted edges are excluded from the cost, thereby increasing
the chance of reducing the overall cost
of the resultant partition. Unfortunately, constructing a partition
from an MST does not guarantee the optimal solution.
But, according to our earlier preliminary work [5], the notion
of a MST provides us a crucial idea about how to find a partition
with low cost, which is in turn necessary to find a near-optimal
memory bank assignment. For instance, our algorithm
can find an optimal partitioning for the SRG in Figure 5.
To find an MST, our algorithm uses Prim's MST algorithm [11].
Our algorithm is global; that is, it is applied across basic blocks.
For each node, the following sequence is repeatedly iterated
until all SRG nodes have been marked. In the algorithm, the
edges in the priority queue Q are sorted in the order of their
weights, and an edge with the highest weight is removed first.
When there is more than one edge with the same highest weight,
the one that was inserted first is removed. Note here that
the simultaneous reference graph GSR is not necessarily connected,
contrary to our assumption above. Therefore,
we create a set of MSTs, one for each connected subgraph
of GSR. Also, note in the algorithm that at least one of the
nodes w and z should always be marked because the edges of
a marked node u were always inserted into Q earlier in the algorithm.
Figure 5(b) shows the spanning tree obtained after this
algorithm is applied to the SRG given in Figure 5(a). We can
see that X memory is assigned at even depths and Y memory at
odd depths in this tree.
3.2.2 Using a Graph Coloring Algorithm
A graph coloring approach [4] has been traditionally used
for register allocation in many compilers. The central idea of
graph coloring is to partition each variable into separate live
ranges, where each live range, rather than the entire variable,
is a candidate to be allocated to a register. We have found that
the same idea can be also used to improve the basic memory
bank assignment described in Section 3.2.1 by relaxing the
name-related constraints on variables that are to be assigned
Input: a simultaneous reference graph GSR = (VSR, ESR)
Output: the set VSR whose nodes are all colored either with X or Y
Algorithm:
// ST is a set of MSTs and Q is a priority queue of edges
ST := {}; Q := {}; i := 0;
for all nodes v in VSR do unmark v;
u := select_unmarked_node(VSR); // returns ? if every node in VSR is marked
create a new MST Ti := {};
while u != ? do // Find all MSTs for connected subgraphs of GSR
    mark u;
    Eu := the set of all edges incident on u;
    sort the elements of Eu in increasing order
    by weights, and add them to Q;
    while Q != {} do
        remove the edge (w, z) with the highest priority from Q;
        if z is unmarked then { add (w, z) to Ti; u := z; break; }
        else if w is unmarked then { add (w, z) to Ti; u := w; break; }
    od
    if u is marked then
        // All nodes in a connected subgraph of GSR have been visited
        u := select_unmarked_node(VSR);
        // Select a node in another subgraph, if any, of GSR
        add Ti to ST; i++; create a new MST Ti := {};
od
for all nodes v in VSR do uncolor v;
for every MST Ti in ST do
    // Assign the variables in Ti to memory banks X and Y
    next_visitors_Q := {};
    m := (# of nodes in VSR of X-color) - (# of nodes in VSR of Y-color);
    select an arbitrary node v in Ti;
    if m > 0 then // More nodes have been X-colored
        color v with Y-color;
    else // More nodes have been Y-colored
        color v with X-color;
    repeat
        for every node u adjacent to v in Ti do
            if u is not colored then
                color u with the color different from the color of v;
                append u to next_visitors_Q;
        v := extract one node from next_visitors_Q;
    until all nodes in Ti are colored;
od
m := (# of nodes in VSR of X-color) - (# of nodes in VSR of Y-color);
while m > 0 do
    // While there are more X-colored nodes than Y-colored ones
    if there exists an uncolored node v in VSR then { color v with Y-color; m--; }
    else break;
while m < 0 do
    // While there are more Y-colored nodes than X-colored ones
    if there exists an uncolored node v in VSR then { color v with X-color; m++; }
    else break;
for any remaining uncolored node v in VSR do
    color v alternately with X and Y colors;
return VSR;
Figure 6: Memory bank assignment algorithm for dual memories
to memory banks.
In this approach, we build an undirected graph, called the
memory bank interference graph, to determine which live ranges
conflict and could not be assigned to the same memory bank.
Disjoint live ranges of the same variable can be assigned to
different memory banks after giving a new name to each live
range. This additional flexibility of a graph coloring approach
can sometimes result in a more efficient allocation of variables
to memory banks, as we will show in this section.
Two techniques, called name splitting and merging, have
been newly implemented to help the memory bank assignment
benefit from this graph coloring approach. The example in
Figure
4 is too simple to illustrate this; hence, let us consider
another example in Figure 7 that will serve to clarify various
features of these techniques.
MOVE a,r0 b,r1
MOVE c,r2 d,r3
MAC r0,r1, r2 e,r4 f,r5
MAC r3,r4, r5 r2,a c,r6
ADD r2,r6, r7 r5,d e,r8
MOVE a,r9 d,r10
(a) Result After Code Compaction   (b) Live Range of Each Variable
(c) SRG   (d) Partitioned Memory
MOVE X:a,r0 Y:b,r1
MOVE X:d,r3 Y:c,r2
MAC r0,r1,r2 X:f,r5 Y:e,r4
MAC r3,r4,r5 r2,X:a Y:c,r6
ADD r2,r6,r7 r5,X:d Y:e,r8
MOVE r10,X:d
(e) Result After Memory Bank Assignment
Figure
7: Code example and data structures to illustrate name
splitting and merging
Figure
7(a) shows an example of code that is generated after
code compaction, and Figure 7(b) depicts the live ranges of
each of the variables. Note that the variables a and d each have
multiple live ranges. Figures 7(c) and 7(d) show the SRG and
the assignment of variables to memory banks. We can see that
a single parallel move cannot be exploited in the example because
a and d were assigned to the same data memory. Finally,
Figure
7(e) shows the resulting code after memory banks are
assigned by using the MST algorithm with the memory partitioning
information from Figure 7(d).
Figure
8 shows how name splitting can improve the same
example in Figure 7. Name splitting is a technique that tries to
reduce the code size by compacting more memory references
into parallel move instructions. This technique is based on
a well-known graph coloring approach. Therefore, instead of
presenting the whole algorithm, we will describe the technique
with an example given in Figure 8. We can see in Figure 8(a)
that each live range of a variable is a candidate for being
assigned to a memory bank. In the example, the two variables
a and d with disjoint live ranges are split; that is, each live
range of these variables is given a different name.
Figures
8(b) and 8(c) show the modified SRG and the improved
assignment of variables to memory banks. Figure 8(d)
demonstrates that, by considering live ranges as opposed to
entire variables for bank assignment, we can place the two live
(a) Live Ranges After Local Variable Renaming   (b) SRG
(c) Partitioned Memory
MOVE X:a1,r0 Y:b,r1
MOVE X:d1,r3 Y:c,r2
MAC r0,r1,r2 X:f,r5 Y:e,r4
MAC r3,r4,r5 r2,X:a2 Y:c, r6
ADD r2,r6,r7 r5,X:d2 Y:e,r8
MOVE X:a2,r9 Y:d2,r10
MOVE r10,X:d2
(d) Result After Name Splitting
Figure
8: Name splitting for local variables
ranges of d in different memory banks, which allows us to exploit
a parallel move after eliminating one MOVE instruction
from the code in Figure 7(e).
Although name splitting helps us to further reduce the code
size, it may increase the data space, as we observed in Figure
8. To mitigate this problem, we merge names after name
splitting. Figure 9 shows how the data space for the same example
can be improved using name merging. In the earlier
example, we split a into two names a1 and a2 according to
the live ranges for a, and these new names were assigned to
the same memory bank. Note that these live ranges do not
conflict. This means that they can in turn be assigned to the
same location in memory.
(a) Live Ranges After Merging   (b) Partitioned Memory
Figure 9: Name merging for local variables
Not only can the compiler merge nonconflicting live ranges
of the same variable, as in the case of the variable a, but it can
also merge nonconflicting live ranges of different variables.
We see in Figure 9(b) that two names b and d2 are merged to
save one word in Y memory.
The key idea of name splitting and merging is to consider
live ranges, instead of entire variables, as candidates to be assigned
to memory banks. As can be seen in above examples,
the compiler can potentially reduce both the number of executed
instructions by exploiting parallel moves and the number
of memory words required.
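At its core, name merging only needs a live-range conflict test; the following C sketch (our own simplification, modeling a live range as an interval of instruction indices) shows the check used to decide whether two names may share one memory word:

/* A live range spans instruction indices [start, end] in the compacted code. */
typedef struct { int start, end; } LiveRange;

/* Two live ranges conflict iff their instruction intervals overlap. */
int ranges_conflict(LiveRange a, LiveRange b) {
    return a.start <= b.end && b.start <= a.end;
}

/* Two names assigned to the same bank may be merged (share one memory
   word) only if none of their live ranges conflict. */
int can_merge(const LiveRange *a, int na, const LiveRange *b, int nb) {
    for (int i = 0; i < na; i++)
        for (int j = 0; j < nb; j++)
            if (ranges_conflict(a[i], b[j]))
                return 0;
    return 1;
}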
We have shown in this work that applying graph coloring
techniques when assigning variables to memory banks has a
greater potential for improvement than applying these techniques
for register assignment. The reason is that the number
of memory banks is typically much smaller than the number of
registers; thereby, the algorithm for name splitting and merging
has practically polynomial time complexity even though
name splitting and merging basically use a theoretically NP-complete
graph coloring algorithm. That is, asymptotically
the time required for name splitting and merging scales in the
worst case as O(n * 2^n), where 2 is the number of data memory banks. This is yet
much faster than conventional graph coloring for register allo-
cation, whose time complexity is O(n * m^n) where m is typically
more than 32 for GPPs. It has already been empirically
proven that in practice, register allocation with such high complexity
runs in polynomial time thanks to numerous heuristics
such as pruning. So does name splitting and merging, as we
will demonstrate in Section 4.
3.3 Register Assignment
After memory banks are determined for each variable in the
code, physical registers are assigned to the code. For this,
we again use the graph coloring algorithm with special constraints
added to handle non-orthogonal architectures. To explain
these constraints, recall that we only allocated register
classes to temporaries earlier in the register class allocation
phase. For register assignment, each temporary is assigned
one physical register among those in the register class allocated
to the temporary. For example, the temporary r0 in Figure
4 shall be replaced by one register among the four candidates
(X0, X1, Y0, and Y1)
in register class 1, which is currently allocated to r0 as shown
in Section 3.1.
In addition to register class constraints, register assignment
also needs to consider additional constraints for certain types
of instructions. For instance, register assignment for instructions
containing a parallel move, such as those in Figure 4,
must meet the following architectural constraints on dual memory
banks: data from each memory bank should be moved to a
predefined set of registers. This constraint is also due partially
to the heterogeneous register architecture of ASIPs. Back in
the example from Figure 4, the variable a in the parallel move
with r0 is allocated to memory X. Therefore, the registers
eligible for r0 are confined to X0 and X1. If these physical
registers are already assigned to other instructions, then a
register spill will occur.
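This bank constraint can be modeled as a simple lookup from memory bank to eligible registers; in the C sketch below, the X-bank row follows the example above, while the Y-bank row is our symmetric assumption rather than something stated in this section:

enum Bank { BANK_X = 0, BANK_Y = 1 };
enum Reg  { X0, X1, Y0, Y1 };

/* For a parallel-move operand, data fetched from bank X may only land in
   X0 or X1; the Y row mirrors that constraint for bank Y (assumption). */
static const enum Reg eligible[2][2] = {
    { X0, X1 },   /* BANK_X */
    { Y0, Y1 }    /* BANK_Y */
};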
Satisfying all these constraints on register classes and memory
banks, our graph coloring algorithm assigned temporaries
to physical registers in the code. Figure 10 shows the resulting
code after register assignment is applied to the code shown in
Figure
4(c). We can see in Figure 10 that memory references in
the code represented symbolically in terms of variable names
like a and b are now converted into real ones using addressing
modes provided in the machine. This conversion was done in
the memory offset assignment phase that comes after register
assignment. In this final phase, we applied an algorithm similar
to the maximum weighted path (MWP) algorithm originally
proposed by Leupers and Marwedel [7].
MOVE X:(r1)+,X0 Y:(r5)+,Y0
MOVE X:(r1)+,A Y:(r5)+,Y1
MAC X0, Y0, A X:(r1)+,X1 Y:(r5)+,B
Figure
10: Resulting code after register assignment and memory
offset assignment
4. COMPARATIVE EMPIRICAL STUDIES
To evaluate the performance of our memory bank assignment
algorithm, we implemented the algorithm and conducted
experiments with benchmark suites on a DSP56000 [10]. The
performance is measured in two metrics: size and time. In
this section, we report the performance obtained in our exper-
iments, and compare our results with other work.
4.1 Comparison with Previous Work
Not until recently had code generation for ASIPs received
much attention from the mainstream of conventional compiler
research. One prominent example of a compiler study targeting
ASIPs may be that of Araujo and Malik [2] who proposed
a linear-time optimal algorithm for instruction selection, register
allocation, and instruction scheduling for expression trees.
Like most other previous studies for ASIPs, their algorithm
was not designed specifically for the multi-memory bank ar-
chitectures. To the best of our knowledge, the earliest study
that addressed this problem of register and memory bank assignment
is that of Saghir et al. [12]. However, our work differs
from theirs because we target ASIPs with heterogeneous
registers while theirs assume processors with a large number
of centralized general-purpose registers. By the same token,
our approach also differs from the RAW project at MIT [3]
since their memory bank assignment techniques assume neither
heterogeneous registers nor even ASIPs.
Most recently, this problem was extensively addressed in a
project, called SPAM, conducted by researchers at Princeton
and MIT [1, 14]. In fact, SPAM is the only closely related
work that is currently available to us. Therefore, in this work,
we compared our algorithm with theirs by experimenting with
the same set of benchmarks targeting the same processor.
4.2 Comparison of Code Size
In Figure 11, we list the benchmarks that were compiled
by both the SPAM compiler and ours. These benchmarks are
from the ADPCM and DSPStone [15] suites. For some reason,
we could not port SPAM successfully on our machine plat-
form. So, the numbers for SPAM in the figure are borrowed
from their literature [14] for comparison with our experimental
results.
Figure 11: Ratios of our code sizes to SPAM code sizes (x-axis: benchmarks; y-axis: size ratio)
The figure displays the size ratios of our code to SPAM
code; that is, SPAM code size is 1 and our code size is normalized
against SPAM code size. In the figure, we can see
that the sizes of our output code are comparable to those of
their code overall. In fact, for seven benchmarks out of the
twelve, our output code is smaller than SPAM code. These results
indicate that our memory bank assignment algorithm is as
effective as their simultaneous reference allocation algorithm
in most cases.
4.3 Comparison of Compilation Time
While both compilers demonstrate comparable performance
in code size, the difference of compilation times is significant,
as depicted in Figure 12. According to their literature [14], all
experiments of SPAM were conducted on Sun Microsystems
Ultra Enterprise featuring eight processors and 1GB RAM.
Unfortunately, we could not find exactly the same machine
that they used. Instead, we experimented on the same Sun Microsystems
Ultra Enterprise but with two processors and 2GB
RAM.
Figure 12: Ratios (in log scale) of compilation times of our compiler to those of the SPAM compiler (x-axis: benchmarks; y-axis: compilation time ratio)
We can see in the figure that our compilation times were
roughly three to four orders of magnitude faster. Despite the
differences of machine platforms, therefore, we believe that
such a large difference in compilation times clearly demonstrates
the advantage of our approach over theirs in terms of compilation
speed.
Our comparative experiments show evidence that the compilation
time of SPAM may increase substantially for large ap-
plications, as opposed to ours. We have found that the long
compilation time in the SPAM compiler results from the fact
that they use a coupled approach that attempts to deal with
register and memory bank assignment in a single, combined
step, where several code generation phases are coupled and simultaneously
considered to address the issue. That is, in their
approach, variables are allocated to physical registers at the
same time they are assigned the memory banks.
To support their coupled approach, they build a constraint
graph that represents multiple constraints under which an optimal
solution to their problem is sought. Unfortunately, these
multiple constraints in the graph turn their problem into a typical
multivariate optimum problem which is tractable only by
an NP-complete algorithm. In this coupled approach, multivariate
constraints are unavoidable as various constraints on
many heterogeneous registers and multi-memory banks should
be all involved to find an optimal reference allocation simulta-
neously. As a consequence, to avoid using such an expensive
algorithm, they inevitably resorted to a heuristic algorithm,
called simulated annealing, based on a Monte Carlo approach.
However, even with this heuristic, we have observed from their
literature [13, 14] that their compiler still had to take more
than 1000 seconds even for a moderately sized program. This
is mainly because the number of constraints in their constraint
graph rapidly becomes too large and complicated as the code
size increases.
We see that the slowdown in compilation is obviously caused
by the intrinsic complexity of their coupled approach. In con-
trast, our compilation times stayed short even for larger bench-
marks. We credit this mainly to our decoupled approach which
facilitated our application of various fast heuristic algorithms
that individually conquer each subproblem encountered in the
code generation process for the dual memory bank system.
More specifically, in our approach, register allocation is de-coupled
from code compaction and memory bank assignment;
thereby, the binding of physical registers to temporaries comes
only after code has been compacted and variables assigned to
memory banks.
Some could initially expect a degradation of our output code
quality due to the limitations newly introduced by considering
physical register binding separately from memory bank as-
signment. However, we conclude from these results that careful
decoupling may alleviate such drawbacks in practice while
maximizing the advantages in terms of compilation speed, which
is often a critical factor for industry compilers.
4.4 Comparison of Execution Speed
To estimate the impact of code size reduction on the running
time, we generated three versions of the code as follows.
uncompacted The first version is our uncompacted code, such
as shown in Figure 2, generated immediately after the
instruction selection phase.
compiler-optimized The uncompacted code is optimized for
DSP56000 by using the techniques in Section 3 to produce
the code like the one in Figure 10.
hand-optimized The uncompacted code is optimized by hand.
We hand-optimized the same code that the compiler used
as the input so that the hand-optimized one may provide
us with the upper limit of the performance of the benchmarks
on DSP56000.
Their execution times are compared in Figure 13, which shows
the speedups produced by both compiler optimization and
hand optimization over the execution time of
the uncompacted code. For instance, the compiler-
optimized code for complex multiply achieves speedup of
about 23% over the uncompacted code while the hand-optimized
code achieves additional speedup of 9%, which is tantamount
to 32% in total over the uncompacted code.
In Figure 13, we can see that the average speedup of our
compiler-optimized code over the uncompacted code is about
7%, and that of hand-optimized code over the compiler-optimized
code is 8%. These results indicate that the compiler has achieved
roughly half of the speedup we could get by hand optimiza-
tion. Although these numbers may not be satisfactory, the results
also indicate that, in six benchmarks out of the twelve,
our compiler has achieved the greater part of the performance
gains achieved by hand optimization.
Of course, we also have several benchmarks, such as fir2dim,
convolution and least mean square, in which our compiler
has much room for improvement.
Figure 13: Speedups of the execution times of both compiler-optimized and hand-optimized code over the execution time of the unoptimized code (x-axis: benchmarks; y-axis: speedup)
According to our analysis, the main cause of such a difference in execution
time between the compiler-generated code and the hand-optimized
code is the incapability of our compiler to efficiently
handle loops. To illustrate this, consider the example in Figure
14, which shows a typical example where software pipelining
is required to optimize the loop.
DO #16, L10
MOV X:a,X0 Y:b,Y0
MPY X0,Y0,A X:c,X1 Y:d,Y1
ADD X1,Y1,A
MOV A,X:e
(a) Compiled Compacted Code by Our Approach
MOV X:a,X0 Y:b,Y0
DO #15, L10
MPY X0,Y0,A X:c,X1 Y:d,Y1
ADD X1,Y1,A X:a,X0 Y:b,Y0
MOV A,X:e
(b) Hand-Optimized Compacted Code
Figure
14: Compaction Difference Between Our Compiled Code
and Hand-Optimized Code
Notice in the example that a parallel move for variables a
and b cannot be compacted into the instruction word containing
ADD because there is a dependence between MPY and them.
However, after placing one copy of the parallel move into the
preamble of the loop, we can now merge the move with ADD.
Although this optimization may not reduce the total code size,
it eliminates one instruction within the loop, which undoubtedly
would reduce the total execution time noticeably.
This example informs us that, since most of the execution
time is spent in loops, our compiler cannot match hand optimization
in run time speed without more advanced loop opti-
mizations, such as software pipelining, based on rigorous dependence
analysis. Currently, this issue remains for our future
research.
5. SUMMARY AND CONCLUSION
In this paper, we proposed a decoupled approach for supporting
a dual memory architecture, where the six code generation
phases are performed separately. We also presented
name splitting and merging as additional techniques. By comparing
our work with SPAM, we analyzed the pros and cons of
our decoupled approach as opposed to their coupled approach.
The comparative analysis of the experiments revealed that our
compiler achieved comparable results in code size; yet, our de-coupled
structure of code generation simplified our data allocation
algorithm for dual memory banks, which allows the algorithm
to run reasonably fast. The analysis also revealed that
exploiting dual memory banks by carefully assigning scalar
variables to the banks brought about the speedup at run time.
However, the analysis exposed several limitations of the current
techniques as well. For instance, while our approach was
limited to only scalar variables, we expect that memory bank
assignment for arrays can achieve a large performance enhancement
because most computations are performed on arrays
in number crunching programs. This is actually illustrated
in Figures 11 and 13, where even highly hand-optimized
code could not make a significant performance improvement
in terms of speed although we made a visible difference in
terms of size. This is mainly because the impact of scalar variables
on the performance is relatively low as compared with
the space they occupy in the code. Another limitation would
be to perform memory bank assignment on arguments passed
via memory to functions. This would require interprocedural
analysis since the caller must know the memory access patterns
of the callee for passing arguments. Also, certain loop
optimization techniques, like those listed in Section 4.4, need
to be implemented to further improve execution time of the
output code.
6. REFERENCES
--R
Challenges in Code Generation for Embedded Processors
Code Generation for Fixed-point DSPs
Compiler Support for Scalable and Efficient Memory Systems.
Register Allocation and Spilling via Graph Coloring.
Efficient and Fast Allocation of On-chip Dual Memory Banks
The Very Portable Optimizer for Digital Signal Processors.
Algorithms for Address Assignment in DSP Code Generation.
Retargetable Compilers for Embedded Core Processors.
Code Generation for Embedded Processors.
Motorola Inc.
Shortest Connection Networks and Some Generalizations.
Exploiting Dual Data-Memory Banks in Digital Signal Processors
Code Optimization Libraries For Retargetable Compilation For Embedded Digital Signal Processors.
Simultaneous Reference Allocation in Code Generation for Dual Data Memory Bank ASIPs.
--CTR
Yi-Hsuan Lee , Cheng Chen, An effective and efficient code generation algorithm for uniform loops on non-orthogonal DSP architecture, Journal of Systems and Software, v.80 n.3, p.410-428, March, 2007
G. Grwal , S. Coros , D. Banerji , A. Morton, Comparing a genetic algorithm penalty function and repair heuristic in the DSP application domain, Proceedings of the 24th IASTED international conference on Artificial intelligence and applications, p.31-39, February 13-16, 2006, Innsbruck, Austria
Chun-Gi Lyuh , Taewhan Kim, Memory access scheduling and binding considering energy minimization in multi-bank memory systems, Proceedings of the 41st annual conference on Design automation, June 07-11, 2004, San Diego, CA, USA
Jeonghun Cho , Yunheung Paek , David Whalley, Fast memory bank assignment for fixed-point digital signal processors, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.9 n.1, p.52-74, January 2004
Rajiv A. Ravindran , Robert M. Senger , Eric D. Marsman , Ganesh S. Dasika , Matthew R. Guthaus , Scott A. Mahlke , Richard B. Brown, Increasing the number of effective registers in a low-power processor using a windowed register file, Proceedings of the international conference on Compilers, architecture and synthesis for embedded systems, October 30-November 01, 2003, San Jose, California, USA
Rajiv A. Ravindran , Robert M. Senger , Eric D. Marsman , Ganesh S. Dasika , Matthew R. Guthaus , Scott A. Mahlke , Richard B. Brown, Partitioning Variables across Register Windows to Reduce Spill Code in a Low-Power Processor, IEEE Transactions on Computers, v.54 n.8, p.998-1012, August 2005
Zhong Wang , Xiaobo Sharon Hu, Energy-aware variable partitioning and instruction scheduling for multibank memory architectures, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.10 n.2, p.369-388, April 2005
Xiaotong Zhuang , Santosh Pande, Parallelizing load/stores on dual-bank memory embedded processors, ACM Transactions on Embedded Computing Systems (TECS), v.5 n.3, p.613-657, August 2006
Yi-Hsuan Lee , Cheng Chen, An Efficient Code Generation Algorithm for Non-orthogonal DSP Architecture, Journal of VLSI Signal Processing Systems, v.47 n.3, p.281-296, June 2007
Zhong Wang , Xiaobo Sharon Hu, Power Aware Variable Partitioning and Instruction Scheduling for Multiple Memory Banks, Proceedings of the conference on Design, automation and test in Europe, p.10312, February 16-20, 2004 | compiler;memory assignment;non-orthogonal architecture;dual memory;maximum spanning tree;graph coloring |
513858 | Compiler-directed cache polymorphism. | Classical compiler optimizations assume a fixed cache architecture and modify the program to take best advantage of it. In some cases, this may not be the best strategy because each loop nest might work best with a different cache configuration and transforming a nest for a given fixed cache configuration may not be possible due to data dependences. Working with a fixed cache configuration can also increase energy consumption in loops where the best required configuration is smaller than the default (fixed) one. In this paper, we take an alternate approach and modify the cache configuration for each nest depending on the access pattern exhibited by the nest. We call this technique compiler-directed cache polymorphism (CDCP). More specifically, in this paper, we make the following contributions. First, we present an approach for analyzing data reuse properties of loop nests. Second, we give algorithms to simulate the footprints of array references in their reuse space. Third, based on our reuse analysis, we present an optimization algorithm to compute the cache configurations for each nest. Our experimental results show that CDCP is very effective in finding the near-optimal data cache configurations for different nests in array-intensive applications. | INTRODUCTION
Most of today's microprocessor systems include several special
architectural features (e.g., large on-chip caches) that use
a significant fraction of on-chip transistors. These complex
and energy-hungry features are meant to be applicable across
different application domains. However, they are effectively
wasted for applications that cannot fully utilize them, as they
are implemented in a rigid manner. For example, not all the
loops in a given array-based embedded application can take
advantage of a large on-chip cache. Also, working with a fixed
cache configuration can increase energy consumption in loops
where the best required configuration (from the performance
angle) is smaller than the default (fixed) one. This is because
a larger cache can result in a large per access energy.
The conventional approach to address the locality problem
for caches (that is, the problem of maximizing the number
of cache hits) is to employ compiler optimization techniques
[8]. Current compiler techniques generally work under
the assumption of a fixed cache memory architecture, and
try to modify the program behavior such that the new behavior
becomes more compatible with the underlying cache
configuration. However, there are several problems with this
method. First, these compiler-directed modifications sometimes
are not effective when data dependences prevent necessary
program transformations. Second, the available cache
space sometimes cannot be utilized efficiently, because the
static configuration of cache does not match different requirements
of different programs and/or of different portions of
the same program. Third, most of the current compiler techniques
(adapted from scientific compilation domain) do not
take energy issues into account in general.
An alternative approach to the locality problem is to use re-configurable
cache structures and dynamically tailor the cache
configurations to meet the execution profile of the application
at hand. This approach has the potential to address the locality
problem in cases where optimizing the application code
alone fails. However, previous research on this area [1, 9] is
mainly focused on the implementation and the employment
mechanisms of these designs, and lacks software-based techniques
to direct dynamic cache reconfigurations. Recently, a
compiler-directed scheme to adapt the cache assist was proposed
in [6]. Our work focuses on the cache as opposed to the
cache assist.
In this paper, we propose a strategy where an optimizing
compiler decides the best cache configuration for each nest
in the application code. More specifically, in this paper, we
make the following contributions. First, we present techniques
for analyzing the data reuse properties of a given loop nest
and constructing formal expressions of these reuse patterns.
Second, we develop algorithms to simulate the footprints of
array references. Our simulation approach is much more efficient
than classical cycle-based simulation techniques as it
simulates only data reuse space. Third, we develop an optimization
algorithm for computing the optimized cache configurations
for each loop nest. We also provide a program level
algorithm for selecting dynamic cache configurations. We focus
on the behavior of array references in loop nests as loop
nests are the most important part of array-intensive media
and signal processing application programs. In most cases,
the computation performed in loop nests dominates the execution
time of these programs. Thus, the behavior of the
loop nests determines both performance and energy behavior
of applications. Previous research [8] shows that the performance
of loop nests is directly influenced by the cache behavior
of array references. Also, recently, energy consumption has
become an important issue in embedded systems [9]. Conse-
quently, determining a suitable combination of cache memory
configuration and optimized software is a challenging problem
in embedded design world.
The rest of this paper is organized as follows. Section 2 reviews
basic concepts, notions, and representations for array-based
codes. In Section 3, concepts related to cache behavior
such as cache misses, interferences, data reuse, and data locality
are analyzed. Section 4 introduces our compiler-directed
cache polymorphism technique, and presents a complete set
of algorithms to implement it. We present experimental results
in Section 5 to show the effectiveness of our technique.
Finally, Section 6 concludes the paper with a summary and
discusses some future work on this topic.
2. ARRAY-BASED CODES
This paper is particularly targeted at the array-based codes.
Since the performance of loop nests dominates the overall performance
of the array-based codes, optimizing nests is particularly
important for achieving best performance in many
embedded signal and video processing applications. Optimizing
data locality (so that the majority of data references are
satisfied from the cache instead of main memory) can improve
the performance and energy efficiency of loop nests in the following
ways. First, it can significantly reduce the number of
misses in data cache, thus avoiding frequent accesses to lower
memory hierarchies. Second, by reducing the number of accesses
to the lower memory hierarchies, the increased cache hit
rate helps promote the energy efficiency of the entire memory
system. In this section, we discuss some basic notions about
array-based codes, loop nests, array references as well as some
assumptions we made.
2.1 Representation for Programs
We assume that the application code to be optimized has
the format which is shown in Figure 1.
Assumption 1. Each array in the application code being
optimized is declared in the global declaration section of the
program. The arrays declared in the global section can be referenced
by any loop in the code.
This assumption is necessary for our algorithms that will
be discussed in following sections. In the optimization stage
of computing the cache configuration for the loop nests, Assumption
1 ensures an exploitable relative base address of
each array involved.
Global Declaration Section of Arrays;
main(int argc, char *argv[ ])
{
Loop Nest No. 0;
Loop Nest No. 1;
...
Loop Nest No. l;
}
Figure 1: Format for a Program.
for (i1 = l1; i1 <= u1; i1 += s1)
  for (i2 = l2; i2 <= u2; i2 += s2)
    ...
      for (in = ln; in <= un; in += sn)
      {
        ... AR1, AR2, ..., ARr ...   /* array references with subscripts fj,k(i) */
      }
Figure 2: Format for a Loop Nest.
Since loop nests are the main structures in array-based pro-
grams, program codes between loop nests can be neglected.
We also assume that each nest is independent from the oth-
ers. That is, as shown in Figure 1, the application contains
a number of independent nests, and no inter-loop-nest data
reuse is accounted for. This assumption can be relaxed to
achieve potentially more effective utilization of reconfigurable
caches. This will be one of our future research. Note that several
compiler optimizations such as loop fusion, fission, and
code sinking can be used to bring a given application code
into our format [12].
Assumption 2. All loop nests are at the same program lexical
level, the global level. There is no inter-nesting between
any two different loop nests.
Assumption 3. All nests in the code are perfectly-nested,
i.e., all array operations and array references only occur at
the innermost loop.
These assumption, while not vital for our analysis, make our
implementation easier. We plan to relax these in our future
work.
2.2 Representation for Loop Nests
In our work, loop nests form the boundaries at which dynamic
cache reconfigurations occur. Figure 2 shows the format
for a loop nest.
In this format, ~i stands for the loop index vector, and ~l, ~u,
and ~s are the corresponding
lower bound, upper bound, and stride vectors for the loop indices.
AR1 through ARr refer to different instances of array references in the nest.
Note that these may be same or different references to the
same array, or different references to different arrays. Function
fj,k(~i) is the subscript (expression) function (of ~i) for the
k-th subscript of the j-th array reference, where 1 <= j <= r and
1 <= k <= dk, and dk is the number of dimensions of the
corresponding array.
2.3 Representation for Array References
In a loop nest with the loop index vector ~i, a reference ARj
to an array with m dimensions is expressed as ARj[fj,1(~i)][fj,2(~i)]...[fj,m(~i)].
We assume that the subscript expression functions f j;k ( ~ i) are
affine functions of the loop indices and loop-invariant con-
stants. A row-major storage layout is assumed for all arrays
as in C language. Assuming that the loop index vector is
an n-depth vector, that is, ~i = (i1, i2, ..., in)^T where n is
the number of loops in the nest, an array reference can be
represented as
(fj,1(~i), fj,2(~i), ..., fj,m(~i))^T = A (i1, i2, ..., in)^T + (c1, c2, ..., cm)^T.   (1)
The vector at the left side of the above equation is called
array reference subscript vector ~
f . The matrix above is defined
as access matrix A. The rightmost vector is known as
the constant offset vector ~c. Thus, the above equation can be
also written as [12]:
~f = A ~i + ~c.   (2)
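A direct C encoding of this representation is sketched below; the type and field names are our own illustration, not the compiler's intermediate form:

#define MAX_DIMS  4
#define MAX_LOOPS 4

/* f = A * i + c for one array reference (Equation 2). */
typedef struct {
    int m, n;                     /* m dimensions, n enclosing loops */
    int A[MAX_DIMS][MAX_LOOPS];   /* access matrix                   */
    int c[MAX_DIMS];              /* constant offset vector          */
} ArrayRef;

/* Evaluate the subscript vector f for a concrete loop index vector i. */
void subscripts(const ArrayRef *r, const int i[], int f[]) {
    for (int d = 0; d < r->m; d++) {
        f[d] = r->c[d];
        for (int l = 0; l < r->n; l++)
            f[d] += r->A[d][l] * i[l];
    }
}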
3. CACHE BEHAVIOR
In this section, we review some basic concepts about cache
behavior. As noted earlier, in array-intensive applications,
cache behavior is largely determined by the footprints of the
data manipulated by loop nests. In this paper, we first propose
an algorithm for analyzing the cache behavior for different
arrays and different array references in a given loop nest.
Based on the information gathered from this analysis, we then
propose another algorithm to compute the cache memory demand
in order to achieve a perfect cache behavior for the loop
nest being analyzed, and suggest a cache configuration.
3.1 Cache Misses
There are three types of cache misses: compulsory (cold)
misses, capacity misses, and conflict (interference) misses.
Different types of misses influence the performance of program
in different ways. Note that, most of the data caches
used in current embedded systems are implemented as set-associative
caches or direct-mapping caches in order to achieve
high speed, low power, and low implementation cost. Thus,
for these caches, interference misses can dominate the cache
behavior, particularly for array-based codes. It should be
stressed that since the cache interferences occur in a highly
irregular manner, it is very difficult to capture them accurately
[11]. Ghosh et al. proposed cache miss equations in [4]
as an analytical framework to compute potential cache misses
and direct code optimizations for cache behavior.
3.2 Data Reuse and Data Locality
Data reuse and data locality concepts are discussed in [12]
in detail. Basically, there are two types of data reuses: temporal
reuse and spatial reuse. In a given loop nest, if a reference
accesses the same memory location across different loop iter-
ations, this is termed as temporal reuse; if the reference accesses
the same cache block (not necessarily the same memory
location), we call this spatial reuse. We can consider temporal
reuse to be a special case of spatial reuse. If there are different
references accessing the same memory location, we say that a
group-temporal reuse exists; whereas if different references are
accessing the same cache block, it is termed as group-spatial
reuse. Note that group reuse only occurs among different references
of the same array in a loop nest. When the reused
data item is found in the cache, we say that the reference exhibits
locality. This means that data reuse does not guarantee
data locality. We can convert a data reuse into locality only
by catching the reused item in cache. Classical loop-oriented
compiler techniques try to achieve this by modifying the loop
access patterns.
4. ALGORITHMS FOR CACHE POLYMORPHISM
The performance and energy behavior of loop nests are
largely determined by their cache behavior. Thus, how to optimize
the cache behavior of loop nests is utmost important
for satisfying high-performance and energy efficiency demands
of array-based codes.
There are at least two kinds of approaches to perform optimizations
for cache behavior. The conventional way is compiler
algorithms that transform loops using interchange, re-
versal, skewing, and tiling transformations, or transform the
data layout to match the array access pattern. As mentioned
earlier, the alternative approach is to modify the underlying
cache architecture depending on the program access pattern.
Recent research work [7] explores the potential benefits from
the second approach. The strategy presented in [7] is based on
exhaustive simulation. The main drawback of this simulation-based
strategy is that it is extremely time consuming and can
consider only a fixed set of configurations. Typically, simulating
each nest with all possible cache configurations makes this
approach unsuitable in practice. In this section, we present
an alternative way for determining the suitable cache configurations
for different sections (nests) of a given code.
4.1 Compiler-directed Cache Polymorphism
The existence of cache interferences is the main factor that
degrades the performance of a loop nest. Cache interferences
disrupt the data reuse in a loop nest by preventing data
reuse from being converted into locality. Note that both self-
interferences or cross-interferences can prevent a data item
from being used while it is still in the cache. Our objective is
then to determine the cache configurations that help reduce
interferences. The basic idea behind the compiler-directed
cache polymorphism (CDCP) is to analyze the source code of
an array-based program and determine data reuse characteristics
of its loop nests at compile time, and then to compute a
suitable (near-optimal) cache configuration for each loop nest
to exploit the data locality implied by its reuse. The near-optimal
cache configuration determined for each nest eliminates
most of the interference misses while keeping the cache
size and associativity under control. In this way, it optimizes
execution time and energy at the same time. In fact, increasing
either cache capacity or associativity further only
increases energy consumption. In this approach, the source
codes are not modified (obviously, they can be optimized be-
f
Figure
3: Example Code - a Loop Nest.
fore our algorithms are run; what we mean here is that we do
not do any further code modifications for the sake of cache
morphism).
At the very high level, our approach can be described as
follows. First, we use compiler to transform the source codes
into an intermediate format. In the second step, each loop
nest is processed as a basic element for cache configuration.
In each loop nest, references of each array are assigned into
different uniform reference sets. Each uniform set is then analyzed
to determine the reuse they exhibit over different loop
levels. Then, for each array, an algorithm is used to simulate
the footprints of the reuse space within the layout space of
this array. Following this, a loop nest level algorithm optimizes
the cache configurations while ensuring data locality.
Finally, the code is generated such that these dynamic cache
configurations are activated at runtime (in appropriate points
in the application code).
4.2 Array References and Uniform Reference
Sets
Every array reference is expressed in Equation 2, ~
in which ~
f is the subscript vector, A is the access matrix, ~ i
is the loop index vector and ~c is the constant vector. All
the information are stored in the array reference leaf, array
node and its parent loop-nest node of the intermediate codes.
Consider a piece of code in Figure 3, which is a loop nest:
The first reference of array a is represented by the following
access matrix Aa and constant offset vector \Gamma! ca ,
The reference to array b is also represented by its access matrix
A b and constant offset vector \Gamma! c b :
The definition of uniform reference set is very similar to
the uniformly generated set [3]. If two references to an array
have the same access matrix and only differ in constant offset
vectors, these two references are said to belong to the same
uniform reference set. Constructing uniform reference sets for
an array provides an efficient way for analyzing the data reuse
for the said array. This is because all references in an uniform
reference set have same data access patterns and data reuse
characteristics. Also, identifying uniform reference sets allows
us to capture group reuse easily.
4.3 Algorithm for Reuse Analysis
In the following sections, we use a bottom-up approach
to introduce the algorithms for implementing our compiler-
INPUT: access matrix Am\Lambdan of a uniform reference set
array node, loop-nest node
a given cache block size: BK SZ
OUTPUT: self-reuse pattern vector \Gamma\Gamma\Gamma! SRPn of this uniform set
Begin
Initial self-reuse pattern vector: \Gamma\Gamma\Gamma!
current loop level CLP to be the innermost loop:
current dimension level CDN to be the highest
dimension:
Set index occurring flag IOF
If Element in access matrix A[CDN ][CLP
Break
Go up to the next lower dimension level
While CDN == the lowest dimension
If IOF == FALSE
Set reference has temporal reuse at this level:
Else If CDN == m
If A[CDN ][CLP
Set reference has spatial reuse at this level:
Go up to the next higher loop level
While CLP == the outermost loop level
End.
Figure
4: Algorithm 1: Self-Reuse Analysis.
directed cache polymorphism technique. First, algorithms analyzing
the data reuses including self-reuses and group-reuses
are provided for each uniform reference set in this subsection.
4.3.1 Self-Reuse Analysis
Before the reuse analysis, all references of an array in a loop
nest are first constructed into several uniform reference sets.
Self-reuses (both temporal and spatial) are analyzed at the
level of uniform set. This algorithm works on access matrix.
The detailed algorithm is shown in Figure 4.
This algorithm checks each loop index variable from the
innermost loop to the outermost loop to see whether it occurs
in the subscript expressions of the references. If the j th
loop index variable i j does not occur in any subscript expres-
sion, the reflection in access matrix is that all elements in
the j th column are 0. This means that the iterations at the
th loop do not change the memory location accessed, i.e.,
the array reference has self-temporal reuse in the j th loop.
If the index variable only occurs in the lowest (the fastest-
dimension (i.e., the m th dimension), the distance
between the contiguous loop iterations is checked. In the al-
gorithm, s[CLP ] is the stride of the CLP th loop, BK SZ is
a given cache block size and ELMT SZ is the size of array
elements. If the distance (A[CDN ][CLP ] s[CLP ]) between
two contiguous iterations of this reference is within a cache
block, it has spatial reuse in this loop level.
4.3.2 Group-Reuse Analysis
Group reuses only exist among references in the same uniform
reference set. Group-temporal reuse occurs when different
references access the same data location across the loop
iterations, while group-spatial reuse exists when different references
access the same cache block in the same or different
loop iterations. Algorithm 2 in Figure 5 exploits a simplified
version of group reuse which only exists in one loop level.
When a group-spatial reuse is found at a particular loop
level, the algorithm in Figure 5 first checks whether this level
INPUT: a uniform reference set with A and ~cs
array node, loop-nest node
a given cache block size: BK SZ
OUTPUT: group-reuse pattern vector \Gamma\Gamma\Gamma\Gamma! GRPn of this uniform set
Begin
Initial group-reuse pattern vector: \Gamma\Gamma\Gamma\Gamma!
For each pair of constant vectors ~
c1 and ~
If ~
c1 and ~
c2 only differ at the j th element
Check the j th row in access matrix A
Find the first occurring loop index variable (non-zero
element) starting from the innermost loop, say i
Continue
Else
Check the k th column of access matrix A
only occurs in the j th dimension
is the lowest dimension of array
If init dist%A[k][m] == 0
Else If GRP[k] == 0
Else
If init dist%A[k][m] == 0
End.
Figure
5: Algorithm 2: Group-Reuse Analysis.
has group-temporal reuse for other pairs of references. If it
does not have such reuse, this level will be set to have group-
spatial reuse. Otherwise, it just omits the current reuse found.
For group-temporal reuse found at some loop level, the element
corresponding to that level in the group-reuse vector
\Gamma\Gamma\Gamma! GRPn will be directly set to have group-temporal reuse.
Now, for each array and each of its uniform reference sets
in a particular loop nest, using Algorithm 1 and Algorithm
2, the reuse information at each loop level can be collected.
As for the example code in subsection 4.3, references to array
a have self-spatial reuse at loop level l, self-temporal reuse
at loop level j and group reuse at loop level j. Reference of
array b has self-spatial reuse at loop level i.
Note that, in contrast to the most of the previous work in
reuse analysis (e.g., [12]), this approach is simple and computes
reuse information without solving a system of equations.
4.4 Simulating the Footprints of Reuse Spaces
The next step in our approach is to transform those data
reuses into real data localities. A straightforward idea is to
make the data cache large enough to hold all the data in these
reuse spaces of the arrays. Note that data which are out of
reuse spaces are not necessary to be kept in cache after the
first reference since there is no reuse for those data. As discussed
earlier, the cache interferences can significantly affect
the overall performance of a nest. Thus, the objective of our
technique is to find a near-optimal cache configuration, which
can reduce or eliminate the majority of the cache interferences
within a nest. An informal definition of a near-optimal cache
configuration is as follows:
Definition 1. A near-optimal cache configuration is the
possibly smallest cache in size and associativity which achieves
a near-optimal number of cache misses. And, any increase in
either cache size or associativity over this configuration does
not deliver further significant improvement.
In order to figure out such a near-optimal cache configuration
that would contain the entire reuse space for a loop
nest, the real cache behavior in these reuse spaces must be
made available for potential optimizations. In this section,
we provide an algorithm that simulates the exact footprints
(memory addresses) of array references in their reuse spaces.
Suppose, for a given loop index vector ~ i, an array reference
with a particular value of ~ can be
expressed as follows:
Here, SA is starting address of the array reference, which
is different from the base address (the memory address of
the first array element) of an array. It is the constant part
of the above equation. Suppose that the data type size of
the array elements is elmt sz, the depth of dimension is m,
the dimensional bound vectors are \Gamma!
and the constant offset vector
is derived from the following equation
ae
are integrated coefficients of the loop
index variables. Suppose the access matrix is Am\Lambdan , Cof j is
derived as follows:
ddk a lj ;
ae
Note that, with Equation 3, the address of an array reference
at a particular loop iteration can be calculated as the
offset in the layout space of this array. The algorithm provided
in this section is using these formulations to simulate
the footprints of array references at each loop iteration within
their reuse spaces. Following two observations give some basis
as to how to simulate the reuse spaces.
Observation 1. In order to realize the reuse carried by the
innermost loop, only one cache block is needed for this array
reference.
Observation 2. In order to realize the reuse carried by
a non-innermost loop, the minimum number of cache blocks
needed for this array reference is the number of cache blocks
that are visited by the loops inner than it.
Since we have assumed that all subscript functions are affine,
for any array reference, the patterns of reuse space during
different iterations at the loop level which has the reuse are
exactly the same. Thus, we only need to simulate the first
iteration of the loop having the reuse currently under ex-
ploiting. For example, loop level j in loop vector ~ i has the
reuse we are exploiting, the simulation space is defined as
k?j varies from its lower bound l k to upper bound uk .
Algorithm 3 (shown in Figure 6) first calls Algorithms 1 and
2. Then, it simulates the footprints of the most significant
reuse space for an array in a particular loop nest. These
footprints are marked with a array bitmap.
4.5 Computation and Optimization of cache
Configurations for Loop Nests
INPUT: an array node, a loop-nest node
a given cache block size: BK SZ
OUTPUT: an array-level bitmap for footprints
Begin
Initial array size AR SZ in number of cache blocks
Allocate an array-level bitmap ABM with size AR SZ
and initial ABM to zeros
Initial the highest reuse level RS
//n is the depth of loop nest
For each uniform reference set
Call Algorithm 1 for self-reuse analysis
Call Algorithm 2 for group-reuse analysis
highest reuse level of this set
If RS LEV ? URS LEV
If RS LEV == n
For all references of this array
l//only use the lower bound
apply equation 3 to get the reference address f( ~ i)
transfer to block id: bk
set array bitmap: ABM [bk
Else
For all loop indexes
varies the value of i j from lower bound to upper bound
For all references of this array
apply equation 3 to get the reference address f( ~ i)
transfer to block id: bk
set array bitmap: ABM [bk
End.
Figure
Algorithm 3: Simulation of Footprints in
Reuse Spaces.
In previous subsections, the reuse spaces of each array in
a particular loop nest have been determined and their footprints
have also been simulated in the layout space of each
array. Each array has a bitmap indicating the cache blocks
which have been visited by the iterations in reuse spaces after
applying Algorithm 3. As we discussed earlier, the phenomena
of cache interferences can disturb these reuses and prevent
the array references from realizing data localities across loop
iterations. Thus, an algorithm that can reduce these cache
interferences and result in better data localities within the
reuse spaces is crucial.
In this subsection, we provide a loop-nest level algorithm to
explicitly figure out and display the cache interferences among
different arrays accessed within a loop nest. The main point
of this approach is to map the reuse space of each array into
the real memory space. At the same time, the degree of conflict
(number of interferences among different arrays) at each
cache block is stored in a loop-nest level bitmap. Since the
self-interference of each array is already solved by Algorithm
3 using an array bitmap, this algorithm mainly focuses on
reducing the group-interference that might occur among different
arrays. As is well-known, one of the most effective way
to avoid interferences is to increase the associativity of data
cache, which is used in this algorithm. Based on the definition
of near-optimal cache configuration, this algorithm tries
to find the smallest data cache with smallest associativity that
achieves significantly reduced cache interferences and nearly
perfect performance of the loop nest. Figure 7 shows the detailed
algorithm (Algorithm 4) that computes and optimizes
the cache configuration.
For a given loop nest, Algorithm 4 starts with the cache
block size (BK SZ) from its lower bound, e.g., 16 bytes and
goes up to its upper bound, e.g., 64 bytes. At each particular
BK SZ, it first applies Algorithm 3 to obtain the array bitmap
ABM of each array. Then it allocates a loop-nest level bitmap
INPUT: loop-nest node
global list of arrays declared
lower bound of block size: Bk SZ LB
upper bound of block size: Bk SZ UB
OUTPUT: optimal cache configurations at diff. BK SZ
Begin
For each array in this loop nest
Call algorithm 3 to get the array bitmap ABM
create and initial a loop-nest level bitmap LBM,
with the size is the smallest 2 n that is -
the size of the largest array (in block): LBM size
For each array bitmap ABM
map ABM into the loop-nest bitmap LBM
with the relative base-address of array: base addr
to indicate the degree of conflict at each block
For block id ! array size
base addr)%LBM
ABM [block id]
set the largest degree of conflict in LBM
set cache
set optimal cache conf. to current cache conf.
For assoc ! assoc upper bound
half the number of sets of current cache by
For
set highest value of LBM [i]; i - LBM size
set cache size = assoc LBM size
If assoc ! assoc upper bound
and cache size ! optimal cache size
set optimal cache conf. to current cache conf.
give out optimal cache conf. at BK SZ
doubling BK SZ
while
End.
Figure
7: Algorithm 4: Compute and Optimize cache
Configurations for Loop Nests.
LBM for all arrays within this nest, whose size is the smallest
value in power of 2 that is greater or equal to the largest
array size. All ABMs are remapped to this LBM with their
relative array base addresses. The value of each bits in LBM
indicates the conflict at a particular cache block. Following
this, the optimization is carried out by halving the size of
LBM and remapping LBM . The largest value of bits in
LBM also shows the smallest cache associativity needed to
avoid the interference in the corresponding cache block. This
process is ended when the upper bound of associavitity is met.
A near-optimal cache configuration at block size BK SZ is
computed as the one which has smallest cache size as well as
the smallest associativity.
4.6 Global Level cache Polymorphism
The compiler-directed cache polymorphism technique does
not make changes to the source code. Instead, it uses compiler
only for source code parsing and generates internal code with
the intermediate format which is local to our algorithms. A
global or program level algorithm, Algorithm 5 (in Figure 8)
is presented in this subsection to obtain the directions (cache
configurations for each nest of a program) of the cache reconfiguration
mechanisms.
This algorithm first generates the intermediate format of
the original code and collects the global information of arrays
in source code. After that, it applies Algorithm 4 to each of its
loop nests and obtains the near-optimal cache configurations
for each of them. These configurations are stored in the cache-
configuration list (CCL). Each loop nest has a corresponding
INPUT: source code(.spd)
OUTPUT: Performance data and its cache configurations
for each loop nest
Begin
Initial cache-configuration list: CCL
Use one SUIF pass to generate the intermediate code format
Construct a global list of arrays declared with its
relative base address
For each loop nest
For each array in this loop nest
Construct uniform reference sets for all its references
Call algorithm 4 to optimize the cache configurations
for this loop nest
store the configurations to the CCL
For each block size
activate reconfiguration mechanisms with each loop nest
using its configuration from the CCL
Output performance data as well as the cache configuration
of each loop nest
End.
Figure
8: Algorithm 5: Global Level cache Polymorphism
#define N 8
int a[N][N][N], b[N][N][N];
f
int i, j, k, l;
f
Figure
9: An Example: Array-based Code.
node in CCL which has its near-optimal cache configurations
at different block sizes. After the nest-level optimization is
done, Algorithm 5 activates the cache reconfiguration mech-
anisms, in which a modified version of the Shade simulator
is used. During the simulation, Shade is directed to use the
near-optimal cache configurations in CCL for each loop nest
before its execution. The performance data of each loop nest
under different cache configurations is generated as output.
Since current cache reconfiguration mechanisms can only
vary cache size and cache ways with fixed cache block size,
the cache optimization is done for different (fixed) cache block
sizes. This means that the algorithms in this paper suggest
a near-optimal cache configuration for each loop nest for a
given block size. In the following section, experimental results
verifying the effectiveness of this technique are presented.
4.7 An Example
In this subsection, we focus on the example code in Figure
9 to illustrate how the compiler-directed cache polymorphism
technique works. For simplicity, this code only contains one
nest.
Algorithm 5 starts with one SUIF pass to convert the above
source code into intermediate code, in which the program
node only has one loop-nest node. The loop-nest node is
represented by its index vector ~ with an index
lower bound vector of \Gamma! , an upper bound
vector of \Gamma! stride vector of \Gamma!
. Within the nest, arrays a and b have references
AR a 1 , AR a 2 , AR a 3 and AR b , which are represented in access
matrices and constant vectors as follows:
A a 1 :@ 1
A a 2 :@ 1
A a 3 :@
Also, a global array list is generated as ! a; b ?. Then,
for array a, references AR a 1 and AR a 2 are grouped into one
uniform reference set, and AR a 3 is put to another one. Array
b, on the other hand, has only one uniform reference set.
Then, Algorithm 4 is invoked and starts from the smallest
cache block size, BK SZ, say 16 bytes. It uses Algorithm 3
to obtain the array bitmap ABMa for array a and ABM b for
array b at BK SZ. Within Algorithm 3, we first call Algorithm
1 and Algorithm 2 to analyze the reuse characteristics
of a given array. In our example, the first uniform set of array
a has self-spatial reuse at level l, group-temporal reuse at
level j, the second uniform set has self-spatial reuse at level
l and self-temporal reuse at level j. Reference of array b has
self-spatial reuse at level i. The highest level of reuse is then
used for each array by Algorithm 3 to generate the ABM for
its footprints in the reuse space. We assume an integer has 4
bytes in size. In this case, both ABMa and ABM b have 128
bits shown as follows:
These two ABMs are then passed by Algorithm 3 to Algorithm
4. In turn, Algorithm 4 creates a loop-nest bitmap
LBM with size being equal to the largest array size, MAX(
ABMs), and re-maps ABMa and ABM b to LBM . Since array
a has relative base address at 0 (byte), and array b at
2048, we determine LBM as follows:
Name Arrays Nests Brief Description
Alternate Direction Integral
aps.c 17 3 Mesoscale Hydro Model
bmcm.c 11 3 Molecular Dynamic of Water
Computation
tomcat.c 9 8 Mesh Generation
Array-based Computation
vpenta.c 9 8 Nasa Ames Fortran Kernel
Molecular Dynamics of Water
Table
1: The Array-based Benchmarks Used in the
Experiments.
The maximum value of bits in LBM indicates the number of
interference among different arrays in the nest. Thus, it is the
least associativity that is required to avoid this interference.
In this example, Algorithm 4 starts from a cache associativity
of 2 to compute the near-optimal cache configuration. Each
time, the size of LBM is halved and the LBM is re-mapped
until the resulting associativity reaches the upper bound, e.g.,
16. Then it outputs the smallest cache size with smallest associativity
as the near-optimal configuration at this block size
BK SZ. For this example, the near-optimal cache configuration
is 2KB 2-way associative cache at
The LBM after optimization is shown as follows:
Following this, Algorithm 4 continues to compute the near-optimal
cache configurations for larger cache block sizes by
doubling the previous block size. When the block size reaches
its upper bound, e.g., 64 bytes, this algorithm stops to pass
all the near-optimal configurations at different block sizes to
Algorithm 5. On receiving these configurations, Algorithm
activates Shade to simulate the example code (executable)
with these cache configurations. Then the performance data
is generated as the output of Algorithm 5.
5. EXPERIMENTS
5.1 Simulation Framework
In this section, we present our simulation results to verify
the effectiveness of the CDCP technique. Our technique
has been implemented using SUIF [5] compiler and Shade
[2]. Eight array-based benchmarks are used in this simulation
work. In each benchmark, loop nests dominate the over-all
execution time. Our benchmarks, the number of arrays
(for each benchmark) and the number of loop nests (for each
are listed in Table 1.
Our first objective here is to see the cache configurations
returned by our CDCP scheme and a scheme based on exhaustive
simulation (using Shade). We consider three different
block (line) sizes: 16, 32 and 64 bytes. Note that our work
is particularly targeted at L1 on-chip caches.
5.2 Selected cache Configurations
In this subsection, we first apply an exhaustive simulation
method using the Shade simulator. For this method, the original
program codes are divided into a set of small programs,
each program having a single nest. Shade simulates these
loop nests individually with all possible L1 data cache configurations
within the following ranges: cache sizes from 1K
to 128K, set-associativity from 1 way to 16 ways, and block
size at 16, 32 and 64 bytes. The number of data cache misses
is used as the metric for comparing performance. The optimal
cache configuration at a certain cache block size is the
smallest one in terms of both cache size and set associativity
that achieves a performance (the number of misses) which
cannot be further improved (the number of misses cannot be
reduced by 1%) by increasing cache size and/or set associa-
tivities. The left portion of Table 2 shows the optimal cache
configurations (as selected by Shade) for each loop nest in
different benchmarks as well as at different cache block sizes.
The compiler-directed cache polymorphism technique directly
takes the original source code in the SUIF .spd format
and applies Algorithm 5 to generate the near-optimal
cache configurations for each loop nest in the source code. It
does not do any instruction simulation for configuration op-
timization. Thus, it is expected to be very fast in finding
the near-optimal cache configuration. The execution engine
(a modified version of Shade) of CDCP directly applies these
cache configurations to activate the reconfiguration mechanisms
dynamically. The cache configurations determined by
are shown on the right part of Table 2. To sum up, in
Table
2, for each loop nest in a given benchmark, the optimal
cache configurations from Shade and near-optimal cache configurations
from CDCP technique at block sizes 16, 32, and
64 bytes are given. A notation such as 8k4s is used to indicate
a 8K bytes 4-way set associative cache with a block size of 32
bytes. In this table, B means bytes, K denotes kilobytes and
indicates megabytes.
From
Table
2, we can observe that CDCP has the ability to
determine cache capacities at byte granularity. In most cases,
the cache configuration determined by CDCP is less than or
equal to the one determined by the exhaustive simulation.
5.3 Simulation Results
The two sets of cache configurations for each loop nests
given in Table 2 are both simulated at the program level. All
configurations from CDCP with cache size less than 1K are
simulated at 1K cache size with other parameters unmodified.
For best comparison, the performance is shown as the cache
hit rate instead of the miss rate. Figure 10 gives the performance
comparison between Shade (exhaustive simulation)
and CDCP using a block size of 16 bytes.
Figure
10: Performance Comparison of cache Configurations
at Block Size of 16: Shade Vs CDCP.
We see from Figure 10 that, for benchmarks adi:c, aps:c,
bmcm:c and wss:c, the results obtained from Shade and CDCP
are very close. On the other hand, Shade outperforms CDCP
in benchmarks ef lux:c, tomcat:c and vpenta:c, and CDCP
Codes Shade CDCP
adi
aps
3 4k2s 4k8s 8k8s 2k16s 4k8s 8k8s
bmcm
eflux
3 128k16s 128k16s 128k1s 128k8s 256k2s 256k2s
6 128k16s 128k16s 128k1s 128k8s 256k2s 256k2s
tomcat
3 128k4s 128k8s 128k1s 64k1s 128k2s 256k2s
6 1k2s 1k4s 2k4s 64B4s 128B4s 256B2s
7 64k4s 128k8s 128k8s 32k4s 64k8s 128k16s
tsf
3 4k4s 4k16s 8k4s 4k1s 4k1s 4k1s
vpenta
3 1k4s 2k2s 2k8s 256B4s 512B2s 1k2s
5 1k4s 2k4s 4k2s 256B4s 512B2s 1k2s
6 1k2s 2k2s 2k8s 128B8s 256B4s 512B8s
7 1k2s 1k2s 1k16s 64B1s 128B2s 256B4s
wss
3 1k2s 1k2s 1k2s 64B2s 128B4s 256b4s
6 1k2s 1k2s 1k2s 32B2s 64B1s 128B2s
Table
2: cache Configurations for each Loop Nest in
Benchmarks: Shade Vs CDCP.
outperforms Shade in tsf:c. Figures 11 and 12 show the results
with block sizes of 32 and 64 bytes, separately.
We note that, for most benchmarks, the performance difference
between Shade and CDCP decreases as the block size
is increased to 32 and 64 bytes. Especially for benchmarks
adi:c, aps:c, bmcm:c and wss:c, the performances from the
two approaches are almost the same. For other benchmarks
such as tsf:c and vpenta:c, our CDCP strategy consistently
outperforms Shade when block size is 32 or 64 bytes. This
is because the exhaustive Shade simulation has a searching
range (for cache sizes) from 1K to 128K as explained earlier,
while CDCP has no such constraints (that is, it can come
up with a non-standard cache size too). Obviously, we can
use much larger and/or much finer granular cache size for exhaustive
simulation. But, this would drastically increase the
simulation time, and is not suitable for practice. In contrast,
Figure
Performance Comparison of cache Configurations
at Block Size of 32: Shade Vs CDCP.
the CDCP strategy can determine any near-optimal cache
configuration without much increase in search time.
Figure
12: Performance Comparison of cache Configurations
at Block Size of 64: Shade Vs CDCP.
For more detailed study, we break down the performance
comparison at loop nest level for benchmark aps:c. Figure 13
shows the comparison for each loop nest of this benchmark at
different cache block sizes.
Figure
13: Loop-nest Level Performance Comparison
of cache Configurations for asp.c: Shade Vs CDCP.
The results from the loop nest level comparison show that
the CDCP technique is very effective in finding the near-optimal
cache configurations for loop nests in this benchmark,
especially at block sizes of 32 and 64 bytes (the most common
block sizes used in embedded processors). Since CDCP
is analysis-based not simulation-based, we can expect that it
will be even more desirable in codes with large input sizes.
From energy perspective, the Cacti power model [10] is used
to compute the energy consumption in L1 data cache for each
loop nest of our benchmarks at different cache configurations
listed in Table 2. We use 0.18 micron technology for all the
cache configurations. The detailed energy consumption figures
are given in Table 3.
Codes Shade CDCP
adi
aps
bmcm
eflux
6 2573.0 2666.1 375.0 1323.3 795.5 821.3
tomcat
28.4 27.5 28.1 28.4 27.5 74.3
7 9461.3 18865.2 25190.9 9647.7 21984.0 57050.0
tsf
vpenta
5 188.4 216.9 108.7 188.4 97.4 98.3
wss
6 74.8 73.8 74.6 74.8 27.6 74.6
Table
3: Energy Consumption (microjoules) of L1
Data cache for each Loop Nest in Benchmarks with
Configurations in Table 2: Shade Vs CDCP.
From our experimental results, we can conclude that (i)
our strategy generates competitive performance results with
exhaustive simulation, and (ii) in general it results in a much
lower power consumption than a configuration selected by
exhaustive simulation. Consequently, our approach strikes a
balance between performance and power consumption.
6. CONCLUSIONS AND FUTURE WORK
In this paper, we propose a new technique, compiler-directed
cache polymorphism, for optimizing data locality of array-based
embedded applications while keeping the energy consumption
under control. In contrast to many previous tech-
Energy estimation is not available from Cacti due to the very
small cache configuration.
niques that modify a given code for a fixed cache architec-
ture, our technique is based on modifying (reconfiguring) the
cache architecture dynamically between loop nests. We presented
a set of algorithms that (collectively) allow us to select
a near-optimal cache configuration for each nest of a given
application. Our experimental results obtained using a set of
array-intensive applications reveal that our approach generates
competitive performance results and consumes much less
energy (when compared to an exhaustive simulation based
framework). We plan to extend this work in several direc-
tions. First, we would like to perform experiments with different
sets of applications. Second, we intend to use cache
polymorphism at granularities smaller than loop nests. And
finally, we would like to combine CDCP with loop/data based
compiler optimizations to optimize both hardware and software
in a coordinated manner.
7.
--R
Selective cache ways: On-demand cache resource allocation
Shade: a fast instruction-set simulator for execution profiling
Strategies for cache and local memory management by global program transformation.
cache miss equations: An analytical representation of cache misses.
Stanford Compiler Group.
Morphable cache architectures: potential benefits.
Improving data locality with loop transformations.
Reconfigurable caches and their application to media processing.
An integrated cache timing and power model.
cache interference phenomena.
A data locality optimizing algorithm.
--TR
Strategies for cache and local memory management by global program transformation
A data locality optimizing algorithm
Shade: a fast instruction-set simulator for execution profiling
Cache interference phenomena
Improving data locality with loop transformations
Cache miss equations
Selective cache ways
Reconfigurable caches and their application to media processing
Morphable Cache Architectures
--CTR
Min Zhao , Bruce Childers , Mary Lou Soffa, Predicting the impact of optimizations for embedded systems, ACM SIGPLAN Notices, v.38 n.7, July
Min Zhao , Bruce R. Childers , Mary Lou Soffa, A Model-Based Framework: An Approach for Profit-Driven Optimization, Proceedings of the international symposium on Code generation and optimization, p.317-327, March 20-23, 2005
Min Zhao , Bruce R. Childers , Mary Lou Soffa, An approach toward profit-driven optimization, ACM Transactions on Architecture and Code Optimization (TACO), v.3 n.3, p.231-262, September 2006 | compilers;energy consumption;embedded software;cache polymorphism;data reuse;cache locality |
514027 | Fast three-level logic minimization based on autosymmetry. | Sum of Pseudoproducts (SPP) is a three level logic synthesis technique developed in recent years. In this framework we exploit the "regularity" of Boolean functions to decrease minimization time. Our main results are: 1) the regularity of a Boolean function of n variables is expressed by its autosymmetry degree k (with 0 &xle; k &xle n), where means no regularity (that is, we are not able to provide any advantage over standard synthesis); 2) for k &xge; 1 the function is autosymmetric, and a new function k is identified in polynomial time; k is "equivalent" to, but smaller than , and depends on n-k variables only; 3) given a minimal SPP form for k a minimal SPP form for is built in linear time; experimental results show that 61% of the functions in the classical Espresso benchmark suite are autosymmetric, and the SPP minimization time for them is critically reduced; we can also solve cases otherwise practically intractable. We finally discuss the role and meaning of autosymmetry. | Introduction
The standard synthesis of Boolean functions is performed with Sum of Products (SP) minimization
procedures, leading to two level circuits. More-than-two level minimization is much harder, but
the size of the circuits can signicantly decrease. In many cases three-level logic is a good trade-o
among circuit speed, circuit size, and the time needed for the minimization procedure. Note that, in
all cases, algorithms for exact minimization have exponential complexity, hence the time to attain
minimal forms becomes huge for increasing size of the input.
Two level minimization is well developed [10, 4, 9]. Several techniques for three level minimization
have been proposed for dierent algebraic expressions. Among them EX-SOP forms, where
two SP forms are connected in EXOR [5, 6]; and Sum of Pseudoproducts (SPP) forms, consisting
of an OR of pseudoproducts, where a pseudoproduct is the AND of EXOR factors [8]. For example
is an EX-SOP form, and
is an SPP form. Experimental results show that the average size of SPP forms is approximately
half the size of the corresponding SP, and SPP forms are also smaller than EX-SOP [2]. As a limit
case each EXOR factor reduces to a single literal in SPP, and the SP and SPP forms coincide.
In this work we refer to SPP minimization. Initially this can be seen as a generalization of
SP minimization, and in fact an extension of the Quine-McCluskey algorithm was given in [8] for
SPP. In particular the pseudoproducts to be considered can be limited to the subclass of prime
pseudoproducts, that play the same role of prime implicants in SP. The algorithm for SPP, however,
was more cumbersome than the former, thus failing in practice in minimizing very large functions.
A deeper understanding of the problem, together with the use of ad hoc data structures, has allowed
to widely extend the set of functions practically tractable [2]. Still a number of standard benchmark
functions can be hardly handled with this technique.
In fact the aim of this paper is to exploit the \regularity" of any given Boolean function, in order
to decrease the time needed for its logical synthesis. Our main results are: 1) the regularity of a
Figure
1: An autosymmetric function f with in a Karnaugh map of four variables. The
reduced function f 2 is the set of points in the dotted line, and depends only on the variables x 1
and x 3 .
Boolean function f of n variables can be expressed by an autosymmetry degree k (with 0 k n),
which is computed in polynomial time. means no regularity, that is we are not able to provide
any advantage over standard synthesis; 2) for k 1 the function f is said to be autosymmetric, and
a new function f k is identied in polynomial time. In a sense f k is \equivalent" to, but smaller than
f , and depends on n k variables only. A function f with its corresponding function
are shown in gure 1; given a minimal SPP form for f k , a minimal SPP form for f is built
in linear time; experimental results show that 61% of the functions in the classical Espresso
benchmark suite are autosymmetric: the SPP minimization time for them is critically reduced, and
we can also solve cases otherwise intractable. Indeed, although autosymmetric functions form a
subset of all possible Boolean functions, a great amount of standard functions of practical interest
fall in this class. In the last section we speculate on the possible causes of this fact. Observe that
even if we can study an autosymmetric function f in a n k dimensional space, f depends, in
general, on all the n input variables, i.e., f is non degenerated.
In the next section 2 we recall the basic denitions and results of SPP theory, and present a
companion algebraic formulation later exploited for testing autosymmetry. In section 3 we discuss
the properties of autosymmetric functions, and how the problem of determining their minimal SPP
forms can be studied on a reduced number of variables. In section 4 we show how autosymmetry
can be tested in polynomial time, and derive a new minimization algorithm that includes such a
test in the initial phase. In section 5 we present an extensive set of experimental results which
validate the proposed approach, also proving that the number of benchmarks practically tractable
is signicantly increased. A nal discussion on the role of autosymmetry is developed in section 6.
2 The Underlying Theory
The basic denitions and properties that we now recall were stated in [8] and extended in [2].
We work in a Boolean space f0; 1g n described by n variables x 0 each point is
represented as a binary vector of n elements. A set of k points can be arranged in a k n matrix
whose rows correspond to the points and whose columns correspond to the variables. Figure 2
represents a set of eight points in a space of six variables. A Boolean function
can be specied with an algebraic expression where the variables are connected through Boolean
operators, or as the set of points for which denotes the number of such points.
From [8] we take the following. Let u be a (Boolean) vector; u be the elementwise complementation
of u; and b
denote u or u. The constant vectors 0 and 1 are made up of all 0's or all 1's,
respectively. Vector uv is the concatenation of u and v. A vector u of 2 m elements, m 0, is
normal normal. For example all the columns in
Figure
2: A canonical matrix representing a pseudocube P in f0; 1g 6 . The canonical columns are
the matrix of gure 2 are normal vectors.
A matrix M with 2 m rows is normal if all its rows are dierent and all its columns are normal.
A normal matrix is canonical if its rows, interpreted as binary numbers, are arranged in increasing
order (the matrix of gure 2 is canonical). A normal vector is k-canonical, 0 k < m, if is
composed of an alternating sequence of groups of 2 k 0's and 2 k 1's. In gure 2, c 0 is 2-canonical, c 2
is 1-canonical, and c 4 is 0-canonical. A canonical matrix M contains m columns c i 0
of
increasing indices, called the canonical columns of M , such that c i j is (m j 1)-canonical for 0
1. The other columns are the noncanonical ones. If M represents a set of points in a Boolean
space (where the rows are the points and the column c i corresponds to the variable x
and noncanonical columns correspond to canonical and noncanonical variables, respectively. We
Denition 1 (From [8]) A pseudocube of degree m is a set of 2 m points whose matrix is canonical
up to a row permutation.
The matrix of gure 2 represents a pseudocube of 2 3 points in f0; 1g 6 . Note that a subcube in
f0; 1g n is a special case of pseudocube where the noncanonical columns are constant.
The function with value 1 in the points of a pseudocube P is a pseudoproduct, and can be
expressed as a product of EXOR factors in several dierent forms, one of which is called the
canonical expression (brie
y CEX) of P . For the pseudocube P of gure 2 we have:
Refer to [8] for the nontrivial rule for generating CEX(P). We simply recall that each EXOR factor
of the expression contains exactly one noncanonical variable directed or complemented, namely
the one with greatest index in the example), and each noncanonical variable is
contained only in one EXOR factor; all the other variables in the expression are canonical
and x 4 in the example); and some canonical variables may not appear in the expression. Note that
the minimal SP form for the above function is x
much larger than
CEX(P).
If P is in fact a subcube, each EXOR factor in CEX(P) reduces to a single noncanonical variable,
the canonical variables do not appear, and the whole expression reduces to the well known product
expression e.g. used for implicants in SP forms.
A general property of the algebraic representation of pseudocubes is given in the following:
Theorem 1 In a Boolean space f0; 1g
(i) the EXOR factor of any subset of variables (directed or complemented) represents a pseudocube
of degree n
(ii) the product of k n arbitrary EXOR factors represents either an empty set or a pseudocube of
degree n k.
Point (i) of the theorem can be easily proved by induction on the number of variables in the
then follows from a theorem of [8] which states that the intersection of
two pseudocubes of degrees p, r is either empty, or is a pseudocube of degree
For the example of gure 2, the EXOR factors x 1 , x in the expression
pseudocubes of 2 5 points, and their product represents the pseudocube P of 2 3
points.
Note now that an equality of the form EXOR 1 EXOR satised by all the
points of a pseudocube, can be equivalently written as a system of k linear equations:
that is an instance of a general linear system A is a kn matrix of
coe-cients 0, 1, and the sum is substituted with EXOR. As known [3, 1] such a system species an
a-ne subspace of the linear space f0; 1g n . Then, from the existence of CEX(P) for any pseudocube
, and from theorem 1, we have:
Corollary 1 In a Boolean space f0; 1g n there is a one-to-one correspondence between a-ne sub-spaces
and pseudocubes.
This corollary allows to inherit all the properties of a-ne subspaces into pseudocube theory. In
particular, a pseudocube containing the point (vector) 0 corresponds to a linear subspace. We also
Denition 2 (From [2]) The structure of a pseudocube P , denoted by STR(P), is CEX(P) without
complementations.
For the pseudocube P of gure 2 we have: STR(P
us now extend the symbol to denote the elementwise EXOR between two vectors. Then is
the vector obtained from complementing in it the elements corresponding to the 1's of . For a
vector 2 f0; 1g n and a subset S f0; 1g n , let Sg. We have:
Theorem 2 (From various results in [2]) For any pseudocube P f0; 1g n and any vector
, the subset
Finally recall that an arbitrary function can be expressed as an OR of pseudoproducts, giving
rise to an SPP form. For example if we add the two rows (points) r
to the matrix of gure 2, we have a new function f that can be seen as the union of two partially
overlapping pseudocubes: namely P (already studied), and Q associated to the rows r
Note that Q is in fact a cube, and we have In conclusion f can be expressed
in SPP form as:
The minimal SP form for f contains 40 literals, while the SPP expression (3) contains 11 literals.
Passing from SP to SPP, however, amounts to passing from a two-level to a three-level circuit. This
fact has always to be taken into account and will not be further repeated.
3 Autosymmetric Functions
The class of autosymmetric functions introduced in [8] seems to be particularly suitable for SPP
minimization. The present work is addressed to these functions, for which we give an alternative
denition.
Denition 3 A Boolean function f in f0; 1g n is closed under , with 2 f0; 1g n , if for each w
Each function is obviously closed under the zero vector 0. As proved in [8], if a function f
is closed under two dierent vectors 1 , 2 2 f0; 1g n , it is also closed under 1 2 . Therefore
the set L f of all the vectors such that f is closed under is a linear subspace of f0; 1g n , see
for example [3]. (In fact, combining in EXOR, in all possible ways, k linearly independent vectors
we form a subspace of 2 k vectors that is closed under , and contains the vector 0
generated as i i ). L f is called the linear space of f , and k is its dimension. By corollary 1, L f
is a pseudocube, and we will refer to CEX(L f ) and STR(L f ).
Denition 4 A Boolean function f is k-autosymmetric, or equivalently f has autosymmetry degree
linear space L f has dimension k.
We have:
Theorem 3 Let f be a k-autosymmetric function. There exist ' vectors w 1 , w
, such that
(w
and for each
Proof. Let w 1 be any vector in f . By denition 3, w 1 L f f . Consider the set f
f is a pseudocube of degree k with jf
be any vector of f 1 . Again by denition 3, w 2 L f f . Observe that (w 1
(by contradiction: let
then that is w which is a contradiction). Therefore we have:
and using the same argument on the set f n ((w 1 the theorem easily follows. ut
Form the proof above we see that the number of points of a k-autosymmetric function is a
multiple of 2 k . Indeed, each a-ne subspace w i L f contains 2 k points. Recalling that L f is a
pseudocube, and from theorems 2 and 3 we immediately have:
Corollary 2 A k-autosymmetric function f is a disjoint union of jf j=2 k pseudocubes w i L f of
degree k all having the same structure STR(L f ), and the same canonical variables of L f .
This corollary has two immediate consequences. First we can extend the denition of canonical
variables to autosymmetric functions. Namely, the canonical (respectively: noncanonical) variables
of L f are designated as the canonical (respectively: noncanonical) variables of f . Second, note that
the 2 k points of each pseudocube w i L f contain all the 2 k combinations of values of the canonical
variables, hence in exactly one of these points all such values are 0. We then have:
Corollary 3 The vectors w of theorem 3 can be chosen as the points of f where all
the canonical variables have value 0. Moreover, for 1 i is obtained from
complementing the noncanonical variables with value 0 in w i .
Example 1 Consider the function
11111g. It can be easily veried that the linear space of f is:
where each vector can be obtained as the EXOR of the other two. f is then 2-autosymmetric. We
are the canonical and noncanonical variables, respectively. From
corollary 3 we have w
From this we derive the following (nonminimal) SPP form for f , where each term w i L f of expression
(6) is transformed into an EXOR factor derived from (5) complementing the noncanonical
variables with value 0 in w
From the above properties of autosymmetric functions we observe:
Any function is at least 0-autosymmetric, since is closed under 0.
A function is (n 1)-autosymmetric if and only if is a pseudocube of degree n 1.
A function is n-autosymmetric if and only if is a constant.
Pseudocubes of degree k are the only k-autosymmetric functions with only one term in the
union of expression (4).
We will study any k-autosymmetric function f with k 1 through a simpler function f k . We
state:
Denition 5 For a (k 1)-autosymmetric function f , the restriction f k consists of the jf j=2 k
points of f contained in the subspace f0; 1g n k where all the canonical variables of f have value 0.
Note that f k depends only on the n k noncanonical variables of f .
Once L f has been computed (see next section), the canonical variables of f are known, and
f k can be immediately determined applying denition 5. For instance, for the function f with
example depends on the noncanonical variables x 2 ; x 3 , x 4 . To build f 2 we take the
subset f00001; 00100; 00110g of the points of f for which the canonical variables x 0
0, and then project these points into the f0; 1g 3 subspace relative to x 2 ; x 3 , x 4 , where we have
110g. We can then represent as:
Note that the same expression is obtained setting x
We now show that the SPP-minimal form of any autosymmetric function can be easily derived
from the SPP-minimal form of its restriction. Note that nding the latter form is easier because
the restriction has less variables and points. In the example above f depends on 5 variables and has
depends only on 3 variables and has only 3 points (represented by the minterms
The following lemma is an easy extension of Theorem 6 of [8].
function and its restriction have the same number of pseudo-
products in their minimal SPP forms.
In fact we can prove a stronger property, namely not only f k and f have the same number
of pseudoproducts in their minimal forms, but a minimal form for f can be easily derived from a
minimal form for f k . Let x z 0
be the noncanonical variables of f , and let STR(L f
is the EXOR factor containing x z that each
noncanonical variable is in an unique EXOR factor). We have:
Theorem 4 If SPP (f k ) is a minimal SPP form for f k , then the form SPP (f) obtained by substituting
in SPP (f k ) each variable x z i with the EXOR factor p i is a minimal SPP form for f . 1
Proof. By lemma 1, the number of pseudoproducts in SPP (f) is minimum, then we have only to
prove that this form covers exactly all the points of f .
When we transform f into f k , we select a the vector u i with all canonical variables set to zero
from each a-ne subspace w i L f . Call v i the vector u i without the canonical variables, i.e., its
projection onto a subspace f0; 1g n k . When we apply the linear substitutions x z0
any pseudoproduct that covers v i in f k to cover all the points in w i L f in f ,
and the thesis immediately follows. ut
4 The Minimization Algorithm
In the previous section we have shown that each Boolean function f is k-autosymmetric, for k 0.
If k 1, f will be simply called autosymmetric. For minimization purposes we have an increasing
advantage for increasing k 1, as minimizing a k-autosymmetric function with n variables and '
points reduces to minimizing a dierent function with n k variables and '=2 k points. Even for
we have to cover only one half of the original points.
Fortunately, for a given function f , nding the associated linear space L f and computing the
autosymmetry degree k is an easy task, because the required algorithm is polynomial in the number
n of variables and in the number of points of f . By denition 3, a function f is closed under if
for any u 2 f there exists v 2 f such that there exists a vector
such that we can express as In other words, must be searched within the
vectors of the set fu v j u; v 2 fg. More precisely we have:
Theorem 5 Let f be a Boolean function. Then L
Proof. Let 2
u2f
ut
Based on theorem 5 we state:
Algorithm 1 Construction of L f (build L f and nd the autosymmetry degree k of a given
function f)
1. for all build the set u f ;
2. build the set L
3. compute
1 The resulting expressions may be reduced using some properties of EXOR, in particular
The time complexity of algorithm 1 is (jf j 2 n), because we must build a set uf for all
and the construction of each such a set requires (jf j n) time.
Any SPP minimization algorithm can be easily extended for exploiting autosymmetry. For a
given function f we rst compute L f and k with algorithm 1. If f is not autosymmetric)
we proceed with regular minimization, otherwise we compute the restriction f k of f , minimize
it, and nally derive a minimal form SPP (f) from SPP (f k ). We propose the following A (for
autosymmetry) minimization algorithm.
Algorithm 2 A-Minimization (build a minimal SPP form of a given function f)
1. build L f and compute the value of k by algorithm
2. if with any SPP synthesis algorithm
3. else
(a) determine the canonical variables of L f and compute the restriction f k as indicated in
section 3;
(b) compute STR(L f
(c) compute the minimal form SPP (f k ) for f k with any SPP synthesis algorithm;
(d) build SPP (f) by substituting in SPP (f k ) each noncanonical variable x z i with the EXOR
By the theory developed in the previous section, algorithm 2 is correct. Note that the algorithm
builds an SPP form minimal with respect to the number of pseudoproducts. To obtain the minimal
SPP form with respect to the number of literals we must slightly rearrange steps 3(c) and 3(d),
executing the substitutions of all p i for x z i in the prime pseudoproducts of f k , before selecting such
pseudoproducts in the set covering problem implicit in the minimization algorithm.
Example 2 Minimization of the function f of example 1, using algorithm 2.
Derive L f and k by algorithm 1. For this purpose, for all u 2 f compute the set u f . (For
example, for the point 00100 we obtain the set: 00100
11011g). The intersection of all the sets
gives the linear space L 11001g. We then have 2.
proceed with the else branch of algorithm 2. L f has noncanonical variables
is restricted to these variables and we have: f
The minimization problem now consists of nding a minimal SPP cover of the points of f 2 .
Applying the algorithm of [2] we have the minimal form:
Compute: STR(L f
Derive the minimal SPP form for f by substituting x 2 , x 3 and x 4 in SPP (f 2 ) with the EXOR
factors of STR(L f ), respectively immediate
algebraic simplications we obtain: 2
2 E.g., in the rst term of SPP (fk ) we have: x4 [x4 (x0 x4 In the last term of
#funct 365 116 72 95 41 43 28
%funct 38,9 12,3 7,6 10,1 4,3 4,6 4,1 2,9 1,7 2,4 6,2 3,0 1,9 100
28
Table
1: Distribution of k-autosymmetric functions in the Espresso benchmark suite. #funct are
the total numbers of functions (single outputs) for any value of k, and %funct are the corresponding
percentages. The other rows report the number of k-autosymmetric functions synthesized with the
new algorithm A, and with the previous best algorithm C (#AC); synthesized with algorithm A
only (#A), since algorithm C did not terminate; not synthesized at all since both algorithms did
not terminate (#*).
Table
2: Time gain using algorithm A versus algorithm C, for the 546 functions of row #AC of
to 8). and TC are the times required by the two algorithms on the same function.
is the average of the ratio all the functions with the same value of k.
5 Experimental Results
The new minimization algorithm 2, also called algorithm A, has been tested on a large set of
functions without don't cares, taken from the Espresso benchmark suite [11] (the dierent outputs
of each function have been synthesized separately). The performance of algorithm A has been
compared with the performance of the best previous algorithm, that is the one proposed in [2], in
the following indicated as algorithm C (after Ciriani). In fact, also the minimization of function f k
in algorithm A (step 3(c)) has been implemented with algorithm C.
For all the functions considered we have computed the values of the autosymmetry degree k
with algorithm 1, obtaining the results shown in the rst two rows of table 1. Surprisingly the
overall percentage of autosymmetric functions (k 1) is over the 61%. We have then attempted to
run algorithms A and C for all such functions, recording the CPU times whenever the computation
terminated in less than 172800 seconds (2 days) on a Pentium III 450 machine. Results on program
termination are given in the last three rows of table 1.
Table
2 shows the average reduction of computing time using algorithm A instead of C, for all
the benchmark functions for which both algorithms terminated (i.e., for the 546 functions of row
#AC of table 1). Note the improvement introduced by the new algorithm for all the autosymmetric
functions in the set, and how such an improvement drastically increases for increasing k. For
instead, we have actually the ratio (T A =TC ) is slightly greater than 1 for
each such a function. This is because algorithm A computes L f in any case, then calls algorithm
C. The resulting slowdown is however always negligible because L f is computed in polynomial
time by algorithm 1 (see previous section), while algorithm C is exponential in nature. For all the
functions in the table the forms obtained with the old and the new method coincide.
Finally, Table 3 shows the CPU times for a small subset of the above functions with k >= 1, and
other relevant minimization parameters for them.

Table 3: Detailed results for a subset of autosymmetric functions (columns: function, k, #L, #PP, and the CPU times of algorithms A and C). CPU times are in seconds on a Pentium III 450 machine; a star indicates non-termination after 172800 seconds. The results are relative to single outputs, e.g. max512(0) corresponds to the first output of max512. #L and #PP are the numbers of literals and of prime pseudoproducts in the minimal expression.

6 The Role of Autosymmetry
To understand the role of autosymmetric functions we must compare them with the set of all possible
functions. The total number of Boolean functions of n variables is N_F = 2^(2^n), corresponding to all
the ways a subset of points can be chosen in {0,1}^n. This is a huge number; however, due to the
randomness of the above generating process, very many of such functions do not correspond to any
significant circuit. Autosymmetric functions are just a subset of the above. By a counting argument
(omitted here for brevity) we can in fact bound their number N_A.
Therefore, for increasing n, autosymmetric functions constitute a vanishing fraction of all the
functions, as N_A/N_F goes to zero for n going to infinity. Still, the question remains of how many
significant functions are autosymmetric.
All we can say at the moment is that most of the major benchmark functions are indeed
autosymmetric, as shown in the previous section. This is all the more so when n is small and the values of
N_F and N_A are not too distant. The reason, we might argue, is that a function encoding a real-life
problem must exhibit a regular structure that can be reflected in some degree of autosymmetry.
This regularity may also allow one to define an autosymmetric function f independently of the number
of variables, and then to state a rule for deriving a minimal form for f valid for any n. Well-known
functions such as, for example, the ones computing the parity of n bits, or giving the next-state
values for an n-bit Gray code, can be easily expressed in minimal form for an arbitrary number
of variables just because they are autosymmetric (for the parity see [8]; for Gray codes elementary
considerations suffice).
It might be relevant to examine the relation between autosymmetric functions and functions
which are simply symmetric, that is, functions which are invariant under any permutation of their
variables (see for example [7]). The total number of symmetric functions of n variables is 2^(n+1),
since such a function is determined by its value on each of the n+1 possible input weights;
however, symmetric functions are not a subset of the autosymmetric ones. In fact, a symmetric function
may be autosymmetric (e.g., the parity function), but there are symmetric functions that are not
autosymmetric (e.g., any symmetric function with an odd number of points). At the moment we
have no interesting results in this direction.
We observe that the information content of an autosymmetric function f is represented by its
reduction f_k together with the associated linear transformation of the variables,
so that the core of the synthesis problem is the minimization of f_k (Section 4). This suggests a
formal generalization. For a given function g : {0,1}^m -> {0,1} we can define the autosymmetry
class C_g as the class of all the autosymmetric functions f : {0,1}^t -> {0,1}, t >= m, such that
f_k = g. Since the information content of any given function f can be easily found, and a minimal
SPP form for f can then be derived from SPP(f_k), minimizing the function f_k corresponds to
minimizing the entire class C_g. Exploiting the full potential of such an approach is currently a
matter of study.
Finally, another interesting research direction is the generalization of the autosymmetry property
to functions with a don't care set. Since autosymmetry is strictly related to the points of a
function f, the major goal would be selecting a subset of don't cares as points of f so as to maximize
the autosymmetry degree of the function. However, this does not seem to be an easy task.
--R
Realization of Boolean Functions by Disjunctions of Products of Linear Forms.
Logic Minimization Using Exclusive Gates.
Algebra Vol.
An Optimization of AND-OR-EXOR Three-Level Networks
AOXMIN-MV: A Heuristic Algorithm for AND-OR-XOR Minimization
Switching and Finite Automata Theory.
On a New Boolean Function with Applications.
Espresso-Signature: A New Exact Minimizer for Logic Functions.
Synthesis of Finite State Machines: Logic Optimization.
Synthesis on Optimization Benchmarks.
--TR
Two-level logic minimization: an overview
Synthesis of finite state machines
On a New Boolean Function with Applications
Logic minimization using exclusive OR gates
--CTR
Synthesis of integer multipliers in sum of pseudoproducts form, Integration, the VLSI Journal, v.36 n.3, p.103-119, October | three-level logic;synthesis;autosymmetry |
514118 | Deriving a simulation input generator and a coverage metric from a formal specification. | This paper presents novel uses of functional interface specifications for verifying RTL designs. We demonstrate how a simulation environment, a correctness checker, and a functional coverage metric are all created automatically from a single specification. Additionally, the process exploits the structure of a specification written with simple style rules. The methodology was used to verify a large-scale I/O design from the Stanford FLASH project. | INTRODUCTION
1.1 Motivation
Before a verification engineer can start simulating RTL designs,
he must write three verification aids: input testbenches to stimulate
the design, properties to verify the behavior, and a functional
coverage metric to quantify simulation progress. It would be much
easier if the three could be automatically derived from the interface
protocol specification. Not only will this save a great deal of work,
but it will also result in fewer bugs in the test inputs and the checking
properties because they are mechanically derived from a verified
specification. Motivated by these advantages, we developed a
methodology where the three aids are automatically generated from
a specification. Furthermore, we demonstrate how a specification
structured by certain style rules allows for more memory efficient
simulation runs. The primary contributions of this paper are:
- a presentation of how a bus protocol specification can be used for simulation to automatically
  - generate the most general set of legal input sequences to a design
  - produce properties to check interface protocol conformance
  - define a new functional coverage metric
  - bias inputs to increase the probability of hitting uncovered cases
- a new input generation scheme that greatly reduces the size of the BDD (binary decision diagram) [1] used
  - by dynamically selecting relevant constraints on a cycle-by-cycle basis and not translating the entire specification into a BDD
  - and by extracting out the pure "constraining" logic and discarding the conditional logic ("the guard") when building the BDD
- a report on the successful application of the methodology to verify a fabricated working design, and a description of the new bugs found
1.2 Background
Many of today's digital circuit designs depend on the tight integration
of multiple design components. These components are
often designed by different engineers who may have divergent interpretations
of the interface protocol that "glue" the components
together. Consequently, designs may be incompatible and behave
incorrectly when combined. Thus, functional interface specifications
are pivotal for successful module integration and should be
accordingly solid and precise. However, specifications widely in
use today are still written informally, forfeiting an opportunity for
automated analysis and logical clarity. In many cases, specifications
such as those of standard bus protocols are buggy, ambigu-
ous, and contradictory: all problems that can be resolved by formal
specification development.
The advantages of formal specifications seem clear but they are
nonetheless avoided due to a perceived cost-value problem; they are
often considered too costly for the benefits they promise. Specif-
ically, they are criticized for their lengthy development time and
the need for formal verification training. For many, the value of
a correct specification does not justify these costs, and precious
resources are allocated for more pressing design needs. Thus, to
counter these disincentives, we are developing a formal specification
methodology that attacks this cost-value problem from two angles
Cost Side The earlier papers [9, 8] for this project focus on the
first half of the problem: minimizing the cost by making the specification
process easier.

Figure 1: The Trio of Verification Aids. An input generator, a correctness checker, and a functional coverage metric are all generated from one specification, and the three tools are then used together to simulate the design under verification.

We have directed our attention to signal-level bus protocol descriptions for this aim since they are both important
and challenging to specify. Using the PCI (Peripheral Component Interconnect)
bus protocol and the Intel Itanium Processor bus
protocol as examples, we developed a specification style that produces
correct, readable, and complete specifications with less effort
than free-form, ad hoc methods. The syntactic structuring used in
the methodology is language-independent and can be applied to
many specification languages from SMV [6] to Verilog. This is
a reflection of our belief that methodology, as opposed to tool or
language development, is the key to achieving the stated goal.
Value Side As the latest work in this series, this paper focuses on
the second angle of the cost-value problem: to increase the value
of a formal specification beyond its role as documentation. It is
based on the idea that once a correct, well-structured specification
is developed, it can be exploited in a way that a haphazard and incomplete
specification cannot be. In particular, we investigate ways
to use the specification directly to generate inputs, check behavior,
and monitor coverage. Because our goal is to verify large designs
common in industry, the methodology is specifically tailored for
simulation-based verification.
1.3 The Problem and Our Approach
Given an HDL (hardware description language) component design
to verify, an engineer needs various additional machinery (Fig-
ure 1, right).
1. Input Logic There must be logic to drive the inputs of the
design. One method uses random sequences, which are not guaranteed
to comply with the protocol, and consequently, it is difficult
to gauge correctness of the design because its inputs may be in-
correct. A more focused method is directed testing where input
sequences are manually written, but they are time-consuming to
write and difficult to get correct.
2. Output Check Logic to determine the correctness of the
component's behavior is needed if manual scrutiny is too cumber-
some. With the methodology presented here, the scope of correctness
checking is limited to interface protocol conformance. It cannot
check higher-level properties such as whether the output data
from one port correctly corresponds to another port's input data.
3. Coverage Metric Because complete coverage with all possible
input sequences tested is not possible, there must be some metric
that quantifies the progress of verification coverage. The verification
engineer would like to know whether the functionalities of
the design have been thoroughly exercised and all interesting cases
have been reached. This is an open area of research, and currently,
most practitioners resort to methods with little theoretical backing.
Our Approach At the foundation of our methodology is a unified
framework approach where the three tools are generated from
a single source specification (Figure 1). This is possible because all
three are fundamentally based on the interface protocol, and con-
sequently, the interface specification can be used to automatically
create the three. In current practice, the verification aids are each
written from scratch; this requires a tremendous amount of time and
effort to write and debug. By eliminating this step, our methodology
enhances productivity and shortens development time.
Also, a thoroughly debugged, solid specification invariably leads
to correct input sequences, checking properties, and coverage met-
rics. The correctness of the core document guarantees the correctness
of the derivatives. In contrast, with current methods, each
verification aid needs to be individually debugged. The advantages
of this are most pronounced for standard interfaces where the correctness
effort can be concentrated in the standards committee and
not duplicated among the many implementors. Furthermore, when
a change is made to the protocol (a frequent occurrence in indus-
try), one change in the protocol specification is sufficient to reflect
this because the verification aids can be regenerated from the revised
document. Otherwise, the engineer would have to determine
manually the effect of the change for each tool.
The derivation of a behavioral checker is the most straightforward
of the three. The checker is on-the-fly; during simulations, it
flags an error as soon as the component violates the protocol. As
addressed in the first paper[9], the specification is written in a form
very close to a checker. Furthermore, the specification is guaranteed
to be executable by the style rules (described in section 2.1),
and so the translation from it to a HDL checker requires minimal
changes [9].
The bulk of the current work addresses the issue of automatically
generating input sequences. Our method produces an input generator
which is dynamic and reactive; the generated inputs depend on
the previous cycle outputs of the design under verification. In ad-
dition, these inputs always obey the protocol, and the generation is
a one-pass process. The mechanism relies on solving boolean constraints
by building and traversing BDD structures on every clock
cycle. Although input generation using constraint solvers is not by
itself novel, our approach is the first to use and exploit a complete
and structured specification.
Finally, a new simulation coverage metric is introduced, and the
automatic input biasing based on this metric is also described. Although
more experiments are needed to validate this metric's ef-
fectiveness, its main advantage (currently) is that it is specification-based
and saves time: extra work is not needed to write out a metric
or to pinpoint the interesting scenarios for they are gleaned directly
from the specification document.
Previous Works Clarke et al. also researched the problem of
specifications and generators in [3] but the methodology is most
closely related to the SimGen project described in the 1999 paper
[11] by Yuan et al. As with the SimGen work, we are focusing on
practical methods that can be used for existing complex designs.
Within that framework, there are mainly two features that differentiate
our approach from SimGen.
First, the SimGen software uses a statically-built BDD which
represents the entire input constraint logic; in contrast, our frame-work
dynamically builds the appropriate BDD constraint on every
clock cycle. This results in a dramatically smaller BDD for two
reasons. One, only a small percentage of the protocol logic is relevant
on each cycle, and so the corresponding BDD is always much
smaller than the static BDD representing the entire protocol. Two,
the BDD contains only the design's input variables and does not contain
state variables or the design's outputs. Thus, for the PCI ex-
ample, instead of a BDD on 161 variables, we have a BDD on 15
variables, an order of magnitude difference. Consequently, our input
generation uses exponentially less memory. This reduction is
based on the observation that the sole role of the state variables
and the design's outputs is to determine which parts of the protocol
are relevant (and thus required in the BDD) for a particular cycle.
Otherwise, these variables are not needed to calculate the inputs.
We believe that these two reasons for smaller BDDs would hold
for many interfaces and therefore allow input generation for a large
interface that may otherwise be hindered by BDD blowup.
A second difference with SimGen is that, unlike our framework,
it requires the users to provide the input biases. A unique contribution
of our work is the automated process of determining biases. It
is noted that all these advantages are possible because our method
exploits the structure of a stylized specification whereas SimGen is
applicable for more general specifications.
2. METHODOLOGY
2.1 Specification Style
The specification style was introduced in [9] and is summarized
here for the reader. It is based on using multiple constraints to
collectively define the signalling behavior at the interface. The
constraints are short boolean formulas which follow certain syntactic
rules. They are also independent of each other, rely on state
variables for historic information, and when AND-ed together, define
exactly the correct behavior. This is similar to using (linear or
branching time) temporal logic for describing behavior. However,
our methodology allows and requires only the most basic operators
for writing the constraints, and it aims for a complete specification
as opposed to an ad hoc list of properties that should hold true.
This decomposition of the protocol into multiple constraints has
many advantages. For one, the specification is easier to maintain.
Constraints can be added or removed and independently modified.
It is also believed that it is easier to write and debug. Since most
existing natural-language specifications are already written as a list
of rules, the translation to this type of specification requires less
effort and results in fewer opportunities for errors. For debugging,
a symbolic model checker can be easily used to explore the states
allowed by the constraints.
Style Rule 1 The first style rule requires the constraints to be
written in the following form.
prev(signal_1 AND ... AND NOT signal_k) -> signal_i OR ... AND NOT signal_n
where "->" is the logical symbol for "implies". The antecedent,
the expression to the left of the "->", is a boolean expression containing
the interface signal variables and auxiliary state variables,
and the consequent, the expression to the right of the "->", contains
just the interface signal variables. The allowed operators are
AND, OR, and NEGATION. The prev construct allows the value
of a signal (or the state of a state machine) a cycle before to be ex-
pressed. The constraints are written as an implication with the past
expression as the antecedent and the current expression as the con-
sequent. In essence, the past history, when it satisfies the antecedent
expression, requires the current consequent expression to be true;
otherwise, the constraint is not "activated" and the interface signals
do not have to obey the consequent in the current cycle. In this
way, the activating logic and the constraining logic are separated.
For example, the PCI protocol constraint, prev(trdy AND stop) -> stop,
means "if the signals trdy and stop were true in the previous cycle
(the "activating" logic), then stop must be true in the current cycle
(the "constraining" logic)" where a "true" signal is asserted and a
"false" signal is deasserted. This separation is what identifies the
relevant (i.e."activated") constraints on a particular cycle. Also, it
allows the BDD to be an expression purely of the "constraining"
logic (as explained in the next section, 2.2.1).
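As an illustration only (this is not the compiler-generated Verilog checker described later in the paper), the following Python sketch shows how a list of antecedent/consequent constraints in this style could be evaluated over a recorded signal trace; the signal names and the check_trace helper are hypothetical.

    # Minimal sketch of checking prev()-style constraints over a signal trace.
    # Each constraint is a pair of functions: the antecedent sees the previous
    # cycle's signal values, the consequent sees the current cycle's values.

    def check_trace(trace, constraints):
        """trace: list of dicts mapping signal name -> bool, one per clock cycle.
        constraints: list of (name, antecedent_fn, consequent_fn) tuples.
        Returns a list of (cycle, constraint name) violations."""
        violations = []
        for t in range(1, len(trace)):
            prev_vals, curr_vals = trace[t - 1], trace[t]
            for name, antecedent, consequent in constraints:
                # The constraint only "activates" when its antecedent held last cycle.
                if antecedent(prev_vals) and not consequent(curr_vals):
                    violations.append((t, name))
        return violations

    # Example: the PCI-style rule "prev(trdy and stop) -> stop".
    constraints = [
        ("stop stays asserted",
         lambda p: p["trdy"] and p["stop"],
         lambda c: c["stop"]),
    ]

    trace = [
        {"trdy": True, "stop": True},
        {"trdy": False, "stop": True},   # ok: stop still asserted
        {"trdy": False, "stop": False},  # constraint not activated (prev trdy false)
    ]
    print(check_trace(trace, constraints))  # -> []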
Figure 2: The Input Generation Algorithm. Of the three components defined at the interface, the specifications of the components other than the implementation under verification are used for generating inputs; on every clock cycle the consequents of the activated constraints are conjoined into an input formula and a satisfying assignment is found.
Style Rule 2 The second style rule, the separability rule, requires
each constraint to constrain only the behavior of one component.
Equivalently, because the constraining part is isolated from the activating
part (due to the first style rule), the rule requires the consequent
to contain only outputs from one component.
Style Rule 3 The third rule requires that the specification is dead
state free. This rule effectively guarantees that an output satisfying
all of the constraints always exists as long as the output sequence so
far has not violated the constraints. There is a universal test that can
verify this property for a specification. Using a model checker, the
following CTL (computation tree logic) [2] property can be checked
against the constraints, and any violations will pinpoint the dead
state: AG(all constraints have been true so far ->
EX(all constraints are true)).
Although abiding by the style rules may seem restrictive, it promises
many benefits. Furthermore, the style is still powerful enough to
specify the signal-level PCI and Intel r Itanium^TM Processor bus
protocols.
2.2 Deriving an Input Generator
2.2.1 Basic Algorithm
Based on the following algorithm, input vectors are generated
from the structured specification (Figure 2).
1. Group the constraints according to which interface component
they specify. (This is possible because of style rule 2,
the separability rule.) If there are n interface components,
there will be n groups.
2. Remove the group whose constraints are for the component
under verification. These will not be needed. Now, there are
groups of constraints.
3. For each group of constraints, do the following on every
clock cycle of the simulation run. The goal is to choose an
input assignment for the next cycle.
(a) For each constraint, evaluate just the antecedent half.
The antecedent values are determined by internal state
variables and observed interface signal values. For antecedents
which evaluate to true, the corresponding constraints
are marked as activated.
(b) Within each group, AND together just the consequent
halves of the activated constraints to form the input for-
mula. As a result, there is one input formula for each
interface component. The formulas have disjoint support
(because of rule 2), which greatly reduces the complexity
of finding a satisfying assignment.
(c) A boolean satisfiability solver is used to determine a
solution to each of the input formulas. A BDD-based
solver is used instead of a SAT-based one in order to
control the biasing of the input variables. Since the
specification is nondeterministic and allows a range of
behaviors, there will most likely be multiple solutions.
(In section 2.3, we discuss how a solution is chosen
so that interesting simulation runs are generated.) The
chosen solutions form the input vector for this cycle.
(d) Go back to step 3(a) on the next clock cycle.
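The per-cycle loop can be sketched as follows. This is an illustration under simplifying assumptions rather than the actual implementation: consequents are modeled as boolean functions over named input signals, and a brute-force enumeration stands in for the BDD-based solver described in the next section. All function and variable names are hypothetical.

    # Sketch of one simulation cycle of the input generation algorithm.
    from itertools import product

    def generate_inputs(groups, prev_state):
        """groups: {component: [(antecedent_fn, consequent_fn, signal_names)]}.
        prev_state: dict of previous-cycle signal and state values (step 3a).
        Returns one satisfying input assignment per component (steps 3b-3c)."""
        assignment = {}
        for component, constraints in groups.items():
            # Step 3a/3b: keep only the consequents of the activated constraints.
            active = [(c, sigs) for a, c, sigs in constraints if a(prev_state)]
            signals = sorted({s for _, sigs in active for s in sigs})
            # Step 3c: find any input assignment satisfying all activated consequents.
            for values in product([False, True], repeat=len(signals)):
                candidate = dict(zip(signals, values))
                if all(c(candidate) for c, _ in active):
                    assignment[component] = candidate
                    break
            else:
                raise RuntimeError("dead state: no legal input (rule 3 should prevent this)")
        return assignment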
The significance of the style rules becomes clear from this algorithm. The "activating"-"constraining" division is key to allowing
for a dramatically smaller expression (just the consequent
halves) to solve (rule 1). The separability rule also allows for
smaller expressions by enforcing strict orthogonality of the specification
along the interface components (rule 2). Finally, the lack
of dead states guarantees the existence of a correct input vector assignment
for every clock cycle (rule 3).
2.2.2 Implementation
A compiler tool, which reads in a specification and outputs the
corresponding input generation module, has been designed and im-
plemented. There are two parts to the input generator: the Verilog
module which acts as the frontend and the C module as the backend (Figure 3).
The Verilog module contains all the antecedents of the constraints,
and based on its inputs (the component's outputs) and its internal
state variables, determines which constraints are activated for
that clock cycle. Then, the indices of the activated constraints are
passed to the backend C module. The C module will return an input
assignment that satisfies all the activated constraints, and the Verilog
module will output this to the component under verification.
The choice of Verilog as a frontend allows many designs to be used
with this framework.
The C module contains the consequent halves of the constraints.
It forms conjunctions (ANDs) of the activated consequents, solves
the resulting formula, and returns an assignment to the Verilog
module. It is initialized with an array of BDDs where each BDD
corresponds to a constraint consequent. On every clock cycle, after
the activation information is passed to it, it forms one BDD per interface
component by performing repeated BDD AND operations
on activated consequents in the same group. The resulting BDD
represents an (aggregated) constraint on the next state inputs from
one component, and by traversing the BDD until the "1(TRUE)"
terminal node is reached, an assignment can be found. Once an assignment
is determined for each interface component, the complete
input assignment to the component under verification has been es-
tablished. The CUDD (Colorado University Decision Diagram)
package [10] version 2.3.1 was used for BDD representation and
manipulation, and Verilog-XL was used to simulate the setup.
2.3 Biasing the Inputs
2.3.1 Coverage Metric
We use the specification to define corner cases, scenarios where
the required actions are complex. These states are more problematic
for component implementations, and thus, simulations should
drive the component through these scenarios.

Figure 3: Implementation Details of the Input Generator. The Verilog module passes the indices of the activated constraints to the C module, which returns an input assignment for the component under verification.

Consequently, whether
a corner case has been reached or not can be used to measure simulation
progress, and missed corner cases can be used to determine
the direction of further simulations.
As a first order approximation of corner cases, the antecedents of
the constraints are used. This is because only when the antecedent
clause is true does the implementation have to comply with the constraint
clause. As an example, consider the PCI constraint, "master
must raise irdy within 8 cycles of the assertion of frame." The antecedent
is "the counter that starts counting from the assertion of
frame has reached 7 and irdy still has not been asserted" and the
consequent is "irdy is asserted." Unless this antecedent condition
happens during the simulation, compliance with this constraint cannot
be completely known. For a simulation run which has triggered
only 10% of the antecedents, only 10% of the constraints have been
checked for the implementation. In this sense, the number of antecedents
fired during a simulation run is a rough coverage metric.
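A minimal sketch of this antecedent-based coverage metric, assuming the same trace and constraint representation as the earlier sketches (all names are hypothetical):

    # Count how many constraint antecedents have fired at least once in a run.
    def antecedent_coverage(trace, antecedents):
        """antecedents: dict mapping a name to a function of previous-cycle values."""
        fired = set()
        for t in range(1, len(trace)):
            prev_vals = trace[t - 1]
            for name, antecedent in antecedents.items():
                if antecedent(prev_vals):
                    fired.add(name)
        missed = set(antecedents) - fired
        return len(fired) / len(antecedents), missed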
There is one major drawback to using this metric for coverage.
The problem is intimately related to the general relationship between
implementation and specification. By the process of design,
for every state, a designer chooses an action from the choices offered
by the nondeterministic specification to create a deterministic
implementation. As a result, the implementation will not cover the
full range of behavior allowed by the specification. Thus, some
of the antecedents in the specification will never be true because
the implementation precludes any paths to such a state. Unless the
verification engineer is familiar with the implementation design, he
cannot know whether an antecedent has been missed because of the
lack of appropriate simulation vectors or because it is structurally
impossible.
2.3.2 Deriving Biases for Missed Corner Cases
To reach interesting corner cases, verification engineers often apply
biasing to input generation. If problematic states are caused by
certain inputs being true often, the engineer programs the random
input generator to set the variable true n% instead of the neutral
50% of the time. For example, to verify how a component reacts to
an environment which delays its response, env response, the engineer
can set the biasing so that the input, env response, is true only
5% of the time. 0% is not used because it may cause the interface
to deadlock. With prevailing methods, the user needs to provide
the biasing numbers to the random input generator. This requires
expert knowledge of the design, and the biases must be determined
by hand. In contrast, by targeting antecedents, interesting biasing
can be derived automatically. The algorithm works as follows:
1. Gather the constraints that specify the outputs of the component
to be verified. The goal: the antecedents of these constraints
should all become true during the simulation runs.
2. Set biases for all input signals to neutral (50% true) in the
input generator described in section 2.2. (Exactly how this is
done will be explained in the following subsection.)
Figure 4: The Biased BDD Traversal (biasing: a true 2%, b true 98%, c true 2% of the time).
3. Run the simulation for some number of cycles.
4. Determine which antecedents have not fired so far.
5. Pick one missed antecedent, and use it to determine the variable
biasing. If, for example, antecedent (NOT a AND b AND NOT c) has not
been true, set the following biases: a is true 2% of the time,
b for 98%, c for 2%.
6. Re-run the simulation and repeat from step 4. Continue until
all antecedents have been considered.
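The bias-derivation step (step 5) can be illustrated with a small sketch; the literal representation, the 0.98/0.02 values standing in for "often"/"rarely", and the function name are assumptions made for illustration only.

    # Turn one missed antecedent (a conjunction of literals such as
    # "not a and b and not c") into per-signal input biases.
    def biases_from_missed_antecedent(literals, all_signals,
                                      often=0.98, rarely=0.02, neutral=0.5):
        # literals: list of (signal, polarity) pairs from the missed antecedent.
        biases = {s: neutral for s in all_signals}
        for signal, wanted_true in literals:
            biases[signal] = often if wanted_true else rarely
        return biases

    # Missed antecedent: not a and b and not c
    print(biases_from_missed_antecedent(
        [("a", False), ("b", True), ("c", False)], ["a", "b", "c", "d"]))
    # -> {'a': 0.02, 'b': 0.98, 'c': 0.02, 'd': 0.5}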
There are a number of interesting conclusions. First, although
effort was invested in determining optimal bias numbers exactly,
biases that simply allowed a signal to be true (or false) "often" were
sufficient. Empirically, interpreting "often" as 49 out of 50 times
seems to work well. Second, an antecedent expression contains
not only interface signal variables but also counter values and
other variables that cannot be skewed directly. Just skewing the
input variables in the antecedent is primary biasing, and a more
refined, secondary biasing can be done by dependency analysis.
This was done manually. For example, many hard-to-reach cases
are states where a counter has reached a high value, and by dependency
analysis, biases that will allow a counter to increment
frequently without resetting were determined.
2.3.3 Implementing Biasing
The actual skewing of the input variables is done during the BDD
traversal stage of the input generation. After the input formula
BDD for a component has been built, the structure is traversed according
to the biases. If variable b is biased to be true 49 out of
50 times, the THEN branch is taken 49 out of 50 times (Figure
4). If this choice of branching forces the expression to evaluate to
false (i.e. the traversal inevitably leads to the "ZERO" leaf), the
algorithm will backtrack and the ELSE branch will be taken. As a
result, even if b is biased to be true 49 out of 50 occurrences, the
protocol logic can force b to be false most of the time. What is
guaranteed by the biasing scheme is that whenever b is allowed to
be true by the constraint, it will most likely be true.
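A rough sketch of this biased traversal with backtracking over a hand-built BDD is given below; the Node class, the toy BDD, and the bias values are illustrative assumptions and not the CUDD-based implementation used in the paper.

    # Biased traversal of a BDD with backtracking: at each node the THEN branch
    # is tried with the variable's bias probability, and the traversal backtracks
    # if a choice can only reach the ZERO terminal.
    import random

    class Node:
        def __init__(self, var=None, then=None, els=None, terminal=None):
            self.var, self.then, self.els, self.terminal = var, then, els, terminal

    ZERO, ONE = Node(terminal=False), Node(terminal=True)

    def biased_assignment(node, biases, assignment=None):
        assignment = {} if assignment is None else assignment
        if node.terminal is not None:
            return dict(assignment) if node.terminal else None
        first_true = random.random() < biases.get(node.var, 0.5)
        for value in (first_true, not first_true):      # backtrack on failure
            assignment[node.var] = value
            result = biased_assignment(node.then if value else node.els,
                                       biases, assignment)
            if result is not None:
                return result
        del assignment[node.var]
        return None

    # BDD for (a or b), with b at the top so its 98% bias is rarely overridden.
    bdd = Node("b", then=ONE, els=Node("a", then=ONE, els=ZERO))
    print(biased_assignment(bdd, {"b": 0.98, "a": 0.5}))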
An extra step is added to the input generation algorithm to accommodate
the biasing. The variables need to be re-ordered so that
the biased variables are at the top of the BDD, and their truth values
are not determined by the other variables. In Figure 5 (incorrect ordering), variable c
is intended to be true most of the time. However, since c is buried
towards the bottom of the BDD, by the time it is chosen, c is
forced to be false to satisfy the constraint. In contrast, if c is at the
top of the BDD, the true branch can be taken as long as the other
variables are set accordingly (for example, a = 1). Fortunately,
since the number of BDD variables is kept small, reordering for
this purpose does not lead to BDD blowup problems.
Compared to the biasing technique used in SimGen, the biasing
used in this framework is coarse. With SimGen, branching prob-
abilities, which take into account variable ordering, are calculated
from the desired biases. In contrast, this method directly uses the
biasing as the branching probabilities; it requires no calculations
and compensates for possible distortions by reordering. Although
implementing the SimGen calculations is not difficult, the advantages
of achieving more precise biasing are not clear from the examples
attempted.
3. EXPERIMENTAL RESULTS
To demonstrate the methodology on a meaningful design, we
chose the I/O component from the Stanford FLASH [5] project for
verification. The I/O unit, along with the rest of the project, had
been extensively debugged, fabricated, and tested and is part of a
working system in operation. The methods are evaluated on the
PCI interface of the component.
The design is described by 8000 lines of Verilog and contains
283 variables which range from 1-bit to 32-bit variables: a complexity
which renders straightforward model checking unsuitable.
Approximate model checking was used by Govindaraju et al [4] to
verify this design but no bugs were found because the design inputs
were overly constrained and only a small state space was explored.
Our simpler and more flexible simulation-based checking proved
to be more effective by finding new bugs.
The Setup A formal PCI specification was used to constrain the
inputs and check the outputs at the PCI interface of the design. A
simulation checker that flags PCI protocol violations was generated
from the specification using a compiler tool written in OCAML
[7]. The same compiler tool was modified to output the constrained
random simulation generator which controls the PCI interface inputs
of the I/O unit. The I/O unit (the design under verification),
checker, and input generator are connected and simulated together,
and results are viewed using the VCD (Value Change Dump) file.
The inputs were skewed in different configurations for each simulation
run in order to produce various extreme environments and
stress the I/O unit.
Verification Results Using the 70 assertions provided by the interface
specification, nine previously unreported bugs have been
found in the I/O unit. Most are due to incorrect state machine de-
sign. For example, one bug manifested itself by violating the protocol
constraint, "once trdy has been asserted, it must stay asserted
until the completion of a data phase." Because of an incorrect path
in the state machine, in some cases, the design would assert trdy
and then, before the completion of the data phase, deassert trdy.
This can deadlock the bus if the counterparty infinitely waits for
the assertion of trdy. The bug was easily corrected by removing
the problematic and most likely unintended path.

Figure 6: The Two Types of Simulation Coverage Metric and their Effectiveness.

The setup makes the verification process much easier; the process of finding signal-level
bugs is now nearly automated, and so, most of the effort can
focus on reasoning about the bug once it is found.
Coverage Results Unfortunately, the original intended use of
the coverage metric proved to be fruitless for this experiment. Using
antecedents of the constraints that specify the component was
meaningless because the FLASH PCI design is conservative and
implements a very small subset of the specification. For example,
the design only initiates single data phase transactions, and never
initiates multiple data phase transactions. Thus, most of the antecedents
remained false because it was structurally impossible for
them to become true.
However, using the metric to ensure that the environment is maximally
flexible proved to be much more powerful. The motivation
is to ensure that the design is compatible with any component that
complies with the interface protocol. The design should be stimulated
with the most general set of inputs, and so, using the missed
antecedents from the constraints that specify the environment (in
Figure
6, "a0, a1, .") to determine biases was extremely fruitful;
most of the design bugs were unearthed with these biasings.
Performance Results Performance issues, such as speed and
memory usage, did not pose to be problems, and so, we were free
to focus on generating interesting simulation inputs. However, to
demonstrate the scalability of the method for larger designs, performance
results were tabulated. The simulations were run on a 4-
Processor Sun Ultra SPARC-II 296 MHz System with 1.28Gbytes
of main memory. The specification provided 63 constraints to model
the environment. These constraints required 161 boolean variables,
but because of the "activating" - "constraining" logic separation
technique, only 15 were needed in the BDDs. Consequently, the
BDDs used were very small; the peak number of nodes during
simulation was 193, and the peak amount of memory used was
4Mbytes.
Furthermore, speed was only slightly sacrificed in order to achieve
this space efficiency. The execution times for different settings are
listed in Table 2. With no constraint solving, where inputs are randomly
set, the simulation takes 0.64s for 12,000 simulator time
steps. If the input generator is used, the execution time increased by
57% to 1.00s; this is not a debilitating increase, and now the inputs
are guaranteed to be correct. The table also indicates how progressively
adding signal value dumps, a correctness checker module,
and coverage monitor modules, adds to the execution time.
4. FUTURE WORK
Better coverage metrics can probably be deduced from the speci-
fication. A straightforward extension would be to see whether pairs
of antecedents become true during simulations. Exploiting a structured
formal specification for other uses is also of interest. Perhaps
incomplete designs can be automatically augmented by specification
constraints for simulation purposes. Or, useful synthesis information
can be extracted from the specification. Also, experiments
to determine whether designs that are too big for SimGen-type algorithms can be handled by ours would further validate the methodology. Furthermore, more extensive experiments to quantify the speed penalty for the dynamic BDD building should be done.

Table 1: Interface Specification Based Generation Details for the FLASH Example.
  Boolean Vars in Spec: 161
  Boolean Vars in BDD: 15
  Constraints on Env: 63
  Assertions on Design: 70
  Peak Nodes in BDD: 193
  BDD Memory Use: 4 Mbytes
  Bugs Found in Design: 9

Table 2: Time Performance of the Methodology on FLASH Example (for 12000 simulator time steps).
  Settings        User Time   System Time   Total
  Random          0.53s       0.11s         0.64s
  Constrained     0.77s       0.23s         1.00s
  with Dump       0.77s       0.26s         1.03s
  with Monitor    1.33s       0.29s         1.62s
  with Coverage   1.54s       0.25s         1.79s
Acknowledgement
This research was supported by GSRC contract
SA2206-23106PG-2.
5. REFERENCES
--R
Synthesis of synchronization skeletons for branching time temporal logic.
The Stanford FLASH Multiprocessor.
A Specification Methodology by a Collection of Compact Properties as Applied to the Intel Itanium Processor Bus Protocol.
Modeling Design Constraints and Biasing in Simulation Using BDDs.
--TR
Graph-based algorithms for Boolean function manipulation
The Stanford FLASH multiprocessor
Modeling design constraints and biasing in simulation using BDDs
Counterexample-guided choice of projections in approximate symbolic model checking
Executable Protocol Specification in ESL
Monitor-Based Formal Specification of PCI
A Specification Methodology by a Collection of Compact Properties as Applied to the IntelMYAMPERSAND#174; ItaniumTM Processor Bus Protocol
Design and Synthesis of Synchronization Skeletons Using Branching-Time Temporal Logic
--CTR
Serdar Tasiran , Yuan Yu , Brannon Batson, Using a formal specification and a model checker to monitor and direct simulation, Proceedings of the 40th conference on Design automation, June 02-06, 2003, Anaheim, CA, USA
Yunshan Zhu , James H. Kukula, Generator-based Verification, Proceedings of the IEEE/ACM international conference on Computer-aided design, p.146, November 09-13,
Young-Su Kwon , Young-Il Kim , Chong-Min Kyung, Systematic functional coverage metric synthesis from hierarchical temporal event relation graph, Proceedings of the 41st annual conference on Design automation, June 07-11, 2004, San Diego, CA, USA
Jun Yuan , Ken Albin , Adnan Aziz , Carl Pixley, Constraint synthesis for environment modeling in functional verification, Proceedings of the 40th conference on Design automation, June 02-06, 2003, Anaheim, CA, USA
Serdar Tasiran , Yuan Yu , Brannon Batson, Linking Simulation with Formal Verification at a Higher Level, IEEE Design & Test, v.21 n.6, p.472-482, November 2004
Ed Cerny , Ashvin Dsouza , Kevin Harer , Pei-Hsin Ho , Tony Ma, Supporting sequential assumptions in hybrid verification, Proceedings of the 2005 conference on Asia South Pacific design automation, January 18-21, 2005, Shanghai, China
Ansuman Banerjee , Bhaskar Pal , Sayantan Das , Abhijeet Kumar , Pallab Dasgupta, Test generation games from formal specifications, Proceedings of the 43rd annual conference on Design automation, July 24-28, 2006, San Francisco, CA, USA
Jun Yuan , Carl Pixley , Adnan Aziz , Ken Albin, A Framework for Constrained Functional Verification, Proceedings of the IEEE/ACM international conference on Computer-aided design, p.142, November 09-13,
Smitha Shyam , Valeria Bertacco, Distance-guided hybrid verification with GUIDO, Proceedings of the conference on Design, automation and test in Europe: Proceedings, March 06-10, 2006, Munich, Germany
Shireesh Verma , Ian G. Harris , Kiran Ramineni, Interactive presentation: Automatic generation of functional coverage models from behavioral verilog descriptions, Proceedings of the conference on Design, automation and test in Europe, April 16-20, 2007, Nice, France
Alessandro Pinto , Alvise Bonivento , Allberto L. Sangiovanni-Vincentelli , Roberto Passerone , Marco Sgroi, System level design paradigms: Platform-based design and communication synthesis, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.11 n.3, p.537-563, July 2006
Annette Bunker , Ganesh Gopalakrishnan , Sally A. Mckee, Formal hardware specification languages for protocol compliance verification, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.9 n.1, p.1-32, January 2004 | coverage;BDD minimization;testbench;input generation |
514214 | Markov model prediction of I/O requests for scientific applications. | Given the increasing performance disparity between processors and storage devices, exploiting knowledge of spatial and temporal I/O requests is critical to achieving high performance, particularly on parallel systems. Although perfect foreknowledge of I/O requests is rarely possible, even estimates of request patterns can potentially yield large performance gains. This paper evaluates Markov models to represent the spatial patterns of I/O requests in scientific codes. The paper also proposes three algorithms for I/O prefetching. Evaluation using I/O traces from scientific codes shows that highly accurate prediction of spatial access patterns, resulting in reduced execution times, is possible. | INTRODUCTION
The disparity between processor and disk performance
continues to increase, driven by market economics that reward
high-performance processors and high-capacity storage
devices. As parallel systems (e.g., clusters) are increasingly
constructed using large numbers of commodity components,
I/O has become a performance bottleneck for many scientific
applications. Characterization of application I/O patterns
has shown that many of these applications have complex,
irregular I/O patterns [4, 18], mediated by multilevel I/O
libraries, that are ill-suited to file system policies optimized
for simple sequential patterns (i.e., sequential read ahead
and write behind).

This work was supported in part by the National Science Foundation under grants NSF ASC 97-20202, NSF ASC 99-75248 and NSF EIA-99-72884; by the Department of Energy under contracts LLNL B341494 and LLNL B505214; and by the NSF Alliance PACI Cooperative Agreement.
Patterson et al [12] showed that large I/O performance
gains are possible using hints to advise file systems and
I/O libraries of anticipated I/O accesses. Given foreknowledge
of temporal and spatial I/O patterns, one can schedule
prefetching and write behind requests to ameliorate or
eliminate I/O stalls. Indeed, with foreknowledge, optimal
file cache replacement decisions are possible under restricted
conditions [8].
Unfortunately, perfect foreknowledge is rarely possible.
An application's I/O access stream may be data dependent,
or it may change due to user interactions. Moreover, for
complex, irregular patterns, an enumeration of the complete
request stream often requires multilevel instrumentation to
capture and record I/O patterns for reuse when guiding future
executions. Not only is such instrumentation expensive,
it requires storing the access pattern.
Rather than specifying a complex I/O pattern exactly,
probabilistic models can potentially capture the salient features
more compactly by describing likely request sequences.
Although statistical characterization of I/O patterns is less
accurate than enumeration, it is compact and, as we shall
see, can be used to guide caching, prefetching, and write
behind decisions. Hence, this paper explores techniques for
constructing compact Markov models of spatial access patterns, evaluating the efficacy of the predictions using I/O
traces drawn from large-scale scientific codes.
The remainder of the paper is organized as follows. In Section 2,
we outline Markov model construction techniques. This is
followed in Sections 3 and 4 by a description of our implementation and
test suite. In Section 5, we describe the results of trace-driven simulations
that evaluate the performance of several prediction
algorithms, as well as timing experiments using scientific
codes. Finally, in Sections 6 and 7, we summarize related work and
results, and we outline plans for future work.
2. MARKOV MODEL PREDICTION
One of the lessons from experimental analysis of I/O patterns
in parallel codes is that their behavior is far more
complex than originally expected [4, 16, 14, 18, 13]. Unlike
sequential I/O patterns on vector systems, parallel I/O
patterns are often strided or irregular and vary widely in size.

Figure 1: Initial CONTINUUM I/O Request Offsets (initial offset in bytes vs. request number).
2.1 I/O Pattern Complexity
Simple approaches to file system tuning such as N-block
read ahead work well for sequential access patterns, but they
are inappropriate for more irregular patterns. As an example, Figure 1 shows the initial file offsets for requests from the
execution of CONTINUUM, a parallel, unstructured mesh
code whose I/O behavior was captured using our Pablo I/O
characterization toolkit [4]. Fewer than half the I/O requests
follow sequentially from the previous request.
Complex patterns such as those in CONTINUUM cannot
be reduced to simple rules (i.e., sequential or strided) and
can only be captured by more powerful and expressive meth-
ods. Moreover, such complex patterns are common when
application I/O requests are mediated by multilevel I/O libraries
2.2 Markov Models
To capture non-sequential I/O patterns, we have chosen
to model an application's I/O access stream as a Markov
model. A Markov model with N states can be fully described
by its transition matrix P. Each value P_ij of this N-by-N
matrix represents the probability of a transition to state j
from the current state i. Because the transition probabilities
depend only on the current state, the transition path to the
current state does not affect future state transitions; this
characteristic is known as the Markov property [7].
Given a file system block size, we create a state for every
block associated with each file. Because application I/O requests
are not normally expressed as block offsets, but rather
as byte offsets, we convert each request into the sequence of
blocks that must be accessed to satisfy the application request.
Hence, multiple requests to the same block result in
the block number being repeated in the access stream. A
transition occurs whenever a file block is accessed, and the
model's transition matrix is created by setting each P_ij to
the fraction of times that state j is accessed immediately
after state i.
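A minimal sketch of this construction, assuming the request stream has already been converted into a sequence of block numbers (the function name and the sparse dict-of-dicts representation are illustrative choices):

    # Build the sparse Markov transition matrix from a stream of file block
    # accesses: model[i][j] is the fraction of accesses to block j that
    # immediately follow an access to block i.
    from collections import defaultdict

    def build_markov_model(block_stream):
        counts = defaultdict(lambda: defaultdict(int))
        for i, j in zip(block_stream, block_stream[1:]):
            counts[i][j] += 1
        model = {}
        for i, successors in counts.items():
            total = sum(successors.values())
            model[i] = {j: n / total for j, n in successors.items()}
        return model

    # A strided pattern similar to Figure 2, repeated three times.
    stream = [0, 1, 3, 5, 7, 9, 11, 13] * 3
    print(build_markov_model(stream)[13])  # {0: 1.0} on the wrap-around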
Consider two illustrative examples. First, a simple, sequential
access pattern is represented by a cyclic matrix: each block is accessed once in succession. Second, Figure 2 illustrates a simple strided pattern that repeats multiple times.

Figure 2: Strided I/O Pattern and Transitions.

Both of these examples highlight a key aspect of
Markov transition matrices: the matrices are sparse for all
but the most complex, irregular patterns. Consequently, one
can compactly represent access patterns for even large files.
However, the Markov property that allows compact representation
is also a potential error source. Because a block
access depends solely on the immediately preceding access,
the history of accesses is lost. For example, consider the
following pattern of block accesses: 0, 1, 0, 2, 0, 3, 0, 4.
The Markov model for this pattern has four transitions from
block zero, each with probability 0.25. Therefore, the model
implies that accesses to blocks one through four are equally
likely after access to block zero. However, if block three has
already been accessed, the next block to follow block zero is
block four.
2.3 Prediction Strategies
Given an I/O access pattern Markov model, it can be
used during a subsequent application execution to predict
future I/O accesses. Several different prediction strategies,
based on the Markov model, are possible, ranging in complexity
from simply following the most likely sequence of single
step state transitions (greedy prediction) to more complex
schemes that predict the most likely N-step path. Each
differs in implementation complexity, execution overhead,
and associated predictive power.
Greedy Prediction. The simplest prediction method chooses
a sequence of file blocks by repeatedly finding the most likely
transition from the current state s. This approach builds
directly on N-step transition models from Markov theory.
Hence, we call this algorithm greedy-fixed.
The sequence of predicted blocks stops when a specified
number of blocks is reached, though other terminating conditions
are possible. For instance, the sequence might be
terminated when the total likelihood of the sequence drops
below a given threshold. This results in predictions of varying
length. In either case, the greedy-fixed prediction strategy
provides a (possibly variable length) file block sequence
that can be used for prefetching or write behind.
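A sketch of greedy-fixed prediction over the dict-of-dicts model from the previous sketch (the names are illustrative):

    # Greedy-fixed prediction: from the current block, repeatedly follow the
    # most probable single-step transition for a fixed number of steps.
    def greedy_fixed(model, current, length):
        prediction = []
        state = current
        for _ in range(length):
            successors = model.get(state)
            if not successors:
                break
            state = max(successors, key=successors.get)
            prediction.append(state)
        return prediction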
Path Prediction. Greater prediction accuracy may be possible
by choosing a sequence with the highest total likeli-
hood, rather than choosing the most likely sequence of single
step transitions. This method treats the Markov model
as a graph and performs a depth-limited search, beginning
at the current state, to find the most likely path. We call
this path-fixed prediction. It can choose less likely initial
transitions if they lead to high probability sequences.
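A corresponding sketch of path-fixed prediction as a depth-limited search; the exhaustive recursion is only practical for the small branching factors typical of sparse models and is an illustrative stand-in for whatever pruning a real implementation might use.

    # Path-fixed prediction: depth-limited search for the length-L path with
    # the highest product of transition probabilities.
    def path_fixed(model, current, length):
        best_path, best_prob = [], 0.0
        def search(state, path, prob):
            nonlocal best_path, best_prob
            if len(path) == length:
                if prob > best_prob:
                    best_path, best_prob = list(path), prob
                return
            for nxt, p in model.get(state, {}).items():
                path.append(nxt)
                search(nxt, path, prob * p)
                path.pop()
        search(current, [], 1.0)
        return best_path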
Amortized Prediction. For I/O access patterns that revisit
a block many times, the most likely predicted path may not
visit the block. However, enough alternate paths may visit
the block to cause it to be accessed more often during the
application execution. To explore the effectiveness
of such an approach, we propose a method called amortized
prediction, so named because the value of predicting a block
is in a sense amortized over the entire execution rather than
just over the next L blocks.
Amortized prediction creates the state occupancy probability
vectors pi(t) for each of the next L time steps and
chooses the most likely state in each vector as the prediction
for that time step. The initial probability vector pi(0)
is zero except for element pi_s(0), which is set to one. The
vectors pi(1) through pi(L) are generated by repeated application
of the Kolmogorov equation pi(t+1) = pi(t)P, where
P is the model's transition matrix. Block j of the prediction
sequence is chosen as the state with the greatest probability
in pi(j).
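A sketch of amortized prediction using the same sparse model representation as the earlier sketches; the occupancy vector is propagated with the equation above and the most probable state of each vector is emitted (the names are illustrative):

    # Amortized prediction: propagate pi(t+1) = pi(t) * P over a sparse
    # dict-of-dicts transition matrix and pick the argmax at each step.
    def amortized(model, current, length):
        pi = {current: 1.0}
        prediction = []
        for _ in range(length):
            nxt = {}
            for state, prob in pi.items():
                for succ, p in model.get(state, {}).items():
                    nxt[succ] = nxt.get(succ, 0.0) + prob * p
            if not nxt:
                break
            pi = nxt
            prediction.append(max(pi, key=pi.get))
        return prediction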
3. ASSESSMENT INFRASTRUCTURE
To assess the efficacy of Markov models in predicting I/O
access patterns within scientific applications, we have relied
on three approaches: trace-driven simulation, cache simulation,
and experimental measurement within a research file
system.
3.1 Trace-Driven Assessment
We constructed Markov models using a sparse matrix
strategy to represent the Markov model's transition matrix
and reduce storage requirements. Many scientific applications
typically have regular, though not sequential, file access
patterns. With such regularity, only a few transitions
occur for each state. If the average number of transitions
from a source state is bounded above by a small constant,
the size of the matrix will grow linearly, rather than quadrat-
ically, with the size of the file.
Our trace-driven simulator accepts these sparse matrix
Markov models, a prediction strategy, and an application
I/O trace in the Pablo Self-Defining Data Format (SDDF)
[1], and then computes prediction accuracies. Figure 3 details
the simulator's operation.
3.2 Cache Simulation
We also developed a simple file cache simulator with LRU
replacement of fixed-size blocks. After each request to the
cache is satisfied, either by a cache hit or by loading the requested
block, the cache prefetches additional blocks specified
by a prediction algorithm.
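A minimal sketch of such a cache simulator, assuming a predictor callback that returns the blocks to prefetch after each demand access (the class and method names are illustrative):

    # LRU file cache with prediction-driven prefetching: after each demand
    # access, the blocks named by the predictor are loaded (if absent) without
    # counting as demand hits or misses.
    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity, predictor):
            self.capacity, self.predictor = capacity, predictor
            self.blocks = OrderedDict()          # block -> None, kept in LRU order
            self.hits = self.misses = 0

        def _touch(self, block):
            if block in self.blocks:
                self.blocks.move_to_end(block)
                return True
            self.blocks[block] = None
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            return False

        def access(self, block):
            if self._touch(block):
                self.hits += 1
            else:
                self.misses += 1
            for predicted in self.predictor(block):
                self._touch(predicted)           # prefetch predicted blocks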
Figure 3: Trace-Driven Simulator Structure. An application trace file and the Markov model built from it drive a prediction algorithm, and the predicted sequences are compared against the actual I/O requests to produce prediction accuracy results.

Figure 4: Cactus I/O Request Pattern (Detail; initial offset in bytes vs. request number).
3.3 File System Measurement
To explore the thesis that performance is best maximized
by tuning file system policies to application behavior, we
also developed a portable parallel file system (PPFS2) that
consists of a portable, user-level input/output library with a
variety of file caching, prefetching, and data policies. PPFS2
executes atop a Linux cluster and uses the underlying file
system for I/O. We have used PPFS2 to study temporal
access pattern prediction via time series [19, 20].
In conducting experiments, we used a cluster of 32 dual-processor,
933 MHz Intel Pentium III systems linked through
Gigabit Ethernet. Each machine contained 1 GB of memory
and ran the Linux 2.2 kernel. Disk requests were serviced
by a local disk on each machine.
4. SCIENTIFIC APPLICATION SUITE
To assess the accuracy of compact Markov models for
I/O prediction and their utility for file block prefetching,
we chose a suite of six scientific applications. These parallel
applications have been the subject of extensive analysis [4,
16, 14] and were chosen for their large size and variety of
I/O request characteristics.
All I/O traces were obtained using our Pablo I/O characterization
toolkit [4]. For simplicity, we have used the I/O
trace data from a single node of each parallel application
execution, and we model the application's requests to a single file. Each application code is briefly described in the paragraphs below.

Figure 5: CONTINUUM Request Sizes (request size in bytes vs. request number).
Cactus. The Cactus code [2] is a modular environment for
developing high-performance, multidimensional simulations,
particularly for numerical relativity; the test problem used
to obtain the I/O trace was a black hole simulation.
The read requests for this test problem were all extremely
small, with the largest being a mere sixteen bytes, though
the file request offsets span almost a gigabyte. A large fraction
of the requests are sequential, and almost 98 percent
are within 50 bytes of the previous request. However, the
remaining requests are to regions at least 1 MB from the previous
request and often greater than 10 MB away. Figure 4
shows a detailed view of the first six hundred requests.
CONTINUUM. CONTINUUM 1 is an unstructured mesh
continuum mechanics code. CONTINUUM's I/O requests
are primarily restricted to the first eighth of the file, as
shown in Figure 1. However, end-of-file accesses occur
at regular intervals.
Over 57 percent of the accesses are non-sequential, but the
seek distances within the first eighth of the file are small.
Most interestingly, Figure 5 shows that the request sizes
vary widely, ranging from less than 10 bytes to more than
100,000 bytes.
Dyna3D. Dyna3D is an explicit finite-element code for analyzing
the transient dynamic response of three-dimensional
solids and structures. This application generates a long
sequence of sequential requests; one processor sequentially
reads the first 2 MB of a large input file a single byte at a
time.
Hartree-Fock. The Hartree-Fock code [5] calculates interactions
among atomic nuclei and electrons in reaction paths,
storing numerical quadrature data for subsequent reuse. I/O
requests each retrieve 80 KB of data, and the file is accessed
sequentially six times. Figure 6 shows the file offsets and
data reuse.
(Footnote: Trace data for the CONTINUUM and HYDRO codes are taken from codes that use an existing serial I/O library. Developers analyzed this library and determined it to be unsuitable for a parallel environment. Developers of the replacement I/O library expect a very different pattern of I/O once the new library is integrated into the codes.)
[Figure 6: Hartree-Fock I/O Request Pattern. Initial offset (bytes) vs. request number.]
[Figure 7: HYDRO I/O Request Pattern. Initial offset (bytes) vs. request number.]
HYDRO. HYDRO 1 is a block-structured mesh hydrodynamics
code with multi-group radiation diffusion. The majority
of I/O accesses are to three widely separated regions of
the le. Sixty-seven percent of the accesses follow sequentially
from the previous access, but seeks of up to almost
eight million bytes also occur. The distribution of the request
sizes ranges from one byte to almost one million bytes.
SAR. The SAR (synthetic aperture radar) code produces
surface images from aircraft- or satellite-mounted radar data.
As Figure 8 shows, the application issues two sequential requests,
followed by a seek to the next portion of the file.
Request sizes range from 370 KB to almost 2 MB.
[Figure 8: SAR I/O Request Pattern. Request position and extent (bytes) vs. request number.]
5. EXPERIMENTAL ASSESSMENT
Given the simulation infrastructure of §3 and I/O traces
from the application suite of §4, we built Markov models for
the associated I/O patterns and analyzed the accuracy of
these models by comparing model predictions with actual
I/O requests. We also investigated the effect of file block
sizes on model size and complexity and analyzed prediction
accuracy for both single and multiple block predictions.
5.1 Prediction Accuracy
Because Markov models are a lossy representation of application
I/O patterns, the simplest metric of prediction accuracy
is the difference between model predictions and application
request streams. This approach allows one to assess
efficacy as a function of spatial I/O patterns.
The simulator of Figure 3 generates a prediction sequence
of length L before each application block request is pro-
cessed. The accuracy of the prediction sequence is the fraction
of the predicted blocks that exactly match the blocks
requested during the next L timesteps. The overall accuracy
is the arithmetic mean across all predicted request se-
quences, capturing the effects of file block size, prediction
length, and prediction algorithm.
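A small sketch of this metric, under the assumption that predicted and requested blocks are compared position by position (the predictor interface below is hypothetical):

    def sequence_accuracy(predicted, actual_next_L):
        # Fraction of predicted blocks that match the block requested at the same step.
        if not predicted:
            return 0.0
        hits = sum(1 for p, a in zip(predicted, actual_next_L) if p == a)
        return hits / len(predicted)

    def overall_accuracy(block_trace, predictor, L):
        # predictor(history) returns a length-L list of predicted blocks.
        scores = []
        for t in range(len(block_trace) - L):
            preds = predictor(block_trace[:t + 1])
            scores.append(sequence_accuracy(preds[:L], block_trace[t + 1:t + 1 + L]))
        return sum(scores) / len(scores) if scores else 0.0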
Block Size. The most basic choice when building Markov
models of I/O activity is the choice of block size. Matching
the model's block size to file cache or disk block size is the
most natural choice. Intuitively, larger block sizes reduce
the model's storage requirements, but they provide weaker
bounds on the range of bytes requested. Larger blocks provide
implicit prefetching, which interacts with access pattern
sequentiality and request size.
Figure
9 shows the single step (i.e., greedy with a single
block lookahead) prediction accuracy of each I/O trace
as a function of block size. The CONTINUUM, HYDRO,
Cactus, and Dyna3D codes all show high accuracy for each
block size, and this accuracy is near constant across block
sizes.
In contrast, the prediction accuracy for the SAR code declines
sharply as the block size increases due to repeated file
seeks. The large, 80 KB requests by the Hartree-Fock code
create non-linear behavior; large blocks prefetch data that is
unused when the stride boundaries of Figure 6 are crossed.
More generally, this behavior is a consequence of the regularity
of application request patterns and the proximity of
the block size to the request size.
[Figure 9: One-Step Accuracy (Multiple Block Sizes). Prediction accuracy vs. block size (bytes) for Cactus, CONTINUUM, Dyna3D, Hartree-Fock, and HYDRO.]
[Figure 10: Greedy-Fixed Accuracy (64 KB Blocks). Prediction accuracy vs. prediction length for Cactus, CONTINUUM, Dyna3D, Hartree-Fock, and HYDRO.]
As the block size approaches
the request size from above, each state in the associated
Markov model has a single transition to the next
state and fewer transitions back to itself. From the other
side, self-transitions interrupt the procession from one state
to another more frequently. These factors result in a model
where the most likely state transition has a low probability
compared to other models. This suggests that the block
size should be some reasonable multiple of the application
request size that amortizes disk I/O overhead unless the application
request sizes are very large.
Prediction Length. For prefetching to be effective, it must
accurately predict the access pattern for a request sequence;
only with such predictions can the I/O system stage data
for requests. Figure 10 shows the effect of prediction path
length for a 64 KB block size and a greedy-fixed prediction
strategy.
Not surprisingly, prediction accuracy generally decreases
as prediction length increases. The prominent exception is
again the Hartree-Fock code. Because the single step prediction
accuracy for 64 KB blocks is low, the multiplicative
effect of a prediction sequence is also low. Even for 4 KB
blocks (not shown), where Hartree-Fock has a single-step
accuracy of over 90 percent, the prediction accuracy is
much lower when predicting 25 steps ahead.
Prediction Algorithm. The multi-step prediction errors for
the SAR and Hartree-Fock codes illustrate the information
loss from the compact Markov model: the memoryless property
means that certain seek patterns can trigger mispredictions
for greedy lookahead strategies. This is the rationale
for the path-fixed and amortized strategies described in §2.3.
We tested each prediction strategy on multiple block sizes
and prediction lengths up to 25 steps. In most cases, the
three algorithms resulted in similar prediction accuracy, with
all declining in accuracy as the sequence length increased.
However, Figure 11 shows a wide gap between the performance
of the amortized, greedy-fixed, and path-fixed strategies for
the Hartree-Fock code and two large block sizes.
The reason for this striking difference is that after a few
steps using amortized prediction, enough probability mass
has moved to a succeeding state to cause the predicted state
to change for the following steps. Greedy-fixed, and to a
lesser degree path-fixed, continue following the most likely
transition and will not account for multiple paths to another
state.
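The following sketch contrasts the two behaviors as we read them; the transition model is the dictionary-of-dictionaries form from the earlier sketch, and the tie-breaking and renormalization details of the real strategies are assumptions:

    def greedy_fixed(model, state, length):
        # Repeatedly follow the single most likely transition.
        preds = []
        for _ in range(length):
            row = model.get(state)
            if not row:
                break
            state = max(row, key=row.get)
            preds.append(state)
        return preds

    def amortized(model, state, length):
        # Push probability mass forward through the model and predict the state
        # holding the most mass at each step, so multiple paths to a state add up.
        mass, preds = {state: 1.0}, []
        for _ in range(length):
            nxt = {}
            for s, p in mass.items():
                for t, q in model.get(s, {}).items():
                    nxt[t] = nxt.get(t, 0.0) + p * q
            if not nxt:
                break
            mass = nxt
            preds.append(max(nxt, key=nxt.get))
        return preds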
5.2 Cache Behavior
Access prediction accuracy is but one metric of Markov
model utility. To assess the benefits of Markov models for
predicting I/O requests, we also simulated a simple client
cache and compared prefetching using the Markov model to
using a baseline N block read ahead strategy.
The simulated cache requests a prediction of length N
after each block request, and the predicted blocks are loaded
if not already there. Replacement is via a standard LRU
policy, with predicted blocks already present in the cache
placed with newly fetched blocks at the end of the LRU
list. We conducted experiments using block sizes of 1 KB,
4 KB, 16 KB, and 64 KB, a variety of file cache sizes, and
prediction horizons from 1 to 10 blocks.
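A minimal sketch of such a cache, assuming prefetched blocks are simply placed at the most-recently-used end alongside the demand-fetched block (class and method names are ours):

    from collections import OrderedDict

    class PrefetchingCache:
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()          # block -> None, ordered LRU to MRU

        def _touch(self, block):
            # Returns True on a hit; otherwise loads the block, evicting the LRU block if full.
            if block in self.blocks:
                self.blocks.move_to_end(block)
                return True
            self.blocks[block] = None
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)
            return False

        def access(self, block, predicted):
            hit = self._touch(block)
            for b in predicted:                  # prefetched blocks join at the MRU end
                self._touch(b)
            return hit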
Given the relatively strong sequentiality in the application
I/O access patterns, all three prediction strategies yield relatively
high cache hit ratios. Figures 12-13 show the cache
hit ratios for two typical cases, the HYDRO and Cactus
codes. Here, the greedy and amortized prediction strategies
always perform better than N block read ahead when
prefetching multiple blocks, and they often perform better
when predicting only one or two steps ahead.
5.3 File System Measurements
The true test of any I/O prediction strategy is its performance
as part of a parallel file system. Only such tests
capture the interplay of application and system features.
Hence, we also conducted experiments using Cactus and a
synthetic application similar to Hartree-Fock. Both were
modified to use PPFS2, the user-level parallel file system
of §3. PPFS2 was extended to support N-block readahead
and our Markov model prediction methods, and to use these
policies for block prefetching decisions.
To assess the benefits of Markov model prediction over
standard prefetch policies, we ran experiments using the
Cactus code on a 50x50x50 grid for 2000 iterations. This results
in over 2 million block read requests distributed across
a 2 GB data file. We configured our PPFS2 testbed to use a
file cache with 4 KB and 8 KB blocks, an LRU cache
replacement strategy, and a range of prefetch depths. Only read requests were cached; write requests
were sent directly to the underlying file system, which
for our purposes was a single local disk. The Markov model
predictions were computed using greedy-fixed, the simplest
prediction strategy.
[Figure 12: HYDRO Cache Hit Ratios. Hit rate vs. prefetch depth for N-block readahead, greedy-fixed, and amortized.]
[Figure 13: Cactus Cache Hit Ratios. Hit rate vs. prefetch depth for N-block readahead, greedy-fixed, and amortized.]
Figure
14 shows the results of these experiments for a variety
of prefetch distances. The experiments using Markov
model prediction showed a performance improvement over
all tested prefetch depths when compared to both N block
readahead and no prefetching. Total application execution
time decreased by up to 10% over the no prefetching case
when using 8 KB blocks, and up to 3% when using 4 KB
blocks. N block readahead yielded substantially poorer performance
than either of the other two methods, which confirms
our assertion that file system policies designed for sequential
access patterns can be inappropriate for scientific
codes. The data from Figure 14 also show that the overhead
for using the Markov model is small and that the overhead
is repaid by improved prefetch predictions.
[Figure 11: Hartree-Fock Strategy Comparison. Prediction accuracy vs. prediction sequence length for the greedy-fixed, path-fixed, and amortized-fixed strategies; (a) 64 KB block size, (b) 256 KB block size.]
[Figure 14: Cactus Execution Time. Execution time vs. prefetch depth (blocks) for no prefetching, N-block readahead, and Markov model prediction, at 4 KB and 8 KB block sizes.]
Figure 15 shows the cache hit rates recorded during these
tests. For both block sizes, Markov model prediction achieves
a hit rate of over 99.99%. N block readahead achieves a hit
rate of over 98%, but the large performance difference between
the two methods indicates that N-block readahead is
prefetching many blocks that go unused.
To further quantify the overhead of Markov models, we
wrote a synthetic code that issues requests in the Hartree-Fock
pattern of §4. This synthetic code reads the 2.64 MB
input file sequentially six times via 396 requests. Each 80
KB request is followed by a loop that iterates for approximately
one millisecond, simulating a compute cycle. This
pattern is ideally suited to N-block readahead, and any large
overhead in the Markov model approach will be apparent.
The figure below shows the results of this experiment for a 1 MB
cache and 8 KB blocks. Markov model prediction performs
just as well as N-block readahead for this request pattern.
The two algorithms also result in a decreased execution time
compared to the no-prefetching case, with the time savings
reaching over 13% when prefetching 10 blocks at a time.
[Figure: Hartree-Fock Execution Time. Execution time vs. prefetch depth (blocks) for no prefetching, N-block readahead, and Markov model prediction.]
[Figure 15: Cactus Cache Hit Rate. Cache hit rate vs. prefetch depth (blocks) for Markov and N-block readahead at 4 KB and 8 KB block sizes.]
5.4 Experiment Summary
Our results have shown high prediction accuracy across a
large range of block sizes and lookahead lengths using simple,
greedy prediction strategies for most application I/O
patterns. However, because the amortized strategy yielded
results equal to those for greedy strategies in most cases and
substantially better performance in some others, we believe
the additional complexity of non-greedy strategies is justified.
In addition, our timing experiments, using a user-level
file system modified with Markov models, demonstrate that
our approach can noticeably reduce the execution time of a
sample scientific application. We also demonstrate that the
overhead is very low.
6. RELATED WORK
Many groups have examined the spatial and temporal I/O
patterns of scientific applications. Smirni et al. [17, 16, 15]
characterized the I/O accesses in a suite of scientific
applications drawn from the Scalable I/O initiative [4]. Characterization
showed that the applications exhibited access
patterns ranging from simple sequential accesses to interleaved
patterns across processors. Although simple descriptions
are sufficient to describe these patterns, introduction
of higher-level I/O libraries such as FlexIO [11] has led to
even more complicated patterns.
Patterson et al [12] used application-provided hints to
guide prefetching and caching decisions. Their integrated
caching and prefetching system used a cost-benet analysis
to decide when to prefetch and which blocks to replace in
the cache.
Vellanki and Chervenak [21] used an approach similar
to Patterson's but replaced application-provided hints with
an automatically-generated probabilistic model of the access
stream. They built and used a prefetch tree using the
Lempel-Ziv scheme from Duke [22, 6]. In addition to reducing
the size of the access stream, this tree allowed prediction
of future accesses based on likely paths through the tree. Restricting
the width of the tree allowed them to decrease its
storage requirements at the cost of additional inaccuracy in
the predicted accesses. Our use of Markov models builds
on many of these ideas but emphasizes configurability and
compatibility with temporal prediction schemes [19].
Artificial neural networks (ANNs) have been used previously
in I/O pattern characterization. Madhyastha and
Reed [10, 9] used ANNs and hidden Markov models to assign
the request pattern a simple qualitative classification
such as "sequential" or "variably-strided." Markov models
have also been used by Cortes and Labarta [3] to identify
and predict strided access patterns. Our work differs by providing
an explicit prediction of which blocks to expect next
and by representing arbitrary access patterns.
7. CONCLUSIONS AND FUTURES
We have outlined an approach to access pattern prediction
based on Markov models and several associated prediction
strategies. Using I/O traces from large scientific
applications, our experiments explored the effects of varying
the block size used to build the model, the length of the
predicted I/O stream, and the algorithm used to generate
predictions.
The experimental data suggest that for the irregular I/O
patterns found in today's scientific applications, Markov
models strike an effective balance between implementation
complexity and predictive power. For reasonable block sizes,
prediction performance is largely independent of block size,
allowing file system designers to choose a set of defaults
while achieving high performance. This allows users of these
models to choose a block size appropriate to their tasks
without greatly affecting prediction accuracy. Finally, an
amortized path scheme is generally more effective in predicting
access patterns than simple greedy schemes.
Several opportunities exist for future study, including more
extensive experiments with scientific applications. Also, our
current techniques assume that one or more previous runs
are available to generate the Markov model; allowing the
model to be created and used online may provide benets
over uninformed prefetching and caching even though large
parts of the application pattern have not yet been integrated
into the model. Finally, in cases where hints are provided
about future I/O accesses, Markov models can be used to
measure the accuracy of those hints. This
would allow an adaptive file system to decide if using the
hinted sequence for further prefetching or caching is likely
to provide benet.
8.
--R
The Pablo Self-Defining Data Format
Three Dimensional Numerical Relativity with a Hyperbolic Formulation.
Linear Aggressive Prefetching: a Way to Increase Performance of Cooperative Caches.
Characterization of a Suite of Input/Output Intensive Applications.
Input/Output Characteristics of Scalable Parallel Applications.
Practical Prefetching via Data Compression.
The Art of Computer Systems Performance Analysis.
A Trace-Driven Comparison of Algorithms for Parallel Prefetching and Caching
Automatic Classification of Input/Output Access Patterns
Input/Output Access Pattern Classification Using Hidden Markov Models
Diving Deep: Data-Management and Visualization Strategies for Adaptive Mesh Refinement Simulations
Informed Prefetching and Caching.
Scalable Input/Output: Achieving System Balance.
A Comparison of Logical and Physical Parallel I/O Patterns.
I/O Requirements of Scientific Applications
Performance Modeling of a Parallel I/O System: An Application Driven Approach.
Workload Characterization of Input/Output Intensive Parallel Applications.
Lessons from Characterizing the Input/Output Behavior of Parallel Scientific Applications
ARIMA Time Series Modeling and Forecasting for Adaptive I/O Prefetching.
Automatic ARIMA Time Series Modeling and Forecasting for Adaptive Input/Output Prefetching.
Prefetching Without Hints: A Cost-Benefit Analysis for Predicted Accesses
Optimal Prefetching via Data Compression.
--TR
Practical prefetching via data compression
Informed prefetching and caching
Input/output characteristics of scalable parallel applications
Optimal prefetching via data compression
A trace-driven comparison of algorithms for parallel prefetching and caching
Input/output access pattern classification using hidden Markov models
Lessons from characterizing the input/output behavior of parallel scientific applications
ARIMA time series modeling and forecasting for adaptive I/O prefetching
Diving Deep
Linear Aggressive Prefetching
Workload Characterization of Input/Output Intensive Parallel Applications
I/O Requirements of Scientific Applications
Automatic classification of input/output access patterns
Automatic arima time series modeling and forecasting for adaptive input/output prefetching
--CTR
Nancy Tran , Daniel A. Reed, Automatic ARIMA Time Series Modeling for Adaptive I/O Prefetching, IEEE Transactions on Parallel and Distributed Systems, v.15 n.4, p.362-377, April 2004
Yifeng Zhu , Hong Jiang, CEFT: a cost-effective, fault-tolerant parallel virtual file system, Journal of Parallel and Distributed Computing, v.66 n.2, p.291-306, February 2006 | storage;markov model;parallel computing |
514234 | Near-optimal adaptive control of a large grid application. | This paper develops a performance model that is used to control the adaptive execution the ATR code for solving large stochastic optimization problems on computational grids. A detailed analysis of the execution characteristics of ATR is used to construct the performance model that is then used to specify (a) near-optimal dynamic values of parameters that govern the distribution of work, and (b) a new task scheduling algorithm. Together, these new features minimize ATR execution time on any collection of compute nodes, including a varying collection of heterogeneous nodes. The new adaptive code runs up to eight-fold faster than the previously optimized code, and requires no input parameters from the user to guide the distribution of work. Furthermore, the modeling process led to several changes in the Condor runtime environment, including the new task scheduling algorithm, that produce significant performance improvements for master-worker computations as well as possibly other types of grid applications. | INTRODUCTION
This paper develops a model for near-optimal adaptive control of
the state-of-the-art stochastic optimization code ATR [18] on Grid
platforms such as Condor [19], Globus [10], or Legion [13], in
which the number and capabilities of the distributed hosts that
execute ATR vary during the course of the computation.
Stochastic optimization uses large amounts of computational
resources to solve key organizational, economic, and financial
planning decision problems that involve uncertain data. For
instance, an approximate solution of a cargo flight scheduling
problem required many hours of computation on four hundred
processors. To find more accurate solutions (in which a wider
range of scenarios is considered), or to verify the quality of
approximate solutions, may require vastly greater resources. The
aim is to find the decision that optimizes the expected
performance of the system across all possible scenarios for the
uncertain demands. Since the number of scenarios may be very
large (typically 10 4 to 10 7 ), evaluation of the expected
performance (which requires evaluation of the performance under
each scenario) can be quite expensive. ATR is also representative
of a class of iterative algorithms that (1) have a basic fork-join
synchronization structure, (2) require an unpredictable number of
iterations to converge to a solution, and (3) can adjust the number
and sizes of the tasks that are forked per iteration.
Computational grids running middleware such as Condor
currently provide one of the most attractive environments for
running large compute-intensive applications. These grids are
inexpensive, widely accessible, and powerful. Over time,
applications submitted using grid middleware are given a "fair
share" of the computational resources that are not being used by
higher priority computations. Applications like ATR can obtain
large quantities of processing power easily and inexpensively.
To run efficiently in a grid environment, the application must be
able to execute on a heterogeneous collection of hosts whose size
varies unpredictably during execution. Moreover, it should be
able to adapt to the changes in the size and composition of the
collection of hosts, as well as to changes in the computational
demands of the algorithm as it executes. It may adapt, for instance,
by changing the distribution of work among the hosts.
It is unknown how to develop an adaptive version of a stochastic
optimization tool such as ATR that minimizes total execution time
in a grid environment. The problem is particularly complex
because the parameters that govern the amount and distribution of
work also affect intrinsic performance of the algorithm (such as
the time to initialize each iteration and the number of iterations to
reach convergence) in ways that are not easily quantified.
Furthermore, the runtime environment typically includes support
functions that add unpredictable delays to task execution times.
Previous work [18] in developing the ATR algorithm for Condor
platforms has relied on simple task scheduling and extensive
experimental measurements of total application running time as a
function of (1) the average number of allocated compute nodes,
and (2) the fixed (i.e., non-adaptive) values of the ATR
parameters that define the number and composition of the parallel
tasks in the computation. These experiments have resulted in
rules of thumb for selecting the ATR parameters as a function of
the number of compute nodes allocated. For example, when the
*This work was partially supported by the National Science
Foundation under grants EIA-9975024 and EIA-0127857.
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. To copy
otherwise, or republish, to post on servers or to redistribute to lists,
requires prior specific permission and/or a fee.
ICS'02, June 22-26, 2002, New York, New York, USA.
application runs on X ≈ 100 distributed Condor nodes, the rules
provide parameters to ensure that 2X to 5X parallel tasks are
available for processing at any given time, in order to keep
processor efficiencies high in the presence of the high variability
in the task execution times. Previous studies concerning adaptive
control of distributed applications (e.g., [6][14][20]) have
proposed similar kinds of experimentally determined rules of
thumb. Development of adaptive codes that minimize total
execution time for complex applications, based on more precise
modeling and adaptation strategies has, to our knowledge, not
previously been investigated.
This paper develops a precise and accurate performance model of
ATR and then uses the model to develop an adaptive version of
the code that minimizes the total execution time for a large class
of planning problems, on any collection of compute nodes
including a varying number of heterogeneous nodes. The new
adaptive code does not require any user input to guide the amount
and distribution of work, and implements parameter settings quite
different from those obtained from the "rules of thumb"
developed previously. The new adaptive code also uses (1) a
higher performance task scheduling strategy, (2) a reduced
debug I/O level, and (3) a proposed simple change in the Condor
runtime system support that reduces needless overhead on the
master node. The latter two changes, motivated by the
development of the ATR performance model, greatly improve task
execution times as well as the predictability of the task execution
times. All three changes can greatly improve other grid
applications as well as ATR. The new adaptive code, together
with the runtime system change, reduces total execution time
compared to the previously optimized code by factors of six or
more, depending on the planning problem and grid configuration.
The remainder of this paper is organized as follows. Section 2
provides an overview of the ATR application, the Condor runtime
environment, and related work in performance of adaptive grid
codes. Section 3 describes the detailed measurement-based
performance analysis of ATR needed to develop the model.
Section 4 describes, validates, and applies the model to select
near-optimal configurations for ATR on a sizable Condor pool.
Conclusions of the work are stated in Section 5.
2. BACKGROUND
Sections 2.1 and 2.2 briefly describe the ATR stochastic
optimization application, the Condor system, and the MW runtime
support library for master-worker computations. Section 2.3 then
summarizes related work in performance modeling and
development of large distributed applications.
2.1 ATR
ATR is an iterative "asynchronous trust-region" algorithm for
solving the fundamental stochastic optimization problem: two-stage
stochastic linear programming with recourse, over a discrete
probability space. The algorithm is described in detail in [18].
Here we discuss those aspects of the algorithm that are relevant to
selecting the performance-related algorithmic parameters; in
particular, the parameters that control the number and
composition of the parallel tasks in the execution.
The problem is as follows. Given a set of N scenarios w_1, ..., w_N with associated probabilities p_1, ..., p_N, solve

  min_x  c^T x + Q(x)   subject to  Ax = b,  x >= 0,

where

  Q(x) = sum_{i=1..N} p_i Q(x; w_i),

and each Q(x; w) is the optimal objective value for a second-stage
linear program, defined as follows:

  Q(x; w) = min_y { d(w)^T y : W(w) y = h(w) - T(w) x,  y >= 0 }.

To evaluate the function Q(x) thus requires the solution of N
independent second-stage linear programs. When N is large,
this process can be computationally expensive.
The function Q(x) is a piecewise linear, convex function. The
ATR algorithm builds up a lower bounding function m(x) for the
true objective function c^T x + Q(x), using information gathered
during evaluation of the second-stage linear programs.
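As an illustration only (not the ATR implementation), the recourse function can be evaluated scenario by scenario with an off-the-shelf LP solver; the scenario data layout (d, W, h, T_mat) below is an assumed generic recourse structure:

    import numpy as np
    from scipy.optimize import linprog

    def second_stage_value(x, d, W, h, T_mat):
        # Q(x; w) = min_y { d^T y : W y = h - T x, y >= 0 } for one scenario.
        res = linprog(c=d, A_eq=W, b_eq=h - T_mat @ x, bounds=(0, None), method="highs")
        return res.fun

    def expected_recourse(x, scenarios, probs):
        # scenarios is a list of (d, W, h, T_mat) tuples; the N solves are
        # independent, which is the work ATR farms out to workers.
        return sum(p * second_stage_value(x, *s) for p, s in zip(probs, scenarios))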
The basic structure of the algorithm, illustrated in Figure 1, is that
a master processor computes a new candidate iterate x by solving
the trust-region subproblem defined below. Worker processors
then evaluate the second-stage linear programs for this x and
produce the information needed to refine the model function m(x).
The master processor updates this model function asynchronously,
as information arrives from each worker processor. Outdated
information may also be deleted from m(x) on occasion. When
sufficient new information has been received (i.e., after all or most
of the processors working on evaluation of x have returned their
results), the master computes a new iterate x. The process then
repeats.
New iterates x are generated by solving subproblems of the form:

  min_x  m(x)   subject to  Ax = b,  x >= 0,  ||x - x^I||_inf <= D,

where x^I is the "incumbent" (the best iterate identified by the
algorithm to date), while D is the "trust-region radius," which
defines the maximum distance we can move away from the
incumbent on the current step.
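Assuming the model m(x) has the usual cutting-plane form c^T x + max_j (alpha_j + beta_j^T x) built from the workers' cuts (an assumption on our part; ATR's actual model bookkeeping is more elaborate), the trust-region subproblem is itself a linear program. A sketch:

    import numpy as np
    from scipy.optimize import linprog

    def trust_region_step(c, A, b, cuts, x_inc, radius):
        # cuts is a list of (alpha_j, beta_j); variables are z = (x, theta).
        n = len(c)
        obj = np.concatenate([c, [1.0]])                     # minimize c^T x + theta
        A_ub = np.array([np.concatenate([beta, [-1.0]]) for alpha, beta in cuts])
        b_ub = np.array([-alpha for alpha, beta in cuts])    # theta >= alpha_j + beta_j^T x
        A_eq = np.hstack([A, np.zeros((A.shape[0], 1))])     # Ax = b; theta not in these rows
        bounds = [(max(0.0, xi - radius), xi + radius) for xi in x_inc] + [(None, None)]
        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds, method="highs")
        return res.x[:n]

The box constraints combine x >= 0 with the infinity-norm trust region around the incumbent x_inc.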
In general, the more scenarios N that can be included in the
formulation, the more realistic is the model. Although the problem
becomes larger and harder to solve as N increases, this effect is
less marked if we have a good initial guess of the solution. A good
strategy is therefore to start by solving an approximate problem
with a modest value of N (1000, say), and use the resulting
approximate solution as the starting point for another approximate
problem with a larger number of scenarios (5000, say). This
procedure can be repeated with progressively larger N.
To reduce the time to compute each new iterate x, the N scenarios
are partitioned into a fixed set of T tasks denoted by N_1, N_2, ..., N_T,
where each N_j denotes a set of scenario indices. For a given task j,
a worker computes the following partial sum of Q(x):

  Q_j(x) = sum_{i in N_j} p_i Q(x; w_i).

The worker also computes one partial subgradient (also known as
a "cut") for the task. This information is used by the master to
update the partial model function m_j(x) corresponding to task j.
The complete model function m(x) is then c^T x plus the sum of
the partial model functions m_j(x) over all the tasks j = 1, ..., T.
ATR enables additional parallelism by allowing more than one
candidate iterate x to be evaluated at the same time. In order to
generate an additional candidate iterate, the master processor
computes the new iterate before all of the scenarios for the current
candidate have been evaluated, as illustrated in Figure 2. ATR
thus creates and maintains a basket of B candidates (with B
between 1 and 15).
In the previous non-adaptive version of ATR, tasks are grouped
into G equal-size task groups, each containing T/G tasks. Each
task group is a unit of work that is sent to a worker. For example,
a possible configuration is one in which
each task group contains two tasks, each with 100 scenarios.
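A sketch of this decomposition, assuming for simplicity that T divides N and G divides T evenly:

    def partition_scenarios(N, T, G):
        # tasks[j] is the set of scenario indices for task j; groups[g] is the list
        # of tasks shipped to one worker, which returns T/G partial sums and cuts.
        tasks = [list(range(j * N // T, (j + 1) * N // T)) for j in range(T)]
        per_group = T // G
        groups = [tasks[g * per_group:(g + 1) * per_group] for g in range(G)]
        return groups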
We assume N to be fixed in our performance analysis. We can
affect the amount and distribution of work, by varying the
parameters B, T, and G. By increasing B, new iterates can be
solved on the workers while the master is processing the results
from other iterates, and a slow worker will only slow down the
evaluation of one of the parallel iterates. By increasing T, more
cuts are computed per value of x, making the model function
more expensive to compute (by the master) but also a better
approximation to the true objective Q(x), which generally reduces
the number of iterations needed to solve the problem. By
increasing G, we obtain more task groups, with fewer tasks (and
therefore smaller execution times) for each group.
Previous evaluations of ATR on locally and widely distributed
Condor-MW grids have shown that for 50-100 workers, the best
execution times were obtained with three to six concurrent
candidates, 100 tasks, and 25 to 50 task groups.
The goal in this research is to create a performance model to
select the basket size B, the number of tasks, T, and the number of
task groups G, statically at program initiation time or dynamically
during the execution of the job, so as to minimize total ATR
execution time for any given planning problem of interest.
2.2 Condor/PVM and MW
The Condor system [19] manages heterogeneous collections of
computers, including workstations, PC clusters, and
multiprocessor systems. Mechanisms such as glide-in or flocking
are used to include processors from separate sites in the
resource pool. When a user submits a job to the system, Condor
identifies suitable processors in the pool and assigns the
processors to the job. If a processor executing the job becomes
unavailable (for example, because it is reclaimed by its owner),
Condor migrates the job to another node in its pool, possibly
restarting from a checkpoint that it saved at an earlier time. The
size of the pool available to a particular user can change
unpredictably during a computation, although users can exercise
some control over the resources devoted to their application by
specifying the number, speed, and/or type of workers.
MW [18] is a framework that facilitates implementation of
master-worker applications on a variety of computational grid
platforms. In this study, we use the version of MW that is
implemented for Condor-PVM [32], in which the master runs on
the submitting host, and the worker tasks on other processors
drawn from the Condor pool. Condor-PVM primitives are used to
buffer and pass messages between master and workers.
2.3 Related Work
Parallel stochastic optimization algorithms have been investigated
during the past 15 years. In [4], an algorithm related to ATR (but
without the trust region and asynchrony features) is applied to
multistage problems, and implemented on a cluster of modest size.
An earlier paper [5] describes an interior-point approach in which
the linear algebra computations are implemented in parallel, but
this approach does not scale well to a large number of processors.
A small PC cluster is used in [11], where the approach is to use
interior-point methods for the second-stage problems inexactly
and an analytic center method for the master problem.
[Figure 1: Basic ATR Structure (B=1). The master updates m(x) and computes a new iterate x; workers solve the second-stage linear programs.]
[Figure 2: Asynchronous ATR Structure (B=2). The master updates m(x) and computes new iterates while workers solve second-stage linear programs for multiple candidate iterates in parallel.]
Table 1: ATR Configuration Parameters
Symbol Definition
N Number of scenarios evaluated per iterate
T Number of tasks per iteration
G Number of task groups into which scenarios are partitioned
x Vector of candidate planning decisions; "iterate"
B Number of iterates that are evaluated in parallel
Three approaches to adaptive software control have been explored
in previous work, namely (1) provide an interface for the user to
control application steering parameters (e.g., [2]), (2) use
measured or user-provided estimates of application execution time
as a function of system configuration, to heuristically adapt
system resource management policies or application configuration
parameters, automatically at runtime [17][6][14][16][20][27], or
(3) use very simple models that compute execution time for
alternative configurations [3][8][21][23][24][25] from user-provided
or measured deterministic processing and
communication requirements per task, possibly adjusted for
runtime-measured node processing capacities and available
network bandwidth.
Regarding history-based heuristic adaptive control algorithms,
Ribler et al. [20] describe the Autopilot system that filters data
from instrumented client tasks (e.g., to characterize the dominant
file access pattern) and adapts the system resource management
policies (e.g., file prefetch policy). The related GrADS Project
work [27] measures the "application signature" (i.e., processing,
I/O, and communication cycles as a function of time), and uses
fuzzy logic to determine whether the rates for each task are within
ranges defined by user-specified "performance contracts".
Algorithms for changing the configuration when the contract is
violated are not addressed in that work. Heymann et al. [14]
measure Condor-MW task execution times, and worker node
efficiencies, in each iteration. Results from synthetic MW
applications are used to estimate the number of worker nodes to
allocate to achieve 80% efficiency with no more than a 10%
increase in iteration execution time, based on the relative
processing times of the largest 20% of the tasks. This algorithm
assumes each task performs (approximately) the same work in
successive iterations, homogeneous processing nodes, and that the
application completes each iteration before starting the next
iteration. Lan et al. use computation times measured in previous
iterations to decide whether to redistribute work in the next
iteration of an astrophysics code [17]. Chang and Karamcheti [6]
propose an application structure containing a "tunability
interface" and an expression of user preferences. Previous
application execution measurements, runtime resource and
application monitoring, and user preferences are used to
automatically select certain parameters at runtime, for example to
select the image resolution for the available processing capacity
and a user-specified image transmission time. In the Harmony
system, Keleher et al. [16] propose an approach in which the user
specifies the processing and communication time, or provides a
model to predict these values at runtime, for each possible
multidimensional configuration of the application. The system
then dynamically allocates resources to achieve a particular
system objective, such as maximizing throughput.
Previous models for adaptive runtime control compute node
processing time and communication time per iteration, using
known and/or measured quantities per data point, or image pixel,
times the number of data points or pixels assigned to the node.
For example, Ripeanu et al. [21] compute the execution time for a
load-balanced finite difference application as a function of (a) the
data assigned to each node, (b) redundant work computed by
other nodes to reduce communication between nodes, and (c) the
communication time. These calculations are used at runtime to
select the amount of redundant work per node for the measured
Grid communication costs. The AppLeS project has used similar
calculations to determine how many and which available compute
or data server nodes should be assigned to a simple adaptive
iterative Jacobi application [3], a gene sequence comparison code
[24], a magnetohydrodynamics application [22], an adaptive
parallel tomography image reconstruction application [23], and
adaptive data server selection in the SARA application [25]. For
each of these applications, a linear optimization model is
formulated to compute the work assigned to each node per
candidate system configuration and measured node processing
and communication capacities, to achieve an objective such as
minimizing total execution time or maximizing image quality for a
user-specified target refresh frequency.
The approach in this paper is most similar to the model-based
adaptive runtime control in [23]. However, we are targeting the
much more complex ATR application that does not have a known
model for estimating execution time as a function of system
configuration (i.e., parameters B, T, G, and set of workers).
Furthermore, we develop a model that more efficiently determines
how to allocate work to available compute nodes without solving
an optimization problem over all possible system configurations.
3. ATR EXECUTION TIME ANALYSIS
In this section we analyze ATR task execution times and
communication latencies to determine which aspects of the
computation and communication need to be modeled, and how to
model the total ATR execution time. The analysis is guided by the
task graphs in Figures 1 and 2, which illustrate the overall
structure of a parallel ATR execution, for basket size B equal to
one and two, respectively. (Table 1 summarizes the notation.)
The basic functions of the master process are to (a) update the
function m(x) each time a worker processor returns the results of
executing a group of tasks, and (b) compute a new iterate x when
all T tasks associated with a particular iterate x have been
completed.
In the existing non-adaptive version of ATR, the parameters B, G
and T are specified as inputs to the application and are fixed
throughout the run. G specifies the number of task groups, which
can be evaluated in parallel (by the workers). Each task group
contains T/G tasks each containing N/T scenarios. Since each
individual task generates a subgradient (or cut), each task group
returns T/G subgradients.
The goals of this work are to determine:
. whether the values of B, G, T, and the number of tasks per
group can be determined so as to minimize total ATR execution
time on a collection of workers,
. whether near-optimal adaptive values of the parameters can be
computed at runtime as the collection of workers changes, and
. how much performance improvement can be gained from the
adaptive version of the code.
The challenge for modeling and minimizing the overall ATR
execution time is that previous measurements of ATR executions
[18] have revealed that it is difficult to quantify (a) how master
execution times increase as T increases, (b) how the number of
iterates that need to be evaluated increases as B increases or T
decreases, and (c) the high degree of unpredictability in the
execution times of the master and workers, which is possibly due
to the runtime environment. These issues are addressed below;
Section 3.1 analyzes worker execution times, while Section 3.2
analyzes master execution times. Section 3.3 analyzes Condor-
PVM communication costs between a pair of local hosts, as well
as for a pair of widely separated hosts, for message sizes that are
transmitted between the master and a worker in the ATR
application. These measured computation and communication
times are used in Section 4 to develop a performance model and
optimized adaptive parameter values for the ATR code.
A standard approach in our local Condor pool is to submit a
Condor job from a shared host, which becomes the master
processor. Because the shared host typically has a relatively high
CPU load, master processing times might be highly variable
and/or large. To avoid this problem, unless otherwise noted, the
ATR measurements reported in this section are submitted from a
single-user workstation that is free of other user processes during
the run. We refer to this setup as a "light load" master processor.
The initial analysis of execution times is based on several
planning problems, several values of N (numbers of scenarios), a
range of values of T and G, and a single
worker node. After analyzing the impact of T and G with B = 1,
we investigate larger values of B. The use of a single worker
ensures that the measured master task execution times are not
inflated by unpredictable interrupts from workers that have
finished evaluating other tasks. Once the basic task execution
times are understood, interrupts can be modeled as needed.
3.1 Worker Execution Times
Table
2 summarizes the average, minimum, maximum, and
coefficient of variation (CV) in the execution time for the two
principal tasks carried out by the master, namely, updating m(x)
and computing a new iterate. The table also shows the average
and CV of the time for a worker to evaluate a task group, over all
of the iterations, for several different values of G and T. Similar
results were also obtained for other planning problems, other
values of the number of scenarios N, runs at different times of the
day, and for many different worker processors.
The measurements show that worker execution times in each
experiment have low variability (i.e., are highly predictable).
Furthermore, results in Figure 3 show that, in the common case
that N/G ≥ 25, the average worker execution time is
approximately linear in the number of scenarios evaluated, N/G,
as well as in the peak speed of the processor. The second and
third rows of Table 2 (as well as other results omitted to conserve
space) demonstrate that worker execution time is independent of
T.
The above results indicate that the execution time for a given ATR
task on a given worker can be predicted from a benchmark that
contains at least 25 scenarios, which can be run on the worker
when it is first assigned to the ATR application. We note that the
deterministic worker execution times are due to the fact that the
Condor job scheduler uses space-sharing, rather than time-
sharing, for grid nodes that are reasonably well utilized. As a
consequence, interference from other jobs need not be modeled in
predicting ATR execution times on Condor.
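This observation suggests a simple calibration step for each newly assigned worker; the following sketch (names and interface are ours) scales a measured benchmark time linearly in the number of scenarios per task group:

    def calibrate_worker(benchmark_seconds, benchmark_scenarios):
        # Run a benchmark task group of at least roughly 25 scenarios on the new worker.
        return benchmark_seconds / benchmark_scenarios       # seconds per scenario

    def predict_group_time(seconds_per_scenario, N, G):
        return seconds_per_scenario * (N / G)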
3.2 Master Execution Times
Table
2 shows that the master processing times, both for updating
m(x) and for computing a new iterate, are highly variable.
[Figure 3: Worker Execution Time (SSN Planning Problem, N=10,000, B=1, T/G=1). Worker execution time (sec) vs. number of scenarios evaluated (N/G) for 780, 1100, and 1700 MIPS workers, with minimum and maximum.]
[Table 2: Example Measured Execution Times. For each configuration: master time to update the model function m(x) (msec: avg, min, max, CV); number of iterations; master time to compute a new iterate x (sec: avg, min, max, CV); worker execution time (sec: avg, CV).]
It might
appear that the variability in the time to update m(x) is not
important, since the average time required for each update is only
a few milliseconds. However, the master needs to perform
this update G times per iteration (each time a worker returns the
results from evaluating a task group). Since G may be 100 or
more, and since the largest update times are on the order of 1-2
seconds (as shown in Table 2), the cumulative update time can
have a significant impact on total ATR execution time. In Section
3.2.1 we (a) analyze the causes of the variability in this execution time,
(b) propose two changes in the runtime system to reduce it,
and (c) characterize this task execution time more
precisely. We note that experimentally observed variability in
worker execution times in previous work may actually have been
due to variability in the master processing time for updating m(x).
As noted in Section 2.1, as T increases, the average time to
compute a new iterate increases, while the number of iterations
decreases. This is because T cuts are added to the model function
m(x) at each iteration, so a larger T causes the model function to
become a closer approximation to the true objective function after
fewer iterations, while making each trust-region subproblem
harder to solve. The time to compute the new iterate varies
between under one second and three to four times the average
value. We investigate these variations in more detail in Section
3.2.2 with the goal of understanding how to model these
variations as a function of G and T.
3.2.1 Time to Update m(x)
Figure
4 provides example histograms of the master execution
times for updating the model function m(x) during three different
measurement runs. The histogram for the lightly loaded (700
MIPS) master with default debug I/O corresponds to the data in
Table
1. Even greater variability than
shown in this histogram is observed if the master is one of the
shared Condor hosts commonly used to submit Condor jobs.
Further investigation revealed that the value of the Condor "debug
I/O flag" commonly used for MW applications produces vast
quantities of debug output. Figure 4 provides a second histogram
("lightly loaded master with reduced debug I/O") for a run with a
reduced level of debug I/O (level 5) from Condor/MW, which still
produces a significant log of the MW execution events. The
reduced debug I/O improves the average time to update m(x), but
not the variability in the execution time.
To understand the high variability in the observed execution
times, we note that due to the worker execution times (see Table
2), there can be significant periods of time (on the order of
seconds) when the ATR master is waiting for results from the
workers. Since the master processor is part of the Condor pool,
we surmised that the high variability in the time to update the
model function shown in Figure 4 is at least partly due to Condor
administrative functions or to other Condor jobs that may run on
the master processor during these periods. The third histogram in
Figure
4 shows that using an isolated master node, which is not
available for running other Condor jobs or administrative
functions during the ATR run, greatly reduces the variability as
well as the average execution time to update m(x). Thus, we
proposed a new feature for Condor that allows a user-owned host
serving as the (lightly loaded) MW master to be "temporarily
unavailable" for running other Condor jobs.0.1101000
Worker Completion Event Count
Time
to
Update
(msec)
lightly loaded master, default debug level
lightly loaded master, reduced debug level
isolated master, reduced debug level
Figure
4: Example Histogram of Times to Update m(x)
(SSN Planning Problem, N=10,000, B=1, T=50, G=50)0.51.52.53.5
Iteration Number
Time
to
Compute
x
(sec) Lightly Loaded Master
Isolated Master
(a): Impact of Isolated Master
(SSN Planning Problem, N=10,000, B=1, T=50, G=50)515250 200 400 600 800
Iteration Number
Time
to
Compute
x
(sec)
Typical Profile (Isolated Master)
(20-term Planning Problem, N=5,000, B=1, T=200, G=50)
Figure
5: Example Histograms of Times to Compute a New Iterate, x
For the isolated master configurations, the average time to update
m(x) is 150-500 microseconds, depending on the value of T/G. In
this case, the overall ATR execution time is dominated by the
worker execution times and the time for the master to compute
new iterates (see Table 2).
3.2.2 Time to Compute a New x
Figure
5 shows a histogram of the time to compute the next
iterate, for a given planning problem and a given set of input
parameters. Figure 5(a) shows that an isolated master exhibits
lower mean and variability in execution time for computing the
new iterates than the same master in a lightly loaded (non-
isolated) configuration. The lower variability makes it easier to
optimize the application parameters that control parallelism.
Except as noted, all measurements of ATR below are obtained
with the isolated master and the reduced (but still substantial)
debug I/O level.
Figures
5(b) and 6(a) show histograms of the execution times for
computing new iterates x, for the two very different planning
problems 20term and SSN. In Figure 5(b), the starting point was
"blind" (that is, far from the solution), while in 6(a) it was chosen
as the solution of the corresponding planning problem for a
smaller number of scenarios N. In both figures, we see a wide
variability in the time required to compute the next iterate from
one iteration to the next, but the trend is similar. Specifically, the
time required tends to increase steadily, then drops off sharply
and becomes minimal for a (possibly long) sequence of iterates.
The times increase again for the last few iterations. Similar trends
were also observed in other planning problems, and many other
starting points and parameter settings.
An ad hoc adaptive algorithm might estimate the average time to
compute the next few iterates from the time to compute the
current iterate (with fairly high accuracy in most iterations). It
could use this estimate to adjust the parameters governing
parallelism in evaluation of the current iterate(s) (namely, B and
so as to achieve a desired trade-off between processor
efficiency and expected total time during the next iteration.
We develop an alternative approach in Section 4 that is based on
the following observations. Figures 6(a)-(c) illustrate the
dependence between the parameter T and the number of iterations
and time required to compute each new iterate, for the SSN
planning problem and a fixed starting point. Note that increasing
causes (a) a diminishing decrease in the number of iterations,
and (b) a diminishing increase in the time to compute each new
iterate. The overall effect, illustrated in Figure 6(d), is that the
total master processing time for computing new iterates grows10010000
Number of Tasks (T)
TotalMaster
Processing
Time
(sec)
(d) Cumulative Time to Compute
Iteration Number
Time
to
Compute
x
Number of Tasks (T)
Average
Time
to
Compute
x
(sec)
(c) Average Time to Compute
(a) Time to Compute Each New x20601000 100 200 300 400 500 600 700 800 900
Number of Tasks (T)
Number
of
Iterations
average
min
(b) Number of Iterations (n) vs. T
Figure
Impact of T on Time to Compute New x (SSN Planning Problem, N=10,000)
slowly with T. Other curves omitted for clarity show that the time
to compute the next iterate is largely independent of G. These
observations, which hold for the three planning problems and all
ATR parameter values that we analyzed, are the key insights
needed to optimize B, G, and T in Section 4.
3.2.3 Impact of Basket Size (B)
By setting 2, the workers can evaluate the scenarios
corresponding to one iterate x, while the master computes a new
value of x from the latest subgradient information. Since, for
practical planning problems, the master execution times tend to be
either under one second or at least as large as the worker
execution times, B should be set to at most two in order to
improve overall execution time. However, Figure 7 shows that
the total number of iterates n required for convergence of ATR
nearly doubles as B doubles. (Similar behavior was observed for
other planning problems and many values of N.) Thus, the total
execution time will not improve for B greater than one, a fact that
we have verified experimentally.
3.3 ATR Communication Times
Figures
8(a) and (b) provide the measured CondorPVM round trip
time for sending a message of a given size to another host and
receiving back a small message, a process that mimics the round-trip
communication between the master and a worker in ATR.
Figure
8(a) shows the results for hosts interconnected by a high
speed local network; Figure 8(b) provides the results for a host at
the University of Wisconsin sending to a host in Bologna, Italy.
For ATR planning problems, the size of the message from the
master to the worker typically ranges from 250 bytes to 12 KB.
Furthermore, sending a message to a given worker is overlapped
with the execution time of the worker that received the previous
message. Thus, there is approximately one round-trip time per
iteration in the critical path of an ATR computation. A
comparison of the round trip time, worker execution times, and
master execution times suggests that communication costs are
negligible for practical values of N, G, and T, even over wide area
networks.
4. NEAR-OPTIMAL ADAPTIVE ATR
The measurements reported in Section 3 have provided the data
needed to create a model of total ATR runtime. To summarize:
. Worker execution times are deterministic, approximately linear
in the number of scenarios per task group (N/G), and
independent of the number of tasks in the group (T/G).
. Communication costs between the master and workers are
negligible (over a high speed network), even when the master
and workers are widely distributed. We validate this again
below for an ATR run on a widely distributed Condor flock.
. The master processing time for updating the model function,
m(x), each time a worker returns its results, is 100-500 msec
and can be omitted in a (first-cut) model of total ATR run time.
. The time required for the master to solve the trust-region
subproblem to compute each new iterate, x, is a significant
component of total ATR execution time. Moreover, the total
master execution time to compute all iterates increases slowly
with T, while there is a significant decrease in the number of
iterates (n) that need to be considered as T increases, up to
about T=400 (or until N/T decreases to a few tens).
These observations motivate a surprisingly simple first-cut model
of the ATR execution time that can be used to optimize the
execution of ATR on various grid configurations. Because20601001400 1
Number
of
Iterations
maximum
average
minimum
Figure
7: Impact of B on Total Number of Iterations (n)
(SSN Planning Problem, N=10,000, T=200, G=50)135790 4 8 12 28
of Me ssage (Ki l obyte s)
Round
Trip
Time
(msec)
(a) Between Local Nodes0.280.841.40
28
Size of Message (Kilobytes)
Round
Trip
Time
(sec) Experiment 1
Experiment 2
(b) Between Wisconsin and Bologna, Italy
Figure
8: Roundtrip Message Time
interrupts, communication costs, and variability in task execution
times are not modeled, the model is even simpler than the LogGP
type of model that has been used for other large complex
applications [1, 26]. Section 4.1 presents the model and validates
that it is sufficiently detailed to estimate overall ATR execution
time, on widely distributed Condor flocks as well as on a local
Condor pool. Sections 4.2 and 4.3 demonstrate how the model,
together with improved task scheduling, can be used to minimize
ATR execution time on a varying set of homogeneous grid nodes
and a varying set of heterogeneous grid nodes, respectively.
4.1 Model Validation
For fixed values of N, G, and T, and a homogeneous set of G
worker nodes, a first-cut model of total ATR running time is
simply M_t + n*W_t, where M_t is the total master execution time for
computing new iterates, n is the number of iterations, and W_t is
the time needed by a worker to evaluate a group of tasks
containing N/G scenarios. Table 3 evaluates the accuracy of this
model, which ignores interrupts, communication overhead and
small master execution times to update m(x). The table compares
measured ATR execution times against execution time computed
from the model using the measured components (M_t and W_t), for
several different configurations of the planning problem SSN, as
well as two representative versions of the problems Storm and 20-
term. The experiments in the table were run on a local Condor
pool or one of two Condor "flocks" in which the master processor
is a local (isolated) master while the homogeneous workers are
compute nodes at the Albuquerque High Performance Computing
Center or at Argonne National Laboratory. In the flock
experiments, communication between the master and the workers
is via the Internet, so communication costs are more similar to
those graphed in Figure 8(b) than to those in Figure 8(a).
Table
3 shows that over a wide range of total application
execution times, from just a few minutes to over an hour, the
runtime estimates obtained from the simple model are within
about 10% of the measured execution times, even when the
workers are geographically distant from the master.
If we employ fewer workers K than the number of task groups G,
then the model of the non-adaptive ATR running time is modified
as follows: M_t + n*W_t*ceil(G/K). The last row of Table 3 (and
other similar experiments omitted to conserve space) validates this
simple extended model, showing that it captures the principal
components of total run time.
For fixed N, G, and T, and a set of heterogeneous workers, the
total execution time is estimated by M_t + n*max_i W_t(i), where W_t(i) is the time
needed by worker i to evaluate the task groups assigned to it. Table 4
validates this model for collections of heterogeneous nodes from
our local Condor pool. We obtained these results by requesting G
workers, without restricting the type of worker nodes assigned. In
this case, Condor allocated a wide variety of processors, ranging
in speed from 186 MHz to 1.7 GHz. For the non-adaptive version
of ATR, the total execution times estimated are generally as
accurate as for the homogeneous workers, unless the problem size
is fairly small (i.e., execution time is less than 10 minutes) and the
number of workers G is large. In these cases, the worker tasks are
Table 3: Simple Model Estimates of Total ATR Execution Time for Homogeneous Workers
Planning Problem | N | T | G | Number of Workers | Iterations (n) | Total Master Compute M_t (sec) | Worker Benchmark Avg. W_t (sec) | Model Estimate (min) | Measured (min) | Note
20-terms | 5,000 | 200 | 50 | G | 597 | 2762.94 | 2.35 | 69.47 | 70.54 | WI pool
ssn | 40,000 | 100 | 50 | G | 84 | 297.36 | 30.97 | 48.83 | 52.21 | WI-NM Flock
ssn | 20,000 | 50 | 50 | G | 108 | 180.90 | 20.91 | 40.84 | 44.70 | WI-Argonne Flock
ssn | 20,000 | 100 | 50 | G | 84 | 244.00 | 20.89 | 33.51 | 36.38 | WI-Argonne Flock
ssn | 20,000 | 200 | 50 | G | 61 | 295.30 | 20.88 | 26.40 | 29.32 | WI-Argonne Flock
ssn | 5,000 | 200 | 50 | G | 131 | 1076.82 | 3.23 | 25.05 | 26.80 | WI pool
ssn | 20,000 | 400 | 50 | G | 44 | 441.80 | 20.96 | 22.98 | 24.98 | WI-Argonne Flock
storm | 10,000 | 200 | 50 | G | 11 | 2.53 | 82.44 | 16.53 | 18.48 | WI pool
ssn | 10,000 | 50 | 50 | G | 66 | 56.70 | 6.44 | 8.14 | 9.23 | WI pool
ssn | 10,000 | 100 | 50 | G | 50 | 71.79 | 6.48 | 6.70 | 8.23 | WI pool
ssn | 10,000 | 200 | 50 | G | 38 | 79.46 | 6.44 | 5.51 | 6.62 | WI pool
ssn | 10,000 | 400 | 50 | G | 26 | 70.45 | 6.43 | 4.07 | 4.92 | WI pool
ssn | 10,000 | 100 | 100 | G | 44 | 70.32 | 3.31 | 3.65 | 4.71 | WI pool
ssn | 10,000 | 100 | 500 | 2G/3 | 44 | 64.81 | 6.32 | 10.34 | 12.10 | WI pool
small, and communication between the master and workers, which
is ignored in the model, has a secondary but non-negligible
impact on total running time. Since practical problems of interest
involve large planning problems, the model that ignores
communication costs is used below to minimize total ATR
execution time.
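To make the model concrete, the following sketch (illustrative Python, not part of the ATR or MW code; all function and variable names are ours) evaluates the three model variants discussed in this section from measured components. The fewer-workers and heterogeneous forms are written as reconstructed above.

import math

def atr_time_homogeneous(M_t, n, W_t):
    # First-cut model: total master compute time plus n iterations of one
    # worker evaluation (all G task groups run in parallel on G workers).
    return M_t + n * W_t

def atr_time_fewer_workers(M_t, n, W_t, G, K):
    # Extended model when only K < G workers are available: each worker
    # evaluates ceil(G/K) task groups per iteration.
    return M_t + n * W_t * math.ceil(G / K)

def atr_time_heterogeneous(M_t, n, worker_times):
    # Heterogeneous workers: each iteration waits for the slowest worker,
    # where worker_times[i] is worker i's time for its assigned groups.
    return M_t + n * max(worker_times)

# Example: the first row of Table 3 (20-terms, N=5,000, T=200, G=50).
estimate_sec = atr_time_homogeneous(M_t=2762.94, n=597, W_t=2.35)
print(estimate_sec / 60.0)   # about 69.4 minutes, close to the measured 70.54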
4.2 Adaptive Code for Homogeneous Workers
Based on the above results, we can specify an optimal adaptive
configuration for the ATR algorithm, for a given number of scenarios
N and a number of available homogeneous workers, assuming the
objective is to minimize the total ATR execution time. The model
could be applied in a similar way to achieve some other objective,
such as a balance between minimizing execution time and
maximizing processor efficiencies, which is a subject left for
future work.
To (nearly) minimize total execution time on homogeneous
workers, B should be set to 1, the number of task groups G should
be set equal to the number of workers, and tasks should be
distributed equally among the workers. T could be specified by
the user, or can be set by the adaptive code to 200 or 400,
motivated by the fact that, for most planning problems and T in
this range, total master execution time does not increase greatly,
while the number of iterations decreases significantly. Table 3, as
well as many other experiments omitted to conserve space, shows
that for various planning problems, number of scenarios, and
number of workers, with B=1 and G equal to the number of
workers, the running time decreases as we increase T in the range
of 25-400. Due to diminishing returns in reducing the number of
iterations, values of T larger than a few hundred, for the
representative planning problems studied in this paper, do not
improve total ATR running time. We note that if ATR is
modified to allow G to exceed T (a change that is simple in
principle), then ATR could still make productive use of more than
400 workers while still using a near-optimal value of 200 - 400
for T.
The optimal configuration for a fixed number of homogeneous
workers is easily adapted to the case where the number of workers
changes during the execution of the program. In this case, each
worker currently available is given approximately the same
number of tasks to evaluate, so that the time that the master needs
to wait for results from the workers is minimized. In the current
version of ATR, where each task is evaluated by a single worker,
it is valuable for T to be large (e.g., 400) because the work can be
distributed more evenly across the workers as the number of
workers varies.
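A minimal sketch of this recommended configuration, under the assumption that tasks are simply dealt out round-robin to the currently available workers (illustrative code, not the ATR implementation):

def configure_homogeneous(num_workers, T=400):
    # Recommended settings from the analysis above: B = 1, one task group
    # per worker, and T a few hundred so work can be rebalanced as the
    # worker pool changes.
    B = 1
    G = num_workers
    return B, G, T

def split_tasks_evenly(T, workers):
    # Give each currently available worker approximately T/len(workers) tasks.
    assignment = {w: [] for w in workers}
    for task_id in range(T):
        assignment[workers[task_id % len(workers)]].append(task_id)
    return assignment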
4.3 Heterogeneous Grids
Table
4 shows that, unless the user requests a homogeneous
worker pool, the processors allocated to ATR by Condor may
have very diverse speeds, typically differing by a factor of seven
to ten. This is also true in other grid environments. Equal
partitioning of the work between processors is not a particularly
good strategy in this case, as the master processor will need to
wait for the slowest worker to complete, leaving faster workers
idle.
A better approach would be to give each worker a fraction of the
total scenarios N proportional to its speed. For instance, if we are
evaluating N scenarios, a worker with peak processing rate M_i
MIPS would receive (N/M)*M_i scenarios to evaluate, where M
is the total peak rate summed over all workers. However, the
algorithm for computing the next iterate on the master requires T
(i.e., the number of subgradients computed per iteration) to be
fixed throughout the computation, and the work assigned to each
worker must be an integral number of tasks of size N/T.
Table 4: Predicted and Measured Total ATR Execution Time on Heterogeneous Workers
Columns: Worker Time t_W (sec) [avg, min, max]; Non-Adaptive Execution Time [Measured, Model, Number of Workers]; Adaptive Execution Time [Measured, Model, Number of Workers Used]; Estimated (%); Problem Size (N)
7.04 4.21 28.62 34.65 34.23 50 10.66 48 69% 10,000
7.03 4.19 28.62 35.07 34.22 50 11.22 48 68% 10,000
6.62 4.18 13.82 14.35 13.96 50 8.81 48 39% 10,000
4.44 2.14 21.52 17.73 16.37 100 3.97 72 78% 10,000
4.36 2.14 15.33 14.60 13.19 100 4.49 68 69% 10,000
2.86 1.36 13.88 13.75 10.73 150 3.95 75 71% 10,000
2.76 1.38 9.42 9.07 6.85 150 3.61 75 60% 10,000
2.86 1.37 9.78 9.63 6.65 150 3.30 75 66% 10,000
2.11 1.68 10.21 9.67 7.15 200 2.82 100 71% 10,000
7.76 2.58 19.46 22.32 50 11.43 9.86 26 49% 10,000
2.25 1.20 9.59 8.67 100 4.78 3.41 86 45% 10,000
2.84 0.83 9.60 9.52 100 6.78 3.24 67 29% 10,000
Thus, the near-optimal algorithm to minimize total execution time
for a given collection of heterogeneous worker nodes is as
follows. T (the number of tasks) is again chosen to be moderately
large (e.g., 400-800), so as to create smaller tasks for balancing
the load across the heterogeneous workers. Each task contains
N/T scenarios. Using the benchmark results for each worker,
tasks are allocated one at a time to workers, such that each task
will have the earliest expected completion time given the task
assignments made so far. In this way, tasks are assigned to a
worker in proportion to its execution time for the benchmark,
such that the number of assigned tasks multiplied by the
benchmark time will be approximately the same for each worker
that is assigned at least one task. Some of the workers with high
benchmark times might not be assigned any tasks, while workers
with low benchmark times may be assigned multiple tasks.
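The greedy assignment just described can be sketched as follows (illustrative Python, not the MW or ATR implementation); each task goes to whichever worker would finish its assignment soonest according to its benchmark time per task:

import heapq

def assign_tasks(num_tasks, benchmark_time_per_task):
    # benchmark_time_per_task maps worker id -> time to evaluate one task of
    # N/T scenarios.  Returns worker id -> list of assigned task indices.
    heap = [(t, w) for w, t in benchmark_time_per_task.items()]
    heapq.heapify(heap)
    assignment = {w: [] for w in benchmark_time_per_task}
    for task in range(num_tasks):
        finish, w = heapq.heappop(heap)   # worker with earliest completion time
        assignment[w].append(task)
        heapq.heappush(heap, (finish + benchmark_time_per_task[w], w))
    return assignment

Fast workers end up with proportionally more tasks, and very slow workers may receive none, which matches the behavior described above.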
Since this task scheduling algorithm is not implemented in the
current MW runtime library, we implemented it inside the ATR
application to experiment with its effectiveness. We are also
collaborating with MW developers who are implementing this
feature within MW.
Since the tasks can be assigned to each worker as each new iterate
is created, the schedule is adaptive in nature. It also has the
advantage of simplicity. Although the number and computational
speeds of the workers may change dynamically during the run, the
adaptive code yields minimum execution time without taking a
complex global view of the runtime environment.
Table
4 shows the predicted and measured results of applying this
near-optimal approach. The first eleven rows show the predicted
execution time of the adaptive code, for workers that were
allocated to the non-adaptive version of ATR. For the highly
heterogeneous allocations, the ATR runtime is reduced by a factor
of greater than three when the scheduling strategy that adapts to
the worker speeds is applied. The lower part of the Table shows
measured and predicted execution times from the experimental
implementation of the adaptive scheme, along with the predicted
total execution time if these runs had been performed with the
non-adaptive code. For these somewhat less heterogeneous
processor pools, factor-of-two speedups are estimated for the
adaptive code. More significant speedups can be anticipated
when the number of allocated workers changes greatly during the
ATR execution. Table 5 also shows that, compared with ATR
execution times for parameter settings recommended in the
previous "rules of thumb", the new adaptive ATR has speedups
that are a factor of four to eight on homogeneous workers, or a
factor of three to four on heterogeneous workers.
5. CONCLUSION
We have performed a detailed analysis of the execution of the
ATR stochastic optimization code running in a Condor grid
environment. Initial measurements of the application, in this
work as well as in previous work, showed highly variable
execution times for key components of the algorithm, particularly
on the master processor. In previous work, this issue was
addressed by creating more parallel tasks than the number of
workers, so that workers could productively evaluate scenarios
during the long and unpredictable master computations.
However, a more detailed analysis revealed simple mechanisms
for reducing the variability of the task execution times, as well as
a more complete understanding of the complex impact of the
configuration parameters on total ATR execution time. Using the
analysis, we developed and applied surprisingly simple
performance models to determine configurations of ATR that
minimize total execution time on either static or dynamic
collections of homogeneous or heterogeneous workers.
Experiments in a local Condor pool, as well as with widely
distributed Condor flocks, indicate that total execution time is
reduced, using the simple model-based adaptive execution, by
factors of four to eight compared with the non-adaptive execution
and using previously recommended configuration parameters. In
addition, the new adaptive ATR uses a task-scheduling algorithm
that can improve the performance of other parallel grid
applications. This algorithm is currently being implemented in
the Condor-MW library. The temporarily isolated master is also a
proposed improvement in the runtime environment that could
greatly benefit other master worker grid applications.
Ongoing research includes (1) applying the ATR model to more
complex objectives, such as those that take into account the
utilization of allocated processors as well as the ATR execution
time, (2) developing models to control the adaptive execution of
other complex codes, using the same approach, which emphasizes
simplicity as well as accuracy, as we've used for ATR, and (3)
improvements in ATR such as new heuristics for updating the
model function m(x) and assigning partial tasks to workers to
achieve better load balancing and/or higher degrees of parallelism
in evaluating a single iterate. Although the development time for
a simple high fidelity analytic model is substantial, (a) it is still a
very small fraction of the time to design and develop a complex
code such as ATR that will potentially be used to solve many
important problems, and (b) the payoffs from the model in
optimizing the adaptive execution can be significant. We also
surmise that the LogP class of models [1][7] is a reasonable
starting point for developing other model-based adaptive codes,
since previous models of simple adaptive applications (reviewed
Section 2.3) as well as the simple model developed in this paper
for ATR, can be viewed as LogP models, and since a LogGP
model of a complex non-adaptive particle transport code [26] is
also highly accurate.
ACKNOWLEDGEMENTS
The authors thank Jeff Linderoth for many helpful discussions
concerning the ATR solution algorithms and for help in running
the experiments on widely distributed Condor flocks, and Jichuan
Chang for significant improvements to Condor-MW that enabled
this work.
Table 5: Execution Time Comparisons with Previous ATR (SSN, N=40,000, 50 Workers)
Original ATR Recommended
Values of B, G, and T
Execution Time
Reduced Debug Default Debug
Worker Pool
New Adaptive
ATR
Execution Time
Homogeneous 61 min 92 min 68 min 149 min
--R
"LogGP: Incorporating Long Messages into the LogP Model"
"The Cactus Code: A Problem Solving Environment for the Grid"
"Application -Level Scheduling on Distributed Heterogeneous Networks"
"A parallel implementation of the nested decomposition algorithm for multistage stochastic linear programs,"
"Computing block-angular Karmarkar projections with applications to stochastic programming,"
"A Framework for Automatic Adaptation of Tunable Distributed Applications"
"LogP: Towards a Realistic Model of Parallel Computation"
"Application-Aware Scheduling of a Magnetohydrodynamics Application in the Legion Metasystem"
"A worldwide flock of Condors: Load sharing among workstation clusters,"
"The Globus Project: A Status Report,"
"Building and Solving Large-Scale Stochastic Programs on an Affordable Distributed Computing System,"
"An enabling framework for master-worker applications on the computational grid,"
"Legion: The next logical step toward the world-wide virtual computer,"
"Adaptive Scheduling for Master-Worker Applications on the Computational Grid"
"Predictive Application-Performance Modeling in a Computational Grid Environment"
"Exposing Application Alternatives"
"Dynamic Load Balancing of SAMR Applications on Distributed Systems"
"Decomposition Algorithms for Stochastic Programming on a Computational Grid,"
"Mechanisms for High-Throughput Computing,"
"The Autopilot Performance-Directed Adaptive Control System"
"Cactus Application: Performance Predictions in a Grid Environment"
Warren M.
"Applying Scheduling and Tuning to On-line Parallel Tomography"
"Application Level Scheduling of Gene Sequence Comparison on Metacomputers"
"Using AppLeS to Schedule Simple SARA on the Computational Grid"
"Predictive Analysis of a Wavefront Application Using LogGP"
"Performance Contracts: Predicting and Monitoring Grid Application Behavior"
--TR
Computing block-angular Karmarkar projections with applications to stochastic programming
LogP: towards a realistic model of parallel computation
LogGP
A worldwide flock of Condors
A parallel implementation of the nested decomposition algorithm for multistage stochastic linear programs
Application level scheduling of gene sequence comparison on metacomputers
Predictive analysis of a wavefront application using LogGP
Application-level scheduling on distributed heterogeneous networks
The <i>Autopilot</i> performance-directed adaptive control system
Applying scheduling and tuning to on-line parallel tomography
Dynamic load balancing of SAMR applications on distributed systems
A Framework for Automatic Adaptation of Tunable Distributed Applications
Adaptive Scheduling for Master-Worker Applications on the Computational Grid
Performance Contracts
Cactus Application
The Globus Project
Application-Aware Scheduling of a Magnetohydrodynamics Application in the Legion Metasystem
Predictive Application-Performance Modeling in a Computational Grid Environment
An Enabling Framework for Master-Worker Applications on the Computational Grid
The Cactus Code
Exposing Application Alternatives
--CTR
Jeff Linderoth , Steve Wright, COAP Computational Optimization and Applications, v.29 n.2, p.123-126, November 2004
E. Heymann , M. A. Senar , E. Luque , M. Livny, Efficient resource management applied to master-worker applications, Journal of Parallel and Distributed Computing, v.64 n.6, p.767-773, June 2004
Jeff Linderoth , Stephen Wright, Decomposition Algorithms for Stochastic Programming on a Computational Grid, Computational Optimization and Applications, v.24 n.2-3, p.207-250, February-March | parallel algorithms;grid computing;parallel application performance;adaptive computations;stochastic optimization |
52795 | Absolute Bounds on Set Intersection and Union Sizes from Distribution Information. | A catalog of quick closed-form bounds on set intersection and union sizes is presented; they can be expressed as rules, and managed by a rule-based system architecture. These methods use a variety of statistics precomputed on the data, and exploit homomorphisms (onto mappings) of the data items onto distributions that can be more easily analyzed. The methods can be used anytime, but tend to work best when there are strong or complex correlations in the data. This circumstance is poorly handled by the standard independence-assumption and distributional-assumption estimates. | r
methods for data in a database, especially when joins are involved. Such estimation is necessary for
estimates of paging or blocks required. But often absolute bounds on such sizes can serve the purpose
of estimates, for several reasons:
1. Absolute bounds are more often possible to compute than estimates. Estimates generally
require distributional assumptions about the data, assumptions that are sometimes difficult and
awkward to verify, particularly for data subsets not much studied. Bounds require no assumptions.
2. Bounds are often easier to compute than estimates, because the mathematics, as we shall see,
can be based on simple principles - rarely are integrals (possibly requiring numerical approximation)
needed as with distributions. This has long been recognized in computer science, as in the
analysis of algorithms, where worst-case (or bounds) analysis tends to be much easier than
average-case.
3. Even when bounds tend to be weak, several different bounding methods may be tried and the
best bound used. This paper gives some quite different methods that can be used on the same
problems.
4. Bounds fill a gap in the applicability of set-size determination techniques. Good methods exist
when one can assume independence of the attributes of a database, and some statistical techniques
exist when one can assume strong but simple correlations between attributes. But until now there
have been few techniques for situations with many and complicated correlations between attributes,
situations bounds can address. Such circumstances occur more with human-generated data
than natural data, so with increasing computerization of routine bureaucratic activity we may see
more of them.
5. Since choices among database access methods are absolute (yes-or-no), good bounds on the
sizes of intersections can sometimes be just as helpful for making decisions as "reasonable-guess"
estimates, when the bounds do not substantially overlap between alternatives.
6. Bounds in certain cases permit absolutely certain elimination (pruning) of possibilities, as in
branch-and-bound algorithms and in compilation of database access paths. Bounds also help random
sampling obtain a sample of fixed size from an unindexed set whose size is not known, since an
error retrieving too few items is much worse than an error retrieving too many.
7. Bounds also provide an idea of the variance possible in an estimate, often more easily than a
standard deviation. This is useful for evaluating retrieval methods, since a method with the same
estimated cost as another, but tighter bounds, is usually preferable.
8. Sizes of set intersections are also valuable in their own right, particularly with "statistical databases"
[16], databases designed primarily to support statistical analysis. If the users are doing
"exploratory data analysis" [18], the early stages of statistical study of a data set, quick estimates
are important and bounds may be sufficient. This was the basis of an entire statistical estimation
system using such "antisampling" methods [14].
9. Bounds (and especially bounds on counts) are essential for analysis of security of statistical
databases from indirect inferences [5].
As with estimates, precomputed information is necessary for bounds on set sizes. The more space
allocated to precomputed information, the better the bounds can be. Unlike most work with
estimates, however, we will exploit prior information besides set sizes, including extrema,
frequency statistics, and fits to other distributions. We will emphasize upper bounds on
intersection sizes, but we will also give some lower bounds, and also some bounds on set unions
and complements.
Since set intersections must be defined within a "universe" U, and we are primarily interested in
database applications, we will take U to be a relation of a relational database. Note that imposing
selections or restrictions on a relation is equivalent to intersecting sets of tuples defining those
selections. Thus, our results equivalently bound the sizes of multiple relational-database selections
on the same relation.
Section 2 of this paper reviews previous research, and Section 3 summarizes our method of
obtaining bounds. Section 4 examines in detail the various frequency-distribution bounds,
covering upper bounds on intersections (section 4.1), lower bounds on intersections (section 4.2),
bounds on unions (section 4.4), bounds on arbitrary Boolean expressions for sets (section 4.6), and
concludes (section 4.7) with a summary of storage requirements for these methods. Section 5
evaluates these bounds both analytically and experimentally. Section 6 examines a different but
analogous class of bounds, range-analysis, first for univariate ranges (section 6.1), then
multivariate (section 6.2).
2. Previous work
Analysis of the sizes of intersections is one of several critical issues in optimizing database query
performance; it is also important in optimizing execution of logic-programming languages like Prolog.
The emphasis in previous research on this subject has been almost entirely on developing estimates, not
bounds. Various independence and uniformity assumptions have been suggested (e.g., [4] and [11]).
These methods work well for data that has no or minor correlations between attributes and between sets
intersected, and where bounds are not needed.
Christodoulakis [2] (work extending [9]) has estimated sizes of intersections and unions where
correlations are well modeled probabilistically. He uses a multivariate probability distribution to
represent the space of possible combinations of the attributes, each dimension corresponding to a set
being intersected and the attribute defining it. The size of the intersection is then the number of points
in a hyperrectangular region of the distribution. This approach works well for data that has a few
simple but possibly strong correlations between attributes or between sets intersected, and where bounds
are not needed. Its main disadvantages are (1) it requires extensive study of the data beforehand to
estimate parameters of the multivariable distributions (and the distributions can change with time and
later become invalid), (2) it only exploits count statistics (what we call level 1 and level 5 information
in section 4), and (3) it only works for databases without too many correlations between entities.
Similar work is that of [7]. They model the data by coefficients equivalent to moments. They do not
use multivariate distributions explicitly, but use the independence assumption whenever they can.
Otherwise they partition the database along various attribute ranges (into what they call "betas", what
[5] calls "1-sets", and what [12] calls "first-order sets") and model the univariate distributions on every
attribute. This approach does allow modeling of arbitrary correlations in the data, both positive and
negative, but requires potentially enormous space in its reduction of everything to univariate
distributions. It can also be very wasteful of space, since it is hard to give different correlation
phenomena different granularities of description. Again, the method exploits only count statistics and
only gives estimates, not bounds.
Some relevant work involving bounds on set sizes is that of [8], which springs from a quite different
motivation than ours (handling of incomplete information in a database system), and again only uses
count statistics. [10] investigates bounds on the sizes of partitions of a single numeric attribute using
prior distribution information, but does not consider the much more important case of multiple
attributes.
There has also been relevant work over the years on probabilistic inequalities [1]. We can divide
counts by the size of the database to turn them into probabilities on a finite universe, and apply some of
these mathematical results. However, the first and second objections of section 1 apply to this work: it
usually makes detailed distributional assumptions, and is mathematically complex. For practical
database situations we need something more general-purpose and simpler.
3. The general method
We present two main approaches to calculation of absolute bounds on intersection and union sizes in
this paper.
Suppose we have a census database on which we have tabulated statistics of state, age, and income.
Suppose we wish an upper bound on the number of residents of Iowa that are between the ages of 30
and 34 inclusive, when all we know are statistics on Iowa residents and statistics on people age 30-34
separately. One upper bound would be the frequency of the mode (most common) state for people age
30-34. Another would be five times the frequency of the most common age for people living in Iowa
(since there are five ages in the range 30-34). These are examples of frequency-distribution bounds
(discussed in section 4), to which we devote primary attention in this paper.
Suppose we also have income information in our database, and suppose the question is to find the
number of Iowans who earned over 100,000 dollars last year. Even though the question has nothing to
do with ages, we may be able to use age data to answer this question. We obtain the maximum and
minimum statistics on the age attribute of the set of Americans who earned over 100,000 dollars
(combining several subranges of earnings to get this if necessary), and then find out the number of
Americans that lie in that age range, and that is an upper bound. We can also use the methods of the
preceding paragraph to find the number of Iowans lying in that age range. This is an example of
range-restriction bounds (discussed in section 6).
Our basic method for both kinds of bounds is quite simple. Before querying any set sizes, preprocess
the data:
(1) Group the data items into categories. The categories may be arbitrary.
(2) Count the number of items in each category, and store statistics characterizing (in some way)
these counts.
Now when bounds on a set intersection or union are needed:
(3) Look up the statistics relevant to all the sets mentioned in the query, to bound certain subset
counts.
(4) Find the minima (for intersections) or maxima (for unions) of the corresponding counts for each
subset.
(5) Sum up the minima (or maxima) to get an overall bound on the intersection (or union) size.
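As an illustration of steps (1)-(5), the following hypothetical Python sketch precomputes per-category counts on one attribute for each set and then combines them into bounds; the names and data layout are ours, not the paper's. With several attributes tabulated, the best bound over attributes would be taken, as the later formulas do.

from collections import Counter

def precompute_counts(records, attribute):
    # Steps (1)-(2): group items by their value (category) on one attribute
    # and store the count for each category.
    return Counter(r[attribute] for r in records)

def intersection_upper_bound(count_tables):
    # Steps (3)-(5) for an intersection: for each category take the minimum
    # count over the sets, then sum the minima.
    categories = set().union(*count_tables)
    return sum(min(t.get(c, 0) for t in count_tables) for c in categories)

def union_lower_bound(count_tables):
    # The analogous combination for a union (sum of per-category maxima)
    # gives a lower bound, as in section 4.4.
    categories = set().union(*count_tables)
    return sum(max(t.get(c, 0) for t in count_tables) for c in categories)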
All our rules for bounds on sizes of set intersections will be expressed as a hierarchy of different
"levels" of statistics knowledge about the data. Lower levels mean less prior knowledge, but
generally poorer bounding performance.
The word "value" may be interpreted as any equivalence class of data attribute values. This
means that prior counts on different equivalence classes may be used to get different bounds on
the same intersection size, and the best one taken, though we do not include this explicitly in our
formulae.
4. Frequency-distribution bounds
We now examine bounds derived from knowledge (partial or complete) of frequency distributions of
attributes.
4.1. Upper frequency-distribution bounds
4.1.1. Level 1: set sizes of intersected sets only
If we know the sizes of the sets being intersected, an upper bound ("sup") on the size of the intersection
is obviously
sup = min( n(1), n(2), ..., n(s) )
where n(i) is the size of the ith set and s is the number of sets.
4.1.2. Level 2a: mode frequencies and numbers of distinct values
Suppose we know the mode (most common) frequency m(i,j) and number of distinct values d(i,j) for
some attribute j for each set i of s total. Then an upper bound on the size of the intersection is
sup = ( min over i of m(i,j) ) * ( min over i of d(i,j) )
To prove this: (1) an upper bound on the mode frequency of the intersection is the minimum of the
mode frequencies; (2) an upper bound on the number of distinct values of the intersection is the
minimum of the number for each set; (3) an upper bound on the size of a set is the product of its mode
frequency and number of distinct values; and (4) an upper bound on the product of two nonnegative
uncertain quantities is the product of their upper bounds.
If we know information about more than one attribute of the data, we can take the minimum of the
upper bound computations on each attribute. Letting r be the number of attributes we know these
statistics about, the revised bound is:
sup = min over j = 1..r of [ ( min over i of m(i,j) ) * ( min over i of d(i,j) ) ]
A special case occurs when one set being intersected has only one possible value on a given attribute -
that is, the number of distinct values is 1. This condition can arise when a set is defined as a partition
of the values on that attribute, but also can occur accidentally, particularly when the set concerned is
small. Hence the bound is the first of the inner minima, or the minimum of the mode frequencies on
that attribute. For example, an upper bound on the number of American tankers is the mode frequency
of tankers with respect to the nationality attribute.
The second special case is the other extreme, when one set being intersected has all different values for
some attribute, or a mode frequency of 1. This arises from what we call an "extensional key" ([12]),
something that functions like a key to a relation but only in a particular database
state. Hence the first bound is the minimum of the number of distinct values on that attribute. For
example, an upper bound on the number of American tankers in Naples, when we happen to know
Naples requires only one ship per nationality at a time, is the number of different nationalities for
tankers at Naples.
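A small illustrative function for the level 2a bound (hypothetical code; the statistics m(i,j) and d(i,j) are assumed to be precomputed and stored per set and attribute as described above):

def level_2a_upper_bound(mode_freq, distinct_vals):
    # mode_freq[i][j] and distinct_vals[i][j] hold m(i,j) and d(i,j) for each
    # set i and attribute j.  Returns the minimum over attributes of
    # (min mode frequency) * (min number of distinct values).
    num_attrs = len(mode_freq[0])
    best = None
    for j in range(num_attrs):
        m = min(row[j] for row in mode_freq)       # min over sets of m(i,j)
        d = min(row[j] for row in distinct_vals)   # min over sets of d(i,j)
        bound = m * d
        best = bound if best is None else min(best, bound)
    return best

Both special cases fall out automatically: d = 1 reduces the bound to the minimum mode frequency, and m = 1 reduces it to the minimum distinct-value count.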
4.1.3. Level 2b: a different bound with the same information
A different line of reasoning leads to a different bound utilizing mode frequency and number of distinct
values, an "additive" bound instead of the "multiplicative" one above. Consider the mode on some
attribute as partitioning a set into two pieces, those items having the mode value of the attribute, and
those not. Then a bound on the size of the intersection of r sets is
min
r
s
s
prove this, let R be the everything in set i except for its mode, and consider three cases. Case 1:
e
assume the set i that satisfies the first inner min above also satisfies the second inner min. Then the
xpression in brackets is just the size of this set. But if such a set has minimum mode frequency and
f
minimum-size R , it must be the smallest set. Therefore its size must be an upper bound on the size
the intersection.
ase 2: assume set i satisfies the first inner min, some other set j satisfies the second inner min, and sets
have the same mode (most common value). We need only consider these two sets, because an
pper bound on their intersection size is an upper bound on the intersection of any group of sets
c
ontaining them. Then the minimum of the two mode frequencies is an upper bound on the mode
r
frequency of the intersection, and the minima of the sizes of R and R is an upper bound on the R fo
the intersection. Thus the sum of two minima on s is a minimum on s.
ase 3: assume set i satisfies the first inner min, set j satisfies the second inner min, and i and j have
s
f
different modes. Let the mode frequency of i be a and that of j be d; suppose the mode of i ha
requency e in set j, and suppose the rest of j (besides the d+e) has total size f. Furthermore suppose
that the mode of j has frequency b in set i, and the rest of i (besides the a+b) has total count c. Then
he 2b bound above is a +e But in the actual intersection of the two sets, a would match with e, b
with d, and c with f, giving an upper bound of min(a ,e )+min(b ,d )+min(c , f ). But e -min(a ,e )
lastly a -min(b ,d ) because a -b . Hence our 2b bound is an upper bound on the
actual intersection size.
ut the above bound doesn't use the information about the number of distinct values. If the set i that
d
minimizes the last minima in the formula above contains more than the minimum of the number of
istinct values d(i,j) over all the sets, we must "subtract out" the excess, assuming conservatively that
the extra values occur only once in set i:
min
ss
t would seem that we could do better by subtracting out the minimum mode frequency the sets a
number of times corresponding to the minima of the number of distinct values over all the sets
owever, this reduces to the level 2a bound.
s
A
4.1.4. Level 2c: Diophantine inferences from sum
different kind of information about a distribution is sometimes useful when the attribute is numeric:
its sum and other moments on the attribute for the set. (Since the sum and standard deviation require
he same amount of storage as level 2a and 2b information, we call them another level 2 situation.)
a
This information is only useful when (a) we know the set of all possible values for the universal set
nd (b) there are few of these values relative to the size of the sets being intersected. Then we can
f
write a linear Diophantine (integer-solution) equation in unknowns representing the number
ccurrences of each particular numeric value in each of the sets being intersected, and each solution
s
represents a possible partition of counts on each value. An upper bound on the intersection size is thu
he sum over all values of the minimum over all sets of the maximum number of occurrences of a
s
particular value for a particular set. See [13] for a further discussion of Diophantine inferences abou
tatistics. A noteworthy feature of Diophantine equations is the unpredictability of their number ofsolutions.
.1.5. Level 3a: other piecemeal frequency distribution information
e
f
The level 2 approach will not work well for sets and attributes that have relatively large mod
requencies. We could get a better (i.e. lower) upper bound if we knew the frequencies of other values
than the mode. Letting m2(i,j) represent the frequency of the second most common value of the ith se
n the jth attribute, a bound is:
min
(min m2(i , j ))* ((min d (i , j ))-1)
ss
or this we can prove by contradiction that the frequency of the second most common value of the
s
intersection cannot occur more than the minimum of the frequencies of the second most common value
f those sets. Let M be the mode frequency of the intersection and let M2 be the frequency of the
s
econd most common value in the intersection. Assume M2 is more than the frequency of the second
most common value in some set i. Then M2 must correspond to the mode frequency of that set i. Bu
hen the mode frequency of the intersection must be less than or equal to the frequency of the second
F
most frequent value in set i, which is a contradiction.
or knowledge of the frequency of the median-frequency value (call it mf(i,j)), we can just divide the
e
outer minimum into two parts (assuming the median frequency for an odd number of frequencies is th
igher of the two frequencies it theoretically falls between):
min
r
s
s
s
s
ss
min
*min d (i , j )
/2The mean frequency is no use since this is always the set size divided by the number of distinct values
.1.6. Level 3b: a different bound using the same information
e
In the same way that level 2b complements level 2a, there is a 3b upper bound that complements th
receding 3a bound:
min
ss
Here we don't include the median frequency because an upper bound on this for an intersection is not
l
f
the minimum of the median frequencies of the sets intersected.) The formula can be improved stil
urther if we know the frequency of the least common value on set i, and it is greater than 1: justmultiply the maximum of (d(i,j)-d(k,j)) above by this least frequency for i before taking the minimum.
.1.7. Level 4a: full frequency distribution information
r
e
An obvious extension is to knowledge of the full frequency distribution (histogram) for an attribute fo
ach set, but not which value has which frequency. By similar reasoning to the last section the bound
is:
r
s
here freq(i,j,k) is the frequency of the kth most frequent value of the ith set on the jth attribute. This
s
follows from recursive application of the first formula for a level-2b bound. First we decompose the
ets into two subsets each, for the mode and non-mode items; then we demcompose the non-mode
d
subsets into two subsets each, for their mode and non-mode items; and so on until the frequency
istributions are exhausted.
We can still use this formula if all we know is an upper bound on the actual distribution - we just get a
weaker bound. Thus there are many gradations between level 3 and level 4a. This is useful because a
classical probability distribution (like a normal curve) that lies entirely above the actual frequency
distribution can be specified with just a few parameters and thus be stored in very little space.
As an example, suppose we have two sets characterized by two exponential distributions of numbers
between 0 and 2. Suppose we can upper-bound the first distribution by 100e^(-x) and the second by
100e^(x-2), so there are about 86 items in each set. Then the distribution of the set intersection is bounded
above by the minimum of those two distributions. So an upper bound on the size of the intersection is
the integral from 0 to 2 of min(100e^(-x), 100e^(x-2)) dx = 200(1/e - 1/e^2), or about 47.
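The level 4a computation on explicit sorted histograms can be sketched as follows (illustrative code; each histogram is assumed sorted in decreasing order of frequency, as in the formula of this section):

from itertools import zip_longest

def level_4a_upper_bound(histograms):
    # histograms[i] is set i's frequency distribution for one attribute,
    # sorted in decreasing order (frequencies only; the values they belong
    # to are unknown at this level).  The bound pairs the kth largest
    # frequencies across the sets and sums the minima.
    return sum(min(freqs) for freqs in zip_longest(*histograms, fillvalue=0))

# Example: two sets with sorted histograms (40, 38, 22, 20) and (30, 23, 21, 16)
# give 30 + 23 + 21 + 16 = 90, the level 4a upper bound used in section 4.2.2 below.
print(level_4a_upper_bound([[40, 38, 22, 20], [30, 23, 21, 16]]))  # 90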
4.1.8. Level 4b: Diophantine inferences about values
A different kind of Diophantine inference than that discussed in 4.1.4 can arise when the data
distribution is known for some numeric attribute. We may be able to use the sum statistic for se
alues on that attribute, plus other moments, to infer a list of the only possible values for each set being
intersected; then the possible values for the intersection set must occur in every possibility list. We ca
se this to upper-bound size of the intersection as the product of an upper bound on the mode frequency
of the intersection and the number of possible values of the intersection. To make this solutio
ractical we require that (a) the number of distinct values in each set being intersected is small with
l
respect to the size of the set, and (b) the least common divisor of the possible values be not too smal
say less than .001) of the size of the largest possible value. Then we can write a linear Diophantine
e
equation in unknowns which this time are the possible values, and solve for all possibilities. Again, se
13] for further details.
4.1.9. Level 5: tagged frequency distributions
Finally, the best kind of frequency-distribution information we could have about sets would specify
exactly which values in each distribution have which frequencies. This gives an upper bound of:
sup = min over j of [ sum for k = 1..d(U,j) of ( min over i of gfreq(i,j,k) ) ]
where gfreq(i,j,k) is the frequency of globally-numbered value k of attribute j for set i, which is zero
when value k does not occur in set i, and where d(U,j) is the number of distinct values for attribute j in
the data universe U.
All that is necessary to identify values is a unique code, not necessarily the actual value. Bit strings
can be used together with an (unsorted) frequency distribution of the values that do occur at least once
otice that level 5 information is analogous to level 1 information, as it represents sizes of particular
subsets formed by intersecting each original set with the set of all items in the relation having a
articular value for a particular attribute. This is what [12] calls "second-order sets" and [5] "2-sets".Thus we have come full circle, and there can be no "higher" levels than 5.
4.2. Lower bounds from frequency distributions
On occasion we can get nonzero lower bounds ("inf") on the size of a set intersection, when the size of
the data universe U is known, and the sets being intersected are almost its size.
4.2.1. Lower bounds: levels 1 and 5
A set intersection is the same as the complement (with respect to the universe) of the set union of the
complements. An upper bound on the union of some sets is the sum of their set sizes. Hence a lower
bound on the size of the intersection, when the universe U is size N, is
inf = max( 0, n(1) + n(2) + ... + n(s) - (s-1)*N )
which is the statistical form of the simplest case of the Bonferroni inequality. For most sets of interest
to a database user this will be zero since the sum is at most sN. But with only two sets being
intersected, or sets corresponding to weak restrictions (that is, sets including almost all the universe
except for a few unusual items, sets intersected with others to get the effect of removing those items), a
nonzero lower bound may more often occur.
For level 5 information the bound is:
inf = max over j of [ sum for k = 1..d(U,j) of max( 0, gfreq(1,j,k) + ... + gfreq(s,j,k) - (s-1)*gfreq(U,j,k) ) ]
where gfreq(i,j,k) is as before the number of occurrences of the kth most common value of the jth
attribute for the ith set, U is the universe set, and d(U,j) is the number of distinct values for attribute j
among the items of U.
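A short sketch of these two lower bounds (illustrative code; the level 5 form is simply the level 1 rule applied value by value, as reconstructed above, with per-set frequency tables keyed by value):

def level_1_lower_bound(set_sizes, N):
    # max(0, sum of set sizes - (s-1)*N), the Bonferroni-style lower bound.
    s = len(set_sizes)
    return max(0, sum(set_sizes) - (s - 1) * N)

def level_5_lower_bound(gfreq_tables, universe_freq):
    # Apply the same inequality value by value: among the items having a
    # given value, the intersection keeps at least the per-value surplus.
    s = len(gfreq_tables)
    total = 0
    for value, u_count in universe_freq.items():
        counts = [t.get(value, 0) for t in gfreq_tables]
        total += max(0, sum(counts) - (s - 1) * u_count)
    return total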
4.2.2. Lower bounds: levels 2, 3, and 4
It is more difficult to obtain nonzero lower bounds when statistical information is not tagged to specific
e
f
sets, as for what we have called levels 2, 3, and 4. If we know the mode values as well as the mod
requencies, and the modes are all identical, we can bound the frequency of the mode in the intersection
s
by the analogous formula to level using the mode frequency of the universe (if the mode i
dentical) for N. Without mode values, we can infer that modes are identical for some large sets,
whenever for each
c
where m(i,j) is the mode frequency of set i on attribute j, m2(i,j) the frequency of the second mos
ommon value, n(i) the size of set i, and N the size of the data universe.
The problem for level 4 lower bounds is that we do not know which frequencies have which values.
But if we have some computer time to spend, we can exhaustively consider combinatorial possibilities,
excluding those impossible given the frequency distribution of the universe, and take as the lower
bound the lowest level-5 bound. For instance, with an implementation of this method in Prolog, we
considered a universe with four data values for some attribute, where the frequency distribution of the
universe was (54, 53, 52, 51), and the frequency distributions of the two sets intersected were (40, 38,
22, 20) and (30, 23, 21, 16). The level 4a lower bound was 8, and occurred for several matchings.
The level 1 lower bound is 120 + 90 - 210 = 0, so the effort may be worth it. (The level 1 and 4 upper
bounds are both 90.) But the number of combinations that must be considered for k
distinct values in the universe is (k!)^s.
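The exhaustive search can be sketched in Python rather than the Prolog mentioned in the text (illustrative only; it is exponential, as noted): it tries every assignment of each set's frequency list to the universe's values, discards assignments that exceed the universe's per-value counts, and keeps the smallest resulting level 5 lower bound.

from itertools import permutations

def level_4_lower_bound(set_histograms, universe_histogram):
    # set_histograms: list of frequency lists (one per set); universe_histogram:
    # the universe's frequency list over the same k values.
    k = len(universe_histogram)
    s = len(set_histograms)
    best = None
    for assignment in permutations_product(set_histograms, k):
        # Exclude impossible assignments: a set cannot have more items with a
        # value than the universe has.
        if any(assignment[i][v] > universe_histogram[v]
               for i in range(s) for v in range(k)):
            continue
        bound = sum(max(0, sum(assignment[i][v] for i in range(s))
                           - (s - 1) * universe_histogram[v])
                    for v in range(k))
        best = bound if best is None else min(best, bound)
    return best

def permutations_product(histograms, k):
    # All combinations of one permutation per histogram ((k!)^s of them).
    if not histograms:
        yield ()
        return
    for rest in permutations_product(histograms[1:], k):
        for perm in permutations(histograms[0]):
            yield (perm,) + rest

print(level_4_lower_bound([[40, 38, 22, 20], [30, 23, 21, 16]], [54, 53, 52, 51]))  # 8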
4.2.3. Definitional sets
Another very different way of getting lower bounds is from knowledge of how the sets intersected were
defined. If we know that set i was defined as all items having particular values for an attribute j, then
n analyzing an intersection including set i, the "definitional" set i contributes no restrictions on
attributes other than j and can be ignored. This is redundant information with levels 1 and 5, but i
ay help with the other levels. For instance, for i1 definitional on attribute j, a lower bound on the
f
s
size of the intersection of sets i1 and i2 is the frequency of the least frequent value (the "antimode")
et i2 on j.
4.3. Better bounds from relaxation on sibling sets
oth upper and lower bounds can possibly be improved by relaxation among related sets in the manner
c
of [3], work aimed at protection of data from statistical disclosure. This requires a good deal more
omputation time than the closed-form formulae in this paper and requires sophisticated algorithms.Thus we do not discuss it here.
.4. Set unions
Rules analogous to those for intersection bounds can be obtained for union bounds. Most of these arelower bounds.
.4.1. Defining unions from intersections
Since
means the size of the union of set i and set j, and n (i j ) means the size of thei
ntersection, extending our previous notation for set size, it follows that
using the distribution of intersection over union, and
ss
s
s
s
Another approach to unions is to use complements of sets and DeMorgan's Law
A (i )
s
s
ss
A (i )
he problem with using this is the computing of statistics on the complement of a set, something
I
difficult for level 2, 3, and 4 information.
one important situation the calculation of union sizes is particularly easy: when the two sets unioned
s
are disjoint (that is, their intersection is empty). Then the size of the union is just the sum of the se
izes, by the first formula in this section. Disjointness can be known a priori, or we can infer it usingmethods in section 6.1.2.
4.4.2. Level 1 information for unions
To obtain union bound rules from intersection rules, we can do a "compilation" of the above formulae
(section 3.5.5 of [12] gives other examples of this process) by substituting rules for intersections in
them, and simplifying the result. Substituting the level 1 intersection bounds in the above set-
complement formula:
inf( n(A(1) ∪ A(2) ∪ ... ∪ A(s)) ) = max over i of n(i)
sup( n(A(1) ∪ A(2) ∪ ... ∪ A(s)) ) = min( N, n(1) + n(2) + ... + n(s) )
Here we use the standard notation of "inf" for the lower bound and "sup" for the upper bound.
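These level 1 union bounds, together with the two-set identity of section 4.4.1, are simple to compute; a small illustrative sketch (names are ours):

def level_1_union_bounds(set_sizes, N):
    # Level 1 bounds on the size of a union within a universe of size N:
    # at least as large as the largest set, at most the smaller of N and
    # the sum of the set sizes.
    inf = max(set_sizes)
    sup = min(N, sum(set_sizes))
    return inf, sup

def union_size_two_sets(n_i, n_j, n_intersection):
    # Exact size when the intersection size is known (section 4.4.1).
    return n_i + n_j - n_intersection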
4.4.3. Level 2b unions
If we know the mode frequency m(i,j) and the number of distinct values d(i,j) on attribute j, then we
can use a formula analogous to the level 2b intersection upper bound, a lower bound on the union:
ss
4.4.4. Level 2a unions
he approach used in level 2a for intersections is difficult to use here. We cannot use the negation
s
formula to relate unions to intersections because there is no comparable multiplication of two quantitie
like mode frequency and number of distinct values) that gives a lower bound on something. However,
for two sets we can use the other (first) formula relating unions to intersections, to get a union lower
ound:
For three sets, it becomes
r
+min(m (i 2,j ),m (i 3,j ))* min(d (i 2,j ),d (i 3, j
rj =The formulae get messy for more sets
.4.5. Level 3b unions
Analogous to level 2b, we have the lower bound
ss
here m2(i,j) is the frequency of the second most common value of set i on attribute j. And if we
a
know the frequency of the least common value in set i, we multiply the maximum of (d(k,j)-d(i,j)
bove by it before taking the maximum.
A
4.4.6. Level 3a unions
nalogous to level 2a, and to level 3a intersections, we have for the union of two sets a lower bound
of:
f
where m2 is the frequency of the second most common value, and mf the frequency of the median
requency value.
s
4.4.7. Level 4 union
he analysis of level 4 is analogous to that of section 4.1.7, giving a lower bound of
r
s
here freq(i,j,k) is the frequency of the kth most frequent value of the ith set unioned on the jthattribute.
.4.8. Level 5 unions
Level 5 is analogous to level 1
r
s
s
up: min
4.5. Complements
our coverage of set algebra we need set complements. The size of a complement is just
the difference of the size N of the universe U (something that is often important, so we ought to know
t) and the size of the set. An upper bound on a complement is N minus a lower bound on the size ofthe set; a lower bound on a complement is N minus an upper bound on the size of the set.
.6. Embedded set expressions
So far we have only considered intersections, unions, and complements of simple sets about which we
e
know exact statistics. But if the set-description language permits arbitrary embedding of query
xpressions, new complexities arise.
One problem is that the formulae of sections 4.1-4.4 require exact values values for statistics, and such
statistics are usually impossible for an embedded expression. But we can substitute upper bounds
he embedded-expression statistics in upper-bound formulae (or lower bounds when preceded in the
f
formula by a minus sign). Similarly, we can substitute lower bounds on the statistics in lower-bound
ormulae (or upper bounds when preceded in the formula by a minus sign). This works for statistics on
counts, mode frequency, frequency of the second-most common value, and number of distinct items-
ut not the median frequency.
s
A
4.6.1. Summary of equivalence
nother problem is that there can be many equivalent forms of a Boolean-algebra expression, and we
A
have to be careful which equivalent form we choose because different forms give different bounds
ppendix A surveys the effect of various equivalences of Boolean algebra on bounds using level 1
information. Commutativity and associativity do not affect bounds, but factoring out of common sets i
conjuncts or disjuncts with distributive laws is important since it usually gives better bounds and canno
orsen them. Factoring out enables other simplification laws which usually give better bounds too.
e
The formal summary of Appendix A is in Figure 1 ("yes" means better in all but trivial cases).
hese transformations are sufficient to derive set expression equivalent to another a set expression, the
a
information in the table is sufficient to determine whenever one expression is always better than
nother.
4.6.2. The best form of a given set expression, for level 1 information
e
So the best form for the best level 1 bounds is a highly factored form, quite different from a disjunctiv
ormal form or a conjunctive normal form. The number of Boolean operators doesn't matter, more the
l
number of sets they operate on, so we don't want the "minimum-gate" form important in classica
oolean optimization techniques like Karnaugh maps. So minimum-term form [6] seems to be closest
to what we want; note that all the useful transformations in the above table reduce the number of terms
n an expression. Minimum-term form makes sense because multiple occurrences of the same term
f
should be expected to cause suboptimal bounds arising from failure to exploit the perfect correlation
tems in the occurrences. Unfortunately, the algorithms in [6] for transforming a Boolean expression to
this form are considerably more complicated than the one to a minimum-gate form.
Minimum-term form is not unique. Consider these three equivalent expressions:
(A ∩ (B ∪ C)) ∪ (B ∩ C), (B ∩ (A ∪ C)) ∪ (A ∩ C), and (C ∩ (A ∪ B)) ∪ (A ∩ B).
These cannot be ranked in a fixed order, though they are all preferable (by their use of a distributive
law) to the unfactored equivalent (A ∩ B) ∪ (A ∩ C) ∪ (B ∩ C). Thus
we may need to compute bounds on each of several minimum-term forms, and take the best bounds.
This situation should not arise very often, because users will query sets with few repeated mentions of
the same set; parity queries are rarely needed.
Another problem with the minimum-term form is that it does not always give optimal bounds. For
instance, let set A in the above be the union of two new sets D and E. Let the sizes of B, C, D, and E
respectively be 10, 7, 7, and 8. Then the three factored forms give upper bounds respectively of
min(15,17)+min(10,7)=22, min(10,22)+min(15,7)=17, and min(7,25)+min(15,10)=17. But the first form
is the minimum-term form, with 6 terms instead of 7. However, this situation only arises when there
are different ways to factor, and can be forestalled by calculating a bound separately for the minimum-
term form corresponding to every different way of factoring.
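To compare candidate factorings, level 1 bounds can be evaluated recursively over an expression tree and the best result kept; the sketch below uses our own hypothetical representation (a set name, or a tuple ('and', ...) or ('or', ...)) and reproduces the numbers in the example above.

def level_1_bounds(expr, sizes, N):
    # Return (inf, sup) level 1 bounds for a set expression: a set name,
    # ('and', e1, e2, ...) for intersection, or ('or', e1, e2, ...) for union.
    if isinstance(expr, str):
        return sizes[expr], sizes[expr]
    op, *args = expr
    bounds = [level_1_bounds(a, sizes, N) for a in args]
    infs, sups = zip(*bounds)
    if op == 'and':
        return max(0, sum(infs) - (len(args) - 1) * N), min(sups)
    if op == 'or':
        return max(infs), min(N, sum(sups))
    raise ValueError(op)

sizes = {'B': 10, 'C': 7, 'D': 7, 'E': 8}
A = ('or', 'D', 'E')
N = 1000  # any universe size much larger than the sets
form1 = ('or', ('and', A, ('or', 'B', 'C')), ('and', 'B', 'C'))
form2 = ('or', ('and', 'B', ('or', A, 'C')), ('and', A, 'C'))
print(level_1_bounds(form1, sizes, N)[1])  # 22
print(level_1_bounds(form2, sizes, N)[1])  # 17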
4.6.3. Embedded expression forms with other levels of informatio
evel 5 is analogous to level 1-it just represents a partition of all the sets being intersected into subsets
r
of a particular range of values on a particular attribute, with bounds being summed up on all such
anges of values of the attribute. Thus the above "best" forms will be equally good for level 5
e
information. Analysis is considerably more complicated for levels 2, 3, and 4 since we do not hav
oth upper and lower bounds in those cases. But the best forms for level 1 can be used heuristically
then.
4.7. Analysis of storage requirements
.7.1. Some formulae
Assume a universe of r attributes on N items, each attribute value requiring an average of w bits of
storage. The database thus requires rNw bits of storage. Assume we only tabulate statistics on "1-sets"
5] or "first-order sets" [12] or universe partitions by the values of single attributes. Assume there are
s
f
approximately even partitions on each attribute. Then the space required for storage of statistics is a
ollows:
Level 1: there are mr sets with just a set size tabulated for each. Each set size should average
e
about N /m , and should require about log (N /m ) bits, so a total of mr* log (N /m ) bits ar
required. This will tend to be considerably less than rNw, the size of the database, because
ill likely be on the same order as log (N /m ), and m is considerably less than N.
Level 2: for each of the mr sets we have 2r statistics (the mode frequency and number of distinct
values for each attribute). (This assumes we do not have any criteria to claim certain attributes as
being useless, as when their values exhibit no significantly different distributions for different
sets; if not, we replace r by the number of useful attributes.) Hence we need 2mr²log(N/m) bits.
Level 3: we need twice as much space as level 2 to include the second-highest frequency and the
median frequency statistics too, hence 4mr²log(N/m) bits.
Level 4: we can describe a distribution either implicitly (by a mathematical formula
approximating it) or explicitly (by listing of values). For implicit storage, we need to specify a
distribution function and absolute deviations above and below it (since the original distribution is
discrete, it is usually easier to use the corresponding cumulative distributions). We can use codes
for common distributions (like the uniform distribution, the exponential, and the Poisson), and we
need a few distribution parameters of w bits, plus the positive and negative deviation extrema of w
bits each too. So space will be similar to level 3 information.
If a distribution is not similar to any known distribution, we must represent it explicitly. Assume
data items are aggregated into approximately equal-size groups of values; the m-fold partitioning
that defined the original sets is probably good (else we would not have chosen it for the other
purpose originally), so let us assume it. Then we have a total of m²r log(N/m) bits. If some of
the groups of values (bins) on a set are zero, we can of course omit them and save space.
Level 5: this information is similar to level 4 except that values are associated with points of the
distribution. Implicit representation by good-fit curves requires just as much space as level-4
implicit representation; we just impose a fixed ordering of values along the horizontal axis instead
of sorting by frequency. Explicit representation also takes the level-4 m²r log(N/m), but an
alternative is to give pairs of values and their associated frequencies, something good when data
values are few in number.
We also need storage for access structures. If users query only a few named sets, we can just
store the names in a separate lexicon table mapping names to unique integer identifiers, requiring a
total of m*r*(l + log mr) bits for the table, where l is the average length of a name, assuming all
statistics on the same set are stored together.
But if users want to query arbitrary value partitions of attributes, rather than about named sets,
we must also store definitions of the sets about which we have tabulated statistics. For sets that
are partitions of numeric attributes, the upper and lower limits of the subrange are sufficient, for
2mw bits each. But nonnumeric attributes are more trouble, because we usually have no
alternative than to list the set to which each attribute value belongs. We can do this with a hash
table on value, for 2V log m bits assuming a 50% hash table occupancy. Thus total storage is the
sum of these components.
A variety of compression techniques can be applied to storage of statistics, extending standard
compression techniques for databases [15]. Thus the storage calculations above can be considered
upper bounds.
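To make the space tradeoffs concrete, the following Python sketch tallies the statistics-storage formulas of section 4.7.1 for a given database profile. It is our illustration, not part of the paper; the function and variable names are assumptions, and levels 4 and 5 are costed with the explicit (listing-based) representation.

import math

def estimate_storage_bits(N, r, m, w):
    """Rough statistics-storage estimates (in bits) for levels 1-5:
    N items, r attributes, m partitions per attribute, w bits per value."""
    per_count = math.log2(N / m)          # bits to hold one set-size count
    level1 = m * r * per_count            # one size per 1-set
    level2 = 2 * m * r**2 * per_count     # mode freq. and #distinct values, per attribute
    level3 = 2 * level2                   # adds 2nd-highest and median frequencies
    level4 = m**2 * r * per_count         # explicit frequency distribution per set
    level5 = m**2 * r * per_count         # explicit value/frequency distribution per set
    database = r * N * w                  # size of the raw database, for comparison
    return dict(level1=level1, level2=level2, level3=level3,
                level4=level4, level5=level5, database=database)

if __name__ == "__main__":
    for lvl, bits in estimate_storage_bits(N=100000, r=10, m=20, w=16).items():
        print(f"{lvl}: {bits / 8 / 1024:.1f} KB")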
These storage requirements are not necessarily bad, not even the level 4 and 5 explicit
distributions. In many databases, storage is cheap. If a set intersection is often used, or a bound
is needed to determine how to perform a large join when a wrong choice may mean hours or days or
more time, quick reasoning with a few page fetches of precomputed statistics (it's easy to group
related precomputed statistics on the same page) will usually be much faster than computing the
actual statistic or estimating it by unbiased sampling. That is because the number of page fetches
is by far the major determinant of execution time for this kind of simple processing. Computing
the actual statistic would require looking at every page containing items of the set; random
sampling will require examining nearly as many pages, even if the sampling ratio is small, because
except in the rare event in which the placement of records on pages is random (generally a poor
database design strategy), records selected will tend to be the only records used on a page, and
thus most of the page-fetch effort is "wasted." Reference [14] discusses these issues further.
5. Evaluation of the frequency-distribution bounds
5.1. Comparing bounds
We can prove some relationships between frequency-distribution bounds on intersections (see Figure 2):
1. Level 2b upper bounds are better than level 1.
2. Level 3a upper bounds are better than level 2a, because you get the latter if you substitute
m(i,j) for m2(i,j) and mf(i,j) in the former, and m2(i,j) <= m(i,j).
3. Level 3b upper bounds are better than level 2b.
4. Level 4a upper bounds are better than level 3a because the mode frequency is an upper bound
on the frequency of the half of the most frequent values, and the median frequency is an upper
bound on the frequency of the other half. Hence, writing the level-3a expression in brackets as a
summation of d(U,j) terms comparable to that in the level-4a summation, each level-3a term is an
upper bound on a corresponding level-4a term.
5. Level 4a upper bounds are better than level 2b since they represent repeated application of
level-2b bounds to subsets of the sets intersected.
6. Level 5 upper bounds are better than level 4a by the proof in Appendix B.
7. Level 5 lower bounds are better than level 1 lower bounds because level 5 partitions the level-1
sets into many subsets and computes lower bounds separately on each subset instead of all at
once.
Analogous arguments hold for bounds on unions, since the rules for unions were created from the rules
for intersections.
5.2. Experiments
There are two rough guidelines for bounds on set intersection and union sizes to be more useful than
estimates of those same things:
1. Some of the sets being intersected or unioned are significantly nonindependent (that is, not
drawn randomly from some much larger population). Hence the usual estimates of their
intersection size obtained from level 1 (size of the intersected sets) information will be poor.
2. At least one set being intersected or unioned has a significantly different frequency distribution
from the others on at least one attribute. This requires that at least one set has values on an
attribute that are not randomly drawn.
These criteria can be justified by the general homomorphism idea behind our approach (see
section 3): good bounds result whenever values in the range of the homomorphism get very
different counts mapped onto them for each set considered. These criteria can be used to decide
which sets on a database it might be useful to store statistics for computing bounds.
5.2.1. Experiments: nonrandom sets
As a simple illustration, consider the experiments summarized in the tables of Figures 3 and 4. We
created a synthetic database of 300 tuples of four attributes whose values were evenly distributed
random digits 0-9. We wrote a routine (MIX) to generate random subsets of the data set satisfying the
above two criteria, finding groups of subsets that had unusually many common values. We conducted
10 experiments each on random subsets of sizes 270, 180, 120, and 30. There were four parts to the
experiment, each summarized in a separate table. In the top tables in Figures 3 and 4, we estimated the
size of the intersection of two sets; in the lower tables, we estimated the size of the intersection of four
sets. In Figure 3 the chosen sets had 95% of the same values; in Figure 4, 67%.
The entries in the tables represent means and standard deviations in 10 experiments of the ratios of
bounds or estimates to the actual intersection size. There are four pairs of columns for the four
different set sizes investigated. The rows correspond to the various frequency-distribution levels
discussed: the five levels of upper bounds first, then two estimate methods, then the two lower bound
methods. (Since level 5 information is just level 1 information at a finer level of detail, it is easier to
generalize the level 1 estimate formula to a level 5 estimate formula.) Only level 2a and 3a rules were
used, not 2b and 3b.
The advantage of bounds shows in both Figure 3 and Figure 4, but more dramatically in Figure 3 where
the sets have the 95% overlap. Unsurprisingly, lower bounds are most helpful for the large set sizes (left
columns), whereas upper bounds are most helpful for the small set sizes (right columns). However, the
lower bounds are not as useful, because when they are close to the true set size (i.e. the ratio is near 1),
estimates are also close. But when upper bounds are close to the true set size for small sets, both
estimates and lower bounds can be far away.
5.2.2. Experiments: real data
The above experiments were with synthetic data, but we found similar phenomena with real-world data.
A variety of experiments, summarized in [17], were done with data extracted from a database of
medical (rheumatology) patient records. Performance of estimate methods vs. our bounding methods
was studied for different attributes, different levels of information, and different granularities of
statistical summarization. Results were consistent with the preceding ones for a variety of set types.
This should not be surprising since our two criteria given previously are often fulfilled with medical
data, where different measures (tests, observations, etc.) of the sickness of a patient often tend to
correlate.
6. Bounds from range analysis
Frequency-distribution bounds are only one example of a class of bounding methods involving
mappings (homomorphisms) of a set of data items onto a distribution. Another very important example
are bounds obtained from analysis on the range of values for some attribute, call it j, of the data items
for each set intersected. These methods essentially create new sets, defined as partitions on j, which
contain the intersection or union being studied. These new sets can therefore be included in the list
of sets being intersected or unioned without affecting the result, and this can lead to tighter (better)
bounds on the size of the result. Many formulas analogous to those of section 4 can be derived.
6.1. Intersections on univariate ranges
6.1.1. Statistics on partitions of an attribute
All the methods we will discuss require partition counts on some attribute j, that is, the number of
data items lying in mutually exclusive and exhaustive ranges of possible values for j. For instance, we
may know the number of people ages 0-9, 10-19, 20-29, etc.; or the number of people with incomes 0-
9999, 10000-19999, 20000-29999, etc. We require that the attribute be sortable by something other
than item frequency in order for this partitioning to make sense and be different from the frequency-
distribution analysis just discussed; this means that most suitable attributes are numeric.
This should not be interpreted, however, as requiring anticipation of every partition of an attribute
that a user might mention in a query, just a covering set. To get counts on arbitrary subsets of
the ranges, inequalities of the Chebyshev type may be used when moments are known, as for
instance Cantelli's inequalities:
P[x - μ ≥ λ] ≤ σ²/(σ² + λ²)
P[x - μ < λ] ≥ λ²/(σ² + λ²)
for μ the mean and σ the standard deviation of the attribute. Otherwise the count of a containing
range partition may be used as an upper bound on the subset count.
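As an illustration (ours, not from the paper), the following Python sketch uses the one-sided Cantelli bound above to bound how many items of a set can have an attribute value at or above a query threshold, given only the set's size, mean, and standard deviation. The function name cantelli_upper_count is hypothetical.

def cantelli_upper_count(n_items, mean, std, threshold):
    """Upper bound on how many of n_items can have attribute value >= threshold,
    using Cantelli's inequality P[x - mu >= lambda] <= sigma^2/(sigma^2 + lambda^2)."""
    lam = threshold - mean
    if lam <= 0:
        return n_items                      # the bound is vacuous below the mean
    frac = std**2 / (std**2 + lam**2)       # one-sided tail probability bound
    return min(n_items, int(n_items * frac))

# Example: 10,000 incomes with mean 30,000 and std 8,000; how many can be >= 60,000?
print(cantelli_upper_count(10000, 30000.0, 8000.0, 60000.0))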
6.1.2. Upper bounds from set ranges and bin counts on the universe (level 1)
Suppose we know partition (bin) counts on some numeric attribute j for the universe U. (We must
know them for at least one set to apply these methods, so it might as well be the universe.) Suppose
we know the maximum h(i,j) and minimum l(i,j) on attribute j for each set i being intersected. Then an
upper bound on the maximum of the intersection, H(j), and a lower bound on the minimum of the
intersection, L(j), are
H(j) = the minimum over the s sets i of h(i,j), and L(j) = the maximum over the s sets i of l(i,j).
Note that if H(j) < L(j) we can immediately say the intersection is the empty set. Similarly, for the union
of sets,
H(j) = the maximum over the s sets i of h(i,j), and L(j) = the minimum over the s sets i of l(i,j).
An intersection or union must be a subset of U that has values bounded by L(j) and H(j) on attribute
j, for any numeric attribute j. So an upper bound on the size of an intersection or union is the
minimum-size such range-partition set over all attributes j in Q, or
the minimum over j = 1..r of the sum, for k from B(L(j),j) to B(H(j),j), of binfreq(U,j,k),
where s sets are intersected; where there are r numeric attributes; where B(x,j) denotes the number of
the bin into which value x falls on attribute j; and where binfreq(U,j,k) is the number of items in
partition (bin) k on attribute j for the universe U.
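The level-1 range bound above is simple to mechanize. The Python sketch below is our illustration of it (the names level1_range_upper_bound, bin_index, and so on are assumptions, not the paper's notation); it handles one attribute, and a caller would take the minimum of its result over all numeric attributes.

import bisect

def bin_index(value, bin_edges):
    """Index of the bin (defined by sorted right edges) into which value falls."""
    return bisect.bisect_left(bin_edges, value)

def level1_range_upper_bound(universe_bin_counts, bin_edges, set_extrema, mode="intersection"):
    """Upper bound on the size of an intersection (or union) of sets, using only
    universe bin counts on one attribute and each set's (min, max) on that attribute."""
    mins = [lo for lo, hi in set_extrema]
    maxs = [hi for lo, hi in set_extrema]
    if mode == "intersection":
        L, H = max(mins), min(maxs)        # overlap of the sets' ranges
        if H < L:
            return 0                       # ranges do not overlap: empty intersection
    else:
        L, H = min(mins), max(maxs)        # envelope of the sets' ranges
    k_lo, k_hi = bin_index(L, bin_edges), bin_index(H, bin_edges)
    return sum(universe_bin_counts[k_lo:k_hi + 1])

# Example: ages binned by decade for the universe, two sets with known age ranges.
counts = [120, 340, 510, 470, 300, 180, 80]          # bins 0-9, 10-19, ..., 60-69
edges = [9, 19, 29, 39, 49, 59, 69]                  # right edge of each bin
print(level1_range_upper_bound(counts, edges, [(25, 62), (18, 41)]))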
Absolute bounds on correlations between attributes may also be exploited. If two numeric attributes
have a strong relationship to each other, we can formally characterize a mapping from one to the other
with three items of information: the algebraic formula, an upper deviation from the fit to that formula
for the universe U, and a lower deviation. We can calculate these three things for pairs of numeric
attributes on U, and store only the information for pairs with strong correlations. To use correlations in
finding upper bounds, for every attribute j we find L(j) and H(j) by the old method. Then, for every
stored correlation from an arbitrary attribute j1 to an arbitrary attribute j2, we calculate the projection of
the range of j1 (from L(j1) to H(j1)) by the formula onto j2. The overlap of this range on the original
range of j2 (from L(j2) to H(j2)) is then the new range on j2, and L(j2) and H(j2) are updated if
necessary. Applying these correlations requires iterative relaxation methods, since narrowing of the
range of one attribute may allow new and tighter narrowings of ranges of attributes to which that
attribute correlates, and so on.
6.1.3. Upper bounds from mode frequencies on bin counts for intersected sets (level 2)
At the next level of information, analogous to level 2 for frequency-distribution bounds, we can have
information about distributions of values for particular sets. Suppose this includes an upper bound
m(i,j) on the number of things in set i in a bin of some attribute j. (This m(i,j) is like the mode
frequency in section 4, except the equivalence classes here are all items in a certain range on a certain
attribute.) Assume as before we know what bins a given range of an attribute covers. Then an upper
bound on the size of the set intersection is the minimum, over the r numeric attributes j and the s sets i
intersected, of m(i,j) times the number of bins from B(L(j),j) to B(H(j),j) inclusive,
where H(j) and L(j) are as before. Similarly, the minimum over the attributes of an analogous
expression gives an upper bound on the size of a set union.
6.1.4. Upper bounds from bin counts for intersected sets (level 5)
Finally, if we know the actual distribution of bin counts for each set i being intersected, we can modify
the intersection formula of level 1 as follows:
the minimum over j = 1..r of the sum, for k from B(L(j),j) to B(H(j),j), of the minimum over the s sets i of binfreq(i,j,k),
where s sets are intersected; where there are r numeric attributes; where B(x,j) is the number of the bin
into which value x falls on attribute j; and where binfreq(i,j,k) is the number of items in partition (bin)
k on attribute j for set i. Similarly, the union upper bound sums, over the bins of the union range, the
smaller of binfreq(U,j,k) and the total of the sets' counts in that bin, minimized over the attributes j.
As with frequency-distribution level 4a and level 5 bounds, we can also use this formula when all we
know is an upper bound on the bin counts, perhaps from a distribution fit.
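For comparison with the level-1 sketch, here is an analogous illustration (again ours, with assumed names; the sets' bin counts are taken to be aligned to the same bins) of the level-5 per-set bin-count bound for an intersection on one attribute.

def level5_range_upper_bound(set_bin_counts, k_lo, k_hi):
    """Upper bound on an intersection's size on one attribute: in each bin of the
    overlapping range [k_lo, k_hi], no more items can survive the intersection
    than the smallest per-set count in that bin."""
    return sum(min(counts[k] for counts in set_bin_counts)
               for k in range(k_lo, k_hi + 1))

# Example: two sets' bin counts over the same 7 bins, overlap covering bins 2..4.
a = [0, 12, 40, 55, 30, 5, 0]
b = [8, 20, 25, 60, 10, 0, 0]
print(level5_range_upper_bound([a, b], 2, 4))   # 25 + 55 + 10 = 90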
6.2. Multidimensional intersection range analysis
Analogous to range analysis, we may be able to obtain a multivariate distribution that is an upper
bound on the distribution of the data universe U over some set S of interest (as discussed in [9] and
[2]). We determine ranges on each attribute of S by finding the overlap of the ranges for each set being
intersected as before. This defines a hyperrectangular region in hyperspace, and the universe upper
bound also bounds the number of items inside it. We can also use various multivariate generalizations
of Chebyshev's inequality [1] to bound the number of items in the region from knowledge of moments
of any set containing the intersection set (including the universe). As with univariate range analysis, we
can exploit known correlations to further truncate the ranges on each attribute of S, obtaining a smaller
hyperrectangular region.
Another class of correlation we can use is specific to multivariate ranges: those between attributes in the
set S itself. For instance, a tight linear correlation between two numeric attributes j1 and j2 strongly
limits the number of items within rectangles the regression line does not pass through. If we know
absolute bounds on the regression fit we can infer zero items within whole subregions. If we know a
standard error on the regression fit we can use Chebyshev's inequality and its relatives to bound how
many items can lie certain distances from the regression line.
Just as for univariate range analysis, we can exploit more detailed information about the distributions of
any attribute (not necessarily the ones in S). If we know an upper bound on bin size, for some
partitioning into subregions or "bins", or if we know the exact distribution of bin sizes, we may be able
to improve on the level 1 bounds.
6.3. Lower bounds from range analysis
Lower bounds can be obtained from substituting the above upper bounds in the first three formulae
relating intersections and unions in section 4.4.1, either substituting for the intersection or for the union.
Unfortunately the resulting formulae are complicated, so we won't give them here.
6.4. Embedded set expressions for range analysis
Let us consider the effect of Boolean equivalences on embedded set descriptions for the above range-
analysis bounds, for level 1 information. First, range-analysis bounds cannot be provided for expressions
with set complements in them, because there is no good way to determine a maximum or minimum for
the complement of a set other than the maximum or minimum of the universe. So none of the
equivalences involving complements apply.
The only set-dependent information in the level-1 calculation are the extrema of the range, H and L.
Equivalence of set expressions under commutativity or associativity of terms in intersections or unions
then follows from the commutativity of the maximum and minimum operations, as does distributivity of
intersections over unions and vice versa. Equivalence under reflexivity follows because max(a,a)=a
and min(a,a)=a. Introduction of terms for the universe and the null set is useless, because
max(a,0)=a for a >= 0, and min(a,N)=a. So expression rearrangements do not affect the bounds, and we
might as well not bother; that seems a useful heuristic for level 2 and 5 information too.
6.5. Storage requirements for range analysis
Space requirements for these range-analysis bounds can be computed in the same way as for the
frequency-distribution bounds. Assume that the number of bins on each attribute is m, the average
number of attributes is r, the number of bits required for each attribute value is w, and the number of
items in the database is N. Then the space requirements for univariate range bounds are:
Level 1: mr log(N/m) + 2mr²w
Level 2: 2mr log(N/m) + 2mr²w
Level 5: m²r log(N/m) + 2mr²w
Again, these are pessimistic estimates since they assume that all attributes can be helpful for range
analysis.
6.6. Evaluation of the range-analysis bounds
Level 2 upper bounds are definitely better than level 1 because binfreq(U,j,k) is an upper bound on
m(i,j), and level 5 is better than level 2 because m(i,j) is an upper bound on binfreq(i,j,k). But the
average-case performance of the range-analysis bounds is harder to predict than that of the frequency-
distribution bounds, since the former depends on widely different data distributions, while the latter's
distributions tend to be more similar. Furthermore, maxima and minima statistics have high variance
for randomly distributed data, so it is hard to create an average-case situation for them; strong range-
restriction effects do occur with real databases, but mostly with human-artifact data that does not fit
well to classical distributions. Thus no useful average-case generalizations are possible about range-
analysis bounds.
6.7. Cascading range-analysis and frequency-distribution methods
The above determination of the maximum and minimum of an intersection set on an attribute can be
used to find better frequency-distribution bounds too, since it effectively adds new sets to the list of sets
being intersected, sets defined as partitions of the values of particular attributes. These new sets may
have unusual distributions on further attributes that can lead to tight frequency-distribution bounds.
7. Conclusion
We have provided a library of formulae for bounds on the sizes of intersections, unions, and
complements of sets. We have emphasized intersections (because of their greater importance) and
intersection upper bounds (because they are easier to obtain). Our methods exploit simple precomputed
statistics (counts, frequencies, maxima and minima, and distribution fits) on sets. The more we
precompute, the better our bounds can be. We illustrated by analysis and experiments the time-space-
accuracy tradeoffs involved between different bounds. Our bounds tend to be most useful when there
are strong or complex correlations between sets in an intersection or union, a situation in which
estimation methods for set size tend to do poorly. This work thus nicely complements those methods.
Acknowledgements: The work reported herein was partially supported by the Foundation Research
Program of the Naval Postgraduate School with funds provided by the Chief of Naval Research.
--R
Neil C.
First we prove this for a two-row matrix; the result for a two-row matrix easily extends to matrices with more rows.
Figure 4: experiments measuring average ratio of bounds and estimates to actual intersection size.
Figure 5: table of cases for Appendix.
--TR
Antisampling for estimation: an overview
On semantic issues connected with incomplete information databases
An Approach to Multilevel Boolean Minimization
Evaluation of the size of a query expressed in relational algebra
Accurate estimation of the number of tuples satisfying a condition
Statistical Databases
An Analytic Approach to Statistical Databases
Rule-based statistical calculations on a database abstract
--CTR
Francesco M. Malvestuto, A universal-scheme approach to statistical databases containing homogeneous summary tables, ACM Transactions on Database Systems (TODS), v.18 n.4, p.678-708, Dec. 1993 | distribution information;closed-form bounds;database theory;statistical analysis;set theory;union sizes;file organisation;set intersection;boolean algebra;database access;rule-based system architecture |
543016 | Contour and Texture Analysis for Image Segmentation. | This paper provides an algorithm for partitioning grayscale images into disjoint regions of coherent brightness and texture. Natural images contain both textured and untextured regions, so the cues of contour and texture differences are exploited simultaneously. Contours are treated in the intervening contour framework, while texture is analyzed using textons. Each of these cues has a domain of applicability, so to facilitate cue combination we introduce a gating operator based on the texturedness of the neighborhood at a pixel. Having obtained a local measure of how likely two nearby pixels are to belong to the same region, we use the spectral graph theoretic framework of normalized cuts to find partitions of the image into regions of coherent texture and brightness. Experimental results on a wide range of images are shown. | Introduction
Submitted to the International Journal of Computer Vision, Dec 1999
A shorter version appeared in the International Conference in Computer Vision, Corfu, Greece, Sep 1999
To humans, an image is not just a random collection of pixels; it
is a meaningful arrangement of regions and objects. Figure 1 shows a
variety of images. Despite the large variations of these images, humans
have no problem interpreting them. We can agree about the different
regions in the images and recognize the different objects. Human visual
grouping was studied extensively by the Gestalt psychologists in the
early part of the 20th century (Wertheimer, 1938). They identified
several factors that lead to human perceptual grouping: similarity,
proximity, continuity, symmetry, parallelism, closure and familiarity. In
computer vision, these factors have been used as guidelines for many
grouping algorithms.
The most studied version of grouping in computer vision is image
segmentation. Image segmentation techniques can be classified into
two broad families: (1) region-based, and (2) contour-based approaches.
Region-based approaches try to find partitions of the image pixels into
sets corresponding to coherent image properties such as brightness,
color and texture. Contour-based approaches usually start with a first
Currently with the Department of Computer Science, Carnegie Mellon University.
(a) (b)
(c) (d) (e)
Figure
1. Some challenging images for a segmentation algorithm. Our goal is to
develop a single grouping procedure which can deal with all these types of images.
stage of edge detection, followed by a linking process that seeks to
exploit curvilinear continuity.
These two approaches need not be that different from each other.
Boundaries of regions can be defined to be contours. If one enforces
closure in a contour-based framework (Elder and Zucker, 1996; Jacobs,
1996) then one can get regions from a contour-based approach. The
difference is more one of emphasis and what grouping factor is coded
more naturally in a given framework.
A second dimension on which approaches can be compared is local
vs. global. Early techniques, in both contour and region frameworks,
made local decisions: in the contour framework this might be declaring
an edge at a pixel with high gradient, in the region framework this might
be making a merge/split decision based on a local, greedy strategy.
Region-based techniques lend themselves more readily to defining a
global objective function (for example, Markov random fields (Geman
and Geman, 1984) or variational formulations (Mumford and Shah,
1989)). The advantage of having a global objective function is that
decisions are made only when information from the whole image is
taken into account at the same time.
In contour-based approaches, often the first step of edge detection
is done locally. Subsequently efforts are made to improve results by
a global linking process that seeks to exploit curvilinear continuity.
Examples include dynamic programming (Montanari, 1971), relaxation
approaches (Parent and Zucker, 1989), saliency networks (Sha'ashua
and Ullman, 1988), and stochastic completion (Williams and Jacobs, 1995).
A criticism of this approach is that the edge/no edge decision is made
prematurely. To detect extended contours of very low contrast, a very
low threshold has to be set for the edge detector. This will cause random
edge segments to be found everywhere in the image, making the task
of the curvilinear linking process unnecessarily harder than if the raw
contrast information was used.
A third dimension on which various segmentation schemes can be
compared is the class of images for which they are applicable. As
suggested by Figure 1, we have to deal with images which have both
textured and untextured regions. Here boundaries must be found using
both contour and texture analysis. However what we find in the
literature are approaches which concentrate on one or the other.
Contour analysis (e.g. edge detection) may be adequate for untextured
images, but in a textured region it results in a meaningless tangled
web of contours. Think for instance of what an edge detector would
return on the snow and rock region in Figure 1(a). The traditional
"solution" for this problem in edge detection is to use a high threshold
so as to minimize the number of edges found in the texture area. This
is obviously a non-solution: such an approach means that low-contrast
extended contours will be missed as well. This problem is illustrated in
Figure 2. There is no recognition of the fact that extended contours,
even weak in contrast, are perceptually significant.
While the perils of using edge detection in textured regions have
been noted before (see e.g. (Binford, 1981)), a complementary problem
of contours constituting a problem for texture analysis does not
seem to have been recognized before. Typical approaches are based on
measuring texture descriptors over local windows, and then computing
differences between window descriptors centered at different locations.
Boundaries can then give rise to thin strip-like regions, as in Figure 3.
For specificity, assume that the texture descriptor is a histogram of
linear filter outputs computed over a window. Any histogram window
near the boundary of the two regions will contain large filter responses
from filters oriented along the direction of the edge. However, on both
sides of the boundary, the histogram will indicate a featureless region.
A segmentation algorithm based on, say, χ² distances between histograms
will inevitably partition the boundary as a group of its own.
As is evident, the problem is not confined to the use of a histogram of
(a) (b)
(c) (d)
Figure 2. Demonstration of texture as a problem for the contour process. Each
image shows the edges found with a Canny edge detector for the penguin image
using different scales and thresholds: (a) fine scale, low threshold, (b) fine scale,
high threshold, (c) coarse scale, low threshold, (d) coarse scale, high threshold.
A parameter setting that preserves the correct edges while suppressing spurious
detections in the textured area is not possible.
(a) (b)
Figure 3. Demonstration of the "contour-as-a-texture" problem using a real image.
(a) Original image of a bald eagle. (b) The groups found by an EM-based algorithm
(Belongie et al., 1998).
filter outputs as texture descriptor. Figure 3(b) shows the actual groups
found by an EM-based algorithm using an alternative color/texture
descriptor (Belongie et al., 1998).
1.1. Desiderata of a Theory of Image Segmentation
At this stage, we are ready to summarize our desired attributes for a
theory of image segmentation.
1. It should deal with general images. Regions with or without texture
should be processed in the same framework, so that the cues of
contour and texture differences can be simultaneously exploited.
2. In terms of contour, the approach should be able to deal with
boundaries defined by brightness step edges as well as lines (as
in a cartoon sketch).
3. Image regions could contain texture which could be regular such as
the polka dots in Figure 1(c), stochastic as in the snow and rock
region in (a), or anywhere in between such as the tiger stripes in (b).
A key question here is that one needs an automatic procedure for
scale selection. Whatever one's choice of texture descriptor, it has
to be computed over a local window whose size and shape need to
be determined adaptively. What makes scale selection a challenge
is that the technique must deal with the wide range of textures,
regular, stochastic, or intermediate cases, in a seamless way.
1.2. Introducing Textons
Julesz introduced the term texton, analogous to a phoneme in speech
recognition, nearly 20 years ago (Julesz, 1981) as the putative units
of preattentive human texture perception. He described them qualitatively
for simple binary line segment stimuli (oriented segments,
crossings and terminators) but did not provide an operational definition
for gray-level images. Subsequently, texton theory fell into disfavor
as a model of human texture discrimination as accounts based on spatial
filtering with orientation and scale-selective mechanisms that could be
applied to arbitrary gray-level images became popular.
There is a fundamental, well recognized, problem with linear filters.
Generically, they respond to any stimulus. Just because you have a
response to an oriented odd-symmetric filter doesn't mean there is an
edge at that location. It could be that there is a higher contrast bar
at some other location in a different orientation which has caused this
response. Tokens such as edges or bars or corners can not be associated
with the output of a single filter. Rather it is the signature of the
outputs over scales, orientations and order of the filter that is more
revealing.
Here we introduce a further step by focussing on the outputs of these
filters considered as points in a high dimensional space (on the order of
40 filters are used). We perform vector quantization, or clustering, in
this high-dimensional space to find prototypes. Call these prototypes
textons; we will find empirically that these tend to correspond to oriented
bars, terminators and so on. One can construct a universal texton
vocabulary by processing a large number of natural images, or we could
find them adaptively in windows of images. In each case the K-means
technique can be used. By mapping each pixel to the texton nearest
to its vector of filter responses, the image can be analyzed into texton
channels, each of which is a point set.
It is our opinion that the analysis of an image into textons will
prove useful for a wide variety of visual processing tasks. For instance,
in (Leung and Malik, 1999) we use the related notion of 3D textons for
recognition of textured materials. In the present paper, our objective is
to develop an algorithm for the segmentation of an image into regions
of coherent brightness and texture; we will find that the texton representation
will enable us to address the key problems in a very natural
fashion.
1.3. Summary of our approach
We pursue image segmentation in the framework of Normalized Cuts
introduced by (Shi and Malik, 1997). The image is considered to be a
weighted graph where the nodes i and j are pixels and the edge weights,
W_ij, denote a local measure of similarity between the two pixels. Grouping
is performed by finding eigenvectors of the Normalized Laplacian
of this graph (Section 3). The fundamental issue then is that of specifying the
edge weights W_ij; we rely on normalized cuts to go from these local
measures to a globally optimal partition of the image.
The algorithm analyzes the image using the two cues of contour and
texture. The local similarity measure between pixels i and j due to the
contour cue, W_ij^IC, is computed in the intervening contour framework
of (Leung and Malik, 1998) using peaks in contour orientation energy
(Section 2 and Section 4.1). Texture is analyzed using textons (Section 2.1). Appropriate
local scale is estimated from the texton labels. A histogram of texton
densities is used as the texture descriptor. Similarity, W_ij^TX, is measured
using the χ² test on the histograms (Section 4.2). The edge weights W_ij,
combining both contour and texture information, are specified by gating
each of the two cues with a texturedness measure (Section 4.3).
In (Section 5), we present the practical details of going from the eigenvectors
of the normalized Laplacian matrix of the graph to a partition of
the image. Results from the algorithm are presented in (Section 6).
Figure 4. Left: Filter set f_i consisting of 2 phases (even and odd), 3 scales (spaced
by half-octaves), and 6 orientations (equally spaced from 0 to π). The basic filter
is a difference-of-Gaussian quadrature pair. Right:
center-surround filters. Each filter is L1-normalized for scale invariance.
2. Filters, Composite Edgels, and Textons
Since the 1980s, many approaches have been proposed in the computer
vision literature that start by convolving the image with a bank of
linear spatial filters f_i tuned to various orientations and spatial frequencies
(Knutsson and Granlund, 1983; Koenderink and van Doorn,
1987; Fogel and Sagi, 1989; Malik and Perona, 1990). (See Figure 4 for
an example of such a filter set.)
These approaches were inspired by models of processing in the early
stages of the primate visual system (e.g. (DeValois and DeValois, 1988)).
The filter kernels f_i are models of receptive fields of simple cells in
visual cortex. To a first approximation, we can classify them into three
categories:
1. Cells with radially symmetric receptive fields. The usual choice of f_i
is a Difference of Gaussians (DOG) with the two Gaussians having
different values of σ. Alternatively, these receptive fields can also
be modeled as the Laplacian of Gaussian.
2. Oriented odd-symmetric cells whose receptive fields can be modeled
as rotated copies of a horizontal odd-symmetric receptive field. A
suitable point spread function for such a receptive field is
f(x, y) = G'_σ1(y) G_σ2(x), where G_σ represents a Gaussian with standard
deviation σ. The ratio σ2/σ1 is a measure of the elongation of the
filter.
3. Oriented even-symmetric cells whose receptive fields can be modeled
as rotated copies of a horizontal even-symmetric receptive field.
A suitable point spread function for such a receptive field is
f(x, y) = G''_σ1(y) G_σ2(x).
The use of Gaussian derivatives (or equivalently, differences of offset
Gaussians) for modeling receptive fields of simple cells is due to (Young,
1985). One could equivalently use Gabor functions. Our preference for
Gaussian derivatives is based on their computational simplicity and
their natural interpretation as 'blurred derivatives' (Koenderink and
van Doorn, 1987; Koenderink and van Doorn, 1988).
The oriented filterbank used in this work, depicted in Figure 4, is
based on rotated copies of a Gaussian derivative and its Hilbert trans-
form. More precisely, let f_1(x, y) be an even-symmetric kernel given by the
second derivative along y of an elongated Gaussian, and let f_2(x, y) be
the Hilbert transform of f_1(x, y) along the y axis,
where σ is the scale, ℓ is the aspect ratio of the filter, and C is a
normalization constant. (The use of the Hilbert transform instead of
a first derivative makes f_1 and f_2 an exact quadrature pair.) The
radially symmetric portion of the filterbank consists of Difference-of-
Gaussian kernels. Each filter is zero-mean and L1-normalized for scale
invariance (Malik and Perona, 1990).
Now suppose that the image is convolved with such a bank of linear
filters. We will refer to the collection of response images I * f_i as the
hypercolumn transform of the image.
Why is this useful from a computational point of view? The vector
of filter outputs I * f_i characterizes the image patch centered at a pixel
by a set of values at a point. This is similar to characterizing an
analytic function by its derivatives at a point: one can use a Taylor
series approximation to find the values of the function at neighboring
points. As pointed out by (Koenderink and van Doorn, 1987), this is
more than an analogy: because of the commutativity of the operations
of differentiation and convolution, the receptive fields described above
are in fact computing 'blurred derivatives'. We recommend (Koen-
derink and van Doorn, 1987; Koenderink and van Doorn, 1988; Jones
and Malik, 1992; Malik and Perona, 1992) for a discussion of other
advantages of such a representation.
The hypercolumn transform provides a convenient front end for
contour and texture analysis:
Contour. In computational vision, it is customary to model brightness
edges as step edges and to detect them by marking locations
corresponding to the maxima of the outputs of odd-symmetric fil-
ters (e.g. (Canny, 1986)) at appropriate scales. However, it should
be noted that step edges are an inadequate model for the discontinuities
in the image that result from the projection of depth or orientation
discontinuities in the physical scene. Mutual illumination and
specularities are quite common and their effects are particularly
significant in the neighborhood of convex or concave object edges.
In addition, there will typically be a shading gradient on the image
regions bordering the edge. As a consequence of these effects, real
image edges are not step functions but more typically a combination
of steps, peak and roof profiles. As was pointed out in (Perona
and Malik, 1990), the oriented energy approach (Knutsson and
Granlund, 1983; Morrone and Owens, 1987; Morrone and Burr,
1988) can be used to detect and localize correctly these composite
edges.
The oriented energy, also known as the "quadrature energy," at
angle 0 degrees is defined as
OE_0 = (I * f_1)^2 + (I * f_2)^2,
which has maximum response for horizontal contours. Rotated
copies of the two filter kernels are able to pick up composite edge
contrast at various orientations.
Given OE_θ, we can proceed to localize the composite edge elements
(edgels) using oriented nonmaximal suppression. This is
done for each scale in the following way. At a generic pixel q, let
θ* denote the dominant orientation and OE* the
corresponding energy. Now look at the two neighboring values of
OE* on either side of q along the line through q perpendicular to
the dominant orientation. The value OE* is kept at the location
of q only if it is greater than or equal to each of the neighboring
values. Otherwise it is replaced with a value of zero.
Noting that OE ranges between 0 and infinity, we convert it to a
probability-like number between 0 and 1 as follows:
p_con = 1 - exp(-OE / σ_IC).
σ_IC is related to the oriented energy response purely due to image
noise. We use σ_IC = 0.02 in this paper. The idea is that for any
contour with OE >> σ_IC, p_con is approximately 1.
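As a rough illustration of this oriented energy computation, the following Python/NumPy sketch assumes a precomputed quadrature pair of kernels (f1, f2) for one orientation and scale; the kernel construction and exact normalization are not reproduced here, and the helper names are ours rather than the authors'.

import numpy as np
from scipy.ndimage import convolve

def oriented_energy(image, f1, f2):
    """Quadrature energy OE = (I*f1)^2 + (I*f2)^2 for one orientation and scale."""
    r1 = convolve(image, f1, mode="nearest")
    r2 = convolve(image, f2, mode="nearest")
    return r1**2 + r2**2

def contour_probability(oe, sigma_ic=0.02):
    """Soften oriented energy into a 0-1 contour probability, as in the text."""
    return 1.0 - np.exp(-oe / sigma_ic)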
Texture. As the hypercolumn transform provides a good local
descriptor of image patches, the boundary between differently textured
regions may be found by detecting curves across which there
is a significant gradient in one or more of the components of
the hypercolumn transform. For an elaboration of this approach,
see (Malik and Perona, 1990).
Malik and Perona relied on averaging with large kernels to smooth
away spatial variation for filter responses within regions of texture.
This process loses a lot of information about the distribution of
filter responses; a much better method is to represent the neighborhood
around a pixel by a histogram of filter outputs (Heeger
and Bergen, 1995; Puzicha et al., 1997). While this has been shown
to be a powerful technique, it leaves open two important questions.
Firstly, there is the matter of what size window to use for pooling
the histogram: the integration scale. Secondly, these approaches
only make use of marginal binning, thereby missing out on the
informative characteristics that joint assemblies of filter outputs
exhibit at points of interest. We address each of these questions in
the following section.
2.1. Textons
Though the representation of textures using filter responses is extremely
versatile, one might say that it is overly redundant (each pixel value is
represented by N_fil real-valued filter responses, where N_fil is 40 for our
particular filter set). Moreover, it should be noted that we are characterizing
textures, entities with some spatially repeating properties by
definition. Therefore, we do not expect the filter responses to be totally
different at each pixel over the texture. Thus, there should be several
distinct filter response vectors and all others are noisy variations of
them.
This observation leads to our proposal of clustering the filter responses
into a small set of prototype response vectors. We call these
prototypes textons. Algorithmically, each texture is analyzed using the
filter bank shown in Figure 4. Each pixel is now transformed to an N_fil-
dimensional vector of filter responses. These vectors are clustered using
K-means. The criterion for this algorithm is to find K "centers" such
that after assigning each data vector to the nearest center, the sum
of the squared distance from the centers is minimized. K-means is a
greedy algorithm that finds a local minimum of this criterion.
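A minimal sketch of this pixel-to-texton step, under the assumption that the stack of filter responses has already been computed (here scikit-learn's KMeans stands in for the K-means step; the array shapes and names are ours):

import numpy as np
from sklearn.cluster import KMeans

def compute_textons(filter_responses, k=36, seed=0):
    """filter_responses: array of shape (H, W, n_filters).
    Returns (texton labels of shape (H, W), cluster centers of shape (k, n_filters))."""
    h, w, n_fil = filter_responses.shape
    vectors = filter_responses.reshape(-1, n_fil)       # one response vector per pixel
    km = KMeans(n_clusters=k, n_init=4, random_state=seed).fit(vectors)
    labels = km.labels_.reshape(h, w)                   # texton channel index per pixel
    return labels, km.cluster_centers_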
It is useful to visualize the resulting cluster centers in terms of
the original filter kernels. To do this, recall that each cluster center
represents a set of projections of each filter onto a particular image
patch. We can solve for the image patch corresponding to each cluster
center in a least squares sense by premultiplying the vectors representing
the cluster centers by the pseudoinverse of the filterbank (Jones
and Malik, 1992). The matrix representing the filterbank is formed by
concatenating the filter kernels into columns and placing these columns
side by side. The set of synthesized image patches for two test images
are shown in Figures 5(b) and 6(b). These are our textons. The textons
represent assemblies of filter outputs that are characteristic of the local
image structure present in the image.
Looking at the polka-dot example, we find that many of the textons
correspond to translated versions of dark spots. Also included are
a number of oriented edge elements of low contrast and two textons
representing nearly uniform brightness. The pixel-to-texton mapping
is shown in Figure 5(c). Each subimage shows the pixels in the image
that are mapped to the corresponding texton in Figure 5(b). We refer to
this collection of discrete point sets as the texton channels. Since each
pixel is mapped to exactly one texton, the texton channels constitute
a partition of the image.
Textons and texton channels are also shown for the penguin image
in Figure 6. Notice in the two examples how much the texton set can
change from one image to the next. The spatial characteristics of both
the deterministic polka dot texture and the stochastic rocks texture
are captured across several texton channels. In general, the texture
boundaries emerge as point density changes across the different texton
channels. In some cases, a texton channel contains activity inside a
particular textured region and nowhere else. By comparison, vectors of
filter outputs generically respond with some value at every pixel, a
considerably less clean alternative.
The mapping from pixel to texton channel provides us with a number
of discrete point sets where before we had continuous-valued filter vec-
tors. Such a representation is well suited to the application of techniques
from computational geometry and point process statistics. With these
tools, one can approach questions such as, "what is the neighborhood
of a texture element?" and "how similar are two pixels inside a textured
region?"
2.1.1. Local Scale and Neighborhood Selection
The texton channel representation provides us a natural way to define
texture scale. If the texture is composed of texels, we might want to define
a notion of texel neighbors and consider the mean distance between
them to be a measure of scale. Of course, many textures are stochastic
and detecting texels reliably is hard even for regular textures.
With textons we have a "soft" way to define neighbors. For a given
pixel in a texton channel, first consider it as a "thickened point", a disk
centered at it. The idea is that while textons are being associated with
pixels, since they correspond to assemblies of filter outputs, it is better
(a) (b)
(c)
Figure 5. (a) Polka-dot image. (b) Textons found via K-means, shown
in decreasing order by norm. (c) Mapping of pixels to the texton channels. The
dominant structures captured by the textons are translated versions of the dark
spots. We also see textons corresponding to faint oriented edge and bar elements.
Notice that some channels contain activity inside a textured region or along an
oriented contour and nowhere else.
(a) (b)
(c)
Figure 6. (a) Penguin image. (b) Textons found via K-means, shown
in decreasing order by norm. (c) Mapping of pixels to the texton channels. Among
the textons we see edge elements of varying orientation and contrast along with
elements of the stochastic texture in the rocks.
(a) (b) (c)
Figure
7. Illustration of scale selection. (a) Closeup of Delaunay triangulation of
pixels in a particular texton channel for polka dot image. (b) Neighbors of thickened
point for pixel at center. The thickened point lies within inner circle. Neighbors are
restricted to lie within outer circle. (c) Selected scale based on median of neighbor
edge lengths, shown by circle, with all pixels falling inside circle marked with dots.
to think of them as corresponding to a small image disk defined by
the scale used in the Gaussian derivative filters. Recall Koenderink's
aphorism about a point in image analysis being a Gaussian blob of
small σ.
Now consider the Delaunay neighbors of all the pixels in the thickened
point of a pixel i which lie closer than some outer scale. The
intuition is that these will be pixels in spatially neighboring texels.
Compute the distances of all these pixels to i; the median of these
constitutes a robust local measure of inter-texel distance. We define
the local scale α(i) to be 1.5 times this median distance.
In Figure 7(a), the Delaunay triangulation of a zoomed-in portion
of one of the texton channels in the polka-dot dress of Figure 5(a) is
shown atop a brightened version of the image. Here the nodes represent
points that are similar in the image while the edges provide proximity
information.
The local scale α(i) is based just on the texton channel for the texton
at i. Since neighboring pixels should have similar scale and could be
drawn from other texton channels, we can improve the estimate of scale
by median filtering of the scale image.
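The scale-selection step can be sketched as follows (our illustration, not the authors' code, and slightly simplified: it gathers Delaunay neighbors of a single query point rather than of every pixel in the thickened point). The outer_radius parameter is an assumed stand-in for the paper's outer scale.

import numpy as np
from scipy.spatial import Delaunay

def local_scale(channel_points, query_index, outer_radius=30.0):
    """channel_points: (n, 2) array of pixel coordinates in one texton channel.
    Returns 1.5 times the median distance from the query point to its Delaunay
    neighbors that lie within outer_radius."""
    tri = Delaunay(channel_points)
    indptr, indices = tri.vertex_neighbor_vertices
    neighbors = indices[indptr[query_index]:indptr[query_index + 1]]
    dists = np.linalg.norm(channel_points[neighbors] - channel_points[query_index], axis=1)
    dists = dists[dists < outer_radius]
    if dists.size == 0:
        return outer_radius                 # fall back when the channel is sparse
    return 1.5 * np.median(dists)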
2.1.2. Computing windowed texton histograms
Pairwise texture similarities will be computed by comparing windowed
texton histograms. We define the window W(i) for a generic pixel i as
the axis-aligned square of radius α(i) centered on pixel i.
Each histogram has K bins, one for each texton channel. The value of
the kth histogram bin for a pixel i is found by counting how many pixels
in texton channel k fall inside the window W(i). Thus the histogram
represents texton frequencies in a local neighborhood. We can write
this as
h_i(k) = sum over j in W(i) of I[T(j) = k],
where I[.] is the indicator function and T(j) returns the texton assigned
to pixel j.
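A direct transcription of this definition (our sketch; texton_labels would come from the K-means step sketched earlier, and the window radius from the selected scale):

import numpy as np

def windowed_texton_histogram(texton_labels, i, j, radius, k):
    """Histogram of texton labels in the axis-aligned square window of the given
    radius centered on pixel (i, j). texton_labels: (H, W) int array with values in [0, k)."""
    h, w = texton_labels.shape
    r = int(round(radius))
    window = texton_labels[max(0, i - r):min(h, i + r + 1),
                           max(0, j - r):min(w, j + r + 1)]
    return np.bincount(window.ravel(), minlength=k)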
3. The Normalized Cut Framework
In the Normalized Cut framework (Shi and Malik, 1997), Shi and Malik
formulate visual grouping as a graph partitioning problem. The nodes
of the graph are the entities that we want to partition; for example,
in image segmentation, they are the pixels. The edges between two
nodes correspond to the strength with which these two nodes belong
to one group; again, in image segmentation, the edges of the graph
correspond to how much two pixels agree in brightness, color, etc.
Intuitively, the criterion for partitioning the graph will be to minimize
the sum of weights of connections across the groups and maximize the
sum of weights of connections within the groups.
Let G = (V, E) be a weighted undirected graph, where V are
the nodes and E are the edges. Let A, B be a partition of the nodes,
i.e. A ∪ B = V and A ∩ B = ∅. In graph theoretic language, the similarity
between these two groups is called the cut:
cut(A, B) = sum over i in A, j in B of W_ij,
where W_ij is the weight on the edge between nodes i and j. Shi and
Malik proposed to use a normalized similarity criterion to evaluate a
partition. They call it the normalized cut:
Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V),
where assoc(A, V) = sum over i in A, k in V of W_ik is the total connection from
nodes in A to all the nodes in the graph. For more discussion of this
criterion, please refer to (Shi and Malik, 1997; Shi and Malik, 2000).
One key advantage of using the normalized cut is that a good approximation
to the optimal partition can be computed very efficiently. Let
W be the association matrix, i.e. W_ij is the weight between nodes i and
j in the graph. Let D be the diagonal matrix such that D_ii = sum over j of W_ij,
i.e. D_ii is the sum of the weights of all the connections to node i. Shi and
Malik showed that the optimal partition can be found by computing
min over y of Ncut = min over y of [y^T (D - W) y] / [y^T D y],
where y = {y_i} is a binary indicator vector specifying the group
identity for each pixel, i.e. y_i takes one value if pixel i belongs to group A and
another if it belongs to B. N is the number of pixels. Notice that
the above expression is a Rayleigh quotient. If we relax y to take on
real values (instead of two discrete values), we can optimize Equation 3
by solving a generalized eigenvalue system. Efficient algorithms with
polynomial running time are well known for solving such problems.
The process of transforming the vector y into a discrete bipartition
and the generalization to more than two groups is discussed in (Section 5).
4. Defining the Weights
The quality of a segmentation based on Normalized Cuts or any other
algorithm based on pairwise similarities fundamentally depends on the
weights (the W_ij's) that are provided as input. The weights should
be large for pixels that should belong together and small otherwise.
We now discuss our method for computing the W_ij's. Since we seek to
combine evidence from two cues, we will first discuss the computation
of the weights for each cue in isolation, and then describe how the two
weights can be combined in a meaningful fashion.
4.1. Images without Texture
Consider for the moment the "cracked earth" image in Figure 1(e).
Such an image contains no texture and may be treated in a framework
based solely on contour features. The definition of the weights in this
case, which we denote W_ij^IC, is adopted from the intervening contour
method introduced in (Leung and Malik, 1998).
Figure 8 illustrates the intuition behind this idea. On the left is
an image. The middle figure shows a magnified part of the original
image. On the right is the orientation energy. There is an extended
contour separating p3 from p1 and p2. Thus, we expect p1 to be much
more strongly related to p2 than p3. This intuition carries over in our
definition of dissimilarity between two pixels: if the orientation energy
along the line between two pixels is strong, the dissimilarity between
these pixels should be high (and W_ij should be low).
Figure 8. Left: the original image. Middle: part of the image marked by the box.
The intensity values at pixels p1, p2 and p3 are similar. However, there is a contour
in the middle, which suggests that p1 and p2 belong to one group while p3 belongs
to another. Just comparing intensity values at these three locations will mistakenly
suggest that they belong to the same group. Right: orientation energy. Somewhere
along l2, the orientation energy is strong, which correctly proposes that p1 and p3 belong
to two different partitions, while orientation energy along l1 is weak throughout,
which will support the hypothesis that p1 and p2 belong to the same group.
Contour information in an image is computed "softly" through orientation
energy (OE) from elongated quadrature filter pairs. We introduce
a slight modification here to allow for exact sub-pixel localization
of the contour by finding the local maxima in the orientation energy
perpendicular to the contour orientation (Perona and Malik, 1990).
The orientation energy gives the confidence of this contour. W_ij^IC is
then defined as follows:
W_ij^IC = 1 - the maximum of p_con(x) over x in M_ij,
where M_ij is the set of local maxima along the line joining pixels i and
j. Recall from (Section 2) that p_con(x), 0 < p_con < 1, is nearly 1 whenever the
oriented energy maximum at x is sufficiently above the noise level.
In words, two pixels will have a weak link between them if there is
a strong local maximum of orientation energy along the line joining
the two pixels. On the contrary, if there is little energy, for example
in a constant brightness region, the link between the two pixels will
be strong. Contours measured at different scales can be taken into
account by computing the orientation energy maxima at various scales
and setting p_con to be the maximum over all the scales at each pixel.
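A toy version of the intervening-contour weight (our sketch; it samples p_con along the straight segment between two pixels rather than tracking true orientation-energy maxima, so it should be read only as an illustration of the 1 - max formulation):

import numpy as np

def intervening_contour_weight(p_con, pixel_i, pixel_j, n_samples=32):
    """p_con: (H, W) map of contour probabilities after non-maximum suppression.
    Weight is 1 minus the strongest contour response sampled on the segment i-j."""
    (yi, xi), (yj, xj) = pixel_i, pixel_j
    t = np.linspace(0.0, 1.0, n_samples)
    ys = np.round(yi + t * (yj - yi)).astype(int)
    xs = np.round(xi + t * (xj - xi)).astype(int)
    return 1.0 - p_con[ys, xs].max()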
4.2. Images that are Texture Mosaics
Now consider the case of images wherein all of the boundaries arise
from neighboring patches of different texture (e.g. Figure 1(d)). We
compute pairwise texture similarities by comparing windowed texton
histograms computed using the technique described previously (Section 2.1.2).
A number of methods are available for comparing histograms. We use
the χ² test, defined as
χ²(h_i, h_j) = (1/2) * sum over k of [h_i(k) - h_j(k)]² / [h_i(k) + h_j(k)],
where h_i and h_j are the two histograms. For an empirical comparison
of the χ² test versus other texture similarity measures, see (Puzicha
et al., 1997).
W_ij^TX is then defined as follows:
W_ij^TX = exp(-χ²(h_i, h_j) / σ_TX).
If histograms h_i and h_j are very different, χ² is large, and the weight
W_ij^TX is small.
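In code, the texture weight reduces to a couple of lines (our sketch; sigma_tx is a tuning constant whose value is not specified here):

import numpy as np

def chi_square_distance(h_i, h_j, eps=1e-10):
    """0.5 * sum_k (h_i[k] - h_j[k])^2 / (h_i[k] + h_j[k]) for two texton histograms."""
    h_i = np.asarray(h_i, dtype=float)
    h_j = np.asarray(h_j, dtype=float)
    return 0.5 * np.sum((h_i - h_j) ** 2 / (h_i + h_j + eps))

def texture_weight(h_i, h_j, sigma_tx=0.1):
    """Similarity weight: near 1 for similar histograms, near 0 for very different ones."""
    return float(np.exp(-chi_square_distance(h_i, h_j) / sigma_tx))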
4.3. General Images
Finally we consider the general case of images that contain boundaries
of both kinds. This presents us with the problem of cue integration.
The obvious approach to cue integration is to define the weight between
pixels i and j as the product of the contribution from each cue:
W_ij = W_ij^IC * W_ij^TX. The idea is that if either of the cues suggests
that i and j should be separated, the composite weight, W_ij, should be
small. We must be careful, however, to avoid the problems listed in the
Introduction (Section 1) by suitably gating the cues. The spirit of the gating
method is to make each cue "harmless" in locations where the other
cue should be operating.
4.3.1. Estimating texturedness
As illustrated in Figure 2, the fact that a pixel survives the non-maximum
suppression step does not necessarily mean that that pixel
lies on a region boundary. Consider a pixel inside a patch of uniform
texture: its oriented energy is large but it does not lie on the boundary
of a region. Conversely, consider a pixel lying between two uniform
patches of just slightly different brightness: it does lie on a region
boundary but its oriented energy is small. In order to estimate the
"probability" that a pixel lies on a boundary, it is necessary to take
more surrounding information into account. Clearly the true value of
this probability is only determined after the final correct segmentation,
which is what we seek to find. At this stage our goal is to formulate
a local estimate of the texturedness of the region surrounding a pixel.
Since this is a local estimate, it will be noisy but its objective will be
to bootstrap the global segmentation procedure.
Our method of computing this value is based on a simple comparison of texton distributions on either side of a pixel relative to its dominant orientation. Consider a generic pixel q at an oriented energy maximum, and consider a circle centered on q whose radius is the selected scale at q. We first divide this circle in two along the diameter aligned with the dominant orientation at q. Note that the contour passing through q is tangent to the diameter, which is its best straight line approximation. The pixels in the disk can be partitioned into three sets, D_0, D_− and D_+, which are the pixels in the strip along the diameter, the pixels to the left of D_0, and the pixels to the right of D_0, respectively. To compute our measure of texturedness, we consider two half window comparisons with D_0 assigned to each side. Assume without loss of generality that D_0 is first assigned to the "left" half. Denote the K-bin histogram of D_− ∪ D_0 by hL and that of D_+ by hR, respectively, and consider the χ² statistic between the two histograms. We repeat the test with the histograms of D_− and D_0 ∪ D_+ and retain the maximum of the two resulting values, which we denote χ²_LR. We can convert this to a probability-like value p_texture using a sigmoid. This value, which ranges between 0 and 1, is small if the distributions on the two sides are very different and large otherwise. Note that in the case of untextured regions, such as a brightness step edge, the textons lying along and parallel to the boundary make the statistics of the two sides different. This is illustrated in Figure 9. Roughly, p_texture ≈ 1 for oriented energy maxima in texture and p_texture ≈ 0 for contours. p_texture is defined to be 0 at pixels which are not oriented energy maxima.
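A minimal sketch of the half-window comparison just described, assuming unnormalized texton count histograms for D_0, D_− and D_+; the sigmoid gain and threshold are made-up parameters, not values from the paper.

import numpy as np

def chi2(h_a, h_b):
    d = h_a + h_b
    m = d > 0
    return 0.5 * np.sum((h_a[m] - h_b[m]) ** 2 / d[m])

def p_texture(h_strip, h_left, h_right, gain=10.0, threshold=0.25):
    # h_strip, h_left, h_right: texton count histograms of D_0, D_- and D_+.
    chi2_lr = max(chi2(h_left + h_strip, h_right),   # strip assigned to the left half
                  chi2(h_left, h_strip + h_right))   # strip assigned to the right half
    # Similar halves (small chi-square) -> value near 1 (texture interior);
    # very different halves (large chi-square) -> value near 0 (region contour).
    return 1.0 / (1.0 + np.exp(gain * (chi2_lr - threshold)))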
4.3.2. Gating the Contour Cue
The contour cue is gated by means of suppressing contour energy according to the value of p_texture: the gated value, p_B, is obtained by attenuating the contour energy by the factor 1 − p_texture. In principle, this value can be computed and dealt with independently at each filter scale. For our purposes, we found it sufficient simply to keep the maximum value of p_B with respect to scale. The gated contour energy is illustrated in Figure 10, right. The corresponding intervening-contour weight W^IC_ij is then computed from these gated values.
Figure 9. Illustration of half windows used for the estimation of the texturedness. The texturedness of a label is based on a χ² test on the textons in the two sides of a box as shown above for two sample pixels. The size and orientation of the box is determined by the selected scale and dominant orientation for the pixel at center. Within the rocky area, the texton statistics are very similar, leading to a low χ² value. On the edge of the wing, the χ² value is relatively high due to the dissimilarity of the textons that fire on either side of a step edge. Since in the case of the contour the contour itself can lie along the diameter of the circle, we consider two half-window partitions: one where the thin strip around the diameter is assigned to the left side, and one where it is assigned to the other. We consider both possibilities and retain the maximum of the two resulting χ² values.
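A sketch of the gating computation, assuming p_B is the product of the contour probability and 1 − p_texture at each scale and that the intervening-contour weight takes the form 1 − max p_B along the line joining the two pixels; both functional forms are assumptions consistent with the verbal description above, not formulas quoted from the paper.

import numpy as np

def gate_contour(p_con, p_texture_map):
    # p_con, p_texture_map: arrays of shape (num_scales, H, W).
    # Suppress contour evidence where the pixel looks like texture interior,
    # then keep the maximum gated value over scales.
    p_b = (1.0 - p_texture_map) * p_con
    return p_b.max(axis=0)

def intervening_contour_weight(line_maxima_values):
    # Values of p_B at the oriented-energy maxima on the line joining pixels i and j.
    if len(line_maxima_values) == 0:
        return 1.0                         # no intervening contour -> strong link
    return 1.0 - max(line_maxima_values)   # strong contour in between -> weak link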
4.3.3. Gating the Texture Cue
The texture cue is gated by computing a texton histogram at each pixel which takes into account the texturedness measure p_texture. Let h_i be the K-bin texton histogram computed using Equation 2. We define a gated histogram ĥ_i by introducing a 0th bin. The intuition is that the 0th bin will keep a count of the number of pixels which do not correspond to texture. These pixels arise in two forms: (1) pixels which are not oriented energy maxima; (2) pixels which are oriented energy maxima but correspond to boundaries between two regions, and thus should not take part in texture processing, to avoid the problems discussed in (§1). More precisely, ĥ_i is defined in terms of the quantities below.
Figure 10. Gating the contour cue. Left: original image. Top: oriented energy after nonmaximal suppression, OE. Bottom: 1 − p_texture. Right: p_B, the product of 1 − p_texture and the oriented energy. Note that this can be thought of as a "soft" edge detector which has been modified to no longer fire on texture regions.
Figure 11. Gating the texture cue. Left: original image. Top: texton labels, shown in pseudocolor. Middle: local scale estimate. Bottom: 1 − p_texture. Darker grayscale indicates larger values. Right: local texton histograms at the selected scale are gated using p_texture as explained in §4.3.3.
Here N(i) denotes all the oriented energy maxima lying inside the window W(i), and N_B is the number of pixels which are not oriented energy maxima.
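A sketch of the gated histogram with a 0th bin; the soft split of each oriented-energy maximum between its texton bin (weight p_texture) and the 0th bin (weight 1 − p_texture) is an assumption about how the counts over N(i) and N_B are combined, made here only for illustration.

import numpy as np

def gated_texton_histogram(texton_labels, p_texture_vals, is_energy_max, K):
    # texton_labels: texton id in 1..K for each pixel in the window W(i)
    # p_texture_vals: texturedness estimate per pixel
    # is_energy_max: True if the pixel is an oriented-energy maximum
    h = np.zeros(K + 1)                      # bin 0 plus K texton bins
    for label, p_tex, is_max in zip(texton_labels, p_texture_vals, is_energy_max):
        if not is_max:
            h[0] += 1.0                      # counted as "no texture" evidence
        else:
            h[label] += p_tex                # textured evidence
            h[0] += 1.0 - p_tex              # contour-like evidence
    return h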
4.3.4. Combining the Weights
After each cue has been gated by the above procedure, we are free to perform simple multiplication of the weights. More specifically, we first obtain W^IC_ij using Equation 6. Then we obtain W^TX_ij using Equation 4 with the gated versions of the histograms. Then we simply define the combined weight as W_ij = W^IC_ij · W^TX_ij.
4.3.5. Implementation Details
The weight matrix is defined between any pair of pixels i and j. Naively, one might connect every pair of pixels in the image. However, this is not necessary. Pixels very far apart in the image have very small likelihood of belonging to the same region. Moreover, dense connectivity means that we need to solve for the eigenvectors of a matrix of size N_pix × N_pix, where N_pix is close to a million for a typical image. In practice, a sparse and short-ranged connection pattern does a very good job. In our experiments, all the images are of size 128 × 192. Each pixel is connected to pixels within a radius of 30. Furthermore, a sparse sampling is implemented such that the number of connections is approximately constant at each radius. The number of non-zero connections is 1000 in our experiments. For images of different sizes, the connection radius can be scaled appropriately.
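A sketch of such a sparse connection pattern: sample a roughly constant number of offsets per radius up to the connection radius of 30; the per-ring count and the random sampling rule are assumptions chosen so that the total is near 1000 connections per pixel.

import numpy as np

def sparse_offsets(max_radius=30, per_ring=32, seed=0):
    # Offsets (dy, dx) connecting a pixel to a subsample of the pixels within
    # max_radius, with roughly the same number of samples at each radius.
    rng = np.random.default_rng(seed)
    offsets = set()
    for r in range(1, max_radius + 1):
        for a in rng.uniform(0.0, 2.0 * np.pi, size=per_ring):
            offsets.add((int(round(r * np.sin(a))), int(round(r * np.cos(a)))))
    offsets.discard((0, 0))
    return sorted(offsets)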
The parameters for the various formulae are given here:
1. The image brightness lies in the range [0, 1].
2.
3. The number of textons computed using K-means:
4. The textons are computed following a contrast normalization step, motivated by Weber's law. Let |F(x)| be the L2 norm of the filter responses at pixel x. We normalize the filter responses accordingly.
5.
Note that these parameters are the same for all the results shown in this paper.
5. Computing the Segmentation
With a properly defined weight matrix, the normalized cut formulation discussed in (§3) can be used to compute the segmentation. However, the weight matrix defined in the previous section is computed using only local information, and is thus not perfect. The ideal weight should be computed in such a way that region boundaries are respected. More precisely, (1) texton histograms should be collected from pixels in a window residing exclusively in one and only one region. If instead an isotropic window is used, pixels near a texture boundary will have a histogram computed from textons in both regions, thus "polluting" the histogram. (2) Intervening contours should only be considered at region boundaries. Any responses to the filters inside a region are either caused by texture or are simply mistakes. However, these two criteria mean that we need a segmentation of the image, which is exactly the reason why we compute the weights in the first place! This chicken-and-egg problem suggests an iterative framework for computing the segmentation. First, use the local estimation of the weights to compute a segmentation. This segmentation is done so that no region boundaries are missed, i.e. it is an over-segmentation. Next, use this initial segmentation to update the weights. Since the initial segmentation does not miss any region boundaries, we can coarsen the graph by merging all the nodes inside a region into one super-node. We can then use these super-nodes to define a much simpler segmentation problem. Of course, we can continue this iteration several times. However, we elect to stop after 1 iteration.
The procedure consists of the following 4 steps:
1. Compute an initial segmentation from the locally estimated weight
matrix.
2. Update the weights using the initial segmentation.
3. Coarsen the graph with the updated weights to reduce the segmentation
to a much simpler problem.
4. Compute a final segmentation using the coarsened graph.
5.1. Computing the Initial Segmentation
Computing a segmentation of the image amounts to computing the eigenvectors of the generalized eigensystem (D − W)v = λDv (Equation 3). The eigenvectors can be thought of as a transformation of the image into a new feature vector space. In other words, each pixel in the original image is now represented by a vector with the components coming from the corresponding pixel across the different eigenvectors. Finding a partition of the image is done by finding the clusters in this eigenvector representation. This is a much simpler problem because the eigenvectors have essentially put regions of coherent descriptors according to our cue of texture and contour into very tight clusters. Simple techniques such as K-means can do a very good job in finding these clusters. The following procedure is used:
1. Compute the eigenvectors corresponding to the second smallest to the twelfth smallest eigenvalues of the generalized eigensystem, with corresponding eigenvalues λ_2, ..., λ_12.
2. Weight^7 the eigenvectors according to the eigenvalues: v̂_k = v_k / √λ_k, k = 2, ..., 12. The eigenvalues indicate the "goodness" of the corresponding eigenvectors. Now each pixel is transformed to an 11 dimensional vector represented by the weighted eigenvectors.
3. Perform vector quantization on the 11 eigenvectors using K-means. Start with K_0 centers. Let the corresponding RMS error for the quantization be e_0. Greedily delete one center at a time such that the increase in quantization error is the smallest. Continue this process until we arrive at K centers, when the error e is just greater than 1.1 e_0.
This partitioning strategy provides us with an initial segmentation of the image. This is usually an over-segmentation. The main goal here is simply to provide an initial guess for us to modify the weights. Call this initial segmentation of the image S_0. Let the number of segments be N_0; a typical value of N_0 is far smaller than the number of pixels.
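A sketch of this initial clustering step, assuming the 1/√λ weighting reconstructed above, scikit-learn's KMeans for the initial centers, and an assumed initial number of centers K0; the greedy center deletion follows the stated 1.1·e_0 stopping rule.

import numpy as np
from sklearn.cluster import KMeans

def initial_segmentation(eigvecs, eigvals, K0=30, factor=1.1, seed=0):
    # eigvecs: (n_pixels, 11) eigenvectors 2..12; eigvals: (11,) eigenvalues.
    feats = eigvecs / np.sqrt(eigvals)            # weight each eigenvector by 1/sqrt(lambda)
    centers = KMeans(n_clusters=K0, n_init=4, random_state=seed).fit(feats).cluster_centers_

    def rms_error(C):
        d2 = ((feats[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        return np.sqrt(d2.min(axis=1).mean())

    e0 = rms_error(centers)
    while len(centers) > 1:
        errors = [rms_error(np.delete(centers, i, axis=0)) for i in range(len(centers))]
        best = int(np.argmin(errors))             # deletion causing the smallest error increase
        if errors[best] > factor * e0:            # stop just before exceeding 1.1 * e0
            break
        centers = np.delete(centers, best, axis=0)
    labels = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
    return labels, centers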
5.2. Updating Weights
The initial segmentation S_0 found in the previous step can provide a good approximation to modify the weights as we have discussed earlier. We modify the weight matrix as follows:
- To compute the texton histograms for a pixel in R_k, textons are collected only from the intersection of R_k and the isotropic window of size determined by the selected scale.
- p_B is set to zero for pixels that are not on the region boundaries of S_0.
The modified weight matrix is an improvement over the original local estimation of weights.
5.3. Coarsening the Graph
By hypothesis, since S_0 is an over-segmentation of the image, there are no boundaries missed. We do not need to recompute a segmentation for the original problem of N pixels. We can coarsen the graph, where each node of the new graph is a segment in S_0. The weight between two nodes in this new graph is computed as

    Ŵ_kl = Σ_{i ∈ R_k} Σ_{j ∈ R_l} W_ij,

where R_k and R_l indicate segments in S_0 (k, l ∈ {1, ..., N_0}), Ŵ is the weight matrix of the coarsened graph and W is the weight matrix of the original graph. This coarsening strategy is a very standard technique in the application of graph partitioning (Metis, 1999). Now, we have reduced the original segmentation problem with an N × N weight matrix to a much simpler and faster segmentation problem of size N_0 × N_0, without losing performance.
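The coarsening step can be written compactly with a pixel-to-segment indicator matrix, as in this sketch.

import numpy as np

def coarsen_weights(W, segment_labels):
    # W: (N, N) weight matrix; segment_labels: (N,) ints in {0, ..., N0-1}.
    # W_hat[k, l] = sum over i in R_k and j in R_l of W[i, j].
    N0 = int(segment_labels.max()) + 1
    S = np.zeros((W.shape[0], N0))
    S[np.arange(W.shape[0]), segment_labels] = 1.0   # indicator: pixel -> segment
    return S.T @ W @ S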
5.4. Computing the Final Segmentation
After coarsening the graph, we have turned the segmentation problem into a very simple graph partitioning problem of very small size. We compute the final segmentation using the following procedure:
1. Compute the second smallest eigenvector for the generalized eigensystem using Ŵ.
2. Threshold the eigenvector to produce a bi-partitioning of the image. A number of different values uniformly spaced within the range of the eigenvector are tried as the threshold. The one producing a partition which minimizes the normalized cut value is chosen. The corresponding partition is the best way to segment the image into two regions.
3. Recursively repeat steps 1 and 2 for each of the partitions until the normalized cut value is larger than 0.1.
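A sketch of this recursive bipartitioning on the (small) coarsened graph, assuming dense matrices, strictly positive node degrees, and an assumed number of uniformly spaced candidate thresholds.

import numpy as np
from scipy.linalg import eigh

def ncut_value(W, mask):
    d = W.sum(axis=1)
    cut = W[mask][:, ~mask].sum()
    return cut / d[mask].sum() + cut / d[~mask].sum()

def recursive_ncut(W, nodes=None, max_ncut=0.1, n_thresholds=20):
    if nodes is None:
        nodes = np.arange(W.shape[0])
    if len(nodes) < 2:
        return [nodes]
    D = np.diag(W.sum(axis=1))                 # degrees assumed strictly positive
    vals, vecs = eigh(D - W, D)                # generalized eigensystem (D - W) y = lambda D y
    y = vecs[:, 1]                             # second smallest eigenvector
    best_mask, best_val = None, np.inf
    for t in np.linspace(y.min(), y.max(), n_thresholds + 2)[1:-1]:
        mask = y > t
        if mask.all() or (~mask).all():
            continue
        v = ncut_value(W, mask)
        if v < best_val:
            best_mask, best_val = mask, v
    if best_mask is None or best_val > max_ncut:
        return [nodes]                         # stop: best split exceeds the 0.1 threshold
    A, B = best_mask, ~best_mask
    return (recursive_ncut(W[np.ix_(A, A)], nodes[A], max_ncut, n_thresholds)
            + recursive_ncut(W[np.ix_(B, B)], nodes[B], max_ncut, n_thresholds))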
Figure 12. p_B is allowed to be non-zero only at the pixels marked.
5.5. Segmentation in Windows
The above procedure performs very well in images with a small number of groups. However, in complicated images, smaller regions can be missed. This problem is intrinsic for global segmentation techniques, where the goal is to find a big-picture interpretation of the image. This problem can be dealt with very easily by performing the segmentation in windows.
Consider the case of breaking up the image into quadrants. Define Q_i to be the set of pixels in the i-th quadrant, so that the union of the Q_i is the whole image. Extend each quadrant by including all the pixels which are less than a distance r from any pixels in Q_i, with r being the maximum texture scale over the whole image. Call these enlarged windows Q̂_i. Note that these windows now overlap each other. Corresponding to each Q̂_i, a weight matrix Ŵ_i is defined by pulling out from the original weight matrix W the edges whose end-points are nodes in Q̂_i. For each Q̂_i, an initial segmentation Ŝ^i_0 is obtained according to the procedure in (§5.1). The weights are updated as in (§5.2). The extension of each quadrant makes sure that the arbitrary boundaries created by the windowing do not affect this procedure:
Texton histogram update: For each pixel in Q_i, the largest possible histogram window (a disc of radius given by the selected scale) is entirely contained in Q̂_i by virtue of the extension. This means the texton histograms are computed from all the relevant pixels.
Contour update: The boundaries in Q_i are a proper subset of the boundaries in Q̂_i. So, we can set the values of p_B at a pixel in Q_i to be zero unless it lies on a region boundary found in Q̂_i. This enables the correct computation of W^IC_ij. Two example contour update maps are shown in Figure 12.
Initial segmentations can be computed for each Q̂_i. They are restricted to Q_i to produce S^i_0. These segmentations are merged to form an initial segmentation S_0 of the whole image. At this stage, fake boundaries from the windowing effect can occur. Two examples are shown in Figure 13. The graph is then coarsened and the final segmentation is computed as in (§5.3) and (§5.4).
Figure 13. Initial segmentation of the image, used for coarsening the graph and computing the final segmentation.
6. Results
We have run our algorithm on a variety of natural images. Figures 14 to 17 show typical segmentation results. In all the cases, the regions are cleanly separated from each other using combined texture and contour cues. Notice that for all these images, a single set of parameters is used. Color is not used in any of these examples and can readily be included to further improve the performance of our algorithm^8. Figure 14 shows results for animal images. Results for images containing people are shown in Figure 15 while natural and man-made scenes appear in Figure 16. Segmentation results for paintings are shown in Figure 17. A set of more than 1000 images from the commercially available Corel Stock Photos database has been segmented using our algorithm^9.
Acknowledgements
The authors would like to thank the Berkeley vision group, especially
Chad Carson, Alyosha Efros, David Forsyth and Yair Weiss for useful
discussions. This research was supported by (ARO) DAAH04-96-1-
0341, the Digital Library Grant IRI-9411334, NSF Graduate Fellowships
to SB and JS and a Berkeley Fellowship to TL.
Notes
1 For more discussions and variations of the K-means algorithm, the reader is
referred to (Duda and Hart, 1973; Gersho and Gray, 1992).
2 It is straightforward to develop a method for merging translated versions of the
same basic texton, though we have not found it necessary. Merging in this manner
decreases the number of channels needed but necessitates the use of phase-shift
information.
3 This is set to 3% of the image dimension in our experiments. This is tied to the intermediate scale of the filters in the filter set.
4 This is set to 10% of the image dimension in our experiments.
5 Finding the true optimal partition is an NP-complete problem.
6 The eigenvector corresponding to the smallest eigenvalue is constant, and thus useless.
7 Since the normalized cut can be interpreted as a spring-mass system (Shi and Malik, 2000), this normalization comes from the equipartition theorem in classical statistical mechanics, which states that if a system is in equilibrium, then it has equal energy in each mode (Belongie and Malik, 1998).
8 When color information is available, the similarity W_ij becomes a product that also includes a color term W^COLOR_ij. Color similarity, W^COLOR_ij, is computed using χ² differences over color histograms, similar to texture measured using texture histograms. Moreover, color can be clustered into "colorons", analogous to textons.
9 These results are available at the following web page:
http://www.cs.berkeley.edu/projects/vision/Grouping/overview.html
--R
Spatial Vision.
Pattern Classi
Computer Vision
Vector quantization and signal compression.
Computer Vision.
IEEE Conf.
Intell. to appear.
--TR
--CTR
David R. Martin , Charless C. Fowlkes , Jitendra Malik, Learning to Detect Natural Image Boundaries Using Local Brightness, Color, and Texture Cues, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.5, p.530-549, May 2004
Bernd Fischer , Joachim M. Buhmann, Path-Based Clustering for Grouping of Smooth Curves and Texture Segmentation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.4, p.513-518, April
Anant Hegde , Deniz Erdogmus , Deng S. Shiau , Jose C. Principe , Chris J. Sackellares, Clustering approach to quantify long-term spatio-temporal interactions in epileptic intracranial electroencephalography, Computational Intelligence and Neuroscience, v.2007 n.2, p.1-8, April 2007
Anant Hegde , Deniz Erdogmus , Deng S. Shiau , Jose C. Principe , Chris J. Sackellares, Clustering approach to quantify long-term spatio-temporal interactions in epileptic intracranial electroencephalography, Computational Intelligence and Neuroscience, v.7 n.3, p.1-18, August 2007
Stella X. Yu , Jianbo Shi, Segmentation Given Partial Grouping Constraints, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.2, p.173-183, January 2004
Ann B. Lee , Kim S. Pedersen , David Mumford, The Nonlinear Statistics of High-Contrast Patches in Natural Images, International Journal of Computer Vision, v.54 n.1-3, p.83-103, August-September
Aleix M. Martínez , Pradit Mittrapiyanuruk , Avinash C. Kak, On combining graph-partitioning with non-parametric clustering for image segmentation, Computer Vision and Image Understanding, v.95 n.1, p.72-85, July 2004
Kevin Sookocheff , David Mould, One-click lattice extraction from near-regular texture, Proceedings of the 3rd international conference on Computer graphics and interactive techniques in Australasia and South East Asia, November 29-December 02, 2005, Dunedin, New Zealand
Charless Fowlkes , Serge Belongie , Fan Chung , Jitendra Malik, Spectral Grouping Using the Nyström Method, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.2, p.214-225, January 2004
Manik Varma , Andrew Zisserman, A Statistical Approach to Texture Classification from Single Images, International Journal of Computer Vision, v.62 n.1-2, p.61-81, April-May 2005
Giuseppe Papari , Patrizio Campisi , Nicolai Petkov , Alessandro Neri, A biologically motivated multiresolution approach to contour detection, EURASIP Journal on Applied Signal Processing, v.2007 n.1, p.119-119, 1 January 2007
Long Quan , Jingdong Wang , Ping Tan , Lu Yuan, Image-Based Modeling by Joint Segmentation, International Journal of Computer Vision, v.75 n.1, p.135-150, October 2007
Svetlana Lazebnik , Cordelia Schmid , Jean Ponce, A Sparse Texture Representation Using Local Affine Regions, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.8, p.1265-1278, August 2005
Zhuowen Tu , Song-Chun Zhu, Parsing Images into Regions, Curves, and Curve Groups, International Journal of Computer Vision, v.69 n.2, p.223-249, August 2006
Kwang In Kim , Matthias O. Franz , Bernhard Schölkopf, Iterative Kernel Principal Component Analysis for Image Modeling, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.9, p.1351-1366, September 2005
Michelle Chang , John J. Leggett , Richard Furuta , Andruid Kerne , J. Patrick Williams , Samuel A. Burns , Randolph G. Bias, Collection understanding, Proceedings of the 4th ACM/IEEE-CS joint conference on Digital libraries, June 07-11, 2004, Tuscon, AZ, USA
Zhuowen Tu , Xiangrong Chen , Alan L. Yuille , Song-Chun Zhu, Image Parsing: Unifying Segmentation, Detection, and Recognition, International Journal of Computer Vision, v.63 n.2, p.113-140, July 2005
Timothy J. Roberts , Stephen J. McKenna , Ian W. Ricketts, Human Pose Estimation Using Partial Configurations and Probabilistic Regions, International Journal of Computer Vision, v.73 n.3, p.285-306, July 2007
Robert Hanek , Michael Beetz, The Contracting Curve Density Algorithm: Fitting Parametric Curve Models to Images Using Local Self-Adapting Separation Criteria, International Journal of Computer Vision, v.59 n.3, p.233-258, September-October 2004
Marek B. Zaremba , Roman M. Palenichka , Rokia Missaoui, Multi-scale morphological modeling of a class of structural texture, Machine Graphics & Vision International Journal, v.14 n.2, p.171-199, January 2005
Olivier Lezoray , Abderrahim Elmoataz , Sbastien Bougleux, Graph regularization for color image processing, Computer Vision and Image Understanding, v.107 n.1-2, p.38-55, July, 2007
Bodo Rosenhahn , Thomas Brox , Joachim Weickert, Three-Dimensional Shape Knowledge for Joint Image Segmentation and Pose Tracking, International Journal of Computer Vision, v.73 n.3, p.243-262, July 2007
Bastian Leibe , Aleš Leonardis , Bernt Schiele, Robust Object Detection with Interleaved Categorization and Segmentation, International Journal of Computer Vision, v.77 n.1-3, p.259-289, May 2008
Jens Keuchel , Christoph Schnörr , Christian Schellewald , Daniel Cremers, Binary Partitioning, Perceptual Grouping, and Restoration with Semidefinite Programming, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.11, p.1364-1379, November
Daniel Cremers , Mikael Rousson , Rachid Deriche, A Review of Statistical Approaches to Level Set Segmentation: Integrating Color, Texture, Motion and Shape, International Journal of Computer Vision, v.72 n.2, p.195-215, April 2007
Ritendra Datta , Jia Li , James Z. Wang, Content-based image retrieval: approaches and trends of the new age, Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval, November 10-11, 2005, Hilton, Singapore | segmentation;normalized cut;grouping;cue integration;texton;texture |
543174 | Norm-Based Approximation in Bicriteria Programming. | An algorithm to approximate the nondominated set of continuous and discrete bicriteria programs is proposed. The algorithm employs block norms to find an approximation and evaluate its quality. By automatically adapting to the problem's structure and scaling, the approximation is constructed objectively without interaction with the decision maker. Mathematical and practical examples are included. | Introduction
In view of increased computational power and enhanced graphic capabilities
of computers, approximation of the solution set for bicriteria programming
has been a research topic of special interest. Since bicriteria programs feature
only two criteria, their solution set can be visualized graphically which
significantly facilitates decision making. In this vein, researchers have given special attention to developing approximation methods that yield a representation or description of the solution set rather than to further studying scalarization approaches that had been extensively examined earlier.
(This work was partially supported by ONR Grant N00014-97-1-0784. 2 Research Assistant, Department of Mathematical Sciences, Clemson University, Clemson, SC. 3 Associate Professor, Department of Computer Science and Mathematics, University of Applied Sciences Dresden, Dresden, Germany. 4 Associate Professor, Department of Mathematical Sciences, Clemson University, Clemson, SC.)
Below we review approximation approaches specifically developed for bicriteria programs. We focus on methods that are based on exact algorithms for the solution of scalarization problems and were applied to example problems.
Cohon (1978) and Poliscuk (1979) independently develop similar approximation approaches for linear and convex bicriteria problems, respectively. The weighted-sum scalarization is employed to find nondominated points and the l_2 norm is used as an estimate of the accuracy of the approximation. Fruhwirth et al. (1989) propose a sandwich algorithm to approximate a convex curve in IR^2 and apply it to the bicriteria minimum cost flow problem. The curve is approximated by two piecewise linear functions, one above and one below the curve. The curve's derivative is used to partition a coordinate axis. Yang and Goh (1997) use the derivative of the upper approximation instead. For both algorithms the approximation error decreases quadratically with the number of approximation points. Jahn and Merkel (1992) propose a reference-point-approach for general bicriteria programs and give attention to avoid finding local optima. The approach produces a piecewise linear approximation of the nondominated set. Payne (1993) proposes to approximate the nondominated set of a general bicriteria problem by rectangles, each defined by two nondominated points. Das (1999) briefly discusses an approach based on the Normal-Boundary Intersection technique. A direction orthogonal to a line defined by two nondominated points is used to find a new nondominated point. The identified point has the maximal l_1 distance from the approximation in the considered region.
The following two approaches are, to our knowledge, the only ones that give a closed-form formula for an approximating function of the nondominated set rather than a set of approximating points or a piecewise linear approximation of the nondominated set. Approximating the nondominated set of a convex bicriteria problem by a hyper-ellipse is proposed in Li et al. (1998) and Li (1999). The technique requires three nondominated points and their choice affects the quality of approximation. In Chen et al. (1999) and Zhang et al. (1999), quadratic functions are used to locally approximate the nondominated set of a general bicriteria problem in a neighborhood of a nondominated point of interest. By performing the procedure for several nondominated points, a piecewise quadratic approximation of the whole nondominated set can be generated.
In this paper, we propose to approximate the solution set of bicriteria
programs by means of block norms. Using block norms to generate nondominated
points has several implications: the norm's unit ball approximates
the nondominated set and, at the same time, the norm evaluates the feasible
points as well as the quality of the current approximation.
We consider the following general bicriteria program:

    min {f_1(x), f_2(x)}
    s.t. x ∈ X,                                      (1)

where X ⊆ IR^n is the feasible set and f_1(x) and f_2(x) are real-valued functions. We define the set of all feasible criterion vectors Z and the set of all globally nondominated criterion vectors N of (1) as follows:

    Z := {z ∈ IR^2 : z = f(x), x ∈ X},
    N := {z ∈ Z : there is no z̃ ∈ Z with z̃ ≤ z and z̃ ≠ z},

where f = (f_1, f_2)^T. We assume that the set Z is closed and nonempty, and that there exists a point u so that Z ⊆ u + IR^2_≥, where IR^2_≥ := {z ∈ IR^2 : z ≥ 0}. It follows that the set N is nonempty, see, for example, Sawaragi et al. (1985, pp. 50-51).
The point z* ∈ IR^2 with z*_i := min{f_i(x) : x ∈ X} − ε_i, i = 1, 2, is called the utopia (ideal) criterion vector, where the components of ε are small positive numbers.
The point z^nad ∈ IR^2 with z^nad_i := max{z_i : z ∈ N}, i = 1, 2, is called the nadir point.
In Section 2, we present the methodological tools we use to construct the
approximation. Section 3 contains the approximation algorithm featuring
specific procedures depending on the structure of the problem. Examples
and case studies illustrating the performance of the algorithm are presented
in Section 4, and Section 5 concludes the paper.
In this section, we discuss approaches to generating nondominated points
used in the proposed approximation algorithm. Furthermore, the algorithm
relies on the usage of block norms, a well-known concept in convex analysis.
Block norms are norms with a polyhedral unit ball. A cone generated by
two neighboring extreme points of a unit ball is called a fundamental cone.
The partition of the unit ball into fundamental cones is used extensively in
our methodology.
Let γ be an oblique norm^5 with the unit ball B. Given a reference point z^0 (without loss of generality z^0 = 0), the following program yields a globally nondominated point, see Schandl (1999):

    max  γ(z)
    s.t. z ∈ Z ∩ (z^0 − IR^2_≥).                      (2)

In the algorithm, the norm γ with the center at z^0, as used in (2), is being constructed and used to generate new nondominated points.
Solving (2) requires a calculation of the norm γ(z). As shown in Hamacher and Klamroth (1997), it is sufficient to know in which fundamental cone a point z is located to calculate its norm γ(z). Let γ be a polyhedral norm with the unit ball B ⊆ IR^2, let C be a fundamental cone generated by the two extreme points v^i and v^j of B, and let z ∈ C. If z = λ_i v^i + λ_j v^j with λ_i, λ_j ≥ 0 is the unique representation of z in terms of v^i and v^j, then

    γ(z) = λ_i + λ_j.                                 (3)

Let z^i and z^j be two nondominated points in z^0 − IR^2_≥. To guarantee that a point z is in the cone generated by z^i and z^j, it is sufficient to require z = λ_i z^i + λ_j z^j with λ_i, λ_j ≥ 0. Using (3), the general norm problem (2) restricted to a cone can be formulated as:

    max  λ_i + λ_j
    s.t. z = λ_i z^i + λ_j z^j
         z ∈ Z
         λ_i, λ_j ≥ 0.                                (4)

Given an optimal solution (λ̄_i, λ̄_j, z̄) of (4), z̄ is globally nondominated. Observe that problem (4) generates a nondominated point independently of the existence of a norm.
5 An oblique norm is a block norm where no facet of the unit ball is parallel to any coordinate axis. For details see Schandl (1999).
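A sketch of the norm evaluation in (3): solve the 2 × 2 system for the coefficients of z in terms of the two cone generators and return their sum; a negative coefficient signals that z does not lie in the cone. The generators and the test point in the comment are arbitrary illustrative inputs.

import numpy as np

def cone_norm_value(z, v_i, v_j):
    # Solve z = lam_i * v_i + lam_j * v_j for (lam_i, lam_j).
    lam = np.linalg.solve(np.column_stack([v_i, v_j]), z)
    if np.any(lam < -1e-12):
        return None              # z is not in the cone spanned by v_i and v_j
    return float(lam.sum())      # gamma(z) = lam_i + lam_j, cf. (3)

# Example: with generators (-1, 0) and (0, -1), the point (-0.5, -0.5) gets
# value 1.0, i.e. it lies on the unit ball of the induced norm.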
Besides the norm-based approach described above, we use two other techniques to generate globally nondominated solutions. Following Steuer and Choo (1983), we reformulate the lexicographic Tchebycheff method for the cone as

    lex min { ‖z − z̃*‖^w_∞ , ‖z − z̃*‖_1 }
    s.t. z ∈ Z,                                       (5)

where z̃* is the local utopia point for the cone, see Section 3.4. We first minimize the weighted Tchebycheff norm between the local utopia point and a feasible point. If there is no unique solution in this first step, we minimize the l_1 distance among all the solutions of the first step. Given an optimal solution z̄ of (5), z̄ is globally nondominated, see Schandl (1999).
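For a finite set of feasible criterion vectors (as in the discrete case), the lexicographic selection in (5) can be sketched directly: keep the points that minimize the weighted Tchebycheff distance to the local utopia point, then break ties by the l_1 distance. The tie tolerance is an assumption.

import numpy as np

def lex_tchebycheff(points, z_utopia, weights, tol=1e-9):
    # points: (m, 2) feasible criterion vectors; z_utopia, weights: length-2 arrays.
    P = np.asarray(points, dtype=float)
    diff = np.abs(P - np.asarray(z_utopia, dtype=float))
    cheb = (np.asarray(weights, dtype=float) * diff).max(axis=1)   # weighted Tchebycheff distance
    candidates = np.flatnonzero(cheb <= cheb.min() + tol)
    l1 = diff[candidates].sum(axis=1)                              # tie-break: l1 distance
    return P[candidates[np.argmin(l1)]]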
A direction method introduced in Pascoletti and Serafini (1984) is modified in Schandl (1999). We use this method to search for globally nondominated points in the entire set Z. Let z^0 be the reference point and let d be a search direction normalized so that ‖d‖ = 1. Then the corresponding lexicographic direction problem (6) has a finite solution (ᾱ, q̄), and the associated criterion vector z̄ is a globally nondominated point.
3 Approximation Algorithm
In this section, the algorithms for an IR^2_≥-convex, an IR^2_≥-nonconvex and a discrete feasible set Z are proposed. The algorithms in all three cases are very similar, so we first present the general algorithm and then point out special features of the different cases.
3.1 General Strategy
The approximation algorithm is based on the successive generation of nondominated points using the methods described in Section 2. The basic idea is to generate points in the areas where the nondominated set is not yet well approximated. The approximation quality is evaluated using the approximation itself by interpreting it as part of the unit ball of a block norm.
We explain the algorithm for the IR^2_≥-convex case using Figure 1. To start, we need a reference point z^0. This might be a currently implemented (not nondominated) solution or just a (not necessarily feasible) guess. Without loss of generality, we assume throughout the section that the reference point is located at the origin.
6 A set Z is called IR^2_≥-convex if Z + IR^2_≥ is convex.
Figure 1: The steps of the approximation algorithm
To approximate the nondominated set in z^0 − IR^2_≥, we first explore the feasible set along the directions (−1, 0) and (0, −1) to find z^1 and z^2 using the direction method (6). These two points together with the reference point z^0 are used to define a cone and a first approximation, see Figure 1(b).
In a cone we search for a new candidate point to add to the approximation. Constructing new cones within the first cone, we get a finer approximation of the nondominated set while generating nondominated points and updating the norm. Depending on the structure of the feasible set Z (IR^2_≥-convex, IR^2_≥-nonconvex or discrete) we use the norm method (4) and/or the lexicographic Tchebycheff method (5). For more details see Sections 3.3, 3.4 and 3.5. Interpreting the approximation as the lower left part of the unit ball of a norm with z^0 as its center, we can calculate the distance of a point z̄ from the current approximation as dev(z̄) := |γ(z̄) − 1|, which we call the deviation of z̄. Whenever possible, we add a point of worst approximation by substituting two new cones for the cone in which this new point is located.
3.2 Description of the Algorithm
The algorithm accepts the following input.
1. A reference point z^0 can be specified. If it is not given, then z^0 := z^nad is used as a default.
2. Initial search directions can be given. There are three possibilities:
(a) At least two directions d^i are given.
(b) An integer randDirNo ≥ 2 is given, which defines the number of random directions in IR^2_≥ which are generated.
(c) No directions are given; then the default directions d^1 := (−1, 0) and d^2 := (0, −1) are used.
The directions are sorted in counterclockwise order. Let the number of directions be k ≥ 2.
3. There are two possible stopping criteria; usually, at least one of them must be given. The first one is an upper bound ε > 0 on the maximal deviation. As soon as we get dev(z̄) < ε for a point that should be added next, the algorithm stops. The other possibility is to give an integer maxConeNo ≥ 1, which specifies the maximum number of cones to be generated.
The algorithm starts by solving the direction method (6) for all directions d^i and defining l initial cones. Note that l is not necessarily equal to k − 1 because two directions may generate the same nondominated point.
Now we find a candidate to add in each cone, each having a deviation from the current approximation associated with it. How this candidate is found differs for the three types of problems and is described in the subsections below.
Finally the main loop of the algorithm starts. If the maximum number of cones maxConeNo was already constructed, the loop stops. Otherwise, the candidate z̄ with the maximum deviation is considered. If this deviation is smaller than ε, the loop stops. Otherwise, two new cones are constructed in place of the cone containing z̄, candidate points for the new cones together with their deviations are calculated and the points are added to the list of candidates.
At the end of the loop the sorted list of r nondominated points is printed and can be used to visualize the approximated nondominated set. In the IR^2_≥-convex case, the approximation is in the form of an oblique norm's unit ball with an algebraic description Az ≤ e, where A is an (r − 1) × n matrix and e is the vector of ones.
The algorithm is summarized in Figure 2. The procedure Calculate Candidate depends on the structure of the feasible set. Suitable procedures for IR^2_≥-convex and for IR^2_≥-nonconvex and discrete feasible sets are given in Figures 3 and 5.
3.3 Convex Case
For the IR^2_≥-convex case, the candidate in a cone is found by the norm method (4). By taking the candidate with the maximal deviation, we globally maximize the norm and the resulting point is guaranteed to be globally nondominated. Note that the deviation is implicitly given by the solution of (4) because, due to (3), the optimal objective value of (4) is equal to the candidate's norm, that is, γ(z̄) = λ̄_i + λ̄_j.
Procedure: Bicriteria Approximation
Read/generate z^0, d^i, ε, maxConeNo
for all d^i do
    Solve direction method
end for
Construct cones
for all cones do
    Call Calculate Candidate
end for
while #cones < maxConeNo and dev(next point) ≥ ε do
    Add next point
    Construct new cones
    for all new cones do
        Call Calculate Candidate
    end for
end while
Output approximation
Figure 2: Pseudo code of the approximation algorithm

Given the set of extreme points of the approximation, we can easily find a representation of the approximation in the form Az ≤ e. Since z^0 = 0, no line connecting two neighboring extreme points includes the origin. Given two points z^i and z^{i+1}, we calculate row i of the matrix A by solving the two equations a_{i1} z^i_1 + a_{i2} z^i_2 = 1 and a_{i1} z^{i+1}_1 + a_{i2} z^{i+1}_2 = 1. The procedure to calculate a candidate is summarized in Figure 3.
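A sketch of this facet computation for Az ≤ e: each row a_i is obtained by requiring a_i · z^i = a_i · z^{i+1} = 1, i.e. by solving a 2 × 2 linear system for every pair of neighboring extreme points (well defined as long as no connecting line passes through the origin).

import numpy as np

def facet_rows(extreme_points):
    # extreme_points: sorted list of neighboring extreme points z^1, ..., z^r.
    A = []
    for z_a, z_b in zip(extreme_points[:-1], extreme_points[1:]):
        A.append(np.linalg.solve(np.vstack([z_a, z_b]), np.ones(2)))
    return np.array(A)          # the approximation is then described by A z <= e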
Procedure: Calculate Candidate
Solve norm method to find z̄ and dev(z̄)
Figure 3: Finding a candidate in a cone for an IR^2_≥-convex feasible set
Setting the stopping criteria to ε = 0 and maxConeNo = ∞ can lead to an infinite running time for a general IR^2_≥-convex set (even without considering numerical problems). On the other hand, these settings can be useful for the special case of a polyhedral set Z, since in this case our algorithm is able to find the exact nondominated set.
Consider a polyhedral feasible set Z. There are two cases for the location of the points z^i and z^j when solving (4). Either both extreme points of the approximation are on the same facet or they are on different (not necessarily neighboring) facets.
Since (4) is a linear program if Z is polyhedral, its optimal solution is an extreme point or a facet of the feasible set. We thus either find a new point to add to the approximation or the identified point has a deviation of 0, in which case the cone is not considered anymore. The necessary number of iterations is O(k), where k is the number of extreme points of the nondominated set, because in each iteration we either find an extreme point or we eliminate a cone from further consideration.
3.4 Nonconvex Case
Finding a candidate in a cone for an IR^2_≥-nonconvex feasible set is a two-stage procedure. We first try to find a candidate "outside" the approximation; if this fails, we look for a candidate "inside". Thus we give a priority to constructing the convex hull of the nondominated set before we further investigate nonconvex areas.
Finding a candidate "outside" is done with the same method as for IR^2_≥-convex sets, that is, we use problem (4), exercising its applicability in the absence of a norm. If the deviation of the candidate found by this method is too small, that is, smaller than ε, we switch to a method using the Tchebycheff norm in order to investigate whether the nondominated set is IR^2_≥-convex in this cone and its approximation is already good enough, or whether the nondominated set is IR^2_≥-nonconvex and a candidate has to be found in the interior of the approximation. For a cone defined by the two points z^i and z^{i+1}, we first calculate the local utopia and the local nadir point:

    z̃*_j := min {z^i_j, z^{i+1}_j},   z̃^nad_j := max {z^i_j, z^{i+1}_j},   j = 1, 2.

Using these two points, we calculate the weights for a Tchebycheff norm whose unit ball's center is z̃* and whose upper right corner is z̃^nad, see Figure 4. The weights thus are

    w_j = 1 / (z̃^nad_j − z̃*_j),   j = 1, 2.
Figure 4: The Tchebycheff norm for a nonconvex area
We then use the lexicographic Tchebycheff method (5) to find a candidate for this cone. Having found a candidate z̄, its deviation is calculated using (3); the norm can also be calculated using the equality constraint of (5).
Note that the candidate found using this two-stage procedure is not necessarily the point of worst approximation. If the candidate has already been found using program (4), it is the point of worst approximation among all points "outside" the current approximation in this cone. Finding a point with the lexicographic Tchebycheff method, that is, in the second stage, does not imply anything about how well this point is currently approximated in comparison with other points. So it might happen that we miss a point with a larger deviation than the candidate z̄ we are considering. But unless the deviation of z̄ is so small that the cone is not further considered, there is a good chance that the point with the larger deviation is found in a later iteration. The procedure to calculate a candidate is summarized in Figure 5.
Procedure: Calculate Candidate
Solve norm method to find z̄ and dev(z̄)
if dev(z̄) < ε then
    Calculate z̃* and z̃^nad
    Use lexicographic Tchebycheff method to find z̄
    Calculate dev(z̄)
end if
Figure 5: Finding a candidate in a cone for an IR^2_≥-nonconvex or discrete feasible set
3.5 Discrete Case
The approach for the discrete case is exactly the same as for the IR^2_≥-nonconvex case, that is, we first use the norm method to search for a candidate "outside" the approximation; if we find none (or only one with a small deviation), we search "inside" using the Tchebycheff method.
Since using the Tchebycheff method might lead to NP-hard problems, see, for example, Warburton (1987) or Murthy and Her (1992), we develop an alternative approach for the discrete case that uses cutting planes and does not need two stages. Since this approach is not used in our implementation, we only give a brief outline and refer the reader to Schandl (1999) for more details. This approach might lead to NP-hard problems as well, but there are cases where the Tchebycheff method leads to NP-hard problems while the approach based on cutting planes does not.
The idea of the cutting-plane approach is to restrict the feasible region to an open rectangle defined by the two generators of the cone, because this is the only area within this cone where nondominated points can be located. Then the norm method (4) is used to identify a candidate for this cone. If we find a candidate "outside" the current approximation, that is, a candidate with a deviation large enough, we have a point of worst approximation and a suitable point to add to the approximation. But if a point is found "inside" the approximation, it is actually a point of best approximation. Therefore it may happen quite often that a cone is excluded from further consideration too early.
Independently of the choice of an approach to examine the interior of the approximation, the algorithm enumerates all nondominated points if we use the stopping criteria ε = 0 and maxConeNo = ∞.
The procedure to calculate a candidate using the Tchebycheff method is the same as for the IR^2_≥-nonconvex case, see Figure 5.
3.6 A Note on Connectedness
While the nondominated set of an IR^2_≥-convex set is always connected, see Bitran and Magnanti (1979) or Luc (1989), the nondominated set of an IR^2_≥-nonconvex problem might be disconnected. An indicator for disconnectedness is the fact that we do not find any new nondominated point in a cone, neither in the interior nor in the exterior of the approximation. Since we are able to identify disconnectedness in this way, we can remove such a cone so that the resulting final approximation is a disconnected set as well. Thus our approximation is suitable for problems with connected and with disconnected nondominated sets.
3.7 Properties of the Algorithm
The approximation algorithm for general bicriteria problems presented in this section has many desirable properties, some of which are, to our knowledge, not available in any other approximation approach.
In each iteration, the subproblems (4) and/or (5) are only solved in two new cones. Thus results from previous iterations are reused and no optimization over the whole approximated region is necessary. Instead of adding an arbitrary point in each iteration, our goal is to add the point of worst approximation and to maximize the improvement in each iteration. While this property does not always hold in the IR^2_≥-nonconvex and discrete cases, it always holds in the IR^2_≥-convex case. If the algorithm is interrupted or stopped at a particular point (for example because the maximum allowed number of cones has been constructed), the approximation has a similar quality for the whole nondominated set.
While the points of the approximation are in general not nondominated or even not feasible, all extreme points of the approximation are nondominated. Even in the IR^2_≥-convex case, points of the approximation may be infeasible if the feasible set Z is "very thin" or even only a line. If all points of the approximation are feasible though, we have constructed an inner approximation of the nondominated set.
Using a norm induced by the problem (or, more precisely, by the approximation of its nondominated set) avoids the necessity to choose, for example, an appropriate norm, weights or directions to evaluate or estimate the quality of the current approximation. The induced norm evaluates the approximation quality and simultaneously generates suitable additional points to improve the approximation. Since the quality of the approximation is evaluated by the norm, the stopping criterion for the maximal deviation is independent of the scaling. Indeed, the norm automatically adapts to the given problem and yields a scaling-independent approximation.
Additionally, the constructed norm can be used to evaluate and compare feasible points in z^0 − IR^2_≥. A nondominated point has a norm greater than or equal to 1, while a norm between 0 and 1 for a point z̄ indicates that there is a "better" point in the direction from z^0 to z̄. The norm of a point z̄ can be interpreted as a measure of quality relative to the maximal achievable quality in the direction of z̄.
While it is often convenient to have the reference point generated by the algorithm, which is then the nadir point, choosing a specific reference point can be used to closely explore a particular region of the nondominated set. The automatically generated reference point can be used to construct a global approximation of the entire nondominated set while a manually chosen reference point helps to examine the structure and trade-offs of the nondominated set in a specific region. Thus the choice of the reference point can be used to "zoom into" regions of interest. For examples see Section 4.
and
problems. If the structure of the feasible set Z is un-
known, we can apply the algorithm described in Section 3.4. However, if the
problem is in fact
additional (unnecessary) computation
have to be performed. Not nding a candidate with a large enough deviation
in the exterior of the approximation in the IR 2
-convex case is an indicator
that the approximation is already good enough in the corresponding cone.
In the IR 2
case though, the Tchebyche method is used to search
for a candidate in the interior of the approximation which is unnecessary in
the
case because there cannot be a nondominated point in the
interior of the approximation. But the disadvantage of performing some
additional calculations is clearly outweighed by the fact that no information
concerning the structure of the feasible set Z is necessary. If the information
that the feasible set is IR 2
-convex is available, the specialized algorithm
presented in Section 3.3 should be used of course.
4 Examples and Case Studies
The approximation algorithm presented in Section 3 was implemented using
C++, AMPL, CPLEX, MINOS and gnuplot. The C++ program keeps lists of
points and cones and formulates mathematical programs which are solved
by AMPL, CPLEX and MINOS. Finally, the results are written to text les
which gnuplot uses to create two-dimensional plots.
4.1 Convex Example
Consider the following
t.
The solutions for 10 and 40 cones are shown in Figure 6. The approximation
is already very good for 10 cones and improves only slightly for 40 cones.
A small cusp can be seen at f(3; in both gures. At this point,
the rst two constraints hold with equality. The rst constraint denes the
nondominated set to the left of the cusp, the second one to the right of the
cusp.
The corresponding matrix for 10 cones, rounded to two decimals, looks
as follows:
0:49 0:48 0:463 0:45 0:42 0:39 0:25 0:19 0:12 0:04
0:03 0:03 0:04 0:04 0:04 0:05 0:06 0:06 0:06 0:06
All entries are positive, because A denes the oblique norm in the quadrant
. The rows of A dene the facets of the approximation from the
right to the left.1216202428
z 0
Figure
Approximation of (7)
4.2 Nonconvex Example
We present an IR^2_≥-nonconvex example, problem (8), taken from Zhang (1999).
Two interesting properties of our approximation algorithm can be seen in Figure 7, depicting the approximation for different numbers of cones.
The approximation first constructed by the algorithm is similar to the convex hull of the nondominated set. Even after several cones have been added to the approximation, it is not yet apparent that the problem is IR^2_≥-nonconvex. The reason is that the algorithm uses only the norm method as long as it finds candidates with a deviation larger than ε (which was set to 0.0001 in this example). When it does not find such a candidate in a cone, it switches to the Tchebycheff method to examine the interior of the approximation and "discovers" the nonconvexity in the big cone, see Figures 7(c) and 7(d). This illustrates that the choice of ε can influence the approximation process.
As in the IR^2_≥-convex example above, we see that areas with a big curvature induce numerous cones, so that the linear approximation adapts to the nonlinear nondominated set. Our results agree with those obtained by Zhang (1999) using the Tchebycheff scalarization.
4.3 Case Study: Evaluation of Aircraft Technologies
We now present a bicriteria model to evaluate aircraft technologies for a
new aircraft. The model was proposed in Mavris and Kirby (1999) and the
data was provided by the Aerospace Systems Design Laboratory at Georgia
Institute of Technology. They can be found in Schandl (1999).
The model is a bicriteria problem of the form (9), with nine decision variables and box constraints −1 ≤ x_i ≤ 1. The functions f_1(x) and f_2(x) are modeled as Response Surface Equations, where the coefficients b_i and b_ij are found by regression. The Hessian of neither of the functions f_1(x) and f_2(x) is positive or negative (semi)definite.
Figure 7: Approximation of (8)
The decision variable in the problem is a vector of nine so-called "k" factors. The impact of a technology is mapped to such a vector, so every technology has a specific vector assigned to it. Not all technologies affect all components of the vector. While the problem is thus discrete, the goal of this model is to identify the values of "k" factors that are beneficial for the objective functions. Then technologies with corresponding vectors can be further investigated. All "k" factors are normalized to the range [−1, 1] and represent a change from the value of the currently used technologies.
The two criteria are the life cycle cost (including research cost, production cost, and support cost) to be minimized and the specific excess power (a measure of maneuverability) to be maximized.
The results of the approximation algorithm for 10 and 29 cones are shown in Figure 8. Our approximation agrees with the simulation results obtained at the Aerospace Systems Design Laboratory, see Schandl (1999).
Figure 8: Approximation of (9); (a) 10 cones, (b) 29 cones
There are two areas with an accumulation of constructed points in Figure 8(b). We examine these areas more closely by manually setting the reference point to (0.671, 728) and (0.652, 683), respectively. The corresponding approximations are shown in Figures 9(a) and 9(b). In Figure 9(a), no reason for the accumulation of constructed points is apparent. Figure 9(b) on the other hand shows a small nonconvex area of the nondominated set.
Figure 9: Approximation of (9) with manually chosen reference points; (a) 13 cones, z^0 = (0.671, 728), (b) 19 cones, z^0 = (0.652, 683)
Being able to choose the reference point in this way demonstrates a
strength of our approximation approach. By simply resetting this point, we
are able to closely examine "suspicious" areas or areas of special interest. Thus the approximation approach can be used both to get a general impression of the entire nondominated set and to "zoom into" areas of interest
without changing the underlying algorithm.
An extended model includes a constraint and is discussed in Schandl
(1999).
4.4 Case Study: Choosing Affordable Projects
We now consider the problem of selecting the most affordable portfolio of projects so that two criteria are maximized subject to a budgetary constraint. The model and data were taken from Adams et al. (1998) and Hartman (1999).
There are 24 projects in which the decision maker can invest. Depending on the model, the decision maker can invest in each project exactly once (binary variables) or a positive number of times (integer variables). The goal is to maximize the net present value (NPV) of investment and to maximize the joint application or dual use (JA/DU) potential of the chosen projects. The latter is a score assigned to each project by an expert. The investment has to be made with respect to a budgetary constraint. The problem is formulated as a bicriteria knapsack problem:

    max  Σ_{i=1}^{24} c_{1i} x_i
    max  Σ_{i=1}^{24} c_{2i} x_i
    s.t. Σ_{i=1}^{24} a_i x_i ≤ b
         x binary or nonnegative integer,              (10)

where the parameters are explained in Table 1. The values of the parameters are given in Schandl (1999).
Parameter   Explanation
c_{1i}      NPV of investment for project i in millions of dollars
c_{2i}      JA/DU score for project i
a_i         Total cost of project i over three years in hundreds of thousands of dollars
b           Total budget in hundreds of thousands of dollars
Table 1: Explanation of parameters in (10)
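As an illustration of (10) with binary variables, the brute-force sketch below enumerates the nondominated (NPV, JA/DU) vectors of a small instance; the data are made up, and this is not Hartman's dynamic-programming implementation (to which a brute force of all 24 projects would not scale).

from itertools import product

def nondominated_knapsack(c1, c2, a, budget):
    # Enumerate all binary selections within budget and keep the selections
    # whose (NPV, JA/DU) pair is not dominated when maximizing both criteria.
    outcomes = set()
    for x in product((0, 1), repeat=len(a)):
        if sum(ai * xi for ai, xi in zip(a, x)) <= budget:
            outcomes.add((sum(ci * xi for ci, xi in zip(c1, x)),
                          sum(ci * xi for ci, xi in zip(c2, x))))
    return sorted(p for p in outcomes
                  if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in outcomes))

# Toy illustration with three hypothetical projects (not the data of Table 1):
print(nondominated_knapsack(c1=[10, 7, 4], c2=[2, 5, 8], a=[30, 25, 20], budget=50))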
The approximation for the binary variable x is shown in Figure 10(a). Our approximation algorithm finds all twelve nondominated solutions given in Hartman (1999).
Allowing the variable x to be a nonnegative integer instead of binary yields many more solutions. The approximation of (10) for the nonnegative integer variable x is shown in Figure 10(b). In fact, the approximation finds all 54 nondominated solutions that, according to personal communications, Hartman found using her implementation of a dynamic-programming-based algorithm generating all nondominated points.
Figure 10: Approximation of (10); (a) binary variables, (b) integer variables
5 Conclusions
In this paper we introduced a new approximation approach for bicriteria programs. Block norms are used to construct the approximation and evaluate its quality.
The algorithm combines several desirable properties. Whenever possible, the approximation is improved in the area where "it is needed most" because in each iteration, a point of worst approximation is added. The algorithm is applicable even if the structure and convexity of the feasible set is unknown. Given this knowledge though, more efficient versions can be applied. Using the approximation or a norm induced by it to improve the approximation releases the decision maker from specifying preferences (in the form of weights, norms, or directions) to evaluate the quality of the approximation.
The algorithm yields a global piecewise linear approximation of the nondominated set which can easily be visualized. For IR^2_≥-convex problems, a closed-form description of the approximation can be calculated. For all problems, the trade-off information provided by the approximation can be used in the decision-making process. While the approximation is carried out objectively, the subjective preferences must be (and should be) applied to single out one (or several) final result(s).
In the future, we plan to employ global optimization techniques for the single objective subproblems in order to handle problems with disconnected nondominated set and/or local minima.
--R
The Structure of Admissible Points with Respect to Cone Dominance.
Quality Utility
Multiobjective Programming and Planning.
An Improved Technique for Choosing Parameters for Pareto Surface Generation Using Normal-Boundary Intersection
Approximation of convex curves with application to the bicriterial minimum cost flow problem.
European Journal of Operational Research
Planar Location Problems with Barriers under Polyhedral Gauges.
Implementation of Multiple Criteria Dynamic Programming Procedures for A
Reference Point Approximation Method for the Solution of Bicriterial Nonlinear Optimization Problems.
Approximating the Pareto Set of Convex Bi-Criteria Optimization Problems to Aid Decision Making in Design
Approximating Pareto Curves Using the Hyper-Ellipse
Technology
Solving Min-Max Shortest-Path Problems on a Network
Theory of Multiobjective Optimization
An Interactive Weighted Tchebycheff
Approximation of Pareto Optima in Multiple-Objective
A method for convex curve approximation.
European Journal of Operational Research
An Interactive Multiobjective Robust Design Procedure
Local Approximation of the Efficient Frontier in Robust Design
--TR
--CTR
Bernd Schandl , Kathrin Klamroth , Margaret M. Wiecek, Introducing oblique norms into multiple criteria programming, Journal of Global Optimization, v.23 n.1, p.81-97, May 2002 | nondominated points;block norms;approximation;bicriteria programs |
544758 | Risk and expectations in a-priori time allocation in multi-agent contracting. | In related research we have proposed a market architecture for multi-agent contracting and we have implemented prototypes of both the market architecture and the agents in a system called MAGNET. A customer agent in MAGNET solicits bids for the execution of multi-step plans, in which tasks have precedence and time constraints, by posting a Request for Quotes to the market. The Request for Quotes needs to include for each task its precedence constraints and a time window. In this paper, we study the problem of optimizing the time windows in the Requests for Quotes. Our approach is to use the Expected Utility Theory to reduce the likelihood of receiving unattractive bids, while maximizing the number of bids that are likely to be included in the winning bundle. We describe the model, illustrate its operation and properties, and discuss what assumptions are required for its successful integration into MAGNET or other multi-agent contracting systems. | INTRODUCTION
The MAGNET (Multi-AGent NEgotiation Testbed) [4]
system is designed to support multiple agents in negotiating
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
AAMAS'02, July 15-19, 2002, Bologna, Italy.
contracts for tasks with complex temporal and precedence
constraints.
We distinguish between two agent roles, the customer and
the supplier. A customer is an agent who needs resources
outside its direct control in order to carry out his plans. It
does so by soliciting the help of other self-interested agents
through a Requests for Quotes (RFQ). A supplier is an agent
who, in response to an RFQ, may o#er to provide the requested
resources or services for specified prices, over specified
time periods. The objective of the agents is to maximize
their profits while predicting and managing their financial
risk exposure.
In this paper, we focus on the decision process a customer
agent needs to go through in order to generate an RFQ. We
study in particular the problem of how to specify the time
windows for the di#erent tasks in the RFQ. This decision
determines an approximate schedule by setting limits on the
start and finish times for each individual task, since the
RFQ includes early start and late finish times for each task.
Because there is a probability of loss as well as a probability
of gain, we must deal with the risk posture of the person or
organization on whose behalf the agent is acting.
We show how to use the Expected Utility Theory to determine
the time windows for tasks in the task network, so that
bids that are close to these time windows form the most preferred
risk-payo# combinations for the customer agent. We
further examine to what extent the behavior of the model
corresponds to our expectations, explain what market information
needs to be collected in order to integrate the model
in MAGNET system and, finally, discuss how to use the
resulting time allocations to construct RFQs.
2. RELEVANCE OF THE PROBLEM
Before presenting our proposed solution, we need to understand
the importance of selecting appropriate time windows
for tasks in RFQs and how this choice a#ects the customer
agent's ability to accomplish the tasks as economically
and rapidly as possible.
Choosing appropriate time windows a#ects the number
and price of the bids received, the ability to compose the
bids into a feasible schedule, and the financial exposure of
the customer agent.
There are two major decisions the agent has to take here:
the relative allocation of time among the di#erent tasks, and
the extent to which the time windows of tasks connected by
precedence relations are allowed to overlap.
We have shown [1] that the time constraints specified in
the RFQ can a#ect the customer's outcome in two major
ways:
1. by a#ecting the number, price, and time windows of
the bids submitted. We assume that bids will reflect
supplier resource commitments, and therefore larger
time windows will result in more bids and better utilization
of resources, in turn leading to lower prices [3].
However, an RFQ that features overlapping time windows
makes the process of winner determination more
complex [2]. Another less obvious problem is that
every extra bid over the minimum needed to cover
all tasks adds one more rejected bid. Ultimately, a
large percentage of rejections will reduce the customer
agent's credibility, which, in turn, will result in fewer
bids and/or higher costs.
2. by a#ecting the financial exposure of the customer
agent. We assume non-refundable deposits are paid
to secure awarded bids, and payments for each task
are made as the tasks are completed. The payment
to the customer occurs only at the completion of all
the tasks. Once a task starts and, in case it is successfully
completed in the time period specified by the
contract, the customer is liable for its full cost, regardless
of whether in the meantime the plan as a whole
has been abandoned due to a failure on some other
branch of the plan.
We define successful plan execution as "completed by the
deadline," and we define successful completion of a task as
"completed without violating temporal constraints in the
plan." Note that a task can be completed successfully even
if it is not finished within the duration promised by the
bidder, as long as the schedule has su#cient slack to absorb
the overrun. If a plan is completed after its deadline, it has
failed, and we ignore any residual value to the customer of
the work completed.
The uncertainty of whether the tasks will be completed
on time as promised by suppliers further complicates the
decision process. Because of the temporal constraints between
tasks, failure to accomplish a task does not necessarily
mean failure of the goal. Recovery might be possible,
provided that whenever a supplier fails to perform or decommits
there are other suppliers willing to do the task and
there is su#cient time to recover without invalidating the
rest of the schedule.
If a task is not completed by the supplier, the customer
agent is not liable for its cost, but this failure can have a
devastating e#ect on other parts of the plan. Having slack
in the schedule increases the probability that tasks will be
completed successfully or that there will be enough time to
recover if one of the tasks fails. However, slack extends the
completion time and so reduces the reward. In made-to-
order products speed is the essence and taking extra time
might prevent a supplier from getting a contract. This complicates
the selection of which bids to accept. The lowest
cost combination of bids and the tightest schedule achievable
is not necessarily the preferable schedule because it is
more likely to be brittle.
Risk can also be reduced by consolidating tasks with fewer
suppliers. Suppliers can bid on "packages" composed of sub-sets
of tasks from the RFQ. In general, the customer is better
o# from a risk standpoint if it takes these packages, assuming
that the supplier is willing to be paid for the whole package
at the time of its completion. In some cases, the customer
may be willing to pay a premium over the individual task
prices in order to reduce risk. The advantage of doing this is
greater toward the end of the plan than near the beginning,
since at that point the customer has already paid a significant
part of the tasks. Having a greater financial exposure
provides an additional incentive to reduce risk.
3. THE MAGNET FRAMEWORK
3.1 General Terms
The customer is a human or artificial agent who wants
to achieve some goal and needs resources or services beyond
her direct control.
The supplier is a human or artificial agent who has direct
control over some resources or services and may o#er to provide
those in response to external request, i.e., may submit
and commit to bids.
The mediator is a MAGNET-assisted human agent who
meets the needs of a customer by negotiating over multiple
goods or services with one or more suppliers. We often refer
to the artificial part of the duo as to the customer agent.
The Request for Quotes is a signal composed by the customer
agent on the basis of the customer's needs and is sent
to solicit suppliers' bids. MAGNET is a mixed initiative
system, so between composing RFQs and sending them out,
there is a stage where a human user can impose her preferences
on the RFQ choices.
3.2 Task Network
The task network (see Figure 1) represents the structure
of the customer's plan. In essence, it is a connected directed
acyclic graph.
Masonry
Roofing
Plumbing
Electric
Exterior Interior24
Figure
1: A task network example.
Mathematically speaking, a task network is a tuple #N, #
of a set N of individual tasks and strict partial ordering on
them. We conveniently abuse N to also denote the number
of tasks. A task network represents a plan to accomplish
the agent's goal.
We define P (n) := {m # N |m # n} to be a set of predecessors
of n # N , where every predecessor m should be
completed before task n might start. Note that, in general,
P (n) is not completely ordered by #.
Similarly, S (n) is to denote a set of successors of n.
3.3 Time Allocation and Probabilities
A task network is characterized by a start time t s and a
which delimit the interval of time when tasks
can be scheduled.
The placement of an individual task n in the schedule is
characterized by its start time t s
n and finish time t f
n , which
are subject to the following constraints:
where P1 (n) is the a set of immediate predecessors of n,
defined similarly to be the set of immediate successors of
task n.
The probability of task n completion by the time t, conditional
on the successful completion of task n, is distributed
according to the cumulative distribution function (CDF)
1. Observe that #n is defined
to be explicitly dependent on the start time t s
n . To see the
rationale, consider the probability of successful mail delivery
in x days for packages that were mailed on di#erent days of
a week.
There is an associated unconditional probability of success
characterizing the percentage of tasks that are
successfully completed given infinite time (see Figure 2).p n
Figure
2: Unconditional distribution for successful
completion probability.
3.4 Monetary Transfers
bears an associated cost 1 . We assume the total
cost of task n has two parts: a deposit, which is paid when
the task starts, and a cost cn which is due at some time after
successful completion of n. Since we never compare plans
with di#erent deposits we assume without loss of generality
the deposit to be 0.
There is a single final payment V scheduled at the plan
conditional on all tasks in n being
successfully completed by that time.
There is an associated rate of return qn 2 that is used to
calculate the discounted present value (PV) for payo# cn due
at time t as
We associate the return q with the final payment V .
4. EXPECTED UTILITY
4.1 General Terms
We represent the customer agent's preferences over pay-
o#s by the von Neumann-Morgenstern utility function u.
1 Hereafter we use words "cost" and "reward" to denote some
monetary value, while referring the same value as "payo# "
or "payment" whenever it is scheduled at some time t.
2 The reason for having multiple qn 's is that individual tasks
can be financed from di#erent sources, thus a#ecting task
scheduling.
We further assume that the absolute risk-aversion coe#-
cient [14] r := -u # /u # of u is constant for any value of
its argument, hence u can be represented as follows:
A gamble is a set of payo#-probability pairs
1. The expectation of the utility
function over a gamble G is the expected utility (EU):
The certainty equivalent (CE) of a gamble G is defined
as the single monetary value whose utility matches the expected
utility of the entire gamble G, i.e. u (CE [G]) :=
Eu [G]. Hence under our assumptions
r log #
Naturally, the agent will not be willing to accept gambles
with less than positive certainty equivalent and the higher
values of the certainty equivalent will correspond to more
attractive gambles.
To illustrate the concept, Figure 3 shows how the certainty
equivalent depends on the risk-aversity of an agent. In this
figure we consider a gamble that brings the agent either 100
or nothing with equal probabilities. Agents with positive
r's are risk-averse, those with negative r's are risk-loving.
Agents with zero risk-aversity zero, i.e. risk-neutral, have a
CE equal to the gamble's weighted mean 50.
Figure
3: Certainty equivalent of a simple gamble
as a function of the risk-aversity.
4.2 Cumulative Probabilities
To compute the certainty equivalent of a gamble we need
to determine a schedule for the tasks and compute the payo#-
probability pairs.
We assume that a payo# cn for task n is scheduled at t f
so its present value -
cn 3 is
3 Hereafter we "wiggle" variables that depend on the current
task schedule, while omitting all corresponding indices for
the sake of simplicity.
We define the conditional probability of task n success as
We also define the precursors of task n as a set of tasks
that finish before task n starts in a schedule, i.e.
The unconditional probability that the task n will be completed
successfully is
pm .
That is, the probability of successful completion of every
precursor and of the task n itself are considered independent
events. The reason this is calculated in such form is because,
if any task in -
fails to be completed, there is no need
to execute task n.
The probability of receiving the final payment V is therefor
pn .
4.3 Example and Discussion
To illustrate the definitions and assumptions above, let's
return to the task network in Figure 1 and consider a sample
task schedule in Figure 4. In this figure the x-axis is time,
the y-axis shows both the task numbers and for each individual
task it also shows the cumulative distribution of the
unconditional probability of completion (compare to Figure
2). Circle markers show start times t s
n . Crosses indicate
both finish times t f
n and success probabilities -
pn (numbers
next to each point). Square markers denote that the corresponding
task cannot span past this point due to precedence
constraints. Finally, the thick part of each CDF shows the
time allocation for the task.
Figure
4: CE maximizing time allocations for the
plan in Figure 1 for
In practice, the customer agent needs a way of collecting
the market information necessary to build and use the
model. The probability of success is relatively easy to observe
in the market. This is the reason for introducing the
cumulative probability of success #n and the probability of
success pn , instead of the average project life span or probability
of failure or hazard function. Indeed, it is rational
for the supplier to report a successful completion immediately
in order to maximize the present value of a payment.
Also it is rational not to report a failure until the last possible
moment due to a possibility of earning the payment by
rescheduling, outsourcing or somehow else fixing the problem
To be specific, the information that the agent needs to
collect is the empirical distribution of how long does it take
from the point of starting some task to the point its completion
is reported. This data, unlike the data on failures or
actual positions in the supplier's schedule is less likely to be
private or unobservable.
5. MAXIMIZATION
5.1 Gamble Calculation Algorithm
Given a schedule, like the one shown in Figure 4, we need
to compute the payo# probability and then maximize the
CE for the gamble. Writing an explicit description of the
expected utility as a function of gambles is overly complicated
and relies on the order of task completions. Instead we
propose a simple recursive algorithm that creates these gam-
bles. We then maximize the CE over the space of gambles.
The proposed algorithm does not depend on the structure
of the task network, but on the number of tasks scheduled
in parallel.
Algorithm: G # calcGamble(T, D)
Requires: T "tasks to process", D "processed tasks"
Returns: G "subtree gamble"
"it's a branch''
"according to some ordering "
pn )})
endfor
I # calcGamble(T, D # {n}) "follow . # n path"
endfor
return G "subtree is processed"
else "it's a leaf ''
tasks are done"
return {(V, 1)}
else "some task failed"
return {(0, 1)}
endif
endif
In the first call the algorithm receives a "todo" task list
and a "done" task list #, all the subsequent calls
are recursive. To illustrate the idea behind this algorithm,
we refer to the payo#-probability tree in Figure 6. This tree
was built for the time allocations in Figure 5 and reflects
the precursor relations for this case.
Figure
5: CE maximizing time allocations for the
plan in Figure 1 for
Looking at the time allocation, we note that with probability
fails, the customer agent does not pay
or receive anything and stops the execution (path - 1 in the
tree). With probability -
p1 the agent proceeds with task
(path 1 in the tree). In turn, task 3 either fails with probability
p3 ), in which case the agent ends up stopping
the plan and paying a total of c1 (path 1 # - 3), or it is
completed with the corresponding probability -
p3 .
In the case where both 1 and 3 are completed, the agent
starts both 2 and 4 in parallel and becomes liable for paying
c2 and c4 respectively even if the other task fails (paths
fail, the resulting path in the tree is 1
the corresponding payo#-probability pair is framed in the
figure.
5.2 Computational Complexity
The computational complexity of the maximization procedure
is determined by two parameters: first, the procedure
itself is a non-linear maximization over 2N choice variables
with internal precedence constraints. Second, to calculate a
certainty equivalent value for every time schedule, the maximization
procedure should be able to build a corresponding
gamble and compute its expected utility.
For maximization we use the Nelder-Mead simplex (direct
search) method from the Matlab optimization toolbox.
The complexity of the calcGamble algorithm shown before
is O # 2 K-1
is the maximum number of
tasks that are scheduled to be executed in parallel.
The complexity estimate is based on the observation that
the depth of the payo#-probability tree is N and that any
subtree following an unsuccessful task execution has a depth
of no more than K- 1. The last statement follows from the
assumption that there are no more than K - 1 tasks running
in parallel to the one that failed and therefore no other
tasks will start after the failure was reported. Whether it is
possible to create an algorithm with significantly lower computational
costs is one of the questions we plan to address
in future research.
In commercial projects the ratio K/N is usually low, since
Figure
Payo#-probability tree for the time allocations
in Figure 5.
not many of these exhibit a high degree of the parallelism.
Our preliminary experiments, reported in Section 5.3, allow
us to conclude that K/N ratio is likely to be lower for risk-averse
agents (presumably, businessmen) than for risk-lovers
(gamblers).
5.3 Preliminary Experimental Results
We have conducted a series of experiments on the CE
maximization. Some of the results are summarized in Figure
7. In this figure, the y-axis shows 11 di#erent risk-
aversity r settings, the bottom x-axis - time t in the plan,
and the top x-axis - maximum CE value for each r setting.
The rounded horizontal bars in each of 11 sections denote
time allocations for each of six tasks with task 1 being on
top. Sections correspond to Figure
4 and Figure 5 respectively. Finally, the vertical bars
show the maximum CE values.
Let's examine the relative placement of time allocations
as a function of r. For this purpose we highlighted task 3
(black bars) and task 4 (white bars). Here task 3 has higher
variance of CDF and lower probability of success than task
4 (0.032 and 0.95 vs. 0.026 and 0.98), also task 3 is more
expensive (-15 vs. -7). There are four di#erent cases in
the experimental data:
1. Risk-loving agents tend to schedule tasks in parallel
and late in time in order to maximize the present value
of expected di#erence between reward and payo#s to
suppliers. This confirms the intuition from Figure 3 -
risk-lovers lean toward receiving high risky payments
rather than low certain payments.
2. Risk neutral and low risk-averse agents place risky task
3 first to make sure that the failure doesn't happen
too far in the project. Note, that they still keep task
2 running in parallel, so, in case 2 fails, they are liable
for paying the supplier of task 4 on success. One can
consider those agents as somewhat optimistic.
3. Moderately risk-averse agents try to dodge the situation
above by scheduling task 3 after task 2 is finished.
r
Figure
7: CE maximizing schedules and CE values for the plan in Figure 1 and r # [-0.03, 0.07].
These agents are willing to accept the plan, but their
expectations are quite pessimistic.
4. Highly risk-averse agents shrink task 1 interval to zero,
thus "cheating" to avoid covering any costs. One may
interpret this as a way of signaling a refusal of accepting
the plan.
6. ISSUES AND FUTURE RESEARCH
6.1 Multiple Local Maxima
One of the open issues is the existence of multiple local
maxima of CE, even in cases where task networks are fairly
simple. The reason for this is that the relative task placement
has two preferred configurations: independent individual
tasks can be either performed in parallel (thus increasing
the probability of successful completion) or they can be
scheduled in sequence to minimize overall payo#s, in case
one of tasks fails.
To illustrate the issue, we constructed a sample task net-work
with two parallel tasks. Task 1 has a higher variance of
completion time probability and lower probability of success
than task 2, everything else is the same. The resulting graph
of CE is shown in Figure 8. There are 3 local maxima in this
figure: one in the left side that corresponds to task 2 being
scheduled first in sequential order, another on the right side
corresponding to task 1 being first, and yet another one in
the furthermost corner of the graph representing both tasks
being scheduled at time 0 and executed in parallel.5010515task 1 start time
Figure
8: Local maxima for two parallel tasks.
In the course of the research, we were able to get around
the issue of multiple local maxima by starting the maximization
procedure from di#erent points. However, one may note
that the number of possible start points grows considerably
with the complexity of the task network and the algorithm
that checks each and every one of them is not scalable. A solution
we are considering is to use Simulated Annealing [20],
where each node in the search queue represents a local maximum
for some particular ordering of tasks.
6.2 Slack Allocation in RFQ
The last issue we want to address in this paper is how
to use the CE maximization procedure to construct RFQs
in the MAGNET framework. The CE maximizing schedule
contains information on what is the most desirable task
scheduling for the customer agent. However, it is hard to
imagine that there will always be bids that cover exactly the
same time intervals as in the maximizing schedule.
We suggest the following specify what
percentile # of the maximum CE value is considered acceptable
by the agent. then define the start time for the task
n as the set of values of t s
n , such that the CE of a schedule
that di#ers from the maximizing one only in the start time
of task n is no less than # of the maximum.
Graphically, this process is represented by building the
projection of the CE #-percentile graph (see Figure 9) on
the task n time axis. Assuming there is only one continuous
interval of t s
n values for every n # N , denote it as # t s-
Finally, submit the interval # t s-
and t f
are times from the maximizing schedule, as a part of
the RFQ.
taskstart
time 95%
90%
50%
Figure
9: Contours of some #-percentile graphs for
the CE graph in Figure 8.
This leaves several open questions for further study. For
instance, there could be more than one interval # t s-
for some tasks, so we need to distinguish them in the RFQ
composition. Also, it might be appropriate to decrease the
acceptable CE percentile for tasks involving goods and services
that are rare and do not attract many bids, and, at
the same time, increase the percentile for those tasks that
receive overly many bids. In addition, the RFQ might have
to be split in two or more parts, so that the requests for
rare goods and services are submitted first and the rest of
RFQ is composed after the bids for those rare products are
received. Deciding how and when to split the RFQs is still
an open question.
Although we do not specifically address the above mentioned
and related issues in the current paper, the CE maximization
approach promises to be powerful and flexible
enough to help us resolve those in our future research.
7. RELATED WORK
Expected Utility Theory [19] is a mature field of Eco-
nomics, that has attracted many supportive as well as critical
studies, both theoretical [12, 13] and empirical [23, 10].
We believe that expected utility will play an increasing role
in automated auctions, since it provides a practical way of
describing risk estimations and temporal preferences.
In our previous work on Expected Utility [1] we were
mostly concerned with computing the marginal expected
utility of completing successfully all the tasks within the
duration promised.
Our long term objective is to automate the scheduling
and execution cycle of an autonomous agent that needs the
services of other agents to accomplish its tasks. Pollack's
DIPART system [18] and SharedPlans [7] assume multiple
agents that operate independently but all work towards
the achievement of a global goal. Our agents are trying to
achieve their own goals and to maximize their profits; there
is no global goal.
Combinatorial auctions are becoming an important mechanism
not just for agent-mediated electronic commerce [8,
26, 22] but also for allocation of tasks to cooperative agents
(see, for instance, [9, 5]).
In [9] combinatorial auctions are used for the initial commitment
decision problem, which is the problem an agent
has to solve when deciding whether to join a proposed col-
laboration. Their agents have precedence and hard temporal
constraints. However, to reduce search e#ort, they
use domain-specific roles, a shorthand notation for collections
of tasks. In their formulation, each task type can be
associated with only a single role. MAGNET agents are
self-interested, and there are no limits to the types of tasks
they can decide to do. In [6] scheduling decisions are made
not by the agents, but instead by a central authority. The
central authority has insight to the states and schedules of
participating agents, and agents rely on the authority for
supporting their decisions. Nisan's bidding language [16] allows
bidders to express certain types of constraints, but in
MAGNET both the bidder and the bid-taker (the customer)
need to communicate constraints.
Inspite of the abundance of work in auctions [15], limited
attention has been devoted to auctions over tasks with
time constraints and interdependencies. In [17],
a method is proposed to auction a shared track line for
train scheduling. The problem is formulated with mixed
integer programming, with many domain-specific optimiza-
tions. Bids are expressed by specifying a price to enter a
line and a time window. The bidding language, which is
similar to what we use in MAGNET, avoids use of discrete
time slots. Time slots are used in [25], where a protocol for
decentralized scheduling is proposed. The study is limited
to scheduling a single resource. MAGNET agents deal with
multiple resources.
Most work in supply-chain management is limited to hierarchical
modeling of the decision making process, which is
inadequate for distributed supply-chains, where each organization
is self-interested, not cooperative. Walsh et al [24]
propose a protocol for combinatorial auctions for supply
chain formation, using a game-theoretical perspective. They
allow complex task networks, but do not include time con-
straints. MAGNET agents have also to ensure the scheduling
feasibility of the bids they accept, and must evaluate risk
as well. Agents in MASCOT [21] coordinate scheduling with
the user, but there is no explicit notion of money transfers
or contracts, and the criteria for accepting/rejecting a bid
are not explicitly stated. Their major objective is to show
policies that optimize schedules locally [11]. Our objective
is to optimize the customer's utility.
8.
ACKNOWLEDGMENTS
Partial support for this research is gratefully acknowledged
from the National Science Foundation under award
NSF/IIS-0084202.
9.
--R
Decision processes in agent-based automated contracting
Evaluating risk: Flexibility and feasibility in multi-agent contracting
A market architecture for multi-agent contracting
distributed control of a multirobot system.
Socially conscious decision-making
AI Magazine
A combinatorial auction for collaborative planning.
Estimating preferences under risk: The case of racetrack bettors.
Coordinated Supply Chain Scheduling.
Choice under uncertainty: Problems solved and unsolved.
Dynamic consistency and non-expected utility models of choice und er uncertainty
Microeconomic Theory.
Auctions and bidding.
Bidding and allocation in combinatorial auctions.
An auction-based method for decentralized train scheduling
Planning in dynamic environments: The DIPART system.
Risk aversion in the small and in the large.
Modern Heuristic Techniques for Combinatorial Problems.
MASCOT: an agent-based architecture for coordinated mixed-initiative supply chain planning and scheduling
An algorithm for winner determination in combinatorial auctions.
An empirical analysis of the economic value of risk changes.
Combinatorial auctions for supply chain formation.
Auction protocols for decentralized scheduling.
The Michigan Internet AuctionBot: A configurable auction server for human and software agents.
--TR
Modern heuristic techniques for combinatorial problems
A market architecture for multi-agent contracting
The Michigan Internet AuctionBot
Evaluating risk
Socially conscious decision-making
Bidding and allocation in combinatorial auctions
Combinatorial auctions for supply chain formation
An auction-based method for decentralized train scheduling
Decision Processes in Agent-Based Automated Contracting
An Algorithm for Optimal Winner Determination in Combinatorial Auctions
A Combinatorial Auction for Collaborative Planning
--CTR
Alexander Babanov , John Collins , Maria Gini, Scheduling tasks with precedence constraints to solicit desirable bid combinations, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, July 14-18, 2003, Melbourne, Australia
Gleser K. Demir , Maria Gini, Risk and user preferences in winner determination, Proceedings of the 5th international conference on Electronic commerce, p.150-157, September 30-October 03, 2003, Pittsburgh, Pennsylvania
John Collins , Wolfgang Ketter , Maria Gini, A Multi-Agent Negotiation Testbed for Contracting Tasks with Temporal and Precedence Constraints, International Journal of Electronic Commerce, v.7 n.1, p.35-57, Number 1/Fall 2002
Alexander Babanov , John Collins , Maria Gini, Asking the right question: Risk and expectation in multiagent contracting, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, v.17 n.3, p.173-186, June
Ka-Man Lam , Ho-fung Leung, A Trust/Honesty Model with Adaptive Strategy for Multiagent Semi-Competitive Environments, Autonomous Agents and Multi-Agent Systems, v.12 n.3, p.293-359, May 2006 | multi-agent contracting;automated auctions;risk estimation;expected utility |
544775 | Multi-issue negotiation under time constraints. | This paper presents a new model for multi-issue negotiation under time constraints in an incomplete information setting. In this model the order in which issues are bargained over and agreements are reached is determined endogenously as part of the bargaining equilibrium. We show that the sequential implementation of the equilibrium agreement gives a better outcome than a simultaneous implementation when agents have like, as well as conflicting, time preferences. We also show that the equilibrium solution possesses the properties of uniqueness and symmetry, although it is not always Pareto-optimal. | INTRODUCTION
Agent mediated negotiation has received considerable attention in
the field of electronic commerce [14, 9, 7]. In many of the applications
that are conceived in this domain it is important that the
agents should not only bargain over the price of a product, but also
take into account aspects like the delivery time, quality, payment
methods, and other product specific properties. In such multi-issue
negotiations, the agents should be able to negotiate outcomes that
are mutually beneficial for both parties [11]. However the complexity
of the bargaining problem increases rapidly as the number
of issues increases. Given this increase in complexity, there is a
need to develop software agents that can operate effectively in such
circumstances. To this end, this paper reports on the development
of a new model for multi-issue negotiation between two agents.
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
AAMAS '02, July 15-19, Bologna, Italy
In such bilateral multi-issue negotiations, one approach is to bundle
all the issues and discuss them simultaneously. This allows the
players to exploit trade-offs among different issues, but requires
complex computations to be performed [11]. The other approach
- which is computationally simpler - is to negotiate the issues
sequentially. Although issue-by-issue negotiation minimizes the
complexity of the negotiation procedure, an important question that
arises is the order in which the issues are bargained. This ordering
is called the negotiation agenda. Moreover, one of the factors that
determines the outcome of negotiation is this agenda [4]. To this
end, there are two ways of incorporating agendas in the negotiation
model. One is to fix the agenda exogenously as part of the
negotiation procedure [4]. The other way, which is more flexible,
is to allow the bargainers to decide which issue they will negotiate
next during the process of negotiation. This is called an endogenous
agenda [5]. Against this background, this paper presents a
multi-issue negotiation model with an endogenous agenda.
To provide a setting for our negotiation model, we consider the
case in which negotiation needs to be completed by a specified
time (which may be different for the different parties). Apart from
the agents' respective deadlines, the time at which agreement is
reached can effect the agents in different ways. An agent can gain
utility with time, and have the incentive to reach a late agreement
(within the bounds of its deadline). In such a case it is said to be a
strong (patient) player. The other possibility is that it can lose utility
with time and have the incentive to reach an early agreement.
It is then said to be a weak (impatient) player. As we will show,
this disposition and the actual deadline itself strongly influence the
negotiation outcome. Other parameters that effect the outcome include
the agents'strategies, their utilities and their reservation lim-
its. However, in most practical cases agents do not have complete
information on all of these parameters. Thus in this work we focus
on bilateral negotiation between agents with time constraints and
incomplete information.
To this end, Fatima et al presented a single-issue model for negotiation
between two agents under time constraints and in an incomplete
information setting [3]. Within this context, they determined
optimal strategies for agents but did not address the issue of
the existence of equilibrium. Here we adopt this framework and
prove that mutual strategic behavior of agents, where both use their
respective optimal strategies, results in equilibrium. We then extend
this framework for multi-issue negotiation between a buyer
and a seller for the price of more than one good/service. Specifi-
cally, each agent has a deadline before which agreement must end
on all the issues. However, the order in which issues are bargained
over and agreements are reached is determined by the equilibrium
strategies. These strategies optimize the time at which an issue is
settled and are therefore appropriate for the sequential implementation
scheme. Moreover, we show that the sequential implementation
of the equilibrium agreement results in an outcome that is
no worse than the outcome for the simultaneous implementation,
both when agents have like as well as conflicting time preferences.
Finally, we study the properties of the equilibrium solution.
This work extends the state of the art by presenting a more realistic
negotiation model that captures the following three aspects
of many real life bargaining situations. Firstly, it is a model for
negotiating multiple issues. Secondly, it takes the time constraints
of bargainers into consideration. Thirdly it allows agents to have
incomplete information about each other.
In section 2 we first give an overview of the single-issue negotiation
model of [3] and then prove that the mutual strategic behavior
of agents where both use their respective optimal strategies results
in equilibrium. In section 3 we extend this model to allow multi-issue
negotiation and study the properties of the equilibrium solu-
tion. Section 4 discusses related work. Finally section 5 gives the
conclusions.
2. SINGLE-ISSUENEGOTIATIONMODEL
In this section we first provide an overview of the single issue negotiation
model and a brief description of the optimal strategies as
determined in [3]. Due to lack of space, we describe the optimal
strategy determination for one specific negotiation scenario. We
then prove that the optimal strategy profiles form sequential equilibrium
points.
2.1 The Negotiation Protocol
This is basically an alternating offers protocol. Let b denote the
buyer, s the seller and let [P a
denote the range of values
for price that are acceptable to agent a, where a 2 fb; sg. A
value for price that is acceptable to both b and s, i.e., the zone of
agreement, is the interval [P s
min ) is the
price-surplus. T a denotes agent a's deadline. Let p t
b!s denote the
price offered by agent b at time t. Negotiation starts when the first
offer is made by an agent. When an agent, say s, receives an offer
from agent b at time t, i.e., p t b!s , it rates the offer using its utility
function U s . If the value of U s for p t
b!s at time t is greater than
the value of the counter-offer agent s is ready to send at time t 0 , i.e.,
s!b with t 0 > t then agent s accepts. Otherwise a counter-offer is
made. Thus the action A that agent s takes at time t is defined as:
Accept if U s (p t
s!b otherwise
2.2 Counter-offer generation
Since both agents have a deadline, we assume that they use a time
dependent tactic (e.g. linear (L), Boulware (B) or Conceder (C)) [2]
for generating the offers. In these tactics, the predominant factor
used to decide which value to offer next is time t. The tactics vary
the value of price depending on the remaining negotiation time,
modeled as the above defined constant T a . The initial offer is a
point in the interval [P a
]. The constant k a multiplied
by the size of interval determines the price to be offered in the first
proposal by agent a (as per [2]). The offer made by agent a to agent
b at time t (0 < t T a ) is modeled as a function a depending on
time as follows:
P a
min
P a
min
Price
Time
min
O
Figure
1: Negotiation outcome for Boulware and Conceder
functions
A wide range of time dependent functions can be defined by varying
the way in which a (t) is computed (see [2] for more details).
However, functions must ensure that 0 a (t) 1, a
and a (T a That is, the offer will always be between the
range [P a
max ], at the beginning it will give the initial constant
and when the deadline is reached it will offer the reservation
value. Function a (t) is defined as follows:
a
T a
These families of negotiation decision functions (NDF) represent
an infinite number of possible tactics, one for each value of .
However, depending on the value of , two extreme sets show
clearly different patterns of behaviour.
1. Boulware [11]. For this tactic < 1 and close to zero. The
initial offer is maintained till time is almost exhausted, when
the agent concedes up to its reservtion value.
2. Conceder [10]. For this tactic is high. The agent goes
to its reservation value very quickly and maintains the same
offer till the deadline. Finally price is increased
linearly.
The value of a counter offer depends on the initial price (IP) at
which the agent starts negotiation, the final price (FP) beyond which
the agent does not concede, and T a . A vector V of these four
forms the agent's strategy. Let
min The negotiation outcome O
is an element of f(p; t); Cg, where the pair (p; t) denotes the price
and time at which agreement is reached and C denotes the conflict
outcome.
For example, when b's strategy is defined as [P s
and s's strategy is defined as [P b
the outcome
(O1 ) that results is shown in figure 1. As shown in the figure,
agreement is reached at a price P s
at a time close to T. Similarly when the NDF in both strategies is
replaced with C, then agreement (O2 ) is reached at the same price
but towards the beginning of negotiation.
2.3 Agents' information state
Each agent has a reservation limit, a deadline, a utility function and
a strategy. Thus the buyer and seller each have four parameters
denoted hP b
The outcome of negotiation depends on all these eight parameters.
The information state I a of an agent a is the information it has
about the negotiation parameters. An agent's own parameters are
known to it, but the information it has about the opponent is not
complete. I b and I s are taken as:
I
and
I
are the information about b's own parameters
and L s
p and L s
t are its beliefs about s. Similarly, P s
min ,
are s's own parameters and L b p and L b t are its beliefs
about b. L s
p and L s
t are two lotteries that denote b's beliefs
about s's deadline and reservation price, where
minh ] such that (P s
minh ) and
h ] such that (T s
l < T s
Similarly L b
p and L b
t are two lotteries that denote s's beliefs about
b's deadline and reservation price, where
maxh ] such that (P b
and
h ] such that (T b
l < T b
Thus agents have uncertain information about each other's deadline
and reservation price. However, the agents do not know their
opponent's utility function or strategy.
Agents' utilities are defined with the following two von Neumann-Morgenstern
utility functions [6] that incorporate the effect of time
discounting. U a (p;
(p)U a
and U a
are uni-dimensional
utility functions. Here, preferences for attribute p,
given the other attribute t, do not depend on the level of t. U a
is defined as:
U a
for the buyer
min for the seller
U a
t is defined as U a
is the discounting factor.
Thus when (- a > 1) the agent is patient and gains utility with time
and when (- a < 1) the agent is impatient and loses utility with
time. Note that the agents may have different discounting factors.
Agents are said to have similar time preferences if both gain on
time or both lose on time. Otherwise they have conflicting time
preferences. Each agent's information is its private information
that is not known to the opponent.
2.4 Optimal strategies
We describe how optimal strategies are obtained for players that
are von Neumann-Morgenstern expected utility maximizers. Since
utility is a function of price and time, these strategies optimize both.
The discussion is from the perspective of the buyer (although the
same analysis can be taken from the perspective of the seller). b
believes that with probability
b , s's deadline is T s
l and with probabilityb it is T s
h . Let T b denote b's own deadline. This
gives rise to three relations between agent deadlines; (T b > T s
l < T b < T s
l ). For each of the two possible
realizations of b's discounting factor, these three relations can hold
between agent deadlines. In other words, there are six possible scenarios
(N1::N6 ) under which negotiation can take place. Due to
lack of space we describe here how the optimal strategies are obtained
for one specific scenario - N1 , i.e, when b gains on time (i.e.,
h .
prefers to reach agreement at the latest possible
time and at the lowest possible price. The optimal strategy for b is
determined using backward induction. The optimal price (P s
the optimal time (T s
are determined first, and then a strategy that
ensures agreement at P s
s
Time
Price
s
minh
min
Figure
2: Possible buyer strategies in a particular negotiation
scenario
No matter which strategy (B, C or L) s uses, it is bound to reach
its reservation price by its deadline. Since both possible values of
s's deadline are less than T b , b's optimal strategy would be never
to offer a price more than s's reservation price (P s
minl or P s
Moreover, it should offer this price at the latest possible time, which
is s's deadline (T s
l or T s
h ). This is because s quits if agreement is
not reached by then.
From its beliefs, it is known to b that s's reservation price, deadline
pair could be one of (P s
l
h
l
or (P s
One of these four pairs is (P s
). The possible
strategies that b can use are S bif s's reservation price deadline pair
is
l
if it is (P s
h
3 if it is (P s
l ) or S bif it is (P s
h ). These strategies are depicted in figure 2. Out
of these four possible strategies, the one that results in maximum
expected utility (EU) is b's optimal strategy (note that b's optimal
strategy does not depend on s's strategy). The EUs from these four
strategies depend on b and
b .
An agent's utility from price is independent of its utility from
time, i.e, the buyer always prefers a low price to a high price, and
for a given price it always prefers a late agreement to an early one.
In order to simplify the process of finding the optimal strategy, we
assume there is only one possible value, P s
minl , for
s's reservation price. This leaves us with only two strategies, S band S b(see figure 2). The EUs 1 from these strategies are:
l < t T s
h and
Out of these two, the one that gives a higher utility is optimal. EU1
and EU2 depend on the value of
b . So
b is varied between 0 and
1 and EU1 and EU2 are computed for different values of
b . A
comparison of these two utilities shows that for a particular value
of
b , say
c , . For (
b <
for (
c is crucial in determining the
optimal strategy. This computation gives the optimal time (T s
reaching agreement. T s
l if (
c ) or T s
b <
The next step is to find the optimal price. Assume that (
which means that S bis better than S b. This implies that the optimal
time for reaching agreement is T s
l and not T s
h . Strategies S band
bcan result in an agreement only after T s
l , since the price offered
prior to T s
l is unacceptable to the seller. Thus neither S bnor S bcan be optimal. The optimal strategy is S bor S bdepending on the
1 Utility from conflict to both agents is less than zero.
Negotiation Optimal
Scenario Strategy
all values of t
l
l
l
all values of t
l ; C] for all values of t
l
l
l
all values of t
Table
1: Optimal strategies for the buyer
value of b . The expressions for EU1 and EU3 are:
and
l
l ). Here T denotes
the time at which b offers P s
minl . b is varied between 0 and 1 and
EU1 and EU3 are computed. For
c )EU1 > EU3 . This
gives the optimal price P s
reaching agreement. P s
minl if
c ) or P s
c ). Assume that ( b < b
c ). This
means that the optimal price is P s
minh and the optimal strategy is
l ; B]. Thus S bresults in an outcome that is
optimal in both the price (P s
l ).
Assume that the seller also gains on time and T
l and
minh . Let the values of T b
l and T b
h in s's lottery be T b
and some value greater than T b respectively. Since both possible
values of the buyer's deadline are greater than its own, irrespective
of its value for
s , it has to concede up to P s
min by T s . Thus from
the seller's perspective, the optimal price P b
min and the optimal
In such a scenario, the optimal strategy for s
is to start at some high price, make small concessions till its deadline
is almost reached and then offer the reservation price P s
min at
using the Boulware NDF, i.e., S
In
order for the b and s strategies to converge, the values of b and
b in b's lotteries should be such that ( b < b
c ) and (
When these conditions are satisfied P s
. The optimal strategies S b
3 S s then converge and result in an
agreement at price P s
minh and time T s
l (see figure 2). b gets the
price-surplus.
When both buyer and seller lose utility on time, the optimal strategy
for them is to offer P s
minh at the earliest opportunity. This can
be done using a Conceder NDF that results in agreement at the
same price P s
minh but towards the beginning of negotiation. In the
same way, optimal strategies are obtained for the remaining negotiation
scenarios. These are summarised in table 1. T 0 denotes the
beginning of negotiation. A similar kind of analysis is made from
the seller's perspective to obtain P b
in the six possible sce-
narios. In each of these scenarios, the agents' optimal strategies do
not depend on their opponent's strategy. Again see [3] for details.
There are many scenarios in which negotiation can take place.
These depend on the agents' attitude towards time and the relationship
between their deadlines. As stated earlier in this section,
there are six possible scenarios from the buyer's perspective, on the
basis of which it selects its strategy. Similarly from the seller's perspective
there are also six possible scenarios. But the negotiation
outcome depends on all possible ways in which interaction between
Case 1, 2 and 3 Case 4
Deadline Seller's Outcome Outcome
Ordering Deadline b's deadline b's deadline
l (P s
l
l
l T b
l T b
l T b
l T b
l (P b
l
l T b
l T b
l
l T b
l T b
l (P s
l
l
l T b
l T b
l
l T b
l T b
l (P s
l
l
l T b
l T b
l
l T b
l T b
l (P b
l
l
l T b
l T b
l
l T b
l T b
l (P b
l
l
l T b
l T b
l
l T b
l T b
Table
2: Outcome of negotiation when both agents use their
respective optimal strategies
b and s can take place. There can be six possible orderings on the
agent deadlines:
1. T s
l < T s
l < T b
2. T b
l < T b
l < T s
3. T s
l < T b
l < T s
4. T s
l < T b
l < T b
5. T b
l < T s
l < T s
l < T s
l < T b
For each of these orderings, the agents' attitudes towards time could
be one of the following:
1. Both buyer and seller gain utility with time (Case 1).
2. Buyer gains and seller loses utility with time (Case 2).
3. Buyer loses and seller gains utility with time (Case 3).
4. Both buyer and seller lose utility with time (Case 4).
Thus in total there are 24 possible negotiation scenarios and the
outcome of negotiation depends on the exact scenario. A summary
of these is given in table 2. P s
indicates that the price-surplus goes
to b and P b
indicates that the price-surplus goes to s. As seen
in this table, the price-surplus always goes to the agent with the
longer deadline. The time of agreement is T 0 (which denotes the
beginning of negotiation) if both agents lose on time, and the earlier
Buyer
Seller
Buyer
I
I I I
43
Figure
3: Extensive form of the negotiation game
deadline if at least one agent gains on time. Note that these are
the outcomes that will result if the agents' beliefs about each other
satisfy the following conditions for convergence of strategies.
1. (
b <
l ) for b.
2.
minl ) for b.
3. (
s <
s
s >
s
l ) for s.
4. ( s < s
maxl
The similarity between these results and those of Sandholm and
Vulkan [15] on bargaining with deadlines is that, in both cases,
the price-surplus always goes to the agent with the longer dead-
line. However, the difference is that in [15] the deadline effect
overrides time discounting, whereas here the deadline effect does
not override time discounting. This happens because in [15] the
agents always make offers that lie within the zone of agreeement.
In our model, agents initially make offers that lie outside this zone,
and thereby delay the time of agreement. Thus when agents have
conflicting time preferences, in our case, agreement is reached near
the earlier deadline, but in [15] agreement is reached towards the
beginning of negotiation.
The single issue negotiation model of [3] only determines optimal
strategies for agents on the basis of available information and
shows the resulting outcome. However such an outcome is only
possible if this mutual strategic behavior of agents leads to equi-
librium. In the following subsection we prove this by using the
standard game theoretic solution concept of sequential equilibrium.
2.5 Equilibrium agreements
Since agents do not have information about their opponent's strategy
or utility, negotiation can be considered as a game G of incomplete
information. A strategy profile and belief system pair is a
sequential equilibrium of an extensive game if it is sequential rational
and consistent [8]. A system of beliefs in an extensive form
game G is a specification of a probability x 2 [0; 1] for each decision
node x in G such that
information sets
I . In other words, represents the agent's beliefs about the history
of negotiation. The player's strategies satisfy sequential rationality
if for each information set of each player a, the strategy of player a
is a best response to the other player's strategies, given a's beliefs
at that information set. The requirement for to be consistent with
the strategy profile is as follows. Even at an infromation set that
is not reached if all players adhere to their strategies, it is required
that a player's belief be derived from some strategy profile using
Bayes' rule.
THEOREM 1. There exists sequential equilibrium of G at the
point [P b
for the negotiation
scenario corresponding to case 1 and deadline ordering D1, where
minl if (P s
minl
minh
l if (T
l ) or T s
PROOF. The first three levels of the extensive form of this game
are shown in figure 3. At node 1 one of the players, say b, starts
negotiation by using its optimal strategy [P b
reaches node 2. At this level it is player s's turn to make a de-
cision. I 1 becomes the information set for s since it is unaware
of the strategy used by b and hence does not know which of the
three nodes 2, 3 or 4 play has reached. However, irrespective of
which node play reaches at this level (i.e., irrespective of s's belief
about the history of negotiation), the dominant strategy for s
is [P s
Play now reaches node 5 (since both agents
use B) at which b makes a move. At this point b does not know
exactly which node the play is at, but it knows with certainty that
its information set I 2 is reached (the probability of reaching other
decision nodes at this level is 0). The dominant strategy for b at
this information set (and at all others) is [P b
at every information set at which it is b's turn to make a move,
its optimal strategy is [P b
B], and at every information
set at which it is s's turn to make a move, its optimal strategy
is [P s
B]. The strategy profile [P b
therefore satisfies the requirements for sequential
rationality. Furthermore, at every information set, the optimal
strategies are also dominant strategies. This makes the strategy
profile [P b
point irrespective of the agents' beliefs about the history of
negotiation.
COROLLARY 1. The optimal strategy profile constitutes a unique
equilibrium.
PROOF. This is a direct consequence of the above proof. As the
optimal strategies for both agents are dominant strategies at each
of their information sets, there does not exist any other equilibrium
(neither a pure nor a mixed strategy) where an agent uses a strategy
other than its optimal strategy.
In the same way, sequential equilibrium can be shown to exist
when agents use their optimal strategies in all the remaining negotiation
scenarios.
3. MULTI-ISSUE NEGOTIATION
We now extend the above model for multi-issue bargaining where
the issues are independent 2 of each other. Assume that buyer, b,
and seller, s, that have unequal deadlines, bargain over the price of
two distinct goods/services, X and Y. Negotiation on all the issues
must end before the deadline. We consider two goods/services in
order to simplify the discussion but this is a general framework that
works for more than two goods/services.
3.1 Agents' information state
Let the buyer's reservation prices for X and Y be P^b_x and P^b_y and the
seller's reservation prices be P^s_x and P^s_y respectively. The buyer's
information state, I^b, comprises the information about its own
parameters together with L^s_x, L^s_y and L^s_T, three lotteries that denote its beliefs
about the opponent's parameters. L^s_x = [P^s_xL, P^s_xH] is the lottery on the seller's reservation
price for X such that P^s_xL < P^s_xH, L^s_y = [P^s_yL, P^s_yH] is the lottery on the seller's reservation
price for Y such that P^s_yL < P^s_yH, and L^s_T = [T^s_l, T^s_h] is the lottery on the seller's deadline
such that T^s_l < T^s_h.
Similarly, the seller's information state, I^s, is defined in terms of its own
parameters and its lotteries over the buyer's reservation prices and deadline.
2 Independence is a common and reasonable assumption to make in
this context. Future work will deal with the dependent case.
An agent's information state is its private knowledge. The agents'
utility functions are defined additively over the two issues, with each issue
discounted by its own discounting factor; for instance, the buyer's utility from
an agreement on X at price p_x and time t is (P^b_x - p_x)(δ^b_x)^t, and the
seller's is (p_x - P^s_x)(δ^s_x)^t.
Note that the discounting factors are different for different issues.
This allows agents' attitudes toward time to be different for different
issues.
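As an illustration, the short Python sketch below encodes such an additive utility with a separate discounting factor per issue. The functional form (reservation-price surplus multiplied by delta raised to the agreement time) is our assumption, consistent with the expressions used later in section 3.4, and all numerical values are illustrative rather than taken from the paper.

# Hedged sketch: additive multi-issue utility with per-issue discounting.
# The functional form is an assumption based on the expressions of section 3.4.

def buyer_utility(P_bx, P_by, delta_bx, delta_by, px, py, tx, ty):
    """Buyer's utility from agreeing on X at (px, tx) and Y at (py, ty)."""
    return (P_bx - px) * (delta_bx ** tx) + (P_by - py) * (delta_by ** ty)

def seller_utility(P_sx, P_sy, delta_sx, delta_sy, px, py, tx, ty):
    """Seller's utility: the amount by which price exceeds its reservation price."""
    return (px - P_sx) * (delta_sx ** tx) + (py - P_sy) * (delta_sy ** ty)

# Example (illustrative numbers): delta > 1 models an agent that gains utility
# with time on an issue, delta < 1 one that loses utility with time.
print(buyer_utility(100, 80, 0.9, 1.1, 60, 50, 2, 5))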
3.2 Negotiation protocol
Again we use an alternating offers negotiation protocol. There are
two types of offers. An offer on just one good is referred to as
a single offer and an offer on two goods is referred to as a combined
offer. One of the agents starts by making a combined offer.
The other agent can accept/reject part of the offer (single issue) or
the complete offer. If it rejects the complete offer, then it sends a
combined counter-offer. This process of making combined offers
continues till agreement is reached on one of the issues. Thereafter
agents make offers only on the remaining issue (i.e., once agreement
is reached on an issue, it cannot be renegotiated). Negotiation
ends when agreement is reached on both the issues or a deadline is
reached. Thus the action A that agent s takes at time t on a single
offer is as defined in section 2.1. Its action on a combined offer,
(X^t_{b->s}, Y^t_{b->s}), is defined as:
1. Quit if t > T^s.
2. Accept X^t_{b->s} if U^s(X^t_{b->s}) is no less than the utility s would obtain from its own counter-offer on X at the next time step.
3. Accept Y^t_{b->s} if U^s(Y^t_{b->s}) is no less than the utility s would obtain from its own counter-offer on Y at the next time step.
4. Offer X^{t'}_{s->b} if X^t_{b->s} is not accepted.
5. Offer Y^{t'}_{s->b} if Y^t_{b->s} is not accepted.
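The acceptance rule above can be sketched as follows (Python). The comparison of the offered price against the utility of the counter-offer the seller would itself make one step later is our reading of the rule; the seller model, the linear concession rule standing in for the offer-generation method of section 2.2, and all numbers are purely illustrative.

# Hedged sketch of agent s's action on a combined offer at time t.

def seller_issue_utility(seller, issue, price, t):
    # Seller gains (price - reservation price), discounted per issue.
    return (price - seller["reserve"][issue]) * seller["delta"][issue] ** t

def seller_counter_offer(seller, issue, t):
    # Placeholder offer-generation rule: concede linearly from a high opening
    # price towards the reservation price as the deadline approaches.
    hi, lo = seller["open"][issue], seller["reserve"][issue]
    frac = min(t / seller["deadline"], 1.0)
    return hi - (hi - lo) * frac

def act_on_combined_offer(seller, offer, t):
    if t > seller["deadline"]:
        return [("quit",)]
    actions = []
    for issue, offered_price in offer.items():
        counter = seller_counter_offer(seller, issue, t + 1)
        if seller_issue_utility(seller, issue, offered_price, t) >= \
           seller_issue_utility(seller, issue, counter, t + 1):
            actions.append(("accept", issue, offered_price))
        else:
            actions.append(("offer", issue, counter))   # reject and counter-offer
    return actions

s = {"reserve": {"X": 40, "Y": 30}, "open": {"X": 90, "Y": 70},
     "delta": {"X": 0.9, "Y": 1.05}, "deadline": 10}
print(act_on_combined_offer(s, {"X": 55, "Y": 35}, t=3))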
A counter-offer for an issue is generated using the method described
in section 2.2. Although agents initially make offers on
both issues, there is no restriction on the price they offer. Thus
by initially offering a price that lies outside the zone of agreement,
an agent can effectively delay the time of agreement for that issue.
For example, b can offer a very low price which will not be acceptable
to s and s can offer a price which will not be acceptable to b.
In this way, the order in which the issues are bargained over and
agreements are reached is determined endogenously as part of the
bargaining equilibrium rather than imposed exogenously as part of
the game tree.
Two implementation rules are possible for this protocol [4]. One
is sequential implementation in which agreement on an issue is implemented
as soon as it is settled; and the other is simultaneous
implementation in which agreement is implemented only after all
the issues are settled. We first list the equilibrium agreements in
different negotiation scenarios and then compare the outcome that
results from the sequential implementation with that of the simultaneous
implementation.
Table 3: Equilibrium agreements for two issues X and Y, listing for each negotiation scenario the time of agreement on each issue.
3.3 Equilibrium strategies
We assume that the conditions for convergence (as listed in section
are satisfied for both X and Y. As agents negotiate over
the price of two distinct goods/services, the equilibrium strategies
for the single issue model can be applied to X and Y independently
of each other. The equilibrium agreements in different negotiation
scenarios are listed in table 3. T equals T^b if (T^b < T^s) and T^s otherwise; T^0
denotes the beginning of negotiation. G indicates
that the agent gains utility on time and L indicates that it loses on
time. The price-surplus on both issues always goes to the agent
with the longer deadline (see section 2.4).
Consider a situation where both b and s lose on time on X and
gain on time on Y (row 11 of table 3). Let T b > T s . Assuming the
conditions for convergence are satisfied, b's equilibrium strategies
for X and Y, and the corresponding strategies for s, are those given in table 1.
During the process of negotiation, agents generate
offers using these strategies. This results in an agreement on X
towards the beginning of negotiation, and on Y at time T s (which
is the earlier deadline). The price-surplus for X and Y goes to the
agent with the longer deadline, i.e., b.
3.4 Implementation schemes
Any two strategies (S^b, S^s) lead to an outcome of the game. If
S^b and S^s are the equilibrium strategies, then the outcome is an
agreement on X at time t and price p_x and an agreement on Y at
time τ and price p_y. Payoffs for this outcome depend on the rules
by which agreements are implemented.
Sequential implementation. Exchange of a given good/service
takes place at the time of agreement on a price for that good/service.
The agents' utilities from the strategy pair (S^b, S^s) leading
to agreements (p_x, t) and (p_y, τ) are:
U^b = (P^b_x - p_x)(δ^b_x)^t + (P^b_y - p_y)(δ^b_y)^τ
U^s = (p_x - P^s_x)(δ^s_x)^t + (p_y - P^s_y)(δ^s_y)^τ
Simultaneous implementation. Exchange of goods/services
takes place only after agreement is reached on prices of all
the goods. The agents' utilities for this rule, with both exchanges
discounted at the later agreement time, here written τ, are:
U^b = (P^b_x - p_x)(δ^b_x)^τ + (P^b_y - p_y)(δ^b_y)^τ
U^s = (p_x - P^s_x)(δ^s_x)^τ + (p_y - P^s_y)(δ^s_y)^τ
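Using the same hedged utility form as the sketch in section 3.1, the code below compares the two implementation rules for one illustrative set of values (our own, not from the paper); the only difference is the time at which each exchange is discounted.

# Hedged sketch: sequential vs. simultaneous implementation of the same
# equilibrium agreements (px, t) and (py, tau).  All numbers are illustrative.

def buyer_payoff(P_bx, P_by, d_bx, d_by, px, py, t_x, t_y):
    return (P_bx - px) * d_bx ** t_x + (P_by - py) * d_by ** t_y

px, py, t, tau = 60.0, 50.0, 0, 6                 # X settled early, Y at the earlier deadline
P_bx, P_by, d_bx, d_by = 100.0, 80.0, 0.9, 0.9    # buyer loses on time on both issues

sequential   = buyer_payoff(P_bx, P_by, d_bx, d_by, px, py, t, tau)
simultaneous = buyer_payoff(P_bx, P_by, d_bx, d_by, px, py, max(t, tau), max(t, tau))
print(sequential >= simultaneous)   # True: delaying the X exchange only loses utility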
Since the equilibrium strategies optimize the time (and price) of
agreement on an issue, it seems obvious that the agents will be better
off if exchange takes place sequentially rather than simultane-
ously. However, since agents can have like as well as conflicting
time preferences, it is important to determine whether sequential implementation
proves better than simultaneous implementation for both
agents under all negotiation scenarios. We show below that sequential
implementation of the equilibrium agreement always gives a
better outcome than simultaneous implementation.
THEOREM 2. The outcome generated by sequential implementation
is no worse than the outcome for simultaneous implementation
for both agents.
PROOF. When at least one of the agents gains on time on an
issue, say X, then the equilibrium strategies result in an agreement
on that issue at the earlier of the two deadlines, and the price-surplus
goes to the agent with the longer deadline. When both
agents lose on time on X, then agreement is reached towards the
beginning of negotiation, at time T^0.
As shown in table 3, the agents have like
time preferences in the first and last rows. In all other cases they
have conflicting preferences on at least one issue. There are three
possible ways in which agreement can be reached between agents.
We analyze each of these cases below.
1. Both issues are agreed upon near the earlier deadline. Here
t = τ = T. Such an agreement is possible only if, for every issue,
at least one agent gains on time. If p_x and p_y are the prices
that are agreed for X and Y, both implementation schemes yield the
same utility to both b and s, since every exchange is discounted at the same time T:
U^b = (P^b_x - p_x)(δ^b_x)^T + (P^b_y - p_y)(δ^b_y)^T
U^s = (p_x - P^s_x)(δ^s_x)^T + (p_y - P^s_y)(δ^s_y)^T
2. Both issues are agreed upon towards the beginning of ne-
gotiation. This happens when both agents lose on time on
both the issues. As in case 1, the expressions for utility from
sequential and simultaneous schemes yield the same values
since t = τ = T^0.
3. One issue is agreed towards the beginning of negotiation and
the other near the earlier deadline. This occurs when both
agents lose on time on one of the issues, say X, and at least
one agent gains on time on the other issue, say Y. Here t = T^0 and τ = T,
and the buyer's utilities from the sequential and simultaneous
implementations are, respectively,
U^b = (P^b_x - p_x)(δ^b_x)^{T^0} + (P^b_y - p_y)(δ^b_y)^τ
and
U^b = (P^b_x - p_x)(δ^b_x)^τ + (P^b_y - p_y)(δ^b_y)^τ.
The utility from X is greater for sequential implementation
since T^0 < τ and both agents lose on time. The utility from Y
is equal for both schemes. As a result, sequential implementation
gives a total utility that is higher than simultaneous
implementation. The utility that the seller gets under the two schemes is, respectively,
U^s = (p_x - P^s_x)(δ^s_x)^{T^0} + (p_y - P^s_y)(δ^s_y)^τ
and
U^s = (p_x - P^s_x)(δ^s_x)^τ + (p_y - P^s_y)(δ^s_y)^τ.
As s also loses on time on X, its utility from X is higher for
sequential implementation giving a higher cumulative utility
than simultaneous implementation. Thus sequential implementation
always gives a better outcome than simultaneous
implementation.
The same argument holds good when b and s negotiate over more
than two issues. Thus from the perspective of both agents, sequential
implementation proves to be a better implementation scheme
than simultaneous implementation.
3.5 Properties of equilibrium solution
The main focus in the design of a negotiation model is on the properties
of the outcome, since the choice of a model depends on the
attributes of the solution it generates. We therefore study some important
properties [8] of the equilibrium agreement.
1. Uniqueness. If the solution of the negotiation game is unique,
then it can be identified unequivocally.
THEOREM 3. For each negotiation scenario, the proposed
negotiation model has a unique equilibrium agreement.
PROOF. There are n independent negotiation issues each
of which has a single equilibrium agreement (see section 2.5
for proof). This gives a unique equilibrium agreement for all
issues.
2. Symmetry. A bargaining mechanism is said to be symmetric
if it does not treat the players differently on the basis of
inappropriate criteria. Exactly what constitutes inappropriate
criteria depends on the specific domain. The proposed
negotiation mechanism possesses the property of symmetry
since the outcome does not depend on which player starts the
process of negotiation.
THEOREM 4. In all negotiation scenarios, the bargaining
outcome is independent of the identity of the first player.
PROOF. As shown in table 3, there are two time points
at which agreements can be reached: T^0, which denotes the
beginning of negotiation, or T, which is the earlier deadline.
At these time points one of the agents (either b or s depending
on whose turn it is) offers the equilibrium solution which the
other agent accepts.
3. Efficiency. An agreement is efficient if there is no wasted
utility, i.e, the agreement satisfies Pareto-optimality. The
equilibrium solution in the proposed model is Pareto-optimal
in some, but not all, negotiation scenarios.
THEOREM 5. When players have opposite time preferences
on an issue and the agent with the longer deadline
loses on time on that issue, the equilibrium solution is not
necessarily Pareto-optimal.
PROOF. Consider row 5 of table 3. Assume that T s < T b .
On issue Y, the agents have conflicting time preferences. As
the agreement on Y is reached at time τ = T^s, the price-surplus in the equilibrium
solution goes to b (p_y = P^s_y). The utility to an agent can
be increased by changing price or time or both. Price (p_y)
can only be increased and τ can only be decreased, since a
decrease in price or an increase in τ will be unacceptable
to s. An increase in p_y decreases U^b and increases U^s. A
decrease in τ increases U^b and decreases U^s. But a change
in both p_y and τ can improve both U^b and U^s. The same
argument holds for the other cases.
In all the remaining scenarios it can be seen that the solution
is Pareto-efficient; an increase in one agent's utility lowers its
opponent's utility.
4. RELATED WORK
Fershtman [4] extends Rubinstein's complete information model
[12] for splitting a single pie to multiple pies. This model imposes
an agenda exogenously, and studies the relation between the agenda
and the outcome of the bargaining game. It is based on the assumption
that both players have identical discounting factors and does
not consider agent deadlines. Similar work in a complete information
setting includes [5] but it endogenizes the agenda.
Bac and Raff [1] developed a model that has an endogenous
agenda. They extend Rubinstein's model [13] for single pie bargaining
with incomplete information by adding a second pie. In this
model the price-surplus is known to both agents. For both agents,
the discounting factor is assumed to be equal over all the issues.
One of the players knows its own discounting factor and that of
its opponent. The other player knows its own discounting factor
but is uncertain of the opponent's discounting factor. This can take
one of two values, δH or δL, with probabilities that are common knowledge. Thus agents
have asymmetric information about discounting factors. They however
do not associate deadlines with players.
The difference between these models and ours is that firstly, our
model considers both agent deadlines and discounting factors. Sec-
ondly, in our case the players are uncertain about the opponent's
reservation price and deadline. Each agent knows its own reservation
price and deadline but has a binary probability distribution over
its opponent's reservation price and deadline. Moreover, the discounting
factor is different for different issues and the players have
no information about the opponent's discounting factors. Thirdly,
each agent's information state is its private knowledge which is not
known to its opponent. Our model is therefore closer to most real
life bargaining situations than the other models. The fourth point of
difference lies in the attributes of the solution. Comparing the solution
properties of these models, we see that the existing models do
not have a unique equilibrium solution. The equilibrium solution
depends on the identity of the first player. In our model, the equilibrium
solution is unique and is independent of the identity of the
first player. However, as is the case with our model, the equilibrium
solution is not always Pareto-optimal in the other models.
5. CONCLUSIONS
This paper presented a model for multi-issue negotiation under time
constraints in an incomplete information setting. The order in which
issues are bargained over and agreements are reached is determined
endogenously as part of the bargaining equilibrium rather than imposed
exogenously as part of the game tree. An important property
of this model is the existence of a unique equilibrium. For any is-
sue, this equilibrium results in agreement at the earlier deadline if
at least one agent has the incentive to reach a late agreement and
at the beginning of negotiation if both agents have the incentive to
reach an early agreement. The price-surplus on all issues goes to
the agent with the longer deadline.
The sequential implementation of the equilibrium agreement was
shown to result in an outcome that is no worse than the outcome for
simultaneous implementation when agents have similar, as well as
conflicting, time preferences. The equilibrium agreement possesses
the properties of being unique and symmetric, although it is not
always Pareto-optimal.
As it currently stands, our model considers the negotiation issues
to be independent of each other. In future we intend to study
bargaining over interdependent issues. Apart from this, our model
considers the case where agents have uncertain information about
each other's deadline and reservation price. In future we will introduce
learning into the model to allow the agents to learn
these parameters during negotiation. These extensions will take the
model further towards real life bargaining situations.
6.
--R
Negotiation decision functions for autonomous agents.
Optimal negotiation strategies for agents with incomplete information.
The importance of the agenda in bargaining.
Decisions with Multiple Objectives: Preferences and Value Tradeoffs.
A classification scheme for negotiation in electronic commerce.
A Course in Game Theory.
Agents that buy and sell.
Negotiation Behavior.
The Art and Science of Negotiation.
Perfect equilibrium in a bargaining model.
A bargaining model with incomplete information about time preferences.
Agents in electronic commerce: component technologies for automated negotiation and coalition formation.
Bargaining with deadlines.
--TR
Agents that buy and sell
Bargaining with deadlines
Agents in Electronic Commerce
A Classification Scheme for Negotiation in Electronic Commerce
Optimal Negotiation Strategies for Agents with Incomplete Information
--CTR
Leen-Kiat Soh , Xin Li, Adaptive, Confidence-Based Multiagent Negotiation Strategy, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, p.1048-1055, July 19-23, 2004, New York, New York
Shaheen S. Fatima , Michael Wooldridge , Nicholas R. Jennings, Bargaining with incomplete information, Annals of Mathematics and Artificial Intelligence, v.44 n.3, p.207-232, July 2005
Shohei Yoshikawa , Takahiko Kamiryo , Yoshiaki Yasumura , Kuniaki Uehara, Strategy Acquisition of Agents in Multi-Issue Negotiation, Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence, p.933-939, December 18-22, 2006
Shaheen S. Fatima , Michael Wooldridge , Nicholas R. Jennings, An agenda-based framework for multi-issue negotiation, Artificial Intelligence, v.152 n.1, p.1-45, January 2004
Robert M. Coehoorn , Nicholas R. Jennings, Learning on opponent's preferences to make effective multi-issue negotiation trade-offs, Proceedings of the 6th international conference on Electronic commerce, October 25-27, 2004, Delft, The Netherlands
Shaheen Fatima , Michael Wooldridge , Nicholas R. Jennings, Optimal agendas for multi-issue negotiation, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, July 14-18, 2003, Melbourne, Australia
Janice W. Y. Hui , Toby H. W. Lam , Raymond S. T. Lee, The design and implementation of an intelligent agent-based negotiation shopping system, Multiagent and Grid Systems, v.1 n.3, p.131-146, August 2005
Xin Li , Leen-Kiat Soh, Hybrid negotiation for resource coordination in multiagent systems, Web Intelligence and Agent System, v.3 n.4, p.231-259, January 2005
Xin Li , Leen-Kiat Soh, Hybrid negotiation for resource coordination in multiagent systems, Web Intelligence and Agent System, v.3 n.4, p.231-259, October 2005
Karl Kurbel , Iouri Loutchko, Towards multi-agent electronic marketplaces: what is there and what is missing?, The Knowledge Engineering Review, v.18 n.1, p.33-46, January
Shaheen S. Fatima , Michael Wooldridge , Nicholas R. Jennings, A Comparative Study of Game Theoretic and Evolutionary Models of Bargaining for Software Agents, Artificial Intelligence Review, v.23 n.2, p.187-205, April 2005
Negotiation Framework for Automatic Collision Avoidance between Vessels, Proceedings of the IEEE/WIC/ACM international conference on Intelligent Agent Technology, p.595-601, December 18-22, 2006
Iyad Rahwan , Sarvapali D. Ramchurn , Nicholas R. Jennings , Peter Mcburney , Simon Parsons , Liz Sonenberg, Argumentation-based negotiation, The Knowledge Engineering Review, v.18 n.4, p.343-375, December
Ricardo Buttner, A Classification Structure for Automated Negotiations, Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology, p.523-530, December 18-22, 2006 | game theory;agendas;negotiation |
544819 | Interacting with virtual characters in interactive storytelling. | In recent years, several paradigms have emerged for interactive storytelling. In character-based storytelling, plot generation is based on the behaviour of autonomous characters. In this paper, we describe user interaction in a fully-implemented prototype of an interactive storytelling system. We describe the planning techniques used to control autonomous characters, which derive from HTN planning. The hierarchical task network representing a character's potential behaviour constitutes a target for user intervention, both in terms of narrative goals and in terms of physical actions carried out on stage. We introduce two different mechanisms for user interaction: direct physical interaction with virtual objects and interaction with synthetic characters through speech understanding. Physical intervention exists for the user in on-stage interaction through an invisible avatar: this enables him to remove or displace objects of narrative significance that are resources for characters' actions, thus causing these actions to fail. Through linguistic intervention, the user can influence the autonomous characters in various ways, by providing them with information that will solve some of their narrative goals, instructing them to take direct action, or giving advice on the most appropriate behaviour. We illustrate these functionalities with examples of system-generated behaviour and conclude with a discussion of scalability issues. | Figure
1).
Figure
1. A story instantiation generated by the system: Ross
asks Phoebe Rachel's preferences, but Phoebe lies to him.
The graphic environment for our system is based on the Unreal
computer game engine. Other researchers in the field of
interactive storytelling have previously described the use of the
same game engine [12] [13], which is increasingly used in non-
gaming applications, since the work of [15]. The main advantage
of a game engine is to provide both high-quality graphics and a
seamless integration of visualisation and interaction with the
environment objects. Further, the software architecture offers
various modes of integrating software, via C++ plugins or UDP
socket interfaces, through which we have integrated a
commercial speech recognition system.
3. NARRATIVE REPRESENTATIONS FOR
CHARACTER-BASED STORYTELLING
As a general rule, character-based storytelling systems do not
represent explicitly narrative knowledge, such as narrative
functions or decision points, as in [5] or [7], which could be
direct target for user interaction. For instance, in the system
described by Sgouros et al. [7], the user is prompted for strategic
decision to be made, and narrative causality is maintained via an
Assumption-based Truth Maintenance System (ATMS), a
process described as user-centred plot resolution. In the
interactive storytelling authoring system of Machado et al. [5],
narrative events are generated using a description in terms of
narrative functions inspired from Propp [16], which can
constitute basic building blocks for the plot.
The Story Nets described by Swartout et al. [2] correspond to a
plot-like representation of the consequences of user action.
However, unlike with user-centered plot resolution [7], these plot
models need not be explicit and can be derived from rules
operating on key decision points corresponding to user actions
[17]. This system integrates aspects from both plot-based and
character-based systems. It is however strongly centred on user
behaviour and its nominal mode assumes permanent user
involvement.
On the other hand, in character-based approaches, the plot is
generated by the multiple interactions between autonomous
characters. The problem with which character-based systems are
generally faced is to ensure that the actions they take are
narratively relevant. This corresponds to the narrative control
problem and has been studied by Young [13] and Mateas and
Stern [18] among others.
In our system, the plot should be mainly driven by the synthetic
characters, which is the only approach supporting continuous
storytelling with anytime user intervention. In order to reconcile
the character-based approach with the problem of narrative
control, we describe characters' behaviours in terms of roles, i.e.
a narrative representation of their goals and corresponding
actions.
For instance, our principal character, Ross, plans to seduce the
character Rachel. His role can be described into greater details as
a refinement of this high-level goal. Such a refinement will
define the various steps he'll take in seducing Rachel, such as
acquiring information about her, gaining her friendship, finding
ways to talk to her in private, offering her gifts, inviting her out,
etc. These also correspond, at its first level of refinement, to the
various stages of a (yet linear) story. However, this role
representation also includes, as it is refined, a large set of
alternative solutions at each further level. The terminal nodes
correspond to the final actions actually played on-stage through
3D animation of the synthetic characters. They consist in
interactions with on-stage objects (watching TV, reading a book,
buying gifts, making/drinking coffee) and other members of
the cast (talking, socialising, etc.).
Figure
2. HTN representation for character behaviour.
The characters' roles can thus be represented in a consistent
fashion as Hierarchical Task Networks (HTN): this represents an
actor's potential contribution to the overall plot (see Figure 2). A
single HTN corresponds to several possible decompositions for
the main task: in other words, an HTN can be seen as an implicit
representation for the set of possible solutions (Erol et al., 1995).
This naturally led us to investigating the use of HTN planning
techniques to underlie characters' behaviour [12] [13]. In the
next section, we describe our approach to planning for
characters' narrative behaviours and how these have been
extended to incorporate user intervention.
4. PLAN-BASED BEHAVIOURS IN
There is a broad agreement on the use of planning techniques for
describing high-level behaviour of autonomous agents embodied
in virtual environments, both for task-based simulation [19] [20]
and for character-based storytelling [12].
Our description of characters' roles as HTNs naturally led to use
these as a starting point for the implementation of a planning
system. HTN-based planning, also known as task-decomposition
planning, is among the oldest approaches for providing domain-specific
knowledge to a planning system. While in the generic
case HTN planning may be faced with practical difficulties [21],
this approach is considered appropriate for knowledge-rich
domains, which can provide applications-specific knowledge to
assist plan generation [22]. Interactive Storytelling constitutes
such a knowledge-rich application, not least because of the
authoring process involved in the description of the baseline
story. Besides, there has been a renewed interest in recent years
for HTN planning [23], which has demonstrated state-of-the-art
performance on a number of benchmarks.
Interactive Storytelling requires interleaving planning with
execution. We have devised a search algorithm that produces a
suitable plan form the HTN. Taking advantage from our total
ordering assumption and sub-task independence, it searches the
HTN depth-first left-to-right and executes any primitive action
that is generated, or at least attempts to execute it in the virtual
stage. Backtracking is allowed when these actions fail (e.g.
because of the intervention of other agents or the user). This
search strategy is thus essentially similar to the one described by
Smith et al. [24]. In addition, heuristic values are attached to the
various sub-tasks, so that forward search can make use of these
values for selecting a sub-task decomposition (this is similar to
the use of heuristics described by Weyhrauch [25] to bias a
story instantiation). These heuristic values are used to represent
narrative concepts as well. Namely, the various tasks are
associated with features that index them on some narrative dimension
(such as the sociable nature of an activity, or the rudeness of a
behaviour), which in turn are converted into heuristic values on
these dimensions. Using these heuristics according to his
personality and emotional status, a character will give preference
to different tasks. These heuristics can be altered dynamically,
which in turn modifies subsequent action selection in the
character's plan. For instance, Rachel may change mood because
some action by Ross has upset her; the consequence is that she
would abandon social activities for solitary ones.
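The depth-first, left-to-right decomposition with heuristic-guided choice and backtracking described above can be sketched as follows (Python). The node structure, the feature tags, the mood-weighting scheme and the example network are our own simplifications for illustration, not the authors' data structures.

# Hedged sketch of depth-first, left-to-right HTN search interleaved with
# execution.  A task is either a primitive action (executed immediately, may
# fail) or a compound task with alternative decompositions, each tagged with
# heuristic features weighted by the character's current mood/personality.

def choose_order(alternatives, mood):
    # Rank alternative decompositions by their feature tags weighted by mood.
    def score(alt):
        return sum(mood.get(f, 0.0) * v for f, v in alt.get("features", {}).items())
    return sorted(alternatives, key=score, reverse=True)

def solve(task, network, mood, execute):
    """Return True if the task could be carried out on stage."""
    if task not in network:                      # primitive action
        return execute(task)
    for alt in choose_order(network[task], mood):
        if all(solve(sub, network, mood, execute) for sub in alt["subtasks"]):
            return True                          # this decomposition worked
        # otherwise backtrack and try the next alternative
    return False

# Tiny illustrative network for the "acquire information about Rachel" goal.
network = {
    "get_information": [
        {"subtasks": ["go_to_diary", "read_diary"], "features": {"rude": 1.0}},
        {"subtasks": ["find_phoebe", "ask_phoebe"], "features": {"sociable": 1.0}},
    ],
}
mood = {"sociable": 0.2, "rude": 0.8}            # a not-so-nice Ross
failed = {"read_diary"}                          # e.g. the user stole the diary

print(solve("get_information", network, mood,
            execute=lambda action: action not in failed))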
Another essential aspect of HTN planning is that it is based on
forward search while being goal-directed at the same time, as the
top-level task is the main goal. An important consequence is that,
since the system is planning forward from the initial state and
expands sub-tasks left to right, the current state of the world is
always known, in this case the current stage reached by the plot.
We have adopted total ordering of sub-tasks for the initial
description of roles. Total-order HTN planning precludes the
possibility of interleaving sub-tasks from different tasks, thus
eliminating task interaction to a large extent [23]. In the case of
storytelling, sub-task independence is an hypothesis derived from
the inherent decomposition of a plot into various scenes, though
with the additional simplifying assumption that there are no
parallel storylines.
There are however additional requirements for planning
techniques that control synthetic actors. The environment of the
synthetic characters is by nature a dynamic one: the world in
which they evolve might constantly change under the influence of
other characters or due to user intervention. This would
traditionally call for an approach interleaving planning and
execution, so that the actions taken are constantly adapted to the
current situation. In addition, the action taken by an actor may
fail due to external factors, not least user intervention. The latter
requires that characters' behaviour incorporate re-planning
abilities. As we will see in section 5, these features also support
the interactive aspects of storytelling, allowing user intervention
to trigger the generation of new behaviours and the
corresponding evolution of the plot.
The behaviours for the various characters, corresponding to their
individual roles, are defined independently as HTNs. Their
integration takes place through the spatial environment in which
they all carry out their actions. As a consequence, their on-stage
interactions generate a whole range of situations not explicitly
described in their original roles.
Examples of such situations obtained with the system are:
1. Ross wants to steal Rachel's diary but she is using it herself,
or Phoebe is in the same room, preventing him from stealing
it
2. Ross wants to talk to Phoebe about Rachel, but she is busy
talking to Monica
3. Ross bumps into Rachel at an early stage of the story, where
he has not yet obtained information about her
4. Ross talks to Phoebe but the scene is witnessed by Rachel
Figure
3. Dramatisation of action repair.
These bottom-up situations illustrate why the characters'
behaviour cannot be solely determined by their top-down
planner, in order to be realistic. Situations 1 and 2 would
normally lead to re-planning, while more convenient solutions
can be devised, such as action repair [26]. In example 1 for
instance, Ross could just wait for Rachel to leave, which would
restore the executability conditions of the read_diary action
(see
Figure
3). Examples 3 and 4 represent situations that should
be actively avoided by the character. A practical solution consists
in using situated reasoning, implemented as sub-plans. These are
triggered by rules recognising the potential occurrence of such
situations and return active post-conditions to the initial plan
when it resumes. These mechanisms are further described in
[27]. Finally, characters also exhibit reactive behaviour based on
some situations: in some cases Rachel can get jealous if she sees
Ross in sustained conversation with another female character, or
Phoebe can get upset if Ross interrupts her. Reactive behaviours
can directly alter the character's plans or trigger scripted
response (such as leaving the room). In most cases, though, the
output of reactive behaviour is generally to alter the emotional
response of the reacting character, which in turn affects its
subsequent role. Altering the mood value is equivalent to
dynamically changing the heuristic coefficients attached to
certain activities. Hence, emotional representations, however
simple, play an important role in the story's consistency by
relating character behaviour to some personality variables.
Even though the individual mechanisms for actors' behaviour are
fairly deterministic, the overall plot generated is not generally
predictable by the spectator. Several mechanisms have been
incorporated to support this, such as the random allocation of
characters on-stage which, together with the duration of their
actions, greatly affects the probability of encounters, a
major determinant of plot variability.
The important conclusion is that, while most user interaction
takes place through the characters' top-down plans, every
mechanism supporting an agent's behaviour is a potential target
for user intervention. This will be further discussed in section 6.
5. SYSTEM ARCHITECTURE
The system has been implemented using the Unreal game
engine as a development environment. The implementation
philosophy, as in previous behavioural animation systems, is to
go from high-level planning to lower-level actions down to
animation sequences (which in our case are keyframe animations,
but can be interrupted at any time in case of re-planning).
The game engine offers an API via its scripting language,
UnrealScript. Using this scripting language, it is possible to
define new actions out of basic primitives provided by the
engine; for instance, offering a gift, which consists in passing an
object from one character to another. The implementation of an
elementary action comprises the updating of graphic data
structures (e.g. the object list of a given character or of the
environment itself) plus the associated keyframe animation
played in the graphic environment.
Characters' roles are generated from HTN plans in the following
way. Each character's plan interleaves planning and execution;
the lowest-level operators of each plan are carried out in the
environment in the form of Unreal actions (Figure 4, 1-2), and
the action outcome is then passed back to the planner (Figure 4,
3). In terms of architecture the planning component is a C++
module, integrated in the game engine using a dynamically
linked library (.dll), which interfaces with the graphic
environment via the actions' representation layer programmed in
UnrealScript. Similarly, changes taking place in the environment
are analysed in this layer and passed back to the planner (Figure
4, 3).
Figure
4. System architecture.
6. PHYSICAL INTERVENTION ON THE
The user is a spectator of the unfolding 3D animation
corresponding to the generated story, but he can freely explore
the stage, being himself embodied through an invisible avatar.
This makes it possible for him to interfere directly with the
course of action by physical intervention on stage. In our
current system, physical interaction is limited to narrative
objects. The user can remove objects from the stage or change
their location, but cannot physically interfere with the actors, for
instance by preventing them from entering a room. This is meant to be
consistent with the spectator-based approach and its rule of
minimal involvement.
Many on-stage objects appear as affordances, i.e. candidates for
user interaction. This can be signalled either by their intrinsic
narrative significance or by their use by the synthetic characters
themselves. The former case is referred to as a dispatcher in
modern narratology [28]: a dispatcher is an object to which
choice is associated, triggering narrative consequences. For
instance, in our example scenario, roses and the chocolate box,
the potential gifts for Rachel, bear such properties and are a
natural target for user interaction. Dispatchers can also be
signalled dynamically. As the characters are acting rather than
improvising, their actions have direct narrative significance.
Hence, if Ross directs himself towards an object, such as
Rachel's diary or a telephone, this object acquires narrative
relevance and becomes a potential target for user interaction.
Other on-stage objects play a role in the behaviour and most
importantly the spatial localisation of the virtual actors. Coffee
machines or TV sets are used by the characters: if the user steals
the coffee machine that Phoebe was about to use, she would re-plan
some other activity, which might take her to another
location on the stage. As we have seen, moving to another
location can have significant narrative consequences.
Figure
5. Re-planning on action failure.
From an implementation perspective, actions that are part of the
character's plans are associated with executability conditions, which
include the availability of some resources. For instance, Ross can
only read Rachel's diary if it is in the room and Phoebe will only
make coffee if the coffee machine is at its usual place. Physical
user intervention thus consists in causing character's action to
fail by altering their executability conditions. Action failure will
in turn trigger re-planning. For instance, Figure 5 shows a
fragment of Ross' plan for acquiring information about Rachel.
His initial plan consists in reading Rachel's diary, but the user
has stolen it. On reaching the diary's default location Ross
realises that it is missing and needs to re-plan a solution to find
information about Rachel, which in this case consists in asking
Phoebe. This is implemented using the search mechanism of our
HTN planner by back-propagating the failure of the action
read_diary to the corresponding sub-goal, so search will
backtrack and produce an alternative solution. From a narrative
perspective, the user has thwarted Ross' visible goal. But, apart
from the immediate amusement of doing so, because failure of
Ross' action is dramatised and part of the plot (see Figure 6), the
real impact lies in the long-term consequences of the resulting
situations. For instance, in the above example, when asking
Phoebe about Rachel, Ross might be seen by Rachel, who would
misunderstand the situation and become jealous!
Figure
6. Dramatisation of action failure.
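A minimal sketch of how removing a narrative object invalidates an action's executability condition, and hence triggers the backtracking/re-planning just described, is given below; the object names and resource requirements are illustrative, not the system's actual data.

# Hedged sketch: executability conditions as resource requirements checked
# against the current stage state.  Removing an object (the user's physical
# intervention) makes the action fail, which triggers re-planning in the HTN.

REQUIRES = {
    "read_diary":  {"diary"},
    "make_coffee": {"coffee_machine"},
    "ask_phoebe":  set(),          # needs another character, not an object
}

def executable(action, stage_objects):
    return REQUIRES.get(action, set()) <= stage_objects

stage = {"diary", "coffee_machine", "tv"}
print(executable("read_diary", stage))     # True: Ross's first plan can proceed

stage.discard("diary")                     # the user steals the diary
print(executable("read_diary", stage))     # False: action fails, Ross re-plans
print(executable("ask_phoebe", stage))     # True: the alternative decomposition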
This aspect becomes more obvious if considering the interaction
with objects used by secondary characters in their normal
activities. Phoebe's coffee machine does not have the narrative
significance of Rachel's potential gifts; however, displacing it
can have serious consequences as well, as she would move on
stage and might not be available to answer Ross, or could meet
Rachel. While this has proven to be a powerful mechanism for
story generation, at this early stage we have not explored its
impact in terms of user experience.
7. NATURAL LANGUAGE INTERACTION
WITH AUTONOMOUS CHARACTERS
Natural language intervention in interactive storytelling strongly
depends on the storytelling paradigm adopted. For instance,
permanent user involvement, e.g. in immersive storytelling or
training systems [2], requires linguistic interaction to be part of
the story itself. This most naturally calls for dialogue-based
interaction, as described for instance by Traum and Rickel [29]
for the same project.
Our own approach being based on a user-as-spectator paradigm,
the user interventions, including speech input, are essentially
brief and can occur at any time. They typically take the form of
instructions or advice [19]. Speech input should be tailored to our
interactive storytelling context, in which the user influences
virtual characters, in order to implement a consistent user
experience. For instance, the utterance will often start with the
name of the addressee, as in Ross, be nice to Monica, not only
to identify the relevant character but also to establish a simple
relation between the user and the character he is influencing.
Also, the speech guidance should naturally be in line with the
various stages of the plot and correspond to narrative actions and
situations. The user can become acquainted with the possibilities
of intervention either by being introduced to the overall storyline
or, as otherwise suggested by Mateas and Stern [18], through
repeated use of the storytelling system.
There has been extensive research in the use of natural language
instructions for virtual actors. Webber et al. [19] have laid out
the foundations of relating natural language instruction to plan-based
high-level behaviour for embodied virtual agents. They
have also provided a classification of natural language
instructions in terms of their effects. Bindiganavale et al. [30]
have described the use of instructions and advice to influence the
dynamic behaviour of autonomous agents when dealing with
certain situations (checkpoint training). Though these are not
specifically addressing storytelling, many of these results can be
adapted to a narrative context.
We have incorporated an off-the-shelf system, the EAR SDK
from Babel Technologies, into our prototype; it has been
integrated with the Unreal engine using dynamically linked
libraries, as for the HTN system. The EAR SDK supports
speaker-independent input and allows for the definition of
flexible recognition grammars that include optional sequences
and joker characters. This makes it possible to implement various
paradigms for speech recognition, from full utterance recognition
to multi-keyword spotting. At this stage we are experimenting
with a recognition grammar with optional sequences for added
flexibility and a small test vocabulary (< 100 words), which
includes the main actions and narrative objects, as well as some
situations.
Figure
7. Situational advice:
"Ross, don't let Rachel see you with Phoebe".
At this stage the natural language interpretation of user input is
based on simple template matching. We have defined templates
for several categories of advice, such as: prescribed action (talk
to Phoebe), provision of information to an actor (the diary is in
the living room, Rachel prefers chocolates), generic and
specific advice (don't be rude, be nice to Phoebe) and
situational advice (don't let Rachel see you with Phoebe (see
Figure
7)), etc.
The instantiation of the template's slots is carried out by
simple procedural Finite-State Transition Network parsing of the
relevant recognised elements. Consistency checking is based on
templates that contain role structures for a certain number of key
narrative actions that speech input is supposed to influence.
These are based on selectional restrictions for the various slots of
a given template. For instance, the advisee is often the main
character, especially when doctrine elements are involved.
The selection of the relevant candidate template is determined by
the semantic categories of verbs or action markers in the
sentence, which are used as heuristics to identify the best
template. It can be noted that there is no obvious mapping
between the surface form and the interpretation in terms of
narrative influence. For instance, talk to Monica is interpreted
as a direct suggestion for action (which will solve a sub-goal
such as obtaining information about Rachel), while don't talk to
Phoebe is more of a global advice, which should generate
situated reasoning whose result is to try to avoid Phoebe. As a
generic rule, though, it would appear that most negative
statements consist in advice or doctrine statements [19].
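A sketch of this kind of template selection and slot filling is shown below; the keyword lists, template categories and character vocabulary are toy stand-ins for the recognition grammar and semantic categories, and the mapping rules are only illustrative of the approach.

# Hedged sketch: mapping a recognised utterance onto an advice template by
# keyword matching, then filling the addressee and target slots.

CHARACTERS = {"ross", "rachel", "phoebe", "monica"}

def interpret(utterance):
    words = utterance.lower().replace(",", "").split()
    addressee = next((w for w in words if w in CHARACTERS), None)
    others = [w for w in words if w in CHARACTERS and w != addressee]
    negated = "don't" in words or "not" in words
    if "talk" in words or "ask" in words:
        # negative form is treated as generic advice, positive as an instruction
        kind = "avoid_character" if negated else "instruct_action"
        return {"type": kind, "addressee": addressee, "target": others}
    if "prefers" in words or ("is" in words and "in" in words):
        return {"type": "give_information", "addressee": addressee,
                "content": " ".join(words)}
    if "nice" in words or "rude" in words:
        return {"type": "generic_advice", "addressee": addressee,
                "negated": negated}
    return {"type": "unknown"}

print(interpret("Ross, talk to Phoebe"))
print(interpret("Ross, don't be rude"))
print(interpret("Ross, I think Rachel prefers chocolates"))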
In our first series of test, we have been essentially focusing on
advice related to characters' behaviour, as they have the most
dramatic effect, and also as interaction with objects is often the
remit of physical intervention on stage.
Overall, we have identified various forms of natural language
intervention, such as: the provision of information to an actor
(including conspicuously false information), direct instruction for
action, warnings, and generic advice on the character's
behaviour.
In the next section, we give some examples of linguistic
interaction and relate these to the mechanisms by which their
effects on characters behaviours and on the plot are actually
implemented.
7.1 EXAMPLES
The direct provision of information can solve a character's sub-
goal: for instance, if, at an early stage of the plot, Ross is
acquiring information about Rachel's preferences, he can be
helped by the user, who would suggest that Rachel prefers
chocolates. The provision of such information has multiple
effects: besides directly assisting the progression of the plot, it
also prevents certain situations that have potentially a narrative
impact (such as an encounter between Ross and Phoebe) from
emerging. From an implementation perspective, sub-goals in the
HTN are labelled according to different categories, such as
information_goals. When these goals are active, they are checked
against new information input from the NL interface and are
marked as solved if the corresponding information matches the
sub-goal content.
[Ross I think Rachel prefers
chocolates]
Figure
8. Providing information to characters.
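One way to sketch the checking of user-supplied information against the currently active information goals is given below; the goal representation and topic encoding are our own simplification of the labelling mechanism described above.

# Hedged sketch: user-supplied facts are matched against active sub-goals
# labelled as information goals; a match marks the goal as solved so the
# planner can skip the corresponding decomposition.

active_goals = [
    {"label": "information_goal", "topic": ("rachel", "preference"), "solved": False},
    {"label": "action_goal", "topic": ("ross", "offer_gift"), "solved": False},
]

def assimilate(fact_topic, fact_value, goals):
    for goal in goals:
        if goal["label"] == "information_goal" and goal["topic"] == fact_topic:
            goal["solved"] = True
            goal["value"] = fact_value
    return goals

assimilate(("rachel", "preference"), "chocolates", active_goals)
print(active_goals[0])   # the information goal is now solved with value 'chocolates'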
Provision of information can also be used to trigger action repair.
If for instance, Ross is looking for Rachel's diary and cannot find
it at its default location, he can receive advice from the user (the
diary is in the other room) and repair the current action (this
restores the executability condition of the read_diary action) (see
Figure
8). In this case, spoken information competes with re-planning
of another solution by Ross; the outcome will depend
on the timing and duration of the various actions and of the user
intervention (once a goal has been abandoned, it cannot, in our
current implementation, be restored by user advice).
Another form of linguistic interaction consists in giving advice to
the characters. Advice is most often related to inter-character
behaviour and social relationships. We have identified three
kinds of advice. Generic advice is related to overall behaviour,
e.g. don't be rude. This can be matched to personality
variables, which in turn determine the choice of actions in the
HTN. Such advice can be interpreted by altering personality
variables that match the heuristic functions attached to the
candidate actions in the HTN. For instance, a nice Ross will
refrain from a certain number of actions, such as reading
personal diaries or mail, interrupting conversations or expelling
other characters from the set. This of course relies on an a priori
classification of actions in the HTN, which is based on static
heuristic values being attached to nodes of the HTN.
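The effect of a generic advice such as "don't be rude" can be sketched as a change to a personality weight that re-ranks the statically tagged actions, following the same weighting idea as the planning sketch in section 4; the weight values and the interpretation structure (taken from the template sketch above) are illustrative assumptions.

# Hedged sketch: applying an interpreted generic advice by altering a
# personality variable; the planner's heuristic ranking of tagged actions
# then changes, demoting actions tagged as rude.

def apply_generic_advice(advice, personality):
    if advice.get("type") == "generic_advice" and advice.get("negated"):
        personality["rude"] = -abs(personality.get("rude", 0.5))
    return personality

personality = {"rude": 0.8, "sociable": 0.2}
apply_generic_advice({"type": "generic_advice", "negated": True}, personality)
print(personality)   # the 'rude' weight is now negative, demoting rude actions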
Situational advice is a form of rule that should help the character
avoid certain situations. One such example is advice to
avoid making Rachel jealous, such as don't let Rachel see you
with Phoebe. The processing of such advice is more complex
and we have only implemented simplified, procedural versions so
far. One such version, in the same situation, consists in warning
Ross that Rachel is approaching (Figure 9).
Figure
9. Advice I think Rachel is coming.
Speech input mostly targets the plan-based performance of an
actor's role but can also target other forms of behaviour as
mentioned in section 4, such as situated reasoning or reactive
behaviour. For instance, specific reactive behaviour can be
inhibited by spoken instructions: Rachel can be advised not to be
jealous (Rachel, don't be jealous).
8. CONCLUSIONS
We have described a specific approach to interactive storytelling
where the user, rather than being immersed in the story, is
essentially trying to influence it from his spectator position. We
would suggest that this paradigm is worth exploring for future
entertainment applications, where it could bridge the gap
between traditional media and interactive media. The long-term
interest of this approach is however a case for user evaluation,
which should first require the system to reach a critical scale.
Our prototype currently has four autonomous characters, all
based on HTN plans (though the main character Ross has the
most complex plan) and is able to generate short stories (one-act
plays, [18]) up to three minutes in duration, with approximately
one beat [18] per minute. This contrasts with the objective
suggested by Mateas and Stern [18] of 10-15 minute stories with
three characters, which is certainly a valid objective for
interactive storytelling systems. Performance of the planning
component has shown good potential for scaling-up on simulated
tests. The main difficulties are expected to arise from increased
interaction between characters and the associated descriptions of
situated reasoning, for which no clear methodological principles
have been established. On the other hand, there is much to be
learned from running larger-scale tests and these results could
have a generic interest for the study of high-level behaviour of
embodied characters.
9.
--R
Interactive Movies
Toward the Holodeck: Integrating Graphics
Narrative in Virtual Environments - Towards Emergent Narrative
Real Characters in Virtual Stories (Promoting Interactive Story-Creation Activities)
Interactive Storytelling Systems for Children: Using Technology to Explore Language and Identity.
A Framework for Plot Control in Interactive Story Systems
Grabson and Braun
Acting in Character.
"Narrative Intelligence,"
Socially Intelligent Agents: The Human in the Loop.
Creating Interactive Narrative Structures: The Potential for AI Approaches.
An Overview of the Mimesis Architecture: Integrating Narrative Control into a Gaming Environment.
Bringing VR to the Desktop: Are You Game?
Morphology of the Folktale.
Adaptive Narrative: How Autonomous Agents
Towards Integrating Plot and Character for Interactive Drama.
The intentional planning system: Itplans.
Hybrid planning for partially hierarchical domains.
A Validation Structure Based Theory of Plan Modification and Reuse
Control Strategies in HTN Planning: Theory versus Practice.
Computer Bridge: A Big Win for AI Planning.
Guiding Interactive Drama.
A consequence of incorporating intentions in means-end planning
Emergent Situations in Interactive Storytelling.
Introduction à l'Analyse Structurale des Récits.
Embodied Agents for Multi-party Dialogue in Immersive Virtual Worlds
Dynamically Altering Agent Behaviours Using Natural Language Instructions.
--TR
A validation-structure-based theory of plan modification and reuse
Instructions, intentions and expectations
Hybrid planning for partially hierarchical domains
Control strategies in HTN planning
Interactive movies
Interactive storytelling systems for children
Dynamically altering agent behaviors using natural language instructions
Toward the holodeck
Emergent situations in interactive storytelling
Bringing VR to the Desktop
Acting in Character
Adaptive Narrative
Real Characters in Virtual Stories
Guiding interactive drama
--CTR
Arturo Nakasone , Helmut Prendinger , Mitsuru Ishizuka, Web presentation system using RST events, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Fred Charles , Marc Cavazza, Exploring the Scalability of Character-Based Storytelling, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, p.872-879, July 19-23, 2004, New York, New York
Steven Dow , Manish Mehta , Annie Lausier , Blair MacIntyre , Michael Mateas, Initial lessons from AR Façade, an interactive augmented reality drama, Proceedings of the 2006 ACM SIGCHI international conference on Advances in computer entertainment technology, June 14-16, 2006, Hollywood, California
Yundong Cai , Chunyan Miao , Ah-Hwee Tan , Zhiqi Shen, Fuzzy cognitive goal net for interactive storytelling plot design, Proceedings of the 2006 ACM SIGCHI international conference on Advances in computer entertainment technology, June 14-16, 2006, Hollywood, California
Marc Cavazza , Fred Charles , Steven J. Mead, Interactive storytelling: from AI experiment to new media, Proceedings of the second international conference on Entertainment computing, p.1-8, May 08-10, 2003, Pittsburgh, Pennsylvania
Bruno Herbelin , Michal Ponder , Daniel Thalmann, Building exposure: synergy of interaction and narration through the social channel, Presence: Teleoperators and Virtual Environments, v.14 n.2, p.234-246, April 2005
Arturo Nakasone , Mitsuru Ishizuka, Storytelling Ontology Model Using RST, Proceedings of the IEEE/WIC/ACM international conference on Intelligent Agent Technology, p.163-169, December 18-22, 2006
Fred Charles , Marc Cavazza , Steven J. Mead , Olivier Martin , Alok Nandi , Xavier Marichal, Compelling experiences in mixed reality interactive storytelling, Proceedings of the 2004 ACM SIGCHI International Conference on Advances in computer entertainment technology, p.32-40, June 03-05, 2005, Singapore
Scott W. McQuiggan , James C. Lester, Learning empathy: a data-driven framework for modeling empathetic companion agents, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Bradford W. Mott , James C. Lester, U-director: a decision-theoretic narrative planning architecture for storytelling environments, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Scott W. McQuiggan , James C. Lester, Modeling and evaluating empathy in embodied companion agents, International Journal of Human-Computer Studies, v.65 n.4, p.348-360, April, 2007
Jae-Kyung Kim , Won-Sung Sohn , Soon-Bum Lim , Yoon-Chul Choy, Definition of a layered avatar behavior script language for creating and reusing scenario scripts, Multimedia Tools and Applications, v.37 n.2, p.233-259, April 2008 | interactive storytelling;computer games;planning;synthetic characters;speech understanding |
544835 | An analysis of formal inter-agent dialogues. | This paper studies argumentation-based dialogues between agents. It defines a set of locutions by which agents can trade arguments, a set of agent attitudes which relate what arguments an agent can build and what locutions it can make, and a set of protocols by which dialogues can be carried out. The paper then considers some properties of dialogues under the protocols, in particular termination and complexity, and shows how these relate to the agent attitudes. | INTRODUCTION
When building multi-agent systems, we take for granted
the fact that the agents which make up the system will need
to communicate. They need to communicate in order to
resolve dierences of opinion and con
icts of interest, work
together to resolve dilemmas or nd proofs, or simply to
inform each other of pertinent facts. Many of these communication
requirements cannot be fullled by the exchange of
single messages. Instead, the agents concerned need to be
able to exchange a sequence of messages which all bear upon
the same subject. In other words they need the ability to
engage in dialogues. As a result of this requirement, there
has been much work on providing agents with the ability
to hold such dialogues. Recently some of this work has considered
argument-based approaches to dialogue, for example
the work by Dignum et al. [5], Parsons and Jennings [15],
Reed [18], Schroeder et al. [19] and Sycara [20].
Reed's work built on an influential model of human dialogues
due to argumentation theorists Doug Walton and
Krabbe [21], and we also take their dialogue typology
as our starting point. Walton and Krabbe set out to analyze
the concept of commitment in dialogue, so as to "pro-
vide conceptual tools for the theory of argumentation" [21,
page ix]. This led to a focus on persuasion dialogues, and
their work presents formal models for such dialogues. In
attempting this task, they recognized the need for a characterization
of dialogues, and so they present a broad typology
for inter-personal dialogue. They make no claims for its
comprehensiveness.
Their categorization identifies six primary types of dialogues
and three mixed types. The categorization is based
upon: firstly, what information the participants each have at
the commencement of the dialogue (with regard to the topic
of discussion); secondly, what goals the individual participants
have; and, thirdly, what goals are shared by the participants,
goals we may view as those of the dialogue itself. As
defined by Walton and Krabbe, the three types of dialogue
we consider here are:
Information-Seeking Dialogues: One participant seeks
the answer to some question(s) from another participant,
who is believed by the first to know the answer(s).
Inquiry Dialogues: The participants collaborate to answer
some question or questions whose answers are not
known to any one participant.
Persuasion Dialogues: One party seeks to persuade another
party to adopt a belief or point-of-view he or she
does not currently hold. These dialogues begin with
one party supporting a particular statement which the
other party to the dialogue does not, and the first seeks
to convince the second to adopt the proposition. The
second party may not share this objective.
In previous work [2], we began to investigate how these
different types of dialogue can be captured using a formal
model of argumentation. Here we extend this work, examining
some of the forms of information seeking, inquiry
and persuasion dialogues which are possible, and identifying
how the properties of these dialogues depend upon
the properties of the agents engaging in them.
Note that, despite the fact that the types of dialogue we
are considering are drawn from the analysis of human dialogues,
we are only concerned here with dialogues between
artificial agents. Unlike [9] for example, we choose to focus
in this way in order to simplify our task: doing this allows
us to deal with artificial languages and avoid much of the
complexity inherent in natural language dialogues.
2. BACKGROUND
In this section we briefly introduce the formal system of
argumentation which forms the backbone of our approach.
This is inspired by the work of Dung [6] but goes further
in dealing with preferences between arguments. Further details
are available in [1]. We start with a possibly inconsistent
knowledge base Σ with no deductive closure. We assume Σ
contains formulas of a propositional language L. ⊢ stands for
classical inference and ≡ for logical equivalence. An argument
is a proposition and the set of formulae from which it
can be inferred:
Definition 1. An argument is a pair A = (H, h) where
h is a formula of L and H a subset of Σ such that:
1. H is consistent;
2. H ⊢ h;
3. H is minimal, so no proper subset of H satisfying both 1. and
2. exists.
H is called the support of A, written Support(A);
h is the conclusion of A, written Conclusion(A).
We talk of h being supported by the argument (H, h).
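To make Definition 1 concrete, the following is a minimal Python sketch of argument construction over a small propositional knowledge base; it is our own illustration rather than anything from the paper, and the tuple encoding of formulas and the brute-force truth-table checks are assumptions made purely for readability.

```python
from itertools import combinations, product

# Formulas are nested tuples: ('atom', 'p'), ('not', f), ('and', f, g),
# ('or', f, g), ('implies', f, g).  The encoding is an illustrative choice.

def atoms(f):
    """Collect the atomic propositions occurring in a formula."""
    if f[0] == 'atom':
        return {f[1]}
    return set().union(*(atoms(g) for g in f[1:]))

def holds(f, v):
    """Evaluate formula f under valuation v (a dict: atom name -> bool)."""
    op = f[0]
    if op == 'atom':
        return v[f[1]]
    if op == 'not':
        return not holds(f[1], v)
    if op == 'and':
        return holds(f[1], v) and holds(f[2], v)
    if op == 'or':
        return holds(f[1], v) or holds(f[2], v)
    if op == 'implies':
        return (not holds(f[1], v)) or holds(f[2], v)
    raise ValueError(op)

def valuations(fs):
    """All valuations of the atoms occurring in the formulas fs."""
    vs = sorted(set().union(*(atoms(f) for f in fs)) if fs else set())
    for bits in product([True, False], repeat=len(vs)):
        yield dict(zip(vs, bits))

def consistent(H):
    return any(all(holds(f, v) for f in H) for v in valuations(list(H)))

def entails(H, h):
    """H |- h, checked by truth tables (condition 2 of Definition 1)."""
    return all(holds(h, v) for v in valuations(list(H) + [h])
               if all(holds(f, v) for f in H))

def arguments_for(sigma, h):
    """All arguments (H, h) over sigma in the sense of Definition 1."""
    found = []
    for k in range(1, len(sigma) + 1):
        for H in combinations(sigma, k):
            if consistent(H) and entails(H, h) and \
               not any(set(H2) < set(H) for (H2, _) in found):
                found.append((H, h))
    return found
```

For instance, with sigma = [('atom','p'), ('implies',('atom','p'),('atom','q')), ('not',('atom','q'))], calling arguments_for(sigma, ('atom','q')) yields the single argument whose support is {p, p → q}.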
In general, since Σ is inconsistent, arguments in A(Σ),
the set of all arguments which can be made from Σ, will
conflict, and we make this idea precise with the notion of
undercutting:
Definition 2. Let A1 and A2 be two arguments of A(Σ).
A1 undercuts A2 iff there is some h ∈ Support(A2) such that
h ≡ ¬Conclusion(A1).
In other words, an argument is undercut if and only if there
is another argument which has as its conclusion the negation
of an element of the support for the first argument.
To capture the fact that some facts are more strongly
believed, 1 we assume that any set of facts has a preference order
over it. We suppose that this ordering derives from the fact
that the knowledge base Σ is stratified into non-overlapping
sets Σ1, . . . , Σn such that facts in Σi are all equally preferred
and are more preferred than those in Σj where j > i.
The preference level of a nonempty subset H of Σ, level(H),
is the number of the highest numbered layer which has a
member in H.
Definition 3. Let A1 and A2 be two arguments in A(Σ). A1
is preferred to A2 according to Pref iff level(Support(A1)) ≤
level(Support(A2)).
By >>_Pref we denote the strict pre-order associated with
Pref. If A1 is strictly preferred to A2, we say that A1 is stronger
than A2. 2 We can now define the argumentation system we
will use:
Definition 4. An argumentation system (AS) is a triple
⟨A(Σ), Undercut, Pref⟩ such that:
A(Σ) is a set of the arguments built from Σ,
Undercut is a binary relation representing the defeat relationship
between arguments, Undercut ⊆ A(Σ) × A(Σ), and
Pref is a (partial or complete) preordering on A(Σ) ×
A(Σ).
1 Here we only deal with beliefs, though the approach can also handle
desires and intentions as in [16] and could be extended to cope with
other mental attitudes.
2 We acknowledge that this model of preferences is rather restrictive
and in the future intend to work to relax it.
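Continuing the sketch above (again an illustration with assumed function names, not the paper's own code), undercutting and the preference ordering over a stratified base might be rendered as follows; strata is assumed to map each formula of Σ to its layer number, with layer 1 the most preferred.

```python
# Illustrative only: undercutting and the preference ordering induced by a
# stratified knowledge base, reusing entails() from the sketch above.

def equivalent(f, g):
    """Logical equivalence, checked by mutual entailment."""
    return entails([f], g) and entails([g], f)

def undercuts(a1, a2):
    """A1 undercuts A2 iff some h in Support(A2) is equivalent to the
    negation of Conclusion(A1) (Definition 2)."""
    (_, concl1), (support2, _) = a1, a2
    return any(equivalent(h, ('not', concl1)) for h in support2)

def level(H, strata):
    """strata maps each formula of Sigma to its layer number, 1 being the
    most preferred; the level of H is its highest-numbered (weakest) layer."""
    return max(strata[f] for f in H)

def preferred(a1, a2, strata):
    """Pref (Definition 3): A1 is preferred to A2."""
    return level(a1[0], strata) <= level(a2[0], strata)

def strictly_preferred(a1, a2, strata):
    """The strict pre-order >>_Pref associated with Pref."""
    return level(a1[0], strata) < level(a2[0], strata)
```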
The preference order makes it possible to distinguish different
types of relation between arguments:
Definition 5. Let A1, A2 be two arguments of A(Σ).
If A2 undercuts A1 then A1 defends itself against A2
iff A1 >>_Pref A2. Otherwise, A1 does not defend itself.
A set of arguments S defends A iff: for every B which undercuts A
and against which A does not defend itself, there is some C ∈ S
such that C undercuts B and B does not defend itself
against C.
Henceforth, C_Undercut,Pref will gather all non-undercut arguments
and arguments defending themselves against all
their undercutting arguments. In [1], it was shown that the
set S of acceptable arguments of the argumentation system
is the least fixpoint of a function F:
F(S) = {A ∈ A(Σ) | A is defended by S}
Definition 6. The set of acceptable arguments for an
argumentation system ⟨A(Σ), Undercut, Pref⟩ is:
S = ∪_{i≥0} F^i(∅)
An argument is acceptable if it is a member of the acceptable
set.
An acceptable argument is one which is, in some sense,
proven since all the arguments which might undermine it
are themselves undermined.
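The acceptability construction of Definitions 5 and 6 can be approximated by iterating F from the empty set up to its least fixpoint; the brute-force sketch below assumes the helpers above and that args is the full, finite list of arguments under consideration.

```python
# A brute-force computation of the acceptable set (Definitions 5 and 6);
# an illustrative sketch, not the paper's own algorithm.

def defends_itself(a, b, strata):
    """a defends itself against an undercutter b iff a >>_Pref b."""
    return strictly_preferred(a, b, strata)

def defended_by(a, S, args, strata):
    """S defends a: every undercutter b that a cannot answer itself is in
    turn undercut by some c in S against which b cannot defend itself."""
    for b in args:
        if undercuts(b, a) and not defends_itself(a, b, strata):
            if not any(undercuts(c, b) and not defends_itself(b, c, strata)
                       for c in S):
                return False
    return True

def acceptable_set(args, strata):
    """Iterate F(S) = {A : A is defended by S} from the empty set up to
    its least fixpoint; F(empty set) is exactly C_Undercut,Pref."""
    S = []
    while True:
        new_S = [a for a in args if defended_by(a, S, args, strata)]
        if new_S == S:
            return S
        S = new_S
```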
3. LOCUTIONS
As in our previous work [2, 4], agents use the argumentation
mechanism described above as a basis for their reasoning
and their dialogues. Agents decide what they themselves
know by determining which propositions they have acceptable
arguments for. They trade propositions for which they
have acceptable arguments, and accept propositions put forward
by other agents if they find that the arguments are
acceptable. The exact locutions and the way that they are
exchanged define a formal dialogue game which agents engage
in.
Dialogues are assumed to take place between two agents,
P and C. 3 Each agent has a knowledge base, Σ_P and Σ_C
respectively, containing their beliefs. In addition, each agent
has a further knowledge base, accessible to both agents, containing
commitments made in the dialogue. These commitment
stores are denoted CS(P) and CS(C) respectively,
and in this dialogue system (unlike that of [4] for example)
an agent's commitment store is just a subset of its
knowledge base. Note that the union of the commitment
stores can be viewed as the state of the dialogue at a given
time. Each agent has access to their own private knowledge
base and both commitment stores. Thus P can make use of
Σ_P ∪ CS(C), 4 and C can make use of Σ_C ∪ CS(P).
3 The names stemming from the study of persuasion dialogues - P
argues "pro" some proposition, and C argues "con".
4 Which, of course, is the same as ⟨A(Σ_P ∪ CS(P) ∪ CS(C)), Undercut, Pref⟩,
since CS(P) ⊆ Σ_P.
All the knowledge bases contain propositional formulas
and are not closed under deduction, and all are stratified
according to degree of belief as discussed above. Here we
assume that these degrees of belief are static and that both
the players agree on them, though it is possible [3] to combine
different sets of preferences, and it is also possible to
have agents modify their beliefs on the basis of the reliability
of their acquaintances [14].
With this background, we can present the set of dialogue
moves that we will use. For each move, we give what we
call rationality rules and update rules. These are based on
the rules suggested by [11]. The rationality rules specify the
preconditions for playing the move. Unlike those in [2, 4],
these are not absolute, but are defined in terms of the agent
attitudes discussed in Section 4. The update rules specify
how commitment stores are modified by the move.
In the following, player P addresses the move to player C.
We start with the assertion of facts:
assert(p). where p is a propositional formula.
rationality: the usual assertion condition for the
agent.
update: CS_i(P) = CS_{i-1}(P) ∪ {p} and CS_i(C) = CS_{i-1}(C).
Here p can be any propositional formula, as well as the special
character U, discussed below.
assert(S). where S is a set of formulas representing the
support of an argument.
rationality: the usual assertion condition for the
agent.
update: CS_i(P) = CS_{i-1}(P) ∪ S and CS_i(C) = CS_{i-1}(C).
The counterpart of these moves are the acceptance moves:
accept(p). p is a propositional formula.
rationality: the usual acceptance condition for
the agent.
update: CS_i(P) = CS_{i-1}(P) ∪ {p} and CS_i(C) = CS_{i-1}(C).
accept(S). S is a set of propositional formulas.
rationality: the usual acceptance condition for
every s ∈ S.
update: CS_i(P) = CS_{i-1}(P) ∪ S and CS_i(C) = CS_{i-1}(C).
There are also moves which allow questions to be posed.
challenge(p). where p is a propositional formula.
rationality: no conditions.
update: CS_i(P) = CS_{i-1}(P) and CS_i(C) = CS_{i-1}(C).
A challenge is a means of making the other player explicitly
state the argument supporting a proposition. In contrast, a
question can be used to query the other player about any
proposition.
question(p). where p is a propositional formula.
rationality: no conditions.
update: CS_i(P) = CS_{i-1}(P) and CS_i(C) = CS_{i-1}(C).
We refer to this set of moves as the set M'_DC since they are
a variation on the set M_DC from [2] - the main difference
from the latter is that there are no "dialogue conditions".
Instead we explicitly define the protocol for each type of
dialogue in Section 5. The locutions in M'_DC are similar
to those discussed in legal reasoning [7, 17] and it should
be noted that there is no retract locution. Note that these
locutions are ones used within dialogues. Further locutions
such as those discussed in [13] would be required to frame
dialogues.
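By way of illustration, the commitment-store bookkeeping implied by these update rules might look as follows in Python; the class and method names are our own assumptions, and the rationality checks are deliberately deferred to the attitude predicates introduced in Section 4.

```python
# Illustrative commitment-store bookkeeping for the moves of M'_DC; the
# class and method names are assumptions, not the paper's own.

class DialogueState:
    def __init__(self):
        self.cs = {'P': set(), 'C': set()}   # commitment stores CS(P), CS(C)
        self.trace = []                      # moves made so far

    def _record(self, speaker, move):
        if (speaker, move) in self.trace:    # no locution may be repeated
            raise ValueError("repeated locution")
        self.trace.append((speaker, move))

    def assert_prop(self, speaker, p):       # assert(p): p enters CS(speaker)
        self._record(speaker, ('assert', p))
        self.cs[speaker].add(p)

    def assert_support(self, speaker, S):    # assert(S): the whole support
        self._record(speaker, ('assert_support', frozenset(S)))
        self.cs[speaker].update(S)

    def accept(self, speaker, p):            # accept(p): speaker commits to p
        self._record(speaker, ('accept', p))
        self.cs[speaker].add(p)

    def challenge(self, speaker, p):         # challenge(p): stores unchanged
        self._record(speaker, ('challenge', p))

    def question(self, speaker, p):          # question(p): stores unchanged
        self._record(speaker, ('question', p))
```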
4. AGENT ATTITUDES
One of the main aims of this paper is to explore how
the kinds of dialogue in which agents engage depend upon
features of the agents themselves (as opposed, for instance,
to the kind of dialogue in which the agents are engaged or
the information in the knowledge-bases of the agents). In
particular, we are interested in the effect of these features
on the way in which agents determine what locutions can
be made within the confines of a given dialogue protocol
through the application of differing rationality conditions.
As is clear from the definition of the locutions, there are
two different kinds of rationality conditions - one which determines
if something may be asserted, and another which
determines whether something can be accepted. The former
we call assertion conditions, the latter we call acceptance
conditions, and talk of agents having different attitudes
which relate to particular conditions.
Definition 7. An agent may have one of two assertion
attitudes.
a confident agent can assert any proposition p for which
it can construct an argument (S, p).
a thoughtful agent can assert any proposition p for
which it can construct an acceptable argument (S, p).
Thus a thoughtful agent will only put forward propositions
which, so far as it knows, are correct. A confident agent
won't stop to check that this is the case. It might seem
worthwhile also defining what we might call a thoughtless
agent, which can assert any proposition which is either in,
or may be inferred from, its knowledge base, but it is easy
to show that:
Proposition 1. The set of non-trivial propositions which
can be asserted by a thoughtless agent using an argumentation
system ⟨A(Σ), Undercut, Pref⟩ is exactly the set which
can be asserted by a confident agent using the same argumentation
system.
Proof. Consider a confident agent G and a thoughtless
agent H with the same argumentation system. G can assert
exactly those propositions that it has an argument for. So
by Definition 1 it can assert any p which it can infer from a
minimal consistent subset of Σ, including all the propositions
q in Σ (these are the conclusions of the arguments ({q}, q)).
H can assert any proposition which is either in Σ (which will
be exactly the same as those G can assert) or can be inferred
from it. Those propositions which are non-trivial will be
those that can be inferred from a consistent subset of Σ.
These latter will clearly be ones for which an argument can
be built, and so exactly those that can be asserted by G.
Thus the idea of a thoughtless agent adds nothing to our
classification.
At the risk of further overloading some well-used terms
we can define acceptance conditions:
Definition 8. An agent may have one of three acceptance
attitudes.
a credulous agent can accept any proposition p if it is
backed by an argument.
a cautious agent can accept any proposition p if it is
unable to construct a stronger argument for ¬p.
a skeptical agent can accept any proposition p if there
is an acceptable argument for p.
With a pair of agents that are thoughtful and skeptical, we
recover the rationality conditions of the dialogue system in
[2].
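Read operationally, Definitions 7 and 8 become predicates that gate the assert and accept moves. The sketch below is one possible reading, not the paper's own; in particular the treatment of "stronger" for cautious agents, the arg_for_p parameter and the precomputed acceptable set are our assumptions, reusing the argumentation helpers sketched in Section 2.

```python
# One possible operational reading of Definitions 7 and 8; kb_args is the
# set of arguments the agent can build, acceptable its acceptable set, and
# strata its preference layers (all precomputed with the earlier sketches).

def can_assert(p, kb_args, acceptable, attitude):
    if attitude == 'confident':              # some argument (S, p) exists
        return any(concl == p for (_, concl) in kb_args)
    if attitude == 'thoughtful':             # some acceptable argument (S, p)
        return any(concl == p for (_, concl) in acceptable)
    raise ValueError(attitude)

def can_accept(p, arg_for_p, kb_args, acceptable, strata, attitude):
    """arg_for_p is the argument presented for p (None before a challenge).
    Reading 'stronger' as strictly preferred is our assumption."""
    if attitude == 'credulous':              # any backing argument will do
        return arg_for_p is not None
    if attitude == 'cautious':               # no strictly stronger counter-argument
        counters = [a for a in kb_args if a[1] == ('not', p)]
        return arg_for_p is not None and not any(
            strictly_preferred(c, arg_for_p, strata) for c in counters)
    if attitude == 'skeptical':              # p must have an acceptable argument
        return any(concl == p for (_, concl) in acceptable)
    raise ValueError(attitude)
```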
Skeptical agents are more demanding than credulous
ones in terms of the conditions they put on accepting
information. Typically, a skeptical agent which is presented
with an assertion of p will challenge p to obtain the argument
for it, and then validate that this argument is acceptable
given what it knows. We can consider even more
demanding agents. For example, we can imagine a querulous
agent which will only accept a proposition if it can not only
validate the acceptability of the argument for that proposition,
but also the acceptability of arguments for all the
propositions in that argument, and all the propositions in
those arguments, and so on.
However, it turns out that:
Proposition 2. The set of propositions acceptable to a
skeptical agent using an argumentation system ⟨A(Σ), Undercut,
Pref⟩ is exactly the same as the set of propositions acceptable
to a querulous agent using the same argumentation system.
Proof. Consider a skeptical agent G and a querulous
agent H with the same argumentation system. By definition,
G can accept any proposition p whose support S is either
not attacked by any argument which is built from Σ, or is
defended by an argument which is part of the acceptable set
of A(Σ). In other words, G will only accept p if all the s ∈ S
are themselves supported by acceptable arguments (which
might just be ({s}, s) if there is no argument for ¬s). This
is exactly the set of conditions under which H will accept
p.
In other words, once we require an argument to be acceptable,
we also require that any proposition which is part of
the support for that argument is also acceptable. Thus the
notion of a querulous agent adds nothing to our classification.
5. DIALOGUE TYPES
With the agent attitudes specified, we can begin to look
at different types of dialogue in detail, giving protocols for
each. These protocols are intentionally simple, to make it
possible to provide a detailed analysis of them as a baseline
from which more complex protocols can be examined. An
important feature common to all these protocols is that no
agent is allowed to repeat a locution. If this prevents the
agent from making any locution, the dialogue terminates.
5.1 Information-seeking
In an information seeking dialogue, one participant seeks
the answer to some question from another participant. If the
information seeker is agent A and the other agent is B, then
we can define the protocol IS for an information seeking
dialogue about a proposition p as follows:
1. A asks question(p).
2. B replies with either assert(p), assert(¬p), or assert(U),
which will depend upon the contents of its knowledge-base
and its assertion attitude. U indicates that, for
whatever reason, B cannot give an answer.
3. A either accepts B's response, if its acceptance attitude
allows, or challenges. U cannot be challenged and as
soon as it is asserted, the dialogue terminates without
the question being resolved.
4. B replies to a challenge with an assert(S), where S
is the support of an argument for the last proposition
challenged by A.
5. Go to 3 for each proposition in S in turn.
Note that A accepts whenever possible, only being able to
challenge when unable to accept - "only" in the sense of only
being able to challenge then, and challenge being the only
locution other than accept that it is allowed to make. More
flexible dialogue protocols are allowed, as in [2], but at the
cost of possibly running forever. 5
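A deliberately simplified driver for protocol IS is sketched below; the callback-based structure, the handling of U, and the decision not to revisit a challenged proposition once its support has been examined are all simplifying assumptions of ours rather than part of the protocol definition.

```python
# A deliberately simplified driver for protocol IS (our own sketch).
# b_assert(p) returns p, ('not', p) or 'U' according to B's knowledge and
# assertion attitude; a_accepts(q) applies A's acceptance attitude;
# b_support(q) returns the support S of one of B's arguments for q.

def information_seeking(p, b_assert, a_accepts, b_support):
    trace = [('A', ('question', p))]
    answer = b_assert(p)
    trace.append(('B', ('assert', answer)))
    if answer == 'U':
        return trace, 'unresolved'
    pending, challenged = [answer], set()
    while pending:
        q = pending.pop()
        if a_accepts(q):
            trace.append(('A', ('accept', q)))
            continue
        if q in challenged:                  # cannot repeat a challenge: stop
            return trace, 'unresolved'
        challenged.add(q)
        trace.append(('A', ('challenge', q)))
        S = b_support(q)
        trace.append(('B', ('assert', frozenset(S))))
        # step 5: examine each member of the support in turn; this sketch
        # does not revisit q itself once its support has been examined
        pending.extend(s for s in S if s != q)
    return trace, 'resolved'
```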
There are a number of interesting properties that we can
prove about this protocol, some of which hold whatever acceptance
and assertion attitudes the agents have, and some
of which are more specific. We have:
Proposition 3. When subject to challenge(p) for any p
it has asserted, a confident or thoughtful agent G can always
respond.
Proof. In order to respond to a challenge(p), the agent
has to be able to produce an argument (S, p). Since, by
definition, both confident and thoughtful agents only assert
propositions for which they have arguments, these arguments
can clearly be produced if required. This holds even for the
propositions in S. For a proposition to be in S, by Definition
1 it must be part of a consistent, minimal subset of Σ_G
which entails p. Any such proposition q is the conclusion
of an argument ({q}, q) and this argument is easily generated.
This first result ensures that step 4 can always follow from
step 3, and the dialogue will not get stuck at that point.
5 The protocol in [2] allows an agent to interject with question(p) for
any p at virtually any point, allowing the dialogue to be filibustered
by issuing endless questions about arbitrary formulae.
It also leads to another result - since with this protocol our
agents only put forward propositions which are backed by
arguments, a credulous agent would have to accept any
proposition asserted by an agent:
Proposition 4. A credulous agent G operating under protocol
IS will always accept a proposition asserted by a confident
or thoughtful agent H.
Proof. When H asserts p, G will initially challenge it
(for p to be acceptable it must be backed by an argument,
but no argument has been presented by H, and if G had an
argument for p it would not have engaged in the information
seeking dialogue). By Proposition 3, H will always be able
to generate such an argument, and by the definition of its
acceptance condition and the protocol IS, G will then accept
it.
This result is crucial in showing that if A is a credulous
agent, then the dialogue will always terminate, but what if
it is more demanding? Well, it turns out that:
Proposition 5. An information-seeking dialogue under
protocol IS between a credulous, cautious or skeptical agent
G and a confident or thoughtful agent H will always terminate.
Proof. At step 2 of the protocol H either replies with
p, ¬p or U. If it is U, the dialogue terminates. G then
considers p. If G is credulous, then by Proposition 4, G will
accept the proposition and the dialogue will terminate.
If G is cautious, then at step 3, it will either accept p, or
have an argument for ¬p. In the former case the dialogue
terminates immediately. In the latter case G will challenge
p and by Proposition 3 receive the support S. If G doesn't
have an argument against any of the s ∈ S, then they will
be accepted, but this will not make G accept p. The only locution
that G could utter is challenge(p), but it is prevented
from doing this, and the dialogue terminates. If G does have
an argument for the negation of any of the s ∈ S, then it
will challenge them. As in the proof of Proposition 3 this will
produce an argument ({s}, s) from H, and G will not be able
to accept this. It also cannot challenge this since this would
repeat its challenge of s, and the dialogue will terminate.
If G is skeptical, then the process will be very similar. At
step 3, G will not be able to accept p (for the same kind of
reason as in the proof of Proposition 4), so will challenge it
and receive the support S. This support may mean that G
has an acceptable argument for p, in which case the dialogue
terminates. If this argument is not acceptable, then G will
challenge the s ∈ S for which it has an undercutting argument.
Again, this will produce an argument ({s}, s) from
H which won't make the argument for p acceptable. G cannot
make any further locutions, and the dialogue will terminate.
While this result is a good one, because of the guarantee of
termination, the proof illustrates a limitation of the dialogue
protocol.
Whether G is skeptical or cautious, it will either immediately
accept p or never accept it, whatever H says. That
is, H will never persuade G to change its mind. The reason
for this is that the dialogue protocol neither makes G assert
into CS(G) the grounds for not accepting p (thus giving H
the opportunity to attack the relevant argument), nor gives
H the chance to do anything other than assert arguments
which support p.
This position can be justified since IS is intended only
to capture information seeking. If we want H to be able
to persuade G, then the agents are engaging in a persuasion
dialogue, albeit one that is embedded in an information
seeking dialogue as in [13], and this case is thus dealt with
below.
5.2 Inquiry
In an inquiry dialogue, the participants collaborate to answer
some question whose answer is not known to either.
There are a number of ways in which one might construct
an inquiry dialogue (for example see [12]). Here we present
one simple possibility. We assume that two agents A and
B have already agreed to engage in an inquiry about some
proposition p by some control dialogue as suggested in [13],
and from this point can adopt the following protocol I:
1. A asserts q → p for some q, or U.
2. B accepts q → p if its acceptance attitude allows, or
challenges it.
3. A replies to a challenge with an assert(S), where S
is the support of an argument for the last proposition
challenged by B.
4. Goto 2 for each proposition s ∈ S in turn, replacing
q → p with s.
5. B asserts q, or r → q for some r, or U.
6. If A(CS(A) ∪ CS(B)) includes an argument for p which
is acceptable to both agents, then the dialogue terminates
successfully.
7. Go to 5, reversing the roles of A and B and substituting
r for q and some t for r.
This protocol is basically a series of implied IS dialogues.
First A asks "do you know of anything which would imply
p were it known?". B replies with one, or the dialogue
terminates with U. If A accepts the implication, B asks
"now, do you know q, or any r which would imply q were
it known?", and the process repeats until either the process
bottoms out in a proposition which both agents agree on, or
there is no new implication to add to the chain. Because of
this structure, it is easy to show that:
Proposition 6. An inquiry dialogue I between two agents
G and H with any acceptance and assertion attitudes will
terminate.
Proof. The dialogue starts with an implied IS dialogue.
By Proposition 5 this dialogue will terminate. If it terminates
with a result other than U, then it is followed with
a second IS dialogue in which the roles of the agents are
reversed. Again by Proposition 5 this dialogue will terminate,
possibly with a proof that is acceptable to both agents.
If this second dialogue does not end with a proof or a U,
then it is followed with another IS dialogue in which the
roles of the agents are again reversed. This third dialogue
runs just like the second. The iteration will continue until
either one of the agents responds with a U, or the chain of
implications is ended. One or other will happen since the
agents can only build a finite number of arguments (since
arguments have supports which are minimal consistent sets
of the finite knowledge base), and agents are not allowed to
repeat themselves. When the iteration terminates, so does
the dialogue.
However, it is also true that this rather rigid protocol may
prevent a proof being found even though one is available to
the agents if they were to make a different set of assertions.
More precisely, we have:
Proposition 7. Two agents G and H which engage in
an inquiry dialogue for p, using protocol I, may find the dialogue
terminates unsuccessfully even when A(Σ_G ∪ Σ_H) provides
an argument for p which both agents would be able to accept.
Proof. Consider the case where G has Σ_G = {q → p, r → p}
and H has Σ_H = {r}. Clearly together both agents can produce
the argument ({r, r → p}, p), and this will be acceptable to both agents no
matter their acceptance attitude, but if G starts by asserting
q → p, the agents will never find this proof.
Of course, it is possible to design protocols which don't suffer
from this problem, by allowing an agent to assert all the r →
q which are relevant at any point in the dialogue (turning the
dialogue into a breadth-first search for a proof rather than
a depth-first one) or by allowing the dialogue to backtrack.
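The depth-first, no-backtracking character of protocol I, and hence the incompleteness noted in Proposition 7, can be illustrated with the following toy sketch; it is our own abstraction (atoms are strings, an implication q → p is the pair (q, p), and the alternation of roles is compressed into a single loop), not part of the protocol definition.

```python
# A toy rendering of the chaining behaviour of protocol I.  Knowledge bases
# are lists containing atoms (strings) and implications (pairs (q, p)).
# The search is depth-first with no backtracking, which is exactly what
# makes the failure described in Proposition 7 possible.

def inquiry(p, kb_first, kb_second):
    mover, other = kb_first, kb_second        # kb_first opens with some q -> p
    goal, chain = p, []
    while True:
        if goal in mover:                     # the goal itself is known
            return chain + [goal], 'resolved'
        links = [f for f in mover if isinstance(f, tuple) and f[1] == goal]
        if not links:                         # nothing to assert: utter U
            return chain, 'unresolved'
        q, _ = links[0]                       # the protocol commits to one link
        chain.append((q, goal))
        goal = q
        mover, other = other, mover           # roles reverse for the next step

# With kb_G = [('q', 'p'), ('r', 'p')] and kb_H = ['r'], the run that opens
# with ('q', 'p') ends unresolved, although {'r', ('r', 'p')} proves p.
```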
Another thing to note is that, in contrast to the information-seeking
dialogue, in inquiry dialogues the relationship
between the agents is symmetrical in the sense that both are
asserting and accepting arguments. Thus both an agent's
assertion attitude and acceptance attitude come into play.
As a result, in the case of a confident but skeptical agent, it
is possible for an agent to assert an argument that it would
not find acceptable itself. This might seem odd at first, but
on reflection seems more reasonable (consider the kind of inquiry
dialogue one might have with a child), not least when
one considers that a confident assertion attitude can be seen
as one which responds to resource limitations - assert something
that seems reasonable and only look to back it up if
there is a reason (its unacceptability to another agent) which
suggests that it is problematic.
5.3 Persuasion
In a persuasion dialogue, one party seeks to persuade another
party to adopt a belief or point-of-view he or she does
not currently hold. The dialogue game DC, on which the
moves in [2] are based, is fundamentally a persuasion game,
so the protocol below results in games which are very like
those described in [2]. This protocol, P, is as follows, where
agent A is trying to persuade agent B to accept p.
1. A asserts p.
2. B accepts p if its acceptance attitude allows; if not, B
asserts ¬p if it is allowed to, or otherwise challenges
p.
3. If B asserts ¬p, then goto 2 with the roles of the agents
reversed and ¬p in place of p.
4. If B has challenged, then:
(a) A asserts S, the support of an argument for p.
(b) Goto 2 for each s ∈ S in turn.
If at any point an agent cannot make the indicated move,
it has to concede the dialogue game. If A concedes, it fails
to persuade B that p is true. If B concedes, then A has
succeeded in persuading it. An agent also concedes the game
if at any point there are no propositions made by the other
agent that it hasn't accepted.
Once again the form of this dialogue has much in common
with inquiry dialogues. The dialogue starts as if B has asked
A if p is true, and A's response is handled in the same way
as in an inquiry unless B has a counter-argument, in which
case it can assert it. This assertion is like spinning off a
separate IS dialogue in which A asks B if ¬p is true. Since
we already have a termination result for IS dialogues, it is
simple to show that:
Proposition 8. A persuasion dialogue under protocol P
between two agents G and H will always terminate.
Proof. A dialogue under P is just like an information-seeking
dialogue under IS in which agents are allowed to
reply to the assertion of a proposition p with the assertion
of ¬p as well as the usual responses. Since we know that
a dialogue under IS always terminates, it suffices to show
that the assertion of ¬p does not lead to non-termination.
Since the only difference between the sub-dialogue spawned
by the assertion of ¬p and an IS dialogue is the possibility
of the agent to which ¬p is asserted asserting p in response,
then this is the only way in which non-termination can occur.
However, this assertion of p is not allowed since it
would repeat the assertion that provoked the ¬p, and so the
dialogue would terminate. Thus a P dialogue will always
terminate.
Again there is some symmetry between the agents, but there
is also a considerable asymmetry which stems from the fact
that A is effectively under a burden of proof, so it has to win
the argument in order to convince B, while B just has to
fail to lose to not be convinced. Thus if A and B are both
confident-cautious and one has an argument for p and the
other has one for ¬p, and neither argument is stronger than
the other, despite the fact that the arguments "draw", A
will lose the exchange and B will not be convinced. This is
exactly the same kind of behaviour that is exhibited by all
persuasion dialogues in the literature.
6. COMPLEXITY OF DIALOGUES
Having examined some of the properties of the dialogues,
we consider their computational complexity. Since the protocols
are based on reasoning in logic, we know that the
complexity will be high - our aim in this analysis is to establish
exactly where the complexity arises so that we can
try to reduce it, for example, as in [22], by suitable choice
of language.
To study this issue, we return to Definition 1. Given a
knowledge base Σ, we will say there is a prima facie argument
for a particular conclusion h if Σ ⊢ h, i.e., if it is
possible to prove the conclusion from the knowledge base.
The existence of a prima facie argument does not imply the
existence of a "usable" argument, however, as Σ may be inconsistent.
Since establishing proof in propositional logic is
co-NP-complete, we can immediately conclude:
Proposition 9. Given a knowledge base Σ and a conclusion
h, determining whether there is a prima facie argument
for h from Σ is co-NP-complete.
We will say a pair (H, h) is a consistent prima facie argument
over Σ if H is a consistent subset of Σ and H ⊢ h.
Determining whether or not there is a consistent prima facie
argument for some conclusion is immediately seen to be
harder.
Proposition 10. Given a knowledge base Σ and conclusion
h, determining whether there is a consistent prima facie
argument for h over Σ is Σ^p_2-complete.
Proof. The following Σ^p_2 algorithm decides the problem:
1. Existentially guess a subset H of Σ together with a
valuation v for H.
2. Verify that v ⊨ H.
3. Universally select each valuation v′ of H, and verify
that v′ ⊨ H → h.
The algorithm has two alternations, the first being an existential,
the second a universal, and so it is indeed a Σ^p_2 algorithm. The existential alternation involves guessing a
support for h together with a witness to the consistency of
this support. The universal alternation verifies that H → h
is valid, and so H ⊢ h. Thus the problem is in Σ^p_2.
To show the problem is Σ^p_2-hard, we do a reduction from
the qbf2,∃ problem [10, p96]. An instance of qbf2,∃ is given
by a quantified boolean formula with the following structure:
∃x1, . . . , xk ∀xk+1, . . . , xl . φ(x1, . . . , xl)   (1)
where φ is a propositional logic formula over Boolean variables
x1, . . . , xl. Such a formula is true if there
are values we can give to x1, . . . , xk such that for all values
we can give to xk+1, . . . , xl, the formula φ is true. Here is an
example of such a formula.
∃x1 ∀x2 . x1 ∨ x2   (2)
Formula (2) in fact evaluates to true. (If x1 is true, then
for all values of x2, the overall formula is true.)
Given an instance (1) of qbf2,∃, we define the conclusion
h to be φ, and define the knowledge base Σ as
Σ = {x1 ≡ ⊤, x1 ≡ ⊥, . . . , xk ≡ ⊤, xk ≡ ⊥}
where ⊤ and ⊥ are logical constants for truth and falsehood
respectively. Any consistent subset of Σ defines a consistent
partial valuation for the body of (1); variables not given a
valuation by a subset are assumed to be "don't care". We
claim that input formula (1) is true iff there exists a consistent
prima facie argument for h given knowledge base Σ.
Intuitively, in considering subsets of Σ, we are actually examining
all values that may be assigned to the existentially
quantified variables x1, . . . , xk. Since the reduction is clearly
polynomial time, we are done.
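As an illustration of the reduction, formula (2) can be run through the brute-force helpers sketched in Section 2; since that toy encoding has no ⊤/⊥ constants, we stand in x1 and ¬x1 for x1 ≡ ⊤ and x1 ≡ ⊥, which is an assumption of the sketch rather than part of the proof.

```python
# The reduction of Proposition 10, run on formula (2) with the brute-force
# consistent() and entails() helpers sketched earlier.
from itertools import combinations

x1, x2 = ('atom', 'x1'), ('atom', 'x2')
phi = ('or', x1, x2)                          # body of formula (2)
sigma = [x1, ('not', x1)]                     # choices for the existential x1

def qbf2_exists_true(phi_, sigma_):
    """Formula (1) is true iff some consistent subset of sigma_ entails phi_."""
    return any(consistent(list(H)) and entails(list(H), phi_)
               for k in range(len(sigma_) + 1)
               for H in combinations(sigma_, k))

print(qbf2_exists_true(phi, sigma))           # True: take H = {x1}
```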
Now, knowing that there exists a consistent prima facie
argument for conclusion h over Σ implies the existence of a
minimal argument for h over Σ (although it does not tell us
what this minimal argument is). We can thus conclude:
Corollary 1. Given a knowledge base Σ and conclusion
h, determining whether there is an argument for h (i.e., a
minimal consistent prima facie argument for h - Definition
1) over Σ is Σ^p_2-complete.
The next obvious question is as follows: given (H, h), where
H ⊢ h, is it minimal?
Corollary 2. Given a knowledge base Σ and prima facie
argument (H, h), the problem of determining
whether (H, h) is minimal is Π^p_2-complete.
Proof. For membership of Π^p_2, consider the following Σ^p_2 algorithm, which decides the complement of the problem:
1. Existentially select a proper subset H′ of H and a valuation
v for H′.
2. Verify that v ⊨ H′.
3. Universally select each valuation v′ for H′.
4. Verify that v′ ⊨ H′ → h.
The algorithm contains two alternations, an existential followed
by a universal, and so is indeed a Σ^p_2 algorithm. The
algorithm works by guessing a subset H′ of H, showing that
this subset is consistent, and then showing that H′ → h is a
tautology, so H′ ⊢ h. Since the complement of the problem
under consideration is in Σ^p_2, and co-Σ^p_2 = Π^p_2, it follows
that the problem is in Π^p_2.
To show completeness, we reduce the qbf2,∃ problem to the complement
of the problem, i.e., to showing that an argument
is not minimal. If an argument (H, h) is not minimal, then
there will exist some consistent proper subset H′ of H such that
H′ ⊢ h. The reduction is identical to that above: we set
h to be φ and H to be the knowledge base Σ defined above.
We then ask whether there is a consistent subset H′ of H
such that H′ ⊢ h. Since we have reduced a Σ^p_2-complete problem
to the complement of the problem under consideration,
it follows that the problem is Π^p_2-hard.
These results allow us to handle the complexity of dialogues
involving confident, credulous and cautious agents, which
are only interested in whether arguments can be built for
given propositions. For thoughtful and skeptical agents we
need to consider whether an argument is undercut.
Proposition 11. Given a knowledge base Σ and an argument
(H, h) over Σ, the problem of showing that (H, h)
has an undercutter is Σ^p_2-complete.
Proof. The following Σ^p_2 algorithm decides this problem:
1. Existentially guess (i) a subset H′ of Σ; (ii) a member h′
of the support H to undercut; and (iii) a valuation v.
2. Verify that v ⊨ H′.
3. Universally select each valuation v′ of H′.
4. Verify that v′ ⊨ H′ → ¬h′.
For hardness, there is a straightforward reduction from the
qbf2,∃ problem, essentially identical to the reductions given
in proofs above - we therefore omit it.
As a corollary, the problem of showing that (H, h) has no
undercutter is Π^p_2-complete.
These results are sufficient to demonstrate the worst-case
intractability of argumentation-based approaches for skeptical
and thoughtful agents using propositional logic. They
thus motivate the investigation of the behaviour of agents
with different attitudes and the use of other logics. These
matters are explored in an extended version of this paper.
7. CONCLUSIONS
This paper has examined three types of argumentation-based
dialogue between agents - information seeking, inquiry
and persuasion, from the typology of [21] - defining a precise
protocol for each and examining some important properties
of that protocol. In particular we have shown that each protocol
leads to dialogues that are guaranteed to terminate,
and we have considered some aspects of the complexity of
these dialogues. The exact form of the dialogues depends on
what messages agents send and how they respond to messages
they receive. This aspect of the dialogue is not specified
by the protocol, but by some decision-making apparatus
in the agent. Here we have considered this decision to be
determined by the agents' attitude, and we have shown how
this attitude affects their behaviour in the dialogues they
engage in.
Both of these aspects extend previous work in this field. In
particular, they extend the work of [2] by precisely defining a
set of protocols (albeit quite rigid ones) and a range of agent
attitudes - in [2] only one protocol, for persuasion, and only
one attitude, broadly thoughtful-skeptical, were considered.
More work, of course, remains to be done in this area.
Particularly important is determining the relationship between
the locutions we use in these dialogues and those of
agent communication languages such as the FIPA ACL, examining
the effect of adding new locutions (such as retract)
to the language, and identifying additional properties of the
dialogues (such as whether the order in which arguments
are made affects the outcome of the dialogue). We are currently
investigating these matters along with further dialogue
types, more complex kinds of the dialogue types studied
here, such as planning dialogues [8], and additional complexity
issues (including the effect of languages other than
propositional logic).
Acknowledgments
This work was partly funded by the EU funded Project IST-
1999-10948.
8. REFERENCES
--R
On the acceptability of arguments in preference-based argumentation framework
Modelling dialogues using argumentation.
Agent dialogues with conflicting preferences.
On the acceptability of arguments and its fundamental role in nonmonotonic reasoning
The pleadings game.
The evolution of sharedplans.
A catalog of complexity classes.
A generic framework for dialogue game implementation.
Risk agoras: Dialectical argumentation for scienti
Games that agents play: A formal framework for dialogues between autonomous agents.
An approach to using degrees of belief in BDI agents.
Negotiation through argumentation
Agents that reason and negotiate by arguing.
Relating protocols for dynamic dispute with logics for defeasible argumentation.
Dialogue frames in agent communications.
Ultima ratio: should Hamlet kill Claudius.
Planning other agents' plans.
Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning.
Languages for negotiation.
--TR
Attention, intentions, and the structure of discourse
A catalog of complexity classes
The Pleadings Game
On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and <italic>n</italic>-person games
Ultima ratio (poster)
Games That Agents Play
Risk Agoras
Agent Theory for Team Formation by Dialogue
Agent Dialogues with Conflicting Preferences
Dialogue Frames in Agent Communication
Modeling Dialogues Using Argumentation
--CTR
Pieter Dijkstra , Floris Bex , Henry Prakken , Kees De Vey Mestdagh, Towards a multi-agent system for regulated information exchange in crime investigations, Artificial Intelligence and Law, v.13 n.1, p.133-151, January 2005
Pieter Dijkstra , Henry Prakken , Kees de Vey Mestdagh, An implementation of norm-based agent negotiation, Proceedings of the 11th international conference on Artificial intelligence and law, June 04-08, 2007, Stanford, California
Laurent Perrussel , Jean-Marc Thvenin , Thomas Meyer, Mutual enrichment through nested belief change, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Simon Parsons , Michael Wooldridge , Leila Amgoud, On the outcomes of formal inter-agent dialogues, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, July 14-18, 2003, Melbourne, Australia
Yuqing Tang , Simon Parsons, Argumentation-based dialogues for deliberation, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Eva Cogan , Simon Parsons , Peter McBurney, What kind of argument are we going to have today?, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Paul E. Dunne , Peter McBurney, Optimal utterances in dialogue protocols, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, July 14-18, 2003, Melbourne, Australia
Henry Prakken, Formal systems for persuasion dialogue, The Knowledge Engineering Review, v.21 n.2, p.163-188, June 2006
Pietro Baroni , Massimiliano Giacomin , Giovanni Guida, Self-stabilizing defeat status computation: dealing with conflict management in multi-agent systems, Artificial Intelligence, v.165 n.2, p.187-259, July 2005
N. Maudet , B. Chaib-Draa, Commitment-based and dialogue-game-based protocols: new trends in agent communication languages, The Knowledge Engineering Review, v.17 n.2, p.157-179, June 2002
Iyad Rahwan , Sarvapali D. Ramchurn , Nicholas R. Jennings , Peter Mcburney , Simon Parsons , Liz Sonenberg, Argumentation-based negotiation, The Knowledge Engineering Review, v.18 n.4, p.343-375, December | agent communication;argumentation;dialogue games |
544836 | Desiderata for agent argumentation protocols. | Designers of agent communications protocols are increasingly using formal dialogue games, adopted from argumentation theory, as the basis for structured agent interactions. We propose a set of desiderata for such protocols, drawing on recent research in agent interaction, on recent criteria for assessment of automated auction mechanisms and on elements of argumentation theory and political theory. We then assess several recent dialogue game protocols against our desiderata, revealing that each protocol has serious weaknesses. For comparison, we also assess the FIPA Agent Communications Language (ACL), thereby showing FIPA ACL to have limited applicability to dialogues not involving purchase negotiations. We conclude with a suggested checklist for designers of dialogue game protocols for agent interactions. | INTRODUCTION
Formal dialogue games are games in which two or more participants
"move" by uttering locutions, according to certain pre-defined
rules. They have been studied by philosophers since the time of
Aristotle, most recently for the contextual modeling of fallacious
reasoning [14, 23] and as a proof-theoretic semantics for intuitionistic
and classical logic [22]. Outside philosophy, dialogue games
have been used in computational linguistics, for natural language
explanation and generation, and in artificial intelligence (AI), for
automated software design and the modeling of legal reasoning,
e.g., [4]. In recent years, they have found application as the basis
for communications protocols between autonomous software
agents, including for agents engaged in: negotiation dialogues, in
which participating agents seek to agree a division of some scarce
resource [3, 17, 27, 30]; persuasion dialogues, where one agent
seeks to persuade another to endorse some claim [2, 7, 8]; information-seeking
dialogues, where one agent seeks the answer to some
question from another [17]; inquiry dialogues, where several agents
jointly seek the answer to some question [24]; and deliberation di-
alogues, where participants seek to jointly agree a course of action
in some situation [16]. 1
Why have agent protocol designers turned to dialogue games
from argumentation theory? It is reasonable to assume that a rational
agent would only change its beliefs or its preferences after
receiving new information, i.e., not on the basis of whim or malice,
say. For an agent to acquire new information in a dialogue, it needs
a way to probe or challenge the statements of other agents; thus,
locutions which enable utterances to be questioned or contested are
required, along with locutions which enable appropriate responses
to these. In order that this process takes place in an orderly and
efficient fashion, we require rules which govern what can and cannot
and must be said in a dialogue, and when. Dialogue games
provide a framework for the design of such structured discourses,
drawing on specific theories of argumentation. We would expect
that the greater the amount of relevant information passed between
participants, the greater is the likelihood of successful resolution
of the dialogue; increased likelihood of resolution is therefore the
expected payoff of these games when compared with more parsimonious
interaction protocols, such as auctions.
However, despite this recent interest in dialogue-game protocols
for multi-agent systems, we know of no discussion of appropriate
design principles. As these protocols proliferate, designers
and users will require means to assess protocols and to compare
one with another. In this paper, we therefore propose the first
list of desiderata to govern the design and assessment of dialogue
game protocols. To do this, we have drawn upon: the criteria recently
proposed for assessment of automated auction and negotiation
mechanisms in, e.g., [31]; theories of deliberative decision-making
from argumentation theory [1, 15] and political theory [6,
10, 12]; and recent studies of agent communications languages and
interaction protocols [13, 20, 33, 36]. We believe our list of desiderata
will be an initial step towards the development of formal design
and assessment criteria for agent argumentation protocols.
2. PROPOSED DESIDERATA
We begin by assuming that agents engaged in dialogues are au-
tonomous, willing and free participants, able to enter and withdraw
from dialogues as and when they see fit. Within each dialogue, they
remain autonomous, and are not compelled to accept or reject any
proposition. These assumptions have implications for some of the
desiderata, as explained below. We also assume that the specification
of a dialogue game protocol consists of: (a) a set of topics of
discussion (which may be represented in some logical language);
This typology of dialogues is from [35], and we use it throughout this paper; note
that negotiation and deliberation dialogues are defined more precisely than is usually
the case within AI.
(b) the syntax for a set of defined locutions concerning these top-
ics; (c) a set of rules which govern the utterance of these locutions;
(d) a set of rules which establish what commitments, if any, participants
create by the utterance of each locution; and (e) a set of rules
governing the circumstances under which the dialogue terminates.
We refer to such a specification as a Dialectical System. 2 We now
list our desiderata with a brief explanation for each:
1. Stated Dialogue Purpose: A dialectical system should have
one or more publicly-stated purposes, and its locutions and
rules should facilitate the achievement of these. For exam-
ple, the stated purpose of a system for negotiation may be
an agreement on the division of a particular scarce resource;
negotiation over a different resource will result in a different
purpose. Likewise, a discussion about the same resource
which is not a negotiation over its division constitutes a different
purpose, e.g., it may be an information-seeking dia-
logue. The dialogue purposes need to be stated, so that all
participating agents are aware of them in advance of entering
the dialogue. Successful resolution of a dialogue will occur
when its stated purposes are achieved.
2. Diversity of Individual Purposes: A dialectical system should
permit participating agents to achieve their own individual
purposes consistent with the overall purpose of the dia-
logue. These individual purposes may conflict, as when parties
to a negotiation each seek to maximize their individual
utility in any outcome, or they may coincide, as when agents
collectively seek to answer some unknown question.
3. Inclusiveness: A dialectical system should not preclude participation
by any potential agent which is qualified and willing
to participate. Because agents are autonomous entities,
there is a sense in which all agents are deserving of equal
respect. As with human beings [6], agents affected by decisions
have a moral right to be included in deliberations
leading to those decisions. In addition, inclusion of affected
parties in decisions can improve the quality of the decision
outcomes [10].
4. Transparency: Participants to a dialogue should know the
rules and structure of the dialectical system prior to commencement
of the dialogue. In particular, any reference from
dialogues in a dialectical system to an external reality should
be explicitly stated, and known to the participants before
commencement, e.g., when commitments incurred inside a
purchase negotiation dialogue imply subsequent real-world
obligations to execute a particular commercial transaction.
5. Fairness: A dialectical system should either treat all participants
equally, or, if not, make explicit any asymmetries in
their treatment. For instance, it may be appropriate for participants
to play different roles in a dialogue, such as sellers,
buyers and auctioneers in a purchase transaction dialogue
[32]. Agents in these different roles may have different rights
and responsibilities, and these should be known to all.
6. Clarity of Argumentation Theory: A dialectical system
should conform, at least at the outset, to a stated theory of
argument, for example Hitchcock's Principles for Rational
Mutual Inquiry [15] or the persuasion dialogue rules of [9].
This model is presented in [25]. Note that there is no consensus among philosophers
over distinctions, if any, between the words "dialogical" and "dialectical" [5, p. 337].
The reason for this is so that all participants know, and adhere
to, their dialectical obligations, agree on rules of inference
and procedure, and have reasonable expectations of the
responses of others. For example, an agent should know in
advance of making an assertion that its statement may incur
obligations to defend it upon contestation by others; like-
wise, agents contesting an assertion should know if they are
entitled to receive a defence of it. The dialogue-game rules
which embody a theory of argumentation ensure that such
arguments are conducted in an orderly and efficient manner.
If dialogue participants wish to change the argumentation-
theoretic basis or the dialogical rules of the system in the
course of using it for a particular dialogue, being free agents,
they should be enabled to do so. 3
7. Separation of Syntax and Semantics: The syntax of a dialectical
system should be defined separately from its seman-
tics. There are two reasons for this. Firstly, this approach
enables the same protocol syntax to be used with multiple
semantics. Secondly, the problem of semantic verification
of an agent communications language is a thorny one [36],
since it will always be possible for a sufficiently-clever agent
to simulate insincerely any required internal state. Ensuring
that the protocol syntax is defined separately from its semantics
therefore enables the verification of conformity with protocol
syntax, even if the protocol semantics cannot be completely
verified. 4 The recent development of a social seman-
tics, where agents first express publicly their beliefs and intentions
relevant to an interaction, may be seen as an attempt
to extend the domain of verifiability [33].
8. Rule-Consistency: The locutions and rules of a dialogue
system should together be internally consistent; that is, they
should not lead to deadlocks (where no participant may utter
a legal locution), nor infinite cycles of repeated locutions.
9. Encouragement of Resolution: Resolution of each dialogue
(normal termination) should be facilitated, and not precluded,
by the locutions and rules of a dialectical system.
10. Discouragement of Disruption: Normally, the rules of a dialectical
system should discourage or preclude disruptive be-
haviour, such as uttering the same locution repeatedly. How-
ever, as Krabbe notes with regard to retraction [18], achieving
a balance between outlawing disruptive behaviour and
permitting freedom of expression is not necessarily straight-
forward, and will differ by application.
11. Enablement of Self-Transformation: A dialectical system
should permit participants to undergo self-transformation [12]
in the course of a dialogue; e.g., participants to a negotiation
should be able to change their preferences or their valuations
of utility as a result of information they receive from others in
the dialogue, or express degrees of belief in propositions. In
particular, participants should have the right to retract commitments
made earlier in the same dialogue, although not
necessarily always unconditionally. If the protocol does not
permit such transformation, then one agent would not be able
to persuade another to change its beliefs or to adopt a proposal
it had previously rejected; in such circumstances, there
would be no point for the agents to engage in dialogue.
3 This last property is called dialectification in [15].
4 Expressing the rules of dialogue in terms of observable linguistic behaviour is called
externalization in [15].
12. System Simplicity: The locutions and rules of a dialectical
system should be as simple as possible, consistent with
the eleven criteria above. In particular, each locution should
serve a specific and stated function in the dialogue, and the
protocol rules should lead to efficient achievement of the dialogue
purposes.
13. Computational Simplicity: A dialectical system should be
designed to minimize any computational demands on its par-
ticipants, and on the system itself, consistent with the twelve
criteria above.
It is important to note two criteria we have not included here.
We have not specified that dialectical systems should be realistic
representations of some human dialogue, as we see no reason why
agent interactions should necessarily adopt human models of inter-
action. Indeed, dialectical systems may be applied to agent dialogues
which humans do not, or, even, could never undertake, such
as simultaneous negotiations over multiple products with hundreds
of participants. Secondly, we have not stated that the rules of a
dialectical system should require that the participants use particular
rules of inference (such as Modus Ponens), particular logics
or particular decision-making procedures, or that the participating
agents satisfy some criterion of rational behaviour, such as acting
to maximize expected utility. Insisting on such rules and criteria is
contrary to the notion of agent autonomy we assumed at the outset.
In addition to the thirteen principles listed above, there may be
further desiderata appropriate for specific types of dialogue. For in-
stance, for dialogues undertaken to negotiate a division of a scarce
resource, it may be considered desirable that outcomes are Pareto
optimal, i.e., that any other outcome leaves at least one participant
worse off [31]. Because we assume agents are free and willing
participants in a dialogue, acting under no duress, then any agreed
outcome to a negotiation dialogue will satisfy this particular cri-
terion, if certain of the above desiderata are met. We present the
result formally, so as to make clear the assumptions needed:
Proposition: Suppose two or more agents, each of which is purely
self-interested and without malice, engage freely and without duress
in a negotiation dialogue, i.e., a dialogue to agree a division of
some scarce resource. Suppose these agents use a dialogue protocol
which satisfies desiderata 2, 4, and 5, and that this dialogue
is conducted with neither time constraints nor processing-resource
constraints. Suppose further that their negotiation dialogue achieves
resolution, i.e., they agree on a division of the resource in question.
Then the outcome reached is Pareto Optimal.
Proof. Suppose that the outcome reached, which we denote by X ,
is not Pareto Optimal. Then there is another outcome, Y , which
leaves at least one agent, say agent a, better off, while all other
agents are no worse off. Then, it behooves agent a to suggest Y
rather than X for agreement by the participants, since agent a (like
all participants) is self-interested. If Y is suggested by agent a, the
other agents will at least be indifferent between Y and X , because
they are no worse off under Y and may be better off; so, being
without malice, the others should support proposal Y over X in
the dialogue. Now, the only reasons agent a would not suggest Y
would be because of: resource-constraints precluding the identification
of Y as a better outcome; time-constraints precluding the
making of the suggestion of Y in the dialogue; constraints imposed
on agent a by the protocol itself, e.g., rules precluding that particular
agent making suggestions; or social pressures exerted by other
agents on agent a which prevent the suggestion of Y being made.
Each of these reasons contradicts an assumption of the proposition,
and so X must be Pareto Optimal. □
Of course, if the agents are not purely self-interested, or not free
of duress, or if they enter the discussion under constraints such as
resolution deadlines, then any agreement reached may not be Pareto
optimal. Since most agents in most negotiation dialogues will be
subject to resource- and time-constraints, Pareto optimality may
be seen as a (mostly) unachievable ideal for agent negotiation dia-
logues. An interesting question would be the extent to which any
given negotiation dialogue outcome approximates a Pareto optimal
outcome.
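When the outcome space and the participants' utilities are small and explicit, Pareto optimality can be checked directly; the fragment below is only an illustrative sketch, and the representation of outcomes as utility vectors is our assumption rather than anything required by the desiderata.

```python
def dominates(y, x):
    """Outcome y Pareto-dominates x: no agent is worse off, some agent is better."""
    return all(uy >= ux for uy, ux in zip(y, x)) and \
           any(uy > ux for uy, ux in zip(y, x))

def pareto_optimal(x, outcomes):
    """x is Pareto optimal among the enumerated outcomes."""
    return not any(dominates(y, x) for y in outcomes)

# Example: candidate divisions scored as (utility of agent a, utility of b).
outcomes = [(2, 2), (1, 2), (2, 1), (0, 3)]
assert pareto_optimal((2, 2), outcomes)       # nothing dominates (2, 2)
assert not pareto_optimal((1, 2), outcomes)   # (2, 2) dominates (1, 2)
```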
Finally, it is important to note that these desiderata, particularly
numbers 6 (Clarity of Argumentation Theory) and 11 (Enablement
of Self-Transformation), express a particular view of joint decision-making
by autonomous entities. Political theorists distinguish rational-choice
or marketplace models from deliberative democracy
models of social and public decision-making [6]. Rational-choice
models assume that each participant commences the decision-process
with his or her beliefs, utilities and preferences fully formed
and known (at least to him/herself); each participant then chooses
between (i.e., votes for or against) competing proposals on the basis
of his or her own beliefs and preferences. Such a model does
not allow for beliefs and preferences to be determined in the course
of the interaction, nor for participants to acquire a group view of
the issues involved in the decision, for instance, the wider social
consequences of individual actions [28]. In contrast, deliberative
democracy models of joint decision-making emphasize the manner
in which beliefs and preferences are formed or change through the
very process of interacting together, with participants undergoing
what has been called self-transformation [12, p. 184]. In a rational
decision-process, this transformation occurs by the sharing of
information, by challenging and defending assertions, by persua-
sion, and by joint consideration of the relevant issues - i.e., by
argument and debate. Because we assume that software agents are
autonomous, then such argument will be required to convince other
agents to adopt specific beliefs and to commit to specific intentions;
we believe, therefore, that a society of autonomous agents is best
viewed as a deliberative democracy, and not as simply a market-place.
Comparison with Game Theoretic Models
At this point it is worth discussing the relationship of argumentation
protocols to work on game-theoretic approaches to negotiation, of
which perhaps the best known examples are [19, 29]. An example
of the kind of issue investigated in this work is how agents with
tasks to carry out in some environment can divide the tasks amongst
themselves to their mutual betterment - the task oriented domains
of [29, pp.29-52]. The key abstraction in this work is that the utility
of possible deals in the domain of negotiation (whatever agents are
negotiating over) can be assessed for any individual agent.
Perhaps the greatest attraction of game-theoretic approaches to
negotiation is that it is possible to prove many desirable features
of a given negotiation protocol. Examples of such properties include
[31, p.204]:
1. Maximising social welfare. Intuitively, a protocol maximises
social welfare if it ensures that any outcome maximises
the sum of the utilities of negotiation participants. If the utility
of an outcome for an agent was simply defined in terms
of the amount of money that agent received in the outcome,
then a protocol that maximised social welfare would maximise
the total amount of money "paid out."
2. Pareto efficiency. As discussed above.
3. Individual rationality. A protocol is said to be individually
rational if following the protocol - "playing by the rules"
- is in the best interests of negotiation participants. Individually
rational protocols are essential because, without them,
there is no incentive for agents to engage in negotiations.
4. Stability. A protocol is stable if it provides all agents with
an incentive to behave in a particular way. The best-known
kind of stability is Nash equilibrium.
5. Simplicity. A "simple" protocol is one that makes the appropriate
strategy for a negotiation participant "obvious". That
is, a protocol is simple if using it, a participant can easily
(tractably) determine the optimal strategy.
6. Distribution. A protocol should ideally be designed to ensure
that there is no "single point of failure" (such as a single
arbitrator), and so as to minimise communication between
agents.
It is worth comparing our desiderata with these. We do not explicitly
state maximising social welfare, as there is no general notion
of this in dialogue games. As we demonstrated above, our criteria
imply Pareto optimal outcomes. Individual rationality amounts
to our criterion of individual purpose: an agent cannot be forced to
participate. We do not assume stability, but we do assume that there
is an incentive to resolve the dialogue, i.e., that it is in an agent's
interests to participate in the successful conclusion of the dialogue.
We explicitly assume simplicity. Finally, we do not explicitly consider
distribution.
One of the best-known results in the area of game-theoretic negotiation
is Nash's axiomatic approach to bargaining, an attempt to
axiomatically define the properties that a "fair" outcome to negotiation
would satisfy - a desideratum, in effect, for negotiation [29,
pp.50-52]. The properties he identified were: (i) individual rationality
(a participant should not lose from negotiation); (ii) Pareto
optimality; (iii) symmetry between the participants; (iv) invariance with respect to linear
utility transformations (e.g., if one agent counts utility in cents,
while the other counts it in dollars, it should make no difference
to the outcome); and (v) independence of irrelevant alternatives.
Nash proved that mechanisms that guarantee an outcome that maximises
the product of the utilities of participant agents satisfy these
criteria, and moreover, are the only mechanisms that satisfy these
criteria. We have begun working on the formalisation of dialogue
games [25, 26], with the goal of making these desiderata formal.
As a next step, it would be interesting to determine the extent to
which Nash's results transfer to our dialogue game framework.
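As an illustration of Nash's result, the sketch below (a toy of ours, not drawn from [29]) selects, from a finite set of candidate agreements, the outcome maximising the product of utility gains over a disagreement point. The outcome names, the zero disagreement point, and the utilities are assumed for the example.

def nash_bargaining_outcome(utilities, disagreement=(0, 0)):
    """Choose the outcome maximising the product of utility gains over the disagreement point."""
    def nash_product(u):
        gains = [ui - di for ui, di in zip(u, disagreement)]
        if any(g < 0 for g in gains):        # exclude individually irrational outcomes
            return float("-inf")
        product = 1.0
        for g in gains:
            product *= g
        return product
    return max(utilities, key=lambda outcome: nash_product(utilities[outcome]))

utilities = {"X": (2, 2), "Y": (3, 2), "Z": (1, 3)}
print(nash_bargaining_outcome(utilities))    # "Y": product 6 beats 4 (X) and 3 (Z)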
3. DIALOGUE GAME PROTOCOLS
In this section, we examine three recent proposals for dialogue
game protocols against the desiderata presented above. The three
protocols, which are representative of the literature, have been selected
because they concern three different types of agent interac-
tions. However, not all elements of these protocols are fully speci-
fied, thus making it impossible to assess them against some of the
desiderata. In these cases, we write: Unable to assess.
3.1 A negotiation dialogue
We first consider a dialogue-game protocol for agent negotiation
dialogues proposed by Amgoud, Parsons and Maudet in [3], drawing
on the philosophical dialogue game DC of [23]. 5 This agent interaction
protocol comprises seven distinct locutions: assert, ques-
tion, challenge, request, promise, accept and refuse, and these can
5 This dialogue game was designed to enable persuasion dialogues which would preclude
circular reasoning.
be variously instantiated with single propositions; arguments for
propositions (comprised of sets of propositions); or certain types of
implication. For example, the locution promise(p ) q) indicates a
promise by the speaker to provide resource q in return for resource
p. Arguments may be considered to be tentative proofs, i.e., logical
inferences from assumptions which may not all be confirmed.
The syntax for this protocol has only been provided for dialogues
between two participants, but could be readily extended to more
agents.
Following [14], when an agent asserts something (a proposition,
an argument, or an implication), this something is inserted into
a public commitment store accessible to both participants. Thus,
participants are able to share information. In [3], the protocol was
given an operational semantics in terms of a formal argumentation
system. In this semantics, an agent can only utter the locution as-
sert(p), for p a proposition, if that agent has an acceptable argument
for p in its own knowledge base, or in its knowledge base combined
with the public commitment stores. (Acceptable arguments
are those which survive attack from counter-arguments in a defined
manner.) The semantics provided, however, is not sufficient for
automated dialogues.
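A minimal sketch of how such a protocol might be realised in code is given below. The seven locutions and the public commitment store follow the description of [3] just given; the class names, the single shared store, and the stubbed acceptable-argument test are simplifying assumptions of ours rather than the authors' formalisation.

from dataclasses import dataclass, field
from typing import List, Set

LOCUTIONS = {"assert", "question", "challenge", "request", "promise", "accept", "refuse"}

@dataclass
class Utterance:
    speaker: str
    locution: str
    content: str          # a proposition, an argument, or an implication such as "p => q"

@dataclass
class NegotiationDialogue:
    commitment_store: Set[str] = field(default_factory=set)   # public, readable by both agents
    history: List[Utterance] = field(default_factory=list)

    def has_acceptable_argument(self, speaker: str, proposition: str) -> bool:
        # Stub for the argumentation-theoretic precondition on assert(p): an acceptable
        # argument for p from the speaker's knowledge base combined with the commitment
        # stores. Always true in this sketch.
        return True

    def utter(self, speaker: str, locution: str, content: str) -> None:
        if locution not in LOCUTIONS:
            raise ValueError(f"unknown locution: {locution}")
        if locution == "assert" and not self.has_acceptable_argument(speaker, content):
            raise ValueError("assert precondition not satisfied")
        self.history.append(Utterance(speaker, locution, content))
        if locution == "assert":
            self.commitment_store.add(content)   # assertions become public commitments

d = NegotiationDialogue()
d.utter("A", "request", "p")
d.utter("B", "promise", "p => q")    # B offers resource q in return for p
d.utter("A", "accept", "p => q")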
1. Stated Dialogue Purpose: The protocol is explicitly for negotiation
dialogues, but the syntax does not require the participants
to state the purpose(s) of the specific negotiation
dialogue undertaken.
2. Diversity of Individual Purposes: This is enabled.
3. Inclusiveness: There do not appear to be limitations on which
agents may participate.
4. Transparency: The protocol rules are transparent, and the
authors present the pre-conditions and post-conditions of each
locution.
5. Fairness: Locutions are only given for one participant (Pro-
ponent), with an implicit assumption that they are identical
for the other (Contestor).
6. Clarity of Argumentation Theory: The definitions of protocol
syntax and semantics assume an explicit theory of argumentation.
7. Separation of Syntax and Semantics: The syntax is defined
in terms of the argumentation theory semantics, but could be
readily defined separately.
8. Rule-Consistency: The rules appear to be consistent.
9. Encouragement of Resolution: The protocol does not appear
to discourage resolution of the negotiation.
10. Discouragement of Disruption: Disruption is not discour-
aged, as there are no rules preventing or minimizing this be-
haviour. For instance, there are no rules precluding the repeated
utterance of the same locution by an agent, although
there is such a condition in the argumentation semantics given
for the dialogue protocol. 6
11. Enablement of Self-Transformation: Self-transformation
is not enabled. Agents may add to their knowledge base from
the commitment stores of other participants, but there appears
to be no mechanism for their knowledge base to change
or to diminish. Because transformation is not enabled, there
are no retraction locutions.
6 The dialogue game DC also lacks such a rule [23].
12. System Simplicity: There do not appear to be extraneous
locutions.
13. Computational Simplicity: Unable to assess. The computational
complexity of the semantic argumentation mechanism
may be high.
We believe the key weakness of this protocol is the absence of
self-transformation capability. The protocol also makes several implicit
assumptions, which may limit its applicability. Firstly, the
protocol assumes the interaction is between agents with fixed (al-
though possibly different) knowledge bases and possibly divergent
interests. Secondly, the absence of rules precluding disruptive behaviour
and rules for termination conditions, of an explicit statement
of objectives and of formal entry and exit locutions suggest
an implicit assumption that the participants are rational and share
some higher goals. Thirdly, although the semantic argumentation
framework allows agents to hold internally preferences regarding
arguments, the dialogue protocol does not allow for these to be expressed
in the dialogue; nor are degrees of belief or acceptability in
propositions and arguments expressible. Allowing such expression
should increase the likelihood of successful resolution of a negotiation
dialogue. For example, this protocol does not permit the
making of tentative suggestions - propositions uttered for which
the speaker does not yet have an argument.
3.2 A persuasion dialogue
We next consider the protocol proposed by Dignum, Dunin-Ke-p-
licz and Verbrugge [8] for the creation of collective intention by
a team of agents. The protocol assumes that a team has already
been formed, and that one agent, an initiator or proponent, seeks
to persuade others (opponents) in the team to adopt a group belief
or intention. For this dialogue, the authors adapt the rigorous persuasion
dialogue-game of [35], which is a formalization of a rigorous
persuasion dialogue in philosophy. Such dialogues involve
two parties, one seeking to prove a proposition, and one seeking to
disprove it. 7 The protocol presented by Dignum et al. includes
seven locutions: statement, question, challenge, challenge-with-
statement, question-with-statement and final remarks; these last in-
clude: "quit" and "won". The statements associated with challenges
and questions may be concessions made by the speaker.
1. Stated Dialogue Purpose: The protocol is explicitly for a
persuasion dialogue when an initiating agent seeks to "estab-
lish a collective intention within a group" [8, p. 313]. The
syntax requires the initiator to state explicitly the intention it
desires the group to adopt.
2. Diversity of Individual Purposes: The protocol assumes a
conflict of objectives by the participants, but not agreement.
3. Inclusiveness: There do not appear to be limitations on which
agents may participate.
4. Transparency: The protocol rules are transparent to the participating
agents. However, they are not yet fully specified,
since the authors do not articulate the pre-conditions and
post-conditions of each utterance, or all the rules governing
their use.
5. Fairness: Following [35], the protocol rules are asymmet-
rical: the initiator has different rights and obligations from
opponents. However, these differences are known to the participants.
7 Note that the persuasion dialogues of [35] deal only with beliefs and not intentions.
6. Clarity of Argumentation Theory: The critical persuasion
dialogues for which the dialogue-game formalism [35] was
developed are idealizations of human dialogues, used by philosophers
to study fallacious reasoning. This underlying argumentation
theory is not stated explicitly in [8], nor is it
self-evidently appropriate for agent interactions.
7. Separation of Syntax and Semantics: The locutions and
syntax of the dialogue are not fully articulated. A partial
operational semantics is provided in terms of the beliefs and
intentions of the participating agents. To the extent that the
syntax and semantics are specified, they appear to be defined
separately.
8. Rule-Consistency: Unable to assess this, as the rules are not
fully articulated.
9. Encouragement of Resolution: The argumentation theory
underlying the protocol assumes the participants have contrary
objectives, which is not necessarily the case. By assuming
antagonism where there is none, the protocol may discourage
resolution.
10. Discouragement of Disruption: Disruption is not discour-
aged, as there are no rules preventing or minimizing this be-
haviour. For instance, there are no rules precluding the repeated
utterance of the same locution by an agent.
11. Enablement of Self-Transformation: Self-transformation
is enabled. However, because the syntax and semantics are
not fully articulated, it is not clear how this is achieved. 8 In
addition, this protocol does not permit degrees of belief or
acceptability to be expressed, nor does it permit retractions
of prior statements.
12. System Simplicity: There do not appear to be extraneous
locutions. However, following [35], participants may only
speak in alternating sequence, and the rules of the dialogue
are quite strict.
13. Computational Simplicity: Unable to assess. The computational
complexity of the semantic mechanism may be high,
as the authors concede.
This protocol is difficult to assess against the desiderata because
the locutions, rules of syntax and the semantics are not fully articu-
lated. In addition, the authors present no case for using the rigorous
persuasion dialogue game adapted from [35] in the agent domain.
This game embodies an explicit theory of argumentation which is
not necessarily appropriate for agent dialogues. In particular, the
theory assumes participants are engaged in a critical persuasion,
and thus have conflicting objectives (namely to prove or disprove a
proposition); consequently the rules and locutions are stricter than
most people would consider appropriate for an ordinary (human
or agent) persuasion dialogue. Participants must speak in alternating
sequence, for example. Moreover, the rigorous persuasion dialogue
is based on the dialogue games of [22], originally designed
as a constructive proof-theory for logical propositions. While such
games can be used to construct, step-by-step, an argument for a
proposed group intention, this would seem a singularly inefficient
means of persuasion. Allowing agents to express a complete argument
for a proposal in one utterance, as Amgoud et al. permit
8 An agent asked by an initiator to adopt an intention which conflicts with an existing
intention may challenge the initiator to provide a proof for the proposed intention, but
the authors do not indicate when and how that proof leads to a revision of the existing
intention [8, Section 4.2.4].
in the negotiation protocol assessed above, would seem far more
efficient.
3.3 An inquiry dialogue
We now consider a dialogue game protocol proposed by McBurney
and Parsons for inquiry dialogues in scientific domains [24].
This presents 30 locutions, which enable participants to propose,
assert, question, accept, contest, retract, and refine claims, the arguments
for them, the assumptions underlying and the rules of inference
used to derive these arguments, and the consequences of
claims. The protocol is based on a specific philosophy of science,
due to Feyerabend and Pera, which stresses the dialogical nature of
scientific knowledge-development. In addition to the protocol syn-
tax, a game-theoretic semantics is presented, linking arguments in
the dialogue after finite times with the long-run (infinite) position
of the dialogue. 9
1. Stated Dialogue Purpose: There is no stated dialogue purpose.
2. Diversity of Individual Purposes: This is enabled.
3. Inclusiveness: There do not appear to be limitations on which
agents may participate.
4. Transparency: The protocol rules are transparent, and the
authors present the pre-conditions and post-conditions of each
locution, along with rules governing the combination of locutions.
5. Fairness: The rules treat all participants equally. Utterance
of specific statements may incur obligations on the agent
concerned.
6. Clarity of Argumentation Theory: The protocol conforms
explicitly to a specified philosophy of scientific discourse,
uses Toulmin's well-known model of an individual argument
[34], and adheres to most of the principles of rational human
discourse proposed by Alexy and Hitchcock [1, 15]. The
conformance of the protocol is demonstrated formally.
7. Separation of Syntax and Semantics: These are defined
separately.
8. Rule-Consistency: The rules appear to be consistent.
9. Encouragement of Resolution: The protocol does not appear
to discourage resolution, but no rules for termination are
provided. Because this is a model for scientific dialogues, it
is assumed to be of possibly-infinite duration.
10. Discouragement of Disruption: The rules prohibit multiple
utterances of the same locution, but not other forms of
disruption.
11. Enablement of Self-Transformation: Self-transformation
is enabled. Assertions may be retracted and qualified. In
addition, degrees of belief in propositions and rules of inference
may be expressed. However, the mechanisms of self-
transformation internal to an agent are not presented.
12. System Simplicity: With 30 locutions, the protocol is not
simple.
9 The purpose of this semantics is to assess to what extent a finite snapshot of a debate
is representative of the long-run counterpart, in order to assess the likelihood of
dialogues conducted under the protocol finding the answer to the question at issue.
13. Computational Simplicity: Unable to assess.
The semantics provided for this protocol is not an operational
semantics, and no assumptions are made concerning the internal
architectures of the participating agents. Deciding what locutions
to utter in a dialogue under this protocol may be computationally
difficult for a participating agent, particularly if the number of participants
is large and the topics discussed diverse. The absence of
a stated dialogue purpose means that any topic may be discussed at
any time. This is a significant weakness of the protocol.
4. THE FIPA ACL
Finally, by way of comparison, we consider the Agent Communications
Language of FIPA, the Foundation for Intelligent Physical
Agents [11]. The FIPA ACL standard essentially defines a standard
format for labelled messages that agents may use to communicate
with one-another. The standard defines 22 distinct locutions,
and these have been provided with an operational semantics using
speech act theory [20]. The semantics of the language is defined using
pre- and post-condition rules, where these conditions define the
mental state of participants of communication - their beliefs and
intentions. This semantics links utterances in the dialogue to the
mental states of the participants, both preceding utterance of each
locution, and subsequent to it. All the locutions in the FIPA ACL
are ultimately defined in terms of inform and request primitives.
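For concreteness, the sketch below renders a FIPA-ACL-style message as a simple data structure together with a caricature of the sincerity precondition on inform. The optional fields shown are a subset of the message parameters as we recall them from the FIPA specification, and the belief-membership check is a drastic simplification of the modal-logic semantics of [20]; treat both as illustrative assumptions rather than the standard itself.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ACLMessage:
    performative: str                  # e.g. "inform", "request", "cfp", "not-understood"
    sender: str
    receiver: str
    content: str
    language: Optional[str] = None     # content language
    ontology: Optional[str] = None
    conversation_id: Optional[str] = None

def may_inform(sender_beliefs: set, proposition: str) -> bool:
    """Caricature of the sincerity precondition: the sender must believe the content."""
    return proposition in sender_beliefs

beliefs = {"dose_taken(morning)"}
msg = ACLMessage("inform", "advisor", "user", "dose_taken(morning)")
print(may_inform(beliefs, msg.content))    # True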
The FIPA ACL standard is a generic agent communication pro-
tocol, and is not based on a dialogue game. However, the various
dialogue-game protocols have all been proposed for agent interactions
- negotiation, persuasion, etc. - for which the FIPA ACL
could potentially also be used. It is therefore of interest to see how
the FIPA ACL compares to these protocols when assessed against
the desiderata above.
1. Stated Dialogue Purpose: The ACL is intended primarily
for purchase negotiations, and use of the locution cfp -
standing for Call For Proposal - can initiate a negotiation
dialogue with a stated purpose. There does not appear to be
means to state the purpose of other types of dialogue, e.g.,
information-seeking or persuasion dialogues.
2. Diversity of Individual Purposes: This is enabled.
3. Inclusiveness: There do not appear to be limitations on which
agents may participate.
4. Transparency: The ACL rules are transparent, and the definitions
present the pre-conditions and post-conditions of each
locution.
5. Fairness: The rules treat all participants equally.
6. Clarity of Argumentation Theory: There is no explicit underlying
argumentation theory for the FIPA ACL. Implicitly,
the argumentation model is an impoverished one. Participating
agents, for example, have only limited means to question
or contest information given to them by others, i.e., via the
not-understood locution. Moreover, the rules provide speakers
uttering such challenges with no rights to expect a defence
of prior assertions by those who uttered them.
7. Separation of Syntax and Semantics: The syntax is defined
in terms of the semantics, so the two are not separated. For
example, the pre-conditions of the inform locution include a
sincerity condition: the speaker must believe the argument of
this locution to be true before uttering the locution.
8. Rule-Consistency: The rules appear to be consistent.
9. Encouragement of Resolution: The FIPA ACL does not appear
to discourage resolution, but no rules for dialogue termination
are provided.
10. Discouragement of Disruption: There are no rules which
explicitly preclude disruptive behaviour, although the rationality
conditions incorporated in the semantics may limit such
behaviour.
11. Enablement of Self-Transformation: Self-transformation
is limited. The semantics imposes sincerity conditions on ut-
terances, and so agents may only assert what they sincerely
believe to be true. However, there are no locutions for retraction
of prior assertions, and no means to express degrees of
belief or to qualify assertions.
12. System Simplicity: The locutions include both substantive
locutions, e.g., accept-proposal, and procedural locutions,
e.g., propagate, which asks the recipient to forward the message
contents to others. The language would be simpler if
these were treated as different classes of locution.
13. Computational Simplicity: The FIPA ACL essentially just
defines a standard format for messages, and so it is hard to
assess in general the complexity of its use. However, because
agents must check the sincerity condition of inform locutions
before uttering these, they require an internal proof mecha-
nism; this will be in the first-order modal logic of the FIPA
ACL semantics [20], which is at best semi-decidable.
The implicit model of joint decision-making underlying the FIPA
ACL is a rational choice one. As explained in Section 2, this model
has no place for self-transformation in the course of the interac-
tion, and hence gives no value to argumentation activities such as
information-seeking, persuasion, inquiry or joint deliberation. The
rational-choice model may be appropriate for agent purchase negotiation
dialogues, although marketing theorists would argue differ-
ently, since their models of consumer behaviour typically assume
that consumer preferences may only be finalized during the purchase
decision process and not before, e.g., [21]. However, the
rational-choice model, as we argued earlier, is not at all appropriate
for other forms of agent dialogue, such as joint determination of
plans of action or joint inquiries after truth.
5. DISCUSSION
Designer's Checklist
The experience of developing the list of desiderata presented above
and applying them to the three dialogue game protocols has led us
to formulate a list of guidelines for designers of such protocols, as
follows:
G1 The protocol should embody a formal and explicit theory of
argument.
G2 The rules for the protocol should ensure that the reason(s) for
conducting the dialogue are stated within the dialogue at its
commencement.
G3 The protocol should include locutions which enable participants
to:
G3.1 formally enter a dialogue
G3.2 request information
G3.3 provide information
G3.4 request arguments and reasons for assertions
G3.5 provide arguments and reasons for assertions
G3.6 challenge statements and arguments
G3.7 defend statements and arguments
G3.8 retract previous assertions
G3.9 make tentative proposals
G3.10 express degrees of belief in statements
G3.11 express degrees of acceptability or preferences regarding
proposals
G3.12 formally withdraw from a dialogue.
G4 The protocol syntax should be defined in observable terms, so
that its conformance can be verified without reference to internal
states or mechanisms of the participants.
G5 The rules of the protocol should seek to preclude disruptive
behavior.
G6 The rules of the protocol should indicate circumstances under
which a dialogue terminates.
G7 The rules of the protocol should identify any difference in formal
roles and the rights and duties pertaining to these.
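One way to read guideline G3 is as a minimal locution set that any compliant protocol must be able to express. The sketch below enumerates one locution per G3 capability and checks a candidate protocol's coverage; the locution names and the coverage test are our own illustration, not part of the guidelines themselves.

from enum import Enum, auto

class Locution(Enum):
    """One locution per G3 capability (names are ours)."""
    ENTER_DIALOGUE = auto()           # G3.1
    REQUEST_INFO = auto()             # G3.2
    PROVIDE_INFO = auto()             # G3.3
    REQUEST_ARGUMENT = auto()         # G3.4
    PROVIDE_ARGUMENT = auto()         # G3.5
    CHALLENGE = auto()                # G3.6
    DEFEND = auto()                   # G3.7
    RETRACT = auto()                  # G3.8
    TENTATIVE_PROPOSAL = auto()       # G3.9
    EXPRESS_BELIEF_DEGREE = auto()    # G3.10
    EXPRESS_PREFERENCE = auto()       # G3.11
    WITHDRAW = auto()                 # G3.12

def missing_capabilities(protocol_locutions):
    """Return the G3 capabilities a candidate protocol does not yet cover."""
    return set(Locution) - set(protocol_locutions)

# Example: a protocol lacking retraction and belief-degree locutions.
partial_protocol = set(Locution) - {Locution.RETRACT, Locution.EXPRESS_BELIEF_DEGREE}
print(missing_capabilities(partial_protocol))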
These guidelines may be viewed as a checklist for designers and
users of agent interaction protocols involving argumentation. To
our knowledge they are the first such design guidelines proposed
for agent dialogue game protocols.
Conclusions
In this paper we have presented the first list of criteria by which to
assess a dialogue-game protocol for agent interactions. These thirteen
desiderata were developed after consideration of: economic
and computational criteria recently proposed for the assessment of
automated auction and negotiation mechanisms; theories of social
and public decision-making in argumentation theory and political
philosophy; and recent research in designing and studying agent
communications languages and interaction protocols.
We have applied these thirteen desiderata to three dialogue game
protocols from the literature, for agent dialogues involving nego-
tiation, persuasion and mutual inquiry, respectively. Interestingly,
each protocol was found to be weak in at least one important as-
pect. The negotiation protocol did not permit agents to express
changes in their beliefs or preferences in the course of the dialogue
(what political theorists call self-transformation), while the persuasion
dialogue implicitly drew on a theory of argumentation we believe
to be inappropriate for the agent domain. The inquiry dialogue
was the only one to make explicit the argumentation theories upon
which it is based, but it permitted discussion to range over any and
all topics simultaneously, thereby limiting its practical usefulness.
We also considered the Agent Communications Language (ACL)
of FIPA, for comparison purposes. The key weaknesses of FIPA
ACL, relative to these desiderata, were found to be its limited support
for formal argumentation and for self-transformation by par-
ticipants. These findings were not surprising, since its designers
did not seek to embody a theory of argumentation, and because it
appears to express a rational-choice (or marketplace) view of agent
society, rather than a deliberative democracy view. In our opin-
ion, these weaknesses preclude the use of FIPA ACL beyond the
purchase negotiation dialogues it was designed for.
In future work, we aim to formalize these desiderata and thus
be in a position to prove formally the properties of dialogue game
protocols, such as those assessed in Section 3.
6. REFERENCES
--R
A theory of practical discourse.
Modelling dialogues using argumentation.
A method for the computational modelling of dialectical argument with dialogue games.
The limits of the dialogue model of argument.
Deliberative Democracy: Essays on Reason and Politics.
Communication and Fallacies: A Pragma-Dialectical Perspective
Environmental risk and democratic process: a critical review.
The Deliberative Practitioner: Encouraging Participatory Planning Processes.
Denotational semantics for agent communication languages.
Some principles of rational mutual inquiry.
A framework for deliberation dialogues.
Dialogue Models for Inquiry and Transaction.
The problem of retraction in critical discussion.
Strategic Negotiation in Multiagent Environments.
Agent communication languages: The current landscape.
Marketing Models.
Dialogische Logik.
Representing epistemic uncertainty by means of dialectical argumentation.
Games that agents play: A formal framework for dialogues between autonomous agents.
A geometric semantics for dialogue-game protocols for autonomous agent interactions
A dialogue-game protocol for agent purchase negotiations
The argumentation theorist in deliberative democracy.
Rules of Encounter: Designing Conventions for Automated Negotiation among Computers.
Logic agents
Distributed rational decision making.
A framework for argumentation-based negotiation
A social semantics for agent communications languages.
The Uses of Argument.
Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning.
Semantic issues in the verification of agent communication languages.
Iyad Rahwan , Sarvapali D. Ramchurn , Nicholas R. Jennings , Peter Mcburney , Simon Parsons , Liz Sonenberg, Argumentation-based negotiation, The Knowledge Engineering Review, v.18 n.4, p.343-375, December | interaction protocols;FIPA;argumentation;dialogue games;agent communication languages |
544923 | A problem solving model for collaborative agents. | This paper describes a model of problem solving for use in collaborative agents. It is intended as a practical model for use in implemented systems, rather than a study of the theoretical underpinnings of collaborative action. The model is based on our experience in building a series of interactive systems in different domains, including route planning, emergency management, and medical advising. It is currently being used in an implemented, end-to- end spoken dialogue system in which the system assists a person in managing their medications. While we are primarily focussed on human-machine collaboration, we believe that the model will equally well apply to interactions between sophisticated software agents that need to coordinate their activities. | INTRODUCTION
One of the most general models for interaction between
humans and autonomous agents is based on natural human-human
dialogue. For humans, this is an interface that requires
no learning, and provides maximum flexibility and
generality. To build such an interface on the autonomous
agent side, however, is a formidable undertaking. We have
been building prototypes of such systems for many years,
focusing on limited problem solving tasks. Our approach
involves constructing a dialogue system that serves as the
interface between the human and the back-end agents. The
goal is to insulate the human from the complexities of managing
and understanding agent-based systems, while insulating
the back-end agents from having to understanding
natural language dialogue. To be eective in a range of
situations, the dialogue agent must support contextually-
dependent interpretation of language and be able to map
linguistically specied goals into concrete tasking of back-end
agents.
We believe that a key for enabling such interaction models
is the development of a rich model of collaborative problem
solving. This model is needed for two distinct purposes:
(1) to enable contextual interpretation of language (i.e., intention
recognition); and (2) to provide a rich protocol for
communication between the autonomous agents that comprise
the dialogue system. Thus the dialogue system appears
to the human as an intelligent collaborative assistant agent,
and is itself comprised of autonomous agents.
While work has been done on general theoretical frameworks
for collaborative interaction [8, 3, 11], these proposals
have generally not specified the details of what such models
would look like. We believe that our model is compatible
with the SharedPlans formalism [8, 9, 11]. In fact, one
way of looking at our model is as an elaboration of some of
the key operators (such as Elaborate Group, or Lochbaum's
communicate recipe) in the SharedPlans framework. In our
own previous work [6, 1], we have described the beginnings
of practical models but these have not been very precisely
specied or complete. In this paper, we sketch a comprehensive
model that provides a detailed analysis of a wide
range of collaborative problem solving situations that can
arise. This model is based on our experience in building
collaborative problem solving agents in a range of dierent
domains. In particular, collaborative agents (both human
and autonomous) need to have the capability to:
1. Discuss and negotiate goals;
2. Discuss options and decide on courses of action, including
assigning different parts of a task to different
agents;
3. Discuss limitations and problems with the current course
of action, and negotiate modications;
4. Assess the current situation and explore possible future
eventualities;
5. Discuss and determine resource allocation;
6. Discuss and negotiate initiative in the interactions;
7. Perform parts of the task, and report to others to update
shared knowledge of the situation.
Figure 1: Collaborative problem solving model. (Levels shown: Interaction - Initiate, Complete, Reject, Continue; Communicative Acts - Suggest, Ask, Inform; Collaborative Problem Solving - c-Objective, c-Action, c-Resource, c-Situation, with c-Adopt, c-Defer, c-Evaluate, c-Identify; Problem Solving - Objective, Action, Resource, Situation, with Adopt, Evaluate; Internal State; and task/domain specializations such as an evacuation planner, medication advisor, and kitchen designer.)
Although our focus is on language-based interaction, it is our
belief that these capabilities are required in any sufficiently
complex (realistic, flexible) agent-based system.
2. OVERVIEW OF THE MODEL
Our model of collaborative problem solving is shown in
Figure 1. At the heart of the model is the problem solving
level, which describes how a single agent solves problems.
For example, an agent might adopt an obligation, or might
evaluate the likelihood that a certain action will achieve that
objective. This level is based on a fairly standard model of
agent behavior, that we will describe in more detail shortly. 1
The problem solving level is specialized to a particular
task and domain by a task model. The types of domains we
have explored include designing a kitchen, providing medical
advice, assessing damage from a natural disaster, planning
emergency services, and so on. The task model describes
how to perform these tasks, such as what possible objectives
are, how objectives are (or might be) related, what resources
are available, and how to perform specific problem solving
actions such as evaluating a course of action.
For an isolated autonomous agent, these two levels suffice
to describe its behavior, including the planning and execution
of task-level actions. For collaborative activity, however, we need more.
The collaborative problem solving level builds on
the single-agent problem solving level. The collaborative
problem solving actions parallel the single-agent ones, except
that they are joint actions involving jointly understood
objects. For example, the agents can jointly adopt an intention
(making it a joint intention), or they can jointly identify
a relevant resource, and so on.
1 Underlying the problem solving level is the representation
of the agent's internal state, for example its current beliefs
and intentions. The details of how these are represented are
not important for understanding the collaborative problem
solving model, however.
Finally, an agent cannot simply perform a collaborative
action by itself. The interaction level consists of actions
performed by individuals in order to perform their part of
collaborative problem solving acts. Thus, for example, one
agent may initiate a collaborative act to adopt a joint intention,
and another may complete the collaborative act by
agreeing to adopt the intention.
This paper proceeds as follows. First, we describe the
collaborative problem solving model in more detail, starting
with a review of some underlying concepts, moving on to
the single-agent problem solving level, and nally describing
the collaborative problem solving and interaction levels.
The emphasis is on the information maintained at each level
and its use during collaborative problem solving. We then
present a detailed example of the model in action, drawn
from a medical advisor domain that we are using for our
prototype implementation. We conclude with a few comments
about open issues and future work.
3. BASIC CONCEPTS
All of the levels in our model involve a core set of concepts
related to planning and acting. Many of these concepts have
been used in the planning literature for years, and we only
informally describe them in this section. The application of
the concepts to modeling collaborative interaction is what
is important for present purposes.
3.1 Situations
We start with a fairly standard notion of situation as in
the situation calculus [12] - a situation is a snapshot of the
world at a particular point in time (or hypothetical point
in time when planning into the future). While situations
are a complete state of the world at a certain time, our
knowledge of a situation is necessarily incomplete except in
the most simple cases (like traditional blocks world planning
systems). Also note that a situation might include an agent's
beliefs about the past and the future, and so might entail
knowledge about the world far beyond what is immediately
true.
3.2 Atomic Actions
Also as in the situation calculus, actions are formalized
as functions from one situation to another. Thus, performing
an action in one situation produces a new situation. Of
course, generally we do not know the actual situation we
are in, so typically knowledge about actions is characterized
by statements that if some precondition of an action
is true in some situation, then some effect of it will be true
in the situation resulting from the action. Note that unlike
the standard situation calculus, however, we take actions to
be extended in time and allow complex simultaneous and
overlapping actions.
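A small Python sketch may help fix these ideas. Here a situation is represented, purely for illustration, as a set of true facts, and an action carries a precondition and an effect function; this representation, the names, and the medication example (echoing the medication advisor domain mentioned in the abstract) are our assumptions, not commitments of the model.

from dataclasses import dataclass
from typing import Callable, FrozenSet

Situation = FrozenSet[str]            # illustrative: a situation as a set of facts

@dataclass
class Action:
    """An action maps one situation to another, guarded by a precondition."""
    name: str
    precondition: Callable[[Situation], bool]
    effect: Callable[[Situation], Situation]

    def perform(self, s: Situation) -> Situation:
        if not self.precondition(s):
            raise ValueError(f"precondition of {self.name} does not hold")
        return self.effect(s)

# Example: taking a pill consumes the pill and makes the dose taken.
take_pill = Action(
    name="take-pill",
    precondition=lambda s: "have-pill" in s,
    effect=lambda s: frozenset((s - {"have-pill"}) | {"dose-taken"}),
)

s0: Situation = frozenset({"have-pill"})
s1 = take_pill.perform(s0)
print(s1)                             # frozenset({'dose-taken'})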
3.3 Recipes
A specification of system behavior is often called a plan,
or recipe [13]. We will use the term "recipe" here as the
notion of plan has been overused and so is ambiguous. A
very simple form of recipe is a fixed sequence of actions
to perform, much like those built by traditional planning
systems. The recipes found in cookbooks often aspire to this
level of simplicity but typically are not as straightforward.
More generally, recipes capture complex learned behavior
and guide an agent towards a goal through a wide range
of possible conditions and ranges of possible results from
previous action.
For our work, we do not care about the specific form of
what a recipe is, or insist that different agents have the same
recipes. Rather, a recipe is a way of deciding what to do
next. More formally, a recipe is a function from situations
to actions, where the action is the next thing to do according
to the recipe.
Note that we need some special "actions" to make this
work. First, we must allow the action of doing nothing or
waiting for some period of time, as this might be the best
thing to do for some recipes. We also need to allow the
possibility that a recipe may not specify what to do next
in certain situations. To formalize this, we need to make
the recipe function a partial function, or introduce a special
"failure" value. Finally, we need to allow actions to be planning
actions - i.e., it may be that the best thing to do is to
set a subgoal and do some more planning before any further
physical action.
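The following sketch renders a recipe as a partial function from situations to next actions, with a special wait action and a None result standing in for the failure case. The fact-set representation of situations and the example recipe are illustrative assumptions of ours.

from typing import Callable, FrozenSet, Optional

Situation = FrozenSet[str]            # as in the previous sketch: a set of facts
Action = str                          # actions are simply named here to keep the sketch short

WAIT: Action = "wait"                 # the special "do nothing for now" action

# A recipe maps the current situation to the next thing to do;
# returning None plays the role of the special "failure" value.
Recipe = Callable[[Situation], Optional[Action]]

def take_medication_recipe(s: Situation) -> Optional[Action]:
    if "dose-taken" in s:
        return WAIT                   # objective already achieved; nothing left to do
    if "have-pill" in s:
        return "take-pill"            # next atomic action
    if "pharmacy-open" in s:
        return "get-pill"             # could equally be a planning sub-objective
    return None                       # the recipe is partial: no advice in this situation

print(take_medication_recipe(frozenset({"have-pill"})))     # take-pill
print(take_medication_recipe(frozenset()))                  # None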
3.4 Objectives
Our notion of objective is similar to some uses of the
term \goal." But the term goal is used is dierent ways in
the literature: goals are sometimes the intentions driving
an agent's behavior, at other times they are the input to a
planning process, and sometimes they are simply the main
effects of a recipe. Goals are sometimes considered to be
states of the world to attain (e.g., the goal is a situation
where block A is on block B), or sometimes an action that
must be performed (e.g., the goal is to open the door).
We will try to avoid all this ambiguity by not using the
word goal any further. An objective is an intention that
is driving our current behavior. Objectives are expressed in
the form of abstract actions, such as winning the lottery, or
getting block A onto block B. Objectives are not just any
actions. They are actions that are defined in terms of their
effects, and cannot be executed directly. To accomplish ob-
jectives, we need to choose or build a recipe that, if followed,
leads to a state in which effects of the objective hold.
3.5 Resources
The final key concept in the abstract model is that of a
resource. A resource is an object that is used during the
execution of a recipe. Resources might be consumable (i.e.,
cease to exist in their prior form) as a result of the recipe
(e.g., as ingredients are consumed when making a cake), or
might be reusable (e.g., as a hammer is used to drive in
a nail). In a traditional planning model, resources are the
objects that are used to bind the variables in the plan, and
many applications of planning are essentially resource
allocation problems.
4. PROBLEM SOLVING
Once we have the concepts defined in the last section, we
can now give a quick overview of our model of a single
agent's problem solving behavior. Just as task-level actions
affect the state of the world, problem-solving actions affect
the cognitive state of the agent, which we represent (for purposes
of this paper) as the problem solving (PS) state.
The problem solving state consists of the agent's commitments
towards objectives, the recipes for achieving those
objectives, the resources used in those recipes, and so on.
Figure 2: Life cycle of an intention (states: Adopted, Active, Abandoned, Completed; transitions: Adopt, Select, Abandon, Release).
The PS state must contain at least the following information:
1. The current situation: what the agent believes (or as-
sumes) to be true as a basis for acting, including what
resources are available;
2. Intended objectives: a forest of objectives that the
agent has adopted, although not all are necessarily
motivating its current action. Each tree in the forest
captures a subobjective hierarchy for a particular
root objective;
3. Active objective(s): the intended objective(s) that is
(are) currently motivating the agent's action; An objective
tree that includes an active objective is an active
objective tree;
4. Intended recipes: the recipes and resources that the
agent has chosen for each of the intended objectives;
5. Recipe library: a set of recipes indexed by objective
and situation types. The library need not be static and
may be expanded by planning, learning, adaptation,
etc.
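The five components above translate naturally into a data structure. The sketch below is one possible rendering, with field names and types of our own choosing: sub-objective trees are encoded via parent links, the recipe library is indexed simply by objective name, and the available resources are folded into the current situation.

from dataclasses import dataclass, field
from typing import Callable, Dict, FrozenSet, List, Optional

Situation = FrozenSet[str]
Recipe = Callable[[Situation], Optional[str]]      # as in the earlier recipe sketch

@dataclass
class Objective:
    """An objective node; parent links encode the sub-objective trees of the forest."""
    name: str
    parent: Optional["Objective"] = None
    active: bool = False              # is this objective currently motivating action?

@dataclass
class PSState:
    """One agent's problem solving state, mirroring the five components listed above."""
    current_situation: Situation = frozenset()
    intended_objectives: List[Objective] = field(default_factory=list)      # the forest
    intended_recipes: Dict[str, Recipe] = field(default_factory=dict)       # objective name -> chosen recipe
    recipe_library: Dict[str, List[Recipe]] = field(default_factory=dict)   # objective type -> known recipes

    def active_objectives(self) -> List[Objective]:
        return [o for o in self.intended_objectives if o.active]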
An agent's problem solving activity involves exploring the
current situation, adopting objectives, deciding on courses of
action and resources to use, performing actions, and using
the results of actions to further modify its objectives and
future actions. The problem solving actions (see Figure 1)
are divided into two classes, one concerned with intention
and commitment and one concerned with knowledge and
reasoning.
4.1 PS Acts Relating to Commitment
In our model of agent behavior (similar to [7, 15]), an
agent is driven by its intentions, in the form of objectives,
recipes, and resource uses to which it is committed. Intentions
move through a life cycle shown in Figure 2.
In order to act, an agent must form intentions by means
of a commitment act that we call Adopt. For example, an
agent might adopt a particular recipe for a certain objec-
tive, say to plan a trip by using a recipe to call a travel
agent. If they change their mind, they may drop the commitment
using an act we call Abandon. For instance, the
agent may change their mind, abandon the recipe to call the
travel agent and adopt a recipe to book a ticket on the web.
Similarly, an agent may adopt or abandon objectives, and
adopt and abandon commitments to use certain resources.
An agent may have several different objectives that it is
committed to, and even with respect to one objective, there
may be several sub-objectives that could be chosen to drive
the agent's action. The action of choosing the objective(s)
to motivate the next behavior is called Select. Once an
objective is selected, the agent may perform reasoning to
elaborate on its associated recipe, or to evaluate an action
that that recipe suggests, and may eventually select an action
to perform. If an agent's priorities change, it may Defer
the objective or action, leaving it as an intention to be
addressed later. Finally, when an agent believes that an
objective has been achieved, it may Release the objective,
thereby removing it from its set of intentions.
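The life cycle of Figure 2, together with the Defer act just described, can be written down as a small transition table. The exact set of legal transitions shown below (for example, that Release applies to an active intention) is our reading of the figure rather than something the model states explicitly.

from enum import Enum, auto

class IntentionState(Enum):
    ADOPTED = auto()       # committed to, but not currently driving behaviour
    ACTIVE = auto()        # selected to motivate the agent's current action
    ABANDONED = auto()     # commitment dropped
    COMPLETED = auto()     # believed achieved and released

# Legal transitions, keyed by (problem solving act, current state).
TRANSITIONS = {
    ("adopt",   None):                   IntentionState.ADOPTED,
    ("select",  IntentionState.ADOPTED): IntentionState.ACTIVE,
    ("defer",   IntentionState.ACTIVE):  IntentionState.ADOPTED,
    ("abandon", IntentionState.ADOPTED): IntentionState.ABANDONED,
    ("abandon", IntentionState.ACTIVE):  IntentionState.ABANDONED,
    ("release", IntentionState.ACTIVE):  IntentionState.COMPLETED,
}

def apply_act(act, state):
    """Return the successor state, or raise if the act is illegal in this state."""
    try:
        return TRANSITIONS[(act, state)]
    except KeyError:
        raise ValueError(f"cannot {act} an intention in state {state}")

s = apply_act("adopt", None)       # ADOPTED
s = apply_act("select", s)         # ACTIVE
s = apply_act("release", s)        # COMPLETED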
4.2 PS Acts Relating to Reasoning
Before committing, an agent will typically perform some
reasoning. One of the key operations is to determine what
options are available, which we call Identify. For exam-
ple, an agent may identify a possible recipe for achieving
some objective, or identify certain resources that are avail-
able. They may even identify possible goals to pursue and
consider them before making any commitment. Once an option
is identified, the agent may Evaluate it relative to its
purpose. For instance, it might evaluate a recipe to see how
well it might accomplish its associated objective, or evaluate
an objective to see if it is worthwhile, or evaluate a resource
or action to see how well it serves a particular recipe. In ad-
dition, an agent may choose to Modify a certain objective,
recipe or resource to produce another that then could be
evaluated.
In addition to reasoning about possible goals and actions,
an agent may also reason about its current situation. Situations
may be identified by exploring them further, and may
be evaluated to see how desirable the current (or expected)
situation is and whether it should plan to change it. Agents
that act and do little planning would only care about the
current situation they are in, and all activity would be tied
to that situation. More complex agents, however, could do
planning in hypothetical situations, or want to act based on
certain assumptions.
4.3 Problem Solving Behavior
With these elements of the problem solving model in place,
we can describe how an agent solves problems. It is convenient
to present this activity as occurring in a series of
phases. In practice, an agent may short circuit phases, or
return to prior phases to reconsider their commitments at
any time.
1. Determining the Objective: An agent may at any time
reconsider the objectives it has, adopt new ones, abandon
old ones, and otherwise modify and adjust them.
Of course effective agents will not spend too much time
reconsidering and evaluating their objectives, but will
spend their effort in pursuing an objective. To do this,
they must first select one or more objectives to pursue.
These are the active objectives.
2. Determining the Recipe: Given an active objective, an
agent must then determine a recipe to follow that may
achieve the objective. It may be that a recipe has already
been used for some time to determine the agent's
actions in pursuing the objective, and the agent may
simply invoke the recipe once again in the current situation
to determine what to do next. But the agent
might also consider switching to another recipe, refining
an existing recipe, or actually building a new recipe
for this objective. In these latter cases, the next action
the agent does is a planning action that results in
a modied (or new) recipe for the objective.
3. Using the Selected Recipe: Given a selected recipe, the
agent can then identify the next action to perform.
If the recipe returns a sub-objective, then the agent
needs to restart the process of evaluating objectives
and choosing or constructing recipes. If the recipe indicates
an atomic action, the agent can evaluate the
desirability of the proposed action, and if it seems rea-
sonable, perform the action. At that point, the situation
has changed and the process starts again.
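A skeletal rendering of one pass through these three phases is sketched below. The helper functions passed in stand for the selection, recipe-choice, evaluation, and execution strategies that are deliberately left open here, and the handling of sub-objectives (re-entering phases 1 and 2) is omitted; the names and calling convention are our own.

def problem_solving_step(ps_state, select_objective, choose_recipe, evaluate, execute):
    """One pass through the three phases; intended to be called repeatedly."""
    objective = select_objective(ps_state)          # Phase 1: determine the active objective
    if objective is None:
        return ps_state                             # nothing worth pursuing right now

    recipe = choose_recipe(ps_state, objective)     # Phase 2: pick, refine, or build a recipe

    action = recipe(ps_state.current_situation)     # Phase 3: ask the recipe what to do next
    if action is None:
        return ps_state                             # recipe gives no guidance; reconsider later
    if evaluate(ps_state, action):                  # optional deliberation before acting
        ps_state.current_situation = execute(ps_state, action)
    return ps_state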
To implement such a problem solving agent, we would
need to specify strategies for when objectives, recipes and
proposed actions are evaluated and reconsidered, versus how
often the current objective, recipe or proposed action is
just taken without consideration. Agents that performed
more evaluation and deliberation would be more careful and
might be able to react better to changing situations, whereas
agents that do less evaluation would probably be more responsive
but also more brittle. The specics of these strategies
are not the focus of this paper.
5. COLLABORATIVE PROBLEM SOLVING
We now turn to the central issue of collaborative problem
solving. When two agents collaborate to achieve goals,
they must coordinate their individual actions. To mirror the
development at the problem solving level, the collaborative
problem solving level (see Figure 1) operates on the collaborative
problem solving (CPS) state, which captures the
joint objectives, the recipes jointly chosen to achieve those
objectives, the resources jointly chosen for the recipes, and
so on.
The collaborative problem solving model must serve two
critical purposes. First it must provide the structure that
enables and drives the interactions between the agents as
they decide on joint objectives, actions and behavior. In so
doing, it provides the framework for intention recognition,
and it provides the constraints that force agents to interact
in ways that maintain the collaborative problem solving
state. Second, it must provide the connection between the
joint intentions and the individual actions that an agent performs
as part of the joint plan, while still allowing an agent
to have other individual objectives of its own.
While we talk of shared objectives, intended actions and
resources, we do not want to require that agents have the
same library of recipes to choose from. This seems too strong
a constraint to place on autonomous agents. We assume
only that the agents mutually agree on the meaning of expressions
that describe goals and actions. For example, they
might both understand what the action of taking a trip en-
tails. The specific recipes each has to accomplish this action,
however, may be quite different. Their recipes may accomplish
subgoals in different orders, for instance (one may book
a hotel first, then get an air ticket, while the other might reverse
the order). They might break the task down into different
subgoals (e.g., one may call a travel agent and book flight
and hotel simultaneously, while the other might book flights
with an agent and find hotels on the web). And for any
subgoal, they might pick different actions (e.g., one might
choose a flight that minimizes cost, whereas the other might
minimize travel time). To collaborate, the agents must agree
to some level of detail on a new abstract joint recipe that
both can live with. The joint recipe need be refined no further
in places where the two agents agree that one agent is
responsible for achieving a sub-objective.
Establishing part of the collaborative problem solving state
requires an agreement between the agents. One agent will
propose an objective, recipe, or resource, and the other can
accept, reject, or produce a counterproposal or request further
information. This is the level that captures the agent
interactions. To communicate, the agent receiving a message
must be able to identify what CPS act was intended,
and then generates responses that are appropriate to that
intention. In agent-communication languages between pro-
grams, the collaborative act would be explicit. In human-agent
communication based on natural language, a complex
intention recognition process may be required to map the
interaction to the intended CPS act. This will be described
in further detail in the Interaction section below, after the
abstract collaborative model is described.
5.1 Collaborative Problem Solving Acts
As a first cut, the collaborative problem solving level looks
just like the PS level, except that all acts are joint between
the collaborating agents. We will name these CPS acts using
a convention that just applies a prex of \c-". Thus
the c-adopt-objective act is the action of the agents jointly
adopting a joint objective.
While we can model an individual agent adopting an individual
objective as a primitive act in our model at the PS
level, there is no corresponding primitive act for two agents
jointly adopting a goal. This would require some sort of
mind synchronization that it not possible. We agree with
researchers such as Grosz and Sidner [8] and Cohen and
Levesque [3] in that joint actions must be composed out of
individual actions. There remains a meaningful level of analysis
that corresponds to the PS level model if we view the
CPS acts as complex acts, i.e., objectives, that the agents
recognize and use to coordinate their individual actions. The
constraints on rational behavior that an agent uses at the
PS level have their correlates at the collaborative PS level,
and these inform the intention recognition and planning behavior
of the agents as they coordinate their activities. For
instance, a rational individual agent would not form an objective
to accomplish some state if it believed that the state
currently holds (or will hold in the future at the desired
time). Likewise, collaborating individual agents would not
form a collaborative objective to achieve a state that they
jointly believe will hold at the (jointly) desired time. The
analysis of the behavior at this abstract level provides a simple
and intuitive set of constraints on behavior that would
be hard to express at the interaction action level.
5.2 The Interaction Level
The interaction level provides the connection between the
communicative acts (i.e., speech acts) that the agents per-
form, such as requesting, informing, warning, and promis-
ing, and the collaborative problem solving acts they jointly
perform. In other words, it deals with the individual actions
that agents perform in order to engage in collaborative
problem solving. All the acts at this level take the
form of some operator applying to some CPS act. For in-
stance, an agent can Initiate a collaborative act by making
a proposal and the other agent can Complete the act
(by accepting it) or Reject it (in which case the CPS act
fails because of lack of "buy in" by the other agent). In more complex interactions, an agent may Continue a CPS act by performing clarification requests, elaborations, modifications or counter-proposals. The interaction-level acts
we propose here are similar to Traum's [17] grounding act
model, which is not surprising as grounding is also a form
of collaborative action.
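As an illustration of this structure (a minimal sketch in Python; the class and field names are ours, not those of any implementation described here), interaction acts can be thought of as operators applied to CPS acts:

  # Illustrative sketch: interaction acts are operators applied to CPS acts.
  # All names here are hypothetical, not taken from the TRIPS implementation.
  from dataclasses import dataclass

  @dataclass
  class CPSAct:
      name: str       # e.g. "c-adopt-objective", "c-evaluate-action"
      content: str    # domain content, e.g. "take(aspirin)"

  @dataclass
  class InteractionAct:
      operator: str   # one of "initiate", "continue", "complete", "reject"
      cps_act: CPSAct
      agent: str      # the agent performing the interaction act

  # Example: the user initiates a collaborative evaluation of taking an aspirin.
  act = InteractionAct("initiate", CPSAct("c-evaluate-action", "take(aspirin)"), "user")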
From a single agent's perspective, when it is performing
an interaction act (say, initiating adoption of a joint objec-
tive), it must plan some communicative act (say, suggesting
to the other agent that it be done) and then perform (or re-
alize) it. On the other side of the coin, when one agent performs
a communicative act, the other agent must recognize
what interaction act was intended by the performer. Identifying
the intended interaction acts is a critical part of the intention
recognition process, and is essential if the agents are
to maintain the collaborative problem-solving state. For in-
stance, consider a kitchen design domain in which two agents collaborate to design and build a kitchen. The utterance
\Can we put a gas stove beside the refrigerator" could be
said in order to (1) ask a general question about acceptable
practice in kitchen design; (2) propose adding a stove to the
current design; or (3) propose modifying the current design
(say by using a gas stove rather than an electric one). Each
of these interpretations requires a very different response from the hearer and, more importantly, results in a different situation for interpreting all subsequent utterances. Each one of these interpretations corresponds to a different collaborative
problem solving act. If we can identify the correct
act, we then have a chance of responding appropriately and
maintaining the correct context for subsequent utterances.
We should note that the interaction level is not just required
for natural language interaction. In other modali-
ties, the same processes must occur (for example, the user
initiates a joint action by clicking a button, and the system
completes it by computing and displaying a value). In
standard agent communication languages, these interaction
level actions are generally explicit in the messages exchanged
between agents, thereby eliminating the need to recognize
them (although not the need to understand and perform
them oneself).
5.3 Examples
To put this all together, consider some typical but constructed
examples of interactions. These examples are motivated
by interactions we have observed in a medical advisor
domain in which the system acts to help a person manage
their medications. These examples are meant to fit together
to form a constructed dialogue that illustrates a number of
points about the CPS level analysis.
The simplest collaborative acts consist of an initiate-complete
pair. For example, here is a simple c-identify of a situation:
U: Where are my pills? (1)
S: In the kitchen (2)
Utterance (1) is a Wh-question that initiates the c-identify-
situation act, and utterance (2) answers the question and
completes the CPS act. 2 When utterance (2) is done, the
two agents will have jointly performed the c-identify-situation
action.
Utterances may introduce multiple collaborative acts at
one time, and these may be completed by different acts. For
instance:
S: It's time to take an aspirin (3)
U: Okay (4)
U: [Takes the aspirin] (5)
Utterance (3) is a suggestion that U take an aspirin, which
initiates both a c-adopt-objective (to intend to take medication
currently due) and a c-select-action (to take an aspirin).
Utterance (4) completes the c-adopt action and establishes
the joint objective. Action (5) completes the c-select action
by means of U performing the PS-level act select on the
action, resulting in the action being performed.
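Purely as an illustration (the tuple representation below is hypothetical, not the system's internal format), the analysis of utterances (3)-(5) can be recorded as interaction acts over CPS acts:

  # Hypothetical annotation of utterances (3)-(5); names are illustrative only.
  dialogue_analysis = [
      # S: "It's time to take an aspirin" initiates two CPS acts at once.
      ("S", "initiate", "c-adopt-objective", "take medication currently due"),
      ("S", "initiate", "c-select-action", "take an aspirin"),
      # U: "Okay" completes the adoption of the joint objective.
      ("U", "complete", "c-adopt-objective", "take medication currently due"),
      # U takes the aspirin, completing the c-select-action.
      ("U", "complete", "c-select-action", "take an aspirin"),
  ]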
Many more complex interactions are possible as well. For
instance:
U: What should we do now? (6)
S: Let's plan your medication for the day (7)
U: Okay (8)
Utterance (6) is a question that initiates a c-adopt-objective,
utterance (7) continues this act by answering the question
with a suggestion, and utterance (8) completes the act (thus
establishing the joint objective). Note that the objective
agreed upon is itself a collaborative problem solving act: they have established a joint objective to perform a c-adopt action for some as yet unspecified recipe. This could then
lead to pursuing a sub-objective such as creating a recipe as
in the following interaction:
S: You could take your celebrex at noon. (9)
U: Will that interfere with my lunch date (10)
S: No. (11)
U: OK. I'll do that (12)
Utterance (9) is a suggestion that initiates a c-identify-
recipe and continues the previously established c-adopt-objective
action. Utterance (10) completes the c-identify of the recipe
(by grounding the suggestion), continues the c-adopt action,
and initiates a c-evaluate of the recipe by exploring a possible
problem with the suggested action. Utterance (11)
completes the c-evaluate act by answering the question, and
utterance (12) then completes the c-adopt act by agreeing
to the recipe initially suggested in (9).
6. EXTENDED EXAMPLE
To better illustrate the complexity of even fairly simple
collaborative problem solving, the following is an example
of a session with a prototype Medication Advisor system
under development at Rochester [5]. The Medication Advisor
is designed to help people manage their prescription
medication regimes, a serious real-world problem that has a significant impact on people's health.
2 Note that we are ignoring grounding issues in this paper.
In a dialogue system, the CPS act is not actually completed
until the answer to the question is grounded by U, say by an utterance such as "OK" or "thanks".
Figure 3: TRIPS collaborative system architecture (from [1]). The figure shows the Parser, Interpretation Manager, Task Manager, Behavioral Agent (with Planner, Scheduler, Monitors and Events), Generation Manager, Response Planner and Discourse Context, together with speech and graphics input/output, task- and domain-specific knowledge sources, and exogenous event sources.
To follow the problem solving, we need to understand
something of the architecture of the system. The Medication
Advisor is an application of the TRIPS spoken dialogue
system [6], whose architecture is shown in Figure 3. As described
in [1], the main components of the system as regards
problem solving are as follows:
The Interpretation Manager (IM), which maintains the
discourse context and recognizes user intention from
their utterances;
The Behavioral Agent (BA), which manages system problem solving obligations and drives system behavior; The Task Manager (TM), which maintains the collaborative problem solving state and supports both interpretation and execution with task- and domain-specific
knowledge; and
The Generation Manager (GM), which coordinates the
generation of spoken speech and multimodal output,
among other functions.
This system is a fully implemented, end-to-end speech-driven
dialogue system, although for this session some aspects of
the problem-solving model were not fully implemented. For
brevity, we have suppressed most of the detail of the actual
message traffic between components of the system, focusing
solely on the interactions within the collaborative problem
solving model. Utterances are labeled by the agent and a
number indicating the sequence of speaker turns.
The user starts by asking a question:
U: Can I take an aspirin? (1)
The Interpretation Manager asks the Task Manager to interpret
the utterance in the current problem solving con-
text. The Task Manager replies that the utterance looks
like an initiate of a c-evaluate-action. That is, the user
seems to be initiating a collaborative evaluation of the action
of taking aspirin. The Interpretation Manager uses this
result, together with other interpretation steps, and broadcasts
that as the system's understanding of what the user
has just done.
When the Task Manager receives this message, it can update
its state to include the fact that, yes, the user has initiated
a new collaborative act. Meanwhile, the Behavioral
Agent also receives the notication. It decides, based on its
current obligations, that the system should attempt to complete
the action. To do this, it asks the Task Manager to
perform the (non-collaborative) act of evaluating an action.
The Task Manager does this by following a recipe for evaluating
that type of action, which involves querying databases
for what medications the user is taking and whether any of
those interact with aspirin. It turns out that there is an interaction, so the evaluation is negative. The Behavioral Agent, upon receipt of the result of the evaluation, requests
that the Generation Manager perform the communicative
act inform (with the evaluation results) in order to realize
the interaction act of completing the user's c-evaluate-action
CPS act.
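The flow just described can be summarised in a schematic sketch; the component objects and method names below are invented for illustration and do not reproduce the actual TRIPS message traffic:

  # Schematic sketch of the flow for utterance (1); the component objects and
  # their methods are hypothetical stand-ins, not the real TRIPS interfaces.
  def process_user_utterance(utterance, im, tm, ba, gm):
      # The IM asks the TM to interpret the utterance in the current CPS context.
      interaction_act = tm.interpret(utterance)         # e.g. initiate(c-evaluate-action)
      im.broadcast_understanding(interaction_act)       # broadcast the interpretation
      tm.update_state(interaction_act)                  # user has initiated a new CPS act
      # The BA decides, from its obligations, whether the system should complete it.
      if ba.should_complete(interaction_act):
          result = tm.evaluate(interaction_act)         # e.g. query drug-interaction data
          gm.inform(result, completes=interaction_act)  # realize the completing act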
The Generation Manager determines what to say:
S: No, you are taking celebrex and celebrex (2)
interacts with aspirin.
Once the speech has been generated, the Generation Manager
can broadcast to the rest of the system that the inform
act is done. Note that the interaction act (the complete) is
not yet done, since this depends on the user having understood
the system's utterance (i.e., on its being grounded).
The user responds
U: Oh. (3)
The Interpretation Manager takes this as grounding its utterance
(2), since it does not indicate any lack of under-
standing. This fact is broadcast to the rest of the system.
Once the system's utterance is grounded, the Generation
Manager can announce that the interaction act is done. As
there is no additional problem solving content in the user's
utterance (3), the system does not do anything in response.
The user continues
U: Could Celebrex cause headaches? (4)
As with (1), the IM asks the TM to interpret the utterance
in the current problem solving state. Note that the IM must
explicitly indicate that this interpretation should be relative
to the context in which the system's interaction act (the
complete of the c-evaluate-action) is done. This is just one
example of the explicit synchronization necessary in order
to implement the collaborative problem solving model in a
distributed system.
The TM answers that the utterance (4) looks like an initiate
of a c-identify-situation (as to whether celebrex causes
headaches). With this choice of an interpretation, the Task
Manager can update its model of what the user is doing.
Meanwhile, the Behavioral Agent decides that the system
should perform its part of the collaborative action in order
to complete it. The TM does the identify-situation and
responds that it is not the case that celebrex can cause
headaches. The BA passes this answer to the GM, requesting
that it inform the user in order to complete the collaborative
act. This results in the following system utterance:
S: No, headaches are not an expected side-effect (5)
of celebrex.
And again, the inform is done once the speech has been
produced.
Meanwhile, the TM, in the process of updating its state
based on the user having initiated the c-identify-situation
regarding celebrex causing headache, has performed some
plan recognition and thinks it is likely that the user may be
trying to cure their headache. Note that up to this point, the
user has not said anything about having a headache; this
is purely an inference based on task and domain knowledge.
The TM reports that this is a problem which should be
resolved, although it leaves it to the BA to prioritize the
system's objectives and decide what to do.
In this case, the BA decides to take initiative and requests
that the TM suggest what to do. The TM responds that
the system should initiate a c-identify-situation regarding
whether the user has a headache. The BA sends this to the
GM, resulting in the following utterance:
S: Do you have a headache? (6)
Once the speech has been output, the GM announces that
the ask is done. At this point both interaction acts ("headaches are not a side-effect of celebrex" and "do you have a headache")
are awaiting grounding, and the question is awaiting a response
from the user. When the user answers
U: Yes. (7)
both system utterances (5) and (6) are grounded, so both pending interaction acts are marked as completed, and the
system proceeds to interpret the user's utterance (7) in the
resulting context.
The dialogue continues for another fifteen utterances
as the system addresses the user's headache and
then supports them with several other aspects of their medication
regime. Unfortunately, space precludes an extended
presentation.
7. RELATED WORK
As noted above, work has been done on general theoretical
frameworks for collaborative interaction [3, 9, 11].
However, the focus of these models was more on specifying the
mental details (beliefs, intentions, etc.) of such collabora-
tion, whereas the focus of our model is describing practical
dialogues. Also, in these proposals, many details of what the
models would look like are not given. The SharedPlans formalism
[9, 11], for example, does expressly model the adoption
of recipes (Select Recipe, Select Recipe GR), but that
is as far as it goes. We believe that our model will prove
to be complementary to this formalism, with the remainder
of problem solving acts either existing at some higher
level (e.g. adopt/abandon/evaluate-objective), being added
to the same recipe level (evaluate-recipe), or being part of
the unspecified Elaborate Individual / Elaborate Group processes.
Our belief that human-machine interaction can occur most
naturally when the machine understands and does problem
solving in a similar way to humans is very close to the philosophy
upon which the COLLAGEN project [16] is founded.
COLLAGEN is built on the SharedPlan formalism and provides
an artificial language, human-computer interface with
a software agent. The agent collaborates with the human
through both communication and observation of actions.
COLLAGEN, as it works on a subset of the SharedPlans formalism, also does not explicitly model most of our problem
solving acts.
Several dialogue systems have divided intention recognition
into several different layers, although these layerings are at much different levels than our own. Ramshaw [14] analyzes
intentions on three levels: domain, exploration, and
discourse. Domain level actions are similar to our own domain
level. The discourse level deals with communicative
actions. The exploration level supports a limited amount of
evaluations of actions and plans. These, however, cannot be
directly used to actually build up a collaborative plan, as
they are on a stack and must be popped before the domain
plan is added to.
Lambert and Carberry [10] also had a three level model,
consisting of domain, problem solving, and discourse levels.
Their problem solving level was fairly underdeveloped, but
consists of such recipes as Build Plan and Compare Recipe
by Feature (which allow the comparison of two recipes on
one of their features). The model does not include others
of our problem solving acts, nor does it explicitly model
collaboration, interaction acts, etc.
These models assumed a master-slave collaboration paradigm,
where an agent must automatically accept any proposal from
the other agent. Chu-Carroll and Carberry [2] extended the
work of Lambert and Carberry, adding a level of proposal
and acceptance, which overcame the master-slave problem.
However, Chu-Carroll and Carberry (along with Ramshaw
and Lambert and Carberry), assume a shared, previously-specified problem solving plan which is being executed by
the agents in order to collaborate. This restricts collaboration
to homogeneous agents which have identical problem
solving plans, whereas in our model, there is no set problem
solving plan, allowing agents with different individual
problem solving strategies to collaborate.
Finally, Elzer [4] specifically mentions the need for a problem-solving model in discourse, citing dialogue segments similar to those that we give. However, she offers no proposal of a
solution.
8. CONCLUSIONS
The collaborative problem solving model presented in this
paper offers a concrete proposal for modeling collaboration
between agents, including in particular between human and
software agents. Our model is based on our experience
building collaborative systems in several problem solving do-
mains. It incorporates as many elements as possible from
formal models of collaboration, but is also driven by the
practical needs of an implemented system.
9. ACKNOWLEDGMENTS
This material is based upon work supported by Dept. of
Education (GAANN) grant no. P200A000306; ONR research grant no. N00014-01-1-1015; DARPA research grant
no. F30602-98-2-0133; NSF grant no. EIA-0080124; and
a grant from the W. M. Keck Foundation. Any opinions,
findings, and conclusions or recommendations expressed in
this material are those of the authors and do not necessarily
reflect the views of the above-mentioned organizations.
10.
--R
An architecture for more realistic conversational systems.
Conflict resolution in collaborative planning dialogues.
Intention is choice with commitment.
The role of user preferences and problem-solving knowledge in plan recognition for expert consultation systems
The Medication Advisor project: Preliminary report.
TRIPS: An integrated intelligent problem-solving assistant
Collaborative plans for complex group action.
A tripartite plan-based model of dialogue
A collaborative planning model of intentional structure.
First order theories of individual concepts and propositions.
The uses of plans.
A three-level model for plan exploration
COLLAGEN: A collaboration manager for software interface agents.
A Computational Theory of Grounding in Natural Language Conversation.
--TR
Attention, intentions, and the structure of discourse
Intention is choice with commitment
The uses of plans
A computational theory of grounding in natural language conversation
Collaborative plans for complex group action
TRIPs
An architecture for more realistic conversational systems
Conflict resolution in collaborative planning dialogs
COLLAGEN
The Medication Advisor Project: Preliminary Report
--CTR
David Kortenkamp, A Day in an Astronaut's Life: Reflections on Advanced Planning and Scheduling Technology, IEEE Intelligent Systems, v.18 n.2, p.8-11, March
James Allen , George Ferguson , Mary Swift , Amanda Stent , Scott Stoness , Lucian Galescu , Nathan Chambers , Ellen Campana , Gregory Aist, Two diverse systems built using generic components for spoken dialogue: (recent progress on TRIPS), Proceedings of the ACL 2005 on Interactive poster and demonstration sessions, p.85-88, June 25-30, 2005, Ann Arbor, Michigan
Nate Blaylock , James Allen , George Ferguson, Synchronization in an asynchronous agent-based architecture for dialogue systems, Proceedings of the 3rd SIGdial workshop on Discourse and dialogue, p.1-10, July 11-12, 2002, Philadelphia, Pennsylvania
Duen-Ren Liu , Chih-Kun Ke, Knowledge support for problem-solving in a production process: A hybrid of knowledge discovery and case-based reasoning, Expert Systems with Applications: An International Journal, v.33 n.1, p.147-161, July, 2007
James Allen , George Ferguson , Nate Blaylock , Donna Byron , Nathanael Chambers , Myroslava Dzikovska , Lucian Galescu , Mary Swift, Chester: towards a personal medication advisor, Journal of Biomedical Informatics, v.39 n.5, p.500-513, October 2006 | conversational agents;interface agents;intention recognition;coordinating multiple agents and activities |
545103 | A study on the termination of negotiation dialogues. | Dialogue represents a powerful means to solve problems using agents that have an explicit knowledge representation, and exhibit a goal-oriented behaviour. In recent years, computational logic gave a relevant contribution to the development of Multi-Agent Systems, showing that a logic-based formalism can be effectively used to model and implement the agent knowledge, reasoning, and interactions, and can be used to generate dialogues among agents and to prove properties such as termination and success. In this paper, we discuss the meaning of termination in agent dialogue, and identify a trade-off between ensuring dialogue termination, and therefore robustness in the agent system, and achieving completeness in problem solving. Then, building on an existing negotiation framework, where dialogues are obtained as a product of the combination of the reasoning activity of two agents on a logic program, we define a syntactic transformation of existing agent programs, with the purpose to ensure termination in the negotiation process. We show how such transformations can make existing agent systems more robust against possible situations of non-terminating dialogues, while reducing the class of reachable solutions in a specific application domain, that of resource reallocation. | Introduction
Dialogue is one of the most flexible interaction patterns in Multi-Agent Systems, being something between completely fixed protocols and totally free conversations
[3]. Intuitively, a dialogue is not the (purely reactive) question-answer sequence
of a client server architecture framed in a rigid protocol, but is rather a kind of
interaction grounded on an expressive enough knowledge representation. Dialogues
start, in general, from the need to achieve an explicit goal. The goal of initiating
a dialogue could be, for example, to persuade another party, to find some information,
to verify an assumption, and so on [15]. In the case of negotiation dialogues, for
instance, agents need to negotiate because they operate in environments with limited
resource availability, and the goal of a dialogue is to obtain a resource. This general
idea applies to different scenarios with different meanings.
In recent years, computational logic gave a relevant contribution to the development
of Multi-Agent Systems [11, 4], and proved effective to model and implement
the agent knowledge, reasoning, and interactions. Work on argumentation and per-
suasion, that under certain circumstances are considered suitable techniques and
strategies to support conversation and goal achievement, led to many argumentative
frameworks [1, 9, 10]. They often try and embrace very hard problems and result
in very good descriptive models for them, but most of the time lack an execution
model. In [13], Sadri et al. described a logic-based approach to negotiation which
does not take into account persuasion and argumentation in their classic understand-
ing, but which on the other hand allows for proving properties such as termination,
correctness and completeness. The strength of such approach is in that it proposes
an execution model that can be used to achieve an implementation of the system.
In the area of negotiation, there are some general results about dialogue proper-
ties, like termination and success. In [16], the authors consider the use of logic-based
languages for negotiation, and identify two important computational problems in
the use of logic-based languages for negotiation: the problem of determining if
agreement has been reached in a negotiation, and the problem of determining if a
particular negotiation protocol will lead to an agreement.
In this paper, we tackle the problem of termination of agent dialogues. We start
by discussing the meaning of termination in agent dialogue, and show a trade-off
between ensuring termination of dialogues, and therefore robustness in the agent
system, and achieving completeness in problem solving. Then, we show how such
ideas apply in practice to a concrete case of negotiation framework. Building on [12],
where agent dialogue is obtained as a product of the combination of the reasoning
activity of two agents performing an abductive derivation on a logic program, we
show how the agent programs can be transformed in order to ensure termination in
the negotiation process, and we define three different degrees of such transformation.
We show how they make the agents more robust against possible situations of non-terminating
dialogues, while reducing the class of solutions that can be found in a
specific application domain, that of resource reallocation.
2 Agent dialogue termination: robustness vs. completeness
In [2] agent societies are categorized in terms of openness, flexibility, stability and trustfulness, and it is claimed that whereas open societies support openness and flexibility, closed societies support stability and trustfulness. The author suggests two classes of societies (semi-open and semi-closed), that balance the trade-off between
these aspects, because in many situations there is a need for societies that support
all of them.
In the case of dialogue, the problem of determining such trade-off still holds, because the dialogue can be used as a means to let heterogeneous agents communicate, despite the differences among them, and without necessarily sticking to a given
protocol. On the other hand, if we let agents openly join societies, with no control
on the individuals that access them, problems could arise from their diversity. For
instance, dialogues can last forever.
This work focuses on negotiation dialogues. Let us start by describing what we
intend by dialogue, and let us do it by example, before we define it formally in the next section. In the following dialogue, inspired by [10], agent a will ask agent b for a resource (a nail), needed to carry out a task (i.e., to hang a picture). Once the request is refused, a asks b the reason why, with the purpose of acquiring additional information and finding an alternative solution to her goal.
Example 1
In general, a dialogue is a sequence of alternating dialogue moves, or performatives, where a performative is a message in the form tell(Sender, Recipient, Subject, Time). Time, in particular, is understood as a transaction time. The concept of termination of a negotiation dialogue can be recovered into the idea that at a certain point an agent makes a final move [16]. Of course, the other agent is supposed to recognize that such move is intended to terminate the dialogue. If no agent makes any final move, both agents could keep exchanging messages, without getting to an end. Example 2 shows a dialogue between two (particularly overpolite) agents that keep exchanging greetings.
Example 2
The situation of Example 2 could be due to the fact that both agents' programs
force them to reply to an incoming greeting with an equal greeting. We can imagine
that loop conditions of the like could unpredictably arise each time we put together,
in an open society, agents that were independently programmed. An obvious solution
to this problem could be to force agents not to tell the same thing twice. But
this measure does not really solve the problem, as Example 3 shows: agents could exchange slightly different messages, and still get stuck in a loop condition.
Example 3
Then, we could think of introducing a more restrictive measure, e.g., based on message patterns, that prevents agents from telling a message whose "pattern" is the same as a previous one in the same dialogue. 1 But this could result in preventing agents from finding solutions (or agreements, in the case of negotiation), that
could be found otherwise. Example 4 below shows a possibly successful dialogue
that would solve a resource reallocation problem (agent a obtains a screw from b
and therefore can execute a plan to achieve her goal of hanging a picture). Such
dialogue would not be permitted if agent b was prevented from making the move
'propose an exchange' (such is the meaning of the promise performative, that we
inherit from [1]) twice.
Example 4
Still, unless we consider very generic (and therefore very restrictive) patterns,
the threat of non-termination remains. Example 5 could evoke a familiar situation to those who have spent some moments of their life dealing with small children. In the example, the challenge performative, also derived from [1], has the meaning of asking for a justification of what the dialogue partner just said.
1 The pattern could be, in this case, tell(a, b, hello(_), _), where the underline indicates whatever ground expression.
Example 5
In the end, we realize intuitively what follows: the more we reduce the set of
dialogue moves that the agents can exchange in the course of a dialogue, the more
we reduce the universe of reachable solutions of a negotiation problem. We think
that the choice about to which extent the dialogue should be constrained to certain
patterns must be left to the system designer(s).
In the next section, we describe a concrete negotiation framework, firstly introduced
in [13] and further extended in [12], that makes use of a logic formalism. In
such framework, the course of agent dialogues is ruled by the knowledge expressed
into the agents' abductive logic programs. Based on that, we introduce several syntactic
transformations of such programs, that make them robust in various degrees
against non-termination, and reflect the above discussed trade-off. Finally, in Section
4 we prove a theorem that determines a bound on the maximum length of a
dialogue, measured in terms of number of exchanged messages, and we extend such
result to dialogue sequences.
3 Abduction and negotiation
The dialogue framework that we are going to sketch in this section is derived from
[13]. It is composed of a knowledge representation including an abductive logic program
(ALP), a language, a proof-procedure, and a communication layer. Agents are
provided with a suitable architecture, including in particular a planner. The communication
layer is a shared blackboard where agents can post / retrieve messages. As
far as the knowledge representation is concerned, we will only say here that agents have a (declar-
ative) representation of goals G, beliefs B, and intentions I, i.e., plans to achieve
goals. Agents will access their beliefs by means of predicates such as have(Resource),
need(Resource), and in a similar way to their intentions (intend(Intention)). The
purpose of negotiation is for the agent to obtain the missing resources, while retaining
the available ones that are necessary for the plan in its current intention. As the
focus of the paper is on the termination issue, and for space limitations, we will not
describe the framework in detail here, although we need to give some intuition on the
abductive proof-procedure adopted by the agents, in order to prove the termination
results of Section 4.
3.1 A negotiation framework
In its classical understanding, abduction is a reasoning mechanism that allows one to find a suitable explanation to a certain observation or goal, based on an abductive program. In general, an abductive program is expressed in terms of a triple ⟨P, A, IC⟩, where P is a logic program, A is a set of abducible predicates, i.e., open predicates which can be used to form explaining sentences, and IC is a set of integrity constraints. Given a goal g, abduction aims at finding a set of abducible predicates that can be supposed true and thus enlarge P, in order to entail g. The
adoption of automatic proof procedures such as that of [5] or [6], supported by a
suitable agent cycle such as for instance the observe-think-act of [8], will implement
a concrete concept of entailment with respect to knowledge bases expressed in abductive
logic programming terms. The execution of the proof procedure within the
agent cycle allows to produce hypotheses (explanations) that are consistent with
the agent constraints, IC, when certain phases of the agent cycle are reached. Constraints
play a major role in abduction, since they are used to drive the formulation
of hypotheses and prevent the procedure from generating wrong explanations to
goals. For this reason, abduction has been originally used for diagnosis and expert
systems.
In recent times, many different understandings of abductive reasoning have been
conceived. Abduction has been used, e.g., for planning and scheduling, where the
'hypotheses' that can be made refer to task scheduling, and the constraints can be
used, e.g., to prevent task overlapping and resource conflict. In an argumentation framework, abduction has been proposed to build arguments out of a knowledge
framework, abduction has been proposed to build arguments out of a knowledge
base [7]. In [13], abduction has been used to model agent dialogue, following an
argumentative approach. In particular, the abducible hypotheses are dialogue per-
formatives. The abductive agent program is provided with dialogue constraints that
are fired each time the agent is expected to produce a dialogue move, e.g., each time
another agent sends him a request for a resource. Such move is then produced as a
hypothesis that must be assumed true in order to keep the knowledge base consis-
tent. The agent knowledge is considered consistent if the agent replies to a partner's
moves, according to the current status of her knowledge base. The use of abduction
in the agent dialogue context, as opposed to other (less formal) approaches, has several
advantages, among which the possibility to determine properties of the dialogue
itself, and the 1-1 relationship holding between specification and implementation,
due to the operational semantics of the adopted abductive proof-procedure.
In the following we will show a dialogue constraint, taken from a very simple
agent program:
Example 6
Constraints here are expressed in terms of condition-action rules, leading in this
particular case from the perception of another agent's dialogue move (observation
phase) to the expression of a new dialogue move (action phase). For instance, the
first constraint of the example reads: 'if agent a receives a request from another
agent, X, about a resource R that she has, then a tells X that she will accept
the request'. Such rules are interpreted (think phase) by the IFF abductive proof-
procedure, framed in an observe-think-act agent cycle [8].
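A plausible rendering of the constraints that Example 6 is describing is given below; since the program itself is not reproduced here, both rules are our reconstruction, and the refuse rule in particular is only the natural counterpart of the accept rule described in the text:
tell(X, a, request(give(R)), T) ∧ have(R, T) ⇒ tell(a, X, accept(request(give(R))), T + 1)
tell(X, a, request(give(R)), T) ∧ not have(R, T) ⇒ tell(a, X, refuse(request(give(R))), T + 1)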
3.2 IFF-terminating programs
In this paper, we will not make any concrete assumptions on the syntax of the
language of the knowledge base of agents, except for assuming that it contains
notions of literal, complement of sentences, true and false, and that such language
is equipped with a notion of entailment, such that, for every ground literal in the
language, either the literal or its complement is entailed, and such that no literal
and its complement are entailed at the same time.
The IFF [5] is a rewriting abductive proof-procedure consisting of a number of
inference rules. Two basic inference rules are unfolding (backward reasoning), and
propagation (forward reasoning). Implications are obtained by repeatedly applying
the inference rules of the proof procedure to either an integrity constraint in the given
program (ALP), or to the result of rewriting negative literals not A as false ! A.
We will not describe the proof-procedure in detail here, but we will focus on the
issue of proof termination, and give a characterization of the class of IFF-terminating
programs, i.e., of those abductive logic programs for which all IFF-trees for grounded
queries are finite.
Intuitively, the reason why a program is not IFF-terminating can be recovered
into the presence in the program of rules / constraints whose combination leads to
infinite propagation or unfolding. It is possible to identify three cases, that can be
generalized. 2 In the following, p and q represent literals.
(1) unfolding / unfolding; (2) unfolding / propagation; (3) propagation / propagation.
2 Here, for the sake of simplicity, we consider only ground programs, thus assuming that P and IC have already been instantiated. The results could be generalized to non-ground programs.
In all cases, p unfolds / propagates to q and vice versa, ad infinitum. In order to characterize a class of programs that terminate, we define the property of acyclicity, tailored to the case of ALP in relationship to the IFF proof procedure (IFF-acyclicity). In fact, if an ALP is IFF-acyclic, then it is IFF-terminating. An ALP is IFF-acyclic if we can find a level mapping |·| such that:
1. for every ground instance of every clause (if-definition) in P, say A ← L_1 ∧ ... ∧ L_n, the level of the head is higher than the level of each body literal, i.e., |A| > |L_i| for every i;
2. for every ground instance of every integrity constraint in IC, a corresponding condition on the levels of its body and head literals holds; if K is a negative literal, say not M, in the body of the integrity constraint, then the condition refers to the level |M|.
The presence of such level mapping ensures that the situations above do not
occur in an agent program. We will call IFF-acyclic programs acceptable: in fact,
we cannot guarantee termination if an agent gets stuck in an infinite branch of the derivation tree, producing therefore no dialogue move at all. In the following, we will always require that the agent programs are acceptable. This work builds on these results and identifies a class of agent programs that ensure the termination of
negotiation dialogues and sequences of dialogues.
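A rough intuition for the acyclicity requirement can be given as a cycle test on a dependency graph built from clauses and integrity constraints; the sketch below is only our approximation of the idea, not the formal IFF-acyclicity check, and the program encoding is invented:

  # Rough approximation: an edge p -> q means "processing p may lead to processing q",
  # via unfolding of a clause p <- ..., q, ... or via propagation through a constraint
  # with p in its body and q in its head. A cycle signals potential non-termination.
  def has_cycle(edges):
      graph = {}
      for a, b in edges:
          graph.setdefault(a, []).append(b)
      WHITE, GREY, BLACK = 0, 1, 2
      colour = {}
      def visit(node):
          colour[node] = GREY
          for nxt in graph.get(node, []):
              c = colour.get(nxt, WHITE)
              if c == GREY or (c == WHITE and visit(nxt)):
                  return True
          colour[node] = BLACK
          return False
      return any(colour.get(n, WHITE) == WHITE and visit(n) for n in graph)

  # Example: the clause p <- q together with the integrity constraint q -> p gives a cycle.
  print(has_cycle([("p", "q"), ("q", "p")]))   # True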
3.3 Dialogues
Let us now formally define what we intend by dialogue. In the sequel, capital letters
stand for variables and lower-case letters stand for ground terms.
Definition 1 (performative or dialogue move)
A performative or dialogue move is an instance of a schema of the form tell(X, Y, Subject, T), where X is the utterer and Y is the receiver of the performative, and T is the time when the performative is uttered. Subject is the content of the performative, expressed in some given content language.
Definition 2 (language for negotiation)
A language for negotiation L is a (possibly infinite) set of (possibly non ground) performatives. For a given L, we define two (possibly infinite) subsets of performatives, I(L), F(L) ⊆ L, called respectively initial moves and final moves.
An example of language for negotiation, taken from [13], is the language L1 whose performatives are those appearing in Figure 1: request, accept, refuse, challenge, justify and promise. The initial moves I(L1) contain the request performatives, and the final moves F(L1) contain the accept and refuse performatives.
As in this paper we are interested in negotiation for the exchange of resources,
we will assume that there always exists a request move in the initial moves of any
language for negotiation.
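For illustration only (the data structures below are ours, not those of the cited framework), performatives and the initial/final move sets could be represented as:

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Performative:
      utterer: str
      receiver: str
      subject: str
      time: int

  # A toy language: subjects whose outermost functor is one of these names.
  INITIAL_SUBJECTS = {"request"}
  FINAL_SUBJECTS = {"accept", "refuse"}

  def is_initial(p: Performative) -> bool:
      return p.subject.split("(")[0] in INITIAL_SUBJECTS

  def is_final(p: Performative) -> bool:
      return p.subject.split("(")[0] in FINAL_SUBJECTS

  p0 = Performative("a", "b", "request(give(nail))", 0)
  assert is_initial(p0) and not is_final(p0)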
Definition 3 (agent system)
An agent system is a finite set A, where each x ∈ A is a ground term, representing the name of an agent, equipped with a knowledge base K(x).
We will assume that in an agent system, the agents share a common language for negotiation as well as a common content language. For a given agent x ∈ A, where A is equipped with L, we define the sets L_in(x), of all performative schemata of which x is the receiver, but not the utterer; and L_out(x), of all performative schemata of which x is the utterer, but not the receiver. Note that we do not allow for agents to utter performatives to themselves. In the sequel, we will often omit x, if clear from the context, and simply write L_in and L_out. In our ALP framework, outgoing performatives are abducibles, which implies that no definition for them is allowed. In other words, there does not exist a rule in the agent program that contains an outgoing performative in the head.
Negotiation protocols can be specified by sets of 'dialogue constraints', defined as follows:
Definition 4 (dialogue constraint)
Given an agent system A, equipped with a language for negotiation L, and an agent x ∈ A, a dialogue constraint for x is a (possibly non-ground) if-then rule of the form p(T) ∧ C ⇒ p̂(T + 1), where p(T) ∈ L_in(x), p̂(T + 1) ∈ L_out(x), the utterer of p(T) is the receiver of p̂(T + 1), and C is a conjunction of literals in the language of the knowledge base of x. 3
Any variables in a dialogue constraint are implicitly universally quantified from the outside. The performative p(T) is referred to as the trigger, p̂(T + 1) as the next move and C as the condition of the dialogue constraint.
The intuitive meaning of a dialogue constraint p(T) ∧ C ⇒ p̂(T + 1) of an agent x is as follows: if at a certain time T in a dialogue some other agent y utters a performative p(T), then the corresponding instance of the dialogue constraint is triggered and, if the condition C is entailed by the knowledge base of x, then x will utter p̂(T + 1), with y as receiver, at the next time T + 1. This behaviour of
dialogue constraints can be achieved by employing an automatic proof procedure
such as that of [5] within an observe-think-act agent cycle [8], as we said before. The
execution of the proof procedure within the agent cycle allows to produce dialogue
moves immediately after a dialogue constraint is fired. A concrete example of a dialogue constraint allowing an agent x to accept a request is that of Example 6, where the trigger is tell(Y, a, request(give(R)), T), the condition is have(R, T), and the next move is tell(a, Y, accept(request(give(R))), T + 1).
We will refer to the set of dialogue constraints associated with an agent x ∈ A
as S(x), and we will call it the agent program of x. We will often omit x if clear
from the context or unimportant. In order to be able to generate a dialogue, two
agent programs must be properly combined, that exhibit two important properties:
determinism and exhaustiveness. We say that an agent program is deterministic
and exhaustive if it generates exactly one next move p̂(T + 1) for each trigger p(T) and condition C, except when p(T) is a final move. We call P the space of acceptable, exhaustive, and deterministic agent programs. Some examples of such programs can be found in [12].
3 Note that C in general might depend on several time points, possibly but not necessarily including T; therefore we will not indicate explicitly any time variable for it.
Definition 5 (dialogue)
A dialogue between two agents x and y is a set of ground performatives, {p_0, p_1, ..., p_m, ...}, such that, for some given t ≥ 0:
1. ∀ i ≥ 0, p_i is uttered at time t + i;
2. ∀ i ≥ 0, if p_i is uttered by agent x (viz. y), then p_{i+1} (if any) is uttered by agent y (viz. x);
3. ∀ i > 0, p_i can be uttered by an agent in {x, y} only if there exists a (grounded) dialogue constraint of that agent whose trigger is p_{i-1} and whose next move is p_i.
By condition 1, a dialogue is in fact a sequence of performatives. By condition 2, agents utter alternately in a dialogue. By condition 3, dialogues are generated by the dialogue constraints, together with the given knowledge base to determine whether the condition of triggered dialogue constraints is entailed. A dialogue p_0, ..., p_m is terminated if and only if the last move p_m is a ground final move, namely p_m is a ground instance of a performative in F(L).
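The way conditions 1-3 generate a dialogue can be sketched as a toy simulation, under our own simplifying assumption that each agent program is just a function from the last received move to a reply:

  # Toy generation loop sketching Definition 5; 'programs' maps an agent name to a
  # function from the incoming move to its reply (or None when no constraint fires).
  def run_dialogue(first_speaker, first_move, programs, final_moves, max_turns=20):
      speaker, move = first_speaker, first_move
      dialogue = [(speaker, move)]
      other = {"a": "b", "b": "a"}
      while move not in final_moves and len(dialogue) < max_turns:
          speaker = other[speaker]            # agents utter alternately
          move = programs[speaker](move)      # reply produced by the agent's program
          if move is None:
              break
          dialogue.append((speaker, move))
      return dialogue

  # Example: b accepts any request; every other move ends the dialogue.
  programs = {
      "a": lambda incoming: None,
      "b": lambda incoming: "accept" if incoming.startswith("request") else None,
  }
  print(run_dialogue("a", "request(give(nail))", programs, final_moves={"accept", "refuse"}))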
Intuitively, a dialogue should begin with an initial move, according to the given
language for negotiation. The kind of dialogue that is relevant to our purposes is
that started with a request of a resource R. In the knowledge representation that
we chose in the reference framework, we call missing( Rs ) the set of resources that
an agent is missing before she can start executing an intention I. A request dialogue
will be initiated by an agent x whose intention I contains R in its set of missing
resources.
Definition 6 (request dialogue)
A request dialogue with respect to a resource R and an intention I of agent x is a dialogue between x and another agent y ∈ A such that, for some t ≥ 0, the dialogue starts with the move tell(x, y, request(give(R)), t), where R belongs to the set of missing resources of I.
As a consequence of a dialogue, the agent's intentions might change. According
to the way intentions are modified, a classification of types of terminated request
dialogues is given in [12]. In the sequel, we will assume that a terminated request
dialogue, for a given resource R and intention I, returns an intention I′.
4 Ensuring termination
We can consider the dialogue as a particular IFF derivation, obtained by interleaving
resolution steps made by the two agents. Therefore, we can extend the termination
results of 3.2 to dialogue programs.
4.1 Terminating dialogue programs
We argue that the possible reasons for an infinite derivation tree are still, mutatis
mutandis, those of Section 3.2. Let us consider again the three cases, and see if
they can occur when the knowledge is distributed between two agents, a and b. We
put on the left side of each rule / constraint the name of the agent whose program
contains it. We assume that the agents' programs are acceptable.
(1) unfolding / unfolding; (2a) and (2b) unfolding / propagation; (3) propagation / propagation.
(1) and (2a) are both forbidden by the program acceptability requirement. (2b) does not represent a possible cause of non-termination: 4 since q is not abducible, it is not possible to communicate it to b. As for the third case, let us
consider, as p(T), tell(b, a, Subject, T), and as q(T), tell(a, b, Subject, T). This
is apparently a threat for termination. For instance, this is the case in Example 2,
where Subject is hello. We could therefore introduce some restrictions to the dialogue
protocols, in order to prevent such situation, at the cost of a reduction of the space
of reachable solutions, as explained before by examples.
4 The only way to pass the computation thread from an agent a to her dialogue partner b is
through an abducible representing a dialogue move in the head of one of a's dialogue constraints.
4.2 Three degrees of restrictions
In order to try and prevent propagation from causing infinite dialogue move generation (case 3), we should make sure that the same integrity constraint of an agent program is not triggered infinitely many times. To this purpose, we define a transformation T that maps an element S ∈ P of the domain of (acceptable, exhaustive and deterministic) agent programs into another element, T(S), in the same domain. T is defined as follows:
Definition 7 (Agent program transformation)
Given a language L, an agent x and an agent program S(x) ∈ P, the transformation T with respect to a given set of literals p_check(T), one for each dialogue constraint, is defined in the following way: each dialogue constraint p(T) ∧ C ⇒ p̂(T + 1) of S(x) is replaced so that it produces the original next move p̂(T + 1) when p_check(T) does not hold, and a final move otherwise.
Therefore, there is a 1-1 correspondence between a restriction p_check(T) and a transformation function T. We will call restricted according to T the programs that are elements of the co-domain of T. If an acceptable program is restricted according to T, when the partner agent produces a move triggering a dialogue constraint whose condition and restriction are both verified, the agent will jump to a final state, thus interrupting the dialogue. It is easy to prove that, given a transformation function T associated with a restriction p_check(T), if p_check(T) is ground for all possible instantiations of p(T), then T maps exhaustive and deterministic programs into exhaustive and deterministic programs, i.e., T maps from P to P.
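One plausible reading of the transformation, which we sketch below with invented helper names (it keeps the original behaviour while the restriction is false, and produces a final move as soon as it becomes true), is the following:

  # Sketch of a restricted agent program; the wrapping below is our own reading of
  # Definition 7, with hypothetical function names.
  def restrict(reply_fn, p_check, final_move="refuse"):
      """reply_fn: incoming move -> next move; p_check: (incoming, history) -> bool."""
      history = []
      def restricted_reply(incoming):
          if p_check(incoming, history):
              move = final_move          # jump to a final state, ending the dialogue
          else:
              move = reply_fn(incoming)
          history.append((incoming, move))
          return move
      return restricted_reply

  # Ground-instance restriction: the same incoming move has been seen before.
  seen_before = lambda incoming, history: any(inc == incoming for inc, _ in history)
  polite = restrict(lambda incoming: "hello", seen_before)
  print(polite("hello"), polite("hello"))    # hello refuse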
The choice of the restriction can be made in several ways; we will define here three different kinds of restrictions, that formally reflect the considerations of Section 2.
(i) The check is made on ground instances of predicates,
(ii) The check is made on predicate patterns,
(iii) The check is made against an ordering.
Let us consider them one by one. Case (i), check made on ground instances of
predicates, restricts the applicability of a dialogue constraint by preventing it from
being triggered twice by the same instance of dialogue move at different times. This
is in line with the characterization of dialogue given in [13], and prevents situations
of infinite 'pure' loop, such as the one in Example 2, generated by a constraint of agent a that replies to an incoming hello with an equal hello. The restriction in this case could be a check p_check(T) stating that the same ground instance of the trigger has already been received at some earlier time T′ < T.
This does not guarantee termination in a finite number of steps, though, as shown by Example 3. The check on predicate patterns could be implemented in that case by a restriction p_check(T) stating that a performative matching the same pattern (e.g., tell(a, b, hello(_), _)) has already been uttered earlier in the dialogue.
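For illustration, restrictions (i) and (ii) might be approximated as follows (a sketch with invented helpers; moves are plain strings):

  # Sketch of restrictions (i) and (ii); not the formal definitions from the paper.
  def ground_instance_check(move, past_moves):
      # (i) the very same ground move has already occurred in the dialogue
      return move in past_moves

  def pattern_of(move):
      # crude "pattern": keep only the outermost functor, e.g. hello(3) -> hello
      return move.split("(")[0]

  def pattern_check(move, past_moves):
      # (ii) a move with the same pattern has already occurred in the dialogue
      return pattern_of(move) in {pattern_of(m) for m in past_moves}

  past = ["hello(1)"]
  print(ground_instance_check("hello(2)", past))   # False: different ground instance
  print(pattern_check("hello(2)", past))           # True: same hello(_) pattern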
The case of check made on predicate patterns is a more restrictive policy, that has the already mentioned drawbacks and limitations. In particular, the situation of Example 5 could be caused by an agent that contains in her program a dialogue constraint whose trigger is tell(b, a, Anything, T), i.e., any incoming move, and whose next move is a challenge.
A solution to this could be to establish an ordering among the dialogue constraints
of an agent. Before going on, let us point out that the problem of non-termination
is a problem of the agent that starts the negotiation dialogue, although our results are independent of this. In fact, if it is true that the dialogue can be terminated by
either agent, on the other hand, broadly speaking, the one that started it is the one
that we expect to be waiting for a reply, and not vice versa.
Now, the intuition is that the agent that started the negotiation process, let us say a, can ideally draw a tree of possible dialogues, that has as a root her initial move, let us say, tell(a, b, request(...), 0). A correct tree could be drawn if a knew b's program exactly, which is not an assumption we want to make. However, in first approximation, a can assume that b has the same constraints as she has. Then, a can generate a tree that has as nodes the possible dialogue moves, and as branches the integrity constraints that lead from one move to another one. An example of such tree, for the language L1 and the negotiation program defined in [13], is that of Figure 1.
The purpose of drawing a tree, that goes from an initial move to some possible final moves, is to have an ordering function, that allows to order the dialogue moves, and consider an ongoing dialogue valid as far as the tree or graph is explored in one direction (the one that leads to final moves). It is important to notice that not all agent programs in P allow us to draw a tree: in order to do that, such ordering function must exist. We call the existence of such function acyclicity, as we did in Section 3.2 in the case of IFF-terminating programs. If such function does not exist, it is not possible to adopt policy (iii), and it is easy to imagine that a dialogue between two agents both having a non-acyclic program will not likely terminate. More formally, an instance of ordering function, that we call rank function, is defined as follows:
Figure 1. Dialogue tree (nodes are the performatives request, accept, refuse, challenge, justify and promise; the tree is rooted at a request and contains a nested request subtree).
Definition 8 (Rank function)
A rank function, mapping from a language L to the set of natural numbers N, is procedurally defined for a given agent program S ∈ P as follows, in two steps. First we label all the performatives of L, by applying one of the following rules, until no rule produces any change in the labeling:
for all p ∈ L that have not been labeled yet, if p is a final move, then label(p) = 0;
for all p ∈ L that have not been labeled yet, if all the possible next moves of p (according to the dialogue constraints in S) have already been labeled, then label(p) is one more than the maximum of their labels;
for all p ∈ L that have already been labeled, let label(p) be r. Then, if there exists a dialogue constraint ic ∈ S whose trigger is p and whose next move is labeled r or higher, the labeling cannot be completed.
If it is not possible to apply such labeling to the language (i.e., if it is not possible to complete this procedure in finitely many steps), it means that the program is not acyclic. Otherwise, after labeling the language, let R be max_{p∈L} label(p) (R is finite). Then, rank(p) is defined as label(p).
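For illustration, the labeling can be computed bottom-up on the next-move graph induced by the dialogue constraints. The sketch below uses an invented, acyclic toy graph loosely inspired by Figure 1 rather than its exact structure:

  # Sketch of the rank computation: final moves are labeled 0, every other move gets
  # one more than the maximum label of its possible next moves; if no progress can be
  # made, the labeling (and hence the rank function) does not exist.
  def compute_ranks(next_moves, final_moves):
      label = {m: 0 for m in final_moves}
      pending = set(next_moves) - set(final_moves)
      while pending:
          progress = False
          for move in list(pending):
              succ = next_moves[move]
              if all(s in label for s in succ):
                  label[move] = 1 + max(label[s] for s in succ)
                  pending.discard(move)
                  progress = True
          if not progress:
              return None   # a cycle in the next-move graph: the program is not acyclic
      return label

  # Invented toy graph, loosely inspired by Figure 1 (not its exact structure):
  tree = {
      "request": ["accept", "refuse", "challenge"],
      "challenge": ["justify"],
      "justify": ["refuse", "promise"],
      "promise": ["refuse", "accept"],
      "accept": [], "refuse": [],
  }
  ranks = compute_ranks(tree, final_moves=["accept", "refuse"])
  # ranks: accept/refuse -> 0, promise -> 1, justify -> 2, challenge -> 3, request -> 4

  # Restriction (iii): a later move may never have a higher rank than an earlier one.
  def ordering_violated(move, earlier_moves, ranks):
      return any(ranks[move] > ranks[m] for m in earlier_moves)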
Once a rank function rank(p, n), n ∈ N, is defined for all p ∈ L, the restriction turns out to be a check p_check(T) which holds whenever a move of higher rank has been made after a move of lower rank in the dialogue so far. It is worth noticing that the introduction of the restrictions does not modify the existing language ranking.
We will call this policy check against an ordering. It is more restrictive than
the previous ones, since it does not allow jumping backwards from one branch to
another one, and, again, some more possibly successful dialogues could be rejected.
On the other hand, if applied to acceptable, exhaustive, and deterministic programs,
it is enough to ensure termination, as stated by the following theorem:
Theorem 1 (Finite termination of a dialogue with check against an ordering) Let
a and b be two agents provided with acceptable, exhaustive, and deterministic pro-
grams. Let a's program be restricted according to a check against an ordering policy.
Therefore, a dialogue d between a and b will terminate after finitely many moves. In particular, if R is the maximum rank of a dialogue move, d will have at most R + 2 moves.
Proof The proof for such theorem is given inductively. Given a dialogue d = p_0, p_1, ... between agents a and b, and started by a, let R be the maximum rank of a dialogue move in a's program. By definition of rank, all nodes ranked 0 must be leaves, and therefore final moves, while the nodes ranked R are initial moves. Then we have,
if p_j is final, the dialogue is terminated;
if p_j is not final, with rank(p_j) = r, and it is uttered by b (i.e., j is even), then p_{j+1} must be computed by a, and will be either a final move, or will be ranked rank(p_{j+1}) ≤ r;
if p_j is not final, with rank(p_j) = r, and it is uttered by a (i.e., j is odd), then p_{j+1} must be computed by b, and it will be either a final move, or a move of some rank; if the latter is not lower than r, then the next move p_{j+2} will be final and the dialogue will terminate; otherwise p_{j+2} will be ranked lower by definition.
Therefore, each move p_j is ranked r, with r ≤ R + 1 − j. Now, if in the dialogue there are two moves with the same rank, the next move will be the last one. Since there cannot be more than two moves with the same rank, and the maximum rank is R, the maximum number of moves computed in a dialogue is R + 2. For any non-final move uttered by either agent, there exists a (unique) next move p_{j+1}, since both
agent programs are exhaustive and deterministic. Moreover, the reasoning required
by either party to compute a dialogue move terminates in a finite number of steps,
since both programs are acceptable. Therefore, the dialogue terminates in at most
R + 2 moves.
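As a purely illustrative instance of the bound: if the ranks of a language were, say, 0 for accept and refuse, 1 for promise, 2 for justify, 3 for challenge and 4 for request, then R = 4 and, under the hypotheses of the theorem, any dialogue would comprise at most R + 2 = 6 moves.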
4.3 Termination of dialogue sequences
We can easily extend this termination result to the case of dialogue sequences, formally defined in [12], aimed at collecting all the (finitely many) missing resources, missing(I), with respect to an intention I, whose cost is defined as the cardinality of missing(I). We do not have the space here to formally describe the dialogue sequence, as we did with dialogues; still we would like to give the intuition, and a sketchy proof of the second theorem that we are going to enunciate in the following. Dialogue sequences are defined such that an agent cannot ask the same resource twice of the same agent, within the same sequence. Since dialogues can modify the agents' intentions, a dialogue sequence d_1, ..., d_n is associated with a series of intentions I_0, I_1, ..., I_n, where I_j is the agent intention resulting from dialogue d_j, and I_0 = I is the 'initial' intention.
Definition 9 (Termination of a sequence of dialogues)
A sequence of dialogues s(I) with respect to an initial intention I of an agent a is terminated when, given that I_n is the intention after d_n, there exists no possible request dialogue with respect to I_n that a can start.
This could be due to two reasons: either after s there are no more missing resources in the intention, i.e. cost(I_n) = 0, or cost(I_n) > 0 and in a's program there is no constraint that can start a request.
In order to ensure termination, we could program the agent such that after every single dialogue the set of missing resources rather shrinks than grows in size: cost(I_{j+1}) ≤ cost(I_j). If an agent program is such that a dialogue can only decrease the cost of an intention, we call such agent self-interested rational. In that case, given a system of n agents and an intention I, the length length(s(I)) of a sequence s(I) of dialogues with respect to an intention I, i.e., the number of dialogues in s(I), is bounded by the product n · cost(I). It is possible to prove
that termination is a property that holds for self-interested rational agents whose
programs are restricted according to a check against an ordering policy:
Theorem 2 (Termination of a sequence of dialogues for a restricted agent program)
Let A be a system of n + 1 agents, and s(I) be a sequence of dialogues with respect to an initial intention I of a self-interested rational agent a ∈ A. Let the agent programs be acceptable, exhaustive, deterministic, and in particular let a's program be restricted according to a check against an ordering policy. Then s(I) will terminate after finitely many dialogue moves. In particular, if R is the maximum rank of a dialogue move, according to a's ranking function, s(I) will terminate after at most n · cost(I) · (R + 2) moves.
Proof (sketch) As the number of dialogues in s(I) is bounded by the product n · cost(I), and each dialogue is terminated in at most R + 2 moves, the whole sequence of dialogues will terminate, and will terminate after at most n · cost(I) · (R + 2) dialogue moves. 5
In the end, we would like to make a parallel between the concept of restriction introduced
to ensure the termination of a (negotiation) dialogue and the self-interested
rationality assumption of Theorem 2. Indeed, self-interested rationality could be
considered a limitation, in that it reduces the space of agent programs. It re
ects
into a reduction of the space of the achievable solutions of a resource reallocation
problem (it is easy to imagine situations where two negotiating agents get stuck in
a 'local maximum' because none of them wants to give away a resource). There
are some results in this respect in [14]. Due to such reduction, a weak notion of
completeness has been introduced in [12].
In this work, we have dealt with the problem of termination in dialogue-based agent
negotiation. Building on an existing dialogue framework, where the course of dialogue
is determined by rules and constraints embodied in the agents' programs, we
introduced several syntactic transformation rules that can modify those programs
towards a better robustness. Two results have been proven, determining an upper
limit to the maximum length of a dialogue and of a sequence of dialogues, measured
in terms of number of exchanged messages. Such results can be generalized, and reect
an existing trade-o between the need to ensure termination in the negotiation
process and the loss in terms of reachable states in the universe of possible solutions
to the problems addressed by negotiation.
--R
Arguments, dialogue and negotia- tion
Categories of arti
Dialogue in team formation.
The IFF proof procedure for abductive logic programming.
On the relation between Truth Maintenance and Abduction.
The role of abduction in logic programming.
From logic programming to multi-agent systems
Reaching agreements through argumen- tation
Agents that reason and negotiate by arguing.
Dialogues for negotiation: agent varieties and dialogue sequences.
Logic agents
Negotiation among Self-Interested Computationally Limited Agents
Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning.
Languages for negotiation.
--TR
Loop checking in logic programming
Reaching agreements through argumentation
Speculative computation with multi-agent belief revision
From logic programming towards multi-agent systems
Categories of Artificial Societies
Dialogue in Team Formation
Dialogues for Negotiation
--CTR
Ralf Schweimeier , Michael Schroeder, A parameterised hierarchy of argumentation semantics for extended logic programming and its application to the well-founded semantics, Theory and Practice of Logic Programming, v.5 n.1-2, p.207-242, January 2005
Pietro Baroni , Massimiliano Giacomin , Giovanni Guida, Self-stabilizing defeat status computation: dealing with conflict management in multi-agent systems, Artificial Intelligence, v.165 n.2, p.187-259, July 2005
Iyad Rahwan , Sarvapali D. Ramchurn , Nicholas R. Jennings , Peter Mcburney , Simon Parsons , Liz Sonenberg, Argumentation-based negotiation, The Knowledge Engineering Review, v.18 n.4, p.343-375, December | abduction;multi-agent systems;computational logic;dialogue;negotiation;termination |
545117 | Channeled multicast for group communications. | Multi-agent systems can benefit from the possibility of broadcasting messages to a wide audience. The audience may include overhearing agents which, unknown to senders, observe conversations and, among other things, pro-actively send suggestions. Current mainstream agent communication languages however lack adequate support for broadcasting. This paper defines the requirements for a form of broadcast that we call channeled multicast, whose distinguishing features include the ability to distinguish streams of messages by their theme, and to address agents by their characteristics. We present an implementation based on multicast IP, called LoudVoice. We show how channeled multicast could be used in alternative to matchmaking, and present in some detail a broadcast-based version of the English Auction Interaction Protocol. Finally, we discuss how we use the ability to overhear conversations in order to build innovative applications and we present a case study which is a testbed for various types of agents and multi-agent systems. | INTRODUCTION
While developing some collaborative, distributed applica-
tions, we realized that many multi-agent systems would ben-
et from the possibility of broadcasting messages to a wide
audience, which may include overhearing agents unknown
to the senders. We even based an abstract architecture [5]
on the ability of observing conversations without being in-
volved. However, it has been convincingly argued that current
mainstream agent communication languages, in addition
to poorly supporting broadcasting, lack an adeguate
group communication semantics [15].
The goal of this paper is to present a communication frame-work
that supports a form of broadcast which we call channeled
multicast. We developed an implementation of channeled
multicast, called LoudVoice, based on multicast IP.
We will discuss how we used this framework for rethinking
some traditional protocols and supporting an innovative approach
to multi-agent system design. As mentioned above,
a cornerstone of this approach is the ability to \overhear"
conversations and indeed, this has eectively dictated the
requirements for channeled multicast.
This paper is organized as follows: Section 2 denes the
requirements for channeled multicast. Section 3 brie
y introduces
one of the main drivers for channeled multicast, the
overhearing architecture. Section 4 describes our implemen-
tation, LoudVoice. In order to demonstrate the eective-
ness of the proposed framework, Section 5 shows how to implement
matchmaking and the English Auction Interaction
Protocol (standardized by FIPA) using channeled multicast;
the latter protocol is examined in detail. Section 6 speculates
on how channeled multicast enables certain classes of
innovative applications, while Section 7 presents, as a case
study, a part of a larger testbed that we are developing.
Finally, Section 8 points to some related work.
2. CHANNELED MULTICAST
For our applications, we need a communication infrastructure
able to support what we call channeled multicast. This
is based on the concept of channel, dened as a stream of
messages (typically corresponding to speech acts) that may
be listened to by many agents simultaneously. 1 The goal
of channeled multicast is to support observable, real-time,
one-to-many communication, and has the following characteristics
Many channels can co-exist (an implementation may
even support on-the-
y channel creation). All messages
exchanged on a channel relate to a single theme,
identied by an expression in a language of choice of
the implementation. If we suppose that a logic-based
language is used and that the application is an auction
1 The choice of the term \channel" comes from the analogy
with radio { indeed, inspiration came from the idea of a
digital stream describing the contents of a sound stream
being broadcasted simultaneously.
system where channels are identied by the type of sold
items, a theme may look like: (contents (modernArt)
and (item (painting) or item (statue))), identifying
a channel dealing with modern art pieces, and
specically paintings or statues.
Every message contains its specic destination. This
may be one or a group of agents. A group address
may take dierent formats: (1) a simple list of agents
{ e.g., (painting-auctioneer, statue-auctioneer);
(2) the name of a team, or other type of identiable
community { e.g., Auctioneer Syndicate; (3) an expression
(in a language of choice of the implementa-
tion) of the characteristics of the intended audience
{ e.g., partipant(auction,#5) and country(Italy)
identies the set of all participants to a given auction
who live in Italy. In this paper, we represent \all listeners
currently tuned on the channel" with everybody.
Every message contains its sender and its private contact
address (i.e. the end-point of a standard point-to-
point network, such as KQML [8] and FIPA [9],[10]).
Agents can \tune" into as many channels as they like,
both to listen, and to send messages (speak ). An agent
tuned into a channel receives all messages sent on it, no
matter what their specic destination. Some form of
restrictions may be applied (e.g., message encryption),
for security or application specic reasons.
From what has been sketched above, it follows that { dier-
ently from blackboards or facilitator-based infrastructures
(e.g. [6]) { a channel is \neutral": it does not route messages
to their \best" destinations, nor does it help in coordinating
activities. Listeners must be ready to receive messages
that they do not know how to handle, and dispose of them
properly, in particular when they have been implicitly addressed
(e.g., messages sent to everybody). A proper choice
of channel themes alleviates these issues, and helps selecting
the \right" audiences.
Also, a channel is not generally required to guarantee deliv-
ery, nor to have store-and-forward capabilities, nor to alert
speakers of the absence of listeners (application dependent
considerations may require these features to be supported,
however). Finally, channeled multicast is not an alternative,
rather a complement, to standard point-to-point communication
As discussed in Section 4, channeled multicast can be implemented
on top of a distributed event service. What it
adds is an explicit distinction between the stream of communication
(the channel, equivalent to a set of event types)
and the destination of messages. A message can be heard by
everybody, but is normally addressed only to a subset of the
audience. This allows us to properly support the semantics
of group communication described by Kumar et al [15].
3. OVERHEARINGONCHANNELEDMUL-
The requirements for channeled multicast derive from the
abstract architecture described in [5, 16], based on the principle
of overhearing. Its objective is to enable unplanned,
so-called \spontaneous" collaboration in an agent community
by means of unobtrusive observations and unsolicited
suggestions.
For performance as well as a clean breakdown of function-
ality, the architecture identies a special agent role, the
overhearer, whose goal is to listen to one or more channels
on behalf of others, analyze the contents of all messages
being exchanged and forward whatever information is rele-
vant. To obtain this service, interested listeners subscribe
to overhearers by passing one or more queries on the message
contents. The query language depends on the chosen
may vary from simple pattern expressions, to
modal languages that enable temporal or knowledge-based
reasoning (one is described in [16]), to the specication of a
mental state (e.g., a belief) held by speakers as abducted by
the overhearer. Based on these subscriptions, overhearers
select and forward messages of interest to their subscribers,
even if the latter are not the intended destinations of the
messages.
Normally, communication between overhearers and their subscribers
is private, i.e. on point-to-point connections. Thus,
subscribing with overhearers diers from directly tuning on
channels for two main reasons. The rst is that messages
are ltered based on their contents. The second is that subscribers
do not need to access the underlying multicast net-work
(which may suer limitations, or simply be inconvenient
for mobile systems such as PDAs).
In the architecture described in [5], the main subscribers
are the so-called suggesters, which are agents whose role is
to give suggestions on the topics of the conversations being
observed. Suggestions are typically informative messages,
which can be sent either directly to the parties involved (via
point to point communication) or on channels, thus making
them public.
4. AN IMPLEMENTATION: LOUDVOICE
Channeled multicast can be implemented in several ways.
For instance:
by using a broadcast communication layer. This is
the solution we adopted, and that is discussed in some
detail below;
by adopting a star network topology with a \concen-
trator" in the middle: a speaker sends its messages
to the concentrator, which in turn forwards it to all
connected listeners. Note that this is just a simplied
variant of a facilitator-based architecture such as [6];
by means of a distributed event service system: common
object-oriented middleware based on RPC, such
as CORBA or Java RMI, provides services that de-couple
the communication between objects; see for instance
the CORBA Event Service [1] or the RMI Distributed
Event Model [3].
by using a messaging system with a publisher / subscriber
paradigm (a variant of the previous case). The
speaker sends a message to a destination \topic", and
the middleware redirects a copy of the message to
all the consumer that are \connected" to such topic.
Message-oriented middleware compliant with the Sun
JMS (Java Message Service) [4] specication can provide
this kind of service.
The choice is dictated by non-functional requirements, such
as the reliability semantics required by the application. For
instance, JMS or CORBA implementations may guarantee
\at most one" or \exactly one" delivery of messages.
For our applications, reliability is not a concern, while real-time
is. Our implementation, called LoudVoice, uses the fast
but inherently unreliable IP multicast and XML for message
encoding. LoudVoice is language-independent; we currently
have application programming interfaces (API) for Java, C
and C++.
Channels have an identier (a name), a theme and an IP
multicast communication address. For the current imple-
mentation, channel themes are used in a way that reminds
newsgroup subjects. A theme is just a list of strings taken
from an application-specic taxonomy of subjects; this taxonomy
is represented as an XML le accessible to all agents
via its URL. Agents discover which channels are available by
means of an API that has, as one of its optional parameters,
one or more taxonomy elements to be used to select chan-
nels. The taxonomy not only constrains the inputs, but also
allows an extended matching that includes channels whose
themes are either equal or hierarchically in relation (parents
or children) with those requested by an agent. The selected
channels are then returned in order of their closeness to the
input with respect to the taxonomy. For instance, given: a
taxonomy \sport > athletics > f running, marathon, high
jump g"; channel 1 on sport; channel 2 on athletics; and,
channel 3 on running, a client looking for marathon will be
returned channel 2 and channel 1, in that order.
Having discovered a channel, an agent can freely listen and
speak by means of another API. Messages, encoded as XML
documents, have a common header, which includes a per-
formative, sender and destination. In accordance to the
requirements, the destination is a list of strings that either
identify specic agents or teams, or, are elements of a taxonomy
of \topics of conversation". The latter type of address
is matched by listeners against their own private list of inter-
ests, using a variation of the algorithm for theme matching,
in order to understand if they are part of the intended audience
Figure
1 summarizes the sequence of steps carried out by an
agent to use LoudVoice.
Internally, LoudVoice utilizes a server which has various ob-
jectives: maintaining the list of channels, answering discovery
requests, broadcasting messages (optionally repeating
them periodically), and so forth. The server receives requests
from clients on UDP socket. Messages to be broadcasted
on a channel are sent to its multicast datagram socket
specied by a class D IP address. This means that each
message is received by an arbitrary number of clients. This
mechanism is inherently unreliable. The order of delivery
is not guaranteed, and messages may be dropped. Also,
broadcasting is limited by the conguration of the routers;
Figure
1: Using LoudVoice
this limitation, though, should be overcome by the diusion
of IPv6 [2], which will include multicast addressing over Internet
LoudVoice also includes overhearer agents, in order to support
the architecture mentioned in the previous section. Currently
we have only one type of overhearer: a simplied version
of the Ontological Overhearer described in [16]. The
language for subscription is based on simple taxonomies,
rather than the more complex ontologies described in [16].
Taxonomies are passed by subscribers with their subscription
query, and describe the possible contents for messages
the subscriber is interested in; matching is performed against
the full message contents by using another variation of the
algorithms adopted for channel discovery. As an extension
to [16], subscribers can also dene lters on senders and
destinations of messages.
5. RETHINKING TRADITIONAL PROTOCOL
Well-known multi-party protocols can be eectively implemented
by means of broadcast communication (such as channeled
multicast) and overhearing. We elaborate here on two
cases: matchmaking and the English Auction Interaction
Protocol.
With channeled multicast, matchmaking { a traditional cornerstone
of many agent communication infrastructures [18]
{ may be easily implemented without a centralized match-
maker. Thanks to the capability of service providers to listen
to all the request messages on the channels, they can
autonomously reply proposing their particular service fea-
tures. A simple, but by no means only, way to do this is
to have service providers subscribe with an overhearer by
providing the attributes of their services as the pattern to
be matched. Thus, matchmaking happens when an agent
needing a service sends a request for providers with certain
expected attributes to everybody. Matching providers, no-
tied by the overhearer, reply by proposing themselves. The
requester subsequently chooses which one to use.
Moreover, if replies to service requests are sent back on
the channel rather than privately, an observer may trivially
build up a database of existing providers, and possibly
enrich it with further observations on the following interactions
among providers and their clients. This could become
a base for a broker or a recommendation system. One
may envisage additional types of services beyond traditional,
ontology-based matchmaking or brokering, e.g. agents reformulating
requests when a requester's terminology does not
match with any of the services, agents helping with decompositions
of complex requests, and so on. All of these services
take leverage on the observability of communication,
on knowledge-level analysis of communication (in terms of
speech acts as well as message contents), and even on human
involvement when appropriate. Machine learning may also
be applied to deal with unknown messages and interaction
styles.
We used channeled multicast to implement a variant of the
English Auction Interaction Protocol standardized by FIPA
[12]. We present here our protocol, which is signicantly
more e-cient than the original, while Section 7 elaborates
on how overhearing can enrich the functionality oered by
an auctioning system.
Conceptually, the English Auction Protocol can be described
as follows: an auctioneer seeks to nd the market price of
an object by initially proposing a price below that of the
supposed market value, and then gradually raising it. Each
time a price is announced, the auctioneer waits to see if
any buyer signals its willingness to pay the proposed price.
As soon as one buyer indicates its interest, the auctioneer
issues a new call for bids at an incremented price. The auction
continues until nobody is willing to buy, at which point
the auction ends. The auctioneer then decides whether or
not to sell, depending on the nal accepted price.
For each round of the auction, FIPA prescribes that the auctioneer
rst sends a call-for-proposal message (\cfp act" in
FIPA terminology), with the proposed price, to each participant
(i.e., potential buyer); then, everybody interested
sends back a \propose" message for declaring its bid; nally,
before moving to the next round, the auctioneer replies to
each \propose" specifying whether a participant's bid has
been accepted or not.
Since FIPA assumes point-to-point communication, the auctioneer
has to identify all potential participants before-hand,
and it has to keep them informed on the state of their bid.
The latter requirement arises because each participant needs
to know if it is competing against someone else { ideally,
even against who one is competing, which may be important
in a competitive scenario { in order to decide whether
it is necessary to increase its bid to win the auction.
Using channeled multicast oers two main opportunities:
rst, reducing the total eort (including the number of mes-
second, increasing the social awareness of an auc-
tion's progress. Indeed, when the auctioneer starts the pro-
cess, it can assume that everybody potentially interested is
tuned into the channel; after participants have placed their
bids, they know that everybody knows how many competitors
{ and who { are involved in the auction. Of course,
channeled multicast alone cannot guarantee common knowledge
(as dened in [13], i.e. perfect distribution of infor-
mation). However, the protocol we describe below is meant
to approximate common knowledge for most practical engineering
purposes. Most importantly, the two-phase termination
not only closely mimics human auctions, but signi-
cantly reduces the chances of uncorrect termination due to
lost messages.
Our protocol prescribes that, for each round of the auc-
tion, the auctioneer sends a single call-for-bid message over
a channel. This call contains the proposed price, as well as
the name of the winner of the previous round (normally, the
rst participant to submit a bid), so everybody is updated
with the current situation of the auction. After receiving
the call, a participant can propose its bid for the current
price by replying on the same channel.
The auctioneer waits to receive bid proposals from partic-
ipants. After that at least two bids have been received {
which means that there is competition among participants
{ the auctioneer sends a new call-for-bid at a higher price;
any further bids for the previous price that arrive late are
simply ignored.
If only one, or no bid has been received within a certain
amount of time, the auctioneer resends its call-for-bid and
waits again. The reason for this repetition is to prevent a
wrong termination of the auction due to loss of messages,
either from the auctioneer or a participant: if the probability
of missing a message in a round is p, this two-phase
termination process reduces this probability to p 2 . At the
end of the second phase, if still nobody or only one participant
has sent its bid, the auction ends and the auctioneer
announces the winner, which is the only one bidding at the
current round, or the winner of the previous round if none. 2
Overall, the protocol requires only four types of messages:
Init Auction, sent by the auctioneer to declare the start
of an auction and to give information about the item being
sold; Propose Bid, sent by the auctioneer with the current
price and the winner of the previous round (nobody
at the rst); Accept Bid, sent by a participant accepting
a price; End Auction, to proclaim the winner. Figures 2
and 3 contain the nite state machines for auctioneer and
participant in UML-like format. For simplicity, the rst is
shown without states and transactions needed to implement
the two phase termination.
Figure
2: English Auction - Auctioneer FSM (sim-
plied)
For the sake of completeness, let us compare the semantics
dened by FIPA for one of their messages with ours. In
2 If there is only one bidder, i.e. no competition, it is not
rational to go to another round, since it will go deserted {
the only bidder will not run against itself. If the price is too
low compared to the auctioneer's expectations, the latter
will refuse to sell, and possibly start a new auction from a
higher price.
Figure
3: English Auction - Participant FSM
the FIPA specication [12], Accept Bid is represented by
a propose act, whose semantics { formally dened in [11] {
includes the following eect:
I j
i.e., if the agent j (who did a previous call for proposal)
commits to the intention of having i doing act, then the
agent i (performing propose) commits to the intention of
doing act. In our case, i is the bidder, j is the auctioneer,
and act is purchasing the item being sold at the currently
proposed price.
Our protocol also uses propose for Accept Bid, but its
semantic is extended as follows (terminology and formalism
are a simplication of Kumar et al. [15]):
where, interpreted in our specic example: represents the
bidder, represents the auctioneer,
represents all the
agents tuned on the channel, e represents the event of
being announced as auction winner, and q represents the
intention of of buying the item from at the currently
proposed amount of money. Informally, the semantics is:
(i.e., the bidder) informs (the auctioneer) that it is willing
to commit to q (the intention of buying this specic item at
the current price) conditional to e (i.e. declares that
is the winner). At the same time, a group belief is established
within
(all listeners) about the commitment of to
q conditional to e.
In summary, in this simple case the semantic dierence between
FIPA and our protocol is that the belief about somebody
bidding at a certain price is a group belief rather than
a belief held by the auctioneer only. Of course, this saves us
the need for the additional messages prescribed by FIPA.
Figure
4 compares the number of messages exchanged with
the FIPA interaction protocol and ours as a function of the
total number of participants to an auction, assuming that,
at each round, 25% of the participants leaves the auction.
We tested our protocol using LoudVoice. Dierent channels
were used to run auctions for dierent types of items (cars,
art, etc. Every item was represented by a unique identier,
a description, its type, and had a base price. For our tests,
we developed an auctioneer agent whose goal was to sell
a list of items, and whose behaviour was represented by
a function that, given the current price of a type of item,
returned its incremented price for the next call-for-bid. A
participant agent had a list of types of items to buy, an
amount of money and, for each item type, a limit on the
Figure
4: English Auction - Msg for num of participant
price to pay and a \priority" value that determined which
items to buy rst.
Our benchmark consisted of an auctioneer handling 10 auctions
simultaneously, 4 participants and 4 channels; each
participant was tuned into two channels, and participated
to all auctions on them. We set the decision making time
to zero, that is, there wasn't any waiting time between receiving
a Propose Bid and replying with the corresponding
Accept Bid. The various software components involved were
distributed on four computers connected by a 100Mb Ethernet
LAN. Table 5 refers to a running test period of 20
minutes during which all the participants accepted the new
price of every call-for-bid (i.e., no upper bound on the money
to spend). The \message received" row reports, for the auc-
tioneer, all replies correctly received from participants and,
for the participants, all messages from the auctioneer as well
as those from other participants. Note that all the auctions
running regularly, in spite of some messages being lost;
this was possible thanks to the two-phase termination described
above.
Table
1: English Auction - Statistics
Auctioneer Partecipants
Messages sent 2307 9164
Messages received 9117 9197
Messages Lost 0.5% 0.3%
CPU Load < 3% < 3%
6. IDENTIFYING INNOVATIVE USES
The previous section focused on how to use channeled multi-cast
to deliver some traditional services. We speculate here
on some additional services that would be hard to implement
if only point-to-point connections were available; a case
study is presented in next section.
Some services that would greatly benet from channeled
multicast and overhearing come easily to mind. These include
monitoring (i.e., tracing an agent's actions), proling
(i.e., classifying an agent with respect to some criteria), and
auditing (i.e., making sure that no organizational rules are
violated). They are based just on the observation of other
agents, without need to interfere with the agents' activities.
Reports here are usually made to humans. Thus, it is possible
to add (or remove) these kind of services while the
system is running.
Some other services are less obvious: recommendation sys-
tems, enforcement of social rules (by inhibiting inappropriate
behaviour, or requiring approval prior to the execution
of certain tasks on behalf of other agents), and so on. As in
the previous cases, this class of services are made possible
throught the observability of communication, and somehow
facilitated by protocols based on speech acts, which allow for
simplied analysis of intentionality. There is a fundamental
issue, however: a recommender or rule-enforcer agent
must be able to interact with the agents being observed,
which in turn must be able to handle conversations that are
asynchronous with respect to their main interactions and
sometimes unforeseeable at design time. Designing systems
that adapt on-the-
y to unknown agents, functionality or
protocols and are able to revise their own mental attitudes
accordingly is non-trivial, of course. The abstract architecture
presented in [5] is a framework for dealing with these
challenges. Its cornerstone is the availability of public models
of agents interacting on a channel. Also, [5] speculates
on computational models that support intention revision effectively
There is one class of agents, however, that oers interesting
opportunities with a limited eort. We call these agents
user assistants. They have limited or adjustable autonomy,
and are typical of knowledge-driven business applications or
other systems often categorized under the accomodating umbrella
of \knowledge management" or \decision support".
Distinguishing feature of a user assistant is its sophisticated
human-computer interface, that allows its user to obtain domain
information and to perform actions based on what has
been found. The assistant acts as an \intelligent" intermediary
with other system components and agents. An example
is in the next section. We also built dynamic Web systems
based on the principles outlined below.
User assistants easily benet from overhearing because part
of the decision making is left to humans. To this end, a user
assistant must be able to handle a language for suggestions
directed not to itself, but to its user. These suggestions are
expected to be information, unsolicited and unforeseen in
their format and content, pushed by external observers as
relevant to the specic context in which the user is acting.
The language for the suggestions is meant to help the assistant
in ltering, categorizing and presenting suggestions,
and nally discarding them when no longer relevant. In the
system described in the next section, suggestions include
business intelligence reports on items and participants in active
auctions, and complement the information made available
to the user by her assistant.
Sophisticated agents may accept languages that { in combination
with appropriate computational models, descriptions
of organizational roles and other information { allow them
to in
uence their behaviour. For instance, a language may
contain statements that cause the suspension of an agent's
current operations until the human has acknowledged the
received information or otherwise acted upon it (e.g., by
obtaining authorizations from third parties).
7. AN EXAMPLE OF OVERHEARING
We elaborate on the English Auction Protocol presented in
Section 5, to show how auctioneers and bidders can ben-
et by somebody overhearing auctions. What follows is a
partial description of a testbed for on-going experimentation
on agents, overhearing, Web searching and publishing,
e-services, human/computer interfaces, and other related ar-
eas. Work is still in progress, and many components are in
their early stages of development.
The system supports human-supervised auctions. In other
words, auctioneers and participants acting on a channel are
special cases of users assistant agents, as dened in the previous
section. Each assistant shows to its user any relevant
information about current auctions (items being sold, prices,
and so on). The user then makes decisions and submits goals
to her assistant. For instance, auctions are started by providing
a description of the item being sold, its initial price,
the increment at each round of the auction, and other parameters
such as timeouts between rounds. In addition to
information on the progress of their current intentions, assistants
are able to show any suggestion that comes in.
The user interface of an assistant is a sort of control console
(Fig. 5), where information on the goals currently pursued
(e.g., participating to an auction) is always visible, while side
frames contain action buttons and summaries of incoming
suggestions. The user can browse the full text of any of
these as well as reports on the current state of the running
auctions. At any time, the user can stop the intentions being
executed by her assistant or change its goals.
To enable some of the functionality described below, we
require that, before selling or bidding, users declare their
identities to their assistants. In turn, assistants make their
users' identities public on the selected channels by means of
a \hello" message.
Figure
5: Screenshot { Auctioneer
We are implementing agents overhearing auctions with different
aims, focusing on dierents aspects and dierents
audiences (auctioneers, bidders, potential bidders). As a
running example, let us consider an auction for the famous
painting \The Kiss" by Gustav Klimt.
The Credit Rating Agent's main goal is to provide reports
on the nancial situation of participants to auctions. The
objective of overhearing by this agent is to help auctioneers
in understanding who is bidding, and possibly in selecting
a winner or refusing to sell. The Credit Rating Agent accesses
information either publically available or stored in local
databases. This information may include lists of bankrupts,
previous transactions in the auction system, articles on the
press concerning nancial situations, results of Web searches,
and so on. Suggestions by the Credit Rating Agent are reports
of ndings, written as Web pages whose URL and title
are sent privately to auctioneers via point-to-point connec-
tions. In a real system, suggestions by this agent and others
presented below may be paid for, for instance when accessing
the reports. In the example of the auction for the painting
\The Kiss" { a very expensive item { the auctioneer can
take advantage from a report by a Credit Rating Agent on
the fact that a certain bidder X is notoriously bankrupt and
has no known nancial support. The auctioneer can decide
to exclude X from the auction, or alternatively grant victory
to Z, which another report says to be a relative of Mr. Bill
Gates, or change the price increment between rounds if all
participants seems to be extremely rich, and so on.
On the opposite side, the Auctioneer Monitor observes the
activity of auctioneers in order to prole their tipical behaviour
or discover other information (for instance, the quality
of items on sale, price / performance ratio, press reports
on the items and auctioneer herself). The collected data
is used to create reports and is proposed as suggestions to
bidders for current auctions in the same way as done by the
Credit Rating Agent. In our auction for \The Kiss", participants
can benet from knowing that this painting is kept
in a museum of Vienna and cannot be on sale, or that the
painting was stolen from the museum just the day before the
auction, or, viceversa, that the auctioneer is a very serious
art expert often selling important pieces.
An Auctions Monitor uses overhearing to know which auctions
are active and to collect some basic statistics on recent
auctions, such as average price per class of item and average
time needed to conclude. It spontaneously sends a report
to any new user willing to participate or sell. Monitors may
specialize on specic classes of items. In our example, one
can envisage a situation where a very rich paintings enthusiast
has just tuned on the channel and declared her identity.
She receives a report from an Auction Monitor saying that
the sale of "The Kiss" is close to its conclusion, with information
on the current price and participants as well as other
trends of the auction. Our hypotethical art lover can then
quickly decide to participate.
Finally, an Opportunity Recognizer is an overhearer (as dened
in Section 3) specializing in analyzing auctions. Its
query language allows it to select Init Auctions and Propose
Bids based on the item being sold, their price and other
characteristics. Rather than connecting to a channel via a
normal user assistant, a user may activate a simple \spy"
agent (possibly running on her PDA, or even a latest generation
mobile phone) for subscribing to an Opportunity
Recognizer in order to be notied of auctions on items of
her interest. In our example, somebody interested in buying
just a reproduction of \The Kiss" for not more than 30$
probably doesn't want to tune on a channel for a long time,
let alone during the auction for the real painting. Instead
she may be ready to activate a user assistant when her spy
alerts her of a sale of a batch of prints by Gustav Klimt.
8. RELATED WORK
Some general purpose communication systems related to
channeled multicast have already been identied in Section
4. There is a plethora of agent communication infras-
tructures, but we are not aware of anything directly com-
parable, for two main reasons. Firstly, it is commonly required
that agent communication is somehow reliable (even
if many systems fall short of supporting common knowledge
[13] because their underlying connection-oriented pro-
tocol, TCP/IP, does not really guarantee anything apart
from the correct order of transmission of messages) while
channeled multicast is not. Secondly, channeled multicast
supports a feature shared only with broadcasting systems,
that is, unconstrained ability for authorized users to listen
to anything being communicated by anybody to anybody
else.
In our opinion, overhearing is widely applicable, even if this
requires some changes in the way multi-agent applications
are conceived. As a matter of fact, overhearing is already
being used in some domains, either implicitly or explicitly,
but supported via traditional agent communication systems.
An extended review is outside the objectives of this paper,
however it is worthwhile to give some examples just to explain
how channeled multicast can be benecial.
A rst example is monitoring the activity of other agents,
or even of teams of agents; see, for instance, [14, 19]. Monitoring
is critical, for example, for visualization [17], identication
of failures, or simply to trace teams activity. One
possible approach (report based monitoring) requires each
monitored agent to explicity communicate its state to the
monitoring system. Clearly, if reporting is done by means of
channeled multicast, the amount of communication needed
is xed, no matter how many monitoring agents (possibly
tracking dierent aspects) are active at any given time. System
performance is then unaected by monitoring, and for
the same reason monitoring does not interfere with dynamic
team changes (addition or removal of agents) since no registration
or similar activity is needed. In the same way,
other type of approaches to monitoring [14], based on the
observation of agent communication, can take advantage of
channeled multicast.
Another class of applications where channeled multicast can
provide advantages is human-computer collaborative sys-
tems, such as QuickSet [7], an agent-based, wireless, collab-
orative, multimodal system that enables multiple users to
create and control military simulations by using speech and
gesture. Sensor-to-application communication (e.g., speech
or gesture recognition) could be protably based on channeled
multicast, improving scalability especially when an unknown
number of applications may be interested in overhearing
(or overseeing) what users are doing in order to collect
information or provide suggestions.
9. CONCLUSIONS AND FUTURE WORK
We presented the requirements for a group communication
infrastructure that we call channeled multicast. Its distinguishing
features include the association of channels with
themes of conversation, and addressing groups by means of
some common agent characteristics. Our implementation,
called LoudVoice, is based on IP multicast. Communication
on channeled multicast is assumed to be unreliable (which
is especially true of LoudVoice); however, we have shown
that it is possible to design agent-level protocols that are
robust in the event of failure as well as highly scalable, as
demonstrated by our implementation of the English Auction
Protocol.
Our investigation into group communication has been triggered
by the idea of overhearing and unplanned collaboration
as an innovative paradigm for multi-agent system
design. Observable communication channels change some
of the traditional requirements of multi-agent system (e.g.,
the availability of a matchmaker), and make it easier to
build systems by incremental development, i.e. by adding
new agents whenever additional functionality is required
or new technology is available. We presented, as a case
study, an experimental environment built on top of Loud-
Voice where typical knowledge management operations intermix
with group activities.
For the future, our work will follow two main lines. The
rst focuses on engineering issues: for instance, supporting
group privacy and better APIs for LoudVoice. The second
line focuses on background research issues for overhearing,
including mental attitude recognition, languages for unsolicited
suggestions and intention revision.
10.
ACKNOWLEDGEMENTS
We thank Mattia Merzi, for helping with the experimentation
of LoudVoice; Mark Carman, for his accurate review of
the paper; and Paolo Avesani, for suggesting the idea from
which this work originated.
11.
--R
CORBA Event Service Speci
IPv6 Related Speci
Java RMI Distributed Event Model Speci
JMS (Java Message Service) Speci
An open agent architecture.
Multimodal interaction for distributed applications.
Foundation for Intelligent Physical Agents.
Foundation for Intelligent Physical Agents.
Foundation for Intelligent Physical Agents.
Foundation for Intelligent Physical Agents.
Knowledge and common knowledge in a distributed environment.
Monitoring deployed agent teams.
Semantics of agent communication languages for group interaction.
Visualizing and debugging distributed multi-agent systems
Brokering and matchmaking for coordination of agent societies: A survey.
Tracking dynamic team activity.
--TR
Knowledge and common knowledge in a distributed environment
QuickSet
KQML as an agent communication language
Visualising and debugging distributed multi-agent systems
Monitoring deployed agent teams
Design and evaluation of a wide-area event notification service
The JEDI Event-Based Infrastructure and Its Application to the Development of the OPSS WFMS
Extending Multi-agent Cooperation by Overhearing
Semantics of Agent Communication Languages for Group Interaction
Ontological Overhearing
--CTR
Francois Legras , Catherine Tessier, LOTTO: group formation by overhearing in large teams, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, July 14-18, 2003, Melbourne, Australia
Stephen Cranefield, Reliable group communication and institutional action in a multi-agent trading scenario, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Gery Gutnik , Gal Kaminka, Towards a Formal Approach to Overhearing: Algorithms for Conversation Identification, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, p.78-85, July 19-23, 2004, New York, New York
Ben-Asher , Shlomo Berkovsky , Yaniv Eytani, Management of unspecified semi-structured data in multi-agent environment, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Adrian K. Agogino , Kagan Tumer, Handling Communication Restrictions and Team Formation in Congestion Games, Autonomous Agents and Multi-Agent Systems, v.13 n.1, p.97-115, July 2006
Oliviero Stock , Massimo Zancanaro , Paolo Busetta , Charles Callaway , Antonio Krger , Michael Kruppa , Tsvi Kuflik , Elena Not , Cesare Rocchi, Adaptive, intelligent presentation of information for the museum visitor in PEACH, User Modeling and User-Adapted Interaction, v.17 n.3, p.257-304, July 2007 | auction protocols;group communications;overhearing;broadcasting;agent communication languages;multicasting |
545223 | A large, fast instruction window for tolerating cache misses. | Instruction window size is an important design parameter for many modern processors. Large instruction windows offer the potential advantage of exposing large amounts of instruction level parallelism. Unfortunately naively scaling conventional window designs can significantly degrade clock cycle time, undermining the benefits of increased parallelism.This paper presents a new instruction window design targeted at achieving the latency tolerance of large windows with the clock cycle time of small windows. The key observation is that instructions dependent on a long latency operation (e.g., cache miss) cannot execute until that source operation completes. These instructions are moved out of the conventional, small, issue queue to a much larger waiting instruction buffer (WIB). When the long latency operation completes, the instructions are reinserted into the issue queue. In this paper, we focus specifically on load cache misses and their dependent instructions. Simulations reveal that, for an 8-way processor, a 2K-entry WIB with a 32-entry issue queue can achieve speedups of 20%, 84%, and 50% over a conventional 32-entry issue queue for a subset of the SPEC CINT2000, SPEC CFP2000, and Olden benchmarks, respectively. | Introduction
Many of today's microprocessors achieve high performance
by combining high clock rates with the ability to dynamically
process multiple instructions per cycle. Unfortu-
nately, these two important components of performance are
often at odds with one another. For example, small hardware
structures are usually required to achieve short clock
cycle times, while larger structures are often necessary to
identify and exploit instruction level parallelism (ILP).
A particularly important structure is the issue window,
which is examined each cycle to choose ready instructions
for execution. A larger window can often expose a larger
number of independent instructions that can execute out-of-
order. Unfortunately, the size of the issue window is limited
due to strict cycle time constraints. This conflict between
cycle time and dynamically exploiting parallelism is exacerbated
by long latency operations such as data cache misses
or even cross-chip communication [1, 22]. The challenge is
to develop microarchitectures that permit both short cycle
times and large instruction windows.
This paper introduces a new microarchitecture that reconciles
the competing goals of short cycle times and large
instruction windows. We observe that instructions dependent
on long latency operations cannot execute until the
long latency operation completes. This allows us to separate
instructions into those that will execute in the near future
and those that will execute in the distant future. The key
to our design is that the entire chain of instructions dependent
on a long latency operation is removed from the issue
window, placed in a waiting instruction buffer (WIB), and
reinserted after the long latency operation completes. Fur-
thermore, since all instructions in the dependence chain are
candidates for reinsertion into the issue window, we only
need to implement select logic rather than the full wakeup-
select required by a conventional issue window. Tracking
true dependencies (as done by the wakeup logic) is handled
by the issue window when the instructions are reinserted.
In this paper we focus on tolerating data cache misses,
however we believe our technique could be extended to
other operations where latency is difficult to determine at
compile time. Specifically, our goal is to explore the design
of a microarchitecture with a large enough "effective" window
to tolerate DRAM accesses. We leverage existing techniques
to provide a large register file [13, 34] and assume
that a large active list 1 is possible since it is not on the critical
path [4] and techniques exist for keeping the active list
1 By active list, we refer to the hardware unit that maintains the state of
in-flight instructions, often called the reorder buffer.
large while using relatively small hardware structures [31].
We explore several aspects of WIB design, including:
detecting instructions dependent on long latency operations,
inserting instructions into the WIB, banked vs. non-banked
organization, policies for selecting among eligible instructions
to reinsert into the issue window, and total capacity.
For an 8-way processor, we compare the committed instructions
per cycle (IPC) of a WIB-based design that has
a 32-entry issue window, a 2048-entry banked WIB, and
two-level register files (128 L1/2048 L2) to a conventional
32-entry issue window with single-level register files (128
registers). These simulations show WIB speedups over the
conventional design of 20% for SPEC CINT2000, 84% for
SPEC CFP2000, and 50% for Olden. These speedups are
a significant fraction of those achieved with a 2048-entry
conventional issue window (35%, 140%, and 103%), even
ignoring clock cycle time effects.
The remainder of this paper is organized as follows. Section
provides background and motivation for this work.
Our design is presented in Section 3 and we evalute its performance
in Section 4. Section 5 discusses related work and
Section 6 summarizes this work and presents future directions
Background and Motivation
2.1 Background
Superscalar processors maximize serial program performance
by issuing multiple instructions per cycle. One of
the most important aspects of these systems is identifying
independent instructions that can execute in parallel. To
identify and exploit instruction level parallelism (ILP), most
of today's processors employ dynamic scheduling, branch
prediction, and speculative execution. Dynamic scheduling
is an all hardware technique for identifying and issuing
multiple independent instructions in a single cycle [32].
The hardware looks ahead by fetching instructions into a
buffer-called a window-from which it selects instructions
to issue to the functional units. Instructions are issued
only when all their operands are available, and independent
instructions can execute out-of-order. Results of instructions
executed out-of-order are committed to the architectural
state in program order. In other words, although instructions
within the window execute out-of-order, the window
entries are managed as a FIFO where instructions enter
and depart in program order.
The above simplified design assumes that all instructions
in the window can be examined and selected for execution.
We note that it is possible to separate the FIFO management
(active list or reorder buffer) from the independent instruction
identification (issue queue) as described below. Re-
gardless, there is a conflict between increasing the window
(issue queue) size to expose more ILP and keeping clock
cycle time low by using small structures [1, 22]. Histor-
ically, smaller windows have dominated designs resulting
in higher clock rates. Unfortunately, a small window can
quickly fill up when there is a long latency operation.
In particular, consider a long latency cache miss serviced
from main memory. This latency can be so large, that by
the time the load reaches the head of the window, the data
still has not arrived from memory. Unfortunately, this significantly
degrades performance since the window does not
contain any executing instructions: instructions in the load's
dependence chain are stalled, and instructions independent
of the load are finished, waiting to commit in program order.
The only way to make progress is to bring new instructions
into the window. This can be accomplished by using a larger
window.
2.2 Limit Study
The remainder of this section evaluates the effect of window
size on program performance, ignoring clock cycle
time effects. The goal is to determine the potential performance
improvement that could be achieved by large instruction
windows. We begin with a description of our processor
model. This is followed by a short discussion of its performance
for various instruction window sizes.
2.2.1 Methodology
For this study, we use a modified version of SimpleScalar
(version 3.0b) [8] with the SPEC CPU2000 [17] and
Olden [11] benchmark suites. Our SPEC CPU2000 benchmarks
are pre-compiled binaries obtained from the SimpleScalar
developers [33] that were generated with compiler
flags as suggested at www.spec.org and the Olden binaries
were generated with the Alpha compiler (cc) using optimization
flag -O2. The SPEC benchmarks operate on their
reference data sets and for the subset of the Olden benchmarks
we use, the inputs are: em3d 20,000 nodes, arity 10;
mst 1024 nodes; perimeter 4Kx4K image; treeadd
levels. We omit several benchmarks either because the
data cache miss ratios are below 1% or their IPCs are
unreasonably low (health and ammp are both less than
for our base configuration.
Our processor design is loosely based on the Alpha
21264 microarchitecture [12, 14, 19]. We use the same
seven stage pipeline, including speculative load execution
and load-store wait prediction. We do not model the clustered
design of the 21264. Instead, we assume a single integer
issue queue that can issue up to 8 instructions per cycle
and a single floating point issue queue that can issue up to 4
instructions per cycle. Table 1 lists the various parameters
for our base machine. Note that both integer and floating
Active List 128, 128 Int Regs, 128 FP Regs
Load/Store Queue 64 Load, 64 Store
Issue Queue Floating Point
Issue Width 12 Floating Point)
Decode Width 8
Commit Width 8
Instruction Fetch Queue 8
Functional Units 8 integer ALUs (1-cycle),
multipliers (7-cycle),
4 FP adders (4-cycle),
multipliers (4-cycle),
dividers (nonpipelined, 12-
(nonpipelined, 24-cycle)
Branch Prediction Bimodal & two-level adaptive
combined, with speculative up-
date, 2-cycle penalty for direct
jumps missed in BTB, 9-cycle for
others
Store-Wait Table 2048 entries, bits cleared every
cycles
L1 Data Cache
Inst Cache
Unified Cache 256 KB, 4 Way
Memory Latency 250 Cycles
TLB 128-entry, 4-way associative,
4 KB page size, 30-cycle penalty
Table
1. Base Configuration
point register files are as large as the active list. For the remainder
of this paper we state a single value for the active
list/register file size, this value applies to both the integer
and floating point register files.
The simulator was modified to support speculative up-date
of branch history with history-based fixup and return-
address-stack repair with the pointer-and-data fixup mechanism
[26, 27]. We also modified the simulator to warm up
the instruction and data caches during an initial fast forward
phase. For the SPEC benchmarks we skip the first four hundred
million instructions, and then execute the next one hundred
million instructions with the detailed performance sim-
ulator. The Olden benchmarks execute for 400M instructions
or until completion. This approach is used throughout
this paper. We note that our results are qualitatively similar
when using a different instruction execution window [24].
2.2.2 Varying Window Size
We performed simulations varying the issue queue size,
from (the base) in powers of 2, up to 4096. For issue
queue sizes of 32, 64, and 128 we keep the active list fixed
at 128 entries. For the remaining configurations, the active
list, register files and issue queue are all equal size.
The load and store queues are always set to one half the
active list size, and are the only limit on the number of outstanding
requests unless otherwise stated. Figure 1 shows
the committed instructions per cycle (IPC) of various window
sizes normalized to the base 32-entry configuration (IPC_new/IPC_old) for the SPEC integer, floating point, and Olden benchmarks. Absolute IPC values for the base machine are provided in Section 4; the goal here is to examine the relative effects of larger instruction windows.
These simulations show there is an initial boost in the
IPC as window size increases, up to 2K, for all three sets of
benchmarks. With the exception of mst, the effect plateaus
beyond 2K entries, with IPC increasing only slightly. This
matches our intuition, since during a 250-cycle memory latency 2000 instructions can be fetched in our 8-way processor. Larger instruction windows beyond 2K provide only
minimal benefits. Many floating point benchmarks achieve
speedups over 2, with art achieving a speedup over 5 for
the 2K window. This speedup is because the larger window
can unroll loops many times, allowing overlap of many
cache misses. A similar phenomenon occurs for mst.
The above results motivate the desire to create large instruction
windows. The challenge for architects is to accomplish
this without significant impact on clock cycle
time. The next section presents our proposed solution.
3 A Large Window Design
This section presents our technique for providing a large
instruction window while maintaining the advantages of
small structures on the critical path. We begin with an
overview to convey the intuition behind the design. This
is followed by a detailed description of our particular de-
sign. We conclude this section with a discussion of various
design issues and alternative implementations.
3.1 Overview
In our base microarchitecture, only those instructions in
the issue queue are examined for potential execution. The
active list has a larger number of entries than the issue queue
(128 vs. 32), allowing completed but not yet committed
instructions to release their issue queue entries. Since the
active list is not on the critical path [4], we assume that
we can increase its size without affecting clock cycle time.
Nonetheless, in the face of long latency operations, the issue
queue could fill with instructions waiting for their operands
and stall further execution.
We make the observation that instructions dependent on
long latency operations cannot execute until the long latency
operation completes and thus do not need to be examined by the wakeup-select logic on the critical path.

Figure 1. Large Window Performance: speedups for (a) SPEC 2000 Integer, (b) SPEC 2000 Floating Point, and (c) Olden benchmarks.

We note
this same observation is exploited by Palacharla et al. [22] in their technique of examining only the heads of the issue
queues. However, the goal of our design is to remove these
waiting instructions from the issue queue and place them in
a waiting instruction buffer (WIB). When the long latency
operation completes, the instructions are moved back into
the issue queue for execution. In this design, instructions
remain in the issue queue for a very short time. They either
execute properly or they are removed due to dependence on
a long latency operation.
For this paper we focus specifically on instructions in
the dependence chain of load cache misses. However, we
believe our technique could be extended to other types of
long latency operations. Figure 2 shows the pipeline for
a WIB-based microarchitecture, based on the 21264 with
two-level register files (described later).
The fetch stage includes the I-cache, branch prediction
and the instruction fetch queue. The slot stage directs instructions
to the integer or floating point pipeline based on
their type. The instructions then go through register rename
before entering the issue queue. Instructions are selected
from the issue queue either to proceed with the register read,
execution and memory/writeback stages or to move into the
WIB during the register read stage. Once in the WIB, instructions
wait for the specific cache miss they depend on to
complete. When this occurs, the instructions are reinserted
into the issue queue and repeat the wakeup-select process,
possibly moving back into the WIB if they are dependent on
another cache miss. The remainder of this section provides
details on WIB operation and organization.
3.2 Detecting Dependent Instructions
An important component of our design is the ability to
identify all instructions in the dependence chain of a load
cache miss. To achieve this we leverage the existing issue
queue wakeup-select logic. Under normal execution, the
wakeup-select logic determines if an instruction is ready for
execution (i.e., has all its operands available) and selects a
subset of the ready instructions according to the issue constraints
(e.g., structural hazards or age of instructions).
To leverage this logic we add an additional signal, called the wait bit, that indicates the particular source operand (i.e., input register value) is "pretend ready". This
signal is very similar to the ready bit used to synchronize
true dependencies. It differs only in that it is used to indicate
the particular source operand will not be available for an extended
period of time. An instruction is considered pretend
ready if one or more of its operands are pretend ready and
all the other operands are truly ready. Pretend ready instructions
participate in the normal issue request as if they were
truly ready. When it is issued, instead of being sent to the
functional unit, the pretend ready instruction is placed in
the WIB and its issue queue entry is subsequently freed by
the issue logic as though it actually executed. We note that
a potential optimization to our scheme would consider an
instruction pretend ready as soon as one of its operands is
pretend ready. This would allow instructions to be moved to
the WIB earlier, thus further reducing pressure on the issue
queue resources.
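As a concrete illustration of this rule, the following C sketch (our own rendering, with assumed structure and field names, not the simulator's code) evaluates the pretend-ready condition for a two-source instruction:

```c
#include <stdbool.h>

/* Hypothetical per-operand state mirroring the ready and wait bits in the
 * text; the struct and field names are ours, not the paper's. */
typedef struct {
    bool ready;  /* value is truly available                   */
    bool wait;   /* value depends on an outstanding cache miss */
} operand_t;

typedef struct {
    operand_t src[2];   /* source operands of the instruction */
} insn_t;

/* An instruction is pretend ready when at least one source operand is
 * marked wait and every remaining operand is either truly ready or also
 * marked wait; it then competes for issue like a ready instruction, but
 * at issue time it is steered to the WIB and its issue queue entry freed. */
static bool pretend_ready(const insn_t *in)
{
    bool any_wait = false;
    for (int i = 0; i < 2; i++) {
        if (in->src[i].wait)
            any_wait = true;
        else if (!in->src[i].ready)
            return false;  /* still waiting on a short-latency producer */
    }
    return any_wait;
}
```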
In our implementation, the wait bit of a physical register
is initially set by a load cache miss. Dependent instructions
observe this wait bit, are removed from the issue queue, and
set the wait bit of their destination registers. This causes
their dependent instructions to be removed from the issue
queue and set the corresponding wait bits of their result
registers.

Figure 2. WIB-based Microarchitecture: a seven-stage pipeline (Fetch, Slot, Rename, Issue, Register Read, Execute, Memory) with separate integer and floating point issue queues, two-level integer and floating point register files, 32 KB 4-way instruction and data caches, and the Waiting Instruction Buffer.

Therefore, all instructions directly or indirectly
dependent on the load are identified and removed from the
issue queue. The load miss signal is already generated in
the Alpha 21264 since load instructions are speculatively
assumed to hit in the cache allowing the load and dependent
instructions to execute in consecutive cycles. In the case of
a cache miss in the Alpha, the dependent instructions are
retained in the issue queue until the load completes. In our
case, these instructions move to the WIB.
An instruction might enter the issue queue after the
instructions producing its operands have exited the issue
queue. The producer instructions could either have executed properly, in which case the source operand is available, or they could be in the WIB, in which case this instruction should eventually be moved to the WIB as well. Therefore, wait bits must be available wherever conventional ready bits are available, in this case during register rename. Note that it may be possible to steer instructions to the WIB after the rename stage and before the issue stage; we plan to investigate this as future work. Our current design does not implement this; instead, each instruction enters the issue queue and then is moved to
the WIB if necessary.
3.3 The Waiting Instruction Buffer
The WIB contains all instructions directly or indirectly
dependent on a load cache miss. The WIB must be designed
to satisfy several important criteria. First, it must contain
and differentiate between the dependent instructions of individual
outstanding loads. Second, it must allow individual
instructions to be dependent on multiple outstanding loads.
Finally, it must permit fast "squashing" when a branch mispredict
or exception occurs.
To satisfy these requirements, we designed the WIB to
operate in conjunction with the active list. Every instruction
in the active list is allocated an entry in the WIB. Although
this may allocate entries in the WIB that are never
dependent on a load miss, it simplifies squashing on mispre-
dicts. Whenever active list entries are added or removed, the
corresponding operations are performed on the WIB. This
means WIB entries are allocated in program order.
To link WIB entries to load misses we use a bit-vector
to indicate which WIB locations are dependent on a specific
load. When an instruction is moved to the WIB, the
appropriate bit is set. The bit-vectors are arranged in a two
dimensional array. Each column is the bit-vector for a load
cache miss. Bit-vectors are allocated when a load miss is
detected, therefore for each outstanding load miss we store a
pointer to its corresponding bit-vector. Note that the number
of bit-vectors is bounded by the number of outstanding load
misses. However, it is possible to have fewer bit-vectors
than outstanding misses.
To link instructions with a specific load, we augment the
operand wait bits with an index into the bit-vector table corresponding
to the load cache miss this instruction is dependent
on. In the case where an instruction is dependent on
multiple outstanding loads, we use a simple fixed ordering
policy to examine the source operand wait bits and store
the instruction in the WIB with the first outstanding load
encountered. This requires propagating the bit-vector index
with the wait bits as described above. It is possible
to store the bit-vector index in the physical register, since
that space is available. However, this requires instructions
that are moved into the WIB to consume register ports. To
reduce register pressure we assume the bit-vector index is
stored in a separate structure with the wait bits.
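To make the two-dimensional bit-vector organization concrete, the sketch below (ours; the sizes and names are assumptions based on the 2K-entry design discussed next) allocates one column per outstanding load miss and sets the bit of a WIB slot when a dependent instruction is moved into the WIB:

```c
#include <stdint.h>
#include <string.h>

#define WIB_ENTRIES 2048                 /* one slot per active list entry */
#define MAX_MISSES  64                   /* bit-vectors provided           */
#define VEC_WORDS   (WIB_ENTRIES / 64)

/* Column v is the bit-vector of one outstanding load miss; bit i set means
 * WIB slot i is directly or indirectly dependent on that miss. */
static uint64_t bitvec[MAX_MISSES][VEC_WORDS];
static int      free_list[MAX_MISSES];
static int      num_free;

static void bitvec_init(void)
{
    num_free = 0;
    for (int v = 0; v < MAX_MISSES; v++)
        free_list[num_free++] = v;
}

/* On a load miss: grab a free column and record its index with the load
 * (e.g., in its load queue entry). Returns -1 if no column is available. */
static int bitvec_alloc(void)
{
    if (num_free == 0)
        return -1;
    int v = free_list[--num_free];
    memset(bitvec[v], 0, sizeof bitvec[v]);
    return v;
}

/* When a pretend-ready instruction moves to the WIB: mark its slot in the
 * column of the first outstanding load named by its operand wait bits. */
static void bitvec_link(int vec, int wib_slot)
{
    bitvec[vec][wib_slot / 64] |= (uint64_t)1 << (wib_slot % 64);
}

/* When the load completes and its instructions have been reinserted,
 * return the column to the free list. */
static void bitvec_release(int vec)
{
    free_list[num_free++] = vec;
}
```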
Instructions in the WIB are reinserted in the issue queue
when the corresponding load miss is resolved. Reinsertion
shares the same bandwidth (in our case, 8 instructions
per cycle) with those newly arrived instructions that are decoded
and dispatched to the issue queue. The dispatch logic
is modified to give priority to the instructions reinserted
from the WIB to ensure forward progress.
Note that some of the instructions reinserted in the issue
queue by the completion of one load may be dependent
on another outstanding load. The issue queue logic detects
that one of the instruction's remaining operands is unavail-
able, due to a load miss, in the same way it detected the
first load dependence. The instruction then sets the appropriate
bit in the new load's bit-vector, and is removed from
the issue queue. This is a fundamental difference between
the WIB and simply scaling the issue queue to larger en-
tries. The larger queue issues instructions only once, when
all their operands are available. In contrast, our technique
could move an instruction between the issue queue and WIB
many times. In the worst case, all active instructions are dependent
on a single outstanding load. This requires each
bit-vector to cover the entire active list.
The number of entries in the WIB is determined by the
size of the active list. The analysis in Section 2 indicates
that 2048 entries is a good window size to achieve significant
speedups. Therefore, initially we assume a 2K-entry
active list and 1K-entry load and store queues. Assuming
each WIB entry is 8 bytes then the total WIB capacity is
16KB. The bit-vectors can also consume a great deal of
storage, but it is limited by the number of outstanding requests
supported. Section 4 explores the impact of limiting
the number of bit-vectors below the load queue size.
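The storage figures quoted here and in Section 4 follow directly from these parameters; the short calculation below (assuming 8-byte WIB entries and one bit per WIB entry per bit-vector) reproduces them:

```c
#include <stdio.h>

int main(void)
{
    int wib_entries = 2048;  /* equals the active list size       */
    int entry_bytes = 8;     /* assumed size of one WIB entry     */
    int num_vectors = 64;    /* outstanding load misses supported */

    int wib_kb = wib_entries * entry_bytes / 1024;        /* 16 KB            */
    int vec_kb = num_vectors * (wib_entries / 8) / 1024;  /* 64 * 256 B = 16 KB */

    printf("WIB entries: %d KB, bit-vectors: %d KB\n", wib_kb, vec_kb);
    /* A 1024-entry WIB with 64 bit-vectors gives 8 KB + 8 KB instead, which
     * matches the Section 4.3 configuration that totals 32 KB together with
     * two 8 KB 1024-entry register files. */
    return 0;
}
```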
3.3.1 WIB Organization
We assume a banked WIB organization and that one instruction
can be extracted from each bank every two cy-
cles. These two cycles include determining the appropriate
instruction and reading the appropriate WIB entry. There
is a fixed instruction width between the WIB and the issue
queue. We set the number of banks equal to twice this
width. Therefore, we can sustain reinsertion at full bandwidth
by reading instructions from the WIB's even banks in
one cycle and from odd banks in the next cycle, if enough
instructions are eligible in each set of banks.
Recall, WIB entries are allocated in program order in
conjunction with active list entries. We perform this allocation
using round-robin across the banks, interleaving at
the individual instruction granularity. Therefore, entries in
each bank are also allocated and released in program or-
der, and we can partition each load's bit-vector according
to which bank the bits map to. In our case, a 2K entry
WIB with a dispatch width to the issue queue of 8 would
have banks with 128 entries each. Each bank also stores
its local head and tail pointers to reflect program order of
instructions within the bank. Figure 3 shows the internal
organization of the WIB.
During a read access each bank in a set (even or odd)
operates independently to select an instruction to reinsert
to the issue queue by examining the appropriate 128 bits
from each completed load. For each bank we create a single
bit-vector that is the logical OR of the bit-vectors for
all completed loads. The resulting bit-vector is examined to
select the oldest active instruction in program order. There
are many possible policies for selecting instructions. We
examine a few simple policies later in this paper, but leave
investigation of more sophisticated policies (e.g., data flow
graph order or critical path [15]) as future work. Regardless
of selection policy, the result is that one bit out of the
128 is set, which can then directly enable the output of the
corresponding WIB entry without the need to encode then
decode the WIB index. The process is repeated with an updated
bit-vector that clears the WIB entry for the access just
completed and may include new eligible instructions if another
load miss completed during the access.
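The per-bank read access just described can be summarized with the following sketch (ours; 128-entry banks and a simple head-relative scan are assumed): the bank's slices of all completed loads' bit-vectors are ORed, and the oldest set bit in program order, starting at the bank's head pointer, selects the entry to read.

```c
#include <stdint.h>

#define BANK_ENTRIES 128               /* 2K-entry WIB split across 16 banks */
#define BANK_WORDS   (BANK_ENTRIES / 64)

/* OR together this bank's slice of every completed load's bit-vector to
 * form the candidate vector for the current access. */
static void merge_candidates(uint64_t cand[BANK_WORDS],
                             const uint64_t slices[][BANK_WORDS],
                             int num_completed)
{
    for (int w = 0; w < BANK_WORDS; w++) {
        cand[w] = 0;
        for (int l = 0; l < num_completed; l++)
            cand[w] |= slices[l][w];
    }
}

/* Select the oldest eligible entry in program order by scanning from the
 * bank's local head pointer; returns a bank-local index, or -1 if the
 * candidate vector is empty. */
static int select_oldest(const uint64_t cand[BANK_WORDS], int head)
{
    for (int off = 0; off < BANK_ENTRIES; off++) {
        int i = (head + off) % BANK_ENTRIES;
        if (cand[i / 64] & ((uint64_t)1 << (i % 64)))
            return i;
    }
    return -1;
}
```

In hardware the single selected bit can directly enable the corresponding WIB entry's output, as noted above; the scan here only models the oldest-first selection policy.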
The above policies are similar to the select policies implemented
by the issue queue logic. This highlights an important
difference between the WIB and a conventional issue
queue. A conventional issue queue requires wakeup
logic that broadcasts a register specifier to each entry.
The WIB eliminates this broadcast by using the completed
loads' bit-vectors to establish the candidate instructions for
selection. The issue queue requires the register specifier
broadcast to maintain true dependencies. In contrast, the
WIB-based architecture leverages the much smaller issue
queue for this task and the WIB can select instructions for
reinsertion in any order.
It is possible that there are not enough issue queue entries
available to consume all instructions extracted from
the WIB. In this case, one or more banks will stall for this
access and wait for the next access (two cycles later) to attempt
reinserting its instruction. To avoid potential livelock,
on each access we change the starting bank for allocating
the available issue queue slots. Furthermore, a bank remains
at the highest priority if it has an instruction to reinsert but
was not able to. A bank is assigned the lowest priority if it
inserts an instruction or does not have an instruction to rein-
sert. Livelock could occur in a fixed priority scheme since
the instructions in the highest priority bank could be dependent
on the instructions in the lower priority bank. This
could produce a continuous stream of instructions moving
from the WIB to the issue queue then back to the WIB since
their producing instructions are not yet complete. The producing
instructions will never complete since they are in the
lower priority bank. Although this scenario seems unlikely
it did occur in some of our benchmarks and thus we use
round-robin priority.

Figure 3. WIB Organization: even and odd bank sets, each bank with its slice of the bit-vectors, local head and tail pointers, and a priority-ordered connection to the issue queue.
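A minimal sketch of the bank priority rule used to avoid livelock (our own rendering of the policy described above; the data structures are assumptions): banks that had an instruction but received no issue queue slot keep their place at the front of the order, while banks that inserted an instruction, or had none to offer, rotate behind them.

```c
#include <stdbool.h>

#define NUM_BANKS 8   /* banks accessed together (the even or the odd set) */

/* order[0] is the highest-priority bank for the next access. */
static int order[NUM_BANKS] = {0, 1, 2, 3, 4, 5, 6, 7};

static void update_priority(const bool had_candidate[NUM_BANKS],
                            const bool inserted[NUM_BANKS])
{
    int next[NUM_BANKS];
    int n = 0;

    /* Stalled banks (candidate but no slot) stay in front, in order. */
    for (int i = 0; i < NUM_BANKS; i++) {
        int b = order[i];
        if (had_candidate[b] && !inserted[b])
            next[n++] = b;
    }
    /* Everyone else is demoted behind them. */
    for (int i = 0; i < NUM_BANKS; i++) {
        int b = order[i];
        if (!had_candidate[b] || inserted[b])
            next[n++] = b;
    }
    for (int i = 0; i < NUM_BANKS; i++)
        order[i] = next[i];
}
```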
3.3.2 Squashing WIB Entries
Squashing instructions requires clearing the appropriate
bits in each bit-vector and resetting each bank's local tail
pointer. The two-dimensional bit-vector organization simplifies
the bit-vector clear operation since it is applied to the
same bits for every bit-vector. Recall, each column corresponds
to an outstanding load miss, thus we can clear the
bits in the rows associated with the squashed instructions.
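A sketch of the squash operation (ours, reusing the bit-vector layout assumed in the earlier sketch; wrap-around of the circular active list is ignored for brevity): the same slot range is cleared in every column, after which each bank's local tail pointer is rolled back.

```c
#include <stdint.h>

#define WIB_ENTRIES 2048
#define MAX_MISSES  64
#define VEC_WORDS   (WIB_ENTRIES / 64)

/* Same layout as the earlier bit-vector sketch. */
static uint64_t bitvec[MAX_MISSES][VEC_WORDS];

/* Clear WIB slots [from, to] (inclusive, in allocation order) in every
 * outstanding load's column when those instructions are squashed. */
static void squash_range(int from, int to)
{
    for (int v = 0; v < MAX_MISSES; v++)
        for (int i = from; i <= to; i++)
            bitvec[v][i / 64] &= ~((uint64_t)1 << (i % 64));
}
```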
3.4 Register File Considerations
To support many in-flight instructions, the number of rename
registers must scale proportionally. There are several
alternative designs for large register files, including multi-cycle
access, multi-level [13, 34], multiple banks [5, 13], or
queue-based designs [6]. In this paper, we use a two-level
register file [13, 34] that operates on principles similar to
the cache hierarchy. Simulations of a multi-banked register
file show similar results. Further details on the register file
designs and performance are available elsewhere [20].
3.5 Alternative WIB Designs
The above WIB organization is one of several alterna-
tives. One alternative we considered is a large non-banked
multicycle WIB. Although it may be possible to pipeline the
WIB access, it would not produce a fully pipelined access
and our simulations (see Section 4) indicate pipelining may
not be necessary.
Another alternative we considered is a pool-of-blocks
structure for implementing the WIB. In this organization,
when a load misses in the cache it obtains a free block to
buffer dependent instructions. A pointer to this block is
stored with the load in the load queue (LQ) and is used to
deposit dependent instructions in the WIB. When the load
completes, all the instructions in the block are reinserted
into the issue queue. Each block contains a fixed number of
instruction slots and each slot holds information equivalent
to issue queue entries.
An important difference in this approach compared to
the technique we use is that instructions are stored in dependence
chain order, and blocks may need to be linked
together to handle loads with long dependence chains. This
complicates squashing since there is no program order associated
with the WIB entries. Although we could maintain
information on program order, the list management of each
load's dependence chain becomes too complex and time
consuming during a squash. Although the bit-vector approach
requires more space, it simplifies this management.
The pool-of-blocks approach has the potential of deadlock
if there are not enough WIB entries. We are continuing to
investigate techniques to reduce the list management overhead
and handle deadlock.
3.6 Summary
The WIB architecture effectively enlarges the instruction
window by removing instructions dependent on load cache
misses from the issue queue, and retaining them in the WIB
while the misses are serviced. In achieving this, we leverage
the existing processor issue logic without affecting the processor
cycle time and circuit complexity. In the WIB archi-
tecture, instructions stay in the issue queue only for a short
period of time, therefore new instructions can be brought
into the instruction window much more rapidly than in the
conventional architectures. The fundamental difference between
a WIB design and a design that simply scales up the
issue queue is that scaling up the issue queue significantly
complicates the wakeup logic, which in turn affects the processor
cycle time [1, 22]. However, a WIB requires a very
simple form of wakeup logic as all the instructions in the
dependence chain of a load miss are awakened when the
miss is resolved. There is no need to broadcast and have all
the instructions monitor the result buses.
4 Evaluation
In this section we evaluate the WIB architecture. We
begin by presenting the overall performance of our WIB
design compared to a conventional architecture. Next, we
explore the impact of various design choices on WIB per-
formance. This includes limiting the number of available
bit-vectors, limited WIB capacity, policies for selecting instructions
for reinsertion into the issue queue, and multicycle
non-banked WIB.
These simulations reveal that WIB-based architectures
can increase performance, in terms of IPC, for our set of
benchmarks by an average of 20%, 84%, and 50% for SPEC
INT, SPEC FP, and Olden, respectively. We also find that
limiting the number of outstanding loads to 64 produces
similar improvements for the SPEC INT and Olden bench-
marks, but reduces the average improvement for the SPEC
FP to 45%. A WIB capacity as low as 256 entries with
a maximum of 64 outstanding loads still produces average
speedups of 9%, 26%, and 14% for the respective benchmark
sets.
4.1 Overall Performance
We begin by presenting the overall performance improvement
in IPC relative to a processor with a 32-entry
issue queue and single cycle access to 128 registers, hence
a 128-entry active list (32-IQ/128). Figure 4 shows the
speedups (IPC_new/IPC_old) for various microarchitec-
tures. Although we present results for an 8-issue processor,
the overall results are qualitatively similar for a 4-issue pro-
cessor. The WIB bar corresponds to a 32-entry issue queue
with our banked WIB organization, a 2K-entry active list,
and 2K registers, using a two-level register file with 128
registers in the first level, 4 read ports and 4 write ports to
the pipelined second level that has a 4-cycle latency. Assuming
the 32-entry issue queue and 128 level one registers
set the clock cycle time, the WIB-based design is approximately
clock cycle equivalent to the base architecture.
For these experiments the number of outstanding loads (thus
bit-vectors) is not limited; we explore this parameter below.
Table 2 shows the absolute IPC values for the base configuration and our banked WIB design, along with the branch direction prediction rates, L1 data cache miss rates, and L2 unified cache local miss rates for the base configuration. For comparison we also include two scaled versions of a conventional microarchitecture. Both configurations use a 2K-entry active list and single cycle access to 2K registers. One retains the 32-entry issue queue (32-IQ/2K) while the other scales the issue queue to 2K entries (2K-IQ/2K). These configurations help isolate the issue queue from the active list and provide an approximate upper bound on our expected performance.

Table 2. Benchmark Performance Statistics
  Benchmark    Base IPC   Branch Dir Pred   DL1 Miss Ratio   UL2 Local Miss Ratio   WIB IPC
  gzip         2.25       0.91              0.02             0.04                   2.25
  parser       0.83       0.95              0.04             0.22                   0.95
  perlbmk      --         --                --               --                     --
  vortex       --         --                --               --                     --
  applu        4.17       0.98              0.10             0.26                   4.28
  art          0.42       0.96              0.35             0.73                   1.64
  galgel       1.92       0.98              0.07             0.26                   3.97
  mgrid        2.58       0.97              0.06             0.42                   2.57
  wupwise      3.38       1.00              0.03             0.25                   3.99
  em3d         2.28       0.99              0.02             0.16                   2.27
  mst          0.96       1.00              0.07             0.49                   2.51
  perimeter    1.00       0.93              0.04             0.38                   1.16
  treeadd      1.05       0.95              0.03             0.33                   1.28
From the results shown in Figure 4, we make the following
observations. First, the WIB design produces speedups
over 10% for 12 of the benchmarks. The average
speedup is 20%, 84%, and 50% for SPEC INT, SPEC
FP, and Olden, respectively. The harmonic mean of IPCs
(shown in Table 2) increases from 1.0 to 1.24 for SPEC INT,
from 1.42 to 3.02 for SPEC FP, and from 1.17 to 1.61 for
Olden.
For most programs with large speedups from the large
issue queue, the WIB design is able to capture a significant
fraction of the available speedup. However, for a few
programs the 2K issue queue produces large speedups when
the WIB does not. mgrid is the most striking example
where the WIB does not produce any speedup while the 2K
issue queue yields a speedup of over two. This phenomenon
is a result of the WIB recycling instructions through the issue
queue. This consumes issue bandwidth that the 2K issue
queue uses only for instructions ready to execute. As evidence
of this we track the number of times an instruction
is inserted into the WIB. In the banked implementation the
average number of times an instruction is inserted into the
WIB is four, with a maximum of 280.

Figure 4. WIB Performance: speedups over the base 32-IQ/128 configuration for (a) SPEC 2000 Integer, (b) SPEC 2000 Floating Point, and (c) Olden benchmarks.

Investigation of other insertion policies (see below) reduces these values to an average
insertion count of one and a maximum of 9, producing
a speedup of 17%.
We also note that for several benchmarks just increasing
the active list produces noticeable speedups, in some cases even outperforming the WIB. This indicates the issue queue is not the bottleneck for these benchmarks. However, overall
the WIB significantly outperforms an increased active
list.
Due to the size of the WIB and larger register file, we
also evaluated an alternative use of that space by doubling
the data cache size in the base configuration to 64KB. Simulation
results reveal less than 2% improvements in performance
for all benchmarks, except vortex, which shows a 9% improvement over the 32KB data cache, indicating that the WIB may be a better use of this space. We explore this tradeoff further later in this section.

Figure 5. Performance of Limited Bit-Vectors: average speedups for the SPEC Integer, SPEC FP, and Olden benchmark suites with 16, 32, 64, and 1024 bit-vectors.
We also performed two sensitivity studies by reducing
the memory latency from 250 cycles to 100 cycles and
by increasing the unified L2 cache to 1MB. The results
match our expectations. The shorter memory latency reduces
WIB speedups to averages of 5%, 30%, and 17% for
the SPEC INT, SPEC FP, and Olden benchmarks, respec-
tively. The larger L2 cache has a smaller impact on the
speedups achieved with a WIB. The average speedups were
5%, 61%, and 38% for the SPEC INT, SPEC FP, and Olden
benchmarks, respectively. The larger cache has the most
impact on the integer benchmarks, which show a dramatically
reduced local L2 miss ratio (from an average of 22%
to 6%). Caches exploit locality in the program's reference
stream and can sometimes be sufficiently large to capture
the program's entire working set. In contrast, the WIB can
expose parallelism for tolerating latency in programs with
very large working sets or that lack locality.
For the remainder of this paper we present only the average
results for each benchmark suite. Detailed results for
each benchmark are available elsewhere [20].
4.2 Limited Bit-Vectors
The number of bit-vectors is important since each bit-vector
must map the entire WIB and the area required can
become excessive. To explore the effect of limited bit-vectors
(outstanding loads), we simulated a 2K-entry WIB
with 16, 32, and 64 bit-vectors. Figure 5 shows the average
speedups over the base machine, including the 1024 bit-vector
configuration from above. These results show that
even with only 16 bit-vectors the WIB can achieve average
speedups of 16% for SPEC INT, 26% for SPEC FP, and
38% for the Olden benchmarks. The SPEC FP programs
(particularly art) are affected the most by the limited bit-vectors
since they benefit from memory level parallelism.
With 64 bit-vectors (16KB) the WIB can achieve speedups
of 19%, 45%, and 50% for the three sets of benchmarks,
respectively.
Figure 6. WIB Capacity Effects: average speedups for the SPEC Integer, SPEC FP, and Olden suites for WIB sizes from 128 to 2048 entries with 64 bit-vectors.
4.3 Limited WIB Capacity
Reducing WIB area by limiting the number of bit-vectors
is certainly a useful optimization. However, further decreases
in required area can be achieved by using a smaller
capacity WIB. This section explores the performance impact
of reducing the capacity of the WIB, active list and
register file.
Figure 6 shows the average speedups for WIB sizes ranging
from 128 to 2048 with bit-vectors limited to 64. These
results show that the 1024-entry WIB can achieve average
speedups of 20% for the SPEC INT, 44% for SPEC FP, and
44% for Olden. This configuration requires only 32KB extra
space (8KB for WIB entries, 8KB for bit-vectors, and
8KB for each 1024-entry register file). This is roughly area
equivalent to doubling the cache size to 64KB. As stated
above, the 64KB L1 data cache did not produce noticeable
speedups for our benchmarks, and the WIB is a better use
of the area.
4.4 WIB to Issue Queue Instruction Selection
Our WIB design implements a specific policy for selecting
from eligible instructions to reinsert into the issue
queue. The current policy chooses instructions from each
bank in program order. Since the banks operate independently
and on alternate cycles, they do not extract instructions
in true program order. To evaluate the impact of instruction
selection policy we use an idealized WIB that has
single cycle access time to the entire structure. Within this
design we evaluate the following instruction selection poli-
cies: (1) the current banked scheme, (2) full program order
from among eligible instructions, (3) round robin across
completed loads with each load's instructions in program
order, and (4) all instructions from the oldest completed
load.
Most programs show very little change in performance
across selection policies. mgrid is the only one to show
significant improvements. As mentioned above, mgrid
shows speedups over the banked WIB of 17%, 17%, and 13% for each of the three new policies, respectively.

Figure 7. Non-Banked WIB Performance: average speedups over the base architecture for the banked, 4-cycle non-banked, and 6-cycle non-banked organizations.

These
speedups are due to better scheduling of the actual dependence
graph. However, in some cases the schedule can be
worse. Three programs show slowdowns compared to the
banked WIB for the oldest load policy (4): bzip 11%,
parser 15%, and facerec 5%.
4.5 Non-Banked Multicycle WIB Access
We now explore the benefits of the banked organization
versus a multicycle non-banked WIB organization. Figure 7
shows the average speedups for the banked and non-banked
organizations over the base architecture. Except the different
WIB access latencies, the 4-cycle and 6-cycle bars
both assume a non-banked WIB with instruction extraction
in full program order. These results show that the longer
WIB access delay produces only slight reductions in performance
compared to the banked scheme. This indicates that
we may be able to implement more sophisticated selection
policies and that pipelining WIB access is not necessary.
5 Related Work
Our limit study is similar to that performed by Skadron
et al. [28]. Their results show that branch mispredictions
limit the benefits of larger instruction windows, that better
branch prediction and better instruction cache behavior have
synergistic effects, and that the benefits of larger instruction
windows and larger data caches trade off and have overlapping
effects. Their simulation assumes a very large 8MB L2
cache and models a register update unit (RUU) [29], which
is a unified active list, issue queue, and rename register file.
In their study, only instruction window sizes up to 256 are
examined.
There has been extensive research on architecture designs
for supporting large instruction windows. In the multiscalar
[30] and trace processors [23], one large centralized
instruction window is distributed into smaller windows
among multiple parallel processing elements. Dynamic
multithreading processors [2] deal with the complexity of a
large window by employing a hierarchy of instruction win-
dows. Clustering provides another approach, where a collection
of small windows with associated functional units
is used to approximate a wider and deeper instruction window
[22].
Recent research [7, 18] investigates issue logic designs
that attempt to support large instruction windows without impeding improvements in clock rates. Michaud and Seznec exploit the observation that instructions dependent
on long latency operations unnecessarily occupy issue
queue space for a long time, and address this problem
by prescheduling instructions based on data dependencies.
Other dependence-based issue queue designs are studied in
[9, 10, 22]. Zilles et al. [35] and Balasubramonian et al. [4]
attack the problem caused by long latency operations by utilizing
a future thread that can use a portion of the issue
queue slots and physical registers to conduct precomputa-
tion. As power consumption has become an important consideration
in processor design, researchers have also studied
low power instruction window design [3, 16].
6 Conclusion
Two important components of overall execution time are
the clock cycle time and the number of instructions committed
per cycle (IPC). High clock rates can be achieved
by using a small instruction window, but this can limit IPC
by reducing the ability to identify independent instructions.
This tension between large instruction windows and short
clock cycle times is an important aspect in modern processor
design.
This paper presents a new technique for achieving the
latency tolerance of large windows while maintaining the
high clock rates of small window designs. We accomplish
this by removing instructions from the conventional issue
queue if they are directly or indirectly dependent on a long
latency operation. These instructions are placed into a waiting
instruction buffer (WIB) and reinserted into the issue
queue for execution when the long latency operation com-
pletes. By moving these instructions out of the critical path,
their previously occupied issue queue entries can be further
utilized by the processor to look deep into the program for
more ILP. An important difference between the WIB and
scaled-up conventional issue queues is that the WIB implements
a simplified form of wakeup-select. This is achieved
by allowing all instructions in the dependence chain to be
considered for reinsertion into the issue window. Compared
to the full wakeup-select in conventional issue queues, the
WIB only requires select logic for instruction reinsertion.
Simulations of an 8-way processor with a 32-entry issue
queue reveal that adding a 2K-entry WIB can produce
speedups of 20%, 84%, and 50% for a subset of the SPEC
CINT2000, SPEC CFP2000, and Olden benchmarks, re-
spectively. We also explore several WIB design parameters
and show that allocating chip area for the WIB produces
significantly higher speedups than using the same area to
increase the level one data cache capacity from 32KB to
64KB.
Our future work includes investigating the potential for
executing the instructions from the WIB on a separate execution
core, either a conventional core or perhaps a grid
processor [25]. The policy space for selecting instructions
is an area of current research. Finally, register file design
and management (e.g., virtual-physical, multi-banked,
multi-cycle, prefetching in a two-level organization) require
further investigation.
Acknowledgements
This work was supported in part by NSF CAREER
Awards MIP-97-02547 and CCR-0092832, NSF Grants
CDA-97-2637 and EIA-99-72879, Duke University, and
donations from Intel, IBM, Compaq, Microsoft, and Erics-
son. We thank the anonymous reviewers for comments and
suggestions on this work.
--R
Clock Rate Versus IPC: The End of the Road for Conventional Microarchitectures.
A Dynamic Multithreading Processor.
Power and Energy Reduction via Pipeline Balancing.
Dynamically Allocating Processor Resources Between Nearby and Distant ILP.
Reducing the Complexity of the Register File in Dynamic Superscalar Processors.
Scalable Register Renaming via the Quack Register File.
Evaluating Future Microprocessors-the SimpleScalar Tool Set
Reducing the Complexity of the Issue Logic.
Early Experiences with Olden.
Compaq Computer Corporation.
Issue Logic for a 600-MHz Out-of-Order Execution Microprocessor
Focusing Processor Policies via Critical-Path Prediction
SPEC CPU2000: Measuring CPU Performance in the New Millennium.
Circuits for Wide-Window Superscalar Processors
The Alpha 21264 Microprocessor.
A Large
Trace Processors.
Memory Behavior of the SPEC2000 Benchmark Suite.
Characterizing and Removing Branch Mis- predictions
Improving Prediction for Procedure Returns with Return- Address-Stack Repair Mechanisms
Branch Prediction
Instruction Issue Logic for High-Performance
Multiscalar Processors.
POWER4 System Microarchitecture.
An Efficient Algorithm for Exploiting Multiple Arithmetic Units.
Understanding the Backward Slices of Performance Degrading Instructions.
--TR
Instruction Issue Logic for High-Performance, Interruptible, Multiple Functional Unit, Pipelined Computers
Multiscalar processors
Complexity-effective superscalar processors
Trace processors
A dynamic multithreading processor
Improving prediction for procedure returns with return-address-stack repair mechanisms
Branch Prediction, Instruction-Window Size, and Cache Size
A low-complexity issue logic
Understanding the backward slices of performance degrading instructions
Circuits for wide-window superscalar processors
Clock rate versus IPC
Multiple-banked register file architectures
Two-level hierarchical register file organization for VLIW processors
Reducing the complexity of the issue logic
Dynamically allocating processor resources between nearby and distant ILP
Focusing processor policies via critical-path prediction
Power and energy reduction via pipeline balancing
Energy-effective issue logic
A design space evaluation of grid processor architectures
instruction scheduling logic
Reducing the complexity of the register file in dynamic superscalar processors
The Alpha 21264 Microprocessor
Early Experiences with Olden
Data-Flow Prescheduling for Large Instruction Windows in Out-of-Order Processors
Characterizing and removing branch mispredictions
--CTR
Rama Sangireddy, Register port complexity reduction in wide-issue processors with selective instruction execution, Microprocessors & Microsystems, v.31 n.1, p.51-62, February, 2007
Simha Sethumadhavan , Rajagopalan Desikan , Doug Burger , Charles R. Moore , Stephen W. Keckler, Scalable Hardware Memory Disambiguation for High-ILP Processors, IEEE Micro, v.24 n.6, p.118-127, November 2004
Il Park , Chong Liang Ooi , T. N. Vijaykumar, Reducing Design Complexity of the Load/Store Queue, Proceedings of the 36th annual IEEE/ACM International Symposium on Microarchitecture, p.411, December 03-05,
Srikanth T. Srinivasan , Ravi Rajwar , Haitham Akkary , Amit Gandhi , Michael Upton, Continual Flow Pipelines: Achieving Resource-Efficient Latency Tolerance, IEEE Micro, v.24 n.6, p.62-73, November 2004
Yongxiang Liu , Anahita Shayesteh , Gokhan Memik , Glenn Reinman, Scaling the issue window with look-ahead latency prediction, Proceedings of the 18th annual international conference on Supercomputing, June 26-July 01, 2004, Malo, France
Edward Brekelbaum , Jeff Rupley , Chris Wilkerson , Bryan Black, Hierarchical Scheduling Windows, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey
Hiroshi Sasaki , Masaaki Kondo , Hiroshi Nakamura, Energy-efficient dynamic instruction scheduling logic through instruction grouping, Proceedings of the 2006 international symposium on Low power electronics and design, October 04-06, 2006, Tegernsee, Bavaria, Germany
Yongxiang Liu , Anahita Shayesteh , Gokhan Memik , Glenn Reinman, Tornado warning: the perils of selective replay in multithreaded processors, Proceedings of the 19th annual international conference on Supercomputing, June 20-22, 2005, Cambridge, Massachusetts
Dan Ernst , Andrew Hamel , Todd Austin, Cyclone: a broadcast-free dynamic instruction scheduler with selective replay, ACM SIGARCH Computer Architecture News, v.31 n.2, May
Adrin Cristal , Jos F. Martnez , Josep Llosa , Mateo Valero, A case for resource-conscious out-of-order processors: towards kilo-instruction in-flight processors, ACM SIGARCH Computer Architecture News, v.32 n.3, p.3-10, June 2004
Ilhyun Kim , Mikko H. Lipasti, Macro-op Scheduling: Relaxing Scheduling Loop Constraints, Proceedings of the 36th annual IEEE/ACM International Symposium on Microarchitecture, p.277, December 03-05,
Haitham Akkary , Ravi Rajwar , Srikanth T. Srinivasan, Checkpoint Processing and Recovery: Towards Scalable Large Instruction Window Processors, Proceedings of the 36th annual IEEE/ACM International Symposium on Microarchitecture, p.423, December 03-05,
E. F. Torres , P. Ibanez , V. Vinals , J. M. Llaberia, Store Buffer Design in First-Level Multibanked Data Caches, ACM SIGARCH Computer Architecture News, v.33 n.2, p.469-480, May 2005
Jos F. Martnez , Jose Renau , Michael C. Huang , Milos Prvulovic , Josep Torrellas, Cherry: checkpointed early resource recycling in out-of-order microprocessors, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey
Adrian Cristal , Oliverio J. Santana , Francisco Cazorla , Marco Galluzzi , Tanausu Ramirez , Miquel Pericas , Mateo Valero, Kilo-Instruction Processors: Overcoming the Memory Wall, IEEE Micro, v.25 n.3, p.48-57, May 2005
Tali Moreshet , R. Iris Bahar, Power-aware issue queue design for speculative instructions, Proceedings of the 40th conference on Design automation, June 02-06, 2003, Anaheim, CA, USA
Tali Moreshet , R. Iris Bahar, Effects of speculation on performance and issue queue design, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.12 n.10, p.1123-1126, October 2004
Mikko H. Lipasti , Brian R. Mestan , Erika Gunadi, Physical Register Inlining, ACM SIGARCH Computer Architecture News, v.32 n.2, p.325, March 2004
Amit Gandhi , Haitham Akkary , Ravi Rajwar , Srikanth T. Srinivasan , Konrad Lai, Scalable Load and Store Processing in Latency Tolerant Processors, ACM SIGARCH Computer Architecture News, v.33 n.2, p.446-457, May 2005
Ilhyun Kim , Mikko H. Lipasti, Half-price architecture, ACM SIGARCH Computer Architecture News, v.31 n.2, May
Hans Vandierendonck , Philippe Manet , Thibault Delavallee , Igor Loiselle , Jean-Didier Legat, By-passing the out-of-order execution pipeline to increase energy-efficiency, Proceedings of the 4th international conference on Computing frontiers, May 07-09, 2007, Ischia, Italy
Srikanth T. Srinivasan , Ravi Rajwar , Haitham Akkary , Amit Gandhi , Mike Upton, Continual flow pipelines, ACM SIGOPS Operating Systems Review, v.38 n.5, December 2004
Huiyang Zhou , Thomas M. Conte, Enhancing memory level parallelism via recovery-free value prediction, Proceedings of the 17th annual international conference on Supercomputing, June 23-26, 2003, San Francisco, CA, USA
Yu Bai , R. Iris Bahar, A low-power in-order/out-of-order issue queue, ACM Transactions on Architecture and Code Optimization (TACO), v.1 n.2, p.152-179, June 2004
Tanaus Ramrez , Alex Pajuelo , Oliverio J. Santana , Mateo Valero, Kilo-instruction processors, runahead and prefetching, Proceedings of the 3rd conference on Computing frontiers, May 03-05, 2006, Ischia, Italy
Alex Pajuelo , Antonio Gonzlez , Mateo Valero, Speculative execution for hiding memory latency, ACM SIGARCH Computer Architecture News, v.33 n.3, June 2005
Luis Ceze , Karin Strauss , James Tuck , Josep Torrellas , Jose Renau, CAVA: Using checkpoint-assisted value prediction to hide L2 misses, ACM Transactions on Architecture and Code Optimization (TACO), v.3 n.2, p.182-208, June 2006
Haitham Akkary , Ravi Rajwar , Srikanth T. Srinivasan, An analysis of a resource efficient checkpoint architecture, ACM Transactions on Architecture and Code Optimization (TACO), v.1 n.4, p.418-444, December 2004
Madhavi G. Valluri , Lizy K. John , Kathryn S. McKinley, Low-power, low-complexity instruction issue using compiler assistance, Proceedings of the 19th annual international conference on Supercomputing, June 20-22, 2005, Cambridge, Massachusetts
Huiyang Zhou , Thomas M. Conte, Enhancing Memory-Level Parallelism via Recovery-Free Value Prediction, IEEE Transactions on Computers, v.54 n.7, p.897-912, July 2005
Ahmed S. Al-Zawawi , Vimal K. Reddy , Eric Rotenberg , Haitham H. Akkary, Transparent control independence (TCI), ACM SIGARCH Computer Architecture News, v.35 n.2, May 2007
Andrew D. Hilton , Amir Roth, Ginger: control independence using tag rewriting, ACM SIGARCH Computer Architecture News, v.35 n.2, May 2007
Peter G. Sassone , Jeff Rupley, II , Edward Brekelbaum , Gabriel H. Loh , Bryan Black, Matrix scheduler reloaded, ACM SIGARCH Computer Architecture News, v.35 n.2, May 2007
Francisco J. Mesa-Martnez , Michael C. Huang , Jose Renau, SEED: scalable, efficient enforcement of dependences, Proceedings of the 15th international conference on Parallel architectures and compilation techniques, September 16-20, 2006, Seattle, Washington, USA
Simha Sethumadhavan , Rajagopalan Desikan , Doug Burger , Charles R. Moore , Stephen W. Keckler, Scalable Hardware Memory Disambiguation for High ILP Processors, Proceedings of the 36th annual IEEE/ACM International Symposium on Microarchitecture, p.399, December 03-05,
Albert Meixner , Daniel J. Sorin, Unified microprocessor core storage, Proceedings of the 4th international conference on Computing frontiers, May 07-09, 2007, Ischia, Italy
Monreal , Victor Vinals , Jose Gonzalez , Antonio Gonzalez , Mateo Valero, Late Allocation and Early Release of Physical Registers, IEEE Transactions on Computers, v.53 n.10, p.1244-1259, October 2004
Adrin Cristal , Oliverio J. Santana , Mateo Valero , Jos F. Martnez, Toward kilo-instruction processors, ACM Transactions on Architecture and Code Optimization (TACO), v.1 n.4, p.389-417, December 2004
Joseph J. Sharkey , Dmitry V. Ponomarev , Kanad Ghose , Oguz Ergin, Instruction packing: reducing power and delay of the dynamic scheduling logic, Proceedings of the 2005 international symposium on Low power electronics and design, August 08-10, 2005, San Diego, CA, USA
Joseph J. Sharkey , Dmitry V. Ponomarev , Kanad Ghose , Oguz Ergin, Instruction packing: Toward fast and energy-efficient instruction scheduling, ACM Transactions on Architecture and Code Optimization (TACO), v.3 n.2, p.156-181, June 2006 | latency tolerance;instruction window;memory latency;cache memory |
545235 | Using a user-level memory thread for correlation prefetching. | This paper introduces the idea of using a User-Level Memory Thread (ULMT) for correlation prefetching. In this approach, a user thread runs on a general-purpose processor in main memory, either in the memory controller chip or in a DRAM chip. The thread performs correlation prefetching in software, sending the prefetched data into the L2 cache of the main processor. This approach requires minimal hardware beyond the memory processor: the correlation table is a software data structure that resides in main memory, while the main processor only needs a few modifications to its L2 cache so that it can accept incoming prefetches. In addition, the approach has wide usability, as it can effectively prefetch even for irregular applications. Finally, it is very flexible, as the prefetching algorithm can be customized by the user on an application basis. Our simulation results show that, through a new design of the correlation table and prefetching algorithm, our scheme delivers good results. Specifically, nine mostly-irregular applications show an average speedup of 1.32. Furthermore, our scheme works well in combination with a conventional processor-side sequential prefetcher, in which case the average speedup increases to 1.46. Finally, by exploiting the customization of the prefetching algorithm, we increase the average speedup to 1.53. | Introduction
Data prefetching is a popular technique to tolerate long memory access
latencies. Most of the past work on data prefetching has focused
on processor-side prefetching [6, 7, 8, 12, 13, 14, 15, 19, 20, 23, 25,
26, 28, 29]. In this approach, the processor or an engine in its cache
hierarchy issues the prefetch requests. An interesting alternative is
memory-side prefetching, where the engine that prefetches data for
the processor is in the main memory system [1, 4, 9, 11, 22, 28].
Memory-side prefetching is attractive for several reasons. First, it
eliminates the overheads and state bookkeeping that prefetch requests
introduce in the paths between the main processor and its
caches. Second, it can be supported with a few modifications to
the controller of the L2 cache and no modification to the main pro-
cessor. Third, the prefetcher can exploit its proximity to the memory
to its advantage, for example by storing its state in memory. Fi-
nally, memory-side prefetching has the additional attraction of riding
the technology trend of increased chip integration. Indeed, popular
platforms like PCs are being equipped with graphics engines in the
memory system [27]. Some chipsets like NVIDIA's nForce even integrate
a powerful processor in the North Bridge chip [22]. Simpler engines can be provided for prefetching, or existing graphics processors can be augmented with prefetching capabilities. Moreover, there are proposals to integrate processing logic in DRAM chips, such as IRAM [16].

(This work was supported in part by the National Science Foundation under grants CCR-9970488, EIA-0081307, EIA-0072102, and CHE-0121357; by DARPA under grant F30602-01-C-0078; by Michigan State University; and by gifts from IBM, Intel, and Hewlett-Packard.)
Unfortunately, existing proposals for memory-side prefetching engines
have a narrow scope [1, 9, 11, 22, 28]. Indeed, some designs
are hardware controllers that perform simple and specific operations
[1, 9, 22]. Other designs are specialized engines that are
custom-designed to prefetch linked data structures [11, 28]. Instead,
we would like an engine that is usable in a wide variety of workloads
and that offers flexibility of use to the programmer.
While memory-side prefetching can support a variety of prefetching
algorithms, one type that is particularly suited to it is Correlation
prefetching [1, 6, 12, 18, 26]. Correlation prefetching uses past sequences
of reference or miss addresses to predict and prefetch future
misses. Since no program knowledge is needed, correlation prefetching
can be easily moved to the memory side.
In the past, correlation prefetching has been supported by hardware
controllers that typically require a large hardware table to keep the
correlations [1, 6, 12, 18]. In all cases but one, these controllers are
placed between the L1 and L2 caches, or between the processor and
the L1. While effective, this approach has a high hardware cost. Fur-
thermore, it is often unable to prefetch far ahead enough and deliver
good prefetch coverage.
In this paper, we present a new scheme where correlation prefetching
is performed by a User-Level Memory Thread (ULMT) running
on a simple general-purpose processor in memory. Such a processor
is either in the memory controller chip or in a DRAM chip,
and prefetches lines to the L2 cache of the main processor. The
scheme requires minimal hardware support beyond the memory pro-
cessor: the correlation table is a software data structure that resides
in main memory, while the main processor only needs a few modifications
to its L2 cache controller so that it can accept incoming
prefetches. Moreover, our scheme has wide usability, as it can effectively
prefetch even for irregular applications. Finally, it is very
flexible, as the prefetching algorithm executed by the ULMT can be
customized by the programmer on an application basis.
Using a new design of the correlation table and correlation prefetching
algorithm, our scheme delivers an average speedup of 1.32 for
nine mostly-irregular applications. Furthermore, our scheme works
well in combination with a conventional processor-side sequential
prefetcher, in which case the average speedup increases to 1.46. Fi-
nally, by exploiting the customization of the prefetching algorithm,
we increase the average speedup to 1.53.
This paper is organized as follows: Section 2 discusses memory-side
and correlation prefetching; Section 3 presents ULMT for correlation prefetching; Section 4 discusses our evaluation setup; Section 5 evaluates our design; Section 6 discusses related work; and Section 7 concludes.

Figure 1. Memory-side prefetching: some locations where the memory processor can be placed (a), and actions under push passive (b) and push active (c) prefetching.
2. Memory-Side and Correlation Prefetching
2.1. Memory-Side Prefetching
Memory-Side prefetching occurs when prefetching is initiated by an
engine that resides either close to the main memory (beyond any
memory bus) or inside of it [1, 4, 9, 11, 22, 28]. Some manufacturers
have built such engines. Typically, they are simple hardwired controllers
that probably recognize only simple stride-based sequences
and prefetch data into local buffers. Some examples are NVIDIA's
DASP engine in the North Bridge chip [22] and Intel's prefetch cache
in the i860 chipset.
In this paper, we propose to support memory-side prefetching with a
user-level thread running on a general-purpose core. The core can be
very simple and does not need to support floating point. For illustration
purposes, Figure 1-(a) shows the memory system of a PC. The
core can be placed in different places, such as in the North Bridge
(memory controller) chip or in the DRAM chips. Placing it in the
North Bridge simplifies the design because the DRAM is not modi-
fied. Moreover, some existing systems already include a core in the
North Bridge for graphics processing [22], which could potentially
be reused for prefetching. Placing the core in a DRAM chip complicates
the design, but the resulting highly-integrated system has lower
memory access latency and higher memory bandwidth. In this paper,
we examine the performance potential of both designs.
Memory- and processor-side prefetching are not the same as Push
and Pull (or On-Demand) prefetching [28], respectively. Push
prefetching occurs when prefetched data is sent to a cache or processor
that has not requested it, while pull prefetching is the opposite.
Clearly, a memory prefetcher can act as a pull prefetcher by simply
buffering the prefetched data locally and supplying it to the processor
on demand [1, 22]. In general, however, memory-side prefetching is
most interesting when it performs push prefetching to the caches of
the processor because it can hide a larger fraction of the memory
access latency.
Memory-side prefetching can also be classified into Passive and Ac-
tive. In passive prefetching, the memory processor observes the requests
from the main processor that reach main memory. Based on
them, and after examining some internal state, the memory processor
prefetches other data for the main processor that it expects the latter
to need in the future (Figure 1-(b)).
In active prefetching, the memory processor runs an abridged version
of the code that is running on the main processor. The execution of
the code induces the memory processor to fetch data that the main
processor will need later. The data fetched by these requests is also
sent to the main processor (Figure 1-(c)).
In this paper, we concentrate on passive push memory-side prefetching
into the L2 cache of the main processor. The memory processor
aims to eliminate only L2 cache misses, since they are the only ones
that it sees. Typically, L2 cache miss time is an important contributor
to the processor stall due to memory accesses, and is usually the
hardest to hide with out-of-order execution.
This approach to prefetching is inexpensive to support. The main
processor core does not need to be modified at all. Its L2 cache needs
to have the following supports. First, as in other systems [11, 15, 28],
the L2 cache has to accept lines from the memory that it has not re-
quested. To do so, the L2 uses free Miss Status Handling Registers
(MSHRs) in such events. Secondly, if the L2 has a pending request
and a prefetched line with the same address arrives, the prefetch simply
steals the MSHR and updates the cache as if it were the reply.
Finally, a prefetched line arriving at L2 is dropped in the following
cases: the L2 cache already has a copy of the line, the write-back
queue has a copy of the line because the L2 cache is trying to write it
back to memory, all MSHRs are busy, or all the lines in the set where
the prefetched line wants to go are in transaction-pending state.
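For illustration, this arrival-time policy can be sketched as a small C decision
function; the flag names and the enum are ours and stand in for state the L2
controller already tracks, not for any interface defined in the paper.

    #include <stdbool.h>

    /* What to do with a prefetched line arriving at the L2; the inputs are
       illustrative flags for state the L2 controller already has. */
    typedef enum { PREF_FILL, PREF_STEAL_MSHR, PREF_DROP } pref_action_t;

    pref_action_t l2_prefetch_arrival(bool matching_mshr_pending,
                                      bool line_already_in_l2,
                                      bool line_in_writeback_queue,
                                      bool free_mshr_available,
                                      bool target_set_all_pending)
    {
        if (matching_mshr_pending)
            return PREF_STEAL_MSHR;  /* the prefetch is treated as the reply to the miss */
        if (line_already_in_l2 || line_in_writeback_queue ||
            !free_mshr_available || target_set_all_pending)
            return PREF_DROP;        /* the drop conditions listed in the text */
        return PREF_FILL;            /* accept the unrequested line via a free MSHR */
    }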
2.2. Correlation Prefetching
Correlation Prefetching uses past sequences of reference or miss
addresses to predict and prefetch future misses [1, 6, 12, 18, 26].
Two popular correlation schemes are Stride-Based and Pair-Based
schemes. Stride-based schemes find stride patterns in the address
sequences and prefetch all the addresses that will be accessed if the
patterns continue in the future. Pair-based schemes identify a correlation
between pairs or groups of addresses, for example between
a miss and a sequence of successor misses. A typical implementation
of pair-based schemes uses a Correlation Table to record the
addresses that are correlated. Later, when a miss is observed, all the
addresses that are correlated with its address are prefetched.
Pair-based schemes are attractive because they have general appli-
cability: they work for any miss patterns as long as miss address
sequences repeat. Such behavior is common in both regular and irregular
applications, including those with sparse matrices or linked
data structures. Furthermore, pair-based schemes, like all correlation
schemes, need neither compiler support nor changes in the application
binary.
Pair-based correlation prefetching has only been studied using
hardware-based implementations [1, 6, 12, 18, 26], typically by placing
a custom prefetch engine and a hardware correlation table between
the processor and L1 cache, or between the L1 and L2 caches.
The typical correlation table, as used in [6, 12, 26], is organized as
follows. Each row stores the tag of an address that missed, and the
addresses of a set of immediate successor misses. These are misses
that have been seen to immediately follow the first one at different
points in the application. The parameters of the table are the
maximum number of immediate successors per miss (NumSucc), the
maximum number of misses that the table can store predictions for
(NumRows), and the associativity of the table (Assoc). According
to [12], for best performance, the entries in a row should replace
each other with a LRU policy.
Figure 4-(a) illustrates how the algorithm works. We call the algorithm
Base. The figure shows two snapshots of the table at different
points in the miss stream ((i) and (ii)). Within a row, successors are
listed in MRU order from left to right. At any time, the hardware
keeps a pointer to the row of the last miss observed. When a miss
occurs, the table learns by placing the miss address as one of the immediate
successors of the last miss, and a new row is allocated for the
new miss unless it already exists. When the table is used to prefetch
((iii)), it reacts to an observed miss by finding the corresponding row
and prefetching all NumSucc successors, starting from the MRU one.
The designs in [1, 18] work slightly differently. They are discussed
in Section 6.
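To make the table organization concrete, the following C sketch shows one possible
encoding of a row and of the Base learn and prefetch steps. It simplifies the real
design (direct-mapped rows instead of a set-associative table, a stubbed
issue_prefetch hook, arbitrary NUM_SUCC and NUM_ROWS), so it is an illustration of
the data structure rather than the evaluated implementation.

    #include <stdint.h>
    #include <string.h>

    #define NUM_SUCC 4      /* immediate successors kept per miss (NumSucc) */
    #define NUM_ROWS 1024   /* rows (NumRows); direct-mapped in this sketch */

    typedef uint64_t addr_t;

    typedef struct {
        addr_t tag;                /* address of the miss this row describes */
        addr_t succ[NUM_SUCC];     /* immediate successors, MRU first        */
        int    nsucc;
    } row_t;

    static row_t  table[NUM_ROWS];
    static addr_t last_miss;       /* address of the last miss observed */

    static void issue_prefetch(addr_t line) { (void)line; /* hand off to the prefetcher */ }

    static row_t *find_or_alloc(addr_t a) {
        row_t *r = &table[a % NUM_ROWS];
        if (r->tag != a) { memset(r, 0, sizeof *r); r->tag = a; }
        return r;
    }

    /* Learning: record the new miss as the MRU immediate successor of the last miss. */
    void base_learn(addr_t miss) {
        row_t *r = find_or_alloc(last_miss);
        memmove(&r->succ[1], &r->succ[0], (NUM_SUCC - 1) * sizeof(addr_t));
        r->succ[0] = miss;               /* duplicates are not collapsed in this sketch */
        if (r->nsucc < NUM_SUCC) r->nsucc++;
        (void)find_or_alloc(miss);       /* allocate a row for the new miss if needed */
        last_miss = miss;
    }

    /* Prefetching: on a miss, prefetch all recorded successors, MRU first. */
    void base_prefetch(addr_t miss) {
        row_t *r = &table[miss % NUM_ROWS];
        if (r->tag != miss) return;
        for (int i = 0; i < r->nsucc; i++)
            issue_prefetch(r->succ[i]);
    }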
Past work has demonstrated the applicability of pair-based
correlation prefetching for many applications. However, it has also
revealed the shortcomings of the approach. One critical problem is
that, to be effective, this approach needs a large table. Proposed
schemes typically need a 1-2 Mbyte on-chip SRAM table [12, 18],
while some applications with large footprints even need a 7.6 Mbyte
off-chip SRAM table [18].
Furthermore, the popular schemes that prefetch several potential immediate
successors for each miss [6, 12, 26] have two limitations:
they do not prefetch very far ahead and, intuitively, they need to observe
one miss to eliminate another miss (its immediate successor).
As a result, they tend to have low coverage. Coverage is the number
of useful prefetches over the original number of misses [12].
3. ULMT for Correlation Prefetching
We propose to use a ULMT to eliminate the shortcomings of pair-
based correlation prefetching while enhancing its advantages. In the
following, we discuss the main concept (Section 3.1), the architecture
of the system (Section 3.2), modified correlation prefetching
algorithms (Section 3.3), and related operating system issues (Sec-
tion 3.4).
3.1. Main Concept
A ULMT running on a general-purpose core in memory performs
two conceptually distinct operations: learning and prefetching.
Learning involves observing the misses on the main processor's L2
cache and recording them in a correlation table one miss at a time.
The prefetching operation involves reacting to one such miss by
looking up the correlation table and triggering the prefetching of several
memory lines for the L2 cache of the main processor. No action
is taken on a write-back to memory.
In practice, in agreement with past work [12], we find that combining
both learning and prefetching works best: the correlation table
continuously learns new patterns, while uninterrupted prefetching
delivers higher performance. Consequently, the ULMT executes the
infinite loop shown in Figure 2. Initially, the thread waits for a miss
to be observed. When it observes one, it looks up the table and generates
the addresses of the lines to prefetch (Prefetching Step). Then,
it updates the table with the address of the observed miss (Learning
Step). It then resumes waiting.
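The structure of this loop can be sketched in a few lines of C;
wait_for_observed_miss, prefetch_step, and learn_step are placeholders for whatever
interface the memory processor exposes (queue 2 for observed misses, queue 3 for
generated prefetch addresses).

    #include <stdint.h>

    typedef uint64_t addr_t;

    /* Placeholder interfaces; the two steps are the routines sketched elsewhere. */
    extern addr_t wait_for_observed_miss(void);  /* blocks until an L2 miss is observed */
    extern void   prefetch_step(addr_t miss);    /* table lookup + prefetch addresses   */
    extern void   learn_step(addr_t miss);       /* insert the miss into the table      */

    void ulmt_main(void)
    {
        for (;;) {
            addr_t miss = wait_for_observed_miss();
            prefetch_step(miss);   /* done first to keep the response time low */
            learn_step(miss);
        }
    }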
Figure 2. Infinite loop executed by the ULMT (Wait, Prefetching step, Learning step, with the response and occupancy times marked).
Any prefetch algorithm executed by the ULMT is characterized by
its Response and Occupancy times. The response time is the time
from when the ULMT observes a miss address until it generates
the addresses to prefetch. For best performance, the response time
should be as small as possible. This is why we always execute the
Prefetching step before the Learning one. Moreover, we shift as
much computation as possible from the Prefetching to the Learning
step, retaining only the most critical operations in the Prefetching
step.
The occupancy time is the time when the ULMT is busy processing
a single observed miss. For the ULMT implementation of the
prefetcher to be viable, the occupancy time has to be smaller than
the time between two consecutive L2 misses most of the time.
The correlation table that the ULMT reads and writes is simply a
software data structure in memory. Consequently, our scheme eliminates
the costly hardware table required by current implementations
of correlation prefetching [12, 18]. Moreover, accesses to the software
table are inexpensive because the memory processor transparently
caches the table in its cache. Finally, our new scheme enables
the redesign of the correlation table and prefetching algorithms (Sec-
tion 3.3) to address the low-coverage and short-distance prefetching
limitations of current implementations.
3.2. Architecture of the System
Figures 3-(a) and (b) show the architecture of a system that integrates
the memory processor in the North Bridge chip or in a DRAM chip,
respectively. The first design requires no modification to the DRAM
or its interface, and is largely compatible with conventional memory
systems. The second design needs changes to the DRAM chips and
their interface, and needs special support to work in typical memory
systems, which have multiple DRAM chips. However, since our
goal is to examine the performance potential of the two designs, we
abstract away some of the implementation complexity of the second
design by assuming a single-chip main memory. In the following, we
outline how the systems work. In our discussion, we only consider
memory accesses resulting from misses; we ignore write-backs for
simplicity and because they do not affect our algorithms.
Figure 3. Architecture of a system that integrates the memory processor in the North Bridge chip (a) or in a DRAM chip (b).

In Figure 3-(a), the key communication occurs through queues 1, 2,
and 3. Miss requests from the main processor are deposited in queues
1 and 2 simultaneously. The ULMT uses the entries in queue 2 to
build its table and, based on it, generate the addresses to prefetch.
The latter are deposited in queue 3. Queues 1 and 3 compete to
access memory, although queue 3 has a lower priority than queue 1.
When the address of a line to prefetch is deposited in queue 3, the
hardware compares it against all the entries in queue 2. If a match
for an address X is detected, X is removed from both queues.
We remove X from queue 3 because it is redundant: a higher-priority
request for X is already in queue 1. X is removed from queue 2 to
save computation in the ULMT. Note that it is unclear whether we
lost the opportunity to prefetch X's successors by not processing X.
The reason is that our algorithms prefetch several levels of successors
(Section 3.3) and, as a result, some of X's successors may
already be in queue 3. Processing X may help improve the state in
the correlation table. However, minimizing the total occupancy of
the ULMT is crucial in our scheme.
Similarly, when a main-processor miss is about to be deposited in
queues 1 and 2, the hardware compares its address against those in
queue 3. If there is a match, the request is put only in queue 1 and
the matching entry in queue 3 is removed.
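The two cross-checks can be summarized with the following illustrative C routines;
the array-based queues and the fixed depth QDEPTH are simplifications of the actual
hardware structures.

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t addr_t;

    #define QDEPTH 16   /* illustrative depth; the real depth is a hardware parameter */

    typedef struct { addr_t a[QDEPTH]; int n; } queue_t;

    static bool queue_remove(queue_t *q, addr_t x) {   /* order not preserved; a sketch */
        for (int i = 0; i < q->n; i++)
            if (q->a[i] == x) { q->a[i] = q->a[--q->n]; return true; }
        return false;
    }

    /* A prefetch address X is about to enter queue 3: if the main processor has
       already missed on X (it is in queue 2), drop X from both queues. */
    void deposit_prefetch(queue_t *q2, queue_t *q3, addr_t x) {
        if (queue_remove(q2, x)) return;              /* redundant prefetch dropped */
        if (q3->n < QDEPTH) q3->a[q3->n++] = x;
    }

    /* A main-processor miss is about to enter queues 1 and 2: if a prefetch for the
       same address waits in queue 3, remove it and deposit the miss only in queue 1. */
    void deposit_miss(queue_t *q1, queue_t *q2, queue_t *q3, addr_t x) {
        bool prefetch_pending = queue_remove(q3, x);
        if (q1->n < QDEPTH) q1->a[q1->n++] = x;
        if (!prefetch_pending && q2->n < QDEPTH) q2->a[q2->n++] = x;  /* overflow: drop */
    }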
It is possible that requests from the main processor arrive too fast for
the ULMT to consume them and queue 2 overflows. In this case, the
memory processor simply drops these requests.
Figure 3-(a) also shows the Filter module associated with queue 3.
This module improves the performance of correlation prefetching,
which may sometimes try to prefetch the same address several times
in a short time. The Filter module drops prefetch requests directed to
any address for which another prefetch request has recently been issued.
The module is a fixed-sized FIFO list that records the addresses of all
the recently-issued requests. Before a request is issued to queue 3,
the hardware checks the Filter list. If it finds its address, the request
is dropped and the list is left unmodified. Otherwise, the address is
added to the tail of the list. With this support, some unnecessary
prefetch requests are eliminated.
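A possible rendering of the Filter module in C is shown below; FILTER_SIZE is an
arbitrary placeholder, since the module's exact size is a hardware parameter.

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t addr_t;

    #define FILTER_SIZE 32   /* illustrative; the actual size is a hardware parameter */

    static addr_t filter[FILTER_SIZE];   /* FIFO of recently issued prefetch addresses */
    static int    filter_tail;

    /* Returns true if the prefetch may be issued to queue 3, false if dropped. */
    bool filter_check_and_insert(addr_t line)
    {
        for (int i = 0; i < FILTER_SIZE; i++)
            if (filter[i] == line)
                return false;            /* recently issued: drop, list left unchanged */
        filter[filter_tail] = line;      /* add at the tail, evicting the oldest entry */
        filter_tail = (filter_tail + 1) % FILTER_SIZE;
        return true;
    }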
For completeness, the figure shows other queues. Replies from
memory to the main processor go through queue 4. In addition, the
ULMT needs to access the software correlation table in main mem-
ory. Recall that the table is transparently cached by the memory
processor. Logical queues 5 and 6 provide the necessary paths for
the memory processor to access main memory. In practice, queues 5
and 6 are merged with the others.
If the memory processor is in the DRAM chip (Figure 3-(b)), the
system works slightly differently. Miss requests from the main processor
are deposited first in queue 1 and then in queue 2. The ULMT
in the memory processor accesses the correlation table from its cache
and, on a miss, directly from the DRAM. The addresses to prefetch
are passed through the Filter module and placed in queue 3. As in
Figure 3-(a), entries in queues 2 and 3 are checked against each other,
and the common entries are dropped. The replies to both prefetches
and main-processor requests are returned to the memory controller.
As they reach the memory controller, their addresses are compared
to the processor miss requests in queue 1. If a memory-prefetched
line matches a miss request from the main processor, the former is
considered to be the reply of the latter, and the latter is not sent to
the memory chip.
Finally, in machines that include a form of processor-side prefetch-
ing, we envision our architecture to operate in two modes: Verbose
and Non-Verbose. In Verbose mode, queue 2 in Figures 3-(a) and (b)
receives both main-processor misses and main-processor prefetch re-
quests. In Non-Verbose mode, queue 2 only receives main-processor
misses. This mode assumes that main-processor prefetch requests
are distinguishable from other requests, for example with a tag as in
the MIPS R10000 [21].
The Non-Verbose mode is useful to reduce the total occupancy of the
ULMT. In this case, the processor-side prefetcher can focus on the
easy-to-predict sequential or regular miss patterns, while the ULMT
can focus on the hard-to-predict irregular ones. The Verbose mode is
also useful: the ULMT can implement a prefetch algorithm that enhances
the effectiveness of the processor-side prefetcher. We present
an example of this case in Section 5.2.
3.3. Correlation Prefetching Algorithms
Simply taking the current pair-based correlation table and algorithm
and implementing them in software is not good enough. Indeed, as
indicated in Section 2.2, the Base algorithm has two limitations: it
does not prefetch very far ahead and, intuitively, it needs to observe
one miss to eliminate another miss (its immediate successor). As a
result, it tends to have low coverage.
To increase coverage, three things need to occur. First, we need to
eliminate these two limitations by storing in the table (and prefetch-
ing) several levels of successor misses per miss: immediate succes-
sors, successors of immediate successors, and so on for several lev-
els. Second, these prefetches have to be highly accurate. Finally, the
prefetcher has to take decisions early enough so that the prefetched
lines reach the main processor before they are needed.
These conditions are easier to support and ensure when the correlation
algorithm is implemented as a ULMT. There are two reasons
for it. The first one is that storage is now cheap and, therefore, the
correlation table can be inexpensively expanded to hold multiple levels
of successor misses per miss, even if that means replicating in-
formation. The second reason is the Customizability provided by a
software implementation of the prefetching algorithm.
In the rest of this section, we describe how a ULMT implementation
of correlation prefetching can deliver high coverage. We describe
three approaches: using a conventional table organization, using a
table re-organized for ULMT, and exploiting customizability.
Figure 4. Pair-based correlation algorithms: Base (a), Chain (b), and Replicated (c). Each part shows table snapshots for the miss sequence a,b,c,a,d,c,...; on a miss on a, Base prefetches d and b, Chain additionally follows the MRU link to prefetch c, and Replicated prefetches d, b, and c from a single row.
3.3.1. Using a Conventional Table Organization
As a first step, we attempt to improve coverage without specifically
exploiting the low-cost storage or customizability advantages
of ULMT. We simply take the conventional table organization of
Section 2.2 and force the ULMT to prefetch multiple levels of successors
for every miss. The resulting algorithm we call Chain. Chain
takes the same parameters as Base plus NumLevels, which is the
number of levels of successors prefetched. The algorithm is illustrated
in Figure 4-(b).
Chain updates the table like Base ((i) and (ii)) but prefetches differently
((iii)). Specifically, after prefetching the row of immediate suc-
cessors, it takes the MRU one among them and accesses the correlation
table again with its address. If the entry is found, it prefetches
all NumSucc successors there. Then, it takes the MRU successor in
that row and repeats the process. This is done NumLevels-1 times.
As an example, suppose that a miss on a occurs ((iii)). The ULMT
first prefetches d and b. Then, it takes the MRU entry d, looks up the
table, and prefetches d's successor, c.
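A sketch of the Chain prefetching step follows; find_row and issue_prefetch are
assumed helpers (an associative row lookup and the hook into queue 3), and the code
mirrors, rather than reproduces, the evaluated implementation.

    #include <stdint.h>
    #include <stddef.h>

    #define NUM_SUCC 4

    typedef uint64_t addr_t;
    typedef struct { addr_t tag; addr_t succ[NUM_SUCC]; int nsucc; } row_t;

    /* Assumed helpers: an associative row lookup and the prefetch hook. */
    row_t *find_row(addr_t miss);          /* NULL if the address has no row */
    void   issue_prefetch(addr_t line);

    /* Chain prefetching step: prefetch the recorded successors of the miss, then
       follow the MRU successor and repeat, for num_levels levels in total. */
    void chain_prefetch(addr_t miss, int num_levels)
    {
        addr_t cur = miss;
        for (int level = 0; level < num_levels; level++) {
            row_t *r = find_row(cur);      /* one associative search per level */
            if (r == NULL || r->nsucc == 0)
                return;
            for (int i = 0; i < r->nsucc; i++)
                issue_prefetch(r->succ[i]);
            cur = r->succ[0];              /* continue along the MRU path only */
        }
    }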
Chain addresses the two limitations of Base, namely not prefetching
very far ahead, and needing one miss to eliminate a second one.
However, Chain may not deliver high coverage for two reasons: the
prefetches may not be highly accurate and the ULMT may have a
high response time to issue all the prefetches.
The prefetches may be inaccurate because Chain does not prefetch
the true MRU successors in each level of successors. Instead, it
only prefetches successors found along the MRU path. For ex-
ample, consider a sequence of misses that alternates between a,b,c
and b,e,b,f: a,b,c,...,b,e,b,f,...,a,b,c,... When miss a is encountered,
Chain prefetches its immediate successors (b), and then accesses the
entry for b to prefetch e and f. Note that c is not prefetched.
The high response time of Chain to a miss comes from having to
make NumLevels accesses to different rows in the table. Each access
involves an associative search because the table is associative and,
potentially, one or more cache misses.
3.3.2. Using a Table Re-Organized for ULMT
We now attempt to improve coverage by exploiting the low cost of
storage in ULMT solutions. Specifically, we expand the table to
allow replicated information. Each row of the table stores the tag
of the miss address, and NumLevels levels of successors. Each level
contains NumSucc addresses that use LRU for replacement. Using
this table, we propose an algorithm called Replicated (Figure 4-(c)).
Replicated takes the same parameters as Chain.
As shown in Figure 4-(c), Replicated keeps NumLevels pointers to
the table. These pointers point to the entries for the address of the last
miss, second last, and so on, and are used for efficient table access.
When a miss occurs, these pointers are used to access the entries of
the last few misses, and insert the new address as the MRU successor
of the correct level ((i) and (ii)). In the figure, the NumSucc entries
at each level are MRU ordered. Finally, prefetching in Replicated is
simple: when a miss is seen, all the entries in the corresponding row
are prefetched ((iii)).
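The re-organized row and the corresponding learn and prefetch steps might look as
follows in C; find_or_alloc_row and issue_prefetch are again assumed helpers, and
NUM_LEVELS and NUM_SUCC are placeholders.

    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>

    #define NUM_SUCC   4
    #define NUM_LEVELS 3

    typedef uint64_t addr_t;

    typedef struct {
        addr_t tag;
        addr_t succ[NUM_LEVELS][NUM_SUCC];  /* level 0 = immediate successors, MRU first */
    } rrow_t;

    /* last[0] points to the row of the last miss, last[1] to the second-last, ... */
    static rrow_t *last[NUM_LEVELS];

    rrow_t *find_or_alloc_row(addr_t miss);  /* assumed associative lookup/allocation */
    void    issue_prefetch(addr_t line);     /* assumed hook into queue 3             */

    void repl_learn(addr_t miss)
    {
        /* Insert the new miss as the MRU entry at the proper level of each of the
           last NUM_LEVELS misses; the pointers avoid any associative search. */
        for (int lvl = 0; lvl < NUM_LEVELS; lvl++) {
            rrow_t *r = last[lvl];
            if (r == NULL) continue;
            memmove(&r->succ[lvl][1], &r->succ[lvl][0], (NUM_SUCC - 1) * sizeof(addr_t));
            r->succ[lvl][0] = miss;
        }
        for (int lvl = NUM_LEVELS - 1; lvl > 0; lvl--)  /* the new miss becomes "last" */
            last[lvl] = last[lvl - 1];
        last[0] = find_or_alloc_row(miss);
    }

    void repl_prefetch(const rrow_t *r)
    {
        /* A single row access yields all the levels of successors to prefetch. */
        for (int lvl = 0; lvl < NUM_LEVELS; lvl++)
            for (int i = 0; i < NUM_SUCC; i++)
                if (r->succ[lvl][i] != 0)
                    issue_prefetch(r->succ[lvl][i]);
    }

The Learning step walks NUM_LEVELS rows through the kept pointers, while the
Prefetching step touches a single row, matching the shift of work discussed below.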
Note that Replicated eliminates the two problems of Chain. First,
prefetches are accurate because they contain the true MRU successors
at each level. This is the result of grouping together all the
successors from a given level, irrespective of the path taken. In
the sequence shown above a,b,c,.,b,e,b,f,.,a,b,c,., on a miss on
a, Replicated prefetches b and c.
Second, the response time of Replicated is much smaller than Chain.
Indeed, Replicated prefetches several levels of successors with a single
row access, and maybe even with a single cache miss. Replicated
effectively shifts some computation from the Prefetching step to the
Learning one: prefetching needs a single table access, while learning
a miss needs multiple table updates. This is a good trade-off because
the Prefetching step is the critical one. Furthermore, these multiple
learning updates are inexpensive: the use of the pointers eliminates
the need to do any associative searches on the table, and the rows to
be updated are most likely still in the cache of the memory processor
(since they were updated most recently).
3.3.3. Exploiting the Customizability of ULMT
We can also improve coverage by exploiting the second advantage
of ULMT solutions: customizability. The programmer or system
can choose to run a different algorithm in the ULMT for each ap-
plication. The chosen algorithm can be highly customized to the
application's needs.
One approach to customization is to use the table organizations and
prefetching algorithms described above but to tune their parameters
on an application basis. For example, in applications where the miss
sequences are highly predictable, we can set the number of levels
of successors to prefetch (NumLevels) to a high value. As a result,
we will prefetch more levels of successors with high accuracy. In
applications with unpredictable sequences, we can do the opposite.
We can also tune the number of rows in the table (NumRows). In
applications that have large footprints, we can set NumRows to a high
value to hold more information in the table. In small applications, we
can do the opposite to save space.

Characteristics                                                      Base  Chain      Replicated
Levels of successors prefetched                                      1     NumLevels  NumLevels
True MRU ordering for each level of successors?                      Yes   No         Yes
Row accesses in the Prefetching step (requires associative search)   1     NumLevels  1
Row accesses in the Learning step (requires no associative search)   1     1          NumLevels
Response time                                                        Low   High       Low
Space requirement (for a constant number of prefetches)              x     x          NumLevels * x

Table 1. Comparing different pair-based correlation prefetching algorithms running on a ULMT.
A second approach to customization is to use a different prefetching
algorithm. For example, we can add support for sequential prefetching
to all the algorithms described above. The resulting algorithms
will have low response time for sequential miss patterns.
Another approach is to adaptively decide the algorithm on-the-fly,
as the application executes. In fact, this approach can also be used
to execute different algorithms in different parts of one application.
Such intra-application customizability may be useful in complex applications.
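As a purely illustrative example of such customization, a per-application
configuration record could bind an algorithm and its parameters to each ULMT; the
field names and the CG value for NumLevels are placeholders, while the MST/Mcf and
Verbose-mode choices follow Table 5 and Section 5.2, and the NumRows values follow
Table 2.

    /* Illustrative per-application customization record; the real mechanism is the
       OS/programmer interface to the ULMT (Section 3.4), not this exact struct. */
    typedef enum { ALG_BASE, ALG_CHAIN, ALG_REPL, ALG_SEQ1_PLUS_REPL } alg_t;

    typedef struct {
        alg_t algorithm;
        int   num_levels;   /* levels of successors to prefetch (NumLevels) */
        int   num_rows;     /* correlation table rows (NumRows)             */
        int   verbose;      /* 1: also observe processor-side prefetches    */
    } ulmt_config_t;

    /* Example bindings in the spirit of Table 5; the CG NumLevels is a placeholder. */
    static const ulmt_config_t cfg_cg  = { ALG_SEQ1_PLUS_REPL, 3, 64,  1 };
    static const ulmt_config_t cfg_mst = { ALG_REPL,           4, 256, 0 };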
Finally, the ULMT can also be used for profiling purposes. It can
monitor the misses of an application and infer higher-level information
such as cache performance, application access patterns, or page
conflicts.
3.3.4. Comparing the Algorithms
Table 1 compares the Base, Chain, and Replicated algorithms executing
on a ULMT. Replicated has the highest potential for high
coverage: it supports far-ahead prefetching by prefetching several
levels of successors, its prefetches have high accuracy because they
prefetch the true MRU successors at each level, and it has a low response
time, in part because it only needs to access a single table
row in the Prefetching step. Accessing a single row minimizes the
associative searches and the cache misses. The only shortcoming
of Replicated is the larger space that it requires for the correlation
table. However, this is a minor issue since the table is a software
structure allocated in main memory. Note that all these algorithms
can also be implemented in hardware. However, Replicated is more
suitable for a ULMT implementation because providing the larger
space required in hardware is expensive.
3.4. Operating System Issues
There are some operating system issues that are related to ULMT
operation. We outline them here.
Protection. The ULMT has its own separate address space with its
instructions, the correlation table, and a few other data structures.
The ULMT shares neither instructions nor data with any application.
The ULMT can observe the physical addresses of the application
misses. It can also issue prefetches for these addresses on behalf of
the main processor. However, it can neither read from nor write to
these addresses. Therefore, protection is guaranteed.
Multiprogrammed Environment. It is a poor approach to have all
the applications share a single table: the table is likely to suffer a lot
of interference. A better approach is to associate a different ULMT,
with its own table, to each application. This eliminates interference
in the tables. In addition, it enables the customization of each ULMT
to its own application. If we conservatively assume a 4-Mbyte table
on average per application, 8 applications require 32 Mbytes, which
is only a modest fraction of today's typical main memory. If this
requirement is excessive, we can save space by dynamically sizing
the tables. In this case, if an application does not use the space, its
table shrinks.
Scheduling. The scheduler knows the ULMT associated with each
application. Consequently, the scheduler schedules and preempts
both application and ULMT as a group. Furthermore, the operating
system provides an interface for the application to control its ULMT.
Page Re-mapping. Sometimes, a page gets re-mapped. Since
ULMTs operate on physical addresses, such events can cause some
table entries to become stale. We can choose to take no action and let
the table update itself automatically through learning. Alternatively,
the operating system can inform the corresponding ULMT when a
re-mapping occurs, passing the old and new physical page number.
Then, the ULMT indexes its table for each line of the old page. If
the entry is found, the ULMT relocates it and updates both the tag
and any applicable successors in the row. Given current page sizes,
we estimate the table update to take a few microseconds. Such overhead
may be overlapped with the execution of the operating system
page mapping handler in the main processor. Note that some other
entries in the table may still keep stale successor information. Such
information may cause a few useless prefetches, but the table will
quickly update itself automatically.
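The relocation pass can be sketched as follows; table_lookup and table_relocate are
assumed interfaces to the correlation table, and the page and line sizes are
illustrative constants.

    #include <stdint.h>
    #include <stddef.h>

    typedef uint64_t addr_t;

    #define LINE_SIZE 64     /* bytes; matches the simulated L2 line */
    #define PAGE_SIZE 4096   /* illustrative page size               */

    /* Assumed interfaces to the software correlation table. */
    void *table_lookup(addr_t line_addr);                  /* row or NULL            */
    void  table_relocate(void *row, addr_t new_line_addr); /* fix the tag and row    */

    /* Invoked (conceptually, via the OS interface) when a physical page moves. */
    void ulmt_remap_page(addr_t old_page, addr_t new_page)
    {
        for (addr_t off = 0; off < PAGE_SIZE; off += LINE_SIZE) {
            void *row = table_lookup(old_page + off);
            if (row != NULL)
                table_relocate(row, new_page + off);
        }
        /* Stale successors pointing into the old page are left to be corrected
           by normal learning, as discussed above. */
    }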
4. Evaluation Environment
Applications. To evaluate the ULMT approach, we use nine mostly-
irregular, memory-intensive applications. Irregular applications are
hardly amenable to compiler-based prefetching. Consequently, they
are the obvious target for ULMT correlation prefetching. The exception
is CG, which is a regular application. Table 2 describes the
applications. The last four columns of the table will be explained
later.
Simulation Environment. The evaluation is done using an
execution-driven simulation environment that supports a dynamic
superscalar processor model [17]. We model a PC architecture with a
simple memory processor that is integrated in either the North Bridge
chip or in a DRAM chip, following the micro-architecture of Figure 3.
Table 3 shows the parameters used for each component of
the architecture. All cycles are 1.6 GHz cycles. The architecture is
modeled cycle by cycle.
We model only a uni-programmed environment with a single application
and a single ULMT that execute concurrently. We model all
the contention in the system, including the contention of the application
thread and the ULMT on shared resources such as the memory
controller, DRAM channels, and DRAM banks.
Processor-Side Prefetching. The main processor optionally includes
a hardware prefetcher that can prefetch multiple streams
of stride 1 or -1 into the L1 cache. The prefetcher monitors L1
cache misses and can identify and prefetch up to NumSeq sequential streams concurrently.

Appl     Suite               Problem                              Input                  NumRows  Correlation Table Size (Mbytes)
                                                                                                  Base   Chain   Repl
CG       NAS                 Conjugate gradient                   Class S                64       1.3    0.8     1.8
Equake   SpecFP2000          Seismic wave propagation simulation  Test                   128      2.5    1.5     3.5
FT       NAS                 3D Fourier transform                 Class S                256      5.0    3.0     7.0
Gap      SpecInt2000         Group theory solver                  Rako (subset of test)  128      2.5    1.5     3.5
Mcf      SpecInt2000         Combinatorial optimization           Test
MST      Olden               Finding minimum spanning tree        1024 nodes             256      5.0    3.0     7.0
Parser   SpecInt2000         Word processing                      Subset of train        128      2.5    1.5     3.5
Sparse   SparseBench [10]    GMRES with compressed row storage
Tree     Univ. of Hawaii [3] Barnes-Hut N-body problem            2048 bodies            8
Average                                                                                  140      2.7    1.6     3.8

Table 2. Applications used.

Main Processor:
  6-issue dynamic. 1.6 GHz. Int, fp, ld/st FUs: 4, 4, 2
  Pending ld, st: 8, 16. Branch penalty: 12 cycles
Memory Processor:
  2-issue dynamic. 800 MHz. Int, fp, ld/st FUs: 2, 0, 1
  Pending ld, st: 4, 4. Branch penalty: 6 cycles
Main Processor's Memory Hierarchy:
  L1 data: ...-B line, 3-cycle hit RT
  L2 data: write-back, 512 KB, 4 way, 64-B line, 19-cycle hit RT
  RT memory latency: 243 cycles (row miss), 208 cycles (row hit)
  Memory bus: split-transaction, 8 B, 400 MHz, 3.2 GB/sec peak
Memory Processor's Memory Hierarchy:
  Data cache: ...-B line, 4-cycle hit RT
  In North Bridge: RT mem latency: 100 cycles (row miss), ... cycles (row hit)
    Latency of a prefetch request to reach DRAM: 25 cycles
  In DRAM: RT mem latency: 56 cycles (row miss), ... cycles (row hit)
    Internal DRAM data bus: 32-B wide, 800 MHz, 25.6 GB/sec peak
DRAM Parameters (applicable to all procs):
  Dual channel. Each channel: 2 B, 800 MHz. Total: 3.2 GB/sec peak
  Random access time (tRAC): ... ns
  Time from memory controller (tSystem): ... ns
Other:
  Depth of queues 1 through 6: ...
  Filter module: ...

Table 3. Parameters of the simulated architecture. Latencies correspond to contention-free conditions. RT stands for round-trip from the processor. All cycles are 1.6 GHz cycles.

It works as follows. When the third miss
in a sequence is observed, the prefetcher recognizes a stream. Then,
it prefetches the next NumPref lines in the stream into the L1 cache.
Furthermore, it stores the stride and the next address expected in
the stream in a special register. If the processor later misses on the
address in the register, the prefetcher prefetches the next NumPref
lines in the stream and updates the register. The prefetcher contains
NumSeq such registers. As we can see, while this scheme works
somewhat like stream buffers [13], the prefetched lines go to L1. We
choose this approach to minimize hardware complexity. A short-coming
is that the L1 cache may get polluted. For completeness,
we resimulated the system with the prefetches going into separate
buffers rather than into L1. We found that the performance changes
very little, in part because checking the buffers on L1 misses introduces
delay.
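The following C fragment sketches the stream-detection logic just described; it is a
simplified interpretation (in particular of what the "next expected address" register
holds), NUM_PREF is an assumed value, and issue_l1_prefetch is an assumed hook into
the L1.

    #include <stdint.h>

    typedef uint64_t addr_t;

    #define NUM_SEQ   4     /* stream registers (NumSeq)                       */
    #define NUM_PREF  8     /* lines prefetched per trigger (NumPref, assumed) */
    #define LINE_SIZE 64    /* bytes per cache line                            */

    typedef struct { addr_t next; int64_t stride; int valid; } stream_t;

    static stream_t streams[NUM_SEQ];
    static addr_t   last1, last2;   /* the two previous L1 miss addresses */
    static int      victim;         /* round-robin replacement of streams */

    static void issue_l1_prefetch(addr_t line) { (void)line; /* assumed hook */ }

    void seq_prefetcher_on_l1_miss(addr_t a)
    {
        /* Case 1: the miss hits the next address expected by a tracked stream. */
        for (int i = 0; i < NUM_SEQ; i++) {
            if (streams[i].valid && streams[i].next == a) {
                for (int k = 1; k <= NUM_PREF; k++)
                    issue_l1_prefetch(a + (addr_t)(k * streams[i].stride));
                streams[i].next = a + (addr_t)streams[i].stride;  /* simplified update */
                last2 = last1; last1 = a;
                return;
            }
        }
        /* Case 2: the third miss of a +1/-1 line-stride sequence starts a stream. */
        int64_t s1 = (int64_t)(a - last1), s2 = (int64_t)(last1 - last2);
        if (s1 == s2 && (s1 == LINE_SIZE || s1 == -LINE_SIZE)) {
            stream_t *st = &streams[victim];
            victim = (victim + 1) % NUM_SEQ;
            for (int k = 1; k <= NUM_PREF; k++)
                issue_l1_prefetch(a + (addr_t)(k * s1));
            st->stride = s1;
            st->next   = a + (addr_t)s1;
            st->valid  = 1;
        }
        last2 = last1; last1 = a;
    }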
Algorithm Parameters. Table 4 lists the prefetching algorithms that
we evaluate and the default parameters that we use. The sequential
prefetching supported in hardware by the main processor is called
Conven4 for conventional. It can also be implemented in software
by a ULMT. We evaluate two such software implementations (Seq1
and Seq4). In this case, the prefetcher in memory observes L2 misses
rather than L1.
Unless otherwise indicated, the processor-side prefetcher is off; when
it is on, the ULMT algorithms operate in Non-Verbose mode (Sec-
tion 3.2). For the Base algorithm, we choose the parameter values
used by Joseph and Grunwald [12] so that we can compare the work.
The last four columns of Table 2 give a conservative value for the
size of the correlation table for each application. The table is two-way
set-associative. We have sized the number of rows in the table
(NumRows) to be the lowest power of two such that, with a trivial
hashing function that simply takes the lower bits of the line address,
less than 5% of the insertions replace an existing entry. This is a very
generous allocation. A more sophisticated hash function can reduce
NumRows significantly without increasing conflicts much. In any
case, knowing that each row in Base, Chain, and Repl takes 20, 12,
and 28 bytes, respectively, in a 32-bit machine, we can compute the
total table size. Overall, while some applications need more space
than others, the average value is tolerable: 2.7, 1.6, and 3.8 Mbytes
for Base, Chain, and Repl, respectively.
ULMT Implementation. We wrote all ULMTs in C and hand-optimized
them for minimal response and occupancy time. One
major performance bottleneck of the implementation is frequent
branches. We remove branches by unrolling loops and hardwiring
all algorithm parameters. We also perform optimizations to increase
the spatial locality and to reduce instruction count. None of the algorithms
uses floating-point operations.
5. Evaluation
5.1. Characterizing Application Behavior
Predictability of the Miss Sequences. We start by characterizing
how well our ULMT algorithms can predict the miss sequences of
the applications. For that, we run each ULMT algorithm simply observing
all L2 cache miss addresses without performing prefetch-
ing. We record the fraction of L2 cache misses that are correctly
predicted. For a sequential prefetcher, this means that the upcoming
miss address matches the next address predicted by one of the
streams identified; for a pair-based prefetcher, the upcoming address
matches one of the successors predicted for that level.
Figure 5 shows the results of prediction for up to three levels of successors.
Given a miss, the Level 1 chart shows the predictability of
the immediate successor, while Level 2 shows the predictability of
the next successor, and Level 3 the successor after that one. The
experiments for the pair-based schemes use large tables to ensure that
practically no prediction is missed due to conflicts in the table: Num-
Rows is 256 K, Assoc is 4, and NumSucc is 4.

Prefetching Algorithm   Implementation                     Name      Parameter Values
Base                                                       Base
Chain                                                      Chain
Replicated              Software in memory as ULMT         Repl
Sequential 1-Stream                                        Seq1
Sequential 4-Streams                                       Seq4
Sequential 4-Streams    Hardware in L1 of main processor   Conven4

Table 4. Parameter values used for the different algorithms.

Figure 5. Fraction of L2 cache misses that are correctly predicted by different algorithms (Seq1, Base, Chain, Repl) for different levels of successors; one chart per level.

Under these conditions, for level 1, Chain and Repl are equivalent to Base. For levels
2 and 3, Base is not applicable. The figure also shows the effect of
combining algorithms.
Figure 5 shows that our ULMT algorithms can effectively predict the
miss streams of the applications. For example, at level 1, Seq4 and
Base correctly predict on average 49% and 82% of the misses, re-
spectively. Moreover, the best algorithms keep predicting correctly
across several levels of successors. For example, Repl correctly predicts
on average 77% and 73% of the misses for levels 2 and 3, re-
spectively. Therefore, these algorithms have good potential.
The figure also shows that different applications have different miss
behavior. For instance, applications such as Mcf and Tree do not
have sequential patterns and, therefore, only pair-based algorithms
can predict misses. In other applications such as CG, instead, sequential
patterns dominate. As a result, sequential prefetching can
predict practically all L2 misses. Most applications have a mix of
both patterns.
Among pair-based algorithms, Repl almost always outperforms
Chain by a wide margin. This is because Chain does not maintain
the true MRU successors at each level. However, while Repl is effective
under all patterns, it is better when combined with multi-stream
sequential prefetching (Seq4+Repl).
Time Between L2 Misses. Another important issue is the time between
misses. Figure 6 classifies L2 misses according to the number
of cycles between two consecutive misses arriving at the mem-
ory. The misses are grouped in bins according to this inter-miss distance,
starting with [0,80) cycles. The unit is 1.6 GHz processor cycles.
The most significant bin is [200,280), which contributes 60% of
all miss distances on average. These misses are critical beyond their
numbers because their latencies are hard to hide with out-of-order
execution. Indeed, since the round-trip latency to memory is 208-243
cycles, dependent misses are likely to fall in this bin. They contribute
more to processor stall than the figure suggests because dependent
misses cannot be overlapped with each other. Consequently, we want
the ULMT to prefetch them. To make sure that the ULMT is fast
enough to learn these misses, its occupancy should be less than 200
cycles.
The misses in the other bins are fewer and less critical. Those beyond
280 cycles are too far apart to put pressure on the ULMT's timing.
Those in [0,80) may not give enough time to the ULMT to respond.
Fortunately, these misses are more likely to be overlapped with each
other and with computation.
Figure 6. Characterizing the time between L2 misses (fraction of L2 misses in each inter-miss-distance bin, per application).
5.2. Comparing the Different Algorithms
Figure 7 compares the execution time of the applications under different
cases: no prefetching (NoPref), processor-side prefetching as
listed in Table 4 (Conven4), different ULMT schemes listed in Table
4 (Base, Chain, and Repl), the combination of Conven4 and Repl
(Conven4+Repl), and some customized algorithms (Custom). The
results are for the case where the memory processor is integrated
in the DRAM. For each application and the average, the bars are
normalized to NoPref. The bars show the memory-induced processor
stall time that is caused by requests between the processor and
the L2 cache (UptoL2), and by requests beyond the L2 cache (BeyondL2).
The remaining time (Busy) includes processor computation
plus other pipeline stalls. A system with a perfect L2 cache would
only have the Busy and UptoL2 times.

Figure 7. Execution time of the applications with different prefetching algorithms.
On average, BeyondL2 is the most significant component of the execution
time under NoPref. It accounts for 44% of the time. Thus,
although our ULMT schemes only target L2 cache misses, they target
the main contributor to the execution time.
Conven4 performs very well on CG because sequential patterns dom-
inate. However, it is ineffective in applications such as Mcf and Tree
that have purely irregular patterns. On average, Conven4 reduces the
execution time by 17%.
The pair-based schemes show mixed performance. Base shows limited
speedups, mostly because it does not prefetch far enough. On
average, it reduces NoPref's execution time by 6%. Chain performs
a little better, but it is limited by inaccuracy (Figure 5) and high
response time (Section 3.3.1). On average, it reduces NoPref's execution
time by 12%.
Repl is able to reduce the execution time significantly. It performs
well in almost all applications. It outperforms both Base and Chain
in all cases. Its impact comes from the nice properties of the Replicated
algorithm, as discussed in Section 3.3.4. The average of the
application speedups of Repl over NoPref is 1.32.
Finally, Conven4+Repl performs the best. On average, it removes
over half of the BeyondL2 stall time, and delivers an average application
speedup of 1.46 over NoPref. If we compare the impact
of processor-side prefetching only (Conven4) and memory-side
prefetching only (Repl), we see that they have a constructive effect in
Conven4+Repl. The reason is that the two schemes help each other.
Specifically, the processor-side prefetcher prefetches and eliminates
the sequential misses. The memory-side prefetcher works in Non-
Verbose mode (Section 3.2) and, therefore, does not see the prefetch
requests. Therefore, it can fully focus on the irregular miss patterns.
With the resulting reduced load, the ULMT is more effective.
Algorithm Customization. In this first paper on ULMT prefetch-
ing, we have attempted only very simple customization for a few ap-
plications. Table 5 shows the changes. For CG, we run Seq1+Repl in
Verbose mode. For MST and Mcf, we run Repl with a higher Num-
Levels. In all cases, Conven4 is on. The results are shown in Figure 7
as the Custom bar in the three applications.
Application   Customized ULMT Algorithm
CG            Seq1+Repl in Verbose mode
MST, Mcf      Repl with NumLevels = 4

Table 5. Customizations performed. Conven4 is also on.
The customization in CG tries to further exploit positive interaction
between processor- and memory-side prefetching. While CG
only has sequential miss patterns (Figure 5), its multiple streams
overwhelm the conventional prefetcher. Indeed, although processor-side
prefetches are very accurate (99.8% of the prefetched lines are
referenced), they are not timely enough (only 64% are timely) because
some of them miss in the L2 cache. In our customization, we
turn on the Verbose mode so that processor-side prefetch requests
are seen by the ULMT. Furthermore, the ULMT is extended with a
single-stream sequential prefetch algorithm (Seq1) before executing
Repl. In this environment, the positive interaction between the two
prefetchers increases. Specifically, while the application references
the different streams in an interleaved manner, the processor-side
prefetcher "unscrambles" the miss sequence into chunks of same-
stream prefetch requests. The Seq1 prefetcher in the ULMT then
easily identifies each stream and, very efficiently, prefetches ahead.
As a result, 81% of the processor-side prefetches arrive in a timely
manner. With this customization, the speedup of CG improves from
2.19 (with Conven4+Repl) to 2.59. This case demonstrates that even
regular applications that are amenable to sequential processor-side
prefetching can benefit from ULMT prefetching.
The customization in MST and Mcf tries to exploit predictability
beyond the third level of successor misses by setting NumLevels to 4
in Repl. As shown in Figure 7, this approach is successful for MST,
but it produces marginal gains in Mcf.
This initial attempt at customization shows promising re-
sults. After applying customization on three applications, the average
execution speedup of the nine applications relative to NoPref
becomes 1.53.
Figure 8. Execution time for different locations of the memory processor.
Location of Memory Processor. Figure 8 examines the impact of
where we place the memory processor (Figure 3). The first two
bars for each application are taken from Figure 7: NoPref and Con-
ven4+Repl. The last bar for each application corresponds to the
Conven4+Repl algorithm with the memory processor placed in the
memory controller (North Bridge) chip (Conven4+ReplMC). With
the processor in the North Bridge chip, we have twice the memory
access latency (100 cycles vs. 56 cycles), eight times lower memory
bandwidth (3.2 GB/sec vs. 25.6 GB/sec), and an additional 25-cycle
delay seen by the prefetch requests before they reach the DRAM
(all these cycle counts are in main-processor cycles).
However, Figure 8 shows that the impact on the execution time is
very small. It results in a small decrease in average speedups from
1.46 to 1.41. The impact is small thanks to the ability of Repl to
accurately prefetch far ahead. Only the timeliness of the immediate
successor prefetches is affected, while the prefetching of further
levels of successors is still timely. Overall, given these results and
the hardware cost of the two designs, we conclude that putting the
memory processor in the North Bridge chip is the most cost-effective
design of the two.
Prefetching Effectiveness. To gain further insight into these
prefetching schemes, Figure 9 examines the effectiveness of the
lines prefetched into the L2 cache by the ULMT. These lines are
called prefetches. The figure shows data for Sparse, Tree, and the
average of the other seven applications. The figure combines both
L2 misses and prefetches, and breaks them down into 5 categories:
prefetches that eliminate an L2 miss (Hits), prefetches that eliminate
part of the latency of an L2 miss because they arrive a bit late
(DelayedHits), L2 misses that pay the full latency (NonPrefMisses),
and useless prefetches. Useless prefetches are further broken down
into prefetches that are brought into the L2 but that are not referenced
by the time they are replaced (Replaced), and prefetches that
are dropped on arrival to L2 because the same line is already in the
cache (Redundant). Since Coverage is the fraction of the original L2
misses that are fully or partially eliminated, it is represented by the
sum of Hits and DelayedHits as shown in Figure 9. NonPrefMisses
in Figure 9 is the number of L2 misses left after prefetching, relative
to the original number of L2 misses. Note that NonPrefMisses
can be higher than 1.0 for some algorithms. 1.0 - NonPrefMisses is
the number of L2 misses eliminated relative to the original number
of L2 misses. NonPrefMisses can be broken down into two groups:
those misses below the 1.0 line in Figure 9 (1.0 - Hits - DelayedHits)
come from the original misses, while those above the 1.0 line
(Hits + DelayedHits + NonPrefMisses - 1.0) are the new L2 conflict
misses caused by prefetches.
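As a concrete reading of this breakdown, take the average Repl numbers reported
below: a coverage (Hits plus DelayedHits) of 0.74 and new conflict misses equal to
about 20% of the original misses. Then NonPrefMisses is roughly
(1.0 - 0.74) + 0.20 = 0.46 of the original misses, of which 0.26 (below the 1.0 line)
are surviving original misses and 0.20 (above the 1.0 line) are new conflict misses.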
Figure 9. Breakdown of the L2 misses and lines prefetched by the ULMT (prefetches). The original misses are normalized to 1.

Looking at the average of the seven applications, we see why Base
and Chain are not effective: their coverage is small. Base is hurt
by its inability to prefetch far ahead, while Chain is hampered by its
high response time and limited accuracy. The figure also shows that
Repl has a high coverage (0.74). However, this comes at the cost of
useless prefetches (Replaced plus Redundant are equivalent to 50%
of the original misses) and additional misses due to conflicts with
prefetches (20% of the original misses). We can see, therefore, that
advanced pair-based schemes need additional bandwidth.
Conven4+Repl seems to have low coverage, despite its high performance
in Figure 7. The reason is that the prefetch requests
issued by the processor-side prefetcher, while effective in eliminating
L2 misses, are lumped into the NonPrefMisses category
in the figure if they reach memory. Since the ULMT prefetcher
is in Non-Verbose mode, it does not see these requests. Conse-
quently, the ULMT prefetcher only focuses on the irregular miss
patterns. ULMT prefetches that eliminate irregular misses appear
as Hits+DelayedHits.
Finally, Figure 9 also shows why Sparse and Tree showed limited
speedups in Figure 7. They have too many conflicts in the cache,
which results in many remaining NonPrefMisses. Furthermore, their
prefetches are not very accurate, which results in large Replaced and
Redundant categories.
Work Load of the ULMT. Figure 10 shows the average response
time and occupancy time (Section 3.1) for each of the ULMT algo-
rithms, averaged over all applications. The times are measured in 1.6
GHz cycles. Each bar is broken down into computation time (Busy)
and memory stall time (Mem). The numbers on top of each bar show
the average IPC of the ULMT. The IPC is calculated as the number
of instructions divided by the number of memory processor cycles.

Figure 10. Average response and occupancy time of different ULMT algorithms in main-processor cycles.
The figure shows that, in all the algorithms, the occupancy time is
less than 200 cycles. Consequently, the ULMT is fast enough to
process most of the L2 misses (Figure 6). Memory stall time is
roughly half of the ULMT execution time when the processor is in
the DRAM, and more when the processor is in the North Bridge chip
(ReplMC). Chain and Repl have the lowest occupancy time. Note
that Repl's occupancy is not much higher than Chain's, despite the
higher number of table updates performed by Repl. The reasons are
the fewer associative searches and the better cache line reuse in Repl.
The response time is most important for prefetching effectiveness.
The figure shows that Repl has the lowest response time, at around
cycles. The response time of ReplMC is about twice as much.
Fortunately, the Replicated algorithm is able to prefetch far ahead
accurately and, therefore, the effectiveness of prefetching is not very
sensitive to a modest increase in the response time.
Main Memory Bus Utilization. Finally, Figure 11 shows the utilization
of the main memory bus for various algorithms, averaged
over all applications. The increase in bus utilization induced by the
advanced algorithms is divided into two parts: increase caused naturally
by the reduced execution time, and additional increase caused
by the prefetching traffic. Overall, the figure shows that the increase
in bus utilization is tolerable. The utilization increases from the original
20% to only 36% in the worst case (Conven4+Repl). Moreover,
most of the increase comes from the faster execution; only a 6% utilization
is directly attributable to the prefetches. In general, the fact
that memory-side prefetching only adds one-way traffic to the main
memory bus limits its bandwidth needs.
Figure 11. Main memory bus utilization.
6. Related Work
Memory-Side Prefetching. Some memory-side prefetchers are
simple hardware controllers. For example, the NVIDIA chipset includes
the DASP controller in the North Bridge chip [22]. It seems
that it is mostly targeted to stride recognition and buffers data lo-
cally. The i860 chipset from Intel is reported to have a prefetch
cache, which may indicate the presence of a similar engine. Cooksey
et al. [9] propose the Content-Based prefetcher, which is a hardware
controller that monitors the data coming from memory. If an item
appears to be an address, the engine prefetches it. Alexander and Kedem
[1] propose a hardware controller that monitors requests at the
main memory. If it observes repeatable patterns, it prefetches rows
of data from the DRAM to an SRAM buffer inside the memory chip.
Our scheme is different in that we use a general-purpose
processor running a prefetching algorithm as a user-level thread.
Other studies propose specialized programmable engines. For exam-
ple, Hughes [11] and Yang and Lebeck [28] propose adding a specialized
engine to prefetch linked data structures. While Hughes focuses
on a multiprocessor processing-in-memory system, Yang and
Lebeck focus on a uniprocessor and put the engine at every level
of the cache hierarchy. The main processor downloads information
on these engines about the linked structures and what prefetches to
perform. Our scheme is different in that it has general applicability.
Another related system is Impulse, an intelligent memory controller
capable of remapping physical addresses to improve the performance
of irregular applications [4]. Impulse could prefetch data, but only
implements next-line prefetching. Furthermore, it buffers data in the
memory controller, rather than sending it to the processor.
Correlation Prefetching. Early work on correlation prefetching can
be found in [2, 24]. More recently, several authors have made further
contributions. Charney and Reeves study correlation prefetching
and suggest combining a stride prefetcher with a general correlation
prefetcher [6]. Joseph and Grunwald propose the basic correlation
table organization and algorithm that we evaluate [12]. Alexander
and Kedem use correlation prefetching slightly differently [1], as we
indicate above. Sherwood et al. use it to help stream buffers prefetch
irregular patterns [26]. Finally, Lai et al. design a slightly different
correlation prefetcher [18]. Specifically, a prefetch is not triggered
by a miss; instead, it is triggered by a dead-line predictor indicating
that a line in the cache will not be used again and, therefore, a
new line should be prefetched in. This scheme improves prefetching
timeliness at the expense of tighter integration of the prefetcher with
the processor, since the prefetcher needs to observe not only miss
addresses, but also reference addresses and program counters.
We differ from the recent works in important ways. First, they propose
hardware-only engines, which often require expensive hardware
tables; we use a flexible user-level thread on a general-purpose
core that stores the table as a software structure in memory. Second,
except for Alexander and Kedem [1], they place their engines between
the L1 and L2 caches, or between the processor and the L1;
we place the prefetcher in memory and focus on L2 misses. Time
intervals between L2 misses are large enough for a ULMT to be viable
and effective. Finally, we propose a new table organization and
prefetching algorithm that, by exploiting inexpensive memory space,
increases far-ahead prefetching and prefetch coverage.
Prefetching Regular Structures. Several schemes have been proposed
to prefetch sequential or strided patterns. They include the
Reference Prediction table of Chen and Baer [7], and the Stream
buffers of Jouppi [13], Palacharla and Kessler [23], and Sherwood et
al. [26]. We base our processor-side prefetcher on these schemes.
Processor-Side Prefetching. There are many more proposals for
processor-side prefetching, often for irregular applications. A tiny,
non-exhaustive list includes Choi et al. [8], Karlsson et al. [14], Lipasti
et al. [19], Luk and Mowry [20], Roth et al. [25], and Zhang
and Torrellas [29]. Most of these schemes specifically target linked
data structures. They tend to rely on program information that is
available to the processor, like the addresses and sizes of data struc-
tures. Often, they need compiler support. Our scheme needs neither
program information nor compiler support.
Other Related Work. Chappell et al. [5] use a subordinate thread in
a multithreaded processor to improve branch prediction. They suggest
using such a thread for prefetching and cache management. Fi-
nally, our work is also related to data forwarding in multiprocessors,
where a processor pushes data into the cache hierarchy of another
processor [15].
7. Conclusions
This paper introduced memory-side correlation prefetching using a
User-Level Memory Thread (ULMT) running on a simple general-purpose
processor in main memory. This scheme solves many of
the problems in conventional correlation prefetching and provides
several important additional features. Specifically, the scheme needs
minimal hardware modifications beyond the memory processor, uses
main memory to store the correlation table inexpensively, can exploit
a new table organization to increase far-ahead prefetching and
coverage, can effectively prefetch for applications with largely any
miss pattern as long as it repeats, and supports customization of the
prefetching algorithm by the programmer for individual applications.
Our results showed that the scheme delivers an average speedup
of 1.32 for nine mostly-irregular applications. Furthermore, our
scheme works well in combination with a conventional processor-side
sequential prefetcher, in which case the average speedup increases
to 1.46. Finally, by exploiting the customization of the
prefetching algorithm, we increased the average speedup to 1.53.
This work is being extended by designing effective techniques for
ULMT customization. In particular, we are customizing for linked
data structure prefetching, cache conflict detection and elimination,
and general application profiling. Customization for cache conflict
elimination should improve Sparse and Tree, the applications with
the smallest speedups.
Acknowledgments
The authors thank the anonymous reviewers, Hidetaka Magoshi,
Jose Martinez, Milos Prvulovic, Marc Snir, and James Tuck.
References

Distributed Predictive Cache Design for High Performance Memory Systems.
Dynamic Improvements of Locality in Virtual Memory Systems.
Institute for Astronomy.
Impulse: Building a Smarter Memory Controller.
Simultaneous Subordinate Microthreading (SSMT).
Generalized Correlation Based Hardware Prefetching.
Reducing Memory Latency via Non-Blocking and Prefetching Caches.
Iterative Benchmark.
Prefetching Linked Data Structures in Systems with Merged DRAM-Logic.
Prefetching Using Markov Predictors.
Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache and Prefetch Buffers.
A Prefetching Technique for Irregular Accesses to Linked Data Structures.
Comparing Data Forwarding and Prefetching for Communication-Induced Misses in Shared-Memory MPs.
Scalable Processors in the Billion-Transistor Era: IRAM.
A Direct-Execution Framework for Fast and Accurate Simulation of Superscalar Processors.
Software Prefetching in Pointer and Call Intensive Environments.
NVIDIA nForce Integrated Graphics Processor (IGP) and Dynamic Adaptive Speculative Pre-Processor (DASP).
Evaluating Stream Buffers as a Secondary Cache Replacement.
Prefetching System for a Cache Having a Second Directory for Sequentially Accessed Blocks.
Dependence Based Prefetching for Linked Data Structures.
Sony Computer Entertainment Inc.
Push vs. Pull: Data Movement for Linked Data Structures.
Speeding up Irregular Applications in Shared-Memory Multiprocessors: Memory Binding and Group Prefetching.
545249 | Design tradeoffs for the Alpha EV8 conditional branch predictor. | This paper presents the Alpha EV8 conditional branch predictor. The Alpha EV8 microprocessor project, canceled in June 2001 in a late phase of development, envisioned an aggressive 8-wide issue out-of-order superscalar microarchitecture featuring a very deep pipeline and simultaneous multithreading. Performance of such a processor is highly dependent on the accuracy of its branch predictor and consequently a very large silicon area was devoted to branch prediction on EV8. The Alpha EV8 branch predictor relies on global history and features a total of 352 Kbits. The focus of this paper is on the different trade-offs performed to overcome various implementation constraints for the EV8 branch predictor. One such instance is the pipelining of the predictor on two cycles to facilitate the prediction of up to 16 branches per cycle from any two dynamically successive, 8-instruction fetch blocks. This resulted in the use of three fetch-blocks old compressed branch history information for accessing the predictor. Implementation constraints also restricted the composition of the index functions for the predictor and forced the usage of only single-ported memory cells. Nevertheless, we show that the Alpha EV8 branch predictor achieves prediction accuracy in the same range as the state-of-the-art academic global history branch predictors that do not consider implementation constraints in great detail. | Introduction
The Alpha EV8 microprocessor [2] features an 8-wide superscalar
deeply pipelined microarchitecture. With a minimum
branch misprediction penalty of 14 cycles, the performance
of this microprocessor is very dependent on the
branch prediction accuracy. The architecture and technology
of the Alpha EV8 are very aggressive and new challenges
were confronted in the design of the branch predictor. This
paper presents the Alpha EV8 branch predictor in great de-
tail. (This work was done while the authors were with Compaq during 1999.)
The paper expounds on the different constraints that were
faced during the definition of the predictor, and on the various
trade-offs performed that led to the final design. In par-
ticular, we elucidate on the following: (a) use of a global
history branch prediction scheme, (b) choice of the prediction
scheme derived from the hybrid skewed branch predictor
2Bc-gskew[19], (c) redefinition of the information vector
used for indexing the predictor that combines compressed
branch history and path history, (d) different prediction and
hysteresis table sizes: prediction tables and hysteresis tables
are accessed at different pipeline stages, and hence can be
implemented as physically distinct tables, (e) variable history
lengths: the four logical tables in the EV8 predictor
are accessed using four different history lengths, (f) guaranteeing
conflict free access to the bank-interleaved predictor
with single-ported memory cells for up to 16 branch predictions
from any two 8-instruction dynamically successive
fetch blocks, and (g) careful definition of index functions
for the predictor tables.
This work demonstrates that in spite of all the hardware
and implementation constraints that were encountered,
the Alpha EV8 branch predictor accuracy was not compromised
and stands the comparison with virtually all equivalent
in size, global history branch predictors that have been
proposed so far.
The overall EV8 architecture was optimized for single
process performance. Extra performance obtained by
simultaneous multithreading was considered as a bonus.
Therefore, the parameters of the conditional branch predictor
were tuned with single process performance as the primary
objective. However, the EV8 branch predictor was
found to perform well in the presence of a multithreaded
workload.
The remainder of the paper is organized as follows. Section
briefly presents the instruction fetch pipeline of the
Alpha EV8. Section 3 explains why a global history branch
predictor scheme was preferred over a local. In Section 4,
we present the prediction scheme implemented in the Alpha
EV8, 2Bc-gskew. This section also presents the design
space of 2Bc-gskew. The various design dimensions were
harnessed to fit the EV8 predictor in 352 Kbits memory bud-
get. Section 5 presents and justifies the history and path information
used to index the branch predictor. On the Alpha
EV8, the branch predictor tables must support two independent
reads of 8 predictions per cycle. Section 6 presents the
scheme used to guarantee two conflict-free accesses per cycle
on a bank-interleaved predictor. Section 7 presents the
hardware constraints for composing index functions for the
prediction tables and describes the functions that were eventually
used. Section 8 presents a step by step performance
evaluation of the EV8 branch predictor as constraints are
added and turn-around solutions are adopted. Finally, we
provide concluding remarks in Section 9.
Alpha EV8 front-end pipeline
To sustain high performance, the Alpha EV8 fetches up
to two, 8-instruction blocks per cycle from the instruction
cache. An instruction fetch block consists of all consecutive
valid instructions fetched from the I-cache: an instruction
fetch block ends either at the end of an aligned 8-instruction
block or on a taken control flow instruction. Not taken
conditional branches do not end a fetch block, thus up to 16
conditional branches may be fetched and predicted in every
cycle.
On every cycle, the addresses of the next two fetch
blocks must be generated. Since this must be achieved in
a single cycle, it can only involve very fast hardware. On
the Alpha EV8, a line predictor [1] is used for this purpose.
The line predictor consists of three tables indexed with the
address of the most recent fetch block and a very limited
hashing logic. A consequence of simple indexing logic is
relatively low line prediction accuracy.
To avoid huge performance loss, due to fairly poor line
predictor accuracy and long branch resolution latency (on
the EV8 pipeline, the outcome of a branch is known the earliest
in cycle 14 and more often around cycle 20 or 25), the
line predictor is backed up with a powerful program counter
address generator. This includes a conditional branch
predictor, a jump predictor, a return address stack predic-
tor, conditional branch target address computation (from
instructions flowing out of the instruction cache) and final-
address selection. PC-address-generation is pipelined in t-
wo cycles as illustrated in Fig. 1: up to four dynamically
successive fetch blocks A, B, C and D are simultaneously in
flight in the PC-address-generator. In case of a mismatch
between line prediction and PC-address-generation, the instruction
fetch is resumed with the PC-address-generation
result.
3 Global vs Local history
The previous generation Alpha microprocessor [7] incorporated
a hybrid predictor using both global and local
branch history information. On Alpha EV8, up to 16 branch
outcomes (8 for each fetch block) have to be predicted per
cycle.
Figure 1. PC address generation pipeline (line prediction, prediction table reads and PC address generation for fetch blocks Y and Z, A and B, and C and D across cycles 1 to 3)
Implementing a hybrid branch predictor for EV8
based on local history or including a component using local
history would have been a challenge.
Local branch prediction requires for each prediction a
read of the local history table and then a read of the prediction
table. Performing the 16 local history reads in parallel
requires a dual-ported history table. One port for each fetch
block is sufficient since one can read in parallel the histories
for sequential instructions on sequential table entries.
But performing the 16 prediction table reads would require
a 16-ported prediction table.
Whenever an occurrence of a branch is inflight, the speculative
history associated with the younger inflight occurrence
of the branch should be used [8]. Maintaining and
using speculative local history is already quite complex
on a processor fetching and predicting a single branch per
cycle[20]. On Alpha EV8, the number of inflight branches
is possibly equal to the maximum number of inflight instructions
(that is more than 256). Moreover, in EV8 when
indexing the branch predictor there are up to three fetch
blocks for which the (speculative) branch outcomes have
not been determined (see Fig. 1). These three blocks may
contain up to three previous occurrences of every branch in
the fetch block. In contrast, single speculative global history
(per thread) is simpler to build and as shown in Section 8
the accuracy of the EV8 global history prediction scheme is
virtually insensitive to the effects of three fetch blocks old
global history.
Finally, the Alpha EV8 is a simultaneous multithreaded
processor [25, 26]. When independent threads are running,
they compete for predictor table entries. Such interference
on a local history based scheme can be disastrous, because
it pollutes both the local history and prediction tables. What
is more, when several parallel threads are spawned by a single
application, the pollution is exacerbated unless the local
history table is indexed using PC and thread number.
In comparison, for global history schemes a global history
register must be maintained per thread, and parallel threads
- from the same application - benefit from constructive
aliasing [10].
4 The branch prediction scheme
Global branch history branch predictor tables lead to a
phenomenon known as aliasing or interference [28, 24], in
which multiple branch information vectors share the same
entry in the predictor table, causing the predictions for t-
wo or more branch substreams to intermingle. "De-aliased"
global history branch predictors have been recently intro-
duced: the enhanced skewed branch predictor e-gskew [15],
the agree predictor [22], the bimode predictor [13] and the
YAGS predictor [4]. These predictors have been shown to
achieve higher prediction accuracy at equivalent hardware
complexity than larger "aliased" global history branch predictors
such as gshare [14] or GAs [27]. However, hybrid
predictors combining a global history predictor and a typical
bimodal predictor only indexed with the PC [21] may
deliver higher prediction accuracy than a conventional single
branch predictor [14]. Therefore, "de-aliased" branch
predictors should be included in hybrid predictors to build
efficient branch predictors.
The EV8 branch predictor is derived from the hybrid
skewed branch predictor 2Bc-gskew presented in [19]. In
this section, the structure of the hybrid skewed branch predictor
is first recalled. Then we outline the update policy
used on the EV8 branch predictor. The three degrees of
freedom available in the design space of the 2Bc-gskew predictor
are described: different history lengths for the predictor
components, size of the different predictor components
and using smaller hysteresis tables than prediction ta-
bles. These degrees of freedom were leveraged to design the
"best" possible branch predictor fitting in the EV8 hardware
budget constraints.
4.1 General structure of the hybrid skewed predictor
2Bc-gskew
The enhanced skewed branch predictor e-gskew is a very
efficient single component branch predictor [15, 13] and
therefore a natural candidate as a component for a hybrid
predictor. The hybrid predictor 2Bc-gskew, illustrated in Fig. 2,
combines e-gskew and a bimodal predictor. 2Bc-gskew
consists of four banks of 2-bit counters. Bank BIM is the bi-modal
predictor, but is also part of the e-gskew predictor.
Banks G0 and G1 are the two other banks of the e-gskew
predictor. Bank Meta is the meta-predictor. Depending
on Meta, the prediction is either the prediction coming out
from BIM or the majority vote on the predictions coming
out from G0, G1 and BIM.
4.2 Partial update policy
In a multiple table branch predictor, the update policy
can have a bearing on the prediction accuracy [15]. Partial
update policy was shown to result in higher prediction
accuracy than total update policy for e-gskew.
Applying partial update policy on 2Bc-gskew also results
in better prediction accuracy.
Figure 2. The 2Bc-gskew predictor (the address and history index the BIM, G0, G1 and Meta banks; Meta selects between the bimodal prediction and the majority-vote e-gskew prediction)
The bimodal component accurately predicts strongly biased static branches. Therefore,
once the metapredictor has recognized this situation, the
other tables are not updated and do not suffer from aliasing
associated with easy-to-predict branches.
The partial update policy implemented on the Alpha EV8
consists of the following:
• on a correct prediction:
  - when all predictors were agreeing, do not update (see Rationale 1);
  - otherwise: strengthen Meta if the two predictions were
    different, and strengthen the correct prediction on all
    participating tables G0, G1 and BIM as follows:
    - strengthen BIM if the bimodal prediction was used
    - strengthen all the banks that gave the correct prediction
      if the majority vote was used
• on a misprediction:
  - when the two predictions were different, first update
    the chooser (see Rationale 2), then recompute the overall
    prediction according to the new value of the chooser:
    - correct prediction: strengthen all participating tables
    - misprediction: update all banks
Rationale 1 The goal is to limit the number of strengthened
counters on a correct prediction. When a counter is
strengthened, it is harder for another (address,history) pair
to "steal" it. But, when the three predictors BIM, G0 and
G1 are agreeing, one counter entry can be stolen by another
(address, history) pair without destroying the majority pre-
diction. By not strengthening the counters when the three
predictors agree, such a stealing is made easier.
Rationale 2 The goal is to limit the number of counters
written on a wrong prediction: there is no need to steal a
table entry from another (address, history) pair when it can
be avoided.
4.3 Using distinct prediction and hysteresis arrays
Partial update leads to better prediction accuracy than
total update policy due to better space utilization. It also
allows a simpler hardware implementation of a hybrid predictor
with 2-bit counters.
When using the partial update described earlier, on a correct
prediction, the prediction bit is left unchanged (and need not be written)
while the hysteresis bit is strengthened on participating
components (and need not be read). Therefore, a
correct prediction requires only one read of the prediction
array (at fetch time) and (at most) one write of the hysteresis
array (at commit time). A misprediction leads to a read
of the hysteresis array followed by possible updates of the
prediction and hysteresis arrays.
4.4 Sharing a hysteresis bit between several counters
Using partial update naturally leads to a physical implementation
of the branch predictor as two different memory
arrays, a prediction array and a hysteresis array.
For the Alpha EV8, silicon area and chip layout constraints
allowed less space for the hysteresis memory array
than the prediction memory array. Instead of reducing the
size of the prediction array, it was decided to use half size
hysteresis tables for components G0 and Meta. As a result,
two prediction entries share a single hysteresis entry: the
prediction table and the hysteresis table are indexed using
the same index function, except the most significant bit.
Consequently, the hysteresis table suffers from more
aliasing than the prediction table. For instance, the following
scenario may occur. Prediction entries A and B share
the same hysteresis entry. Both (address, history) pairs associated
with the entries are strongly biased, but B remains
always wrong due to continuous resetting of the hysteresis
bit by (address, history) pair associated with A. While such
a scenario certainly occurs, it is very rare: any two consecutive
accesses to B without intermediate access to A will
allow B to reach the correct state. Moreover, the partial up-date
policy implemented on the EV8 branch predictor limits
the number of writes on the hysteresis tables and therefore
decreases the impact of aliasing on the hysteresis tables.
4.5 History lengths
Previous studies of the skewed branch predictor [15] and
the hybrid skewed branch predictor [19] assumed that tables
G0 and G1 were indexed using different hashing functions
on the (address, history) pair but with the same history
length used for all the tables. Using different history lengths
for the two tables allows slightly better behavior. Moreover
as pointed out by Juan et al. [12], the optimal history length
for a predictor varies depending on the application. This
phenomenon is less important on a hybrid predictor featuring
a bimodal table as a component. Its significance is further
reduced on 2Bc-gskew if two different history lengths
are used for tables G0 and G1. A medium history length
can be used for G0 while a longer history length is used for
G1.
Table 1. Characteristics of the Alpha EV8 branch predictor
                    BIM    G0    G1   Meta
prediction table    16K   64K   64K   64K
hysteresis table    16K   32K   64K   32K
history length        4    13    21    15
4.6 Different prediction table sizes
In most academic studies of multiple table predictors
[15, 13, 14, 19], the sizes of the predictor tables are considered
equal. This is convenient for comparing different
prediction schemes. However, for the design of a real predictor
in hardware, the overall design space has to be ex-
plored. Equal table sizes in the 2Bc-gkew branch predictor
is a good trade-off for small size predictors (for instance
4*4K entries). However, for very large branch predictors
(i.e 4 * 64K entries), the bimodal table BIM is used very
sparsely since each branch instruction maps onto a single
entry.
Consequently, the large branch predictor used in EV8
implements a BIM table smaller than the other three components.
4.7 The EV8 branch predictor configuration
The Alpha EV8 implements a very large 2Bc-gskew pre-
dictor. It features a total of 352 Kbits of memory, consisting
of 208 Kbits for prediction and 144 Kbits for hysteresis. Design
space exploration led to the table sizes indexed with
different history lengths as listed in Table 1. It may be re-marked
that the table BIM (originally the bimodal table) is
indexed using a 4-bit history length. This will be justified
when implementation constraints are discussed in Section
7.
5 Path and branch outcome information
The accuracy of a branch predictor depends both on the
prediction scheme and predictor table sizes as well as on the
information vector used to index it. This section describes
how pipeline constraints lead to the effective information
vector used for indexing the EV8 Alpha branch predictor.
This information vector combines the PC address, a compressed
form of the three fetch blocks old branch and path
history, and path information from the three last blocks.
5.1 Three fetch blocks old block compressed his-
tory
Three fetch blocks old history Information used to read
the predictor tables must be available at indexing time. On
the Alpha EV8, the branch predictor has a latency of t-
wo cycles and two blocks are fetched every cycle. Fig. 1
shows that the branch history information used to predict
a branch outcome in block D can not include any (specu-
lative) branch outcome from conditional branches in block
D itself, and also from blocks C, B and A. Thus the EV8
branch predictor can only be indexed using a three fetch
blocks old branch history (i.e updated with history information
from Z) for predicting branches in block D.
Block compressed history lghist When a single branch is
predicted per cycle, at most one history bit has to be shifted
in the global history register on every cycle. When up to
16 branches are predicted per cycle, up to 16 history bits
have to be shifted in the history on every cycle. Such an
update requires complex circuitry. On the Alpha EV8, this
complex history-register update would have stressed critical
paths to the extent that even older history would have had
to be used (five or even seven-blocks old).
Instead, just a single history bit is inserted per fetch block
[5]. The inserted bit combines the last branch outcome with
path information. It is computed as follows: whenever at
least one conditional branch is present in the fetch block,
the outcome of the last conditional branch in the fetch block
(1 for taken, 0 for not-taken) is exclusive-ORed with bit
4 in the PC address of this last branch. The rationale for
exclusive-ORing the branch outcome with a PC bit is to get a
more uniform distribution of history patterns for an appli-
cation. Highly optimized codes tend to exhibit less taken
branches than not-taken branches. Therefore, the distribution
of "pure" branch history outcomes in those applications
is non-uniform.
While using a single history bit was originally thought
of as a compromising design trade-off - since it is possible
to compress up to 8 history bits into 1 - Section 8 shows
that it does not have significant effect on the accuracy of the
branch predictor.
Notation The block compressed history defined above
will be referred to as lghist.
5.2 Path information from the three last fetch
blocks
Due to EV8 pipeline constraints (Section 2), three fetch-
blocks old lghist is used for the predictor. Although, no
branch history information from these three blocks can be
used, their addresses are available for indexing the branch
predictor. The addresses of the three previous fetch blocks
are used in the index functions of the predictor tables.
5.3 Using very long history
The Alpha EV8 features a very large branch predictor
compared to those implemented in previous generation mi-
croprocessors. Most academic studies on global history
branch predictors have assumed that the length of the global
history is smaller or equal to log 2 of the number of entries
of the branch predictor table. For the size of predictor used
in Alpha EV8, this is far from optimal even when using
lghist. For example, when considering "not compressed"
branch history for a 4*64K 2-bit entries 2Bc-gskew predic-
tor, using equal history length for G0, G1 and Meta, history
length 24 was found to be a good design point. When considering
different history lengths, using 17 for G0, 20 for
Meta and 27 for G1 was found to be a good trade-off.
For the same predictor configuration with three fetch
blocks old lghist, slightly shorter length was found to be
the best performing. However, the optimal history length
is still longer than log 2 of the size of the branch predictor
table: for the EV8 branch predictor 21 bits of lghist history
are used to index table G1 with 64K entries.
In Section 8, we show empirically that for large predic-
tors, branch history longer than log 2 of the predictor table
size is almost always beneficial.
6 Conflict-free access to the bank-interleaved branch predictor
Up to 16 branch predictions from two fetch blocks must
be computed in parallel on the Alpha EV8. Normally, since
the addresses of the two fetch blocks are independent, each
of the branch predictor tables would have had to support two
independent reads per cycle. Therefore the predictor tables
would have had to be multi-ported, dual-pumped or bank-
interleaved. This section presents a scheme that allowed the
implementation of the EV8 branch predictor as 4-way bank
interleaved using only single-ported memory cells. Bank
conflicts are avoided by construction: the predictions associated
with two dynamically successive fetch blocks are
assured to lie in two distinct banks in the predictors.
6.1 Parallel access to predictions associated with
a single block
Parallel access to all the predictions associated with a single
fetch block is straightforward. The prediction tables
in the Alpha EV8 branch predictor are indexed based on a
hashing function of address, three fetch blocks old lghist
branch and path history, and the three last fetch block ad-
dresses. For all the elements of a single fetch block, the
same vector of information (except bits 2, 3 and 4 of the
PC address) is used. Therefore, the indexing functions used
guarantee that eight predictions lie in a single 8-bit word in
the tables.
6.2 Guaranteeing two successive non-conflicting accesses
The Alpha EV8 branch predictor must be capable of delivering
predictions associated with two fetch blocks per
clock cycle. This typically means the branch predictor must
be multi-ported, dual-pumped or bank interleaved.
On the Alpha EV8 branch predictor, this difficulty is
circumvented through a bank number computation. The
bank number computation described below guarantees by
construction that any two dynamically successive fetch
blocks will generate accesses to two distinct predictor
banks. Therefore, bank conflicts never occur. Moreover,
the bank number is computed on the same cycle as the
address of the fetch block is generated by the line predictor,
thus no extra delay is added to access the branch predictor
tables (Fig. 3).
Figure 3. Flow of the branch predictor tables read access (bank number computation and bank selection for blocks A and B, followed by wordline selection, column selection, final PC selection and completion of PC address generation, across cycles 0 to 2)
The implementation of the bank number
computation is defined below:
let BA be the bank number for instruction fetch block A,
let Y, Z be the addresses of the two previous access slots,
let BZ be the number of the bank accessed by instruction
fetch block Z, let (y52, y51, ..., y6, y5, y4, y3, y2, 0, 0) be the binary
representation of address Y; then BA is computed as
follows:
if ((y6, y5) == BZ) then BA = (y6, y5 ⊕ 1) else BA = (y6, y5)
This computation guarantees the prediction for a fetch
block will be read from a different bank than that of the
previous fetch block. The only information bits needed to
compute the bank numbers for the two next fetch blocks A
and B are bits (y6,y5), (z6,z5) and BZ : that is two-block
ahead [18] bank number computation. These information
bits are available one cycle before the effective access on
the branch predictor is performed and the required computations
are very simple. Therefore, no delay is introduced
on the branch predictor by the bank number computation.
In fact, bank selection can be performed at the end of Phase
1 of the cycle preceding the read of the branch predictor
tables.
7 Indexing the branch predictor
As previously mentioned, the Alpha EV8 branch predictor
is 4-way interleaved and the prediction and hysteresis
tables are separate. Since the logical organization of
the predictor contains the four 2Bc-gskew components, this
should naturally translate to an implementation with separate
memory tables per component. However, the Alpha EV8 branch predictor only implements
eight memory arrays: for each of the four banks
there is an array for prediction and an array for hysteresis.
Each word line in the arrays is made up of the four logical
predictor components.
This section presents the physical implementation of the
branch predictor arrays and the constraints they impose on
the composition of the indexing functions. The section also
includes detailed definition of the hashing functions that
were selected for indexing the different logical components
in the Alpha EV8 branch predictor.
7.1 Physical implementation and constraints
Each of the four banks in the Alpha EV8 branch predictor
is implemented as two physical memory arrays: the
prediction memory array and the hysteresis memory array.
Each word line in the arrays is made up of the four logical
predictor components.
Each bank features 64 word lines. Each word line contains
32 8-bit prediction words from each of G0, G1 and Meta, and
8 8-bit prediction words from BIM. A single 8-bit prediction
word is selected from the word line from each predictor
table G0, G1, Meta and BIM. A prediction read spans over
3 half cycle phases (5 phases including bank number computation
and bank selection). This is illustrated in Fig. 3
and 4. A detailed description is given below.
1. Wordline selection: one of the 64 wordlines of the accessed
bank is selected. The four predictor components
share the 6 address bits needed for wordline selection. Fur-
thermore, these 6 address bits can not be hashed since the
wordline decode and array access constitute a critical path
for reading the branch prediction array; consequently, the
inputs to the decoder must be available at the very beginning of
the cycle.
Figure 4. Reading the branch prediction table (bank selection; wordline selection, 1 out of 64; column selection, 1 out of 8 for BIM and 1 out of 32 for G0, G1 and Meta; then an 8-to-8 unshuffle permutation)
2. Column selection: each wordline consists of multiple
8-bit prediction entries of the four logical predictor tables.
One 8-bit prediction word is selected for each of the logical
predictor tables. As only one cycle phase is available to
compute the index in the column, only a single 2-entry XOR
gate is allowed to compute each of these column bits.
3. Unshuffle: 8-bit prediction words are systematically
read. Each word is further rearranged through an XOR permutation
(that is, the bit at position i is moved to position i ⊕ f).
This final permutation ensures a larger dispersion of the
predictions over the array (only entries corresponding to a
branch instruction are finally useful). It also allows discrimination
between longer histories for the same branch PC,
since the computation of the parameter f for the XOR permutation
can span over a complete cycle: each bit of f can
be computed by a large tree of XOR gates.
Notations The three fetch-blocks old lghist history will
be denoted H = (h20, ..., h0). A = (a52, ..., a2, 0, 0) is the address
of the fetch block. Z and Y are the addresses of the two previous
fetch blocks. In the index of a table, (i1, i0) is
the bank number, (i4, i3, i2) is the offset in the word,
(i10, i9, i8, i7, i6, i5) is the line number, and the highest order
bits are the column number.
7.2 General philosophy for the design of indexing
functions
When defining the indexing functions, we tried to apply
two general principles while respecting the hardware implementation
constraints. First, we tried to limit aliasing
as much as possible on each individual table by picking individual
indexing function that would spread the accesses
over the predictor table as uniformly as possible. For each
individual function, this normally can be obtained by mixing
a large number of bits from the history and from the address
to compute each individual bit in the function. How-
ever, general constraints for computing the indexing functions
only allowed such complex computations for the un-
shuffle bits. For the other indexing bits, we favored the use
of lghist bits instead of the address bits. Due to the inclusion
of path information in lghist, lghist vectors were more
uniformly distributed than PC addresses. In [17], it was
pointed out that the indexing functions in a skewed cache
should be chosen to minimize the number of block pairs
that will conflict on two or more ways. The same applies
for the 2Bc-gskew branch predictor.
7.3 Shared bits
The indexing functions for the four prediction tables
share a total of 8 bits, the bank number (2 bits) and the
wordline number (i10, ..., i5). The bank number computation
was described in Section 6.
The wordline number must be immediately available at
the very beginning of the branch predictor access. There-
fore, it can either be derived from information already available
earlier, such as the bank number, or directly extracted
from information available at the end of the previous cycle
such as the three fetch blocks old lghist and the fetch block
address.
The fetch block address is the most natural choice, since
it allows the use of an effective bimodal table for component
BIM in the predictor. However, simulations showed that the
distribution of the accesses over the BIM table entries was
unbalanced. Some regions in the predictor tables were used
infrequently and others were congested.
Using a mix of lghist history bits and fetch block address
bits leads to a more uniform use of the different word
lines in the predictor, thus allowing overall better predictor
performance. As a consequence, component BIM in the
branch predictor uses 4 bits of history in its indexing function.
The wordline number used is given by
(i10, i9, i8, i7, i6, i5) = (h3, h2, h1, h0, a8, a7).
7.4 Indexing BIM
The indexing function for BIM is already using 4 history
bits, which are three fetch blocks old, and some path
information from two fetch blocks ahead (for bank number
computation). Therefore path information from the last
instruction fetch block (that is, Z) is used. The extra bits for
indexing BIM are (i13, i12, i11, i4, i3, i2) = (a11, a9⊕a5, a10⊕a6, a4, a3⊕z6, a2⊕z5).
7.5 Engineering the indexing functions for G0, G1
and Meta
The following methodology was used to define the indexing
functions for G0, G1 and Meta. First, the best history
length combination was determined using standard skewing
functions from [17]. Then, the column indices and the
XOR functions for the three predictors were manually defined
applying the following principles as best as we could:
1. favor a uniform distribution of column numbers for the
choice of wordline index.
Column index bits must be computed using only one two-
entry XOR gate. Since history vectors are more uniformly
distributed than address numbers, to favor an overall good
distribution of column numbers, history bits were generally
preferred to address bits.
2. if, for the same instruction fetch block address A, two
histories differ by only one or two bits then the two occurrences
should not map onto the same predictor entry in any
table: to guarantee this, whenever an information bit is XORed
with another information bit for computing a column
bit, at least one of them will appear alone for the computation
of one bit of the unshuffle parameter.
3. if a conflict occurs in a table, then try to avoid it on the
two other tables: to approximate this, different pairs of history
bits are XORed for computing the column bits for the
three tables.
This methodology led to the design of the indexing
functions defined below:
Indexing G0 To simplify the implementation of column selectors,
G0 and Meta share i15 and i14. Column selection is given by
(i15, i14, i13, i12, i11) = (h7⊕h11, h8⊕h12, h4⊕h5, a9⊕
Unshuffling is defined by (i4, i3, i2) = (a4⊕a9⊕a13⊕
⊕a5, a2⊕a14⊕a10⊕h6⊕h4⊕h7⊕a6).
Indexing G1 Column selection is given by
(i15, i14, i13, i12, i11) = (h19⊕h12, h18⊕h11, h17⊕h10, h16⊕h4, h15⊕h20).
Unshuffling is defined by (i4, i3, i2) =
(a4⊕a11⊕a14⊕a6⊕h4⊕h6⊕h9⊕h14⊕h15⊕h16⊕z6,
Indexing Meta Column selection is given by
(i15, i14, i13, i12, i11) = (h7⊕h11, h8⊕h12, h5⊕h13, h4⊕h9, a9⊕h6).
Unshuffling is defined by (i4, i3, i2) = (a4⊕a10⊕a5⊕h7⊕h10⊕h14⊕h13⊕z5, a3⊕a12⊕a14⊕a6⊕h4⊕h6
8 Evaluation
In this section, we evaluate the different design decisions
that were made in the Alpha EV8 predictor design. We first
justify the choice of the hybrid skewed predictor 2Bc-gskew
against other schemes relying on global history. Then step
by step, we analyze benefits or detriments brought by design
decisions and implementation constraints.
8.1 Methodology
8.1.1 Simulation
Trace driven branch simulations with immediate update
were used to explore the design space for the Alpha EV8
branch predictor, since this methodology is about three orders
of magnitude faster than the complete Alpha EV8 processor
simulation. We checked that for branch predictors
using (very) long global history as those considered in this
study, the relative error in number of branch mispredictions
between a trace driven simulation, assuming immediate
update, and the complete simulation of the Alpha EV8, assuming
predictor update at commit time, is insignificant.
The metric used to report the results is mispredictions
per 1000 instructions (misp/KI). To experiment with history
length wider than log 2 of table sizes, indexing functions
from the family presented in [17, 15] were used for all pre-
dictors, except in Section 8.5. The initial state of all entries
in the prediction tables was set to weakly not taken.
8.1.2 Benchmark set
Displayed simulation results were obtained using traces collected
with Atom[23]. The benchmark suite was SPECIN-
T95. Binaries were highly optimized for the Alpha 21264
using profile information from the train input. The traces
were recorded using the ref inputs. One hundred million
instructions were traced after skipping 400 million instructions
except for compress (2 billion instructions were
skipped). Table 2 details the characteristics of the benchmark
traces.
8.2 2Bc-gskew vs other global history based predictor
We first validated the choice of the 2BC-gskew prediction
scheme against other global prediction schemes. Fig. 5
shows simulation results for predictors with memorization
size in the same range as the Alpha EV8 predictor. Displayed
results assume conventional branch history. For all
the predictors, the best history length results are presented.
Fig. 6 shows the number of additional mispredictions for
the same configurations as in Fig. 5 but using log 2 of the
table size, instead of the best history length.
The illustrated configurations are:
• a 4*32K entries (i.e. 256 Kbits) 2Bc-gskew using history
lengths 0, 13, 16 and 23 respectively for BIM,
G0, Meta and G1, and a 4*64K entries (i.e. 512 Kbits)
2Bc-gskew using history lengths 0, 17, 20 and 27. For
log2 history length, the lengths are equal for
all tables and are 15 for the 256 Kbit configuration and
16 for the 512 Kbit.
• a bimode predictor [13] consisting of two 128K-entry
tables for respectively biased taken and not taken
branches and a 16K-entry bimodal table, for a total
of 544 Kbits of memorization (see footnote 1). The optimum history
length (for our benchmark set) was 20. For log2 history
length, 17 bits were used.
Footnote 1: The original proposition for the bimode predictor assumes equal sizes
for the three tables. For large size predictors, using a smaller bimodal table
is more cost-effective. On our benchmark set, using more than 16K entries
in the bimodal table did not add any benefit.
Table 2. Benchmark characteristics
Benchmark                      compress    gcc     go   ijpeg     li  m88ksim   perl  vortex
dyn. cond. branches (x1000)       12044  16035  11285    8894  16254     9706  13263   12757
static cond. branches                46  12086   3710     904    251      409    273    2239
Figure 5. Branch prediction accuracy for various global history schemes
Figure 6. Additional mispredictions when using log2 table size history
• a 1M entries (2 Mbits) gshare. The optimum history
length (on our benchmark set) was 20 (i.e. log2 of the
predictor table size).
• a 288 Kbits and a 576 Kbits YAGS predictor [4] (respective
best history lengths 23 and 25). The small configuration
consists of a 16K-entry bimodal table and two 16K-entry
partially tagged tables called direction caches; tags are
6 bits wide. When the bimodal table predicts taken
(resp. not-taken), the not-taken (resp. taken) direction
cache is searched. On a miss in the searched direction
cache, the bimodal table provides the prediction. On
a hit, the direction cache provides the prediction. For
log2 history length, 14 bits (resp. 15 bits) were used.
Figure 7. Impact of the information vector on branch prediction accuracy
First, our simulation results confirm that, at equivalent
memorization budget 2Bc-gskew outperforms the other
global history branch predictors except YAGS. There is no
clear winner between the YAGS predictor and 2Bc-gskew.
However, the YAGS predictor uses (partially) tagged arrays.
Reading and checking 16 of these tags in only one and a half
cycles would have been difficult to implement. Second, the
data support that predictors featuring a large number of entries
need very long history lengths and that log2 table size history
is suboptimal.
8.3 Quality of the information vector
The discussion below examines the impact of successive
modifications of the information vector on branch prediction
accuracy assuming a 4*64K entries 2Bc-gskew predic-
tor. For each configuration the accuracies for the best history
lengths are reported in Fig. 7. ghist represents the
conventional branch history. lghist,no path assumes that
lghist does not include path information. lghist+path includes
path information. 3-old lghist is the same as before,
but considering three fetch blocks old history. EV8 info vector
represents the information vector used on Alpha EV8,
that is three fetch blocks old lghist history including path
information plus path information on the three last blocks.
lghist As expected the optimal lghist history length is
shorter than the optimal real branch history: (15, 17, 23)
instead of (17, 20, 27) respectively for tables G0, Meta and
G1. Quite surprisingly (see Fig. 7), lghist has the same performance
as conventional branch history. Depending on the
application, there is either a small loss or a small benefit in
accuracy. Embedding path information in lghist is generally
beneficial: we determined that it is more often useful to
de-alias otherwise aliased history paths.
Figure 8. Adjusting table sizes in the predictor
The loss of information from branches in the same fetch
block in lghist is balanced by the use of history from more
branches (even though represented by a shorter information
vector): for instance, for vortex the 23 lghist bits represent
on average 36 branches. Table 3 represents the average
number of conditional branches represented by one bit in
lghist for the different benchmarks.
Three fetch blocks old history Using three fetch blocks
old history slightly degrades the accuracy of the predictor,
but the impact is limited. Moreover, using path information
from the three fetch blocks missing in the history consistently
recovers most of this loss.
EV8 information vector In summary, despite the fact
that the vector of information used for indexing the Alpha
branch predictor was largely dictated by implementation
constraints, on our benchmark set this vector of information
achieves approximately the same levels of accuracy
as without any constraints.
8.4 Reducing some table sizes
Fig. 8 shows the effect of reducing table sizes. The
base configuration is a 4*64K entries 2Bc-gskew predictor
(512Kbits). The data denoted by small BIM shows the performance
when the BIM size is reduced from 64K to 16K
2-bit counters. The performance with a small BIM and half
the size for G0 and Meta hysteresis tables is denoted by
EV8 Size. The latter fits the 352Kbits budget of the Alpha
EV8 predictor. The information vector used for indexing
the predictor is the information vector used on Alpha EV8.
Reducing the size of the BIM table has no impact at all
on our benchmark set. Except for go, the effect of using half
size hysteresis tables for G0 and Meta is barely noticeable.
go presents a very large footprint and consequently is the
most sensitive to size reduction.
Figure 9. Effect of wordline indices
8.5 Indexing function constraints
Simulation results presented so far did not take into account
hardware constraints on the indexing functions: 8 bits
of index must be shared and can not be hashed, and computation
of the column bits can only use one 2-entry XOR gate.
Intuitively, these constraints should lead to some loss of ef-
ficiency, since they restrict the possible choices for indexing
functions.
However, it was remarked in [16] that (for caches) partial
skewing is almost as efficient as complete skewing. The
same applies for branch predictors: sharing 8 bits in the
indices does not hurt the prediction accuracy as long as the
shared index is uniformly distributed.
The constraint of using unhashed bits for the wordline
number turned out to be more critical, since it restricted the
distribution of the shared index. Ideally for the EV8 branch
predictor, one would desire to get the distribution of this
shared 8 bit index as uniform as possible to spread accesses
on G0, G1 and Meta over the entire table.
Fig. 9 illustrates the effects of the various choices made
for selecting the wordline number. address only, no path
assumes that only PC address bits are used in the shared
index and that no path information is used in lghist. address
only, path assumes that only PC address bits are used in the
shared index, but path information is embedded in lghist.
no path assumes 4 history bits and 2 PC bits as wordline
number, but that no path information is used in lghist. EV8
illustrates the accuracy of the Alpha EV8 branch predictor
where 4 history bits are used in the wordline number index
and path information is embedded in the history. Finally
complete hash recalls the results assuming hashing on all
the information bits and 4*64K 2Bc-gskew ghist represents
the simulation results assuming a 512Kbits predictor with
no constraint on index functions and conventional branch
history.
It was previously noted that incorporating path information
in lghist has only a small impact on a 2Bc-gskew predictor
indexed using hashing functions with no hardware
constraints. However, adding the path information in the
history for the Alpha EV8 predictor makes the distribution
of lghist more uniform, allows its use in the shared index
and therefore can increase prediction accuracy.
Table 3. Ratio lghist/ghist
Benchmark      compress    gcc     go   ijpeg     li   m88ksim   perl   vortex
lghist/ghist       1.24   1.57   1.12    1.20   1.55      1.53   1.32     1.59
Figure 10. Limits of using global history
The constraint on the column bits computation indirectly
achieved a positive impact by forcing us to very carefully
design the column indexing and the unshuffle functions.
The (nearly) total freedom for computing the unshuffle was
fully exploited: 11 bits are XORed in the unshuffling function
on table G1. The indexing functions used in the final
design outperform the standard hashing functions considered
in the rest of the paper: these functions (originally defined
for skewed associative caches [17]) exhibit good inter-bank
dispersion, but were not manually tuned to enforce the
three criteria described in Section 7.5.
To summarize, the 352 Kbits Alpha EV8 branch predictor
stands the comparison against a 512 Kbits 2Bc-gskew
predictor using conventional branch history.
9 Conclusion
The branch predictor on Alpha EV8 was defined at the
beginning of 1999. It features 352 Kbits of memory and delivers
up to 16 branch predictions per cycle for two dynamically
successive instruction fetch blocks. Therefore, a global
history prediction scheme had to be used. In 1999, the hybrid
skewed branch predictor 2Bc-gskew prediction scheme
[19] represented state-of-the-art for global history prediction
schemes. The Alpha EV8 branch predictor implements
a 2Bc-gskew predictor scheme enhanced with an optimized
update policy and the use of different history lengths on the
different tables.
Some degrees of freedom in the definition of 2Bc-gskew
were tuned to adapt the predictor parameters to silicon area
and chip layout constraints: the bimodal component is smaller
than the other components and the hysteresis tables
of two of the other components are only half the size of the prediction
tables.
Implementation constraints imposed a three fetch blocks
old compressed form of branch history, lghist, instead of the
effective branch history. However, the information vector
used to index the Alpha EV8 branch predictor stands the
comparison with complete branch history. It achieves that
by combining path information with the branch outcome to
build lghist and using path information from the three fetch
blocks that have to be ignored in lghist.
The Alpha EV8 branch predictor is four-way interleaved and each bank
is single ported. On each cycle, the branch predictor supports
requests from two dynamically successive instruction
fetch blocks but does not require any hardware conflict res-
olution, since bank number computation guarantees by construction
that any two dynamically successive fetch blocks
will access two distinct predictor banks.
The Alpha EV8 branch predictor features four logical
components, but is implemented as only two memory ar-
rays, the prediction array and the hysteresis array. There-
fore, the definition of index functions for the four (logi-
cal) predictor tables is strongly constrained: 8 bits must
be shared among the four indices. Furthermore, timing
constraints restrict the complexity of hashing that can be
applied for indices computation. However, efficient index
functions turning around these constraints were designed.
Despite implementation and size constraints, the Alpha
EV8 branch predictor delivers accuracy equivalent to
a 4*64K entries 2Bc-gskew predictor using conventional
branch history for which no constraint on the indexing functions
was imposed.
In future generation microprocessors, branch prediction
accuracy will remain a major issue. Even larger predictors
than the predictor implemented in the Alpha EV8 may
be considered. However, this brute force approach would
have limited return except for applications with a very large
number of branches. This is exemplified on our benchmark
set in Fig. 10 that shows simulation results for a
4*1M 2-bit entries 2Bc-gskew predictor. Adding back-up
predictor components [3] relying on different information
vector types (local history, value prediction [9, 6]) or new
prediction concepts (e.g., perceptrons [11]) to tackle hard-
to-predict branches seems more promising. Since such a
predictor will face timing constraints issues, one may consider
further extending the hierarchy of predictors with increased
accuracies and delays: line predictor, global history
branch prediction, backup branch predictor. The backup
branch predictor would deliver its prediction later than the
global history branch predictor.
--R
Next cache line and set prediction.
Compaq Chooses SMT for Alpha.
The cascaded predictor: Economical and adaptive branch target prediction.
The YAGS branch predictor.
Method and apparatus for predicting multiple conditional branches.
Digital 21264 sets new standard.
The effect of speculatively updating branch history on branch prediction accuracy.
Improving branch predictors by correlating on data values.
Branch prediction and simultaneous multithreading.
Dynamic branch prediction with perceptrons.
Dynamic history-length fitting: A third level of adaptivity for branch prediction.
Combining branch predictors.
Trading conflict and capacity aliasing in conditional branch predictors.
A case for two-way skewed-associative caches
Skewed associative caches.
Speculative updates of local and global branch history: A quantitative analysis.
A study of branch prediction strategies.
The agree predictor: A mechanism for reducing negative branch history interference.
ATOM: A system for building customized program analysis tools.
The influence of branch prediction table interference on branch prediction scheme performance.
Simultaneous multithreading: Maximizing on-chip parallelism
Exploiting choice
Alternative implementations of two-level adaptive branch prediction
A comparative analysis of schemes for correlated branch prediction.
--TR
Alternative implementations of two-level adaptive branch prediction
A case for two-way skewed-associative caches
The effect of speculatively updating branch history on branch prediction accuracy, revisited
A comparative analysis of schemes for correlated branch prediction
Next cache line and set prediction
Simultaneous multithreading
The influence of branch prediction table interference on branch prediction scheme performance
Exploiting choice
Multiple-block ahead branch predictors
The agree predictor
Trading conflict and capacity aliasing in conditional branch predictors
The bi-mode branch predictor
Dynamic history-length fitting
The YAGS branch prediction scheme
The cascaded predictor
Improving branch predictors by correlating on data values
Skewed-associative Caches
A study of branch prediction strategies
Control-Flow Speculation through Value Prediction for Superscalar Processors
Dynamic Branch Prediction with Perceptrons
Branch Prediction and Simultaneous Multithreading
--CTR
Wei Zhang , Bramha Allu, Loop-based leakage control for branch predictors, Proceedings of the 2004 international conference on Compilers, architecture, and synthesis for embedded systems, September 22-25, 2004, Washington DC, USA
Kristopher C. Breen , Duncan G. Elliott, Aliasing and anti-aliasing in branch history table prediction, ACM SIGARCH Computer Architecture News, v.31 n.5, p.1-4, December
Ayose Falcon , Jared Stark , Alex Ramirez , Konrad Lai , Mateo Valero, Better Branch Prediction Through Prophet/Critic Hybrids, IEEE Micro, v.25 n.1, p.80-89, January 2005
Andr Seznec , Antony Fraboulet, Effective ahead pipelining of instruction block address generation, ACM SIGARCH Computer Architecture News, v.31 n.2, May
Daniel Chaver , Luis Piuel , Manuel Prieto , Francisco Tirado , Michael C. Huang, Branch prediction on demand: an energy-efficient solution, Proceedings of the international symposium on Low power electronics and design, August 25-27, 2003, Seoul, Korea
Jamison D. Collins , Dean M. Tullsen , Hong Wang, Control Flow Optimization Via Dynamic Reconvergence Prediction, Proceedings of the 37th annual IEEE/ACM International Symposium on Microarchitecture, p.129-140, December 04-08, 2004, Portland, Oregon
Daniel A. Jimenez, Piecewise Linear Branch Prediction, ACM SIGARCH Computer Architecture News, v.33 n.2, p.382-393, May 2005
Amirali Baniasadi , Andreas Moshovos, SEPAS: a highly accurate energy-efficient branch predictor, Proceedings of the 2004 international symposium on Low power electronics and design, August 09-11, 2004, Newport Beach, California, USA
Andr Seznec , Eric Toullec , Olivier Rochecouste, Register write specialization register read specialization: a path to complexity-effective wide-issue superscalar processors, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey
Renju Thomas , Manoj Franklin , Chris Wilkerson , Jared Stark, Improving branch prediction by dynamic dataflow-based identification of correlated branches from a large global history, ACM SIGARCH Computer Architecture News, v.31 n.2, May
Wei Zhang , Bramha Allu, Reducing branch predictor leakage energy by exploiting loops, ACM Transactions on Embedded Computing Systems (TECS), v.6 n.2, p.11-es, May 2007
Chunrong Lai , Shih-Lien Lu , Yurong Chen , Trista Chen, Improving branch prediction accuracy with parallel conservative correctors, Proceedings of the 2nd conference on Computing frontiers, May 04-06, 2005, Ischia, Italy
Daniel A. Jimnez, Fast Path-Based Neural Branch Prediction, Proceedings of the 36th annual IEEE/ACM International Symposium on Microarchitecture, p.243, December 03-05,
Haitham Akkary , Srikanth T. Srinivasan , Konrad Lai, Recycling waste: exploiting wrong-path execution to improve branch prediction, Proceedings of the 17th annual international conference on Supercomputing, June 23-26, 2003, San Francisco, CA, USA
E. F. Torres , P. Ibanez , V. Vinals , J. M. Llaberia, Store Buffer Design in First-Level Multibanked Data Caches, ACM SIGARCH Computer Architecture News, v.33 n.2, p.469-480, May 2005
Abhas Kumar , Nisheet Jain , Mainak Chaudhuri, Long-latency branches: how much do they matter?, ACM SIGARCH Computer Architecture News, v.34 n.3, p.9-15, June 2006
Daniel A. Jimnez, Improved latency and accuracy for neural branch prediction, ACM Transactions on Computer Systems (TOCS), v.23 n.2, p.197-218, May 2005
Jamison Collins , Suleyman Sair , Brad Calder , Dean M. Tullsen, Pointer cache assisted prefetching, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey
David Tarjan , Kevin Skadron, Merging path and gshare indexing in perceptron branch prediction, ACM Transactions on Architecture and Code Optimization (TACO), v.2 n.3, p.280-300, September 2005
Andre Seznec, Analysis of the O-GEometric History Length Branch Predictor, ACM SIGARCH Computer Architecture News, v.33 n.2, p.394-405, May 2005
Ayose Falcon , Jared Stark , Alex Ramirez , Konrad Lai , Mateo Valero, Prophet/Critic Hybrid Branch Prediction, ACM SIGARCH Computer Architecture News, v.32 n.2, p.250, March 2004
Oliverio J. Santana , Alex Ramirez , Josep L. Larriba-Pey , Mateo Valero, A low-complexity fetch architecture for high-performance superscalar processors, ACM Transactions on Architecture and Code Optimization (TACO), v.1 n.2, p.220-245, June 2004
Hans Vandierendonck , Koen De Bosschere, XOR-Based Hash Functions, IEEE Transactions on Computers, v.54 n.7, p.800-812, July 2005
Yuan Xie , Gabriel H. Loh , Bryan Black , Kerry Bernstein, Design space exploration for 3D architectures, ACM Journal on Emerging Technologies in Computing Systems (JETC), v.2 n.2, p.65-103, April 2006
Alex Ramirez , Oliverio J. Santana , Josep L. Larriba-Pey , Mateo Valero, Fetching instruction streams, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey
Bradford M. Beckmann , David A. Wood, Managing Wire Delay in Large Chip-Multiprocessor Caches, Proceedings of the 37th annual IEEE/ACM International Symposium on Microarchitecture, p.319-330, December 04-08, 2004, Portland, Oregon
Philo Juang , Kevin Skadron , Margaret Martonosi , Zhigang Hu , Douglas W. Clark , Philip W. Diodato , Stefanos Kaxiras, Implementing branch-predictor decay using quasi-static memory cells, ACM Transactions on Architecture and Code Optimization (TACO), v.1 n.2, p.180-219, June 2004 | EV8 processor;branch prediction |
545273 | A practical model for hair mutual interactions. | Hair exhibits strong anisotropic dynamic properties which demand distinct dynamic models for single strands and hair-hair interactions. While a single strand can be modeled as a multibody open chain expressed in generalized coordinates, modeling hair-hair interactions is a more difficult problem. A dynamic model for this purpose is proposed based on a sparse set of guide strands. Long range connections among the strands are modeled as breakable static links formulated as nonreversible positional springs. Dynamic hair-to-hair collision is solved with the help of auxiliary triangle strips among nearby strands. Adaptive guide strands can be generated and removed on the fly to dynamically control the accuracy of a simulation. A high-quality dense hair model can be obtained at the end by transforming and interpolating the sparse guide strands. Fine imagery of the final dense model is rendered by considering both primary scattering and self-shadowing inside the hair volume which is modeled as being partially translucent. | In this paper, we focus on the dynamics of long hair, and hair mutual
interactions in particular. Hair has highly anisotropic dynamic
properties, i.e. hair strands are extremely hard to stretch but free to
move laterally and interact with each other irregularly. Strands cannot
penetrate each other when they intersect; yet, each strand does
not have a fixed set of neighboring strands. These unique properties
inform us that a custom designed dynamic model is necessary
to achieve realistic results.
The dynamics of long hair involve three aspects. First, an individual
hair strand can deform and interact with the scalp, cloth and
other objects. Second, an initial hairstyle can usually be recovered
after subsequent head movement and the application of external
force fields. This means a hairstyle can memorize its original con-
figuration. Slight movement does not erase this memory. However,
radical movement may permanently damage this memory and no
complete recovery is possible. Third, there are dynamic collisions
among different strands. A real person can have as many as 100,000
hairs. Each hair can be modeled as dozens of hair segments. Directly
detecting pairwise collisions among hair segments is neither
necessary nor computationally practical. Hair usually forms clusters
and layers. Because of static charges and other forces, hairs in
the same cluster or layer stick to each other. Therefore, we should
model hair collisions at a higher abstraction level.
In this paper, we design an integrated sparse model for hair dynamics
considering the aspects mentioned above. Specifically, this
model has the following features: i) an initial hair connection model
that allows hairstyle recovery after minor movement, ii) a hair mutual
collision model that considers the hair volume as a collection of
continuous strips, iii) an adaptive hair generation scheme to complement
our sparse hair model. Since we adopt a dynamic hair
model consisting of layers and clusters, solving physical interactions
among them is computationally efficient without losing much
of the quality from a dense model.
1.1 Related Work
We limit the overview to the previous work on hair dynamics, focusing
on explicit hair models. In these models, each hair strand
is considered for shape and dynamics. They are more realistic and
especially suitable for dynamics of long hair. Rosenblum et al [22]
and Daldegan et al [4] used a mass-spring-hinge model to control
the position and orientation of hair strands. Anjyo et al [1] modeled
hair with a simplified cantilever beam and used one-dimensional
projective differential equation of angular momentum to animate
hair strand. Daldegan et al used sparse characteristic hairs to reduce
computations. None of these previous attempts considered
hair-hair interactions and hairstyle recovery after minor movement.
Individual hair dynamics was approximated using simplified models.
Recently, Hadap and Magnenat-Thalmann [10] proposed a novel
approach to model dense dynamic hair as continuum by using a
fluid model for lateral hair movement. Hair-hair collision is approximated
by the pressure term in fluid mechanics while friction
is approximated by viscosity. Single strand dynamics is solved using
the formulation of a multibody open chain. Hair-air interaction
is considered by integrating hairs with an additional fluid system for
the air. This work presented a promising and elegant model for hair
interactions. Nonetheless, it has some limitations; the gradient of
the pressure may generate a collision force along a direction incompatible
with the velocities of the colliding hairs because pressure is
defined as a function of the local density which has no knowledge
of the velocities.
Plante et al [21] proposed a wisps model for simulating interactions
inside long hair. Hair strands are clustered into wisps and
modeled as anisotropic viscous volumes. Each wisp volume consists
of a skeleton and a deformable envelope. The skeleton captures
the global motion of a wisp, while the envelope models the
local radial deformation. However, there is a lack of coherence
in motion among nearby wisps. Koh and Huang presented an approach
by explicitly modeling hair as a set of 2D strips [15]. Collisions
between hair strips are handled to create more realistic mo-
tion. One drawback with the 2D strip-based approach is that the
volumetric aspect of the hair is not captured. Other researchers also
tried to model and constrain hair using a single thin shell or multiple
head hull layers [13; 17].
Recently, there have been a few feature films, such as Final Fantasy
and Monsters Incorporated, with realistic hair simulations as
well as some commercial software packages, such as Shave[24]
and Shag[23], for hair simulations. Shave is considered the best
commercial hair modeling and simulation software in the industry.
However, its single-strand dynamics does not look realistic, and it
does not have hair-hair collision. Final Fantasy is the film with
the best simulations for long human hair. From the press releases,
Aki's hair was modeled as a whole deformable exterior surface and
some of the simulations were done using the Maya cloth plugin.
That means hairs are constrained around the surface to enable very
good hairstyle recovery, but much of the lateral freedom has been
lost. In many situations, the hair flows like a piece of cloth instead
of a set of individual stiff strands. On the other hand, Monsters
Incorporated has long fur simulation [7]. Each hair is considered
as particles linked in a chain by a set of stiff springs. A builder
or a small snippet of code is used to generate the inbetween hairs.
Hair-hair collision has not been considered.
2 Overview
Although this paper focuses on hair-hair interaction, modeling, simulation
and rendering are three inseparable stages for the production
of fine hair imagery. The input to our simulation algorithm is an initial
sparse hair model with a few hundred strands generated from a
previous hair modeling method [29]. Each strand from the sparse
model has multiple segments connected by vertices. Each strand
serves as the guide hair for a whole cluster and may have its distinct
curly features. The sparse model is then equipped with structural
elements needed for dynamic simulation. For example, each
vertex is considered as a rotational joint with a hinge. Connections
and triangular meshes among guide hairs are then built for simulating
hair-hair interactions. Such an enhanced model is then ready
for dynamic simulation. Note that these enhanced structures are
invisible, which means they are never visualized during hair rendering
although the effects they produce are incorporated into hair
motion. Once an animation sequence of the sparse model is gen-
erated, additional hairs are interpolated to produce a dense model
for final rendering. In the rendering stage, we consider both diffuse
and specular reflection as well as partial translucency of each strand
by integrating volume density rendering with a modified version of
the opacity shadow buffer algorithm [14].
3 Single Hair Strand Dynamics
There are a few techniques developed for modeling single hair strand
dynamics [22; 1; 4; 10]. Some of the previous work [22; 4] models
a single strand as particles connected with rigid springs. A hair
strand is approximated by a set of particles. Each particle has 3 degrees
of freedom, namely one translation and two angular rotations.
This method is simple and easy to implement. However, an individual
hair strand has very large tensile strength and hardly stretches under its
own weight and body forces. This property leads to stiff equations
which tend to cause numerical instability unless very small time
steps are used. We model each hair strand as a serial rigid multi-body
chain. There is a rotational joint between two adjacent seg-
ments, and translational motion is prohibited. A single chain can be
considered as a simple articulated body with joint constraints. Dynamic
formulations of articulated bodies are addressed in robotics
[5; 20] as well as graphics literature [27]. Both constrained dynamics
with Lagrange Multipliers [2] and generalized(or reduced)
coordinate formulation [5] can be used equally efficiently. The dynamics
of a serial multibody chain and its generalized coordinate
formulation have recently been applied to single hair simulation by
Hadap and Magnenat-Thalmann [10]. The main focus of our paper
is on hair-hair interaction; therefore, we describe the formulation
of the serial multibody chain and our adaptations briefly in this section.
3.1 Kinematic Equations
In our model, we assume that the twisting of a hair strand along its
axis is prohibited. This reduces each rotational joint in a strand to
have two degrees of freedom. A rotational joint can be decomposed
into two cascading one-dimensional revolute joints each of which
has a fixed rotation axis. The rotation angles at the 1D revolute
joints represent the set of generalized coordinates in a multibody
chain system. If a 1D revolute joint has a rotation axis ω and a point q on
the axis, the matrix transformation corresponding to a rotation around ω by
an angle θ can be given by the exponential map exp(ξθ) of the joint twist
ξ = (−ω × q, ω). Suppose a hair segment has n preceding 1D revolute
joints in the chain and a local frame is defined at the segment.
Assume the local-to-world transformation for this frame when all
preceding joint angles are zero is g_st(0). The updated local-to-world
transformation after a series of rotations θ_1, ..., θ_n at the n joints becomes
g_st(θ) = exp(ξ_1 θ_1) exp(ξ_2 θ_2) ··· exp(ξ_n θ_n) g_st(0).   (1)
Thus, given an arbitrary series of joint angles, the position of
every vertex in the chain can be obtained using this product of exponentials
of its preceding joints. The exponential map actually is
just another way of formulating a 4 × 4 homogeneous matrix. It
can be calculated in constant time [20]. Therefore, the whole chain
can be evaluated in linear time.
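As a concrete illustration of the product-of-exponentials evaluation in Eq. (1), the following Python/NumPy sketch rebuilds the rigid transform of each 1D revolute joint from its axis and a point on the axis, accumulates the product from the root outward, and applies it to the rest-pose vertices. The data layout (one entry per 1D joint, tagged with the first vertex it affects) is an assumption made for the example.

import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def revolute_exp(omega, q, theta):
    # 4x4 transform of a rotation by theta about the axis through q with unit direction omega
    K = skew(omega)
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    g = np.eye(4)
    g[:3, :3] = R
    g[:3, 3] = (np.eye(3) - R) @ q
    return g

def strand_vertices(rest_pts, joints, thetas):
    # rest_pts[k]: vertex position when all joint angles are zero
    # joints[j] = (omega, q, first_vertex): joint j precedes every vertex k >= first_vertex
    g = np.eye(4)
    out = [np.asarray(rest_pts[0], float)]
    j = 0
    for k in range(1, len(rest_pts)):
        while j < len(joints) and joints[j][2] <= k:
            omega, q, _ = joints[j]
            g = g @ revolute_exp(np.asarray(omega, float), np.asarray(q, float), thetas[j])
            j += 1
        p = g @ np.append(np.asarray(rest_pts[k], float), 1.0)
        out.append(p[:3])
    return out

Because each joint's exponential is a constant-time 4 x 4 product, the loop above evaluates the whole chain in linear time, as stated in the text.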
3.2 Dynamics of Hair Strand
Given the mapping in Eq. 1 which is from the set of generalized
coordinates (joint angles) to real 3D world coordinates, hair strand
simulation can be solved by integrating joint angular velocities and
accelerations. Forward dynamics of a single strand in terms of
joint angular velocities and accelerations can be solved using the
Articulated-Body Method [5] or Lagrange's equations for generalized
coordinates [20]. The former method is more efficient with a
linear time complexity.
Both external and internal forces are indispensable for single hair
dynamics. In this paper, hair-hair interactions are formulated as external
forces in addition to gravity. The actual form of these external
forces will be discussed in Section 4. At each joint of the
hair chain, there is also an internal actuator force to account for the
bending and torsional rigidity of the strand. We model the actuator
force as a hinge with a damping term as in [22]. Since our hair
model may have curly hair strands which means the strands are not
straight even without any external forces, we define a nonzero resting
position for each hinge. Any deviation from the resting position
results in a nonzero actuator force trying to reduce the amount of
deviation. This setup enables a strand to recover its original shape
after subsequent movement.
3.3 Strand-Body Collision
In order to simulate inelastic collision between the hair and the human
body, no repelling forces are introduced by the human body.
Once a hair vertex becomes close enough to the scalp or torso, it
is simply stopped by setting its own velocity to be the same as the
velocity of the human body while all the following vertices in the
multibody chain are still allowed to move freely. Any acceleration
towards the human body is also prohibited at the stopped vertices
which, however, are allowed to move away from or slide over the
human body. Frictional forces are added as well to those vertices
touching the human body. Collision detection is handled explicitly
by checking penetration of hair strand particles with the triangle
mesh of the body parts.
This scheme cannot guarantee that the hair vertices do not penetrate
other colliding surfaces in the middle of a time step. If penetration
does occur, we need to move the part of the penetrating
strand outside the surface in the same time step so that no penetration
can be actually observed. It is desirable that the tip of the hair,
if outside the surface, remains unchanged during this adjustment in
order to introduce minimal visual artifacts. To achieve this goal,
inverse kinematics [20] can be applied to adjust the positions of the
intermediate vertices between the tip and the adjusted locations of
the penetrating vertices. In our implementation we opt for a simpler
method using iterative local displacements. Starting from the root,
we move the first penetrating vertex p1 to its nearest valid location p̄1
outside the surface, and then propagate this displacement by moving the
subsequent vertices. More specifically, assuming the vertex following p1 is p2,
we compute the adjustment of p2 from the new location p̄1, which gives the
new location of p2 after the adjustment, and we repeat this for all the
vertices following p1 until reaching the tip.
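The following Python/NumPy sketch illustrates one way to implement the iterative local displacement just described, under the assumption that each segment keeps its original length while the adjustment is propagated toward the tip; project_out is an assumed helper that returns the nearest valid location outside the colliding surface.

import numpy as np

def resolve_penetration(verts, first_bad, project_out):
    # verts: vertex positions from root to tip; first_bad: index of the first
    # vertex found inside the body; project_out(p): nearest point outside the surface
    new = [np.asarray(v, float).copy() for v in verts]
    new[first_bad] = project_out(new[first_bad])
    for k in range(first_bad + 1, len(verts)):
        seg_len = np.linalg.norm(np.asarray(verts[k], float) - np.asarray(verts[k - 1], float))
        d = np.asarray(verts[k], float) - new[k - 1]   # aim from the adjusted parent toward the old position
        n = np.linalg.norm(d)
        if n > 1e-12:
            new[k] = new[k - 1] + seg_len * d / n      # keep the original segment length (assumed rule)
        new[k] = project_out(new[k])                   # also make sure this vertex ends up outside
    return new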
4 A Sparse Model for Hair-Hair Interaction
We devise a novel scheme to simulate only a sparse set of hair
strands for complex hair-hair interactions. We first introduce an
elastic model to preserve the relative positions of the hair strands.
The static links model the interaction of the hair due to interweav-
ing, static charges and hairstyling. Second, the hair-hair collision
and friction is simulated using the guide hairs and a collection of
auxiliary triangle strips. Third, an interpolation procedure is described
for generating dense hair from our sparse hair model. Last,
we provide an adaptive hair generation technique to complement
our sparse hair model; additional guide strands are added on the fly
to reduce the interpolation artifacts. The proposed method models
the hair dynamics efficiently with good visual realism.
4.1 Static Links
It is evident that the hair strands tend to bond together with other
strands in their vicinity because of cosmetics, static charges and
the interweaving of curly hairs. As a result, the movement of each
strand is for the most part dependent on the motion of other strands.
These interactions can have relatively long range effects besides
clustering in a small neighborhood. While hair local clustering is
modeled by default using our sparse model, longer range interaction
is not. Furthermore, slight head movements or external forces exerted
on the hair do not change a hairstyle radically. This is partly
because each hair strand has its internal joint forces and resting
configuration. However, an individual hair's recovery capability is
quite limited especially for long hairs. The bonding effect among
hairs plays an important role. Dramatic movements can break the
bonds created by hairstyling, static charges or interweaving.
To effectively model the bonding effect, we may view the hair
as one elastically deformable volume. Traditional models for deformable
bodies include 3D mass-spring lattice, finite difference,
and finite element method [25; 30]. These models approximate the
deviation of a continuum body from its resting shape in terms of
displacements at a finite number of points called nodal points. Although
the vertices of hair strands may serve as the nodal points
inside this hair volume, directly applying traditional models is not
appropriate for the following reasons. We are only interested in an
elastic model for hair's lateral motion. Under strong external forces,
the continuum hair volume may break into pieces which may have
global transformations among them. Therefore, using one body co-ordinate
system for the whole hair volume is inadequate.
We propose to build breakable connections, called static links,
among hair strands to simulate their elastic lateral motion and enable
hairstyle recovery. These connections are selected initially
to represent bonds specific to a hairstyle since different hairstyles
have different hair adjacency configuration. The static links enforce
these adjacency constraints by exerting external forces onto
the hair strands. Intuitively, one can use tensile, bending and torsional
springs as bonds to preserve the relative positions of the hair
strands. In practice, we opt for a simpler and more efficient method
using local coordinates.
We introduce a local coordinate system to each segment of the
hair strands. For each segment, we find a number of closest points
on nearby strands as its reference points. To improve the perfor-
mance, an octree can be used to store the hair segments for faster
searching. We transform these points, which are in the world co-
ordinates, to the segment's local coordinates ( Fig. 1a). The initial
local coordinates of these reference points are stored as part of the
initialization process. Once strands have relative motion, the local
coordinates of the reference points change and external forces are
exerted onto these strands to recover their original relative positions
(Fig. 1b). We model these external forces as spring forces with zero
resting length. One advantage of using the local coordinates is that
it eliminates the need for bending and torsional springs.
Let us consider a single hair segment h with m reference points.
The initial local coordinates of these reference points are represented
as poh,i,i =1, ., m, while their new local coordinates are
represented as pnh,i,i =1, ., m. The accumulated force this segment
receives due to static links can be formulated as
f_h = Σ_{i=1}^{m} ( k^h_{s,i} |l_i| + k_d ( v_i · l_i / |l_i| ) ) l_i / |l_i|,   with l_i = p^o_{h,i} − p^n_{h,i}.   (2)
We compute the spring force using Hooke's law in (2), where
k^h_{s,i} is the spring constant for the i-th reference point of segment h,
and k_d is the universal damping constant. Since the resting length
in our case is zero, |p^n_{h,i} − p^o_{h,i}| is multiplied by k^h_{s,i} directly.
v_i is the time derivative of l_i.
Similar to the bonds of stylized hair, static links can be broken
upon excessive forces. We set a threshold for each static link. If the
length change of a static link is greater than the threshold, the static
link breaks (Fig. 1c). Once a link is broken, the damage is perma-
nent; the link will remain broken until the end of the simulation.
To be more precise, we model the spring constant k^h_{s,i} as shown in
Fig. 2. As |l_i| increases beyond a first threshold, the spring constant begins to
decrease gradually and eventually becomes zero at a second, larger threshold as the spring
snaps. The spring constant will not recover even when |l_i| shrinks
below the first threshold again. This nonreversible spring model makes the
motion of the hair look less like a collection of rigid springs.
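A minimal Python/NumPy sketch of one static link, following Eq. (2) and the spring-constant profile of Fig. 2, is given below. The link stores the initial local coordinates of its reference point and irreversibly weakens its spring constant once the displacement exceeds a softening threshold; the linear ramp between the two thresholds and the finite-difference damping term are assumptions made for the example.

import numpy as np

class StaticLink:
    def __init__(self, p_rest_local, ks, kd, soften, snap):
        self.p0 = np.asarray(p_rest_local, float)  # initial local coordinates of the reference point
        self.ks_init = ks                          # undamaged spring constant
        self.ks = ks                               # current (possibly weakened) spring constant
        self.kd = kd                               # universal damping constant
        self.soften, self.snap = soften, snap      # displacement thresholds of Fig. 2
        self.l_prev = np.zeros(3)

    def force(self, p_now_local, dt):
        l = self.p0 - np.asarray(p_now_local, float)    # pull back toward the rest position
        dist = np.linalg.norm(l)
        if dist > self.soften:                          # nonreversible weakening: ks never recovers
            ramp = max(0.0, (self.snap - dist) / (self.snap - self.soften))
            self.ks = min(self.ks, self.ks_init * ramp) # zero once dist >= snap (the link is broken)
        if dist < 1e-12:
            self.l_prev = l
            return np.zeros(3)
        u = l / dist
        v = (l - self.l_prev) / dt                      # time derivative of l
        self.l_prev = l
        return (self.ks * dist + self.kd * float(np.dot(v, u))) * u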
When external forces recede, the original hairstyle may not be
recovered if some of the static links have been broken. New static
links may form for the new hairstyle with updated neighborhood
structures.
Figure 1: Each hair segment has its own local coordinate system
where the forces from all static links (dashed lines) are calculated.
Figure 2: Spring constant k^h_{s,i} vs. displacement graph.
4.2 Dynamic Interactions
Elastic deformation only introduces one type of hair-hair interac-
tions. Hairs also interact with each other in the form of collision.
To effectively simulate hair-hair collision and friction using a sparse
hair model, we need to have a dynamic model that imagines the
space in between the set of sparse hairs as being filled with dense
hairs. Collision detection among the guide hairs only is much less
accurate. Let us consider a pair of nearby guide hairs. The space
between them may be filled with some hairs in a dense model so another
strand cannot pass through there without receiving any resis-
tance. To model this effect, we can either consider the guide hairs as
two generalized cylinders with large enough radii to fill up the gap
between them, or build an auxiliary triangle strip as a layer of dense
hair between them by connecting corresponding vertices. The triangular
mesh can automatically resize as the guide hairs move, but
it is trickier to resize the generalized cylinders. Therefore, we propose
to construct auxiliary triangle strips between pairs of guide
hairs to approximate a dense hair distribution. If we consider the
set of dense hairs collectively as a volume, a triangle strip represents
a narrow cross section of the volume. A number of such cross
sections can reasonably approximate the density distribution of the
original hair volume.
Since the distance between a pair of vertices from two hairs may
change all the time during simulation, we decide to use the distance
among hair roots. A triangle strip is allowed as long as two guide
hairs have nearby hair roots. Each triangle only connects vertices
from two guide hairs, therefore is almost parallel to them. Note
that the triangle strips may intersect with each other. This does not
complicate things because each triangle is treated as an independent
patch of hair during collision detection. The triangles are only used
for helping collision detection, not considered as part of the real
hair geometry during final rendering. They do not have any other
dynamic elements to influence hair movement. However, some triangles
may have nearby static links which can help them resist de-
formation. The triangle edges are not directly constructed as static
links because static links only connect nearby hair segments while
not all the segments connected by triangles are close to each other.
As in standard surface collision detection, two different kinds
of collision are considered, namely, the collision between two hair
segments and the collision between a hair vertex and a triangular
face. Since each guide hair represents a local hair cluster with a
certain thickness, a collision is detected as long as the distance between
two hair elements falls below a nonzero threshold. Once a
collision is detected, a strongly damped spring force is dynamically
generated to push the pair of elements away from each other [3].
Meanwhile, a frictional force is also generated to resist tangential
motion. A triangle redistributes the forces it receives to its vertices
as their additional external forces. Both the spring and frictional
forces disappear when the distance between the two colliding elements
becomes larger than the threshold. The spring force in effect
keeps other hairs from penetrating a layer corresponding to a triangle
strip. An octree is used for fast collision detection. All the
moving hair segments and triangles are dynamically deposited into
the octree at each time step. An octree node has a list of segments
and triangles it intersects with.
Hair also exhibits strong anisotropic dynamical properties. Depending
on the orientation of the penetrating hair vertex and the
triangular face, the repelling spring force might vary. For example,
hair segments of similar orientation with the triangle strip should
experience weaker forces. We scale the repelling spring force according
to the following formula:
f̃_s = λ (1 − |a · b|) f_s.   (3)
The original spring force f_s is scaled in Eq. (3), where a is the
normalized tangent vector of the hair at the penetrating vertex,
b is the hair orientation on the triangular face interpolated from its
guide hair segments, and λ is a scale factor. When a and b are perfectly
aligned, the scaled force f̃_s becomes zero. On the contrary,
when they are perpendicular, the spring force is maximized. The
collision force between two hair segments can be defined likewise.
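The Python/NumPy sketch below combines the strongly damped penalty spring with the anisotropic scaling of Eq. (3): the repelling force between a penetrating hair element and a triangle of a strip is attenuated when the two hair directions are nearly parallel. The penalty stiffness, the damping constant, and the (1 − |a·b|) form follow the discussion above; the specific constants are illustrative.

import numpy as np

def collision_force(p, q, v_rel, a, b, threshold, k, c, lam):
    # p, q: closest points on the two colliding hair elements; v_rel: their relative velocity
    # a: unit tangent of the penetrating hair; b: interpolated hair direction on the triangle
    d = np.asarray(p, float) - np.asarray(q, float)
    dist = np.linalg.norm(d)
    if dist >= threshold or dist < 1e-12:
        return np.zeros(3)                         # no force once the elements are far enough apart
    n = d / dist
    f_s = (k * (threshold - dist) - c * float(np.dot(v_rel, n))) * n   # strongly damped penalty spring
    align = abs(float(np.dot(a, b)))               # 1 when parallel, 0 when perpendicular
    return lam * (1.0 - align) * f_s               # anisotropic scaling of Eq. (3)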
The hair density on each hair strip is also modeled as a contin-
uum. It can be dynamically adjusted during a simulation. If there
is insufficient hair on a strip, the strip can be broken. This would
allow other hair strands to go through broken pieces of a hair strip
more easily. This is reasonable because sometimes there is no hair
between two hair clusters while at other times there may be a
dense hair distribution. In our current implementation, the length of
the triangle edges serves as the indicator for when the hair density
on a strip should be adjusted. If a triangle becomes too elongated,
it is labeled as broken. If a triangle is not broken, the magnitude of
the collision force in Eq. (3) is made adaptive by adjusting the scale
factor according to the local width of the triangle strip to account
for the change of hair density on the triangle. Unlike the static links,
this process is reversible. Once the two guide hairs of a strip move
closer to each other again, indicating the hair density between them
is increasing, the generated collision force should also be increased,
and the triangle strip should be recovered if it has been broken. If
every triangle strip in our method is modeled as broken from the
beginning, our collision model becomes similar to the wisp model
in [21] since every guide hair in our method actually represents a
wisp.
It may not be necessary to build triangle strips among all pairs
of nearby strands. For a simple brush in Fig. 3a, we can only
insert triangle strips between horizontally and vertically adjacent
guide hairs. For human hair, we sometimes find it practically good
enough to build triangle strips between guide hairs with horizontally
adjacent hair roots (Fig. 3b). This is because hairs drape down
due to gravity, and the thickness of the hair volume is usually much
smaller than the dimensions of the exterior surface of the hair volume.
In such a situation, using triangles to fill the horizontal gaps
among guide hairs becomes more important.
Figure 3: a) For a brush, triangle strips can be inserted between
horizontally and vertically adjacent guide hairs. b) For a human
scalp, triangle strips are inserted only between horizontally adjacent
guide hairs.
4.3 Adaptive Hair Generation
Initially, we select the guide hairs uniformly on the scalp. However,
it is not always ideal to pick the guide hairs uniformly. During a
run of the simulation, some part of the hair may be more active
than the other parts. For example, when the wind is blowing on
one side of the hair, the other side of the hair appears to be less
active. As a result, some computation is wasted for not so active
regions. For not so active regions, fewer guide hairs combined with
interpolation (Section 5) is sufficient. However, for more active
regions, it is desirable to use more guide hairs and less interpolation
for better results. We design an adaptive hair generation method to
complement our sparse hair model.
We generate additional guide strands adaptively during the simulation
to cover the over interpolated regions. The distribution and
the initial number of guide strands are determined before the simu-
lation. However, as the simulation proceeds, more hair strands can
be added. The hair model may become more and more computationally
intensive if hairs can only be inserted. We notice that the
inserted hairs may become inactive again later in the same simula-
tion. Therefore, we also allow them to be deleted if necessary. To keep
our hair strands relatively sparse, we may also set a limit on
how many adaptive hair strands can exist at the same time. Picking
the right place to generate adaptive guide hair is important.
Figure 4: Adaptive hair generation.
We use a simple technique to detect where to add and remove
adaptive guide hairs. For each pair of guide strands, we compute the
distance between all pairs of corresponding vertices of the strands.
If any pair of vertices become farther away than a threshold, it indicates
that the hair in between these two guide strands is relying
too much on the interpolation. We then add an adaptive guide hair
half-way between these two strands (Fig. 4). At the same time,
we examine the adaptive guide hairs from the last step of the sim-
ulation. If some of the guide strands are no longer needed (when
the two neighboring strands are close enough), we remove those
strands and save them for future hair generation. When an adaptive
hair is generated, its initial vertex positions and velocities are obtained
by interpolating from those of the two initiating guide hairs.
If there was a triangle strip between these two initiating hairs, it
should be updated to two strips with the new hair in the middle.
The new adaptive hair then follows its own dynamics from the next
time step, colliding with nearby strands and triangle strips. To avoid
discontinuous motion on the rest of the hairs, a new adaptive hair
does not spawn static links with other strands.
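A Python/NumPy sketch of the add/remove test for adaptive guide hairs follows. Pairs of neighboring guide strands are stored as arrays of corresponding vertices; the midpoint initialization, the two distance thresholds, and the cap on the number of adaptive strands are illustrative choices.

import numpy as np

def update_adaptive_hairs(guide_pairs, adaptive, add_thresh, remove_thresh, max_adaptive):
    # guide_pairs[i] = (A, B): two (n_vertices, 3) arrays of corresponding guide-hair vertices
    # adaptive: dict mapping a pair index to the (n_vertices, 3) array of its adaptive strand
    for idx, (A, B) in enumerate(guide_pairs):
        gap = float(np.max(np.linalg.norm(A - B, axis=1)))
        if idx not in adaptive and gap > add_thresh and len(adaptive) < max_adaptive:
            adaptive[idx] = 0.5 * (A + B)   # spawn half-way; velocities would be interpolated the same way
        elif idx in adaptive and gap < remove_thresh:
            del adaptive[idx]               # the two neighboring guides are close enough again
    return adaptive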
5 Hair Interpolation
5.1 Interpolating a dense set of hair strands
Since only a sparse set of hair strands, on the order of a few hundred,
is simulated, a procedure must be used to interpolate the dynamics
of the remaining hair strands. We designed our interpolation
procedure to complement our sparse hair model to produce believable
hair animation efficiently. Each hair from the sparse model
serves as a guide hair. The remaining hair strands in the dense set
are interpolated from the guide hairs. Intuitively, one could imagine
a simple procedure by averaging the position of the neighboring
strands. However, this approach tends to group strands together into
unnatural clusters. We come up with a more sophisticated method
that produces better interpolation results. It requires an approach
for defining a local coordinate system at each potential hair root.
A typical scheme for this uses a global UP vector, such as the vertical
direction, and the local normal orientation. Our interpolation
procedure works as follows:
Find the nearest root of a guide hair and transform the segments
of that guide hair from the world coordinates to its local
coordinates. Name this transformation M1.
Take these segments in the local coordinates and transform
them back to the world coordinate using the local-to-world
coordinate transformation defined at the root of the interpolated
strand. Name this transformation M2.
The procedure is summarized as equation (4):
p' = M2 M1 p,   (4)
where p is the location of the guide hair in the world coordinates, and M1 and M2 are the
two transformations described previously. More than one nearby
guide hairs can be used together to achieve smoother results by
merging the multiple transformed guide hairs with some averaging
scheme. Local clustering effects can be removed by interpolation
from multiple guide hairs. In summary, our procedure generates
better results by taking into account the round shape of the scalp
and considering both rotation and translation between local coordinate
systems.
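A compact Python/NumPy sketch of the interpolation step in Eq. (4): the guide strand is expressed in the local frame of its root (M1, a world-to-local 4x4 transform) and mapped back out with the local-to-world frame of the new root (M2). Averaging several nearby guides, as suggested above, can be done by blending the transformed strands; the simple weighted mean used here is an assumption.

import numpy as np

def interpolate_strand(guide_world_pts, M1_world_to_local, M2_local_to_world):
    # guide_world_pts: (n_vertices, 3) positions of the guide strand in world coordinates
    T = M2_local_to_world @ M1_world_to_local             # composite transform of Eq. (4)
    pts = np.asarray(guide_world_pts, float)
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (homo @ T.T)[:, :3]

def blend_strands(transformed_guides, weights):
    # transformed_guides: list of already-transformed (n_vertices, 3) strands
    # weights: blending weights that sum to one
    return sum(w * g for w, g in zip(weights, transformed_guides))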
Since small objects may miss all the guide hairs but may still hit
some of the strands in the dense model, we decide to run hair-object
collision detection for each hair in the dense model. Although this
involves a certain amount of computation, the computing power
available nowadays on a single processor workstation has already
become sufficient to perform this task in a very short amount of
time. If a hair penetrates an object, the scheme described in Section
3.3 can be used to adjust the hair.
5.2 Hermite Spline interpolation
The smoothness of a hair strand can be improved by Hermite Spline
interpolation. We observe that a relatively coarse strand model with
ten to fifteen segments combined with the spline interpolation is
sufficient for normal hairstyles.
6 Hair Rendering
Although the primary focus of this paper is on hair animation,
we will discuss briefly our approach to rendering realistic hair.
The kind of physical interaction considered here includes self-shadowing
and scattering. Hair strands are not completely opaque.
Therefore, the interaction between light and hair leads to both re-
flection and transmission. When a dense set of hairs is present,
light gets bounced off or transmitted through strands multiple times
to create the final exquisite appearance. Basically, we can view
a dense hair as a volume density function with distinct density
and structures everywhere. The hair density is related to the local
light attenuation coefficient while the structures including the local
hair orientation are related to the phase function during scattering.
In this section, we discuss how to efficiently render animated sequences
of hair with high visual quality by considering the above
factors.
While secondary scattering can improve the rendering quality,
primary scattering and self-shadowing are considered much more
important. Since the rendering performance is our serious concern
when generating hair animations, we decide to simulate the latter
two effects only. This is equivalent to solving the following integral
equation:
L(x, ω) = ∫_{x0}^{x} τ(x', x) σ_s(x') Σ_l f(x', ω_l, ω) I_l(x') dx',
where L(x, ω) represents the final radiance at x along direction
ω, f(x, ω_l, ω) is the normalized phase function for scattering,
I_l(x) is the attenuated light intensity from the l-th light source,
and τ(x', x) = exp(−∫_{x'}^{x} (σ_a(ξ) + σ_s(ξ)) dξ), where σ_a(x) is the absorption
coefficient and σ_s(x) is the scattering coefficient. Thus, our
hair rendering is similar to traditional volume rendering techniques
[12]. That is, the final color of a pixel can be approximated as the
alpha-blending of the colors at a few sample points along the eye
ray going through that pixel. To perform alpha-blending correctly,
the sample points need to be depth-sorted. In terms of hair, the
sample points can be the set of intersections between the eye ray
and the hair segments. Note that the input to the rendering stage
is a large number of hair segments resulting from the discretization
of the spline interpolated dense hairs mentioned in Section 5. In
order to obtain the set of intersections at each pixel efficiently, scan
conversion is applied to the segments and a segment is added into
the depth-sorted list of intersections at a pixel once it passes that
pixel. Antialiasing by supersampling each pixel can help produce
smoother results.
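The per-pixel blending described above can be summarized by the following front-to-back compositing sketch in Python/NumPy. Each sample is a depth-sorted eye-ray intersection with a hair segment, carrying a shaded color and an alpha that encodes the strand's partial opacity; the early-termination tolerance is an illustrative choice.

import numpy as np

def composite_pixel(samples, background):
    # samples: list of (color, alpha) pairs sorted front to back along the eye ray
    # background: color of the opaque scene (head, torso, cloth) behind the hair on this ray
    color = np.zeros(3)
    trans = 1.0                                   # transmittance accumulated so far
    for c, a in samples:
        color += trans * a * np.asarray(c, float)
        trans *= (1.0 - a)
        if trans < 1e-3:                          # the hair in front is effectively opaque
            break
    return color + trans * np.asarray(background, float)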
To finish rendering, we still need a color for alpha-blending at
each of the intersections. It should be the reflected color at the
intersection. The reflectance model we use is from [8]. It is a modi-
fied version of the hair shading model in [11] by considering partial
translucency of hair strands. Since other hairs between the light
source and the considered hair segment can block part of the incident
light, the amount of attenuation is calculated using the opacity
shadow maps [14] which can be obtained more efficiently than the
deep shadow maps [19]. Basically, the algorithm in [14] selects a
discrete set of planar (opacity) maps perpendicular to the lighting
direction. These maps are distributed uniformly across the volume
being rendered. Each map contains an approximate transmittance
function of the partial volume in front of the map. Thus, the approximate
transmittance of the volume at any point can be obtained by
interpolating the transmittance at corresponding points on the two
nearest opacity maps. In our implementation, exponential interpolation
has been used since the attenuation of light through a volume
is exponential. The exponential interpolation can be written as
τ = exp( −(d2 Ω1 + d1 Ω2) / (d1 + d2) ),
where exp(−Ω1) and exp(−Ω2) are the attenuation at the two nearest
maps, and d1, d2 are the distances from the point to the two maps,
respectively.
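A Python/NumPy sketch of the exponential interpolation between the two nearest opacity maps follows. The maps are assumed to be sorted by depth along the light direction and to store accumulated opacities Omega (so exp(-Omega) is a map's transmittance); the clamping at the two ends of the volume is an assumption.

import numpy as np

def transmittance(depth, map_depths, omegas):
    # depth: distance of the shading point along the light direction
    # map_depths: sorted depths of the opacity shadow maps; omegas[i]: accumulated opacity at map i
    if depth <= map_depths[0]:
        return float(np.exp(-omegas[0]))
    if depth >= map_depths[-1]:
        return float(np.exp(-omegas[-1]))
    i = int(np.searchsorted(map_depths, depth))    # maps i-1 and i bracket the point
    d1 = depth - map_depths[i - 1]                 # distance to the front map
    d2 = map_depths[i] - depth                     # distance to the back map
    blended = (d2 * omegas[i - 1] + d1 * omegas[i]) / (d1 + d2)
    return float(np.exp(-blended))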
When the hair is rendered together with other solid objects, such
as the head and cloth, which we assume to be completely opaque,
the color of the solid objects needs to be blended together with
the hair's during volume rendering. The solid objects also have
their separate shadow buffer for each light source. Anything in
the shadow of the solids receives no light while those solids in the
shadow of the hair may still receive attenuated light.
7 Results
We have successfully tested our hair dynamic model in a few an-
imations. In our experiments, we used around 200 initial guide
hairs during the animation of the sparse model with 15 segments
for each strand. During each time step, a strand is interpolated with
a Hermite spline and discretized into around 50 smaller segments.
Based on this set of resampled sparse hairs, a dense hair model with
50,000 strands is generated on the fly at each frame for the final ren-
dering. The guide hair animation stage takes about one second per
frame on a Pentium III 800MHz processor. Hair interpolation, hair-
object collision detection and antialiased rendering takes another 20
seconds per frame on a Pentium 4 2GHz processor. Fig. 5 shows a
sparse hair model with static links along with the dense interpolated
model. Fig. 8 shows synthetic renderings of animated hair.
7.1 Comparison with Ground Truth
A synthetic head shaking sequence is compared with a real reference
sequence in Fig. 9. The hair strands in the real sequence obviously
have mutual connections since they move together. We use
relatively strong static links to simulate this effect. The head motion
in the synthetic sequence was manually produced to approximate
the real motion. Nonetheless, the synthetic hair motion reasonably
matches the real one.
7.2 Dynamic Collision
To demonstrate the effectiveness of our hair collision strategy, we
built a simple braided hair model and let it unfold under gravity.
There are basically two sets of guide hairs in the model, and static
links and triangle strips are only built among hairs from the same
set. A comparison is given between images from two synthetic sequences
in Fig. 6, one with collision detection and the other with-
out. In the simulation without collision detection, hairs go through
each other. But in the sequence with collision detection, hairs unfold
correctly in a spiral motion.
7.3 Hair-Air Interaction
Hair-air interaction is traditionally modeled as air drag which only
considers the force exerted on the hair from the air. However, the
velocity field of the air is also influenced by the hair. The method
in [10] can be adapted to our model for hair-air interaction. That
is, the air is simulated as a fluid and it generates a velocity field.
Each hair vertex receives an additional external force from the air.
This force can be modeled as a damping force using the difference
between the velocity of the air at the vertex and the velocity of the
hair vertex itself. The force exerted from the hair back to the air can
be modeled similarly. If the air is simulated using a voxel grid [6],
the velocity of the hair at each grid point can be approximated using
the velocities of the nearby hair vertices and auxiliary triangles.
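A minimal sketch of the coupling force described above: each hair vertex receives a damping force proportional to the relative velocity between the local air flow and the vertex, and the opposite force can be deposited back onto the fluid grid. The drag coefficient and the velocity-sampling helper are assumptions.

import numpy as np

def air_drag_force(vertex_pos, vertex_vel, sample_air_velocity, k_drag):
    # sample_air_velocity(p): air velocity interpolated from the fluid grid at position p
    v_air = np.asarray(sample_air_velocity(vertex_pos), float)
    f = k_drag * (v_air - np.asarray(vertex_vel, float))   # force on the hair vertex
    return f   # -f would be spread back to the nearby grid nodes for two-way coupling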
Fig. 10(top) shows images from a hair animation with a wind.
The wind velocity field is driven by an artificial force field with a
changing magnitude and direction. The head and torso are considered
as hard boundaries in the wind field while the wind can go
through hairs with a certain amount of attenuation.
7.4 Brush Simulation
In addition to human hair interactions, we simulate the dynamics
of brushes. Fig. 7 shows images from a sequence with a sphere
colliding with a synthetic brush. The mutual interactions are weak
when only a small number of hairs drape down behind the sphere.
However, when more and more hairs drape down, they stabilize
much faster because of the collisions.
7.5 Hair Rendering with an Artistic Flavor
An artistic flavor can also be added to the images by rendering the
hair with increased translucency and specularity. Fig. 10(bottom)
shows some re-rendered images from one of the wind blowing
sequences.
8 Discussions and Conclusions
In this paper, we presented an integrated sparse model for hair dy-
namics. Specifically, the model can perform the following func-
tions: the static links and the joint actuator forces enable hairstyle
recovery; once the static links are broken under external forces,
hairs have the freedom to move laterally; hair-hair collision becomes
more accurate by inserting triangle strips and performing
collision detection among strands as well as between strands and
triangle strips; stable simulation of individual strands is provided
by the formulation for multibody open chains. Although our model
is not originally designed for hairs without obvious clustering ef-
fects, with our multiple hair interpolation scheme, visual results for
this kind of hairs turned out quite reasonable.
Note that for curly hair, we have two levels of detail. The sparse
or interpolated hair model only has large-scale deformations without
fine curly details. Each strand in these models serves as the
spine of its corresponding curly strand. Curliness can be added
onto the interpolated dense hair model before rendering as in [29].
Acknowledgments
This work was supported by National Science Foundation CAREER
Award CCR-0132970 and start-up funds from the University
of Illinois at Urbana Champaign. We would like to thank Mike
Hunter and Sheila Sylvester for their help with the real hair se-
quence, and the anonymous reviewers for their valuable comments.
--R
Modeling dynamic hair as continuum.
Rendering fur with three dimensional textures.
of SIGGRAPH'89
Ray tracing Volume
Graphics (SIGGRAPH 84
A thin shell Volume
Opacity shadow maps.
Rendering, pages 177-182
A simple physics model to animate human hair modeled
in 2d strips in real time.
Rendering hair using pixel blending
and shadow buffers.
Natural hairstyle modeling and animation.
Models and Image Processing
Deep shadow maps.
A Mathematical Introduction to Robotic
A layered wisps model for simulating interactions
Simulating the structure and
dynamics of human hair: Modeling
Visualization and Computer Animation
Shag (plugin for 3d studio max).
Shave (plugin for lightwave).
Elastically deformable models.
The cluster hair model.
Models and Image Processing
Modeling realistic virtual hairstyles.
The Finite Element Method: Solid and Fluid
Mechanics Dynamics and Non-Linearity
Figure 5: Left: a sparse hair model displayed with static links.
Figure 6: A comparison between two hair animations with and without collision detection.
motion because of the collision detection.
Figure 7: Two images from a sequence with a sphere colliding with a brush.
Figure 8: Two synthetic renderings of animated hair.
Figure 9: A comparison between a simulated hair animation and a real video.
row: images from the simulated hair motion sequence.
matches the real hair motion in the video.
Figure 10: Top row: short hair in a changing wind.
--TR
Using dynamic analysis for realistic animation of articulated bodies
Elastically deformable models
Rendering fur with three dimensional textures
A simple method for extracting the natural beauty of hair
Linear-time dynamics using Lagrange multipliers
Fake fur rendering
Large steps in cloth simulation
Deep shadow maps
Visual simulation of smoke
Natural hairstyle modeling and animation
A Mathematical Introduction to Robotic Manipulation
Robot Dynamics Algorithm
A Trigonal Prism-Based Method for Hair Image Generation
Real-Time Hair
Opacity Shadow Maps
A simple Physics model to animate human hair modeled in 2D strips in real time
A layered wisp model for simulating interactions inside long hair
Ray tracing volume densities
A Thin Shell Volume for Modeling Human Hair
Modeling Realistic Virtual Hairstyles
--CTR
Yichen Wei , Eyal Ofek , Long Quan , Heung-Yeung Shum, Modeling hair from multiple views, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Byoungwon Choe , Hyeong-Seok Ko, A Statistical Wisp Model and Pseudophysical Approaches for Interactive Hairstyle Generation, IEEE Transactions on Visualization and Computer Graphics, v.11 n.2, p.160-170, March 2005
Stephen R. Marschner , Henrik Wann Jensen , Mike Cammarano , Steve Worley , Pat Hanrahan, Light scattering from human hair fibers, ACM Transactions on Graphics (TOG), v.22 n.3, July
F. Bertails , T-Y. Kim , M-P. Cani , U. Neumann, Adaptive Wisp Tree: a multiresolution control structure for simulating dynamic clustering in hair motion, Proceedings of the ACM SIGGRAPH/Eurographics symposium on Computer animation, July 26-27, 2003, San Diego, California
Florence Bertails , Clment Mnier , Marie-Paule Cani, A practical self-shadowing algorithm for interactive hair animation, Proceedings of the 2005 conference on Graphics interface, May 09-11, 2005, Victoria, British Columbia
Kelly Ward , Nico Galoppo , Ming Lin, Interactive Virtual Hair Salon, Presence: Teleoperators and Virtual Environments, v.16 n.3, p.237-251, June 2007
Pascal Volino , Nadia Magnenat-Thalmann, Animating complex hairstyles in real-time, Proceedings of the ACM symposium on Virtual reality software and technology, November 10-12, 2004, Hong Kong
Zoran Kai-Alesi , Marcus Nordenstam , David Bullock, A practical dynamics system, Proceedings of the ACM SIGGRAPH/Eurographics symposium on Computer animation, July 26-27, 2003, San Diego, California
Florence Bertails , Basile Audoly , Marie-Paule Cani , Bernard Querleux , Frdric Leroy , Jean-Luc Lvque, Super-helices for predicting the dynamics of natural hair, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
Joseph Teran , Eftychios Sifakis , Silvia S. Blemker , Victor Ng-Thow-Hing , Cynthia Lau , Ronald Fedkiw, Creating and Simulating Skeletal Muscle from the Visible Human Data Set, IEEE Transactions on Visualization and Computer Graphics, v.11 n.3, p.317-328, May 2005
Sunil Hadap, Oriented strands: dynamics of stiff multi-body system, Proceedings of the 2006 ACM SIGGRAPH/Eurographics symposium on Computer animation, September 02-04, 2006, Vienna, Austria
Byoungwon Choe , Min Gyu Choi , Hyeong-Seok Ko, Simulating complex hair with robust collision handling, Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation, July 29-31, 2005, Los Angeles, California
R. Bridson , S. Marino , R. Fedkiw, Simulation of clothing with folds and wrinkles, Proceedings of the ACM SIGGRAPH/Eurographics symposium on Computer animation, July 26-27, 2003, San Diego, California
R. Bridson , S. Marino , R. Fedkiw, Simulation of clothing with folds and wrinkles, ACM SIGGRAPH 2005 Courses, July 31-August | static links;open chain;hair rendering;collision detection;hair-hair interaction;hair animation |
545340 | On a New Homotopy Continuation Trajectory for Nonlinear Complementarity Problems. | Most known continuation methods for P 0 complementarity problems require some restrictive assumptions, such as the strictly feasible condition and the properness condition, to guarantee the existence and the boundedness of certain homotopy continuation trajectory. To relax such restrictions, we propose a new homotopy formulation for the complementarity problem based on which a new homotopy continuation trajectory is generated. For P 0 complementarity problems, the most promising feature of this trajectory is the assurance of the existence and the boundedness of the trajectory under a condition that is strictly weaker than the standard ones used widely in the literature of continuation methods. Particularly, the often-assumed strictly feasible condition is not required here. When applied to P * complementarity problems, the boundedness of the proposed trajectory turns out to be equivalent to the solvability of the problem, and the entire trajectory converges to the (unique) least element solution provided that it exists. Moreover, for monotone complementarity problems, the whole trajectory always converges to a least 2-norm solution provided that the solution set of the problem is nonempty. The results presented in this paper can serve as a theoretical basis for constructing a new path-following algorithm for solving complementarity problems, even for the situations where the solution set is unbounded. | Introduction
We denote by R^n the n-dimensional Euclidean space, R^n_+ the nonnegative
orthant, and R^n_{++} the positive orthant. For simplicity, we also denote x ∈ R^n_+ (x ∈ R^n_{++})
by x ≥ 0 (x > 0). In this paper, all vectors are column vectors, and the superscript T denotes
the transpose. e is the vector of all ones in R^n. For any x ∈ R^n, ||x|| denotes the 2-norm of x,
and x_+ (x_−) denotes the vector whose ith component is max{0, x_i} (respectively, min{0, x_i}).
For any mapping g : R^n → R^n and any subset D of R^n, g^{-1}(D), unless
otherwise stated, denotes the set {x ∈ R^n : g(x) ∈ D}.
Given a continuous mapping f : R^n → R^n, the well-known complementarity problem
(abbreviated, CP(f)) is to determine a vector x satisfying
x ≥ 0,  f(x) ≥ 0,  x^T f(x) = 0.
Let SOL_cp(f) denote the solution set of the above problem. Define
S_{++}(f) = {x ∈ R^n : x > 0, f(x) > 0}.
We refer to S_{++}(f) as the set of strictly feasible points. There are several equivalent
equation-based reformulations of CP(f) reported in the literature. One of the well-known
forms is based on the following mapping:
F(x, y) = (Xy, y − f(x)),  where X = diag(x_1, ..., x_n).
It is easy to see that x* ∈ SOL_cp(f) if and only if (x*, y*), where y* = f(x*),
is a solution of the system
F(x, y) = 0,  (x, y) ≥ 0.
This system provides us with a general theoretical framework for various efficient homotopy
continuation methods, including the interior-point methods. See, for example, Lemke
(1965), Lemke and Howson (1980), Kojima, Mizuno and Noma (1989, 1990), Kojima,
Megiddo and Noma (1991), Kojima, Megiddo, Noma and Yoshise (1991), Ye (1997), and
Wright (1997). Moré (1996) also used F(x, y) to study the trust-region algorithm for CP(f).
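To make this reformulation concrete, the following minimal Python sketch (an illustration
added here, not code from the paper; the 2 × 2 linear problem is an arbitrary choice)
encodes the map F(x, y) = (Xy, y − f(x)) together with a direct check of the complementarity
conditions.

import numpy as np

def F(x, y, f):
    # Reformulation map F(x, y) = (X y, y - f(x)) with X = diag(x);
    # CP(f) solutions correspond to nonnegative zeros of F.
    return np.concatenate([x * y, y - f(x)])

def is_cp_solution(x, f, tol=1e-8):
    # Check x >= 0, f(x) >= 0 and the complementarity x^T f(x) = 0.
    fx = f(x)
    return bool((x >= -tol).all() and (fx >= -tol).all() and abs(x @ fx) <= tol)

# Illustrative linear problem f(x) = Mx + q (not an example from the paper).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
f = lambda x: M @ x + q
print(is_cp_solution(np.array([1.0 / 3.0, 1.0 / 3.0]), f))   # True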
The fundamental idea of a homotopy continuation method is to solve the problem by tracing
a certain continuous trajectory leading to a solution of the problem. The existence and the
boundedness of a continuation trajectory play an essential role in constructing the homotopy
continuation algorithms for the problem. The following two conditions are standard ones
widely used in the literature to ensure the existence and the boundedness of a continuation
trajectory. See, e.g. McLinden (1980), Megiddo (1989), Kojima, Mizuno and Noma (1989,
1990), Kojima, Megiddo and Noma (1991), Kojima, Megiddo and Mizuno (1993), Kanzow
(1996), Hotta and Yoshise (1999), Burke and Xu (2000), Qi and Sun (2000), and Hotta et
al. (1998).
Condition 1.1. (a) f is monotone on R n , i.e.,
(b) There is a strictly feasible point, i.e., S++ (f) 6= ;:
Condition 1.2. (a) f is a P 0 -function, that is,
(b) S++ (f) 6= ;:
(c) The set
is bounded for every compact subset D of R n
\Theta B++ (f); where
It is easy to see that Condition 1.1 implies Condition 1.2 (see Kojima, Megiddo and Noma
(1991)). Let (a, b) ∈ R^n_+ × R^n_+ with (a, b) ≠ 0. Kojima, Mizuno and Noma (1989, 1990) and
Kojima, Megiddo and Noma (1991) considered a family of systems of equations with a
parameter t > 0:
F(x, y) = t(a, b),  (x, y) ≥ 0.    (2)
Let (x(t), y(t)) denote a solution to the above system. If (x(t), y(t)) is unique for each t > 0
and continuous in the parameter t, then the set
{(x(t), y(t)) : t > 0}    (3)
forms a continuous curve in R^{2n}_+, which is called the trajectory of solutions of the system
(2). As t → 0, any accumulation point (x̄, ȳ) of the trajectory, if it exists, must be a solution
to CP(f). Thus by tracing such a trajectory as t → 0, we can obtain a solution of the
problem. In this paper, we say that a trajectory is bounded if any slice (subtrajectory) of
the trajectory, i.e., the set {(x(t), y(t)) : 0 < t ≤ δ} for any given positive number δ, is bounded.
It should be noted that (3) includes several important particular situations. For example,
when the trajectory reduces to the one induced by the well-known Lemke's
method for linear complementarity problems (Lemke 1965, Lemke and Howson 1980). When
the trajectory (3) reduces to the central path based on which many interior-point
and non-interior-point algorithms are constructed. See, for example, Kojima, Mizuno
and Yoshise (1989), Kojima, Megiddo, Noma and Yoshise (1991), Ye (1997), Wright (1997),
Burke and Xu (1998, 1999, 2000).
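For a small strictly feasible linear problem the central path mentioned above can be
followed numerically. The sketch below is only an illustration under assumed data (M and q
are chosen arbitrarily); it applies damped Newton steps to Xy = t e, y = Mx + q for a
decreasing sequence of t and makes no claim of robustness.

import numpy as np

def central_path_point(M, q, t, x, y, iters=50):
    # Damped Newton iteration on (X y - t e, y - M x - q) = 0, keeping (x, y) > 0.
    n = len(q)
    for _ in range(iters):
        r = np.concatenate([x * y - t, y - M @ x - q])
        J = np.block([[np.diag(y), np.diag(x)],
                      [-M, np.eye(n)]])
        d = np.linalg.solve(J, -r)
        dx, dy = d[:n], d[n:]
        alpha = 1.0
        while ((x + alpha * dx) <= 0).any() or ((y + alpha * dy) <= 0).any():
            alpha *= 0.5
        x, y = x + alpha * dx, y + alpha * dy
    return x, y

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # illustrative strictly feasible LCP
q = np.array([-1.0, -1.0])
x, y = np.ones(2), np.ones(2)
for t in [1.0, 0.1, 0.01, 1e-4]:
    x, y = central_path_point(M, q, t, x, y)
print(x)   # approaches the solution (1/3, 1/3) as t -> 0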
For monotone maps and P 0 -functions, Kojima et al. (1990) and Kojima, Megiddo
and Noma (1991) proved the existence and the boundedness of the trajectory (3) under the
aforementioned Condition 1.1 and Condition 1.2, respectively. The strictly feasible condition
is indispensable to the existence and the boundedness of their continuation trajectories. In
fact, we give examples (see Section 6) to show that the trajectory (3) may not exist if
the strictly feasible condition fails to hold. It is well-known that Condition 1.1 implies
the existence and the boundedness of the central path (McLinden 1980, Megiddo 1989,
Güler 1993). On the other hand, since each point on the central path is strictly feasible,
we deduce that for a monotone CP(f) the central path exists if and only if there exists
a strictly feasible point. This property actually holds for P complementarity problems
(Kojima, Megiddo and Yoshise (1991), Zhao and Isac 2000b). Therefore, it is not possible
to remove the strictly feasible condition for those central path-based continuation methods
without destroying their convergence.
Recently, some non-interior point continuation methods have been intensively investigated
for solving CP(f) (Chen and Harker 1993, Kanzow 1996, Hotta and Yoshise 1999,
Burke and Xu 1998, 1999, 2000, Qi and Sun 2000, and Hotta et al. 1998). While these
algorithms start from a non-interior point, the strictly feasible condition is still assumed for
such a class of methods to ensure the existence of non-interior-point trajectories and the
convergence of algorithms. Specifically, Hotta and Yoshise (1999) considered the following
homotopy trajectory:
++ \Theta R 2n
++ \Theta R 2n is a given vector, and
where all algebraic operations are performed componentwise. Their analysis for the existence
and the boundedness of such a continuation trajectory also requires Condition 1.1 or
the one analogous to Condition 1.2 in which (c) of Condition 1.2 is replaced by
is bounded for every compact
subset D of range U(R n
++ \Theta R 2n
Hotta and Yoshise (1999) pointed out that their condition is not weaker than Condition
1.2.
Thus, most of the continuation methods in the literature, including both the interior-point
algorithms and the non-interior-point algorithms, are limited to solving the class of
complementarity problems satisfying Condition 1.2 or its similar versions. Since Condition
1.2 implies that the solution set of CP(f) is nonempty and bounded (see Section 4 for
details), it seems to be restrictive. Some continuation methods use other assumptions such
as the P 0 and R 0 (see, Burke and Xu 1998, Chen and Chen 1999, Zhao and Li 1998) which,
however, still imply the nonemptyness and boundedness of the solution set. It is worth
mentioning that Monteiro and Pang (1996) established several results on the existence of
certain interior-point paths for mixed complementarity problems. Restricted to CP(f ),
their results generalize and, to some extent, unify those already obtained for CP(f ). It is
easy to see, however, that the conditions used by Monteiro and Pang (1996) also imply the
boundedness of the solution set of the problems. In fact, the properness assumptions used
in Monteiro and Pang (1996), such as "uniformly norm coercive", imply that the solution
set of the problem is bounded.
Since the nonemptyness and boundedness of the solution set imply that the P 0 complementarity
problem must be strictly feasible (Corollary 5 in Ravindran and Gowda 1997 and
Corollary 3.14 in Chen et al. 1997), we conclude that most existing continuation methods
actually confine themselves to solve only the class of P 0 complementarity problems with
bounded solution sets, which are strictly feasible problems.
It is known that for monotone CP(f ), the strictly feasible condition is equivalent to the
nonemptyness and boundedness of the solution set of CP(f) (McLinden 1990, Chen et al.
1997, Ravindran and Gowda 1997). This result has been extended to P complementarity
problems by Zhao and Li (2000) who showed that if f is a P -function (see Definition 2.1
in the next section) the solution set of CP(f) is nonempty and bounded if and only if there
exists a strictly feasible point. In summary, we conclude that the following three conditions
are equivalent for P complementarity problems.
ffl There exists a strictly feasible point, i.e., S++ (f) 6= ;:
ffl Solution set of CP(f) is nonempty and bounded.
ffl The central path exists and any slice of it is bounded.
Because of the dependence on the strictly feasible condition, most existing continuation
methods, when applied to a P 0 complementarity problem (in particular, a P problem and
a monotone problem), fail to solve it if it is not strictly feasible (in this case, the solution
set of CP(f) is unbounded).
From the above discussion, a natural question arises: How to construct a homotopy
continuation trajectory such that its existence and boundedness do not require the strictly
feasible condition in the setting of P 0 -complementarity problems? At a cost of losing the
generality of the vector c 2 R n
in (2), a continuation trajectory, which does not need
the strictly feasible condition, does exist. In fact, Kojima et al. (1993) established a result
(Theorem 3.1 therein) which states that if a 2 R n
is fixed then for almost all b 2 R n
, the
trajectory (3) exists. However, their result cannot exclude a zero measure set on which a
trajectory fails to exist. A similar result is also obtained by Zhang and Zhang (1997).
In this paper, we propose a new homotopy formulation for the complementarity prob-
lem. Utilizing this formulation, we study the existence, boundedness and limiting behavior
of a new continuation trajectory which can serve as a theoretical basis for designing a new
path-following algorithm for CP(f) even when its solution set is unbounded. Our continuation
trajectory possesses several prominent features: (f 1 ) The Tihonov regularization
technique is used in the formulation of the homotopy mapping; (f 2 ) In the P 0 situation, the
proposed trajectory (a unique and continuous curve) always exists from an arbitrary starting
point (a, b) ∈ R^n_{++} × R^n without any
assumption other than the continuity of the mapping f ,
and the boundedness of this trajectory only requires a condition that is strictly weaker than
(b) and (c) of Condition 1.2. Particularly, the strictly feasible condition is not assumed in
our derivation; (f 3 ) In the setting of P complementarity problems, the boundedness of the
trajectory is equivalent to the solvability of the problem, i.e., the trajectory is bounded if
and only if the problem has a solution. Moreover, for monotone complementarity problems,
the whole trajectory converges to a least 2-norm solution provided that the solution set of
CP(f) is nonempty. This is a very desirable property of the homotopy continuation trajectory
with which we may design a continuation method (path-following method) to solve a
even when the solution set is not bounded. For semimonotone functions (in particular,
each accumulation point of the proposed trajectory turns out to be a weak
Pareto minimal solution (see Theorem 5.2). The above-mentioned properties essentially
distinguish the proposed continuation trajectory from the previous ones in the literature.
Since Tihonov regularization trajectory (see, Isac 1991, Venkateswaran 1993, Facchinei
1998, Facchinei and Kanzow 1999, Ravindran and Gowda 1997, Gowda and Tawhid 1999,
Sznajder and Gowda 1998, Tseng 1998, and Facchinei and Pang 1998) can be viewed as
an extreme variant of the proposed trajectory, some new properties of the regularization
trajectory (see Theorem 5.4) are also revealed as by-products from the discussion of this
paper.
The paper is organized as follows: In Section 2, we list some definitions and basic results
that will be utilized later. In Section 3, we give a new homotopy formulation for CP(f)
and prove a useful equivalent version of it. A useful alternative theorem is also shown. In
Section 4, we study the existence and the boundedness of a new continuation trajectory.
The limiting behavior of the trajectory is studied in Section 5. Two examples are given
in Section 6 to show that the central path and other interior-point trajectories studied by
Kojima et al. fail to exist if the strictly feasible condition is not satisfied. Based on the
proposed trajectory, a framework of a new path-following algorithm for CP(f) is also given
in Section 6. Conclusions are given in the last section.
2. Preliminaries. For reference purposes, we introduce some basic results and
definitions.
Let\Omega be a bounded open set in R n . The symbols
-\Omega and
@\Omega denote the closure
and boundary
respectively. Let v be a continuous function from
-\Omega into R n . For any
vector y 2 R n such that
the degree of v at y with respect
to\Omega is denoted by
The following result can be found in Lloyd (1978).
Lemma 2.1. (Lloyd 1978) (a) If v is injective on R n , then for any y
(b) If
then the equation y has a solution
(c) Let g be a continuous function from
The next result is an upper-semicontinuity theorem concerning weakly univalent maps
established recently by Ravindran and Gowda (1997). Since each P 0 -function must be
weakly univalent (Ravindran and Gowda 1997, Gowda and Sznajder 1999), the following
result is very helpful for the analysis in this paper.
Lemma 2.2. (Ravindran and Gowda 1997) Let g : R n ! R n be weakly univalent, that
is, g is continuous, and there exist one-to-one continuous functions
uniformly on every bounded subset of R n . Suppose that q 2 R n such that g \Gamma1 (q )
is nonempty and compact. Then for any given " ? 0; there exists a scalar ffi ? 0 such that
for any weakly univalent function h and for any q with
sup
-\Omega
we have
where B denotes the open unit ball in R n
In particular, h \Gamma1 (q) and
are nonempty and uniformly bounded for q in a neighborhood of q :
We now introduce two classes of functions.
Definition 2.1. (a): A function f is said to be a semimonotone function if for any
x ≠ y in R^n such that x − y ≥ 0, there exists some i such that x_i > y_i and f_i(x) ≥ f_i(y).
(b): (Kojima, Megiddo, Noma and Yoshise (1991), Zhao and Isac 2000a) A function
is said to be a P -function if there exists a constant - 0 such that
where
I ng
We note the following easily verifiable relations:
Monotone functions ⊂ P*-functions ⊂ P_0-functions ⊂ Semimonotone functions.
For a linear map f(x) = Mx + q, where M is an n × n matrix and q ∈ R^n, it is evident that f
is a semimonotone function if and only if M is a semimonotone matrix (Cottle et al. 1992),
and f is a P*-function if and only if M is a P*-matrix as defined in Kojima, Megiddo, Noma
and Yoshise (1991). Väliaho (1996) showed that the class of P*-matrices coincides with
that of sufficient matrices introduced in Cottle et al. (1989). A new equivalent definition
for P*-functions is given in Zhao and Isac (2000a).
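In the linear case these classes can be tested, for small n, directly from principal minors.
The brute-force sketch below (exponential in n and added purely for illustration, with an
arbitrary test matrix) checks the P- and P_0-matrix properties.

import numpy as np
from itertools import combinations

def principal_minors(M):
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            yield np.linalg.det(M[np.ix_(idx, idx)])

def is_P_matrix(M, tol=1e-12):
    # P-matrix: every principal minor is strictly positive.
    return all(d > tol for d in principal_minors(M))

def is_P0_matrix(M, tol=1e-12):
    # P0-matrix: every principal minor is nonnegative.
    return all(d >= -tol for d in principal_minors(M))

M = np.array([[1.0, -1.0], [-1.0, 1.0]])   # positive semidefinite, hence P0
print(is_P0_matrix(M), is_P_matrix(M))     # True False (a zero principal minor)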
3. A new homotopy formulation and an alternative theorem. Given a scalar
Xy
This map can be viewed as a perturbed version of F(x; y) defined by
(1) with the parameter - ? 0. Clearly, (x; y) 2 R 2n
such that F - (x; only if
Under suitable conditions, the solutions of the above system approach to a solution of
CP(f) as - ! 0: This is the basic idea of the well-known Tihonov regularization methods
for CP(f ). See, for example, Isac (1991), Venkateswaran (1993), Facchinei (1998), Facchinei
and Kanzow (1999), Ravindran and Gowda (1997), Gowda and Tawhid (1999), Sznajder
and Gowda (1998), Qi (2000), Tseng (1998), and Facchinei and Pang (1998).
R 2n be given by
Define the convex homotopy H : R n \Theta R n \Theta [0; 1] \Theta [0; 1) ! R 2n as follows:
We note that the above homotopy formulation has two important extreme cases. If
then the map (5) reduces to F - (x; y) which is used in Tihonov regularization
methods. If reduces to
which is investigated by Kojima et al. (1993) who proved that the above map is equivalent
to (2) under a suitable one-to-one transformation such as However, the version
of (6) is mathematically easier to be handled than (2). Let
and is easy to verify that (6) coincides with the homotopy formulation studied
by Zhang and Zhang (1997). Therefore, Zhang and Zhang's homotopy function is actually
a special case of that of Kojima et al. (1993).
the homotopy (5) is essentially different from previous ones in
the literature. We may treat the two parameters - and ' independently in general cases.
But for simplicity and convenience for designing algorithms, we use only one parameter by
setting satisfies the following properties:
is a continuous and strictly increasing function satisfying
An example of OE satisfying the above properties is is a fixed
positive number. Then (5) can be written as
Basing on the above homotopy, we consider the following system:
where the parameter ' 2 (0; 1): For given ' 2 (0; 1), we denote (x('); y(')) by a solution of
the above system, and consider the following set
If (x('); y(')) is unique for each ' 2 (0; 1) and continuous in '; the above set is called the
homotopy continuation trajectory generated by the system (8). We say that the trajectory
is bounded if for any scalar ffi 2 (0; 1); the set
is bounded. For any subsequence denoted by f(x(' k ); y(' k ))g ' T such that (x(' k ); y(' k
0; the cluster point must be a solution to CP(f)
(since f is continuous). Thus by tracing such a trajectory, if it exists, we may find a solution
to the problem.
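The sketch below illustrates only the generic pattern being described here: solve the
parametrised system for a decreasing sequence of parameter values, warm-starting each
solve from the previous point on the trajectory. The residual and jacobian arguments are
user-supplied placeholders, not the paper's system (8).

import numpy as np

def newton(residual, jacobian, z, iters=30, tol=1e-10):
    # Plain Newton iteration for residual(z) = 0 (no globalisation).
    for _ in range(iters):
        r = residual(z)
        if np.linalg.norm(r) < tol:
            break
        z = z + np.linalg.solve(jacobian(z), -r)
    return z

def trace_trajectory(residual, jacobian, z0, theta0=0.5, shrink=0.2, steps=8):
    # Follow z(theta) solving residual(z, theta) = 0 as theta decreases towards 0.
    z, theta, path = np.asarray(z0, float), theta0, []
    for _ in range(steps):
        z = newton(lambda w: residual(w, theta), lambda w: jacobian(w, theta), z)
        path.append((theta, z.copy()))
        theta *= shrink
    return path

# Toy usage: z(theta) solves z^2 = 1 + theta, so z(theta) -> 1 as theta -> 0.
res = lambda z, th: np.array([z[0] ** 2 - 1.0 - th])
jac = lambda z, th: np.array([[2.0 * z[0]]])
print(trace_trajectory(res, jac, [2.0])[-1])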
In this paper, we study the existence, boundedness and limiting behavior of the above
trajectory. The following result, which gives an equivalent description of the system (8), is
useful in the analysis throughout the remainder of the paper.
Lemma 3.1. Given a pair (a; b) 2 R n
++ \Theta R n : For each ' ? 0; the vector (x; y); where
'b; is a solution to the system (8) if and only if x is a solution
to the following equation
where E(x; for each
Proof. Since a 2 R n
++ and ' ? 0; it is easy to verify that x is a solution to
only if it satisfies the following system:
it is easy to see that (x; y)
is a solution to the system (8) if and only if x is a solution to the system (11)-(13). (This
fact will be used again in later sections). 2
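For reference, the proof of Theorem 4.3 below identifies E(x, 0) (there written E(x)) with
the well-known Fischer-Burmeister function. The sketch that follows records that function
and one common smoothed variant; it is an illustration only, not necessarily the exact
parametrised E(x, ') used in this paper, and the map f in the usage line is an arbitrary choice.

import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = sqrt(a^2 + b^2) - a - b; phi(a, b) = 0 iff a >= 0, b >= 0, a*b = 0.
    return np.sqrt(a ** 2 + b ** 2) - a - b

def smoothed_fb(a, b, mu):
    # A common smoothing: zero exactly when a > 0, b > 0 and a*b = mu;
    # it recovers the Fischer-Burmeister function as mu -> 0.
    return np.sqrt(a ** 2 + b ** 2 + 2.0 * mu) - a - b

def E(x, f):
    # Componentwise residual for CP(f): E_i(x) = phi(x_i, f_i(x)).
    return fischer_burmeister(x, f(x))

f = lambda x: np.array([2.0 * x[0] - 1.0, x[1] - 1.0])   # illustrative f
print(np.allclose(E(np.array([0.5, 1.0]), f), 0.0))      # True: (0.5, 1) solves CP(f)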
In what follows, we establish a basic result used to show the existence of the proposed
continuation trajectory. The following concept is useful.
Definition 3.1. Given (a; b) 2 R n
++ \Theta R n : Let ' 2 (0; 1) be a given scalar. A sequence
++ with kx k k !1 is said to be a ('; OE; -
H)-exceptional sequence for f if for each
x k there exists a scalar fi k 2 (0; 1) such that
for all
The above concept tailored to our needs here can be viewed as a modified form of those
introduced to investigate the solvability of complementarity problems (Isac et al. 1997,
Zhao and Isac 2000a) and variational inequalities (Zhao 1999, Zhao, Han and Qi 1999). For
given ', the concept of ('; OE; -
H)-exceptional sequence is closely related to the existence of a
solution of the system (8) as the following result shows.
Theorem 3.1. Let (a; b) 2 R n
++ \Theta R n and f continuous function. Then
for each ' 2 (0; 1); there exists either a solution to the system (8) or a ('; OE; -
H)-exceptional
sequence for f .
Proof. Let ' be an arbitrary number in (0; 1): Assume that there exists no solution
to the system (8). We now show that there exists a ('; OE; -
H)-exceptional sequence for f .
Consider the homotopy between the identity mapping and E(x; ') defined by (10), i.e.,
operations are performed componentwise. Define the set
We now show that the set is unbounded. Assume the contrary, that is, the set is bounded.
In this case, there is a bounded open
set\Omega ae R n such that f0g [S '
By the definition of S ' ; for any -
@\Omega ; we have H(-x; l) 6= 0 for any l 2 [0; 1]:
By (c) of Lemma 2.1, we deduce that
I denotes the identity
mapping which is one-to-one in R n : Thus by (a) and (b) of Lemma 2.1, we deduce that
has at least a solution, and hence by Lemma 3.1 the system (8) has a solution.
This contradicts the assumption at the beginning of the proof. Thus, the set S ' is indeed
unbounded, and thus there exists a sequence fx k g ' S ' with loss
of generality, we may assume k. We now show that fx k g is a ('; OE; -
exceptional sequence for f: For each x k , it follows from x k 2 S ' that there exists a number
Clearly, l k 6= 1 since kx k k ? 0: On the other hand, if l it follows from (15) that x k
is a solution to the equation E(x; 0: By Lemma 3.1, the pair
a solution to the system (8). This contradicts again the
assumption at the beginning of the proof. Thus, in the rest of the proof, we only need to
consider the situation of l k 2 (0; 1): We show that fx k g is a ('; OE; -
H)-exceptional sequence
for f . We first show that fx k g ae R n
noting that
and 2'a 2 R n
we have from (15) that
0:
Thus fx k g ae R n
It is sufficient to verify (14). Squaring both sides of (15), we have
Multiplying both sides of the above by
The above equation can be written as
'b
By definition, fx k g is a ('; OE; -
H)-exceptional sequence for f . 2.
4. Existence and boundedness of the trajectory. In this section, we show the
existence and the boundedness of the new trajectory under a very weak condition. The
strictly feasible condition is not required in our results. For any vectors v; v 0 2 R n with
we denote by [v the rectangular set [v 0
The following is our
assumption.
Condition 4.1. For any (a; b) 2 R n
++ \Theta R n ; there exists a scalar
the set [
is bounded, where F \Gamma1
and
ae
-oe
Clearly, D - is a rectangular set in R n
now show a general result.
Theorem 4.1. Let (a; b) be an arbitrary point in R n
++ \Theta R n and f
continuous semimonotone function. Then for each ' 2 (0; 1); the system (8) has a solution.
Denote the solution set by
Moreover, if Condition 4.1 holds, then for any ffi 2 (0; 1), the set
is bounded, where C ' is given by (16).
To prove the above result, we utilize the following lemma. Its proof, similar to the one
of Lemma 1 in Gowda and Tawhid (1999), is very easy and omitted here.
Lemma 4.1. Let f be a continuous semimonotone function. Then for any
sequence fu k g ae R n
exists some i 0 and a subsequence fu k j g
such that u k j
is bounded from below.
Proof of Theorem 4.1. To show the former part of the result, by Theorem 3.1,
it is sufficient to show that the function f has no ('; OE; -
H)-exceptional sequence for each
Assume the contrary, that is, there exists a scalar ' 2 (0; 1) such that fx k g is
a ('; OE; -
H)-exceptional sequence for f . By definition, kx
++ and (14) is
satisfied. Since f is semimonotone, by Lemma 4.1, there exist some subsequence fx k j g and
index m such that x k j
is bounded from below. However, by (14) we
have
a i
This is a contradiction since the left-hand side of the above is bounded from below. Thus f
has no ('; OE; -
H)-exceptional sequence for any ' 2 (0; 1), and hence it follows from Theorem
3.1 that the system (8) has a solution for each ' 2 (0; 1):
We now show the boundedness of the set (17). Let ffi 2 (0; 1) be a fixed scalar. Assume
that there exists a sequence f' k g ' (0; ffi] such that f(x(' k ); y(' k ))g is an unbounded sequence
contained in the set (17). We derive a contradiction. Indeed, in this case, fx(' k )g
must be unbounded. By Lemma 4.1, there exist an index p and a subsequence, denoted
also by fx(' k )g; such that x p (' k is bounded from below. Since
, we see from the proof of Lemma 3.1 that x(' k ) must satisfy (11)-
(13). By (13), we have
and the left hand-side of the above is bounded from below, we deduce
from the above that OE(' k
we have
By using (13) again, we have
for all
From the above two relations, we have
a
given as in Condition 4.1.
there exists a k 0 such that ' k ! fl for all k - k 0 and
a
where
ae
a
-oe
Therefore,
(D - k
which is bounded according to Condition 4.1. This contradicts the unboundedness of
)g. The proof is complete. 2
The following condition is slightly stronger than Condition 4.1.
Condition 4.2. There exists a constant fl ? 0 such that the following set
(D
is bounded, where D fl := [0; fle] \Theta [\Gammafl e; fle] and
(D
Corollary 4.1. Let (a; b) be an arbitrary point in R n
++ \Theta R n and f be a continuous
semimonotone function from R n into itself. If Condition 4.2 is satisfied, then for any
constant the set
is bounded, where C ' is given by (16).
Conditions 4.1 and 4.2 are motivated by the following observation.
Proposition 4.1. Let f be a continuous semimonotone function from R n into itself.
Then for each fixed scalar - ? 0, the set F \Gamma1
for any compact set D in R n
Proof. Let - ? 0 be a fixed number. We show this result by contradiction. Assume
that there exists a compact set D in R n
n such that F \Gamma1
- (D) is unbounded. Then there
is a sequence f(x k ; y k )g contained in F \Gamma1
- (D) such that k(x k ; y k )k !1: Notice that
Since D is compact set in R n
there is a bounded sequences f(d k ; w k )g ' D such that
If x k is bounded, then from the above, so is y k . Thus, by our assumption, fx k g must be
unbounded. We may assume that kx k Passing through a subsequence, we may
assume that there exists an index set I such that x k
I and fx k
i g is bounded
I: From the first relation of (19), we deduce that y k
I since
i is bounded. Thus, from the second expression of (19), we deduce that
This contradicts Lemma 4.1, which asserts that
there is an index p 2 I such that x k
is bounded from below. 2
Condition 4.1 actually states that the union of all the bounded sets F \Gamma1
(D - ) is also
bounded. It is equivalent to saying that F \Gamma1
(D - ) is uniformly bounded when the positive
parameter - is sufficiently small. Later, we will prove that for P 0 complementarity problems
Condition 4.1 is strictly weaker than Condition 1.2. In fact, Condition 4.1 may hold even
if the problem has no strictly feasible point (in this case, the solution set of CP(f) is
unbounded).
While the above results show that for any given (a; b) 2 R n
++ \Theta R n a solution (x(');
to the system (8) exists for each ' 2 (0; 1); it is not clear how to achieve the uniqueness and
the continuity of (x('); y(')) for semimonotone complementarity problems . Nevertheless,
at a risk of losing the arbitrariness of the starting point (a; b) in R n
++ \Theta R n ; it is possible
to obtain the uniqueness and the continuity by using the parameterized Sard Theorem,
see, e.g. Allgower and Georg (1990), Kojima et al. (1993), and Zhang and Zhang (1997).
But such a result cannot exclude a zero measure set (in Lebesgue sense) from which the
proposed trajectory fails to exist. For P 0 -functions, however, it is not difficult to achieve
the uniqueness and the continuity of the proposed trajectory as the next result shows.
Theorem 4.2. Let f be a continuous P 0 -function from R n into itself.
(a) For each ' 2 (0; 1); the system (8) has a unique solution denoted by (x('); y(')):
(b) (x('); y(')) is continuous in ' on (0; 1):
(c) If Condition 4.1 (in particular, Condition 4.2) is satisfied, the set f(x(');
(0; ffi]g is bounded for any ffi 2 (0; 1).
(d) If f is continuously differentiable, then (x('); y(')) is also continuously differentiable
in ':
Proof. Since f is a P 0 -function, the Tihonov regularization function, i.e., f(x)+ OE(')x;
is a P-function in x: Therefore, g(x) :=
a P-function in x; where ' 2 (0; 1): By the same proof of Ravindran and Gowda (1997), we
can show that
is a P-function in x for any given c 2 R n
P-function is injective, the equation
has at most one solution for each fixed ' 2 (0; 1); and hence by Lemma 3.1,
the system (8) has at most one solution. Using this fact and noting that each P-function
is semimonotone, we deduce from (a) of Theorem 4.1 that the system (8) has a unique
solution. Part (a) follows. By using this fact and noting that each P 0 -function (in particular,
P-function) is a weakly univalent function (Gowda and Tawhid 1999, Gowda and Sznajder
1999, Ravindran and Gowda 1997), from Lemma 2.2, we conclude that the solution of (8) is
continuous in ' on (0,1). This proves Part (b). Part (c) immediately follows from Theorem
4.1.
We now prove Part (d). Since f(x) is a P 0 -function, the Jacobian matrix f 0 (x) is a
P_0-matrix (Moré and Rheinboldt 1973). Hence, the matrix (1 \Gamma ')(f 0 (x)+ OE(')I) is a P-matrix
for each fixed number ' 2 (0; 1): Therefore, the matrix
is a nonsingular matrix (Kojima, Megiddo and Noma 1991, Kojima, Megiddo, Noma and
Yoshise 1991) for any (x; y) ? 0: Noting that the above matrix coincides with the Jacobian
matrix of the map -
with respect to (x; y). Thus, by the implicit function theo-
rem, there exists a ffi-neighborhood of ' such that there exists a unique and continuously
differentiable curve (x(t); y(t)) satisfying
In particular, (x('); y(')) is continuously differentiable at '. The proof is complete. 2
It is known that the structural properties of the solution set of CP(f) have a close relationship
to the existence of some interior-point trajectories. For instance, Chen et al. (1997)
and Gowda and Tawhid (1999) have proved the existence of a short central path when the
solution set of a P 0 complementarity problem is nonempty and bounded. In fact, a long
central path may not exist in this case. Chen and Ye (1998) gave an example of a 2\Theta 2 P 0
linear complementarity problem with a bounded solution set to show that there is no long
central path. In contrast, the homotopy continuation trajectory proposed in this paper is
always a long trajectory provided that f is a continuous P 0 -function as shown in Theorem
4.1. The next result shows that any slice of the trajectory (subtrajectory) is always bounded
when SOL cp (f) is nonempty and bounded. The following lemma is employed to prove the
result.
Lemma 4.2 Suppose that S is an arbitrary compact set in R n . Let E(x; ') be given by
(10).
(a) Given (a; b) 2 R n
++ \Theta R n : For any ffi ? 0; there must exist a scalar
that
sup
(b) Let h : R n \Theta R 1
++ \Theta R n
defined by
For any ffi ? 0; there exists a scalar fl 2 (0; 1) such that
sup
for all - 2 (0; fl] and (w; v) 2 [0; fle] \Theta [\Gammafl e; fle]:
Proof. Denote by
and
Notice that the inequality
holds for any
scalar
0: We have
Since S is a compact set in R n and OE(') ! 0 as ' ! 0, there exist two positive constants
such that for any sufficiently small ' the following holds:
Thus, for any sufficiently small ', we have for each i that
By the compactness of S, the result (a) holds. Result (b) can be proved in a similar way. 2
Theorem 4.3. Let f be a continuous P 0 -function from R n into itself. Denote by
the trajectory generated by the (unique) solution of the system
(8) as ' varies. If the CP(f) has a nonempty and bounded solution set, then for any scalar
Proof. The existence of the (unique) trajectory follows from (a) and (b) of Theorem
4.2. It suffices to prove that any short part of the trajectory is bounded under the
assumption of the theorem. Assume that SOL cp (f) is nonempty and bounded. Notice that
E(x) := E(x;
is the well-known Fischer-Burmeister function. Thus
i.e., SOL cp nonempty and bounded by assumption. Since f
is a P 0 -function, E(x) is also a P 0 -function (see Ravindran and Gowda 1997, Gowda and
Tawhid 1999), and thus is weakly univalent (Gowda and Tawhid 1999, Gowda and Sznajder
1999, Ravindran and Gowda 1997). It follows from Lemma 2.2 that for any fixed " ? 0
there exists a ffi ? 0 such that for any weakly univalent function h
sup
-\Omega
we have
"B) is a compact set. By (a) of Lemma 4.2, there exists a number
1 such that for any ' 2 (0;
the map h(x) := E(x; '); a P 0 -function in x; satisfies
the relation (20). Thus, it follows from (21) that the set
is contained in the bounded set E 3.1, the subtrajectory, i.e.,
is bounded.
Let ffi be an arbitrary scalar with loss of generality, we assume ffi 0
Our goal is to show the boundedness of the subtrajectory
Notice that
It suffices to show the boundedness of the set T 2 . We show this by contradiction. Assume
that T 2 is unbounded, i.e., there exists a subsequence f(x(' k ); y(' k ))g ' T 2 such that
which implies that kx(' k )k !1: Thus, by Lemma 4.1, there exists a
subsequence, denoted also by fx(' k )g, such that there is an index p such that x p
and f p (' k ) is bounded from below. Since ' k 2 [ffi 0 ; ffi] for all k, by property of OE('); there
must exist a scalar ff ? 0 such that OE(' k ) - ff for all k: It follows from (13) that
which is a contradiction since the left-hand side is bounded from below. 2
The existence and the boundedness of most interior-point paths were established under
Condition 1.2 or some similar versions. See, e.g. Kojima, Megiddo and Noma (1991),
Kojima, Mizuno and Noma (1989), Kojima, Megiddo, Noma and Yoshise (1991), Hotta and
Yoshise (1999), Hotta et al. (1998), Qi and Sun (2000). In what follows, we show that
Condition 1.2 also implies the existence and the boundedness of our trajectory. In fact, we
can verify that Condition 1.2 implies the nonemptyness and boundedness of the solution
set of CP(f ).
Theorem 4.4. Let f be a continuous function. If Condition 1.2 is satisfied, then the
solution set of the problem CP(f) is nonempty and bounded, and hence the result of Theorem
4.3 remains valid.
Proof. By Theorem 4.3, it is sufficient to show that Condition 1.2 implies the nonemp-
tyness and boundedness of the solution set of CP(f ). Since Kojima, Megiddo and Noma
(1991) showed that Condition 1.2 implies the boundedness of their interior-point trajectory;
by continuity, any accumulation point of their trajectory, as the parameter approaches to
zero, is a solution to CP(f ). Thus the solution set is nonempty. We now show its bounded-
ness. By (b) of Condition 1.2, there is a point x Choose a scalar
r such that then by the definition of B++ (f); we have
\Theta B++ (f):
Since D is a compact set, thus by (c) of Condition 1.2, the following set
is bounded. Since 0 2 D; it follows that
which is contained in F \Gamma1 (D); is also bounded. Since F \Gamma1 (0) coincides with the solution
set of CP(f ), we have the desired result. 2
We have shown that for any P 0 complementarity problem the continuation trajectory
proposed in the paper always exists provided f being continuous (see Theorem 4.2), and
that the trajectory is bounded under any one of the following conditions.
ffl Condition 4.1.
ffl Condition 4.2.
ffl The solution set SOL cp (f) is nonempty and bounded.
ffl Condition 1.2 (in particular, Condition 1.1).
It is interesting to compare the above-mentioned conditions. We summarize the result as
follows:
Proposition 4.2. Let f be a continuous P_0-function and let (a, b) ∈ R^n_{++} × R^n be
a fixed vector. Then Condition 1.1 ⇒ Condition 1.2 ⇒ nonemptiness and boundedness
of the solution set of CP(f) ⇒ Condition 4.2 ⇒ Condition 4.1. However, Condition 4.1
may not imply the boundedness of solution set, and existence of a strictly feasible point.
Hence, Condition 4.1 is strictly weaker than Condition 1.2, and than the nonemptyness and
boundedness assumption of the solution set.
Proof. The first implication is well-known (Kojima, Megiddo and Noma 1991). The
second implication follows from Theorem 4.4, and the last implication is obvious. We
now prove the third implication. Assume that the solution set of CP(f) is nonempty and
bounded. We now show that Condition 4.2 is satisfied. Let E(x) be defined as in the proof
of Theorem 4.4, i.e., which is a P 0 -function. Notice that E \Gamma1 (0) is just the
solution set which is nonempty and bounded. Let " ? 0 be a fixed scalar. By Lemma 2.2,
there exists a ffi ? 0 such that for any weakly univalent function h
sup
-\Omega
it holds that
Consider the function h : R n \Theta R 1
++ \Theta R n
For such fixed -; w and v; the function f(x)
is a P-function (in x) since f is a P 0 -function. Therefore, the function h (x; -; w; v) must
be a P-function in x (Ravindran and Gowda 1997), and thus be weakly univalent in x. For
the above given ffi; by (b) of Lemma 4.2, there exists a number fl ? 0 such that
sup
-\Omega
for all - 2 (0; fl] and (w; v) 2 D fl := [0; fle] \Theta [\Gammafl e; fle]: Therefore, replacing h by h in (22)
and (23), we have
where
Therefore, [
It is not difficult to verify that
(D
(D
It follows from the above two relations
that the set 8 !
(D
is bounded. By the definition of F - (x; y) and continuity of f , we deduce that
(D
is bounded. Thus Condition 4.2 holds.
We now give an example to show that Condition 4.1 holds even when a strictly feasible
point fails to exist. Consider the following example.
!/
This function is monotone. For this example, Condition 1.1 and Condition 1.2 do not hold
since f has no strictly feasible point. The solution set of the CP(f) is
which is unbounded. However, this example satisfies Condition 4.1. Indeed, for
any given (a; b) where a
verify that the
set [
(D - ) are given as in
Condition 4.1. Let (x; y) - 0 and
Xy
Then we have
a 2 ];
Thus,
That is,
which imply that must be uniformly bounded for all - 2 (0; fl] where fl is a fixed
scalar in (0,1). Thus, the set [
(D - ) is bounded, and thus Condition 4.1 is satisfied.While the above example has no strictly feasible point, it satisfies Condition 4.1. Hence,
it follows from Theorem 4.2 that from each point (a; b) 2 R n
++ \Theta R n the proposed continuation
trajectory always exists and any subtrajectory is bounded. It is worth noting that
for this example there exists no central path (Example 6.1).
When restricted to P complementarity problems, it turns out that Condition 4.1 can
be further relaxed. In fact, for this case, we can achieve a necessary and sufficient condition
for the existence and boundedness of the trajectory (see the next section for details). This
prominent feature distinguishes the proposed trajectory from the central path and those
continuation trajectories studied by Kojima, Megiddo and Noma (1991), Kojima et al.
(1990), and Kojima et al. (1993).
5. Limiting behavior of the trajectory. The results proved in Section 4 reveal
that under certain mild conditions the continuation trajectory generated by (8) has at
least a convergence subsequence f(x(' k ); y(' k ))g whose limit point
solution to CP(f ). In this section, we consider the following questions: (a) When does the
entire trajectory converge? (b) In the setting of semimonotone functions, if a subsequence
what can be said about x ?
In this section, we show that if '=OE(') ! 0 as ' ! 0; some much stronger convergence
properties of the proposed trajectory can be obtained. The case '=OE(') ! 0 as ' ! 0 is
easy to satisfy; an example of such a OE is OE(') = sqrt(').
We first show that for P complementarity problems this trajectory is always bounded
provided that the solution set is nonempty (not necessarily bounded). Hence, for a P
complementarity problem, this trajectory is bounded if and only if the CP(f) has a solution.
This result further improves the result of Theorem 4.2. Moreover, if the problem has a least
element solution x ; i.e., x - u for all u 2 SOL cp (f); we prove that the entire trajectory
is convergent for P complementarity problems.
Theorem 5.1. Let f be a continuous P -function from R n into itself. Suppose that the
solution set of CP(f) is nonempty.
(a) The system (8) has a unique solution (x('); y(')) for each ' 2 (0; 1), and the solution
is continuous on (0,1). Therefore, the homotopy continuation trajectory
(0; 1)g generated by (8) always exists.
(b) If '=OE(') is bounded for ' 2 (0; 1], then any short part of the trajectory (subtrajectory)
is bounded, that is, for any ffi 2 (0; 1), the set f(x(');
(c) If OE is chosen such that '=OE(') ! 0 as ' ! 0 and SOL cp (f) has a least element,
then the entire trajectory must converge to (x ; y ) where x is the (unique) least element
solution.
Proof. Since each P -function is a P 0 -function, Part (a) follows immediately from
Theorem 4.2. We now prove (b) and (c). Since the system (8) is equivalent to (11) - (13),
we have
be an arbitrary solution to the CP(f ). For each i
we have
Thus,
On the other hand, by (24) and (25), we have
where
1-i-n
Since f is a P -function, by using (26) and (27), we have
'e T a -
1-i-n
'(e T a
ne T a
Rearranging terms and dividing both sides by OE('), we have
1-i-n
be an arbitrary scalar. Since '=OE(') is bounded, it follows from the above
inequality that the set fx(') : ' 2 (0; ffi]g is bounded, and by (24), so is
Part (b) follows.
We now consider the case 0: By the boundedness of the trajectory,
there exists a convergence subsequence f(x(' k
which is a solution to CP(f ). Taking the limit, it follows from (28) that there exists an
index i 0 such that
If CP(f) has a least element solution -
substituting u by -
x in the above we deduce that
x is unique, the entire trajectory must converge to this least element of the
solution set. Thus we have Part (c). 2
For the monotone cases, we have the following consequence of Theorem 5.1.
Theorem 5.2. Let f be a continuous monotone map from R n into R n : Suppose that the
solution set of CP(f) is nonempty. Let OE be given such that 0: Then the
trajectory generated by the system (8) converges, as ' ! 0;
to is a least 2-norm solution, i.e., kx k - ku k for all u 2 SOL cp (f):
Proof. Since each monotone map is a P -map with the constant 0, from Theorem
5.1, the trajectory in question always exists and is bounded. In this case, the inequality
(28) reduces to
'(e T a)
Suppose that fx(' k )g; where ' k ! 0; is an arbitrary convergent subsequence with the
limiting point x , i.e., x(' k we have from (29) that
Since u is an arbitrary element in SOL cp (f ), the above inequality implies that x is unique,
and thus the entire trajectory must converge to this solution. It follows from the above
inequality that
that is,
which implies that x is a least 2-norm solution. 2
It is worth stressing the prominent features of the above results. First, they do not require
the strictly feasible condition. Second, they do not need any properness conditions such as
Condition 4.1, Part (c) of Condition 1.2, or those used in Monteiro and Pang (1996). For
a P complementarity problem the existence and boundedness of the proposed trajectory
is equivalent to the nonemptyness of the solution set. For a monotone problem, the entire
trajectory converges to a least 2-norm solution if and only if the solution set is nonempty.
Kojima, Megiddo and Noma (1991) showed that if Condition 1.2 holds and f is an affine
is an n \Theta n P 0 -matrix, then the whole interior-point
trajectory studied by them is convergent. Moreover, for positive semidefinite matrix M , if
the strictly feasible condition holds, Kojima et al. (1990) showed that the entire interior-point
trajectory studied by them converges to a solution of CP(f) which is a maximum
complementary solution. The trajectory studied in this paper is convergent even when f is
a nonlinear monotone map and the strictly feasible condition fails to hold.
Theorems 5.1 and 5.2 answer the first question presented at the beginning of this section.
They also partially answer the second question. For P -functions, we have proved that the
limiting point x of the trajectory generated by (8) is the least element of the solution
set provided such an element exists. For monotone problems, Theorem 5.2 states that the
limiting point x is a least 2-norm solution. We now study the property of the limiting point
x of the trajectory in more general cases. Our result reveals that x is at least a weak
Pareto minimal solution if f is a semimonotone map. The following is the definition of weak
Pareto minimal solution (see, e.g. Definition 2 in Sznajder and Gowda 1998, Definition 2.1
and an equivalent description of the concept in Luc 1989).
Definition 5.1. (Sznajder and Gowda 1998, Luc 1989) Let x be an element of a
nonempty set S. We say that x is a weak Pareto minimal element if
In other words, x is a weakly Pareto minimal element of S if there is no element s of S
satisfying the inequality s < x (componentwise).
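As a small concrete illustration of this definition (with an arbitrary finite set S, not data
from the paper), weak Pareto minimality within a finite set of candidates can be checked
directly:

import numpy as np

def is_weak_pareto_minimal(x, S, tol=1e-12):
    # x is weakly Pareto minimal in S if no s in S is strictly smaller
    # than x in every coordinate.
    x = np.asarray(x, float)
    return not any(np.all(np.asarray(s, float) < x - tol) for s in S)

S = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([2.0, 2.0])]
print(is_weak_pareto_minimal(S[0], S))   # True
print(is_weak_pareto_minimal(S[2], S))   # False: (1, 0) < (2, 2) componentwise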
We have the following result.
Theorem 5.3. Let f be a continuous semimonotone function from R n into itself, and
let (x('); y(')) be a solution to the system (8) for each ' 2 (0; 1). If the subsequence
is a weak Pareto minimal element of
the solution set of CP(f ).
Proof. We assume that x is not a weak Pareto minimum. Then by the definition,
there exists a solution u such that u ! x . Since x(' k must have that x(' k ) ? u
for all sufficiently large k: Since f is a semimonotone function, there exist an index p and
a subsequence of fx(' k )g; denoted also by fx(' k )g; such that x p
sufficiently large k: Thus
for all sufficiently large k. Since x is a solution to CP(f ),
by the same proof of (25), we have
(y
On the other hand, we have
(y
Combining the above two inequalities yields
Taking the limit, by using the fact
x
which contradicts the relation
As we have mentioned in Section 3, the Tihonov regularization trajectory can be viewed
as an extreme variant of the trajectory generated by homotopy (5). We close this section
by considering this extreme variant. It is known that for P 0 -function, the system (4) has
a unique solution x(-) which is continuous in - 2 (0; 1): See Facchinei (1998), Facchinei
and Kanzow (1999), Ravindran and Gowda (1997), Sznajder and Gowda (1998), Gowda
and Tawhid (1999), and Facchinei and Pang (1998). The set fx(- 2 (0; 1)g forms
the Tihonov regularization trajectory. Motivated by the proof of Theorems 5.2 and 5.3, we
prove the following result which generalizes and improves several known results concerning
the Tihonov regularization methods for CP(f ).
Theorem 5.4. (a) Let f be a continuous P -function from R n into R n : If SOL cp (f) 6= ;;
then the entire Tihonov regularization trajectory, i.e., fx(- 2 (0; 1)g; is bounded. If
CP(f) has a least element solution x , i.e., x - u for all u 2 SOL cp (f ), then the entire
trajectory converges to this solution as
(b) continuous monotone function and SOL cp (f) 6= ;; then the
regularization trajectory, i.e., fx(- 2 (0; 1)g; is bounded and converges
to a least 2-norm solution as
(c) is a continuous semimonotone function, then for any sequence
a solution to the system (4) for each - k ; x is a
Pareto minimal solution of CP(f ).
Proof. The ideas of the proof of Parts (a)-(c) are analogous to that of Theorems 5.1,
5.2 and 5.3, respectively. Here, we only prove the Part (a). Parts (b) and (c) can be easily
proved. Let u be an arbitrary solution of CP(f ). Since
we have that
Thus
It follows from (30) that
for all using (31) and (32) and noting that f is a P -function, we obtain
1-i-n
1-i-n
1-i-n
Thus,
1-i-n
which implies that the set fx(- 2 (0; 1)g is bounded. We assume that fx(- k )g is a
subsequence and x(- k 0: From (33), we deduce that
1-i-n
x
If u is a least element solution in the sense that u - v for all v 2 SOL cp (f); it follows
from the above inequality that x the least element solution is unique, the entire
trajectory fx(- 2 (0; 1)g must converge to this solution as
For a differentiable P 0 -function f , Facchinei (1998) showed that if SOL cp (f) is nonempty
and bounded the Tihonov regularization trajectory fx(- 2 (0; -
-]g is bounded for any
fixed -, and he gave an example to show that it is not possible to remove the boundedness
assumption of the solution set in his result without destroying the boundedness of the regularization
subtrajectory. Here, we significantly improved Facchinei's result in the setting of
showed that the boundedness assumption of the solution
set can be removed, and the entire Tihonov regularization trajectory fx(- 2 (0; 1)g;
rather than just a subtrajectory, is bounded. Since P problems include the monotone ones
as special cases, the above (a) and (b) of Theorem 5.4 can be viewed as a generalization of
the results of Subramanian (1988) and Sznajder and Gowda (1998) for the monotone linear
complementarity problems. Part (c) of Theorem 5.4 extends the result of Theorem 3 in
Sznajder and Gowda (1998) concerning P 0 -functions to general semimonotone functions.
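The behaviour described in Theorem 5.4 can be observed numerically. In the sketch below
the 2 × 2 data are an illustrative assumption (they are not the paper's examples): M is
positive semidefinite, LCP(M, q) has no strictly feasible point, its solution set
{(1 + s, s) : s ≥ 0} is unbounded, and yet the Tihonov-regularised solutions
x(eps) = (1/(1 + eps), 0) remain bounded and converge to the least 2-norm solution (1, 0).

import numpy as np
from itertools import product

def solve_lcp_bruteforce(M, q, tol=1e-10):
    # Solve x >= 0, Mx + q >= 0, x^T (Mx + q) = 0 by enumerating active sets
    # (exponential in n; for tiny illustrative problems only).
    n = len(q)
    for mask in product([False, True], repeat=n):
        B = [i for i in range(n) if mask[i]]
        x = np.zeros(n)
        if B:
            try:
                x[B] = np.linalg.solve(M[np.ix_(B, B)], -q[B])
            except np.linalg.LinAlgError:
                continue
        if (x >= -tol).all() and (M @ x + q >= -tol).all():
            return x
    return None

M = np.array([[1.0, -1.0], [-1.0, 1.0]])   # positive semidefinite, so monotone
q = np.array([-1.0, 1.0])
for eps in [1.0, 0.1, 0.01, 0.001]:
    print(eps, solve_lcp_bruteforce(M + eps * np.eye(2), q))
# x(eps) = (1/(1+eps), 0) -> (1, 0), the least 2-norm solution, as eps -> 0.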
6. Examples and a framework of a path-following method. We have shown that
for P_0 complementarity problems only a mild condition is needed to ensure the existence
and boundedness of the proposed homotopy continuation trajectory. This feature of the
homotopy continuation trajectory enables us to design a path-following method (continuation
method) to solve a very general class of complementarity problems even when a strictly
feasible point fails to exist (in this case, if it is solvable, the P 0 complementarity problem
has an unbounded solution set). We first give two examples to show that the proposed
trajectory does exist and is bounded in general situations.
Example 6.1: Consider the monotone LCP(M; q), where
The solution set SOL cp unbounded. Clearly, this problem
has no strictly feasible point. Thus, this problem has no central path, i.e., the following
system
has no solution for each - ? 0: However, from any (a; b) 2 R 2
++ \Theta R 2 , we can verify that the
proposed continuation trajectory exists and converges to the least 2-norm solution. Indeed,
let
The system (7) can be written as
i.e.,
This system has a unique solution x('); i.e.,
continuous trajectory. Let ' ! 0 and '=OE(') ! 0: Then it
is easy to see that which is the least 2-norm solution of the CP.
It is worth noting that for our trajectory the vector b can be chosen as an arbitrary
point in R 2 : However, we can easily check that the trajectory of Kojima, Megiddo and
Noma (1991) does not exist if there exists a component b i - 0:
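For linear problems, the presence or absence of a strictly feasible point can also be checked
mechanically. The sketch below uses scipy's linprog and illustrative data (not the matrix of
Example 6.1); it maximises a common positive margin for x and Mx + q.

import numpy as np
from scipy.optimize import linprog

def has_strictly_feasible_point(M, q, big=1e3):
    # Maximise t subject to x_i >= t and (Mx + q)_i >= t, with 0 <= t <= big
    # and -big <= x_i <= big; a positive optimum means some x > 0 with Mx + q > 0 exists.
    n = len(q)
    c = np.zeros(n + 1); c[-1] = -1.0                      # minimise -t
    A = np.vstack([np.hstack([-np.eye(n), np.ones((n, 1))]),
                   np.hstack([-M, np.ones((n, 1))])])
    b = np.concatenate([np.zeros(n), q])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(-big, big)] * n + [(0.0, big)])
    return bool(res.success and res.x[-1] > 1e-9)

M = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(has_strictly_feasible_point(M, np.array([-1.0, 1.0])))   # False
print(has_strictly_feasible_point(np.eye(2), np.zeros(2)))     # True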
Example 6.2. Let
This matrix is a P 0 -matrix, but not a P -matrix. The solution set SOL cp
is an unbounded set. Clearly, this problem has no strictly feasible point.
We show that the trajectory studied by Kojima, Megiddo and Noma (1991) does not exist.
++ \Theta R 2 where a Consider the system
where H is given by (6). For this example, the above system can be written as
There are three possible cases:
Case 1: b 2 - 0: The above system has no solution.
Case 2: 0: The above system also has no solution.
Case 3:
a ? 0; and hence
for all sufficiently small ' ? 0: Thus the above system has no solution for all sufficiently
small ': In summary, from any starting point (a; b) 2 R 2
++ \Theta R 2 ; the trajectory of Kojima,
Megiddo and Noma (1991) does not exist. However, since f is a P 0 -function, by Theorem
4.2, the continuous trajectory generated by system (8) always exists for each given point
++ \Theta R 2 : We now verify this fact. Choose OE(') such that '=OE(') ! 0 as ' ! 0:
The system (8) can be written as
i.e.,
The system (34), (37) and (38) has a unique solution, i.e.,
Clearly, by (39) and
(40) is the unique solution to the system (34)-(38). Therefore,
continuous trajectory. Since '=OE(') ! 0; we deduce that x 2
which is the least 2-norm solution.
If '=OE 3 (') - c; where c is a positive constant, then x 2 (')=OE(') is bounded. Therefore, it is
easy to see that the set f(x 1 is bounded for any fixed ffi 2 (0; 1); and
any accumulation point of the set is a weakly Pareto minimal solution. If '=OE 3
then which can also be viewed as a weakly Pareto minimal solution.
Based on the results established in the paper, we may develop a continuation method
for CP(f) by tracing the proposed trajectory. Such a method is expected to solve a broad
class of complementarity problems without the requirement of the strictly feasible condition
or boundedness assumption of the solution set. Here we provide only a framework of the
algorithm without convergence analysis. Let a be an arbitrary point in R n
Thus the starting point can be
chosen as is the current point. We
attempt to obtain the next iterate by solving approximately the following system
scalar. The Newton direction (\Delta'; \Deltax; \Deltay) 2 R 1+2n to the above
system should satisfy the following equationB @
z
\Delta'
\Deltax
\DeltayC
where
Notice that We can specify a framework of the algorithm as follows.
Algorithm: (a): Select a starting point
(b): At the current iterate, solve the following system
!/
\Deltax
\Deltay
Choose suitable step parameters ff k and fi k such that
\Deltay
and
Update fl k and go back to (b).
The main feature of the above algorithm is that the Newton direction (\Deltax; \Deltay) is determined
by a system which is quite different from the previous ones in the literature. Some
interesting topics are the global and local convergence and polynomial iteration complexity
of the above algorithm.
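A minimal skeleton in the spirit of steps (a)-(c) above is sketched below. It does not
reproduce the paper's Newton system: residual and jacobian are user-supplied placeholders,
and the simple backtracking rule merely stands in for the step parameters alpha_k, beta_k.

import numpy as np

def path_following(residual, jacobian, z0, gamma0=0.5, sigma=0.5, outer=40, tol=1e-8):
    # One damped Newton step per outer iteration for the current homotopy
    # parameter gamma, followed by a geometric reduction of gamma.
    z, gamma = np.asarray(z0, float), gamma0
    for _ in range(outer):
        r = residual(z, gamma)
        if np.linalg.norm(r) < tol and gamma < tol:
            break
        dz = np.linalg.solve(jacobian(z, gamma), -r)
        alpha, rnorm = 1.0, np.linalg.norm(r)
        while alpha > 1e-12 and np.linalg.norm(residual(z + alpha * dz, gamma)) > (1.0 - 1e-4 * alpha) * rnorm:
            alpha *= 0.5
        z = z + alpha * dz
        gamma *= sigma
    return z

Such a scaffold can be instantiated, for example, with a smoothed Fischer-Burmeister
residual or with the homotopy of Section 3; the convergence and complexity questions
mentioned above are untouched by it.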
7. Conclusions. For most interior-point and non-interior-point continuation meth-
ods, either the existence or the boundedness of the trajectory in question requires some
relatively restrictive assumptions such as Condition 1.1 or Condition 1.2. Because these
methods strongly depend on the existence of a strictly feasible point that is equivalent to
the nonemptyness and boundedness of the solution set in the case of P complementarity
problems Zhao and Li (2000), they possibly fail to solve the problems with an unbounded
solution set even for the monotone cases. However, the continuation trajectory proposed
in this paper always exists for P 0 problems without any additional assumption, and the
boundedness of it needs no strictly feasible condition as shown by Theorems 4.2, 5.1 and
5.2. Particularly, for P problems we prove that the existence and the boundedness of the
proposed trajectory is equivalent to the solvability of CP(f ). Moreover, if f is monotone, the
entire trajectory converges to a least 2-norm solution whenever the solution set is nonempty.
As a by-product, a new property (Theorem 5.4) is elicited for Tihonov regularization tra-
jectory. The results presented in this paper have provided us with a theoretical basis for
constructing a new path-following method to solve a CP(f ). This method is expected to
solve a general class of complementarity problems which is broader than those to which
most existing path-following methods can be applied.
Acknowledgements
. The research presented in this paper was partially supported
by research Grants Council of Hong Kong grant CUHK358/96P. We would like to thank
two anonymous referees, Associate Editor and Area Editor Professor J. S. Pang for their
helpful comments and suggestions that lead to the improvement of the paper.
--R
Numerical Continuation Methods
The global linear convergence of a non-interior path-following algorithm for linear complementarity problems
A polynomial time interior-point path-following algorithm for LCP based on Chen-Harker-Kanzow smoothing techniques
A global and local superlinear continuation-smoothing method for P 0 and R 0 and monotone NCP
A penalized Fischer-Burmeister NCP-function: theoretical investigation and numerical results
On smoothing methods for the P 0
The Linear Complementarity Problem
Sufficient matrices and the linear complementarity problem.
Structural and stability properties of P 0 nonlinear complementarity prob- lems
Beyond monotonicity in regularization methods for nonlinear complementarity problems.
"La Sapienza"
Weak univalence and connected of inverse images of continuous functions.
Existence and limiting behavior of trajectories associated with
Existence of interior points and interior-point paths in nonlinear monotone complementarity problems
Global convergence of a class of non-interior-point algorithms using Chen-Harker-Kanzow-Functions for nonlinear complementarity problems
A complexity analysis of a smoothing method using CHKS-Function for monotone linear complementarity problems
Tikhonov's regularization and the complementarity problem in Hilbert space.
Exceptional families
Some nonlinear continuation methods for linear complementarity prob- lems
A general framework of continuation methods for complementarity problems.
Homotopy continuation methods for nonlinear complementarity problems.
A unified approach to interior point algorithms for linear complementarity problems
A new continuation method for complementarity problems with uniform P-functions
Limiting behavior of trajectories generated by a continuation method for monotone complementarity problems.
A polynomial-time algorithm for linear complementarity problems
Bimatrix equilibrium points and mathematical programming.
Degree Theory
Theory of Vector Optimization
The complementarity problem for maximal monotone multifunction
Stable monotone variational inequalities
Pathways to the optimal set in linear programming
Properties of an interior-point mapping for mixed complementarity problems
Global methods for nonlinear complementarity problems.
A regularization smoothing Newton method for box constrained variational inequality problems with P 0
Improving the convergence of non-interior point algorithm for nonlinear complementarity problems
Regularization of P 0
A note on least two norm solution of monotone complementarity problems.
On the limiting behavior of the trajectory of regularized solutions of P 0 complementarity problems
bounds for regularized complementarity problems
P matrices are just sufficient
An algorithm for the linear complementarity problem with a P 0 - matrix
Interior Point Algorithms Theory and Analysis
On constructing interior-point path-following methods for certain semimonotone linear complementarity problems
Existence of a solution to nonlinear variational inequality under generalized positive homogeneity.
Exceptional families and existence theorems for variational inequalities.
On the strict feasibility condition of complementarity problems
Characterization of a homotopy solution mapping for nonlinear complementarity problems
--TR
--CTR
Guanglu Zhou , Kim-Chuan Toh , Gongyun Zhao, Convergence Analysis of an Infeasible Interior Point Algorithm Based on a Regularized Central Path for Linear Complementarity Problems, Computational Optimization and Applications, v.27 n.3, p.269-283, March 2004
Y. B. Zhao , D. Li, A New Path-Following Algorithm for Nonlinear P*Complementarity Problems, Computational Optimization and Applications, v.34 n.2, p.183-214, June 2006 | homotopy continuation trajectories;p-functions;strictly feasible condition;continuation methods;complementarity problems;P0-functions |
545541 | View updates in a semantic data modelling paradigm. | The Sketch Data Model (SkDM) is a new semantic modelling paradigm based on category theory (specifically on categorical universal algebra), which has been used successfully in several consultancies with major Australian companies. This paper describes the sketch data model and investigates the view update problem (VUP) in the sketch data model paradigm. It proposes an approach to the VUP in the SkDM, and presents a range of examples to illustrate the scope of the proposed technique. In common with previously proposed approaches, we define under what circumstances a view update can be propagated to the underlying database. Unlike many previously proposed approaches the definition is succinct and consistent, with no ad hoc exceptions, and the propagatable updates form a broad class. We argue that we avoid ad hoc exceptions by basing the definition of propagatable on the state of the underlying database. The examples demonstrate that under a range of circumstances a view schema can be shown to have propagatable views in all states, and thus state-independence can frequently be recovered. | Introduction
This is a paper about the view update problem in the
framework of a new semantic data model, the sketch data
model.
View updating has long been recognised as important
and difficult (see for example [11, Chapter 8]). With the
growth of the need for database interoperability and graceful
evolution, the importance is even greater. Yet proposed
approaches continue to be ad hoc, or incomplete, or require
explicit application code support. For a range of recent approaches
to views see [5], [13], [19], [20], [21], [29], [1].
Interoperability and evolution have also led to calls for
greater semantic data modelling [24] with the power to better
model real world constraints. The authors and their
coworkers have been developing such a modelling paradigm
[16], [18] and Dampney and Johnson have been using it
in large scale consultancies [9], [8], [28]. Recently the
methodology has come to be called the Sketch Data Model
(SkDM) as it is based on the category theoretic notion of
mixed sketch [3], [4].
In this paper, in the framework of the SkDM, we propose
an approach to the view updating problem which is based,
unlike previous approaches, on database states (also known
as database instances or snapshots). The approach is consistent
across the range of different schemata and instances.
Despite the approach being instance based, we can prove
that for a large number of schemata view updates can be
propagated to the underlying database independently of the
specific instances involved.
The plan of the paper is as follows. In Section 2 we
introduce the sketch data model, illustrating it with an example
based on a health informatics model. Section 3 outlines
briefly the mathematical foundation of the sketch data
model. The foundation is important, although we try to suppress
mathematical details as far as possible in the rest of
the paper, and we believe that the paper can be understood
reasonably well without detailed study of this section. Section
4 sets the view update problem in the framework of the
sketch data model by presenting a formal, and very broad,
definition of view. In Section 5 we discuss the importance
of logical data independence, and use it to motivate our definition
of propagatable inserts and deletes in a given view.
Section 6 presents a selection of examples of view inserts
and deletes and shows that propagatability can frequently be
determined for a schema without reference to its instances.
Finally, Section 7 reviews related work and Section 8 concludes.
Figure 1. A fragment of a health informatics graph, the main component of a health informatics model. [Diagram: nodes In-patient operation, Oper'n type, GP, Specialist, Medical pract'ner, Practice agreem't, Hospital, College and Person, linked by arrows labelled at, under, by, isa, has and memb.]
2 The sketch data model
The sketch data model paradigm is a semantic data modelling
paradigm, closely related to ER modelling [7] and
functional data models [12, 204-207], while incorporating
support for constraints via commutative diagrams, finite
limits and finite coproducts [3]. Formally a sketch data
model is specified by giving an ER sketch. The notion of
ER sketch is defined below (Section 3), but in this section
we will concentrate on giving an informal presentation of
a sketch data model by working through an example (Figure 1).
Figure 1 presents a small fragment of a health informatics
graph, chosen for its illustrative value. (Figure 1 is not
in fact part of the Department of Health data models as they
are confidential.) To aid the following discussion we have
simplified the model a little.
The affinity with ER modelling should be clear on casual
inspection. The graph shows as nodes both entities and
relationships, and as arrows certain many-to-one relations
(functions). Attributes are often not shown, but may also be
included by representing the domain of attribute values as a
node and the function representing the assignment of those
values as an arrow. Some relationships, such as Practice
agreement, are tabulated - represented as two functions
while others, such as isa, can be represented as single
functions.
Now to the extra-ER aspects of a sketch data model. Further
semantics are incorporated into the model by recording
which diagrams commute, and which objects arise as limits
or as coproducts of other components of the graph.
A diagram is said to commute when the composites of the
functions along any two paths with common source and target
are required to be equal. Thus, for example, a Specialist
isa Medical practitioner who is a member of a College,
and a Specialist has a Specialisation which isa College.
Naturally we require that the two references to "College"
in the last sentence refer, for any single specialist, to the
same college - we require the diagram to commute.
Not all diagrams commute, and we have demonstrated
repeatedly in consultancies the value of determining early
in the design process which diagrams do commute, and
why those that do not commute should not do so. The
fact that commuting diagrams can be used to model real
world constraints (for example business rules) can be seen
by considering the two triangles: Their commutativity reflects
the requirement that no operations take place without
there being a practice agreement between the practitioner
and the hospital. If instead the arrow under was not in
the model, then Practice agreement would merely record
those agreements that had been made. If the arrow was
there and the triangles were not required to commute then
each operation would take place under an agreement, but it
would be possible for example to substitute practitioners -
to have one practitioner operate under another practitioner's
agreement.
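In equational form, writing h and m (names ours; these two functions are not labelled in Figure 1) for the projections of Practice agreement to Hospital and to Medical practitioner respectively, the two commuting triangles assert that at = h ∘ under and by = m ∘ under: the hospital and practitioner recorded for an in-patient operation must be exactly those of the agreement it is performed under.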
We will not define limits and coproducts here. Instead
we refer the reader to a standard text, say [30] or [3]. But
we will indicate some uses of limits and coproducts in our
example.
Coproducts correspond to disjoint union. They can be
used to model logical disjunctions and certain type hierar-
chies. To take a simple example, requiring that Medical
practitioner be the coproduct of Specialist and GP ensures
that every practitioner also appears as either, but not both of,
a specialist or a GP. The registration details, which are different
for specialists and GPs, are recorded as attributes (not
shown) of the relevant subtypes.
Limits can take many forms. We will mention just three:
1. The cartesian product is a limit, usually just called the
product. It could be used for example to specify a function
Specialisation × Operation type → Scheduled fee.
2. Injective functions can be specified via a limit. The
functions in Figure 1 shown with a distinguished (monic) arrowhead are required to
be injective, and this is achieved via a limit specification. (An important point for the model: The
arrow into Person is not required to be injective, reflecting
the fact that a single person may appear more
than once as a medical practitioner, for example, the
person might practice both as a GP and as a specialist,
or might have more than one specialisation.)
3. A wide range of selection operations can be expressed
as pullbacks. For example, specifying that the square
in Figure 1 be a pullback ensures that all and only those
practitioners who are members of colleges which occur
among the list of specialisations will appear as specialists.
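Concretely, writing isa : Specialisation → College and memb : Medical practitioner → College for the two arrows of the square into College, the pullback condition identifies, in any database state D, the set D(Specialist) with the set of pairs {(s, m) in D(Specialisation) × D(Medical practitioner) | D(isa)(s) = D(memb)(m)}: a specialist record exists exactly when a practitioner's college matches one of the listed specialisations.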
To sum up, a sketch data model IE is a graph, like an
ER graph, together with specifications of commutative dia-
grams, limits and coproducts. We emphasise that this is a
very simple structure: all of these notions can be described
in terms of a graph with an associative composition of arrows
which has identities (such a graph is called a category).
Yet the specifications have a surprisingly wide range of semantic
power. Furthermore, this approach has been demonstrated
to be useful in industrial consultancies.
3 Formal definitions for the SkDM
For completeness, this section outlines the mathematical
foundation for the sketch data model paradigm. The full
details are not essential for understanding the main points of
the paper and some readers might wish to skip this section
on a first reading.
Definition 1 A cone C in a directed graph G = (N, E) consists of a graph I and a graph morphism C_b : I → G (the base of C), a node C_v of G (the vertex of C) and, for each node i in I, an edge e_i : C_v → C_b i. Cocones are dual. The edges e_i in a cone (respectively cocone) are called projections (respectively injections).
Definition 2 A sketch IE = (G, D, L, C) is a directed graph G, a set of pairs of paths in G with common source and target D (called the commutative diagrams) and a set of cones (respectively cocones) in G denoted L (respectively C).
Suppose IE = (G, D, L, C) and IE' = (G', D', L', C') are sketches. A sketch morphism h : IE → IE' is a graph morphism G → G' which carries, by composition, diagrams in D, cones in L and cocones in C to respectively diagrams in D', cones in L' and cocones in C'.
Definition 3 A model M of a sketch IE in a category S is
an assignment of nodes and edges of G to objects and arrows
of S so that the images of pairs of paths in D have
equal composites in S and cones (respectively cocones) in
L (respectively in C) have images which are limit cones (re-
spectively colimit cocones).
To each sketch IE there is a corresponding theory [3] or classifying category [6] which we denote by ~IE. Using the evident inclusion G → ~IE we can refer to nodes of G as objects, edges of G as arrows and (co)cones of IE as (co)cones in ~IE.
A model M of IE in S extends to a functor ~M : ~IE → S. If M and M' are models, a homomorphism is a natural transformation from ~M to ~M'. Models and homomorphisms determine a category of models of IE in S denoted by Mod(IE, S), and it is a full subcategory of the functor category [~IE, S].
We speak of (limit-class, colimit-class)-sketches when L
and C are required to contain (co)cones only from the specified
(co)limit-classes. For example, a (finite limit, finite
coproduct)-sketch is a sketch in which all cones and co-
cones are finite (the graphs which are the domains of the
(co)cone bases are finite graphs), and all the cocones are
discrete (the graphs which are the bases of the cocones have
no edges, only nodes).
Definition 4 An ER sketch IE = (G, D, L, C) is a (finite limit, finite coproduct)-sketch such that
• There is a specified cone with empty base in L. Its vertex will be called 1. Arrows with domain 1 are called elements.
• Nodes which are vertices of cocones whose injections are elements are called attributes. An attribute is not the domain of an arrow.
• The underlying graph of IE is finite.
In this paper an ER sketch is frequently called a sketch
data model (while the sketch data model refers to the SkDM
paradigm).
Definition 5 A database state D for an ER sketch IE is a
model of IE in Set_0, the category of finite sets. The category of database states of IE is the category of models of IE in Set_0. Thus morphisms of database
states are natural transformations.
Remark 6 Notice that every ER model yields an ER sketch: let G be the ER graph, let D be empty and let L
contain only the mandated empty cone with vertex 1. Let
C be the set of discrete cocones of elements of each attribute
domain. If we want to ensure that the ER-relations
are actual mathematical relations, add for each ER-relation
a product cone with base the discrete diagram containing the
entities that it relates, and a "monic" arrow from the relation
node into the vertex of the cone. Add cones to L to ensure
that the "monic" arrows are indeed monic in all models (a
pullback diagram for each such arrow will suffice).
It is now easy to see precisely the extra descriptive capabilities
of the sketch data model: D can be used to record
constraints, and L and C can be used to calculate query results
from other objects. These query results can in turn be
used to add constraints, etc. Furthermore, the techniques we
are using here have been developed with a firm mathematical
foundation, much of which was originally developed for
categorical universal algebra.
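To make the notion of model concrete in programming terms, here is a toy Python sketch (ours, not part of the SkDM formalism or any of its tooling): a database state assigns a finite set to each node and a function to each edge, and a commutative diagram becomes a constraint that can be checked on the state. The entity and arrow names follow Figure 1 where they appear there; spec (the arrow from Specialisation to College) and the sample data are invented.

state = {
    "Specialist": {"s1", "s2"},
    "Medical practitioner": {"m1", "m2", "m3"},
    "Specialisation": {"obstetrics", "orthopaedics"},
    "College": {"ObsColl", "OrthColl"},
}
edge = {
    "isa":  {"s1": "m1", "s2": "m3"},                    # Specialist -> Medical practitioner
    "has":  {"s1": "obstetrics", "s2": "orthopaedics"},  # Specialist -> Specialisation
    "memb": {"m1": "ObsColl", "m2": "ObsColl", "m3": "OrthColl"},   # practitioner -> College
    "spec": {"obstetrics": "ObsColl", "orthopaedics": "OrthColl"},  # Specialisation -> College
}

def square_commutes():
    # The square commutes when memb(isa(x)) == spec(has(x)) for every specialist x.
    return all(edge["memb"][edge["isa"][x]] == edge["spec"][edge["has"][x]]
               for x in state["Specialist"])

Here square_commutes() returns True; redirecting memb(m3) to the wrong college would make it return False, which is exactly the kind of business-rule violation the commutative diagrams rule out.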
4 The view update problem
Views, sometimes called external models or instances of
subschemes, allow a user to query and/or manipulate data
which are only a part of, or which are derived from, the
underlying database. Our medical informatics graph (Figure 1) represents a view of a large health administration
database. It in turn might provide views to an epidemiologist
who only needs to deal with the two triangles, with Operation
type, and with their associated attributes; or to an
administrator of the College of Orthopaedic Surgeons who
needs to deal with all data in the inverse image of that col-
lege, and not with any of the data associated only with other
colleges, hospitals, etc.
The view update problem (VUP) is to determine under
what circumstances updates specified in a view can be
propagated to the entire database, and how that propagation
should take place. The essence of the problem is that not
all views are updatable, that is, an insert or a delete which
seems perfectly reasonable in the view, may be ill-defined
or proscribed when applied to the entire database. For ex-
ample, a college administrator can insert the medical practitioner
details for a new member of the college, but even
though such administrators can see the practice agreements
for members of their college, they cannot insert a new practice
agreement for a member because they cannot see (in
the inverse image view) details about hospitals, and every
practice agreement must specify a hospital.
In order to limit the effect of the view update problem,
views have sometimes been defined in very limited ways.
For example, allowable views might be restricted to be just
certain row and column subsets of a relational database. But
generally we seek to support views which can be derived in
any way from the underlying database so views might be
the result of any query provided by the database, and ought
to be able to be structured in any way acceptable under the
data model in use.
For the sketch data model we now provide a definition
of view which supports the generality just described, and in
Section 5 we provide a solution to the view update problem
in the sketch data model paradigm, while in Section 6
we give a range of examples to give some indication of the
breadth of that solution.
Recall from Section 3 that for each sketch IE there is a corresponding category denoted ~IE. We observed in [10] that the objects of the classifying category correspond to the structural queries of the corresponding database (structural queries do not include numerical computations like count() or avg()). This motivates the following definition.
Definition 7 A view of a sketch data model IE is a sketch data model V together with a sketch morphism V : V → ~IE.
Thus a view is itself a sketch data model V, but its entities
are interpreted via V as query results in the original
data model IE. In more formal terms, a database state D for IE is a finite set valued functor D : ~IE → Set_0, and composing this with V gives a database state D' for V, the V-view of D.
Notation 8 The operation of composing with V is usually written as V*. Thus D' = V*D. In fact, V* is a functor, so for any morphism of database states α : D → E we obtain a morphism V*α : V*D → V*E.
Following usual practice we will often refer to a database state of the form V*D as a view. Context will determine whether "view" refers to such a state, or to the sketch morphism V. If there is any ambiguity, V should be referred to as the view schema.
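As a toy programming illustration of this composition (ours, far simpler than the categorical machinery and not from the paper): if a database state is represented as a function from queries to finite sets, and a view schema as a map from view entities to queries, then taking the V-view of a state is just composition.

def view_of(V, D):
    # V: dict mapping each view entity name to a query understood by D.
    # D: callable taking a query and returning the finite set of its answers
    #    in the underlying database state.
    # The result assigns to each view entity its rows: the V-view of the state.
    return {entity: D(query) for entity, query in V.items()}

For instance, a view schema sending a single entity to the query "members of the College of Obstetricians" yields, for each underlying state, the obstetricians view used in the examples of Section 6.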
5 Logical data independence
The definition of view provided in the previous section
has wide applicability: The presentation of a view as a
sketch data model means it can take any SkDM structural
form; the sketch morphism V ensures that the semantics associated
to the view by the diagrams, limits and colimits in
its sketch data model is compatible with the structure of the
underlying database; and the fact that V takes values in ~IE
allows the view to be derived from any data obtainable from
IE.
Views thus support logical data independence - the
logical structure, the design, of a database can change over
time, but applications programs which access the database
through views will be able to operate unchanged provided
only that the data they need is available in the database, and
that the view mechanism V is maintained as the underlying
database design IE is changed.
We have argued in [17] that view based logical data independence
is required for database interoperability, and that
it should be provided, as suggested by Mylopoulos [24], in
a semantically rich model like the sketch data model.
Views, being ordinary database states, albeit obtained as
V*D from some database state D, can be queried in the same
way as any database. The important question to ask, the
view update problem, is "When are views updatable?". After
all, logical data independence only works fully when updates
to the view can be propagated via the view mechanism
to the underlying database.
In the sketch data model, view updates can fail in either
of two ways:
1. There may be no states of the database which would
yield the updated view. This usually occurs because
the update, when carried to the underlying database,
would result in proscribed states. For example, a view
schema might include the product of two entities, but
only one of the factors. In the view, inserting or deleting
from the product seems straightforward, after all,
it looks like an ordinary entity with a function to another
entity. But in the underlying database the resulting
state of the product might be impossible, as for
instance if the numbers of elements in the product and
the factor become coprime.
2. There may be many states of the database which would
yield the updated view. The simplest example of this
occurring is when a view schema includes an entity,
but not one of its attributes. Inserting into the entity
seems straightforward, but in the underlying database
there is no way to know what value the new instance
should have on the invisible attribute, and there are
usually many choices.
Since a view is just a database state, we know how to
insert or delete instances. Thus we define
Definition 9 We say that a specified view insert/delete is
propagatable if there is a unique minimal insert/delete on
the entire database whose restriction to the view (via V ) is
the given view insert/delete. When an insert/delete is propa-
gatable, we call the database obtained from the unique minimal
insert/delete the propagated update.
Remark 10 i) In mathematical terms, the definition is: Let V : V → ~IE be a view of IE. Suppose q : Q → Q' consists of two database states for V and a database state monomorphism, with Q' being an insert update of Q and with Q = V*D for some database state D of IE. We say that the insert q is propagatable when there exists an initial insert D → D' among all those database state inserts D → D'' whose V-view is the given insert q. Initial here means an initial object in the full subcategory of the slice category under D. The state D' is then called the propagated update (sometimes just the update). The definition of propagatable delete is dual (so we seek a terminal object among all those D'' → D).
ii) The use of "unique minimal" in the definition does not in
general mean a unique state obtained by inserting or deleting
a minimal number of elements. An insert update D' is unique minimal among a class of insert updates if for each other update D'' in the class, there is a unique morphism of database states α : D' → D'' respecting the inclusions
of the database state D in the updates.
iii) When, as will usually be the case, the database is keyed
(that is, for each entity there is a specified attribute called
its key attribute and a specified injective function from the
entity to the attribute) these two interpretations of "unique
minimal" do in fact coincide.
iv) Notice that we define when an insert/delete of a view
(database state) is propagatable, rather than trying to determine
for which view schemata inserts and deletes can
always be propagated. Thus, propagatability, view updata-
bility, is in principle dependent on the database state being
updated.
In fact we can frequently characterise the updatable
states for a given view schema, or even prove that for a variety
of view schemata, all database states are updatable.
Such results are important for designers so that they can design
views that will always be updatable. The next section
provides a collection of examples in which this happens.
Definition 11 A view is called insert (respectively delete)
updatable when all inserts (respectively deletes) are propa-
gatable, independently of the database state.
6 Examples
We collect here a range of illustrative examples. Generally
we keep them (unrealistically) small and simple to
better emphasise the point made by each example. The examples
are intended to show that the definition given in the
previous section does embody an intuitively reasonable notion
of propagatability, and to give some indication of the
breadth of the applicability of the definition.
Example 12 Take the square from Figure 1, and consider
it as a sketch data model. Remember that attributes are not
shown in Figure 1. Consider a view consisting of all of
the specialists with a given specialisation, say all obstetri-
cians. In formal terms, in this view V has one entity Ob-
stetrician, together perhaps with some attributes, and the
sketch morphism V is just the inclusion of that entity and
those attributes into the classifying category generated by
the square. The image of Obstetrician in the classifying
category is the limit of the diagram 1 → College ← Medical practitioner,
where the first arrow "picks out" the College of Obstetri-
cians, and the second arrow is member. (This limit is an
example of a pullback.)
This view is delete updatable. If all attributes of Medical
practitioner appear in the view (as attributes of Obstetri-
cian) then the view is insert updatable.
Notice that the results are independent of the states and
attributes of Specialisation and College, and note that
some systems will not support inserts for this view, since
those systems would require the user to specify the specialisation
of each newly inserted obstetrician (even though it
is always obstetrics) and this can't be done in a view which
doesn't include Specialisation. Date [12, p153] has argued
from this to the need for systems to allow the specification
of defaults in view defining fields.
Example 13 Consider the same sketch data model (the
square from Figure 1), and suppose the view consists of
all specialists from possibly several specialisations, perhaps
obstetrics, paediatrics and orthopaedics. Suppose further
that the view includes an attribute of Specialisation that in
the current database state has unique values for each of the
chosen specialisations.
The formal definition of the view is little changed: V
still has one entity, and associated attributes including this
time an attribute of Specialisation. The morphism V is
still an evident inclusion. The image of the entity in the
classifying category is now the limit of the diagram n → College ← Medical practitioner,
where n is the number of specialisations viewed, the first
arrow "picks out" each of the corresponding colleges, and
the second arrow is still member.
This view is also delete updatable. If all attributes of
Medical practitioner appear in the view (as attributes of the
viewed entity) then inserts are propagatable for the current
state. If the viewed attribute of Specialisation is guaranteed
to take unique values, for example if it is a key, then
inserts will be propagatable for all states and so the view
will be insert updatable. Conversely, if in some state the
viewed attribute of Specialisation did not take unique values
on the chosen specialisations, then inserts would not be
propagatable for that state.
Example 14 Take all of the entities in Figure 1 from which
College can be reached by following a chain of arrows in
the forward direction, that is all entities except Operation
type, Hospital and Person, and consider them as a sketch
data model in which both the triangle and the square com-
mute. The view data model will be given by taking as V
a diagram of the same shape except that the node corresponding
to College, and its two arrows, will be missing. Let V send each entity to the inverse image of (say) the Orthopaedics
College along the path of arrows connecting the
corresponding entity to College. This is the college admin-
istrator's view for the Orthopaedics College. The administrator
can see all the details, except the Personal details,
of all of the members of that college, and no other practi-
tioner's details.
The view is delete updatable (but be careful: if the square
is specified to be a pullback then deleting the one instance
of Specialisation will in fact delete everything from the
view, and correspondingly all members of the college and
all of their data from the full database). It is insert updatable
at all entities except at In-patient operation and Practice
agreement (where the full database needs to know
about the associated operation types and hospitals) and at
Specialisation (which can have at most one element by the
construction of the view). If in the sketch data model for the
entire database the square is specified to be a pullback, and
Medical practitioner is specified to be a coproduct, then
inserts at GP are not propagatable (an insert at GP necessitates
an insert at Medical practitioner which necessitates
an insert at Specialist after which the coproduct constraint
can never be recovered).
Example 15 Let's extend Figure 1 by adding a new sub-type
of In-patient operation called Under investigation.
It will contain those operations which are under investigation
as a result of complaints, whether from patients, their
families, or practitioners. This extended graph will be our
new data model. Let V consist of two entities and an injective
function between them, A → B. Let V take B to the in-patient operations conducted by surgeons who are
practicing members of the Orthopaedics College (these instances
were in the view in the previous example). Let V
take A to the pullback of that inverse image V B along the
inclusion of Under investigation into In-patient operation
(thus V A is the intersection of V B and Under investigation
as subtypes of In-patient operation). This is the
view for the investigating board of the Orthopaedics College
The view is insert and delete updatable. For example an
insert into A would, as part of the view, specify which orthopaedic
operation was being investigated, and when propagated
would generate a new instance in Under investigation
corresponding to the operation. This is what would
happen when the complaint was first received at the col-
lege. (If instead the complaint arrived at say the hospital,
a new instance would be inserted into Under investigation
and the college investigating board would be able to see it
appear in their view.)
In fact, if the view consisted only of A, it would still be
insert and delete updatable (provided that it included all the
attributes of Under investigation and In-patient operation),
although it wouldn't match the semantics of our example
very well - inserts into A would propagate to new instances
in Under investigation and In-patient operation.
This would correspond to a complaint arriving about an orthopaedic
operation which was not stored in the database.
This only seems surprising because we know that the operation
must have taken place before the complaint.
If instead we change the semantics to consider, for ex-
ample, among employees the intersection of those who are
senior executives, and those who are medical staff, an administrator
who is responsible for hiring senior executive
medical staff might employ someone and perform an insert
into the intersection (their view) which would propagate
successfully to the entities Senior executives, Medical
staff and All employees.
Example 16 It is interesting to see what the definition says
about data models which include no attributes, so as to see
an extreme case of its applicability. Suppose the data model
includes no cocones and no cones except the mandated "1"
cone. It happens that a view of any single entity in such
a data model is insert updatable. The update can be calculated
as a Kan extension ([23]) and amounts to freely adding
instances related to the inserted instance (rather than seeking
extant instances to satisfy obligatory relations). This
doesn't work in the presence of attributes because attribute
domains are fixed, so we can't freely add new attribute values.
Example 17 Finally, let's consider the effect of having
Medical practitioner specified to be the coproduct of GP
and Specialist.
If the two triangles are taken as the sketch data model,
Medical practitioner is just an ordinary entity, and a view
including only Medical practitioner is insert and delete updatable.
If the two triangles plus GP and Specialist, together
with the coproduct specification, are taken as a sketch data
model, then a view including only Medical practitioner is
delete updatable, but not insert updatable. If the view includes
both Medical practitioner and GP (or instead Spe-
cialist) then it is both insert and delete updatable.
7 Related Work
A number of authors are now using sketches to support
data modelling initiatives. Notably Piessens [25], [26] has
developed a notion of data specification including sketches.
He has since obtained results on the algorithmic determination
of equivalences of model categories [27] which are
intended to support plans for view integration. Diskin and
Cadish have used sketches for a variety of modelling cir-
cumstances. See for example [14] and [15]. They have been
concentrating on developing the diagrammatic language of
"diagram operations".
Atzeni and Torlone [2] have developed a solution to the
problem of updating relational databases through weak instance
interfaces. Although they explicitly discuss views,
and note that their approach does not deal with them, the
technique for obtaining a solution is similar to the technique
used here. They consider a range of possible solutions
(as we here consider the range of possible updates
// D 00 ) and they construct a partial order on them,
and seek a greatest lower bound (analogous with our ini-
tial/terminal solution). A similar approach, also to a non-
view problem, appears in [22].
8 Conclusion
To date our approach, in common with many others,
deals with inserts and deletes, but not with modifications
of extant values. Also, our views do not contain arithmetic
operations, and we have not developed special treatments of
null values. Each of these is the subject of ongoing work.
Similarly, this paper does not deal with implementational
issues which are the subject of ongoing research in computational
category theory. Despite these caveats the approach
presented here has surprisingly wide applicability.
It seems that a significant part of the difficulty of solving
the view update problem has arisen because previous authors
have sought a single coherent solution and have based
their proposed solutions on schemata. In practice, many
particular situations (states) have "solutions" although they
fall outside the proposed solution so ad hoc adjustments are
made, losing coherence. This has also contributed to the
impression that the class of updatable views is difficult to
characterise.
We have proposed a single coherent solution, but based
it on states, avoiding ad hoc amendments. Although it is
based on states, we can prove that many schemata always
are or aren't updatable, and we have provided a range of
examples of such situations. This gives the benefits that
were sought in schema based solutions while avoiding ad
hoc amendments.
9 Acknowledgements
The research reported here has been supported in part
by the Australian Research Council, the Canadian NSERC,
the NSW Department of Health, and the Oxford Computing
Laboratory.
--R
Complexity of Answering Queries Using Materialized Views.
Updating relational databases through weak instance interfaces.
Category theory for computing science.
Toposes, Triples and Theories.
Update semantics of relational views.
Handbook of Categorical Algebra 3.
The Entity-Relationship Model- Toward a Unified View of Data
Fibrations and the DoH Data Model.
An illustrated mathematical foundation for ERA.
Introduction to Database Systems.
Introduction to Database Systems
On the correct translation of update operations on relational views.
Algebraic graph-based approach to management of multidatabase systems
Variable set semantics for generalised sketches: Why ER is more object oriented than OO.
On the value of commutative diagrams in information modelling.
Database interoperability through state based logical data independence
Algorithms for translating view updates into database updates for views involving selections
View updates in relational databases with an independent scheme.
Implementing queries and updates on universal scheme interfaces.
Categories for the Working Mathematician.
Next generation database systems won't work without semantics!
data specifications: an analysis based on a categorical formulation.
Categorical data specifications
Selective Attribute Elimination for Categorical Data Specifications
Lessons from a failed information systems initiative: issues for complex organisations. International Journal of Medical Informatics
Information integration using logical views.
Categories and Computer Science.
--TR
An introduction to database systems: vol. I (4th ed.)
View updates in relational databases with an independent scheme
An illustrated mathematical foundation for ERA
Updating relational databases through weak instance interfaces
Categories and computer science
Category theory for computing science, 2nd ed.
Answering queries using views (extended abstract)
Complexity of answering queries using materialized views
Next generation database systems won't work without semantics! (panel)
Update semantics of relational views
On the correct translation of update operations on relational views
The entity-relationship model - toward a unified view of data
Algorithms for translating view updates to database updates for views involving selections, projections, and joins
An Introduction to Database Systems
Information Integration Using Logical Views
Implementing Queries and Updates on Universal Scheme Interfaces
Selective Attribute Elimination for Categorical Data Specifications
--CTR
Michael Johnson , C. N. G. Dampney, On category theory as a (meta) ontology for information systems research, Proceedings of the international conference on Formal Ontology in Information Systems, p.59-69, October 17-19, 2001, Ogunquit, Maine, USA
J. Nathan Foster , Michael B. Greenwald , Jonathan T. Moore , Benjamin C. Pierce , Alan Schmitt, Combinators for bidirectional tree transformations: A linguistic approach to the view-update problem, ACM Transactions on Programming Languages and Systems (TOPLAS), v.29 n.3, p.17-es, May 2007 | category theory;view update;data model;semantic data modelling |
563832 | An approach to phrase selection for offline data compression. | Recently several offline data compression schemes have been published that expend large amounts of computing resources when encoding a file, but decode the file quickly. These compressors work by identifying phrases in the input data, and storing the data as a series of pointer to these phrases. This paper explores the application of an algorithm for computing all repeating substrings within a string for phrase selection in an offline data compressor. Using our approach, we obtain compression similar to that of the best known offline compressors on genetic data, but poor results on general text. It seems, however, that an alternate approach based on selecting repeating substrings is feasible. | Introduction
If data is to be stored on CD, DVD or in a static
database it will be compressed once, but often decompressed
many times. Given this scenario, a compression
scheme can afford to spend several hours of
computing time, make multiple passes over the input,
and consume many megabytes of RAM during the
compression process in order to make the compressed
representation as small as possible. Decompression,
however, should be both fast and memory efficient.
Such a compression scheme is said to be offline.
One way to meet the demand for fast decoding
and high compression levels is to identify a suitable
phrase book such that the input data can be stored as
a series of pointers to entries in the phrase book. For
example, Figure 1 shows how the simple string "How
much wood could a woodchuck chuck if a woodchuck
could chuck wood?" is compressed using three different
phrase books. The first representation favours
phrases that appear frequently in the string, hence the
space character forms a phrase by itself. The second
representation looks to include the space character at
the start or end of a word to form a phrase. The
third greedily chooses the longest repeating phrase
that it can, which is similar to the strategy employed
in compressors based on the LZ77 schemes [Ziv & Lempel,
1977], such as gzip, winzip, and pkzip.
In the final file both the phrase book and the series
of pointers must be stored. It is difficult to tell by inspection
of our example which of these three phrase
books will yield the best compression. The phrase
book in representation 1 contains only 27 characters,
but has 26 pointers. The phrase books in representation
two and three have more characters in their
phrase books, but significantly fewer pointers. Unfortunately
the variables involved in choosing a phrase
book are much more complicated than merely the
number of pointers and number of characters in the
phrase book. Assuming that some sort of statistical
coder (for example, Huffman coding or arithmetic
coding) will be used to actually encode the pointers
and the phrase book, the frequency distribution or
self entropy of the two components are better indicators
of the fitness of a phrase book. In this particular
example, the cost of a zero-order Huffman code on
characters in the phrase book and pointers in the data
portion, shown in the last row of Figure 1, indicates
that the first phrase book leads to the smallest
representation of 21 bytes. Even these calculations
are only an approximation of the final compression
levels obtained with a code of this nature, as necessary
information that describes the Huffman codes
employed (a prelude) is not included in these estimates.
Offline compression through the use
of a phrase book is not a new idea
[Rubin, 1976, Storer & Szymanski, 1982,
Nevill-Manning & Witten, 1994], but with the
increased availability of cheap, powerful computers,
computationally intensive techniques are now viable
during encoding in order to improve compression
levels through the construction of good phrase books.
The task of identifying the best possible phrase book
on any input has been shown to be NP-complete
[Storer & Szymanski, 1982], but using heuristics and
a lot of machine power, compression levels superior
to alternate techniques have been achieved on some
data sets.
Nevill-Manning & Witten introduced an
approach that induces a context free grammar
from the text, using their grammar rules
to describe the phrase book for compression
[Nevill-Manning & Witten, 1994]. Both Larsson &
Moffat and Cannane & Williams explore the use
of repeated pairing of characters in order to build
a phrase book, with the emphasis on small and
large data sets respectively [Larsson & Moffat, 2000;
Cannane & Williams]. Bentley & McIlroy describe an
efficient algorithm for finding long repeating substrings
to place in the phrase book [Bentley & McIlroy, 1999].
This paper expands on work of Apostolico &
Lonardi. Recently they introduced the Offline
compressor [Apostolico & Lonardi, 2000], which calculates
a measure of compression gain for all possible
non-overlapping substrings of a string. A high gain
factor indicates that if the substring was to be chosen
as a phrase for the phrase book, good compression
would result. Similarly, a low gain score for a sub-string
indicates that the particular substring should
not be chosen as a phrase in the phrase book. The
compression algorithm used in Offline is outlined in
Figure
2.
[Figure 1 table: the three phrase-book decompositions of the example string, each phrase numbered in order of first occurrence, with final rows giving the Huffman-coded size in bytes of the phrase book, the pointers, and their total for each representation.]
Figure 1: The string "How much wood could a woodchuck
chuck if a woodchuck could chuck wood?" represented
with three possible phrase books. The first
occurrence of each phrase is shown in gray for each
case, and phrases are numbered in order of first occurrence.
The final three rows show the cost of Huffman encoding
the phrase book and pointers in bytes.
As alluded to in the above example, calculating
the exact gain in compression for any given substring
is a difficult task. At the commencement of encoding
there is no way of knowing how many phrases will
end up in the phrase book, or what the probability
distributions of characters in the phrase book or
pointers in the data component will be. Accordingly,
Apostolico and Lonardi experimented with three
approximate gain formulations. Using this simple approach
they achieve excellent levels of compression on
some genetic sequences, and competitive compression
levels on general data [Apostolico & Lonardi, 2000].
Details of their results can be found at
www.cs.purdue.edu/homes/stelo/Off-line/.
The implementation of Offline relies on a suffix
tree data structure, which is a trie that holds all
possible suffixes of a string [Ukkonen, 1995, and
references therein].
As they acknowledge, however, a suffix tree is a
large and slow data structure for this task. In this
paper we introduce an alternate approach for performing
compression using the Offline algorithm,
based on a string processing algorithm for finding all
repeating substrings in a string. By focusing only on
the repeating substrings, rather than all suffixes of a
string, we hypothesise that the time taken to perform
gain calculations and string manipulations using the
Offline approach can be significantly reduced.
Section 2 describes Crochemore's algorithm
[Crochemore, 1981] for finding repeating substrings
INPUT String to compress.
Step 1 Calculate the gain for all possible
non-overlapping substrings of the
input string to be compressed.
Step 2 Choose the substring with the
highest gain factor, and add it to
the phrase book.
Step 3 Remove all occurrences of the chosen
substring from the string, and
store a pointer to the original
phrase for each occurrence.
Step 4 Recalculate the gain measure for
all substrings of the input string
that have not been covered by a
chosen phrase.
Step 5 While there is still a positive gain
factor, repeat from Step 2 on the
remaining uncovered string.
OUTPUT Phrase book and list of pointers
representing the input string.
Figure
2: The basic algorithm employed in Offline.
within a string, and explains how we use it to select
phrases in our offline compression scheme, Crush.
Section 3 describes experimental results for both compression
levels and timing for Crush and Offline.
Finally, Section 4 discusses our results, and their implications.
The Crush compressor consists of two stages. The
first analyses the input string using Crochemore's algorithm
to generate a two dimensional array C, which
stores information on all substrings up to a given
length. This data structure is then traversed to calculate
a gain measure for all substrings occurring in the
leftmost uncompressed position of the input string,
and the highest gain substring is chosen for the phrase
book. Note that this approach deviates from the Offline
algorithm as we make a local choice at the left-most
uncovered position, rather than a global choice
over all possible uncovered positions remaining in the
string. These two stages are explained in detail in
the following two subsections, and summarised in Figure
3.
2.1 Stage 1: String analysis
Crochemore's algorithm [Crochemore, 1981] for finding
all repeating substrings in an input string begins
by grouping all positions in the string that have
the same character into a single class. Each of these
classes is then refined into subclasses to get repeating
substrings of length two. In turn these classes are
refined to get substrings of length three, and so on.
For example, consider the input string
S = a b a a b a b a a b a a b,
with positions numbered 1 to 13. The positions that form the initial classes for strings
of length one are
a = {1, 3, 4, 6, 8, 9, 11, 12} and b = {2, 5, 7, 10, 13}.
This first stage can be accomplished in O(n) time,
where n is the number of characters in the input
string, assuming that the alphabet from which characters
of the string are drawn is indexable (for exam-
ple, ASCII).
The next stage of the algorithm splits each class
into classes that represent the starting position of substrings
of length two. The first class, a, splits into the
classes aa and ab, and class b splits into the classes
ba and b$, where $ is the "end of string" symbol:
ab = {1, 4, 6, 9, 12}   aa = {3, 8, 11}   ba = {2, 5, 7, 10}   b$ = {13}
Using a naive approach, this stage can be accomplished
in Θ(n) time, simply by checking the character
following each position in each class. For example,
in order to refine class
a = {1, 3, 4, 6, 8, 9, 11, 12}
it would be necessary to check positions {2, 4, 5, 7, 9, 10, 12, 13}
of S. In this case S4 = S9 = S12 = a and S2 = S5 = S7 = S10 = S13 = b,
so {3, 8, 11} must form the class for aa, and {1, 4, 6, 9, 12}
forms the class for ab.
The process of refinement continues, ignoring any
class that contains only a single position, as that must
not represent a substring that repeats, until no refinements
are possible:
ab = {1, 4, 6, 9, 12}   aa = {3, 8, 11}    ba = {2, 5, 7, 10}
aba = {1, 4, 6, 9}      aab = {3, 8, 11}   baa = {2, 7, 10}
abaa = {1, 6, 9}        aaba = {3, 8}      baab = {2, 7, 10}
abaab = {1, 6, 9}       baaba = {2, 7}
abaaba = {1, 6}
If the naive approach to refinement is adopted at
each stage, then the total running time is O(n^2), as
there could be O(n) levels, each requiring Θ(n) time.
Crochemore offers two insights that allow this time
to be reduced to O(n log n) [Crochemore, 1981].
The first is that it is not necessary to refer back
to the original string S in order to refine a class; the
refinement can be achieved with respect to other classes
at the same level. In order for members of a class
in level L to be refined into the same class in level
L + 1, they must share the same character in their
L+1st position. The naive approach checks this character
directly in S for each class member. However,
if members of a class share the same L + 1st character,
their length L suffixes must also be identical.
For example, substrings aba that all have a b in the
next position share the three character suffix bab. For
these substrings of length L + 1 to share a suffix of
length L, their positions plus one must all appear in
another class at level L.
For example, if the substring aba occurs in position
i, and the substring baa occurs in position i + 1, then,
taking into account the overlap of the two character
suffix of aba and the two character prefix of baa, we
can deduce that the string abaa must occur at position
i. This is precisely what happens when refining
the class for aba = {1, 4, 6, 9} at level 3. We can
check which of the positions {2, 5, 7, 10} fall in the
same class on level 3, and deduce that such strings
form a class at level 4. In this case, 2, 7, and 10 all
inhabit the class baa, so {1, 6, 9} forms a class at level
4. Similarly, {5} is in a class of its own on level 3, so
{4} forms a class at level 4.
This observation alone does not reduce the running
time of the algorithm, but when used in conjunction
with the observation that not all classes
need be refined at each level, the running time comes
down. Consider again the example of refining aba =
{1, 4, 6, 9} at level 3 into classes on level 4. How many
other classes on level three do we need to inspect in
order to perform this refinement? As discussed above,
each of the other classes must have a prefix of ba so
that it overlaps with the suffix of aba. If we inspect
the refinements that took place on level 2 to produce
level 3 we see that the class ba = {2, 5, 7, 10}
split into 2 classes on level 3, namely {2, 7, 10}
and {5}. These are precisely the two classes we
need to consider when refining aba = {1, 4, 6, 9}. This
in turn means that if we use one of them to perform
the refinement of aba = {1, 4, 6, 9}, the remaining positions
must fall into the other class at level 4. In
this case we can either refine {1, 4, 6, 9} using {5},
to get a class {4} and a remaining class of {1, 6, 9},
or we can refine {1, 4, 6, 9} using {2, 7, 10}, to get a
class {1, 6, 9} and a remaining class of {4}. Obviously
we should choose the smallest classes against which
to refine, leaving the largest class as the "left over",
with no processing required. This is precisely the approach
adopted by Crochemore's algorithm; at each
stage only the "small" classes are refined.
Observe that when a class at level L is refined into
two or more classes at level L + 1, the largest of the
small classes (that is, all but the biggest of the new classes)
cannot be greater than half of the size
of the parent class. So any character in a string can
appear in a "small" class at most O(log2 n) times,
hence can only be involved in a refinement O(log n)
times. Seeing as there are n characters, the overall
running time of Crochemore's algorithm is O(n log n).
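As a concrete illustration of the refinement process just described, here is a minimal Python sketch (ours, not the authors' C implementation). It uses the naive strategy of splitting every class on the character that follows each member, so it runs in O(n^2) time rather than exploiting Crochemore's small-class optimisation; positions are 0-based, whereas the worked example above numbers them from 1.

def repeating_classes(s, max_len):
    # Level 1: one class per distinct character that occurs more than once.
    first = {}
    for i, ch in enumerate(s):
        first.setdefault(ch, []).append(i)
    levels = {1: [c for c in first.values() if len(c) > 1]}

    for L in range(1, max_len):
        refined = []
        for cls in levels[L]:
            # Split each class on the character following its length-L substring.
            buckets = {}
            for i in cls:
                if i + L < len(s):           # a substring of length L+1 still fits
                    buckets.setdefault(s[i + L], []).append(i)
            # Keep only classes that still represent repeating substrings.
            refined.extend(c for c in buckets.values() if len(c) > 1)
        if not refined:
            break
        levels[L + 1] = refined
    return levels

On the running example, repeating_classes("abaababaabaab", 6)[3] contains the class [0, 3, 5, 8], which is the 0-based form of aba = {1, 4, 6, 9}, and level 6 contains only [0, 5] for abaaba.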
This very brief description of the intuition behind
Crochemore's algorithm hides some of the complex
and intricate details required to achieve a fast, memory
efficient implementation of this algorithm. The
implementation used in this paper operates in O(n)
space, storing only a list of classes for each level of
refinement, discarding lists from previous levels. The
constant factor is quite high in this space bound, with
the current implementation requiring 44n bytes of
memory.
2.2 Stage II: Phrase selection
In order to use the results from Crochemore's algorithm
for phrase selection, our current implementation
of Crush stores the class information for each
level as it is derived. As memory conservation during
encoding is not a primary aim of Crush, a simple
array of n integers is used to hold a circular list of
class members for each level. More formally, element
C[L][i] of array C is a pointer to the next member
of the class containing position i on level L, with the
final class member pointing back to the first member.
In the above example of Crochemore's algorithm, on level 1 the class
a = {1, 3, 4, 6, 8, 9, 11, 12} would be stored as C[1][1] = 3, C[1][3] = 4, and so
on around to C[1][12] = 1, with the positions of class b linked similarly.
The number of levels is restricted to K, a parameter
to Crush, so total space requirement for C is O(Kn).
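For illustration only (the authors' C implementation stores each level as an array of n integers, for O(Kn) space in total), the circular lists could be built from the per-level classes produced by the repeating_classes sketch above as follows; the dictionary layout and the -1 sentinel are our choices.

def build_c_array(levels, n):
    # C[L][i] holds the next member (0-based) of the class containing position i
    # on level L, with the last member wrapping around to the first; positions
    # that belong to no repeating class at that level are marked with -1.
    C = {}
    for L, classes in levels.items():
        row = [-1] * n
        for cls in classes:
            members = sorted(cls)
            for j, pos in enumerate(members):
                row[pos] = members[(j + 1) % len(members)]
        C[L] = row
    return C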
Once array C exists, phrase selection can begin.
Unlike Offline, Crush makes its phrase selections
out of the set of substrings beginning at the leftmost
uncovered position which do not overlap an already
covered position. The Offline compressor, however,
considers all possible non-overlapping substrings at
each phrase choice. Crush chooses the phrase p with
the highest gain measure G_p out of the set of possible
substrings. If G_p ≤ 0 then the character at the uncovered
position is skipped, and left to a final stage
of processing. The final stage simply treats all uncovered
characters as single letter phrases with infinite gain:
it stores the single letter in the phrase book
and its uncovered occurrences as pointers.
Apostolico & Lonardi [Apostolico & Lonardi, 2000] reported that computing G_p
as the cost of storing all occurrences of a phrase
with a zero-order character model less the cost of
storing a single copy and a series of pointers to
the copy gave the best results in their experiments.
Accordingly, Crush uses a similar gain measure.
Let H be the cost in bits of storing a single character
in the input string. Using a simple character
based model and a statistical coder (for example,
Huffman coding or arithmetic coding), H would be
around 2 to 3 bits, while an ASCII code has H = 8 bits.
Quantity H can be estimated by a preliminary scan
of the data which records the probability p_c of each
character c, setting
H = - Σ_c p_c log2 p_c,
which is Shannon's lower bound on compression levels
[Shannon, 1948]. This is the approach adopted by
Crush.
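A minimal sketch of that preliminary scan (assuming a non-empty input; the function name is ours):

from collections import Counter
from math import log2

def zero_order_entropy(data):
    # Average bits per character of `data` under a zero-order model:
    # Shannon's bound, used as the estimate of H.
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())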
Let f_p be the frequency with which phrase p occurs
in the text, and l_p the number of characters in
phrase p. The cost of storing the f_p copies of phrase p
uncompressed in the text is approximated by H f_p l_p
bits. If phrase p is chosen for the phrase book, one
copy is required at a cost of approximately H l_p bits
for the phrase, plus H bits to store either the length
of the phrase, or a terminating symbol for the phrase,
in the phrase book. Apart from the phrase book
copy of p, it is also necessary to store f_p pointers
to that phrase. The cost of a pointer to the new
phrase can be estimated by ⌈log2 P⌉ bits, where P
is the number of phrases already in the phrase book
[Apostolico & Lonardi, 2000]. This is not a very accurate
estimate of pointer cost as it amounts to the
cost of a flat binary code for the pointers currently in
the phrase book. Of course as Crush continues, P,
the number of phrases, will increase, and so the net effect
is to slowly make the cost of adding a phrase more
expensive. The total gain in compression if phrase p
is to be included in the phrase book, therefore, is:
G_p = H f_p l_p              (uncompressed representation)
    - (H l_p + H)            (phrase book entry cost)
    - f_p ⌈log2 P⌉           (pointer costs)
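A small Python sketch of this estimate (the function name is ours; we add 1 inside the logarithm so the estimate is defined while the phrase book is still empty):

from math import ceil, log2

def phrase_gain(H, length, freq, phrases_in_book):
    # Estimated saving in bits from adding a phrase of `length` characters that
    # occurs `freq` times, given `phrases_in_book` phrases already chosen.
    uncompressed = H * freq * length     # cost of leaving every copy in place
    book_entry = H * length + H          # one stored copy plus its length/terminator
    pointers = freq * ceil(log2(phrases_in_book + 1))
    return uncompressed - book_entry - pointers

Crush would evaluate this for each candidate length at the leftmost uncovered position and keep the candidate with the largest positive value, skipping the position when no candidate gains.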
Figure 3 shows pseudo code for the complete algorithm
used in Crush. Steps 1 and 2 simply
run Crochemore's algorithm and create the C array,
Step 4 performs phrase selection, and Step 5 finishes
any skipped positions for which there was no positive
gain during the Step 4 processing. The time
required by Crush is dominated by the traversals of
the C lists in Step 4.3.2. For each possible substring,
of which there may be K - 1, the entire pointer chain
of O(n) items must be traversed in order to calculate
the frequency of a substring. Step 4.6 also sees
the chain of pointers relating to a selected phrase traversed
a second time to record pointers and mark the
positions as covered. Note Steps 4.3.2 and 4.6 must
also exclude self overlapping positions from consideration.
An implementation of Crush as described above, and
an implementation of Offline as downloaded from
www.cs.purdue.edu/homes/stelo/Off-line/, were
run on the Purdue corpus [Purdue, 2001]. Table 1
shows the compression and speed results achieved using
a Pentium III 800MHz CPU with 640Mb of RAM,
primary cache, and running Linux. The C
code was compiled using gcc version egcs-2.91.66
with full optimisations. Phrases in Crush were limited
to K characters in length. The values reported
in Table 1 used a gain formula with an additional
final term added in order to bias the phrase
selection towards single characters: that is, to reduce the
number of phrases chosen. Our initial experiments
showed that Crush using the gain measure stated in
the previous section was too aggressive in its phrase
selection, a problem we discuss below. Compression
values assume a Huffman coder is used in coding both
the phrase book and pointer lists, but prelude costs
for both codes are not included. For both codes, the
number of codewords was small (less than 100 in all
cases), and so prelude size has negligible effect on final
compression levels.
As Table 1 shows, our compression results were
competitive on most files of the corpus, which is unusual
given our local rather than global approach to
phrase selection. One obvious failure of Crush is
to find good phrases in the file Spor All 2x, which is
the file Spor All repeated twice. This is an example
of the short-comings of the local choice approach we
have adopted. The Spor All 2x file, as for all the files
in the Purdue Corpus, consists of 258 × 2 blocks of
about 14 lines of genetic data as shown in Figure 4.
On this file, Offline first chooses
upstream sequence, from -800 to -1\n
as its highest gain phrase, and then proceeds to choose
200 phrases all of length 800 characters (the maximum
allowed) and that occur four or less times.
Crush, on the other hand, must first deal with the
characters at the start of the first block (the ">RTS2 RTS2" of Figure 4)
before it can select the phrase
upstream sequence, from -800 to -1\n.
In fact, Crush determines that
upstream sequence, from -800 to -1\n
is its first decent phrase, leaving the preceding characters
to be encoded as singletons. Amongst Crush's
phrase choices for this file are the phrases
upstream sequence, from -800 to -1
upstream sequence, from -800 to -1
upstream sequence, from -800 to -1
3 upstream sequence, from -800 to -1
4 upstream sequence, from -800 to -1
9 upstream sequence, from -800 to -1
7 upstream sequence, from -800 to -1
8 upstream sequence, from -800 to -1
5 upstream sequence, from -800 to -1
6 upstream sequence, from -800 to -1,
which clearly could be improved. Once Crush gets
to line 2222 of the file, the location of the block Offline
designates as its second best phrase, most of
that block has already been covered by earlier choices
of smaller phrases, and so is not available as a choice
to Crush.
A similar problem occurred on general text. We
ran Crush on the small text files from the Calgary
[Calgary, 2001] and Canterbury [Canterbury, 2001]
Input String S to be compressed, and
K, the maximum length phrase to consider for the phrase book.
Step 1 Create the level one classes for Crochemore's algorithm.
Step 2 Run Crochemore's algorithm to level K, storing each level in the C array such that
C[k][i] points to the next member of the class containing position i on level k.
Step 3 Set all positions of S to uncovered.
Step 4 While there are uncovered positions in S
Step 4.1 Let i be the smallest uncovered position in S.
Step 4.2 Let j be the min(smallest covered position > i, i + K).
Step 4.3 For each level 2 ≤ k ≤ j - i
Step 4.3.1 Set f ← 0.
Step 4.3.2 For each position c in the list rooted at C[k][i]
If the k positions {c, c+1, ..., c+k-1} are all uncovered set f ← f + 1.
Step 4.3.3 Set G_k ← Hfk - H(k + 1) - f⌈log2 P⌉.
Step 4.4 Find the level m for which G_m is maximal.
Step 4.5 If G_m ≤ 0 then record position i as skipped, mark it as covered, and goto Step 4.
Step 4.6 For each position c in the list rooted at C[m][i]
Step 4.6.1 If the m positions {c, c+1, ..., c+m-1} are all uncovered
Record a pointer in position c to the new phrase.
Set positions {c, c+1, ..., c+m-1} to covered.
Step 5 For each position i recorded as skipped in Step 4.5
Step 5.1 If the single character at position i is not a phrase, add it to the phrase book.
Step 5.2 Record a pointer to the phrase at position i.
Output Phrase book and list of pointers into the phrase book.
Figure
3: The algorithm used in Crush
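For concreteness, the following Python sketch (ours, not the authors' code) paraphrases the greedy covering loop of Figure 3. The callbacks occurrences_of and gain stand in for the Crochemore class lists C[k][i] and the gain formula discussed above; the frequency count here ignores the self-overlap exclusion for brevity.

def crush_cover(S, K, occurrences_of, gain):
    """Simplified sketch of Steps 3-5 of Figure 3.

    occurrences_of(i, k) -> start positions of the length-k substring S[i:i+k]
                            (the list rooted at C[k][i]).
    gain(k, f)           -> estimated compression gain of a phrase of length k
                            occurring f usable times.
    Returns the phrase book and a list of (position, phrase) pointers.
    """
    n = len(S)
    covered = [False] * n
    phrase_book, pointers, skipped = [], [], []
    while not all(covered):
        i = covered.index(False)                       # Step 4.1
        j = next((p for p in range(i + 1, n) if covered[p]), n)
        j = min(j, i + K)                              # Step 4.2
        best_k, best_gain = None, 0.0
        for k in range(2, j - i + 1):                  # Step 4.3
            f = sum(1 for c in occurrences_of(i, k)    # self-overlaps ignored
                    if not any(covered[c:c + k]))
            g = gain(k, f)
            if g > best_gain:
                best_k, best_gain = k, g               # Step 4.4
        if best_k is None:                             # Step 4.5
            skipped.append(i)
            covered[i] = True
            continue
        phrase = S[i:i + best_k]
        phrase_book.append(phrase)
        for c in occurrences_of(i, best_k):            # Step 4.6
            if not any(covered[c:c + best_k]):
                pointers.append((c, phrase))
                for p in range(c, c + best_k):
                    covered[p] = True
    for i in skipped:                                  # Step 5
        if S[i] not in phrase_book:
            phrase_book.append(S[i])
        pointers.append((i, S[i]))
    return phrase_book, pointers

# Hypothetical usage with a naive occurrence finder and a toy gain measure:
if __name__ == "__main__":
    S = "abcabcabc"
    occ = lambda i, k: [p for p in range(len(S) - k + 1) if S[p:p + k] == S[i:i + k]]
    toy_gain = lambda k, f: (f - 1) * k - f            # not the paper's formula
    print(crush_cover(S, K=4, occurrences_of=occ, gain=toy_gain))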
File Name | Size (bytes) | bzip2 (bpc) | Offline (bpc) | Crush (bpc) | Offline (secs) | Crush (secs)
Spor EarlyII | 25008 | 2.894 | 2.782 | 2.217 | 6.2 | 1.1
Spor EarlyI | 31039 | 1.882 | 1.835 | 2.222 | 8.5 | 1.9
Helden CGN | 32871 | 2.319 | 2.264 | 2.219 | 9.8 | 2.1
Spor Middle | 54325 | 2.281 | 2.176 | 2.196 | 21.3 | 9.0
Helden All | 112507 | 2.261 | 2.116 | 2.227 | 75.0 | 58.5
Spor All | 222453 | 2.218 | 1.953 | 2.195 | 278.5 | 291.2
All Up 400k | 399615 | 2.249 | 2.136 | 2.275 | 989.7 | 1034.8
Spor All 2x | 444906 | 1.531 | 0.148 | 2.194 | 1133.7 | 1046.7
Table 1: Compression and timing results for the Purdue corpus.
corpora, but the compression results were abysmal,
averaging around four bits per character.
Table 1 also shows that the running time on the Purdue corpus was not as low as we had anticipated. Running times on the Calgary and Canterbury corpora were exceptionally fast, but this is hardly surprising given the poor compression results. The reason for the low running time on general text is that short phrases are initially chosen which cover much of the input string. Subsequent processing only needs to look at the remaining substrings, of which there are few.
Table 2 shows a breakdown of the running time on the Purdue corpus, generated using the gprof software. Profiling of the code indicated that by far the majority of the time was spent in counting the frequency of phrases: Step 4.3.2 in Figure 3. The string processing portion of Crush with Crochemore's algorithm was extremely fast.
This paper has reported on a simple attempt to apply Crochemore's algorithm for finding repeating substrings to phrase selection for offline data compression.
File | Steps 1 and 2 | Step 4.3.2
Spor EarlyII | 2% | 78%
Spor EarlyI | 2% | 82%
Helden CGN | 2% | 85%
Spor Middle | 1% | 91%
Helden All | 0% | 96%
Spor All | 0% | 98%
All Up 400k | 0% | 99%
Table 2: Percentage of running time taken in Crochemore's algorithm (Steps 1 and 2) and frequency counting (Step 4.3.2) by Crush on the Purdue Corpus.
We have followed the model of Apostolico & Lonardi, greedily choosing phrases at each stage that maximise an approximation of the compression gain expected if the phrase is included in the phrase book. Rather than a global approach to phrase selection, however, we trialled a local approach, which has been shown to work well for string covering algorithms [Yang, 2000]. The
>RTS2 RTS2 upstream sequence, from -800 to -1
GTAATGGTTCATTTCTTTAATAGCCTTCCATGACTCTTCTAAGTTGAGTTTATCATCAGG
TAGTAAGGATGCACTTTTCGATGTACTATGAGACTGGTCCGCACTTAAAAGGCCTTTAGA
TTTCGAAGACCACCTCCTCGTACGTGTATTGTAGAAGGGTCTCTAGGTTTATACCTCCAA
TGTCCTGTACTTTGAAAACTGGAAAAACTCCGCTAGTTGAAATTAATATCAAATGGAAAA
GTCAGTATCATCATTCTTTTCTTGACAAGTCCTAAAAAGAGCGAAAACACAGGGTTGTTT
GATTGTAGAAAATCACAGCG
>MEK1 MEK1 upstream sequence, from -800 to -1
ACAGAAAGAAGAAGAGCGGA
>NDJ1 NDJ1 upstream sequence, from -800 to -1
GTACGGCCCATTCTGTGGAGGTGGTACTGAAGCAGGTTGAGGAGAGGCATGATGGGGGTT
Figure 4: Beginning of the file Spor All 2x
string processing portion of our compressor based on Crochemore's algorithm [Crochemore, 1981] is fast, but the subsequent processing stage is limited by a poor data structure.
Several techniques offer hope for improvement to the running time of Crush. The first is to replace the pointer chain data structure, which stores the results of Crochemore's algorithm for later processing, with an alternate structure. Recently Smyth & Tang have shown that all the repeating substring information required by Crush can be stored in O(n) space arrays [Smyth & Tang, 2001]. Their structure plays the same role as a suffix tree, but is generated directly from Crochemore's algorithm, hence stores only repeating substrings. The overhead for its construction is minimal, and so should not increase the running time of the first stage of Crush. Using these arrays will significantly reduce the memory requirements of Crush, and speed up the processing of the repeating substring information. Importantly, it should allow the frequency of phrases to be calculated more efficiently, which is a major bottleneck. As shown in Table 2, typically over 90% of the time is spent calculating phrase frequencies in the current implementation.
Another avenue for resource savings is in an alternate implementation of Crochemore's algorithm. Future versions of our software will make use of a new array-based implementation of Crochemore's algorithm [Baghdadi et al, 2001] that in practice runs much faster than the standard implementation and reduces space requirements from 44n to 12n bytes.
One final avenue worth exploring in this context is a recent algorithm due to [Smyth & Tang, 2001] that calculates all repeating substrings that are nonextendible to both the left and right. Currently Crochemore's algorithm supplies a set of all substrings that are nonextendible only to the right. By running Crochemore's algorithm on the reverse of the string, and collating the results with a run on the string itself, the set of candidate strings for the phrase book should be reduced, while at the same time the utility of the remaining strings should be enhanced. We anticipate a substantial improvement in compression results once this approach is implemented.
A major point of deviation between our approach and that of Apostolico & Lonardi is that at each phrase choice, Crush considers only those phrases starting at the leftmost uncovered position in the input string, a local approach. This is clearly a major contributing factor to our poor compression levels on general text. Examining the phrase choices made by Crush and Offline, for example, on progc of the Calgary Corpus, shows that many "good" phrases selected by Offline are unavailable to Crush because they have been partially covered by an earlier phrase choice. Indeed, Crush only chooses four phrases that are not single characters, hence the poor compression results. This was the reason for the introduction of the 2 f_p l_p term in the gain calculation formula. Biasing the gain towards single characters limits the early selection of short phrases that occur very frequently; the very phrases whose selection prevents the use of longer matches later in the processing. The downside is, of course, that it is extremely difficult for any infrequent, long phrases to be chosen. For a file in the Purdue corpus, H is typically around 2.5 bits per character, and so the bias term is hard to overcome: for a phrase of length 800 to be selected, it must occur at least 6 times, whereas on Spor All 2x, Offline routinely chooses phrases of length 800 with a frequency less than 6.
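The arithmetic behind this threshold can be illustrated under a simplifying assumption. If the gain of a phrase of length l occurring f times is read as f l H, minus the cost l H of storing the phrase once, minus the bias term 2 f_p l_p, then H = 2.5 and l = 800 give a positive gain only for f >= 6, matching the figure quoted above. The exact gain formula is not reproduced here; the expression below is our reading, used purely for illustration.

# Illustration only: a simplified gain of the form
#   gain(l, f) = f*l*H - l*H - 2*f*l
# (savings from f occurrences of a length-l phrase, minus the phrase book
# cost and the 2*f_p*l_p bias term) reproduces the quoted threshold.
H, l = 2.5, 800

def simplified_gain(f):
    return f * l * H - l * H - 2 * f * l

for f in range(2, 9):
    print(f, simplified_gain(f) > 0)   # first True at f = 6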
It is interesting to note that our local approach, however, still gave good compression on the majority of the Purdue corpus. The reason that Crush performs well on the Purdue corpus, rather than on other text, is that in general the high gain phrases in the Purdue corpus occur towards the start of the data files. This allows their early selection by Crush, unlike on general text, where high gain substrings are never considered as possible phrase candidates because parts of those phrases have been covered by an earlier, local choice. Implementing the global approach using the current pointer chain data structure in Crush would be prohibitively expensive. Once the tree data structure of [Smyth & Tang, 2001] is incorporated, however, the global approach may become a feasible option. The compression results on the Purdue corpus indicate that a global approach would improve the compression performance on general data.
Another technique that we intend to incorporate in Crush is a more accurate gain measure. As the generation of repeating substring information is fast, with a more appropriate data structure to store this information we can afford to spend more time estimating the gain of each phrase. An approximation based on the self-entropy of both pointers and the phrase book representation should lead to improved compression levels on a wide range of data. Also, an iterative approach that makes multiple passes over the data to improve gain estimates is worth investigating.
A further avenue for exploration is allowing phrases to overlap. If phrases are allowed to overlap then the data section of the compressed file can no longer be a simple sequential list of pointers to phrases in the phrase book. Each pointer must also be paired with an indication of how much the phrase it represents overlaps the previous text. If Huffman coding or similar is used for storing this information, so that fast decoding is guaranteed, at least one bit per pointer must be added to the final file. Therefore, to maintain the compression levels of the non-overlapping implementation, the average size of the pointers must reduce by one bit. A reduction in pointer size would occur if a suitable change in the frequency distribution of pointers occurred when overlap was allowed, or there was a large reduction in the number of pointers. Our preliminary experiments allowing phrase overlap on the Purdue corpus seem to indicate that allowing overlap is not beneficial.
Acknowledgments
Thanks to Lu Yang [Yang, 2000] for making available code for the k-cover algorithm and to F. Franek for making available code for a very efficient implementation of Crochemore's algorithm. Thanks also to the anonymous referees for their helpful comments.
--R
A fast space-efficient approach to substring refinement.
Data compression using long common strings.
The Calgary Corpus
The Canterbury Corpus
An optimal algorithm for computing the repetitions in a word.
Crochemore's algorithm revisited
Compression by induction of hierarchical grammars.
The Purdue Corpus
Experiments in text file compression.
A mathematical theory of communication.
The Mathematical Theory of Communication.
Computing all repeats using O(n) space and O(n log n) time.
On-line construction of suffix trees.
Computing a k-cover of a string
A universal algorithm for sequential data compression.
--TR
Data compression via textual substitution
Experiments in text file compression
General-purpose compression for efficient retrieval
Data Compression Using Long Common Strings
--CTR
Frantiek Frank , Jan Holub , William F. Smyth , Xiangdong Xiao, Computing quasi suffix arrays, Journal of Automata, Languages and Combinatorics, v.8 n.4, p.593-606, April | strings;textual substitution;repeating substrings;offline data compression |
563924 | UML and XML schema. | XML is rapidly becoming the standard method for sending information across the Internet. XML Schema, since its elevation to W3C Recommendation on the 2nd May 2001, is fast becoming the preferred means of describing structured XML data. However, until recently, there has been no effective means of graphically designing XML Schemas without exposing designers to low-level implementation issues. Bird, Goodchild and Halpin (2000) proposed a method to address this shortfall using the 'Object Role Modelling' conceptual language to generate XML Schemas.This paper seeks to build on this approach by defining a mapping between the Unified Modeling Language (UML) class diagrams and XML Schema using the traditional three level database design approach (ie. using conceptual, logical and physical design levels). In our approach, the conceptual level is represented using standard UML class notation, annotated with a few additional conceptual constraints, the logical level is represented in UML, using a set of UML stereotypes, and the XML Schema itself represents the physical level. The goal of this three level design methodology is to allow conceptual level UML class models to be automatically mapped into the logical level, while minimizing redundancy and maximizing connectivity. | Figure
1: Three level design approach
As mentioned, the Universe of Discourse (UoD) being
modeled in XML schema is a university student-rating
system (as shown in the following two output reports).
Subject Title Year NrEnrolled Lecturer
CS100 Intro to Computer 1982 200 P.L.
Science Cook
Engineering
CS100 Intro to Computer
Green
1983 250 A.B.
Science White
Table
1: First Subject table for University UoD
Subject Year Rating NrStudents %
Table
2: Second Subject table for University UoD
Given this example, our goal is to produce an XML
Schema that satisfies all major conceptual integrity
constraints that exist, while at the same time has minimal
redundancy and maximum connectivity.
2.1 Conceptual Level
The first step, in our proposed approach, is to model the
domain using a conceptual level UML class diagram. A
conceptual diagram is used to describe the UoD in terms
of objects and relationships from the real world. Below,
we show a conceptual level UML class diagram, which
represents the example, university student-rating UoD.
University
Student-Rating
System
{CONCEPTUAL
Rating
SubjectYearRating
{Rating::code values
range from 1 to 7}
double
SubjectYear
{Subject::code pattern
Subject
1.* 0.*
Year
+subject +year offered
Figure
2: Example conceptual UML class diagram
Figure
uses standard UML class diagram notation - for
example, classes are shown as rectangles, attributes are
listed within the associated class rectangles, and
relationships are shown as lines linking two or more
classes. Attribute and relationship multiplicity constraints
are also represented using standard UML notation. It
should be noted, however, that a number of non-standard
annotations are also required to represent some common
conceptual constraints, such as the primary identification
of a class (attributes suffixed with {P}). For a more
detailed discussion on the use of UML for conceptual
data modelling please refer to Halpin (1998).
2.2 Logical Level
Once a conceptual level model has been designed, and
validated with the domain expert, it can be used to
automatically generate a logical level diagram. A logical
level diagram describes the physical data structures in an
abstract and often graphical way. In our approach, the
logical level model is a direct (ie. one-to-one)
representation of the XML Schema data structures. To
this end, we represent the logical level as a UML
diagram, which uses the stereotypes defined in an XML
Schema profile.
In figure 3, we show a logical level UML class diagram,
which has been generated from the conceptual level
diagram from figure 2. The logical level diagram shown
uses stereotypes (such as element, complexType,
simpleType, elt and attr) that we have defined
within a UML profile for XML Schema (described in
more detail in section 3). This allows the logical level
UML diagram to directly capture the components of the
physical level XML Schema.
schema
uni.xsd
University Student-Rating System
element
report
has type
base
complexType
ReportType
complexType
complexType
SubjectYearType
complexType
SubjectYearRatingType
xsd:string
complexType
SubjectType
xsd:string
complexType
YearType
simpleType
{values range from 1 to 7}
simpleType
SubjectCodeType
restricts restricts
XSDSimpleType
xsd:integer
XSDSimpleType
xsd:string
XSDSimpleType
xsd:double
Figure
3: Example logical level UML class diagram
It is important to note that this logical diagram can be
automatically generated from the conceptual model using
the approach described in section 4. This removes the
need for the data designer to be concerned with
implementation issues. However, because there are many
ways to map a conceptual level model into a logical level
model, this transformation should be configurable with
design options. Similarly, the data modeller may wish to
directly 'tweak' the logical design to (for example)
introduce controlled redundancy or make other logical-
level design decisions.
2.3 Physical Level
In appendix A, we show a physical level XML Schema,
which corresponds directly to the logical level diagram
shown in figure 3. A physical level model defines the
data structures using the implementation language - in
this case XML Schema. The physical schema shown in
Appendix
A uses the standard textual language defined
by the World Wide Web Consortium (W3C) in its March
2001 XML Schema Recommendation (W3C 2001).
2.4 XML Instance
To help the reader understand the logical and physical
models in sections 2.2 and 2.3, we show below an
example XML instance document. This XML instance,
which incorporates some of the information from the
output reports shown earlier, correctly satisfies the XML
Schema definitions presented in figure 3 and appendix A.
<subject code="CS100">
Introduction to Computer Science
<year year="1982">
P.L.Cook
200
<ratings code="7">
5.00
<ratings code="6">
5.00
<ratings code="5">
<ratings code="4">
<ratings code=3>
<ratings code="2">
2.50
3 XML Schema Profile for UML
In this section, we outline the XML Schema profile,
which we have developed as the basis for logical level
UML class diagrams. It is intended that every concept in
XML Schema has a corresponding representation in the
UML profile (and vice versa). As a result, there is a one-
to-one relationship between the logical and physical
XML Schema representations.
The following set of diagrams graphically describe the
XML Schema UML profile developed by the authors,
using standard UML class diagrams. Figure 4 (section
3.1) shows the relationships between XML Schema
elements and types; figure 5 (section 3.2) shows the
relationships between XML Schema schemas and
namespaces, and figures 6, 7 and 8 (section 3.3) show
how schemas, content models and types are built from
various XML Schema constructs.
3.1 Element-Type Metamodel
The metamodel in figure 4 shows the relationships
between XML Schema concepts such as 'element',
'complexType', `simpleType' and 'XSD simpleTypes'
(which represents those primitive types found in the
XML Schema namespace). These XML Schema concepts
are represented as stereotyped classes, allowing them to
be used in logical level UML class diagrams to represent
the corresponding XML Schema concept. Two of the
relationships between these concepts, namely restricts,
and extends, are represented as stereotyped
specialisations. This was done to allow for instance
substitutability between related user-defined types. The
relationship has type is represented as a stereotyped
dependency between an 'element' and either a
'simpleType' or `complexType'. A dependency is a
special type of association in UML, in which the source
element is dependent on the target element.
stereotype
element
Note:
All association multiplicities are
stereotype
has type
stereotype
complexType
stereotype
extends
stereotype
simpleType
stereotype
restricts
stereotype
XSDSimpleType
Constraints:
Natural Language
A simpleType cannot restrict a
complexType.
A complexType cannot restrict
a simpleType or XSDSimpleType
Figure
4: Element-Type metamodel for XML Schema
version
String
location
{acyclic}1.1
stereotype
schema
stereotype
include
stereotype
redefine
with a 'schema'. This indicates that a 'schema' can 'import' schemas from other 'namespaces'.
3.3 Schemas, Content Models and Types
Figure
6 introduces a number of new stereotyped classes
and stereotyped attributes, each of which represents an
XML Schema construct used to create the structure of a
model (such as 'choice', `group', 'seq' etc). This diagram
shows how XML Schema content models and types are
built using UML constructs, according to the XML
specification (W3C 2001).
stereotype
stereotype
stereotype
attr
stereotype
all
stereotype
group
stereotype
choice
stereotype
complexType
stereotype
seq
stereotype
attrGroup
stereotype
elt
stereotype
stereotype
any
Figure 6: Building content models and complexTypes
complexType
PurchaseOrderType
seq +*1
group
shipAndBill
seq
stereotype
import
stereotype
namespace
choice +*2
+comment
choice
group +shipAndBill
0.* 1.1
Figure 5: XML Schema-Namespace metamodel
3.2 Schema-Namespace Metamodel
The metamodel in figure 5 shows the relationship
between schemas and namespaces in XML Schema. This
model introduces the concept of a 'schema' as a
stereotyped package. A schema can 'include' or
'redefine' another schema. To indicate this, there are two
corresponding 'ring relationships' attached to `schema'.
These relationships are acyclic, because a schema cannot include or redefine either itself, or another schema which (directly or indirectly) includes or redefines it. Another important
stereotyped class in figure 5 is the 'namespace' class. The
'namespace' class is associated with a stereotyped
dependency called 'import', which in turn is associated
Figure 7: An example XML schema at the logical level
To illustrate how these XML Schema classes are used on
a logical level UML diagram, an example is shown in
figure 7. This logical level diagram represents a fragment
of the 'PurchaseOrder' XML Schema code presented in
the XML Schema Part 0: Primer (W3C 2001). Note that
the definitions of the types 'Items' and `USAddress', and
the element 'comment' are omitted from this example for
the sake of brevity.
The corresponding XML schema code fragment for
figure 7 is:
<xsd:complexType name="PurchaseOrderType">
  <xsd:sequence>
    <xsd:choice>
      <xsd:group ref="shipAndBill"/>
      <xsd:element name="singleUSAddress" type="USAddress"/>
    </xsd:choice>
    <xsd:element ref="comment" minOccurs="0"/>
    <xsd:element name="items" type="Items"/>
  </xsd:sequence>
  <xsd:attribute name="orderDate" type="xsd:date"/>
</xsd:complexType>

<xsd:group name="shipAndBill">
  <xsd:sequence>
    <xsd:element name="shipTo" type="USAddress"/>
    <xsd:element name="billTo" type="USAddress"/>
  </xsd:sequence>
</xsd:group>
The logical level diagram in figure 7 highlights several
important features of the XML Schema profile, which are
not obvious from the metamodels shown. Firstly, the
concept of nesting XML schema content models (e.g. a
'choice' nested inside a `sequence') is represented at the
logical level by introducing separate stereotyped classes,
for each nesting level, linked by a 'composition'
association. In figure 7, this feature was used to link the
'choice' class to the `seq' class with a composition
association. The direction of the composition association
indicates the direction of the nesting, and the ordering of
the attributes within each class indicates the ordering of
the content models.
Note that the default content model for a complexType is a 'sequence'. Therefore, when a complexType is mapped from the logical to the physical level, the elt attributes within the 'complexType' class are automatically mapped to a sequence of elements within the physical complexType.
Another important feature of the XML Schema profile is that it needs some way of representing anonymous types and nested content models. In our XML Schema profile, these are represented in the same way that nesting has been described, with an additional naming scheme which ensures uniqueness and the preservation of order. In particular, anonymous types and content models are named by appending a sequential number (indicating order) to an asterisk (indicating an anonymous reference).
stereotype
schema
stereotype
element
stereotype
attrGroup
stereotype
complexType
stereotype
simpleType
stereotype
group
Figure
8: Building a schema
Figure 8 shows how the schema constructs used to create a content model in XML Schema (from figure 6) are related back to a 'schema' package.
4 Conceptual to Logical Level Mapping
4.1 Goals
As XML Schemas are hierarchical in nature, generating a logical level model from a conceptual level model requires us to choose one or more conceptual classes to start the XML Schema tree hierarchy. One option would be to select a single class as the XML root node, and progressively nest each related class as child elements of the root node. An example of an XML instance generated by choosing the 'Rating' class as the root of the XML hierarchy is:
<?xml version=1.0 encoding=UTF-8?>
<rating code=7>
<subject code=cs100> <year year=1982>
200
P.L.Cook
2.50
<rating code=6>
<subject code=cs100> <year year=1982>
However, as this example illustrates, this approach can
lead to redundant data at the instance level. In this
example, the information relating to a subject is repeated
for each 'rating' that has been given in that subject.
Another approach would be to create a relatively flat
schema, in which every class is mapped to a separate
element directly under the root node. The attributes and
associations of each class would be mapped to sub-elements
of these top-level elements. The example below illustrates this:
<?xml version=1.0 encoding=UTF-8?>
<subject code=cs100>
1983
<subject code=CS121>
<year code=1982>
<subject code=CS100/>
<subject code=CS121/>
<year code=1983>
<subject code=CS100/>
However, as this example illustrates, this approach can
lead to disconnected and difficult to read XML instances,
which also have some degree of redundancy.
In contrast to these two approaches, the approach
presented in this paper aims to minimize redundancy in
the XML-instances, while maximising the connectivity of
the XML data structures as much as possible. The
approach presented in this paper for mapping UML
conceptual models into XML Schema is directly based on
the one defined by Bird, Goodchild and Halpin (2000), in
which Object Role Modeling (ORM) diagrams are
mapped into XML Schema. The algorithm described by
Bird, Goodchild and Halpin (2000) is highly suited to our
goals, as we have reason to believe that it generates an
XML Schema that is in Nested Normal Form (ie. nested
XML Schema with no data redundancy).
A number of significant modifications to the algorithm,
however, have had to be made to cater for the inherent
differences between ORM and UML. In particular,
because ORM does not distinguish between classes and
attributes (everything in ORM is either an 'object type' or
a 'relationship type'), the algorithm described by Bird,
Goodchild and Halpin (2000) uses the notion of 'major
object types' to determine the first nesting operation. In
contrast, however, 'classes' in UML are roughly
equivalent to 'major object types' in ORM, and therefore
the process used to automatically determine the default
'major object types' is no longer necessary.
A second major point of difference is that the concept of
'anchors', introduced by Bird, Goodchild and Halpin
(2000) to identify the most conceptually important
player(s) in a relationship type, and to consequently
determine the direction of nesting in some cases, are not
required in our approach. Instead, we use a closely
analogous concept in UML called navigation. Defining
navigation on an association indicates that given an
object at one end, you can easily and directly get to
objects at the other end, usually because the source object
stores some references to objects of the target (Booch,
Rumbaugh and Jacobson 2000). For this reason,
navigation on a UML association tends to point from the
more important player in the association towards the less
important player (which is the opposite direction to that
of 'anchors').
In the remainder of this section, we will describe our
general approach to mapping conceptual UML models
into logical level XML Schemas.
4.2 Methodology
As discussed earlier, the goal of our mapping approach is
to produce an XML Schema, with minimal redundancy
and maximum connectivity in the corresponding instance
documents. The algorithm, that we have designed to
achieve this goal, involves four major steps. Once the
logical level class diagram has been generated from the
conceptual level one, creating the physical XML Schema
is a simple process, due to the direct, one-to-one mapping
between the logical and physical levels.
4.2.1 Step 1: Create Type Definitions
The first step in the methodology is to create type definitions for each attribute and class in the conceptual diagram. The following two rules are used to map attributes to the appropriate logical level types:
1. Attributes, which have additional constraints
applied to their primitive types (such as integer
and string), map into simpleTypes, which
restrict the associated primitive type. For
example, in figure 9 'SubjectCodeType' restricts
'string' by adding a pattern constraint.
2. Primitive types, used by an attribute, are mapped
into XSD simpleTypes from the XML Schema
namespace. For example, the primitive type
'string' maps to xsd:string.
Based on the example conceptual model from figure 2,
the following types would be created in this step:
simpleType
simpleType
SubjectCodeType
{values range from 1 to 7}
restricts
restricts
XSDSimpleType
xsd:double
XSDSimpleType
xsd:integer
XSDSimpleType
xsd:string
Figure 9: Types created in Step 1
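As an illustration of the two attribute-mapping rules of Step 1, the following Python sketch emits a restricting simpleType for a constrained attribute and falls back to the XSD primitive otherwise. It is not part of the prototype tool mentioned later; the ConceptualAttribute structure and the pattern shown for Subject::code are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ConceptualAttribute:
    name: str
    primitive: str                 # e.g. "string", "integer", "double"
    constraint: Optional[str] = None

XSD_PRIMITIVES = {"string": "xsd:string",
                  "integer": "xsd:integer",
                  "double": "xsd:double"}

def map_attribute_type(owner: str, attr: ConceptualAttribute) -> str:
    """Rule 1: constrained attributes get their own restricting simpleType.
       Rule 2: unconstrained attributes use the XSD primitive directly."""
    base = XSD_PRIMITIVES[attr.primitive]
    if attr.constraint is None:
        return base
    type_name = f"{owner}{attr.name.capitalize()}Type"
    print(f'<xsd:simpleType name="{type_name}">')
    print(f'  <xsd:restriction base="{base}">')
    print(f'    <!-- constraint: {attr.constraint} -->')
    print(f'  </xsd:restriction>')
    print(f'</xsd:simpleType>')
    return type_name

# Example corresponding to figure 2 (the pattern text is hypothetical):
map_attribute_type("Subject", ConceptualAttribute("code", "string", "pattern for subject codes"))
map_attribute_type("Rating", ConceptualAttribute("code", "integer", "values range from 1 to 7"))
map_attribute_type("Subject", ConceptualAttribute("title", "string"))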
Next, a logical level complex type definition is created
for each class at the conceptual level. Each conceptual
class is mapped into a complexType, with child elements
representing each of its non-primary attributes. Primary
attributes (which are based on simple types) are
included in the complexType definition as XML Schema
attributes (i.e. with an attr stereotype). Based on the
example from figure 2, the following complexTypes
would be created in this step:
complexType
complexType
SubjectYearRatingType
attr +code[1.1] RatingCodeType{P}
complexType
SubjectYearType
xsd:string
complexType
SubjectType
attr +code[1.1] SubjectCodeType{P}
elt +title[1.1] :xsd:string
complexType
YearType
attr +year[1.1] xsd:integer{P}
Figure 10: ComplexTypes generated in Step 1
Note that in future steps, some complexTypes may be
removed and nestings of child attributes may be
performed (including primary key attributes).
4.2.2 Step 2: Determine Class Groupings
The next step is used to determine how best to group and
nest the conceptual classes, based on the associations
between them. The approach taken is based directly on
the approach from Bird, Goodchild and Halpin (2000), in
which a combination of 'mandatory-functional'
constraints and 'anchors' are used to determine the
appropriate nesting choices. In contrast to this, a similar
approach based on UML uses 'multiplicity' constraints
and 'navigation' to determine an appropriate nesting for
the schema.
An approach to automatically determine the default
navigation directions is summarised below:
1. If no navigation is defined on an association, then use the multiplicity constraints to determine the navigation direction.
a. If exactly one association end has a minimum multiplicity of 1 (i.e. (1..1) or (1..*)), then define the navigation in the direction of the opposite association end, or
b. If one association end has a smaller maximum multiplicity than the other (e.g. '0..7' is smaller than '0..*'), then navigate towards the end with the smaller maximum multiplicity, or
c. If exactly one class has only one attribute, then navigate towards it.
The nesting is then determined as follows:
1. If exactly one association end has a multiplicity of '(1,1)' (i.e. it is mandatory and functional), then nest the class at the other association end towards it.
2. If both association ends have a multiplicity of (1,1), then nest the classes in the opposite direction to the direction of navigation, i.e. nest the target of the navigation towards the source of the navigation (see the sketch below).
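The sketch below is our reading of the two nesting rules, not the authors' tool; it shows how a nesting direction could be derived from the association end multiplicities, with figure 11 as the worked example.

# An association end is described by its multiplicity (min, max), with
# max = None meaning '*'; 'A' and 'B' name the classes at the two ends.
def nest_direction(end_a, end_b, navigation=None):
    """Return which class is nested inside which, or None if undecided.

    Rule 1: if exactly one end is (1,1), nest the class at the other end
            towards it.
    Rule 2: if both ends are (1,1), nest the target of the navigation
            towards its source.
    """
    a_11, b_11 = end_a == (1, 1), end_b == (1, 1)
    if a_11 and not b_11:
        return 'B inside A'
    if b_11 and not a_11:
        return 'A inside B'
    if a_11 and b_11 and navigation in ('A->B', 'B->A'):
        return 'B inside A' if navigation == 'A->B' else 'A inside B'
    return None   # e.g. candidates for merging into an association class

# Figure 11: the Employee end (A) is (1,1), the Subject end (B) is (0..*),
# so Subject is nested inside Employee.
print(nest_direction(end_a=(1, 1), end_b=(0, None)))   # -> 'B inside A'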
The reasoning behind this mandatory-functional rule
was first discussed by Bird, Goodchild and Halpin
(2000), and can best be explained using an example.
Employee
headLecturer
Subject
1.1 0.*
Figure 11: A mandatory, functional relationship
The UML fragment in figure 11 consists of an employee
being the head lecturer of many subjects, and each
subject having exactly one head lecturer. If the
'headLecturer' was nested towards `Subject', then the
corresponding logical level representation would be:
complexType
SubjectType
xsd:string
complexType
EmployeeType
xsd:string
Figure 12: Logical level mapping of figure 11
For example, an instance of the Subject type might look
like:
<subject code=INFS4201>
Advanced Distributed Databases
<headLecturer nr=123456>
Joe the Lecturer
<subject code=COMP4301>
Distributed Computing
<headLecturer nr=123456>
Joe the Lecturer
There are however, a number of problems with this
nesting approach. The first issue is redundancy created by
repeating employee details with each subject occurrence.
This happens because an employee may be the head
lecturer of more than one subject (according to the UML
model in figure 11). The redundancy is clearly evident in
the corresponding XML instance where 'Joe the
Lecturer' has his details repeated for both the
'INFS4201' and `COMP4301' subjects.
The other issue arising from the above schema is that not
all employees are assigned as head lecturer of a subject
(as is indicated on the conceptual level UML model in
figure 11). Therefore, a separate global element for
employee has to be added to the schema for employees
who aren't in charge of any subjects. This is undesirable
because it reduces the connectivity of the schema.
The solution to both of these problems is to nest towards
the mandatory-functional end of the association. In the
example, this would mean nesting the Subject class inside
the Employee class, therefore producing the following
logical level representation and XML Schema fragment:
complexType
EmployeeType
complexType
SubjectType
xsd:string
xsd:string
Figure 13: Nesting SubjectType within EmployeeType
This type of nesting is preferable because:
a) The minimum frequency of 1 at the Employee
end of the association requires that each Subject
be headed by at least one Employee, and
b) The maximum frequency of 1 at the Employee
end of the association requires that each subject
be headed by at most one Employee.
The minimum frequency of 1 (mandatory constraint) is very important in this grouping example because if this constraint didn't apply to a subject (i.e. a subject doesn't need a head lecturer), subjects without a head lecturer allocated would not be represented. Similarly, the maximum frequency of 1 (uniqueness constraint) is vital because if a subject could be headed by more than one lecturer, a subject would be repeated for each corresponding employee, thus introducing redundancy into the schema. Therefore, when a class A is associated with exactly 1 instance of class B, class A can be nested inside class B.
Also note that since each association class has an implicit
mandatory, functional relationship with each of the
players of the association (i.e. each association class is
related to exactly one object at each end of the
association), association classes are always nested
towards their association (based on nesting rule 1). These associations are then nested in the opposite direction of the navigation (based on rule 2). An example of nesting
the association classes from figure 2 is shown in figure
14.
In this example, the 'SubjectYearRating' association
class is nested, together with the 'Rating' class, towards
the 'SubjectYear' class. Similarly, the 'SubjectYear' class, together with the 'Year' class, is nested towards the 'Subject' class. Note that the thick-headed arrows in figure 14 represent the direction of nesting, while the thin-headed arrows represent the navigation direction. The dotted line circles indicate classes grouped for nesting purposes.
University Student-Rating System
{Rating::code values range from 1 to 7}
Rating
+ratings 1.7
SubjectYearRating
1.*
SubjectYear
Subject
1.* 0.*
+subject
Year
Figure 14: Nesting the conceptual classes.
complexType
complexType
SubjectYearType
xsd:string
complexType
SubjectYearRatingType
Figure 15: Nesting SubjectYearRatingType within SubjectYearType
It is important to note that when the 'RatingType' complexType is nested, the multiplicity constraint of the resulting element is set to '1..7'. This is because the multiplicity constraint on the association end attached to 'Rating' is '1..7'.
complexType
SubjectYearType
xsd:string
If navigation cannot be determined between two classes
(say classes A and B), or both ends of the association are
navigable, then the following option exists:
1. If an association class exists between class A
and class B, merge class A and B into the
association class.
For example, if we take away the 'title' attribute from Subject, and change the multiplicity of Subject's association end to '0..*', navigation cannot be determined; figure 2, as shown previously, illustrates this case.
In this case, navigation cannot be established between
Subject and Year because both have optional
participation and unbounded maximum frequencies.
Also, both classes have only one attribute making rule 1c
inapplicable. The solution is to merge Subject and Year
with the association class SubjectYear. This merge is
valid because for every instance of SubjectYear, there is
exactly one instance of the Subject and Year classes.
A final point on the nesting topic is the representation of
conceptual level subtypes on the logical level. In our
approach, subtype relationships will be carried down to
the logical level. Also, a class acting as a supertype for a
class or set of classes must not be eliminated from the
mapping process.
4.2.3 Step 3: Build the Complex Type Nestings
After the nesting directions have been identified, the next
step is to perform the complex type nesting. In the
example case study, the 'SubjectYearRatingType' class
is nested within the 'RatingType' class. The
'RatingType' class is then nested as an element within
the 'SubjectYearType' class. The result of this operation
is shown below:
complexType
SubjectType
xsd:string
complexType
YearType
Figure 16: Final nesting of SubjectYearType
In figure 16, the result of the final operation required in the case study is shown, in which the 'SubjectYearType' class is nested within the 'YearType' class, and the 'YearType' class is nested in turn within the 'SubjectType' class.
Note that we have chosen to represent those primary key attributes which have a simpleType and a maximum multiplicity of 1 as 'attributes' of the parent complexType. When nesting, primary keys remain an attribute of their respective class after the nesting takes place. The only exception to this rule is when attributes are removed from classes being eliminated from the mapping process. In this case, the attribute will become a primary key of its new parent class. This choice was made to simplify the associated XML instance documents; however, ideally this should be a configurable option.
4.2.4 Step 4: Create a Root Element
Because each XML document must have a root element,
a root node is introduced in this step, which represents
the conceptual model as a whole. For example, in figure
17, we show that a root element called 'report' was
introduced when mapping figure 2 to a logical model.
This root element is then associated with a complexType
(in this case called 'ReportType'), which represents the
set of complexType groupings generated in step 3. In our
example, the only complexType grouping is called
'subject'.
element
report
has type
base
complexType
ReportType
Figure 17: The root node of the schema
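The following small Python sketch illustrates the Step 4 wrapper: a root element typed by a 'ReportType' complexType that sequences the top-level groupings produced in Step 3. The minOccurs/maxOccurs values and the emit_root helper are our assumptions, not output of the prototype.

# Illustrative only: emitting the Step 4 wrapper, assuming `groupings` holds
# the names of the top-level complexType groupings produced by Step 3.
def emit_root(schema_name, groupings):
    lines = [f'<xsd:element name="{schema_name}" type="ReportType"/>',
             '<xsd:complexType name="ReportType">',
             '  <xsd:sequence>']
    for g in groupings:
        lines.append(f'    <xsd:element name="{g.lower()}" type="{g}Type" '
                     f'minOccurs="0" maxOccurs="unbounded"/>')
    lines += ['  </xsd:sequence>', '</xsd:complexType>']
    return "\n".join(lines)

print(emit_root("report", ["Subject"]))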
4.3 Options and Limitations
4.3.1 Options
A number of options are available when mapping from
the conceptual level to the logical level. For example, an
attribute from the conceptual level can be represented
either as an XML schema attribute or as an XML schema
element at the logical level. By default, we have decided
to map primary key attributes to XML schema attributes,
and non-primary key attributes to XML schema elements.
This decision is suitable in the majority of cases, as
primary key attributes are usually based on simple types,
and have multiplicities of 'exactly one' (as are XML
schema attributes).
Other options that may need to be made available to the
data modeller, include the introduction of controlled
redundancy at the logical level, and the decreasing of the
connectivity of the resulting schema.
4.3.2 Limitations
Certain limitations were evident when modelling XML
schemas in UML. These are summarised below:
1. UML is aimed at software design rather than
data modelling, so some new notation for
representing conceptual constraints was required.
2. Mixed Content in XML schema is difficult to
express in UML without introducing additional
non-standard notation to the conceptual level.
3. The UML constraint language, OCL, is
syntactically different to the XML schema
regular expression language.
4. Unlike UML, XML Schema does not support
multiple inheritance. Therefore a conceptual
level UML class diagram should not contain
classes with more than one supertype.
5. Some constraints represented on the conceptual
level such as subtype constraints and acyclic
constraints cannot be expressed in XML
Schema.
6. In situations where navigation is present in both
directions, the mapping algorithm must be able
to determine the 'stronger' of the two
navigations, to determine the most appropriate
direction for nesting.
5 Conclusion
This paper proposes a method for designing XML
Schemas using the Unified Modeling Language (UML).
The UML was chosen primarily because its use is
widespread, and growing. Secondly, the UML is
extensible, so the new notation being written is totally
compatible with existing UML tools.
Presently, there exist a number of tree-based graphical tools for developing schemas. These tools are perfect for small and intuitive schemas, but the more complex the data is, the harder it is for the designer to produce a
correct schema. The UML makes it easier to visualize the
model, and to ensure that integrity constraints are
defined.
The three-level Information Architecture is the
fundamental methodology followed by many data
modellers. This approach allows the data modeller to
begin by focusing on conceptual domain modelling issues
rather than implementation issues.
Because each conceptual level model has many possible
logical level models, there is a need for a mapping
algorithm, which uses sensible data design techniques to
translate from one to the other. However, because of the
different design choices, which can be made in this
mapping process, it would be preferable to allow the
designer to choose between common design options.
Because there is a one-to-one relationship between the
logical and physical levels, however, there is then only
one possible mapping to the physical XML Schema itself.
The authors are currently planning to build a prototype
tool, which uses the algorithm described in this paper to
generate a logical level representation, based on a
conceptual level UML class diagram. A prototype tool
has been built however, that can generate an XML
Schema from a corresponding logical level class diagram
(expressed in XMI). In addition to this, we also intend to explore the generation of an XML Schema that is in nested normal form, and to look at reverse engineering a conceptual level diagram from the physical level.
6
--R
UML for XML Schema Mapping Specification.
Object Role Modeling and XML Schema.
UML Data Models From An ORM Perspective.
W3C XML Working Group
The Unified Modeling Language User Guide.
"year"
--TR
Conceptual schema and relational database design (2nd ed.)
The Unified Modeling Language user guide
--CTR
Russel Bruhn, Designing XML and XML Schema for bioinformatics using UML, Journal of Computing Sciences in Colleges, v.21 n.5, p.13-20, May 2006
Carlo Combi , Barbara Oliboni, Conceptual modeling of XML data, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Philip J. Burton , Russel E. Bruhn, Using UML to facilitate the teaching of object-oriented systems analysis and design, Journal of Computing Sciences in Colleges, v.19 n.3, p.278-290, January 2004
A. Boukottaya , C. Vanoirbeek , F. Paganelli , O. Abou Khaled, Automating XML documents transformations: a conceptual modelling based approach, Proceedings of the first Asian-Pacific conference on Conceptual modelling, p.81-90, January 23, 2004, Dunedin, New Zealand
Esperanza Marcos , Csar J. Acua , Beln Vela , Jos M. Cavero , Juan A. Hernndez, A database for medical image management, Computer Methods and Programs in Biomedicine, v.86 n.3, p.255-269, June, 2007 | UML;XML;XML Schema;DTD |
563925 | Compacting discriminator information for spatial trees. | Cache-conscious behaviour of data structures becomes more important as memory sizes increase and whole databases fit into main memory. For spatial data, R-trees, originally designed for disk-based data, can be adopted for in-memory applications. In this paper, we will investigate how the small amount of space in an in-memory R-tree node can be used better to make R-trees more cache-conscious. We observe that many entries share sides with their parents, and introduce the partial R-tree which only stores information that is not given by the parent node. Our experiments showed that the partial R-tree shows up to per cent better performance than the traditional R-tree. We also investigated if we could improve the search performance by storing more descriptive information instead of the standard minimum bounding box without decreasing the fanout of the R-tree. The partial static O-tree is based on the O-tree, but stores only the most important part of the information of an O-tree box. Experiments showed that this approach reduces the search time for line data by up to 60 per cent. | Introduction
Latest surveys have shown that the availability of cheap memory will lead to computer systems with main memory sizes in the order of terabytes over the next 10 years [Bernstein et al., 1998]. Many databases will then fit entirely in main memory. For database technology, this means that the traditional bottleneck of memory-disk latency will be replaced by the CPU-memory latency as the crucial factor for database performance. It is therefore increasingly important for database index structures and algorithms to become sensitive to the cache behaviour.
Recent research on index structures includes several papers addressing cache-sensitive index structures. In [Rao and Ross, 2000], the authors propose a pointer elimination technique for B+-trees. Nodes of the CSB+-tree are stored contiguously and therefore only the pointer to the first child node needs to be stored in the parent. This pointer elimination technique was extended to spatial data structures in the paper [Ross et al., 2001], which introduced cost-based unbalanced R-trees (CUR-trees). The cost factors of the cache behaviour of a given architecture can be modeled into a cost function, and a CUR-tree is built which is optimized with respect to the cost function and a given query model. Prefetch instructions in combination with multiple-size nodes can further assist in achieving better cache behaviour [Chen et al., 2001]. Other methods improve the space utilization by compressing the entries in R-tree nodes to get wider trees [Kim et al., 2001]: the values describing the discriminator or minimum bounding box (MBB) in R-trees are represented relative to their parent, and the number of bits per value is reduced by mapping the values to a coarser representation. The problem of cache-sensitivity of data structures with complex keys is addressed in the paper [Bohannon et al., 2001], which uses partial keys of fixed size.
In this paper, we will investigate how we can make
better use of in-memory spatial tree nodes by eliminating
unnecessary information in the discriminator
and storing more descriptive information than the
minimum bounding boxes traditionally used to approximate
the spatial objects.
In-memory R-trees typically have very small node
sizes, between 3 and 7 entries, because the natural
node size is determined by the size of a cacheline. We
observed that for such R-trees a substantial part of
the information is stored multiple times at different
levels in the tree. Very often, entries in the nodes
share at least one of their sides with the enclosing
parent bounding box.
We propose the use of partial information in R-tree
nodes and show how this can improve space utilization
and performance of R-trees. We introduce the
concept of partial R-trees where information is handed
down from parent to children to eliminate multiple
storing of values. Only values which add information
about the entry which is not already given in the parent
are stored in the node.
We also investigate how more descriptive information
than the standard minimum bounding box can be
stored while not using up additional space compared
to the standard R-tree. The partial static O-tree
is based on the O-tree [Sitzmann and Stuckey, 2000],
but stores at most four values only, thus not increasing
the fanout of the tree while providing better
discriminator information and thus improving the
search.
The structure of the paper is as follows. Section 2
briefly describes R-trees and talks about their in-memory
use. We present partial R-trees in Section 3,
and describe how redundant information can be eliminated
and how this affects insertion and search. Section
4 describes how to store more descriptive information
about the entries in partial O-trees. We
present experimental results in Section 5 before concluding
our paper in Section 6 with a summary and
an outlook to future work in Section 7.
Using R-trees in main memory
2.1 The R-tree structure
The R-tree and its variants [Guttman, 1984,
Beckmann et al., 1990] is a data structure for n-dimensional
data. It has originally been designed to
index disk-based data.
An R-tree node t has a number of subtrees t:n, and for each 1 ≤ i ≤ t:n a discriminator or minimum bounding box t[i]:d (which is an array of four side values), and a pointer t[i]:t. The pointer points at an object identifier if the node is a leaf node. Otherwise it points at another R-tree node. R-trees are built with a maximum number of entries per node M and a minimum number of entries per node m ≤ M/2. Further rules which determine the shape of the R-tree are:
1. All leaf nodes appear on the same level.
2. Every node which is not the root node contains
between m and M entries.
3. The root node has at least two entries unless it
is a leaf node.
If insertion of an entry in a node results in an
overfull node, the node is split. Splitting algorithms
with linear and quadratic complexity have
been presented in the literature [Guttman, 1984,
Beckmann et al., 1990].
2.2 Differences between disk and in-memory R-trees
The concept of the R-tree to organize spatial objects
in a balanced search tree by using minimum bounding
boxes (MBBs) as discriminators works well in the
disk-based context. The I/O cost of R-trees is directly
dependent on the number of block accesses for
systems with small main-memory, and is very competitive
compared to other spatial access methods.
This approach can also be transfered to in-memory
use of R-trees and works similarly well with respect
to the cache-behaviour of R-trees, whose performance
depends on the number of cache misses occurring.
Thus, the strength of the R-tree concept is the same
for disk-based and in-memory use.
Looking at the size of the R-tree nodes, we observe that, as expected, the number of entries per node in the in-memory case is significantly smaller. In the disk-based case with a page size of 4 KBytes, we can fit 170 entries into one node of a regular R-tree, and 255 if we use the CSB+-technique to eliminate pointers, assuming 4 byte long numbers. For in-memory R-trees using a cacheline of 64 bytes, a normal R-tree node using 4 bytes per number can only store 3 entries, and the CSB+-pointer elimination technique does not increase the fanout of the tree in this case. Assuming 2 bytes per number, we can fit 5 entries, and with the CSB+-technique 7 entries per node.
Figure 1: Memory layout of a CSB+-R-tree node
Figure
1 shows the memory layout of a CSB+-R-
tree node. A node consists of the pointer p to the
contiguous array of child nodes, a counter n for the
number of entries and entries e 1 to e n . Each entry e i
consists of four sides s 1 to s 4 . As the child nodes of a
CSB+-R-tree node t are stored contiguously, we can
refer to the i th child node of t as t:(p + i), using the
base pointer t:p and adding the oset i. In the conventional
R-tree notation, this corresponds to t:t[i],
the i th child node of a conventional R-tree node.
Analyzing the characteristics of small R-tree
nodes, we can make some observations which are quite
dierent to the large R-tree nodes used for disk-based
data:
the discriminator MBBs are usually very small
and entries share sides with the enclosing MBB
increasing the fanout of small nodes has an
immediate eect on the height of the tree
We will investigate whether we can improve performance
of in-memory R-trees by making use of the
two properties stated above.
Eliminating multiply stored information
3.1 Analyzing R-trees with small nodes
The idea for partial R-trees was motivated by a small
experiment. Using simple datasets of lines and polygons
(see Section 5), we counted how many discriminators
in an R-tree share one or more sides with their
parents when using nodes of small sizes. The results
in
Table
1 are for an R-tree with 3 entries per node.
File | shares 1 side (%) | 2 sides (%) | 3 sides (%) | 4 sides (%) | 0 sides (%) | Avg. sides stored
l100000 | 30.93 | 48.35 | 14.55 | 0.01 | 6.03 | 2.28
p10000 | 27.35 | 29.67 | 18.72 | 7.40 | 16.79 | 2.27
p50000 | 27.74 | 28.80 | 10.09 | 7.73 | 16.61 | 2.26
p100000 | 27.79 | 20.04 | 19.12 | 7.75 | 16.27 | 2.25
Table
1: MBB sides shared with parent MBB
On average, only 2.25 to 2.3 sides of a discriminator
have to be stored per entry as shown in the last
column. For all files, at least 83 per cent of discriminators
share at least one side with their parent. We
will propose a method where the information from the
parent discriminator is used to construct the complete
box and show that this increases the fanout of the R-tree
nodes, resulting in better search performance.
3.2 A more compact R-tree representation
We propose storing only those sides of a discriminator
which the entry does not have in common with its
parent. Consider the objects depicted in Figure 2.
O2
O3
Figure
2: Objects in an R-tree node
The dashed line represents the minimum bounding
box of a node that contains the three objects O1,
O2, and O3. As O1 shares the left and upper side
with the parent, we only need to store the right and
bottom side. For O2, only the left side needs to be
stored as the other three sides are given by the parent
discriminator. All four sides for O3 must be stored in
the node as the entry shares no side with the parent.
The structure of an R-tree node with partial information
will therefore be more dynamic than the
traditional R-tree node structure. Instead of storing
four sides for every discriminator, we store between
0 and 4 sides per entry. The structure is shown in
Figure
3.
bn e1
for valuesbits
Figure
3: The node structure of a partial R-tree
The partial R-tree node contains a pointer p and a counter n for the number of entries in the node. The first part of the node then contains a sequence of 4-bit fields. The bit fields correspond to the entries in ascending order. Each bit field indicates which sides are stored in the node and which sides have to be inherited from the parent. The actual sides are stored in reverse order, starting from the end of the node. This allows us to make full use of the space of the R-tree node and to minimize copy and comparison operations during insertion and search.
As Table 1 shows, we only store about 2.25 sides per discriminator. Assuming that a side is 4 bytes large, we save about 1.75 sides per entry, which corresponds to 56 bits. Using 4 bits per entry for the extra information, we still save 52 bits per entry.
A partial R-tree node s has a base pointer s:p, a number of subtrees s:n, and for each 1 ≤ i ≤ s:n a 4-bit array s[i]:b indicating which sides differ from the parent discriminator. The remainder of the node is an array of side values s:s, where s:s[j] is the j-th side value in the node (corresponding to the side of the j-th true bit occurring in the bit fields). Here we ignore the fact that this array is stored in reverse order. We assume s:l gives the number of sides stored in the partial R-tree node. Note this is not actually stored in the node, since it can be calculated from the number of true bits in the 4-bit fields.
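The encoding of a single entry can be sketched as follows (Python, our illustration). The real node packs the side values from the end of the cache line and interleaves several entries; this sketch only shows the 4-bit mask and the list of differing sides for one entry.

def to_partial_entry(entry, parent):
    """Return (mask, stored): a 4-bit mask of sides that differ from the
    parent discriminator, plus only those side values."""
    mask, stored = 0, []
    for j in range(4):
        if entry[j] != parent[j]:
            mask |= 1 << j
            stored.append(entry[j])
    return mask, stored

def to_complete_entry(mask, stored, parent):
    """Rebuild the full box by taking unset sides from the parent."""
    entry, it = list(parent), iter(stored)
    for j in range(4):
        if mask & (1 << j):
            entry[j] = next(it)
    return entry

parent = [0.0, 0.0, 10.0, 10.0]          # x1, y1, x2, y2
entry = [0.0, 2.0, 7.0, 10.0]            # shares two sides with the parent
mask, stored = to_partial_entry(entry, parent)
print(bin(mask), stored)                 # 0b110 [2.0, 7.0]: only 2 sides stored
assert to_complete_entry(mask, stored, parent) == entry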
3.3 Creating the partial representation
Given an R-tree node t with parent discriminator pd
we can create a corresponding partial R-tree node s
by mapping each entry in t to its partial information
using the algorithm ToPartial. We ignore the pointer
information, which is effectively unchanged.
ToPartial(t, pd)
  s.n := t.n
  l := 0
  for i := 1 to t.n
    for j := 1 to 4
      if (t[i].d[j] != pd[j])
        s[i].b[j] := true
        s.s[l++] := t[i].d[j]
      else
        s[i].b[j] := false
  s.l := l
  return s
For each discriminator t[i].d occurring in the R-tree
node we check each side j against the parent discrim-
inator. If it is different to the corresponding side in
the parent, we set the corresponding bit s[i].b[j] and store
the side in the next available space in the partial R-tree
node s. The counter s.l keeps track of how many sides
are stored in total.
3.4 Extracting information from a partial R-
tree
To read a discriminator in a partial R-tree node, we
need to combine the information from the parent discriminator
with the information stored in the node.
The algorithm ToComplete converts a partial R-tree
node s together with its parent discriminator pd to
a (total) R-tree node. Again we ignore the pointer
information.
ToComplete(s, pd)
  t.n := s.n
  l := 0
  for i := 1 to s.n
    t[i].d := pd
    for j := 1 to 4
      if (s[i].b[j])
        t[i].d[j] := s.s[l++]
  return t
After setting the number of subtrees appropriately,
the count of sides is initialized, and then each bit field
is examined in turn. The discriminator is initialized
to the parent discriminator, and then for each differ-
ent side (marked by a true bit), the next side value is
copied in.
The R-tree node t returned from ToComplete for
partial R-tree node s is identical to the R-tree node
given to ToPartial to construct s. No information is
lost. It is important to note that for search operations
we do not need to restore the entries completely
but can rather use the partial information to perform
search as we will describe below.
3.5 Building partial R-trees
The insertion procedure of an object in a partial R-tree
requires some extra steps compared to the standard
R-tree insertion. The Algorithm Insert describes
the insertion of an object o with discriminator e in a
partial R-tree s with discriminator pd. The algorithm
effectively first converts each partial R-tree node visited
to its corresponding (total) R-tree node, performs
the insertion and then converts back to a partial R-tree
node (or nodes).
Insert(s; pd; e; o)
case s
external:
if
else
return (sl; sr; dl; dr)
replace t[i] by tl and tr
if (sr != null)
if
else
return (sl; sr; dl; dr)
When inserting an object into an external node,
we convert the node to a complete node and simply
add the entry naively. The new parent discriminator
is pd ∪ e, that is the minimal bounding box that includes
both pd and e. As the representation of the
node depends on the parent discriminator, the representation
of entries other than the new entry may
have changed and might need to be recalculated. For
example, an entry previously sharing 2 sides with the
parent might now only share 1 side with the new,
larger parent discriminator. We convert the expanded
(total) R-tree node back to a partial R-tree node. Full
then tests whether the new representation is too large
to fit into a node. A partial R-tree node s is full if
the combined size of its bit fields and stored side values is too great to fit in the node
(together with the bits for s.n and the pointer).
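As a hedged illustration (the exact bookkeeping is not spelled out above), the fullness test can be sketched as follows, assuming 4 bits of bit field per entry and a fixed number of bits per stored side; the constant names are ours:

// Sketch of the fullness test for a partial R-tree node.
// headerBits covers the pointer and the entry counter; sideBits is the
// size of one stored side value.
bool isFull(int entries, int storedSides,
            int nodeBits, int headerBits, int sideBits) {
    int needed = headerBits               // pointer and entry counter
               + 4 * entries              // one 4-bit field per entry
               + sideBits * storedSides;  // the stored side values
    return needed > nodeBits;
}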
If the new representation of the node is too large
for the node, either because there was no space for
the new entry, or its addition caused other entries to
no longer t, the node is split. We use the linear and
quadratic splitting algorithms proposed in the literature
[Guttman, 1984, Beckmann et al., 1990]. The
algorithm splits the node s into two nodes con-
taining the same entries that will each fit in the available
space. The changes are then passed back to the
parent node.
In internal nodes, we first select the best subtree
for insertion. We choose the subtree whose discriminator
shows the smallest increase in area after insertion
of the new entry. Insertion is then continued
in the selected subtree. The results of the insertion
on the lower level are then used to recalculate the
new representation of the node. Note that we need
to recalculate the representation of the node not only
when the node on the lower level has been split but also
when the parent discriminator has changed. The new
representation might be larger after the parent discriminator
has been enlarged. This means that we
might have to split a node on an internal level, even
if no new entry was added to it by insertion lower in
the tree. We check if a split is necessary after the
new representation has been determined and propagate
the changes to the parent level.
The algorithms shown convert partial R-tree nodes
to R-tree nodes and back again for most processing
for ease of explanation; in the implementation, most
conversion of discriminators is avoided.
3.6 Searching partial R-trees
When searching a partial R-tree, we do not need to
reconstruct the complete entry before we can determine
whether search should continue in a child tree
or not. Instead, we only need to compare the stored
sides of an entry with the query as we know that the
inherited sides match the query.
This is clarified by Figure 4. Imagine we have determined
that query box q intersects with the parent
discriminator pd of some entry with discriminator e.
In order to determine this we have checked that the
box q does not lie completely above pd, to the left
of pd, etc. These comparisons are represented by the
four dashed arrows. In order to determine that q intersects
e we do not need to consider whether it lies
completely below e or to the right since these comparisons
were already made with the parent box pd.
Hence the only two comparisons required are with respect
to the sides of e not shared with the parent.
Figure 4: Reducing comparisons with partial R-trees
We search an R-tree node by simultaneously scanning
through the bitfields at the start of the node and
the entries stored at the back of the node. We store
the query sides in such a way that we can detect a
mismatch of entry and query with at most four comparisons.
Search(s, q)
  l := 0
  for i := 1 to s.n
    match := true
    for j := 1 to 4
      if (s[i].b[j])
        if (clash(s.s[l++], q.d[j]))
          match := false; break
    if (match)
      case s of internal: continue search in the i-th child
                external: output entry i
For each entry in an internal node, we check
whether the stored sides clash with the query. We
use the bit fields to determine which sides stored in
s.s refer to which discriminator sides. We know that
all the sides not stored in the node are the same as
in the parent discriminator and therefore match the
query. If the entry matches the query, we continue
search on the level below. Otherwise, we consider the
next entry. For each entry in an external node, we
compare all stored sides with the query and output
the entry if it matches the query.
The search algorithm shows that the redundancy
in a normal R-tree not only occurs when storing sides,
but also when doing comparisons. By having partial
R-trees we reduce the number of comparisons during
search at the same time as we reduce the number of
sides stored in an entry.
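A minimal C++ sketch of this per-entry test is given below. It assumes the four sides are ordered (x-low, x-high, y-low, y-high) and that the query sides are pre-arranged so that each stored side needs exactly one comparison; the ordering, the helper names and the node interface (as in the node sketch earlier) are our assumptions, not the paper's code.

// Query sides rearranged so that stored side j is checked against qcmp[j]:
// an entry cannot intersect q if one of its low sides exceeds the query's
// matching high side, or one of its high sides is below the query's low side.
struct QueryCmp { float qcmp[4]; };  // {q.x_high, q.x_low, q.y_high, q.y_low}

inline bool clash(int j, float storedSide, const QueryCmp& q) {
    // even j = a lower bound of the entry, odd j = an upper bound
    return (j % 2 == 0) ? storedSide > q.qcmp[j]
                        : storedSide < q.qcmp[j];
}

// True if entry i of node s may intersect the query, looking only at the
// sides actually stored; the inherited sides already matched when the
// parent was tested. 'l' indexes the node's packed side array and is kept
// aligned by consuming every stored side of the entry.
template <typename Node>
bool entryMatches(const Node& s, int i, int& l, const QueryCmp& q) {
    bool match = true;
    for (int j = 0; j < 4; ++j) {
        if (s.bit(i, j)) {
            float side = s.side(l++);
            if (match && clash(j, side, q)) match = false;
        }
    }
    return match;
}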
4 Storing better information
In the partial R-tree, we try to improve the performance
of R-trees by eliminating redundant informa-
tion, thus creating more room for other entries, which
results in an increased fan-out of the node.
Figure 5: Approximation of polygons in an R-tree and an O-tree
Alter-
natively, we can try to improve the search behaviour
of the tree by using the gained space to store more
descriptive information than the standard minimum
bounding box. This will help to lter out unsuccessful
search paths at a higher level in the tree. In the
paper [Sitzmann and Stuckey, 2000], we introduced
the O-tree, a constraint-based data structure, which
stores an orthogonal box in addition to the standard
bounding box to give a better description of the objects
in the tree.
4.1 The O-tree approach
The structure of the O-tree is similar to the R-tree.
The difference is that we store two minimum bounding
boxes per entry: the conventional MBB and an
additional MBB along the axes v and w, which leave the
origin at an angle of π/4 to the x and y axes. The
values of v and w are obtained by projecting a point
onto these rotated axes. Thus, an object in an O-tree is
described by eight values, representing the lower and
upper bounds on the four axes x, y, v and w. We can
store an O-tree discriminator in an array of eight side
values: the lower and upper bound of the x axis are
stored in sides 0 and 1, sides 2 and 3 refer to the y
axis, sides 4 and 5 to the v axis and sides 6 and 7 to
the w axis.
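For concreteness, the sketch below computes the eight bounds for a set of points, assuming the rotated axes are defined as v = (x + y)/√2 and w = (y - x)/√2; this particular normalization, and all the names, are our assumptions for illustration (any fixed rotation by π/4 behaves the same way).

#include <algorithm>
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Eight O-tree bounds: sides 0-1 = x, 2-3 = y, 4-5 = v, 6-7 = w,
// each pair being (lower, upper), matching the indexing in the text.
struct ODiscriminator { double s[8]; };

ODiscriminator oBounds(const std::vector<Point>& pts) {
    const double r = std::sqrt(2.0);
    ODiscriminator d;
    for (int k = 0; k < 8; k += 2) { d.s[k] = 1e300; d.s[k + 1] = -1e300; }
    for (const Point& p : pts) {
        double c[4] = { p.x, p.y, (p.x + p.y) / r, (p.y - p.x) / r };
        for (int a = 0; a < 4; ++a) {
            d.s[2 * a]     = std::min(d.s[2 * a], c[a]);      // lower bound
            d.s[2 * a + 1] = std::max(d.s[2 * a + 1], c[a]);  // upper bound
        }
    }
    return d;
}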
As shown in Figure 5, the standard MBB, depicted
with solid lines, is a very poor representation for some
kinds of data. Although the two shaded polygons
are far from intersecting each other, an intersection
test based on the MBBs indicates an overlap. More
information about the object is given if we describe
the object with an additional orthogonal box (with
dashed lines). Using the intersection of both boxes,
the lack of overlap of the two polygons is clear.
The O-tree representation is particularly useful for
line data. Figure 6 compares the area of the discriminator
in an R-tree and O-tree for line data.
Figure 6: Size of the O-tree bounding box and the R-tree bounding box for line data
When storing a 2-d unit length line at an angle θ,
0 ≤ θ ≤ π/8, to the horizontal, the area of the bounding
box is cos(θ)·sin(θ). In an O-tree, on the other hand,
the area of the intersection of the two bounding boxes is
cos(θ)·sin(θ) − sin²(θ). This means that the region an O-tree
uses to bound a line is on average only a fraction of
the area of the R-tree minimum bounding box.
In our experiments, we found that O-trees indeed
improve the accuracy of the search significantly. But
the disadvantage of O-trees also became apparent: as
we store eight numbers per entry instead of four, the
fanout of the tree is significantly reduced. Therefore,
the overall search performance could only be slightly
improved for line data intersection queries.
The O-tree as first presented is impractical for in-memory
use. For a typical 64-byte cache line, we can
only fit 2 entries in a node.
In this paper, we transfer the O-tree approach to
in-memory data structures and try to overcome the
weakness of the O-tree by storing only the four most
descriptive sides of the O-tree and combining this information
with information obtained from the parent
discriminator.
4.2 A compact O-tree representation
Although eliminating shared sides also reduces the
number of sides per discriminator in an O-tree, on av-
erage, we still need to store more than four sides per
discriminator. The fanout of the O-tree is therefore
still smaller than in the standard R-tree. We therefore
apply another technique and eliminate those sides of
the O-tree discriminators that add no or little information
about the objects they describe. The partial
O-tree is based on the complete O-tree representation
of a discriminator, but only the four most descriptive
sides are selected for storage in the entry. Less than
four sides can be selected if the discriminator shares
more than four sides with its parent. The node structure
is similar to the partial R-tree representation.
Figure 7: The node structure of a partial O-tree
The node contains a pointer p to the first of the
child nodes and a counter n. The bitfields form the
first part of the remaining node, but in the partial O-tree
case contain 8 bits each, indicating which sides of
the entry are stored in the node. The sides are stored
again in descending order at the back of the node.
In most cases, we will store 4 sides per entry, but
for some nodes, more information will be shared with
the parent and the number of sides can be further
reduced. Compared to the R-tree, we have slightly
more space overhead with one additional byte per entry
used as the bit vector, but we have almost halved
the space requirements of the complete O-tree.
4.3 Selecting the most descriptive data
A partial O-tree node is created by starting from
the complete O-tree node and for each entry repeatedly
discarding the least useful information until the
representation contains at most 4 sides. The algorithm
OToPartial is the equivalent for O-trees of
ToPartial.
OToPartial(t, pd)
  s.n := t.n
  l := 0
  for i := 1 to t.n
    for j := 1 to 8
      s[i].b[j] := (t[i].d[j] != pd[j])
    while (|s[i].b| > 4)
      EliminateSide(t[i].d, s[i].b)
    for j := 1 to 8
      if (s[i].b[j])
        s.s[l++] := t[i].d[j]
  s.l := l
  return s
For each O-tree discriminator d, we first eliminate
the sides of d which are shared with its parent by
setting their bits in s[i].b to false. As long as this bit
field (considered as a set) contains more than 4 sides,
we repeatedly call EliminateSide to delete the side with
the least importance, i.e., information, from the set
of sides. Once we have reduced the set to at most
four sides, we store the sides in the node and set the
bitfield.
EliminateSide chooses the side to eliminate which
causes the least increase in area of the discriminator.
The area of the discriminator is the area of the intersection of the
MBB and the orthogonal MBB. We discard the side
for which the discriminator shows the least increase
in area if the side is eliminated.
Figure 8 shows an example of the elimination
of sides. The depicted minimum bounding box and
orthogonal minimum bounding box describe an object
or a group of objects. In part (a) of the figure,
all sides of both boxes are stored and their information
describes the area shaded grey. (Stored sides are
shown as solid lines). We can see that the lower and
upper bounds of x do not add any extra information,
as these bounds are given by the MOBB already. If we
eliminate them, as shown in part (b), we still describe
the same shaded area. Furthermore, we can observe
that the lower and upper bounds of y add only little
information about the object. Eliminating these from
the discriminator results in an area which is slightly
larger, but still a tight approximation of the object.
4.4 Reconstructing the O-tree representation
For search or dynamic insertion, we might want to
reconstruct the complete O-tree representation. As opposed
to the partial R-tree, during the conversion
from the complete O-tree representation to the partial
O-tree representation, information can get lost. We
Figure 8: Eliminating sides of an O-tree discriminator
only store an approximation of the original O-tree
discriminator, therefore the partial O-tree representation
will be less accurate than the complete O-tree
representation, while still more descriptive than the
R-tree information.
Algorithm OToComplete converts a partial O-tree
node back into a complete O-tree node, i.e., an entry
with eight sides. Although the entry is complete,
it is only an approximation of the original complete
O-tree representation. As for the partial R-tree, we
start with copying the parent discriminator. We then
replace the values of the sides stored with their actual
values. The function Tighten then tightens
the O-tree discriminator. The resulting discriminator
is an approximation of the original O-tree entry.
OToComplete(s, pd)
  t.n := s.n
  l := 0
  for i := 1 to s.n
    d := pd
    for j := 1 to 8
      if (s[i].b[j])
        d[j] := s.s[l++]
    t[i].d := Tighten(d)
  return t
Although we do not store all information about the
O-tree discriminator, we may be able to reconstruct
some more information from the sides we have stored.
The MBB on axes x and y also gives bounds on axes
v and w and vice versa. We can thus tighten these
bounds. Figure 9 illustrates tightening on an MBB for
x and y and MBB for v and w (MOBB). The MBB for
x and y determines a tighter lower bound for v,
while the MBB for v and w determines a tighter
upper bound for y. The tighter bounds are illustrated
by dashed lines.
The function tighten takes the discriminator with
the sides that have been read from the node and the
parent.
Figure 9: Tightening the representation of an O-tree discriminator
Tighten(d)
return d
Boundaries of x and y based on the values of v
and w of the orthogonal box are computed. The original
values are replaced if the computed bounds are
tighter. The same process is then performed for the
orthogonal MBB.
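The elided Tighten function can be sketched as follows, again assuming v = (x + y)/√2 and w = (y - x)/√2, so that x = (v - w)/√2 and y = (v + w)/√2. The bound propagation follows directly from these identities, but the concrete formulas and names are our reconstruction, not the authors' code.

#include <algorithm>
#include <cmath>

// d layout as in the text: [xl, xu, yl, yu, vl, vu, wl, wu].
void tighten(double d[8]) {
    const double r = std::sqrt(2.0);
    double &xl = d[0], &xu = d[1], &yl = d[2], &yu = d[3];
    double &vl = d[4], &vu = d[5], &wl = d[6], &wu = d[7];
    // v = (x + y)/sqrt(2), w = (y - x)/sqrt(2): the x,y box bounds v and w.
    vl = std::max(vl, (xl + yl) / r);
    vu = std::min(vu, (xu + yu) / r);
    wl = std::max(wl, (yl - xu) / r);
    wu = std::min(wu, (yu - xl) / r);
    // x = (v - w)/sqrt(2), y = (v + w)/sqrt(2): the v,w box bounds x and y.
    xl = std::max(xl, (vl - wu) / r);
    xu = std::min(xu, (vu - wl) / r);
    yl = std::max(yl, (vl + wl) / r);
    yu = std::min(yu, (vu + wu) / r);
}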
4.5 Insertion in dynamic partial O-trees
The insertion of entries in a partial O-tree is identical
to the insertion for partial R-trees except for
the function GetRepresentation. For both trees, the
new representation of a node is based on the previous
representation stored in the tree. In the partial
R-tree, this representation is a complete and accurate
representation of the entry. In the partial O-
tree, this representation is already an approximation
of an entry. Creating a new partial O-tree representation
will therefore produce another approximation
of an already approximated entry. The quality of the
representation of an entry therefore deteriorates every
time a representation has to be changed and is
recalculated. During dynamic insertion this happens
very frequently. The accuracy of object description
in the partial dynamic O-tree is therefore very poor
and leads to inaccurate search involving high cost.
Partial dynamic O-trees therefore do not seem useful in
practice, although the very first approximation is a
very accurate description of entries which provides
more information than the R-tree bounding box.
We therefore suggest using the partial O-tree approach
in a static environment. The partial static
O-tree is generated from an existing O-tree and every
discriminator in the tree is converted only once into
the partial O-tree representation. The result is a partial
O-tree which takes up only about half the space
of the original O-tree but represents the entries in a
more descriptive way than the R-tree.
4.6 Building a static partial O-tree
A static partial O-tree is created from a complete O-
tree. We can determine the fanout of a partial O-tree
for a given node size. A complete O-tree with that
fanout is created. The nodes in the complete O-trees
are about twice as large as the ones in the partial O-
tree. We then read the complete O-tree and convert
every discriminator into the partial representation by
applying the StoreSides algorithm. The result is a
partial O-tree with nodes that fit in the given node
size and with the fanout of the complete (larger) O-
tree. As we only had to apply the approximation step
once, the discriminators in the tree are very accurate.
4.7 Searching partial O-trees
We can search partial O-trees in two different ways
which differ in cost and accuracy. The Accu-
rateSearch shown below reads every entry in the tree
and tries to reconstruct its complete O-tree representation
using ReadEntry.
AccurateSearch(s, pd, q)
  for i := 1 to s.n
    t[i] := ReadEntry(s, pd, i)
    if (intersect(t[i].d, q))
      case s of internal: continue search in the i-th child
                external: output entry i
The search algorithm traverses the tree, reconstructs
the complete representation of the entries and
outputs the leaves whose approximations intersect with
the approximation of the query object.
The alternative is a faster, but less accurate search
algorithm similar to the partial R-tree search. The entries
are not completely reconstructed, but only the
sides stored are compared with the query. This reduces
the search time as no copying of the parent
node, copying of entry sides and tightening have to be
performed. FastSearch is exactly analogous to Search.
FastSearch(s, q)
  l := 0
  for i := 1 to s.n
    match := true
    for j := 1 to 8
      if (s[i].b[j])
        if (clash(s.s[l++], q.d[j]))
          match := false; break
    if (match)
      case s of internal: FastSearch in the i-th child
                external: output entry i
Both search algorithms only implement the filter
step which creates a set of candidate objects which
might actually intersect the query. The refinement
step takes each candidate object in a subsequent step
and checks its intersection with the query object.
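A typical filter-and-refine query then has the following shape; the candidate set is whatever FastSearch or AccurateSearch returns, and exactIntersects stands for the exact geometric test on the stored objects, which is outside the index and assumed here.

#include <vector>

// Generic refinement step: keep only the candidates from the filter step
// that really intersect the query.
template <typename Object, typename Query, typename ExactTest>
std::vector<Object> refine(const std::vector<Object>& candidates,
                           const Query& q, ExactTest exactIntersects) {
    std::vector<Object> result;
    for (const Object& obj : candidates)
        if (exactIntersects(obj, q))
            result.push_back(obj);
    return result;
}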
5 Experiments
5.1 Description of Experiments
Our experiments were conducted on a Sun Ultra-5
with a 270 MHz processor and 256 MByte RAM under Solaris
2.6. A level-2 cache line on our architecture is 64
bytes. We implemented the partial R-tree and partial
O-tree using the CSB+-technique as described in
Sections 3 and 4. We compared the partial R-trees
to a normal R-tree which also uses the pointer elimination
technique. For a 64-byte cache line, traditional
O-trees cannot be used as a node can only contain 2
entries. We therefore do not include results of the
normal O-tree in our graphs and discussion.
We used the quadratic splitting algorithm for
all trees.
Our test data consists of a set of randomly constructed
line and polygon data relations. Each line
data set contains a number of lines each with approximate
length 20 in a square of area 5000. The polygon
data sets contain convex polygons with up to 10 nodes
and edges of length approximately 40 in a square area
of 10000. The polygons are constructed by randomly
creating the 10 points and using the Graham scan
algorithm to calculate their convex hull.
Our experiments measure the performance for partial
R-trees and partial static O-trees and compare
them with the search performance of the R-tree. For
each test case, we queried each tree with 10,000 random
queries and measured the number of node ac-
cesses, search time, number of results and search
times including the refinement step.
5.2 Experimental Results
Figures 10 and 11 show the average number of node
accesses per query for line data and polygon data, re-
spectively. Node accesses can be used as a rough measure
for cache misses, i.e., it is the worst-case number
of cache misses. For the line data, the partial R-trees
show a reduction of up to 30 per cent in the number of node
accesses compared to the R-tree. The accurate partial
O-tree reduces the number of node accesses by 60 per
lines      R-tree    Partial R-tree    Partial O-tree (Accurate)    Partial O-tree (Fast)
100 26 21 25 26
1000 221 159 139 152
5000 1010 724 474 513
10000 1954 1403 869 933
50000 9194 6444 4010 4295
100000 18193 12701 7613 8105
Figure 10: Average node accesses for a line data query
polys      R-tree    Partial R-tree    Partial O-tree (Accurate)    Partial O-tree (Fast)
500 279 209 264 266
1000 559 407 549 554
5000 2711 1982 2708 2732
10000 5305 3923 5303 5354
100000 51492 38728 51021 51516
Figure 11: Average node accesses for a polygon data query
cent. The fast search on the partial O-tree still has
more than 50 per cent fewer node accesses than the R-tree. For
polygon data, the partial R-tree shows a similar improvement
of 25 per cent. The partial O-trees, on the
other hand, only reduce the number of node accesses
by a marginal 1 per cent.
Measuring the search time per query for line and
polygon data, we obtained the results shown in Figures
12 and 13. For line data, the partial R-tree improves
on the normal R-tree by about 25 per cent.
The partial O-tree with accurate search shows how
expensive the accurate search is: although it could
decrease the number of node accesses significantly,
the gained time is used to perform expensive opera-
tions. The search time of the accurate partial O-tree
is considerably higher than for the R-tree. The less
accurate fast O-tree search algorithm, on the other
hand, reduces the search time by up to 35 per cent.
This shows clearly that, although the fast search is
slightly less accurate and will search more subtrees,
its faster execution time results in the best perfor-
mance. For polygon data, the partial R-tree reduces
search time by up to 25 per cent compared to the
lines      R-tree    Partial R-tree    Partial O-tree (Accurate)    Partial O-tree (Fast)
1000 0.33 0.29 1.12 0.42
5000 1.92 1.51 3.84 1.49
10000 4.11 3.20 7.03 2.79
50000 19.74 14.48 32.58 13.06
100000 38.72 28.37 61.64 24.59
Figure 12: Average search time for line data
polys      R-tree    Partial R-tree    Partial O-tree (Accurate)    Partial O-tree (Fast)
500 0.31 0.37 2.09 0.68
1000 0.72 0.75 4.22 1.44
5000 5.21 4.07 21.36 7.51
10000 10.89 8.50 42.37 15.19
50000 54.10 40.30 203.94 74.29
100000 110.58 84.10 405.51 147.84
Figure 13: Average search time for polygon data
lines      (Partial) R-tree    Partial O-tree (Accurate)    Partial O-tree (Fast)    O-tree
500 1175 418 516 416
1000 2283 805 953 805
5000 11751 4155 4498 4155
10000 23417 8286 8809 8282
50000 116440 41177 43223 41131
100000 233360 82657 85969 82581
Figure 14: Total number of results for line data (in thousands)
normal R-tree. The partial O-trees do not improve
on the R-tree. Again, the accurate O-tree search suffers
from the high computation cost and shows an
increase of 250 per cent. Even the fast O-tree search
shows a slight increase in search time. While showing
a great improvement for line data, the additional
orthogonal bounding box of O-trees does not seem to
help to make search on polygon data more efficient.
Nevertheless, as shown in Figures 14 and 15, it
still reduces the number of hits in the filter step sig-
nificantly. We have included the number of results
for a complete O-tree with the same fanout to have a
measure of the best possible reduction in results.
Compared to the R-tree and partial R-tree, which
both have the same number of hits, the complete O-tree
reduces the number of results by about 65 per cent for
line data and 25 per cent for polygon data. The accurate
search partial O-tree still achieves a 64 per cent
reduction for line data and 23 per cent for polygon
data. The fast O-tree search is slightly less accurate,
but can still show a reduction of 64 per cent for line
data and 20 per cent for polygon data.
polys      (Partial) R-tree    Partial O-tree (Accurate)    Partial O-tree (Fast)    O-tree
100 595 467 486 456
500 2833 2192 2287 2155
1000 5745 4422 4583 4371
5000 28474 22057 22875 21725
10000 56938 44033 45608 43419
100000 567204 436057 449977 430840
Figure 15: Total number of results for polygon data (in thousands)
We can now compare the search time of the search
trees if we take the refinement step into account. The
refinement step takes the set of results obtained by
searching the tree in the filter step and tests for actual
intersection of the query object and the object
in the candidate result set. The search times for this
experiment are shown in Figures 16 and 17.
lines      R-tree    Partial R-tree    Partial O-tree (Accurate)    Partial O-tree (Fast)
500 0.31 0.31 0.79 0.42
1000 0.66 0.66 1.36 0.72
5000 4.22 3.47 4.85 2.63
10000 8.67 7.06 9.11 5.10
50000 43.17 34.46 43.30 24.02
100000 87.13 68.72 81.69 45.80
Figure 16: Average search time for line data including refinement time
polys      R-tree    Partial R-tree    Partial O-tree (Accurate)    Partial O-tree (Fast)
500 1.12 1.17 2.86 1.53
1000 2.48 2.40 5.86 3.20
5000 14.91 13.15 30.08 16.85
10000 30.38 26.78 59.47 33.50
50000 153.34 132.38 290.43 163.11
100000
Figure 17: Average search time for polygon data including refinement time
As the number of results does not change, the improvement
of the partial R-tree compared to the normal
R-tree is similar for search including the refine-
ment step. For the O-trees, on the other hand, the
relative performance to the R-tree has changed. The
accurate search O-tree now shows a slight improvement
over the R-tree for the large line data file and
similar performance for the other line data sets. The
fast search O-tree now shows a reduction in search
time of up to 60 per cent, again for the larger files.
For polygon data, even the reduced number of candidates
for the refinement step does not make the partial O-tree
competitive. For the accurate search, the search
time is now 60 per cent or more higher than for the R-tree and still no
improvement is shown for the fast O-tree search on
polygon data.
6 Summary
We investigated how to make better use of the space
in a small R-tree node for in-memory applications.
As many entries share sides with their parents, we
introduced the partial R-tree which only stores information
that is not given by the parent node. Our experiments
showed that the partial R-tree shows better
performance than the R-tree for random line queries
on line and polygon data. The improvements range
from 10 to 30 per cent. This is due to a higher fanout
of the node, which turned out to be 4 compared to 3 in
the normal R-tree. We also investigated if we could
make better use of the space by storing different information
that promises to yield a better approximation
of the entry. The partial O-tree is based on the
O-tree but stores only the most important part of
the information of an O-tree box. We implemented
a static version of the partial O-tree and investigated
two search algorithms. The fast search algorithm still
shows enough accuracy and showed improvements of up to
35 per cent for line data without the refinement step and
up to 60 per cent improvement for line data with the refine-
ment step. For polygon data, search could not be
improved, but the static partial O-tree still showed
stable performance similar to the R-tree.
7 Future Work
We will investigate if we can further improve the
fanout of the tree by storing the bitfields encoded using
Huffman encoding. Furthermore, we will also
experiment with static partial O-trees in a disk-based
database environment.
--R
The Asilomar Report in Database Research.
Improving Index Performance Through Prefetching.
Optimizing Multidimensional Index Trees for Main Memory Access.
Making B
--TR
The R*-tree: an efficient and robust access method for points and rectangles
The Asilomar report on database research
Making B+- trees cache conscious in main memory
Optimizing multidimensional index trees for main memory access
Main-memory index structures with fixed-size partial keys
Improving index performance through prefetching
R-trees
Cost-based Unbalanced R-Trees
O-Trees
--CTR
Jeong Min Shim , Seok Il Song , Jae Soo Yoo , Young Soo Min, An efficient cache conscious multi-dimensional index structure, Information Processing Letters, v.92 n.3, p.133-142, 15 November 2004 | in-memory databases;spatial databases;index structures |
564104 | Distributed component architecture for scientific applications. | The ideal goal of not having the user dealing with concurrency aspects has proven hard to achieve in the context of system (compiler, run-time) supported automatic parallelization for general purpose languages and applications. More focused approaches, of automatic parallelization for numerical applications with a regular structure, have been successful. Still, they cannot fully handle irregular applications (e.g. the solution of Partial Differential Equations (PDEs) for general geometries). This paper describes a new approach to the parallelization of scientific codes. We make use of object-oriented and generic programming techniques in order to make parallelism implicit (invisible to the user). Instead of generating a new solution based on an existing one, we take advantage of the application characteristics in order to capture the concurrency infrastructure and to provide part of the solution process to the user. Our goal is to achieve "transparent" concurrency by giving the user the illusion of a sequential programming environment. We isolate the user from the tedious aspects of geometrical data representation, communication pattern computation and communication generation in the process of writing a parallel solver for PDEs. The user concentrates on providing only the local numerical computations, which is a straightforward mapping from the numerical algorithm. Furthermore, we describe a system that demonstrates our approach. We address the issues of efficiency for our system and we show that our approach is scalable. | Introduction
Most of the scientific computing applications are concerned
with the solution of the Partial Differential
Equations (PDEs) which describe some physical phe-
nomena. Typical application areas are computational
fluid dynamics, computational biology, and so
forth. The solution of PDEs, either by the Finite Element
Method (FEM) or by Finite Differences (FD), involves
the discretization of the physical domain and the computation
of the solution at the discretized points. The
computation is then assembled into a global system
of linear equations.
The discrete solution algorithm of a PDE is called a
solver. The scientific computing field is concerned
with employing powerful computing resources for
solving numerical analysis problems, such as PDEs.
For example, for simulating protein folding modeled
by PDEs, 10000 particles are used and
time steps are required. This would require
days (therefore, for a feasible computation time frame,
one would need steps). In this exam-
ple, the biological data consists of 1500 protein fields,
protein sequences, so 15000 protein models.
In order to predict the protein structure, all possible
sets (structures) have to be generated and optimized
by performing a selection based on an "energy function".
The only way to deal with the size of data arising
from large, numerical computations and the number
of iteration steps involved is to use parallel com-
puting. Parallel computing has been employed extensively
in the scientific computing field in an explicit
manner. Most of the parallel scientific applications
use the Fortran language, in conjunction with the
message passing paradigm (the Message Passing Interface, MPI)
to specify the decomposition, map-
ping, communication and synchronization. Reuse has
been explored in its incipient phase as function li-
braries. Even though, to some extent, Fortran libraries account
for some reuse, Fortran applications are hardly
extensible. As a consequence, despite their similar
structure, most of the parallel applications are re-designed
from scratch. Given that the process of writing
distributed memory applications is complex and
error-prone, this means low productivity.
Our goal is to abstract away from the user the
distributed computing aspects. That is, to give the
user the illusion of a sequential programming model.
Our thesis is that scalable parallelizing compilers for
real application codes are still very far away.
With our approach, we are taking advantage of the
application specific features to "automate" the parallelization
process. We separate the user data (numer-
ical application specific data) from the parallelization
algorithm. Therefore, we capture the concurrency infrastructure
for the class of applications at hand (PDE
solvers) and dynamically use it during the user's solution
process. We use generic programming techniques
in order to couple user data and the workload partitions
for the transparent distributed solution process.
In the remainder of the paper we will refer to the
FEM solution process, since we treat the FD case
as a particularization of the general solution process.
Thus, the key features of FEM solvers are:
The applications are data parallel and loosely
synchronous. Domain decomposition is a technique
used to break down the physical domain
into smaller sub-domains, that can be treated
separately. With this method, data at the border
between sub-domains is logically connected with
data from other sub-domains. In a distributed
memory setting this means that data residing on
remote processors are needed locally. The computation
steps consist of independent local com-
putations, followed by communication.
The physical domain is described by a geomet-
rical, discretized structure. This usually translates
into nodes, elements, or faces (edges, i.e.
the connection between the elements). Any user
application specific abstractions (matrices, vec-
tors, etc.) or attributes (e.g. pressure, temper-
ature) are indexed according to the nodes, ele-
ments, or faces. The numerical computation consists
mainly of iterations over the entities (nodes,
elements, edges).
The applications are inherently dynamic: experimentation
with different geometrical data struc-
tures (degrees of freedom, element shapes) or different
numerical algorithms (time discretization
schemes, iteration schemes, etc.) is at the core
of the physical phenomena simulation (numerical
applications).
We make use of the application domain features
(PDE solvers) and object-oriented techniques in solving
the problem of transparent concurrency for numerical
applications.
1.1 Contributions
This paper makes the following contributions:
A distributed component model for scien-
tific computation. We present a component
model suitable for concurrent scientific applications
that enables us to achieve the "illusion" of a
sequential programming model. We distinguish
between active and passive components and their
visibility at different levels: system and user. We
take advantage of the loosely synchronous feature
of the class of applications we refer to. Therefore,
our component model allows for optimal commu-
nication. Moreover, we use generic programming
techniques in order to be able to couple user data
and algorithms with our concurrency infrastruc-
ture. In our distributed component model there
is no notion of global address or name space, or
remote invocations.
Automatic data consistency. We introduce
the notion of dependent and independent data
items. Dependent data needs to be globally con-
sistent, while independent data does not. The
distinction allows us to guide the user in invoking
the global consistency phase during the computa-
tion. The communication is then automatically
taken care of.
An architecture for the scalable solution
of PDEs. Our solution exploits the similarities
of the applications, as well as the genericity of
the parallelization process in order to capture the
concurrency infrastructure that can be reused as-is
by any scientific application (i.e. distributed
PDE solver) programmer. The hardest aspects of
the solution process are dealt with and therefore
isolated from the user. That is, geometrical data
representation, computation of the communication
patterns and communication generation.
The remainder of the paper is organized as follows.
Section 2 overviews the existing approaches to the
problem we are trying to solve (transparent con-
currency) and motivates our approach. Section 3
presents a system for the transparent concurrency of
the parallel PDE solvers. Section 4 describes a prototype
framework implementation of our system. It
also discusses the design rationale that drove our ap-
proach. Section 5 concludes our paper and gives some
directions for future research.
2 Existing Approaches
Several approaches exist to support the parallelization
of scientic applications. The spectrum of the
approaches lies between manual parallelization, at
one end, to the fully automatic parallelization, at the
other end. Despite its inconveniences, manual parallelization
is still the most popular approach to writing
concurrent scientific applications, due to its efficiency
and tailorability. On the other hand, writing concurrent
scientific applications is an error-prone, complex
task. Automatic parallelization is one way to
tackle the complexity and the reliability of concurrent
applications. Research into parallelizing compilers
for scientific applications has been successful
for Fortran applications with simple data representations
and regular access patterns (Ujaldon, Sharma,
1993). Compile-time analysis cannot handle
arbitrary data access patterns that depend on
run-time information. Run-time analysis has been
used to address the issue of compiling concurrent loop
nests in the presence of complicated array references
and irregularly distributed arrays (Wu, Das, Saltz,
Berryman & Hiranandani 1995). However, these approaches
are limited to loop level parallelism and simple
data layouts (arrays) in the context of Fortran lan-
guages. The code excerpt in figure 1 exemplifies the
applicability of the compiler support for parallelization.
The loop level parallelism is fine grained and results
in a lot of communication. Also, the compiler support
can only handle arbitrary access patterns in the
context of simple data layouts, such as arrays.
Object-oriented/based distributed systems that
support data parallel applications have been proposed
(Chang, Sussman & Saltz 1995), (Hassen,
They either do not support
complex data representations, with general distribu-
tion, or many of the concurrency aspects are still visible
to the user. A complete survey of models and
languages for parallel computation can be found in
(Skillicorn & Talia 1998). We will only refer to the
object-oriented models. In (Skillicorn & Talia 1998),
they are classified into external and internal models
according to whether the parallelism is orthogonal to
the object model, or integrated with the object model.
We are interested in the internal object models, because
these are closely related with data-parallelism,
since at the top level the language appears sequen-
tial. Existing approaches require communication to
be explicit, but reduce some of the burden of the synchronization
associated with it. Our model aims to
hide communication and synchronization from the user.
With our prototype implementation, we succeed in
achieving this to the extent that the synchronization
phase has to be triggered by the user. Anyway, we
will extend our system implementation to automatically
detect and trigger the synchronization phase
when necessary.
Chaos++ (Chang et al. 1995) is a library that provides
support for distributed arrays and distributed
pointer-based data structures. The library can be
used directly by the application programmers to parallelize
applications with adaptive and/or irregular
data access patterns. The object model is based on
Compilers - regular case:
Example:
1. Gather data dependence info
   (i.e. {[+,3],[0,3]})
2. Data and computation decomposition
   (i.e. the iteration space).
3. Code (communication) generation
   (based on data flow information
   and the owner-computes rule).
Run-time support - irregular case:
Example:
1. Build the communication schedule
   (i.e. a translation table lists the
   home processor and the local address
   for each array element)
2. Move the data based on the schedule
The transformed code:
Call DataMove(y, DS)
Figure 1: Compile-time and run-time support for parallelization.
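To make the contrast concrete, the following is a hedged illustration of the kinds of loops involved (our own example, not taken from the figure). The first loop has an access pattern that compile-time analysis can handle; the second accesses y through an index array known only at run time, which is where inspector/executor style run-time support, such as the schedule and the DataMove call of Figure 1, comes in.

// Regular case: affine accesses, analyzable at compile time.
void regular(double* x, const double* y, int n) {
    for (int i = 3; i < n; ++i)
        x[i] = y[i - 3] + y[i];
}

// Irregular case: the access pattern depends on idx[], only known at run
// time, so the compiler cannot derive the communication pattern; a run-time
// inspector builds a schedule and the needed remote data is moved before
// the loop executes.
void irregular(double* x, const double* y, const int* idx, int n) {
    for (int i = 0; i < n; ++i)
        x[i] = y[idx[i]];
}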
mobile and globally addressable objects. A mobile object
is an object that knows how to pack and unpack
itself to and from a message buffer. A globally addressable
object is an object that is assigned to one
processor, but allows copies to reside on other processors
(referred to as ghost objects).
The first problem we see with this approach is
that the user is expected to provide implementations
of the pack and unpack functions (that support deep
copy) when subclassing the mobile component. On
one hand, the packing and unpacking tasks are low
level operations that expose the user to many of the
concurrency aspects (what to pack, when to pack,
where to unpack, what to unpack). On the other
hand, multiple mobile objects may contain a pointer to
the same sub-objects. The user has to make sure that
only one copy of a sub-object exists at any point
during the program execution. The use of globally
addressable objects can alleviate some of these prob-
lems. The global object is intended to support
the global pointer concept. The main problem that
we see is that the contents of the ghost objects are
updated by explicit calls to data exchange routines.
Our approach is different from this one by not mixing
the concurrency aspects at the user level. We do
not let the user see the active components (i.e. associated
with a process) of our model. Also, we do
not need to naively transport the entire content of an
object for every communication. We only update the
parts of the objects that are needed for subsequent
computations and avoid the overhead associated with
communicating entire objects.
In contrast with the generative approach, such as
compiler support for parallelization, we use a constructional
approach. Therefore we construct part of
the solution. We want to achieve the "illusion" of
a sequential programming environment, as opposed
to transforming sequential programs to run in par-
allel. Our approach is due to the limited compiler
support for dynamic analysis and efficiency consider-
ations. Our approach is based on capturing the concurrency
infrastructure of a class of applications and
reusing it for every new application. This idea is similar
with the notion of skeletons, ready made building
blocks (?), or abstractions characteristic to a class of
applications. In our case, we are closer to algorithmic
skeletons, those that encapsulate structure. (Botorog
& Kuchen 1995) explore the approach of algorithmic
skeletons, or common parallelization patterns for another
class of applications, i.e. adaptive multigrid
methods. Moreover, (Botorog & Kuchen 1995) list
a series of requirements for a language supporting algorithmic
skeletons, among which data access control
and polymorphism. The authors introduce the notion
of parallel abstract data type (PADT) in order to account
for the required features. Furthermore, the host
language used for experimentation is an imperative
language, i.e. the C programming language. We argue
that object-oriented models and generic programming
naturally fulfill the requirements for implementing
algorithmic skeletons. Therefore we concentrate
on an efficient data parallel object model suitable for
high performance, parallel applications.
In contrast with object-oriented distributed
middle-ware, we do not support any notion of global
name or address space. That is because efficiency is
important for the applications we address and such
methods wouldn't fulfill this requirement. Moreover,
we are not giving the user access to the distributed
computing aspects such as communication, synchro-
nization, etc. As a consequence, we achieve both
efficiency and transparency of the concurrency.
3 A View of a System for Transparent Con-
currency
In this section we describe a system for transparent
concurrency of the distributed solution of the PDEs
from the user's perspective. In this perspective, we
emphasize the following requirements for our system:
Applicability - the class of the applications we
address is the parallelization for the general solution
of PDEs (FEM, FD etc. PDEs are at
the core of most of engineering and the natural
sciences. With such a large class of applications,
our system is most likely to be highly relevant
for a large applied research community.
Usability - we expose the user to a small, well
tested, well documented component set, together
with the control thread (the way the components
play together), that can be easily learned and
used.
Extendibility - the user should be able to use
our system in conjunction with his/her own data
and algorithms in order to complete the solution
process.
Efficiency - last, but not least, efficiency is an
important requirement of the scientific applica-
tions. Our architectural approach is driven by
efficiency as well.
In succeeding to meeting our requirements, we
have accounted for the following:
To achieve large applicability, we only focus on
capturing the concurrency infrastructure, that
is the load balanced data distribution, finding
the data that needs to be communicated (i.e.
computing the communication patterns), knowing
when the communication takes place. We
only employ a high-level mathematical interface
in the form of the geometrical data description
for the discretized physical domain. Our solution
is general and it does not involve any algorithmic
or discrete mathematics interface. With
our solution, any existing computational kernels
(e.g. BLAS (Dongarra, Croz, Hammarling
& Hanson 1988), Lapack (Angerson, Bai, Don-
garra, Greenbaum, McKenney, Du Croz, Ham-
marling, Demmel & Bischof 1990)) or numerical
library (e.g. Diffpack (Langtangen 1999)) can be
employed by the user for the solution process.
Most of the existing numerical libraries for the
distributed solution of PDEs are still low-level
and complex. Here, by low-level, we mean that
the user is still aware of the data distribution and
communication aspects, as well as of many other
low-level aspects (renumbering, data access off-
sets, etc. (Smith 1998)). By complex, we mean
that the libraries provide a rich, mixed function-
ality. Part of the functionality accounts for numerical
abstractions (linear algebra solvers, time
discretization schemes, etc.). More general functionality
(sometimes duplicated by each library,
in its own "philosophy"), such as geometrical
data representation, local element to global mesh
renumbering, etc., it is mixed in also. Therefore
the libraries become large, hard to use and
learn. We separate the parallel solution process
and the application specic data and algorithm.
We achieve high usability by designing a small
set of well tested, well documented components,
with a narrowly focused functionality.
Our solution is high level and reusable through
the use of encapsulation. We hide the details of
concurrency from the user, while achieving reuse
as-is of the entire concurrency infrastructure. We
also hide the tedious details of the geometrical
data representation from the user. At the same
time, the component-based design accounts for
the reuse of any existing (numerical) software artifacts
Our solution is efficient because we employ a
truly distributed object model. That is, in our
model, there is no notion of a global name or
address space, or remote invocations. The active
objects are loosely synchronous. Communication
only takes place at particular times during
the application life time. We optimize communication
by knowing exactly when communication
takes place and aggregating data into large messages
The main concepts/steps involved in the distributed
solution of the PDEs are:
1. Geometric data representation.
2. Data structure creation.
3. Off-processor data updates.
4. Local numerical computation.
Our system accounts for the rst three phases, while
the user is responsible for providing the numerical
algorithm for a particular PDE.
3.1 Geometrical Data Representation
Geometrical data representation is one of the hard
aspects of scientific applications that use general
meshes. Even though the applications use similar
geometries, many different representations coexist in
practice, making existing scientific codes hard to un-
derstand, modify, maintain and extend. We isolate
the geometrical data representation in a component
with a well defined interface for accessing all the
needed geometrical attributes. At the system level,
the geometrical data representation can be easily re-
placed, without affecting the system functionality, or
requiring any modifications in any other system mod-
ules. Therefore, our system takes over the task of
providing the user with a geometrical data representation
to be used for the implementation of user's
application.
The user specifies the structure of the input domain
as an input file, called mesh or grid, for the sys-
tem. The user also specifies the number of processors
available on his/her system. The file describing
the domain has a prescribed, well documented for-
mat. Such files are usually obtained with the help of
tools called mesh generators 2 .
The system reads the data from the file into the internal
components. Different element shapes used for
the discretization of the input domain can be specified
in the mesh file.
The system uses a load-balanced partitioning algorithm
(as provided by METIS 3 ) for breaking down
the input mesh structure data into smaller regions.
All the details related with the geometrical data representation
are encapsulated by our system compo-
nent. The user gets access to all geometrical data aspects
through our component interface, which is the
Subdomain. At the system level, the geometrical data
representation can be easily replaced, without affect-
ing the system functionality, or requiring any modifi-
cations in any other system modules.
3.2 Data Structure Creation
The system creates a number of regions from the input
data structure equal to the number of processors
available. Then it associates each data region
with a process 4 , transparently for the user. The internal
boundary geometrical mesh data is duplicated
on each process, such that the user has access, locally,
to all the off-processor data needed during one computation.
The user has access to the geometrical data local
to one processor through the Subdomain compo-
nent. The component presents the user with a uniform
view of the geometrical data structure that can
be employed in a sequential programming model for
implementing a numerical algorithm (solver). All the
distributed computing aspects that the component incorporates
are invisible to the user.
The user has to subclass a system provided
component UserData for defining any attribute (e.g.
pressure, temperature) or data abstraction defined on
the mesh structure that is involved in the computation
of the final result. The user provides the concrete
An example of such tool can be found at
http://www.sfb013.uni-linz.ac.at/ joachim/netgen/
4 In our model a single process runs on a single processor.
interface for storing and retrieving a user defined data
item to/from a specific mesh location (element, node,
etc.).
3.3 Off-Processor Data Updates
The concurrency structure of the applications we
address (the solution of PDEs) consists of independent
local computation, followed by communication
phases. Therefore, they are loosely synchronous
(Chang et al. 1995). In order to automatically
provide for the off-processor data updates, we
need to know when and what to communicate. That
is, we need to know the communication patterns and
when to generate communication. Our system computes
the communication patterns, but it is the user
that explicitly invokes the update phase. Then, the
system performs the updates transparently.
Each region associated with a process is stored in
the Subdomain component. The "invisible" part of
this component makes use of the local data in order to
account for the distributed computing part. The component
computes the communication patterns, creates
the communication data container objects and
maintains the global data consistency transparently for
the user. The user has to call a system provided
generic function, Update, which makes sure that the
user defined data is globally consistent.
3.4 Local numerical Computation
Our system treats the user data differently, depending
on the dependency attribute:
1. We call dependent data any defined user property
that is the final result (i.e. the unknown
for which the equations are solved, e.g. pres-
sure, temperature, etc.), or updated by another
dependent data item (e.g. some of the coefficient
matrices computed based on the unknown).
2. We call independent data any user data that is
not assigned the result of an expression computed
based on dependent data.
Figure 2 shows how the system supports the above
tasks transparently to the user, and what the user's
contribution is in completing the solution process. As
shown in figure 2, we employ a Master/Worker concurrency
model. A master process is responsible for
reading in the domain data from an input le and
distributing the subdomains to the workers. We actually
use a hybrid master/worker and SPMD (Single
Process Multiple Data) concurrency model. We use
the SPMD model due to the key observation for our
parallelization system: all the worker processes execute
a similar task on the local domain (data). That
is because the class of application we address share
the same feature, namely, they are data parallel.
In figure 2 we associate each subdomain with a
unique process. We say that a subdomain has communication
capabilities, that is, it "knows" how to
"pack/unpack itself" and send/receive. Inside the
system there are other components with similar fea-
tures. We call these components active components.
In contrast, some other components, passive, capture
only the structure data, and/or the algorithm associated
with it, without having any communication ca-
pabilities. At the user level, only the "passive" components
are employed, or visible.
The user sees only the "passive interface" of the
Subdomain. This will allow the user to manipulate the
appropriate geometrical data. We hide the geometrical
data representation from the user. The system
instantiates and "computes" the "right" subdomain
for each worker. The user acts only at a subdomain
level, in a sequential fashion. The system replicates
the user algorithm and data over all the workers. The
workers communicate transparently for the user, by
using messages.
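As a rough illustration of this concurrency model (not the framework's actual code, which uses OOMPI and hides all of this from the user), a master/worker SPMD program has the following shape in plain MPI; the payload is a placeholder.

#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        // Master: read the mesh, partition it (e.g. with METIS) and
        // send one subdomain description to every worker.
        for (int w = 1; w < size; ++w) {
            int subdomainSize = 0;            // placeholder payload
            MPI_Send(&subdomainSize, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
        }
    } else {
        // Worker: receive the local subdomain, then run the replicated
        // user computation, exchanging boundary data when needed.
        int subdomainSize = 0;
        MPI_Recv(&subdomainSize, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        // ... local numerical computation + Update-style exchanges ...
    }
    MPI_Finalize();
    return 0;
}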
In figure 3 we show the difference between a code
excerpt written in a "classical sequential manner",
and the same code excerpt enriched with our system
functionality to look sequential, but execute dis-
tributed. We do not show the "classical concurrent
model" for such an application (i.e. the MPI ver-
sion), since we assume that its complexity is evident
to the reader and the space would not allow for such
an illustration. In figure 3, we emphasize the differ-
ence between the two models by using grey for the
modifications required by our model. We use black for the
similar parts. It is easy to see that the data items B and
are candidates for the dependent data. Therefore,
in the code excerpt above, these data specialize our
component UserData. Also, the loc data variable reflects
the data the user sees, i.e. a Subdomain. On
the other hand, the data items A and C are independent
and they do not require any modification to the
sequential algorithm.
An important observation here is that the user is
the one who has to "observe" the difference between
the dependent and independent data. With proper
guidance provided by the system documentation and
the user's experience, this "task" is straightforward.
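The user-level pattern then looks roughly like the sketch below. UserData, Subdomain and Update are the components and generic function named in the text, but their signatures, the minimal stand-ins used here to keep the sketch self-contained, and the Pressure example are our assumptions for illustration only.

#include <vector>

// Minimal stand-ins for the framework pieces named in the text.
struct Subdomain { int numNodes() const { return 0; } };
struct UserData  { virtual ~UserData() {} };
template <typename D> void Update(Subdomain&, D&) { /* exchange boundary data */ }

// A user-defined dependent data item (e.g. pressure per mesh node).
struct Pressure : UserData {
    std::vector<double> values;
    double get(int n) const { return values[n]; }
    void   set(int n, double v) { values[n] = v; }
};

// The user's local solver loop, written as if it were sequential.
void solve(Subdomain& sub, int nSteps) {
    Pressure p;                         // dependent: must be kept consistent
    p.values.resize(sub.numNodes());
    std::vector<double> coeff(4, 1.0);  // independent: never needs Update
    for (int step = 0; step < nSteps; ++step) {
        for (int node = 0; node < sub.numNodes(); ++node)
            p.set(node, coeff[0] * node);   // placeholder local kernel
        Update(sub, p);   // framework restores global consistency of p
    }
}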
We have implemented a prototype frame-
work ((Johnson 1997), (Bassetti, Davis &
Quinlan 1998)) to demonstrate our approach, using
the C++ programming language (Stroustrup 2000).
We have used the METIS library for the load
balanced (graph) partitioning of the irregular mesh.
We use the Object Oriented Message Passing Interface
library for communication.
Figure 4 depicts a view of the prototype, using the UML (Unified Modeling Language) formalism (Fowler, Scott & Booch 1999). For brevity we only show the key components and interfaces in the UML diagram.
Our design is based on a careful study of the application domain (PDE solvers) and its behavior. Our key observation is that the applications share a similar structure, with differences residing in the "numerical problem parameters" rather than in the concurrency infrastructure. Therefore, by separating the parallel algorithm and geometrical data from the application-specific parameters, data and algorithms, we were able to come up with a solution to automate the parallelization process.
In figure 4, the UserAlg component is "the hook" for the user to "anchor" his/her computation into the framework. The user subclasses the abstract component UserAlg^6 and provides his/her main control flow, which complements the framework's control flow in completing a particular application. The user also hooks the data representation for his/her application here. As we have mentioned earlier (section 3.4), the data can be independent or dependent. Furthermore, it can be user defined, or it can consist of components imported by the user from other libraries. In this case, we say that the user extends the framework through composition. Composition is the first mechanism used to extend the framework.
6 In this particular implementation, components and classes are the same.
Figure 2: The main building blocks of the system. (The master reads the input file and mesh data, performs the data decomposition using the METIS library, and distributes the subdomains to Worker 1 ... Worker n; each worker holds the user-supplied application-specific part, and the workers communicate through the MPI-based communication library.)
The user-dependent data has to be defined by subclassing the framework container component UserData. Subclassing is the second mechanism used to extend the framework.
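To make the two extension mechanisms concrete, the sketch below shows how a user might plug an application into such a framework. The class names follow the components named in the paper (UserAlg, UserData, Subdomain, Update), but the member functions, signatures and stub implementations are assumptions introduced here for illustration only, not the framework's actual interfaces.

// Minimal, self-contained sketch (assumed interfaces, not the actual framework).
#include <cstdio>
#include <vector>

struct Subdomain {                      // passive view of the local mesh data
    int GetNve() const { return nve; }  // e.g. number of local vertices/elements
    int nve = 0;
};

struct UserData {                       // framework container for dependent data
    virtual ~UserData() {}
    virtual void Pack(std::vector<double>& buf) const = 0;   // used by the framework
    virtual void Unpack(const std::vector<double>& buf) = 0; // to communicate updates
};

// Placeholder for the framework's generic communication routine.
void Update(UserData& /*data*/, Subdomain& /*dom*/) { /* exchange boundary values */ }

// Extension by subclassing: a dependent data item (e.g. the right-hand side B).
struct VectorData : UserData {
    std::vector<double> values;
    void Pack(std::vector<double>& buf) const override { buf = values; }
    void Unpack(const std::vector<double>& buf) override { values = buf; }
};

struct UserAlg {                        // "the hook" for the user's control flow
    virtual ~UserAlg() {}
    virtual void Main(Subdomain& dom) = 0;
};

// Extension by composition and subclassing: the user's solver.
struct PoissonSolver : UserAlg {
    VectorData B;                        // dependent data: wrapped in UserData
    std::vector<double> A, C;            // independent data: plain containers
    void Main(Subdomain& dom) override {
        B.values.assign(dom.GetNve(), 0.0);
        // ... local, sequential-looking computation on the subdomain ...
        Update(B, dom);                  // framework makes B globally consistent
    }
};

int main() {
    Subdomain dom; dom.nve = 8;
    PoissonSolver solver;
    solver.Main(dom);
    std::printf("local B size: %zu\n", solver.B.values.size());
}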
The user has access to the geometrical data for a subdomain through the interface of the Subdomain component. The user algorithm component is parameterized with the Subdomain component. The Subdomain is created by the framework, such that the user has a contiguous, consistent view of his/her data. The user writes his/her application as for a single Subdomain. In an SPMD fashion, the framework instantiates a number of workers which replicate the user-provided computation for all the subdomains. A worker process is modeled by a Worker component, which is not visible to the user. The component receives its work from a master process, in a Master/workers fashion. The Master component reads in the input data as provided by the user (the discretized physical domain) and breaks it down into smaller partitions that it sends to the workers. Based on its local data provided by the Subdomain component, a worker sets up the communication patterns in cooperation with the other workers. The generic function Update uses the communication patterns for a Subdomain and the container component UserData to automatically generate communication every time the user calls the function, after updating a user-dependent data item.
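The following sketch suggests what the generic Update operation could look like internally: for each communication pattern, the affected entries of the user container are packed, exchanged with the neighbouring worker, and unpacked. The CommPattern layout, the template signature and the direct use of MPI are assumptions for illustration; in the framework, Update takes the UserData container and the Subdomain and derives the patterns from it, and communication goes through the OOMPI library.

// Sketch of a generic Update: exchange boundary entries of a user container
// according to precomputed communication patterns (assumed data layout).
#include <mpi.h>
#include <vector>

struct CommPattern {                 // one pattern per neighbouring subdomain
    int neighbour;                   // MPI rank of the neighbour
    std::vector<int> sendIdx;        // local indices to send
    std::vector<int> recvIdx;        // local indices to overwrite
};

template <class Container>           // Container must support operator[] on doubles
void Update(Container& data, const std::vector<CommPattern>& patterns) {
    for (const CommPattern& p : patterns) {
        std::vector<double> sendBuf(p.sendIdx.size()), recvBuf(p.recvIdx.size());
        for (size_t k = 0; k < p.sendIdx.size(); ++k) sendBuf[k] = data[p.sendIdx[k]]; // pack
        MPI_Sendrecv(sendBuf.data(), (int)sendBuf.size(), MPI_DOUBLE, p.neighbour, 0,
                     recvBuf.data(), (int)recvBuf.size(), MPI_DOUBLE, p.neighbour, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        for (size_t k = 0; k < p.recvIdx.size(); ++k) data[p.recvIdx[k]] = recvBuf[k];  // unpack
    }
}

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    std::vector<double> B(100, 1.0);            // a user data item on this subdomain
    std::vector<CommPattern> patterns;          // normally set up by the Worker
    Update(B, patterns);                        // no neighbours here: a no-op
    MPI_Finalize();
}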
The component-based design we have chosen for our framework is due to our constructional approach, i.e. we construct part of the solution process. The generative approach would be to analyze an existing solution and generate a new one (e.g. compiler-driven parallelization). Compiler techniques (data flow analysis, dynamic analysis) are limited to regular applications that use simple data layouts. Frameworks best fulfill our need for the user to be able to plug in his own data representations and algorithms, i.e. to provide the remaining part of the solution process.
The benefits of our architectural choice range from the ability to "automate" the parallelization process for a general class of applications, which has never been achieved before (general, irregular applications), to the data locality and communication optimization that the data encapsulation concept naturally provides. Intensively researched efficiency issues such as data locality and communication optimization come for free, or almost: they come at the cost of whatever makes object-oriented languages slow, namely the abstraction penalty, the dynamic binding penalty and the inheritance penalty. Last, but not least, the (object-oriented) framework has been the most effective route to reuse (Parsons, Rashid, Speck & Telea 1999).
Genericity is an important aspect of our design. Because the parallel structure of the numerical applications we refer to can be expressed independently of the representation, the concurrency infrastructure is based on the concept of generic programming. We use generic programming to be able to automatically generate communication for user data. We use containers as placeholders for user data defined later, so that the framework is able to pack/unpack and send/receive the data. This solution enables us to free the user from any distributed computing aspects, from data distribution, and data coherence and consistency, to data communication.
The concurrency model we use is the hybrid master/workers and SPMD model. SPMD is the widely used concurrency model for numerical applications, since they are data parallel. We use a special process, a master, to evenly divide the data and send the work to the worker processes. This way, the workers have approximately the same amount of work, and the computation is balanced.
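The hybrid master/workers and SPMD control flow can be pictured with a skeleton like the following. The message tags and contents are placeholders; in the framework, the partitioning is delegated to METIS and the distribution of subdomains is richer than the integer lists shown here.

// Skeleton of the master/workers control flow (illustrative only).
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                                   // master
        // read the mesh, partition it into (size - 1) subdomains (e.g. via METIS),
        // then ship one partition to each worker
        for (int w = 1; w < size; ++w) {
            std::vector<int> partition;                // element ids of subdomain w
            int n = (int)partition.size();
            MPI_Send(&n, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
            MPI_Send(partition.data(), n, MPI_INT, w, 1, MPI_COMM_WORLD);
        }
    } else {                                           // workers: SPMD from here on
        int n = 0;
        MPI_Recv(&n, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::vector<int> partition(n);
        MPI_Recv(partition.data(), n, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        // build the local Subdomain, set up communication patterns,
        // then run the replicated user algorithm (UserAlg::Main)
    }
    MPI_Finalize();
}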
The validation of the framework has two aspects. At the usability level, our system is open source and will be made available to researchers in the application domain area to experiment with.
Figure 3: A comparison of two sequential programming models. (The figure contrasts a classical sequential FEM Poisson solver for a tetrahedral mesh, whose main() calls ComputeA, ComputeB, ComputeC and ComputePressure, with the same code enriched with our system functionality: the dependent data items B and P specialize the UserData container, the local data is seen through a Subdomain (loc_data), and Update(B, loc_data) and Update(P, loc_data) are called after each update of a dependent data item; the independent items A and C are left unchanged.)
Figure 4: The infrastructure of the framework. (The UML diagram shows the user-supplied components UserMain, subclassing UserAlg with its virtual void Main(), and UserDefined, subclassing UserData; the user-visible components UserAlg, UserData and Subdomain, together with the generic function template void Update(UserData&, Subdomain&); the framework internals Master, Worker, CommPattern, MeshStruct and IntBoundary; and the external packages Metis and OOMPI used internally by the framework.)
We are implementing a test suite to test the functionality of our framework and for documentation purposes as well.
From the efficiency point of view, we are interested in the application speed-up. We intend to run the framework on clusters of PCs or NOWs (Networks of Workstations) running the Linux operating system. Therefore we will measure the application speed-up by running fixed-size problems on an increasing number of processors. Then we will measure for different problem sizes as well.
5 Conclusion and Future Work
In this paper we have presented a new approach towards the parallelization of scientific codes, i.e. a constructional approach. In contrast to the generative approach (e.g. compiler-driven parallelization), we construct part of the solution, instead of generating a new solution based on an existing one. We use a component-based architecture in order to allow the user to build on our concurrency infrastructure. With our approach, we get closer to the ideal goal of the user not having to deal with concurrency at all, without restricting the generality of the application class. Therefore, we are able to handle the distributed solution of PDEs for general geometries (meshes).
Given that efficiency is an important constraint for the class of applications we address, we show how a "truly distributed" component model can alleviate the efficiency problems of object-oriented middleware ((Java RMI n.d.), (CORBA Success Stories n.d.), (Distributed Component Object Model (DCOM) n.d.)). The paper attempts to explore the appropriateness of objects in conjunction with concurrency (a much desired association (Meyer 1993)) in the context of high performance computing. High performance scientific computing is known as a community traditionally reluctant to adopt object-oriented techniques because of the poor performance of implementations of object-oriented languages and systems.
In the future, we will work more towards bringing evidence that our approach is scalable. We intended our system architecture for a cheap, flexible distributed computing platform, consisting of clusters of Linux PCs or NOWs. With a scalable approach, a potentially "unlimited" number of computers can be used for gaining computational power.
--R
An overview of a compiler for scalable parallel machines
and Sorensen
LAPACK: A portable linear algebra library for high-performance computers
Optimizations for parallel object-oriented frameworks
Algorithmic skeletons for adaptive multigrid methods
Distributed Component Object Model (DCOM) (n.d.)
UML Distilled: A Brief Guide to the Standard Object Modeling Language
How frameworks compare to other object-oriented reuse techniques
Computational Partial Differential Equations
Systematic concurrent object-oriented programming
"framework"
The transition of numerical software: From nuts-and-bolts to abstractions
--TR
An extended set of FORTRAN basic linear algebra subprograms
Systematic concurrent object-oriented programming
Object-oriented runtime support for complex distributed data structures
A flexible operation execution model for shared distributed objects
Models and languages for parallel computation
The C++ Programming Language
Computational Partial Differential Equations
Distributed Memory Compiler Design For Sparse Problems
An Overview of a Compiler for Scalable Parallel Machines
Algorithmic Skeletons for Adaptive Multigrid Methods
Run-Time Techniques for Parallelizing Sparse Matrix Problems
A "Framework" for Object Oriented Frameworks Design | generic programming;concurrency;scientific applications;distributed components |
564387 | Two-stage language models for information retrieval. | The optimal settings of retrieval parameters often depend on both the document collection and the query, and are usually found through empirical tuning. In this paper, we propose a family of two-stage language models for information retrieval that explicitly captures the different influences of the query and document collection on the optimal settings of retrieval parameters. As a special case, we present a two-stage smoothing method that allows us to estimate the smoothing parameters completely automatically. In the first stage, the document language model is smoothed using a Dirichlet prior with the collection language model as the reference model. In the second stage, the smoothed document language model is further interpolated with a query background language model. We propose a leave-one-out method for estimating the Dirichlet parameter of the first stage, and the use of document mixture models for estimating the interpolation parameter of the second stage. Evaluation on five different databases and four types of queries indicates that the two-stage smoothing method with the proposed parameter estimation methods consistently gives retrieval performance that is close to---or better than---the best results achieved using a single smoothing method and exhaustive parameter search on the test data. | INTRODUCTION
It is well-known that the optimal settings of retrieval parameters
generally depend on both the document collection and the query.
For example, specialized term weighting for short queries was studied
in [3]. Salton and Buckley studied many different term weighting
methods used in the vector-space retrieval model; their recommended
methods strongly depend on the type of the query and the
characteristics of the document collection [13]. It has been a great
challenge to find the optimal settings of retrieval parameters automatically
and adaptively according to the characteristics of the
collection and queries, and empirical parameter tuning seems to be
inevitable in order to achieve good retrieval performance. This is
evident in the large number of parameter-tuning experiments reported
in virtually every paper published in the TREC proceedings
[15].
The need for empirical parameter tuning is due in part to the
fact that most existing retrieval models are based on certain pre-
assumed representation of queries and documents, rather than on
a direct modeling of the queries and documents. As a result, the
"adaptability" of the model is restricted by the particular representation
assumed, and reserving free parameters for tuning becomes a
way to accommodate any difference among queries and documents
that has not been captured well in the representation. In order to
be able to set parameters automatically, it is necessary to model
queries and documents directly. This goal has been explored recently
in the language modeling approach to information retrieval,
which has attracted significant attention since it was first proposed
in [9].
The first uses of the language modeling approach focused on its
empirical effectiveness using simple models [9, 7, 2, 1]. Recent
work has begun to develop more sophisticated models and a systematic
framework for this new family of retrieval methods. In [4],
a risk minimization retrieval framework is proposed that incorporates
language modeling as natural components, and that unifies
several existing retrieval models in a framework based on Bayesian
decision theory. One important advantage of the risk minimization
retrieval framework over the traditional models is its capability of
modeling both queries and documents directly through statistical
language models, which provides a basis for exploiting statistical
estimation methods to set retrieval parameters automatically. Several
special language models are explored in [6, 4, 16], and in all
uses of language modeling in IR, smoothing plays a crucial role.
The empirical study in [17] reveals that not only is retrieval performance
generally sensitive to the setting of smoothing parameters,
but also that this sensitivity depends on the type of queries that are
input to the system.
In this paper, we propose a family of language models for information
retrieval that we refer to as two-stage models. The first stage
involves the estimation of a document language model independent
of the query, while the second stage involves the computation of
the likelihood of the query according to a query language model,
which is based on the estimated document language model. Thus,
the two-stage strategy explicitly captures the different influences of
the query and document collection on the optimal settings of retrieval
parameters.
We derive the two-stage models within the general risk minimization
retrieval framework, and present a special case that leads
to a two-stage smoothing method. In the first stage of smoothing,
the document language model is smoothed using a Dirichlet prior
with the collection language model as the reference model. In the
second stage, the smoothed document language model is further
interpolated with a query background language model. We propose
a leave-one-out method for estimating the first-stage Dirichlet
parameter and make use of a mixture model for estimating the
second-stage interpolation parameter. Evaluation on five different
databases and four types of queries indicates that the two-stage
smoothing method with the proposed parameter estimation method, which is fully automatic, consistently gives retrieval performance
that is close to, or better than, the result of using a single smoothing
method and exhaustive parameter search on the test data.
The proposed two-stage smoothing method represents a step toward
the goal of setting database-specific and query-specific retrieval
parameters fully automatically, without the need for tedious
experimentation. The effectiveness and robustness of the approach,
along with the fact that there is no ad hoc parameter tuning in-
volved, make it very useful as a solid baseline method for the evaluation
of retrieval models.
The rest of the paper is organized as follows. We first derive the
two-stage language models in Section 2, and present the two-stage
smoothing method as a special case in Section 3. We then describe,
in Section 4, methods for estimating the two parameters involved
in the two-stage smoothing method. We report our experimental
results in Section 5. Section 6 presents conclusions and suggestions
for future work.
2. TWO-STAGE LANGUAGE MODELS
2.1 The risk minimization framework
The risk minimization retrieval framework is a general probabilistic
retrieval framework based on Bayesian decision theory [4].
In this framework, queries and documents are modeled using statistical
language models, user preferences are modeled through loss
functions, and retrieval is cast as a risk minimization problem. The
framework unifies several existing retrieval models within one general
probabilistic framework, and facilitates the development of
new principled approaches to text retrieval.
In traditional retrieval models, such as the vector-space model [12]
and the BM25 retrieval model [11], the retrieval parameters have
almost always been introduced heuristically. The lack of a direct
modeling of queries and documents makes it hard for these models
to incorporate, in a principled way, parameters that adequately address
special characteristics of queries and documents. For exam-
ple, the vector-space model assumes that a query and a document
are both represented by a term vector. However, the mapping from
a query or a document to such a vector can be somewhat arbitrary.
Thus, because the model "sees" a document through its vector rep-
resentation, there is no principled way to model the length of a doc-
ument. As a result, heuristic parameters must be used (see, e.g., the
pivot length normalization method [14]). Similarly, in the BM25
retrieval formula, there is no direct modeling of queries, making
it necessary to introduce heuristic parameters to incorporate query
term frequencies [11].
One important advantage of the risk minimization retrieval frame-work
[4] over these traditional models is its capability of modeling
both queries and documents directly through statistical language
modeling. Although a query and a document are similar in the
sense that they are both text, they do have important differences.
For example, queries are much shorter and often contain just a few
keywords. Thus, from the viewpoint of language modeling, a query
and a document require different language models. Practically, separating
a query model from a document model has the important
advantage of being able to introduce different retrieval parameters
for queries and documents when appropriate. In general, using statistical
language models allows us to introduce all parameters in a
probabilistic way, and also makes it possible to set the parameters
automatically through statistical estimation methods.
2.2 Derivation of two-stage language models
The original language modeling approach as proposed in [9] involves
a two-step scoring procedure: (1) Estimate a document language
model for each document; (2) Compute the query likelihood
using the estimated document language model (directly). The two-stage
language modeling approach is a generalization of this two-step
procedure, in which a query language model is introduced so
that the query likelihood is computed using a query model that is
based on the estimated document model, instead of using the estimated
document model directly. The use of an explicit and separate
query model makes it possible to factor out any influence of queries
on the smoothing parameters for document language models.
We now derive the family of two-stage language models for information
retrieval formally using the risk minimization framework.
In the risk minimization framework presented in [4], documents
are ranked based on the following risk function:
R(d; q) = ∫_{Θ_Q} ∫_{Θ_D} L(θ_Q, θ_D) p(θ_Q | q, U) p(θ_D | d, S) dθ_D dθ_Q
Let us now consider the following special loss function, indexed by a small constant ε:
L(θ_Q, θ_D) = 0 if Δ(θ_Q, θ_D) ≤ ε, and c otherwise,
where Δ : Θ_Q × Θ_D → R is a model distance function, and c is a constant positive cost. Thus, the loss is zero when the query model and the document model are close to each other, and is c otherwise. Using this loss function, we obtain the following risk:
R(d; q) = c − c ∫_{Θ_D} p(θ_D | d, S) ∫_{S_ε(θ_D)} p(θ_Q | q, U) dθ_Q dθ_D,
where S_ε(θ_D) is the sphere of radius ε centered at θ_D in the parameter space.
Now, assuming that p(θ_D | d, S) is concentrated on an estimated value θ̂_D, we can approximate the value of the integral over θ_D by the integrand's value at θ̂_D. Note that the constant c can be ignored for the purpose of ranking. Thus, writing A ≍ B to mean that A and B have the same effect for ranking, we have that
R(d; q) ≍ − ∫_{S_ε(θ̂_D)} p(θ_Q | q, U) dθ_Q.
When θ_Q and θ_D belong to the same parameter space (i.e., Θ_Q = Θ_D) and ε is very small, the value of the integral can be approximated by the value of the integrand at θ̂_D times a constant (the volume of S_ε(θ̂_D)), and the constant can again be ignored for the purpose of ranking. That is,
R(d; q) ≍ − p(θ̂_D | q, U).
Therefore, using this risk we will actually be ranking documents according to p(θ̂_D | q, U), i.e., the posterior probability that the user used the estimated document model as the query model. Applying Bayes' formula, we can rewrite this as
p(θ̂_D | q, U) ∝ p(q | θ̂_D, U) p(θ̂_D | U).    (1)
Equation 1 is our basic two-stage language model retrieval formula. Similar to the model discussed in [1], this formula has the following interpretation: p(q | θ̂_D, U) captures how well the estimated document model θ̂_D explains the query, whereas p(θ̂_D | U) encodes our prior belief that the user would use θ̂_D as the query model. While this prior could be exploited to model different document sources or other document characteristics, in this paper we assume a uniform prior.
The generic two-stage language model can be refined by specifying a concrete model p(d | θ_D, S) for generating documents and a concrete model p(q | θ_Q, U) for generating queries; different specifications
lead to different retrieval formulas. If the query generation
model is the simplest unigram language model, we have the
scoring procedure of the original language modeling approach proposed
in [9]; that is, we first estimate a document language model
and then compute the query likelihood using the estimated model.
In the next section, we present the generative models that lead to
the two-stage smoothing method suggested in [17].
3. THE TWO-STAGE SMOOTHING METHOD
Let q = q_1 q_2 . . . q_n denote a query, and let V denote the set of words in the vocabulary. We consider the case where both θ_Q and θ_D are parameters of unigram language models, i.e., multinomial distributions over words in V.
The simplest generative model of a document is just the unigram language model θ_D, a multinomial. That is, a document would be generated by sampling words independently according to p(· | θ_D), or
p(d | θ_D, S) = ∏_{w ∈ V} p(w | θ_D)^{c(w,d)},
where c(w, d) is the count of word w in document d.
Each document is assumed to be generated from a potentially different model, as assumed in the general risk minimization framework. Given a particular document d, we want to estimate θ_D. We use a Dirichlet prior on θ_D with parameters α = (α_1, α_2, . . . , α_|V|), given by
p(θ_D | α) = ( Γ(Σ_{i=1}^{|V|} α_i) / ∏_{i=1}^{|V|} Γ(α_i) ) ∏_{i=1}^{|V|} p(w_i | θ_D)^{α_i − 1}.
The parameters α_i are chosen to be α_i = μ p(w_i | S), where μ is a parameter and p(· | S) is the "collection language model," which can be estimated based on a set of documents from a source S. The posterior distribution of θ_D is given by
p(θ_D | d, S) ∝ ∏_{w ∈ V} p(w | θ_D)^{c(w,d) + μ p(w | S) − 1},
and so is also Dirichlet, with parameters α_i = c(w_i, d) + μ p(w_i | S). Using the fact that the Dirichlet mean is α_j / Σ_k α_k, we have that
p_μ(w | θ̂_D) = ∫ p(w | θ_D) p(θ_D | d, S) dθ_D = ( c(w, d) + μ p(w | S) ) / ( |d| + μ ),
where |d| = Σ_{w ∈ V} c(w, d) is the length of d. This is the Dirichlet prior smoothing method described in [17].
We now consider the query generation model. The simplest model is again the unigram language model θ_Q, which will result in a retrieval model with the Dirichlet prior as the single smoothing method. However, as observed in [17], such a model will not be able to explain the interactions between smoothing and the type of queries. In order to capture the common and non-discriminative words in a query, we assume that a query q = q_1 q_2 . . . q_n is generated by sampling words from a two-component mixture of multinomials, with one component being θ_Q and the other some query background language model p(· | U). That is,
p(q | θ_Q, λ, U) = ∏_{i=1}^{n} [ (1 − λ) p(q_i | θ_Q) + λ p(q_i | U) ],
where λ is a parameter, roughly indicating the amount of "noise" in q.
Combining our estimate of θ_D with this query model, we have the following retrieval scoring formula for document d and query q:
p(q | θ̂_D, λ, U) = ∏_{i=1}^{n} [ (1 − λ) p_μ(q_i | θ̂_D) + λ p(q_i | U) ] = ∏_{i=1}^{n} [ (1 − λ) ( c(q_i, d) + μ p(q_i | S) ) / ( |d| + μ ) + λ p(q_i | U) ].
In this formula, the document language model is effectively smoothed
in two steps. First, it is smoothed with a Dirichlet prior, and second,
it is interpolated with a query background model. Thus, we refer to
this as two-stage smoothing.
The above model has been empirically motivated by the observation
that smoothing plays two different roles in the query likelihood
retrieval method. One role is to improve the maximum likelihood
estimate of the document language model, at the very least assigning
non-zero probabilities to words that are not observed in the
document. The other role is to "explain away" the common/non-
discriminative words in the query, so that the documents will be
discriminated primarily based on their predictions of the "topical"
words in the query. The two-stage smoothing method explicitly de-couples
these two roles. The first stage uses Dirichlet prior smoothing
method to improve the estimate of a document language model;
this method normalizes documents of different lengths appropriately
with a prior sample size parameter, and performs well empirically
[17]. The second stage is intended to bring in a query background
language model to explicitly accommodate the generation
of common words in queries.
The query background model p(· | U) is in general different from the collection language model p(· | S). With insufficient data to estimate p(· | U), however, we can assume that p(· | S) would be a reasonable approximation of p(· | U). In this form, the two-stage smoothing method is essentially a combination of Dirichlet prior smoothing with Jelinek-Mercer smoothing [17]. Indeed, it is very easy to verify that when λ = 0 we end up with just the Dirichlet prior smoothing, whereas when μ = 0 we end up with Jelinek-Mercer smoothing. Since the combined smoothing formula still follows the general smoothing scheme discussed in [17], it can be implemented very efficiently. In the next section, we present methods for estimating μ and λ from data.
Collection | avg. doc length | max. doc length | vocab. size | μ
Table 1: Estimated values of μ along with database characteristics.
4. PARAMETER ESTIMATION
4.1 Estimating μ
The purpose of the Dirichlet prior smoothing at the first stage is
to address the estimation bias due to the fact that a document is an
extremely small amount of data with which to estimate a unigram
language model. More specifically, it is to discount the maximum
likelihood estimate appropriately and assign non-zero probabilities
to words not observed in a document; this is the usual role of language
model smoothing. A useful objective function for estimating
smoothing parameters is the "leave-one-out" likelihood, that is, the
sum of the log-likelihoods of each word in the observed data computed
in terms of a model constructed based on the data with the
target word excluded ("left out"). This criterion is essentially based
on cross-validation, and has been used to derive several well-known
smoothing methods including the Good-Turing method [8].
Formally, let C = {d_1, d_2, . . . , d_N} be the collection of documents. Using our Dirichlet smoothing formula, the leave-one-out log-likelihood can be written as
ℓ_{-1}(μ | C) = Σ_{i=1}^{N} Σ_{w ∈ V} c(w, d_i) log( ( c(w, d_i) − 1 + μ p(w | C) ) / ( |d_i| − 1 + μ ) ).
Thus, our estimate of μ is
μ̂ = arg max_μ ℓ_{-1}(μ | C),
which can be easily computed using Newton's method. The update formula is
μ^(k+1) = μ^(k) − g(μ^(k)) / g'(μ^(k)),
where the first derivative g and second derivative g' of ℓ_{-1} are given by
g(μ) = Σ_{i=1}^{N} Σ_{w ∈ V} c(w, d_i) [ p(w | C) / ( c(w, d_i) − 1 + μ p(w | C) ) − 1 / ( |d_i| − 1 + μ ) ]
and
g'(μ) = Σ_{i=1}^{N} Σ_{w ∈ V} c(w, d_i) [ 1 / ( |d_i| − 1 + μ )^2 − p(w | C)^2 / ( c(w, d_i) − 1 + μ p(w | C) )^2 ].
As long as g' < 0, the solution will be a global maximum. In our experiments, starting from the value 1.0, the algorithm always converges.
The estimated values of μ for three databases are shown in Table 1. There is no clear correlation between the database characteristics shown in the table and the estimated value of μ.
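A sketch of the leave-one-out estimation of μ by Newton's method described above is given below. The per-document term-count representation and the convergence safeguards are our own choices; the derivative expressions follow the reconstruction above.

// Estimate the Dirichlet prior parameter mu by maximizing the leave-one-out
// log-likelihood with Newton's method (illustrative sketch).
#include <cmath>
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

using TermCounts = std::unordered_map<std::string, double>;

double EstimateMu(const std::vector<TermCounts>& docs,
                  const std::unordered_map<std::string, double>& pC) {  // p(w|C)
    double mu = 1.0;                                                    // starting value, as in the paper
    for (int iter = 0; iter < 50; ++iter) {
        double g = 0.0, gPrime = 0.0;
        for (const TermCounts& d : docs) {
            double len = 0.0;
            for (const auto& [w, c] : d) len += c;
            for (const auto& [w, c] : d) {
                double p = pC.at(w);
                double a = c - 1.0 + mu * p;       // leave-one-out numerator
                double b = len - 1.0 + mu;         // leave-one-out denominator
                g      += c * (p / a - 1.0 / b);   // first derivative of the objective
                gPrime += c * (1.0 / (b * b) - (p * p) / (a * a));  // second derivative
            }
        }
        if (std::fabs(gPrime) < 1e-12) break;
        double next = mu - g / gPrime;             // Newton update
        if (next <= 0.0) next = mu / 2.0;          // keep mu positive
        if (std::fabs(next - mu) < 1e-6) { mu = next; break; }
        mu = next;
    }
    return mu;
}

int main() {
    std::vector<TermCounts> docs = {{{"a", 3}, {"b", 1}}, {{"a", 1}, {"c", 2}}};
    std::unordered_map<std::string, double> pC = {{"a", 0.5}, {"b", 0.2}, {"c", 0.3}};
    std::printf("estimated mu = %f\n", EstimateMu(docs, pC));
}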
4.2 Estimating λ
With the query model hidden, the query likelihood is
p(q | λ, U) = ∫_{Θ_Q} p(θ_Q) ∏_{j=1}^{n} [ (1 − λ) p(q_j | θ_Q) + λ p(q_j | U) ] dθ_Q.
In order to estimate λ, we approximate the query model space by the set of all N estimated document language models in our collection. That is, we will approximate the integral with a sum over all the possible document language models estimated on the collection, or
p(q | λ, U) ≈ Σ_{i=1}^{N} π_i ∏_{j=1}^{n} [ (1 − λ) p_μ(q_j | θ̂_{d_i}) + λ p(q_j | U) ],
where p_μ(· | θ̂_{d_i}) is the smoothed unigram language model estimated based on document d_i using the Dirichlet prior approach.
Thus, we assume that the query is generated from a mixture of N document models with unknown mixing weights {π_i}_{i=1}^{N}. Leaving the mixing weights free is intended: what we really want is not to maximize the likelihood of generating the query from every document in the collection; instead, we want to find a λ that can maximize the likelihood of the query given relevant documents. With a maximum likelihood estimate, we would indeed allocate higher weights to documents that predict the query well in our likelihood function; presumably, these documents are also more likely to be relevant.
With this likelihood function, the parameters λ and {π_i}_{i=1}^{N} can be estimated using the EM algorithm. The update formulas are
π_i^(k+1) = ( π_i^(k) ∏_{j=1}^{n} [ (1 − λ^(k)) p_μ(q_j | θ̂_{d_i}) + λ^(k) p(q_j | U) ] ) / ( Σ_{i'=1}^{N} π_{i'}^(k) ∏_{j=1}^{n} [ (1 − λ^(k)) p_μ(q_j | θ̂_{d_{i'}}) + λ^(k) p(q_j | U) ] )
and
λ^(k+1) = (1/n) Σ_{i=1}^{N} π_i^(k+1) Σ_{j=1}^{n} ( λ^(k) p(q_j | U) / [ (1 − λ^(k)) p_μ(q_j | θ̂_{d_i}) + λ^(k) p(q_j | U) ] ).
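The following sketch illustrates one such iteration for λ and the mixing weights. The E- and M-steps shown follow the standard EM derivation for this document-mixture model and are meant only as an illustration; the paper's exact update formulas may be organized differently, and no numerical safeguards (e.g., log-space products for long queries) are included.

// One EM pass for estimating lambda (and the mixing weights pi) from a query,
// given smoothed document models p_mu(w|d). Illustrative sketch only.
#include <cmath>
#include <cstdio>
#include <vector>

// pDoc[i][j] = p_mu(q_j | d_i), pBg[j] = p(q_j | U), pi[i] = mixing weight of d_i.
double EmStep(const std::vector<std::vector<double>>& pDoc,
              const std::vector<double>& pBg,
              std::vector<double>& pi, double lambda) {
    size_t N = pDoc.size(), n = pBg.size();
    std::vector<double> resp(N);                     // document responsibilities
    double norm = 0.0;
    for (size_t i = 0; i < N; ++i) {
        double lik = pi[i];
        for (size_t j = 0; j < n; ++j)
            lik *= (1.0 - lambda) * pDoc[i][j] + lambda * pBg[j];
        resp[i] = lik;
        norm += lik;
    }
    double newLambda = 0.0;
    for (size_t i = 0; i < N; ++i) {
        resp[i] /= norm;                             // E-step: p(d_i generated q)
        double bg = 0.0;
        for (size_t j = 0; j < n; ++j) {             // expected # of background words
            double mix = (1.0 - lambda) * pDoc[i][j] + lambda * pBg[j];
            bg += lambda * pBg[j] / mix;
        }
        newLambda += resp[i] * bg / n;               // M-step contribution to lambda
        pi[i] = resp[i];                             // M-step: update mixing weights
    }
    return newLambda;                                // iterate until convergence
}

int main() {
    std::vector<std::vector<double>> pDoc = {{0.02, 0.001}, {0.005, 0.01}};
    std::vector<double> pBg = {0.003, 0.002};
    std::vector<double> pi = {0.5, 0.5};
    double lambda = 0.5;
    for (int k = 0; k < 20; ++k) lambda = EmStep(pDoc, pBg, pi, lambda);
    std::printf("estimated lambda = %f\n", lambda);
}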
5. EXPERIMENTS
In this section we first present experimental results that confirm
the dual-role of smoothing, which provides an empirical justification
for using the two-stage smoothing method for retrieval.
We then present results of the two-stage smoothing method using
the estimated parameters, comparing it to the optimal performance
from using single smoothing methods and an exhaustive parameter
search.
5.1 Influence of Query Length and
Verbosity on Smoothing
In [17], strong interactions between smoothing and the type of
queries have been observed. However, it is unclear whether the
high sensitivity observed on long queries is due to a higher density
of common words in such queries, or to just the length. The
two-stage smoothing method assumes that it is the former. In order
to clarify this, we design experiments to examine two query
factors: length and "verbosity." Specifically, we consider four different
types of queries, i.e., short keyword, long keyword, short ver-
bose, and long verbose queries, and compare how they each behave
with respect to smoothing. As we will show, the high sensitivity is
indeed caused by the presence of common words in the query, and
this provides an empirical justification for the two-stage smoothing
method.
We generate the four types of queries from TREC topics 1-150.
These 150 topics are special because, unlike other TREC topics,
they all have a "concept" field, which contains a list of keywords
related to the topic; these keywords serve well as the "long key-
word" version of our queries. Figure 1 shows an example of such a
topic (topic 52).
Title: South African Sanctions
Description: Document discusses sanctions against
South Africa.
Narrative:
A relevant document will discuss any aspect of
South African sanctions, such as: sanctions
declared/proposed by a country against the South
African government in response to its apartheid
policy, or in response to pressure by an individual,
organization or another country; international
sanctions against Pretoria imposed by the United
Nations; the effects of sanctions against S. Africa;
opposition to sanctions; or, compliance with
sanctions by a company. The document will identify
the sanctions instituted or being considered, e.g.,
corporate disinvestment, trade ban, academic boycott,
arms embargo.
Concepts:
1. sanctions, international sanctions,
economic sanctions
2. corporate exodus, corporate disinvestment, stock
divestiture, ban on new investment, trade ban,
import ban on South African diamonds, U.N. arms
embargo, curtailment of defense contracts,
cutoff of nonmilitary goods, academic boycott,
reduction of cultural ties
3. apartheid, white domination, racism
4. antiapartheid, black majority rule
5. Pretoria
Figure 1: Example topic, number 52. The keywords are used as the "long keyword" version of our queries.
We use all of the 150 topics, and generate the four versions of
queries in the following way:
1. short keyword: Using only the title of the topic description
(usually a noun phrase) 1
2. short verbose: Using only the description field (usually one
sentence).
3. long keyword: Using the concept field (about 28 keywords
on average).
4. long verbose: Using the title, description and the narrative
field (more than 50 words on average).
Occasionally, a few function words were manually excluded, in order to
make the queries purely keyword-based.
The relevance judgments available for these 150 topics are mostly
on the documents in TREC disk 1 and disk 2. In order to observe
any possible difference in smoothing caused by the types of docu-
ments, we partition the documents in disks 1 and 2 and use the three
largest subsets of documents, accounting for a majority of the relevant
documents for our queries. The three databases are AP88-89,
WSJ87-92, and ZIFF1-2, each about 400MB-500MB in size. The
queries without relevance judgments for a particular database were
ignored for all of the experiments on that database. Four queries do
not have judgments on AP88-89, and 49 queries do not have judgments
on ZIFF1-2. Preprocessing of the documents is minimized;
only a Porter stemmer is used, and no stop words are removed.
Combining the four types of queries with the three databases gives
us a total of 12 different testing collections.
To understand the interaction between different query factors and
smoothing, we examine the sensitivity of retrieval performance to
smoothing on each of the four different types of queries. For both
Jelinek-Mercer and Dirichlet smoothing, on each of our 12 testing
collections we vary the value of the smoothing parameter and
record the retrieval performance at each parameter value. The results
are plotted in Figure 2. In each case, we show how the average
precision varies according to different values of the smoothing parameter.
From these figures, we see that the two types of keyword queries
behave similarly, as do the two types of verbose queries. The retrieval
performance is generally much less sensitive to smoothing
in the case of the keyword queries than for the verbose queries,
whether long or short. Therefore, the sensitivity is much more correlated
with the verbosity of the query than with the length of the
query. Indeed, the short verbose queries are clearly more sensitive
than the long keyword queries. In all cases, insufficient smoothing
is much more harmful for verbose queries than for keyword
queries. This confirms that smoothing is indeed responsible for
"explaining" the common words in a query, and provides an empirical
justification for the two-stage smoothing approach.
We also see a consistent order of performance among the four
types of queries. As expected, long keyword queries are the best
and short verbose queries are the worst. Long verbose queries
are worse than long keyword queries, but better than short key-word
queries, which are better than the short verbose queries. This
appears to suggest that queries with only (presumably good) key-words
tend to perform better than more verbose queries. Also,
longer queries are generally better than short queries.
5.2 Effectiveness of the Two-stage
Smoothing Method
To evaluate the two-stage smoothing method, we first test it on
the same 12 testing collections as described earlier. These collections
represent a very good diversity in the types of queries, but
the databases are all homogeneous and relatively small. In order to
further test the robustness of the two-stage smoothing method, we
then test it on three much bigger and more heterogeneous TREC
collections. These are the official ad hoc retrieval collections used
in TREC-7, TREC-8, and the TREC-8 small web track. The official
TREC-7 and TREC-8 ad hoc tasks have used the same document
database (i.e., TREC disk4 and disk5 excluding the Congressional
Record data), but different topics (topics 351-400 for TREC-7 and
401-450 for TREC-8). The TREC-8 web track and the TREC-8
official ad hoc task share the same 50 topics. Since these topics do
not have a concept field, we have only three types of queries: short-keyword, short-verbose, and long-verbose. The size of these large
collections is about 2GB in the original source. Again, we perform
minimum pre-processing - only a Porter stemmer was used, and no
Figure 2: Sensitivity of Precision for Jelinek-Mercer smoothing (top) and Dirichlet prior smoothing (bottom) on AP88-89 (left), WSJ87-92 (center), and Ziff1-2 (right). (Each panel plots average precision against the smoothing parameter, lambda for Jelinek-Mercer and the prior mu for Dirichlet, for short-keyword, long-keyword, short-verbose, and long-verbose queries.)
stop words were removed.
For each testing collection, we compare the retrieval performance
of the estimated two-stage smoothing parameters with the best results
achievable using a single smoothing method. The best results
of a single smoothing method are obtained through an exhaustive search on its parameter space, so they represent the ideal performance of the smoothing method. In all our experiments, we use the collection
language model to approximate the query background model.
The results are shown in Table 2. The four types of queries are
abbreviated with the two initial letters (e.g., SK for Short-Keyword).
The standard TREC evaluation procedure for ad hoc retrieval is
followed, and we have considered three performance measures -
non-interpolated average precision, initial precision (i.e., precision
at 0.0 recall), and precision at five documents. In all the results, we see that the performance of two-stage smoothing with the estimated parameter values is consistently very close to, or better than, the best performance of a single method by all three measures. Only in a few cases is the difference statistically significant (indicated with an asterisk).
To quantify the sensitivity of the retrieval performance to the
smoothing parameter for single smoothing methods, we also show
(in parentheses) the median average precision at all the parameter
values that are tried 2 . We see that, for Jelinek-Mercer, the sensitivity
is clearly higher on verbose queries than on keyword queries;
the median is usually much lower than the best performance for
verbose queries. This means that it is much harder to tune the λ in Jelinek-Mercer for verbose queries than for keyword queries.
Interestingly, for Dirichlet prior, the median is often just slightly
below the best, even when the queries are verbose. (The worst
cases are significantly lower, though.) From the sensitivity curves
in Figure 2, we see that as long as we set a relatively large value
2 For Jelinek-Mercer, we tried 13 values {0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6,0.7,0.8,0.9,0.95, 0.99}; for Dirichlet prior, we tried 10 values {100, 500,
800, 1000, 2000, 3000, 4000, 5000, 8000, 10000}.
for μ in Dirichlet prior, the performance will not be much worse than the best performance, and there is a great chance that the median is at a large value for μ. This immediately suggests that we can expect to perform reasonably well if we simply set μ to some "safe" large value. However, it is clear from the results in Table 2 that such a simple approach would not perform as well as our parameter estimation methods. Indeed, the two-stage performance is always better than the median, except for three cases of short-keyword queries where it is slightly worse. Since the Dirichlet prior smoothing dominates the two-stage smoothing effect for these short-keyword queries (due to "little noise"), this somehow suggests that the leave-one-out method might have underestimated μ.
Note that, in general, Jelinek-Mercer has not performed as well
as Dirichlet prior in all our experiments. But, in two cases of verbose
queries (Trec8-SV and Trec8-LV on the Trec7/8 database),
it does outperform Dirichlet prior. In these two cases, the two-stage
smoothing method performs either as well as or better than
the Jelinek-Mercer. Thus, the two-stage smoothing performance
appears to always track the best performing single method at its
optimal parameter setting.
The performance of two-stage smoothing does not reflect the
performance of a "full-fledged" language modeling approach, which
would involve more sophisticated feedback models [4, 6, 16]. Thus,
it is really not comparable with the performance of other TREC sys-
tems. Yet some of the performance figures shown here are actually
competitive when compared with the performance of the official
TREC submissions (e.g., the performance on the TREC-8 ad hoc
task and the TREC-8 web track).
These results of the two-stage smoothing method are very en-
couraging, especially because there is no ad hoc parameter tuning
involved in the retrieval process with the approach. Both μ and λ are automatically estimated based on a specific database and query; μ is completely determined by the given database, and λ is deter-
Database Query | Best Jelinek-Mercer: AvgPr (med), InitPr, Pr@5d | Best Dirichlet: AvgPr (med), InitPr, Pr@5d | Two-Stage: AvgPr, InitPr, Pr@5d
Web Trec8-SV | 0.203 (0.191), 0.611, 0.392 | 0.267 (0.249), 0.699, 0.492 | 0.253, 0.680, 0.436
Table 2: Comparison of the estimated two-stage smoothing with the best single-stage smoothing methods on small collections (top) and large collections (bottom). The best number for each measure is shown in boldface. An asterisk (*) indicates that the difference between the two-stage smoothing performance and the best single smoothing performance is statistically significant according to the signed rank test at the level of 0.05.
mined by the database and the query together. The method appears
to be quite robust according to our experiments with all the different
types of queries and different databases.
6. CONCLUSIONS
In this paper we derive general two-stage language models for
information retrieval using the risk minimization retrieval frame-
work, and present a concrete two-stage smoothing method as a
special case. The two-stage smoothing strategy explicitly captures
the different influences of the query and document collection
on the optimal settings of smoothing parameters. In the first stage,
the document language model is smoothed using a Dirichlet prior
with the collection model as the reference model. In the second
stage, the smoothed document language model is further interpolated
with a query background model.
We propose a leave-one-out method for estimating the first-stage
Dirichlet prior parameter and a mixture model for estimating the
second-stage interpolation parameter. These methods allow us to
set the retrieval parameters automatically, yet adaptively according
to different databases and queries. Evaluation on five different
databases and four types of queries indicates that the two-stage
smoothing method with the proposed parameter estimation scheme
consistently gives retrieval performance that is close to, or better
than, the best results attainable using a single smoothing method,
achievable only through an exhaustive parameter search. The effectiveness
and robustness of the two-stage smoothing approach,
along with the fact that there is no ad hoc parameter tuning in-
volved, make it a solid baseline approach for evaluating retrieval
models.
While we have shown that the automatic two-stage smoothing
gives retrieval performance close to the best results attainable using
a single smoothing method, we have not yet analyzed the optimality
of the estimated parameter values in the two-stage parameter space.
For example, it would be important to see the relative optimality of
the estimated μ and λ when fixing one of them. It would also be interesting to explore other estimation methods. For example, μ might be regarded as a hyperparameter in a hierarchical Bayesian approach. For the estimation of the query model parameter λ, it
would be interesting to try different query background models. One
possibility is to estimate the background model based on resources
such as past queries, in addition to the collection of documents. Another
interesting future direction is to exploit the query background
model to address the issue of redundancy in the retrieval results.
Specifically, a biased query background model may be used to rep-
resent/explain the sub-topics that a user has already encountered
(e.g., through reading previously retrieved results), in order to focus
ranking on the new sub-topics in a relevant set of documents.
ACKNOWLEDGEMENTS
We thank Rong Jin, Jamie Callan, and the anonymous reviewers for
helpful comments on this work. This research was sponsored in full
by the Advanced Research and Development Activity in Information
Technology (ARDA) under its Statistical Language Modeling
for Information Retrieval Research Program, contract MDA904-
00-C-2106.
--R
Information retrieval as statistical translation.
A hidden Markov model information retrieval system.
On the estimation of 'small' probabilities by leaving-one-out
A language modeling approach to information retrieval.
Relevance weighting of search terms.
Pivoted document length normalization.
--TR
Term-weighting approaches in automatic text retrieval
Pivoted document length normalization
Improving two-stage ad-hoc retrieval for short queries
A language modeling approach to information retrieval
A hidden Markov model information retrieval system
Information retrieval as statistical translation
A vector space model for automatic indexing
Document language models, query models, and risk minimization for information retrieval
Relevance based language models
A study of smoothing methods for language models applied to Ad Hoc information retrieval
On the Estimation of ''Small'' Probabilities by Leaving-One-Out
--CTR
Mark D. Smucker , James Allan, Lightening the load of document smoothing for better language modeling retrieval, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Donald Metzler, Estimation, sensitivity, and generalization in parameterized retrieval models, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
L. Azzopardi , M. Girolami , C. J. van Rijsbergen, User biased document language modelling, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
James Allan , Courtney Wade , Alvaro Bolivar, Retrieval and novelty detection at the sentence level, Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, July 28-August 01, 2003, Toronto, Canada
Hugo Zaragoza , Djoerd Hiemstra , Michael Tipping, Bayesian extension to the language model for ad hoc information retrieval, Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, July 28-August 01, 2003, Toronto, Canada
M. Winslett , K. Chang , A. Doan , J. Han , C. Zhai , Y. Zhou, Database research at the University of Illinois at Urbana-Champaign, ACM SIGMOD Record, v.31 n.3, September 2002
Rong Jin , Joyce Y. Chai , Luo Si, Learn to weight terms in information retrieval using category information, Proceedings of the 22nd international conference on Machine learning, p.353-360, August 07-11, 2005, Bonn, Germany
Donald Metzler , W. Bruce Croft, Combining the language model and inference network approaches to retrieval, Information Processing and Management: an International Journal, v.40 n.5, p.735-750, September 2004
Xiaohua Zhou , Xiaohua Hu , Xiaodan Zhang , Xia Lin , Il-Yeol Song, Context-sensitive semantic smoothing for the language modeling approach to genomic IR, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
Guihong Cao , Jian-Yun Nie , Jing Bai, Integrating word relationships into language models, Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, August 15-19, 2005, Salvador, Brazil
Tao Tao , ChengXiang Zhai, Mining comparable bilingual text corpora for cross-language information integration, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Seung-Hoon Na , In-Su Kang , Ji-Eun Roh , Jong-Hyeok Lee, An empirical study of query expansion and cluster-based retrieval in language modeling approach, Information Processing and Management: an International Journal, v.43 n.2, p.302-314, March 2007
Paul Ogilvie , Jamie Callan, Combining document representations for known-item search, Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, July 28-August 01, 2003, Toronto, Canada
Jaime Teevan , David R. Karger, Empirical development of an exponential probabilistic model for text retrieval: using textual analysis to build a better model, Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, July 28-August 01, 2003, Toronto, Canada
Xiaoyong Liu , W. Bruce Croft, Cluster-based retrieval using language models, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Jianfeng Gao , Haoliang Qi , Xinsong Xia , Jian-Yun Nie, Linear discriminant model for information retrieval, Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, August 15-19, 2005, Salvador, Brazil
Thomas R. Lynam , Chris Buckley , Charles L. A. Clarke , Gordon V. Cormack, A multi-system analysis of document and term selection for blind feedback, Proceedings of the thirteenth ACM international conference on Information and knowledge management, November 08-13, 2004, Washington, D.C., USA
Jianfeng Gao , Jian-Yun Nie , Guangyuan Wu , Guihong Cao, Dependence language model for information retrieval, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Ying Zhao , Justin Zobel, Searching with style: authorship attribution in classic literature, Proceedings of the thirtieth Australasian conference on Computer science, p.59-68, January 30-February 02, 2007, Ballarat, Victoria, Australia
Jian-Yun Nie , Guihong Cao , Jing Bai, Inferential language models for information retrieval, ACM Transactions on Asian Language Information Processing (TALIP), v.5 n.4, p.296-322, December 2006
Jianfeng Gao , Chin-Yew Lin, Introduction to the special issue on statistical language modeling, ACM Transactions on Asian Language Information Processing (TALIP), v.3 n.2, p.87-93, June 2004
Joyce Y. Chai , Chen Zhang , Rong Jin, An empirical investigation of user term feedback in text-based targeted image search, ACM Transactions on Information Systems (TOIS), v.25 n.1, p.3-es, February 2007
ChengXiang Zhai , John Lafferty, A risk minimization framework for information retrieval, Information Processing and Management: an International Journal, v.42 n.1, p.31-55, January 2006
Wessel Kraaij , Jian-Yun Nie , Michel Simard, Embedding web-based statistical translation models in cross-language information retrieval, Computational Linguistics, v.29 n.3, p.381-419, September | interpolation;risk minimization;two-stage smoothing;parameter estimation;leave-one-out;mixture model;two-stage language models;dirichlet prior |
564395 | Bayesian online classifiers for text classification and filtering. | This paper explores the use of Bayesian online classifiers to classify text documents. Empirical results indicate that these classifiers are comparable with the best text classification systems. Furthermore, the online approach offers the advantage of continuous learning in the batch-adaptive text filtering task. | INTRODUCTION
Faced with massive information everyday, we need automated
means for classifying text documents. Since hand-crafting
text classifiers is a tedious process, machine learning
methods can assist in solving this problem[15, 7, 27].
Yang & Liu[27] provides a comprehensive comparison of
supervised machine learning methods for text classification.
In this paper we will show that certain Bayesian classifiers
are comparable with Support Vector Machines[23], one
of the best methods reported in [27]. In particular, we
will evaluate the Bayesian online perceptron[17, 20] and the
Bayesian online Gaussian process[3].
For text classification and filtering, where the initial training
set is large, online approaches are useful because they
allow continuous learning without storing all the previously
seen data. This continuous learning allows the utilization
of information obtained from subsequent data after the initial
training. Bayes' rule allows online learning to be performed
in a principled way[16, 20, 17]. We will evaluate the
Bayesian online perceptron, together with information gain
considerations, on the batch-adaptive filtering task[18].
2. CLASSIFICATION AND FILTERING
For the text classification task defined by Lewis[9], we
have a set of predefined categories and a set of documents.
For each category, the document set is partitioned into two
mutually exclusive sets of relevant and irrelevant documents.
The goal of a text classification system is to determine whether
a given document belongs to any of the predefined cate-
gories. Since the document can belong to zero, one, or more
categories, the system can be a collection of binary classi-
fiers, in which one classifier classifies for one category.
In Text REtrieval Conference (TREC), the above task is
known as batch filtering. We will consider a variant of batch
filtering called the batch-adaptive filtering[18]. In this task,
during testing, if a document is retrieved by the classifier,
the relevance judgement is fed back to the classifier. This
feedback can be used to improve the classifier.
2.1 Corpora and Data
For text classification, we use the ModApte version of
the Reuters-21578 corpus 1 , where unlabelled documents are
removed. This version has 9,603 training documents and
3,299 test documents. Following [7, 27], only categories that
have at least one document in the training and test set are
retained. This reduces the number of categories to 90.
For batch-adaptive filtering, we attempt the task of TREC-
9[18], where the OHSUMED collection[6] is used. We will
evaluate on the OHSU topic-set, which consists of 63 topics.
The training and test material consist of 54,710 and 293,856
documents respectively. In addition, there is a topic statement
for each topic. For our purpose, this is treated as an
additional training document for that topic. We will only
use the title, abstract, author, and source sections of the
documents for training and testing.
2.2 Representation
There are various ways to transform a document into a
representation convenient for classification. We will use the
Available via http://www.daviddlewis.com/resources/
testcollections/reuters21578.
bag-of-words approach, where we only retain frequencies
of words after tokenisation, stemming, and stop-words re-
moval. These frequencies can be normalized using various
schemes[19, 6]; we use the ltc normalization:
w_{i,d} = l_{i,d} t_i / sqrt( Σ_{j ∈ terms in d} (l_{j,d} t_j)² ),  with  l_{i,d} = 1 + log TF_{i,d}  and  t_i = log(N / n_i),
where the subscripts i and d denote the ith term and the dth document respectively, TF_{i,d} is the frequency of the ith term in the dth document, n_i is the document-frequency of the ith term, and N is the total number of documents.
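As an illustration, the following is a minimal Python sketch of the ltc weighting described above; the function name and the input conventions (a token list plus a document-frequency dictionary) are ours and not part of the paper.

```python
import math
from collections import Counter

def ltc_weights(doc_tokens, doc_freq, num_docs):
    """Compute ltc term weights for one document.

    doc_tokens: tokens of the document (after stemming and stop-word removal).
    doc_freq:   dict mapping term -> number of documents containing it (n_i).
    num_docs:   total number of documents in the collection (N).
    """
    tf = Counter(doc_tokens)
    raw = {}
    for term, freq in tf.items():
        l = 1.0 + math.log(freq)                 # log-tf component l_{i,d}
        t = math.log(num_docs / doc_freq[term])  # idf component t_i
        raw[term] = l * t
    # Cosine normalization of the l*t products.
    norm = math.sqrt(sum(w * w for w in raw.values()))
    return {term: w / norm for term, w in raw.items()} if norm > 0 else raw
```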
2.3 Feature Selection Metric
Given a set of candidate terms, we select features from
the set using the likelihood ratio for binomial distribution
advocated by Dunning[5]:
λ = [ L(p, r, r + R_t) · L(p, n, n + N_t) ] / [ L(p_1, r, r + R_t) · L(p_2, n, n + N_t) ],  where  L(p, k, m) = p^k (1 − p)^{m−k},
r (n) is the number of relevant (non-relevant) training documents which contain the term, R_t (N_t) is the number of relevant (non-relevant) training documents which do not, N is the total number of training documents, p = (r + n)/N, p_1 = r/(r + R_t) and p_2 = n/(n + N_t).
Asymptotically, −2 ln λ is χ² distributed with 1 degree of freedom. We choose terms with −2 ln λ more than 12.13,
selection procedures will be given in section 4.
2.4 Performance Measures
To evaluate a text classification system, we use the F1
measure introduced by van Rijsbergen[22]. This measure
combines recall and precision in the following way:
Recall = number of correct positive predictions / number of positive examples,
Precision = number of correct positive predictions / number of positive predictions,
F1 = 2 · Recall · Precision / (Recall + Precision).
For ease of comparison, we summarize the F1 scores over the different categories using the micro- and macro-averages of F1 scores[11, 27]: the micro-average F1 is computed from the binary decisions pooled over all categories and documents, while the macro-average F1 is the average of the within-category F1 values.
The micro- and macro-average F1 emphasize the performance of the system on common and rare categories respectively. Using these averages, we can observe the effect of different kinds of data on a text classification system.
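A small Python sketch of the micro- and macro-averaged F1, assuming per-category counts of true positives, false positives and false negatives are available; the function names are ours.

```python
def f1(tp, fp, fn):
    """F1 from true positives, false positives and false negatives."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

def micro_macro_f1(per_category_counts):
    """per_category_counts: list of (tp, fp, fn) tuples, one per category."""
    # Micro-average: pool the binary decisions of all categories.
    tp = sum(c[0] for c in per_category_counts)
    fp = sum(c[1] for c in per_category_counts)
    fn = sum(c[2] for c in per_category_counts)
    micro = f1(tp, fp, fn)
    # Macro-average: unweighted mean of the within-category F1 values.
    macro = sum(f1(*c) for c in per_category_counts) / len(per_category_counts)
    return micro, macro
```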
In addition, for comparing two text classification systems,
we use the micro sign-test (s-test) and the macro sign-test
(S-test), which are two significance tests first used for comparing
text classification systems in [27]. The s-test compares
all the binary decisions made by the systems, while
the S-test compares the within-category F1 values. Similar
to the F1 averages, the s-test and S-test compare the
performance of two systems on common and rare categories
respectively.
To evaluate a batch-adaptive filtering system, we use the
T9P measure of TREC-9[18]:
T9P = number of correct positive predictions / max(50, number of positive predictions),
which is precision, with a penalty for not retrieving 50 documents.
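The T9P measure reduces to a one-line computation; the sketch below simply restates the definition.

```python
def t9p(num_correct_retrieved, num_retrieved, target=50):
    """T9P: precision with a penalty for retrieving fewer than `target` documents."""
    return num_correct_retrieved / max(target, num_retrieved)
```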
3. BAYESIAN ONLINE LEARNING
Most of this section is based on work by Opper[17] and Solla & Winther[20].
Suppose that each document is described by a vector x,
and that the relevance indicator of x for a category is given
by label y ∈ {−1, 1}, where −1 and 1 indicate irrelevant and relevant respectively. Given m instances of past data, D_m = {(y_t, x_t), t = 1, ..., m}, the predictive probability of the relevance of a document described by x is
p(y|x, D_m) = ∫ da p(y|x, a) p(a|D_m),
where we have introduced the classifier a to assist us in the
prediction. In the Bayesian approach, a is a random variable
with probability density p(a|Dm ), and we integrate over all
the possible values of a to obtain the prediction.
Our aim is to obtain a reasonable description of a. In
the Bayesian online learning framework[16, 20, 17], we begin
with a prior p(a|D0 ), and perform incremental Bayes'
updates to obtain the posterior as data arrives:
p(a|D_{t+1}) = p(y_{t+1}|x_{t+1}, a) p(a|D_t) / ∫ da p(y_{t+1}|x_{t+1}, a) p(a|D_t).
To make the learning online, the explicit dependence of
the posterior p(a|D t+1) on the past data is removed by approximating
it with a distribution p(a|A t+1 ), where A t+1
characterizes the distribution of a at time t + 1. For exam-
ple, if p(a|A t+1) is a Gaussian, then A t+1 refers to its mean
and covariance.
Hence, starting from the prior p(a|A_0) = p(a|D_0), learning from a new example (y_{t+1}, x_{t+1}) comprises two steps:
1. Update the posterior using Bayes rule;
2. Approximate the updated posterior by parameterisation,
where the approximation step is done by minimizing the Kullback-Leibler distance between the approximating and approximated distributions.
The amount of information gained about a after learning from a new example can be expressed as the Kullback-Leibler distance between the posterior and prior distributions:
IG(y_{t+1}, x_{t+1}|D_t) = ∫ da p(a|D_{t+1}) log_2 [ p(a|D_{t+1}) / p(a|D_t) ]
                        ≈ ∫ da p(a|A_{t+1}) log_2 [ p(a|A_{t+1}) / p(a|A_t) ],
where instances of the data D are replaced by the summaries A in the approximation.
To simplify notation henceforth, we use p_t(a) and ⟨·⟩_t to denote p(a|A_t) and averages taken over p(a|A_t) respectively. For example, the predictive probability can be rewritten as
p(y|x, D_t) ≈ ∫ da p(y|x, a) p_t(a) = ⟨p(y|x, a)⟩_t.
In the following sections, the scalar field h = a · x will be used to simplify notation and calculation.
3.1 Bayesian Online Perceptron
Consider the case where a describes a perceptron. We then
define the likelihood as a probit model
p(y|x, a) = Φ( y a · x / σ_0 ),
where σ_0 is a fixed noise variance, and Φ is the cumulative Gaussian distribution
Φ(u) = ∫_{−∞}^{u} dt (1/√(2π)) e^{−t²/2}.
If p_0(a) is the spherical unit Gaussian, and p_t(a) is the Gaussian approximation with mean ⟨a⟩_t and covariance C_t, Opper[16, 17] and Solla & Winther[20] obtain the following updates by equating the means and covariances of p(a|A_{t+1}) and p(a|A_t, (y_{t+1}, x_{t+1})):
⟨a⟩_{t+1} = ⟨a⟩_t + C_t x_{t+1} ∂/∂⟨h⟩_t ln ⟨p(y_{t+1}|h)⟩_t,
C_{t+1} = C_t + C_t x_{t+1} x_{t+1}^T C_t ∂²/∂⟨h⟩_t² ln ⟨p(y_{t+1}|h)⟩_t,
where ⟨p(y_{t+1}|h)⟩_t = Φ( y_{t+1} ⟨h⟩_t / √(σ_0² + s_{t+1}²) ), ⟨h⟩_t = ⟨a⟩_t · x_{t+1} and s_{t+1}² = x_{t+1}^T C_t x_{t+1}.
3.1.1 Algorithm
Training the Bayesian online perceptron on m data involves
successive calculation of the means ⟨a⟩_t and covariances C_t of the posteriors, for t ∈ {1, ..., m}:
1. Initialize ⟨a⟩_0 to be 0 and C_0 to be 1 (identity matrix), i.e. a spherical unit Gaussian centred at the origin.
2. For t = 0 to m − 1:
3. y_{t+1} is the relevance indicator for document x_{t+1}.
4. Calculate s_{t+1}, σ_{t+1}, ⟨h⟩_t and ⟨p(y_{t+1}|h)⟩_t.
5.-7. Calculate the first and second derivatives of ln ⟨p(y_{t+1}|h)⟩_t with respect to ⟨h⟩_t.
8. Calculate ⟨a⟩_{t+1} and C_{t+1}.
The prediction for datum (y, x) simply involves the calculation of ⟨p(y|x, a)⟩_t.
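The sketch below illustrates the flavour of this training loop, using a standard Gaussian-approximation (assumed-density-filtering) update for the probit likelihood; the exact update expressions in [17, 20] may differ in detail, and the function names are ours.

```python
import numpy as np
from scipy.stats import norm

def bayes_online_perceptron(data, dim, sigma0=1.0):
    """Sketch of the Bayesian online perceptron training loop.

    data: iterable of (x, y) pairs, x an array of length dim, y in {-1, +1}.
    """
    mean = np.zeros(dim)   # <a>_t, initialised at the origin
    cov = np.eye(dim)      # C_t, initialised to the unit covariance
    for x, y in data:
        x = np.asarray(x, dtype=float)
        cx = cov @ x
        s2 = float(x @ cx)                  # s_{t+1}^2 = x^T C_t x
        h_mean = float(mean @ x)            # <h>_t = <a>_t . x
        denom = np.sqrt(sigma0 ** 2 + s2)
        z = y * h_mean / denom
        p = max(norm.cdf(z), 1e-12)         # <p(y|h)>_t under the probit model
        g = norm.pdf(z) / p
        d1 = y * g / denom                  # d ln<p>/d<h>
        d2 = -g * (z + g) / denom ** 2      # d^2 ln<p>/d<h>^2
        mean = mean + d1 * cx               # move the mean along C_t x
        cov = cov + d2 * np.outer(cx, cx)   # shrink the covariance
    return mean, cov

def predict_relevance(mean, cov, x, sigma0=1.0):
    """Predictive probability <p(y=+1|x,a)> under the Gaussian posterior."""
    x = np.asarray(x, dtype=float)
    s2 = float(x @ cov @ x)
    return float(norm.cdf(float(mean @ x) / np.sqrt(sigma0 ** 2 + s2)))
```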
3.2 Bayesian Online Gaussian Process
Gaussian process (GP) learning has been constrained to problems with small data sets until recently, when Csato & Opper[3] and Williams & Seeger[24] introduced efficient and effective approximations to the full GP formulation. This section will
outline the approach in [3].
In the GP framework, a describes a function consisting of
function values {a(x)}. Using the probit model, the likelihood can be expressed as
p(y|x, a) = Φ( y a(x) / σ_0 ),
where σ_0 and Φ are described in section 3.1.
In addition, p0(a) is a GP prior which specifies a Gaussian
distribution with zero mean function and covariance/kernel
function K_0(x, x′) over a function space. If p_t(a) is also a Gaussian process, then Csato & Opper obtain the following updates by equating the means and covariances of p(a|A_{t+1}) and p(a|A_t, (y_{t+1}, x_{t+1})): the posterior mean and covariance functions are moved along the first and second derivatives of ln ⟨p(y_{t+1}|h)⟩_t, where
⟨p(y_{t+1}|h)⟩_t = Φ( y_{t+1} ⟨h⟩_t / √(σ_0² + s_{t+1}²) ).
Notice the similarities to the updates in section 3.1. The main difference is the 'kernel trick' introduced into the equations through the kernel K_0.
New inputs x_{t+1} are added sequentially to the system via the (t + 1)th unit vector e_{t+1}. This results in a quadratic increase in matrix size, and is a drawback for large data
sets, such as those for text classification. Csato & Opper
overcome this by introducing sparseness into the GP. The
idea is to replace e_{t+1} by its projection onto the space spanned by the current set of basis vectors. This approximation introduces an error which is used to decide when to employ the approximation.
Hence, at any time the algorithm holds a set of basis vec-
tors. It is usually desirable to limit the size of this set. To
accommodate this, Csato & Opper describe a procedure for
removing a basis vector from the set by reversing the process
of adding new inputs.
For lack of space, the algorithm for the Bayesian Online
Gaussian Process will not be given here. The reader is referred
to [3] for more information.
4. EVALUATION
4.1 Classification on Reuters-21578
In this evaluation, we will compare Bayesian online per-
ceptron, Bayesian online Gaussian process, and Support Vector
Machines (SVM)[23]. SVM is one of the best performing
learning algorithms on the Reuters-21578 corpus[7, 27].
The Bayesian methods are as described in section 3, while
for SVM we will use the SVMlight package by Joachims[8].
Since SVM is a batch method, to have a fair comparison,
the online methods are iterated through the training data 3
times before testing. 2
4.1.1 Feature Selection
For the Reuters-21578 corpus, we select as features for each category the set of all words for which −2 ln λ > 12.13. We further prune these by using only the top 300 features.
This reduces the computation time required for the calculation
of the covariances of the Bayesian classifiers.
Since SVM is known to perform well for many features,
for the SVM classifiers we also use the set of words which
occur in at least 3 training documents[7]. This gives us 8,362
words. Note that these words are non-category specific.
4.1.2 Thresholding
The probabilistic outputs from the Bayesian classifiers can
be used in various ways. The most direct way is to use the Bayes decision rule, i.e. a threshold of 0.5 on the predicted probability, to determine the relevance of the document described by x.3 However, as discussed in [10, 26], this is not optimal for the chosen
evaluation measure.
Therefore, in addition to 0.5 thresholding, we also empirically
optimise the threshold for each category for the F1
measure on the training documents. This scheme, which we
shall call MaxF1, has also been employed in [27] for thresholding
kNN and LLSF classifiers. The difference from our
approach is that the threshold in [27] is calculated over a
validation set. We do not use a validation set because we
feel that, for very rare categories, it is hard to obtain a reasonable
validation set from the training documents.
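As an illustration, a sketch of the MaxF1 threshold search over the training documents of one category; taking candidate thresholds midway between consecutive sorted scores is our own convention.

```python
def maxf1_threshold(probs, labels):
    """Pick the threshold that maximises F1 on the training documents.

    probs:  probabilities assigned to the training documents for one category.
    labels: 1 for relevant, 0 for non-relevant.
    """
    pairs = sorted(zip(probs, labels), reverse=True)
    total_pos = sum(labels)
    best_thr, best_f1 = 0.5, 0.0
    tp = 0
    for i, (p, y) in enumerate(pairs):
        tp += y
        predicted_pos = i + 1
        # F1 = 2*tp / (predicted positives + actual positives)
        score = 2 * tp / (predicted_pos + total_pos) if (predicted_pos + total_pos) else 0.0
        if score > best_f1:
            best_f1 = score
            next_p = pairs[i + 1][0] if i + 1 < len(pairs) else 0.0
            best_thr = (p + next_p) / 2   # retrieves exactly the top i+1 documents
    return best_thr
```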
For the Bayesian classifiers, we also perform an analytical
threshold optimisation suggested by Lewis[10]. In this
scheme, which we shall call ExpectedF1, the threshold for
each category is selected to optimise the expected F1, approximated from the assigned probabilities as
E[F1] ≈ 2 Σ_{i∈D+} p_i / ( |D+| + Σ_{i∈D} p_i ),  and 0 when D+ is empty,
where θ is the threshold, p_i is the probability assigned to
document i by the classifier, D is the set of all test docu-
ments, and D+ is the set of test documents with probabilities
higher than the threshold #.
Note that ExpectedF1 can only be applied after the probabilities
for all the test documents are assigned. Hence the
classification can only be done in batch. This is unlike the
first two schemes, where classification can be done online.
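A sketch of the ExpectedF1 selection, using the approximate expected-F1 expression given above (which may differ from the exact expression in [10]); as noted, it is applied in batch after all test probabilities have been assigned.

```python
def expectedf1_threshold(test_probs):
    """Select the threshold maximising the approximate expected F1.

    test_probs: probabilities assigned to all test documents for one category.
    Here sum_i p_i plays the role of the expected number of relevant documents.
    """
    sorted_probs = sorted(test_probs, reverse=True)
    total_expected_relevant = sum(sorted_probs)
    best_thr, best_ef1 = 1.0, 0.0
    retrieved_mass = 0.0
    for k, p in enumerate(sorted_probs, start=1):
        retrieved_mass += p                       # sum of p_i over D+
        ef1 = 2 * retrieved_mass / (k + total_expected_relevant)
        if ef1 > best_ef1:
            best_ef1, best_thr = ef1, p           # retrieve documents with prob >= p
    return best_thr
```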
4.1.3 Results and Discussion
2 See section A.2 for a discussion of the number of passes.
3 For SVM, to minimise structural risks, we would classify the document as relevant if w · x + b ≥ 0, where w defines the separating hyperplane and b is the bias.
4 See section A.3 for a discussion of the jitter terms.
Table 1: Description of Methods (the Perceptron uses a fixed feature for the bias).
Table 2: Micro-/Macro-average F1 (Perceptron: 85.12/45.23, 86.69/52.16, 86.44/53.08).
Table 1 lists the parameters for the algorithms used in our evaluation, while Tables 2 and 3 tabulate the results. There
are two sets of results for SVM, and they are labeled SVMa
and SVM b . The latter uses the same set of features as the
Bayesian classifiers (i.e. using the -2 ln # measure), while
the former uses the set of 8,362 words as features.
Table 2 summarizes the results using F1 averages. Table 3 compares the classifiers using s-test and S-test. Here, the
MaxF1 thresholds are used for the classification decisions.
Each row in these tables compares the method listed in the
first column with the other methods. The significance levels
from [27] are used.
Several observations can be made:
. Generally, MaxF1 thresholding increases the performance
of all the systems, especially for rare categories.
. For the Bayesian classifiers, ExpectedF1 thresholding
improves the performance of the systems on rare categories
. Perceptron implicitly implements the kernel used by
GP-1, hence their similar results.
. With MaxF1 thresholding, feature selection impedes
the performance of SVM.
. In Table 2, SVM with 8,362 features has a slightly lower micro-average F1 than the Bayesian classifiers. However,
the s-tests in Table 3 show that Bayesian classifiers
outperform SVM for significantly many common cat-
egories. Hence, in addition to computing average F1
measures, it is useful to perform sign tests.
. As shown in Table 3, for limited features, Bayesian
classifiers outperform SVM for both common and rare
categories.
. Based on the sign tests, the Bayesian classifiers outperform SVM (using 8,362 words) for common categories, and vice versa for rare categories.
Table 3: s-test/S-test using MaxF1 thresholding.
"≫" or "≪" means P-value ≤ 0.01; ">" or "<" means 0.01 < P-value ≤ 0.05; "~" means P-value > 0.05.
The last observation suggests that one can use Bayesian
classifiers for common categories, and SVM for rare ones.
4.2 Filtering on OHSUMED
In this section, only the Bayesian online perceptron will
be considered. In order to avoid numerical integration of
the information gain measure, instead of the probit model
of section 3.1, here we use a simpler likelihood model in
which the outputs are flipped with fixed probability ε:
p(y|x, a) = ε + (1 − 2ε) Θ( y a · x ),
where Θ(·) is the step function. The update equations will also change accordingly, e.g.
⟨p(y_{t+1}|h)⟩_t = ε + (1 − 2ε) Φ( y_{t+1} ⟨h⟩_t / s_{t+1} ).
Using this likelihood measure, the information gained from datum (y_{t+1}, x_{t+1}) can be expressed in closed form in terms of ⟨p(y_{t+1}|h)⟩_t and ε, so no numerical integration is needed. We use this likelihood model in the evaluation. The following sections
will describe the algorithm in detail. To simplify presen-
tation, we will divide the batch-adaptive filtering task into
batch and adaptive phases.
4.2.1 Feature Selection and Adaptation
During the batch phase, words for which −2 ln λ > 12.13
are selected as features.
During the adaptive phase, when we obtain a feedback, we
update the features by adding any new words with −2 ln λ > 12.13. When a feature is added, the distribution of the perceptron a is extended by one dimension.
4.2.2 Training the classifier
During the batch phase, the classifier is iterated through
the training documents 3 times. In addition, the relevant
documents are collected for use during the adaptive phase.
During the adaptive phase, retrieved relevant documents
are added to this collection. When a document is retrieved,
the classifier is trained on that document and its given relevance
judgement.
The classifier will be trained on irrelevant documents most
of the time. To prevent it from "forgetting" relevant documents
due to its limited capacity, whenever we train on an
irrelevant document, we would also train on a past relevant
document. This past relevant document is chosen successively
from the collection of relevant documents.
This is needed also because new features might have been
added since a relevant document was last trained on. Hence
the classifier would be able to gather new information from
the same document again due to the additional features.
Note that the past relevant document does not need to be
chosen in successive order. Instead, it can be chosen using
a probability distribution over the collection. This will be
desirable when handling topic-drifts.
We will evaluate the effectiveness of this strategy of re-training on past retrieved relevant documents, and denote its use by +rel. Though its use means that the algorithm is no longer online, asymptotic efficiency is unaffected, since
only one past document is used for training at any instance.
4.2.3 Information Gain
During testing, there are two reasons why we retrieve a document. The first is that the classifier deems it relevant, given the vector x that represents the document. The second is that, although the document is deemed irrelevant
by the classifier, the classifier would gain useful information
from the document. Using the measure IG(y, x|D t ), we calculate
the expected information gain
E_y[ IG(y, x|D_t) ] = Σ_{y∈{−1,1}} ⟨p(y|x, a)⟩_t IG(y, x|D_t).
Figure 1: θ versus N_ret, tuned for T9P.
A document is then deemed useful if its expected information gain is at least θ. Optimizing for the T9P measure (i.e. targeting 50 documents), we choose θ to be an increasing function of N_ret, where N_ret is the total number of documents that the system has retrieved. Figure 1 plots θ against N_ret. Note that this is a kind of active learning, where the willingness to trade off precision for learning decreases with N_ret. The use of this information gain criterion will be denoted by +ig.
We will test the effectiveness of the information gain strategy against an alternative one. The alternative, denoted by +rnd, will randomly select documents to retrieve with probability
(50 − N_ret) / 293,856  if N_ret < 50, and 0 otherwise,
where 293,856 is the number of test documents.
4.2.4 Results and Discussion
Table 4 lists the results of seven systems. The first two are
of Microsoft Research Cambridge and Fudan University re-
spectively. These are the only runs in TREC-9 for the task.
The third is of the system as described in full, i.e. Bayesian
online perceptron, with retraining on past retrieved relevant
documents, and with the use of information gain. The rest
are of the Bayesian online perceptron with different combinations
of strategies.
Besides the T9P measure, for the sake of completeness, Table 4 also lists the other measures used in TREC-9. Taken
together, the measures show that Bayesian online percep-
tron, together with the consideration for information gain,
is a very competitive method.
For the systems with +rel, the collection of past known
relevant documents is kept. Although Microsoft uses this
same collection for its query reformulation, another collection
of all previously seen documents is used for threshold
adaptation. Fudan maintains a collection of past retrieved
documents and uses this collection for query adaptation.
5 The TREC-9 proceedings report results from run ok9bfr2po, while we report results from the slightly better run ok9bf2po.
Figure 2: Variation of the number of features as relevant documents are retrieved (x-axis: average number of relevant documents retrieved; y-axis: average number of features; curves: Pptron+rel+ig, Pptron+ig, Pptron+rnd, Pptron). The plots for Pptron+rel+ig and Pptron+ig are very close. So are the plots for Pptron+rnd and Pptron.
In a typical operational system, retrieved relevant documents
are usually retained, while irrelevant documents are
usually discarded. Therefore +rel is a practical strategy to
adopt.
Figure 2 plots the average number of features during the
adaptive phase. We can see that features are constantly
added as relevant documents are seen. When the classifier
is retrained on past documents, the new features enable the
classifier to gain new information from these documents. If
we compare the results for Pptron+rel and Pptron in Table
4, we find that not training on past documents causes
the number of relevant documents retrieved to drop by 5%.
Similarly, for Pptron+rel+ig and Pptron+ig, the drop is
8%.
Table 5 breaks down the retrieved documents into those
that the classifier deems relevant and those that the classifier
is actually querying for information, for Pptron+ig
and Pptron+rnd. The table shows that none of the documents
randomly queried are relevant documents. This is
not surprising, since only an average of 0.017% of the test
documents are relevant. In contrast, the information gain
strategy is able to retrieve 313 relevant documents, which is
26.1% of the documents queried. This is a significant result.
Consider Pptron+ig. Table 4 shows that for Pptron, when
the information gain strategy is removed, only 731 relevant
documents will be retrieved. Hence, although most of the
documents queried are irrelevant, information gained from
these queries helps recall by the classifier (i.e. 815 documents
versus 731 documents), which is important for reaching
the target of 50 documents.
MacKay[13] has noted the phenomenon of querying for
irrelevant documents which are at the edges of the input
space, and suggested maximizing information in a defined
region of interest instead. Finding this region for batch-
adaptive filtering remains a subject for further research.
Comparing the four plots in Figure 2, we find that, on
average, the information gain strategy causes about 3% more
features to be discovered for the same number of relevant
documents retrieved. A consequence of this is better recall.
Table 4: Results for Batch-adaptive filtering optimized for T9P measure.
Microsoft 5 Fudan Pptron+rel+ig Pptron+ig Pptron+rnd Pptron+rel Pptron
Total retrieved 3562 3251 2716 2391 2533 1157 1057
Relevant retrieved 1095 1061 1227 1128 732 772 731
Macro-average recall 39.5 37.9 36.2 33.3 20.0 20.8 20.0
Macro-average precision 30.5 32.2 35.8 35.8 21.6 61.9 62.3
Mean T9P 30.5 31.7 31.3 29.8 19.2 21.5 20.8
Mean Utility -4.397 -1.079 15.318 15.762 -5.349 18.397 17.730
Mean T9U -4.397 -1.079 15.318 15.762 -5.349 18.397 17.730
Mean scaled utility -0.596 -0.461 -0.025 0.016 -0.397 0.141 0.138
Zero returns
Table 5: Breakdown of documents retrieved for Pptron+ig and Pptron+rnd. The numbers for the latter are in brackets.
Relevant Not Relevant Total
docs retrieved by perceptron classifier proper 815 (732) 378 (345) 1193 (1077)
docs retrieved by information gain (or random strategy) 313 (0) 885 (1456) 1198 (1456)
Total 1128 (732) 1263 (1801) 2391 (2533)
5. CONCLUSIONS AND FURTHER WORK
We have implemented and tested Bayesian online perceptron
and Gaussian processes on the text classification prob-
lem, and have shown that their performance is comparable
to that of SVM, one of the best learning algorithms on
text classification in the published literature. We have also
demonstrated the e#ectiveness of online learning with information
gain on the TREC-9 batch-adaptive filtering task.
Our results on text classification suggest that one can use
Bayesian classifiers for common categories, and maximum
margin classifiers for rare categories. The partitioning of the
categories into common and rare ones in an optimal way is
an interesting problem.
SVM has been employed to use relevance feedback by
Drucker et al [4], where the retrieval is in groups of 10 doc-
uments. In essence, this is a form of adaptive routing. It
would be instructive to see how Bayesian classifiers perform
here, without storing too many previously seen documents.
It would also be interesting to compare the merits of incremental
SVM[21, 1] with the Bayesian online classifiers.
Acknowledgments
We would like to thank Lehel Csato for providing details
on the implementation of the Gaussian process, Wee Meng
Soon for assisting in the data preparation, Yiming Yang
for clarifying the representation used in [27], and Loo Nin
Teow for proof-reading the manuscript. We would also like
to thank the reviewers for their many helpful comments in
improving the paper.
6.
--R
Incremental and decremental support vector machine learning.
Analysis of Binary Data.
Relevance feedback using support vector machines.
Accurate methods for the statistics of surprise and coincidence.
OHSUMED: An interactive retrieval evaluation and new large test collection for research.
Text categorization with support vector machines: Learning with many relevant features.
Making large-scale SVM learning practical
Representation and Learning in Information Retrieval.
Evaluating and optimizing automomous text classification systems.
Training algorithms for linear text classifiers.
Bayesian interpolation.
Monte Carlo implementation of Gaussian process models for Bayesian regression and classification.
Feature selection
Online versus
A Bayesian approach to online learning.
The TREC-9 filtering track final report
Optimal perceptron learning: an online Bayesian approach.
Incremental learning with support vector machines.
Information Retrieval.
The Nature of Statistical Learning Theory.
Using the Nystrom method to speed up kernel machines.
Bayesian Mean Field Algorithms for Neural Networks and Gaussian Processes.
A study on thresholding strategies for text categorization.
--TR
Term-weighting approaches in automatic text retrieval
Representation and learning in information retrieval
Bayesian interpolation
Information-based objective functions for active data selection
OHSUMED
The nature of statistical learning theory
Evaluating and optimizing autonomous text classification systems
Training algorithms for linear text classifiers
Feature selection, perception learning, and a usability case study for text categorization
Making large-scale support vector machine learning practical
A Bayesian approach to on-line learning
Optimal perceptron learning
A re-examination of text categorization methods
A study of thresholding strategies for text categorization
Information Retrieval
Text Categorization with Suport Vector Machines
Relevance Feedback using Support Vector Machines
--CTR
Kian Ming Adam Chai, Expectation of f-measures: tractable exact computation and some empirical observations of its properties, Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, August 15-19, 2005, Salvador, Brazil
Hwanjo Yu , ChengXiang Zhai , Jiawei Han, Text classification from positive and unlabeled documents, Proceedings of the twelfth international conference on Information and knowledge management, November 03-08, 2003, New Orleans, LA, USA
Rey-Long Liu, Dynamic category profiling for text filtering and classification, Information Processing and Management: an International Journal, v.43 n.1, p.154-168, January 2007
Randa Kassab , Jean-Charles Lamirel, Towards a synthetic analysis of user's information need for more effective personalized filtering services, Proceedings of the 2007 ACM symposium on Applied computing, March 11-15, 2007, Seoul, Korea
Vaughan R. Shanks , Hugh E. Williams , Adam Cannane, Indexing for fast categorisation, Proceedings of the twenty-sixth Australasian conference on Computer science: research and practice in information technology, p.119-127, February 01, 2003, Adelaide, Australia
Rey-Long Liu , Wan-Jung Lin, Adaptive sampling for thresholding in document filtering and classification, Information Processing and Management: an International Journal, v.41 n.4, p.745-758, July 2005
Aynur Dayanik , David D. Lewis , David Madigan , Vladimir Menkov , Alexander Genkin, Constructing informative prior distributions from domain knowledge in text classification, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, August 06-11, 2006, Seattle, Washington, USA
new topic identification using multiple linear regression, Information Processing and Management: an International Journal, v.42 n.4, p.934-950, July 2006
Franca Debole , Fabrizio Sebastiani, An analysis of the relative hardness of Reuters-21578 subsets: Research Articles, Journal of the American Society for Information Science and Technology, v.56 n.6, p.584-596, April 2005 | text classification;text filtering;online;bayesian;machine learning |
564404 | A new family of online algorithms for category ranking. | We describe a new family of topic-ranking algorithms for multi-labeled documents. The motivation for the algorithms stems from recent advances in online learning algorithms. The algorithms we present are simple to implement and are time and memory efficient. We evaluate the algorithms on the Reuters-21578 corpus and the new corpus released by Reuters in 2000. On both corpora the algorithms we present outperform adaptations to topic-ranking of Rocchio's algorithm and the Perceptron algorithm. We also outline the formal analysis of the algorithm in the mistake bound model. To our knowledge, this work is the first to report performance results with the entire new Reuters corpus. | INTRODUCTION
The focus of this paper is the problem of topic ranking for
text documents. We use the Reuters corpus (release 2000)
as our running example. In this corpus there are about a
hundred different topics. Each document in the corpus is
tagged with a set of topics that are relevant to its content.
For instance, a document from late August 1996, discusses
a bill by Bill Clinton to increase the minimum wage by a
whole 90 cents. This document is associated with 9 topics;
four of them are labour, economics, unemployment, and
retail sales. This example shows that there is a semantic
overlap between the topics. Given a feed of documents,
such as the Reuters newswire, the task of topic ranking is
concerned with ordering the topics according to their relevance
for each document independently. The framework
that we use in this paper is that of supervised learning. In
this framework we receive a training set of documents each
of which is provided with its set of relevant topics. Given the
labeled corpus, the goal is to learn a topic-ranking function
that gets a document and outputs a ranking of the topics.
In the machine learning community this setting is often
referred to as a multilabel classification problem. The motivation
of most, if not all, of the machine learning algorithms
for this problem stem from a decision theoretic view.
Namely, the output of the algorithms is a predicted set of
relevant topics and the quality of the predictions is measured
by how successful we are in identifying the set of relevant
topics. In this paper we combine techniques from statistical
learning theory with the more natural goals of information
retrieval tasks. We do so by adopting and generalizing techniques
from online prediction algorithms to the particular
task of topic ranking. Our starting point is the Perceptron
algorithm [8]. Despite (or because of) its age and simplicity
the Perceptron algorithm and its variants have proved to
be surprisingly effective in a broad range of applications in
machine learning and information retrieval (see for instance
[3, 6] and the references therein). The original Perceptron
algorithm has been designed for binary classification prob-
lems. Our family of algorithms borrows the core motivation
of the Perceptron algorithm and largely generalizes it to the
more complex problem of topic ranking. Since the family of
algorithms uses Multiclass Multilabel feedback we refer to
the various variants as the MMP algorithm.
A few learning algorithms for multi-labeled data have
been devised in the machine learning community. Two notable
examples are a multilabel version of AdaBoost called
AdaBoost.MH [10] and a multilabel generalization of Vap-
nik's Support Vector Machines by Elisseeff and Weston [2].
These two multilabel algorithms take the same general approach
by reducing a multilabel problem into multiple binary
problems by comparing all pairs of labels. Our starting
point is similar as we use an implicit reduction into pairs.
Yet, using a simple pre-computation, MMP is much simpler
to implement and use than both SVM and AdaBoost.MH.
Note, however, that in contrast to AdaBoost.MH and SVM
which were designed for batch settings, in which all the examples
are given in advance, MMP , like the Perceptron, is
used in online settings in which the examples are presented
one at a time.
Our experiments with the two corpora by Reuters suggest
that MMP can offer a viable alternative to existing
algorithms and can be used for processing massive datasets.
2. PROBLEM SETTING
As discussed above, the text processing task this paper is concerned with is category or topic ranking. In this prob-
we are given access to a stream of documents. Each
document is associated with zero or more categories from
a predened set of topics. We denote the set of possible
categories by Y and the number of different topics by k.
In the new distribution of the Reuters corpus (which we refer to as Reuters-2000) there are 102 different topics while Reuters-21578 consists of 91 different topics.
Since there is semantic overlap between the topics a document
is typically associated with more than one topic. More
formally, a document is labeled with a set y ⊆ Y of relevant topics. In the Reuters-2000 corpus the average size of y is 3.2 while in Reuters-21578 the average size is 1.24. We say
that a topic r (also referred to as a class or category) is relevant
for a given document if r is in the set of relevant topics,
y.
In the machine learning community this setting is also often
referred to as a multilabel classification problem. There are numerous different information filtering and routing tasks
that are of practical use for multilabel problems. The focus
of this paper is the design, analysis, and implementation
of category-ranking algorithms. That is, given a document,
the algorithms we consider return a list of topics ranked according
to their relevance. However, as discussed above, the
feedback for each document is the set y and thus the topics
for each document in the training corpus are not ranked but
rather marked as relevant or non-relevant.
Each document is represented using the vector space model [9] as a vector in R^n. We denote a document by its vector representation x ∈ R^n. All the topic-ranking algorithms we discuss in this paper use the same mechanism: each algorithm maintains a set of k prototypes, w_1, ..., w_k. Analogous to the representation of documents, each prototype is a vector, w_r ∈ R^n. The specific vector-space representation we used is based on the pivoted length normalization of Singhal et. al. [12]. Its description is deferred to Sec. 4.2. The focus of the paper is a new family of algorithms that builds the prototypes from examples, i.e., a corpus S of m documents each of which is assigned a set of relevant topics, S = {(x^i, y^i)}_{i=1}^m.
The set of prototypes induces a ranking of the topics according to their similarity with the vector representation of the document. That is, given a document x, the inner-products w_1 · x, ..., w_k · x induce an ordering of the relevance level for each topic. We say that topic r is ranked higher than topic s if w_r · x > w_s · x. Given a feedback, i.e. the set of relevant topics y of a document x, we say that the ranking induced by the prototypes is perfect if all the relevant topics are ranked higher than the non-relevant topics. In a perfect ranking, for any pair of topics r ∈ y and s ∉ y the score induced by w_r is higher than the one induced by w_s, that is, w_r · x > w_s · x. We can measure the quality of a perfect topic-ranking by the size of the gap between the lowest score among the relevant topics and the highest score among the non-relevant topics,
min_{r∈y} w_r · x − max_{s∉y} w_s · x.
Adopting the terminology used in learning theory we refer
to the above quantity as the margin of a document x with respect
to a set of relevant topics y and prototypes
wk .
Clearly, a perfect ranking of a document implies that its
margin is positive. The margin can also be computed when
the prototypes do not induce a perfect ranking. In this case
the margin is negative. An illustration of the margin is given
in Fig. 1. The illustration show the margin in case of a perfect
ranking (left) and a non-perfect one (right). In both
cases there are 9 dierent topics. The relevant topics are
marked with circles and the non-relevant with squares. The
value of the margin is the length of arrow where a positive
values is denoted by an arrow pointing down and a negative
Figure
1: Illustration of the notion of the margin for
a perfect ranking (left) and a non-perfect ranking.
margin by an arrow pointing up. The notion of margin is
rather implicit in the algorithms we discuss in the paper.
However, it plays an important role in the formal analysis
of the algorithms.
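A minimal Python sketch of the margin computation for a single document, assuming the prototypes are stored as rows of a matrix; the names are ours.

```python
import numpy as np

def ranking_margin(prototypes, x, relevant):
    """Margin of document x with respect to the prototypes w_1..w_k.

    prototypes: array of shape (k, n); x: array of shape (n,);
    relevant:   set of indices of the relevant topics (the set y).
    A positive margin means every relevant topic scores above every non-relevant one.
    """
    scores = prototypes @ x
    rel = [scores[r] for r in relevant]
    non = [scores[s] for s in range(len(scores)) if s not in relevant]
    return min(rel) - max(non)
```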
3. CATEGORY-RANKING ALGORITHMS
MMP, being a descendant of the Perceptron algorithm,
is an online algorithm: it gets an example, outputs a ranking, and updates the hypothesis it maintains, namely the set of prototypes w_1, ..., w_k. The update of the prototypes takes
place only if the predicted ranking is not prefect. Online algorithms
become especially handy when the training corpus
is very large since they require minimal amounts of mem-
ory. In the case of batch training (i.e. when the training
corpus is provided in advance and not example by example)
we need to augment the online algorithm with a wrapper.
Several approaches have been proposed for adapting online
algorithms for batch settings. A detailed discussion is given
in [3]. The approach we take in this paper is the simplest
to implement. We run the algorithm in an online fashion on
the provided training corpus and use the nal set of topic
prototypes, obtained after a single pass through the data,
as the topic-ranking hypothesis. We leave more sophisticated
schemes to future research. In the description below,
we omit the index of the document and its set of relevant
topics and denote them as
x and y, respectively.
Given a document x we define the error-set of the induced ranking w_1, ..., w_k with respect to the relevance (of topics) as the set of pairs of categories that are inconsistent with the predicted ranking. Formally, the error-set is defined as
E = { (r, s) : r ∈ y, s ∉ y, w_r · x < w_s · x }.
Clearly, the predicted ranking is not perfect if and only if E is not empty. The size of E can be as large as |y| · |Y \ y|. A natural question that arises is what type of loss one should
try to minimize whenever E is not empty. We now describe
three different losses that yield simple and efficient algorithms. These loss functions stem from learning theoretic
insights and also exhibit good performance with respect to
the common information retrieval performance measures. A
nice property of the three losses is that they can be encapsulated
into a single update scheme of the prototypes.
The first loss is the indicator function, that is, it equals 1 if E is not empty and 0 otherwise. Put another way, this loss simply counts the number of times the topic-ranking was not perfect. The second loss is the size of E. Therefore, examples which were ranked perfectly suffer a zero loss while the rest
Initialize: Set w_1 = 0, ..., w_k = 0.
Loop: For i = 1, 2, ..., m
  Obtain a new document x^i.
  Rank the topics according to w_1 · x^i, ..., w_k · x^i.
  Obtain the set of relevant topics y^i.
  If the ranking of x^i is not perfect do:
    1. For r ∈ y^i set: n_r = |{ s ∉ y^i : w_r · x^i < w_s · x^i }|
    2. For r ∉ y^i set: n_r = |{ s ∈ y^i : w_s · x^i < w_r · x^i }|
    3. Set normalization factor: c = Σ_r n_r (loss1); c = 2 (loss2); c = 2 |y^i| |Y \ y^i| (loss3)
    4. Update for r ∈ y^i: w_r ← w_r + (n_r / c) x^i
    5. Update for r ∉ y^i: w_r ← w_r − (n_r / c) x^i
Return: w_1, ..., w_k
Figure 2: The topic-ranking algorithm.
of the examples suffer losses in proportion to how poorly the predicted rankings perform in terms of the number of disordered topics. The third loss is a normalized version of the second loss, namely, it is |E| / (|Y \ y| · |y|). Since the maximal size of the error set is |Y \ y| · |y|, the third loss is bounded from above by one. On each document with rank prediction
error MMP moves the prototypes whose indices constitute
the error-set. The prototypes that correspond the set of
relevant topics are moved toward
x and those which correspond
to non-relevant topics are moved away from
x. The
motivation of this prototype-update stems from the Perceptron
algorithm: it is straightforward to verify that this form
of update will increase the value of inner-products between
x and each of the prototypes in the subset of relevant categories
and similarly decrease the rest of the inner-products.
Put another way, this update of the prototypes is geared
towards increasing the ranking-margin associated with the
example and thus reducing the ranking losses above. The
amount by which we move each prototype depends on its
ranking and the type of loss we employ. We define n_r to be the number of topics wrongly ranked with respect to topic r. If r is the index of a relevant topic, r ∈ y, then n_r is the
number of non-relevant topics ranked above r. Analogously,
if r is the index of a non-relevant topic nr is the number
of relevant topics below r. Each prototype w_r is moved in the direction of x and in proportion to n_r. Once the values n_r are computed, we normalize each one of them by a scaling factor, denoted c. The value of c depends on the version of the loss that is used. This scaling factor c ensures that the sum over the normalized n_r equals the loss we are trying to make small, that is, c = Σ_r n_r for the first loss, c = 2 for the second loss, and c = 2 |y| |Y \ y| for the third loss. To summarize, if the topic ranking is not perfect
each prototype w_r corresponding to a wrongly ranked relevant topic is changed to w_r + (n_r / c) x, and an analogous
modification is made to the wrongly ranked prototypes from
the set of non-relevant topics. The pseudo code describing
the algorithm is given in Fig. 2.
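The sketch below implements a single MMP update in Python, following the description above; treating ties in the scores as ranking errors (so that learning can start from zero prototypes) and the in-place update convention are our assumptions.

```python
import numpy as np

def mmp_update(W, x, relevant, loss="loss1"):
    """One online update of the MMP prototypes.

    W:        array of shape (k, n) holding the prototypes w_1..w_k (updated in place).
    x:        document vector of shape (n,).
    relevant: set of indices of the relevant topics y.
    loss:     "loss1", "loss2" or "loss3", selecting the normalization factor c.
    """
    k = W.shape[0]
    scores = W @ x
    non_relevant = [s for s in range(k) if s not in relevant]
    # n_r: number of topics wrongly ordered with respect to topic r
    # (ties are counted as errors here, an assumption not made explicit in the text).
    n = np.zeros(k)
    for r in relevant:
        n[r] = sum(1 for s in non_relevant if scores[r] <= scores[s])
    for s in non_relevant:
        n[s] = sum(1 for r in relevant if scores[r] <= scores[s])
    total = n.sum()
    if total == 0:
        return W                                       # ranking already perfect
    if loss == "loss1":
        c = total                                      # normalized n_r sum to 1
    elif loss == "loss2":
        c = 2.0                                        # normalized n_r sum to |E|
    else:
        c = 2.0 * len(relevant) * len(non_relevant)    # ... sum to |E| / (|y||Y\y|)
    for r in relevant:
        W[r] += (n[r] / c) * x                         # move toward the document
    for s in non_relevant:
        W[s] -= (n[s] / c) * x                         # move away from the document
    return W
```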
To analyze the performance of the algorithm we again
need to use the notion of margin. Generalizing the notion
of margin from one document to an entire corpus, we define the margin, denoted γ_S, of a set of prototypes w_1, ..., w_k on a corpus S as the minimal margin attained on the documents in S,
γ_S = min_{(x,y)∈S} [ min_{r∈y} w_r · x − max_{s∉y} w_s · x ].
For the purpose of the analysis we assume that the total norm of the prototypes is 1 (Σ_r ||w_r||² = 1). This assumption can be satisfied by a global normalization of the prototypes that does not change the induced ranking. We are now
ready to state the main theorem which shows that MMP
makes a bounded number of mistakes compared to a set of
prototypes that is constructed with hindsight (after obtaining
all the examples) and achieves a perfect ranking on the
corpus.
Theorem 1. [Mistake bound] Let (x^1, y^1), ..., (x^m, y^m) be an input sequence for MMP where x^i ∈ R^n and y^i ⊆ {1, ..., k}. Denote by R = max_i ||x^i||. Assume that there exists a set of prototypes w*_1, ..., w*_k of total unit norm that achieves a perfect ranking on the entire sequence with margin γ_S > 0. Then, the cumulative loss that MMP suffers on the sequence is at most (R / γ_S)².
Due to the space restrictions the proof is omitted from
the manuscript.
4. EXPERIMENTS
In this section we describe the experiments we performed
that compare the three variants of MMP with a simple extension
of the Perceptron algorithm and Rocchio's algorithm [7].
We start with a description of the datasets we
used in our experiments.
4.1 Datasets
We evaluated the algorithms on two text corpora. Both
corpora were provided by Reuters.
Reuters-21578: The documents in this corpus were collected
from the Reuters newswire during 1987. The data
set is available from http://www.daviddlewis.com/resources.
Figure 3: The distribution of the number of relevant topics in Reuters-21578 (left) and Reuters-2000 (right). Each panel plots the fraction of the data set against the number of relevant topics.
We used the ModApte version of the corpus and pre-processed
the documents as follows. All words were converted to lower-
case, digits were mapped to a single token designating it is
a digit, and non alpha-numeric characters were discarded.
We also used a stop-list to remove very frequent words. The
number of dierent words left after pre-processing is 27,747.
After the pre-processing the corpus contains 10,789 documents
each of which is associated with one or more topics.
The number of dierent topics in the ModApte version of
Reuters-21578 is 91. Since this corpus is of relatively small
size, we used 5-fold cross validation in our experiments and
did not use the original partition into training and test sets.
While each document in the Reuters-21578 corpus can be
multilabeled, in practice the number of such documents is
relatively small. Over 80% of the documents are associated
with a single topic. On the left hand side of Fig. 3 we show
the distribution of the number of relevant topics. The average
number of relevant topics per document is 1.24.
Reuters-2000: This corpus contains 809,383 documents collected from the Reuters newswire in a year period (1996-08-20 to 1997-08-19). Since this corpus is large we used the first two thirds of the corpus for training and the remaining third for evaluation. The training set consisted of all documents that were posted from 1996-08-20 through 1997-04-10, resulting in 521,439 training documents. The size of the corpus which was used for evaluation is 287,944. We
pre-processed as follows. We converted all upper-case characters
to lower-case, replaced all non alpha-numeric characters
with white-spaces, and discarded all the words that
appeared only once in the training set. The number of different words that remained after this pre-processing is 225,329. Each document in the collection is associated with zero or more topics. There are 103 different topics in the entire corpus, however, only 102 of them appear in the training set. The remaining category, marked GMIL (for millennium issues), tags only 5 documents in the test set. We therefore discarded this category. Unlike the Reuters-21578 corpus, each document in the corpus is tagged by multiple topics: about 70% of the documents are associated with at least three different topics. The average number of topics associated with each document is 3.20. The distribution of the number of relevant topics per document appears on the right hand-side of Fig. 3.
4.2 Document representation
All the algorithms we evaluated use the same document
representation. We implemented the pivoted length normalization
of Singhal et. al. [12] as our term-weighting al-
gorithm. This algorithm is considered to be one of the most
effective algorithms for document ranking and retrieval. We now briefly outline the pivoted length normalization. Let d_i^l denote the number of times a word (or term) indexed l appears in the document indexed i. Let m_i denote the number of unique words appearing in the document indexed i, and let u_l be the number of documents in which the term indexed l appears. As before, the total number of documents in the corpus is denoted by m. Using these definitions the idf weight of a word indexed l is log(m / u_l). The average frequency of the terms appearing in document i is
avg[d_i] = (1 / m_i) Σ_l d_i^l,
and the average number of unique terms in the documents is
pivot = (1 / m) Σ_i m_i.
Using these definitions the tf weight of a word indexed l appearing in the document indexed i is
(1 + log d_i^l) / (1 + log avg[d_i]),
and the tf-idf weights are normalized by the pivoted factor (1 − slope) · pivot + slope · m_i. Here slope is a parameter between 0 and 1. We set slope = 0.3, which leads to the best performance in the experiments reported in [12].
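A sketch of the pivoted-length-normalized term weighting, under the assumption that the standard pivoted unique normalization is intended; the exact combination used in the paper may differ slightly, and the function name is ours.

```python
import math
from collections import Counter

def pivoted_unique_weights(doc_tokens, doc_freq, num_docs, avg_unique_terms, slope=0.3):
    """Sketch of pivoted-length-normalized tf-idf weights for one document.

    doc_tokens:       tokens of the document.
    doc_freq:         dict term -> number of documents containing the term (u_l).
    num_docs:         total number of documents m.
    avg_unique_terms: average number of unique terms per document (the pivot).
    """
    counts = Counter(doc_tokens)
    avg_tf = sum(counts.values()) / len(counts)               # average term frequency in the document
    pivot_norm = (1.0 - slope) * avg_unique_terms + slope * len(counts)
    weights = {}
    for term, tf in counts.items():
        tf_part = (1.0 + math.log(tf)) / (1.0 + math.log(avg_tf))
        idf_part = math.log(num_docs / doc_freq[term])
        weights[term] = tf_part * idf_part / pivot_norm
    return weights
```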
4.3 Algorithms for comparison
In addition to the three variants of MMP we also implemented
two more algorithms: Rocchio's algorithm [7] and
the Perceptron algorithm. As with MMP these algorithms
use the same pivoted length normalization as their vector
space representation and employ the same form of category-
ranking by using a set of prototypes
wk .
Rocchio: We implemented an adaptation of Rocchio's
method as adapted by Ittner et .al [5] to text categoriza-
tion. In this variant of Rocchio the set of prototype vectors w_1, ..., w_k is set as follows,
w_r = (β / |R_r|) Σ_{i∈R_r} x^i − (γ / |R_r^c|) Σ_{i∈R_r^c} x^i,
where R_r is the set of documents which contain the topic r as one of their relevant topics and R_r^c is its complement, i.e., all the documents for which r is not one of their relevant topics. Following the parameterization in [5], we set the coefficients β and γ, with γ = 4. Last, as suggested by Amit Singhal in a private communication, we normalize all of the prototypes to a unit norm.
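A sketch of the Rocchio prototype construction described above; the coefficient names β and γ and the value γ = 4 follow the reconstruction in the text, while β is not recoverable here and must be supplied by the caller.

```python
import numpy as np

def rocchio_prototype(X, relevant_mask, beta, gamma=4.0):
    """Rocchio prototype for one topic.

    X:             array of shape (m, n) of document vectors.
    relevant_mask: numpy boolean array of length m, True for documents labelled with the topic.
    beta, gamma:   weights of the positive and negative centroids (beta is a free choice here).
    """
    pos = X[relevant_mask]
    neg = X[~relevant_mask]
    w = beta * pos.mean(axis=0) - gamma * neg.mean(axis=0)
    return w / np.linalg.norm(w)      # normalize to unit norm, as described in the text
```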
Perceptron: We also implemented the Perceptron algo-
rithm. Since the Perceptron algorithm is designed for binary
classification problems, we decomposed the multilabel problem into multiple binary classification problems. For each
topic r, the set of positive examples constitute the docu-
ments' indices from Rr , and R c r is the set of negative exam-
ples. We then ran the Perceptron algorithm on each of the
binary problems separately and independently. We therefore
obtained again a set of prototypes
wk each of
which is an output of the corresponding output of Perceptron
algorithm.
Figure 4: The round-averaged performance measures as a function of the number of training documents that were processed for Reuters-21578: loss1 (left), loss2 (middle) and loss3 (right). Each panel plots the averaged cumulative loss against the round number for the Perceptron, MMP l.1, MMP l.2 and MMP l.3.
Figure 5: The round-averaged performance measures as a function of the number of training documents that were processed for Reuters-2000: loss1 (left), loss2 (middle) and loss3 (right). A log-scale was used for the x-axis. Each panel plots the averaged cumulative loss against the round number for the Perceptron, MMP l.1, MMP l.2 and MMP l.3.
4.4 Feature Selection
For both datasets the number of unique terms after the
pre-processing stage described above was still large: 27,747 words in the Reuters-21578 and 225,339 words in Reuters-2000. Since we used cross-validation for Reuters-21578 the actual number of unique terms was slightly lower, 25,061 on the average. Yet, it was still relatively large and we therefore employed feature selection for both corpora. We used
weights of the prototypes generated by the adaptation of
Rocchio's algorithm described above as our means for feature selection. For each topic we sorted the terms according to their weights assigned by Rocchio. We then took for each topic the maximum between a hundred terms and the top portion of 2.5% from the sorted lists. This ensures that for each topic we have at least 100 terms. The combined set of selected terms is used as the feature set for the various algorithms and is of size 3,468 for Reuters-21578 and 9,325
for Reuters-2000. The average number of unique words per
document in the cross-validated training sets was reduced
from 49 to 37 for Reuters-21578 and from 137 to 121 for
Reuters-2000. After this feature selection stage we applied
all the algorithms on the same representation of documents
which are now restricted to the selected terms.
4.5 Evaluation Measures
Since the paper is concerned with both classical IR methods
and more recent statistical learning algorithms we used
two sets of performance measures to compare the effectiveness of the algorithms. First, we evaluated each of the algorithms with respect to the three losses employed by the variants of MMP. The second set of performance measures is based on measures used in evaluating document retrieval systems. Specifically, the performance measures we used in the evaluation are: recall at r, precision at r, one-error,
coverage, average-precision, and maxF1 . We also provide
precision-recall graphs. We now give formal descriptions of
the evaluation measures used in our experiments.
For describing the evaluation measures used in the experiments
we need the following definition. Given a set of prototypes and fixing the example to (x, y), we define rank(x, r) as the ranking of the topic indexed r in the list of topics sorted
according to the prototypes. Thus, the rank-value of the
top-ranked topic is 1 and of the bottom-ranked topic is k.
The performance measure for the test set is the average value
of each measure over the documents in the set.
Recall The recall value at r is the ratio between the number
of the topics from the set of relevant topics y whose rank is
at most r and the size of the relevant topics y.
Precision The precision value at r is the ratio between the
number of the topics from the set of relevant topics y whose
rank is at most r and the position r.
OneErr The one-error (abbreviated OneErr) takes the values
0 or 1. It indicates whether the top-ranked element
Algorithm    loss1 × 100   loss2   loss3 × 100
Rocchio
Perceptron   15.71   4.87   3.13
MMP l.3      19.13   0.92   0.60
Table 1: A comparison of the performance of the various algorithms on the test-set for different learning-theoretic performance measures on Reuters-21578.
Algorithm    OneErr × 100   Coverage   AvgP   maxF1
Rocchio      14.48   1.29   0.90   0.85
Perceptron    9.59   4.27   0.91   0.89
MMP l.3      14.51   0.96   0.90   0.84
Table 2: A comparison of the performance of the various algorithms on the test-set for different retrieval performance measures on Reuters-21578.
belongs to the set of relevant topics y (zero error) or not
(error of 1). Therefore, the average OneErr over a test corpus reflects the fraction of documents for which the top-ranked topic
was not from the list of relevant topics.
Coverage The coverage value reflects how far we need to go down the ranked list of topics in order to retrieve all the relevant topics,
Coverage = max_{r∈y} rank(x, r) − 1.
For convenience this definition implies that for documents with a single relevant topic a perfect ranking achieves a coverage of zero.
AvgP The average precision (abbreviated AvgP), as the
name implies, is the average precision taken at the positions
of the relevant topics,
AvgP = (1 / |y|) Σ_{r∈y} |{ s ∈ y : rank(x, s) ≤ rank(x, r) }| / rank(x, r).
The average precision is perhaps the most common performance
measure in document retrieval.
maxF1 The F1 value at r is defined as F1(r) = 2 · Precision(r) · Recall(r) / (Precision(r) + Recall(r)). The maxF1 is the maximal F1 value that can be obtained over the positions r. For further information on the F1 performance measure see [13].
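A sketch computing OneErr, Coverage and AvgP for a single document from the topic scores; function names are ours, and the averages reported in the paper are obtained by averaging these values over the test documents.

```python
def ranking_measures(scores, relevant):
    """OneErr, Coverage and AvgP for one document, as defined above.

    scores:   list of k topic scores w_r . x.
    relevant: set of indices of the relevant topics y (assumed non-empty).
    """
    order = sorted(range(len(scores)), key=lambda r: -scores[r])
    rank = {r: i + 1 for i, r in enumerate(order)}            # rank(x, r), 1-based
    one_err = 0 if order[0] in relevant else 1
    coverage = max(rank[r] for r in relevant) - 1
    # Average precision over the positions of the relevant topics.
    rel_ranks = sorted(rank[r] for r in relevant)
    avg_p = sum((i + 1) / rr for i, rr in enumerate(rel_ranks)) / len(relevant)
    return one_err, coverage, avg_p
```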
We would like to note that there are natural relations between the learning theoretic losses, loss1, loss2 and loss3, and the retrieval-based performance measures. For instance, whenever OneErr is 1 then also loss1 = 1, since if the top-ranked
Algorithm    loss1 × 100   loss2   loss3 × 100
Rocchio      70.71   12.42   3.33
Perceptron   38.86   10.43   2.64
MMP l.3      34.57    2.90   0.75
Table 3: A comparison of the performance of the various algorithms on the test-set for different learning-theoretic performance measures on Reuters-2000.
Algorithm    OneErr × 100   Coverage   AvgP   maxF1
Rocchio      24.42   9.89   0.73   0.63
Perceptron    6.04   9.45   0.87   0.86
MMP l.3       6.49   3.98   0.91   0.87
Table 4: A comparison of the performance of the various algorithms on the test-set for different retrieval performance measures on Reuters-2000.
topic is not one of the relevant topics then clearly the ranking is not perfect, and thus OneErr ≤ loss1. Also, if |y| = 1 then the Coverage and loss2 coincide.
4.6 Results
Let us first discuss the performance of the online algorithms
(the variants of MMP and Perceptron) on the training
sets. In Fig. 4 and Fig. 5 we show the performance
with respect to loss1 , loss2 and loss3 for Reuters-21578 and
Reuters-2000 respectively. For each round (new document)
we compute the cumulative loss of the algorithms divided by the number of documents processed so far, i.e., after M rounds we plot (1 / M) Σ_{i=1}^{M} loss(i). Note that we used a log-scale for the x-axis. As expected, we can see from the figures
that each version of MMP performs well with respect to the
loss measure it was trained on. In addition, the Perceptron
performs well with respect to loss1 , especially on Reuters-
21578. The relatively good performance of the Perceptron
with respect to loss1 might be attributed to the fact that the
Reuters-21578 corpus is practically single-labeled and thus
loss1 and the classication error used by the Perceptron are
practically synonymous. As we discuss in the sequel, the
performance after most of the documents have been processed
also is highly correlated with the performance of the
algorithms on unseen test data. This type of behavior indeed
agrees with the formal analysis of online algorithms [4].
The performance of the algorithms on the test sets is summarized
in four tables. A summary of the performances
with respect to the learning-theoretic and information retrieval
measures on Reuters-21578 is given in Table 1 and Table 2, respectively. An analogous summary for Reuters-
2000 is given in Table 3 and Table 4. In addition we also
provide in Fig. 6 precision-recall graphs for both corpora.
The performance of MMP and the Perceptron algorithm
with respect to the learning-theoretic losses is consistent
with their behavior on the test data: each variant of MMP
performs well with respect to the loss it employs in training.
Moving on to retrieval performance measures, we see that
Figure 6: Precision (y-axis) versus recall (x-axis) graphs for the various algorithms (Rocchio, Perceptron, MMP l.1, MMP l.2, MMP l.3) on Reuters-21578 (left) and Reuters-2000 (right).
with respect to all performance measures, with the exception
of Coverage, the best performing topic-ranking algorithm is
MMP with loss1 . For coverage MMP trained with either
loss2 or loss3 achieve the best performance. We also see
strong correlations between loss1 and OneErr and between
loss2 and Coverage. One possible explanation to the improved
performance of loss1 compared to loss2 and loss3 is
that it gives each document the same weight in the update
while the other losses prefer documents with large number
of relevant topics. We plan to investigate this conjecture,
both theoretically and empirically, in future work.
The Perceptron algorithm performs surprisingly well on
both datasets with respect to most of the performance mea-
sures. The main deficiency of the algorithm is its relatively poor performance in terms of Coverage. It achieves
the worst Coverage value in all cases. This behavior can
also be observed in the precision-recall graphs. The precision of the Perceptron algorithm for low recall values is competitive with all the variants of MMP, and even better than the variants that employ loss2 and loss3 on Reuters-21578, as long as recall is below 0.9. Alas, as the recall increases the precision of the Perceptron algorithm drops sharply, and for high recall values it exhibits the worst precision. One possible explanation for this behavior is that the Perceptron is tailored for classification and thus the implicit classification loss (the "hinge" loss) it employs is insensitive to the induced ranking.
Despite our attempts to implement a state-of-the-art version of Rocchio that takes into account phenomena like the length of the documents, Rocchio's performance was the worst. This is especially surprising since in a head-to-head comparison of Rocchio with recent variants of AdaBoost [11] the results of the two algorithms were practically indistinguishable, despite the fact that it took two orders of magnitude more time to train the latter. Amit Singhal from Google offered one possible explanation for this relatively poor performance. Rocchio was originally designed for document retrieval. Furthermore, the recent improvements that employ length normalization were tuned on TREC's document retrieval tasks. Despite its similarity in nature to document ranking, the topic ranking problem seems to exhibit different statistical characteristics and might require new adaptations.
5.
In this paper we presented a new family of algorithms called MMP for topic ranking which are simple to implement. The algorithmic approach suggested in this paper attempts to combine the formal properties of statistical learning techniques with the effectiveness of practical information retrieval algorithms. The online framework that we used to derive MMP makes the algorithm practical for massive datasets such as the new release of Reuters. We believe that our experiments show that MMP indeed makes a non-trivial step toward provably correct and practically effective methods for IR.
There are quite a few possible extensions and modifications to MMP that might further improve its effectiveness. One possible extension we are currently exploring is a better fusion of IR-motivated losses, such as the average precision, into the core of the algorithm. We have preliminary results that indicate that such an improvement is plausible. A few more straightforward, yet powerful, extensions were not discussed in this paper. In particular, MMP can be combined with kernel techniques [14, 1]. Other, more robust, methods for converting MMP to a batch algorithm (see for instance [3] and the references therein) are also possible and will be evaluated in future work.
Acknowledgments
Thanks to Amit Singhal for clarifications and suggestions on pivoted-length normalization. Thanks also to Noam Slonim for discussions, to Ofer Dekel and Benjy Weinberger for their help in pre-processing the corpora, and to Leo Kontorovich for comments on the manuscript. We also would like to acknowledge the financial support of EU project KerMIT No. IST-2000-25341 and the KerMIT group members for useful discussions.
6.
--R
An Introduction to Support Vector Machines.
Large margin classification using the perceptron algorithm.
On weak learning.
Text categorization of low quality images.
Feature selection, perceptron learning, and a usability case study for text categorization.
Relevance feedback in information retrieval.
The perceptron: A probabilistic model for information storage and organization in the brain.
Developments in automatic text retrieval.
Improved boosting algorithms using confidence-rated predictions.
Boosting and Rocchio applied to text filtering.
Pivoted document length normalization.
Information Retrieval.
Statistical Learning Theory.
--TR
On weak learning
Pivoted document length normalization
Feature selection, perceptron learning, and a usability case study for text categorization
Large margin classification using the perceptron algorithm
Boosting and Rocchio applied to text filtering
Improved Boosting Algorithms Using Confidence-rated Predictions
An introduction to support Vector Machines
Information Retrieval
--CTR
Ryan McDonald , Koby Crammer , Fernando Pereira, Flexible text segmentation with structured multilabel classification, Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, p.987-994, October 06-08, 2005, Vancouver, British Columbia, Canada
Nadia Ghamrawi , Andrew McCallum, Collective multi-label classification, Proceedings of the 14th ACM international conference on Information and knowledge management, October 31-November 05, 2005, Bremen, Germany
Shenghuo Zhu , Xiang Ji , Wei Xu , Yihong Gong, Multi-labelled classification using maximum entropy method, Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, August 15-19, 2005, Salvador, Brazil
Vaughan R. Shanks , Hugh E. Williams , Adam Cannane, Indexing for fast categorisation, Proceedings of the twenty-sixth Australasian conference on Computer science: research and practice in information technology, p.119-127, February 01, 2003, Adelaide, Australia
Yoav Freund , Raj Iyer , Robert E. Schapire , Yoram Singer, An efficient boosting algorithm for combining preferences, The Journal of Machine Learning Research, 4, p.933-969, 12/1/2003
Fernando Ruiz-Rico , Jose Luis Vicedo , María-Consuelo Rubio-Sánchez, NEWPAR: an automatic feature selection and weighting schema for category ranking, Proceedings of the 2006 ACM symposium on Document engineering, October 10-13, 2006, Amsterdam, The Netherlands
Franca Debole , Fabrizio Sebastiani, An analysis of the relative hardness of Reuters-21578 subsets: Research Articles, Journal of the American Society for Information Science and Technology, v.56 n.6, p.584-596, April 2005
Efficient Learning of Label Ranking by Soft Projections onto Polyhedra, The Journal of Machine Learning Research, 7, p.1567-1599, 12/1/2006 | online learning;category ranking;perceptrons |
564406 | Comparing cross-language query expansion techniques by degrading translation resources. | The quality of translation resources is arguably the most important factor affecting the performance of a cross-language information retrieval system. While many investigations have explored the use of query expansion techniques to combat errors induced by translation, no study has yet examined the effectiveness of these techniques across resources of varying quality. This paper presents results using parallel corpora and bilingual wordlists that have been deliberately degraded prior to query translation. Across different languages, translingual resources, and degrees of resource degradation, pre-translation query expansion is tremendously effective. In several instances, pre-translation expansion results in better performance when no translations are available, than when an uncompromised resource is used without pre-translation expansion. We also demonstrate that post-translation expansion using relevance feedback can confer modest performance gains. Measuring the efficacy of these techniques with resources of different quality suggests an explanation for the conflicting reports that have appeared in the literature. | INTRODUCTION
Cross-Language Information Retrieval (CLIR) systems seek to
identify pertinent information in a collection of documents
containing material in languages other than the one in which the
user articulated her query. Intrinsic to the problem is a need to
transform the query, document, or both, into a common
terminological representation, using available translation
resources. Thus, system performance is necessarily limited by the
caliber of translations; clearly resources with broader coverage are
preferable. High quality linguistic resources are typically difficult
to obtain and exploit, or expensive to purchase. Participants in the
major international CLIR evaluations such as CLEF, NTCIR, and
TREC ([29], [30], [31]) frequently express a desire for better, and
preferably low-cost, translation resources. The large multilingual
collections available on the Internet have motivated researchers to
attempt mining unstructured sources of linguistic data (e.g.,
Resnik [24]), fueled by the natural expectation that the use of
more comprehensive resources will yield improvements in cross-language
performance. It has even been suggested that CLIR
evaluations may be measuring resource quality foremost (or
equivalently, financial status) [7]. Scanning the papers of CLIR
Track participants in TREC-9 and TREC-2001, we observe a
trend toward the fusion of multiple resources in an attempt to
improve lexical coverage. Clearly a need for enhanced resources
is felt.
Typically, three types of resources are exploited for translingual
mappings: bilingual wordlists (or machine readable dictionaries);
parallel texts; and machine translation systems. The favorite
appears to be bilingual wordlists, which are widely available, can
be easy to use (especially if only word-by-word translation is
attempted), and which preserve information such as alternate
translations. Techniques using aligned parallel texts to produce
statistical translation equivalents have become widely used since
the publication of a method using Latent Semantic Indexing by
Landauer and Littman [16]; however, these corpora are difficult to
obtain and must first be aligned and indexed. Machine translation
(MT) systems are perhaps the easiest approach for query
translation, but may be computationally prohibitive for document
translation. MT systems typically produce only a single candidate translation, and thus some information of potential use to a retrieval system is lost. For an overview of translation methods in CLIR,
see Oard and Diekema [19].
Regardless of the type of resource(s) used, several problems
remain. Pirkola et al. [21] outline the major issues from a
dictionary-based perspective; however, many of these same
concerns arise when corpora or MT systems are used. They list
difficulties with untranslatable terms, variations in inflectional
forms, problems with phrase identification and translation, and
translation ambiguity between the source and target languages as
the main problems.
To cope with the paucity of translation resources and their
inherent limitations, various techniques have been proposed.
Query expansion is routinely used in monolingual retrieval, either
by global methods such as thesauri, by local methods such as
pseudo relevance feedback (PRF), or by local context analysis
(LCA) [26]. In a multilingual setting, expansion can take place
prior to translation, afterwards, or at both times.
The effect of resource quality on retrieval efficacy has received
little attention in the literature. This study explores the
relationship between the quality of a translation resource and
CLIR performance. The effectiveness of both corpus and
dictionary-based resources was artificially lowered by randomly
translating different proportions of query terms, simulating
variability in the coverage of resources. We first discuss prior
related work and then present our experimental design which
explores multiple query expansion techniques. The remainder of
the paper is devoted to an analysis of the empirical results.
2. PREVIOUS WORK
Regarding translation resources for CLIR, we believe that two
points are widely agreed upon:
. resources are scarce and difficult to use; and
. resources with greater lexical coverage are preferable.
Because of the first point, the rarity of electronic sources for
translation, investigators may be drawn to use the resources most
readily available to them, rather than those best suited for
bilingual retrieval. The second point is widely held, but to our
knowledge, in only two cases has the benefit of increased lexical
coverage been quantified [8], [27]; however, many different
resources have been pair-wise compared extrinsically based on
performance in bilingual retrieval tasks (e.g., [14], [18], [28]).
Degradation of documents and queries has been examined in two
of the TREC evaluations, but only in a monolingual setting.
Retrieval of garbled text documents was investigated to simulate a
task where documents might contain numerous errors, such as if
documents were created by optical character recognition [12].
And in TREC-9, short query forms containing realistic spelling
errors were provided to test the ability of systems to cope with
such mistakes. Also in TREC-9, the Query track examined the
effects of query variability on system performance, but queries
were re-stated, rather than purposefully weakened [5].
Query expansion based upon an entire query rather than on a
candidate term's similarity to individual query search terms has
been shown to be effective in monolingual settings [23].
Similarly, blind relevance feedback has been shown to be
remarkably effective, especially when an initial query formulation
lacks terms present in many relevant documents [25]. This might
be the case when a query is very short, or when specific domain
terminology (e.g., medicine, engineering) is used.
In a multilingual setting it seems plausible that pre-translation
expansion would indeed be helpful. If a resource contains a
restricted number of translatable search terms, then the
degradation arising out of the translation process will cause many
important query words to be unavailable for document ranking.
But, if many words (or word forms) related to the query are
translated, then the ultimate number of terms available for
searching the target language is greater. This method presumes
that the set of translated terms still represents the query semantics
(i.e., the user's information request is not significantly altered by
expansion and translation). If query translation does not produce a
query with many coordinate terms, additional expansion through
relevance feedback can likely improve precision as well as recall.
Many positive reports regarding the benefits of query expansion
for CLIR have been reported; however, negative reports have
been made frequently as well. We believe that differences in test
collections, retrieval systems, language pairs, and translation
resources obfuscate the conclusions of prior studies.
Ballesteros and Croft explored query expansion methods for CLIR
and reported "combining pre- and post-translation expansion is
most effective and improves precision and recall." [1] The use of
both techniques led to an improvement from 42% to 68% of
monolingual performance in mean average precision. The
improvement from application of both methods was appreciably
greater than the use of only pre- or post-translation expansion.
Their work only examined a single language pair (English to
Spanish), and relied on the Collins's English-Spanish electronic
dictionary.
In a subsequent study [2], Ballesteros and Croft examined the use
of co-occurrence statistics in parallel corpora to select translations
from a machine-readable dictionary. Application of this technique
was very effective and boosted bilingual performance from 68%
to 88% of a monolingual baseline. Here they suggested that post-translation
expansion helps remove errors due to incorrect
translations.
More recently, Gey and Chen wrote an overview of the TREC-9
CLIR track, which focused on using English queries to search a
Chinese news collection [9]. Their summaries of work by several
top-scoring track participants reveal a disconcerting lack of
consistency as to the merits of query expansion methods:
. 10% improvement in average precision with either pre-translation
or post-translation expansion, but only short
queries benefited from the use of both
. "Pre-translation query expansion did not help"
. "The best cross-language run did not use post-translation
expansion"
. "Pre-translation expansion yielded an improvement of
42% over an unexpanded base run"
. "The best run used both pre- and post-translation
expansion"
. "Post-translation query expansion yielded little
improvement"
With inconsistent results like these, it is impossible to ascertain
what techniques do and do not work. Each of the six systems
referred to above used different translation resources, and we
believe this amplifies the confusion. Until the effects of poor
lexical coverage are better understood, shadows may hang over
many research results unless the quality of translation resources
employed is first ascertained. In an analysis of language resources
used in the CLEF 2000 campaign [10], Gonzolo suggested
measurement of resources and retrieval strategies in isolation, a
recommendation we endorse.
In the TREC-2001 cross-language evaluation, which focused on
English to Arabic retrieval, the system with the highest bilingual
performance made use of several unique translation resources,
which seems to agree with the notion that greater lexical coverage
is helpful. However, it is impossible to discriminate between the
benefits of the retrieval system that was employed and the
resources utilized. Interestingly, the authors reported that pre-translation
expansion was detrimental when post-translation
relevance feedback was also applied, contradicting the results
reported by Ballesteros and Croft [28].
A few investigations have examined the effect of resource size on
CLIR performance. Two reports have measured retrieval
performance as a function of resources for English-Chinese
retrieval. Xu and Weischedel plotted performance on the TREC-
5,6 Chinese tasks using a lexicon mined from parallel texts [27].
They used lexicons of a fixed size, where a lexicon of size n
contained mappings for the n most frequent English words;
bilingual performance was not improved for sizes greater than
20,000 terms. Franz et al. examined three parallel collections for
use on the TREC-9 Chinese topics [8]. Using short queries, they
found that out-of-vocabulary rate was more important than
domain, dialect, or style in predicting system performance.
For the CLEF-2001 workshop, Kraaij examined the relative
merits of an MT system, a lexical database, and a parallel corpus,
and emphasized the benefits that can be obtained from combining
such disparate translation resources [14]. With the use of all three
resources he observed bilingual performance 98% of a
monolingual baseline for English to French retrieval. Separate use
of a dictionary, a corpus, and an MT system yielded performance
only 73%, 90%, and 92% of a monolingual baseline. He offered
the opinion that "the mean average precision of a run is
proportional to the lexical coverage [of the translation
resources]", but this statement appears to be based only on a
qualitative examination of why performance on certain topics
differed depending on the resources and language pairs used.
The results reported in the present paper confirm Kraaij's
conjecture and quantify the degree to which inferior resource
quality affects CLIR performance and under which circumstances
query expansion techniques can mitigate translation errors due to
poor lexical coverage.
3. EXPERIMENTS
3.1 Test Collection
The CLEF-2001 test collection was used for all of our
experiments (see [20] for a description). The collection contains
roughly 1 million newspaper articles published in 1994 or 1995
(see Table 1).
Table 1. CLEF-2001 Document Collection
Language   Documents   Unique words
Dutch      190,604     692,745
English    110,282     235,710
French     87,191      479,682
German     225,371     1,670,316
Italian    108,578     1,323,283
The Bilingual Track in the CLEF-2001 evaluation permitted a
variety of query languages to be used to search either the Dutch or
English collections. Here we only explored the five language pairs
Dutch, French, German, Italian, and Spanish, to English. The test
suite contains fifty topic statements, but only forty-seven of the
topics contain a relevant English article. A mixture of topics
including local, national, and international subjects was selected.
In each language topic statements were crafted by native speakers
and significant effort was expended to ensure that the intended
topic semantics were preserved in the respective languages.
3.2 Document and Query Processing
Document processing was designed to require minimal use of
language specific resources such as stopword lists, lexicons,
stemmers, lists of phrases, or manually-built
thesauri, so each language's sub-collection was handled much the
same. Punctuation was eliminated, letters were down-cased, and
diacritical marks were preserved. Thus documents and queries are
represented as bags of unnormalized word forms. Queries were
tokenized in the same fashion as documents, but obvious query
structure (e.g., 'find documents that' or 'relevant documents must
contain') was removed. We used a retrieval system developed in-house
for all of our experiments. The system uses a statistical
language model of retrieval with Jelinek-Mercer smoothing of
document term frequencies. See [3], [13], and [22] for more
details on these models.
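As a concrete illustration only (not the authors' actual implementation), the sketch below scores a document by log query likelihood under Jelinek-Mercer smoothing; the smoothing weight LAMBDA, the tokenization, and the data structures are assumptions, since the paper does not report its parameter settings.

import math
from collections import Counter

LAMBDA = 0.5  # assumed smoothing weight; the paper does not state the value used

def jm_score(query_terms, doc_terms, collection_tf, collection_len):
    # Log query likelihood: sum_t log( LAMBDA*P(t|d) + (1-LAMBDA)*P(t|C) )
    doc_tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for t in query_terms:
        p_doc = doc_tf[t] / doc_len if doc_len else 0.0
        p_col = collection_tf.get(t, 0) / collection_len
        p = LAMBDA * p_doc + (1.0 - LAMBDA) * p_col
        if p > 0.0:
            score += math.log(p)
    return score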
To perform pre-translation expansion, we relied solely on local
methods based on an initial retrieval from the appropriate source
language sub-collection of the CLEF-2001 documents. For
example, to investigate pre-translation expansion for Italian to
English retrieval, we would first do a monolingual retrieval in the
Italian collection (i.e., La Stampa and SDA-IT). Using the top
ranked retrieved documents as positive exemplars and
presuming the lowest 75 ranked out of 1000 were irrelevant, we
produced a set of 60 weighted terms for each query that included
the original query terms; this is analogous to both query
expansion and query term re-weighting as described in Harman
[11]. It should be pointed out that the sub-collections in each
language of the CLEF-2001 evaluation are contemporaneous, so
this set of expansion terms might be somewhat better than an
arbitrary monolingual collection. We did not investigate global
methods for query expansion in the source language because this
would have required a thesaurus in each source language that we
wished to investigate.
When a query was expanded after translation, we again relied on
pseudo relevance feedback based on terms extracted from
retrieved target language documents. As with pre-translation
expansion, we identified weighted terms for use as an
expanded query and searched the target language (English)
collection for a second time.
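The exact term-scoring formula used for feedback is not given in the paper, so the sketch below is only one plausible realization of the step just described: candidate terms are scored by contrasting their frequency in the presumed-relevant documents against the presumed-irrelevant ones. The number of top documents used as positive exemplars is not stated in the text, so n_pos is an arbitrary placeholder; the 75 presumed negatives and 60 output terms follow the figures quoted above.

from collections import Counter

def expand_query(query_terms, ranked_docs, n_pos=20, n_neg=75, n_terms=60):
    # ranked_docs: token lists, best-ranked first (assume ~1000 retrieved docs)
    pos, neg = Counter(), Counter()
    for doc in ranked_docs[:n_pos]:
        pos.update(doc)
    for doc in ranked_docs[-n_neg:]:
        neg.update(doc)
    # Contrastive score (an assumption, not the authors' formula)
    scores = {t: pos[t] / (1.0 + neg[t]) for t in pos}
    expanded = dict(sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n_terms])
    for t in query_terms:                      # always keep the original query terms
        expanded.setdefault(t, max(expanded.values(), default=1.0))
    return expanded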
3.3 Translation Resources
For reasons of convenience we only examined corpus- and
dictionary-based translation - it was not clear to us how to best
degrade commercial translation software since many packages are
optimized for grammatically correct sentences rather than word-
by-word translation. Both the parallel corpus and the multilingual
wordlist were extracted from the Web. These resources were not
validated and may contain numerous errors.
We collected a variety of bilingual wordlists where English was
one of the languages involved. Translation equivalents for over
ninety thousand English words are available in at least one of
forty or so languages. We did not attempt to utilize or reverse
engineer web-based interfaces to dictionaries, but rather only
sought wordlists in the public domain, or whose use appeared
unrestricted; the Ergane dictionaries [32] and files from the
Internet Dictionary Project [34] are the largest sources. We used
the ABET extraction tool to convert these disparate wordlists to
machine-readable form [17].
When translating a word using a bilingual wordlist we simply use
all of the alternative mappings for the word, and each mapping is
weighted using the same query term frequency as the original
word. In our wordlist the mean number of entries per term by
language is: 3.01 for Dutch; 2.08 for French; 1.58 for German;
1.52 for Italian; and 1.57 for Spanish.
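A minimal sketch of the word-by-word wordlist translation just described: every available mapping of a source term is kept, each weighted by the query frequency of the original term, and terms with no entry are left untranslated. The dictionary format here is an assumption.

from collections import Counter

def translate_with_wordlist(query_terms, wordlist):
    # wordlist: {source_word: [english_translation, ...]}
    translated = Counter()
    for term, tf in Counter(query_terms).items():
        targets = wordlist.get(term)
        if targets:
            for t in targets:            # use all alternative mappings
                translated[t] += tf      # weight = original query term frequency
        else:
            translated[term] += tf       # keep the untranslated form
    return translated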
We also built a set of aligned corpora using text mined from the
Europa site [33]; specifically, we downloaded eight months of the
Official Journal of the European Union (December 2000 through
August 2001). The Journal is published in eleven languages in
PDF format. We converted the PDF formatted documents to text
encoded in ISO-8859-1, aligned the documents using simple rules
for whitespace and punctuation with Church's char_align program
[6], and then indexed the data. It was easiest to construct a
separate aligned corpus for each non-English language, rather
than to build a single, multiply aligned collection. The resulting
collection contains roughly 100MB of text in each language. The
number of words with at least one English translation produced by
the two resources is shown in Table 2. It should be noted that
many of the terms extracted from the aligned corpus are names or
numbers that would not normally be contained in a dictionary, so
the number of entries reported here is not a clear indication of a
superior resource.
Table 2. Bilingual Resource Size (in terms)
Language   Wordlist   Corpus
Dutch      15,591     184,506
French     23,322     135,454
German     94,901     224,961
Italian    18,461     138,890
When translating a word using an aligned corpus, we select the
single best candidate translation.
3.4 Experimental Design
We now describe the experiments we undertook. We focused only
on word-by-word query translation because of its simplicity. Our
goal is to compare four methods of query expansion or
augmentation under a spectrum of conditions corresponding to
differing quality translation resources. The four methods
examined are no use of expansion, pre-translation expansion only,
post-translation only, and the use of both pre- and post-translation
expansion. Figure 1 illustrates the procedure we followed.
Previously we mentioned that only 47 of the CLEF-2001 topics
contain a relevant English article; however, 12 additional topics
contain only one or two relevant documents. This may be
attributable to the design goals of the evaluation: a certain number of topics were sought that focused on local subjects, and the
American-based LA Times is less likely to report on these issues.
Since relevance feedback is only expected to enhance retrieval
performance when a reasonable number of germane documents
are present in the target language collection, we chose to evaluate
our runs using the 35 topics with three or more relevant
documents. Topics 44, 52, 54, 57, 59, 60, 62, 63, 67, 73, 74, 75,
78, 79, and 88 were discarded. Kwok and Chan [15] developed a
technique designed to provide for good query expansion in this
situation (where a target collection only has a small number of
relevant documents), but we did not attempt it here. Their idea is
based on searching a larger collection that is expected to contain
many more documents about that domain; they termed the
technique 'collection enrichment'.
We considered two methods for impairing our translation
resources. The first method was the simple idea of physically
creating new wordlists or corpora with missing lexical entries.
This seemed laborious, so instead we opted for simulating weaker
resources by randomly declining to translate a given percentage of
query terms. In other words, for each term, we would generate a
random number between 0 and 1, and only if the value was
greater than the degree of degradation did we attempt to find a
target language mapping. We could have removed a percentage
of all lexical entries from the resource, but since only a small
percentage of the terms occur in the CLEF queries, this would be
counterproductive. The same random seeds were used for both
corpus or wordlist translations. In practice a language resource
would likely have more mappings for common terms and fewer
entries for proper nouns or obscure terms. We did not attempt to
model this, but dropping low frequency words is probably a better
idea than randomly omitting query terms.
Starting with no degradation, we removed terms in increments of
10%, up to complete degradation. When a decision was made not
to translate a given term, the untranslated form was left in the
query as a potential translation. This is a common practice, and is
motivated by the observation that in related languages, many
morphological cognates exist. Thus, even when a resource is
100% degraded, corresponding to a state in which no translation
resource is available, it is still possible to retrieve relevant
documents.
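The degradation procedure can be sketched as follows: a uniform random draw per query term decides whether translation is attempted at all, and a term that is not translated (or that has no entry) is retained verbatim so that cognates can still match. The helper translate_term and the fixed seed value are illustrative; the text only says that the same seeds were used for both the corpus and the wordlist translations.

import random

def degrade_and_translate(query_terms, translate_term, degradation, seed=0):
    # degradation in [0, 1]; at 1.0 no translation is ever attempted.
    # translate_term(term) -> list of target-language terms (possibly empty).
    rng = random.Random(seed)
    output = []
    for term in query_terms:
        if rng.random() > degradation:
            targets = translate_term(term)
            if targets:
                output.extend(targets)
                continue
        output.append(term)              # leave the untranslated form in the query
    return output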
Which terms are omitted from a query depends on the particular random seed; this is expected to increase the variance in our
evaluation measures. Averaging over a number of trials, each
using a different seed, would provide a clear solution to this
problem. We decided against this for reasons of expediency;
otherwise the number of runs would have been unmanageable. We
chose to focus primarily on mean average precision to evaluate
our results, but we collected statistics for precision at low recall
levels as well.
Figure 1. Overview of Experiments Performed. Three query forms were considered in each of five source languages: Dutch, French, German, Italian, and Spanish. Optionally, pre-translation expansion was used, followed by translation. There were two methods for translation: use of a bilingual dictionary using all available translations, and statistical translation using an aligned parallel corpus. Eleven versions of each resource were simulated, corresponding to different measures of lexical coverage. After translation, retrieval was performed on the target language collection, which was English, and optionally, pseudo relevance feedback was applied. A total of 1320 runs were created. (Pipeline: source language query → expand query → translate with degraded resource → retrieve in target language → apply relevance feedback → ranked document list.)
In Table 3 a monolingual baseline is compared to bilingual queries at four levels of resource impairment and the effect of pre-translation expansion is shown. Italian is used as the source language and the parallel corpus is used to map terms into English for the short version of query 66, "Russian Withdrawal from Latvia" (Italian: "Ritiro delle truppe russe dalla Lettonia"). At 0% degradation pre-translation expansion is dramatically better due to several poor translations; at 40%, these translations are dropped because of the random resource degradation, so performance
actually rises; however, at 80% only the term 'withdrawal' is
correctly translated without expansion, so expansion is critical
here. We note that at 100% degradation we still obtain a
reasonable degree of performance, but only when expansion is
used. This is due to cognates (like "estonia" and "russia") that
were extracted from Italian articles during source language query
expansion, but which require no translation into English.
Table 3. Illustration of the effects of pre-translation expansion and resource degradation
Degradation / Query     Recall at 1000 docs   Average Precision   Precision at docs
English Monolingual
  {latvia=1, russian=1, withdrawal=1}
0% Degradation          11                    0.1176              0.1
  {communities=1, directive=1, latvia=1, russian=1, ...
With Expansion          11                    0.4791              0.7
  {agreements=103, armed=74, august=134, baltic=177, countries=112, estonia=144, foreign=95, incubators=73, latvia=76, latvian=135, line=215, lithuania=135, living=82, maintain=92, military=112, minorities=77, near=71, negotiations=78, news=109, north=76, pension=73, pensioners=71, press=85, radar=94, reported=76, rights=86, russia=212, russian=74, service=84, soldiers=108, station=84, suspended=76, tallinn=77, troops=926, unit=70, voltage=75, warsaw=101, ...
40% Degradation         11                    0.6982              0.8
  {delle=1, latvia=1, russian=1, troops=1, withdrawal=1}
With Expansion          11                    0.4400              0.5
  {31=119, agosto=134, agreement=83, baltic=177, estonia=144, incubators=73, latvia=76, latvian=135, line=215, maintain=92, military=112, moscow=199, near=71, negotiations=78, news=109, north=76, pension=73, radar=94, reported=76, rights=86, russia=212, russian=74, service=84, soldiers=108, stampa=85, station=84, suspended=76, tallinn=77, troops=926, unit=70, voltage=75, warsaw=101, withdrawal=958, within=135}
80% Degradation         11                    0.0069              0.0
  {delle=1, directive=1, russe=1, withdrawal=1}
With Expansion          11                    0.5077              0.6
  {31=119, agosto=134, baltic=177, dislocate=73, estone=144, estonia=238, near=71, negotiations=78, news=109, pensione=73, radar=94, reported=76, riga=215, russe=1019, russia=212, russian=74, russo=118, service=84, stampa=85, station=84, tallinn=77, troops=926, unit=70, withdrawal=958}
100% Degradation        0                     0.0000
  {dalla=1, delle=1, russe=1}
With Expansion
  {31=119, agosto=134, baltic=148, dislocate=73, estone=144, estonia=238, news=109, pensione=73, radar=94, riga=215, russa=74, russe=1019, russi=193, russia=212, russo=118, service=84, stampa=85, stazione=84, tallinn=77}
4. RESULTS
The resources used for translation in our experiments are
uncurated resources derived from the Web. Because the adequacy
of these resources for cross-language retrieval has not previously
been demonstrated, we first assessed the performance of the
uncompromised resources. Only if a sufficient level of
performance was seen would our experiments be meaningful;
otherwise concern about whether these conclusions hold for
superior resources would arise.
A baseline of English monolingual performance is shown in Table
4, for the three query forms (title-only (or T), title+description (or
TD), and title+description+narrative (or TDN)) with and without
the application of pseudo relevance feedback.
Table 4. Mean Average Precision of a Monolingual Baseline
           T        T w/RF   TD       TD w/RF  TDN      TDN w/RF
English    0.3578   0.4067   0.4383   0.4284   0.4825   0.4780
In
Table
5 we report the percentage of mean average precision
achieved by each bilingual run performed with intact translation
resources when pre-translation expansion was not used. For our
English baselines, relevance feedback improved the title-only queries, but made little difference when longer topic statements were used. Each column in the table (below) is
compared to the corresponding English run. We observe that
when the parallel corpus is used for translation, an average of
between 68% and 75% relative performance is obtained,
depending on the run condition; with our dictionary, only 35% to
59% is seen on average. The dictionary appears to be an inferior
resource, but the lower performance could also be attributable to
our failure to normalize word forms. Longer topic statements fare
better, and relevance feedback is somewhat helpful. We point out
that pre-translation query expansion was not used in the table
below. Given the lower performance when using the dictionary
for translation, we must be cautious in drawing conclusions from
those data.
Table 5. Bilingual Performance with Uncompromised Resources (percentage of monolingual performance)
                    T      T w/RF   TD     TD w/RF   TDN    TDN w/RF
Dutch     Dict.     43.9   55.2     26.3   35.3      24.5   41.4
French    Dict.     57.2   57.4     48.7   60.7      61.3   78.9
German    Dict.     42.3   37.5     26.1   33.3      38.7   43.0
Italian   Dict.     35.5   51.0     33.4   48.2      39.6   61.2
Spanish   Dict.     51.2   51.8     41.2   54.2      57.6   71.6
Mean      Dict.     46.0   50.6     35.1   46.3      44.3   59.2
Now we get to the heart of the matter - addressing the question of
how performance worsens as a translation resource is degraded.
Figure
2 shows the performance in an agglutinative language,
Dutch; and retrieval in Spanish is illustrated in Figure 3. For each
language, six conditions are shown corresponding to the use of T,
TD, or TDN topic statements with either corpus- or dictionary-based
translation.
Figure
2. Effectiveness of expansion techniques as a function of resource degradation for the Dutch topics. Going from left to right,
the three plots on the top row used the title-only, title+description, and title+description+narrative topic statements, respectively,
and the parallel corpus for translation. Dictionary-based translation was used for the plots on the second row. Each plot shows the
performance under four conditions: no expansion; only pre-translation expansion; only post-translation expansion; and both pre-and
post-translation expansion.
Figure 3. Effectiveness of expansion techniques as a function of resource degradation for the Spanish topics. The plots are arranged as in the previous figure. (In each plot the x-axis is the degree of degradation and the y-axis is mean average precision, with one curve per expansion condition: None, Pre, Post, Both.)
4.1 No Expansion
Looking at Figures 2 and 3, we first note that retrieval
performance drops linearly with decreased lexical coverage when
no expansion is performed, confirming Kraaij's conjecture. The
decrease depends on the caliber of the resource (the dictionary
plots are noticeably worse), and on the length of the query.
Unsurprisingly, longer queries perform better: they have further to
fall when a weaker resource is used.
4.2 Post-Translation Expansion Alone
We find that the use of blind relevance feedback consistently
increases the mean average precision by a modest amount. This
occurs in each of the five language pairs and across variations in
the lexical coverage of the different translation resources.
4.3 Pre-Translation Expansion Alone
Pre-translation expansion is tremendously useful across all levels
of degradation. At higher levels of degradation, gains between
200 and 300% are realized. Only when a comprehensive
translation resource is used, or when no comparable expansion
collection is available, would we expect to see no benefit from
expansion. Therefore, we recommend that this technique be
applied whenever gains in precision justify the computational and
procedural complexity of automated query expansion.
Amazingly, with no resource at all (i.e., the situation when a
resource is 100% degraded), pre-translation expansion alone can
result in better performance than when an uncompromised
resource is used without expansion. This follows earlier work by
Buckley et al. [4] that viewed English as "misspelled French" and
attempted bilingual retrieval using rules for spelling correction
and reliance on cognate matches. Pre-translation expansion
appears to multiply the number of cognates useful for retrieval in
related languages.
4.4 Pre- and Post-Translation Expansion
Finally, in agreement with the work cited earlier by Ballesteros
and Croft, we confirm that a combination of pre- and post-translation
expansion often yields the greatest performance.
However, pre-translation expansion is responsible for the greatest
gains. We see an improvement of approximately 10% - 15% when
relevance feedback is also applied. This occurs either when the
inferior resource, the wordlist, is used, or at high levels of
degradation when the parallel corpus is used for translation.
4.5 Results in Other Languages
Figures
2 and 3 illustrated the detriment that occurs when a
weaker translation resource is used, along with the ability of query
expansion to ameliorate the losses due to poor lexical coverage, in
Dutch and Spanish. The same trends hold in French, German, and
Italian. A comparison of expansion techniques at four levels of
lexical coverage is shown in Table 6.
The table shows the mean average precision experienced with
corpus-based translation and TD topics. The highlighted cells
indicate when an increase in performance using an expansion
technique was statistically significant at the 95% confidence level
(Wilcoxon test). The use of both pre-translation and post-translation
expansion is almost always better, but at low levels of
degradation, pre-translation expansion alone sometimes
outperforms the combination. With high quality resources, many
of the expansion terms will be correctly translated, and so gains
that normally would occur by finding words related to, but not
present in the initial query, using relevance feedback, are found
instead by the initial feedback from the source language.
Table 6. Effects of Corpus Degradation on Expansion Utility
0% 30% 70% 100%
None
Post 0.3067 0.2643 0.1548 0.0697
Dutch
Both 0.3640 0.3439 0.2529 0.2113
Post 0.3467 0.2964 0.2907 0.1451
French
Both
Post 0.3009 0.2566 0.1717 0.1135
German
Both 0.3448 0.2974 0.3043 0.2440
Post
Italian
Both
Post 0.3253 0.2950 0.2583 0.1018
Both
4.6 Limitations
To consider a breadth of source languages, query lengths, and
expansion methods, some compromises were made; these should
be considered in evaluating our results. Such factors include using
a simple method for query term translation (unbalanced
translation without translation of multiword units), reliance on
contemporaneous newsprint collections for expansion, and use of
a single random seed when selecting query terms not to translate.
5. CONCLUSIONS
In this paper we have demonstrated empirically the intuitive
notion that bilingual retrieval performance drops off as the lexical
coverage of translation resources decreases, and we confirmed
that the relationship is approximately linear. Moreover, by using
degraded translation resources we presented a framework to
discover under which circumstances traditional query expansion
techniques prove most beneficial.
We strongly recommend the use of pre-translation expansion
when dictionary- or corpus-based query translation is performed;
in some instances this expansion can treble performance.
However, the computational expense and availability of
comparable expansion collections should be considered.
Additional relevance feedback in the target language is often
useful, and can provide an additional 10-15% benefit. However,
when high quality (i.e., comprehensive) resources are available,
little gain is likely to occur. Differences in resource quality may
account for disagreeing reports on the effectiveness of query
expansion in cross-language retrieval.
We also demonstrated that even with very poor cross-language
resources, good performance is still feasible when pre-translation
expansion is used. This result is particularly important because it
suggests that translingual retrieval in low-density languages will
benefit significantly from such expansion.
6.
--R
'Phrasal Translation and Query Expansion Techniques for Cross-Language Information Retrieval.' In the
'Resolving Ambiguity for Cross-language Retrieval.' In the
'Using Clustering and Super Concepts within SMART: TREC-6.' In E.
'The TREC-9 Query Track.' In E.
'Char_align: A program for aligning parallel texts at the character level.' In the
'May the Best Team Win: Language Resources in CLIR.' Position paper at the CLEF-2000 workshop
'TREC-9 Cross-Language Information Retrieval (English - Chinese) Overview.' In E.
'Language Resources in Cross-Language Text Retrieval: A CLEF Perspective.' In Carol Peters
'Relevance Feedback Revisited.' In the
'Overview of the Fourth
'Using Language Models for Information Retrieval.' Ph.
'TNO at CLEF-2001: Comparing Translation Resources.' To appear in Carol Peters
'Improving Two-Stage Ad-Hoc Retrieval for Short Queries.' In the
'Fully automated cross-language document retrieval using latent semantic indexing.' In the
'Converting On-Line Bilingual Dictionaries from Human-Readable to Machine-Readable Form.' In these
'JHU/APL Experiments at CLEF: Translation Resources and Score Normalization.' To appear in Carol Peters
'Cross-Language Information Retrieval.' In M.
'Dictionary- Based Cross-Language Information Retrieval: Problems, Methods, and Research Findings.' In Information Retrieval
'A Language Modeling Approach to Information Retrieval.' In the
'Concept Based Query Expansion.' In the
'Improving Retrieval Performance by Relevance Feedback.' In the Journal of the American Society for Information Science
'Query Expansion Using Local and Global Document Analysis.' In the
'Cross-lingual Information Retrieval Using Hidden Markov Models.' In the
'TREC 2001 Cross-lingual Retrieval at BBN.' In TREC-2001 Notebook Papers
Forum, http://www.
Project
Conference, http://trec.
http://dictionaries.
http://www.
--TR
Relevance feedback revisited
Concept based query expansion
Query expansion using local and global document analysis
Phrasal translation and query expansion techniques for cross-language information retrieval
Resolving ambiguity for cross-language retrieval
Improving two-stage ad-hoc retrieval for short queries
A language modeling approach to information retrieval
Information retrieval as statistical translation
Quantifying the utility of parallel corpora
Converting on-line bilingual dictionaries from human-readable to machine-readable form
Dictionary-Based Cross-Language Information Retrieval
Language Resources in Cross-Language Text Retrieval
TNO at CLEF-2001
JHU/APL Experiments at CLEF
--CTR
Paola Virga , Sanjeev Khudanpur, Transliteration of proper names in cross-lingual information retrieval, Proceedings of the ACL workshop on Multilingual and mixed-language named entity recognition, p.57-64, July 12,
Paul Clough , Mark Sanderson, Measuring pseudo relevance feedback & CLIR, Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, July 25-29, 2004, Sheffield, United Kingdom
Gina-Anne Levow, Issues in pre- and post-translation document expansion: untranslatable cognates and missegmented words, Proceedings of the sixth international workshop on Information retrieval with Asian languages, p.77-83, July 07-07, 2003, Sapporo, Japan
Jiang Zhu , Haifeng Wang, The effect of translation quality in MT-based cross-language information retrieval, Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL, p.593-600, July 17-18, 2006, Sydney, Australia
Tuomas Talvensaari , Martti Juhola , Jorma Laurikkala , Kalervo Järvelin, Corpus-based cross-language information retrieval in retrieval of highly relevant documents: Research Articles, Journal of the American Society for Information Science and Technology, v.58 n.3, p.322-334, February 2007
Tuomas Talvensaari , Jorma Laurikkala , Kalervo Järvelin , Martti Juhola , Heikki Keskustalo, Creating and exploiting a comparable corpus in cross-language information retrieval, ACM Transactions on Information Systems (TOIS), v.25 n.1, p.4-es, February 2007
Empirical studies on the impact of lexical resources on CLIR performance, Information Processing and Management: an International Journal, v.41 n.3, p.475-487, May 2005
Gina-Anne Levow , Douglas W. Oard , Philip Resnik, Dictionary-based techniques for cross-language information retrieval, Information Processing and Management: an International Journal, v.41 n.3, p.523-547, May 2005
Paul Mcnamee , James Mayfield, Character N-Gram Tokenization for European Language Text Retrieval, Information Retrieval, v.7 n.1-2, p.73-97, January-April 2004
Jialun Qin , Yilu Zhou , Michael Chau , Hsinchun Chen, Multilingual Web retrieval: An experiment in English-Chinese business intelligence, Journal of the American Society for Information Science and Technology, v.57 n.5, p.671-683, March 2006
Wessel Kraaij , Jian-Yun Nie , Michel Simard, Embedding web-based statistical translation models in cross-language information retrieval, Computational Linguistics, v.29 n.3, p.381-419, September
Kazuaki Kishida, Technical issues of cross-language information retrieval: a review, Information Processing and Management: an International Journal, v.41 n.3, p.433-455, May 2005 | query expansion;query translation;cross-language information retrieval;translation resources |
564412 | Document clustering with committees. | Document clustering is useful in many information retrieval tasks: document browsing, organization and viewing of retrieval results, generation of Yahoo-like hierarchies of documents, etc. The general goal of clustering is to group data elements such that the intra-group similarities are high and the inter-group similarities are low. We present a clustering algorithm called CBC (Clustering By Committee) that is shown to produce higher quality clusters in document clustering tasks as compared to several well known clustering algorithms. It initially discovers a set of tight clusters (high intra-group similarity), called committees, that are well scattered in the similarity space (low inter-group similarity). The union of the committees is but a subset of all elements. The algorithm proceeds by assigning elements to their most similar committee. Evaluating cluster quality has always been a difficult task. We present a new evaluation methodology that is based on the editing distance between output clusters and manually constructed classes (the answer key). This evaluation measure is more intuitive and easier to interpret than previous evaluation measures. | INTRODUCTION
Document clustering was initially proposed for improving the
precision and recall of information retrieval systems [18]. Because
clustering is often too slow for large corpora and has indifferent
performance [8], document clustering has been used more
recently in document browsing [3], to improve the organization
and viewing of retrieval results [6], to accelerate nearest-neighbor
search [1] and to generate Yahoo-like hierarchies [12].
Common characteristics of document clustering include:
. there is a large number of documents to be clustered;
. the number of output clusters may be large;
. each document has a large number of features; e.g., the
features may include all the terms in the document; and
. the feature space, the union of the features of all
documents, is even larger.
In this paper, we propose a clustering algorithm, CBC (Clustering
By Committee), which produces higher quality clusters in
document clustering tasks as compared to several well known
clustering algorithms. Many clustering algorithms represent a
cluster by the centroid of all of its members (e.g., K-means) [13]
or by a representative element (e.g., K-medoids) [10]. When
averaging over all elements in a cluster, the centroid of a cluster
may be unduly influenced by elements that only marginally
belong to the cluster or by elements that also belong to other
clusters. To illustrate this point, consider the task of clustering
words. We can use the contexts of words as features and group
together the words that tend to appear in similar contexts. For
instance, U.S. state names can be clustered this way because they
tend to appear in the following contexts:
(List A) ___ driver's license illegal in ___
___ outlaws sth. primary in ___
___'s sales tax senator for ___
If we create a centroid of all the state names, the centroid will also
contain features such as:
(List B) ___'s airport archbishop of ___
___'s business district fly to ___
___'s mayor mayor of ___
___'s subway outskirts of ___
because some of the state names (like New York and Washington)
are also names of cities.
Using a single representative from a cluster may be problematic
too because each individual element has its own idiosyncrasies
that may not be shared by other members of the cluster.
CBC constructs the centroid of a cluster by averaging the feature
vectors of a subset of the cluster members. The subset is viewed
as a committee that determines which other elements belong to the
cluster. By carefully choosing committee members, the features of
the centroid tend to be the more typical features of the target class.
For example, our system chose the following committee members
to compute the centroid of the state cluster: Illinois, Michigan,
Minnesota, Iowa, Wisconsin, Indiana, Nebraska and Vermont. As
a result, the centroid contains only features like those in List A.
Evaluating clustering results is a very difficult task. We introduce
a new methodology that is based on the editing distance between
clustering results and manually constructed classes (the answer key). We argue that it is easier to interpret the results of this
evaluation measure than the results of previous measures.
The remainder of this paper is organized as follows. In the next
section, we review related clustering algorithms that are
commonly used in document clustering. Section 3 describes our
representational model and in Section 4 we present the CBC
algorithm. The evaluation methodology and experimental results
are presented in Sections 5 and 6. In Section 7, we show an
example application of CBC. Finally, we conclude with a
discussion and future work.
2. RELATED WORK
Generally, clustering algorithms can be categorized as hierarchical
and partitional. In hierarchical agglomerative algorithms, clusters
are constructed by iteratively merging the most similar clusters.
These algorithms differ in how they compute cluster similarity. In
single-link clustering [16], the similarity between two clusters is
the similarity between their most similar members while
complete-link clustering [11] uses the similarity between their
least similar members. Average-link clustering [5] computes this
similarity as the average similarity between all pairs of elements
across clusters. The complexity of these algorithms is O(n² log n),
where n is the number of elements to be clustered [7]. These
algorithms are too inefficient for document clustering tasks that
deal with large numbers of documents. In our experiments, one of
the corpora we used is small enough (2745 documents) to allow
us to compare CBC with these hierarchical algorithms.
Chameleon is a hierarchical algorithm that employs dynamic
modeling to improve clustering quality [9]. When merging two
clusters, one might consider the sum of the similarities between
pairs of elements across the clusters (e.g. average-link clustering).
A drawback of this approach is that the existence of a single pair
of very similar elements might unduly cause the merger of two
clusters. An alternative considers the number of pairs of elements
whose similarity exceeds a certain threshold [4]. However, this
may cause undesirable mergers when there are a large number of
pairs whose similarities barely exceed the threshold. Chameleon
clustering combines the two approaches.
Most often, document clustering employs K-means clustering
since its complexity is linear in n, the number of elements to be
clustered. K-means is a family of partitional clustering algorithms.
The following steps outline the basic algorithm for generating a
set of K clusters:
1. randomly select K elements as the initial centroids of
the clusters;
2. assign each element to a cluster according to the
centroid closest to it;
3. recompute the centroid of each cluster as the average of
the cluster's elements;
4. repeat Steps 2-3 for T iterations or until the centroids do
not change, where T is a predetermined constant.
K-means has complexity O(K·T·n) and is efficient for many
document clustering tasks. Because of the random selection of
initial centroids, the resulting clusters vary in quality. Some sets
of initial centroids lead to poor convergence rates or poor cluster
quality.
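For reference, the four numbered steps above can be written compactly as follows. This illustrative sketch uses dense vectors and squared Euclidean distance for brevity; a document-clustering implementation would typically use sparse vectors and cosine similarity instead.

import random

def kmeans(vectors, k, iters=20, seed=0):
    # vectors: list of equal-length lists of floats; returns a cluster label per vector
    rng = random.Random(seed)
    centroids = [list(v) for v in rng.sample(vectors, k)]    # step 1: random initial centroids
    labels = [0] * len(vectors)
    for _ in range(iters):                                   # step 4: repeat for T iterations
        for i, v in enumerate(vectors):                      # step 2: assign to closest centroid
            labels[i] = min(range(k), key=lambda c: sum((a - b) ** 2
                                                        for a, b in zip(v, centroids[c])))
        for c in range(k):                                   # step 3: recompute centroids
            members = [vectors[i] for i, lab in enumerate(labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels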
Bisecting K-means [17], a variation of K-means, begins with a set
containing one large cluster consisting of every element and
iteratively picks the largest cluster in the set, splits it into two
clusters and replaces it by the split clusters. Splitting a cluster
consists of applying the basic K-means algorithm α times with
K=2 and keeping the split that has the highest average element-
centroid similarity.
Hybrid clustering algorithms combine hierarchical and partitional
algorithms in an attempt to have the high quality of hierarchical
algorithms with the efficiency of partitional algorithms. Buckshot
[3] addresses the problem of randomly selecting initial centroids
in K-means by combining it with average-link clustering. Cutting
et al. claim its clusters are comparable in quality to hierarchical
algorithms but with a lower complexity. Buckshot first applies
average-link to a random sample of √n elements to generate K clusters. It then uses the centroids of the clusters as the initial K centroids of K-means clustering. The complexity of Buckshot is O(K·T·n + n log n). The parameters K and T are usually
considered to be small numbers. Since we are dealing with a large
number of clusters, Buckshot and K-means become inefficient in
practice. Furthermore, Buckshot is not always suitable. Suppose
one wishes to cluster 100,000 documents into 1000 newsgroup
topics. Buckshot could generate at most 316 initial centroids.
3. REPRESENTATION
CBC represents elements as feature vectors. The features of a
document are the terms (usually stemmed words) that occur
within it and the value of a feature is a statistic of the term. For
example, the statistic can simply be the term's frequency, tf,
within the document. In order to discount terms with low
discriminating power, tf is usually combined with the term's
inverse document frequency, idf, which is the inverse of the
percentage of documents in which the term occurs. This measure
is referred to as tf-idf [15], commonly computed as tf_f × log(N / df_f), where N is the total number of documents and df_f is the number of documents containing term f.
We use the mutual information [2] between an element
(document) and its features (terms).
In our algorithm, for each element e, we construct a frequency
count vector (c_e1, c_e2, ..., c_em), where m is the total number
of features and c_ef is the frequency count of feature f occurring in
element e. In document clustering, e is a document and c_ef is the
term frequency of f in e. We construct a mutual information
vector MI(e) = (mi_e1, mi_e2, ..., mi_em), where mi_ef is the mutual
information between element e and feature f, which is defined as:
    mi_ef = log [ (c_ef / N) / ( (Σ_i c_if / N) × (Σ_j c_ej / N) ) ]
where N = Σ_i Σ_j c_ij is the total frequency count of all features of
all elements.
We compute the similarity between two elements e_i and e_j using
the cosine coefficient [15] of their mutual information vectors:
    sim(e_i, e_j) = Σ_f mi_{e_i f} × mi_{e_j f} / sqrt( Σ_f mi_{e_i f}² × Σ_f mi_{e_j f}² )
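A small sketch of this pointwise mutual information weighting and the cosine similarity; dense matrices are used purely for brevity, whereas a sparse representation is what one would actually use at this scale.

    import numpy as np

    def mutual_information_vectors(counts):
        # counts: (num_elements, num_features) matrix of frequency counts c_ef
        N = counts.sum()
        p_ef = counts / N
        p_e = counts.sum(axis=1, keepdims=True) / N   # element (row) marginals
        p_f = counts.sum(axis=0, keepdims=True) / N   # feature (column) marginals
        with np.errstate(divide="ignore", invalid="ignore"):
            mi = np.log(p_ef / (p_e * p_f))
        mi[~np.isfinite(mi)] = 0.0                    # zero counts contribute nothing
        return mi

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))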
4. ALGORITHM
CBC consists of three phases. In Phase I, we compute each
element's top-k similar elements. In our experiments, we used
k = 20. In Phase II, we construct a collection of tight clusters, where
the elements of each cluster form a committee. The algorithm
tries to form as many committees as possible on the condition that
each newly formed committee is not very similar to any existing
committee. If the condition is violated, the committee is simply
discarded. In the final phase of the algorithm, each element is
assigned to its most similar cluster.
4.1 Phase I: Find top-similar elements
Computing the complete similarity matrix between pairs of
elements is obviously quadratic. However, one can dramatically
reduce the running time by taking advantage of the fact that the
feature vector is sparse. By indexing the features, one can retrieve
the set of elements that have a given feature. To compute the top
similar elements of an element e, we first sort the mutual
information vector MI(e) and then only consider a subset of the
features with highest mutual information. Finally, we compute the
similarity between e and the elements that share a feature
from this subset. Since high mutual information features tend not
to occur in many elements, we only need to compute a fraction of
the possible pairwise combinations. With 18,828 elements, Phase
I completes in 38 minutes. Using this heuristic, similar words that
share only low mutual information features will be missed by our
algorithm. However, in our experiments, this had no visible
impact on cluster quality.
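A sketch of this Phase I heuristic: index elements by feature, keep only each element's highest-MI features, and score only candidates that share one of them. The number of features kept per element is an illustrative choice, not a value from the paper.

    from collections import defaultdict
    import heapq

    def cosine_sparse(a, b):
        num = sum(v * b.get(f, 0.0) for f, v in a.items())
        den = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
        return num / den if den else 0.0

    def top_similar(mi_vectors, k=20, features_per_element=50):
        # mi_vectors: dict element -> dict feature -> mutual information value
        index = defaultdict(set)                      # inverted index: feature -> elements having it
        for e, feats in mi_vectors.items():
            for f in feats:
                index[f].add(e)
        top = {}
        for e, feats in mi_vectors.items():
            best_feats = heapq.nlargest(features_per_element, feats, key=feats.get)
            candidates = set().union(*(index[f] for f in best_feats)) - {e}
            scored = [(cosine_sparse(feats, mi_vectors[c]), c) for c in candidates]
            top[e] = heapq.nlargest(k, scored)        # (similarity, element) pairs
        return top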
4.2 Phase II: Find committees
The second phase of the clustering algorithm recursively finds
tight clusters scattered in the similarity space. In each recursive
step, the algorithm finds a set of tight clusters, called committees,
and identifies residue elements that are not covered by any
committee. We say a committee covers an element if the
element's similarity to the centroid of the committee exceeds
some high similarity threshold. The algorithm then recursively
attempts to find more committees among the residue elements.
The output of the algorithm is the union of all committees found
in each recursive step. The details of Phase II are presented in Figure 1.
In Step 1, the score reflects a preference for bigger and tighter
clusters. Step 2 gives preference to higher quality clusters in Step
3, where a cluster is only kept if its similarity to all previously
kept clusters is below a fixed threshold. In our experiments, we
set θ1 to a fixed value. Step 4 terminates the recursion if no committee is
found in the previous step. The residue elements are identified in
Step 5 and if no residues are found, the algorithm terminates;
otherwise, we recursively apply the algorithm to the residue
elements.
Each committee that is discovered in this phase defines one of the
final output clusters of the algorithm.
4.3 Phase III: Assign elements to clusters
In Phase III, every element is assigned to the cluster containing
the committee to which it is most similar. This phase resembles K-means
in that every element is assigned to its closest centroid.
Input: A list of elements E to be clustered, a similarity
database S from Phase I, thresholds θ1 and θ2.
Step 1: For each element e ∈ E
    Cluster the top similar elements of e from S using
    average-link clustering.
    For each discovered cluster c compute the following
    score: |c| × avgsim(c), where |c| is the number of
    elements in c and avgsim(c) is the average
    similarity between elements in c.
    Store the highest-scoring cluster in a list L.
Step 2: Sort the clusters in L in descending order of their
    scores.
Step 3: Let C be a list of committees, initially empty.
    For each cluster c ∈ L in sorted order
    Compute the centroid of c by averaging the
    frequency vectors of its elements and computing
    the mutual information vector of the centroid in
    the same way as we did for individual elements.
    If c's similarity to the centroid of each committee
    previously added to C is below a threshold θ1, add
    c to C.
Step 4: If C is empty, we are done and return C.
Step 5: For each element e ∈ E
    If e's similarity to every committee in C is below
    threshold θ2, add e to a list of residues R.
Step 6: If R is empty, we are done and return C.
    Otherwise, return the union of C and the output of a
    recursive call to Phase II using the same input
    except replacing E with R.
Output: a list of committees.
Figure 1. Phase II of CBC.
Unlike K-means, the number of clusters is not fixed and the
centroids do not change (i.e. when an element is added to a
cluster, it is not added to the committee of the cluster).
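A compact sketch of Phases II and III as described above. The average-link clustering of each element's cached neighbors, the element-level and centroid-level similarity functions, and the two thresholds are all passed in as parameters; this mirrors the steps in Figure 1 but is not the authors' implementation.

    def phase2(elements, top_sim, elem_sim, centroid_sim, avg_link, theta1, theta2):
        # Step 1: for each element, cluster its cached top-similar elements (avg_link is
        # assumed to return at least one cluster) and keep the best one, scored |c| * avgsim(c).
        def score(c):
            pairs = [(a, b) for i, a in enumerate(c) for b in c[i + 1:]]
            avgsim = sum(elem_sim(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0
            return len(c) * avgsim
        candidates = [max(avg_link(top_sim[e]), key=score) for e in elements]
        # Steps 2-3: sort by score; keep a cluster only if it is not too similar
        # to any committee kept so far.
        committees = []
        for c in sorted(candidates, key=score, reverse=True):
            if all(centroid_sim(c, comm) < theta1 for comm in committees):
                committees.append(c)
        if not committees:                                       # Step 4
            return []
        # Steps 5-6: recurse on the residue elements not covered by any committee.
        residues = [e for e in elements
                    if all(centroid_sim([e], comm) < theta2 for comm in committees)]
        if residues and len(residues) < len(elements):
            committees += phase2(residues, top_sim, elem_sim, centroid_sim,
                                 avg_link, theta1, theta2)
        return committees

    def phase3(elements, committees, centroid_sim):
        # Assign every element to the cluster of its most similar committee;
        # unlike K-means, the committees themselves never change.
        return {e: max(committees, key=lambda comm: centroid_sim([e], comm))
                for e in elements}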
5. EVALUATION METHODOLOGY
Many cluster evaluation schemes have been proposed. They
generally fall under two categories:
. comparing cluster outputs with manually generated
answer keys (hereon referred to as classes); and
. embedding the clusters in an application (e.g.
information retrieval) and using its evaluation measure.
One approach considers the average entropy of the clusters, which
measures the purity of the clusters [17]. However, maximum
purity is trivially achieved when each element forms its own
cluster.
Given a partitioned set of n elements, there are n × (n − 1) / 2
pairs of elements that are either in the same partition or not. The
partition implies n × (n − 1) / 2 decisions. Another way to
evaluate clusters is to compute the percentage of the decisions that
are in agreement between the clusters and the classes [19]. This
measure sometimes gives unintuitive results. Suppose the answer
key consists of 20 equally sized classes with 1000 elements in
each. Treating each element as its own cluster gets a misleadingly
high score of 95%.
The evaluation of document clustering algorithms in information
retrieval often uses the embedded approach [6]. Suppose we
cluster the documents returned by a search engine. Assuming the
user is able to pick the most relevant cluster, the performance of
the clustering algorithm can be measured by the average precision
of the chosen cluster. Under this scheme, only the best cluster
matters.
The entropy and pairwise decision schemes each measure a
specific property of clusters. However, these properties are not
directly related to application-level goals of clustering. The
information retrieval scheme is goal-oriented, however it
measures only the quality of the best cluster. We propose an
evaluation methodology that strikes a balance between generality
and goal-orientation.
Like the entropy and pairwise decision schemes, we assume that
there is an answer key that defines how the elements are supposed
to be clustered. Let C be a set of clusters and A be the answer key.
We define the editing distance, dist(C, A), as the number of
operations required to transform C into A. We allow three editing
operations:
. merge two clusters;
. move an element from one cluster to another; and
. copy an element from one cluster to another.
Let B be the baseline clustering where each element is its own
cluster. We define the quality of a clustering C as follows:
    quality(C) = 1 − dist(C, A) / dist(B, A)
This measure can be interpreted as the percentage of savings from
using the clustering result to construct the answer key versus
constructing it from scratch (i.e. the baseline).
We make the assumption that each element belongs to exactly one
cluster. The transformation procedure is as follows:
1. Suppose there are m classes in the answer key. We start
with a list of m empty sets, each of which is labeled with
a class in the answer key.
2. For each cluster, merge it with the set whose class has
the largest number of elements in the cluster (a tie is
broken arbitrarily).
3. If an element is in a set whose class is not the same as
one of the element's classes, move the element to a set
where it belongs.
4. If an element belongs to more than one target class,
copy the element to all sets corresponding to the target
classes (except the one to which it already belongs).
dist(C, A) is the number of operations performed using the above
transformation rules on C.
Figure 2 shows an example. In D) the cluster containing e could
have been merged with either set (we arbitrarily chose the
second). The total number of operations is 5.
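A sketch of the editing-distance quality measure under the stated single-class-per-element assumption; the copy operation for multi-class elements is omitted, and the accounting of one merge per cluster follows my reading of the worked example in Figure 2.

    from collections import Counter

    def edit_distance(clusters, answer_key):
        # clusters: list of lists of element ids; answer_key: dict element -> class label.
        # One merge per cluster (Step 2), one move per misplaced element (Step 3).
        operations = len(clusters)
        target = {}
        for cluster in clusters:
            majority = Counter(answer_key[e] for e in cluster).most_common(1)[0][0]
            target.setdefault(majority, []).extend(cluster)
        for label, members in target.items():
            operations += sum(1 for e in members if answer_key[e] != label)
        return operations

    def quality(clusters, answer_key):
        baseline = [[e] for e in answer_key]           # every element is its own cluster
        return 1.0 - edit_distance(clusters, answer_key) / edit_distance(baseline, answer_key)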
6. EXPERIMENTAL RESULTS
In this section, we describe our test data and present an evaluation
of our system. We compare CBC to the clustering algorithms
presented in Section 2 and we provide a detailed analysis of K-means
and Buckshot. We proceed by studying the effect of
different clustering parameters on CBC.
Figure 2. An example of applying the transformation rules to three clusters. A) The classes in the answer key; B) the clusters to be transformed; C) the sets used to reconstruct the classes (Rule 1); D) the sets after three merge operations (Step 2); E) the sets after one move operation (Step 3); F) the sets after one copy operation (Step 4).
6.1 Test Data
We conducted document-clustering experiments with two data
sets: Reuters-21578 V1.2 1 and 20news-18828 2 (see Table 1). For
the Reuters corpus, we selected documents that:
1. are assigned one or more topics;
2. have the attribute LEWISSPLIT="TEST"; and
3. have both <TITLE> and <BODY> tags.
There are 2745 such documents. The 20news-18828 data set
contains 18828 newsgroup articles partitioned (nearly) evenly
across 20 different newsgroups.
6.2 Cluster Evaluation
We clustered the data sets using CBC and the clustering
algorithms of Section 2 and applied the evaluation methodology
from the previous section. Table 2 shows the results. The columns
are our editing distance based evaluation measure. CBC
outperforms K-means with K=1000 by 4.14%. On the 20-news
data set, our implementation of Chameleon was unable to
complete in reasonable time. For the 20-news corpus, CBC
spends the vast majority of the time finding the top similar
documents (38 minutes) and computing the similarity between
documents and committee centroids (119 minutes). The rest of the
computation, which includes clustering the top-20 similar
documents for every one of the 18828 documents and sorting the
clusters, took less than 5 minutes. We used a Pentium III 750MHz
processor and 1GB of memory.
6.3 K-means and Buckshot
Figure 3 and Figure 4 show the cluster quality of different K's on
the 20-news data set plotted over eight iterations of the K-means
and Buckshot algorithms respectively. The cluster quality for K-means
clearly increases as K reaches 1000 although the increase
in quality slows down between K=60 and K=1000.
Buckshot has similar performance to K-means on the Reuters
corpus; however it performs much worse on the 20-news corpus.
This is because K-means performs well on this data set when K is
large (e.g. K=1000) whereas Buckshot cannot have K higher than
√18828 ≈ 137. On the Reuters corpus, the best clusters for K-means
were obtained with a K within Buckshot's range, and Buckshot can have K as
large as √2745 ≈ 52. However, as K approaches 52, Buckshot
degenerates to the K-means algorithm, which explains why
Buckshot has similar performance to K-means. Figure 5 compares
the cluster quality between K-means and Buckshot for different
values of K on the 20-news data set.
Table 1. The number of classes in each test data set and the number of elements in their largest and smallest classes.
DATA SET   TOTAL DOCS   CLASSES   LARGEST CLASS   SMALLEST CLASS
Reuters    2745         92        1045            1
20-news    18828        20
Table 2. Cluster quality (%) of several algorithms on the Reuters and 20-news data sets.
                    REUTERS   20-NEWS
K-means             62.38     70.04
Buckshot            62.03     65.96
Bisecting K-means   60.80     58.52
Chameleon           58.67     n/a
Average-link        63.00     70.43
Complete-link       46.22     64.23
Figure 3. K-means cluster quality on the 20-news data set for different values of K plotted over eight iterations.
Figure 4. Buckshot cluster quality on the 20-news data set for different values of K plotted over eight iterations.
Buckshot first applies average-link clustering to a random sample
of √n elements, where n is the number of elements to be
clustered. The sample size counterbalances the quadratic running
time of average-link to make Buckshot linear. We experimented
with larger sample sizes to see if Buckshot performs better. For
the 20-news data set, clustering 137 elements using average-link
is very fast so we can afford to cluster a larger sample. Figure 6
illustrates the results for K=150 on the 20-news data set where F
indicates the forced sample size and F=SQRT is the original
Buckshot algorithm described in Section 2. Since K > 137,
F=SQRT is just the K-means algorithm (we always sample at least
K elements). Buckshot has better performance than K-means as
long as the sample size is significantly bigger than K. All values
of F ≥ 500 converged after only two iterations while F=SQRT
took four iterations to converge.
6.4 Clustering Parameters
We experimented with different clustering parameters. Below, we
describe each parameter and their possible values:
1. Vector Space Model (described in Section 3):
a) MI : the mutual-information model;
b) TF : the term-frequency model;
c) TFIDF1 : the tf-idf model;
d) TFIDF2 : the tf-idf model using an alternative weighting formula.
2. Stemming:
a) S- : terms are not stemmed;
b) S+ : terms are stemmed using Porter's stemmer [14].
3. Stop Words:
a) W- : no stop words are used as features;
b) W+ : all terms are used.
4. Filtering:
a) F- : no filtering is performed;
b) F+ : terms with MI<0.5 are deleted.
When filtering is on, the feature vectors become smaller and the
similarity computations become much faster.
We refer to an experiment using a string where the first position
corresponds to the Stemming parameter, the second position
corresponds to the Stop Words parameter and the third position
corresponds to the Filtering parameter. For example, experiment
S+W-F+ means that terms are stemmed, stop words are ignored,
and filtering is performed. The Vector Space Model parameter
will always be explicitly given.
Figure 7 illustrates the quality of clusters generated by CBC on
the Reuters corpus while varying the clustering parameters. Most
document clustering systems use TFIDF1 as their vector space
model; however, the MI model outperforms each model including
TFIDF1. Furthermore, varying the other parameters on the MI
model makes no significant difference on cluster quality making
MI more robust. TF performs the worst since terms with low
discriminating power (e.g. the, furthermore) are not discounted.
Although TFIDF2 slightly outperforms TFIDF1 (on experiment
S+W-F-), it is clearly not as robust. Except for the TF model,
stemming terms always produced better quality clusters.
7. EXAMPLE
We collected the titles and abstracts for the 46 papers presented at
SIGIR-2001 and clustered them using CBC. For each paper, we
used as part of its filename the session name in which it was
presented at the conference and a number representing the order
in which it appears in the proceedings. For example, Cat/017
refers to a paper that was presented in the Categorization session
and that is the 17 th paper in the proceedings. The results are
shown in Table 3.
Figure 5. Comparison of cluster quality between K-means and Buckshot for different K on the 20-news data set.
Figure 6. Buckshot cluster quality with K=150 for varying sample size (F) on the 20-news data set plotted over five iterations.
The features of many of the automatically generated clusters
clearly correspond to SIGIR-2001 session topics (e.g. clusters 1
and 4). Applying the evaluation methodology of
Section 5 gives a score of 32.60%. This score is fairly low for the
following reasons:
. Some documents could potentially belong to more than one
session. For example, Lrn/037 was clustered in the
categorization cluster #1 because it deals with learning and
text categorization (it is titled "A meta-learning approach for
text categorization"). Using the sessions as the answer key,
Lrn/037 will be counted as incorrect.
. CBC generates clusters that do not correspond to any session
topic. For example, all papers in Cluster #6 have news stories
as their application domain and the papers in Cluster #5 all
deal with search engines.
Table 3. Output of CBC applied to the 46 papers presented at SIGIR-2001: the left column shows the clustered documents and the
right column shows the top-7 features (stemmed terms) forming the cluster centroids.
MS/035, Eval/011, MS/036, Sys/006, Cat/018 threshold, score, term, base, distribut, optim, scheme
3 LM/015, LM/041, LM/016, CL/012, CL/013, CL/014 model, languag, translat, expans, estim, improv, framework
user, result, use, imag, system, search, index
5 Web/030, Sys/007, MS/033, Eval/010 search, engin, page, web, link, best
6 Sum/002, Lrn/039, LM/042 stori, new, event, time, process, applic, content
8 Lrn/040, Sys/005 level, space, framework, vector, comput, recent
9 QA/046, QA/044, QA/045, MS/034 answer, question, perform, task, passag, larg, give
11 Web/032, Sum/026, Web/031 link, algorithm, method, hyperlink, web, analyz, identifi
Cat=Categorization, CL=CrossLingual, Eval=Evaluation, LM=LanguageModels, Lrn=Learning, MS=MetaSearch, QA=QuestionAnswering,
RM=RetrievalModels, Sum=Summarization, Sys=Systems, US=UserStudies, Web=Web
Figure 7. CBC evaluation of cluster quality when using different clustering parameters (Reuters corpus).
8. CONCLUSION
Document clustering is an important tool in information retrieval.
We presented a clustering algorithm, CBC, which can handle a
large number of documents, a large number of output clusters, and
a large sparse feature space. It discovers clusters using well-
scattered tight clusters called committees. In our experiments on
document clustering, we showed that CBC outperforms several
well-known hierarchical, partitional, and hybrid clustering
algorithms in cluster quality. For example, in one experiment,
CBC outperforms K-means by 4.14%.
Evaluating cluster quality has always been a difficult task. We
presented a new evaluation methodology that is based on the
editing distance between output clusters and manually constructed
classes (the answer key). This evaluation measure is more
intuitive and easier to interpret than previous evaluation measures.
CBC may be applied to other clustering tasks such as word
clustering. Since many words have multiple senses, we can
modify Phase III of CBC to allow an element to belong to
multiple clusters. For an element e, we can find its most similar
cluster and assign e to it. We can then remove those features from
e that are shared by the centroid of the cluster. Then, we can
recursively find e's next most similar cluster and repeat the feature
removal. This process continues until e's similarity to its most
similar cluster is below a threshold or when the total mutual
information of all the residue features of e is below a fraction of
the total mutual information of its original features. For a
polysemous word, CBC can then potentially discover clusters that
correspond to its senses. Preliminary experiments on clustering
words using the TREC collection (3GB) and a proprietary
collection (2GB) of grade school readings from Educational
Testing Service gave the following automatically discovered word
senses for the word bass:
(clarinet, saxophone, cello, trombone)
(Allied-Lyons, Grand Metropolitan, United Biscuits,
Cadbury Schweppes)
(contralto, baritone, mezzo, soprano)
(Steinbach, Gallego, Felder, Uribe)
(halibut, mackerel, sea bass, whitefish)
(Kohlberg Kravis, Kohlberg, Bass Group, American
and for the word China:
(Russia, China, Soviet Union, Japan)
(earthenware, pewter, terra cotta, porcelain)
The word senses are represented by four committee members of
the cluster.
9.
ACKNOWLEDGEMENTS
The authors wish to thank the reviewers for their helpful
comments. This research was partly supported by Natural
Sciences and Engineering Research Council of Canada grant
OGP121338 and scholarship PGSB207797.
10.
--R
Optimization of inverted vector searches.
Word association norms
Scatter/Gather: A cluster-based approach to browsing large document collections
ROCK: A robust clustering algorithm for categorical attributes.
Data Mining - Concepts and Techniques
Reexamining the cluster hypothesis: Scatter/Gather on retrieval results.
Data Clustering: A Review.
The use of hierarchical clustering in information retrieval.
Chameleon: A hierarchical clustering algorithm using dynamic modeling.
Clustering by means of medoids.
Hierarchically classifying documents using very few words.
Some methods for classification and analysis of multivariate observations.
An algorithm for suffix stripping.
Introduction to Modern Information Retrieval.
Numerical Taxonomy: The Principles and Practice of Numerical Classification.
A comparison of document clustering techniques.
Information Retrieval
Clustering with instance-level constraints
--TR
Scatter/Gather: a cluster-based approach to browsing large document collections
Reexamining the cluster hypothesis
Optimization of inverted vector searches
Data clustering
Data mining
Information Retrieval
Introduction to Modern Information Retrieval
Chameleon
Hierarchically Classifying Documents Using Very Few Words
Clustering with Instance-level Constraints
--CTR
Dolf Trieschnigg , Wessel Kraaij, Scalable hierarchical topic detection: exploring a sample based approach, Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, August 15-19, 2005, Salvador, Brazil
Han , Eren Manavoglu , Hongyuan Zha , Kostas Tsioutsiouliklis , C. Lee Giles , Xiangmin Zhang, Rule-based word clustering for document metadata extraction, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Bhushan Mandhani , Sachindra Joshi , Krishna Kummamuru, A matrix density based algorithm to hierarchically co-cluster documents and words, Proceedings of the 12th international conference on World Wide Web, May 20-24, 2003, Budapest, Hungary
Reid Swanson , Andrew S. Gordon, A comparison of alternative parse tree paths for labeling semantic roles, Proceedings of the COLING/ACL on Main conference poster sessions, p.811-818, July 17-18, 2006, Sydney, Australia
Zheng-Yu Niu , Dong-Hong Ji , Chew-Lim Tan, Document clustering based on cluster validation, Proceedings of the thirteenth ACM international conference on Information and knowledge management, November 08-13, 2004, Washington, D.C., USA
Zheng-Yu Niu , Dong-Hong Ji , Chew Lim Tan, Using cluster validation criterion to identify optimal feature subset and cluster number for document clustering, Information Processing and Management: an International Journal, v.43 n.3, p.730-739, May, 2007
Chih-Ping Wei , Chin-Sheng Yang , Han-Wei Hsiao , Tsang-Hsiang Cheng, Combining preference- and content-based approaches for improving document clustering effectiveness, Information Processing and Management: an International Journal, v.42 n.2, p.350-372, March 2006
SanJuan , Fidelia Ibekwe-SanJuan, Text mining without document context, Information Processing and Management: an International Journal, v.42 n.6, p.1532-1552, December 2006
Khaled M. Hammouda , Mohamed S. Kamel, Efficient Phrase-Based Document Indexing for Web Document Clustering, IEEE Transactions on Knowledge and Data Engineering, v.16 n.10, p.1279-1296, October 2004 | document representation;document clustering;evaluation methodology;machine learning |
564435 | Robust temporal and spectral modeling for query By melody. | Query by melody is the problem of retrieving musical performances from melodies. Retrieval of real performances is complicated due to the large number of variations in performing a melody and the presence of colored accompaniment noise. We describe a simple yet effective probabilistic model for this task. We describe a generative model that is rich enough to capture the spectral and temporal variations of musical performances and allows for tractable melody retrieval. While most of previous studies on music retrieval from melodies were performed with either symbolic (e.g. MIDI) data or with monophonic (single instrument) performances, we performed experiments in retrieving live and studio recordings of operas that contain a leading vocalist and rich instrumental accompaniment. Our results show that the probabilistic approach we propose is effective and can be scaled to massive datasets. | INTRODUCTION
A natural way for searching a musical audio database for
a song is to look for a short audio segment containing a
melody from the song. Most of the existing systems are
based on textual information, such as the title of the song
and the name of the composer. However, people often do
not remember the name of the composer and the song's title
but can easily recall fragments from the soloist's melody.
The task of query by melody attempts to automate the
music retrieval task. It was first discussed in the context
of query by humming [11, 13, 14]. These works focus on
converting hummed melodies into symbolic MIDI format
(MIDI is an acronym for Musical Instrument Digital Inter-
face. It is a symbolic format for representing music). Once
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
SIGIR'02, August 11-15, 2002, Tampere, Finland.
the query is converted into a symbolic format the challenge
is to search for musical performances that approximately
match the query. Most of the research so far has been conducted
with music stored in MIDI format [12] or in monophonic
(i.e. single vocal or instrument) recordings (see for
instance [9, 7] and the references therein). In this paper,
we suggest a method for query by melody where the query
is posed in symbolic form as a monophonic melody and the
database consists of real polyphonic recordings.
When dealing with real polyphonic recordings we need to
address several complicating factors. Ideally melodies can
be represented as sequences of notes, each is a pair of frequency
and temporal duration. In real recordings two major
sources of difficulty arise. The first is the high variability of
the actual durations of notes. A melody can be performed
faster or slower than the one dictated by the musical score.
This type of variation is often referred to as tempo vari-
ability. Furthermore, the tempo can vary within a single
performance. For instance, a performance can start with a
slow tempo which gradually increases. The second complicating
factor is the high variability of the spectrum due to
many factors such as differences in tone colors (timbre) of
different singers/instruments, the intentional variation by
the leading vocalists (e.g. vibrato and dynamics) and by
"spectral masking" of the leading vocal by the accompanying
vocals and orchestra.
We propose to tackle these difficulties by using a generative
probabilistic approach that models the temporal and
spectral variations. We associate each note with a hidden
tempo variable. The tempo variables capture the temporal
variations in the durations of notes. To enable efficient
computation, the hidden tempo sequence is modeled as a first
order Markov process. In addition, we also describe a simple
probabilistic spectral distribution model that is robust
to the masking noise of the accompanying instruments and
singers. This spectral distribution model is a variant of the
harmonic likelihood model for pitch detection [16]. Combining
the temporal and spectral probabilistic components, we
obtain a joint model which can be thought of as a dynamic
Bayesian network [8]. This representation enables efficient
alignment and retrieval using dynamic programming.
This probabilistic approach is related to several recent
works that employ Hidden Markov Models (HMM) for music
processing. Raphael [15] uses melody information (pitches
and durations of notes) in building an HMM for a score
following application. A similar approach is taken by Durey
and Clements [9] who use the pitch information of notes
for building HMMs for melody retrieval. However, both
approaches were designed for and evaluated on monophonic
music databases. Most work on polyphonic music processing
addressed tasks such as music segmentation into textures [6],
polyphonic pitch tracking [18], and genre classification [17,
10]. We believe that the approach we describe in this paper
is a step toward an effective retrieval procedure for massive
musical datasets.
2. PROBLEM SETTING
In our setting, we are given a melody and our task is
to retrieve musical performances containing the requested
melody and to find its location within the retrieved perfor-
mances. A melody is a sequence of notes where each note
is a pair of a pitch value and a duration value. Our goal
is to retrieve melodies from audio signals representing real
performances.
Formally, let R+ denote the positive real numbers. Let
f_l < f_h be frequency values (in Hz) and let [f_l, f_h] be a
diapason. A diapason of a singer (or an instrument) is the
range of pitch frequencies that are in use by the singer (or
by the instrument). For instance, a tenor singer typically
employs a diapason of [110Hz, 530Hz]. Let P denote the
set of all possible frequencies of notes. In the well-tempered
Western music tuning system, the semitone frequencies
f_l · 2^(j/12) (for integer j) that fall within the diapason constitute
the possible pitches of notes in the diapason. A melody is
described formally by a sequence of pitches, p ∈ P^k, and a
sequence of durations, d ∈ R+^k, in predefined time units
(e.g. seconds or samples).
A performance of a melody is a discrete-time sampled audio
signal. An idealized performance is formally entirely
defined given the melody: play or sing using pitch p1 for the
first d1 seconds, then play or sing pitch p2 for the next d2
seconds, and so on and so forth. In reality, a melody does
not impose a rigid framework. The actual frequency content
of a given note varies with the type of instrument that
is played and by the performer. Examples for such variations
are the vibrato and timbre effects. The accompaniment also
greatly influences the spectral distribution. While playing a
note using pitch p, we are likely to see a local concentration
of energy close to multiples of the frequency p in the power
spectrum of the signal. However, there may be other spectral
regions with high levels of energy. We will address this
problem later on in this section. Another source of variation
is local scaling of the durations of notes as instructed by the
melody. The performer typically uses a tempo that scales
the duration and moves from one tempo to another, thus
using a different time scale to play the notes. Therefore,
we also need to model the variation in the tempo which we
describe now.
A tempo sequence is a sequence of scaling factors, m ∈ R+^k.
The actual duration of note i, denoted d̃_i, is d_i scaled
by m_i, that is, d̃_i = m_i · d_i. Seemingly, allowing different scaling
factors for the different notes adds a degree of freedom that
makes the melody duration values redundant. However, a
typical tempo sequence does not change rapidly and thus
reflects most of the information of the original durations
(up to a scaling factor). Table 1 shows two examples of
tempo sequences. A pitch-duration-tempo triplet (p, d, m)
generates an actual pitch-duration pair (p, d̃).
Rallentando   1.2   1.2   1.25   1.3   1.3
Table 1: Examples of scaling factor sequences: In
the first sequence the scaling factors are gradually
increasing and thus the tempo is decreasing
("Rallentando"). In the second example the scaling
factors are decreasing and the tempo is increasing
("Accelerando").
In order to describe the generation of the actual performance
audio signal o from (p, d̃) we introduce one more variable,
s ∈ R+^k, where s_i is the starting time (sample number)
of note i in the performance. We define s_i = 1 + Σ_{j<i} d̃_j,
so that the signal is divided into k consecutive blocks of signal
samples. Let o^i = (o_{s_i}, ..., o_{s_i + d̃_i − 1}) be the block of samples
generated by note i.
The power spectrum of a block o^i varies significantly from performance
to performance, according to various factors such as
the spectral envelope of the soloist and pitches of accompaniment
instruments. Since our goal is to locate and retrieve
a melody from a dataset that may contain thousands of per-
formances, we resort to a very simple spectral model and do
not explicitly model these variables. We use an approximation
to the likelihood of a block spectrum given its pitch.
3. FROM MELODY TO SIGNAL:
A GENERATIVE MODEL
To pose the problem in a probabilistic framework, we need
to describe the likelihood of a performance given the melody,
P(o | p, d). We cast the tempo sequence m as a hidden random
variable, thus the likelihood can be written as,
    P(o | p, d) = Σ_m P(m) P(o | m, p, d) .   (1)
For simplicity, we assume that the tempo sequence does not
depend on the melody. While this assumption, naturally,
does not always hold, we found empirically that these types
of correlations can be ignored in short pieces of performances.
With this assumption and the identity d̃_i = m_i d_i,
Equ. (1) can be rewritten as,
    P(o | p, d) = Σ_m P(m) P(o | p, d̃) .
We now need to specify the prior distribution over the tempo,
P(m), and the posterior distribution of the signal given the
pitches and the actual durations of the notes, P(o | p, d̃).
3.1 Tempo modeling
We chose to model the tempo sequence as a first order
Markov process. As we see in the sequel this choice on one
hand allows an efficient alignment and retrieval, and on the
other hand, was found empirically to be rich enough. Therefore,
the likelihood of m is given by,
    P(m) = P(m_1) Π_{i=2}^{k} P(m_i | m_{i-1}) .
We use the log-normal distribution to model the conditional
probability, log_2(m_i / m_{i-1}) ∼ N(0, σ),
where σ is a scaling parameter of the variance. The prior
distribution of the first scaling factor P(m_1) is also assumed
to be log-normal around zero with variance σ, log_2(m_1) ∼
N(0, σ). In our experiments, the parameter σ was determined
manually according to musical knowledge. This parameter
can also be learned from MIDI files.
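For illustration, a tempo sequence can be drawn from this Markov prior as follows; σ is treated as the variance of the Gaussian on log2-ratios, and the default value below is arbitrary rather than the paper's setting.

    import numpy as np

    def sample_tempo_sequence(k, sigma=0.05, rng=None):
        # log2(m_1) ~ N(0, sigma) and log2(m_i / m_{i-1}) ~ N(0, sigma),
        # so log2(m_i) is a Gaussian random walk.
        rng = np.random.default_rng() if rng is None else rng
        steps = rng.normal(0.0, np.sqrt(sigma), size=k)   # std = sqrt(variance)
        return np.exp2(np.cumsum(steps))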
3.2 Spectral Distribution Model
In this section we describe our spectral distribution model.
There exist quite a few models for the spectral distribution
of singing voices and harmonic instruments. However, most
of these models are rather general. These models typically
assume that the musical signal is contaminated with white
noise whose energy is statistically independent of the signal.
See for instance [16] and the references therein. In contrast,
we assume that there is a leading instrument, or soloist, that
is accompanied by an orchestra or a chorus. The energy of
the accompaniment is typically highly correlated with the
energy of the soloist. Put another way, the dynamics of the
accompaniment matches the dynamics of the soloist. For
instance, when the soloist sings pianissimo the chorus follows
her with pianissimo voices. We therefore developed a
simple model whose parameters can be efficiently estimated
that copes with the correlation in energy between the leading
soloist and the accompaniment. In Fig. 1 we show the
spectrum of one frame of a performance signal from our
database. The harmonics are designated by dashed lines. It
is clear from the figure that there is a large concentration
of energy at the designated harmonics. The residual en-
ergy, outside the harmonics, is certainly non-negligible but
is clearly lower than the energy of the harmonics. Thus, our
assumptions, although simplistic, seem to capture to a large
extent the characteristics of the spectrum of singing with
accompaniment.
Using the definition of a block o^i from Sec. 2, the likelihood
of the signal given the sequences of pitches and durations
can be decomposed into a product of likelihood values
of the individual blocks,
    P(o | p, d̃) = Π_{i=1}^{k} P(o^i | p_i, d̃_i) .
Therefore, the core of our modeling approach is a probabilistic
model for the spectral distribution of a whole block given
the underlying pitch frequency of the soloist. Our starting
point is similar to the model presented in [16]. We assume
that a note with pitch p i attains high energy at frequencies
which are multiples of p i , namely at p i h for integer h.
These frequencies are often referred to as harmonics. Since
our signal is band limited, we only need to consider a finite
set of harmonics h, h ∈ {1, 2, ..., H}. For practical purposes
we set H to be 20 which enables a fast parameter estimation
procedure. Let F(ω) denote the observed energy of the
block at frequency ω. Let S(ω) denote the energy of the
soloist at frequency ω.
Figure 1: The spectrum of a single frame along with an impulse train designating the harmonics of the soloist.
The harmonic model assumes that
S(·) consists of bursts of energy centered at the harmonics of the
pitch frequency, p_i h, and we model it as a weighted sum of impulses,
    S(ω) = Σ_{h=1}^{H} A(h) δ(ω − p_i h) ,   (2)
where A(h) is the volume gain for the harmonic whose index
is h. The residual of the spectrum at frequency ω is
denoted N(ω) and is equal to F(ω) − S(ω). We
now describe a probabilistic model that leads to the following
log-likelihood score,
    log P(o^i | p_i) ≈ − (L/2) log ||F − S||² + const ,
where || · || denotes the ℓ2-norm.
To derive the above equation we assume that the spectrum
of the ith block, F, is comprised of two components. The
first component is the energy of the soloist, S(ω), as defined
in Equ. (2). The second component is a general masking
noise that encompasses the signal's energy due to the accompaniment
and affects the entire spectrum. We denote
the noise energy at frequency ω as ν(ω). The energy of the
spectrum at frequency ω is therefore modeled as,
    F(ω) = S(ω) + ν(ω) .   (4)
We now impose another simplifying assumption by setting
the noise to be a multivariate normal random variable and
further assuming that the noise values at each frequency ω
are statistically independent with equal variance. Thus, the
noise density function is
    f(ν | v) = (2π v)^(−L/2) exp( − Σ_ω ν(ω)² / (2v) ) ,   (5)
where v is the variance and L is the number of spectral
points computed by the discrete Fourier transform. (L was
chosen large enough to give a good spectral resolution.) Taking the
log of the above density function we get,
    log f(ν | v) = − (L/2) log(2π v) − Σ_ω ν(ω)² / (2v) .
The gain values A(h) are free parameters which we need to
estimate from the spectrum. Assuming that the noise level
is relatively small compared to the bursts of energy at the
harmonics of the pitch frequency, we set the value of A(h)
to be F(p_i h). We also do not know the noise variance v. For
this parameter we use the simple maximum likelihood
(ML) estimate, which can be easily found as follows. The
maximum likelihood estimate of v is found by setting the
derivative of log f(ν | v) with respect to v to zero,
    ∂ log f(ν | v) / ∂v = − L/(2v) + Σ_ω ν(ω)² / (2v²) = 0  ⟹  v̂ = (1/L) Σ_ω ν(ω)² .   (6)
Rearranging Equ. (4), the noise value at frequency ω, ν(ω),
can be written as,
    ν(ω) = F(ω) − Σ_{h=1}^{H} A(h) δ(ω − p_i h) .
By using the above equation for ν(ω) along with the values set for
A(h) and the maximum likelihood estimate v̂ in Equ. (6)
we get,
    log f(ν | v̂) = − (L/2) log( (2π/L) Σ_ω ν(ω)² ) − L/2 .
Since the stochastic ingredient of our spectral model is the
accompanying noise, the noise likelihood above also constitutes
the likelihood of the spectrum.
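As an illustration only, the sketch below evaluates this harmonics-with-scaled-noise score for one frame. The use of the magnitude spectrum as the "energy" F(ω), the nearest-bin lookup of harmonics, and the small epsilon are assumptions made for the example, not the paper's exact settings.

    import numpy as np

    def hsn_log_likelihood(frame, pitch_hz, sample_rate, num_harmonics=20):
        # Approximate log-likelihood of a frame given a candidate pitch:
        # cancel the energy at the harmonic bins (A(h) = F(p*h)) and score the
        # residual under the maximum-likelihood noise variance.
        spectrum = np.abs(np.fft.rfft(frame))            # F(omega), one value per bin
        L = spectrum.size
        freqs = np.fft.rfftfreq(frame.size, d=1.0 / sample_rate)
        spacing = freqs[1] - freqs[0]
        residual = spectrum.copy()
        for h in range(1, num_harmonics + 1):
            f = h * pitch_hz
            if f > freqs[-1]:
                break
            bin_idx = min(int(round(f / spacing)), L - 1)  # nearest FFT bin to the h-th harmonic
            residual[bin_idx] = 0.0
        noise_energy = float(np.sum(residual ** 2)) + 1e-12
        return -0.5 * L * np.log(2.0 * np.pi * noise_energy / L) - 0.5 * L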
To summarize, we now overview our approach for retrieval.
We are given a melody (p, d) and we want to find an audio
signal which represents a performance of this melody.
Using our probabilistic framework, we cast the problem as
the problem of finding a signal portion whose likelihood
given the melody, P(o | p, d), is high. Our search strategy
is as follows. We find the best alignment of the signal to
the melody as we describe in the next section. The score
of the alignment procedure we devise is also our means for
retrieval. We then rank the segments of signals in accordance
with their likelihood scores and return the segments
achieving high likelihood scores.
4. ALIGNMENT AND RETRIEVAL
Alignment of a melody to a signal is performed by finding
the best assignment of a tempo sequence. Formally,
we are looking for the scaling factors m* that attain the
highest likelihood score, m* = argmax_m P(o, m | p, d). Although
the number of possible sequences of scaling factors
grows exponentially with the sequence length, the problem
of finding m* can be efficiently solved using dynamic
programming, as we now describe.
Let m^i = (m_1, ..., m_i) denote the scaling factors of the
first i notes of a melody. Let o^{1:t} denote the first
t samples of a signal. Let M be a discrete set of possible
scaling factor values. For μ ∈ M, let M_{i,t,μ} be the set of
all possible sequences of i scaling factors, m^i, such that
μ is the scaling factor of note i and t is
the actual ending time of note i. Let φ(i, t, μ) be the joint
likelihood of o^{1:t} and the most likely tempo sequence in M_{i,t,μ}.
The pseudo code for computing φ recursively is
shown in Fig. 2.
Figure 2: The alignment algorithm, consisting of initialization, recursion, and termination steps.
The most likely sequence of scaling factors m* is obtained
from the algorithm by saving the intermediate values that
maximize each expression in the recursion step. The complexity
of the algorithm is O(k T |M|² D), where k is the number
of notes, T is the number of samples in the digital signal,
|M| is the number of all possible tempo values and D is the
maximal duration of a note. Using a pre-computation of
the likelihood values we can reduce the time complexity
by a factor of D, and thus the run time of the algorithm reduces
to O(k T |M|²). It is important to clarify that the pre-computation
does not completely determine a single pitch
value for a frame. It calculates the probability of the frame
given each possible pitch in the diapason.
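The recursion itself is not reproduced in the extracted text; the sketch below is one plausible way to organize the dynamic program implied by the model, with note blocks scored independently by the spectral model and tempo transitions scored by the Markov prior. The function names and the discrete tempo grid are illustrative assumptions.

    import numpy as np

    def align(melody, num_samples, tempo_grid, block_loglik, tempo_logprior, tempo_translogprob):
        # melody: list of (pitch, duration_in_samples) pairs.
        # block_loglik(i, start, end): log P(o[start:end] | pitch_i)  (spectral model).
        # tempo_logprior(mu), tempo_translogprob(mu, mu_prev): tempo model terms.
        k, M, NEG = len(melody), len(tempo_grid), -np.inf
        # phi[t, j] = best log-likelihood of the notes processed so far ending at
        # sample t with tempo_grid[j] as the scaling factor of the current note.
        phi = np.full((num_samples + 1, M), NEG)
        for i, (pitch, dur) in enumerate(melody):
            nxt = np.full((num_samples + 1, M), NEG)
            for j, mu in enumerate(tempo_grid):
                d = max(1, int(round(mu * dur)))       # actual (scaled) duration of note i
                for t in range(d, num_samples + 1):
                    emit = block_loglik(i, t - d, t)
                    if i == 0:
                        prev = tempo_logprior(mu)
                    else:
                        prev = max(phi[t - d, jp] + tempo_translogprob(mu, tempo_grid[jp])
                                   for jp in range(M))
                    nxt[t, j] = prev + emit
            phi = nxt
        return phi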
As mentioned above, our primary goal is to retrieve the
segments of signals representing the melody given by the
query. Theoretically, we need to assign a segment its likelihood
score, P(o | p, d) = Σ_m P(o, m | p, d). However, this
marginal probability is rather expensive to compute. We
thus approximate this probability with the joint probability
of the signal and the most likely sequence of scaling factors,
P(o, m* | p, d). That is, we use the likelihood score of the
alignment procedure as a retrieval score.
5. EXPERIMENTAL RESULTS
To evaluate our algorithm we collected 50 different melodies
from famous opera arias, and queried these melodies in a
database of real recordings. The recordings consist of 832
performances of opera arias performed by more than 40
different tenor singers with full orchestral accompaniment.
Each performance is one minute. The data was extracted
from seven audio CDs [2, 3, 5, 1, 4], and saved in wav for-
mat. Most of the performances (about 90 percent) are digital
recordings (DDD/ADD). Yet, some performances are
digital remastering of old analog recordings (AAD). This
introduced additional complexity to the retrieval task due
to varying levels of noise.
Table 2: Retrieval results. For each melody length (5, 15, and 25 seconds) and tempo model, the table reports average precision (AvgP), coverage (Cov), and one-error (Oerr) under the HSN and HIN spectral distribution models.
The melodies for the experiments were extracted from
MIDI files. About half of the MIDI files were downloaded
from the Internet 1 and the rest of the MIDI files were performed
on a MIDI keyboard and saved as MIDI files.
We compared three different tempo-based approaches for
retrieval. The first method simply uses the original durations
given in the query without any scaling. We refer to
this simplistic approach as the Fixed Tempo (FT) model.
The second approach uses a single scaling factor for all the
durations of a given melody. However, this scaling factor is
determined independently for each signal so as to maximize
the signal's likelihood. We refer to this model as the Locally
Fixed Tempo (LFT) model. The third retrieval method is
our variable tempo model that we introduced in this paper.
We therefore refer to this method as the Variable Tempo
(VT) model. By taking a prefix subset of each melody used
in a query we evaluated three different lengths of melodies:
5 seconds, 15 seconds, and 25 seconds.
To assess the quality of the spectral distribution model
described in Sec. 3.2, we implemented the spectral distribution
model described in [16]. This model assumes that the
harmonics of the signal are contaminated with noise whose
mean energy is independent of the energy of the harmonics.
We refer to our model as the Harmonics with Scaled Noise
(HSN) model and to the model from [16] as the Harmonics
with Independent Noise (HIN) model.
To evaluate the performances of the methods we used
three evaluation measures: one-error, coverage and average
precision. To explain these measures we introduce the
following notation. Let N be the number of performances
in our database and let M be the number of melodies that
we search for. (As mentioned above, in our experiments
M = 50.) For a melody index i we denote
by Y i the set of the performances containing melody i. The
probabilistic modeling we discussed in this paper induces a
natural ordering over the performances for each melody. Let
R i (j) denote the ranking of the performance indexed j with
respect to melody i. Based on the above definitions we now
give the formal definitions of the performance measures we
used for evaluation.
1 http://www.musicscore.freeserve.co.uk, http://www.classicalmidi.gothere.uk.com
One-Error. The one-error measures how many times the
top-ranked performance did not contain the melody posed
in the query. Thus, if the goal of our system is to return a
single performance that contains the melody, the one-error
measures how many times the retrieved performance did not
contain the melody. Formally, the definition of the one-error
is,
    One-Err = (1/M) Σ_{i=1}^{M} [[ the top-ranked performance for melody i is not in Y_i ]] ,
where [[π]] is 1 if the predicate π holds and 0 otherwise.
Coverage. While the one-error evaluates the performance
of a system with respect to the top-ranked performance, the
goal of the coverage measure is to assess the performance of
the system for all of the possible performances of a melody.
Informally, Coverage measures the number of excess (non-
relevant) performances we need to scan until we retrieve all
the relevant performances. Formally, Coverage is defined as,
    Coverage = (1/M) Σ_{i=1}^{M} ( max_{j ∈ Y_i} R_i(j) − |Y_i| ) .
Average Precision. The above measures do not suffice in
evaluating the performances of retrieval systems as one can
achieve good (low) coverage but suffer high one-error rates,
and vice versa. In order to assess the ranking performance
as a whole we use the frequently used average precision measure.
Formally, the average precision is defined as,
    AvgP = (1/M) Σ_{i=1}^{M} (1/|Y_i|) Σ_{j ∈ Y_i} |{ j' ∈ Y_i : R_i(j') ≤ R_i(j) }| / R_i(j) .
In addition we also use precision versus recall graphs to illustrate
the overall performances of the different approaches
discussed in the paper. A precision-recall graph shows the
level of precision for different recall values. The graphs presented
in this paper are non-interpolated, that is, they were
calculated based on the precision and recall values achieved
at integer positions of the ranked lists.
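A small, self-contained sketch of these three measures; rankings are assumed to be 1-based, and the relevance sets play the role of Y_i.

    def evaluate(rankings, relevant):
        # rankings[i][j] = rank of performance j for melody i (1-based)
        # relevant[i]    = set of performances containing melody i
        M = len(rankings)
        one_err = cov = avgp = 0.0
        for R, Y in zip(rankings, relevant):
            top = min(range(len(R)), key=lambda j: R[j])          # performance ranked first
            one_err += top not in Y
            cov += max(R[j] for j in Y) - len(Y)                  # excess items scanned
            avgp += sum(sum(R[j2] <= R[j] for j2 in Y) / R[j] for j in Y) / len(Y)
        return one_err / M, cov / M, avgp / M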
In Table 2 we report results with respect to the performance
measures described for the FT, LFT, and VT mod-
els. For each tempo model we conducted the experiments
with the two spectral distribution models HIN and HSN.
It is clear from the table that the Variable Tempo model
with the Harmonics with Scaled Noise spectral distribution
outperforms the rest of the models and achieves superior
results. Moreover, the performance of the Variable Tempo
model consistently improves as the duration of the queries
increases. In contrast, the Fixed Tempo does not exhibit
any improvement as the duration of the queries increases
and the Locally Fixed Tempo shows only a moderate improvement
when using fifteen-second-long queries instead of
five-second-long queries, and it does not improve as the duration
grows to twenty-five seconds. A reasonable explanation
for these phenomena is that the amount of variability in a
very short query is naturally limited and thus the leverage
gained by accurate tempo modeling which takes into account
the variability in tempo, is rather small.
Figure 3: Precision-recall curves comparing the performance of the three tempo models for queries consisting of five seconds (top), fifteen seconds (middle), and twenty-five seconds (bottom).
Figure 4: Precision-recall curves comparing the performance of each of the tempo models for three different query lengths.
Thus, as the query
duration grows the power of the variable tempo model is
better exploited. The Locally Fixed Tempo can capture the
average tempo of a performance but clearly fails to capture
changes in the tempo. Since the chance of a tempo change
grows with the duration of the query, the average tempo
ceases to be a good approximation and we do not see
further improvement in the retrieval quality.
In Fig 3 we give precision-recall graphs that compare the
three tempo models. Each graph compares FT, LFT and
VT for different query durations. The VT model clearly
outperforms both the FT and LFT models. The longer the
query the wider the gap in performance. In Fig 4 we compare
the precision-recall graphs for each model as a function
of the query duration. Each graph shows the precision-recall
curves for 5, 15, and 25 seconds queries. We again see that
only the VT model consistently improves with the increase
in the query duration. Using a globally fixed tempo (FT) is
clearly inadequate as it results in very poor performance:
precision is never higher than 0.35 even for low levels of re-
call. The performance of the LFT model is more reasonable.
A precision of about 0.5 can be achieved for a recall value of
0.5. However, the full power of our approach is utilized only
when we use the VT model. We achieve an average precision
of 0.92 with a recall of 0.75. It seems that with the VT
model we reach an overall performance that can serve as the
basis for large scale music retrieval systems.
Lastly, as a final sanity check of the conjecture of the robustness
of the VT model we used the VT and LFT models
with three long melody queries (one minute) and applied
the retrieval and alignment process. We then let a professional
musician listen to the segmentation and browse the
segmented spectrogram. An example of a spectrogram with a
segmentation of the VT model is given in Fig 5. The example
is of a performance where the energy of the accompaniment
is higher than the energy of the leading tenor. Nonetheless,
a listening experiment verified that our system was able to
properly segment and align the melody posed by the query.
Although these perceptual listening tests are subjective, the
experiments indicated that the VT model also provides an
accurate alignment and segmentation.
6. DISCUSSION
In this paper we presented a robust probabilistic model
for query by melody. The proposed approach is simple
to implement and was found to work well on polyphony-
rich recordings with various types of accompaniments. The
probabilistic model that we developed focuses on two main
sources of variability. The first is variations in the actual
durations of notes in real recordings (tempo variability) and
the second is the variability of the spectrum mainly due to
the "spectral masking" of the leading vocal by the accompanying
vocals and orchestra. In this work we assumed that
the pitch information in a query is accurate and only the
duration can be altered in the performance. This assumption
is reasonable if the queries are posed using a symbolic
input mechanism such as a MIDI keyboard. However, an
easier and more convenient mechanism is to hum or whistle
a melody. This task is often called "query by humming".
In addition to the tempo variability and spectral masking,
a query by humming system also needs to take into account
imperfections in the pitch of the hummed melody.
Figure 5: An illustration of the alignment and segmentation of the VT model. The pitches of the notes in the melody are overlaid in solid lines.
Indeed,
much of the work on query by humming has been devoted
to music retrieval using noisy pitch information. The majority
of the work on query by humming, though, has focused
on searching noisy queries in symbolic databases. Since the
main thrust of this research is searches in real polyphonic
recordings, it complements the research on query by humming
and can supplement numerous systems that search in
databases. We plan to extend our algorithm so it
can be combined with a front end for hummed queries. In
addition, we have started conducting research on supervised
methods for musical genre classification. We believe that by
combining highly accurate genre classification with a robust
retrieval and alignment we will be able to provide an effective
tool for searching and browsing for both professionals
and amateurs.
Acknowledgments
We would like to thank Moria Koman for her help in creating
the queries used in the experiments and Leo Kon-
torovitch for useful comments on the manuscript.
7.
--R
Best of opera.
Les 40 tenors.
nessun dorma
The young domingo.
A model for reasoning about persistent and causation.
Melody spotting using hidden Markov models.
An overview of audio information retrieval.
Query by humming: Musical information retrieval in an audio database.
Searching monophonic patterns within polyphonic sources.
The new zealand digital library melody index
Automatic segmentation of acoustic musical signals using hidden markov models.
Speech enhancement by harmonic modeling via map pitch track- ing
Audio information retrieval (air) tools.
pitch tracking using joint bayesian estimation of multiple frame parameters.
--TR
A model for reasoning about persistence and causation
Query by humming
An overview of audio information retrieval
Automatic Segmentation of Acoustic Musical Signals Using Hidden Markov Models
--CTR
Keiichiro Hoashi , Kazunori Matsumoto , Naomi Inoue, Personalization of user profiles for content-based music retrieval based on relevance feedback, Proceedings of the eleventh ACM international conference on Multimedia, November 02-08, 2003, Berkeley, CA, USA
Fang-Fei Kuo , Man-Kwan Shan, Looking for new, not known music only: music retrieval by melody style, Proceedings of the 4th ACM/IEEE-CS joint conference on Digital libraries, June 07-11, 2004, Tuscon, AZ, USA
Olivier Gillet , Gal Richard, Drum loops retrieval from spoken queries, Journal of Intelligent Information Systems, v.24 n.2, p.159-177, May 2005 | spectral modeling;query by melody;graphical models;music information retrieval |
564880 | Distributed streams algorithms for sliding windows. | This paper presents algorithms for estimating aggregate functions over a "sliding window" of the N most recent data items in one or more streams. Our results include: (1) For a single stream, we present the first ε-approximation scheme for the number of 1's in a sliding window that is optimal in both worst case time and space. We also present the first ε-approximation scheme for the sum of integers in [0..R] in a sliding window that is optimal in both worst case time and space (assuming R is at most polynomial in N). Both algorithms are deterministic and use only logarithmic memory words. (2) In contrast, we show that any deterministic algorithm that estimates, to within a small constant relative error, the number of 1's (or the sum of integers) in a sliding window over the union of distributed streams requires Ω(N) space. (3) We present the first randomized (ε,δ)-approximation scheme for the number of 1's in a sliding window over the union of distributed streams that uses only logarithmic memory words. We also present the first (ε,δ)-approximation scheme for the number of distinct values in a sliding window over distributed streams that uses only logarithmic memory words. Our results are obtained using a novel family of synopsis data structures. | INTRODUCTION
There has been a flurry of recent work on designing effective
algorithms for estimating aggregates and statistics
over data streams [1, 2, 3, 4, 5, 6, 8, 9, 11, 12, 14, 15, 16,
17, 18, 19, 25], due to their importance in applications such
as network monitoring, data warehousing, telecommunica-
tions, and sensor networks. This work has focused almost
entirely on the sequential context of a data stream observed
by a single party. Figure 1 depicts an example data stream,
where each data item is either a 0 or a 1.
On the other hand, for many of these applications, there
are multiple data sources, each generating its own stream. In
network monitoring and telecommunications, for example,
each node/person in the network is a potential source for
new streaming data. In a large retail data warehouse, each
retail store produces its own stream of items sold. To model
such scenarios, we previously proposed a distributed streams
model [13], in which there are a number of data streams,
each stream is observed by a single party, and the aggregate
is computed over the union of these streams.
Moreover, in many real world scenarios (e.g., marketing, traffic engineering), only the most recent data is important.
For example in telecommunications, call records are generated
continuously by customers, but most processing is done
only on recent call records. To model these scenarios, Datar et al. [4] recently introduced the sliding windows setting for data streams, in which aggregates and statistics are computed over a "sliding window" of the N most recent items in the data stream.
This paper studies the sliding windows setting in both the single stream and distributed stream models, improving upon previous results under both settings. In order to describe our results, we first describe the models and the previous work in more detail.
1.1 Sequential and Distributed Streams
The goal in a (sequential or distributed) algorithm for
data streams is to approximate a function F while minimizing
(1) the total workspace (memory) used by all the parties,
(2) the time taken by each party to process a data item, and
(3) the time to produce an estimate (i.e., the query time).
Many functions on (sequential and distributed) data streams require linear space to compute exactly, and so attention is focused on finding either an (ε, δ)-approximation scheme or an ε-approximation scheme, defined as follows.
Definition 1. An (ε, δ)-approximation scheme for a quantity X is a randomized procedure that, given any positive ε < 1 and δ < 1, computes an estimate X̂ of X that is within a relative error of ε with probability at least 1 - δ, i.e., Pr[|X̂ - X| ≤ εX] ≥ 1 - δ. An ε-approximation scheme is a deterministic procedure that, given any positive ε < 1, computes an estimate whose worst case relative error is at most ε.
Figure 1: An example data stream. The position in the stream (position) and the rank among the 1-bits (1-rank) are computed as the stream is processed.
Algorithms for a Sliding Window over a Single Stream. Datar et al. [4] presented a number of interesting results on estimating functions over a sliding window for a single stream. A fundamental problem they consider is that of determining the number of 1's in a sliding window, which they call the Basic Counting problem. In the stream in Figure 1, for example, the number of 1's in the current window of the 39 most recent items is 20. They presented an ε-approximation scheme for Basic Counting that uses only O((1/ε) log²(εN)) memory bits of workspace, processes each data item in O(1) amortized and O(log N) worst case time, and can produce an estimate over the current window in O(1) time. They also prove a matching lower bound on the space. They demonstrated the importance of this problem by using their algorithm as a building block for a number of other functions, such as the sum of bounded integers and the Lp norms (in a restricted model).
We improve upon their results by presenting an ε-approximation scheme for Basic Counting that matches their space and query time bounds, while improving the per-item processing time to O(1) worst case time. We also present an ε-approximation scheme for the sum of bounded integers in a sliding window that improves the worst case per-item processing time from O(log N) to O(1).
Our improved algorithms use a family of small space data structures, which we call waves. An example wave for Basic Counting is given in Figure 2, for the data stream in Figure 1. (The basic shape is suggestive of an ocean wave about to break.) The x-axis is the 1-rank, and extends to the right as new 1-bits arrive. As we shall see, as additional stream bits arrive, the wave retains this basic shape while "moving" to the right so that the crest of the wave is always over the largest 1-rank thus far.
Algorithms for Distributed Streams. In the distributed streams model [13], each party observes only its own stream, has limited workspace, and communicates with the other parties only when an estimate is requested. Specifically, to produce an estimate, each party sends a message to a Referee who computes the estimate. This model reflects the set-up used by commercial network monitoring products, where the data analysis front-end serves the role of the Referee. Among the results in [13] were (i) an (ε, δ)-approximation scheme for the number of 1's in the union of distributed streams (i.e., in the bitwise OR of the streams), using only O((1/ε²) log(1/δ) log n) memory bits per party, where n is the length of the stream, and (ii) an (ε, δ)-approximation scheme for the number of distinct values in a collection of distributed streams, using only O((1/ε²) log(1/δ) log R) memory bits, where the values are in [0..R]. Both algorithms use coordinated sampling: each stream is sampled at the same random positions, for a given sampling rate. Each party stores the positions of (only) the 1-bits in its sample. When the stored 1-bits exceed the target space bound, the sampling probability is reduced, so that the sample fits in smaller space. Sliding windows were not considered.
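As a rough illustration of the coordinated-sampling idea, the sketch below (our own; the scheme in [13] derives the keep/drop decisions from a shared pairwise independent hash function rather than from the seeded generator used here, and the names are illustrative) shows how all parties can agree on which positions are sampled at a given rate:

    import random

    SHARED_SEED = 12345            # fixed in advance and stored at every party

    def keep(position, level):
        # All parties make the same decision for a given position; the
        # sampling probability is halved each time `level` is incremented.
        rng = random.Random(f"{SHARED_SEED}-{position}")
        return rng.random() < 2.0 ** (-level)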
In this paper, we combine the idea of a wave with coordinated sampling. We store a wave consisting of many random samples of the stream. Samples that contain only the recent items are sampled at a high probability, while those containing older items are sampled at a lower probability. We obtain an (ε, δ)-approximation scheme for the number of 1's in the position-wise union of distributed streams over a sliding window. We also obtain an (ε, δ)-approximation scheme for the number of distinct values in sliding windows over both single and distributed streams. Each scheme uses only logarithmic memory words per party.
The algorithms we propose are for the distributed streams model in the stored coins setting [13], where all parties share a string of unbiased and fully independent random bits, but these bits must be stored prior to observing the streams, and the space to store these bits must be accounted for in the workspace bound. Previous works on streaming models (e.g., [1, 5, 6, 8, 18, 19]) have studied settings with stored coins. Stored coins differ from private coins (e.g., as studied in communication complexity [21, 23, 24]) because the same random string can be stored at all parties.
1.2 Summary of Contributions
The contributions of this paper are as follows.
1. We introduce a family of synopsis data structures called waves, and demonstrate their utility for data stream processing in the sliding windows setting.
2. We present the first ε-approximation scheme for Basic Counting over a single stream that is optimal in worst case space, processing time, and query time. Specifically, for a given accuracy ε, it matches the space bound and O(1) query time of Datar et al. [4], while improving the per-item processing time from O(1) amortized (O(log N) worst case) to O(1) worst case.
3. We present the first ε-approximation scheme for the sum of integers in [0..R] in a sliding window over a single stream that is optimal in worst case space (assuming R is at most polynomial in N), processing time,
Figure 2: A deterministic wave and an example window query (n = 39). The x-axis shows the 1-ranks, and on the y-axis, level i is labeled as "by 2^i".
and query time. Specifically, it improves the per-item processing time of [4] from O(1) amortized (O(log N) worst case) to O(1) worst case.
4. We show that in contrast to the single stream case, no deterministic algorithm can estimate the number of 1's in a sliding window over the union of distributed streams within a small constant relative error unless it uses Ω(N) space.
5. We present the first randomized (ε, δ)-approximation scheme for the number of 1's in a sliding window over the union of distributed streams that uses only logarithmic memory words. We use this as a building block for the first (ε, δ)-approximation scheme for the number of distinct values in a sliding window over distributed streams that uses only logarithmic memory words.
The remainder of the paper is organized as follows. Section 2 presents further comparisons with previous related work. Section 3 and Section 4 present results using the deterministic (randomized, resp.) wave synopsis. Finally, Section 5 shows how the techniques can be used for various other functions over a sliding window, such as distinct values counting and the position of the Nth most recent 1.
2. RELATED WORK
In the paper introducing the sliding windows setting [4], the authors gave an algorithm for the Basic Counting problem that uses exponential histograms (EHs). An exponential histogram maintains more information about recently seen items, less about old items, and none at all about items outside the "window" of the last N items. Specifically, the k0 most recent 1's are assigned to individual buckets, the k1 next most recent 1's are assigned to buckets of size 2, the k2 next most recent 1's are assigned to buckets of size 4, and so on, until all the 1's within the last N items are assigned to some bucket. For each bucket, the EH stores only its size (a power of 2) and the position of the most recent 1 in the bucket. Each ki (up to the last bucket) takes one of two consecutive values determined by ε. Upon receiving a new item, the last bucket is discarded if its position no longer falls within the window. Then, if the new item is a 1, it is assigned to a new bucket of size 1. If this makes k0 too large, then the two least recent buckets of size 1 are merged to form a bucket of size 2. If k1 is now too large, the two least recent buckets of size 2 are merged, and so on, resulting in a cascading of up to log N bucket merges in the worst case. As we shall see, our approach using waves avoids this cascading.
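For intuition, here is a minimal Python sketch of the EH insertion step just described (our own illustration, not the authors' code; it ignores bucket expiration and uses a single allowance parameter k in place of the exact bucket-count bounds of [4]):

    from collections import deque

    def eh_insert(buckets, pos, k):
        # buckets: deque of (size, most_recent_position) pairs, newest first,
        # with sizes non-decreasing from newest to oldest.
        buckets.appendleft((1, pos))          # the new 1-bit starts its own bucket
        size = 1
        while sum(1 for s, _ in buckets if s == size) > k + 1:
            # merge the two least recent buckets of this size; the merge may cascade
            i = max(j for j, (s, _) in enumerate(buckets) if s == size)
            s1, p1 = buckets[i]
            s2, p2 = buckets[i - 1]
            del buckets[i]
            buckets[i - 1] = (s1 + s2, max(p1, p2))
            size *= 2

The sketch is called once per arriving 1-bit; a 0-bit would only trigger the (omitted) expiration check, and the while loop is exactly the cascading of merges that the wave synopsis avoids.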
Our previous paper [13] formalized the distributed streams
model and presented several (; -approximation schemes
for aggregates over distributed streams. We also compared
the power of the distributed streams model with the previously
studied merged streams model (e.g., [5, 19]), where all
the data streams arrived at the same party, but interleaved
in an arbitrary order.
The algorithm by Flajolet and Martin [7] and its variant due to Alon, Matias and Szegedy [1] estimate the number of distinct values in a stream (and also the number of 1's in a bit stream) up to a constant relative error ε > 1. The algorithm works in the distributed streams model too, and can be adapted to sliding windows [4]. There are two results we know of that extend this algorithm to work for arbitrary relative error, by Trevisan [25] and by Bar-Yossef et al. [3].¹ Trevisan's algorithm can be extended to distributed streams quite easily, but the cost of extending it to sliding windows is not clear. There are O(log(1/δ)) instances of the algorithm, using different hash functions, and each must maintain the O(1/ε²) smallest distinct hashed values in the sliding window of N values. Assuming the hashed values are random, maintaining just the minimum value over a sliding window takes O(log N) expected time [4]. We do not know how to extend the algorithm in [3] to sliding windows, and in addition, its space and time bounds for single streams are worse than ours (however, their algorithm can be made list efficient [3]).
We now quickly survey some other recent related work. Frameworks for studying data synopses were presented in [12], along with a survey of results. There have been algorithms for computing many different functions over a data stream observed by a single party, such as maintaining histograms [16], maintaining significant transforms of the data that are used to answer aggregate queries [14], and computing correlated aggregates [9]. Babcock et al. [2] considered the problem of maintaining a uniform random sample of a specified size over a sliding window of the most recent elements.
¹ Datar et al. [4] also reported an extension to arbitrary relative error for a sliding window over a single stream, using the Trevisan approach [20].
In communication complexity models [22], the parties have unlimited time and space to process their respective inputs. Simultaneous 1-round communication complexity results can often be related to the distributed streams model. The lower bounds from 1-round communication complexity certainly carry over directly.
3. DETERMINISTIC WAVES
In this section, we will first present our new ε-approximation scheme for the number of 1's in a sliding window over a single stream. Then we will present our new ε-approximation scheme for the sum of bounded integers in a sliding window over a single stream. Finally, we will consider distributed streams, for three natural definitions of a sliding window over such streams. We will show that our small-space deterministic schemes can improve the performance for two of the scenarios, but for the third, no deterministic ε-approximation scheme can obtain sub-linear space.
3.1 The Basic Wave
We begin by describing the basic wave, and show how it yields an ε-approximation scheme for the Basic Counting problem for any sliding window up to a prespecified maximum window size N. The basic wave will be somewhat wasteful in terms of its space bound, processing time, and query time.
Consider a data stream of bits, and a desired positive ε < 1. To simplify the notation, we will assume throughout that 1/ε is an integer. We maintain two counters: pos, which is the current length of the stream, and rank, which is the current number of 1's in the stream (equivalently, the 1-rank of the most recent 1).
The wave contains the positions of the recent 1's in the stream, arranged at different "levels". The wave has ℓ := ⌈log(2εN)⌉ levels. For 0 ≤ i ≤ ℓ - 1, level i contains the positions of the 1/ε + 1 most recent 1-bits whose 1-rank is a multiple of 2^i.² Figure 2 depicts the basic wave for the data stream in Figure 1, for 1/ε = 3 and N = 48. In the figure, there are five levels, with level i labeled as "by 2^i"; it contains the positions of the 1/ε + 1 most recent 1-bits whose 1-ranks are 0 modulo 2^i. The 1-ranks are given on the x-axis.
Given this wave, we estimate the number of 1's in a window of size n ≤ N as follows. Let s := pos - n + 1; we are to estimate the number of 1's in stream positions [s, pos]. The steps are:
1. Let p1 be the maximum position stored in the wave that is less than s, and p2 the minimum position stored in the wave that is greater than or equal to s. (If no such p1, return x̂ := rank as the exact answer.) Let r1 and r2 be the 1-ranks of p1 and p2 respectively.
2. Return x̂ := rank - r, where r := r1 if r2 = r1 + 1, and otherwise r := (r1 + r2 - 1)/2.
For example, given the window query depicted in Figure 2, we have r1 = 24 and r2 = 32 (with rank = 50). As noted earlier, the actual number of 1's in this window is 20, and indeed x̂ is within a relative error of ε = 1/3 of this value.
² To simplify the description, we describe throughout the steady state of a wave. Initially, there will be fewer than 1/ε + 1 1-bits stored at a level, and the wave stores all of them.
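A small Python sketch of this two-step estimation (our own illustration; `wave` is assumed to hold the stored (position, 1-rank) pairs, and the tie-breaking in step 2 follows the reconstruction above):

    def estimate_window_ones(wave, pos, rank, n):
        # wave: iterable of (position, one_rank) pairs currently stored;
        # pos, rank: the two counters; n: the window size (n <= N).
        s = pos - n + 1
        below = [(p, r) for p, r in wave if p < s]
        at_or_above = [(p, r) for p, r in wave if p >= s]
        if not below:
            return rank                        # exact, per step 1
        r1 = max(below)[1]                     # 1-rank of p1
        if not at_or_above:
            return rank - r1                   # our fallback when p2 is absent
        r2 = min(at_or_above)[1]               # 1-rank of p2
        r = r1 if r2 == r1 + 1 else (r1 + r2 - 1) / 2.0
        return rank - r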
Lemma 1. The above estimation procedure returns an estimate x̂ that is within a relative error of ε of the actual number of 1's in the window.
Proof. Each level i contains 1/ε + 1 1-ranks (stored with their positions in the stream) whose 1-ranks are 2^i apart. Thus, regardless of the current rank, the earliest 1-rank at level i is at most rank - (1/ε)·2^i. Thus, the difference between rank and the earliest 1-rank in level ℓ - 1 is at least (1/ε)·2^(ℓ-1) ≥ N. Since the difference in positions is at least as large as the difference in 1-ranks, it follows that p1 exists. Let j be the smallest numbered level containing position p1.
We know that the number of 1's in the window is in [rank - r2 + 1, rank - r1]. For example, it is between [50 - 32 + 1, 50 - 24] in Figure 2. Thus if r2 = r1 + 1, we return the exact answer. So assume r2 > r1 + 1. By returning the midpoint of the range, we guarantee that the absolute error is at most (r2 - r1 - 1)/2. By construction, there is at most a 2^j gap between r1 and its next larger 1-rank r2. Thus the absolute error in our estimate is at most 2^(j-1). To bound the relative error, we will show that all the positions in level j - 1 are contained in the window, and this includes at least (1/ε)·2^(j-1) 1's.
Let r3 be the earliest 1-rank at level j - 1. Position p1 was not in level j - 1, so r1 < r3. Since r2 is the smallest 1-rank in the wave larger than r1, we have r2 ≤ r3. Moreover, as argued above, r3 ≤ rank - (1/ε)·2^(j-1). Therefore, the actual number of 1's in the window is at least rank - r2 + 1 ≥ rank - r3 + 1 > (1/ε)·2^(j-1). Thus the relative error is less than 2^(j-1) / ((1/ε)·2^(j-1)) = ε.
Note that the proof readily extends beyond the steady state case: any level with fewer than 1/ε + 1 positions will contain a position less than s, and hence cannot serve the role of level j - 1 above.
3.2 Improvements
We now show how to improve the basic wave in order to obtain an optimal deterministic wave for a sliding window of size N. Let N' be the smallest power of 2 greater than or equal to 2N. First, we use modulo N' counters for pos and rank, and store the positions in the wave as modulo N' numbers, so that each takes only log N' bits, regardless of the length of the stream. Next, we discard or expire any positions that are more than N from pos, as these will never be used, and would create ambiguity in the modulo N' arithmetic. We keep track of both the largest 1-rank discarded (r1) and the smallest 1-rank still in the wave (r2), so that the number of 1's in a sliding window of size N can be answered in O(1) time. Processing a 0-bit takes constant time, while processing a 1-bit takes O(log(εN)) worst case time and O(1) amortized time, as a new 1-bit is stored at each level i such that its 1-rank is a multiple of 2^i. Each of these improvements is used for the EH synopsis introduced by Datar et al. [4], to obtain the same bounds.
However, the deterministic wave synopsis is quite different from the EH synopsis, so the steps used are different too. Significantly, we can decrease the per-item processing time to O(1) worst case, as follows. Instead of storing a single position in multiple levels, we will store each position only at its maximum level, as shown in Figure 3.³
³ In the figure, we have not explicitly discarded positions
Figure 3: An optimal deterministic wave. The x-axis shows the 1-ranks, and on the y-axis, level i is labeled as "by 2^i".
For levels 0 ≤ i < ℓ - 1, we store about half as many positions as at a level of the basic wave (⌈1/(2ε)⌉ + 1 suffices), and for level ℓ - 1, we store ⌈1/ε + 1⌉ positions. (At all levels, we may store fewer positions, because we discard expired positions.) In the wave, the positions at each level are stored in a fixed length queue, called a level queue, so that each time a new position is added for the level, the position at the tail of the queue is removed (assuming the queue is full). For example, using a circular buffer for each queue, the new head position simply overwrites the next buffer slot. We maintain a doubly linked list of the positions (of the 1-bits) in the wave in increasing order. Positions evicted from the tail of a level queue are spliced out of this list. As each new stream item arrives, we check the head of this sorted list to see if the head needs to be expired.
Finally, as observed in [4], the set of positions is a sorted sequence of numbers between 0 and N', so by storing the difference (modulo N') between consecutive positions instead of the absolute positions, we can reduce the space from O((1/ε) log(εN) log N) bits to O((1/ε) log²(εN)) bits.
Figure
4 summarizes the steps of the deterministic wave
algorithm. Putting it altogether, we have:
Theorem 1. The algorithm in Figure 4 is an ε-approximation scheme for the number of 1's in a sliding window of size N over a data stream, using O((1/ε) log²(εN)) bits. Each stream item is processed in O(1) worst case time. At each time instant, it can provide an estimate in O(1) time.
Proof. (sketch) The proof of the relative error bound follows along the lines of the proof of Lemma 1, because the set of positions in the improved wave is the same or a superset of the set of positions in the basic wave. The wave level in step 3(a) is the position of the least-significant 1-bit in rank (numbering from 0). Assuming this is a constant time operation, the time bounds follow from the above discussion.⁴ As for the space, because the level queues are updated in place, the same block of memory can be used throughout, and hence the linked list pointers are offsets into this block and not full-sized pointers. The space bound follows.
The space bound is optimal because it matches the lower
outside the window of size N in order to show the full levels. All positions less than pos - N have expired, and the largest expired 1-rank is retained as r1.
4 Below, we show how to determine the wave level in constant
time even on a weaker machine model that does not
explicitly support this operation in constant time.
Upon receiving a stream bit b:
1. Increment pos. (Note: All additions and comparisons are done modulo N', the smallest power of 2 greater than or equal to 2N.)
2. If the head (p, r) of the linked list L has expired (i.e., p ≤ pos - N), then discard it from L and from (the tail of) its queue, and store r as the largest 1-rank discarded.
3. If b = 1 then do:
(a) Increment rank, and determine the corresponding wave level j, i.e., the largest j such that rank is a multiple of 2^j.
(b) If the level j queue is full, then discard the tail of the queue and splice it out of L.
(c) Add (pos, rank) to the head of the level j queue and the tail of L.
Answering a query for a sliding window of size N:
1. Let r1 be the largest 1-rank discarded. (If no such r1, return x̂ := rank as the exact answer.) Let r2 be the 1-rank at the head of the linked list L. (If L is empty, return x̂ := rank - r1 as the exact answer.)
2. Return x̂ := rank - r, where r := r1 if r2 = r1 + 1, and otherwise r := (r1 + r2 - 1)/2.
Figure 4: A deterministic wave algorithm for Basic Counting over a single stream.
bound by Datar et al. [4] for both randomized and deterministic
algorithms.
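As a concrete illustration of Figure 4, here is a minimal Python sketch (our own, not the authors' code). It keeps plain integer positions instead of modulo-N' counters, and it uses a deque with linear-time removal in place of the in-place circular buffers and doubly linked list of the paper, so it illustrates the logic but does not achieve the O(1) worst case guarantee:

    from collections import deque

    class DeterministicWave:
        def __init__(self, N, eps):
            self.N = N
            levels = max(1, (2 * N - 1).bit_length())        # about ceil(log2(2N))
            per_level = int(1 / eps) + 1                     # queue capacity (sketch)
            self.queues = [deque(maxlen=per_level) for _ in range(levels)]
            self.alive = deque()        # (position, rank) pairs still in the wave, oldest first
            self.pos = 0
            self.rank = 0
            self.r1 = None              # largest 1-rank discarded so far

        def update(self, bit):
            self.pos += 1
            while self.alive and self.alive[0][0] <= self.pos - self.N:
                _, r = self.alive.popleft()                  # expire the old head
                self.r1 = r
            if bit == 1:
                self.rank += 1
                j = (self.rank & -self.rank).bit_length() - 1    # largest j with 2^j | rank
                j = min(j, len(self.queues) - 1)
                q = self.queues[j]
                if len(q) == q.maxlen:                       # tail of the level queue is evicted
                    try:
                        self.alive.remove(q[0])
                    except ValueError:
                        pass                                 # it had already expired
                q.append((self.pos, self.rank))
                self.alive.append((self.pos, self.rank))

        def estimate(self):
            if self.r1 is None:
                return self.rank
            if not self.alive:
                return self.rank - self.r1
            r2 = self.alive[0][1]
            if r2 == self.r1 + 1:
                return self.rank - self.r1
            return self.rank - (self.r1 + r2 - 1) / 2.0

The update and estimate methods mirror the two halves of Figure 4 for a query over the full window of size N.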
Computing the Wave Level on a Weaker Machine Model. Step 3(a) of Figure 4 requires computing the least-significant 1-bit in a given number. On a machine model that does not explicitly support this operation in constant time, a naive approach would be to examine each bit of rank one at a time until the desired position is found. But this takes Θ(log N) worst case time, because rank has log N' bits. Instead, we store the log N' wave levels associated with the 1-ranks 1, 2, 3, ... in an array (e.g., {0, 1, 0, 2, 0, 1, 0, 3, ...}). This takes only O(log N' · log log N') bits. We also store a counter d of log N' - log log N' bits, initially 1. As 1-bits are received, the desired wave level is the next element in this array. The first 1-bit after reaching the end of the array has the property that the last log log N' bits of rank are 0, and the desired wave level is log log N' plus the position of the least significant 1-bit in d (numbering from 0). We then increment d and return to cycling through the array. This correctly computes the wave level at each step. Moreover, note that while we are cycling through the array, we have log N' steps until we need to know the least significant 1-bit in d. Thus by interleaving (i) the cycling and (ii) the search through the bits of d, we can determine each of the wave levels in O(1) worst case time.
Basic Counting for Any Window of Size n ≤ N. The algorithm in Figure 4 achieves constant worst case query time for a sliding window of size N. For a sliding window of any size n ≤ N, this single wave can be used to give an estimate for the Basic Counting problem that is within an ε relative error, by following the two steps outlined for the Basic Wave (Section 3.1). However, the query time for window sizes less than N is O((1/ε) log(εN)) in the worst case, because we must search through the linked list L in order to determine p1 and p2. This matches the query time bound for the EH algorithm [4].
3.3 Sum of Bounded Integers
The deterministic wave scheme can be extended to handle the problem of maintaining the sum of the last N items in a data stream, where each item is an integer in [0..R]. Datar et al. [4] showed how to extend their EH approach to obtain an ε-approximation scheme for this problem, using O((1/ε)(log N + log R)) buckets of O(log N + log R) bits each, O(1) query time, and O(log R / log N) amortized and O(log N + log R) worst case per-item processing time. (They also presented a matching asymptotic lower bound on the number of bits, under certain weak assumptions on the relative sizes of N, R, and ε.) We show how to achieve constant worst case per-item processing time, while using the same number of memory words and the same query time. (The number of bits is O((1/ε)(log N + log R)²), which is slightly worse than the EH bound if R is super-polynomial in N.)
Our algorithm is depicted in Figure 5. The sum over a sliding window can range from 0 to N·R. Let N' be the smallest power of 2 greater than or equal to 2NR. We maintain two modulo N' counters: pos, the current length, and total, the running sum. There are ℓ := ⌈log(2εNR)⌉ levels. The algorithm follows the same general steps as the algorithm in Figure 4. Instead of storing pairs (p, r), we store triples (p, v, z) where v is the value for the data item (not needed before because the value for a stored item was always 1) and z is the partial sum through this item (the equivalent of the 1-rank for sums). When answering a query, we know that the window sum is in [total - z2 + v2, total - z1], and we return the midpoint of this interval.
The key insight in this algorithm is that it suffices to store an item (only) at a level j such that 2^j is the largest power of 2 that divides a number in (total, total + v]. Naively, one would mimic the Basic Counting wave by viewing a value v as v items of value 1. But this would take O(R) worst case time to process an item. Datar et al. [4] reduced this to O(log N + log R) time by directly computing the EH resulting after inserting v items of value 1. However, a single item is stored in up to O(log N + log R) EH buckets. In contrast,
Upon receiving a stream value v ∈ [0..R]:
1. Increment pos. (Note: All additions and comparisons are done modulo N'.)
2. If the head (p, v', z) of the linked list L has expired (i.e., p ≤ pos - N), then discard it from L and from (the tail of) its queue, and store z as the largest partial sum discarded.
3. If v > 0 then do:
(a) Determine the largest j such that some number in (total, total + v] is a multiple of 2^j. Add v to total.
(b) If the level j queue is full, then discard the tail of the queue and splice it out of L.
(c) Add (pos, v, total) to the head of the level j queue and the tail of L.
Answering a query for a sliding window of size N:
1. Let z1 be the largest partial sum discarded from L. (If no such z1, return x̂ := total as the exact answer.) Let (p, v2, z2) be the head of the linked list L. (If L is empty, return x̂ := total - z1 as the exact answer.)
2. Return x̂ := total - (z1 + z2 - v2)/2, the midpoint of the interval [total - z2 + v2, total - z1].
Figure 5: A deterministic wave algorithm for the sum over a sliding window.
we store the item just once, which enables our O(1) time
bound.
The challenge is to quickly compute the wave level in step 3(a); we show how to do this in O(1) time. First observe that the desired wave level is the largest position j (numbering from 0) such that some number y in the interval (total, total + v] has 0's in all positions less than j (and hence y is a multiple of 2^j). Second, observe that y - 1 and y differ in bit position j, and if this bit changes from 1 to 0 at any point in (total, total + v], then j is not the largest such position. Thus, j is the position of the most-significant bit that is 0 in total and 1 in total + v. Accordingly, let f be the bitwise complement of total, let g := total + v, and let h be the bitwise AND of f and g. Then the desired wave level is the position of the most-significant 1-bit in h, i.e., ⌊log h⌋.⁵
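In Python-style bit arithmetic (our illustration, on plain integers and ignoring the modulo N' wraparound of the actual algorithm), the computation of step 3(a) is simply:

    def sum_wave_level(total, v):
        # largest j such that some multiple of 2**j lies in (total, total + v]
        h = ~total & (total + v)
        return h.bit_length() - 1

    # e.g. sum_wave_level(5, 3) == 3, since 8 is the only multiple of 8 in (5, 8]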
Putting it altogether, we have:
Theorem 2. The algorithm in Figure 5 is an ε-approximation scheme for the sum of the last N items in a data stream, where each item is an integer in [0..R]. It uses O((1/ε)(log N + log R)) memory words, where each memory word is O(log N + log R) bits (i.e., sufficiently large to hold an item or a window size). Each item is processed in O(1) worst case time. At each time instant, it can provide an estimate in O(1) time.
⁵ On a weaker machine model that does not support this operation on h in constant time, we can use binary search to find the desired position in O(log(log N + log R)) time, as follows. Let w be the word size, and B be a bit mask comprising of w/2 1's followed by w/2 0's. We begin by checking whether h AND B is zero. If so, we left shift B by w/4 positions and recurse. Otherwise, we right shift B by w/4 positions and recurse.
Proof. (sketch) For the purposes of analyzing the approximation error, we reduce the wave to an equivalent basic wave for the Basic Counting problem, as follows. For each triple (p, v, z) in the sums wave, we have a pair (p, z') in the basic wave for each z' in (z - v, z], stored in all levels l such that z' is a multiple of 2^l. Also add the pair (p1, z1), where z1 is the largest partial sum discarded by the sums wave algorithm, to all levels l such that z1 is a multiple of 2^l. Next, for each level, discard all but the most recent 1/ε + 1 pairs at the level. Let j be the minimum level containing p1. Adapting the argument in the proof of Lemma 1, it can be shown that (1) regardless of the current rank, the earliest 1-rank at level i is at most rank - (1/ε)·2^i, (2) there is at most a 2^j gap between r1 and its next larger position r2, and (3) all the positions in level j - 1 are contained in the window.
We know that the window sum is in [total - z2 + v2, total - z1], and since we take the midpoint, the absolute error of x̂ is at most (z2 - v2 - z1)/2. The gap between z2 - v2 and z1 is at most the gap between r1 and r2 in the basic wave. Thus by (2) above, the absolute error is at most 2^(j-1). Moreover, by (1) and (3) above, the actual window sum is at least (1/ε)·2^(j-1). Thus the relative error is less than ε.
The space and time bounds are immediate, given the above discussion of how to perform step 3(a) in constant time.
3.4 Distributed Streams
We consider three natural definitions for a sliding window over a collection of t ≥ 2 distributed streams, as illustrated for the Basic Counting problem:
1. We seek the total number of 1's in the last N items in each of the t streams (t·N items in total).
2. A single logical stream has been split arbitrarily among the parties. Each party receives items that include a sequence number in the logical stream, and we seek the total number of 1's in the last N items in the logical stream.
3. We seek the total number of 1's in the last N items in the position-wise union (logical OR) of the t streams.
The deterministic wave can be used to answer sliding windows queries over a collection of distributed streams, for both the first two scenarios. For the first scenario, we apply the single stream algorithm to each stream. To answer a query, each party sends its count to the Referee, who simply sums the answers. Because each individual count is within an ε relative error, so is the total. The second scenario can similarly be reduced to the single stream problem. The only issue is that each party knows only the latest sequence number in its stream, not the overall latest, so some waves may contain expired positions. Thus to answer a query, each party sends its wave to the Referee, who computes the maximum sequence number over all the parties and then uses each wave to obtain an estimate over the resulting window, and sums the result. Because each individual estimate is within an ε relative error (recall the discussion at the end of Section 3.2), so is the total. By improving the single stream performance over the previous work, we have improved the distributed streams performance for these two scenarios.
However, the third scenario is more problematic. Denote as the Union Counting problem the problem of counting the number of 1's in the position-wise union of t distributed data streams. (If each stream represents the characteristic vector for a set, then this is the size of the union of these sets.) We present next a linear space lower bound for deterministic algorithms for this problem, before considering randomized algorithms in Section 4.
A Lower Bound for Deterministic Algorithms. We show the following lower bound on any deterministic algorithm for the Union Counting problem that guarantees a small constant relative error.
Theorem 3. Any deterministic algorithm that guarantees a sufficiently small constant relative error ε for the Union Counting problem requires Ω(n) space for n-bit streams, even for t = 2 parties (and no sliding window).
Proof. The proof is by contradiction. Suppose that an algorithm existed for approximating Union Counting within a relative error of ε using space of at most βn bits, for suitable small constants ε and β. (We have not attempted to maximize the constants ε or β.)
Let A and B be the two parties, where A sees the data stream X and B sees the data stream Y. X and Y are of length n (n even), and a query request occurs only after both streams have been observed. Suppose that both X and Y have exactly n/2 ones and n/2 zeroes. Note that for this restricted scenario, the exact answer for the Union Counting problem is n/2 + H(X,Y)/2, (1) where H(X,Y) is the Hamming distance between X and Y.
For each possible message m from A to the Referee C, let Sm denote the set of all inputs to A for which A sends m to C. Since A's workspace is only βn bits, the number of distinct messages that A could send to C is at most 2^(βn). The number of possible inputs for A is C(n, n/2). Using the pigeonhole principle, we conclude that there exists a message m that A sends to C such that |Sm| ≥ C(n, n/2) / 2^(βn).
Because the relative error is at most ε and the exact answer is at most n, the absolute error of any estimate produced by the algorithm is at most εn. We claim that no two inputs in Sm can be at a Hamming distance greater than 4εn. The proof is by contradiction. Suppose there are two inputs X1 and X2 in Sm such that H(X1, X2) > 4εn. Consider two runs of the algorithm: in the first, X = X1 and Y = X1, and in the second, X = X2 and Y = X1. In both runs, the Referee C gets the same pair of messages, and so it outputs the same estimate z. Because the absolute error in both cases is at most εn, we have by equation (1) that z ≤ n/2 + εn and z ≥ n/2 + H(X1,X2)/2 - εn > n/2 + εn, a contradiction.
For a given n-bit input t with exactly n/2 1's, the number of n-bit inputs with exactly n/2 1's at a Hamming distance of k from t (k an even number) is C(n/2, k/2)², corresponding to the combinations of k/2 out of the n/2 0's in t flipped to 1's and k/2 out of the n/2 1's in t flipped to 0's. (There are no such inputs at odd distances.) Thus the number of such inputs at Hamming distance at most k is the sum of C(n/2, k'/2)² over even k' ≤ k, which, for k ≤ n/4, is at most (1 + k/2)·C(n/2, k/2)².
By the above claim, for all messages m that A sends to C,
we have:
By choosing = 1in equation (2), we have that
By choosing suitably large, it follows from
equation (3) that
4n
We obtain the contradiction, which completes the proof.
Sum of Bounded Integers. For the sum of bounded integers problem, scenarios 1 and 2 are straightforward applications of the single stream algorithm. For scenario 3, if "union" means to take the position-wise sum, the problem reduces to the first scenario. If "union" means to take the position-wise maximum, then the lower bound applies, as the number of 1's in the union is a special case of the sum of the position-wise maximum.
The linear space lower bound for deterministic algorithms in Theorem 3 is the motivation for considering the randomized waves introduced in the next section.
4. RANDOMIZED WAVES
Similar to the deterministic wave, the randomized wave contains the positions of the recent 1's in the data stream, stored at different levels. Each level i contains the most recently selected positions of the 1-bits, where a position is selected into level i with probability 2^(-i). Thus the main difference between the deterministic and randomized waves is that for each level i, the deterministic wave selects 1 out of every 2^i 1-bits at regular intervals, whereas a randomized wave selects an expected 1 out of every 2^i 1-bits at random intervals. Also, the randomized wave retains more positions per level.
4.1 The Basic Randomized Wave
We begin by describing the basic randomized wave, and show how it yields an (ε, δ)-approximation scheme for Union Counting in any sliding window up to a prespecified maximum window size N. We then sketch the proof of the approximation guarantees, which uses the main error analysis lemma from [13]. Finally, we show the time and space bounds.
Let N' be the minimum power of 2 that is at least 2N; let δ be the desired error probability. Each party Pj maintains a basic randomized wave for its stream, consisting of one queue for each level 0, 1, ..., d, where d := log N'.
We use a pseudo-random hash function h to map positions to levels, according to an exponential distribution: for each level l < d, a position is mapped to level l with probability 1/2^(l+1), and to level d with probability 1/2^d. h() is computed as follows: we consider the numbers 0, ..., N' - 1 as members of the field G := GF(2^d). In a preprocessing step, we choose q and r uniformly and independently at random from G and store them with each party. In order to compute h(p), a party computes
Party Pj, receiving a stream bit b:
1. Increment pos. (Note: All additions and comparisons are done modulo N'.)
2. Discard any position p in the tail of a queue that has expired (i.e., p ≤ pos - N).
3. If b = 1, then for each level l, 0 ≤ l ≤ h(pos), do the following. (All parties use the same function h.)
(a) If the level l queue Qj(l) is full, then discard the tail of Qj(l).
(b) Add pos to the head of Qj(l).
Answering a query for a sliding window of size n ≤ N, after each party has observed pos bits:
1. Each party j sends its wave, {Qj(l)}, to the Referee. Let s := max(0, pos - n + 1), so that W := [s, pos] is the desired window.
2. For j := 1, ..., t, let lj be the minimum level such that the tail of Qj(lj) is a position p ≤ s.
3. Let l* := max over j = 1, ..., t of lj. Let U be the union of all positions in Q1(l*), ..., Qt(l*).
4. Return x̂ := 2^(l*) · |U ∩ W|.
Figure 6: A randomized wave algorithm for Union Counting in a sliding window (t streams).
x := q·p + r (with the operations being performed in G). We represent x as a d-bit vector and then h(p) is the largest y such that the y most significant bits of x are zero (so h(p) ∈ [0..d]). The two properties of h that we use are: (1) x is distributed uniformly over G; hence the probability that h(p) equals l (where l < d) is exactly 1/2^(l+1). (2) The mapping is pairwise independent, i.e., for distinct p1 and p2, Pr{h(p1) = k1 and h(p2) = k2} = Pr{h(p1) = k1} · Pr{h(p2) = k2}. This is the same hash function we used in [13], except that the domain and range sizes now depend only on the maximum window size N and not the entire stream length.
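A simplified Python stand-in for this level assignment (our own sketch: it uses arithmetic modulo a prime instead of the GF(2^d) arithmetic described above, which changes the constants slightly but preserves the idea of counting leading zeros of a pairwise independent hash value; D, P, q, r are illustrative names):

    import random

    D = 20                         # number of bits, playing the role of d = log N'
    P = (1 << 61) - 1              # a Mersenne prime standing in for the field G
    q = random.randrange(1, P)     # chosen once in a preprocessing step and
    r = random.randrange(P)        # stored at every party

    def level(p):
        # h(p): the number of leading zeros among the D bits of q*p + r
        x = ((q * p + r) % P) & ((1 << D) - 1)
        return D if x == 0 else D - x.bit_length()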
The steps for maintaining the randomized wave are summarized in the top half of Figure 6. A 1-bit arriving in position p is selected into levels 0, ..., h(p). The sample for each level, stored in a queue, contains the c/ε² most recent positions selected into that level. (c := 36 is a constant determined by the analysis; we have not attempted to minimize c.) Consider a queue Qj(l), whose tail (earliest element) is at position i. Then Qj(l) contains all the 1-bits in the interval [i, pos] whose positions hash to a value greater than or equal to l. We call this the range of Qj(l). As we move from level l to l + 1, the range may increase, but it will never decrease. For any window of size at most N, the queues at lower numbered levels may have ranges that fail to contain the window, but as we move to higher levels, we will (with high probability) find a level whose range contains the window.
The bottom half of Figure 6 summarizes the steps for answering a query. We receive a query for the number of 1's in the interval W := [s, pos], where s := max(0, pos - n + 1). Each party Pj initially selects the lowest numbered level lj such that the range of Qj(lj) contains W (step 2). Let l* be the maximum of these levels over all the parties. Thus at each party, the range of Qj(l*) contains W. This implies that each queue contains the positions of all the 1-bits in W in its stream that hash to a value at least l*. We take the union of the positions in the t queues, to form the queue for level l* of the position-wise OR of the streams (step 3). The algorithm returns the number of positions in this queue that fall within the window W, scaled up by a factor of 2^(l*) (step 4).
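The query side can be sketched as follows (again our own illustration with plain integer positions; `waves[j][l]` is party j's level-l queue and `cap` plays the role of c/ε²):

    def referee_estimate(waves, pos, n, cap):
        s = max(0, pos - n + 1)                       # the window W = [s, pos]
        def covers_window(queue):                     # does the queue's range contain W?
            return len(queue) < cap or min(queue) <= s
        l_star = 0
        for wave in waves:                            # step 2: per-party minimum level
            lj = next((l for l, lq in enumerate(wave) if covers_window(lq)),
                      len(wave) - 1)
            l_star = max(l_star, lj)
        union = set()                                 # step 3: union of the level-l* queues
        for wave in waves:
            union.update(wave[l_star])
        return (2 ** l_star) * sum(1 for p in union if s <= p <= pos)   # step 4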
Lemma 2. The algorithm in Figure 6 returns an estimate for the Union Counting problem for any sliding window of size n ≤ N that is within a relative error of ε with probability greater than 2/3.
Proof. For each level l, define Sj(l) to be the set of positions of 1-bits in party Pj's stream that hash to a value at least l; the queue Qj(l) maintains the positions of the c/ε² most recent 1's in Sj(l). Consider the size of the overlap of Sj(l) and W. This is large for small l (because the probability of selecting a 1-bit in W for Sj(l) is 1/2^l), and decreases as l increases. If the overlap at level l is greater than c/ε², then W contains a position not among the c/ε² most recent positions of Sj(l). On the other hand, if the overlap is less than or equal to c/ε², then the range of Qj(l) contains W. Thus, lj (the level selected by Pj) is the minimum level such that |Sj(l) ∩ W| ≤ c/ε².
In other words, we are progressively halving the sampling probability until we are at a level where the number of points in the overlap is less than or equal to c/ε². This very random process has been analyzed in our previous paper [13] (though in a different scenario). Thus, the lemma follows from Lemma 1 in [13].
By taking the median of O(log(1/δ)) independent instances of the algorithm, we get our desired (ε, δ)-approximation scheme:
Theorem 4. The above estimation procedure is an (ε, δ)-approximation scheme for the Union Counting problem for any sliding window of size at most N, using O((1/ε²) log(1/δ) log² N) memory bits per party. The time to process an item is dominated by the time for an expected O(log(1/δ)) finite field operations.
Proof. For each of the O(log(1/δ)) instances, we have O(log N) queues of O(1/ε²) positions, and each position is O(log N) bits. Also, for each instance, we have the hash function parameters, q and r, which are O(log N) bits each. Note that the approximation guarantees hold regardless of the number of parties.
The per-item processing is O(1) expected time per instance because the expected number of levels to which each new position is added (step 3) is bounded by 2, and likewise the expected number of levels that an expiring position was ever in is bounded by 2. Thus scanning the tails of the queues looking for p (step 2) takes constant expected time.
4.2 Improvements in Query Time
The query time for the above estimation procedure is the time for the Referee to receive and process O((t/ε²) log(1/δ) log N) memory words. If all queries were for window size N, each party Pj could easily keep track of the minimum level lj at which the range of Qj(lj) contains the window, with constant processing time overhead. When a query is requested, each party sends lj together with the positions of its queues that fall within the window; after determining l*, the Referee retains only those positions p that both fall within the window and have h(p) ≥ l*. (To avoid recomputing h(p), h(p) could be stored in all queues containing p.) In this way, the Referee computes Qj(l*) ∩ W for each party j without explicitly receiving Qj(l*) from party Pj. It takes the union of these retained positions and returns the estimate x̂ as before. This reduces the query time to O(t/ε²) time per instance, while preserving the other bounds.
5. EXTENSIONS
Number of Distinct Values. With minor modifications, the randomized wave algorithm can be used to estimate the number of distinct values in a sliding window over distributed streams. An item selected for a level's sample is now stored as an ordered pair (p, v), where v is a value that was seen in the stream and p is the position of the most recent occurrence of the value. This is updated every time the value appears again in the stream. A sample at level l stores the c/ε² pairs with the most recent positions that were sampled into that level. Note that, in contrast to the Union Counting scheme, the hash function now hashes the value of the item, rather than its position.
Each party maintains pos, the length of its observed stream. It also maintains a (doubly linked) list of all the pairs in its wave, ordered by position. This list lets the party discard expired pairs.
When an item v arrives, we insert (pos, v) into levels 0, ..., h(v). If v is already present in the wave, we update its
position. To determine the presence of a value in the wave,
we use an additional hash table (hashed by an item's value)
that contains a pointer to the occurrence of the value in
the doubly linked list. Updating a value's position requires
moving its corresponding pair from its current position to
the tail of the list, and this can be done in constant time.
The value's position has to be updated in each of the levels to
which it belongs. A straightforward argument shows that all
this per-item processing can be done in constant expected
time, because each value belongs to an expected constant
number of levels.
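A compressed Python sketch of this per-item update (ours; it replaces the queue-plus-linked-list machinery with one bounded dictionary per level, which keeps the illustration short at the cost of the constant-time guarantees):

    def process_value(wave, level_of, pos, v, cap):
        # wave[l]: dict mapping value -> most recent position at level l;
        # level_of(v): the hashed level h(v); cap: the per-level sample size c/eps**2
        for l in range(level_of(v) + 1):
            wave[l][v] = pos                          # insert, or refresh the position
            if len(wave[l]) > cap:                    # evict the pair with the oldest position
                oldest = min(wave[l], key=wave[l].get)
                del wave[l][oldest]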
To produce an estimate, each party passes its wave to the Referee. The Referee constructs a wave of the union by computing a level-wise union of all the waves that it receives. This resulting wave is used for the estimation. As before, we perform O(log(1/δ)) independent instances of the algorithm, and take the median. The space bound and approximation guarantees follow directly from the arguments in the previous section. Putting it altogether, we have:
Theorem 5. The above estimation procedure is an (ε, δ)-approximation scheme for the number of distinct values in a sliding window of size N over distributed streams. It uses O((1/ε²) log(1/δ) log N log R) bits per party, where values are in [0..R], and the per-item processing time is dominated by the time for an expected O(log(1/δ)) finite field operations.
Handling Predicates. Note that our algorithm for distinct values counting stores a random sample of the distinct values. This sample can be used to answer more complex queries on the set of distinct values (e.g., how many even distinct values are there?), where the predicate ("evenness") is not known until query time. In order to provide an (ε, δ)-approximation scheme for any such ad hoc predicate that has selectivity at least σ (i.e., at least a σ fraction of the distinct values satisfy the predicate), we store a sample of size O(1/(σε²)) at each level, increasing our space bound by a factor of 1/σ. Such problems without sliding windows were studied in [10].
Nth Most Recent 1. We can use the wave synopsis to obtain an (ε, δ)-approximation scheme for the position of the Nth most recent 1 in the stream, as follows. Instead of storing only the 1-bits in the wave, we store both 0's and 1's. Thus, items in level l are 2^l positions apart, not 2^l 1's apart. In addition, we keep track of the 1-rank of the 1-bit closest to each item in the wave. The rest of the algorithm is similar to our Basic Counting scheme. Note that we need O((1/ε) log²(εm)) bits, where m is an upper bound on the window size needed in order to contain the N most recent 1's.
Other Problems. Our improved time bounds for Basic Counting and for Sum over a single stream lead to improved time bounds for all problems which reduce to these problems, as described in [4]. For example, an ε-approximation scheme for the sliding average is readily obtained by running our sum and count algorithms (each targeting a suitably smaller relative error).
6.
--R
The space complexity of approximating the frequency moments.
Sampling from a moving window over streaming data.
Reductions in streaming algorithms
Maintaining stream statistics over sliding windows.
An approximate L 1
Testing and spot-checking of data streams
Probabilistic counting algorithms for data base applications.
An approximate L p
On computing correlated aggregates over continual data streams.
Distinct sampling for highly-accurate answers to distinct values queries and event reports
New sampling-based summary statistics for improving approximate query answers
Synopsis data structures for massive data sets.
Estimating simple functions on the union of data streams.
Clustering data streams.
Computing on data streams.
Stable distributions
Personal communication
On randomized one-round communication complexity
Communication Complexity.
Private vs. common random bits in communication complexity.
Public vs. private coin ips in one round communication games.
A note on counting distinct elements in the streaming model.
--TR
Probabilistic counting algorithms for data base applications
Private vs. common random bits in communication complexity
Public vs. private coin flips in one round communication games (extended abstract)
Communication complexity
New sampling-based summary statistics for improving approximate query answers
The space complexity of approximating the frequency moments
On randomized one-round communication complexity
Synopsis data structures for massive data sets
Testing and spot-checking of data streams (extended abstract)
On computing correlated aggregates over continual data streams
Space-efficient online computation of quantile summaries
Estimating simple functions on the union of data streams
Data-streams and histograms
Reductions in streaming algorithms, with an application to counting triangles in graphs
Sampling from a moving window over streaming data
Maintaining stream statistics over sliding windows
Surfing Wavelets on Streams
Distinct Sampling for Highly-Accurate Answers to Distinct Values Queries and Event Reports
An Approximate Lp-Difference Algorithm for Massive Data Streams
An Approximate L1-Difference Algorithm for Massive Data Streams
Clustering data streams
Stable distributions, pseudorandom generators, embeddings and data stream computation
--CTR
Linfeng Zhang , Yong Guan, Variance estimation over sliding windows, Proceedings of the twenty-sixth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 11-13, 2007, Beijing, China
Edith Cohen , Martin Strauss, Maintaining time-decaying stream aggregates, Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.223-233, June 09-11, 2003, San Diego, California
Michael H. Albert , Alexander Golynski , Angle M. Hamel , Alejandro Lpez-Ortiz , S. Srinivasa Rao , Mohammad Ali Safari, Longest increasing subsequences in sliding windows, Theoretical Computer Science, v.321 n.2-3, p.405-414, August 2004
Abhinandan Das , Sumit Ganguly , Minos Garofalakis , Rajeev Rastogi, Distributed set-expression cardinality estimation, Proceedings of the Thirtieth international conference on Very large data bases, p.312-323, August 31-September 03, 2004, Toronto, Canada
Edith Cohen , Haim Kaplan, Efficient estimation algorithms for neighborhood variance and other moments, Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms, January 11-14, 2004, New Orleans, Louisiana
Brain Babcock , Mayur Datar , Rajeev Motwani , Liadan O'Callaghan, Maintaining variance and k-medians over data stream windows, Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.234-243, June 09-11, 2003, San Diego, California
Edith Cohen , Martin J. Strauss, Maintaining time-decaying stream aggregates, Journal of Algorithms, v.59 n.1, p.19-36, April 2006
Suman Nath , Phillip B. Gibbons , Srinivasan Seshan , Zachary R. Anderson, Synopsis diffusion for robust aggregation in sensor networks, Proceedings of the 2nd international conference on Embedded networked sensor systems, November 03-05, 2004, Baltimore, MD, USA
Sumit Ganguly, Counting distinct items over update streams, Theoretical Computer Science, v.378 n.3, p.211-222, June, 2007
L. K. Lee , H. F. Ting, Maintaining significant stream statistics over sliding windows, Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, p.724-732, January 22-26, 2006, Miami, Florida
Arvind Arasu , Gurmeet Singh Manku, Approximate counts and quantiles over sliding windows, Proceedings of the twenty-third ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 14-16, 2004, Paris, France
Izchak Sharfman , Assaf Schuster , Daniel Keren, A geometric approach to monitoring threshold functions over distributed data streams, Proceedings of the 2006 ACM SIGMOD international conference on Management of data, June 27-29, 2006, Chicago, IL, USA
Edith Cohen , Haim Kaplan, Spatially-decaying aggregation over a network: model and algorithms, Proceedings of the 2004 ACM SIGMOD international conference on Management of data, June 13-18, 2004, Paris, France
Arvind Arasu , Jennifer Widom, Resource sharing in continuous sliding-window aggregates, Proceedings of the Thirtieth international conference on Very large data bases, p.336-347, August 31-September 03, 2004, Toronto, Canada
Brian Babcock , Chris Olston, Distributed top-k monitoring, Proceedings of the ACM SIGMOD international conference on Management of data, June 09-12, 2003, San Diego, California
Edith Cohen , Haim Kaplan, Spatially-decaying aggregation over a network, Journal of Computer and System Sciences, v.73 n.3, p.265-288, May, 2007
Yishan Jiao, Maintaining stream statistics over multiscale sliding windows, ACM Transactions on Database Systems (TODS), v.31 n.4, p.1305-1334, December 2006
Lukasz Golab , M. Tamer zsu, Issues in data stream management, ACM SIGMOD Record, v.32 n.2, p.5-14, June | waves;sliding windows;distributed;data streams |
564915 | Algorithms for fault-tolerant routing in circuit switched networks. | In this paper we consider the k edge-disjoint paths problem (k-EDP), a generalization of the well-known edge-disjoint paths problem. Given a graph G=(V,E) and a set of terminal pairs (or requests) T, the problem is to find a maximum subset of the pairs in T for which it is possible to select paths such that each pair is connected by k edge-disjoint paths and the paths for different pairs are mutually disjoint. To the best of our knowledge, no nontrivial result is known for this problem for k>1. To measure the performance of our algorithms we will use the recently introduced flow number F of a graph. This parameter is known to satisfy F=O(\Delta \alpha^-1 \log n), where \Delta is the maximum degree and \alpha is the edge expansion of G. We show that a simple, greedy online algorithm achieves a competitive ratio of O(k^3 \cdot F) which naturally extends the best known bound of O(F) for k=1 to higher $k$. To get this bound, we introduce a new method of converting a system of k disjoint paths into a system of k length-bounded disjoint paths. Also, an almost matching deterministic online lower bound \Omega(k \cdot F) is given.In addition, we study the k disjoint flows problem (k-DFP), which is a generalization of the well-known unsplittable flow problem (UFP). The k-DFP is similar to the k-EDP with the difference that we now consider a graph with edge capacities and the requests can have arbitrary demands d_i. The aim is to find a subset of requests of maximum total demand for which it is possible to select flow paths such that all the capacity constraints are maintained and each selected request with demand d_i is connected by k disjoint paths, each of flow value d_i/k.The k-EDP and k-DFP problems have important applications in fault-tolerant (virtual) circuit switching which plays a key role in optical networks. | Introduction
This paper was motivated by a talk given by Rakesh Sinha from Ciena Inc. at the DIMACS Workshop on Resource Management and Scheduling in Next Generation Networks, March 26-27, 2001. The speaker pointed out in his talk that standard problems such as the edge-disjoint paths problem and the unsplittable flow problem are insufficient for practical purposes: they do not allow a rapid adaptation to edge faults or heavy load conditions. Instead of having just one path for each request, it would be much more desirable to determine a collection of alternative independent paths for each accepted request that can flexibly be used to ensure rapid adaptability. The paths, however, should be chosen so that not too much bandwidth is wasted under normal conditions. Keeping this in mind, we introduce two new (to the best of our knowledge) optimization problems: the k edge-disjoint paths problem (k-EDP) and the k disjoint flows problem (k-DFP).
In the k-EDP we are given an undirected graph G = (V, E) and a set of terminal pairs (or requests) T. The problem is to find a maximum subset of the pairs in T such that each chosen pair can be connected by k disjoint paths and, moreover, the paths for different pairs are mutually disjoint.
Similarly, in the k-DFP we are given an undirected network G = (V, E) with edge capacities and a set of terminal pairs T with demands d_i, 1 ≤ i ≤ |T|. The problem is to find a subset of the pairs of maximum total demand such that each chosen pair can be connected by k disjoint paths, each path carrying d_i/k units of flow, and no capacity constraint is violated.
In order to demonstrate that the k-DFP can be used to achieve fault tolerance together with a high utilization of the network resources and also a rapid adaptability, consider a network G in which new edge faults may occur continuously but the total number of faulty edges at the same time is at most f. In this case, given a request with demand d, the strategy is to reserve k disjoint flow paths for it, for some k > f, with total demand (1 + f/(k - f)) · d, i.e., d/(k - f) units of flow per path. As long as at most f edge faults appear at the same time, it will still be possible to ship a demand of d along the remaining paths. Furthermore, under fault-free conditions, only a fraction f/k of the reserved bandwidth is wasted, which can be made sufficiently small by setting k sufficiently large (which will, of course, be limited by the properties of the network).
1.1 Previous results
Since we are not aware of previous results for the k-EDP and the k-DFP for
k > 1, we will just survey results for the heavily studied case of
the edge-disjoint paths problem (EDP) and the more general unsplittable
ow
problem (UFP).
Several results are known about the approximation ratio and competitive
ratio achievable for the UFP under the assumption that the maximum demand
of a commodity, dmax , does not exceed the minimum edge capacity, c min , called
here the no-bottleneck assumption [1, 12, 4, 7, 10, 13, 14]. If only the number
of edges, m, is known, Baveja and Srinivasan [4] present a polynomial time
algorithm with approximation ratio O(
m). On the lower bound side, it was
shown by Guruswami et al. [10] that on directed networks the UFP is NP-hard
to approximate within a factor of m 1=2 for any > 0. The best result
for the EDP and the UFP known so far was given by Kolman and Scheideler
[14]. Using a new parameter called the
ow number F of a network, they show
that a simple online algorithm has a competitive ratio of O(F ) and prove that
n), where for the EDP is the maximal degree of the network,
is the edge expansion, and n is the number of nodes. For the UFP, has to
be dened as the maximal node capacity of the network for the bound above to
hold, where the capacity of a node is dened as the sum of the capacities of its
edges. Combining the approach of Kolman and Scheideler [14] with the AAP
algorithm [1], Chakrabarti et al. [7] recently proved an approximation ratio of
O( 2 1 log 2 n) for the more general UFP with prots where is just the
maximal degree of the network.
We also consider two related problems, the integral splittable
ow problem
(ISF) [10] and the k-splittable
ow problem (k-SFP). In both cases, the input
and the objective (i.e., to maximize the sum of accepted demands) are the
same as in the UFP. The dierence is that in the ISF all demands are integral
and a
ow satisfying a demand can be split into several paths, each carrying
an integral amount of
ow. In the k-SFP 1 a demand may be split into up
to k
ow paths (of not necessarily integral values). Under the no-bottleneck
assumption Guruswami et al. [10] give an O(
m) approximation for
the ISF. Recent results of Kolman and Scheideler [14] allow to achieve an O(F )
randomized competitive ratio and an O(F ) deterministic approximation ratio
for both of these problems on uniform capacity networks. Although the ISF and
the k-SFP on one side and the k-DFP on the other seem very similar at rst
glance, there is a serious dierence between the two. Whereas the ISF and the
k-SFP are relaxations of the UFP (they allow the use of more than one path
for a single request and the paths are not required to be disjoint), the k-DFP
is actually a more complex version of the UFP since it requires several disjoint
paths for a single request.
1.2 New results
The main results or this paper are
a deterministic online algorithm for the k-EDP with competitive ratio
a deterministic oine algorithm for the k-DFP on unit-capacity networks
with an approximation ratio of O(k 3 F log(kF )),
a lower
bound
for the competitive ratio of any deterministic online
algorithm for the k-EDP (and thus, obviously, for the k-DFP).
1 The k-splittable
ow problem was recently independently introduced by Baier et al. [3].
Thus, for constant k, we have matching upper and lower bounds for the k-EDP.
Furthermore, we demonstrate that the disjointness condition of the k paths
for every single request seems to be the crucial condition that makes the problems
above harder than other problems such as the integral splittable
ow problem
[10] or the k-splittable
ow problem.
Using known techniques, we also show how the online algorithm for the k-
EDP can be transformed into an oine algorithm with approximation ratio
O(k 3 F ) for the k-EDP with prots, and we describe how the oine algorithm
for the k-DFP can be converted into a randomized online algorithm for the
k-DFP with an expected competitive ratio of O(k 3 F log(kF )).
Our algorithms for the k-EDP and k-DFP are based on a simple concept,
which is a natural extension of the bounded greedy algorithm (BGA) that has
already been studied in several papers [12, 13, 14]: for every request for which
there are still k disjoint
ow paths of total length at most L available without
violating the capacity constraints, select any such system of k paths for it. The
core of the paper is in the analysis of this simple algorithm. The problem is
to show that this strategy works even if the optimal oine algorithm connects
many requests via k disjoint paths of total length more than L. In order to
solve this problem we use a new technique, based on Menger's theorem and the
Lovasz Local Lemma, that converts large systems of k disjoint paths into small
systems of k disjoint paths. Previously, shortening strategies were only known
1.3 Basic notation and techniques
Many of the previous techniques for the EDP and related problems do not allow
us to prove strong upper bounds on approximation or competitive ratios due
to the use of inappropriate parameters. If m is the only parameter used, an
upper bound of O(
m) is essentially the best possible for the case of directed
networks [10]. Much better ratios can be shown if the expansion or the routing
number [16] of a network are used. These measures give very good bounds for
low-degree networks with uniform edge capacities, but are usually very poor
when applied to networks of high degree or highly nonuniform degree or edge
capacities. To get more precise bounds for the approximation and competitive
ratios of algorithms, Kolman and Scheideler [14] introduced a new network
measure, the
ow number F . Not only does the
ow number lead to more precise
results, it also has the major advantage that, in contrast to the expansion or
the routing number, it can be computed exactly in polynomial time. Hence we
will use the
ow number in this paper as well.
Before we can introduce the
ow number, we need some notation. In a
concurrent multicommodity
ow problem there are k commodities, each with
two terminal nodes s i and t i and a demand d i . A feasible solution is a set
of
ow paths for the commodities that obey capacity constraints but need not
meet the specied demands. An important dierence between this problem and
the unsplittable
ow problem is that the commodity between s i and t i can be
routed along multiple paths. The (relative)
ow value of a feasible solution is
the maximum f such that at least f d i units of commodity i are simultaneously
routed for each i. The max-
ow for a concurrent multicommodity
ow problem
is dened as the maximum
ow value over all feasible solutions. For a path p in a
solution, the
ow value of p is the amount of
ow routed along it. A special class
of concurrent multicommodity
ow problems is the product multicommodity
ow
problem (PMFP). In a PMFP, a nonnegative weight (u) is associated with each
node u 2 V . There is a commodity for every pair of nodes and the demand for
the pair (u; v) is equal to (u) (v).
Suppose we have a network E) with arbitrary non-negative edge
capacities. For every node v, let the capacity of v be dened as
w:fv;wg2E c(v; w) and the capacity of G be dened as
c(v). Given
a concurrent multicommodity
ow problem with feasible solution S, let the dilation
D(S) of S be dened as the length of the longest
ow path in S and the
congestion C(S) of S be dened as the inverse of its
ow value (i.e., the congestion
tells us how many times the edge capacities would have to be increased in
order to fully satisfy all the original demands, along the paths of S). Let I 0 be
the PMFP in which
for every node v, that is, each pair of nodes
(v; w) has a commodity with demand c(v) c(w)=. The
ow number F (G) of a
network G is the minimum of maxfC(S); D(S)g over all feasible solutions S of
I 0 . When there is no risk of confusion, we will simply write F instead of F (G).
Note that the
ow number of a network is invariant to scaling of capacities.
The smaller the
ow number, the better are the communication properties of
the network. For example, F
n), F
(log n), F (butter
n). In the analysis
of the presented algorithms, a useful tool will be the Shortening lemma [14].
Lemma 1.1 (Shortening Lemma) For any network with
ow number F it
holds: for any 2 (0; 1] and any feasible solution S to an instance of the concurrent
multicommodity
ow problem with a
ow value of f , there exists a feasible
solution with
ow value f=(1+) that uses paths of length at most 2 F (1+1=).
Moreover, the
ow through any edge e not used by S is at most c(e)=(1
Another useful class of concurrent multicommodity
ow problems is the
balanced multicommodity
ow problem (or short BMFP). A BMFP is a multi-
commodity
ow problem in which the sum of the demands of the commodities
originating and the commodities terminating in a node v is at most c(v) for
every We will make use of the following property of the problem [14]:
Lemma 1.2 For any network G with
ow number F and any instance I of a
for G, there is a feasible solution for I with congestion and dilation at
most 2F .
Apart from the
ow number we will also need Cherno bounds [11], the
(symmetric form of the) Lovasz Local Lemma [9] and Menger's theorem [6,
p. 75].
Lemma 1.3 (Cherno Bound) Consider any set of n independent binary
random variables
and be chosen so that
E[X ]. Then it holds for all - 0 that
Lemma 1.4 (Lovasz Local Lemma) Let A An be \bad" events in an
arbitrary probability space. Suppose that each event is mutually independent of
all other events but at most b, and that Pr[A i
with probability greater than 0, no bad event occurs.
Lemma 1.5 (Menger's theorem) Let s and t be distinct vertices of G. Then
the minimal number of edges separating s from t is equal to the maximal number
of edge-disjoint s-t paths.
1.4 Organization of the paper
In Section 2 we present our upper and lower bounds for the k-EDP and some
related problems, and in Section 3 we present our upper bounds for the k-DFP.
The paper ends with a conclusion and open problems.
Algorithms for the k-EDP
Consider the following extension of the bounded greedy algorithm: Let L be
a suitably chosen parameter. Given a request, if it is possible to nd k edge-disjoint
paths, between the terminal nodes of the request that are
mutually disjoint with the previously selected paths and that fulll
L, where jpj is the length (i.e., the number of edges) of a path p, then accept
the request and select any such collection of paths for it. Otherwise, reject the
request. Let us call this algorithm k-BGA.
Note that the problem of nding k edge-disjoint paths of total
length at most L can be reduced to the classical min-cost (integral)
ow problem,
which can be solved by standard methods in polynomial time [8, Chapter 4]. It
is worth mentioning that if there were a bound of L=k on the length of every
path, the problem would not be tractable any more (cf. [5]).
2.1 The upper bound
Theorem 2.1 Given a network G of
ow number F , the competitive ratio of
the k-BGA with parameter
Proof. In the following, we call the k edge-disjoint paths that were selected
for a request a k-system. A k-system is small if it has at most L edges.
Let B be the solution obtained by the k-BGA and O be the optimal solution.
For notational simplicity we allow a certain ambiguity. Sometimes B and O
refer to the subsets of T of the satised requests, and sometimes to the actual
k-systems that realize the satised requests. We say that a k-system q 2 B is
a witness for a k-system p if p and q share an edge. Obviously, a request with
a small k-system in the optimal solution that was rejected by the k-BGA must
have a witness in B.
Let O 0 O denote the set of all k-systems in O that are larger than L
and that correspond to requests not accepted by the k-BGA and that do not
have a witness in B. Then each k-system in O O 0 either has a witness or
was accepted by the k-BGA. Since the k-systems in O O 0 are edge-disjoint,
each request accepted by the k-BGA can be a witness to at most L requests in
O O 0 . Hence, jO O
It remains to prove an upper bound on jO 0 j. To achieve this, we transform
the k-systems in O 0 into a set P of possibly overlapping but small k-systems.
these small k-systems would have been candidates for the k-BGA but were
not picked, each of them has at least one witness in B. Then we show that the
small k-systems in P do not overlap much and thus many k-systems from B are
needed in order to provide a witness for every k-system in P .
Note that the set O 0 of k-systems can be viewed as a feasible solution of
relative
ow value 1 to the set of requests O 0 of the concurrent multicommodity
ow problem where each request has demand k. The Shortening lemma with
parameter immediately implies the following fact.
Fact 2.2 The k-systems in O 0 can be transformed into a set R of
ow systems
transporting the same amount of
ow such that every
ow path has a length of
at most 5k F . Furthermore, the congestion at every edge used by a k-system
in O 0 is at most 1 1=(2k), and the congestion at every other edge is at most
1=(2k).
This does not immediately provide us with short k-systems for the requests
in O 0 . However, it is possible to extract short k-systems out of the
ow system
R.
Lemma 2.3 For every request in O 0 , a set of small k-systems can be extracted
out of its
ow system in R with a total
ow value of at least 1=4.
Proof. Let xed request from O 0 and let E i be the set of all edges
that are traversed by the
ow system for Consider any set of k 1
edges in E i . Since the edge congestion caused by R is at most 1 + 1=(2k), the
total amount of
ow in the
ow system for that traverses the k 1
edges is at most (k 1)(1 1=2. Thus, the minimal s
in the graph consists of at least k edges. Hence, Menger's theorem [6]
implies that there are k edge-disjoint paths between s i and t i in E i . We take
any such k paths and denote them as the k-system 1 . We associate a weight
(i.e., total
ow) of k 1 with 1 , where 1 is the minimum
ow from s i to t i
through an edge in E i belonging to the k-system 1 .
Assume now that we have already found ' k-systems
' 1. If
stop the process of dening j . Otherwise, the
must still be at least k, because the total
ow
along any k 1 edges in E i is still less than the total remaining
ow from s i to
Thus, we can apply Menger's theorem again. This allows us to nd another
k-system '+1 between s i and t i and in the same way as above we associate with
it a weight '+1 . Let ^
' be the number of k-systems at the end of the process.
So far there is no guarantee that any of the k-systems dened above will be
small, neither that they will transport enough
ow between the terminal pair
s i and t i . However, after a simple procedure they will satisfy our needs.
According to Fact 2.2, all
ow paths in R have a length of at most 5kF .
Hence, the total amount of edge capacity consumed by a
ow system in R representing
a request in O 0 is at most 5k 2 F . If there were k-systems in
of total weight at least 1=4 that use more than 20k 3 F edges each, then they
would not t into the available edge capacity, because 20k 3 F
Thus, there exists a subset of the k-systems total weight at least
1=4 such that each of them is small, that is, each of them uses at most 20k 3 F
edges. ut
denote the set of small k-systems for request (s
Lemma 2.3, and let S be the set of all S i . A random experiment will nally
help us to bound jO 0 j in terms of jBj. Independently for each (s
choose exactly one of its k-systems in S, where a k-system j is picked with a
probability proportional to its
ow value. After the selection, each of the chosen
k-systems is used to carry k units of
ow, one unit along each of its paths. Let
P denote the chosen k-systems with the k units of
ow.
Since each k-system in P is small, it must have been a candidate for the
k-BGA. But it was rejected by the k-BGA and hence it must have a witness in
B. By the denition of O 0 this witnessing must be at an edge that is not used
by any k-system in O 0 . Hence, only edges outside of the edges used by O 0 can
be potential witness edges. From Fact 2.2 we know that each of these edges
can have a congestion of at most 1=(2k). Hence, after selecting a k-system for
each request at random and shipping a demand of 1 along each of its paths,
the expected congestion at every potential witness edge is at most 2. Thus,
in expectation, every k-system from B can serve as a witness to at most 2 L
k-systems from P . We conclude that there exists a random choice for which the
k-systems from B serve as witnesses to at most 2 L jBj k-systems from P (cf.
[13]). Since jP the proof is completed. ut ut
The above upper bound on the competitive ratio for the k-BGA with parameter
is the best possible, since a k-system of size (k 3 F ) may prevent
other k-systems from being selected. An open question is whether it is
possible to achieve a better competitive ratio with a stronger restriction on the
size of the k-systems that are used by the k-BGA.
2.2 General online lower bound
Next we show there is a lower bound that holds for the competitive ratio of any
deterministic online algorithm for the k-EDP problem which is not far away
from the performance of the k-BGA.
Theorem 2.4 For any n, k, and F log k n with n k 2 F there is a graph
G of size (n) with maximum degree O(k) and
ow number such that the
competitive ratio of any deterministic online algorithm on G is
Proof. A basic building block of our construction is the following simple graph.
Let D k (diamond) denote the graph consisting of two bipartite graphs K 1;k and
K k;1 glued naturally together at the larger sides. The two k-degree nodes in D k
are its endpoints. Let C (chaplet) denote the graph consisting of F diamond
graphs attached one to the other at the endpoints, like in an open chaplet.
The core of the graph G consists of disjoint copies
of the chaplet graph C attached to the inputs of a k-ary multibutter
y
ure 1). In addition, a node s is connected to the rst k chaplet graphs and a
node t is connected to the rst k output nodes of the multibutter
y. Let s i;j
denote the rst endpoint of a diamond j in a chaplet i, and let t i;j (= s i;j+1 )
denote the other endpoint. We will use the fact that a k-ary multibutter
y
with inputs and outputs (which is a network of degree O(k)) can route any
r-relation from the inputs to the outputs with edge congestion and dilation at
most O(max[r=k; log k n 0 ]) [16].
multi-
butterfly
log k n
F
inputs outputs
Figure
1: The graph for the lower bound.
First, we show that our graph G has a
ow number of the
diameter of G is
F ) it is su-cient to prove that a PMFP with
for the given graph can be solved with congestion and dilation O(F ). Consider
each node v of degree - v to consist of - v copies of nodes and let V 0 be the set
of all of these copies. Then the PMFP reduces to the problem of sending a
packet of size 1=N for any pair of nodes in V 0 , where Such a routing
problem can be split into N permutations i with i
Each such permutation represents a routing
problem in the original network where each node is the starting point and
endpoint of a number of packets that is equal to its degree. We want to bound
the congestion and dilation for routing such a problem.
In order to route , we rst move all packets to the inputs of the k-ary
multibutter
y in such a way that every input node of the multibutter
y will
have O(kF ) packets. This can clearly be done with edge congestion O(F ) and
dilation O(F ). Next, we use the multibutter
y to send the packets to the rows
of their destinations. Since every input has O(k F ) packets, this can also be
done with congestion and dilation O(F ). Finally, all packets are sent to their
correct destinations. This also causes a congestion and dilation of at most O(F ).
Hence, routing only requires a total congestion and dilation of O(F ).
Combining the fact that all packets are of size 1=N with the fact that we
have N permutations i , it follows that the congestion and dilation of routing
the PMFP in the given graph is O(F ). Hence, its
ow number is
Now consider the following two sequences of requests:
(1) (s; t), and
Obviously, every deterministic online algorithm has to accept (s; t) to ensure
a nite competitive ratio for the sequence (1). However, in this case none of
the other requests in (2) can be satised. But the optimal solution for (2) is
to reject (s; t) and to accept all other requests. Hence, the competitive ratio is
2.3 Managing requests with prots
In the k edge-disjoint paths with prots problem (k-EDPP) we are given an
undirected graph E) and a set of requests T . Each request r
has a positive prot b(r i ). The problem is to nd a subset S of the pairs in T of
maximum prot for which it is possible to select disjoint paths such that each
pair is connected by k disjoint paths.
It turns out that a simple oine variant of the k-BGA gives the same approximation
ratio for the k-EDPP as we have for the k-EDP. The algorithm
involves sorting the requests in decreasing order of their prots and running the
k-BGA on this sorted sequence. We call this algorithm the sorted k-BGA.
Theorem 2.5 Given a network G of
ow number F , the approximation ratio
of the sorted k-BGA with parameter for the k-EDPP.
Proof. The proof is almost identical to the proof of Theorem 2.1. The only
additional observation is that, since the sorted k-BGA proceeds through the
requests from the most protable, every small k-system in O O 0 and in the
modied set P has a witness in B with larger or equal prot. ut
2.4 The multi-EDP
Another variant of the k-EDP our techniques can be applied to is the multiple
edge-disjoint paths problem (multi-EDP) which is dened as follows: given a
graph G and a set of terminal pairs with integral demands d i , 1 d i , nd
a maximum subset of the pairs for which it is possible to select disjoint paths
so that every selected pair i has d i disjoint paths. Let dmax denote the maximal
demand over all requests.
A variant of the k-BGA, the multi-BGA, can be used here as well: Given
a request with demand d i , reject it if it is not possible to nd d i edge-disjoint
paths between the terminal pairs of total length at most 20d i d 2
select any such d i paths for it.
Theorem 2.6 Given a network G of
ow number F , the competitive ratio of
the multi-BGA is O(d 3
Proof. The proof goes along the same lines as the proof of Theorem 2.1: rst,
the Shortening lemma with parameter applied and, afterwards,
the extraction procedure is used. The dierence is that now we extract only
-systems for a request with demand d i , not dmax -systems. ut
3 Algorithms for the k-DFP
Throughout this section we will assume that the maximal demand is at most k
times larger than the minimal edge capacity, which is analogous to assumptions
made in almost all papers about the UFP. We call this the weak bottleneck
assumption. Moreover, we assume that all edge capacities are the same. Since
F is invariant to scaling, we simply set all edge capacities to one. The minimal
demand of a request will be denoted by d min . We rst show how to solve the
oine k-DFP, and then mention how to extend this solution to the online case.
To solve the oine k-DFP, we rst sort the requests in decreasing order of
their demands. On this sorted sequence of requests we use an algorithm that
is very similar to the k-BGA: Let L be a suitably chosen parameter. Given a
request with a demand of d, accept it if it is possible to nd k edge-disjoint
ow value d=k between the terminal nodes of the request
that t into the network without violating the capacity constraints and whose
total length
is at most L. Otherwise, reject it. This extension of the
k-BGA will be called k-
ow BGA.
The next theorem demonstrates that the performance of the k-
ow BGA for
the k-DFP is comparable to the performance of the k-BGA for the k-EDP. It is
slightly worse due to a technical reason: it is much harder to use our technique
for extracting short k-systems for the k-DFP than for the k-EDP.
Theorem 3.1 Given a unit-capacity network G with
ow number F , the approximation
ratio of the k-
ow BGA for the k-DFP with parameter
O(k 3 F log(kF )), when run on requests sorted in non-increasing order, is
O(k 3 F log(kF )).
Proof. As usual, let B denote the set of k-systems for the requests accepted by
the BGA and O be the set of k-systems in the optimal solution. Each k-system
consists of k disjoint
ow paths which we also call streams. For notational
simplicity we will sometimes think about B and O also as a set of streams
(instead of k-systems).
For each stream q 2 B or q 2 O, let f(q) denote the
ow along that stream.
If q belongs to the request (s =k. For a set
Q of streams let
f(q). Also, for an edge e 2 E and a stream q,
let F (e; q) denote the sum of
ow values of all streams in B passing through e
whose
ow is at least as large as the
ow of q, i.e., F (e;
a witness for a stream q if f(p) f(q)
and p and q intersect in an edge e with F 1. For each edge e let
W(e; B) denote the set of streams in B that serve as witnesses on e. Similarly,
for each edge e let V(e; Q) denote the set of streams in Q that have witnesses
on e. We also say that a k-system has a witness on an edge e if any of its k
streams has a witness on e. We start with a simple observation.
3.2 For any stream q and edge e, if q has a witness on e then
Proof. Let p be a witness of q on e. Assume, by contradiction, that F (e; q) <
1=2. It easily follows that f(p) < 1=2. Since f(q) f(p) and F
by the denition of a witness, we have a contradiction. ut
Let O 0 O be the set of k-systems that are larger than L and that correspond
to requests not accepted by the k-
ow BGA and that do not have a
witness in B. The next two bounds on jjO n O 0 jj and jjO 0 jj complete the proof.
Lemma 3.3 jjO n O 0 jj (1
Proof. We partition O n O 0 into two sets. Let O 1 O n O 0 consist of all
the k-systems corresponding to requests accepted by the BGA and let O
Obviously, jjO 1 jj jjBjj. Note that each k-system in O 2 must
have a witness in B. Let E 0 E denote the set of all edges on which some
k-system from O 2 has a witness. We then have
For the rst inequality note that a k-system of demand d i in O 2 may only have
a witness at a single edge, and this edge can only be traversed by a
ow of
belonging to that k-system. The second inequality holds due to the unit
capacities and the last one follows from Claim 3.2.
Since all k-systems in B are of length at most L, we have
streams p2B
systems s2B
This completes the proof of Lemma 3.3. ut
In the next lemma we bound kO 0 k by rst transforming the large k-systems
in O 0 into a set S of small k-systems and then bounding kSk in terms of kBk.
Lemma 3.4 kO
Proof. In order to prove the lemma, we will transform the k-systems in O 0
into a set of k-systems S in which each k-system has a length at most L and
therefore must have a witness in B. To achieve this, we perform a sequence of
transformations:
1. First, we scale the demands and edge capacities so that each edge in G
has a capacity of requests have demands that
are integral multiples of k. More precisely, the demand of each request
of original demand d is set to d
1=3)d], this slightly increases the demands and therefore also the
ows
along the streams so that the total
ow along an edge is now at most
(1+1=3)C. Note that slightly increasing the demands only increases kO 0 k
and therefore only makes the bound on the relationship between kO 0 k and
kBk more pessimistic.
2. Next, we replace each request (s
of demand k each, shipped along the same k-system as for (s
every k-system of such a request, we only keep the rst 8c kF and the
last 8c kF nodes along each of its k streams, for some
The resulting set of (possibly disconnected) streams of a k-system will be
called a k-core. As shown in Claim 3.5, we can distribute the elementary
requests into C=c sets S so that the congestion caused by the
k-cores within each set is at most 2c at each edge.
3. Afterwards, we consider each S i separately. We will reconnect disconnected
streams in each k-core in S i with
ow systems derived from the
ow number. The reconnected k-cores will not yet consists of k disjoint
streams. We will show in Claim 3.6 how to extract k-systems of length at
most L from each reconnected k-core.
4. Once we have found the short k-systems, we will be able to compare kO 0 k
with kBk with the help of witnesses.
Next we present two vital claims. The proof of the rst claim requires the use
of the Lovasz Local Lemma, and the proof of the second claim is similar to the
proof of Theorem 2.1.
3.5 The elementary requests can be distributed into C=c sets S
for some so that for each set S i the edge congestion caused by
its k-cores is at most 2c.
Proof. We rst prove the claim for afterwards demonstrate
how to get to
Consider the random experiment of assigning to each elementary request a
number uniformly and independently at random, and let S i be
the set of all requests that choose number i. For every edge e let the random
variable X e;i denote the number of streams assigned to S i that traverse e. Since
the maximal edge congestion is at most 4C=3, we have E[X e;i ] 4c=3 for every
edge e. Every edge e can be used by at most one stream of any k-core. Hence,
a k-core can contribute a value of at most 1 to X e;i and the contributions of
dierent k-cores are independent. We can use the Cherno bound to derive
For every edge e and every i e;i be the event that X e;i > 2c.
2, the above probability estimate bounds the probability that
the event A v;i appears. Our aim is to show, with the help of the LLL, that it
is possible in the random experiment to assign numbers to the requests so that
none of these events appears, which would yield our claim. To apply the LLL
we have to bound the dependencies among the events A e;i .
Each edge e can be used by at most 2C k-cores and these are the only k-cores
that aect the values X e;i , C=cg. Realizing that each of the k-cores
contains at most 2k(8c kF ) edges and that the k-cores choose their sets S i
independently at random, we conclude that the event A e;i depends on at most
events A f;j .
To be able to use the LLL, we now olny have to choose the value c so that
e e 4c=3 4
(32ck
This can certainly be achieved by setting
The above procedure is su-cient if
42 a more
involved technique will be used. The k-cores will be distributed into the sets S i
not in a single step but in a sequence of renements (a similar approach was
used, e.g., by Leighton et al. [15] and Scheideler [16]). In the rst renement,
our aim is to show that for c the k-cores can be distributed into
the sets S so that the edge congestion in each S i is at most (1
O(1= =3. For this we use the same random experiment as for c above.
It follows that E[X e;i and that
Hence, to be able to use the LLL, we have to choose the value c 1 so that
e e 4 3
This can certainly be achieved by setting c large enough, which
completes the rst renement step.
In the second renement step, each S i is rened separately. Consider some
xed S i . Our aim is to show that for c the k-cores in S i can be
distributed into the sets S so that the edge congestion in each
S i;j is at most (1
=3. The proof for this follows exactly
the same lines as for c 1 . Thus, overall C=c 2 sets S i;j are produced in the second
step, with the corresponding congestion bound.
In general, in the (' 1)st renement step, each set S established in rene-
ment ' is rened separately, using c
the rst time. Note that in this case, c
this point we use the method presented at the beginning of the proof for the
parameter c to obtain C=c 0 sets
for some c
congestion of at most@ '
Y
where l is the total number of renement steps. Using the facts that 1 +x e x
for all x 0 and that e x holds for the product
that
Y
1= 3
for a constant 0 < 1=2 that can be made arbitrarily small by making sure
that c ' is above a certain constant value depending on . Hence, it is possible
to select the values c so that the congestion in each S i at the end is
at most 2c 0 . ut
3.6 For every set S i , every elementary request in S i can be given k-
systems of total
ow value at least 1=4 such that each of them consists of at
most L edges. Furthermore, the congestion of every edge used by an original
k-system in S i is at most 2c + 1=(2k), and the congestion of every other edge is
at most 1=(2k).
Proof. For an elementary request r let p r
'r be all the disconnected
streams in its k-core, 1 ' r k. Let the rst 8c kF nodes in p r
i be denoted by
a r
i;8ckF and the last8c kF nodes in p r
i be denoted by b r
.
Consider the set of pairs
'r
f(a r
Due to the congestion bound in Claim 3.5, a node v of degree - can be a starting
point or endpoint of at most 2c- pairs in L. From Lemma 1.2 we know that for
any network G with
ow number F and any instance I of the BMFP on G there
is a feasible solution for I with congestion and dilation at most 2F . Hence, it
is possible to connect all of the pairs in L by
ow systems of length at most
2F and
ow value f(p r
so that the edge congestion is at most 2c 2F . Let
the
ow system between a r
i;j and b r
i;j be denoted by f r
i;j . For each elementary
request and each 1 i ' r and each 1 j 8c kF , we dene a
ow system g r
it moves from s to a r
i;j along p r
, then from a r
i;j to b r
along f r
i;j , and nally from b r
i;j to t along p r
, and we assign to it a
ow value of
This ensures that a total
ow of f(p r
being shipped for
each p r
. Furthermore, this allows us to reduce the
ow along f r
i;j by a factor of
1=(8c Hence, the edge congestion caused by the f r
i;j for all reduces
to at most 4c F=(8c Therefore, the additional congestion at
any edge is at most 1=(2k), which proves the congestion bounds in the claim.
Now consider any given elementary request t). For any set of k 1
edges, the congestion caused by the
ow systems for r is at most (k 1)(1
Hence, according to Menger's theorem there are k edge-
disjoint
ows in the system from s to t. Continuing with the same arguments
as in Theorem 2.1, we obtain a set of k-systems for r with as properties stated
in the claim. ut
Now that we have short k-systems for every elementary request, we combine
them back into the original requests. For a request with demand d this results
in a set of k-systems of size at most L each and total
ow value at least d=(4k).
Let the set of all these k-systems for all requests be denoted by S. Since every
k-system has a size at most L, it could have been a candidate for the BGA.
Thus, each of these k-systems must have a witness. Crucially, every edge that
has witnesses for these k-systems must be an edge that is not used by any of
the original k-systems in O 0 . (This follows directly from the denition of O 0 .)
According to the proof of Claim 3.6, the amount of
ow from S traversing any
of these edges is at most 1=(2k). Let E 0 be the set of all witness edges.
For each request we now choose independently at random one of its k-
systems, with probability proportional to the
ow values of the k-systems. This
will result in a set of k-systems P in which each request has exactly one k-system
and in which the expected amount of
ow traversing any edge in E 0 is at most
1=(2k). Next, we assign the original demand of the request to each of these
k-systems. This causes the expected amount of
ow that traverses any edge in
to increase from at most 1=(2k) to at most 4k 2.
We are now ready to bound kPk in terms of kBk. For every k-system h 2 S,
let the indicator variable X h take the value 1 if and only if h is chosen to be
in P . We shall look upon kPk as a random variable (though it always has the
same value) and bound its value by bounding its expected value E[kPk]. In the
following we assume that f(h) is the
ow along a stream of the k-system h and
d(h) is the demand of the request corresponding to h. Also, recall that the total
ow value of k-systems in S belonging to a request with demand d is at least
d=(4k).
4k
where the last calculations are done in the same way as in the proof of
Lemma 3.3. ut
Combining the two lemmas proves the theorem. ut
We note that if the minimum demand of a request, d min , fullls d min
k= log(kF ), then one would not need Claim 3.5. In particular, if d min were
known in advance, then the k-
ow BGA could choose to
achieve an approximation ratio of O(k 3 F=(d min =k)). This would allow a smooth
transition from the bounds for the k-EDP (where d to the k-DFP.
3.1 Online algorithms for the k-DFP
In this section we present a randomized online algorithm for the k-DFP. This
algorithm, which we shall call the randomized k-
ow BGA, is an extension of
the k-
ow BGA algorithm for the oine k-DFP. The technique we present for
making oine algorithms online has been used before [2, 14].
Consider, rst, the set O of k-systems for requests accepted by the optimal
algorithm. Let O 1 O consist of k-systems each with demand at least k=2, and
let O Exactly one of the following events is true: (1) jjO 1 jj 1=2jjOjj
or (2) jjO 2 jj > 1=2 jjOjj.
The randomized k-
ow BGA begins by guessing which of the two events
above will happen. If it guesses the former, it ignores all requests with demand
less than k=2 and runs the regular k-
ow BGA on the rest of the requests. If it
guesses the latter, it ignores all requests with demand at least k=2 and runs the
ow BGA on the rest.
Theorem 3.7 Given a unit-capacity network G with
ow number F , the expected
competitive ratio of the randomized k-
ow BGA for the online k-DFP is
Proof. The proof runs along exactly the same lines as the proof for Theorem
3.1, but we have to prove Lemma 3.2 for the changed situation. Note that
the original proof for Lemma 3.2 relies on the fact that requests are sorted in
a non-decreasing order before being considered. That need not be true here.
as usual, the k-systems for requests accepted by the randomized
ow BGA.
Consider the case when the algorithm guesses that jjO 1 jj 1=2 jjOjj. We
claim that for any stream q 2 O 1 and edge e, if q has a witness on e then
kW(e; B)k 1=2. Let q be witnessed by p, a stream in B. Now, since the algorithm
only considers requests with demand at least k=2, f(p) 1=2. The claim
follows since kW(e; B)k f(p). Following the rest of the proof for Theorem 3.1,
substituting O 1 for O, shows that in this case the randomized k-
ow BGA will
have a competitive ratio of O(k 3 F log(kF )).
Now consider the case when the algorithm guesses jjO 2 jj 1=2 jjOjj. We
claim that even in this case for any stream q 2 O 2 and edge e, if q has a
witness on e then kW(e; B)k 1=2. From the denition of witnessing, we have
F 1. Next, from the denition of O 2 , f(q) < 1=2. The claim
follows as kW(e; B)k F (e; q). As in the previous case, the rest of the proof
for Theorem 3.1 applies here too; substitute O 2 for O.
The competitive ratio in both cases is O(k 3 F log(kF )). Note that the algorithm
may guess incorrectly which event shall be true. But that just reduces
the expected competitive ratio by a factor of 2. ut
3.2 Comparison with other
ow problems
In this section we demonstrate that the k-DFP may be harder to approximate
than other related
ow problems because of the requirement that the k paths
for every request must be disjoint.
The k-splittable
ow problem and the integral splittable
ow problem have
been dened in the introduction. As already mentioned there, previous proof
techniques [14] imply the following result under the no-bottleneck assumption
(i.e., the maximal demand is at most equal to the minimal edge capacity).
Theorem 3.8 For a uniform-capacity network G with
ow number F , the approximation
ratio of the 1-BGA with parameter for the k-SFP and for
the ISF, when run on requests ordered according to their demands starting from
the largest, is O(F ).
Proof. The crucial point is that in the analysis of the BGA algorithm for the
UFP problem in the previous work [14] the solution of the BGA is compared
with an optimal solution of a relaxed problem, namely the fractional maximum
multicommodity
ow problem, and this problem is also a relaxation for both
the ISF and the k-SFP. It follows that the approximation guarantee O(F ) of the
BGA proved for the UFP problem holds for the k-SFP and the ISF problems
as well. ut
Using the standard techniques mentioned earlier, the algorithm can be converted
into a randomized online algorithm with the same expected competitive
ratio. If there is a guarantee that the ratio between the maximal and the minimal
demand is at most 2 (or some other constant) or that the maximal demand
is at most 1=2 (or some other constant smaller than 1, the edge capacity), the
online algorithm can be made even deterministic with the same competitive
ratio (cf. [13]). Taking into account the online lower bound of Theorem 2.4,
this indicates that the k-SFP and the ISF are indeed simpler problems than the
k-DFP.
The techniques of the current paper imply results for the ISF even when the
no-bottleneck assumption does not hold and only the weak bottleneck assumption
is guaranteed (i.e., the maximal demand is at most k times larger than
the minimal edge capacity). In this case we use a variant of the k-
ow BGA,
vary-BGA, that looks for d i disjoint paths of total length O(d i d 2
for a request with demand d i . A consequence of the Theorem 3.1 is the following
corollary:
Corollary 3.9 Given a uniform-capacity network G with
ow number F , the
approximation ratio of the vary-BGA for the ISF under the weak bottleneck
assumption, when run on requests ordered according to their demands starting
from the largest, is O(d 3
In this case the algorithm can also be converted into a randomized online
algorithm.
Conclusions
In this paper we presented upper and lower bounds for the k-EDP and the k-
DFP and related problems. Many problems remain open. For example, what is
the best competitive ratio a deterministic algorithm can achieve for the k-EDP?
We suspect that it is O(k F ), but it seems very hard to prove. Concerning the
k-DFP, is it possible to simplify the proof and improve the upper bound? We
suspect that it should be possible to prove an O(k F ) upper bound as well.
Even an improvement of the O(k 3 F log(kF )) bound k-DFP to O(k 3 F ) would
be interesting.
A bunch of other problems arises for networks with nonuniform edge capac-
ities: the k-
ow BGA algorithm can be used on them as well but is it possible
to prove the same performance bounds?
Acknowledgements
We would like to thank Rakesh Sinha for bringing these problems to our attention
and Alan Frieze for helpful insights.
--R
Strongly polynomial algorithms for the unsplittable ow problem.
On the k-splittable ow problem
Approximation algorithms for disjoint paths and related routing and packing problems.
On the complexity of vertex-disjoint length-restricted path prob- lems
Approximation algorithms for the unsplittable ow problem.
Combinatorial Optimization.
Approximation Algorithms for Disjoint Paths Problems.
Improved bounds for the unsplittable ow problem.
Packet routing and job-shop scheduling in O(congestion
Universal Routing Strategies for Interconnection Networks.
--TR
Combinatorial optimization
Near-optimal hardness results and approximation algorithms for edge-disjoint paths and related problems
Approximation Algorithms for Disjoint Paths and Related Routing and Packing Problems
Simple on-line algorithms for the maximum disjoint paths problem
Improved bounds for the unsplittable flow problem
Universal Routing Strategies for Interconnection Networks
Strongly Polynomial Algorithms for the Unsplittable Flow Problem
On the k-Splittable Flow Problem
Approximation algorithms for disjoint paths problems
--CTR
Amitabha Bagchi , Amitabh Chaudhary , Petr Kolman, Short length menger's theorem and reliable optical routing, Proceedings of the fifteenth annual ACM symposium on Parallel algorithms and architectures, June 07-09, 2003, San Diego, California, USA
Ronald Koch , Ines Spenke, Complexity and approximability of k-splittable flows, Theoretical Computer Science, v.369 n.1, p.338-347, 15 December 2006
Ronald Koch , Ines Spenke, Complexity and approximability of k-splittable flows, Theoretical Computer Science, v.369 n.1-3, p.338-347, December, 2006
Amitabha Bagchi , Amitabh Chaudhary , Petr Kolman, Short length Menger's theorem and reliable optical routing, Theoretical Computer Science, v.339 n.2, p.315-332, 12 June 2005 | multicommodity flow;fault-tolerant routing;greedy algorithms;flow number;edge-disjoint paths |
564916 | Parallel dynamic programming for solving the string editing problem on a CGM/BSP. | In this paper we present a coarse-grained parallel algorithm for solving the string edit distance problem for a string A and all substrings of a string C. Our method is based on a novel CGM/BSP parallel dynamic programming technique for computing all highest scoring paths in a weighted grid graph. The algorithm requires \log p rounds/supersteps and O(\fracn^2p\log m) local computation, where $p$ is the number of processors, p^2 \leq m \leq n. To our knowledge, this is the first efficient CGM/BSP algorithm for the alignment of all substrings of C with A. Furthermore, the CGM/BSP parallel dynamic programming technique presented is of interest in its own right and we expect it to lead to other parallel dynamic programming methods for the CGM/BSP. | INTRODUCTION
Molecular Biology is an important field of application for
parallel computing. Sequence comparison is among the fundamental
tools in Computational Molecular Biology and is
used to solve more complex problems [14], including the
computation of similarities between biosequences [11, 13,
15]. Beside such Molecular Biology applications, sequence
comparison is also used in several other applications [8, 9,
17]. The notions of similarity and distance are, in most
cases, interchangeable and both of them can be used to infer
the functionality or the aspects related to the evolutive
history of the evolved sequences. In either case we are looking
for a numeric value that measures the degree by which
the sequences are alike or di#erent.
We now give a formal definition of the string editing prob-
lem. Let A be a string with |A| symbols on some fixed size
alphabet #. In this string we can do the following edit op-
erations: deletion, insertion and substitution. Each edit operation
is assigned a non negative real number representing
the cost of the operation: D(x) for deletion of a symbol x;
I(x) for insertion of a symbol x and T (x, y) for the exchange
of the symbol x with the symbol y. An edit sequence # is
a sequence of editing operations and its cost is the sum of
the costs of its operations. Let A and C be two strings with
respectively, with m < n.
The string editing problem for input strings A and C consists
of finding an edit sequence # of minimum cost that
transforms A into C.
Figure
1: Grid DAG G
The cost of # is the edit distance from A to C. Let
E(i, j) be the minimum cost of transforming the prefix of
R (i,j)
Figure
2: Processor P (i,j) Stores the Submatrix G (i,j)
A of length i into the prefix of C of length j,
It follows that E(i,
It is easy to see that the string editing problem can be
modeled by a grid graph [1, 12] (Figure 1). An (m, n) grid
graph E) is a directed acyclic weighted graph whose
vertices are (m points of the grid, with rows
. m and columns 0 . n. Vertex (i, j) has a directed edge
to (i
endpoints are within the boundaries of the grid [12].
In [14] the authors describe how to obtain the similarity
(alignment) between two strings by using string editing.
Assuming a similarity score that satisfies the triangle in-
equality, the similarity problem can be solved by computing
the largest source-sink path in the weighted directed acyclic
graph G (grid dag) that corresponds to the edit sequence
which transforms A into C.
The standard sequential algorithms for the string editing
problem are based on dynamic programming. The complexity
of these algorithms is O(mn) time. Given the similarity
matrix, the construction of the optimal alignment can be
done in O(m+n) sequential time [14]. Parallel dynamic programming
is a well studied topic. E#cient parallel PRAM
algorithms for dynamic programming have been presented
by Galil and Park [5, 6]. PRAM algorithms for the string
editing problem have been proposed by Apostolico et al [1].
A general study of parallel algorithms for dynamic programming
can be found in [7].
In this paper we study parallel dynamic programming
for the string editing problem using the BSP [16] Coarse
Grained Multicomputer (CGM) [3, 4] model. A CGM consists
of a set of p processors P1 , . , Pp with O(N/p) local
memory per processor, where N is the space needed by
the sequential algorithm. Each processor is connected by
a router that can send messages in a point-to-point fash-
ion. A CGM algorithm consists of alternating local computation
and global communication rounds separated by a
barrier synchronization. A round is equivalent to a super-step
in the BSP model. Each communication round consists
of routing a single h-relation with O(N/p). We require
that all information sent from a given processor to another
processor in one communication round be packed into one
long message, thereby minimizing the message overhead. In
the CGM model, the communication cost is modeled by the
number of communication rounds. The main advantage of
BSP/CGM algorithms is that they map very well to standard
parallel hardware, in particular Beowulf type processor
clusters [4].
The main concern is on the communication requirements.
Our goal is to minimize the number of communication rounds.
We present a CGM/BSP algorithm for solving the string
edit distance problem for a string A and all substrings of a
string C via parallel dynamic programming. An O(n 2 log m)
sequential algorithm was presented in [12] (to solve the all
approximate repeats in strings problem). This problem also
arises in the common substring alignment problem [10]. The
method requires log p rounds/supersteps and O( n 2
log m)
local computation. To our knowledge, this is the first e#-
cient CGM/BSP algorithm for this problem.
Furthermore, the CGM/BSP parallel dynamic programming
technique presented is of interest in its own right. We
expect that our result will lead to other parallel dynamic
programming methods for the CGM/BSP.
2. A CGM ALGORITHM FOR COMPUTING
ALL HIGHEST SCORING PATHS
In this section we present a parallel algorithm for computing
all highest scoring paths (AHSP) in weighted (m, n)
grid graphs using a CGM with p processors and mn
local
memory per processor. Using this method, we can find an
optimal alignment between A and C.
We divide the grid graph G into p subgrids G (i,j) , 1 #
and each processor P (i,j) stores
subgrid G (i,j) (Figure 2).
Let the left boundary, L, of G be the set of points in the
leftmost column. The right, top and bottom boundaries, R,
T and B, respectively, are defined analogously. The boundary
of G is the union of its left, right, top, and bottom
boundaries (L # R # T # B).
Let DISTG (i,j) be a m+n-1
containing
the lengths of all shortest paths that begin at the left
boundary of G (i,j) , and end at the right
(R (i,j) ) or bottom (B (i,j) ) boundary of G (i,j) . The matrix
consists of four submatrices L (i,j) -R (i,j) (stor-
ing all the shortest paths that begin at the left boundary and
end at the right boundary of G (i,j) ), L (i,j) -B (i,j) (storing all
the shortest paths that begin at the left boundary and end
at the bottom boundary of G (i,j) ), T (i,j) -R (i,j) (storing all
the shortest paths that begin at the top boundary and end
at the right boundary of G (i,j) ) and T (storing
all the shortest paths that begin at the left boundary and
end at the bottom boundary of G (i,j) ). Using the algorithm
of Schmidt [12], each processor can compute all distances of
the paths from the left and top boundaries to the right and
bottom boundaries in G (i,j) in time O( mn
log m
The general strategy of our CGM/BSP algorithm is as fol-
lows: In the general step of the algorithm, several processors
collaborate to join previously calculated subgrids. At the beginning
of each step, each subgrid has a distance matrix distributed
among a group of processors. Two neighbor grids
are joined by the processors that hold the two distance ma-
trices, resulting in a new distance matrix distributed among
these processors. Each step of the algorithm reduces by a
factor of 1/2 the number of subgrids remaining to be merged.
2.1 Joining Grids
We will now show a sequential algorithm to join two adjacent
grids with a common horizontal boundary. The case of
a common vertical boundary is analogous. In the next sub-section
we will show how the distance matrices of two grids
of size l - k, each stored in q processors, can be used by the
2q processors to build the distance matrix of the (2l -1)-k
size merged grid. This procedure takes time O((l
(provided that q is small compared to l and k) and a constant
number of communication rounds. Each round transfers
O((l data from/to each processor and the local
memory required by each processor is O((l
For simplicity, we will refer the upper grid as Gu , with
boundaries Lu , Tu , Bu and Ru , and the lower grid as G l ,
with boundaries L l , T l , B l and R l . We will refer to the
distance matrices for the upper, lower and final grids as
DISTu , DIST l and DIST ul , respectively. It is important
to note that the size of the resulting distance matrix can
be di#erent from the total size of the two original distance
matrices. However, when four grids are joined in a 2 - 2
configuration the sizes add up precisely.
Each initial distance matrix is stored
as a t - t matrix evenly distributed among q processors
(the -1 in the definition of t accounts for the top left and
bottom right corners). The q processors that store DISTu
consecutive columns of DISTu .
The q processors that store DIST l (P l1 , P l2 , . , P lq ) will
store consecutive rows of DIST l . Note that the distance
matrices are actually banded matrices and that a great portion
of these matrices will not be involved in the joining
operation.
Figure
3 illustrates which parts of the old matrices are
copied and which parts are used to build the new matrix.
The copied parts will require some redistribution at the end
of the step. We will concentrate on calculating the t - t sub-matrix
of DIST ul . The shaded areas illustrate the regions
where no paths exist. The submatrices e#ectively involved
in the calculations have a thicker border.
B
B l R l R u
DIST u DIST l
DIST ul
Figure
3: Matrices DISTu , DIST l , and DIST ul .
The existence of the unused (shaded) parts in the distance
matrices has an impact on some constants in the paper, but
is not important to our results. Therefore we will ignore it
for simplicity.
To define indices to the interesting part of DIST ul let
us concentrate on the paths from Lu # Tu to B l # R l . All
these paths cross the common boundary
(sources) be the sequence s i , of points of Lu #
Tu beginning at the lower left corner of Lu and ending at
the top right corner. Let D (destinations) be the sequence
of points of B l # R l starting at the lower
left corner of G l and ending at the top right corner. Let
M be the (middle) sequence m i , of points of
taken from left to right (Figure 4). We will denote
by m(i, j) the index k of the leftmost point mk that belongs
to an optimum path between s i and d j . If there is no such
path, then m(i, value will be used as a sentinel,
with no other meaning).
The determination of a single m(i, j) involves a search
through the entire sequence M to find the x that minimizes
but we can use previously calculated
results to restrict this search, using the following
R l
G l
R u
G u
Figure
4: Merging Gu and G l
Monge properties [1, 12]:
Property 1. If
valid j.
Property 2. If j1 < j2 then m(i, j1
valid i.
Basically, these properties imply that two optimum leftmost paths that share a common extremity cannot cross. The proof is based on the fact that we can take two crossing paths and exchange parts of them to build even better paths, or build a path that is more to the left.
Hence, if we know m(i1, j) and m(i2, j), then for all i between i1 and i2 we can search for m(i, j) only between m(i1, j) and m(i2, j). Furthermore, if for a certain j we have m(i, j) for all i in a sequence i1 < i2 < ... < ir, we can calculate m(i, j) for one value of i in each interval of the sequence using only one sweep through M, in effect doubling the number of known paths. In this operation, which we will call a sweep, we use several rows of DISTu and the jth column of DIST l. A similar procedure can be used to calculate m(i, j) for several values of j.
These properties lead to the following sequential algorithm
to obtain the new distance matrix. This algorithm
will be the base for our parallel algorithm. It is based on
a recursive version presented in [1]. We will calculate and
use all m(i, j), calculating the distance between s i and d j in
the process. At each step, we will begin with some marked
points in the sequences S and D, such that if s i and d j
are marked then m(i, j) is already known. The intervals
between marked points contain points not yet used in the
computations. At each step, we pick the middle point of
each interval and mark it, calculating all the required paths
and crossing points. We begin with only the extremities of
S and D being marked.
Algorithm 1. Sequential Merge of DISTu and DIST l
Input: Two distance matrices DISTu and DIST l.
Output: DIST ul.
WHILE there are unmarked points DO
(3.1) Take the middle point of each of the remaining intervals in S and calculate the paths to all marked points in D;
(3.2) Take the middle point of each of the remaining intervals in D and calculate the paths to all marked points in S;
(3.3) Take the already used middle points in S and D and calculate the paths between them. Mark all these points;
- End of Algorithm -
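To make the restricted search concrete, the following is a minimal Java sketch (not the authors' code) of one sweep for a fixed destination d_j: given the already-marked source indices and their known crossing points, the middle point of every interval is solved in a single pass through M. The dense array layout, the INF sentinel and all names are our own assumptions.

final class SweepSketch {
    static final long INF = Long.MAX_VALUE / 4;   // sentinel for "no path"

    // distU[i][x]  : length of the best path s_i -> m_x, taken from DISTu.
    // distLcol[x]  : length of the best path m_x -> d_j, taken from the jth column of DISTl.
    // marked       : increasing source indices whose crossing points m(., j) are known.
    // mOfMarked[a] : the known value m(marked[a], j).
    // Returns best[a] = m(mid_a, j) for the middle point mid_a of each interval.
    static int[] sweepColumn(long[][] distU, long[] distLcol, int[] marked, int[] mOfMarked) {
        int[] best = new int[marked.length - 1];
        for (int a = 0; a + 1 < marked.length; a++) {
            int mid = (marked[a] + marked[a + 1]) / 2;      // middle point of the interval
            long bestDist = INF;
            int bestX = mOfMarked[a];
            // Property 1 confines the search to [m(marked[a], j), m(marked[a+1], j)];
            // these ranges tile M, so the whole pass costs O(|M| + number of intervals).
            for (int x = mOfMarked[a]; x <= mOfMarked[a + 1]; x++) {
                if (distU[mid][x] >= INF || distLcol[x] >= INF) continue;
                long d = distU[mid][x] + distLcol[x];
                if (d < bestDist) { bestDist = d; bestX = x; }   // '<' keeps the leftmost crossing
            }
            best[a] = bestX;
        }
        return best;
    }
}

A symmetric sweep over a row of DISTu handles the middle points of the intervals of D for a fixed source.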
Theorem 1. Algorithm 1 requires Θ(kt + t²) (sequential) time.
Proof. Step 3.1 makes one sweep for each marked point in D. This requires Θ(kr) time, where r is the number of marked points in D. Step 3.2 also requires Θ(kr) time, and step 3.3 can be done with one sweep for each (now) marked point in D. It is not hard to see that the loop is executed for Θ(log t) iterations. The value of r approximately doubles at each iteration, from 2 to t. Hence, it follows that the total time for this algorithm is Θ(kt + t²).
2.2 Parallelizing the Join Operation
We will now show how Algorithm 1 can be modified to
compute DIST ul on a CGM. The natural way to parallelize
this algorithm is to make each processor determine a different part of DIST ul , but the division of DISTu and DIST l to
accomplish this is data dependent. Our solution is based on
a dynamic scheduling of blocks of DIST ul to the processors.
The CGM version of Algorithm 1 has three main phases:
determination of the subproblems and of the used parts of
DISTu and DIST l , determination of the scheduling of the
subproblems, and solution of the subproblems.
We will now adapt Algorithm 1 to calculate all distances between points in an interval S' of S (from i1 to i2) and points in an interval D' of D (from j1 to j2). Both intervals have size t'. Assume that m(i1, j) and m(i2, j) are already calculated for all j, j1 ≤ j ≤ j2, and that m(i, j1) and m(i, j2) are already calculated for all i, i1 ≤ i ≤ i2. In other words, we already know the solution for the borders of this subproblem. The most important difference between this subproblem and the entire problem is that only a part of each matrix DISTu or DIST l is used. The shapes of these parts are irregular, as shown in Figure 5.
Hence, in order to make a sweep for one point in S' (or D'), we use only a segment of one row of DISTu (or a segment of one column of DIST l). The running time of the sweep is determined by the size of this segment and is therefore variable, and so is the total running time of the subproblem.
The sizes of all the necessary segments can be calculated in
time O(t), and we know how many times the algorithm will
sweep each segment. Thus, the total running time of the
subproblem can be estimated.
Figure 5: Data required to compute a block of DIST ul.
The complete problem can be divided into several subproblems and solved in parallel by 2q processors. To do this, let us divide S and D into 2q equal segments of size t' (we suppose, for simplicity, that t' is an integer) that overlap only in the extremities. This will lead to 4q² subproblems, which will be distributed among the 2q processors. In fact,
some of these subproblems, involving Tu and B l , will be
empty. We will omit this for now, because this will not
be significant to the asymptotic performance. For now, we
will work with 4q 2 subproblems, instead of the more natural
quantity q 2 , in order to distribute the workload more
evenly. At the beginning, the CGM/BSP algorithm for the problem calculates the m(i, j) for all (i, j) that belong to any subproblem boundary. That implies calculating 2q + 1 equally spaced rows and 2q + 1 equally spaced columns of DIST ul and the associated m(i, j).
Lemma 1. All values of DIST ul (i, j) with i or j a multiple of t' - 1, and the corresponding values of m(i, j), can be calculated in parallel by the 2q processors that store DISTu and DIST l in time O(k log(t/q) + qt) and space O(t²/q), with two communication rounds, where O(qt) data is sent/received by each processor.
Proof. The processors P l1, P l2, ..., P lq (that store DIST l distributed by rows) will calculate the 2q + 1 rows. Each one knows the lengths of the paths from an interval of points in M to all points in D. Hence, they need to receive from Pu1, Pu2, ..., Puq the lengths of the paths from the chosen points of S (a sample of S) to the same interval in M. They also send information to allow Pu1, Pu2, ..., Puq to calculate the 2q + 1 required columns of DIST ul. For this communication, each processor sends/receives O(k) data. With these data, each processor from P l1, P l2, ..., P lq calculates the paths from the sample of S to D, using a variation of Algorithm 1. Since t ≫ 2q + 1, the running time of this step will be dominated by the last O(log(t/q)) iterations of the loop, when all points in the sample of S are marked and several in D are not. Each iteration will make 2q + 1 sweeps of a segment of size k/q. The total running time is O(k log(t/q)).
Each processor among P l1, P l2, ..., P lq now has versions of all the paths (with lengths and crossing points) from the sample of S to D, each one considering only crossings at a certain interval of M. To calculate the best paths, D is partitioned into q equal intervals, and each processor receives all q versions of the paths to all points in a certain interval (that is, the paths from the sample of S to t/q points). Each processor sends/receives O(qt) data and spends O(qt) time in a naive search, or O(t) time using the Monge properties to avoid searching through all versions of a path. We will omit the details for the latter due to lack of space. This concludes the procedure.
Each processor among P l1, P l2, ..., P lq now stores information about 4q of the 4q² subproblems: the m(k, x) and m(k + t' - 1, x) frontiers of each subproblem, as previously described, that define the part of DIST l that is used by each subproblem. Pu1, Pu2, ..., Puq will contain information about the m(x, k) and m(x, k + t' - 1) borders of the subproblems that define the usage of DISTu. These processors will not contain the actual data necessary to process these subproblems, just the information about their borders. This information is sent to the proper processors, so each one will identify which of its data is used in which subproblem. Besides that, this information will be used to estimate the time/space requirement for each subproblem.
2.3 Scheduling Subproblems to Processors
As commented earlier, to solve each subproblem, Algorithm 1 sometimes sweeps a segment of a row from DIST l and sometimes sweeps a segment of a column from DISTu. The estimated running time of the subproblem can be divided into an estimation based on the used parts of DIST l and another based on the used parts of DISTu. This running time can thus be estimated by the efforts of two processors. The same will apply to the total memory necessary to store the needed parts of DISTu and DIST l. Each processor then takes O(t') time to work on its estimations for one subproblem, and therefore O(qt') time to work on 4q subproblems. Then all processors send their time/space estimations to a single processor, say Pu1, that receives and processes O(q²) data.
Now, we need to distribute the 4q 2 subproblems to the 2q
processors. The objective of this distribution is to minimize
the completion time by balancing the load on all processors.
This is a special case of the well known, NP-hard, multiprocessor
scheduling problem. In our case, we have an additional
restriction on the space required by all subproblems
assigned to a single processor because we have to perform
the entire distribution in a single communication round. In
the following, we present a solution which ensures that the
claimed time and space bounds are met.
Lemma 2. The 4q² subproblems can be scheduled among the 2q processors in time O(q² log q), resulting in O(t²/q) space and time requirements for each processor.
Proof. Since the subproblems overlap only at the borders, the total space required is Space = O(t²). In the worst case, one subproblem will require entire columns and rows, or Space/2q. The total running time of all subproblems is more difficult to calculate, because the sweeps along segments of rows or columns take different times. Let us consider the case of DIST l: if we take the 2q subproblems that are vertically aligned in DIST l, then they all use the same sweeping pattern for their segments of columns of DIST l. As all these segments add up to size t, the total running time of these sweeps will be basically the same as the running time for these sweeps in the sequential algorithm. The final conclusion is that Time, the sum of the running times of all the subproblems, is approximately equal to the running time of the sequential algorithm, which is O(t²). In the worst case, one subproblem will run in Time/2q.
To guarantee that each processor will spend O(t²/q) time and need O(t²/q) space, we calculate a cost for each subproblem, equal to the sum of its time and space requirements. The sum of all these costs is Cost = O(t²). The problem is to distribute the subproblems among all processors. The local cost at a processor is the sum of the costs of all subproblems assigned to it. We will try to minimize the maximum (among all processors) local cost. Since the maximum cost that a subproblem can have is Cost/2q, the maximum local cost cannot be greater than the minimum local cost plus Cost/2q, or it would be possible to reassign subproblems in a better way. Hence, the optimum solution has cost less than Cost/q (twice the cost of the best possible solution). Using a list scheduling heuristic, allocating in a greedy way the most costly subproblems first, we obtain a solution that has cost at most 4/3 of the optimum solution [2]. Hence, the maximum local cost will be less than or equal to (4/3)(Cost/q) = O(t²/q). Since this cost is the sum of the space and time requirements, all processors will require O(t²/q) time and O(t²/q) space to solve their subproblems. This scheduling is calculated by Pu1 in time O(q² log q), where this time is dominated by the required sorting of the subproblems.
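For illustration, the greedy allocation can be sketched as follows (a minimal sketch, not the authors' implementation): subproblem costs are sorted in decreasing order and each is given to the currently least-loaded processor, which is the classic list-scheduling rule cited above [2]. The class and method names are our own.

import java.util.Arrays;
import java.util.PriorityQueue;

final class ListScheduler {
    // costs[s] = time estimate + space estimate of subproblem s.
    // Returns assign[s] = index of the processor that will solve subproblem s.
    static int[] schedule(long[] costs, int numProcessors) {
        Integer[] order = new Integer[costs.length];
        for (int s = 0; s < costs.length; s++) order[s] = s;
        Arrays.sort(order, (a, b) -> Long.compare(costs[b], costs[a])); // most costly first

        // Min-heap of (current load, processor id).
        PriorityQueue<long[]> load = new PriorityQueue<>((x, y) -> Long.compare(x[0], y[0]));
        for (int p = 0; p < numProcessors; p++) load.add(new long[] {0L, p});

        int[] assign = new int[costs.length];
        for (int s : order) {
            long[] least = load.poll();          // least-loaded processor so far
            assign[s] = (int) least[1];
            least[0] += costs[s];
            load.add(least);
        }
        return assign;
    }
}

With 4q² subproblems and 2q processors the sort dominates, giving the O(q² log q) bound claimed in Lemma 2.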
Once the assignment is established, processor Pu1 broadcasts this information to all other processors (O(q³) data sent). Each processor then sends its data to the proper processors and receives the data of the subproblems assigned to it. This is a complicated communication step that will require considerable bookkeeping, but the total data sent/received is O(kt/q) per processor. Finally, each processor solves its subproblems, generating a t' × t' submatrix of DIST ul for each subproblem. The space required by the data from DIST l and DISTu can be discarded as the subproblems are solved. Once the subproblems are solved, a last communication step redistributes the submatrices of DIST ul in a way that will be adequate to join the new grid with its neighbor (to the left or to the right).
Theorem 2. If a (2l - 1) × k (or l × (2k - 1)) grid has two l × k halves, and the distance matrix of each half is distributed in the local memories of a different set of q processors, then it is possible to calculate the distance matrix of the full grid in parallel with the given 2q processors in time O((l + k)²/q) and a constant number of communication steps. The local memories and the total data sent/received in each step by one processor are O((l + k)²/q).
The theorem follows from the algorithm described above.
The total communication required consists of the following
steps: (1) Distribution of the samples of S and D to
allow the processors to start calculating the boundaries of
the subproblems. (2) Distribution of the tentative lengths
and crossing points of the paths determined in the previous
step. Each processor concentrates candidates for certain
paths. (3) Distribution of the results of the searches for
the best paths, defining the boundaries of the subproblems.
The estimated size and time requirements of the subproblems
are sent to one processor. (4) One processor sends the
assignment of subproblems to all processors. (5) The data
for the subproblems are distributed among processors, following
the pre-determined assignment. (6) The results are
redistributed among the processors.
2.4 Overall Analysis
Given an n-m grid and p processors, this grid is initially
divided into p smaller grids. To simplify the exposition, we
assume the number of processors to be an even power of
two. Each subgrid is then processed by one processor to obtain
its distance matrix. The division of the grid must aim
for the best overall performance of the algorithm. To our
knowledge, the best sequential algorithm for the problem requires
O(nm log(min(n, m))) time [12]. This algorithm was
proposed to build a more complex structure that would support
several kinds of queries and take O(nm log(min(n, m)))
space. However it can easily be adapted to use only O((n
m) 2 ) space in our case where we are interested only in the
boundary to boundary distances. These results make it
tempting to divide the grid in strips to minimize the logarithmic
factor but this is a very small gain and the local
memory required would be prohibitive. As previously
stated, we will divide the grid into a # p - # p configuration
to ensure that the local memory required for the distance
matrix will be O((n +m) 2 /p).
This leads to the following conclusion:
Theorem 3. The distance matrix of an n × m grid can be calculated in parallel, on a CGM with p < √m processors, in time O((n²/p) log m) with O(log p) communication rounds and O(nm/p) local memory.
Proof. Based on the previous discussion. The algorithm requires O((mn/p) log m) time to calculate the distance matrices of the p subgrids. To build the final distance matrix it requires log p merging steps, where each merging step requires a constant number of communication rounds. The p < √m bound is sufficient to ensure that all communication rounds involve O(nm/p) data per processor. The processing time of each merging step is O((n + m)²/p), resulting in a total of O((n²/p) log p). Thus the whole algorithm runs in O((n²/p) log m) time.
In the final distance matrix we find the scores of all the
alignments of substrings of C with A (lengths of paths from
top to bottom of the grid), among other results of the same
kind.
3. CONCLUSION
In this paper, we present an efficient algorithm to compute the edit distance between a string A and all substrings of a string C on the CGM model. The algorithm requires log p rounds/supersteps and O((n²/p) log m) local computation. Thus it presents linear speedup, and the number of communication rounds is independent of the problem size.
4. ACKNOWLEDGMENTS
The authors wish to thank the referees for their helpful
comments.
5. ADDITIONAL AUTHORS
S. W. Song (Universidade de São Paulo, São Paulo, SP - Brazil, email: song at ime. usp. br. Partially supported by FAPESP grant 98/06138-2, CNPq grants 52.3778/96-1 and 46.1230/00-3, and CNPq/NSF Collaborative Research Program grant 68.0037/99-3).
6.
--R
Bounds on Multiprocessing Timing Anomalies.
Scalable parallel geometric algorithms for coarse grained multicomputers.
Parallel dynamic programming.
Dynamic programming with convexity
An introduction to parallel dynamic programming.
Approximate string matching.
An algorithm for di
On the Common Substring Alignment Problem.
A general method applicable to the search for similarities in the amino acid sequence of two proteins.
All highest scoring paths in weighted graphs and their application to finding all approximate repeats in strings.
The theory and computation of evolutionary distances: Pattern recognition.
Introduction to Computational Molecular Biology.
of common molecular subsequences.
A bridging model for parallel computation.
Fast text searching allowing errors.
--TR
A bridging model for parallel computation
Efficient parallel algorithms for string editing and related problems
Fast text searching
Dynamic programming with convexity, concavity and sparsity
Scalable parallel geometric algorithms for coarse grained multicomputers
All Highest Scoring Paths in Weighted Grid Graphs and Their Application to Finding All Approximate Repeats in Strings
Approximate String Matching
A fast algorithm for computing longest common subsequences
On the common substring alignment problem
An introduction to parallel dynamic programming
--CTR
Stjepan Rajko , Srinivas Aluru, Space and Time Optimal Parallel Sequence Alignments, IEEE Transactions on Parallel and Distributed Systems, v.15 n.12, p.1070-1081, December 2004 | parallel algorithms;BSP;CGM;string editing;dynamic programming |
566175 | Model checking Java programs using structural heuristics. | We describe work introducing heuristic search into the Java PathFinder model checker, which targets Java bytecode. Rather than focusing on heuristics aimed at a particular kind of error (such as deadlocks) we describe heuristics based on a modification of traditional branch coverage metrics and other structure measures, such as thread inter-dependency. We present experimental results showing the utility of these heuristics, and argue for the usefulness of structural heuristics as a class. | INTRODUCTION
There has been recent interest in model checking software
written in real programming languages [3, 10, 15, 24, 25, 33]
and in using heuristics to direct exploration in explicit-state
model checkers [12, 35]. Because heuristic-guided search is
clearly directed at nding errors rather than verifying the
complete correctness of software, the connections between
model checking and testing are made particularly clear when
these ideas are combined. In this paper we present one fruitful
product of the intersection of these elds and show how
to apply it to nding errors in programs.
The primary challenge in software model checking, as in all model checking, is the state space explosion problem: exploring all of the behaviors of a system is, to say the least, difficult when the number of behaviors is exponential in the possible inputs, contents of data structures, or number of threads in a program. A vast array of techniques have been applied to this problem [8], first in hardware verification, and now, increasingly, in software verification [3, 10, 21]. Many of these techniques require considerable non-automatic work by experts or do not apply as well to software as to hardware. Most of these techniques are aimed at reducing the size of the total state space that must be explored, or representing it symbolically so as to reduce the memory and time needed for the exploration.
(We present the basic heuristic framework and discuss the creation of user defined heuristics in a tool paper elsewhere [18].)
An alternative approach is to concentrate not on verifying
the correctness of programs but on dealing with the state
space explosion when attempting to find errors. Rather than reducing the overall size of the state space, we can attempt to find a counterexample before the state explosion exhausts memory. Heuristic model checking usually aims at generating counterexamples by searching the bug-containing part of the state space first. Obviously we do not know, in general, what part of a program's state space is going to contain an error, or even if there is an error present. However, by using measurements of the exploration of a program's structure (in particular, its branching structure or thread inter-dependency structure), we believe a model checker can often improve its ability to find errors in programs. Although one of the strongest advantages of model checking is the generation of counterexamples when verification fails, traditional depth-first search algorithms tend to return very long counterexamples; heuristic search, when it succeeds, almost always produces much more succinct counterexamples.
In this paper we explore heuristic model checking of software
written in the Java programming language and use heuristics
based on coverage measurements derived from the world
of software testing. We introduce the notion of structural
heuristics to the classification of heuristics used in model checking, and present (and describe our motivations in developing) successful and novel heuristics from this class.
The paper is organized as follows. Section 2 describes heuristic
model checking, examines related work, and introduces
the various search algorithms we will be using. Section 3 briefly presents the Java PathFinder model checker and the implementation of heuristic search. The new heuristics are defined and described in detail in section 4, which also includes experimental results. We present conclusions and consider future work in a final section.
2. HEURISTIC MODEL CHECKING
In heuristic or directed model checking, a state space is explored
in an order dependent on an evaluation function for
states. This function (the heuristic) is usually intended to
guide the model checker more quickly to an error state. Any resulting counterexamples will often be shorter than ones produced by the depth-first search based algorithms traditionally used in explicit-state model checkers.
Q = priority queue containing the initial state
while (Q not empty)
  S = state in Q with best f = h(S)
  remove S from Q
  for each successor state S' of S
    if S' not already visited
      if S' is the goal then terminate
      store S' and insert it into Q with f = h(S')
Figure 1: Algorithm for best-first search.
The growing body of literature on model checking using heuristics largely concentrates on heuristics tailored to find a certain kind of error [12, 16, 22, 26, 35]. Common heuristics include measuring the lengths of queues, giving preference to blocking operations [12, 26], and using a Hamming distance to a goal state [14, 35]. Godefroid and Khurshid apply genetic algorithm techniques rather than the more basic heuristic searches, using heuristics measuring outgoing transitions from a state (similar to our most-blocked heuristic - see Table 1), rewarding evaluations of assertions, and measuring messages exchanged in a security protocol [16]. Heuristics can also be used in symbolic model checking to reduce the bottlenecks of image computation, without necessarily attempting to zero in on errors; Bloem, Ravi and Somenzi thus draw a distinction between property-dependent and system-dependent heuristics [5]. They note that only property-dependent heuristics can be applied to explicit-state model checking, in the sense that exploring the state space in a different order will not remove bottlenecks in the event that the entire space must be explored. However, we suggest a further classification of property-dependent heuristics into property-specific heuristics that rely on features of a particular property (queue sizes or blocking statements for deadlock, distance in control or data flow to false valuations for assertions) and structural heuristics that attempt to explore the structure of a program in a way conducive to finding more general errors. The heuristic used in FLAVERS would be an example of the latter [9]. We concentrate primarily on structural heuristics, and will further refine this notion after we have examined some of our heuristics.
Heuristics have also been used for generating test cases [29,
and, furthermore, a model checker can be used for test
case generation [1, 2]. Our approach is not only applicable
to test case generation, but applies coverage metrics used
in testing to the more usual model checking goal of finding
errors in a program.
2.1 Search Algorithms
A number of different search algorithms can be combined with heuristics. The simplest of these is a best-first search, which uses the heuristic function h to compute a fitness f in a greedy fashion (Figure 1).
The A* algorithm [19] is similar, except that like Dijkstra's shortest paths algorithm, it adds the length of the path to S' to f. When the heuristic function h is admissible, that is, when h(S') is guaranteed to be less than or equal to the length of the shortest path from S' to a goal state, A* is guaranteed to find an optimal solution (for our purposes, the shortest counterexample). A* is a compromise between the guaranteed optimality of breadth-first search and the efficiency in returning a solution of best-first search.
Beam-search proceeds even more like a breadth-first search, but uses the heuristic function to discard all but the k best candidate states at each depth (Figure 2).
Q = queue containing the initial state
while (Q not empty)
  Q' = empty priority queue
  while (Q not empty)
    remove S from Q
    for each successor state S' of S
      if S' not already visited
        if S' is the goal then terminate
        store S' and insert it into Q' with f = h(S')
  remove all but k best elements from Q'
  Q = Q'
Figure 2: Algorithm for beam search.
The queue-limiting technique used in beam-search may also be applied to a best-first or A* search by removing the worst state from Q (without expanding its children) whenever inserting S' results in Q containing more than k states. This, of course, introduces an incompleteness into the model checking run: termination without reported errors does not indicate that no errors exist in the state space. However, given that the advantage of heuristic search is its ability to quickly discover fairly short counterexamples, in practice queue-limiting is a very effective bug-finding tactic.
The experimental results in section 4 show the varying utility of the different search strategies. Because none of the heuristics we examined are admissible, A* lacks a theoretical optimality, and is generally less efficient than best-first search. Our heuristic value is sometimes much larger than the path length, in which case A* behaves much like a best-first search.
As far as we are aware, combining a best-first search with limitations on the size of the queue used for storing pending states is not discussed or given a name in the literature of heuristic search. A best-first search with queue limiting can find very deep solutions that might be difficult for a beam-search to reach unless the queue limit k is very small.
More specifically, the introduction of queue-limiting to heuristic search for model checking appears to be genuinely novel, and raises the possibility of using other incomplete methods when the focus of model checking is on discovery of errors rather than on verification. As an example, partial order reduction techniques usually require a cycle check that may be expensive or over-conservative in the context of heuristic search [13]. However, once queue-limiting is considered, it is natural to experiment with applying a partial order reduction without a cycle check. The general approach remains one of model checking rather than testing because storing of states already visited is crucial to obtaining good results in our experience, with one notable exception (see the discussion in sections 4.1.1 and 4.2.1).
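A minimal sketch of best-first search with queue limiting is given below, under our own State and Heuristic interfaces rather than JPF's internal scheduler API; ties between equal heuristic values are broken FIFO by an insertion counter, and the worst pending state is dropped whenever the queue exceeds the limit.

import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

final class BoundedBestFirst {
    interface State {
        Iterable<State> successors();
        boolean isErrorState();
    }
    interface Heuristic { int h(State s); }          // smaller value = more promising

    private static final class Entry {
        final int f; final long order; final State state;
        Entry(int f, long order, State state) { this.f = f; this.order = order; this.state = state; }
    }

    static State search(State initial, Heuristic heuristic, int limit) {
        TreeSet<Entry> queue = new TreeSet<>((a, b) ->
                a.f != b.f ? Integer.compare(a.f, b.f) : Long.compare(a.order, b.order));
        Set<State> visited = new HashSet<>();        // state storage, as in the model checker
        long order = 0;
        queue.add(new Entry(heuristic.h(initial), order++, initial));
        visited.add(initial);
        while (!queue.isEmpty()) {
            State s = queue.pollFirst().state;       // best pending state
            for (State succ : s.successors()) {
                if (!visited.add(succ)) continue;    // already visited
                if (succ.isErrorState()) return succ;
                queue.add(new Entry(heuristic.h(succ), order++, succ));
                if (queue.size() > limit) queue.pollLast();   // drop the worst pending state
            }
        }
        return null;                                 // no error found in the searched portion
    }
}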
3. JAVA PATHFINDER
Java PathFinder (JPF) is an explicit state on-the-fly model checker that takes compiled Java programs (i.e. bytecode class-files) and analyzes all paths through the program for deadlock, assertion violations and linear time temporal logic (LTL) properties [33]. JPF is unique in that it is built on a custom-made Java Virtual Machine (JVM) and therefore does not require any translation to an existing model checker's input notation. The dSPIN model checker [25] that extends SPIN [23] to handle dynamic memory allocation and functions is the most closely related system to the JPF model checker.
Java does not support nondeterminism, but in a model checking context it is often important to analyze the behavior of a program in an aggressive environment where all possible actions, in any order, must be considered. For this reason we added methods to a special class (called Verify) to allow nondeterminism to be expressed (for example, Verify.random(2) will nondeterministically return a value in the range 0-2, inclusive), which the model checker can then trap during execution and evaluate with all possible values.
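As a small illustration of the Verify.random idiom just described, a driver like the following (our own example, not code from the JPF distribution; the Verify class is assumed to be available with whatever import the JPF version in use requires) is explored once for each of the three possible return values of Verify.random(2).

public class NondetDriver {
    public static void main(String[] args) {
        int choice = Verify.random(2);      // nondeterministically 0, 1 or 2
        int counter = 0;
        for (int i = 0; i < choice; i++) {
            counter++;                      // the model checker explores every outcome of the choice
        }
        assert counter <= 2 : "counter exceeded its bound";
    }
}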
An important feature of the model checker is the flexibility in choosing the granularity of a transition between states during the analysis of the bytecode. Since the model checker executes bytecode instructions, the most fine-grained analysis supported is at the level of individual bytecodes. Unfortunately, for large programs the bytecode-level analysis does not scale well, and therefore the default mode is to analyze the code on a line-by-line basis. JPF also supports atomic constructs (denoted by Verify.beginAtomic() and Verify.endAtomic() calls) that the model checker can trap to allow larger code fragments to be grouped into a single transition.
The model checker consists of two basic components:
State Generator - This includes the JVM, information
about scheduling, and the state storage facilities required
to keep track of what has been executed and
which states have been visited. The default exploration
in JPF is to do a depth-first generation of the
state space with an option to limit the search to a maximum
depth. By changing the scheduling information,
one can change the way the state space is generated
- by default a stack is used to record the states to be
expanded next, hence the default DFS search.
Analysis Algorithms - This includes the algorithms for
checking for deadlocks, assertion violations and violation
of LTL properties. These algorithms work by instructing
the state generation component to generate
new states, backtrack from old states, and can check
on the state of the JVM by doing API calls (e.g. to
check when a deadlock has been reached).
The heuristics in JPF are implemented in the State Generator component, since many of the heuristics require information from the JVM and a natural way to do the implementation is to adapt the scheduling of which state to explore next (e.g. in the trivial case, for a breadth-first search one changes the stack to a queue). Best-first (also used for A*) and beam-search are straightforward implementations of the algorithms listed in section 2.1, using priority queues within the scheduler. The heuristic search capabilities are currently limited to deadlock and assertion violation checks - none of the heuristic search algorithms are particularly suited to cycle detection, which is an important part of checking LTL properties. In addition, the limited experimental data on improving cycles in counterexamples for liveness properties is not encouraging [14].
Heuristic search in JPF also provides a number of additional
features, including:
users can introduce their own heuristics (interfacing
with the JVM through a well-defined API to access
program variables etc.)
the sum of two heuristics can be used
the order of analysis of states with the same heuristic
value can be altered
the number of elements in the priority queue can be
limited
the search depth can be limited
4. STRUCTURAL HEURISTICS
We consider the following heuristics to be structural heuristics: that is, they are intended to find errors, but are not targeted specifically at particular assertion statements, invariants, or deadlocks. Rather, they explore some structural aspect of the program (branching structure or thread interdependence).
4.1 Code Coverage Heuristics
The code coverage achieved during testing is a measure of the adequacy of the testing, in other words the quality of the set of test cases. Although it does not directly address the correctness of the code under test, having achieved high code coverage during testing without discovering any errors does inspire more confidence that the code is correct. A case in point is the avionics industry, where software can only be certified for flight if 100% structural coverage, specifically modified condition/decision coverage (MC/DC), is achieved during testing [30].
In the testing literature there are a vast number of structural code coverage criteria, from simply covering all statements in the program to covering all possible execution paths. Here we will focus on branch coverage, which requires that at every branching point in the program all possible branches be taken at least once. In many industries 100% branch coverage is considered a minimum requirement for test adequacy [4]. On the face of it, one might wonder why coverage during model checking is of any worth, since model checkers typically cover all of the state space of the system under analysis, hence by definition covering all the structure of the code. However, when model checking Java programs the programs are often infinite-state, or have a very large finite state space, which the model checker cannot cover due
to resource limitations (typically memory). Calculating coverage therefore serves the same purpose as during testing: it shows the adequacy of the (partial) model checking run.
1. States covering a previously untaken branch receive the best heuristic value.
2. States that are reached by not taking a branch receive the next best heuristic value.
3. States that cover a branch already taken are ranked according to how many times that branch has been taken (worse scores are assigned to more frequently taken branches).
Figure 3: Our basic branch-coverage heuristic.
As with test coverage tools, calculating branch coverage during
model checking only requires us to keep track of whether
at each structural branching point all options were taken.
Since JPF executes bytecode statements, this means simple
extensions need to be introduced whenever IF* (related to
any if-statement in the code) and TABLESWITCH (related to
case-statements) are executed to keep track of the choices
made. However, unlike with simple branch coverage, we also
keep track of how many times each branch was taken, rather
than just whether it was taken or not, and consider coverage
separately for each thread created during the execution
of the program. The rst benet of this feature is that the
model checker can now produce detailed coverage information
when it exhausts memory without nding a counterexample
or searching the entire state space. Additionally, if
coverage metrics are a useful measurement of a set of test
cases, it seems plausible that using coverage as a heuristic to
prioritize the exploration of the state space might be useful.
One approach to using coverage metrics in a heuristic would
be to simply use the percentage of branches covered (on a
per-thread or global basis) as the heuristic value (we refer
to this as the %-coverage heuristic). However, this approach
does not work well in practice (see section 4.1.1). Instead,
a slightly more complex heuristic proves successful (Figure 3).
The motivation behind this heuristic is to make use of the
branching structure of a program while avoiding some of the
pitfalls of the more direct heuristic.
The %-coverage heuristic is likely to fall into local minima,
exploring paths that cover a large number of branches but do
not in the future increase coverage. Our heuristic behaves in
an essentially breadth-first manner unless a path is actually
increasing coverage. By default, our system explores states
with the same heuristic value in a FIFO manner, resulting
in a breadth-rst exploration of a program with no branch
choices. However, because the frontier is much deeper along
paths which have previously increased coverage, we still advance
exploration of structurally interesting paths.
Our heuristic delays exploration of repetitive portions of the
state space (those that take the same branches repeatedly).
If a nondeterministic choice determines how many times to
execute a loop, for instance, our heuristic will delay exploring
through multiple iterations of the loop along certain
paths until it has searched further along paths that skip
the loop or execute it only once. We thus achieve deeper
coverage of the structure and examine possible behaviors
after termination of the loop. If the paths beyond the loop
continue to be free of branches or involve previously uncovered
branches, exploration will continue; however, if one of
these paths leads to a loop, we will return to explore further
iterations of the rst loop before executing the latter loop
more than once.
A number of options can modify the basic strategy:
Counts may be taken globally (over the entire state
space explored) or only for the path by which a particular
state is reached. This allows us to examine
either combinations of choices along each path or to
try to maximize branch choices over the entire search
when the ordering along paths is less relevant. In principle, the path-based approach should be useful when
taking certain branches in a particular combination in
an execution is responsible for errors. Global counts
will be more useful when simply exercising all of the
branches is a better way to nd an error. An instance
of the latter would be a program in which one large
nondeterministic choice at the beginning results in different
classes of shallow executions, one of which leads
to an error state.
The branch count may be allowed to persist - if a state is reached without covering any branches, the last branch count on the path by which that state was reached may be used instead of giving the state the second best heuristic value (see Figure 3). This allows us to increase the tendency to explore paths that have improved coverage without being quite as prone to falling into local minima as the %-coverage heuristic.
The counts over a path can be summed to reduce the
search's sensitivity to individual branch choices.
These various methods can also be applied to counts taken on executions of each individual bytecode instruction, rather than only of branches. This is equivalent to the idea of statement coverage in traditional testing.
The practical effect of this class of heuristic is to increase exploration of portions of the state space in which nondeterministic choices or thread interleavings have resulted in the possibility of previously unexplored or less-explored branches being taken.
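The scoring rule of Figure 3 can be sketched as follows (our own Java, not JPF's implementation); the branch identifiers, the count table and the particular numeric values are assumptions, and only the relative ordering of the three cases matters. The count table can be kept globally or per path, matching the first option above.

import java.util.HashMap;
import java.util.Map;

final class BranchCoverageHeuristic {
    static final int BEST = 0;                    // smaller values are explored earlier
    private final Map<String, Integer> branchCounts = new HashMap<>();

    // Called when a state is generated; branchId is null if the last transition
    // did not take a branch (case 2 of Figure 3).
    int score(String branchId) {
        if (branchId == null) return BEST + 1;    // no branch taken
        int count = branchCounts.merge(branchId, 1, Integer::sum);
        return (count == 1) ? BEST                // previously untaken branch: best value
                            : BEST + 1 + count;   // taken before: worse the more often it was taken
    }
}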
4.1.1 Experimental Results
We will refer to a number of heuristics in our experimental
results (Table 1). In addition to these basic heuristics, we
indicate whether a heuristic is measured over paths or all
states by appending (path) or (global) when that is an option. Some results are for an A* or beam search, and this is also noted.
Heuristic      Definition
branch         The basic branch-coverage heuristic.
%-coverage     Measures the percentage of branches covered. States with higher coverage receive better values.
BFS            A breadth-first search.
DFS            A depth-first search. (depth n) indicates that stack depth is limited to n.
most-blocked   Measures the number of blocked threads. More blocked threads result in better values.
interleaving   Measures the amount of interleaving of threads. See section 4.2.
random         Uses a randomly assigned heuristic value.
Table 1: Heuristics and search strategies.
The DEOS real-time operating system developed by Honeywell enables Integrated Modular Avionics (IMA) and is currently used within certain small business aircraft to schedule time-critical software tasks. During its development a routine code inspection led to the uncovering of a subtle error in the time-partitioning that could allow tasks to be starved of CPU time - a sequence of unanticipated API calls made near time-period boundaries would trigger the error. Interestingly, although avionics software needs to be tested to a very high degree (100% MC/DC coverage) to be certified for flight, this error was not uncovered during testing. Model checking was used to rediscover this error, by using a translation to PROMELA (the input language of the SPIN model checker) [28]. Later a Java translation of the original C++ code was used to detect the error. Both versions use an abstraction to find the error (see the discussion in section 4.3). Our results (Table 2) are from a version of the Java code that does not abstract away an infinite-state counter - a more straightforward translation of the original C++ code into Java.
The %-coverage heuristic does indeed appear to easily become trapped in local minima, and, as it is not admissible, using an A* search will not necessarily help. For comparison to results not using heuristics, here and below we also give results for breadth-first search (BFS), depth-first search (DFS) and depth-first searches limited to a certain maximum depth. For essentially infinite state systems (such as this version of DEOS), limiting the depth is the only practical way to use DFS, but as can be seen, finding the proper depth can be difficult - and large depths may result in extremely long counterexamples. Using a purely random heuristic does, in fact, find a counterexample for DEOS - however, the counterexample is considerably longer and takes more time and memory to produce than with the coverage heuristics.
We also applied our successful heuristics to the DEOS system with the storing of visited states turned off (approximating testing or simulation rather than model checking, essentially). Without state storage, these heuristics failed to find a counterexample before exhausting memory.
4.2 Thread Interleaving Heuristics
A different kind of structural heuristic is based on maximizing thread interleavings. Testing, in which generally the scheduler cannot be controlled directly, often misses subtle race conditions or deadlocks because they rely on unlikely thread scheduling. One way to expose concurrency errors is to reward "demonic" scheduling by assigning better heuristic values to states reached by paths involving more switching of threads. In this case, the structure we attempt to explore is the dependency of the threads on precise ordering. If a non-locked variable is accessed in a thread, for instance, and another thread can also access that variable (leading to a race condition that can result in a deadlock or assertion violation), that path will be preferred to one in which the accessing thread continues onwards, perhaps escaping the effects of the race condition by reading the just-altered value. We calculate this heuristic by keeping a (possibly limited in size) history of the threads scheduled on each path (Figure 4).
At each step of execution append the thread just executed to a thread history.
Pass through this history, making the heuristic value that will be returned worse each time the thread just executed appears in the history, by a value proportional to:
1. how far back in the history that execution is and
2. the current number of live threads
Figure 4: Our basic interleaving heuristic.
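The computation of Figure 4 can be sketched as follows (our own Java, not JPF's implementation); the exact penalty weighting and the bounded history length are assumptions consistent with the description above, and smaller values are taken to be better.

import java.util.Deque;

final class InterleavingHeuristic {
    private final int historyLimit;
    InterleavingHeuristic(int historyLimit) { this.historyLimit = historyLimit; }

    // history holds the ids of previously scheduled threads on this path, most recent last.
    int score(Deque<Integer> history, int justExecutedThread, int liveThreads) {
        int penalty = 0;
        int howFarBack = history.size();          // the oldest entry is furthest back
        for (int threadId : history) {            // iterate oldest ... newest
            if (threadId == justExecutedThread) {
                penalty += howFarBack * liveThreads;   // proportional to distance and live threads
            }
            howFarBack--;
        }
        return penalty;                           // smaller = better (more interleaved path)
    }

    // After scoring, record the execution and trim the history to its bound.
    void record(Deque<Integer> history, int justExecutedThread) {
        history.addLast(justExecutedThread);
        while (history.size() > historyLimit) history.removeFirst();
    }
}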
4.2.1 Experimental Results
During May 1999 the Deep-Space 1 spacecraft ran a set of experiments whereby the spacecraft was under the control of an AI-based system called the Remote Agent. Unfortunately, during one of these experiments the software went into a deadlock state, and had to be restarted from earth. The cause of the error at the time was unknown, but after some study, in which the most likely components to have caused the error were identified, the error was found by applying model checking to a Java version of the code - the error was due to a missing critical section causing a race violation to occur under certain thread interleavings, introducing a deadlock [20]. Our results (Table 3) use a version of the code that is faithful to the original system, as it also includes parts of the system not involved in the deadlock.
Our experiments (here and in other examples not presented in the interest of space) indicate that while A* and beam-search can certainly perform well at times, they generally do not perform as well as best-first search. Our heuristics are not admissible, so the optimality advantages of A* do not come into play. In general, both appear to require more judicious choice of queue-limits than is necessary with best-first search.
Finally, for the dining philosophers (Table 4), we show that
our interleaving heuristic can scale to quite large numbers
of threads. While DFS fails to uncover counterexamples
even for small problem sizes, the interleaving heuristic can
produce short counterexamples for up to 64 threads. The
most-blocked heuristic, designed to detect deadlocks, generally
returns larger counterexamples (in the case of size 8
and queue limit 5, larger by a factor of over a thousand)
after a longer time than the interleaving heuristic. Even
more importantly, it does not scale well to larger numbers of threads.
Time(s) Memory States Explored Length Max Depth
branch (path)
branch
branch (global)
branch
%-coverage (path) FAILS - 20,215 - 334
%-coverage
%-coverage (global) FAILS - 20,213 - 334
random 162 240 8,057 334 360
DFS (depth 500) 6,782 383 392,479 455 500
DFS (depth 1000) 2,222 196 146,949 987 1,000
DFS (depth 4000) 171 270 8,481 3,997 4,000
Results with state storage turned off:
branch(global)
Table 2: Experimental results for the DEOS system.
All results obtained on a 1.4 GHz Athlon with JPF limited to 512Mb. Time(s) is in seconds and Memory is in megabytes. FAILS indicates
failure due to running out of memory. The Length column reports the length of the counterexample (if one is found). The Max Depth column
reports the length of the longest path explored (the maximum stack depth in the depth-first case).
Time(s) Memory States Explored Length Max Depth
branch (path) (queue 40) FAILS - 1,765,009 - 12,092
branch (path) (queue 160) FAILS - 1,506,725 - 5,885
branch (path) (queue 1000) 132 290 845,263 136 136
branch (global) (queue 40) FAILS - 1,758,416 - 12,077
branch (global) (queue 160) FAILS - 1,483,827 - 1,409
branch (global) (queue 1000) FAILS - 1,509,810 - 327
random FAILS - 55,940 - 472
DFS (depth 500) 43 54 116,071 500 500
DFS (depth 1000) 44 64 117,235 1000 1000
interleaving FAILS - 378,068 - 81
interleaving (queue 5) 15 17 38,449 913 913
interleaving (queue 40) 116 184 431,752 869 869
interleaving (queue 160) 908 501 1,287,984 869 870
interleaving (queue 1000) FAILS - 745,788 - 177
interleaving
interleaving (queue 5)
interleaving (queue 40)
interleaving (queue 160)
interleaving (queue 1000)
interleaving (queue 5) (beam) 14
interleaving (queue 40) (beam) 91 113 238,945 924 924
interleaving (queue 160) (beam) 386 418 1,025,595 898 898
interleaving (queue 1000) (beam) FAILS - 1,604,940 - 365
most-blocked 7 33 7,537 158 169
most-blocked (queue 5) FAILS - 922,433 - 27,628
most-blocked (queue 40) FAILS - 913,946 - 4,923
most-blocked (queue 160) FAILS - 918,575 - 1,177
most-blocked (queue
most-blocked
most-blocked (queue 5)
most-blocked (queue 40)
most-blocked (queue 160)
most-blocked (queue 1000)
Table 3: Experimental results for the Remote Agent system.
Size Time(s) Memory States Explored Length Max Depth
branch (path) 8 FAILS - 374,152 - 41
random 8 FAILS - 218,500 - 86
most-blocked 8 FAILS - 310,317 - 285
most-blocked (queue 5) 8 17,259 378 891,177 78,353 78,353
most-blocked (queue
most-blocked (queue
most-blocked (queue 1000) 8 46 59 123,640 254 278
interleaving 8 FAILS - 487,942 -
interleaving (queue
interleaving (queue
interleaving (queue
interleaving (queue 1000) 8
most-blocked (queue 5)
most-blocked (queue 40)
most-blocked (queue 160)
most-blocked (queue 1000)
interleaving (queue 5)
interleaving (queue 40)
interleaving (queue 160)
interleaving (queue 1000)
most-blocked (queue 40)
interleaving (queue 5)
interleaving (queue 40)
interleaving (queue 160)
interleaving (queue 5) 64 59 206 101,196 514 514
Table 4: Experimental results for dining philosophers.
We have only reported, for each number of philosopher
threads, the results for those searches that were successful
in the next smaller version of the problem. Results
not shown indicate that, in fact, failed searches do not tend
to succeed for larger sizes.
The key difference in approach between using a property-specific heuristic and a structural heuristic can be seen in the dining philosophers example, where we search for the well-known deadlock scenario. When increasing the number of philosophers high enough (for example to 16) it becomes impossible for an explicit-state model checker to try all the possible combinations of actions to get to the deadlock, and heuristics (or luck) are required. A property-specific heuristic applicable here is to try and maximize the number of blocked threads (the most-blocked heuristic from Table 1), since if all threads are blocked we have a deadlock in a Java program. A structural heuristic, in contrast, observes that we are dealing here with a highly concurrent program - hence it may be argued that any error in it may well be related to an unexpected interleaving - and therefore favors increased interleaving during the search (the interleaving heuristic from Table 1). Although the results are by no means conclusive, it is still worth noting that for this specific example the structural heuristic performs much better than the property-specific heuristic.
For the dining philosophers and Remote Agent examples we also performed the experiment of turning off state storage. For the interleaving heuristic, results were essentially unchanged (minor variations in the length of counterexamples and number of states searched). We believe that this is because returning to a previously visited state in each case requires an action sequence that will not be given a good heuristic value by the interleaving heuristic (for example, in the dining philosophers, alternating picking up and dropping of forks by the same threads). For the most-blocked heuristic, however, successful searches become unsuccessful - removal of state storage introduces the possibility of non-termination into the search. For example, the most-blocked heuristic without state storage may not even terminate, in some cases.
Godefroid and Khurshid apply their genetic algorithm techniques
to a very similar implementation of the dining philosophers
(written in C rather than Java) [16]. They seed their
genetic search randomly on a version with 17 running threads,
reporting a 50% success rate and average search time of 177
seconds (on a slower machine than we used). Our results
suggest that the differences may be as much a result of the heuristics used (something like most-blocked vs. our interleaving heuristic) as the genetic search itself. Application of our heuristics in different search frameworks is an interesting avenue for future study.
4.3 The Choose-free Heuristic
Abstraction based on over-approximations of the system behavior is a popular technique for reducing the size of the state space of a system to allow more efficient model checking [7, 11, 17, 34]. JPF supports two forms of over-approximation: predicate abstraction [34] and type-based abstractions (via the BANDERA tool) [11]. However, over-approximation is not well suited for error-detection, since the additional behaviors introduced by the abstraction can lead to spurious errors that are not present in the original. Eliminating spurious errors is an active area of research within the model checking community [3, 6, 21, 27, 31].
JPF uses a novel technique for the elimination of spurious errors called choose-free search [27]. This technique is based on the fact that all over-approximations introduce nondeterministic choices in the abstract program that were not present in the original. Therefore, a choose-free search first searches the part of the state space that doesn't contain any nondeterministic choices due to abstraction. If an error is found in this so-called choose-free portion of the state space then it is also an error in the original program. Although this technique may seem almost naive, it has been shown to work remarkably well in practice [11, 27]. The first implementation of this technique only searched the choose-free state space, but the current implementation uses a heuristic that gives the best heuristic values to the states with the fewest nondeterministic choice statements enabled, i.e. allowing the choose-free state space to be searched first but continuing to the rest of the state space otherwise (this also allows choose-free to be combined with other heuristics).
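A minimal sketch of the choose-free heuristic is given below (our own Java, not JPF's implementation); the interface for counting abstraction-introduced choices is an assumption, and the essential point is only that states involving fewer such choices receive better values, so the choose-free portion of the state space is explored first and the heuristic can be summed with others.

final class ChooseFreeHeuristic {
    interface AbstractState {
        // Number of nondeterministic choices introduced by the abstraction that are
        // enabled in (or were taken to reach) this state.
        int abstractionChoices();
    }

    // Smaller values are explored first, so choose-free states (count zero) come
    // before any state that depends on an abstraction-introduced choice.
    int score(AbstractState s) {
        return s.abstractionChoices();
    }

    // The heuristic composes with others by summation, as mentioned above.
    int combined(AbstractState s, int otherHeuristicValue, int weight) {
        return weight * score(s) + otherHeuristicValue;
    }
}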
The DEOS example can be abstracted by using both predicate abstraction [34] and type-based abstraction [11]. The predicate abstraction of DEOS is a precise abstraction, i.e. it does not introduce any new behaviors not present in the original, hence we focus here on the type-based abstraction - specifically, we apply a Range abstraction (allowing the values 0 and 1 to be concrete and all values 2 and above to be represented by one abstract value) to the appropriate variable [11]. When using the choose-free heuristic it is discovered that for this Range abstraction the heuristic search reports a choose-free error of length 26 in 20 seconds.
These heuristics for finding feasible counterexamples during abstraction can be seen as an on-the-fly under-approximation of an over-approximation (from the abstraction) of the system behavior. The only other heuristic that we are aware of that falls into a similar category is the one for reducing infeasible execution sequences in the FLAVERS tool [9].
5. CONCLUSIONS AND FUTURE WORK
Heuristic search techniques are traditionally used to solve problems where the goal is known and a well-defined measure exists of how close one is to this goal. The aim of the heuristic search is to guide the search, using the measure, to achieve the goal as quickly (in as few steps) as possible. This has also been the traditional use of heuristic search in model checking: the heuristics are defined with regard to the property being checked. Here we advocate a complementary approach where the focus of the heuristic search is more on the structure of the state space being searched, in our case the Java program from which the state space is generated.
We do not believe that structural heuristics should replace property-specific heuristics, but rather propose that they be used as a complementary approach. Furthermore, since the testing domain has long used the notion of structural code coverage, it seems appropriate to investigate similar ideas in the context of structural heuristics during model checking. Here we have shown that for a realistic example (DEOS) a heuristic based on branch coverage (a relatively weak structural coverage measure) gives encouraging results.
It is worth noting that a much stronger coverage measure (MC/DC) did not "help" in uncovering the same error during testing (i.e. 100% coverage was achieved but the bug was not found). We conjecture that the use of code coverage during heuristic model checking can lead to classes of errors being found that the same coverage measures during testing will not uncover. For example, branch coverage is typically of little use in uncovering concurrency errors, but using it as a heuristic in model checking will allow the model checker to evaluate more interleavings which might lead to an error (branch coverage found the deadlock in the Remote Agent example, whereas traditional testing failed).
There are a number of possible avenues for future work. As our experimental results make clear, a rather daunting array of parameters is available when using heuristic search - at the very least, a heuristic, a search algorithm, and a queue size must be selected. We hope to explore the practicalities of selecting these options, gathering more experimental data to determine if, for instance, as it appears, proper queue size limits are essential in checking programs with a large number of threads. A further possibility would be to attempt to apply algorithmic learning techniques to finding good parameters for heuristic model checking.
The development of more structural heuristics and the refinement of those we have presented here is also an open problem. For instance, are there structures in the data structures of a program analogous to the control structures explored by our branch-coverage heuristics? We imagine that these other heuristics might relate to particular kinds of errors as the interleaving heuristic relates to concurrency errors.
6.
--R
Using model checking to generate tests from specifications.
Test Generation and Recognition with Formal Methods.
Automatically Validating Temporal Safety Properties of Interfaces.
Software Testing Techniques.
Symbolic Guided Search for CTL Model Checking.
Model Checking and Abstraction.
Model Checking.
The Right Algorithm at the Right Time: Comparing Data Flow Analysis Algorithms for Finite State Veri
Directed explicit model checking with HSF-Spin
Partial Order Reduction in Directed Model Checking.
VeriSoft: A Tool for the Automatic Analysis of Concurrent Reactive Software.
Exploring Very Large State Spaces Using Genetic Algorithms.
Construction of Abstract State Graphs with PVS.
Heuristic Model Checking for Java Programs.
A formal basis for heuristic determination of minimum path cost.
Formal Analysis of the Remote Agent Before and After Flight.
Lazy Abstraction.
Algorithms for automated protocol verification.
The State of SPIN.
Automating Software Feature Verification.
A Dynamic Extension of SPIN.
Protocol Verification.
Classical search strategies for test case generation with Constraint Logic Programming.
RTCA Special Committee 167.
Modular and Incremental Analysis of Concurrent Software Systems.
An Automated Framework for Structural Test-Data Generation
Model Checking Programs.
Using Predicate Abstraction to Reduce Object-Oriented Programs for Model Checking
Validation with Guided Search of the State Space.
--TR
Software testing techniques (2nd ed.)
Model checking and abstraction
Validation with guided search of the state space
Model checking
Bandera
Symbolic guided search for CTL model checking
Verification of time partitioning in the DEOS scheduler kernel
Using predicate abstraction to reduce object-oriented programs for model checking
Directed explicit model checking with HSF-SPIN
Automatically validating temporal safety properties of interfaces
The right algorithm at the right time
Tool-supported program abstraction for finite-state verification
Lazy abstraction
dSPIN
Heuristic Model Checking for Java Programs
Partial Order Reduction in Directed Model Checking
Finding Feasible Counter-examples when Model Checking Abstracted Java Programs
Exploring Very Large State Spaces Using Genetic Algorithms
Construction of Abstract State Graphs with PVS
Counterexample-Guided Abstraction Refinement
The State of SPIN
An Automated Framework for Structural Test-Data Generation
Modular and Incremental Analysis of Concurrent Software Systems
Model Checking Programs
Using Model Checking to Generate Tests from Specifications
--CTR
Neha Rungta , Eric G. Mercer, A context-sensitive structural heuristic for guided search model checking, Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering, November 07-11, 2005, Long Beach, CA, USA
Kevin Seppi , Michael Jones , Peter Lamborn, Guided Model Checking with a Bayesian Meta-heuristic, Fundamenta Informaticae, v.70 n.1,2, p.111-126, April 2006
Madanlal Musuvathi , Shaz Qadeer, Iterative context bounding for systematic testing of multithreaded programs, ACM SIGPLAN Notices, v.42 n.6, June 2007
Guillaume Brat , Doron Drusinsky , Dimitra Giannakopoulou , Allen Goldberg , Klaus Havelund , Mike Lowry , Corina Pasareanu , Arnaud Venet , Willem Visser , Rich Washington, Experimental Evaluation of Verification and Validation Tools on Martian Rover Software, Formal Methods in System Design, v.25 n.2-3, p.167-198, September-November 2004
Jianbin Tan , George S. Avrunin , Lori A. Clarke , Shlomo Zilberstein , Stefan Leue, Heuristic-guided counterexample search in FLAVERS, ACM SIGSOFT Software Engineering Notes, v.29 n.6, November 2004
Jianbin Tan , George S. Avrunin , Lori A. Clarke, Heuristic-Based Model Refinement for FLAVERS, Proceedings of the 26th International Conference on Software Engineering, p.635-644, May 23-28, 2004
Oksana Tkachuk , Sreeranga P. Rajan, Application of automated environment generation to commercial software, Proceedings of the 2006 international symposium on Software testing and analysis, July 17-20, 2006, Portland, Maine, USA
Dimitra Giannakopoulou , Corina S. Pasareanu , Jamieson M. Cobleigh, Assume-Guarantee Verification of Source Code with Design-Level Assumptions, Proceedings of the 26th International Conference on Software Engineering, p.211-220, May 23-28, 2004
Robby , Matthew B. Dwyer , John Hatcliff, Bogor: an extensible and highly-modular software model checking framework, ACM SIGSOFT Software Engineering Notes, v.28 n.5, September 2003
Alexander Pretschner , Heiko Ltzbeyer , Jan Philipps, Model based testing in incremental system development, Journal of Systems and Software, v.70 n.3, p.315-329, March 2004
Cyrille Artho , Howard Barringer , Allen Goldberg , Klaus Havelund , Sarfraz Khurshid , Mike Lowry , Corina Pasareanu , Grigore Rosu , Koushik Sen , Willem Visser , Rich Washington, Combining test case generation and runtime verification, Theoretical Computer Science, v.336 n.2-3, p.209-234, 26 May 2005
John Penix , Willem Visser , Seungjoon Park , Corina Pasareanu , Eric Engstrom , Aaron Larson , Nicholas Weininger, Verifying Time Partitioning in the DEOS Scheduling Kernel, Formal Methods in System Design, v.26 n.2, p.103-135, March 2005 | coverage metrics;heuristics;testing;model checking |
566199 | Specification, verification, and synthesis of concurrency control components. | Run-time errors in concurrent programs are generally due to the wrong usage of synchronization primitives such as monitors. Conventional validation techniques such as testing become ineffective for concurrent programs since the state space increases exponentially with the number of concurrent processes. In this paper, we propose an approach in which 1) the concurrency control component of a concurrent program is formally specified, 2) it is verified automatically using model checking, and 3) the code for concurrency control component is automatically generated. We use monitors as the synchronization primitive to control access to a shared resource by multipleconcurrent processes. Since our approach decouples the concurrency control component from the rest of the implementation it is scalable. We demonstrate the usefulness of our approach by applying it to a case study on Airport Ground Traffic Control.We use the Action Language to specify the concurrency control component of a system. Action Language is a specification language for reactive software systems. It is supported by an infinite-state model checker that can verify systems with boolean, enumerated and udbounded integer variables. Our code generation tool automatically translates the verified Action Language specification into a Java monitor. Our translation algorithm employs symbolic manipulation techniques and the specific notification pattern to generate an optimized monitor class by eliminating the context switch overhead introduced as a result of unnecessary thread notification. Using counting abstraction, we show that we can automatically verify the monitor specifications for arbitrary number of threads. | INTRODUCTION
Writing a concurrent program is an error prone task. A
concurrent programmer has to keep track of not only the
possible values of the variables of the program, but also the
states of its concurrent processes. Failing to use concurrency
constructs such as semaphores or monitors correctly
results in run-time errors such as deadlocks and violation of
safety properties. Conventional validation techniques such
as testing become ineffective for concurrent programs since
the state space of a concurrent program increases exponentially
both with the number of variables and the number of
concurrent processes in it.
Monitors are a programming language construct introduced
to ease the difficult task of concurrent programming
[14]. Structured programming languages help programmers
in keeping track of the states of the program variables by
providing abstractions such as procedures and associated
scoping rules to localize variable access. Monitors are a similar
mechanism for structuring concurrent programs: they
provide scoping rules for concurrency.
Since monitors are an integral part of Java, recently, concurrent
programming using monitors gained increased attention
[16]. A monitor consists of a set of variables shared
among multiple processes and a set of associated procedures
for accessing them. Shared variables of a monitor are not
accessible outside of its procedures. At any time, only one
process is allowed to be active in a monitor. Processes synchronize
using specific operations which lets them wait (i.e.,
sleep) until they receive a signal from another process. Wait
and signal operations are coordinated using condition vari-
ables. Even though monitors provide a better abstraction
for concurrent programming compared to other constructs
such as semaphores, they are still error prone. Coordinating
wait and signal operations on several condition variables
among multiple processes can be very challenging even for
the implementation of simple algorithms.
In this paper we propose a new approach for developing
reliable concurrent programs. First aspect of our approach is
to start with a specification of the concurrency control component
of the program rather than its implementation. We
use monitors as the underlying concurrency control prim-
itive. We present a monitor model in Action Language
[4]. Action Language is a specification language for reactive
software systems. It supports both synchronous and asynchronous
compositions and hierarchical specifications. We
show that monitors can be specified in Action Language using
asynchronously composed modules. Our monitor model
in Action Language has one important aspect, it does not
rely on condition variables. Semantics of Action Language
lets us get rid of condition variables (with associated wait
and signal operations) which simplifies the specification of
a monitor significantly.
The most important component of our approach is the use
of an automated verification tool for checking properties of
monitor specifications. Action Language is supported by
an infinite-state model checker that can verify or falsify (by
producing counter-example behaviors) both invariance and
liveness (or any CTL) properties of Action Language specifications
[6]. In this paper we focus on verification of monitor
invariants, however, the approach presented in this paper
can be extended to universal portion of the temporal logic
CTL. For the infinite-state systems that can be specified in
Action Language, model checking is undecidable. Hence our
verifier uses various heuristics to guarantee convergence. It
does not produce false positives or false negatives but its
analysis can be inconclusive.
The last component of our approach is a code generation
algorithm for synthesizing monitors from Action Language
specifications. Our goal is not generating complete
programs, rather, we are proposing a modular approach
for generating concurrency control component of a program
that manipulates shared resources.
We use a case study on Airport Ground Traffic Control
[21] to show the effectiveness and scalability of our tech-
nique. This case study uses a fairly complex airport ground
network similar to that of Seattle Tacoma International Air-
port. Our model checker can verify all the safety properties
of the specification for this case study and our code generation
tool automatically generates an optimized Java class (in
terms of the context switch overhead that would be incurred
in a multithreaded application).
Recently, there have been several attempts in adopting
model checking to verification of concurrent programs [8,
13]. These approaches translate a concurrent Java program
to a finite model and then check it using available model
checking tools. Hence, they rely on the ability of model
checkers to cope with the state space explosion problem.
However, to date, model checkers are not powerful enough
to check implementations of concurrent programs. Hence,
most of the recent work on verification of concurrent programs
have been on efficient model construction from concurrent
programs [8, 13, 11].
Our approach provides a different direction for creating
reliable concurrent programs. It has several advantages: 1)
It avoids the implementation details in the program which
do not relate to the property to be verified. 2) There is
no model construction problem since the specification language
used has a model checker associated with it.
3) By pushing the verification to an earlier stage in software development
(to specification phase rather than the implementation
phase) it reduces the cost of fixing bugs. However,
our approach is unlikely to scale to generation of complete
programs. This would require the specification language to
be more expressive and would introduce a model construction
problem at the specification stage. Hence we focus on
synthesizing concurrency control components which are correct
by construction and can be integrated to a concurrent
program safely. Another aspect of our approach which is
different from the previous work is the fact that we are using
an infinite state model checker rather than finite state
techniques. Using our infinite state model checker we can
verify properties of specifications with unbounded integer
variables and arbitrary number of threads.
While this work was under review, independently, Deng et
al. also presented an approach that combines specification,
synthesis and verification for concurrent programs [12]. One
crucial difference between our approach and the approach
presented in [12] is apparent in the (otherwise remarkably
similar) titles. In our approach automated verification is
performed on the specification, not on the implementation.
Hence, our approach shields the automated verification tool
from the implementation details.
The rest of the paper is organized as follows. In Section 2 we
describe our case study. In Section 3 we explain concurrency
control with monitors and their implementation in Java. In
Section 4 using our case study we discuss how monitors can
be specified in Action Language. We also present a general
template for specifying monitors in Action Language in that
section. In Section 5 we discuss how monitor specifications
can be automatically verified using Action Language Ver-
ifier. We also show that using counting abstraction monitor
specifications can be verified for arbitrary number of
threads. In Section 6 we present the algorithms for automatically
generating Java monitor classes from monitor
specifications. Finally, in Section 7, we state our conclusions
and directions for future work.
2. AN AIRPORT GROUND TRAFFIC
CONTROL CASE STUDY
We will present an Airport Ground Traffic Control case
study to demonstrate the techniques proposed in this pa-
per. Airport Ground Traffic Control handles allocation of
airport ground network resources such as runways, taxiways,
and gates to the arriving and departing airplanes. Airport
Ground Traffic Control is safety critical. 51.5% of hull loss
accidents from 1959 through 1996 were associated with airport
ground operations such as taxi, takeoff, and landing [1].
Simulations play an important role for airport safety since
they enable early prediction of possible runway incursions
which is a growing problem at busy airports throughout
the world. [21] presents a concurrent simulation program
for modeling Airport Ground Traffic Control using Java
threads. In this paper, we demonstrate that concurrency
control component of such a program can be formally spec-
ified, automatically verified, and synthesized in our frame-
work. We use the same airport ground network model used
in [21] (shown in Figure 1) similar to Seattle/Tacoma International
Airport. There are two runways: 16R and 16L.
Runway 16R is used by arriving airplanes during landing.
After landing, an arriving airplane takes one of the exits
C3-C8. After taxiing on C3-C8, the arriving airplanes need
to cross runway 16L. After crossing 16L, they continue on
one of the taxiways B2, B7, B9-B11 and reach the gates in
which they park. Departing airplanes use runway 16L for
takeoff. The control logic for ground traffic of this airport
must implement the following rules:
1. An airplane can land (takeoff) using 16R (16L) only if
no airplane is using 16R (16L) at the moment.
2. An airplane taxiing on one of the exits C3-C8 can cross
runway 16L only if no airplane is taking off at the
moment.

Figure 1: An airport ground network (runways 16R and 16L) similar to that of the Seattle Tacoma International Airport
3. An airplane can start using 16L for taking off only if
none of the crossing exits C3-C8 is occupied at the mo-
ment. (Arriving airplanes have priority over departing
airplanes.)
4. Only one airplane can use a taxiway at a time.
We give the Action Language specification of the Airport
Ground Traffic Control system in Section 4. In Section 5 we
discuss how we used the Action Language Verifier to automatically
verify the properties of this system. In Section 6
we show the Java monitor class synthesized from the Action
Language specification.
3. CONCURRENCY CONTROL WITH
MONITORS
A monitor is a synchronization primitive that is used to
control access to a shared resource by multiple concurrent
processes. A monitor consists of a set of variables and procedures
with the following rules: 1) The variables in a monitor
can only be accessed through the procedures of the monitor.
2) At most one process can execute the procedures of the monitor
at the same time. We can view the second rule as the monitor
having a mutual exclusion lock. Only the process that has
the monitor lock can be active in the monitor. Any process
that calls a monitor procedure has to acquire the monitor
lock before executing the procedure and release it after it
exits. This synchronization is provided implicitly by the
monitor semantics, hence, the programmer does not have to
explicitly write the acquire lock and release lock operations.
Monitors provide additional synchronization operations
among processes based on condition variables. Two operations
on condition variables are defined: wait and signal.
A process that performs a wait operation on a condition
variable sleeps and releases the monitor lock. It can only
be awakened by a signal operation on the same condition
variable. A waiting process that has been awakened has to
re-acquire the monitor lock before it resumes operation. If
there are no waiting processes, then signal operation is ignored
(and forgotten, i.e., it does not affect processes which
execute a wait later on). Wait and signal operations can be
implemented using one wait queue per condition variable.
When a process executes the wait operation on a condition
variable it enters the corresponding wait queue. A signal
operation on a condition variable removes one process from
the corresponding wait queue and resumes its operation (af-
ter re-acquiring the monitor lock). In signal and continue
semantics for the signal operation, signaling process keeps
the monitor lock until it exits or waits. Different semantics
and additional operations have also been used for signaling
such as signal and wait semantics and signalAll operation
[3].
Typically condition variables are used to execute a set of
statements only after a guarding condition becomes true. To
achieve this, a condition variable is created that corresponds
to the guarding condition. The process which will execute
the guarded statements tests the guarding condition and
calls the wait on the corresponding condition variable if the
guarding condition is false. Each process which executes a
statement that can change the truth value of the guarding
condition signals this to the processes that are waiting on
the corresponding condition variable.
State of a monitor is represented by its variables. Set
of states that are safe for a monitor can be expressed as
a monitor invariant [3]. Monitor invariant is expected to
hold when no process is accessing the monitor (i.e., it is not
guaranteed to hold when a process is active within a monitor
procedure).
3.1 Monitors in Java
Java is an object-oriented programming language that
supports concurrent programming via threads and monitors.
Each Java object has a mutual exclusion lock. A monitor
in Java is implemented using the object locks and the
synchronized keyword. A block of statements can be declared
to be synchronized using the lock of an object o as
synchronized(o) { ... }. This block can only be executed
after the lock for the object o is acquired. Methods
can also be declared to be synchronized which is equivalent
to enclosing the method within a synchronized block using
object this, i.e., synchronized(this) { ... }. A monitor
object in Java is created by declaring a class with private
variables that correspond to shared variables of the monitor.
Then each monitor procedure is declared as a synchronized
method to meet the mutual exclusion requirement.
Wait and signal operations are implemented as wait and
notify methods in Java. However, in Java, each object has
only one wait queue. This means that when there is a notify
call, any waiting process in the monitor can wake up. If there
is more than one condition that processes can be waiting for,
awakened processes have to recheck the conditions they have
been waiting for. Note that, if a process that was waiting
for a different condition is awakened, then the notify call
is lost. This can be prevented by using notifyAll method
which wakes up all the waiting processes.
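As an illustration of this single-queue idiom (our own minimal sketch, not an example from the paper), a monitor guarding one hypothetical shared resource can be written as follows:

    public class SingleResourceMonitor {
        // all shared state is private; the object's implicit lock plus one wait
        // queue and notifyAll are enough for a correct (if inefficient) monitor
        private boolean busy = false;

        public synchronized void acquire() throws InterruptedException {
            while (busy)      // recheck the guard after every wakeup
                wait();
            busy = true;
        }

        public synchronized void release() {
            busy = false;
            notifyAll();      // wake every waiter; those whose guard is still false wait again
        }
    }

Every waiter is awakened on every release, which is safe but wastes context switches; this is the inefficiency that the specific notification pattern discussed later avoids.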
Using a single wait queue and notifyAll method one can
safely implement monitors in Java. However, such an implementation
will not be very efficient. To get better efficiency,
one can use other objects (declared as members of the monitor
class) as condition variables together with synchronized
blocks on those objects. Since each object has a lock and
an associated wait queue, this makes it possible to put processes
waiting on different conditions to different queues.
However, this implies that there will be more than one lock
used in the monitor. (In addition to monitor lock there will
be one lock per condition variable.) Use of multiple locks in
Java monitor classes is prone to deadlocks and errors [16].
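For instance, the following hypothetical fragment (ours, not from the paper) shows the classic lock-ordering hazard: one method takes the monitor lock and then a condition object's lock, while another takes them in the opposite order, so two threads can block each other forever:

    class LockOrderingHazard {
        private final Object cond = new Object();
        private boolean ready = false;

        public synchronized void update() {          // acquires this, then cond
            ready = true;
            synchronized (cond) { cond.notify(); }
        }

        public void check() {                        // acquires cond, then this
            synchronized (cond) {
                synchronized (this) { if (ready) return; }
                // ... would wait on cond here ...
            }
        }
    }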
4. SPECIFICATION OF MONITORS
Although monitors provide a higher level of abstraction
for concurrent programs compared to mechanisms such as
semaphores, they can still be tedious and difficult to im-
plement. We argue that Action Language can be used to
specify monitors in a higher level of abstraction. Monitor
specifications in Action Language do not rely on condition
variables. Since in Action Language an action is executed
only when its guard evaluates to true, we do not need conditional
waits.
Figure
2 shows the Action Language specification of the
Airport Ground Traffic Control case study. An Action Language
specification consists of a set of module definitions. A
module definition consists of variable declarations, a restrict
expression, an initial expression, submodule definitions, action
definitions, and a module expression. Semantically,
each module corresponds to a transition system with a set
of states, a set of initial states and a transition relation.
Variable declarations define the set of states of the module.
In
Figure
2 we implemented the shared resources of the
Airport Ground Traffic Control, which are runways and taxi-
ways, as integer variables. Variables numRW16R and numC3
denote the number of airplanes on runway 16R and on taxiway
C3, respectively. We use enumerated variables (local
variable pc in submodule Airplane) to encode states of an
airplane. An arriving airplane can be in one of the following
states: arFlow, touchDown, taxiToXY, taxiFrXY and
parked, where the state arFlow denotes that the airplane is
in the air approaching to the airport, the state touchDown
denotes that the airplane has just landed and is on the runway
16R, the state taxiToXY denotes that the airplane is
currently in the taxiway Y and is going to cross the runway
X, the state taxiFrXY denotes that the airplane is currently
in the taxiway Y and has just crossed the runway X, and
finally, the state parked denotes that the airplane is parked
at the gate. Similarly, a departing airplane can be in one of
the following states: parked, takeOff, and depFlow, where
the state parked denotes that the airplane is parked at the
gate, the state takeOff denotes that the airplane is taking
off from the runway 16L, and the state depFlow denotes that
the airplane is in the air departing from the airport.
Action Language is a modular language. An Action Language
specification can be defined in terms of a hierarchy
of modules. In Figure 2 module main has a submodule
Airplane. Submodule Airplane models both arriving and
departing airplanes and corresponds to a process type (or
thread class in Java). Submodule Airplane has one local
enumerated variable (pc) which is used to keep track of the
states an airplane can be in. Note that, each instantiation
of a module will create different instantiations of its local
variables.
Set of states can be restricted using a restrict expression.
In
Figure
2, variables numC3, numRW16R, and numRW16L are restricted
to be greater than or equal to 0. Initial expression
defines the set of initial states of a module. For instance, in
Figure 2, variables numRW16R, numRW16L, and numC3 are initialized to 0.
Initial and restrict expressions of the submodules
are conjoined with the initial and restrict expressions of the
main module to obtain the overall initial condition and the
restrict expression, respectively.
Behavior of a module (i.e., its transition relation) is defined
using a module expression. A module expression (which
starts with the name of the module) is written by combining
its actions and submodules using asynchronous (denoted by
|) and/or synchronous (denoted by &) composition opera-
tors. For instance, module Airplane is defined in terms of
asynchronous composition of its actions reqLand, exitRW3,
and so on. Module main (which defines the transition relation
of the overall system) is defined in terms of asynchronous
composition of multiple instantiations of its sub-module
Airplane. The specification in Figure 2 specifies a
system with more than two airplane processes. In Figure 2
only asynchronous composition is used.
Each atomic action in Action Language defines a single
execution step. In an action expression for an action a,
primed (or range) variables, rvar(a), denote the next-state
values for the variables and unprimed (or domain) variables,
dvar(a), denote the current-state values. For instance, action
exitRW3 in module Airplane indicates that when an arriving
airplane is in the touch-down state (pc=touchDown) if
taxiway C3 is available (numC3=0) then in the next state runway
16R will have one less airplane (numRW16R'=numRW16R-1)
and taxiway C3 will be used by one more airplane
(numC3'=numC3+1) and the airplane will be in state
pc'=taxiTo16LC3. Note that an airplane taxiing on taxiway C3
crosses runway 16L on its route and continues on
taxiway B2 (see Figure 1).
Asynchronous composition of two actions is defined as the
disjunction of their transition relations. However, we also
assume that an action preserves the values of the variables
which are not modified by itself. Formally, we extend the action
expression exp(a1), for action a1, by conjoining it with
a frame constraint: exp'(a1) ≡ exp(a1) ∧ (∧_{v ∈ var \ rvar(a1)} v' = v), where var
denotes the set of variables in scope and \ denotes set difference. Similarly, we extend the expression
for a2, exp(a2), to exp'(a2). Then, we define exp(a1 | a2)
as exp'(a1) ∨ exp'(a2).
Asynchronous composition of two modules is defined similarly.
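For instance (our own illustration, continuing the exitRW3 action above), the frame constraint simply adds an equality v' = v for every variable that the action does not assign, i.e., every variable other than pc, numC3 and numRW16R:

    exp'(exitRW3) ≡ exp(exitRW3) ∧ numRW16L' = numRW16L ∧ numC4' = numC4 ∧ ...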
In
Figure
2, the monitor invariants that we expect the system
to satisfy are written using the spec, invariant and next keywords
at the end of the main module. In Action Language
keywords invariant, eventually and next are synonyms
for CTL operators AG, AF, and AX, respectively.
The specification given in Figure 2 specifies a solution to
the Airport Ground Traffic Control without specifying the
details about the implementation of the monitor. It is a high
level specification compared to a monitor implementation,
in the sense that, it does not introduce condition variables
and waiting and signaling operations which are error prone.
We give a general template for specifying monitors in Action
Language in Figure 3. It consists of a main module m
and a list of submodules m1, ..., mn. The variables of the
main module (denoted var(m)) define the shared variables
of the monitor specification. Currently, available variable
types in Action Language are boolean, enumerated and inte-
ger. This restriction comes from the symbolic manipulation
module main()
integer numRW16R, numRW16L, numC3, ...;
initial: numRW16R=0 and numRW16L=0 and numC3=0 and ...;
restrict: numRW16R>=0 and numRW16L>=0 and numC3>=0 and ...;
module Airplane()
enumerated pc {arFlow, touchDown, parked, depFlow,
taxiTo16LC3, taxiTo16LC4, taxiTo16LC5, taxiTo16LC6,
taxiTo16LC7, taxiTo16LC8, taxiFr16LB2, taxiFr16LB7,
taxiFr16LB9, taxiFr16LB10, taxiFr16LB11, takeOff};
initial: pc=arFlow or pc=parked;
reqLand: pc=arFlow and numRW16R=0 and pc'=touchDown
and numRW16R'=numRW16R+1;
exitRW3: pc=touchDown and numC3=0 and numC3'=numC3+1
and numRW16R'=numRW16R-1 and pc'=taxiTo16LC3;
crossRW3: pc=taxiTo16LC3 and numRW16L=0 and numB2A=0
and pc'=taxiFr16LB2 and numC3'=numC3-1 and numB2A'=numB2A+1;
reqTakeOff: pc=parked and numRW16L=0 and numC3=0
and numC4=0 and numC5=0 and numC6=0 and
numC7=0 and numC8=0 and pc'=takeOff and numRW16L'=numRW16L+1;
leave: pc=takeOff and pc'=depFlow and numRW16L'=numRW16L-1;
Airplane: reqLand | exitRW3 | crossRW3 | ...;
endmodule
spec: invariant(numRW16R<=1 and numRW16L<=1)
spec: invariant(numC3<=1)
spec: invariant(next(numRW16L=0) or not(numRW16L=0 and numC3+numC4+numC5+numC6+numC7+numC8>0))
endmodule
Figure
2: Action Language Specification of the Airport
Ground Traffic Control Case Study
capabilities of the Action Language Verifier (which can be
extended as we discuss in [15]). We also allow declaration of
parameterized constants. For example, a declaration such as
parameterized integer size would mean that size is an
unspecified integer constant, i.e., when a specification with
such a constant is verified it is verified for all possible values
of size.
Each submodule m i corresponds to a process type, i.e.,
each instantiation of a submodule corresponds to a process.
Each submodule m i has a set of local variables (var(m i ))
and atomic actions (act(m i )). Note that, in a monitor specification
our goal is to model only the behavior of a process
that is relevant to the properties of the monitor. Therefore,
local variables var(m i ) of a submodule should only include
the variables that are relevant to the correctness of the mon-
itor. The transition relation of a submodule is defined as the
asynchronous execution of its atomic actions.
To simplify the abstraction and code generation
algorithms we will present in the following sections we restrict
the form of action expressions as follows: Given an
action a ∈ act(m_i), exp(a) can be written as

exp(a) ≡ d_l(a) ∧ r_l(a) ∧ d_s(a) ∧ r_s(a)

where d_l(a) is an expression on unprimed local variables of
module m_i, r_l(a) is an expression on primed and
unprimed local variables of m_i, d_s(a) is an expression on unprimed
shared variables (var(m)), and r_s(a) is an expression
module m()
  integer ...; boolean ...; enumerated ...;
  parameterized integer ...;
  restrict: restrictCondition;
  initial: initialCondition;
  module m_1()
    integer ...;
    boolean ...;
    enumerated ...;
    restrict: ...;
    initial: ...;
    a_1: ...;
    ...
    a_n1: ...;
    m_1: a_1 | ... | a_n1;
  endmodule
  ...
  module m_n()
    ...
  endmodule
  m: m_1() | ... | m_n();
  spec: monitorInvariant
endmodule
Figure
3: A Monitor Template in Action Language
on primed and unprimed shared variables. For example, for
the action exitRW3 in Figure 2:

d_l(a) ≡ pc=touchDown
r_l(a) ≡ pc'=taxiTo16LC3
d_s(a) ≡ numC3=0
r_s(a) ≡ numC3'=numC3+1 ∧ numRW16R'=numRW16R-1
In the template given in Figure 3, transition relation of the
main module m is defined as the asynchronous composition
of its submodules, which defines the behavior of the overall
system.
5. VERIFICATION OF MONITOR SPECIFICATIONS
Action Language Verifier [6] consists of 1) a compiler that
converts Action Language specifications to composite symbolic
representations, and 2) an infinite-state symbolic model
checker which verifies (or falsifies) CTL properties of Action
Language specifications. Action Language compiler translates
an Action Language specification to a transition system
that consists of a state space S, a set of initial
states I ⊆ S, and a transition relation R ⊆ S × S. Unlike the
common practices in model checking, S can be infinite and
R may not be total (i.e., there may be states s ∈ S for which
there does not exist an s' such that (s, s') ∈ R). For the infinite
state systems that can be specified in Action Language,
model checking is undecidable. Action Language Verifier
uses several heuristics to achieve convergence such as approximations
based on truncated fixpoint computations and
widening, loop-closures and approximate reachability anal-
ysis. Since we also allow non-total transition systems, some
fixpoint computations have to be modified [6].
For the monitor model given in Figure 3, the state space S
is obtained by taking the Cartesian product of the domains
of the shared variables of the main module (var(m)) and the
domains of the local variables of each submodule (var(m_i))
for each instantiation. The transition relation R of the
overall system is defined as

R ≡ ∨_{i,j,k} r_{ijk}
where r ijk corresponds to the action expression of action ak
in instantiation j of module m i . Action Language parser
renames local variables of each submodule m i for each instantiation
j to obtain r ijk 's. Also, as explained above, action
expressions are automatically transformed by the Action
Language parser by adding the frame constraints (un-
modified variables preserve their value). Initial states of the
system are defined as

I ≡ I_m ∧ (∧_{i,j} I_{ij})
where Im denotes the initial condition for main module m,
and I ij denotes the initial condition for instantiation j of
module m i .
Composite Symbolic Library [15] is the symbolic manipulator
used by the Action Language Verifier. It combines
different symbolic representations using the composite model
checking approach [5]. Generally, model checking tools have
been built using a single symbolic representation such as
BDDs [17] or polyhedra [2]. A composite model checker
combines different symbolic representations to improve both
the efficiency and the expressiveness of model checking. Our
current implementation of the Composite Symbolic Library
uses two basic symbolic representations: BDDs (for boolean
and enumerated variables) and polyhedral representation
(for integers). Since Composite Symbolic Library uses an
object-oriented design, Action Language Verifier is poly-
morphic. It can dynamically select symbolic representations
provided by the Composite Symbolic Library based on the
variable types in the input specification. For example, if
there are no integer variables in the input specification Action
Language Verifier becomes a BDD-based model checker.
To analyze a system using Composite Symbolic Library,
one has to specify its initial condition, transition relation,
and state space using a set of composite formulas. A composite
formula is obtained by combining integer arithmetic
formulas on integer variables with boolean variables using
logical connectives. Enumerated variables are mapped to
boolean variables by the Action Language parser. Since
integer representation in the Composite Symbolic Library
currently supports only Presburger arithmetic, we restrict
arithmetic operators to + and −. However, we allow multiplication
with a constant and quantification.
A composite formula, p, is represented in disjunctive normal
form as

p ≡ ∨_{i=1..n} (∧_{t=1..T} p_{it})

where p_{it} denotes the formula of basic symbolic representation
type t in the ith disjunct, and n and T denote the
number of disjuncts and the number of basic symbolic representation
types, respectively. Our Composite Symbolic
Library implements methods such as intersection, union,
complement, satisfiability check, subset test, which manipulate
composite representations in the above form. These
methods in turn call the related methods of basic symbolic
representations.
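As a small illustration (ours, not taken from the paper), a composite formula over integers x and y and a boolean b might be stored as:

    p ≡ (x ≥ 0 ∧ x = y + 1 ∧ b) ∨ (x + y ≤ 10 ∧ ¬b)

Here n = 2 disjuncts; within each disjunct the arithmetic conjuncts would be kept in the polyhedral (Presburger) representation and the boolean conjunct in a BDD.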
Action Language Verifier iteratively computes the
fixpoints for the temporal operators using the symbolic operations
provided by the Composite Symbolic Library. Action
Language Verifier uses techniques such as truncated
fixpoints, widening and collapsing operators to compute approximations
of the divergent fixpoint computations [6].
However, Action Language Verifier does not generate false
negatives or false positives. It either verifies a property or
generates a counter-example or reports that the analysis is
inconclusive. This is achieved by using appropriate type of
approximations for the fixpoints (lower or upper approxima-
tion) based on the temporal property and the type of the
input query (which could be verify or falsify).
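As a rough sketch of the kind of computation involved (the interfaces below are our own stand-ins, not the verifier's actual API), an invariant AG(q) can be checked by a backward least-fixpoint over symbolic regions:

    interface Region {
        Region union(Region other);
        boolean subsetOf(Region other);
        boolean intersects(Region other);
    }
    interface TransitionRelation {
        Region pre(Region states);    // predecessors of a set of states
    }

    final class InvariantChecker {
        // AG(q) holds iff no initial state can reach bad = not(q),
        // i.e. init does not intersect pre*(bad).
        static boolean holdsAG(Region init, TransitionRelation trans, Region bad) {
            Region reach = bad;
            while (true) {
                Region next = reach.union(trans.pre(reach));
                if (next.subsetOf(reach)) break;   // fixpoint reached
                reach = next;
            }
            return !init.intersects(reach);
        }
    }

For infinite-state specifications this loop need not terminate, which is exactly why the verifier resorts to widening and truncated fixpoints and may report an inconclusive result.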
5.1 Counting Abstraction
In the Action Language template for monitor specifications
(Figure
3), each submodule is instantiated a fixed
number of times. This means that the specified system has
a fixed number of processes. For example, the specification
in Figure 2 describes a system with a specific number
of airplane processes and hence, any verification result obtained
for this specification is only guaranteed for a system
with a specific number of airplane processes. In this section
we will present the adaptation of an automated abstraction
technique called counting abstraction [9] to verification of
monitor specifications in Action Language. Using counting
abstraction one can automatically verify the properties of a
monitor model for arbitrary number of processes. The basic
idea is to define an abstract transition system in which
the local states of the processes are abstracted away but
the number of processes in each local state is counted by
introducing a new integer variable for each local state. For
this abstraction technique to work we need local states of
submodules to be finite. For example, if a submodule has
a local variable that is an unbounded integer, we cannot
use the counting abstraction. Note that, shared variables
(i.e. var(m)) can still be unbounded since they are not
abstracted away.
Consider the specification in Figure 2. Each Airplane
process has 16 local states. Note that, in the general case
(Figure
3), each local state corresponds to a valuation of all
the local variables of a submodule, i.e., the set of local states
of a submodule is the Cartesian product of the domains of
the local variables of that submodule. Given a module m_i,
let S_i be the set of all possible valuations of its local variables,
i.e., the set of local states of m_i. In
the counting abstraction, we introduce a nonnegative integer
variable to represent each local state of each submodule.
I.e., for each submodule m i and for each local state s 2 S i of
module m i we declare a nonnegative integer variable i s . For
example, for the specification in Figure 2, we introduce
16 integer variables for the Airplane submodule: arFlowC for
state pc=arFlow and depFlowC for state pc=depFlow. These
variables will represent the number of processes that are in
the local state that corresponds to them. For example, if
arFlowC is 2 in the abstract system, this will imply that
there are 2 processes in the corresponding states of the concrete
system where pc=arFlow holds. Note that, there could
be more than one concrete state that corresponds to an abstract
state.
Once we defined the mapping between the states of the
concrete system and the abstract system, next thing to do
is to define the abstract transition relation, i.e., to translate
the actions of the original system to the actions of the
abstract system. Consider action exitRW3 in Figure 2. To
translate this action to an action on the abstract system we
only have to change the part of the expression using the current
and next state local variables (i.e., pc=touchDown and
pc'=taxiTo16L3). The part of the expression on current and
next state shared variables (i.e., numC3=0 and
numRW16R'=numRW16R-1 and numC3'=numC3+1) will remain
the same. Since we restricted all local variables to be fi-
nite, without loss of generality, we can assume that all local
variables are boolean variables (as we discussed in Section
5 Action Language Verifier translates enumerated variables
to boolean variables). As we stated in Section 4 we assume
that the action expression is in the form:
exp(a) ≡ d_l(a) ∧ r_l(a) ∧ d_s(a) ∧ r_s(a)

where d_l(a) is a boolean logic formula on local domain variables
and r_l(a) is a boolean logic formula on local range
variables. Since we are assuming that local variables can
only be boolean, we do not need to have domain variables
in r l (a). We can transform any action expression to a set of
equivalent action expressions in this form by splitting disjuncts
that involve both local and shared variables. For the
action exitRW3 d l is pc=touchDown and r l is
pc'=taxiTo16L3.
Let S_d ⊆ S_i be the set of local states of the module m_i
which satisfy expression d_l. We translate d_l to an expression
on the variables of the abstract transition system by
generating the expression

d_l^a ≡ ∨_{s ∈ S_d} i_s > 0

where i_s is the integer variable that represents the local state
s. Note that d_l^a indicates that there exists a process which
is in a local state that satisfies d_l. For exitRW3 the formula
we obtain is simply touchDownC>0, i.e., there exists a process
in the local state pc=touchDown.
Let S_r ⊆ S_i be the set of local states of the module m_i which
satisfy expression r_l. We translate r_l to an expression on the
variables of the abstract transition system by generating the
expression

r_l^a ≡ (∨_{s ∈ S_d, t ∈ S_r, s ≠ t} (i_s' = i_s - 1 ∧ i_t' = i_t + 1)) ∨ (∨_{s ∈ S_d ∩ S_r} i_s' = i_s)
The first disjunction enumerates all possible local current
and next state pairs for the action and updates the
counters accordingly. The second disjunct takes into account
the cases where the local state of the process does not
change. For the action exitRW3 we obtain the following ex-
pression: touchDownC'=touchDownC-1 and taxiTo16L3C'=
taxiTo16L3C+1. Then, the abstraction of action exitRW3
is:
exitRW3: touchDownC>0 and numC3=0 and numC3'=numC3+1
and numRW16R'=numRW16R-1 and touchDownC'=touchDownC-1
and taxiTo16L3C'=taxiTo16L3C+1;
After generating the abstract state-space and the abstract
transition relation, last component of the abstraction is to
translate the initial states. First, for each submodule m i
in
Figure
3 we declare a nonnegative parameterized integer
constant num m i which denotes the number of instances
of module m i . By declaring this constant parameterized
we guarantee that the verified properties will hold for all
possible number of instantiations of each submodule. Let
init i denote the local initial expression of a submodule m i
and let S init denote the set of local states of the module m i
which satisfy expression init i . Then we add the following
constraint to the initial expression of the abstract transition
system:

init_i^a ≡ (Σ_{s ∈ S_init} i_s) = num_m_i ∧ (∧_{s ∈ S_i \ S_init} i_s = 0)
For the specification in Figure 2 we create a nonnegative parameterized
integer constant numAirplane. Using the initial
condition of the submodule Airplane, which is pc=arFlow
or pc=parked, we obtain the following constraint
arFlowC+parkedC=numAirplane and touchDownC=0 and taxiTo16LC3C=0 and taxiTo16LC4C=0 and ...
and replace the initial constraint of Airplane submodule
with this new constraint.
One can show that the monitor invariants verified on the
abstract specification generated by the counting abstraction
are also satisfied by the original monitor specification for arbitrary
number of processes. This can be shown by defining
an abstraction function between the state space of the original
specification and the state space of the abstract specification
generated by the counting abstraction [10].
for each submodule m_i in m do
  Declare a parameterized integer num_m_i in m
  Add num_m_i >= 0 to the restrict expression of m
  Remove local variable declarations of module m_i
  for each local state s ∈ S_i of module m_i do
    Declare an integer variable i_s in m_i
    Add i_s >= 0 to the restrict expression of m_i
  Replace initial expression init_i of module m_i with
    (Σ_{s ∈ S_init} i_s) = num_m_i ∧ (∧_{s ∈ S_i \ S_init} i_s = 0),
    where S_init is the set of local states of m_i which satisfy init_i
  for each action a in module m_i do
    Replace d_l(a) with ∨_{s ∈ S_d} i_s > 0,
      where S_d is the set of local states of m_i that satisfy d_l(a)
    Replace r_l(a) with
      (∨_{s ∈ S_d, t ∈ S_r, s ≠ t} (i_s' = i_s - 1 ∧ i_t' = i_t + 1)) ∨ (∨_{s ∈ S_d ∩ S_r} i_s' = i_s),
      where S_r is the set of local states of m_i that satisfy r_l(a)
Figure
4: Algorithm for counting abstraction
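The local-state translation of Figure 4 is mechanical; the following sketch (our own hypothetical code, using the counter naming convention from the text, e.g. touchDownC for the state pc=touchDown) generates the abstract guard and update expressions for a single action:

    import java.util.*;

    public class CountingAbstraction {

        static String counter(String state) { return state + "C"; }

        // d_l(a) becomes a disjunction asserting that some state in S_d is occupied
        static String abstractGuard(Set<String> sd) {
            List<String> parts = new ArrayList<>();
            for (String s : sd) parts.add(counter(s) + ">0");
            return String.join(" or ", parts);
        }

        // r_l(a) becomes one disjunct per (current, next) local-state pair:
        // move a process from s to t, or leave the counter unchanged when s = t
        static String abstractUpdate(Set<String> sd, Set<String> sr) {
            List<String> parts = new ArrayList<>();
            for (String s : sd)
                for (String t : sr)
                    if (s.equals(t))
                        parts.add(counter(s) + "'=" + counter(s));
                    else
                        parts.add(counter(s) + "'=" + counter(s) + "-1 and "
                                + counter(t) + "'=" + counter(t) + "+1");
            return String.join(" or ", parts);
        }

        public static void main(String[] args) {
            // exitRW3: d_l is pc=touchDown, r_l is pc'=taxiTo16LC3
            System.out.println(abstractGuard(Set.of("touchDown")));
            // prints: touchDownC>0
            System.out.println(abstractUpdate(Set.of("touchDown"), Set.of("taxiTo16LC3")));
            // prints: touchDownC'=touchDownC-1 and taxiTo16LC3C'=taxiTo16LC3C+1
        }
    }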
5.2 Experimental Results
Table
1 shows the performance of the Action Language
Verifier for the Airport Ground Traffic Control monitor specification
given in Figure 2. In the first column we denote
the total number of processes in the specification. For ex-
ample, the results from the first row are for the specification
Table
1: Verification Results For Airport Ground
Traffic Control Specification. xA (xD) denotes x
many arriving (departing) airplane processes and
x=P denotes arbitrary number of airplane pro-
cesses. P1, P2, and P3 are the properties given
in
Figure
2. CT and VT denote transition system
construction time and property verification time (in
seconds), respectively.
8A,PD 3.95 2.28 1.54 2.59
PA,2D 1.67 1.31 0.88 3.94
PA,4D 3.15 2.42 1.71 5.09
PA,8D 6.40 4.64 3.32 7.35
in Figure 2 with a fixed number of arriving and departing airplane
processes. CT denotes the
time spent in constructing the composite symbolic representations
for the transition relation and the initial states
of the input specification (including the parsing time).
VT denotes the verification time for each property. Although
the input is an infinite state system (since numC3,
numRW16R, and numRW16L are unbounded variables) the verification
time scales very well. This is due to the efficiency
of the composite symbolic representation and the BDDs. If
we had partitioned the transition system to eliminate the
boolean variables (as is done in most infinite state model
checkers) we would have obtained 2^64 partition classes for
the fourth instance in Table 1. Mapping the boolean variables
to integers on the other hand would have created 64
more integer variables, increasing the cost of arithmetic constraint
manipulation (which is not likely to scale as well as
BDDs).
We used counting abstraction to verify the Airport Ground
Traffic Control monitor specification for arbitrary number
of arriving and departing airplanes. First we verified specifications
for a fixed number of arriving airplanes and an
arbitrary number of departing airplanes by using counting
abstraction only on the states of departing airplanes. For
example, the row 4A,PD in Table 1, denotes the case with
4 arriving airplanes and an arbitrary number of departing
airplanes. Although counting abstraction generates an integer
variable for each local state of an airplane process, the
experimental results in Table 1 shows that it scales well.
In fact, the case where both states of the arriving and the
departing airplanes are abstracted (PA,PD) properties are
verified faster compared to some other cases. This is possibly
due to the fact that counting abstraction, in a way,
simplifies the system by abstracting away the information
about individual processes. For example, in the abstract
transition system it is not possible to determine which airplane
is in which state, we can only keep track of the number
of airplanes in a particular state.
We verified a large number of concurrent system specifications
using Action Language Verifier including other monitor
specifications such as monitors for sleeping barber prob-
lem, readers-writers problem and bounded buffers. Our experimental
results are reported in [20].
6. SYNTHESIS OF MONITORS
In the monitor specification given in Figure 2, the shared
variables such as numRW16R and numC3 represent the resources
that will be shared among multiple processes. Submodule
Airplane specify the type of processes that will share these
resources. Our goal is to generate a monitor class in Java
from monitor specifications such as the one given in Figure
2. First, we will declare the shared variables of the monitor
specification (for example, numRW16R and numC3 in Figure 2)
as private fields of the monitor class. Hence, these variables
will only be accessible to the methods of the monitor class.
We will not try to automatically generate code for the
threads that will use the monitor. This would go against the
modularization principle provided by the monitors. Rather,
we will leave the assumption that the threads behave according
to their specification as a proof obligation. In general, a
submodule in a monitor specification (Figure 3) should specify
the most general behavior of the corresponding thread,
or, equivalently, it should specify the minimum requirements
for the corresponding thread for the monitor to execute
correctly. Since the specifications about the local behavior
of the threads are generally straightforward (such as an
Airplane process should not execute exitRW3 action before
executing reqLand) we think that it would not be too difficult
for the concurrent programmer to take the responsibility
for meeting these specifications.
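For example, an arriving-airplane thread discharges this proof obligation simply by calling the generated monitor methods in the order prescribed by the Airplane submodule (an illustrative sketch of ours; the monitor class and its crossRW3 method are the ones generated in Section 6.1 and Figure 5 below):

    class ArrivingAirplane extends Thread {
        private final AirPortGroundTrafficControl control;

        ArrivingAirplane(AirPortGroundTrafficControl control) { this.control = control; }

        public void run() {
            control.reqLand();    // land on 16R when it is free
            control.exitRW3();    // take exit C3 when it is free
            control.crossRW3();   // cross 16L when no airplane is taking off
            // ... taxi to the gate and park ...
        }
    }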
We will generate a monitor method corresponding to each
action of each submodule in the monitor specification. Consider
the action:
exitRW3: pc=touchDown and numC3=0 and numC3'=numC3+1
and numRW16R'=numRW16R-1 and pc'=taxiTo16L3;
We are not interested in the expressions on local variable
pc of the submodule Airplane. As we discussed above, we
are only generating code for the monitor class which only
has access to the shared variables. For the action exitRW3
removing the expressions on the local variables leaves us
with the expression
exitRW3: numC3=0 and numC3'=numC3+1 and numRW16R'=numRW16R-1;
To implement this action as a monitor method we first have
to check the guarding condition numC3=0 and then update
numRW16R and numC3. However, if the guarding condition
does not hold, we should wait until a process signals that
the condition might have changed. A straightforward translation
of this action to a monitor method would be
public synchronized void exitRW3() {
    while (!(numC3 == 0)) {
        try { wait(); } catch (InterruptedException e) {}
    }
    numC3 = numC3 + 1;
    numRW16R = numRW16R - 1;
    notifyAll();
}
The reason we call the notifyAll method at the end is to
wakeup processes that might be waiting on a condition related
to variable numRW16R or numC3 which have just been
updated by this action. Also note that the wait method is
inside a while loop to make sure that the guard still holds after
the thread wakes up. In the above translation, we used
synchronized keyword to establish atomicity. Note that
atomicity in Java is established only with respect to other
methods or blocks which are also synchronized. So for this
approach to work we have to make sure that shared variables
are not modified by any part of the program which is
not synchronized. We can establish this by declaring shared
variables as private variables in the monitor class and making
sure that all the methods of the monitor class are synchronized.
Using this straightforward approach, we can translate a
monitor specification (based on the template given in Figure
3) to a Java monitor class using the following rules: 1) Create
a monitor class with a private variable for each shared
variable of the specification. 2) For each action in each sub-
module, create a synchronized method in the monitor class.
In the method for action a start with a while loop which
checks if ds(a) is true and waits if it is not. Then, put a
set of assignments to update the variables according to the
constraint in rs(a). After the assignments, call notifyAll
method and exit. We will call this translation single-lock
implementation of the monitor since it uses only this lock of
the monitor class.
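Applied to the airport example, the single-lock translation yields code along the following lines (a sketch of ours showing only two of the generated methods):

    public class SingleLockAirportControl {
        // shared variables are private; every method is synchronized on this
        private int numRW16R = 0, numRW16L = 0;

        public synchronized void reqLand() {          // guarded action
            while (!(numRW16R == 0)) {
                try { wait(); } catch (InterruptedException e) {}
            }
            numRW16R = numRW16R + 1;
            notifyAll();
        }

        public synchronized void leave() {            // unguarded action
            numRW16L = numRW16L - 1;
            notifyAll();
        }
        // ... remaining actions are translated in the same way ...
    }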
6.1 Specific Notification Pattern
The single-lock implementation described above is correct
but it is inefficient [7, 18]. If we implement the Airport
Ground Traffic Control monitor using the above scheme an
exitRW3 action would awaken all the airplane threads that
are sleeping. However, departing airplane threads should
be awakened only when the number of airplanes on runway
16L or one of the taxiways in C3-C8 changes (when one of
the variables numRW16L, numC3, numC4, numC5, numC6, numC7,
and numC8 become zero) and they do not need to be awakened
on an update to status of runway 16R (when numRW16R
is updated) or on entrance of an airplane into the taxiway
C3 (when numC3 is incremented). Using different condition
variables for each guarding condition improves the performance
by awakening only related threads and eliminating
the overhead incurred by context switch for threads which
do not need to be awakened. In [7] using separate objects
to wait and signal for separate conditions is described as a
Java design pattern called specific notification pattern.
Figure
5 shows a fragment of the Java monitor that is automatically
generated by our code generator from the Action
Language specification of the Airport Ground Traffic Control
monitor given in Figure 2 using specific notification pat-
tern. The method for action exitRW3 calls Guard exitRW3
method in a while loop till it returns true. If it returns false
it waits on the condition variable exitRW3Cond. Any action
that can change the guard for exitRW3 from false to true
notifies the threads that are waiting on condition variable
exitRW3Cond using exitRW3Cond. If the guard (numC3==0)
is true then Guard exitRW3 method decrements the number
of airplanes using runway 16R (numRW16R=numRW16R-1)
and increments the number of airplanes using taxiway C3
atomically and returns true. Since executing exitRW3 can
only change action reqLand's guard from false to true, only
threads that are waiting on condition variable reqLandCond
are notified before method exitRW3 returns.
Action leave does not have a guard, i.e., its execution
does not depend on the state of shared variables of the mon-
itor. Hence, the method for action leave does not need to
wait to decrement the number of airplanes on runway 16L
(numRW16L=numRW16L-1). After updating numRW16L however
it notifies the threads waiting on the condition variables
crossRW3Cond and reqTakeOffCond.
We will give an algorithm for generating Java code from
monitor specifications in Action Language using specific notification
pattern below. As stated before, we will assume
that each action expression is in the form:
exp(a) ≡ d_l(a) ∧ r_l(a) ∧ d_s(a) ∧ r_s(a)

where d_l(a) is an expression on unprimed local variables of
module m_i, r_l(a) is an expression on primed and
unprimed local variables of m_i, d_s(a) is an expression on unprimed
shared variables (var(m)), and r_s(a) is an expression
on primed and unprimed shared variables. Since we are not
interested in the local states of the processes we will only
use ds(a) and rs(a) in the code generation for the monitor
methods. Let guarda denote a Java expression equivalent to
d_s(a). We will also assume that r_s(a) can be written in the
form ∧_{v ∈ var(m)} v' = e_v, where e_v is an expression
on domain variables in var(m). Let assigna denote a set of
assignments in Java which correspond to rs(a).
To use the specific notification pattern in translating Action
Language monitor specifications to Java monitors we
need to associate the guard of each action with a lock specific
to that action. Let a be an action with a guard, guarda ,
and let conda be the condition variable associated with a.
The thread that calls the method that corresponds to action
a will wait on conda when guarda evaluates to false. Any
thread that calls a method that corresponds to another ac-
tion, b, that can change the truth value of guarda from false
to true will notify conda . Hence, after an action execution,
only the threads that are relevant to the updates performed
by that action will be awakened.
The algorithm given in Figure 6 generates the information
about the synchronization dependencies among different actions
needed in the implementation of the specific notification
pattern. For each action a in each submodule m i the algorithm
decides whether action a is guarded or unguarded by
checking the expression ds(a). If ds(a) is true (meaning that
there is no guard) then the action is marked as unguarded.
Otherwise, it is marked as guarded and a condition variable,
conda , is created for action a. Execution of an unguarded
action does not depend on the shared variables, hence, it
does not need to wait on any condition variable. Next, the
algorithm finds all the actions that should be notified after
action a is executed. We can determine this information by
checking for each action b 6= a, if executing action a when
ds(b) is false can result in a state where ds (b) is true. If
this is possible, then the condition variable created for action
b, cond b , is added to the notification list of action a,
which holds the condition variables that must be notified
after action a is executed.
Figure
7 shows translation of guarded and unguarded actions
to Java [18]. For each guarded action a a specific
notification lock, conda is declared and one private method
and one public method is generated. The private method
Guarded Executea is synchronized on this object. If the
guard of action a is true then this method executes assignments
in assigna and returns true. Otherwise, it returns
false. Method Guarded W aita first gets the lock for conda .
public class AirPortGroundTrafficControl {
  private int numC3;
  private int numRW16L, numRW16R;
  private Object exitRW3Cond;
  private Object reqTakeOffCond;
  private Object reqLandCond;
  private Object crossRW3Cond;
  public AirPortGroundTrafficControl() {
    exitRW3Cond = new Object();
    reqTakeOffCond = new Object();
    reqLandCond = new Object();
    crossRW3Cond = new Object();
  }
  private synchronized boolean Guard_reqLand() {
    if (numRW16R==0) { numRW16R=numRW16R+1; return true; }
    else return false;
  }
  public void reqLand() {
    synchronized(reqLandCond) {
      while (!Guard_reqLand()) {
        try { reqLandCond.wait(); }
        catch(InterruptedException e) {}
      }
    }
  }
  private synchronized boolean Guard_exitRW3() {
    if (numC3==0) { numC3=numC3+1; numRW16R=numRW16R-1; return true; }
    else return false;
  }
  public void exitRW3() {
    synchronized(exitRW3Cond) {
      while (!Guard_exitRW3()) {
        try { exitRW3Cond.wait(); }
        catch(InterruptedException e) {}
      }
    }
    synchronized(reqLandCond) { reqLandCond.notify(); }
  }
  public void leave() {
    synchronized(this) { numRW16L=numRW16L-1; }
    synchronized(crossRW3Cond) { crossRW3Cond.notify(); }
    synchronized(reqTakeOffCond) { reqTakeOffCond.notify(); }
    // other notifications
  }
  // ... (remaining fields and methods omitted)
}
Figure
5: AirportGroundTrafficControl Class Using
specific notification pattern
for each action a do
  if d_s(a) ≠ true then
    mark a as guarded
    create condition variable cond_a
  else mark a as unguarded
  for each action b s.t. b ≠ a do
    if post(¬d_s(b), exp(a)) ∧ d_s(b) is satisfiable then
      add cond_b to notification list of a

Figure 6: Extracting Synchronization Information
private Object cond_a = new Object();

public void Guarded_Wait_a() {
  synchronized (cond_a) {
    while (!Guarded_Execute_a()) {
      try { cond_a.wait(); }
      catch (InterruptedException e) {}
    }
  }
}

private synchronized boolean Guarded_Execute_a() {
  if (guard_a) { assign_a; return true; }
  else return false;
}
(a)

public void Execute_a() { synchronized (this) { assign_a; } }
(b)

Figure 7: Translation of (a) guarded and (b) unguarded actions
Then it runs a while loop until the Guarded_Execute_a method returns true. In the body of the while loop it waits on cond_a until it is notified by some thread that performs an update that can change the truth value of guard_a and, therefore, of Guarded_Execute_a. For each unguarded action a, a single public method Execute_a is produced. This method first acquires the lock for this object and then executes the assignments assign_a of the corresponding action. Before exiting the public methods Guarded_Wait_a and Execute_a, cond_b.notify() is executed for each action b in the notification list of action a (note that this is not shown in Figure 7).
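The notification step that Figure 7 leaves out can be pictured as follows. This is only an illustrative sketch: the helper name notifyDependents_a and the array dependentConds_a are our own, standing for the statically computed notification list of action a.

// Condition variables of the actions whose guards may be enabled by action a
// (i.e., the notification list computed by the algorithm of Figure 6).
private final Object[] dependentConds_a = { /* cond_b for each such action b */ };

// Executed just before returning from Guarded_Wait_a or Execute_a.
private void notifyDependents_a() {
  for (Object cond : dependentConds_a) {
    synchronized (cond) {   // the notifying thread must own the lock of cond
      cond.notify();        // wake a thread waiting for this guard to become true
    }
  }
}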
The automatically generated Java monitor class should
preserve the verified properties of the Action Language spec-
ification. This can be shown in two steps: 1) Showing that
the verified properties are preserved by the single-lock implementation
of the Action Language specification. 2) Showing
the equivalence between the single-lock implementation and
the specific notification pattern implementation. The proof
of correctness of specific notification pattern (step 2) is given
in [18]. The algorithm we give in Figure 6 extracts the necessary
information in order to generate a Java monitor class
that correctly implements the specific notification pattern.
Below, we will give a set of assumptions under which the
monitor invariants that are verified on an Action Language
specification of a monitor are preserved by its single-lock
implementation as a Java monitor class.
1. Initial Condition: The set of program states immediately
after the constructors of the monitor and the
threads are executed satisfy the initial expression of
the Action Language specification.
2. Atomicity: The observable states of the monitor are
defined as the program states where this lock of the
monitor is available, i.e., the states where no thread is
active in the monitor.
3. Thread Behavior: The local behavior of the threads
are correct with respect to the monitor specification.
4. Scheduling: If there exists an enabled action then an
enabled action will be executed.
Assuming that the above conditions hold we claim that the
observable states of the single-lock implementation of the
Action Language monitor specification satisfy the monitor
invariants verified by the Action Language Verifier.
7. CONCLUSIONS AND FUTURE WORK
We think that our approach of combining specification,
verification and synthesis, presented in this paper, can provide
the right cost-benefit ratio for adaptation of automated
verification techniques in practice. Writing a monitor specification
has three major benefits: 1) It is a higher-level specification of a solution than a monitor implementation, since it eliminates the need for condition variables and wait and signal operations. 2) Action Language specifications can be verified with the Action Language Verifier. 3) Verified monitor specifications in Action Language can be automatically translated into Java monitor implementations, where the correctness of the implementation is guaranteed by construction.
We are working on the integration of the automated counting
abstraction algorithm to the Action Language Verifier.
We think that our approach is applicable to interesting,
real-world applications as demonstrated by our case study.
For our approach to be applicable to a wider range of sys-
tems, we would like to extend our techniques to systems
with boolean or integer arrays and recursive data structures
(such as linked lists). We are working on both of these prob-
lems. We think that we can provide some analysis for arrays
using uninterpreted functions. For analyzing specifications
with recursive data structures, we are currently integrating
the shape analysis technique [19] to Composite Symbolic
Library. Our verification tools Composite Symbolic Library
and Action Language Verifier are available at:
http://www.cs.ucsb.edu/~bultan/composite/
Acknowledgments
We would like to thank Aysu Betin for
her help in the implementation of the automated code generation
algorithm.
8.
--R
Statistical summary of commercial jet aircraft accidents
Automatic symbolic verification of embedded systems.
Concurrent programming
Action Language: A specification language for model checking reactive systems.
Action Language Verifier.
Specific notification for Java thread synchronization.
Automatic verification of parameterized cache coherence protocols.
A deadlock detection tool for concurrent Java programs.
Model checking Java programs using Java PathFinder.
An operating system structuring concept.
A library for composite symbolic representation.
Concurrent Programming in Java
Symbolic model checking.
A structured approach for developing concurrent programs in Java.
Shape analysis.
Heuristics for efficient manipulation of composite constraints.
Modeling of Airport Operations Using An Object-Oriented Approach
--TR
Concurrent programming
A structured approach for developing concurrent programs in Java
A deadlock detection tool for concurrent Java programs
Composite model-checking
Action Language
Bandera
Monitors
Symbolic Model Checking
Invariant-based specification, synthesis, and verification of synchronization in concurrent programs
Automatic Symbolic Verification of Embedded Systems
A Library for Composite Symbolic Representations
Heuristics for Efficient Manipulation of Composite Constraints
Constraint-Based Verification of Client-Server Protocols
Automatic Verification of Parameterized Cache Coherence Protocols
Shape Analysis
Action Language Verifier
--CTR
Robert J. Hall , Andrea Zisman, Model interchange and integration for web services, ACM SIGSOFT Software Engineering Notes, v.29 n.5, September 2004
Betin-Can , Tevfik Bultan , Mikael Lindvall , Benjamin Lux , Stefan Topp, Application of design for verification with concurrency controllers to air traffic control software, Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering, November 07-11, 2005, Long Beach, CA, USA
Aysu Betin Can , Tevfik Bultan , Mikael Lindvall , Benjamin Lux , Stefan Topp, Eliminating synchronization faults in air traffic control software via design for verification with concurrency controllers, Automated Software Engineering, v.14 n.2, p.129-178, June 2007 | concurrent programming;specification languages;infinite-state model checking;monitors |
566236 | Deterministic parallel backtrack search. | The backtrack search problem involves visiting all the nodes of an arbitrary binary tree given a pointer to its root subject to the constraint that the children of a node are revealed only after their parent is visited. We present a fast, deterministic backtrack search algorithm for a p-processor COMMON CRCW-PRAM, which visits any n-node tree of height h in time O((n/p+h)(log log log p)^2). This upper bound compares favourably with a natural Ω(n/p+h) lower bound for this problem. Our approach embodies novel, efficient techniques for dynamically assigning tree-nodes to processors to ensure that the work is shared equitably among them. | Introduction
Several algorithmic techniques, such as those employed for solving many optimization
problems, are based on the systematic exploration of a tree, whose
internal nodes correspond to partial solutions (growing progressively more refined with increasing depth) and whose leaves correspond to feasible solutions.
In this paper, we are concerned with the implementation of tree explorations
This research was supported, in part, by the EC ESPRIT III Basic Research
Project 9072-GEPPCOM; by the CNR of Italy under Grant CNR96.02538.CT07;
and by MURST of Italy under Project MOSAICO. The results in this paper appeared
in preliminary form in the Proceedings of the 23rd International Colloquium
on Automata, Languages and Programming, Paderborn, Germany, July 1996.
on shared-memory parallel machines. Specifically, we consider the backtrack search problem, which involves visiting all the nodes of a tree T subject to the constraints that (1) initially only the root of T is known to the processors, and (2) the children of a node are made known only after the node itself is visited. Moreover, the structure of T, its size n and its height h are unknown to the processors. We assume that a node can be visited (and its children revealed) in constant time. Since Ω(n) work is needed to visit n nodes and since any tree of height h contains a path of h nodes whose visit times must form a strictly increasing sequence, it follows that any algorithm for the backtrack search problem requires Ω(n/p + h) time on a p-processor machine.
A number of works on parallel backtrack search have appeared in the literature. Randomized algorithms have been developed for the completely-connected network of processors [KZ93,LAB93] and the butterfly network [Ran94], which run, optimally, in O(n/p + h) steps, with high probability. It should be noted, however, that the butterfly algorithm focuses on the number of "node-visiting" steps and does not fully account for overhead due to manipulations of local data structures. A deterministic algorithm is given in [KP94], which runs in O(√p · h) time on a √p × √p mesh, for trees of size O(p). It is not clear whether this latter algorithm can be extended to work for larger tree sizes. The relationship between computation and communication for the exploration of trees arising from irregular divide-and-conquer computations has been studied in [WK91]. A number of related problems have also been addressed in the literature, such as branch-and-bound [Ran90,KZ93,LAB93,KP94,HPP99a,HPP99b] and dynamic tree embeddings [AL91,BGLL91,LNRS92].
In this paper, we present a deterministic PRAM algorithm for backtrack search whose running time is within a triply-logarithmic factor of the natural lower bound discussed above. Our main result is summarized in the following theorem.

Theorem 1 There is a deterministic algorithm running on a p-processor COMMON CRCW-PRAM that performs backtrack search on any n-node bounded-degree tree of height h in O((n/p + h)(log log log p)^2) time, in the worst case.

Ours is the first efficient, deterministic PRAM algorithm that places no restrictions on the structure, size or height of the (bounded-degree) tree to which it is applied, and whose running time faithfully accounts for all costs. The algorithm performs an optimal number of O(n/p + h) parallel visiting steps, while the O((log log log p)^2) multiplicative factor in the running time captures the average overhead per step required to ensure that the workload is equitably distributed among the processors. As a consequence, our algorithm would become optimal if the cost of a node visit were Ω((log log log p)^2), as is likely to be the case in typical applications of backtrack search, where every node represents a complex subproblem to be solved.
The rest of the paper is organized as follows. Section 2 provides a number of basic definitions and discusses a simple, direct approach to backtrack search which our algorithm uses in combination with a more sophisticated strategy to attain efficiency. The high-level structure of our algorithm is described in Section 3, while Section 4 provides a detailed description of the key routine that performs node visits and load balancing. In Section 5 we argue the generality of our approach by discussing how it can be adapted to schedule straight-line computations represented by bounded-degree DAGs. Section 6 closes the paper with some final remarks.
Preliminaries
Our algorithm is designed for the COMMON CRCW PRAM model of com-
putation, which consists of p processors and a shared memory of unbounded
size. In a single step, each processor either performs a constant amount of
local computation or accesses an arbitrary cell of the shared memory. In the
COMMON CRCW variant of the PRAM, concurrent reads are permitted as
are concurrent writes, provided that all competing processors write the same
value [JaJ92].
Let T be the tree to be visited. For simplicity, we assume that the tree is
binary, although our results can be immediately extended to the more general
class of bounded-degree trees. For concreteness, we suppose that each node is
represented in memory by means of a descriptor. Initially, only the descriptor
of the root is available in the shared memory of the PRAM at a designated
location. The descriptor of any other node is generated only by accessing
the descriptor of the node's father. A visit to a node involves accessing its
descriptor, and generating and storing the descriptors of its children (if any).
As mentioned before, a node visit is assumed to take constant time.
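To make the setting concrete, a node descriptor could look as follows in an imperative rendering; this is purely illustrative, and the field problemState and the expansion rule are our own placeholders rather than anything prescribed by the paper.

// Illustrative descriptor: visiting a node means reading its descriptor and
// generating the descriptors of its (at most two) children, in constant time.
final class NodeDescriptor {
  final long problemState;                     // encodes the subproblem at this node

  NodeDescriptor(long problemState) { this.problemState = problemState; }

  // Constant-time visit: returns the children revealed by this visit
  // (an empty array for a leaf). The expansion rule is application-specific.
  NodeDescriptor[] visit() {
    if (problemState > 1_000_000L)             // placeholder leaf test
      return new NodeDescriptor[0];
    return new NodeDescriptor[] { new NodeDescriptor(2 * problemState),
                                  new NodeDescriptor(2 * problemState + 1) };
  }
}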
A straightforward strategy to solve the backtrack search problem is to visit
the tree in a breadth-first, level-by-level fashion. An algorithm based on such a strategy would proceed in phases, where each phase visits all the nodes at a certain level and evenly redistributes their children among the processors, to guarantee that the overall number of parallel visiting steps is at most n/p + h. (Here the term parallel visiting step refers to a k-tuple (k ≤ p) of simultaneous visits to distinct tree nodes performed by distinct PRAM processors.) A perfectly balanced redistribution of tree nodes among processors between successive parallel visiting steps can be accomplished deterministically using simple parallel prefix sums [JaJ92], yielding an O(n/p + h log p) overall running time for backtrack search. Note that this strategy also works for the weaker EREW PRAM variant, where concurrent read/write accesses are not allowed.
In fact, an asymptotically optimal number of Θ(n/p + h) visiting steps can still be achieved without perfect balancing, by requiring that the nodes at any level of the tree be only "approximately redistributed" among the processors, that is, the nodes a processor is given must be at most a constant factor more than what it would receive with perfect balancing. An approximate redistribution can be attained by using the following result by Goldberg and Zwick.

Fact 2 ([GZ95]) For an arbitrary sequence of p integer values a_0, a_1, ..., a_{p-1}, the approximate prefix sums b_0, b_1, ..., b_{p-1} can be determined in O(log log p) worst-case time on a p-processor COMMON CRCW-PRAM.
By employing the approximate prefix sums to implement node redistribution after visiting each level of the tree, we get a deterministic O(n/p + h log log p)-time algorithm for the backtrack search problem on a p-processor COMMON CRCW-PRAM, for any values of n, h and p.
In the next sections we devise a more sophisticated strategy which outperforms the above simple one for trees where n = o(p h log log p/(log log log p)^2). This asymptotic improvement results in near-optimal performance through careful "load-balancing" techniques without excessive global communication.
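For illustration, the simple strategy of this section can be rendered sequentially as below, reusing the NodeDescriptor sketch given earlier; the round-robin split is only a stand-in for the (approximate) prefix-sums redistribution performed on the PRAM.

import java.util.ArrayList;
import java.util.List;

// Level-by-level visit: each phase visits one level and redistributes the
// children among the p processors before the next phase starts.
final class LevelByLevelSketch {
  static void visitAll(NodeDescriptor root, int p) {
    List<NodeDescriptor> level = new ArrayList<>();
    level.add(root);
    while (!level.isEmpty()) {
      List<List<NodeDescriptor>> shares = redistribute(level, p);
      List<NodeDescriptor> next = new ArrayList<>();
      for (List<NodeDescriptor> share : shares)        // done in parallel on the PRAM
        for (NodeDescriptor node : share)
          for (NodeDescriptor child : node.visit())
            next.add(child);
      level = next;                                    // frontier for the next phase
    }
  }

  // Stand-in for the approximate-prefix-sums redistribution.
  static List<List<NodeDescriptor>> redistribute(List<NodeDescriptor> nodes, int p) {
    List<List<NodeDescriptor>> shares = new ArrayList<>();
    for (int i = 0; i < p; i++) shares.add(new ArrayList<>());
    for (int i = 0; i < nodes.size(); i++) shares.get(i % p).add(nodes.get(i));
    return shares;
  }
}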
3 A High-Level View of the Algorithm
Our algorithm proceeds in a quasi-breadth-first fashion. Let the tree nodes be partitioned into h levels, where the nodes of one level are all at the same distance from the root. The exploration process is split into stages, each of which visits a stratum of the tree consisting of ℓ = Θ(log log p) consecutive levels. At the beginning of a stage, all nodes at the top level of the stratum are (approximately) distributed among the processors. Note that the previously mentioned straightforward strategy based on approximate prefix sums visits any stratum of m = Ω(pℓ^2) nodes in optimal O(m/p) time. Therefore, we focus on techniques to cope efficiently with smaller strata.
Consider a stage visiting a stratum with m = O(pℓ^2) nodes. For convenience, we number the levels of the stratum from 0 to ℓ − 1, from top to bottom. The stage explores all the nodes in these levels. At any point during the exploration, the set of nodes whose descriptors have been generated but which are not yet visited is called the frontier. (The initial frontier contains all the nodes in the top level of the stratum.) Let F(j) denote those frontier nodes at level j, for 0 ≤ j < ℓ, and let F = ∪_{j=0}^{ℓ−1} F(j) denote the entire frontier. In order to evaluate the progress that the algorithm is making, we define a weight function on the frontier F as follows:

w(F) = Σ_{j=0}^{ℓ−1} |F(j)| · 3^{ℓ−j},

i.e., nodes at level j have weight 3^{ℓ−j}. Note that the contribution of a frontier node to w(F) is exponentially decreasing in its level within the stratum. Also, visiting a frontier node at level j involves replacing that node in the frontier by its children (if any), whose combined weight of at most 2 · 3^{ℓ−j−1} is a constant factor less than that of their parent. Hence, each node visit decreases the frontier weight. Visiting nodes at lower numbered levels rather than nodes further down the stratum results in a more substantial decrease in the weight function. In order to avoid frequent, expensive balancing steps, our exploration strategy does not necessarily proceed in a regular, breadth-first manner. Nonetheless, we make use of certain cheaper, weight-driven load-balancing techniques to ensure that the frontier weight decreases at a predictable rate.
A pictorial representation of the exploration process is shown in Fig. 1.

Fig. 1. The weight-driven exploration process on a tree of height h. (a) The portion within thin solid lines encloses visited nodes belonging to previous strata. Thick solid lines enclose visited/generated nodes within the current stratum of ℓ levels. Frontier nodes lie along the thick spline. Dashed lines enclose the nodes that are still to be generated. (b) The frontier weight reduction induced by a visit of a node at level j of the stratum, with 0 ≤ j < ℓ.

A stage consists of two parts. In the first part, a sequence of (parallel) visiting steps is performed to explore nodes in the stratum until the frontier weight is less than or equal to p. In order to detect the end of the first part, visiting steps are executed in batches of ℓ, and a weight estimate is computed after the execution of each batch, using the approximate prefix sums algorithm, whose O(log log p) complexity is dominated by that of the ℓ visiting steps. The second part of the stage completes the exploration of the stratum as follows. First, for every 0 ≤ j < ℓ, a cluster of 2^{ℓ−j} distinct processors is assigned to each node of F(j) by means of approximate prefix sums in O(log log p) time. (Since the frontier weight is now at most p, we have Σ_j |F(j)| · 2^{ℓ−j} ≤ w(F) ≤ p, hence such an assignment is feasible.) Next, all of the descendants of each node in F are visited by the corresponding cluster of processors in O(ℓ) time. More specifically, consider a frontier node x at level j and let {p_0, p_1, ..., p_{2^{ℓ−j}−1}} be the cluster of processors assigned to it. The exploration proceeds in ℓ − j rounds, where in round k, 0 ≤ k ≤ ℓ − 1 − j, all descendants of x at distance k from it are visited. In round zero p_0 visits x and gives its children (if any) to p_0 and p_1. Thereafter, in each round, p_i takes the node (if any) given to it in the previous round, visits it, and gives its children to processors p_{2i} and p_{2i+1}, and so on.
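The index bookkeeping used by a cluster can be simulated sequentially as follows (again reusing the NodeDescriptor sketch); slot i plays the role of processor p_i of the cluster, and the indices 2i and 2i+1 implement the forwarding rule described above.

// Sequential simulation of the cluster protocol that visits, level by level,
// all descendants of a frontier node within the stratum.
final class ClusterSketch {
  static void completeSubtree(NodeDescriptor frontierNode, int clusterSize, int rounds) {
    NodeDescriptor[] slot = new NodeDescriptor[clusterSize];
    slot[0] = frontierNode;                    // round zero: p_0 visits the frontier node
    for (int round = 0; round < rounds; round++) {
      NodeDescriptor[] next = new NodeDescriptor[clusterSize];
      for (int i = 0; i < clusterSize; i++) {
        if (slot[i] == null) continue;
        NodeDescriptor[] children = slot[i].visit();
        if (children.length > 0 && 2 * i < clusterSize)     next[2 * i] = children[0];
        if (children.length > 1 && 2 * i + 1 < clusterSize) next[2 * i + 1] = children[1];
      }
      slot = next;
    }
  }
}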
At the end of the stage, the children of the nodes on the last level of the stratum, which make up the initial frontier for the next stage, will be evenly distributed among the processors by employing the approximate prefix sums algorithm once again.
A very high-level, procedural description of our new strategy for visiting small strata with O(pℓ^2) nodes is given in Fig. 2. In summary, each stratum is visited in a stage (procedure STAGE_VISIT) by first alternating ℓ parallel visiting steps (procedure VISITING_STEP) with an approximate count of the frontier weight (procedure APPROXIMATE_COUNT), until the latter goes below p. Then, the visit of the stratum is completed by first allocating processor clusters to the residual unexplored nodes (procedure ALLOCATE_CLUSTERS), then visiting their subtrees within the stratum using the simple technique illustrated above (procedure COMPLETE_VISIT), and finally redistributing the initial frontier for the next stage to the p processors (procedure REDISTRIBUTE_NODES).

procedure STAGE_VISIT()
  while w(F) > p do
    repeat ℓ times VISITING_STEP()
    APPROXIMATE_COUNT()
  ALLOCATE_CLUSTERS()
  COMPLETE_VISIT()
  REDISTRIBUTE_NODES()

Fig. 2. Overall structure of the algorithm for visiting small strata
Note that the three procedures ALLOCATE_CLUSTERS, COMPLETE_VISIT, and REDISTRIBUTE_NODES can all be implemented by means of simple variations of the approximate prefix sums algorithm of [GZ95].
In order to determine the total running time of a stage, we need to give a bound on the number of visiting steps performed. Let F_t be the frontier at the beginning of the t-th visiting step. The step is called full if it visits Ω(p) nodes of F_t, and it is called reducing if it visits at least half of the nodes in F_t(0) ∪ F_t(1) ∪ ... ∪ F_t(i), for each i in the range 0 ≤ i < ℓ. Section 4 will show how to perform a visiting step in time O((log log log p)^2) while ensuring that it is always either full or reducing (see Theorem 10). Clearly, for a stratum of m nodes, there are at most O(m/p) full visiting steps in the stage, whereas the number of reducing steps is bounded by the following lemma.
Lemma 3 O(ℓ) reducing visiting steps are sufficient to reduce the frontier weight to at most p.

PROOF. The proof is based on the following property.

Claim. Let x_1, x_2, ..., x_n and y_1, y_2, ..., y_n be two sequences of nonnegative integers such that Σ_{j=1}^{i} x_j ≤ Σ_{j=1}^{i} y_j for every 1 ≤ i ≤ n. Then Σ_{j=1}^{n} x_j 3^{−j} ≤ Σ_{j=1}^{n} y_j 3^{−j}.

Proof of Claim. The proof is by induction on n. The case n = 1 is immediate. Suppose that the property holds for some n ≥ 1 and consider sequences of n + 1 elements. Assume that x_{n+1} > y_{n+1}, since otherwise the inductive step is immediate. It is easy to see that

Σ_{j=1}^{n+1} x_j 3^{−j} ≤ Σ_{j=1}^{n−1} x_j 3^{−j} + (x_n + (x_{n+1} − y_{n+1})) 3^{−n} + y_{n+1} 3^{−(n+1)}.

Note that the sequences x_1, ..., x_{n−1}, x_n + (x_{n+1} − y_{n+1}) and y_1, ..., y_n satisfy the hypothesis of the claim; therefore, by applying the induction hypothesis, we have that

Σ_{j=1}^{n−1} x_j 3^{−j} + (x_n + (x_{n+1} − y_{n+1})) 3^{−n} ≤ Σ_{j=1}^{n} y_j 3^{−j},

which, combined with the previous inequality, proves the claim.

Consider a reducing visiting step. Let F be the frontier prior to the execution of the step and let n_j be the number of nodes in F(j) visited in the step, for 0 ≤ j < ℓ. Since the visiting step is reducing, we have

Σ_{j=0}^{i} |F(j)| ≤ 2 Σ_{j=0}^{i} n_j

for any i, 0 ≤ i < ℓ, and the claim (applied to the sequences |F(0)|, ..., |F(ℓ−1)| and 2n_0, ..., 2n_{ℓ−1}, with the conclusion rescaled by 3^{ℓ+1}) shows that

Σ_{j=0}^{ℓ−1} |F(j)| 3^{ℓ−j} ≤ 2 Σ_{j=0}^{ℓ−1} n_j 3^{ℓ−j}.

Thus, the visited nodes account for at least half the total frontier weight. Since
the combined weight of the children of any node is at most two thirds of the weight of their parent, it follows that the weight reduction must be at least one third of the total weight of the visited nodes, i.e., at least one sixth of the frontier weight w(F) prior to the execution of the visiting step. Thus, the new frontier weight following the completion of the step is at most (5/6)w(F). Since the frontier at the beginning of the stage contains O(pℓ^2) nodes at level 0, the initial frontier weight is O(pℓ^2 3^ℓ), which implies that the frontier weight will be less than or equal to p after O(ℓ) reducing steps. This proves the lemma.
From the above discussion we conclude that our new strategy can be employed to visit any stratum of size m = O(pℓ^2) using O(m/p + ℓ) visiting steps, hence in O((m/p + ℓ)(log log log p)^2) time. Since strata of size m = Ω(pℓ^2) can be visited in O(m/p) time using the straightforward breadth-first strategy outlined in Section 2, we can suitably interleave the two strategies and obtain an algorithm that visits any stratum in time O((m/p + ℓ)(log log log p)^2), for any value of m. This immediately yields a backtrack search algorithm with the running time stipulated in Theorem 1. Note that the number of visiting steps required is O(n/p + h) in all.
4 Implementation of a Visiting Step
In this section, we describe the implementation of a visiting step which enforces the property that the step is always either full or reducing.
The key idea is a "heap-like" data structure D that holds the frontier nodes, from which nodes are extracted prior to the beginning of the visiting step and to which their children are inserted at the end of that step. Conceptually, D is composed of an ℓ × p/ℓ array of tree rings. We also regard the p PRAM processors as being conceptually arranged into ℓ rows and q = p/ℓ columns. At the beginning of the visiting step, the tree rings of the i-th row contain all current frontier nodes at level i, 0 ≤ i < ℓ. A tree ring is structured as a forest of complete binary trees of different sizes.¹ The leaf vertices in a tree ring are nodes of the tree being visited, and each internal vertex contains pointers to its children. The roots of the trees in the same tree ring are organized in a doubly-linked list, ordered by tree size. (This data structure is broadly similar to one used in [CV88].) As in the previous section, we assume that the stratum being visited is of size O(pℓ^2). We use K to denote an upper bound on the size (i.e., the number of node descriptors stored) of any tree ring during the execution of a stage. Later, we will show that K = O(ℓ^3); hence the height of any tree in a tree ring will always be O(log ℓ). It should be noted that while each tree ring is notionally associated with a particular processor, since it is stored in the shared PRAM memory it is accessible to all.
A visiting step consists of two sub-steps, VISIT and BALANCE, which are
described in the following paragraphs.
VISIT This sub-step is executed in parallel by each column of processors. Let s be the total number of nodes held by the tree rings of the column and let c > 1 be a constant to be specified later. The ℓ processors in the column select the min{s, 4cℓ} topmost nodes from the union of their tree rings, and distribute these nodes evenly among themselves. Then, each processor visits the nodes it receives. Finally the children of these just-visited nodes are inserted into the appropriate tree rings within the column.
BALANCE This sub-step is executed in parallel by each row of processors and aims at partially balancing the nodes stored in the tree rings of the row. We define the degree of a processor as the number of tree nodes contained in its tree ring. Let f_i be the sum of the degrees of all processors in row i, for 0 ≤ i < ℓ (i.e., f_i is the number of frontier nodes at level i of the stratum). BALANCE redistributes the nodes among the tree rings in such a way that upon completion at most min{f_i, q}/(2K) processors have degree larger than c⌈f_i/q⌉ in row i, for any 0 ≤ i < ℓ. Moreover, BALANCE never increases the maximum processor degree in any row. The actual implementation of the BALANCE sub-step is rather involved and is discussed separately in Subsection 4.1.
¹ To avoid confusion discussing the elements of the tree being visited and the trees employed in the tree rings, we will use the term node exclusively in connection with the former and reserve the term vertex for the latter.
We have:
Lemma 4 A visiting step is always either full or reducing.
PROOF. Let F be the frontier at the beginning of the visiting step. Then, there are at most min{|F(j)|, q}/(2K) processors in row j of degree larger than c⌈|F(j)|/q⌉, for each j, 0 ≤ j < ℓ. This is ensured either by the BALANCE sub-step executed at the end of the preceding visiting step or, if the visiting step under consideration is the first of the stage, by the (approximately) even distribution of frontier nodes guaranteed at the start of the stage. We call the tree nodes maintained by these overloaded processors bad nodes and all the others good nodes. Since K is an upper bound on the degree of any processor, the total number of bad nodes in the first i levels of the frontier is at most

Σ_{j=0}^{i} K · min{|F(j)|, q}/(2K) = (1/2) Σ_{j=0}^{i} min{|F(j)|, q} ≤ (1/2) Σ_{j=0}^{i} |F(j)|

for any 0 ≤ i < ℓ. Thus the bad nodes at level i or lower account for at most half the total number of frontier nodes at those levels.
Suppose |F| > 3p and let r ≤ q be the number of columns holding fewer than 4cℓ nodes. Since a column holds at most Kℓ nodes, the number of good nodes can be bounded from above in terms of r, which, following some tedious but simple arithmetic manipulations, implies that r ≤ q(1 − 1/8c). Since c is a constant greater than one, we conclude that q − r ≥ q/(8c) columns hold at least 4cℓ nodes each. Thus, the visiting step will visit Ω(qℓ) = Ω(p) nodes, hence the step is full.
Consider now the case |F| ≤ 3p. Since the number of good nodes in each column is at most c(|F|/q + ℓ) ≤ 4cℓ, it follows that the total number of nodes to be visited in the step is at least equal to the total number of good nodes. From the observation made above, we know that if we visited only the good nodes, then for any 0 ≤ i < ℓ we would visit at least half of the frontier nodes at level i or lower, hence the step would be reducing. Since in each column we select the topmost nodes available, if some good nodes are not visited it can only be because at least the same number of nodes at higher levels are visited in their place, which maintains the reducing property.
In order to implement the visiting step described above, we need efficient primitives to operate on the tree rings. Consider a stage visiting a stratum of size O(pℓ^2). Note that at the beginning of the stage the degree of each processor is O(ℓ^2), and that after each VISIT sub-step the degree increases by at most an O(ℓ) additive term. (Each of the O(ℓ) nodes visited by the processors in a column can generate at most two children during a single VISIT step.) Since the BALANCE sub-step does not increase the maximum degree of a processor and O(m/p + ℓ) = O(ℓ^2) visiting steps are executed overall, we can conclude that the maximum processor degree will always be O(ℓ^3). As a consequence, throughout the stage each tree ring contains at most O(log ℓ) trees of O(ℓ^3) size and O(log ℓ) height each.
It can be shown [CV88] that:

(1) Given k nodes evenly distributed among Θ(k) processors, a tree ring whose trees contain these nodes as leaves can be constructed by the processors in O(log k) time.
(2) Two tree rings of size O(k) can be merged into one tree ring in O(log k) time by a single processor.
(3) Any number k of leaves can be extracted by O(k) processors from a tree ring in time proportional to the maximum height of a tree in the tree ring. After extraction, the tree ring structure can be restored within the same time bound.

A possible concrete representation of a tree ring is sketched right after this list.
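The sketch below is our own simplification: a TreeMap ordered by tree size stands in for the doubly-linked list of roots, keeping at most one tree per size, which is enough to convey the shape of the structure.

import java.util.TreeMap;

// A tree ring: a forest of complete binary trees whose leaf vertices are tree
// nodes (NodeDescriptor) awaiting a visit; internal vertices only hold child
// pointers. Roots are kept ordered by the size of their tree.
final class TreeRing {
  static final class Vertex {
    Vertex left, right;        // children pointers (both null at a leaf vertex)
    NodeDescriptor leaf;       // non-null exactly at leaf vertices
  }
  // tree size -> root of a complete binary tree of that size
  final TreeMap<Integer, Vertex> rootsBySize = new TreeMap<>();
}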
It can be easily argued that the VISIT sub-step can be implemented in a straightforward fashion within each column using standard techniques such as prefix sums and the aforementioned primitives to manipulate the tree rings. From the above discussion we conclude:

Lemma 5 For strata of size O(pℓ^2), VISIT can be executed in O(log ℓ) = O(log log log p) time.
4.1 Implementation of BALANCE
As mentioned before, we use K = Θ(ℓ^3) to denote an upper bound on the degree of any processor when a stratum of size O(pℓ^2) is explored (an exact value for K can be derived from the analysis). We assume that K is known by all processors prior to the beginning of the entire algorithm. Since BALANCE is executed in parallel and independently by all rows, we will concentrate on the operations performed by an arbitrary row, say row k. Let f_k denote the total number of tree nodes maintained by the processors of this row at the beginning of the BALANCE sub-step. The purpose of the sub-step is to redistribute these nodes among the processors in such a way that, after the redistribution, the number of processors of degree greater than c⌈f_k/q⌉ is at most min{f_k, q}/(2K). (It should be noted that the value f_k is not known to the processors.) The sub-step also ensures that the maximum processor degree is not increased. A crucial feature of the implementation of BALANCE is that nodes are not physically exchanged between the processors, which would be too costly for our purposes; instead, they are "moved" by manipulating the corresponding tree rings, with a cost logarithmic in the number of nodes being moved.
BALANCE is based on a balancing strategy introduced by Broder et al. in
[BFSU92], which makes use of a special kind of expander dened below.
Definition 6 ([BFSU92]) An undirected graph G = (V, E) is an (a, b)-extrovert graph, for some a, b with 0 < a, b < 1, if for any set S ⊆ V with |S| ≤ a|V|, at least b|S| of its vertices have strictly more neighbours in V − S than in S.

The existence of regular extrovert graphs of constant degree is proved through the probabilistic method in [BFSU92].
For each row, we identify its q processors with the vertices of a regular (a; b)-
extrovert graph E) of odd degree d, where a; b and d are constants. Let
consists of
phases, numbered from 0 to 1. In each phase, some tree nodes maintained
by the row processors are marked as dormant, and will not participate in
subsequent phases. The remaining nodes are said to be active. At the beginning
of Phase 0 all frontier nodes are active. For 0 i < , Phase i performs the
following actions.
(1) Each processor with more than K
active nodes in its tree ring declares
itself congested.
is built as a directed
version of a subgraph of G. The construction proceeds by performing
steps of the following type [BFSU92]. Initially, D is empty. In each step,
every congested processor not yet in D checks whether at least (d
of its neighbours are either non-congested or already in D and, if so,
enters D by acquiring edges to (d + 1)=2 of these neighbours, which also
enter D.
Comment: The construction and the fact that d is odd guarantee that D
is a DAG, and that each congested processor in D has out-degree greater
than its in-degree, while each non congested processor in the DAG has
out-degree 0. Moreover, D has depth at most .
(3) A sub-DAG D 0 D is identied comprising all congested processors with
more than K
active nodes, and all of their descendants.
Each congested processor not in D that has more than K
active
nodes, marks all but K
of its active nodes as dormant.
be such that 2 j K
. Note that K
for j . Each processor in D 0 extracts, for each of its direct successors
in D 0 , a tree containing 2 j distinct active nodes from its tree ring, and
sends a pointer to this tree to the successor in question.
Each processor merges the trees it receives into its tree ring.
A pictorial representation of the construction of the DAGs D and D′ performed in a phase of BALANCE is given in Fig. 3.

Fig. 3. The DAG construction process performed by BALANCE. (a) The extrovert graph G connecting the processors of a row. White nodes represent uncongested processors. Black and shaded nodes represent, respectively, "heavily" congested processors (holding more than twice the congestion threshold of the phase in active nodes) and "lightly" congested processors (holding more than the threshold but at most twice the threshold). (b)-(c) Two-step construction of DAG D. (d) The final subdag D′ ⊆ D containing all heavily congested nodes and their descendants in D.
In what follows, we show that at the end of the phases the number of
processors of degree more than cdf k =qe is at most minff k ; qg=(2K), and that
the maximum processor degree is not increased.
Lemma 7 For 0 i < , at the beginning of Phase i each processor holds at
most K
active nodes and at most K(1
nodes. Moreover, no
phase increases the maximum processor degree.
PROOF. We proceed by induction on i. The case
Suppose that the property holds up to index i. By induction, each processor
starts Phase i with at most K
active nodes and at most K nodes overall. A
congested processor that does not make it into the DAG D is not involved in
any movement of nodes in the phase. Each such processor begins with at most
K nodes (K
active and K(1
dormant) and at the end of the phase
at most K
of its nodes remain active while the rest become (or remain)
dormant. Moreover, the processor's degree does not change. If sub-DAG D 0
is empty then all congested processors in D have at most K
active nodes
and at most K K
nodes. Since in this case no
exchange of pointers takes place, the property for index
instead that D 0 is not empty, that is, there is at least one congested processor
in D with more than K
active nodes. (Note that in this case the maximum
processor degree is greater than K
congested processor in D 0 transmits
active nodes to each of its successors. Since the out-degree of a congested
processor in D 0 is greater than its in-degree this represents a net loss of at
least
nodes. Therefore, in any such processor the number
of active nodes at the end of the phase is at most
and its overall degree is decreased. Moreover, the number of dormant nodes for
such a processor stays unchanged, namely K(1
Finally,
a non-congested processor begins the phase with at most K
active nodes
and receives at most d2 j dK
new active nodes, which adds up to
which is less than the maximum processor degree. As in the previous case, the
number of dormant nodes for such a processor stays unchanged, that is less
than K(1
We refer to the processors maintaining dormant nodes as rogues. Let R(j)
denote the set of rogues at the beginning of Phase j and C(j) the set of
processors that declare themselves congested in the phase. Dene r
and c
log 1=
Ka
. We have:
Lemma 8 For
PROOF. We proceed by induction on j. The case clearly true since
Suppose that the property holds up to index j 1 and consider index
j. Note that the rogues at the beginning of Phase j will be given by the set
plus a set C 0 C(j 1) containing congested processors that did
not make it into the DAG during Phase j 1. Let us give an upper bound to
j. Note that c j 1 a minff k ; qg aq, since otherwise congested processors
would account for more than
a
active nodes, which is impossible. By the extrovertness of the graph G, after
the rst t steps of DAG construction, the number of congested processors not
in D are at most c j 1 . This implies that jC 0 j c j 1 (1 b) , hence the
number of rogues at the beginning of Phase j will be
Lemma 9 By the end of the BALANCE procedure, the number of processors
of degree more than cdf k =qe is at most minff k ; qg=(2K) for a suitable choice of
the constant c. Moreover, the procedure is executed in time O((log log log p) 2 )
on the COMMON CRCW-PRAM.
PROOF. Let us rst consider the case 0 . At the beginning of Phase 0 ,
each processor maintains at most K
active nodes (by Lemma 7
), and the number of rogues is
(1)
(by Lemma 8 and the choice of ). Moreover Lemma 7 shows that the maximum
degree of processors that are not rogues at the end of Phase 0 will not
increase above the cdf k =qe threshold in the subsequent 0 1 phases.
Now consider the case where < 0 . In this case
for c 2(d
. Moreover, since r j is increasing in j, the total number of
rogues is no more than that indicated in Equation 1.
We now evaluate the running time. Consider a phase of BALANCE. Step 1
clearly takes O(1) time. Every DAG construction step is accomplished in constant
time, hence the construction of D (Step 2) takes time O(). Since D
has depth at most it is easy to see that Step 3 can be accomplished in time
O(), as well. The cost of the remaining steps is dominated by the cost of the
extraction and merging operations performed on the tree rings, which take
O(log Noting that the number of phases is
we conclude that the overall running time is
O( log
The following theorem combines the contributions of this section and establishes the result announced in Section 3 upon which the analysis of our backtrack strategy is based.

Theorem 10 A visiting step within a stratum of size O(pℓ^2) can be implemented in O((log log log p)^2) time on a p-processor COMMON CRCW-PRAM, while ensuring that the step is either full or reducing.
5 Evaluation of Bounded-Degree DAGs
In this section we show how some of the ideas involved in the backtrack search
algorithm may be used to solve the DAG evaluation problem. In a computation
DAG , nodes with zero in-degree are regarded as inputs, while other nodes
represent operators whose operands are the values computed by their predecessors
(i.e., nodes adjacent with respect to incoming edges). Nodes with zero
out-degree are regarded as outputs. A node can be evaluated only after all of
its operands have been evaluated. The DAG evaluation problem consists of
evaluating all output nodes. We dene the layers of the DAG in the obvious
way: the inputs are at layer zero and the layer of every other node is one plus
the maximum layer among its predecessors.
In our parallel setting, we assume that a DAG D of constant degree is stored in the shared memory of a p-processor COMMON CRCW-PRAM. Each DAG node is represented by a descriptor containing the following information: a field that specifies the type of operation associated with that node; a field to store its value; two fields for each operand (i.e., each incoming edge), where processors will write the value of the operand and a timestamp to record the time of the writing; and pointers to the node's successors in D. Initially, only pointers to the descriptors of the DAG inputs are known and evenly distributed among the processors.
Notice the similarity between the DAG evaluation and the backtrack search
problems. While the computational DAG is not necessarily a tree, nevertheless
we can still visit (i.e., evaluate) it by proceeding in a quasi breadth-rst
stratum-by-stratum fashion as in the backtrack search algorithm.
More precisely, recall that in the backtrack search problem a node is revealed
by the processor that visits its (unique) parent. In the DAG evaluation prob-
lem, "visiting" a node entails computing the node's value and writing this value in the node's descriptor and, together with a timestamp, in the appropriate fields of its successors' descriptors. A node is revealed (hence ready to be evaluated/visited itself) only when the last of its predecessors has been evaluated, and the node is regarded as being a "child" of that predecessor (with ties being broken arbitrarily). In this fashion, a spanning forest for the DAG is implicitly identified and the DAG evaluation can be regarded as a visit of this forest.
By noting that our backtrack search algorithm can be employed to visit any forest of bounded-degree trees in O((n/p + h)(log log log p)^2) time, where n is the total number of nodes and h the maximum tree-height in the forest, we conclude that the DAG evaluation problem can be solved within the same time bound.
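One way to picture the descriptor and the "child of the last evaluated predecessor" rule is sketched below; the atomic counter is our own device for the sketch and is not claimed to be the paper's implementation.

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative DAG-node descriptor: the processor whose write makes the
// operand set complete "reveals" the node and adopts it as a child, which
// implicitly defines the spanning forest mentioned in the text.
final class DagNode {
  final DagNode[] successors;           // pointers to the successors in D
  final long[] operands;                // one slot per incoming edge
  final AtomicInteger missing;          // operands still to be written
  volatile long value;                  // result of evaluating this node

  DagNode(int inDegree, DagNode[] successors) {
    this.successors = successors;
    this.operands = new long[inDegree];
    this.missing = new AtomicInteger(inDegree);
  }

  // Called by a just-evaluated predecessor; returns true iff this write was
  // the last one, i.e., the caller reveals (and adopts) this node.
  boolean deliver(int slot, long operandValue) {
    operands[slot] = operandValue;
    return missing.decrementAndGet() == 0;
  }
}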
6 Conclusions
In this work we devised an efficient deterministic strategy for performing parallel backtrack search on a shared memory machine. Specifically, our strategy attains a running time which is only a triply logarithmic factor away from a natural lower bound for the problem. As with all previous studies, our investigation has mainly focused on running time. On the other hand, the overall space required by our algorithm can grow as large as the tree size n, whereas the space required by the randomized schemes proposed in [KZ93,LAB93,Ran94] is bounded above by min{n, ph}. This latter quantity, however, is close to n for large values of p and/or highly unbalanced trees. It remains a challenging open problem to devise fast and space efficient backtrack search algorithms and, more generally, to study time-space tradeoffs for parallel backtrack search.
--R
Coding theory
Tight bounds for on-line tree embeddings
Approximate parallel scheduling.
Optimal deterministic approximate parallel pre
Fast deterministic parallel branch and bound.
Deterministic branch and bound on distributed-memory machines
parallel algorithms for backtrack search and branch and bound computation.
An atomic model for message-passing
Dynamic tree embeddings in butter ies and hypercubes.
A simpler analysis of the Karp-Zhang parallel branch-and- bound method
Optimal speed-up for backtrack search on a butter y network
Communication complexity for parallel divide- and-conquer
--TR
Approximate parallel scheduling. Part I: the basic technique with applications to optimal parallel list ranking in logarithmic time
Coding theory, hypercube embeddings, and fault tolerance
Communication complexity for parallel divide-and-conquer
Tight bounds for on-line tree embeddings
An introduction to parallel algorithms
Dynamic tree embeddings in butterflies and hypercubes
An atomic model for message-passing
Randomized parallel algorithms for backtrack search and branch-and-bound computation
Branch-and-bound and backtrack search on mesh-connected arrays of processors
Optimal speedup for backtrack search on a butterfly network
Near-perfect Token Distribution
Optimal deterministic approximate parallel prefix sums and their applications
A Simpler Analysis of the Karp-Zhang Parallel Branch-and-Bound Method | load balancing;parallel algorithms;PRAM model;backtrack search |
566244 | Decision lists and related Boolean functions. | We consider Boolean functions represented by decision lists, and study their relationships to other classes of Boolean functions. It turns out that the elementary class of 1-decision lists has interesting relationships to independently defined classes such as disguised Horn functions, read-once functions, nested differences of concepts, threshold functions, and 2-monotonic functions. In particular, 1-decision lists coincide with fragments of the mentioned classes. We further investigate the recognition problem for this class, as well as the extension problem in the context of partially defined Boolean functions (pdBfs). We show that finding an extension of a given pdBf in the class of 1-decision lists is possible in linear time. This improves on previous results. Moreover, we present an algorithm for enumerating all such extensions with polynomial delay. | Introduction
Decision lists have been proposed in [31] as a specification of Boolean functions which amounts to a simple
strategy for evaluating a Boolean function on a given assignment. This approach has been become popular
in learning theory, since bounded decision lists naturally generalize other important classes of Boolean
functions. For example, k-bounded decision lists generalize the classes whose members have a CNF or
DNF expression where each clause or term, respectively, has at most k literals, and, as a consequence,
also those classes whose members have a DNF or CNF containing at most k terms or clauses, respectively.
Another class covered by decision lists is the one of decision trees [30].
Informally, a decision list can be written as a cascaded conditional statement of the form:
if t_1(v) then b_1 else if t_2(v) then b_2 else ... else if t_{d−1}(v) then b_{d−1} else b_d
where each t_i(v) means the evaluation of a term t_i, i.e., a conjunction of Boolean literals, on an assignment v to the variables x_1, ..., x_n, and each b_i is either 0 (false) or 1 (true).
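A minimal evaluator for this cascaded-conditional reading of a decision list is sketched below (our own illustrative code, with terms given as index sets of required-true and required-false variables; the last term is empty and therefore satisfied by every assignment).

// Evaluates a decision list on an assignment v in {0,1}^n.
final class DecisionList {
  int[][] positive;    // positive[i] = variables that must be 1 in term t_i
  int[][] negative;    // negative[i] = variables that must be 0 in term t_i
  boolean[] output;    // output[i]   = b_i

  boolean evaluate(boolean[] v) {
    for (int i = 0; i < output.length; i++)
      if (satisfies(v, positive[i], negative[i])) return output[i];
    throw new IllegalStateException("the last term should be empty, hence always true");
  }

  private static boolean satisfies(boolean[] v, int[] pos, int[] neg) {
    for (int j : pos) if (!v[j]) return false;
    for (int j : neg) if (v[j]) return false;
    return true;
  }
}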
The important result established in [31] is that k-decision lists, i.e., decision lists where each term t i has
at most k literals and k is a constant, are probably approximately correct (PAC) learnable in Valiant's model
[35]. This has largely extended the classes of Boolean functions which are known to be learnable. In the
sequel, decision lists have been studied extensively in the learning field, see e.g. [18, 8, 16, 9].
However, while it is known that decision lists generalize some classes of Boolean functions [31], their relationships
to other classes such as Horn functions, read-once functions, threshold functions, or 2-monotonic
functions, which are widely used in the literature, were only partially known (cf. [5, 3]). It thus is interesting
to know about such relationships, in particular whether fragments of such classes correspond to decision
lists and how such fragments can be alternatively characterized. This issue is intriguing, since decision lists
are operationally defined, while other classes such as Horn functions or read-once functions are defined on
a semantical (in terms of models) or syntactical (in terms of formulas) basis, respectively.
In this paper, we shed light on this issue and study the relationship of decision lists to the classes mentioned
above. We focus on the elementary class of 1-decision lists (C 1-DL ), which has received a lot of attention
and was the subject of a number of investigations, eg. [31, 26, 8, 9]. It turns out that this class relates in an
interesting way to several other classes of Boolean functions. In particular, it coincides with independently
defined semantical and syntactical such classes, as well as with the intersections of other well-known classes
of Boolean functions. We find the following characterizations of C 1-DL . It coincides with
• C^R_DH, the renaming-closure of the class of functions f such that both f and its complement f̄ are Horn
[12] (also called disguised "double" Horn functions);
• C_ND, the class of nested differences of concepts [20], where each concept is described by a single term;
• the intersection of the classes of 2-monotonic functions [29] and read-once functions,
i.e., functions definable by a formula in which each variable occurs at most once [17, 23, 35, 34].
• the intersection of threshold functions (also called linearly separable functions) [29] and
read-once functions; and,
• C_LR-1, the class of linear read-once functions [12], i.e., functions represented by a read-once formula
such that each binary connective involves at least one literal.
Observe that the inclusion C_1-DL ⊆ C_TH ∩ C_R-1 follows from the result that C_1-DL ⊆ C_TH [5, 3] and the fact that C_1-DL ⊆ C_R-1; however, the converse was not known.
The above results give us new insights into the relationships between these classes of functions. Moreover,
they provide us with a semantical and syntactical characterization of 1-decision lists in terms of (renamed)
Horn functions and read-once formulas. On the other hand, we obtain characterizations of the intersections
of well-known classes of Boolean functions in terms of operationally, semantically, and syntactically defined
classes of Boolean functions.
As we show, a natural generalization of the results from 1-decision lists to k-bounded decision lists fails
in almost all cases. The single exception is the coincidence with nested differences of concepts, which
holds for an appropriate base class generalizing terms. Thus, our results unveil characteristic properties of
1-decision lists and, vice versa, of the intersections of classes of Boolean functions with which they coincide.
Furthermore, we study computational problems on 1-decision lists. We consider recognition from a formula
(also called membership problem [19] and representation problem [4, 1]) and problems in the context
of partially defined Boolean functions.
A partially defined Boolean function (pdBf) can be viewed as a pair (T ; F ) of sets T and F of true and
false vectors v 2 f0; 1g n , respectively, where T " It naturally generalizes a Boolean function,
by allowing that the range function values on some input vectors are unknown. This concept has many
applications, e.g., in circuit design, for representation of cause-effect relationships [7], or in learning, to
mention a few. A principal issue on pdBfs is the following: Given a pdBf (T ; F ), determine whether some
f in a particular class of Boolean functions C exists such that T ' T (f) and F ' F (f), where T (f) and
F (f) denote the sets of true and false vectors of f , respectively. Any such f is called an extension of f in C,
and finding such an f is known as the extension problem [6, 27]. Since in general, a pdBf may have multiple
extensions, it is sometimes desired to know all extensions, or to compute an extension of a certain quality
(e.g., one described by a shortest formula, or having a smallest set T (f) ).
The extension problem is closely related to problems in machine learning. A typical problem there is the
following ([4]). Suppose there are n Boolean valued attributes; then, find a hypothesis in terms of a Boolean
function f in a class of Boolean functions C, which is consistent with the actual correlation of the attributes
after seeing a sample of positive and negative examples, where it is known that the actual correlation is a
function g in C. In our terms, a learning algorithm produces an extension of a pdBf. However, there is a
subtle difference between the general extension problem and the learning problem: in the latter problem,
an extension is a priori known to exist, while in the former, this is unknown. A learning algorithm might
take advantage of this knowledge and find an extension faster. The extension problem itself is known as
the consistency problem [4, 1]; it corresponds to learning from a sample which is possibly spoiled with
inconsistent examples.
In this context, it is also interesting to know whether the pdBf given by a sample uniquely defines a
Boolean function in C; if the learner recognizes this fact, she/he has identified the function g to be learned.
This is related to the question whether a pdBf has a unique extension, which is important in the context of
teaching [32, 22, 33, 15]. There, to facilitate quicker learning, the sample is provided by a teacher rather than
randomly drawn, such that identification of the function g is possible from it (see e.g. [5, 15] for details).
Any sample which allows to identify a function in C is called a teaching sequence (or specifying sample
[5]). Thus, the issue of whether a given set of labeled examples is a teaching sequence amounts to the issue
of whether S, seen as a pdBf, has a unique extension in C. A slight variant is that the sample is known
to be consistent with some function g in C. In this case, the problem amounts to the unique extension
problem knowing that some extension exists; in general, this additional knowledge could be utilized for
faster learning.
Alternative teaching models have been considered, in which the sample given by the teacher does not
precisely describe a single function [16]. However, identification of the target function is still possible, since
the teacher knows how the learner proceeds, and vice versa, the learner knows how the teacher generates
his sample, called a teaching set in [16]. To prevent "collusion" between the two sides (the target could be
simply encoded in the sample), an adversary is allowed to spoil the teaching set by adding further examples.
Our main results on the above issues can be summarized as follows:
Recognizing 1-decision lists from a formula is tractable for a wide class of formulas, including Horn
formulas, 2-CNF and 2-DNF, while unsurprisingly intractable in the general case.
ffl We point out that the extension problem for C 1-DL is solvable in linear time. This improves on the
previous result that the extension problem for C 1-DL is solvable in polynomial time [31]. As a consequence,
a hypothesis consistent with a target function g in C 1-DL on the sample can be generated in linear time. In
particular, learning from a (possibly spoiled) teaching sequence is possible in linear time. We obtain as a
further result an improvement to [16], where it is shown that learning a function g in C 1-DL from a particular
teaching set is possible in O(m 2 n) time, where m is the length of a shortest 1-decision list for g, n is the
number of attributes, and the input size is assumed to be O(mn). Our algorithm can replace the learning
algorithm in [16], and finds the target in O(nm) time, i.e., in linear time. We mention that [8] presents the
result, somewhat related to [16], that 1-decision lists with k alternations (i.e., changes of the output value))
are PAC learnable, where the algorithm runs in O(n 2 m) time.
ffl We present an algorithm which enumerates all extensions (given by formulas) of a pdBf in C 1-DL with
polynomial delay. As a corollary, the problems of deciding whether a given set of any examples is a teaching
sequence and whether a consistent sample is a teaching sequence are both solvable in polynomial time.
Moreover, a small number of different hypotheses (in fact, even up to polynomially many) for the target
function can be produced within polynomial time.
The rest of this paper is organized as follows. The next section provides some preliminaries and fixes
notation. In Section 3, we study the relationships of 1-decision lists to other classes of functions. In Section
4, we address the recognition problem from formulas, and in Section 5, we study the extension problem.
Section 6 concludes the paper.
Preliminaries
We use x to denote Boolean variables and letters u; v; w to denote vectors in f0; 1g n . The i-th
component of a vector v is denoted by v i . Formulas are built over the variables using the connectives -;
and ¬. A term t is a conjunction ⋀_{i∈P(t)} x_i ∧ ⋀_{j∈N(t)} x̄_j of literals such that P(t) ∩ N(t) = ∅; a clause c is defined dually (change ∧ to ∨); t (resp., c) is Horn if |N(t)| ≤ 1 (resp., |P(c)| ≤ 1). We use ⊤ and ⊥ to denote the empty term (truth) and the empty clause (falsity), respectively. A disjunctive normal form (DNF) is a formula φ = ⋁_i t_i, where the t_i are terms; it is Horn if all t_i are Horn. Similarly, a conjunctive normal form (CNF) φ = ⋀_i c_i is Horn, if all c_i are Horn. E.g., the term t = x_1 ∧ x̄_2 has P(t) = {1} and N(t) = {2}, and is Horn, while the clause c = x_1 ∨ x_2 ∨ x̄_3 has P(c) = {1, 2}, and thus it is not Horn.
A partially defined Boolean function (pdBf) is a mapping defined on T ∪ F by the value 1 on T and 0 on F, where T ⊆ {0,1}^n denotes a set of true vectors (or positive examples), F ⊆ {0,1}^n denotes a set of false vectors (or negative examples), and T ∩ F = ∅. For simplicity, we denote a pdBf by the pair (T, F). It can be seen as a representation for all (total) Boolean functions (Bfs) f with T ⊆ T(f) and F ⊆ F(f); recall that any such f is called an extension of (T, F).
We often identify a formula φ with the Bf which it defines. A term t is an implicant of a Bf f, if t ≤ f
holds, where ≤ is the usual ordering defined by f ≤ g ⇔ T(f) ⊆ T(g). Moreover, t is prime if no proper
subterm t' of t is an implicant of f. A DNF φ is prime and irredundant,
if each term t_i is a prime implicant of φ and
no term t_i is redundant, i.e., removing t_i from φ changes the function.
A decision list L is a finite sequence of pairs (t_1, b_1), ..., (t_m, b_m), where each t_i is a term, each b_i ∈ {0,1},
and t_m = ⊤. It defines a Bf f : {0,1}^n → {0,1} by f(v) = b_j, where j = min{ i | t_i(v) = 1 }. We sometimes
call a Bf f a decision list, if f is definable
by some decision list; this terminology is inherited by restricted decision lists.
A k-decision list is a decision list where each term t_i contains at most k literals; we denote by C k-DL the
class of all (functions represented by) k-decision lists. In particular, C 1-DL is the class of decision lists where
each term is either a single literal or empty. A decision list is monotone [15], if each term t in it is positive;
by C mon k-DL we denote the restriction of C k-DL to monotone decision lists.
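To make the decision-list semantics concrete, the following minimal Python sketch (not taken from the paper; the signed-integer encoding of literals is an assumption made here only for illustration) evaluates a decision list on a Boolean vector.

```python
def term_satisfied(term, v):
    # term: set of literals, where literal i > 0 stands for x_i and i < 0 for the
    # negation of x_|i|; the empty set is the empty term (always true).
    return all((v[abs(i) - 1] == 1) == (i > 0) for i in term)

def eval_decision_list(L, v):
    # L: sequence of (term, output) pairs; the last term is the empty term.
    for term, b in L:
        if term_satisfied(term, v):
            return b
    raise ValueError("decision list has no default item")

# A 1-decision list for x1 AND (x2 OR NOT x3):
L = [({-1}, 0), ({2}, 1), ({-3}, 1), (set(), 0)]
print([eval_decision_list(L, v) for v in [(1, 1, 0), (1, 0, 1), (0, 1, 1)]])  # [1, 0, 0]
```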
A Bf f is Horn, if F(f) is closed under component-wise conjunction ∧ of vectors, i.e., F(f) coincides with
the closure of F(f) under ∧; by C Horn we denote the class of all Horn functions. It
is known that f is Horn if and only if f is represented by some Horn DNF. If f is represented by a
positive DNF, i.e., a DNF in which each term is positive, then f is called positive; C pos denotes the class of
all positive functions.
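The closure criterion can be checked by brute force for small functions; the sketch below is purely illustrative (quadratic in the number of false vectors, not an algorithm from the paper) and takes the function as its set of false vectors.

```python
def is_horn(false_vectors):
    # Per the definition above, f is Horn iff F(f) is closed under componentwise AND.
    F = set(false_vectors)
    return all(tuple(x & y for x, y in zip(u, v)) in F for u in F for v in F)

# x1 OR x2 has F = {(0,0)}, which is closed, so it is Horn.
# x1 <-> x2 has F = {(0,1), (1,0)}; (0,1) AND (1,0) = (0,0) is missing, so it is not Horn.
print(is_horn({(0, 0)}), is_horn({(0, 1), (1, 0)}))  # True False
```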
For any vector w ∈ {0,1}^n, we define ON(w) = {i | w_i = 1} and OFF(w) = {i | w_i = 0}. The
renaming of an n-ary Bf f by w, denoted f w , is the Bf f(x ⊕ w), i.e., T(f w ) = { v ⊕ w | v ∈ T(f) },
where ⊕ is componentwise addition modulo 2 (XOR). For any class of Bfs C, we denote by C R the closure
of C under renamings. The renaming of a formula φ by w, denoted φ w , is the formula resulting from φ by
replacing each literal involving a variable x_i with w_i = 1 by its opposite. E.g., for w with ON(w) = {1},
the renaming replaces each occurrence of x_1 by ¬x_1 and vice versa.
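Renaming is easy to illustrate on the set-of-true-vectors representation; the helper below (an illustrative sketch, not code from the paper) applies the componentwise XOR of the definition.

```python
def rename(true_vectors, w):
    # T(f^w) = { v XOR w : v in T(f) }, with XOR taken componentwise.
    return {tuple(x ^ y for x, y in zip(v, w)) for v in true_vectors}

# f = x1 AND NOT x2 has T(f) = {(1, 0)}; renaming by w = (0, 1) flips x2 and
# yields x1 AND x2, i.e. T(f^w) = {(1, 1)}.
print(rename({(1, 0)}, (0, 1)))  # {(1, 1)}
```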
3 Characterizations of 1-Decision Lists
Read-once functions. A function f is called read-once, if it can be represented by a read-once formula,
i.e., a formula without repetition of variables. The class CR-1 of read-once functions has been extensively
studied in the literature, cf. [34, 24, 35, 28, 21, 17, 23, 11].
Definition 3.1 Define the class FLR-1 of linear read-once formulas by the following recursive form:
(1) every literal x_i and ¬x_i belongs to FLR-1;
(2) if φ ∈ FLR-1 and x_i is a variable not occurring in φ, then x_i ∧ φ, x_i ∨ φ, ¬x_i ∧ φ, ¬x_i ∨ φ ∈ FLR-1.
Call a Bf f linear read-once [12], if it can be represented by a formula in FLR-1, and let CLR-1 denote the
class of all such functions. E.g., x_1 x_2 ∨ x_3 is a linear read-once formula, while x_1 x_2 ∨ x_3 x_4 is read-once but not.
Note that two read-once formulas are equivalent if and only if they can be transformed through associativity
and commutativity into each other [21]. Hence, the latter formula does not represent a linear read-once
function.
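The conversion noted after Proposition 3.1 below (from a linear read-once formula to an equivalent 1-decision list) is a simple linear scan of the nesting structure. The sketch below uses a hypothetical nested-triple encoding (literal, connective, subformula) chosen here only for illustration.

```python
def lro_to_decision_list(phi):
    # phi is either a literal (int: +i for x_i, -i for its negation) or a triple
    # (literal, op, subformula) with op in {"and", "or"}; this encoding is an
    # assumption made for this sketch only.
    items = []
    while isinstance(phi, tuple):
        lit, op, sub = phi
        if op == "and":
            items.append((-lit, 0))   # literal false -> the conjunction is 0
        else:
            items.append((lit, 1))    # literal true  -> the disjunction is 1
        phi = sub
    items.append((phi, 1))            # the innermost literal decides
    items.append((None, 0))           # empty term: default output 0
    return items

# x1 AND (x2 OR NOT x3):
print(lro_to_decision_list((1, "and", (2, "or", -3))))
# [(-1, 0), (2, 1), (-3, 1), (None, 0)]
```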
The following is now easy to see (cf. also [5, p. 11]):
Proposition 3.1 C 1-DL = CLR-1 . 2
Note that any φ ∈ FLR-1 is convertible into an equivalent 1-decision list in linear time and vice versa.
Horn functions. We next give a characterization in terms of Horn functions. A Bf f is called double Horn
[13], if both T(f) and F(f) are closed under component-wise conjunction. The class of these functions is denoted by CDH . Note
that f is double Horn if and only if both f and its complement are Horn. E.g., f = x_1 x_2
is double Horn, because both f and its complement ¬x_1 ∨ ¬x_2 are Horn.
Alternatively, a Bf f is double Horn if and only if it has both a Horn DNF and a Horn CNF
representation. In the previous example, this is easily seen to be the case. The class of double Horn functions
has been considered in [13, 12] for giving T (f) and F (f) a more balanced role in the process of finding a
Horn extension.
We can show the somewhat unexpected result that the classes C R
DH and CLR-1 coincide (and hence C R
This gives a precise syntactical characterization of the semantically defined class C R
and, by the previous result, a semantical characterization of C 1-DL .
The proof of this result is based on the following lemma, which can be found in [13, 12]. Let
ng and any permutation of V . Then, let \Gamma - be the set of Horn terms
ng
g.
Lemma 3.2 ([13]) Let f be a Bf on variables x i , holds if and only if f can be
represented by a DNF
t2S t for some permutation - of V and S
By algebraic transformations of the formula ', it can be rewritten to a linear read-once formula of the
if d is even
d, and the variables x 11
are all different.
Since any linear read-once formula can be transformed to such a formula by changing the polarities of
variables, we obtain the next result. Denote by C rev
the class of all reversed
double Horn functions.
This lemma can also be derived from a related result on finite distributive lattices, see [25].
Theorem 3.3 C R
1-DL . 2
Thus, there exists an interesting relationship between 1-decision lists, read-once formulas, and (disguised)
Horn functions. By means of this relationship, we are able to precisely characterize the prime DNFs of
functions in C R
DH . This is an immediate consequence of the next theorem.
Theorem 3.4 Every f 2 C R
DH (equivalently, f 2 C 1-DL , f 2 CLR-1 ) has a renaming w such that f w is
positive and represented by the unique prime DNF
are pairwise disjoint positive terms and t i for are possibly
empty. In particular, (3.2) implies Conversely, every such ' of (3.2) represents an f 2 C R
(equivalently, an f 2 C 1-DL , f 2 CLR-1 ). 2
Nested differences of concepts. In [20], learning issues for concept classes have been studied which
satisfy certain properties. In particular, learning of concepts expressed as the nested difference c 1 n
has been considered, where the c i are from a concept class which
is closed under intersection. Here, a concept can be viewed as a Bf f , a concept class C as a class of Bfs
CC , and the intersection property amounts to closedness of CC under conjunction, i.e., f
Clearly, the class of Bfs f definable by a single (possible empty) term t enjoys this
property. Let CND denote the class of nested differences where each c i is a single term. Then the following
holds.
Proposition 3.5 C
(We shall prove a more general result at the end of this section in Theorem 3.14, and also give a characterization
of C mon
.) Thus, the general learning results in [20] apply in particular to the class of 1-decision
lists, and thus also to disguised double Horn functions and linear read-once functions.
Threshold and 2-monotonic functions. Let us denote by C TH the class of threshold functions and by
C 2M the class of 2-monotonic functions.
A function f on variables x threshold (or, linearly separable) if there are weights w i ,
threshold w 0 from the reals such that f(x only if
A function is 2-monotonic, if for each assignment A of size at most 2, either f A - f A or f A - f A holds,
where A denotes the opposite assignment to A [29].
The property of 2-monotonicity and related concepts have been studied under various names in the fields of
threshold logic, hypergraph theory and game theory. This property can be seen as an algebraic generalization
of the thresholdness. Note that C R
We have the following unexpected result.
Theorem 3.6 C
Proof. It is well-known that C TH ae C 2M [29], where ae is proper inclusion; moreover, also C 1-DL ' C TH
has been shown [5, 3]. (Notice that in [12], the inclusion C R
independently shown, using
the form (3.2) and proceeding similar as in [3]; the idea is to give all the variables in t j the same weight,
decreasing by index j, and to assign x i a weight so that every term in ' has same weight;
the threshold w 0 is simply the weight of a term t.)
Thus, by the results from above, it remains to show that C 2M " CR-1 ' C R
DH holds.
Recall that a function g on x 1 regular [29], if and only if g(v) - g(w) holds for all v; w 2
by C reg the class of regular functions. The
following facts are known (cf. [29]):
(a) Every regular function is positive and 2-monotonic;
(b) every 2-monotonic function becomes regular after permuting and renaming arguments.
(c) C reg is closed under arbitrary assignments A.
From (a)-(c), it remains to show that C reg " CR-1 ' CLR-1 .
We claim that any function f 2 C reg " CR-1 can be written either as
where f 0 is a regular read-once function not depending on any x i j
induction using
Theorem 3.3 gives then the desired result and completes the proof.
Since f is read-once, it can be decomposed according to one of the following two cases:
Case 1: where the f i depend on disjoint sets of variables B i and no f i can be
decomposed similarly. We show that jB i holds for at most one i, which means that f has form (i). For
this, assume on the contrary that, without loss of generality, jB 1 2. By considering an assignment
A that kills all f 3 that the function regular. Observe that any prime
implicant of g is a prime implicant of f 1 or f 2 , and that each of them has length - 2 (since f is read-once
and by the assumption on the decomposition). Let ' be the smallest index in
assume without loss of generality that ' 2 B 1 . Let t be any prime implicant of f 2 and
is the unit vector with
k. Note that l ! h and l 2 OFF (v) by definition. Then
holds. Indeed, ON(w) 6' P
1. Consequently, the vectors
v and w with
Thus g is not
regular, which is a contradiction. This proves our claim.
Case 2: where the f i depend on disjoint sets of variables B i and no f i can be decomposed
similarly. Then, the dual function f d has the form in case 1. (Recall that a formula representing the dual
of f , f is obtained from any formula representing f by interchanging - and - and 0 and 1,
respectively.) Since the dual of a regular function is also regular [29], it follows that f d has the form (i),
which implies that f has form (ii). 2
Thus, we have established the main result of this section.
Theorem 3.7 C
A generalization of this result is an interesting issue. In particular, whether for k-decision lists and read-k
functions, where k is a constant, similar relationships hold. It appears that this is not the case.
Using a counting argument, one can show that for every k ? 1, C k-DL contains some function which is
not expressible by a read-k formula. In fact, a stronger result can be obtained.
Let for any integer function F (n) denote CR the class of Bfs f(x are
definable by formulas in which each variable occurs at most F (n) - 1 times. For any class of integer
functions F, define CR
F (n)2F CR
pos and C -k
pos the classes of positive Bfs f
such that all prime implicants of f have size k (resp., at most k), where k is a constant.
Lemma 3.8 For every k ? 1, for all but finitely many n ? k there exists an n-ary f 2 C k
pos such that
kk! log n ).
Proof. Since all prime implicants of a positive function are positive, C k
pos contains
functions on n variables. On the other hand, the number of positive functions in CR is bounded by
loss of generality, a formula ' defining some positive function does
not contain negation. Assuming that all variables occur F (n) times, the formula tree has m leaves (atoms)
nodes (connectives). Written in a post-order traversal, it is a string of
of which m denote atoms and the others connectives. There are m!
ways to place the atoms in the
string, if they were all different (this simplification will suffice), times 2 m\Gamma1 combinations of connectives.
If we allow the single use of a binary connective r(x; y), which evaluates to the right argument y, we may
assume w.l.o.g. that ' contains exactly F (n) occurrences of each variable. Thus, (3.4) is an upper bound on
positive read-F (n) functions in n variables. (Clearly, ? and ? are implicitly accounted since multiple trees
for e.g. are counted.)
us compare (3.3) with (3.4). Clearly, (3.4) is bounded by
since m!
. Take the logarithm of (3.3) and (3.5) for base
2, and consider the inequality
Since
amounts to
where p(n) is a polynomial of degree k \Gamma 1. For F
kk! log n , we obtain
kk! log n and thus
kk! log n
k log n
It is easily seen that for large enough n, this inequality holds. This proves the lemma. 2
Let f( n
kk! log n ) be the class of functions F (n) such that F (n) - n
kk! log n holds for infinitely many n.
Theorem 3.9 C -k
pos 6' CR (f( n
kk! log n )), for every k ? 1.
It is easy to see that every function in C -k
pos is in CR (n is the lowest polynomial degree
pos ' CR (n k 0
Corollary 3.10 C mon
kk! log n
kk! log n )), for every k ? 1.
Consequently, any generalization of the parts in Theorem 3.7 involving read-once functions to a characterization
of k-decision lists in terms of read-k functions fails; this remains true even if we allow a polynomial
number of repetitive variable uses, where the degree of the polynomial is smaller than k \Gamma 1.
Let us now consider a possible generalization of the characterization in terms of Horn functions. Since
C k-DL contains all functions with a k-CNF (in particular, also the parity function on k variables), it is hard
to see any interesting relationships between C k-DL and combinations or restrictions of Horn functions.
For nested differences of concepts, however, there is a natural generalization of the result in Theorem 3.7.
Let CND (C) denote the class of all functions definable as nested differences of Bfs in C, and let similarly
denote CDL (C) the class of functions definable by a C-decision list, i.e., a decision list in which each term t i
except the last (t replaced some f 2 C. Then, the following holds.
Theorem 3.11 Let C be any class of Bfs. Then, CDL
contains the complements of the functions in f .
Proof. We show by induction on d - 1 that every f represented by a C-decision list of length - d is in
CND (C [f?g), and that each nested difference f 1 n (f 2 n are from C [f?g,
is in CDL (C).
(Basis) For there are two C-decision lists: (?; 0) and (?; 1) respectively. They are represented by the
nested difference ? n ? and ?, respectively. Conversely, (?; 1) represents ?, and for any function f 2 C ,
the decision list (f ; 0); (?; 1) obviously represents f ; observe that f 2 C holds.
Suppose the statement holds for d, and consider the case d + 1. First, consider a C-decision
loss of generality f 1 6j ?. By the induction hypothesis, the tail
of L can be represented by a nested difference D
defining a Bf f 0 2 CND (C). If b defines the function which can be represented by
the nested difference ? replacing f 0 by D 0 , this is a nested difference of functions in C [ f?g.
Hence, f 2 CND (C [f?g) holds. On the other hand, if represents the function
which is equivalent to :(f 1 - f 0 ); since the complement of any function g is represented by the nested
difference ? n g, we obtain from the already discussed scheme for disjunction that f is represented by the
nested difference
replacing f 0 with D 0 , we obtain a nested difference of functions in C [ f?g, hence f 2 CND (C [ f?g).
Second, let be any nested difference of functions in C [ f?g. By the
induction hypothesis, D represents a function f 0 2 CDL (C); thus, D represents
the function
It is easy to see that for any C, CDL (C) is closed under complementation [31] (replace in a decision list
each b i by to obtain a decision list for the complement function). Hence, f 0 is represented by some C-
decision list L 0 . Now, if f otherwise, the decision list
f . Hence, f 2 CDL (C).
Consequently, the induction statement holds for d + 1. This concludes the proof of the result. 2
Proposition 3.5 is an immediate corollary of this result. Moreover, we get the following result. Let C k-cl
denote the class of functions definable by a single clause with at most k literals, plus ?.
Corollary 3.12 C
Thus, CND (C k-cl ) characterizes C k-DL . However, C k-cl is not closed under conjunction, and thus, strictly
speaking, not an instance of the schema in [20]. A characterization by such an instance is nonetheless
possible. Call a subclass C 0 ' C a disjunctive base of a class C, if every f 2 C can be expressed as a
disjunction of functions f i in C 0 .
Lemma 3.13 If C 0 is a disjunctive base for C, then CDL (C 0
Proof. Suppose an item (f; b) occurs in a C-decision list L. By hypothesis, each
Replace the item by k items (f Then, the resulting decision list is equivalent to L.
Hence each C-decision list can be converted into an equivalent C 0 -decision list. 2
Theorem 3.14 C
Proof. By Corollary 3.12 and Lemma 3.13. 2
Thus, nested differences of k-CNF functions are equivalent to k-decision lists. Observe that from the proof
of this result, linear time mappings between nested differences and equivalent k-decision do exist. A similar
equivalence C does not hold. The reason is that the class of single-term functions is
not a base for C k-CNF , which makes it impossible to rewrite a C k-CNF -decision list to a k-decision list in
general.
The classes of bounded monotone decision lists can be characterized in a similar way. Let C pos
k-DNF and
C neg
k-CNF be the subclasses of C k-DNF and C k-CNF whose members have a positive DNF and a negative CNF
(i.e., no positive literal occurs), respectively.
Theorem 3.15 C mon
k-CNF
Thus, in particular, if C Lit \Gamma denotes the class of negative literals plus ?, then we obtain the following.
Corollary 3.16 C rev
4 Recognition from a Formula
A 1-decision list, and thus also its relatives, can be recognized in polynomial time from formulas of certain
classes, which include Horn formulas. The basis for our recognition algorithm is the following lemma:
Lemma 4.1 A Bf f is in C 1-DL if and only if either (ia) x
holds for some j, and (ii) f holds for all j satisfying (ia) or (ib)
(resp., (ic) or (id)). 2
Given a formula ', the recognition algorithm proceeds as follows. It picks an index j such that one of
(ia)-(id) holds, and then recursively proceeds with ' as in (ii). The details can be found in [12]. The
following result on its time complexity is immediate from the fact that the recursion depth is bounded by
n and that at each level O(n) test (ia)-(id) are made. Let for a formula ' denoted j'j its length, i.e., the
number of symbols in '.
Theorem 4.2 Let F be a class of formulas closed under assignments, such that checking equivalence of φ
to ⊤ and ⊥, respectively, can be done in O(t(n, |φ|)) time for any φ ∈ F (as usual, t(n, |φ|) is assumed to be
monotonic in both arguments). Then, deciding whether a given φ ∈ F represents an f ∈ C 1-DL can be done in O(n^2 t(n, |φ|)) time. 2
Hence, the algorithm is polynomial for many classes of formulas, including Horn formulas and quadratic
(2-CNF) formulas. Since testing equivalence to ⊤ and ⊥ for a Horn DNF and for a quadratic formula is
possible in O(|φ|) time (cf. [10, 14]), we obtain the following.
Corollary 4.3 Deciding whether a given Horn DNF or 2-CNF φ represents an f ∈ C 1-DL can be done in O(n^2 |φ|) time. 2
Theorem 4.2 has yet another interesting corollary.
Corollary 4.4 Deciding if an arbitrary positive (i.e., negation-free) formula ' represents an f 2 CLR-1 can
be done in polynomial time. 2
In fact, deciding whether a positive formula ' represents a read-once function is co-NP-complete [21, 11].
It turns out that the class of CLR-1 is a maximal subclass of CR-1 w.r.t. an inductive (i.e., context-free)
bound on disjunctions and conjunctions in a read-once formula such that deciding f 2 CR-1 from a positive
formula ' is polynomial. Indeed, it follows from results in [11, 21] that if in part (2) of Definition 3.1 either
disjunction with a term x 1 x 2 or conjunction with a clause x 1 - x 2 is allowed, then the recognition problem
is co-NP-hard.
In general, the recognition problem is unsurprisingly intractable.
Theorem 4.5 Deciding whether a given formula ' represents a function f 2 C 1-DL is co-NP-complete.
Proof. The recognition problem for CR-1 is in co-NP [2], and it is easy to see that it is also in co-NP for C 2M .
Since co-NP is closed under conjunction, membership in co-NP follows from Theorem 3.7. The hardness part is
easy: any class C having the projection property, i.e., C is closed under assignments, contains each
arity, and does not contain all Bfs, is co-NP-hard [19]; obviously, C 1-DL enjoys this property. 2
As for k-decision lists, it turns out that the recognition problem is not harder than for 1-decision lists. In
fact, membership in co-NP follows from the result that k-decision lists are exactly learnable with equivalence
queries in polynomial time (proved by Nick Littlestone, unpublished; this is also derivable from results in
[20] and Theorem 3.14), and the result [2] that for classes which are exactly learnable in polynomial time
with equivalence and membership queries (under minor constraints), the recognition problem is in co-NP.
Hardness holds by the same argument as in the proof of Theorem 4.5.
We conclude this section with some remarks concerning the equivalence and the implication problems.
The problems are, given k-decision lists L 1 and L 2 representing functions f 1 and f 2 , respectively, to decide
whether f 1 = f 2 and whether f 1 ≤ f 2 , respectively. Both problems are obviously
in co-NP, and they are co-NP-complete for any fixed k ≥ 3, since they subsume deciding whether a k-DNF formula
is a tautology. On the other hand, for k = 1 both problems are polynomial, and in fact solvable in linear
time. For the remaining case k = 2, it can be seen that the problem is also polynomial; the underlying
reason is that the satisfiability problem for 2-CNF formulas is polynomial.
5 Extension problems
The extension problem for C 1-DL has already been studied to prove the PAC-learnability of this class. It is
known [31] that it is solvable in polynomial time. We point out that the result in [31] can be further improved,
by showing that the extension problem for C 1-DL can be solved in linear time. This can be regarded as a
positive result, since for the renaming closures of classes that contain C 1-DL , such as C R Horn and C R pos ,
the extension problem is either intractable or no linear time algorithms are known.
We describe here an algorithm EXTENSION for the equivalent class CLR-1 , which uses Lemma 4.1 for
a recursive extension test; it is similar to the more general algorithm described in [31], and also a relative
of the algorithm "total recall" in [20]. Informally, it examines the vectors of T and F , respectively, to see
whether a decomposition of form L-' or L-' is possible, where L is a literal on a variable x
it discards the vectors from T and F which are covered or excluded by this decomposition, and recursively
looks for an extension at the projection of (T ; F ) to the remaining variables. Cascaded decompositions
are handled simultaneously.
Algorithm EXTENSION
Input: a pdBf (T ; F and a set I ' ng of indices.
has an extension f 2 CLR-1 , where T [I ] and
are the projections of T and F to I , respectively; otherwise, "No".
Step 1. if T [I no true vectors *)
no false vectors *)
Step 2. I
then go to Step 3 (* no extension x possible *)
else begin (* go into recursion *)
if
elseif
else (* /
Step 3. J
else begin (* go into recursion *)
if
elseif
else (* /
To find an extension of a given pdBf (T ; F ), the algorithm is called with I ng. Observe that it
could equally well consider going into the recursive calls. In particular, if
an index i is in the intersection of these sets, then both decompositions x are equally good.
Note that the execution of steps 2 and 3 alternates in the recursion. Moreover, the algorithm remains correct
if only a subset S ⊆ I of the candidate indices is considered, although this may lead to a different extension.
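The decomposition behind EXTENSION can also be prototyped directly. The sketch below is only an exhaustive backtracking illustration of the L ∧ φ / L ∨ φ decomposition described above (it tries every literal and both step types), so it runs in exponential time in the worst case, unlike the paper's linear-time algorithm; the selection conditions used are the obvious ones (a conjunction literal must hold on every remaining true vector, a disjunction literal must fail on every remaining false vector).

```python
def find_lro_extension(T, F, I=None):
    # T, F: sets of 0/1 tuples (true and false vectors); I: still-available positions.
    # Returns a linear read-once formula as "T"/"F" (constants), or a nested triple
    # (literal, "and"/"or", subformula) with 1-based signed literals; None if no
    # extension exists.
    if I is None:
        I = list(range(len(next(iter(T | F)))))
    if not T:
        return "F"
    if not F:
        return "T"
    for i in I:
        rest = [j for j in I if j != i]
        for sign in (1, 0):                      # x_{i+1} if sign == 1, its negation otherwise
            lit = (i + 1) if sign else -(i + 1)
            if all(v[i] == sign for v in T):     # conjunction step: literal true on all of T
                sub = find_lro_extension(T, {v for v in F if v[i] == sign}, rest)
                if sub is not None:
                    return (lit, "and", sub)
            if all(v[i] != sign for v in F):     # disjunction step: literal false on all of F
                sub = find_lro_extension({v for v in T if v[i] != sign}, F, rest)
                if sub is not None:
                    return (lit, "or", sub)
    return None

print(find_lro_extension({(1, 1, 0), (1, 0, 0)}, {(0, 1, 1), (1, 0, 1)}))
# (1, 'and', (2, 'or', (-3, 'and', 'T'))), i.e. x1 AND (x2 OR NOT x3)
```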
Proposition 5.1 Given a pdBf (T ; F ), where correctly finds an
It is possible to speed up the above algorithm by using proper data structures so that it runs in time
O(n(|T| + |F|)), i.e., in linear time in the input size; the technical details can be found in [12]. In particular, the data
structures assure that the same bit of the input is looked up only a few times. Roughly, counters #T ij ,
record how many vectors in T have value j at component i, such that #T ij is in
a bucket BT[#T ij ]. Moreover, a list LT ij of all the vectors v in T so that v has at component i value j is
maintained, and at each component i of v a link to the entry of v in the respective list LT ij exists. For F ,
analogous data structures are used.
Theorem 5.2 The extension problem for C 1-DL (equivalently, for CLR-1 and CND ) is solvable in time
O(n(|T| + |F|)), i.e., in linear time. 2
Thus, in the learning context we obtain the following result.
Corollary 5.3 Learning a Bf f 2 C 1-DL from an arbitrary (possibly spoiled) teaching sequence for f is
possible in linear time in the size of the input.
It turns out that our algorithm can be used as a substitute for the learner in the teacher/learner model for
C 1-DL described in [16]. That algorithm is based on the idea to build a decision list by moving an item
('; b), where ' is a literal and b an output value, from the beginning of a decision list towards the end if it is
recognized that some example is misclassified by this item. Initially, all possible items are at the beginning,
and the procedure loops until no misclassification occurs (see [16] for details); it takes O(m 2 n) many steps
if the input has size O(mn), where m is the length of the shortest decision list for the target.
The method in [16] is somewhat dual to ours, and it is easily seen that the items which remain at the
beginning of the list are those whose literals are selectable for decomposition in our algorithm. Thus, by the
greedy nature of our algorithm, it constructs from the (possibly spoiled) teaching set as in [16] exactly the
target function. This shows that C 1-DL is an efficiently teachable class; since the teaching set is constructible
from the target in linear time, we have that C 1-DL is a nontrivial class of optimal order, i.e., linear time for
both teaching and learning.
5.1 Generating all extensions
For computing all extensions of a pdBf in C 1-DL , we describe an algorithm which outputs linear read-once
formulas for these extensions, employing the decomposition property of Lemma 4.1 for CLR-1 . Roughly,
the algorithm outputs recursively all extensions with common prefix fl in their linear read-once formulas. It
is a backtracking procedure similar to EXTENSION, but far more complicated.
The reason is that multiple output of the same extension must be avoided. Indeed, syntactically different
renamed forms (3.1) may represent the same linear read-once function. This corresponds to the fact that different
1-decision lists may represent the same function. E.g., both (x_1, 1), (x_2, 1), (⊤, 0) and (x_2, 1), (x_1, 1), (⊤, 0)
represent the function x_1 ∨ x_2. In order to avoid such ambiguity, we have to single out some normal form; we adopt
for this purpose that the innermost level of the renamed form (3.1) for a linear read-once function contains
at least two literals, if the formula involves more than one level.
Our algorithm, ALL-EXTENSIONS, uses an auxiliary procedure that checks whether a given pdBf (T [I];
F [I]) has an extension in CLR-1 subject to the constraint that in a decomposition starting with a conjunction
only literals from a given set Lit - (resp., Lit - ) can be used until
a disjunction step (resp., conjunction step) is made. This constraint is used to take commutativity of the
connectives - and - into account. We make the concept of constrained extensions more precise.
For convenience, define for a set of vectors S that
v2S ON(v) and similarly that OFF
v2S OFF (v). Moreover, a literal L is -selectable (resp., -selectable) for S if either
ON(S)). The set of all
-selectable (resp., -selectable) literals for S is denoted by Sel-Lit - (S) (resp., by Sel-Lit - (S)).
Lit -constraint: An extension f 2 CLR-1 of a pdBf (T [I]; F [I]), I ' ng, is a Lit -constrained
extension, where Lit - is a set of -selectable literals for T [I], if the linear read-once formula of f
has form L 1 and ff is a disjunction
Lit -constraint: An extension f 2 CLR-1 of a pdBf (T [I]; F [I]), I ' ng, is a Lit -constrained
extension, where Lit - is a set of -selectable literals for F [I], if the linear read-once formula of f
has form L 1 and ff is a
conjunction
We use two symmetric algorithms, REST-EXT-∧ and REST-EXT-∨, which handle the cases where we
look for a nontrivial (i.e., different from ⊤ and ⊥) Lit∧-constrained (resp., Lit∨-constrained) extension.
The algorithm for the conjunction case is as follows.
Algorithm REST-EXT-∧
ng, a set Lit - of -selectable literals for T [I ].
Output: "Yes", if (T [I ]; F [I]) has a Lit -constraint extension f 2 CLR-1 and f 6= ?; ?; otherwise, "No".
Step 1. if
else output "No" (exit);
Step 2. I \Sigma := fi j x
(* try a maximal -decomposition; use first all literals not occurring opposite *)
I \Sigma
output "Yes" (exit) (* extension L 1 exists *)
output "No" (exit) ;
(* (T [I ]; F [I]) has an extension in CLR-1 , F has at least 2 jI \Sigma j vectors *)
cand := empty list; (* list of -decomp. candidates
for each subset J ' I \Sigma do begin
Lit J
insert Lit J
is -decomp.
for each L 2 Lit J
do insert Lit J
while cand is not empty do begin
remove a set A from cand;
I A := I n fV (L) j L 2 Ag;
-decompose by A *)
if some literal is -selectable for FA [I A ] then
(* test for extension
if jF A [I A ]j 6= 2 jI A j\Gamma1 then output "Yes" (exit)
else for each L 2 A do insert A n fLg into cand;
output "No" (exit). 2
The algorithm for the case of Lit∨-constrained extensions, REST-EXT-∨, is completely symmetric. There,
the roles of T and F , as well as of "∧" and "∨", are interchanged. Since the formulation of the algorithm is
straightforward, we omit it here.
Lemma 5.4 REST-EXT-∧ correctly answers whether a given pdBf (T [I]; F [I]) has a Lit∧-constrained
extension in CLR-1 , and runs in time O(n(jT
Proof. We first prove correctness. It is easy to see that Step 1 is correct: If L 2 Lit -
L is an extension. If jT [I]j 6= 2 jIj , then some vector v 2 f0; 1g n [I] n T [I] exists; in this case,
is an extension, which is a Lit -constrained extension if jIj - 2. Thus the
algorithm correctly outputs "Yes" in Step 1. Suppose it outputs "No" in Step 1. Then Lit
which means that only one variable is eligible, or jT which means that is the
only possible extension. Thus, the algorithm correctly outputs "No".
Now consider Step 2. Observe that this step is only reached if F ≠ ∅ holds. Suppose then the algorithm
outputs "Yes". If it does so in the first "if" statement, then Lit -
be the literals L
I \Sigma . If I holds since otherwise F 0 [I \Sigma
f;g, and hence jF 0 [I \Sigma which is a contradiction. Thus L 1 is an extension
of (T [I]; F [I]) which is clearly Lit -constrained; therefore, the output is correct. If I \Sigma 6= ;, then let
and consider the
' is a Lit -constrained extension of (T [I]; F [I]); hence, also in this case the output is correct. Otherwise,
"Yes" is output in the "while" loop. Then, some literal L is -selectable for FA [I A ] and jF A [I A ]j 6= 2 jI A j\Gamma1 .
We claim that FA 6= ; holds. Let us assume the contrary. If which is
a contradiction. Otherwise, Lit - 6= ;, and
L2A L represents an extension in CLR-1 . It is easy to see that
A ' Lit J
L2Lit J
L is an extension in CLR-1 as well. Thus F 0 in the first
"if" statement of Step 2 satisfies jF 0 [I \Sigma ]j 6= 2 jI \Sigma j because otherwise
a contradiction. Therefore, in case of FA = ;, the "while" loop would not have been entered since the
algorithm halts in the first "if" statement of Step 2.
Now, we can say that (T [I A ]; FA [I A ]) has an extension in CLR-1 represented by a linear read-once formula
since FA 6= ; and (T [I]; F [I]) has an extension in CLR-1 (oth-
erwise, the "while" loop would not have been entered). Indeed, for I I A n fV (L )g, we have (i)
consequently, some extension fi
of
must be different from ?; ?. Thus, it follows
that (T [I]; F [I]) has an extension
L2A L (L - fi). Since ' is clearly Lit -constrained, the output is
correct.
On the other hand, suppose the algorithm outputs "No". Towards a contradiction, assume that (T [I];
F [I]) has a Lit -constrained extension ' in CLR-1 . Assume first that . This means
are the literals L j in ' such that V (L
I \Sigma , and that
are all other such literals in Lit - with that property. Since ' is an extension, the set
empty. Hence, also the set S
is an extension of (;; F 0 [I \Sigma ]),
which implies jF 0 [I \Sigma ]j 6= 2 jI \Sigma j . Since, as already observed, Lit - 6= ;, the algorithm outputs "Yes" in the
first "if". This is a contradiction.
Thus, ' must be of form fi). It is easy to see that L is -selectable for FA [I A ]
where either (a)
The algorithm inserts A to cand before the "while" loop. If A 6= fL then the algorithm must
find in the "while" loop jF A [I A it does not output "Yes"), and hence it inserts (among
possible others) a subset A 1 n fL g ' fL into cand such that L is -selectable for FA 1 [I A 1 ],
and so on. Eventually, the algorithm must encounter A
and jF A k [I A k ]j 6= 2 jI A k
therefore, the algorithm outputs "Yes", which is a contradiction. This proves the
correctness of the algorithm.
Concerning the bound on the execution time, it is clear that Step 1 can be done in O(jT j) time. In Step 2,
I \Sigma is computable in O(n) time, and F 0 in O(njF time. The test jF 0 [I \Sigma ]j 6= 2 jI \Sigma j can be done in O(njF
time, by computing the set F 0 [I \Sigma ] in O(njF determining its size. The call of EXTENSION can
be evaluated in O(n(jT (Theorem 5.2). Thus, the first phase of Step 2 takes O(n(jT
time.
The remaining second phase uses the list cand, which can be easily organized such that lookup, insertion,
and removal of a set A takes O(n) time. Multiple consideration of the same set A can be avoided by
accumulating all sets A which had been inserted into cand so far in a list, for which lookup and insertion
can be done in O(n) time.
Consider the body of the outer "for" loop in the second phase. The statements before the "while" loop
take O(n 2 ) time; in order to assess the time for the "while" loop, we note the following fact.
Fact 5.1 For a fixed J ' I \Sigma , the algorithm encounters during execution of the "while" loop at most njF j
different sets A such that some literal L is -selectable for FA [I A ] and jF A [I A
To show this, let I
an evidence for A, if w 2 T (L ) for
every L 2 A and w 2 F (L ) for every L 2 Lit J
must have an evidence w.
The same w can serve as an evidence for at most n of the encountered A's. Indeed, suppose different such
sets
- with (unique) -selectable literals
resp. FA 2
have the same evidence w.
This implies that either L 1 2 Lit J - or L 2 2 Lit J - . In fact, both must hold. To verify this,
suppose by contradiction that w.l.o.g. L
- . This implies A g. Now (T [I]; F [I]) has the
extensions
consequently, it also
has an extension L 1;1
which is a contradiction and proves
- . It follows that A 1 [ fL 1
Obviously, at most n \Gamma 1 sets A 2 different from A 1 are possible; if T 6= ;, then both
Lit J
are impossible, and hence no A 2 different from A 1 exists. It follows that the number of encountered
sets A as in Fact 5.1 is bounded by njF j (resp., jF j).
In the body of the "while" loop, I A can be computed in O(n) time, and FA resp. FA [I A ] in O(njF
maintaining counters, the subsequent tests whether some literal is -selectable for FA [I A ] and jF A [I A ]j 6=
are straightforward in O(n) resp. constant time. The inner "for" loop can be done in O(n 2 ) time.
Thus, the body of the "while" loop takes O(n(jF A is as in Fact 5.1, and O(njF
otherwise.
Consequently, for a fixed J ' I \Sigma , in total O(njF jn(jF j +n)) time is spent in the while-loop for A's as in
Fact 5.1, and O(n 2 jF jnjF time for all other A's, since Fact 5.1 implies that there are at most n+n 2
of the latter inserted to cand. Thus, for fixed J ' I \Sigma , the "while" loop takes O(n 3 jF
time. Altogether, the body of the outer "for" loop takes O(n 3 jF
Since the "for" loop is executed at most 2 jI \Sigma j - jF j times, it follows that the second phase of Step 2 takes
takes in total O(n(jT
conclude similarly, exploiting Fact 5.1 and I that the second phase of Step 2 takes O(n 2 jF
and that Steps 1 and 2 together take O(n(jT This proves the lemma. 2
We remark that selecting a set A of maximum size for removal from cand in the while-loop of REST-
EXT-∧ is a plausible heuristic for keeping the running time short in general. Organization of cand such
that maintenance and selection of A stay within O(n) time is straightforward.
Similarly, we obtain a symmetric result for the case of disjunction.
Lemma 5.5 REST-EXT-∨ correctly answers whether a given pdBf (T [I]; F [I]) has a Lit∨-constrained
extension in CLR-1 , and runs in time O(n(n 2 jT
The main algorithm, ALL-EXTENSIONS, is described below. In the algorithm, we store prefixes of a
linear read-once formula ' in a string, in which parentheses are omitted (they are redundant and can be
reinserted unambiguously); for technical convenience, we add "x 0 -" in front of this prefix. For example,
consider the )). The string representations of the proper prefixes fl
of ' are
represented by
Algorithm ALL-EXTENSIONS
Input: A pdBf
extensions of (T ; F ) in CLR-1 .
Step 1. if special treatment of extension ? *)
special treatment of extension ? *)
Step 2. fl := "x 0 -"; I :=
Procedure ALL-AUX
Input: A pdBf (T ; F ), prefix fl of a linear read-once formula, set I of available variable indices and sets of literals
allowed for decomposition.
Output: Formulas FLR-1 for all extensions f 2 CLR-1 of (T ; F ) having fl as prefix, and the literal plus operator after
is according to Lit - ; Lit - .
Step 1. (* Expand fl by a conjunction step *)
while there is a literal L 2 Lit - do begin
I 0 := I n V (L); (* variable of L, V (L), is no longer available *)
literal L for further decomposition *)
complementary literal of L *)
output the extension "/ - L";
then begin (* expand fl by "L-" *)
endftheng
endfwhileg.
Step 2. (* Expand fl by a disjunction step *)
while there is a literal L 2 Lit - do begin
I 0 := I n V (L); (* variable of L, V (L), is no longer available *)
literal L for further decomposition *)
complementary literal of L *)
output the extension " - L";
then begin (* expand fl by "L-" *)
endftheng
endfwhileg. 2
An example is provided in the appendix.
Theorem 5.6 Algorithm ALL-EXTENSIONS correctly outputs formulas
extensions polynomial delay, where / i 6j / j for
Proof. Correctness of the algorithm can be shown by induction on jIj.
The delay between consecutive outputs is O(n 5 (jT
correctly answers in O(n(jT j and the other steps in the bodies
of the loops in Steps 1 and 2 of ALL-AUX take O(njF j) and O(njT j), respectively. There are at most O(n 2 )
subsequent calls of ALL-AUX which do not yield output, because the recursion depth is bounded by n and
recursive calls of ALL-AUX are only made if they lead to output, which is checked using REST-EXT-
and/or REST-EXT-. Hence, the delay is bounded by O(n 5 (jT the first and the last output
happen within this time bound, the result follows. 2
Observe that in [13], a similar algorithm for computing extensions in CDH has been described. However,
the problem there is simpler, since (3.1) is a unique form for each function in CDH , and, moreover, the
set Lit∧ of ∧-selectable literals (resp., set Lit∨ of ∨-selectable literals) may not contain both a literal x_i
and its opposite ¬x_i. If these conditions do not hold, solving the problem in
polynomial time is much more difficult and needs further insights. The present algorithm is thus a nontrivial
generalization of the one in [13].
Improvements to ALL-EXTENSIONS can be made by using appropriate data structures and reuse of
intermediate results. However, it remains to be seen whether a substantially better algorithm, in particular, a
linear time delay algorithm, is feasible.
Theorem 5.6 has important corollaries.
Corollary 5.7 There is a polynomial delay algorithm for enumerating the (unique) prime DNFs for all
extensions of a pdBf (T ; F ) in CLR-1 (resp., in C 1-DL , CND , and C R
Proof. By Theorem 3.4, the prime DNF for a linear read-once formula ' can be obtained from ' in O(n 2 )
time. 2
Denote by C(n) the class of all Bfs of n variables in C. Then, if we apply the algorithm to (T, F) with
T = F = ∅, it enumerates all members of CLR-1 (n). Hence,
Corollary 5.8 There is a polynomial delay algorithm for enumerating the (unique) prime DNFs of all
functions in C 1-DL (n), CND (n), and C R DH (n). 2
Transferred to the learning context, we obtain:
Corollary 5.9 Algorithm ALL-EXTENSIONS outputs all hypotheses f 2 CLR-1 which are consistent with
a given sample S with polynomial delay. Similar algorithms exist for C 1-DL , CND , and C R
DH .
As a consequence, if the sample almost identifies the target function, i.e., there are only a few (up to polynomially
many) different hypotheses consistent with the sample S, then they can all be output in polynomial
time in the size of S.
As another corollary to Theorem 5.6, checking whether a pdBf (T, F) uniquely identifies one linear read-once
function is tractable.
Corollary 5.10 Given a pdBf (T ; F ), deciding whether it has a unique extension f 2 CLR-1 (equivalently,
DH ) is possible in polynomial time.
For learning, this gives us the following result.
Corollary 5.11 Deciding whether a given sample S is a teaching sequence for CLR-1 (equivalently, for
C 1-DL and CND ) is possible in polynomial time.
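For very small n, the unique-extension test behind Corollaries 5.10 and 5.11 can be mimicked by brute force, counting the total functions that are consistent with the sample and belong to the class; the sketch below reuses the find_lro_extension prototype given earlier in this section as a membership test (note that it also accepts the two constant functions) and is exponential, unlike the polynomial-time procedure above.

```python
from itertools import product

def count_consistent_lro(T, F, n):
    # Count total Boolean functions on n variables consistent with (T, F) whose
    # truth table passes the brute-force linear read-once test above.
    vectors = list(product((0, 1), repeat=n))
    others = [v for v in vectors if v not in T and v not in F]
    count = 0
    for bits in product((0, 1), repeat=len(others)):
        Tf = set(T) | {v for v, b in zip(others, bits) if b}
        Ff = set(vectors) - Tf
        if find_lro_extension(Tf, Ff) is not None:
            count += 1
    return count

# (T, F) behaves as a teaching sequence for the class exactly when the count is 1.
print(count_consistent_lro({(1, 1, 0), (1, 0, 0)}, {(0, 1, 1), (1, 0, 1)}, 3))
```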
Example 5.1 Consider the pdBf (T ; F ), where (001)g. The algorithm
ALL-EXTENSIONS outputs the single extension which corresponds to the 1-decision list
In fact, ψ is the unique linear read-once extension of (T, F). Observe that
only extensions f ∈ CLR-1 of form x_3 ∧ φ are possible, as x_3 is the only ∧- resp. ∨-selectable literal; since
no term x 3 x j can be an implicant of an extension and T contains two vectors, it follows that x 3
the only extension of (T ; F ) in CLR-1 . 2
6 Conclusion
In this paper, we have considered the relation between decision lists and other classes of Boolean functions.
We found that there are a number of interesting and unexpected relations between 1-decision lists, Horn
functions, and intersections of classes with read-once functions. These results provide us with syntactical
and semantical characterizations of an operationally defined class of Boolean functions, and vice versa with
an operational and syntactical characterization of intersections of well-known classes of Boolean functions.
Moreover, they allow us to transfer results obtained for one of these particular classes to the corresponding
others. In this way, the characterizations may be useful for deriving future results.
On the computational side, we have shown that some problems for 1-decision lists and their relatives
are solvable in polynomial time; in particular, finding an extension of a partially defined Boolean function
(in terms of learning, a hypothesis consistent with a sample) in this class is feasible in linear time, and
enumeration of all extensions of a pdBf in this class (in terms of learning, all hypotheses consistent with
a sample) is possible with polynomial delay. Furthermore, the unique extension problem, i.e., recognition of
a teaching sequence, is polynomial.
Several issues remain for further research. As we have shown, a simple generalization of the characterizations
of 1-decision lists in terms of other classes of Boolean functions is not possible except in a single
case. It would be thus interesting to see under which conditions such a generalization could be possible.
Observe that the inclusion C k-DL ' C TH (k) is known [3], where C TH (k) denotes the functions definable as
a linearly separable function where terms of size at most k replace variables. A precise, elegant description
of the C k-DL fragment within C TH (k) would be appreciated; as we have shown, intersection with read-k
functions is not apt for this. Moreover, further classes of Boolean functions and fragments of well-known
such classes which characterize k-decision lists would be interesting to know.
Other issues concern computational problems. One is a possible extension of the polynomial-time delay
enumeration result for 1-decision list extensions to k-decision lists for k > 1. While finding a single
extension is possible in polynomial time [31], avoiding multiple output of the same extension is rather
difficult, and a straightforward generalization of our algorithm is not at hand. Intuitively, for terms of size
k > 1 the interplay between different terms plays a role and makes checking whether items of a decision list are redundant intractable
in general. We may thus expect that no such generalization of our algorithm for k > 1 is possible.
Acknowledgments. The authors thank Martin Anthony for pointing out the equivalence of CLR-1 and
C 1-DL . Moreover, we greatly appreciate the comments given by the anonymous STACS '98 reviewers on the
previous version of this paper. We also thank Leonid Libkin for pointing out an alternative derivation
of Lemma 3.2 and for sending relevant papers, and Nick Littlestone for clarifying the source of the exact
learnability result for C k-DL .
--R
Complexity Theoretic Hardness Results for Query Learn- ing
Threshold Functions
Computational Learning Theory.
On specifying Boolean functions by labelled examples.
PAC Learning with Irrelevant Attributes.
Partial Occam's Razor and Its Applications.
Generating Boolean
Double Horn Functions.
On the Complexity of Timetable and Multicommodity Flow Problems.
On the Complexity of Teaching.
Teaching a Smarter Learner.
Criteria for Repetition-Freeness of Functions in the Algebra of Logic
Lower Bounds on Learning Decision Lists and Trees.
On the Geometric Separability of Boolean Functions.
Learning Nested Differences of Intersection-Closed Concept Classes
The Complexity of Very Simple Boolean Formulas With Applications.
A computational model of teaching.
Combinatorial Characterization of Read-Once Formulae
On Repetition-Free Contact Schemes and Repetition-Free Superpositions of the Functions in the Algebra of Logic
Separatory Sublattices and Subsemilattices.
Learning When Irrelevant Attributes Abound: A New Linear Threshold Algorithm.
Horn Extensions of a Partially Defined Boolean Function.
Functions Computed by Monotone Boolean Formulas With No Repeated Variables.
Threshold Logic and its Applications.
Induction of Decision Trees.
Learning Decision Lists.
Learning with a Helpful Teacher.
Teachability in Computational Learning.
On the Theory of Repetition-Free Contact Schemes
A Theory of the Learnable.
--TR
A theory of the learnable
On generating all maximal independent sets
Functions comuted by monotone boolean formulas with no repeated variables
The complexity of very simple Boolean formulas with applications
Cause-effect relationships and partially defined Boolean functions
Learning Nested Differences of Intersection-Closed Concept Classes
Teachability in computational learning
Computational learning theory
A computational model of teaching
Combinatorial characterization of read-once formulae
Exact transversal hypergraphs and application to Boolean MYAMPERSANDmgr;-functions
On the complexity of teaching
On specifying Boolean functions by labelled examples
Lower bounds on learning decision lists and trees
Teaching a smarter learner
and best-fit extensions of partially defined Boolean functions
Double Horn functions
Complexity theoretic hardness results for query learning
Horn Extensions of a Partially Defined Boolean Function
Learning Decision Lists
Induction of Decision Trees
Learning Quickly When Irrelevant Attributes Abound
Partial Occam''s Razor and Its Applications | polynomial delay enumeration;extension problem;boolean functions;decision lists;teaching sequence |
566245 | Asymptotic behavior in a heap model with two pieces. | In a heap model, solid blocks, or pieces, pile up according to the Tetris game mechanism. An optimal schedule is an infinite sequence of pieces minimizing the asymptotic growth rate of the heap. In a heap model with two pieces, we prove that there always exists an optimal schedule which is balanced, either periodic or Sturmian. We also consider the model where the successive pieces are chosen at random, independently and with some given probabilities. We study the expected growth rate of the heap. For a model with two pieces, the rate is either computed explicitly or given as an infinite series. We show an application for a system of two processes sharing a resource, and we prove that a greedy schedule is not always optimal. | Introduction
Heap models have recently been studied as a pertinent model of discrete event
systems, see Gaubert & Mairesse [19,20] and Brilman & Vincent [12,13]. They
provide a good compromise between modeling power and tractability. As far
as modeling is concerned, heap models are naturally associated with trace
monoids, see [31]. It was proved in [20] that the behavior of a timed one-
bounded Petri net can be represented using a heap model (an example appears
in
Figure
1). We can also mention the use of heap models in the physics of
This work was partially supported by the European Community Framework IV
programme through the research network ALAPEDES ("The ALgebraic Approach
to Performance Evaluation of Discrete Event Systems")
surface growth, see [5]. The tractability follows essentially from the existence
of a representation of the dynamic of a heap model by a (max,+) automaton,
see [13,19].
A heap model is formed by a finite set of slots R and a finite set of pieces
A. A piece is a solid block occupying a subset of the slots and having a poly-
omino shape. Given a ground whose shape is determined by a vector of R R
and a word we consider the heap obtained by piling up
the pieces a in this order, starting from the ground, and according to
the Tetris game mechanism. That is, pieces are subject to vertical translations
and occupy the lowest possible position above the ground and previously piled
pieces. Let y(w) be the height of the heap w. We define the optimal growth
rate as ρ_min = lim inf_n min_{w∈A^n} y(w)/n. An optimal schedule is an infinite word
u such that lim_n y(u_[n])/n = ρ_min, where u_[n] is the prefix of length n of
u. An optimal schedule exists under minimal conditions (Proposition 4). We
can define similarly the quantity ρ_max and the notion of worst schedule. The
problem of finding a worst schedule is completely solved, see [17,19]. Finding
an optimal schedule is more difficult, the reason being the non-compatibility
of the minimization with the (max; +) dynamic of the model. In [21], it is
proved that if the heights of the pieces are rational, then there exists a periodic
optimal schedule. If we remove the rationality assumption, the problem
becomes more complicated. Here we prove, and this is the main result of the
paper, that in a heap model with two pieces, there always exists an optimal
schedule which is balanced, either periodic or Sturmian. We characterize the
cases where the optimal is periodic and the ones where it is Sturmian. The
proof is constructive, providing an explicit optimal schedule.
As will be detailed below, a heap model can be represented using a specific
type of (max,+) automaton, called a heap automaton. A natural question is the
following: given a general (max,+) automaton over a two letter alphabet, does
there always exist an optimal schedule which is balanced (for an automaton
defined by the triple (α, μ, β), set y(w) = α ⊗ μ(w) ⊗ β and define an optimal
schedule as above)? The answer to this question is no, which emphasizes the
specificity of heap automata among (max,+) automata. A counter-example is
provided in Figure 4.
We also consider random words obtained by choosing successive pieces independently,
with some given distribution. We denote by ρ_E the average growth
rate of the heap. Computing ρ_E is in general even more difficult than computing
ρ_min. In [21], ρ_E is explicitly computed if the heights of the pieces are
rational and if no two pieces occupy disjoint sets of slots. Here, for models with
two pieces, we obtain an explicit formula for ρ_E in all cases but one, where ρ_E
is given as an infinite series.
To further motivate this work, we present a manufacturing model studied by
Gaujal & al [23,22]. There are two types of tasks to be performed on the same
a
a b
Fig. 1. One-bounded Petri net and the associated heap model.
machine used in mutual exclusion. Each task is cyclic and a cycle is constituted
by two successive activities: one that requires the machine (durations: α_1 and β_1,
respectively) and one that does not (durations: α_2 and β_2, respectively).
Think for instance of the two activities as being the processing and the packing.
This jobshop can be represented by the timed one-bounded Petri net of Figure
1. The durations α_1, α_2, β_1, β_2 are the holding times of the places. As
detailed in [20], an equivalent description is possible using the heap model
represented in Figure 1. The height of a heap a_1 · · · a_n corresponds
to the total execution time of the sequence of tasks a_1, . . . , a_n executed in this
order. An infinite schedule is optimal if it minimizes the average height of the
heap, or equivalently if it maximizes the throughput of the Petri net. We do
not make any restriction on the schedules we consider. In particular we do not
impose a frequency for tasks a and b. As a justification, imagine for instance
that the two tasks correspond to two different ways of processing the same
object. We prove in x7.4 that if ff
then there is a Sturmian optimal schedule; otherwise there exists a balanced
periodic optimal schedule. We also show in x7.5 that the greedy schedule is
not always optimal.
Assume now that in the model of Figure 1, the successive tasks to be executed
are chosen at random, independently, and with some probabilities p(a) and
p(b). If α_1 or β_1 is strictly positive, then we obtain an exact formula for ρ_E.
This enables us in particular to maximize the throughput over all possible choices
for p(a) and p(b), see x8 for an example.
Let us compare the results of this paper with other cases where optimality
is attained via balance. In Hajek [25], there is a flow of arriving customers
to be dispatched between two queues and the problem is to find the optimal
behavior under a ratio constraint for the routings. The author introduces the
notion of multimodularity, a discrete version of convexity, and proves that a
multimodular objective function is minimized by balanced schedules. Variants
and extensions to other open queueing or Petri net models have been carried
out in [1,2], still using multimodularity. In a heap model however, one can
prove that the heights are not multimodular. In [22,23], the authors consider
the model of Figure 1. They study the optimal behavior and the optimal
behavior under a frequency constraint for the letters. Balanced schedules are
shown to be optimal and the proofs are based on various properties of these
sequences. We consider a more general model. For the unconstrained problem,
we prove in Theorem 14 that balanced schedules are again optimal. On the
other hand, under frequency constraints, we show in x7.6 that optimality is not
attained via balanced words anymore. Our methods of proof are completely
different from the ones mentioned above.
The paper is organized as follows. In x2 and x3, we define precisely the model
and the problems considered. We prove the existence of optimal schedules under
some mild conditions in x3.1. In x4, we recall some properties of balanced
words. We introduce in x5 the notions of completion of contours and completion
of pieces in a heap model. We prove in x6 that it is always possible to
study a heap model with two pieces by considering an associated model with
at most 3 slots. We provide an enumeration of all the possible simplified mod-
els: there are 4 cases. In x7.1-7.4, we prove the result on optimal schedules,
recalled above, by considering the four cases one by one. Greedy scheduling
is discussed in x7.5, and ratio constraints in x7.6. In x8, we study the average
growth rate.
Consider a finite set R of slots and a finite set A of pieces. A piece a 2 A is a
rigid (possibly non-connected) "block" occupying a subset R(a) of the slots.
It has a lower contour and an upper contour which are represented by two row
vectors l(a) and u(a) in (R[f\Gamma1g) R with the convention l(a)
if r 62 R(a). They satisfy u(a) - l(a). We assume that each piece occupies at
least one slot, 8a 2 A;R(a) 6= ;, and that each slot is occupied by at least
one piece, 8r 2 R;9a 2 A; r 2 R(a). The shape of the ground is given by a
vector I 2 R R . The 6-tuple constitutes a heap model.
The mechanism of the building of heaps was described in the introduction. It
is best understood visually and on an example.
Example 1 We consider the following heap model.
are strictly positive reals. We have represented, in
Fig. 2, the heap associated with the word w = ababa.
I
a
a
a
a
a
a
Fig. 2. Heap associated with the word ababa.
We recall some standard definitions and notations. We denote by 1{A} the
function which takes value 1 if A is true and 0 if A is false. We denote by ℝ₊
the set of non-negative reals, and by ℕ* and ℝ* the sets ℕ\{0} and ℝ\{0}.
Let A be a finite set (alphabet). We denote by A* the free monoid on A, that
is, the set of (finite) words equipped with concatenation. The empty word is
denoted by e. The length of a word w is denoted by |w| and we write |w|_a for
the number of occurrences of the letter a in w. We denote by alph(w) the set
of distinct letters appearing in w. An infinite word (or sequence) is a mapping
u : ℕ* → A. The set of infinite words is denoted by A^ω. An infinite word
u is periodic if there exists l ∈ ℕ* such that u_{n+l} = u_n for all n. In
this case, we write u = (u_1 ⋯ u_l)^ω. We denote by u[n] the prefix
of length n of u.
When A is the set of pieces of a heap model, (infinite) words will also be
called schedules. We also interpret a word w ∈ A* as a heap, i.e. as
a sequence of pieces piled up in the order given by the word.
The upper contour of the heap w is a row vector x_H(w) in ℝ^R, where x_H(w)_r
is the height of the heap on slot r. By convention, x_H(e) = I, the shape of the
ground. The height of the heap w is y_H(w) = max_r x_H(w)_r.
We recall that a set K equipped with two operations ⊕ and ⊗ is a semiring
if ⊕ is associative and commutative, ⊗ is associative and distributive with
respect to ⊕, there is a zero element 0 (a ⊕ 0 = a, a ⊗ 0 = 0 ⊗ a = 0) and a
unit element 1 (a ⊗ 1 = 1 ⊗ a = a).
The set ℝmax = (ℝ ∪ {−∞}, max, +) is a semiring, called the (max,+) semiring.
From now on, we use the semiring notations: ⊕ = max, ⊗ = +, 0 = −∞, 1 = 0. The
semiring ℝmin is obtained from ℝmax by replacing max by min
and −∞ by +∞. The subsemiring ({0, 1}, ⊕, ⊗) is the Boolean semiring.
We use the matrix and vector operations induced by the semiring structure.
For matrices A, B of appropriate sizes, (A ⊕ B)_{ij} = A_{ij} ⊕ B_{ij} and
(A ⊗ B)_{ij} = ⊕_k A_{ik} ⊗ B_{kj}. We usually omit
the ⊗ sign, writing for instance AB instead of A ⊗ B. On the other hand, the
operations denoted by +, −, × and / always
have to be interpreted in the conventional algebra. We define the 'pseudo-
norm' |A|_⊕ = ⊕_{ij} A_{ij}. We denote by 0, resp. 1, the vector or matrix whose
elements are all equal to 0, resp. 1 (with the dimension depending on the
context).
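Since all the computations below take place in ℝmax, a small Python sketch of these operations may help the reader; the function names (mp_add, mp_mul, mp_norm) are ours and not from the paper.

NEG_INF = float("-inf")          # the zero element of Rmax

def mp_add(A, B):
    # entrywise ⊕: (A ⊕ B)_ij = max(A_ij, B_ij)
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mp_mul(A, B):
    # (max,+) product: (A ⊗ B)_ij = max_k (A_ik + B_kj)
    return [[max(A[i][k] + B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mp_norm(A):
    # pseudo-norm |A|_⊕ = max over all entries
    return max(max(row) for row in A)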
For matrices A and B of appropriate sizes, the proof of the following inequality
is immediate:
|A ⊗ B|_⊕ ≤ |A|_⊕ ⊗ |B|_⊕ .   (2)
For matrices U, V and A of appropriate sizes and such that all the entries
of U, V, UA and VA are different from 0, the following non-expansiveness
inequality holds:
| |UA|_⊕ − |VA|_⊕ | ≤ max_{ij} | U_{ij} − V_{ij} | .   (3)
Given an alphabet A, a (max,+) automaton of dimension k is a triple
U = (α, μ, β), where α ∈ ℝmax^{1×k} and β ∈ ℝmax^{k×1} are the initial and final vectors
and μ : A* → ℝmax^{k×k} is a monoid morphism. The morphism μ
is entirely defined by the matrices μ(a), a ∈ A, and for w = w_1 ⋯ w_n
we have μ(w) = μ(w_1) ⊗ ⋯ ⊗ μ(w_n) (product of matrices in ℝmax). The map
y : A* → ℝmax, y(w) = α μ(w) β, is said to be recognized by the (max,+) au-
tomaton. A (max,+) automaton is a specialization to ℝmax of the classical
notion of an automaton with multiplicities, see [8,16].
An automaton (α, μ, β) of dimension k over the alphabet A is represented
graphically by a labelled digraph. The graph has k nodes; if μ(a)_{ij} ≠ 0,
there is an arc from node i to node j with labels a and μ(a)_{ij}; if α_i ≠ 0,
there is an ingoing arrow at node i with label α_i, and if β_j ≠ 0 then there is
an outgoing arrow at node j with label β_j. Examples appear in Figures 9, 10
and 11.
For each piece a of a heap model H, we define the matrix M(a) ∈ ℝmax^{R×R} by
M(a)_{ij} = u(a)_j − l(a)_i if i, j ∈ R(a);  M(a)_{ii} = 1 (= 0) if i ∉ R(a);  M(a)_{ij} = 0 (= −∞) otherwise.   (4)
Example 2 In the model considered in Figure 1 and Example 1, the matrices
associated with the pieces are obtained by applying (4) to the contours of a and b.
The entries have to be interpreted in ℝmax.
Variants of Theorem 3 are proved in [13,19,20].
Theorem 3 Let H = (A, R, R(·), l, u, I) be a heap model. For a word
w = w_1 ⋯ w_n ∈ A*, the upper contour and the height of the heap satisfy (products in ℝmax):
x_H(w) = I M(w_1) ⋯ M(w_n)  and  y_H(w) = I M(w_1) ⋯ M(w_n) 1 .
More formally, y_H is recognized by the (max,+) automaton (I, M, 1).
From now on, we identify the heap model and the associated (max,+) au-
tomaton, writing either H = (A, R, R(·), l, u, I) or H = (I, M, 1). We also call
H a heap automaton.
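To make the piling mechanism behind Theorem 3 concrete, here is a minimal Python sketch (our own illustration, not code from the paper): a piece is described by its slot set and its lower and upper contours, and dropping it on a contour x raises it by the maximal overlap on its slots, exactly as the matrix formula (4) expresses.

def pile(x, piece):
    # drop `piece` = (slots, lower, upper) on the contour x (dict slot -> height);
    # the piece rests at shift = max over its slots of (x_r - l_r), as in (4)
    slots, lower, upper = piece
    shift = max(x[r] - lower[r] for r in slots)
    y = dict(x)
    for r in slots:
        y[r] = shift + upper[r]
    return y

def height_of_word(ground, pieces, word):
    # returns x_H(w) and y_H(w) for a word over the piece alphabet
    x = dict(ground)
    for letter in word:
        x = pile(x, pieces[letter])
    return x, max(x.values())

Iterating pile over the letters of a word yields x_H(w), and taking the maximal entry gives y_H(w).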
3 Asymptotic Behavior
Consider a (max,+) automaton U = (α, μ, β) and its recognized map y. We
define the optimal growth rate (in ℝ ∪ {−∞}) as:
ρ_min(U) = lim inf_n  min_{|w|=n} y(w)/n .   (6)
An optimal schedule is a word w ∈ A^ω such that lim_n y(w[n])/n = ρ_min(U).
We define the worst growth rate as ρ_max(U) = lim sup_n max_{|w|=n} y(w)/n.
A worst schedule is defined accordingly.
Consider a probability law {p(a), a ∈ A} (p(a) ∈ [0, 1], Σ_a p(a) = 1).
Random words are built by choosing the successive letters independently and
according to this law. Let p(w), |w| = n, be the probability for a random
word of length n to be w. We have p(w) = p(w_1) × ⋯ × p(w_n).
When it exists, we define the average growth rate as:
ρ_E(U) = lim_n (1/n) Σ_{|w|=n} p(w) × y(w) .   (7)
The optimal problem consists in evaluating ρ_min(U) and finding an optimal
schedule. The worst case problem consists in evaluating ρ_max(U) and finding
a worst schedule. The average case problem consists in evaluating ρ_E(U).
When we consider a heap automaton H, the limits ρ_min(H), ρ_max(H) and
ρ_E(H) correspond respectively to the minimal, maximal and average asymptotic
growth rate of a heap.
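As a quick sanity check of these definitions, one can approximate ρ_min and ρ_max over a finite horizon by brute force. The sketch below is ours; it reuses height_of_word from the previous example and is only practical for small n.

from itertools import product

def finite_horizon_rates(ground, pieces, n):
    # crude approximations of rho_min and rho_max: min and max of y_H(w)/n
    # over all words of length n (exponential in n)
    best, worst = float("inf"), float("-inf")
    for w in product(sorted(pieces), repeat=n):
        _, y = height_of_word(ground, pieces, w)
        best, worst = min(best, y / n), max(worst, y / n)
    return best, worst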
3.1 Preliminary results
We consider the optimal problem first. It follows from (2) that
|μ(uv)|_⊕ ≤ |μ(u)|_⊕ ⊗ |μ(v)|_⊕. As a consequence
of the subadditive theorem, we have
lim_n (1/n) min_{|w|=n} |μ(w)|_⊕ = inf_n (1/n) min_{|w|=n} |μ(w)|_⊕ .   (8)
We also have, for all w ∈ A*,
(min_i α_i) ⊗ |μ(w)|_⊕ ⊗ (min_i β_i) ≤ α μ(w) β ≤ |α|_⊕ ⊗ |μ(w)|_⊕ ⊗ |β|_⊕ .   (9)
When all the entries of α and β are different from 0, we deduce that
ρ_min(U) = lim_n (1/n) min_{|w|=n} |μ(w)|_⊕ and that the lim inf is a
limit in (6).
Proposition 4 Let U = (α, μ, β) be a (max,+) automaton such that ∀i, α_i ≠ 0
and β_i ≠ 0, and such that ρ_min(U) ≠ 0. Then there exists an optimal schedule.
PROOF. It follows from (9) that the automata (ff; -; fi) and (1; -; 1) have
the same optimal schedules (if any). Since ae min 6= 0, we deduce from (8) that
for all k 2 N , there exists w(k) 2 A n feg such that
jw(k)j \Theta ae min - j-(w(k))j \Phi - jw(k)j \Theta (ae min
By the subadditive inequality (2), we then have, for all l 2 N ,
Now define ~
and consider the infinite word ~
~
obtained by concatenation of the words ~
w(k). We consider
the prefix of length n of ~
w for an arbitrary n 2 N . There exists k n 2 N
such that
~
where u is a prefix of w(k n 1). Using
(2) and (10), we get
ae min - j-( ~
kn
w(i))j \Phi
kn
w(i)j
ni
Obviously, k n is an increasing function of n and lim n!+1 k
we obtain that:
kn
w(i)j
ni
Let us take care of the last term on the right-hand side of (11). Note that
It implies that
a2A j-(a)j \Phi
a2A j-(a)j \Phi
Starting from (11) and using (12) and (13), we obtain that
It completes the proof.
We now consider the worst case problem. As above, if ∀i, α_i ≠ 0 and β_i ≠ 0,
the lim sup is a limit in the definition of ρ_max. As opposed to the optimal case,
the worst case problem is completely solved. We recall the main result; it is
taken from [17] and it follows from the (max,+) spectral theorem (the most
famous and often rediscovered result in the (max,+) semiring, see [15,4,28]
and the references therein).
Proposition 5 Let U = (α, μ, β) be a trim (see §3.2) (max,+) automaton of
dimension k. Then ρ_max(U) is equal to ρ_max(M), the maximal eigenvalue of
the matrix M = ⊕_{a∈A} μ(a). That is,
ρ_max(M) = max_{l ≤ k}  max_{i_1,…,i_l}  ( M_{i_1 i_2} + ⋯ + M_{i_l i_1} ) / l .
Let a_{ij} be such that μ(a_{ij})_{ij} = M_{ij}, and let (i_1, …, i_l, i_1) be a critical circuit of M (a maximal mean
weight circuit of M). Then (a_{i_1 i_2} ⋯ a_{i_l i_1})^ω is a
worst schedule.
In the case of a heap automaton, there exists a worst schedule of the form u^ω
where the period u is such that ∀a ∈ A, |u|_a ≤ 1. For a heap automaton with
two pieces (a and b), a worst schedule can always be found among a^ω, b^ω and
(ab)^ω. An example where the worst schedule is indeed (ab)^ω appears in Figure
18.
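Proposition 5 reduces the worst case problem to a maximal circuit mean. The short Python sketch below (ours, reusing mp_mul from the earlier sketch) exploits the fact that the maximal mean weight of a circuit is attained on an elementary circuit, hence on one of length at most the dimension.

def max_cycle_mean(M):
    # maximal (max,+) eigenvalue of M = maximal mean weight of a circuit;
    # (M^{⊗k})_ii is the best weight of a closed walk of length k at node i
    n = len(M)
    P = [row[:] for row in M]
    best = max(M[i][i] for i in range(n))          # circuits of length 1
    for k in range(2, n + 1):
        P = mp_mul(P, M)                           # P = M^{⊗k}
        best = max(best, max(P[i][i] / k for i in range(n)))
    return best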
3.2 Deterministic automaton
A (max,+) automaton (α, μ, β) is trim if for each state i there exist words u
and v such that (α μ(u))_i ≠ 0 and (μ(v) β)_i ≠ 0. It is deterministic if there exists
exactly one i such that α_i ≠ 0, and if for every letter a and for all i, there exists
at most one j such that μ(a)_{ij} ≠ 0. It is complete if for every letter a and for all
i, there exists at least one j such that μ(a)_{ij} ≠ 0.
A heap automaton is deterministic if and only if there is a single slot. On the
other hand, a heap automaton is obviously always trim and complete. In the
course of the paper, we consider other types of (max,+) automata: Cayley and
contour-completed automata. These automata will be deterministic, trim and
complete.
Let U = (α, μ, β) be a deterministic and trim (max,+) automaton over the
alphabet A. Let U′ be the (min,+) automaton defined by the same triple, and
let y_U and y_{U′} be the maps recognized by U and U′ respectively.
Since U is deterministic, the sum α μ(w) β contains at most one term different
from −∞; it follows that y_U(w) = y_{U′}(w) whenever y_U(w) ≠ −∞. Defining the (min,+) matrix
N = the entrywise minimum of the μ(a), a ∈ A,
and applying the (min,+) version of Proposition 5 (replace max by min everywhere
in the statement of the Proposition), we get that
ρ_min(U) = ρ_min(N) ,   (14)
the minimal eigenvalue of N. Also if (i_1, …, i_l, i_1) is a critical
circuit, then (a_{i_1 i_2} ⋯ a_{i_l i_1})^ω
is an optimal schedule.
Proposition 6 Let U = (α, μ, β) be a deterministic, complete and trim
(max,+) automaton over the alphabet A. Assume that M = ⊕_{a∈A} μ(a) is
an irreducible matrix (i.e. ∀i, j, ∃n, (M^n)_{ij} ≠ 0).
We define the (ℝ₊, +, ×) matrix P by
P_{ij} = Σ_{a∈A} p(a) × 1{μ(a)_{ij} ≠ 0}. Let π be the unique vector
satisfying π × P = π, Σ_i π_i = 1. The expected growth rate is
ρ_E(U) = Σ_i π_i Σ_{a∈A} p(a) × m(i, a), where m(i, a) is the weight of the
unique a-transition leaving state i (the products are the usual ones).
Proposition 6 is proved in [17]. It follows from standard results in Markov
chain theory (P is the transition matrix and π is the stationary distribution).
A consequence of Proposition 6 is that ρ_E(U) can be written formally as a
rational fraction of the probabilities of the letters. That is, ρ_E(U) = R/S, where
R and S are real polynomials over the commuting indeterminates p(a), a ∈ A.
More generally, it is possible, under the assumptions of Prop. 6, to obtain the
formal power series Σ_n x^n Σ_{|w|=n} p(w) y(w) as a rational fraction
(over the indeterminates x, p(a), a ∈ A), see for instance [8].
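Under our reading of Proposition 6, ρ_E is the stationary expectation of the one-step increments of the deterministic automaton. The Python sketch below is ours (mu maps each letter to its matrix, p to its probability); it computes π by plain power iteration, assuming the chain is aperiodic.

NEG = float("-inf")

def average_growth_rate(mu, p, iters=10000):
    # P_ij = sum_a p(a) 1{mu(a)_ij finite} is the transition matrix of the
    # underlying Markov chain; pi its stationary distribution; rho_E is the
    # stationary expectation of the weight of the transition taken at each step.
    k = len(next(iter(mu.values())))
    P = [[sum(p[a] for a in mu if mu[a][i][j] != NEG) for j in range(k)]
         for i in range(k)]
    pi = [1.0 / k] * k
    for _ in range(iters):                 # power iteration (assumes aperiodicity)
        pi = [sum(pi[i] * P[i][j] for i in range(k)) for j in range(k)]
    def weight(i, a):                      # weight of the unique a-arc leaving i
        return max(x for x in mu[a][i] if x != NEG)
    return sum(pi[i] * sum(p[a] * weight(i, a) for a in mu) for i in range(k))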
Finitely distant automata. Two (max,+) automata U and V
defined over the same alphabet A are said to be finitely distant if
sup_{w∈A*} | y_U(w) − y_V(w) | < +∞ .   (15)
Two heap automata (I, M, 1) and (J, M, 1), differing only in the ground shape, are finitely distant. Indeed, according
to (3), we have | I M(w) 1 − J M(w) 1 | ≤ max_r | I_r − J_r |.
The asymptotic problems are equivalent for two finitely distant automata U
and V. That is, ρ_min(U) = ρ_min(V), ρ_max(U) = ρ_max(V), ρ_E(U) = ρ_E(V), and the optimal (resp. worst) schedules
coincide.
Since most heap automata are not deterministic, we cannot apply the results
in (14) and Proposition 6 directly to them. We often use the following pro-
cedure: given a (max,+) automaton, find a deterministic, trim, and finitely
distant automaton, then apply the above results to the new automaton.
4 Balanced Words
Balanced and Sturmian words appear under various names and in various areas
like number theory and continued fractions [29], physics and quasi-crystals [24]
or discrete event systems [25,22]. For reference papers on the subject, see [7,9].
A finite word u is a factor of a (finite or infinite) word w if u
is a finite subsequence of consecutive letters in w, i.e. u = w_i w_{i+1} ⋯ w_{i+n}
for some i and n. A (finite or infinite) word w is balanced if | |u|_a − |v|_a | ≤ 1
for all letters a and for all factors u, v of w such that |u| = |v|. The balanced
words are the ones in which the letters are the most regularly distributed. The
shortest non-balanced word is aabb.
An infinite word u is ultimately periodic if there exist n ∈ ℕ and l ∈ ℕ* such
that u_{m+l} = u_m for all m ≥ n. A Sturmian word is an infinite word over a two-letter
alphabet which is balanced and not ultimately periodic.
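The balance condition is easy to test by brute force on a finite word; the following Python sketch (ours) checks it directly from the definition.

def is_balanced(w):
    # for every letter a and all factors u, v of equal length, ||u|_a - |v|_a| <= 1
    letters, n = set(w), len(w)
    for length in range(1, n + 1):
        for a in letters:
            counts = [w[i:i + length].count(a) for i in range(n - length + 1)]
            if max(counts) - min(counts) > 1:
                return False
    return True

assert not is_balanced("aabb") and is_balanced("abab")   # aabb is the shortest non-balanced word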
We now define jump words. Let us consider α_1, α_2 > 0.
We label the points {n α_1, n ∈ ℕ*} by a, and the points {n α_2, n ∈ ℕ*} by b.
Let us consider the set {n α_1, n ∈ ℕ*} ∪ {n α_2, n ∈ ℕ*} in its natural order
and the corresponding sequence of labels. Each time there is a double point,
we choose to read a before b. We obtain the jump word with characteristics
(α_1, α_2, 0). Jump words are balanced. If α_1/α_2 is rational then w is periodic;
if α_1/α_2 is irrational then w is Sturmian.
Fig. 3. Representation of the jump word with characteristics (α_1, α_2, 0).
It is also possible to define words as above except that we read b before a
whenever there is a double point. These words are still balanced and we still
call them jump words (below, when necessary, we will make precise the
convention used for double points).
A more common but similar description of jump words uses cutting sequences.
There exists an explicit arithmetic formula to compute the n-th letter in a
given jump word (using the so-called mechanical characterization, see [9]).
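The construction of jump words can be implemented directly from this description; the sketch below (ours) merges the two point sets and reads the labels in order, breaking ties according to the chosen convention.

def jump_word(alpha_a, alpha_b, n, a_first=True):
    # first n letters of the jump word with characteristics (alpha_a, alpha_b, 0);
    # at a double point, read 'a' before 'b' (or the opposite if a_first is False)
    points = [(k * alpha_a, 0 if a_first else 1, "a") for k in range(1, n + 1)]
    points += [(k * alpha_b, 1 if a_first else 0, "b") for k in range(1, n + 1)]
    points.sort()
    return "".join(label for _, _, label in points[:n])

For example, jump_word(1, 2, 8) returns "aabaabaa", the periodic balanced word in which a occurs twice as often as b.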
Optimal schedules and balanced words. We prove in Theorem 14 that in
a heap model with two pieces, there always exists an optimal schedule which
is balanced. If we still consider a two-letter alphabet but a general (max,+)
automaton, then this is not true anymore. The counter-example below was
suggested to us by Thierry Bousch [10]. Consider the deterministic (max,+)
automaton (δ, μ, 1) represented in Figure 4. It is easy to check that an optimal
schedule is the non-balanced word (aabb)^ω, and that no balanced
word is optimal in this example.
Fig. 4. (Max,+) automaton with no balanced optimal schedules.
5 Completion of Profiles and Pieces
5.1 Cayley automaton
Given A in ℝmax^{k×l}, A ≠ 0, we define ν(A) in ℝmax^{k×l} by ν(A) = (|A|_⊕)^{-1} ⊗ A, so
that ν(A) is the normalized matrix associated with A.
Let us consider a (max,+) automaton U = (α, μ, β) over the alphabet A. We
define
ν(U) = { ν(α μ(w)), w ∈ A* } .
In the case of a heap automaton H, ν(H) is the set of normalized upper
contours.
Assume that ν(U) is finite. Then we define the Cayley automaton of U as
follows. It is the deterministic (max,+) automaton (δ, μ′, γ) of dimension |ν(U)|
over the alphabet A: its states are the vectors x ∈ ν(U); reading the letter a in
state x leads to the state ν(x μ(a)) with multiplicity μ′(a)_{x, ν(xμ(a))} = |x μ(a)|_⊕;
the initial state is ν(α) with multiplicity |α|_⊕, and γ_x = x β. It follows from this
definition that for w ∈ A*, the state reached after reading w is ν(α μ(w)). Hence we have
y_U(w) = δ μ′(w) γ .   (17)
We just proved that the automaton U and its Cayley automaton recognize the
same map (see also [17]).
The dimension of the Cayley automaton is in general much larger than the
one of U. However, it is deterministic, complete, and, assuming for instance
that ∀i, β_i ≠ 0, it is also trim. In particular, when H is a heap automaton and
ν(H) is finite, then the Cayley automaton is deterministic, complete and trim.
The Cayley automaton is used in §7.2.
The procedure described above is similar to the classical determinization algorithm
for Boolean automata. The difference is of course that ν(U) is always
finite in the Boolean case.
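The construction of the Cayley automaton amounts to exploring the normalized row vectors reachable from ν(α). The Python sketch below is ours; it terminates only when ν(U) is indeed finite, and the normalization simply subtracts the maximal entry.

NEG = float("-inf")

def cayley_states(matrices, alpha):
    # explore {nu(alpha mu(w)) : w in A*}; matrices[a] is the matrix of letter a
    def nu(x):
        m = max(x)
        return tuple(v - m if v != NEG else NEG for v in x)
    def vec_mat(x, M):
        return [max(x[i] + M[i][j] for i in range(len(x))) for j in range(len(M[0]))]
    start = nu(alpha)
    states, stack, trans = {start}, [start], {}
    while stack:
        x = stack.pop()
        for a, M in matrices.items():
            y = nu(vec_mat(list(x), M))
            trans[(x, a)] = y              # transition x --a--> y
            if y not in states:
                states.add(y)
                stack.append(y)
    return states, trans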
5.2 Contour-completed automaton
Given a heap model H, it is easy to see that ν(H) is infinite as soon as there
exist two pieces a and b whose sets of slots are not the same. This motivated the
introduction in [21] of the refined notion of normalized completed contours. In
some cases, the set of such contours will be finite whereas ν(H) is infinite. Here,
we recall only the results that will be needed. For details, and in particular
for an algebraic definition of completion in terms of residuation, see [21].
Let us consider a heap model H = (A, R, R(·), l, u, I). We associate with the piece a ∈ A the upper
contour piece ā and the lower contour piece a̲, defined by R(ā) = R(a̲) = R(a),
l(ā) = u(ā) = u(a) and l(a̲) = u(a̲) = l(a).
We still denote by M(ā), M(a̲) the matrices defined as in (4) and associated
with the new pieces ā, a̲.
An example of upper and lower contour pieces is provided in Figure 5. For
clarity, pieces of height 0 are represented by a thick line.
Fig. 5. A piece and the associated upper and lower contour pieces.
Given a vector x ∈ ℝ^R, interpreted as the upper contour of a heap, we define
the completed contour φ(x) ∈ ℝ^R as follows:
φ(x)_i = min_{a : i ∈ R(a)} [ l(a)_i + max_{j ∈ R(a)} ( x_j − l(a)_j ) ] .   (18)
The vector φ(x) can be loosely described as the maximal upper contour such
that the height of a heap piled up on x is the same as the height of a heap
piled up on φ(x). More precisely, we have
∀w ∈ A*,  x M(w) 1 = φ(x) M(w) 1 .   (19)
For the sake of completeness, let us prove (19). Given a word
we define
We are going to prove the following results which put together imply (19)
It follows from the definition that (21) and (22) hold for the empty word e
(setting Assume now that (21) and (22) hold for all words of length
less or equal than n. We consider the word wa where w is of length n and a
is a letter.
If i 62 R(a) and i 62 R(w), then
Since obviously OE(x)M(w)M(a) i - xM(w)M(a) i , we get that
This concludes the proof of (21)
and (22), hence of (19).
Given a contour x ∈ ℝ^R, we define the normalized completed contour
ν(φ(x)). Let us define
φ(H) = { ν(φ(I M(w))), w ∈ A* } .
Let us assume that φ(H) is finite. Then we define the contour-completed automaton
of H. It is a deterministic, complete and trim (max,+) automaton
over the alphabet A, of dimension |φ(H)|. It is defined by (δ, μ′, 1), where reading
the letter a in state x leads to the state ν(φ(x M(a))) with multiplicity |x M(a)|_⊕.
The automaton H and its contour-completed automaton recognize the same
map, i.e. y_H(w) = δ μ′(w) 1.
The proof is analogous to the one of (17). The contour-completed automaton
is used several times in §7, see for instance Example 16.
5.3 Piece-completed heap automaton
After having defined the completion of contours, we introduce in this section
the completion of pieces.
We define the upper-completed pieces a^∘, a ∈ A, and the lower-completed pieces
a_∘, a ∈ A. We check easily that u(a^∘) ≥ l(a^∘) and u(a_∘) ≥ l(a_∘), hence we have indeed
defined pieces. Let us comment on this definition. Let x be a piece such that
R(x) ∩ R(a) ≠ ∅, and let a′ be the piece obtained by piling up a and the part
of the lower contour piece x̲ corresponding to the slots R(x) ∩ R(a). The
piece a′ is such that the heaps a′x and ax are identical. Hence, the piece a^∘
can be interpreted as the piece with lower contour l(a) and with the largest
possible upper contour such that the asymptotic behavior of a heap is not
modified when replacing the occurrences of a by a^∘. There is an analogous
interpretation for the pieces a_∘. An illustration of upper and lower completion
is given in Example 8 and Figure 6.
With the heap automaton H, we associate the heap automaton H^∘ built on the
upper-completed pieces and the heap automaton H_∘ built on the lower-completed pieces.
Lemma 7 A heap automaton H is finitely distant from both the heap automaton
H^∘ and the heap automaton H_∘.
PROOF. Let us set
a2A
a2A
We want to prove the following inequalities, for all w 2 A ,
Since we have the left-hand side
inequalities in (26) and (27) follow immediately. Let us prove the right-hand
side inequality in (26), the proof of the one in (27) being similar.
First of all, for two words x and y over the alphabet A, we have (where R(x)
and R(y) are defined as in (20))
To prove (28), it is enough to remark that it follows from the definition in (4)
We need another intermediary result: for any two pieces a; b 2 A, we have
l
Furthermore, it is immediate that M(a This concludes the
proof of (29).
Obviously, the right inequality in (26) holds for words of length 1. Let us
assume that it holds for all words of length n. Let be a word of
length n+ 1. Assume there exists ng such that R(w i
then using (28), we get
with an analogous equality for M ffi . Setting
we deduce that we have
where the last inequality is obtained by applying the recurrence assumption to
the words u and v which are of length n. Assume now that R(w i )"R(w
for all be such that IM ffi . Assume that
The case remains to be treated. We obtain, using recursively
(29), that
We conclude that
by definition of K ffi . This completes the proof.
We define the bi-completed pieces a_∘^∘, a ∈ A. Here the pieces a_∘^∘
are obtained by lower-completion first and then upper-completion. We can also
define pieces, say ~a_∘^∘, by performing upper-completion first and then
lower-completion. In general, the pieces a_∘^∘ and ~a_∘^∘ are different; in other
words, the operations of upper and lower-completion do not commute. An example
of bi-completion is provided in Figure 6. On this example, the pieces a_∘^∘ and
~a_∘^∘ are different.
Example 8 Consider a heap automaton with two pieces a and b.
It is simpler to obtain the completed pieces graphically, using the intuition
described above. We have represented in Figure 6 the upper-, lower- and bi-
completed pieces.
The heap automaton H_∘^∘ over the alphabet A, defined by the bi-completed pieces,
is called the piece-completed heap automaton associated with H.
Lemma 9 A heap automaton H and the associated piece-completed automaton
are finitely distant.
PROOF. By definition, we have H_∘^∘ = (H_∘)^∘. Applying Lemma 7 twice,
we get the result.
Fig. 6. Two pieces and the associated upper-completed, lower-completed and
bi-completed pieces.
Given a set of pieces A, let us denote by A^∘, A_∘ and A_∘^∘ the upper-completed,
lower-completed and bi-completed sets of pieces. Given two pieces a and b, we
say that r is a contact slot for ab if r ∈ R(a) ∩ R(b) and, in the heap ab, the
piece b touches the piece a at slot r (visually, a is in contact with b at slot r in the heap ab).
Lemma 10 We have (A^∘)^∘ = A^∘, (A_∘)_∘ = A_∘ and (A_∘^∘)_∘^∘ = A_∘^∘. In words,
a set of lower-completed (resp. upper-completed or bi-completed) pieces is left
unchanged by performing another lower (resp. upper or bi) completion.
PROOF. The arguments below are based on the following immediate remark:
Given a and b in the same set of pieces, if i is a contact slot of ab then
By definition we have, 8a
It implies that j(i) is a contact slot for ba and that both i and j(i) are contact
slots for ba ffi . Obviously, it implies that i is a contact slot for b ffi a ffi and we
conclude that l((a This completes the proof of . The
proof of
Since i is a contact slot for b ffi a ffi , we also obtain that u(b ffi
for all k, we have M(b ffi
We also have that i is a contact slot for
. Using this together with (29), we get that 8k 2 R(b); 8l 2 R(a),
It implies that l((a ffi
We deduce that we have (a ffi
and we
can prove in a similar way that (a ffi
. We conclude that
Both the contour completion of §5.2 and the above piece completion are based
on the idea of local transformations which do not modify the asymptotic behavior
of heaps. However, they are different: the completed contours are not
the upper contours of the heaps of completed pieces.
6 Minimal Realization
The goal of this section is to prove that, given a heap automaton with two
pieces, there exists a finitely distant one of dimension at most 3 (Theorem 12).
A set of bi-complete pieces is a set A such that A_∘^∘ = A. From now on, we
always implicitly consider bi-complete pieces. Due to Lemmas 9 and 10, we can
make this assumption without loss of generality.
Let H be a heap automaton with set of slots R and let ~R be a
subset of R. The heap model obtained by restriction of H to ~R is denoted by
H|~R and defined by H|~R = (A, ~R, R(·) ∩ ~R, l|~R, u|~R, I|~R) (visually, the new pieces are the
old ones restricted to ~R).
Lemma 11 Let H be a heap automaton on the alphabet A and with set of
slots R. Let ~R be a subset of R. The automaton H|~R is finitely distant from
H if and only if ~R contains a contact slot for each word ab, a, b ∈ A, such that
R(a) ∩ R(b) ≠ ∅.
PROOF. Let Assume that ~
R contains at least one contact
slot for each ab such that R(a) " R(b) 6= ;. Let (a; b) be such a couple. We
have, by definition of a contact slot,
R\Theta ~
R\Theta ~
R\Theta ~
R\Theta ~
Let us consider a word w 2 A . Using repeatedly the equality in (28), we
obtain that
belongs to I(w) if v is a subword
of w and if two consecutive letters of v, say v are such that R(v i
For each word v 2 I(w), we obtain by using repeatedly (30)
that M(v) j ~
R\Theta ~
R\Theta ~
R (v). We deduce that M(w) j ~
R\Theta ~
R\Theta ~
R (w). We
conclude easily that
1- sup
w2A
R\Theta ~
R (w)1
w2A
a2A
Hence, H j ~
R is finitely distant from H. We have shown that the condition is
sufficient. Let us prove that it is necessary. Assume that ab; R(a) " R(b) 6= ;,
has no contact slot in ~
R. Let ffi be the minimal gap between a and b in the
heap ab over the slots ~
R. Then we have jM(ab)j
R\Theta ~
R (ab)j
It implies that jM((ab) n )j
R\Theta ~
R ((ab) n )j \Phi - n \Theta ffi, showing that H and
R are not finitely distant.
Theorem 12 Let H = (A, R, R(·), l, u, I) be a heap automaton with two pieces. Over
the same alphabet, there exists a heap automaton ~H = (~I, ~M, 1) of dimension
at most 3 which is finitely distant from H.
PROOF. By choosing one contact slot for each one of the words aa; ab; ba and
bb, we obtain a set ~
R of cardinality at most 4 and such that the automaton
R is finitely distant from H, see Lemma 11. We now prove that 3 slots
are always enough. We define the application c
denotes the set of subsets of A 2 . The set c(r) contains xy if r is a contact
slot of xy. Assume that R(a) " R(b) 6= ; and consider a slot r 2 R(a) " R(b).
Let us prove that c(r) must contain words starting with a and b and words
finishing with a and b. Assume for instance that c(r) does not contain any
word starting with a. Then, according to (24), there exists x 2 A such that
Since ax does not belong to c(r), the maximum above is attained for j 6= r and
we have u(a This contradicts the fact that A is a set of bi-complete
pieces.
To summarize, we must have
If we have faa; bbg ae c(r) (resp. fab; bag ae c(r)), we complete the slot r with
a contact slot for the heap ab and one for the heap ba (resp. for aa and bb).
We have a set of at most 3 slots which satisfies the required properties.
Now assume that R(a) " It is enough for ~
R to contain a contact slot
of aa and one of bb, hence to be of cardinality 2, for H j ~
R to be finitely distant
from H. This completes the proof.
Performed on the original heap automaton, instead of the piece-completed
one, the above argument would not work: one can exhibit a heap model H of
dimension 4, defined by two (non-completed) pieces, such that there exists no
proper subset ~R of R for which H|~R is finitely distant from H.
Example 13 Let us illustrate Theorem 12. We consider a heap automaton
of dimension 4 consisting of two bi-complete pieces a and b. Computing the
sets c(r), we can choose ~R = {1, 3, 4} (another choice is also possible), and the
heap automaton H|~R will be finitely distant from H. This can be 'checked' on Figure 7.
Fig. 7. A heap automaton of dimension 4 and a finitely distant one of dimension 3.
In this example, we do not always have I M(w) 1 = I|~R M|~R(w) 1. However, we can
check that | I M(w) 1 − I|~R M|~R(w) 1 | ≤ 1 for all w ∈ A*.
Lemma 11 and Theorem 12 are minimal realization type of results. Here is
the generic problem of this kind: Given an automaton with multiplicities in a
semiring, find another automaton recognizing the same map and of minimal
dimension.
In a commutative field, the minimal realization problem is solved, see [8] for a
proof and references. In Rmax , it is a well-known difficult and unsolved problem,
see [18] for partial results and references. Here, our result is specific in several
ways. First, we look at a particular type of (max,+) automata, heap automata
with two pieces. Second, we look for a realization by a heap automaton and not
by an arbitrary (max,+) automaton. Third, we only require an approximate
type of realization, see (15).
6.1 Classification of heap models with two pieces
As a by-product of Theorem 12, to study heap automata with two pieces, it
is enough to consider automata with bi-complete pieces and of dimension at
most 3. We are going to show that there are only four cases which need to be
treated (up to a renaming of pieces and slots), which are:
R(a) = {1}, R(b) = {2};  R(a) = R(b) = {1, 2};  R(a) = {1, 2}, R(b) = {2};  R(a) = {1, 2}, R(b) = {2, 3}.
We recall that the function c(·) was defined in the proof of Theorem 12.
(i) If R(a) ∩ R(b) = ∅, we have seen in the proof of Theorem 12 that the heap
model can be represented with two slots only, one for each piece.
(ii) Let us assume that R(a) = R(b). Let r be such that aa ∈ c(r). Using
(31), we have either {aa, bb} ⊆ c(r) or {aa, ab, ba} ⊆ c(r). If we are in the
second case, we complete r with a contact slot for bb. If we are in the first
case, let us consider a slot r′ such that ab ∈ c(r′). We have, as before, either
{ab, ba} ⊆ c(r′), in which case we select the slots {r, r′}, or {ab, aa, bb} ⊆ c(r′), in which case we complete r′ with a contact slot for
ba. In all cases, we obtain a finitely distant heap model with at most two slots.
(iii) Let us assume that R(b) ⊂ R(a), R(b) ≠ R(a). Let r be a slot such that
bb ∈ c(r); we must have either {bb, ab, ba} ⊆ c(r) or {aa, bb} ⊆ c(r). In the second case, we conclude as in (ii). In the first case, we
complete r with a slot r′ such that aa ∈ c(r′). Compared with (ii), there is a
new possible situation: two slots {r, r′} with R(a) = {r, r′} and R(b) = {r}.
(iv) Let us assume that R(a) ∩ R(b) ≠ ∅, R(a)\R(b) ≠ ∅, R(b)\R(a) ≠ ∅.
We consider a slot r ∈ R(a) ∩ R(b) such that ab ∈ c(r). We have either
{ab, aa, bb} ⊆ c(r) or {ab, ba} ⊆ c(r). In the first case, we complete r with a
slot r′ such that ba ∈ c(r′). In the second case, we complete r with a contact
slot r_a for aa and a contact slot r_b for bb. Compared with the cases (ii) and
(iii), there is a new possible situation: three slots {r, r_a, r_b} with R(a) = {r_a, r}
and R(b) = {r, r_b}.
7 Heap Models with Two Pieces: Optimal Case
Let H be a heap model with two pieces. To solve the optimal problem, it is
sufficient to consider the typical cases described in §6.1. Two situations need
to be distinguished:
• H is 'determinizable', i.e. there exists a finitely distant, trim, and deterministic (max,+) automaton;
• H is 'not-determinizable'.
For 'determinizable' automata, there exists a periodic optimal schedule. We
will see below that there are two cases where H is 'not-determinizable'. In both
cases, we are able to identify 'visually' the optimal schedules. The resulting
theorem can be stated as follows.
Theorem 14 Let us consider a heap model with two pieces. There exists an
optimal schedule which is balanced, either periodic or Sturmian.
PROOF. We consider in §7.1-7.4 the four different cases described in §6.1.
For each case, we prove that the results of Theorem 14 hold. Furthermore we
provide an explicit way to compute ρ_min(H) and an optimal schedule in each
case.
In the sections below, we always denote the heap model considered by H, with
set of slots R ⊆ {1, 2, 3}. Viewed as a heap automaton, it is denoted by
H = (I, M, 1). We always implicitly assume that we are working with
bi-complete pieces. We recall that by modifying the ground shape in a heap
automaton, we obtain a finitely distant automaton. Below we choose the ground
shape which is the most adapted to each case.
If one of the two pieces, say a, is such that l(a) = u(a), then the optimal problem
becomes trivial. We have ρ_min(H) = 0 and an optimal schedule is
provided by a^ω. From now on, we assume that l(a) ≠ u(a) and l(b) ≠ u(b).
We set h_a = max_{i∈R(a)} ( u(a)_i − l(a)_i ) and h_b = max_{i∈R(b)} ( u(b)_i − l(b)_i ).
7.1 The case R(a) = {1}, R(b) = {2}
We assume that the ground shape is 1. We claim that the jump word u with
characteristics (h_a, h_b, 0) (see §4) is optimal. Furthermore, we have ρ_min(H) =
h_a h_b/(h_a + h_b). An example is provided in Figure 8.
We now prove these assertions. Let us pile up the pieces according
to the jump word u defined by (h_a, h_b, 0). We have, by construction,
| h_a |u[n]|_a − h_b |u[n]|_b | ≤ max(h_a, h_b). Hence we have
lim_n x_H(u[n])_1/n = lim_n x_H(u[n])_2/n and,
as the heap is without any gap, it implies immediately
that u is optimal. The optimal schedule u is balanced, periodic when h_a/h_b is
rational and Sturmian when h_a/h_b is irrational, see §4. We have
Fig. 8. The jump word (h_a, h_b, 0).
ρ_min(H) = lim_n h_a |u[n]|_a / n = h_a h_b / (h_a + h_b) .
To be complete, let us prove that it is not possible to find a periodic optimal
schedule in the case h_a/h_b irrational. Let v be a finite word and let us consider
the schedule v^ω. Since h_a/h_b is irrational, we have h_a |v|_a ≠ h_b |v|_b. Let us
assume that h_a |v|_a > h_b |v|_b. It implies that |v|_a > |v| h_b/(h_a + h_b), and hence
lim_n h_a |v^n|_a / |v^n| > h_a h_b / (h_a + h_b).
7.2 The case R(a) = R(b) = {1, 2}
As R(a) = R(b) = {1, 2}, we have ν(xM(a)) = ν(yM(a)) and ν(xM(b)) =
ν(yM(b)) for all x, y ∈ ℝ². Let us choose the ground shape to be ν(1M(a)).
Then ν(H) is finite. Hence we can solve the optimal
problem using the Cayley automaton, see §5.1. Applying the results of §3.2, it
is always the case that one of the schedules a^ω, b^ω or (ab)^ω is optimal. These
schedules are obviously balanced.
Example 15 Consider the heap automaton H with the two pieces
represented in Fig. 9. We check easily that ν(H) is finite (the ground shape
being chosen as above). Let (α, μ, 1) be the Cayley
automaton and let M be the entrywise min of μ(a) and μ(b).
Fig. 9. Heap model with two pieces and its Cayley automaton.
The minimal eigenvalue of the ℝmin matrix M is 2, the circuits of minimal
mean weight are {2} and {3}, and M_22 = μ(a)_22, M_33 = μ(b)_33. We have
ρ_min(H) = 2, and a^ω and b^ω are optimal schedules.
7.3 The case R(a) = {1, 2}, R(b) = {2}
This case could be reduced to the case R(a) = {1, 2}, R(b) = {2, 3} (the
one in §7.4) by adding a third slot. We treat the case separately
in order to get more precise results. Let us set . For
Hence, we have
Assume that ffi - 0 and let the ground shape be equal to 1M(a). We have,
We deduce that
We also have 1M(ab n+1 By assumption, we have h b ? 0.
Hence there exists a smallest integer m such that
It implies, using (32), that 8n - m;'(1M(ab n We conclude that
We have proved that '(H) is finite. In the case ffi - 0, a similar analysis holds.
In all cases we can solve the optimal problem using the contour-completed
automaton and the results of §3.2. We have represented in Figure 10 the
contour-completed automaton in the case m ≥ 3 (without the multiplicities).
Fig. 10. Contour-completed automaton.
There are exactly m simple circuits in this automaton with
respective labels b and ab 2, the multiplicity
to go from '(1M(ab n )) to '(1M(ab n+1 )) is 1 while the one to go from
'(1M(ab n+1 )) to '(1M(a)) is always equal to h a . Hence the circuits of label
ab are not of minimal mean weight. We conclude that an
optimal schedule can be found among the schedules (ab m
is optimal). These schedules are balanced.
Example 16 We consider the heap automaton with the pieces a and b
represented in Figure 11. The completion operation has the
following effect: after finitely many steps the completed contours repeat, so that
φ(H) is finite. Let (α, μ, 1) be the contour-completed automaton and let M be
the entrywise min of μ(a) and μ(b). The minimal eigenvalue of M is ρ_min(H),
the circuit of minimal mean weight is labelled by abbb, and we conclude that
an optimal schedule is (abbb)^ω.
Fig. 11. Heap model and its contour-completed automaton.
7.4 The case R(a) = {1, 2}, R(b) = {2, 3}
Two situations need to be considered: (i) the case u(a)
the case u(a) 2 ? l(a) 2 (with the case being
treated similarly).
Case (i).
Assume that there exists an infinite heap w with an infinite number of each
piece and without any 'gap' at slots 1 and 3. Now, we focus on the second
slot of the heap w. The heights of the pieces a and b at slot 2 are given by
We set the ground shape to be
The heights of the pieces at slot 2 are now given by fnh a ; n 2 N g and
g. Hence, the sequence of labels (read from bottom to top)
at slot 2 is the jump word w defined by (h a ; h b ; 0). Now, if we pile up
the pieces according to w, we indeed obtain a heap without any gap on
slots 1 and 3. An illustration is given in Figure 12. On slot 2, the pieces
have been shortened to facilitate their identification. If h a =h b is rational
Fig. 12. The optimal heap is the jump word babbaba⋯
then w is balanced and periodic and otherwise it is Sturmian. If h a =h b is
irrational, there does not exist any periodic optimal schedule. At last, we
have ae min The proof is exactly the same as in x7.1.
Case (ii).
Assume that u(a)
This contradicts the fact that a is bi-complete. We conclude that we have
and in the same way u(b)
Given there is a contact at slot 2 between the last two pieces
of the heaps xab and yab (resp. xba and yba) then
there is a contact at slot
2 between the last two pieces of the heap xab (resp. xba) then it is also the
case in the heap xaab (resp. xbba). It implies that
us set the ground shape to be I =
where the real L ? 0 is assumed to be large enough to
have a or b (see Figure 13 for an
It implies that the slot 2 is a contact slot for ab and ba; hence we
have We deduce that
We assume for the moment that h a =h b is irrational. Let x be the jump word
us assume that the infinite heap abx has no gap on slots 1 and
3. Then, the heights of the pieces on slot 2 are:
ffl lower part of piece a: fnh a \Gamma u(a)
ffl upper part of piece a: fnh a
ffl lower part of piece b: fnh b ; n 2 Ng;
ffl upper part of piece b: fnh b
Since h a =h b is irrational, by density of the points fnh b (mod h a ); n 2 Ng in
the interval [0; h a ], there exists a couple (p; q) 2 N 2 such that
ph a \Gamma u(a)
This is a violation of the piling mechanism, see Figure 13-(i) for an illustration.
Hence we conclude that there are some gaps on slot 1 or 3 in the heap abx.
Let l 1 be such that there is no gap at slots 1 and 3 in the heap abx[l 1
and there is a gap at slot 1 or 3 in the heap abx[l 1 In Figure 13-(i), we
have l be such that there is no gap at slots
1 and 3 in the heap bax[l there is a gap at slot 1 or 3 in the heap
2]. Note that we have l 1 - \Gamma1 and l 2 - \Gamma1, and that it is possible to
have l
Let us consider a heap abu (resp. bau), u 2 A . There are three possible cases.
(1) There is no gap at slots 1 and 3 in the heap and
1). Let x n is the n-th letter of x. If
otherwise. Similarly we have '(IM(bax[l 2
and
Fig. 13. (i) Heap abx[l₁ + 1]; (ii)–(iii) illustrations of cases (3) and (2).
(2) There is no gap at slots 1 and 3 and u 6= x[juj]. In this case, we must
have
1). Assume we have , the case being treated
similarly. Since u 6= x[juj], in the heap x[n]b m a, there is a contact at slot 2
between the last two pieces. We conclude that '(IM(abx[n]b m
This case is illustrated in Figure 13-
(iii) where
(3) There is a gap somewhere in the heap at slot 1 or 3. This implies that
we have in the heap u a contact at slot 2 between a piece a and a piece b, or
between a piece b and a piece a. Considering the last couple (a; b) or (b; a) of
pieces in contact at slot 2, we obtain (for abu, the case bau is treated similarly)
where the heap abv, or bav, is such that there is no gap at slots 1 and 3.
The heap abv, or bav, is in one of the two cases (1) or (2) above. Case (3)
is illustrated in Figure 13-(ii) where
babab and
To summarize, we have proved that
The set '(H) is finite, hence we can apply the results of x3.2 to the contour-
completed automaton.
us assume that h a =h b is rational. We still consider the jump word x
with characteristics (h a ; h b ; 0), which is now periodic, see x4. If the heap abx
(or bax) has no gap on slots 1 and 3, then the schedule x is optimal (same
argument as in x7.1). If the heaps abx and bax both have a gap somewhere on
slot 1 or 3, the proof carries over exactly as in the case h a =h b 62 Q.
The structure of the countour-completed automaton can be deduced from the
above proof. For simplicity, we denote the state '(IM(w)) by w, and we
use the convention a there is a transition
abx[n] xn+1 \Gamma! abx[n + 1] and a transition abx[n] x 0
n+1 . For
there is a transition bax[n] xn+1 \Gamma! bax[n+ 1] and a transition bax[n] x 0
n+1 . In
Fig. 14. Outline of the contour-completed automaton.
Figure
14, we have represented an outline of the contour-completed automaton
in the case l 1 ? 0; l 2 ? 0 (ingoing and outgoing arrows as well as some arcs
are missing, and the multiplicities have been omitted).
Using the above analysis, we can get the value of the multiplicities in the
contour-completed automaton. Doing this, we obtain that there is a circuit
of minimal mean weight in the contour-completed automaton of label
with the conventions
Hence, one of the following schedules is optimal:
g. It remains to be proved that
are balanced.
We are going to prove that (bax[l 2 ]abx[l 1 ]) ! is balanced. We treat the case
If we have l 1 or l 2 equal to -1, the argument can be easily
adapted. Due to the definition of l 1 , the following intervals are all disjoint
(visually, they correspond to the portions of the second column occupied by
the pieces in the heap abx[l 1 ]. We consider open intervals in I a and closed
ones in I b in order to ensure that the first interval in I a and I b are indeed
I
I
In the same way, the following intervals are all disjoint (up to the minus sign,
they correspond to the portions of the second column occupied by the pieces
in the heap bax[l 2 ]):
I 0
An illustration of the intervals in I a ; I b ; I 0
a and I 0
b is provided in Figure 15-
(i)-(ii). Let us label the intervals in I a [ I 0
a by a and the ones in I b [ I 0
b by
b. If we read the sequence of labels from bottom to top, we obtain the word
~
is the mirror word of x[l 2 ] (the mirror word of the word
is the word ~
(by definition of l 1
and l 2 )
(n a h a
(\Gamman 0
a h a
a h a
Let us choose t 2 (\Gamman 0
a h a +l(a)
a h a )"[\Gamman 0
Let us consider the set
By construction, each real of S is in a different interval of I a ; I b ; I 0
a or I 0
b .
Hence, if we read the sequence of labels associated with S from bottom to
top, we obtain x[l Also by construction, we have t
a )h a 2
(n a h a
Fig. 15. Illustration of the proof (panels (i)-(iv)).
Let us set m a = n a
a and m
b . We define
Figure
15-(iii). By construction, we have either ( ) or (
for each n; n 0 such that 1 -
for each n; n 0 such that 1 -
Let us assume that (the case of Figure 15-(iii)). The other case is
treated similarly. Because of the property ( ), the sequence of a's and b's
corresponding to S is the same as the one corresponding to
Equivalently the jump words (h a
have the same prefix of length l 1 2. If we decide to
read double points as ba (see x4), then the jump word z with characteristics
is a balanced and periodic word, which is equal to
palindrome is a word equal to its mirror word. The above
construction shows that ~
is a palindrome (for instance, in Figure 15-
(iv), the sequences of a's and b's read from bottom to top, and top to bottom,
are the same). It implies that it is impossible to have l
x[l]abx[l] is
never a palindrome). By the same type of arguments, we can prove that x[l 1
and x[l 2 ] are also palindromes. Hence we have x[l 2 ]abx[l 1
and we conclude that (x[l 2 ]abx[l 1 ]ba) ! is balanced.
The fact that (abx[l 1 are balanced is proved in a similar way.
7.5 Greedy scheduling
We treat completely an instance of the jobshop described in the introduction,
see Figure 1. The durations of the activities depend on a parameter α; we
assume that 1/15 < α < 1/11. The model corresponds to case (ii) in §7.4 above.
The contour-completed automaton of H is represented in Figure 16. The labels
of the simple circuits are a, b, ba, ba² and ba³. Their respective mean weights
are 5α, 1, 1/2, 1/3 and 15α/4. Hence the label of the circuit of minimal mean
weight is ba² if 4/45 ≤ α and ba³ if α ≤ 4/45. We conclude that an optimal
schedule is (ba²)^ω in the first case and (ba³)^ω in the second.
Fig. 16. Contour-completed automaton.
Fig. 17. Model with greedy schedule and optimal schedule.
The greedy scheduling consists in always allocating the resource to the first task
which is ready to use it. Here the greedy schedule is always (ba³)^ω. We conclude
that greedy scheduling is suboptimal in the case α ∈ (4/45, 1/11), see Figure 17.
This is in sharp contrast with a result from [23], §IV. There, the optimal
problem is studied for the model of Figure 1, but the authors consider a
slightly different criterion: minimization of the idle time of the resource. They
show that greedy schedules are indeed optimal for this criterion.
7.6 Ratio constraints
In [25,22,23], the authors were primarily interested in the following constrained
optimal problem: find w ∈ A^ω minimizing lim_n y_H(w[n])/n while satisfying
the frequency constraint lim_n |w[n]|_a / n = γ for each letter a.
In a manufacturing model, the motivation is to maximize the throughput while
meeting a given production ratio. For this constrained problem, and for the
model of Figure 1, it is proved in [22,23] that the optimal schedule is always
the jump word meeting the ratio. Two points are worth being noticed. First, the
optimal schedule is balanced and, when γ ∈ ℚ, it is of the form u^ω where u is
the shortest balanced word meeting the ratio constraint. Second, the optimal
schedule does not depend on the timings of the model (the α_i in Figure 1).
These two properties depend heavily on the specific shape of the pieces in the
model of Figure 1. They are not satisfied in a general heap model with two
pieces, as shown below.
Example 17 Consider the model of Example 15. We look at the constrained
optimal problem with ratio 1/2. The optimal schedule of length 2n, n ∈ ℕ*,
is a^n b^n (or b^n a^n), as illustrated on Figure 18. A possible optimal schedule is
therefore obtained with blocks of a's and b's of increasing lengths; no
balanced word with ratio 1/2 is optimal. Here,
the schedule (ab)^ω, whose period is the shortest balanced word meeting the
constraint, is not an optimal but a worst case schedule! Examples in the same
spirit appear in [14], §VI-1 and in [20], §5.1.
Fig. 18. Optimal and worst schedule of ratio 1/2 and length 4.
8 Heap Models with Two Pieces: Average Case
In this section, products have to be interpreted in the field (ℝ, +, ×). We
still assume that l(a) ≠ u(a) and l(b) ≠ u(b), otherwise the average problem
becomes trivial.
As in §7, the distinction between 'determinizable' and 'non-determinizable'
automata is important. For the 'determinizable' case, it is easy to check that
the automata obtained in §7.1-7.4 are all irreducible. Hence we obtain ρ_E by
applying Prop. 6. Below, we illustrate this case on one example. There are two
cases where the heap automaton is 'non-determinizable', see §7. In one case,
we come up with an explicit formula for ρ_E and in the other case, we express
it as an infinite series.
Determinizable automaton. We consider the heap automaton H of §7.5.
Let {p(a), p(b)} be the probability distribution of the pieces. The contour-
completed automaton is represented in Figure 16. The corresponding transition
matrix P is obtained as in Prop. 6, together with its stationary distribution π.
We conclude, by Prop. 6, that ρ_E(H) is a rational fraction in p(a) and p(b).
This formula is valid for 1/15 < α < 1/11, see §7.5. For instance, in the case
of Figure 17, we obtain an explicit expression of ρ_E(H) in terms of p(a).
Case R(a) = {1}, R(b) = {2}. Let (Ω, ℱ, P) be a probability
space and let (x_n)_{n∈ℕ*} be
independent random variables such that P{x_n = a} = p(a)
and P{x_n = b} = p(b). We set x_H(n) = x_H(x_1 ⋯ x_n). Then x_H(n)_1
and x_H(n)_2 are transient random walks with respective drifts p(a) h_a and
p(b) h_b. We deduce immediately that ρ_E(H) = max( p(a) h_a, p(b) h_b ).
Case R(a) = {1, 2}, R(b) = {2, 3}. We consider the case (i) of §7.4. A
simple but lengthy computation provides a formula for ρ_E(H) as an infinite
series (the details are available from the authors on request).
One can obtain approximations of ρ_E(H) by truncating the infinite sums.
Computations of ρ_E for closely related models are carried out in [26].
9 Conclusion: Heap Models with Three or More Pieces
As recalled in the introduction, the optimal problem for a heap model with
an arbitrary number of rational pieces (∀a ∈ A, u(a), l(a) ∈ ℚ^R) was solved
in [21]. In Theorem 14, the case of a heap model with two general pieces is
treated. We recall the results in the table below.
Characterizing optimal schedules is an open problem for models with three
pieces or more. Generalized versions of jump words appear naturally in some
models. Let A = {a_1, …, a_k} be the alphabet. We consider α_1, …, α_k > 0.
We label the points {n α_i, n ∈ ℕ*} by
a_i and we consider the set ∪_{i=1}^k {n α_i, n ∈ ℕ*} in its natural order. The
infinite sequence of labels is called the (hypercubic) billiard sequence with
characteristics (α_1, …, α_k). Let us consider a heap
model with pieces a_1, …, a_k occupying pairwise disjoint slots. Using an argument
similar to the one in §7.4, we obtain that the billiard sequence with characteristics
(h_{a_1}, …, h_{a_k}) is an optimal schedule. A similar result is obtained for a related
heap model.
Further research. During the reviewing process of this work, alternative
proofs of Theorem 14 as well as further developments have been proposed in
[30,11]. The methods in [11] also enable one to refute the Lagarias-Wang finiteness
conjecture [27].
Acknowledgements
The authors thank Thierry Bousch, Stéphane Gaubert, Bruno Gaujal and
Colin Sparrow for stimulating discussions on the subject. The detailed comments
of an anonymous referee have also helped in improving the paper.
--R
Admission control in stochastic event graphs.
Optimal admission
Complexity of sequences defined by billiard in the cube.
Synchronization and Linearity.
Fractal Concepts in Surface Growth.
Complexity of trajectories in rectangular billiards.
Recent results on Sturmian words.
Rational Series and their Languages.
Sturmian words.
Personal communication.
Asymptotic height optimization for topical IFS
Évaluation de performances d'une classe de systèmes de ressources partagées.
Dynamics of synchronized parallel systems.
Timed Petri net schedules.
Describing industrial processes with interference and approximating their steady-state behaviour
Performance evaluation of (max
Task resource models and (max
Modeling and analysis of timed Petri nets using heaps of pieces.
Performance evaluation of timed Petri nets using heaps of pieces.
Optimal allocation sequences of two processes sharing a resource.
Extremal splittings of point processes.
Calcul de temps de cycle dans un système (max,+).
The finiteness conjecture for the generalized spectral radius of a set of matrices.
Idempotent Analysis
Mots infinis en arithmétique.
Optimal sequences in heap models.
Heaps of pieces
--TR
Tilings and patterns
Rational series and their languages
Timed Petri net schedules
Minimal (max,+) Realization of Convex Sequences
Automata, Languages, and Machines
Optimal Allocation Sequences of Two Processes Sharing a Resource
Mots infinis en arithmétique
--CTR
Pascal Hubert , Laurent Vuillon, Complexity of cutting words on regular tilings, European Journal of Combinatorics, v.28 n.1, p.429-438, January 2007
Sylvain Lombardy , Jacques Sakarovitch, Sequential?, Theoretical Computer Science, v.356 n.1, p.224-244, 5 May 2006 | timed Petri net;optimal scheduling;heap of pieces;(max,+) semiring;Sturmian word;Tetris game;automaton with multiplicities
566247 | Computational complexity of some problems involving congruences on algebras. | We prove that several problems concerning congruences on algebras are complete for nondeterministic log-space. These problems are: determining the congruence on a given algebra generated by a set of pairs, and determining whether a given algebra is simple or subdirectly irreducible. We also consider the problem of determining the smallest fully invariant congruence on a given algebra containing a given set of pairs. We prove that this problem is complete for nondeterministic polynomial time. | A and a lies in
the subuniverse of A generated by
It is easy to nd a reduction of Gen-Con to Gen-SubAlg, see for example,
[3, Theorem 5.5]. Thus Gen-Con can be no harder than Gen-SubAlg.
However, in [14] Jones and Laaser proved that Gen-SubAlg is complete
for P (the class of problems solvable in polynomial time). It is known that
NL is contained in the class of problems solvable in polynomial time, and
it is generally believed that the inclusion is proper. Thus Gen-SubAlg is
apparently strictly harder than Gen-Con.
Since it lies in NL, there is an algorithm for Gen-Con that runs in
polynomial time. Our Algorithm 1 is, of course, nondeterministic. But
even if it were converted to a deterministic algorithm in the natural way
(i.e., by an exhaustive search for a successful computation path), it would
not be particularly ecient, running in time proportional to the square,
or perhaps even the cube of s, the size of the input. This is because the
algorithm repeatedly recomputes numerous quantities, rather than saving
them (since the space required to save the information exceeds O(log s)
bits). By contrast, in a recent note [7], R. Freese exhibited an algorithm for
Gen-Con that runs in linear time. However, Freese's algorithm uses linear,
rather than logarithmic, space. In the 1980s, Demel, Demlova and Koubek
[4, 5] presented linear-time algorithms for many of the problems discussed
in this paper.
1. Background Material
We provide here only the barest summary of the notions we need from
universal algebra and complexity theory. For more details on universal al-
gebra, the reader should consult any of [3, 9, 19], and for computational
complexity, [11, 20, 22]. Also, the first two sections of our paper [2] contain
a more extensive discussion of both of these topics.
For a nonnegative integer n, an n-ary operation on a set A is a function
f : A^n → A. The integer n is called the rank of f. An algebra is a pair
A = ⟨A, F⟩, in which A is a nonempty set, and F is a set of operations on
A. The set A is called the universe and F the set of basic operations of the
algebra A. If F is finite, the algebra is said to be of finite similarity type.
A subuniverse of A is a subset closed under the basic operations.
Definition 1.1. Let A = ⟨A, F⟩ be an algebra. A congruence on A is a set
θ ⊆ A × A such that
(1) θ is an equivalence relation on A, and
(2) for every f ∈ F of rank n and all pairs (a_1, b_1), …, (a_n, b_n) ∈ θ, we have
(f(a_1, …, a_n), f(b_1, …, b_n)) ∈ θ.
The set of congruences on A is denoted Con(A). The smallest element
of this set is the identity relation 0_A = {(a, a) : a ∈ A}, while the largest
is the relation 1_A = A². It is easy to see that Con(A) is closed under arbitrary
nonempty intersections. Given a set γ ⊆ A × A we define
Cg^A(γ) = ⋂{ θ ∈ Con(A) : γ ⊆ θ },
called the congruence on A generated by γ.
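For a small finite algebra, Cg^A(γ) can be computed by a straightforward closure. The Python sketch below is ours (it is not the log-space procedure developed later in the paper): operations are given as nested tables, and union-find is combined with repeated closure under the basic operations.

from itertools import product

def cg(n, ops, pairs):
    # congruence on {0,...,n-1} generated by `pairs`; `ops` is a list of
    # operation tables (a k-ary operation is a k-fold nested list); returns
    # rep with rep[a] == rep[b] iff (a, b) lies in the generated congruence
    rep = list(range(n))
    def find(a):
        while rep[a] != a:
            rep[a] = rep[rep[a]]
            a = rep[a]
        return a
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        rep[ra] = rb
        return True
    def arity(t):
        k = 0
        while isinstance(t, list):
            t, k = t[0], k + 1
        return k
    def value(t, args):
        for x in args:
            t = t[x]
        return t
    changed = any([union(a, b) for a, b in pairs])
    while changed:                         # close under the operations
        changed = False
        for table in ops:
            k = arity(table)
            for args in product(range(n), repeat=k):
                for i in range(k):
                    for c in range(n):
                        if c != args[i] and find(c) == find(args[i]):
                            alt = args[:i] + (c,) + args[i + 1:]
                            if union(value(table, args), value(table, alt)):
                                changed = True
    return [find(a) for a in range(n)]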
A nontrivial algebra A is called simple if Con(A) = {0_A, 1_A}, while A is
called subdirectly irreducible if there is a μ ∈ Con(A) − {0_A} such that for all
θ ∈ Con(A) − {0_A}, μ ⊆ θ. The congruence μ is called the monolith of A.
languages, i.e., sets of nite strings over some xed alphabet. Associated
with each language L is a decision problem: Given a string x, decide whether
L. The amount of time or space required by a Turing machine to perform
this computation generally depends on the length of the input string x.
The language L is said to be computable in polynomial time if there is
a polynomial p such that some deterministic Turing machine can decide
whether an input string x of length s lies in L in time O p(s) . The set of
all languages computable in polynomial time is denoted P.
The set NL consists of those languages computable by a nondeterministic
Turing machine whose space requirements are in O(log s), for an input
of length s. We say that such a problem is computable in nondeterministic
log-space. Similarly, NP denotes the set of languages computable in
nondeterministic polynomial time.
Of course in practice, we prefer to couch our discussion in terms of \real"
problems, rather than languages. But we always tacitly assume that there is
some reasonable encoding of the instances of the problem into finite strings.
In this way, we can identify our mathematical problems with formal lan-
guages, and we describe our problems as certain subsets of the set of all
appropriate instances.
Given two problems A and B, we say that A is log-space reducible to B
(A ≤_log B) if there is a function f, computable in (deterministic) log-space,
such that for every instance x of A, x ∈ A ⟺ f(x) ∈ B. B is said to be
hard for NL if every member of NL is log-space reducible to B, and B is
complete for NL if it is both hard for NL and a member of NL. It is not
hard to see that ≤_log is reflexive and transitive. Thus if B is known to be
NL-complete and if B ≤_log A ∈ NL, then A is NL-complete as well.
It is not hard to show that NL ⊆ P ⊆ NP. It is generally believed,
although still unproved, that each of these inclusions is proper. It follows
from this belief that a proof that a problem B is complete for one of these
classes is strong evidence that B does not belong to any of the preceding
classes in this list of inclusions.
We make the following assumptions regarding the format of an input
instance to the problems Gen-Con, Simp and SI. All algebras are finite and
of finite similarity type. The underlying set of an algebra can be assumed
to be {0, 1, …, n − 1} for some positive integer n, and, in fact, this set can
be represented in the input by its cardinality. This requires only log n bits
of storage. Each operation of an algebra can be represented as a table of
values. Thus, a k-ary operation will be represented as a k-dimensional array,
with both the indices and entries coming from {0, 1, …, n − 1}. An array such
as this occupies n^k ⌈log n⌉ bits in the input stream.
Let A = ⟨A, F⟩ be an algebra of cardinality n with q = |F| basic operations. Suppose that
the maximum rank of any member of F is r. Then, as an input instance
to either Simp or SI, the size of A is at least max(n^r, nq). Similarly, let
s denote the size of a typical instance ⟨A, γ⟩ of Gen-Con. This is
bounded below by max(n^r, |γ|, nq). We can certainly conclude that
log s ≥ max(r log n, log q).   (1)
2. Membership in NL
In order to prove that Gen-Con lies in NL, we need a slight variation
on the classical theorem, due to Maltsev [18], describing the congruence on
an algebra A generated by a set of pairs. The only difference between our
formulation and that found in most texts is that we replace the monoid of
all unary polynomial operations on A with a smaller and more manageable
subset that we now describe. The proofs of Lemma 2.1 and Theorem 2.2
are identical to those of Theorems 4.18 and 4.19 in [19]. A treatment very
similar to ours can be found in Section 2.1.2 of [24].
Let A be a set and f an n-ary operation on A, for some n ≥ 1. We define
f(A) = { f(a_1, …, a_{i−1}, x, a_{i+1}, …, a_n) : 1 ≤ i ≤ n, a_1, …, a_n ∈ A },
viewed as a set of unary operations in the variable x.
Thus, f(A) is the set of all unary operations on A obtained by substituting
elements of A for all but one of the variables in f. The members of f(A) are
called elementary translations. We write C(A) for the set of unary constant
operations on A. Finally, if F is any set of operations on A, we let
F(A) = C(A) ∪ ⋃{ f(A) : f ∈ F, rank(f) ≥ 1 }.
Algorithm 1
 (1)  z ← a;  n ← |A|
 (2)  for i ← 1 to n do
 (3)      Choose z′ ∈ A
 (4)      Choose (u, v) ∈ γ
 (5)      for j ← 1 to n² do
 (6)          if {u, v} = {z, z′} then exit the inner loop
 (7)          Choose g ∈ F(A)
 (8)          u ← g(u);  v ← g(v)
 (9)      od
(10)      if {u, v} ≠ {z, z′} then
(11)          Reject
(12)      z ← z′
(13)      if z = b then Accept
(14)  od
(15)  Reject
For a set S of unary operations on A, let S̄ denote the submonoid of the
monoid of all self-maps of A generated by S. In particular, the identity map
is an element of S̄.
Theorem 2.2. Let A = ⟨A, F⟩ be a finite algebra, γ ⊆ A × A and a, b ∈ A.
Then (a, b) ∈ Cg^A(γ) if and only if there are elements a = z_0, z_1, …, z_m = b of A,
pairs (c_1, d_1), …, (c_m, d_m) ∈ γ, and operations f_1, …, f_m ∈ F(A)̄
such that
{ z_{i−1}, z_i } = { f_i(c_i), f_i(d_i) }  for i = 1, …, m.   (2)
Notice that in the above theorem, we can assume that m < |A|. For
if not, then there are indices j < k such that z_j = z_k. In that case, the
sequence z_0, …, z_j, z_{k+1}, …, z_m (along with the associated sequences of
(c_i, d_i) and f_i) serves as a witness to (a, b) ∈ Cg^A(γ).
It is a simple matter to turn the characterization in Theorem 2.2 into a
procedure for computing Gen-Con.
Theorem 2.3. Gen-Con ∈ NL.
Proof. Consider the nondeterministic algorithm labeled Algorithm 1. Essentially
this procedure takes a guess at the sequences z_0, …, z_m, (c_i, d_i) and f_i of
Theorem 2.2. If it finds such sequences, the
algorithm accepts the input ⟨A, γ, a, b⟩. In each trip through the main loop
(starting at statement 2), z contains the value of z_i. We nondeterministically
choose values z′ to be z_{i+1} and u, v to be c_i, d_i. In steps 5-9, we choose an
operation f ∈ F(A)̄ and test whether {z, z′} = {f(c_i), f(d_i)}. If this equality
holds, we set (at step 12) z_{i+1} to be the value of z′ and continue. If the
equality fails, we reject the instance.
The computation of the operation f is accomplished by nondeterministically
choosing a series of operations g ∈ F(A) whose composite is to be f. We
don't keep track of all of these g's. Rather, we follow the images of c_i and
d_i under these maps by recording them in the variables u and v. The length
of the composition needed to obtain f can be bounded by n², since that is
how many pairs (u, v) are possible. (And there is no need to encounter a
pair more than once.)
It is also not necessary to construct the entire set F(A) in line 7. The
set F = {f_1, …, f_q} is part of the input. For
0 ≤ i < n, let c_i denote the constant operation with value i. To choose g, pick integers k and
ℓ and elements a_1, …, a_r of A. The data ⟨k, ℓ, a_1, …, a_r⟩ is sufficient to determine the operation g.
The total auxiliary memory required by Algorithm 1 is the space for
storing the variables z, z′, u, v, a_1, …, a_r, the loop counters, and the integers
k and ℓ. Each of these holds an
integer in the range [0, n), hence requires only log n bits of storage, except
for k and ℓ, which require log(q+n) and log r bits respectively. Thus the total
space requirement is on the order of (r+7) log n + log(q+n) + log r ∈ O(log s),
where s is the size of the instance, by the inequality (1).
Theorem 2.3 can also be obtained from Immerman's theorem [12]. Immerman
showed that every language denable in FO(TC) (rst-order logic
with a transitive closure operator) lies in NL. It follows from Theorem 2.2
that Gen-Con can be so dened. Similar remarks apply to the following
theorem.
Theorem 2.4. Both Simp and SI lie in NL.
Proof. Observe that a nontrivial algebra A is simple if and only if
For each a; b; c; d, the truth of (a; b) 2 CgA(c; d) can be determined with a
single call to Gen-Con. The computation required to verify formula (3) can
be accomplished with four nested loops. It is important to observe that the
space required for the call to Gen-Con can be reused on each trip through
the loop. Thus, in addition to the space required by one call to Gen-Con,
we only need to allocate space for the four loop counters, which run from 0
to jAj 1. Thus Simp 2 NL.
Similarly, A is subdirectly irreducible if and only if
Using an argument similar to that used for simplicity, we see that SI 2 NL.
We conclude this section with a discussion of a problem rst considered
in Belohlavek and Chajda [1]. Let us dene
A an algebra and C a congruence class
of some congruence on A :
If is a congruence of an algebra A, then a congruence class of is a set
of the form for some xed element a of A.
Belohlavek and Chajda show that when restricted to those algebras that
generate a congruence-regular variety, the problem Cong-Class lies in P.
However, using the techniques we have developed in this section, we are able
to show that not only can the congruence-regularity assumption be dropped,
but Cong-Class actually lies in NL, a (presumably proper) subclass of P.
Theorem 2.5. Cong-Class 2 NL.
Proof. Let A be an algebra, and C A. Since the empty set is never a
congruence class, we assume that C is nonempty.
It is easy to see that C is a class of some congruence if and only if C is a
class of the congruence . Fix an element c 2 C. By the denition of ,
we clearly have C c= , thus we need only check the reverse inclusion. In
other words, we wish to check the condition
This condition can be checked with a simple loop. Strictly speaking, we can
not call Gen-Con as a subroutine, since that would require enough space
to hold the structure hA; C2; x; ci. Instead, the code from Algorithm 1 must
be inserted directly into the loop with references to replaced by C. Thus
Cong-Class lies in NL.
Unlike our primary problems, Gen-Con, SI and Simp, we have been
unable to determine whether Cong-Class is complete for NL. We leave
that as an open problem.
Problem. Is Cong-Class complete for NL?
3. NL-Hardness of the problems
We now turn to the problem of determining a lower bound for each of
these problems. Specically, we wish to show that each of the three problems
discussed in Theorems 2.3 and 2.4 is NL-hard. For this we will use some
facts from the complexity theory of nite graphs.
A directed graph (digraph) is a structure hG; "i, in which G is a nonempty,
nite set (the vertices) and " G G (the edges).
hG; "i be a digraph and a; b 2 G. A path from a to b of length n
is a sequence of vertices such that for every 0 i < n,
". For every vertex a, we agree that there is a path from a to a
(of length 0). We dene
there is a path from a to b
One of the best-known problems in complexity theory is the Graph Accessibility
In other words, GAP is the problem of determining whether there is a path
from a to b in a given digraph. This problem was shown to be complete
for NL in [13], although the result is also implicit in [21]. It is used as the
motivating problem for nondeterministic log-space in [20], where it is called
REACHABILITY.
The digraph G is called strongly connected if for every a 2 G, G.
In other words, for every a and b, there is a directed path from a to b.
The vertex b will be called an attractor if, for every vertex a, b 2 R(a).
Associated with these notions, we introduce two more problems.
strongly connected g
has an attractor
Str-Con was proved to be NL-complete by Laaser, see [13]. As far as we
know, the problem Attract is new.
Theorem 3.1. Each of the problems GAP, Str-Con and Attract is
complete for NL.
Proof. We mentioned above that both GAP and Str-Con are complete for
NL. Let G be a digraph. Observe that
In a manner similar to that used for SI in the proof of Theorem 2.4, an
algorithm for Attract (and also for Str-Con) can be based on two nested
loops, with a call to GAP inside the innermost loop. The space used for
the GAP computation can be reused. Thus Attract lies in NL.
To show that Attract is NL-hard, we shall give a log-space reduction of
GAP to Attract. Let hG; a; bi be an instance of GAP, where
c)g. Let
We claim that hG; a; bi 2 GAP if and only if H 2 Attract,
with c as the attractor.
To see this, suppose rst that there is a path p from a to b in G. Then
for any vertex v of G, the sequence v; p; c is a path in H from v to c. Thus
c is an attractor. Conversely, if c is an attractor in H, then there is a path
(in H) from a to c. But such a path must include b, and (since there is no
exit from c) only the last vertex in the path is equal to c. Thus, there is a
path in G from a to b, so that hG; a; bi 2 GAP.
This reduction is clearly computable in log-space, since the only auxiliary
storage that is needed is for several counters. Thus Attract is NL-
complete.
gw
Figure
1. Part of an algebra A(G)
The reader has surely noticed the structural similarity between the conditions
in (5) and those in equivalences (3) and (4):
We shall now exhibit reductions between the graph problems of Theorem 3.1
and the algebra problems discussed in Theorem 2.4. For this we use the
following construction.
hG; "i be a digraph. Fix an element ? 2= G and let G[f?g.
Dene a new graph (Thus ? is an isolated point of G?.)
For (the closed neighborhood
of v), and let choose a function
in such a way that for all
In other words, for each edge from v to w there should be some i with
Note that for all i we have Also, for each v 2 G we
dene the operation gv on G? by
w otherwise.
Finally, we dene an algebra
The construction of A(G) is illustrated schematically in Figure 1.
Let us make two observations about the algebra A(G). First, for any
element a of G, the subuniverse generated by a is R(a)[f?g. Second, A(G)
is a unary algebra, that is, each of its basic operations is of rank 1. A useful
fact about unary algebras is the following lemma. The proof is an easy
verication.
Lemma 3.2. Let B be a unary algebra and S a subuniverse of B. Then the
binary relation is a congruence on B.
Note that the congruence S has exactly one nontrivial congruence class,
namely S itself. For the next two lemmas, we omit the superscript 'A(G)'
in the notation Cg(x; y).
Lemma 3.3. (1) Let a and b be vertices of G. Then (b; ?) 2 Cg(a; ?) if
and only if b 2 R(a).
(2) If c and d are distinct elements of G?, then Cg(c; ?) Cg(c; d).
Proof. Let be the subalgebra of A(G) generated by a.
it follows from Lemma 3.2 that Cg(a; ?) S. Thus, if
R(a).Conversely, if b 2 R(a), then there is a sequence of indices
such that xed by each fi, we obtain
For the second claim, if then the inclusion is trivial. So suppose
working modulo Cg(c; d) we have
is the smallest congruence identifying c with ?, we
get Cg(c; ?) Cg(c; d).
The relationship between the algebraic problems and the graph problems
is given in the following lemma.
Lemma 3.4. For any digraph G and a; b 2 G we have
hG;
Proof. The rst equivalence follows immediately from Lemma 3.3(1). Suppose
that G is strongly connected. To show A(G) simple, pick a pair c; d
of distinct elements from G?. We wish to show that Cg(c; d) is the universal
congruence. Without loss of generality, assume that c = ?. ByLemma 3.3(2), we have (c; ?) 2 Cg(c; d). By assumption
Lemma 3.3(1), the congruence class of c modulo Cg(c; ?) contains all of G?.
Thus Cg(c; ?), hence also Cg(c; d) is universal.
Conversely, suppose that A(G) is simple. Pick vertices a; b in G. Since
Cg(a; ?) is the universal congruence, we apply Lemma 3.3(1) again to obtain
Now we address the third equivalence. Suppose that b is an attractor of
G. We wish to show that Cg(b; ?) is the smallest nontrivial congruence (the
monolith) of A(G). Choose any pair c; d of distinct elements. Assume that
using Lemma 3.3, Cg(b; ?) Cg(c; ?) Cg(c; d).
For the converse, suppose that Cg(c; d) is the monolith of A(G), with
not the identity congruence, hence by the minimality of Cg(c; d), we get
for any a 2 G, Cg(c; ?) Cg(a; ?), hence by
Lemma 3.3(1), c 2 R(a). In other words, c is an attractor of G.
Finally, we can combine Lemma 3.4 and Theorem 3.1 to obtain our main
theorem.
Theorem 3.5. Each of the problems Gen-Con, Simp and SI is complete
for NL.
Remarks:
(1) From Lemma 3.4 we see that Gen-Con remains complete for NL
if we restrict to instances hA; ; a; bi in which A is a unary algebra
and
(2) It is natural to wonder about the complexity of recognizing congruences
on an algebra. In other words, given an algebra A and a binary
relation on A, determine whether is a congruence on A. It is not
hard to see that this can be done in (deterministic) log-space.
First, one can verify that is an equivalence relation using three
nested loops, each running through the elements of A. For example,
if a and b are two of the loop counters, then we can test the symmetry
of by verifying that whenever (a; b) is in , so is (b; a).
To test the second condition of Denition 1.1, use two sets of variables
ar and denotes the maximum rank of
any of the basic operations.) For each basic operation f, have each of
traverse the entire set Ar. Whenever we
have (ai; bi) 2 for all i k, verify that
. This requires 2r counters, each using log n bits. Note that an input
instance to this problem is almost identical to that of Gen-Con,
so we conclude from inequality (1) that our space requirements are
bounded by the logarithm of the size of the input.
(3) In the construction of A(G), the sequence hgviv2G of unary operations
can be replaced with a single binary operation given by
y otherwise.
This does not result in any space-saving when all operations are
given via tables, but might be very ecient if the operations are
allowed to be presented by other means, such as Boolean circuits.
(4) GAP is a problem for directed graphs. There is an analogous prob-
lem, called UGAP, for undirected graphs. It follows at once that
However, it is an open question whether UGAP is
complete for NL. The complexity class SL (symmetric log-space) is
dened in such a way that UGAP is complete for SL. See Lewis and
Papadimitriou[16] for details. The completeness of UGAP for NL
is equivalent to the assertion that
A set A can be viewed as an algebra in which the set of basic
operations is empty. In that case, for any subset of A2, CgA()
is nothing but the smallest equivalence relation on A containing .
Now it is easy to see that (a; b) 2 CgA() if and only if a and b lie in
the same connected component of the undirected graph hA; i, where
g. In other words, UGAP coincides with
the special case of Gen-Con in which the \algebra" is constrained
to have no basic operations. In our experience, this special case is
of lesser complexity than is the general case. This suggests that one
ought to try to prove that Gen-Con 2= SL, thereby settling the
question of whether SL and NL are distinct.
Recall the problem Cong-Class mentioned at the end of Section
2. We proved in Theorem 2.5 that Cong-Class lies in NL.
Since we have been unable to prove that this problem is complete
for NL, we are led to wonder whether Cong-Class might lie in an
interesting proper subclass. SL seems to be a natural candidate. As
a companion to Problem 2, we ask
Does Cong-Class lie in SL?
4. Fully invariant congruences
An endomorphism of an algebra Fi is a homomorphism from A
to itself, in other words, a function A such that for all f 2 F and
. The collection of
all endomorphisms of A is denoted End(A).
A congruence on A is called fully invariant if for all (x; y) 2 and
all h 2 End(A), h(x); h(y) 2 . We denote by Con(A) the set of fully
invariant congruences of A. It is immediate from the denition that
This equation has several consequences. First, both A and A2 are fully
invariant congruences on A. Second, for any A2, there is a smallest
fully invariant congruence on A containing . We shall write CgA() for
this congruence. Finally, Theorem 2.2 can be applied to compute CgA()
(with F replaced by F [ End(A)).
Parallel to our problem Gen-Con, we dene
With minor modications, Algorithm 1 can be used to compute Gen-Confi.
In light of equation (6), if Algorithm 1 is used to compute Gen-Confi, then
in step 7, g must be chosen from rather than from F(A). But
note that Thus, we provide a modied
algorithm, Algorithm 2, in which this step is replaced with the sequence
7a{7e. The idea behind this sequence of steps is as follows. We rst toss a
coin. If the coin comes up 'heads', we choose g 2 F(A) as before. However,
on 'tails', we guess an arbitrary function g : A ! A and then check to see if
g is an endomorphism of A. If it is, we proceed to step 8. If not, we reject
this instance.
Algorithm
(1) z a, n jAj
(3) Choose z0 2 A
Choose (u; v) 2
do
7a. Toss a coin
7b. If heads then choose g 2 F(A)
7c. else do
7d. Choose
7e. If g 2= End(A) then Reject
od
od
Reject
od
Reject
Unlike the original algorithm, this modied version can not be executed
in log-space. This is because we require enough space to hold the entire
function g whenever the coin comes up 'tails'. Since a function from A to A
is a list of n integers in the range 1g, the space requirement for g
is n log n. In general, this will not be bounded by the logarithm of the size
of the input (see inequality (1)).
However, our modied algorithm does run in (nondeterministic) polynomial
time. The verication that a function g is an endomorphism requires
one pass through each of the tables for the basic operations of the algebra.
Since the algorithm reaches step 7 at most n3 times, the total running time
will be bounded by a polynomial in the size of the input.
As an alternative, one can prove that Gen-Confi 2 NP by observing
that in light of Theorem 2.2, Gen-Confi can be dened by a second-order,
existential sentence. From Fagin's theorem [6] it follows that any language
dened in this way lies in NP.
We now wish to prove that Gen-Confi is hard for NP. We will do this
by reducing the well-known problem Clique to Gen-Confi. For a positive
integer n, let Kn denote the digraph with vertex set f1; ng and (di-
rected) edges f (x; g. If G is a digraph, then a clique of G is asubgraph isomorphic to some Kn. We call G loopless if it has no edges of
the form (x; x). We dene
loopless digraph, n 1,
and G has a clique of size n :
The problem Clique is known to be NP-complete, see [8, p. 194].
"i and digraphs. A homomorphism from H
to G is a function t: H ! G such that (x; y) 2 implies (t(x); t(y)) 2 ".
Note that a loopless graph G has a clique of size n if and only if there is a
homomorphism from Kn to G.
In [10], Hedrln and Pultr described an elegant transformation from digraphs
to unary algebras that has been used several times [2, 15] to reduce
problems involving graphs to similar problems involving algebraic struc-
tures. Given a digraph hG; "i, we shall dene an algebra Gb as follows.
The universe of Gb is the set are points
not appearing in either G or ". are unary
operations dened by
Furthermore, let t: H ! G be a digraph homomorphism. We dene a
function t^: Hb ! Gb given by
Theorem 4.1 (Hedrln and Pultr, [10]). The mappings G ! Gb and t ! t^
constitute a full and faithful functor from the category of digraphs to that
of algebras with two unary operations. In other words, for each pair H,
G of digraphs, and each digraph homomorphism t, the function t^: Hb !
Gb is a homomorphism and furthermore, the mapping t ! t^ is a bijectionbetween the homomorphisms from H to G and the homomorphisms between
Hb and Gb.
It follows that any homomorphism from Hb to Gb must preserve u and v,
and map vertices to vertices and edges to edges.
Lemma 4.2. Clique log Gen-Confi.
Proof. Let hG; ni be an instance of Clique, with "i. Fix a new
vertex a and dene That is, there
is an edge from a to each vertex of G as well as an edge in the opposite
direction. Let denote the disjoint union of the
graphs G0 and K.
Now dene G00 to be G0 +K and set A = Gc00 (see Theorem 4.1). Pick two
distinct vertices a and b from K, and let e be the edge from a to b. Finally,
To complete the proof of the Lemma, we
shall show that
hG;
that is, G has a clique of size n if and only if (a; a) 2 .
Suppose rst that G has a clique of size n. Then G0 has a clique of size
n+1 that includes the vertex a. Therefore, there is a graph homomorphism
t0 from K to G0. Because of the symmetry of K, we can assume that
a. The map t0 can be extended to a graph homomorphism t: G00 !
G00 by mapping each vertex of G0 to itself. Theorem 4.1 yields an (algebra)
homomorphism t^: A ! A. Note that by the denition of t^, we have
Now, using the fact that is a fully invariant congruence, we compute
Conversely, suppose (a; a) 2 . Since A is a unary algebra and Kb is a
subalgebra, by Lemma 3.2 there is a on A. Since
(a; a) 2= , we certainly have * . On the other hand, , so is
not fully invariant. (For otherwise, = Cg() .) It follows that some
endomorphism of A must fail to map Kb to itself. By Theorem 4.1, this
endomorphism is of the form t^, for some t: G00 ! G00, and it must be the
case that t does not map K into itself. But since K is complete and is
disjoint from G0, t must actually map K to G0. Therefore, G0 contains a
clique of size n + 1. At most one of the vertices in the clique can be equal
to a, so we conclude that G has an n-clique.
Theorem 4.3. Gen-Confi is NP-complete
Proof. Our modied version of Algorithm 1 shows that Gen-Confi 2 NP.
Since Clique is NP-complete, it follows from Lemma 4.2 that Gen-Confi
is NP-complete as well.
The notion of \full invariance" can be extended to objects other than
congruences on algebras. For example, let hG; "i be a digraph and
G. Let us call S fully invariant if for every h 2 End(G), h(S) S.
Furthermore dene the fully invariant subset generated by a set S (denoted
SgG(S)) to be the smallest fully invariant subset of G containing S. Notice
that g. We dene two problems:
digraph and S a fully invariant subset g
digraph and a 2 SgG
Using the same ideas as in Theorem 4.3, we can prove the following.
Theorem 4.4. FI-Subset is complete for co-NP. Gen-Subsetfi is complete
for NP.
Proof. Suppose that hG; Si is an instance of FI-Subset. Let P denote
the complement of FI-Subset. To show that S is not fully invariant, we
can guess a function verify that h is an endomorphism and
that h(S) * S. This gives a nondeterministic algorithm for P that runs in
polynomial time. To reduce Clique to P, let G be a loopless graph and n
a positive integer. Let Kn. Then it is easy to see that Kn fails to
be fully invariant in H if and only if there is a (graph) homomorphism from
Kn to G. This in turn is equivalent to the existence of an n-clique in G.
Thus P is NP-complete, and therefore FI-Subset is complete for co-NP.
Now for the second problem. The condition hG;
can be checked by guessing a function G, checking that h is an
endomorphism of G, and that a 2 h(S). To prove that Clique log
Gen-Subsetfi follow the construction given in Lemma 4.2 to produce the
graph G00. Then one easily sees that G has an n-clique if and only if
a 2 SgG 00 (K). Thus Gen-Subsetfi is NP-complete.
--R
Computing congruences eciently
On full embeddings of categories of algebras
Introduction to automata theory
Languages that capture complexity classes
Complete problems for deterministic polynomial time
The computational complexity of some problems in universal algebra
Symmetric space-bounded computation
Categories for the working mathematician
On the general theory of algebraic systems
Computational complexity
Relationships between nondeterministic and deterministic tape com- plexities
Introduction to the theory of computation
Remarks on fully invariant congruences
Universal algebra for computer scientists
Department of Mathematics
Department of Computer Science
--TR
Fast algorithms constructing minimal subalgebras, congruences, and ideals in a finite algebra
Languages that capture complexity classes
Introduction to the Theory of Computation
Introduction To Automata Theory, Languages, And Computation
Computers and Intractability
Complexity of Some Problems Concerning Varieties and Quasi-Varieties of Algebras | simple algebra;nondeterministic log-space;congruence;graph accessibility |
566248 | Decision tree approximations of Boolean functions. | Decision trees are popular representations of Boolean functions. We show that, given an alternative representation of a Boolean function f, say as a read-once branching program, one can find a decision tree T which approximates f to any desired amount of accuracy. Moreover, the size of the decision tree is at most that of the smallest decision tree which can represent f and this construction can be obtained in quasi-polynomial time. We also extend this result to the case where one has access only to a source of random evaluations of the Boolean function f instead of a complete representation. In this case, we show that a similar approximation can be obtained with any specified amount of confidence (as opposed to the absolute certainty of the former case.) This latter result implies proper PAC-learnability of decision trees under the uniform distribution without using membership queries. | Introduction
Decision trees are popular representations of Boolean func-
tions. They form the basic inference engine in well-known
machine learning programs such as C4.5 [Q86, Q96]. Boolean
decision trees have also been used in the problem of performing
reliable computations in the presence of faulty components
[KK94] and in medical diagnosis. The popularity
of decision trees for representing Boolean functions may be
attributed to the following reasons:
Universality: Decision trees can represent all Boolean
functions.
Amenability to manipulation: Many useful operations
on Boolean functions can be performed efficiently in
time polynomial in the size of the decision tree rep-
resentation. In contrast, most such operations are intractable
under other popular representations. Table 1
gives a comparison of decision trees with DNF formulas
and read-once branching programs.
Supported by NSF grant 9820840.
The advantages of a decision tree representation motivate
the following problem:
Given an arbitrary representation of a Boolean function
f , find an equivalent representation of f as a decision tree
of as small a size as can be.
It is immediately evident that this problem is bound to be
hard as stated. Polynomial time solvability of this problem
would imply that satisfiability of CNF formulas can be decided
in polynomial time which is impossible unless P=NP.
We therefore consider a slightly different problem. Let us
say that g is an -approximation of f if the fraction of assignments
on which g and f differ in evaluation is at most
.
Given an arbitrary representation of a Boolean function
f , find an -approximation of f as a decision tree of as small
a size as can be.
In order not to fall into the same trap as before, we are
now interested in solving this problem efficiently but realis-
tically: that is, we may use time polynomial in the following
parameters:
1. The size of the given representation of f .
2. The size of the smallest decision tree representation of
f for a given .
3. The inverse of the desired error tolerance, i.e., 1=.
Such approximations would be useful in all applications
where a small amount of error can be tolerated in return for
the gains that would accrue from having a decision tree rep-
resentation. Indeed this is the case for most applications in
machine learning and data mining. For example, one could
post-process the hypothesis output of a learning program and
convert it into a decision tree while ensuring that not much
error has been introduced by choosing a suitably small .
Note here that one may use knowledge of special properties
of the representation scheme of the hypothesis in constructing
the decision tree approximation. Further note that one
may even construct a decision tree approximation for a decision
tree hypothesis! This would be useful in conjunction
with programs like C4.5 which output decision trees but do
not make special efforts to ensure that the output tree is provably
the smallest it can be for a desired error tolerance. At the
expense of sacrificing a little more error, one could achieve
the desired minimization in such cases.
Table
1: The complexity of operations in different representation schemes
Read-Once Branching Decision Trees
Programs
Universality
AND of 2 representations Polynomial time a Polynomial time a Polynomial time a
of 2 representations Polynomial time a Polynomial time a Polynomial time a
Complement of a representation Exponential time a Polynomial time a Polynomial time a
Deciding satisfiability Polynomial time a Polynomial time a Polynomial time a
Deciding unsatisfiability NP complete b Polynomial time a Polynomial time a
Deciding monotonicity co-NP complete b Open Polynomial time c
Deciding equivalence co-NP complete b co-RP d Polynomial time e
Deciding symmetry co-NP complete b Polynomial time c Polynomial time c
Deciding relevance co-NP complete b Open Polynomial time e
of variables
Counting number of #P-complete f Polynomial time c Polynomial time c
satisfying assignments
Making representation NP hard b co-RP d Polynomial time e
irredundant
Making representation NP hard b Open NP-hard g
minimum
Truth-table NP hard h Open Polynomial time i
minimization
a Straightforward from the definition of the representation scheme.
Easy reduction from CNF-SATISFIABILITY.
c Proved in this paper.
d Is a result (or follows from one) in [BCW80].
e It is a folk theorem proves that decision trees are testable for equivalence in polynomial time; the other
results follow from this.
f Proved in [S75].
Proved in [ZB98].
h A result of Masek cited in Garey and Johnson's book [GJ79].
Proved in [GLR99].
We first show that in the case of some well-known representation
schemes, small -approximating decision trees can
be obtained in quasi-polynomial time. polynomial factor
of the first parameter listed above is multiplied by a factor
which involves an exponent logarithmic in the second and
third parameters.) These schemes are:
1. Decision trees
2. Ordered Binary Decision Diagrams
3. Read-once Branching Programs
4. O(log n)-height Branching Programs
5. Sat-j DNF formulas, for constant j
6. -Boolean formulas
The third item above is a generalization of the first two, so
the result for the first two follows from the third. Our quasi-polynomial
time algorithm actually holds with more generality
than for just these classes. Roughly speaking, all representation
schemes for which the number of satisfying assignments
of the input function under "small" projections can
be computed efficiently-a property we call sat-countable
in this paper-would come under the technique employed
here. Indeed, we present the algorithm in this more general
way and then argue that the required properties hold for all
the above schemes. It is worth emphasizing here that although
the time taken by our algorithm is quasi-polynomial,
the size of the decision tree approximation is not: in fact, the
output decision tree has the smallest size that any decision
tree of its height and level of approximation can have. In this
sense, it is optimal and certainly has size no larger than that
of the smallest decision tree which can represent the boolean
function being approximated.
We also consider the situation where only some evaluations
of a Boolean function f are available. Given a sample S
of such evaluations, we show that the previous algorithm can
be modified slightly to give a quasi-polynomial time algorithm
which produces a small -approximating decision tree
over the sample S. That is, the decision tree may disagree
with f in evaluating at most jSj assignments out of S, for
any given .
We argue that this latter result implies proper quasi-polynomial
time PAC-learnability of decision trees under the uniform
distribution. Informally, the learning result may be interpreted
8as follows. Compared to the absolute certainty of
the -approximation in the first result, the learning result says
that if we are given access only to a source of random evaluations
of f (instead of a complete representation of f ) then
the output of our algorithm will be an -approximating decision
tree with as much confidence as desired, but not absolute
certainty. This may be the only way to obtain decision tree
approximations for representation schemes like DNF formulas
for which counting the number of satisfying assignments
is #P-complete [GJ79].
A novel feature of the learning algorithm is that it is not
an Occam algorithm [BEHW87] unlike the ones known in
learning theory. This is because our algorithm may actually
make a few errors even on the training sample used. Consequently
the analysis of the sample complexity is a generalization
of the ones normally used, and may be of some
independent interest.
The learning result can be compared with similar ones
in learning theory. Bshouty's monotone theory based algorithm
[B95] can be deployed to learn decision trees under
any arbitrary but fixed distribution in polynomial time but
has the following drawbacks in comparison with our algo-
rithm: the algorithm uses membership queries and outputs
not a decision tree but a depth-3 formula. Similarly, Bshouty
and Mansour's algorithm [BM95] does not output a decision
tree. Ehrenfeucht and Haussler [EH89] show that decision
trees of rank r are learnable in time n O(r) under any distribu-
tion. The rank of a decision tree T is the height of the largest
complete binary tree that can be embedded in T . Since a decision
tree of m nodes has rank at most log m, at first glance,
this result would seem to be an improvement over the learning
result of this paper since one could learn m node decision
trees in quasi-polynomial time under any distribution! The
difference is this: in learning m node decision trees over n
variables our algorithm would always produces a decision
tree of size no larger than m using a sample of size at most
polynomial in m and the inverse of the error and confidence
parameters. In contrast, the algorithm of Ehrenfeucht and
Haussler may output a tree of size n O(log m) using a sample
of size quasi-polynomial in n; m and polynomial in the
inverse of the error and confidence parameters.
The rest of the paper is organized as follows. Section 2
contains definitions and lemmas used in the remaining sec-
tions. Section 3 has our algorithm for finding an -approx-
imating decision tree given a sat-countable representation.
Section 4 contains the results on -approximating decision
trees given only a source of random evaluations of a Boolean
function. We conclude with some open problems in Section
5.
Preliminaries
Let f be a Boolean function over a set
of n variables. A (total) assignment is obtained by setting
each of the n variables to either 0 or 1; such an assignment
may be represented by an n-bit vector in f0; 1g n in the natural
way. A satisfying assignment for f is one for which
1. The number of satisfying assignments for f is
denoted by ]f .
A partial assignment is obtained when only a subset of
variables in V is assigned values. A partial assignment may
be represented by a vector of length n each of whose elements
is either 0, 1, or *. A vector element is * if the corresponding
variable was not assigned a value. Thus, the total
number of partial assignments is 3 n and the number of partial
assignments with k variables assigned values is n
The size of a partial vector , denoted jj, is the number
of elements in assigned 0 or 1. The empty partial vector,
denoted , is the one in which all variables are assigned *.
The projection of f under a partial assignment , denoted
f , is the function obtained by "hardwiring" the values
of the variables included in . More precisely, given a
total assignment and a partial assignment , let denote
the total assignment obtained by setting each variable whose
value is not * in to the value in and each variable whose
value is * in to the value in . Then, f is defined by
We are interested only in projection-closed representation
classes of Boolean functions, i.e., ones for which given
a representation for a Boolean function f and any partial vector
, the Boolean function f can also be represented in the
class and, moreover, such a representation can be computed
in polynomial time. We say that a projection-closed representation
class is (polynomial-time) sat-countable if given a
representation for f , the value of ]f can be computed in time
polynomial in the size of the representation and n, the total
number of variables. If d is a representation of the function
f , we use jdj to denote the size of d. Where the context assures
that there is no ambiguity, we treat a representation as
synonymous with the Boolean function being represented.
The error err(f; f 0 ) of f with respect to another Boolean
function f 0 defined over the same set of n variables is
the total number of assignments such that f() 6= f 0 ();
moreover f is an -approximation of f 0 if
We consider the following projection-closed representation
classes of Boolean functions in this paper.
1. Decision Trees. A decision tree T is a binary tree where
the leaves are labeled either 0 or 1, and each internal
node is labeled with a variable. Given an assignment
evaluated by starting at the root
and iteratively applying the following rule, until a leaf is
reached: let the variable at the current node be x the
value of at position i is 1 then branch right; otherwise
branch left. If the leaf reached is labeled 0 (resp. 1)
then 1). The size of a decision tree is
its number of nodes.
2. Branching programs (BPs). A branching program is a
directed acyclic graph with a unique node of in-degree
(called the root, and two nodes of out-degree 0 (called
leaves), one labeled 0 and the other labeled 1; each non-leaf
node of the graph contains a variable, and has out-degree
exactly two.
If every variable appears at most once on any root-leaf
path, then the branching program is called read-once
(ROBP). Note that a decision tree can effectively be
considered to be an ROBP. Assigments are evaluated
following the same rule as for decision trees. The height
of a BP is the length of the longest path from the root to
a leaf node.
An ordered binary decision diagram (OBDD) is an
ROBP with the additional property that variables appear
in the same order on any path from root to leaf.
3. SAT-j DNF formulas: DNF formulas in which every assignment
is satisfied by at most j terms of the formula.
4. formulas: Boolean formulas in which every variable
occurs at most once.
Proposition 1 Decision trees, OBDDs, ROBPs, BPs, SAT j-
DNF formulas, and formulas are projection-closed.
Proof. For any BP, the projection under a partial vector
can be computed as follows: redirect incoming edges for
each vertex labeled by a variable that is assigned a value in
to the left (respectively, right) child of the vertex if that
variable is assigned the value 0 (respectively, 1) in . Recursively
delete vertices with no incoming edges. By using
depth-first search, these steps can be achieved in linear time.
Note that if the BP is a decision tree, OBDD, ROBP, or a h-
height BP then the projection also belongs to the same class.
For SAT j-DNF and -formulas, the projection can be
obtained by substituting the values for each assigned variable
in . A 0 in a DNF term will result in the deletion of that
term, whereas a 1 results in the deletion of that variable from
the term. In a -formula, appropriate Boolean algebra rules
are applied to eliminate the 1's and 0's so obtained. This is
accomplished in linear time in both cases.
Proposition 2 ROBPs and O(log n)-height BPs are sat-count-
able.
Proof. The number of satisfying assignments of an ROBP
f is computed as follows.
Traverse the nodes of f in reverse topological order. Let
f(x) denote the sub-ROBP rooted at a node x consisting of
all vertices that can be reached from x and the edges joining
them. When a node x is visited the fraction x , 0 x
1, of assignments of f(x) that are satisfying assignments is
computed as follows. If x is a leaf then x is the same (0
or 1) as the value of the leaf node; otherwise x is an internal
node and x is y+z, where y and z are the left and right
children of x.
A simple inductive argument shows that on completion
r , where r is the root of the ROBP, is the fraction of satisfying
assignments of f . Consequently,
n is the number of variables in f .
Next, let B be a O(log n) height BP representing a Boolean
function f . First, construct a decision tree equivalent
to f by "spreading out" B by creating a separate copy of a
node whenever needed rather than sharing subfunctions as
in a branching program. Such a decision tree may not immediately
satisfy the "read-once" property, but it is easily
converted into one by eliminating subtrees under duplicated
variables along a path. The total number of nodes in this resultant
decision tree is at most 2 O(log Finally,
compute the number of satisfying assignments for the decision
tree as described above for a ROBP.
The next two propositions are not used in the paper; they
are proved here simply in order to complete Table 1.
Proposition 3 Decision trees can be tested for monotonicity
in polynomial time.
Proof.
Let T be a given decision tree over n variables.
It is convenient to extend the partial order defined over
the Boolean lattice to the set of partial vectors by: if
for all i, implies that i 6= 0. For any partial vector
, we will say that for every
total vector , T
Each leaf node x in T determines a partial vector p(x)
based on the assignment to variables on the path from the
root of T to the leaf node. Let us say that x is a counterexample
to monotonicity of T if there is a partial vector p(x)
such that T and x has a value of 1. The essential
observation is that T is monotone if and only if no leaf of T
is a counterexample to its monotonicity.
It is easy to test for monotonicity of T using the above
observation: for each leaf node x assigned the value 1, let
be the partial vector obtained by setting to 1 only the
variables in p(x) assigned a 1 and leaving the remaining variables
as *. If under the projection p 0 (x) T is not identically 1,
then x is a counterexample to monotonicity as demonstrated
by any path to 0 in the projection.
A Boolean function f(v
permutation
(v 0
Proposition 4 ROBPs can be tested for symmetry in polynomial
time.
Proof.
This proof is inspired by the central idea in [BCW80].
Let f be any Boolean function over the set of variables
denote the set of assignments
that f evaluates to 1.
We first generalize f to be a real-valued function by
treating V to be a set of real variables; more precisely re-define
f by:
Y
Y
When the variables in V assume the values 0 and 1, the
value of the redefinition coincides with the value of the Boolean
so we do have a true generalization. As shown
in [BCW80], given a ROBP representation of the Boolean
function f , the value of the real function on any real vector
over V can be computed in linear time by visiting the ROBP
in topological order.
Next, let
Table
2: System of Linear Equations for calculating jR k jB
be the set of assignments in f + with precisely k ones. Then,
Y
x
Y
Now computing the values of g(0);
mentioned above by using the ROBP representation of f and
treating jR k j as variables leads to the system of linear equations
in Table 2.
It's easily shown that the rank of the coefficient matrix
therefore the system admits a unique solution for
Finally, observe that the Boolean
function f is symmetric if and only if jR k j is either 0 or n
for all values of k; 0 k n.
From the above proof, it follows that we can decide symmetry
for OBDDs and decision trees also in polynomial time.
Proposition 5 SAT j-DNF formulas are sat-countable.
Proof. Let us say that two terms t and t 0 are conflicting if t
contains a literal l and t 0 contains a literal l. The consensus
of two non-conflicting terms t and t 0 , denoted tt 0 is the term
obtained from the union of all the literals in t and t
t 0 are conflicting, then their consensus is 0.
The definition of a SAT-j DNF formula f implies that in
every set terms of the formula,
there must be at least two conflicting terms. Therefore, using
the principle of inclusion and exclusion,
Here, for any term t of k literals ]t is simply 2 n k . From
the comment above, this sum needs to consider at most the
consensus of j terms of f . For constant j, the total time
for the computation is a polynomial.
Proposition 6 -formulas are sat-countable.
Proof. Let f be a -formula over a set of n variables. If
f is the constant 1, then f is the constant
0, then is a term containing a single literal,
then can be written either as f 1 f 2
or are -formulas over disjoint
sets of n 1 and n 2 variables respectively. Then, it is easy
to argue that
Recursive
application of these rules ensures that ]f can be computed in
3 Finding a Decision Tree Approximation
The main result of this section is an algorithm for constructing
a decision tree -approximation of any Boolean function
f represented in a projection-closed sat-countable class.
The heart of our algorithm is a procedure FIND which is
a generalization of the dynamic programming method used
in [GLR99] for truth-table minimization of decision trees.
FIND works as follows. Given f , a Boolean function
over n variables, a height parameter h and a size parameter
m, it builds precisely one tree from the set T ;k , for each
partial vector of size at most h and for eac h k; 0 k m.
(Here, T ;k is the set of all decision tree representations of
the function f of size at most k and height at most h jj
that have minimum error with respect to f and among all
such trees, are of minimum size.) The desired approximation
will therefore be the tree constructed for the set T ;w , where
1g.
The algorithm employs a two-dimensional array P [; k]
to hold a tree in T ;k . A tree in the P array will be represented
by a triple of the form (root, left subtree, right sub-
tree), unless it contains a single leaf node, in which case it
will be represented by the leaf's value. For a partial vector
, the notation :v 1 (:v 0, respectively) denotes the
partial vector obtained by extending by setting the variable
v to 1 (0, respectively).
Lemma 7 Algorithm FIND is correct, i.e., given a sat-count-
able representation of a Boolean function f , a height parameter
h, and a size parameter m, FIND outputs a decision tree
T 0 of height at most h and size at most m such that among all
such decision trees, err(T 0 , f ) is minimum; if there is more
than one decision tree with the same minimum error, then
is of minimum size among these trees.
Proof. We show by induction on l
that P [; k] is a tree in T ;k , for all 0 jj h and
01. foreach such that jj h do
02. if ](f
03. else P [; 0] 0;
04.
05. for to 0 do
06. foreach such that do
07. foreach do
08. P [;
09. foreach variable v not used in and each k do
11. if err(P [; k]; f ) > errV then
12.
13. else if err(P [; k]; f
14. if (jP [:v 0; k
15.
Figure
1: Algorithm FIND.
mg. For any , the
tree must be a leaf with value 0 or 1, depending on which
value yields the minimum error relative to f . Lines 2 and
3 of Algorithm FIND examine the hypercube corresponding
to f and determine whether the majority of assignments are
0 or 1. This is also true for any such that l
l
Assume that P [; k 0 ] has been correctly computed for all
such that l < l and all k 0 in [0; minf2 hjj
Also assume that all P [; k 0 ] have been correctly computed
for all k 0 in [0; k 1]. We show that FIND causes a tree
in T ;k to be placed in P [; k]. If the size of the trees
in T ;k is less than k, then, from the induction hypothe-
sis, P [; k] is initialized to a tree in T ;k in line 8. Lines
9-15 cannot then modify P [; k] and the algorithm is cor-
rect. Therefore, let the size of the trees in T ;k be exactly
k. Let Opt be any tree in T ;k and let v be its root.
Now v must be a variable that is not assigned a value in
. Let the sizes of Opt's left and right subtrees be k 0 and
respectively. Observe that k 0 and k 1 are one of the
examined in line 9. Let
and let 1. From the induction hypothe-
err(Right subtree of Opt; f1 ). Since
the error of a tree is the sum of the errors of its two subtrees,
the algorithm finds a tree for P [; k] which has error at most
that of Opt and size at most that of Opt. The lemma follows.
Lemma 8 Let p(jf j; n) denote the time complexity for computing
the number of satisfying assignments of an arbitrary
projection of a given sat-countable function f . The time
complexity of FIND is O(n O(h)
Proof. Since f is a sat-countable representation, the time
required by line 2 is O(p(jf j; n)n). The number of partial
vectors examined in line 1 is h
lines 1-4 take O(p(jf j; n)n O(h) ) time. Lines 5 and 6 cause
the same O(n O(h) ) partial vectors to be examined. The variable
(line 7) takes on at most m values, there are at most n
possibilities for v and m possible combinations of k 0 and k 00
in line 9.
The complexity of lines 10-15 is dominated by O(1) error
computations between a decision tree T in P and the
sat-countable function f . Each such error computation
can be implemented as follows. For any leaf node x in
be the partial vector corresponding to the evaluation
path in T leading up to x. The contribution to the
total error of the partial vector is then either ]((f
the leaf x has value 0 and 2 njjjj ]((f ) ) if it has
value 1. The total error err(T obtained by summing
the errors computed in this fashion over each leaf
of T . The complexity of this computation is bounded by
O(p(jf j; n)m) and that of lines 5-15 and hence Algorithm
FIND is bounded by O(n O(h) m 3 p(jf j; n)). As is common
in dynamic programming algorithms, memoizing helps to
reduce the overall complexity. Observe that the complexity
of error computation can be reduced by maintaining a
second two-dimensional array E each of whose elements
contains the error of the corresponding element in array P .
First E[; 0] can be computed in O(p(jf j; n)) time in lines
2 and 3. Then the remaining E[; k]s are computed every
time P [; k] is updated in O(1) time by simply summing
the error of the left and right subtrees of P [; k]. With
this time-saving modification, the time complexity becomes
O((p(jf
Lemma 9 Let T be an m-node decision tree. Then there
exists a decision tree T of height at most
and at most m nodes such that T is an -approximation of
T .
Proof. Restrict T to height h by converting any node x
at level h to either 0 or 1 depending on whether there are
more 0's or 1's respectively in the hypercube defined by the
path leading to x. Call this tree T . Clearly T has no
more than m nodes and the error of T is confined to the
hypercubes of the converted nodes x at level h in the original
tree. Since there are at most dm=2e such nodes and the
error of each node is at most 2 n h 1 , it follows that T is a
-approximation of T . Substituting
4 now yields the desired result.
Theorem 10 Given a sat-countable Boolean function representation
f whose smallest decision tree representation has
at most m nodes and any error parameter , we can find a
decision tree T 0 of at most m nodes which -approximates f
in time polynomial in jf j and n log m= .
Proof. Given f , we use the standard doubling trick to determine
in O(log m ) iterations of the algorithm the least value
m such that FIND(f , m , log((m returns a decision
tree which -approximates f . By Lemma 9, m is at
most m, the size of the smallest decision tree which can represent
f . The correctness and time complexity then follow
from Lemmas 7 and 8 respectively.
4 Learning Decision Trees under the Uniform
Distribution
We show that the algorithm of the previous section can be extended
to learn decision trees under the uniform distribution.
As we remarked in the introduction, this means that, given
access to a uniformly distributed sample of evaluations of
a boolean function f an error parameter and a confidence
parameter -, our algorithm will output a a decision tree T
of at most m nodes, where m is the least number of nodes
needed to represent f as a decision tree and such that T -
approximates f with confidence at least 1 -. The algorithm
takes time polynomial in n log(m=) and log(1=-), i.e., it is
a quasi-polynomial time algorithm. However, the sample-
complexity of the algorithm is only a modest polynomial in
the parameters m, log n, log(1=-) and log(1=).
We use the following additional terminology to prove the
results of this section. Let T m;h;n denote the class of decision
trees over n variables that have height at most h and
size at most m. For any decision tree T , let T (h) be the
tree of height h obtained from T by converting all non-leaf
nodes of depth h in T to leaf nodes with classification 0 or
1, depending on whether the majority of the assignments in
the corresponding hypercube of f are classified as 0 or 1,
respectively.
Recall that for any two Boolean functions f 1 , f 2 over n
denotes the number of assignments
for which f 1 () 6= f 2 (); by extension, if S is a sample
of classified examples of the form h; bi where is an assignment
is the
number of examples in S of the form h; bi where f() 6= b.
We need the following well-known inequalities.
Proposition 11 (Chernoff Bounds) Let
the outcomes of r identical, independent Bernoulli trials
with Prob [X
pr and for 0
Prob[R (p
Given a sample S of classified examples of a
boolean function of the form h; bi where is an assignment
and b 2 f0; 1g, a height parameter h, and a size parameter
m, a decision tree D of height at most h and size at
most m can be computed such that among all such decision
trees, err(D, S) is minimum, and among all such minimum
error trees, D has minimum size. The computation requires
O(n O(h)
Proof. Let S denote the assignments in S that extend the
partial assignment . For a given , S can be computed
in O(jSjn) time. Modify the condition of Line 2 of Algorithm
FIND so that number of assignments of S whose values
are 1 and 0 are compared. The modified Line 2 takes
O(jSjn) time. All references to f (lines 10, 11, and 13)
are replaced by S . Error computations can be carried out
as described in the proof of Lemma 8. Each error computation
takes O(jSjm) time. Since the rest of the algorithm is
unchanged, the complexity is obtained by replacing p(jf j; n)
by jSj. Note that this is also true of the modified algorithm
proposed in the proof of Lemma 8. Correctness follows from
Lemma 7.
Theorem 13 Given:
A uniformly distributed sample S of size
of examples of an m-node decision tree T over n variables
An error parameter , 0 < < 1, and
A confidence parameter -, 0 < - < 1,
we can find a decision tree D in T m;h;n with
in time O(rm 2 n O(h) ) such that with confidence at least 1 -,
the error of D in approximating T is at most , i.e.,
Proof.
We execute algorithm FIND modified to deal with a sample
S as described in Lemma 12 with the parameters m and
h as above. Let
Call a decision tree T 0 in T m;h;n bad if err(T
For any fixed bad decision tree T 0 ,
outputs
m;h;n and has least error over sample S]
2Here, the last inequality follows from Chernoff bounds
applied to the number of errors in S of the trees T 0 and
Now the probability p that FIND outputs any bad tree T 0
in T m;h;n is certainly at most jT m;h;n j 2e r 2
8 . The number
of binary trees on at most m nodes is at most 2 4 m and so
the number of decision trees of at most m nodes is at most
which also is an upper bound on T m;h;n . Con-
sequently, for our choice of r in the proposition (and after
a little bit of arithmetic), the probability p turns out to be at
most -.
Conclusions
Given a sat-countable representation of a boolean function or
a uniformly distributed sample of evaluations of a boolean
function, this paper presents a quasi-polynomial algorithm
for computing a decision tree of smallest size that approximates
this function. Is it possible to achieve this in polynomial
time? Failing this, is it possible to obtain a decision
tree whose size is within a polynomial factor of the smallest
approximating decision tree in polynomial time?
Finding a decision tree of smallest size equivalent to a
given one is NP-hard [ZB98]. This opens the question of
whether at least a polynomial approximation of the smallest
equivalent decision tree is possible in polynomial time. The
ideas in this paper do not seem enough to answer this ques-
tion, but there is some hope that combining these ideas with
the results of Ehrenfeucht and Haussler [EH89] will work.
As a matter of fact, their results can already be used to give
a quasi-polynomial approximation to the smallest decision
tree equivalent to any projection-closed representation which
allows testing for tautology and satisfiability in polynomial
time in quasi-polynomial time. This is done in the following
way.
We consider the sample S in the Ehrenfeucht and Haussler
algorithm to be all 2 n assignments. However, we avoid
using time polynomial in the sample size, by noting that the
operations on the sample in the algorithm consist only of:
1. Checking if all assignments in S evaluate to either 0 or
1, and
2. Computing a new sample S 0 obtained by projecting
given variable to 0 or 1.
Doing these operations in time polynomial in the given representation
converts their algorithm into one whose complexity
has an added factor of the form O(n O(r) ), where r is the
smallest rank of any equivalent decision tree; since r cannot
exceed O(log m), where m is the size of the smallest equivalent
decision tree, we get the desired quasi-polynomial approximation
Finally, can the ideas of this paper be combined with
those of Ehrenfeucht and Haussler to properly learn decision
trees under arbitrary distributions with or without membership
queries?
6
Acknowledgment
We thank the anonymous referee who suggested the sharper
bound on T m;h;n which led to an improvement in the sample
complexity in Theorem 13.
--R
Occam's Razor.
Equivalence of Free Boolean Graphs can be Decided Probabilistically in Polynomial Time.
Exact Learning Boolean Functions via the Monotone Theory.
Simple Learning Algorithms for Decision Trees and Multi-variate Polynomials
Learning Decision Trees from Random Examples.
Computers and Intractability
Exact Learning when Irrelevant Variables Abound.
Constructing Optimal Binary Decision Trees is NP-complete
On boolean decision trees with faulty nodes.
Induction of Decision Trees.
Learning Decision Tree Classi- fiers
On Some Central Problems in Computational Complexity.
Finding Small Equivalent Decision Trees is Hard.
--TR
Occam''s razor
Learning decision trees from random examples needed for learning
Exact learning Boolean functions via the monotone theory
Learning decision tree classifiers
Partial Occam''s razor and its applications
Exact learning when irrelevant variables abound
Computers and Intractability
Induction of Decision Trees
Simple learning algorithms for decision trees and multivariate polynomials
On some central problems in computational complexity. | representations of Boolean functions;learning theory;algorithms;decision trees |
566251 | On two-sided infinite fixed points of morphisms. | Let &Sgr; be a finite alphabet, and let h:&Sgr;*&Sgr; be a morphism. Finite and infinite fixed points of morphismsi.e., those words w such that h(w)=wplay an important role in formal language theory. Head characterized the finite fixed points of h, and later, Head and Lando characterized the one-sided infinite fixed points of h. Our paper has two main results. First, we complete the characterization of fixed points of morphisms by describing all two-sided infinite fixed points of h, for both the "pointed" and "unpointed" cases. Second, we completely characterize the solutions to the equation h(xy)=yx in finite words. | called the Thue-Morse infinite word, is the unique one-sided infinite fixed point of which
starts with 0. In fact, nearly every explicit construction of an infinite word avoiding certain
patterns involves the fixed point of a morphism; for example, see [8, 15, 24, 20]. One-sided
infinite fixed points of uniform morphisms also play a crucial role in the theory of automatic
sequences; see, for example, [1].
Because of their importance in formal languages, it is of great interest to characterize
all the fixed points, both finite and infinite, of a morphism h. This problem was first
studied by Head [9], who characterized the finite fixed points of h. Later, Head and Lando
[10] characterized the one-sided infinite fixed points of h. (For different proofs of these
characterizations, see Hamm and Shallit [7].) In this paper we complete the description
of all fixed points of morphisms by characterizing the two-sided infinite fixed points of h.
Related work was done by Lando [14]. Two-sided infinite words (sometimes called bi-infinite
words or bi-infinite sequences) play an important role in symbolic dynamics [16], and have
also been studied in automata theory [18, 19], cellular automata [12], and the theory of codes
[22, 5].
We first introduce some notation, some of which is standard and can be found in [11]. For single letters, that is, elements of Σ, we use the lower case letters a, b, c, d. For finite words, we use the lower case letters t, u, v, w, x, z. For infinite words, we use bold-face letters t, u, v, w, x, y, z. We let ε denote the empty word. If w ∈ Σ*, then by |w| we mean the length of, or number of symbols in, w. If S is a set, then by Card S we mean the number of elements of S. We say x is a subword of y ∈ Σ* if there exist words w, z ∈ Σ* such that y = wxz.
If h is a morphism, then we let h^j denote the j-fold composition of h with itself. If there exists an integer j ≥ 1 such that h^j(a) = ε, then the letter a is said to be mortal; otherwise a is immortal. The set of mortal letters associated with a morphism h is denoted by M_h. The mortality exponent of a morphism h is defined to be the least integer t ≥ 0 such that h^t(a) = ε for all a ∈ M_h. We write the mortality exponent as exp(h) = t. It is easy to prove that exp(h) ≤ Card M_h.
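As a small illustration of these definitions, the following Python sketch (ours) computes the set of mortal letters and the mortality exponent of a morphism given as a dictionary; the names and input format are assumptions made for the example.

```python
# A sketch (ours) of M_h and exp(h) for a morphism h given as a dict from
# letters to image words.

def mortal_letters(h):
    """Return M_h, the set of letters a with h^j(a) empty for some j >= 1."""
    mortal = set()
    changed = True
    while changed:
        changed = False
        for a, img in h.items():
            if a not in mortal and all(c in mortal for c in img):
                mortal.add(a)
                changed = True
    return mortal

def mortality_exponent(h):
    """The least t >= 0 with h^t(a) empty for every mortal letter a."""
    words = {a: a for a in mortal_letters(h)}
    t = 0
    while any(words.values()):
        words = {a: "".join(h[c] for c in w) for a, w in words.items()}
        t += 1
    return t

# Example: h(a) = ab, h(b) = c, h(c) = empty, so M_h = {b, c} and exp(h) = 2.
h = {"a": "ab", "b": "c", "c": ""}
print(mortal_letters(h), mortality_exponent(h))
```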
We let Σ^ω denote the set of all one-sided right-infinite words over the alphabet Σ. Most of the definitions above extend to Σ^ω in the obvious way. For example, if w = c1c2c3···, then h(w) = h(c1)h(c2)h(c3)···. If L is a language, then we define L^ω to be the set of right-infinite words formed by concatenating infinitely many words from L.
Perhaps slightly less obviously, we can also define a limiting word h^ω(a) for a letter a, provided h(a) = wax with w ∈ M_h*. In this case, there exists t ≥ 0 such that h^t(w) = ε. Then we define h^ω(a) to be the limit of the words h^n(a) as n → ∞, which is infinite if and only if x ∉ M_h*. Note that the factorization of h(a) as wax, with w ∈ M_h* and x ∉ M_h*, if it exists, is unique.
In a similar way, we let ωΣ denote the set of all left-infinite words, which are of the form ···c3c2c1. If L is a language, then we define ωL to be the set of left-infinite words formed by concatenating infinitely many words from L. If h(a) = wax with w ∉ M_h* and x ∈ M_h*, then we define the left-infinite limiting word generated by a in the analogous way. Again, if the factorization of h(a) as wax exists, with w ∉ M_h* and x ∈ M_h*, then it is unique.
We can convert left-infinite to right-infinite words (and vice versa) using the reverse operation, which is denoted w^R. For example, the reverse of the left-infinite word ···c3c2c1 is the right-infinite word c1c2c3···.
We now turn to the notation for two-sided infinite words. These have been much less studied in the literature than one-sided words, and the notation has not been standardized. Some authors consider two two-sided infinite words to be identical if they agree after applying a finite shift to one of the words. Other authors do not. (This distinction is sometimes called "unpointed" vs. "pointed" [2, 17].) In this paper, we consider both the pointed and unpointed versions of the equation h(w) = w. As it turns out, the "pointed" version of this equation is quite easy to solve, based on known results, while the "unpointed" case is significantly more difficult. The latter is our first main result, which appears as Theorem 5.
We let Σ^Z denote the set of all two-sided infinite words over the alphabet Σ, which are of the form ···c_{-2}c_{-1}c_0.c_1c_2···. In displaying an infinite word as a concatenation of words, we use a decimal point to the left of the character c_1, to indicate how the word is indexed. Of course, the decimal point is not part of the word itself. We define the shift σ(w) to be the two-sided infinite word obtained by shifting w to the left one position, so that the symbol in position i of σ(w) is the symbol in position i+1 of w. Similarly, for k ∈ Z we define σ^k(w) to be w shifted to the left by k positions.
If w, x are two two-sided infinite words, and there exists an integer k such that σ^k(w) = x, we call w and x conjugates, and we write w ∼ x. It is easy to see that ∼ is an equivalence relation. We extend this notation to languages as follows: if L is a set of two-sided infinite words, then by w ∼ L we mean there exists x ∈ L such that w ∼ x.
If w is a nonempty finite word, then by w^Z we mean the two-sided infinite word ···www.www···. Using concatenation, we can join a left-infinite word with a right-infinite word to form a new two-sided infinite word. If L ⊆ Σ* is a set of words, then we define L^Z to be the set of two-sided infinite words obtained by concatenating infinitely many words of L in both directions. If h is a morphism, then we define h on a two-sided infinite word by applying it to every symbol. Finally, if h(a) = wax with w, x ∉ M_h*, then we define h^{ω,i}(a) to be the limit of the words h^n(a), a two-sided infinite word. Note that in this case the factorization of h(a) as wax is not necessarily unique, and we use the superscript i to indicate which a is being chosen.
We can produce one-sided infinite words from two-sided infinite words by ignoring the portion to the right or left of the decimal point. Suppose w = ···c_{-1}c_0.c_1c_2···. We define L(w) = ···c_{-1}c_0, a left-infinite word, and R(w) = c_1c_2···, a right-infinite word.
2 Finite and one-sided infinite fixed points
In this section we recall the results of Head [9] and Head and Lando [10]. We assume h : Σ* → Σ* is a morphism that is extended to the domains Σ^ω and ωΣ in the manner discussed above.
such that xay and xy 2 M
and
Note that there is at most one way to write h(a) in the form xay with xy 2 M
h .
Theorem 1 A finite word w 2 \Sigma has the property that
h .
Theorem 2 The right-infinite word w is a fixed point of h if and only if at least one of the
following two conditions holds:
(a) w
(a) for some a 2 \Sigma, and there exist x 2 M
h and y 62 M
h such that
There is also an evident analogue of Theorem 2 for left-infinite words:
Theorem 3 The left-infinite word w is a fixed point of h if and only if at least one of the
following two conditions holds:
(a)
h for some a 2 \Sigma, and there exist x 62 M
h and y 2 M
h such that
3 Two-sided infinite fixed points: the "pointed" case
We assume h : Σ* → Σ* is a morphism that is extended to the domain Σ^Z in the manner discussed above. In this section, we consider the equation h(w) = w for two-sided infinite words.
Proposition 4 The equation solution if and only if at least one of the
following conditions holds:
(a) w 2 F Z
h for some a 2 \Sigma, and there exist x 62 M
h such that
or
(c)
(a) for some a 2 \Sigma, and there exist x 2 M
h such that
there exist x; z 62 M
h , such
that xay and
Proof. Let By definition, we have
so if
We may now apply Theorem 2 (resp., Theorem 3) to R(w) (resp., L(w)). There are 2 cases to consider for each side, giving 2 × 2 = 4 cases.
Example. Let g be the Thue-Morse morphism, which maps 0 to 01 and 1 to 10, and let t = 0110100110010110··· be the one-sided Thue-Morse infinite word. Then there are exactly 4 two-sided infinite fixed points of g. All of these fall under case (d) of Proposition 4. Incidentally, all four of these words are overlap-free.
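For readers who want to experiment, the following sketch (ours) iterates the Thue-Morse morphism from the letter 0; successive iterates are prefixes of the one-sided fixed point t, which is why the prefixes stabilize.

```python
# A tiny sketch (ours) of iterating the Thue-Morse morphism g(0)=01, g(1)=10.

G = {"0": "01", "1": "10"}

def iterate(h, seed, n):
    w = seed
    for _ in range(n):
        w = "".join(h[c] for c in w)
    return w

prefix = iterate(G, "0", 5)
print(prefix)                                     # 01101001100101101001011001101001
print(iterate(G, prefix, 1).startswith(prefix))   # True: applying g extends the prefix
```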
4 Two-sided infinite fixed points: the "unpointed" case
We assume h : Σ* → Σ* is a morphism that is extended to the domain Σ^Z in the manner discussed above. In this section, we characterize the two-sided infinite fixed points of a morphism in the "unpointed" case. That is, our goal is to characterize the solutions to h(w) ∼ w. The following theorem is the first of our two main results.
Theorem 5 Let h be a morphism. Then the two-sided infinite word w satisfies the relation
only if at least one of the following conditions holds:
(a) w F Z
(b) w
h for some a 2 \Sigma, and there exist x 62 M
h and y 2 M
h such that
(c)
(a) for some a 2 \Sigma, and there exist x 2 M
h and y 62 M
h such that
(d) w
there exist x; z 62 M
h , such
that xay and
h !;i (a) for some a 2 \Sigma, and there exist x; y 62 M
h such that xay with
such that
Before we begin the proof of Theorem 5, we state and prove three useful lemmas.
Lemma 6 Suppose w, x are two two-sided infinite words with w ∼ x. Then h(w) ∼ h(x).
Proof. Since w ∼ x, there exists j such that σ^j(w) = x. Applying h, the word h(x) is obtained from h(w) by shifting over the images of the j symbols that were moved across the decimal point, so h(w) ∼ h(x).
Our second lemma concerns periodicity of infinite words. We say a two-sided infinite word w is periodic if there exist a nonempty word x and an integer k such that σ^k(w) = x^Z. The integer p = |x| is called a period of w.
Lemma 7 Suppose w is a two-sided infinite word such that there exist a one-sided right-infinite word x and infinitely many negative indices 0 > i_1 > i_2 > ··· such that, for each j ≥ 1, the right-infinite word beginning at position i_j of w equals x. Then w is periodic.
Proof. By assumption
for j 1. Hence c i j and so the right-infinite word x is periodic
of period . Since this is true for all j 1, it follows that x is periodic of period
so w is periodic of period g.
Our third lemma concerns the growth functions of iterated morphisms.
be a morphism. Then
(a) there exist integers
(b) there exists an integer M depending only on Card \Sigma such that for all
we have j M .
We note that part (a) was asserted without proof by Cobham [4]. However, the proof easily follows from a result of Dickson [6] that N^k contains no infinite antichains under the usual partial ordering; see also König [13]. For completeness, we give the following proof, suggested by S. Astels (personal communication).
Proof. (a) Suppose g. First, choose i 1;1 to be the least index
such that jh i 1;1 (a 1 successively choose i
g.
choose i 2;1 to be the least index i
successively choose i
1. g.
Note that S 2 ' S 1 .
Continuing in this fashion, we produce an infinite sequence of indices
that jh i r;n (a j )j jh i r;n+1 (a j )j for r and all n 1. We can then choose
(b) We omit the proof, although we observe that we can take
Now we can prove Theorem 5.
Proof. ((=): Suppose case (a) holds, and w F Z
h . Then there exists x 2 F Z
h with
h , we can write
It follows that applying
Lemma 6, we conclude that h(w)
Next, suppose case (b) holds, and w
h . Then w x for some x of the form
xay with x 62 M
h and y 2 M
h . Then we have
and by Lemma 6, we conclude that h(w)
Cases (c), (d), and (e) are similar to case (b).
Finally, if case (f) holds, then
and so
there exists k such that
Let
Then it is not hard to see that
Figure 1. Note that
c (1)
s
(0)+1
s
c .
s
c
c s (-1)+1
h(
Figure
1: Interpretation of the function s
We define the set C as follows: Our argument is divided into
two major cases, depending on whether or not C is empty.
Case 1: C 6= ;. In this case, there exists j such that j. Now consider the pointed
word We have x w and by Eq. (4) we have
Then, by Proposition 4, one of cases (a)-(d) must hold.
Case 2: There are several subcases to consider.
Case 2a: There exist integers
Among all pairs (i; choose one with there exists
an integer k with is a pair satisfying (5) with smaller
difference, while if k ! j, then (k; j) is a pair satisfying (5) with smaller difference. Hence
k. But this is impossible by our assumption. It follows that
but 1). Hence this case cannot occur.
Case 2b: There exists an integer r such that s(i) ! i for all
all i r. Then h(c r which by the inequalities contains c as a
subword. Therefore, letting a = c r , it follows that
where is a left-infinite word,
are finite words, and is a right-infinite word. Furthermore, we have
Now the equation implies that h(y) is a prefix of v, and by an easy induction
we have h(y)h 2 (y)h 3 (y) \Delta \Delta \Delta is a prefix of v. Suppose this prefix is finite. Then y
h , and
so
a contradiction, since we have assumed It follows
that z := h(y)h 2 (y)h 3 (y) \Delta \Delta \Delta is right-infinite and hence y 62 M
h .
By exactly the same reasoning, we find that \Delta \Delta \Delta h 3 (x) h 2 (x) h(x) is a left-infinite suffix of
u. We conclude that w
h !;i (a), and hence case (e) holds.
Case
Now consider the following factorization of certain conjugates of w, as follows: for i 0,
we have w x i
word), and z right-infinite word). Note that
assumption, so i s(i \Gamma 1); hence y i is nonempty. Evidently we have
Now the equation h(y i z implies that h(y i ) is a prefix of z i . Now an easy induction,
as in Case 2b, shows that v := h(y is a prefix of z i . If v were finite, then
we would have y
h , and so
Hence v is right-infinite, and so y
h . There are now two further subcases to consider:
Case 2ci: Suppose sup i0 It then follows that jy i j d. Hence
there is a finite word u such that y infinitely many indices i 0. From the above
argument we see that the right-infinite word h(u)h 2 (u)h 3 (u) \Delta \Delta \Delta is a suffix of w, beginning
at position s(i for infinitely many indices i 0. We now use Lemma 7 to conclude
that w is periodic.
Thus we can write
loss of generality, we may assume p is minimal
We claim p. For if not we must have and then since
h(w) w, we would have w is periodic with periods p and q, hence periodic of period
q). But since p was minimal we must have Hence q 2p. Now let
must have l ? 0. Then
It now follows that
for all integers i. Now multiplying
by \Gammal, we get \Gammalp ? l \Gammal in Eq. (7), and we have
a contradiction, since s(i) ? i for all i. It follows that
There exists k such that h(c 1 . Using the division theorem,
We have
By above we know jvj 1, so xy 6= ffl. Suppose
h . It
follows that w 2 F Z
h . A similar argument applies if
a contradiction. Thus x; y 6= ffl, and case (f) holds.
Case 2cii: sup i0
z
denotes the j-fold composition of the function
s with itself. First we prove the following technical lemma.
Lemma 9 For all integers r 1 there exists an integer n 0 such that B j (n) ? r for
t.
Proof. By induction on t. For the result follows since
sup
Now assume the result is true for t; we prove it for t + 1. Define m := max a2\Sigma jh(a)j. By
induction there exists an integer n 1 such that t. Then, by
the definition of m there exist an integer m, and an integer n 3 such
that s(n 3
Now
Similarly, we have
for all j 0. By the same reasoning, we have
for all j 0. Thus we find
(by Eq. (8))
(by induction and Eq. (9))
Similarly, for 2
r:
It thus follows that we can take This completes the proof of Lemma 9.
Now let M be the integer specified in Lemma 8, and define r := sup 1iM B i (0). By
Lemma 9 there exists an integer n 0 such that B
We have
It follows that
. But this contradicts Lemma 8. This contradiction shows that this case
cannot occur.
Case 2d: This case is the mirror image of Case 2c 1 , and the proof
is identical. The proof of Theorem 5 is complete.
5 Some examples
In this section we consider some examples of Theorem 5.
Example 1. Consider the morphism f defined by a
Then
This falls under case (f) of Theorem 5.
Example 2. Consider the morphism ' defined by 0
we have '(w) w. This falls under case (e) of Theorem 5. Incidentally, c i equals the sum
of the digits, modulo 3, in the balanced ternary representation of i.
6 The equation h(xy) = yx in finite words
It is not difficult to see that it is decidable whether any of conditions (a)-(e) of Theorem 5 hold for a given morphism h. However, this is somewhat less obvious for condition (f) of Theorem 5, which demands that the equation h(xy) = yx possess a nontrivial solution. We conclude this paper by discussing the solvability of this equation and, in our second main result, we give a characterization of the solution set.
To do so it is useful to extend the notation ∼, previously used for two-sided infinite words, to finite words. We say w ∼ z for w, z ∈ Σ* if w is a cyclic shift of z, i.e., if there exist u, v ∈ Σ* such that w = uv and z = vu. It is now easy to verify that ∼ is an equivalence relation. Furthermore, if w ∼ z and h is a morphism, then h(w) ∼ h(z). Thus condition (f) can be restated as h(z) ∼ z. The following theorem shows that the solvability of the equation h(z) ∼ z is decidable.
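For any concrete finite word z the condition h(z) ∼ z can be tested directly; the following small Python sketch (ours, not from the paper) does the brute-force check by comparing h(z) against the cyclic shifts of z. The example morphism is a toy, not one discussed in the paper.

```python
# A brute-force sketch (ours) of testing condition (f) on a concrete word.

def image(h, w):
    return "".join(h[c] for c in w)

def conjugate(w, z):
    """True iff w = uv and z = vu for some u, v."""
    return len(w) == len(z) and w in z + z

def cyclically_shifted_by(h, z):
    return conjugate(image(h, z), z)

h = {"a": "b", "b": "c", "c": "a"}          # toy morphism
print(cyclically_shifted_by(h, "abc"))      # True: h(abc) = bca, a cyclic shift of abc
```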
1 Note that s(i) ? i for all i implies that hence Case 2d
really is the mirror image of Case 2c.
2 By nontrivial we mean xy ≠ ε.
Theorem 10 Let h be a morphism h : Σ* → Σ*. Then h(z) ∼ z possesses a solution z ≠ ε if and only if F_{h^d} is nonempty for some 1 ≤ d ≤ Card Σ.
Proof. (=: Suppose F h d is nonempty for some d, say x d. Then by definition of F h d,
xy. Then
and so there exist 0
such that h i In other words, h i (z) is a finite fixed point of h j \Gammai . Hence F h j\Gammai
is nonempty. This implies A h d is nonempty for some d with 1 d Card \Sigma. Thus F h d is
nonempty.
Remarks.
1. Note that Theorem 10 does not characterize all the finite solutions of h(z) ∼ z; it simply gives a necessary and sufficient condition for solutions to exist.
2. As we have seen in Theorem 1, the set of finite solutions to h(z) = z is finitely generated, in that the solution set can be written as T* for some finite set T. However, the set of solutions to h(z) ∼ z need not even be context-free. For consider the morphism
defined by h(a)
If T were context-free, then so would be T " a
c
. But
c
which is not context-free.
We finish with a discussion of the set T of words z for which h(z) ∼ z. From the proof of Theorem 10, there exist i < j such that h^i(z) is a fixed point of h^{j-i}. Since h^i(z) ∼ z, we may restrict our attention to the set S of words that are finite fixed points of some power of h; then T is the set of all cyclic permutations of words in S.
To describe S we introduce an auxiliary morphism h̃ : Σ̃* → Σ̃*, where a ∈ Σ̃ if and only if the following three conditions hold:
(1) a is an immortal letter of h;
(2) h^i(a) contains exactly one immortal letter for all i ≥ 1; and
(3) h^i(a) contains a for some i ≥ 1.
We define the morphism h̃ by h̃(a) = a', where a' is the unique immortal letter in h(a).
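The three conditions above are effectively checkable; the following Python sketch (ours) extracts Σ̃ and h̃ from a morphism given as a dictionary. The names and input format are assumptions made for the example.

```python
# A sketch (ours) of the auxiliary alphabet Sigma~ and morphism h~.

def aux_morphism(h):
    # mortal letters of h (cf. the earlier sketch)
    mortal = set()
    changed = True
    while changed:
        changed = False
        for a, img in h.items():
            if a not in mortal and all(c in mortal for c in img):
                mortal.add(a)
                changed = True
    immortal = set(h) - mortal

    def immortals_of_image(a):
        return [c for c in h[a] if c in immortal]

    sigma_tilde = set()
    for a in immortal:                        # condition (1)
        # images of mortal letters contain only mortal letters, so it suffices
        # to follow the chain of unique immortal successors of a
        seen, b, ok, returns_to_a = set(), a, True, False
        while b not in seen:
            seen.add(b)
            succ = immortals_of_image(b)
            if len(succ) != 1:                # condition (2) fails on the chain
                ok = False
                break
            b = succ[0]
            returns_to_a = returns_to_a or b == a   # condition (3)
        if ok and returns_to_a:
            sigma_tilde.add(a)
    h_tilde = {a: immortals_of_image(a)[0] for a in sigma_tilde}
    return sigma_tilde, h_tilde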
The relation of ~ h to S is as follows. If z 2 S, then z 2 F
Hence there exists
an integer p such that is an immortal letter
for 1 j p. It follows easily that a j 2 ~
\Sigma. Hence h cyclically shifts z iff ~ h cyclically shifts
~
. (The words x j and y j are uniquely specified by i and a j .)
Theorem 11 We have
Card
Proof. Suppose a 2 ~
\Sigma. Define a j , x j and y j by a
where a j+1 2 ~
\Sigma. It is clear that there is a t Card ~
\Sigma such that if j j k (mod t) then
By the definition of F h i , all words in F h i
are of the form
a e i
for some a = a 0 2 ~
\Sigma. Since there are only finitely many a j , x j and y j and e i Card ~
\Sigma for
all i 1, the result follows.
Therefore, we now concentrate on the set ~
T of words ~ z that are cyclically shifted by ~ h.
Suppose ~
g. Since ~ h acts as a permutation P on ~
there exists a unique
factorization of P into disjoint cycles. Suppose appearing in the
factorization of P , and let jcj denote the length t of the cycle c. Define the language L(c) as
follows:
For example, if . Note that the definition
of L(c) is independent of the particular representation chosen for the cycle.
Now define the finite collection R_0 of regular languages as follows: R_0 consists of the languages L(c^v), where c is a cycle of P, 1 ≤ v ≤ |c|, and gcd(v, |c|) = 1.
We now define a finite collection R of regular languages. Each language in R is the union of some languages of R_0. The union is defined as follows. Each language L(c^v) in R_0 is associated with a pair (t, v) where t = |c| and v is an integer relatively prime to t. Then the languages L(c_1^{v_1}), ..., L(c_m^{v_m}) are each a subset of the same language of R if and only if the system of congruences
x ≡ v_i (mod t_i), 1 ≤ i ≤ m,  (10)
possesses an integer solution x, where t_i = |c_i|. Note that a language in R_0 may be a subset of several languages of R.
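Whether the system of congruences (10) has an integer solution can be tested pairwise; the sketch below (ours) uses the generalized Chinese remainder criterion. The function name and input format are assumptions.

```python
# A sketch (ours) of the solvability test for a system x = v_i (mod t_i):
# a solution exists iff every pair of congruences agrees modulo the gcd of
# its moduli (generalized Chinese remainder theorem).

from math import gcd

def congruences_solvable(pairs):
    """pairs is a list of (v_i, t_i); True iff some x satisfies all of them."""
    return all((v1 - v2) % gcd(t1, t2) == 0
               for i, (v1, t1) in enumerate(pairs)
               for v2, t2 in pairs[i + 1:])

print(congruences_solvable([(1, 4), (3, 6)]))   # True, e.g. x = 9
print(congruences_solvable([(1, 4), (2, 6)]))   # False: x would be both odd and even
```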
We say a word w is the perfect shuffle of words w_1, ..., w_j if the first j symbols of w are the first symbols of w_1, ..., w_j in that order, the second j symbols of w are the second symbols of w_1, ..., w_j in that order, and so on.
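The following small sketch (ours) implements the perfect shuffle just described for equal-length words; the name `shuffle` is illustrative and not the paper's notation.

```python
# A sketch (ours) of the perfect shuffle of equal-length words.

def shuffle(*words):
    assert len({len(w) for w in words}) == 1, "words must have equal length"
    return "".join(w[i] for i in range(len(words[0])) for w in words)

print(shuffle("ace", "bdf"))          # abcdef
print(shuffle("111", "222", "333"))   # 123123123
```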
The following theorem characterizes the set T̃, and is our second main result.
Theorem 12 Let z̃ ∈ Σ̃*, and let h̃ permute Σ̃. Then h̃(z̃) ∼ z̃ if and only if z̃ is the perfect shuffle of some finite number of words contained in some single language of R.
Proof. Let ~ h permute ~
\Sigma, with induced permutation P . Let ~
z is the perfect shuffle of some finite number of words contained in a single
language of R. For simplicity of notation we consider the case where ~
z is the perfect shuffle
of two such words; the general case is similar and is left to the reader.
Thus assume ~
w). Further, assume w 2 L(c v ) for some cycle c and integer v
relatively prime to
relatively prime to
t. (Here the indices are assumed to be taken modulo t.)
d s+1 for
t. (Here the indices are assumed to be taken modulo "
t.)
By hypothesis there exists an integer x such that vx j 1 (mod
t).
A simple calculation shows that we may assume 0 x
~
(indices of a taken mod n), and so ~ h(~z) ~
z.
Suppose ~ h(~z) ~
z. Then there exists an integer y such that ~ h(b
the indices are taken modulo n. Define
Then, considering its action on b 0 the morphism ~ h induces a permutation of the
indices n) which, by elementary group theory, factors
into g disjoint cycles, each of length m.
define the words
It is clear that ~
and so it follows that ~ h cyclically shifts each w i by y=g.
Now each k there is a unique solution t (mod m) of the congruence
y
Multiplying through by g, we find
has a solution t, so
has a solution t. But ~ h t (b so each symbol b kg+i of w i is in the orbit of ~
h on z i . It
follows that each symbol of w i is contained in the same cycle c i of P . Suppose c i has length
is the least positive integer with this property.
However, we also have ~ h m (b
there is a solution v to the congruence v \Delta y
m). Then
n). Using the division theorem, write
g. Since gcd(v; m) = 1, and t i j m, we must have
Now
Then for we have
From ~ h(b
and so y
Thus the system of equations (10) possesses a solution
This completes the proof.
--R
Automates finis en th'eorie des nombres.
Ensembles reconnaissables de mots bi-infinis
Axel Thue's Papers on Repetitions in Words: a Translation.
On the Hartmanis-Stearns problem for a class of tag machines
Finitary codes for biinfinite words.
Finiteness of the odd perfect and primitive abundant numbers with n distinct prime factors.
Characterization of finite and one-sided infinite fixed points of morphisms on free monoids
On sequences which contain no repetitions.
Fixed languages and the adult languages of 0L schemes.
Fixed and stationary
Introduction to Automata Theory
Recursive cellular automata invariant sets.
Theorie der endlichen und unendlichen Graphen: kombinatorische Topologie der Streckenkomplexe.
Periodicity and ultimate periodicity of D0L systems.
A problem on strings of beads.
An Introduction to Symbolic Dynamics and Coding.
The limit set of recognizable substitution systems.
Ensembles reconnaissables de mots biinfinis.
Ensembles reconnaissables de mots biinfinis.
An inequality for non-negative matrices
--TR
Periodicity and ultimate periodicity of DOL systems
An introduction to symbolic dynamics and coding
Introduction To Automata Theory, Languages, And Computation
Ensembles reconnaissables de mots bi-infinis: limite et déterminisme
The Limit Set of Recognizable Substitution Systems
Ensembles reconnaissables de mots biinfinis
--CTR
F. Lev , G. Richomme, On a conjecture about finite fixed points of morphisms, Theoretical Computer Science, v.339 n.1, p.103-128, 11 June 2005 | fixed point;combinatorics on words;morphism;infinite words |
566252 | Weak bisimilarity between finite-state systems and BPA or normed BPP is decidable in polynomial time. | We prove that weak bisimilarity is decidable in polynomial time between finite-state systems and several classes of infinite-state systems: context-free processes and normed basic parallel processes (normed BPP). To the best of our knowledge, these are the first polynomial algorithms for weak bisimilarity problems involving infinite-state systems. | Introduction
Recently, a lot of attention has been devoted to the study of decidability
and complexity of verification problems for infinite-state systems [33,12,5].
We consider the problem of weak bisimilarity between certain infinite-state
processes and finite-state ones. The motivation is that the intended behavior
of a process is often easy to specify (by a finite-state system), but a 'real' implementation
can contain components which are essentially infinite-state (e.g.,
counters, buffers, recursion, creation of new parallel subprocesses). The aim is
On leave at the Institute for Informatics, Technical University Munich. Supported
by the Alexander von Humboldt Foundation and by the Grant Agency of the Czech
Republic, grant No. 201/00/0400.
Supported by DAAD Post-Doc grant D/98/28804.
to check if the finite-state specification and the infinite-state implementation
are semantically equivalent, i.e., weakly bisimilar.
We concentrate on the classes of infinite-state processes definable by the syntax
of BPA (Basic Process Algebra) and normed BPP (Basic Parallel Pro-
cesses) systems. BPA processes (also known as context-free processes) can be
seen as simple sequential programs (due to the binary operator of sequential
composition). They have recently been used to solve problems of data-flow
analysis in optimizing compilers [13]. BPP [8] model simple parallel systems
(due to the binary operator of parallel composition). They are equivalent to
communication-free nets, the subclass of Petri nets [36] where every transition
has exactly one input-place [11]. A process is normed iff at every reachable
state it can terminate via a finite sequence of computational steps.
Although the syntax of BPA and BPP allows to define simple infinite-state
systems, from the practical point of view it is also important that they can give
very compact definitions of finite-state processes (i.e., the size of a BPA/BPP
definition of a finite-state process F can be exponentially smaller than the
number of states of F -see the next section). As our verification algorithms
are polynomial in the size of the BPA/BPP definition, we can (potentially)
verify very large processes. Thus, our results can be also seen as a way how
to overcome the well-known problem of state-space explosion.
The state of the art. Baeten, Bergstra, and Klop [1] proved that strong
bisimilarity [35] is decidable for normed BPA processes. Simpler proofs have
been given later in [20,14], and there is even a polynomial-time algorithm
[17]. The decidability result has later been extended to the class of all (not
necessarily normed) BPA processes in [10], but the best known algorithm is
doubly exponential [4]. Decidability of strong bisimilarity for BPP processes
has been established in [9], but the associated complexity analysis does not
yield an elementary upper bound (although some deeper examination might in
principle show that the algorithm is elementary). Strong bisimilarity of BPP
has been shown to be co-NP-hard in [28]. However, there is a polynomial-time
algorithm for the subclass of normed BPP [18]. Strong bisimilarity between
normed BPA and normed BPP is also decidable [7]. This result even holds
for parallel compositions of normed BPA and normed BPP processes [22].
Recently, this has even been generalized to the class of all normed PA-processes
[16].
For weak bisimilarity, much less is known. Semidecidability of weak bisimilarity
for BPP has been shown in [11]. In [15] it is shown that weak bisimilarity is
decidable for those BPA and BPP processes which are 'totally normed' (a process
is totally normed if it can terminate at any moment via a finite sequence
of computational steps, but at least one of those steps must be 'visible', i.e.,
non-internal). Decidability of weak bisimilarity for general BPA and BPP is
open; those problems might be decidable, but they are surely intractable (assuming P ≠ NP). Weak bisimilarity of (normed) BPA is PSPACE-hard [38].
An NP lower bound for weak bisimilarity of BPP has been shown by Stříbrná in [38]. This result has been improved to Π_2^p-hardness by Mayr [28] and very
recently to PSPACE-hardness by Srba in [37]. Moreover, the PSPACE lower
bound for weak bisimilarity of BPP in [37] holds even for normed BPP.
The situation is dramatically different if we consider weak bisimilarity between
certain infinite-state processes and finite-state ones. This study is motivated
by the fact that the intended behavior of a process is often easy to specify
(by a finite-state system), but a 'real' implementation can contain components
which are infinite-state (e.g., counters, buffers, recursion, creation of
new parallel subprocesses). It has been shown in [26] that weak bisimilarity
between BPP and finite-state processes is decidable. A more general result
has recently been obtained in [21], where it is shown that many bisimulation-
like equivalences (including the strong and weak ones) are decidable between
PAD and finite-state processes. The class PAD [31,30] strictly subsumes not
only BPA and BPP, but also PA [2] and pushdown processes. The result in
[21] is obtained by a general reduction to the model-checking problem for the
simple branching-time temporal logic EF, which is decidable for PAD [30].
As the model-checking problem for EF is hard (for example, it is known to
be PSPACE-complete for BPP [26] and PSPACE-complete for BPA [39,27]),
this does not yield an efficient algorithm.
Our contribution. We show that weak (and hence also strong) bisimilarity
is decidable in polynomial time between BPA and finite-state processes, and
between normed BPP and finite-state processes. To the best of our knowl-
edge, these are the first polynomial algorithms for weak bisimilarity with
infinite-state systems. Moreover, the algorithm for BPA is the first example
of an efficient decision procedure for a class of unnormed infinite-state systems
(the polynomial algorithms for strong bisimilarity of [17,18] only work
for the normed subclasses of BPA and BPP, respectively). Due to the afore-mentioned
hardness results for the 'symmetric case' (when we compare two
BPA or two (normed) BPP processes) we know that our results cannot be
extended in this direction. A recent work [29] shows that strong bisimilarity
between pushdown processes (a proper superclass of BPA) and finite-state
ones is already PSPACE-hard. Furthermore, weak bisimilarity remains computationally
intractable (DP-hard) even between processes of one-counter nets
and finite-state processes [23] (one-counter nets are computationally equivalent
to the subclass of Petri nets with at most one unbounded place and can
be thus also seen as very simple pushdown automata). Hence, our result for
BPA is rather tight. The question whether the result for normed BPP can
be extended to the class of all (not necessarily normed) BPP processes is left
open. It should also be noted that simulation equivalence with a finite-state
process is co-NP-hard for BPA/BPP processes [25], EXPTIME-complete for
pushdown processes [24], but polynomial for one-counter nets [24].
The basic scheme of our constructions for BPA and normed BPP processes is
the same. The main idea is that weak bisimilarity between BPA (or normed
BPP) processes and finite-state ones can be generated from a finite base of
'small' size and that certain infinite subsets of BPA and BPP state-space can
be 'symbolically' described by finite automata and context-free grammars,
respectively. A more detailed intuition is given in Section 3. An interesting
point about this construction is that it works although weak bisimulation is
not a congruence w.r.t. sequential composition, but only a left congruence.
In Section 4, we propose a natural refinement of weak bisimilarity called
termination-sensitive bisimilarity which is a congruence and which is also
decidable between BPA and finite-state processes in polynomial time. The result
demonstrates that the technique which has been used for weak bisimilarity
actually has a wider applicability-it can be adapted to many 'bisimulation-
like' equivalences. Finally, we should note that our aim is just to show that the
mentioned problems are in although we do compute the degrees of bounding
polynomials explicitly, our analysis is quite simple and rough. Moreover,
both presented algorithms could be easily improved by employing standard
techniques. See the final section for further comments.
We use process rewrite systems [31] as a formal model for processes. Let Act = {a, b, c, ...} and Const = {X, Y, Z, ...} be disjoint countably infinite sets of actions and process constants, respectively. The class of process expressions E is defined by E ::= ε | X | E.E | E‖E, where X ∈ Const and ε is a special constant that denotes the empty expression. Intuitively, '.' is sequential composition and '‖' is parallel composition. We do not distinguish between expressions related by structural congruence, which is given by the following laws: '.' and '‖' are associative, '‖' is commutative, and 'ε' is a unit for '.' and '‖'.
A process rewrite system [31] is specified by a finite set of rules Δ which have the form E -a-> F, where E, F are process expressions and a ∈ Act. Const(Δ) and Act(Δ) denote the sets of process constants and actions which are used in the rules of Δ, respectively (note that these sets are finite). Each process rewrite system Δ defines a unique transition system where states are process expressions over Const(Δ), Act(Δ) is the set of labels, and transitions are determined by Δ and the following inference rules (remember that '‖' is commutative): if (E -a-> F) ∈ Δ, then E -a-> F; if E -a-> E', then E.F -a-> E'.F; and if E -a-> E', then E‖F -a-> E'‖F.
We extend the notation E -a-> F to elements w of Act*, writing E -w-> F, in the standard way. F is reachable from E if E -w-> F for some w ∈ Act*.
Sequential and parallel expressions are those process expressions which do not contain the '‖' and the '.' operator, respectively. Finite-state, BPA, and BPP systems are subclasses of process rewrite systems obtained by putting certain restrictions on the form of the rules. Finite-state, BPA, and BPP allow only a single constant on the left-hand side of rules, and a single constant, sequential expression, and parallel expression on the right-hand side, respectively. The set of states of a transition system which is generated by a finite-state, BPA, or BPP process Δ is restricted to Const(Δ), the set of all sequential expressions over Const(Δ), or the set of all parallel expressions over Const(Δ), respectively.
Example 1 Let Δ = { ... } be a process rewrite system. We see that Δ is a BPA system; a part of the transition system associated to Δ which is reachable from Z looks as follows:
[Figure: states Z, I.Z, I.I.Z, I.I.I.Z, ...; the legible transition labels are z and d.]
If we replace each occurrence of the '.' operator with the '‖' operator, we obtain a BPP system which generates the following transition system (again, we only draw the part reachable from Z):
[Figure: states Z, Z‖I, Z‖I‖I, Z‖I‖I‖I, ...; the legible transition labels are z and d.]
A process is normed iff at every reachable state it can (successfully) terminate via a finite sequence of computational steps. For a BPA or BPP process, this is equivalent to the condition that for each constant X ∈ Const(Δ) of its underlying system Δ there is some w ∈ Act* such that X -w-> ε. We call constants X with this property normed.
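A sketch (ours) of how the normed constants of a BPA or BPP system can be computed by a simple fixed-point iteration is given below; the rule format (lhs, action, rhs-tuple) is an assumption made for the example.

```python
# A sketch (ours) of computing the normed constants of a BPA/BPP system.

def normed_constants(delta):
    normed = set()
    changed = True
    while changed:
        changed = False
        for lhs, _action, rhs in delta:
            if lhs not in normed and all(c in normed for c in rhs):
                normed.add(lhs)
                changed = True
    return normed

# Toy system (ours, not Example 1): Z -z-> eps, Z -i-> I.Z, I -d-> eps.
delta = [("Z", "z", ()), ("Z", "i", ("I", "Z")), ("I", "d", ())]
print(normed_constants(delta))   # {'I', 'Z'}: every constant is normed
```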
The semantical equivalence we are interested in here is weak bisimilarity [32]. This relation distinguishes between 'observable' and 'internal' moves (computational steps); the internal moves are modeled by a special action which is denoted 'τ' by convention. In what follows we consider process expressions over Const(Δ) where Δ is some fixed process rewrite system.
Definition 2 The extended transition relation '=a=>' is defined as follows: E =τ=> F iff E (-τ->)* F, and for a ≠ τ, E =a=> F iff E (-τ->)* -a-> (-τ->)* F. A binary relation R over process expressions is a weak bisimulation iff whenever (E, F) ∈ R, then for each a ∈ Act:
- if E -a-> E', then there is F =a=> F' such that (E', F') ∈ R;
- if F -a-> F', then there is E =a=> E' such that (E', F') ∈ R.
Processes E, F are weakly bisimilar, written E ≈ F, iff there is a weak bisimulation relating them.
Weak bisimilarity can be approximated by the family of ≈_i relations, which are defined as follows: E ≈_0 F for all processes E, F, and E ≈_{i+1} F iff E ≈_i F and the following conditions hold:
- if E -a-> E', then there is F =a=> F' such that E' ≈_i F';
- if F -a-> F', then there is E =a=> E' such that E' ≈_i F'.
It is worth noting that ≈_i is not an equivalence for i ≥ 1, as it is not transitive.
It is possible to approximate weak bisimilarity in a different way so that the
approximations are equivalences (see [21]). However, we do not need this for
our purposes.
Let Γ be a finite-state system with n states, and f, g ∈ Const(Γ). It is easy to show that the problem whether f ≈ g is decidable in O(n^3) time. First we compute in O(n^3) time the transitive closure of the transition system w.r.t. the τ transitions and thus obtain a new system in which -a-> is the same as =a=> in the old system. Then it suffices to decide strong bisimilarity of f and g in the new system. This can be done in O(n^2 log n) time, using partition refinement techniques from [34].
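The following Python sketch (ours, a naive variant rather than the partition-refinement algorithm of [34]) illustrates the two-step procedure just described on a small finite-state system: saturate with the τ-closure, then refine a candidate relation until it stabilizes.

```python
# A naive sketch (ours) of weak bisimilarity on a finite LTS.  `lts` maps a
# state to a set of (action, successor) pairs; "tau" is the silent action.

from itertools import product

def saturate(lts):
    reach = {s: {s} for s in lts}                 # reflexive-transitive tau closure
    changed = True
    while changed:
        changed = False
        for s in lts:
            for a, t in lts[s]:
                if a == "tau" and not reach[t] <= reach[s]:
                    reach[s] |= reach[t]
                    changed = True
    weak = {s: set() for s in lts}
    for s in lts:
        for s1 in reach[s]:
            weak[s].add(("tau", s1))              # s ==tau==> s1 (zero or more steps)
            for a, t in lts[s1]:
                if a != "tau":
                    weak[s].update((a, t1) for t1 in reach[t])   # s ==a==> t1
    return weak

def weakly_bisimilar(lts, f, g):
    weak = saturate(lts)
    eq = set(product(lts, lts))
    changed = True
    while changed:                                # delete bad pairs until stable
        changed = False
        for s, t in list(eq):
            fwd = all(any(a == b and (s1, t1) in eq for b, t1 in weak[t])
                      for a, s1 in weak[s])
            bwd = all(any(a == b and (s1, t1) in eq for b, s1 in weak[s])
                      for a, t1 in weak[t])
            if not (fwd and bwd):
                eq.discard((s, t))
                changed = True
    return (f, g) in eq

lts = {"f": {("tau", "g")}, "g": {("a", "h")}, "h": set()}
print(weakly_bisimilar(lts, "f", "g"))   # True: f can silently become g
```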
Sometimes we also consider weak bisimilarity between processes of different process rewrite systems, say Δ and Γ. Formally, Δ and Γ can be considered as a single system by taking their disjoint union.
In this section we prove that weak bisimilarity is decidable between BPA and
finite-state processes in polynomial time.
Let E be a BPA process with the underlying system Δ, F a finite-state process with the underlying system Γ such that Const(Δ) ∩ Const(Γ) = ∅. We assume (w.l.o.g.) that E ∈ Const(Δ). Moreover, we also assume that for all f, g ∈ Const(Γ) and a ∈ Act such that f ≠ g or a ≠ τ, we have that f =a=> g implies f -a-> g in Γ. If those '-a->' transitions are missing in Γ, we can add them safely. Adding these transitions does not change the weak bisimilarity relation among the states. In order to do this it suffices to compute (in cubic time) the transitive closure of Γ w.r.t. the τ transitions. These extra transitions do not influence our complexity estimations, as we always consider the worst case when Γ has all possible transitions. The condition that a ≠ τ is there because we do not want to add new transitions of the form f -τ-> f; otherwise our proof for weak bisimilarity would not immediately work for termination-sensitive bisimilarity (which is defined at the end of this section).
We use upper-case letters X, Y, Z, ... to denote elements of Const(Δ), and lower-case letters f, g, h, ... to denote elements of Const(Γ). Greek letters α, β, γ, ... are used to denote elements of Const(Δ)*. The size of Δ is denoted by n, and the size of Γ by m (we measure the complexity of our algorithm in (n, m)).
The set Const(Δ) can be divided into two disjoint subsets of normed and unnormed constants (remember that X ∈ Const(Δ) is normed iff X -w-> ε for some w ∈ Act*). Note that it is decidable in O(n^2) time whether a constant is normed. The set of all normed constants of Δ is denoted Normed(Δ). In our constructions we also use processes of the form αf; they should be seen as BPA processes with the underlying system Δ ∪ Γ.
Intuition: Our proof can be divided into two parts: first we show that the greatest weak bisimulation between processes of Δ and Γ is finitely representable. There is a finite relation B of size O(nm^2) (called bisimulation base) such that each pair of weakly bisimilar processes can be generated from that base (a technique first used by Caucal [6]). Then we show that the bisimulation base can be computed in polynomial time. To do that, we take a sufficiently large relation G which surely subsumes the base and 'refine' it (this refinement technique has been used in [17,18]). The size of G is still O(nm^2), and each step of the refinement procedure possibly deletes some of the elements of G. If nothing is deleted, we have found the base (hence we need at most O(nm^2) steps). The refinement step is formally introduced in Definition 9 (we compute the expansion of the currently computed approximation of the base). Intuitively, a pair of processes belongs to the expansion iff for each -a-> move of one component there is a =a=> move of the other component such that the resulting pair of processes can be generated from the current approximation of B. We have to overcome two problems:
1. The set of pairs which can be generated from B (and its approximations) is infinite.
2. The set of states which are reachable from a given BPA state in one '=a=>' move is infinite.
We employ a 'symbolic' technique to represent those infinite sets (similar to the one used in [3]), taking advantage of the fact that they have a simple (regular) structure which can be encoded by finite-state automata (see Theorems 6 and 12). This allows us to compute the expansion in polynomial time.
Definition 3 A binary relation K is well-formed iff it is a subset of the relation G defined by
G = ((Normed(Δ) · Const(Γ)) × Const(Γ)) ∪ (Const(Δ) × Const(Γ)) ∪ (Const(Γ) × Const(Γ)) ∪ ({ε} × Const(Γ)).
Note that the size of any well-formed relation is O(nm^2) and that G is the greatest well-formed relation.
One of the well-formed relations is of special importance.
Definition 4 The bisimulation base for Δ and Γ, denoted B, consists of exactly those pairs of G whose components are weakly bisimilar.
As weak bisimilarity is a left congruence w.r.t. sequential composition, we
can 'generate' from B new pairs of weakly bisimilar processes by substitution
(it is worth noting that weak bisimilarity is not a right congruence w.r.t.
sequencing-to see this, it suffices to define X -
Z. Now
generation procedure can be defined for any
well-formed relation as follows:
Definition 5 Let K be a well-formed relation. The closure of K, denoted
is the least relation M which satisfies the following conditions:
if (α, g) ∈ M and α contains an unnormed constant, then (αβ, g), (αβh, g) ∈ M for every β ∈ Const(Δ)* and h ∈ Const(Γ).
Note that Cl(K) contains elements of just two forms: (α, g) and (αf, g). Cl(K)_0 consists of K, and Cl(K)_{i+1} consists of Cl(K)_i and the pairs which can be immediately derived from Cl(K)_i by the rules 2-6 of Definition 5.
Although the closure of a well-formed relation can be infinite, its structure is in some sense regular. This fact is precisely formulated in the following theorem:
Theorem 6 Let K be a well-formed relation. For each g ∈ Const(Γ) there is a finite-state automaton A_g of size O(nm^2), constructible in O(nm^2) time, such that A_g accepts exactly those words w over Const(Δ) ∪ Const(Γ) for which (w, g) ∈ Cl(K).
PROOF. We construct a regular grammar of size O(nm 2 ) which generates
the mentioned language. Let G
Const (\Gamma)g [ fUg
Const (\Delta) [ Const (\Gamma)
ffl ffi is defined as follows:
for each ("; h) 2 K we add the rule h ! ".
for each (f; h) 2 K we add the rules h
for each (Y f; h) 2 K we add the rules
for each (X; h) 2 K we add the rule h ! X and if X is unnormed, then
we also add the rule h ! XU .
for each X 2 Const (\Delta), f 2 Const (\Gamma) we add the rules U ! XU , U ! X,
A proof that G_g indeed generates the mentioned language is routine. Now we translate G_g to A_g (see, e.g., [19]). Note that the size of A_g is essentially the same as the size of G_g; A_g is non-deterministic and can contain ε-rules.
It follows immediately that for any well-formed relation K, the membership
problem for Cl(K) is decidable in polynomial time. Another property of Cl(K)
is specified in the lemma below.
Cl(K). Similarly, if (fi; f) 2
PROOF. We just give a proof for the first claim (the second one is similar).
Let (fff; g) 2 Cl(K) i . By induction on i.
and we can immediately apply the rule 3 or 5 of
Definition 5 (remember that ff can be ").
ffl Induction step. Let (fff; g) 2 Cl(K) i+1 . There are three possibilities (cf.
Definition 5).
I. There is r such that (fff; r) 2 K. By induction hypothesis
we know (fffih; r) 2 due to the rule 3 of
Definition 5.
II. there is r such that (Y
hypothesis we have (flfih; r) 2 Cl(K), and hence also (Y flfih; r) 2
Cl(K) by the rule 5 of Definition 5.
III. contains an unnormed constant.
Then (fl ffifih; g) 2 Cl(K) by the last rule of Definition 5.
The importance of the bisimulation base is clarified by the following theorem. It says that Cl(B) subsumes the greatest weak bisimulation between processes of Δ and Γ.
Theorem 8 For all α, f, g we have α ≈ g iff (α, g) ∈ Cl(B), and αf ≈ g iff (αf, g) ∈ Cl(B).
PROOF. The 'if' part is obvious in both cases, as B contains only weakly bisimilar pairs and all the rules of Definition 5 produce pairs which are again weakly bisimilar. The 'only if' part can, in both cases, be easily proved by induction on the length of α (we just show the first proof; the second one is similar).
and (Y; g) 2 B. By the rule 6
of Definition 5 we obtain (Y fi; g) 2 Cl(B). If Y is normed, then Y fi w
for some w 2 Act and g must be able to match the sequence w by some
such that fi - g 0 . By substitution we now obtain that Y g 0 - g.
induction hypothesis. Hence
due to the rule 4 of Definition 5.
The next definition formalizes one step of the 'refinement procedure' which
is applied to G to compute B. The intuition is that we start with G as an
approximation to B. In each refinement step some pairs are deleted from the
current approximation. If in a refinement step no pairs are deleted any more
then we have found B. The next definition specifies the condition on which
a given pair is not deleted in a refinement step from the currently computed
approximation of B.
Definition 9 Let K be a well-formed relation. We say that a pair (X, g) of K expands in K iff the following two conditions hold:
- for each X -a-> α there is some g =a=> h such that (α, h) ∈ Cl(K);
- for each g -a-> h there is some X =a=> α such that (α, h) ∈ Cl(K).
The expansion of a pair of the form (Yf, g), (f, g), (ε, g) in K is defined in the same way: for each '-a->' move of the left component there must be some '=a=>' move of the right component such that the resulting pair of processes belongs to Cl(K), and vice versa (note that ε =τ=> ε). The set of all pairs of K which expand in K is denoted by Exp(K).
The notion of expansion is in some sense 'compatible' with the definition of
bisimulation. This intuition is formalized in the following lemma.
Lemma 10 Let K be a well-formed relation such that Exp(K) = K. Then Cl(K) is a weak bisimulation.
PROOF. We prove that every pair (ff; g); (fff; g) of Cl(K) i has the property
that for each ' a
!' move of one component there is a ' a
)' move of the other
component such that the resulting pair of processes belongs to Cl(K) (we
consider just pairs of the form (fff; g); the other case is similar). By induction
on i.
the claim follows directly from
the definitions.
ffl Induction step. Let (fff; g) 2 Cl(K) i+1 . There are three possibilities:
I. There is an h such that (fff;
Let fff a
flf (note that ff can be empty; in this case we have to
consider moves of the form f a
. It is done in a similar way as below).
As (fff; h) 2 Cl(K) i , we can use the induction hypothesis and conclude
that there is h a
We distinguish two cases:
and as (h; g) 2 K, we obtain
due to Lemma 7. Hence g can use the move g -
g.
. Then there is a transition h a
(see the beginning
of this section) and as (h; g) 2 K, by induction hypothesis we know that
there is some g a
due to Lemma 7.
Now let g a
As (h; g) 2 K, there is h a
Cl(K). We distinguish two possibilities again:
use the move fff -
a 6= - or h 6= h 0 . Then h a
there is fff a
(or fff a
is handled in the same way) such that (flf; h 0
Hence also (flf; g 0 ) 2 Cl(K) by Lemma 7.
II. there is h such that (Y h; g) 2 K, (fif;
Let Y fif a
flfif . As (Y h; g) 2 K, we can use induction hypothesis and
conclude that there is g a
As (fif;
we obtain (flfif;
Let g a
. As (Y h; g) 2 K, by induction hypothesis we know that Y h
can match the move g a
there are two possibilities:
flh such that (flh;
As
immediately have (flfif;
Cl(K). The transition Y h a
can be
h, h y
h, we are done immediately because then Y fi a
and as (h; g 0 ); (fi; needed.
If y 6= - or h 0 6= h, there is a transition h y
As (fif;
due to induction hypothesis we know that there is some fif y
fif y
this is handled in the same way) with (flf; h 0
Y fif a
As
III. contains an unnormed constant and (fi; g) 2 Cl(K) i .
Let ff a
ffi. As (fi; g) 2 Cl(K) i , there is
a
due to the induction hypothesis. Clearly
contains an unnormed constant, hence (ffifl; by the last rule
of Definition 5.
Let g a
As (fi; g) 2 Cl(K) i , there is fi a
and ffi contains an unnormed constant. Hence ff a
due to the last rule of Definition 5.
The notion of expansion allows to approximate B in the following way:
Theorem 11 There is a j ∈ N, bounded by O(nm^2), such that Exp^j(G) = B.
PROOF. Exp (viewed as a function on the complete lattice of well-formed relations) is monotonic, hence the greatest fixed-point exists and must be reached after O(nm^2) steps, as the size of G is O(nm^2). We prove that this fixed point B_j := Exp^j(G) equals B.
'⊇': First, let us realize that B ⊆ Exp(B) (immediately from Definition 4, Definition 9, and Theorem 8). The inclusion B ⊆ B_j can then be proved by a simple inductive argument; clearly B ⊆ G, and B ⊆ Exp^i(G) implies B ⊆ Exp^{i+1}(G) by definition of the expansion and the fact that B ⊆ Exp(B).
'⊆': As Exp(B_j) = B_j, we know that Cl(B_j) is a weak bisimulation due to Lemma 10. Thus, processes of every pair in B_j are weakly bisimilar.
In other words, B can be obtained from G in O(nm 2 ) refinement steps which
correspond to the construction of the expansion. The only thing which remains
to be shown is that Exp(K) is effectively constructible in polynomial time. To
do that, we employ a 'symbolic' technique which allows to represent infinite
subsets of BPA state-space in an elegant and succinct way.
Theorem 12 For all X ∈ Const(Δ), a ∈ Act(Δ) there is a finite-state automaton A_(X,a) of size O(n^2), constructible in O(n^2) time, such that L(A_(X,a)) = {α : X =a=> α}.
PROOF. We define a left-linear grammar G_(X,a) of size O(n^2) which generates the mentioned language. This grammar can be converted to A_(X,a) by a standard algorithm known from automata theory (see, e.g., [19]). Note that the size of A_(X,a) is essentially the same as the size of G_(X,a). First, let us realize that we can compute in O(n^2) time the sets M_τ and M_a consisting of all Y ∈ Const(Δ) such that Y =τ=> ε or Y =a=> ε, respectively. The nonterminals of G_(X,a) are two indexed copies of Const(Δ) together with a start symbol S; intuitively, the index indicates whether the action 'a' has already been emitted.
Const (\Delta)
ffl ffi is defined as follows:
We add the production S ! X a to ffi, and if X a
we also add the
production
\Delta For every transition Y a
of \Delta and every i such that 1 -
we test whether Z j
i. If this is the case, we add to
ffi the productions
\Delta For every transition Y -
of \Delta and every i such that 1 -
we do the following:
We test whether Z j
i. If this is the case, we
add to ffi the productions
Y a ! Z a
We test whether there is a t ! i such that Z t
a
every t. If this is the case, we add to ffi the productions
The fact that G (X;a) generates the mentioned language is intuitively clear and
a formal proof of that is easy. The size of G (X;a) is O(n 2 ), as \Delta contains O(n)
basic transitions of length O(n).
The crucial part of our algorithm (the 'refinement step') is presented in the proof of the next theorem. Our complexity analysis is based on the following facts: Let A be a non-deterministic automaton with ε-rules, and let t be the total number of states and transitions of A.
- The problem whether a given w ∈ Σ* belongs to L(A) is decidable in O(|w| · t) time.
- The problem whether L(A) = ∅ is decidable in O(t) time.
Theorem 13 Let K be a well-formed relation. The relation Exp(K) can be effectively constructed in O(n^4 m^5) time.
PROOF. First we construct the automata A_g of Theorem 6 for every g ∈ Const(Γ). This takes O(nm^3) time. Then we construct the automata A_(X,a) of Theorem 12 for all X, a. This takes O(n^4) time. Furthermore, we also compute the set of all pairs of the form (f, g), (ε, g) which belong to Cl(K). It can be done in O(m^2) time. Now we show that for each pair of K we can decide in O(n^3 m^3) time whether it expands in K.
The pairs of the form (f, g) and (ε, g) are easy to handle; there are at most m states f' such that f =a=> f', and at most m states g' with g =a=> g'; hence we need to check only O(m^2) pairs to verify the first (and consequently also the second) condition of Definition 9. Each such pair can be checked in constant time, because the set of all pairs (f, g), (ε, g) which belong to Cl(K) has already been computed at the beginning.
Now let us consider a pair of the form (Y, g). First we need to verify that for each Y -a-> α there is some g =a=> h such that (α, h) ∈ Cl(K). This requires O(nm) tests whether α ∈ L(A_h). As the length of α is O(n) and the size of A_h is O(nm^2), each such test can be done in O(n^2 m^2) time; hence we need O(n^3 m^3) time in total. As for the second condition of Definition 9, we need to find out whether for each g -a-> h there is some Y =a=> α such that (α, h) ∈ Cl(K). To do that, we simply test the emptiness of L(A_(Y,a)) ∩ L(A_h). The size of the product automaton is O(n^3 m^2) and we need to perform only O(m) such tests, hence O(n^3 m^3) time suffices.
Pairs of the form (Yf, g) are handled in a similar way; the first condition of Definition 9 is again no problem, as we are interested only in the '-a->' moves of the left component. Now let g -a-> h. The existence of a 'good' =a=> move of Yf can be verified by testing whether one of the following conditions holds:
- Y =a=> α for some α such that (αf, h) ∈ Cl(K);
- Y =a=> ε and there is some f =τ=> f' such that (f', h) ∈ Cl(K);
- Y =τ=> ε and there is some f =a=> f' such that (f', h) ∈ Cl(K).
All those conditions can be checked in O(n^3 m^2) time (the required analysis has been in fact done above). As K contains O(nm^2) pairs, the total time which is needed to compute Exp(K) is O(n^4 m^5).
As the BPA process E (introduced at the beginning of this section) is an element of Const(Δ), we have that E ≈ F iff (E, F) ∈ B. To compute B, we have to perform the computation of the expansion O(nm^2) times (see Theorem 11). This gives us the following main theorem:
Theorem 14 Weak bisimilarity is decidable between BPA and finite-state processes in O(n^5 m^7) time.
4 Termination-Sensitive Bisimilarity
As we already mentioned in the previous section, weak bisimilarity is not a
congruence w.r.t. sequential composition. This is a major drawback, as any
equivalence which is to be considered as 'behavioral' should have this prop-
erty. We propose a solution to this problem by designing a natural refinement
of weak bisimilarity called termination-sensitive bisimilarity. This relation respects
some of the main features of sequencing which are 'overlooked' by weak bisimilarity; consequently, it is a congruence w.r.t. sequential composition. We
also show that termination-sensitive bisimilarity is decidable between BPA and
finite-state processes in polynomial time by adapting the method of the previous
section. It should be noted right at the beginning that we do not aim
to design any new 'fundamental' notion of the theory of sequential processes
(that is why the properties of termination-sensitive bisimilarity are not studied
in detail). We just want to demonstrate that our method is applicable to a
larger class of bisimulation-like equivalences and the relation of termination-
sensitive bisimilarity provides a (hopefully) convincing evidence that some of
them might be interesting and useful.
In our opinion, any 'reasonable' model of sequential behaviors should be able
to express (and distinguish) the following 'basic phenomena' of sequencing:
ffl successful termination of the process which is currently being executed. The
system can then continue to execute the next process in the queue;
ffl unsuccessful termination of the executed process (deadlock). This models a
severe error which causes the whole system to 'get stuck';
ffl entering an infinite internal loop (cycling).
The difference between successful and unsuccessful termination is certainly
significant. The need to distinguish between termination and cycling has also
been recognized in practice; major examples come, e.g., from the theory of
operating systems.
BPA processes are a very natural model of recursive sequential behaviors. Successful termination is modeled by reaching 'ε'. There is also a 'hidden' syntactical tool to model deadlock: note that by the definition of BPA systems there can be an X ∈ Const(Δ) such that Δ does not contain any rule of the form X -a-> α (let us call such constants undefined). A state Xβ models the situation when the executed process reaches a deadlock: there is no transition (no computational step) from Xβ, the process is 'stuck'. It is easy to see that we can safely assume that Δ contains at most one undefined constant (the other ones can be simply renamed to X), which is denoted δ by convention [2]. Note that δ is unnormed by definition. States of the form δα are called deadlocked.
In the case of finite-state systems, we can distinguish between successful and unsuccessful termination in a similar way. Deadlock is modeled by a distinguished undefined constant δ, and the other undefined constants model successful termination.
Note that ε ≈ δ by definition of weak bisimilarity. As 'ε' represents a successful termination, this is definitely not what we want. Before we define the promised relation of termination-sensitive bisimilarity, we need to clarify what is meant by cycling; intuitively, it is the situation when a process enters an infinite internal loop. In other words, it can do 'τ' forever without a possibility to do anything else or to terminate (either successfully or unsuccessfully).
Definition 15 The set of initial actions of a process E, denoted I(E), is defined by I(E) = {a ∈ Act : E -a-> F for some F}. A process E is cycling iff every state F which is reachable from E satisfies I(F) = {τ}.
Note that it is easily decidable in quadratic time whether a given BPA process
is cycling; in the case of finite-state systems we only need linear time.
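A sketch (ours) of the linear-time test for finite-state systems is given below; the representation of the transition system is an assumption made for the example.

```python
# A sketch (ours) of the cycling test on a finite-state system: E is cycling
# iff every state reachable from E has exactly {tau} as its initial actions.

def is_cycling(lts, start):
    """lts maps a state to a set of (action, successor) pairs."""
    seen, stack = {start}, [start]
    while stack:
        s = stack.pop()
        if {a for a, _ in lts[s]} != {"tau"}:
            return False
        for _, t in lts[s]:
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return True

print(is_cycling({"p": {("tau", "q")}, "q": {("tau", "p")}}, "p"))   # True
print(is_cycling({"p": {("tau", "q")}, "q": set()}, "p"))            # False: q terminates
```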
Definition 16 We say that an expression E is normal iff E is not cycling, deadlocked, or successfully terminated. A binary relation R over process expressions is a termination-sensitive bisimulation iff whenever (E, F) ∈ R, then the following conditions hold:
- if one of the expressions E, F is cycling, then the other is also cycling;
- if one of the expressions E, F is deadlocked, then the other is either normal or it is also deadlocked;
- if one of the expressions E, F is successfully terminated, then the other is either normal or it is also successfully terminated;
- if E -a-> E', then there is F =a=> F' such that (E', F') ∈ R;
- if F -a-> F', then there is E =a=> E' such that (E', F') ∈ R.
Processes E, F are termination-sensitive bisimilar, written E ≃ F, iff there is a termination-sensitive bisimulation relating them.
Termination-sensitive bisimilarity seems to be a natural refinement of weak
bisimilarity which better captures an intuitive understanding of 'sameness' of
sequential processes. It distinguishes among the phenomena mentioned at the
beginning of this section, but it still allows internal computational
steps to be ignored to a large extent. For example, a deadlocked process is still equivalent to
a process which is not deadlocked yet but which necessarily deadlocks after a
finite number of τ-transitions (this example also explains why the first three
conditions of Definition 16 are stated so carefully).
The family of ≃_i approximations is defined in the same way as in the case of weak
bisimilarity; the only difference is that ≃_0 relates exactly those processes which
satisfy the first three conditions of Definition 16. The following theorem follows
immediately from this definition.
Theorem 17 Termination-sensitive bisimilarity is a congruence w.r.t. sequential
composition.
The technique which has been used in the previous section also works for
termination-sensitive bisimilarity.
Theorem 18 Termination-sensitive bisimilarity is decidable between BPA and
finite-state processes in O(n^5 m^7) time.
PROOF. First, all assumptions about Δ and Γ which were mentioned at the
beginning of Section 3 are also safe w.r.t. termination-sensitive bisimilarity;
note that this would not be true if we also assumed the existence of a τ-loop
f -τ-> f for every f ∈ Const(Γ). Now we see why the assumptions about Γ
are formulated so carefully. The only thing which has to be modified is the
notion of well-formed relation; it is defined in the same way, but in addition
we require that the processes of every pair which is contained in a well-formed
relation K are related by ≃_0. It can easily be shown that the processes of pairs
contained in Cl(K) are then also related by ≃_0. In other words, we do not
have to take care of the first two requirements of Definition 16 in our
constructions anymore; everything works without a single change.
The previous proof indicates that the 'method' of Section 3 can be adapted to
other bisimulation-like equivalences. See the final section for further comments.
5 Normed BPP Processes
In this section we prove that weak bisimilarity is decidable in polynomial time
between normed BPP and finite-state processes. The basic structure of our
proof is similar to the one for BPA. The key is that the weak bisimulation
problem can be decomposed into problems about the single constants and
their interaction with each other. In particular, a normed BPP process is
finite w.r.t. weak bisimilarity iff every single reachable process constant is
finite w.r.t. weak bisimilarity. This does not hold for general BPP and thus
our construction does not carry over to general BPP.
Example 19 Consider the unnormed BPP that is defined by the following
rules.
a
a i
an
Then the process X_1 ‖ · · · ‖ X_n is finite w.r.t. bisimilarity, but every subprocess
(e.g. X_3 ‖ X_4 ‖ X_7 or every single constant X_i) is infinite w.r.t. bisimilarity.
Even for normed BPP, we have to solve some additional problems. The bisimulation
base and its closure are simpler due to the normedness assumption,
but the 'symbolic' representation of the BPP state-space is more problematic (see
below). The set of states which are reachable from a given BPP state in one
'=a=>' move is no longer regular, but it can in some sense be represented by
a CF grammar. In our algorithm we use the facts that emptiness of a CF
language is decidable in polynomial time, and that CF languages are closed
under intersection with regular languages.
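The emptiness test mentioned here is the standard marking of productive nonterminals. The
following Python sketch (an illustration of ours, not code from the paper) shows it; productions
map a nonterminal to a list of right-hand sides, and any symbol not occurring as a key is treated
as a terminal.

def is_empty(productions, start):
    productive = set()
    changed = True
    while changed:
        changed = False
        for a, rhss in productions.items():
            if a in productive:
                continue
            for rhs in rhss:
                if all(s not in productions or s in productive for s in rhs):
                    productive.add(a)
                    changed = True
                    break
    return start not in productive

# S -> a S b | empty generates words; T -> T generates nothing.
g = {"S": [["a", "S", "b"], []], "T": [["T"]]}
assert not is_empty(g, "S") and is_empty(g, "T")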
Let E be a BPP process and F a finite-state process with the underlying
systems Δ and Γ, respectively. We can assume w.l.o.g. that E ∈ Const(Δ).
Elements of Const(Δ) are denoted by X, Y, Z, ...; elements of Const(Γ) by
f, g, h, .... The set of all parallel expressions over Const(Δ) is denoted by
Const(Δ)^⊗ and its elements by Greek letters α, β, γ, .... The size of Δ is denoted
by n, and the size of Γ by m.
In our constructions we represent certain subsets of Const(Δ)^⊗ by finite automata
and CF grammars. The problem is that elements of Const(Δ)^⊗ are
considered modulo commutativity; however, finite automata and CF grammars
of course distinguish between different 'permutations' of the same word.
As the classes of regular and CF languages are not closed under permutation,
this problem is important. As we want to clarify the distinction between α
and its possible 'linear representations', we define for each α = X_1 ‖ · · · ‖ X_k the set Lin(α)
as follows:
Lin(α) = { X_{p(1)} X_{p(2)} · · · X_{p(k)} | p is a permutation of the set {1, ..., k} }.
For example, Lin(X‖Y) = {XY, YX}. We
also assume that each Lin(α) contains some (unique) element called the canonical
form of Lin(α). It is not important how the canonical form is chosen; we need
it just to make some constructions deterministic (for example, we can fix some
linear order on process constants and let the canonical form of Lin(α) be the
sorted order of the constants of α).
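A minimal sketch of one such choice of canonical form (ours, purely illustrative): represent a
parallel expression as a list of constant names and take the lexicographically sorted linearisation.

def canonical_form(alpha):
    """alpha is a list of constant names, e.g. ['Y', 'X', 'X'] for Y || X || X."""
    return tuple(sorted(alpha))

assert canonical_form(["Y", "X"]) == canonical_form(["X", "Y"]) == ("X", "Y")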
Definition 20 A relation K is well-formed iff it is a subset of (Const(Δ) ∪
{ε}) × Const(Γ). The bisimulation base for Δ and Γ, denoted B, is defined
as follows:
B = {(X, f) | X ≈ f} ∪ {(ε, f) | ε ≈ f}.
Definition 21 Let K be a well-formed relation. The closure of K, denoted
Cl(K), is the least relation M which satisfies
(1) K ⊆ M;
(2) if (X, r) ∈ K, (γ, s) ∈ M, and r‖s ≈ f, then (X‖γ, f) ∈ M;
(3) if (α, r) ∈ M, (ε, s) ∈ K, and r‖s ≈ f, then (α, f) ∈ M.
The family of Cl(K)_i approximations is defined in the same way as in Section 3.
Lemma 22 Let (α, f) ∈ Cl(K) and (β, g) ∈ Cl(K). If f‖g ≈ h, then (α‖β, h) ∈
Cl(K).
PROOF. Let (α, f) ∈ Cl(K)_i. By induction on i.
- Base case: (α, f) ∈ K, and we can immediately apply rule 2 or 3 of
Definition 21.
- Induction step. Let (α, f) ∈ Cl(K)_{i+1}. There are two possibilities.
I. α = X‖γ and there are r, s such that (X, r) ∈ K, (γ, s) ∈ Cl(K)_i, and
r‖s ≈ f. Clearly r‖s‖g ≈ h, hence also s‖g ≈ t for some t. By the induction
hypothesis we have (γ‖β, t) ∈ Cl(K), hence (X‖γ‖β, h) ∈ Cl(K) due to
the second rule of Definition 21 (note that r‖t ≈ h).
II. (α, r) ∈ Cl(K)_i and there is some s such that (ε, s) ∈ K and r‖s ≈ f.
As r‖s‖g ≈ h, there is some t such that r‖g ≈ t. By the induction hypothesis
we obtain (α‖β, t) ∈ Cl(K), hence (α‖β, h) ∈ Cl(K) due to the third
rule of Definition 21.
Again, the closure of the bisimulation base is the greatest weak bisimulation
between processes of \Delta and \Gamma.
Theorem 23 Let α ∈ Const(Δ)^⊗ and f ∈ Const(Γ). We have that α ≈ f iff
(α, f) ∈ Cl(B).
PROOF. The 'if' part is obvious. The `only if' part can be proved by induction
on length(ff).
As \Delta is normed and Xkfi - f , there are w; v 2 Act such that
X. The process f must be able to match the sequences
w; v by entering weakly bisimilar states-there are Const (\Delta) such
that fi - g, X - h, and consequently also f - gkh (here we need the fact
that weak bisimilarity is a congruence w.r.t. the parallel operator). Clearly
induction hypothesis, hence (Xkfi; f) 2
by Definition 21.
The closure of any well-formed relation can in some sense be represented by
a finite-state automaton, as stated in the next theorem. For this construction
we first need to compute the set {(f‖g, h) | f‖g ≈ h}. We consider the parallel
composition of the finite-state system with itself, i.e., the states of this system
are of the form f‖g. Let our new system be the union of this system with the
old system. The new system has size O(m^2) and its states are of the form f‖g
or h. Then we apply the usual cubic-time partition refinement algorithm to
decide bisimilarity on the new system (see Section 2). This gives us the set {(f‖g, h) | f‖g ≈ h}.
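The following Python sketch shows a naive partition-refinement step (illustrative only, not the
implementation used in the paper; in particular, for weak bisimilarity one would first saturate the
transition relation with τ-closures before refining).

def bisimulation_classes(states, transitions):
    # transitions is a set of triples (p, a, q); start with a single block
    block_of = {s: 0 for s in states}
    while True:
        signature = {
            s: frozenset((a, block_of[q]) for (p, a, q) in transitions if p == s)
            for s in states
        }
        keys, new_block_of = {}, {}
        for s in states:
            k = (block_of[s], signature[s])
            new_block_of[s] = keys.setdefault(k, len(keys))
        if new_block_of == block_of:
            return new_block_of
        block_of = new_block_of

ts = {("f", "a", "f"), ("g", "a", "g"), ("h", "b", "h")}
blocks = bisimulation_classes({"f", "g", "h"}, ts)
assert blocks["f"] == blocks["g"] != blocks["h"]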
Theorem 24 Let K be a well-formed relation. For each g ∈ Const(Γ) there
is a finite-state automaton A_g of size O(nm), constructible in O(nm) time,
such that the following conditions hold:
- whenever A_g accepts an element of Lin(α), then (α, g) ∈ Cl(K);
- whenever (α, g) ∈ Cl(K), then A_g accepts at least one element of Lin(α).
PROOF. We design a regular grammar G_g of size O(nm) such that L(G_g) has
the mentioned properties. Let G_g = (N, Σ, δ, S) where
- N = Const(Γ) ∪ {S};
- Σ = Const(Δ);
- δ is defined as follows:
  - for each (X, f) ∈ K we add the rule S → Xf;
  - for each (ε, f) ∈ K we add the rule S → f;
  - for all f, r, s ∈ Const(Γ) and X ∈ Const(Δ) such that (X, r) ∈ K and f ≈ r‖s,
    we add the rule s → Xf;
  - for all f, r, s ∈ Const(Γ) such that (ε, r) ∈ K and f ≈ r‖s, we add the rule
    s → f;
  - we add the rule g → ε.
The first claim follows from the observation that whenever we have ᾱ ∈ Lin(α)
such that ᾱf is a sentential form of G_g, then (α, f) ∈ Cl(K). This can easily be
proved by induction on the length of the derivation of ᾱf. For the second
part, it suffices to prove that if (α, f) ∈ Cl(K)_i, then there is ᾱ ∈ Lin(α) such
that ᾱf is a sentential form of G_g. It can be done by a straightforward induction on
i.
It is important to realize that if (ff; g) 2 does not necessarily
accept all elements of Lin(ff). For example, if
Const
Const (\Gamma), then A g accepts the string XY Z but not the string XZY . Generally,
A g cannot be 'repaired' to do so (see the beginning of this section); however,
there is actually no need for such 'repairs', because A g has the following nice
property:
Lemma 25 Let K be a well-formed relation such that B ' K. If ff - g, then
the automaton A g of (the proof of) Theorem 24 constructed for K accepts all
elements of Lin(ff).
PROOF. Let G g be the grammar of the previous proof. First we prove that
for all s; Const (\Gamma), Const
(\Delta)\Omega such that fl - r, skr - f there is a
derivation s ! flf in G g for every fl 2 Lin(fl). By induction on length(fl).
the pair ("; r) belongs to B. Hence s ! f by definition of
G g .
is of the form Xkfi
where fi 2 Lin(fi). As Xkfi - r and \Delta is normed, there are u; v 2 Const (\Gamma)
such that X - u, fi - v, and ukv - r. Hence we also have skukv - f , thus
Const (\Gamma). As X - u, the pair (X; u) belongs to B.
by definition of G g . As fi - v and vkt - f , we can use the
induction hypothesis and conclude t ! fif . Hence s ! Xfif as required.
Now let ff - g. As \Delta is normed, there is some r 2 Const (\Gamma) such that " - r.
Hence by definition of G g . Clearly rkg - g and due to
the above proved property we have r ! ffg for every ff 2 Lin(ff). As
is a rule of G g , we obtain
The set of states which are reachable from a given X 2 Const (\Delta) in one ' a
move is no longer regular, but it can, in some sense, be represented by a CF
grammar.
Theorem 26 For all X ∈ Const(Δ), a ∈ Act(Δ) there is a context-free grammar
G_(X,a) in 3-GNF (Greibach normal form, i.e., with at most 2 variables on
the right-hand side of every production) of size O(n^4), constructible in O(n^4)
time, such that the following two conditions hold:
- if G_(X,a) generates an element of Lin(α), then X =a=> α;
- if X =a=> α, then G_(X,a) generates at least one element of Lin(α).
PROOF. Let G
Const (\Delta)
ffl ffi is defined as follows:
\Delta the rule S ! X a is added to ffi.
for each transition Y a
of \Delta we add the rule
we add the rule Y a ! ").
for each transition Y -
of \Delta we add the rule
also add the rule
for each Y 2 Const (\Delta) we add the rule
The fact that G (X;a) satisfies the above mentioned conditions follows directly
from its construction. Note that the size of G (X;a) is O(n 2 ) at the moment. Now
we transform G (X;a) to 3-GNF by a standard procedure of automata theory
(see [19]). It can be done in O(n 4 time and the size of resulting grammar is
O(n 4 ).
The notion of expansion is defined in a different way (when compared to the
one of the previous section).
Definition 27 Let K be a well-formed relation. We say that a pair (X, f) ∈
K expands in K iff the following two conditions hold:
- for each X -a-> α there is some f =a=> g such that ᾱ ∈ L(A_g), where ᾱ is the
canonical form of Lin(α);
- for each f -a-> g the language L(G_(X,a)) ∩ L(A_g) is non-empty.
A pair (ε, f) ∈ K expands in K iff f -a-> g implies a = τ, and for each f -τ-> g
we have that ε ∈ L(A_g). The set of all pairs of K which expand in K is denoted
by Exp(K).
Theorem 28 Let K be a well-formed relation. The set Exp(K) can be computed
in O(n 11 m 8 ) time.
PROOF. First we compute the automata A g of Theorem 24 for all g 2
Const (\Gamma). This takes O(nm 2 ) time. Then we compute the grammars G (X;a)
of Theorem 26 for all X 2 Const (\Delta), a 2 Act . This takes O(n 6 ) time. Now
we show that it is decidable in O(n f) of K
expands in K.
The first condition of Definition 27 can be checked in O(n 3 time, as there
are O(n) transitions X a
states g such that f a
g, and for each
such pair (ff; g) we verify whether ff 2 is the canonical form
of Lin(ff); this membership test can be done in O(n 2 m) time, as the size of
ff is O(n) and the size of A g is O(nm).
The second condition of Definition 27 is more expensive. To test the emptiness
of first construct a pushdown automaton P which recognizes
this language. P has O(m) control states and its total size is O(n 5 m).
Furthermore, each rule pX a
! qff of P has the property that length(ff) - 2,
because G (X;a) is in 3-GNF. Now we transform this automaton to an equivalent
CF grammar by a well-known procedure described, e.g., in [19]. The size of
the resulting grammar is O(n 5 m 3 ), and its emptiness can be thus checked in
This construction has to be performed O(m) times,
hence we need O(n
Pairs of the form ("; f) are handled in a similar (but less expensive) way. As
K contains O(nm) pairs, the computation of Exp(K) takes O(n
The previous theorem is actually a straightforward consequence of Definition
27. The next theorem says that Exp really does what we need.
Theorem 29 Let K be a well-formed relation such that Exp(K) = K. Then
Cl(K) is a weak bisimulation.
PROOF. Let (ff; f) 2 Cl(K) i . We prove that for each ff a
! fi there is some
f a
such that (fi; g) 2 Cl(K) and vice versa. By induction on i.
and we can distinguish the following two possibilities
Let X a
fi. By Definition 27 there is f a
such that fi 2
some fi 2 Lin(fi). Hence (fi; g) 2 Cl(K) due to the first part of Theorem
24.
Let f a
g. By Definition 27 there is some string w 2
Let w 2 Lin(fi). We have X a
due to the first part of Theorem 26, and
due to Theorem 24.
Let f a
g. Then Hence ("; g) 2
Cl(K) due to Theorem 24.
ffl Induction step. Let (ff; f) 2 Cl(K) i+1 . There are two possibilities.
I. and there are s such that (X; r) 2 K, (fl; s) 2 Cl(K) i , and
rks - f .
Let Xkff a
fi. The action 'a' can be emitted either by X or by ff. We
distinguish the two cases.
ffikfl. As (X; r) 2 K and X a
! ffi, there is some r a
such that (ffi; r 0 As rks - f and r a
there is some f a
such that r 0 ks - g. To sum up, we have (ffi; r 0
ks - g, hence (ffikfl; g) 2 Cl(K) due to Lemma 22.
Xkae. As (fl; s) 2 Cl(K) i and fl a
ae, there is s a
that (ae; s 0 As rks - f and s a
there is f a
such that
(rks g. Due to Lemma 22 we obtain (Xkae; g) 2 Cl(K).
Let f a
g. As rks - f , there are r x
a such that r 0 ks 0 - g. As (X; r) 2 K, (fl; s) 2 Cl(K) i , there
are X x
ae such that (ffi; r 0
and (ffikae; g) 2 Cl(K) due to Lemma 22.
II. (ff; r) 2 Cl(K) i and there is some s such that ("; s) 2 K and rks - f .
The proof can be completed along the same lines as above.
Now we can approximate (and compute) the bisimulation base in the same
way as in the Section 3.
Theorem 30 There is a j ∈ ℕ, bounded by O(nm), such that B = Exp^j(K_0), where K_0 is the maximal well-formed relation.
PROOF. '':' It suffices to show that
Const (\Delta) or ff = ". We show that (X; f)
expands in B (a proof for the pair ("; f) is similar).
Let X a
fi. As X - f , there is f a
such that fi - g. Let fi be the canonical
form of Lin(fi). Due to Lemma 25 we have
Let f a
g. As X - f , there is X a
g. Due to Theorem 26
there is fi 2 Lin(fi) such that fi 2 L(G (X;a) ). Moreover, fi 2 due to
Lemma 25. Hence,
":' It follows directly from Theorem 29.
Theorem 31 Weak bisimilarity between normed BPP and finite-state processes
is decidable in O(n 12 m 9 ) time.
PROOF. By Theorem 30 the computation of the expansion of Theorem 28
(which costs O(n 11 m 8 ) time) has to be done O(nm) times.
6 Conclusions
We have proved that weak bisimilarity is decidable between BPA processes
and finite-state processes in O(n 5 m 7 ) time, and between normed BPP and
finite-state processes in O(n 12 m 9 ) time. It may be possible to improve the algorithm
by re-using previously computed information, for example about sets
of reachable states, but the exponents would still be very high. This is because
the whole bisimulation basis is constructed. To get a more efficient algorithm,
one could try to avoid this. Note, however, that once we have constructed B
(for a BPA/nBPP system Δ and a finite-state system Γ) and the automata
A_g of Theorem 6/Theorem 24 (for each g ∈ Const(Γ)), we can
decide weak bisimilarity between a BPA/nBPP process α over Δ and a process
f ∈ Const(Γ) in time O(|α|) - it suffices to test whether A_f accepts ᾱ
(observe that there is no substantial difference between A_f and A_g except for
the initial state).
The technique of bisimulation bases has also been used for strong bisimilarity
in [17,18]. However, those bases are different from ours; their design and the
way they generate 'new' bisimilar pairs of processes rely on additional
algebraic properties of strong bisimilarity (which is a full congruence w.r.t.
sequencing, allows for unique decompositions of normed processes w.r.t. sequencing
and parallelism, etc.). The main difficulty of those proofs is to show
that the membership in the 'closure' of the defined bases is decidable in polynomial
time. The main point of our proofs is the use of 'symbolic' representation
of infinite subsets of BPA and BPP state-space.
We would also like to mention that our proofs can be easily adapted to other
bisimulation-like equivalences, where the notion of 'bisimulation-like' equivalence
is the one of [21]. A concrete example is termination-sensitive bisimilarity
of Section 4. Intuitively, almost every bisimulation-like equivalence has the algebraic
properties which are needed for the construction of the bisimulation
base, and the 'symbolic' technique for state-space representation can also be
adapted. See [21] for details.
--R
Decidability of bisimulation equivalence for processes generating context-free languages
Process Algebra.
Reachability analysis of pushdown automata: application to model checking.
An elementary decision procedure for arbitrary context-free processes
More infinite results.
Graphes canoniques des graphes alg'ebriques.
Decidability and Decomposition in Process Algebras.
Bisimulation is decidable for all basic parallel processes.
Bisimulation equivalence is decidable for all context-free processes
Petri nets
Decidability of model checking for infinite-state concurrent systems
An automata-theoretic approach to interprocedural data-flow analysis
A short proof of the decidability of bisimulation for normed BPA processes.
Bisimulation trees and the decidability of weak bisimulations.
Bisimulation equivalence is decidable for normed process algebra.
A polynomial algorithm for deciding bisimilarity of normed context-free processes
A polynomial algorithm for deciding bisimulation equivalence of normed basic parallel processes.
Introduction to Automata Theory
Actions speak louder than words: Proving bisimilarity for context-free processes
Deciding bisimulation-like equivalences with finite-state processes
Effective decomposability of sequential behaviours.
Efficient verification algorithms for one-counter processes
On simulation-checking with sequential systems
Simulation preorder on simple process algebras.
Weak bisimulation and model checking for basic parallel processes.
Strict lower bounds for model checking BPA.
On the complexity of bisimulation problems for basic parallel processes.
On the complexity of bisimulation problems for pushdown automata.
Decidability of model checking with the temporal logic EF.
Process rewrite systems.
Infinite results.
Three partition refinement algorithms.
Concurrency and automata on infinite sequences.
Petri Net Theory and the Modelling of Systems.
Complexity of weak bisimilarity and regularity for BPA and BPP.
Hardness results for weak bisimilarity of simple process algebras.
Model checking CTL properties of pushdown systems.
--CTR
Richard Mayr, Weak bisimilarity and regularity of context-free processes is EXPTIME-hard, Theoretical Computer Science, v.330 n.3, p.553-575, 9 February 2005
Antonn Kuera , Richard Mayr, Simulation preorder over simple process algebras, Information and Computation, v.173 n.2, p.184-198, March 15, 2002
Antonn Kuera, The complexity of bisimilarity-checking for one-counter processes, Theoretical Computer Science, v.304 n.1-3, p.157-183, 28 July
Antonn Kuera , Petr Janar, Equivalence-checking on infinite-state systems: Techniques and results, Theory and Practice of Logic Programming, v.6 n.3, p.227-264, May 2006 | concurrency;bisimulation;infinite-state systems;process algebras;verification |
566387 | Polynomial-time computation via local inference relations. | We consider the concept of a local set of inference rules. A local rule set can be automatically transformed into a rule set for which bottom-up evaluation terminates in polynomial time. The local-rule-set transformation gives polynomial-time evaluation strategies for a large variety of rule sets that cannot be given terminating evaluation strategies by any other known automatic technique. This article discusses three new results. First, it is shown that every polynomial-time predicate can be defined by an (unstratified) local rule set. Second, a new machine-recognizable subclass of the local rule sets is identified. Finally, we show that locality, as a property of rule sets, is undecidable in general. | INTRODUCTION
Under what conditions does a given set of inference rules define a computationally tractable inference
relation? This is a syntactic question about syntactic inference rules. There are a variety of
motivations for identifying tractable inference relations. First, tractable inference relations sometimes
provide decision procedures for semantic theories. For example, the equational inference
The support of the National Science Foundation under grants IIS-9977981 and IIS-0093100 is gratefully acknowl-
edged. This article is a revised and expanded version of a paper which appeared in the Proceedings of the Third
International Conference on the Principles of Knowledge Representation and Reasoning, October 1992, pp. 403-
412.
Authors' addresses: R. Givan, School of Electrical and Computer Engineering, Purdue University, 1285 EE Build-
ing, West Lafayette, IN 47907; email: givan@purdue.edu; web: http://www.ece.purdue.edu/~givan; D. McAllester,
AT&T Labs Research, P. O. Box 971, 180 Park Avenue, Florham Park, NJ, 07932; email: dmac@research.att.com.;
web: http://www.research.att.com/~dmac.
rules of reflexivity, symmetry, transitivity, and substitutivity define a tractable inference relation
that yields a decision procedure for the entailment relation between sets of ground equations
[Kozen, 1977], [Shostak, 1978]. Another example is the set of equational Horn clauses valid in
lattice theory. As a special case of the results in this paper one can show automatically that validity
of a lattice-theoretic Horn clause is decidable in cubic time.
Deductive databases provide a second motivation for studying tractable inference relations. A
deductive database is designed to answer queries using simple inference rules as well as a set of
declared data base facts. The inference rules in a deductive database typically define a tractable
inference relation-these inference rules are usually of a special form known as a datalog pro-
gram. A datalog program is a set of first-order Horn clauses that do not contain function symbols.
Any datalog program defines a tractable inference relation [Ullman, 1988], [Ullman, 1989]. There
has been interest in generalizing the inference rules used in deductive databases beyond the special
case of datalog programs. In the general case, where function symbols are allowed in Horn
clause inference rules, a set of inference rules can be viewed as a Prolog program. Considerable
work has been done on "bottom-up" evaluation strategies for these programs and source-to-source
transformations that make such bottom-up evaluation strategies more efficient [Naughton and
Ramakrishnan, 1991], [Bry, 1990]. The work presented here on local inference relations can be
viewed as an extension of these optimization techniques. For example, locality testing provides an
automatic source-to-source transformation on the inference rules for equality (symmetry, reflexiv-
ity, transitive, and substitution) that allows them to be completely evaluated in a bottom-up fashion
in cubic time. We do not know of any other automatic transformation on inference rules that
provides a terminating evaluation strategy for this rule set.
Tractable rule sets also play an important role in type-inference systems for computer programming
languages [Talpin and Jouvelot, 1992], [Jouvelot and Gifford, 1991]. Although we have not
yet investigated connections between the notion of locality used here and known results on tractability
for type inference systems, this seems like a fruitful area for future research. From a practical
perspective it seems possible that general-purpose bottom-up evaluation strategies for
inference rules can be applied to inference rules for type-inference systems. From a theoretical
perspective we show below that any polynomial-time predicate can be defined by a local set of
inference rules and that many type-inference systems give polynomial-time decidable typability.
A fourth motivation for the study of tractable inference relations is the role that such relations
can play in improving the efficiency of search. Many practical search algorithms use some form of
incomplete inference to prune nodes in the search tree [Knuth, 1975], [Mackworth, 1977], [Pearl
and Korf, 1987]. Incomplete inference also plays an important role in pruning search in constraint
logic programming [Jaffar and Lassez, 1987], [van Hentenryck, 1989], [McAllester and Siskind,
1991]. Tractable inference relations can also be used to define a notion of "obvious inference"
which can then be used in "Socratic" proof verification systems which require proofs to be
reduced to obvious steps [McAllester, 1989], [Givan et al., 1991].
As mentioned above, inference rules are syntactically similar to first-order Horn clauses. In fact,
most inference rules can be naturally syntactically expressed 1 by a Horn clause in sorted first-order
logic. If R is a set of Horn clauses, S is a set of ground atomic formulas, and F is a ground
atomic formula, then we write S ⊢_R F if R ∪ S ⊨ F in first-order logic. We write S ⊢_R F rather than
R ∪ S ⊨ F because we think of R as a set of syntactic inference rules and ⊢_R as the inference relation
generated by those rules. Throughout this paper we use the term "rule set" as a synonym for
"finite set of Horn clauses". We give nontrivial conditions on R which ensure that the inference
relation ⊢_R is polynomial-time decidable.
As noted above, a rule set R that does not contain any function symbols is called a datalog pro-
gram. It is well-known that the inference relation defined by a datalog program is polynomial-time
decidable. Vardi and Immerman independently proved, in essence, that datalog programs
provide a characterization of the complexity class P - any polynomial time predicate on finite
databases can be written as a datalog program provided that one is given a successor relation that
defines a total order on the domain elements [Vardi, 1982], [Immerman, 1986], [Papadimitriou,
1985] [Hella et al., 1997] [Immerman, 1999].
Although datalog programs provide an interesting class of polynomial-time inference relations,
the class of tractable rule sets is much larger than the class of datalog programs. First of all, one
can generalize the concept of a datalog program to the concept of a superficial rule set. We call a
set of Horn clauses superficial if any term that appears in the conclusion of a clause also appears
in some premise of that clause. A superficial rule set has the property that forward-chaining inference
does not introduce new terms. We show in this paper that superficial rule sets provide a different
characterization of the complexity class P. While datalog programs can encode any
polynomial-time predicate on ordered finite databases, superficial rule sets can encode any polynomial-time
predicate on ground first-order terms. Let Φ be a predicate on ground first-order
terms constructed from a finite signature. We define the DAG size of a first-order term t to be the
number of distinct terms that appear as subexpressions of t. 2 It is possible to show that if Φ can be
computed in polynomial time in the sum of the DAG sizes of its arguments then Φ can be represented
by a superficial rule set. More specifically, we prove below that for any such predicate Φ
on k ground first-order terms there exists a superficial rule set R such that Φ(t_1, ..., t_k) if and only
if {INPUT(t_1, ..., t_k)} ⊢_R ACCEPT, where INPUT is a predicate symbol and ACCEPT is a distinguished
proposition symbol. Our characterization of the complexity class P in terms of superficial
rule sets differs from the previous characterization of P in terms of datalog programs in two ways.
First, the result is stated in terms of predicates on ground terms rather than predicates on data-
bases. Second, unlike the datalog characterization, no separate total order on domain elements is
required.
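A minimal sketch of the DAG-size measure (ours, with terms modelled as nested tuples; the
representation is an assumption made only for illustration):

def subterms(t, acc=None):
    acc = set() if acc is None else acc
    if t not in acc:
        acc.add(t)
        for child in t[1:]:
            subterms(child, acc)
    return acc

def dag_size(t):
    return len(subterms(t))

t = ("f", ("g", ("a",)), ("g", ("a",)))   # f(g(a), g(a))
assert dag_size(t) == 3                   # f(g(a),g(a)), g(a), a - the tree has 5 nodes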
Superficial rule sets are a special case of the more general class of local rule sets [McAllester,
1993]. A set R of Horn clauses is local if whenever S ⊢_R F there exists a proof of F from S such
that every term in the proof is mentioned in S or F. If R is local then ⊢_R is polynomial-time
decidable. All superficial rule sets are local but many local rule sets are not superficial. The set of
the four inference rules for equality is local but not superficial. The local inference relations provide
a third characterization of the complexity class P. Let Φ be a predicate on ground first-order
terms constructed from a finite signature. If Φ can be computed in polynomial time in the sum of
1. Any RE inference relation can in principle be defined by first-order Horn clauses but expressing inference rules
involving implicit substitution or higher order matching can be somewhat awkward.
2. The DAG size of a term is the size of the Directed Acyclic Graph representation of the term.
| - R
| - R
| - R
| - R
the DAG size of its arguments then there exists a local rule set R such that for any ground terms t_1,
..., t_k we have that Φ(t_1, ..., t_k) if and only if ∅ ⊢_R Q(t_1, ..., t_k), where Q is a predicate symbol representing
Φ. Note that no superficial rule set can have this property because forward-chaining inference
from a superficial rule set cannot introduce new terms. We find the characterization of
polynomial-time predicates in terms of local rule sets to be particularly pleasing because, as just
described, it yields a direct mapping from semantic predicates to predicates used in the inference
rules.
Unlike superficiality, locality can be difficult to recognize. The set of four inference rules for
equality is local but the proof of this fact is nontrivial. Useful machine-recognizable subclasses of
local rule sets have been identified by McAllester [McAllester, 1993] and Basin and Ganzinger
[Basin and Ganzinger, 1996] [Basin and Ganzinger, 2000] (the former subclass being semi-decidable
and the latter subclass being decidable). Even when only semi-decidable, the resulting procedures
mechanically demonstrate the tractability of many natural rule sets of interest, such as the
inference rules for equality. Here we introduce a third semi-decidable subclass which contains a
variety of natural rule sets not contained in either of these earlier classes. We will briefly describe
the two earlier classes and give examples of rule sets included in our new class that are not
included in the earlier classes.
Basin and Ganzinger identify the class of rule sets that are saturated with respect to all orderings
compatible with the subterm ordering. The notion of saturation is derived from ordered resolution.
We will refer to these rule sets simply as "saturated". Saturation with respect to the class of sub-
term-compatible orders turns out to be a decidable property of rule sets. Membership in the
[McAllester, 1993] class or the new class identified here is only semi-decidable - a rule set is in
these classes if there exists a proof of locality of a certain restricted form (a different form for each
of the two classes).
Basin and Ganzinger identify the subclass of local rule sets that are saturated with respect to all
orderings compatible with the subterm ordering. The approach taken by Basin and Ganzinger is
different from the approach taken here, with each approach having its own advantages. A primary
advantage of the saturation approach is its relationship with well-known methods for first-order
term rewriting and theorem proving - saturation can be viewed as a form of ordered resolution.
A second advantage is that saturation with respect to the class of orders compatible with the sub-term
ordering is decidable while the subclass of local rule sets given here is only semi-decidable.
A third advantage of saturation is that it generalizes the notion of locality to term orders other than
the subterm order. Both approaches support "completion" - the process of extending a rule set
by adding derived rules so that the resulting larger rule set is in the desired subclass of local rule
sets. For the procedures described here and in [McAllester, 1993] one simply converts each counter-example
to locality into a new derived inference rule. The primary advantage of the approach
described in this paper over the saturation approach is that the method described here often yields
smaller, more efficient rule sets. As an example consider the following rules.
(1)
| - R
x y
x y
These rules are local and this rule set is in both McAllester's class and the new class introduced
here. But they are not saturated. Saturation adds (at least) the following rules.
(2)
A decision procedure based on the larger saturated set would still run in O( ) time, but the added
rules significantly impact the constant factors and this is an important issue in practice.
The semi-decidable subclass of local rule sets introduced in [McAllester, 1993] is called the
bounded-local rule sets. This subclass is defined carefully in the body of this paper for further
comparison to the new subclass introduced here. The set of the four basic rules for equality is
bounded-local. As another example of a bounded-local rule set we give the following rules for
reasoning about a monotone operator from sets to sets. Let R f be the following set of inference
rules for a monotone operator.
There is a simple source-to-source transformation on any local rule set that converts the rule set
to a superficial rule set without changing the relation described. For example, consider the above
rules for a monotone operator. We can transform these rules so that they can only derive information
about terms explicitly mentioned in the query. To do this we introduce another predicate symbol
M (with the intuitive meaning "mentioned"). Let R ' f be the following transformed version of
R f .
Note that R ' f is superficial and hence bottom-up (forward-chaining) evaluation must terminate in
polynomial time 3 . Then to determine whether S ⊢_{R_f} Φ we determine, by bottom-up evaluation,
whether Φ follows via R'_f from S together with the facts M(t) for every term t mentioned in S or Φ. An analogous transformation applies to any local rule set.
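The displayed rule sets R_f and R'_f did not survive extraction in this copy, so the following Python
sketch assumes one standard formulation (reflexivity and transitivity of the subset relation plus
monotonicity of a unary operator f) and shows the M-guarded, superficial version being run
bottom-up; the details may differ from the paper's actual rules.

def close_under_Rf_prime(facts):
    """facts: set of ('M', t) and ('sub', s, t) atoms over terms given as nested tuples."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for fact in facts:
            if fact[0] == "M":
                t = fact[1]
                if t[0] == "f":                      # M(f(x)) => M(x)
                    new.add(("M", t[1]))
                new.add(("sub", t, t))               # M(x) => x sub x
        for f1 in facts:
            if f1[0] != "sub":
                continue
            _, x, y = f1
            for f2 in facts:                          # x sub y, y sub z => x sub z
                if f2[0] == "sub" and f2[1] == y:
                    new.add(("sub", x, f2[2]))
            # x sub y, M(f(x)), M(f(y)) => f(x) sub f(y)
            if ("M", ("f", x)) in facts and ("M", ("f", y)) in facts:
                new.add(("sub", ("f", x), ("f", y)))
        if not new <= facts:
            facts |= new
            changed = True
    return facts

a, b = ("a",), ("b",)
facts = {("M", t) for t in [a, b, ("f", a), ("f", b)]} | {("sub", a, b)}
assert ("sub", ("f", a), ("f", b)) in close_under_Rf_prime(facts)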
A variety of other bounded-local rule sets are given [McAllester, 1993]. As an example of a rule
set that is local but not bounded local we give the following rules for reasoning about a lattice.
3. For this rule set bottom-up evaluation can be run to completion in cubic time.
x z
z x
x y
x y
x y
x y
| - R
{
f
These rules remain local when the above monotonicity rule is added. With or without the monotonicity
rule, the rule set is not bounded-local.
In this paper we construct another useful semi-decidable subclass of the local rule sets which we
call inductively-local rule sets. All of the bounded-local rule sets given in [McAllester, 1993] are
also inductively-local. The procedure for recognizing inductively-local rule sets has been implemented
and has been used to determine that the above rule set is inductively-local. Hence the
inference relation defined by the rules in (5) is polynomial-time decidable. Since these rules are
complete for lattices this result implies that validity for lattice-theoretic Horn clauses is polynomial-time
decidable.
We believe that there are bounded-local rule sets which are not inductively-local, although we
do not present one here. We have not found any natural examples of local rule sets that fail to be
inductively-local. Inductively local rule sets provide a variety of mechanically recognizable polynomial-time
inference relations. Throughout this paper, when we claim that a rule set is either
bounded-local or inductively-local, that fact has been demonstrated mechanically using our techniques.
In this paper we also settle an open question from the previous analysis in [McAllester, 1993]
and show that locality as a general property of rule sets is undecidable. Hence the optimization of
logic programs based on the recognition of locality is necessarily a somewhat heuristic process.
2. BASIC TERMINOLOGY
In this section we give more precise definitions of the concepts discussed in the introduction.
Definition 1. A Horn clause is a first-order formula of the form Y_1 ∧ · · · ∧ Y_n → C where C
and the Y_i are atomic formulas. For any set of Horn clauses R, any finite set S of ground atoms,
and any ground atomic formula F, we write S ⊢_R F whenever U(R) ∪ S ⊨ F in first-order
logic, where U(R) is the set of universal closures of Horn clauses in R.
There are a variety of inference relations defined in this paper. For any inference relation ⊢ and
sets of ground formulas S and G we write S ⊢ G if S ⊢ Y for each Y in G.
The inference relation ⊢_R can be given a more direct syntactic characterization. This syntactic
x y
x z
x y
x y
z x z y
| - R S U R
| -
| -
|
characterization is more useful in determining locality. In the following definitions and lemma, S
is a set of ground atomic formulas, and F is a single ground atomic formula.
Definition 2. A derivation of F from S using rule set R is a sequence of ground atomic formulas
Y_1, ..., Y_n such that Y_n is F and for each Y_i there exists a Horn clause
Ψ_1 ∧ · · · ∧ Ψ_k → Y' in R and a ground substitution s such that s[Y'] is Y_i and each formula of the form
s[Ψ_j] is either a member of S or a formula appearing earlier than Y_i in the derivation.
Lemma 1: S ⊢_R F if and only if there exists a derivation of F from S using the rule set R.
The following restricted inference relation plays an important role in the analysis of locality.
Definition 3. We write S ⊩_R F if there exists a derivation of F from S such that every term
appearing in the derivation appears as a subexpression of F or as a subexpression of some formula
in S.
Lemma 2: (Tractability Lemma) [McAllester, 1993] For any finite rule set R the inference
relation ⊩_R is polynomial-time decidable.
Proof: Let n be the number of terms that appear as subexpressions of F or of a formula in S. If
Q is a predicate-symbol of k arguments that appears in the inference rules R then there are at
most n^k formulas of the form Q(s_1, ..., s_k) such that each s_i appears in S or F. Since R is finite there
is some maximum arity k over all the predicate symbols that appear in R. The total number of
ground atomic formulas that can be derived under the restrictions in the definition of ⊩_R is
then of order n k . Given a particular set of derived ground atomic formulas, one can determine
whether any additional ground atomic formula can be derived by checking whether each rule
in R has an instance whose premises are all in the currently derived formulas - for a rule with
k' variables, there are only n k' instances to check, and each instance can be checked in polynomial
time. Thus, one can extend the set of derived formulas by checking polynomially many
instances, each in polynomial time; and the set of derived formulas can only be extended at
most polynomially many times. The lemma then follows.
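The following Python sketch (ours, not the authors' implementation) makes the bottom-up procedure
of this proof concrete for the simplest case of constant arguments: atoms are tuples (pred, arg, ...),
rules are (premises, conclusion) pairs whose arguments are constants or variables starting with '?',
and only instantiations over the terms mentioned in S and F are considered.

from itertools import product

def restricted_derives(rules, S, F):
    terms = {a for atom in (set(S) | {F}) for a in atom[1:]}
    derived = set(S)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            variables = sorted({a for atom in premises + (conclusion,)
                                for a in atom[1:] if a.startswith("?")})
            for values in product(terms, repeat=len(variables)):
                sub = dict(zip(variables, values))
                inst = lambda atom: (atom[0],) + tuple(sub.get(a, a) for a in atom[1:])
                if all(inst(p) in derived for p in premises):
                    c = inst(conclusion)
                    if c not in derived:
                        derived.add(c)
                        changed = True
    return F in derived

# Transitivity of 'eq' as a single rule; equality of the endpoints follows.
rules = [((("eq", "?x", "?y"), ("eq", "?y", "?z")), ("eq", "?x", "?z"))]
S = {("eq", "a", "b"), ("eq", "b", "c")}
assert restricted_derives(rules, S, ("eq", "a", "c"))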
Clearly, if S ⊩_R F then S ⊢_R F. But the converse does not hold in general. By definition, if the
converse holds then R is local.
Definition 4. [McAllester, 1993]: The rule set R is local if the restricted inference relation
⊩_R is the same as the unrestricted relation ⊢_R.
Clearly, if R is local then ⊢_R is polynomial-time decidable.
3. CHARACTERIZING P WITH SUPERFICIAL RULES
In this section we consider predicates on first-order terms that are computable in polynomial time.
The results stated require a somewhat careful definition of a polynomial-time predicate on first-order
terms.
| - R
| - R
| - R
| - R
| - R
| - R S F
| - R
|
Definition 5. A polynomial-time predicate on terms is a predicate on one or more first-order
terms which can be computed in polynomial time in the sum of the DAG sizes of its
arguments.
Definition 6. A rule set is superficial if any term that appears in the conclusion of a rule also
appears in some premise of that rule.
Theorem 1: (Superficial Rule Set Representation Theorem) If Φ is a polynomial-time predicate
on k first-order terms of a fixed finite signature, then there exists a superficial rule set R
such that for any first-order terms t_1, ..., t_k from this signature, we have that Φ is true on arguments
t_1, ..., t_k if and only if {INPUT(t_1, ..., t_k)} ⊢_R ACCEPT.
As an example consider the "Acyclic" predicate on directed graphs - the predicate that is true of
a directed graph if and only if that graph has no cycles. It is well-known that acyclicity is a polynomial-time
property of directed graphs. This property has a simple definition using superficial
rules with one level of stratification - if a graph is not cyclic then it is acyclic. The above theorem
implies that the acyclicity predicate can be defined by superficial rules without any stratifica-
tion. The unstratified rule set for acyclicity is somewhat complex and rather than give it here we
give a proof of the above general theorem. The proof is rather technical, and casual readers are
advised to skip to the next section.
Proof: (Theorem 1) We only consider predicates of one argument. The proof for predicates of
higher arity is similar. Let be a one argument polynomial-time computable predicate on
terms, i.e., a predicate on terms such that one can determine in polynomial time in the DAG
size of a term t whether or not holds. Our general approach is to construct a database from
t such that the property of terms can be viewed as a polynomial-time computable property
of the database (since the term t can be extracted from the database and then computed).
We can then get a datalog program for computing this property of the database, given a total
ordering of the database individuals, using the result of Immerman and Vardi [Immerman,
1986], [Vardi, 1982]. The proof finishes by showing how superficial rules can be given that
construct the required database from t and the required ordering of the database individuals.
The desired superficial rule set is then the combination of the datalog program and the added
rules for constructing the database and the ordering. We now argue this approach in more
detail.
We first describe the database S t that will represent the term t. For each subterm s of t we introduce
a database individual c s , i.e., a new constant symbol unique to the term s. We have
assumed that the predicate is defined on terms constructed from a fixed finite signature, i.e.,
a fixed finite set of constant and function symbols. We will consider constants to be functions
of no arguments. For each function symbol f of n arguments in this finite signature we introduce
a database relation P f of n+1 arguments, i.e., P f is a n+1-ary predicate symbol. Now for
any term t we define S_t to be the set of ground formulas of the form P_f(c_{f(s_1, ..., s_n)}, c_{s_1}, ..., c_{s_n}) where f(s_1, ..., s_n) is
a subterm of t (possibly equal to t). The set S_t should be viewed as a database
with individuals c s and relations P f . Let G t be a set of formulas of the form S(c s , c u ) where
s and u are subterms of t such that S represents a successor relation on the individuals of S t ,
| - R
. c s n
i.e., there exists a bijection r from the individuals of S_t to consecutive integers such that S(s, u)
is in G_t if and only if r(u) = r(s) + 1. The result of Immerman and Vardi [Immerman, 1986],
[Vardi, 1982] implies that for any polynomial-time property P of ordered databases there exists
a datalog program R such that for all databases D we have P(D) if and only if D ⊢_R ACCEPT.
Since the term t can be easily recovered from the set S_t, Φ can be viewed as a polynomial-time
property of S_t, and so there must exist a datalog program R such that S_t ∪ G_t ⊢_R ACCEPT if
and only if Φ(t). We can assume without loss of generality that no rule in R can derive new
formulas involving the database predicates P f . If R has such rules they can be eliminated by
introducing duplicate predicates P f ', adding rules that copy P f facts to P f ' facts, and then
replacing P f by P f ' in all the rules.
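A small sketch (with invented helper names) of the database S_t described above: one individual
c_s per distinct subterm s of t, and one fact P_f(c_{f(s_1,...,s_n)}, c_{s_1}, ..., c_{s_n}) per
subterm f(s_1, ..., s_n). Terms are again modelled as nested tuples.

def database_of_term(t):
    individuals = {}                 # subterm -> individual name c_i
    facts = []

    def visit(s):
        if s in individuals:
            return individuals[s]
        name = "c%d" % len(individuals)
        individuals[s] = name
        args = [visit(child) for child in s[1:]]
        facts.append(("P_" + s[0], name, *args))
        return name

    visit(t)
    return individuals, facts

inds, facts = database_of_term(("plus", ("zero",), ("zero",)))
assert len(inds) == 2 and ("P_zero", inds[("zero",)]) in facts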
We now add to the rule set R superficial rules that construct the formulas needed in S t and G t -
these rules use a number of "auxiliary" relation symbols in their computations; we assume the
names of these relation symbols are chosen after the choice of R so that there are no occurrences
of these relation symbols in R. First we define a "mentioned" predicate M such that M(s)
is provable if and only if s is a subterm of t.
The second rule is a schema for all rules of this form where f is one of the finite number of
function symbols in the signature and x i is one of the variables x 1 , ., x n . Now we give rules
(again via a schema) that construct a version of the formula set S t where we use the subterms
themselves instead of the corresponding constants.
Now we write a collection of rules to construct the formula set G t , where we again use the
terms themselves rather than corresponding constants. These rules define a successor relation
on the subterms of t. The basic idea is to enumerate the subterms of t by doing a depth-first tree
traversal starting at the root of t and ignorning terms that have been encountered earlier. This
tree traversal is done below in rule sets (11) and (12), but these rule sets rely on various "utility
predicates" that we must first define.
We start by defining a simple subterm predicate Su such that Su(u, v) is provable if u and v are
subterms of t such that u is a subterm of v. The second rule is again a schema for all rules of
this form within the finite signature.
We also need the negation of the subterm predicate, which we will call NI for "not in". To
define this predicate we first need to define a "not equal" predicate NE such that NE(u, v) is
provable if and only if u and v are distinct subterms of the input t.
| - R
| - R
Instances of the first rule schema must have f and g distinct function symbols and in the second
rule schema x i and y i occur at the same argument position and all other arguments to f are the
same in both terms. Now we can define the "not in" predicate NI such that NI(s, u) if s is not a
subterm of u. We only give the rules for constants and functions of two arguments. The rules
for functions of other numbers of arguments are similar. Instances of the first rule schema must
have c a constant symbol.
Now for any subterm s of the input we simultaneously define a three-place "walk" relation W(s,
u, w) and a binary "last" relation L(s, u). W(s, u, w) will be provable if s and u are subterms of w
and u is the successor of s in a left-to-right preorder traversal of the subterms of w with elimination
of later duplicates. L(s, u) will be provable if s is the last term of the left-to-right preorder
traversal of the subterms of u, again with elimination of later duplicates. In these
definitions, we also use the auxiliary three-place relation W'(s, u, v), where W'(s, u, f (w, v))
means roughly that s and u are subterms of v such that u comes after s in the preorder traversal
of v and every term between s and u in this traversal is a subterm of w. More precisely, W'(s, u,
v) is inferred if and only if v has the form f (x,y) such that there are occurrences of s and u in the
pre-order traversal of y (removing duplicates within y) where the occurrence of u is later than
the occurrence of s and all terms in between these occurrences in the traversal are subterms of
x. Using W' and NI together (see two different rules below) enables the construction of a pre-order
traversal of y with subterms of x removed that can be used to construct a preorder traversal
of f (x,y) with duplicates removed.
ylast y
ylast x
, , L ylast f x y
xlast x
, , L xlast f x y
ylast y
ylast x
W- flast ylast f x y
flast x
, L flast f x y
Finally we define the successor predicate S in terms of W, as follows.
Let R' be the datalog program R plus all of the above superficial rules. We now have that S_t ∪
G_t ⊢_R ACCEPT if and only if INPUT(t) ⊢_{R'} ACCEPT, and the proof is complete.
(Theorem 1)
4. CHARACTERIZING P WITH LOCAL RULES
Using the theorem of the previous section one can provide a somewhat different characterization
of the complexity class P in terms of local rule sets. Recall from Definition 4 that a rule set R is
local if for any set of ground atomic formulas S and any single ground atomic formula F, we have
S ⊢_R F if and only if S ⊩_R F. We note that the tractability lemma (Lemma 2) implies immediately
that if R is local then ⊢_R is polynomial-time decidable.
Theorem 2: (Local Rule Set Representation Theorem) If Φ is a polynomial-time predicate on
first-order terms then there exists a local rule set R such that for any first-order terms t_1, ..., t_k,
we have that Φ is true on arguments t_1, ..., t_k if and only if ∅ ⊢_R Q(t_1, ..., t_k), where Q is a predicate
representing Φ.
Before giving a proof of this theorem we give a simple example of a local rule set for a polynomial-time
problem. Any context-free language can be recognized in cubic time. This fact is easily
proven by giving a translation of grammars into local rule sets. We represent a string of symbols
using a constant symbol for each symbol and the binary function CONS to construct terms that
represent lists of symbols. For each nonterminal symbol A of the grammar we introduce a predicate
of two arguments where P A (x, y) will indicate that x and y are strings of symbols
and that y is the result of removing a prefix of x that parses as category A. For each grammar production
A - c where c is a terminal symbol we construct a rule with no premises and the conclusion
x). For each grammar production A - B C we have the following inference
rule:
.
Finally, we let P be a monadic predicate which is true of strings generated by the distinguished
start nonterminal S of the grammar and add the following rule:
W-
W-
INPUT z
|
|
| - R S F
| - R
| - R
| - R
. (15)
Let R be this set of inference rules. R is a local rule set. To see this first note that the rules maintain
the invariant that if P A (x, y) is derivable then y is a subterm of x. From this it is easy to show
that any use of any rule in R on derivable premises has the property that every term appearing in
an premise (either at the top level or as a subterm of a top-level term) also appears in the conclusion
(either at the top level or as a subterm of a top-level term). This implies that a proof of P A (x,
y) can not mention terms other than x and its subterms (which includes y).
The rule set R also has the property that if and only if x is a string in the language generated
by the given grammar. General methods for analyzing the order of running time of local
rule sets can be used to immediately give that these clauses can be run to completion in order n 3
time where n is the length of the input string. 4 We have implemented a compiler for converting
local rule sets to efficient inference procedures. This compiler can be used to automatically generate
a polynomial-time parser from the above inference rules.
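The exact clauses (14) and (15) are garbled in this copy, so the Python sketch below (ours) assumes
the natural rule shapes: P_A(CONS(c, x), x) for a production A -> c, and
P_B(x, y), P_C(y, z) |- P_A(x, z) for a production A -> B C, with the input string represented by
nested CONS terms ending in an assumed constant NIL.

def cons_list(symbols, nil=("NIL",)):
    t = nil
    for s in reversed(symbols):
        t = ("CONS", s, t)
    return t

def recognizes(unit_prods, binary_prods, start, symbols):
    x = cons_list(symbols)
    suffixes, t = [], x
    while True:
        suffixes.append(t)
        if t[0] != "CONS":
            break
        t = t[2]
    derived = {(a, s, s[2]) for s in suffixes if s[0] == "CONS"
               for (a, c) in unit_prods if s[1] == c}
    changed = True
    while changed:
        changed = False
        for (a, b, c) in binary_prods:
            for (b2, x1, y1) in list(derived):
                for (c2, y2, z2) in list(derived):
                    if b2 == b and c2 == c and y1 == y2 and (a, x1, z2) not in derived:
                        derived.add((a, x1, z2))
                        changed = True
    return (start, x, ("NIL",)) in derived

# Grammar S -> S S | 'a' recognizes the nonempty strings of a's.
assert recognizes([("S", "a")], [("S", "S", "S")], "S", ["a", "a", "a"])
assert not recognizes([("S", "a")], [("S", "S", "S")], "S", ["a", "b"])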
Proof: (Theorem 2) We now prove the above theorem for local inference relations from the
preceding theorem for superficial rule sets. By the superficial rule-set representation theorem
there must exist a superficial rule set R such that for any first order terms t 1 , ., t k we have that
if and only if INPUT(t 1 , ., t k ) ACCEPT where INPUT is a predicate symbol
and ACCEPT is a distinguished proposition symbol. Our goal now is to define a local rule set
R' such that INPUT(t 1 , ., t k ) ACCEPT if and only if Q(t 1 , ., t k ). For each predicate
arguments appearing in R let S' be a new predicate symbol of k+m arguments.
We define the rule set R' to be the rule set containing the following clauses.
Given the above definition we can easily show that if and only if
and only if Q(t 1 , ., t k ). It remains only to show that R' is local. Suppose that . We
must show that . Let t 1 , ., t k be the first k arguments in F. If F is Q(t 1 , ., t k ) then
either F is in S (in which case the result is trivial), or we must also have
so that it suffices to prove the result assuming that F is the application
of the primed version of a predicate appearing in R. Every derivation based on R' involves
formulas which all have the same first k arguments - in particular, given that S F we must
4. An analysis of the order of running time for decision procedures for local inference relations is given in [McAll-
ester, 1993].
| - R
|
| - R
x
.
- is in R
| - R
|
| - R S F
| - R
| - R
| - R
have that S' F where S' is the set of formulas in S that have t 1 , ., t k as their first k argu-
ments. Let S" and F' be the result of replacing each formula
in S' and F, respectively. Since S' F we must have
But since R is superficial every term in the derivation underlying Input(t 1 , .,
either appears in some t i or appears in S". This implies that every term in the derivation
appears in either S' or F, and thus that . (Theorem 2)
5. ANOTHER CHARACTERIZATION OF LOCALITY
In this section we give an alternate characterization of locality. This characterization of locality
plays an important role in both the definition of bounded-local rule sets given in [McAllester,
1993] and in the notion of inductively-local rule sets given in the next section.
Definition 7. A bounding set is a set Y of ground terms such that every subterm of a member
of Y is also a member of Y (i.e., a subterm-closed set of terms).
Definition 8. A ground atomic formula F is called a label formula of a bounding set Y if
every term in F is a member of Y.
Definition 9. For any bounding set Y, we define the inference relation |-R,Y to be such that
S |-R,Y F if and only if there exists a derivation of F from S such that every formula in the
derivation is a label formula of the term set Y.
Note that there is a derivation of F from S mentioning only terms appearing in S and F if and only if S |-R,Y F, where Y is the set of all terms appearing as sub-expressions
of F or of formulas in S. The inference relation |-R,Y can thus be used to give another
characterization of locality. Suppose that R is not local. In this case there must exist some S and F
such that S |-R F but not S |-R,Y F, where Y is the set of terms that appear in S and F.
However, since S |-R F, we must have S |-R,Y' F for some finite superset Y' of Y.
Consider "growing" the bounding set one term at a time, starting with the terms that appear in S
and F.
Definition 10. A one-step extension of a bounding set Y is a ground term a that is not in Y but
such that every proper subterm of a is a member of Y.
Definition 11. A feedback event for R consists of a finite set S of ground formulas, a ground
formula F, a bounding set Y containing all terms that appear in S and F, and a one-step extension
a of Y such that S |-R,Y∪{a} F, but not S |-R,Y F.
By abuse of notation, a feedback event will be written as S |-R,Y∪{a} F.
Lemma 3: [McAllester, 1993]: R is local if and only if there are no feedback events for R.
Proof: First note that if R has a feedback event S |-R,Y∪{a} F then R is not local - we have
S |-R F, but since not S |-R,Y F and Y contains every term appearing in S and F, there is no
derivation of F from S mentioning only terms appearing in S and F. Conversely suppose that R is not local. In that case
there is some S and F such that S |-R F but not S |-R,Y F for some finite bounding set Y containing
all terms that appear in S and F. By considering a maximal such Y (within the finite set of terms appearing in some derivation of F from S) one can show that a feedback event exists for R.
The concepts of bounded locality and inductive locality both involve the concept of a feedback
event. We can define bounded locality by first defining C R (S, Y) to be the set of formulas W such
that S |-R,Y W. R is bounded-local if it is local and there exists a natural number k such that
whenever S |-R,Y∪{a} F there exists a k-step or shorter derivation of F from C R (S, Y) such that
every term in the derivation is a member of Y ∪ {a}. As mentioned above, the set of the four
basic inference rules for equality is bounded-local - moreover, there exists a procedure for determining
if a given rule set is k-bounded-local for any particular k, and hence there exists a semi-decision
procedure which can verify locality for any bounded-local rule set [McAllester, 1993]. This
procedure is sufficiently efficient in practice to verify the locality of a large number of bounded-local
rule sets. But not all local rule sets are bounded-local. The next section introduces the
inductively-local rule sets, a new recursively-enumerable subclass of the local rule sets.
6. INDUCTIVE LOCALITY
To define inductive locality we first define the notion of a feedback template. A feedback template
represents a set of potential feedback events. We also define a backward chaining process which
generates feedback templates from a rule set R. We show that if there exists a feedback event for R
then such an event will be found by this backchaining process. Furthermore, we define an "induc-
tive" termination condition on the backchaining process and show that if the backchaining process
achieves inductive termination then R is local.
Throughout this section we let R be a fixed but arbitrary set of Horn clauses. The inference relation
|-R,Y will be written simply as |-Y, with the understanding that R is an implicit parameter of the
relation.
We define feedback templates as ground objects - they contain only ground first-order terms
and formulas. The process for generating feedback templates is defined as a ground process - it
only deals with ground instances of clauses in R. The ground process can be "lifted" using a lifting
transformation. Since lifting is largely mechanical for arbitrary ground procedures [McAll-
ester and Siskind, 1991], the lifting operation is only discussed very briefly here.
Definition 12. A feedback template consists of a set of ground atomic formulas S, a multiset
of ground atomic formulas G, a ground atomic formula F, a bounding set Y, and a one-step
extension a of Y such that F and every formula in S is a label formula of Y, every formula in G
is a label formula of Y ∪ {a} that contains a, and such that S ∪ G |-Y∪{a} F.
By abuse of notation a feedback template will be written as S, G |-Y∪{a} F. G is a multiset of
ground atomic formulas, each of which is a label formula of Y ∪ {a} containing a, and such that
the union of S and G allows the derivation of F relative to the bounding set Y ∪ {a}. A feedback
template is a potential feedback event in the sense that an extension of S that allows a derivation
of the formulas in G may result in a feedback event. The requirement that G be a multiset is
needed for the template-based induction lemma given below. Feedback templates for R can be
constructed by backward chaining.
Non-deterministic Procedure for Generating a Template for R:
1. Let W 1 ∧ ... ∧ W n -> F be a ground instance of a clause in R.
2. Let a be a term that appears in the clause but does not appear in the conclusion F and
does not appear as a proper subterm of any other term in the clause.
3. Let Y be a bounding set that does not contain a but does contain every term in the clause
other than a.
4. Let S be the set of premises W i which do not contain a.
5. Let G be the multiset of premises W i which do contain a.
6. Return the feedback template S, G |-Y∪{a} F.
We let T 0 [R] be the set of all feedback templates that can be derived from R by an application of
the above procedure. We leave it to the reader to verify that T 0 [R] is a set of feedback templates.
Now consider a feedback template S, G |-Y∪{a} F. Such a template is a statement that there exists a proof of F local to Y ∪ {a} from the set S of Y-local premises and the multiset G of (Y ∪ {a})-local premises. The
following procedure defines a method of constructing a new template by backchaining from some
(Y ∪ {a})-local premise of a given template.
Non-deterministic Procedure for Backchaining from S, G |-Y∪{a} F:
1. Let Q be a member of G.
2. Non-deterministically choose a ground instance W 1 ∧ ... ∧ W n -> Q of a clause in R that
has Q as its conclusion and such that each W i is a label formula of Y ∪ {a}.
3. Let S' be S plus all premises W i that do not contain a.
4. Let G' be G minus Q plus all premises W i that contain a.
5. Return the template S', G' |-Y∪{a} F.
Note that there need not be any clauses satisfying the condition in step 2 of the procedure in which
case there are no possible executions and no templates can be generated. In step 4 of the above
procedure, G' is constructed using multiset operations. For example, if the multiset G contains two
occurrences of Q, then "G minus Q" contains one occurrence of Q. We need G to be a multiset in
order to guarantee that certain backchaining operations commute in the proof of the induction
lemma below - in particular, we will use the fact that if a sequence of backchaining operations
removes an element Q of G at some point, then there exists a permutation of that sequence of backchaining
operations producing the same resulting template, but that removes Q first.
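The following sketch illustrates the multiset bookkeeping in a single ground backchaining step; the template and clause representations (frozensets, Counters, and the contains_a test) are assumptions made for this example, not the authors' implementation.

```python
# A minimal sketch of the multiset operations in one ground backchaining step.
# A template is (S, G, F) with S a frozenset and G a collections.Counter
# (multiset); `clause` is a ground instance (premises, Q) whose conclusion Q
# is the selected member of G.
from collections import Counter

def backchain_step(S, G, F, Q, clause, contains_a):
    premises, conclusion = clause
    assert conclusion == Q and G[Q] > 0
    new_S = set(S)
    new_G = Counter(G)
    new_G[Q] -= 1              # "G minus Q" removes one occurrence only
    if new_G[Q] == 0:
        del new_G[Q]
    for p in premises:
        if contains_a(p):      # premises mentioning the new term a stay in G
            new_G[p] += 1
        else:                  # the others become ordinary premises in S
            new_S.add(p)
    return frozenset(new_S), new_G, F
```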
For any set T of feedback templates we define B[T] to be T plus all templates that can be derived
from an element of T by an application of the above backchaining procedure. It is important to
keep in mind that by definition B[T] contains T. We let B n [T] be B[B[ . B[T]]] with n applications
of B.
Definition 13. A feedback template is called critical if G is empty.
If S, ∅ |-Y∪{a} F is a critical template then S |-Y∪{a} F. If in addition it is not the case that S |-Y F, then S |-Y∪{a} F is a
feedback event. By abuse of notation, a critical template S, ∅ |-Y∪{a} F such that not S |-Y F will
itself be called a feedback event. The following lemma provides the motivation for the definition
of a feedback template and the backchaining process.
Lemma 4: There exists a feedback event for R if and only if there exists a j such that B j [T 0 [R]]
contains a feedback event.
Proof: The reverse direction is trivial. To prove the forward direction, suppose that there exists
a feedback event for R. Let S |-Y∪{a} F be a minimal feedback event for R, i.e., a feedback
event for R which minimizes the length of the derivation of F from S under the bounding set Y ∪ {a}.
The fact that this feedback event is minimal implies that every formula in the derivation
other than F contains a. To see this suppose that Q is a formula in the derivation other than F
that does not involve a. We then have S |-Y∪{a} Q and S ∪ {Q} |-Y∪{a} F. One of these
two must be a feedback event - otherwise we would have S |-Y F. But if one of these is a
feedback event then it involves a smaller derivation than S |-Y∪{a} F, and this contradicts the
assumption that S |-Y∪{a} F is minimal. Since every formula other than F in the derivation
underlying S |-Y∪{a} F contains a, the template S, ∅ |-Y∪{a} F can be derived by backchaining
steps mirroring that derivation.
The above lemma implies that if the rule set is not local then backchaining will uncover a feed-back
event. However, we are primarily interested in those cases where the rule set is local. If the
backchaining process is to establish locality then we must find a termination condition which
guarantees locality. Let T be a set of feedback templates. In practice T can be taken to be
B j [T 0 [R]] for some j. We define a "self-justification" property for sets of feedback templates
and prove that if T is self-justifying then there is no n such that B n [T] contains a feedback event. In
defining the self-justification property we treat each template in T as an independent induction
hypothesis. If each template can be "justified" using the set of templates as induction hypotheses,
then the set T is self-justifying.
Definition 14. We write S, G |-T,Y F if T contains templates S 1 , G 1 |-Y∪{a} W 1 , ..., S k , G k |-Y∪{a} W k
where each S i is a subset of S, each G i is a subset of G, and S ∪ {W 1 , W 2 , ..., W k } |-Y F.
Definition 15. A set of templates T is said to justify a template S, G |-Y∪{a} F if there exists
a Q ∈ G such that for each template S', G' |-Y∪{a} F generated by one step of backchaining
from S, G |-Y∪{a} F by selecting Q at step 1 of the backchaining procedure we have S', G' |-T,Y F.
Definition 16. The set T is called self-justifying if every member of T is either critical or justified
by T, and T does not contain any feedback events.
Theorem 3: (Template-based Induction Theorem) If T is self-justifying then no set of the
form B n [T] contains a feedback event.
Proof: Consider a self-justifying set T of templates. We must show that for every critical template
S, ∅ |-Y∪{a} F in B n [T] we have that S |-Y F. The proof is by induction on n. Consider
a critical template S, ∅ |-Y∪{a} F in B n [T] and assume the theorem for all critical
templates in B j [T] for j less than n. The critical template S, ∅ |-Y∪{a} F must be derived by
backchaining from some template S', G' |-Y∪{a} F in T. Note that S' must be a subset of S.
If G' is empty then S' equals S and S |-Y F because T is self-justifying and thus cannot contain
any feedback events. If G' is not empty then, since T is self-justifying, we can choose a Q
in G' such that for each template S'', G'' |-Y∪{a} F derived from S', G' |-Y∪{a} F by a
single step of backchaining on Q we have S'', G'' |-T,Y F. We noted above that backchaining
operations commute (to ensure this we took G to be a multiset rather than a set). By the commutativity
of backchaining steps there exists a backchaining sequence from
S', G' |-Y∪{a} F to S, ∅ |-Y∪{a} F such that the first step in that sequence is a backchaining
step on Q. Let S*, G* |-Y∪{a} F be the template that results from this first backchaining
step from S', G' |-Y∪{a} F. Note that S* is a subset of S. We must now have S*, G* |-T,Y F.
By definition, T must contain templates S 1 , G 1 |-Y∪{a} W 1 , ..., S k , G k |-Y∪{a} W k
such that each S i is a subset of S*, each G i is a subset of G*, and S* ∪ {W 1 , W 2 , ..., W k } |-Y F.
Note that each S i is a subset of S. Since G i is a subset of G* there must be a sequence of fewer
than n backchaining steps that leads from S i , G i |-Y∪{a} W i to a critical template
S' i , ∅ |-Y∪{a} W i such that S' i is a subset of S. This critical template is a member of B j [T] for j
less than n and so by our induction hypothesis this template cannot be a feedback event; as a
consequence we have S' i |-Y W i and thus S |-Y W i . But if S |-Y W i for each W i , and
S* ∪ {W 1 , ..., W k } |-Y F with S* a subset of S, then S |-Y F, as required. (Template-based Induction Theorem)
The following corollary then follows from Theorem 3 along with Lemmas 3 and 4:
Corollary 1: If B n [T 0 [R]] is self-justifying, for some n, then R is local.
We now come to the main definition and theorem of this section.
Definition 17. A rule set R is called inductively-local if there exists some n such that B n [T 0 [R]]
is self-justifying.
Theorem 4: There exists a procedure which, given any finite set R of Horn clauses, will terminate
with a feedback event whenever R is not local, terminate with "success" whenever R is
inductively-local, and fail to terminate in cases where R is local but not inductively-local.
Proof: The procedure is derived by lifting the above ground procedure for computing
B n [T 0 [R]]. Lifting can be formalized as a mechanical operation on arbitrary nondeterministic
ground procedures [McAllester and Siskind, 1991]. The lifted procedure maintains a set of
possibly non-ground templates S, G |-Y∪{a} F. Each template must satisfy the conditions
that a occurs as a top-level argument in every atom in G, a does not occur at all in S or F, and
every term in S or F occurs in Y. A lifted template represents the set of ground templates that
can be derived by applying a substitution s to the lifted template. More specifically, for any set
of ground terms U let C(U) denote U plus all subterms of terms in U. A lifted feedback template
represents the set of all well-formed feedback templates of the form
s(S), s(G) |- s(F) with bounding set C(s(Y)) and one-step extension s(a). Note that not all expressions of this form need be well-formed
feedback templates, e.g., we might have that s(t) equals s(a) where t occurs in S.
However, if such an instance is a well-formed feedback template, then we say it
is covered by the lifted template.
To prove Theorem 4, we first show that there exists a finite set of lifted templates such that the
set of ground templates covered by this lifted set is exactly T 0 [R]. This is done by lifting the
procedure for generating T 0 [R], i.e., each step of the procedure can be made to nondeterministically
generate a lifted object (an expression possibly containing variables) in such a way that
a ground feedback event can be nondeterministically generated by the ground procedure if and
only if it is covered by some lifted feedback event that can be nondeterministically generated
by the lifted procedure. For example, the first step of the procedure for generating T 0 [R] simply
nondeterministically selects one of the (lifted) rules in R. Step 2 selects a unifiable subset
of top-level subterms of the premises of the clause. The most general unifier of this set is then
applied to the clause, and a is taken to be the result of applying that unifier to any one of the
selected terms. Steps 3, 4, 5, and 6 are then computed deterministically as specified.
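The lifting steps rely on computing most general unifiers. The following is a standard textbook unification sketch (not the authors' lifting implementation), with terms represented as nested tuples and variables as strings beginning with '?'.

```python
# A standard most-general-unifier sketch.  Terms are either variables
# (strings starting with '?') or tuples (function_symbol, arg1, ..., argk);
# constants are 1-tuples such as ('a',).

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return not is_var(t) and any(occurs(v, a, subst) for a in t[1:])

def unify(t1, t2, subst=None):
    """Return an extension of `subst` unifying t1 and t2, or None on failure."""
    subst = dict(subst or {})
    stack = [(t1, t2)]
    while stack:
        a, b = (walk(x, subst) for x in stack.pop())
        if a == b:
            continue
        if is_var(a):
            if occurs(a, b, subst):
                return None
            subst[a] = b
        elif is_var(b):
            stack.append((b, a))
        elif a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None
    return subst

print(unify(('f', '?x', ('g', '?y')), ('f', ('a',), ('g', ('b',)))))
```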
Now given a finite set T of lifted templates covering a possibly infinite ground set T', the procedure
for generating B[T'] can be modified to generate a finite set of lifted templates that covers
exactly B[T']. The lifted non-deterministic backchaining procedure starts with a lifted
feedback template and non-deterministically selects, in step 2, a rule whose conclusion is unifiable
with an atom in G. If the unification violates any part of the definition of a feedback template
then the execution fails; for example, the unification might identify a with a subterm of a term
in Y, and thus fail. Steps 3 and 4 are preceded by a step that nondeterministically selects a
subset of the top-level terms occurring in W 1 , ..., W n to identify with a. The most general unifier
of these terms and a is then applied to all expressions. Again, if any part of the definition
of a feedback template is violated, then the execution fails. Then steps 3, 4, and 5 are computed
as specified. We then get that B n [T 0 [R]] can be represented by a finite set of lifted templates.
Finally, Definition 15 can also be lifted so that we can speak of a lifted template being
justified by a finite set of lifted templates. Now we have that R is inductively local if and only
if there exists an n such that the finite set of lifted templates representing B n [T 0 [R]] is self-justifying.
For any given n this is decidable, and Theorem 4 follows.
We have implemented the resulting lifted procedure and used it to verify the locality of a variety
of rule sets, including for instance the rule set given as equation (5) above for reasoning about lattices.
This procedure is also useful for designing local rule sets - when applied to a nonlocal rule
set the procedure returns a feedback event that can often be used to design additional rules that
can be added to the rule set to give a local rule set computing the same inference relation.
7. LOCALITY IS UNDECIDABLE
We prove that locality is undecidable by a reduction from the halting problem.
Theorem 5: The problem of deciding the locality of a rule set R is undecidable.
Let M be a specification of a Turing machine. We first show that one can mechanically construct
a local rule set R with the property that the machine M halts if and only if there exists a
term t such that |-R H(t), where H is a monadic predicate symbol. Turing machine computations
can be represented by first-order terms, and the formula H(t) intuitively states that t is a
term representing a halting computation of M.
To prove this preliminary result we first construct a superficial rule set S such that M halts if
and only if there exists a term t such that INPUT(t) |-S H(t). The mechanical construction of
the superficial rule set S from the Turing machine M is fairly straightforward and is not given
here. We convert this superficial rule set S to a local rule set R as follows. For each predicate
Q of m arguments appearing in S let Q' be a new predicate symbol of m+1 arguments.
The rule set R will be constructed so that |-R Q'(t, s 1 , ..., s m ) if and only if INPUT(t) |-S Q(s 1 , ..., s m ).
We define the rule set R to be the rule set containing the premise-free clause Input'(x, x), the clause H'(x, x) -> H(x), and, for each clause in S, its primed version in which every formula Q(s 1 , ..., s m ) is replaced by Q'(x, s 1 , ..., s m ). By the design of
R we can easily show that |-R Q'(t, s 1 , ..., s m ) if and only if INPUT(t) |-S Q(s 1 , ..., s m ), and
so it directly follows that INPUT(t) |-S H(t) if and only if |-R H(t). So the Turing machine M
halts if and only if |-R H(t) for some term t, as desired. The proof that the rule set R is local
closely follows the proof that R' is local in the Local Rule Set Representation Theorem proven
above (Theorem 2).
We have now constructed a local rule set R with the property that M halts if and only if there
exists some term t such that |-R H(t). Now let R' be R plus the single clause H(x) -> HALTS,
where HALTS is a new proposition symbol. We claim that R' is local if and only if M does not
halt. First note that if M halts then we have |-R' HALTS; but any derivation of HALTS from the empty premise set must mention some term t (in a formula H(t)), while neither the empty premise set nor HALTS mentions any terms, so R' is not local.
Conversely, suppose that M does not halt. In this case we must show that R' is local. Suppose
that S |-R' F. We must show that there is a derivation of F from S mentioning only terms appearing in S and F. Suppose F is some formula other than HALTS. In
this case S |-R' F is equivalent to S |-R F. Since R is local there is such a derivation using only clauses of R, and this derivation also serves for R'.
Now suppose F is the formula HALTS. If HALTS is a member of S then the result is
trivial, so we assume that HALTS is not in S. Since S |-R' HALTS we must have S |-R' H(c) for
some term c. This implies that S |-R H(c) and thus, since R is local, there is a derivation of H(c) from S mentioning only terms appearing in S and H(c). To show
that HALTS has a derivation from S mentioning only terms appearing in S, it now suffices to show that c is mentioned in S. Since the rule set R was generated by the construction given above, we have
that every inference based on a clause in R is such that every formula in the inference has the
same first argument. This implies that S c |-R H(c), where S c is the set of all formulas in S that
have c as a first argument. We have assumed that M does not halt, and thus it is not the case that |-R H(c). Hence S c
must not be empty. Since every formula in S c mentions c, and S c is contained in S, we can
conclude that S must mention c - thus the derivation of H(c) followed by the rule H(c) -> HALTS mentions only terms appearing in S, and R' is local.
8. OPEN PROBLEMS
In closing we note some open problems. There are many known examples of rule sets which are
not local and yet the corresponding inference relation is polynomial-time decidable. In all such
cases we have studied there exists a conservative extension of the rule set which is local. We conjecture
that for every rule set R such that the inference relation |-R is polynomial-time decidable there exists a local conservative
extension of R. Our other problems are less precise. Can one find a "natural" rule set that
is local but not inductively local? A related question is whether there are useful machine-recognizable
subclasses of the local rule sets other than the classes of bounded-local and inductively-local
rule sets.
Acknowledgements
We would like to thank Franz Baader for his invaluable input and discussions. Robert Givan was
supported in part by National Science Foundation Awards No. 9977981-IIS and No. 0093100-IIS.
9. REFERENCES
--R
Automated Complexity Analysis Based on Ordered Resolution.
Query evaluation in recursive databases: bottom-up and top-down rec- onciled
Natural language based inference procedures applied to Schubert's steamroller.
How to define a linear order on finite models.
Relational queries computable in polynomial time.
Descriptive Complexity.
Constraint logic programming.
Algebraic Reconstruction of Types and Effects.
Estimating the efficiency of backtrack programs.
Complexity of finitely presented algebras.
Consistency in networks of relations.
Lifting trans- formations
A Knowledge Representation System for Mathe- matics
Automatic recognition of tractability in inference relations.
A note on the expressive power of Prolog.
Search techniques.
An algorithm for reasoning about equality.
Type and Effect Systems.
Principles of Database and Knowledge-Base Systems
Constraint Satisfaction in Logic Programming.
The complexity of relational query languages.
--TR
Relational queries computable in polynomial time
Constraint logic programming
Principles of database and knowledge-base systems, Vol. I
Constraint satisfaction in logic programming
Bottom-up beats top-down for datalog
Ontic: a knowledge representation system for mathematics
Query evaluation in recursive databases: bottom-up and top-down reconciled
Algebraic reconstruction of types and effects
Automatic recognition of tractability in inference relations
An algorithm for reasoning about equality
Automated complexity analysis based on ordered resolution
Complexity Analysis Based on Ordered Resolution
The complexity of relational query languages (Extended Abstract)
Complexity of finitely presented algebras
Lifting Transformations | automated reasoning;descriptive complexity theory;decision procedures |
566390 | Boolean satisfiability with transitivity constraints. | We consider a variant of the Boolean satisfiability problem where a subset E of the propositional variables appearing in formula Fsat encode a symmetric, transitive, binary relation over N elements. Each of these relational variables, ei,j, for 1 ≤ i < j ≤ N, expresses whether or not the relation holds between elements i and j. The task is to either find a satisfying assignment to Fsat that also satisfies all transitivity constraints over the relational variables (e.g., e1,2 ∧ e2,3 ⇒ e1,3), or to prove that no such assignment exists. Solving this satisfiability problem is the final and most difficult step in our decision procedure for a logic of equality with uninterpreted functions. This procedure forms the core of our tool for verifying pipelined microprocessors. To use a conventional Boolean satisfiability checker, we augment the set of clauses expressing Fsat with clauses expressing the transitivity constraints. We consider methods to reduce the number of such clauses based on the sparse structure of the relational variables. To use Ordered Binary Decision Diagrams (OBDDs), we show that for some sets E, the OBDD representation of the transitivity constraints has exponential size for all possible variable orderings. By considering only those relational variables that occur in the OBDD representation of Fsat, our experiments show that we can readily construct an OBDD representation of the relevant transitivity constraints and thus solve the constrained satisfiability problem. | Introduction
Consider the following variant of the Boolean satisfiability problem. We are given a Boolean
formula F sat over a set of variables V. A subset E ⊆ V symbolically encodes a binary relation
over N elements that is reflexive, symmetric, and transitive. Each of these relational variables,
e i,j for 1 ≤ i < j ≤ N, expresses whether or not the relation holds between elements i and
j. Typically, E will be "sparse," containing many fewer than the N(N-1)/2 possible variables.
Note that when e i,j is not in E for some value of i and of j, this does not imply that the relation does
not hold between elements i and j. It simply indicates that F sat does not directly depend on the
relation between elements i and j.
A transitivity constraint is a formula of the form
e [i 1 ,i 2 ] ∧ e [i 2 ,i 3 ] ∧ ... ∧ e [i k-1 ,i k ] ⇒ e [i 1 ,i k ]   (1)
where e [i,j] denotes e i,j when i < j and e j,i otherwise. Let Trans(E) denote the set of
all transitivity constraints that can be formed from the relational variables. Our task is to find
an assignment μ: V -> {0, 1} that satisfies F sat , as well as every constraint in Trans(E). Goel,
et al. [GSZAS98] have shown this problem is NP-hard, even when F sat is given as an Ordered
Binary Decision Diagram (OBDD) [Bry86]. Normally, Boolean satisfiability is trivial given an
OBDD representation of a formula.
We are motivated to solve this problem as part of a tool for verifying pipelined microprocessors
[VB99]. Our tool abstracts the operation of the datapath as a set of uninterpreted functions and
uninterpreted predicates operating on symbolic data. We prove that a pipelined processor has
behavior matching that of an unpipelined reference model using the symbolic flushing technique
developed by Burch and Dill [BD94]. The major computational task is to decide the validity
of a formula Fver in a logic of equality with uninterpreted functions [BGV99a, BGV99b]. Our
decision procedure transforms Fver first by replacing all function application terms with terms
over a set of domain variables {v i | 1 ≤ i ≤ N}. Similarly, all predicate applications are replaced
by formulas over a set of newly-generated propositional variables. The result is a formula F* ver
containing equations of the form v i = v j . Each of these equations is
then encoded by introducing a relational variable e i,j , similar to the method proposed by Goel, et
al. [GSZAS98]. The result of the translation is a propositional formula enc(F* ver ) expressing the
verification condition over both the relational variables and the propositional variables appearing
in F* ver . Let F sat denote ¬enc(F* ver ), the complement of the formula expressing the translated
verification condition. To capture the transitivity of equality, e.g., that v i = v j and v j = v k implies v i = v k ,
we have transitivity constraints of the form e [i,j] ∧ e [j,k] ⇒ e [i,k] . Finding a satisfying assignment
to F sat that also satisfies the transitivity constraints will give us a counterexample to the original
verification condition Fver . On the other hand, if we can prove that there are no such assignments,
then we have proved that Fver is universally valid.
We consider three methods to generate a Boolean formula F trans that encodes the transitivity
constraints. The direct method enumerates the set of chord-free cycles in the undirected graph
having an edge (i, j) for each relational variable e i,j in E. This method avoids introducing additional
relational variables but can lead to a formula of exponential size. The dense method uses
relational variables e i,j for all possible values of i and j such that 1 ≤ i < j ≤ N. We can then
axiomatize transitivity by forming constraints of the form e [i,j] ∧ e [j,k] ⇒ e [i,k] for all distinct values
of i, j, and k. This will yield a formula that is cubic in N. The sparse method augments E with
additional relational variables to form a set of variables E+, such that the resulting graph is chordal
[Rose70]. We then only require transitivity constraints of the form e [i,j] ∧ e [j,k] ⇒ e [i,k] for
variables in E+. The sparse method is guaranteed to generate a smaller formula than the
dense method.
To use a conventional Boolean Satisfiability (SAT) procedure to solve our constrained satisfiability
problem, we run the checker over a set of clauses encoding both F sat and F trans . The latest
version of the FGRASP SAT checker [M99] was able to complete all of our benchmarks, although
the run times increase significantly when transitivity constraints are enforced.
When using Ordered Binary Decision Diagrams to evaluate satisfiability, we could generate
OBDD representations of F sat and F trans and use the APPLY algorithm to compute an OBDD
representation of their conjunction. From this OBDD, finding satisfying solutions would be trivial.
We show that this approach will not be feasible in general, because the OBDD representation of
F trans can be intractable. That is, for some sets of relational variables, the OBDD representation
of the transitivity constraint formula F trans will be of exponential size regardless of the variable
ordering. The NP-completeness result of Goel, et al. shows that the OBDD representation of
F trans may be of exponential size using the ordering previously selected for representing F sat
as an OBDD. This leaves open the possibility that there could be some other variable ordering
that would yield efficient OBDD representations of both F sat and F trans . Our result shows that
transitivity constraints can be intrinsically intractable to represent with OBDDs, independent of
the structure of F sat .
We present experimental results on the complexity of constructing OBDDs for the transitivity
constraints that arise in actual microprocessor verification. Our results show that the OBDDs can
indeed be quite large. We consider two techniques to avoid constructing the OBDD representation
of all transitivity constraints. The first of these, proposed by Goel, et al. [GSZAS98], generates
implicants (cubes) of F sat and rejects those that violate the transitivity constraints. Although this
method suffices for small benchmarks, we find that the number of implicants generated for our
larger benchmarks grows unacceptably large. The second method determines which relational
variables actually occur in the OBDD representation of F sat . We can then apply one of our three
techniques for encoding the transitivity constraints in order to generate a Boolean formula for the
transitivity constraints over this reduced set of relational variables. The OBDD representation of
this formula is generally tractable, even for the larger benchmarks.
2 Benchmarks
Our benchmarks [VB99] are based on applying our verifier to a set of high-level microprocessor
designs. Each is based on the DLX RISC processor described by Hennessy and Patterson [HP96]:
1×DLX-C: is a single-issue, five-stage pipeline capable of fetching up to one new instruction
every clock cycle. It implements six instruction types: register-register, register-immediate,
Circuit Domain Variables Propositional Variables Equations
Buggy 2×DLX-CC min. 22 56 89
Buggy 2×DLX-CC avg. 25 69 124
Buggy 2×DLX-CC max.
Table
1: Microprocessor Verification Benchmarks. Benchmarks with suffix "t" were modified
to require enforcing transitivity.
load, store, branch, and jump. The pipeline stages are: Fetch, Decode, Execute, Memory,
and Write-Back. An interlock causes the instruction following a load to stall one cycle if
it requires the loaded result. Branches and jumps are predicted as not-taken, with up to 3
instructions squashed when there is a misprediction. This example is comparable to the DLX
example first verified by Burch and Dill [BD94].
2×DLX-CA: has a complete first pipeline, capable of executing the six instruction types, and
a second pipeline capable of executing arithmetic instructions. Between 0 and 2 new
instructions are issued on each cycle, depending on their types and source registers, as well as
the types and destination registers of the preceding instructions. This example is comparable
to one verified by Burch [Bur96].
2×DLX-CC: has two complete pipelines, i.e., each can execute any of the six instruction types.
There are four load interlocks-between a load in Execute in either pipeline and an instruction
in Decode in either pipeline. On each cycle, between 0 and 2 instructions can be issued.
In all of these examples, the domain variables v i , with 1 ≤ i ≤ N, in F* ver encode register
identifiers. As described in [BGV99a, BGV99b], we can encode the symbolic terms representing
program data and addresses as distinct values, avoiding the need to have equations among these
variables. Equations arise in modeling the read and write operations of the register file, the bypass
logic implementing data forwarding, the load interlocks, and the pipeline issue logic.
Our original processor benchmarks can be verified without enforcing any transitivity con-
straints. The unconstrained formula F sat is unsatisfiable in every case. We are nonetheless motivated
to study the problem of constrained satisfiability for two reasons. First, other processor
designs might rely on transitivity, e.g., due to more sophisticated issue logic. Second, to aid designers
in debugging their pipelines, it is essential that we generate counterexamples that satisfy
all transitivity constraints. Otherwise the designer will be unable to determine whether the counterexample
represents a true bug or a weakness of our verifier.
To create more challenging benchmarks, we generated variants of the circuits that require enforcing
transitivity in the verification. For example, the normal forwarding logic in the Execute
stage of 1\ThetaDLX-C must determine whether to forward the result from the Memory stage instruction
as either one or both operand(s) for the Execute stage instruction. It does this by comparing the
two source registers ESrc1 and ESrc2 of the instruction in the Execute stage to the destination
register MDest of the instruction in the Memory stage. In the modified circuit, we changed the bypass
condition ESrc1=MDest to be ESrc1=MDest ∨ (ESrc1=ESrc2 ∧ ESrc2=MDest).
Given transitivity, these two expressions are equivalent. For each pipeline, we introduced four
such modifications to the forwarding logic, with different combinations of source and destination
registers. These modified circuits are named 1×DLX-C-t, 2×DLX-CA-t, and 2×DLX-CC-t.
To study the problem of counterexample generation for buggy circuits, we generated 105 variants
of 2×DLX-CC, each containing a small modification to the control logic. Of these, 5 were
found to be functionally correct, e.g., because the modification caused the processor to stall un-
necessarily, yielding a total of 100 benchmark circuits for counterexample generation.
Table
1 gives some statistics for the benchmarks. The number of domain variables N ranges
between 13 and 25, while the number of equations ranges between 27 and 143. The verification
condition formulas F* ver also contain between 42 and 77 propositional variables expressing the
operation of the control logic. These variables plus the relational variables comprise the set of
variables V in the propositional formula F sat . The circuits with modifications that require enforcing
transitivity yield formulas containing up to 19 additional equations. The final three lines
summarize the complexity of the 100 buggy variants of 2×DLX-CC. We apply a number of simplifications
during the generation of formula F sat , and hence small changes in the circuit can yield
significant variations in the formula complexity.
3 Graph Formulation
Our definition of Trans(E) (Equation 1) places no restrictions on the length or form of the transitivity
constraints, and hence there can be an infinite number. We show that we can construct a
graph representation of the relational variables and identify a reduced set of transitivity constraints
that, when satisfied, guarantees that all possible transitivity constraints are satisfied. By introducing
more relational variables, we can alter this graph structure, further reducing the number of
transitivity constraints that must be considered.
For variable set E, define the undirected graph G(E) as containing a vertex i for 1 ≤ i ≤ N and
an edge (i, j) for each variable e i,j in E. For an assignment μ of Boolean values to the relational
variables, define the labeled graph G(E, μ) to be the graph G(E) with each edge (i, j) labeled as a
1-edge when μ(e i,j ) = 1 and as a 0-edge when μ(e i,j ) = 0.
A path is a sequence of vertices [i 1 , i 2 , ..., i k ] having edges between successive elements.
That is, each element i p of the sequence (1 ≤ p ≤ k) is a vertex, while each
successive pair of elements (i p , i p+1 ) forms an edge. We consider each such edge
to also be part of the path. A cycle is a path of the form [i 1 , i 2 , ..., i k , i 1 ].
Proposition 1 An assignment μ to the variables in E violates transitivity if and only if some cycle
in G(E, μ) contains exactly one 0-edge.
Proof: If. Suppose there is such a cycle. Letting i 1 be the vertex at one end of the 0-edge, we
can trace around the cycle, giving a sequence of vertices [i 1 , i 2 , ..., i k ], where i k is the vertex at
the other end of the 0-edge. The assignment has μ(e [i j ,i j+1 ] ) = 1 for 1 ≤ j < k and μ(e [i 1 ,i k ] ) = 0,
and hence it violates Equation 1.
Only If. Suppose the assignment violates a transitivity constraint given by Equation 1. Then
we can construct a cycle [i 1 , i 2 , ..., i k , i 1 ] of vertices such that only edge (i 1 , i k ) is a 0-edge.
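Proposition 1 suggests a direct way to test whether a given assignment violates transitivity: merge the endpoints of every 1-edge and then look for a 0-edge whose endpoints fall in the same 1-connected component. The sketch below assumes the assignment is given as a dictionary over vertex pairs; it is an illustration, not the verifier's implementation.

```python
# A minimal sketch of the check suggested by Proposition 1: an assignment
# violates transitivity exactly when some 0-edge joins two vertices that are
# already connected by a path of 1-edges.  `assignment` maps (i, j) pairs
# with i < j to 0 or 1.

def violates_transitivity(n, assignment):
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    # Union the endpoints of every 1-edge.
    for (i, j), value in assignment.items():
        if value == 1:
            parent[find(i)] = find(j)
    # A 0-edge inside a 1-connected component closes a cycle with one 0-edge.
    return any(value == 0 and find(i) == find(j)
               for (i, j), value in assignment.items())

print(violates_transitivity(3, {(1, 2): 1, (2, 3): 1, (1, 3): 0}))  # True
```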
A path [i 1 , i 2 , ..., i k ] is said to be acyclic when i p ≠ i q for all p ≠ q. A cycle [i 1 , i 2 , ..., i k , i 1 ]
is said to be simple when its prefix [i 1 , i 2 , ..., i k ] is acyclic.
Proposition 2 An assignment μ to the variables in E violates transitivity if and only if some simple
cycle in G(E, μ) contains exactly one 0-edge.
Proof: The "if" portion of this proof is covered by Proposition 1. The "only if" portion is
proved by induction on the number of variables in the antecedent of the transitivity constraint
(Equation 1.) That is, assume a transitivity constraint containing k variables in the antecedent is
violated and that all other violated constraints have at least k variables in their antecedents. If there
are no values p and q such that 1 ≤ p < q ≤ k and i p = i q , then the cycle [i 1 , i 2 , ..., i k , i 1 ] is simple and we are done.
If such values p and q exist, then we can form the transitivity constraint
e [i 1 ,i 2 ] ∧ ... ∧ e [i p-1 ,i p ] ∧ e [i q ,i q+1 ] ∧ ... ∧ e [i k-1 ,i k ] ⇒ e [i 1 ,i k ] , obtained by removing the portion of the cycle between positions p and q from the antecedent.
This transitivity constraint contains fewer than k variables in the antecedent, but it is also violated.
This contradicts our assumption that there is no violated transitivity constraint with fewer than k
variables in the antecedent. 2
Define a chord of a simple cycle to be an edge that connects two vertices that are not adjacent
in the cycle. More precisely, for a simple cycle [i 1 , i 2 , ..., i k , i 1 ], a chord is an edge (i p , i q ) in
G(E) such that 1 ≤ p < q ≤ k, where q > p+1 and (p, q) ≠ (1, k). A cycle is said to be
chord-free if it is simple and has no chords.
Proposition 3 An assignment μ to the variables in E violates transitivity if and only if some chord-free
cycle in G(E, μ) contains exactly one 0-edge.
Proof: The "if" portion of this proof is covered by Proposition 1. The "only if" portion is
proved by induction on the number of variables in the antecedent of the transitivity constraint
(Equation 1.) Assume a transitivity constraint with k variables is violated, and that no transitivity
constraint with fewer variables in the antecedent is violated. If there are no values of p and q such
that there is a variable e [i p ,i q ] in E forming a chord of the cycle [i 1 , i 2 , ..., i k , i 1 ], then the corresponding
cycle is chord-free. If such values of p and q exist, then consider the two cases illustrated in Figure
1, where 0-edges are shown as dashed lines, 1-edges are shown as solid lines, and the wavy lines
Figure
1: Case Analysis for Proposition 3. 0-Edges are shown as dashed lines. When a cycle representing
a transitivity violation contains a chord, we can find a smaller cycle that also represents
a transitivity violation.
represent sequences of 1-edges. Case 1: Edge (i p , i q ) is a 0-edge (shown on the left). Then the
transitivity constraint e [i p ,i p+1 ] ∧ ... ∧ e [i q-1 ,i q ] ⇒ e [i p ,i q ]
is violated and has fewer than k variables in its antecedent. Case 2: Edge (i p , i q ) is a 1-edge (shown
on the right). Then the transitivity constraint e [i 1 ,i 2 ] ∧ ... ∧ e [i p-1 ,i p ] ∧ e [i p ,i q ] ∧ e [i q ,i q+1 ] ∧ ... ∧ e [i k-1 ,i k ] ⇒ e [i 1 ,i k ]
is violated and has fewer than k variables. Both cases contradict our assumption that there is no
violated transitivity constraint with fewer than k variables in the antecedent. 2
Each length k cycle [i 1 , i 2 , ..., i k , i 1 ] yields a set of k transitivity constraints, given by the following clauses. Each
clause is derived by expressing Equation 1 as a disjunction, with each edge of the cycle in turn serving as the consequent:
¬e [i 1 ,i 2 ] ∨ ¬e [i 2 ,i 3 ] ∨ ... ∨ ¬e [i k-1 ,i k ] ∨ e [i 1 ,i k ]   (2)
together with the analogous clause for each of the other k-1 edges of the cycle.
For a set of relational variables E , we define F trans (E) to be the conjunction of all transitivity
constraints for all chord-free cycles in the graph G(E).
Theorem 1 An assignment to the relational variables E will satisfy all of the transitivity constraints
given by Equation 1 if and only if it satisfies F trans (E).
This theorem follows directly from Proposition 3 and the encoding given by Equation 2.
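A small sketch of the clause generation of Equation 2 for a single cycle follows; the mapping var(i, j) from an unordered vertex pair to a CNF variable index is an assumption made only for this example.

```python
# A minimal sketch of Equation 2: each edge of a length-k cycle in turn serves
# as the (positive) consequent, with the other k-1 edges negated.  Literals
# are signed variable indices; `var(i, j)` is an assumed pair-to-index map.

def cycle_clauses(cycle, var):
    """cycle = [i1, i2, ..., ik]; returns the k clauses enforcing transitivity."""
    k = len(cycle)
    edges = [(cycle[p], cycle[(p + 1) % k]) for p in range(k)]
    clauses = []
    for consequent in range(k):
        clause = [var(*edges[consequent])]
        clause += [-var(*edges[p]) for p in range(k) if p != consequent]
        clauses.append(clause)
    return clauses

# Example with a fixed variable numbering for a triangle.
index = {(1, 2): 1, (2, 3): 2, (1, 3): 3}
def var(i, j):
    return index[(min(i, j), max(i, j))]
print(cycle_clauses([1, 2, 3], var))   # [[1,-2,-3], [2,-3,-1], [3,-1,-2]]
```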
3.1 Enumerating Chord-Free Cycles
To enumerate the chord-free cycles of a graph, we exploit the following properties. An acyclic path
[i 1 , ..., i k ] is said to have a chord when there is an edge (i p , i q ) in G(E) with 1 ≤ p < q ≤ k, where q > p+1 and (p, q) ≠ (1, k).
We classify a chord-free path as terminal when (i 1 , i k )
is in G(E), and as extensible otherwise.
Proposition 4 A path [i 1 , ..., i k ] is chord-free and terminal if and only if the cycle [i 1 , ..., i k , i 1 ]
is chord-free.
This follows by noting that the conditions imposed on a chord-free path are identical to those for a
chord-free cycle, except that the latter includes a closing edge (i 1 , i k ).
A proper prefix of path [i 1 , ..., i k ] is a path [i 1 , ..., i j ] with j < k.
Proposition 5 Every proper prefix of a chord-free path is chord-free and extensible.
Clearly, any prefix of a chord-free path is also chord-free. If some prefix [i 1 , ..., i j ]
were terminal, then any attempt to add an edge (i j , i j+1 ) would yield either a simple cycle
(when i j+1 = i 1 ) or a path having (i 1 , i j )
as a chord.
Given these properties, we can enumerate the set of all chord-free paths by breadth-first expansion.
As we enumerate these paths, we also generate C, the set of all chord-free cycles. Define P k
to be the set of all extensible, chord-free paths having k vertices, for 1 ≤ k ≤ N.
Initially we have P 1 containing the single-vertex paths and C = ∅. Given set P k , we generate set P k+1 and
add some cycles of length k+1 to C. For each path [i 1 , ..., i k ] in P k , we consider the path
[i 1 , ..., i k , i k+1 ] for each edge (i k , i k+1 ) in G(E). When i k+1 = i p for some 1 ≤ p ≤ k, we
classify the path as cyclic. When there is an edge (i p , i k+1 ) in G(E) for some 1 < p < k, we
classify the path as having a chord. When there is an edge (i 1 , i k+1 ), we add the cycle
[i 1 , ..., i k , i k+1 , i 1 ] to C. Otherwise, we add the path to P k+1 .
After generating all of these paths, we can use the set C to generate the set of all chord-free
cycles. For each terminal, chord-free cycle having k vertices, there will be 2k members of C -
each of the k edges of the cycle can serve as the closing edge, and a cycle can traverse the closing
edge in either direction. To generate the set of clauses given by Equation 2, we simply need to
choose one element of C for each closing edge, e.g., by considering only cycles [i 1 , ..., i k , i 1 ] for
which i 2 < i k .
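The following sketch is a direct transcription of the breadth-first enumeration just described; the edge-set representation is an assumption made only for this example.

```python
# A minimal sketch of the breadth-first enumeration described above.  `edges`
# is a set of frozensets {i, j}.  Mirroring the set C in the text, each
# chord-free cycle is recorded 2k times (once per closing edge and direction).

def chord_free_cycles(vertices, edges):
    def adjacent(i, j):
        return frozenset((i, j)) in edges
    paths = [[v] for v in vertices]          # extensible chord-free paths
    cycles = []
    for _ in range(len(vertices) - 1):
        new_paths = []
        for path in paths:
            for j in vertices:
                if not adjacent(path[-1], j) or j in path:
                    continue                  # not an extension, or cyclic
                if any(adjacent(p, j) for p in path[1:-1]):
                    continue                  # extension would have a chord
                if len(path) > 1 and adjacent(path[0], j):
                    cycles.append(path + [j]) # terminal: records a cycle
                else:
                    new_paths.append(path + [j])
        paths = new_paths
    return cycles

square = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (1, 4)]}
print(chord_free_cycles([1, 2, 3, 4], square))   # 8 entries for the one 4-cycle
```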
As Figure 2 indicates, there can be an exponential number of chord-free cycles in a graph.
In particular, this figure illustrates a family of graphs with 3n vertices. Consider the cycles
passing through the n diamond-shaped faces as well as the edge along the bottom. For each
diamond-shaped face F i , a cycle can pass through either the upper vertex or the lower vertex. Thus
there are 2^n such cycles. In addition, the edges forming the perimeter of each face F i create a
chord-free cycle, giving a total of 2^n + n cycles.
The columns labeled "Direct" in Table 2 show results for enumerating the chord-free cycles for
our benchmarks. For each correct microprocessor, we have two graphs: one for which transitivity
constraints played no role in the verification, and one (indicated with a "t" at the end of the name)
modified to require enforcing transitivity constraints. We summarize the results for the transitivity
Figure
2: Class of Graphs with Many Chord-Free Cycles. For a graph with n diamond-shaped
faces, there are 2^n + n chord-free cycles.
Circuit Direct Dense Sparse
Edges Cycles Clauses Edges Cycles Clauses Edges Cycles Clauses
1×DLX-C-t 37 95 348 78 286 858 42 68 204
2×DLX-CC-t 143 2,136 8,364 300 2,300 6,900 193 858 2,574
Full min. 89 1,446 6,360 231 1,540 4,620 132 430 1,290
Buggy avg. 124 2,562 10,270 300 2,300 6,900 182 750 2,244
Table
2: Cycles in Original and Augmented Benchmark Graphs. Results are given for the three
different methods of encoding transitivity constraints.
constraints in our 100 buggy variants of 2×DLX-CC in terms of the minimum, the average, and
the maximum of each measurement. We also show results for five synthetic benchmarks consisting
of n × n planar meshes M n , with n ranging from 4 to 8, where the mesh for n = 6 is illustrated in
Figure
3. For all of the circuit benchmarks, the number of cycles, although large, appears to be
manageable. Moreover, the cycles have at most 4 edges. The synthetic benchmarks, on the other
hand, demonstrate the exponential growth predicted as worst case behavior. The number of cycles
grows quickly as the meshes grow larger. Furthermore, the cycles can be much longer, causing the
number of clauses to grow even more rapidly.
3.2 Adding More Relational Variables
Enumerating the transitivity constraints based on the variables in E runs the risk of generating a
Boolean formula of exponential size. We can guarantee polynomial growth by considering a larger
set of relational variables. In general, let E' be some set of relational variables such that E ⊆ E',
and let F trans (E') be the transitivity constraint formula generated by enumerating the chord-free
cycles in the graph G(E').
Theorem 2 If E is the set of relational variables in F sat and E ⊆ E', then F sat ∧ F trans (E) is satisfiable if and only if F sat ∧ F trans (E') is satisfiable.
We introduce a series of lemmas to prove this theorem. For a propositional formula F over a
set of variables A and an assignment μ: A -> {0, 1}, define the valuation of F under μ, denoted
[F] μ , to be the result of evaluating formula F according to assignment μ. We first prove that we
can extend any assignment over a set of relational variables to one over a superset of these variables
yielding identical valuations for both transitivity constraint formulas.
Lemma 1 For any sets of relational variables E 1 and E 2 such that E 1 ⊆ E 2 , and any assignment
μ 1 : E 1 -> {0, 1} such that [F trans (E 1 )] μ 1 = 1, there is an assignment μ 2 : E 2 -> {0, 1}
that agrees with μ 1 on E 1 and satisfies [F trans (E 2 )] μ 2 = 1.
Proof: We consider the case where E 2 = E 1 ∪ {e i,j }. The general statement of the proposition
then holds by induction on |E 2 - E 1 |. Define assignment μ 2 to agree with μ 1 on E 1 , and to assign
μ 2 (e i,j ) = 1 when graph G(E 1 , μ 1 ) has a path of 1-edges from node i to node j, and μ 2 (e i,j ) = 0 otherwise.
We consider two cases:
1. If μ 2 (e i,j ) = 0, then there is no path of 1-edges between nodes i and j in G(E 1 , μ 1 ), and hence
any cycle in G(E 2 , μ 2 ) through edge (i, j) must contain a 0-edge other than
e i,j . Hence adding this edge does not introduce any transitivity violations.
2. If μ 2 (e i,j ) = 1, then there must be some path P 1 of 1-edges between nodes i and j in
G(E 1 , μ 1 ). In order for the introduction of 1-edge e i,j to create a transitivity violation, there
must also be some path P 2 between nodes i and j in G(E 1 , μ 1 ) containing exactly one 0-edge.
But then we could concatenate paths P 1 and P 2 to form a cycle in G(E 1 , μ 1 ) containing
exactly one 0-edge, implying that [F trans (E 1 )] μ 1 = 0, a contradiction. We conclude therefore that adding
1-edge e i,j does not introduce any transitivity violations.
Lemma 2 For any assignment μ 2 to the variables in E 2 such that [F trans (E 2 )] μ 2 = 1, the restriction μ 1 of μ 2 to the variables in E 1 ⊆ E 2 satisfies [F trans (E 1 )] μ 1 = 1.
Proof: We note that any cycle in G(E 1 , μ 1 ) must also be present in G(E 2 , μ 2 ) and have the same
edge labeling. Thus, if G(E 2 , μ 2 ) has no cycle with a single 0-edge, then neither does G(E 1 , μ 1 ). We now return to the proof of Theorem 2.
Proof: Suppose that F sat ∧ F trans (E) is satisfiable, i.e., there is some assignment μ such
that [F sat ] μ = [F trans (E)] μ = 1. Then by Lemma 1 we can find an assignment μ' over the variables of E' (and the other variables of F sat ) such that
[F trans (E')] μ' = 1. Furthermore, since the construction of μ' by Lemma 1 preserves the values
assigned to all variables in E, and these are the only relational variables occurring in F sat , we can
conclude that [F sat ] μ' = 1.
Suppose on the other hand that F sat ∧ F trans (E') is satisfiable, i.e., there is some assignment
μ' such that [F sat ] μ' = [F trans (E')] μ' = 1. Then by Lemma 2 the restriction μ of μ' to the variables of E (and the other variables of F sat ) satisfies [F trans (E)] μ = [F sat ] μ = 1, and
hence F sat ∧ F trans (E) is satisfiable. 2
Our goal then is to add as few relational variables as possible in order to reduce the size of
the transitivity formula. We will continue to use our path enumeration algorithm to generate the
transitivity formula.
3.3 Dense Enumeration
For the dense enumeration method, let E N denote the set of variables e i,j for all values of i and
j such that 1 ≤ i < j ≤ N. Graph G(E N ) is a complete, undirected graph. In this graph,
any cycle of length greater than three must have a chord. Hence our algorithm will enumerate
transitivity constraints of the form e [i,j] ∧ e [j,k] ⇒ e [i,k] , for all distinct values of i, j, and k.
The graph has N(N-1)/2 edges, yielding a total of N(N-1)(N-2)/2 transitivity constraints.
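A sketch of the dense axiomatization follows; var(i, j) is again an assumed mapping from an unordered vertex pair to a CNF variable index.

```python
# A minimal sketch of the dense method: three constraints per unordered triple
# of vertices (one per choice of consequent edge), N(N-1)(N-2)/2 clauses in all.
from itertools import combinations

def dense_transitivity_clauses(n, var):
    def v(a, b):
        return var(min(a, b), max(a, b))   # var is assumed to index unordered pairs
    clauses = []
    for i, j, k in combinations(range(1, n + 1), 3):
        for a, b, c in ((i, j, k), (j, i, k), (i, k, j)):
            # e[a,b] and e[b,c] imply e[a,c]
            clauses.append([-v(a, b), -v(b, c), v(a, c)])
    return clauses
```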
The columns labeled "Dense" in Table 2 show the complexity of this method for the benchmark
circuits. For the smaller graphs 1×DLX-C, 1×DLX-C-t, M 4 and M 5 , this method yields more
clauses than direct enumeration of the cycles in the original graph. For the larger graphs, however,
it yields fewer clauses. The advantage of the dense method is most evident for the mesh graphs,
where the cubic complexity is far superior to exponential.
3.4 Sparse Enumeration
We can improve on both of these methods by exploiting the sparse structure of G(E). Like the
dense method, we want to introduce additional relational variables to give a set of variables E+
such that the resulting graph G(E+) is chordal. That is, the graph has the property
that every cycle of length greater than three has a chord.
Chordal graphs have been studied extensively in the context of sparse Gaussian elimination. In
fact, the problem of finding a minimum set of additional variables to add to our set is identical to
the problem of finding an elimination ordering for Gaussian elimination that minimizes the amount
of fill-in. Although this problem is NP-complete [Yan81], there are good heuristic solutions. In
Satisfiable? Secs. Satisfiable? Secs.
Full min. Y
Buggy avg. Y 125 Y 1,517 2.3
Table
3: Performance of FGRASP on Benchmark Circuits. Results are given both without and
with transitivity constraints.
particular, our implementation proceeds as a series of elimination steps. On each step, we remove
some vertex i from the graph. For every pair of distinct, uneliminated vertices j and k such that
the graph contains edges (i, j) and (i, k), we add an edge (j, k) if it does not already exist. The
original graph plus all of the added edges then forms a chordal graph. To choose which vertex to
eliminate on a given step, our implementation uses the simple heuristic of choosing the vertex with
minimum degree. If more than one vertex has minimum degree, we choose one that minimizes the
number of new edges added.
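The following sketch implements the elimination procedure just described using the minimum-degree heuristic (the secondary tie-breaking rule based on fill-in is omitted for brevity); the graph representation is an assumption made only for this example.

```python
# A minimal sketch of min-degree elimination: repeatedly eliminate a vertex of
# minimum degree, adding fill edges between its remaining neighbors.  The
# returned edge set contains the original edges and is chordal.

def chordal_fill(vertices, edges):
    adj = {v: set() for v in vertices}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    remaining = set(vertices)
    chordal_edges = {frozenset(e) for e in edges}
    while remaining:
        v = min(remaining, key=lambda x: len(adj[x] & remaining))
        neighbors = list(adj[v] & remaining)
        for a in range(len(neighbors)):
            for b in range(a + 1, len(neighbors)):
                i, j = neighbors[a], neighbors[b]
                if j not in adj[i]:          # fill edge
                    adj[i].add(j)
                    adj[j].add(i)
                    chordal_edges.add(frozenset((i, j)))
        remaining.remove(v)
    return chordal_edges
```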
The columns in Table 2 labeled "Sparse" show the effect of making the benchmark graphs
chordal by this method. Observe that this method gives superior results to either of the other two
methods. In our implementation we have therefore used the sparse method to generate all of the
transitivity constraint formulas.
4 SAT-Based Decision Procedures
Most Boolean satisfiability (SAT) checkers take as input a formula expressed in clausal form.
Each clause is a set of literals, where a literal is either a variable or its complement. A clause
denotes the disjunction of its literals. The task of the checker is to either find an assignment to the
variables that satisfies all of the clauses or to determine that no such assignment exists. We can
solve the constrained satisfiability problem using a conventional SAT checker by generating a set
of clauses C trans representing F trans and a set of clauses C sat representing the formula F sat .
We then run the checker on the combined clause set C sat ∪ C trans to find satisfying solutions to F sat ∧ F trans .
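Since SAT checkers such as FGRASP read clausal form, one way to combine the two clause sets is simply to emit them together in the standard DIMACS CNF format; the sketch below assumes both sets use a common variable numbering, and the checker itself is then run separately on the resulting file.

```python
# A minimal sketch of handing the combined clause set to a conventional SAT
# checker via the standard DIMACS CNF format.
def write_dimacs(path, num_vars, clauses):
    with open(path, "w") as f:
        f.write("p cnf %d %d\n" % (num_vars, len(clauses)))
        for clause in clauses:
            f.write(" ".join(str(lit) for lit in clause) + " 0\n")

# Example (c_sat and c_trans are assumed lists of clauses over one numbering):
# write_dimacs("combined.cnf", num_vars, c_sat + c_trans)
```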
In experimenting with a number of Boolean satisfiability checkers, we have found that FGRASP
[MS99] has the best overall performance. The most recent version can be directed to periodically
restart the search using a randomly-generated variable assignment [M99]. This is the first SAT
checker we have tested that can complete all of our benchmarks. All of our experiments were
conducted on a 336 MHz Sun UltraSPARC II with 1.2GB of primary memory.
As indicated by Table 3, we ran FGRASP on clause sets C sat and C trans ∪ C sat , i.e., both without
and with transitivity constraints. For benchmarks 1×DLX-C, 2×DLX-CA, and 2×DLX-CC,
the formula F sat is unsatisfiable. As can be seen, including transitivity constraints increases the
run time significantly. For benchmarks 1×DLX-C-t, 2×DLX-CA-t, and 2×DLX-CC-t, the formula
F sat is satisfiable, but only because transitivity is not enforced. When we add the clauses
for F trans , the formula becomes unsatisfiable. For the buggy circuits, the run times for C sat range
from under 1 second to over 36 minutes. The run times for C trans ∪ C sat range from less than
one second to over 12 hours. In some cases, adding transitivity constraints actually decreased the
time (by as much as a factor of 5), but in most cases the CPU time increased (by as much as a
factor of 69). On average (using the geometric mean) adding transitivity constraints increased the
time by a factor of 2.3. We therefore conclude that satisfiability checking with transitivity
constraints is more difficult than conventional satisfiability checking, but the added complexity is
not overwhelming.
5 OBDD-Based Decision Procedures
A simple-minded approach to solving satisfiability with transitivity constraints using OBDDs
would be to generate separate OBDD representations of F trans and F sat . We could then use
the APPLY operation to generate an OBDD for F trans - F sat , and then either find a satisfying
assignment or determine that the function is unsatisfiable. We show that for some sets of relational
variables E , the OBDD representation of F trans (E) can be too large to represent and manipulate. In
our experiments, we use the CUDD OBDD package with dynamic variable reordering by sifting.
5.1 Lower Bound on the OBDD Representation of F trans (E)
We prove that for some sets E , the OBDD representation of F trans (E) may be of exponential
size for all possible variable orderings. As mentioned earlier, the NP-completeness result proved
by Goel, et al. [GSZAS98] has implications for the complexity of representing F trans (E) as an
OBDD. They showed that given an OBDD G sat representing formula F sat , the task of finding
a satisfying assignment of F sat that also satisfies the transitivity constraints in Trans(E) is NP-complete
in the size of G sat . By this, assuming P 6= NP , we can infer that the OBDD representation
of F trans (E) may be of exponential size when using the same variable ordering as is used in
G sat . Our result extends this lower bound to arbitrary variable orderings and is independent of the
vs. NP problem.
Let M n denote a planar mesh consisting of a square array of n × n vertices. For example,
Figure
3 shows the graph for n = 6. Being a planar graph, the edges partition the plane into faces.
As shown in Figure 3 we label these F i,j for 1 ≤ i, j ≤ n - 1. There are a total of (n - 1)^2
such faces. One can see that the set of edges forming the border of each face forms a chord-free
cycle of M n . As shown in Table 2, many other cycles are also chord-free, e.g., the perimeter of
any rectangular region having height and width greater than 1, but we will consider only the cycles
Figure
3: Mesh Graph M 6 .
corresponding to single faces.
Define E n×n to be a set of relational variables corresponding to the edges in M n . F trans (E n×n )
is then an encoding of the transitivity constraints for these variables.
Theorem 3 Any OBDD representation of F trans (E n×n ) must have a number of
vertices exponential in n.
To prove this theorem, consider any ordering of the variables representing the edges in M n .
Let A denote those in the first half of the ordering, and B denote those in the second half. We can
then classify each face according to the four edges forming its border:
A: All are in A.
B: All are in B.
C: Some are in A, while others are in B. These are called "split" faces.
Observe that we cannot have a type A face adjacent to a type B face, since their shared edge cannot
be in both A and B. Therefore there must be split faces separating any region of type A faces from
any region of type B faces.
For example, Figure 4 shows three possible partitionings of the edges of M 6 and the resulting
classification of the faces. If we let a, b, and c denote the number of faces of each respective type,
we see that we always have c ≥ n - 1. In particular, a minimum value for c is achieved
when the partitioning of the edges corresponds to a partitioning of the graph into a region of type
A faces and a region of type B faces, each having nearly equal size, with the split faces forming
the boundary between the two regions.
A
A A A A
A
A A A A
A
A A A
A
A A
A
A
A
A C
A
A A
A
A
A
A
A
A A
Figure
4: Partitioning Edges into Sets A (solid) and B (dashed). Each face can then be classified
as type A (all solid), B (all dashed), or C (mixed).
Lemma 3 For any partitioning of the edges of mesh graph M n into equally-sized sets A and B,
there must be at least (n \Gamma 3)=2 split faces.
Note that this lower bound is somewhat weak-it seems clear that we must have c - n \Gamma 1.
However, this weaker bound will suffice to prove an exponential lower bound on the OBDD size.
Proof: Our proof is an adaptation of a proof by Leighton [Lei92, Theorem 1.21] that M n has
a bisection bandwidth of at least n. That is, one would have to remove at least n edges to split the
graph into two parts of equal size.
Observe that M n has n 2 vertices and 2n(n \Gamma 1) edges. These edges are split so that n(n \Gamma 1)
are in A and n(n \Gamma 1) are in B.
Let M D
n denote the planar dual of M n . That is, it contains a vertex u i;j for each face F i;j of
M n , and edges between pairs of vertices such that the corresponding faces in M n have a common
edge. In fact, one can readily see that this graph is isomorphic to M n\Gamma1 .
Partition the vertices of M D
n into sets U a , U b , and U c according to the types of their corresponding
faces. Let a, b, and c denote the number of elements in each of these sets. Each face of M n has
four bordering edges, and each edge is the border of at most two faces. Thus, as an upper bound
on a, we must have 4a - 2n(n \Gamma 1), giving a - n(n \Gamma 1)=2, and similarly for b. In addition, since
a face of type A cannot be adjacent in M n to one of type B, no vertex in U a can be adjacent in M D
to one in U b .
Consider the complete, directed, bipartite graph having as edges the set (U a \Theta U b ) [ (U b \Theta U a ),
i.e., a total of 2ab edges. Given the bounds: a
1)=2, the minimum value of 2ab is achieved when either
giving a lower bound:
We can embed this bipartite graph in M D
n by forming a path from vertex u i;j to vertex
where either u i;j 2 U a and u vice-versa. By convention, we will use the path that first
follows vertical edges to u i 0 ;j and then follows horizontal edges to u i 0 ;j 0 . We must have at least
one vertex in U c along each such path, and therefore removing the vertices in U c would cut all 2ab
paths.
For each vertex u i;j 2 U c , we can bound the total number of paths passing through it by
separately considering paths that enter from the bottom, the top, the left, and the right. For those
entering from the bottom, there are at most vertices and i(n \Gamma 1) destination
vertices, giving at most i(n paths. This quantity is maximized for
giving an upper bound of (n \Gamma 1) 3 =4. A similar argument shows that there are at most (n \Gamma 1) 3 =4
paths entering from the top of any vertex. For the paths entering from the left, there are at most
(j vertices and (n \Gamma destinations, giving at most (j \Gamma 1)(n
paths. This quantity is maximized when giving an upper bound of (n \Gamma 1) 3 =4. This
bound also holds for those paths entering from the right. Thus, removing a single vertex would cut
at most (n \Gamma 1) 3 paths.
Combining the lower bound on the number of paths 2ab, the upper bound on the number of
paths cut by removing a single vertex, and the fact that we are removing c vertices, we have:
We can rewrite
for all values of n, we have:
set of faces is said to be edge independent when no two members of the set share an edge.
Lemma 4 For any partitioning of the edges of mesh graph M n into equally-sized sets A and B,
there must be an edge-independent set of split faces containing at least (n \Gamma 3)=4 elements.
Proof: Classify the parity of face F i;j as "even" when even, and as "odd" otherwise.
Observe that no two faces of the same parity can have a common edge. Divide the set of split
faces into two subsets: those with even parity and those with odd. Both of these subsets are edge
independent, and one of them must have at least 1/2 of the elements of the set of all split faces. 2
We can now complete the proof of Theorem 3 Proof: Suppose there is an edge-independent set
of k split faces. For each split face, choose one edge in A and one edge in B bordering that face.
For each value ~y 2 f0; 1g k , define assignment ff ~y (respectively, fi ~y ), to the variables representing
edges in A (resp., B) as follows. For an edge e that is not part of any of the k split faces, define
ff ~y 0). For an edge e that is part of a split face, but it was not one of the ones
chosen specially, let ff 1). For an edge e that is the chosen variable in face i,
let ff ~y This will give us an assignment ff ~y \Delta fi ~y to all of the variables
that evaluates to 1. That is, for each independent, split face F i , we will have two 1-edges when
cycles in the graph will have at least two 0-edges.
On the other hand, for any ~y; ~z 2 f0; 1g k such that ~y 6= ~z the assignment ff ~y \Delta fi ~z will cause an
evaluation to 0, because for any face i where y i 6= z i , all but one edge will be assigned value 1.
Thus, the set of assignments fff ~y j~y 2 f0; 1g k g forms an OBDD fooling set, as defined in [Bry91],
implying that the OBDD must have at least 2 k - 2 (n\Gamma3)=4
We have seen that adding relational variables can reduce the number of cycles and therefore
simplify the transitivity constraint formula. This raises the question of how adding relational variables
affects the BDD representation of the transitivity constraints. Unfortunately, the exponential
lower bound still holds.
Corollary 1 For any set of relational variables E such that E n\Thetan ' E , any OBDD representation
of F trans (E) must
vertices.
The extra edges in E introduce complications, because they create cycles containing edges
from different faces. As a result, our lower bound is weaker.
Define a set of faces as vertex independent if no two members share a vertex.
Lemma 5 For any partitioning of the edges of mesh graph M n into equal-sized sets A and B,
there must be a vertex-independent set of split faces containing at least (n \Gamma 3)=8 elements.
Proof: Partition the set of split faces into four sets: EE, EO, OE, and OO, where face F i;j is
assigned to a set according to the values of i and j:
EE: Both i and j are even.
EO: i is even and j is odd.
OE: i is odd and j is even.
OO: Both i and j are odd.
Each of these sets is vertex independent. At least one of the sets must contain at least 1=4 of
the elements. Since there are at least (n \Gamma 3)=2 split faces, one of the sets must contain at least
vertex-independent split faces. 2
We can now prove Corollary 1.
Proof: For any ordering of the variables in E , partition them into two sets A and B such
that those in A come before those in B, and such the number of variables that are in E n\Thetan are
equally split between A and B. Suppose there is a vertex-independent set of k split faces. For
each value ~y 2 f0; 1g k , we define assignments ff ~y to the variables in A and fi ~y to the variables
in B. These assignments are defined as they are in the proof of Theorem 3 with the addition that
each variable e i;j in n\Thetan is assigned value 0. Consider the set of assignments ff ~y \Delta fi ~z for
all values ~y; ~z 2 f0; 1g k . The only cycles in G(E; ff ~y \Delta fi ~z ) that can have less than two 0-edges
will be those corresponding to the perimeters of split faces. As in the proof of Theorem 3, the set
fff ~y j~y 2 f0; 1g k g forms an OBDD fooling set, as defined in [Bry91], implying that the OBDD
must have at least 2 k - 2 (n\Gamma3)=8
Our lower bounds are fairly weak, but this is more a reflection of the difficulty of proving
lower bounds. We have found in practice that the OBDD representations of the transitivity constraint
functions arising from benchmarks tend to be large relative to those encountered during the
evaluation of F sat . For example, although the OBDD representation of F trans
1\ThetaDLX-C-t is just 2,692 nodes (a function over 42 variables), we have been unable to construct the
OBDD representations of this function for either 2\ThetaDLX-CA-t (178 variables) or 2\ThetaDLX-CC-t
(193 variables) despite running for over 24 hours.
5.2 Enumerating and Eliminating Violations
Goel, et al. [GSZAS98] proposed a method that generates implicants (cubes) of the function F sat
from its OBDD representation. Each implicant is examined and discarded if it violates a transitivity
constraint. In our experiments, we have found this approach works well for the normal, correctly-
designed pipelines (i.e., circuits 1\ThetaDLX-C, 2\ThetaDLX-CA, and 2\ThetaDLX-CC) since the formula F sat
is unsatisfiable and hence has no implicants. For all 100 of our buggy circuits, the first implicant
generated contained no transitivity violation and hence was a valid counterexample.
For circuits that do require enforcing transitivity constraints, we have found this approach im-
practical. For example, in verifying 1\ThetaDLX-C-t by this means, we generated 253,216 implicants,
requiring a total of 35 seconds of CPU time (vs. 0.2 seconds for 1\ThetaDLX-C). For benchmarks
2\ThetaDLX-CA-t and 2\ThetaDLX-CC-t, our program ran for over 24 hours without having generated all
of the implicants. By contrast, circuits 2\ThetaDLX-CA and 2\ThetaDLX-CC can be verified in 11 and 29
seconds, respectively. Our implementation could be improved by making sure that we generate
only implicants that are irredundant and prime. In general, however, we believe that a verifier that
generates individual implicants will not be very robust. The complex control logic for a pipeline
can lead to formulas F sat containing very large numbers of implicants, even when transitivity plays
only a minor role in the correctness of the design.
5.3 Enforcing a Reduced Set of Transitivity Constraints
One advantage of OBDDs over other representations of Boolean functions is that we can readily
determine the true support of the function, i.e., the set of variables on which the function depends.
This leads to a strategy of computing an OBDD representation of F sat and intersecting its support
with E to give a set -
of relational variables that could potentially lead to transitivity violations.
We then augment these variables to make the graph chordal, yielding a set of variables -
Circuit Verts. Direct Dense Sparse
Edges Cycles Clauses Edges Cycles Clauses Edges Cycles Clauses
1\ThetaDLX-C-t 9
Reduced min. 3
Buggy avg. 12 17 19 75 73 303 910 21 14 42
2\ThetaDLX-CC max. 19 52 378 1,512 171 969 2,907 68 140 420
Table
4: Graphs for Reduced Transitivity Constraints. Results are given for the three different
methods of encoding transitivity constraints based on the variables in the true support of F sat .
Circuit OBDD Nodes CPU
Reduced min. 20 1 20 7
Buggy avg. 3,173 1,483 25,057 107
2\ThetaDLX-CC max. 15,784 93,937 438,870 2,466
Table
5: OBDD-based Verification. Transitivity constraints were generated for a reduced set of
variables -
generate an OBDD representation of F trans ( -
it is
satisfiable, generate a counterexample.
Table
4 shows the complexity of the graphs generated by this method for our benchmark cir-
cuits. Comparing these with the full graphs shown in Table 2, we see that we typically reduce the
number of relational vertices (i.e., edges) by a factor of 3 for the benchmarks modified to require
transitivity and by an even greater factor for the buggy circuit benchmarks. The resulting graphs
are also very sparse. For example, we can see that both the direct and sparse methods of encoding
transitivity constraints greatly outperform the dense method.
Table
5 shows the complexity of applying the OBDD-based method to all of our bench-
marks. The original circuits 1\ThetaDLX-C, 2\ThetaDLX-CA, and 2\ThetaDLX-CC yielded formulas F sat
that were unsatisfiable, and hence no transitivity constraints were required. The 3 modified circuits
1\ThetaDLX-C-t, 2\ThetaDLX-CA-t, and 2\ThetaDLX-CC-t are more interesting. The reduction in the
number of relational variables makes it feasible to generate an OBDD representation of the transitivity
constraints. Compared to benchmarks 1\ThetaDLX-C, 2\ThetaDLX-CA, and 2\ThetaDLX-CC, we see
there is a significant, although tolerable, increase in the computational requirement to verify the
modified circuits. This can be attributed to both the more complex control logic and to the need to
apply the transitivity constraints.
For the 100 buggy variants of 2\ThetaDLX-CC, F sat depends on up to 52 relational variables,
with an average of 17. This yielded OBDDs for F trans ( -
ranging up to 93,937 nodes, with
an average of 1,483. The OBDDs for F sat - F trans ( -
ranged up to 438,870 nodes (average
25,057), showing that adding transitivity constraints does significantly increase the complexity of
the OBDD representation. However, this is just one OBDD at the end of a sequence of OBDD
operations. In the worst case, imposing transitivity constraints increased the total CPU time by a
factor of 2, but on average it only increased by 2%. The memory required to generate F sat ranged
from 9.8 to 50.9 MB (average 15.5), but even in the worst case the total memory requirement
increased by only 2%.
6 Conclusion
By formulating a graphical interpretation of the relational variables, we have shown that we can
generate a set of clauses expressing the transitivity constraints that exploits the sparse structure
of the relation. Adding relational variables to make the graph chordal eliminates the theoretical
possibility of there being an exponential number of clauses and also works well in practice.
A conventional SAT checker can then solve constrained satisfiability problems, although the run
times increase significantly compared to unconstrained satisfiability. Our best results were obtained
using OBDDs. By considering only the relational variables in the true support of F sat , we
can enforce transitivity constraints with only a small increase in CPU time.
--R
"Graph-based algorithms for Boolean function manipulation"
"On the complexity of VLSI implementations and graph representations of Boolean functions with application to integer multiplication,"
"Exploiting positive equality in a logic of equality with uninterpreted functions,"
"Processor verification using efficient reductions of the logic of uninterpreted functions to propositional logic,"
"Automated verification of pipelined microprocessor control,"
"Techniques for verifying superscalar microprocessors,"
"BDD based procedures for a theory of equality with uninterpreted functions,"
Computer Architecture: A Quantitative Ap- proach
Introduction to Parallel Algorithms and Architectures: Arrays
"GRASP: A search algorithm for propositional satisfiability,"
"The impact of branching heuristics in propositional satisfiability algorithms,"
"Triangulated graphs and the elimination process,"
"Superscalar processor verification using efficient reductions of the logic of equality with uninterpreted functions,"
"Computing the minimum fill-in is NP-complete,"
--TR
Graph-based algorithms for Boolean function manipulation
On the Complexity of VLSI Implementations and Graph Representations of Boolean Functions with Application to Integer Multiplication
Introduction to parallel algorithms and architectures
Techniques for verifying superscalar microprocessors
Computer architecture (2nd ed.)
GRASP
Processor verification using efficient reductions of the logic of uninterpreted functions to propositional logic
Effective use of boolean satisfiability procedures in the formal verification of superscalar and VLIW
Chaff
The Impact of Branching Heuristics in Propositional Satisfiability Algorithms
Superscalar Processor Verification Using Efficient Reductions of the Logic of Equality with Uninterpreted Functions to Propositional Logic
BDD Based Procedures for a Theory of Equality with Uninterpreted Functions
Automatic verification of Pipelined Microprocessor Control
Exploiting Positive Equality in a Logic of Equality with Uninterpreted Functions
--CTR
Miroslav N. Velev, Efficient formal verification of pipelined processors with instruction queues, Proceedings of the 14th ACM Great Lakes symposium on VLSI, April 26-28, 2004, Boston, MA, USA
Miroslav N. Velev, Using positive equality to prove liveness for pipelined microprocessors, Proceedings of the 2004 conference on Asia South Pacific design automation: electronic design and solution fair, p.316-321, January 27-30, 2004, Yokohama, Japan
Miroslav N. Velev, Using Abstraction for Efficient Formal Verification of Pipelined Processors with Value Prediction, Proceedings of the 7th International Symposium on Quality Electronic Design, p.51-56, March 27-29, 2006
Miroslav N. Velev, Efficient translation of boolean formulas to CNF in formal verification of microprocessors, Proceedings of the 2004 conference on Asia South Pacific design automation: electronic design and solution fair, p.310-315, January 27-30, 2004, Yokohama, Japan
Miroslav N. Velev , Randal E. Bryant, Effective use of boolean satisfiability procedures in the formal verification of superscalar and VLIW microprocessors, Journal of Symbolic Computation, v.35 n.2, p.73-106, February
Miroslav N. Velev, Exploiting Signal Unobservability for Efficient Translation to CNF in Formal Verification of Microprocessors, Proceedings of the conference on Design, automation and test in Europe, p.10266, February 16-20, 2004
Robert Nieuwenhuis , Albert Oliveras , Cesare Tinelli, Solving SAT and SAT Modulo Theories: From an abstract Davis--Putnam--Logemann--Loveland procedure to DPLL(T), Journal of the ACM (JACM), v.53 n.6, p.937-977, November 2006 | boolean satisfiability;formal verification;decision procedures |
566454 | Tradeoffs in power-efficient issue queue design. | A major consumer of microprocessor power is the issue queue. Several microprocessors, including the Alpha 21264 and POWER4TM, use a compacting latch-based issue queue design which has the advantage of simplicity of design and verification. The disadvantage of this structure, however, is its high power dissipation.In this paper, we explore different issue queue power optimization techniques that vary not only in their performance and power characteristics, but in how much they deviate from the baseline implementation. By developing and comparing techniques that build incrementally on the baseline design, as well as those that achieve higher power savings through a more significant redesign effort, we quantify the extra benefit the higher design cost techniques provide over their more straightforward counterparts. | INTRODUCTION
There are many complex tradeoffs that must be made to achieve
the goal of a power-efficient, yet high performance design. The
first is the amount of performance that must be traded off for lower
power. A second consideration that has received less attention is
the amount of redesign and verification effort that must be put in
to achieve a given amount of power savings. Time-to-market constraints
often dictate that straightforward modifications of existing
designs take precedence over radical approaches that require significant
redesign and verification efforts. For the latter, there must
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission and/or a fee.
ISLPED'02, August 12-14, 2002, Monterey, California, USA.
be a clear and demonstrable power savings with minimal negative
consequences to justify the extra effort.
One microprocessor structure that has received considerable attention
is the issue queue. The issue queue holds decoded and
renamed instructions until they issue out-of-order to appropriate
functional units. Several superscalar processors such as the Alpha
21264 [11] and POWER4 [10], implement a latch-based issue
queue in which each entry consists of a series of latches [1, 4].
The queue is compacting in that the outputs of each entry feed-forward
to the next entry to enable the filling of "holes" created
by instruction issue. New instructions are always added to the tail
position of the queue. In this manner, the queue maintains an oldest
to youngest program order within the queue. This simplifies
the implementation of an oldest-first issue priority scheme. Additional
important advantages of this implementation are that it is
highly modular and can use scannable latches, which simplifies issue
queue design and verification.
However, the high price of this approach is its power consump-
tion: for instance, the integer queue on the Alpha 21264 is the highest
power consumer on the chip [11]. Similarly, the issue queue is
one of the highest power-density regions within a POWER4-class
processor core [1]. For this reason, several techniques for reducing
the issue queue power have been proposed [2, 3, 5]. However,
these prior efforts have exclusively focused on approaches that require
considerable re-design and verification effort as well as design
risk. What has been thus far lacking is a quantitative comparison
of a range of issue queue power optimization techniques
that vary in their design effort/risk, in addition to their power savings
and performance cost. Our analysis results in several possible
issue queue design choices that are appropriate depending on the
redesign and verification effort that the design team can afford to
put in to achieve a lower-power design.
2. NON-COMPACTING LATCH-BASED
ISSUE QUEUE
Figure
1 illustrates the general principle of a latch-based issue
queue design. Each bit of each entry consists of a latch and a multiplexer
as well as comparators (not shown in this figure) for the
source operand IDs. Each entry feeds-forward to the next queue
entry, with the multiplexer used to either hold the current latch contents
or load the latch with the contents of the next entry. The design
shown in Figure 1 loads dispatched instructions into the upper-most
unused queue entries. "Holes" created when instructions issue
are filled via a compaction operation in which entries are shifted
downwards. By dispatching entries into the tail of the queue and
compacting the queue on issue, an oldest to youngest program order
is maintained in the queue at all times, with the oldest instruction
lying in the bottom of the queue shown in Figure 1. Thus, a
simple position-based selection mechanism like that described in
[9], in which priority moves from "lower" to "upper" entries, can
be used to implement an oldest-first selection policy in which issue
priority is by instruction age. Although compaction operation may
be necessary for a simpler selection mechanism, it may be a major
source of issue queue power consumption in latch-based designs.
Each time an instruction is issued, all entries are shifted down to
fill the hole, resulting in all of these latches being clocked. Because
lower entries have issue priority over upper entries, instructions often
issue from the lower positions, resulting in a large number of
shifts and therefore, a large amount of power dissipation.
To eliminate the power-hungry compaction operation, we can
make the issue queue non-compacting [7]. In a non-compacting
queue, holes that result from an instruction issue from a particular
entry are not immediately filled. Rather, these holes remain
until a new entry is dispatched into the queue. At this point, the
holes are filled in priority order from bottom to top. However, in a
non-compacting queue the oldest to youngest priority order of the
instructions is lost. Thus, the use of a simple position-based selection
mechanism like that described in [9] will not give priority to
older instructions as in the compacting design.
HOLD
2:1 MUX
TO ENTRY (N+1)
DETAIL OF QUEUE ENTRY
QUEUE COMPACTION
Figure
1: Latch-based issue queue design with compaction.
To solve the problem of lost instruction ordering while maintaining
much of the power-efficiency advantages of a non-compacting
queue, the reorder buffer (ROB) numbers (sequence numbers) that
typically tag each dispatched instruction can be used to identify
oldest to youngest order. However, a problem arises with this scheme
due to the circular nature of the ROB which may be implemented
as a RAM with head and tail pointers. For example, assume for
simplicity an 8-entry ROB where the oldest instruction lies in location
111 and the youngest in 000. When an instruction commits,
the head pointer of the ROB is decremented to point to the next en-
try. Similarly, the tail pointer is decremented when an instruction
is dispatched. With such an implementation, the oldest instruction
may no longer lie in location 111 in our working example, but in
any location. In fact, the tail pointer may wrap around back to entry
111 such that newer entries (those nearest to the tail) may occupy a
higher-numbered ROB entry than older entries [6]. When this oc-
curs, the oldest-first selection scheme will no longer work properly.
This problem can be solved by adding an extra high-order sequence
number bit which we call the sorting bit that is kept in the
issue queue. As instructions are dispatched, they are allocated a
sequence number consisting of their ROB entry number appended
to a sorting bit of 0. These sequence numbers are stored with the
entry in the issue queue. Whenever the ROB tail pointer wraps
around to entry 111 in our example, all sorting bits are flash set
to 1 in the issue queue. Newly dispatched instructions, however,
including the one assigned to ROB entry 111, continue to receive
a sorting bit of 0 in their sequence numbers. These steps, which
are summarized in Figure 2, guarantee that these newly dispatched
instructions will have a lower sequence number than prior (older)
instructions already residing in the queue.
Once the sorting bit adjustment is in place, older instructions can
properly be selected from the ready instructions as follows. The
most significant bits of the sequence numbers of all ready instructions
are ORed together. If the result of the OR is 1, all ready
instructions whose most significant bits are 0 will be removed from
consideration. In the next step, the second most significant bit of
the sequence numbers of all ready instructions that are still under
consideration are ORed together. If the result of the OR is 1, all
ready instructions still under consideration whose second most significant
bits are 0, will be removed from consideration. The Nth
step is the same as step 2, except the least significant bit of the
sequence number is used. At the end of this step, all ready instructions
will have been removed from consideration except for
the oldest.
However, this OR-based arbitration mechanism requires a final
linear O(N) chain from highest order to lowest order bit. This significantly
increases the delay of the selection logic compared to
the selection logic described by Palacharla [9], after 4 bits with a
entry queue. Note that for a processor that has up to 128 instructions
(ROB of 128 entries) in flight, the full sequence number
consists of 7 bits and a sorting bit. The lack of full age ordering
with 4-bit sequence numbers results in a CPI degradation (shown
in Section 5), although this is an improvement over the CPI degradation
incurred with no age ordering (position-based selection with
non-compaction).
tail
head pointer
pointer
start point
(all 1's)
ROB
sorting bit00sorting bit update criteria
when tail pointer hits the start
point, all sorting bits in the queue
are flash set to 1
Figure
2: Mechanism for updating the sorting bit in the issue
queue.
3. CAM/RAM-BASED ISSUE QUEUES
In this section, we describe issue queue power-saving optimizations
that require redesigning the baseline latch-based queue as a
CAM/RAM structure in which the source operand numbers are
placed in the CAM structure and the remaining instruction information
is placed in the RAM structure. The number of entries corresponds
to the size of the issue queue. The CAM/RAM structure
is arguably more complex in terms of design and verification time
and it does not support compaction. However, because of the lower
power dissipation of CAM/RAM logic relative to random logic, the
CAM/RAM-based issue queue approach has the potential to reduce
the average power dissipation of the queue.
While potentially consuming less power than a latch-based so-
lution, a CAM/RAM-based issue queue still offers opportunities
for further power reductions. CAM and RAM structures require
precharging and discharging internal high capacitance lines and
nodes for every operation. The CAM needs to perform tag matching
operations every cycle. This involves driving and clearing high
capacitance tag-lines, and also precharging and discharging high
capacitance matchline nodes every cycle. Similarly, the RAM also
needs to charge and discharge its bitlines for every read operation.
In the following sub-sections we discuss our approaches to reduce
the power of a CAM/RAM-based issue queue.
3.1 Dynamic Adaptation of the Issue Queue
While fine-grain clock gating is suitable for latch-based issue
queues, the shared resources (bitlines, wordlines, taglines, precharge
logic, sense amps, etc.) of CAM/RAM-based designs make clock
gating less effective than for latch-based designs. However, CAM/RAM-
based designs are very amenable to dynamic adaptation of the issue
queue to match application requirements. As described in [2],
the size of the issue queue needed to maintain close to peak performance
varies from application to application and even among
the different phases of a single application. Thus, an issue queue
that adapts to these different program phases has the potential to
significantly improve power efficiency with little impact on CPI
performance.
In this paper, we implement the basic approach proposed in [2].
In this scheme, the issue queue is broken down into multi-entry
chunks, each of which can be disabled on-the-fly at runtime. A
hardware-based monitor measures issue queue activity over a cycle
window period by counting the number of valid entries in the queue,
after which the appropriate control signals disable and enable queue
chunks [2].
3.2 Banked Issue Queue
Banking is a common practice for RAM-based structures (e.g.,
caches) that can both reduce the delay of the RAM and its power
dissipation. CAM-based structures can also be banked [8], albeit
with some potential impact on CPI performance. The low-order n
address bits normally used for the comparison are instead used to
select one of 2
subarrays. The remaining bits are compared
against the appropriate bits in each CAM subarray entry. Similarly,
these n bits are used to pick which subarray a new entry is placed
in. Thus, only one of the n subarrays is activated for each CAM
access. The CPI degradation comes about when there is a non-uniform
usage of the different subarrays, causing some subarrays
to become full before others. This inefficient usage of the entries
compared to a single CAM structure results in either entries being
needlessly replaced or new entries not being able to be inserted
even with available space in other subarrays. The result is CPI
degradation relative to the single CAM structure.
The issue queue CAM structure presents the additional complication
of having two fields (source operand IDs) on which a match
operation is performed, which prevents more than one subarray
from being disabled in a four-bank design. To approach the ideal
of enabling only one subarray for each access, we propose a novel
banked design that exploits the fact that frequently at least one of
the two source operands is ready when an instruction is dispatched.
Figure
3 shows how frequently only one, both, and neither of the
two source operands are ready when instructions are dispatched
into the integer queue. The simulation is on six of the SPEC2000
integer programs using the methodology described in Section 4.
On average, 13% of the dispatched integer instructions have neither
operand ready. The remaining 87% of the instructions have at
least one operand available and therefore require at most one match
operation for the instruction to wake up.
A banked issue queue organization that exploit this property is
shown in Figure 4. The organization uses four banks, each of
which holds two source operand IDs. One is the full six-bit source
operand field (assuming 64 physical registers) held in the instruction
info (RAM) section of the entry while the other consists of
only the four low-order register ID bits and is held in the CAM part
of the entry (note that 2 of the bits are already used for bank selec-
tion). Thus, only the latter is compared against the low-order four
destination register ID bits that are broadcast. Thus, our banked issue
queue design further reduces power dissipation by eliminating
bzip gcc mcf parser vortex vpr average10305070Percentage
of
Instructions
benchmarks
one op ready
all ready
none ready
none ready & diff.banks
Figure
3: Percentage of instructions with various numbers of
operands available on dispatch. Also shown is the percentage
of cases in which neither source operand is available and where
the source operand IDs are associated with different banks.
one of the two source operand IDs from the CAM. Note that the
match logic is guaranteed to be active for only one cycle. How-
ever, the ready logic, selection logic, and the RAM part may be
active for more than one cycle. Multiple instructions (say N) may
become ready due to result distribution, in which case the ready
logic, selection logic, and RAM part may be active for N cycles.
The selection logic is global in the sense that instructions may be
simultaneously ready in multiple banks.
As shown in the top of Figure 4 for an example add instruction,
three of the cases of source operands being ready or not on dispatch
are easy to handle. The instruction is steered to the bank
corresponding to the ID number of the unavailable source operand.
In the case where both operands of an instruction are available, the
instruction is steered to the bank corresponding to the first operand.
An instruction in the selected bank wakes up when there is a match
between the lower four bits of the destination ID and those of the
source ID corresponding to the unavailable operand. The fourth
case, that of neither operand being available on dispatch, is treated
as a special case. Here, instructions that have neither source operand
available are placed in the Conflict Queue. The Conflict Queue is
simply a conventional issue queue that performs comparisons with
both source operands. Because a small percentage of the instructions
have neither source operand available on dispatch, the Conflict
Queue need only contain a few entries. The destination IDs
of completing instructions are compared with the entries in one of
the banks, as well as with those in the Conflict Queue. Because the
Conflict Queue is small, its energy dissipation pales in comparison
to the savings afforded by banking.
assume 64 physical registers,
instruction that has dependency to a register number ranging in between
bank3,.
R1.R16 goes to bank1, R17.R32
addr1,.
Conflict
Queue
predecoder
Bank1 Bank2
multiplexer
add r1,r2,r19
ready ready
ready
not ready
ready not ready
not ready not ready
easy to handle
special case
add instruction go to separate
queue (conflict queue)
general broadcast is done
goes to bank1, and r2 specifier
will be broadcasted to bank1 only
specific broadcasts are done
Figure
4: Banked issue queue organization and placement of
instructions using the Conflict Queue for the case where neither
source operand is available on dispatch.
4. METHODOLOGY
The design alternatives are implemented at the circuit level and
the power estimations are evaluated by using the IBM AS/X circuit
simulation tool with next generation process parameters. All
the circuits also have been optimized as much as is reasonable for
power and speed. The baseline latch-based issue queue and other
circuit designs borrow from existing POWER4 libraries where appropriate
For the microarchitectural simulations, we used SimpleScalar-
3.0 to simulate an aggressive 8-way superscalar out-of-order pro-
cessor. The simulator has been modified to model separate integer
and floating point queues. The baseline also included register
renaming and physical registers to properly model banked issue
queues. We chose a workload of six of the SPEC2000 integer
benchmarks (each of which is run for 400 million instructions). Issue
queue event counts are captured during simulation and used
with the circuit-level data to estimate power dissipation. We focus
on an integer issue queue with 32 entries in this paper, although the
techniques are largely applicable to other queue structures (e.g.,
floating point queue, dispatch queue, reorder buffer). For the simulation
parameters, we chose a combined branch predictor of bi-modal
and 2-level and fetch and decode widths of 16 instructions
for our 8-way machine with a reorder buffer size of 128 entries.
We used 64KB 2-way L1 and 2MB 4-way L2 caches, four integer
ALUs and multipliers and four memory ports.
5. RESULTS
For the baseline issue queue 1 , each entry needs to be clocked
each cycle even when the queue is idle due to the need to recirculate
the data through the multiplexer to hold the data in place. In an
alternative clock-gated design the main clock as well as the latch
clocks are gated by a control signal whenever an entry does not
have the Valid bit set and is not being loaded. We first examine the
benefits of clock gating the issue queue, which largely depends on
what fraction of the entries can be clock gated for our application
suite.
Figure
5 shows the average number of entries in a 32-entry
integer queue that are and are not clock gated as well as the overall
power savings achieved. For vortex and gcc, on average over 50%
of the queue entries are clock gated, whereas for mcf, parser, and
vpr there is not much clock gating opportunity. On average, a 34%
power savings is achieved with clock gating the issue queue without
any loss of CPI performance.
The tradeoffs between a compacting and non-compacting issue
queue are more complex, as a degradation in CPI performance can
potentially occur with non-compaction due to the lack of an oldest-
first selection scheme. We modified SimpleScalar to model the
holes created in a non-compacting issue queue, the filling of these
holes with newly dispatched instructions, and a selection mechanism
strictly based on location within the queue (rather than the
oldest-first mechanism used by default). With such a scheme, older
instructions may remain in the queue for a long time period, thereby
delaying the completion of important dependence chains. The left-most
bar in Figure 6 shows CPI degradation for our six SPEC2000
integer benchmarks. The degradation is significant, around 8% for
mcf and parser and 5.5% overall. The right-most bar shows the
CPI degradation when the previously described oldest-first selection
scheme is implemented by using four bit sequence number
(including the sorting bit). On average, the partial oldest first selection
scheme reduces the CPI degradation from 5.5% to 2.3%.
1 The baseline described in this paper does not represent the real
POWER4 issue queue. Some mechanisms to reduce power, not
described in this paper, are present in the real POWER4 design.
bzip gcc mcf parser vortex vpr average51525#
of
Queue
Entries
benchmarks
Power
Savings
Clock Gated
Not Clock Gated
Figure
5: Number of queue entries gated, and power savings
relative to baseline for a latch-based issue queue with clock gating
bzip gcc mcf parser vortex vpr average13579
CPI
degradation
benchmarks
non-comp.
partold
Figure
degradation incurred via non-compaction
with position-based selection, and non-compaction with partial
oldest-first selection.
The power savings of the non-compacting latch-based issue queue
relative to the baseline design is shown in Figure 7. The non-compacting
queue power includes the power overhead due to the
oldest-first selection logic overhead as well as the write arbitration
logic overhead that provides the capability of writing to any hole
for the newly dispatched instructions. Even with these additional
overheads, the elimination of the frequent high-power compacting
events has a considerable impact across all benchmarks, achieving
a power savings of 25-45% and 36% overall.
This figure also shows the relative power savings of the non-compacting
CAM/RAM-based issue queue, and a non-compacting
issue queue implemented with clock gating. Redesigning the issue
queue as a CAM/RAM structure achieves a considerable power
savings over the non-compacting latch-based design. However, the
combination of a non-compacting latch-based design and clock gating
achieves slightly better overall savings. Note that the slightly
better power savings for mcf, parser, and vpr with the CAM/RAM-
based design is due to the lack of opportunity for clock gating with
these benchmarks. The choice of one option over the other depends
on a number of factors, including the expertise of the design
team in terms of clock gating versus CAM/RAM implementation,
verification and testing of the CAM/RAM design, and the degree to
which the additional clock skew and switching current variations of
the clock gated design can be tolerated. In the rest of this section,
we explore how the CAM/RAM-based issue queue design can be
augmented with dynamic adaptation or banking to further reduce
power dissipation.
bzip gcc mcf parser vortex vpr average10305070Power
Savings
benchmarks
Latch-based no compaction
based
Latch-based no compaction+clkgating
Figure
7: Power savings relative to baseline of non-compacting
latch-based, non-compacting CAM/RAM-based, and non-compacting
latch-based with clock gating, all with the proposed
partial oldest-first selection scheme.
We assume a 32-entry adaptive issue queue that can be configured
with 32, 24, 16, or 8 entries during application execution. Figure
8 shows the power savings and performance degradation with
the adaptive scheme for different cycle window values [2]. Note the
negative power savings with mcf using the larger cycle windows of
8K and 16K. This occurs because at this coarse level of dynamic
adaptation, the 32-entry configuration is always selected which incurs
a power penalty due to the overhead of the dynamic adaptation
circuitry. The use of smaller cycle windows allows the dynamic
adaptation algorithm to capture the finer-grain phase change behavior
of mcf, resulting in smaller configurations being selected. Over
all of these benchmarks, the use of smaller cycle windows results in
a higher power savings and a lower performance degradation than
when larger cycle windows are used. For a cycle window of 4K, a
34% overall issue queue power savings can be achieved with a 3%
CPI degradation as compared to the CAM/RAM-based design.
bzip gcc mcf parser vortex vpr average
-202060Power
Savings
benchmarks
bzip gcc mcf parser vortex vpr average2610
CPI
degradation
benchmarks
Figure
8: CPI degradation and power savings of the adaptive
issue queue relative to the CAM/RAM-based issue queue for
different cycle window values.
We explored 2, 4, and 8-way banked issue queues using the Conflict
Queue approach described in Section 3.2. The top of Figure
9 shows why banking can be so effective: the relative power
of the CAM structure increases quadratically with the number of
entries. Banking divides the queue into smaller structures, only
one of which is selected each cycle. The bottom part of this figure
shows the power savings achieved with different issue queue sizes
for 2, 4, and 8-way banked queues with only one bank enabled.
There is a clear tradeoff between the reduction in the number of active
entries (and thus bitline length) with higher degrees of banking
and the extra peripheral circuit overhead incurred with more banks.
For a small queue size of 16 entries, the power savings is greatest
with two banks due to the relatively large cost of duplicating the
peripheral circuitry. With the larger 64 entry queue, the savings in
bitline power afforded with 8 banks outweighs the peripheral logic
power overhead.
Relative
Power
# of entries
Array
Savings
# of IQ entries
Banked CAM (1 bank active only)
2banks
4banks
8 banks
Figure
9: Relative power of the issue queue CAM array as a
function of the number of entries (top), and power savings by
degree of banking and issue queue size with a single enabled
bank (bottom).
As mentioned in Section 3.2, banking can incur a CPI degradation
due to underutilization of queue entries resulting from static
allocation of dispatched instructions to banks. This can be partially
remedied by increasing the number of entries in each bank. The
graph at the top of Figure 10 shows the CPI degradation incurred
relative to the baseline for a 4-way banked issue queue with a 4-
entry Conflict Queue for various numbers of entries per bank. The
CPI degradation can be reduced to 2.5% with 10 entries per bank,
a slight increase from the 8 entries nominally used. The middle
graph shows performance degradation for a 4-way banked queue
with entries per bank for different Conflict Queue sizes. A small
number of entries (4-5) is sufficient to reduce the CPI degradation
to negligible levels. Finally, the bottom graph shows the percentage
of time various numbers of banks were active for our 8-way issue
machine with a 4-banked issue queue (10 entries per bank, 4 entry
Conflict Queue), as well as the power savings achieved for each
benchmark. Note that these results account for the power overheads
of the extra entries and the Conflict Queue, and we assume
that both the baseline and banked designs have the entire queue disabled
with no activity (zero banks active for the banked approach).
Overall, a 31% energy savings is achieved with only a 2.5% impact
on CPI performance. This compares favorably with the 34% power
savings and 3% CPI degradation of the adaptive approach, yet the
banked scheme is arguably more straightforward to implement.
5.1 Comparison of Different Alternatives
Clock gating the issue queue has a significant impact on power
dissipation with no CPI degradation. Despite its implementation
and verification challenges it is a well-known and established approach
and therefore represents the most straightforward, albeit
not the most effective, solution to the issue queue power problem.
bzip gcc mcf parser vortex vpr average26CPI
degradation
benchmarks
8ent.
9ent.
10ent.
11ent
bzip gcc mcf parser vortex vpr average5CPI
degradation
benchmarks
bzip gcc mcf parser vortex vpr average50Active
banks
benchmarks
Power
Savings
(%)30.531.50bankact.
1bankact.
2bankact.
3bankact.
4bankact.
Figure
10: 32-entry four-way banked issue queue results relative
to the 32-entry CAM/RAM-based issue queue. CPI degradation
with different numbers of entries per bank (top) with a 4
entry Conflict Queue. CPI degradation with different size Conflict
Queues (middle) with 10 entries per bank. Percentage of
different numbers of active banks and power savings (bottom)
with a 4 entry Conflict Queue and 10 entries per bank.
On the other side, making the queue non-compacting affords an
even greater power savings, albeit with a CPI performance cost due
to the elimination of oldest-first selection. This problem can be
largely remedied with the sequence-number and sorting bit scheme
proposed in this paper with no delay cost and negligible power
impact relative to the power savings with non-compaction. This
makes the non-compacting scheme an attractive alternative to the
baseline compacting design. The combination of non-compaction
and clock gating provides slightly better issue queue power savings
than a CAM/RAM-based design. The two alternatives are functionally
equivalent, but quite different in terms of a number of implementation
and verification cost factors that may favor one over
the other.
Once the designer chooses a CAM/RAM-based implementation,
an adaptive CAM/RAM-based issue queue delivers an additional
26% power savings beyond non-compaction and clock gating. How-
ever, the cost is a slight performance degradation, in addition to the
significant design and verification effort involved. The banked approach
with the Conflict Queue represents an attractive alternative
to the adaptive design. It's power savings and performance degradation
rival that of the adaptive approach, yet its design would be
considered more straightforward by most designers. Finally, the
banked and adaptive issue queue techniques are orthogonal approaches
that can be combined to afford even greater power sav-
ings. Due to the size of our issue queue (32 entries) the combination
of these techniques would not be profitable. However, a larger
128 entry queue could be divided into four 32 entry banks, each
of which would use the adaptive approach described in this paper.
Based on our experience and the results in this paper, we expect
that this combination would produce much greater power savings
than any of the other techniques investigated in this study.
6. CONCLUSIONS
In this paper, we have presented a range of issue queue power
optimization techniques that differ in their effectiveness as well
as design and verification effort. As part of this study, we propose
a sequencing mechanism for non-compacting issue queues
that allows for a straightforward implementation of oldest-first se-
lection. We also devised a banked issue queue approach that allows
for all but one bank to be disabled with little additional power
overhead. Through a detailed quantitative comparison of the tech-
niques, we determine that the combination of a non-compaction
scheme and clock gating achieves roughly the same power savings
as a CAM/RAM-based issue queue. We also conclude that
the adaptive and banked CAM/RAM-based issue queue approaches
achieve a significant enough power savings over the latch-based approaches
to potentially justify their greater design and verification
effort.
7.
ACKNOWLEDGMENTS
This research was supported in part by DARPA/ITO under AFRL
contract F29601-00-K-0182, NSF under grants CCR-9701915 and
CCR-9811929, and by an IBM Partnership Award.
8.
--R
An Adaptive Issue Queue for Reduced Power at High-Performance
Issue Logic for a 600-MHz Out-of-Order Execution Microprocessor
Data Processing System and Method for Using an Unique Identifier to Maintain an Age Relationship Between Executing Instructions.
A 1.8GHz Instruction Window Buffer.
A Design for High-Speed Low-Power CMOS Fully Parallel Content-Addressable Memory Macros
POWER4 System Microarchitecture.
Alpha Processors: A History of Power Issues and a Look to the Future.
--TR
Complexity-effective superscalar processors
Energy-effective issue logic
Energy
An Adaptive Issue Queue for Reduced Power at High Performance
--CTR
Rajesh Vivekanandham , Bharadwaj Amrutur , R. Govindarajan, A scalable low power issue queue for large instruction window processors, Proceedings of the 20th annual international conference on Supercomputing, June 28-July 01, 2006, Cairns, Queensland, Australia
Yingmin Li , Dharmesh Parikh , Yan Zhang , Karthik Sankaranarayanan , Mircea Stan , Kevin Skadron, State-Preserving vs. Non-State-Preserving Leakage Control in Caches, Proceedings of the conference on Design, automation and test in Europe, p.10022, February 16-20, 2004
Simha Sethumadhavan , Franziska Roesner , Joel S. Emer , Doug Burger , Stephen W. Keckler, Late-binding: enabling unordered load-store queues, ACM SIGARCH Computer Architecture News, v.35 n.2, May 2007 | banking;issue queue;non-compacting;low-power;microarchitecture;adaptation;compacting |
566530 | Optimal deterministic protocols for mobile robots on a grid. | This paper studies a system of m robots operating in a set of n work locations connected by aisles in a nxn grid, where mn.Form time to the robots need to move along the aisles, in order to visit disjoint sets of locations. The movement of the robots must comply with the following constraints: (1) no two robots can collide at a grid node or traverse a grid edge at the same time; (2) a robot's sensory capability is limited to detecting the presence of another robot at a neighboring node. We present a deterministic protocol that, for any small constant e > 0, allows m(1-e)n targets and no target is visited by more than one robot. We also prove a lower bound showing that our protocols were known only for m n, while for general m n only a suboptimal randomized protocols were known. | Introduction
A Multi Robot Grid system (shortly, MRG) consists of m robots that operate in a set of
locations connected by aisles in a
n \Theta
grid [ST95]. At any time, the
robots are located at distinct grid nodes, and from time to time each robot is given a set of
work locations (targets) to be visited. The target sets are disjoint and no particular order
is prescribed for visiting the targets in each set. Moreover, robots may end up at arbitrary
locations, once their visits are completed. We may regard the system as representing a
warehouse or a tertiary storage (e.g., tape) system, where robots are employed to gather
or redistribute items. For simplicity, we assume that the system is synchronous, that is,
all robots are provided with identical clocks. Control is distributed in the sense that each
robot's moves are scheduled locally by a processor embedded in the robot. Our goal is
to design an efficient distributed on-line protocol that every robot must follow in order to
visit the assigned targets while avoiding deadlocks and conflicts with other robots. More
specifically, the protocol must comply with the following rules:
ffl At any time all the robots reside in the grid, i.e., no robot can leave the grid or enter
from outside.
ffl No two robots can occupy a grid node or traverse a grid edge at the same time.
ffl A robot cannot exchange information with other robots directly. However, each
robot is equipped with a short-range sensor that is able to detect the presence of
other robots occupying nodes at (Manhattan) distance one from its current location.
ffl In one time unit, a robot can perform a constant amount of internal computation,
read the output of its short-range sensor, and decide whether to remain at the current
grid node, or to move to a neighboring node.
In an MRG(n; m; d) problem, each of m n robots in an MRG system is required to
visit (at most) d n targets, with no grid node being target for more than one robot. For
the sake of simplicity, we assume that n is a power of 4 so that the grid can be recursively
decomposed into subgrids whose size is still a power of 4. The general case of n being any
even power can be handled with minor modifications.
1.1 Related Work
The MRG(n; m; d) problem was originally introduced in [ST95] as a practical case study
within the general quest for social laws to coordinate agents sharing a common environment
in a distributed rather than centralized fashion. While central control relies on a single
arbiter that regulates all possible interactions among the agents, distributed control is
based on a set local rules which must be complied with in order to avoid conflicts. The
need for distributed control stems from a number of shortcomings which may limit the
applicability of a central protocol, such as the need of reprogramming the system when
the set of agents changes over time, or the overhead in computation and communication
introduced by the arbiter. In fact, a distributed protocol may exploit better the intrinsic
parallelism of the problem, since each agent can be programmed independently of the
others to follow the common set of rules. In order to be efficient, a distributed protocol
must require a minimal amount of communication to regulate the interaction between the
agents. Hence, the protocol must be based on simple rules that can be applied locally and
quickly.
Although the MRG(n; m; d) problem entails routing robots on a two-dimensional grid, it
exhibits, however, some fundamental differences from classical message routing problems.
The nodes of a network used to exchange messages are typically processing units able
to compute, maintain a local status and, in many cases, temporarily buffer messages in
transit. In contrast, in an MRG system the grid nodes are passive entities with no status or
computing power, and robots, which are the active agents in the system, must orchestrate
their movements solely based on their processing and sensory capabilities. Moreover, in
message routing, packets travelling through the network can be destroyed and replicated
as long as each message is eventually delivered to its destination(s), while this is clearly
not admissible when dealing with robots.
For the above reasons, as was also observed in [PU96], none of the many message routing
protocols known in the literature appears to be directly applicable to the MRG(n; m; d)
problem (see [Lei92] and [Sib95] for comprehensive surveys of grid protocols). Even hot-potato
protocols, which require only very simple operations at the nodes and do not employ
internal buffers, are not directly applicable since they work under the assumption that in
one time unit a node may receive a packet from each neighbor, manipulate the information
carried by the packet headers and redistribute the packets, suitably permuted, to the
neighbors (e.g., see [NS95]). Nevertheless, we will show in this paper that techniques
employed in message routing can be suitably, yet not trivially, adapted to devise efficient
solutions for the MRG(n; m; d) problem.
Any instance of the MRG(n; m; d) problem can be trivially completed in O(n) time
by letting the robots circulate along a directed Hamiltonian cycle traversing all the grid
nodes. In fact, Preminger and Upfal [PU96] proved that any deterministic protocol in
which robots are completely blind (i.e., a robot cannot detect the presence of other robots
at any distance) requires Ω(n) time, thus implying the optimality of the trivial strategy
in this case.
If the robots are not blind, Ω(√n) time is necessary in the worst case due to the grid
diameter. Clearly, a single robot with a single destination (MRG(n; 1; 1) problem) can
achieve this bound by simply traversing a shortest path from its source to its target. For a
larger number m of robots and a single destination per robot (MRG(n; m; 1) problem) two
optimal Θ(√n)-time protocols are presented in [ST92] and [ST95] for two special cases:
the first protocol is designed for m ≤ n^(1/4) robots, while the second one works for m ≤ √n
robots, as long as they initially reside in distinct columns.
The only known protocol that deals with an arbitrary number of m ≤ n robots and
a single destination is given in [PU96]. The algorithm is randomized and solves any
MRG(n; m; 1) problem in suboptimal O(√n log n) time, with high probability. However,
the algorithm works under a relaxed set of rules. Specifically, it assumes that a robot's
short-range sensor is able to detect the presence of other robots at distance at most two,
that a robot may initially stay outside the grid for an arbitrary amount of time, and that
robots disappear from the grid as soon as they have visited their target. No deterministic
protocol that takes o(n) time and works under the stricter rules described in this paper is
known for the MRG(n; m; 1) problem.
For the case of d > 1 targets, one could repeat the single-target protocol d times.
However, as we will show in the paper, this strategy does not achieve optimal performance.
To the best of our knowledge, no specific algorithm for the case of d > 1 targets has been
developed so far.
1.2 Our Results
We devise a simple and general protocol for the MRG(n; m; 1) problem, with m ≤ n/4,
which attains optimal Θ(√n) time. The algorithm implements a routing strategy in a
way that fully complies with the constraints imposed by an MRG system. Our protocol
improves upon the work of [PU96] in several directions. First, the protocol is deterministic,
hence it provides a worst-case guarantee on performance. Second, it achieves optimality in
the general case, thus reducing the running time of [PU96] by an O(log n) factor. Third,
it works in a weaker model in which the robots reside in the grid all the time and their
sensors can detect other robots at distance one.
Next, we consider the case of d > 1 targets. If we put a constraint on the order of the
visits that fixes a priori the sequence of targets to reach for each robot, a simple argument
based on diameter considerations suffices to prove that any protocol for the problem
requires Ω(d√n) time, in the worst case. Consequently, applying our optimal MRG(n; m; 1)
protocol d times yields an optimal Θ(d√n)-time general solution in this case. However, if
the robots can arbitrarily rearrange the order of their targets, the latter approach becomes
suboptimal. Indeed, we prove an Ω(√(dn)) lower bound for the MRG(n; m; d) problem and
provide an optimal Θ(√(dn))-time protocol that matches the lower bound for any d ≤ n
and m ≤ n(1 − ε), where 0 < ε < 1 is an arbitrarily fixed constant. Ours is the first
nontrivial solution to the most general case of the MRG problem.
It must be remarked that our protocols require a common clock governing all robots'
movements, while the results in [PU96] and [ST95] can be adapted to hold under a slightly
weaker notion of synchronicity.
The paper is organized as follows. Section 2 describes an optimal deterministic protocol
for the MRG(n; m; 1) problem under the assumption m ≤ n/4. In Section 3 the protocol is
extended to handle the more general MRG(n; m; d) problem with d ≤ n and m ≤ n(1 − ε).
The section also proves the lower bound showing the optimality of the extended protocol.
Some final conclusions and open problems are drawn in Section 4.
2 The Case of Single Targets
Consider an arbitrary instance of the MRG(n; m; 1) problem, for m ≤ n/4. The basic idea
behind our protocol is to perform the routing through sorting, which is a typical strategy
employed in the context of packet routing. However, we need to develop specific primitives
in order to implement such a strategy under the restrictive rules of an MRG system. In
the following, we assume that at any time each robot knows the coordinates of its current
location.
Let us consider the grid as partitioned into n/4 subgrids of size 2 × 2, which we call tiles. The
protocol has a simple high-level structure consisting of the four phases outlined below:
- Phase I - Balancing: The robots relocate in the grid so that each robot ends up in
the top-left node of a distinct tile.
- Phase II - Sorting-by-Row: The robots sort themselves by target row. The sorted
sequence of robots is arranged on the grid (one robot per tile) according to the Peano
indexing [Mor66] shown pictorially in Figure 1 and described mathematically later.
In other words, at the end of the sorting, the i-th robot in the sorted order occupies
the top-left corner of the tile of Peano index i.
- Phase III - Permuting: The sorted sequence of robots permutes from the Peano
indexing to the row-major indexing.
- Phase IV - Routing: The robots first circulate within columns of tiles to reach
the rows containing their targets. Then, they circulate around the rows to visit the
targets.
Before describing the four phases in more detail, we show how to perform some basic
primitives in an MRG system which will be needed to implement the above phases.
Pack Given q ≤ t robots on a t-node linear array, pack them into q consecutive nodes at
one end of the array.
Solution: Each robot repeatedly crosses an edge towards the designated end whenever its
short-range sensor detects that the node across the edge is empty. No collisions arise in
this way. Moreover, a simple argument shows that after t time steps all the robots have
completed the packing. 2
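As an illustration of the Pack primitive (a sketch added here, not part of the original protocol description), the following Python fragment simulates the synchronous rule above on a t-node linear array; the set-based representation and the function names are choices of this sketch.

def pack_step(occupied):
    # One synchronous step: a robot at node n moves to n - 1 iff that node
    # was empty at the beginning of the step (so no collisions can arise).
    return {n - 1 if n > 0 and (n - 1) not in occupied else n for n in occupied}

def pack(occupied, t):
    # Returns the number of steps needed to pack the robots into nodes 0..q-1.
    steps, target = 0, set(range(len(occupied)))
    while occupied != target:
        occupied = pack_step(occupied)
        steps += 1
    return steps

# Example on a 16-node array: packing finishes within t = 16 steps, as claimed.
assert pack({3, 5, 6, 11, 15}, 16) <= 16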
Count Given q ≤ t robots on a t-node linear array, make q known to each robot.
Solution: The robots first pack at one end of the array and then at the other. A robot
that ends up at the i-th location from one end and at the j-th location from the other
sets q = i + j - 1. The primitive requires no more than 2t steps. 2
Compare-Swap Given a tile with two robots in it, sort the two robots so that the one
associated with the smaller target row goes to the top left corner, while the other goes to
the bottom left corner.
Solution: Suppose that the two robots start at the top and bottom left corners of the
tile. The robots execute a number of rounds until they "learn" their relative order in the
sorted sequence. Specifically, in the i-th round, the robots "implicitly compare" the i-th
most significant bit of the binary representation of their respective target row as follows.
A robot positions itself at the left corner (in the same row) of the tile if its bit is 0, while
it positions itself at the right corner if its bit is 1. Then each robot can infer the other
robot's bit by simply checking for its presence in the same column. The first time that
the robots find different bits, the robot whose bit is 0 moves to the top left corner, while
the other moves to the bottom left corner, and the algorithm ends. If the robots have the
same target row (i.e., all bits are equal) they stay in their starting positions. Overall, the
computation takes no more than log n steps. 2
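The decision logic behind the Compare-Swap primitive can be sketched as follows (an added illustration; the function and its return values are not part of the original text): the two robots reveal one bit per round, most significant bit first, by standing on the left (bit 0) or right (bit 1) side of the tile, and the first differing bit decides who takes the top-left corner.

def compare_swap(row_top, row_bottom, num_bits):
    # Returns which starting robot ends up in the top-left corner of the tile.
    for b in range(num_bits - 1, -1, -1):        # rounds: MSB ... LSB
        bit_top = (row_top >> b) & 1             # 0 -> stand left, 1 -> stand right
        bit_bottom = (row_bottom >> b) & 1
        if bit_top != bit_bottom:                # first differing bit decides
            return "top robot" if bit_top == 0 else "bottom robot"
    return "top robot"                           # equal rows: keep starting positions

# Example: target rows 5 (101) and 3 (011) -> the robot with row 3 goes on top.
assert compare_swap(5, 3, 3) == "bottom robot"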
In the following subsections, we describe the four phases of our protocol in more detail.
2.1 Phase I: Balancing
In this phase, the m ≤ n/4 robots start at arbitrary positions in the grid and must
distribute themselves among the n/4 tiles so that each tile contains at most one robot in
its top-left node. This is accomplished in log n − 2 balancing steps, numbered from 0 to
log n − 3, according to the following inductive scheme. At the beginning of Step i, with i
even, the robots are already distributed evenly among square subgrids of size √(n/2^i) × √(n/2^i),
by induction. (This clearly holds for i = 0.) During the step, the robots work independently
within each square subgrid, and partition themselves evenly among rectangular subgrids
of size √(n/2^i) × √(n/2^(i+2)). Analogously, in Step i with i odd, the robots work independently
within each rectangular subgrid of size √(n/2^(i-1)) × √(n/2^(i+1)) and partition themselves evenly
among square subgrids of size √(n/2^(i+1)) × √(n/2^(i+1)). Clearly, at the end of Step log n − 3,
the robots are evenly partitioned among the subgrids of size 2 × 2 (the tiles), with at most
one robot per tile. At this point, each robot moves to the top-left corner of its tile.
We now describe the implementation of Step i, with i odd (the implementation of a
balancing step of even index requires only minor modifications). Consider an arbitrary
t × t/2 rectangular subgrid, with t = √(n/2^(i-1)), and suppose that there are p robots in
the subgrid. Let the rows (resp., columns) of the subgrid be numbered from 1 to t (resp.,
t/2). At the end of the step we want to have ⌊p/2⌋ robots in the upper half (top t/2 rows)
and the remaining ⌈p/2⌉ in the lower half (bottom t/2 rows) of the subgrid. This is done
through the following substeps:
(1) The robots in each row pack towards the left.
(2) The robots in each column pack towards the bottom.
Comment: After this step, the robots form a "staircase" descending from northwest
to southeast in the subgrid.
(3) In each column k < t/2, each robot determines the number of robots in the column.
If this number is odd, the topmost robot (referred to as leftover) moves to the top
of the column.
(4) The leftovers pack towards the right of the topmost row. Then they move down along
column t/2 towards the bottom. Then, in column t/2, each robot determines the
number of robots in the column.
Comment: If p ≤ t²/4 (which is always the case) then there is enough room in column
t/2 to hold all leftovers.
(5) For every column k, let x be the number of robots in the column after Step 4. (Note
that x may be odd only for k = t/2.) For k < t/2, the robots pack around the column
center, i.e., on the x rows closest to row t/2. For k = t/2, the robots
pack so that ⌊x/2⌋ of them end up in the upper half and the remaining ⌈x/2⌉ end
up in the lower half.
Lemma 1 Phase I takes O(√n) time.
Proof: The correctness of the above strategy is immediate. The resulting time bound is a
geometrically decreasing sum, whose i-th term is the cost O(√(n/2^i)) of balancing step i,
which is implemented in terms of the Pack and Count primitives presented before. 2
2.2 Phase II: Sorting-by-Row
At the end of the balancing phase, the robots are spread among the grid nodes in such
a way that there is at most one robot in each tile, parked in the tile's top-left corner.
The robots will now sort themselves according to their target row, with ties broken ar-
bitrarily. The sorting algorithm relies upon a grid implementation of Batcher's bitonic
sorting algorithm [Bat68] for sequences of size n/4 or smaller. We recall that Batcher's
algorithm is structured as a cascade of log(n/4) merging stages. At the beginning of the
i-th merging stage, 1 ≤ i ≤ log(n/4), the robots are partitioned into (n/4)/2^(i-1) sorted
subsequences each of size 2^(i-1). Then, pairs of subsequences are merged independently so
that, at the end of the stage, there are (n/4)/2^i sorted subsequences each of size 2^i. In
turn, the i-th merging stage is made of a sequence of i steps, called (i; j)-compare-swap for
j = i − 1, i − 2, ..., 0; specifically, an (i; j)-compare-swap step compares and swaps
pairs of elements at distance 2^j in each subsequence (the direction of the compare/swap
operator is fixed a priori and depends on the values of i and j).
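For reference, the (i; j)-compare-swap schedule just described is the standard bitonic sorting network; the following Python sketch (an added illustration, not the robots' grid implementation) applies that schedule to an array, using the usual position-dependent compare direction.

def bitonic_sort(a):
    n = len(a)                       # n must be a power of two
    k = 2                            # k = 2^i: sorted subsequence size after stage i
    while k <= n:
        j = k // 2                   # j runs over distances 2^(i-1), ..., 1
        while j > 0:
            for p in range(n):
                q = p ^ j            # partner at distance j (bitwise XOR)
                if q > p:
                    ascending = (p & k) == 0
                    if (a[p] > a[q]) == ascending:
                        a[p], a[q] = a[q], a[p]
            j //= 2
        k *= 2
    return a

assert bitonic_sort([7, 3, 0, 5, 6, 1, 4, 2]) == list(range(8))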
In order to efficiently implement Batcher's algorithm on the grid, we number the n/4
tiles according to the Peano indexing, which can be defined as follows (see Figure 1).
Split the set of indices I = {0, 1, ..., n/4 − 1} into four equally sized subsets of consecutive
indices I_0, I_1, I_2 and I_3, with I_k = {k n/16, ..., (k + 1) n/16 − 1}. Similarly, split the grid
into four quadrants of n/16 tiles each
and assign the four subsets of indices to the four quadrants, namely, H_tl, H_bl, H_tr, and
H_br, where t stands for "top," b for "bottom," l for "left," and r for "right." Assign the set
of indices I_0 to H_tl, I_1 to H_bl, I_2 to H_tr and I_3 to H_br. Then proceed recursively within the
quadrants until quadrants of one tile each are reached. An easy argument shows that two
tiles with indices h and h ⊕ 2^j in the Peano indexing, where ⊕ denotes bitwise exclusive-or,
lie on the same row or column of tiles (depending on the parity of j) at distance O(√(2^j))
from each other.
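The following Python sketch (added for illustration; the function names are ours) computes the tile coordinates of a given Peano index using the quadrant order just described, and checks the stated property of indices h and h ⊕ 2^j on the 64-tile grid of Figure 1.

def peano_to_tile(h, side):
    # side = number of tiles per grid side (a power of two)
    if side == 1:
        return (0, 0)
    half = side // 2
    quad, rest = divmod(h, half * half)           # which quadrant, index inside it
    r, c = peano_to_tile(rest, half)
    dr = (0, half, 0, half)[quad]                 # I0 -> tl, I1 -> bl, I2 -> tr, I3 -> br
    dc = (0, 0, half, half)[quad]
    return (r + dr, c + dc)

side = 8                                          # 64 tiles, as in Figure 1
for h in range(side * side):
    for j in range(6):
        r1, c1 = peano_to_tile(h, side)
        r2, c2 = peano_to_tile(h ^ (1 << j), side)
        # same column when j is even, same row when j is odd
        assert (c1 == c2) if j % 2 == 0 else (r1 == r2)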
Figure 1: The 64 tiles of a 16 × 16 grid ordered according to the Peano indexing. Each
square represents a tile with 2 × 2 grid nodes and contains one robot at most, after the
balancing phase.
An (i; j)-compare-swap step can be performed as follows. Let k denote any integer in
{0, 1, ..., n/4 − 1} whose binary representation has 0 in position j. The following substeps
are executed in parallel for all such values of k:
(1) The robot residing in tile k + 2^j in the Peano indexing (if any) moves to tile k.
(2) The robots in tile k execute the Compare-Swap primitive according to their target
row, with ties being broken arbitrarily. When only one robot is in the tile, it moves
directly to the tile's top left corner.
(3) The robot with the larger or smaller target (if any) moves to tile k + 2^j, depending
on the direction of the (i; j)-compare-swap operator.
The routing implied by Step 1 above is easily performed by the robots without collisions.
In particular, when j is odd, the robot in tile k + 2^j first moves to the bottom-left corner
of its tile, and then moves left until it reaches the bottom-left corner of tile k (which is
on the same row as tile k + 2^j in our numbering). When j is even, the robot in tile
k + 2^j first moves to the top-right corner of the tile and then moves upwards along the
column, until it reaches the bottom-right corner of tile k. From there, it then positions
itself at the bottom-left of the tile. Step 3 can be accomplished analogously. Thus, Steps 1
and 3 require O(√(2^j)) time overall. By using the Compare-Swap primitive discussed before,
Step 2 requires O(log n) time.
Lemma 2 Phase II takes O(√n) time.
Proof: The i-th merging stage of the sorting algorithm, 1 ≤ i ≤ log n − 2, consists of a
sequence of (i; j)-compare-swap steps, for j = i − 1, i − 2, ..., 0. As an (i; j)-compare-swap
step takes O(√(2^j)) time, the total running time of the algorithm is
Σ_{i=1}^{log n − 2} Σ_{j=0}^{i−1} O(√(2^j)) = O(√n). 2
2.3 Phase III: Permuting
After the sorting phase, the robots reside in distinct tiles, sorted by target row according
to the Peano indexing. In Phase III, the robots permute in such a way that the sorted
sequence is rearranged according to the row-major indexing. Let us call t-column (resp., t-
row) a column (resp., row) of tiles. The permutation is executed according to the following
recursive protocol. If n = 4, the permutation is trivial. Consider now the case n > 4:
(1) Each robot in H_tr swaps positions with the one occupying the corresponding position
in H_bl.
(2) Within each quadrant, the sorted subsequence of robots recursively permutes from
Peano to row-major indexing.
(3) Within each quadrant, the robots permute so that those in odd t-rows pack to the
top, while those in even t-rows pack to the bottom of the quadrant.
(4) Each robot in the lower half of H_tl (resp., H_bl) swaps positions with the one occupying
the corresponding position in the top half of H_tr (resp., H_br).
The correctness of the permutation protocol is easily established by induction. A pictorial
illustration of the steps, showing the robots' configuration initially and after each of
Steps 1 through 4, is omitted here.
Lemma 3 Phase III takes O(√n) time.
Proof: The movements of robots implied by Step 1, Step 3 and Step 4 can be executed
in a conflict-free fashion in O(√n) time as one robot at most is in each tile. Since the
recursive Step 2 is executed in parallel and independently within subgrids of geometrically
decreasing side, we conclude that the overall permutation time is also O(√n). 2
2.4 Phase IV: Routing
The routing phase starts with the m robots sorted by target row and occupying the first
m tiles in the row-major indexing, with at most one robot per tile (in the tile's top-left
corner). We number the t-columns (resp., the t-rows) from 1 to √n/2. Note that, due to
sorting, each t-column holds no more than two robots with targets in the same row. The
routing is performed by first moving the robots to their target row and then to their final
target. This is accomplished in parallel as follows:
(1) For 1 ≤ i ≤ √n/4 and 1 ≤ j ≤ √n/2, the robot residing in t-column 2i and t-row j
moves to the top-right corner of the tile in t-column 2i − 1 and t-row j.
Comment: After this step, in any odd-numbered t-column there can be up to four
robots destined for the same row, while the even-numbered t-columns are empty.
(2) The robots in each odd-numbered t-column circulate along a directed Hamiltonian
cycle traversing all of the nodes in the t-column. When a robot traveling on the right
side of the t-column reaches its target row, it attempts to shift right to the adjacent
tile and then moves to the rightmost unoccupied node in such tile.
Comment: Within an odd-numbered t-column, no more than two robots with the
same target row are able to move to the adjacent t-column.
(3) The robots in each t-row circulate along a directed Hamiltonian cycle traversing all
the nodes in the t-row, therefore visiting their target locations.
(4) All the robots go back to the t-columns they occupied at the end of Step 1.
(5) Steps 2-3 are repeated to deliver the robots that have not visited their targets yet.
To this end, the robots that have already completed their task will not attempt to
shift right during Step 2.
Comment: All robots that have not visited their targets at the beginning of Step 5
are now able to do so.
Lemma 4 Phase IV takes O(√n) time.
Proof: Steps 1-3 require O(√n) time altogether and are executed at most twice each (due
to Step 5). Step 4 can be executed as follows. In each odd-numbered t-column, the robots
in each row pack to the left. Then, robots in each even-numbered t-column circulate along
a directed Hamiltonian cycle traversing all the nodes in the t-column, and when a robot
sees an empty spot in the adjacent t-column (to the left) it moves into such a spot packing
to the left. Thus, Step 4 requires O(√n) time. This implies that the whole routing phase
also takes O(√n) time. 2
The following theorem is an immediate consequence of Lemmas 1, 2, 3 and 4.
Theorem 1 Any instance of the MRG(n; m; 1) problem, with m ≤ n/4, can be solved in
time O(√n) in the worst case.
A simple diameter-based argument shows that the running time stated in the above
theorem is optimal. Moreover, the result is easily extended to the case in which m′ ≤ m
robots have one target to reach, while the remaining ones do not have any visit to
perform. It is sufficient to associate the latter robots with a fictitious destination whose
row is √n + 1 and let them participate in the various phases of the protocol. Clearly, there
is no increase in the running time.
3 The Case of Multiple Targets
In this section we devise a protocol for the more general MRG(n; m; d) problem where each
of m robots needs to visit up to d grid nodes, with each grid node being visited by at most
one robot. The protocol is first presented for the case m ≤ n/4, and then extended to
handle up to m ≤ n(1 − ε) robots, for any constant 0 < ε < 1 arbitrarily fixed. Before
describing the protocols, we prove a lower bound on the running time of any protocol for
the MRG(n; m; d) problem. The lower bound will be employed later to show the optimality
of the proposed protocols.
Lemma 5 For every choice of integers n; m; d, with 1 ≤ m; d ≤ n, there exists an instance
of the MRG(n; m; d) problem whose solution requires Ω(√(dn)) time.
Proof: If n < 4 or d < 4, the bound follows from the diameter argument. Therefore, let
us examine the case n; d ≥ 4. Let n′; d′ be the largest powers of 4 such that n′ ≤ n and
d′ ≤ d. Note that n′ ≥ n/4, d′ ≥ d/4 and d′ ≤ n′. Consider a square subgrid of n′ nodes,
partitioned into d′ square tiles of n′/d′ nodes each, and suppose that one of the m
robots has the d′ centers of the tiles among its targets, where the center of a tile is the
node in the (√(n′/d′)/2)-th row and in the (√(n′/d′)/2)-th column of the tile. In order to
visit its targets the robot must traverse at least √(n′/d′)/2 nodes in each of d′ tiles
or more, for an overall time requirement of d′√(n′/d′)/2 = √(n′d′)/2 = Ω(√(dn)). 2
3.1 An Optimal Protocol for m ≤ n/4
Consider an instance of the MRG(n; m; d) problem with d ≤ n and m ≤ n/4, and let
k = ⌊log_4 d⌋. The protocol is structured as a sequence of k + 1 stages of geometrically
increasing running time. For 0 ≤ i ≤ k, in Stage i all robots having to visit at least 4^i and
less than 4^(i+1) destinations accomplish their task. We call such robots active in Stage i,
whereas the remaining robots are called inert in Stage i. Note that since all robots reside
on the grid at all times, the protocol must orchestrate the movement for both active and
inert robots in every stage.
Stage 0 and Stage 1 are executed by simply running the single-target protocol fifteen
times, one for every possible target of each robot active in these stages. Clearly, inert
robots will participate in the protocol by associating themselves to "fake" destinations, as
described at the end of Section 2.
In Stage i, 2 ≤ i ≤ k, at most n/4^i robots are active. Let δ_i = n/4^(i-1), and regard the
grid as being conceptually partitioned into 4^(i-1) square subgrids of δ_i nodes each, which
we refer to as δ_i-tiles. Observe that all robots active in this stage fit in one quadrant of a
δ_i-tile. Stage i is executed in two rounds. In the first round, the inert robots pack in the
lower half of the grid, while the active robots tour all δ_i-tiles in the upper half of the grid,
stopping in each δ_i-tile for a time sufficient to visit all of their destinations in the tile, and
progressively accumulating in the first tile of the lower half as they complete their tour.
Clearly, different robots may stop in a δ_i-tile for different amounts of time, depending on
the number of their targets in the tile. Similarly, in the second round the inert robots pack
in the upper half of the grid while the active robots visit their destinations in the lower
half.
We describe in detail the operations performed by the robots in the first round of
Stage i, omitting the description for the second round, which is virtually identical. For
1 ≤ j ≤ 4^(i-1), let T_j denote the j-th δ_i-tile based on a snake-like ordering of the δ_i-tiles
which proceeds alternatively left-to-right and right-to-left. Note that the δ_i-tiles in the
upper half of the grid are those of indices j ≤ 4^(i-1)/2.
(1) The robots relocate in the grid so that all active robots end up in T_1, while the inert
robots pack to tiles T_j, with 4^(i-1)/2 < j ≤ 4^(i-1).
(2) The following sequence of substeps is repeated 17 · 4^(i-1) times, in parallel for each
j ≤ 4^(i-1)/2:
(2.a) Each active robot with unvisited targets in T_j visits one arbitrary such target.
(2.b) Robots that visited all of their targets in T_j move to the top-left quadrant of
T_j, while each robot that has still some unvisited targets in T_j moves to the
bottom-right quadrant of T_j.
(2.c) Robots that visited all of their targets in T_j move to the top-left quadrant of
T_j+1.
(2.d) (Only for j = 4^(i-1)/2.) The robots newly arrived in T_j+1 pack to the
tile's bottom-right quadrant.
Lemma 6 For 2 ≤ i ≤ k, Stage i is correct and is completed in O(√(4^i n)) time.
Proof: Fix some i, 2 ≤ i ≤ k, and consider the first round of Stage i (the argument for the
second round is identical). Note that, since i ≥ 2, the δ_i-tiles in the lower half of the grid
comprise (n/4^(i-1)) · 4^(i-1)/2 = n/2 nodes altogether, hence all inert robots fit in such
tiles. Step (1) is easily accomplished in O(√n) time through the balancing and sorting
techniques described in Section 2. Substep (2.a) is executed independently within each δ_i-tile
and entails one execution of the single-target protocol of Section 2, which takes O(√δ_i)
time. Substeps (2.b), (2.c) and (2.d) can be executed through balancing and sorting within
δ_i-tiles and simple relocations between adjacent δ_i-tiles, in time O(√δ_i). The correctness
of these substeps follows from observing that at any time there cannot be more than
δ_i/4 robots in each tile T_j, with 1 ≤ j ≤ 4^(i-1)/2 + 1, and that at the beginning of Substep (2.d)
all robots in T_(4^(i-1)/2+1) are packed into the tile's bottom-right quadrant.
It remains to show that 17 · 4^(i-1) iterations of Step (2) are sufficient for each active robot
to visit its targets in the upper half of the grid. Consider an active robot and let d_j be the
number of its targets in T_j, for 1 ≤ j ≤ 4^(i-1)/2. The robot will stay in such a tile for
at most d_j + 1 iterations, hence the total number of iterations needed to visit all of its
targets is at most Σ_{j=1}^{4^(i-1)/2} (d_j + 1) ≤ 4^(i+1) + 4^(i-1)/2 ≤ 17 · 4^(i-1).
Thus, Step (2) takes O(4^(i-1) √δ_i) = O(√(4^i n)) time overall. Since the complexity of Step (2)
dominates that of Step (1), O(√(4^i n)) is also the running time for the entire round. 2
We have:
Theorem 2 Any instance of the MRG(n; m; d) problem with m ≤ n/4 and d ≤ n can be
solved in optimal Θ(√(dn)) time, in the worst case.
Proof: Stage 0 and Stage 1 can be correctly performed in Θ(√n) time by Theorem 1. The
correctness and complexity of Stage i, for 2 ≤ i ≤ k, is established in Lemma 6. The
total cost is therefore O(Σ_{i=0}^{k} √(4^i n)) = O(√(4^k n)) = O(√(dn)), and the optimality
follows from Lemma 5. 2
3.2 Extension to m ≤ n(1 − ε)
Let ε be a fixed constant, 0 < ε < 1, and consider an instance of the MRG(n; m; d) problem
with m ≤ n(1 − ε) robots. Let c be the smallest power of 2 larger than or equal to 3/ε, and
let δ = n/c². We regard the grid as conceptually partitioned into c² δ-tiles of size δ, with
each δ-tile in turn divided into four (δ/4)-tiles of size δ/4. Let the (δ/4)-tiles be numbered
from 1 to 4c². In order to visit their targets the robots can employ the following protocol.
(1) The robots pack first towards the left in each row and then towards the bottom in
each column, thus ending in a "staircase" configuration descending from northwest
to southeast in the grid. Let A and B denote the two δ-tiles at the northeast corner
of the grid (see Figure 2).
Comment: It will be shown that at the end of Step (1), A and B are empty.
(2) The following sequence of substeps is repeated for each index i, 1 ≤ i ≤ 4c²:
(2.a) Let R_i denote the set of robots occupying the i-th (δ/4)-tile. All robots move
within the grid so that robots of R_i end up in δ-tile A, while all other robots
return to their positions at the beginning of the step.
Figure 2: Configuration of robots after Step (1).
(2.b) For c² times do:
(2.b.1) The robots of R_i visit their targets within the δ-tile where they currently
reside, using the protocol from the previous subsection.
(2.b.2) The robots in each δ-tile move to corresponding positions in the next δ-tile,
according to some predetermined Hamiltonian circuit of the δ-tiles.
(2.c) The robots of R_i move back to the grid nodes which they occupied at the
beginning of Substep (2.a).
The running time of the above protocol is analyzed in the following theorem.
Theorem 3 Any instance of the MRG(n; m; d) problem with m ≤ n(1 − ε) and d ≤ n can
be solved in optimal Θ(√(dn)) time, in the worst case.
Proof: The row and column packings of Step (1) are easily accomplished in O(√n) time.
At the end of Step (1), δ-tiles A and B must be empty, or otherwise there would be more than
n(1 − 1/c)(1 − 2/c) > n(1 − 3/c) ≥ n(1 − ε)
robots on the grid, which is impossible, since m ≤ n(1 − ε). Consider an arbitrary iteration
of Step (2). By exploiting the empty δ-tile B and repeatedly moving robots between
adjacent δ-tiles, with each robot maintaining the same relative positions within the tile,
it is not difficult to orchestrate a relocation on the grid where δ-tile A is always kept
empty, and for any fixed δ-tile T, the robots initially in T end up in a tile adjacent to
A after O(√n) time. Based on this observation it is easily seen that Substep (2.a) can
be executed in time O(√n). By Theorem 2, Substep (2.b.1) takes O(√(dδ)) time. Since
an empty δ-tile is always present in the grid during Step (2), it is easy for the robots to
execute Substep (2.b.2) in time O(√n). Since c is a constant, the c² iterations of
Substep (2.b) take O(√(dn)) time overall. Finally, Substep (2.c) mirrors Substep (2.a),
hence it takes time O(√n) as well. Thus, one iteration of Step (2) is completed in time
O(√(dn)). The theorem follows because Substeps (2.a)-(2.c) are iterated 4c² times. 2
4 Conclusions
We studied the complexity of moving a set of m robots with limited sensory capabilities
in a multi-robot grid system of size √n × √n. We provided an O(√(dn)) deterministic
protocol that governs the movement of m ≤ n(1 − ε) robots, where each robot may visit
up to d distinct locations, but no two robots visit the same location. We also proved a
lower bound showing that the protocol is optimal. It would be interesting to extend the
protocol to handle any number m ≤ n of robots. Note that our protocol could be
employed to this end if the rules governing the system were relaxed so as to allow a robot
to stay initially outside the grid for an arbitrary amount of time, and to disappear from
the grid as soon as it visits its targets. Finally, another interesting open problem concerns
the extension of the protocol to allow distinct robots to visit the same location. To the
best of our knowledge, no result is known for this setting, except for the trivial O(n)-time
protocol based on a Hamiltonian tour of the grid nodes.
Acknowledgments
The authors would like to thank Elena Lodi, Fabrizio Luccio, Linda
Pagli and Ugo Vaccaro for many interesting discussions and comments during the early
stages of this work, and the referees, who provided a number of useful suggestions.
References
[Bat68] K. E. Batcher, "Sorting networks and their applications," Proceedings, AFIPS Spring Joint Computer Conference, 1968.
[Lei92] F. T. Leighton, "Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes," Morgan Kaufmann, 1992.
[Mor66] G. M. Morton, "A computer oriented geodetic data base and a new technique in file sequencing," IBM Ltd., 1966.
[NS95] I. Newman and A. Schuster, "Hot-potato algorithms for permutation routing," 1995.
[PU96] S. Preminger and E. Upfal, "Safe and efficient traffic laws for mobile robots," Proceedings, 5th Scandinavian Workshop on Algorithm Theory, 1996.
[Sib95] J. F. Sibeyn, "Overview of mesh results," 1995.
[ST92] Y. Shoham and M. Tennenholtz, "On traffic laws for mobile robots," Artificial Intelligence Planning Systems: Proceedings of the First International Conference, 1992.
[ST95] Y. Shoham and M. Tennenholtz, "On social laws for artificial agent societies: Off-line design," 1995.
--CTR
Kieran T. Herley , Andrea Pietracaprina , Geppino Pucci, One-to-Many routing on the mesh, Proceedings of the thirteenth annual ACM symposium on Parallel algorithms and architectures, p.31-37, July 2001, Crete Island, Greece | social laws;multirobot grid system;computational agents;routing protocol |
566580 | Creating models of truss structures with optimization. | We present a method for designing truss structures, a common and complex category of buildings, using non-linear optimization. Truss structures are ubiquitous in the industrialized world, appearing as bridges, towers, roof supports and building exoskeletons, yet are complex enough that modeling them by hand is time consuming and tedious. We represent trusses as a set of rigid bars connected by pin joints, which may change location during optimization. By including the location of the joints as well as the strength of individual beams in our design variables, we can simultaneously optimize the geometry and the mass of structures. We present the details of our technique together with examples illustrating its use, including comparisons with real structures. | Introduction
A recurring challenge in the field of computer graphics is the creation
of realistic models of complex man-made structures. The
standard solution to this problem is to build these models by hand,
but this approach is time consuming and, where reference images
are not available, can be difficult to reconcile with a demand for
visual realism. Our paper presents a method, based on practices
in the field of structural engineering, to quickly create novel and
physically realistic truss structures such as bridges and towers, using
simple optimization techniques and a minimum of user effort.
"Truss structures" is a broad category of man-made structures,
including bridges (Figure 1), water towers, cranes, roof supports,
building exoskeletons (Figure 2), and temporary
construction frameworks. Trusses derive their utility and distinctive
look from their simple construction: rod elements (beams)
which exert only axial forces, connected concentrically with welded
or bolted joints.
Figure 1: A cantilever bridge generated by our software, compared
with the Homestead bridge in Pittsburgh, Pennsylvania.
These utilitarian structures are ubiquitous in the industrialized
world and can be extremely complex and thus difficult to model.
For example, the Eiffel Tower, perhaps the most famous truss structure
in the world, contains over 15,000 girders connected at over
30,000 points [Harriss 1975] and even simpler structures, such as
railroad bridges, routinely contain hundreds of members of varying
lengths. Consequently, modeling of these structures by hand can be
difficult and tedious, and an automated method of generating them
is desirable.
1.1 Background
Very little has been published in the graphics literature on the
problem of the automatic generation of man-made structures.
While significant and successful work has been done in recreating
natural structures such as plants, the trunks and roots of
trees, and corals and sponges (summarized in the review paper
by Prusinkiewicz [1993]), these studies emphasize visual plausibility
and morphogenetic realism over structural optimality. Parish
and Müller recently described a system to generate cityscapes using
L-systems [Parish and Müller 2001], but this research did not
address the issue of generating individual buildings for particular
purposes or optimality conditions. Computer-aided analysis of simple
truss structures, coupled with graphic displays of deflection or
changing stresses, has been used for educational purposes within
the structural engineering [MacCallum and Hanna 1997] and architecture
[Piccolotto and Rio 1995] communities, but these systems
are not intended for the design of optimal structures.
In the field of structural engineering, the use of numerical optimization
techniques to aid design dates back to at least 1956 when
linear programming was used to optimize frame structures based
on plastic design theory [Heyman 1956]. Since then, extensive
research has been done in the field of "structural synthesis," as
it is sometimes called, although its penetration into industry has
been limited [Topping 1983; Haftka and Grandhi 1986]. Techniques
in the structural engineering literature generally fall into
three broad categories: geometry optimization, topology optimiza-
tion, and cross-sectional optimization (also known as "size optimization")
[Kirsch 1989].
Cross-sectional optimization, the most heavily researched of
these three techniques, assumes a fixed topology and geometry (the
number of beams and joints, their connectivity, and locations) and
finds the shape of the beams that will best, either in terms of mass or
stiffness, support a given set of loads. The parameters of the structure
that are changed during optimization, called the design vari-
ables, are properties that affect the cross-sectional area of a beam
such as, for the common case of tubular elements, the radius and
thickness of each tube. An example of this technique in practice is
the design of the beams that are used to build utility transmission
towers [Vanderplaats and Moses 1977], where savings of only a few
hundred dollars in material costs, when multiplied by the thousands
of towers needed for a new transmission route, can be a substantial
gain.
Topology optimization addresses the issues that size optimization
ignores; it is concerned with the number and connectivity of the
beams and joints, rather than their individual shape. Because structure
topology is most easily represented by discrete variables, numerical
techniques used for topology optimization are quite different
from those used for continuous size and geometry optimization
problems. Prior approaches to this problem have included genetic
programming [Chapman et al. 1993], simulated annealing [Reddy
and Cagan 1995], and "ground structure methods" wherein a highly
connected grid of pin-joints is optimized by removing members
based on stress limits [Hemp 1973; Pederson 1992]. A review of
these discrete parameter optimization problems in structural engineering
can be found in Kirsch [1989].
The third category of structural optimization, geometry opti-
mization, lies between the extremes of size and topology optimiza-
tion. The goal of geometry optimization is to refine the position,
strength and, to some extent, the topology of a truss structure. Because
these problems are highly non-linear, geometry optimization
does not have as lengthy a history as size and topology optimization
[Topping 1983]. A common approach, called "multi-level de-
sign," frames the problem as an iterative process wherein the continuous
design variables are optimized in one pass, and then the
topology is changed on a second pass [Spillers 1975]. It is from
this literature that we draw the inspiration for our work.
For civil and mechanical engineers, the ultimate goal of structural
optimization is a highly accurate modeling of reality. Thus,
common to all these techniques, no matter how different their im-
plementations, is a desire for strict physical accuracy. In the field
Figure
2: A cooling tower at a steel mill created by our software
compared with an existing tower. From left to right: a real cooling
tower, our synthesized tower, and the same model with the obstacle
constraints shown.
of computer graphics, however, we are often just as concerned with
the speed of a solution and its visual impact as with its accuracy.
For example, although important to structural engineers, optimizing
the cross-sections of the members used to build a bridge would
be considered wasted effort in the typical computer graphics appli-
cation, as the subtle differences between different shapes of beams
are hardly noticeable from cinematic distances. Thus, rather than
being concerned only with our model's approximation to reality,
we are interested in optimizing the geometry and topology of truss
structures with the goals of speed, user control and physical realism.
2 Representing Truss Structures
Truss structures consist of rigid beams, pin-connected at joints, exerting
axial forces only. This simple form allows us to represent
trusses as a connected set of three-dimensional particles where every
beam has exactly two end-points, and joints can accommodate
any number of beams. In our model, the pin-joints are classified
into three types: free joints, loads, and anchors. Anchors are points
where beams are joined to the earth, and thus are always in force
balance. Loads are points at which external loads are being applied,
e.g. the weight of vehicles on a bridge. Lastly, free joints are pin
joints where beams connect but which are not in contact with the
earth and have no external loads.
2.1 Constructing the Model
Before solving for an optimal truss structure, we must have a clear
idea of what purpose we want the structure to serve. For example, a
bridge must support some minimum weight along its span, the Eiffel
Tower must support observation decks, and roof trusses need to
support the roofing material. We model these support requirements
as loads, which are placed by the user. Although most structural
loads are continuous (e.g. a planar roadbed), approximating loading
as a set of discrete load-points is standard practice within the civil
and structural engineering disciplines [Hibbeler 1998].
In addition to having external loads, every truss structure must
also be supported at one or more points by the ground. For real
structures, the location of these anchors is influenced by topogra-
phy, geology, and the economics of a particular site, but for our
modeling purposes their positions are specified by the user.
After placement of the anchors and loads, a rich set of free joints
is automatically added and highly connected to all three sets of
Figure
3: From top to bottom: the data specified by the user (loads
are depicted as green spheres and anchors as white cones); the free
joints added by the software above the loads; the automatically generated
initial connections (beams). This structure was the initial
guess used to create the bridge shown in Figure 4.
Figure
4: A typical railroad bridge and similar truss bridge designed
by our software.
joints (see Figure 3). Specifically, our software generates free joints
on a regular three-dimensional grid defined by the locations and
spacings of the load and anchor points. Currently, our user interface
asks the user to provide the number of vertical "layers" of free
joints and whether these joints are initially placed above or below
the loads (for example, they are placed above the road in Figure 3).
In addition to this rectilinear placement, we also experimented with
random placement of the free joints, distributing them in a spherical
or cubic volume surrounding the loads and anchors. We found that
random placement did not affect the quality of the final results, but
could greatly increase the time needed for convergence.
After generating the free joints, the software automatically
makes connections between all three sets of joints, usually connecting
each joint to its nearest neighbors using a simple O(N²) algo-
rithm. Note that during optimization, beams may change strength
and position, but new beams cannot be added. In this sense, we are
using a ground structure technique, as described in Hemp [1973].
The initial structure does not need to be practical or even stable; it
is merely used as a starting point for the optimization problem.
3 Optimizing Truss Structures
The most important property of any structure, truss or not, is that it
be stable; i.e. not fall down. For a truss structure to be considered
stable, none of the joints can be out of force balance. Because our
model consists of rigid beams exerting axial forces only, we can
describe the forces acting on any joint i as
    f_i = Σ_{j=1..n_i} γ_j l_j + m_i g,    (1)
where γ_j is the workless force being exerted by beam j, γ is the
vector of these forces for all beams, g is the gravity vector, m_i is the
mass of joint i, n_i is the number of beams attached to joint i, and
l_j is the vector pointing from one end of beam j to the other (the
direction of this vector is not important as long as it is consistent).
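A small numerical sketch of this force-balance bookkeeping is given below (an added illustration; NumPy, the data layout and the sign convention that positive γ pushes the endpoints apart are assumptions of the sketch, not the paper's code).

import numpy as np

def force_residuals(P, beams, gamma, mass, g=np.array([0.0, 0.0, -9.81])):
    # P[i]: position of joint i; beams[j] = (a, b): endpoint joints of beam j;
    # gamma[j]: workless force of beam j; mass[i]: lumped mass at joint i.
    R = mass[:, None] * g                     # gravity acting on each joint
    for j, (a, b) in enumerate(beams):
        l = P[b] - P[a]                       # l_j: one end of the beam to the other
        R[a] -= gamma[j] * l                  # beam pushes joint a away from b ...
        R[b] += gamma[j] * l                  # ... and joint b away from a
    return R                                  # stability requires R[i] = 0 at every
                                              # free joint and load joint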
Given an objective function G (usually the total mass, but perhaps
containing other terms), we can optimize a truss structure subject
to stability constraints by solving the following problem:
    min_q G(q)  subject to  f_i = 0,  i = 1, ..., N_J,    (2)
where N_J is the number of joints and q is the vector of design vari-
ables. If we wish to do simple cross-sectional optimization, this
design vector is merely γ. If we wish to solve the more interesting
geometry optimization problem, we also include the positions of all
the free joints in q (see Section 3.2 for more details).
In order to avoid physically meaningless solutions, we should
also constrain the maximum force that any member can exert:
    |γ_j| ≤ γ_max,  j = 1, ..., N_B,    (3)
where N_B is the total number of beams. We constrain the absolute
value of γ_j because the sign of the workless force will be positive
when the beam is under compression and negative when the beam
is under tension.
In equation 1, we approximate the mass of a joint as half the sum of the
masses of the beams that connect to it plus, in the case of load joints,
whatever external loads may be applied at that joint. Although in
reality the mass of a truss structure (exclusive of the externally applied
loads) is in its beams rather than its pin joints, this "lumping
approximation" is standard practice in structural engineering and
is considered valid as long as the overall structure is significantly
larger than any component member [Hibbeler 1998]. This assumption
reduces the number of force balance constraints by a factor of
two or more, depending on the connectivity of the structure, as well
as allowing us to model members as ideal rigid beams.
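The lumping approximation can be sketched in a few lines (an added illustration; the even split of each beam's mass between its two endpoints follows the reading above and is an assumption of this sketch).

def lumped_joint_masses(num_joints, beams, beam_mass, external_load):
    # external_load[i] is the applied mass at joint i (zero for free joints and anchors).
    m = [external_load[i] for i in range(num_joints)]
    for j, (a, b) in enumerate(beams):
        m[a] += 0.5 * beam_mass[j]            # half of the beam's mass to each end joint
        m[b] += 0.5 * beam_mass[j]
    return m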
3.1 Mass Functions
For a given structural material, the mass of a beam is a function
of its shape (length and cross-section). Under tension, the required
cross-sectional area of a truss member scales linearly with the force
the member exerts [Popov 1998]. Therefore, the volume of a beam
under tension will be a linear function of length and force and, assuming
a constant material density, so will the mass:
    m_T(j) = -k_T γ_j ||l_j||,    (4)
where k_T is a scaling factor determined by the density and tensile
strength of the material being modeled, ||l_j|| is the length of beam
j, and γ_j is the workless force it exerts (note that γ_j here will be
negative because the beam is in tension).
Figure
5: A depiction of Euler buckling under a compressive load.
Under compression, long slender beams are subject to a mode of
failure known as Euler buckling, wherein compressive forces can
cause a beam to bend out of true and ultimately fail (see Figure 5).
The maximum axial compressive force that can be supported
by a beam before it undergoes Euler buckling is governed by the
following:
    F_E = π² E I / ||l_j||²,    (5)
where ||l_j|| is the length of beam j, I is its area moment of inertia,
and A is its cross-sectional area. E is the Young's Modulus of the
material being modeled and r is the radius of gyration, which describes
the way in which the area of a cross-section is distributed
around its centroidal axis.
Because the cross-sectional area of a member is proportional to
the square of r, we can rewrite equation 5 in terms of A and r as
    F_E = π² E A r² / ||l_j||².    (6)
Because we wish to use beams with minimum mass, γ_j (for a given
beam) will be equal to F_E, and our approximation of the mass
function under compression is
    m_C(j) = k_C √(γ_j) ||l_j||²,    (7)
where ρ is the density of the material, ||l_j|| is the length of beam
j, and γ_j is the workless force being exerted by the beam. k_C is a
scaling factor determined by ρ and the constants in equation 6.
Assuming the structures are made of steel I-beams, and using
units of meters and kilograms, we use the values of 5×10⁻⁶ for k_T
and 1.5×10⁻⁵ for k_C in equations 4 and 7 respectively. Because we
plan to do continuous optimization, we wish to avoid any discontinuities
in the mass function and thus we use a nonlinear blending
function between m_T and m_C, centered around γ_j = 0, to smooth the
transition.
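The blending function itself is not specified in the text; the sketch below combines the two branches (equations 4 and 7 as reconstructed above) with a smooth tanh weight, which, along with its width parameter, is purely an assumption of this illustration.

import math

K_T, K_C = 5e-6, 1.5e-5          # scaling factors quoted above (steel I-beams)

def beam_mass(gamma, length, width=1.0):
    # Tension branch (gamma < 0) and compression branch (gamma > 0), blended
    # smoothly around gamma = 0; the tanh blend is an assumption of this sketch.
    m_tension = -K_T * gamma * length
    m_compression = K_C * math.sqrt(max(gamma, 0.0)) * length ** 2
    w = 0.5 * (1.0 + math.tanh(gamma / width))   # 0 near tension, 1 near compression
    return (1.0 - w) * m_tension + w * m_compression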
3.2 Cross-Sectional and Geometry Optimization
Assuming that the objective function G(q) in equation 2 is merely
a sum of the masses of the joints, and that the vector of design
variables q consists only of γ, we can use the equations developed
in the last section to perform a simple version of size optimization.
Solving this non-linear, constrained optimization problem will give
us, for a fixed geometry, the minimum mass structure that is strong
enough to support its own weight in addition to the user-specified
external loads.
We are interested, however, in the more useful geometry optimization
problem, where both the strengths of the beams and the
geometry of the overall structure can be changed. To allow the simultaneous
optimization of the sizing and geometry variables, we
add the positions of the free joints to the vector of design variables,
q. We do not add the anchor or load positions to the design vector
because the locations of these two types of joints are set by the user.
Adding these variables to the optimization problem does not
change the form of the equations we have derived, but for numerical
stability we now also constrain the lengths of all the beams to
be above some small value:
    min_q G(q)  subject to  f_i = 0,  i = 1, ..., N_J,  and  ||l_j|| ≥ l_min,  j = 1, ..., N_B,    (8)
where l min was set to 0.1 meters in the examples reported here. Because
the optimization algorithm we use (sequential quadratic pro-
gramming, described in detail in Gill [1981]) handles inequality
constraints efficiently, these length constraints add minimal cost to
the solution of the problem.
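For concreteness, the following sketch sets up a toy two-dimensional instance of problem (8) and hands it to SciPy's SLSQP routine, which is a sequential quadratic programming method. SciPy, the simplified smooth mass model, and the tiny geometry are assumptions of this illustration, not the paper's own implementation.

import numpy as np
from scipy.optimize import minimize

# Joints 0 and 1 are anchors, joint 2 is free (its position is a design
# variable) and joint 3 carries a 100 kg external load.
anchors = np.array([[0.0, 0.0], [4.0, 0.0]])
load_pos = np.array([2.0, -2.0])
beams = [(0, 2), (1, 2), (2, 3)]
G = np.array([0.0, -9.81])
L_MIN, GAMMA_MAX, K = 0.1, 1e6, 1e-4
NB = len(beams)

def positions(q):
    return np.vstack([anchors, q[NB:], load_pos])   # q = [gamma_0..2, free_x, free_y]

def total_mass(q):                                  # simplified smooth objective
    gamma, P = q[:NB], positions(q)
    return sum(K * np.linalg.norm(P[b] - P[a]) * np.sqrt(g * g + 1e-6)
               for (a, b), g in zip(beams, gamma))

def residuals(q):                                   # force balance at joints 2 and 3
    gamma, P = q[:NB], positions(q)
    R = np.zeros((4, 2))
    R[3] = 100.0 * G                                # external load at joint 3
    for (a, b), g in zip(beams, gamma):
        l = P[b] - P[a]
        R[a] -= g * l
        R[b] += g * l
    return np.concatenate([R[2], R[3]])

constraints = [
    {"type": "eq", "fun": residuals},
    {"type": "ineq", "fun": lambda q: GAMMA_MAX - np.abs(q[:NB])},
    {"type": "ineq", "fun": lambda q: np.array(
        [np.linalg.norm(positions(q)[b] - positions(q)[a]) - L_MIN for a, b in beams])},
]
q0 = np.concatenate([np.zeros(NB), [2.0, -1.0]])    # initial guess
result = minimize(total_mass, q0, method="SLSQP", constraints=constraints)
print(result.success, result.fun)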
Similar to the methods discussed in Pederson [1992], we use
a multilevel design algorithm consisting of two steps. First, we
solve the optimization problem described in equation 8. Having
found a feasible (if not yet globally optimal) structure, the system
then merges any pairs of joints that are connected to one another
by a beam that is at the minimum allowable length, because these
two joints are now essentially operating as one. The system could
also, if we wanted, eliminate beams that are exerting little force
(i.e. those with small |γ_j|), as they are not actively helping to support
the loads. However, we prefer to leave such "useless" beams
in the model so as to leave open more topology options for future
iterations.
After this topology-cleaning step, the results are examined by
the user and, if they are not satisfactory for either mass or aesthetic
reasons, the optimization is run again using this new structure as
the starting point. In practice, we have found that a single iteration
almost always gave us the structures we desired, and never did it
take more than three or four iterations of the complete cycle to yield
an appealing final result.
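The topology-cleaning pass can be sketched as follows (an added illustration; the union-find bookkeeping and the tolerance parameter are implementation choices of this sketch): any pair of joints connected by a beam at, or very near, the minimum allowed length is merged into a single joint.

def merge_short_beams(positions, beams, l_min, tol=1e-6):
    parent = list(range(len(positions)))
    def find(i):                                  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in beams:
        dist = sum((pa - pb) ** 2 for pa, pb in zip(positions[a], positions[b])) ** 0.5
        if dist <= l_min + tol:
            parent[find(a)] = find(b)             # merge the two joints
    remap = {}
    for i in range(len(positions)):
        remap.setdefault(find(i), len(remap))
    new_positions = [None] * len(remap)
    for i, p in enumerate(positions):
        new_positions[remap[find(i)]] = p         # keep one representative position
    new_beams = {tuple(sorted((remap[find(a)], remap[find(b)]))) for a, b in beams}
    new_beams = [e for e in new_beams if e[0] != e[1]]   # drop collapsed beams
    return new_positions, new_beams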
3.3 Objective and Constraint Functions
Although the procedure outlined above generates good results,
there are many situations in which we want a more sophisticated
modeling of the physics or more control over the final results. Constrained
optimization techniques allow us to add intuitive "control
knobs" to the system very easily.
For example, in addition to constraints on the minimum length
of beams, we can also impose constraints on the maximum length.
These constraints imitate the real-world difficulty of manufacturing
and shipping long beams. (Due to state regulations on truck flat-
beds, girders over 48 feet are not easily shipped in North Amer-
ica.) Other changes or additions we have made to the objective and
constraint functions include: minimizing the total length of beams
(rather than mass), preferentially using tensile members (cables)
over compressive members, and symmetry constraints, which couple
the position of certain joints to each other in order to derive
symmetric forms.
Another particularly useful class of constraint functions are "ob-
stacle avoidance" constraints, which forbid the placement of joints
or beams within certain volumes. In this paper, we have used two
different types of obstacle constraints: one-sided planar and spherical
constraints. One-sided planar constraints are used to keep joints
and beams in some particular half-volume of space; for example to
keep the truss-work below the deck of the bridge shown in Figure
7. Implementation of this constraint is simple: given a point on the
plane r and a normal n pointing to the volume that joints are allowed
to be in, we constrain the distance from the free joints p_i to
this plane with N_J new constraints:
    (p_i − r) · n ≥ 0,  i = 1, ..., N_J.    (9)
Figure 6: From left to right: initial structure, tripod solution, derrick
solution.
Similarly, to keep the beams and joints outside of a spherical vol-
ume, we add a constraint on each beam that the distance between
the center of this sphere and the beam (a line segment) must be
greater than or equal to some radius R. This distance formula may
be found in geometry textbooks, such as Spanier [1987]. For optimization
purposes, we approximated the gradients of this function
with a finite-difference method.
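Both obstacle constraints can be written as small functions that must evaluate to a non-negative number for a feasible design; the sketch below (an added illustration with names of our choosing) uses the standard point-to-segment distance formula for the spherical case.

import numpy as np

def halfspace_constraint(p, r, n):
    # Joint p must lie on the allowed side of the plane through r with normal n.
    return np.dot(p - r, n)

def sphere_beam_constraint(a, b, center, radius):
    # Distance from the sphere center to segment a-b must be at least radius.
    ab = b - a                                   # beam endpoints never coincide (l_min > 0)
    t = np.clip(np.dot(center - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab                         # nearest point on the segment
    return np.linalg.norm(center - closest) - radius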
The primary use of obstacle constraints is to allow the user to
"sculpt" the final structure intuitively while preserving realism, but
they can also serve to produce novel structures by creating local
optima. Figure 6 shows, from left to right, an initial structure, infeasible
because it violates the obstacle avoidance constraint, and
two designs produced as solutions from slightly different (random)
initial guesses for γ. In each image, the red sphere is the volume
to be avoided, the green sphere at the top is the load that must be
supported, and the cylinders are the beams, colored cyan or tan depending
on whether they are in compression or tension. The anchors
are located at the three points where the structure touches the
ground. The middle, tripod solution hangs the mass on a tensile
member (a cable) from the apex of a pyramid, and the derrick solution
on the right supports the mass in a much more complex way
(this solution has a mass about 3 times that of the tripod). Both of
these solutions are valid and both exist as real designs for simple
winches and cranes. Although it is a general concern that nonlinear
optimizations can become trapped in sub-optimal local solutions, in
our experience this has not been a problem. When, as in the above
example, the system produces a locally optimal design, we have
found that a few additional iterations of our algorithm are sufficient
to find a much better optimum.
4 Results
We have described a simple, physically motivated model for the
rapid design and optimization of models of truss structures. The
following examples illustrate the output of this work and demonstrate
the realistic and novel results that can be generated.
4.1 Bridges
Some of the most frequently seen truss structures are bridges.
Strong and easy to build, truss bridges appear in a variety of shapes
and sizes, depending on their use. A common type of truss bridge,
called a Warren truss, is shown in Figure 4 with a photo of a real
railroad bridge. The volume above the deck (the surface along
which vehicles pass) was kept clear in this example by using constraints
to limit the movement of the free joints to vertical planes.
The initial guess to generate this bridge was created automatically
from thirty user-specified points (the loads and anchors),
shown in Figure 3. From this description of the problem, the system
automatically added 22 free joints (one above each load point) and
connected each of these to their eight nearest neighbors, resulting in
a problem with 228 variables (22 free points and 163 beams). The
Figure
7: A bridge with all trusswork underneath the deck.
Figure
8: A perspective and side view of a through-deck cantilever
bridge.
final bridge design consists of 48 joints and 144 beams, some of the
particles and members having merged or been eliminated during the
topology-cleaning step. Similar procedures were used to generate
the initial guesses for all of our results.
Using this same initial structure, but with constraints that no material
may be placed above the deck, we generated a second bridge,
shown in Figure 7. Note that the trusswork under the deck has converged
to a single, thick spine. This spine is more conservative of
materials than the rectilinear trusswork in Figure 4, but in the earlier
case the constraints to keep the joints in vertical planes prevented it
from arriving at this solution.
Another type of bridge, a cantilever truss, is shown in Figure 1.
As with the bridge shown in Figure 7, the cantilever bridge was
constrained to have no material above the deck, and the joints were
further constrained to move in vertical planes only. However, the
addition of a third set of anchor joints in the middle of the span has
significantly influenced the final design of this problem. This bridge
is shown with a real bridge of the same design: the Homestead High
Level Bridge in Pittsburgh, Pennsylvania.
The bridge in Figure 8 was generated with the same starting point
and the same objective function as that in Figure 1, but without
the "clear deck" and vertical-plane constraints. Removal of these
constraints has allowed the structure to converge to a significantly
different solution, called a through-deck geometry.
4.2 Eiffel Tower
A tall tower, similar to the upper two-thirds of the Eiffel Tower
is shown in Figure 9. This tower was optimized from an initial
rectilinear set of joints and beams, automatically generated from
eight user-specified points (the four anchor sites and four loads
at the top). We concentrated on the top two-thirds of the Tower
because the design of the bottom third is dominated by aesthetic
demands. Similarly, the observation decks, also ornamental, were
not sythesized.
Figure 9: Our trusswork tower, compared with a detail of the Eiffel Tower. Because they are ornamental and not structural, the observation decks are not included in our tower.
Figure 10: Three types of roof trusses: from left to right, cambered Fink, composite Warren, and Scissors. Illustrations of real trusses are shown at the top of each column. The lower two trusses in each column were generated with our software.
4.3 Roof Trusses
The frameworks used to support the roofs of buildings are perhaps
the most common truss constructions. We have generated three different
types of roof trusses for two different roof pitches. In Figure 10
we show each category of truss (cambered Fink, composite
Warren, and Scissors) in its own column, at the top of which is an
illustration of a real example. For a given pitch, all three types of
truss were generated from the same initial geometry. The variation
in the results is due to the use of different objective functions, namely total mass
(cambered Fink and Scissors) and total length of beams (composite
Warren), and to different roof masses (the Scissors trusses have a roof
that weighs twice as much as the other two types of trusses).
4.4 Michell Truss
The Michell Truss is a well-known minimum-weight planar truss
designed to support a single load with anchors placed on a circle in
the same plane. Although impractical because of the varying lengths
and curved beams needed for an optimal solution, the Michell truss
has been a topic of study and a standard problem for structural
optimization work for nearly a century. We have reproduced the
Michell truss with our system, starting from a grid-like initial guess
and arriving at a solution very close to the analytical optimum (Fig-
ure 11).
4.5 Timing Information
Sequential quadratic programming, relying as it does on the iterative
solution of quadratic sub-problems, is a robust and fast method
for solving non-linear optimization problems. Even with complicated problems
containing thousands of variables and non-linear constraints,
the total time to optimize any of the above examples from auto-generated
initial guesses varied between tens of seconds and less
than fifteen minutes on a 275 MHz R10000 SGI Octane. Specifying
the anchor and load points and the locations of obstacles (if any)
rarely took more than a few minutes, and with better user-interface
design this time could be significantly reduced. For comparison,
we timed an expert user of Maya as he constructed duplicates of
the cooling tower (Figure 2), the Warren truss bridge (Figure 4),
and the tower shown in Figure 9 from source photographs of real
structures. We found that modeling these structures at a comparable
level of detail by hand took an hour for the cooling tower, an hour
and a half for the bridge and almost three hours for the Eiffel Tower.
Although informal, this experiment showed that our method has the
potential to speed up the construction of models of truss structures
enormously, while simultaneously guaranteeing physical realism.
5 Summary and Discussion
We have described a system for representing and optimizing
trusses, a common and visually complex category of man-made
structures. By representing the joints of the truss as movable points,
and the links between them as scalable beams, we have framed the
design as a non-linear optimization problem, which allows the use
of powerful numerical techniques. Furthermore, by altering the
mass functions of the beams, the objective function, and the con-
straints, we can alter the design process and easily generate a variety
of interesting structures.
Other than the location of anchors and loads, the factor that we
found to have the largest effect on the final results was the use of
obstacle avoidance constraints. Placement of these constraints is a
powerful way of encouraging the production of certain shapes, such
as the volume inside a cooling tower or the clear traffic deck on a
railroad bridge.
Not surprisingly, we also found that the number of free joints
added during the initial model construction could affect the look of
the final structure. However, this effect was largely one of increasing
the detail of the truss-work, rather than fundamentally changing
the final shape. The initial positions of the free joints, however,
made little difference to the final designs, although they could affect
the amount of time required for the optimization. In cases where the
initial positions did make a difference, it was generally the result of
constraints (such as obstacle avoidance) creating a "barrier" to the
movement of free joints during optimization and thus creating local
minima. In our experience, however, a few additional iterations of
the algorithm were sufficient to get out of these local minima and
find a global solution.
We also found that beyond some minimum number of beams per
joint (three or four), additional connections to more distant joints
had a negligible effect on the final designs. We attribute this lack
of impact to two factors. First, because we are minimizing the total
mass of the structure (or occasionally only the total length of
all members), shorter beams connected to closer joints will be used
more readily than longer ones, especially if they are under compres-
sion. Secondly, by allowing unused beams to "fade away" as the
force they exert drops to zero, initial structures with many beams
per joint can become equivalent to structures with fewer beams per
joint, allowing both more and less complex initial structures to converge
to the same answer.
Although successful at capturing the geometric and topological
complexity of truss structures, our work does not account for all
Figure 11: From left to right: the initial structure from which we began the optimization, our final design, and an optimal Michell truss after an illustration in Michell [1904]. The red sphere on the left of each image is an obstacle (on the surface of which are the anchors), and the green sphere on the right is the load which must be supported. Gravity points down.
the details of true truss design. For example, a better objective
function would calculate the actual cost of construction, including
variables such as connection costs (the cost to attach beams to a
joint) and the cost of anchors, which varies depending on terrain
and the force they must transmit to the ground. Nor does our model
explicitly include stress limits in the materials being modeled (al-
though the constants k_T and k_C in equations 4 and 7 are an implicit
approximation). Similarly, a more complex column formula
than the simplified Euler buckling formula, such as those described
in Popov [1998], might capture more nuances of real design. True
structural engineering must also take into account an envelope of
possible load forces acting on a structure, not a single set as we
have implemented. In each of these cases, our approximations were
made not for technical reasons (for example, load envelopes could
be handled with multi-objective optimization techniques, and stress
limits with more inequality constraints), but rather because the visual
detail they add is not commensurate with the added complexity
and expense of the solution.
Eventually, we would like to be able to include more abstract
aesthetic criteria in the objective function. Our current system incorporates
the concepts of minimal mass and symmetry (via con-
straints), but many elements of compelling design are based on less
easily quantifiable concepts such as "harmony," the visual weight
of a structure, and use of familiar geometric forms. Because our
technique is fast enough for user guidance, implementing even a
crude approximation to these qualitative architectural ideals would
allow users more flexibility in design and the ability to create more
imaginative structures while still guaranteeing their physical realism.
Acknowledgements
We would like to thank Joel Heires for helping us with the Maya
modelling tests. The photo of the Homestead High Level Bridge in
Figure
1 is Copyright 2002 Pittsburgh Post-Gazette Archives. All
rights reserved. Reprinted with permission. The photo of the steel-
mill cooling tower in Figure 2 is Copyright 2002 Bernd and Hilla
Becher. All rights reserved. The photo of the bridge in Figure 4 is
Copyright 2002 Bruce S. Cridlebaugh. All rights reserved.
--R
Genetic algorithms as an approach to configuration and topology design.
Structural shape optimization-a survey
The Tallest Tower - Eiffel and the Belle Epoque
Optimum Structures.
Design of beams and frames for minimum material consumption.
Quarterly of Applied Mathematics
Structural Analysis
Optimal topologies of structures.
learning package for teaching structural design.
The limits of economy of material in frame structures.
Topology optimization of three dimensional trusses.
Design education with computers.
Engineering Mechanics of Solids.
Modeling and visualization of biological structures.
An improved shape annealing algorithm for truss topology.
An Atlas of Functions.
Iterative Structural Design.
Shape optimization of skeletal structures: A review.
Automated optimal geometry design of structures.
Practical Optimization.
--TR
Structural shape optimization - a survey
An atlas of functions
Procedural modeling of cities | constrained optimization;physically based modeling;truss structures;nonlinear optimization |
566638 | The SAGE graphics architecture. | The Scalable, Advanced Graphics Environment (SAGE) is a new high-end, multi-chip rendering architecture. Each single SAGE board can render in excess of 80 million fully lit, textured, anti-aliased triangles per second. SAGE brings high quality antialiasing filters to video rate hardware for the first time. To achieve this, the concept of a frame buffer is replaced by a fully double-buffered sample buffer of between 1 and 16 non-uniformly placed samples per final output pixel. The video output raster of samples is subject to convolution by a 5x5 programmable reconstruction and bandpass filter that replaces the traditional RAMDAC. The reconstruction filter processes up to 400 samples per output pixel, and supports any radially symmetric filter, including those with negative lobes (full Mitchell-Netravali filter). Each SAGE board comprises four parallel rendering sub-units, and supports up to two video output channels. Multiple SAGE systems can be tiled together to support even higher fill rates, resolutions, and performance. | Overview
SAGE's block diagram is seen in Figure 1. In this diagram we have
expanded out the external buses, internal FIFOs, and internal
load-balancing switches so that the overall data flow and
required sorting may be more easily seen at the system level.
SAGE's inter-chip connections are typically unidirectional, point-
to-point, source-synchronous digital interconnects. The top half of
SAGE's diagram is fairly similar to other sort-last architectures:
command load balancing is performed across parallel transform and
rasterize blocks, followed by the sort-last tree (the Sched chips) interfacing
to the frame buffer. In SAGE, however, the frame buffer is
replaced with a sample buffer containing 20 million samples. On the
output side of the sample buffer, SAGE introduces an entirely new
graphics hardware pipeline stage that replaces the RAMDAC: a sample
tree followed by parallel Convolve chips that apply a 5x5
programmable reconstruction and bandpass filter to rasters of sam-
ples. Each Convolve chip is responsible for antialiasing a separate
vertical column of the screen; the finished pixels are emitted from
the Convolves in video raster order.
3.2 Command Distribution
At the top of the pipeline, the Master chip performs DMA from the
host to fetch OpenGL command and graphics data streams. The
DMA engine is bidirectional, and contains an MMU so that application
data can reside anywhere in virtual memory: no locking of application
data regions is necessary. For geometric primitives,
streams of vertex data are distributed in a load-balanced way to the
four parallel render pipelines below.
3.3 Rendering: Transform, Lighting, Setup, Rasterization
Each render pipeline consists of two custom chips plus several
memory chips. The first custom chip is a MAJC multi-processor
[Tremblay et al. 2000], the second is the Rasterize chip, which performs
set-up, rasterization, and textured drawing.
Many previous architectures are sort-middle (following the taxonomy
of [Molnar et al. 1994]): they have parallel transform, lighting,
and set-up pipelines, but recombine the streams into a one-primitive-
at-a-time distributed drawing stage. In architecting SAGE, certain
bandwidth advantages motivated our choice of a sort-last architec-
ture: as the average pixel size of application triangles shrinks, the size
of the set-up data becomes larger than the actual sample data of the
triangle. This also improves efficiency by allowing each rasterization
chip to generate all the samples in a triangle, rather than just a fraction
of them, as occurs with interleaved rasterization.
The MAJC chip contains specialized vertex data handling circuits that
support two fully programmable VLIW CPUs. These CPUs are programmed
to implement the classic graphics pipeline stages of trans-
form, clip check, clipping, face determination, lighting, and some geometric
primitive set-up operations. The special vertex data circuitry
handles vertex strip and mesh connectivity data, so that the CPUs only
see streams of non-redundant vertex data most of the time. Thus redundant
lighting computations are avoided, and vertex re-use can asymptotically
reduce the required vertex processing operations to 1/2 of a
vertex per triangle processed when vertex mesh formats are used.
To support a sample buffer with programmable non-uniform sample
positions, the hardware fill algorithms must be extended beyond
simple scan-line interpolation. Generalizations of plane equation
evaluation are needed to ensure correctly sampled renderings of
geometric primitives. Furthermore, the sample fill rate has to run 8
to times faster than the rasterize fill of a non-supersampled machine
just to keep up. The aggregate equivalent commodity DRAM
bandwidth of SAGE's eight 3DRAM memory interleaves is in excess
of 80 gigabytes per second.
The Rasterize chip rasterizes textured triangles, lines, and dots into
the sample buffer at the current sample density. It also performs
some imaging functions and more traditional raster-op and window
operations. Each MAJC Rasterization pipeline can render more
than 20 million lit textured supersampled triangles per second. Each
Rasterizer chip has its own dedicated 256 megabytes of texture
memory. This supports 256 megabytes of user texture memory, at
four times the bandwidth of a single pipe, or up to one gigabyte of
texture memory, when applications use the OpenGL targeted texture
extension (this is a common case for volume visualization ap-
plications).
For textured triangles, the Rasterize chip first determines which
pixels are touched (even fractionally) by the triangle, applies layers
of (MIP-mapped, optionally anisotropic filtered) texturing to each
pixel, determines which (irregular) sample positions are within the
triangle, and interpolates the color, Z, and alpha channel data to
each sample point. The output is a stream of sample rgbaZ packets
with a screen pixel xy address and sample index implying the sam-
ple's sub-pixel location within that pixel. Each Rasterize chip has
two external output buses so that the first stage routing of sample
data to sample memory is performed before the samples leave the
Rasterize chip.
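The sketch below illustrates, in software, the kind of per-sample work this paragraph describes: testing each irregular sample position of a pixel against a triangle and interpolating a value to the covered samples. It uses conventional 2D edge functions and a plane equation; it is not SAGE's actual fill algorithm, and all names are hypothetical.

```cpp
// Generic illustration (not SAGE's hardware algorithm) of testing irregular
// sub-pixel sample positions against a triangle using 2D edge functions, then
// interpolating a per-sample value from a plane equation z(x, y) = a*x + b*y + c
// fitted to the triangle's vertex values.
struct Sample { float dx, dy; };          // sub-pixel offset from the sample table
struct Vtx    { float x, y, z; };

static float edge(const Vtx& a, const Vtx& b, float px, float py) {
    // > 0 when (px, py) lies to the left of edge a->b (counter-clockwise triangle).
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

// Emit the samples of one pixel (at integer location px, py) covered by the triangle.
template <class EmitFn>
void shadePixelSamples(const Vtx& v0, const Vtx& v1, const Vtx& v2,
                       int px, int py, const Sample* samples, int density,
                       float a, float b, float c,   // plane equation for the interpolant
                       EmitFn emit) {
    for (int s = 0; s < density; ++s) {
        float x = px + samples[s].dx;     // irregular position from the pattern RAM
        float y = py + samples[s].dy;
        if (edge(v0, v1, x, y) >= 0.0f &&
            edge(v1, v2, x, y) >= 0.0f &&
            edge(v2, v0, x, y) >= 0.0f) {
            float z = a * x + b * y + c;  // per-sample interpolation
            emit(px, py, s, z);           // sample packet: pixel xy + sample index + value
        }
    }
}
```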
3.4 Sort-Last
Below the Rasterize chips lies a network comprising two Sched
chips to route samples produced by any of the Rasterize outputs to
any pixel interleave of the sample buffer below. As samples arrive
in the Sched chip input FIFO from Rasterize chips, they are routed
into the appropriate second stage FIFO based on their destination
memory interleave. The output of each second stage FIFO is controlled
by the load balancing switch for its memory interleave. Each
switch acts like a traffic light at a busy intersection; traffic from one
source is allowed to flow unimpeded for a time while the other
sources are blocked. This (programmable) hysteresis in the flow of
Figure 1: SAGE block diagram. Thick boxes are custom chips. Red boxes are FIFOs. Green circles are load-balancing switches.
sample data from different Rasterize chips ensures good cache locality
within the 3DRAM memories below.
The third layer of FIFOs in the Sched chip is a final sample pre-write
queue in front of a single memory interleave. (An interleave is a
group of 4 3DRAM chips connected to the same pre-write queue.)
The Sched chips snoop this queue to perform 3DRAM cache
prefetches before scheduling the sample writes into the sample
buffer.
Because the parallel rasterized sample streams merge together here,
the Sched chip is also the place where special control tokens enforce
various render-order constraints. For example, most algorithms that
make use of the stencil buffer require at least two passes: one to
prepare the stencil buffer with a special pattern, and then another
pass with sample writes conditionally enabled by the presence of
that special stencil pattern. Clearly all stencil writes of the first pass
must complete before any of the second pass sample writes can be
allowed to go forward. When a given interleave on a given Sched
chip encounters a special synchronization token marking a hard ordering
constraint (e.g., the boundary between the two passes), then
no more samples from that rasterizer will be processed until the other
three rasterizer inputs have also encountered and stopped at the
synchronization token. When this occurs, all samples generated by
primitives that entered the SAGE system before the synchronization
token have been processed (the first pass in our example), and now
it is okay to allow the pending samples that entered the system after
the synchronizing token to proceed. The OpenGL driver knows to
generate this ordering token when it is in unordered rendering mode,
and then sees a command to transition to ordered rendering mode
immediately followed by a command to change back to unordered
rendering mode. Other more complex situations are supported by
more complex special token generation by the driver.
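A schematic software model of this ordering barrier is sketched below: an input that reaches a synchronization token is stalled until all four rasterizer inputs for the interleave have reached it. The data structures and names are hypothetical and greatly simplified relative to the Sched chip.

```cpp
// Schematic sketch (a hypothetical software model, not the Sched chip logic) of
// the ordering barrier described above: a rasterizer input that reaches a SYNC
// token stalls until all four inputs for this memory interleave have reached it.
#include <array>

enum class Kind { SAMPLE, SYNC };
struct Packet { Kind kind; /* sample payload omitted */ };

struct InterleaveScheduler {
    std::array<bool, 4> atSync{};   // which rasterizer inputs are parked at a SYNC token

    // Returns true if the packet from 'input' may proceed to the pre-write queue now.
    bool mayProceed(int input, const Packet& p) {
        if (p.kind == Kind::SYNC) {
            atSync[input] = true;   // park this input at the barrier
        } else if (atSync[input]) {
            return false;           // this input is stalled behind its SYNC token
        } else {
            return true;            // ordinary pre-barrier sample: process normally
        }
        // Release the barrier only once every input has parked at its SYNC token.
        for (bool b : atSync)
            if (!b) return false;
        atSync.fill(false);         // all first-pass samples are done; inputs resume
        return true;
    }
};
```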
As controllers of the 3DRAM chips, the Sched chips also respond to
requests from the Convolve chips for streams of samples to be sent
out over the 3DRAM video output pins to the parallel Convolve chips
to generate the video output.
3.5 The Sample Buffer
The sample buffer consists of 32 3DRAM chips, organized into eight
independent interleaves of four chips each. On the input side, four
3DRAMs share a single set of control, address, and data lines to one
of four sets of memory interleave pins on a Sched chip. On the output
side, each 3DRAM outputs 40-bit samples by double pumping 20
video output pins. Each of these pins has an individual wire to a
Route chip, for a total of 640 wires entering the lower route network.
Logically, the sample buffer is organized as a two dimensional raster
of lists of samples. All lists are the same length, because all pixels
on the screen have the same number of samples. The list-order
of a sample implies its sub-pixel location; the Rasterize and Convolve
chips contain identical sample-location tables accessed by the
sample index, so no space is allocated within the sample buffer for
the sub-pixel location of the sample. The memories are interleaved
per-sample: adjacent samples in a list are in different 3DRAM packages.
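One plausible way to model this per-sample interleaving in software is sketched below. The text states only that adjacent samples of a pixel's list land in different 3DRAM interleaves; the specific modulo-8 mapping here is an assumption for illustration.

```cpp
// Illustrative address mapping for a per-sample-interleaved sample buffer.
// The modulo-8 scheme below is an assumption, not SAGE's documented mapping.
struct SampleAddress {
    int interleave;   // which of the 8 interleaves (4 3DRAMs each)
    int offset;       // location within that interleave
};

SampleAddress mapSample(int pixelX, int pixelY, int sampleIndex,
                        int screenWidth, int sampleDensity) {
    int pixel  = pixelY * screenWidth + pixelX;
    int linear = pixel * sampleDensity + sampleIndex;  // position in the flat raster of lists
    SampleAddress a;
    a.interleave = linear % 8;   // adjacent samples rotate through the 8 interleaves
    a.offset     = linear / 8;
    return a;
}
```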
Unlike first-generation 3DRAM components, the new 3DRAMs used
on SAGE contain an internal 2:1 multiplexer driven by each sam-
ple's window ID, so sample-by-sample double-buffering occurs inside
the 3DRAM chip. Thus only the final rgb alpha/window control
bits emerge in the 40-bit-per-sample output packet.
The size of the sample buffer is enough to support 1280x1024 double-buffered
samples with Z at a sample density of 8, or 1920x1200 at a
sample density of 4. The high sample densities require correspondingly
high render bandwidth into the sample buffer. This was achieved by a
new generation of 3DRAM [Deering et al. 1994]. Because 3DRAM performs
z-buffer compare and alpha buffer blending internally, the traditional
z-buffer read-modify-write operation is simplified into just a write
operation. The important operation of clearing the sample buffer
for a new frame of rendering is also potentially adversely affected by
the high sample density, but this too is greatly accelerated by the
3DRAM chips; initializing all samples in a 1280x1024x8 sample raster
takes less than 200 usec (less than 2% of a 76 Hz frame time).
3.6 Sample Raster Delivery
The 640 outputs of the sample buffer feed into an array of 10 Route
chips. Each Route chip is a 2-bit slice of a router function. Each Route
chip connects to 2 output data pins from each of the 32 3DRAMs, and
can redirect this data to any of the four Convolve chips attached to it
below. Because of the need for the Convolve chips to be fed a contiguous
vertical swath of the pixel interleaved sample buffer, samples
are read from the sample buffers in quarter scan line wide, one pixel
high bursts directed at one of the four Convolve chips. (More details
will be discussed in the Convolve section.) It is the job of the Route
chip to absorb these bursts into internal FIFOs, and then dribble them
back out to their destination Convolve chip.
3.7 Convolution, CLUT, Video Timing
Finally, the four Convolve chips perform the reconstruction and
band limiting filtering of the raster stream of samples, producing
pixels that are fed into the next Convolve chip before final video
output. The Convolve chips replace the digital portion of the RAM-
DAC; they contain color look-up and gamma tables, as well as the
video timing generator, cursor logic, and genlock interface. The
Convolve chips do not contain D/A converters. Instead, the Convolve
video outputs are digital, to support various existing and
emerging digital video interfaces. Two high quality external D/A
converters and an S-video interface are on the SAGE video daughter
board to support analog video devices.
The traditional graphics hardware taxonomy refers to this section as
display; however, the RenderMan term imaging pipeline may be a
more accurate description of this new functionality.
The next several sections describe the convolution processing in
more detail, starting off with a discussion of previous attempts to
implement video rate antialiasing.
4 CONVOLUTION INTRODUCTION
For over a decade now, users of most (batch) photorealistic rendering
software have been able to obtain high quality antialiased imag-
ery, usually by means of various supersampling algorithms. How-
ever, for real-time hardware systems, cost constraints have precluded
the deployment of all but the most simplistic approximations to
these algorithms. Fill rate limitations make real-time generation of
enough samples challenging. Restrictions in hardware polygon fill
algorithms can preclude sub-pixel spatially variant sampling. Memory
costs and bandwidth limits have prohibited use of double-buffered
supersampled frame buffers. Finally, the computational cost of
real-time antialiasing reconstruction filters has limited hardware
implementations to box or tent filters, which are inferior to most
software reconstruction filters.
Various alternatives to stochastic supersampling have been tried
over the years in attempts to avoid high hardware costs, but to date
all such attempts have limited the generality of the rendering and
have not seen much use in real-time general-purpose graphics hardware
systems. Their use has been confined to applications whose
structure could be adequately constrained: flight simulation and
some video games.
Once the non-uniform supersampling approach is taken, a number
of other rendering effects can be performed by applications through
the use of multi-pass algorithms and user programmable sample
mask patterns. These include motion blur, depth of field, and anisotropic
texture filtering, subject to supported sample densities. In this
paper we do not directly address these features; rather, we focus
on the basic back-end architecture required to support filtered supersampled
buffers.
5.1 PREVIOUS WORK, SOFTWARE
Antialiasing has a rich and detailed history. The mainstream approach
in recent years has been to evaluate the image function at
multiple irregularly spaced sample points per pixel, followed by applying
a reconstruction filter and then resampling with a low-pass
filter. Originally referred to as stochastic supersampling, the basic
idea is to trade off visually annoying aliasing artifacts (jaggies) for
less visually perceptible noise. [Glassner 1995] contains an excellent
survey and discussion of the many variants of this approach
that have been implemented over the years. The pioneering commercial
software implementation of this approach is Pixar's Photo-Realistic
RenderMan [Cook et al. 1987][Upstill 1990].
PREVIOUS WORK, HARDWARE
5.2 Flight simulators, back-to-front sorting-based
algorithms
Real-time antialiasing has been a requirement of flight simulation
hardware for several decades. However, most of the early work in
the field took advantage of known scene structure, usually the ability
to constrain the rendering of primitives to back-to-front order. But
these algorithms do not scale well as the average scene complexity
grows from a few hundred to millions of polygons per frame.
5.3 Percentage Coverage Algorithms
Some systems, for example [Akeley 1993][Winner et al. 1997],
have employed polygon antialiasing algorithms based on storing
extra information per pixel about what polygon fragments cover
what fractions of the pixel. In principle, algorithms of this class can
produce higher quality results than even supersampling techniques,
because the exact area contribution of each polygon fragment to the
final visible pixel can be known. In practice, hardware systems can
only afford to maintain a limited amount of shape information
about a limited number of polygon fragments within each pixel. For
scenes consisting of small numbers of large polygons, most polygons
are very much greater in area than a pixel, and the vast majority
of pixels are completely covered by just one or two poly-
gons. Occlusion edges and silhouettes would then have their jaggies
removed. With care, even corner cases when more than two polygons
of one continuous surface land within one pixel can often be
merged back into the single polygon case.
However, with today's typical polygons shrinking toward micropolygon
size, such algorithms rapidly become confused, causing unacceptable
visible artifacts.
5.4 Multi-pass Stochastic Accumulation Buffers
The first attempted support for general full scene antialiasing independent
of render order in near-real-time hardware was the multi-pass
stochastic accumulation buffer [Deering et al. 1988][Haeberli
and Akeley 1990]. The approach here was to render the scene multiple
times with different sub-pixel initial screen offsets, and then
combine these samples with an incremental filter into an accumulation
buffer before final display. However, the multiple passes and
the overhead of filtering and image copying reduced the performance
of the systems by an order of magnitude or more, while still
adding substantial cost for the (deeper pixel) accumulation buffer.
As a result, while the technique has been supported by multiple
vendors, it has never found much use in interactive applications.
Also, because the sub-pixel sample positions correlate between pix-
els, the final quality does not match that of software systems.
Supersampling
Some architectures have implemented subsets of the general super-sampling
antialiasing algorithm. [Akeley 1993] and [Montrym et al.
1997] implement a one through eight sample-per-pixel rendering
into a single-buffered sample buffer. When sample rendering is
complete, the samples within each pixel are all averaged together
and transferred to an output pixel buffer for video display. The combined
reconstruction and low-pass filter is thus a 1x1 box filter, and
does not require any multiplies. The 1x1 region of support also
eliminates the need for neighboring pixels to communicate during
filtering. While the quality does not match that of batch software
renderers, the results are appreciably better than no supersampling,
and have proven good enough to be used in flight simulation and
virtual set applications, among others. [Eyles et al. 1997] implemented
supersampled rendering with a 1x1 weighted filter.
At the lower end, some simple processing for antialiasing is beginning
to show up in game chips. [Tarolli et al. 1999] appears to be
an implementation of a 2x2 single buffered supersampled buffer,
but it is not clear if other than box filtering is supported. The nVidia
GeForce3 supports sample densities of either 2 or 4, with either a
1x1 box filter or a 3x3 tent filter [Domin 2001]. The resultant
quality is better than no antialiasing at all, but still far from the quality
of batch photorealism software. The frame rates, however, do
suffer almost linearly in proportion to sample density.
The OpenGL 1.3 specification does contain support for supersam-
pling, but only in the context of applying the filtering before the
render buffer to display buffer swap.
6 SAGE SUPERSAMPLING ISSUES
6.1 Programmable Nonuniform Sample Pattern
An important component of high-quality supersampling based anti-aliasing
algorithms is the use of carefully controlled sample patterns
that are not locally periodic. Today's best patterns are constrained
random perturbations of uniform grids. Software algorithms can afford
the luxury of caching tens of thousands of pixel area worth of
pre-computed sample patterns. On-chip hardware is much more severely
space constrained; SAGE only supports a pattern RAM of 64
entries (2x2 pixels x 16 samples), each holding a 6-bit x and 6-bit y sub-pixel
offset. However, the effective non-repeating size of this pattern is extended
to 128x128 pixels by the use of a 2D hardware hash function
that permutes access to the pattern entries. The effectiveness of this
hash function can be seen in Figure 5, where each large colored dot
corresponds to a sample. Because the table is so small, it is easily
changeable in real-time on a frame-by-frame basis, supporting temporal
perturbation of the sample pattern.
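The sketch below illustrates the kind of lookup described above: a 64-entry table of 6-bit sub-pixel offsets addressed through a 2D hash of the pixel location. SAGE's actual hash function is not given in the text, so the permutation used here is only a stand-in for illustration.

```cpp
// Sketch of the sample-position lookup described above. The table holds 64
// entries (2x2 pixels x 16 samples) of 6-bit x and 6-bit y sub-pixel offsets;
// a 2D hash of the pixel address permutes table accesses so the pattern does
// not visibly repeat. The permutation below is only a stand-in.
#include <cstdint>

struct Offset6 { uint8_t x, y; };          // 6-bit sub-pixel offsets (0..63)

struct SamplePattern {
    Offset6 table[2][2][16];               // the 64-entry pattern RAM

    Offset6 lookup(int pixelX, int pixelY, int sampleIndex) const {
        // Stand-in 2D hash: permute the sample index based on the pixel address.
        uint32_t h = static_cast<uint32_t>(pixelX) * 0x9E3779B1u ^
                     static_cast<uint32_t>(pixelY) * 0x85EBCA77u;
        int s = (sampleIndex + static_cast<int>(h & 15)) & 15;
        return table[pixelY & 1][pixelX & 1][s];
    }
};

// A sample's position is then the pixel corner plus offset/64 in each axis.
inline float subPixel(uint8_t off6) { return off6 / 64.0f; }
```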
Note that in the SAGE system the sample tables for the frame currently
being displayed are stored in the Convolve chips, while the
sample tables for the frame currently being rendered are stored in
the Rasterizer chips. If the tables are not static, system software
must ensure that they are updated at the appropriate time boundaries.
7 CONVOLVE CHIP ARCHITECTURE DETAILS
One of the primary ways in which our architecture differs from previous
systems is that there is no attempt to compute the antialiased
pixels on the render side of the frame buffer. As far as the sample
buffer is concerned, the output display device is capable of displaying
supersamples; it is up to the back end reconstruction filter pipe-line
to convert streams of supersamples into antialiased pixels on
the fly at full high-resolution video rates.
The peak data rates required to support this are impressive: the
frame buffer has to output 1.6 billion samples per second, or approximately
8 gigabytes per second of data. Real-time high-quality
filtering of this much data is beyond the capabilities of today's silicon
in a single chip. Thus, we had to find a way to spread the convolution
processing of this fire-hose of data across multiple chips.
As seen in Figure 1, our convolution pipeline is split up into four
chips. Each chip is assigned a different vertical swath of the
screen's samples. Because reconstruction filters of up to 5x5 are
supported, each of these vertical swaths must overlap their horizontal
neighbors by up to 2 pixels (half the filter width). The final video
stream is assembled as video is passed from chip to chip; each chip
inserts its portion of each scan line into the aggregate stream. The
last chip delivers the complete video stream. (An optional second
video stream also emerges from this last chip.)
The 5x5 filter size also implies that each sample will potentially be
used in up to 25 different pixel computations. To avoid re-fetching
samples from off-chip, 6 swath-lines worth of sample data is
cached on each Convolve chip. (This RAM consumes half the active
area of the chip.)
The internal architecture of the chip is shown in Figure 2. The video
generation process for each chip starts with the generation of a raster of
convolution center (output pixel center) locations across and down
each swath. As the convolution center location moves, sample data is
transferred from the swath-line buffers into a 5x5 filter processor array.
A schematic of a filter processor is shown in Figure 3. Each filter
processor is responsible for all the samples from one pixel from the
sample buffer; the 5x5 array has access to all the samples that may
contribute to a single output pixel. The filter processor computes
the contribution of its samples to the total convolution; the partial
results from all 25 filter processors are then summed to form the un-normalized
convolution result. Because of the nonuniform, non-locally
repeating properties of good sample patterns, it is not feasible
to cache pre-computed convolution filter coefficients.
Figure 2: Convolution chip architecture.
Figure 3: Filter Processor Detail.
Instead, each filter processor contains circuitry for dynamically computing custom filter coefficients for arbitrary sample locations. It also contains
the multiplier-accumulator that actually weights its samples.
This filter coefficient computation proceeds as follows. First, the
sample location relative to the convolution center is computed by
subtracting the sample xy location (generated by the sample pattern
RAM) from the convolution center xy location. Squaring and summing
these delta xy components results in the squared radial distance
of the sample location from the convolution center. This
squared distance is scaled by the square of the inverse filter radius;
results greater than unity force a zero filter coefficient. Next the
squared distance is encoded into a 3-bit exponent, 5-bit mantissa
(+1 hidden bit) floating-point representation. This 8-bit floating-point
number is then used as an index into a (RAM) table of squared
distance vs. filter coefficients. From a numeric linearity point of
view, the squaring and floating-point encoding nearly cancel out,
resulting in accurate, relatively equally-spaced filter coefficients.
This can be seen in a plot of the synthesized filter values vs. distance
in Figure 4.
The filter coefficient output of this table is a signed 14-bit floating-point
number, which is used to weight the rgba sample values. The
multiplied result is converted back into a 27-bit fixed-point number,
and directed into a set of summing trees. A separate running sum of
applied filter coefficients is similarly calculated.
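A software model of this per-sample coefficient path is sketched below; the hardware's fixed-point and 8-bit floating-point formats are omitted, and the linear table index stands in for the floating-point encoding described above.

```cpp
// Software model of the coefficient path described above (bit-exact hardware
// formats are omitted). The 256-entry table stores the filter cross section
// as a function of squared distance, as in the text.
struct FilterTable {
    float coeff[256];     // filter value indexed by encoded squared distance
    float invRadiusSq;    // square of the inverse filter radius
};

struct Accum { float r = 0, g = 0, b = 0, a = 0, coeffSum = 0; };

inline void accumulateSample(const FilterTable& f, Accum& acc,
                             float sampleX, float sampleY,   // from the sample pattern RAM
                             float centerX, float centerY,   // convolution center
                             float r, float g, float b, float a) {
    float dx = sampleX - centerX;
    float dy = sampleY - centerY;
    float d2 = (dx * dx + dy * dy) * f.invRadiusSq;  // scaled squared radial distance
    if (d2 >= 1.0f) return;                          // beyond the filter radius: zero coefficient
    float w = f.coeff[static_cast<int>(d2 * 255.0f)]; // may be negative (filter lobes)
    acc.r += w * r;  acc.g += w * g;  acc.b += w * b;  acc.a += w * a;
    acc.coeffSum += w;                               // running sum of applied coefficients
}
```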
Thus our hardware places only two restrictions on the reconstruction
filter: it must be radially symmetric in the convolution space,
and the filter radial cross section has to be quantized to 256 values.
Note that through non-uniform video and/or screen space scalings,
elliptical filters in physical display space can be supported. Separable
filters have theoretical advantages over radial filters, but radial
filtering was less complex to implement in hardware.
Technically our filter is a weighted average filter, because of how
we handle filter normalization. We perform a floating-point reciprocal
operation on the sum of the filter coefficients, and a normalizing
multiply on r, g, b, and a. There was an unexpected advantage
in using dynamic normalization: in simulations, the error compared
to the exact solution came out well below expectations. This is because
slight errors in coefficient generation produce a similar bias
in the normalization value and are mostly canceled out. Remaining
numeric errors in coefficient generation have less perceptual effect
because they are equivalent to a correct coefficient at an incorrect
estimation of the sample distance from the center of the filter (error
in sample position). But samples should be quite representative of
the true underlying image in their vicinity. Most visible errors in antialiased
output are due to a sample just missing a significant
change in the underlying image (e.g. from black to white). The contribution
to image output errors due to errors in computing filter co-efficient
values is quite small by comparison.
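Continuing the sketch above, the weighted-average normalization amounts to one reciprocal and four multiplies per output pixel:

```cpp
// The weighted-average normalization described above, as a sketch: the sum of
// applied filter coefficients is inverted once and used to scale r, g, b, and a.
// Samples that contribute no coefficient (outside the radius, or excluded as
// described later for non-antialiased pixels) never enter coeffSum, so the
// effective kernel still integrates to one.
inline void normalize(Accum& acc) {
    float inv = 1.0f / acc.coeffSum;
    acc.r *= inv;  acc.g *= inv;  acc.b *= inv;  acc.a *= inv;
}
```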
Hyper-accurate filtering can preserve the quantization present in the
original samples (sometimes called contouring). To mitigate con-
touring, we dither 12-bit rgba samples computed during rendering
to the 10-bit rgba values actually stored in the sample buffer.
Figure 4: Numerical accuracy of filter representation (1/3, 1/3 Mitchell-Netravali filter; filter weight vs. filter radius from 0 to 2.0). The barely visible jaggies in this curve are the quantization errors.
Convolution reconstruction of dithered sample values effectively reverses
this dithering, achieving 12 bits of accuracy per rgba component.
7.1 Video Outputs
Up to two simultaneous, potentially asynchronous, video rasters
can be generated in parallel by partitioning the four Convolve chips
into two subsets; both video streams will emerge from the digital
video out ports of the last Convolve chip. The swap circuit shown
in
Figure
allows each Convolve chip to add its results to either of
the incoming streams, and pass the other through unmodified.
(There is also a post-processing swap not shown.)
One use for this second channel is to be able to read the antialiased
image back into the computer, through the outer ring bus shown in
Figure 1. Without this option, the host computer would have no way
to get a copy of the antialiased image! This is useful when performing
antialiased rendering intended for later reuse as reflection maps, etc.
The Convolve video timing circuit can run as either a sync master
or as a sync slave genlocked to an external sync source. The two
video streams can be sub-regions of a what the window system and
rendering system think of as a single display (useful for tiling two
lower-resolution projectors/displays). Alternatively, the two video
sources can be two separate, potentially asynchronous, image regions
with potentially different sample densities.
7.2 Video Resizing
A key benefit of the SAGE architecture is that video resolution is determined
by the convolution hardware, not by rendering hardware.
Thus the same hardware used for antialiasing also provides an extremely
high quality video rescaler, with better filtering quality than
is possible with an external scaler because it operates on the original
samples, not pixels, and SAGE correctly performs the filtering in
linear light space. One use of this is to generate NTSC video of arbitrarily
zoomed and panned sub-regions of a higher resolution display,
as might be used to document a software program. A more important
use is in systems with real-time guarantees: to conserve fill rate, the
actual size of the image rendered can be dynamically reduced, and
then interpolated back up to the fixed video output size. So a flight
simulator using a 1280x1024 video format might actually be rendering
at 960x768 when the load gets heavy, saving nearly half the fill
time. The system described in [Montrym et al. 1997] also supports
dynamic video resizing, but uses a simple tent filter, and performs
the filtering in a non-linear (post-gamma) light space.
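The saving quoted above can be checked directly: rendering at 960x768 instead of 1280x1024 touches 56.25% as many pixels, i.e. a saving of nearly half the fill.

```cpp
// Quick check of the fill-rate saving quoted above.
constexpr double reduced = 960.0 * 768.0;     //   737,280 pixels
constexpr double full    = 1280.0 * 1024.0;   // 1,310,720 pixels
static_assert(reduced / full == 0.5625, "56.25% of the original fill");
```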
7.3 Fully Antialiased Alpha Channel
SAGE's sample filtering algorithm operates not only on the rgb
channels, but also on stored double-buffered alpha if enabled. For
example, for virtual set applications this means that SAGE automatically
generates a very high quality soft key signal for blending
antialiased edges of virtual objects in front of physical objects,
as well as blending variably transparent rendered objects in front of
physical objects.
Because it is programmable, the choice of reconstruction filters can
be left to the end user, but in general we have found that the same
Mitchell-Netravali family of cubic filters [Mitchell and Netravali
1988] used in high quality software renderers work well for hard-
ware. The choice of reconstruction filter has a subjective compo-
nent: some users prefer smoother filters that banish all jaggies at the
expense of a slight blurring of the image; other users desire a filter
that preserves sharpness at the risk of a few artifacts. There is also
a display-device-specific aspect to the choice of reconstruction fil-
ter: to get close to the same end-user look on a CRT vs. a flat panel
LCD display, slightly different filters are needed. While not of general
use, more exotic filters can be used to help simulate the appearance
of special imaging devices.
8.1 Effects of Negative Lobes
One of the prices that must be paid for the use of high quality reconstruction
filters is occasional artifacts (ringing or fringing) due
to the presence of negative lobes. Our filter hardware clamps negative
color components to zero, and it also keeps a histogram of the
frequency and extent of such occurrences. This histogram data can
be used to dynamically reduce the negative lobes of the reconstruction
filter if artifacts are too severe.
9 Legacy and Compatibility Issues
There are several legacy and compatibility issues that SAGE must
address. Many of these are handled by properties associated with
window ID tags that are part of each sample.
One example is support for applications that were programmed assuming
a non-linear light space and/or a pseudo color space. The
non-linear light space is typically a particular gamma space. SAGE
supports these applications by providing pseudo color, direct color,
and non-linear true color LUTs as specified by window ID properties
of samples. In SAGE, these LUTs are applied to samples before
the convolution. Of course, most 3D rendering is performed in linear
light space, and so can by-pass these pre-convolve LUTs. This
pre-processing ensures that all sample inputs to the antialiasing
convolution process are in the same linear light space. After the
convolution process generates (linear light space) pixels, the pixels
are converted to the proper non-linear light space (e.g. gamma cor-
rection) for the particular display device attached to the system.
Not all pixels should be antialiased. Proper emulation of 2D window
system rendering and legacy applications requires accurate emulation
of all those jaggies. Our solution is to disable any filtering
of such pixels via a special property of the window ID tag of the
pixels of such windows. Instead, when so tagged, a sample (typical-
ly the one closest to the convolution center) is chosen to be output
in place of the convolution result. Thus it is possible for the screen
to simultaneously support antialiased and non-antialiased windows.
Because our filter has a 5x5 extent, care must be taken to ensure
that such unfiltered pixels do not contribute any samples to nearby
filtered pixels. This is the case, for example, when a non-antialiased
window occludes an antialiased window. Once again the dynamic
filter normalization circuit comes to the rescue; we simply don't apply
any filter coefficients from aliased pixels within the 5.5 window
of an antialiased pixel, and still get unit volume under the ker-
nel. The same approach is also used to eliminate artifacts at the visible
video border, in place of the traditional approach of adding an
extra non-visible strip of 2 pixels all around the full screen.
Other legacy issues include proper support of traditional antialiased
lines when also subject to supersampling and filtering. Our goal is
to allow as much as possible for existing applications to move to
full scene antialiased operation with minimal source code changes.
10.1 Images
Figure 5 is an image from the SAGE debugging simulator, and
shows the details of our sampling pattern for sample density 16 rendering.
The intensity of each dot corresponds to the computed sample
value for rgb; the green lines are triangle tessellation boundaries;
the faint red grid lines are the pixel boundaries. 11 triangles are
shown: a 12-segment radius-3 pie wedge with one slice missing.
Because SAGE's native output environment is a display, the next
set of images are digital photos of functional SAGE hardware driving
a CRT screen. Figures 6 through 10 are shots of a portion of a
1280x1024 CRT display. Each shows the same portion of the same
object, a honeybee. The differences are in the sample count and reconstruction
filter. Figure 6 shows one (uniformly spaced) sample
per pixel, with no reconstruction filtering. Figures 7 through 10 are
rendered using a sample density of 8. Figures 7 and 10 use the 4x4
Mitchell-Netravali 1/3, 1/3 filter of Figure 4. Figure 8 uses a diameter-4
cylinder filter, and shows considerable blur. Figure 9 uses a
Laplacian filter, and shows enhanced edges. Figure 10 is a wider
(approximately 800x800 pixel) shot of the bee.
10.2 Comparison to RenderMan
During SAGE's development, Pixar's Photorealistic RenderMan
was used to verify the quality of the antialiasing algo-
rithms. Custom RenderMan shaders were written to mimic the different
lighting algorithms employed. The same scene descriptions,
camera parameters, sampling rates, and reconstruction filters were
used to generate images from both renderers. The resulting images
cannot be expected to be numerically identical at every pixel, primarily
because of the different sample patterns used, as well as the
different numeric accuracies employed. (PRMAN uses full 32-bit
IEEE floating-point arithmetic internally.) So as a control, we also
ran PRMAN at a sample density of 256. Numerically, comparing
our hardware 16 sample rendering with that of PRMAN, fewer than
1% of the pixels differed in value by more than 6% (the contribution
of a single sample). However, about the same variance was
seen between the 16-sample and 256 sample PRMAN images. This
explains the visual results: in general, expert observers could not
determine which image was rendered by which system.
10.3 Data Rates and Computational Requirements
A double-buffered sample buffer supporting 8x supersampled
1280x1024 imagery requires storage of over 20 million samples
(approximately an eighth of a gigabyte, including single-buffered
Z). For 76 Hz video display, because of overheads and fragmentation
effects, we designed in a peak video output bandwidth of 1.6
billion samples per second, or 8 gigabytes per second. (Note that the
render fill data rate has to be several times larger than this to support
interesting depth complexity scenes at full frame rates.)
A 5x5 filter at a sample density of 4 requires 25*4*4 floating-point
multiply-adds per output pixel, or 800 operations per pixel. A similar
number of operations are needed to generate all the filter coefficients
per pixel. At peak video output rates of 250 MHz, the total
operation count per second exceeds 0.4 teraflops. While these are
numbers generated by specialized hardware, it is important to note
that a (much) greater number of flops are consumed by general purpose
computers running equivalent software antialiasing algorithms
for equivalent work.
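These figures can be reproduced with a short back-of-the-envelope calculation, counting each multiply-add as two floating-point operations as the text does:

```cpp
// Check of the operation counts quoted above (a back-of-the-envelope model).
constexpr long long taps         = 5 * 5;            // 5x5 filter footprint
constexpr long long density      = 4;                // samples per pixel
constexpr long long components   = 4;                // r, g, b, a
constexpr long long madsPerPixel = taps * density * components;        // 400
constexpr long long weightOps    = 2 * madsPerPixel;                   // ~800 ops/pixel
constexpr long long coeffOps     = weightOps;        // "a similar number" for coefficient generation
constexpr long long pixelRate    = 250'000'000;      // peak 250 Mpixels/s video output
constexpr long long opsPerSecond = (weightOps + coeffOps) * pixelRate;

static_assert(madsPerPixel == 400, "25 * 4 * 4 multiply-adds per output pixel");
static_assert(opsPerSecond == 400'000'000'000LL, "0.4 teraflops");
```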
10.4 Scalable
The SAGE chip set was designed to scale the performance of a two
chip rendering sub-system into a parallel pipeline rendering system.
Not all the combinatorial variations of chip configurations allowed
by the SAGE architecture have been described in this paper on the
first implementation of a SAGE chip-set based system. In addition,
each current SAGE board has all the necessary hooks to be scaled
at the computer system level, to support even higher fill rates, reso-
lutions, and performance. These include the ability to function as a
sync slave, and synchronization signals for both stereo frame parity
and render buffer flip, as well as some special features enabled by
the architecture of the SAGE Convolve chip. SAGE is not unique
in this respect; one can also tile multiple commodity PC solutions
together. But with SAGE, one starts with a much more powerful
building block with high geometry and fill rates, large texture
stores, and that already performs high quality supersampled anti-aliasing
SAGE is a complex multi-chip machine. Details of textured render-
ing, lighting, picking, texture read-back, context switching, etc.
were re-implemented for SAGE, often somewhat differently than
has been done before. In this short paper, we chose to focus on the
major effects that the antialiasing algorithm had on the architecture;
this is not to detract from other areas of 3D graphics hardware
where the implementers pushed the envelope as well.
Complete SAGE prototype hardware is up and running, with
OpenGL rendering and full scene antialiasing with arbitrary filters
as described in this paper. The board is shown in Figure 11.
13 CONCLUSIONS
A new high end architecture and implementation for 3D graphics
rendering, SAGE, has been described. The performance goal of
over 80 million lit, textured, antialiased triangles per second has
been met. We have also achieved our goal of producing a hardware
antialiasing system whose images are numerically and perceptually
indistinguishable from images generated by the antialiasing portion
of leading software renderers. This is achieved through the use of a
hardware double-buffered sample buffer with on-the-fly video-out
spatial filtering, capable of implementing non-uniform supersampling
with cubic reconstruction filters.
ACKNOWLEDGEMENTS
Thanks to Dean Stanton and Dan Rice for programming, proofread-
ing, and photo composition. Thanks to Clayton Castle for help with
video recording. Thanks to the entire SAGE development team,
without whom SAGE would not be possible.
--R
RealityEngine Graphics.
Course notes of CS448A
The Triangle Processor and Normal Vector Shader: A VLSI system for High Performance Graphics.
FBRAM: A new Form of Memory Optimized for 3D Graphics.
Principles of Digital Image Synthesis.
The Accumulation Buffer: Hardware Support for High-Quality Rendering
Reconstruction Filters in Computer Graphics.
A Sorting Classification of Parallel Rendering
The MAJC Architecture
The RenderMan Companion.
Hardware Accelerated Rendering Of Antialiasing Using A Modified A-buffer Algorithm
Figure 5: Sample pattern at density 16.
Figures 6-10: Bee renderings with varying sample densities and reconstruction filters.
Figure 11: Photo of prototype SAGE board.
--TR
The Reyes image rendering architecture
The accumulation buffer: hardware support for high-quality rendering
Reality Engine graphics
A Sorting Classification of Parallel Rendering
PixelFlow
InfiniteReality
Hardware accelerated rendering of antialiasing using a modified a-buffer algorithm
The triangle processor and normal vector shader
Reconstruction filters in computer-graphics
Principles of Digital Image Synthesis
The MAJC Architecture
--CTR
David Zhang , Mohamed Kamel , George Baciu, Introduction, Integrated image and graphics technologies, Kluwer Academic Publishers, Norwell, MA, 2004
T. Whitted , J. Kajiya, Fully procedural graphics, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, July 30-31, 2005, Los Angeles, California
Philippe Beaudoin , Pierre Poulin, Compressed multisampling for efficient hardware edge antialiasing, Proceedings of the 2004 conference on Graphics interface, p.169-176, May 17-19, 2004, London, Ontario, Canada
Woolley , David Luebke , Benjamin Watson , Abhinav Dayal, Interruptible rendering, Proceedings of the symposium on Interactive 3D graphics, April 27-30, 2003, Monterey, California
Gregory S. Johnson , Juhyun Lee , Christopher A. Burns , William R. Mark, The irregular Z-buffer: Hardware acceleration for irregular data structures, ACM Transactions on Graphics (TOG), v.24 n.4, p.1462-1482, October 2005
Timo Aila , Ville Miettinen , Petri Nordlund, Delay streams for graphics hardware, ACM Transactions on Graphics (TOG), v.22 n.3, July
ACM Transactions on Graphics (TOG), v.25 n.2, p.375-411, April 2006 | graphics hardware;rendering hardware;video;graphics systems;anti-aliasing;frame buffer algorithms;hardware systems |
566640 | Ray tracing on programmable graphics hardware. | Recently a breakthrough has occurred in graphics hardware: fixed function pipelines have been replaced with programmable vertex and fragment processors. In the near future, the graphics pipeline is likely to evolve into a general programmable stream processor capable of more than simply feed-forward triangle rendering.In this paper, we evaluate these trends in programmability of the graphics pipeline and explain how ray tracing can be mapped to graphics hardware. Using our simulator, we analyze the performance of a ray casting implementation on next generation programmable graphics hardware. In addition, we compare the performance difference between non-branching programmable hardware using a multipass implementation and an architecture that supports branching. We also show how this approach is applicable to other ray tracing algorithms such as Whitted ray tracing, path tracing, and hybrid rendering algorithms. Finally, we demonstrate that ray tracing on graphics hardware could prove to be faster than CPU based implementations as well as competitive with traditional hardware accelerated feed-forward triangle rendering. | Introduction
Real-time ray tracing has been a goal of the computer-graphics
community for many years. Recently VLSI technology has reached
the point where the raw computational capability of a single chip
is sufficient for real-time ray tracing. Real-time ray tracing has
been demonstrated on small scenes on a single general-purpose
CPU with SIMD floating point extensions [Wald et al. 2001b], and
for larger scenes on a shared memory multiprocessor [Parker et al.
1998; Parker et al. 1999] and a cluster [Wald et al. 2001b; Wald
et al. 2001a]. Various efforts are under way to develop chips specialized
for ray tracing, and ray tracing chips that accelerate off-line
rendering are commercially available [Hall 2001]. Given that real-time
ray tracing is possible in the near future, it is worthwhile to
study implementations on different architectures with the goal of
providing maximum performance at the lowest cost.
# Currently at NVIDIA Corporation
{tpurcell, ianbuck, billmark, hanrahan}@graphics.stanford.edu
In this paper, we describe an alternative approach to real-time ray
tracing that has the potential to outperform CPU-based algorithms
without requiring fundamentally new hardware: using commodity
programmable graphics hardware to implement ray tracing. Graphics
hardware has recently evolved from a fixed-function graphics
pipeline optimized for rendering texture-mapped triangles to a
graphics pipeline with programmable vertex and fragment stages.
In the near-term (next year or two) the graphics processor (GPU)
fragment program stage will likely be generalized to include floating
point computation and a complete, orthogonal instruction set.
These capabilities are being demanded by programmers using the
current hardware. As we will show, these capabilities are also sufficient
for us to write a complete ray tracer for this hardware. As
the programmable stages become more general, the hardware can
be considered to be a general-purpose stream processor. The stream
processing model supports a variety of highly-parallelizable algo-
rithms, including ray tracing.
In recent years, the performance of graphics hardware has increased
more rapidly than that of CPUs. CPU designs are optimized
for high performance on sequential code, and it is becoming
increasingly difficult to use additional transistors to improve performance
on this code. In contrast, programmable graphics hardware
is optimized for highly-parallel vertex and fragment shading
code [Lindholm et al. 2001]. As a result, GPUs can use additional
transistors much more effectively than CPUs, and thus sustain a
greater rate of performance improvement as semiconductor fabrication
technology advances.
The convergence of these three separate trends - sufficient raw
performance for single-chip real-time ray tracing; increasing GPU
programmability; and faster performance improvements on GPUs
than CPUs - make GPUs an attractive platform for real-time ray
tracing. GPU-based ray tracing also allows for hybrid rendering
algorithms; e.g. an algorithm that starts with a Z-buffered rendering
pass for visibility, and then uses ray tracing for secondary shadow
rays. Blurring the line between traditional triangle rendering and
ray tracing allows for a natural evolution toward increased realism.
In this paper, we show how to efficiently implement ray tracing
on GPUs. The paper contains three main contributions:
. We show how ray tracing can be mapped to a stream processing
model of parallel computation. As part of this map-
ping, we describe an efficient algorithm for mapping the innermost
ray-triangle intersection loop to multiple rendering
passes. We then show how the basic ray caster can be extended
to include shadows, reflections, and path tracing.
. We analyze the streaming GPU-based ray caster's performance
and show that it is competitive with current CPU-based
ray casting. We also show initial results for a system including
secondary rays. We believe that in the near future, GPU-based
ray tracing will be much faster than CPU-based ray tracing.
. To guide future GPU implementations, we analyze the compute
and memory bandwidth requirements of ray casting on
GPUs. We study two basic architectures: one architecture
without branching that requires multiple passes, and another
with branching that requires only a single pass. We show that
the single pass version requires significantly less bandwidth,
and is compute-limited. We also analyze the performance of
the texture cache when used for ray casting and show that it is
very effective at reducing bandwidth.
2 Programmable Graphics Hardware
2.1 The Current Programmable Graphics Pipeline
Figure 1: The programmable graphics pipeline: Application, Vertex Program, Rasterization, Fragment Program, Display.
A diagram of a modern graphics pipeline is shown in figure 1.
Today's graphics chips, such as the NVIDIA GeForce3 [NVIDIA
2001] and the ATI Radeon 8500 [ATI 2001] replace the fixed-function
vertex and fragment (including texture) stages with programmable
stages. These programmable vertex and fragment engines
execute user-defined programs and allow fine control over
shading and texturing calculations. An NVIDIA vertex program
consists of up to 128 4-way SIMD floating point instructions [Lind-
holm et al. 2001]. This vertex program is run on each incoming vertex
and the computed results are passed on to the rasterization stage.
The fragment stage is also programmable, either through NVIDIA
register combiners [Spitzer 2001] or DirectX 8 pixel shaders [Mi-
crosoft 2001]. Pixel shaders, like vertex programs, provide a 4-way
SIMD instruction set augmented with instructions for texturing, but
unlike vertex programs operate on fixed-point values. In this pa-
per, we will be primarily interested in the programmable fragment
pipeline; it is designed to operate at the system fill rate (approxi-
mately 1 billion fragments per second).
Programmable shading is a recent innovation and the current
hardware has many limitations:
. Vertex and fragment programs have simple, incomplete instruction
sets. The fragment program instruction set is much
simpler than the vertex instruction set.
. Fragment program data types are mostly fixed-point. The input
textures and output framebuffer colors are typically 8-bits
per color component. Intermediate values in registers have
only slightly more precision.
. There are many resource limitations. Programs have a limited
number of instructions and a small number of registers. Each
stage has a limited number of inputs and outputs (e.g. the
number of outputs from the vertex stage is constrained by the
number of vertex interpolants).
. The number of active textures and the number of dependent
textures is limited. Current hardware permits certain instructions
for computing texture addresses only at certain points
within the program. For example, a DirectX 8 PS 1.4 pixel
shader has two stages: a first texture addressing stage consisting
of four texture fetch instructions followed by eight color
blending instructions, and then a color computation stage consisting
of additional texture fetches followed by color combining
arithmetic. This programming model permits a single
level of dependent texturing.
. Only a single color value may be written to the framebuffer in
each pass.
. Programs cannot loop and there are no conditional branching
instructions.
2.2 Proposed Near-term Programmable Graphics
Pipeline
The limitations of current hardware make it difficult to implement
ray tracing in a fragment program. Fortunately, due to the interest
in programmable shading for mainstream game applications,
programmable pipelines are rapidly evolving and many hardware
and software vendors are circulating proposals for future hardware.
In fact, many of the current limitations are merely a result of the
fact that they represent the very first generation of programmable
hardware. In this paper, we show how to implement a ray tracer
on an extended hardware model that we think approximates hardware
available in the next year or two. Our model is based loosely
on proposals by Microsoft for DirectX 9.0 [Marshall 2001] and by
3DLabs for OpenGL 2.0 [3DLabs 2001].
Our target baseline architecture has the following features:
. A programmable fragment stage with floating point instructions
and registers. We also assume floating point texture and
framebuffer formats.
. Enhanced fragment program assembly instructions. We include
instructions which are now only available at the vertex
level. Furthermore, we allow longer programs; long enough
so that our basic ray tracing components may be downloaded
as a single program (our longest program is on the order of 50
instructions).
. Texture lookups are allowed anywhere within a fragment pro-
gram. There are no limits on the number of texture fetches or
levels of texture dependencies within a program.
. Multiple outputs. We allow 1 or 2 floating point RGBA (4-
vectors) to be written to the framebuffer by a fragment pro-
gram. We also assume the fragment program can render directly
to a texture or the stencil buffer.
We consider these enhancements a natural evolution of current
graphics hardware. As already mentioned, all these features are
actively under consideration by various vendors.
At the heart of any efficient ray tracing implementation is the
ability to traverse an acceleration structure and test for an intersection
of a ray against a list of triangles. Both these abilities require
a looping construct. Note that the above architecture does not include
data-dependent conditional branching in its instruction set.
Despite this limitation, programs with loops and conditionals can
be mapped to this baseline architecture using the multipass rendering
technique presented by Peercy et al. [2000]. To implement a
conditional using their technique, the conditional predicate is first
evaluated using a sequence of rendering passes, and then a stencil
bit is set to true or false depending on the result. The body of
the conditional is then evaluated using additional rendering passes,
but values are only written to the framebuffer if the corresponding
fragment's stencil bit is true.
Although their algorithm was developed for a fixed-function
graphics pipeline, it can be extended and used with a programmable
pipeline. We assume the addition of two hardware features to make
the Peercy et al. algorithm more efficient: direct setting of stencil
bits and an early fragment kill similar to Z occlusion culling [Kirk
2001]. In the standard OpenGL pipeline, stencil bits may be set by
testing the alpha value. The alpha value is computed by the fragment
program and then written to the framebuffer. Setting the stencil
bit from the computed alpha value requires an additional pass.
Since fragment programs in our baseline architecture can modify
the stencil values directly, we can eliminate this extra pass. Another
important rendering optimization is an early fragment kill. With an
early fragment kill, the depth or stencil test is executed before the
fragment program stage and the fragment program is executed only
if the fragment passes the stencil test. If the stencil bit is false, no instructions
are executed and no texture or framebuffer bandwidth is
used (except to read the 8-bit stencil value). Using the combination
of these two techniques, multipass rendering using large fragment
programs under the control of the stencil buffer is quite efficient.
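To make this flow concrete, the following C++ sketch emulates one such conditional on arrays standing in for the stencil and frame buffers; the function names and data layout are purely illustrative and do not correspond to any real graphics API.

#include <cstddef>
#include <vector>

// Emulation of one conditional evaluated over every fragment. Pass 1 lets
// the fragment program write the predicate directly into the stencil; pass 2
// runs the conditional body, and early fragment kill means fragments whose
// stencil bit is clear consume no instructions or texture bandwidth.
void runConditional(std::vector<unsigned char>& stencil,   // one entry per pixel
                    std::vector<float>& framebuffer,       // one entry per pixel
                    bool (*predicate)(std::size_t pixel),
                    float (*body)(std::size_t pixel)) {
    for (std::size_t p = 0; p < stencil.size(); ++p)        // pass 1
        stencil[p] = predicate(p) ? 1 : 0;
    for (std::size_t p = 0; p < framebuffer.size(); ++p)    // pass 2
        if (stencil[p])
            framebuffer[p] = body(p);
}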
As we will see, ray tracing involves significant looping. Although
each rendering pass is efficient, extra passes still have a cost;
each pass consumes extra bandwidth by reading and writing intermediate
values to texture (each pass also requires bandwidth to read
stencil values). Thus, fewer resources would be used if these inner
loops over voxels and triangles were coalesced into a single pass.
The obvious way to do this would be to add branching to the fragment
processing hardware. However, adding support for branching
increases the complexity of the GPU hardware. Non-branching
GPUs may use a single instruction stream to feed several fragment
pipelines simultaneously (SIMD computation). GPUs that support
branching require a separate instruction stream for each processing
unit (MIMD computation). Therefore, graphics architects would
like to avoid branching if possible. As a concrete example of this
trade off, we evaluate the efficiency of ray casting on two architec-
tures, one with and one without branching:
. Multipass Architecture. Supports arbitrary texture reads,
floating-point texture and framebuffer formats, general floating
point instructions, and two floating point 4-vector outputs.
Branching is implemented via multipass rendering.
. Branching Architecture. Multipass architecture enhanced
to include support for conditional branching instructions for
loops and control flow.
2.3 The Streaming Graphics Processor Abstraction
As the graphics processor evolves to include a complete instruction
set and larger data types, it appears more and more like a
general-purpose processor. However, the challenge is to introduce
programmability without compromising performance, for otherwise
the GPU would become more like the CPU and lose its cost-performance
advantages. In order to guide the mapping of new applications
to graphics architectures, we propose that we view next-generation
graphics hardware as a streaming processor. Stream
processing is not a new idea. Media processors transform streams
of digital information as in MPEG video decode. The IMAGINE
processor is an example of a general-purpose streaming processor
[Khailany et al. 2000].
Streaming computing differs from traditional computing in that
the system reads the data required for a computation as a sequential
stream of elements. Each element of a stream is a record of data
requiring a similar computation. The system executes a program
or kernel on each element of the input stream placing the result on
an output stream. In this sense, a programmable graphics processor
executes a vertex program on a stream of vertices, and a fragment
program on a stream of fragments. Since, for the most part we
are ignoring vertex programs and rasterization, we are treating the
graphics chip as basically a streaming fragment processor.
The streaming model of computation leads to efficient implementations
for three reasons. First, since each stream element's
computation is independent from any other, designers can add additional
pipelines that process elements of the stream in parallel.
Second, kernels achieve high arithmetic intensity. That is, they perform
a lot of computation per small fixed-size record. As a result
the computation to memory bandwidth ratio is high. Third, streaming
hardware can hide the memory latency of texture fetches by
using prefetching [Torborg and Kajiya 1996; Anderson et al. 1997;
Igehy et al. 1998]. When the hardware fetches a texture for a frag-
ment, the fragment registers are placed in a FIFO and the fragment
processor starts processing another fragment. Only after the texture
is fetched does the processor return to that fragment. This method
of hiding latency is similar to multithreading [Alverson et al. 1990]
and works because of the abundant parallelism in streams. In sum-
mary, the streaming model allows graphics hardware to exploit par-
allelism, to utilize bandwidth efficiently, and to hide memory la-
tency. As a result, graphics hardware makes efficient use of VLSI
resources.
The challenge is then to map ray tracing onto a streaming model
of computation. This is done by breaking the ray tracer into kernels.
These kernels are chained together by streams of data, originating
from data stored in textures and the framebuffer.
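As a rough illustration of this abstraction, the sketch below expresses a kernel as a function mapped independently over a stream of records; the Stream and runKernel names are ours and are not part of any proposed hardware interface.

#include <functional>
#include <vector>

// A stream is an ordered collection of like records.
template <typename T>
using Stream = std::vector<T>;

// A kernel maps each input record to an output record independently of all
// other records; that independence is what lets the hardware run many copies
// of the kernel in parallel and hide memory latency behind them.
template <typename In, typename Out>
Stream<Out> runKernel(const Stream<In>& input,
                      const std::function<Out(const In&)>& kernel) {
    Stream<Out> output;
    output.reserve(input.size());
    for (const In& element : input)   // conceptually executed in parallel
        output.push_back(kernel(element));
    return output;
}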
3 Streaming Ray Tracing
In this section, we show how to reformulate ray tracing as a streaming
computation. A flow diagram for a streaming ray tracer is found
in figure 2.
Figure 2: A streaming ray tracer. Rays flow through four kernels - Generate Eye Rays, Traverse Acceleration Structure, Intersect Triangles, and Shade Hit and Generate Shading Rays - which read from the camera parameters, the grid of triangle list offsets, the triangle lists, the triangles, and the materials and normals stored in memory.
In this paper, we assume that all scene geometry is represented
as triangles stored in an acceleration data structure before rendering
begins. In a typical scenario, an application would specify the scene
geometry using a display list, and the graphics library would place
the display list geometry into the acceleration data structure. We
will not consider the cost of building this data structure. Since this
may be an expensive operation, this assumption implies that the
algorithm described in this paper may not be efficient for dynamic
scenes.
The second design decision was to use a uniform grid to accelerate
ray tracing. There are many possible acceleration data structures
to choose from: bounding volume hierarchies, bsp trees, k-
d trees, octrees, uniform grids, adaptive grids, hierarchical grids,
etc. We chose uniform grids for two reasons. First, many experiments
have been performed using different acceleration data structures
on different scenes (for an excellent recent study see Havran
et al. [2000]). From these studies no single acceleration data structure
appears to be most efficient; all appear to be within a factor
of two of each other. Second, uniform grids are particularly simple
for hardware implementations. Accesses to grid data structures
require constant time; hierarchical data structures, in contrast, require
variable time per access and involve pointer chasing. Code
for grid traversal is also very simple and can be highly optimized in
hardware. In our system, a grid is represented as a 3D texture map,
a memory organization currently supported by graphics hardware.
We will discuss further the pros and cons of the grid in section 5.
We have split the streaming ray tracer into four kernels: eye
ray generation, grid traversal, ray-triangle intersection, and shad-
ing. The eye ray generator kernel produces a stream of viewing
rays. Each viewing ray is a single ray corresponding to a pixel in
the image. The traversal kernel reads the stream of rays produced
by the eye ray generator. The traversal kernel steps rays through the
grid until the ray encounters a voxel containing triangles. The ray
and voxel address are output and passed to the intersection kernel.
The intersection kernel is responsible for testing ray intersections
with all the triangles contained in the voxel. The intersector has
two types of output. If ray-triangle intersection (hit) occurs in that
voxel, the ray and the triangle that is hit are output for shading. If
no hit occurs, the ray is passed back to the traversal kernel and the
search for voxels containing triangles continues. The shading kernel
computes a color. If a ray terminates at this hit, then the color
is written to the accumulated image. Additionally, the shading kernel
may generate shadow or secondary rays; in this case, these new
rays are passed back to the traversal stage.
We implement ray tracing kernels as fragment programs. We execute
these programs by rendering a screen-sized rectangle. Constant
inputs are placed within the kernel code. Stream inputs are
fetched from screen-aligned textures. The results of a kernel are
then written back into textures. The stencil buffer controls which
fragments in the screen-sized rectangle and screen-aligned textures
are active. The 8-bit stencil value associated with each ray contains
the ray's state. A ray's state can be traversing, intersecting, shad-
ing, or done. Specifying the correct stencil test with a rendering
pass, we can allow the kernel to be run on only those rays which
are in a particular state.
The following sections detail the implementation of each ray
tracing kernel and the memory layout for the scene. We then describe
several variations including ray casting, Whitted ray tracing
[Whitted 1980], path tracing, and shadow casting.
3.1 Ray Tracing Kernels
3.1.1 Eye Ray Generator
The eye ray generator is the simplest kernel of the ray tracer. Given
camera parameters, including viewpoint and a view direction, it
computes an eye ray for each screen pixel. The fragment program is
invoked for each pixel on the screen, generating an eye ray for each.
The eye ray generator also tests the ray against the scene bounding
box. Rays that intersect the scene bounding box are processed fur-
ther, while those that miss are terminated.
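A minimal sketch of this kernel in C++ follows; the camera parameterization and the slab-style bounding-box test are our own choices and are only meant to illustrate the per-pixel work involved.

#include <algorithm>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

// One ray per pixel: interpolate across an image plane spanned by the
// camera's right/up vectors (lowerLeft is the plane's corner).
Ray generateEyeRay(int px, int py, int width, int height,
                   Vec3 eye, Vec3 lowerLeft, Vec3 right, Vec3 up) {
    float u = (px + 0.5f) / width;
    float v = (py + 0.5f) / height;
    Vec3 target = { lowerLeft.x + u * right.x + v * up.x,
                    lowerLeft.y + u * right.y + v * up.y,
                    lowerLeft.z + u * right.z + v * up.z };
    return { eye, { target.x - eye.x, target.y - eye.y, target.z - eye.z } };
}

// Slab test against the scene bounding box; rays that miss are terminated.
bool hitsSceneBox(const Ray& r, Vec3 boxMin, Vec3 boxMax) {
    float o[3]  = { r.origin.x, r.origin.y, r.origin.z };
    float d[3]  = { r.dir.x,    r.dir.y,    r.dir.z    };
    float lo[3] = { boxMin.x, boxMin.y, boxMin.z };
    float hi[3] = { boxMax.x, boxMax.y, boxMax.z };
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / d[i];              // relies on IEEE infinities
        float t0 = (lo[i] - o[i]) * inv;
        float t1 = (hi[i] - o[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}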
3.1.2 Traverser
The traversal stage searches for voxels containing triangles. The
first part of the traversal stage sets up the traversal calculation. The
second part steps along the ray enumerating those voxels pierced by
the ray. Traversal is equivalent to 3D line drawing and has a per-ray
setup cost and a per-voxel rasterization cost.
We use a 3D-DDA algorithm [Fujimoto et al. 1986] for this
traversal. After each step, the kernel queries the grid data structure
which is stored as a 3D texture. If the grid contains a null
pointer, then that voxel is empty. If the pointer is not null, the voxel
contains triangles. In this case, a ray-voxel pair is output and the
ray is marked so that it is tested for intersection with the triangles
in that voxel.
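The following C++ fragment sketches the per-voxel stepping of such a 3D-DDA; the DDAState fields and the gridLookup callback (standing in for the 3D-texture fetch) are our own formulation rather than the actual fragment program.

// Minimal 3D-DDA stepping sketch. gridLookup() stands in for the 3D-texture
// fetch of a voxel's triangle-list offset; a negative value means "empty".
struct DDAState {
    int   voxel[3];     // current voxel coordinates
    int   step[3];      // +1 or -1 per axis, from the sign of the ray direction
    float tMax[3];      // ray parameter at which the next boundary is crossed
    float tDelta[3];    // ray-parameter width of one voxel along each axis
};

int traverse(DDAState& s, const int gridDim[3],
             int (*gridLookup)(const int voxel[3])) {
    for (;;) {
        int listOffset = gridLookup(s.voxel);
        if (listOffset >= 0)
            return listOffset;              // full voxel: hand off to the intersector
        // Step along the axis whose voxel boundary is crossed first.
        int axis = 0;
        if (s.tMax[1] < s.tMax[axis]) axis = 1;
        if (s.tMax[2] < s.tMax[axis]) axis = 2;
        s.voxel[axis] += s.step[axis];
        if (s.voxel[axis] < 0 || s.voxel[axis] >= gridDim[axis])
            return -1;                      // left the grid: ray terminates
        s.tMax[axis] += s.tDelta[axis];
    }
}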
Implementing the traverser loop on the multipass architecture requires
multiple passes. The once per ray setup is done as two passes
and each step through a voxel requires an additional pass. At the
end of each pass, the fragment program must store all the stepping
parameters used within the loop to textures, which then must be
read for the next pass. We will discuss the multipass implementation
further after we discuss the intersection stage.
Figure 4: The grid and triangle data structures stored in texture
memory. Each grid cell contains a pointer to a list of triangles. If
this pointer is null, then no triangles are stored in that voxel. Grids
are stored as 3D textures. Triangle lists are stored in another tex-
ture. Voxels containing triangles point to the beginning of a triangle
list in the triangle list texture. The triangle list consists of a set of
pointers to vertex data. The end of the triangle list is indicated by a
null pointer. Finally, vertex positions are stored in textures.
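A CPU-side mirror of this layout might look like the following sketch; the struct and field names are ours, a -1 entry plays the role of the null pointer, and on the GPU each array would live in a texture with each index becoming a texture fetch.

#include <vector>

// CPU-side mirror of the texture layout in figure 4.
struct SceneTextures {
    // 3D grid texture: one triangle-list offset per voxel, -1 if the voxel is empty.
    std::vector<int> gridOffsets;                 // size nx * ny * nz
    int nx, ny, nz;

    // Triangle-list texture: runs of triangle indices, each run terminated by -1.
    std::vector<int> triangleList;

    // Triangle-vertex textures: three vertex positions per triangle.
    struct Vec3 { float x, y, z; };
    std::vector<Vec3> v0, v1, v2;

    int voxelOffset(int x, int y, int z) const {
        return gridOffsets[(z * ny + y) * nx + x];
    }
};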
3.1.3 Intersector
The triangle intersection stage takes a stream of ray-voxel pairs and
outputs ray-triangle hits. It does this by performing ray-triangle intersection
tests with all the triangles within a voxel. If a hit occurs,
a ray-triangle pair is passed to the shading stage. The code for computing
a single ray-triangle intersection is shown in figure 5. The
code is similar to that used by Carr et al. [2002] for their DirectX
8 PS 1.4 ray-triangle intersector. We discuss their system further in
section 5.
Because triangles can overlap multiple grid cells, it is possible
for an intersection point to lie outside the current voxel. The intersection
kernel checks for this case and treats it as a miss. Note
that rejecting intersections in this way may cause a ray to be tested
against the same triangle multiple times (in different voxels). It is
possible to use a mailbox algorithm to prevent these extra intersection
calculations [Amanatides and Woo 1987], but mailboxing is
difficult to implement when multiple rays are traced in parallel.
The layout of the grid and triangles in texture memory is shown
in figure 4. As mentioned above, each voxel contains an offset into
a triangle-list texture. The triangle-list texture contains a delimited
list of offsets into triangle-vertex textures. Note that the triangle-
list texture and the triangle-vertex textures are 1D textures. In fact,
these textures are being used as a random-access read-only memory.
We represent integer offsets as 1-component floating point textures
and vertex positions in three floating point RGB textures. Thus,
theoretically, four billion triangles could be addressed in texture
memory with 32-bit integer addressing. However, much less texture
memory is actually available on current graphics cards. Limitations
on the size of 1D textures can be overcome by using 2D textures
with the proper address translation, as well as segmenting the data
across multiple textures.
Figure 3: Data flow diagrams for the ray tracing algorithms we implement. The algorithms depicted are (a) shadow casting, (b) ray casting, (c) Whitted ray tracing, and (d) path tracing. For ray tracing, each ray-surface intersection generates L + 2 rays, where L is the number of lights in a scene, corresponding to the number of shadow rays to be tested, and the other two are reflection and refraction rays. Path tracing randomly chooses one ray bounce to follow and the feedback path is only one ray wide.
Figure 5: Code for ray-triangle intersection. The fragment program fetches a triangle id from the triangle-list texture at the current list position, looks up its vertices, computes the intersection parameters, and returns float4( {t, u, v, id} ).
As with the traversal stage, the inner loop over all the triangles
in a voxel involves multiple passes. Each ray requires a single pass
per ray-triangle intersection.
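For concreteness, the sketch below gives a standard barycentric-coordinate (Moller-Trumbore style) ray-triangle test in C++ that produces the same (t, u, v, id) result the intersection kernel returns; it is our illustration, not the fragment program of figure 5.

#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y,
                                             a.z*b.x - a.x*b.z,
                                             a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Hit { float t, u, v; int id; };   // mirrors the kernel's float4 output

// Returns true and updates 'best' if the ray (ro, rd) hits triangle
// (v0, v1, v2) closer than the current best hit. Initialize best.t to the
// maximum ray distance before the first test.
bool intersectTriangle(Vec3 ro, Vec3 rd, Vec3 v0, Vec3 v1, Vec3 v2,
                       int triId, Hit& best) {
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(rd, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;    // ray parallel to triangle
    float invDet = 1.0f / det;
    Vec3 s = sub(ro, v0);
    float u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(rd, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    float t = dot(e2, q) * invDet;
    if (t < 0.0f || t >= best.t) return false;   // behind the origin or farther
    best = {t, u, v, triId};
    return true;
}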
3.1.4 Shader
The shading kernel evaluates the color contribution of a given ray
at the hit point. The shading calculations are exactly like those in
the standard graphics pipeline. Shading data is stored in memory
much like triangle data. A set of three RGB textures, with 32-bits
per channel, contains the vertex normals and vertex colors for each
triangle. The hit information that is passed to the shader includes
the triangle number. We access the shading information by a simple
dependent texture lookup for the particular triangle specified.
By choosing different shading rays, we can implement several
flavors of ray tracing using our streaming algorithm. We will look
at ray casting, Whitted-style ray tracing, path tracing, and shadow
casting. Figure 3 shows a simplified flow diagram for each of the
methods discussed, along with an example image produced by our
system.
The shading kernel optionally generates shadow, reflection, re-
fraction, or randomly generated rays. These secondary rays are
placed in texture locations for future rendering passes. Each ray
is also assigned a weight, so that when it is finally terminated, its
contribution to the final image may be simply added into the image
[Kajiya 1986]. This technique of assigning a weight to a ray
eliminates recursion and simplifies the control flow.
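The sketch below illustrates the weighting scheme; the QueuedRay fields are hypothetical and simply show how a terminated ray's radiance, scaled by its accumulated weight, is added into the image.

// Instead of recursing, every spawned ray carries the product of the
// reflectances along its path; when a ray terminates, its weighted
// radiance is simply added into the image.
struct QueuedRay {
    float origin[3], dir[3];
    float weight[3];      // accumulated RGB attenuation along the path
    int   pixel;          // which image pixel this ray contributes to
};

void accumulate(float image[][3], const QueuedRay& ray, const float radiance[3]) {
    for (int c = 0; c < 3; ++c)
        image[ray.pixel][c] += ray.weight[c] * radiance[c];
}

// A shading kernel would emit, for example, a reflection ray whose weight is
// the parent's weight times the surface reflectance, then push it back onto
// the ray stream for further traversal passes.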
Ray Caster. A ray caster generates images that are identical to
those generated by the standard graphics pipeline. For each pixel on
the screen, an eye ray is generated. This ray is fired into the scene
and returns the color of the nearest triangle it hits. No secondary
rays are generated, including no shadow rays. Most previous efforts
to implement interactive ray tracing have focused on this type of
computation, and it will serve as our basic implementation.
Whitted Ray Tracer. The classic Whitted-style ray tracer
[Whitted 1980] generates eye rays and sends them out into the
scene. Upon finding a hit, the reflection model for that surface is
evaluated, and then a pair of reflection and refraction rays, and a set
of shadow rays - one per light source - are generated and sent out
into the scene.
Path Tracer. In path tracing, rays are randomly scattered from
surfaces until they hit a light source. Our path tracer emulates the
Arnold renderer [Fajardo 2001]. One path is generated per sample
and each path contains 2 bounces.
Shadow Caster. We simulate a hybrid system that uses the standard
graphics pipeline to perform hidden surface calculation in the
first pass, and then uses ray tracing algorithm to evaluate shadows.
Shadow casting is useful as a replacement for shadow maps and
shadow volumes. Shadow volumes can be extremely expensive to
compute, while for shadow maps, it tends to be difficult to set the
proper resolution. A shadow caster can be viewed as a deferred
shading pass [Molnar et al. 1992]. The shadow caster pass generates
shadow rays for each light source and adds that light's contribution
to the final image only if no blockers are found.
Table 1: Ray tracing kernel summary (kernels: Generate Eye Ray, Traverse, Intersect, Shade, Shadow, Reflected, and Path). We show the number
of 32-bit words of memory that must be read and written between rendering passes (R, W) and the number of memory words read from
random-access textures (M). Two sets of statistics are shown, one for the multipass architecture and another for the branching architecture.
For the multipass architecture, we also show the number of 8-bit stencil reads (RS) and writes (WS) for each kernel. Stencil read overhead is
charged for all rays, whether the kernel is executed or not.
3.2 Implementation
To evaluate the computation and bandwidth requirements of our
streaming ray tracer, we implemented each kernel as an assembly
language fragment program. The NVIDIA vertex program instruction
set is used for fragment programs, with the addition of a few
instructions as described previously. The assembly language implementation
provides estimates for the number of instructions required
for each kernel invocation. We also calculate the bandwidth
required by each kernel; we break down the bandwidth as stream
input bandwidth, stream output bandwidth, and memory (random-
access read) bandwidth.
Table
1 summarizes the computation and bandwidth required for
each kernel in the ray tracer, for both the multipass and the branching
architectures. For the traversal and intersection kernels that involve
looping, the counts for the setup and the loop body are shown
separately. The branching architecture allows us to combine individual
kernels together; as a result the branching kernels are slightly
smaller since some initialization and termination instructions are
removed. The branching architecture permits all kernels to be run
together within a single rendering pass.
Using table 1, we can compute the total compute and bandwidth costs for the scene:
cost = R * (Cr + v*Cv + t*Ct + s*Cs) + R * P * Cstencil
Here R is the total number of rays traced. Cr is the cost to generate a ray; Cv is the cost to walk a ray through a voxel; Ct is the cost of performing a ray-triangle intersection; and Cs is the cost of shading. P is the total number of rendering passes, and Cstencil is the cost of reading the stencil buffer.
is determined by the number of times that kernel is invoked. This
number depends on scene statistics: v is the average number of voxels
pierced by a ray; t is the average number of triangles intersected
by a ray; and s is the average number of shading calculations per
ray. The branching architecture has no stencil buffer checks, so
C stencil is zero. The multipass architecture must pay the stencil read
cost for all rays over all rendering passes. The cost of our ray tracer
on various scenes will be presented in the results section.
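A direct transcription of this cost model into C++ is shown below; the struct and variable names are ours, and Cstencil is simply set to zero when modeling the branching architecture.

struct KernelCosts { double Cr, Cv, Ct, Cs, Cstencil; };
struct SceneStats  { double R, v, t, s, P; };   // rays, voxels/ray, tris/ray,
                                                // shades/ray, rendering passes

// Total cost (instructions or bytes, depending on what the C's measure).
double totalCost(const KernelCosts& c, const SceneStats& st) {
    return st.R * (c.Cr + st.v * c.Cv + st.t * c.Ct + st.s * c.Cs)
         + st.R * st.P * c.Cstencil;   // zero on the branching architecture
}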
Finally, we present an optimization to minimize the total number
of passes motivated in part by Delany's implementation of a
ray tracer for the Connection Machine [Delany 1988]. The traversal
and intersection kernels both involve loops. There are various
strategies for nesting these loops. The simplest algorithm would be
to step through voxels until any ray encounters a voxel containing
triangles, and then intersect that ray with those triangles. How-
ever, this strategy would be very inefficient, since during intersection
only a few rays will have encountered voxels with triangles.
On a SIMD machine like the Connection Machine, this results in
very low processor utilization. For graphics hardware, this yields
an excessive number of passes resulting in large number of stencil
read operations dominating the performance. The following is a
more efficient algorithm:
generate eye ray
while (any(active(ray))) {
    if (oracle(ray))
        traverse(ray)
    else
        intersect(ray)
}
After eye ray generation, the ray tracer enters a while loop which
tests whether any rays are active. Active rays require either further
traversals or intersections; inactive rays have either hit triangles or
traversed the entire grid. Before each pass, an oracle is called. The
oracle chooses whether to run a traverse or an intersect pass. Various
oracles are possible. The simple algorithm above runs an intersect
pass if any rays require intersection tests. A better oracle, first
proposed by Delany, is to choose the pass which will perform the
most work. This can be done by calculating the percentage of rays
requiring intersection vs. traversal. In our experiments, we found
that performing intersections once 20% of the rays require intersection
tests produced the minimal number of passes, and is within a
factor of two to three of optimal for a SIMD algorithm performing
a single computation per rendering pass.
To implement this oracle, we assume graphics hardware maintains
a small set of counters over the stencil buffer, which contains
the state of each ray. A total of eight counters (one per stencil bit)
would be more than sufficient for our needs since we only have
four states. Alternatively, we could use the OpenGL histogram operation
for the oracle if this operation were to be implemented with
high performance for the stencil buffer.
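A sketch of such an oracle is given below, assuming per-state ray counters are available; the 20% threshold is the value quoted above, and the enum and function names are illustrative.

enum class RayState { Traversing, Intersecting, Shading, Done };
enum class NextPass { Traverse, Intersect, Finished };

// Decide which kernel to run next from the per-state ray counters that the
// hardware (or a stencil histogram) is assumed to maintain.
NextPass oracle(long traversing, long intersecting, double threshold = 0.20) {
    long active = traversing + intersecting;
    if (active == 0)
        return NextPass::Finished;
    // Run an intersection pass once enough rays would do useful work in it.
    double frac = static_cast<double>(intersecting) / active;
    return frac >= threshold ? NextPass::Intersect : NextPass::Traverse;
}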
4 Results
4.1 Methodology
We have implemented functional simulators of our streaming ray
tracer for both the multipass and branching architectures. These
simulators are high level simulations of the architectures, written in
the C++ programming language. These simulators compute images
and gather scene statistics. Example statistics include the average
number of traversal steps taken per ray, or the average number of
ray-triangle intersection tests performed per ray.
Figure 6: Fundamental scene statistics for our test scenes (Soda Hall Outside, Soda Hall Inside, Forest Top Down, Forest Inside, and Bunny Ray Cast). The statistics shown match the cost model formula presented in section 3.2. Recall that v is the average number of voxels pierced by a ray; t is the average number of triangles intersected by a ray; and s is the average number of shading calculations per ray. Soda Hall has 1.5M triangles, the forest has 1.0M triangles, and the Stanford bunny has 70K triangles. All scenes are rendered at 1024x1024 pixels.
The multipass architecture
simulator also tracks the number and type of rendering
passes performed, as well as stencil buffer activity. These statistics
allow us to compute the cost for rendering a scene by using the cost
model described in section 3.
Both the multipass and the branching architecture simulators
generate a trace file of the memory reference stream for processing
by our texture cache simulator. In our cache simulations we
used a 64KB direct-mapped texture cache with a 48-byte line size.
This line size holds four floating point RGB texels, or three floating
point RGBA texels with no wasted space. The execution order of
fragment programs affects the caching behavior. We execute kernels
as though there were a single-pixel-wide graphics pipeline. It
is likely that a GPU implementation will include multiple parallel
fragment pipelines executing concurrently, and thus their accesses
will be interleaved. Our architectures are not specified at that level
of detail, and we are therefore not able to take such effects into
account in our cache simulator.
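The cache simulation itself can be sketched as a small direct-mapped model, shown below with the 64KB capacity and 48-byte lines described above; the class interface and the address-trace format it implies are our assumptions.

#include <cstdint>
#include <vector>

// Direct-mapped cache model: 48-byte lines (four float RGB texels or three
// float RGBA texels) and roughly 64KB of capacity.
class TextureCacheSim {
public:
    TextureCacheSim() : tags_(kNumLines, kEmpty) {}

    // Feed one byte address from the memory-reference trace.
    // Returns true on a hit; on a miss the line is filled.
    bool access(std::uint64_t byteAddress) {
        std::uint64_t line = byteAddress / kLineBytes;
        std::size_t   slot = static_cast<std::size_t>(line % kNumLines);
        if (tags_[slot] == line) { ++hits_; return true; }
        tags_[slot] = line;
        ++misses_;
        return false;
    }

    double hitRate() const {
        std::uint64_t total = hits_ + misses_;
        return total ? static_cast<double>(hits_) / total : 0.0;
    }

private:
    static constexpr std::size_t   kLineBytes = 48;
    static constexpr std::size_t   kNumLines  = (64 * 1024) / kLineBytes; // ~64KB
    static constexpr std::uint64_t kEmpty     = ~0ull;
    std::vector<std::uint64_t> tags_;
    std::uint64_t hits_ = 0, misses_ = 0;
};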
We analyze the performance of our ray tracer on five viewpoints
from three different scenes, shown in figure 6.
. Soda Hall is a relatively complex model that has been used
to evaluate other real-time ray tracing systems [Wald et al.
2001b]. It has walls made of large polygons and furnishings
made from very small polygons. This scene has high depth
complexity.
. The forest scene includes trees with millions of tiny triangles.
This scene has moderate depth complexity, and it is difficult
to perform occlusion culling. We analyze the cache behavior
of shadow and reflection rays using this scene.
. The bunny was chosen to demonstrate the extension of our ray
tracer to support shadows, reflections, and path tracing.
Figure
7 shows the computation and bandwidth requirements of
our test scenes. The computation and bandwidth utilized is broken
down by kernel. These graphs clearly show that the computation
and bandwidth for both architectures is dominated by grid traversal
and triangle intersection.
Choosing an optimal grid resolution for scenes is difficult. A
finer grid yields fewer ray-triangle intersection tests, but leads to
more traversal steps. A coarser grid reduces the number of traversal
steps, but increases the number of ray-triangle intersection tests.
We attempt to keep voxels near cubical shape, and specify grid resolution
by the minimal grid dimension acceptable along any dimension
of the scene bounding box. For the bunny, our minimal grid
dimension is 64, yielding a final resolution of 98 x 64 x 163. For
the larger Soda Hall and forest models, this minimal dimension is
128, yielding grid resolutions of 250 x 198 x 128 and 581 x 128 x
581 respectively. These resolutions allow our algorithms to spend
equal amounts of time in the traversal and intersection kernels.
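The resolution-selection rule can be sketched as follows; the rounding choices are ours and are only meant to illustrate keeping voxels near-cubical while honoring the minimal dimension.

#include <algorithm>
#include <cmath>

// Given the scene bounding box extents and the minimal acceptable grid
// dimension along the shortest axis, pick a resolution with near-cubical
// voxels (e.g. minDim = 64 for the bunny, 128 for Soda Hall and the forest).
void chooseGridResolution(const float extent[3], int minDim, int resolution[3]) {
    float shortest  = std::min({extent[0], extent[1], extent[2]});
    float voxelSize = shortest / minDim;        // edge length of a cubical voxel
    for (int i = 0; i < 3; ++i)
        resolution[i] = std::max(1, static_cast<int>(std::ceil(extent[i] / voxelSize)));
}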
Figure 7: Compute and bandwidth usage for our scenes, for the multipass and branching architectures. Each column shows the contribution from each kernel (intersector, traverser, and others). The left bar on each plot is compute (Ginstructions), the right is bandwidth (GBytes). The horizontal line represents the per-second bandwidth and compute performance of our hypothetical architecture. All scenes were rendered at 1024 x 1024 pixels.
4.2 Architectural Comparisons
We now compare the multipass and branching architectures. First,
we investigate the implementation of the ray caster on the multipass
architecture. Table 2 shows the total number of rendering passes
and the distribution of passes amongst the various kernels. The
total number of passes varies between 1000-3000. Although the
number of passes seems high, this is the total number needed to
render the scene. In the conventional graphics pipeline, many fewer
passes per object are used, but many more objects are drawn. In our
system, each pass only draws a single rectangle, so the speed of the
geometry processing part of the pipeline is not a factor.
We also evaluate the efficiency of the multipass algorithm. Recall
that rays may be traversing, intersecting, shading, or done. The
efficiency of a pass depends on the percentage of rays processed in
that pass. In these scenes, the efficiency is between 6-10% for all
of the test scenes except for the outside view of Soda Hall.
Scene            Total passes  Traversal  Intersection  Other  Max traversals  Max intersections  SIMD efficiency
Hall Outside             2443        692          1747      4             384               1123            0.009
Hall Inside              1198         70          1124      4
Forest Top Down          1999        311          1684      4             137               1435            0.062
Forest Inside            2835       1363          1468      4             898                990            0.068
Bunny Ray Cast           1085        610           471      4             221                328            0.105
Table 2: Breakdown of passes in the multipass system. Intersection and traversal make up the bulk of the passes, with the rest of the passes coming from ray generation, traversal setup, and shading. We also show the maximum number of traversal steps and intersection tests per ray. Finally, SIMD efficiency measures the average fraction of rays doing useful work for any given pass.
Figure 8: Bandwidth consumption by data type (stencil, state variables, and data structures). Left bars are for multipass, right bars for branching. Overhead for reading the 8-bit stencil value is shown on top. State variables are data written to and read from texture between passes. Data structure bandwidth comes from read-only data: triangles, triangle lists, grid cells, and shading data. All scenes were rendered at 1024 x 1024 pixels.
This viewpoint contains several rays that miss the scene bounding box
entirely. As expected, the resulting efficiency is much lower since
these rays never do any useful work during the rest of the compu-
tation. Although 10% efficiency may seem low, the fragment processor
utilization is much higher because we are using early fragment
kill to avoid consuming compute resources and non-stencil
bandwidth for the fragment. Finally, table 2 shows the maximum
number of traversal steps and intersection tests that are performed
per ray. Since the total number of passes depends on the worst case
ray, these numbers provide lower bounds on the number of passes
needed. Our multipass algorithm interleaves traversal and intersection
passes and comes within a factor of two to three of the optimal
number of rendering passes. The naive algorithm, which performs
an intersection as soon as any ray hits a full voxel, requires at least
a factor of five times more passes than optimal on these scenes.
We are now ready to compare the computation and bandwidth
requirements of our test scenes on the two architectures. Figure 8
shows the same bandwidth measurements shown in figure 7 broken
down by data type instead of by kernel. The graph shows that, as ex-
pected, all of the bandwidth required by the branching architecture
is for reading voxel and triangle data structures from memory. The
multipass architecture, conversely, uses most of its bandwidth for
writing and reading intermediate values to and from texture memory
between passes. Similarly, saving and restoring these intermediates
requires extra instructions. In addition, significant bandwidth
is devoted to reading the stencil buffer. This extra computation and
bandwidth consumption is the fundamental limitation of the multi-pass
algorithm.
One way to reduce both the number of rendering passes and the
bandwidth consumed by intermediate values in the multipass architecture
is to unroll the inner loops.
Figure 9: Ratio of bandwidth with a texture cache to bandwidth without a texture cache. Left bars are for multipass, right bars for branching. Within each bar, the bandwidth consumed with a texture cache is broken down by data type (stencil, state variables, voxel data, triangle data, and shading data). All scenes were rendered at 1024 x 1024 pixels.
We have presented data for a
single traversal step or a single intersection test performed per ray
in a rendering pass. If we instead unroll our kernels to perform four
traversal steps or two intersection tests, all of our test scenes reduce
their total bandwidth usage by 50%. If we assume we can suppress
triangle and voxel memory references if a ray finishes in the middle
of the pass, the total bandwidth reduction reaches 60%. At the
same time, the total instruction count required to render each scene
increases by less than 10%. With more aggressive loop unrolling
the bandwidth savings continue, but the total instruction count increase
varies by a factor of two or more between our scenes. These
results indicate that loop unrolling can make up for some of the
overhead inherent in the multipass architecture, but unrolling does
not achieve the compute to bandwidth ratio obtained by the branching
architecture.
Finally, we compare the caching behavior of the two implemen-
tations. Figure 9 shows the bandwidth requirements when a texture
cache is used. The bandwidth consumption is normalized by dividing
by the non-caching bandwidth reported earlier. Inspecting
this graph we see that the multipass system does not benefit very
much from texture caching. Most of the bandwidth is being used
for streaming data, in particular, for either the stencil buffer or for
intermediate results. Since this data is unique to each kernel in-
vocation, there is no reuse. In contrast, the branching architecture
utilizes the texture cache effectively. Since most of its bandwidth is
devoted to reading shared data structures, there is reuse. Studying
the caching behavior of triangle data only, we see that a 96-99%
hit rate is achieved by both the multipass and the branching system.
This high hit rate suggests that triangle data caches well, and that
we have a fairly small working set size.
In summary, the implementation of the ray caster on the multipass
architecture has achieved a very good balance between computation
and bandwidth. The ratio of instruction count to bandwidth
matches the capabilities of a modern GPU. For example, the
NVIDIA GeForce3 is able to execute approximately 2G instructions/s
in its fragment processor, and has roughly 8GB/s of memory bandwidth.
Extension            Relative Instructions   Relative Bandwidth
Shadow Caster                  0.85                 1.15
Whitted Ray Tracer             2.62                 3.00
Path Tracer                    3.24                 4.06
Table 3: Number of instructions and amount of bandwidth consumed by the extended algorithms to render the bunny scene using the branching architecture, normalized by the ray casting cost.
Expanding the traversal and intersection kernels to perform
multiple traversal steps or intersection tests per pass reduces
the bandwidth required for the scene at the cost of increasing the
computational requirements. The amount of loop unrolling can be
changed to match the computation and bandwidth capabilities of
the underlying hardware. In comparison, the branching architecture
consumes fewer instructions and significantly less bandwidth.
As a result, the branching architecture is severely compute-limited
based on today's GPU bandwidth and compute rates. However, the
branching architecture will become more attractive in the future as
the compute to bandwidth ratio on graphics chips increases with the
introduction of more parallel fragment pipelines.
4.3 Extended Algorithms
With an efficient ray caster in place, implementing extensions such
as shadow casting, full Whitted ray tracing, or path tracing is quite
simple. Each method utilizes the same ray-triangle intersection
loop we have analyzed with the ray caster, but implements a different
shading kernel which generates new rays to be fed back through
our system. Figure 3 shows images of the bunny produced by our
system for each of the ray casting extensions we simulate. The total
cost of rendering a scene depends on both the number of rays traced
and the cache performance.
Table
3 shows the number of instructions and bandwidth required
to produce each image of the bunny relative to the ray casting cost,
all using the branching architecture. The path traced bunny was
rendered at 256 x 256 pixels with 64 samples and 2 bounces per
pixel, while the others were rendered at 1024 x 1024 pixels. The
ray cast bunny finds a valid hit for 82% of its pixels and hence 82%
of the primary rays generate secondary rays. If all rays were equal,
one would expect the shadow caster to consume 82% of the instructions
and bandwidth of the ray caster; likewise the path tracer would
consume 3.2 times that of the ray caster. Note that the instruction
usage is very close to the expected value, but that the bandwidth
consumed is more.
Additionally, secondary rays do not cache as well as eye rays,
due to their generally incoherent nature. The last two columns of
figure 9 illustrate the cache effectiveness on secondary rays, measured
separately from primary rays. For these tests, we render the
inside forest scene in two different styles. "Shadow" is rendered
with three light sources with each hit producing three shadow rays.
"Reflect" applies a two bounce reflection and single light source
shading model to each primitive in the scene. For the multipass
rendering system, the texture cache is unable to reduce the total
bandwidth consumed by the system. Once again the streaming
data destroys any locality present in the triangle and voxel data.
The branching architecture results demonstrate that scenes with
secondary rays can benefit from caching. The system achieves a
bandwidth reduction for the shadow computation. However,
caching for the reflective forest does not reduce the required band-
width. We are currently investigating ways to improve the performance
of our system for secondary rays.
5 Discussion
In this section, we discuss limitations of the current system and
future work.
5.1 Acceleration Data Structures
A major limitation of our system is that we rely on a preprocessing
step to build the grid. Many applications contain dynamic ge-
ometry, and to support these applications we need fast incremental
updates to the grid. Building acceleration data structures for dynamic
scenes is an active area of research [Reinhard et al. 2000]. An
interesting possibility would be to use graphics hardware to build
the acceleration data structure. The graphics hardware could "scan
convert" the geometry into a grid. However, the architectures we
have studied in this paper cannot do this efficiently; to do operations
like rasterization within the fragment processor they would
need the ability to write to arbitrary memory locations. This is a
classic scatter operation and would move the hardware even closer
to a general stream processor.
In this research we assumed a uniform grid. Uniform grids, how-
ever, may fail for scenes containing geometry and empty space at
many levels of detail. Since we view texture memory as random-access
memory, hierarchical grids could be added to our system.
Currently graphics boards contain relatively small amounts of
memory (in 2001 a typical board contains 64MB). Some of the
scenes we have looked at require 200MB - 300MB of texture memory
to store the scene. An interesting direction for future work
would be to study hierarchical caching of the geometry as is commonly
done for textures. The trend towards unified system and
graphics memory may ultimately eliminate this problem.
5.2 CPU vs. GPU
Wald et al. have developed an optimized ray tracer for a PC with
SIMD floating point extensions [Wald et al. 2001b]. On an 800
MHz Pentium III, they report a ray-triangle intersection rate of 20M
intersections/s. Carr et al. [2002] achieve 114M ray-triangle inter-
sections/s on an ATI Radeon 8500 using limited fixed point preci-
sion. Assuming our proposed hardware ran at the same speed as a
GeForce3 (2G instructions/s), we could compute 56M ray-triangle
intersections/s. Our branching architecture is compute limited; if
we increase the instruction issue rate by a factor of four (8G in-
structions/s) then we would still not use all the bandwidth available
on a GeForce3 (8GB/s). This would allow us to compute 222M ray-
triangle intersections per second. We believe that, because of the inherently
parallel nature of fragment programs, the number of GPU instructions
that can be executed per second will increase much faster
than the number of CPU SIMD instructions.
Once the basic feasibility of ray tracing on a GPU has been
demonstrated, it is interesting to consider modifications to the GPU
that support ray tracing more efficiently. Many possibilities immediately
suggest themselves. Since rays are streamed through the
system, it would be more efficient to store them in a stream buffer
than a texture map. This would eliminate the need for a stencil
buffer to control conditional execution. Stream buffers are quite
similar to F-buffers which have other uses in multipass rendering
[Mark and Proudfoot 2001]. Our current implementation of the grid
traversal code does not map well to the vertex program instruction
set, and is thus quite inefficient. Since grid traversal is so similar to
rasterization, it might be possible to modify the rasterizer to walk
through the grid. Finally, the vertex program instruction set could
be optimized so that ray-triangle intersection could be performed in
fewer instructions.
Carr et al. [2002] have independently developed a method of
using the GPU to accelerate ray tracing. In their system the GPU
is only used to accelerate ray-triangle intersection tests. As in our
system, GPU memory is used to hold the state for many active rays.
In their system each triangle in turn is fed into the GPU and tested
for intersection with all the active rays. Our system differs from
theirs in that we store all the scene triangles in a 3D grid on the
GPU; theirs stores the acceleration structure on the CPU. We also
run the entire ray tracer on the GPU. Our system is much more efficient
than theirs since we eliminate the GPU-CPU communication
bottleneck.
5.3 Tiled Rendering
In the multipass architecture, the majority of the memory band-width
was consumed by saving and restoring temporary variables.
Since these streaming temporaries are only used once, there is no
bandwidth savings due to the cache. Unfortunately, when these
streaming variables are accessed as texture, they displace cacheable
data structures. The size of the cache we used is not large enough
to store the working set if it includes both temporary variables and
data structures. The best way to deal with this problem is to separate
streaming variables from cacheable variables.
Another solution to this problem is to break the image into small
tiles. Each tile is rendered to completion before proceeding to the
next tile. Tiling reduces the working set size, and if the tile size is
chosen so that the working set fits into the cache, then the streaming
variables will not displace the cacheable data structures. We have
performed some preliminary experiments along these lines and the
results are encouraging.
6 Conclusions
We have shown how viewing a programmable graphics processor
as a general parallel computation device can help us leverage the
graphics processor performance curve and apply it to more general
parallel computations, specifically ray tracing. We have shown that
ray casting can be done efficiently in graphics hardware. We hope
to encourage graphics hardware to evolve toward a more general
programmable stream architecture.
While many believe a fundamentally different architecture
would be required for real-time ray tracing in hardware, this work
demonstrates that a gradual convergence between ray tracing and
the feed-forward hardware pipeline is possible.
Acknowledgments
We would like to thank everyone in the Stanford Graphics Lab for
contributing ideas to this work. We thank Matt Papakipos from
NVIDIA for his thoughts on next generation graphics hardware,
and Kurt Akeley and our reviewers for their comments. Katie
Tillman stayed late and helped with editing. We would like to
thank Hanspeter Pfister and MERL for additional support. This
work was sponsored by DARPA (contracts DABT63-95-C-0085
and MDA904-98-C-A933), ATI, NVIDIA, Sony, and Sun.
--R
--TR
ARTS: accelerated ray-tracing system
The rendering equation
Ray tracing on a connection machine
PixelFlow: high-speed rendering using image composition
Talisman
The Tera computer system
Accommodating memory latency in a low-cost rasterizer
Prefetching in a texture cache architecture
Interactive ray tracing for isosurface rendering
Interactive ray tracing
Interactive multi-pass programmable shading
An improved illumination model for shaded display
A user-programmable vertex engine
The F-buffer
Dynamic Acceleration Structures for Interactive Ray Tracing
Interactive Distributed Ray Tracing of Highly Complex Models
--CTR
Edgar Velzquez-Armendriz , Eugene Lee , Kavita Bala , Bruce Walter, Implementing the render cache and the edge-and-point image on graphics hardware, Proceedings of the 2006 conference on Graphics interface, June 07-09, 2006, Quebec, Canada
Manfred Weiler , Martin Kraus , Markus Merz , Thomas Ertl, Hardware-Based Ray Casting for Tetrahedral Meshes, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.44, October 22-24,
H. Du , M. Sanchez-Elez , N. Tabrizi , N. Bagherzadeh , M. L. Anido , M. Fernandez, Interactive Ray Tracing on Reconfigurable SIMD MorphoSys, Proceedings of the conference on Design, Automation and Test in Europe: Designers' Forum, p.20144, March 03-07,
H. Du , M. Sanchez-Elez , N. Tabrizi , N. Bagherzadeh , M. L. Anido , M. Fernandez, Interactive ray tracing on reconfigurable SIMD MorphoSys, Proceedings of the conference on Asia South Pacific design automation, January 21-24, 2003, Kitakyushu, Japan
Anton L. Fuhrmann , Robert F. Tobler , Stefan Maierhofer, Real-time glossy reflections on planar surfaces, Proceedings of the 3rd international conference on Computer graphics, virtual reality, visualisation and interaction in Africa, November 03-05, 2004, Stellenbosch, South Africa
Christian Henning , Peter Stephenson, Accelerating the ray tracing of height fields, Proceedings of the 2nd international conference on Computer graphics and interactive techniques in Australasia and South East Asia, June 15-18, 2004, Singapore
Pradeep Sen , Mike Cammarano , Pat Hanrahan, Shadow silhouette maps, ACM Transactions on Graphics (TOG), v.22 n.3, July
Chih-Chang Chen , Damon Shing-Min Liu, Use of hardware Z-buffered rasterization to accelerate ray tracing, Proceedings of the 2007 ACM symposium on Applied computing, March 11-15, 2007, Seoul, Korea
Nathan A. Carr , Jared Hoberock , Keenan Crane , John C. Hart, Fast GPU ray tracing of dynamic meshes using geometry images, Proceedings of the 2006 conference on Graphics interface, June 07-09, 2006, Quebec, Canada
Daniel Reiter Horn , Jeremy Sugerman , Mike Houston , Pat Hanrahan, Interactive k-d tree GPU raytracing, Proceedings of the 2007 symposium on Interactive 3D graphics and games, April 30-May 02, 2007, Seattle, Washington
Victor Moya , Carlos Gonzalez , Jordi Roca , Agustin Fernandez , Roger Espasa, Shader Performance Analysis on a Modern GPU Architecture, Proceedings of the 38th annual IEEE/ACM International Symposium on Microarchitecture, p.355-364, November 12-16, 2005, Barcelona, Spain
J. Stewart , E. P. Bennett , L. McMillan, PixelView: a view-independent graphics rendering architecture, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, August 29-30, 2004, Grenoble, France
Jingyi Yu , Jason Yang , Leonard McMillan, Real-time reflection mapping with parallax, Proceedings of the 2005 symposium on Interactive 3D graphics and games, April 03-06, 2005, Washington, District of Columbia
Kaoru Sugita , Takeshi Naemura , Hiroshi Harashima, Performance evaluation of programmable graphics hardware for image filtering and stereo matching, Proceedings of the ACM symposium on Virtual reality software and technology, October 01-03, 2003, Osaka, Japan
Tim Foley , Mike Houston , Pat Hanrahan, Efficient partitioning of fragment shaders for multiple-output hardware, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, August 29-30, 2004, Grenoble, France
Christophe Cassagnabre , Franois Rousselle , Christophe Renaud, Path tracing using the AR350 processor, Proceedings of the 2nd international conference on Computer graphics and interactive techniques in Australasia and South East Asia, June 15-18, 2004, Singapore
Karl E. Hillesland , Sergey Molinov , Radek Grzeszczuk, Nonlinear optimization framework for image-based modeling on programmable graphics hardware, ACM Transactions on Graphics (TOG), v.22 n.3, July
Karl E. Hillesland , Sergey Molinov , Radek Grzeszczuk, Nonlinear optimization framework for image-based modeling on programmable graphics hardware, ACM SIGGRAPH 2005 Courses, July 31-August
F. Losasso , H. Hoppe , S. Schaefer , J. Warren, Smooth geometry images, Proceedings of the Eurographics/ACM SIGGRAPH symposium on Geometry processing, June 23-25, 2003, Aachen, Germany
Peijie Huang , Wencheng Wang , Gang Yang , Enhua Wu, Traversal fields for ray tracing dynamic scenes, Proceedings of the ACM symposium on Virtual reality software and technology, November 01-03, 2006, Limassol, Cyprus
Xianfeng Gu , Song Zhang , Peisen Huang , Liangjun Zhang , Shing-Tung Yau , Ralph Martin, Holoimages, Proceedings of the 2006 ACM symposium on Solid and physical modeling, June 06-08, 2006, Cardiff, Wales, United Kingdom
A. Kolb , L. Latta , C. Rezk-Salama, Hardware-based simulation and collision detection for large particle systems, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, August 29-30, 2004, Grenoble, France
Tim Foley , Jeremy Sugerman, KD-tree acceleration structures for a GPU raytracer, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, July 30-31, 2005, Los Angeles, California
Chris Wyman , Scott Davis, Interactive image-space techniques for approximating caustics, Proceedings of the 2006 symposium on Interactive 3D graphics and games, March 14-17, 2006, Redwood City, California
Sudipto Guha , Shankar Krishnan , Kamesh Munagala , Suresh Venkatasubramanian, Application of the two-sided depth test to CSG rendering, Proceedings of the symposium on Interactive 3D graphics, April 27-30, 2003, Monterey, California
Nathan Cournia, Chessboard domination on programmable graphics hardware, Proceedings of the 44th annual southeast regional conference, March 10-12, 2006, Melbourne, Florida
Nathan A. Carr , Jesse D. Hall , John C. Hart, GPU algorithms for radiosity and subsurface scattering, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, July 26-27, 2003, San Diego, California
Ingo Wald , Carsten Benthin , Philipp Slusallek, Distributed Interactive Ray Tracing of Dynamic Scenes, Proceedings of the IEEE Symposium on Parallel and Large-Data Visualization and Graphics, p.11, October 20-21,
Lionel Baboud , Xavier Décoret, Rendering geometry with relief textures, Proceedings of the 2006 conference on Graphics interface, June 07-09, 2006, Quebec, Canada
Nolan Goodnight , Cliff Woolley , Gregory Lewin , David Luebke , Greg Humphreys, A multigrid solver for boundary value problems using programmable graphics hardware, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, July 26-27, 2003, San Diego, California
Anthony Sherbondy , Mike Houston , Sandy Napel, Fast Volume Segmentation With Simultaneous Visualization Using Programmable Graphics Hardware, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.23, October 22-24,
Yuan Zhou , Michael Garland , Robert Haber, Pixel-Exact Rendering of Spacetime Finite Element Solutions, Proceedings of the conference on Visualization '04, p.425-432, October 10-15, 2004
Abhinav Dayal , Cliff Woolley , Benjamin Watson , David Luebke, Adaptive frameless rendering, ACM SIGGRAPH 2005 Courses, July 31-August
Nolan Goodnight , Cliff Woolley , Gregory Lewin , David Luebke , Greg Humphreys, A multigrid solver for boundary value problems using programmable graphics hardware, ACM SIGGRAPH 2005 Courses, July 31-August
Timothy J. Purcell , Craig Donner , Mike Cammarano , Henrik Wann Jensen , Pat Hanrahan, Photon mapping on programmable graphics hardware, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, July 26-27, 2003, San Diego, California
Timothy J. Purcell , Craig Donner , Mike Cammarano , Henrik Wann Jensen , Pat Hanrahan, Photon mapping on programmable graphics hardware, ACM SIGGRAPH 2005 Courses, July 31-August
Ingo Wald, The OpenRT-API, ACM SIGGRAPH 2005 Courses, July 31-August
Shih-wei Liao , Zhaohui Du , Gansha Wu , Guei-Yuan Lueh, Data and Computation Transformations for Brook Streaming Applications on Multiprocessors, Proceedings of the International Symposium on Code Generation and Optimization, p.196-207, March 26-29, 2006
Nathan A. Carr , Jesse D. Hall , John C. Hart, The ray engine, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, September 01-02, 2002, Saarbrucken, Germany
Solomon Boulos , Dave Edwards , J. Dylan Lacewell , Joe Kniss , Jan Kautz , Peter Shirley , Ingo Wald, Packet-based whitted and distribution ray tracing, Proceedings of Graphics Interface 2007, May 28-30, 2007, Montreal, Canada
V. Singh , D. Silver , N. Cornea, Real-time volume manipulation, Proceedings of the 2003 Eurographics/IEEE TVCG Workshop on Volume graphics, July 07-08, 2003, Tokyo, Japan
Gregory S. Johnson , Juhyun Lee , Christopher A. Burns , William R. Mark, The irregular Z-buffer: Hardware acceleration for irregular data structures, ACM Transactions on Graphics (TOG), v.24 n.4, p.1462-1482, October 2005
Jörg Schmittler , Ingo Wald , Philipp Slusallek, SaarCOR: a hardware architecture for ray tracing, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, September 01-02, 2002, Saarbrucken, Germany
Greg Coombe , Mark J. Harris , Anselmo Lastra, Radiosity on graphics hardware, Proceedings of the 2004 conference on Graphics interface, p.161-168, May 17-19, 2004, London, Ontario, Canada
Jörg Schmittler , Sven Woop , Daniel Wagner , Wolfgang J. Paul , Philipp Slusallek, Realtime ray tracing of dynamic scenes on an FPGA chip, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, August 29-30, 2004, Grenoble, France
Nolan Goodnight , Rui Wang , Cliff Woolley , Greg Humphreys, Interactive time-dependent tone mapping using programmable graphics hardware, Proceedings of the 14th Eurographics workshop on Rendering, June 25-27, 2003, Leuven, Belgium
Ingo Wald , Thiago Ize , Andrew Kensler , Aaron Knoll , Steven G. Parker, Ray tracing animated scenes using coherent grid traversal, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
Nolan Goodnight , Rui Wang , Cliff Woolley , Greg Humphreys, Interactive time-dependent tone mapping using programmable graphics hardware, ACM SIGGRAPH 2005 Courses, July 31-August
Feng Liu , Scott Owen , Ying Zhu , Robert Harrison , Irene Weber, Web based molecular visualization using procedural shaders in X3D, ACM SIGGRAPH 2005 Web program, July 31-August
Naga K. Govindaraju , Brandon Lloyd , Wei Wang , Ming Lin , Dinesh Manocha, Fast computation of database operations using graphics processors, ACM SIGGRAPH 2005 Courses, July 31-August
Aaron E. Lefohn , Joe M. Kniss , Charles D. Hansen , Ross T. Whitaker, A streaming narrow-band algorithm: interactive computation and visualization of level sets, ACM SIGGRAPH 2005 Courses, July 31-August
Naga K. Govindaraju , Brandon Lloyd , Wei Wang , Ming Lin , Dinesh Manocha, Fast computation of database operations using graphics processors, Proceedings of the 2004 ACM SIGMOD international conference on Management of data, June 13-18, 2004, Paris, France
Jens Krüger , Rüdiger Westermann, Linear algebra operators for GPU implementation of numerical algorithms, ACM SIGGRAPH 2005 Courses, July 31-August
Jens Krüger , Rüdiger Westermann, Linear algebra operators for GPU implementation of numerical algorithms, ACM Transactions on Graphics (TOG), v.22 n.3, July
Heiko Friedrich , Johannes Günther , Andreas Dietrich , Michael Scherbaum , Hans-Peter Seidel , Philipp Slusallek, Exploring the use of ray tracing for future games, Proceedings of the 2006 ACM SIGGRAPH symposium on Videogames, p.41-50, July 30-31, 2006, Boston, Massachusetts
Mark J. Harris , Greg Coombe , Thorsten Scheuermann , Anselmo Lastra, Physically-based visual simulation on graphics hardware, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, September 01-02, 2002, Saarbrucken, Germany
Jeff Bolz , Ian Farmer , Eitan Grinspun , Peter Schröder, Sparse matrix solvers on the GPU: conjugate gradients and multigrid, ACM SIGGRAPH 2005 Courses, July 31-August
Jeff Bolz , Ian Farmer , Eitan Grinspun , Peter Schröder, Sparse matrix solvers on the GPU: conjugate gradients and multigrid, ACM Transactions on Graphics (TOG), v.22 n.3, July
Vincent C. H. Ma , Michael D. McCool, Low latency photon mapping using block hashing, Proceedings of the ACM SIGGRAPH/EUROGRAPHICS conference on Graphics hardware, September 01-02, 2002, Saarbrucken, Germany
Doug L. James , Kayvon Fatahalian, Precomputing interactive dynamic deformable scenes, ACM Transactions on Graphics (TOG), v.22 n.3, July
R. Mark , Donald Fussell, Real-time rendering systems in 2010, ACM SIGGRAPH 2005 Courses, July 31-August
Ian Buck , Tim Foley , Daniel Horn , Jeremy Sugerman , Kayvon Fatahalian , Mike Houston , Pat Hanrahan, Brook for GPUs: stream computing on graphics hardware, ACM Transactions on Graphics (TOG), v.23 n.3, August 2004
Ingo Wald , Solomon Boulos , Peter Shirley, Ray tracing deformable scenes using dynamic bounding volume hierarchies, ACM Transactions on Graphics (TOG), v.26 n.1, p.6-es, January 2007
Aaron E. Lefohn , Shubhabrata Sengupta , Joe Kniss , Robert Strzodka , John D. Owens, Glift: Generic, efficient, random-access GPU data structures, ACM Transactions on Graphics (TOG), v.25 n.1, p.60-99, January 2006 | programmable graphics hardware;ray tracing |
566643 | Physically based modeling and animation of fire. | We present a physically based method for modeling and animating fire. Our method is suitable for both smooth (laminar) and turbulent flames, and it can be used to animate the burning of either solid or gas fuels. We use the incompressible Navier-Stokes equations to independently model both vaporized fuel and hot gaseous products. We develop a physically based model for the expansion that takes place when a vaporized fuel reacts to form hot gaseous products, and a related model for the similar expansion that takes place when a solid fuel is vaporized into a gaseous state. The hot gaseous products, smoke and soot rise under the influence of buoyancy and are rendered using a blackbody radiation model. We also model and render the blue core that results from radicals in the chemical reaction zone where fuel is converted into products. Our method allows the fire and smoke to interact with objects, and flammable objects can catch on fire. | Figure
1: A turbulent gas flame model of a flamethrower.
events such as explosions where shock waves and other compressible
effects are important, see e.g. [Yngve et al. 2000] and [Neff and
Fiume 1999]. As low speed events, deflagrations can be modeled
using the equations for incompressible flow (as opposed to those for
compressible flow). Furthermore, since viscous effects are small,
we use the incompressible inviscid Euler equations similar to [Fed-
kiw et al. 2001]. As noted therein, these equations can be solved
efficiently using a semi-Lagrangian stable fluid approach, see e.g.
[Staniforth and Cote 1991] and [Stam 1999].
An important, often neglected aspect of fire and flame modeling
concerns the expansion of the fuel as it reacts to form hot gaseous
products. This expansion is the reason for the visual fullness observed
in many flames and is partly responsible for the visual turbulence
as well. Since the incompressible equations do not account
for expansion, we propose a simple thin flame model for capturing
these effects. This is accomplished by using an implicit surface to
represent the reaction zone where the gaseous fuel is converted into
hot gaseous products. Although real reaction zones have a nonzero
(but small) thickness, the thin flame approximation works well for
visual modeling and has been used by scientists as well, see for
example [Markstein 1964] who first proposed this methodology.
Our implementation of the thin flame model is as follows. First,
a dynamic implicit surface is used to track the reaction zone where
the gaseous fuel is converted into hot gaseous products. Then both
the gaseous fuel and the hot gaseous products are separately modeled
using independent sets of incompressible flow equations. Fi-
nally, these incompressible flow equations are updated together in
a coupled fashion using the fact that both mass and momentum
must be conserved as the gas reacts at the interface. While this
gives rather pleasant looking laminar (smooth) flames, we include a
vorticity confinement term, see [Steinhoff and Underhill 1994] and
[Fedkiw et al. 2001], to model the larger scale turbulent flame structures
that are difficult to capture on the relatively coarse grids used
for efficiency reasons in computer graphics simulations. We also
include other features important for visual simulation, such as the
buoyancy effects generated by hot gases and the interaction of fire
with flammable and nonflammable objects. We render the fire as
a participating medium with blackbody radiation using a stochastic
ray marching algorithm. In our rendering we pay careful attention
to the chromatic adaptation of the observer in order to get the correct
colors of the fire.
Previous Work
A simple laminar flame was texture mapped onto a flame-like implicit
primitive and then volume-traced by [Inakage 1989]. [Perry
and Picard 1994] applied a velocity spread model from combustion
science to propagate flames. [Chiba et al. 1994] computed
the exchange of heat between objects by projecting the environment
onto a plane. The spread of flame was a function of both
the temperature and the concentration of fuel. [Stam and Fiume
1995] present a similar model in three spatial dimensions for the
creation, extinguishing and spread of fire. The spread of the fire
is controlled by the amount of fuel available, the geometry of the
environment and the initial conditions. Their velocity field is pre-
defined, and then the temperature and density fields are advected
using an advection-diffusion type equation. They render the fire
using a diffusion approximation which takes into account multiple
scattering. [Bukowski and Sequin 1997] integrated the Berkeley
Architectural Walkthrough Program with the National Institute of
Standards and Technology's CFAST fire simulator. The integrated
system creates a simulation based design environment for building
fire safety systems. An application of physically accurate firelight,
and the impact of different fuel types on the color of flames and the
scene they illuminate is given in [Devlin and Chalmers 2001]. Accurate
ray casting through fire using spatially sparse measured data
rather than simulated data was discussed in [Rushmeier et al. 1995]
using radiation path integration software documented in [Grosshan-
dler 1995].
Although we do not consider high-speed combustion phenomena
such as detonations in this paper, there has been some notable
work on this subject. [Musgrave 1997] concentrated on the explosive
cloud portion of the explosion event using a fractal noise ap-
proach. [Neff and Fiume 1999] model and visualize the blast wave
portion of an explosion based on a blast curve approach. [Mazarak
et al. 1999] discuss the elementary blast wave equations, which
were used to model exploding objects. They also show how to
incorporate the blast wave model with a rigid body motion simulator
to produce realistic animation of flying debris. Most recently,
[Yngve et al. 2000] model the propagation of an explosion through
the surrounding air using a computational fluid dynamics based approach
to solve the equations for compressible, viscous flow. Their
system includes two way coupling between solid objects and surrounding
fluid, and uses the spectacular brittle fracture technology
of [O'Brien and Hodgins 1999]. While the compressible flow equations
are useful for modeling shock waves and other compressible
phenomena, they introduce a very strict time step restriction associated
with the acoustic waves. We use the incompressible flow
equations instead to avoid this restriction making our method more
computationally efficient.
Physically Based Model
We consider three distinct visual phenomena associated with
flames. The first of these is the blue or bluish-green core seen
in many flames. These colors are emission lines from intermediate
chemical species, such as carbon radicals, produced during the
chemical reaction. In the thin flame model, this thin blue core is
located adjacent to the implicit surface. Therefore, in order to track
this blue core, we need to track the movement of the implicit sur-
face. The second visual phenomenon is the blackbody radiation
emitted by the hot gaseous products, in particular the carbon soot.
This is characterized by the yellowish-orange color familiarly associated
with fire. In order to model this with visual accuracy we
need to track the temperatures associated with a flame as depicted
in
Figure
2 (read from left to right). If the fuel is solid or liquid,
the first step is the heating of the solid until it undergoes a phase
change to the gaseous state. (Obviously, for gas fuels, we start in this gaseous state region in the figure.)
Figure 2: Flame temperature profile for a solid (or gaseous) fuel.
it reaches its ignition temperature corresponding to our implicit surface
and the beginning of the thin blue core region. The temperature
continues to increase as the reaction proceeds reaching a maximum
before radiative cooling and mixing effects cause the temperature
to decrease. As the temperature decreases, the blackbody radiation
falls off until the yellowish-orange color is no longer visible. The
third and final visual effect we address is the smoke or soot that
is apparent in some flames after the temperature cools to the point
where the blackbody radiation is no longer visible. We model this
effect by carrying along a density variable in a fashion similar to
the temperature. One could easily add particles to represent small
pieces of soot, but our focus in this paper is the fire, not the smoke.
For more details on smoke, see [Foster and Metaxas 1997], [Stam
1999] and [Fedkiw et al. 2001]. Figure 3 shows smoke coupled to
our gas flame.
3.1 Blue Core
Our implicit surface separates this gaseous fuel from the hot
gaseous products and surrounding air. Consider for example the
injection of gaseous fuel from a cylindrically shaped tube. If the
fuel were not burning, then the implicit surface would simply move
at the same velocity as the gaseous fuel being injected. However,
when the fuel is reacting, the implicit surface moves at the velocity
of the unreacted fuel plus a flame speed S that indicates how
fast fuel is being converted into gaseous products. S indicates how
fast the unreacted gaseous fuel is crossing over the implicit surface
turning into hot gaseous products. The approximate surface area of
the blue core, AS, can be estimated with the following equation:
vf Af ≈ S AS
where vf is the speed at which the fuel is injected across the injection surface with area Af, e.g. Af is the cross section of the cylindrical tube.
Figure 3: The hot gaseous products and soot emit blackbody radiation that illuminates the smoke.
Figure 4: Blue reaction zone cores for large (left) and small (right) values of the flame reaction speed S. Note the increased turbulence on the right.
This equation results from canceling out the density in the equation
for conservation of mass. The left hand side is the fuel being injected
into the region bounded by the implicit surface, and the right
hand side is the fuel leaving this region crossing over the implicit
surface as it turns into gaseous products. From this equation, we see
that injecting more (less) gas is equivalent to increasing (decreas-
ing) vf resulting in a larger (smaller) blue core. Similarly, increasing
(decreasing) the reaction speed S results in a smaller (larger)
blue core. While we can turn the velocity up or down on our cylindrical
jet, the reaction speed S is a property of the fuel. For example,
S is approximately .44m/s for a propane fuel that has been suitably
premixed with oxidizer [Turns 1996]. (We use
of our examples.) Figure 4 shows the effect of varying the parameter
S. The smaller value of S gives a blue core with more surface
area as shown in the figure.
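To make the balance concrete, the following minimal Python sketch (illustrative only, not the authors' code; the pipe diameter, injection speed and S values are hypothetical) estimates the blue core area from the mass balance above.

import math

def blue_core_area(v_f, pipe_diameter, S):
    # From the balance v_f * A_f ~= S * A_S for fuel injected out of a round pipe:
    # more injected gas (larger v_f) or a slower reaction (smaller S) gives a larger core.
    A_f = math.pi * (0.5 * pipe_diameter) ** 2   # cross section of the injection tube
    return v_f * A_f / S

print(blue_core_area(v_f=30.0, pipe_diameter=0.4, S=0.44))   # premixed-propane-like S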
This thin flame approximation is fairly accurate for premixed
flames where the fuel and oxidizer are premixed so that the injected
gas is ready for combustion. Non-premixed flames, commonly referred
to as diffusion flames, behave somewhat differently. In a
diffusion flame, the injected fuel has to mix with a surrounding oxidizer
before it can combust. Figure 5 shows the injection of fuel
out of a cylindrically shaped pipe. The cone shaped curve is the
predicted location of the blue core for a premixed flame while the
larger rounded curve is the predicted location of the blue core for
a diffusion flame. As can be seen in the figure, diffusion flames
tend to have larger cores since it takes a while for the injected fuel
and surrounding oxidizer to mix. This small-scale molecular diffusion
process is governed by a second order partial differential equation
that is computationally costly to model. Thus for visual purposes,
we model diffusion flames with larger blue cores simply by using
a smaller value of S than that used for a corresponding premixed
flame.
Figure
5: Location of the blue reaction zone core for a premixed
flame versus a diffusion (non-premixed) flame
Figure 6: Streamlines illustrating the path of individual fluid elements as they cross the blue reaction zone core. The curved
path is caused by the expansion of the gas as it reacts.
3.2 Hot Gaseous Products
In order to get the proper visual look for our flames, it is important
to track individual elements of the flow and follow them through
their temperature histories given by Figure 2. This is particularly
difficult because the gas expands as it undergoes reaction from fuel
to hot gaseous products. This expansion is important to model since
it changes the trajectories of the gas and the subsequent look and
feel of the flame as individual elements go through their temperature
profile. Figure 6 shows some sample trajectories of individual
elements as they cross over the reaction front. Note that individual
elements do not go straight up as they pass through the reaction
front, but instead turn outward due to the effects of expansion. It
is difficult to obtain visually full turbulent flames without modeling
this expansion effect. In fact, many practitioners resort to a number
of low level hacks (and lots of random numbers) in an attempt
to sculpt this behavior, while we obtain the behavior intrinsically
by using the appropriate model. The expansion parameter is usually
given as a ratio of densities, ρf /ρh, where ρf is the density of the gaseous fuel and ρh is the density of the hot gaseous products.
Figure
7 shows three flames side by side with increasing amounts
of expansion from left to right. Note how increasing the expansion
makes the flames appear fuller. We used (about the
density of air) for all three flames with
.05kg/m3 from left to right.
We use one set of incompressible flow equations to model the
fuel and a separate set of incompressible flow equations to model
the hot gaseous products and surrounding airflow. We require a
model for coupling these two sets of incompressible flow equations
together across the interface in a manner that models the expansion
that takes place across the reaction front. Given that mass and momentum
are conserved we can derive the following equations for
the coupling across the thin flame front:
ρh (Vh − D) = ρf (Vf − D) (2)
ph + ρh (Vh − D)^2 = pf + ρf (Vf − D)^2 (3)
where Vf and Vh are the normal velocities of the fuel and the hot gaseous products, and pf and ph are their pressures. Here, D = Vf − S is the speed of the implicit surface in the normal direction.
Figure 7: Comparison of flame shapes for differing degrees of gaseous expansion. The amount of expansion increases from left to right making the flame appear fuller and more turbulent.
These equations indicate that both the velocity and the pressure are
discontinuous across the flame front. Thus, we will need to exercise
caution when taking derivatives of these quantities as is required
when solving the incompressible flow equations. (Note that the
tangential velocities are continuous across the flame front.)
3.3 Solid Fuels
When considering solid fuels, there are two expansions that need to
be accounted for. Besides the expansion across the flame front, a
similar expansion takes place when the solid is converted to a gas.
However, S is usually relatively small for this reaction (most solids
burn slowly in a visual sense), so we can use the boundary of the
solid fuel as the reaction front. Since we do not currently model the
pressure in solids, only equation 2 applies. We rewrite this equation as
ρf (Vf − D) = ρs (Vs − D)
where ρs and Vs are the density and the normal velocity of the solid fuel. Substituting D = Vs − S and solving for Vf gives
Vf = Vs + (ρs/ρf − 1) S
indicating that the gasified solid fuel moves at the velocity of the
solid fuel plus a correction that accounts for the expansion. We
model this phase change by injecting gas out of the solid fuel at
the appropriate velocity. This can be used to set arbitrary shaped
solid objects on fire as long as they can be voxelized with a suitable
surface normal assigned to each voxel indicating the direction of
gaseous injection.
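The gas injection velocity for a burning solid voxel follows directly from the expression above. A small Python sketch (a simplified illustration under the stated assumptions, not production code) is:

import numpy as np

def gasified_fuel_velocity(v_solid, normal, rho_s, rho_f, S):
    # Velocity of the gasified fuel leaving a solid-fuel voxel: the solid's
    # velocity plus an expansion correction of size (rho_s/rho_f - 1) S along
    # the local surface normal assigned to the voxel.
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return np.asarray(v_solid, dtype=float) + (rho_s / rho_f - 1.0) * S * n

print(gasified_fuel_velocity([0.0, 0.0, 0.0], [0.0, 0.0, 1.0], rho_s=800.0, rho_f=1.2, S=0.05))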
In figure 8, we simulate a campfire using two cylindrically
shaped logs as solid fuel injecting gas out of the logs in a direction
consistent with the local unit surface normal. Note the realistic
rolling of the fire up from the base of the log. The ability to inject
(or not inject) gaseous fuel out of individual voxels on the surface of
a complex solid object allows us to animate objects catching on fire,
burn different parts of an object at different rates or not at all (by using
spatially varying injection velocities), and extinguish solid fuels
simply by turning off the injection velocity. While building an animation
system that allows the user to hand paint temporally and
spatially varying injection velocities on the surface of solid objects
is beyond the scope of this paper, it is a promising subject for future
work.
Implementation
We use a uniform discretization of space into N^3 voxels with uniform spacing h. The implicit surface, temperature, density and pressure are defined at the voxel centers and are denoted φi,j,k, Ti,j,k, ρi,j,k and pi,j,k where i, j, k = 1,...,N. The velocities are defined at the cell faces and we use half-way index notation, e.g. ui+1/2,j,k.
4.1 Level Set Equation
We track our reaction zone (blue core) using the level set method
of [Osher and Sethian 1988] to track the moving implicit surface.
We define φ to be positive in the region of space filled with fuel,
negative elsewhere and zero at the reaction zone.
The implicit surface moves with velocity w = uf + S n, where uf is the velocity of the gaseous fuel and the S n term governs the conversion of fuel into gaseous products.
Figure 8: Two burning logs are placed on the ground and used to emit fuel. The crossways log on top is not lit so the flame is forced to flow around it.
The local unit normal, n = ∇φ/|∇φ|, is defined at the center of each voxel using central differencing to approximate the necessary derivatives, e.g. φx ≈ (φi+1,j,k − φi−1,j,k)/2h. Standard averaging of voxel face values is used to define uf at the voxel centers, e.g. ui,j,k = (ui−1/2,j,k + ui+1/2,j,k)/2. The motion of the implicit surface is defined through
φt + w · ∇φ = 0 (6)
and solved at each grid point using
φnew = φold − Δt (w1 φx + w2 φy + w3 φz) (7)
and an upwind differencing approach to estimate the spatial derivatives. For example, if w1 > 0, φx ≈ (φi,j,k − φi−1,j,k)/h. Otherwise, if w1 < 0, φx ≈ (φi+1,j,k − φi,j,k)/h. This simple approach is efficient and produces visually appealing blue cores.
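A minimal Python/numpy sketch of this upwind update (one explicit Euler step of equation 7; boundaries simply wrap, which the actual implementation would treat more carefully) is:

import numpy as np

def upwind_derivative(phi, speed, h, axis):
    # One-sided differences selected by the sign of the advection speed.
    d_minus = (phi - np.roll(phi, 1, axis=axis)) / h    # backward difference
    d_plus = (np.roll(phi, -1, axis=axis) - phi) / h    # forward difference
    return np.where(speed > 0.0, d_minus, d_plus)

def advance_level_set(phi, w, h, dt):
    # phi: (N,N,N) array; w: three (N,N,N) arrays holding the cell-centered
    # components of uf + S*n.  Implements equation 7 with upwind differencing.
    grad_term = sum(w[a] * upwind_derivative(phi, w[a], h, axis=a) for a in range(3))
    return phi - dt * grad_term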
To keep the implicit surface well conditioned, we occasionally
adjust the values of φ in order to keep φ a signed distance function with |∇φ| = 1. First, interpolation is used to reset the values of φ at voxels adjacent to the φ = 0 isocontour (which we don't want to move since it is the visual location of the blue core). Then we march out from the zero isocontour adjusting the values of φ at the other
grid points as we cross them. [Tsitsiklis 1995] showed that this
could be accomplished in an accurate, optimal and efficient manner
solving quadratic equations and sorting points with a binary heap
data structure. Later, [Sethian 1996] proposed the finite difference
formulation of this algorithm that we currently use.
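As a rough stand-in for the fast marching reinitialization, the sketch below rebuilds a signed distance function from the sign of φ with Euclidean distance transforms (scipy). It can move the zero isocontour by up to a cell, so it only approximates the interpolation-plus-marching scheme described above.

import numpy as np
from scipy import ndimage

def redistance(phi, h):
    # Signed distance: positive inside the fuel region (phi > 0), negative outside.
    inside = phi > 0.0
    dist_in = ndimage.distance_transform_edt(inside) * h     # distance to the boundary, inside
    dist_out = ndimage.distance_transform_edt(~inside) * h   # distance to the boundary, outside
    return np.where(inside, dist_in, -dist_out)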
4.2 Incompressible Flow
We model the flow of the gaseous fuel and the hot gaseous products
using a separate set of incompressible Euler equations for each. Incompressibility is enforced through conservation of mass (or volume), i.e. ∇ · u = 0, where u = (u, v, w) is the velocity field. The equations for the velocity,
ut = −(u · ∇)u − ∇p/ρ + f, (8)
are solved for in two parts. First, we use this equation to compute an intermediate velocity u* ignoring the pressure term, and then we add the pressure (correction) term using
u = u* − Δt ∇p/ρ. (9)
The key idea to this splitting method is illustrated by taking the divergence of equation 9 to obtain
∇ · u = ∇ · u* − Δt ∇ · (∇p/ρ) (10)
and then realizing that we want ∇ · u = 0 to enforce mass conservation. Thus the left hand side of equation 10 should vanish leaving a Poisson equation of the form
∇ · (∇p/ρ) = ∇ · u* / Δt (11)
that can be solved to find the pressure needed for updating equation 9.
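The projection step can be sketched in Python/numpy as below. For brevity it assumes a single constant density, a collocated grid and plain Jacobi iterations with wrapping boundaries; the paper's solver instead works with the two-phase density and pressure jump via the ghost fluid method and a proper linear solver.

import numpy as np

def project(u, v, w, rho, h, dt, iters=200):
    def div(a, b, c):
        return ((np.roll(a, -1, 0) - np.roll(a, 1, 0)) +
                (np.roll(b, -1, 1) - np.roll(b, 1, 1)) +
                (np.roll(c, -1, 2) - np.roll(c, 1, 2))) / (2.0 * h)
    def grad(p, axis):
        return (np.roll(p, -1, axis) - np.roll(p, 1, axis)) / (2.0 * h)
    rhs = rho * div(u, v, w) / dt                  # Poisson right hand side (eq 11, constant rho)
    p = np.zeros_like(u)
    for _ in range(iters):                         # Jacobi iterations for lap(p) = rhs
        p = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
             np.roll(p, 1, 1) + np.roll(p, -1, 1) +
             np.roll(p, 1, 2) + np.roll(p, -1, 2) - h * h * rhs) / 6.0
    # pressure correction, equation 9
    return (u - dt * grad(p, 0) / rho,
            v - dt * grad(p, 1) / rho,
            w - dt * grad(p, 2) / rho,
            p)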
We use a semi-Lagrangian stable fluids approach for finding the
intermediate velocity u* and refer the reader to [Stam 1999] and
[Fedkiw et al. 2001] for the details. Since we use two sets of incompressible
flow equations, we need to address the stable fluid update
when a characteristic traced back from one set of incompressible
flow equations crosses the implicit surface and queries the velocities
from the other set of incompressible flow equations. Since the
normal velocity is discontinuous across the interface, the straight-forward
stable fluids approach fails to work. Instead, we need to
use the balance equation 2 for conservation of mass to correctly
interpolate a velocity.
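A bare-bones semi-Lagrangian advection step, in the spirit of [Stam 1999] (cell-centered quantities, linear interpolation via scipy; not the authors' implementation), looks like this:

import numpy as np
from scipy.ndimage import map_coordinates

def semi_lagrangian(q, u, v, w, h, dt):
    # Trace each grid point backwards through the velocity field and
    # interpolate the advected quantity q at the departure point.
    i, j, k = np.meshgrid(*(np.arange(n) for n in q.shape), indexing='ij')
    back = [i - dt * u / h, j - dt * v / h, k - dt * w / h]   # departure points (index space)
    return map_coordinates(q, back, order=1, mode='nearest')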
Suppose we are solving for the hot gaseous products and we interpolate
across the interface into a region where a velocity from the
gaseous fuel might incorrectly be used. Instead of using this value,
we compute a ghost value as follows. First, we compute the normal
velocity of the fuel, Vf = uf · n. Then we use the balance equation 2 to find a ghost value for VG as
VG = Vf + (ρf/ρh − 1) S. (12)
Since the tangential velocities are continuous across the implicit surface, we combine this new normal velocity with the existing tangential velocity to obtain
uG = VG n + (uf − (uf · n) n) (13)
as a ghost value for the velocity of the hot gaseous products in the
region where only the fuel is defined. This ghost velocity can then
be used to correctly carry out the stable fluids update. Since both
n and uf are defined throughout the region occupied by the fuel,
and ρf, ρh and S are known constants, a ghost cell value for the
hot gaseous products, uG, can be found anywhere in the fuel region
(even quite far from the interface) by simply algebraically evaluating
the right hand side of equation 13. [Nguyen et al. 2001] showed
that this ghost fluid method, invented in [Fedkiw et al. 1999], could
be used to compute physically accurate engineering simulations of
deflagrations.
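For a single sample point, the ghost velocity of equations 12 and 13 reduces to a few lines (illustrative Python; vector quantities only, with a normalized n):

import numpy as np

def ghost_hot_velocity(u_f, n, rho_f, rho_h, S):
    # Equation 12: jump the normal component by (rho_f/rho_h - 1) S;
    # equation 13: keep the fuel's tangential component unchanged.
    u_f = np.asarray(u_f, dtype=float)
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    V_f = float(np.dot(u_f, n))
    V_G = V_f + (rho_f / rho_h - 1.0) * S
    return V_G * n + (u_f - V_f * n)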
After computing the intermediate velocity u* for both sets of incompressible
flow equations, we solve equation 11 for the pressure
and finally use equation 9 to find our new velocity field. Equation
11 is solved by assembling and solving a linear system of equations
for the pressure as discussed in more detail in [Foster and Fedkiw
2001] and [Fedkiw et al. 2001]. Once again, we need to exercise
caution here since the pressure is discontinuous across the inter-
face. Using the ghost fluid method and equation 3, we can obtain
and solve a slightly modified linear system incorporating this jump
in pressure. We refer the reader to [Nguyen et al. 2001] for explicit
details and a demonstration of the physical accuracy of this
approach in the context of deflagration waves.
The temperature affects the fluid velocity as hot gases tend to
rise due to buoyancy. We use a simple model to account for these
effects by defining external forces that are directly proportional to
the temperature,
fbuoy = α (T − Tair) z,
where z = (0, 0, 1) points in the upward vertical direction, Tair is the ambient temperature of the air and α is a positive constant with the appropriate units.
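In code this buoyancy force is trivial; the sketch below (numpy, vertical axis assumed to be z, with the default constant borrowed from the flamethrower example reported later) adds it per voxel:

import numpy as np

def buoyancy_force(T, T_air, alpha=0.15):
    # Force proportional to the local temperature excess, pointing upward.
    f = np.zeros(T.shape + (3,))
    f[..., 2] = alpha * (T - T_air)
    return f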
Fire, smoke and air mixtures contain velocity fields with large
spatial deviations accompanied by a significant amount of rotational
and turbulent structure on a variety of scales. Nonphysical
numerical dissipation damps out these interesting flow features, so
we aim to add them back on the coarse grid. We use the vorticity
confinement technique invented by Steinhoff (see e.g. [Steinhoff
and Underhill 1994]) and used by [Fedkiw et al. 2001] to generate
the swirling effects for smoke. The first step in generating the small
scale detail is to identify the vorticity ω = ∇ × u as the source of this
small scale structure. Each small piece of vorticity can be thought
of as a paddle wheel trying to spin the flow field in a particular di-
rection. Normalized vorticity location vectors,
N = ∇|ω| / |∇|ω||,
simply point from lower concentrations of vorticity to higher concentrations. Using these, the magnitude and direction of the vorticity confinement (paddle wheel) force is computed as
fconf = ε h (N × ω)
where ε > 0 and is used to control the amount of small scale detail
added back into the flow field. The dependence on h guarantees
that as the mesh is refined the physically correct solution is still
obtained. All these quantities can be evaluated in a straightforward
fashion as outlined in [Fedkiw et al. 2001].
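A direct numpy transcription of the vorticity confinement force (central differences via np.gradient; a small epsilon guards the normalization, and boundary handling is left out) might read:

import numpy as np

def vorticity_confinement(u, v, w, h, eps):
    du, dv, dw = np.gradient(u, h), np.gradient(v, h), np.gradient(w, h)
    wx = dw[1] - dv[2]                        # omega = curl(u)
    wy = du[2] - dw[0]
    wz = dv[0] - du[1]
    mag = np.sqrt(wx * wx + wy * wy + wz * wz)
    gx, gy, gz = np.gradient(mag, h)
    norm = np.sqrt(gx * gx + gy * gy + gz * gz) + 1e-20
    Nx, Ny, Nz = gx / norm, gy / norm, gz / norm   # points toward higher |omega|
    fx = eps * h * (Ny * wz - Nz * wy)             # eps * h * (N x omega)
    fy = eps * h * (Nz * wx - Nx * wz)
    fz = eps * h * (Nx * wy - Ny * wx)
    return fx, fy, fz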
Usually a standard CFL time step restriction dictates that the
time step Δt should be limited by Δt < h/|u|max where |u|max is the
maximum velocity in the flow field. While this is true for our level
set equation 6 with u replaced by w, the combination of the semi-Lagrangian
discretization and the ghost fluid method allows us to
take a much larger time step for the incompressible flow equations.
We choose our incompressible flow time step to be about five times
bigger than that dictated by applying the CFL condition to the level
set equation, and then stably update using substeps. This reduces
the number of times one needs to solve for the pressure, which is
the most expensive part of the calculation, by a factor of five.
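The time stepping policy amounts to a couple of lines (a sketch of the rule of thumb described above, not a hard stability guarantee for arbitrary inputs):

def choose_time_steps(u_max, h, factor=5.0):
    # Level set substep obeys the CFL condition; the flow solver takes a
    # step about 'factor' times larger and the level set substeps underneath.
    dt_level_set = h / max(u_max, 1e-6)
    dt_flow = factor * dt_level_set
    return dt_flow, dt_level_set, int(factor)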
4.3 Temperature and Density
The temperature profile has great effect on how we visually perceive
flames, and we need to generate a temperature time history
for fluid elements that behaves as shown in figure 2. Since this fig-
ure depicts a time history of the temperature of fluid elements, we
need a way to track individual fluid elements as they cross over the
blue core and rise upward due to buoyancy. In particular, we need to
know how much time has elapsed since a fluid element has passed
through the blue core so that we can assign an appropriate temperature
to it. This is easily accomplished using a reaction coordinate
variable Y governed by the equation
Yt = −(u · ∇)Y − k (16)
where k is a positive constant which we take to be 1 (larger or smaller values can be used to get a good numerical variation of Y in the flame). Ignoring the convection term, Yt = −k can be solved exactly to obtain Y(t) = Y(0) − kt. If we set Y = 1 in the region of space occupied by the gaseous fuel and solve equation 16 for Y,
then the local value of 1-Y is equal to the total time elapsed since
a fluid element crossed over the blue reaction core.
We solve equation 16 using the semi-Lagrangian stable fluids
method to first update the convection term obtaining an intermediate
value Y*. Then we separately integrate the source term analytically so it too is stable for large time steps, i.e. Ynew = −kΔt + Y*.
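Putting the two pieces together, a sketch of the Y update after the semi-Lagrangian convection step (the fuel mask would come from φ > 0; helper names are hypothetical) is:

import numpy as np

def update_reaction_coordinate(Y_advected, dt, in_fuel, k=1.0):
    # Integrate the constant source term analytically and pin Y to 1 inside
    # the gaseous fuel region so that 1 - Y measures time since crossing the core.
    Y_new = np.clip(Y_advected - k * dt, 0.0, 1.0)
    return np.where(in_fuel, 1.0, Y_new)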
We can now use the values of Y to assign temperature values
to the flow. Since Tignition is usually below the visual blackbody
emission threshold, the temperature we set inside the blue core is
usually not important. Therefore, we can set T = Tignition for the
points inside the blue core. The region between the blue core and
the maximum temperature in figure 2 is important since it models
the rise in temperature due to the progress of a complex chemical
reaction (which we do not model for the sake of efficiency). Here
the animator has a lot of freedom to sculpt temperature rise curves
and adjust how the mapping corresponds to the local Y values. For
example, one could use Tignition at Y = 1, Tmax at Y = .9, and use a linear temperature function for the in between values of Y in (.9, 1). For large flames, this temperature rise interval will be
compressed too close to the blue core for our grid to resolve. In
these instances we use the ghost fluid method to set T = Tmax for any characteristic that looks across the blue core into the gaseous
fuel region. The blue core then spits out gas at the maximum
temperature that immediately starts to cool off, i.e. there is no temperature
rise region. In fact, we did not find it necessary to use
the temperature rise region in our examples as we are interested in
larger scale flames, but this temperature rise region would be useful,
for example, when modeling a candle.
The animator can also sculpt the temperature falloff region to
the right of figure 2. However, there is a physically correct, viable
(i.e. computationally cheap) alternative. For the values of Y in the
temperature falloff region, we simply solve
Tt = −(u · ∇)T − cT ((T − Tair)/(Tmax − Tair))^4
which is derived from conservation of energy. Similar to equation
16, we solve this equation by first using the semi-Lagrangian stable
fluids method to solve for the convection term. Then we integrate
the fourth power term analytically to cool down the flame at a rate
governed by the cooling constant cT .
Similar to the temperature curve in figure 2, the animator can
sculpt a density curve for smoke and soot formation. The density
should start low and increase as the reaction proceeds. In the temperature
falloff region, the animator can switch from the density
curve to a physically correct equation,
ρt = −(u · ∇)ρ,
that can (once again) be solved using the semi-Lagrangian stable
fluids method. Again, we did not find it necessary to sculpt densities
for our particular examples.
5 Rendering of Fire
Fire is a participating medium. It is more complex than the types
of participating media (e.g. smoke and fog) that are typically encountered
in computer graphics since fire emits light. The region
that creates the light-energy typically has a complex shape, which
makes it difficult to sample. Another complication with fire is that
the fire is bright enough that our eyes adapt to its color. This chromatic
adaptation is important to account for when displaying fire on
a monitor. See [Pattanaik et al. 1998; Durand and Dorsey 2000]. In
this section, we will first describe how we simulate the scattering
of light within a fire-medium. Then, we will detail how to properly
integrate the spectral distribution of power in the fire and account
for chromatic adaptation.
5.1 Light Scattering in a Fire Medium
Fire is a blackbody radiator and a participating medium. The properties
of a participating medium are described by the scattering, absorption
and emission properties. Specifically, we have the scattering
coefficient, s, the absorption coefficient, a, and the extinction
a +s. These coefficients specify the amount of
scattering, absorption and extinction per unit-distance for a beam of
light moving through the medium. The spherical distribution of the
scattered light at a location is specified by a phase-function, p. We
use the Henyey-Greenstein phase-function [Henyey and Greenstein 1941]:
p(ω', ω) = (1 − g^2) / (4π (1 + g^2 − 2g (ω' · ω))^1.5). (19)
Here, g in [−1, 1] is the scattering anisotropy of the medium, g > 0 is forward scattering, g < 0 is backward scattering, while g = 0 is isotropic scattering. Note that the distribution of the scattered light only depends on the angle between the incoming direction, ω', and the outgoing direction, ω.
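Evaluating and importance sampling the Henyey-Greenstein phase function is standard; a small Python sketch (scalar version, illustrative only) follows:

import math, random

def hg_phase(cos_theta, g):
    # Equation 19, normalized over the sphere of directions.
    return (1.0 - g * g) / (4.0 * math.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

def sample_hg_cos_theta(g, xi=None):
    # Draw cos(theta) from the Henyey-Greenstein distribution by inverting its CDF.
    xi = random.random() if xi is None else xi
    if abs(g) < 1e-3:
        return 1.0 - 2.0 * xi                       # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - s * s) / (2.0 * g)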
Light transport in participating media is described by an integro-differential
equation, the radiative transport equation [Siegel and
Howell 1981]:
(ω · ∇)Lλ(x, ω) = −σt(x) Lλ(x, ω) + σa(x) Le,λ(x, ω) + σs(x) ∫Ω p(ω', ω) Lλ(x, ω') dω'. (20)
Here, Lλ is the spectral radiance, and Le,λ is the emitted spectral radiance. Note that σs, σa, and σt vary throughout the medium and
therefore depend on the position x.
We solve Equation 20 to estimate the radiance distribution in
the medium by using a stochastic adaptive ray marching algorithm
which recursively samples multiple scattering. In highly scattering
media this approach is costly; however, we are concerned about
fire which is a blackbody radiator (no scattering, only absorption)
that creates a low-albedo smoke (the only scattering part of the fire-
medium). This makes the Monte Carlo ray tracing approach practical.
To estimate the radiance along a ray traversing the medium, we
split the ray into short segments. For a given segment, n, the scattering
properties of the medium are assumed constant, and the radi-
ance, Ln, at the start of the segment is computed as:
Ln(x, ω) = (1 − σt(x) Δx) Ln−1(x + Δx ω, ω) + σs(x) Lλ(x, ω') Δx + σa(x) Le,λ(x) Δx. (21)
This equation is evaluated recursively to compute the total radiance
at the origin of the ray. Δx is the length of the segment, Ln−1 is the radiance at the beginning of the next segment, and ω' is a sample
direction for a new ray that evaluates the indirect illumination in a
given direction for the segment. We find the sample direction by
importance sampling the Henyey-Greenstein phase function. Note
that we do not explicitly sample the fire volume; instead we rely
on the Monte Carlo sampling to pick up energy as sample rays hit
the fire. This strategy is reasonably efficient in the presence of the
low-albedo smoke generated by the fire.
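As a stripped-down illustration of the marching loop, the sketch below accumulates emission and absorption only, dropping the recursive in-scattering sample (sigma_a and emitted are hypothetical callables supplied by the caller):

import math

def march_emission_absorption(sigma_a, emitted, samples_far_to_near, dx):
    # Back-to-front accumulation: attenuate what is behind the segment and
    # add the segment's own blackbody emission.
    L = 0.0
    for x in samples_far_to_near:
        a = sigma_a(x)
        L = L * math.exp(-a * dx) + a * emitted(x) * dx
    return L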
Figure
9: A metal ball passes through and interacts with a gas
flame.
The emitted radiance is normally ignored in graphics, but for fire
it is an essential component. For a blackbody we can compute the
emitted spectral radiance using Planck's formula:
Le,λ(x) = C1 / (λ^5 (e^(C2/(λT)) − 1)),
where T is the temperature, C1 ≈ 3.7418 · 10^−16 W m^2, and C2 ≈ 1.4388 · 10^−2 m K [Siegel and Howell 1981]. In the next section,
we will describe how we render fire taking this spectral distribution
of emitted radiance into account.
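A direct transcription of Planck's formula with the constants quoted above (whether an extra 1/π factor is needed to convert emissive power to radiance depends on the convention, so treat the overall scale as relative):

import math

C1 = 3.7418e-16   # W m^2
C2 = 1.4388e-2    # m K

def planck(wavelength_m, T):
    # Relative blackbody spectral emission at the given wavelength and temperature.
    return C1 / (wavelength_m ** 5 * (math.exp(C2 / (wavelength_m * T)) - 1.0))

print(planck(600e-9, 1800.0))   # e.g. emission at 600 nm for an 1800 K region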
5.2 Reproducing the Color of Fire
Accurately reproducing the colors of fire is critical for a realistic fire
rendering. The full spectral distribution can be obtained directly by
using Planck's formula for spectral radiance when performing the
ray marching. This spectrum can then be converted to RGB before
being displayed on a monitor. To get the right colors of fire out of
this process it is necessary to take into account the fact that our eyes
adapt to the spectrum of the fire.
To compute the chromatic adaptation for fire, we use a von Kries
transformation [Fairchild 1998]. We assume that the eye is adapted
to the color of the spectrum for the maximum temperature present in
the fire. We map the spectrum of this white point to the LMS cone
responsivities (Lw, Mw, Sw). This enables us to map a spectrum to
the monitor as follows. We first integrate the spectrum to find the
raw XYZ tristimulus values (Xr, Yr, Zr). We then find the adapted
XYZ tristimulus values (Xa, Ya, Za) as:
[Xa, Ya, Za]^T = M^−1 diag(1/Lw, 1/Mw, 1/Sw) M [Xr, Yr, Zr]^T.
Here, M maps the XYZ colors to LMS (consult [Fairchild 1998] for
the details). Finally, we map the adapted XYZ tristimulus values to
the monitor RGB space using the monitor white point.
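A sketch of this adaptation step in Python (the XYZ-to-LMS matrix M is passed in rather than hard-coded, since its exact values come from the color appearance model in [Fairchild 1998]):

import numpy as np

def von_kries_adapt(xyz_raw, M, lms_white):
    # Map raw XYZ to LMS, divide by the adapting white (Lw, Mw, Sw),
    # and map back to XYZ before the final conversion to monitor RGB.
    lms = M @ np.asarray(xyz_raw, dtype=float)
    lms_adapted = lms / np.asarray(lms_white, dtype=float)
    return np.linalg.solve(M, lms_adapted)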
In our implementation, we integrate the spectrum of the blackbody
at the source (e.g. when emitted radiance is computed); we
then map this spectrum to RGB before using it in the ray marcher.
This is much faster than doing a full spectral participating media
simulation, and we found that it is sufficiently accurate, since we
already assume that the fire is the dominating light source in the
scene when doing the von Kries transformation.
Figure
10: A flammable ball passes through a gas flame and
catches on fire.
6 Results
Figure
1, rendered by proprietary software at ILM which is a research
project not yet used in production, shows a frame from a
simulation of a flamethrower. We used a domain that was 8 meters
long with 160 grid cells in the horizontal direction.
The flame was injected at 30m/s out of a cylindrical pipe with diameter
.4m. We used α = .15m/(Ks2) for the buoyancy force. The vorticity confinement parameter was set to ε = 16 for the gaseous fuel and to ε = 60 for
the hot gaseous products. The simulation cost was approximately 3
minutes per frame using a Pentium IV.
Solid objects are treated by first tagging all the voxels inside the
object as occupied. Then all the occupied voxel cell faces have their
velocity set to that of the object. The temperature at the center of
occupied voxels is set to the object's temperature and the (smoke)
density is set to zero. Figure 9 shows a metal sphere as it passes
through and interacts with a gas fire. Note the reflection of the fire
on the surface of the sphere. For more details on object interactions
with liquids and gases see [Foster and Fedkiw 2001] and [Fedkiw
et al. 2001].
Since we have high temperatures (i.e. fire) in our flow field, we
allow our objects to heat up if their temperature is lower than that
of their surroundings. We use a simple conduction model where
we increase the local temperature of an object depending on the
surrounding air temperature and object temperature as well as the
time step Dt. Normally, the value of the implicit surface is set to
a negative value of h at the center of all voxels occupied by objects
indicating that there is no available fuel. However, we can
easily model ignition for objects we designate as flammable. Once
the temperature of a voxel inside an object increases above a pre-
defined threshold indicating ignition, we change the value of the
implicit surface in that voxel from -h to h indicating that it contains
fuel. In addition, those voxels' faces have their velocities augmented
above the object velocity by an increment in the direction
normal to the object surface indicating that gaseous fuel is being
injected according to the phase change addressed earlier for solid
fuels. In figure 10, we illustrate this technique with a spherical ball
that heats up and subsequently catches on fire as it passes through
the flame. Both this flammable ball and the metal ball were computed
on a 120 × 120 × 120 grid at approximately 5 minutes per
frame.
7 Conclusion
We have presented a physically based model for animating and rendering
fire and flames. We demonstrated that this model could be
used to produce realistic looking turbulent flames from both solid
and gaseous fuels. We showed plausible interaction of our fire and
smoke with objects, including ignition of objects by the flames.
Acknowledgment
Research supported in part by an ONR YIP and PECASE award
IIS-0085864 and the DOE ASCI Academic Strategic Alliances Program
(LLNL contract B341491). The authors would like to thank
Willi Geiger, Philippe Rebours, Samir Hoon, Sebastian Marino and
Industrial Light + Magic for rendering the flamethrower.
--R
Interactive Simulation of Fire in Virtual Building Environments.
Two dimensional Visual Simulation of Flames
Realistic visualisation of the Pompeii frescoes.
Interactive Tone Mapping.
Color Appearance Models.
Visual Simulation of Smoke.
Practical Animation of Liq- uids
Modeling the Motion of a Hot, Turbulent Gas.
RADCAL: A Narrow-band Model for Radiation Calculations in Combustion Environment
Diffuse radiation in the galaxy.
A Simple Model of Flames.
Nonsteady Flame Propagation.
Animating Exploding Objects.
Great Balls of Fire.
A Visual Model for Blast Waves and Fracture.
A Boundary Condition Capturing Method for Incompressible Flame Discontinuities
Graphical Modeling and Animation of Brittle Fracture.
Fronts Propagating with Curvature Dependent Speed: Algorithms Based on Hamilton-Jacobi Formulations
A Multiscale Model of Adaptation and Spatial Vision for Realistic Image Display.
Synthesizing Flames and their Spread.
Rendering Participating Media: Problems and Solutions from Application Areas.
A Fast Marching Level Set Method for Monotonically Advancing Fronts.
Thermal Radiation Heat Transfer.
Depicting Fire and Other Gaseous Phenomena Using Diffusion Process.
Stable Fluids.
Modification of the Euler Equations for Vorticity Confinement.
Efficient Algorithms for Globally Optimal Trajectories.
An Introduction to Combustion.
Animating Explosions.
--TR
Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations
A simple model of flames
Depicting fire and other gaseous phenomena using diffusion processes
Interactive simulation of fire in virtual building environments
Modeling the motion of a hot, turbulent gas
A multiscale model of adaptation and spatial vision for realistic image display
Stable fluids
Graphical modeling and animation of brittle fracture
A non-oscillatory Eulerian approach to interfaces in multimaterial flows (the ghost fluid method)
Animating explosions
A visual model for blast waves and fracture
Animating exploding objects
Visual simulation of smoke
Practical animation of liquids
Realistic visualisation of the Pompeii frescoes
A boundary condition capturing method for incompressible flame discontinuities
Volume Rendering of Pool Fire Data
Interactive Tone Mapping
--CTR
Nafees bin Zafar , Henrik Falt , Mir Zafar Ali , Chamberlain Fong, DD::Fluid::Solver::SolverFire, ACM SIGGRAPH 2004 Sketches, August 08-12, 2004, Los Angeles, California
Zeki Melek , John Keyser, Modeling Decomposing Objects under Combustion, Proceedings of the conference on Visualization '04, p.598.14, October 10-15, 2004
Alfred R. Fuller , Hari Krishnan , Karim Mahrous , Bernd Hamann , Kenneth I. Joy, Real-time procedural volumetric fire, Proceedings of the 2007 symposium on Interactive 3D graphics and games, April 30-May 02, 2007, Seattle, Washington
Tamás Umenhoffer , László Szirmay-Kalos , Gábor Szijártó, Spherical billboards and their application to rendering explosions, Proceedings of the 2006 conference on Graphics interface, June 07-09, 2006, Quebec, Canada
Raanan Fattal , Dani Lischinski, Target-driven smoke animation, ACM Transactions on Graphics (TOG), v.23 n.3, August 2004
Pavel Slavik , Marek Gayer , Frantisek Hrdlicka , Ondrej Kubelka, Visualization for modeling and simulation: problems of visualization of technological processes, Proceedings of the 35th conference on Winter simulation: driving innovation, December 07-10, 2003, New Orleans, Louisiana
Ivo Ihrke , Marcus Magnor, Image-based tomographic reconstruction of flames, Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, August 27-29, 2004, Grenoble, France
Maurya Shah , Jonathan M. Cohen , Sanjit Patel , Penne Lee , Frédéric Pighin, Extended Galilean invariance for adaptive fluid simulation, Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, August 27-29, 2004, Grenoble, France
Flavien Bridault-Louchez , Michel Leblond , François Rousselle, Enhanced illumination of reconstructed dynamic environments using a real-time flame model, Proceedings of the 4th international conference on Computer graphics, virtual reality, visualisation and interaction in Africa, January 25-27, 2006, Cape Town, South Africa
Jos Stam, Flows on surfaces of arbitrary topology, ACM Transactions on Graphics (TOG), v.22 n.3, July
Adrien Treuille , Antoine McNamara , Zoran Popović , Jos Stam, Keyframe control of smoke simulations, ACM Transactions on Graphics (TOG), v.22 n.3, July
Xiaoming Wei , Wei Li , Klaus Mueller , Arie Kaufman, Simulating fire with texture splats, Proceedings of the conference on Visualization '02, October 27-November 01, 2002, Boston, Massachusetts
Jeong-Mo Hong , Tamar Shinar , Myungjoo Kang , Ronald Fedkiw, On Boundary Condition Capturing for Multiphase Interfaces, Journal of Scientific Computing, v.31 n.1-2, p.99-125, May 2007
Zeki Melek , John Keyser, Multi-representation interaction for physically based modeling, Proceedings of the 2005 ACM symposium on Solid and physical modeling, p.187-196, June 13-15, 2005, Cambridge, Massachusetts
Z. Fan , Y. Zhao , A. Kaufman , Y. He, Adapted unstructured LBM for flow simulation on curved surfaces, Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation, July 29-31, 2005, Los Angeles, California
Yongxia Zhou , Jiaoying Shi , Jiarong Yu, Free and shape-controlled flows of smoke, Proceedings of the 2006 ACM international conference on Virtual reality continuum and its applications, June 14-April 17, 2006, Hong Kong, China
Ye Zhao , Xiaoming Wei , Zhe Fan , Arie Kaufman , Hong Qin, Voxels on Fire, Proceedings of the 14th IEEE Visualization 2003 (VIS'03), p.36, October 22-24,
Path-based control of smoke simulations, Proceedings of the 2006 ACM SIGGRAPH/Eurographics symposium on Computer animation, September 02-04, 2006, Vienna, Austria
Ivo Ihrke , Marcus Magnor, Adaptive grid optical tomography, Graphical Models, v.68 n.5, p.484-495, September 2006
Nuttapong Chentanez , Tolga G. Goktekin , Bryan E. Feldman , James F. O'Brien, Simultaneous coupling of fluids and deformable bodies, Proceedings of the 2006 ACM SIGGRAPH/Eurographics symposium on Computer animation, September 02-04, 2006, Vienna, Austria
Ye Zhao , Feng Qiu , Zhe Fan , Arie Kaufman, Flow simulation with locally-refined LBM, Proceedings of the 2007 symposium on Interactive 3D graphics and games, April 30-May 02, 2007, Seattle, Washington
Insung Ihm , Byungkwon Kang , Deukhyun Cha, Animation of reactive gaseous fluids through chemical kinetics, Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, August 27-29, 2004, Grenoble, France
Bryan E. Feldman , James F. O'Brien , Okan Arikan, Animating suspended particle explosions, ACM Transactions on Graphics (TOG), v.22 n.3, July
Frédéric Pighin , Jonathan M. Cohen , Maurya Shah, Modeling and editing flows using advected radial basis functions, Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, August 27-29, 2004, Grenoble, France
Michael B. Nielsen , Ken Museth, Dynamic Tubular Grid: An Efficient Data Structure and Algorithms for High Resolution Level Sets, Journal of Scientific Computing, v.26 n.3, p.261-299, March 2006
Jeong-Mo Hong , Chang-Hun Kim, Discontinuous fluids, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Robert Bridson , Ronald Fedkiw , Matthias Müller-Fischer, Fluid simulation: SIGGRAPH 2006 course notes (Fedkiw and Müller-Fischer presentation videos are available from the citation page), ACM SIGGRAPH 2006 Courses, July 30-August 03, 2006, Boston, Massachusetts
Nick Rasmussen , Duc Quang Nguyen , Willi Geiger , Ronald Fedkiw, Smoke simulation for large scale phenomena, ACM Transactions on Graphics (TOG), v.22 n.3, July
Simon Premože , Michael Ashikhmin , Peter Shirley, Path integration for light transport in volumes, Proceedings of the 14th Eurographics workshop on Rendering, June 25-27, 2003, Leuven, Belgium
Andrew Selle , Nick Rasmussen , Ronald Fedkiw, A vortex particle method for smoke, water and explosions, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Frank Losasso , Frédéric Gibou , Ron Fedkiw, Simulating water and smoke with an octree data structure, ACM Transactions on Graphics (TOG), v.23 n.3, August 2004
Mark Pauly , Richard Keiser , Bart Adams , Philip Dutré , Markus Gross , Leonidas J. Guibas, Meshless animation of fracturing solids, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Frank Losasso , Tamar Shinar , Andrew Selle , Ronald Fedkiw, Multiple interacting liquids, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
Eran Guendelman , Andrew Selle , Frank Losasso , Ronald Fedkiw, Coupling water and smoke to thin deformable and rigid shells, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Gladimir V. G. Baranoski , Justin Wan , Jon G. Rokne , Ian Bell, Simulating the dynamics of auroral phenomena, ACM Transactions on Graphics (TOG), v.24 n.1, p.37-59, January 2005
Ronald P. Fedkiw , Guillermo Sapiro , Chi-Wang Shu, Shock capturing, level sets, and PDE based methods in computer vision and image processing: a review of Osher's contributions, Journal of Computational Physics, v.185 n.2, p.309-341, March | flames;stable fluids;incompressible flow;blackbody radiation;smoke;vorticity confinement;implicit surface;chemical reaction |
566644 | Structural modeling of flames for a production environment. | In this paper we describe a system for animating flames. Stochastic models of flickering and buoyant diffusion provide realistic local appearance while physics-based wind fields and Kolmogorov noise add controllable motion and scale. Procedural mechanisms are developed for animating all aspects of flame behavior including moving sources, combustion spread, flickering, separation and merging, and interaction with stationary objects. At all stages in the process the emphasis is on total artistic and behavioral control while maintaining interactive animation rates. The final system is suitable for a high volume production pipeline. | Figure
1: A burning torch is waved through the air.
or can clearly be relaxed because the fluid is already acting
atypically.
Fire on the other hand, is a dramatic element that requires the
maximum level of control possible while maintaining a believable
appearance. We expect fire to look complex and unpredictable,
while at the same time having a recognizable structure according
to the conditions under which it is burning. That complexity by
itself makes direct numerical simulation of fire much less attractive
than for other phenomena for three reasons:
Numerical simulations scale poorly. As the resolution of a 3D
simulation increases, computational complexity increases by
at least O(n^3) [Foster and Metaxas 1996; Stam and Fiume
1995]. The resolution required to capture the detail in even a
relatively small fire makes simulation expensive.
The number of factors that affect the appearance of fire under
different circumstances leads to an inter-dependent simulation
parameter space. It's difficult for developers to place intuitive
control functions on top of the underlying physical system.
Fire is chaotic. Small changes in initial conditions cause
radically different results. From an animation standpoint it's
difficult to iterate towards a desired visual result.
Together, these restrictions make numerical simulation a poor
choice as a basis for a large-scale fire animation system, where
control and efficiency are as equally important as realism.
This paper presents a different approach to modeling fire. Instead
of modeling the physics of combustion, we achieve the trade-off
between realism, control, and efficiency by recognizing that many
of the visual cues that define fire phenomena are statistical in
nature. We then separate these vital cues from other components
that we wish to artistically control. This provides us with a number
of distinct structural elements. Some of these elements are physics-
based, but in general they need not be. Each structural element has
a number of animation parameters associated with it, allowing us
to mimic a variety of both real-world and production-world fire
conditions.
Realism is achieved by basing structural state changes on the
measured statistical properties of real flames. In addition, we
define a set of large-scale procedural models that define locally
how a group of flame structures evolve over time. These include
physics-based effects such as fuel combustion, diffusion, or
convection, as well as pure control mechanisms such as animator-
defined time and space curves. Together, the structural flame
elements and procedural controls form the basis of a system for
modeling and animating a wide range of believable fire effects
extremely efficiently in the context of a direction driven animation
pipeline.
2. PREVIOUS WORK
Previous work on modeling dynamic flames falls loosely into two
categories: direct numerical simulation and visual modeling. Direct
simulations have achieved realistic visual results when modeling
the rotational motion due to the heat plume around a fire [Chiba et
al. 1994; Inakage 1990]. Simulation has also proved efficient at
modeling the behavior of smoke given off by a fire [Rushmeier et
al. 1995; Stam and Fiume 1993], and the heat plume from an
explosion [Yngve et al. 2000]. For the shape and motion of flames
however, simulation has not been so useful. Numerical models
from Computational Fluid Dynamics have not proven amenable to
simplification without significant loss of detail [Drysdale 1998].
Without that simplification they are an expensive option for large-scale
animations.
Visual modeling has focused on efficiency and control. Particle
systems are the most widely used fire model [Nishita and Dobashi
2001; Stam and Fiume 1995]. Particles can interact with other
primitives, are easy to render, and scale linearly (if there are no
inter-particle forces). The problem of realism falls solely on the
shoulders of the animator though. Force fields and procedural
noise can achieve adequate looking large-scale effects due to
convection, but it is very difficult to come up with a particle-based
model that accurately captures the spatial coherence of real fire.
Flame coherence has been modeled directly using chains of
connected particles [Beaudoin and Paquet 2001; Reeves 1983].
This retains many of the advantages of particle systems while also
allowing animators to treat a flame as a high level structural
element.
3. ESSENTIAL MODEL
The system we have developed to use as a general fire animation
tool has eight stages. The key to its effectiveness is that many
stages can be either directly controlled by an animator, or driven
by a physics-based model. The components are as follows:
Individual flame elements are modeled as parametric space
curves. Each curve interpolates a set of points that define the
spine of the flame.
The curves evolve over time according to a combination of
physics-based, procedural, and hand-defined wind fields.
Physical properties are based on statistical measurements of
natural diffusion flames. The curves are frequently re-sampled
to ensure continuity, and to provide mechanisms to model
flames generated from a moving source.
Figure 2: Classic three stage buoyant diffusion flame.
The curves can break, generating independently evolving
flames with a limited lifespan. Engineering observations
provide heuristics for both processes.
A cylindrical profile is used to build an implicit surface
representing the oxidization region, i.e. the visible part of the
flame. Particles are point sampled close to this region using a
volumetric falloff function.
Procedural noise is applied to the particles in the parameter
space of the profile. This noise is animated to follow thermal
buoyancy.
The particles are transformed into the parametric space of the
flame's structural curve. A second level of noise, with a
Kolmogorov frequency spectrum, provides turbulent detail.
The particles are rendered using either a volumetric, or a fast
painterly method. The color of each particle is adjusted
according to color properties of its neighbors, allowing flame
elements to visually merge in a realistic way.
To complete the system, we define a number of procedural
controls to govern placement, intensity, lifespan, and
evolution in shape, color, size, and behavior of the flames.
The result is a general system for efficiently animating a variety of
natural fire effects. The cost of direct simulation is only incurred
when it is desired. Otherwise, the system provides complete
control over large-scale behavior.
4. STRUCTURAL ELEMENTS
The fire animation system is built up from single flames modeled
on a natural diffusion flame (see Figure 2). Observed statistical
properties of real flames are used wherever possible to increase the
realism of the model.
4.1 Base Curve
The basic structural element of the flame system is an interpolating
B-Spline curve. Each curve represents the central spine of a single
flame. The flames themselves can merge or split (see Section 4.3)
but all of the fire phenomena we model are built from these
primitives.
In the first frame of animation for which a particular flame is
active, a particle is generated at a fixed point on the burning
surface and released into a wind field. The particle is advanced in
the wind field for a frame using an explicit Euler integration
method (Runge-Kutta is sufficient and completely stable in this
case), and a new particle is generated at the surface. The line
between these two points is sampled so that n control points are
evenly distributed along it. For each additional frame of animation,
they convect freely within the wind field.
After convection, an interpolating B-Spline is fitted so that it
passes through all the points. This curve is then parametrically re-
sampled, generating a new set of n points (always keeping the first
and last unchanged). This re-sampling ensures that whatever the
value of n, visual artifacts don't appear in the structure if control
points cluster together, or as a side effect of large time steps.
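The per-frame spine update lends itself to a compact sketch. The following Python fragment is only an illustration of the convect-then-resample loop described above: the wind callback, the time step, and the use of a polyline in place of the interpolating B-Spline are assumptions of the sketch, not the paper's implementation.

```python
import numpy as np

def euler_step(points, wind, dt):
    """Advance each spine control point one frame through the wind field (explicit Euler)."""
    return np.array([p + dt * wind(p) for p in points])

def resample(points, n):
    """Re-sample a polyline to n evenly spaced points, keeping the first and last fixed.
    (The paper fits an interpolating B-Spline first; a polyline keeps the sketch short.)"""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    targets = np.linspace(0.0, s[-1], n)
    out = np.empty((n, points.shape[1]))
    for i, t in enumerate(targets):
        j = min(np.searchsorted(s, t, side="right") - 1, len(points) - 2)
        a = 0.0 if seg[j] == 0.0 else (t - s[j]) / seg[j]
        out[i] = (1.0 - a) * points[j] + a * points[j + 1]
    return out

def update_flame_spine(points, base, wind, dt=1.0 / 24.0, n=8):
    """One frame: convect the control points, pin a fresh particle at the burning
    surface, then re-sample so control points never cluster or stretch apart."""
    moved = euler_step(points, wind, dt)
    moved[0] = np.asarray(base, dtype=float)           # new particle generated at the source
    return resample(moved, n)
```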
4.2 Flame Evolution
During the animation, the control points for the structural curve
evolve within a wind field. This evolution is for providing global
shape and behavior. Specific local detail due to fuel consumption
and turbulence at the combustion interface is added later.
The motion is built from four main components: convection,
diffusion, initial motion, and buoyancy. Initial motion is applied
only at the very base of the curve. If the fire-generating surface has
a velocity V, the particle, P0, is given an initial velocity, -V. The
flame is then generated in a stationary reference frame. Because
flames move with the participating media (i.e., with no specific
inertia), this provides an efficient way of taking account of moving
sources.
Buoyancy is due to the tendency for hotter (less dense) air to rise.
The region around the visible part of the flame is relatively
homogeneous in terms of temperature (at least with respect to the
ambient temperature). We chose to model buoyancy as a direct
linear upward force. The resulting flow is laminar in nature, so we
add rotational components in the form of simulated wind fields and
a noise field generated from a Kolmogorov spectrum.
The equation of motion for a structural particle, p, is defined by
dx_p/dt = w(x_p,t) + d(T_p) + V_p + c(T_p,t) (1)
where, w(xp,t), is an arbitrary controlling wind field, d(Tp), the
motion due to direct diffusion, Vp, motion due to movement of the
source and, c(Tp, t), the motion due to thermal buoyancy. Tp is the
temperature of the particle. Diffusion is modeled as random
Brownian motion scaled by the temperature Tp. Thermal buoyancy
is assumed to be constant over the lifetime of the particle,
therefore
c(T,t) = -b g_y (T - T_0) t_p^2 (2)
where b is the coefficient of thermal expansion, gy is the vertical
component of gravity, T0 is the ambient temperature, and tp is the
age of the particle. Equations (1) and (2) allow us to combine
accurate simulated velocity fields with ad-hoc control fields
without producing visual discontinuities.
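As a concrete reading of Eqs (1) and (2), the sketch below advances one structural particle; the diffusion scale, the expansion coefficient, and the wind-field signature are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

BETA  = 3.4e-3     # coefficient of thermal expansion b (illustrative value)
G_Y   = -9.8       # vertical component of gravity g_y
T_AMB = 300.0      # ambient temperature T0

def buoyancy(T_p, age):
    """Eq (2): thermal buoyancy, assumed constant over the particle's lifetime."""
    return np.array([0.0, -BETA * G_Y * (T_p - T_AMB) * age ** 2, 0.0])

def diffusion(T_p, rng, scale=1e-4):
    """Brownian motion scaled by the particle temperature (the scale is an assumption)."""
    return scale * T_p * rng.standard_normal(3)

def advance_particle(x_p, V_p, T_p, age, t, wind, dt, rng):
    """One explicit Euler step of Eq (1):
    dx_p/dt = w(x_p, t) + d(T_p) + V_p + c(T_p, t)."""
    velocity = wind(x_p, t) + diffusion(T_p, rng) + V_p + buoyancy(T_p, age)
    return x_p + dt * velocity
```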
4.3 Separation and Flickering
We model flame separation and flickering as a statistical process.
The classic regions of a free-standing diffusion flame are illustrated
in Figure 2. The intermittent region of the flame is defined over the
range [Hp, Hi]. From its creation a flame develops until it reaches Hi. At that point we periodically test a random number against the probability to determine whether the flame will flicker or separate at a length h.
Figure 3: Different normalized flame profiles for a candle flame, torch flame, and camp fire flames respectively.
The frequency f is the approximate breakaway rate in Hz, while
Vc is the average velocity of the structural control points. From
observation [Drysdale 1998],
sources with a radius r. If necessary, D(h) = 1 for h > Hmax, where
Hmax is some artistically chosen limit on flame length.
Once separation occurs, a region of the structural curve is split off
from the top of the flame. This region extends from the top of the
flame to a randomly selected point below. We could find no
measured data that describes the size distribution of separated
flames. In that absence, this point is selected using a normal
distribution function with a mean of Hp+(Hi-Hp)/2 and standard
deviation of (Hi-Hp)/4. The control points representing the buoyant
region are fitted with a curve, and re-sampled in the same way as
for the parent structure (Section 4.1). The number of points split
off is not increased back to n in this case however. This prevents
additional detail appearing in the buoyancy region of the flame
where no additional fuel could be added. During its lifetime, the
structural particles that make up the separated flame follow Eq (1)
as before.
Without modeling the combustion process there's no accurate way
to determine the fuel content of the breakaway flame. Therefore
there's no good model for how long it will remain visible. Each
separate flame is given a life-span of Ai^3, where i is a uniform
random variable in the range [0,1], and A is a length scale ranging
from 1/24th second for small flames up to 2 seconds for a large
pool fire. The cube ensures that most breakaway flames are short
lived. There's no reason why the buoyant flame can't separate
again, although in general the entrainment region quickly
dominates as there's little airborne fuel in a buoyant flame except
for oxygen starved fires.
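The separation test and breakaway lifespan above reduce to a few lines. In the sketch below, separation_probability stands in for the probability formula (which did not survive in this copy of the text), so only the control flow and the stated distributions should be read as the paper's model.

```python
import random

def maybe_separate(h, H_p, H_i, H_max, separation_probability, rng=random):
    """Once the flame has developed past H_i, test a random number against the
    separation probability; return the split height, or None if nothing breaks away."""
    if h < H_i:
        return None
    p = 1.0 if h > H_max else separation_probability(h)
    if rng.random() >= p:
        return None
    # Split point: normal distribution centred halfway through the intermittent region.
    split = rng.gauss(H_p + (H_i - H_p) / 2.0, (H_i - H_p) / 4.0)
    return min(max(split, H_p), H_i)

def breakaway_lifespan(A, rng=random):
    """Lifespan A * i^3 with i uniform in [0, 1]; the cube keeps most breakaway
    flames short lived (A ranges from 1/24 s for small flames to 2 s for a pool fire)."""
    return A * rng.random() ** 3
```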
4.4 Flame Profiles
With the global structure and behavior of a flame defined, we now
concentrate on the visible shape of the flame itself. A flame is a
combustion region between the fuel source and an oxidizing agent.
The region itself is not clearly defined, so we require a model that
is essentially volumetric.
One such model treats each segment (the line between two
adjacent control points) of the structural curve as a source for a
potential field [Beaudoin and Paquet 2001]. The total field for a
flame is the summation of the fields for each of its segments. The
flame is rendered volumetrically using different color gradients for
various iso-contours of the potential field. This method gives good
results for small fires like candle flames, and handles flame merging
efficiently. Stylization is difficult however, requiring contraction
functions for each desired flame shape. In addition, turbulent noise
has to be built directly into the potential function as there is no
subsequent transformation stage before rendering.
To retain complete control over basic shape, we represent a flame
using a rotationally symmetric surface based on a simple two-dimensional
profile. The profile is taken from a standard library
and depends on the scale of flame effect that we wish to model
(see
Figure
3 for examples). We have had good results from hand
drawn profiles, as well as those derived from photographs.
We then define the light density of the visible part of the flame as
I
where xf(x) is the closest point to x on the parametric surface
defined by rotating the profile, and I is the density of combustion
at the surface (normalized to one for this work). Equation (4)
defines a simple volumetric density function for our flame.
In order to transform and displace this starting shape into an
organic and realistic looking flame, we need to transform Eq (4)
onto the flame's structural curve. The density function is first point
sampled volumetrically using a Monte Carlo method. The point
samples do not survive from frame to frame. The sampling allows
us to deform the density function without having to integrate it. It
is important though, that the rendering method employed (see
Section 5) be independent of sample density, so that the flame does
not appear pointillistic. Once the density function has been
approximated as particles, we displace and transform them to
simulate the chaotic process of flame formation.
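The Monte Carlo sampling step can be sketched as rejection sampling around the rotationally symmetric profile. Because Eq (4) is garbled in this copy, the exponential falloff below is an assumed stand-in for the real density; the profile itself is treated as a piecewise-linear radius-versus-height table.

```python
import numpy as np

def profile_radius(y, profile_y, profile_r):
    """Radius of the rotationally symmetric flame surface at height y."""
    return np.interp(y, profile_y, profile_r)

def sample_profile(profile_y, profile_r, n_samples, falloff=4.0, rng=None):
    """Monte Carlo point samples of the flame density in profile (cylindrical) space.
    The exponential falloff away from the surface is an assumption, not Eq (4)."""
    rng = rng or np.random.default_rng()
    y_min, y_max = profile_y[0], profile_y[-1]
    r_max = 1.5 * max(profile_r)
    samples, weights = [], []
    while len(samples) < n_samples:
        y = rng.uniform(y_min, y_max)
        r = rng.uniform(0.0, r_max)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        d = abs(r - profile_radius(y, profile_y, profile_r))   # distance to the surface
        w = np.exp(-falloff * d)
        if rng.uniform() < w:                                   # rejection sampling
            samples.append((r * np.cos(theta), y, r * np.sin(theta)))
            weights.append(w)
    return np.array(samples), np.array(weights)
```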
4.5 Local Detail
Two levels of structural fluctuation are applied to the point
samples that define the visible part of the flame. Both animate in
time and space. The first directly affects overall shape by
displacing particles from their original sample positions, while the
second simulates air turbulence. We define buoyancy noise, which
represents the combustion fluctuation at the base of the flame. It
propagates up the flame profile according to the velocities of the
nearest structural particles (Section 4.1). There is no real physical
data to go on here, but a noise function that visually looks good
for this is Flow Noise [Perlin and Neyret 2001]. The rate of
rotation of the linear noise vectors over time is inversely
proportional to the diameter of the flame source, i.e., large flames
lead to quicker vector rotation.
The participating medium also causes the flame to distort over
time. At this scale, a vector field created using a Kolmogorov
spectrum exhibits small-scale turbulence and provides visual
realism. Kolmogorov noise is relatively cheap to calculate and can
be generated on a per-frame basis (see [Stam and Fiume 1993] for
some applications of the Kolmogorov spectrum).
Each sample particle is therefore:
Displaced away from its initial position according to the Flow
Noise value that has propagated up the profile.
Transformed into structural curve space according to the straightforward mapping shown in Figure 4.
Displaced a second time using a vector field generated from a Kolmogorov spectrum.
Figure 4: A cylindrical coordinate system is used to map point samples from the profile space to the space of the deformed structural curve.
After these transformations, the particles are tested against the
transformed profiles of their parent flame's neighbors. If a particle
is inside a neighboring flame and outside the region defined by Eq
(4), then that particle is not rendered at all. This gives the
appearance that individual flames can merge. The Kolmogorov
field and global wind field w(xp,t) ensure that merged flames
behave in a locally similar fashion even though there is nothing in
the explicit model to account for merging. The particles are then in
their final positions, ready for rendering.
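The per-particle transform chain (flow-noise displacement in profile space, the cylindrical mapping of Figure 4 onto the structural curve, then Kolmogorov displacement) might look like the sketch below. The noise callbacks and the way a local frame is built along the spine are assumptions of this sketch.

```python
import numpy as np

def spine_length(spine):
    return float(np.sum(np.linalg.norm(np.diff(spine, axis=0), axis=1)))

def to_curve_space(sample, spine):
    """Map an (x, y, z) profile-space sample onto the deformed spine: the sample's
    height picks a point and a local frame on the curve, and the radial offset is
    applied in that frame (a simplified stand-in for the mapping of Figure 4)."""
    x, y, z = sample
    t = np.clip(y / max(spine_length(spine), 1e-9), 0.0, 1.0)
    idx = min(int(t * (len(spine) - 1)), len(spine) - 2)
    p0, p1 = spine[idx], spine[idx + 1]
    tangent = (p1 - p0) / (np.linalg.norm(p1 - p0) + 1e-9)
    up = np.array([1.0, 0.0, 0.0]) if abs(tangent[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    n1 = np.cross(tangent, up)
    n1 /= np.linalg.norm(n1) + 1e-9
    n2 = np.cross(tangent, n1)
    base = p0 + (t * (len(spine) - 1) - idx) * (p1 - p0)
    return base + x * n1 + z * n2

def transform_particle(sample, spine, flow_noise, kolmogorov_noise):
    """Flow-noise displacement in profile space, map to curve space, add turbulence."""
    displaced = np.asarray(sample, dtype=float) + flow_noise(sample)
    world = to_curve_space(displaced, spine)
    return world + kolmogorov_noise(world)
```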
5. RENDERING
5.1 Particle Color
Apparent flame color depends on the types of fuel and oxidizer
being consumed together with the temperature of the combustion
zone (hotter regions are bluer). Instead of trying to calculate the
color of the flames that we want to model, we find a reference
photograph and map the picture onto the two-dimensional profile
used for flame shape (Figure 3). Particles take their base color
directly from this mapping. Obviously, any image could be used,
giving complete control over the color of the flame. This mapping
does not determine the intensity of light from the particle, just its
base color.
5.2 Incandescence
The flames transmit energy towards the camera and into the
environment. We assume that the visible light transmitted is
proportional to the heat energy given out by the flame. This is
approximated by
is the upward mass flow, ΔH_c the heat of combustion of the fuel (e.g., 15 MJ kg^-1 for wood, 48 MJ kg^-1 for gasoline), A_f is the surface area of the fuel (m^2), r is the radius, and k is a scale factor for visual control.
Each particle then has its incandescence at the camera calculated
relative to
where l is the distance from the center of the flame to the camera,
and n is the total number of particle samples. The k in Eq (5) can
be automatically adjusted to compensate for the energy loss due to
the d(xi) term in Eq (6).
Equation (5) also approximates the total energy given out to the
environment. For global illumination purposes, an emitting sphere
at the center of each flame segment with a radius of h/3, gives
reasonable looking lighting.
5.3 Image Creation
Ideally, the deformed flames should be volume rendered directly. If
the potential field approach is used, relatively fast methods are
available to do this [Brodie and Wood 2001; Rushmeier et al.
1995]. For the system described however, volume rendering
requires density samples to be projected through two inverse
transformations to hit the density function, Eq (4). This is elegant,
but limits us to using noise functions that can be integrated
quickly. Instead, we note that due to the coherence in the global
and local noise fields, sample points that are close together will
tend to remain close together under transformation. Therefore, if
we sample the untransformed profile with sufficient density to
eliminate aliasing, then the transformed sample points should also
not exhibit aliasing. The transforms themselves are inexpensive per
particle and so total particle count isn't a huge factor in terms of
efficiency. We calculate the approximate cross-sectional area of
the flame as it would appear from the camera, then super-sample
the profile with sufficient density for complete coverage.
The images in this paper were created using a sampling rate of
around ten particles per pixel. This gives a range of between
around a thousand and fifty thousand particles per flame. The
intensity of each particle is calculated using Eq (6) so the total
energy of the flame remains constant. The opacity associated with
each particle is more difficult to calculate however. Real flames are
highly transparent. They are so bright in relation to their
surrounding objects that only extremely bright objects are visible
through them. Cinema screens and video monitors cannot achieve
contrast levels high enough to give that effect unless the
background is relatively dark.
To our knowledge there's no engineering (or photographic) model
of apparent opacity in incandescent fluids. So, with no justification
other than observation we calculate the opacity of a particle as
being proportional to the relative brightness between it and what is
behind it. This is by no means a physically correct approach but it
appears to work well in practice. The particles are then motion
blurred using the instantaneous velocities given by Eq (1).
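The opacity heuristic above is simple enough to show directly; the clamp and the linear scale in this sketch are assumptions layered on the stated rule that opacity follows the particle's brightness relative to what lies behind it.

```python
def particle_opacity(particle_brightness, background_brightness, scale=1.0):
    """Opacity proportional to the relative brightness of the particle against its
    background: a bright flame over a dark scene reads as opaque, while a dim flame
    over a bright background stays transparent. Clamp and scale are assumptions."""
    if background_brightness <= 0.0:
        return 1.0
    ratio = particle_brightness / background_brightness
    return max(0.0, min(1.0, scale * ratio))
```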
6. PROCEDURAL CONTROL
Separating the fire system into a series of independent components
was driven by the desire to have complete control over the look
and behavior of flame animations. Animators have direct control
over the basic shape and color of the flames (Sections 4.4 and 5.1),
as well as the scale, flickering, and separation behavior (Section
4.3). In addition, global motion is dictated by the summation of
different kinds of wind field. Wind fields, whether procedural
[Rudolf and Raczkowski 2000; Yoshida and Nishita 2000] or
simulated [Miyazaki et al. 2001] are well understood and effective
as a control mechanism. The open nature of the system easily
allows for many other procedural controls, three examples of
which follow.
6.1 Object Interaction
Interaction between the structural flame elements and a stationary
object requires a wind field that flows around a representation of
that object. We approximate the object as a series of solid voxels
and simulate the flow of hot gas around it using the method based
on the Navier-Stokes equations described in [Foster and Metaxas
1997]. This wind field is used in Eq (1) to convect the structural
control points. The only difference is that after their positions are
modified they are tested against the object volume and moved
outside if there's a conflict. Any ambiguity in the direction the
point should be moved is resolved using the adjacency of the
points themselves.
The sample particles used for rendering are more difficult to deal
with in a realistic way. Generally, the wind field naturally carries
the flame away from the object, but in cases where the transformed
flame profile remains partly inside we simply make a depth test
against the object during rendering and ignore the internal
particles. This makes the energy (Eq (6)) of flame segments
inaccurate close to an object. However, the object itself is usually
brightly lit by nearby flames and this is not noticeable. Figure 7
shows a rendered sequence of images involving interaction
between flames and simple objects. The wind field environment
had a resolution of 40x40x40 cells, which proved to be sufficient
to achieve flow that avoids the objects in a believable way.
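The voxel-based correction of control points can be sketched as follows; the neighbourhood search radius and the snap-to-nearest-empty-cell policy are assumptions (the paper resolves ambiguous directions using the adjacency of the points themselves).

```python
import numpy as np

def in_grid(solid, i, j, l):
    return 0 <= i < solid.shape[0] and 0 <= j < solid.shape[1] and 0 <= l < solid.shape[2]

def resolve_against_object(points, solid, cell_size):
    """Move any structural control point that lands inside a solid voxel back outside.
    `solid` is a boolean occupancy grid of the approximated object."""
    resolved = points.copy()
    for k, p in enumerate(points):
        i, j, l = (p // cell_size).astype(int)
        if not in_grid(solid, i, j, l) or not solid[i, j, l]:
            continue                                   # already outside the object
        # Find the nearest empty voxel centre and snap the point to it.
        best, best_d = None, np.inf
        for di in range(-2, 3):
            for dj in range(-2, 3):
                for dl in range(-2, 3):
                    ii, jj, ll = i + di, j + dj, l + dl
                    if in_grid(solid, ii, jj, ll) and not solid[ii, jj, ll]:
                        centre = (np.array([ii, jj, ll]) + 0.5) * cell_size
                        d = np.linalg.norm(centre - p)
                        if d < best_d:
                            best, best_d = centre, d
        if best is not None:
            resolved[k] = best
    return resolved
```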
6.2 Flame Spread
Many factors influence how flames spread over an object, or jump
between objects. Spread rate models rapidly become complex to
take into account fuel and oxidizer concentrations, properties of
the combusting material, atmospheric conditions, angle of attack
and so on. For simplified behavior, we use the procedural model
from [Perry and Picard 1994]. The velocity of the flame front can
be described by,
s
where f is the relative orientation between the flame and the
unburned surface, h is the height of the flame, Tf is the temperature
of the flame, L is the thermal thickness of the burning material, and
l is the thermal conductivity of air. Values of l and L for
different materials are available from tables but it is straightforward
to adjust them to get a desired speed.
For complete animation control, we use a gray-scale image to
determine precisely when flames become active on a surface. The
image is mapped onto the surface of the burning object, and the
value of the image samples correspond directly with flame
activation (or spreading) times. Similar maps control both the
length of time each flame burns as well as its overall intensity,
diameter and height.
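The gray-scale control map reduces to a lookup: the image value at a surface location gives that flame's activation time. The sketch below assumes a map normalized to [0, 1] and a linear mapping onto an animator-chosen time range.

```python
import numpy as np

def activation_time(control_map, u, v, t_min=0.0, t_max=10.0):
    """Map the gray value at surface coordinates (u, v) in [0, 1]^2 to an activation
    time; the [t_min, t_max] range is an assumption."""
    h, w = control_map.shape
    i = min(int(v * (h - 1)), h - 1)
    j = min(int(u * (w - 1)), w - 1)
    return t_min + float(control_map[i, j]) * (t_max - t_min)

def active_flames(control_map, flame_uvs, t):
    """Indices of flames whose activation time has been reached at animation time t."""
    return [k for k, (u, v) in enumerate(flame_uvs)
            if activation_time(control_map, u, v) <= t]
```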
6.3 Smoke Generation
Two measures define the capability of a flame to produce smoke.
The first is the smoke point. This is the minimum laminar flame
height at which smoke first escapes from the flame tip. The second
is smoke yield. This is a measure of the volume of smoke
produced and it correlates closely with the radiation emitted from a
diffusion flame. We procedurally generate smoke as particles
above the tip of each flame. The smoke point above the tip is given
simply by h, and the density of particles (or density associated with
each individual particle) is the radiation intensity (Eq (5))
multiplied by an arbitrary scale factor. Once generated, the smoke
is introduced into a gas simulator [Foster and Metaxas 1997] with
an initial velocity of Vpn-1 (from the upper control point) and a
temperature set to maintain the upward buoyancy force specified
by Eq (2).
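Smoke seeding above the flame tip can be sketched as below. Because Eq (5) is not reproduced in this copy, the radiated intensity is passed in as a value, and the Poisson draw and positional jitter are assumptions.

```python
import numpy as np

def seed_smoke(tip_position, tip_velocity, flame_height, radiation, yield_scale=1.0,
               rng=None):
    """Emit smoke particles just above the flame tip; the particle count follows the
    radiated intensity multiplied by an arbitrary scale factor."""
    rng = rng or np.random.default_rng()
    count = rng.poisson(yield_scale * radiation)
    offset = np.array([0.0, 0.05 * flame_height, 0.0])       # smoke point above the tip
    particles = []
    for _ in range(count):
        jitter = 0.02 * flame_height * rng.standard_normal(3)
        particles.append({
            "position": np.asarray(tip_position, dtype=float) + offset + jitter,
            "velocity": np.asarray(tip_velocity, dtype=float),  # from the upper control point
        })
    return particles
```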
7. RESULTS
All the images shown in this paper were generated using the
system described. Figure 1 shows a sequence of four images from
an animation of a lit torch being swung through the air. The flames
are generated as if the torch is standing still, but with an initial base
velocity as described in Section 4.2. The control points are
influenced by thermal buoyancy and Kolmogorov noise. There are
5 flame structures, each rendered using 9000 sample particles. The
animation as a whole took 2.7 seconds per frame on a Pentium III
700MHz processor. That broke down as 0.2 seconds for the
dynamic simulation, 0.5 seconds for rendering and 2.0 seconds for
B-Spline fitting and particle sorting.
A more stylized example is the dragon breath animation shown in Figure 5. Here, flames are generated at the dragon's mouth with an
initial velocity and long lifespan. These flames split multiple times
using the technique outlined in Section 4.3. Again, five flames are
used initially, generating over eighty freely moving structures by
the end of the sequence. There are a total of 1.5 million sample
particles distributed according to the size of the flames relative to
the camera. Dynamic simulation took 3 seconds, rendering
seconds, and B-Spline fitting 1 minute per frame. The possibility of
individual flames merging (Section 4.5) was disabled for artistic
preference. A pre-calculated wind field based on random forces
gives some good internal rotation to the fireball but overall we feel
there's still a low level of turbulence missing from the animation.
Results could be improved by tracking the sample particles over
time through the Kolmogorov noise, although this would incur
more cost during rendering to prevent undersampling.
Figures 6 and 7 show a selection of images from the large format
version of the movie Shrek and interaction between flames and
stationary objects respectively. The animation system was used for
all the fire required by that production. Each component of the
system has between three and six parameters all related to visual
behavior (flicker rate and average lifespan for example). While the
system actually has more parameters than a corresponding direct
numerical simulation, they are fairly intuitive, and mutually
independent. In our experience, aside from learning the simulation
tools used for wind field creation, an animator can be productive
with the system within a week of first using it.
8. CONCLUSION
We have presented a system for modeling diffusion flames. The
main focus has been on efficiency, fast animation turnaround, and
complete control over visual appearance and behavior. With these
goals in mind, the system has been built to take advantage of
recent advances in direct simulation, as well as proven techniques
for generating and controlling wind fields. These methods are
combined with a novel approach for representing the structure of a
flame and a procedural animation methodology, to produce a
comprehensive animation tool for high volume throughput that can
dial between realistic and stylistic results.
9. REFERENCES
--R
Realistic and Controllable Fire Simulation.
Recent Advances in Volume
An Introduction to Fire Dynamics (2nd Edition)
Modeling the Motion of a Hot, Turbulent Gas
Realistic Animation of Liquids
Practical Animation of Liquids
Visual Simulation of Smoke
A Simple Model of Flames.
Modeling and Rendering of Various Natural Phenomena Consisting of Particles
A Method for Modeling Clouds based on Atmospheric Fluid Dynamics
Flow Noise
Synthesizing Flames and their Spreading
Particle Systems - A Technique for Modeling a Class of Fuzzy Objects
Modeling the Motion of Dense Smoke in the Wind Field
Stable Fluids
Depicting Fire and Other Gaseous Phenomena Using Diffusion Processes
Turbulent Wind Fields for Gaseous Phenomena
Animating Explosions
Modeling of Smoke Flow Taking Obstacles into Account
Figure 6: The fire system in use on a 3D animated feature film.
Figure 7: Interaction between flames and simple stationary objects.
--TR
A simple model of flames
Turbulent wind fields for gaseous phenomena
Depicting fire and other gaseous phenomena using diffusion processes
Realistic animation of liquids
Modeling the motion of a hot, turbulent gas
Stable fluids
Animating explosions
Particle Systems - a Technique for Modeling a Class of Fuzzy Objects
Visual simulation of smoke
Practical animation of liquids
Volume Rendering of Pool Fire Data
Modeling and Rendering of Various Natural Phenomena Consisting of Particles
Realistic and controllable fire simulation
Modeling of Smoke Flow Taking Obstacles into Account
A Method for Modeling Clouds Based on Atmospheric Fluid Dynamics
--CTR
Criss Martin , Ian Parberry, Real time dynamic wind calculation for a pressure driven wind system, Proceedings of the 2006 ACM SIGGRAPH symposium on Videogames, p.151-154, July 30-31, 2006, Boston, Massachusetts
Fabrice Neyret, Advected textures, Proceedings of the ACM SIGGRAPH/Eurographics symposium on Computer animation, July 26-27, 2003, San Diego, California
Joshua Schpok , Joseph Simons , David S. Ebert , Charles Hansen, A real-time cloud modeling, rendering, and animation system, Proceedings of the ACM SIGGRAPH/Eurographics symposium on Computer animation, July 26-27, 2003, San Diego, California
Alfred R. Fuller , Hari Krishnan , Karim Mahrous , Bernd Hamann , Kenneth I. Joy, Real-time procedural volumetric fire, Proceedings of the 2007 symposium on Interactive 3D graphics and games, April 30-May 02, 2007, Seattle, Washington
Raanan Fattal , Dani Lischinski, Target-driven smoke animation, ACM Transactions on Graphics (TOG), v.23 n.3, August 2004
Andrew Selle , Alex Mohr , Stephen Chenney, Cartoon rendering of smoke animations, Proceedings of the 3rd international symposium on Non-photorealistic animation and rendering, June 07-09, 2004, Annecy, France
N. Thürey , R. Keiser , M. Pauly , U. Rüde, Detail-preserving fluid control, Proceedings of the 2006 ACM SIGGRAPH/Eurographics symposium on Computer animation, September 02-04, 2006, Vienna, Austria
Flavien Bridault-Louchez , Michel Leblond , François Rousselle, Enhanced illumination of reconstructed dynamic environments using a real-time flame model, Proceedings of the 4th international conference on Computer graphics, virtual reality, visualisation and interaction in Africa, January 25-27, 2006, Cape Town, South Africa
Lin Shi , Yizhou Yu, Taming liquids for rapidly changing targets, Proceedings of the 2005 ACM SIGGRAPH/Eurographics symposium on Computer animation, July 29-31, 2005, Los Angeles, California
Insung Ihm , Byungkwon Kang , Deukhyun Cha, Animation of reactive gaseous fluids through chemical kinetics, Proceedings of the 2004 ACM SIGGRAPH/Eurographics symposium on Computer animation, August 27-29, 2004, Grenoble, France
Alexis Angelidis , Fabrice Neyret , Karan Singh , Derek Nowrouzezahrai, A controllable, fast and stable basis for vortex based smoke simulation, Proceedings of the 2006 ACM SIGGRAPH/Eurographics symposium on Computer animation, September 02-04, 2006, Vienna, Austria
Nick Rasmussen , Duc Quang Nguyen , Willi Geiger , Ronald Fedkiw, Smoke simulation for large scale phenomena, ACM Transactions on Graphics (TOG), v.22 n.3, July
Andrew Selle , Nick Rasmussen , Ronald Fedkiw, A vortex particle method for smoke, water and explosions, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Lin Shi , Yizhou Yu, Controllable smoke animation with guiding objects, ACM Transactions on Graphics (TOG), v.24 n.1, p.140-164, January 2005
Frank Losasso , Frédéric Gibou , Ron Fedkiw, Simulating water and smoke with an octree data structure, ACM Transactions on Graphics (TOG), v.23 n.3, August 2004
Frank Losasso , Tamar Shinar , Andrew Selle , Ronald Fedkiw, Multiple interacting liquids, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
Eran Guendelman , Andrew Selle , Frank Losasso , Ronald Fedkiw, Coupling water and smoke to thin deformable and rigid shells, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005 | physically-based modeling;flames;animation systems;wind fields;convection;kolmogorov spectrum |
567177 | Tracking Mobile Units for Dependable Message Delivery. | As computing components get smaller and people become accustomed to having computational power at their disposal at any time, mobile computing is developing as an important research area. One of the fundamental problems in mobility is maintaining connectivity through message passing as the user moves through the network. An approach to this is to have a single home node constantly track the current location of the mobile unit and forward messages to this location. One problem with this approach is that, during the update to the home agent after movement, messages are often dropped, especially in the case of frequent movement. In this paper, we present a new algorithm which uses a home agent, but maintains information regarding a subnet within which the mobile unit must be present. We also present a reliable message delivery algorithm which is superimposed on the region maintenance algorithm. Our strategy is based on ideas from diffusing computations as first proposed by Dijkstra and Scholten. Finally, we present a second algorithm which limits the size of the subnet by keeping only a path from the home node to the mobile unit. | Introduction
Mobile computing reflects a prevailing societal and technological trend towards ubiquitous access to computational
and communication resources. Wireless technology and the decreasing size of computer components allow users
to travel within the office building, from office to home, and around the country with the computer at their side.
Both location-transparent and context-dependent services are desired. Decoupled computing is becoming the norm.
Disconnection is no longer a network fault but a common event intentionally caused by the user in order to conserve
power or a consequence of movement. Tethered connectivity is making way to opportunistic transient connections
via radio or infrared transmitters.
The focus of this paper is message delivery to mobile units. In a fixed network, message delivery relies on
established routes through the network. Although faults can render parts of the network inoperational or even
inaccessible, it is assumed that these faults are infrequent and the system is able to stabilize despite the changes.
In mobility, the changing connectivity of the mobile components is not a fault, but rather a feature. As a mobile
unit moves through the network, its accessibility point changes. In the base station model of mobility, similar to the
cellular telephone system, each mobile unit is connected to the network at a single point, or base station. This base
station can either be wired or wireless, but connectivity changes with greater frequency than typical of network
faults. Our goal is to be able to get a message to a mobile unit as it is moving among cells. In this environment
mobile units can have multiple points of connection in a short period of time, mobile units can disconnect completely
from the network, movement is not necessarily predictable, and tracking the current location of the mobile unit at
multiple places in the network is too expensive.
One proposed solution to delivering a message to a mobile unit is Mobile IP [6]. Every mobile unit is assigned a
single home agent which is responsible for forwarding messages (packets) to the mobile user. Each time the mobile
unit moves, it must provide the home agent with a new location. This solution is simple in that it does not require
any infrastructure changes in the network. Encapsulation and endpoint specific software are used to accomplish
location transparency. More distributed approaches update the routers themselves with forwarding information.
By keeping information closer to the mobile unit [5] the potentially long path for a message originating near the
current location of the mobile unit can be short circuited by not sending it all the way to the home agent. However,
neither solution provides reliable delivery. It is possible for a packet to be sent from the home agent toward the
mobile agent, and for the mobile agent to move before the packet is delivered. In Mobile IP a higher layer in the
network protocol stack is responsible for reliability and retransmission when necessary. Eventually, the packets are
delivered with reasonable probability.
In cellular telephones, a system similar to Mobile IP is employed when users roam outside their home region [9].
When the telephone is activated, the user registers with the home, indicating essentially a new area code for
redirecting calls. The registration process occurs infrequently because the most common case is for the user to
remain within a single region. Within that region, another approach must be taken to locate the mobile each time a
call arrives, and handovers are used to maintain connectivity when a user crosses a cell boundary during a session.
In both cases, if the mobile moves rapidly among cells, the information at the home agent will reflect an old
location, and messages sent to that location will be dropped. Clearly, a forwarding mechanism can be added to the
foreign locations, having them send these otherwise lost messages to the mobile unit's next location. However, with
rapid movement, the messages can continue chasing the mobile without delivery, following the trail of forwarding
pointers. At the same time, the amount of forwarding information can increase dramatically. Although such rapid
movement may seem unlikely, one of the trends in mobility is to reduce the size of the cell (e.g., nanocells) to
increase the frequency reuse. As cell sizes decrease, the time that it takes to traverse a cell similarly decreases.
Another possible application that is characterized by rapid mobile movement is mobile code where it is not
a physical component that moves, but rather a program that traverses the fixed network doing computation at
various network nodes. Rather than connecting to a foreign agent through a wireless mechanism, these mobile code
agents actually execute at a foreign host. They have the ability to move rapidly from one host to another and may
not register each new location with a home. Therefore, delivering a message to a mobile code agent becomes an
interesting application area in which rapid movement is not only feasible, but the common case.
These technological trends pose the following interesting question: Can we devise efficient protocols that guarantee
delivery to a node (or set of nodes) that move at arbitrary speeds across a fixed network? Clearly, trivial
solutions exist: broadcast the message to all nodes and store it at all nodes until the mobile arrives.
By contrast, more efficient solutions should limit the broadcast range and/or the amount of storage required.
In this paper, we start with the idea of employing diffusing computations proposed by Dijkstra and Scholten [3]
and adapt it to message delivery. By equating the root node of the computation to the concept of a home agent
from Mobile IP, and by replacing the messages of the computation with mobile units, the result is an algorithm
which, instead of tracking a computation as messages are passed through a system of processing nodes, tracks the
movement of a mobile unit as it visits various base stations in the system. Essentially, the graph of the Dijkstra-
Scholten algorithm defines a region within which the mobile unit is always located. Although this is not directly
a message delivery algorithm, by propagating a message throughout this region, we can achieve message delivery.
The algorithm can be readily adapted for this purpose and can be optimized for message delivery, e.g., our solution
prunes unnecessary portions of the graph reducing the area to which a message must be propagated.
Our approach to algorithm development involves the application of a new paradigm which adapts algorithms
from traditional distributed computing to the mobile environment. This paradigm treats mobile units as messages
that travel across the network, and examines adaptations of standard distributed algorithms to mobility; in this
case, Dijkstra-Scholten as a tracking algorithm. One of the assumptions often made in distributed computing is
FIFO channel behavior. The algorithms we developed rely on the ability to ensure this property in the mobile
environment, in particular when mobile units and messages move along the same channel. This may seem
unrealistic considering that mobile units move slowly in comparison to the transmission of messages along wires.
The remainder of the paper is organized as follows: Section 2 presents our model of mobility, offers a precise
formulation of the problem and presents our algorithm for ensuring the FIFO behavior of mobile units and messages.
Section 3 explores the details of a message delivery algorithm derived directly from the Dijkstra-Scholten model
for diffusing computations. Section 4 presents another algorithm inspired by the first, but reduces the message
delivery overhead. For this algorithm, we provide a formal verification of its properties. Finally, Section 6 contains
analysis and conclusions.
2 Problem Definition: Message Delivery
The problem we are interested in is the delivery of messages to rapidly moving mobile units. An acceptable solution
should guarantee at-least-once delivery of the message, minimize storage requirements across the network, and
leave no trace of the message in the system within a bounded time after delivery. Because mobile units do not
communicate directly with one another, the network must provide the support to deliver messages.
The cellular telephone design provides the foundation for the model of mobility we adopt in this paper. Figure 1a
shows a typical cellular telephone model with a single mobile support center (MSC) in each cell. The MSC
is responsible for communication with the mobile units within its region and serves as a manager for handover
requests when a mobile moves between MSCs. Figure 1b shows how the cellular telephone model is transformed
into a graph of nodes and channels where the nodes represent the individual cells and the channels represent the
ability of a mobile unit to move from one cell to another. We assume that the resulting network is connected, in
other words, a path exists between every pair of nodes.
(a) Cellular system (b) Graph model
Figure 1: (a) Cellular system with one MSC per cell. All MSCs are assumed to be connected by a wired network. (b) Abstract model of a cellular system, as a graph of nodes and channels. Solid lines form a spanning tree. A mobile unit moving across the border between two cells may miss a simple broadcast along a spanning tree if, for instance, the handover occurs between the broadcast at MSC 2 and the broadcast at MSC 1.
We also assume that a mobile unit moving between two MSCs can be modeled as being on a channel identical
to messages in transit. In this manner, we no longer differentiate between physical movement and wired commu-
nication. It is reasonable to ask what happens when messages and mobile units are found on the same channel.
We make the assumption that all channels preserve message ordering, i.e., they are FIFO channels. This appears
to require that mobile units travel through space and reconnect to the next support center as fast or faster than
messages can be transmitted across the network. Although flush primitives [2] can be used to make traditional
non-FIFO channels FIFO, the separate channels used for mobile units and messages make such flush primitives
inapplicable to mobility. The FIFO behavior, however, can be realized by integrating the handover protocol with
message passing.
For instance, the AMPS standard for cellular communication [9] describes a handover protocol which defines
a sequence of wired messages between source and destination MSCs, as well as wireless communications with the
mobile unit.
Figure 2: Dijkstra-Scholten for detecting termination of a diffusing computation. Shaded nodes are idle, white nodes are active.
By introducing a single additional wired message to this protocol, we can coordinate the wireless
transfer of the mobile unit between the MSCs with the wired transfer of any messages, including data messages.
Minor adjustments must be made to both the sender and receiver to achieve this result, e.g., buffering messages
until the mobile unit announces its arrival at the destination. No changes are forced on the behavior of the mobile
unit. The details of this approach are available in [4]. Here, we emphasize that achieving FIFO behavior between
mobile units and messages requires only trivial changes to existing handover protocols and therefore can be assumed
as a network property in the remainder of this paper.
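The essential idea (buffer any data messages forwarded during the handover and release them only once the mobile unit announces its arrival) can be sketched from the destination side of a handover. The class and method names below are assumptions of this sketch; the actual protocol change is described in [4].

```python
from collections import deque

class DestinationMSC:
    """Destination side of a handover. Data messages that arrive over the wire before
    the wireless transfer completes are buffered, so the mobile unit and the messages
    behave as if they shared a single FIFO channel."""

    def __init__(self):
        self.buffer = deque()
        self.mobile_present = False

    def on_data_message(self, msg):
        if self.mobile_present:
            self.deliver(msg)
        else:
            self.buffer.append(msg)            # hold until the mobile announces arrival

    def on_mobile_arrives(self):
        self.mobile_present = True
        while self.buffer:                     # release in arrival order: FIFO preserved
            self.deliver(self.buffer.popleft())

    def deliver(self, msg):
        print("delivered:", msg)
```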
3 Applying diffusing computations to mobile unit tracking
Diffusing computations have the property that the computation initiates at a single root node while all other nodes
are idle. The computation spreads as messages are sent from active nodes. Dijkstra and Scholten [3] describe an
algorithm for detecting termination of such computations. The basic idea is that of maintaining a spanning tree
that includes all active nodes, as shown in Figure 2. A message sent from an active node to an idle node (message
(1) in Figure 2) adds the latter to the tree as a child of the former. Messages sent among tree nodes have no effect
on the structure but may activate idle nodes still in the tree. An idle leaf node can leave the tree at any time by
notifying its parent (signal (2) in Figure 2). Termination is detected when an idle root is all that remains in the
tree.
We adapt this tree maintenance algorithm to the mobile environment. A node is seen as active when the mobile
unit is present. The resulting algorithm maintains a tree identical to Figure 2 with the mobile unit at an active
node or on a channel leaving a tree node. This enables us to guarantee the continued existence of a path from the
root to the mobile unit along tree edges. We use this property to develop a guaranteed message delivery algorithm.
The latter is superimposed on top of the graph maintenance algorithm. To maintain the distinction between the
data messages being delivered and any control messages used to effect the delivery, we will refer to the data message
as an announcement. In this section, we first describe the details of the graph maintenance algorithm, then present
the guaranteed data message, i.e., announcement, delivery algorithm. A short discussion and possible extensions follow.
3.1 Mobile tracking
Although the Dijkstra-Scholten algorithm can be easily described and understood, the distributed message passing
nature of the algorithm leads to subtle complexities. The details of the algorithm can be found in Figure 3. Each
action is one atomic step and we assume weak fairness in action selection. For the purposes of discussion, we
assume that the mobile unit is initially located at the root and moves nondeterministically throughout the graph (Figure 3, operation SendMobileA(B)).
State
MobileAt A      boolean, true if mobile unit at A, initially false except at root
Parent(A)       the parent of node A, initially NULL
Children(A)     multiset of children of node A, initially ∅
Actions
MobileArrivesA(B)    ;arrival at A from B
  Effect:
    MobileAt A := true
    if Parent(A) ≠ NULL then
      send signal(A) to B
    else
      Parent(A) := B
SignalArrivesA(B)    ;arrival at A from B
  Effect:
    remove one instance of B from Children(A)
SendMobileA(B)    ;mobile moves from A to B
  Preconditions:
    MobileAt A and channel (A,B) exists
  Effect:
    MobileAt A := false
    add B to Children(A)
    send mobile to B
CleanUpA    ;removes node A from tree
  Preconditions:
    MobileAt A = false and Children(A) = ∅ and Parent(A) = B
  Effect:
    send signal(A) to B
    Parent(A) := NULL
Figure 3: Diffusing computations adapted for tracking a mobile unit
3, operation SendMobileA (B)).
In the introduction of this section, we described an algorithm which maintains a tree structure with edges from
parent to child. By the distributed nature of the environment, the sender of a message cannot know whether or
not the destination node is already in the tree, and cannot know whether or not to add the destination as a child.
Therefore, the tree structure is maintained with edges from child to parent (recorded in Parent(A) in Figure 3).
For detecting termination and removing nodes from the tree, a node must be able to detect when it is an idle
leaf node. This is done by tracking each message sent by each node. The Dijkstra-Scholten algorithm requires that
every message be acknowledged by the destination with a signal. If the message arrives and the destination node is
already part of the tree, the spanning tree topology does not change and the signal is sent immediately. Otherwise
the signal is delayed and sent when the destination node removes itself from the tree. The source node tracks all
messages by destination in a multiset or bag. Nodes in this bag indicate children nodes of the spanning tree, nodes
to which the message has not arrived, or nodes from which the signal has been sent but not yet received. When the
bag is empty, the node has no children and can remove itself from the tree by signaling its parent. For detecting
termination of a diffusing computation, it is only necessary to keep a count of the number of successors. Because
we intend to use this information during announcement delivery, we must maintain the bag of children.
Similar processing must occur in the mobile setting. Each movement of the mobile unit is tracked in a multiset
(e.g., Children(A)). An element is removed from this multiset when the node receives a signal (Figure 3, operation
SignalArrives). A signal is sent immediately when the mobile unit arrives and the node is already part of the
tree (
Figure
3, MobileArrives) and is delayed otherwise. A delayed signal is released when the node becomes a
leaf to be removed from the tree (Figure 3, CleanUp).
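A compact executable reading of the Figure 3 actions is given below, with the transport layer abstracted into a network object; its send_signal/send_mobile methods and the explicit root handling are assumptions of this sketch rather than part of the paper's figure.

```python
from collections import Counter

class TrackingNode:
    """One MSC in the tree-maintenance algorithm of Figure 3."""

    def __init__(self, name, network, is_root=False):
        self.name, self.network, self.is_root = name, network, is_root
        self.mobile_here = is_root          # the mobile unit starts at the root
        self.parent = None
        self.children = Counter()           # multiset of outstanding moves

    def mobile_arrives(self, sender):
        self.mobile_here = True
        if self.parent is not None or self.is_root:
            self.network.send_signal(self.name, sender)   # already in the tree
        else:
            self.parent = sender                          # join as the sender's child

    def signal_arrives(self, child):
        self.children[child] -= 1                         # one outstanding move resolved
        if self.children[child] <= 0:
            del self.children[child]

    def send_mobile(self, dest):
        assert self.mobile_here, "mobile must be present to move"
        self.mobile_here = False
        self.children[dest] += 1
        self.network.send_mobile(self.name, dest)

    def clean_up(self):
        """An idle leaf removes itself from the tree by signaling its parent."""
        if not self.mobile_here and not self.children and self.parent is not None:
            self.network.send_signal(self.name, self.parent)
            self.parent = None
```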
3.2 Superimposing announcement delivery
Having described the graph maintenance algorithm, we now present an algorithm to guarantee at-least-once delivery
of an announcement. The details of this are shown in Figure 4 as actions superimposed on the graph maintenance
actions of Figure 3. Actions with the same label execute in parallel while new actions are fairly interleaved with
the existing actions.
State
⟨same as before⟩
AnnouncementAt A    boolean, true if announcement stored at A, initially false everywhere
started             boolean, true if delivery has started, initially false
Actions
MobileArrivesA(B)    ;arrival at A from B
  Effect:
    ⟨same as before⟩
    if AnnouncementAt A then
      deliver announcement
      send ack to Parent(A) and Children(A)
SignalArrivesA(B)    ;arrival at A from B
  ⟨same as before⟩
SendMobileA(B)    ;mobile moves from A to B
  ⟨same as before⟩
AnnouncementArrivesA(B)    ;arrival at A from B
  Effect:
    if B = Parent(A) then
      if MobileAt A then
        deliver announcement
        send ack to B
      else
        AnnouncementAt A := true    ;save announcement
        send announcement to Children(A)
CleanUpA    ;removes node A from tree
  Preconditions:
    ⟨same as before⟩
  Effect:
    ⟨same as before⟩
    AnnouncementAt A := false    ;delete announcement
AckArrivesA(B)    ;arrival at A from B
  Effect:
    if AnnouncementAt A then
      AnnouncementAt A := false    ;delete announcement
      send acks to Children(A) except B
AnnouncementStart    ;root sends announcement
  Preconditions:
    started = false
  Effect:
    started := true
    if MobileAt root = true then
      deliver announcement
    else
      AnnouncementAt root := true
      send announcement to Children(root)
Figure 4: Announcement delivery on top of diffusing computations.
For announcement delivery we assume that the announcement originates at the root and we rely on the property
that there is always a path from the root to the mobile unit alongedges in the tree. We note that the reverse edges
of the tree (from parent to child) are a subset of the edges from parent to child maintained as successors of
the parent (e.g., Children(A)). It is only necessary to send the announcement along edges in the spanning tree.
But, because this tree is maintained with pointers from child to parent, the announcement must be propagated
along the successor edges, from parent to child. When an announcement arrives from a source other than the
parent, the announcement is rejected (Figure 4, AnnouncementArrives). In this manner, the announcement is
only processed along the tree paths. Effectively, a frontier of announcements sweeps through the spanning tree.
When the announcement and the mobile unit are co-located at a node, the announcement is delivered (Figure 4,
AnnouncementArrives, MobileArrives).
In a stable environment where the mobile unit does not move, this announcement passing is sufficient to
guarantee delivery. However if the mobile unit moves from a node in the tree below the frontier to a node above
the frontier, delivery may fail. Therefore, each node stores a copy of the announcement until delivery is complete
or the node is removed from the tree (Figure 4, AnnouncementArrives). Storing the announcement in this
manner ensures that the mobile unit cannot move to a region above the frontier without receiving a copy of the
announcement. Because there is always a path from the root to the mobile unit, there must be an announcement
on the frontier traversing this path and the announcement will eventually reach the mobile unit thus leading to
delivery (Figure 4, MobileArrives). This path may change as the mobile unit moves from one region of the tree
to another, however, the existence of a path is guaranteed by the graph maintenance algorithm presented in the
previous section and the existence of the announcement on this path is guaranteed by the delivery algorithm of
this section.
In the worst case, it is possible for the mobile unit to continuously travel with the announcement on the channel
exactly one step behind. Eventually the mobile unit must either stop moving when the maximum length path is
reached (equal to the number of nodes in the system), or the mobile unit will return to a previously visited tree
node. When the mobile unit returns to a tree node, which, by the assumptions, must be above the frontier of
announcements, it will receive the announcement stored there.
Storing the announcement requires an additional cleanup phase to remove all copies. When the mobile unit
receives the announcement, an acknowledgement is generated and sent along the successor and parent edges (Figure 4, AnnouncementArrives, MobileArrives). As before, the acknowledgment is rejected along paths which
are not part of the tree (Figure 4, AckArrives). The connectivity of the tree ensures that the acknowledgement
will propagate to all nodes holding copies of the announcement. Leaf nodes being removed from the tree must also
delete their copy of the announcement (Figure 4, CleanUp).
This algorithm ensures at-least-once delivery of the announcement. Because the announcement copies remain
in the graph until an acknowledgment is received, it is possible for the mobile unit to move from a region where
the acknowledgments have propagated to a region where they have not. When this occurs, the mobile unit will
receive an additional copy of the announcement, which it can reject based on sequence numbers. It is important
to note that each time delivery occurs, a new set of acknowledgments will be generated. It can be shown that
these acknowledgments do not inhibit the clean up process, but rather lead to a faster clean up. Each set of
acknowledgments spreads independently through the tree removing announcement copies, but terminates when a
region without announcement copies is reached.
3.3 Discussion
By superimposing the delivery actions on top of the graph maintenance, the result is an algorithm which guarantees
at least once delivery of an announcement while actively maintaining a graph of the system nodes where the mobile
unit has recently traveled.
It is not necessary for the spanning tree to be pruned as soon as an idle leaf node exists. Instead, this processing
can be delayed until a period of low bandwidth utilization. An application may benefit by allowing the construction
of a wide spanning tree within which the mobile unit travels. Tradeoffs include shorter paths from the root to the
mobile unit versus an increase in the number of nodes involved in each announcement delivery.
By constructing the graph based on the movement of the mobile unit, the path from the root to the mobile
unit may not be optimal. Therefore, a possible extension is to run an optimization protocol to reduce the length of
this path. Such an optimization must take into consideration the continued movement of the mobile unit as well as
any announcement deliveries in progress. The tradeoff with this approach is between the benefit of a shorter route
from the root to the mobile unit and the additional bandwidth and complexity required to run the optimization.
Although in our algorithm only one mobile unit is present, the graph maintenance algorithm requires no extensions
to track a group of mobile units. The resulting spanning tree can be used for unicast announcement delivery
without any modifications and for multicast announcement delivery by changing only the announcement clean up
mechanism. As presented, the delivery of the announcement triggers the propagation of acknowledgments. In the
multicast case, it is possible for the announcement not to reach all mobile units before the cleanup starts. One
practical solution is to eliminate the cleanup rules entirely, and assign a timeout to the announcement. This timeout
should be proportional to the time it takes for the announcement to traverse the diameter of the network.
4 Backbone
We now introduce a new tracking and delivery algorithm inspired by the previous investigation with diffusing
computations. Our goal is to reduce the number of nodes to which the announcement propagates, and to accomplish
this we note that only the path between the root and mobile unit is necessary for delivery. In the previous approach,
although the parts of the graph not on the path from the root to the mobile unit can be eliminated with remove
messages, announcements still propagate unnecessarily down these subtrees before the node deletion occurs. To
avoid this, the algorithm presented in this section maintains a graph with only one path leading away from the root
and terminating at the mobile unit. This path is referred to as the backbone. The nodes in the remainder of the
graph form structures referred to as tails and are actively removed from the graph, rather than relying on idle leaf
nodes to remove themselves. Maintenance of this new structure requires additional information to be carried by
the mobile unit regarding the path from the root, as well as the addition of a delete message to remove tail nodes.
The announcement delivery mechanism remains essentially the same as before, but the simpler graph reduces the
number of announcement copies stored during delivery.
To understand how the backbone is kept independent of the tails, we examine how the graph changes as the
mobile unit moves. It is important to note that by the definition of the backbone, the mobile unit is always either
at the last node of the backbone, or on a channel leading away from it. In Figure 5a, the backbone is composed
of nodes A, B, and C and the dashed arrow shows the movement of the mobile unit from node C to D where D is
not part of the graph. This is the most straightforward case, in which the backbone is extended to include D by
adding both the child pointer from C to D (not shown) and the parent pointer in the reverse direction (solid arrow
in Figure 5b).
In Figure 5b, the mobile moves to a node B, a node already in the backbone and with a non-null parent pointer.
It is clear from the figure that the backbone should be shortened to only include A and B without changing any
parent pointers, and that C and D should be deleted. To explicitly remove the tail created by C and D, a delete
message is sent to the child of B. When C receives the delete from its parent, it will nullify its parent pointer,
propagate the delete to its child, and nullify its child pointer.
If at this point the mobile moves from B onto D before the arrival of the delete (See Figure 5c), D still has a
(a) Backbone extended (b) Backbone shortened (c) Tail node added (d) After movement completes
Figure 5: The parent pointers of the backbone change as the mobile moves to (a) a node not in the backbone, (b) a node higher
in the backbone, and (c) a tail node. (d) shows the state after all channels have been cleared.
(a) Sample diffusing computation (b) Modified graph showing new structure.
Figure 6: By adapting diffusing computations to mobility, we construct a graph reflecting the movement of the mobile. In order
to deliver an announcement, the only part of the graph we need is the path from the root to the mobile, the backbone. Therefore
we adapt the Dijkstra-Scholten algorithm to maintain only this graph segment and delete all the others.
parent pointer (C) and we cannot distinguish this case from the previous case (where B also had a non-null parent
pointer). In the previous case the parent of the node the mobile unit arrived at did not change, but in this case, we
wish to have D's parent set to B (the node the mobile unit is arriving from) so that the backbone is maintained.
To distinguish these two cases, we require the mobile unit to carry a sequence of the identities of the nodes in the
backbone. In the first case where the mobile unit arrives at B, B is in the list of backbone nodes maintained by
the mobile unit, therefore B keeps its parent pointer unchanged, but prunes the backbone list to remove C and D.
However, when the mobile arrives at D, only A and B are in the backbone list, therefore the parent pointer of D
is changed to point to B. But, what happens to the delete message moving from C to D? Because C is no longer
D's parent when the delete arrives, it is simply dropped and the backbone is not affected.
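The case analysis just described can be condensed into a short sketch. The Python fragment below is our own rendering of the bookkeeping, under the assumption that channels are modeled by a simple outbox list; BackboneNode and on_mobile_arrives are illustrative names, not the paper's.

from dataclasses import dataclass, field

@dataclass
class BackboneNode:
    name: str
    parent: object = None
    child: object = None
    outbox: list = field(default_factory=list)   # stands in for outgoing channels

def on_mobile_arrives(node, came_from, mlist):
    # `mlist` is the list of backbone node identities carried by the mobile unit.
    if node.name in mlist:
        # Doubling back onto the backbone (Figure 5b): keep the old parent and
        # truncate the carried list just after this node.
        del mlist[mlist.index(node.name) + 1:]
    else:
        # A node outside the graph (Figure 5a) or a tail node (Figure 5c):
        # repoint its parent at the node the mobile unit arrived from.
        node.parent = came_from
        mlist.append(node.name)
    # In either case the old subtree below this node becomes a tail, and an
    # explicit delete message is sent to remove it.
    if node.child is not None:
        node.outbox.append(("delete", node.child.name))
        node.child = None

# The Figure 5b scenario: backbone A-B-C-D, mobile unit doubles back onto B.
A, B, C, D = (BackboneNode(x) for x in "ABCD")
B.parent, C.parent, D.parent = A, B, C
A.child, B.child, C.child = B, C, D
mlist = ["A", "B", "C", "D"]
on_mobile_arrives(B, came_from=D, mlist=mlist)
print(mlist, B.outbox)     # ['A', 'B'] [('delete', 'C')]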
The delivery algorithm is then superimposed on top of the generated graph. It is not sufficient to send the
announcement down the spanning tree created by the backbone without keeping copies at all nodes along the path
because the mobile is free to move from a region below the announcement to one above it (as in Figure 5b, assuming
the announcement had propagated to C but not to D). Therefore, to guarantee delivery, as the announcement
propagates down the backbone, a copy is stored at each node until delivery is complete. We refer to the portion
of the backbone with an announcement as the covered backbone, see Figure 6b. Delivery can occur by the mobile
unit moving to a location in the covered backbone, or the announcement catching up with the mobile unit at a
node. In either case, an acknowledgment is generated and sent via the parent pointers toward the root. If the
announcement is delivered by the mobile unit moving on to the covered backbone, a delete is generated toward the
child and an acknowledgment is generated toward the parent. Therefore any extra copies of the announcement on
the newly created tail will be deleted with the nodes.
4.1 Details
The details for the tracking algorithm are shown in Figure 7. As before, we model arbitrary movement of the
mobile by an action, called SendMobileA (B), that allows a mobile at a node to move non-deterministically onto
any outgoing channel.
MobileArrives shows the bulk of the processing and relates closely to the actions described in Figure 5. When
the mobile unit arrives at a node, the changes to the backbone must be determined. If the mobile is doubling back
onto the backbone, the parent pointers remain unchanged and the path carried by the mobile is shortened to reflect
the new backbone (as in Figure 5b). If the node is not in the backbone (Figure 5a) or is part of a tail (Figure 5c),
then the parent pointers must change to add this node to the backbone, and the node must be appended to the
backbone list carried by the mobile. In any case, the children of this node (if any) are no longer necessary for
announcement delivery, therefore a delete message is sent to the child, and the child pointer is cleared.
In addition to maintaining the graph, we must also address announcement delivery. As in the previous al-
gorithms, when the mobile unit arrives at a node where the announcement is stored, delivery occurs, yielding
at-least-once semantics for delivery. In this algorithm, we introduce a sequence number to ensure exactly-once delivery
semantics. Therefore, when the mobile arrives at a node with the announcement, delivery is attempted if the
sequence number of the last announcement received by the mobile is less than the sequence number of the waiting
announcement. In all cases (whether or not delivery was just accomplished), at this point the announcement has
been delivered to the mobile unit and an acknowledgment is generated along the path toward the root to clean up
the announcement copies. No acknowledgment needs to be generated toward the tails because any announcement
copies on tails will be removed at the same time the tail node is removed from the graph.
When the propagating announcement arrives at a node, AnnouncementArrives, it is either arriving from a
parent or some other node. If the announcement arrives from a node other than the parent, it should be discarded
because to guarantee delivery the announcement need only propagate along the backbone. However, when an
announcement arrives from the parent it must be processed. If the mobile is present, delivery is attempted with
the same restrictions as before with respect to the sequence number and the acknowledgment is started toward the
root. If the mobile is not present, the node stores a copy of the announcement in case the mobile arrives at a later
time. Additionally, the announcement is propagated to the child link.
AckArrives enables the cleanup of the announcements by propagating acks along the backbone toward the
root via the parent pointers. Acks can also be present on tail links, but these are essentially redundant to the delete
messages and do not affect the correctness of the algorithm.
The purpose of the delete messages is to remove the tail segments of the graph. Recall that a tail is created
by a backbone node sending a delete to its child. Therefore, a delete should only arrive from a parent node. If we
were to accept a delete from a non-parent node, as in the delete from C to D in Figure 5c, we could destroy the
backbone. However if the delete arrives from the parent, we are assured that the node no longer resides on the
backbone and should be deleted. Therefore, the arrival of a delete from a parent triggers the deletion of the stored
announcement, the propagation of the delete to the child, and the clearing of both child and parent pointers.
State
AnnouncementAtA boolean, true if announcement stored at A, initially false everywhere
MobileAt A boolean, true if mobile unit at A, initially false except at root
Parent(A) the parent of node A, initially null
Child(A) the child of node A, initially null
started boolean, true if delivery has started, initially false
MList list of nodes carried by the mobile, initially contains only the root
Actions
AnnouncementArrivesA (B) ;arrival at A from B
Effect:
if Parent(A)=B then
if MobileAt A then
deliver announcement
send ack to B
else
AnnouncementAtA :=true ;save ann.
send announcement to Child(A)
AckArrivesA (B) ;arrival at A from B
Effect:
if Child(A)=B ^ AnnouncementAtA then
AnnouncementAtA :=false ;delete ann.
send ack to Parent(A)
DeleteArrivesA (B) ;arrival at A from B
Effect:
if Parent(A)=B then
if AnnouncementAtA then
AnnouncementAtA :=false ;delete ann.
send delete to Child(A)
Parent(A):=null
Child(A):=null
MobileArrivesA (B) ;arrival at A from B
Effect:
MobileAt A :=true
if A 2 MList then ;doubling back onto the backbone
A keeps old parent
MList truncated after A to the end
else ;new node or tail node joins the backbone
Parent(A):=B
append A to MList
send delete to Child(A) ;old children form a tail
Child(A):=null
if AnnouncementAtA then
deliver announcement
send ack to Parent(A)
AnnouncementAtA :=false ;delete ann.
window protocols. The announcements and all associated acknowledgments would have to be marked by sequence
numbers so that they do not interfere, but the delivery mechanism uses the same graph. Therefore the rules
governing the expansion and shrinking of the graph are not affected but the proofs of garbage collection and
acknowledgment delivery are more delicate.
4.3 Correctness
Because this algorithm deviates significantly from the original Dijkstra-Scholten model of diffusing computations,
the essential properties necessary for announcement delivery are proven in this section: 1) announcement delivery is
guaranteed, 2) after delivery, announcement copies are eventually removed from the system, and 3) any tail node is
eventually cleared. Although the third property is not essential to announcement delivery, it is necessary to show
announcement cleanup.
Before approaching the proof, we formalize several useful definitions in Figure 8. The most important of these
are the backbone, covered backbone, and tails. Intuitively, the backbone is the sequence of nodes starting at the
root and terminating at either the node holding the mobile unit or the node the mobile unit just left if it is on a
channel. The covered backbone is the sequence of backbone nodes with announcement copies. Tails are any path
segments not on the backbone.
4.3.1 Announcement Delivery Guarantee
Our overall goal is to show at-least-once delivery of an announcement to a mobile unit. Therefore, the first property
that we prove is (A) that from the state where no announcement exists in the system (predelivery), eventually a
state is reached where the mobile unit has a copy of the announcement:
predelivery 7! postdelivery (A)
Although it is possible to make this transition in a single step (by executing AnnouncementStart while the
mobile unit is at the root) it is more common for the system to move into an intermediate state where delivery is
in progress (A.1). We must show that from this state (delivery), either the announcement will be delivered, or, in
the worst case, the covered backbone will increase in length to include every node of the system (A.2). Once this
occurs, delivery is guaranteed to take place when the mobile unit arrives at any node (A.3).
predelivery ensures delivery - postdelivery (A.1)
delivery 7! postdelivery - (delivery ^ h9ff :: coveredBone(ff) ^ ff includes every node of the system i) (A.2)
delivery ^ h9ff :: coveredBone(ff) ^ ff includes every node of the system i 7! postdelivery (A.3)
Progress properties are expressed using the UNITY relations 7! (read leads-to) and ensures. Predicate relation p 7! q expresses
progress by requiring that if, at any point during execution, the predicate p is satisfied, then there is some later state where q is satisfied.
Similarly, p ensures q states that if the program is in a state satisfying p, it remains in that state unless q is established, and, in
addition, it does not remain forever in a state satisfying p but not q.
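For reference, the standard UNITY definitions behind these two relations can be written as follows (a paraphrase of the usual UNITY formulation, not a formula taken from this paper; s ranges over the statements of the program):

\[ p \;\mathit{unless}\; q \;\equiv\; \langle \forall s ::\; \{p \land \lnot q\}\; s\; \{p \lor q\} \rangle \]
\[ p \;\mathit{ensures}\; q \;\equiv\; (p \;\mathit{unless}\; q) \;\land\; \langle \exists s ::\; \{p \land \lnot q\}\; s\; \{q\} \rangle \]

The leads-to relation 7! is then the smallest relation that contains ensures and is closed under transitivity and under disjunction over arbitrary sets of predicates.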
D.1 reachable(m; n): Node n is reachable from node m if there is a path from m to n where every channel on the
path has the parent and child pointers of the channel endpoints pointing toward one another.
D.2 path(p; n): Path p includes node n and is an acyclic sequence of reachable nodes.
D.3 maxpath(p; n; R): Path p is the maximal length path including node n that satisfies the predicate R. Extending
p in either direction through concatenation (ffi) either violates the path relationship or the condition R.
D.4 backbone(p): Path p is the backbone, i.e. the path of maximal length which includes the root and does not
include the mobile unit on any channel. The constant mob is used to identify the mobile unit.
D.5 coveredBone(p): Path p is the covered backbone, i.e. the maximal length path including the root (the backbone)
where all nodes are storing announcement copies.
D.6 tail(p; n): The tail is the maximal length path of any node n where no node on the path is part of the backbone.
Figure 8: Useful definitions.
We approach each of these properties in turn, first showing that from predelivery, either delivery or postde-
livery must follow (A.1). Until the action AnnouncementStart fires, the system remains in predelivery and
AnnouncementStart remains enabled. Trivially, when it fires, either the announcement will be delivered (if the
mobile unit is present at the root) or the announcement will begin to propagate through the system.
Once the delivery state is reached, we must show that the covered backbone will increase in length to include
all nodes or the announcement will be delivered (A.2). To do this, we strengthen the progress property A.2 to state
that the covered backbone cannot decrease in length.
delivery ^ coveredBone(ff) ^ jffj = k 7! (delivery ^ coveredBone(ff) ^ jffj ? k) - postdelivery (A.2.1)
In order to formally make this assertion, we must first show that during delivery the covered backbone ex-
ists. Showing the existence of the covered backbone independent from other system attributes is not possi-
ble. Therefore we prove a stronger invariant that not only establishes the existence of the covered backbone,
but also the existence of the backbone and the relationship between the two. By definition, the covered back-bone
is a subset of the backbone. We further assert that if the covered backbone is shorter than the back-
bone, there is an announcement leaving the last node of the covered backbone (where last(ff) returns the final
element of the path ff). Alternately, if the covered backbone and backbone are equivalent, the mobile
unit must precede the announcement (indicated by the constant ann) in the channel leaving the last node.
delivery ) h9ff; fi :: coveredBone(ff) ^ backbone(fi) ^ (ff shorter than fi implies ann is in the channel leaving
last(ff)) ^ (ff = fi implies mob precedes ann in the channel leaving last(ff)) i (I.1)
This invariant is proven by showing that it holds initially as well as over all statements of the program. Through-out
this proof, we use several supporting properties which appear in Appendix A. Specifically: Inv I.1.1 the integrity
of the backbone, Inv I.1.2 that the backbone always exists, Inv I.1.3 that there is at most one announcement in a
channel, Inv I.1.4 that there are no announcements during predelivery, and Inv I.1.5 that there are no acknowledgments
during delivery. We now show the proof of the top level property concerning the existence of the covered
backbone during delivery (I.1):
ffl It is trivial to show the initial conditions of I.1 because initially, delivery is false.
ffl MobileArrivesA (B): We assume the integrity of the backbone (Inv I.1.1). First we consider the case where
the system is in delivery and the right hand side of this invariant (I.1) holds. The covered backbone is not
affected if the mobile unit arrives at a non-backbone node or a backbone node below the covered backbone.
If the mobile unit arrives at a covered backbone node, the announcement is delivered and the invariant is
trivially true by falsifying the left hand side.
Next we consider when the system is not in delivery. If the system is in predelivery and we assume there are
no announcements during predelivery (Inv I.1.4), the movement of the mobile unit cannot affect the delivery
status. Once the system is in postdelivery, it cannot return to delivery, so the invariant remains true.
ffl AnnouncementArrivesA (B): We assume there is at most one announcement on a channel in the system
(Inv I.1.3). Therefore, if the system is in delivery and we assume this invariant is true before the announcement
arrives, the announcement must be leaving the covered backbone. Further, since the announcement is at the
head of the channel, it cannot be the case that the mobile unit and announcement are in the same channel,
so the covered backbone must be a proper subsequence of the backbone. Therefore, by the definitions of the
covered backbone and backbone, the node the announcement arrives at is on the backbone, and either the
announcement is delivered or is propagated.
If delivery occurs, this invariant is trivially satisfied by falsifying the delivery condition.
If the announcement is propagated into the next channel, then the covered backbone is extended by one node
which has already been shown to be part of the backbone. The announcement is put onto the child link of
this node, which by the backbone definition must be a channel on the backbone or backbone extension. The
announcement must follow the mobile unit if the mobile unit is on the same channel.
As before, if the system is not in delivery, then the delivery status of the system cannot change with the
execution of this statement.
ffl SendMobileA (B): Before the statement executes, the mobile unit must be at a node, otherwise the statement
is a skip. Since we assume this invariant to be true, it must be the case that the covered backbone is a proper
subsequence of the backbone. Therefore, when the mobile unit leaves the node, the backbone is only changed
to include the new backbone extension, the covered backbone is not affected, and the invariant remains true.
ffl AnnouncementStart: We assume that if the system is in predelivery, there are no announcements in the
system (Inv I.1.4). Therefore, after this statement executes, either delivery occurs and the invariant I.1 is
trivially true, or the announcement is placed at the root and on the outgoing link, establishing the right hand
side of the invariant. If the system is in delivery or postdelivery, this statement is a skip.
ffl AckArrivesA (B): We assume there are no acknowledgments in the system during delivery (Inv I.1.5), and
therefore this statement is essentially a skip during delivery. This statement cannot change the delivery
status, therefore, if the system is in predelivery or postdelivery, the invariant is trivially true.
ffl DeleteArrivesA (B): We assume that delete messages do not affect the backbone (Inv I.1.1), therefore they
will not affect the covered backbone, and this invariant will remain true. As before, this statement cannot
change the delivery status.
This concludes the proof that during delivery, the covered backbone exists. We now show that the covered
backbone must grow, as defined by property A.2.1. We note two specific cases that the system can be in with
respect to the mobile unit and the announcement and show how either the covered backbone must increase or
delivery will occur. The first case is where the mobile unit and announcement are not on the same channel. Since
the system is in delivery, there cannot be an acknowledgment on the channel (Inv I.1.5). Since the announcement
is on a backbone channel, there cannot be a delete on the channel (Inv I.1.1). The assumption is that the mobile
unit is not on the same channel. This covers all possible message types that could precede the announcement
on the channel, therefore the announcement must be at the head of the channel. So, in this case, the progress
property A.2.1 which concerns the growth of the covered backbone becomes an ensures because the announcement
will remain at the head of the channel until processed, lengthening the covered backbone, or the mobile unit will
arrive at a node causing delivery. In either case, the condition on right hand side becomes true.
In the second case, the mobile unit and announcement are on the same channel. By Invariant I.1, the mobile
unit precedes the announcement in this channel. We state a trivial progress property that if the mobile unit is at
the head of a channel, it is ensured to arrive at the destination node:
mobile.at.head(m; n) ensures MobileAt(n) (A.2.1.1)
Because there is only one mobile unit, after the mobile unit is removed from the channel, either the system is taken
out of delivery by the mobile unit receiving the announcement, or the system has been reduced to the first case
where the mobile unit and announcement are not on the same channel.
The previous discussion effectively shows property A.2.1, namely that the covered backbone must grow until all
nodes in the system are part of the covered backbone or delivery has occurred. To complete the proof that delivery
is guaranteed, we need to show that when all nodes are part of the covered backbone, delivery must occur. By the
definitions of the covered backbone and backbone, when all nodes are part of the covered backbone, the two are
equivalent. The mobile unit must be on a channel because all nodes have announcement copies and if the mobile
unit is at a node, it must have received the announcement copy (either when the mobile unit arrived or when the
announcement arrived). The destination of the channel the mobile unit is on must be part of the backbone because
all nodes are part of the backbone. If there is a delete in front of the mobile unit, it will not have any effect on
the backbone (Inv I.1.1). There cannot be an acknowledgment in the channel (Inv I.1.5). The announcement must
be behind the mobile unit (Inv I.1.1). Therefore, after the delete (if any) is processed, the mobile unit is at the
head of the channel. The MobileArrivesA (B) action will cause delivery. Therefore, announcement delivery is
guaranteed from the initial state of the system.
4.3.2 Backbone announcements cleaned up
Once the announcement has been delivered, we show that eventually all stored announcement copies are removed.
There are two cases to address: the announcements on nodes on the backbone and those not on the backbone.
In the next section, we will show how all nodes which are not part of the backbone will be cleaned up, while this
section focuses on the cleanup of the backbone nodes. In particular, we wish to show that after the announcement
has been delivered, eventually all announcement copies on the backbone will be deleted.
postdelivery 7! h8m : m is a node on the backbone :: :AnnouncementAt m i
We introduce a safety property describing the state of the backbone in postdelivery. Namely, (a.) the backbone
and covered backbone exist, (b.) there is an acknowledgment in the channel heading toward the last node of the
covered backbone, (c.) all nodes in the backbone not in the covered backbone do not have announcement copies,
(d.) there are no announcement copies on any backbone channels or the backbone extension, and (e.) there are
no acknowledgments on the channels of the covered backbone. Intuitively, this invariant shows that there is only
one segment of the backbone with announcement copies and the nodes on this segment are poised to receive an
acknowledgment.
postdelivery ) (a) ^ (b) ^ (c) ^ (d) ^ (e) (I.2)
We now show the proof of this statement by showing that if it holds before the execution of each statement, it
must hold after the execution of the statement:
ffl MobileArrivesA (B): When the mobile unit arrives at a non-backbone node, the backbone is extended
to include this node. The channel just traversed will become part of the backbone, but will not have an
announcement on it by the last part of this invariant. The covered backbone will not change. If there is an
announcement at this node, it will be removed so that there are still no announcement copies at nodes other
than the covered backbone.
If the mobile unit arrives at a backbone node that is not part of the covered backbone, the covered backbone
does not change. There are still no announcements at backbone nodes other than the covered backbone, and
because no new channels are added to the backbone, there are no announcement copies on the channels of
the covered backbone.
If the mobile unit arrives at a backbone node that is part of the covered backbone, the covered backbone
is shortened to be all nodes above this new location of the mobile unit. Each of these nodes must have an
announcement copy because they were part of the covered backbone before the mobile unit arrived. Also, an
acknowledgment is generated in the channel heading toward the new covered backbone and this invariant is
established. As before, there are no new channels in the backbone, so there are still no announcements on
any backbone channels.
We must also consider the case where the postdelivery condition is established by the arrival of the mobile
unit. The components of this invariant are established because the only announcement in the system was at
the end of the covered backbone which must be downstream from where the mobile unit arrived, so there are
no announcements in backbone channels. The remainder of the invariant is established in a manner similar
to the case where the system is in postdelivery and the mobile unit arrives at a covered backbone node.
ffl AnnouncementArrivesA (B): If the system is in delivery, the arrival of the announcement could establish
postdelivery. In this case, the components of this invariant are established because all nodes above the mobile
unit are part of the covered backbone. The acknowledgment is put into the channel above the mobile unit,
which is the end of the covered backbone. The announcement copy at the node the mobile unit is at is deleted.
If the system is in postdelivery and the announcement arrives, the announcement could arrive at a backbone
node only from a non-backbone channel and will be dropped. Therefore, the invariant will not be affected
because neither the backbone nodes nor links are affected.
ffl SendMobileA (B): If the mobile unit leaves a node, the covered backbone does not change. Also, the mobile
unit is at the end of the channel, so any announcements in the same channel must be before the mobile unit.
ffl DeleteArrivesA (B): The arrival of a delete at a backbone node will not affect the backbone or the covered
backbone. The arrival of a delete elsewhere in the system will not affect this invariant.
ffl AckArrivesA (B): If an acknowledgment arrives at a covered backbone node, it must be at the end of the
covered backbone. Therefore, the processing of this acknowledgment will shrink the covered backbone by
one node and put the acknowledgment farther up in the backbone. Alternately, the root could receive the
acknowledgment and there would no longer be a covered backbone.
If an acknowledgment arrives at a non-backbone node and is accepted, it will not be put onto the backbone. If
it is not accepted, nothing changes in the covered backbone or backbone, therefore the invariant is maintained.
ffl AnnouncementStart: This statement has no effect during postdelivery. During predelivery, this statement
could establish this invariant by delivering the announcement to a node at the root. In this case, the covered
backbone does not exist, and the invariant is true.
Our goal is to show progress in the cleanup of announcements on the covered backbone. To do this, we use
a progress metric that measures the reduction in length of the covered backbone. Because the only nodes on the
backbone with announcement copies must be on the covered backbone by Invariant I.2, once the covered backbone
has length zero, all announcements on the backbone have been deleted.
postdelivery ^ coveredBone(ff) ^ jffj = k ? 0 7! postdelivery ^ coveredBone(ff) ^ jffj smaller than k
To prove this statement, we note that by the previous invariants, it has been established that there is an
acknowledgment in the channel heading toward the covered backbone. If the acknowledgment is not at the head
of the channel, then there must be something else in front of it. There cannot be a delete on the channel, because
that would mean there is a delete on the backbone which is not allowed by invariant I.1.1. An announcement would
have no effect because it is not arriving on a parent link. If the mobile unit were on the channel, then the arrival of
the mobile unit would cause delivery because the announcement must be at the last node of the covered backbone,
and the covered backbone would change.
So, either the mobile unit will arrive from the same channel as the acknowledgment or on another channel and
will cause the covered backbone to shrink, or the acknowledgment will be processed and cause the covered backbone
to shrink. Since there are only a finite number of messages in the channel in front of the acknowledgment, these will
be processed and eventually either the acknowledgment will reach the head of the channel or the covered backbone
will shrink in another way (through the arrival of the mobile unit at a covered backbone node).
When the covered backbone shrinks to zero length, there will be no more announcement copies on any backbone
nodes, accomplishing backbone cleanup.
4.3.3 Tail Cleanup
In addition to backbone cleanup, we must also ensure that any announcement copies not on the backbone will
eventually be deleted. More precisely, any node which is on a tail will eventually be cleared or put on the backbone
(C), where clear(n) indicates that n's parent and child pointers are null (which will be shown to imply the
announcement is no longer stored there). Since a node can only accept an announcement from a parent, this
implies that only nodes with non-null parent pointers could have announcement copies. Since a node can only clear
its pointers at the same time as it clears its storage, there is no way for a node (other than the root) to have a copy
of the announcement and null pointers.
Since we cannot guarantee that the mobile unit will eventually arrive at the node, thereby adding that node to the
backbone, we must prove that there is a delete message that will eventually arrive at the node if it remains on the
tail. We show that every tail has a delete message on the channel heading toward the first node of the tail (I.3),
where the first node of the tail is defined to be the node whose parent pointer points toward a node that does not
point toward it as the child. This delete message will eventually be processed, shrinking the length of the tail (C.1).
When the tail contains only one node, the tail is guaranteed to be cleared (C.2).
tail(-; n) 7! clear(n) - n is on the backbone (C)
To show that the tail can shrink, we must guarantee the existence of the delete message at the end of the
tail. We do this by assuming the invariant before each statement execution and showing it holds after statement
execution.
ffl MobileArrivesA (B): If the mobile unit arrives at a backbone node, one or zero tails are created. If no tails
are created, the invariant trivially holds. If a tail is created, it consists of the nodes that are removed from
the backbone. These nodes by definition point toward one another as parent and child, making them a tail.
No old tails are affected. The new tail by definition has a first node. The first node of the new tail is the
node formerly pointed to as the child of the node where the mobile unit is currently at. A delete is put onto
this channel, establishing this invariant.
If the mobile unit arrives at a node that is not on the backbone and not on a tail, no new tails are created,
no deletes are sent, and no old tails are affected.
If the mobile unit arrives at a tail node, the tail is cut into two segments around the mobile unit. The nodes
above the mobile unit are not affected because the first node of the tail is still the same and the delete is not
affected. The nodes below the mobile unit are similar to the first case, and the delete generated down the old
child pointer establishes the invariant.
ffl DeleteArrivesA (B): If a delete arrives at any node on a link other than from the parent, this delete could
not be critical to any tail, and therefore dropping it has no effect on the invariant.
If a delete arrives at the first node of a tail along the parent link, the delete is propagated to the new first
node of the tail and a node is removed from the tail.
ffl AnnouncementArrivesA (B), SendMobileA (B), AckArrivesA (B), and AnnouncementStart do not affect
the invariant.
With this invariant, it is clear that when the delete is processed, a node is removed from the tail, and the tail
shrinks (property C.1). If the delete is not at the head of the channel, the messages ahead of it must be processed.
Neither an acknowledgment nor an announcement will affect progress. If a mobile unit arrives, the node is added
to the backbone, satisfying the progress condition.
Finally, we formally define clear (D.7) as clear(n) j (Parent(n) = null ^ Child(n) = null), and then show that if a
node is clear, it has no announcement copies (I.3.1): clear(n) ) :AnnouncementAt n .
This invariant is easily shown over every statement. Intuitively, when a node sets both its parent and child pointers
to null, as in DeleteArrivesA (B), the announcement copies at the node are deleted. Since it is not possible to
set the child and parent pointers to null any other way, and an announcement is only accepted from a non-null
parent link, there cannot be an announcement at a node that has both null pointers.
Therefore, once a node is either clear or put back on the backbone, it will not have an announcement copy. As
the tails shrink, we are guaranteed that the announcements not on the backbone will be removed from the system.
5 Related Work
We have described two algorithms to guarantee the delivery of an announcement to a mobile unit with no assumptions
regarding the speed of movement. In this section, we compare our approach with other tracking based
delivery schemes designed for the mobile environment including Mobile IP, a scheme by Sony, another by Sanders
et al., and finally a multicast scheme by Badrinath et al.
Each of these algorithms uses the notion of a home node toward which the announcement is initially sent. In
Mobile IP [6], the home node tracks as closely as possible the current location of the mobile unit and all data is sent
from the home directly to this location. This information is updated each time the mobile unit moves, introducing
a discrepancy between the actual location and the stored location during movement. Any data sent to the mobile
unit during this update will be dropped. Mobile IP has no mechanism to recover this data, but rather assumes
that higher layer protocols such as TCP will handle buffering and retransmitting lost data. One proposal within
Mobile IP is to allow the previous location of the mobile unit to cache the new location and forward data rather
than dropping it. While this can reduce the number of dropped announcements, it still does not guarantee delivery
as the mobile unit can continue to move, always one step ahead of the forwarded announcement.
One proposal is to increase the amount of correct location information in the system by distributing this
information to multiple routers, as in our tree and backbone maintenance algorithms. The Sony [5] approach keeps
the home node as up to date as possible, but also makes the other system routers active components, caching
mobile unit locations. As the packet is forwarded, each router uses its own information to determine the next hop
for the packet. During movement and updating of routing information, the routers closer to the mobile unit will
have more up to date information, and fewer packets will be lost than in the Mobile IP approach. This approach
still does not provide delivery guarantees, and few details are given concerning updates to router caches. One
benefit is that announcements need not be sent all the way to the home node before being forwarded toward the
mobile unit. Instead any intermediate router caching a location for the mobile unit can reroute the packet toward
the mobile unit.
The Sanders approach [8] has this same advantage, allowing intermediate routers to forward the announcement.
Sanders describes precisely how intermediate routers are updated. A hop by hop path is kept from the home node
to the last known location of the mobile unit. When the mobile unit moves, the path is shortened one node at
a time until the common node between the shortest path to the old location and the shortest path to the new
location is reached, then the path is extended one node at a time. Any announcements encountering the hop by
hop path are forwarded toward the last node of this path. During the updating of this path, announcements move
with the update message that is changing the path and will eventually reach the next known location of the
mobile unit. However, the mobile unit may have moved during the update, in which case, the data messages will
continue to travel with the next update message. Although no messages are dropped, the slow update time and
ability of the mobile unit to keep moving could prevent delivery.
In each of these approaches, a single copy of the announcement is kept in the system, while our approach stores
multiple copies throughout the system until delivery is complete. We believe that sacrificing this storage for the
limited times that our algorithms require is worthwhile to provide guaranteed delivery of the announcement. If we
weaken these requirements, our approaches can be modified to reduce storage. In the first algorithm, the announcement
can be sent down the spanning tree in a wave. When the announcement arrives at a node, if the mobile unit
is present, it is delivered, otherwise it is sent on all outgoing spanning tree channels. Although multiple copies are
generated, they will not be stored, but simply passed to the next hop. When the announcement reaches a leaf
node it will be dropped. If the mobile unit remains in a region of the graph below the announcement propagation,
it will receive the message, however if it is in transit or moves to a region above the message propagation, the
announcement will not be delivered. In our second approach, a single announcement copy can be sent down the
backbone path. Even if the announcement ends up on a tail, it will continue toward the mobile unit. Because
the path we define to the mobile unit is based on movement pattern rather than shortest path as in the Sanders
algorithm, there is only one pathological movement pattern (a figure eight crossing the backbone) where the mobile
unit can continue to avoid delivery.
Another approach which keeps multiple announcement copies is Badrinath's guaranteed multicast algorithm [1]
which stores announcement copies at all system nodes until all mobile units in the multicast group have received the
announcement. This information is gathered by the announcement initiator from the nodes that actually delivered
the announcement. A disadvantage of this algorithm is that all recipients must be known in advance, which is
not always the case in multicast. Our first algorithm can trivially be extended to track the movement of multiple
mobile units, and because it is based on the actual movement of the mobile units can reduce the number of nodes
involved in the multicast delivery with respect to the Badrinath approach.
6 Conclusions
Our primary contribution in this work is the introduction of a new approach to the study of mobility, one based on
a model whose mechanics are borrowed directly from the established literature of distributed computing. Treating
mobile units as messages provides an effective means for transferring results from classical distributed algorithm
literature to the emerging field of mobile computing. A secondary contribution is the development of two algorithms
for message delivery to mobile units, the first a direct derivative of the diffusing computations distributed algorithm,
and the second an optimizing refinement of the first based on careful study of the problem and the solution.
Each algorithm is applicable in a variety of settings where mobile computing components are used and reliable
communication is essential.
--R
IP multicast extensions for mobile internetworking.
Flush primitives for asynchronous distributed systems.
Termination detection for diffusing computations.
A distributed snapshot algorithm adapted for message delivery to mobile units.
Comparing four IP based mobile host protocols.
IP mobility support.
A Border Gateway Protocol 4 (BGP-4)
Derivation of an algorithm for location management for mobile communication devices.
Routing in Communication Networks
--TR
Flush primitives for asynchronous distributed systems
Comparing four IP based mobile host protocols
Routing in communications networks
A framework for delivering multicast message in networks with mobile hosts
Understanding Code Mobility
Reliable Communication for Highly Mobile Agents | mobile computing;message delivery;diffusing computations |
567199 | On sparse evaluation representations. | The sparse evaluation graph has emerged over the past several years as an intermediate representation that captures the dataflow information in a program compactly and helps perform dataflow analysis efficiently. The contributions of this paper are three-fold: We present a linear time algorithm for constructing a variant of the sparse evaluation graph for any dataflow analysis problem. Our algorithm has two advantages over previous algorithms for constructing sparse evaluation graphs. First, it is simpler to understand and implement. Second, our algorithm generates a more compact representation than the one generated by previous algorithms. (Our algorithm is also as efficient as the most efficient known algorithm for the problem.) We present a formal definition of an equivalent flow graph, which attempts to capture the goals of sparse evaluation. We present a quadratic algorithm for constructing an equivalent flow graph consisting of the minimum number of vertices possible. We show that the problem of constructing an equivalent flow graph consisting of the minimum number of vertices and edges is NP-hard. We generalize the notion of an equivalent flow graph to that of a partially equivalent flow graph, an even more compact representation, utilizing the fact that the dataflow solution is not required at every node of the control-flow graph. We also present an efficient linear time algorithm for constructing a partially equivalent flow graph. Copyright 2002 Elsevier Science B.V. | Introduction
The technique of sparse evaluation has emerged, over the past several years, as an efficient way of performing
program analysis. Sparse evaluation is based on the simple observation that for any given analysis problem,
a number of the "statements" in a given program may be "irrelevant" with respect to the analysis problem.
As a simple example, consider any version of the "pointer analysis" problem (e.g., see [13, 1]), where the goal
is to identify relations that may exist between pointer-valued variables. An assignment to an integer-valued
variable, such as "i := 10;", will typically be irrelevant to the problem and may be ignored. The goal of
sparse evaluation is very simply to construct a "smaller" program whose analysis is sufficient to produce
results for the original program. This not only enables the analysis to run faster, but also reduces the space
required to perform the analysis. (For some complex analyses like pointer analysis, for which space is often a
bottleneck, sparse evaluation can make the difference between being able to complete the analysis and not.)
The idea of sparse evaluation was born in the context of the work done on the Static Single Assignment
[5, 6], which showed how the SSA form could help solve various analysis problems, such as
constant propagation and redundancy elimination, more efficiently. Choi et al. [2] generalized the idea and
showed how it could be used for an arbitrary dataflow analysis problem expressed in Kildall's framework
[12]. The idea is, in fact, applicable to analysis problems expressed in various different frameworks, and
more generally, to the problem of computing extremal fixed points of a collection of equations of certain
form. However, for the sake of concreteness, we will also deal with analysis problems expressed in Kildall's
framework.
In the context of Kildall's framework, we are interested in solving some dataflow analysis problem over a
control-flow graph G. The idea behind sparse evaluation is to construct a smaller graph H, which we will
refer to as an equivalent flow graph, from whose dataflow solution the solution to the original graph G can
be trivially recovered. More detailed discussions of equivalent flow graphs and their use can be found in
[2, 11, 16].
Choi et al. [2] define a particular equivalent flow graph called the Sparse Evaluation Graph (SEG) and
present an algorithm for constructing it. Johnson et al. [11, 10] define a different equivalent flow graph called
A preliminary version of this paper appeared in the proceedings of the Fourth International Static Analysis Symposium,
volume 1302 of Lecture Notes in Computer Science, pages 1-15.
the Quick Propagation Graph (QPG) and present a linear time algorithm for constructing it. In general,
the Quick Propagation Graph is not as compact as the Sparse Evaluation Graph. Cytron and Ferrante [4],
Sreedhar and Gao [16], and Pingali and Bilardi [14, 15] improve upon the efficiency of the original Choi et
al. algorithm for constructing the Sparse Evaluation Graph. Duesterwald et al. [8] show how a congruence
partitioning technique can be used to construct an equivalent flow graph, which we believe is the same as
the standard Sparse Evaluation Graph. The Pingali-Bilardi algorithm and the Sreedhar-Gao algorithm,
which are both linear, have the best worst-case complexity among the various algorithms for constructing
the Sparse Evaluation Graph.
The contributions of this paper are as follows.
ffl We define a new equivalent flow graph, the Compact Evaluation Graph (CEG), and present a linear
time algorithm for constructing the Compact Evaluation Graph for any (monotonic) dataflow analysis
problem. Our algorithm has two advantages over previous algorithms.
- Simplicity. Our algorithm is particularly simple to understand and implement. It is conceptually
simple because it is based on two graph transformations, whose correctness is transparently ob-
vious. In implementation terms, its simplicity derives from the fact that it does not require the
computation of the dominator tree. It utilizes just the well-known strongly connected components
algorithm and the topological sort algorithm.
- Compactness. In general, the Compact Evaluation Graph is smaller than both the Sparse Evaluation
Graph and the Quick Propagation Graph. In particular, we show that both SEG and QPG
can also, in principle, be generated utilizing the two graph transformations mentioned above.
However, while the Compact Evaluation Graph is in normal form with respect to these trans-
formations, SEG and QPG are not necessarily so. Since these transformations make the graph
smaller and since these transformations are Church-Rosser, it follows that the Compact Evaluation
Graph is at least as small as both SEG and QPG.
ffl For a reasonable definition of equivalent flow graph, we present a quadratic algorithm for constructing an
equivalent flow graph consisting of the minimum number of vertices. We also show that the problem of
constructing an equivalent flow graph consisting of the minimum number of vertices and edges is NP-hard.
ffl We show how we can utilize the fact that the dataflow solution is not required at every node of
the control-flow graph to construct an even more compact representation, which we call a partially
equivalent flow graph. We present an efficient algorithm, also based on simple graph transformations,
to produce a partially equivalent flow graph.
The rest of the paper is organized as follows. Section 2 presents the notation and terminology we use. Section
3 describes the Compact Evaluation Graph and an algorithm for constructing it. Section 4 discusses the
graph transformations used to construct the Compact Evaluation Graph. Section 5 compares the Compact
Evaluation Graph to previously proposed equivalent flow graphs. Section 6 presents our results concerning
equivalent flow graphs of minimum size. Section 7 introduces the concept of a partially equivalent flow
graph and presents an algorithm for constructing one. Section 8 briefly discusses how these concepts apply
in the case of interprocedural analysis. Section 9 presents a comparison of our work with previous work, and
presents our conclusions.
Notation and Terminology
Dataflow analysis problems come in various different flavors, but many of the differences are cosmetic. In this
paper we will focus on "forward" dataflow analysis problems, but our results are applicable to "backward"
dataflow analysis problems as well. We will also assume that the "transfer functions" are associated with
the vertices of the control-flow graph rather than the edges and that we are interested in identifying the
dataflow solution that holds at exit from nodes. In particular, when we talk about the dataflow solution at
a node u, we mean the solution that holds at exit from u.
A control-flow graph G is a directed graph with a distinguished entry vertex. We will denote the vertex set
of G by V (G) (or V ), the edge set of G by E(G) (or E), and the entry vertex by entry(G). For convenience,
we assume that the entry vertex has no predecessors and that every vertex in the graph is reachable from
the entry vertex.
Formally, a dataflow analysis problem instance is a tuple F = (L; G; M; c), where:
ffl L is a semilattice
ffl G is a control-flow graph
ffl M is a map from G's vertices to dataflow functions (from L to L)
ffl c 2 L is the "dataflow fact" associated with the entry vertex.
We will refer to M (u) as the transfer function associated with vertex u. The function M can be extended
to map every path in the graph to a function from L to L: if p is a path [v 1 ; v 2 ; : : : ; v k ], then M (p) is defined
to be M (v k ) ffi : : : ffi M (v 2 ) ffi M (v 1 ). (It is convenient to generalize the above definition to any sequence of
vertices, even if the sequence is not a path in the graph.) The meet-over-all-paths solution MOPF to the
problem instance F = (L; G; M; c) is defined as follows:
MOPF (u) = the meet of fM (p)(c) j p 2 paths(entry(G); u)g
Here paths(entry(G); u) denotes the set of all paths p from entry(G) to u. The maximal fixed point solution
of the problem, denoted MFPF , is the maximal fixed point of the following collection of equations over the
set of variables fx u j u 2 V (G)g:
x entry(G) = M (entry(G))(c)
x u = M (u)(the meet of fx v j v is a predecessor of u in Gg) for every u 6= entry(G)
Most of the results in this paper apply regardless of whether one is interested in the maximal fixed point
solution or the meet-over-all-paths solution. Whenever we simply refer to the "dataflow solution", the
statement applies to both solutions.
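As a point of reference, the maximal fixed point solution can be computed by a standard worklist iteration. The Python sketch below fixes one concrete lattice, the powerset of a finite universe ordered by inclusion with intersection as the meet; this choice, and every identifier in the sketch, is our own simplification rather than part of the framework.

def mfp(succ, entry, transfer, c, universe):
    # Worklist computation of the maximal fixed point of the equations above,
    # for a forward problem whose lattice is the powerset of `universe` ordered
    # by inclusion, with set intersection as the meet.  succ[u] lists u's
    # successors, transfer[u] is M(u) (a function from sets to sets), and c is
    # the dataflow fact at the entry vertex.  The result maps each node to the
    # solution holding at exit from it.
    nodes = list(succ)
    preds = {u: [p for p in nodes if u in succ[p]] for u in nodes}
    sol = {u: set(universe) for u in nodes}        # every node starts at top
    sol[entry] = transfer[entry](set(c))
    work = [u for u in nodes if u != entry]        # evaluate every node once
    while work:
        u = work.pop()
        if u == entry:
            continue                               # entry has no predecessors
        inval = set(universe)
        for p in preds[u]:
            inval &= sol[p]
        out = transfer[u](inval)
        if out != sol[u]:
            sol[u] = out
            work.extend(succ[u])                   # successors must be redone
    return sol

For a union-based problem such as reaching definitions, the meet, the top element, and the initialization would be adjusted accordingly.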
Assume that we are interested in solving some dataflow analysis problem over a control-flow graph G.
The idea behind the sparse evaluation technique is to construct a (usually smaller) graph H, along with a
function f from the set of vertices of G to the set of vertices of H such that the dataflow solution at a node
u in graph G is the same as the dataflow solution at node f(u) in graph H. This implies that it is sufficient
to perform the dataflow analysis over graph H. Furthermore, the graph H and the mapping f are to be
constructed knowing only the set of nodes I in G that have the identity transfer function with respect to the
given dataflow analysis problem. (In other words, the reduction should be valid for any dataflow problem
instance over the graph G that associates the identity transfer function with vertices in I.) Though this
description is somewhat incomplete, it will suffice for now. We will later present a formal definition of an
equivalent flow graph.
Compact Evaluation Graphs: An Overview
The goal of this section is to explain what the Compact Evaluation Graph is and our algorithm for constructing
this graph in simple terms, without any distracting formalisms. Formal details will be presented
in later sections.
Let S be a set of nodes in G that includes the entry node of G as well as any node that has a non-identity
transfer function with respect to the given dataflow analysis problem. We will refer to nodes in S (whose
execution may modify the abstract program state) as m-nodes, and to other nodes (whose execution will
preserve the abstract program state) as p-nodes.
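For instance, for the reaching-definitions example discussed later, the split could be computed as follows; defines_x is a hypothetical predicate assumed to be supplied by a front end.

def split_nodes(cfg_nodes, entry, defines_x):
    # m-nodes: the entry node plus every node whose transfer function is not
    # the identity; for reaching definitions of x that means every node that
    # defines x (defines_x is assumed to be supplied by a front end).
    m_nodes = {v for v in cfg_nodes if v == entry or defines_x(v)}
    p_nodes = set(cfg_nodes) - m_nodes
    return m_nodes, p_nodes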
We are given the graph G and the set S, and the idea is to construct a smaller graph that is equivalent
to G, as explained earlier. Our approach is to generate an equivalent flow graph by applying a sequence
of elementary transformations, very much like the T1-T2 style elimination dataflow analysis algorithms
[18, 9]. We use two elementary transformations T2 and T4 (named so to relate them to the T1, T2, and T3
transformations of [18, 9]).
The Basic Transformations
Transformation T2: Transformation T2 is applicable to a node u iff (i) u is a p-node and (ii) u has only
one predecessor. Let v denote the unique predecessor of a node u to which T2 is applicable. The graph
T 2(u; v)(G) is obtained from G by merging u with v: that is, we remove the node u and the edge v ! u from
the graph G, and replace every outgoing edge of u, say u ! w, by a corresponding edge v ! w. (This
transformation is essentially the same as the one outlined by Ullman [18], but we apply it only to p-nodes.)
Note that the dataflow solution for graph G can be obtained trivially from the dataflow solution for graph
T 2(u; v)(G). In particular, the dataflow solution for node u in G is given by the dataflow solution for node v
in T 2(u; v)(G). The dataflow solution for every other node is the same in both graphs.
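One possible encoding of T2 over an adjacency-set representation is sketched below; the representation and the name apply_t2 are our own choices, and the caller is expected to record that u is subsequently represented by v.

def apply_t2(succ, pred, is_p_node, u):
    # Merge p-node u into its unique predecessor v (transformation T2).
    # succ and pred map every node to the set of its successors/predecessors.
    assert is_p_node[u] and len(pred[u]) == 1
    (v,) = pred[u]
    for w in succ[u]:                  # each edge u -> w becomes v -> w
        pred[w].discard(u)
        succ[v].add(w)
        pred[w].add(v)
    succ[v].discard(u)                 # drop the edge v -> u
    del succ[u], pred[u]               # u disappears; v now stands for u
    return v

# e.g., a diamond a->b->d, a->c->d, where b is a p-node with the single predecessor a:
succ = {"a": {"b", "c"}, "b": {"d"}, "c": {"d"}, "d": set()}
pred = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
apply_t2(succ, pred, {"a": False, "b": True, "c": True, "d": False}, "b")
print(succ, pred)   # b merged into a: a -> {c, d}, and d's predecessors are {a, c}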
Transformation T4: The T4 transformation is applicable to any strongly connected set of p-nodes. (A
set X of vertices is said to be strongly connected if there exists a path between any two vertices of X, where
the path itself contains only vertices from X). If X is a strongly connected set of p-nodes in graph G, the
graph T 4(X)(G) is obtained from G by collapsing X into a single p-node: in other words, we replace the set
X of vertices by a new p-node, say w, and replace any edge of the form u
the edge replace any edge of the form u by the edge w ! v, and
delete any edges of the form u
Note that the dataflow solution for graph G can be obtained trivially from the dataflow solution for graph
T 4(X)(G). In particular, the solution for any node u in G is given by the solution for the (new) node w in
and by the solution for node u in T 4(X)(G) if u 62 X.
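The T4 step can be sketched in the same style. Again, the code and names are our own illustration, not the paper's, and the fresh-node name `w_new` is an arbitrary assumption.

```python
# A minimal sketch (ours) of T4: collapse a strongly connected set X of
# p-nodes into one fresh node.  Graph representation as in the T2 sketch.

def t4(succ, X, w="w_new"):
    """Apply T4(X): replace the vertex set X by the single p-node `w`."""
    X = set(X)
    new = {}
    for node, outs in succ.items():
        src = w if node in X else node
        for dst in outs:
            dst = w if dst in X else dst
            if src == w and dst == w:
                continue                 # edges internal to X are deleted
            new.setdefault(src, set()).add(dst)
    new.setdefault(w, set())
    for node in succ:
        if node not in X:
            new.setdefault(node, set())
    return new

g = {"a": {"d"}, "d": {"e"}, "e": {"a", "b"}, "b": set()}
print(t4(g, {"a", "d", "e"}))            # {'w_new': {'b'}, 'b': set()}
```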
Our algorithm constructs an equivalent flow graph by taking the initial graph and repeatedly applying the T2 and T4 transformations to it until no more transformations are applicable. We will show later that the final graph produced is independent of the order in which these transformations are applied. Let (T2+T4)*(G) denote the final graph produced. Every vertex u in the final graph (T2+T4)*(G) corresponds to a set S_u of vertices in the original graph G. Further, either S_u contains no m-nodes, in which case the transfer function associated with u in graph (T2+T4)*(G) is the identity function, or S_u contains exactly one m-node v (and zero or more p-nodes), in which case u's transfer function in graph (T2+T4)*(G) is the same as the transfer function of v in G. The dataflow solution for any vertex in S_u in G is given by the dataflow solution for the vertex u in the graph (T2+T4)*(G). We refer to (T2+T4)*(G) as the Compact Evaluation Graph of G.
Computing the Normal Form
We now present our algorithm for constructing the normal form of a graph with respect to the T2 and T4
transformations.
Step 1: Let G denote the initial control flow graph. Let G_p denote the restriction of G to the set of p-nodes in G. (That is, G_p is the graph obtained from G by removing all m-nodes and the edges incident on them.) Identify the maximal strongly connected components of G_p using any of the standard algorithms. Let X_1, ..., X_k denote the strongly connected components of G_p in topological sort order. (The topological sort order implies that if there is an edge from a vertex in X_i to a vertex in X_j, then i ≤ j.)
Step 2: Apply the T4 transformation to each X_i in G. (That is, "collapse" each X_i to a single vertex w_i.) Let us denote the resulting graph G_1. (Note that the graph G_p is used only to identify the sets X_1 through X_k. The transformations themselves are applied starting with the graph G.)
Step 3: Visit the vertices w_1 to w_k of G_1 in that order. When vertex w_i is visited, check if the T2 transformation is applicable to it, and if so, apply the transformation. Let us denote the final graph produced (after w_k has been visited) by ω(G).
We will show later that ω(G) is (T2+T4)*(G). It is obvious that the complexity of the basic algorithm is linear in the size of the graph. As is the case with such algorithms, the actual complexity depends on implementation details, especially details such as how sets are implemented. It is straightforward to implement the algorithm so that it runs in linear time. Also note that the simple algorithm for identifying strongly connected components described in [3], due to Kosaraju and Sharir, directly generates the components in topological sort order.
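The following sketch shows Step 3 in executable form. It is our own illustration (not the paper's code): it assumes Steps 1 and 2 have already produced the graph G_1 as a successor map together with the collapsed p-nodes w_1, ..., w_k in topological order; the node names in the tiny example are made up.

```python
# Sketch (ours) of Step 3: one left-to-right pass applying T2 where possible.

def predecessors(succ, node):
    return {p for p, outs in succ.items() if node in outs}

def step3(succ, ws):
    """Visit the collapsed p-nodes in topological order; merge each one
    into its predecessor whenever that predecessor is unique (T2)."""
    for w in ws:
        preds = predecessors(succ, w)
        if len(preds) == 1:                 # T2 is applicable
            (v,) = preds
            succ[v].discard(w)              # drop the edge v -> w
            succ[v] |= succ.pop(w)          # redirect w's outgoing edges
            succ[v].discard(v)              # drop a self loop, if created
    return succ

# Usage with made-up nodes: r and c are m-nodes; w and b are collapsed
# p-nodes (w has two predecessors, b has one).
g1 = {"r": {"w"}, "c": {"w"}, "w": {"b"}, "b": set()}
print(step3(g1, ["w", "b"]))    # b is merged into w; w survives
```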
Example
The example in Figure 1 illustrates our algorithm. Assume that we are interested in identifying the reaching
definitions of the variable x for the graph G shown in Figure 1(i). For this problem, a vertex in the control-flow
graph is an m-node iff it is the entry node or it contains a definition of the variable x. Let us assume that
the nodes r, c, and g (shown as bold circles) are the m-nodes in G, and that the remaining nodes (shown as
regular circles) are p-nodes.
Step 1: The first step of our algorithm is to identify the maximal strongly connected components of
the subgraph of G consisting only of the p-nodes. In this example, G p has only one non-trivial maximal
strongly connected component, namely fa; d; eg. Each of the remaining p-nodes forms a strongly connected
component consisting of a single vertex.
Step 2: The next step is to apply the T4 transformation to each of the strongly connected components
identified in the previous step. The T4 transformation applied to a strongly connected component consisting
of a single vertex (without any self loop) is the identity transformation, and, hence, we need to apply the
T4 transformation only to fa; d; eg. Reducing this set of vertices to a new vertex w gives us the graph in
Figure
1(ii). (In this and later figures, a vertex generated by merging a set X of vertices of the original graph
is shown as a polygon enclosing the subgraph induced by the set X; this subgraph, shown using dashed edges
and italic fonts, is not part of the transformed graph, but is shown only as an aid to the reader.)
Step 3: The next step is to visit the (possibly transformed) strongly connected components - that is, the set of vertices w, b, f, i, h, and j - in topological sort order, and try to apply the T2 transformation to
each of them.
We first visit node w. Node w has two predecessors, namely r and c, and the T2 transformation is not
applicable to w. We then visit b, which has only one predecessor, namely w. Hence, we apply the T2
transformation to b and obtain the graph shown in Figure 1(iii). We similarly apply the T2 transformation
to nodes f and i (one after another), merging them both with w, and to node h, merging it with g. The
transformation is not applicable to the last node visited, j.
The graph in Figure 1(vi) is the normal form of G with respect to the T2 and T4 transformations.
4 On T2 and T4 Transformations
In this section we show that if we apply T2 and T4 transformations to a graph, in any order whatsoever,
until no more applicable transformations exist, the resulting graph is unique. We also establish that our
algorithm produces this unique "normal form". In what follows, a T transformation denotes either a T2 or
T4 transformation.
Theorem 1 Let τ1 and τ2 be two T transformations applicable to a graph G. Then there exists a T transformation τ1' applicable to τ2(G) and a T transformation τ2' applicable to τ1(G) such that τ1'(τ2(G)) = τ2'(τ1(G)).
Proof
In what follows, we denote the vertex obtained by collapsing a set X of vertices by v_X.
For two disjoint transformations, the result follows in a straightforward way: we just choose τ1' to be τ1 and τ2' to be τ2.
Let us now consider two overlapping T2 transformations. If τ1 is T2(u, v) and τ2 is T2(v, w), then we choose τ2' to be T2(v, w) (the same as τ2) and τ1' to be T2(u, w) (the same as τ1, but "renamed" to account for the merging of v with w).
Let us now consider two overlapping T4 transformations. If τ1 is T4(X) and τ2 is T4(Y), we choose τ1' to be T4((X − Y) ∪ {v_Y}) and τ2' to be T4((Y − X) ∪ {v_X}).
Let us now consider an overlapping T2 transformation τ1 = T2(u, v) and T4 transformation τ2 = T4(Y). If u ∈ Y (and hence v ∈ Y), we choose τ1' to be the identity transformation and τ2' to be T4(Y − {u}). If v ∈ Y but u ∉ Y, we choose τ1' to be T2(u, v_Y) and τ2' to be T4(Y). 2
It follows from the above theorem that T2 and T4 transformations form a finite Church-Rosser system.
Hence, every graph has a unique normal form with respect to these transformations. We now show that the
graph !(G) produced by our algorithm is this normal form.
Figure 1: An example illustrating our algorithm for constructing the Compact Evaluation Graph. (Panels (i)-(vi) show the successive graphs over the vertices r, a, b, c, d, e, f, g, h, i, j; only the caption is reproduced here.)
Theorem 2 No T2 or T4 transformations are applicable to ω(G).
Proof
First observe that no (nontrivial) strongly connected set of p-nodes exists in the graph G_1. Hence, no T4 transformation is applicable to graph G_1. Clearly, the application of one or more T2 transformations to G_1 cannot create a nontrivial strongly connected set of p-nodes. Hence, no T4 transformation is applicable to the final graph ω(G) either.
Now, consider the construction of ω(G) from G_1. Assume that we find that the T2 transformation is not applicable to a p-node w_i when we visit it. In other words, w_i has at least two predecessors when we visit it. Clearly any predecessor of w_i must be either an m-node or a node w_j with j < i, and the subsequent T2 transformations can only eliminate a node of the form w_j where j > i. Hence, none of w_i's predecessors will be eliminated subsequently. Hence, the T2 transformation is not applicable to w_i in ω(G) either. 2
5 A Comparison With Previous Equivalent Flow Graphs
In this section we compare CEG, the equivalent flow graph produced by our algorithm, to two previously proposed equivalent flow graphs, namely the Sparse Evaluation Graph (SEG) [2] and the Quick Propagation
Graph (QPG) [11, 10]. We will show that both the Sparse Evaluation Graph and the Compact Evaluation
Graph can be generated from the original graph by applying an appropriate sequence of T2 and T4 trans-
formations. (Our goal is not to present algorithms to generate SEG or QPG; rather, it is to show that SEG
and QPG are just two of the many equivalent flowgraphs that can be generated via T2-T4 transformations.)
It follows that CEG is at least as small as SEG and QPG.
We begin by defining the Sparse Evaluation Graph. We say that a vertex x dominates a vertex y if every path from the entry vertex to y passes through x. We say that x strictly dominates y if x dominates y and x ≠ y. The dominance frontier of a vertex x, denoted DF(x), is the set of all y such that x dominates some predecessor of y but does not strictly dominate y. The dominance frontier of a set of vertices is defined to be the union of the dominance frontiers of its elements. Let X be a set of vertices. Define IDF_1(X) to be DF(X) and IDF_{i+1}(X) to be DF(X ∪ IDF_i(X)) for i ≥ 1. The limit of this increasing sequence is called the iterated dominance frontier of X, denoted IDF(X).
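A naive executable reading of these definitions is given below. This is our own illustration, not an efficient algorithm and not code from the paper: dominators are computed by a plain fixed-point iteration and DF/IDF follow the definitions literally.

```python
# Naive sketch (ours) of dominators, dominance frontiers, and IDF(X)
# on a successor map `succ` with entry vertex `entry`.

def dominators(succ, entry):
    nodes = set(succ)
    dom = {v: set(nodes) for v in nodes}
    dom[entry] = {entry}
    preds = {v: {p for p in nodes if v in succ[p]} for v in nodes}
    changed = True
    while changed:
        changed = False
        for v in nodes - {entry}:
            if preds[v]:
                new = {v} | set.intersection(*(dom[p] for p in preds[v]))
            else:
                new = {v}
            if new != dom[v]:
                dom[v], changed = new, True
    return dom                          # dom[v] = vertices dominating v

def dominance_frontier(succ, entry):
    dom = dominators(succ, entry)
    df = {v: set() for v in succ}
    for x in succ:
        for y in succ:
            strictly = x in dom[y] and x != y
            dominates_a_pred = any(x in dom[p] for p in succ if y in succ[p])
            if dominates_a_pred and not strictly:
                df[x].add(y)
    return df

def idf(succ, entry, X):
    df = dominance_frontier(succ, entry)
    result = set()
    while True:
        work = X | result
        new = set().union(*(df[v] for v in work)) if work else set()
        if new <= result:
            return result
        result |= new

g = {"r": {"a", "b"}, "a": {"c"}, "b": {"c"}, "c": set()}
print(idf(g, "r", {"a"}))        # {'c'}: the paths through a and b meet at c
```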
Given a graph G and a set of vertices S, the Sparse Evaluation Graph consists² of the set of vertices V_IDF and the set of edges E_IDF defined as follows:
V_IDF = S ∪ IDF(S)
E_IDF = { x → y | x, y ∈ V_IDF, and there exists a path from x to y in G none of whose internal vertices are in V_IDF }
For any vertex u ∈ V_IDF, define the set partition(u) as follows:
partition(u) = { v ∉ V_IDF | there exists a path from u to v in G none of whose internal vertices are in V_IDF }
Let partition+(u) denote the set {u} ∪ partition(u).
Lemma 1 The vertex u dominates every vertex in partition(u).
Proof
Let u → v_1 → v_2 → ... → v_n be a path in G such that none of the v_i is in V_IDF. We will show that u dominates v_i, by induction on i.
Consider i = 1. If u does not dominate v_1, then v_1 is in DF(u), by definition. Hence, v_1 must be in V_IDF, contradicting our assumption. (This is because DF(IDF(X)) ⊆ IDF(X).)
2 Choi et al. also discuss a couple of simple optimizations that can be further applied to the SEG, which we ignore here. We
will discuss these later, in Section 7.
Now consider any i > 1. We know from the inductive hypothesis that u dominates v_{i−1}. If u does not dominate v_i, then v_i is in DF(u), by definition. Hence, v_i must be in V_IDF, contradicting our assumption.
The result follows. 2
Lemma 2 Let u and v be two different vertices in V_IDF. Then, partition(u) and partition(v) are disjoint.
Proof
Since domination is an antisymmetric relation, either u does not dominate v or v does not dominate u.
Assume without loss of generality that u does not dominate v. This implies that there exists a path α from the entry vertex to v that does not contain u.
Consider any w in partition(v). By definition, there exists a path β from v to w none of whose internal vertices are in V_IDF. In particular, β does not contain u.
The concatenation of α and β is a path from the entry vertex to w that does not contain u. Hence, u does not dominate w. It follows from Lemma 1 that w is not an element of partition(u). The result follows. 2
Lemma 3 If v is in partition(u), then any predecessor w of v must be in partition+(u).
Proof
Recall that we assume that every vertex in the control-flow graph is reachable from the entry vertex. Consider any path α from the entry vertex to w and let z be the last vertex in α that belongs to V_IDF. This implies that w belongs to partition+(z). But this also implies that v is in partition(z), by definition of partition(z). Hence, z must be u (from Lemma 2), and w belongs to partition+(u). 2
Theorem 3 The Sparse Evaluation Graph can be produced from the original control-flow graph by applying
an appropriate sequence of T2 and T4 transformations.
Proof
We first show that for any vertex u in V IDF , the whole of partition(u) can be merged into u through a
sequence of T2 and T4 transformations.
Consider the subgraph induced by partition(u). Let C_1, ..., C_k denote the strongly connected components of this subgraph, in topological sort order. Reduce every C_i to a vertex w_i using a T4 transformation. Let H_u denote the resulting graph.
Now apply the T2 transformation to vertices w_1 to w_k in that order. The T2 transformation will be applicable to each w_i for the following reason. Note that Lemma 3 implies that any predecessor of w_i in the graph H_u must be either u or some w_j with j < i. Hence, once we apply the T2 transformation to vertices w_1 to w_{i−1}, the vertex w_i will have u as its unique predecessor. Hence, we can apply the T2 transformation to w_i as well and merge it with u.
It is clear that at the end of this process every vertex in partition(u) has been merged into u. If we repeat this process for every vertex u in V_IDF, clearly the resulting graph is the same as the Sparse Evaluation Graph. 2
Corollary 1 The Compact Evaluation Graph can be generated from the Sparse Evaluation Graph by applying
an appropriate sequence of T2 and T4 transformations.
Note that the application of either T2 or T4 can only make the graph smaller. (Both transformations
reduce the number of nodes and the number of edges in the graph.) Hence, the above corollary implies
that the representation produced by our algorithm is at least as sparse as the one produced by Choi et al.'s
algorithm.
Figure 2 shows the difference between the Compact Evaluation Graph and the Sparse Evaluation Graph for the example graph G presented in Figure 1. As can be seen, the Compact Evaluation Graph can be generated from the Sparse Evaluation Graph by applying the transformation T4({x, y}).
Figure 2: (i) The Compact Evaluation Graph, produced by our algorithm. (ii) The Sparse Evaluation Graph, produced by previous algorithms.
We can also establish results analogous to the above for the Quick Propagation Graph defined in [10].
The Quick Propagation Graph is based on the concept of single-entry single-exit regions. Every single-entry single-exit region R has a unique entry edge u → v, which is the only edge from a vertex outside R to a vertex inside R. Let us refer to the vertex u as the entry vertex of R. We can show, just as in the proof
of Theorem 3, that any single-entry single-exit region R consisting only of p-nodes can be merged with its
entry vertex using T2 and T4 transformations. Since the Quick Propagation Graph is constructed precisely
by merging single-entry single-exit regions consisting only of p-nodes with their entry vertices, we have:
Theorem 4 The Quick Propagation Graph can be produced from the original control-flow graph by applying
an appropriate sequence of T2 and T4 transformations.
6 On Minimum Size Equivalent Flow Graphs
We have now seen three different graphs, namely SEG, QPG, and CEG, that can all serve as "equivalent
flow graphs", i.e. help speed up dataflow analysis through sparse evaluation techniques. This raises the
question: what, exactly, is an equivalent flow graph? In particular, is it possible to construct an equivalent flow graph of minimum size efficiently? In this section, we attempt to address these questions by presenting
one possible definition of equivalent flow graphs.
Let S be a set of vertices in a graph G. Given a path α = [x_1, ..., x_n], we define the S-projection of α, denoted project_S(α), to be the subsequence [x_{i_1}, ..., x_{i_k}] of α consisting of only the vertices in S. Let σ be some arbitrary sequence of elements of S. We say that σ is an S-path between vertices x and y iff there exists a path α between vertices x and y whose S-projection is σ. We will use the notation x[s_1, ..., s_k]^S y to denote the fact that there is an S-path [s_1, ..., s_k] from x to y, usually omitting the superscript S as it will be obvious from the context.
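For intuition, the following tiny sketch (ours, not the paper's) computes the S-projection of a path and checks the S-path relation by brute force; it is only meant for very small hand-made examples.

```python
# Sketch (ours) of S-projection and a brute-force S-path test.

def project(path, S):
    return [v for v in path if v in S]

def is_S_path(succ, S, x, y, sigma, max_len=6):
    """Does some path from x to y (at most max_len edges) have
    S-projection sigma?  Exhaustive search, for tiny examples only."""
    def paths(u, k):
        yield [u]
        if k == 0:
            return
        for w in succ.get(u, ()):
            for rest in paths(w, k - 1):
                yield [u] + rest
    return any(p[-1] == y and project(p, S) == list(sigma)
               for p in paths(x, max_len))

g = {"r": {"a"}, "a": {"b"}, "b": {"m"}, "m": set()}
S = {"r", "m"}
print(project(["r", "a", "b", "m"], S))        # ['r', 'm']
print(is_S_path(g, S, "r", "b", ["r"]))        # True
```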
Definition 1 Let f be a function from the set of vertices of G to the set of vertices of another graph H. Let f(S) denote the set {f(x) | x ∈ S}. We say that the pair ⟨f, H⟩ preserves S-paths if
(ii) f is one-to-one with respect to S: for all x, y ∈ S, x ≠ y implies f(x) ≠ f(y), and
(iii) for any vertex y in graph G, [s_1, ..., s_k] is an S-path between entry(G) and y in G iff [f(s_1), ..., f(s_k)] is an f(S)-path between entry(H) and f(y) in H.
We say that a dataflow analysis problem instance over a graph G is S-restricted if the transfer function
associated with any vertex not in S is the identity function.
With the above definition, one can show that if ⟨f, H⟩ preserves S-paths, then for any S-restricted dataflow analysis problem instance over graph G, the MOP or MFP solution for G can be recovered from the MOP or MFP solution for H.
Theorem 5 Let G be a control-flow graph and S a set of vertices in G that includes the entry vertex of G. Let ⟨f, H⟩ preserve S-paths. Let F = (L, G, M, c) be an S-restricted dataflow analysis problem instance over G. Define F' to be (L, H, M', c), where M'(u) is defined to be M(x) if u = f(x) for some x ∈ S, and the identity function otherwise. Then, for every vertex u in G, MOP_F(u) = MOP_{F'}(f(u)) and MFP_F(u) = MFP_{F'}(f(u)).
Proof
Note that for any path α in G, M(α) = M(project_S(α)), since the transfer functions associated with vertices not in S are the identity function. Let r denote the entry vertex of G. Let S-Paths(r, u) denote the set of all S-paths from r to u. Obviously, MOP_F(u) is the meet of M(σ)(c) over all σ in S-Paths(r, u). Since ⟨f, H⟩ preserves S-paths, it follows trivially that MOP_F(u) = MOP_{F'}(f(u)).
Let us now consider the dataflow equations induced by F. Let u be a vertex not in S. Since the transfer function associated with u is the identity function, the equation associated with u reduces to x_u = ⊓ { x_v | v → u is an edge }.
Let us now eliminate from the right hand side of all equations any variable x_u associated with a vertex not in S. The elimination is slightly complicated if there are cycles involving vertices not in S. But if we have a cycle consisting only of vertices not in S, the equations for the vertices in the cycle together imply that x_u = x_v for any two vertices u and v in the cycle. Hence, mutually recursive equations induced by vertices not in S can be converted into self-recursive equations, and then the self-recursion can be eliminated. The elimination process finally transforms the equation associated with any vertex w into
x_w = M(w)( ⊓ { x_s | s ∈ S and s[s]w } )
Note that the meet is over the set of all vertices s in S that can reach w through a path containing no vertices in S other than its endpoints. Since we assume that every vertex in the graph is reachable from the entry vertex, it is clear that s[s]w iff entry(G)[σ·s]w for some S-path σ·s ending in s. It is clear that the dataflow equations of both F and F' are isomorphic when reduced to this simple form. Hence, the maximal fixed point solutions of F and F' are the same. 2
The above theorem shows that the conditions of Definition 1 are sufficient to ensure that the dataflow
solution for G can be recovered from the dataflow solution for H. It can also be argued that these conditions
are necessary, in fact, for a theorem like the above to hold. Obviously, we need condition (i) of Definition 1.
Further, for any two vertices in S, it is trivial to construct an S-restricted dataflow analysis problem instance over G such that the solutions at the two vertices are different. Hence, clearly condition (ii) is also necessary. Similarly, if for any vertex y in G the set of S-paths between entry(G) and y does not correspond to the set of f(S)-paths between entry(H) and f(y), it is again trivial to construct an S-restricted dataflow analysis problem instance over G such that the solution at y differs from the solution at f(y). Hence, we may define:
Given a graph G, and a set S of vertices in G, we say that ⟨f, H⟩ is an equivalent flow graph of G with respect to S iff ⟨f, H⟩ preserves S-paths.
6.1 An Algorithm For Constructing Vertex Minimal Equivalent Flow Graphs
We now present a simple algorithm for constructing an equivalent flow graph consisting of the minimum number of vertices possible. The algorithm runs in time O(|S| (|V| + |E|)), where |S| denotes the number of m-nodes in the graph, while |V| and |E| denote the number of vertices and edges in the graph.
We begin with some notation that will be helpful in relating algorithms based on collapsing multiple
vertices into a single vertex, such as our earlier algorithm based on T2 and T4 transformations, to the notion
of equivalent flow graphs introduced above. Let ≅ be an equivalence relation on the vertices of a graph G. Let V(G)/≅ denote the set of equivalence classes of ≅. Let [u]_≅ denote the equivalence class to which vertex u belongs. Let []_≅ denote the function from V(G) to V(G)/≅ that maps every vertex to its equivalence class. We may occasionally omit the subscript ≅ to reduce notational clutter. We now define the quotient graph obtained by collapsing every equivalence class into a single vertex. The following definition depends on the set S of m-nodes and is not the most obvious or natural definition, but the reason for the definition will become apparent soon.
The graph G /_S ≅ is the graph H whose vertex set and edge set are as below:
V(H) = V(G)/≅
E(H) = { [u] → [v] | u → v ∈ E(G), and (u ≇ v or v ∈ S) }
The definition of E(H) deserves some explanation. The basic idea is that an edge u → v of G will become the edge [u] → [v] in the collapsed graph. However, the above definition ensures that certain edges are eliminated completely. In particular, if u ≅ v, the corresponding edge [u] → [v] will be a self loop, and it is retained only if v ∈ S. Similarly, if v ∉ S, then an edge directed to v is projected only if u ≇ v. Otherwise, it is eliminated.
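The sketch below spells out our reading of this edge-retention rule (the surrounding definition is partly damaged in this copy, so the exact rule encoded here is an assumption); the code and names are ours.

```python
# Sketch (ours) of the quotient graph: an edge [u] -> [v] is kept only if
# u and v are in different classes or v is an m-node (v in S).

def quotient_graph(succ, S, cls):
    """`cls` maps every vertex to a representative of its equivalence
    class; returns the successor map of the quotient graph."""
    H = {cls[v]: set() for v in succ}
    for u, outs in succ.items():
        for v in outs:
            if cls[u] != cls[v] or v in S:
                H[cls[u]].add(cls[v])
    return H

g = {"m": {"a"}, "a": {"b"}, "b": {"a", "m"}}
S = {"m"}
cls = {"m": "m", "a": "ab", "b": "ab"}       # a and b are equivalent
print(quotient_graph(g, S, cls))             # {'m': {'ab'}, 'ab': {'m'}}
```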
Note that T2 and T4 transformations can be viewed as simple quotient graph constructions. In particular, T2(u, v) corresponds to the equivalence relation that places u and v into the same equivalence class and every other vertex in an equivalence class by itself. Similarly, T4(X) corresponds to an equivalence relation
in which all vertices in X are equivalent to each other, while every vertex not in X is in an equivalence
class by itself. A sequence of such transformations corresponds to an equivalence relation too, namely the
transitive closure of the union of the equivalence relations associated with the individual transformations in
the sequence. Hence, the compact evaluation graph itself is the quotient graph with respect to an appropriate
equivalence relation.
We now define a particular equivalence relation ≅_S induced by the set S. Define pred_S(u) to be the set of vertices s ∈ S such that there exists an S-path [s] from s to u. Note that for any s ∈ S, pred_S(s) = {s}. We say that u ≅_S v iff pred_S(u) = pred_S(v). Our algorithm identifies the equivalence classes of the above equivalence relation, and collapses each equivalence class to a single vertex. More formally, our algorithm produces the equivalent flow graph ⟨[]_≅S, G /_S ≅_S⟩.
We will first establish the minimality claim.
Theorem 6 If ⟨f, H⟩ preserves S-paths, then f(x) = f(y) implies x ≅_S y, for any two vertices x and y of G.
Proof
Let ⟨f, H⟩ preserve S-paths and assume that f(x) = f(y). For any w ∈ S we have
w ∈ pred_S(x) iff [w] is an S-path from w to x (definition of pred_S(x))
iff [f(w)] is an f(S)-path from f(w) to f(x) (since ⟨f, H⟩ preserves S-paths)
iff [f(w)] is an f(S)-path from f(w) to f(y) (since f(x) = f(y))
iff [w] is an S-path from w to y (since ⟨f, H⟩ preserves S-paths)
iff w ∈ pred_S(y) (definition of pred_S(y))
Hence pred_S(x) = pred_S(y) and x ≅_S y. 2
It follows from the above theorem that we cannot construct an equivalent flow graph with fewer vertices than ⟨[]_≅S, G /_S ≅_S⟩. We now just need to show that ⟨[]_≅S, G /_S ≅_S⟩ is an equivalent flow graph. The following theorem establishes a more general result, namely that for any equivalence relation ≅ that approximates ≅_S, the quotient graph with respect to ≅ is an equivalent flow graph.
Theorem 7 ⟨[]_≅, G /_S ≅⟩ preserves S-paths iff ≅ approximates ≅_S, that is, iff x ≅ y implies x ≅_S y for all vertices x and y of G.
Proof
Let f denote []_≅ and let H denote G /_S ≅. The forward implication of the theorem follows directly from Theorem 6. Consider the reverse implication. The first two conditions (of Definition 1) follow trivially, and we need to show that the third condition holds too.
Recall that x[s_1, ..., s_k]y denotes the fact that there is an S-path [s_1, ..., s_k] from x to y. Let r denote the entry vertex of G. We need to show that r[s_1, ..., s_k]y in G iff f(r)[f(s_1), ..., f(s_k)]f(y) in H.
We will establish the forward implication by induction on the length of the path from r to y. The base case is trivial since we have f(r)[f(r)]f(r). For the inductive step assume that we have a path P from r to y, consisting of n edges, whose S-projection is [s_1, ..., s_k].
First consider the case where y ∈ S, so that y = s_k. Let x → y be the last edge of path P, and consider the prefix of path P ending at vertex x. This is a path from r to x consisting of less than n edges whose S-projection is [s_1, ..., s_{k−1}]. It follows from the inductive hypothesis that f(r)[f(s_1), ..., f(s_{k−1})]f(x). Since y ∈ S, H includes the edge f(x) → f(y), and it follows that f(r)[f(s_1), ..., f(s_k)]f(y).
Now consider the remaining case, where y ∉ S. Note that the presence of path P implies that s_k[s_k]y; hence, if f(y) belongs to f(S), it must equal f(s_k). If x → y is the last edge of path P, we have r[s_1, ..., s_k]x via a path of n − 1 edges. It follows from the inductive hypothesis that f(r)[f(s_1), ..., f(s_k)]f(x). If f(x) = f(y), then f(r)[f(s_1), ..., f(s_k)]f(y) and we are done. Otherwise, H includes the edge f(x) → f(y), and it follows that f(r)[f(s_1), ..., f(s_k)]f(y).
We will establish the reverse implication by induction on the length of the path from f(r) to f(y) in H. Again, the base case is trivial since we have r[r]r. For the inductive step assume that we have a path P from f(r) to f(y) of n edges whose f(S)-projection is [f(s_1), ..., f(s_k)]. We will establish that r[s_1, ..., s_k]y in two steps.
Proof that r[s_1, ..., s_k]s_k. First consider the case that f(y) = f(s_k). The last edge of P must be of the form f(x) → f(s_k), where x → s_k is an edge in G. Since we have a path of less than n edges from f(r) to f(x) whose f(S)-projection is [f(s_1), ..., f(s_{k−1})], it follows from the inductive hypothesis that r[s_1, ..., s_{k−1}]x, which together with the edge x → s_k implies that r[s_1, ..., s_k]s_k.
Now consider the case that f(y) ≠ f(s_k). Then, we have a path of less than n edges from f(r) to f(s_k) whose f(S)-projection is [f(s_1), ..., f(s_k)]. It follows from the inductive hypothesis that r[s_1, ..., s_k]s_k.
Proof that s_k[s_k]y. Consider the suffix of path P from f(s_k) to f(y). This suffix can be written in the form f(u_1) → f(u_2) → ... → f(u_m), where u_1 = s_k and f(u_m) = f(y). It immediately follows that s_k[s_k]u_i for each i. In particular, this implies that there is a path Q from s_k to y in G whose S-projection is [s_k].
It follows that r[s_1, ..., s_k]y. 2
It is easy to verify that the equivalence relations corresponding to T2 and T4 transformations are approximations of ≅_S. Hence, the above theorem shows that ⟨[]_≅S, G /_S ≅_S⟩, the Compact Evaluation Graph, the Sparse Evaluation Graph, and the Quick Propagation Graph are all valid equivalent flow graphs.
Identifying the equivalence classes of ≅_S
We now present an efficient algorithm for identifying the equivalence classes of - =S . The algorithm is a
partitioning algorithm similar to Hopcroft's algorithm for minimizing finite automata.
We initially start out with a partition in which all nodes are in a single equivalence class. We then refine
the partition by considering every node in S, one by one. For every node m in S, we first perform a traversal
of the graph to identify Rm , the set of all nodes reachable from m without going through another node in
S. These are the nodes u such that pred_S(u) contains m. Then, every equivalence class X is refined into the two equivalence classes X ∩ R_m and X − R_m if both these sets are nonempty. This refinement ensures that for any two vertices x and y left in the same equivalence class, m ∈ pred_S(x) iff m ∈ pred_S(y). Hence, once the refinement has been done with respect to every vertex in S, pred_S(x) = pred_S(y) for any two vertices x and y left in the same equivalence class.
The refinement of the partition, for a given node m, can be done in linear time, if appropriate data
structures are used. (For example, by maintaining each equivalence class as a doubly linked list, so that an
element can be removed from an equivalence class in constant time.) Consequently, the final partition can be constructed in time O(|S| (|V| + |E|)).
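The refinement loop can be sketched directly from the description above. This is our own illustrative code (naive set-based bookkeeping, not the constant-time linked-list implementation the paper suggests), and the node names in the example are made up.

```python
# Sketch (ours) of the partition-refinement algorithm for the classes of
# the relation defined by pred_S.

def reachable_without_S(succ, S, m):
    """R_m: nodes reachable from m without passing through another node
    of S (i.e., the nodes u with m in pred_S(u))."""
    seen, stack = {m}, [m]
    while stack:
        u = stack.pop()
        for w in succ.get(u, ()):
            if w not in seen and w not in S:
                seen.add(w)
                stack.append(w)
    return seen

def refine(partition, R):
    out = []
    for block in partition:
        inside, outside = block & R, block - R
        out.extend(b for b in (inside, outside) if b)
    return out

def equivalence_classes(succ, S):
    partition = [set(succ)]
    for m in S:
        partition = refine(partition, reachable_without_S(succ, S, m))
    return partition

g = {"r": {"a", "m"}, "m": {"a", "b"}, "a": set(), "b": set()}
print(equivalence_classes(g, {"r", "m"}))
# e.g. [{'r'}, {'a'}, {'m', 'b'}]: a is reached from both r and m,
# while b (like m itself) is reached only from m.
```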
In practice, it might be more efficient to first construct the compact evaluation graph, using the linear
time algorithm, and to then apply the above quadratic algorithm to the smaller compact evaluation graph.
This algorithm is similar in spirit to the work of Duesterwald et al. [8], who present both an O(|V| log |V|) algorithm and an O(|V|^2 log |V|) partitioning-based algorithm for constructing equivalent graphs. (This complexity measure is based on the assumption that the number of edges incident on a vertex is bounded by a constant.) Both Duesterwald et al.'s algorithm and Hopcroft's algorithm utilize the edges of the graph to refine partitions, while our algorithm uses paths in the graph consisting only of p-nodes to do the refinement
step. This guarantees that the graph produced by our algorithm has the minimum number of vertices
possible, which is not the case with Duesterwald et al.'s algorithm.
6.2 Constructing Edge Minimal Equivalent Flow Graphs is NP-hard
We now show that the problem of constructing the smallest equivalent flow graph becomes much more
difficult if one counts the number of edges in the graph as well. Define the size of hf; Hi to be the sum of
the number of nodes and the number of edges in H.
Theorem 8 The problem of finding an equivalent flow graph of minimum size is NP-hard.
Proof
(Reduction from the set-covering problem.) The set-covering problem ([3]) is the following: Given a finite set X and a family F of subsets of X such that X = ∪_{S ∈ F} S, find a minimum-size subset C of F such that X = ∪_{S ∈ C} S. The set-covering
problem is known to be NP-hard. We now show that given an instance (X; F) of the set-covering problem,
we can construct in polynomial time a graph G such that a minimum-size cover for (X; F) can be generated
(in polynomial time) from a minimum-size equivalent flow graph for G. We assume that the input instance
is such that X 62 F . Otherwise, fXg is trivially the minimum-size cover for (X; F).
The graph G consists of an m-node r (the entry vertex), an m-node m_x for every x ∈ X, a p-node p_S for every S ∈ F, and a p-node exit. The graph contains an edge from r to every m_x, an edge from m_x to p_S whenever x ∈ S, and an edge from every p_S to exit.
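For concreteness, here is a small sketch (ours, not the paper's) of this construction; the "edge from m_x to p_S whenever x ∈ S" rule is our reading of the reduction, and the node encoding via tuples is an arbitrary choice.

```python
# Sketch (ours) of the graph built from a set-cover instance (X, F).

def cover_instance_to_graph(X, F):
    """X: iterable of elements; F: dict mapping set name -> set of elements."""
    succ = {"r": {("m", x) for x in X}, "exit": set()}
    for x in X:
        succ[("m", x)] = {("p", name) for name, members in F.items()
                          if x in members}
    for name in F:
        succ[("p", name)] = {"exit"}
    return succ

print(cover_instance_to_graph({1, 2}, {"S1": {1}, "S2": {1, 2}}))
```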
Let ⟨f, H⟩ be a minimum size equivalent flow graph for G. Assume that every predecessor of f(exit) in H is some vertex of the form f(p_S). If this is not the case, then H can be trivially modified as follows, without increasing its size, to ensure this. Consider the vertex f(exit) in H. Let w be any predecessor of f(exit) in H. We claim that there must be some f(p_{S_w}) reachable from w.
Claim: There exists some f(p_{S_w}) reachable from w.
Proof: Consider the following cases:
Case 1: w is f(r). This is not possible, since ⟨f, H⟩ preserves S-paths.
Case 2: w is f(m_x), for some x. Clearly, x must be in some set S ∈ F. Hence, there must exist an edge from m_x to p_S in G. Since ⟨f, H⟩ preserves S-paths, there must exist some path from f(m_x) to f(p_S) in H.
Case 3: w is f(p_S), for some S. The result trivially follows.
Case 4: w is f(exit). This is not possible, since we can drop the edge from f(exit) to itself to get a smaller equivalent flow graph.
Case 5: w is not f(u), for any u. If no f(p_S) is reachable from w, then we could simply merge w with f(exit) to generate a smaller equivalent flow graph. Hence, there must exist a f(p_S) reachable from w.
Let us replace every predecessor w of f(exit) by f(p_{S_w}). This gives a minimum size equivalent flow graph in which all predecessors of f(exit) are of the form f(p_S).
It can be shown that the set {S ∈ F | f(p_S) → f(exit) is an edge in H} is a minimum size cover for (X, F). Clearly, the conditions for S-path preservation imply that this set must be a cover for (X, F). If it is not a minimum size cover, let C ⊆ F be a minimum size cover for (X, F). Replace the predecessors of f(exit) by the set {f(p_S) | S ∈ C}. This will give us a smaller equivalent flow graph, contradicting our assumption that ⟨f, H⟩ is a minimum size equivalent flow graph. 2
6.3 Discussion
Let us look at the results we have seen so far from a slightly different perspective. We have seen that
cycles involving p-nodes are irrelevant and may be eliminated (e.g., via T4 transformations). Once such
cycles are eliminated, the problem of constructing equivalent flow graphs becomes similar to a well-known
problem: minimizing the computation required to evaluate a set of expressions over a set of variables. The
one additional factor we need to consider is that the only operator allowed in the expressions is the meet
operator, which is commutative, associative, and idempotent. For example, the problem may be viewed as
that of minimizing a boolean circuit consisting only of, say, the boolean-and operator.
Our algorithm for constructing the vertex minimal equivalent flow graph essentially eliminates common
subexpressions. From the point of view of performing dataflow analysis, this achieves the "best possible"
space reduction one might hope for (since iterative algorithms typically maintain one "solution" for every
vertex in the graph). This also reduces the number of "meet operations" the iterative algorithm needs to
perform in order to compute the final solution, but not to the least number necessary. Eliminating further
"unnecessary" edges from the graph can reduce the number of meet operations performed by the analysis
algorithm, though it will not provide further space savings.
This helps to place the above NP-hardness result in perspective, indicating what can be achieved efficiently
and what cannot.
There is yet another question concerning the significance of the above NP-hardness result. Our feeling is
that it may be simpler to generate the minimum size equivalent flow graph for control-flow graphs generated
from structured programs than to do it for graphs generated from unstructured programs and that the
NP-hardness result might not hold if we restrict attention to structured control-flow graphs. Consider the
example in Figure 3(i). As before, m-nodes are shown as bold circles. This graph is already in normal form
with respect to both T2 and T4 transformations. Hence, the compact evaluation graph of this graph is itself.
However, a smaller equivalent flow graph exists for this input graph, as shown in Figure 3(ii).
Note, however, that the graph in Figure 3(i) cannot be generated using structured programming constructs.
In contrast, consider the graph shown in Figure 3(iii). This graph can be generated using only structured
constructs such as CASE statements and If-Then-Else statements. The nodes e, f , and g of this graph have
the same solution as the corresponding nodes in Figure 3(i). In this case, however, our linear time T2-T4
based algorithm will be able to reduce this graph to the normal form shown in Figure 3(ii) !
An interesting question that arises is whether it is simpler to generate the minimum size equivalent flow
graph for control-flow graphs of structured programs. In particular, does the NP-hardness result hold if we
restrict attention to structured control-flow graphs?
Figure 3: An example illustrating the kind of factoring that our algorithm does not attempt to achieve.
7 Partially Equivalent Flow Graphs
Note that the equivalent flow graphs we have considered so far permit the dataflow solution for any vertex in the original graph to be recovered from the sparse graph. In general, we may not require the dataflow solution at every vertex. For example, if we are solving the reaching definitions problem for a variable x,
the solution will usually be necessary only at nodes that contain a use of the variable x. One can use this
fact to construct graphs that are even more compact than the equivalent flow graphs. We will refer to such
generalized graphs that allow us to recover the dataflow solution for a specified set of vertices in the original
graph as Partially Equivalent Flow Graphs.
Let us refer to a node where the dataflow solution is required as an r-node and to a node where the dataflow solution is not required as a u-node. Let us refer to a node that is both a p-node and a u-node as an up-node.
We now define some transformations that can be used in the construction of a Partially Equivalent Flow
Graph.
More Transformations
Transformation T5: The T5 transformation is applicable to a node u if (i) u is an up-node, and (ii) u has a
unique successor. The T5 transformation is structurally the same as a T2 transformation. It simply merges
the node u, to which it is applicable, with u's unique successor. Let v denote u's successor. The graph T5(u, v)(G) is obtained by removing the node u and the edge u → v from the graph G and by replacing every incoming edge w → u of u by a corresponding edge w → v.
Note that the dataflow solution for node u in graph G cannot be, in general, obtained from the dataflow
solution for any node in T 5(u; v)(G). However, this is okay since the dataflow solution at u is not required.
Transformation T6: The T6 transformation is applicable to any set of u-nodes that has no successor. (A set X of nodes is said to have no successor if there exists no edge from a node in X to a node outside X.) If X is a set of u-nodes that has no successor in G, then the graph T6(X)(G) is obtained from G by
deleting all nodes in X as well as any edges incident on them.
The T6 transformation is rather simple: it says that a node can be deleted if that node and all nodes reachable from that node are u-nodes. This is similar to the pruning of dead φ-nodes discussed in [19, 2] but more general.
We now outline a transformation that essentially captures an optimization described by Choi et al [2]. This
optimization, however, requires us to relax our earlier condition that the Partially Equivalent Flow Graph is
to be constructed knowing nothing about the transfer functions associated with the m-nodes. Assume that
we further know whether the transfer function associated with a m-node is a constant-valued function or
not. (For example, in the problem of identifying the reaching definitions of a variable x, every m-node has
a constant-valued transfer function, since it generates the single definition of x contained in that node and
kills all other definitions of x.) Let us refer to a m-node as a c-node if the transfer function associated with
that node is a constant-valued function.
Transformation T7: The T7 transformation is applicable to any c-node that has one or more incoming
edges, and the transformation simply deletes these incoming edges.
The T7 transformation may not preserve the meet-over-all-paths solution since it creates vertices unreachable
from the entry vertex. However, it does preserve the maximal fixed point solution.
Theorem 9 The T2, T4, T5, T6, and T7 transformations form a finite Church-Rosser system.
Proof
Tedious, but straightforward. 2
The Algorithm
Luckily, the transformations do not significantly interact with each other. Let us denote the normal form of a graph G with respect to the set of all T2, T4, T5, T6, and T7 transformations by (T2+T4+T5+T6+T7)*(G). Let T5*(G) denote the normal form of G with respect to the set of all T5 transformations. T6*(G), T7*(G), and so on, are similarly defined. We can show that:
Theorem 10 (T2+T4+T5+T6+T7)*(G) = T5*(T6*(T7*(T2*(T4*(G))))).
Proof
We will sketch the outline of a proof and omit details.
Assume that a graph is in normal form with respect to T4 transformations. In other words, it does not have any nontrivial (that is, of size greater than 1) strongly connected set of p-nodes. Clearly, the application of
a T5 transformation will not create any nontrivial strongly connected set of p-nodes. Hence, the graph
will continue to be in normal form with respect to T4 transformation even after the application of a T5
transformation. Similarly, the graph will continue to be in normal form with respect to T4 transformations
even after the application of a T6 or a T7 or a T2 transformation.
Now assume that a graph is in normal form with respect to T2 transformations. One can show that the
graph will continue to be in normal form with respect to T2 transformations even after the application of a
T5 or T6 or T7 transformation.
Similarly, a graph in normal form with respect to T7 transformations will continue to be so even after the
application of a T6 or T5 transformation. And a graph in normal form with respect to T6 transformations
will continue to be so after the application of a T5 transformation.
This establishes that T5 (T6 (T7 (T2 (T4 (G))))) is in normal form with respect to all the transforma-
tions. 2
We now present our algorithm for constructing a partially equivalent flow graph for a given graph G.
Step 1. Compute G_1 = T4*(G) (using the algorithm outlined earlier).
Step 2. Compute G_2 = T2*(G_1) (using the algorithm outlined earlier).
Step 3. Compute G_3 = T7*(G_2) by simply deleting all incoming edges of every c-node in G_2.
Step 4. Compute G_4 = T6*(G_3): perform a simple backward graph traversal from every r-node to identify the set X of nodes from which an r-node is reachable. Delete all other nodes and the edges incident upon them.
Step 5. Compute G_5 = T5*(G_4): Let w_1, ..., w_k be the set of up-nodes of G_4 in topological sort order. Since G_4 is in normal form with respect to T4 transformations, it cannot have any cycle of p-nodes, and hence a topological sort ordering of the up-nodes must exist. Visit vertices w_k to w_1, in that order, applying the T5 transformation to any w_i that has only one successor.
The graph G 5 can be shown to be in normal form with respect to all the transformations described earlier.
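Steps 3 and 4 of this pipeline are particularly easy to make concrete; the following sketch is ours (not the paper's), covers only those two steps, and uses made-up node names.

```python
# Sketch (ours) of Steps 3 and 4: T7 drops incoming edges of c-nodes,
# and the Step-4 traversal keeps only nodes from which an r-node is
# reachable (the T6 pruning).

def apply_T7(succ, c_nodes):
    """Delete every incoming edge of every c-node."""
    return {u: {v for v in outs if v not in c_nodes}
            for u, outs in succ.items()}

def apply_T6(succ, r_nodes):
    """Keep only vertices from which some r-node is reachable."""
    rev = {u: set() for u in succ}
    for u, outs in succ.items():
        for v in outs:
            rev[v].add(u)
    keep, stack = set(r_nodes), list(r_nodes)
    while stack:
        v = stack.pop()
        for p in rev[v]:
            if p not in keep:
                keep.add(p)
                stack.append(p)
    return {u: {v for v in outs if v in keep}
            for u, outs in succ.items() if u in keep}

g = {"r": {"a"}, "a": {"b", "c"}, "b": set(), "c": set()}
g = apply_T7(g, c_nodes={"c"})          # drop the edge a -> c
g = apply_T6(g, r_nodes={"b"})          # c no longer reaches an r-node
print(g)                                # {'r': {'a'}, 'a': {'b'}, 'b': set()}
```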
Example
Figure
4 illustrates the construction of a partially equivalent flow graph using our algorithm. Assume that
all applicable T2 and T4 transformations have been applied to the initial graph using the algorithm outlined
earlier, and that the resulting graph is as shown in Figure 4(i). Assume that we are interested in the dataflow
solution only at nodes e and i (shown as square vertices in the figure). All the remaining nodes (shown as
circles) are u-nodes. We also assume that all the m-nodes have a constant transfer function.
The next step in computing the partially equivalent flow graph is applying all possible T7 transformations.
This produces the graph shown in Figure 4(ii). We then apply all feasible T6 transformations, which produces
the graph in Figure 4(iii).
We then examine all remaining up-nodes in reverse topological sort order, applying the T5 transformations
where possible. It turns out that the T5 transformation is applicable to both f and d, and applying these
transformations produces the normal form in Figure 4(v).
Figure 4: An example illustrating our algorithm for constructing a partially equivalent flow graph.
8 Interprocedural Extensions
We have discussed sparse evaluation as it applies to intraprocedural analysis (or analysis of single procedure
programs). However, the ideas outlined in this paper can be easily extended to the case of interprocedural
analysis. Assume that the input program consists of a set of procedures, each with its own control-flow
graph. Some of the vertices in the graphs may correspond to calls to other procedures. We assume that, as part of the input, all the non-call vertices in each control-flow graph have been annotated as being an m-node or a p-node. Vertices representing procedure calls, however, are not annotated as part of the input.
Clearly, any procedure all of whose nodes are p-nodes can be "eliminated", and any call to this procedure
may be marked as being a p-node. Iterative application of this idea, in conjunction with our algorithm
for the intraprocedural case, suffices to construct the sparse evaluation representation of multi-procedure
programs, in the absence of recursion.
Recursion complicates issues only slightly. Define a procedure P to be a p-procedure if all the nodes
in procedure P and all the nodes in any procedure that may be directly or transitively called from P are
p-nodes. Define P to be an m-procedure otherwise.
The set of all m-procedures in the program can be identified in a simple linear time traversal of the
call graph. Initially mark all procedures containing an m-node as being an m-procedure. Then, traverse the call graph in reverse, identifying all procedures that may call an m-procedure, and marking them as being m-procedures as well.
Once this is done, we may mark a call node as being an m-node if it is a call to an m-procedure and as a p-node
otherwise. Then, we can construct the sparse evaluation representation of each procedure independently,
using our intraprocedural algorithm.
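The marking step described above is simple enough to sketch directly; this is our own illustration, not the paper's code, and the procedure names are invented.

```python
# Sketch (ours) of the linear-time marking: a procedure is an m-procedure
# if it contains an m-node or (transitively) calls one.

def mark_m_procedures(calls, has_m_node):
    """`calls` maps procedure -> set of callees;
    `has_m_node` maps procedure -> bool."""
    callers = {p: set() for p in calls}
    for p, callees in calls.items():
        for q in callees:
            callers[q].add(p)
    marked = {p for p, flag in has_m_node.items() if flag}
    stack = list(marked)
    while stack:                      # reverse traversal of the call graph
        q = stack.pop()
        for p in callers[q]:
            if p not in marked:
                marked.add(p)
                stack.append(p)
    return marked

calls = {"main": {"f", "g"}, "f": {"h"}, "g": set(), "h": set()}
has_m = {"main": False, "f": False, "g": False, "h": True}
print(mark_m_procedures(calls, has_m))    # g is the only p-procedure
```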
9 Related Work
The precursor to sparse evaluation forms was the Static Single Assignment form [5, 6], which was used to
solve various analysis problems, such as constant propagation and redundancy elimination, more efficiently.
Choi et al. [2] generalized the idea and defined the Sparse Evaluation Graph. Cytron and Ferrante [4],
Sreedhar and Gao [16], and Pingali and Bilardi [14, 15] improve upon the efficiency of the original Choi et
al. algorithm for constructing the Sparse Evaluation Graph. (We will discuss the relative efficiencies of the
various algorithms in detail soon.) Johnson et al. [11, 10] define a different equivalent flow graph called the
Quick Propagation Graph (QPG) and present a linear time algorithm for constructing it. Duesterwald et
al. [8] show how a congruence partitioning technique can be used to construct an equivalent flow graph.
We now briefly compare our work with these different algorithms and representations in terms of the
following three attributes.
Simplicity
Our work was originally motivated by a desire for a simpler algorithm for constructing Sparse Evaluation
Graphs, one that did not require the dominator tree, which had been a standard prerequisite for most
previous algorithms for constructing Sparse Evaluation Graphs. The Johnson et al. algorithm [10] does not
require a dominator tree, but it has its own prerequisites, namely the identification of single-entry single-exit
regions and the construction of a Program Structure Tree. Subsequent to our work, we became aware of
an O(n log n) algorithm by Duesterwald et al. [8] for generating sparse evaluation forms. This algorithm is
based on congruence partitioning and does not require the dominator tree either.
We believe that our algorithm is simpler to understand and implement than these previous algorithms
for constructing sparse representations. (Of course, the dominator tree and the Program Structure Tree do
have other applications, and if they are being built anyway, then our algorithm does not offer any particular
advantage in terms of implementation simplicity.)
Compactness
We have shown that the Compact Evaluation Graph is, in general, smaller than the Sparse Evaluation Graph
and the Quick Propagation Graph. Consequently, dataflow analysis techniques can benefit even more by
using this smaller representation.
We have also presented a quadratic algorithm for constructing the equivalent flow graph with the smallest
number of vertices possible. This may be of interest for complicated and expensive analyses, such as pointer
analysis, where it may be worth spending the extra time to reduce the number of vertices in the graph.
Duesterwald et al. present an O(|V| log |V|) algorithm for constructing an equivalent flow graph, which we believe is exactly the Sparse Evaluation Graph. They also describe another O(|V| log |V|) algorithm that can lead to further reductions in the size of the graph. They then suggest iteratively applying both these algorithms until the graph can be reduced no more, leading to an O(|V|^2 log |V|) algorithm. Our algorithm for constructing the equivalent flow graph with the minimal number of vertices is similar in spirit to this but constructs an even smaller graph more efficiently.
Efficiency
Comparing the efficiency of the various algorithms for constructing the SSA form and the different equivalent
flow graphs can be somewhat tricky. In particular, under some situations, the worst-case complexity measure
does not tell us the full story.
If we are interested in the problem of constructing a single equivalent flow graph from a given graph, then
comparing these algorithms is easy. The linear algorithms due to Pingali and Bilardi, Sreedhar and Gao,
Johnson and Pingali, as well as our own linear time algorithm are all asymptotically optimal. One could
argue that our algorithm has a smaller constant factor because of its simplicity.
Often, however, we may be interested in constructing multiple equivalent flow graphs from a given control-flow
graph (each with respect to a different set of m-nodes). Our previous observations remain more or less
valid even in this case. For each equivalent flow graph desired, one has to spend at least time proportional to |V| building the map from the vertices of the original graph to the vertices of the equivalent flow graph. All the linear time algorithms should perform comparably (up to constant factors) for typical control-flow graphs, where |E| is O(|V|).
Now, assume that we are interested in constructing multiple partially equivalent flow graphs from a given
control-flow graph. The problem of constructing the SSA form falls into this category - the true generalization
of the SSA form appears to be the partially equivalent flow graph, not the equivalent flow graph.
In particular, every subproblem instance specifies a set S of m-nodes as well as a set R of nodes where the
dataflow solution is required. For each such subproblem, we need to construct a partially equivalent flow
graph, and a mapping from every vertex in R to a vertex in the equivalent flow graph. Our algorithm, as
well as Sreedhar and Gao's algorithm, will take time linear in the size of the graph for the construction of each partially equivalent flow graph. The original SSA algorithm [5, 6], in contrast, constructs all the partially equivalent
flow graphs in parallel, sharing the linear time graph traversal overhead. For control flow graphs that arise
in practice, this algorithm usually constructs each partially equivalent flow graph in "sub linear" time, even
though, in the worst case, this algorithm can take quadratic time to construct each partially equivalent flow
graph. Hence, many believe that, in practice, this algorithm will be faster than the algorithms that always
take linear time for every partially equivalent flow graph. (See [17] for empirical evidence supporting this.)
Fortunately, the work of Pingali and Bilardi [14, 15] shows how the original SSA algorithm can be adapted so
that we have the best of both worlds, namely a linear worst-case complexity as well a "sub linear" behavior
for graphs that arise in practice.
When is this finer distinction between the different linear time algorithms likely to be significant? One
could argue that this difference is unlikely to be very significant for complex analysis problems, where the
cost of the analysis is likely to dominate the cost of constructing the equivalent flow graph. Problems such
as the reaching definitions problem, however, are simple and have linear time solutions. In this case, the cost
of constructing the equivalent flow graph may be a significant fraction of the analysis time, and the above
distinction could be significant. On the other hand, some [7] argue that constructing equivalent flow graphs
is not the fastest way to solve such simple analysis problems anyway.
10 Conclusions
Previous work has shown equivalent flow graphs to be a useful representation, both for improving the
performance of dataflow analysis algorithms as well as for representing dataflow information compactly.
This paper presents a linear time algorithm for computing an equivalent flow graph that is smaller than
previously proposed equivalent flow graphs. We have presented a quadratic algorithm for constructing an equivalent flow graph consisting of the minimum number of vertices. We have also shown that the problem of constructing an equivalent flow graph consisting of the minimum number of vertices and edges is NP-hard.
We have shown how the concept of an equivalent flow graph can be generalized to that of a partially
equivalent flow graph and have extended our algorithm to generate this more compact representation. For
simple partitioned problems, such as the reaching definitions problem, the partially equivalent flow graph
directly yields the desired solution, in "factored form".
The results presented here give rise to several interesting questions which appear worth pursuing. How
significant is the NP-hardness result in practice? Can minimum size equivalent flow graphs be constructed
efficiently for special classes of graphs, such as those that can be generated by structured programming
constructs? Are there other graph transformations worth incorporating into our framework?
--R
Efficient flow-sensitive interprocedural computation of pointer-induced aliases and side effects
Automatic construction of sparse data flow evaluation graphs.
Introduction to Algorithms.
Efficiently computing φ-nodes on-the-fly
An efficient method for computing static single assignment form.
Efficiently computing static single assignment form and control dependence graph.
How to analyze large programs efficiently and informatively.
Reducing the cost of data flow analysis by congruence partitioning.
A fast and usually linear algorithm for global flow analysis.
The program tree structure: Computing control regions in linear time.
A unified approach to global program optimization.
A safe approximate algorithm for interprocedural pointer aliasing.
Apt: A data structure for optimal control dependence computation
Optimal control dependence computation and the Roman chariots problem
A linear time algorithm for placing φ-nodes
Efficient Program Analysis Using DJ Graphs.
Fast algorithms for the elimination of common subexpressions.
Detecting program components with equivalent behaviors
--TR
An efficient method of computing static single assignment form
Introduction to algorithms
Automatic construction of sparse data flow evaluation graphs
Efficiently computing static single assignment form and the control dependence graph
How to analyze large programs efficiently and informatively
A safe approximate algorithm for interprocedural aliasing
Dependence-based program analysis
Efficient flow-sensitive interprocedural computation of pointer-induced aliases and side effects
The program structure tree
A linear time algorithm for placing φ-nodes
Optimizing sparse representations for dataflow analysis
Sparse functional stores for imperative programs
APT
Efficient program analysis using DJ graphs
Optimal control dependence computation and the Roman chariots problem
Toward a complete transformational toolkit for compilers
A Fast and Usually Linear Algorithm for Global Flow Analysis
A unified approach to global program optimization
Efficiently Computing phi-Nodes On-The-Fly (Extended Abstract)
Reducing the Cost of Data Flow Analysis By Congruence Partitioning
--CTR
Stephen Fink , Eran Yahav , Nurit Dor , G. Ramalingam , Emmanuel Geay, Effective typestate verification in the presence of aliasing, Proceedings of the 2006 international symposium on Software testing and analysis, July 17-20, 2006, Portland, Maine, USA | equivalent flow graphs;sparse evaluation graphs;quick propagation graphs;graph transformations;dataflow analysis;static single assignment forms;partially equivalent flow graphs |
567200 | Logical optimality of groundness analysis. | In the context of the abstract interpretation theory, we study the relations among various abstract domains for groundness analysis of the logic programs. We reconstruct the well-known domain as a logical domain in a fully automatic way and we prove that it is the best abstract domain which can be set up from the property of groundness by applying logic operators only. We propose a new notion of optimality which precisely captures the relation between and its natural concrete domain. This notion enables us to discriminate between the various abstract domains for groundness analysis from a computational point of view and to compare their relative precision. Finally, we propose a new domain for groundness analysis which has the advantage of being independent from the specific program and we show it optimality. Copyright 2002 Elsevier Science B.V. | Introduction
In the logic programming field, groundness is probably the most important instance
of static analysis. Many domains have been proposed in order to study
groundness of (pure) logic programs, from the very simple domain G by Jones
and Søndergaard [13], to more complex ones, like Def and Pos.
The latter is the most widely used, since it is able to characterize both pure
groundness, i.e., if a variable is instantiated to a ground term, and groundness
relations between different variables, i.e., whether the groundness of a variable
depends on the groundness of other variables. In this paper, we show a different
way of building the abstract domain Pos, which directly comes from the definition
of groundness. From our construction, we derive many useful properties,
such as a normal form for its elements and a result of optimality. Moreover, we
answer some open questions such as "why Pos is considered optimal" from a
computational point of view.
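As a small executable companion to the standard construction of Pos that is revisited in the next subsection, the following sketch (ours, not the paper's) implements the "positive formula" test: a formula is positive when it evaluates to true under the assignment that maps every variable to true. The tuple encoding of formulas is an assumption of this sketch, not notation from the paper.

```python
# Sketch (ours): test positivity of a propositional formula.
# Formulas are nested tuples: ("var", x), ("not", f),
# ("and", f, g), ("or", f, g), ("->", f, g), ("<->", f, g).

def evaluate(formula, env):
    tag = formula[0]
    if tag == "var":
        return env[formula[1]]
    if tag == "not":
        return not evaluate(formula[1], env)
    a = evaluate(formula[1], env)
    b = evaluate(formula[2], env)
    return {"and": a and b, "or": a or b,
            "->": (not a) or b, "<->": a == b}[tag]

def variables(formula):
    tag = formula[0]
    if tag == "var":
        return {formula[1]}
    if tag == "not":
        return variables(formula[1])
    return variables(formula[1]) | variables(formula[2])

def is_positive(formula):
    env = {v: True for v in variables(formula)}
    return evaluate(formula, env)

print(is_positive(("<->", ("var", "x"), ("var", "y"))))   # True: x <-> y is in Pos
print(is_positive(("not", ("var", "x"))))                 # False: not positive
```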
1.1 Motivations
Pos is certainly a well studied domain for groundness analysis of logic programs
[13,15,4,2]. In standard literature, the domain Pos is built in 3 steps: First
consider the set of (classic) propositional formulas built from a finite set V ar and
connectives "; and ); second select only the positive formulas (which are true
when all variables are set to true); third quotient w.r.t. classic logical equivalence.
The domain so obtained is then related to the concrete one (sets of substitutions
closed by instantiation) through a suitable concretization function, explicitly
proving that it induces a Galois insertion. This method of constructing domains
suffers from many drawbacks. The most important one is that the domain has to
be "invented" in some way (a procedure which is usually not formally related to
the property we analyze) and then has to be explicitly proved that the domain
so created is an abstraction of the concrete one and is actually useful for the
analysis. As logic programs compute substitutions and groundness is a property
closed by instantiation, the most natural choice for the concrete domain is sets
of substitutions closed by instantiation, denoted by # (Sub).
Propositional formulas used in the construction of Pos do not reflect the
logic in the concrete domain. In order to understand this apparent asymmetry
between Pos and its natural concrete domain, we just need to make the following
observations.
- From an algebraic point of view, Pos is a boolean algebra (with connectives
therefore we can not inherit Pos algebraic
properties from the concrete domain.
- When we try to compare the logical operations on Pos to the corresponding
ones on # (Sub), we find that only the meet (") on Pos come from the
meet on # (Sub) (set intersection). In fact, the join () of Pos is not the
restriction of the concrete join (set union). For instance, as proved in [8],
the concretization of the formula x (x , y) is not the union of the concretization
of x and x , y. Moreover, in the concrete domain, we can not
have a notion of (classic) implication, since it is not a boolean algebra. So
there is no way to inherit the implication ) from a corresponding operation
of # (Sub).
- Finally, the last step in the construction of Pos is the quotient w.r.t. classic
logic equivalence. For the same motivation in the previous point, it is impossible
to define such an equivalence on # (Sub) (otherwise it would be a
boolean algebra). Therefore, also this step in the construction of Pos does
not come from the properties of the concrete domain.
1.2 The intuition behind our reconstruction of Pos
The main problem of the previous construction is that # (Sub) is not a boolean
algebra. In particular, it does not allow us to define an operation of classic
implication. Instead, # (Sub) is rich enough to allow us to define a notion of
intuitionistic implication (or relative pseudo-complement). The intuitionistic implication
is a generalization of the classic one which leads to a weaker notion of
complement called pseudo-complement. Moreover, intuitionistic implication is
independent from the join operation , i.e., a does not hold. Thus
becomes an algebra with three operations ", [ and ! which correspond
precisely to the ", and ! of intuitionistic logic.
In this paper we show that Pos can be constructed by using only the definition
of groundness. The simplest domain defining the property of groundness
is the least abstract domain containing V ar, which is by Jones and
Sndergaard [13], where each variable v denotes the set of substitutions which
ground v. In the first part of the paper, we show that Pos is exactly the least abstract
domain which contains all the (double) intuitionistic implications between
elements of G
\Gamma!
\Gamma! G). This result generalizes a similar one on
in [11] which proves that Def is the least abstract domain which contains
all the intuitionistic implications between elements of G, i.e.,
\Gamma! G.
Our formalization of Pos enjoys the following properties.
- Pos is built by using only the definition of groundness (the domain G) and the
properties of the concrete domain. We need no more to use boolean algebra,
positive formulas, nor to quotient the resulting set. The construction of the
abstract domains G; Def; Pos now follows a unique, precise logic: The logic
structure of the concrete domain. Moreover, we do not need anymore to
prove that these domains are indeed abstractions of the concrete one, since
it follows by construction. Also the relations among all the domains are now
automatically derivable.
- The operations which characterize Pos are now exactly the same of the
concrete one: " and ! in Pos are the restrictions of " and ! in # (Sub)
and no other operation is considered in Pos.
We immediately obtain a theorem of representation for Pos which states
that every element of Pos is an implication between two elements of
by using the characterization of every element of Pos is an implication
of implications between elements of G, independently from the cardinality of
ar.
- This result allows us to answer to some open questions on Pos. The join on
Pos differs from the join on # (Sub) but, in some cases, they coincide.
W hy Pos contains exactly those concrete joins?
Our representation theorem states that the join on Pos is precise if and only
if it can be written as an intuitionistic implication.
Since Pos is now a sub-algebra of # (Sub) w.r.t. the " and ! operations,
all the properties of Pos are directly derivable from the properties of the
concrete domain. In particular, since # (Sub) is a model of intuitionistic
propositional logic, it follows that also Pos is. Therefore we gain from logic
an axiomatization for Pos.
In the second part of the paper, we use our formalization of Pos in order to
understand why Pos can be considered a "good" domain. To this end, we try
to refine Pos. We wonder which is the domain which contains the implications
between formulas in Pos and find out an important closure property of Pos, i.e.,
that Pos \Psi
This implies that Pos can not be further refined w.r.t.
intuitionistic implication. Therefore Pos is exactly the least abstract domain
closed w.r.t. " and !.
We then consider the disjunctive completion of Pos,
(Pos), which includes
all unions of formulas in Pos. [8] proved that Pos is strictly contained in
(Pos).
We try to refine this result by allowing implications between disjunctive formulas
and prove that it can not be further refined since
\Gamma!
b (Pos).
1.3 Related work
Many works have been devoted to the study of groundness analysis. The first
ones concentrated much on the definition of groundness and basic properties
[13,7], while the last ones studied different characterizations of various abstract
domains. [15] proposed propositional formulas to represent groundness relations.
Many authors followed this approach [4,2] and contributed to develop and study
the domains others focused on the abstract operations or
slightly different characterizations [16,14].
All these authors share the same They construct abstract domains
independently from the property to be analyzed (i.e., from the semantics or concrete
domain) and then prove some properties of the abstraction. Their attention
is focused entirely on the representation of formulas in the abstract domains. This
forces to work always up to isomorphism.
Our idea is to concentrate exclusively on the property of groundness. We
re-construct the domain Pos in a systematic way and show that it is possible
to avoid ad hoc characterizations in the construction process of new domains.
Our work is based on the Heyting completion refinement operator [11]. A first
example in this direction has been presented in [11], where Heyting completion is
used to construct the abstract domain An attempt of building the domain
Pos starting from the disjunctive completion of the domain G (by Jones and
Sndergaard [13]) is shown in [11]. Also in this case, the construction does not
directly come from the property of groundness, but from a domain more complex
than G.
Preliminaries
Throughout the paper we assume familiarity with lattice theory (e.g. see [3,12]),
abstract interpretation [5,6,17] and logic programming [1].
2.1 Notation and basic notions
and C be sets. A n B denotes the set-theoretic difference between A
and B, A ae B denotes proper inclusion and, if X ' A, X is the set-theoretic
complement of X . The powerset of A is denoted by (A). If A is a poset, we
usually denote by A the corresponding partial order. A complete lattice A
with partial ordering A , least upper bound A (join), greatest lower bound
"A (meet), least element ?A , and greatest element ?A , is denoted by hA; A
being A complete, the join and meet operations
are defined on (A). If A is (partially) ordered and I ' A then # I
denotes the set of order-ideals of A, where I ' A is
an order-ideal if I =#I . # (A) is a complete lattice with respect to set-theoretic
inclusion, where the join is set union and the meet is set intersection. We write
to mean that f is a total function from A to B. In the following
we sometimes use Church's lambda notation for functions, so that a function
f will be denoted by x:f(x). If C ' A then Cg. By
we denote the composition x:g(f(x)). Let
lattices. A function f : A 7\Gamma! B is additive
if for any C ' A,
2.2 Abstract interpretation and Galois connections
The standard Cousot and Cousot theory of abstract interpretation is based on
the notion of Galois connection [5]. If C and A are posets and ff : C 7\Gamma! A,
are monotonic functions, such that 8c 2 C: c C fl(ff(c)) and 8a 2
A: ff(fl(a)) A a, then we call the quadruple hC; fl; A; ffi a Galois connection
(G.c.) between C and A. If in addition 8a 2 A:
is a Galois insertion (G.i.) of A in C. In the setting of abstract interpretation,
C and A are called, respectively, concrete and abstract domain, and they are
assumed to be complete lattices. Any G.c. hC; fl; A; ffi can be lifted to a G.i.
by identifying in an equivalence class those objects in A having the same image
(meaning) in C.
Let L be a complete lattice hL; playing the role of the
concrete domain. An (upper) closure operator on L is an operator ae : L 7\Gamma! L
monotonic, idempotent and extensive (viz. 8x 2 L: x L ae(x)) [17]. Each closure
operator ae is uniquely determined by the set of its fixpoints, which is its image
ae(L). ae(L) is a complete lattice with respect to L , but, in general, it is not a
complete sublattice of L, since the join in ae(L) might be different from L . ae(L)
is a complete sublattice of L iff ae is additive.
is the set of fixpoints of a closure operator on L iff X is a Moore-
family of L, i.e. ?L 2 X and X is completely meet-closed (viz. for any non-empty
X). For any X ' L, we denote by
c
(X) the Moore-closure of X ,
i.e. the least subset of L containing X , which is a Moore-family of L. We denote
by hMoore(L); v; u; t; f ? g; Li the complete lattice of all Moore-families of L.
The ordering v is the inverse of set inclusion ('), u is the least Moore-family
which contains set union, (X; Y:
c
t is set intersection (").
The equivalence between G.i., closure operators and Moore-families is well
known [3]. However, closure operators and Moore-families are often more practical
and concise than G.i.'s to reason about abstract domains, being independent
from representation choices for domain objects [6]. Any G.i. hL; fl; A; ffi
is uniquely determined (up to isomorphism) by the closure operator fl ffiff, and,
conversely, any closure operator uniquely determines a G.i. (up to isomorphism).
The complete lattice of all abstract domains (identified up to isomorphism) on
L is therefore isomorphic to Moore(L). The order relation on Moore(L) corresponds
precisely to the order used to compare abstract domains with regard to
their precision: If A and B are abstractions of L, then A is more precise than B
Moore-families.
2.3 Logic programming
Let V be a (denumerable) set of variables. We fix a first-order language L, with
variables ranging in V . T erm is the set of terms of L. For any syntactic object
s, vars(s) denotes the set of its variables. A term t is ground if
The set of idempotent substitutions, i.e., finite mappings from V to terms in
L, is denoted by Sub. Substitutions are lifted to terms in the usual way. If ff is a
substitution, dom(ff) denotes the set f v 2 V j ff(v) 6= v: g, which is always finite
by definition. By fi ffi ff we denote the substitution v:ff(fi(v)). Objects in Sub
are partially ordered by instantiation: a b iff 9' 2 Sub: a = b'.
2.4 Intuitionistic logic and Heyting algebras
Let L be a complete lattice and a; b 2 L. The pseudo-complement (or intuitionistic
implication) of a relatively to b, if it exists, is the unique element
such that for any x 2 L: a "L x L b iff x L a ! b. Relative
pseudo-complements, when they exist, are uniquely given by a
g. A complete lattice A is a complete Heyting algebra (cHa)
if it is relatively pseudo-complemented , that is a ! b exists for every a; b 2 A.
From a logical point of view, an Heyting algebra L is an algebra equipped with
three operations which satisfies the following equations
for every a; b; c 2 L:
a
a
Meet, join and relative pseudo-complement of Heyting algebras precisely correspond
to conjunction, disjunction and intuitionistic implication of intuitionistic
logic (see Birkhoff [3]).
An example of cHa is the complete lattice h # (Sub); '; [; "; Sub; ;i. The
logical operations and " correspond to [ and " operations (set intersection
and set union). Given a; b 2 # (Sub), the intuitionistic implication a
is also given by ([3])
Example 1. Let x; and
g.
For instance, given a binary functor f , the substitution f x / f(y; w) g belongs
to does not.
3 Domains refinements
3.1 Reduced product
The reduced product [6] of abstract domains corresponds to the meet operation
(u) of closure operators. Given two abstract domains X;Y , the reduced product
of X and Y , denoted by X uY , is the least abstract domain which includes both
3.2 Disjunctive completion
The disjunctive completion [6] of an abstract domain A is the least (most ab-
stract) domain which includes A and which is a complete (join-)sublattice of L,
viz. no approximation is induced by considering the join of abstract objects. The
disjunctive completion of A is defined as the most abstract domain which is an
additive closure and includes A (cf. [6,10]):
and X is additive g:
Proposition 2. If L is a cHa and A is finite,
g.
3.3 Heyting completion
Heyting completion [11] is a refinement operator which has been recently introduced
in order to logically interpret the Cousot and Cousot's reduced power
refinement [6]. In this section we recall the definition and some basic results on
Heyting completion refinement [11]. The aim of this refinement is to enhance
domains to represent relational or negative information. The idea is to enrich an
abstract domain by adding all the relative pseudo-complements (or intuitionistic
implications) built from every pair of elements of the given domain. From a
logical point of view, the new domain is the collection of intuitionistic formulas
built from the " and ! connectives, without nested implication.
For the sake of simplicity, we recall the definition only in case the concrete
domain L is a cHa. Given two abstract domains A and B, the Heyting completion
of A wrt B (denoted by A \Psi
\Gamma! B) is precisely captured by the least Moore family
containing all the relative pseudo-complements (without nesting):
A \Psi
The Heyting completion refinement is reductive on the second argument
\Gamma! A v A) and argumentwise monotonic (if A v B and C v D then
A \Psi
\Gamma!
\Gamma! D).
We recall some basic algebraic properties of Heyting completion with respect
to other domain operations. In particular, we consider reduced product (u) and
disjunctive completion (
of abstract domains. In the following of the paper,
Proposition 3. Let L be a cHa.
1. A \Psi
\Gamma! (B u
\Gamma! B) u
\Gamma! C)
2.
\Gamma! (B \Psi
\Gamma! C)
3.
A is finite.
Groundness:
Many domains have been proposed in order to study groundness of pure logic
programs. If V ar is the (finite) set of variables of interest, the simplest one is
ar), due to Jones and Sndergaard [13], where each ar denotes the set
of substitutions which ground every variable in V . This domain is certainly the
most intuitive one and represents exactly the property we want to analyze, since
it is built from the definition of groundness itself. Unfortunately, G is not very
useful for groundness analysis, since it fails in capturing the groundness relations
between different variables, i.e., the domain is able to represent the "result" of
a groundness analysis, but not to be used for "computing"' the analysis itself.
Other domains, based on (classic) propositional logic, have been proposed in
order to enrich G. Pos is the most widely used domain for groundness analysis
[13,15,4,2]. Pos is able to characterize both pure groundness, i.e. whether a
variable is instantiated to ground terms during program execution, and the relations
between the groundness of different program variables, providing in this
sense a clear example of relational analysis. Pos is the set of (classic) propositional
formulas built from V ar, by using the connectives
w.r.t. (classic) logical equivalence (KL). If we denote by F orm(V ar; ffi) the set
of formulas built from V ar by using the connective ffi, then Pos can be defined
in different ways [4,2]:
The domain Def [2] is built by considering all the formulas whose models
are closed under intersection and taking the quotient of this set w.r.t. (classic)
logical equivalence. Def has been also characterized as the set of formulas which
are conjunctions of definite clauses, always quotiented w.r.t. (classic) logical
equivalence.
i2I
ar g =KL
For the sake of notation, note that also the domain can be defined
as a domain of formulas by using the connective " to represent subsets:
We can relate the three domains to the concrete one by using the same concretization
function. The concrete domain considered in all cases is # (Sub) (which is
a cHa). The interpretation of the connectives is the classical one: OE / if and
only if / is a logical consequence of OE. We say that I ' V ar is a model for
OE, denoted I j= OE, if OE is true in the interpretation which assigns true to all
the variables in I and false to the other ones [2]. For the sake of simplicity we
grounds OE to denote f x 2 V ar j Sub. The
concretization function is [4]:
As usual, we shall abuse of notation and call G, and Pos the corresponding
subsets of # (Sub): fl Sub (G), fl Sub (Def) and fl Sub (Pos).
5 Implicational groundness analysis
We start listing some well known facts about G, Pos. A first result on
the reconstruction of abstract domains as implicational ones was proved in [11].
The authors proved that is precisely the Heyting completion of G.
\Gamma! G (5.1)
In this section we prove that also Pos is an implicational domain which depends
only on the simple domain G. We shall prove that
\Gamma!
\Gamma! G. Therefore
by 5.1,
\Gamma! G. In order to prove it, we need to better understand
the relations among the logical operations in Pos and in # (Sub). Since Pos is a
Moore family on # (Sub), the meet operation (") on Pos must be the restriction
of the meet operation on # (Sub), which is set intersection. It follows that, for
every formula OE i 2 Pos,
i2I
i2I
It is well known that the join operation () of Pos is not the restriction of the
concrete one on # (Sub), which is set union [8]. But the isomorphism holds if
we consider variables only.
Lemma 4.
A similar result holds also for the implication. In general, the classic implication
does not coincide with the intuitionistic one. If we consider implications between
a conjunction of variables and a disjunction of variables only, then they
are isomorphic.
Lemma 5.
i2I
i2I
The idea is to find a normal form for elements in Pos which allows disjunction
only between variables and (classic) implication only between a conjunction and
a disjunction of variables. It is well known that every (classic) formula can be put
in conjunctive normal form, i.e., it can be written as conjunction of disjunctions.
Moreover, every disjunction is a clause, therefore we can write it as a unique
implication, by putting all the negated variables on the left and the positive
ones on the right.
Lemma 6. In Pos, every formula OE is equivalent to a formula of the form:
The next step is to transform every formula in Pos into a formula of # (Sub),
which is not a boolean algebra, as next example shows.
Example 7. In a boolean algebra (a " b)
and
ar. Consider in # (Sub) the
g. We show
that strictly contained in (X " Y In fact, the
substitution belongs to (X " Y but not to (X ! Z)
nor to (Y ! Z).
We remind that # (Sub) is a cHa [3], i.e., it is an algebra equipped with three
operations: Meet, join and intuitionistic implication. Meet and join on # (Sub)
are set intersection and set union, while intuitionistic implication is given by
[3]. This allows us to deal with
formulas in # (Sub) by simply using its operations as connectives.
The previous representation lemma, together with the isomorphism relations
shown in Lemmata 4 and 5, allows us to directly transform every formula of Pos
into a formula of # (Sub).
Example 8. Let x (x , y) be a formula in Pos. We look for its concretization
First we transform x (x , y) in normal form, which always
exists by Lemma 6.
Next we transform y ) x by using Lemma 5 and obtain fl Sub (y
which is a formula in # (Sub).
As shown by the above example, we have a constructive method to transform
formulas in Pos into formulas in # (Sub). Thus we obtain an image of
Pos on the concrete domain. Let
generic element of Pos (Lemma 6). The concretization of OE, fl Sub (OE) is precisely
Therefore, we can compute the concrete
image of Pos, as shown by the following theorem.
Theorem 9.
i2I
Our aim is to describe the domain Pos by using the Heyting completion refinement
only. Therefore, we look for a normal form for concrete objects which
uses meets and intuitionistic implications only, that is a normal form which does
not use the disjunctive operation. Note that, in the formulas on the concrete
domain, disjunctions are computed between variables only. The last step of the
construction is then to find a representation in # (Sub) for unions of variables
in terms of intuitionistic implications. The next lemma shows that a (concrete)
disjunction is always equivalent to a (double) implication.
Lemma 10. Let
holds.
Therefore we obtain a transformation of formulas in Pos into formulas in # (Sub)
which use " and ! only. In particular, note that the ! is used with at most
two levels of nesting. This suggests to construct Pos as a double implication.
Theorem 11.
\Gamma!
\Gamma! G
By 5.1 and properties of Heyting completion (Proposition 3.2 and monotonicity),
we immediately obtain:
Corollary 12.
\Gamma!
For domains G, and Pos are depicted below.
O O O O O O O
x y
x"y
true
O O O O O O O
O O O O O O O
O O O O O O O
x$y
x y
x"y
true
O O O O O O O
O O O O O O O
O O O O O O O
O O O O O O O
x$y
x y
x"y
true (x!y)!y
G G \Psi
\Gamma!
\Gamma! G
The first important consequence of Theorem 11 is that the domain Pos is
constructed by using only the definition of groundness (G) and the logical properties
of the concrete domain. We do not need to "invent" the domain, to prove
that it is actually an abstraction of # (Sub), nor to prove that it refines G, since
all these properties hold by construction. In our framework, Pos arises as the
natural refinement of G and
Another consequence is the normal form for elements in fl Sub (Pos). This
result allows us to get rid of Pos and deal directly with fl Sub (Pos). In fact, when
we use a formula of Pos, actually we use an equivalence class w.r.t. (classic)
logic equivalence (KL). In our formalization, we do not need to use equivalence
1 Note that x i;k ; y j;k denote sets of substitutions.
classes any longer. Moreover, the "form" of concrete formulas is indeed very
natural and preserves the intuitive meaning of abstract formulas. Finally, note
that our normal form states that every formula in fl Sub (Pos) can be written
by using only two levels of nesting for the implication and no disjunctions. This
yields a precise upper bound to the length of the formula. It is worth noticing that
both the normal form and the upper bound are independent from the cardinality
of V ar.
6 Optimality of Pos
An abstract domain is a collection of points selected from the concrete domain.
Some points are used to represent the result of the analysis, while the other ones
are used only during the computation. For instance, in Pos the points which
represent the final result of groundness analysis are those in G only, since the
other ones do not provide groundness information. From equations 2.1, 2.2 and
2.3, it is obvious that only axioms 2.2 produce a result which belongs to G (i.e.,
formulas with conjunctions only). This suggests that an abstract domain, to be
optimal, should contain all and only those formulas which have an implicational
form, since implications only can be reduced to formulas in G. Therefore, to
obtain an optimal domain, we have to include all (and only) the implications
in the abstract domain. This concept is precisely captured by the notion of
implicational domain equation.
An abstract domain X is optimal w.r.t. a given domain A (and the operation
") if it is the least (most abstract) solution of the implicational domain equation
\Gamma!
The solution, which always exists, is the most abstract domain X which is more
concrete than A and is closed under \Psi
\Gamma!. Moreover, it turns out to be the most
refined abstract domain we can obtain by using Heyting completion refinement.
6.1 Solution to the equation
In the previous section, starting from the result
\Gamma! G, we have shown
that
\Gamma!
\Gamma! G. A question which naturally arises is: what is the
domain Pos \Psi
\Gamma! Pos? By Proposition 3.2 we know that Pos \Psi
\Gamma! G.
More generally, we wonder which is the most abstract domain which is more
precise than G and closed w.r.t. Heyting completion, i.e., which is the solution
to the equation:
\Gamma! X).
The next theorem answers both questions, since Pos is already closed w.r.t.
Heyting completion. Therefore, it is precisely the least (most abstract) solution
to the equation.
Theorem 13.
\Gamma! Pos
The theorem states that we can not further refine Pos with Heyting completion
refinement. This result comes from properties of substitutions, since, in the
concrete domain # (Sub), join and implication operations are not completely
independent from each other (see Lemma 10).
Moreover this theorem yields a representation result for elements in Pos.
It precisely states that an element of # (Sub) belongs to Pos if and only if
it can be written by using meets and implications only. This is much stronger
than the previous one since it completely characterizes the image of Pos. Differently
from previous characterizations of Pos [2,4], all these results hold on
the concrete domain, that is we can directly deal with formulas without using
any isomorphisms. Therefore, the concretization function fl Sub becomes, in our
construction, the identity function.
This result allows us to answer the question "why Pos is considered optimal
without being disjunctive", also from an intuitive point of view. From the
previous characterization, we know that a disjunctive formula belongs to Pos if
and only if it can be put in implicational form. From a logic point of view, the
elements useful for the analysis are implications only (as they can be reduced
by modus ponens). Since a good domain should contain all and only those joins
which are indeed useful, we would include those joins which can be reduced only,
which are exactly the joins that Pos contains. This explains the result in [9] that
Pos is complete (or precise) w.r.t.
b (Pos), that is Pos is the "optimal" domain
for groundness analysis.
6.2 On disjunctive completion
We proved that we can not further refine Pos by Heyting completion and [9]
showed that it is pointless to use
b (Pos) instead of Pos.
The idea is to refine Pos by disjunctive completion and then to refine it by
Heyting completion, i.e., to find the solution of the equation
b (Pos) u
\Gamma! X). The next theorem claims that also
b (Pos) is closed w.r.t. Heyting
completion.
Theorem 14.
\Gamma!
Therefore,
b (Pos) is the "biggest" domain we can obtain by disjunctive and
Heyting completion refinements and, in view of the result of completeness of
w.r.t.
b (Pos), we have proved that Pos is definitely the best domain for
groundness analysis.
Conclusions
In this paper we have reconstructed Pos by using the properties of the concrete
domain only. With our formalization, we automatically obtain the properties of
Pos by construction. We show that we can get rid of the construction of Pos
with positive formulas and equivalence classes and use directly the operations
which naturally arise from the concrete domain. Moreover, we show a result of
optimality for Pos by proving that it contains exactly all and only the elements
really useful to the analysis. This result is very surprising if we consider that the
domain Pos was built without following any formal notion of optimality.
The main feature of our construction is that it can easily be applied to other
kind of analyses of logic programs, since it depends only on the properties of the
concrete domain. We trust that (many) other analyses can be naturally formalized
in this way, in particular analyses of properties closed under instantiation,
such as type analysis, where the semantics (concrete domain) is always # (Sub).
--R
Introduction to logic programming.
Boolean functions for dependency analysis: algebraic properties and efficient representation.
Lattice theory.
abstract domain for groundness analysis.
Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints.
Systematic design of program analysis frameworks.
Static inference of modes and data dependencies in logic programs.
Improving abstract interpretations by systematic lifting to the powerset.
The Powerset Operator on Abstract Interpretations.
Compositional optimization of disjunctive abstract interpretations.
Intuitionistic implication in abstract interpretation.
A Semantics-based Framework for the Abstract Interpretation of Prolog
Denotational abstract interpretation of logic programs.
Abstract interpretation of logic programs: the denotational approach.
Precise and efficient groundness analysis for logic programs.
Some results on the closure operators of partially ordered sets.
--TR
Proofs and types
Static inference of modes and data dependencies in logic programs
Algebraic properties of idempotent substitutions
Logic programming
Abstract interpretation and application to logic programs
Precise and efficient groundness analysis for logic programs
Denotational abstract interpretation of logic programs
Improving abstract interpretations by systematic lifting to the powerset
A unifying view of abstract domain design
A logical model for relational abstract domains
The powerset operator on abstract interpretations
Abstract interpretation
Systematic design of program analysis frameworks
Compositional Optimization of Disjunctive Abstract Interpretations
Completeness in Abstract Interpretation
Intuitionistic Implication in Abstract Interpretation
"Optimal" Collecting Semantics for Analysis in a Hierarchy of Logic Program Semantics
--CTR
Roberto Giacobazzi , Francesco Ranzato , Francesca Scozzari, Making abstract domains condensing, ACM Transactions on Computational Logic (TOCL), v.6 n.1, p.33-60, January 2005
Andy King , Lunjin Lu, A backward analysis for constraint logic programs, Theory and Practice of Logic Programming, v.2 n.4-5, p.517-547, July 2002
Giorgio Levi , Fausto Spoto, Pair-independence and freeness analysis through linear refinement, Information and Computation, v.182
Patricia M. Hill , Fausto Spoto, Deriving escape analysis by abstract interpretation, Higher-Order and Symbolic Computation, v.19 n.4, p.415-463, December 2006 | abstract domain;abstract interpretation;heyting completion;intuitionistic logic;static analysis;groundness;logic programming |
567275 | Estimation of state line statistics in sequential circuits. | In this article, we present a simulation-based technique for estimation of signal statistics (switching activity and signal probability) at the flip-flop output nodes (state signals) of a general sequential circuit. Apart from providing an estimate of the power consumed by the flip-flops, this information is needed for calculating power in the combinational portion of the circuit. The statistics are computed by collecting samples obtained from fast RTL simulation of the circuit under input sequences that are either randomly generated or independently selected from user-specified pattern sets. An important advantage of this approach is that the desired accuracy can be specified up front by the user; with some approximation, the algorithm iterates until the specified accuracy is achieved. This approach has been implemented and tested on a number of sequential circuits and has been shown to handle very large sequential circuits that can not be handled by other existing methods, while using a reasonable amount of CPU time and memory (the circuit s38584.1, with 1426 flip-flops, can be analyzed in about 10 minutes). | INTRODUCTION
The dramatic decrease in feature size and the corresponding increase in the
number of devices on a chip, combined with the growing demand for portable
communication and computing systems, have made power consumption one of
the major concerns in VLSI circuits and systems design [Brodersen et al. 1991].
Indeed, excessive power dissipation in integrated circuits not only discourages
the use of the design in a portable environment, but also causes overheating,
This work was supported in part by Intel Corp., Digital Equipment Corp., and the Semiconductor
Research Corp.
Authors' addresses: V. Saxena, AccelChip Inc., Schaumburg, IL; F. N. Najm, University of
Toronto, Department of ECE, 10 Kings College Road, Toronto, Ontario, Canada M5S 3G4; email:
f.najm@utoronto.ca; I. N. Hajj, Dean of the Faculty of Engineering and Architecture, American
University of Beirut, Beirut, Lebanon.
Permission to make digital / hard copy of part or all of this work for personal or classroom use is
granted without fee provided that the copies are not made or distributed for profit or commercial
advantage, the copyright notice, the title of the publication, and its date appear, and notice is given
that copying is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers,
or to redistribute to lists, requires prior specific permission and /or a fee.
ACM Transactions on Design Automation of Electronic Systems, Vol. 7, No. 3, July 2002, Pages 455-473.
456 . V. Saxena et al.
Circuit
Combinational
Sequential Circuit
Latches
Present
State
Next
State
Inputs Outputs
xxx
Fig. 1. An FSM model of a sequential logic circuit.
which can lead to soft errors or permanent damage. Hence there is a need to
accurately estimate the power dissipation of an IC during the design phase.
The main conceptual difficulty in power estimation is that the power depends
on the input signals driving the circuit. Simply put, a more active circuit will
consume more power. Thus one straightforward method of power estimation is
to simulate the design over all possible inputs, compute the power dissipated
under each input, and average the results. However, such an approach is prohibitively
expensive. Thus the main difficulty in power estimation is that the
power is input pattern-dependent.
It is possible to overcome the pattern-dependency problem by using probabilities
to describe the set of all possible logic signals, and then studying the
power resulting from the collective influence of all these signals. This formulation
achieves a certain degree of pattern-independence that allows one to
efficiently estimate the power dissipation. Most recently proposed power estimation
tools [Najm 1994] are based on such a probabilistic approach, but are
limited to combinational circuits. Only a few techniques have been proposed for
sequential circuits, and they are reviewed in the next section.
We consider that the circuit has the popular and well-structured design style
of a synchronous sequential circuit, as shown in Figure 1. In other words, it
consists of flip-flops driven by a common clock and combinational logic blocks
whose inputs (outputs) are flip-flop outputs (inputs). Therefore, the average
power dissipation of the circuit can be broken down into the power consumed
by the flip-flops and that consumed by combinational logic blocks. This provides
a convenient way to decouple the problem and simplify the analysis.
Estimation of State Line Statistics . 457
In this article, we present a statistical estimation technique for collecting
signal statistics (switching activity and signal probability) at the flip-flop
outputs. This work extends and improves the preliminary work proposed in
Najm et al. [1995]. The statistics are computed by collecting samples obtained
from fast register-transfer-level (RTL) simulation of the circuit under input
sequences that are either randomly generated or independently selected from
user-specified pattern sets. Given these, it is then possible to use any of the
existing combinational circuit techniques to compute the power of the combinational
circuit. The use of an RTL or zero-delay simulator does not affect
the accuracy of the power estimate, since it is assumed the flip-flops are
edge-triggered and filter out any glitches or hazards that may exist at their
inputs.
In the following sections, we give some background and review of previous approaches
(Section 2), formulate the problem in more detail (Section 3), present
our approach (Section 4), give experimental results (Section 5), and conclude
with some discussion (Section 6). Furthermore, a number of theoretical results
are presented and summarized in Appendix A.
2. BACKGROUND
Let um be the primary input nodes of a sequential logic circuit, as
shown in Figure 1, and let x 1 , x 2 , . , x n be the present state lines. For simplicity
of presentation, we have assumed that the circuit contains a single clock that
drives a bank of edge-triggered flip-flops. On the falling edge of the clock, the
flip-flops transfer the values at their inputs to their outputs. The inputs u i and
the present state values determine the next state values and the circuit outputs,
so that the circuit implements a finite state machine (FSM).
Most existing power estimation techniques handle only combinational circuits
[Najm 1994] and require information on the circuit input statistics (tran-
sition probabilities, etc. To allow extension to sequential circuits, it is therefore
sufficient to compute statistics of the flip-flop outputs (and corresponding flip-flop
power). Other existing techniques would then be applied to compute the
power consumed in the combinational block.
We briefly survey the few recently proposed techniques for estimating the
power in sequential circuits. All proposed techniques that handle sequential
circuits [Ismaeel and Breuer 1991; Hachtel et al. 1994; Monteiro and Devadas
1994; Tsui et al. 1994] make the simplifying assumption that the FSM is Markov
[Papoulis 1984], so that its future is independent of its past once its present
state is specified.
Some of the proposed techniques compute only the probabilities (signal and
transition) at the flip-flop outputs, whereas others also compute the power. The
approach in Ismaeel and Breuer [1991] solves directly for the transition probabilities
on the present state lines using the Chapman-Kolmogorov equations
[Papoulis 1984], which is computationally too expensive. Another approach that
also attempts a direct solution of the Chapman-Kolmogorov equations is given
in Hachtel et al. [1994]. Although it is more efficient, it remains quite expensive,
so that the largest test case presented contains less than
458 . V. Saxena et al.
Better solutions are offered by Monteiro and Devadas [1994] and Tsui et al.
[1994], which are based on solving a nonlinear system that gives the present
state line probabilities, as follows. Given probabilities p
, . , p um at the input
lines, let a vector of present state probabilities P
, . , p x n
be applied
to the combinational logic block. Assuming the present state lines are indepen-
dent, one can compute a corresponding next state probability vector as F (P p.s.
The function F (-) is a nonlinear vector-valued function that is determined by
the Boolean function implemented by the combinational logic.
In general, if the next state probabilities form a vector P n.s. , then P n.s. #=
p.s. ), because the flip-flop outputs are not necessarily independent. Both
methods [Monteiro and Devadas 1994; Tsui et al. 1994] make the independence
assumption P n.s. # F (P p.s. Finally, since P n.s. = P p.s. due to the feedback, they
obtain the state line probability values by solving the system
system is solved using the Newton-Raphson method in Monteiro and Devadas
[1994], and using the Picard-Peano iteration method in Tsui et al. [1994].
One problem with this approach is that it is not clear that the system
F (P ) has a unique solution. Being nonlinear, it may have multiple solutions,
and in that case it is not clear which is the correct one. Another problem is the
independence assumption which need not hold in practice, especially in view of
the feedback. Both techniques try to correct for this. In Monteiro and Devadas
[1994], this is done by accounting for m-wise correlations between state bits
when computing their probabilities. This requires 2 m additional gates and can
get very expensive. Nevertheless, they show good experimental results. The
approach in Tsui et al. [1994] is to unroll the combinational logic block k times.
This is less expensive than Monteiro and Devadas [1994], and the authors
observe that with good results can be obtained. Finally, in order for
the FSM to be Markov, its input vectors must be independent and identically
distributed, which is another assumption that also may not hold in practice.
A new simulation-based approach was introduced by the authors in Najm
et al. [1995] that makes no assumptions about the FSM behavior (Markov or
otherwise), makes no independence assumption about the state lines, and allows
the user to specify the desired accuracy and confidence to be achieved in
the results; with some approximation, the algorithm iterates until the specified
accuracy is achieved. The only assumption made (which is given in the
next section) has to do with the autocovariance of the logic signals, which is
mild and generally true for all but periodic logic signals. The method involves
collecting statistics from a number of parallel simulations of the same circuit,
each of which is driven by an independent set of vectors. The statistics gathered
are used to determine the state line probabilities. In this article, we extend this
approach to compute the latch switching activity in addition to the probability,
and we provide an improved convergence criterion that significantly improves
the run-time with no significant loss of accuracy.
3. PROBLEM FORMULATION
Since the system is clocked, it is convenient to work with discrete time, so
that the FSM inputs at time k, u i (k), and its present state at that time x i (k),
Estimation of State Line Statistics . 459
determine its next state x i (k+1), and its output. In order to take into account the
effect of large sets of inputs, one is typically interested in the average power dissipation
over long periods of time. Therefore, we assume that the FSM operates
for all time (-# < k < #). An infinite logic signal x(k) can be characterized by
two measures: signal probability P (x) is the fraction of clock cycles (time units)
in which the signal is high, and transition density D(x) is the average number
of logic transitions per clock cycle. These measures are formally presented in
Appendix
A, where it is also shown that
logic signal derived from x(k) so that T x only in those cycles where x(k)
makes a transition; that is,
0, otherwise.
(1)
It should be stressed that the result true only for discrete-time
logic signals, that is, for signals that make at most one transition per clock cycle,
so that they are glitch-free. In this article, we are mainly concerned with the
flip-flop outputs which are obviously glitch-free, so that this result is relevant.
In order to study the properties of a logic signal over (-#), it is useful
to consider a random model of logic signals. We use bold font to represent
random quantities. We denote the probability of an event A by P{A} and, if x
is a random variable, we denote its mean by E[x]. An infinite logic signal x(k)
can be viewed as a sample of a stochastic process x(k), consisting of an infinite
set of shifted copies of the logic signal. This process, which we call a companion
process, embodies all the details of the logic signal, including its probability and
density. Details and basic results related to the companion process are given in
Appendix
A.2 as an extension of previous continuous-time work [Najm 1993b].
Specifically, the companion process is stationary, and for any time instant k,
the probability that x(k) is high is equal to the signal probability of the logic
signal:
This result holds for any logic signal. If we (conceptually) construct the
companion processes corresponding to the FSM signals, then we can view
the FSM as a system operating on stochastic inputs, consisting of the companion
um (k), and having a stochastic state consisting
of the processes x 1 (k), x 2 (k), . , x n (k). Given statistics of the input vector
um (k)], one would like to compute some statistics of the
state vector
Before going on, we need to make one mild assumption related to the covariance
of the process X(k):
ASSUMPTION 1. The state of the machine at time k becomes independent of
its initial state at time 0 as k #.
This assumption is mild because it is generally true in practice that, for
all nonperiodic logic signals, two values of the signal that are separated by a
large number of clock cycles become increasingly uncorrelated. One necessary
condition of this assumption is that the FSM be aperiodic, that is, that it does not
ACM Transactions on Design Automation of Electronic Systems, Vol. 7, No. 3, July 2002.
460 . V. Saxena et al.
cycle through a repetitive pattern of states. Aperiodicity is implicitly assumed
by most previous work on sequential circuits. Specifically, whenever an FSM is
assumed Markov (in which case aperiodicity becomes equivalent to the above
assumption) the FSM is usually also assumed to be aperiodic.
Before leaving this section, we consider the question of exactly what statistics
of X(k) are required in order to estimate the power. These statistics must
be sufficient to compute the combinational circuit power. Many techniques for
combinational circuit power estimation [Najm 1994] require the signal probability
and transition density at every input (for discrete-time signals, knowing
the transition density is equivalent to knowing the transition probability).
Since the power consumed in the flip-flops can also be derived from D(x i ), then
the state line P can be sufficient to compute the power for the
whole circuit. An algorithm for computing these statistics is presented in the
next section.
4. COMPUTING STATE LINE STATISTICS
We propose to obtain the state line statistics by performing Monte Carlo logic
simulation of the design using a high-level functional description, say, at the
register-transfer-level, and computing the probabilities from the large number
of samples produced. High-level simulation can be done very fast, so that one
can afford to simulate a large number of cycles. However, we need to define a
simulation setup and a mechanism to determine the length of the simulation
necessary to obtain meaningful statistics. It is also important to correctly choose
the input vectors used to drive the simulation. These issues are discussed below.
4.1 Simulation Setup
We first discuss the estimation of the state line probability P Suppose the
FSM is known to be in some state X 0 at time 0. Using (2), and given Assumption
1, we have that for any state signal x i ,
lim
For brevity, we denote the above conditional probability by
so that
lim
Our method consists of estimating P k increasing values of k until
convergence (according to (3)) is achieved. To accomplish this, we perform a
number of simulation runs of the circuit, in parallel, starting from some state
drive the simulations with input vector streams that are consistent
with the statistics of U(k). Each simulation run is driven by a separate independently
chosen input vector stream, and results in a logic waveform x
designates the run number, and N is the
ACM Transactions on Design Automation of Electronic Systems, Vol. 7, No. 3, July 2002.
Estimation of State Line Statistics . 461
number of simulation runs. If we average the results at every time k we obtain
an estimate of the probability at that time as follows,
From the law of large numbers, it follows that
lim
Wedo not actually have to perform an infinite number of runs. Using established
techniques for the estimation of proportions [Miller and Johnson 1990], we can
predict how many runs to perform in order to achieve some user-specified error-tolerance
(#) and confidence (1 - #) levels. Specifically, it can be shown [Najm
1993a] that if we want (1 - 100% confidence that
then we must perform at least N # max(N 2
z #/2
z #/2
and
and where z #/2
is a real-valued function of #, defined as follows. Let z be a
random variable with a standard normal distribution, that is, a normal distribution
with mean 0 and variance 1. Then, for a given #, z #/2 is defined as the
real number for which
P{z > z #/2
The value of z #/2 can be obtained from the erf(-) function available on most
computer systems. For instance, z confidence (i.e.,
and z confidence. From the above equations, it can be seen
that 490 runs are enough to obtain a result with accuracy
confidence.
From the user-specified # and #, the required value of N can be found up
front. Given this, we initiate N parallel simulations of the FSM and for each
state signal x i obtain waveforms representing P k
increasing
values. The same methodology can be used to estimate D(x i ). During
the simulation, statistics for estimation of the state transition density are
also collected, along with the statistics for state line probability estimation.
This results in another set of waveforms for each state signal x i representing
defined in (1).
The remaining question is how to determine when k is large enough so that
can be said to have converged to P
This is discussed in the next section.
462 . V. Saxena et al.
4.2 Convergence in Time
Accurate determination of convergence in time (k) can be computationally very
expensive. This is not because the time to converge is long, but because studying
the system dynamics that determine convergence is very expensive. For in-
stance, even in the relatively simpler case when the system is assumed Markov,
convergence is related to the eigenvalues of the system matrix, whose size is
exponential in the number of flip-flops. To overcome this difficulty, we use a
heuristic technique to efficiently check for convergence. Simply stated, we monitor
the waveform values, over time, until convergence is detected. In order for
this simple approach to work well in practice, we make careful choices for the
specific ways in which the waveforms are monitored and convergence is checked,
as we describe. The resulting method, which we have found works quite well
in practice, has several important features: (1) monitor two waveforms instead
of one, (2) check both the average and difference of the two waveforms over a
time window, and (3) use a low-pass filter to remove the noise in the waveforms.
These are explained below, where we restrict the discussion to the probability
waveforms since the treatment of the density waveforms is similar.
4.2.1 Two Waveforms. Checking convergence of one waveform, say,
may be done by simply monitoring the waveform values until they
have "leveled off " and remained steady for some length of time. By itself, this
simple approach is not advisable because it is possible for a waveform to level
off for some time and then change again before reaching its steady-state value.
In order to reduce the chance of this type of error, we monitor two versions of
the P k waveform for each x i , and check on convergence by looking at both of
them. This is done by considering two different initial states denoted X 0 and
It is clear from Equation (3) that the choice of the initial state does not affect
the final result. It may affect the rate of convergence, but not the final probability
or density values. The only requirement, in order for the error tolerance
and confidence results to be valid, is that all the N simulation runs start in the
same state. Thus, we perform two sets of simulation runs of the machine, each
consisting of N machines running in parallel. Each of the N machines in a set
starts in the same initial state, but different initial states, X 0 and X 1 , are used
for the two sets (the mechanism for determining X 0 and X 1 is presented in Section
4.3). Each machine is driven by an independently selected (see Section 4.5)
input stream so that the observed data from the different machines constitute
a random sample. For each state line signal, statistics are collected from each
of the N parallel simulations in a set. This results in two waveforms for the
steady-state probability, P k increasing k values.
Is it possible to further improve the technique by examining three or more
waveforms? It may be, but we have observed that the use of two waveforms gives
sufficient accuracy. The second waveform basically gives a second opinion, and
we have not found a need for a third.
4.2.2 Time Window. We use two measures to check on the convergence
of the pairs of P k waveforms. Since both P k should
ACM Transactions on Design Automation of Electronic Systems, Vol. 7, No. 3, July 2002.
Estimation of State Line Statistics . 463
converge to P difference
defined as | P k defined as
remains within -# of 0 and - k remains
within -# of some fixed value, for a certain time window, we consider that
have converged to their steady-state P
have experimented with various time window sizes, and found that a window
of just three cycles is sufficient.
During the simulation, we simultaneously obtain another set of two waveforms
corresponding to the transition density, D k
each state signal x i . The convergence criteria used for P presented above
are also applied to determine the convergence of D(x i ). A state signal x i is
declared converged when both P
4.2.3 Filter. The combination of the two features presented above gives
good results in practice, but we have found that it sometimes takes longer
than it should to observe convergence. By this we mean, for instance, that
both waveforms P k will be found to "hover" for a
long time around the same value. They will have effectively converged, but
their constant fluctuations around the steady-state value impede the convergence
check. The fluctuations have the character of random noise and in
some cases they may simply be due to slow system dynamics. In any case, it
is clear that their removal is imperative in order to get a faster convergence
check.
To achieve this, the waveforms are filtered before the convergence criteria are
applied to them. We use a linear phase, ideal low-pass filter with an empirically
selected f c T equal to 0.02, in conjunction with a Hamming window of width
100. The impulse response of the filter is:
0, otherwise.
The first sidelobe of the Hamming window is 41 dB below the main lobe. As a
result the negative component in its frequency response is negligible in comparison
with the other filtering windows such as the Hanning or the Rectangular
window and the resulting FIR filter does not have ripples in the passband.
Although the Blackman and the Bartlett (triangular) windows also result in filters
without ripples in the passband, the Hamming window introduces the narrowest
transition band for the same window size [Oppenheim and Schafer 1989;
Chen 1979]. Any sinusoidal variation with a time period less than 100 (= 1/ f c T )
timesteps is removed by the filtering process, eliminating high-frequency noise
and oscillations in the waveforms. At least 100 time steps are required before
one has sufficient datapoints to use this filter. For this reason, the filter is applied
to the waveforms only after 100 cycles have passed. This amounts to a
warmup period of 100 cycles, which speeds up the convergence check without
compromising the quality of the results. The above filter parameters were chosen
because they were found to work well in practice. The specific parameter
values are not critical; it only matters that the filter remove the high frequency
fluctuations in the waveforms.
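A minimal C sketch of such a Hamming-windowed low-pass FIR filter is shown below. The centered, unit-DC-gain form and the helper names are assumptions; the text only specifies f_cT = 0.02 and a window width of 100 applied after the 100-cycle warmup.
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif
#define FC_T  0.02
#define WIDTH 100
static void hamming_lowpass(double h[WIDTH])
{
    double sum = 0.0;
    for (int n = 0; n < WIDTH; n++) {
        double m = n - (WIDTH - 1) / 2.0;                   /* center of the window */
        double sinc = (m == 0.0) ? 2.0 * FC_T
                                 : sin(2.0 * M_PI * FC_T * m) / (M_PI * m);
        double w = 0.54 - 0.46 * cos(2.0 * M_PI * n / (WIDTH - 1));  /* Hamming window */
        h[n] = sinc * w;
        sum += h[n];
    }
    for (int n = 0; n < WIDTH; n++)                         /* normalize to unit DC gain */
        h[n] /= sum;
}
/* y[k] = sum_j h[j]*x[k-j]; only applied once k >= WIDTH, i.e. after the warmup period. */
static double filter_at(const double *x, int k, const double h[WIDTH])
{
    double y = 0.0;
    for (int j = 0; j < WIDTH; j++)
        y += h[j] * x[k - j];
    return y;
}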
For every state signal x_i, each of the four waveforms (two for the probability and two for the density) is filtered separately. When all state signals have converged, the simulation is terminated; the average of the filtered versions of the two probability waveforms is reported as the signal probability P(x_i), and the average of the filtered versions of the two density waveforms is reported as the transition density D(x_i).
4.3 Determination of the Initial States
To carry out the simulations described above, we require two initial states
for the FSM. In case information about the design of the FSM is
available, the user may supply a set of two states that correspond to normal
operation of the circuit. Care should be taken to ensure that these states are
far apart in the state space of the FSM, but in the same connected subset of the
state space, leading to better coverage of the state space during the simulation
process.
In practice, it may not be possible for the user to supply two different states.
Thus, we present a simulation-based technique to determine a second state X 1 ,
given a state X 0 . If the circuit in consideration has an explicit reset state, X 0
could be chosen to be equal to that. In other cases, if no reset state is known,
any state that occurs in the regular operation of the circuit could be supplied
by the user to be used as X_0. In case no user-supplied information is available, we choose the all-zeros state as the default value of X_0. We initiate a simulation with the FSM starting in X_0. The simulation is carried out for 100 timesteps, using randomly selected (or user-supplied; see Section 4.5) input vectors. The
required X 1 is chosen to correspond to the state with the largest Hamming
distance from X 0 , observed during the 100 timesteps. Although this does not
guarantee optimal starting points in the state-space, we use this method in the
absence of any other information about the design of the FSM.
The mechanism also ensures that X 1 corresponds to a state that the FSM
would visit under the circuit's normal operation. The choice of 100 timesteps is
empirical and user-controlled. This simulation, to determine X 1 , is carried out
before the main simulation starts and its contribution to the total run-time is
negligible (less than 1%).
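The selection of X_1 can be sketched as follows in C. The 64-bit state packing and the next_state() and random_input() helpers are illustrative assumptions; only the 100-cycle exploration and the maximum-Hamming-distance rule come from the text.
#include <stdint.h>
#define WARM 100
typedef uint64_t state_t;                                /* assumes at most 64 flip-flops */
extern state_t  next_state(state_t s, uint64_t input);   /* placeholder next-state function */
extern uint64_t random_input(void);                      /* placeholder input generator (Section 4.5) */
static int hamming(state_t a, state_t b)
{
    state_t x = a ^ b;
    int d = 0;
    while (x) { d += (int)(x & 1u); x >>= 1; }
    return d;
}
static state_t choose_x1(state_t x0)
{
    state_t s = x0, best = x0;
    int best_d = 0;
    for (int k = 0; k < WARM; k++) {
        s = next_state(s, random_input());
        int d = hamming(s, x0);
        if (d > best_d) { best_d = d; best = s; }         /* keep the farthest visited state */
    }
    return best;
}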
4.4 Low Density State Signals
We observed that, in some circuits, there exist some state signals which have
very low switching activity. We refer to such signals as being low-density state
signals. For most of these signals, the low density is observed regardless of
the initial state of the FSM (X 0 or X 1 ). For such signals, it is typical to find
that the transition density waveforms D_k converge quickly, but the signal probability waveforms may not converge as fast. This is because a signal may get stuck at logic 0 or 1 and remain there for a long time because of low switching activity. As a result, one P_k waveform may be close to 1 while the other P_k waveform is close to 0, or vice versa. Although the waveforms do not have
significant variation over time and thus satisfy the average criterion, they do
not satisfy the difference criterion for convergence. As a result in the case of
some circuits, the simulation process may continue for a long time, without any
tangible gain of knowledge about the node statistics.
To account for these state signals we have incorporated a stopping criterion
called the low density criterion. Essentially, we relax the error tolerance for state
signals that are declared to be low density. This results in significant speedup
and allows the handling of very large circuits. During the simulation, we keep
note of the last timestep k last in which at least one state signal was declared
to have converged using the regular convergence criterion. We continue the
simulation process using the regular convergence criterion as long as k_current − k_last < k_nochange, where k_nochange is a fixed threshold value that indicates how
long one is willing to wait before possibly doing some special case handling for
low-density nodes. In our implementation, k nochange is a user-defined constant.
The special case handling is as follows. We consider all the state signals that have not yet converged and check to see if either of the two density waveforms D_k is below a user-specified low-density threshold D_min, in which case the state
signal x i is declared to be a low-density state signal. Since low-density state
lines will have little impact on the circuit power, all low-density signals are
immediately assumed to have converged, although the user is cautioned about
their presence. As obvious from the results in Section 5, the number of such state
lines is very low. Since the switching activity for these state lines is low, the
absolute error in the power estimate introduced as a result of this termination
is negligible. The authors have also observed that the low-density criterion has
no effect on the results for the remaining nodes which converge normally.
The simulation is continued further in case some state signals remain that
have neither converged nor are classified as low density. If no new state signals
converge in another k nochange timesteps, the low-density threshold is incremented
by a small amount and the low-density criterion is applied again. For
the benchmark circuits which we considered, this happened only once, and is
pointed out in Section 5.
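A compact C sketch of this low-density handling is given below. The structure fields and the function signature are assumptions; only the k_nochange wait, the D_min test on both density waveforms, and the subsequent (caller-driven) threshold increment come from the text.
typedef struct {
    int    converged;     /* regular convergence criterion satisfied */
    int    low_density;   /* flagged by the relaxed criterion */
    double d0, d1;        /* current density estimates from the two waveforms */
} signal_stat;
/* Returns the number of signals newly declared low density (and hence converged).
 * Per the text, the caller raises d_min by a small increment only if another
 * k_nochange cycles pass with no new convergences. */
static int apply_low_density(signal_stat *sig, int n,
                             int k_current, int k_last, int k_nochange, double d_min)
{
    int flagged = 0;
    if (k_current - k_last < k_nochange)
        return 0;                                 /* keep using the regular criterion */
    for (int i = 0; i < n; i++) {
        if (!sig[i].converged && (sig[i].d0 < d_min || sig[i].d1 < d_min)) {
            sig[i].low_density = 1;
            sig[i].converged   = 1;               /* relaxed error tolerance */
            flagged++;
        }
    }
    return flagged;
}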
4.5 Input Generation
In view of Assumption 1, one requirement on the applied input sequence U (k)
is that it not be periodic. Another condition, required for the estimation (4) to hold, is that the different U(k) sequences used in the different simulation runs be selected independently. Otherwise, no limitations are placed
on the input sequence.
The exact way in which the inputs are selected depends on the design and
on what information is available about the inputs. For instance, if the FSM is
meant to execute microcode from a fixed set of instructions, then every sequence
may be a piece of some microcode program, with each U(k) sequence being selected independently from some pool of typical microcode sections. This method of input generation faithfully reproduces the bit correlations in U(k) as well as the temporal correlations between U(k), U(k+1), and so on. Alternatively, if the
user has information on the relative frequency with which instructions occur
in practice, but no specific program from which to select instruction sequences,
then a random number generator can be used to select instructions at random
to be applied to the machine. This would preserve the bit correlations, but not
the temporal correlations between successive instructions. Conceivably, if such
correlation data are available, one can bias the random generation process to
reproduce these correlations.
In more general situations, where the machine inputs can be arbitrary,
simpler random generation processes can be used. For instance, it may not
be important in some applications to reproduce the correlations between bits
and between successive vectors. The user may only have information on the statistics of the individual input bits, such as the probability P and transition density D of each input. In this case, one can design a random generation
process to produce signals that have the required P and D statistics, as
follows.
Using Equations (A.3) in Appendix A, one can compute from P and D the
mean high time and mean low time of the signal. By assuming a certain distribution
type for the high and low pulse widths, one can then easily generate
a logic signal with the required statistics. For instance, if one uses a geometric distribution (which is equivalent to the signals x_i being individually Markov), then one obtains fixed values for the per-cycle transition probabilities P{x_i(k+1) = 0 | x_i(k) = 1} and P{x_i(k+1) = 1 | x_i(k) = 0}, as shown in Xakellis and Najm [1994], and generates the logic signals accordingly. Incidentally, in this case, even though
the inputs are Markov, the FSM itself is not necessarily a Markov system.
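The following C sketch illustrates this construction, assuming geometrically distributed high and low times so that each input bit is a two-state Markov chain; by (A.3) the per-cycle transition probabilities are D/(2P) and D/(2(1 − P)), which are valid probabilities whenever D ≤ 2 min(P, 1 − P). The generator names and the use of rand() are assumptions, not the authors' code.
#include <stdlib.h>
typedef struct { double p10, p01; int bit; } bit_gen;  /* P(1->0), P(0->1), current value */
static void bit_gen_init(bit_gen *g, double P, double D)
{
    g->p10 = D / (2.0 * P);                        /* 1/tau1 */
    g->p01 = D / (2.0 * (1.0 - P));                /* 1/tau0 */
    g->bit = (rand() / (double)RAND_MAX) < P;      /* start roughly in steady state */
}
static int bit_gen_next(bit_gen *g)
{
    double u = rand() / (double)RAND_MAX;
    if (g->bit) { if (u < g->p10) g->bit = 0; }
    else        { if (u < g->p01) g->bit = 1; }
    return g->bit;
}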
Finally, if only the probabilities P are available for the input nodes, and if
it is not important to reproduce any input correlation information, one can generate
the inputs by a sequence of coin flips using a random number generator.
In this case, the inputs are said to be independent and identically distributed
and the FSM can be shown to be Markov, but the individual state bits x i may
not be Markov.
Our implementation results for this approach, reported in the next section,
are based on this last case of independent and identically distributed inputs.
However, the technique is applicable to any other mechanism of input genera-
tion, as we have explained.
5. EXPERIMENTAL RESULTS
This technique was implemented in a prototype C program that accepts a netlist
description of a synchronous sequential machine. The program performs a zero
delay logic simulation and monitors the flip-flop output probabilities and densities
until they converge. To improve the speed, we simulate 31 copies of the
machine in parallel, using bitwise operations. We have tested the program on a
number of circuits from the ISCAS-89 sequential benchmark set [Brglez et al.
1989].
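The bit-parallel simulation of 31 machine copies can be sketched in C as follows; the netlist data structures are illustrative assumptions (gates are assumed to be in topological order), and only the idea of packing one copy per bit of a word and evaluating gates with bitwise operations comes from the text.
#include <stdint.h>
enum gate_kind { GATE_AND, GATE_OR, GATE_NOT };
typedef struct { enum gate_kind kind; int in0, in1, out; } gate;
/* val[i] holds the packed value of net i across the 31 parallel copies (bits 0..30). */
static void eval_netlist(const gate *g, int ngates, uint32_t *val)
{
    for (int i = 0; i < ngates; i++) {
        switch (g[i].kind) {
        case GATE_AND: val[g[i].out] = val[g[i].in0] & val[g[i].in1]; break;
        case GATE_OR:  val[g[i].out] = val[g[i].in0] | val[g[i].in1]; break;
        case GATE_NOT: val[g[i].out] = ~val[g[i].in0];                break;
        }
    }
}
/* Accumulate per-copy counts of ones and transitions on one flip-flop output word. */
static void update_counts(uint32_t now, uint32_t prev, long *ones, long *trans)
{
    for (int b = 0; b < 31; b++) {
        *ones  += (now >> b) & 1u;
        *trans += ((now ^ prev) >> b) & 1u;
    }
}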
All the results presented below are based on an error tolerance of 0.05 and a confidence level of 95%, which together determine the number N of parallel machine copies required. For each circuit we choose X_0 and X_1 as described in Section 4.3. Under these conditions, a typical convergence characteristic is shown in Figure 2. The two waveforms shown correspond to the probability estimates obtained starting from X_0 and starting from X_1, for node X.3 of circuit s838.1 (this circuit has 34 inputs, 32 flip-flops, and 446 gates).
Fig. 2. Convergence of probability for s838.1, node X.3.
Table
I. A Few ISCAS-89 Circuits
Circuit No. Inputs No. Latches No. Gates
This decaying sinusoidal convergence is typical, although in some cases the
convergence is simply a decaying exponential and is much faster.
In order to assess the accuracy of the technique, we compared the (0.05, 95%) results to those of a much more accurate run of the same program. We used a 0.005 error tolerance and 99% confidence for the accurate simulation, which required 66,349 separate copies of the machine for each of the two initial states X_0 and X_1. Each of these 66,349 machines was simulated independently. Since we
used a 100-point filter and a convergence window of 3 timesteps, at least 103
timesteps were required before we started to apply the convergence criteria.
Assuming convergence in the shortest possible time (103 cycles), this implies that a total of more than 6.8 million (103 × 66,349) vectors were fed into each
set of machines (X 0 and X 1 ).
Since we are interested only in steady-state node values during the simu-
lation, there is no need to use a more accurate timing simulator or a circuit
simulator to make these comparisons. These highly accurate runs take a long
time and, therefore, they were only performed on the limited set of benchmark
circuits given in Table I. We then computed the difference between the statistics
from the (0.05, 95%) run and those from the (0.005, 99%) run. Figure 3 shows
the resulting error histogram for all the state line probability for all the flip-flop
outputs from the circuits in Table I, and Figure 4 is the error histogram for the
state line transition density. Notice that all the nodes have errors well within
Fig. 3. Flip-flop probability error histogram.
the desired user-specified 0.05 error bounds for both the state line probability
and transition density.
We monitored the speed of this technique and report some results in Table II,
where the execution times are on a SUN Sparc-5 workstation. These circuits
are much larger (especially in terms of flip-flop count) than the largest ISCAS-89 circuits tested in previous methods [Monteiro and Devadas 1994; Tsui et al.
1994]. Furthermore, for those circuits in the table that were also tested in
Monteiro and Devadas [1994] and Tsui et al. [1994], this technique works much
faster. Since our method does not use BDDs to compute probabilities, there are
no memory problems with running large circuits. The largest circuit, s38584.1,
requires 19.2 MB on a SUN Sparc5.
Table
II also gives the number of cycles required for convergence and the
number of state signals that are classified as low density. For the low-density criterion, the parameters used are k_nochange = 500 and a low-density threshold increment (if required) of 0.05. As mentioned earlier, at least 103 timesteps are
required before we start to apply the convergence criteria. Most of the smaller
circuits without low-density flip-flops converge within 120 timesteps. As ex-
pected, however, the number of cycles required increases for larger circuits.
The larger flip-flop count means that the machine state space is much larger
and the probability of individual machine states becomes much smaller. As
a result, many more cycles may be required to achieve equilibrium. Larger
circuits also require more CPU time per cycle, since the simulation of the
Fig. 4. Flip-flop density error histogram.
Table
II. Convergence Information for ISCAS89 Benchmarks
Circuit No. Inputs No. Latches No. Gates No. Cycles CPU Time No. Low Dens
s9234.1 36 211 5597 614 12.12 min. 3
s38584.1 38 1426 19253 112 10.25 min. 0
combinational part of the circuit also takes more time in comparison to smaller
circuits.
In the case of circuits with low-density flip-flops, the number of cycles required
was at least 603 (103 before we applied the normal criteria and 500 wait
cycles before we applied the low-density criterion). Circuit s13207.1 is the only
one in which we needed to increment the low-density threshold D min .
6. SUMMARY AND CONCLUSIONS
Most existing power estimation techniques are limited to combinational cir-
cuits, whereas all practical circuit designs are sequential. We have presented
a new statistical technique for estimation of the state line statistics in synchronous
sequential circuits. By simulating multiple copies of the circuit, under
independently selected input sequences, statistics on the flip-flop outputs
can be collected. This allows efficient power estimation for the whole design. An
important advantage of this approach is that the desired accuracy of the results
can be specified up front by the user; with some approximation, the algorithm
iterates until the specified accuracy is achieved.
We have implemented this technique and tested it on a number of sequential
circuits with up to 1526 flip-flops and a state space of size greater than 10 459 . The
additional convergence criterion for low-density nodes and the new simulation-based
mechanism to determine the starting state of the FSM leads to reduced
run-time. We confirm that the accuracy specified by the user is indeed achieved
by our technique. The memory requirements are very reasonable, so that very
large circuits can be handled with ease.
A. DISCRETE-TIME LOGIC SIGNALS
Let Z be the set of all integers, and let x(k), k ∈ Z, be a function of discrete time that takes the values 0 or 1. We use such time
functions to model discrete-time logic signals in digital circuits. The definitions
and results presented below represent extensions of similar concepts developed
for continuous time signals [Najm, 1993b]. The main results, Propositions 1
and 3, are therefore given without proof. In Proposition 2 we present a bounding
relationship between probability and density for discrete-time signals.
A.1 Probability and Density
Notice that the set of integers {⌊−K/2⌋ + 1, . . . , ⌊+K/2⌋} contains exactly K elements, where K > 0 is a positive integer.
Definition 1. The signal probability of x(k), denoted P(x), is defined as:
P(x) = lim_{K→∞} (1/K) Σ_{k=⌊−K/2⌋+1}^{⌊+K/2⌋} x(k). (A.1)
It can be shown that the limit in (A.1) always exists.
If x(k) ≠ x(k − 1), we say that the signal undergoes a transition at time k. Corresponding to every logic signal x(k), one can construct another logic signal T_x(k) that is 1 if x(k) undergoes a transition at k; otherwise T_x(k) = 0. Let n_x(K) be the number of transitions of x(k) over {⌊−K/2⌋ + 1, . . . , ⌊+K/2⌋}. Therefore, n_x(K) = Σ_{k=⌊−K/2⌋+1}^{⌊+K/2⌋} T_x(k).
Definition 2. The transition density of a logic signal x(k), denoted by D(x), is defined as
D(x) = lim_{K→∞} n_x(K)/K. (A.2)
Notice that n_x(K) ≤ K, so that 0 ≤ n_x(K)/K ≤ 1, and the limit in (A.2) exists.
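As a concrete illustration (not part of the appendix itself), the finite-window analogues of (A.1) and (A.2) can be computed in C as follows; the infinite limits are only approximated over K samples.
static void estimate_p_d(const int *x, int K, double *P, double *D)
{
    int ones = 0, transitions = 0;
    for (int k = 0; k < K; k++) {
        ones += x[k];                     /* running count for (A.1) */
        if (k > 0 && x[k] != x[k - 1])
            transitions++;                /* running count for (A.2) */
    }
    *P = (double)ones / K;
    *D = (double)transitions / K;
}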
The time between two consecutive transitions of x(k) is referred to as an
intertransition time: if x(k) has a transition at i and the next transition is at i + n, there is an intertransition time of length n between the two transitions. Let τ_1 (τ_0) be the average of the high (low) intertransition times of x(k), that is, those corresponding to intervals during which x(k) is 1 (0). In general, there is no guarantee of the existence of τ_0 and τ_1. If the total number of transitions in positive time is finite, then we say that there is an infinite intertransition time following the last transition, and τ_0 or τ_1 will not exist. A similar convention is made for negative time.
PROPOSITION 1. If τ_0 and τ_1 exist, then
P(x) = τ_1 / (τ_1 + τ_0) and D(x) = 2 / (τ_1 + τ_0). (A.3a, b)
PROPOSITION 2. P(x) and D(x) are related as
D(x) ≤ 2 min(P(x), 1 − P(x)).
PROOF. From (A.3), it is easy to arrive at:
τ_1 = 2P(x)/D(x) and τ_0 = 2(1 − P(x))/D(x). (A.4)
Since time is discrete, τ_1 ≥ 1 and τ_0 ≥ 1. Combining this with (A.4) leads to the required result.
Another way of expressing this result is to say that, for a given P(x), D(x) is restricted to the shaded region shown in Figure 5.
A.2 The Companion Process
Let x(k), k ∈ Z, be a discrete-time stochastic process [Najm 1993a] that takes the values 0 or 1, transitioning between them at random discrete transition times. Such a process is called a 0-1 process. A logic signal x(k) can be thought of as a sample of a 0-1 stochastic process; that is, x(k) is one of an infinity of possible signals that comprise the family making up the process.
Fig. 5. Relationship between density and probability.
A stochastic process is said to be stationary if its statistical properties are
invariant to a shift of the time origin [Najm 1993a]. Among other things, the
mean E[x(k)] of such a process is a constant, independent of time, and is denoted
by E[x]. Let n_x(K) denote the number of transitions of x(k) over {⌊−K/2⌋ + 1, . . . , ⌊+K/2⌋}. For a given K, n_x(K) is a random variable. If x(k) is stationary,
then E[n x (K )] depends only on K , and is independent of the location of the
time origin. Furthermore, one can show that if x(k) is stationary, then the
mean E[n x (K )/K ] is constant, irrespective of K .
Let z ∈ Z be a random variable whose cumulative distribution function F_z(k) is (in a limiting sense) the same at every finite k, with F_z(−∞) = 0 and F_z(+∞) = 1; we then say that z is uniformly distributed over the whole integer set Z. We use z to construct from x(k) a stochastic 0-1 process x̃(k), called its companion process, defined as follows.
Definition 3. Given a logic signal x(k) and a random variable z, uniformly distributed over Z, define a 0-1 stochastic process x̃(k), called the companion process of x(k), given by x̃(k) = x(k + z).
For any given k, x̃(k) is a random variable, being a function of the random variable z. Intuitively, x̃(k) is a family of shifted copies of x(k), each shifted by a value of the random variable z. Thus, not only is x(k) a sample of x̃(k), but one can also relate statistics of the process x̃(k) to properties of the logic signal x(k), as follows.
PROPOSITION 3. The companion process x̃(k) of a logic signal x(k) is stationary, with E[x̃] = P(x) and E[n_x̃(K)]/K = D(x).
--R
Combinational profiles of sequential benchmark circuits.
Probabilistic analysis of large finite state machines.
The probability of error detection in sequential circuits using random test vectors.
Probability and Statistics for Engineers.
A methodology for efficient estimation of switching activity in sequential logic circuits.
Statistical estimation of the signal probability in VLSI circuits.
A survey of power estimation techniques in VLSI circuits.
Power estimation in sequential circuits.
Exact and approximate methods for calculating signal and transition probabilities in FSMs.
Statistical estimation of the switching activity in digital circuits.
Discrete-time signal processing.
One-Dimensional Digital Signal Processing.
Sanjukta Bhanja , Karthikeyan Lingasubramanian , N. Ranganathan, A stimulus-free graphical probabilistic switching model for sequential circuits using dynamic bayesian networks, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.11 n.3, p.773-796, July 2006 | switching activity;transition density;signal statistics;sequential circuit;signal probability;power estimation;finite-state machine |
567809 | On computing givens rotations reliably and efficiently. | We consider the efficient and accurate computation of Givens rotations. When f and g are positive real numbers, this simply amounts to computing the values of apparently trivial computation merits closer consideration for the following three reasons. First, while the definitions of c, s and r seem obvious in the case of two nonnegative arguments f and g, there is enough freedom of choice when one or more of f and g are negative, zero or complex that LAPACK auxiliary routines SLARTG, CLARTG, SLARGV and CLARGV can compute rather different values of c, s and r for mathematically identical values of f and g. To eliminate this unnecessary ambiguity, the BLAS Technical Forum chose a single consistent definition of Givens rotations that we will justify here. Second, computing accurate values of c, s and r as efficiently as possible and reliably despite over/underflow is surprisingly complicated. For complex Givens rotations, the most efficient formulas require only one real square root and one real divide (as well as several much cheaper additions and multiplications), but a reliable implementation using only working precision has a number of cases. On a Sun Ultra-10, the new implementation is slightly faster than the previous LAPACK implementation in the most common case, and 2.7 to 4.6 times faster than the corresponding vendor, reference or ATLAS routines. It is also more reliable; all previous codes occasionally suffer from large inaccuracies due to over/underflow. For real Givens rotations, there are also improvements in speed and accuracy, though not as striking. Third, the design process that led to this reliable implementation is quite systematic, and could be applied to the design of similarly reliable subroutines. | Introduction
Givens rotations are widely used in numerical linear algebra. Given f and g, a Givens rotation is a 2-by-2 unitary matrix
R(c, s) = [ c s ; -conj(s) c ] such that R(c, s) [ f ; g ] = [ r ; 0 ]. (1)
The fact that R(c, s) is unitary implies R(c, s) R(c, s)^H = I.
# Computer Science Division University of California, Berkeley, CA 94720 (dbindel@cs.berkeley.edu). This material is
based upon work supported under a National Science Foundation Graduate Research Fellowship.
Computer Science Division and Mathematics Dept., University of California, Berkeley, CA 94720
(demmel@cs.berkeley.edu). This material is based in part upon work supported by the Advanced Research Projects
Agency contract No. DAAH04-95-1-0077 (via subcontract No. ORA4466.02 with the University of Tennessee), the Department
of Energy grant No. DE-FG03-94ER25219, and contract No. W-31-109-Eng-38 (via subcontract Nos. 20552402 and 941322401
with Argonne National Laboratory), the National Science Foundation grants ASC-9313958 and ASC-9813361, and NSF
Infrastructure Grant Nos. CDA-8722788 and CDA-9401156.
# Computer Science Division and Mathematics Dept., University of California, Berkeley, CA 94720
(wkahan@cs.berkeley.edu).
- NERSC, Lawrence Berkeley National Lab, (osni@nersc.gov).
From this we see that c must be real and that
c^2 + |s|^2 = 1. (2)
When f and g are real and positive, the widely accepted convention is to let r = sqrt(f^2 + g^2), c = f/r and s = g/r. However, the negatives of c, s and r also satisfy conditions (1) and (2). And when f = g = 0, any c and s satisfying (2) also satisfy (1). So c, s and r are not determined uniquely. This slight ambiguity has led to a surprising diversity of inconsistent definitions in the literature and in software. For example, the LAPACK routines SLARTG, CLARTG, SLARGV and CLARGV, the Level 1 BLAS routines SROTG and CROTG [6], as well as Algorithm 5.1.5 in [5] can get significantly different answers for mathematically identical inputs.
To avoid this unnecessary diversity, the BLAS (Basic Linear Algebra Subroutines) Technical Forum, in
its design of the new BLAS standard [3], chose to pick a single definition of Givens rotations. Section 2
below presents and justifies the design.
The BLAS Technical Forum is also providing reference implementations of the new standard. In the
case of computing Givens rotations and a few other kernel routines, intermediate over/underflows in straightforward implementations can make the output inaccurate (or stop execution or even cause an infinite loop while attempting to scale the data into a desired range) even though the true mathematical answer might be unexceptional. To compute c, s and r as efficiently as possible and reliably despite over/underflow is surprisingly complicated, particularly for complex f and g.
Square root and division are by far the most expensive real floating point operations on current machines,
and it is easy to see that one real square root and one real division (or perhaps a single reciprocal-square-
root operation) are necessary to compute c, s and r. With a little algebraic manipulation, we also show
that a single square root and division are also sufficient (along with several much cheaper additions and
multiplications) to compute c, s and r in the complex case. In contrast, the algorithm in the CROTG
routine in the Fortran reference BLAS uses at least 5 square roots and 9 divisions, and perhaps 13 divisions,
depending on the implementation of the complex absolute value function cabs.
However, these formulas for c, s and r that use just one square root and one division are susceptible
to over/underflow, if we must store all intermediate results in the same precision as f and g. Define ||w|| = max(|re w|, |im w|) for any complex number w. We systematically identify the values of f and g for which these formulas are reliable (i.e. guaranteed not to underflow in such a way that unnecessarily loses relative precision, nor to overflow) by generating a set of simultaneous linear inequalities in log ||f|| and log ||g||, which define a polygonal region S (for Safe) in (log ||f||, log ||g||) space in which the formulas may be used.
This is the most common situation, which we call Case 1 in the algorithm. In this case, the new algorithm
runs 25% faster than LAPACK's CLARTG routine, and nearly 4 times faster than the CROTG routine in
the vendor BLAS on a Sun Ultra-10, ATLAS BLAS, or Fortran reference BLAS.
If (log ||f||, log ||g||) lies outside S, there are two possibilities: scaling f and g by a constant to fit inside S, or using different formulas. Scaling may be interpreted geometrically as shifting S parallel to the diagonal line log ||f|| = log ||g|| in (log ||f||, log ||g||) space. The region covered by shifted images of S (S's "shadow") is the region in which scaling is possible. In part of this shadow (Case 4 in the algorithm), we do scale f and g to lie inside S and then use the previous formula.
The remaining region of (log ||f||, log ||g||) space, including space outside S's shadow, consists of regions where log ||f|| and log ||g|| differ so much that |f|^2 + |g|^2 rounds either to |f|^2 (Case 2 in the algorithm) or to |g|^2 (Case 3). Replacing |f|^2 + |g|^2 by either |f|^2 or |g|^2 simplifies the algorithm, and different formulas are used.
In addition to the above 4 cases, there are 2 other simpler ones, when f and/or g is zero.
There are three different ways to deal with these multiple cases. The first way is to have tests and branches depending on ||f|| and ||g|| so that only the appropriate formula is used. This is the most portable method, using only working precision (the precision of the input/output arguments), and is the one explored in most detail in this paper.
The second method is to use exception handling, i.e. assume that f and g fall in the most common case (Case 1), use the corresponding formula, and only if a floating point exception is raised (overflow, underflow, or invalid) is an alternative formula used [4]. If sufficiently fast exception handling is available, this method may be fastest.
The third method assumes that a floating point format with a wider exponent range is available to store
intermediate results. In this case we may use our main new formula (Case 1) without fear of over/underflow,
greatly simplifying the algorithm (the cases of f and/or g being zero remain). For example, IEEE double
precision (with an 11-bit exponent) can be used when inputs f and g are IEEE single precision numbers
(with 8-bit exponents). On a Sun Ultra-10, this mixed-precision algorithm is nearly exactly as fast in Case 1
of the single precision algorithm described above, and usually rather faster in Cases 2 through 4. On an Intel
machine double extended floating point (with 15-bit exponents) can be used for single or double precision
inputs, and this would be the algorithm of choice. However, with double precision inputs on a machine like
a Sun Ultra-10 without double-extended arithmetic, or when double precision is much slower than single
precision, our new algorithm with 4 cases is the best we know.
In addition to the new algorithm being significantly faster than previous routines, it is more accurate. All
earlier routines have inputs that exhibit large relative errors, whereas ours is always nearly fully accurate.
The rest of this paper is organized as follows. Section 2 presents and justifies the proposed definition of
Givens rotations. Section 3 details the di#erences between the proposed definition and existing LAPACK
and Level 1 BLAS code. Section 4 describes our assumptions about floating point arithmetic. Section 5
presents the algorithm in the complex case, for the simple cases when presents the
algorithm in the most common complex case, assuming that neither overflow nor underflow occur (Case 1).
Section 7 shows alternate formulas for complex Givens rotations when f and g di#er greatly in magnitude
(Cases 2 and 3). Section 8 describes scaling when f and g are comparable in magnitude but both very large
or very small (Case 4). Section 9 compares the accuracy of our new complex Givens routine and several
alternatives; only ours is accurate in all cases. Section 10 discusses the performance of our complex Givens
routine. Sections 11, 12 and 13 discuss algorithms, accuracy and timing for real Givens rotations, which are
rather easier. Section 14 draws conclusions. The actual software is included in an appendix.
2 Givens rotations
We will use the following function, defined for a complex variable x, in what follows:
sign(x) = x / |x|, x ≠ 0.
sign(x) is clearly a continuous function away from x = 0. When x is real the definition simplifies to the usual ±1 according to the sign of x.
As stated in the introduction, we need extra requirements besides (1) and (2) in order to determine c
and s (and hence r) uniquely. For when at least one of f and g is nonzero, the most that we can deduce from the first component of R(c, s)[f, g]^T in (1) is that |r| = sqrt(|f|^2 + |g|^2), i.e. r = e^{iθ} sqrt(|f|^2 + |g|^2) for some real θ. From the fact that c must be real we deduce that if f ≠ 0 then
r = ± sign(f) sqrt(|f|^2 + |g|^2), c = ± |f| / sqrt(|f|^2 + |g|^2), s = ± sign(f) conj(g) / sqrt(|f|^2 + |g|^2), (3)
and if f = 0 (so g ≠ 0) then
r = e^{iθ} |g|, c = 0, s = e^{iθ} conj(g) / |g|. (4)
As stated before, when f = g = 0, c and s can be chosen arbitrarily, as long as they satisfy (2).
The extra requirements initially chosen by the BLAS Technical Forum to help resolve the choice of ± sign in (3) and θ in (4) are as follows.
R1 The definitions for real and complex data should be consistent, so that real data passed to the complex algorithm should result in the same answers (modulo roundoff) as from the real algorithm.
R2 Current LAPACK subroutines that use Givens rotations should continue to work correctly with the new definition.
The current LAPACK subroutines SLARTG and CLARTG (which compute a single real and complex Givens rotation, resp.) do not satisfy requirement R1. Furthermore, the LAPACK subroutines SLARGV and CLARGV for computing multiple Givens rotations do not compute the same answers as SLARTG and CLARTG, resp. The differences are described in section 3 below. So some change in practice is needed to have consistent definitions. (Indeed, this was the original motivation for the BLAS Technical Forum not simply adopting the LAPACK definitions unchanged.)
However, requirements R1 and R2 do not immediately resolve the choice of sign in (1). To proceed we add requirement
R3 The mapping from (f, g) to (c, s, r) should be continuous whenever possible.
Continuity of c and s as functions of f and g is not possible everywhere, because as real (f, g) approaches (0, 0) along the line (f, g) = t (cos θ, sin θ), c → cos θ and s → sin θ, so c and s must be discontinuous at (0, 0).
But consider c, s, r as functions of (f, g) = (e^{iφ}, 1) as φ increases from 0 to 2π, i.e. f traverses the unit circle in the complex plane. At φ = 0 consider the common convention (c, s, r) = (1/√2, 1/√2, √2). As φ increases, |f| remains equal to 1, so by (3) c = ±1/√2. Since c is real, continuity implies c stays fixed at +1/√2 for all φ, and hence s and r are continuous as desired. Thus requirement R3 implies that c must be nonnegative. Together with (3), this implies that when f ≠ 0 we have
r = sign(f) sqrt(|f|^2 + |g|^2), c = |f| / sqrt(|f|^2 + |g|^2), s = sign(f) conj(g) / sqrt(|f|^2 + |g|^2). (5)
These formulas obviously define c, s and r continuously away from f = 0. When g = 0 they simplify to (c, s, r) = (1, 0, f). This is attractive because R(1, 0) is the identity matrix, so using it to multiply an arbitrary pair of vectors requires no work.
Next consider the case f = 0 and g ≠ 0, in the light of requirement R3. Since c and s are not continuous at f = 0, because sign(f) can change arbitrarily in a small complex neighborhood of 0, we cannot hope to define θ by a continuity argument that includes complex f. Instead, we ask just that c, s, and r be continuous functions of real f ≥ 0 and complex g ≠ 0, i.e. they should be continuous as f approaches zero from the right. This limit is easily seen to be
c = 0, s = sign(conj(g)) = conj(g) / |g|, r = |g|, (6)
which we take as the definition for f = 0 and g ≠ 0.
Finally we consider the case f = g = 0. This is impossible to define by continuity, since f and g can approach 0 from any direction, so instead we add requirement
R4 Given a choice of c and s, choose those requiring the least work.
R(c, s) is typically used to multiply a pair of vectors, and since R(1, 0) = I requires no work to do this, we set c = 1, s = 0 and r = 0 when f = g = 0.
In summary, the algorithm for complex or real f and g is as follows.
Algorithm 1: Computing Givens Rotations
if g = 0 then
c := 1; s := 0; r := f (includes the case f = g = 0)
else if f = 0 then (g must be nonzero)
c := 0; s := sign(conj(g)); r := |g|
else (f and g both nonzero)
c := |f|/sqrt(|f|^2 + |g|^2); s := sign(f) conj(g)/sqrt(|f|^2 + |g|^2); r := sign(f) sqrt(|f|^2 + |g|^2)
endif
When f and g are real, the algorithm can be slightly simplified by replacing conj(g) by g.
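The mathematical definition can be transcribed directly into C99 complex arithmetic as below. This naive version is only a sketch: it ignores the over/underflow and NaN issues that the rest of the paper addresses (and which the actual Fortran code handles), since |f|^2 + |g|^2 can overflow or underflow even when c, s and r are representable.
#include <complex.h>
#include <math.h>
static double complex csign(double complex x)   /* sign(x) = x/|x|, x != 0 */
{
    return x / cabs(x);
}
static void givens(double complex f, double complex g,
                   double *c, double complex *s, double complex *r)
{
    if (g == 0.0) {                    /* includes f = g = 0 */
        *c = 1.0; *s = 0.0; *r = f;
    } else if (f == 0.0) {
        *c = 0.0; *s = csign(conj(g)); *r = cabs(g);
    } else {
        double d = sqrt(creal(f)*creal(f) + cimag(f)*cimag(f)
                      + creal(g)*creal(g) + cimag(g)*cimag(g));
        *c = cabs(f) / d;
        *s = csign(f) * conj(g) / d;
        *r = csign(f) * d;
    }
}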
2.1 Exceptional cases
When this algorithm is run in IEEE floating point arithmetic [2] it is possible that some inputs might be NaNs (Not-a-Number symbols) or ±∞. In this section we discuss the values c, s and r should have in these
cases; we insist that the routine must terminate and return some output values in all cases.
We say that a complex number is a NaN if at least one of its real and imaginary parts is a NaN. We say
that a complex number is infinite if at least one of its real and imaginary parts is infinite, and neither is a
NaN.
First suppose at least one NaN occurs as input. The semantics of NaN are that any binary or unary
arithmetic operation on a NaN returns a NaN, so that by extension our routine ought to return NaNs as
well. But we see that our definition above will not necessarily do this, since if an implementation
might reasonably still return since these require no arithmetic operations to compute.
Rather than specify exactly what should happen when an input is NaN, we insist only that at least r be a
NaN, and perhaps c and s as well, at the implementor's discretion. We permit this discretion because NaNs
are (hopefully!) very rare in most computations, and insisting on testing for this case might slow down the
code too much in common cases.
To illustrate the challenges of correct portable coding with NaNs, consider computing max(a, b), which we will need to compute ||f|| and ||g||. If max is implemented (in hardware or software) as "if (a > b) then a else b" then max(0, NaN) returns NaN but max(NaN, 0) returns 0. On the other hand, the equally reasonable implementation "if (a < b) then b else a" instead returns 0 and NaN, respectively. Thus an implementation might mistakenly decide that ||g|| = 0, missing the NaN in g. Our model implementation will work with any implementation of max.
Next suppose at least one +∞ or −∞ occurs as input, but no NaNs. In this case it is reasonable to return the limiting values of the definition if they exist, or NaNs otherwise. For example, when g is infinite one might return c = 0 and r = |g| = +∞ but a NaN for s, since s cannot be well-defined while r = |g| can be. Or one could simply return NaNs even if a limit existed. Again, to avoid overspecifying rare cases and thereby possibly slowing down the common cases, we leave it to the implementor's discretion which approach to take. But we insist that at least r either be infinite or a NaN.
The assiduous reader will have noted that Algorithm 1 leaves ambiguous how the sign of zero is treated (IEEE arithmetic includes both +0 and −0). Different implementations are free to return +0 or −0 whenever a zero is to be delivered. There seems to be little to be gained by insisting on particular signs of zero, for example those that would actually be computed if R(1, +0) were multiplied by the vector (f, g).
3 Differences from current LAPACK and BLAS codes
Here is a short summary of the differences between Algorithm 1 and the algorithms in LAPACK 3.0 [1] and
earlier versions, and in the Level 1 BLAS [6]. The LAPACK algorithms in question are SLARTG, CLARTG,
SLARGV and CLARGV, and the Level 1 BLAS routines are SROTG and CROTG. All the LAPACK release
3.0 test code passed as well with the new Givens rotations as with the old ones (indeed, one test failure in the
old code disappeared with the new rotations), so the new definition of Givens rotations satisfies requirement
R2.
returns
The comment in SLARTG about "saving work" does not mean the LAPACK bidiagonal SVD routine
and g are nonzero), SLARTG returns
the negatives of the values of c, s and r returned by Algorithm 1.
Algorithm 1 is mathematically identical to CLARTG. But it is not numerically identical, see
section 9 below.
returns
and returns returns
returns
and g #= 0, CLARGV returns
by r and g by a quantity z from which one can reconstruct both s and c
otherwise). Besides this difference, r is assigned the sign of g as long as either f or g is nonzero, rather than the sign of f (or 1).
but does not compute a quantity like z. CROTG sets
and g are
nonzero, it matches Algorithm 1 mathematically, but not numerically.
4 Assumptions about floating point arithmetic
In LAPACK, we have the routines SLAMCH and DLAMCH available, which return various machine constants
that we will need. In particular, we assume that machine epsilon is available, which is a power of
the machine radix. On machines with IEEE floating point arithmetic [2], it is either 2 -24 in single or 2 -53
in double. Also, we use SAFMIN, which is intended to be the smallest normalized power of the radix whose
reciprocal can be computed without overflow. On IEEE machines this should be the underflow threshold,
2^-126 in single and 2^-1022 in double. However, on machines where complex division is implemented in the compiler by the fastest but risky algorithm
(a + bi)/(c + di) = (ac + bd)/(c^2 + d^2) + i (bc − ad)/(c^2 + d^2),
the exponent range is effectively halved, since c^2 + d^2 can overflow (or underflow) even though the true quotient is near 1. On these machines SAFMIN may be set to the square root of the underflow threshold to indicate this. As a result, our scaling algorithms make no assumptions about the proximity of SAFMIN to the actual underflow threshold, and indeed any sufficiently tiny value will lead to correct code, though the closer SAFMIN is to the underflow threshold the fewer scaling steps are needed in extreme cases.
Our algorithms also work correctly and accurately whether or not underflow is gradual. This is important on the processors where default "fast mode" replaces all underflowed quantities by zero. This means that the effective underflow threshold is SAFMIN/ε, since underflow in x can cause a relative error in SAFMIN/ε + x of at most ε, the same as roundoff.
In our scaling algorithms we will use the quantity z = (ε/SAFMIN)^{1/4} rounded to the nearest power of the radix. Thus we use z^{-4} as the effective underflow threshold, and z^{4} as the overflow threshold. Note that we may safely add and subtract many quantities bounded in magnitude by z^4 without incurring overflow. We repeat that the algorithms work correctly, if more slowly, if a conservative estimate of SAFMIN is used (i.e. one that is too large). The powers of z used by the software are computed on the first call, and then saved and reused for later calls. The values of z and its powers for IEEE machines with SAFMIN equal to the underflow threshold are as follows.
SAFMIN equal to the underflow threshold are as follows.
Single Precision Double Precision
z
z
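A C sketch of how z could be derived from the standard <float.h> machine constants is shown below; using FLT_EPSILON/2 and FLT_MIN as stand-ins for the LAPACK SLAMCH values of ε and SAFMIN is our assumption, not part of the paper.
#include <float.h>
#include <math.h>
static float compute_z(void)
{
    float eps    = FLT_EPSILON * 0.5f;     /* unit roundoff, 2^-24 in IEEE single */
    float safmin = FLT_MIN;                /* smallest normalized number, 2^-126  */
    double e = 0.25 * (log2((double)eps) - log2((double)safmin));   /* exponent of (eps/safmin)^(1/4) */
    return ldexpf(1.0f, (int)lrint(e));    /* round the exponent to the nearest integer power of 2 */
}
/* z*z and z*z*z*z then give the thresholds z^2 and z^4 used by the algorithm. */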
When inputs include ±∞ and NaN, we assume the semantics of IEEE arithmetic [2] are used. In later discussion we denote the actual overflow threshold by OV, the underflow threshold by UN, and the smallest positive number by m, which is 2ε·UN on a machine with gradual underflow, and UN otherwise.
5 Complex Algorithm when f = 0 or g = 0
In what follows we use the convention of capitalizing all variable names, so that C, S and R are the data to
be computed from F and G. We use the notation re(F) and im(F) to mean the real and imaginary parts of
F, and ||w|| = max(|re w|, |im w|) for any complex number w. We begin by eliminating the easy cases where
at least one of F and G is zero. Variables F, G, S and R are complex, and the rest are real.
Algorithm 2: Computing Givens Rotations when f = 0 or g = 0
if G = 0 then
C := 1; S := 0; R := F . includes the case F = G = 0
else if F = 0 then
. G must be nonzero
scale G by powers of z^{-4} so that z^{-2} ≤ ||G|| ≤ z^{2}
C := 0; R := sqrt(re(G)^2 + im(G)^2); S := conj(G)/R
unscale R by powers of z^{-4}
else
. both F and G are nonzero
. use algorithm described below
endif
We note that even though f = 0 is an "easy" case we need to scale G to avoid over/underflow when computing re(G)**2 + im(G)**2.
Now we discuss exception handling. It noticeably speeds up the code to implement the tests G=0 and F=0 by
precomputing which will be used later, and then testing whether SG=0 and SF=0.
But as described in section 2.1, either of these tests might succeed even though the real or imaginary part
of F or G is a NaN. Therefore the logic of the algorithm must change slightly as shown below.
Algorithm 2E: Computing Givens Rotations when exception handling
. includes the case
. In case G is a NaN, make sure R is too
else if
. G must be nonzero
scale G by powers of z^{-4} so that z^{-2} ≤ ||G|| ≤ z^{2}
. limit number of scaling steps in case G infinite or NaN
unscale R by powers of z -4
. In case F is a NaN, make sure R is too
else
. both F and G are nonzero
. use algorithm described below
endif
The test SCALEG=0 can succeed if one part of G is 0 and the other is a NaN, which is why we must return
instead of R = F to make sure the input NaN propagates to the output R Note that outputs C=1
and S=0 even if there are NaNs and infinities on input.
Similarly, the branch for F = 0 can be taken when G is a NaN or infinity. This means that a loop
to scale G (and SCALEG) into range might not terminate if written without an upper bound on the maximum
number of steps it can take. This maximum is essentially max(#log z OV#log z m#). The timing depends
strongly on implementation details of scaling (use of unrolling, loop structure, etc. The algorithm we used
could probably be improved by tuning to a particular compiler and architecture. C will always be zero, but
S will be a NaN if G is either infinite or a NaN, and R will be infinite precisely if G is infinite.
6 Complex algorithm when f and g are nonzero
Now assume F and G are both nonzero. We can compute C, S and R with the following code fragment,
which employs only one division and one square root. The last column shows the algebraically exact quantity
computed by each line of code. We assume that real*complex multiplications are performed by two
real multiplications (the Fortran implementation does this explicitly rather than relying on the compiler).
Variables F, G, R and S are complex, and the rest are real.
Algorithm 3: Fast Complex Givens Rotations when f and g are "well scaled"
1. F2  := re(F)**2 + im(F)**2      |f|^2
2. G2  := re(G)**2 + im(G)**2      |g|^2
3. FG2 := F2 + G2                  |f|^2 + |g|^2
4. D1  := 1/sqrt(F2*FG2)           1/(|f| sqrt(|f|^2 + |g|^2))
5. C   := F2*D1                    |f|/sqrt(|f|^2 + |g|^2)
6. FG2 := FG2*D1                   sqrt(|f|^2 + |g|^2)/|f|
7. R   := F*FG2                    sign(f) sqrt(|f|^2 + |g|^2)
8. S   := F*D1                     f/(|f| sqrt(|f|^2 + |g|^2))
9. S   := conj(G)*S                f conj(g)/(|f| sqrt(|f|^2 + |g|^2))
Now recall z = (ε/SAFMIN)^{1/4}, so that z^4 is an effective overflow threshold and z^{-4} is an effective underflow threshold. The region where the above algorithm can be run reliably is described by the following inequalities, which are numbered to correspond to lines in the above algorithm. All logarithms are to the base 2.
1. We assume ||f|| ≤ z^2 to prevent overflow in the computation of F2.
2. We assume ||g|| ≤ z^2 to prevent overflow in the computation of G2.
3. This line is safe given previous assumptions.
4a. We assume z^{-2} ≤ ||f|| to prevent underflow of F2 and consequent division by zero in the computation of D1.
4b. We assume ||f|| ≤ z to prevent overflow from the |f|^4 term in F2*FG2 in the computation of D1.
4c. We assume ||f|| ||g|| ≤ z^2 to prevent overflow from the |f|^2 |g|^2 term in F2*FG2 in the computation of D1.
Either 4d. z^{-1} ≤ ||f||
or 4e. z^{-2} ≤ ||f|| ||g||
to prevent underflow of F2*FG2 and consequent division by zero in the computation of D1.
5. This line is safe given previous assumptions. If C underflows, it is deserved.
6. ||g||/||f|| ≤ z^4 to prevent overflow of FG2, since FG2 ≈ |g|/|f| is large.
7. This line is safe given previous assumptions, returning |R| roughly between z^{-1} and z^2. If the smaller component of R underflows, it is deserved.
Figure
1: Inequalities describing the region of no unnecessary over/underflow. UN and OV are the
over/underflow thresholds; m is the smallest representable positive number.
8. This line is safe given previous assumptions, returning |S| roughly between z -2 and 1. The smaller
component of S may underflow, but this error is very small compared to the other component of S.
9. This line is safe given previous assumptions. If S underflows, it is deserved.
Note that all the inequalities in the above list describe half planes in (log ||f||, log ||g||) space. For example inequality 6 becomes
log ||g|| − log ||f|| ≤ 4 log z.
The region described by all inequalities is shown in Figure 1. Each inequality is described by a thin line marked by arrows indicating the side on which the inequality holds. The heavy line borders the safe region S satisfying all the inequalities, where the above algorithm can be safely used.
It remains to say how to decide whether a point lies in S. The boundary of S is complicated, so the time to test for membership in S can be nontrivial. Accordingly, we use the simplest tests that are likely to succeed first, and only then do we use more expensive tests. In particular, the easiest tests are threshold comparisons with ||f|| and ||g||. So we test for membership in the subset of S labeled (1) in Figure 2 by the following algorithm:
if ||f|| ≤ z and ||f|| ≥ z^{-1} and ||g|| ≤ z then
(f, g) is in Region (1)
endif
This is called Case 1 in the software.
Region (1) contains all data where ||f|| and ||g|| are not terribly far from 1 in magnitude (between z^{-1} and z, in both single and double), which we expect to cover most arguments, especially in double.
The complement of Region (1) in S is shown bounded by dashed lines in Figure 2. It is harder to test for, because its boundaries require doing threshold tests on the product ||f|| ||g||, which could overflow. So we will not test for membership in this region explicitly in this case, but do something else instead.
6.1 Exceptional cases
Again we consider the consequence of NaNs and infinities. It is easy to see that if either F or G is infinite, then the above test for membership in Region (1) cannot succeed. So it suffices to consider NaNs. Any test like A ≤ B evaluates to false when either A or B is a NaN, so Case 1 occurs with NaN inputs only when ||f|| and ||g|| are not NaNs, which can occur as described in section 2.1. By examining Algorithm 3 we see that a NaN in F or G leads to FG2 and then all of C, S and R being NaNs.
7 Complex algorithm when f and g differ greatly in magnitude
When |g| ≤ sqrt(ε) |f|, the quantity |f|^2 + |g|^2 rounds to |f|^2, and the formulas for c, s and r may be greatly simplified and very accurately approximated by
c ≈ 1, s ≈ f conj(g) / |f|^2, r ≈ f. (7)
This region is closely approximated by the region ||g|| ≤ ε^{1/2} ||f|| marked (2) in Figure 2, and is called Case 2 in the software.
When instead |f| ≤ sqrt(ε) |g|, the quantity |f|^2 + |g|^2 rounds to |g|^2, and the formulas for c, s and r may be greatly simplified and very accurately approximated by
c ≈ |f| / |g|, s ≈ f conj(g) / (|f| |g|), r ≈ f |g| / |f|. (8)
This region is closely approximated by the region ||f|| ≤ ε^{1/2} ||g|| marked (3) in Figure 2, and is called Case 3 in the software.
An important difference between the formulas in (7) and (8) versus the formula (5) is that (7) and (8)
are independently homogeneous in f and g. In other words, we can scale f and g independently instead of
by the same scalar in order to evaluate them safely. Thus the "shadow" of the region in which the above
formulas are safe covers all (f, g) pairs. In contrast in formula (5) f and g must be scaled by the same value.
Here are the algorithms implementing (7) and (8) without scaling. Note that (7) does not even require
a square root.
Algorithm 4: Computing complex Givens rotations when #g#f#, using formulas (7),
without scaling
if #G#F# then
endif
Algorithm 5: Computing complex Givens rotations when #f#g#, using formulas (8),
without scaling
if #F#G# then
endif
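The two approximations can be written in C99 complex arithmetic as below. This is only a sketch of formulas (7) and (8) without the scaling added in Algorithms 6 and 7, and the function names are ours rather than the paper's Fortran routines.
#include <complex.h>
#include <math.h>
/* Case 2: ||g|| <= sqrt(eps)*||f||, so |f|^2 + |g|^2 rounds to |f|^2. */
static void givens_case2(double complex f, double complex g,
                         double *c, double complex *s, double complex *r)
{
    double f2 = creal(f)*creal(f) + cimag(f)*cimag(f);
    *c = 1.0;
    *s = f * conj(g) / f2;
    *r = f;
}
/* Case 3: ||f|| <= sqrt(eps)*||g||, so |f|^2 + |g|^2 rounds to |g|^2. */
static void givens_case3(double complex f, double complex g,
                         double *c, double complex *s, double complex *r)
{
    double fa = cabs(f), ga = cabs(g);
    *c = fa / ga;
    *s = (f / fa) * (conj(g) / ga);
    *r = (f / fa) * ga;
}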
We may now apply the same analysis as in the last section to these formulas, deducing linear inequalities in log ||f|| and log ||g|| which must be satisfied in order to guarantee safe and accurate execution. We simply summarize the results here. In both cases, we get regions with boundaries that, like S, are sets of line segments that may be vertical, horizontal or diagonal. We again wish to restrict ourselves to tests on ||f|| and ||g|| alone, rather than their product (which might overflow). This means that we identify a smaller safe region (like region (1) within S in Figure 2) where membership can be easily tested. This safe region for Algorithm 4 is the set satisfying
z^{-2} ≤ ||f|| ≤ z^2 and z^{-2} ≤ ||g|| ≤ z^2.
The safe region for Algorithm 5 is the smaller set satisfying
z^{-1} ≤ ||f|| ≤ z and z^{-1} ≤ ||g|| ≤ z.
This leads to the following algorithms, which incorporate scaling.
Algorithm 6: Computing complex Givens rotations when ||g|| ≤ ε^{1/2} ||f||, using formulas (7), with scaling
if ||G|| ≤ ε^{1/2} ||F|| then
scale F by powers of z^{-4} so that z^{-2} ≤ ||F|| ≤ z^2
scale G by powers of z^{-4} so that z^{-2} ≤ ||G|| ≤ z^2
(compute C, S and R as in Algorithm 4)
unscale S by powers of z^{-4} to undo the scaling of F and G
endif
Algorithm 7: Computing complex Givens rotations when ||f|| ≤ ε^{1/2} ||g||, using formulas (8), with scaling
if ||F|| ≤ ε^{1/2} ||G|| then
scale F by powers of z^{-2} so that z^{-1} ≤ ||F|| ≤ z
scale G by powers of z^{-2} so that z^{-1} ≤ ||G|| ≤ z
(compute C, S and R as in Algorithm 5)
unscale C and R by powers of z^{-2} to undo the scaling of F and G
endif
Note in Algorithm 7 that the value of S is unaffected by independent scaling of F and G.
7.1 Exceptional cases
First consider Case 2, i.e. Algorithm 6. It is possible for either F or G to be NaNs (since #F# and #G# may not
be) but if neither is a NaN then only F can be infinite (since the test is #G#F#, not #G#F#).
Care must be taken as before to assure termination of the scaling of F and G even when they are NaNs or
infinite.
In Case 2 C=1 independently of whether inputs are infinite or NaNs. S is a NaN if either F or G is a NaN
or infinite. If we simply get R=F then R is a NaN (or infinite) precisely when F is a NaN (or infinite); in other
words R might not be a NaN if G is. So in our model implementation we can and do ensure that R is a NaN
if either F or G is a NaN by instead computing computing S.
Next consider Case 3, i.e. Algorithm 7. Analogous comments about the possible values of the inputs as
above apply, and again care must be taken to assure termination of the scaling. In Case 3, if either input is
a NaN, all three outputs will be NaNs. If G is infinite and F is finite, then S and R will be NaNs.
8 Complex algorithm: Scaling in Regions 4a and 4b
For any point (f, g) that does not lie in regions (1), (2) or (3) of Figure 2 we can use the following algorithm:
1. Scale (f, g) to a point (scale · f, scale · g) that does lie in S.
2. Apply Algorithm 3 to (scale · f, scale · g), yielding c, s, r'.
3. Unscale to get r = r'/scale.
This scaling in Figure 2 corresponds to shifting (f, g) parallel to the diagonal line by log scale until it lies in S. It is geometrically apparent that the set of points scalable in regions (4a) and (4b) of Figure 2 lies in the set of all diagonal translates of S, i.e. the "shadow" of S, and can be scaled to lie in S. Indeed, all points in region (2) and many (but not all) points in region (3) can be scaled to lie in S, but in regions (2) and (3) the cheaper formulas discussed in the last section are available.
First suppose that (f, g) lies in region (4a). Let s = max(||f||, ||g||). While s > z^2, we can scale f and g down by z^{-2}. Eventually (f, g) will lie in the union of the two arrow-shaped regions A1 and A2 in Figure 3. Then, if s still exceeds z, i.e. (f, g) is in A1, we multiply f and g by z^{-1}, putting it into A2. Thus, we guarantee that the scaled f and g are in A2, where it is safe to use Algorithm 3.
Next suppose that (f, g) lies in region (4b). Now let s = ||f||. While s < z^{-2}, we can scale f and g up by z^2. Eventually (f, g) will lie in the union of the two parallelograms B1 and B2 in Figure 4. Then, if s is still less than z^{-1}, i.e. (f, g) is in B1, we multiply f and g by z, putting it into B2. Thus, we guarantee that the scaled f and g are in B2, where it is safe to use Algorithm 3.
These considerations lead to the following algorithm
Algorithm 8: Computing complex Givens rotations when (f, g) is in region (4a) or (4b), with
scaling.
. this code is only executed if f and g are in region (4a) or (4b)
scale F and G down by powers of z^{-2} until max(||F||, ||G||) ≤ z^2
if max(||F||, ||G||) > z, scale F and G down by z
else
scale F and G up by powers of z^2 until ||F|| ≥ z^{-2}
if ||F|| < z^{-1}, scale F and G up by z
endif
compute the Givens rotation using Algorithm 3
undo the scaling of R caused by scaling of F and G
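The Region (4a)/(4b) scaling around the Case 1 kernel can be sketched in C as follows. Only r needs to be unscaled, since c and s are invariant under common scaling of f and g; the helper norm1() returns max(|re w|, |im w|) and givens_case1() stands for Algorithm 3 (both are assumptions of this sketch).
#include <complex.h>
#include <math.h>
static double norm1(double complex w) { return fmax(fabs(creal(w)), fabs(cimag(w))); }
extern void givens_case1(double complex f, double complex g,
                         double *c, double complex *s, double complex *r);  /* Algorithm 3 */
static void givens_case4(double complex f, double complex g, double z,
                         double *c, double complex *s, double complex *r)
{
    double scale = 1.0;                          /* total factor applied to f and g */
    double z2 = z * z;
    if (fmax(norm1(f), norm1(g)) > z2) {         /* Region (4a): scale down */
        while (fmax(norm1(f), norm1(g)) > z2) { f /= z2; g /= z2; scale /= z2; }
        if (fmax(norm1(f), norm1(g)) > z)     { f /= z;  g /= z;  scale /= z;  }
    } else {                                     /* Region (4b): scale up */
        while (norm1(f) < 1.0 / z2)           { f *= z2; g *= z2; scale *= z2; }
        if (norm1(f) < 1.0 / z)               { f *= z;  g *= z;  scale *= z;  }
    }
    givens_case1(f, g, c, s, r);
    *r /= scale;                                 /* undo the common scaling on r only */
}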
We call the overall algorithm new CLARTG, to distinguish it from old CLARTG, which is part of the LAPACK 3.0 release. The entire source code is included in the Appendix. It contains 248 noncomment lines, as opposed to 20 in the reference CROTG implementation.
8.1 Exceptional cases
Either input may be a NaN, and they may be simultaneously infinite. In any of these cases, all three outputs
will be NaNs. As before, care must be taken in scaling.
9 Accuracy results for complex Givens rotations
The algorithm was run for all combinations of values of f and g in which the real and imaginary parts of f and g independently took on 46 different values ranging from 0 to the overflow threshold, with intermediate values chosen just above and just below the threshold values determining all the edges and corners in Figures 1 through 4, and thus barely satisfying (or not satisfying) all possible branches in the algorithm. The correct answers for these inputs were computed using a straightforward implementation of Algorithm 1 in double precision arithmetic, in which no overflow nor underflow is possible for the arguments tested. The maximum errors in r, c and s were computed as follows. Here r_s was computed in single precision using the new algorithm and r_d was computed straightforwardly in double precision; the subscripted c and s variables have analogous meanings.
In the absence of gradual underflow, the error metric for finitely representable r_s is given by (11), and with gradual underflow it is given by (12), with the maximum taken over all nonzero test cases for which the true r does not overflow. (On this subset, the mathematical definitions of c, s and r used in CLARTG and CROTG agree.) Note that 2 ε SAFMIN is the smallest denormalized number. Analogous metrics were computed for s_s and c_s.
The routines were first tested on a Sun Ultra-10 using f77 with the -fast -O5 flags, which means gradual
underflow is not used, i.e. results less than SAFMIN are replaced by 0. Therefore we expect the measure (11)
Figure 2: Cases in the code when f ≠ 0 and g ≠ 0
Figure 3: Scaling when (f, g) is in Region (4a).
Figure 4: Scaling when (f, g) is in Region (4b).
to be at least 1, and hopefully just a little bigger than 1, meaning that the error |r s - r d | is either just more
than machine epsilon # times the true result, or a small multiple of the underflow threshold, which is the
inherent uncertainty in the arithmetic.
The routines were also tested without any optimization flags, which means gradual underflow is used, so
we expect the more stringent measure (12) to be close to 1.
The results are as follows:
Without Gradual Underflow
Routine Max error in r_s Max error in s_s Max error in c_s
Old CLARTG 70588 70588 70292
Reference CROTG NaN NaN NaN
Modified Reference CROTG 3.59 3.41 3.22
ATLAS CROTG NaN NaN NaN
Limited ATLAS CROTG 2.88
Vendor CROTG NaN NaN NaN
Limited Vendor CROTG 3.59
With Gradual Underflow
Routine Max error in r_s Max error in s_s Max error in c_s
Old CLARTG 4.60 4.27 4913930
Reference CROTG NaN NaN NaN
Modified Reference CROTG
Here is why the old CLARTG fails to be accurate. First consider the situation without gradual underflow. When |g| is just above z^-2, and |f| is just below, the algorithm will decide that scaling is unnecessary. As a result |f|^2 may have a nonnegligible relative error from underflow, which creates a nonnegligible relative error in r, s and c. Now consider the situation with gradual underflow. The above error does not occur, but a different one occurs. When 1 ≥ |g| ≥ |f| and f is denormalized, then the algorithm will not scale. As a result |f| suffers a large loss of relative accuracy when it is rounded to the nearest denormalized number, and then c ≈ |f|/|g| has the same large loss of accuracy.
Here is why the reference BLAS CROTG can fail, even though it tries to scale to avoid over/underflow. The scale factor |f| + |g| computed internally can overflow even when r does not. Now consider the situation without gradual underflow. The sine is computed as
s = ( (f/|f|) * conj(g) ) / |r|,
where the multiplication is done first. All three quantities in parentheses are quite accurate, but the entries of f/|f| are both less than one, causing the multiplication to underflow to 0, even when the true s exceeds .4. This can be repaired by inserting parentheses,
s = (f/|f|) * ( conj(g) / |r| ),
so the division is done first. Excluding the cases where |f| + |g| overflows, and inserting parentheses, we get the errors on the line "Modified Reference CROTG". Now consider the situation with gradual underflow. Then rounding intermediate quantities to the nearest denormalized number can cause large relative errors, such as s and c both equaling 1 instead of 1/√2.
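The repair is purely a re-association of the same expression; a small Python sketch of the two orderings shows the structural difference (the sketch uses double precision, so it illustrates the shape of the fix rather than reproducing the single precision underflow):

# Two mathematically equivalent ways to form s = (f/|f|) * conj(g) / |r|.
# With the multiplication first, the product of two small quantities can
# underflow to 0 even though the final quotient is of moderate size;
# dividing conj(g) by |r| first avoids the premature underflow.
def sine_multiply_first(alpha, g, norm):   # alpha = f/|f|, norm = |r|
    return (alpha * g.conjugate()) / norm

def sine_divide_first(alpha, g, norm):
    return alpha * (g.conjugate() / norm)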
The ATLAS and vendor versions of CROTG were only run with the full optimizations suggested by their
authors, which means gradual underflow was not enabled. They also return NaNs for large arguments even
when the true answer should have been representable. We did not modify these routines, but instead ran
them on the limited subset of examples where |f | + |g| was less than overflow. They still occasionally had
large errors from underflow causing s to have large relative errors, even when the true value of s is quite
large.
In summary, our systematic procedure produced a provably reliable implementation whereas there are
errors in all previous implementations that yield inaccurate results without warning, or fail unnecessarily
due to overflow. The latter only occurs when the true r is close to overflow, and so it is hard to complain
very much, but the former problem deserves to be corrected.
10 Timing results for complex Givens rotations
For complex Givens rotations, we compared the new algorithm described above, the old CLARTG from
LAPACK, and CROTG from the reference BLAS. Timings were done on a Sun Ultra-10 using the f77
compiler with optimization flags -fast -O5. Each routine was called repeatedly for arguments throughout the (f, g) plane (see Figure 2) and the average time taken for each argument (f, g); the range of timings for each (f, g) was typically only a few percent. 29 cases were tried in all, exercising all paths in the new CLARTG code.
The input data is shown in a table below.
We note that the timing results for optimized code are not entirely predictable from the source code.
For example, small changes in the way scaling is implemented can make large differences in the timings. If proper behavior in the presence of infinity or NaN inputs were not an issue (finite termination and propagating infinities and NaNs to the output) then scaling and some other parts of the code could be simplified and
probably accelerated.
The timing results are in the Figures 5 and 6. Six algorithms are compared:
1. New CLARTG is the algorithm presented in this report, using tests and branches to select the correct
case.
2. OLD CLARTG is the algorithm in LAPACK 3.0
3. Ref CROTG is the reference BLAS
4. ATLAS CROTG is the ATLAS BLAS
5. Vendor CROTG is Sun's vendor BLAS
6. Simplified new CLARTG in double precision (see below)
Figure 5 shows absolute times in microseconds, and Figure 6 shows times relative to new CLARTG. The
vertical tick marks delimit the cases in the code, as described in the table below.
The most common case is Case 1, at the left of the plots. We see that the new CLARTG is about 25%
faster than old CLARTG, and nearly 4 times faster than any version of CROTG.
To get an absolute speed limit, we also ran a version of the algorithm that only works in Case 1; i.e. it
omits all tests for scaling of f and g and simply applies the algorithm appropriate for Case 1. This ultimate
version ran in about .243 microseconds, about 68% of the time of the new CLARTG. This is the price of
reliability.
Alternatively, on a system with fast exception handling, one could run this algorithm and then check
if an underflow, overflow, or division-by-zero exception occurred, and only recompute in this rare case [4].
This experiment was performed by Doug Priest [7] and we report his results here. On a Sun Enterprise 450
server with a 296 MHz clock, exception handling can be used to (1) save and then clear the floating point
exceptions on entry to CLARTG, (2) run Case 1 without any argument checking, (3) check exception flags
to see if any division-by-zero, overflow, underflow, or invalid operations occurred, (4) use the other cases if
there were exceptions, and (5) restore the exception flag on exit. This way arguments falling into the most
common Case 1 run 25% faster than new CLARTG. Priest notes that it is essential to use in-line assembler to access the exception flags rather than library routines (such as ieee_flags()) which can take up to 150 cycles.
Here is a description of the algorithm called "simplified new CLARTG in double precision." It avoids all
need to scale and is fastest overall on the above architecture for IEEE single precision inputs: After testing for the cases f = 0 and g = 0, use Algorithm 3 in IEEE double precision. The three extra exponent bits
eliminate over/underflow. On this machine, this algorithm takes about .365 microseconds for all nonzero
inputs f and g, nearly exactly the same as Case 1 entirely in single. This algorithm is attractive for single
precision on this machine, since it is not only fast, but much simpler. Of course it would not work if the
input data were in double, since a wider format is not available on this architecture.
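A sketch of this idea in Python/NumPy follows; the handling of the f = 0 and g = 0 cases is one common convention and is an assumption of the sketch, not a transcription of the LAPACK code.

import numpy as np

def clartg_via_double(f32, g32):
    # Promote IEEE single inputs to double, where |f|**2 + |g|**2 can neither
    # overflow nor underflow harmfully, compute the rotation without any
    # scaling, and round the results back to single precision.
    f, g = complex(f32), complex(g32)
    if g == 0:
        c, s, r = 1.0, 0j, f
    elif f == 0:
        c, s, r = 0.0, g.conjugate() / abs(g), abs(g)
    else:
        d = (abs(f) ** 2 + abs(g) ** 2) ** 0.5
        c = abs(f) / d
        s = (f / abs(f)) * g.conjugate() / d
        r = (f / abs(f)) * d
    return np.float32(c), np.complex64(s), np.complex64(r)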
Input data for timing complex Givens rotations
Case   Case in code   f   g
Figure 5: Time to compute complex Givens rotations (microseconds versus case; curves for Old CLARTG, ATLAS, Vendor, Reference, and Double).
11 Computing real Givens rotations
When both f and g are nonzero, the following algorithm minimizes the amount of work:
Algorithm 9: Real Givens rotations when f and g are nonzero, without scaling
endif
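For orientation, a standard unscaled real Givens computation of the kind Algorithm 9 performs can be sketched as follows; the sign convention (r carries the sign of f) is an assumption of the sketch, and the inputs are assumed nonzero and of moderate size.

import math

def real_givens_unscaled(f, g):
    # Unscaled real Givens rotation for nonzero f and g:
    #   c*f + s*g = r,   -s*f + c*g = 0.
    r = math.copysign(math.sqrt(f * f + g * g), f)
    inv_r = 1.0 / r            # one division; c and s then cost one multiply each
    return f * inv_r, g * inv_r, r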
We may now apply the same kind of analysis that we applied to Algorithm 3. We just summarize the
results here.
Figure 6: Relative time to compute complex Givens rotations (ratio of time to time for new CLARTG, versus case; curves for Old CLARTG, ATLAS, Vendor, Reference, and Double).
Algorithm 10: Real Givens rotations when f and g are nonzero, with scaling
if scale > z^2 then
scale F, G and scale down by powers of z^-2 until scale <= z^2
elseif scale < z^-2 then
scale F, G and scale up by powers of z^2 until scale >= z^-2
endif
endif
unscale R if necessary
This algorithm does one division and one square root. In contrast, the SROTG routine in the Fortran
Reference BLAS does 1 square root and 4 divisions to compute the same quantities. It contains 95 noncom-
ment lines of code, as opposed to 22 lines for the reference BLAS SROTG (20 lines excluding 2 described
below), and is contained in the appendix.
12 Accuracy results for real Givens rotations
The accuracy of a variety of routines were measured in a way entirely analogous to the way described in
section 9. The results are shown in the tables below.
First consider the results in the absence of gradual underflow. All three versions of SROTG use a scale
factor |f | + |g| which can overflow even when r does not. Eliminating these extreme values of f and g from
the tests yields the results in the lines labeled "Limited."
With gradual underflow, letting f and g both equal the smallest positive denormalized number m yields c = s = 1 instead of 1/√2, a very large relative error. This is because m is the best machine approximation to the true result √2·m, after which f and g are divided by m to get c and s, respectively. Slightly larger f and g yield slightly smaller (but still quite large) relative errors in s and c.
Without Gradual Underflow
Routine Max error in r_s Max error in s_s Max error in c_s
Old SLARTG 1.45 1.81 1.81
Reference SROTG NaN NaN NaN
Limited Reference SROTG 1.51 1.95 1.95
ATLAS SROTG NaN NaN NaN
Limited ATLAS SROTG 1.68 1.55 1.55
Vendor SROTG NaN NaN NaN
Limited Vendor SROTG 1.68 1.55 1.55
With Gradual Underflow
Routine Max error in r_s Max error in s_s Max error in c_s
Old SLARTG 1.45 1.81 1.81
Reference SROTG NaN NaN NaN
Limited Reference SROTG 1.51
13 Timing results for real Givens rotations
Six routines to compute real Givens rotations were tested in a way entirely analogous to the manner described
in section 10. The test arguments and timing results are shown in the table and figures below.
All three versions of SROTG (reference, ATLAS, and Sun's vendor version) originally computed more than just s, c and r: they compute a single scalar z from which one can reconstruct both s and c. It is defined by
z = s      if |f| > |g|
z = 1/c    if |f| <= |g| and c != 0
z = 1      if c = 0
The three cases can be distinguished by examining the value of z, and then s and c reconstructed. This
permits, for example, the QR factors of a matrix A to overwrite A when Givens rotations are used to
compute Q, as is the case with Householder transformations. This capability is not used in LAPACK, so
neither version of SLARTG computes z. To make the timing comparisons fairer, we therefore removed the
two lines of code computing z from the reference SROTG when doing the timing tests below. We did not
however modify ATLAS or the Sun performance library in anyway, so those routines do more work than
necessary.
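For reference, recovering c and s from the stored scalar z follows the usual BLAS convention (a sketch; the z = 1 case encodes c = 0):

import math

def reconstruct_from_z(z):
    # Recover (c, s) from the single scalar z stored by the reference SROTG.
    if z == 1.0:               # c was zero
        return 0.0, 1.0
    if abs(z) < 1.0:           # z = s, the case |f| > |g|
        s = z
        return math.sqrt(1.0 - s * s), s
    c = 1.0 / z                # z = 1/c, the case |f| <= |g| with c != 0
    return c, math.sqrt(1.0 - c * c)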
Input data for timing real Givens rotations
Case   f   g
Figure 7: Time to compute real Givens rotations (microseconds versus case; curves for Old SLARTG, ATLAS, Vendor, Reference, and Double).
We see from figures 7 and 8 that in the most common case (Case 1, where no scaling is needed), the
new SLARTG is about 18% faster than the old SLARTG, and 1.35 to 2.62 times faster than any version of
SROTG.
To get an absolute speed limit, we also ran a version of the algorithm that only works in Case 1; i.e. it
omits all tests for scaling of f and g and simply applies the algorithm appropriate for data that is not too
large or too small. This ultimate version ran in about .161 microseconds, about 72% of the time of the new
SLARTG. This is the price of reliability.
Experiments by Doug Priest using exception handling to avoid branching showed an 8% improvement in
the most common case, when no scaling was needed.
Finally, the double precision version of SLARTG simply tests for the cases f = 0 and g = 0 and otherwise runs Algorithm 9 in double precision without any scaling. It is nearly as fast as the new SLARTG in the
most common case, when no scaling is needed, and faster when scaling is needed.
14 Conclusions
We have justified the specification of Givens rotations put forth in the recent BLAS Technical Forum stan-
dard. We have shown how to implement the new specification in a way that is both faster than previous
implementations, and more reliable. We used a systematic design process for such kernels that could be used
whenever accuracy, reliability against over/underflow, and efficiency are simultaneously desired. A side effect
of our approach is that the algorithms can be much longer than before when they must be implemented in the
same precision as the arguments, but if fast arithmetic with wider range is available to avoid over/underflow,
the algorithm becomes very simple, just as reliable, and at least as fast.
Figure 8: Relative time to compute real Givens rotations (ratio of time to time for new SLARTG, versus case; curves for Old SLARTG, ATLAS, Vendor, Reference, and Double).
--R
Faster numerical algorithms via exception handling.
Matrix Computations.
Basic Linear Algebra Subprograms for Fortran usage.
private communication
--TR
The algebraic eigenvalue problem
Implementing complex elementary functions using exception handling
Matrix computations (3rd ed.)
Applied numerical linear algebra
Implementing the complex arcsine and arccosine functions using exception handling
Basic Linear Algebra Subprograms for Fortran Usage
Faster Numerical Algorithms Via Exception Handling
Performance Improvements to LAPACK for the Cray Scientific Library
--CTR
Luca Gemignani, A unitary Hessenberg QR-based algorithm via semiseparable matrices, Journal of Computational and Applied Mathematics, v.184 n.2, p.505-517, 15 December 2005
Milo D. Ercegovac , Jean-Michel Muller, Complex Square Root with Operand Prescaling, Journal of VLSI Signal Processing Systems, v.49 n.1, p.19-30, October 2007
Frayss , Luc Giraud , Serge Gratton , Julien Langou, Algorithm 842: A set of GMRES routines for real and complex arithmetics on high performance computers, ACM Transactions on Mathematical Software (TOMS), v.31 n.2, p.228-238, June 2005 | givens rotation;BLAS;linear algebra |
568175 | Jones optimality, binding-time improvements, and the strength of program specializers. | Jones optimality tells us that a program specializer is strong enough to remove an entire level of self-interpretation. We show that Jones optimality, which was originally aimed at the Futamura projections, plays an important role in binding-time improvements. The main results show that, regardless of the binding-time improvements which we apply to a source program, no matter how extensively, a specializer that is not Jones-optimal is strictly weaker than a specializer which is Jones optimal. By viewing a binding-time improver as a generating extension of a self-interpreter, we can connect our results with previous work on the interpretive approach. | INTRODUCTION
Binding-time improvements are semantics-preserving
transformations that are applied to a source program prior
to program specialization. Instead of specializing the original
program, the modified program is specialized. The goal
is to produce residual programs that are better in some
sense than the ones produced from the original program.
A classical example [6] is the binding-time improvement of
a naive pattern matcher so that an offline partial evaluator [20] can produce from it specialized pattern matchers that are as efficient as those generated by the Knuth, Morris, and Pratt (KMP) algorithm (an offline partial evaluator cannot achieve this optimization without a suitable binding-time improvement).
It is well-known that two programs which are functionally
equivalent may specialize very differently. Binding-time improvements
can lead to faster residual programs by improving
the flow of static data, or make a specializer terminate
more often by dynamizing static computations. Numerous
binding-time improvements are described in the literature
(e.g., [3, 7, 20]); they are routinely used for offline and online
specializers. The main advantage is that they do not
require a user to modify the specializer in order to overcome
the limitations of its specialization method. Hence, they are
handy in many practical situations.
Not surprisingly, several questions have been raised about
binding-time improving programs: What are the limitations
of binding-time improvements, and under what conditions
can these modifications trigger a desired specialization ef-
fect? Is it good luck that a naive pattern matcher may be
rewritten so as to lead an offline partial evaluator to perform
the KMP-optimization, or can such binding-time improvements
be found for any problem and for any specializer?
This paper answers these and related questions on a conceptual
level. We will not rely on a particular specialization
method or on other technical details. We are interested in
statements that are valid for all specializers, and we have
identified such conditions.
Figure
1 shows the structure of a specializer system employing
a binding-time improver as a preprocessor. The
binding-time improver bti takes a program p and a division
SD, which classifies p's parameters as static and dynamic,
and returns a functionally equivalent program p # . This program
is then specialized with respect to the static data x
by the specializer spec. The specializer and the binding-time
improver take the division SD as input, but the static data x is only available to the specializer.
Figure 1: Binding-time improver as preprocessor of a program specializer.
This flow of
the transformation is common to virtually all specializer
systems. Usually, the specializer is an automatic program
transformer, while the binding-time improvements are often
done by hand. For our investigation, it does not matter how
these steps are performed.
The results in this paper extend previous work on Jones
optimality [18, 28, 22, 32] and the interpretive approach [11,
13, 34]. We will see that Jones optimality [18] plays a key
role in the power of binding-time improvers and, together
with static expression reduction, establishes a certain kind
of non-triviality. Among others, we will precisely answer the
old question whether an offline partial evaluator can be as
powerful as an online partial evaluator.
This paper is organized as follows. After reviewing standard
definitions of interpreters and specializers (Sect. 2), we
present the two main ingredients for our work, Jones optimality (Sect. 3) and binding-time improvers (Sect. 4). The
main results are proven (Sect. 5) and additional theorems
are presented (Sect. 6). The connection with the interpretive
approach is established (Sect. 7) and opportunities for
optimizing our constructions are discussed (Sect. 8). We
conclude with related work (Sect. 9) and challenges for future work (Sect. 10).
We assume that the reader is familiar with partial evalu-
ation, e.g., as presented in [20, Part II].
2. PRELIMINARIES
This section reviews standard definitions of interpreters
and specializers. The notation is adapted from [20]; we use
divisions (SD) when classifying the parameters of a program
as static and dynamic. 1 We assume that we are always
dealing with universal programming languages.
2.1 Notation
For any program text, p, written in language L, we let [[p]] L [d ] denote the application of L-program p to its input d (when the index L is unspecified, we assume that a language L is intended). Multiple arguments are written as a list, such as [[p]] [d 1 , d 2 ]. The notation is strict in its arguments.
Equality between program applications shall always mean
strong (computational) equivalence: either both sides of an
This does not imply that our specializers use offline partial
evaluation.
equation are defined and equal, or both sides are undefined.
Programs and their input and output are drawn from a common
data domain D . Including all program texts in D is
convenient when dealing with programs that accept both
programs and data as input (a suitable choice for D is the
set of lists known from Lisp). We define a program domain P ⊆ D, but leave the programming language unspecified.
Elements of the data domain evaluate to themselves. Programs
are applied by enclosing them in [[ ]] brackets. When we define a program using λ-abstraction, the expression needs to be translated into the corresponding programming language (denoted by p λ . . . q ∈ P). The translation
is always possible when L is a universal programming
language.
2.2 Interpreters and Specializers
What follows are standard definitions of interpreters and
specializers.
Definition 1. (interpreter) An L-program int ∈ Int is an N/L-interpreter iff ∀p ∈ P, ∀d ∈ D : [[int ]] L [p, d ] = [[p]] N [d ].
Definition 2. (self-interpreter) An L-program sint ∈ Sint is a self-interpreter for L iff sint is an L/L-interpreter.
Definition 3. (specializer) An L-program spec ∈ Spec is a specializer for L iff ∀p ∈ P, ∀x , y ∈ D : [[ [[spec]] [p, SD, x ] ]] [y ] = [[p]] [x , y ].
For simplicity, we assume that the programs which we specialize
have two arguments, and that the first argument is
static. Even though we make use of only division SD in the
definition, we keep it explicit (for reasons explained in [12]).
A specializer need not be total. The definition allows spec
to diverge on [ p, SD, x ] if p diverges on [ x , y ] for all y . In
practice, specializers often sacrifice the termination behav-
ior. For a discussion of termination issues see [19].
A specializer is trivial if the residual programs it produces
are simple instantiations of the source programs.
Definition 4. (trivial specializer) An L-specializer spec triv
# Spec is trivial
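To make the definitions concrete, the sketches below model L-programs simply as Python callables; this is an assumption of the illustrations, not part of the formal setup. A trivial specializer then just freezes the static argument:

def spec_triv(p, sd, x):
    # Trivial specializer: the residual program is an instantiation of p in
    # which the static argument x has been frozen; no static computation is
    # performed at specialization time.
    def residual(y):
        return p(x, y)
    return residual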
More realistic specializers evaluate static expressions in a
source program. An expression is static if it depends only
on known data and, thus, can be precomputed at specialization
time. We define a special case of static expression
reduction which is sufficient for our purposes. The definition
of running time is taken from [20, Sect. 3.1.7]. 2
Definition 5. (running time) For program data d 1 , . . . , d n ∈ D, let t p (d 1 , . . . , d n ) denote the running time to compute [[p]] [d 1 , . . . , d n ].
Definition 6. (static expression reduction) A specializer spec ∈ Spec has static expression reduction if ∀q , q # ∈ P ,
2 The measure for the running time can be a timed semantics
(e.g., the number of elementary computation steps).
The definition tells us that, in terms of residual-program
efficiency, there is no difference between specializing program q with respect to a value [[q # ]] [x ] and specializing the composition p of q and q # with respect to a value x . This
implies that the specializer contains at least an interpreter
(a universal program) to evaluate static applications (here
a in the definition of p).
A specializer with static expression reduction is non-trivial.
The definition implies that it has the power to perform universal
computations. Most specializers that have been im-
plemented, including online and offline partial evaluators, try to evaluate as many static expressions as possible to improve the efficiency of the residual programs. They satisfy
the static expression reduction property given above.
3. JONES OPTIMALITY
When translating a program by specializing an interpreter,
it is important that the entire interpretation overhead is re-
moved. Let us look at this problem more closely and then
explain the definition of Jones optimality.
3.1 Translation by Specialization
Let p be an N -program, let intN be an N /L-interpreter,
and let spec be a specializer for L. Then the 1st Futamura
projection [9] is defined by q = [[spec]] [int N , SD, p].
Using Defs. 1 and 3 we have the functional equivalence between q and p: [[q ]] L [d ] = [[p]] N [d ] for all d ∈ D.
Note that program p is written in language N , while program
q is written in language L. This N -to-L-translation
was achieved by specializing the N /L-interpreter intN with
respect to p. We say that q is the target program of source
program p. The translation can always be performed. Consider
a trivial specializer. The first argument of intN is
instantiated to p, and the result is a trivial target program:
Clearly, this is not the translation we expect. The target
program is inefficient: it contains an entire interpreter. A natural goal is therefore to produce target programs that are as efficient as their source programs. Unfortunately, we cannot expect a specializer to produce efficient target programs
from any interpreter. For non-trivial languages, no
specializer exists [16] that could make 'maximal use' of the
static input, here p, in all programs.
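In the callable model introduced above, the 1st Futamura projection is a single call to the specializer, with the interpreter as the subject program and the source program as the static input (sketch):

def first_futamura(spec, interpreter, source_program, sd=("S", "D")):
    # Translation by specialization: the result behaves like source_program,
    # but is obtained as a residual program of the interpreter.
    return spec(interpreter, sd, source_program)

# With spec = spec_triv, the "target program" merely wraps the interpreter
# around source_program, i.e. the whole interpretation overhead remains.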
3.2 Jones-Optimal Specialization
A specializer is said to be "strong enough" [18] if it can
completely remove the interpretation overhead of a self-inter-
preter. The definition is adapted from [20, 6.4]; it makes use
of the 1st Futamura projection.
Definition 7. (Jones optimality) Let sint ∈ Sint be a self-interpreter, then a specializer spec ∈ Spec is Jones-optimal for sint iff Jopt(spec, sint), where Jopt(spec, sint) holds iff, for p # = [[spec]] [sint , SD, p], we have ∀p ∈ P, ∀d ∈ D : t p # (d ) ≤ t p (d ).
A specializer spec is said to be Jones-optimal if there exists at least one self-interpreter sint such that, for all source programs p, the target program p # is at least as efficient for all inputs d as the source program p. This tells us that spec
can remove an entire layer of self-interpretation.
The case of self-interpreter specialization is interesting because
it is easy to judge to what extent the interpretive overhead
has been removed by comparing the source and target
programs, as they are written in the same language. In
particular, when programs p and p # are identical, it is safe
to conclude that this goal is achieved. Also, as explained
in [25], the limits on the structure of residual programs that
are inherited from structural bounds in the source programs
are best observed by specializing a self-interpreter (e.g., the
arity of function definitions in a residual program).
It is easy to require Jones optimality, but it is not always
easy to satisfy it. For instance, for a partial evaluator
for a first-order functional language with algebraic data
types [25], a combination of several transformation methods
is necessary (constant folding, unfolding, polyvariant
specialization, partially static values, constructor specializa-
tion, type specialization).
Jones optimality was first proven [28] for lambda-mix; see
also [24]. The first implementation of a Jones-optimal specializer
was the offline partial evaluator [26] for a Lisp-like language. An early offline partial evaluator [23] for a similar
language utilizes partially static structures to produce near-
identity mappings. For many specializers it is not known
whether they are Jones-optimal or not. For other partial
evaluators, such as FCL-mix [15], it is impossible to write
a self-interpreter that makes them Jones optimal. Recent
work [32] on Jones optimality concerns tag-elimination when
specializing self-interpreters for strongly typed languages.
Note that the definition of Jones optimality can be satisfied by a simple construction:
myspec = p λ(p, SD, x ). if p = mysint then x else [[spec triv ]] [p, SD, x ] q
where mysint is a fixed self-interpreter for which we want
myspec to be Jones-optimal and spec triv is the trivial specializer
from Def. 4. Specializer myspec returns the last
argument, x , unchanged if the first argument, p, is equal
to mysint ; otherwise, myspec performs a trivial specializa-
tion. Clearly, such a specializer is not useful in practice, but
formally Jopt(myspec, mysint).
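In the same callable model, with program equality standing in for textual equality, the construction reads roughly as follows:

def make_myspec(mysint, spec_triv):
    # Jones-optimal "by construction" for exactly one self-interpreter:
    # if asked to specialize mysint with respect to a program x, return x
    # itself as the residual program; otherwise fall back to trivial
    # specialization.
    def myspec(p, sd, x):
        if p == mysint:
            return x
        return spec_triv(p, sd, x)
    return myspec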
Finally, we show that a Jones-optimal specializer with
static expression reduction exists. This fact will play an
important role in the next sections. Both properties can be
satisfied by a simple construction
myspec stat
case p of
where case implements pattern matching: if program p consists
of a composition of two arbitrary programs, then p is
decomposed 3 into q and q # , and q is specialized with respect to the result of evaluating [[sint ]] [q # , x ], where sint is a self-interpreter; otherwise p is specialized with respect
to x . The specializer is Jones-optimal for mysint and has
the expression reduction property of Def. 6. More realistic
3 We shall not be concerned with the technical details of
parsing an L-program p.
Jones-optimal specializers, for instance [26, 23, 25, 28], also
satisfy the property of static expression reduction.
4. BINDING-TIME IMPROVERS
Binding-time improvements [20] are semantics-preserving
transformations that are applied to a source program before
specialization with the aim to produce residual programs
that are better in some sense than those produced from the
original source program. Numerous binding-time improvements
have been described in the literature (e.g., [3, 7, 20]).
Formally, a binding-time improver is a program that takes
a program p and a division of p's arguments into static and
dynamic, and then transforms p into a functionally equivalent
program p # . A binding-time improver performs the
transformation independently of the static values that are
available to a specializer.
Definition 8. (binding-time improver) An L-program bti ∈ Bti is a binding-time improver for L iff ∀p ∈ P, ∀x , y ∈ D : [[ [[bti ]] [p, SD] ]] [x , y ] = [[p]] [x , y ].
Using Defs. 3 and 8, we have the following functional equivalence between the residual programs r and r # produced by directly specializing p and specializing a binding-time improved version of p: for r = [[spec]] [p, SD, x ] and r # = [[spec]] [ [[bti ]] [p, SD], SD, x ], we have [[r ]] [y ] = [[r # ]] [y ]. (4)
In practice, we expect (and hope) that program r # is better,
in some sense, than program r . For example, binding-time
improvements can lead to faster residual programs by improving
the flow of static information at specialization time.
It is important to note that a binding-time improver cannot
precompute a residual program since there are no static
values, nor can it make a table containing all possible residual
programs since in general the specialization of a source
program allows for an infinite number of di#erent residual
programs. (Compare this to a translator: a translator cannot
just precompute all results of a source program and store
them, say, in a table in the target program since, in general,
there is an infinite number of di#erent results.)
The term "binding-time improvement" was originally
coined in the context of offline partial evaluation [20] to improve
the flow of static information, but such easing transformations
are routinely used for all specialization methods.
This is not surprising since there exists no specialization
method that could make 'maximal use' of available information
in all cases. We use the term binding-time improvement
for any semantics-preserving pre-transformation and
regardless of a particular specialization method.
Note that Def. 8 does not require that each source program
is transformed: the identity transformation,
is a correct (but trivial) binding-time improver. Of course,
for a binding-time improver to be useful, it must perform
non-trivial transformations at least on some programs; oth-
erwise, the residual programs r and r # in (4) are always
identical.
5. JONESOPTIMALITY&BINDING-TIME
IMPROVEMENTS
In the previous sections, two di#erent streams of ideas
were presented, binding-time improvements and Jones optimality
for the specialization of programs. How are they
related? We put these ideas together and present the two
main results.
5.1 Sufficient Condition
The first theorem tells us that for every Jones-optimal
specializer spec 1 there exists a binding-time improver that
allows the specializer to achieve the residual-program efficiency of any other specializer (Jones optimality is a sufficient condition). The proof makes use of a general construction
of such a bti .
Theorem 1. (Jones optimality is sufficient) For all specializers spec 1 , spec 2 ∈ Spec, where spec 1 has static expression reduction, the following holds: if spec 1 is Jones-optimal, then there exists a bti ∈ Bti such that, for all p ∈ P and all x , y ∈ D, the residual program r # = [[spec 1 ]] [ [[bti ]] [p, SD], SD, x ] satisfies t r # (y) ≤ t r (y), where r = [[spec 2 ]] [p, SD, x ].
Proof: We proceed in two steps.
1. Let sint 1 be the self-interpreter for which spec 1 is Jones-
optimal. For each specializer spec 2 , define a binding-time
improver
bti = p λ(p, SD). p λ(x , y). [[sint 1 ]] [ [[spec 2 ]] [p, SD, x ], y ] q q .    (6)
The binding-time improver depends on sint 1 and spec 2 ,
but not on program p. Given a program p and a division
SD, program bti produces a new program that
performs p's computation in two stages: first, spec 2
specializes p with respect to x , then sint 1 evaluates
the residual program with the remaining input y . From
Defs. 2 and 3, it follows that the new program is functionally
equivalent to p. According to Def. 8, bti is a
binding-time improver since ∀p ∈ P, ∀x , y ∈ D :
2. Consider the rhs of the implication in Thm. 1. For
each spec 2 , let bti be the binding-time improver defined
in (6). Let p be a program and let x be some data, then
we have the binding-time improved program
and obtain the residual program r # by specializing p #
with respect to x :
Since spec 1 reduces static expressions and since p # is of
a form that suits Def. 6, we can rewrite (8) as (9) and
obtain a program r # :
After evaluating the application of spec 2 in (9), we obtain
(10); recall that in the rhs
of the implication. Then r # in (11) is the result of specializing
sint 1 with respect to r by spec 1 . Program r #
is as fast as r # according to Def. 6. From the specialization
in (10) and Jopt(spec 1 , sint 1 ), we conclude that
r # is at least as fast as r since ∀y ∈ D :
This relation holds for any p and for any x . This proves
the theorem.
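The binding-time improver constructed in this proof can be sketched in the same callable style, with sint 1 modelled as a function taking a residual program and its remaining input:

def make_universal_bti(sint1, spec2):
    # The improved program performs p's computation in two stages: spec2
    # specializes p with respect to the static input x, then sint1 runs the
    # residual program on the dynamic input y.
    def bti(p, sd):
        def p_improved(x, y):
            residual = spec2(p, sd, x)
            return sint1(residual, y)
        return p_improved
    return bti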
5.2 Necessary Condition
The second theorem tells us that a specializer spec 1 that is
not Jones-optimal cannot always reach the residual-program
efficiency of another specializer spec 2 (Jones optimality is a
necessary condition).
Theorem 2. (Jones optimality is necessary) For all specializers spec 1 ∈ Spec, the following holds: if for every spec 2 ∈ Spec there exists a bti ∈ Bti such that spec 1 , applied to the binding-time improved programs, always reaches the residual-program efficiency of spec 2 (as in Thm. 1), then spec 1 is Jones-optimal.
Proof: Assume that the rhs of the implication holds. Choose spec 2 ∈ Spec and sint 2 ∈ Sint such that Jopt(spec 2 , sint 2 ). Such a specializer exists for any non-trivial programming language (e.g., myspec in Sect. 3). Let bti be a binding-time improver which satisfies the rhs of the implication with respect to spec 2 , and define sint 1 = [[bti ]] [sint 2 , SD]. Since bti is a binding-time improver and sint 2 is a self-interpreter, sint 1 is also a self-interpreter. In the rhs of
the implication, let p be sint 2 , let x be a program q , and let
y be some data d . Then we have
where
Since Jopt(spec 2 , sint 2 ), we have that
By combining (14) and (16), we realize that
This relation holds for any q and for any d , thus we conclude
that Jopt(spec 1 , sint 1 ). This proves the theorem.
The second theorem does not imply that a specializer that
is not Jones-optimal cannot benefit from binding-time im-
provements, but that there is a limit to what can be achieved
by a binding-time improver if spec 1 is not Jones-optimal.
Observe from the proof of Thm. 2 that the rhs of the implication
can be weakened: instead of quantification "∀spec 2 ∈ Spec", all that is needed is quantification "∀spec 2 ∈ Spec with Jopt(spec 2 , sint 2 ) for some sint 2 ∈ Sint".
Remark: In the literature [22], Jones optimality is often defined using equality rather than inequality of the running times. Under this assumption, it follows directly from the rhs in Thm. 2 that spec 1 must reduce static expressions if spec 2 does.
5.3 Discussion
1. By combining both theorems, we can conclude that, in
terms of residual-program efficiency, a specializer that
is not Jones-optimal is strictly weaker than a specializer
that is Jones-optimal.
2. The question whether an offline partial evaluator can be as powerful as an online partial evaluator can now be answered precisely: an offline partial evaluator with static expression reduction that is Jones-optimal can achieve the residual-program efficiency of any online partial evaluator by using a suitable binding-time improver. The binding-time improver depends on the partial evaluators but not on the source programs.
3. Jones optimality is important for more than just building
specializers that work well with the Futamura pro-
jections. Previously it was found only in the intuition
that it would be a good property [17, 18]. The theorems
give formal status to the term "optimal" in the
name of that criterion.
4. The results also support the observation [25] that a
specializer has a weakness if it cannot overcome inherited
limits and that they are best observed through
specializing a self-interpreter (which amounts to testing
whether a specializer is Jones-optimal).
A way to test the strength of a specializer is to see whether it
can derive certain well-known e#cient programs from naive
and ine#cient programs. One of the most popular tests [10,
6, 29, 2] is to see whether the specializer generates, from
a naive pattern matcher and a fixed pattern, an efficient
pattern matcher. What makes Jones optimality stand out
in comparison to such tests is that while a Jones-optimal
specializer with static expression reduction is guaranteed to
pass any of these tests by a suitable binding-time improvement
(Thm. 1), a specializer may successfully pass any number
of these tests, but as long as it is not Jones-optimal, its
strength is limited in some way (Thm. 2).
Even though the construction of the binding-time im-
prover used in the proof of Thm. 1 suggests that each source
program p is transformed into a new p # , such a deep transformation
may not be necessary in all cases. To what extent
each source program needs to be transformed in practice depends
on the desired optimization and the actual power of
the specializer spec 1 . More realistic binding-time improvers
will not need to transform each source program.
6. ROBUSTNESS
This section presents two results regarding Jones opti-
mality. They establish a certain kind of non-triviality for Jones-optimal specializers with static expression reduction.
In particular, they tell us that there is an infinite number of
self-interpreters for which a Jones-optimal specializer with
static expression reductions is also Jones-optimal.
Theorem 3. (Jones optimality not singularity) Let spec 1 , spec 2 ∈ Spec be two Jones-optimal specializers where spec 1 reduces static expressions, and let sint 1 , sint 2 ∈ Sint be two self-interpreters such that Jopt(spec 1 , sint 1 ) and Jopt(spec 2 , sint 2 ), then there exists a self-interpreter sint 3 ∈ Sint , different beyond renaming from sint 1 and sint 2 , such that Jopt(spec 1 , sint 3 ).
Proof: The proof uses a construction that combines two
self-interpreters and a specializer into a new self-interpreter
without breaking Jones optimality. Define the self-inter-
preter sint 3 by
sint 3
The new self-interpreter sint 3 is different beyond renaming
from the self-interpreters sint 1 and sint 2 since sint 3 contains
both programs. To show Jopt(spec 1 , sint 3 ), we examine how
spec 1 specializes sint 3 . Since spec 1 reduces static expressions
and since sint 3 is of a form that suits Def. 6, we have
then we can conclude from
this relation holds for any p and any d , we have the desired
property Jopt(spec 1 , sint 3 ). This proves the theorem.
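The kind of combination used in this proof can be sketched as follows, using the same callable modelling as before (an illustration of the idea, not the paper's exact definition):

def make_sint3(sint1, spec2, sint2, sd=("S", "D")):
    # A self-interpreter built from two self-interpreters and a specializer:
    # a non-trivial static computation (specializing sint2 with respect to p)
    # followed by interpretation of the resulting program by sint1.
    def sint3(p, d):
        p_equiv = spec2(sint2, sd, p)   # as fast as p, since Jopt(spec2, sint2)
        return sint1(p_equiv, d)
    return sint3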
First, we observe that spec 1 can be 'more' Jones-optimal for
sint 3 than spec 2 for sint 2 : the new target program can be faster than p #
which in turn can be faster than p. This is not surprising
since specializers are usually not idempotent, and repeatedly
applying a specializer to a program can lead to further opti-
mizations. This is known from the area of compiler construction
where an optimization may enable further optimization.
This also underlines that it is realistic to choose the timing
condition (#) in Def. 7, as already discussed in Sect. 3.
Second, as a special case of Thm. 3, let spec 2 = spec 1 and sint 2 = sint 1 . Then we can conclude by repeatedly
applying the theorem that for every Jones-optimal specializer
with static expression reduction there exists an infinite
number of self-interpreters for which the specializer is also
Jones optimal. We state this more formally in the following
theorem.
Theorem 4. (Jones optimality is robust) Let spec ∈ Spec be a specializer with static expression reduction and let sint ∈ Sint be a self-interpreter such that Jopt(spec, sint), then we have:
1. There exists an infinite number of self-interpreters sint i
Sint , which are pairwise di#erent beyond renaming,
such that Jopt(spec, sint i ).
2. There exists an infinite number of specializers spec i #
Spec, which are pairwise di#erent beyond renaming,
such that Jopt(spec i , sint).
Proof: Let copy 0 be a program such that [[copy 0 ]] [d ] = d for all d ∈ D. Define any number of programs copy i+1 in terms of copy i . For all d ∈ D , we have [[copy i ]] [d ] = d , and copy i and copy j are different beyond renaming (i ≠ j ≥ 0).
1. Define any number of programs sint i (i # 0):
sint i
Each sint i is a self-interpreter, and all self-interpreters
are different beyond renaming since they contain different
copy programs. Each sint i is of the form used in
Def. 6 and we conclude Jopt(spec, sint i ) since
2. Define any number of programs spec i (i # 0):
Each spec i is a specializer, and all specializers are different beyond renaming since they contain different copy
programs. We conclude that Jopt(spec i , sint) since
This proves the theorem. Remark: the first item can also
be proven by using the construction in the proof of Thm. 3.
The requirement for static expression reduction in both theorems
hints at a certain kind of non-triviality of such Jones-
optimal specializers (also suggested by Thm. 1). With static
expression reduction, we can be sure that Jones optimality is
not just a 'singularity' for a specializer, but that there is an
infinite number of such self-interpreters. A Jones-optimal
specializer with static expression reduction can be said to
be robust with respect to certain non-trivial modifications
of the self-interpreters. This effectively excludes the Jones-
optimal specializer myspec defined in Sect. 3. This may be
the kind of fairness sought for in earlier work [22].
7. THREE TECHNIQUES FOR BINDING-TIME
IMPROVING PROGRAMS
In this section we establish the connection between three
known techniques for doing binding-time improvements (Fig-
ure 2): using a stand-alone binding-time improver bti (c),
as we did in the previous sections, and stacking a source
program p with an instrumented self-interpreter (a, b). We
show that Thm. 1 and Thm. 2 cover the two new cases after
a few minor adaptions, and that all three techniques can
produce the same residual programs. Thus, they are equally
powerful with respect to doing binding-time improvements.
Consider the following fragments from Thm. 1 and Thm. 2, where p # , the binding-time improved version of p, is specialized with respect to x :
Observe that bti has the functionality of a translator-if we
disregard the extra argument SD. It is well-known [20] that
the generating extension [8] of an interpreter is a transla-
tor. Since a binding-time improver has the functionality of
an L-to-L translator, it is a generating extension of a self-
interpreter for L.
Instead of binding-time improving p by bti , we can specialize
a suitable self-interpreter sint with respect to p. There
are two ways to produce a residual program by specializing
a self-interpreter: either directly by the specializer projections
[11] (a) or incrementally by the Futamura projections
[9] (b). We will now examine these two cases.
7.1 Incremental Specialization
Let bti be a binding-time improver, and suppose that
spec 0 is a specializer and sint 0 a self-interpreter such that 4
Using this equality, we obtain a new set of equations (32, 33) from (29, 30).
Thus, every binding-time improvement can be performed in
a translative way, as in (29), or in an interpretive way, as in
(32). An example [34] for the latter case is the polyvariant
expansion of a source program by specializing a suitable self-
interpreter, and then applying a binding-time monovariant
offline specializer to the modified program. The overall effect of the transformation is that of a binding-time polyvariant offline specializer, even though only a binding-time monovariant offline specializer was used to produce the residual
program. Similarly, it is known [21, 27, 30, 31] that optimizing
translators can be generated from suitable interpreters.
Such techniques can also be used in self-interpreters to improve
the specialization of programs.
Theorems 1 and 2 carry over to the interpretive case, provided
we replace in the rhs of their implication the quantification "∃bti ∈ Bti" by the quantification "∃spec 0 ∈ Spec, ∃sint 0 ∈ Sint", and use (32, 33) instead of (29, 30).
7.2 Direct Specialization
The two steps (32, 33) can also be carried out in one step.
For notational convenience, let us first redefine the format of
a self-interpreter and of a specializer. This will make it easier
to accommodate programs with two and three arguments.
A self-interpreter sint # for interpreting programs with two
arguments can be defined by
and a specializer spec # for specializing programs with three arguments can be defined by
4 For every bti there exists a pair (spec 0 , sint 0 ) such that (31), and vice versa (Prop. 1 and 2 in Appendix A).
where program q is specialized with respect to one static
argument, a, in (35) and with respect to two static argu-
ments, a and b, in (36). The residual program then takes
the remaining arguments as input. Note that each case has
a di#erent division.
Using spec # , the two steps (32, 33) can be carried out in
one step. Let q be the self-interpreter sint # defined above,
let a be a two-argument program p, and let b and c be some
arguments x and y , respectively, then p can be specialized
with respect to x via sint # using (36):
That r # is a residual program of p follows from (34, 36):
Let bti be a binding-time improver and let spec 1 be a specializer
as in (29, 30), and suppose that spec # is a specializer
and sint # a self-interpreter such that 5
Equation (37) is the 1st specializer projection [11] but for
a self-interpreter instead of an interpreter. Specializing a
program via an interpreter is also known as the interpretive
approach [11]. An example [13] is the generation of a
KMP-style pattern matcher from a naive pattern matcher by
inserting an instrumented self-interpreter between the naive
pattern matcher and an o#ine partial evaluator (Similix).
As mentioned in Sect. 1, an o#ine partial evaluator cannot
achieve this optimization, but specializing the naive matcher
via an instrumented self-interpreter does the job. Another
example is the simulation of an online partial evaluator by
an offline partial evaluator [33], and the bootstrapping of
other program transformers [30].
Note that the transformation in (37) can be optimized at
another level: specializing spec # with respect to sint # yields a
new specializer. This internalizes the techniques of the self-
interpreter in the new specializer. This is known as the 2nd
specializer projection [11]; results on generating optimizing
specializers are reported elsewhere (e.g., [13, 33, 30]).
Again, both main theorems apply to this case by replacing
quantification "#bti # Bti " by "#sint # Sint " on the rhs of
the implication.
7.3
Summary
The three techniques for binding-time improving programs
are summarized in Fig. 2. They can produce the same residual
programs and, thus, are equally powerful with respect
to binding-time improving programs. Which technique is
preferable in practice, depends on the application at hand.
There exist example applications in the literature for each
case. The results in this section shed new light on the relation
between binding-time improvements and the interpretive
approach. Our two main theorems cover all three cases.
5 For every pair (spec 1 , bti) there exists a pair (spec # , sint # )
such that (39), and vice versa (Prop. 3 and 4 in Appendix A).
Direct specialization:
a. One step (1st specializer projection):
Incremental specialization:
b. Two steps (1st Futamura projection):
c. Three steps (2nd Futamura projection):
Figure 2: Three techniques for binding-time improving programs
When we add to (29, 30) a step that generates the binding-time
improver from a self-interpreter, we obtain (40, 41, 42).
This is the 2nd Futamura projection. Detail: For formal
reasons, due to the definition of specializer spec # in (35), the
arguments of bti # are reordered.
8. OPTIMIZING THE THEORETICAL CONSTRUCTION
The proof of Thm. 1 makes use of a general binding-time
improver. In many cases, only certain fragments that are relevant
to binding-time improving programs for a particular
specializer spec 1 need to be incorporated in a binding-time
improver, not the entire specializer (here spec 2 ). In the remainder
of this section, we will point out some possibilities
for optimizing the general construction by further program
specialization and program composition.
1. Specialization: We observe that the binding-time improved program p # produced by bti in (7) contains a specializer, spec 2 , whose first argument is fixed to p. This makes the
program structure of p # much too general. Usually, not all
components of spec 2 are needed to specialize p. We can improve
the construction by replacing spec 2 by a generating
extension gen p of p, a program which is specified by [[ [[gen p ]] [x ] ]] [y ] = [[p]] [x , y ].
A generating extension gen p is a generator of p's residual
programs. We can use the following program instead of
p λ(x , y). [[sint 1 ]] [ [[gen p ]] [x ], y ] q .    (44)
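In the callable model, a generating extension of p is obtained by fixing the first arguments of spec 2 (sketch):

def make_gen_p(spec2, p, sd=("S", "D")):
    # Generating extension of p: given only the static input x, it produces
    # p's residual program for x; composing it with sint1 as in (44) replaces
    # the full specializer spec2 inside the binding-time improved program.
    def gen_p(x):
        return spec2(p, sd, x)
    return gen_p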
2. Composition: We notice the fixed composition of sint 1
and gen p in (44). An intermediate data structure, a residual
program, is produced by gen p and then consumed by
sint 1 . Methods for program composition may fuse the two
programs and eliminate intermediate data structures, code
generation, parsing and other redundant operations.
It will be interesting to examine whether some of the
known binding-time improvements can be justified starting
from the general construction, and whether new binding-time
improvers can be derived from the general construction
where spec 2 represents the desired level of specialization.
Let us illustrate this with an example. Suppose onpe is an online partial evaluator and offpe is an offline partial evaluator
which reduces static expressions and is Jones-optimal
for self-interpreter sint . We have the specializers
According to Thm. 1 we have t r # (y) ≤ t r (y) where the two
residual programs are produced by
and a binding-time improver defined by
After binding-time improving p with bti , the performance
of r # produced by offpe is at least as good as the one of r
produced by onpe. Can we derive a useful binding-time im-
prover from bti by program specialization and program com-
position? Can we obtain automatically one of the binding-time
improved programs published in the literature (e.g., [6,
2]) by specializing bti with respect to a source program (e.g.,
a naive pattern matcher)?
Another challenging question is whether a binding-time
improver can be specified by combining 'atomic' binding-time
improvements, and how this relates to combining semantics
by towers of non-standard interpreters [1].
9. RELATED WORK
The first version of optimality appeared in [17, Problem
3.8] where a specializer was called "strong enough" if program
p and program are "essentially the
same programs". The definition of optimality used in this
paper appeared first in [18, p.650]; see also [20, Sect. 6.4].
Since the power of a specializer can be judged in many different
ways, we use the term "Jones optimality" as proposed
in [22]. These works focus on the problem of self-application;
none of them considers the role of Jones-optimal specialization
for binding-time improvements. Also, it was said [25]
that a specializer is 'weak' if it cannot overcome inherited
limits and that this is best observed by specializing a self-
interpreter. Our results underline this argument from the
perspective of binding-time improvements.
The term "binding-time improvement" was originally
coined in the context of offline partial evaluation [20], but
source-level transformations of programs prior to specialization
are routinely used in connection with any specialization
method. Binding-time improvements range from rearrangements
of expressions (e.g., using the associativity of arithmetic
operations) to transformations which require some ingenuity
(e.g., the transformation of naive pattern matchers
[6, 2]). Some transformations incorporate fragments of
a specializer [4]. A collection of binding-time improvements
for offline partial evaluation can be found in [3, Sect. 7] and
in [20, Chapter 12]. The actual power of binding-time improvements
was not investigated in these works.
The idea [11] of controlling the properties of residual programs
by specializing suitable interpreters was used to perform deforestation and unification-based information propagation [13, 14], binding-time polyvariant specialization [34, 30], and other transformations [30] with an offline partial evaluator. These apparently different streams of work are
now connected to binding-time improvements.
Related work [5] has shown that, without using binding-time
improvements, an offline partial evaluator with a maximally polyvariant binding-time analysis is functionally equivalent to an online partial evaluator, where both partial evaluators are based on constant propagation. This paper shows that any offline partial evaluator can simulate any online partial evaluator with a suitable binding-time improver, provided the offline partial evaluator is Jones-optimal.
10. CONCLUSION
Anecdotal evidence suggests that the binding-time improvement
of programs is a useful and inexpensive method
to overcome the limitations of a specializer without having
to modify the specializer itself. These pre-transformation
are often ad hoc, and applied to improve the specialization
of a few programs. It was not know what the theoretical limitation
and conditions for this method are, and how powerful
such pre-transformation can be.
Our results show that one can always overcome the limitations
of a Jones-optimal specializer that has static expression
reduction by using a suitable binding-time improve-
ment, and that this is not always the case for a specializer
that is not Jones-optimal. Thus, regardless of the binding-time
improvements which we perform on a source program,
no matter how extensively, a specializer which is not Jones-
optimal can be said to be strictly weaker than a specializer
which is Jones-optimal. This also answers the question
whether an o#ine partial evaluator can be as powerful as an
online partial evaluator.
Jones optimality was originally formulated to assess the
quality of a specializer for translating programs by specializing
interpreters. Our results give formal status to the term
"optimal" in the name of that criterion. Previously it was
found only in the intuition that it would be a good property;
indeed, it was proposed to rename it from "optimal special-
ization" to "Jones-optimal specialization" precisely because
it was felt to be wrong to imply that no specializer can be
better than a Jones-optimal one. This paper shows a way
in which this implication is indeed true.
The proofs make use of a construction of a binding-time
improver that works for any specializer. This construction
is su#cient for theoretical purposes, but not very practical
(it incorporates an entire specializer). In practice, there
will be more 'specialized' methods that produce the same
residual e#ect in connection with a particular specializer,
as evidenced by the numerous binding-time improvements
given in the literature. However, the universal construction
may provide new insights into the nature of binding-time
improvements. For instance, it will be interesting to
see whether some of the non-trivial binding-time improvements
(e.g., [6, 2]) can be justified by the universal construc-
tion, and whether new pre-transformations can be derived
from it (where the specializer incorporated in the universal
construction represents the desired specialization strength).
Eventually, this may lead to a better understanding of how
to design and reason about binding-time improvements.
Our results do not imply that specializers which are not
Jones-optimal cannot benefit from binding-time improve-
ments. For example, it seems that the KMP test can be
passed by an offline partial evaluator which is not Jones-
optimal. This leads to the question what are the practical
limits of specializers that are not Jones-optimal. More work
will be needed to identify cause and effect.
On a more concrete level, we are not aware of an on-line
partial evaluator that has been shown to be Jones-
optimal. This question should be answered positively, as it
was already done for offline partial evaluators. Also, there
is not much practical experience in building Jones-optimal
specializers for realistic programming languages. For in-
stance, more work will be needed on retyping transformations
for strongly typed languages to make these transformations
more standard, and more program specializers truly
Jones-optimal.
Acknowledgments
Comments by Sergei Abramov, Kenichi Asai, Mikhail Buly-
onkov, Niels Christensen, Yukiyoshi Kameyama, Siau-Cheng
Khoo, and Eijiro Sumii on an earlier version of this paper are
greatly appreciated. Special thanks are due to the anonymous
reviewers for thorough and careful reading of the submitted
paper and for providing many valuable comments. A
preliminary version of the paper was presented at the Second
Asian Workshop on Programming Languages and Systems,
Daejeon, South Korea.
11.
--R
Combining semantics with non-standard interpreter hierarchies
The abstraction and instantiation of string-matching programs
Similix 5.0 Manual.
Extracting polyvariant binding time analysis from polyvariant specializer.
On the equivalence of online and
Partial evaluation of pattern matching in strings.
For a better support of static data flow.
On the partial computation principle.
Partial evaluation of computing process - an approach to a compiler-compiler
Generalized partial computation.
On the generation of specializers.
On the mechanics of metasystem hierarchies in program transformation.
Generating optimizing specializers.
Generating transformers for deforestation and supercompilation.
Compiler generation by partial evaluation: a case study.
Partial evaluation and
Challenging problems in partial evaluation and mixed computation.
Partial evaluation
Program generation
Partial Evaluation and Automatic Program Generation.
Generating a compiler for a lazy language by partial evaluation.
On Jones-optimal specialization for strongly typed languages
Partially static structures in a self-applicable partial evaluator
Evolution of partial evaluators: removing inherited limits.
The structure of a self-applicable partial evaluator
ML pattern match compilation and partial evaluation.
Mechanical proof of the optimality of a partial evaluator.
A positive supercompiler.
Bootstrapping higher-order program transformers from interpreters
Realistic compilation by partial evaluation.
Tag elimination and Jones-optimality
The generation of a higher-order online partial evaluator
Polyvariant expansion and compiler generators.
--TR
Partial evaluation and ω-completeness of algebraic specifications
The structure of a self-applicable partial evaluator
Partial evaluation of pattern matching in strings
Partial evaluation, self-application and types
For a better support of static data flow
Generating a compiler for a lazy language by partial evaluation
Partial evaluation and automatic program generation
Extracting polyvariant binding time analysis from polyvariant specializer
Realistic compilation by partial evaluation
Bootstrapping higher-order program transformers from interpreters
Program Generation, Termination, and Binding-Time Analysis
Tag Elimination and Jones-Optimality
On Jones-Optimal Specialization for Strongly Typed Languages
Polyvariant Expansion and Compiler Generators
Combining Semantics with Non-standard Interpreter Hierarchies
On the Mechanics of Metasystem Hierarchies in Program Transformation
Evolution of Partial Evaluators
ML Pattern Match Compilation and Partial Evaluation
--CTR
Niels H. Christensen, Robert Glück, Offline partial evaluation can be as accurate as online partial evaluation, ACM Transactions on Programming Languages and Systems (TOPLAS), v.26 n.1, p.191-220, January 2004 | futamura projections;metacomputation;interpretive approach;jones optimality;binding-time improvements;self-interpreters;specializer projections
568176 | Search-based binding time analysis using type-directed pruning. | We introduce a new way of performing binding time analysis. Rather than analyzing the program using constraint solving or abstract interpretation, we use a method based on search. The search is guided by type information which significantly prunes the size of the search space, making the algorithm practical. Our claim is not that we compute new, or better information, but that we compute it in a new and novel way, which clarifies the process involved.The method is based upon a novel use of higher-order multi-stage types as a rich and expressive medium for the expression of binding-time specifications. Such types could be used as the starting point in any BTA. A goal of our work is to demonstrate that a single unified system which seamlessly integrates both manual staging and automatic BTA-based staging is possible. | INTRODUCTION
Binding Time Analysis (BTA) can be thought of as the
automatic addition of staging annotations to a well-typed,
semantically meaningful term in some base language. The
programmer supplies two things 1) a well-typed base language
program and 2) a binding time specification (a set of
instructions as to which parts of the program are static and
which parts are dynamic) and the analysis produces a new
program which is the old program plus staging annotations.
If successful, the new program is called well-annotated, and
the erasure of the staging annotations from the new program
produces the original base language program.
We introduce a new kind of BTA that works by searching
the space of annotated terms that can be produced by
adding one or more staging annotations to a well-typed base
term. These added staging annotations must be consistent
with both the original type of the program, and the user
supplied binding-time specification. The search space is explored
lazily by adding only those staging annotations that
maintain that consistency. If a path is discovered that could
no longer produce a well-annotated term, the path is immediately
pruned.
The search can produce more than one well-annotated
term. By directing the search, the algorithm can be adjusted
to produce "better" well-annotated terms first.
2. MOTIVATION
This work was motivated by our work on the MetaML
meta-programming system. In a meta-programming system,
meta-programs manipulate object-programs. The meta-programs
may construct object-programs, combine object-program
fragments into larger object-programs, and observe
the structure and other properties of object-programs.
MetaML was designed to be useful as a medium for the
expression of run-time code generators, but it has found
many other uses as well.
MetaML is a conservative extension of core ML. It includes
all the features of Standard ML, except the module
system. It adds four kinds of staging annotations for con-
structing, manipulating, and executing code as a first class
object. We discuss three of them here.
The staging annotations partition the program into stages.
Brackets (< _ >) surrounding an expression lift the surrounded
expression to the next stage. Escape (~( _ ))
(which should only appear within brackets) drops its surrounded
expression to a previous stage. Lift (lift _) evaluates
its argument to a ground value (a first order value
like 5) and constructs a program in the next stage with that
constant value. In a two stage world, bracketed code is dy-
namic, and unbracketed code is static. There is no a-priori
restriction to two stages.
The most common use of MetaML is the staging of programs
with interpretive overhead to improve run-time per-
formance. This is the same reason partial evaluators are
often employed. It is useful to consider how this is accomplished
in MetaML. In earlier work[22] we identified a 7 step
process which we greatly abbreviate here.
The example we use is the traditional one - staging the
power function. But, we have used the same process on
many other programs, some orders of magnitude larger.
First write an unstaged program:
fun power n x = if n=0 then 1 else x * (power (n-1) x);
Second, identify the source of the interpretive overhead. In
the power example, it is looping on the variable representing
the exponent n.
Third, consider the type of the unstaged function. Here, it
is (int -> int -> int). Consider an extension of this type
that can be obtained by adding the code type constructor in
one or more places that makes the source of the interpretive
overhead static, and the other parameters dynamic. There
may be several such types. For example, two different extensions
of power's type are (int -> <int> -> <int>) and
(int -> <int -> int>).
Fourth, choose one of these extension types, and place
staging annotations (bracket, escape, lift) on the original
program to produce a well-annotated program. For example
a staged version of power with type (int -> <int> -> <int>) is the function pow1.
fun pow1 n x =
  if n=0 then <1>
  else < ~x * ~(pow1 (n-1) x) >;
The choice of which extended type to use to guide the
annotation often depends on the context that the generator
is to be used in. Sometimes it is obvious, other times not so.
The subtleties are beyond the scope of this short introduc-
tion, and not relevant here, since even in an automatic
BTA-based system the user must supply the staging specification.
The staged version is then used to produce a computation
without interpretive overhead. In the examples below pow1
is used in several contexts to generate the code fragments to
the right of the evaluates-to arrow (->):
<fn z => ~(pow1 4 <z>)>
-> <fn z => z * z * z * z * 1>
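To convey the binding-time separation to readers more familiar with an unstaged functional language, here is a rough Haskell analogue (not MetaML, and only an approximation of what the staging annotations achieve): the recursion over the exponent is performed when the first argument is supplied, so the returned function no longer loops over n.

-- A rough analogue of the staged power function: powerGen n builds a
-- chain of closures once, so repeated applications to different bases
-- do not recurse over the exponent again.
powerGen :: Int -> (Int -> Int)
powerGen 0 = const 1
powerGen n = let rest = powerGen (n - 1)   -- the recursion on n happens here
             in \x -> x * rest x

-- Example: let pow4 = powerGen 4 in map pow4 [2,3]  ==  [16,81]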
3. A UNIFIED VISION
In the winter of 2002, one of the authors taught a course
on staged computation. A pattern emerged. A typical assignment
consisted of an unstaged function with type t, and
a target extended type t', and the instructions, "Write a
staged version of the unstaged function with the target type t'."
A discussion arose in class about the possibility of automatically
producing the staged versions of some functions
which could be mixed with other manually staged functions
in a single system. This caused the class, and the instruc-
tor, to consider the process they followed when they staged
a function. The answer was obvious - they used the type
information contained in both the type of the original program
and the target type, to guide the placement of the
staging annotations.
We have argued [20, 21, 23] that manual annotation gives
the programmer finer control than any automatic process
can ever achieve. But we must admit that many times
an automatic annotation would both suffice and place less
of a burden on the programmer. Why couldn't a tightly
integrated system which supported both manual annotation,
as practiced in MetaML, and an automatic BTA be built?
In our view, the key obstacle to such a system was reconciling
the binding time specifications of a given BTA with
MetaML's rich type structure, in which it is possible to mix
code types, data structures, and higher order functions. Many
BTAs are driven by binding time specifications which are
simple directives indicating that certain global variables and/or
parameters are simply static or dynamic. Newer ones have
been extended to partially static first order data. In MetaML,
code types are completely first class. In MetaML it is possible
for data structures to embed functions which manipulate
code. This kind of partially static higher-order function (i.e.,
a meta-function which takes meta-functions as arguments) is
missing from existing binding-time specifications. The key insight
was to use MetaML's rich types themselves as binding time
specifications.
Once this key insight was understood, then several smaller
hurdles were easily resolved.
. Most BTAs are described as part of a partial evaluation
system. Such systems are almost always based
upon source to source transformations, and work at the
source-file level. They produce new source-files which
are compiled by existing compilers. This was hard
to reconcile with MetaML's view that staging annotations
are first class semantic objects and are part of
the language definition. By viewing MetaML types as
staging specifications, it is an easy next step to view
them as directions to the compiler to produce a new
program whose meaning is given in terms of MetaML's
semantically meaningful staging annotations.
. The output of BTA is opaque to most users. It consists
of a generating extension which when applied to
the static arguments produces the residual program.
The annotations are there in the generating extension,
but the partial evaluation paradigm does not encourage
users to look at or understand the generating ex-
tension. Most users have no idea what they look like,
or how they are used. But in MetaML users already
know what staged programs look like, they write them
themselves when they manually annotate programs, so
the conceptual barrier is lower.
The idea of using higher-order partially static types (func-
tion arrows with code types) as the directive which guides
an automatic BTA in an otherwise manually staged system,
provides a fine level of control that was previously miss-
ing, yet still enables automatic staging when desired. We
elaborate our vision of an integrated system encompassing
both manual and automatic BTA as a simple extension to
MetaML.
The automatic BTA produces an annotated term from a suggested staged
type. To MetaML we add a new declaration form stage.
When a user wishes to indicate that he desires an annotated
version f # , of a function f , at the annotated extension type
t, he writes: stage f at t. It is the compiler's job to
produce such a function automatically or cause a compile-time
error if it can't. Declaring the following:
stage power at int -> <int> -> <int>;
causes the compiler to generate and compile the new function
pow1.
fun pow1 n x = if n=0 then <1> else < ~x * ~(pow1 (n-1) x) >;
This is the same output that we produced by hand above.
Similarly, the declaration:
stage power at int -> <int -> int>;
causes the compiler to generate and compile the function
pow2.
fun pow2 n =
  <fn x => ~(if n=0
             then <1> else < x * ~(pow2 (n-1)) x >)>;
This scheme is quite flexible. It can be used to stage functions
with partially static data, partially static higher order
types, or to stage functions with more than two stages. For
partially static data consider:
(* map : ('a -> 'b) -> 'a list -> 'b list *)
fun map f [] = []
  | map f (x::xs) = f x :: map f xs;
stage map at ('a -> 'b) -> 'a list -> <'b> list
leading to the automatic introduction of a staged version map2 of map at that type.
For higher order partially static types consider:
stage map at ('a -> <'b>) -> 'a list -> <'b list>
leading to the automatic introduction of a staged version map3 of map at that type.
For staging programs at more than two stages consider an
inner product function staged to run in three stages[8, 14].
In the first stage, knowing the size of the two vectors offers
an opportunity to specialize the inner product function on
that size, removing a looping overhead from the body of
the function. In the second stage, knowing the first vector
offers an opportunity for specialization based on the values
in the vector. If the inner product of that vector is to be
taken many times with other vectors it can be specialized
by removing the overhead of looking up the elements of the
first vector each time. In the third stage, knowing the second
vector, the computation is brought to completion.
fun iprod n v w =
  if n > 0
  then ((nth v n)*(nth w n)) + (iprod (n-1) v w)
  else 0;
stage iprod at
int -> <int list> -> <<int list>> -> <<int>>
Here the operator '>' is the greater-than operator. MetaML
uses this operator because the normal greater than operator
conflicts with MetaML's use of the symbol > as a staging
annotation. As before, the stage declaration would cause
the compiler to automatically produce the three stage annotated
version:
then << (~(lift (nth ~v n)) * (nth ~(~w) n))
else <<0>>;
4. FRAMEWORK
In this section we describe a minimal language we use to
describe our BTA. The examples in the previous section are
expressed in MetaML, a language considerably richer than
the minimal language we will describe next. The properties
we will show to hold for the minimal language will also hold
for MetaML.
Base Terms. The structure of base terms is defined by an
inductive set of productions. These productions define the
set of syntactically correct terms. For example, a variant of
the lambda calculus with integer constants could be defined by:
E ::= i | x | λx.E | E E | if E E E
Annotated Terms. Staging annotations are added to
the set of productions to define the set of syntactically correct
annotated terms. For example, we add the productions
for bracket (< E >), escape (~ E), and lift (lift E).
Erasure is the process of removing annotations from an
annotated term to produce a base term.
Base Types. The set of base types for base terms is also
inductively defined. The actual form of types depends upon
the constructs and concepts inherent in the base language.
For the lambda calculus variant, the types of base terms
can be defined by introducing types for constants like I for
integers, type constructors like list, and function types:
T ::= I | T list | T -> T
Annotated Types. We also extend the set of base types
by adding the code type constructor < T > to produce the set of
annotated types:
A ::= I | A list | A -> A | < A >
Note that brackets are overloaded to work both on annotated
terms, and on annotated types.
Environments. We assume our language has a full complement
of primitive functions that operate on the base types
(+, *, -, etc.), and the built-in data structures (head, tail,
cons, nil, null, etc.). Environments map these global
constants and lambda-bound variables to their types.
Figure 1: Judgments for well-typed base terms, and
their extension for well-annotated terms (including the annotation rules Br and Es).
Well-typed Terms. Type judgments select a subset of
the syntactically well formed terms which are semantically
meaningful. We call such terms well-typed. The top half
of Figure 1 gives a set of judgments for base terms. The
form of a judgment is Γ ⊢n e : t, and it can be read as: under
the environment Γ, the term e can be assigned the type t
at stage n. Here n is a natural number, and terms at level 0
are terms without any staging annotations. In general a
term at level n is surrounded by n sets of matching brackets.
For base terms the stage information can safely be ignored.
Indeed, by erasing the stage information, the top half of Figure
1 reduces to the familiar type judgments of the lambda
calculus.
Well-annotated Terms. Adding type judgments for the
staging annotations to the judgments for base terms defines
a new judgment that selects a subset of the syntactically correct
annotated terms called the well-annotated terms. The
bottom half of Figure 1 extends the top half with judgments
for the annotations bracket, escape, and lift. The stage information
(the n in the judgment) counts the number of
brackets surrounding the current term. This count ensures
that all escapes appear within brackets, and that variables
are only used in stages later than their binding stage.
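To make the level discipline concrete, here is a small self-contained Haskell sketch; the term type below is a cut-down stand-in introduced for illustration (not the paper's E), and it captures only the bracket-counting side of the judgments, not the condition on variables and their binding stages.

data Tm = Var String | Lam String Tm | App Tm Tm
        | Br Tm | Esc Tm | Lift Tm

wellLeveled :: Int -> Tm -> Bool
wellLeveled n tm = case tm of
  Var _   -> True
  Lam _ e -> wellLeveled n e
  App f x -> wellLeveled n f && wellLeveled n x
  Br e    -> wellLeveled (n + 1) e            -- brackets raise the level
  Esc e   -> n > 0 && wellLeveled (n - 1) e   -- escapes need a surrounding bracket
  Lift e  -> wellLeveled n e

-- wellLeveled 0 (Br (Esc (Var "x")))  == True
-- wellLeveled 0 (Esc (Var "x"))       == False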
Annotated Extensions. There are two inputs to the
process: a well-typed base term e with base type t1 ,
and a target annotated type t2 . The type t2 should be an
annotated extension of the type t1 , and is the type of the
annotated term we wish the BTA to find.
The Relation ⊑. Base types and their annotated extensions
are related by the relation ⊑. The relation ⊑ ∈ P(basetype × annotatedtype)
intuitively means "can be obtained by removing staging annotations from".
It can be made precise by the following inductive rules, where we use a notation
in which bi ∈ basetype, and ai ∈ annotatedtype, to
remind the reader that the two arguments to the relation
come from different sets.
I ⊑ I
b ⊑ a implies b ⊑ < a >
b ⊑ a implies b list ⊑ a list
b1 ⊑ a1 and b2 ⊑ a2 imply b1 -> b2 ⊑ a1 -> a2
The relation ⊑ simply formalizes the notion of erasure on
types. All it does is describe in a precise manner when one
type is an erasure of another. If erase t2 = t1 then t1 ⊑ t2.
Note that erase acts homomorphically on all type constructors
except the code (bracket) type constructor. Adding
annotations to a program e with type t1 cannot produce
another program of arbitrary type t2. The types t1 and t2
must be related in the fashion made precise by ⊑.
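To make the erasure function concrete, here is a small Haskell sketch over the datatype T given later in the paper; the constructor names I, Arr, Code, and List, and an Eq instance for T, are assumptions based on that datatype.

eraseT :: T -> T
eraseT I         = I
eraseT (Arr a b) = Arr (eraseT a) (eraseT b)
eraseT (List a)  = List (eraseT a)
eraseT (Code a)  = eraseT a        -- the only non-homomorphic case: drop the brackets

-- t1 ⊑ t2 exactly when erasing t2 gives back t1
isExtensionOf :: T -> T -> Bool
t2 `isExtensionOf` t1 = eraseT t2 == t1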
Given two partial functions Γ1 and Γ2, representing environments,
where Γ1 maps term variables to base-types, and
Γ2 maps term variables to annotated-types, both with the
same domain, we lift the relation ⊑ pointwise to environments:
Γ1 ⊑ Γ2 iff for all x ∈ Dom(Γ1, Γ2), Γ1 x ⊑ Γ2 x.
Overloading ⊑ on Terms. We overload the relation
⊑ ∈ P(terms × annotatedterms) on terms as well as types.
The two meanings are so similar that this shouldn't be a
problem conceptually.
i ⊑ i    x ⊑ x
b ⊑ a implies λx.b ⊑ λx.a
b1 ⊑ a1 and b2 ⊑ a2 imply b1 b2 ⊑ a1 a2
b1 ⊑ a1, b2 ⊑ a2, and b3 ⊑ a3 imply if b1 b2 b3 ⊑ if a1 a2 a3
b ⊑ a implies b ⊑ < a >
b ⊑ a implies b ⊑ ~ a
b ⊑ a implies b ⊑ lift a
Again we use bi ∈ baseterm and ai ∈ annotatedterm. The
lifted relation simply formalizes the notion of erasure on
terms. If e1 is an erasure of e2 then e1 ⊑ e2.
5. A STAGING CHECKING SYSTEM
Given a term e, its type t, and a target type extension t',
we wish to find an annotated term e' such that t ⊑ t' and
e ⊑ e'. We need some adjustments to
our notation to capture this precisely.
First, the extension of ⊑ to terms may seem too syntactically
rigid. For example, terms which are α-equivalent
are not related if the bound variable is not the same. This
should not be a problem since we intend to generate the annotated
terms on the right-hand-side of the relation, and we
can always use the same name for the bound variable.
Second, the ordering on terms is not quite satisfactory.
It is possible to relate annotated terms that are not well-
annotated to well-typed base terms. Since we only care
about relating well-typed terms to well-annotated terms,
we'll define a new relation which captures this distinction.
Overloading ⊑ once again, we combine the typing judgments
and the type and term relations into one judgment relating a
well-typed base term to a candidate well-annotated term.
Given e1 , t1 , e2 , and t2 infer if e2 could both be well-
annotated at type t2 , and be related to e1 by the term overloading
of ⊑. We formalize this by writing down a new set
of judgments which appear in Figure 2.
Figure 2: Relating well-typed terms to well-annotated terms
(the judgments include rules Lam, App, If, Code, and Escape).
The judgments are derived in a straightforward manner
from those in Figure 1, and the relation ⊑ on types and
terms.
6. FROM CHECKING TO INFERENCE
Can we move from a type checking system that checks
if two terms are related by staging annotation, to an algorithm
that computes the staged program from the unstaged
program? The judgments in Figure 2 describe several rules
for checking relationships between e1 , e2 , t1 , and t2 , when
all four are known. When e2 is unknown, we can use the
rules to guide a search. A slight restructuring of the rules
helps illustrate this. Let the notation e2 A #
denote a search for a well- annotated term e2 whose erasure
is e1 . Let t1 be the type of e1 and t2 be the target type of
the sought-after term e2 ., and s be the current state. We'll
discuss states in a moment. Most of the rules proceed by
searching for annotations for the subterms of e1 , which will
be combined to form e2 . Occasionally we search for the annotation
of a subterm, fully intending to wrap a new set
of brackets around this result when we get it. We call the
number of "pending" brackets the level of the search, and it
is part of the state of the search. Let n be the level of the
search. A checking rule like If then leads naturally to a search rule:
search for annotated versions e4, e5, and e6 of the three subterms of the if,
and combine the results into annotated terms of the form (if e4 e5 e6).
Note how the type environment in the checking rule changes
to a different environment, Φ, in the search rule. In the checking rules
we map term variables to types, but in a search algorithm
we map term variables to annotated terms. In the inference
algorithm, where we compute an annotated term for every
unannotated counterpart term, Φ maps unannotated term
variables to well-annotated terms.
What the checking rules don't tell us is in what order to
apply the search rules, or what to do if a rule fails. At some
point we need to make a choice about what implementation
mechanism we will use to implement our search. Prolog
comes immediately to mind, and would probably have
made a good choice, but further investigation suggested several
annoying details, and we choose the functional language
Haskell because of the ease with which we could modify our
program even after drastic changes to what we considered a
search. This will become more clear in what follows.
Except for the rules Code, Escape and Lift, the rules
are syntax directed over the structure of e1 . These rules are
driven by the syntactic structure of t2 and n. We can use this
structure to decide what rules are applicable. Unfortunately,
at any point, more than one rule is usually applicable. We
must also be careful because the rules Code and Esc are
circular, and could lead to derivations of infinite height, and
hence search of infinite depth.
An algorithm will control each of these parameters in some
way. The key to the algorithm is controlling two important
aspects: the search is partial - it may fail. And, the search is
non-deterministic - there may be more than one annotated
term with the given type.
An effective way to control the search is to attempt
the Escape rule first, followed by the syntax-directed rules
over the structure of e1 next, and to apply the Lift and
Code rules only if all the other rules fail. Why this is an
effective strategy is discussed in Section 13. To control the
circularity of Code and Escape we embed the algorithm
in a small state machine with three states: clear, up, and
down.
[Diagram of the three-state automaton with states up, clear, and down: Code edges lead toward up, Escape edges lead toward down, and the syntax-directed rules (If, Abs, App, ...) lead back to clear.]
The algorithm starts in state Clear. Any use of rule Code
moves to state Up, where Escape is not allowed, and any
use of Escape moves to state Down where Code is not al-
lowed. Applying any rule which recurses on a sub- term of
the current term, moves the machine back to state clear.
For example, the rule for integer constants says that if we have an
integer constant i, and we're searching for a term of type I,
in any state s, then the same term i will suffice.
For terms with sub-structure we need to search for annotations,
of a particular type, of some of the given term's
sub-terms. For example, combining the App and Code rules:
if we can find an annotated version f' of f with the given
type, and an annotated version x' of x, then we can find an
annotated version of the application f x.
If a search on a sub-term fails, then the search for the
whole term fails. But, if a search on a sub-term produces
more than one result, then the search on the whole term
may produce more than one result. If a term has two sub-terms
A and B, and the search on A produces n results, and
the search on B produces m results, the search on the whole
term will produce n × m results.
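This multiplicative combination is exactly what the do notation provides. The following self-contained Haskell sketch, using the plain list monad as a stand-in for the paper's monad of multiple results, illustrates it:

-- Combining the result sets of two sub-searches: every result for the
-- first sub-term is paired with every result for the second one.
combineResults :: [a] -> [b] -> (a -> b -> c) -> [c]
combineResults as bs mk = do a <- as
                             b <- bs
                             return (mk a b)

-- length (combineResults [1,2,3] [10,20] (+)) == 6, i.e. n * m results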
Fortunately there is a well-studied formalism for describing
such algorithms. This is the notion of monadic com-
putation. Several papers give good overviews of monadic
computation [24, 25, 26], and we assume some familiarity
with monadic programs.
In our case the monad is the non-determinism monad
(sometimes called the list monad because of the data structure
on which it is based, or the monad of multiple results).
Consider the search rule specified above. Which of the
sub-searches do we perform first? How do we specify what
to do if one of them fails? If they both succeed, how do we
specify the combination of the two sets of results? This is
the job of the monad. We express the search implicit in the
rule above explicitly in an equation that uses the monadic
do notation.
A[f x : ...] =
  do x' <- A[x : ...]
     f' <- A[f : ...]
     return (f' x')
Perform the search for an annotation of the subterm x
first; if that succeeds, search for an annotation of f next;
if that succeeds, combine all the results from both searches
pairwise, building the newly annotated term f' x'.
Using the checking rules from Figure 2 we build search
rules. We use the do notation to control the search ele-
ments. Each rule leads to a small search component. A
complete search is constructed using the do notation to con-
trol, sequence, and combine the component searches. The
complete algorithm can be found in Appendix A written as
a working Haskell program. The program uses the following
definitions for types and terms.
data T = I              -- int
       | Arr T T        -- T -> T
       | Code T         -- < T >
       | List T         -- T list
data E = EL String E    -- (fn x => E)
       | EV String      -- x
       | EI Int         -- 5
       | EA E E         -- E E
       | ...            -- remaining constructors (if, lift, bracket, escape)
For each checking rule in Figure 2, we derive one or more
search components written as Haskell functions. Each component
has the type Component; its result is a computation in M, the monad of multiple results.
Most rules lead to a single component, but consider the
rule App. Its goal is to relate an application (e1 e1') to an annotated application (e2 e2').
When searching, (e1 e1'), t1, and t2 are known, and (e2 e2')
is not. We know (e1 e1') is well-typed at type t1, thus it is
possible to compute the type of e1, which is s1 -> t1. It is
not possible to compute the domain of e2, which is labeled
s2 in the rule. All we know is that s1 ⊑ s2. So a search rule
based on this checking rule will have to choose some s2 that
is correctly related to s1. Some simple choices are s2 = s1
and s2 = <s1>. This leads to two different search rules in
our program. Additional rules are possible, and a generic
treatment is discussed in Section 14.
To give a taste of how a component is constructed, we
discuss these two rules. Two ways to stage an application
would be to stage the whole term as if f had type
s1 -> t2, or to assume f has type <s1> -> t2. The first way is
captured by the following component.
appCase1 :: Component
appCase1 _ n sig phi (t1, t2, e @ (EA e0 e1)) =
  trace "App1" n e t2
  (do let s1 = dom sig e0      -- Compute domain type of e0
      e2 <- ...                -- sub-search: annotate e0 at type (Arr s1 t2)
      e3 <- ...                -- sub-search: annotate e1 at type s1
      return (EA e2 e3))
appCase1 _ _ _ _ _ = ...       -- fail on anything that is not an application
Note how the state of the search on the sub-terms is reset
to clear, and how the component follows the structure of
the typing judgment and the relation ⊑. This is one of the
syntax-directed rules, and if applied to a non-application,
the default clause (the final equation for appCase1) causes
the component to fail.
In the second component, we account for the code property
of the argument.
appCase2 _ n sig phi (t1, t2, e @ (EA e0 e1)) =
  trace "App2" n e t2
  (do let s1 = dom sig e0      -- Compute domain type of e0
      e2 <- ...                -- sub-search: annotate e0 at type (Arr (Code s1) t2)
      e3 <- ...                -- sub-search: annotate e1 at type (Code s1)
      return (EA e2 e3))
appCase2 _ _ _ _ _ = ...       -- fail on anything that is not an application
The main algorithm is composed of a search strategy applied
to the individual search components. In the next section
we comment on the control mechanisms used to direct
the strategy of the main algorithm.
7. CONTROL OF THE SEARCH
The do notation is used to control the order of the search,
but its action on failure is to propagate failure once it arises.
This means a single failure, anywhere, causes the whole algorithm
to fail. What is needed is a mechanism to set up
several searches and to combine the successful results into
one large set of results. The monad of multiple results (M),
supports several operations that facilitate this.
leftChoice :: M a -> M a -> M a
leftChoice [] ys = ys
leftChoice xs _  = xs

first :: [M a] -> M a
first []     = error "first done\n"
first (x:xs) = leftChoice x (first xs)

many :: [M a] -> M a
many []     = []
many (x:xs) = x ++ many xs
The operation leftChoice takes two computations producing
multiple results. If the first succeeds, it returns the
result and ignores the second. If the first fails, it runs and
then returns the results of the second computation.
The operation first iterates leftChoice over a list of
computations. It returns the results of the first successful
computation in the list.
The operation many runs all the computations in the list,
and returns the concatenation of all the results. It is the
operation used to specify a branching search.
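As a small usage illustration (assuming, as above, that M a is essentially a list of results), first keeps only the results of the first component that succeeds, whereas many keeps everything:

-- Three component searches: one failed, two successful.
componentResults :: [[String]]
componentResults = [ [], ["<f ~x>"], ["<f x>"] ]

-- first componentResults  ==>  ["<f ~x>"]
-- many  componentResults  ==>  ["<f ~x>", "<f x>"]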
8. THE MAIN ALGORITHM
The main algorithm is called a1; it regroups the syntax-directed
arguments into a tuple, and then calls a2. The
algorithm a2 is defined as a large search, whose search strategy
is constructed using first and many on the component
searches. This strategy is only one possible strategy. Other
strategies are possible. We discuss strategies in Section 13
The function a1 takes as input the level (n), environments
mapping term variables to types (sig), and term variables to
annotated terms (phi), two types (t1 and t2), and a term
(e1). It produces multiple results, hence its return type
(M E). It is meant to correspond roughly to the search notation introduced above.
a2 step n sig phi x =
  first
    [ escCase  step n sig phi x   -- note Esc case first
    , intCase  step n sig phi x
    , varCase  step n sig phi x
    , absCase  step n sig phi x
    , appCase1 step n sig phi x
    , appCase2 step n sig phi x
    , appCase3 step n sig phi x
    , ifCase   step n sig phi x
    , liftCase step n sig phi x
    , codeCase step n sig phi x
    ]
9. EXAMPLE TRACE
In this section we show a trace of the search to stage the
term λf.λx.f x at the type <I -> I> -> I -> <I>.
Abs {0} fn f => fn x => f x : <int -> int> -> int -> <int>
Abs {0} fn x =>
failed
App1 failed
failed
App2 failed
failed
App1 failed
failed
App2 failed
failed
succeeded f
Esc succeeded ~f
Esc
failed
Var succeeded x
succeeded lift x
Esc succeeded ~(lift x)
App1 succeeded ~f ~(lift x)
Code succeeded <~f ~(lift x)>
Abs succeeded fn x => <~f ~(lift x)>
Abs succeeded fn f => fn x => <~f ~(lift x)>
10. CORRECT BY CONSTRUCTION
We believe that the soundness of the search algorithm
with respect to the checking rules can be proved, although
we have not yet done so. Every node of the search space considered
by the search algorithm is generated from the well-typed
source expression by a checking rule. So the search
program never considers an invalid term. The algorithm
may return multiple results, and some of these results may
be "better" than others, but they will all be valid extensions
of the base term, at the type given. In Section 13 we discuss
the use of strategies to order the returned solutions, best
ones first.
If the algorithm uses the strategy employed in Appendix
A, it is easy to argue that the algorithm always terminates.
A measure function[4] can easily be constructed on the arguments
and state that decreases by a well-founded
relation on every recursive call of a1. While n may increase
in the Code rule, it is always accompanied by a decrease
in the size of t2. When t2 has all its brackets stripped off,
there must be a decrease in the size of e1 by one of the syntax-directed
rules, or a failure. The three-state automaton
encoded in the state parameter enforces this. Since the measure function
decreases on every recursive call, and cannot fall below zero,
the algorithm must terminate.
11. POLYVARIANCE
Polyvariance allows a single function to be used at multiple
binding types. The algorithm needs no changes to support
polyvariance. Recall that one of the parameters to the
search algorithm is an environment mapping term variables
and types to annotated terms. By allowing the environment
to map the same term variable at di#erent types to di#erent
annotated terms, polyvariance is achieved. For example consider
the function f in an environment where the function h
has type int -> int -> int.
stage at int -> ->
producing
If the BTA environment had several stagings of h we could
do better.
stage at int -> ->
stage at -> int ->
stage at int -> ->
With these stagings of h we could do more work statically
by using h1 and h2 in the annotated version of f produced
automatically by the compiler.
Although the definition of f mentions only h, the automatic
BTA can make use of all declared stagings of h, and
generates a staging of f with polyvariant uses of h.
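As a small data-structure sketch of the polyvariant environment just described (the names here are illustrative assumptions, not the paper's, and an Ord instance for the type representation T is assumed): the environment is keyed on both the identifier and the annotated type, so one identifier can map to several staged versions.

import qualified Data.Map as Map

type StagingEnv = Map.Map (String, T) E

lookupStaged :: String -> T -> StagingEnv -> Maybe E
lookupStaged x t env = Map.lookup (x, t) env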
A tight integration of both automatic and manual staging
can be used to the mutual advantage of both. Consider manually
staged versions of ( op * ), the primitive multiplication
operator in MetaML. These manually staged versions
exploit the arithmetic identities x * 0 = 0 and x * 1 = x, which
are beyond the scope of any automatic BTA without using
semantic information. Yet a programmer can easily stage
them manually, injecting this semantic information into the
system.
| times1
| times1
| times2
| times2
To inform the automatic BTA of times we use a variant
of the stage declaration.
stage times1
stage times2
Without the at type suffix, the stage declaration checks
that times1 and times2 are manually staged versions of
( op * ) at their staged types (as is made precise by
the relation ⊑), and adds them both to the environment of
the BTA process.
Thus manually staged versions of functions, which use semantic
information to force computation to earlier stages,
can be used polyvariantly.
12. POLYMORPHISM
Our techniques easily extend to a language with Hindley-Milner
polymorphism. In Hindley-Milner polymorphism,
all universal quantification appears at the outermost
level. This allows a simple preprocessing step to extend
the BTA to a language with Hindley-Milner polymorphism.
Consider the example where the programmer wants
to stage the standard function map
at the type <I -> 'c> -> [I] -> <['c]>. Consider each of
these types to be universally quantified (at the outer-most
level) over the free type variables (those with tick marks,
like 'a). The staged type is more annotated, but less gen-
eral, than the type of map. In order to handle a staging
declaration like
stage map at <int -> 'c> -> [int] -> <['c]>;
we first unify the erasure of the target type (<I -> 'c>
-> [I] -> <['c]>) with the type of map (('a -> 'b) ->
['a] -> ['b]) to obtain a substitution (mapping 'a to I
and 'b to 'c), which we then apply to both the source type and
the (unerased) target type before proceeding. The algorithm
must treat type variables as unknown types, and hence only
rules where the actual structure of the type is immaterial
may apply. The BTA will then do the right thing.
13. STRATEGIES
The strategy used to order the individual components
matters. There are two different properties of the output
that we are trying to achieve simultaneously: minimality
and optimality. We are working on precise definitions of
these properties, but do not have them completely worked
out yet. Informally, minimality means that the answer produced
has the minimum number of staging annotations. For
example, both <f x> and <f ~<x>> are extensions of (f x)
where f has type a -> b and x has type a. But the first
is preferred because it has fewer staging annotations than
the second. We wish to arrange our search strategy so that
minimal annotations are found before larger ones, or better
yet, so that the search spaces containing non-minimal ones
are pruned completely.
The three-state automaton that guides the use of the Code
and Escape rules not only prevents infinite derivations,
but also prunes non-minimal ones. The annotation ~<e> is
pruned since the Code rule cannot be applied immediately
after the Escape rule without first descending into a sub-term.
Informally, optimality means that all computation is performed
at the earliest stage possible. Static ifs, such as
(if x then <y> else <z>), are preferred over dynamic ones
such as <if x then y else z>, because the first allows the
test of the if to be performed in an earlier stage.
Currently, our search strategy uses a heuristic to push optimal
solutions to the front of the result list: Try the Esc
rule first. The component escCase always fails at level 0,
but at levels greater than zero it searches for solutions at
lower levels, i.e. earlier stages. By placing it first in a first
or many strategy, we order stages with the earliest possible
computations first. This explains why the component Escape
is first in function a2. An analysis or proof that this
always leads to optimal results remains as future work.
14. COMPLEXITY
It is hard to estimate the complexity of the algorithm without
knowing quite a bit about the search strategy used. The
strategy used in Appendix A is extremely simple. It consists
of a single first control operator. This will cause a search
whose maximum depth is proportional to the depth of the
term e plus the number of brackets in the target type. If
the environment maps each term variable to a single annotated
term at each type, then the algorithm will always find
exactly 0 or 1 results. The breadth of the search is not so
easy to estimate, and depends upon both the term being
annotated, and the type at which the annotation is sought,
and is the source of most of the algorithm's complexity.
We have identified two places where clever implementation
techniques can overcome some of this complexity.
First, the naive algorithm performs redundant computa-
tions. These redundant computations are possible because
the search space of annotated programs is a directed acyclic
graph. There are often two (or more) paths to the same
subproblem. Here is an example of algorithm trace showing
this behavior.
staging (if large_calc then e1 else e2) ::
If rule fired
staging (large_calc) :: Bool
... large calculation ...
If rule failed
rule fired
staging (if large_calc then e1 else e2) :: Bool
If rule fired
staging (large_calc) :: Bool
... same large calculation ...
If failed
failed
This is solved by the standard technique of dynamic pro-
gramming. We memoize away results in a table as we compute
them, and then do a table lookup when attempting
sub-searches to make sure we're not recalculating anything.
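The memoization can be sketched as follows in Haskell; this is a simplified stand-in (using IORef and string keys as assumptions) rather than the paper's monad-based implementation.

import qualified Data.Map as Map
import Data.IORef

-- The key identifies a sub-search (level, term, target type); the result list
-- stands in for the monad of multiple results.
type Key    = (Int, String, String)
type Result = [String]

memoSearch :: IORef (Map.Map Key Result)
           -> (Key -> IO Result)          -- the underlying search
           -> Key -> IO Result
memoSearch tableRef search key = do
  table <- readIORef tableRef
  case Map.lookup key table of
    Just rs -> return rs                          -- table hit: reuse earlier results
    Nothing -> do rs <- search key                -- compute the sub-search once
                  modifyIORef tableRef (Map.insert key rs)
                  return rs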
The second problem is caused by multiple App rules (men-
tioned in Section 6). The type checking rule App (Figure 2)
leads to many possible search rules: one search rule for every
possible solution to the side condition s1 ⊑ s2.
The more deeply applications are nested, and the more
deeply nested code types appear in the target type, the more
this branching blows up. By observing the structure of the
two search rules appCase1 and appCase2, we see that they
have much in common. Clever programming can merge all
these rules into a single case, thus drastically reducing the
branching, and hence the size of the search space. The trick
is to perform sub-searches with s2 instantiated to a fresh type
variable, and to maintain a list of constraints on all type
variables. Thereafter, whenever the algorithm calls for an
equality check on two types, the algorithm employs unification
on types. A failure to unify becomes a failure in
the search. The constraints are initially of the form t ⊑ α,
where α is a type variable and t is a base type. These inequality
constraints can either be strengthened to stronger
inequalities or collapsed to equalities upon unification.
The following single App rule now encompasses the previous
two and many others. The unification, new variable
generation, and constraint maintenance, is all handled in
the underlying monadic machinery. Thus the structure of
the algorithm changes only slightly.
appCase _ n sig phi (t1, t2, e @ (EA e0 e1)) =
  trace "App" n e t2
  (do let s1 = dom sig e0      -- Compute domain type of e0
      e2 <- ...                -- sub-search: annotate e0 at type (Arr s2 t2)
      e3 <- ...                -- sub-search: annotate e1 at type s2
      return (EA e2 e3))
appCase _ _ _ _ _ = ...        -- fail on anything that is not an application
The elided step that introduces s2 generates a new type variable s2 and
adds the constraint s1 ⊑ s2.
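A minimal sketch of that constraint discipline, in Haskell; the names here (TVar, Constraint, unifyVar) are illustrative assumptions, and eraseT is the erasure function sketched earlier (with an Eq instance on T assumed).

import qualified Data.Map as Map

type TVar = Int

-- A type variable may carry the constraint that it must be an annotated
-- extension of a given base type (t ⊑ α).
newtype Constraint = ExtensionOf T
type Constraints   = Map.Map TVar Constraint

-- Unifying a constrained variable with a concrete annotated type checks the
-- constraint: the concrete type must erase to the recorded base type.
unifyVar :: Constraints -> TVar -> T -> Maybe Constraints
unifyVar cs v t =
  case Map.lookup v cs of
    Nothing              -> Just cs
    Just (ExtensionOf b)
      | eraseT t == b    -> Just (Map.delete v cs)   -- constraint discharged
      | otherwise        -> Nothing                  -- unification failure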
Both of these changes, dynamic programming, and use
of unification, have been implemented by reworking the underlying
monad upon which the search is implemented. We
have observed that the traversed search space is notably
smaller. We are currently working on quantifying precise complexity
bounds for an algorithm which employs these two
techniques.
We believe the importance of this work lies in the fact that it describes
a new and very simple framework for describing BTAs
with such advanced features as Higher-order functions, Polymorphism,
Polyvariance, Partially static first-order data,
Higher-order partially static functions, and an Unbounded
number of stages. We believe the potential for tuning the
algorithm into a highly efficient one remains.
15. RELATED WORK
Mogensen [16], Bondorf [2, 3] and Consel [5] present BTAs
for higher order languages which are based upon abstract
interpretation.
Three papers on BTAs for typed lambda calculi are closely
related to the work presented here because they express
BTA as a type inference problem. Nielson and Nielson
[18] give a BTA based upon type inference for the two-level
simply-typed lambda-calculus. Given simple binding
times (compile-time or run-time) for all free variables in a
typed expression, they show that their algorithm computes
a unique expression with an optimal set of annotations that
minimizes run-time computation. The complexity of their
algorithm is worst-case exponential in the size of the expression.
Gomard [11] presents an O(n 3 ) algorithm for annotating
an untyped lambda-calculus term in a similar manner.
His work is interesting because he uses a crude type system
to perform BTA for an un-typed language. The type
system treats all second stage terms as having a single type
"code" and all other terms (except functions) as having type
"value", and uses arrows over these simple types to specify
binding-times for functions.
Henglein [12] presents an efficient algorithm which uses a
similar trick for expressing binding-time in a similarly simple
type system. His algorithm has complexity O(n α(n, n)),
where α is an inverse of Ackermann's function, and α(n, n)
is for all intents and purposes a small constant. His algorithm
uses a constraint solving system to determine where
annotations should be placed.
While our algorithm may look like a type inference prob-
lem, it is really a search based algorithm. It is easy to identify
both the search space, and the search strategy employed.
The possibility of multiple solutions also separates it from
type inference.
The use of types as BTA specifications can be found in the
work of Le Meur, Lawall, and Consel[15]. They describe a
module based system for writing binding time specifications
for programs in C. The system allows the programmer to
name multiple binding time specifications for each function
and global variable in a module, and to use these named
specifications in other specifications. Their specifications
are written as stage-annotated types. The module system
can then propagate this information to multiple use-sites of
the annotated functions, allowing different specializations at
different occurrences. Unlike our use of annotated types as
binding time specifications, theirs is limited to first order
functions.
Many BTAs are based upon abstract analysis. BTAs for
partially static data have been presented by Launchbury[13]
and Mogensen[17], and polyvariant BTAs have been presented
by Consel [6, 1], Dussart et al. [7], and Rytz & Gengler
[19], amongst others.
Glück and Jørgensen [8, 9] pioneered the use of
multi-level languages. Their work generalizes a standard abstract
interpretation technique to multiple levels. We show
that search based techniques can also generalize to multiple
levels.
The techniques described here incorporate all these features
in a simple framework based upon search.
16. CONTRIBUTIONS
In this paper we have described a radically new approach
to the construction of binding time analyses. The approach
is based upon exploring the search space of well-annotated
extensions of well-typed terms. Type information is an effective
means to prune the search space, and makes the algorithm
practical. The algorithm is surprisingly simple yet
supports such advanced features as Higher-order functions,
Polymorphism, Polyvariance, Partially static first-order data,
Higher-order partially static data, and an Unbounded number
of stages. The complexity of the algorithm has not been
fully analyzed and remains as future work.
The algorithm is based upon the use of code-annotated extensions
of base-types as a binding time specification. Such
types are a rich and expressive mechanism which subsumes
all other mechanisms known to the authors for the expression
of binding-time specifications.
We have argued that an integrated system combining both
manually staged functions, and automatically staged func-
tions, eases the burden on the programmer. Yet, it allows
the fine control that only manually staged systems have supported
until now. This includes the use of semantic information
in staged versions that could never be fully automated.
Our proposed system integrates BTA as a semantic part
of the language, and does not depend upon the intervention
of some external tool, whose semantics are separate from
the language.
17. ACKNOWLEDGMENTS
The work described here was supported by NSF Grant
CCR-0098126, the M.J. Murdock Charitable Trust, and the
Department of Defense. The authors would also like to
thank the students of the class CSE583 - Fundamentals of
Staged Computation, in the winter of 2002, who participated
in many lively discussions about the uses of staging.
18.
--R
Fixpoint computation for polyvariant static analyses of higher-order applicative programs
Automatic autoprojection of higher order recursive equations.
A Computational Logic.
Binding time analysis for higher order untyped functional languages.
Polyvariant binding-time analysis for applicative languages
Polyvariant constructor specialisation.
An automatic program generator for multi-level specialization
A partial evaluator for untyped lambda calculus.
Projection Factorizations in Partial Evaluation.
Deferred compilation: The automation of run-time code generation
Towards bridging the gap between programming language and partial evaluation.
Binding Time Analysis for Polymorphically Typed Higher Order Languages.
Partially static structures in a self-applicable partial evaluator
Automatic binding time analysis for a typed lambda-calculus
A polyvariant binding time analysis.
Advanced Functional Programming
Accomplishments and research challenges in meta-programming
Dsl implementation using staging and monads.
Comprehending monads.
The essence of functional programming.
Monads for functional programming.
--TR
Automatic binding time analysis for a typed MYAMPERSANDlgr;-calculus
Comprehending monads
Binding time analysis for high order untyped functional languages
Automatic autoprojection of higher order recursive equations
Efficient type inference for higher-order binding-time analysis
The essence of functional programming
Polyvariant binding-time analysis for applicative languages
Fixpoint computation for polyvariant static analyses of higher-order applicative programs
Polyvariant constructor specialisation
Multi-stage programming with explicit annotations
An Automatic Program Generator for Multi-Level Specialization
DSL implementation using staging and monads
Towards bridging the gap between programming languages and partial evaluation
Accomplishments and Research Challenges in Meta-programming
Efficient Multi-level Generating Extensions for Program Specialization
Binding Time Analysis for Polymorphically Typed Higher Order Languages
Monads for Functional Programming
Deferred Compilation: The Automation of Run-Time Code Generation | type-directed search;binding time analysis;staging |
568177 | On obtaining Knuth, Morris, and Pratt''s string matcher by partial evaluation. | We present the first formal proof that partial evaluation of a quadratic string matcher can yield the precise behaviour of Knuth, Morris, and Pratt's linear string matcher.Obtaining a KMP-like string matcher is a canonical example of partial evaluation: starting from the naive, quadratic program checking whether a pattern occurs in a text, one ensures that backtracking can be performed at partial-evaluation time (a binding-time shift that yields a staged string matcher); specializing the resulting staged program yields residual programs that do not back up on the text, a la KMP. We are not aware, however, of any formal proof that partial evaluation of a staged string matcher precisely yields the KMP string matcher, or in fact any other specific string matcher.In this article, we present a staged string matcher and we formally prove that it performs the same sequence of comparisons between pattern and text as the KMP string matcher. To this end, we operationally specify each of the programming languages in which the matchers are written, and we formalize each sequence of comparisons with a trace semantics. We also state the (mild) conditions under which specializing the staged string matcher with respect to a pattern string provably yields a specialized string matcher whose size is proportional to the length of this pattern string and whose time complexity is proportional to the length of the text string. Finally, we show how tabulating one of the functions in this staged string matcher gives rise to the 'next' table of the original KMP algorithm.The method scales for obtaining other linear string matchers, be they known or new. | Introduction
Obtaining Knuth, Morris, and Pratt's linear string matcher out of a naive
quadratic string matcher is a traditional exercise in partial evaluation:
run match ⟨pat, txt⟩ = res
run PE ⟨match, pat⟩ = match_pat
run match_pat ⟨txt⟩ = res
Given a static pattern, the partial evaluator should perform all backtracking
statically to produce a specialized matcher that traverses the text in linear
time.
Initially, the exercise was proposed by Futamura to illustrate Generalized
Partial Computation, a form of partial evaluation that memoizes the result of
dynamic tests when processing conditional branches [10]. 1 Subsequently, Consel
and Danvy pointed out that a binding-time improved (i.e., staged) quadratic
string matcher could also be specialized into a linear string matcher, using a
standard, Mix-style partial evaluator [7]. A number of publications followed,
showing either a range of binding-time improved string matchers or presenting
a range of partial evaluators integrating the binding-time improvement [1, 9, 11,
12, 15, 23, 24, 25].
After 15 years, however, we observe that
1. the KMP test, as it is called, appears to have had little impact, if any, on
the development of algorithms outside the field of partial evaluation, and
that
2. except for Grobauer and Lawall's recent work [13], issues such as the
precise characterization of time and space of specialized string matchers
have not been addressed.
The goal of our work is to address the second item, with the hope to contribute
to remedying the first one, in the long run.
1.1 This work
We relate the original KMP algorithm [18] to a staged quadratic string matcher
that keeps one character of negative information (essentially Consel and Danvy's
original solution [7]; there are many ways to stage a string matcher [1, 13], and
we show one in Appendix A). Our approach is semantic rather than algorithmic
or intuitive:
1 For example, the dynamic test can be a comparison between a static (i.e., known) character
in the pattern and a dynamic (i.e., unknown) character in the text. In one conditional
branch, the characters match and we statically know what the dynamic character is. In the
other branch, the characters mismatch, and we statically know what the dynamic character
is not. The former is a piece of positive information, and the latter is a piece of negative
information.
. We formalize an imperative language similar to the one in which the KMP
algorithm is traditionally specified, and we formalize the subset of Scheme
in which the staged matcher is specified.
. We then present two trace semantics that account for the sequence of
indices corresponding to the successive comparisons between characters in
the pattern and in the text, and we show that the KMP algorithm and
the staged matcher share the same trace.
. We analyze the binding times of the staged matcher using an off-the-shelf
binding-time analysis (that of Similix [3, 4]), and we observe that the only
dynamic comparisons are the ones between the static pattern and the dynamic
text. Therefore, specializing this staged string matcher preserves its
trace, given an offline program specializer (such as Similix's) that (1) computes
static operations at specialization time and (2) generates a residual
program where dynamic operations do not disappear, are not duplicated,
and are executed in the same order as in the source program. We also
assess the size of residual programs: it is proportional to the size of the
corresponding static patterns. 2
This correspondence and preservation of traces shows that a staged matcher
that keeps one character of negative information corresponds to and specializes
into (the second half of) the KMP algorithm, precisely. It also has two
corollaries:
1. A staged matcher that does not keep track of negative information, as
in Sørensen, Glück, and Jones's work on positive supercompilation [25],
does not give rise to the KMP algorithm. Instead, we observe that such a
staged matcher gives rise to Morris and Pratt's algorithm [5, Chapter 6],
which is also linear but slightly less efficient.
2. A staged string matcher that keeps track of all the characters of negative
information accumulated during consecutive character mismatches, as in
Futamura's Generalized Partial Computation [9, 11], Glück and Klimov's
supercompiler [12], and Jones, Gomard, and Sestoft's textbook [15, Figure
12.3] does not give rise to the KMP algorithm either. The corresponding
residual programs are slightly more efficient than the KMP algorithm,
but their size is not linearly proportional to the length of the pattern.
(Indeed, Grobauer and Lawall have shown that the size of these residual
programs is bounded by |pat| · |Σ|, where pat denotes the pattern and Σ
denotes the alphabet [13].)
That said,
(a) there is more to linear string matching than the KMP: for example, in their
handbook on exact string matching [5], Charras and Lecroq list many
different algorithms; and
We follow the tradition of counting the size of integers as units. For example, a table of
m integers has size m log n if these integers lie in the interval [0, n - 1], but we consider that
it has size m.
(b) many naive string matchers exist that can be staged to yield a variety of
linear string matchers, e.g., Boyer and Moore's [1].
We observe that over half of the algorithms listed by Charras and Lecroq can
be obtained as specialized versions of staged string matchers. Proving this
observation can be done in the same manner as in the present article for the
KMP. Furthermore, we can obtain new linear string matchers by exploring the
variety of staged string matchers.
1.2
Overview
The rest of this article is organized as follows. In Section 2, we specify an
operational semantics for the imperative language used by Knuth, Morris, and
Pratt, and in Section 3, we specify an operational semantics for a subset of
Scheme [16]. In each of these sections, we specify:
1. the abstract syntax of the language,
2. its expressible values,
3. its evaluation rules,
4. the string matcher,
5. the semantics of the string matcher, and
6. an abstract semantics of the string matcher.
The point of the abstract semantics is to account for the sequence of comparisons
between the pattern and the text in Knuth, Morris, and Pratt's algorithm
(the "imperative matcher") and in our staged string matcher (the "functional
matcher"). Lemmas 1 and 2 show that the abstract semantics faithfully account
for the comparisons between the pattern and the text in the string matchers,
and Theorem 1 establishes their correspondence:
          abstract                                        abstract
    imperative matcher   <----  Theorem 1  ---->    functional matcher
       (Section 2.6)           (Section 4)             (Section 3.6)
             ^                                               ^
             |  (Section 2.6)                                |  (Section 3.6)
             |                                               |
          concrete                                        concrete
    imperative matcher                               functional matcher
       (Section 2.5)                                    (Section 3.5)
In Section 4, we show that the imperative matcher and the functional matcher
give rise to the same sequence of comparisons. In Section 5, we investigate the
result of specializing the functional matcher with respect to a pattern string using
program specialization and then using a simple form of data specialization.
Section 6 concludes.
2 The KMP, imperatively
In this section, we describe the imperative language in which the imperative
string matcher is specified. The language is canonical, with constant and mutable
identifiers and with immutable arrays. We then present the imperative
string matcher and its meaning. Finally, we specify a trace semantics of the
imperative matcher.
2.1 Abstract syntax
A program consists of statements s ∈ Stm, expressions e ∈ Exp, numerals
num ∈ Num, constant identifiers c ∈ Cid, mutable identifiers x ∈ Mid, array
identifiers a ∈ Aid, and operators opr ∈ Opr.
s ::= x:=e | s;s | if e then s else s fi |
while e do s od | return e
e ::= num | x | c | a[e] | e opr e | e and e
2.2 Expressible values
A value is an integer, a boolean, or a character in an alphabet:
2.3 Rules
In the following rules, e # Exp, v ,
2.3.1 Auxiliary constructs
The language includes numeric operators and a comparison operator over characters
2.3.2 Stores
A store is a total function:
2.3.3 Constants
Constants are defined with a total function:
2.3.4 Arrays
Arrays are defined with a partial function:
where N denotes the set of natural numbers including zero. Indexing arrays
starts at zero, and indexing out of bounds is undefined.
2.3.5 Relations
The (big-step) evaluation relation for expressions reads as
and the (small-step) evaluation relation for statements reads as
If r ∈ Stm, the computation of s is in progress. If r ∈ Unit, the computation
of s completed normally. If r ∈ Z, the computation of s aborted with a return.
We choose a big-step evaluation relation for expressions because we are not
interested in intermediate evaluation steps. We choose a small-step evaluation
relation for statements because we want to monitor the progress of imperative
computations.
2.3.6 Expressions
(var)
(array)
2.3.7 Statements
(assign)
#while e do s od, # I #unit , #
#while e do s od, # I #s;while e do s od, #
A, C #return e, # I #n, #
2.4 The string matcher
The KMP algorithm consists of two parts: the initialization of the next table
and the actual string matching [18].
2.4.1 Initialization of the next table
The first part builds a next table for the pattern satisfying the following definition.
Definition 1 (Next table) The next table is an array of indices with the same
length as the pattern: next[j] is the largest i less than j such that
pat[j − i] ... pat[j − 1] = pat[0] ... pat[i − 1] and pat[i] ≠ pat[j]; if no such i exists,
then next[j] is −1.
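For concreteness, the following Scheme procedure (the procedure and its name are ours, not part of the formal development) computes a next table by a direct, naive reading of Definition 1:

;; A direct reading of Definition 1: next[j] is the largest i < j such that
;; pat[j-i .. j-1] = pat[0 .. i-1] and pat[i] differs from pat[j]; -1 otherwise.
(define (naive-next-table pat)
  (let* ([lpat (string-length pat)]
         [next (make-vector lpat -1)])
    (define (border? i j)   ;; does pat[0 .. i-1] = pat[j-i .. j-1] hold?
      (let loop ([d 0])
        (or (= d i)
            (and (eq? (string-ref pat d) (string-ref pat (+ (- j i) d)))
                 (loop (+ d 1))))))
    (do ([j 0 (+ j 1)]) ((= j lpat) next)
      (do ([i 0 (+ i 1)]) ((= i j))
        (if (and (border? i j)
                 (not (eq? (string-ref pat i) (string-ref pat j))))
            (vector-set! next j i))))))

For example, (naive-next-table "abac") yields the vector #(-1 0 -1 1). The initialization pseudocode of Figure 1 below computes the same table more efficiently.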
The initialization of the next table is described by the pseudocode in
Figure 1; we assume that pat, txt, lpat, and ltxt are given in an initial
store σ in which pat denotes the pattern and lpat its length, and in which txt
denotes the text and ltxt its length.
j := 0; t := -1; next[0] := -1;
while j < lpat - 1 do
  while t >= 0 and pat[j] != pat[t] do
    t := next[t]
  od;
  j := j + 1; t := t + 1;
  if pat[j] = pat[t]
    then next[j] := next[t]
    else next[j] := t
od
Figure 1: Initialization of the next table
2.4.2 String matching
The second part traverses the text using the next table as described by the
program in Figure 2, which is written in the imperative language specified in
Sections 2.1, 2.2, and 2.3. In this second part, lpat and ltxt are constant
identifiers, j and k are mutable identifiers, and pat, txt and next are array
identifiers. (pat denotes the pattern and lpat its length, and txt denotes the
text and ltxt its length.)
3 We write 'pseudocode' instead of 'code' because in the language of Sections 2.1, 2.2, and
2.3, arrays are immutable. We could easily extend the language to support mutable arrays,
but doing so would clutter the rest of our development with side conditions expressing that
the next table is not updated in the second part of the KMP algorithm. We have therefore
chosen to simplify the language.
while j<lpat and k<ltxt do
  while j >= 0 and pat[j] != txt[k] do
    j := next[j]
  od;
  j := j + 1;
  k := k + 1
od;
if j >= lpat then return k-j else return -1
Figure 2: The imperative string matcher
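For readers who prefer executable code, the following Scheme procedure (ours) transcribes the control flow of Figure 2; it assumes that the next table is given as a vector satisfying Definition 1:

;; A Scheme transcription (ours) of the imperative matcher of Figure 2.
(define (kmp-match pat next txt)
  (let ([lpat (string-length pat)]
        [ltxt (string-length txt)])
    (let loop ([j 0] [k 0])
      (cond [(or (= j lpat) (= k ltxt))
             (if (>= j lpat) (- k j) -1)]
            [(eq? (string-ref pat j) (string-ref txt k))
             (loop (+ j 1) (+ k 1))]
            [(= (vector-ref next j) -1)
             (loop 0 (+ k 1))]
            [else
             (loop (vector-ref next j) k)]))))

For example, (kmp-match "abac" '#(-1 0 -1 1) "xxabacyy") evaluates to 2, and the result is -1 when the pattern does not occur in the text.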
In the rest of this article, we only consider the second part of the KMP
algorithm and we refer to it as the imperative matcher.
2.5 Semantics of the imperative matcher
We now consider the meaning of the imperative matcher. We state without
proof that the imperative matcher terminates and accesses the pattern, the
text, and the next table within their bounds.
What we are after is the sequence of indices corresponding to the successive
comparisons between characters in the pattern and in the text. Because the
imperative language is deterministic and the KMP algorithm is a correct string
matcher, this sequence exists and is unique.
Definition 2 (Comparison) An imperative comparison for the string matcher
of Section 2.4 is a derivation tree of the form
derivation tree.
Definition 3 (Index) The following function maps an imperative comparison
into the corresponding pair of indices in the pattern and the text:
index I
Definition 4 (Computation) An imperative computation is a derivation of
the imperative matcher
where the premises S 0 , S 1 , ., Sn-1 are other derivation trees, A contains the
pattern, the text, and the next table, C contains the length of the pattern and
the text, s 0 is the imperative matcher, and # 0 is the initial state mapping all
identifiers to zero.
A computation is said to be complete if r ∈ {−1} ∪ N.
In an imperative computation, each premise might contain imperative com-
parisons. We want to build the sequence of indices corresponding to the successive
comparisons between characters in the pattern and in the text. Applying
the index function to each of the imperative comparisons in each premise gives
such indices. We collect them in a sequence of non-empty sets of pairs of indices
as follows.
Definition 5 (Trace) Let S0, S1, ..., Sn−1 be the premises of an imperative
computation. Let ci be the set of imperative comparisons in Si, for 0 ≤ i < n.
The imperative trace is the sequence p0 · p1 · · · pn−1, where pi = ε if ci is empty
and pi = {index_I(c) | c ∈ ci} otherwise, and where ε is the neutral element for concatenation.
In Section 2.6, Lemma 1 shows that each of the premises in Definition 5 contains
at most one imperative comparison. Therefore, for all i, p i is either empty or a
singleton set. The imperative trace is thus a sequence of singleton sets, each of
which corresponds to the successive comparisons of characters in pat and txt .
We choose three program points: one for checking whether we are at the end
of the pattern or at the end of the text, one for comparing a character in the
pattern and a character in the text, and one for reinitializing the index in the
pattern (i.e., for 'shifting the pattern' [18, page 324]) based on the next table.
Definition 6 (Program points) The imperative program points Match I ,
Compare I
and Shift I
are defined as the following sets of configurations:
Match
Compare I
Shift I
where
while j<lpat and k<ltxt do
while j >= 0 and pat[j] != txt[k] do
do
The set of imperative program points is defined as the sum
Compare I
2.6 Abstract semantics
Definition 7 (Abstract states) The set of abstract imperative states is the
sum of the set of abstract imperative final states and the set of abstract imperative
intermediate states:
States
I
States int
I
States fin
I
States int
where match, compare and shift are injection tags.
Definition 8 (Program points and abstract states) We define the correspondence
between abstract imperative states and the union of imperative program
points and final results by the following relation # I # States int
I - (PP I #
(match, j, Match I if
(compare, j, Compare I
(shift, j,
Definition 9 (Abstract matcher) Let pat, txt ∈ Σ* and let next be the next
table for pat. Then the abstract imperative matcher is the following total function
↦_I : States_I^int → States_I:
(match, j, k) ↦_I k − j, if j = |pat|
(match, j, k) ↦_I −1, if j < |pat| and k = |txt|
(match, j, k) ↦_I (compare, j, k), if j < |pat| and k < |txt|
(compare, j, k) ↦_I (match, j + 1, k + 1), if pat[j] = txt[k]
(compare, j, k) ↦_I (shift, j, k), if pat[j] ≠ txt[k]
(shift, j, k) ↦_I (compare, next[j], k), if next[j] ≠ −1
(shift, j, k) ↦_I (match, 0, k + 1), if next[j] = −1
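For concreteness, here is how the abstract imperative matcher unfolds on a small example of ours: for pat = "ab" (whose next table is next[0] = −1 and next[1] = 0) and txt = "aab", the abstract states are
(match, 0, 0) ↦_I (compare, 0, 0) ↦_I (match, 1, 1) ↦_I (compare, 1, 1) ↦_I (shift, 1, 1) ↦_I (compare, 0, 1) ↦_I (match, 1, 2) ↦_I (compare, 1, 2) ↦_I (match, 2, 3) ↦_I 1,
i.e., the pattern is found at position 1 of the text.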
Definition 10 (Last) The function last_I yields the last element of a non-empty
sequence of abstract states:
last I
I # States I
last I
Definition 11 (Abstract computations) Let pat, txt ∈ Σ* and let ↦_I be
the corresponding abstract imperative matcher. Then the set of abstract imperative
computations, AbsComp_I ⊆ States_I*, is the least set closed under
(1) (match, 0, 0) ∈ AbsComp_I and
(2) S ∈ AbsComp_I and last_I(S) ↦_I p imply S · p ∈ AbsComp_I.
S is said to be complete iff last_I(S) ∈ States_I^fin.
Lemma 1 (Computations are faithful) Abstract imperative computations represent
imperative computations faithfully. In other words:
1. An imperative computation starts with an initial derivation that either does
not contain any program points or (1) does not contain any program points
apart from the final configuration, (2) does not contain any comparisons,
and (3) the final configuration is a program point P # Match I such that
(match, 0, I P .
2. Whenever the last configuration of an imperative computation is an imperative
program point, P , related to an abstract state, S, by # I , there exists
an imperative program point or final result, P # , and an abstract state, S # ,
such that the following holds: (1) there is a derivation from P to P # that
does not contain other program points, (2) S # I S # , (3) S # I P # , and (4)
the derivation contains a comparison, C, if and only if
and then index I
Proof: Part 1 is straightforward to verify. For Part 2 we must divide by cases
as dictated by the abstract matcher. We show just a single case: P # Match I ,
k. The other cases are similar.
The derivation is
#while j<lpat and k<ltxt do . od, # I #unit , #
while j<lpat and k<ltxt do . od;
else return
# I #if j >= lpat then return k-j else return
A, C #if j >= lpat then return k-j else return
I #return k-j, #
(var)
Since (match, j, I k-j, we also have k-j # I n. Furthermore, we observe
that the derivation contains no other program points and no comparisons. #
Since at most one comparison exists for each step in the derivation, the
imperative trace of Definition 5 is a sequence of singleton sets. Moreover, since
the imperative matcher terminates, the abstract matcher does as well.
Definition 12 (Abstract trace) An abstract imperative trace maps a sequence
of abstract states to another sequence of abstract states:
trace_I : States_I* → States_I*, where trace_I(S) is the subsequence of S consisting
of the states of the form (compare, j, k).
The following corollary of Lemma 1 shows that abstract imperative traces
represent imperative traces.
Corollary 1 (Imperative traces are faithful) Let pat, txt ∈ Σ* be given,
be the imperative trace for a complete imperative
computation, and let (compare, j #
be the abstract imperative trace for the corresponding complete abstract imperative
computation. Then
In words, the abstract trace faithfully represents the imperative trace.
2.7 Summary
We have formally specified an imperative string matcher implementing the KMP
algorithm, and we have given it a trace semantics accounting for the indices at
which it successively compares characters in the pattern and in the text. In the
next section, we turn to a functional string matcher and we treat it similarly.
3 The KMP, functionally
In this section, we describe the functional language in which the functional
string matcher is specified. The language is a first-order subset of Scheme (tail-
recursive equations). We then present the functional string matcher and its
meaning. Finally, we specify a trace semantics of the functional matcher.
3.1 Abstract syntax
A program consists of serious expressions e ∈ Exp, trivial expressions t ∈ Triv,
operators opr ∈ Opr, numerals num ∈ Num, value identifiers x ∈ Vid, function
identifiers f ∈ Fid, and sequences of value identifiers in Vid*.
3.2 Expressible values
A value is an integer, a boolean, a character, or a string:
3.3 Rules
3.3.1 Auxiliary constructs
The language includes numeric operators, a comparison operator over characters
and a string-indexing operator.
c is the i'th character in s.
Indexing strings starts at zero, and indexing out of bounds is undefined.
3.3.2 Environments
Expressions are evaluated in a value environment # Venv and a function
environment
3.3.3 Relations
The (big-step) evaluation relation for trivial expressions reads as
and the (small-step) evaluation relation for serious expressions reads as
#e, #F #r , #
We choose a big-step evaluation relation for trivial expressions because we
are not interested in intermediate evaluation steps. We choose a small-step evaluation
relation for serious expressions because we want to monitor the progress
of computations.
3.3.4 Programs
At the top level, a program is evaluated in an initial function environment # 0
holding the predefined functions and an initial value environment # 0 holding the
predefined values. The initial configuration of a program
is thus #e 0 , # 0 # in the function environment #:
.,
3.3.5 Trivial expressions
(var)
3.3.6 Serious expressions
3.4 The string matcher
We consider the string matcher of Figure 3 (motivated in Appendix A), which
is written in the subset of Scheme specified in Sections 3.1, 3.2, and 3.3. The
initial environment # 0 binds pat and lpat to the pattern and its length, and txt
and ltxt to the text and its length. None of pat, txt, lpat and ltxt are bound
in the program, and therefore they denote initial values throughout.
In the rest of this article, we refer to this string matcher as the functional
matcher.
3.5 Semantics of the functional matcher
We now consider the meaning of the functional matcher. What we are after
is the sequence of indices corresponding to the successive comparisons between
characters in the pattern and in the text.
Definition 13 (Comparison) A functional comparison for the string matcher
of Section 3.4 is a derivation tree of the form
where T denotes another derivation tree.
(letrec ([match
          (lambda (j k)
            (if (= j lpat)
                (- k j)
                (if (= k ltxt)
                    -1
                    (compare j k))))]
         [compare
          (lambda (j k)
            (if (eq? (string-ref pat j) (string-ref txt k))
                (match (+ j 1) (+ k 1))
                (if (= 0 j)
                    (match 0 (+ k 1))
                    (rematch j k 0 1))))]
         [rematch
          (lambda (j k jp kp)
            (if (= kp j)
                (if (eq? (string-ref pat jp) (string-ref pat j))
                    (if (= jp 0)
                        (match 0 (+ k 1))
                        (rematch j k 0 (+ (- kp jp) 1)))
                    (compare jp k))
                (if (eq? (string-ref pat jp) (string-ref pat kp))
                    (rematch j k (+ jp 1) (+ kp 1))
                    (rematch j k 0 (+ (- kp jp) 1)))))])
  (match 0 0))
Figure 3: The functional matcher
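The matcher of Figure 3 is an open expression: pat, lpat, txt, and ltxt are free in it and are expected to be bound in the initial environment described above. A minimal top level providing such an environment (the definitions below are ours; in Section 5, the same effect is obtained by wrapping the matcher in a main function) is:

(define pat "abac")        (define lpat (string-length pat))
(define txt "xxabacyy")    (define ltxt (string-length txt))

In this environment, evaluating the letrec expression of Figure 3 yields 2, the position at which the pattern occurs in the text; it yields -1 when the pattern does not occur.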
Definition 14 (Index) The following function maps a functional comparison
into the corresponding pair of indices in the pattern and the text:
index F
Definition 15 (Computation) A functional computation is a derivation of
the functional matcher
where the premises are other derivation trees, # is the initial
function environment, e 0 is the functional matcher, and # 0 is a value environment
mapping pat, txt, lpat, and ltxt to the pattern, the text, and their lengths,
respectively, and all other value identifiers to zero.
A computation is said to be complete if r ∈ {−1} ∪ N.
In a functional computation, each premise might contain functional compar-
isons. We want to build the sequence of indices corresponding to the successive
comparisons between characters in the pattern and in the text. Applying the
index function to each of the functional comparisons in each premise gives such
indices. We collect them in a sequence of non-empty sets of pairs of indices as
follows.
Definition 16 (Trace) Let E0, E1, ..., En−1 be the premises of a functional
computation. Let ci be the set of functional comparisons in Ei, for 0 ≤ i < n.
The functional trace is the sequence p0 · p1 · · · pn−1, where pi = ε if ci is empty
and pi = {index_F(c) | c ∈ ci} otherwise.
In Section 3.6, Lemma 2 shows that each of the premises in Definition 16 contains
at most one functional comparison. Therefore, for all i, p i is either empty or a
singleton set. The functional trace is thus a sequence of singleton sets, each of
which corresponds to the successive comparisons of characters in pat and txt .
We choose three program points: one for checking whether we are at the end
of the pattern or at the end of the text, one for comparing a character in the
pattern and a character in the text, and one for matching the pattern, and a
prefix of a su#x of the pattern. These program points correspond to the bodies
of the match, compare and rematch functions.
Definition 17 (Program points) The functional program points Match F ,
Compare F
and Rematch F are defined as the following sets of configurations:
Match
Compare F
Rematch
where M is the body of the match function, C is the body of the compare function,
and R is the body of the rematch function.
The set of functional program points is defined as the sum
Compare F +Rematch F .
3.6 Abstract semantics
Definition 18 (Abstract states) The set of abstract functional states is the
sum of the set of abstract functional final states and the set of abstract functional
intermediate states:
F
States int
F
States fin
F
States int
F
(rematch -N- N - N -N)
where match, compare and rematch are injection tags.
Definition 19 (Program points and abstract states) We define the correspondence
between abstract functional states and the union of functional program
points and final results by the following relation #F # States int
(match, j,
(compare, j,
(rematch, j, k, jp, kp) #F #e, # Rematch F
Definition 20 (Abstract matcher) Let pat, txt ∈ Σ*. Then the abstract
functional matcher is the following total function ↦_F : States_F^int → States_F:
(match, j, k) ↦_F k − j, if j = |pat|
(match, j, k) ↦_F −1, if j < |pat| and k = |txt|
(match, j, k) ↦_F (compare, j, k), if j < |pat| and k < |txt|
(compare, j, k) ↦_F (match, j + 1, k + 1), if pat[j] = txt[k]
(compare, j, k) ↦_F (match, 0, k + 1), if pat[j] ≠ txt[k] and j = 0
(compare, j, k) ↦_F (rematch, j, k, 0, 1), otherwise
(rematch, j, k, jp, kp) ↦_F (match, 0, k + 1), if kp = j, pat[jp] = pat[j], and jp = 0
(rematch, j, k, jp, kp) ↦_F (rematch, j, k, 0, kp − jp + 1), if kp = j, pat[jp] = pat[j], and jp > 0
(rematch, j, k, jp, kp) ↦_F (compare, jp, k), if kp = j and pat[jp] ≠ pat[j]
(rematch, j, k, jp, kp) ↦_F (rematch, j, k, jp + 1, kp + 1), if kp < j and pat[jp] = pat[kp]
(rematch, j, k, jp, kp) ↦_F (rematch, j, k, 0, kp − jp + 1), if kp < j and pat[jp] ≠ pat[kp]
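On the same small example as before (ours): for pat = "ab" and txt = "aab", the abstract functional matcher unfolds as
(match, 0, 0) ↦_F (compare, 0, 0) ↦_F (match, 1, 1) ↦_F (compare, 1, 1) ↦_F (rematch, 1, 1, 0, 1) ↦_F (compare, 0, 1) ↦_F (match, 1, 2) ↦_F (compare, 1, 2) ↦_F (match, 2, 3) ↦_F 1.
Compared with the abstract imperative computation, the single shift state is replaced by a single rematch state; the compare states, and therefore the traces, coincide, which is what Theorem 1 establishes in general.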
Definition 21 (Last) The function last F yields the last element of a non-empty
sequence of abstract states:
last
last
Definition 22 (Abstract computations) Let pat, txt ∈ Σ* and let ↦_F be
the corresponding abstract functional matcher. Then the set of abstract functional
computations, AbsComp_F ⊆ States_F*, is the least set closed under
(1) (match, 0, 0) ∈ AbsComp_F and
(2) S ∈ AbsComp_F and last_F(S) ↦_F p imply S · p ∈ AbsComp_F.
S is said to be complete iff last_F(S) ∈ States_F^fin.
Lemma 2 (Computations are faithful) Abstract functional computations represent
functional computations faithfully. In other words:
1. A functional computation starts with an initial derivation that either does
not contain any program points or (1) does not contain any program points
apart from the final configuration, (2) does not contain any comparisons,
and (3) the final configuration is a program point P # Match F such that
(match, 0,
2. Whenever the last configuration of a functional computation is a functional
program point, P , related to an abstract state, S, by #F , there exists
a functional program point or final result, P # , and an abstract state, S # ,
such that the following holds: (1) there is a derivation from P to P # that
does not contain other program points, (2) S #F S # , (3) S #F P # , and (4)
the derivation contains a comparison, C, if and only if
and then indexF
Proof: Part 1 is straightforward to verify. For Part 2 we must divide by cases
as dictated by the abstract matcher. We show just a single case: P # Match F ,
|txt|. The other cases are similar.
The derivation is
(var)
(var)
#(compare
(app)
where C denotes the body of the compare function, as in Definition 17.
Since (match, j,
k, we also have that (compare, j, k) corresponds to the final configuration in
the derivation. Furthermore, we observe that the derivation contains no other
program points and no comparisons. #
Since at most one comparison exists for each step in the derivation, the
functional trace of Definition 16 is a sequence of singleton sets. Moreover, if one
of the matchers terminates, the other does as well.
Definition 23 (Abstract trace) An abstract functional trace maps a sequence
of abstract states to another sequence of abstract states:
trace_F : States_F* → States_F*, where trace_F(S) is the subsequence of S consisting
of the states of the form (compare, j, k).
The following corollary of Lemma 2 shows that abstract functional traces
represent functional traces.
Corollary 2 (Functional traces are faithful) Let pat, txt ∈ Σ* be given,
be the functional trace for a complete functional
computation, and let (compare, j #
be the abstract trace for the corresponding complete abstract functional compu-
tation. Then
In words, the abstract trace faithfully represents the functional trace.
Lemma 3 (Invariants) Let pat, txt ∈ Σ* and let AbsComp_F be the corresponding
set of abstract functional computations. Then for all s_1 · · · s_n ∈ AbsComp_F,
the following conditions, whose conclusions we call invariants, are satisfied:
. If s
. If s
. If s
Proof: Let pat, txt ∈ Σ* be given, and let S ∈ AbsComp_F
. The proof is by
structural induction on S. The base case is to show that the invariants hold
initially, and the induction cases are to show that the invariants are preserved
at match, compare and rematch.
Initialization
By definition of AbsComp F
, the initial abstract functional state in the computation
S is (match, 0, 0). As lengths of strings, |pat| and |txt| are non-negative,
and by insertion we obtain 0
and (m2) thus hold trivially in the initial abstract functional state.
Preservation at match
Let us assume that Invariants (m1) and (m2) hold at an abstract functional
state (match, j, k). We consider the three possible cases:
j. The next abstract state in
the abstract functional computation is therefore k-j and all the invariants
are preserved.
. -1. The next
abstract state is therefore -1 and all the invariants are preserved.
.
The next abstract state is therefore (compare, k). By the case
assumption Invariants
(c1) and (c2).
The invariants are thus preserved at match.
Preservation at compare
Let us assume that Invariants (c1) and (c2) hold at an abstract functional state
(compare, j, k). We consider the three possible cases:
. 1). The
next abstract state in the abstract functional computation is (match,
1). Since j, k, |pat| and |txt| are integers, j < |pat| #
hold. Since the premises are true by Invariants (c1) and (c2), Invariants
(m1) and (m2) hold.
. txt[k] #= pat[j] # 0: By definition, (compare, j, 1).
The next abstract state is (match, 1). With an argument
identical to the above we obtain Invariant (m2). By inserting the value
in Invariant (m1), as done in the initialization case, we also obtain
Invariant (m1).
. txt[k] #= pat[j]#j > 0: By definition, (compare, j,
The next abstract state is (rematch,
Due to (c1) and j > 0, (r1) holds, and (c2) is identical to
(r2).
thus (r3) holds.
which by convention
denotes the empty string. Similarly, pat[kp # - jp #
denotes the empty string, and Invariant (r4)
holds.
Finally, (r5) holds trivially, because the interval [1, kp # - jp # -
[1, 0] denotes the empty set, by convention.
The invariants are thus preserved at compare.
Preservation at rematch
Let us assume that Invariants (r1), (r2), (r3), (r4), and (r5) hold at an abstract
functional state (rematch, j, k, jp, kp). We consider the five possible cases:
. 0: By definition, (rematch, j, k, jp, kp) #F
(match, 1). The next abstract state in the abstract functional computation
is (match, 1). By Invariant (r2), we obtain
as shown
above.
. 0: By definition, (rematch, j, k, jp, kp) #F
1). The next abstract state is (rematch,
Invariants (r1) and (r2) hold for j and k, the trivial
updates, immediately give Invariants (r1) and
(r2).
kp # . And since j
Invariant (r3) is satisfied.
first look at pat[0] - pat[jp # -1], which is the empty string since
1] is the empty string, and therefore Invariant (r4) holds.
From the invariant we know that the body of (r5) holds for every
k in the interval [1, kp # - jp # - 2], since j
We then only need to show that -(pat[0] - pat[j # - k -
more specifically
1. This is easily seen
since which under the case
assumption give pat[j -
Invariant (r5) holds.
.
k). The next abstract state is therefore (compare,
(r1), (r3), and the case assumption, we have
holds. Since k
. kp <
1). The next abstract state is therefore (rematch,
which give us (r3).
we have pat[0] - pat[jp #
and we only need to show pat[jp # - This is true by
the case assumption and thus (r4) holds.
and since the interval for k is unchanged, because
holds by
assumption.
. kp < j # pat[jp] #= pat[kp]: By definition, (rematch, j, k, jp, kp) #F
(rematch, j, k, 0, kp -jp 1). The next abstract state is therefore (rematch,
By the trivial update of j and k, (r1) and (r2), as shown
above, still hold.
clearly have jp # 0, and the assumption kp > jp
gives us kp . Finally, since kp < j # kp+1 # j,
we have kp thus Invariant (r3) holds.
Again, as shown in the second case, the strings are empty by the
condition thus Invariant (r4) holds.
Similarly to the second case, we only need to show -(pat[0] - pat[j # -
more specifically
holds for
We consider the jpth and (k jp)th entries, which
are the characters pat[jp] and pat[kp], respectively, since k
kp. By the case
assumption the entries are distinct, and we conclude by showing
that the first string contains a jpth entry. The case assumption
us just that; we have 0 # jp and
thus
Invariant (r5) holds. #
The key connection between the abstract functional matcher and the abstract
imperative matcher is stated in the following remark. The remark shows
how to interpret Invariant (r5) in terms of the next table.
Remark 1 We notice that for any j and 0 ≤ a ≤ b, if for all k ∈ [a, b],
¬(pat[0] ... pat[j − k − 1] = pat[k] ... pat[j − 1] ∧ pat[j − k] ≠ pat[j]),
then by Definition 1 next[j] cannot occur in the interval [j − b, j − a].
Indeed, if for some k and some j, (pat[0] ... pat[j − k − 1] = pat[k] ... pat[j − 1]
and pat[j − k] ≠ pat[j]), then j − k is a candidate for next[j]. Therefore the
negation of the condition gives us that j − k is not a candidate for next[j].
3.7 Summary
We have formally specified a functional string matcher, and we have given it
a trace semantics accounting for the indices at which it successively compares
characters in the pattern and in the text. In the next section, we show that
for any given pattern and text, the traces of the imperative matcher and of the
functional matcher coincide.
4 Extensional correspondence between imperative
and functional matchers
Definition 24 (Correspondence) We define the correspondence between imperative
and functional states with the relation ≍ ⊆ States_I × States_F:
(match, j, k) ≍ (match, j, k)
(compare, j, k) ≍ (compare, j, k)
(shift, j, k) ≍ (rematch, j, k, jp, kp), for all jp and kp
We define # States # I -States # F
such that for any sequences
I
F
for all
hold for empty sequences.
Synchronization is a relation sync ⊆ States_I* × States_F*, defined as
sync(S, S′) iff trace_I(S) ≍ trace_F(S′) and last_I(S) ≍ last_F(S′).
Theorem 1 (Abstract equivalence) For any given pattern and text, there
is a unique complete abstract imperative computation S and a unique complete
abstract functional computation S # , and these two abstract computations are
synchronized, i.e., sync(S, S # ) holds.
Proof: Let pat, txt ∈ Σ* be given, and let S ∈ AbsComp_I and S′ ∈ AbsComp_F.
The proof is by structural induction on the abstract computations
. The base case is to prove that the abstract computations start in the
same abstract state, and are therefore initially synchronized. The induction
cases are to prove that synchronization is always preserved.
Initialization
By definition of AbsComp I
and AbsComp F
, both abstract computations S and
start in the abstract state (match, 0, 0). Since sync((match, 0, 0), (match, 0, 0))
holds, the abstract computations are initially synchronized.
Preservation from match
We are under the assumption that initial subsequences I of S and I # of S #
are synchronized, i.e., sync(I, I′) holds, last_I(I) = (match, j, k), and last_F(I′) = (match, j, k).
Three cases occur, which are exhaustive by the invariants of Lemma 3:
j. Similarly, by
definition, (match, j, j. By assumption, sync(I , I # ) holds, and
therefore and thus the complete
abstract computations are synchronized.
Similarly, by
definition, (match, j, I -1. As above, synchronization is preserved
since the computations end with the same integer.
. j < |pat| # k < |txt|: By definition, (match, j,
larly, by definition, (match, j,
by assumption, sync(I - (compare, j, k), I # - (compare, j, k)) also holds.
Synchronization is thus preserved in all cases.
Preservation from compare
We are under the assumption that initial subsequences I of S and I # of S # are
synchronized, i.e., sync(I, I′) holds, last_I(I) = (compare, j, k), and last_F(I′) =
(compare, j, k). Three cases occur, which are exhaustive by the invariants of
Lemma 3:
. txt[k] #= pat[j] # 0: By definition, (compare, j, 1).
Similarly, by definition, (compare, j,
since by definition. Since sync(I , I # ) by assumption, and the
shift states are not included in the abstract trace, sync(I - (shift, j,
(match,
. txt[k] #= pat[j]#j > 0: By definition, (compare, j,
Similarly, by definition, (compare, j,
holds by assumption, and (shift, j,
. 1).
Similarly, by definition, (compare, j,
sync(I , I # ) holds by assumption, sync(I - (match, j +1, k+1), I # - (match,
Again, synchronization is preserved in all cases.
Preservation from rematch and shift
We are under the assumption that initial subsequences I of S and I # of S #
are synchronized, i.e., sync(I, I′) holds, last_I(I) = (shift, j, k), and last_F(I′) =
(rematch, j, k, jp, kp). Since by Definition 24, (shift, j, k) corresponds to (rematch, j, k, jp, kp)
for all jp and kp, we only have to consider the cases where the abstract functional
computation goes to an abstract state of a form different from (rematch, j, k, jp, kp).
Doing so is sound because the recursive calls in the rematch function never diverge
(the lexicographic ordering on #m - (kp - jp), m - jp# is a termination
relation for rematch until its call to match or compare). Two cases occur:
. 0: By definition, (rematch, j, k, jp, kp) #F
(match, 1). We know that Invariant (r5) holds for k in the interval
which by Remark 1 implies that next [j] /
From the
case assumption, we know that
next [j] # [-1, j - 1] we then have next Therefore, by definition
of the abstract imperative matcher, (shift, j,
sync(I , I # ) holds by assumption, sync(I -(match, 0, k+1), I # -(match, 0, k+1))
also holds.
. Due to Invariants (r1) and (r3), we have jp <
|pat|. By definition, (rematch, j, k, jp, kp) #F (compare, jp, k). We
know that the body of Invariant (r5) holds for k in the interval [1, j-jp-1],
which by Remark 1 gives us next [j] /
# [jp+1, j-1]. From (r4) we know that
and by the case assumption
we have pat[jp] #= pat[j]. Therefore, jp is a candidate for next [j]. Since
next [j] /
since next [j] is the largest value less than j
satisfying the requirements, we have next Invariant (r3) we
know that jp # 0, so by definition of the abstract imperative matcher,
(shift, j, I (compare, jp, k). Since sync(I , I # ) holds by assumption,
sync(I - (compare, jp, k), I # - (compare, jp, k)) also holds.
Since the KMP algorithm terminates, and since the abstract matchers are total
functions, complete abstract computations exist, and they are unique. #
We are now in a position to state our main result, as captured by the diagram
from Section 1.2.
          abstract                                        abstract
    imperative matcher   <----  Theorem 1  ---->    functional matcher
       (Section 2.6)           (Section 4)             (Section 3.6)
             ^                                               ^
             |  (Section 2.6)                                |  (Section 3.6)
             |                                               |
          concrete                                        concrete
    imperative matcher                               functional matcher
       (Section 2.5)                                    (Section 3.5)
Corollary 3 (Equivalence) Let pat, txt ∈ Σ* be given. Then there is (1)
a corresponding complete imperative computation, C, with final configuration
⟨n, σ⟩, for some number, n, (2) a corresponding complete functional computation,
C′, with final configuration ⟨n′, ρ⟩, for some number, n′, (3) n = n′, and
(4) the traces of C and C′ are equal.
Proof: By Theorem 1, the abstract functional matcher terminates, and by
Corollary 2 so does the functional matcher. A complete functional computation
therefore exists. By Lemma 1 and Lemma 2 and their corollaries, the abstract
computations represent the computations such that the trace and the result are
represented faithfully. Finally, by Theorem 1, the abstract computations are
synchronized, which means that the abstract traces and the results are equal.
To summarize, we have shown that for any given pattern and text, the traces
of the imperative matcher and of the functional matcher coincide. In that sense,
the two matchers "do the same", albeit with a different time complexity. In the
next section, we show how to eliminate the extra complexity of the functional
matcher, using partial evaluation.
5 Intensional correspondence between imperative
and functional matchers
We now turn to specializing the functional string matcher with respect to given
patterns. First we use partial evaluation (i.e., program specialization), and next
we consider a simple form of data specialization. We first show that the size
of the specialized programs is linear in the size of the pattern, and that the
specialized programs run in time linear in the size of the text. We next show
that the specialized data coincides with the next table of the KMP.
This section is more informal and makes a somewhat liberal use of partial-
evaluation terminology [21].
(define (main pat s txt d )
(let ([lpat s (string-length pat)] [ltxt d (string-length txt)])
(letrec ([match
(lambda (j s k d )
(compare j k))))]
[compare
(lambda (j s k d )
(match
(if (= 0
(match
(rematch
[rematch
(lambda (j s k d jp s kp s )
(if (= kp
(if (eq? (string-ref pat jp)
(if (= jp
(match
(rematch
(compare jp k))
(if (eq? (string-ref pat jp)
(rematch
(rematch
(match
Figure
4: The binding-time annotated functional matcher
5.1 Program specialization
Figure
4 displays a binding-time annotated version of the complete functional
matcher as derived in Appendix A. Formal parameters are tagged with "s"
(for "static") or "d" (for "dynamic") depending on whether they only denote
values that depend on data available at partial-evaluation time or whether they
denote values that may depend on data available at run time. In addition,
dynamic conditional expressions, dynamic tests, and dynamic additions and
subtractions are boxed. All the other parts in the source program are static
and will be evaluated at partial-evaluation time. All the dynamic parts will be
reconstructed, giving rise to the residual program.
A partial evaluator such as Similix [3, 4] is designed to preserve dynamic computations
and their order. In the present case, the dynamic tests are among the
dynamic computations. They are guaranteed to occur in specialized programs
in the same order as in the source program. Therefore, by construction, Similix
generates programs that traverse the text in the same order as the functional
matcher and thus the KMP algorithm.
For example, we have specialized the functional matcher with respect to
the pattern "abac" (without post-unfolding). The resulting residual program is
displayed in Figure 5, after lambda-dropping [8] and renaming (the character
following the "|", in the subscripts, is the next character in the pattern to be
matched against the text-an intuitive notation suggested by Grobauer and
Lawall [13]). The specialized string matcher traverses the text linearly and
compares characters in the text and literal characters from the pattern. In their
article [18, page 330], Knuth, Morris and Pratt display a similar program where
the next table has been "compiled" into the control flow. We come back to this
point at the end of Section 5.2.
In their revisitation of partial evaluation of pattern matching in strings [13],
Grobauer and Lawall analyzed the size and complexity of the residual code
produced by Similix, measured in terms of the number of residual tests. They
showed that the size of a residual program is linear in the length of the pattern,
and that the time complexity is linear in the length of the text. In the same
manner, we can show that Similix yields a residual program that is linear in the
length of the pattern, and whose time complexity is linear in the length of the
text.
Similix is a polyvariant program-point specializer that builds mutually recursive
specialized versions of source program points (by default: conditional
expressions with dynamic tests). Each source program point is specialized with
respect to a set of static values. The corresponding residual program point is
indexed with this set. If a source program point is met again with the same set
of static values, a residual call to the corresponding residual program point is
generated.
Proposition 1 Specializing the functional matcher of Figure 4 with respect to a
pattern yields a residual program whose size is linear in the length of the pattern.
Proof (informal): The only functions for which residual code is generated
are main, match and compare. The first one, main, is the goal function, but it
contains no memoization points, so only one residual main function is generated.
There is exactly one memoization point-a dynamic conditional expression-
in each of the functions match and compare. The only static data available at
the two memoization points are bound to j, pat, and lpat. The only piece of
static data that varies is the value of j, i.e., j, and since 0 # j < |pat| at the
memoization points (because of the invariants of Lemma 3 in Section 4, and the
fact that the memoization point in match is only reached if j #= |pat|), at most
|pat| variants of the two memoization points can be generated. The number of
(define (main-abac txt)
(let ([ltxt (string-length txt)])
(define (match |abac
(define (compare |abac
(if (eq? #\a (string-ref txt k))
(match a|bac (+ k 1))
(match |abac (+ k 1))))
(define (match a|bac
(define (compare a|bac
(if (eq? #\b (string-ref txt k))
(match ab|ac (+ k 1))
(compare |abac k)))
(define (match ab|ac
(define (compare ab|ac
(if (eq? #\a (string-ref txt k))
(match aba|c (+ k 1))
(match |abac (+ k 1))))
(define (match aba|c
(define (compare aba|c
(if (eq? #\c (string-ref txt k))
(compare a|bac k)))
(match |abac 0)))
. For all txt, evaluating (main-abac txt) yields the same result as evaluating
(main "abac" txt).
. For all k, evaluating (match |abac k) in the scope of ltxt yields the same
result as evaluating (match 0 k) in the scope of lpat and ltxt, where
lpat denotes the length of pat and ltxt denotes the length of txt.
. For all k, evaluating (match a|bac k) in the scope of ltxt yields the same
result as evaluating (match 1 in the scope of lpat and ltxt.
. For all k, evaluating (match ab|ac k) in the scope of ltxt yields the same
result as evaluating (match 2 k) in the scope of lpat and ltxt.
. For all k, evaluating (match aba|c k) in the scope of ltxt yields the same
result as evaluating (match 3 k) in the scope of lpat and ltxt.
Figure
5: Result of specializing the functional matcher wrt. "abac"
residual functions is therefore linear in the size of the pattern. In addition, the
size of each function is bounded by a small constant, as can be seen if one writes
the BNF of residual programs [20]. #
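As a quick sanity check (ours), the residual program of Figure 5 can be exercised directly, and, by the first of the equivalences accompanying Figure 5, its results agree with those of the source matcher:

(main-abac "xxabacyy")   ; evaluates to 2
(main-abac "ababab")     ; evaluates to -1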
Proposition 2 Specializing the functional matcher of Figure 4 with respect to
a pattern yields a residual program whose time complexity is linear in the length
of the text.
Proof (informal): As proven by Knuth, Morris and Pratt, the KMP algorithm
performs a number of comparisons between characters in the pattern
and in the text that is linear in the length of the text [18]. Corollary 3 shows
that the functional matcher performs the exact same sequence of comparisons
between characters in the pattern and in the text as the KMP algorithm. All
comparisons are performed in the compare function, and exactly one comparison
is performed at each call to compare. The number of calls to compare is therefore
linear in the length of the text, and since the match function either terminates
or calls compare, the number of calls to match is bounded by the number of calls
to compare. By Proposition 1, residual code is only generated for the functions
main, compare, and match. The time complexity of each of the functions main,
compare, and match is easily seen to be bounded by a small constant. Since main
is only called once and the number of calls to compare and match is linear in the
length of the text, the time complexity of the residual program is linear in the
length of the text. #
5.2 Data specialization
In Section 3.6, Remark 1 connects the rematch function in the functional matcher
and the next table of the KMP algorithm. In this section, we revisit this connection
and show how to actually derive the KMP algorithm with a next table from
the functional matcher using a simple form of data specialization [2, 6, 17, 19].
To this end, we first restate the functional matcher.
In the functional matcher, all functions are tail recursive, i.e., they iteratively
call themselves or each other. In particular, rematch completes either by calling
match or by calling compare. The two actual parameters to match are 0, a literal,
and an increment over k, which is available in the scope of match. The two
actual parameters to compare are jp, which has been computed in the course of
rematch, and k, which is available in the scope of compare.
To make it possible to tabulate the rematch function, we modify the functional
matcher so that it is no longer tail recursive. Instead of having rematch
call match or compare, tail recursively, we make it return a value on which to call
match or compare. We set this value to be that of jp (a natural number) or -1.
Correspondingly, instead of having compare call rematch tail recursively, we make
it dispatch on the result of rematch to call match or compare, tail recursively. The
result is displayed in Figure 6.
In the proof of Theorem 1, we show that when rematch terminates by calling
compare, jp is equal to next [j] in the KMP algorithm. We also show that when
(define (main pat txt)
  (let ([lpat (string-length pat)] [ltxt (string-length txt)])
    (letrec ([match
              (lambda (j k)
                (if (= j lpat)
                    (- k j)
                    (if (= k ltxt)
                        -1
                        (compare j k))))]
             [compare
              (lambda (j k)
                (if (eq? (string-ref pat j) (string-ref txt k))
                    (match (+ j 1) (+ k 1))
                    (if (= 0 j)
                        (match 0 (+ k 1))
                        (let ([next (rematch j 0 1)])
                          (if (= next -1)
                              (match 0 (+ k 1))
                              (compare next k))))))]
             [rematch
              (lambda (j jp kp)
                (if (= kp j)
                    (if (eq? (string-ref pat jp) (string-ref pat j))
                        (if (= jp 0)
                            -1
                            (rematch j 0 (+ (- kp jp) 1)))
                        jp)
                    (if (eq? (string-ref pat jp) (string-ref pat kp))
                        (rematch j (+ jp 1) (+ kp 1))
                        (rematch j 0 (+ (- kp jp) 1)))))])
      (match 0 0))))
Figure 6: Variation on the functional matcher
match is called from rematch, the value next [j] in the KMP algorithm is -1. We
only call rematch from compare, and only with jp = 0 and kp = 1.
Therefore calling the new rematch function is equivalent to a lookup in the next
table in the KMP algorithm. In particular, tabulating the |pat| input values of
rematch corresponding to all j between 0 and |pat| - 1 yields the next table as
used in the KMP algorithm.
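A minimal sketch of this tabulation (the code and the name tabulate-next are ours; it assumes that the rematch function of Figure 6 has been made accessible, e.g., by returning it from main):

;; next[0] is -1 by Definition 1; for 0 < j < lpat, (rematch j 0 1) yields next[j].
(define (tabulate-next rematch lpat)
  (let ([next (make-vector lpat -1)])
    (do ([j 1 (+ j 1)]) ((>= j lpat) next)
      (vector-set! next j (rematch j 0 1)))))

For pat = "abac", the resulting vector is #(-1 0 -1 1), i.e., the next table of the KMP algorithm.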
This simple data specialization yields a string matcher that traverses the
text linearly, matching it against the pattern, and looking up the next index
into the pattern in the next table in case of mismatch. In other words, data
specialization of the functional matcher yields the KMP algorithm.
In particular, specializing the string matcher of Figure 6 (or its tabulated
version) with respect to a pattern would compile the corresponding next table
into the control flow of the residual program. The result would coincide with
the compiled code in Knuth, Morris and Pratt's article [18, page 330].
6 Conclusion and issues
We have presented the first formal proof that partial evaluation can precisely
yield the KMP, both extensionally (trace semantics, synchronization) and intensionally
(size of specialized programs, relation to the next table, actual derivation
of the KMP algorithm). We have shown that the key to obtaining the
KMP out of a naive, quadratic string matcher is not only to keep backtracking
under static control, but also to maintain exactly one character of negative in-
formation, as in Consel and Danvy's original solution. Together with Grobauer
and Lawall's complexity proofs about the size and time complexity of residual
programs, the buildup of Corollary 3 paves the way to relating the e#ect of
staged string matchers with independently known string matchers, e.g., Boyer
and Moore's [1].
Our work has led us to consider a family of KMP algorithms in relation with
the following family of staged string matchers:
. A staged string matcher that does not keep track of negative information
gives rise not to Knuth, Morris, and Pratt's next table, but to their f
function [18, page 327], i.e., to Morris and Pratt's algorithm [5, Chapter 6].
Tabulating this function yields an array of the same size as the pattern.
. A staged string matcher that keeps track of one character of negative
information corresponds to Knuth, Morris, and Pratt's algorithm and next
table.
. A staged string matcher that keeps track of a limited number of characters
of negative information gives rise to a KMP-like algorithm. The
corresponding residual programs are more efficient, but they are also bigger.
. A staged string matcher that keeps track of all the characters of negative
information also gives rise to a KMP-like algorithm. The corresponding
residual programs are even more efficient, but they are also even bigger.
Grobauer and Lawall have shown that the size of these residual programs
is bounded by |pat| · |Σ|, where |Σ| is the size of the alphabet [13].
It is however our conjecture that for string matchers that keep track of
two or more characters of negative information, a tighter upper bound on
the size is twice the length of the pattern, i.e., 2|pat |. This conjecture
holds for short patterns.
Let us conclude on two points: obtaining efficient string matchers by partial
evaluation of a naive string matcher and obtaining them efficiently.
The essence of obtaining efficient string matchers by partial evaluation of
a naive string matcher is to ensure that backtracking in the naive matcher is
static. One can then either stage the naive matcher and use a simple partial
evaluator, or keep the naive matcher unstaged and use a sophisticated partial
evaluator. What matters is that backtracking is carried out at specialization
time and that dynamic computations are preserved in specialized programs.
The size of residual programs provides a lower bound to the time complexity
of specialization. For example, looking at the KMP, the size of a residual
program is proportional to the size of the pattern if only positive information
is kept. At best, a general-purpose partial evaluator could thus proceed in time
linear in |pat |, i.e., O(|pat |), as in the first pass of the KMP algorithm. How-
ever, evaluating the static parts of the source program at specialization time,
as driven by the static control flow of the source program, does not seem like
an optimal strategy, even discounting the complexity of binding-time analysis.
For example, the data specialization in Section 5.2 works in time quadratic in
|pat|, i.e., O(|pat|^2), to construct the next table. On the other hand, such an
efficient treatment could be one of the bullets in a partial evaluator's gun [22,
Section 11], i.e., a treatment that is not generally applicable but has a dramatic
effect occasionally. For example, proving the conjecture above could lead to
such a bullet.
Acknowledgments
We are grateful to Torben Amtoft, Julia Lawall, Karoline
Malmkjær, Jan Midtgaard, Mikkel Nygaard, and the anonymous reviewers for
a variety of comments. Special thanks to Andrzej Filinski for further comments
that led us to reshape this article.
This work is supported by the ESPRIT Working Group APPSEM (http://
www.md.chalmers.se/Cs/Research/Semantics/APPSEM/).
A Staging a quadratic string matcher
Figure
7 displays a naive, quadratic string matcher that successively checks
whether the pattern pat is a prefix of one of the successive suffixes of the text
txt. The main function initializes the indices j and k with which to access pat
and txt. The match function checks whether the matching is finished (either
with a success or with a failure), or whether one more comparison is needed.
The compare function carries out this comparison. Either it continues to match
the rest of pat with the rest of the current suffix of txt or it starts to match pat
and the next suffix of txt.
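A minimal sketch of such a naive matcher, consistent with this description (the code and the name naive-main are ours, not Figure 7 itself):

(define (naive-main pat txt)
  (let ([lpat (string-length pat)] [ltxt (string-length txt)])
    (letrec ([match
              (lambda (j k)
                (if (= j lpat)
                    (- k j)
                    (if (= k ltxt)
                        -1
                        (compare j k))))]
             [compare
              (lambda (j k)
                (if (eq? (string-ref pat j) (string-ref txt k))
                    (match (+ j 1) (+ k 1))
                    (match 0 (+ (- k j) 1))))])  ;; restart on the next suffix of txt
      (match 0 0))))

On a mismatch, compare backtracks on the text: it restarts match at the beginning of pat and at the character following the current suffix of txt, which is what the staging of Figures 8, 9, and 10 eliminates.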
Figure
8 displays a staged version of the quadratic string matcher. Instead of
matching pat and the next su#x of txt, this version uses a rematch function and
a recompare function to first match pat and a prefix of a suffix of pat, which we
know to be equal to the corresponding segment in txt. Eventually, the rematch
function resumes matching the rest of the pattern and the rest of txt. As a
result, the staged string matcher does not backtrack on txt.
In partial-evaluation jargon, the string matcher of Figure 8 uses positive information
about the text (see Footnote 1 page 4). A piece of negative information
is also available, namely the latest character having provoked a mismatch.
Figure
9 displays a staged version of the quadratic string matcher that exploits
this negative information. Rather than blindly resuming the compare function,
the rematch function first checks whether the character having caused the latest
mismatch could cause a new mismatch, thereby avoiding one access to the text.
To simplify the formal development, we inline recompare in rematch and
lambda-lift rematch to the same lexical level as match and compare [8, 14]. The
resulting string matcher is displayed in Figure 10 and in Section 3.4.
There are of course many ways to stage a string matcher. The one we have
chosen is easy to derive and easy to reason about.
--R
The abstraction and instantiation of string-matching programs
Mixed computation and translation: Linearisation and decomposition of compilers.
Similix 5.1 manual.
Automatic autoprojection of recursive equations with global variables and abstract data types.
Exact string matching algorithms.
Sandrine Chirokoff
Partial evaluation of pattern matching in strings.
Transforming recursive equations into programs with block structure.
Program transformation system based on generalized partial computation.
Generalized partial computation.
Essence of generalized partial computation.
Occam's razor in metacomputation: the notion of a perfect process tree.
Partial evaluation of pattern matching in strings
Lambda lifting: Transforming programs to recursive equations.
Partial Evaluation and Automatic Program Generation.
Revised 5 report on the algorithmic language Scheme.
Data specialization.
Fast pattern matching in strings.
Program and data specialization: Principles
Abstract Interpretation of Partial-Evaluation Algo- rithms
A transformation-based optimiser for Haskell
Christian Queinnec and Jean-Marie Geffroy
Partial evaluation of pattern matching in constraint logic programming languages.
A positive supercompiler
--TR
Lambda lifting: transforming programs to recursive equations
Partial evaluation of pattern matching in strings
Partial evaluation of pattern matching in constraint logic programming languages
Automatic autoprojection of recursive equations with global variable and abstract data types
Essence of generalized partial computation
Partial evaluation and automatic program generation
Abstract interpretation of partial evaluation algorithms
Data specialization
A transformation-based optimiser for Haskell
Lambda-dropping
Glossary for Partial Evaluation and Related Topics
Program transformation system based on generalized partial computation
Revised Report on the Algorithmic Language Scheme
Combining Program and Data Specialization
Occam's Razor in Metacomputation
Partial evaluation of pattern matching in strings, revisited
--CTR
Mads Sig Ager , Olivier Danvy , Henning Korsholm Rohde, Fast partial evaluation of pattern matching in strings, ACM SIGPLAN Notices, v.38 n.10, p.3-9, October
Yoshihiko Futamura , Zenjiro Konishi , Robert Glück, Automatic generation of efficient string matching algorithms by generalized partial computation, Proceedings of the ASIAN symposium on Partial evaluation and semantics-based program manipulation, p.1-8, September 12-14, 2002, Aizu, Japan
Olivier Danvy , Henning Korsholm Rohde, On obtaining the Boyer-Moore string-matching algorithm by partial evaluation, Information Processing Letters, v.99 n.4, p.158-162, 31 August 2006
Mads Sig Ager , Olivier Danvy , Henning Korsholm Rohde, Fast partial evaluation of pattern matching in strings, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.4, p.696-714, July 2006
Germán Vidal, Cost-Augmented Partial Evaluation of Functional Logic Programs, Higher-Order and Symbolic Computation, v.17 n.1-2, p.7-46, March-June 2004 | trace semantics;data specialization;program specialization;Knuth-Morris-Pratt string matching
568186 | Unifying object-oriented programming with typed functional programming. | The wide practice of object-oriented programming in current software construction is evident. Despite extensive studies on typing programming objects, it is still undeniably a challenging research task to design a type system for object-oriented programming that is both effective in capturing program errors and unobtrusive to program construction. In this paper, we present a novel approach to typing objects that makes use of a recently invented notion of guarded dependent datatypes. We show that our approach can address various difficult issues (e.g., handling "self" type, typing binary methods, etc.) in a simple and natural type-theoretical manner, remedying the deficiencies in many existing approaches to typing objects. | INTRODUCTION
The popularity of object-oriented programming in current
software practice is evident. While this popularity may result
in part from the tendency to chase after the latest "fads"
in programming languages, there is undeniably some real
substance in the growing use of object-oriented program-
ming. In particular, object-oriented programming can significantly facilitate
software organization and reuse through
encapsulation, inheritance and polymorphism. Building on
our previous experience with Dependent ML [20, 18], we are
Partially supported by the NSF Grants No. CCR-0081316
and No. CCR-0092703
naturally interested in combining object-oriented programming
with dependent types. However, a straightforward
combination of dependent types with object-oriented programming
(e.g., following a Java-like approach) is largely un-
satisfactory, as such an approach often requires a substantial
use of run-time type downcasting. In search of a more satisfactory
approach, we have noticed that a recently invented
notion of guarded recursive datatype constructors [19] can
be combined with dependent types to enable the construction
of a type system for programming objects that needs
no use of type downcasting. This is highly desirable as type
downcasting is probably one of the most common causes of
program errors in object-oriented languages like Java.
We briefly outline the basic idea behind our approach to
typing programming objects. The central idea of object-oriented
programming is, of course, the programming ob-
jects. But what is really a programming object? Unfortu-
nately, there is currently no simple answer to this question
(and there unlikely will). In this paper, we take a view of
programming objects in the spirit of Smalltalk [12, 14]; we
suggest to conceptualize a programming object as a little
intelligent being that is capable of performing actions according
to the messages it receives; we suggest not to think
of a programming object as a record of fields and methods
in this paper.
We now present an example to illustrate how such a view
of objects can be formulated in a typed setting. We assume
the existence of a type constructor MSG that takes a type
τ and forms the message type (τ)MSG; after receiving a
message of type (τ)MSG, an object is supposed to return a
value of type τ; therefore, we assign the following type OBJ
to objects:
OBJ = ∀α. (α)MSG → α
Suppose that we have declared that MSGgetfst , MSGgetsnd ,
MSGsetfst and MSGsetsnd are message constructors of the
following types, where 1 stands for the unit type.
MSGgetfst
MSGsetfst
In
Figure
1, we implement integer pairs in a message-passing
style, where the withtype clause is a type annotation that
assigns the type int # int # OBJ to the defined function
newIntPair . 1 Note that such ML-like syntax is used to
present examples throughout the paper. Given integers x
1 The reason for newIntPair being well-typed is to be explained
in Section 2.
Figure 2: The value constructors associated with the g.r. datatype constructor HOAS
fun newIntPair x y = let
    val xref = ref x
    val yref = ref y
    fun dispatch MSGgetfst = !xref
      | dispatch MSGgetsnd = !yref
      | dispatch (MSGsetfst x') = (xref := x')
      | dispatch (MSGsetsnd y') = (yref := y')
      | dispatch _ = raise UnknownMessage
in dispatch end
withtype int -> int -> OBJ
Figure 1: An implementation of integer pairs
and y, we can construct an integer pair anIntPair by calling
newIntPair(x)(y); we can send the message MSGgetfst to
the pair to obtain its first component: anIntPair(MSGgetfst );
we can also reset its first component to x # by sending it the
message MSGsetfst(x # operations
on the second component of the pair can be performed
similarly; an exception is raised at run-time if anIntPair
cannot interpret a message sent to it.
Obviously, there exists a serious problem with the above
approach to implementing objects. Since every object is
currently assigned the type OBJ, we cannot use types to
differentiate objects. For instance, suppose that MSGfoo is
another declared message constructor of type (1)MSG; then
anIntPair (MSGfoo) is well-typed, but its execution leads to
an uncaught exception UnknownMessage at run-time. This
is clearly undesirable: anIntPair(MSGfoo) should be rejected
at compile-time as an ill-typed expression. We will
address this issue and many other ones in objected-oriented
programming by making use of a restricted form dependent
types developed in Dependent ML [20, 18].
The type constructor MSG is what we call a guarded recursive (g.r.) datatype
constructor. The notion of g.r. datatype constructors, which extends the
notion of datatypes in ML, was recently invented in the setting of functional
programming for handling intensional polymorphism and run-time type passing
[19]. We write {Δ}.τ for a guarded type, where Δ is a type variable context
that may contain some type constraints. For instance,
{α1, α2 | α1 * α2 ≐ int * bool}. α1 → α1 is a guarded type; this type is
equivalent to int → int since we must map α1 to int in order to satisfy the
type constraint α1 * α2 ≐ int * bool. The type
{α1, α2 | α1 * α2 ≐ int}. α1 → α1 is also a guarded type, which is equivalent
to the type void, i.e., the type in which there is no element, since the type
constraint α1 * α2 ≐ int cannot be satisfied. If we write τ for
{α1, α2 | α1 * α2 ≐ α}. α1 → α1, where α occurs free, we notice that τ has the
following interesting feature: instantiating α with a type τ0, we obtain a
type that is equivalent to τ1 → τ1 if τ0 is of the form τ1 * τ2, or void if τ0
is of any other form.
A guarded recursive datatype constructor is a recursively defined type
constructor for constructing guarded datatypes,

typecon (type) HOAS =
    {'a1,'a2}. ('a1 * 'a2) HOAStup of 'a1 HOAS * 'a2 HOAS
  | {'a1,'a2}. ('a1 -> 'a2) HOASlam of 'a1 HOAS -> 'a2 HOAS
  | {'a1,'a2}. ('a2) HOASapp of ('a1 -> 'a2) HOAS * 'a1 HOAS
  | {'a1}. ('a1) HOASlift of 'a1

Figure 3: An example of g.r. datatype constructor
which are a special form of sum types in which each component is a guarded
type. We present a short example of a g.r. datatype constructor as follows to
illustrate the notion. More details and examples can be found in [19]. The
syntax in Figure 3 essentially declares a type constructor HOAS, which can
take a type τ and then form another type (τ)HOAS. Intuitively, a value of
type (τ)HOAS represents a higher-order abstract syntax tree [9, 16] for a
value of type τ. The value constructors associated with HOAS are given the
types in Figure 2. Note that the type constructor HOAS cannot be defined in
ML. Because of the negative occurrence of HOAS in the argument type of
HOASlam, HOAS cannot be inductively defined, either. The reason for calling
HOAS a guarded recursive datatype constructor is that HOAS can be defined as
follows through a fixed-point operator, where ∗ is the kind for types:

    HOAS = μT : (∗) ⇒ ∗. λα : ∗.
        ({α1, α2 | α ≐ α1 * α2}. T(α1) * T(α2)) +
        ({α1, α2 | α ≐ α1 → α2}. (T(α1) → T(α2))) +
        ({α1, α2 | α ≐ α2}. T(α1 → α2) * T(α1)) +
        ({α1 | α ≐ α1}. α1)
Then the value constructors associated with HOAS can be readily defined
through the use of fold/unfold (for recursive types) and injection (for sum
types). We can now define an evaluation function as follows that computes the
value represented by a given higher-order abstract syntax tree.

fun eval (HOAStup (x1, x2)) = (eval x1, eval x2)
  | eval (HOASlam f) = (fn x => eval (f (HOASlift x)))
  | eval (HOASapp (x1, x2)) = (eval x1) (eval x2)
  | eval (HOASlift v) = v
withtype {'a}. 'a HOAS -> 'a

Note that the withtype clause is a type annotation provided by the user,
which indicates that eval is a function of type ∀α. (α)HOAS → α. In other
words, the evaluation function eval is type-preserving.
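For readers who want to run the example, the HOAS declaration and its
evaluator can be rendered directly with OCaml GADTs. This is a sketch of the
same idea, not the paper's internal language; the constructor names are
abbreviated.

type _ hoas =
  | Tup  : 'a hoas * 'b hoas -> ('a * 'b) hoas
  | Lam  : ('a hoas -> 'b hoas) -> ('a -> 'b) hoas
  | App  : ('a -> 'b) hoas * 'a hoas -> 'b hoas
  | Lift : 'a -> 'a hoas

let rec eval : type a. a hoas -> a = function
  | Tup (x1, x2) -> (eval x1, eval x2)
  | Lam f -> (fun x -> eval (f (Lift x)))
  | App (f, x) -> (eval f) (eval x)
  | Lift v -> v

(* For instance, eval (App (Lam (fun x -> Tup (x, x)), Lift 3)) = (3, 3). *)

The negative occurrence of hoas in Lam is accepted because the declaration is
not required to be inductive, mirroring the fixed-point definition above.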
In the rest of the paper, we are to present a type system
to support g.r. datatype constructors. We then outline an
approach to implementing programming objects, explaining
how various issues in object-oriented programming can be
addressed.
types            τ  ::= α | 1 | τ1 * τ2 | τ1 → τ2 | ∀α.τ | (τ̄)T
patterns         p  ::= x | ⟨⟩ | ⟨p1, p2⟩ | c[ᾱ](p)
clauses          ms ::= (p1 ⇒ e1 | · · · | pn ⇒ en)
expressions      e  ::= x | f | ⟨⟩ | ⟨e1, e2⟩ | fst(e) | snd(e) | c[τ̄](e) |
                        case e of ms | λx:τ.e | e1(e2) | fix f:τ.v |
                        Λα.v | e[τ] | let x = e1 in e2 end
values           v  ::= x | ⟨⟩ | c[τ̄](v) | ⟨v1, v2⟩ | λx:τ.e | Λα.v
exp. var. ctx.   Γ  ::= ∅ | Γ, xf : τ
typ. var. ctx.   Δ  ::= ∅ | Δ, α | Δ, τ1 ≐ τ2

Figure 4: Syntax for the internal language λ2,Gμ
2. THE LANGUAGE λ2,Gμ

We present a language λ2,Gμ based on the explicitly typed second-order
polymorphic λ-calculus. We present both static and dynamic semantics for
λ2,Gμ and then show that the type system of λ2,Gμ, which supports g.r.
datatype constructors, is sound.
2.1 Syntax

We present the syntax for λ2,Gμ in Figure 4, which is mostly standard. We use
α for type variables, 1 for the unit type and τ̄ for a (possibly empty)
sequence of types τ1, . . . , τn. We have two kinds of expression variables: x
for lam-variables and f for fix-variables. We use xf for either a
lam-variable or a fix-variable. We can only form a λ-abstraction over a
lam-variable and a fixed-point expression over a fix-variable. Note that a
lam-variable is a value but a fix-variable is not. We use c for constructors
and assume that every constructor is unary.² Also, we require that the body
of either Λ or fix be a value. The syntax for patterns is to be explained in
Section 2.4.
We use θ for substitutions mapping type variables to types and dom(θ) for the
domain of θ. Note that θ[α ↦ τ], where we assume α ∉ dom(θ), extends θ with a
mapping from α to τ. Similar notations are also used for substitutions σ
mapping variables xf to expressions. We write •[θ] (resp. •[σ]) for the
result of applying θ (resp. σ) to •, where • can be a type, an expression, a
type variable context, an expression variable context, etc.
We use Δ for type variable contexts in λ2,Gμ, which require some explanation.
As usual, we can declare a type variable α in a type variable context Δ. We
use Δ ⊢ τ to mean that τ is a well-formed type in which every type variable
is declared in Δ. All type formation rules are standard and thus omitted. We
can also declare a type equality τ1 ≐ τ2 in Δ. Intuitively, when deciding
type equality under Δ, we assume that the types τ1 and τ2 are equal if
τ1 ≐ τ2 is declared in Δ.
Given two types τ1 and τ2, we write τ1 ≡ τ2 to mean that τ1 is α-equivalent
to τ2. Judgments of the form Δ ⊢ θ : Δ0, which roughly mean that θ matches Δ0
under Δ, are derived by requiring that θ(α) be a well-formed type under Δ for
each type variable α declared in Δ0 and that the constraint τ1[θ] ≐ τ2[θ] be
satisfied under Δ for each type equality τ1 ≐ τ2 declared in Δ0.
² For a constructor taking no argument, we can treat it as a constructor
taking the unit ⟨⟩ as its argument.
Figure 5: Pattern typing rules (including pat-tup and pat-cons) for judgments
of the form Δ0 ⊢ p ↓ τ ⇒ (Δ; Γ), together with the clause typing rule and the
clauses typing rule for judgments of the form ms : τ1 ⇒ τ2
We use Δ ⊨ τ1 ≐ τ2 for a type constraint; this constraint is satisfied if we
have τ1[θ] ≡ τ2[θ] for every θ such that ⊢ θ : Δ is derivable. As can be
expected, we have the following proposition.

Proposition 2.1.
  • If τ1 ≡ τ2 holds, then Δ ⊨ τ1 ≐ τ2 also holds.
  • If Δ ⊨ τ1 ≐ τ2 holds, then Δ ⊨ τ2 ≐ τ1 also holds.
  • If Δ ⊨ τ1 ≐ τ2 and Δ ⊨ τ2 ≐ τ3 hold, then Δ ⊨ τ1 ≐ τ3 also holds.
2.2 Solving Type Constraints

There is a need for solving type constraints of the form Δ ⊨ τ1 ≐ τ2 when we
form typing rules for λ2,Gμ. Fortunately, there is a decision procedure for
doing this based on the set of rules in Figure 7. In these rules, we use T to
range over all type constructors, either built-ins (the product and function
type constructors), user-defined g.r. datatype constructors, or skolemized
constants.

Theorem 2.2. Δ ⊨ τ1 ≐ τ2 holds if and only if Δ ⊢ τ1 ≐ τ2 is derivable.

Proof. By induction on a derivation of Δ ⊢ τ1 ≐ τ2.
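To give a feel for the decomposition-style reasoning behind the rules in
Figure 7, here is a deliberately simplified OCaml sketch: plain first-order
unification with an occurs check. It ignores quantified type variables,
skolemized constants and the context Δ, so it only illustrates the flavour of
constraint solving, not the actual decision procedure; the type ty and all
names are ours.

type ty = Var of string | Con of string * ty list

let rec occurs a = function
  | Var b -> a = b
  | Con (_, ts) -> List.exists (occurs a) ts

(* Chase the (triangular) substitution s through a type. *)
let rec apply s = function
  | Var a -> (match List.assoc_opt a s with Some t -> apply s t | None -> Var a)
  | Con (c, ts) -> Con (c, List.map (apply s) ts)

(* Solve a list of equations, returning a substitution if one exists. *)
let rec unify s = function
  | [] -> Some s
  | (t1, t2) :: rest ->
    (match apply s t1, apply s t2 with
     | Var a, Var b when a = b -> unify s rest
     | Var a, t | t, Var a ->
       if occurs a t then None else unify ((a, t) :: s) rest
     | Con (c1, ts1), Con (c2, ts2) ->
       if c1 = c2 && List.length ts1 = List.length ts2
       then unify s (List.combine ts1 ts2 @ rest)
       else None)

(* unify [] [ (Con ("*", [Var "a1"; Var "a2"]),
               Con ("*", [Con ("int", []); Con ("bool", [])])) ]
   maps a1 to int and a2 to bool, as in the guarded-type example of Section 1. *)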
2.3 G.R. Datatype Constructors

We use ∗ as the kind for types and (∗, . . . , ∗) ⇒ ∗ as the kind for type
constructors of arity n, where n is the number of ∗'s in (∗, . . . , ∗). We
use T for a recursive type constructor of arity n and associate with T a list
of (value) constructors c1, . . . , ck; for each 1 ≤ i ≤ k, the type of ci is
of the form Qi. τi → (τ̄i)T, where τ̄i is a sequence of types
Figure 6: Typing rules for expressions (including the rules ty-cons, ty-snd,
and the rule for expressions of the form case e of ms)
Figure 7: Rules for solving type constraints, with side conditions such as
"T is not T′", "α has no free occurrences in τ", and "A is a fresh skolemized
constant"
and Qi stands for a (possibly empty) sequence of quantifiers over type
variables, possibly guarded by type constraints. In our concrete syntax, T
can be declared as follows.

typecon (type, . . . , type) T =
    {Δ1}. (τ̄1) c1 of τ1
  | {Δ2}. (τ̄2) c2 of τ2
  | . . .
  | {Δk}. (τ̄k) ck of τk
We present some simple examples of g.r. datatype constructors to facilitate
the understanding of this concept.

Example 1 The following syntax

    typecon type TOP = {'a}. Top of 'a

declares a value constructor Top of the type ∀α. α → TOP; TOP is defined as
μt.({α}.α), which is equivalent to {α}.α.

Example 2 The following syntax

    typecon (type) list = {'a}. ('a) nil | {'a}. ('a) cons of 'a * 'a list

declares two constructors nil and cons of the types ∀α. 1 → (α)list and
∀α. α * (α)list → (α)list, respectively; the type constructor list is defined
as μt : (∗) ⇒ ∗. λα:∗. 1 + α * t(α), which is essentially equivalent to the
type constructor λα:∗. μt:∗. 1 + α * t. Note that the usual list type
constructor in ML is defined as λα:∗. μt:∗. 1 + α * t.
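In OCaml's GADT syntax, the declaration of Example 2 would read as follows;
since none of its constructors constrains the result index, it coincides with
the ordinary ML list type, which is the point of the example. (The name glist
is ours.)

type _ glist =
  | Nil  : 'a glist
  | Cons : 'a * 'a glist -> 'a glist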
2.4 Pattern Matching

We use p for patterns. As usual, either a type variable or a value variable
may occur at most once in each pattern. We use a judgment of the form
v ⇓ p ⇒ (θ; σ) to mean that matching a value v against a pattern p yields
substitutions θ and σ for the type and value variables in p. The rules for
deriving such judgments are as follows: matching v against a variable x
yields (∅; [x ↦ v]); matching ⟨⟩ against ⟨⟩ yields (∅; ∅); matching ⟨v1, v2⟩
against ⟨p1, p2⟩ yields the unions of the substitutions obtained from
matching v1 against p1 and v2 against p2; and matching c[τ̄](v) against
c[ᾱ](p) yields (θ[ᾱ ↦ τ̄]; σ), where (θ; σ) is obtained from matching v
against p.
Given a type variable context Δ0, a pattern p and a type τ, we can use the
rules in Figure 5 to derive a judgment of the form Δ0 ⊢ p ↓ τ ⇒ (Δ; Γ), whose
meaning is formally captured by Lemma 2.4.
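The value part of the matching judgment is easy to implement; the sketch
below (in OCaml, with illustrative constructor names) returns the
substitution σ for the value variables of a pattern and omits the
type-substitution component θ.

type value = VUnit | VPair of value * value | VCons of string * value
type pat   = PVar of string | PUnit | PPair of pat * pat | PCons of string * pat

let rec pmatch (v : value) (p : pat) : (string * value) list option =
  match v, p with
  | _, PVar x -> Some [ (x, v) ]
  | VUnit, PUnit -> Some []
  | VPair (v1, v2), PPair (p1, p2) ->
    (match pmatch v1 p1, pmatch v2 p2 with
     | Some s1, Some s2 -> Some (s1 @ s2)      (* variables are disjoint *)
     | _ -> None)
  | VCons (c, v'), PCons (c', p') when c = c' -> pmatch v' p'
  | _, _ -> None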
2.5 Static and Dynamic Semantics

We present the typing rules for λ2,Gμ in Figure 6. We assume the existence of
a signature Σ in which the types of constructors are declared.
Most of the typing rules are standard. The rule (ty-eq) indicates that type
equality in λ2,Gμ is modulo type constraint solving. Please notice the great
difference between the rules presented in Figure 5 for typing clauses and the
"standard" ones in [15].
We form the dynamic semantics of λ2,Gμ through the use of evaluation
contexts, which are defined below.

Evaluation context E ::=
    [] | fst(E) | snd(E) | ⟨E, e⟩ | ⟨v, E⟩ | E(e) | v(E) |
    E[τ] | let x = E in e end | case E of ms
Definition 2.3. A redex is defined as follows.
  • fst(⟨v1, v2⟩) is a redex that reduces to v1.
  • snd(⟨v1, v2⟩) is a redex that reduces to v2.
  • (λx : τ.e)(v) is a redex that reduces to e[x ↦ v].
  • (Λα.v)[τ] is a redex that reduces to v[α ↦ τ].
  • let x = v in e end is a redex that reduces to e[x ↦ v].
  • fix f : τ.v is a redex that reduces to v[f ↦ fix f : τ.v].
  • case v of ms is a redex if v ⇓ p ⇒ (θ; σ) is derivable for some clause
    p ⇒ e in ms, and the redex reduces to e[θ][σ]. Note that there may be a
    certain amount of nondeterminism in the reduction of case v of ms as v
    may match the patterns in several clauses in ms.

Given a redex e1, we write e1 → e2 if e1 reduces to e2. If e = E[e1] and e1
is a redex reducing to e2, then we say that E[e1] reduces to E[e2] in one
step. Let →∗ be the reflexive and transitive closure of →. We say that e1
reduces to e2 (in many steps) if e1 →∗ e2 holds.
Given a closed well-typed expression e in λ2,Gμ, we use |e| for the type
erasure of e, that is, the expression obtained from erasing all types in e.
We can then evaluate |e| in an untyped λ-calculus extended with pattern
matching. Clearly, e →∗ e′ holds if and only if |e| evaluates to |e′|. In
other words, λ2,Gμ supports type-erasure semantics.
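A sketch of the erasure function |·| on a small fragment of the language
(λ-abstractions, applications, type abstractions and type applications only;
the constructor names are ours) may help make the type-erasure claim
concrete.

type ty = TVar of string | TArrow of ty * ty

type exp =                                  (* typed expressions *)
  | Var  of string
  | Lam  of string * ty * exp               (* lam x:tau.e *)
  | App  of exp * exp
  | TLam of string * exp                    (* type abstraction *)
  | TApp of exp * ty                        (* type application e[tau] *)

type uexp =                                 (* untyped expressions *)
  | UVar of string
  | ULam of string * uexp
  | UApp of uexp * uexp

let rec erase = function
  | Var x -> UVar x
  | Lam (x, _, e) -> ULam (x, erase e)
  | App (e1, e2) -> UApp (erase e1, erase e2)
  | TLam (_, e) -> erase e                  (* type abstractions vanish *)
  | TApp (e, _) -> erase e                  (* so do type applications *)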
2.6 Type Soundness

Given an expression variable context Γ such that Γ(x) is a closed type for
each x ∈ dom(Γ), we write ⊢ σ : Γ if ∅; ∅ ⊢ σ(x) : Γ(x) is derivable for each
x ∈ dom(Γ). In general, we write (θ; σ) : (Δ; Γ) to mean that ⊢ θ : Δ is
derivable and ⊢ σ : Γ[θ] holds. The following lemma essentially verifies that
the rules for deriving judgments of the form Δ0 ⊢ p ↓ τ ⇒ (Δ; Γ) are properly
formed.

Lemma 2.4. Assume that Δ0 ⊢ p ↓ τ ⇒ (Δ; Γ) is derivable and ⊢ θ0 : Δ0 holds.
If v is a closed value of type τ[θ0], that is, ∅; ∅ ⊢ v : τ[θ0] is derivable,
and we have v ⇓ p ⇒ (θ; σ) for some θ and σ, then (θ; σ) : (Δ[θ0]; Γ[θ0])
holds.
Proof. By structural induction on a derivation of Δ0 ⊢ p ↓ τ ⇒ (Δ; Γ).

As usual, we need the following substitution lemma to establish the subject
reduction theorem for λ2,Gμ.

Lemma 2.5. Assume that Δ; Γ ⊢ e : τ is derivable. If (θ; σ) : (Δ; Γ) holds,
then ∅; ∅ ⊢ e[θ][σ] : τ[θ] is derivable.
Proof. By structural induction on a derivation of Δ; Γ ⊢ e : τ.

Theorem 2.6. (Subject Reduction) Assume that ∅; ∅ ⊢ e : τ is derivable. If
e → e′ holds, then ∅; ∅ ⊢ e′ : τ is also derivable.
Proof. Assume that e = E[e1] for some redex e1 that reduces to e2. The proof
follows from structural induction on E. In the case where E = [], the proof
proceeds by induction on the height of a derivation of ∅; ∅ ⊢ e1 : τ,
handling various cases through the use of Lemma 2.5. For handling the typing
rule (ty-case), Lemma 2.4 is needed.

However, we cannot prove that if e is a well-typed non-value expression then
e must reduce to another well-typed expression. In the case where e contains
a subexpression case v of ms that is not a redex (because v does not match
any pattern in ms), the evaluation of e becomes stuck. This is so far the
only reason for the evaluation of an expression to become stuck.
3. IMPLEMENTING OBJECTS

In this section, we briefly outline an approach to implementing objects
through the use of g.r. datatype constructors.

3.1 Classes
In Section 1, we have noticed a serious problem with the type OBJ, as it
allows no differentiation of objects. We address this problem by providing
the type constructor MSG with another parameter. Given a type τ and a class
C, (τ)MSG(C) is a type; the intuition is that a message of type (τ)MSG(C)
should only be sent to objects in the class C, to which we assign the type
OBJ(C) defined as follows:

    OBJ(C) = ∀α. (α)MSG(C) → α

First and foremost, we emphasize that a class is not a type; it is really a
tag used to differentiate messages. For instance, we may declare a class
IntPairClass and associate with it the
fun newPair x y =
  let
     val xref = ref x
     val yref = ref y
     fun dispatch MSGgetfst = !xref
       | dispatch MSGgetsnd = !yref
       | dispatch (MSGsetfst x') = (xref := x')
       | dispatch (MSGsetsnd y') = (yref := y')
       | dispatch _ = raise UnknownMessage
  in dispatch end
withtype {'a,'b}. 'a -> 'b -> OBJ(('a,'b)PairClass)

Figure 8: A constructor for pairs
following message constructors of the corresponding types:

    MSGgetfst : (int)MSG(IntPairClass)          MSGgetsnd : (int)MSG(IntPairClass)
    MSGsetfst : int → (1)MSG(IntPairClass)      MSGsetsnd : int → (1)MSG(IntPairClass)

The function newIntPair can now be given the type int → int →
OBJ(IntPairClass). Since anIntPair has the type OBJ(IntPairClass),
anIntPair(MSGfoo) becomes ill-typed if MSGfoo has a type (1)MSG(C) for some
class C that is not IntPairClass. Although classes can be treated as types
syntactically, we feel it better to treat them as type index expressions.
Following Dependent ML [20, 18], we use class as the sort for classes. In the
following presentation, we assume the availability of g.r. datatype
constructors in DML.
3.2 Parameterized Classes

There is an immediate need for class tags parameterized over types. Suppose
we are to generalize the monomorphic function newIntPair into a polymorphic
function newPair, which can take arguments x and y of any types and then
return an object representing the pair whose first and second components are
x and y, respectively. We need a class constructor PairClass that takes two
given types τ1 and τ2, and forms a class (τ1, τ2)PairClass. We may use some
syntax to declare such a class constructor and associate with it the
following polymorphic message constructors:

    MSGgetfst : ∀α1.∀α2. (α1)MSG((α1, α2)PairClass)
    MSGgetsnd : ∀α1.∀α2. (α2)MSG((α1, α2)PairClass)
    MSGsetfst : ∀α1.∀α2. α1 → (1)MSG((α1, α2)PairClass)
    MSGsetsnd : ∀α1.∀α2. α2 → (1)MSG((α1, α2)PairClass)

The function newPair for constructing pair objects is implemented in
Figure 9.
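Under the assumption that OCaml GADTs stand in for the class-indexed message
types, the parameterized pair objects of this section can be sketched as
follows: the component types of the pair appear as indices of the message
type, so each pair object only accepts messages at its own component types.
All names here are illustrative.

type ('r, 'a, 'b) pmsg =
  | Getfst : ('a, 'a, 'b) pmsg
  | Getsnd : ('b, 'a, 'b) pmsg
  | Setfst : 'a -> (unit, 'a, 'b) pmsg
  | Setsnd : 'b -> (unit, 'a, 'b) pmsg

type ('a, 'b) pair_obj = { dispatch : 'r. ('r, 'a, 'b) pmsg -> 'r }

let new_pair (type a) (type b) (x : a) (y : b) : (a, b) pair_obj =
  let xref = ref x and yref = ref y in
  let dispatch : type r. (r, a, b) pmsg -> r = function
    | Getfst -> !xref
    | Getsnd -> !yref
    | Setfst x' -> xref := x'
    | Setsnd y' -> yref := y'
  in
  { dispatch }

(* A pair of an int and a string; messages are now checked per object. *)
let () =
  let p = new_pair 1 "one" in
  p.dispatch (Setsnd "uno");
  assert (p.dispatch Getsnd = "uno")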
3.3 Subclasses

Inheritance is a major issue in object-oriented programming as it can
significantly facilitate code organization and reuse. We approach the issue
of inheritance by introducing a predicate ≤ on the sort class; given two
classes C1 and C2, C1 ≤ C2 means that C1 is a subclass of C2. The type of a
message constructor mc is now of the general form ∀ᾱ.∀a ≤ (ᾱ)C. (τ)MSG(a) or
∀ᾱ.∀a ≤ (ᾱ)C. τ1 → (τ2)MSG(a), where a ≤ C0 means that a is of the subset
sort {a : class | a ≤ C0}, i.e., the sort for all subclasses of the class C0;
for a sequence of types τ̄ with the same length as ᾱ, mc[τ̄] becomes a message
constructor that is polymorphic over all subclasses of the class C0 = (τ̄)C;
therefore, mc can be used to
fun newPair x y =
  let
     val xref = ref x and yref = ref y
     fun dispatch MSGgetfst = !xref
       | dispatch MSGgetsnd = !yref
       | dispatch (MSGsetfst x') = (xref := x')
       | dispatch (MSGsetsnd y') = (yref := y')
       | dispatch _ = raise UnknownMessage
  in dispatch end
withtype {'a,'b}. 'a -> 'b -> OBJ(('a,'b)PairClass)

fun newColoredPair c x y =
  let
     val cref = ref c
     and xref = ref x
     and yref = ref y
     fun dispatch MSGgetcolor = !cref
       | dispatch (MSGsetcolor c') = (cref := c')
       | dispatch MSGgetfst = !xref
       | dispatch MSGgetsnd = !yref
       | dispatch (MSGsetfst x') = (xref := x')
       | dispatch (MSGsetsnd y') = (yref := y')
       | dispatch _ = raise UnknownMessage
  in dispatch end
withtype {'a,'b}. color -> 'a -> 'b -> OBJ(('a,'b)ColoredPairClass)

Figure 9: Functions for constructing objects in the classes PairClass and ColoredPairClass
construct a message for any object tagged by a subclass of the class C0. For
instance, the message constructors associated with PairClass are now assigned
the types in Figure 10. Suppose we introduce another class constructor
ColoredPairClass, which takes two types to form a class. Also assume the
following, i.e., (τ1, τ2)ColoredPairClass is a subclass of (τ1, τ2)PairClass
for any types τ1 and τ2:

    ∀α1.∀α2. (α1, α2)ColoredPairClass ≤ (α1, α2)PairClass

We then associate with ColoredPairClass the message constructors MSGgetcolor
and MSGsetcolor, which are assigned the types in Figure 10. We can then
implement the function newColoredPair in Figure 9 for constructing colored
pairs. Clearly, the implementation of newColoredPair shares a lot of common
code with that of newPair. We will provide proper syntax later so that the
programmer can efficiently reuse the code in the implementation of newPair
when implementing newColoredPair.
3.4 Binary Methods

Our approach to typed object-oriented programming offers a particularly clean
solution to handling binary methods. For instance, we can declare a class
EqClass and associate with it two message constructors MSGeq and MSGneq,
which are given the following types:

    MSGeq  : ∀a ≤ EqClass. OBJ(a) → (bool)MSG(a)
    MSGneq : ∀a ≤ EqClass. OBJ(a) → (bool)MSG(a)

Suppose self is an object of type OBJ(C) for some C ≤ EqClass. If we pass a
message MSGeq(other) to self, other is required to have the type OBJ(C) in
order for self(MSGeq(other)) to be well-typed. Unfortunately, such a
requirement cannot be enforced by the type system of Java; as a consequence,
MSGgetfst : ∀α1.∀α2.∀a ≤ (α1, α2)PairClass. (α1)MSG(a)
MSGgetsnd : ∀α1.∀α2.∀a ≤ (α1, α2)PairClass. (α2)MSG(a)
MSGsetfst : ∀α1.∀α2.∀a ≤ (α1, α2)PairClass. α1 → (1)MSG(a)
MSGsetsnd : ∀α1.∀α2.∀a ≤ (α1, α2)PairClass. α2 → (1)MSG(a)
MSGgetcolor : ∀α1.∀α2.∀a ≤ (α1, α2)ColoredPairClass. (color)MSG(a)
MSGsetcolor : ∀α1.∀α2.∀a ≤ (α1, α2)ColoredPairClass. color → (1)MSG(a)

Figure 10: Some message constructors and their types
type downcasts are often needed for implementing and testing
equality on objects.
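One way to see why the same-class requirement is expressible is to index both
objects and messages by a class tag. The following OCaml sketch, with a
single illustrative class int_cell, enforces that the argument of Eq carries
the same index as the receiver, so comparing objects of different classes is
a type error; it is only an approximation of the paper's ∀a ≤ EqClass
quantification, and every name in it is ours.

type int_cell = IntCellTag                  (* a class tag, used only as an index *)

type ('r, 'c) msg =
  | Eq  : 'c obj -> (bool, 'c) msg          (* the binary method *)
  | Get : (int, int_cell) msg
and 'c obj = { dispatch : 'r. ('r, 'c) msg -> 'r }

let new_int_cell (n : int) : int_cell obj =
  let dispatch : type r. (r, int_cell) msg -> r = function
    | Get -> n
    | Eq other -> other.dispatch Get = n    (* other has the receiver's class *)
  in
  { dispatch }

(* (new_int_cell 1).dispatch (Eq (new_int_cell 1)) evaluates to true;
   passing an object of any other class to Eq is rejected at compile time. *)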
3.5 The Self Type

Our approach also offers a particularly clean solution to handling the notion
of self type, namely, the type of the receiver of a message. Suppose we want
to support a message MSGcopy that can be sent to any object to obtain a copy
of the object (it is up to the actual implementation as to how such a copy
can be constructed). We may assume MSGcopy is a message constructor
associated with some class ObjClass and C ≤ ObjClass holds for any class C.
We can assign MSGcopy the following type to indicate that the returned object
is in the same class as the object to which the message is sent.

    MSGcopy : ∀a ≤ ObjClass. (OBJ(a))MSG(a)

If this is done in Java, all we can state in the type system of Java is that
an object is to return another object after receiving the message MSGcopy.
This is imprecise and is a rich source for the use of type downcasting.
3.6 Inheritance

Inheritance is done in a Smalltalk-like manner, but there is some significant
difference. We now use a concrete example to illustrate how inheritance can
be implemented. This is also a proper place for us to introduce some syntax
that is designed to facilitate object-oriented programming. We use the
following syntax to declare a class ObjClass and a message constructor
MSGcopy of the type

    MSGcopy : ∀a ≤ ObjClass. (OBJ(a))MSG(a)

Note that selfType is merely syntactic sugar here.

class ObjClass {
    MSGcopy: selfType => self;
}

In addition, the syntax also automatically induces the definition of a
function superObj, which is written as follows in ML-like syntax.

fun superObj (self) = (* self is just an ordinary variable *)
  let
     fun dispatch MSGcopy = self
       | dispatch _ = raise UnknownMessage
  in dispatch end
withtype {a <: ObjClass} OBJ(a) -> OBJ(a)
The function superObj we present here is solely for explaining how
inheritance can be implemented; such a function is not to occur in a source
program. The type of the function, ∀a ≤ ObjClass. OBJ(a) → OBJ(a), indicates
that this is a function that takes an object tagged by a subclass C of
ObjClass and returns an object tagged by the same class.
In general, for each class C, a "super" function of type
∀a ≤ C. OBJ(a) → OBJ(a) is associated with C. It should
soon be clear that such a function holds the key to implementing
inheritance. Now we use the following syntax to
declare classes Int1Class and ColoredInt1Class as well as
some message constructors associated with them.
class Int1Class inherits ObjClass {
    MSGget_x: int;
    MSGset_x (int): unit;
    MSGdouble: unit => self(MSGset_x (2 * self(MSGget_x)));
}

class ColoredInt1Class inherits Int1Class {
    (* color is just some already defined type *)
    MSGget_c: color;
    MSGset_c (color): unit;
}

The "super" functions associated with the classes Int1Class and
ColoredInt1Class are automatically induced as follows.

fun superInt1 (self) =
  let
     fun dispatch MSGdouble = self(MSGset_x (2 * self(MSGget_x)))
       | dispatch msg = superObj (self) (msg)
  in dispatch end
withtype {a <: Int1Class} OBJ(a) -> OBJ(a)

fun superColoredInt1 (self) =
  let
     fun dispatch msg = superInt1 (self) (msg)
  in dispatch end
withtype {a <: ColoredInt1Class} OBJ(a) -> OBJ(a)
The functions for constructing objects in the classes Int1Class
and ColoredInt1Class are implemented in Figure 11. There
is something really interesting here. Suppose we use newInt1
and newColoredInt1 to construct objects o1 and o2 that are
tagged with Int1Class and ColoredInt1Class , respectively.
If we send the message MSGcopy to o1 , then a copy of o1
(not o1 itself) is returned. If we send MSGdouble to o2 ,
then the integer value of o2 is doubled as it inherits the
corresponding method from the class Int1Class . What is
remarkable is that the object o2 itself is returned if we send
the message MSGcopy to o2. The reason is that no copying method is defined
for o2; the search for a copying method eventually finds the one defined in
the class ObjClass (as there is no such method defined in either the class
ColoredInt1Class or the class Int1Class). This is a desirable
consequence: if o2 were treated as an object in the
fun newInt1 (x0: int) =
  let
     val x = ref x0
     fun dispatch MSGget_x = !x
       | dispatch (MSGset_x x') = (x := x')
       | dispatch MSGcopy = newInt1 (!x)
       | dispatch msg = superInt1 (dispatch) (msg)
  in dispatch end
withtype int -> OBJ(Int1Class)

fun newColoredInt1 (c0: color, x0: int) =
  let
     val c = ref c0 and x = ref x0
     fun dispatch MSGget_c = !c
       | dispatch (MSGset_c c') = (c := c')
       | dispatch MSGget_x = !x
       | dispatch (MSGset_x x') = (x := x')
       | dispatch msg = superColoredInt1 (dispatch) (msg)
  in dispatch end
withtype color * int -> OBJ(ColoredInt1Class)

Figure 11: Functions for constructing objects in Int1Class and ColoredInt1Class
class Int1Class (through either F-bounded polymorphism or
match-bounded polymorphism), the returned object would
be in the class Int1Class , not in the class ColoredInt1Class ,
as it would be generated by newInt1(o2(MSGget_x)), making the type system
unsound. We are currently not aware of any other approach to correctly typing
this simple example. Note that the function newInt1 becomes ill-typed if we
employ the notion of MyType here.
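The run-time structure of this inheritance scheme, a child object handling
its own messages and delegating everything else upwards, can be sketched in
OCaml as follows. The sketch flattens all messages into one GADT and uses an
explicit parent object instead of the induced "super" functions, so it
illustrates only the dispatch behaviour, not the typed treatment of MSGcopy;
all names are ours.

type color = string                          (* any color type will do *)

type _ msg =
  | Get_x  : int msg
  | Set_x  : int -> unit msg
  | Double : unit msg
  | Get_c  : color msg
  | Set_c  : color -> unit msg

exception Unknown_message

type obj = { dispatch : 'a. 'a msg -> 'a }

let new_int1 x0 : obj =
  let x = ref x0 in
  let dispatch : type a. a msg -> a = function
    | Get_x -> !x
    | Set_x x' -> x := x'
    | Double -> x := 2 * !x                  (* the default method of Int1Class *)
    | _ -> raise Unknown_message
  in
  { dispatch }

let new_colored_int1 c0 x0 : obj =
  let parent = new_int1 x0 in                (* the inherited part *)
  let c = ref c0 in
  let dispatch : type a. a msg -> a = function
    | Get_c -> !c
    | Set_c c' -> c := c'
    | m -> parent.dispatch m                 (* delegate everything else *)
  in
  { dispatch }

(* Doubling a colored integer goes through the inherited method. *)
let () =
  let o2 = new_colored_int1 "red" 21 in
  o2.dispatch Double;
  assert (o2.dispatch Get_x = 42)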
3.7 Subtyping

There is not an explicit subtyping relation in our approach. Instead, we can
use existentially quantified dependent types to simulate subtyping. For
instance, given a class tag C, the type OBJECT(C) = ∃a ≤ C. OBJ(a) is the sum
of all types OBJ(a) satisfying a ≤ C. Hence, for each C1 ≤ C, OBJ(C1) can be
regarded as a subtype of OBJECT(C) as each value of the type OBJ(C1) can be
coerced into a value of the type OBJECT(C). As an example, the type

    OBJ((OBJECT(EqClass), OBJECT(EqClass))PairClass)

is for pair objects both of whose components support the equality test.
4. RELATED WORK AND CONCLUSION
Our work is related to both intensional polymorphism and type classes.
There has already been a rich body of studies in the literature on passing
types at run-time in a type-safe manner [11, 10, 17]. Many such studies
follow the framework in [13], which essentially provides a construct typecase
at term level to perform type analysis and a primitive recursor Typerec over
type names at type level to define new type constructors. The language λML_i
in [13] is subsequently extended to λR in [11] to support type-erasure
semantics. The type constructor R in λR can be seen as a special g.r.
datatype constructor.
The system of type classes in Haskell provides a programming
methodology that is of great use in practice. A
common approach to implementing type classes is through
dictionary-passing, where a dictionary is essentially a record
of the member functions for a particular instance of a type
class [1]. We encountered the notion of g.r. datatype constructors
when seeking an alternative implementation of type
classes through intensional polymorphism. An approach to implementing type
classes through the use of g.r. datatype constructors can be found in [19].
The dependent datatypes in DML [20, 18] also shed some light on g.r. datatype
constructors. For instance, we can have the following dependent datatype
declaration in DML.

datatype 'a list with nat =
    nil(0)
  | {n:nat} cons(n+1) of 'a * 'a list(n)

The syntax introduces a type constructor list that takes a type and a type
index of sort nat to form a list type. The constructors nil and cons are
assigned the following types.

    nil  : ∀α. (α)list(0)
    cons : ∀α. {n:nat} α * (α)list(n) → (α)list(n+1)

Given a type τ and natural number n, the type (τ)list(n) is for lists with
length n in which each element has the type τ. Formally, the type constructor
list can be defined as follows:

    list = μt. λα. λn:nat. ({n ≐ 0}. 1) + ({n':nat | n ≐ n'+1}. α * t(α)(n'))

Clearly, this is also a form of guarded datatype constructor, where the
guards are constraints on type index expressions (rather than on types).
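For comparison, here is a minimal OCaml rendering of such length-indexed
lists, with Peano naturals standing in for DML's index sort nat; the names
are illustrative.

type z = Z
type 'n s = S of 'n

type ('a, _) vec =
  | Vnil  : ('a, z) vec
  | Vcons : 'a * ('a, 'n) vec -> ('a, 'n s) vec

(* The length is visible in the type:
   Vcons (1, Vcons (2, Vnil)) : (int, z s s) vec. *)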
Our notion of objects in this paper is largely taken from
Smalltalk [12], for which a particularly clean and intuitive
articulation can be found in [14]. The literature on types
in object-oriented programming is simply too vast for us to
give an even modestly comprehensive overview of the related
work. Please see [5] for references. Instead, we focus on
some closely related work that either directly influences or
motivates our current work.
Bounded polymorphism [8, 6] essentially imposes subtyping
restrictions on quantified type variables. For instance,
suppose we want to implement a class for ordered sequences.
In order to insert an element into a sequence, we must compare
it with other elements in the sequence. Therefore, we
should only insert elements of a class that provides the appropriate methods
for comparison. This can be achieved
through bounded polymorphism.
F-bounded polymorphism [7], which generalizes the simple
bounded polymorphism, was introduced to handle some
complex issues such as typing binary methods in object-oriented
programming. It has since been adopted in the
design of GJ [2], helping to significantly increase the expressiveness
of the type system of Java. However, F-bounded
polymorphism does not seem to interact well with the subclass relation (e.g.,
see the example on page 59 of [5]).
Matching-bounded polymorphism is similar to bounded polymorphism. The main
difference is that matching constraints are imposed on quantified type
variables instead of subtyping constraints. The notion of MyType [4]
essentially refers to the type of the receiving object of a message. With
match-bounded polymorphism, the notion of MyType makes it possible to
dispense with most uses of F-bounded polymorphism. The language in [4] is
really a state-of-the-art object-oriented programming language (where static
typing is concerned). This work is carried further in [3], where imperative
features are introduced. The type system that we are to design shares many
common features with the work in [4], though we employ a completely different
type-theoretical approach. In particular, we intend to not only simplify the
notion of MyType but also make it more effective in capturing program
invariants.
Currently, we are particularly interested in implementing
a CLOS-like object system on top of DML extended with g.r. datatype
constructors, facilitating object-oriented programming
styles in a typed functional programming setting.
5. REFERENCES

Implementing Haskell overloading.
Making the future safe for the past: adding genericity to the Java programming language.
A paradigmatic object-oriented programming language: design, static typing and semantics.
Foundations of Object-Oriented Languages.
A modest model of records, inheritance and bounded quantification.
On understanding types, data abstraction, and polymorphism.
A formulation of the simple theory of types.
Flexible Type Analysis.
Intensional polymorphism in type-erasure semantics.
Compiling polymorphism using intensional type analysis.
The Definition of Standard ML (Revised).
Computation and Deduction.
Fully Reflexive Intensional Type Analysis.
Dependent Types in Practical Programming.
Guarded Recursive Datatype Constructors.
Dependent types in practical programming.
Smalltalk-80: the language and its implementation.
Smalltalk, objects, and design.
F-bounded polymorphism for object-oriented programming.
568265 | Primitives for authentication in process algebras. | We extend the &pgr;-calculus and the spi-calculus with two primitives that guarantee authentication. They enable us to abstract from various implementations/specifications of authentication, and to obtain idealized protocols which are "secure by construction". The main underlying idea, originally proposed in Focardi (Proc. Sixth Italian Conf. on Theoretical Computer Science, November 1998) for entity authentication, is to use the locations of processes in order to check who is sending a message (authentication of a party) and who originated a message (message authentication). The theory of local names, developed in Bodei et al. (Theoret. Comput. Sci. 253(2) (2001) 155) for the &pgr;-calculus, gives us almost for free both the partner authentication and the message authentication primitives. | Introduction
Authentication is one of the main issues in security and it can have different purposes depending on the
specific application considered. For example, entity authentication is related to the verification of an entity's
claimed identity [18], while message authentication should make it possible for the receiver of a message
to ascertain its origin [30]. In recent years there have been some formalizations of these different aspects of
authentication (see, e.g., [3, 7, 13, 16, 17, 22, 29]). These formalizations are crucial for proofs of authentication
properties, that sometimes have been automatized (see, e.g. [12, 15, 20, 21, 25]). A typical approach
presented in the literature is the following. First, a protocol is specified in a certain formal model. Then
An earlier version of Sections 7-9 appeared in [8]. This work has been partially supported by MURST Progetto TOSCA and
Progetto "Certificazione automatica di programmi mediante interpretazione astratta".
the protocol is shown to enjoy the desired properties, regardless of its operating environment, that can be
unreliable, and can even harbour a hostile intruder.
We use here basic calculi for modelling concurrent and mobile agents and we give then certain kinds
of semantics, offering built-in mechanisms that guarantee authentication. This is the main contribution
of our paper. Our mechanisms enable us to abstract from the various implementations/specifications of
authentication, and to obtain idealized protocols which are "secure by construction". Our protocols, or
rather their specifications can then be seen as a reference for proving the correctness of "real protocols".
The essence of concurrent and mobile computation can be studied in a pure form using the -calculus
[24], a foundational calculus based on the notion of naming. Systems are specified as expressions called
processes. These are obtained by combining, via a few operators (parallel composition, nondeterministic
choice, declarations), the basic actions of sending and of receiving names between processes along channels.
Names represent values, or messages, and also channels. Since processes exchange names in communica-
tions, the interconnection structure of a network can vary dynamically. Recently, Abadi and Gordon [3]
defined the spi-calculus by enriching the -calculus with primitives for encryption and decryption. The
resulting calculus is particularly suited to security issues, among which authentication.
In [6] the -calculus has been equipped with a structural operational semantics which endows each
sequential process P in the whole system with its own local environment, i.e., P has its local space of
names and its local name manager that generates a fresh name, whenever necessary. The basic ingredient of
this proposal is the notion of relative address of a process P with respect to another process Q: it represents
the path between P and Q in (an abstract view of) the network (as defined by the syntax of the calculus).
Note that relative addresses are not available to the users of the -calculus: they are used by the abstract
machine of the calculus only, defined by its semantics.
We propose here to use the ideas underlying this proposal to study authentication, both in the -calculus
and in the spi-calculus. As a matter of fact, this kind of semantics provides us with two built-in authentication
primitives. The key point is that P can use its address relative to Q to uniquely reach (the subterm of the
whole system representing) Q itself. Consequently, relative addresses may be used both to authenticate
the partners of a communication and to authenticate the origin of a message. For the sake of presentation,
we will first introduce our primitive for partner authentication in the -calculus, and the one of message
authentication in the spi-calculus. We can easily combine them, e.g. by introducing the first mechanism in
the spi-calculus so that both kinds of authentication can be enforced.
Partner authentication A variation of the semantics defined in [6] gives us a run-time mechanism that
guarantees each principal to engage an entire run session with the same partners, playing the same roles.
Essentially, we bind sensitive input and output communications to a relative address, i.e. a process P can
accept communications on a certain channel, say c, only if the relative address of its partner is equal to an
a-priori fixed address loc.
In order to achieve this, we index channels with relative addresses, so obtaining input actions on the
form c loc (x) or output actions of the form d loc 0 hMi. While sending a message, our semantics will check if
the address of the sender with respect to the receiver is indeed loc. In particular, assume that P is explicitly
waiting a message from a process reachable following loc, i.e. P performs the input action c loc (x). Then,
no possibly hostile process E having a relative address with respect to P different from loc, can successfully
communicate with P on c. Moreover, E cannot sniff any message M sent by P through the output action
d loc 0 hMi, if the address of E relative to P is different from loc 0 . These "located" I/O primitives enable
processes to have some control on their partners. As an example we can define, in the -calculus syntax, a
protocol that guarantees partner authentication by construction by putting in parallel the following processes:
where loc P represents the address of P relative to Q, chMi stands for sending M along c to every possible
process (the channel c is not indexed by any relative address, formally by the empty one), and c loc P (x) is an
input action located at loc P . The input can only match an output chMi executed by the process reachable
from Q through the relative address loc P . The resulting communication has the effect of binding x to M
within the residual of Q, yielding Q replaces x).
If we consider P , Q and also an intruder E in parallel, (P j Q) j E, the effect is to guarantee to Q
that the communication over c can only be performed with P . Thus, Q is assured that message M has been
indeed received by P . Note that there is no need for c to be a channel private to P and Q.
Although in this paper we focus on authentication primitives, it is interesting to note that located outputs
also guarantee a form of secrecy. As an example consider the following protocol where now P uses a located
output:
where loc Q is the address of Q relative to P . Consider again (P j Q) j E. Now, P is also guaranteed that
the communication over c will be only performed with Q, i.e., E cannot intercept M which will thus remain
secret. Again, the channel c needs not to be private to P and Q. So, we separately model authentication
and secrecy over public channels: our mechanism is thus more concrete than the use of a private channel for
communication.
In every protocol, legitimate processes may play a few different roles, such as sender, server, receiver,
etc. Usually, processes recognize the roles their partners are playing, but seldom they know which are
the partners' relative addresses. So, we shall also index a channel with a variable , to be instantiated
by a relative address, only. Whenever a process P , playing for instance the role of sender or initiator,
has to communicate for the first time with another process S in the role, e.g. of server, it uses a channel
c . Our semantics rules will take care of instantiating with the address of P relative to S during the
communication. Roughly, this implements a sort of anonymous communication. Note that the process S
could also be a hostile process pretending to be the server. However, from that point on, P and S will keep
communicating in the same roles for the entire session, using their relative addresses.
We have sketched how our mechanism guarantees that each principal communicates with the same
partners, in the same role, for the entire session. Through it, we also circumvent some problems arising from
mixing up sessions, in particular, those due to replay attacks. Usually, protocols use challenge-response
mechanisms, based e.g. on nonces (typically numbers used once), whose freshness protects from replay
attacks. In our framework, freshness is no longer needed to distinguish the communications of one session
from the communications of another.
Message authentication The semantics in [6] and its extension to the spi-calculus studied in Section 7 (see
also [5]) directly allow to define another authentication mechanism, providing us with a built-in primitive
that enables the receiver of a message to ascertain its origin, i.e. the process that created it. In fact, the
address of a message M relative to a process P , uniquely identifies the originator of M , even after the
message has been maliciously intercepted and forwarded. Indeed, we guarantee the integrity of the message
its receiver gets it as the originator of M made it. If M is a compound message, the receiver can
additionally ascertain the originators of the components of M . We write the primitive for authentication as
is a message and P is a process. 1 Intuitively, the execution of the process Q starts
only when the check [M @
succeeds, and this happens if and only if the relative addresses of M and of
P with respect to Q coincide: in other words, Q is the generator of M . Note again that this check is done
by the interpreter of the calculus, i.e. the semantic rules, not by the user.
A communication protocol that guarantees message authentication by construction is now easy to define,
by using the plain handshaking communication of the -calculus and the spi-calculus, and the primitive
sketched above. Suppose that a process P sends a message M (for simplicity we consider below a name) to
along a public channel c. In the spi-calculus syntax they have the following form:
where the operator (M) declares M to be a fresh name, different from all the others in the whole system.
If we put P , Q and also an intruder E in parallel, (P j Q) j E, the effect is to guarantee that the residual
of Q is indeed Q 0 [M=x]. Note that here Q is not guaranteed to receive the message directly from P , as
for partner authentication. As a matter of fact, the intruder might as well have intercepted the message M
originated by P and forwarded it to Q. This is legal as we are only checking that M has been originated
by P . As we will see, E cannot modify any of the parts of M without changing the relative address of M
itself, because relative addresses are manipulated by the semantics only. Also in this case, there is no need
for c to be a channel private to P and Q.
Our solutions assume that the implementation of the communication primitives has a reliable mechanism to
control and manage relative addresses. In some real cases this is possible, e.g., if the network management
system filters every access of a user to the network as it happens in a LAN or in a virtual private network.
This may not be the case in many other situations. However, relative addresses can be built by storing the
actual address of processes in selected, secure parts of message headers (cf. IPsec [31]). Yet, our solutions
may help checking the correctness of different implementations, e.g. those based on cryptography, as briefly
discussed in the conclusion.
Contents of this paper In Section 2 we survey the -calculus; in Section 3, we recall relative addresses
and in Section 4 the proved version of the -calculus from [6]. Section 5 is devoted to partner authentication.
In Section 6 we survey the spi-calculus and in Section 7 we enrich it with the relative address mechanism.
Sections 8 and 9 are about the message authentication primitive.
2 The -calculus
In this section we briefly recall the monadic -calculus [24], a model of concurrent communicating processes
based on the notion of naming. Our presentation slightly differs from the usual ones and it will make it
easier to introduce later on the spi-calculus The main difference from standard presentation relies in the
introduction of the new syntactic category of terms, where names and variables are distinguished.
Definition 2.1 (syntax) Terms (denoted by (denoted by
are built according to the syntax
1 Actually, we have [M @
is a message from P ; see the formal development for details.
prefix
(m)P restriction
matching
replication
where may either be M(x) for input or MhNi for output.
Hereafter, the trailing 0 will be omitted. We often write ~ to denote tuples of objects, for instance ~
m for the
vector actually we feel free to consider and to operate on ~
m as if it were a set. Notations are
extended componentwise, e.g. (~n) stands for (n 1 finally,
means that there are no restricted names.
Intuitively, 0 represents the null process which can do nothing. The prefix is the first atomic action that
the process :P can perform. After the execution of the process :P behaves like P . The input prefix
M(x) binds the name x in the prefixed process as follows: when a name N is received along the link named
M , all the (free) occurrences of x in the prefixed process P are substituted with M . The output prefix
MhNi does not bind the name N which is sent along M . Summation denotes nondeterministic choice. The
process . The operator j describes parallel composition of processes.
The components of P 1 jP 2 may act independently; also, an output action of P 1 (resp. P 2 ) at any output port
may synchronize with an input action of P . The value sent by P 1 replaces the relevant
occurrences of the placeholder x in P 2 . The operator (m) acts as a static declaration (i.e. a binder for) the
name m in the process P that it prefixes. In other words, m is a unique name in P which is different from
all the external names. The agent (m)P behaves as P except that actions at ports m and m are prohibited.
However communications along link m of components within P are allowed. Matching is an
if-then operator: process P is activated only if . Finally, the process !P behaves as infinitely
many copies of P running in parallel.
We write fn(M) and fn(P ) for the sets of names free in term M and process P , respectively, and fv(M)
and fv(P ) for the sets of variables free in term M and process P , respectively. A closed term or process is a
term or process without free variables.
2.1 Semantics.
The semantics for the -calculus we consider here is a late semantics, based on a reduction relation and on a
commitment relation. Some structural congruence rules are also needed. The commitment relation depends
on the abstraction and concretion constructs:
An abstraction has the form (x)P , where (x) binds x in P .
A concretion has the form ( ~
m)hMiP , where M is a term, P is a process and the names in ~
are
bound by ( ~
m) in M and P .
An agent A or B is an abstraction, a concretion or a process.
If F is the abstraction (x)P and C the concretion ( ~
m)hMiQ and f ~
then the interactions
F@C and C@F are:
m)(P [M=x] j Q)
Congruence. The structural congruence on processes is defined in the standard way, except for the
treatment of parallel composition that is assumed to be neither commutative nor associative. It is then
defined to be the least congruence satisfying:
if P and Q are -equivalent then P Q;
is a commutative monoid;
m)hMi(R j Q); and ( ~
m)hMiQ
In the following, we will never distinguish congruent terms.
Reduction relation. The reduction relation > is the least relation on closed processes that is transitive
and closed under all contexts, and that satisfies the following axioms:
Red Repl !P > P j !P
Red Match
Commitment relation. An action is a name m (representing input) or a co-name m (representing output)
or a distinguished silent action . Note that actions record only the channel on which the input or the output
occurs. The commitment relation is written P ! A, where P is a closed process, is an action, and A is
a closed agent. It is defined by the rules in Tab. 1.
3 Relative Addresses and their Composition
We recall here the ideas of [6] that serve as a basis for the authentication mechanisms we are going to
introduce. Consider for a while the binary parallel composition as the main operator of the -calculus
(neither associative nor commutative). Then, build abstract syntax trees of processes as binary trees, called
trees of (sequential) processes, as follows. Given a process P , the nodes of its tree correspond to the
occurrences of the parallel operator in P , and its leaves are the sequential components of P (roughly, those
processes whose top-level operator is a prefix or a summation or a replication). A tree of processes is
depicted in Fig. 1.
Comm Out
Comm In
Comm Sum 1
Comm Sum 2
Comm Par 1
Comm Par 2
Comm Inter 1
Comm Inter 2
Comm Res
Comm Red
Comm Struct
Table
1: The commitment relation.
Figure
1: The tree of (sequential) processes of (P
Assume now that the left (resp. right) branches of a tree of sequential processes denote the left (resp.
right) component of parallel compositions, and label their arcs with tag jj 0 (resp. jj 1 ). Therefore, any
sequential component in a process is uniquely identified by a string # over fjj . The string corresponds
to a path from the root, the top-level j of the whole process, to a leaf. Intuitively, # is the address of the
sequential component relative to the root of the binary tree.
Consider now two different sequential processes, say G and R, in a tree and call the path between them
the address of the process G relative to the process R. This relative address can be decomposed into two
parts according to the minimal common predecessor P of G and R in the tree. The relative address is then a
string written # 0 , made of jj 0 's and jj 1 's, where # represents the path from P to R, 2 and # 0 the path from
P to G. Let G and R respectively be the processes P 3 and P 1 of Fig. 1. The address of P 3 relative to P 1
is then jj 0 jj 1 jj 1 jj 1 jj 0 (read the path upwards from P 1 to the root and reverse, then downwards to P 3 ). So to
speak, the relative address points back from R to G.
Relative addresses can be composed, in order to obtain new relative addresses. For instance, the composition
of the relative address jj 1 jj 0 jj Fig. 1) with the relative address of P 3 w.r.t.
is the relative address jj 0 jj 1 jj 0 of P 3 w.r.t. P 2 .
Below we recall the formal definition of relative addresses and we define their composition. More
intuition, the full definitions and the statements of some of their properties are in [6].
Definition 3.1 (relative addresses)
be the empty string, and let be the sum modulo 2. Then, the set of relative
addresses, ranged over by l, is
We will sometimes omit in relative addresses, e.g. we write n for n.
A relative address l compatible with l, written l
As we said before, we use relative addresses to encode paths between pairs of nodes of binary trees of
processes, like the one in Fig. 1. Note that the condition jj 0 # 0
explicit that
the two components of the relative address describe the two distinct paths going out from the same node in
a binary tree. Also, l both refer to the same path, exchanging its source and target.
2 For technical reasons we take the path from P to R instead of the more natural path from R to P .
Address composition is a partial operation, defined only when relative addresses can indeed be com-
posed. We make sure that this is always the case when we apply it. Fig. 2 depicts all the cases in which this
happens.
Definition 3.2 (address composition) Address composition is defined by the following
three exhaustive cases:
1.
2.
3.
It is immediate to see that ? has a neutral element (i.e. l ? an inverse for each element (i.e.
the inverse of l is l 1 ) and that ? is associative (i.e. (l ? l
G R S
(1) (2) (3)
Figure
2: The three possible relative placements of three processes G, S, and R. The result of the composition
of the relative addresses of S w.r.t. R and of G w.r.t. S is represented by the solid arrows.
4 Proved semantics
Relative addresses can be inductively built while deducing transitions, when a proved semantics is used
[9]. In this section, we recall from [10] the proved transition system for the -calculus, in which labels of
transitions encode (a portion of) their deduction tree. The arrows of the proved transition system are labelled
by @#, where is an action and # is a string of jj used to single out the sub-process that actually
performed . The rules for proved commitment are in Tab. 2. They are essentially those of the standard
transition system except for those involving the parallel operator. Rule Comm Par 1 (respectively, Comm
Par 2) adds in the label of its conclusion the tag jj 0 (respectively, jj 1 ) to register that the left (respectively,
right) component of a parallel composition is moving. The rules defining the congruence and the reduction
relation are indeed the same as before. To recover the standard semantics of the -calculus, we only need to
erase any occurrence of # from the labels of transitions.
Note that the information added to labels can be used to inductively build relative addresses. Indeed, the
tags jj i are sufficient to recover the parallel structure of a process P because they provide an encoding of the
tree of processes of P . For instance, suppose that process :P performs the transition @jj 0 . Then, we know
that the -action was performed by a sequential process on the form :P in parallel with some process Q.
Indeed, the whole system had the form :P jQ. More generally, if a process R performs a transition @#,
the path # in the tree of processes permits to reach the sub-process that performs the action. Technically,
we indicate the sub-process R as P@#, which is inductively selected through the following operator.
Comm Out
Comm In
Comm Sum 1
Comm Sum 2
Comm Par 1
Comm Par 2
Comm Inter 1
Comm Inter 2
Comm Res
Comm Red
Comm Struct
Table
2: The proved commitment relation.
Definition 4.1 The localization operator @# is defined on processes by induction as follows:
1.
2. ((m)P
3.
4.
This definition will be helpful at the end of Sections 5 and 8.
Back to Fig. 1, if P 3 communicates with P 1 , the whole process
a computation step. The sub-process P 3 performing the output is Sys@jj 1 jj 1 jj 0 and the sub-process P 1
performing the input is Sys@jj 0 jj 1 . By putting together the single paths, we obtain the relative address
5 Partner authentication
We now introduce our first authentication mechanism. At run-time, it will guarantee each principal to engage
an entire run session with the same partners playing the same roles. We heavily exploit the proved semantics
reported in the previous section. We essentially bind sensitive input and output communications to relative
addresses. More precisely, channels may have a relative address as index, and assume the form c l . Now, our
semantics will ensure that P communicates with Q on c l if and only if the relative address of P w.r.t. Q is
indeed l (and that of Q w.r.t. P is l 1 ). Notably, even if another process R 6= Q possesses the channel c l ,
R cannot use it to communicate with P . Consequently, a hostile process can never interfere with P and Q
while they communicate, not even eavesdrop the exchanged messages.
By using these "located" channels, processes may have some control on their partners. But often a
process P is willing to communicate with several processes, usually through one or very few channels. So,
we shall also index a channel with a variable to be instantiated. Suppose that process P , playing for
instance the role of sender or of initiator, wants to communicate with a process S (e.g. the server), that P
does not know the relative address of S, say l, and finally that the involved channel is c . Then, during the
first communication of P with S, will be instantiated within P by l (recall that our proved operational
semantics indeed computes l). Symmetrically for S, if it uses the same channel c (with a different variable
uses c 0 ). In a sense, this is the case of anonymous communication. Note however that S may
as well use an already located channel c l 0 : the communication occurs only if l From that point on,
P and S will keep communicating in the same roles for the entire session, using their, now known, relative
addresses.
Thus, we extend the names that occur in processes by indexing them with a location, defined to be either
a relative address or a variable to be instantiated by a relative address. Formally,
Definition 5.1 Let 3 ; be a countable set of (address) variables, and let t.
is the set of located channels.
Usually, the empty location
The rules defining the congruence and the reduction relation are the same as before, apart from the obvious
substitution of located names for names. The rules for the new commitment relation are in Tab. 3, where
we omit the symmetric rules for communication. The rules for parallel composition are in the proved style,
Comm Out
Comm In
Comm Sum 1
Comm Par 1
Comm Inter L 1
l 0
if
(l
Comm Inter
where l
Comm Inter 1
Comm Res
Comm Red
Comm Struct
Table
3: The proved located commitment relation.
recording which component (left or right) of the process is moving. There, some of the arrows are annotated
also with a location. For each I/O action rule, the location t of the channel involved is recorded under the
arrow, and it is preserved by all non communication rules and discarded by communications. This location
t is used in the premises of communication rules, to establish the relative addresses of one process with
respect to the other. In fact, if the first process (the receiver) performs an input m@# and the second process
(the sender) performs the complementary output action m@# 0 , then the relative address of the sender with
respect to the receiver is jj 0 #jj 1 # 0 . (The additional jj 0 and jj 1 record that the two processes are the left and
the right partners in the communication.)
There are three different rules, up to symmetries, for every communication between two processes, say
P and Q. We comment on them, and we intuitively relate them with the three possible situations in which
P and Q may be.
Comm Inter L 1 P wants to receive from a process located at l, and Q wants to send to a process located
at l 0 . For the communication to happen the relative addresses of Q w.r.t. P and of P w.r.t. Q (the path
established by the communication) should coincide with these locations that should be compatible,
i.e. l as the side conditions require. This situation reflects the fact that P and Q "know each
other", possibly because they have previously established their connection and are session partners.
Note that when l and l 0 are , we recover the "non-located" communication of the standard -
calculus.
Comm Inter wants to receive from a process located at l, while Q is willing to send a message to
any process (it sends on a channel with location variable 0 ). The communication is successful only if
l coincides with the path established by the communication, i.e. if l coincides with the relative address
of indeed the process from which P wants to receive. After the communication, 0
will be suitably bound to l 1 (the relative address of P w.r.t. Q) within Q, so that now P and Q "know
each other".
If l is the empty location, then the communication is always successful and 0 will still be replaced by
the relative address of P w.r.t. Q.
Comm Inter 1 P and Q do not "know each other" and are willing to communicate with any partner. So,
they exchange their relative addresses, as established while deducing the premises. Intuitively, P and
are performing their initial synchronization. So their two variables are replaced with the relative
address of Q w.r.t. P and vice-versa (l and l 0 , respectively).
Variables are not located. Consequently, when a located channel c_l is communicated, it becomes a "free"
channel: the location l is lost. The index of c becomes ε if it was ε; otherwise we get c_λ (with λ not
occurring in the process). Formally, we have the following.
Definition 5.2 Let F = (y)P be an abstraction and C be a concretion emitting a located channel c_l. Then, their interaction
F@C replaces y with c if l = ε, and with c_λ otherwise (with λ not occurring in Q).
Symmetrically for C@F.
Using the above definition for communication makes it easier to use a channel in a multiplexing way. Suppose
that a process P is committed to communicate on a channel c_l with a process Q. Also, assume that
c_l is sent to a third party R, which receives it as c_λ. The "same" channel c can now be used for further
communications between P and Q (in this case located at l), as well as for communications between R and
some other process (after λ has been suitably instantiated).
In the rules for commitment we use a particular kind of substitution, {|l/λ|}@ϑ, called selective routed
substitution. It applies a particular substitution {|l/λ|}, called routed, only to the sub-process
located at ϑ. The routed substitution takes into account the parallel structure of the process: it
updates the relative address l while traversing the tree of processes. For instance, substituting ϑ' for λ
in P_0 | P_1 requires substituting ||i ⋆ ϑ' for λ in each component P_i (a small sketch follows Def. 5.4 below).
Definition 5.3 The location routed substitution fjl=jg is defined by induction as follows:
1.
l otherwise
2. (m hMi:P
3. (m (x):P
4.
5.
7.
There is no case for !P in the above definition, i.e. the routed substitution may be applied to P only after
!P has been reduced to P | !P. This amounts to saying that we also consider processes that
contain routed substitutions not yet fully performed. Consequently, the target of the transitions in Tab. 3
may contain expressions of the form !P{|l/λ|}. In order not to burden our notation too heavily, we shall still
use P and A for processes, abstractions and concretions with substitutions not yet performed.
We use the above definition to implement a selective substitution that works on the sub-process of a
whole term reachable through ϑ.
Definition 5.4 The selective routed substitution P{|l/λ|}@ϑ is defined by induction: it leaves P unchanged
until the sub-process located at ϑ is reached, and there it applies the routed substitution {|l/λ|}.
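As a toy illustration of the clause spelled out above for P_0 | P_1 (and of nothing more), here is a Python sketch under our own simplified encoding of processes; the constructors, the use of strings for locations, and the plain prepending of the tag in place of the composition ⋆ are all assumptions made only for the example.

# Toy encoding (ours, not the paper's): a process is one of
#   ("par", P, Q)                -- parallel composition P | Q
#   ("prefix", channel, loc, P)  -- an I/O prefix on `channel` indexed by `loc`
#   ("nil",)                     -- the inactive process
# A location is a string such as "||0||1" or a variable name like "lam".

def routed_subst(proc, addr, var):
    """Substitute the address `addr` for the location variable `var`, adjusting
    `addr` with the branch tag each time a parallel composition is crossed.
    Prepending the tag stands in for the composition ||i * addr of the text."""
    kind = proc[0]
    if kind == "par":
        _, left, right = proc
        return ("par",
                routed_subst(left,  "||0" + addr, var),
                routed_subst(right, "||1" + addr, var))
    if kind == "prefix":
        _, chan, loc, cont = proc
        new_loc = addr if loc == var else loc
        return ("prefix", chan, new_loc, routed_subst(cont, addr, var))
    return proc  # nil

# Substituting an address for `lam` in  c_lam.0 | c_lam.0  indexes the left
# copy with ||0·addr and the right copy with ||1·addr.
p = ("par", ("prefix", "c", "lam", ("nil",)), ("prefix", "c", "lam", ("nil",)))
print(routed_subst(p, "||1||0", "lam"))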
The very fact that channels have indexes, which may also be instantiated, guarantees that two partners can
establish a connection that will remain stable along a whole session. Indeed, suppose that Q uses a channel c
indexed with ||0||1, the address of R relative to Q. Then it is immediate to verify that Q
accepts inputs on c only if they come from R; of course this property remains true for all inputs of Q along
c. Symmetrically for the partner process.
Example 5.5 We now illustrate our partner authentication mechanism through some simple examples. First
of all, consider a single message exchange between Alice, A, and Bob, B:
where λ and λ' are two location variables. After the message exchange, the semantic rule Comm Inter 1
instantiates λ and λ' in A' and B', respectively, with the address of A relative to B (i.e., ||1||0) and of B
relative to A (i.e., ||0||1), respectively.
Intuitively, the instantiation of a variable λ with an address l represents a secure declaration of the identity of a process
through its location l, which cannot be manipulated even by malicious parties. As we will see later on, located
actions also give some security guarantees on the subsequent message exchanges.
Example (cont'd) Consider now the following protocol (recall that a channel indexed with ε is simply written d; we shall
comment below on the use of channel d, which occurs located in A' but not in B').
Here, Bob sends a message to Alice after the reception of M_A. Note that Alice is requiring that the second
message come from the same process to which she sent M_A. Since, after the first message exchange, the
variable λ is instantiated to the address of B' relative to A', our semantics guarantees authentication of the
second communication with respect to the (secure) identity declaration of the partner: Alice is assured that the second
message will be sent by the same process that received the first one. In order to illustrate this important
point we add another process C which tries to communicate over channel d. The
process has the following steps:
Since the address of C' relative to A' is ||0||0||1 ≠ ||0||1, either (the residual of) A' receives from and
only from (the residual of) B', or B' and C' communicate. In the first case we have:
In the second case we have:
In the example above, the same channel d is used in two different ways: it is "located" for Alice and Bob,
while it is "free" for B' and C'. In Example 5.9, we shall see that the same channel can even be used
in a multiplexing fashion: two pairs of processes can interleave their communications, while still enjoying the
property of being engaged with the same process along the entire session.
In the next example we consider the situation where the channel d is located and used to output a message
M. This usage of channels also guarantees a sort of secrecy of M.
Example 5.6 We have seen in the previous example that message M_B was intercepted by C', thus violating
its secrecy. Consider now the following protocol:
Here, Bob is requiring that the message M_B be received by the same user that sent the first message. In this
case Bob obtains a form of secrecy: he is assured that M_B will be read by the process identified by λ', i.e.,
the process that sent message M_A. Indeed, any new process C'' that tries to read M_B will fail:
the address of C'' relative to B'' differs from the address bound to λ', so the communication cannot take place.
We have seen that locating inputs and outputs corresponds to guaranteeing authentication and secrecy of the
communication, respectively. We can summarize these two concepts as follows. In a message exchange and
with respect to an address l, a process obtains
Partner authentication whenever it receives the message from the process reachable at l, only.
Secrecy whenever only the process reachable at l will receive the message.
We now state the above more precisely, exploiting Def. 4.1. We need the notion of context with two holes,
Theorem 5.7 (authentication) Let ^
PROOF. By inspection on the rules used to deduce the transition; in particular consider the side conditions of rules
Comm Inter L 1 (where
Theorem 5.8 (secrecy) Let ^
PROOF. Analogous to the previous proof.
The next example illustrates how locating both inputs and outputs guarantees a permanent hooking between
two parties and allows us to model multiple sessions quite easily.
Example 5.9 Consider the following processes:
(Here Ã and B̃ stand for the located variants of Alice and Bob.)
Indeed both Alice and Bob are assured that the second message (and also all the subsequent messages
sent and received on located channels) will be sent/received by the user that interacted in the first message
exchange. Hence, after the first communication the two users are permanently hooked together. This time
a third user is indeed able to take the place of B̃ in the communication, but only if it
starts the session with Ã; it can never communicate with Ã after the first message exchange between
Ã and B̃.
We now model multiple sessions. Consider Ã and B̃ under replication, so that an unbounded number
of instances of Alice and Bob are present, each with a different fresh message. Consider now two instances
of Ã sending their first messages to two instances of B̃, i.e., two parallel sessions:
Note that after the first message exchange the partners in each session are permanently hooked: the second
message is always sent to the correct party, the one who initiated the session. As a consequence, no replay
of messages is possible among different sessions, also in the presence of a malicious party.
6 The Spi-Calculus
Syntax. In this section we briefly recall, often also literally, the spi-calculus [3], in its monadic version
from [1]. This calculus is an extension of the π-calculus, introduced for the description and the analysis
of cryptographic protocols. A first difference with the π-calculus is that the spi-calculus has no summation
operator +. Also, terms can be structured as pairs (M,N), successors of terms suc(M) and encryptions
{M}_N. The last term represents the ciphertext obtained by encrypting M under
the key N, using a shared-key cryptosystem such as DES [27].
Definition 6.1 Terms and processes are defined according to the following BNF-like grammars.
Terms M, N: m (names), x (variables), 0 (zero), suc(M) (successor), (M,N) (pair), {M}_N (shared-key encryption).
Processes P, Q: π.P (prefix), (m)P (restriction), P | Q (parallel composition), [M is N]P (matching), !P (replication),
let (x, y) = M in P (pair splitting), case M of 0 : P suc(x) : Q (integer case), case L of {x}_N in P (shared-key decryption),
where π may either be M(x) or M⟨N⟩. 3
Most of the process constructs are the same as in the π-calculus. The new ones decompose terms:
The process let (x, y) = M in P behaves as P[M_1/x][M_2/y] if M is the pair (M_1, M_2), and is stuck if M
is not a pair;
The process case M of 0 : P suc(x) : Q behaves as P if M is 0, as Q[N/x] if M is suc(N), and
is stuck otherwise;
The process case L of {x}_N in P attempts to decrypt L with the key N; if L has the form {M}_N
then the process behaves as P[M/x], and otherwise it is stuck.
The structural congruence and the operational semantics for commitment are exactly the same as those of the π-calculus
given in Table 1. Some new reduction rules are instead needed.
Red Split: let (x, y) = (M, N) in P reduces to P[M/x][N/y]
Red Zero: case 0 of 0 : P suc(x) : Q reduces to P
Red Suc: case suc(M) of 0 : P suc(x) : Q reduces to Q[M/x]
Red Decrypt: case {M}_N of {x}_N in P reduces to P[M/x]
7 Names of the spi-calculus handled locally
To introduce our second authentication mechanism, we need to further exploit the ideas contained in [6],
where the relative addresses, introduced in Section 3, are used to handle names locally to sequential processes
in an operational manner. The space of names of a whole process is then partitioned into local
environments associated each with its sequential sub-processes.
To avoid global management of names, we have to solve two problems. Names have to be declared
locally and to be brand-new in that local environment. Furthermore, when a name is exported to other local
environments via communications or by applying a reduction rule, we must guarantee that there are no
clashes involving the other names around. A purely mechanical way of doing that is in [6].
For the sake of simplicity, instead of recalling also the mechanism for generating fresh names, here we
assume that a name is fresh, whenever needed, and we shall recall that by a side condition.
As for keeping names distinct, consider two different sequential processes, say G and R, that have two
syntactically equal names, say n. Suppose now that G sends n to R. To distinguish between the two different
instances of n in the local environment of R, the name generated by G will be received enriched with the
address of G relative to R, which points back from R to the local environment of G.
A slightly more complex situation arises when a process receives a name and sends it to another process.
The name must arrive at the new receiver with the address of the generator (not of the sender) relative to the
new receiver. Consider again Fig. 1, where P_1 sends to P_2 a name generated by P_3. The communication rules (in Tab. 4)
use address composition to determine the address of P_3 relative to P_2, by composing the
address of the message (which records the address of P_3 w.r.t. P_1) with the relative address of P_1 w.r.t. P_2.
We carry the localized semantics of the π-calculus of [6] over to the monadic spi-calculus. First of all, we
introduce the new set of localized names, i.e. names prefixed with relative addresses.
Although M is an arbitrary term, we only consider it to be a name or a variable (to be instantiated to a name), because these
are the only useful cases (see [3]).
Definition 7.1 Let N' = Loc · N be the set of localized names, where N is the set of standard names and
"·" is the operator of language concatenation.
For simplicity, we assume r, possibly indexed, to range over both N' and N and, unless necessary,
we do not syntactically distinguish localized terms from terms, i.e. terms prefixed with relative addresses
from those not prefixed.
As we said above, we do not recall how the mechanism of [6] generates fresh names whenever needed:
here we simply assume them fresh. However, we require that restricted names are always localized, i.e.
they occur in a declaration as (n). Technically, this is achieved by transforming a process P with (n)
into a new process, obtained by replacing each sub-process of P on the form (n)Q with the process
(n)Qfjn=njg s (the substitution fj=jg s is in Def. 7.3).
When a term M is exported from a process, say P , to another, say Q, it is necessary to compose the
relative address prefixing M with the relative address of P w.r.t. Q. This composition is performed by the
term address composition, that extends address composition in Def. 3.2. Applied to a relative address and
to a localized term, it returns an updated localized term.
Definition 7.2 Term address composition ? T is defined as
We say that , is the term # exported to the relative address # 0 .
Names in N , variables and natural numbers are not prefixed with relative addresses and are insensitive to
address composition:
We now explain how our semantics deals with terms. First, note that the operator ⋆_T considers the compound
term M as a whole, as it does not distribute address composition over the sub-terms of M. Now, consider the
encryption term {M}_K. It is an atomic entity, and so it is handled as if it were a new name, local to the process
that encrypts M, say P. The two localized terms M and K are frozen just as they were at encryption time,
and their own relative addresses are not changed when the encrypted message is sent through the net; again,
these relative addresses cannot be updated. Since {M}_K is atomic, its relative address, say ϑ_0 ϑ_1, will
always point to the process P that made the encryption. (Technically, M and K are frozen as they were
in the process S containing all the restrictions on the names used in the encryption.) In this way, when
decrypting {M}_K the semantic rules recover the correct addresses for M and K by simply composing the
actual address of {M}_K, ϑ_0 ϑ_1, with the (frozen) relative addresses of M and K, respectively. The same
management described above is used for successors and pairs: the term M is frozen in
suc(M), while the terms M and N are frozen in (M,N).
Also the routed substitution of Def. 5.3 is extended to deal both with terms and with processes in the
spi-calculus; it distributes to each sub-term or sub-process (note that below the term N cannot be a variable).
Definition 7.3 The spi routed substitution fjN=xjg s is defined by induction as follows on
terms
1. rfjN=xjg
2. zfjN=xjg
z x 6= z
3. 0fjN=xjg
4. suc(M)fjN=xjg
5.
processes
1. 0fjN=xjg
2. (rhMi:P )fjN=xjg
3. (r(y):P )fjN=xjg
4.
5.
7. (let (y
8. (case M of
case MfjN=xjg s of
case MfjN=xjg s of
9. (case L of fy in P )fjN=xjg
case LfjN=xjg s of fx MfjN=xjgs in P
case LfjN=xjg s of fx MfjN=xjgs in PfjN=xjg s otherwise.
Now, the selective routed substitution for the spi-calculus is exactly as in Def. 5.4.
Localized Congruence. The rules for the structural congruence require some changes to accommodate
localized names.
1.
2.
3.
4.
Note that no -conversion is needed: each binding occurrence of the name r in P 1 replaced
by which is different from any name s in P 1 because of the properties of
address composition ?.
Localized Reduction Relation. We add the following reduction rules to those for matching and replication
Red Split let (x;
Red Suc case # 0 # 1 suc(M) of
Red Decrypt case # 0 # 1 of fy
When a process decomposes a term, the involved sub-terms are updated. The intuition for the decryption
rule is that we can decrypt a message {M}_N only if we use a key that looks exactly like the frozen key N
appears to the receiver of the encrypted message. When K and N, although starting
from different sites, do refer to the same key, the semantic rules decrypt the message and update its relative
address by composing ϑ_0 ϑ_1 and M; the process P then behaves as P with the updated term substituted for the bound variable.
Localized Commitment Relation. Eventually, we present in Tab. 4 the extended commitment rules. In the
localized commitment relation a process P goes to an agent A through an arrow whose label records the action
performed and the position ϑ of the sub-process performing it.
The component ϑ of the labels is needed for checking the side conditions of a few rules. Communication
rules are successful only if the complementary actions refer to the same name, while for the restriction rule
it is necessary to check that there are no clashes between the action and the restricted name. The semantic
rules update the messages with the relative addresses of the sender with respect to the receiver. Indeed,
the congruence rules lift relative addresses and restrictions as much as needed. For instance, by applying
them we have that the restricted name r in P_1 appears as ||0 ⋆ r in the whole process;
similarly, by applying the congruence rules the
term M in P_0 is prefixed with the appropriate relative address. Finally, interactions too have to update relative addresses.
Definition 7.4 Let be an abstraction and be a concretion. Then, their localized
interactions are
To see how the localized interactions work, consider F@C. The restricted names, written ~r in P_1, are
duly updated to ||0 ⋆ ~r in the parallel composition of P_0 (after the substitution) and P_1. As for the
message, it appears as M in P_1 and has to be exported: the term address composition ⋆_T is therefore
applied to the relative address ||1||0 and to M. As an example, consider again the processes in Fig. 1
and suppose that P_0 is willing to send N to the process P_3. Then, the message will appear as ||0||0 N.
It will replace, through the routed substitution, the variable x in (P_2 | (P_3 | P_4)) as ||1||0||0 N. Note
that it will arrive at P_3 enriched with the relative address of P_0 w.r.t. P_3.
8 Message Authentication
We can now intuitively present our authentication primitive [l M @ l' N]P,
akin to the matching operator. This
"address matching" is passed only if the relative addresses of the two localized terms M and N coincide.
Comm Out
Comm In
Comm Par 1
Comm Par 2
Comm Inter 1
Comm Inter 2
Comm Res
Comm Red
Comm Struct
Table
4: The localized commitment relation.
The intuition is that if we know which process packed N, say P, we can also establish whether M comes indeed from
P, thus authenticating it. More formally, the extensions due to the new primitive consist in a new case for
processes and a new reduction rule.
Definition 8.1 Let M, N be two terms as in Def. 6.1, and l, l' ∈ Loc be two relative addresses. Then
[l M @ l' N]P is a process, on which we define the following reduction rule:
Red Address Match: [l M @ l N]P reduces to P, i.e. the match is passed only when the two relative addresses coincide.
Note that free names are prefixed with the empty relative address.
Hereafter, we assume an initial start-up phase, in which processes exchange a message and fix their
relative addresses. This can be obtained, e.g., through a preliminary communication between the partners,
from A to B on a restricted shared channel. This start-up phase is indeed an abstraction of the preliminary
secure exchange of secret information (e.g., long term keys) which is necessary in every cryptographic
protocol. We will see an example of this in the next section. This initialization step can be avoided by using
our partner authentication primitive; however, for the sake of presentation, we do not combine the two
primitives here.
Consider the following simple example, where the process B wants to authenticate a message from A
even in the presence of an intruder E. The protocol P is:
E)
(Recall that M is a name that appears as M in A, by assumption; analogously for N in E, below.) Now we
show the role that localized names play, and how they guarantee by construction that B 0 is executed only if
the message bound to x has been originated by A, i.e., if the message received on channel c is indeed M . As
above, the relative address ||1||0||0, pointing from B to A, encodes the "site" hosting A, thus it gives
the identity of the process from which B is expecting to receive a message on channel c. In order to analyze
the behaviour of the protocol in a hostile environment, we consider a generic intruder E, as powerful as
possible.
We now examine the following two possible message exchanges:
The first message represents the correct exchange of message M from A to B. The second one is an attempt
of E to send a different message N to B. The intruder E could actually be of the following form:
⟨other bad actions⟩
The names M and N are received by B prefixed by the relative address corresponding to the respective
originators.
Fig. 3 shows the two message exchanges. In particular we see that M is received by B as ||1||0||0 M,
while N becomes ||0||1 N. It is now immediate to see that only in the first case will B evolve to B{|M/x|}_s,
while in the latter it will stop. This is so because only ||1||0||0 M matches the address of A. Every
attempt of the intruder to introduce new messages on c is filtered out by the authentication primitive.
Figure 3: The process B detects that the message ||1||0||0 M is authentic, while the message ||0||1 N comes
from the intruder E.
A further interesting case arises when the intruder intercepts M and forwards it to B. We will show that
our mechanism accepts the message as authentic. In particular, we reconsider the previous protocol and we
analyze the case of a different intruder that, masquerading as B (written E(B)), intercepts M and forwards
it to B:
Since E(B) does not, and actually cannot, modify M, we would like to accept the message in B even if it has been
forwarded by E. No matter how many times a message is forwarded, address composition maintains its
integrity and the identity of its generator. In detail, E(B) receives M as ||1||1||0 M. When E forwards it
to B, the message is composed with the address of E relative to B, ||0||1, yielding (||0||1) ⋆_T (||1||1||0 M).
By applying rule (1) of Def. 3.2 we see that the message is received by B as ||1||0||0 M, and therefore
it is accepted as authentic. Note also that B can use the localized M as a component of a new message M'. The
receiver R of M' will get M prefixed by the relative address of A, say ϑ. So R can check that M' has been
packed by B', and hence the authenticity of M', and even of its components: indeed, address composition
gives the address of the originator of M (i.e. A) relative to R.
We end this section with the following property, in which C[·] and C[·,·] are contexts with one and
two holes, respectively, and P{|~M/~x|}_s
stands for the componentwise application of the routed substitutions {|M_i/x_i|}_s.
Theorem 8.2 (address matching) Let ^
the input on c binds the variable x in the matching. Suppose that
such
1 and
only if l
PROOF. In the first communication the variable x occurring in the matching is instantiated to l
the input binds x (recall that, in Def. 7.3, the substituting term is enriched while going down in the tree of sequential
processes). The sequence of steps leading to Q 00 only change the contexts C and the process P 1 , and it possibly
binds some variables ~
M=xjg. Now the reduction on matching can be performed if and only if [l 00 M @
is at top-level in C 00 , and l is equal to l.
9 Implementing Authentication
In this section we show that our notion of message authentication based on locations helps in studying and
analysing cryptographic protocols. The main idea is to check whether a specific authentication protocol is indeed
a good "implementation" of our authentication primitive, i.e., whether the cryptographic protocol is as strong in
detecting names with an "incorrect" relative address as our authentication primitive is.
Recall that in the spi-calculus a compound term, such as an encryption {M}_K, is considered localized, i.e.
its relative address, say ϑ_0 ϑ_1, points to the process P that made the encryption. In this way,
when decrypting {M}_K the semantic rules recover the correct addresses for M and K by simply composing
the actual address of {M}_K, ϑ_0 ϑ_1, with the (frozen) relative addresses of M and K, respectively.
We now show an example of a correct run of the Wide Mouthed Frog key exchange protocol. Consider
its simplified version analyzed in [3]. The two processes A and B share keys KAS and KBS with a trusted
server S. In order to establish a secure channel with B, A sends a fresh key KAB encrypted with KAS to
the server S. Then, the server decrypts the key and forwards it to B, this time encrypted with KBS . Now B
has the key KAB and A can send a message M encrypted with KAB to B. The protocol should guarantee
that when B receives M , such a message has been indeed originated by A.
The protocol is composed of the following three messages:
Message 1   A → S : {K_AB}_{K_AS}
Message 2   S → B : {K_AB}_{K_BS}
Message 3   A → B : {M}_{K_AB}
Its specification in our calculus with localized names (having mechanically replaced restricted names with
their localized counterparts) is:
i:c AB hfMg KAB
(x):case x of fyg jj 1 KAS
in c BS hfyg jj 1 KBS
(x):case x of fyg jj 0 jj 1 KBS
in
c AB (z):case z of frg y in ^
Not surprisingly, the specification is in the style of [3], except for localized restricted names. Fig. 4 shows
how the localized names are handled in a correct execution of the protocol P. Note that K_AS and K_BS
assume different relative addresses in the different processes.
After the reception of message 2, B decrypts the received message with the localized key ||0||1 K_BS.
Figure 4: A correct execution of the Wide Mouthed Frog protocol.
The reduction rule Red Decrypt applies because the key of the received ciphertext appears to B precisely as ||0||1 K_BS. The variable y is then
set to a composed address that results in ||1||0 K_AB, by rule (3) of Def. 3.2. This is indeed the
correct reference to the key K_AB generated by A and installed in its local environment. In the last message B
receives {M}_{K_AB}, and succeeds in decrypting it with ||1||0 K_AB, obtaining ||1||0 M. The addresses
of M and of A relative to B are equal, so M is indeed authentic. This characterizes a "correct" execution.
It is well-known that an attack can take place over two sessions of the protocol above. Basically, it
occurs when the intruder replays some messages of the first session in the second one. We follow below
the formalization of [3], where this attack is analyzed. Note that in the single session illustrated above no
problem arises, even if an intruder intercepts the message sent by A and then forwards it to B. Indeed
the message received is the right one, as we have just seen, since no one can alter either the relative
addresses or the encrypted message {M}_{K_AB}.
Now, we show that the attack above is immediately detected by an observer that can compare localized
names, e.g. by using our authentication primitive. For the sake of readability, we call A' and B'
the two instances of A and B in the second session, where A' is trying to send the message M' to B' using
the session key K'_AB.
Message 1   A → S : {K_AB}_{K_AS}
Message 2   S → B : {K_AB}_{K_BS}      ⟨E eavesdrops Message 2⟩
Message 3   A → B : {M}_{K_AB}         ⟨E eavesdrops Message 3⟩
The intruder eavesdrops on the first session and then replays messages 2 and 3 in the second session.
The result is that B' receives a copy of M instead of one of M'.
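The detection mechanism can be mimicked in a few lines of Python (our own abstraction, not the calculus): a delivered message carries the relative address of its originator, forwarding does not change that address, and the responder compares it with the address of its expected partner. The concrete address strings below are only illustrative.

# Abstract model (ours): a message is a pair (origin_address, payload).
# Forwarding by an intruder leaves origin_address untouched, mirroring the
# fact that address composition preserves the identity of the generator.

def deliver(expected_origin, message):
    origin, payload = message
    if origin != expected_origin:
        raise ValueError("address mismatch: possible replay or forgery")
    return payload

# First session: B expects A, whose relative address matches the message.
m_first = ("||1||0||0", "{M}_KAB")          # originated by A (illustrative address)
print(deliver("||1||0||0", m_first))        # accepted

# Second session: the intruder replays the old ciphertext to B'.
m_replayed = ("||1||0||0", "{M}_KAB")       # still carries A's address
try:
    deliver("||1||1||0||1", m_replayed)     # address of A' w.r.t. B' (from the text)
except ValueError as err:
    print("attack detected:", err)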
In order to model two parallel sessions of the protocol, we consider the following specification:
where the addresses of the localized names K_AS and K_BS are suitably updated in the processes A, B and
S. Note that A generates both M and K_AB as fresh names. So each copy of A in A | A originates two different
messages, say M and M', and two different keys, say K_AB and K'_AB. We also modify S so that it can
serve more sessions (for the sake of simplicity we define a server which is just able to handle two sequential
sessions):
(x):case x of fyg jj 1 KAS
in c BS hfyg jj 1 KBS
i:
c AS (z):case z of fwg jj 1 KAS
in c BS hfwg jj 1 KBS
and we specify the intruder as:
where the names c_AB, c_AS and c_BS are free, and thus known to all the processes of P' (unlike K_AS and
K_BS, which are bound).
Now we can observe the attack sequence in Fig. 5. In particular, when the process B' decrypts the replayed
message, it obtains a message originated from A and not from A', as it
should be. Indeed, the address of A' relative to B' is ||1||1||0||1, and the attack is detected.
We now want to show how the protocol can be made secure by construction through our authentication
primitive. The idea is that the last message should be accepted only if it has been originated by the correct
initiator. In order to do this we need at least one message from the initiator whose address can be compared
with the last message of the protocol. The trick is to add a startup message that securely hooks one initiator
to one responder, by sending a fresh message on a restricted channel. The resulting specification follows
(the modifications are in bold font and the restricted names are not localized, for the sake of readability):
(x):case x of fyg KAS in c BS hfyg KBS i:
c AS (z):case z of fwgKAS in c BS hfwgKBS i
(x):case x of fyg KBS in
c AB (z):case z of frg y in [r @
In P'' the two B'' processes receive from the two A'' two different startup messages, with two different
addresses. It is thus no longer possible for the intruder to carry out a replay attack. In fact, the cheated B''
will be able to stop before delivering the message to B̂. By comparing the traces of this protocol, correct "by
construction", with the traces of the previous one, it is easy to see that they are not equivalent. A potential
attack is thus detected.
Figure 5: An attack on the Wide Mouthed Frog protocol, where ||^k stands for a sequence of k tags ||_i.
10 Conclusions and Future Work
We defined two primitives that guarantee partner and message authentication over public channels, based
on the same semantic feature derived from the proved transition systems [9]. Partner authentication is based
on a semantics of the π-calculus where names of channels are indexed with the expected relative addresses
of the communicating parties. In particular, any time two processes try to communicate over a common
channel, their relative addresses are checked against the index of the channel used. The communication is
enabled only if the check is passed, i.e. if the relative addresses are compatible with the indexes. Moreover,
the very same channel can be used in a multiplexing fashion: two processes, say P and Q, can go on
exchanging messages on channel c, interleaving their activity with that of another pair of processes, say R
and S, using c as well. It will never be the case that a message for P is read by R or comes from S, unless
this is indeed the intended behaviour of both P and R or both P and S.
Message authentication is based on a semantics of the spi-calculus where each message M is localized,
via a relative address l, to the process P that packed M . The authentication primitive compares the relative
address l with the address l 0 of a process Q, relative to the receiver of M . The check succeeds and
authenticates the message M only if l = l 0 , expressing that it was indeed the process Q that packed M .
Our two primitives are in a sense orthogonal, as they operate on independent features of the calculi con-
sidered. Of course, one may combine them, e.g. by carrying over to the spi-calculus the notion of located channel
introduced for the π-calculus. Both notions of authentication can then be guaranteed "by construction".
Note that our partner authentication primitive does not transform a public channel into a private one. In-
deed, partner authentication clearly separates the concepts of authentication and secrecy. More importantly,
once two processes communicating on public channels are hooked it is impossible for a third process to
interfere in the communication.
Our notion of message authentication does not need private channels either. A message M may be
considered authentic even if it has been intercepted or eavesdropped, i.e., our mechanism does not guarantee
the secrecy of M , but only that M has been generated or packed by the claimed entity. Thus, our primitive
corresponds neither to a private channel in the basic π-calculus, nor to a cryptographic one in the spi-
calculus: both appear to be too strong for message authentication alone, as they also guarantee secrecy.
The idea of exploiting locations for the analysis of authentication comes from [14], where however
entities are bound to physical addresses of the net. A related approach is Abadi, Fournet and Gonthier's [2],
in which principals have explicit, fixed names (see also the Join-calculus [19] and SEAL [33]). Here we
relaxed the rigidity of a fixed mapping of sites, by introducing a sort of "identifiers of sites" represented
by relative addresses. As a matter of fact, the actual placement of a process on a site can be recovered by
composing our localized names (akin to the environment function of sequential languages) with allocation
tables (similar to a store). Actually, [14] models a wider notion of authentication, that we plan to investigate
next.
As discussed in the paper, our primitive may not be directly implementable. Indeed, one should have
a low-level, highly reliable mechanism to manage localized names, which is unrealistic in many cases,
but possible for instance in LAN or virtual private networks. A further step could be encrypting relative
addresses within the headers of messages, in the style of IPsec [31]. Nevertheless, our proposal can help in
reasoning about authentication and security from an abstract point of view. This is indeed the main aim of our
approach and we are presently developing some ideas that we briefly describe in the following.
First, it could be possible to verify the correctness of a cryptographic protocol by showing that its messages
implement partner authentication when needed. As an example, a typical challenge-response technique
requires sending a nonce (random challenge) and expecting it back, encrypted with a secret shared
key. Challenge-response can be proved to implement our located input actions, under some suitable condi-
tions. The proofs that implementations satisfy specifications are often hard, just because private channels
are used to model authenticated channels. Indeed, private channels often seem too far from cryptographic
implementations. So our proposal can help, as we need no private channels.
Moreover, we could verify whether a cryptography-based protocol ensures message authentication by checking
a version of it that also contains the address matching primitive:
the original formulation and ours should exhibit
the same behaviour. This check of specifications against implementations is much in the style of the
congruence-based techniques typical of process calculi (see, e.g., [3]).
Finally, we feel confident that our proposal scales up, because some languages for concurrent and reactive
systems, like Facile [32], PICT [28], CML [26], Esterel [4] are built on top of a core process calculus
like the one we use here; also, they have an operational semantics that can easily be turned into a proved
one, as the successful cases of Facile [11] and Esterel [23] show. Of course a great deal of work is still
necessary to make our proposal applicable in real cases.
--R
"Secure Implementation of Channel Abstractions"
"A Calculus for Cryptographic Protocols: The Spi Calculus"
"The Synchronous Programming Language ESTEREL and its Mathematical Semantics"
Security Issues in Process Calculi.
"Names of the -Calculus Agents Handled Locally"
"A Logic of Authentication"
"Authentication via Localized Names"
"Enhanced Operational Semantics: A Tool for Describing and Analysing Concurrent Systems"
"Non Interleaving Semantics for Mobile Processes"
"Causality for debugging mobile agents"
"A Compiler for Analysing Cryptographic Protocols Using Non-Interference"
"Strand Spaces: Why is a Security Protocol Cor- rect?"
"Using Entity Locations for the Analysis of Authentication Protocols"
"Using Non Interference for the Analysis of Security Protocols "
"Non Interference for the Analysis of Cryptographic Proto- cols"
"A Uniform Approach for the Definition of Security Properties"
International Organization for Standardization.
"Three systems for cryptographic protocol analysis"
"Breaking and Fixing the Needham-Schroeder Public-key Protocol using FDR"
"A Hierarchy of Authentication Specification"
"Applying techniques of asynchronous concurrency to synchronous languages"
"A Calculus of Mobile Processes (I and II)"
"Automated Analysis of Cryptographic Protocols Using Mur"
"From CML to its Process Algebra"
"Data Encryption Standard (DES)"
PICT: A programming language based on the pi-calculus
"Verifying authentication protocols in CSP"
Applied Cryptography.
RFC 2411: IP security document roadmap
"Facile Antigua Release Programming Guide"
"Seal: A framework for secure mobile computations"
--TR
A logic of authentication
A calculus of mobile processes, I
From CML to its process algebra
The reflexive CHAM and the join-calculus
Verifying Authentication Protocols in CSP
A calculus for cryptographic protocols
Non-interleaving semantics for mobile processes
Secrecy by typing in security protocols
Applying techniques of asynchronous concurrency to synchronous languages
Pict
Names of the π-calculus agents handled locally
A compiler for analyzing cryptographic protocols using noninterference
Enhanced operational semantics
Seal
Non Interference for the Analysis of Cryptographic Protocols
Breaking and Fixing the Needham-Schroeder Public-Key Protocol Using FDR
The ESTEREL Synchronous Programming Language and its Mathematical Semantics
A Uniform Approach for the Definition of Security Properties
Secure Implementation of Channel Abstractions
A Hierarchy of Authentication Specifications
Authentication via Localized Names
Automated analysis of cryptographic protocols using Murφ
--CTR
C. Bodei , P. Degano , R. Focardi , C. Priami, Authentication primitives for secure protocol specifications, Future Generation Computer Systems, v.21 n.5, p.645-653, May 2005
Chiara Bodei , Mikael Buchholtz , Pierpaolo Degano , Flemming Nielson , Hanne Riis Nielson, Static validation of security protocols, Journal of Computer Security, v.13 n.3, p.347-390, May 2005 | secrecy;operational semantics;proved transition systems;distributed process algebras;authentication;security |
568275 | Listing all potential maximal cliques of a graph. | A potential maximal clique of a graph is a vertex set that induces a maximal clique in some minimal triangulation of that graph. It is known that if these objects can be listed in polynomial time for a class of graphs, the treewidth and the minimum fill-in are polynomially tractable for these graphs. We show here that the potential maximal cliques of a graph can be generated in polynomial time in the number of minimal separators of the graph. Thus, the treewidth and the minimum fill-in are polynomially tractable for all classes of graphs with a polynomial number of minimal separators. | Introduction
The notion of treewidth was introduced at the beginning of the eighties by Robertson and
Seymour [25, 26] in the framework of their graph minor theory. A graph H is a minor of a
graph G if we can obtain H from G by using the following operations: discard a vertex, discard
an edge, merge the endpoints of an edge into a single vertex. Among the deep results obtained
by Robertson and Seymour, we can cite this one: every class of graphs closed under taking minors
which does not contain all the planar graphs has bounded treewidth.
A graph is chordal or triangulated if every cycle of length greater than or equal to four has a
chord, i.e. an edge between two non-consecutive vertices of the cycle. A triangulation of a graph is
a chordal embedding, that is, a supergraph on the same vertex set which is triangulated. The
treewidth problem is to find a triangulation such that the size of the biggest clique is as small
as possible. Another closely related problem is the minimum fill-in problem. Here we have to find a
triangulation of the graph such that the number of the added edges is minimum. In both cases
we can restrict to minimal triangulations, i.e. triangulations with a set of edges minimal by
inclusion.
The treewidth and the minimum fill-in play an important role in various areas of computer
science e.g. sparse matrix factorization [27], and algorithmic graph theory [3, 14, 2, 8]. For an
extensive survey of these applications see also [5, 7].
Computing the treewidth is equivalent to finding a tree decomposition, that is, a tree such that
each node of the tree is labeled by a vertex set of the graph. The labels of the nodes must
respect some constraints: every vertex of the graph must appear in some label, the endpoints
of an edge must appear in a same label, and if a same vertex is in two different labels it must be in
all the labels on the unique path of the tree connecting the two occurrences of the vertex. The
width of the tree decomposition is then the size of the largest label minus one, and the treewidth
is the smallest width over all the tree decompositions of the graph. Many graph problems that
model real-life problems are intractable in the sense that they are NP-hard. If we deal with
a class of graphs of bounded treewidth, most of these problems become polynomial and even
linear, e.g. maximum independent set, Hamiltonian circuit or Steiner tree. There are two ways
to solve problems when the treewidth is bounded: the first uses dynamic programming [5, 16]
and the second is based upon reduction techniques [2, 8].
Unfortunately the computation of the treewidth and of the minimum fill-in of a graph
are NP-hard [1, 30] even for co-bipartite graphs. However, a polynomial time approximation
algorithm with O(log n) performance ratio is described in [9]. The problem of the existence
of a polynomial approximation of the treewidth within a multiplicative constant remains still
open. For any fixed constant k, there exist polynomial algorithms finding a tree decomposition
of width at most k if such a decomposition exists. Arnborg et al. [1] gave the first algorithm
that solves this problem in O(n^{k+2}) time. Since then, numerous improvements have been made,
up to the linear time algorithm of Bodlaender [6]. Notice that the constant hidden
by the O notation is doubly exponential in k^2. Some results for treewidth appeared in the
literature in connection with logic. The works by Arnborg et al. [2], Courcelle [13], Courcelle
and Mosbah [14] led to the conclusion that all the problems which are expressible in extended
monadic second order logic can be solved in linear time for graphs of bounded treewidth.
There exist several classes of graphs with unbounded treewidth for which we can solve
polynomially the problem of the treewidth and the minimum fill-in. Among them there are
the chordal bipartite graphs [19, 12], circle and circular-arc graphs [28, 23], AT-free graphs
with polynomial number of minimal separators [22]. Most of these algorithms use the fact that
these classes of graphs have a polynomial number of minimal separators. It was conjectured in
[17, 18] that the treewidth and the minimum fill-in should be tractable in polynomial time for
all the graphs having a polynomial number of minimal separators. We solve here this ESA'93
conjecture.
The crucial interplay between the minimal separators of a graph and the minimal triangulations
was pointed out by Kloks, Kratsch and Müller in [21]; these results were concluded
in Parra and Scheffler [24]. Two minimal separators S and T cross if T intersects two connected
components of G\S, otherwise they are parallel. The result of [24] states that a minimal
triangulation is obtained by considering a maximal set of pairwise parallel separators and by
completing them i.e. by adding all the missing edges inside each separator. However this
characterization gives no algorithmic information about how we should construct a minimal
triangulation in order to minimize the clique size or the fill-in.
Trying to solve this later conjecture, we studied in [10, 11] the notion of potential maximal
clique. A vertex set K is a potential maximal clique if it appears as a maximal clique in
some minimal triangulation. In [10], we characterized a potential maximal clique in terms
of the maximal sets of neighbor separators, which are the minimal separators contained in it.
We designed an algorithm which takes as input the graph and the maximal sets of neighbor
separators and which computes the treewidth in polynomial time in the size of the input.
For all the classes mentioned above we can list the maximal sets of neighbor separators in
polynomial time, so we unified all the previous algorithms. Actually, the previous algorithms
compute the maximal sets of neighbor separators in an implicit manner. In [11], we gave a new
characterization of the potential maximal cliques avoiding the minimal separators. This allowed
us to design a new algorithm that, given a graph and its potential maximal cliques, computes
the treewidth and the minimum fill-in in polynomial time. Moreover this approach permitted
us to solve the two problems for a new class of graphs, namely the weakly triangulated graphs.
It was probably the last natural class of graphs with a polynomial number of minimal separators
for which the two problems remained open.
This paper is devoted to solving the ESA'93 conjecture, that is, the treewidth and the minimum
fill-in are polynomially tractable for the whole class of graphs having a polynomial number of
minimal separators. Recall that if we are able to generate all the potential maximal cliques
of any graph in polynomial time in the number of its minimal separators, then the treewidth
and the minimum fill-in are also computable in polynomial time in the number of minimal
separators. We define the notion of active separator for a potential maximal clique, which leads
to two results. First, the number of potential maximal cliques is polynomially bounded by
the number of minimal separators. Secondly, we are able to enumerate the potential maximal
cliques in polynomial time in their number. These results reinforce our conviction that the
potential maximal cliques are the pertinent objects to study when dealing with treewidth and
minimum fill-in.
2 Preliminaries
Throughout this paper we consider finite, simple, undirected and connected graphs.
Let G = (V, E) be a graph. We will denote by n and m the number of vertices, respectively
the number of edges, of G. For a vertex set V' ⊆ V of G, we denote by N_G(V') the neighborhood
of V' in G\V', i.e. the set of vertices of V\V' adjacent to some vertex of V'.
A subset S ⊆ V is an a,b-separator for two nonadjacent vertices a, b ∈ V if the removal
of S from the graph separates a and b into different connected components. S is a minimal
a,b-separator if no proper subset of S separates a and b. We say that S is a minimal separator
of G if there are two vertices a and b such that S is a minimal a,b-separator. Notice that a
minimal separator can be strictly included in another one. We denote by Δ_G the set of all
minimal separators of G.
Let G be a graph and S a minimal separator of G. We denote by C_G(S) the set of connected
components of G\S. A component C ∈ C_G(S) is a full component associated to S if every vertex
of S is adjacent to some vertex of C, i.e. N_G(C) = S. The following lemmas (see [15] for a
proof) provide different characterizations of a minimal separator:
Lemma 1 A set S of vertices of G is a minimal a,b-separator if and only if a and b are in
different full components of S.
Lemma 2 Let G be a graph and S be an a,b-separator of G. Then S is a minimal a,b-separator
if and only if for any vertex x of S there is a path from a to b that intersects S only in x.
If C ∈ C_G(S), we say that (S, C) is a block associated to S. A block (S, C) is called
full if C is a full component associated to S.
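Lemma 1 yields a direct test for minimal separators: compute the connected components of G\S and count the full ones. Below is a small Python sketch (adjacency lists as a dictionary; the helper names are ours and the code is meant as an illustration rather than an optimized routine).

from collections import deque

def components(graph, removed):
    """Connected components of G minus the vertex set `removed`."""
    seen, comps = set(removed), []
    for v in graph:
        if v in seen:
            continue
        comp, queue = set(), deque([v])
        seen.add(v)
        while queue:
            u = queue.popleft()
            comp.add(u)
            for w in graph[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

def is_minimal_separator(graph, S):
    """S is a minimal separator iff at least two components of G \\ S are full,
    i.e. their neighborhood is the whole of S (Lemma 1)."""
    full = 0
    for comp in components(graph, S):
        neighborhood = {w for u in comp for w in graph[u]} - comp
        if neighborhood == set(S):
            full += 1
    return full >= 2

# Example: in the path a-b-c, {b} is a minimal a,c-separator.
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(is_minimal_separator(path, {"b"}))   # True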
Let now G = (V, E) be a graph and G' = G[V'] an induced subgraph of G. We will compare
the minimal separators of G and G'.
Lemma 3 Let G be a graph and V' ⊆ V a vertex set of G. If S is a minimal a,b-separator
of the induced subgraph G' = G[V'], then there is a minimal a,b-separator T of G such that
T ∩ V' = S.
Proof. Let S' = S ∪ (V\V'); clearly S' is an a,b-separator in G. Let T be any minimal
a,b-separator contained in S'. We have to prove that S ⊆ T. Let x be any vertex of S and
suppose that x ∉ T. Since S is a minimal a,b-separator of G', we have a path joining a and
b in G' that intersects S only in x (see lemma 2). But this is also a path of G that avoids T,
contradicting the fact that T is an a,b-separator. It follows that S ⊆ T. Clearly, T ∩ V' ⊆ S
by construction of T, so T ∩ V' = S.
The next corollary follows directly from lemma 3.
Corollary 1 Let G = (V, E) be a graph and a be a vertex of G. Consider the graph G' =
G[V\{a}]. Then for any minimal separator S' of G', we have that S' or S' ∪ {a} is a minimal
separator of G. In particular, |Δ_G| ≥ |Δ_{G'}|.
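Corollary 1 also has a direct operational reading: a minimal separator of G' = G[V\{a}] can be lifted to one of G by testing at most two candidates. A sketch in Python, reusing is_minimal_separator from the previous snippet (again only an illustration):

def lift_separator(graph, a, S):
    """Given a minimal separator S of G - a, return the corresponding minimal
    separator of G promised by Corollary 1: S itself or S together with a."""
    S = set(S)
    if is_minimal_separator(graph, S):
        return S
    lifted = S | {a}
    assert is_minimal_separator(graph, lifted)   # guaranteed by Corollary 1
    return lifted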
3 Potential maximal cliques and maximal sets of neighbor
separators
The potential maximal cliques are the central object of this paper. We present in this section
some known results about the potential maximal cliques of a graph (see also [10, 11, 29]).
Definition 1 A vertex set Ω of a graph G is called a potential maximal clique if there is a
minimal triangulation H of G such that Ω is a maximal clique of H.
We denote by Π_G the set of potential maximal cliques of the graph G.
A potential maximal clique Ω is strongly related to the minimal separators contained in Ω.
In particular, any minimal separator of G is contained in some potential maximal clique of G.
The number |Π_G| of potential maximal cliques of G is at least |Δ_G|/n.
If K is a vertex set of G, we denote by Δ_G(K) the minimal separators of G included in K.
Definition 2 A set S of minimal separators of a graph G is called a maximal set of neighbor
separators if there is a potential maximal clique Ω of G such that S = Δ_G(Ω). We also say that S
borders Ω in G.
We proved in [11] that the potential maximal cliques of a graph are sufficient for computing
the treewidth and the minimum fill-in of that graph.
Theorem 1 Given a graph G and its potential maximal cliques Π_G, we can compute the
treewidth and the minimum fill-in of G in O(n^2 |Π_G| |Δ_G|) time.
Let now K be a set of vertices of a graph G. We denote by C_1(K), ..., C_p(K) the connected
components of G\K. We denote by S_i(K) the vertices of K adjacent to at least one vertex of
C_i(K); when no confusion is possible we will simply speak of C_i and S_i. If S_i(K) = K, we
say that C_i(K) is a full component associated to K. Finally, we denote by S_G(K) the set of
all the S_i(K) in the graph G, i.e. S_G(K) is formed by the neighborhoods, in the graph G, of the
connected components of G\K.
Consider a graph G = (V, E) and a vertex set X ⊆ V. We denote by G_X the graph obtained
from G by completing X, i.e. by adding an edge between every pair of non-adjacent vertices of
X. If X = {X_1, ..., X_p} is a set of subsets of V, G_X is the graph obtained by completing all
the elements of X.
Theorem 2 Let K ⊆ V be a set of vertices. K is a potential maximal clique if and only if:
1. G\K has no full components associated to K.
2. G_{S_G(K)}[K] is a clique.
Moreover, if K is a potential maximal clique, then S_G(K) is the maximal set of neighbor separators
bordering K, i.e. S_G(K) = Δ_G(K).
For example, in figure 1, the vertex sets {b, c, e, g} and {b, d, e} are potential maximal cliques
of the graph of figure 1a, and the vertices {x, ...} form a potential maximal clique of the graph
of figure 1b.
Figure 1: Potential maximal cliques
Remark 1 If K is a potential maximal clique of G, then for any pair of vertices x and y of K, either
x and y are adjacent in G or they are connected by a path entirely contained in some C_i of
G\K, except for x and y. The second case comes from the fact that if x and y are not adjacent
in G they must belong to the same S_i to ensure that K becomes a clique after the completion
of S_G(K). When we refer to this property we will say that x and y are connected via the
connected component C_i.
Remark 2 Consider a minimal separator S contained in a potential maximal clique Ω. Let us
compare the connected components of G\S and the connected components of G\Ω (see [11] for
the proofs). The set Ω\S is contained in a full component of G\S associated to S, call it C_Ω. All
the other connected components of G\S are also connected components of G\Ω. Conversely, a
connected component C of G\Ω is either a connected component of G\S (in which case
N_G(C) ⊆ S) or it is contained in C_Ω (in which case N_G(C) ⊄ S).
Remark 3 Unlike the minimal separators, a potential maximal clique Ω' cannot be strictly
included in another potential maximal clique Ω. Indeed, for any proper subset Ω' of a potential
maximal clique Ω, the difference Ω\Ω' is in a full component associated to Ω'.
Theorem 2 leads to a polynomial algorithm that, given a vertex set K of a graph G, decides if
K is a potential maximal clique of G.
Corollary 2 Given a vertex set K of a graph G, we can recognize in O(nm) time if K is a
potential maximal clique of G.
Proof. We can compute in linear time the connected components C_i of G\K and their neighborhoods
S_i. We can also verify in linear time that G\K has no full components associated to
K.
For each x ∈ K, we compute in linear time all the vertices y ∈ K that are adjacent to x in G or connected
to x via a C_i (we have to search the neighborhood of x and the connected
components C_i with x ∈ S_i). So we can verify in O(nm) time if K satisfies the conditions of
theorem 2.
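Theorem 2 and the proof of Corollary 2 translate almost literally into code. The following Python sketch checks the two conditions (no full component associated to K, and K made a clique once the S_i's are completed); it is an illustration rather than the O(nm) implementation of the corollary, and the helper for connected components is repeated here so that the snippet is self-contained.

from collections import deque
from itertools import combinations

def _components(graph, removed):
    seen, comps = set(removed), []
    for v in graph:
        if v in seen:
            continue
        comp, queue = set(), deque([v])
        seen.add(v)
        while queue:
            u = queue.popleft()
            comp.add(u)
            for w in graph[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

def is_potential_maximal_clique(graph, K):
    K = set(K)
    comps = _components(graph, K)                               # components C_i of G \ K
    S = [{w for u in C for w in graph[u]} & K for C in comps]   # their neighborhoods S_i
    # Condition 1: no C_i is a full component associated to K.
    if any(Si == K for Si in S):
        return False
    # Condition 2: every pair x, y in K is adjacent in G or connected via some C_i,
    # i.e. x and y lie in a common S_i (Remark 1).
    for x, y in combinations(K, 2):
        if y in graph[x]:
            continue
        if not any(x in Si and y in Si for Si in S):
            return False
    return True

# Example: in the path a-b-c, {a, b} is a potential maximal clique, while
# {a, c} is not (G \ {a, c} has the full component {b}).
path = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(is_potential_maximal_clique(path, {"a", "b"}))   # True
print(is_potential_maximal_clique(path, {"a", "c"}))   # False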
4 Potential maximal cliques and active separators
Theorem 2 tells us that if Ω is a potential maximal clique of a graph G, then Ω is a clique in
G_{Δ_G(Ω)}. We will divide the minimal separators contained in Ω into two classes: those which create
edges in Ω, which are called active, and the others, which are called inactive. More
precisely:
Definition 3 Let Ω be a potential maximal clique of a graph G and let S ⊆ Ω be a minimal
separator of G. We say that S is an active separator for Ω if Ω is not a clique in the graph
G_{Δ_G(Ω)\{S}}, obtained from G by completing all the minimal separators contained in Ω, except
S. Otherwise, S is called inactive for Ω.
Proposition 1 Let Ω be a potential maximal clique of G and S ⊆ Ω a minimal separator,
active for Ω. Let (S, C_Ω) be the block associated to S containing Ω, and let x, y ∈ Ω be two
non-adjacent vertices of G_{Δ_G(Ω)\{S}}. Then Ω\S is a minimal x,y-separator in G[C_Ω ∪ {x, y}].
Proof. Remark that the vertices x and y, non-adjacent in G_{Δ_G(Ω)\{S}}, exist by definition of an
active separator. Moreover, since Ω is a clique in G_{Δ_G(Ω)}, we must have x, y ∈ S.
Let us prove first that Ω\S is an x,y-separator in the graph G' = G[C_Ω ∪ {x, y}]. Suppose
that x and y are in a same connected component C_xy of G'\(Ω\S). Let C be a connected
component of C_xy\{x, y} adjacent to both x and y (for instance the one containing the interior
of a path from x to y in C_xy). Clearly, C is a connected component of G\Ω. Let T be the neighborhood of C in G.
By theorem 2, T is a minimal separator of G, contained in Ω. By construction of T, we have
x, y ∈ T. Notice that T ≠ S, otherwise S would separate C and Ω, contradicting the fact that
C ⊆ C_Ω (see remark 2). It follows that T is a minimal separator contained in Ω, different from S
and containing x and y. This contradicts the fact that x and y are not adjacent in G_{Δ_G(Ω)\{S}}.
We can conclude that Ω\S is an x,y-separator of G'.
We now prove that Ω\S is a minimal x,y-separator of G'. We will show that, for any vertex
z ∈ Ω\S, there is a path joining x and y in G' that intersects Ω\S only in z. By
theorem 2, x and z are adjacent in G_{S_G(Ω)}, so x and z are adjacent in G or they are connected
via a connected component C_i of G\Ω. Notice that C_i ⊆ C_Ω: if not, then C_i would
be contained in some connected component D of G\S different from C_Ω. According to remark
2, we would have N_G(C_i) ⊆ N_G(D) ⊆ S, contradicting z ∈ S_i. In both cases we have a path μ'
from x to z in G' that intersects Ω\S only in z.
For the same reasons, z and y are adjacent in G, or there is a connected component C_j of
G\Ω such that C_j ⊆ C_Ω and z, y ∈ S_j. This gives us a path μ'' from z to y in G',
such that μ'' ∩ (Ω\S) = {z}. Remark that C_i ≠ C_j, otherwise we would have a path from x
to y in C_i ∪ {x, y}, contradicting the fact that Ω\S separates x and y in G'. So the paths μ'
and μ'' are disjoint except for z, and their concatenation is a path μ joining x and y in G' and
intersecting Ω\S only in z. We conclude by lemma 2 that Ω\S is a minimal separator of G'.
By proposition 1, the set T' = Ω\S is a minimal separator of the subgraph of G induced by
C_Ω ∪ {x, y}. By lemma 3, there is a minimal separator T of G such that T' ⊆ T and
T ∩ (C_Ω ∪ {x, y}) = T'. We
deduce:
Theorem 3 Let Ω be a potential maximal clique and S be a minimal separator, active for Ω.
Let (S, C_Ω) be the block associated to S containing Ω. Then there is a minimal separator T of G
such that Ω = S ∪ (T ∩ C_Ω).
It follows easily that the number of potential maximal cliques containing at least one active
separator is polynomially bounded in the number of minimal separators of G. More exactly, the
number of these potential maximal cliques is bounded by the number of blocks (S, C) multiplied
by the number of minimal separators T, so by n|Δ_G|^2. Clearly, these potential maximal
cliques have a simple structure and can be computed directly from the minimal separators of
the graph.
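Reading Theorem 3 as saying that such a potential maximal clique is S together with the trace of some other minimal separator T on the block (S, C), the enumeration step can be sketched as follows in Python. The candidate shape S ∪ (T ∩ C) is our reading of the theorem, every candidate is re-checked with the recognition test, and the helpers components and is_potential_maximal_clique are the ones sketched earlier in this section.

def candidate_pmcs_with_active_separator(graph, minimal_separators):
    """Enumerate potential maximal cliques of the form S union (T intersect C),
    where (S, C) ranges over the blocks of a minimal separator S and T over
    the minimal separators -- at most n * |Delta_G|^2 candidates, as in the text."""
    found = set()
    for S in minimal_separators:
        for C in components(graph, S):                 # blocks (S, C)
            for T in minimal_separators:
                K = frozenset(S) | (frozenset(T) & frozenset(C))
                if is_potential_maximal_clique(graph, K):
                    found.add(K)
    return found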
Nevertheless, a potential maximal clique may not have active separators. For example,
in figure 2, the potential maximal clique contains the minimal separators ... and {a, c, d'}, but
none of them is active for it. Let us make a
first observation about the potential maximal cliques containing inactive minimal separators.
Figure 2: Active and inactive separators
Proposition 2. Let Ω be a potential maximal clique and S ⊂ Ω a minimal separator which is inactive for Ω. Let D_1, ..., D_p be the full components associated to S that do not intersect Ω. Then Ω is a potential maximal clique of the graph G \ (D_1 ∪ ⋯ ∪ D_p).
Proof. Let G' = G \ (D_1 ∪ ⋯ ∪ D_p). The connected components of G' \ Ω are exactly the connected components of G \ Ω, except for D_1, ..., D_p, and their neighborhoods in G' are the same as in G. It follows that the set S_{G'}(Ω) of the neighborhoods of the connected components of G' \ Ω is exactly S_G(Ω) \ {S}. Clearly, G' \ Ω has no full components associated to Ω. Since S is not active for Ω, we deduce that Ω is a clique in G'_{S_{G'}(Ω)}. So, by theorem 2, Ω is a potential maximal clique of G'.
5 Removing a vertex
Let G = (V, E) be a graph and a be a vertex of G. We denote by G' the graph obtained from G by removing a, i.e. G' = G \ {a}. We will show here how to obtain the potential maximal cliques of G using the minimal separators of G and G' and the potential maximal cliques of G'. By corollary 1, we know that G has at least as many minimal separators as G': for any minimal separator S of G', either S is a minimal separator of G, or S ∪ {a} is a minimal separator of G. It will follow that the potential maximal cliques of a graph can be computed in polynomial time in the size of the graph and the number of its minimal separators.
Proposition 3. Let Ω be a potential maximal clique of G such that a ∈ Ω. Then Ω' = Ω \ {a} is either a potential maximal clique of G' or a minimal separator of G.
Proof. Let C_1, ..., C_p be the connected components of G \ Ω and S_i be the neighborhood of C_i in G. We denote as usual by S_G(Ω) the set of all the S_i's. Remark that the connected components of G' \ (Ω \ {a}) are exactly C_1, ..., C_p, and their neighborhoods in G' are respectively S_1 \ {a}, ..., S_p \ {a}. Since Ω is a clique in G_{S_G(Ω)} (by theorem 2), it follows that Ω' = Ω \ {a} is a clique in G'_{S_{G'}(Ω')}. If G' \ Ω' has no full components associated to Ω', then Ω' is a potential maximal clique of G', according to theorem 2. Suppose now that C_1 is a full component associated to Ω' in G'. Since C_1 is not a full component associated to Ω in G, it follows that N_G(C_1) = Ω \ {a} = Ω'. Thus, Ω' is a minimal separator of G, by theorem 2.
Lemma 4. Let G be a graph and G̃ be any induced subgraph of G. Consider a potential maximal clique Ω of G̃. Suppose that for any connected component C of G \ G̃, its neighborhood N_G(C) is strictly contained in Ω. Then Ω is also a potential maximal clique of G.
Proof. Let C be any connected component of G \ G̃. We denote by Ṽ the set of vertices of G̃. We want to prove that Ω is a potential maximal clique of the graph G̃' = G[Ṽ ∪ C]. Indeed, the connected components of G̃' \ Ω are the connected components of G̃ \ Ω, plus C. The set S_{G̃'}(Ω) of their neighborhoods consists in {N_G(C)} ∪ S_{G̃}(Ω). Since N_G(C) is strictly contained in Ω, G̃' \ Ω has no full components associated to Ω. Obviously Ω is a clique in G̃'_{S_{G̃'}(Ω)}, so Ω is a potential maximal clique of G̃'.
The result follows by an easy induction on the number of connected components of G \ G̃.
Proposition 4. Let Ω be a potential maximal clique of G such that a ∉ Ω. Let C_a be the connected component of G \ Ω containing a and let S be the minimal separator of G such that S = N_G(C_a). If Ω is not a potential maximal clique of G', then S is active for Ω. Moreover, S is not a minimal separator of G'.
Proof. Suppose that S is not active for Ω. Let D_1, ..., D_p be the full components associated to S in G that do not intersect Ω. One of them, say D_1, is C_a. Let G'' be the graph obtained from G by removing the vertices of D_1 ∪ ⋯ ∪ D_p. According to proposition 2, Ω is a potential maximal clique of G''. Notice that G'' is also an induced subgraph of G'. Any connected component C of G' \ G'' is contained in some D_i, and its neighborhood in G' is included in S, hence strictly contained in Ω. It follows from lemma 4 that Ω is a potential maximal clique of G', contradicting our hypothesis. We deduce that, in the graph G, S is an active separator for Ω.
It remains to show that S is not a minimal separator of G'. We prove that if S is a minimal separator of G', then Ω would be a potential maximal clique of G'. Let C_a, C_2, ..., C_q be the connected components of G \ Ω and let S, S_2, ..., S_q be their neighborhoods in G. Then the connected components of G' \ Ω are C'_1, ..., C'_r, C_2, ..., C_q, with C'_j ⊆ C_a. Their neighborhoods in G' are respectively S'_1, ..., S'_r, S_2, ..., S_q, with S'_j ⊆ S. In particular, G' \ Ω has no full component associated to Ω and S_{G'}(Ω) contains every element of S_G(Ω), except possibly S.
Suppose that S is a minimal separator of G' and let D be a full component associated to S in G', different from the one containing Ω \ S. By remark 2, D is also a connected component of G' \ Ω, so S = N_{G'}(D) is an element of S_{G'}(Ω). Therefore, S_G(Ω) ⊆ S_{G'}(Ω), so Ω is a clique in the graph G'_{S_{G'}(Ω)}. We can conclude by theorem 2 that Ω is a potential maximal clique of G', contradicting our choice of Ω. It follows that S is not a minimal separator of G'.
The following theorem, which comes directly from propositions 3 and 4 and theorem 3, shows us how to obtain the potential maximal cliques of G from the potential maximal cliques of G' and the minimal separators of G.
Theorem 4. Let Ω be a potential maximal clique of G and let G' = G \ {a}. Then one of the following cases holds:
1. Ω = Ω', where Ω' is a potential maximal clique of G'.
2. Ω = Ω' ∪ {a}, where Ω' is a potential maximal clique of G'.
3. Ω = S ∪ {a}, where S is a minimal separator of G.
4. Ω = S ∪ (T ∩ C), where S is a minimal separator of G, C is a connected component of G \ S and T is a minimal separator of G. Moreover, S does not contain a and S is not a minimal separator of G'.
Corollary 3. Let G be a graph, a be a vertex of G and G' = G \ {a}. The number |Π_G| of potential maximal cliques of G is polynomially bounded in the number |Π_{G'}| of potential maximal cliques of G', the number |Δ_G| of minimal separators of G and the size n of G.
More precisely, |Π_G| ≤ |Π_{G'}| + |Δ_G| + n(|Δ_G| − |Δ_{G'}|)|Δ_G|.
Proof. We will count the potential maximal cliques of the graph G corresponding to each case of theorem 4.
Notice that for a potential maximal clique Ω' of G', only one of Ω' and Ω' ∪ {a} can be a potential maximal clique of G: indeed, a potential maximal clique of a graph cannot be strictly included in another one (see remark 3). So the number of potential maximal cliques of type 1 and 2 of G is bounded by |Π_{G'}|.
The number of potential maximal cliques of type 3 is clearly bounded by |Δ_G|.
Let us count now the number of potential maximal cliques of type 4, that can be written as S ∪ (T ∩ C). By lemma 3, for any minimal separator S' of G', we have that S' or S' ∪ {a} is a minimal separator of G. Clearly, the number of minimal separators of G of type S' or S' ∪ {a} with S' ∈ Δ_{G'} is at least |Δ_{G'}|. Our minimal separator S does not contain a and is not a minimal separator of G', so S is not of type S' or S' ∪ {a}, with S' ∈ Δ_{G'}. It follows that the number of minimal separators S that we can choose is at most |Δ_G| − |Δ_{G'}|. For each minimal separator S, we have at most n connected components C of G \ S and at most |Δ_G| separators T, so the number of potential maximal cliques of type 4 is at most n(|Δ_G| − |Δ_{G'}|)|Δ_G|.
Let now a_1, ..., a_n be an arbitrary ordering of the vertices of G. We denote by G_i the graph G[{a_1, ..., a_i}]; in particular G_1 has a single vertex. By corollary 3 we have that |Π_{G_{j+1}}| ≤ |Π_{G_j}| + |Δ_{G_{j+1}}| + n(|Δ_{G_{j+1}}| − |Δ_{G_j}|)|Δ_{G_{j+1}}| for any j. Notice that |Δ_{G_i}| ≤ |Δ_G|; in particular each graph G_i has at most |Δ_G| minimal separators. Clearly, the graph G_1 has a unique potential maximal clique. It follows directly that the graph G has at most n|Δ_G|² + n|Δ_G| + 1 potential maximal cliques.
Proposition 5. The number of the potential maximal cliques of a graph is polynomially bounded in the number of its minimal separators and in the size of the graph.
More precisely, a graph G has at most n|Δ_G|² + n|Δ_G| + 1 potential maximal cliques.
We give now an algorithm computing the potential maximal cliques of a graph. We suppose that we have a function IS_PMC(Ω, G), that returns TRUE if Ω is a potential maximal clique of G, FALSE otherwise.
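The paper treats this test as a black box. As an illustration only (our own Python sketch, not the authors' code, reusing the components helper sketched above and making no attempt to reach the complexity bound of corollary 2), the test can be based on the characterization recalled as theorem 2: G \ Ω has no full component associated to Ω, and every non-edge of Ω is covered by the neighborhood of some component.

    def is_pmc(graph, omega):
        """Test whether `omega` is a potential maximal clique of `graph`."""
        omega = set(omega)
        comps = components(graph, omega)          # components of G - omega
        nbhds = [{w for u in C for w in graph[u]} & omega for C in comps]
        # (i) no full component: each N(C) must be strictly contained in omega
        if any(N == omega for N in nbhds):
            return False
        # (ii) every non-edge of omega must be covered by some neighborhood N(C)
        for x in omega:
            for y in omega:
                if x != y and y not in graph[x]:
                    if not any(x in N and y in N for N in nbhds):
                        return False
        return True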
function ONE_MORE_VERTEX
Input: the graphs G, G' and a vertex a such that G' = G \ {a};
  the potential maximal cliques Π_{G'} of G', the minimal separators Δ_{G'}, Δ_G of G' and G.
Output: the potential maximal cliques Π_G of G.
begin
  Π_G := ∅
  for each p.m.c. Ω' ∈ Π_{G'}
    if IS_PMC(Ω', G) then Π_G := Π_G ∪ {Ω'}
    else
      if IS_PMC(Ω' ∪ {a}, G) then Π_G := Π_G ∪ {Ω' ∪ {a}}
      end_if
    end_if
  end_for
  for each minimal separator S ∈ Δ_G
    if IS_PMC(S ∪ {a}, G) then Π_G := Π_G ∪ {S ∪ {a}}
    end_if
    if (a ∉ S and S ∉ Δ_{G'}) then
      for each T ∈ Δ_G
        for each full component C associated to S in G
          if IS_PMC(S ∪ (T ∩ C), G) then Π_G := Π_G ∪ {S ∪ (T ∩ C)}
          end_if
        end_for
      end_for
    end_if
  end_for
  return Π_G
end
Table 1: Computing the p.m.c.'s of G from the p.m.c.'s of G'
The function ONE_MORE_VERTEX of table 1 computes the potential maximal cliques of a graph G from the potential maximal cliques of a graph G' = G \ {a}. This function is based on theorem 4. The main program, presented in table 2, successively computes the potential maximal cliques of the graphs G_1, G_2, ..., G_n = G. Notice that we can compute the vertex ordering such that each of the graphs G_i is connected.
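For readers who prefer code to pseudocode, the following Python sketch mirrors the structure of tables 1 and 2 (it reuses the components and is_pmc helpers from the sketches above; min_seps_of is an assumed black box returning the minimal separators of a graph, e.g. via the algorithm of Berry, Bordat and Cogis cited below; no attempt is made to match the stated complexity, and the candidate generation simply follows theorem 4).

    def induced_subgraph(graph, verts):
        verts = set(verts)
        return {v: graph[v] & verts for v in verts}

    def all_pmcs(graph, order, min_seps_of):
        """Skeleton of the main loop: add the vertices one at a time and
        maintain the set of potential maximal cliques."""
        pmcs = {frozenset([order[0]])}            # G_1 has a unique p.m.c.
        for i in range(1, len(order)):
            a, G_i = order[i], induced_subgraph(graph, order[:i + 1])
            seps = [frozenset(S) for S in min_seps_of(G_i)]
            new = set()
            for old in pmcs:                      # cases 1 and 2 of theorem 4
                for cand in (set(old), set(old) | {a}):
                    if is_pmc(G_i, cand):
                        new.add(frozenset(cand))
                        break
            for S in seps:                        # cases 3 and 4 of theorem 4
                if is_pmc(G_i, set(S) | {a}):
                    new.add(S | {a})
                if a not in S:
                    for C in components(G_i, S):
                        for T in seps:
                            cand = set(S) | (T & frozenset(C))
                            if is_pmc(G_i, cand):
                                new.add(frozenset(cand))
            pmcs = new
        return pmcs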
Theorem 5. The potential maximal cliques of a graph can be listed in polynomial time in its size and the number of its minimal separators.
More exactly, the potential maximal cliques of a graph are computable in O(n²m|Δ_G|²) time.
Proof. Let us analyze the complexity of the algorithm. The sets of vertex sets, like G and G ,
will be represented by trees, in such manner that the adjunction of a new element and testing
that a vertex set belongs to our set will be done in linear time (see for example [20]). We also
know by corollary 2 that a call of the function IS_PMC takes O(nm) time.
We start with the cost of one execution of the function ONE_MORE_V ERTEX .
The cost of the rst for loop is at most j 0
G jnm. But we can strongly reduce this complexity,
using a dierent test for verifying
that
respectively
are potential maximal cliques
main program
Input: a graph G
Output: the potential maximal cliques Π_G of G
begin
  let {a_1, ..., a_n} be the vertices of G
  Π_{G_1} := {{a_1}}
  for i = 1 to n − 1
    compute Δ_{G_{i+1}}
    Π_{G_{i+1}} := ONE_MORE_VERTEX(G_{i+1}, G_i, a_{i+1}, Π_{G_i}, Δ_{G_i}, Δ_{G_{i+1}})
  end_for
  return Π_G = Π_{G_n}
end
Table 2: Algorithm computing the potential maximal cliques
of G. Suppose that we want to check if a potential maximal
clique
0 of G 0 is also a potential
maximal clique of G. Any connected component C 0 of G 0
0 is contained in some connected
component C of
Gn
and we have NG 0 (C) NG (C).
Since
0 is a clique in the graph G S G
0 is a clique in the graph G
Therefore, all we have to check is that
Gn
0 has no full
connected components associated
to
0 , which can be done in linear time. Suppose now
thatis a potential maximal clique of G 0 and let us verify
if
[fag is a potential maximal clique
of G. Clearly, the connected components of
Gn
are the same as the connected components
of G 0
. The neighborhood NG (C) of a connected component of
Gn
is either NG 0 (C) or
It follows that
Gn
0 has no full components associated
to
and that any two
vertices x; y0 are adjacent in G SG
. It remains to check that, in the graph G SG
, a is
adjacent to any vertex x0 . This test can be done in linear time: by searching NG (a) and
the connected components C i of
Gn
with a 2 S i , we compute the vertices
of
0 adjacent to a
in G or connected to a via C i . We conclude that the cost of the rst for loop is O(mjG 0 j),
In the second for loop, computing the potential maximal cliques of type 3, i.e. of type
costs O(nmjG time. This is due to the cost of the G calls to function IS_PMC.
Remark that here we could also test in linear time
if
fag is a potential maximal clique
of G. Since S NG (C) for some connected component of
Gn
(see proof of proposition 3), we
only have to test that
Gn
has no full components associated
to
and that a is adjacent in
G SG
to every x 2 S. Anyway, this will not change the global complexity of the algorithm.
The call to function IS_PMC in the inner loop is done njG j(j G j jG 0 times. Indeed,
we have shown in the proof of corollary 3 that the number of minimal separators S 2 G such
that a 62 S and S 62 G 0 is at most jG j jG 0 j. The number of iteration of the second and
third loop are clearly jG j and respectively n. So the cost of all the calls to function IS_PMC
will be O(n 2 mjG j(j G j jG 0 j).
one execution of the the function ONE_MORE_V ERTEX takes at most O(nmjG
We can compute now the complexity of the main program. Computing the minimal separators
of a graph G can be done in O(n 3 jG time, using the algorithm of Berry, Bordat
and Cogis [4]. If we do this calculus one time for each graph G i , this would take O(n 4 jG j).
But notice that each graph G i is an induced subgraph of G. Consequently, for each minimal
separator S i of G i , there is a minimal separator S of G such that S g. We
can compute rst the minimal separators of the input graph G, in O(n 3 jG time. For computing
the minimal separators of a graph G i , we will take each S 2 G and we will verify if
is a minimal separator of G i . A verication of type can be done
in linear time: it is sucient to test that G i nS i has at least two full components associated to
Therefore, computing the minimal separators of all the graphs G i will not
exceed O(n 3 jG steps.
Remember that the i-th call of the function ONE_MORE_V ERTEX costs at most
time. Using the fact that for all i, jG i j jG j,
it follows that the n calls of the function ONE_MORE_VERTEX will take O(n 2 mjG
steps.
We conclude that the global complexity of the algorithm is O(n 2 mjG j 2 ).
We deduce directly from theorem 1, proposition 5 and theorem 5:
Theorem 6. The treewidth and the minimum fill-in of a graph can be computed in polynomial time in the size of the graph and the number of its minimal separators. The complexity of the
algorithm is O(n 3 jG
6 Conclusion
The notion of potential maximal clique seems to be very useful for the study of the treewidth and the minimum fill-in problems. We proved in [11] that the potential maximal cliques are sufficient for computing the treewidth and the minimum fill-in of a graph. In this paper, we enumerate the potential maximal cliques in polynomial time in the number of minimal separators of the input graph. In particular, this gives a polynomial algorithm computing the treewidth and the minimum fill-in for all the graphs with a polynomial number of minimal separators.
A class of graphs may have an exponential number of minimal separators and consequently an exponential number of potential maximal cliques. Notice that there is no such class of graphs for which the treewidth problem has been solved in polynomial time, except the graphs of bounded treewidth. For example, the problem is still open for planar graphs. We think that a polynomial number of well-chosen potential maximal cliques could permit us to compute, or at least approximate, the treewidth for classes of graphs with many minimal separators.
--R
Complexity of
An algebraic theory of graph reduction.
time algorithms for NP-hard problems restricted to partial k-trees
Generating all the minimal separators of a graph.
A tourist guide through treewidth.
A linear-time algorithm for finding tree-decompositions of small treewidth
Algorithmic techniques and results.
Reduction algorithms for constructing solutions of graphs with small treewidth.
Approximating treewidth
Algorithms for maximum matching and minimum
The monadic second-order logic of graphs III: Treewidth
Monadic second-order evaluations on tree-decomposable graphs
Algorithmic Graph Theory and Perfect Graphs.
Dynamic algorithms for graphs of bounded treewidth.
Computing treewidth and minimum
Erratum to the ESA'93 proceedings.
Treewidth of chordal bipartite graphs.
Listing all minimal separators of a graph.
Approximating the bandwidth for asteroidal
On treewidth and minimum
Graph minors.
Graph minors.
Triangulating graphs and the elimination process.
Treewidth of circular-arc graphs
Aspects algorithmiques des triangulations minimales des graphes.
Computing the minimum
--TR
Complexity of finding embeddings in a k-tree
time algorithms for NP-hard problems restricted to partial k-trees
The multifrontal method for sparse matrix solution
Monadic second-order evaluations on tree-decomposable graphs
An algebraic theory of graph reduction
Approximating treewidth, pathwidth, frontsize, and shortest elimination tree
Treewidth of Circular-Arc Graphs
Treewidth of chordal bipartite graphs
A Linear-Time Algorithm for Finding Tree-Decompositions of Small Treewidth
On treewidth and minimum fill-in of asteroidal triple-free graphs
Characterizations and algorithmic applications of chordal graph embeddings
Listing all Minimal Separators of a Graph
All structured programs have small tree width and good register allocation
Minimum fill-in on circle and circular-arc graphs
Linear-time register allocation for a fixed number of registers
Treewidth
Dynamic Algorithms for Graphs of Bounded Treewidth
Algorithms for Maximum Matching and Minimum Fill-in on Chordal Bipartite Graphs
Reduction Algorithms for Constructing Solutions in Graphs with Small Treewidth
Generating All the Minimal Separators of a Graph
Computing Treewidth and Minimum Fill-In
Approximating the Bandwidth for Asteroidal Triple-Free Graphs
Minimal Triangulations for Graphs with "Few" Minimal Separators
--CTR
V. Bouchitté, D. Kratsch, H. Müller, I. Todinca, On treewidth approximations, Discrete Applied Mathematics, v.136 n.2-3, p.183-196, 15 February 2004 | treewidth;potential maximal cliques;graph algorithms;minimal separators
568282 | Binary (generalized) post correspondence problem. | We give a new proof for the decidability of the binary Post Correspondence Problem (PCP) originally proved in 1982 by Ehrenfeucht, Karhumki and Rozenberg. Our proof is complete and somewhat shorter than the original proof although we use the same basic. Copyright 2002 Elsevier Science B.V. All rights reserved. | Introduction
Let A and B be two finite alphabets and h, g : A* → B* be two morphisms. The Post Correspondence Problem, PCP for short, is to determine if there exists a nonempty word w ∈ A* such that h(w) = g(w). It was proved by Post [8] that this problem is undecidable in general. Such a word w that h(w) = g(w) is called a solution of the instance (h, g) of the PCP.
In the binary PCP we assume that the size of the instance (h, g) is two, i.e., |A| = 2. This problem was proved to be decidable by Ehrenfeucht, Karhumäki and Rozenberg [2]. Here we shall give a new shorter proof to this binary case, although we use the same basic idea as [2]. Our proofs are combined from [4] and [3], and we have added details to the proof to make it easier to read. Also, although we restrict to the binary PCP, we shall achieve more information than is really needed for the binary case.
Note that it is also known that if |A| ≥ 7, then the PCP remains undecidable, see [7]. The decidability status is open for 3 ≤ |A| ≤ 6.
Another important problem is the generalized PCP, GPCP for short. It consists of two morphisms h, g : A* → B* and of words p_1, p_2, s_1, s_2 ∈ B*. The GPCP is to tell whether or not there exists a word w ∈ A* such that p_1 h(w) s_1 = p_2 g(w) s_2. Here again w is called a solution. We shall denote the instance of the GPCP by ((p_1, p_2), (h, g), (s_1, s_2)), where p_1 and p_2 are called the begin words and s_1 and s_2 are called the end words. Note that also for the GPCP it is known that it is decidable if |A| ≤ 2, see [2], and undecidable if |A| ≥ 7, see [6]. As for the PCP, the decidability status of the GPCP is open for the alphabet sizes between these two bounds.
The basic idea in [2] is that each instance (h, g) of the binary PCP is either
(1) periodic, i.e., h or g is a periodic morphism, or
(2) it can be reduced to an equivalent instance of the binary generalized PCP with marked morphisms,
and then it is proved that both of these two cases are decidable. Recall that a morphism h is called marked if the images of all letters begin with a different letter, i.e., h(x) and h(y) start with a different letter whenever x ≠ y.
For the decidability proof of the periodic case, see [2, 5]. We shall also present a proof in the next section. We shall give a new proof to the second case. In [2] it was proved that the binary GPCP is decidable for marked morphisms. This proof is by case analysis and it is rather long. We shall give here a new proof, which follows the lines of [3], where it was proved that the GPCP is decidable for marked morphisms with any alphabet size. Since here we shall concentrate only on the binary case, the decidability proof becomes more elementary and shorter than that in [3].
Our proof for the decidability of the marked binary GPCP uses the idea of reducing a problem instance to finitely many new instances such that at least one of these new instances has a solution if and only if the original one has. Then by iterating this reduction we shall finally get to (finitely many) new instances, where the decision is easy to do.
Note that in the PCP and the GPCP we may always assume that the image alphabet B is binary, since any B = {a_1, ..., a_n} can be injectively encoded into {0, 1}*. For example, the mapping a_i ↦ 1 0^i is such an encoding. Therefore in the binary case we shall assume that A = B = {0, 1}.
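As a small illustration (a Python sketch with invented helper names, not code from the paper), the following lines implement one such injective encoding, a_i ↦ 1 0^i, and apply it letter by letter.

    def binary_encoding(alphabet):
        # injective code a_i -> 1 0^i over {0, 1}; each codeword contains a
        # single 1, at its start, so words are uniquely decodable
        return {a: "1" + "0" * i for i, a in enumerate(alphabet, start=1)}

    def encode_word(word, code):
        return "".join(code[c] for c in word)

    # example: binary_encoding("abc") == {'a': '10', 'b': '100', 'c': '1000'}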
We shall first fix some notations. The empty word is denoted by ε. A word x ∈ A* is said to be a prefix of y ∈ A*, if there is z ∈ A* such that y = xz. This will be denoted by x ≤ y. A prefix of length k of y is denoted by pref_k(y). Also, if z ≠ ε in y = xz, then x is a proper prefix of y, and, as usual, this is denoted by x < y. We say that x and y are comparable if x ≤ y or y ≤ x.
A word x ∈ A* is said to be a suffix of y ∈ A*, if there is z ∈ A* such that y = zx. This will be denoted by x ⊴ y and, if z ≠ ε, then x is called a proper suffix, denoted by x ◁ y. If y = xz (respectively y = zx), then we also denote z = x^{-1}y (respectively z = y x^{-1}).
2 The periodic case
We shall begin with the easier part of the solution and consider first the instances of the (binary) PCP where one of the morphisms is periodic. To prove this result we shall need a lemma, which states a property of the one counter languages or context-free languages, see [1].
Lemma 1. Let φ : A* → Z be a monoid morphism into the additive group of integers and let R ⊆ A* be a regular language. It is decidable whether φ^{-1}(0) ∩ R = ∅.
Proof. Here the language φ^{-1}(0) is a one counter language and one counter languages are closed under the intersection with regular languages. The emptiness problem is decidable for one counter languages and even for context-free languages, see for example [9].
The proof of the next theorem is from [5], see also [2].
Theorem 1. The PCP is decidable for instances (h, g), where h is periodic.
Proof. Let h, g : A* → B* and assume that h is periodic, i.e., there is a word u such that h(a) ∈ u* for every a ∈ A. Define a morphism φ : A* → Z by φ(a) = |h(a)| − |g(a)| for all a ∈ A. Define a regular set R = g^{-1}(u*) \ {ε}. Now h(w) ∈ u* for every w, and w ∈ φ^{-1}(0) ∩ R if and only if w ≠ ε, g(w) ∈ u* and |h(w)| = |g(w)|. In other words, (h, g) has a solution if and only if φ^{-1}(0) ∩ R ≠ ∅. By Lemma 1 the latter property is decidable and therefore the claim follows.
Note that the above proof holds for all alphabet sizes, not only for the binary case.
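The construction in the proof can be made concrete. The Python sketch below (our own illustration; h and g are given as dicts from letters to words and u is the period of h) builds the morphism φ and a finite automaton for R = g^{-1}(u*) \ {ε}; the remaining emptiness test for φ^{-1}(0) ∩ R is the standard one-counter emptiness check of Lemma 1 and is not re-implemented here.

    def periodic_pcp_reduction(h, g, u):
        """Data for the decision in Theorem 1: phi(a) = |h(a)| - |g(a)| and a
        DFA for R = g^{-1}(u*) \ {eps}.  DFA states are 0..|u|-1 (position
        inside u) plus the sink -1; state 0 is initial and accepting for
        nonempty input."""
        phi = {a: len(h[a]) - len(g[a]) for a in h}
        m = len(u)
        def step(state, a):
            if state == -1:
                return -1
            for c in g[a]:
                if c != u[state]:
                    return -1
                state = (state + 1) % m
            return state
        delta = {(s, a): step(s, a) for s in range(m) for a in h}
        return phi, delta      # accepting runs: nonempty input ending in state 0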
3 From PCP to GPCP
Let h : {0, 1}* → {0, 1}* be a morphism that is not periodic. Define the mapping h^(1) by letting h^(1)(x), for x ∈ {0, 1}, be the cyclic shift of h(x): if h(x) = a u with a ∈ {0, 1}, then h^(1)(x) = u a. In other words, the images of h^(1) are the cyclic shifts of the images of h. Now define recursively h^(i+1) = (h^(i))^(1).
For any two words u, v ∈ A* it is well known that uv = vu if and only if u and v are powers of a common word. It follows from this that the maximum common prefix of h(01) and h(10) has length at most |h(01)| − 1.
Lemma 2. Let z_h be the maximum common prefix of h(01) and h(10) and let m = |z_h|. Then h^(m) is a marked morphism and h^(m)(w) = z_h^{-1} h(w) z_h for all w ∈ {0, 1}*. Moreover, for any w, if |h(w)| ≥ m, then z_h ≤ h(w).
Proof. We may assume by symmetry that jh(1)j jh(0)j. Assume rst that
clearly, h (m) (0) and h (m) (1) begin with dierent letters
by the maximality of the z h .
If m jh(0)j, then
juvj jh(0)j, and ux h(0) and We have two possibilities,
either m jh(1)j or m > jh(1)j.
Now h (m)
and since v and x begin with dierent letters, h (m) is marked, see also Figure
1.
We still need to prove that h (m) is a morphisms. This follows since
and
Therefore for all w 2 A , h (m)
h h(w)z h , and the last part of the
claim follows directly from this.
Figure 1: Case m ≥ |h(0)| in the proof of Lemma 2
Note that if h is already marked, then z_h = ε, m = 0 and h^(m) = h.
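A small Python sketch of this construction (our own illustration; h is a dict with keys '0' and '1', and the assertion encodes the property guaranteed by Lemma 2 for a nonperiodic binary morphism):

    def longest_common_prefix(x, y):
        i = 0
        while i < min(len(x), len(y)) and x[i] == y[i]:
            i += 1
        return x[:i]

    def marked_version(h):
        """Compute z_h and the marked morphism h^(m) satisfying
        z_h h^(m)(w) = h(w) z_h (Lemma 2)."""
        z = longest_common_prefix(h['0'] + h['1'], h['1'] + h['0'])
        hm = {}
        for a in h:
            w = h[a] + z
            assert w.startswith(z)      # guaranteed by Lemma 2 for nonperiodic h
            hm[a] = w[len(z):]
        return z, hm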
Let (h, g), where h, g : {0, 1}* → {0, 1}*, be an instance of the binary PCP. Assume further that h and g are nonperiodic. Let z_h and z_g be as above, m = |z_h| and n = |z_g|. We may assume by symmetry that m ≥ n. We now have the following lemma.
Lemma 3. The instance (h, g) of the binary PCP has a solution if and only if the instance ((z_g^{-1} z_h, ε), (h^(m), g^(n)), (ε, z_g^{-1} z_h)) of the binary GPCP has a solution.
Proof. It is obvious that if an instance (h, g) of the PCP has a solution, then z_g ≤ z_h. This can be seen if we assume that w is a solution such that |h(w)| ≥ m (such a solution exists, since solutions can be catenated): then z_h ≤ h(w) = g(w) and z_g ≤ g(w), so z_g ≤ z_h.
Assume first that the instance of the GPCP has a solution w, i.e.,
z_g^{-1} z_h h^(m)(w) = g^(n)(w) z_g^{-1} z_h,
and therefore
z_h h^(m)(w) z_h^{-1} = z_g g^(n)(w) z_g^{-1}.
This is true if and only if h(w) = g(w).
Assume then that (h, g) has a solution w. Since h^(m) and g^(n) are morphisms, we get that
h(w) = z_h h^(m)(w) z_h^{-1} and g(w) = z_g g^(n)(w) z_g^{-1},
and therefore
z_h h^(m)(w) z_h^{-1} = z_g g^(n)(w) z_g^{-1}.
This is true if and only if
(z_g^{-1} z_h) h^(m)(w) = g^(n)(w) (z_g^{-1} z_h).
This proves the claim.
4 Marked PCP
In this section we shall consider the solution method for the marked (binary) PCP. The proofs of the lemmata in this section are from [4], and we shall prove the results for all alphabet sizes.
A block of an instance I = (h, g), where h, g : A* → B*, of the marked PCP is a pair (u, v) ∈ A+ × A+ such that h(u) = g(v) and h(u') ≠ g(v') for all nonempty proper prefixes u' of u and v' of v. If there is no danger of confusion, we will also say that w = h(u) = g(v) is a block. A letter b ∈ B is a block letter if there is a block (u, v) such that b ≤ h(u) and b ≤ g(v). In other words, b is the first letter of the images of a block. Accordingly, a block is a minimal nontrivial solution of the equation h(x) = g(y).
Lemma 4. Let (h, g) be an instance of the marked PCP for h, g : A* → B*. Then for each letter a ∈ A, there exists at most one block (u, v) such that a ≤ u. In particular, the instance (h, g) has at most |A| blocks. Moreover, the blocks of (h, g) can be effectively found.
Proof. Consider any pair (u, v) of words such that h(u) and g(v) are comparable and h(u) ≠ g(v). Since h and g are marked, there exists a unique a ∈ A such that h(ua) and g(v) or h(u) and g(va) are comparable if h(u) < g(v) or g(v) < h(u), respectively. Since the morphisms are marked, it is clear that the first letter of u determines uniquely the first letter of v, and the claim follows from this inductively.
The latter claim is evident, since {u ∈ A+ | h(u) ∈ g(A+)} is a regular set.
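The block construction of Lemma 4 amounts to following, for every common first letter, the unique forced extension until the two images agree. The Python sketch below is an illustration only (it assumes h and g are marked and given as dicts from letters to nonempty words); it stops when an overflow repeats, in which case no block exists for that letter.

    def blocks(h, g):
        """Blocks of a marked instance (h, g), indexed by their block letter."""
        first_h = {h[a][0]: a for a in h}   # markedness: these maps are injective
        first_g = {g[a][0]: a for a in g}
        result = {}
        for b in set(first_h) & set(first_g):
            u, v = first_h[b], first_g[b]
            hu, gv = h[u], g[v]
            seen = set()
            while True:
                if not (hu.startswith(gv) or gv.startswith(hu)):
                    break                    # images incomparable: no block for b
                if hu == gv:
                    result[b] = (u, v)       # minimal nontrivial solution found
                    break
                if len(hu) < len(gv):        # h side lags: next h-letter is forced
                    c = gv[len(hu)]
                    if c not in first_h:
                        break
                    a = first_h[c]
                    u, hu = u + a, hu + h[a]
                else:                        # g side lags: next g-letter is forced
                    c = hu[len(gv)]
                    if c not in first_g:
                        break
                    a = first_g[c]
                    v, gv = v + a, gv + g[a]
                overflow = hu[len(gv):] if len(hu) >= len(gv) else gv[len(hu):]
                state = (len(hu) - len(gv), overflow)
                if state in seen:
                    break                    # same overflow seen twice: no block
                seen.add(state)
        return result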
Let I = (h, g) be an instance of the marked binary PCP with h, g : A* → B*, and let
A' = {b ∈ B | b is a block letter}.   (1)
Note that |A'| ≤ |A| although A' ⊆ B, since there are at most |A| blocks by Lemma 4.
We define the successor of I to be I' = (h', g'), where h' and g' are the morphisms from A'* to A* such that h'(a) = u and g'(a) = v, where (u, v) is the block for the letter a ∈ A'.
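Continuing the sketch above, the successor instance can be read off directly from the computed blocks (again only an illustration of the definition, using the same dict representation):

    def successor(h, g):
        """The successor I' = (h', g') of a marked instance: the new alphabet is
        the set of block letters, and h'(b), g'(b) are the two sides of the
        block for b."""
        blk = blocks(h, g)
        h_new = {b: u for b, (u, v) in blk.items()}
        g_new = {b: v for b, (u, v) in blk.items()}
        return h_new, g_new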
Lemma 5. Let I = (h, g) be an instance of the marked PCP and I' = (h', g') be its successor. Then
(i) I' is an instance of the marked PCP.
(ii) I has a solution if and only if I' has.
(iii) h(h'(w)) = g(g'(w)) for all w ∈ A'*.
Proof. (i): This is clear since the dierent block words for h (and g) begin
with dierent letters.
Assume that I has a solution w. Then w has two factorizations,
some letter a is a solution for I 0 ,
since h 0 (w
Assume that I 0 has a solution w . Then there are blocks
is a solution of I.
k. By the
denitions, for all x i there exists a block
Therefore the claim follows.
The definition of a successor gives inductively a sequence of instances I_i, where I_0 = I and I_{i+1} = I_i'. Note that the reduction of an instance I to its successor I' was already used in [2], but there the reduction was done only once. The difference here is that we shall iterate this reduction. The decidability of the marked PCP in [4] was eventually based on the fact that the successor sequence defined above has only finitely many distinct instances. The authors of [4] used two measures for an instance I of the marked PCP, namely the size of the alphabet and the suffix complexity
σ(I) = #{x ≠ ε | x is a proper suffix of h(a) or of g(a) for some a ∈ A}.
It is clear that for the alphabet sizes of I' and I we have |A'| ≤ |A|. Note that if we are studying the binary case, then we know that if the alphabet size decreases, then we get to the unary case, where the PCP becomes decidable. That the suffix complexity does not increase is not so straightforward.
Lemma 6. If I is an instance of the marked PCP and I' is its successor, then σ(I') ≤ σ(I).
Proof. Let
. Then there exists at least one block (u; v), where s 4 v. Let
H be a function, where p(s) is the z above with the minimal
length. By the markedness this z is unique and therefore p is an injective
function. Similarly we can dene an injective function from H 0 to G. The
claim follows by the injectivity.
The previous lemma together with |A'| ≤ |A| yields the following result.
Lemma 7. Let I be an instance of the marked PCP. Then there exist numbers n_0 and d such that I_{i+d} = I_i for all i ≥ n_0. The numbers n_0 and d can be effectively found.
The previous lemma means that after n_0 consecutive successors the instances begin to cycle: I_{n_0} = I_{n_0+d} = I_{n_0+2d} = ⋯.
Lemma 8. The sequence I_{n_0}, I_{n_0+1}, I_{n_0+2}, … has the following properties.
(i) The size of the alphabet is constant and σ(I_i) is constant for i ≥ n_0.
(ii) The instance I_0 of the marked PCP has a solution if and only if, for all i ≥ n_0, I_i has a one letter solution.
Proof. The case (i) follows from the denition of n 0 .
For (ii), we may assume that n By the proof of Lemma 5, case (ii),
for every solution x i to some I i , there is a solution x i+1 to I i+1 such that
solution of a minimum length
to I 0 . Now by the above relation between the solutions, there is a solution
x d to I d , where d is as in Lemma 7 such that
Since the g i and h i cannot be length-decreasing, we have jx 0 j jx d j. But
chosen to be a minimum length solution and x d is also a solution
to I d = I 0 , and therefore necessarily jx and the morphisms g 0 (=
the letters occurring in x d to letters.
But then the rst letter of x d is already a solution to I 0 and by the proof
of Lemma 5 all instances in the loop have a one letter solution. This proves
the case (ii).
Theorem 2. The marked PCP is decidable.
Proof. By constructing the successor sequence we will meet one of the following cases: (1) the alphabet size is one, (2) the suffix complexity goes to zero, or (3) we have a cyclic sequence. The first two are easy to decide, and by Lemma 8 we can decide the third case by checking whether there is a solution of length one, and the claim follows.
Note that we can also decide whether an instance of the marked PCP has a solution beginning with a fixed letter a, since we may map back the found one letter solutions as in the proof of Lemma 8 and check whether one of these begins with a.
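Combining the previous sketches, the decision procedure suggested by Lemmas 7 and 8 can be outlined in Python as follows (an illustration only: it assumes the input instance is marked, reuses successor from above, and simply iterates until a one-letter solution, an empty alphabet, or a repeated instance is met).

    def marked_pcp_decidable(h, g):
        """Decide the marked PCP for (h, g) by iterating the successor
        construction (sketch following Lemmas 7 and 8)."""
        seen = set()
        while True:
            if any(h[a] == g[a] for a in h):
                return True        # one-letter solution lifts back to the original
            if not h:
                return False       # no block letters left: no solution
            key = (tuple(sorted(h.items())), tuple(sorted(g.items())))
            if key in seen:
                return False       # the sequence cycles without a one-letter solution
            seen.add(key)
            h, g = successor(h, g)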
5 Block structure in the marked binary GPCP
The instances
of the (binary) GPCP can be reduced to instances, where
and since to have a solution we must have
We shall extract the denition of the successor of the marked (binary)
PCP to the marked (binary) GPCP. All denitions in this section apply for
any alphabet size, not only for binary, therefore we use arbitrary alphabet
A as the domain alphabet, and whenever we consider only the binary case
it shall be mentioned.
If the instance of the PCP is neither marked nor periodic, then we transform
it to an instance of the marked GPCP as was done in Section 3.
Assume that we have an instance
For we construct the blocks for (h; g) as in the case of the PCP. We
shall also construct the so called begin block (x; y), where p 1
and there does not exists r < x and s < y such that p 1
that the begin block is unique and, if
For the end words s called
an end block (or an block, to be precise) if h(u)s
not a block for any u 1 u and v 1 v. Let
is an end block and a h(u) or a g(v)g
be the set of all end blocks for the letter a 2 A.
Lemma 9. Let I be an instance of the marked
a be a xed letter. The set of end blocks
E a is a rational relation and can be eectively found. Moreover,
(i) If a is a block letter, E a is nite.
(ii) If E a is innite, then it is a union of a nite set and nite number of
sets
for some words u; v; x;
Proof. Without loss of generality we may assume that s ". The end blocks
can be found similarly as we found the blocks for a letter a: we check rst if
v. If so, ("; v) is an end block. Then we construct
the sequence
are always comparable (as in Lemma 4). Whenever h(u i )z
some z i s 1 , we can check if there is a word w i such that h(u i )s
If such a w i exists, it is unique because g is marked. Consequently
is an end block. Notice that i is not necessarily unique.
If a is a block letter for a block (u; v), then always u i u and v i v
and the sequence terminates. But then there are only nitely many
possible z i such that h(u i )z . The claim (i) follows
hereby.
By the above considerations, if E a is innite, then a is not a block letter,
and the sequence in order to get innitely many possible
z i . This is possible only if there are words x; such that
we have
end block is of form jyj, an end block can always
be written as (xu k u equivalently as (xu 0 (u 00
to get the desired form (here
since there are only nitely many prexes u 0 and v 0 and there are at most
potential end blocks. The rationality of E a follows from the
proofs for (i) and (ii).
We shall call (xu k ; yv k w) and (xu k w; yv k ) in Lemma 9 (ii) extendible end
blocks.
Let I = ((p_1, p_2), (h, g), (s_1, s_2)) be an instance of the marked GPCP. For a solution w of I, we say that the pair of factorizations
w = x u_1 u_2 ⋯ u_k u = y v_1 v_2 ⋯ v_k v
is a block decomposition for w, if (x, y) is the begin block, for each i = 1, ..., k, (u_i, v_i) is a block, and (u, v) is an end block.
Because the blocks are minimal solutions to h(u_i) = g(v_i), it is easy to see that the following lemma holds.
Lemma 10. Every solution w ∈ A* of I has a unique block decomposition.
Figure 2: Block decomposition of a solution w
Note that, since the block decomposition of a solution may consist only
of an end block, it is necessary to construct also the set
where (x; y) is the begin block. Moreover, if the begin block does not exist,
can be innite as in Lemma 9. This case will be studied in Lemma 12.
be an instance of the binary marked
GPCP. In the binary case we have three choices for a solution:
(0) There are no blocks in the solution.
(1) Exactly one block is used in the solution.
(2) Two blocks are used in the solution.
Here the expression 'used blocks' mean the number of dierent blocks in the
block decomposition.
We shall use the next lemma to prove that the solutions of the type (0)
and (1) can be eectively found.
Lemma 11. Let x; A be xed words. It is decidable, whether
the pair (xu k w; yv k z) is a solution to I for some k > 0, i.e., whether
Proof. If (xu k w; yv k z) contains a solution for some k, then
We obtain
where the left-hand side does not depend on k. Now if this equation holds,
then either or there is a unique k satisfying it. Therefore
we assume that since in the other case the uniqueness of k
guarantees the decidability.
Now if (4) holds for some k, then it holds for all k. And consequently
and the dierence jp 1 h(xu k w)j
constant. We may assume by symmetry that jp 1 h(x)j >
' be the least number such that jp 1 g(yv ' )j > jp 1 h(x)j. Now,
since the possible overow in p 1 h(xu k w) and p 2 g(yv k z) is unique
by the length argument.
We have proved that if there are at most '+1 dierent
cases to check for solutions. Clearly these instance can be decided, since we
have either one or ' to check whether
and xu k z. And since this ' can be eectively found, we have proved
the claim.
For the case (0) we prove
Lemma 12. Let I be an instance of the marked binary GPCP as above. It
is decidable, whether I has a solution of type (0). Moreover, it is decidable,
contains a solution.
Proof. If I has a solution w of type (0), then (w; w) is either in E p or
is the begin block and is an end block.
In other words, we need to check whether there is solution of these forms.
Consider the set E p rst. If E p is nite than the decision is easy.
Therefore we assume that there is an extendible end block (xu k w; yv k ) or
. By Lemma 11, it is decidable whether the extendible end
block contains a solution (note that z = "), and since, by Lemma 9(ii), there
may exist only nitely many extendible end blocks, we have completed the
rst part of the proof.
In the second case, the solutions of the form
is an end block, also reduces to Lemma 11.
Since the begin block is unique, the end blocks are the ones to consider. If
the number of end blocks is nite, then the decision is easy. And if there is
an extendible end block, say (xu k w; yv k ), then we search for the solution in
0, and this can be done by Lemma 11 (replace x
by
We can also prove that the solutions of the type (1) can be eectively
found. This is a consequence of Lemma 11.
Lemma 13. It is decidable, whether an instance of the marked binary GPCP
has a solution of type (1).
Proof. Assume that only one block is used in the solution, i.e., the solution
w is of the form is the begin block, (t; s)
is a block for some letter a and is an end block. Now for a xed end
decision, whether there is a solution in the (t 1 t ' t
for ' > 0, can be done by Lemma 11.
In the solutions of the type (1), the harder case seems to be the possible
extendible end block. Assume therefore that there is an extendible end block
9 and the fact that h and g are marked, the block
(t; s) and this extendible end block necessarily begin with dierent letters.
We should now decide whether for some ' and k,
solution. But also this case reduces to Lemma 11. We have two cases, assume
rst that t 1 t n 6= s 1 s n for all n. Then, for all n, there is a non-empty overow
the words are comparable. If the
words are not comparable for some n, then there is no solution for ' n.
Now the rst letter of r is what makes ' unique in this case. Assume that
there is such an ' for which we have a solution. Then for the rst letter
of r is equal to the rst letter of xu or yv, which is dierent from the rst
letter of t or s, respectively. Therefore t 1 t '+1 and s 1 s '+1 are not comparable,
and there cannot be solutions for the powers greater then this xed '.
We can eectively nd such an ', if we construct the pairs of words
In other
words, we construct the solution as the blocks. Now there are only nitely
many dierent overows in these pairs and if suitable possible overow exists
we can nd it. On the other hand, if no such overow exists, then we will
have a same overow twice or the pair is no longer comparable. And since '
is unique, we may replace x with
the decidability follows.
The other case is that, for some n,
there now is a solution, then necessarily t m xu k
and k. Moreover, if jtj 6= jsj, then m is unique as '
in the previous case if it exists. And if then it is enough to check,
whether which can be done by
Lemma 11.
As a corollary we get
Corollary 1. The unary GPCP is decidable.
Proof. Since all the solutions of the unary GPCP are of the type (0) or (1),
the claim follows from Lemmata 12 and 13.
From now on we shall concentrate on the type (2) solutions. Note that
in this case no extendible end block may occur by Lemma 9.
Next we dene the successors of the instances I
of the marked GPCP. Assume that the begin block (x; y) exists, and that
x y or y x and set p 0
Let be the successor of (h; g) and let (u; v) be any end block of I.
Then
I 0 (u;
is the successor of I w.r.t. (u; v), where (s 0
2 ) is dened as follows: if
Otherwise I 0 (u; v) is not dened.
Lemma 14. An instance I has a solution if and
only if the successor I 0 (u;
2 )) has a solution for
some end block (u; v). Moreover, each solution w to I can be written as
is a solution of I 0 , (x; y) is the begin
block and (u; v) an end block of I.
Proof. Assume rst that I has a solution w with the block decomposition
where for the letter a i , for 2 i k, is the begin
block and (u is an end block. Clearly u 1 v 1 or v 1 u 1 and
. If the rst cases hold, then p 0
and s 0
k+1 and I 0 (u;
i.e., I 0 (u; v) has a solution w
The other cases are similar.
Assume then that
I 0 (u;
has a solution w
is the begin block, and by Lemma 5 (iii),
and so xh 0 (w 0 is a solution of I.
Note that if the begin block does not exists, then there is no succes-
sors, and the only possible solutions are in E p , but this case is decidable by
Lemma 12. On the other hand, if the end words disappear, the instance is
decidable by the next lemma.
Lemma 15. Let I = ((p_1, p_2), (h, g), (s_1, s_2)) be an instance of the binary GPCP. For the cases where s_1 = s_2 = ε, the GPCP is decidable.
Proof. Let # be a new symbol not in {0, 1}. Extend the morphisms h and g in the following way: h(#) = # p_1 and g(#) = # p_2. Now (h, g) is an instance of the marked PCP, and we can decide whether or not it has a solution beginning with #.
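A possible reading of this extension, written as a small Python sketch (the exact images of # are our assumption, chosen so that #w is a solution of the marked PCP beginning with # exactly when w solves the original instance):

    def reduce_empty_endwords(p1, p2, h, g, marker="#"):
        """Sketch of the reduction in Lemma 15 (case s1 = s2 = eps): add a fresh
        symbol '#' with images '#'+p1 and '#'+p2."""
        H, G = dict(h), dict(g)
        H[marker] = marker + p1
        G[marker] = marker + p2
        return H, G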
6 Cycling instances
By Lemma 14 we can reduce an instance I to its successors for all end blocks.
The problem in this approach is that by Lemma 9, I potentially has innitely
many successors. But if there is an extendible end block, then the solutions
of the instance are necessarily of type (0) or (1) and these instances are
decidable by Lemmata 12 and 13. Therefore we may concentrate on the
case where there are no extendible end blocks and, since the unary GPCP is
decidable, the alphabet size is 2.
By Lemma 14, I has a solution if and only if one of the successors has. If
the sux complexity goes to zero at some step , then we can always decide
the successors letters a). Thus we can solve the
original problem. Otherwise, by Lemma 7, there is a number n 0 such that
the morphisms start to cycle.
Clearly to decide the marked binary GPCP it suces to show how to solve
these cycling instances, where the morphisms are binary.
By a successor sequence we mean a sequence
of instances of the marked GPCP such that each
is a successor of ((p (i)
Notice that if I
is the set of all pairs of
end words, is the set of all ith members in the successor sequences, we can
assume that
(A) There is a begin block for all i and
For, if the condition (A) does not hold, we know that no instance in I i+1
is dened and the only possible solutions are in E p and if (B) is not satised
by an instance, then that instance reduces to the marked PCP, which is
decidable by Lemma 15.
We shall next show how to treat the instances that begin to cycle, i.e.,
there exists a d such that for all successor sequences
We shall call such an instance I 0 a
loop instance, and d as the length of the loop. Notice that also (p (i)
(p (i+d)
2 ), since the begin words are constructed as the blocks and therefore
for some d the begin words and the morphisms are the same as in some
previous instance.
Notice that, since, by Lemma 8, the alphabet size does not decrease,
there is a block for both letters in f0; 1g. In particular, there cannot be
extendible end blocks.
Lemma 16. Assume that the instances cycle as in (5) and that a solution
exists. Then we have two cases:
(i) If p (0)
2 then the minimal solution of I 0 is w, where the initial
letter a of w satises h 0 (a) 6= g 0 (a). Hence h i (a) 6= g i (a) for all i 0.
(ii) If p (0)
, then a minimal solution w does not have a prex u such
that p (0)
Proof. The case (i) follows from Lemma 8, since the marked PCP has a
solution if there is a solution of length one. For the case (ii), if there exists
such a u, then p (k)
2 for some k , and therefore the same holds for
all j k. But, since the instance is cycling, (p (0)
t k, and we get a contradiction, since p (0)
.
Hereafter we will assume that p (0)
2 , since the case (i) reduces to
the (cycling) instances ((h(a); g(a)); h
for each a such that
We would like to have some upper bound for the lengths of the new end
blocks in the loop (5). We demonstrate that there is a limit number L such
that, if a solution exists, then the minimal solution is found in some sequence
shorter than L. Moreover, this limit can be eectively found, and the
main result follows from this.
In what follows, we assume that I = ((p has a minimal
solution such that p 1 which does not have u as a prex. Then
this minimal solution is unique, since we assumed that p 1 and the
morphisms are marked. Consequently each I has a unique end block (u; v)
in the block decomposition of the minimal solution. It follows that there
exists a unique successor sequence I of instances such that
I
where is the end block of the minimal solution of I i . This successor
sequence is called the branch of the minimal solutions. Note that we cannot
determine, which is the end block of the minimal solution, but the desired
limit will be obtained anyway.
Let I
be an instance in the branch of the
minimal solutions and w i be the minimal solution of I i . Recall that we
permanently assume that s (i)
2 , which implies that also
2 for each i.
Lemma 17. Let w i be the minimal solution of I i and let
(p (i)
each i.
Proof. The instances
I
share the begin block and the marked morphisms, so clearly w i w i+d or
since the minimal solutions cannot have u, such that p (')
(u), as a prex (recall that p (i)
2 )). If w is a minimal solution to
some instance I, then by Lemma 14, there is a solution w 0 to the successor
of I such that
(and p (i)
consequently jwj > jw 0 j, because the
morphisms are nonerasing. Hence jw Inductively, jw i+t
all t, which proves the claim.
As a byproduct we obtain
Lemma 18. If an instance occurs twice in a successor sequence, it has no
solutions.
Proof. By the proof of the previous lemma, the length of the minimal solution
decreases strictly.
be an instance of the marked binary
GPCP. We assume now, by symmetry, that s 2 4 s 1 . An end block (u; v) of
the instance I satises the equation
. If this is an end block of a solution, then necessarily
the successor of I has the end
words
Lemma 19. Let I
2 )) be the branch of the minimal
solutions of a cycling instance with loop length d. Let also w i be the
minimal solution of I i . Then p (i)
2 is a prex
of p (i)
Proof. It suces to take the proof is analogous for all other values.
Recall also that s (t)
each t. By Lemma 17, w d w 0 . Therefore
shall next prove
that jh 0 (w d )s (d)
)j. Assume on the contrary that jh 0 (w d )s (d)
)j. By the proof of Lemma 14,
and therefore
Hence
js (d)
This is a contradiction, since js (d)
It follows that jh 0 (w d )s (d)
similarly we can prove that jg 0 (w d )s (d)
loss of generality, we assume that s (d)
and since p (0)
are comparable and jh 0 (w d )s (d)
necessarily p (0)
s
Figure
3: Prex property
The previous lemma will be used in the proof our last lemma, which gives
an upper bound for the size of the end blocks in the branch of the minimal
solutions.
For an occurrence of a word u in g(w), its g-block covering in a solution w
of an instance ((p
that
1. is a factor of w,
2. u is a factor of z,
3. u is not a factor of g(v
4. for each i, g(v i
Note that a g-block covering for an occurrence of a factor u (in g(w)) is
unique. Hence we can dene the integer k to be the g-covering length of the
occurrence of u (in w).
Lemma 20. Let I
2 )) be the branch of the minimal
solutions of a cycling instance having loop length d. For all i d,
then the h i+1 -covering lengths of s (i+1)
2 are at
most the g i -covering length of s (i)
1 .
then the g i+1 -covering lengths of s (i+1)
2 are at most
the h i -covering length of s (i)Proof. By Lemma 19, for i d, the words s (i)and s (i)are covered by some
We will prove only the case (i), the other one is analogous. To simplify
the notations, we denote I = I
either I
We have to
show that in both cases, the h 0 -covering length of s 0 is at most the g-covering
length of s.
Assume that (u; v) is the end block of the minimal solution w of I. Let
also a u be the rst letter of u. Then or u 4 v,
which gives us two cases to consider.
(1) If words u and v in the equation
are obtained during the block construction for a letter a. Because there also
is a block for letter a, necessarily s 0 h 0 (a), i.e. the h 0 -covering length of s 0
is 1 (see
Figure
4).
Figure
4: Picture of the case (1)
(2) Assume then that
rst that the g-covering length of the word g(s 0 ) is at most that of the word
s. This is clear, because g(s 0 ) shares with s every one of its block factors
(including the rst one, since as in the case (1), h(u) is covered by a
single g-block longer than h(u) (see Figure 5 for an illustration)),. We show
then that the h 0 -covering of s 0 is not longer than the g-covering of g(s 0 ), from
Figure
5: Block covering of )g(u). The vertical lines illustrate
the block covering.
which the claim follows. Let w 0 be the minimal solution of I 0 . Then the word
is a beginnning block for I, satises
and consequently w is a prex of the minimal solution of I. To show that the
-covering of s 0 is not longer than g-covering of g(s 0 ), it is sucient to show
that the block borderlines in dh 0 (w cutting s 0 can be mapped
injectively to block borderlines in p 1 h(w)g(s
Figure
6). Let now y 0 w 0 be a word that determines a block borderline in
and eg 0 (w
Now the word prex of w, since
and g is marked. But y also determines a block borderline in word p 2
cuts g(s 0 ), because
and hence p 1 h(w) p 2 g(y). Notice nally that the word y determines z 0
uniquely, since g is injective and z 0 determines y 0 by eg 0 (z
that h is injective).
The previous lemma gives us a tool for recognizing instances which are
not in the branch of minimal solutions. Let I 0 be a cycling instance with loop
length d and consider all the instances I d found by the rst d reductions. If
I 0 has a solution then there is a unique I 2 I d in the branch of the minimal
solutions.
Let M be the maximal g- or h-covering length of all the end words s 1
and s 2 in I d . It now follows by Lemma 20 that in the branch of the minimal
Figure
Relation between g(s 0 ) and s 0
solutions the g i or h i -covering length is always less than or equal to M . For
a sequence of cycling instances, the sux complexity is constant
since the blocks of an instance I i are the images of the successor I i+1 , the
block length can never be more than 1. By the previous lemma we
have
Corollary 2. Let I be the branch of the minimal solutions of
a cycling instance with loop length d. For each i d, the end words of I i are
not longer than M((I) 1).
7 Decidability results
Now we are ready to prove our main results.
Theorem 3. The binary marked GPCP is decidable.
Proof. We have already proved that the marked binary GPCP is decidable in the unary case and that the solutions of the type (0) and (1) can be found. It remains to be shown how to find the type (2) solutions, i.e., how to solve the binary marked GPCP for the cycling instances I_0.
A cycling instance has the blocks for both letters and p_1^(i) ≠ p_2^(i) for all successors. In particular, there are no extendible end blocks and only finitely many successors. The successor relation naturally defines a tree T having I_0 as the root, all the successors of I_0 as the vertices, and the pairs (I, I'(u, v)) as the edges.
The decision procedure is based on constructing T partially by rst inserting
the vertices having depth (the distance from the root) at most d
and then computing the number M , the maximal covering length of the end
words of instances at the depth d. For all vertices we check whether there
are solutions of type (0), (1) or an end block (u; v) 2 E p such that
And for all vertices ", we can
always decide if they have a solution by Lemma 15. If some such vertex I
has no solution, then I and all the successors of I can be removed. On the
other hand, if some such I has a solution, then I 0 also has a solution and
the procedure may stop.
For the vertices having depth greater than d, the (partial) construction
of T is more specic: Only the successors I
are inserted. By Corollary 2, the branch of
minimal solutions is included in the partial construction.
But now there are only nitely many instances to be inserted, so each
path (successor sequence) in the partially constructed T will eventually contain
an instance twice, thus I 0 has no solution by Lemma 18, unless some
vertex has a solution for some vertex (u; u)
As we saw in Section 3, the binary PCP is decidable if and only if the binary marked GPCP is. Therefore Theorem 3 has the following corollary.
Theorem 4. The binary PCP is decidable.
Proof. First, if one of the morphisms is periodic, the instance can be decided by Theorem 1, and if the instance is marked, then it can be decided by Theorem 2. Otherwise we construct the equivalent instance of the binary marked GPCP. The binary marked GPCP is decidable by Theorem 3. The decision procedure achieved reduces an instance of the binary marked GPCP to finitely many simpler equivalent instances. By continuing this reduction for each reduced instance we create a successor tree, where the decision is done in each path separately according to the following seven rules:
(i) If we get unary successors, then we can decide these successors by Corollary 1.
(ii) If we get an extendible end block, then the solutions are of the type (0) or (1) and these cases can be decided by Lemmata 12 and 13, respectively.
(iii) If we get an end block (u, u), then s_1' = s_2' = ε in the successor, and this is decidable by Lemma 15.
(iv) If we get an instance which already occurred in the path, then the instances in this path cannot have a solution by Lemma 18.
(v) If the lengths of the end words exceed the computable limit M(σ(I_0) + 1), then we do not have to continue this branch, since it is not in the branch of minimal solutions by Corollary 2.
(vi) If we get an end block (u, u) in E_p, then we have a solution. Also, if there is no begin block, then the possible solutions are in E_p. These cases are decidable by Lemma 12.
(vii) If there are no end blocks, then there are no solutions.
8 Conclusions and open problems
We have proved that in the binary case the Post Correspondence Problem is decidable. Our solutions are based on the construction of the successors, which is equivalent to the original instance in the decidability sense. Then, after doing this reduction sufficiently many times, we obtain instances where the decision is easy to do.
We note that an instance of the binary GPCP can also be reduced to an instance of the binary marked GPCP using almost similar arguments as in Lemma 3. Therefore we also gave a new proof of the decidability of the binary GPCP.
As open problems we state the following immediate questions:
- Decidability of the PCP and the GPCP in the ternary case, i.e., |A| = 3.
- Decidability of the strongly 2-marked PCP. A morphism is strongly 2-marked if each image of a letter has a unique prefix of length 2. See also [4].
There is also a very important open question concerning the form of the solutions of the binary PCP.
- Let (h, g) be an instance of the binary PCP, where h is nonperiodic. Is it true that all the solutions are from the set {u, v}* for some, possibly equal, words u and v? See also [5].
--R
The (generalized) Post correspondence problem with lists consisting of two words is decidable.
Generalized PCP Is Decidable for Marked Morphisms.
Decidability and Undecidability of Marked PCP.
Morphisms. In Handbook of Formal Languages.
Remarks on generalized Post Correspondence problem.
Decision problems for semi-Thue systems with a few rules
A variant of a recursively unsolvable problem.
Formal Languages.
--TR
Formal languages
Morphisms
Marked PCP is decidable
Remarks on Generalized Post Correspondence Problem
Decision Problems for Semi-Thue Systems with a Few Rules
--CTR
Vesa Halava, Tero Harju, Juhani Karhumäki, Decidability of the binary infinite post correspondence problem, Discrete Applied Mathematics, v.130 n.3, p.521-526, 23 August
Vesa Halava, Tero Harju, Juhani Karhumäki, Michel Latteux, Extension of the decidability of the marked PCP to instances with unique blocks, Theoretical Computer Science, v.380 n.3, p.355-362, June, 2007
Vesa Halava, Tero Harju, Juhani Karhumäki, Undecidability in ω-Regular Languages, Fundamenta Informaticae, v.73 n.1,2, p.119-125, April 2006 | marked morphisms;decidability;binary post correspondence problem;post correspondence problem;generalised post correspondence problem
568285 | Decidability of EDT0L structural equivalence. | We show that a tree pushdown automaton can verify, for an arbitrary nondeterministically constructed structure tree t, that t does not correspond to any valid derivation of a given EDT0L grammar. In this way we reduce the structural equivalence problem for EDT0L grammars to deciding emptiness of the tree language recognized by a tree pushdown automaton, i.e., to the emptiness problem for context-free tree languages. Thus we establish that structural equivalence for EDT0L grammars can be decided effectively. The result contrasts the known undecidability result for ET0L structural equivalence. | Introduction
Context-free type grammars G 1 and G 2 are said to be structurally equivalent if, corresponding to
each syntax tree of G 1 producing a terminal word, the grammar G 2 has a syntax tree with the
same structure, and vice versa. The structure of a syntax tree t is the leaf-labeled tree obtained
from t by removing the nonterminals labeling the internal nodes. The importance of the notion of
structural equivalence for context-free grammars is due to the fact that it can be decided effectively
[12, 14, 18] whereas language equivalence is undecidable.
This work has been supported by the Natural Sciences and Engineering Research Council of Canada grants
OGP0041630, OGP0147224.
y Department of Computing and Information Science, Queen's University, Kingston, Ontario K7L 3N6, Canada.
Email: ksalomaa@cs.queensu.ca
z Department of Computer Science, University of Western Ontario, London, Ontario N6A 5B7, Canada. Email:
syu@csd.uwo.ca
Structural equivalence remains decidable also for parallel context-free (E0L) grammars [15, 16,
17, 25]. Surprisingly it was shown in [23] that when the parallel derivations are controlled by a
finite set of tables (in an ET0L grammar), structural equivalence is undecidable. We cannot even
decide whether an E0L grammar and an ET0L grammar having just two tables are structurally
equivalent.
Here we show that structural equivalence becomes decidable if the tables of the grammars are
restricted to be homomorphisms, that is, we have EDT0L grammars. Thus for the structural
equivalence problem we cross the borderline between undecidable and decidable when restricting
the tables of the grammar to be homomorphisms. Our proof uses automata theoretic methods
but differs considerably from the automata theoretic decidability proofs for E0L structural equivalence
[25] and ET0L strong structural equivalence [10]. The E0L structure trees, as well as ET0L
structure trees augmented with the information about the control-sequence used, can be recognized
deterministically bottom-up using a tree automaton model for which equivalence is decidable. This
appears not possible for EDT0L structure trees since the arbitrary choice of the sequence of tables
makes the derivation inherently nondeterministic.
We reduce EDT0L structural equivalence to the emptiness problem for the tree pushdown
automata of Guessarian [8]. These automata recognize exactly the context-free tree languages and
emptiness can be decided algorithmically. We show that, given EDT0L grammars G 1 and G 2 , a
tree pushdown automaton can verify that a nondeterministically guessed structure tree of G 1 does
not correspond to any valid derivation of G 2 .
The decidability proof relies strongly on nondeterminism and an actual algorithm following the
proof requires multiple exponential time. It is seen easily that EDT0L structural equivalence is
PSPACE-hard, so one cannot expect to find a very efficient algorithm. It has been shown in [24]
that E0L structural equivalence is hard for deterministic exponential time. For the EDT0L case
we have not obtained an exponential time lower bound.
2 Preliminaries
We assume that the reader is familiar with the basics of formal language theory [29]. We briefly
recall some definitions concerning parallel context-free type grammars and tree automata. For
more information regarding parallel grammars the interested reader is asked to consult [21], and
regarding tree automata we refer the reader to [5, 6].
The cardinality of a finite set A is denoted by #A and the power set of A is ℘(A). Sometimes we identify a singleton set {a} with a. The sets of positive and nonnegative integers are denoted, respectively, by IN and IN₀.
The set of finite words over A is A*, the set of nonempty finite words is A⁺, and λ denotes the empty word. The length of w ∈ A* is |w|. Let A₁, ..., A_k be finite sets. We define the projection mappings Π_{A_i} : A₁ × ··· × A_k → A_i, 1 ≤ i ≤ k, by setting Π_{A_i}(a₁, ..., a_k) = a_i.
A tree domain D [6] is a nonempty finite subset of IN* that satisfies the following two conditions:
(i) If u ∈ D, then every prefix of u is in D.
(ii) For every u ∈ D there exists rank_D(u) ∈ IN₀ such that ui ∈ D if and only if 1 ≤ i ≤ rank_D(u). (If rank_D(u) = 0, the node u has no successors.)
An A-labeled tree is a mapping t : dom(t) → A, where dom(t) is a tree domain. A node u ∈ dom(t) is said to be labeled by t(u) ∈ A. A node v is a successor (respectively, an immediate successor) of u if u is a proper prefix of v (respectively, v = ui for some i ∈ IN). We assume that notions such as the height, the root, a leaf, an internal node, and a subtree of a tree t are known. The height of t is denoted hg(t). We use the convention that the height of a one-node tree is zero. The subtree of t at node u is t/u.
By the level of a node u ∈ dom(t) we mean the distance of u from the root, i.e., |u|. Clearly, the maximal level of a node of t is hg(t). The tree t is said to be balanced if all leaf nodes of t have the same level. The set of level k subtrees of t, sub_k(t), 0 ≤ k ≤ hg(t), is defined as
    sub_k(t) = { t/u : u ∈ dom(t), |u| = k }.
Note that sub_k(t) is a set of trees as opposed to a set of occurrences of subtrees. Thus sub_k(t) contains only one copy of each tree t' that occurs as a subtree defined by a level k node of t. Also sub_0(t) = {t}.
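To make the tree notation concrete, the following Python sketch (all names hypothetical, not taken from the paper) represents a labeled tree as a dictionary from node addresses to labels and computes the set sub_k(t) of distinct level-k subtrees.

    # A labeled tree as a dict: node address (tuple of positive ints) -> label.
    # The empty tuple () is the root; node u + (i,) is the i-th immediate successor of u.
    tree = {
        (): "σ", (1,): "σ", (2,): "σ",
        (1, 1): "a", (1, 2): "b", (2, 1): "a", (2, 2): "b",
    }

    def subtree_at(t, u):
        """Return the subtree of t rooted at node u, re-addressed from the root."""
        n = len(u)
        return {v[n:]: lab for v, lab in t.items() if v[:n] == u}

    def level_subtrees(t, k):
        """sub_k(t): the *set* of distinct subtrees rooted at level-k nodes."""
        subs = [subtree_at(t, u) for u in t if len(u) == k]
        distinct = []
        for s in subs:                      # deduplicate (dicts are unhashable)
            if s not in distinct:
                distinct.append(s)
        return distinct

    print(len(level_subtrees(tree, 1)))     # 1: both level-1 subtrees are equal

The example tree has two level-1 nodes whose subtrees coincide, so sub_1(t) contains a single tree, illustrating that sub_k(t) counts subtrees, not occurrences.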
In our decidability proof we use the top-down tree pushdown automaton model of Guessarian [8].
Below we give a brief informal description of this model which will be sufficient for our purposes.
An interested reader can find the formal algebraic definition in [6, 8]. A tree pushdown automaton
A is an extension of a finite tree automaton where each copy of the finite-state control has access to
an auxiliary pushdown store. The automaton begins the computation at the root of the input tree,
in a given initial state q₀ and with an initial symbol in the pushdown store. When A is in a state q at a node u labeled by b in the input tree and has Z as the topmost stack symbol, then, depending on the tuple (q, b, Z), A can either (i) change the internal state and pop Z from the stack, (ii) change the state and push some symbol onto the stack, or (iii) move to the m immediate successor nodes of u in some states q₁, ..., q_m, sending to each of the m nodes a copy of the pushdown stack. (The node u is assumed to have rank m.) The automaton accepts an input tree t if (in some nondeterministic computation) it reaches all leaves of t in an accepting state. The tree language recognized by A is
denoted T (A).
In general, the automata of [8] can employ a tree structure in the stack. The automaton model
described above is the so called restricted tree pushdown automaton of [8]. Both the restricted
and general tree pushdown automata recognize exactly the family of context-free tree languages
[3, 4, 6, 20]. Given a context-free tree language T (in terms of a tree pushdown automaton or a
context-free tree grammar), we can effectively construct an indexed grammar that generates the
yield of T . Since emptiness is decidable for indexed grammars [1, 2], we have the following result.
Proposition 2.1 We can decide effectively whether the tree language recognized by a given tree
pushdown automaton is empty.
3 Structural equivalence
We recall and invent some notations concerning parallel context-free grammars [21] and the structure
trees of their derivations [23, 24].
An ET0L grammar G is a tuple
    G = (V, Σ, s₀, H),   (1)
where V is a finite alphabet of nonterminals, Σ is a finite alphabet of terminals, s₀ ∈ V is the initial nonterminal, and H is a finite set of tables. A table h ∈ H is a finite set of rewrite rules a → w, a ∈ V, w ∈ (V ∪ Σ)*. (We do not allow rewriting of terminals.) The grammar G is an EDT0L (deterministic ET0L) grammar if every table h ∈ H contains exactly one rule with left side a, for each nonterminal a ∈ V. Thus, h is a morphism V → (V ∪ Σ)* and h(a) denotes the right side of the rule of h having the nonterminal a as the left side. The grammar G is an E0L grammar if H contains only one table. The grammar is said to be propagating if the right side of any production is not the empty word, i.e., h(a) ≠ λ for all h ∈ H and a ∈ V.
In this paper we will be dealing mainly with the structure trees of EDT0L derivations. Below
we define structure trees for arbitrary ET0L grammars since the definition is not essentially simpler
in the deterministic case. In the remainder of this section, G is always an ET0L grammar as in (1).
Let F_G denote the set of (V ∪ Σ ∪ {λ̄})-labeled trees. Here λ̄ is a new symbol that will be used to label nodes corresponding to the empty word. We define the parallel derivation relation of G, →_par, as the union of the relations →^h_par, h ∈ H, which are defined as follows. Let t₁, t₂ ∈ F_G. Then t₁ →^h_par t₂ if t₂ is obtained from t₁ as follows. Assume that t₁ has m leaves u₁, . . . , u_m labeled by nonterminals b₁, . . . , b_m ∈ V. (Note that if some leaf of t₁ is labeled by a terminal, the derivation cannot be continued from t₁.) For each i = 1, . . . , m, choose a rule
    b_i → a^i_1 · · · a^i_{k_i} ∈ h,   (2)
with a^i_1, . . . , a^i_{k_i} ∈ V ∪ Σ, k_i ≥ 0. If k_i ≥ 1, then in t₂ the node u_i has k_i successors labeled respectively by the symbols a^i_1, . . . , a^i_{k_i}. If k_i = 0, i.e., if the right side of the chosen rule is the empty word, then in t₂ the node u_i has one successor labeled by the symbol λ̄. Every leaf of t₁ labeled by λ̄ remains a leaf of t₂.
We denote by t₀ the singleton tree with the only node labeled by the initial nonterminal s₀. The set of syntax trees S(G) of G is defined by
    S(G) = { t ∈ F_G : t₀ →*_par t }.
A syntax tree t ∈ S(G) is terminal if all leaves of t are labeled by elements of Σ ∪ {λ̄}. The set of terminal syntax trees of G is denoted TS(G).
In case G is an EDT0L grammar, the rule (2) is determined uniquely by the table h. We call words over the alphabet H control-sequences. Given an EDT0L grammar G and a control-sequence ω = h₁h₂ · · · h_n ∈ H*, we denote by G(ω) the syntax tree obtained from the initial nonterminal by applying the sequence of tables specified by ω. Thus G(ω) is the unique tree t such that
    t₀ →^{h₁}_par t₁ →^{h₂}_par · · · →^{h_n}_par t_n = t,   (3)
if a tree t as in (3) exists, and otherwise G(ω) is undefined.
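As an illustration of G(ω), the following Python sketch encodes a hypothetical EDT0L grammar by its tables (total morphisms given as dictionaries of replacement strings) and builds the syntax tree determined by a control-sequence; it is not part of the paper and ignores the synchronization convention discussed in the next paragraph.

    # Hypothetical EDT0L grammar G = (V, Sigma, s0, H); each table is a morphism.
    V, Sigma, s0 = {"S", "B"}, {"a", "b"}, "S"
    H = {
        "h1": {"S": "SB", "B": "B"},   # table h1
        "h2": {"S": "a",  "B": "b"},   # table h2 (rewrites everything to terminals)
    }

    def syntax_tree(tables, symbol, control):
        """G(omega) below 'symbol': apply the control sequence of tables in order.
        A tree is a pair (label, children); terminals and the empty-word marker
        become leaves."""
        if not control or symbol in Sigma:
            return (symbol, [])
        rhs = tables[control[0]][symbol]
        if rhs == "":                              # erasing rule: single λ̄-leaf
            return (symbol, [("λ̄", [])])
        return (symbol, [syntax_tree(tables, c, control[1:]) for c in rhs])

    def yield_of(tree):
        """Catenate the leaf labels, erasing the empty-word marker λ̄."""
        label, children = tree
        if not children:
            return "" if label == "λ̄" else label
        return "".join(yield_of(c) for c in children)

    t = syntax_tree(H, s0, ["h1", "h1", "h2"])
    print(yield_of(t))                              # 'abb', a word of L(G)

The same sketch can be used to enumerate L(G) up to a bounded control-sequence length by collecting yield_of over all control-sequences whose syntax trees are terminal.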
If G is propagating, then in every syntax tree t of G all paths from the root to a leaf have
the same length, i.e., t is balanced. For non-propagating grammars all paths from the root to
a leaf labeled by an element of V [ \Sigma have the same length. Note that our definition does not
allow the rewriting of terminal symbols, i.e., we assume that the grammars are synchronized [21].
This is not a restriction since an arbitrary E(D)T0L grammar can be easily transformed into
a synchronized E(D)T0L grammar in such a way that the transformation preserves structural
equivalence of grammars. The transformation (for E0L grammars) is explained in [15].
For t ∈ F_G, the yield of a syntax tree t, yield(t), is obtained by catenating, in the natural left-to-right order of the leaves, the labels of the leaves of t and then erasing the occurrences of λ̄, i.e., by applying the morphism e_λ̄ defined by setting e_λ̄(a) = a for a ∈ V ∪ Σ and e_λ̄(λ̄) = λ.
For a syntax tree t, yield(t) is the sentential form generated by G in the derivation corresponding
to t. The language L(G) generated by G consists of the terminal words generated by G, i.e., L(G) = { yield(t) : t ∈ TS(G) }.
Clearly the above definition is equivalent to the standard definition of the language generated by
an E(D)T0L grammar [21].
The structure of a syntax tree t ∈ S(G), str(t), is the tree obtained from t by relabeling each internal node with σ, where σ is a new symbol not in V ∪ Σ. We denote STS(G) = { str(t) : t ∈ TS(G) }.
Elements of STS(G) are called (terminal) structure trees of G. When G is known, we sometimes
speak simply about (\Sigma-)structure trees since the leaves are labeled by elements of \Sigma. Note that the
rules of G determine the maximal number of immediate successors of any node of a structure tree
of G. Thus, of course, the alphabet \Sigma does not by itself determine the set of \Sigma-structure trees.
Grammars G₁ and G₂ are said to be language equivalent if L(G₁) = L(G₂). It is well known that language equivalence is undecidable already for context-free grammars. Here we shall consider the following two more restricted notions of equivalence. Let G₁ and G₂ be ET0L grammars. The grammars G₁ and G₂ are
• structurally equivalent if STS(G₁) = STS(G₂);
• syntax equivalent if the sets of syntax trees of G₁ and G₂ are equal modulo a renaming of the nonterminals.
Note that syntax equivalence implies structural equivalence, and structurally equivalent grammars
in turn are always language equivalent. Syntax equivalence is incomparable with the notion of
strong structural equivalence [10] for ET0L grammars. Both syntax and structural equivalence
are decidable for context-free and E0L grammars [7, 12, 14, 15, 18, 25]. (We have not formally
defined these notions for sequential context-free grammars here, but the definitions are analogous
to the parallel case.) Syntax equivalence and strong structural equivalence are decidable also for
ET0L grammars but ET0L structural equivalence is undecidable [10, 23]. Here we will consider the
structural equivalence problem for the deterministic ET0L grammars.
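The decision procedure developed in Section 4 is automata theoretic, but the notion of structural equivalence itself can be illustrated by a naive brute-force comparison over bounded control-sequences. The Python sketch below (hypothetical grammars G1 and G2, not the paper's algorithm) compares the structures of terminal syntax trees obtained from all control-sequences up to a fixed length.

    from itertools import product

    # Two hypothetical EDT0L grammars over Sigma = {'a'}; a structure tree is
    # coded as nested tuples (internal node = tuple of children, leaf = symbol).
    Sigma = {"a"}
    G1 = {"s0": "S", "tables": [{"S": "SS", "A": "A"}, {"S": "a", "A": "a"}]}
    G2 = {"s0": "T", "tables": [{"T": "TT"}, {"T": "a"}]}

    def structure(grammar, symbol, control):
        """Structure of the syntax tree below 'symbol' for the given control
        sequence, or None if it is not terminal."""
        if symbol in Sigma:
            return symbol
        if not control:
            return None                        # a nonterminal leaf remains
        rhs = grammar["tables"][control[0]][symbol]
        kids = tuple(structure(grammar, c, control[1:]) for c in rhs) if rhs else ("λ̄",)
        return None if any(k is None for k in kids) else kids

    def terminal_structures(grammar, max_len):
        trees = set()
        for n in range(1, max_len + 1):
            for control in product(range(len(grammar["tables"])), repeat=n):
                t = structure(grammar, grammar["s0"], list(control))
                if t is not None:
                    trees.add(t)
        return trees

    print(terminal_structures(G1, 4) == terminal_structures(G2, 4))   # True here

Such a bounded check is of course no decision procedure; it only makes the objects STS(G₁) and STS(G₂) that are compared in this paper tangible.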
4 The main result
We show that EDT0L structural equivalence can be decided effectively. First we introduce some
notations concerning the width of trees. Intuitively, a structure tree is said to have width M if it
has at most M distinct subtrees at any level.
Definition 4.1 A set {t₁, . . . , t_m} of Σ-structure trees is said to have subtree-width M (∈ IN) if, for all k ≥ 0, the trees t₁, . . . , t_m together have at most M distinct subtrees at level k, i.e.,
    #( sub_k(t₁) ∪ · · · ∪ sub_k(t_m) ) ≤ M   for all k ≥ 0.
In the above definition, note that sub_k(t_j) is a set of (Σ ∪ {σ})-labeled trees, i.e., its elements do not consist of occurrences of subtrees in t_j. Note also that the subtree-width M does not need to be the minimal number of distinct subtrees at any given level, i.e., if a set of trees has subtree-width M, then it has width M′ for all M′ ≥ M.
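The following Python sketch (assuming the nested-tuple encoding of structure trees used in the previous sketch; names are illustrative) computes the least such M for a single tree, i.e., the maximal number of distinct subtrees occurring at any level.

    # Subtree-width of a single structure tree given as nested tuples:
    # leaves are symbols, internal nodes are tuples of their children.
    def levels(t, k=0, acc=None):
        acc = {} if acc is None else acc
        acc.setdefault(k, set()).add(t)
        if isinstance(t, tuple):
            for child in t:
                levels(child, k + 1, acc)
        return acc

    def subtree_width(t):
        """The least M such that t has at most M distinct subtrees at every level."""
        return max(len(s) for s in levels(t).values())

    balanced = ((("a", "b"), ("a", "b")), (("a", "b"), ("a", "b")))
    print(subtree_width(balanced))   # 2: at most two distinct subtrees per level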
We code the structure of an arbitrary derivation of an EDT0L grammar G 1 as a string. Then
using the string encoding in its stack, a tree pushdown automaton can verify in one computation that
no control-sequence of another grammar G 2 generates the same structure tree. The construction
relies essentially on the fact that the failure of an EDT0L derivation with respect to a given control-
sequence can be checked by following only one (nondeterministically chosen) path of the tree.
Lemma 4.1 Let G = (V, Σ, s₀, H) be an EDT0L grammar. Then there exists M ∈ IN such that, for every control-sequence ω ∈ H*, the structure tree str(G(ω)) has width M.
Proof. Since the tables of H are homomorphisms, it follows that always when nodes u, v ∈ dom(G(ω)) have the same length and are labeled by the same nonterminal, then G(ω)/u = G(ω)/v. Hence, choosing, e.g., M = #V + #Σ + 1 guarantees that str(G(ω)) has width M. □
Structure trees having constant subtree-width can be coded as strings where the ith symbol from the left codes the information which of the level i subtrees are direct descendants of which of the level i − 1 nodes. The ith symbol also codes the order of occurrences of level i subtrees. The number of distinct level i subtrees is bounded by a constant and, furthermore, we need to consider only a constant number of level i − 1 nodes that correspond to pairwise different subtrees.
Below we define the above described coding of structure trees having subtree-width M and
prove a regularity property of such codings (in Lemma 4.3) for propagating EDT0L grammars
only. The restriction to propagating grammars is done just to avoid unnecessarily complicated
notations. Afterwards we explain how the result can be straightforwardly extended for grammars
allowing erasing productions.
Definition 4.2 Let propagating EDT0L grammar and let
For each M 2 IN we define the
set\Omega\Gamma M) to consist of tuples
and - is a mapping of
j such that
every element of occurs in some tuple -(j);
The set of final
f (M) is defined to consist of tuples
and - is a mapping of into
. Note that here \Sigma j is the set
of ordered j-tuples of elements of \Sigma (and not the set of strings of length j).
A sequence W in \Omega\Gamma M)
is said to be well-formed if m
Consider an s-tuple of structure trees denotes the structure
tree where the level one subtrees, from left to right, are t . (This is just the standard algebraic
notation for trees where we allow the symbol oe to have variable arity.)
Corresponding to a well-formed sequence W as in (7) we define inductively an m 1;1 -tuple of
\Sigma-structure trees \Xi(W ). We say that the well-formed sequence W represents \Xi(W ). First a final
represents the m 1 -tuple of structure trees
For the inductive definition, denote by W 0 the suffix of W obtained by deleting the first symbol
assume that
Note that m since W is well-formed. Denote - 1
Example 4.1 Let is the tree given in Figure 1. Choose
A
A
A
A A
#c
c
c
c
c
c
c c
A
A
A
A A
a b b b a b
oe oe oe oe
oe oe
oe
Figure
1.
The following lemma can be proved using induction on the maximal height of the trees t
Lemma 4.2 Let M 2 IN be fixed. Let be a set of \Sigma-structure trees having subtree-width
M such that max 1-i-m there exists a well-formed sequence W 2 \Omega\Gamma M)
as in (7) such that
Furthermore, m i;1 is the cardinality of the set of level
A word W 2 \Omega\Gamma M)
\Omega f (M) is said to be simple if the first symbol of W is of the form (1; m;-),
. If W is simple and well-formed, then \Xi(W ) is a one-tuple (t) where t is a \Sigma-structure tree
and, in practice, we identify \Xi(W ) with t.
As an immediate consequence of Lemma 4.2 we have:
Corollary 4.1 For every \Sigma-structure tree t having subtree-width M there exists a simple well-formed
sequence W 2 \Omega\Gamma M)
\Omega f (M) such that \Xi(W t.
The following lemma states that given a simple well-formed sequence W 2 \Omega\Gamma M)
\Omega f (M) and
a control-sequence ! of an EDT0L grammar G, a finite automaton can determine whether or not
Lemma 4.3 Let propagating EDT0L grammar and M 2 IN. Let the
alphabets
and\Omega f (M) be as in Definition 4.2. We denote by L the set of words - 2
such that
\Omega f (M) is simple and well-formed,
Here we denote the mappings \Pi
simply by \Pi i , 2.
We claim that L is a regular language.
Proof. Condition (i) can be easily verified by a finite automaton. Hence it is sufficient to show
that given - 2
satisfying (i), a finite automaton A can verify whether
or not (ii) holds. (Note that if (i) does not hold, then \Xi(\Pi 1 (-)) is not necessarily defined or may
denote an ordered tuple of trees.)
The set of states of A is
and the initial state is fs 0 g. Assume that A is in a state (U
and reading an input symbol
goes to the rejecting state rej. From our construction it follows that this is
possible only when condition (i) does not hold. Assume then that
then A goes to the accepting state acc. Also, A goes to the accepting state acc if for some x 2 U i ,
After this A reads the rest of the input verifying only that (i) holds.
The remaining possibility is that for all x 2 U is of the form (s
. In this case after reading the symbol
goes to the state (Z where the sets Z are
constructed as follows. For each and each x 2 U i do the following. If
then add the element a to the set Z s j , that each of the sets Z
will be nonempty because - satisfies the condition (5).
Intuitively, U i consists of all elements of V that the derivation of G (following the morphisms
read so far in the second components of the input) reaches at nodes corresponding to the
ith subtree representative at this, say the kth, level of \Xi(\Pi 1 (-)). Thus if condition (8) holds, the
derivation reaches some level k node u of \Xi(\Pi 1 (-)) with a symbol a 2 V such that the number
of immediate successors of u is not equal to jh(a)j. If condition contains a
terminal symbol. Thus a derivation using next the table h cannot have the structure \Xi(\Pi 1 (-))
and condition (ii) holds. The case where conditions (8) and (9) do not hold corresponds to the
situation where the parallel derivation step determined by h at level k does not immediately violate
the structure of the tree. Then the sets Z are constructed to consist, respectively, of all
nonterminals that will appear in the m 2 subtree representatives at the following level.
It remains to define the operation of A when it reaches a final symbol [(m
H] in a state (U Following the above idea this is done so that A accepts exactly then
when the derivation step h produces for some leaf representative b 2 \Sigma a wrong terminal symbol
(or a nonterminal).
The possibility corresponds again to a situation where (i) does not hold. We need to
consider only the possibility
then the derivation step h produces correct terminal symbols at all leaves. This means that (ii)
does not hold and A rejects. On the other hand, A enters the accepting state if for some x 2 U i ,
The above Lemma 4.3 was formulated and proved for propagating grammars only. However,
exactly the same proof works also for general EDT0L grammars: the possibility of having erasing
productions just adds at most one more subtree representative to each level of the structure tree.
We can modify Definition 4.2 so that in the symbols (m (as in (4)) - is a partial function
where -(i) is undefined if i represents a node having - as the immediate successor (or in such cases
-(i) is defined to be a new symbol not belonging to g.) The proof of Lemma 4.3 is then
simply modified by dividing the conditions (8) and (9) into cases depending on whether -(i) is
defined or not. Thus we can prove the following:
Lemma 4.4 The statement of Lemma 4.3 holds without the assumption that G is propagating.
Now we can show that a tree pushdown automaton can in a single computation verify whether
all possible derivations of a given EDT0L grammar violate a given structure tree of constant width.
Lemma 4.5 Let G₁ and G₂ be EDT0L grammars. Let M be a constant guaranteed for G₁ by Lemma 4.1. Then we can effectively construct a tree pushdown automaton A such that T(A) ≠ ∅ if and only if there exists
    t ∈ STS(G₁) − STS(G₂)   (10)
such that t has width M.
Proof. Denote k. The tree pushdown automaton A receives as inputs trees where each
internal node has exactly k immediate successors. The internal nodes are labeled by a symbol a 0 .
The set of stack symbols is (\Omega\Gamma M)
Assume that
Then the automaton A accepts the balanced k-ary input tree of
At the beginning of the computation, A nondeterministically pushes into
the stack a word (ff
(The top of the stack is to the left.)
Intuitively, the stack contents is guessed so that
\Omega f (M)) is simple
and well-formed and it satisfies the following property. If we denote
is as in (11). By Corollary 4.1, there exists W satisfying (12).
When reading an input symbol, A always pops the topmost stack symbol and the remaining
stack contents is forwarded to the k successor nodes. After the initial nondeterministic guesses A
does not push any more symbols into the stack.
The states of A consist of two components that operate in parallel. The first component verifies
that the condition (12) holds. As in the proof of Lemma 4.4 we see that this can be done using
only a finite-state memory. The first component operates identically on all paths of the input tree,
that is, it ignores the input symbols and treats the initial stack contents as input.
On the path to a leaf of the input, 1 the second component
of A verifies that
Again from the proof of Lemma 4.4 it follows that this is possible using the finite-state control of
A. The second component of A ignores the second components h of the stack symbols.
Hence, on different paths of the input A verifies that \Xi(W ) is not the structure of a syntax tree
control-sequence ! 0 of length verifies that \Xi(W ) 62 STS(G 2 ). On the
other hand, each branch of the computation has to consume the entire stack so an accepted input
tree is necessarily balanced. Thus A accepts some input tree if and only if there exists a tree t such
that (10) holds and t has subtree-width M . 2
Note that in the proof of Lemma 4.5 it is essential that the guessed instance of t 2 STS(G 1 ) in
the pushdown stack has a string encoding, although the general tree pushdown automaton model of
[8] allows, in fact, trees also in the stack. It can be shown that a tree pushdown automaton cannot
nondeterministically push a balanced tree of arbitrary height into the stack [22], so one could not
directly store an arbitrary t 2 STS(G 1 ) in the tree stack at the beginning of the computation.
Furthermore, simulating the derivations of G 2 given by different control-sequences directly on the
tree t would have the following problem. The paths leading to "failure of the derivation of G 2 in
t" may branch out earlier than the corresponding control-sequences branch out in the input tree.
More specifically, the tree pushdown automaton A has to find, for each control-sequence ! of G 2 ,
at least one path - ! in t that leads to failure. The control-sequences correspond to different paths
in the input and it is, in general, possible that two control-sequences have a very long
common prefix whereas the corresponding paths
in t branch out already at the root
of t. Situations like this do not cause problems when A has a string encoding \Xi(W ) of t in the
pushdown stack. Then A can simulate on all paths of the input all distinct subderivations of G 2
within the structure of t.
Combining Lemmas 4.1 and 4.5 and Proposition 2.1 we have proved the following result. Note
that the constant M in Lemma 4.1 is independent of the control-sequence chosen.
Theorem 4.1 For given EDT0L grammars G₁ and G₂ we can decide effectively whether or not STS(G₁) = STS(G₂), i.e., whether G₁ and G₂ are structurally equivalent. □
Exactly as in the proof of Lemma 4.5, for given EDT0L grammars G₁ and G₂ the nonemptiness of the difference of their sets of (terminal) syntax trees can be reduced to deciding whether a tree pushdown automaton recognizes a
nonempty tree language. Since the number of nonterminals of G 1 and G 2 is finite, we have a new
proof for the decidability of the syntax equivalence. The result follows also from [23].
Theorem 4.2 Syntax equivalence is decidable for EDT0L grammars. 2
5 Discussion and open problems
The decidability results for syntax equivalence, structural equivalence and strong structural equivalence
of E(D)(T)0L grammars are summarized in Table 1. In the table D stands for "decidable"
and U for "undecidable". The decidability of strong structural equivalence for ET0L grammars is
proved in [10]. For E0L grammars, this notion coincides with structural equivalence. Language
equivalence is naturally undecidable for all the cases. Note that the E0L and EDT0L language
families are incomparable.
          Syntax equiv.   Structural equiv.   Strong struct. equiv.
  E0L     D               D                   D
  EDT0L   D               D                   D
  ET0L    D               U                   D
Table 1: Decidability of syntax and [strong] structural equivalence.
The proof of Theorem 4.1 gives only a multiple exponential time algorithm for EDT0L structural
equivalence. We do not know what is the exact complexity of the problem. The deterministic
exponential time hardness result obtained in [24] for E0L structural equivalence cannot be used, at
least not directly, to prove a similar lower bound for the complexity of EDT0L structural equiva-
lence. On the other hand, we cannot expect to obtain an efficient algorithm for the EDT0L case
since it is known that already the structural equivalence problem for linear grammars is PSPACE-complete
[9], and structural equivalence for linear grammars is easily logspace reducible to EDT0L
structural equivalence. Note that when the sentential forms have only one nonterminal (or any
constant number of occurrences of nonterminals), an EDT0L grammar can simulate a context-free
derivation simply by having a different table for each rule.
Intuitively, the decidability proof of the previous section relies on the following two properties
of EDT0L derivations:
(i) the current nonterminal and the remaining control-sequence determine uniquely a subderivation
(ii) all control-sequences generate the structure tree one level at a time.
These properties enabled us to produce a string encoding W of a structure tree such that the failure
of all possible control-sequences to produce this structure can be verified by a finite automaton that
reads W and the control-sequence in parallel.
The necessity of both conditions (i) and (ii) can be illustrated by considering the Indian parallel
grammars. An IP grammar is a context-free grammar with the derivation relation defined so
that at each derivation step one rewrites all occurrences of one (nondeterministically chosen) non-terminal
b in the given sentential form using the same rule with left side b. The other nonterminals
are not rewritten. For the formal definition the reader may consult [2, 19, 26, 27]. It is well known
that languages generated by IP grammars are strictly included in the EDT0L languages.
When the sequence of rules used in a derivation of an IP grammar is viewed as a control-
sequence, the IP grammars clearly have the above property (i). However, different sequences of rules
can generate distinct parts of a given structure tree in completely different order, and no analogy
of condition (ii) seems to hold for IP grammars. Thus in spite of the fact that EDT0L grammars
are strictly more powerful than IP grammars in terms of the family of generated languages, it
does not appear possible to use the proof method of the previous section to decide the structural
equivalence problem for IP grammars. We conjecture that IP structural equivalence is decidable.
For the E0LIP grammars of [11] structural equivalence can be shown to be decidable exactly as
in the proof of Theorem 4.1. (E0LIP grammars combine the Indian parallel and E0L rewriting
mechanisms: at each derivation step all occurrences of every nonterminal are rewritten using the
same rule.)
Russian parallel (RP) grammars [2, 13, 28] extend IP grammars by allowing also (sequential)
context-free derivation steps. The decidability of the RP structural equivalence problem remains
open.
--R
An extension of the context-free case
Regulated rewriting in formal language theory.
IO and OI
Grammars with macro-like productions
Tree automata (Akad'emiai Kiad'o
in: Handbook of Formal Languages
System Sci.
Pushdown tree automata
The strong equivalence of ET0L grammars
A study in parallel rewriting systems
A characterization of parenthesis languages
On some grammars with global productions (in Russian)
A normal form for structurally equivalent E0L grammars
Defining families of trees with E0L grammars
Simplifications of E0L grammars
Some classifications of Indian parallel languages
Mappings and grammars on trees
The Mathematical Theory of L Systems (Academic Press
Deterministic tree pushdown automata and monadic tree rewriting systems
Complexity of E0L structural equivalence
Decidability of structural equivalence of E0L grammars
Parallel context-free languages
Parallel context-free languages
Decomposition theorems for various kinds of languages parallel in nature
Theory of Computation (John Wiley
--TR
Deterministic tree pushdown automata and monadic tree rewriting systems
Some classifications of Indian parallel languages
Decidability of structural equivalence of EOL grammars
Defining families of trees with E0L grammars
Tree languages
Parenthesis Grammars
Theory of Computation
Regulated Rewriting in Formal Language Theory
Mathematical Theory of L Systems
Structural Equivalences and ET0L Grammars (Extended Abstract) | formal languages;tree pushdown automata;parallel grammars |
568337 | CSP, partial automata, and coalgebras. | The paper presents a first reconstruction of Hoare's theory of CSP in terms of partial automata and related coalgebras. We show that the concepts of processes in Hoare (Communicating Sequential Processes, Prentice-Hall, Englewood Cliffs, NJ, 1985) are strongly related to the concepts of states for special, namely, final partial automata. Moreover, we show how the deterministic and nondeterministic operations in Hoare (1985) can be interpreted in a compatible way by constructions on the semantical level of automata. Based on this, we are able to interpret finite process expressions as representing finite partial automata with designated initial states. In such a way we provide a new method for solving recursive process equations which is based on the concept of final automata. The coalgebraic reconstruction of CSP allows us to use coinduction as a new proof principle. To make evident the usefulness of this principle we prove some example laws from Hoare (1985). | Introduction
For people usually working on model theory or semantics of formal specifications
it becomes often very hard to approach the area of process calculi and
process algebras.
There are processes without any physical basis. There is no difference
between concepts as machine, process, agent, state, and system. There is syntax
without semantics. There is no difference between processes and process
expressions. And so on.
The paper is devoted to make some steps to overcome these difficulties.
In contrast to the area of process calculi we insist on the clear intuition that
there is an essential difference between the concepts system (machine, agent),
state, and process, respectively. A system has different states and processes
are devoted to describe the (observable) behavior of systems, where two states
can be observed to be different indeed by the fact that different processes start
in these states.
This paper will be published in
Electronic Notes in Theoretical Computer Science, Volume 19
URL: www.elsevier.nl/locate/entcs
The aim of the paper is to make evident that CSP can be interpreted as
a theory of processes for special (deterministic and nondeterministic) partial
automata. The theory that allows to bring CSP and automata into a common
perspective is the theory of coalgebras [6]. We show the coincidence between
the concepts of processes in [4] and the concepts of states in final automata
(coalgebras). Moreover, we analyse how far the constructions and operations
in [4] on the level of processes can be related to and justified by corresponding
compatible constructions on the level of arbitrary automata. This analysis
will put many of the informal arguments and intuitions of Hoare to a formal
semantical level.
We insist also on a clear distinction between the concept of process and the
concept of process expression. Traditionally, process expressions are used for
a (finite) syntactical representation of processes and the algebraic laws in [4]
tell which process expressions denote the same process. Process expressions,
however, can be also seen in a compatible way as syntactical representations
of (finite) automata with initial states. Compatibility means that the process
starting in the corresponding initial state coincides with the process represented
by the same process expression. This observation offers a new method
to solve recursive process equations: A recursive process equation describes
a finite automaton with an initial state and the image of this state with respect
to the unique homomorphism into the final automaton (with processes
as states) is the solution of the recursive equation. We draw attention to the
fact that there is no need to impose a cpo structure on processes to describe
the solution of recursive equations by means of fixed point constructions in
cpo's. Within the coalgebraic approach the fixed point construction can be
seen as being shifted to an external level and as made only once, namely, if we
describe the final automaton (coalgebra) as the result of a category theoretic
fixed point construction.
We hope that the integrated view to CSP, automata, and coalgebras developed
in this paper will be a step in achieving unifications of theories in
computing science as advocated by Hoare in [5]. Such an integrated view,
however, has also a value for its own: It becomes easier to explain and to teach
process calculi. By relating operations on the level of processes to constructions
on the level of automata the possible and adequate scope of applications
of CSP becomes more clear for "users". Finally, I believe that a satisfactory
formal treatment of a phenomenon in computing requires to consider it
from different viewpoints and to understand well the transitions between these
different viewpoints.
Since the paper tries to bridge two apart areas it is written mainly for
two kinds of readers. Reader familiar with coalgebraic reasoning as presented,
e.g., in [6], can read the paper as an introduction to and an explanation of
basic concepts and ideas of CSP. Technically, there will be nothing really new
concerning the theory of coalgebras. Reader familiar with CSP or other process
calculi should be also able to read the paper. To convince this kind of
reader of the practical relevance of category theoretic and coalgebraic reasoning
we analyse the category theoretic fixed point construction of final partial
automata in some detail. Besides this, the paper is self-contained in a way that
anybody interested in the theory of processes can read it with some benefit.
The paper is organized as follows. In section 2 we introduce the concept
of deterministic process according to [4] and try to make apparent the strong
relationship to the concept of deterministic partial automaton. Thereby, it
turns out that processes are related to the curried version of partial automata,
as studied in [10], thus a coalgebraic treatment of processes appears to be quite
natural.
Section 3 explores the insight in [10] that (partial) automata in its curried
version should be considered as special coalgebras. We show how the general
category theoretic fixed-point construction of final coalgebras applies to deterministic
partial automata and that these general construction provides a
reasonable model of deterministic processes which turns out to be isomorphic
to the mathematical model presented in [4].
Section 4 makes evident that Hoare defines the interaction of processes
in a coalgebraic manner. Moreover, we show that interaction of processes
corresponds on the semantical level to the synchronization of automata, i.e.,
the processes in an arbitrary synchronized automaton can be desribed by the
interaction of the processes of the single components.
In section 5 we discuss Hoare's treatment of branching and internal non-determinism
which is based on the idea of acceptance (refusal) sets. We show
that the concept of nondeterministic processes in CSP corresponds to the
concept of deterministic filter automata.
Section 6 provides a semantical interpretation of the nondeterministic operations
in [4] on the level of automata and describes the elimination of internal
actions in automata.
Finally, Hoare's treatment of divergence is analysed in section 8. We
show that this treatment is based on a mixture of coalgebraic and algebraic
techniques.
We close the paper with some conclusions and remarks for further work.
2 Deterministic processes and automata
Fortunately, and in contrast to other presentations of processes, [4] possesses a mathematical rigour which allows us to start immediately a more semantically
oriented analysis of the proposed concept of process. Firstly, Hoare assumes
for any process P a fixed set A of events (actions) in which the process may
engage. A is called the alphabet of P and is also denoted by ffP . The process
with alphabet A which never actually engages in any of the events of A is
called STOPA .
Secondly, Hoare provides a clean notation for processes. The process
which first engages in the event a ∈ A = αP and then behaves exactly as the process P is denoted by (a → P).
Omitting brackets is allowed by the convention that → is right associative. In such a way a simple vending machine VMA that successfully serves two customers with chocolate before breaking can be described by the following process expression
    coin → choc → coin → choc → STOPA ,
where αVMA = {coin, choc}. The process which initially engages in either of the distinct events a₁, . . . , a_n ∈ A and which, after one of these alternative first events a_i has occurred, behaves exactly as the process P_i, is denoted by
    (a₁ → P₁ | · · · | a_n → P_n),
where we assume αP₁ = · · · = αP_n = A and define A to be also the alphabet of (a₁ → P₁ | · · · | a_n → P_n). Note, that the process denoted by the process expression (a₁ → P₁ | · · · | a_n → P_n) is deterministic as long as the processes P₁, . . . , P_n are deterministic since the events a₁, . . . , a_n are required to be distinct.
A machine VMB that serves either chocolate or toffee before breaking can be described now by the process expression
    coin → (choc → STOPA | tof → STOPA),
where αVMB = {coin, choc, tof}.
Thirdly, Hoare states that every deterministic process P with alphabet A may be regarded as a function F with a domain B ⊆ A, defining the set of events in which the process P is initially prepared to engage; and for each a in B, the deterministic process F(a) defines the future behavior of the process P if the first event was a. This means that every deterministic process P ∈ DPA can be uniquely described by a partial function F : A ⇀ DPA with domain B, where DPA stands for the set of all deterministic processes with alphabet A.
Globally considered, Hoare assumes, in such a way, the existence of a
bijective mapping
    next_A : DPA → [A ⇀ DPA],
where [A ⇀ DPA] denotes the set of all partial functions from A into DPA. STOPA, e.g., is the process uniquely determined by the condition dom(next_A(STOPA)) = ∅. RUNA, i.e., the deterministic process which at all times can engage in any event of A, can be described uniquely by the conditions dom(next_A(RUNA)) = A and next_A(RUNA)(a) = RUNA for all a ∈ A.
Taking into account the idea of automaton we see immediately that the set
of all deterministic processes with alphabet A can be seen as the set of states
of an infinite deterministic partial automaton without output. Traditionally
[1], a deterministic partial automaton without output is defined to be a triple (I, S, d) with I a set of input symbols, S a set of states, and d : S × I ⇀ S a partial state transition function. It is well-known, however, that for any such partial function there is an equivalent curried version, i.e., a total function λ(d) : S → [I ⇀ S] with λ(d)(s)(i) = d(s, i) for all s ∈ S and i ∈ dom(λ(d)(s)). In such a way an automaton M can be described equivalently using the curried version of d by the triple (I, S, λ(d)) as pointed out in [10].
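A minimal Python sketch of this currying step (names chosen freely, not from the paper): a partial map d : S × I ⇀ S is given as a dictionary on pairs, its curried form λ(d) : S → [I ⇀ S] as a dictionary of dictionaries.

    S, I = [0, 1], {"a", "b"}
    d = {(0, "a"): 1, (1, "b"): 0}          # d is undefined elsewhere

    def curry(d, states):
        return {s: {i: s2 for (s1, i), s2 in d.items() if s1 == s} for s in states}

    def uncurry(t):
        return {(s, i): s2 for s, row in t.items() for i, s2 in row.items()}

    t = curry(d, S)
    print(t)                      # {0: {'a': 1}, 1: {'b': 0}}
    print(uncurry(t) == d)        # True: both describe the same partial automaton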
That Hoare's concept of deterministic process can be really reflected by
a partial automaton (A; DPA ; nextA ) will be justified now by considering the
mathematical model of deterministic processes in [4]: A deterministic process
with alphabet A is defined to be any prefix closed subset P of A*, i.e., any (non-empty) subset P ⊆ A* which satisfies the two conditions ⟨⟩ ∈ P, and s·t ∈ P implies s ∈ P, where ⟨⟩ denotes the empty trace (finite sequence) and s·t the catenation of traces. The process STOPA is modeled in this way by the set {⟨⟩} and RUNA is given by A* itself. The domain of next_A(P) is denoted in [4] by P⁰ and defined by dom(next_A(P)) = P⁰ = {a ∈ A : ⟨a⟩ ∈ P}. next_A(P)(a) for any a ∈ P⁰ is denoted in [4] by P(a) and defined by next_A(P)(a) = P(a) = {t ∈ A* : ⟨a⟩·t ∈ P}.
From now on let DPA be the set of all prefix closed subsets of A* and the partial automaton HMA = (A, DPA, next_A) will be called the Hoare-model of deterministic processes with alphabet A. Note, that next_A is bijective indeed since we can assign to any partial function F : A ⇀ DPA the prefix closed set next_A⁻¹(F) = {⟨⟩} ∪ {⟨a⟩·t : a ∈ dom(F), t ∈ F(a)}.
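The Hoare-model can be sketched directly in Python: a deterministic process is a prefix closed set of traces (tuples of events), and prefixing, choice, P⁰ and P(a) are exactly the operations used above. All names are illustrative and only finite processes are representable this way.

    A = {"coin", "choc", "tof"}

    STOP = {()}                                    # never engages in any event

    def prefix(a, P):
        """a -> P : first engage in a, then behave like P."""
        return {()} | {(a,) + t for t in P}

    def choice(branches):
        """(a1 -> P1 | ... | an -> Pn) for distinct initial events ai."""
        return {()} | {(a,) + t for a, P in branches.items() for t in P}

    def dom_next(P):
        """P0: the events P is initially prepared to engage in."""
        return {t[0] for t in P if t}

    def next_(P, a):
        """P(a) = next_A(P)(a): the behaviour after the first event a."""
        return {t[1:] for t in P if t and t[0] == a}

    VMB = prefix("coin", choice({"choc": STOP, "tof": STOP}))
    print(sorted(VMB))            # [(), ('coin',), ('coin','choc'), ('coin','tof')]
    print(dom_next(next_(VMB, "coin")))            # {'choc', 'tof'}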
To make a clear distinction between processes and process expressions we
will use from now on identifiers instead of the (name of the) process
STOPA for building process expressions.
After realizing that the deterministic processes in CSP constitute special
partial automata it will be promising to take into consideration arbitrary partial
automata. The first observation will be that process expressions as, e.g.,
    vmb = coin → (choc → X | tof → X),
can be interpreted in two different ways.
Firstly, as suggested in [4], it can be interpreted as a "userfriendly" syntactic notation of the prefix closed set
    VMB = {⟨⟩, ⟨coin⟩, ⟨coin, choc⟩, ⟨coin, tof⟩}
of traces, i.e., as representing the element VMB of DPA.
Secondly, however, we can take vmb as a syntactic presentation of a finite partial automaton M_vmb = (A, S_vmb, t_vmb) given by S_vmb = {1, 2, 3}, t_vmb(1) = {coin ↦ 2}, t_vmb(2) = {choc ↦ 3, tof ↦ 3}, and t_vmb(3) the totally undefined partial function. This partial automaton can be depicted as follows:
    1 --coin--> 2,   2 --choc--> 3,   2 --tof--> 3.
To make the translation of a process expression exp into a partial automaton M_exp unambiguous we could use the subexpressions of exp to denote the states of M_exp as, e.g., (coin → (choc → X | tof → X)) instead of 1 and X instead of 3. Note, that this approach forces us to identify the codomains of the two arrows starting from state 2 (in contrast to the tree oriented pictorial presentation of processes in section 1.2 of [4]). Note, further, that this approach brings us closer to the labelled transition systems used in [7] to reason about processes.
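The second reading of vmb can be sketched as follows: the finite partial automaton M_vmb is a dictionary of curried transition functions (state names 1, 2, 3 as above), and a bounded trace computation approximates the process starting in a state. The code is an illustration, not part of the paper.

    M_vmb = {
        1: {"coin": 2},            # state of the whole expression vmb
        2: {"choc": 3, "tof": 3},  # state of (choc -> X | tof -> X)
        3: {},                     # state of the identifier X: no event possible
    }

    def traces(M, s, depth):
        """All traces of length <= depth that M can perform from state s; for
        depth -> infinity this yields the process assigned to s in the Hoare model."""
        result = {()}
        if depth > 0:
            for a, s2 in M[s].items():
                result |= {(a,) + t for t in traces(M, s2, depth - 1)}
        return result

    print(traces(M_vmb, 1, 5) == {(), ("coin",), ("coin", "choc"), ("coin", "tof")})
    # True: state 1 is mapped to the process VMB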
In the next section we will see that the Hoare-model HMA of deterministic
processes can be characterized by being a final object in the category of all
deterministic partial automata with alphabet (set of input symbols) A. This
means, that there exists for any deterministic partial automaton M = (A, S, t) a unique mapping proc_M : S → DPA such that
    next_A(proc_M(s))(a) = proc_M(t(s)(a))
for all s ∈ S and a ∈ A, where the left hand side of the equation is defined if, and only if, the right hand side is defined. In other words, the following square commutes:

    S ---------- t ----------> [A ⇀ S]
    |                             |
    | proc_M                      | [A ⇀ proc_M]
    v                             v
   DPA ------- next_A -------> [A ⇀ DPA]
Note, that this condition is equivalent to the traditional condition for the uncurried version of automata morphisms. In our example proc_{M_vmb} maps, e.g., the state 3 to {⟨⟩}.
Both interpretations of a process expression are compatible because the translation of a process expression exp into a deterministic partial automaton M_exp points out implicitly an initial state in M_exp, namely, the state that corresponds to the whole expression exp, and this state will be mapped by proc_{M_exp} to the process P_exp obtained by the "process interpretation" of the expression. For our example we have, e.g., proc_{M_vmb}(1) = VMB.
Using prefixing and choice we can only build process expressions representing
finite deterministic processes. To be able to describe syntactically infinite
processes Hoare introduces recursion. Let X be an identifier (process vari-
able) and F(X) be a process expression built on X by prefixing and choice using events from a fixed set A. The idea in [4] is that F(X) defines a map [[F]] : DPA → DPA such that the recursive process equation X = F(X) can be taken as the syntactic description of a deterministic process if there is exactly one fixed point of [[F]]. Hoare proves that this is the case as long as F(X) is guarded, i.e., as long as there is at least one occurrence of → in F(X). The unique fixed point is denoted in [4] by the process expression μX : A : F(X).
A machine VMC with alphabet A = fcoin; choc; tofg that either serves
chocolate or toffee in a loop can be described using the process expression vmb
above by the recursive equation where the corresponding unique
fixed point V MC 2 DPA is given by all traces from A with coin at each odd
r
position and either choc or tof at each even position.
Fortunately the translation of process expressions into finite partial automata
with an initial state can be extended to recursion thus we obtain a new
method for solving recursive process equations: Let M_F = (A, S, t) be the finite partial automaton according to F(X) with the initial state s₀ ∈ S and with s_X ∈ S the state that corresponds to the free variable X, i.e., for this state we have especially dom(t(s_X)) = ∅. Then we obtain M_{μX:A:F(X)} = (A, S′, t′) with initial state s₀ by glueing together s₀ and s_X, i.e., we set S′ = S \ {s_X} and define for all s ∈ S′ and all a ∈ dom(t(s))
    t′(s)(a) = s₀  if t(s)(a) = s_X,   and   t′(s)(a) = t(s)(a)  otherwise.
Now the image of s₀ w.r.t. the unique automata morphism proc_{M_{μX:A:F(X)}} : S′ → DPA can be taken as the deterministic process described by the recursive equation X = F(X).
For our example we have that M_{μX:A:vmb} arises by glueing together the states 1 and 3 in M_vmb:
    1 --coin--> 2,   2 --choc--> 1,   2 --tof--> 1,
and we have proc_{M_{μX:A:vmb}}(1) = VMC. If we consider as a further example the process expression run = (a₁ → X | · · · | a_n → X) for A = {a₁, . . . , a_n}, we obtain a "one-state" partial automaton M_{μX:A:run} with proc_{M_{μX:A:run}}(s₀) = RUNA for the only state s₀ in M_{μX:A:run}.
In the next section it will become, hopefully, evident that our method provides for all process expressions built by identifiers, prefixing, choice, and recursion the same results as the fixed point construction in [4].
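A small Python sketch of this glueing construction for the example μX : A : vmb follows (the dictionary representation of automata and all helper names are illustrative; the bounded trace function only approximates the resulting infinite process VMC).

    # Automaton for the guarded body F(X) = coin -> (choc -> X | tof -> X).
    M_F = {
        "s0": {"coin": "s1"},                 # state of the whole body F(X)
        "s1": {"choc": "sX", "tof": "sX"},    # state of (choc -> X | tof -> X)
        "sX": {},                             # state of the free variable X
    }

    def solve(M, s0, sX):
        """Glue sX onto s0: every transition into sX is redirected to s0."""
        return {s: {a: (s0 if s2 == sX else s2) for a, s2 in row.items()}
                for s, row in M.items() if s != sX}

    def traces(M, s, depth):
        """Bounded approximation of the process starting in state s."""
        result = {()}
        if depth > 0:
            for a, s2 in M[s].items():
                result |= {(a,) + t for t in traces(M, s2, depth - 1)}
        return result

    M_vmc = solve(M_F, "s0", "sX")
    print(("coin", "tof", "coin", "choc") in traces(M_vmc, "s0", 4))   # True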
Remark 2.1 That our method extends nicely to mutual recursion should be obvious. But, as the fixed point construction in [4], our method works only for guarded expressions. That is, for the unguarded expression X we have a one-state automaton M_X, i.e., we have s₀ = s_X, thus by construction M_{μX:A:X} will have no states at all. Analogously to [4] we will treat the meaning of μX : A : X when we discuss nondeterministic operators in section 6.
3 Final coalgebras
For a functor T : SET → SET, a T-coalgebra is a pair (S, t) consisting of a set S, the carrier of the coalgebra, and a mapping t : S → T(S). A homomorphism between T-coalgebras (S, t) and (S′, t′) consists of a mapping f : S → S′ which commutes with the operations:
    T(f) ∘ t = t′ ∘ f.
To apply this definition to deterministic partial automata we have only to check that the assignment S ↦ [A ⇀ S] extends to a functor A! : SET → SET. For this we assign to any mapping f : S → S′ the mapping A!(f) : [A ⇀ S] → [A ⇀ S′] with A!(f)(g) = f ∘ g for all g ∈ [A ⇀ S]. It is easy to check that this defines in fact a functor A! : SET → SET.
Now, the concepts "deterministic partial automata with alphabet A"and "A ! -
coalgebra" turn out to be obviously equivalent. The category of all A! -
coalgebras and all A!-homomorphisms will be denoted, therefore, by DAA .
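A minimal Python sketch of the functor action (dictionaries play the role of partial functions from A; names are illustrative):

    # A! sends a set S to the partial functions A -> S and a map f : S -> S'
    # to post-composition with f, keeping the domain unchanged.
    def functor_on_map(f):
        """A!(f) : [A -> S] -> [A -> S'],  g |-> f . g."""
        return lambda g: {a: f(s) for a, s in g.items()}

    g = {"coin": 2, "tof": 3}             # a partial function A -> S
    f = lambda s: f"q{s}"                 # a map S -> S'
    print(functor_on_map(f)(g))           # {'coin': 'q2', 'tof': 'q3'}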
Because the functor A! : SET ! SET is ! op -continuous [9], i.e., preserves
limits of ! op -chains, we can fortunately use the category theoretic
version of the least fixed point construction [11] to construct the final A! -
coalgebra: The limit (L, (π_i)_{i∈N}) of an ω^op-chain
    A₀ <--f₀-- A₁ <--f₁-- A₂ <--f₂-- · · ·
in SET can be described canonically by all infinite sequences ⟨a₀, a₁, a₂, . . .⟩ such that a_i ∈ A_i and f_i(a_{i+1}) = a_i. The mapping π_i : L → A_i projects to the i-th component a_i, thus we have f_i ∘ π_{i+1} = π_i for all i ∈ N.
The carrier NFA of the intended final A!-coalgebra is given now according to [9,11] by the limit (NFA, (π_i)_{i∈N}) of the following ω^op-chain
    1 <--!-- A!(1) <--A!(!)-- A!²(1) <--A!²(!)-- A!³(1) <-- · · ·
obtained by applying successively the functor A! to the unique mapping ! : A!(1) → 1 from A!(1) into the singleton set 1 = {∗}, i.e., into the final object of the category SET.
To see that NFA is strongly related to the set DPA of all prefix closed subsets of A* we firstly consider the elements of A!ⁿ(1), which can be referred to as nested functions of depth less than or equal to n. For A = {coin, choc}, e.g., A!(1) has four functions as elements and by using the maps-to notation we get
    A!(1) = { ∅, {coin ↦ ∗}, {choc ↦ ∗}, {coin ↦ ∗, choc ↦ ∗} }.
A pictorial representation draws these nested functions as finite trees whose edges are labeled by the events coin and choc and in which equal subfunctions are shared; the corresponding pictures are omitted here.
Note, that we make a difference between the fully undefined function of type A ⇀ [A ⇀ 1] and the fully undefined function of type A ⇀ 1.
Note, further, that nested functions are different from synchronization trees
[7,13]. That is, a node in our pictures represents the graph below this node in
contrast to synchronization trees where a node represents the path from the
root of the tree to this node.
each function in A! to thus A!
maps each g 2 [A !p [A !p 1]] to
a In general A n
(1) cuts the (possibly empty)
1)-th layer of a nested function where the information that a cutting has
taken place is announced by writing at the corresponding node at depth
n. Moreover, all nodes ; (n+1)\Gammai at depth i ! n are changed to ; n\Gammai and, if
necessary, new sharings are introduced.
For our example we have A!
we have the following transformation
of nested functions
(The pictures of the two nested functions related by this transformation are omitted here.)
Remark 3.1 The elements of NFA are by construction infinite sequences
of nested functions with
That is, the elements of A n
! (1) do not correspond directly to
r
the elements of DPA . They represent finite approximations of processes. For
the prefix closed set P ng of bounded traces
corresponds uniquely to a nested function in A n
(1) where at depth n in the
corresponding nested function indicates that we do not know if there exists a
trace in Pn(P -n) which extends the corresponding trace of length n in P -n
or if not.
In general we have a bijection between DPA and NFA since the prefix
closedness ensures that each P 2 DPA can be represented uniquely by the
sequence
1))-n for 0 - n. Further, any of those sequences corresponds uniquely to a
sequence where the equation P
corresponds to the requirement
represented by the sequence hfhig; fhig; of prefix closed sets and thus
by the sequence h
The
limit of the ! op -chain
A 3
A 4
Since there is only one mapping from A! (NFA
we have trivially A! (- 0 thus we obtain a further limit diagram for the
yy
A 3
A 4
The limit properties of both diagrams ensure the existence of a unique
mapping
(1)
and, moreover, it is ensured that this mapping is bijective, i.e., an isomorphism
in SET.
A i+2
The intended coalgebraic model of deterministic processes is now provided
by the A! -coalgebra
Remark 3.2 Note, that the category theoretic fixed point construction provides
a kind of "external" approximation of processes. That is, a process P
r
is identified with the sequence h ; of its finite approximations,
where the g i 's are not processes. Please bear in mind that an open branch
in g i is indicated by and not by ;. This means, processes and their finite
approximations are kept apart.
There is no need for to force a cpo structure on the set of all processes to
be able to speak about finite approximations of infinite processes.
To convince the reader that the coalgebraic model and the Hoare-model
are isomorphic we have to analyse how the mapping uA
works. Let be given a sequence in NFA . The image of P
w.r.t. uA has to be a partial function uA thus we have firstly
to determine the domain of uA (P ). For this we have to bear in mind that all
partial functions g
have the same domain
because
with A i
(!) a total mapping for all i 2 N . That the domain of uA (P ) equals
this common domain of the components g i+1 of P is forced by equation 1,
which implies for all i 2 N
(2)
and thus dom(uA (P
Secondly, we have to define for any a 2 dom(uA (P )) a sequence uA (P
a
a
a
ensures g a
thus we obtain by the assumption P
(!)(g a
a
. That is, h ; g a
a
a
indeed an element of NFA and we
are done.
Since we have nextA (P for each prefix closed set
and each a 2 dom(nextA (P should become
evident, now, that the bijection between DPA and NFA outlined in remark
3.1 is compatible with nextA and uA as stated in
Theorem 3.3 The Hoare-model HMA = (A, DPA, next_A) and the coalgebraic model CMA = (A, NFA, uA) are isomorphic A!-coalgebras, i.e., there exists a bijective mapping apprA : DPA → NFA such that the following diagram commutes:
    uA ∘ apprA = A!(apprA) ∘ next_A.
The coalgebraic model CMA is final in the category of A!-coalgebras by construction [9,11]. Since HMA is isomorphic to CMA we have
Corollary 3.4 HMA = (A, DPA, next_A) and CMA = (A, NFA, uA) are final A!-coalgebras, i.e., final objects in the category DAA .
To justify the claim in section 2 that our new method of solving recursive
equations which is based on the finality of HMA or CMA , re-
spectively, is strongly related to the fixed point construction in [4], we have
to look more closely at the proof of the finality of CMA. Let M = (A, S, t) be an arbitrary A!-coalgebra. What we
are interested in, is to determine the process that starts in a state s 2 S. That
is, we have to analyse step-by-step which states can be reached from s by which
transitions: The unfolding of the state transition function t
gives the following sequence of commutative diagrams
A!
A 3
A 3
where the left-most rectangle is commutative since there is only one mapping
from S into 1 and all other rectangles are stepwise images of the first one.
for any state s 2 S which states can be reached from
s in one step by which transition. A i
how
arbitrary sequences of transitions of length i, i.e., sequences not taking into
account the restrictions made by t, can be continued according to t in the next
step. Starting in a state s 2 S we obtain in such a way an infinite sequence
for all i 2 N , i.e., with t s
i represents all sequences of transitions in M of length atmost
starting in s. Moreover, t s
i tells which states are reached by sequences of
length exactly i. The states visited in between are forgotten in t s
For the following automaton M with alphabet
a
'&%$/!''# 2a
a
ff
a
xx
OO
and for the state 1 we could depict, e.g., the first four elements of unfoldM (1)
r
as follows
a
a
a
a b
a
@
@
@
@
@
@
@
a
~~
~ ~ ~ ~ ~ ~ ~ ~
Finally, we consider the abstraction of unfoldM (s) into a process. The
mapping all states to thus A i
just forgets the information, which states are reached by sequences of length
i and keeps only the infomation that sequences of length i may be continued.
We obtain now for any s 2 S an infinite sequence
proc
The first four elements of proc M (1), e.g., are
a
\Upsilon\Upsilon
a
a
a
\Upsilon\Upsilon
a
@
@
@
@
@
@
@
a
~~
The commutativity of the above diagrams and the definition of t s
s
respectively, entail for all i 2 N
s
thus proc M (s) becomes indeed a process, i.e., an element of NFA . This means
that we have constructed by proc M (s) the process starting in state s 2 S.
Globally this provides a mapping proc M . That this mapping
constitutes a A!-homomorphism proc CMA and that this A! -
homomorphism is unique can be proved straightfowardly according to the
limit construction of NFA and the ! op -continuity of the functor A!
SET.
4 Interaction and concurrency
Firstly, Hoare describes the interaction of processes P and Q with the same alphabet αP = αQ = A. He defines a process P k Q with α(P k Q) = A which behaves like the system composed of P and Q interacting in lock-step synchronization, i.e., any occurrence of events requires simultaneous participation of both the processes involved. To model this kind of interaction we have to define a mapping k : NFA × NFA → NFA.
In the last section we have seen that CMA = (A, NFA, uA) is the final A!-coalgebra, i.e., the final partial automaton with alphabet A. This offers a canonical way to define mappings from an arbitrary set S into NFA [6]: We have only to construct a A!-coalgebra M = (A, S, t); then, by the finality of CMA, there exists a unique A!-homomorphism proc M : M → CMA. The substantial problem will be to design M in such a way that the underlying mapping proc M : S → NFA becomes the intended one.
Following this coalgebraic heuristics it becomes immediately obvious that we have to synchronize CMA with itself to obtain the appropriate A!-coalgebra: Let
    SYN A = (A, NFA × NFA, synA)
be the A!-coalgebra such that for any pair of processes (P, Q) ∈ NFA × NFA
    dom(synA(P, Q)) = dom(uA(P)) ∩ dom(uA(Q)),
and such that for all a ∈ dom(synA(P, Q))
    synA(P, Q)(a) = (uA(P)(a), uA(Q)(a)).
The final A!-homomorphism proc SYN A : SYN A → CMA due to section 3 makes the following diagram commutative:
    uA ∘ proc SYN A = A!(proc SYN A) ∘ synA.
That is, for each pair (P, Q) ∈ NFA × NFA the equation
    uA(proc SYN A(P, Q)) = A!(proc SYN A)(synA(P, Q))
is required. For any event z ∈ dom(synA(P, Q)) this means that
    uA(proc SYN A(P, Q))(z) = proc SYN A(uA(P)(z), uA(Q)(z)).
Using the notation in [4] the last condition turns into the equation (P k Q)(z) = P(z) k Q(z), together with (P k Q)⁰ = P⁰ ∩ Q⁰, thus it becomes apparent that the coalgebraic definition of proc SYN A is equivalent to the requirements stated in law 4, page 67 in [4] for the interaction operator k . Since proc SYN A is uniquely defined by the above conditions we can be sure that proc SYN A is indeed the intended interaction operator k .
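On the trace-set representation of deterministic processes used in section 2, this lock-step interaction amounts to intersecting the trace sets, which is the well-known CSP law traces(P k Q) = traces(P) ∩ traces(Q) for processes with a common alphabet. A minimal Python sketch (illustrative names):

    # Lock-step interaction P || Q for processes over the same alphabet,
    # with processes given as prefix closed sets of traces (tuples of events).
    def interact(P, Q):
        return P & Q          # both must participate in every event

    VMB = {(), ("coin",), ("coin", "choc"), ("coin", "tof")}
    GREEDY = {(), ("coin",), ("coin", "choc")}     # a customer who only wants choc
    print(sorted(interact(VMB, GREEDY)))   # [(), ('coin',), ('coin', 'choc')]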
Secondly, Hoare describes the concurrent interaction of processes P and
Q with different alphabets ffP 6= ffQ. Only events that are in both their alpha-
bets, i.e., in the intersection ffP " ffQ, are required to synchronize. However,
events in the alphabet of P but not in the alphabet of Q may occur independently
of Q whenever P engages in them. Similarly, Q may engage alone in
events which are in the alphabet of Q but not of P . In such a way the alphabet
of the process P k Q will be the union ffP [ ffQ of the alphabets of the
component processes. Note, that the use of overstrokes in [7] provides another
technique to fix which events in different sets of events have to synchronize.
Let now two alphabets A and B be given. The coalgebraic definition of the intended mapping k : NFA × NFB → NFA∪B can be extracted from law 7, page 71 in [4]. The synchronization of CMA and CMB provides a partial automaton
    SYN A,B = (A ∪ B, NFA × NFB, syn A,B)
with alphabet A ∪ B as follows: For any pair of processes (P, Q) ∈ NFA × NFB we define
    dom(syn A,B(P, Q)) = (dom(uA(P)) ∩ dom(uB(Q))) ∪ (dom(uA(P)) \ B) ∪ (dom(uB(Q)) \ A),
and for any c ∈ dom(syn A,B(P, Q)) we set
    syn A,B(P, Q)(c) = (uA(P)(c), uB(Q)(c)) if c ∈ A ∩ B,  (uA(P)(c), Q) if c ∈ A \ B,  (P, uB(Q)(c)) if c ∈ B \ A.
The final (A∪B)!-homomorphism proc SYN A,B : SYN A,B → CMA∪B provides the intended concurrent interaction operator k : NFA × NFB → NFA∪B. Note, that obviously SYN A = SYN A,A.
The coalgebraic definition of the concurrent interaction operator suggests
a straightforward generalization of synchronization to arbitrary partial automata
Definition 4.1 For any partial automata M₁ = (A, S₁, t₁) and M₂ = (B, S₂, t₂) we define the corresponding synchronized automaton
    SYN M₁,M₂ = (A ∪ B, S₁ × S₂, syn M₁,M₂)
as follows: For each (s₁, s₂) ∈ S₁ × S₂
    dom(syn M₁,M₂(s₁, s₂)) = (dom(t₁(s₁)) ∩ dom(t₂(s₂))) ∪ (dom(t₁(s₁)) \ B) ∪ (dom(t₂(s₂)) \ A),
and for any c ∈ dom(syn M₁,M₂(s₁, s₂)) we set
    syn M₁,M₂(s₁, s₂)(c) = (t₁(s₁)(c), t₂(s₂)(c)) if c ∈ A ∩ B,  (t₁(s₁)(c), s₂) if c ∈ A \ B,  (s₁, t₂(s₂)(c)) if c ∈ B \ A.
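A Python sketch of this definition for the dictionary representation of partial automata used earlier (all names illustrative; the example data anticipates the vending machine and customer discussed next):

    def synchronize(A, t1, B, t2):
        """Synchronized automaton of (A, S1, t1) and (B, S2, t2): shared events
        need both components, private events move one component only."""
        syn = {}
        for s1 in t1:
            for s2 in t2:
                row = {}
                for c in A | B:
                    ok1, ok2 = c in t1[s1], c in t2[s2]
                    if c in A and c in B and ok1 and ok2:
                        row[c] = (t1[s1][c], t2[s2][c])      # synchronized step
                    elif c in A - B and ok1:
                        row[c] = (t1[s1][c], s2)             # M1 moves alone
                    elif c in B - A and ok2:
                        row[c] = (s1, t2[s2][c])             # M2 moves alone
                syn[(s1, s2)] = row
        return syn

    A = {"coin", "choc", "tof"}
    VMC = {1: {"coin": 2}, 2: {"choc": 1, "tof": 1}}
    B = {"coin", "tof", "bis"}
    CU = {"a": {"coin": "b"}, "b": {"tof": "a", "bis": "a"}}
    SYN = synchronize(A, VMC, B, CU)
    print(sorted(SYN[(2, "b")].items()))
    # [('bis', (2, 'a')), ('choc', (1, 'b')), ('tof', (1, 'a'))]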
As an example we synchronize the vending machine VMC from section 2
with alphabet A = fcoin; choc; tofg and a customer CU with alphabet
{coin, tof, bis} described by the recursive equation X = coin → (tof → X | bis → X): after paying a coin the customer decides between having a toffee or a biscuit instead. The corresponding partial automata can be depicted by
coin '&%$/!''# 2
ee
choc
yy
tof
-OEAE-aeoe a
coin '&%$/!''# b
ee
bis
yy
tof
and the synchronization SYN VMC;CU of both automata is given by
a)
""
coin
(2; a)
choc
OO
bis
OO
bis
tof
choc
That is, after the customer was able to pay a coin he may decide for toffee
and the machine can deliver a toffee at the same time. If he decides for biscuit
the machine will serve up later on a chocolate. Or, even worth, the machine
may decide to give him a chocolate and he has to interpret this as his own
decision for biscuit to have a second chance to get a toffee.
Note, that by simply extending the alphabet of the customer to
fcoin; tof; bis; chocg we would obtain a synchronized automaton with a dead-
lock
a)
""
coin
(2; a)
OO
bis
OO
bis
tof
Now it turns out that the concurrent interaction of processes exactly describes
how the processes in a synchronized automaton SYNM 1 ;M 2 can be
reconstructed from the processes of the single automata M 1 and M 2 . That
is, synchronization of automata is compatible with interaction of processes as
stated in
Theorem 4.2 For any partial automata M
any pair of states
have that
given by the final (A[B)! -
homomorphism proc SYN A;B
suffices to show that the
r
mapping proc M 1
constitutes a
homomorphism proc M 1
NFA \Theta NFB
syn A;B
The required equality proc SYN M 1 ;M 2
is then
ensured by the uniqueness of final homomorphisms.
We have to show that for any pair (s the equality
\Theta proc M 2
holds. Since proc CMA is a A!-homomorphism we have for
and since proc CMB is a B!-homomorphism we have for s 2
According to the equations 4 and 5, the totality of the mappings proc M 1 ,
proc M 2 , and the definition of SYNM 1 ;M 2 and SYN A;B , respectively, we can
firstly show that the domain of both functions in equation 3 are equal:
dom(syn A;B (proc M 1
\Theta proc M 2
\Theta proc M 2 )
Secondly, we show the equality 3 for all c 2 dom(t 1
B. According to definition of syn A;B , the equality 4,
and the definition of synM 1 ;M 2 we obtain
\Theta proc M 2 )(c)
The other cases can be proved analogously. 2
According to theorem 4.2 we can extend in a compatible way our interpretation
rof process expressions as representations of finite automata to interac-
tion: For two process expressions exp 1 and exp 2 we define
where we can take due to theorem 4.2 as initial state of M exp 1
if
s i is the initial state of M exp i
2.
Remark 4.3 There is an essential problem to relate states in M exp 1
to
variables. The process expression (a seems to have a free
variable X. The idea, however, to take (s (a!X)k(b!X) as the state
corresponding to X does not work well especially with respect to recursion, i.e.,
with respect to the idea to substitute X successively by the whole expression
In general, we can not model substitutivity in a simple
way on the level of automata since X, (a
(b different automata.
For this paper we fix the problem by the following decision: Since interaction
is an essential parallel operator the symbol k builds a border between exp 1 and
impermeable for names. That is, we consider (a X) to be
equivalent to (a ! X)
Please note that Hoare
considers only examples where this problem does not arise, i.e., only examples
5 Nondeterminism in CSP
At first glance the nondeterministic processes in CSP have nothing to do with
the nondeterministic transition systems usually considered in the (coalgebraic)
literature [6]. That is, they are neither related to the power set construction
nor to the finite power set construction P f
If a nondeterministic system in the sense of CSP being in a certain state
can engage in an event then the state reached in the next step will be uniquely
determined by the event. Nondeterminism is restricted to the possibility to
decide locally in each state which events will be accepted or, alternatively,
refused for the next step. That is, even in case we can engage in an event it
may be that we can not carry out this event because it was decided before not
to accept this event for the next step.
At second glance, however, it is possible to relate this kind of systems
to real nondeterministic systems, namely, to image finite nondeterministic
automata [9].
The crucial observation is that the systems in CSP can be motivated along
two ideas: Firstly, by the old idea from Formal Language Theory to abstract
from nondeterminism by constructing out of a nondeterministic automaton N
with the set S of states a deterministic automaton P f N with the set P f (S) of
states. Secondly, by the idea to maintain in P f N the differences between the
original states in N as long as this difference can be expressed in the language
r
A of events.
We consider the following image finite nondeterministic automaton
a
a
a '&%$/!''# 6
Starting from state 1 we can reach by event a either the state 3 or the state
5. The difference between state 3 and state 5 which can be observed locally
in these states and which can be expressed in the language A is the difference
between bg. If we construct now the
corresponding power automaton P f N we can fix this difference by assigning
to the state f3; 5g in P f N the set ffb; cg; fa; bgg. In such a way we obtain out
of two different states 3 and 5 in N a single state f3; 5g in P f N but with two
different local states fb; cg and fa; bg. Following this idea the reachable part
of P f our example
would look as follows
ffagg fflffi flfi
a
a
R
R
R
R
R
R
R
R
R
R
R
R
R
R
R
R
R
R
c
l l l l l l l l l l l l l l l l l l
f;g
f;g fflffi flfi
Note, that the states 2 and 4 are not distinguishable by A since
Operationally considered we have to decide in state f3; 5g if we accept for
the next step either events from fa; bg or from fb; cg. If we decide for fa; bg
the event c can not occur in the next step, the event a, however, will bring
us to the singleton state f6g, and the event b will bring us to the compound
state f2; 4g since we can go in N by b from 3 to 2 and from 5 to 4. In
general we obtain by this variant of power construction systems with a kind
of nondeterministic local filters.
Remark 5.1 Hoare uses families of sets of refused events instead of families
of sets of accepted events to model this kind of nondeterminism. We
have decided for acceptance sets as in [3] because it eases argumentations in
operational terms. In contrast, Hoare argues mainly in observational terms.
Moreover, we will not introduce internal nondeterminism if we transform later
deterministic automata into filter automata. It can be checked, however, that
our descriptions of operations by means of acceptance sets, presented in the
next sections, are fully equivalent to the definitions of Hoare in [4].
To model the concept of nondeterministic processes used in CSP we have
to consider partial automata of the following structure
r
that will be called deterministic filter automata. For any state s 2 S we will
denote the first component of t(s) by acc(t(s)) and the second component,
in abuse of notation, also by t(s). This means that we are dealt with A f
coalgebras for the functor A f
[A !p S] for each set S and with A f
for each
mapping refers to the category of A f
-coalgebras and
A f
-homomorphisms.
The functor A f
! is also ! op -continuous [9] thus we can construct, analogously
to the case of deterministic automata, for each alphabet A a final
A f
The elements of FPA are again infinite sequences h ; where the
components g i are nested functions with an additional acceptance set, i.e.,
a subset of P(A), at each node. We will refer to the elements of FPA as
(deterministic) filter processes. CHAOSA 2 FPA , e.g., that is the most non-deterministic
process which at all times can engage in any event of A and at the
same time refuse any event of A, can be described uniquely by the conditions
CHAOSA for all a 2 A.
Remark 5.2 The set FPA includes all nondeterministic processes defined in
[4], but, also something more because we can not model coalgebraically the
saturation conditions in [4] (and [3]) for the acceptance (refusal) sets. We
guess that Hoare needs these conditions because he identifies divergence
with chaos and tries to treat divergence in a more algebraic manner (compare
section 7). This difference will be a point of further research.
Obviously, we can assign to any deterministic partial automaton
corresponding deterministic filter automaton
for all s 2 S. For a A!-homomorphisms the underlying mapping
provides also a A f
we have dom(A!
to the totality of f This means, we have an embedding functor
for each alphabet A. Note, that the corresponding embedding
according to [4] would take instead of the singleton family of acceptance
sets fdom(t(s))g the family of refusal sets. That is the resulting
filter automaton would own a proper internal nondeterminism which,
however, can never be observed from outside. Note, further that obviously
Besides the problem of "branching nondeterminism" discussed up to now
Hoare tries to treat within his framework also the problem of "internal non-
determinism", i.e., the problem that a system may carry out internal actions
r
which can not be observed from outside. To treat this problem he uses the
concept of acceptance (refusal) sets. We consider the following simple deterministic
filter automaton F
a
a
// f;g
f;g ffbgg ffag; fa; -gg ffa; bg; fbgg '&%$/!''# 6
// f;g
where - is assumed to be an internal action. Hoare insists on the intuition
that "we want these actions to occur automatically and instantaneously as
soon as they can" ([4], p. 111). That is, if we decide internally in state 1 to
accept fa; -g action - will occur instantaneously and we have to go on from
state 3 with a new decision for acceptance. Only in case we decide for fag,
we are allowed to stay at state 1 and to take the chance to reach state 2 via
action a.
Because the decision for fa; -g in 1 is equivalent to being in 3 and making
any decision there we can eliminate action - by identifying states 1 and 3 and
by taking the decions in 3 instead of the decision fa; -g in 1. In such a way we
can describe the observable behavior of F by the following nondeterministic
filter automaton
a
a
f;g
f;g ffbgg ffag; fa; bg; fbgg '&%$/!''# 6
// f;g
As long as there is no divergence in F , i.e. no infinite loop of internal actions,
the elimination of internal actions outlined above is fully compatible with
Hoare's treatment. In case of divergence, however, Hoare firstly identifies
all divergent states with CHAOSA and proceeds with the above elimination
(see section 7).
6 Nondeterministic operators
The realization of our program to interpret the operations in [4] as constructions
on the level of automata and thus to interpret every finite process expression
as representing a finite automata with an initial state such that the
process starting in this state equals the process represented by the same expression
according to [4], becomes a little bit complicated if we take into
account nondeterministic operators.
We can assign deterministic filter processes only to states of deterministic
filter automata. The constructions general choice [], interleaving jjj, and elimination
of internal actions, however, will introduce branching nondeterminism
thus we are obliged, firstly, to take into consideration nondeterministic filter
automata and to define our constructions for this kind of automata. Secondly,
r
we have to describe a transformation of nondeterministic filter automata into
deterministic filter automata to get the deterministic filter processes Hoare
is interested in. Note, that the identification of actions (not considered in [4])
would also introduce branching nondeterminism and could be treated naturally
within our approach.
On the other side, internal actions arise by concealment of actions and
also by the constructions nondeterministic or u and recursion -. To assign
observable deterministic processes to automata with internal actions we have
two possibilities. Firstly, we can eliminate internal actions on the level of
automata and than transform the resulting nondeterministic automata into
a deterministic one. In case of divergence we will not get, in this way, the
deterministic filter processes intended by Hoare. Secondly, we can carry out
the transformation into a deterministic automata first and than we have to
use a mixed coalgebraic and algebraic procedure, according to Hoare, to
eliminate internal actions in deterministic filter processes.
In the sequel we present our semantical interpretations of the nondeterministic
operators in [4] on the level of filter automata. There will be no space
to prove formally the correctness of our interpretations as we have done it for
interaction in theorem 4.2. We hope, however, the reader will be convinced
by the definitions, the informal argumentations, and the examples.
6.1 Nondeterministic filter automata
(Image finite) nondeterministic filter automata are automata of the structure
-coalgebras for the functor
for each set S and with Anf !
for each mapping f
given by P f The category
of Anf ! -coalgebras and Anf ! -homomorphisms will be denoted by NFAA .
Obviously, the embedding in
S allows to assign to any deterministic filter automaton
\Theta [A !p S]) a nondeterministic filter automaton
with t n Note, that this definition works smoothly
since we keep in F n the situations a =
This assignment is compatible with A f
-homomorphisms thus we obtain an
embedding for each alphabet A.
On the other side we can define on basis of the finite power set functor
nondeterministic filter automaton
deterministic filter automatn
r
as follows: For each M 2 P f (S) we define
dom(t
and for each a 2 dom(t p (M)) ' A we set
It can be easily checked that this construction is compatible with Anf ! -homo-
morphisms, i.e., that the finite power set functor
to a functor for each alphabet A. For examples of the
finite power set construction we refer to the next subsections.
6.2 General choice, interleaving, and interaction
The general choice operator [] corresponds on the semantical level to the glueing
of states in automata. Thereby any decision for acceptance in the glued
state is given by glueing decisions of the single states.
Definition 6.1 Let be given a nondeterministic filter automaton
and different states s 1 6= s 2 in S. Then the glueing of states s 1 and s 2 provides
an automaton
as follows: We set S for each
and for each a 2 dom(t
Note, that the case a 2 dom(t(s 1 ))"dom(t(s 2 branching
nondeterminisms and that the construction extends straightforwardly to any
equivalence on states.
On basis of this construction we can extend, now, the translation of process
expressions into automata with an initial state to the []-operator: We
consider two process expressions exp 1 and exp 2 with fX the set of
expressions have in common. Let s i
with
be the state in F exp i
corresponding to the variable X j . Please
bear in mind that a state which correspond to a free variable has always the
domain ; and the acceptance f;g. Then we introduce for the sequence of
r
glueings s 1
the abbreviation
denotes the disjoint union of automata. The expression exp 1 []exp 2
can be interpreted now by the finite nondeterministic filter automaton
with initial state (s 1 is the initial state of F exp i
2.
As example we consider the expressions exp
i.e., the following deterministic automata
and F exp 2
a
c
c
ffa; cgg ffbgg f;g ffb; cgg f;g
Then the nondeterministic automaton F exp 1 []exp 2
initial state (1; 4) and with (3; 5) the state corresponding to the free variable
X in expression exp 1 []exp 2 looks as follows
c
c
a
f;g
As outlined in the introduction of this section we have, firstly, to go to the
power automaton P f
with initial state f(s 1 secondly, to
apply the final A f
to in order to obtain the deterministic filter process represented by
the expression exp 1 []exp 2 according to [4]. For our example we obtain the
following (reachable part of) automaton P f
a b
c
ffbgg fflffi flfi
// f;g
As known from Formal Language Theory any trace t 2 A of actions in
is also a trace of actions in P f
vive versa. The more
interesting point is that both automata are also equivalent with respect to
acceptance traces. That is, F exp 1 []exp 2
can carry out the sequence r 2
of acceptance decisions and actions if, and only if, P f
can. We draw attention to the point that the trace hci from (1; 4) to (3; 5)
can not be continued in F exp 1 []exp 2
since (3; 5) is a final state. To model this
breaking condition for traces we have to consider in P f
acceptance
r
traces thus we can decide in state f2; (3; 5)g for acceptance ; to break the
run. In general, an acceptance trace in a filter
automaton F can not be continued in state s if dom(t(s)) " acc
The intuition behind the interleaving operator jjj is to combine two systems
without any synchronization such that if both systems could engage in the
same action, the choice between them is nondeterministic.
Definition 6.2 For any nondeterministic filter automata F
we define the corresponding interleaved automaton
as follows: For each
and for each a 2 dom(int(s
We adapt example X1, p. 121 in [4] and consider the following (determin-
istic) filter automata F 1 and F 2 with initial states 1
a
a
ffbgg
The interleaving of F 1 and F 2 provides INT F 1 ;F 2 with initial state (1; 1)
a
a
a
a
OO
OO
ffbgg
and the power construction delivers the following part of P f (INT F 1 ;F 2
a
a
ffagg f;; fag; fbg; fa; bgg ffbgg
For process expressions exp 1 and exp 2 we can define now
r
with initial state (s 1 is the initial state of F exp i
2. The
discussion in remark 4.3 concerning free variables applies also to interleaving,
i.e., we decide that there are no free variables in exp 1 jjjexp 2 .
The interaction operator k does not introduce, in contrast to general choice
and interleaving, new branching nondeterminism. Definition 4.1 provides
in such a way for any deterministic filter automata F
corresponding synchronized filter automaton
if we define additionally for each
The definition of synchronization for nondeterministic filter automata is
straightforward.
6.3 Recursion and nondeterministic or
Concealment of actions forces us to give a full treatment of internal actions
anyway thus it will be not so problematic to model recursion - and nondeterministic
or u by the introduction of special internal actions.
Hoare insists on the intuition that the process expression -X:A:X represents
an infinite loop of internal actions, i.e., divergence. To cover this intuition
we have to model the -operator by the introduction of a new internal action
in an automaton and not by glueing of states, as we have done it in section 2
in the context of deterministic partial automata.
be the finite nondeterministic filter automaton according
to F (X) with alphabet A, with the initial state s 0 2 S, and with s X 2 S
the state that corresponds to the free variable X, i.e., for this state we have
especially dom(t(sX f;g. Than we obtain
with initial state s 0 by adding a new internal action to the alphabet and by
introducing a new internal action from s X to s
and t 0
g.
Since the only state in FX at the same time corresponds to X and is initial
in FX we obtain for F-X:A:X the following one-state automaton
back ffbackgg
Remark 6.3 To be correct we have to distinguish for an automaton F between
the alphabet A and the "interface", i.e., the set A obs ' A of its observable
ractions. In this sense it may be that back is already an internal (hidden)
action of F F (X) , i.e., we will have back 2 A int = A n A obs if there is already
an application of recursion - in F (X). Moreover we should observe that A in
stands for the set of observable actions.
The intuition behind the nondeterministic or u is to provide to the outside
of a system a nondeterministic alternative between two possible behaviors.
We can model this intuition by introducing an additional decision point with
two possible internal local decisions.
Definition 6.4 Let be given a nondeterministic filter automaton
and states s S. Then the introduction of an alternative between s 1 and
provides an automaton
us
as follows: We set S rightg and
For process expressions exp 1 and exp 2 we can define analogously to the
general choice operator the nondeterministic filter automaton
us 2
with initial state s 1 u s 2 if s i is the initial state of F exp i
2.
As example for nondeterministic or and recursion we consider the expression
X) from subsection 6.2. According to our definitions
we obtain for F-X:exp 1 uexp 2
c
a
right
yy
c
OO
back
ffbackgg ffleftg; frightgg
with initial state 1 u 4 and with the internal actions left, right, and back.
6.4 Elimination of internal actions
We describe now formally the stepwise elimination of internal actions in automata
as outlined in the introduction of this section.
For this we consider a finite nondeterministic filter automaton
with a fixed set A obs ' A of observable actions For any state s 2 S with
we can construct a new finite nondeterministic
filter automaton
r
by eliminating the internal actions starting in s as follows: We denote by
the set of all states in F
reachable from s by an internal action. We set S
and for each a 2 dom(t 0 (s 0 )) we set
Note, that in case dom(t(s)) n A just delete
in acc(t(s)) all acceptances that include internal actions.
If we apply stepwise the elimination of internal actions to a finite nondeterministic
filter automaton F with the set A obs ' A of observable actions
we will get finally due to the finitarity of F a finite nondeterministic filter
automaton
all thus we can consider F 0 to be an automaton with alphabet A obs .
For the example
in subsection 6.3 we will get after two steps
(in any order) the automaton F 0
c
a
c
ff
ffbgg
where m is the initial state and arises by merging the states 1, 4, 1 u 4, and
(3; 5) in F-X:exp 1 uexp 2
. Further, the power construction provides the following
part of P f
with initial state fmg
c
a
xx
a
c
ffbgg fflffi flfi
OO
Even for finite automata with divergence, i.e., with loops of internal ac-
tions, our procedure provides a reasonable result. The crucial point is that we
abstract from divergence by merging all states of a loop into one state and by
collecting all fully observable acceptances in the loop. In such a way we would
obtain for F-X:A:X (in two steps) the automaton
fg
i.e., an automaton representing the process STOPA . This is indeed not the
r
intuition of Hoare who wishes to interpret -X:A:X as representing chaotic
behavior.
7 Concealment
In the last section we describe, within our framework, Hoare's treatment of
internal actions and thus of divergence.
The problem is to assign to the states of a nondeterministic filter automaton
F with alphabet A and with the set A obs ' A of observable actions
deterministic filter processes with alphabet A obs . Firstly, we can use the power
construction to transform the nondeterministic filter automaton
F into a deterministic filter automaton P f (F) where the singleton
states in P f (F) correspond to the states in F . Secondly, we can assign to each
state in P f (F) a deterministic filter process with alphabet A using the (unique)
final A f
into the final
A f
-coalgebra FMA since the set FPA of deterministic filter processes on
A is the carrier of FMA . Finally, we need a mapping hide
to transform deterministic filter processes on A to deterministic filter processes
on A obs , i.e., processes with observable actions only. The definition of
will be the subject of this section.
Before going to technical details we draw attention to the following crucial
observations:
(i) Since the state transition function vA
the final A f
-coalgebra FMA is bijective we have as well a A f
A ) both with the
same carrier FPA [6,12].
(ii) The definition of hide : FPA ! FPA obs and thus the concept of "nonde-
terministic processes" in [4] is based on a complex mixture of coalgebraic
and algebraic techniques (see remark 5.2). This difference between the
fully coalgebraic concept of "deterministic processes" in [4] and the mixed
coalgebraic and algebraic concept of "nondeterministic processes" shows
itself also in the difference between the fixed point constructions in [4].
For "deterministic processes" the construction starts with the completely
undefined process STOPA and proceeds by extending definedness. In
contrast, the construction for "nondeterministic processes" starts with
the completely defined (and accepted) process CHAOSA and proceeds
by reducing definedness (and acceptance).
(iii) The mapping hide
will provide neither in the coalgebraic
nor in the algebraic sense any kind of homomorphism.
We are going now to define hide
. We start by considering
two A f
-subcoalgebras of FMA : Obviously, the set FPA obs of observable
processes constitutes a A f
-subcoalgebra of FMA . That is, FMA obs
r
can be seen as a A f
-subcoalgebra of FMA . Moreover, we can characterize
FMA obs by the equation FMA
acc(v A (p)) ' That is, FMA obs
is the greatest
A f
-subcoalgebra of FMA contained in the set LocA obs of locally observable
processes (see [9]).
In the same way we obtain the set DIVA obs
' FPA of all divergent pro-
cesses, i.e., processes with infinite traces of internal actions, as carrier of the
A f
is the greatest A f
-subcoalgebra of FMA
contained in the set LocA int
of processes with local internal actions.
If we turn, next, to the algebraic viewpoint we can observe that the A f
A ) is generated by the set FPA obs
[DIVA obs
, i.e., we
i. This means that FM \Gamma1
A is the smallest
A f
-algebra of FM \Gamma1
A containing FPA obs
[DIVA obs
. In such a way we can use
the common algebraic induction with the two basic cases FPA obs
and DIVA obs
to define things on FPA .
For the definition of hide
we need, further, an auxiliary
mapping that merges a finite set of observable
filter processes into a single observable filter process. Using the embedding
and the power construction
we obtain for the final
a further
thus the final
! -homomorphism from P f (N(FMA obs
to FMA obs
provides the intended mapping merge
Let be given, now, a set A of actions with a designated set A obs ' A of
observable actions. To define the mapping
we consider first the two basic cases. For observable processes we take obviously
the identity
and divergent processes have to be identified (according to Hoare) with
chaotic behavior
The induction step is based on the A f
That is, we can consider any process p 2 FPA n (FPA obs
) as the
result of applying the operation v \Gamma1
A to the argument (acc(v A (p)); vA (p)), and
the induction assumption will be that hide(v A (p)(a)) 2 FPA obs
is already
defined for all a 2 dom(vA (p)).
To define on this assumption hide(p) 2 FPA obs it will be enough to assign
to (acc(v A (p)); vA (p)) 2 P(P(A)) \Theta [A !p FPA ] a pair
r
since we can use the
A obs
A obs
to define
A obs
Note,that we will have according to this definition and the bijectivity of vA obs
Analogously to the elimination of internal actions described in section 6.4
we define (acc
as follows: We
denote by the set of all
processes in FPA reachable from p by an internal action, and we set
Note, that we get indeed acc p ' according to
the equations acc
(hide(q)), and the induction
assumption. For each a 2 dom(g p ) we set
in case a 2 (dom(vA
in case a 2 dom(vA
in case a 2 S
As example we consider the deterministic filter automaton F-X:exp 1 uexp 2
from section 6.3 with alphabet A = fa; b; c; lef t; right; backg and the set
of observable actions. The second and third component of
the process proc F -X:exp 1 uexp 2
(1 starting in state
can be depicted by
right
OEOE
right
a
\Upsilon\Upsilon
c
c
\Upsilon\Upsilon
and the second and third component of the corresponding observable process
r
are
a
c
a
OEOE
c
ffbgg fi
a
\Upsilon\Upsilon
c
ii
a
\Upsilon\Upsilon
c
Since there is no divergence in F-X:exp 1 uexp 2
this process coincides with the
process starting in the state fmg of the automaton P f
in section
6.4 that arises from F-X:exp 1 uexp 2
by, firstly, eliminating internal actions and
by, secondly, abstracting from nondeterminism using the power construction.
8 Conclusion and further work
We have shown that the concepts of processes in [4] are strongly related,
on the semantical level, to the concepts of deterministic partial automata,
deterministic filter automata, and nondeterministic filter automata. We were
able to give a compatible semantical interpretation of most of the operations
in [4] on the level of automata.
The algebraic laws in [4] turn now to statements concerning the compatibility
of constructions on different levels and in a next step we have to prove
these laws as we have done it for the statement in theorem 4.2 concerning the
compatibility of synchronization of automata and interaction of processes.
Based on the results and categorical concepts of this paper we should be
able to develop a more general theory of combining and structuring automata.
This would include, e.g., the straightforward interpretation of changes of symbols
by means of functors between categories of automata (analogously to
[13]). It would be also very interesting and necessary to relate the constructions
and results of this paper to similar constructions and results in the area
of behavioral [8] and hidden [2] algebraic specifications.
It will be also convenient to consider weaker concepts of homomorphisms
based on the obvious partial ordering on the sets [A !p S], [A !p P f (S)],
and P(P(A)), respectively. This would allow, e.g., to consider the synchronized
automaton SYNM 1 ;M 2 as a (relative) subautomaton of the product
automaton . Moreover, we will be able in such a way to extend
our considerations in [14] concerning traces and runs in deterministic partial
automata to filter automata.
Finally, it seems to be worth to extend the analysis of section 7 to other
process calculi. That is, to find out to what extend coalgebraic and algebraic
techniques are mixed there and how far they may be separated and combined
in a more structured way.
r
--R
Universal Theory of Automata.
A Hidden Agenda.
Algebraic Theory of Processes.
Communicating Sequential Processes.
Unification of Theories: A Challenge for Computing Science.
A tutorial on (co)algebras and (co)induction.
Communication and Concurrency.
Initial Computability
Universal coalgebra: a theory of systems.
Automata and coinduction (an exersice in coalgebra).
The category theoretic solution of recursive domain equations.
Functorial operational semantics and its denotational dual.
Models for concurrency.
A coalgebraic introduction to CSP.
--TR
Communicating sequential processes
Initial computability, algebraic specifications, and partial algebras
Algebraic theory of processes
Communication and concurrency
Models for concurrency
Unification of Theories
Automata and Coinduction (An Exercise in Coalgebra)
Universal coalgebra: a theory of systems
--CTR
Michele Boreale , Fabio Gadducci, Processes as formal power series: a coinductive approach to denotational semantics, Theoretical Computer Science, v.360 n.1, p.440-458, 21 August 2006 | communicating sequential processes;partial automata;coalgebras;final automata |
568342 | Combining a monad and a comonad. | We give a systematic treatment of distributivity for a monad and a comonad as arises in giving category theoretic accounts of operational and denotational semantics, and in giving an intensional denotational semantics. We do this axiomatically, in terms of a monad and a comonad in a 2-category, giving accounts of the Eilenberg-Moore and Kleisli constructions. We analyse the eight possible relationships, deducing that two pairs are isomorphic, but that the other pairs are all distinct. We develop those 2-categorical definitions necessary to support this analysis. | Introduction
In recent years, there has been an ongoing attempt to incorporate operational
semantics into a category theoretic treatment of denotational semantics. The
denotational semantics is given by starting with a signature for a language
without variable binding, and considering the category -Alg of -algebras [4].
The programs of the language form the initial -algebra. For operational seman-
tics, one starts with a behaviour functor B and considers the category B-Coalg
of B-coalgebras [5, 7]. By combining these two, one can consider the combination
of denotational and operational semantics [14, 16]. Under size conditions,
the functor gives rise to a free monad T on it, the functor B gives rise to a
cofree comonad D on it, and the fundamental structure one needs to consider is
a distributive law of T over D, i.e., a natural transformation : TD ) DT subject
to four axioms; and one builds the category -Bialg from it, a -bialgebra
being an object X of the base category together with a T -structure and a D-structure
on X , subject to one evident coherence axiom. This phenomenon was
the subject of Turi and Plotkin's [16], with leading example given by an idealised
parallel language, with operational semantics given by labelled transition
systems. In fact the work of this paper sprang from discussions between one of
the authors and Plotkin, whom we acknowledge gratefully.
As a separate piece of work, Brookes and Geva [2] have also proposed the
study of a monad and a comonad in combination. For them, the Kleisli category
for the comonad gives an intensional semantics, with maps to be regarded as
algorithms. They add a monad in the spirit of Moggi to model what has been
called a notion of computation [11]. They then propose to study the category
for which an arrow is a map of the form DX ! TY in the base category, where
T is the monad and D is the comonad. In order for this to form a category, one
needs a distributive law of D over T , i.e., a natural transformation
subject to four coherence axioms. Observe that this distributive law allowing
one to make a two-sided version of a Kleisli construction is in the opposite
direction to that required to build a category of bialgebras.
Motivated by these two examples, in particular the former, we seek an account
of the various combinations of a monad and a comonad, with a treatment
of Eilenberg-Moore and Kleisli constructions. That is the topic of this paper.
The answer is not trivial. It is not just a matter of considering the situation
for a distributive law between two monads and taking a dual of one of them, as
there are fundamental dierences. For instance, to give a pair of monads T and
distributive law of T over T 0 is equivalent to giving a monad structure
on T 0 T [1] with appropriate coherence, but nothing like that is the case for a
distributive law of a monad T over a comonad D. To give a distributive law of
equivalent to giving a lifting of the monad T to T 0 -Alg, but not
a lifting of T 0 to T -Alg. However, to give a distributive law of a monad T over
a comonad D is equivalent to lifting T to D-Coalg and also to lifting D to T -
Alg. Dual remarks, with the Kleisli construction replacing the Eilenberg-Moore
construction, apply to distributive laws of comonads over monads. So we need
an analysis specically of distributive laws between a monad and a comonad,
and that does not amount to a mild variant of the situation for two monads.
In principle, when one includes an analysis of maps between distributive laws,
one has eight choices here: given (T ; D;) on a category C and (T 0
one could consider natural transformations
or the other three alternatives given by
dualisation; and one could dualise by reversing the directions of and 0 . But
not all of these possibilities have equal status. Two of them each arise in two
dierent ways, re
ecting the fact that a category -Bialg of bialgebras for a
monad T and a comonad D may be seen as both the category of algebras for
a monad on D-Coalg and as a category of coalgebras for a comonad on T -Alg.
And two of the eight possibilities do not correspond to applying an Eilenberg-
Moore or Kleisli construction to an Eilenberg-Moore or Kleisli construction at
all. We investigate the possibilities in Sections 6 to 8.
As an application of morphisms of distributive laws, consider Turi and
Plotkin's work [16]. Suppose we have two languages, each specied by a distributive
law for a syntax monad over a behaviour comonad. To give translations of
both syntax and behaviour, i.e., a monad morphism and a comonad morphism,
that respect the operational semantics, is equivalent to giving a morphism of
distributive laws. So this framework provides a consistent and comprehensive
translation of languages both in syntax and semantics. Similar remarks apply
to the other combinations of monads and comonads.
We make our investigations in terms of an arbitrary 2-category K. The reason
is that although the study of operational and denotational semantics in [16]
was done in terms of ordinary categories, i.e., modulo size, in the 2-category
Cat, it was done without a direct analysis of recursion, for which one would
pass to the 2-category of O-categories, i.e., categories for which the homsets
are equipped with !-cpo structure, with maps respecting such structure. More
generally, that work should and probably soon will be incorporated into axiomatic
domain theory, requiring study of the 2-category V -Cat for a symmetric
monoidal closed V subject to some domain-theoretic conditions [3]. Moreover,
our denitions and analysis naturally live at the level of 2-categories, so that
level of generality makes the choices clearest and the proofs simplest. Mathe-
matically, this puts our analysis exactly at the level of generality of the study of
monads by Street in [15], but see also Johnstone's [6] for an analysis of adjoint
lifting that extends to this setting. The 2-categorical treatment claries the
conditions needed for adjoint lifting. The topic of our study, distributivity for
monads and comonads, agrees with that of MacDonald and Stone [9, 10] when
restricted to Cat. Mulry [12] has also done some investigation into liftings to
Kleisli categories.
Much of the abstract work of the rst four technical sections of this paper is
already in print, primarily in Street's paper [15]. But that is an old paper that
was directed towards a mathematical readership; it contains no computational
examples or analysis; and the material relevant to us is interspersed with other
work that is not relevant. We happily acknowledge Street's contribution, but
thought it worthwhile to repeat the relevant part before reaching the substantial
new work of this paper, which appears in Sections 6 to 8.
Formally, we recall the denition of 2-category in Section 2, dene the notion
of a monad in a 2-category, and introduce the 2-categories Mnd(K) and
Mnd (K). We characterise the Eilenberg-Moore construction and the liftings
to those constructions in Section 3. We also explain a dual, yielding the Kleisli
construction and the liftings to those constructions in Section 4. This is all
essentially in Street's paper [15]. In Section 5, we give another dual, yielding
accounts of the Eilenberg-Moore and Kleisli constructions for comonads, and
the liftings to them. Then lies the heart of the paper, in which we consider
the eight possible combinations of monads and comonads, characterising all of
them. For a given 2-category K, we rst consider the 2-category CmdMnd(K)
in Section 6. We characterise the category of bialgebras using this 2-category.
It also yields a characterisation of functors between categories of bialgebras. In
Section 7, we consider Mnd Cmd (K), characterising the Kleisli category of
a monad and a comonad and functors between them. We consider the other
possibilities in Section 8, which consists of four cases, i.e., four 2-categories
of distributive laws. We give explanations of the constructions of 0-cells, 1-
cells and 2-cells of K from 2-categories of distributive laws. We also give some
examples of categories constructed in this way when
2 Monads in 2-categories
In this section, we dene the notion of 2-category and supplementary notions.
We then dene the notion of a monad in a 2-category K and we dene two
2-categories, Mnd(K) and Mnd (K), of monads in K.
2.1 Denition A 2-category K consists of
a set of 0-cells or objects
for each pair of 0-cells X and Y , a category K(X;Y ) called the homcate-
gory from X to Y
for each triple of 0-cells X , Y and Z, a composition functor
for each 0-cell X , an object id X of K(X;X), or equivalently, a functor
called the identity on X
such that the following diagrams of functors commute
@
@ @
@ @
R
In the denition of a 2-category, the objects of each K(X;Y ) are often called
1-cells and the arrows of each K(X;Y ) are often called 2-cells. We typically
abbreviate the composition functors by juxtaposition and use to represent
composition within a homcategory.
Obviously, the denition of 2-category is reminiscent of the denition of
category: if one takes the denition of category and replaces homsets by hom-
categories, composition functions by composition functors, and the axioms by
essentially the same axioms but asserting that pairs of functors rather than
functions are equal, then one has exactly the denition of a 2-category.
2.2 Example The leading example of a 2-category is Cat, in which the 0-
cells are small categories and Cat(C; D) is dened to be the functor category
[C; D]. In this paper, we sometimes treat Cat as though Set is a 0-cell of Cat.
Technically, the existence of two strongly inaccessible cardinals together with a
careful variation in the use of the term small allows that.
2.3 Example For any symmetric monoidal closed category V , one has a 2-
category V -Cat, whose objects are small V -categories, and with homcategories
given by V -functors and V -natural transformations. Two specic examples of
this are
the 2-category LocOrd of small locally ordered categories, locally ordered
functors, and natural transformations, where V is the category Poset of
posets and order-preserving functions.
the 2-category of small O-categories, O-functors, and natural transforma-
tions, where O is the cartesian closed category of !-cpo's.
Each 2-category K has an underlying ordinary category K 0
given by the
0-cells and 1-cells of K. A 2-functor between 2-categories K and L is a functor
from K 0
to L 0
that respects the 2-cell structure. A 2-natural transformation
between 2-functors is an ordinary natural transformation that respects the 2-
cell structure. Given a 2-functor U denitions give rise to
the notion of a left 2-adjoint, which is a left adjoint that respects the 2-cells.
More details and equivalent versions of these denitions appear and are analysed
in [8].
Now we have the denition of 2-category, we can dene the notion of a monad
in any 2-category K, generalising the denition of monad on a small category,
which amounts to the case of
2.4 Denition A monad in a 2-category K consists of a 0-cell C, a 1-cell
subject to commutativity
of the following diagrams in the homcategory K(C;C)
@ @
@ @
@
R
For example, if one lets a monad in K as we have just dened
it amounts exactly to a small category with a monad on it. More generally, if
-Cat, then a monad in K amounts exactly to a small V -category together
with a V -monad on it. So, for instance, a monad in O-Cat amounts to a small
O-category together with a monad on it, such that the monad respects the !-cpo
structure of the homs.
For any 2-category K, one can construct a 2-category of monads in K.
2.5 Denition For any 2-category K, the following data forms a 2-category
0-cells are monads in K.
A 1-cell in Mnd(K) from (C; T
C 0 in K, together with a 2-cell subject to commutativity
in K(C;C 0 ) of
@ @
@ @
@
R
and
JT
A 2-cell in Mnd(K) from (J; j) to (H; h) is a 2-cell : J ) H in K subject
to the evident axiom expressing coherence with respect to j and h, i.e.,
the following diagram commutes:
2.6 Example ([16]) Suppose we are given a language (without variable
binding) generated by a signature. The denotational models of this language
are given by -algebras on Set, where is functor dened by
where varies over signature. A -algebra is a set X together with a map
equivalently an interpretation of each on the set X . In general,
each polynomial functor on Set freely generates a monad on Set, so there exists
a monad on Set such that -alg is isomorphic to T -Alg, the category
of Eilenberg-Moore algebras for the monad In this case, the set
for a set X is the set of terms freely generated by the signature applied to X .
Next suppose we are given and 0 . The endofunctors freely generate
monads
lifts uniquely to a natural transformation such that (Id; t) is a
morphism from (Set; T Mnd(Cat). The X component
of t is a map from TX to T 0 X , i.e., a map which sends each term generated by
to a term generated by 0 respecting the term structure. So translation of
languages can sometimes be captured as a morphism of monads.
2.7 Example ([16]) Consider
A 1 -algebra consists of a set X together with a constant nil
each element a 2 A an atomic action a:
Now consider a second language 2
by adding parallel operator jj to the
signature of 1 . The corresponding polynomial functor is given by
For these two languages 1 and 2 , we can give an example of a natural
transformation 1
by dening the X component to be the inclusion of
X into the rst and second components of 2
X .
Both endofunctors
freely generate monads
spectively. The natural transformation induced by the above natural
transformation from 1
to 2
is the inclusion of T 1
X .
Finally in this section, we mention a dual construction. For any 2-category
K, one may consider the opposite 2-category K op , which has the same 0-cells
as K but K op composition induced by that of K. This
allows us to make a dierent construction of a 2-category of monads in K, as
we could say
2.8 Denition For a 2-category K, dene Mnd
Analysing the denition, a 0-cell of Mnd (K) is a monad in K; a 1-cell from
together with a 2-cell
subject to two coherence axioms, expressing coherence
between and 0 and between and 0 ; and a 2-cell from (J; j) to (H; h) is a
2-cell in K from J to H subject to one axiom expressing coherence with respect
to j and h. The central dierence between Mnd(K) and Mnd (K) is in the
1-cells, because j is in the opposite direction.
3 Eilenberg-Moore constructions
In this section, we develop our denitions of the previous section, in particular
that of Mnd(K), by characterising the Eilenberg-Moore constructions in terms
of the existence of an adjoint to a inclusion 2-functor [15].
For each 2-category K, there is a forgetful 2-functor U
sending a monad (C; in K to its underlying object C. This 2-functor
has a right 2-adjoint given by the 2-functor Inc sending an
object X of K to (X; id; id; id), i.e., to X together with the identity monad on
it. The denition of Mnd(K) and analysis of it are the central topics of study
of [15], a summary of which appears in [8].
3.1 Denition A 2-category K admits Eilenberg-Moore constructions for monads
if the 2-functor Inc : K ! Mnd(K) has a right 2-adjoint.
3.2 Remark Note in general what the above 2-adjunction means. There is an
isomorphism between two categories for each monad
X in K:
We denote the T-component " of the counit by a pair
@ @
@ @
@
R
Then the universality for 1-cells means that for each 1-cell (J;
for each 1-cell J and each 2-cell
satisfying coherence conditions, there exists a unique 1-cell J
K such that U T J
Next, the universality for 2-cells means that for each 2-cell : (J;
subject to a
coherence condition, there exists a unique 2-cell
K such that U T are implied by the universality
for 1-cells.
3.3 Proposition If has a right
2-adjoint given by the Eilenberg-Moore construction for a monad on a small
category.
Proof Let be a monad in Cat. We have a forgetful functor
be a natural transformation
given by u . Then we have a
We show that this 1-cell satises
the universal property.
Given a category X and given a map (J;
a functor
objects by putting
Ja, and arrows by sending f : a ! b to Jf : Ja ! Jb. Then we have
Mnd(Cat). The unicity of [
(J;
For 2-dimensional property, let : (J; be a 2-cell in Mnd(Cat),
Ha for each object a in X , then b :
turns out to be a natural transformation by coherence condition of . It is
easy to show that this b
is the unique natural transformation which satises
3.4 Remark Note here what the universal property says: it says that for any
small category X and any small category C with a monad T on it, there is a
natural isomorphism of categories between [X; T -Alg] and the category for which
an object is a functor J together with a natural transformation TJ )
J subject to two coherence conditions generalising those in the denition of T -
algebra. This is a stronger condition than the assertion that every adjunction
gives rise to a unique functor into the category of algebras of the induced monad.
3.5 Example If V has equalisers, then V -Cat admits Eilenberg-Moore constructions
for monads, and again, the construction is exactly as one expects.
This is a fundamental observation underlying [15].
3.6 Proposition Suppose K admits Eilenberg-Moore constructions, i.e., the
2-functor Inc has a right 2-adjoint K. Then for any 0-cell
of there exists an adjunction hF T
T-Alg in the 2-category K that generates the monad T.
Proof The proof is written in [15].
Consider the 1-cell Mnd(K). By using the universality
for 1-cells, we have a unique 1-cell F T-Alg such that u T F
U be the unit of the monad
the 2-cell in K is a 2-cell from
by using the universality for 2-cells,
there exists a unique 2-cell "
Again by the universal property, U T ("
implies that " T F T F T By using the equation (1) and the coherence
Hence we can show the existence
of an adjunction in the 2-category K.
3.1 Liftings to Eilenberg-Moore constructions
Now assume K admits Eilenberg-Moore constructions for monads. For each
we call the 0-cell T-Alg in K an Eilenberg-Moore
construction for the monad T. Here, we investigate the existence and nature of
liftings of 1-cells to Eilenberg-Moore constructions at the level of generality we
have been developing.
3.7 Denition Let
Mnd(K). A 1-cell lifts to a 1-cell
J on Eilenberg-Moore
constructions if the following diagram commutes in K.
T-Alg
3.8 Denition Suppose both 1-cells lift to
respectively
on Eilenberg-Moore constructions. A 2-cell : J lifts to a 2-cell
H on Eilenberg-Moore constructions if the equation U T 0
holds.
Lemma The right adjoint 2-functor sends each
1-cell in Mnd(K) to a lifting of J , and each
to a lifting of
Proof By using the 2-naturality of the counit, the following diagram commutes
for 1-cells (J;
Inc(T-Alg)
Inc((J; j)-Alg)
Hence we have U T 0
Similarly, naturality for a 2-cell : (J; implies the equation
Conversely, every lifting arises uniquely from Mnd(K).
3.10 Theorem Suppose a 1-cell lifts to
Eilenberg-Moore constructions for monads
Then there exists a unique 1-cell (J; in Mnd(K) such that (J;
J .
Suppose both 1-cells lift to
tively, arising from 1-cells (J; j); (H; respectively, i.e., (J;
and (H;
H . If a 2-cell lifts to
H on Eilenberg-
Moore constructions, then is a 2-cell in Mnd(K) from (J; j) to (H; h) such
that
.
Proof Given
Jid =========
id
For this 2-cell j, note that j 0
a 1-cell from the T to T 0 . Since (J; j)-Alg is the unique 1-cell
such that u T 0
need only show that Ju T jU
J .
But equation (1) implies Ju T jU
J . So by
universality, we have (J;
J .
By denition of ( )-Alg, the 2-cell -Alg : (J; j)-Alg ) (H; h)-Alg is the
unique one such that U T 0
universality for 2-cells implies
.
3.11 Corollary Liftings of 1-cells to Eilenberg-Moore constructions are equivalent
to 1-cells in Mnd(K). Liftings of 2-cells to Eilenberg-Moore constructions
are equivalent to 2-cells in Mnd(K).
Given an arbitrary 2-category K, we have constructed the 2-category Mnd(K)
of monads in K. Modulo size, this construction can itself be made 2-functorial,
yielding a 2-functor Mnd : 2-Cat ! 2-Cat, taking a small 2-category K
to Mnd(K), with a 2-functor G sent to a 2-functor
similarly for a 2-natural transformation. In fact,
the 2-category 2-Cat forms a 3-category, and the 2-functor Mnd extends to a
3-functor, but we do not use those facts further in this paper, so we do not give
the deninitions here. It follows that, given a 2-adjunction F a U : K ! L, one
obtains another 2-adjunction Mnd(F ) a
shall use this fact later.
4 Kleisli construction
In this section, we consider a dual to the work of the previous section. This
is not just a matter of reversing the direction of every arrow in sight. But by
putting we can deduce results about Mnd (K) from results about
Mnd(L). In particular, we have
4.1 Proposition
1. The construction Mnd (K) yields a 2-functor Mnd : 2-Cat ! 2-Cat.
2. The forgetful 2-functor U : Mnd (K) ! K has a left 2-adjoint given by
sending an object X of K to the identity monad
on X .
We can characterise Kleisli constructions by using the 2-category Mnd (K).
We can show the following by the dual argument to Proposition 3.3.
4.2 Proposition If has a left
2-adjoint given by the Kleisli construction for a monad on a small category.
Spelling out the action of the 2-functor ( )-Kl : Mnd (Cat) ! Cat on 1-cells
and 2-cells, a 1-cell (J; sent to the functor
which sends an object a of T-Kl to the object Ja of
and an arrow f : a ! b of T-Kl, i.e., an arrow ^
of C, to the
arrow of T 0 -Kl given by j b J
to the natural transformation -Kl : (J; j)-Kl ) (H; h)-Kl whose a component
is given by 0
Ha a
The above construction and proof extend readily to the case of
In light of this result, we say
4.3 Denition A 2-category K admits Kleisli constructions for monads if the
has a left 2-adjoint.
4.4 Proposition Suppose a 2-category K admits Kleisli constructions for
monads. with the left 2-adjoint to Inc given by K. For
any 0-cell there is an adjunction hFT
in K that generates the monad T.
Proof Dual to the proof of Proposition 3.6.
4.1 Liftings to Kleisli constructions
Now we assume a 2-category K admits Kleisli constructions for monads. For
each monad in K we call T-Kl a Kleisli construction for the
monad T.
We can dene the liftings to Kliesli constructions as follows:
4.5 Denition Let
Mnd (K). A 1-cell lifts to a 1-cell
constructions if the following diagram commutes in K.
T-Kl
FT 0We can also dene the notion of a lifting of a 2-cell.
4.6 Denition Suppose 1-cells lift to
Kleisli constructions. A 2-cell lifts to a 2-cell
H on Kleisli
constructions if the equation
holds.
Since Mnd we remark that
Lemma The following two conditions are equivalent.
1. K admits Kleisli constructions for monads.
2. K op admits Eilenberg-Moore constructions for monads.
So, dualising Theorem 3.10, we have
4.8 Theorem Suppose a 1-cell lifts to
constructions for monads
exists a unique 1-cell (J;
J .
Suppose both 1-cells lift to
respectively,
arising from 1-cells (J; j); (H; respectively, i.e., (J;
J and
H . If a 2-cell lifts to
H on Kleisli constructions,
then is a 2-cell in Mnd (K) from (J; j) to (H; h) such that
.
4.9 Corollary Liftings of 1-cells to Kleisli constructions are equivalent to 1-
cells in Mnd (K). Liftings of 2-cells to Kleisli constructions are equivalent to
2-cells in Mnd (K).
5 Comonads in 2-categories
We now turn from monads to comonads. The results we seek about comonads
follow from those about monads by consideration of another duality applied to
an arbitrary 2-category. Given a 2-category K, one may consider two distinct
duals: K op as in the previous section and K co . The 2-category K co is dened
to have the same 0-cells as K but with K co (X; Y ) dened to be K(X;Y ) op .
In K op , the 1-cells are reversed, but the 2-cells are not, whereas in K co , the
2-cells are reversed but the 1-cells are not. One can of course reverse both 1-cells
and 2-cells, yielding K coop , or isomorphically, K opco .
5.1 Denition A comonad in K is dened to be a monad in K co , i.e., a 0-cell
C, a 1-cell subject to the
duals of the three coherence conditions in the denition of monad.
Taking a comonad in K as we have just dened it is exactly a
small category together with a comonad on it.
One requires a little care in dening Cmd(K), the 2-category of comonads
in K. If one tries to dene Cmd(K) to be Mnd(K co ), then there is no forgetful
2-functor from Cmd(K) to K.
5.2 Denition For a 2-category K, dene Cmd(K) to be Mnd(K co ) co .
Explicitly, a 0-cell in Cmd(K) is a comonad in K. A 1-cell in Cmd(K) from
together with a 2-cell
subject to two coherence conditions, one relating and 0 , the
other relating and 0 . A 2-cell from (J; j) to (H; h) is a 2-cell in K from J to
H subject to one coherence condition relating j and h.
Note carefully the denition of a 1-cell in Cmd(K). It consists of a 1-cell
and a 2-cell in K; of those, the 1-cell goes in the same direction as that in the
denition of Mnd(K), but the 2-cell goes in the opposite direction.
5.3 Example In [16], categories of coalgebras for behaviour endofunctors on Set are used. Examples are B1(X) = 1 + (A × X) and B2(X) = P_ω(A × X), where P_ω is the finite powerset functor. A B1-coalgebra is a set X together with a function k : X → B1(X), i.e., a deterministic A-labelled transition system. A B2-coalgebra is a finitely branching A-labelled transition system. B1-coalgebras are used for deterministic processes and B2-coalgebras are used for nondeterministic processes.
Similarly to algebras for endofunctors, endofunctors such as B1 and B2 on Set cofreely generate comonads, i.e., there exist comonads D1 and D2 respectively on Set such that B1-Coalg is isomorphic to D1-Coalg and B2-Coalg is isomorphic to D2-Coalg.
Suppose endofunctors B and B' cofreely generate comonads D and D' respectively. Then every natural transformation between the two behaviour functors generates a natural transformation d : D ⇒ D' such that (Id, d) is a morphism from D to D' in Cmd(Cat). This analysis can be extended to consider natural transformations from D to B', but we do not have examples at that full level of generality.
For the above endofunctors B1 and B2, we can consider the natural transformation B1 ⇒ B2 whose X component sends * to ∅ and (a, x) to {(a, x)}. It generates a comonad morphism from D1 to D2.
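A small Haskell sketch may help fix the shapes used in this example. It assumes the readings B1(X) = 1 + (A × X) and B2(X) = P_ω(A × X) given above, approximates finite sets by lists, and fixes a label alphabet; all names are ours.

type A = Char                      -- a fixed label alphabet (assumption)

newtype B1 x = B1 (Maybe (A, x))   -- deterministic behaviour
newtype B2 x = B2 [(A, x)]         -- finitely branching behaviour

-- A B1-coalgebra: a deterministic A-labelled transition system.
type DetLTS x = x -> B1 x

-- A B2-coalgebra: a finitely branching A-labelled transition system.
type NdetLTS x = x -> B2 x

-- The natural transformation B1 => B2 of the example:
-- it sends * to the empty set and (a, x) to the singleton {(a, x)}.
b1ToB2 :: B1 x -> B2 x
b1ToB2 (B1 Nothing)       = B2 []
b1ToB2 (B1 (Just (a, x))) = B2 [(a, x)]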
Also, one may define Cmd*(K) = Cmd(K^op)^op. Since the operations (−)^op and (−)^co commute, we have
5.4 Proposition For any 2-category K, Cmd*(K) = Mnd*(K^co)^co.
5.1 Eilenberg-Moore constructions for comonads
Just as in the situation for monads, there is an underlying 2-functor U : Cmd(K) → K, which has a right 2-adjoint given by Inc : K → Cmd(K), sending an object X to the identity comonad on X; and again, one may say
5.5 Definition A 2-category K admits Eilenberg-Moore constructions for comonads if Inc : K → Cmd(K) has a right 2-adjoint.
Although not stated explicitly in [15], it follows routinely that the 2-category Cat admits Eilenberg-Moore constructions for comonads, and they are given by the usual Eilenberg-Moore construction. Again here, the construction Cmd(K) yields a 2-functor Cmd : 2-Cat → 2-Cat.
5.6 Proposition Suppose a 2-category K admits Eilenberg-Moore constructions for comonads. We denote the right 2-adjoint by (−)-Coalg. For any 0-cell (C, D, ε, δ) of Cmd(K), there is an adjunction ⟨U^D, G^D, η, ε⟩ in K that generates the comonad D.
Proof Dual to the proof of Proposition 3.6.
5.1.1 Liftings to Eilenberg-Moore constructions
Now, dually to the case for monads, assume K admits Eilenberg-Moore constructions for comonads. For each comonad (C, D, ε, δ) in K, we call D-Coalg an Eilenberg-Moore construction for D.
5.7 Definition Let (J, j) : (C, D) → (C', D') be a 1-cell in Cmd(K). A 1-cell J-Coalg : D-Coalg → D'-Coalg lifts J to Eilenberg-Moore constructions if the following diagram commutes in K:
U^{D'} ∘ J-Coalg = J ∘ U^D.
5.8 Definition Suppose 1-cells J and H lift to J-Coalg, H-Coalg : D-Coalg → D'-Coalg on Eilenberg-Moore constructions. A 2-cell α : J ⇒ H lifts to a 2-cell α-Coalg : J-Coalg ⇒ H-Coalg on Eilenberg-Moore constructions if U^{D'} α-Coalg = α U^D.
Since Cmd(K) = Mnd(K^co)^co, we remark that
5.9 Lemma The following two conditions are equivalent.
1. K admits Eilenberg-Moore constructions for comonads.
2. K^co admits Eilenberg-Moore constructions for monads.
So, dualising Theorem 3.10, we have
5.10 Theorem If J-Coalg : D-Coalg → D'-Coalg is a lifting of a 1-cell J : C → C' to Eilenberg-Moore constructions, then there is a unique 1-cell (J, j) in Cmd(K) such that (J, j)-Coalg = J-Coalg.
Suppose 1-cells J-Coalg and H-Coalg are liftings of (J, j) and (H, h) respectively. If a 2-cell α-Coalg : J-Coalg ⇒ H-Coalg in K is a lifting of a 2-cell α : J ⇒ H, then α is a 2-cell in Cmd(K) from (J, j) to (H, h) whose lifting is α-Coalg.
5.11 Corollary Liftings of 1-cells to Eilenberg-Moore constructions are equivalent to 1-cells in Cmd(K). Liftings of 2-cells to Eilenberg-Moore constructions are equivalent to 2-cells in Cmd(K).
5.1.2 Liftings to Kleisli constructions
Now assume K admits Kleisli constructions for comonads. For each comonad (C, D, ε, δ) in K, we call D-CoKl a Kleisli construction for the comonad D.
5.12 Definition Let (J, j) : (C, D) → (C', D') be a 1-cell in Cmd*(K). A 1-cell J-CoKl : D-CoKl → D'-CoKl lifts J to Kleisli constructions if the following diagram commutes in K:
J-CoKl ∘ F^D = F^{D'} ∘ J.
5.13 Definition Suppose 1-cells J and H lift to J-CoKl, H-CoKl : D-CoKl → D'-CoKl respectively on Kleisli constructions. A 2-cell α : J ⇒ H lifts to a 2-cell α-CoKl : J-CoKl ⇒ H-CoKl on Kleisli constructions if α-CoKl F^D = F^{D'} α.
Similarly to Lemma 4.7, we have
5.14 Lemma The following two conditions are equivalent.
1. K admits Kleisli constructions for comonads.
2. K^op admits Eilenberg-Moore constructions for comonads.
Once again by dualising Theorem 3.10, we have
5.15 Theorem If J-CoKl : D-CoKl → D'-CoKl is a lifting of a 1-cell J : C → C' to Kleisli constructions for comonads, then there is a unique 1-cell (J, j) in Cmd*(K) such that (J, j)-CoKl = J-CoKl.
Suppose 1-cells J-CoKl and H-CoKl are liftings of (J, j) and (H, h) respectively. If a 2-cell α-CoKl : (J, j)-CoKl ⇒ (H, h)-CoKl in K is a lifting of a 2-cell α : J ⇒ H, then α is a 2-cell in Cmd*(K) from (J, j) to (H, h) whose lifting is α-CoKl.
5.16 Corollary Liftings of 1-cells to Kleisli constructions for comonads are equivalent to 1-cells in Cmd*(K). Similarly, liftings of 2-cells to Kleisli constructions for comonads are equivalent to 2-cells in Cmd*(K).
In previous sections, we have defined 2-functors Mnd, Mnd*, Cmd and Cmd*. So in principle, one might guess that there are eight possible ways of combining a monad and a comonad, as there are three dualities: start with the monad or start with the comonad; taking (−)* on the monad or not; and likewise for the comonad. In fact, as we shall see, there are precisely six. First we analyse the 2-functor CmdMnd. In order to do that, we give the definition of a distributive law of a monad over a comonad in a 2-category.
6.1 Definition Given a monad (T, η, μ) and a comonad (D, ε, δ) on an object C of a 2-category K, a distributive law of T over D is a 2-cell λ : TD ⇒ DT which satisfies the laws involving each of η, μ, ε and δ.
6.2 Definition For any 2-category K, the following data forms a 2-category Dist(K) of distributive laws:
A 0-cell consists of a 0-cell C of K, a monad T on it, a comonad D on it, and a distributive law λ : TD ⇒ DT.
A 1-cell from (C, T, D, λ) to (C', T', D', λ') consists of a 1-cell J : C → C' in K, together with a 2-cell j_t subject to the monad laws, together with a 2-cell j_d in K subject to the comonad laws, all subject to one coherence condition given by a hexagon with vertex JTD relating λ, λ', j_t and j_d.
A 2-cell from (J, j_t, j_d) to (H, h_t, h_d) consists of a 2-cell from J to H in K subject to two conditions expressing coherence with respect to j_t and h_t and coherence with respect to j_d and h_d.
6.3 Proposition For any 2-category K, the 2-category CmdMnd(K) is isomorphic to Dist(K).
Thus Dist(Cat) gives as 0-cells exactly the data considered by Turi and Plotkin [16]. Turi and Plotkin did not, in that paper, address the 1-cells of Dist(Cat), but they propose to do so in future. The 0-cells provide them with a combined operational and denotational semantics for a language; the 1-cells allow them to account for the interpretation of one language presented in such a way into another language thus presented. In fact, it was in response to Plotkin's specific proposal about how to do that that much of the work of this paper was done. For a simple example, one might have a monad and comonad on the category Set, and embed it into the category of ω-cpos in order to add an account of recursion.
6.4 Example We give an example of a distributive law of a monad over a comonad. Let (T, η, μ) be the monad on Set sending a set X to the set X* of finite lists, and let (D, ε, δ) be the comonad that sends a set X to the set of streams over X. Consider the natural transformation λ : TD ⇒ DT whose X component sends a finite list of streams a_1 a_2 ... a_n, with a_i = a_{i1} a_{i2} a_{i3} ... (1 ≤ i ≤ n), to the stream of finite lists (a_{11} a_{21} ... a_{n1})(a_{12} a_{22} ... a_{n2})(a_{13} a_{23} ... a_{n3}) .... This natural transformation satisfies the axioms for a distributive law of a monad over a comonad. Hence these data give an example of a 0-cell of CmdMnd(Cat). It also becomes a 0-cell of both Cmd*Mnd(Cat) and Mnd*Cmd(Cat) later.
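Concretely, this distributive law is a pointwise transpose: a finite list of streams is turned into the stream of its columns. A minimal Haskell sketch, using the Stream type from the earlier Comonad sketch (the function name is ours):

-- lambda : T D => D T, sending a finite list of streams to the stream of
-- its successive "columns" of heads.
lambdaTD :: [Stream a] -> Stream [a]
lambdaTD xs = Cons (map hd xs) (lambdaTD (map tl xs))
  where
    hd (Cons x _) = x
    tl (Cons _ s) = s

Note that on the empty list the sketch produces the constant stream of empty lists, which is the expected image of the empty list under the distributive law.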
6.5 Example The distributive laws in [16] are given in the following manner. For a given signature Σ and a suitable behaviour B, Turi and Plotkin model a GSOS rule by a natural transformation Σ(Id × B) ⇒ BT, where T is the monad freely generated by the endofunctor Σ. They then show that the monad T lifts to B-coalg, the category of B-coalgebras for the endofunctor B, which means that T, η and μ lift.
For the comonad (D, ε, δ) cofreely generated by B, this lifting diagram is equivalent to the lifting diagram for the monad T to the category of Eilenberg-Moore coalgebras for the comonad D. By Theorem 3.10, this is equivalent to one datum and two conditions:
a natural transformation λ : TD ⇒ DT making (T, λ) a 1-cell of Cmd(Cat);
the natural transformation μ being a 2-cell from (T, λ)^2 to (T, λ) in Cmd(Cat);
the natural transformation η being a 2-cell from Id to (T, λ) in Cmd(Cat).
Hence it is equivalent to give a distributive law λ : TD ⇒ DT.
A corollary of Proposition 6.3, which although easily proved, is conceptually fundamental, is
6.6 Corollary CmdMnd(K) is isomorphic to MndCmd(K).
Proof It is easy to check that Dist(K) is isomorphic to Dist(K^co)^co. Since CmdMnd(K) is isomorphic to Dist(K), the result follows.
6.7 Theorem Suppose K admits Eilenberg-Moore constructions for monads and comonads. Then Inc : K → CmdMnd(K) has a right 2-adjoint.
Proof Since K admits Eilenberg-Moore constructions for monads, Inc : K → Mnd(K) has a right 2-adjoint. Since Cmd : 2-Cat → 2-Cat is a 2-functor, it sends adjunctions to adjunctions, so Cmd(Inc) : Cmd(K) → CmdMnd(K) has a right 2-adjoint. Since K admits Eilenberg-Moore constructions for comonads, Inc : K → Cmd(K) has a right adjoint. Composing the right adjoints gives the result.
This result gives us a universal property for the construction of the category of λ-bialgebras, given a monad T, a comonad D, and a distributive law λ of T over D. In this precise sense, one may see the construction of a category of bialgebras as a generalised Eilenberg-Moore construction.
Using Proposition 6.3 and Corollary 6.6, we may characterise the right 2-adjoint in three ways, giving
6.8 Corollary If K admits Eilenberg-Moore constructions for monads and comonads, then given a distributive law λ of a monad T over a comonad D, the following are equivalent:
λ-Bialg determined directly by the universal property of a right 2-adjoint to the inclusion Inc : K → CmdMnd(K) sending X to the identity distributive law on X;
the Eilenberg-Moore object for the lifting of T to D-Coalg;
the Eilenberg-Moore object for the lifting of D to T-Alg.
By the universal property, the right 2-adjoint (−)-Bialg inherits an action on 1-cells and 2-cells. The behaviour of the right 2-adjoint on 0-cells gives exactly the construction (−)-Bialg studied by Turi and Plotkin [16]. Its behaviour on 1-cells will be fundamental to their later development as outlined above.
More concretely, the right 2-adjoint sends each 1-cell (J, j_t, j_d) : (C, T, D, λ) → (C', T', D', λ') to a 1-cell J-Bialg : λ-Bialg → λ'-Bialg such that the following diagram commutes, where U : λ-Bialg → C and U' : λ'-Bialg → C' are the canonical 1-cells:
U' ∘ J-Bialg = J ∘ U.
It also sends each 2-cell α to a 2-cell α-Bialg : J-Bialg ⇒ H-Bialg satisfying the equation U' α-Bialg = α U.
6.9 Remark Although all 1-cells and 2-cells in MndCmd(K) give liftings to bialgebras, we do not have a converse, as we cannot construct the data for a 1-cell in MndCmd(K) from a given lifting.
6.10 Example Consider the Eilenberg-Moore construction, i.e., the category of λ-bialgebras, for the monad, comonad, and distributive law of Example 6.4. Since the comonad (D, ε, δ) is cofreely generated by the endofunctor Id on Set, D-Coalg is isomorphic to Id-coalg, the category of coalgebras for the endofunctor Id; this is the category of deterministic dynamical systems. Hence every object of D-Coalg can be seen as a dynamical system (X, α) with state space X and transition function α : X → X.
The Eilenberg-Moore construction T-Alg for the monad T is as follows: each object is a semigroup, i.e. a set X with a structure map h which sends a list of elements to their composite.
So the category λ-Bialg for the distributive law λ : TD ⇒ DT is as follows. An object of λ-Bialg is a dynamical system (X, α) where the state space X is given by a semigroup (X, h) such that α(h(x_1 x_2 ... x_n)) = h(α(x_1) α(x_2) ... α(x_n)) for every finite sequence x_1 ... x_n of elements of X. An arrow f from X to Y is a function that is a morphism of both semigroups and dynamical systems.
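The bialgebra condition of this example is simply the requirement that the transition function be a homomorphism for the list-interpretation map. A minimal Haskell sketch, with names of our own choosing and the structure map assumed to be defined on the lists it is applied to:

-- A carrier x with a list-interpretation map h and a transition function.
data Bialg x = Bialg
  { interpret :: [x] -> x   -- algebra structure h
  , step      :: x -> x     -- coalgebra structure alpha (a dynamical system)
  }

-- The compatibility condition of the example, checked on one sequence.
compatibleOn :: Eq x => Bialg x -> [x] -> Bool
compatibleOn (Bialg h alpha) xs = alpha (h xs) == h (map alpha xs)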
7 Mnd*Cmd*(K)
This section is essentially about Kleisli constructions, considering the complete dual to the previous section. One can deduce from Corollary 6.6
7.1 Corollary Mnd*Cmd*(K) is isomorphic to Cmd*Mnd*(K).
Moreover, one can deduce an equivalent result to Proposition 6.3: this yields that the isomorphic 2-categories of Corollary 7.1 amount to giving the opposite distributive law to that given by Cmd and Mnd, and hence give an account of Kleisli constructions lifting along Kleisli constructions. The left 2-adjoint to Inc can again be characterised in three ways:
7.2 Corollary If K admits Kleisli constructions for monads and comonads, then given a distributive law λ of a comonad (D, ε, δ) over a monad (T, η, μ), the following are equivalent:
λ-Kl determined directly by the universal property of the inclusion Inc : K → Cmd*Mnd*(K) sending X to the identity distributive law on X;
the Kleisli object for the lifting of T to D-CoKl;
the Kleisli object for the lifting of D to T-Kl.
This is the construction proposed by Brookes and Geva [2] for giving intensional denotational semantics.
The fundamental step in the proof here lies in the use of the proof of Theorem 6.7, and that proof relies upon the following: some mild conditions on K hold of all our leading examples, allowing us to deduce that K admits Eilenberg-Moore and Kleisli constructions for monads and comonads; and each of the constructions Mnd, Mnd*, Cmd and Cmd* is 2-functorial on 2-Cat, so preserves adjunctions.
Spelling out the action of the 2-functor on 1-cells and 2-cells, a 1-cell (J, j_t, j_d) is sent to the 1-cell J-Kl : λ-Kl → λ'-Kl such that the following diagram commutes, where F and F' are the canonical 1-cells:
J-Kl ∘ F = F' ∘ J.
A 2-cell α is sent to the 2-cell α-Kl such that α-Kl F = F' α.
7.3 Remark Although all 1-cells and 2-cells in Mnd*Cmd*(K) give liftings to Kleisli constructions for monads and comonads, we cannot have a converse, as we cannot construct the data for a 1-cell in Mnd*Cmd*(K) from a lifting.
7.4 Proposition When K = Cat, the Kleisli construction for monads and comonads exists and is given as follows. Let (D, ε, δ) be a comonad and (T, η, μ) be a monad on C, and let λ : DT ⇒ TD be a distributive law of D over T. Then the objects of λ-Kl are those of C. An arrow from x to y in λ-Kl is given by an arrow f : Dx → Ty in C. For each object x, the identity is given by η_x ∘ ε_x. The composition of arrows f : x → y and g : y → z in λ-Kl, seen as arrows f : Dx → Ty and g : Dy → Tz in C, is given by the composite μ_z ∘ Tg ∘ λ_y ∘ Df ∘ δ_x in C.
Proof We need only write out the image under the left 2-adjoint to Inc : K → Cmd*Mnd*(Cat). This left adjoint is given by composing the two left 2-adjoints as in Theorem 6.7: the 2-functor Cmd* applied to the Kleisli construction of T, and the Kleisli construction of D.
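The composite in Proposition 7.4 can be written down generically. A minimal Haskell sketch, using the Comonad class from the earlier sketch and a distributive law supplied as an argument (names and the higher-rank argument type are ours):

{-# LANGUAGE RankNTypes #-}
import Control.Monad (join)

-- Composition in the Kleisli construction of Proposition 7.4: an arrow
-- from x to y is a function d x -> t y, and composition is
-- mu . T g . lambda . D f . delta.
biKleisli :: (Comonad d, Monad t)
          => (forall a. d (t a) -> t (d a))   -- distributive law (assumed)
          -> (d x -> t y)                     -- first arrow
          -> (d y -> t z)                     -- second arrow
          -> (d x -> t z)
biKleisli dist f g = join . fmap g . dist . fmap f . duplicate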
Now we give an example of a distributive law of a comonad over a monad, hence a 0-cell of Cmd*Mnd*(K), and the Kleisli construction for a monad and comonad.
7.5 Example Let (P, ∪, {−}) be the powerset monad on Set, i.e., the powerset functor P with union as multiplication and the singleton map as unit. Let (D, ε, δ) be the comonad on Set whose endofunctor D sends a set X to the product set A × X for some set A. Consider the natural transformation λ : DP ⇒ PD whose X component sends a pair (a, ξ) of an element a of A with ξ ∈ P(X) to the set {(a, x) | x ∈ ξ}. This satisfies the axioms for a distributive law of a comonad over a monad. Hence this gives an example of a 0-cell in Cmd*Mnd*(Cat). It also turns out to be a 0-cell in both CmdMnd*(Cat) and MndCmd*(Cat).
This distributive law is essentially the same as the one in Power and Turi's paper [13]. Their monad is the nonempty powerset monad on Set.
7.6 Example Applying Proposition 7.4, we spell out the Kleisli construction λ-Kl for the monad and comonad given in the above example. The objects of the category λ-Kl are those of Set. An arrow from X to Y in λ-Kl is given by a map A × X → P(Y). The identity arrow for each object X is given by the map A × X → P(X) that sends a pair (a, x) to the singleton {x}. The composition of arrows f : X → Y and g : Y → Z in λ-Kl, seen as maps f : A × X → P(Y) and g : A × Y → P(Z), is the map that sends (a, x) to the subset ∪{g(a, y) | y ∈ f(a, x)} of Z.
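This concrete Kleisli category is easy to render in code. A minimal Haskell sketch, restricting to finite sets via Data.Set from the containers package (the type synonym and function names are ours):

import qualified Data.Set as Set
import Data.Set (Set)

-- Arrows of Example 7.6: functions from (label, state) to a set of states.
type KArrow a x y = (a, x) -> Set y

kId :: KArrow a x x
kId (_, x) = Set.singleton x

kCompose :: Ord z => KArrow a y z -> KArrow a x y -> KArrow a x z
kCompose g f (a, x) = Set.unions [ g (a, y) | y <- Set.toList (f (a, x)) ]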
8 The other four possibilities
Applying the work of previous sections to the remaining four possible combinations of a monad with a comonad, we can summarise the various 2-categories in Table 1, which also includes the 2-categories already treated.
Each 2-category is defined as follows.
A 0-cell consists of a 0-cell C of K, a monad T on it, a comonad D on it, and a distributive law whose direction is listed in the second column of Table 1.

Table 1: Distributive laws
CmdMnd(K)      TD ⇒ DT
Cmd*Mnd(K)     TD ⇒ DT
Mnd*Cmd(K)     TD ⇒ DT
CmdMnd*(K)     DT ⇒ TD
MndCmd*(K)     DT ⇒ TD
Cmd*Mnd*(K)    DT ⇒ TD
A 1-cell from (C, T, D, λ) to (C', T', D', λ') consists of a 1-cell J : C → C' in K, together with a 2-cell j_t (in the direction appropriate to the 2-category in question), subject to monad laws, and a 2-cell j_d, subject to comonad laws, all subject to one coherence hexagon.
A 2-cell from (J, j_t, j_d) to (H, h_t, h_d) consists of a 2-cell from J to H in K subject to two conditions expressing coherence with respect to j_t and h_t and coherence with respect to j_d and h_d.
8.1 Remark As described in Table 1, the 2-categories CmdMnd(K), Cmd*Mnd(K) and Mnd*Cmd(K) have the same 0-cells, and CmdMnd*(K), MndCmd*(K) and Cmd*Mnd*(K) have the same 0-cells.
In considering the possible ways of combining a pair of categories each with a monad and a comonad, there appear three possible independent dualities: starting with the monad or the dual; taking (−)* on the monad or the dual; and taking (−)* on the comonad or the dual. This gives eight possibilities, but we can see from the above list that two of them do not arise. The two that do not arise are MndCmd(K) and the complete dual, dualising all three items, Mnd*Cmd*(K).
8.1 Cmd*Mnd(K)
Consider Cmd*Mnd(K). When K admits Kleisli constructions for comonads and Eilenberg-Moore constructions for monads, we can consider the Kleisli construction for a comonad lifting to the Eilenberg-Moore object for the monad.
In detail, for a 0-cell (C, T, D, λ) consisting of a monad T, a comonad D and a distributive law λ : TD ⇒ DT, we first lift the comonad D to the Eilenberg-Moore construction for the monad T by applying the 2-functor Cmd* to (−)-Alg, obtaining the comonad (T-Alg, (D, λ)-Alg, ε-Alg, δ-Alg) in Cmd*(K). Then we apply the 2-functor (−)-CoKl to obtain the Kleisli construction for the comonad. Observe that the composite 2-functor (−)-CoKl ∘ Cmd*((−)-Alg) cannot be characterised as a left or right 2-adjoint to Inc.
When K = Cat, this construction gives the following category for a given 0-cell of Cmd*Mnd(Cat). Objects are the Eilenberg-Moore algebras for the monad T. An arrow f from h : Tx → x to k : Ty → y is an arrow f : Dx → y in C such that f ∘ Dh ∘ λ_x = k ∘ Tf. For each T-algebra h : Tx → x, the identity arrow is given by the arrow ε_x : Dx → x.
8.2 Example Applying the above construction to the 0-cell given in Example 6.4, we have the following category. An object is a T-algebra for the monad T, hence a semigroup (X, h). An arrow f from a semigroup (X, h) to a semigroup (Y, k) is a morphism of semigroups from the semigroup of streams over X to (Y, k), where the multiplication on streams is defined componentwise from h.
8.2 Mnd*Cmd(K)
When K admits Eilenberg-Moore constructions for comonads and Kleisli constructions for monads, we have a composite 2-functor (−)-Kl ∘ Mnd*((−)-Coalg) : Mnd*Cmd(K) → K. This functor sends each 0-cell of Mnd*Cmd(K) to the Kleisli construction for the monad lifted to the Eilenberg-Moore construction for the comonad.
Spelling out the above construction when K = Cat, the construction sends each 0-cell (C, T, D, λ) to the following category. Objects are D-coalgebras. An arrow from k : x → Dx to l : y → Dy is an arrow f : x → Ty in C such that Df ∘ k = λ_y ∘ Tl ∘ f.
8.3 Example Recall Example 6.4, the example of a distributive law of a monad over a comonad, and consider the above construction. It yields the following category. Objects are D-coalgebras, hence deterministic dynamical systems. An arrow f from a dynamical system (X, α) to (Y, β) is a morphism of dynamical systems from (X, α) to (Y*, β*), where (Y*, β*) is the dynamical system whose state space is given by the set Y* of finite lists over Y and whose transition function is given by β*(y_1 y_2 ... y_n) = β(y_1) β(y_2) ... β(y_n).
8.3 CmdMnd*(K)
When K admits Eilenberg-Moore constructions for comonads and Kleisli constructions for monads, we have a 2-functor (−)-Coalg ∘ Cmd((−)-Kl) : CmdMnd*(K) → K, which sends each 0-cell to an Eilenberg-Moore construction for the comonad lifted to the Kleisli construction for the monad.
Spelling out the construction when K = Cat, it sends each 0-cell (C, T, D, λ) to the following category. An object is an arrow h : x → TDx in C such that the two coalgebra equations (for the counit and the comultiplication of the lifted comonad) hold in the Kleisli category T-Kl. An arrow from h : x → TDx to k : y → TDy is an arrow f : x → Ty in C such that the evident coalgebra-morphism square commutes in T-Kl.
These conditions on both objects and arrows are strict. One can consider application of this construction to the distributive law given in Example 7.5. Each object is then a map X → P(A × X), hence a labelled transition system, but the first equation on objects says that every state x ∈ X can only have transitions to itself with labels in A.
8.4 Remark In the above example, the 0-cell constructed by the Eilenberg-Moore construction for a comonad lifted to the Kleisli construction for a monad is restrictive. In [13], by forgetting the counit and comultiplication of a given comonad, Power and Turi considered the category of coalgebras for an endofunctor, rather than for a comonad, on the Kleisli category for the monad, where they used only distributivity of the nonempty powerset monad over the A-copower endofunctor. In order to provide a framework for their example, we need to investigate the 2-category of endo-1-cells in K.
8.4 MndCmd*(K)
When K admits Kleisli constructions for comonads and Eilenberg-Moore constructions for monads, we have a 2-functor (−)-Alg ∘ Mnd((−)-CoKl) : MndCmd*(K) → K sending each 0-cell to the Eilenberg-Moore construction for the monad lifted to the Kleisli construction for the comonad.
Spelling out the construction when K = Cat, it sends each 0-cell (C, T, D, λ) to the following category. An object is an arrow h : DTx → x in C such that the two algebra equations (for the unit and the multiplication of the lifted monad) hold in the co-Kleisli category D-CoKl. An arrow from h : DTx → x to k : DTy → y is an arrow f : Dx → y in C such that the evident algebra-morphism square commutes in D-CoKl.
We can also apply this construction to the distributive law in Example 7.5, but we cannot see any concrete meaning for the objects and arrows in that category.
--R
Categories in Computer Science
Axiomatic domain theory in categories of partial maps
An initial algebra approach to the specification
Adjoint lifting theorems for categories of algebras
Review of the elements of 2-categories
Foundational Methods in Computer Science
Liftings and Kleisli extensions
Notions of computation and monads
Lifting theorems for Kleisli categories.
A Coalgebraic Foundation for Linear Time Semantics
Initial algebra and final coalgebra semantics for concurrency
The formal theory of monads
Towards a mathematical operational semantics
--TR
Notions of computation and monads
Axiomatic domain theory in categories of partial maps
Initial Algebra and Final Coalgebra Semantics for Concurrency
Lifting Theorems for Kleisli Categories
Towards a Mathematical Operational Semantics
--CTR
Marco Kick , John Power , Alex Simpson, Coalgebraic semantics for timed processes, Information and Computation, v.204 n.4, p.588-609, April 2006
Marina Lenisa , John Power , Hiroshi Watanabe, Category theory for operational semantics, Theoretical Computer Science, v.327 n.1-2, p.135-154, 25 October 2004
Paul-André Melliès, Comparing hierarchies of types in models of linear logic, Information and Computation, v.189 n.2, p.202-234, March 15, 2004
568396 | The theory of interactive generalized semi-Markov processes. | In this paper we introduce the calculus of interactive generalized semi-Markov processes (IGSMPs), a stochastic process algebra which can express probabilistic timed delays with general distributions and synchronizable actions with zero duration, and where choices may be probabilistic, non-deterministic and prioritized. IGSMP is equipped with a structural operational semantics which generates semantic models in the form of generalized semi-Markov processes (GSMPs), i.e. probabilistic systems with generally distributed time, extended with action transitions representing interaction among system components. This is obtained by expressing the concurrent execution of delays through a variant of ST semantics which is based on dynamic names. The fact that names for delays are generated dynamically by the semantics makes it possible to define a notion of observational congruence for IGSMP (that abstracts from internal actions with zero duration) simply as a combination of standard observational congruence and probabilistic bisimulation. We also present a complete axiomatization for observational congruence over IGSMP. Finally, we show how to derive a GSMP from a given IGSMP specification in order to evaluate the system performance and we present a case study. | Introduction
Stochastically timed process algebras (see e.g. [20,12,1,6,10,16,28,5,11,17,22])
are formal specification languages which describe concurrent systems both
from the viewpoint of interaction and from the viewpoint of performance. They
extend the expressiveness of classical process algebras by introducing a notion
of time in the form of delays with probabilistic duration. The advantages of integrating
the description of interaction with the description of performance are
several. First of all we can specify and analyze systems for combined behavioral
and performance properties, e.g. via a notion of integrated equivalence, that
relates terms with the same behavioral and performance characteristics. In
doing this we can take advantage of the feature of compositionality offered by
process algebras, which describe a concurrent system in terms of the behavior
of its composing processes. Secondly, we can analyze behavioral and performance
properties of a system, separately on two projected semantic models
(a standard transition system labeled with actions and a stochastic process
with some kind of Markov property), which are automatically derived from the
initial integrated specification. This has the advantage that such models are
guaranteed to be consistent, since they are formally derived from the same
initial integrated specification.
A lot of work has been previously done in the field of Markovian process algebras
(see e.g. [20,6,17] and the references therein). They are stochastically
timed process algebras, where the probabilistic distribution of a delay is assumed
to be exponential. This causes the passage of time to be "memoryless"
and has the consequence that the system behavior can be described (via interleaving
operational semantics) by expressing the execution of a time delay
as an atomic transition without explicitly representing durations for delays.
Moreover the limitation to exponential distributions allows a straightforward
transformation of the semantic model of a system into a Continuous Time
Markov Chain (CTMC ). The limitation imposed over durations is very strong
from a modeling viewpoint because, e.g., not even deterministic (fixed) durations
can be expressed. The capability of expressing general probabilistic
distributions would give the possibility of producing much more realistic
specifications of systems. Even system activities which have an uncertain duration
could be represented probabilistically by more adequate distributions than
exponential ones (e.g. Gaussian distributions or experimentally determined
distributions).
Some previous efforts have been made in order to try to extend the expressiveness
of Markovian process algebras to probabilistic time with general distributions
[12,1,10,16,28,5,11,22]. The main point in doing this is to understand
how to define semantic models and semantic reasoning, e.g. the definition of
an adequate notion of bisimulation based equivalence. In probability theory
systems capable of executing parallel activities with generally distributed durations
are represented by Generalized Semi-Markov Processes (GSMPs) [24].
Previously [5] we have studied how to develop an adequate operational semantics
for a process algebra with general distributions which generates semantic
models in the form of GSMPs. In [5] we have shown that the problem of representing
time delays in semantic models is basically the same as describing
the behavior of a system via ST semantics [13,4,9,7]. According to ST seman-
tics, the evolution of a delay is represented in semantic models, similarly as
in GSMPs , as a combination of the two events of delay start and delay ter-
mination, where the termination of a delay is uniquely related to its start by,
e.g., identifying each delay with a unique name. As we will see this approach is
very natural for expressing time delays, especially when a duration is expressed
through general probability distributions. Moreover the use of ST semantics
leads to a notion of choice among delays which is based on preselection policy.
A choice among alternative delays is resolved by first performing a probabilistic
choice among the possible delays, and then executing the selected delay.
Therefore the choice of a delay is naturally represented in semantic models as
a probabilistic choice among transitions representing delay starts. As we also
show in [5] this method of solving choices, compared to the race policy used in
Markovian process algebras, is very adequate and simple when dealing with
generally distributed durations. On the other hand this adheres to the fact
that, while in CTMC probabilistic choices are implicitly expressed through a
\race" of exponential distributions, in GSMPs they are explicitly expressed
via a probabilistic selection mechanism. From the semantic model of a system,
derived in this way, it is easy to derive a performance model in the form of a
GSMP . A GSMP can then be analyzed through well established mathematical
or simulative techniques in order to obtain performance measures of the
system (see e.g. [14]).
In this paper we consider a variant of the algebra of [5] that allows us to define
a notion of observational congruence which abstracts from internal computations
which are not visible from an external observer (τ actions). This
is desirable because it may lead to a tremendous state space reduction of
semantic models. Technically this is obtained by restricting the possible durations
of synchronizable actions to zero durations only. More precisely, following
an approach which is quite usual in real-time process algebras (see e.g. [27])
and which has been imported in the stochastic process algebra community
in [19,17], we distinguish between actions f representing a delay whose duration
is given by the probability distribution f (itself) and standard actions of
CCS/CSP [25,21] (including internal τ actions) with zero duration. In analogy
to [17] we call the resulting algebra: calculus of Interactive Generalized Semi-Markov
Processes. The name reflects the separated orthogonal treatment of
delays and standard actions.
Following the ideas of [5], we define the operational semantics of a delay f
in IGSMP through ST semantics. Hence a delay is represented in semantic
models as a combination of the event of start of the delay, f^+, and the event of
termination of the delay, f^-. Moreover we assign names (consisting of indexes
i) to delays so that the execution of a delay is represented by the two events
f_i^+ and f_i^- and no confusion arises (in the connection between delay starts
are concurrently executed.
In this paper we employ the new technique for expressing ST semantics that
we have introduced in [7] which is based on dynamic names. As opposed to
the technique employed in [5], which is based on static names, the technique
we use here allows us to establish equivalence of systems via the standard
notion of bisimulation (so that existing results and tools can be exploited),
nevertheless preserving the possibility of obtaining finite ST semantic models
even in the case of recursive systems. By exploiting the fact that this technique
is also compositional, we define ST semantics through Structural Operational
Semantics (SOS) and we produce a complete axiomatization for ST bisimulation
over finite state processes.
As in [5], we resolve choices among several delays by means of preselection
policy. We associate with each delay a weight w: in a choice a delay is selected
with probability proportional to its weight. For instance <f, w>.0 + <g, w'>.0
represents a system which performs a delay of distribution f with probability
w/(w + w') and a delay of distribution g with probability w'/(w + w'). Weights
are expressed in semantic models by associating weights with transitions f^+
representing the start of a delay.
The semantics of standard actions a (including internal τ actions) in IGSMP
is just the standard interleaving semantics. This reflects the fact that these
actions have zero duration and can be considered as being executed atomically.
As in [19,17] the choice among standard actions is just non-deterministic. We
can express external choices (e.g. a + b) which are based on the behavior
of other system components, but also non-deterministic internal choices (e.g.
τ.a + τ.b) which cannot be resolved through interaction. This can be seen as
an expressive feature, since it allows for an underspecification of the system
performance, but has the drawback that it sometimes makes it impossible to
derive a purely probabilistic model of the system (see Sect. 4). We assume
the so-called maximal progress [27]: τ actions have priority over delays, thus
expressing that the system cannot wait if it has something internal to do, i.e.
τ.P + <f, w>.Q behaves as τ.P.
We present a formal procedure for transforming the semantic model obtained
from a suitable specification of a system (see Sect. 4) into a GSMP. Such a
procedure just turns each delay of the system into a different element of the
GSMP and system weighted choices into probabilistic choices of a GSMP.
Finally, as an example of modular IGSMP specification, we consider Queueing
Systems G/G/1/q, i.e. queuing systems with one server and a FIFO queue
with q-1 seats, where interarrival time and service time are generally
distributed. Moreover we show how to derive the performance model of such
queuing systems (a GSMP) by applying the formal procedure above.
Summing up, the contribution of this work is a weak semantics for a language
expressing generally distributed durations, probabilistic choices (preselection
policy), non-determinism and priority. The use of the technique of [7] for
expressing ST bisimulation allows us to define observational congruence for
IGSMP simply as a combination of the standard notion of observational congruence
and probabilistic bisimulation [23] and to produce a complete axiomatization
for this equivalence. Moreover we show how to automatically derive
GSMPs from IGSMP specifications and we present the example of Queueing
Systems G/G/1/q.
The paper is structured as follows. In Sect. 2 we present the calculus of Interactive
GSMPs and its operational semantics. In Sect. 3 we present the notion
of observational congruence and its complete axiomatization. In Sect. 4 we
present the formal procedure for deriving a GSMP from a complete system
specification. In Sect. 5 we present the example of Queueing Systems G/G/1/q.
Finally, in Sect. 6 we report some concluding remarks including comparison
with related work and directions for future research.
2 The Calculus of Interactive GSMPs
2.1 Syntax of Terms and Informal Semantics of Operators
The calculus of interactive GSMPs is an extension of a standard process algebra
with operators of CCS/CSP [25,21], which allows us to express priority,
probabilistic choices and probabilistic delays with arbitrary distributions. This
is done by including into the calculus, in addition to standard actions, a special
kind of actions representing delays. Delays are represented as <f; w> and
are characterized by a weight w and a duration distribution f . The weight
w determines the probability of choosing the delay in a choice among several
delays. The set of weights is R^+, ranged over by w, w', .... The duration distribution
f denotes the probability distribution function of the delay duration.
The set of duration probability distribution functions is PDF^+, i.e. the set
of probability distribution functions f such that f(0) = 0, ranged
over by f, g, h. The possibility of expressing priority derives from the inter-relation
of delays and standard actions. In particular we make the maximal
progress assumption: the system cannot wait if it has something internal to
do. Therefore we assume that, in a choice, τ actions have priority over delays,
e.g. τ.P + <f, w>.Q behaves as τ.P.
Let Act be the set of action types containing a distinguished type τ representing
an internal computation. Act is ranged over by a, b, .... Let Delay = PDF^+ × R^+
be the set of delays. (1) Let Var be a set of process variables ranged over by
X, Y, Z. Let ARFun = {φ : Act → Act | φ(τ) = τ and φ(Act − {τ}) ⊆ Act − {τ}}
be a set of action relabeling functions, ranged over by φ.
Definition 2.1 We define the language IGSMP as the set of terms generated
by the following syntax:
P ::= 0 | X | a.P | <f, w>.P | P + P | P/L | P[φ] | P ||_S P | recX.P
where a ∈ Act, <f, w> ∈ Delay, X ∈ Var and L, S ⊆ Act − {τ}. An IGSMP process is a closed term of IGSMP. We
denote by IGSMP_g the set of strongly guarded terms of IGSMP. (2)
(1) In the following we consider f to be a shorthand for <f, 1> when this is clear from the context.
(2) We consider the delay <f, w> as being a guard in the definition of strong guardedness.
\0" denotes a process that cannot move. The operators \:" and \+" are the
CCS prefix and choice. The choice among delays is carried out through the
preselection policy by giving each of them a probability proportional to its
weight. Note that alternative delays are not executed concurrently: first one
of them is chosen probabilistically and then the selected delay is executed.
Moreover τ actions have priority over delays in a choice. "/L" is the hiding
operator which turns the actions in L into τ, "[φ]" is the relabeling operator
which relabels visible actions according to φ. "||_S" is the CSP parallel operator,
where synchronization over actions in S is required. Finally "recX" denotes
recursion in the usual way.
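Before turning to the operational semantics, the grammar above can be rendered as a Haskell data type. This is only an illustrative sketch: the type and constructor names are ours, and duration distributions are kept abstract.

type Weight = Double
type Dist   = String               -- placeholder for a duration distribution f
data Act    = Tau | Vis String     -- internal action tau or a visible action
  deriving (Eq, Show)

data IGSMP
  = Nil                            -- 0
  | Var String                     -- process variable X
  | Pre Act IGSMP                  -- a.P (zero-duration action prefix)
  | Delay Dist Weight IGSMP        -- <f,w>.P (timed delay prefix)
  | Choice IGSMP IGSMP             -- P + P
  | Hide [String] IGSMP            -- P/L (turn actions in L into tau)
  | Relab (String -> String) IGSMP -- P[phi]
  | Par [String] IGSMP IGSMP       -- P ||_S P (CSP synchronisation on S)
  | Rec String IGSMP               -- recX.P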
2.2 Operational Semantics
As we will formally see in Sect. 4.3, a Generalized Semi-Markov Process
(GSMP) represents the behavior of a system by employing a set of elements,
which are similar to the clocks of a timed automaton [3]. Each element has
an associated duration distribution (element lifetime) and its execution in a
GSMP is characterized by the two events of start (when the element is born)
and termination (when the element dies).
Since IGSMPs extend GSMPs with the capability of interacting via standard
actions, the semantic model of an IGSMP process is a labeled transition sys-
tem, where a transition represents a basic event: the execution of a standard
action, the start of a delay or the termination of a delay.
Similarly as in GSMPs , the execution of a delay is represented in semantic
models by the two events of delay start and delay termination and enough
information is provided (delays are given unique element names) in order to
ensure that each event of delay termination is uniquely related to the corresponding
event of delay start. This corresponds to representing the execution
of delays with ST semantics [13,4,9,7]. As we also show in [5] this semantics
is just what we need for representing durational actions, if the duration is
expressed with general probability distributions. On the other hand, by employing
a realization of ST semantics based on the identification of delays
with names, the effect of applying ST semantics to an IGSMP process is to
generate the names for elements of the underlying GSMP .
While in [5] ST semantics is expressed by assigning static names to delays
according to their syntactical position in the system, here we employ a new
technique for generating ST semantic models, that we have introduced in [7],
which is based on dynamic names, i.e. names computed dynamically while
the system evolves. The advantage of this technique is that it allows us to
establish ST bisimulation of systems via the standard notion of observational
congruence [25] and to preserve the niteness of ST semantic models even
in the presence of recursion. On the contrary a technique for establishing
ST bisimulation of two processes based on static names must employ a more
complex denition of bisimulation which associates the names of the delays
of one process with the names of the corresponding delays used by the other
one [4,5].
In IGSMP the technique of [7] is employed for giving semantics to delays.
The \type" of a delay is simply its duration distribution and what we observe
of a system is its ability of performing delays of certain types f ∈ PDF^+.
The problem of preserving the relationship between starts and terminations
of delays arises, like in the ST semantics of standard process algebras, when
several delays of the same type f are being executed in parallel. When a delay
f terminates (event f^-) we need some information for establishing which event
of delay start (f^+) it refers to. The technique introduced in [7] is based on the
idea of dynamically assigning, during the evolution of the system, a new name
to each delay that starts execution, on the basis of the names assigned to the
delays already started. Names consist of indexes that distinguish delays with
the same duration distribution. In particular the event of a delay start <f, w>
is represented in semantic models by a transition labeled by <f_i^+, w>, where i
is the minimum index not already used by the other delays with distribution
f that have started but not yet terminated. This rule for computing indexes
guarantees that names are reused and that finite models can be obtained also
in the presence of recursion. The termination of the delay is simply represented
by a transition labeled by f_i^-, where the "identifier" i uniquely determines
which delay f is terminating. Since the method to compute the index for
a starting delay is fixed, it turns out that delays of processes that perform
the same execution traces of delays get the same names. As a consequence,
contrary to [4,5], ST bisimilarity can simply be checked by applying standard
bisimilarity to the semantic models of processes.
Moreover the technique introduced in [7] allows us to dynamically assign
names to delays, according to the rule formerly described, via SOS semantics
(hence in a compositional way) through the idea of levelwise renaming. In
order to obtain structural compositionality it is necessary to determine, e.g.
in the case of the parallel composition operator, the computations of P k Q
from the computations of P and Q. This is done by parameterizing in state
terms each parallel operator with a mapping M. For every delay f started
by P ||_{S,M} Q, M records the association between the name f_i, generated according
to the fixed rule above for identifying f at the level of P ||_{S,M} Q, and
the name f_j (which in general is different from f_i), generated according to
the same rule for identifying the same delay f inside P (or Q). In this way
when, afterwards, such a delay f terminates in P (or Q), the name f_j can be
re-mapped to the correct name f_i at the level of P ||_{S,M} Q, by exploiting the
information included in M. In M, the delay f of P ||_{S,M} Q which gets index
i is uniquely identified by expressing the unique name j it gets in P or in Q
and the "location" of the process that executes it: left if P, right if Q. Such an
association is represented inside M by the triple (f, i, loc_j), where the
indices i, j range over N^+ and the location loc ∈ Loc = {l, r}, with "l" standing for left and
"r" for right. In the following we use f : (i, loc_j) to stand for (f, i, loc_j).
The weight w associated with the start of a delay determines, as already
explained, the probability that the delay is chosen in spite of other delays
and, therefore, the probability that the delay starts.
As it is natural in the context of a stochastic process algebra, we assume delay
starts to be urgent. Therefore we have that, if a system state can perform a
delay start, it does not let time pass, so possible delays in execution cannot
terminate in that state. This causes another form of priority in our language:
the priority of delay starts over delay terminations, i.e. <f_i^+, w>.P + g_j^-.Q behaves as <f_i^+, w>.P.
Summing up, we have two forms of priority in our semantic models: the priority
of τ actions over delays (starts or terminations) and the priority of delay starts
over delay terminations. As opposed to [17], where a similar notion of priority
is captured in the definition of equivalence among systems, we prefer to express
priority by cutting transitions which cannot be performed directly in semantic
models (a solution also hinted at in [18]). This allows us to have smaller semantic
models and to define the notion of equivalence more simply, without having
to discard any transitions when establishing bisimulation.
In order to define the operational semantics for the processes of IGSMP,
we need a richer syntax to represent states. Let TAct^+ = {<f_i^+, w> | f ∈ PDF^+, i ∈ N^+, w ∈ R^+} be the set of delay starts, where <f_i^+, w> represents the beginning of the delay <f, w> identified by i. (3) Besides, let TAct^- = {f_i^- | f ∈ PDF^+, i ∈ N^+} be the set of delay terminations, where f_i^- represents the termination of the delay with duration distribution f identified by i. θ ranges over Act ∪ TAct^+ ∪ TAct^-. We denote an index association, whose elements are associations (i, loc_j), with iassoc, which ranges over the set IAssoc of partial bijections from N^+ to Loc × N^+. (4) Finally, a mapping M is a relation from PDF^+ to N^+ × (Loc × N^+) such that M_f ∈ IAssoc for each f ∈ PDF^+, i.e. M is a set including an independent index association for each different duration distribution.
The set IGSMP_s of state terms of IGSMP is generated by the following syntax:
P ::= 0 | X | a.P | <f, w>.P | f_i^-.P | P + P | P/L | P[φ] | P ||_{S,M} P | recX.P
We denote by IGSMP_sg the set of strongly guarded terms of IGSMP_s. We consider the operators "||_S" occurring in an IGSMP term P as being "||_{S,∅}" when P is regarded as a state.
The semantics of state terms produces a transition system labeled over Act ∪ TAct^+ ∪ TAct^-, ranged over by θ, θ', .... The transition relation → is defined by the standard operational rules of Fig. 1 and by the two operational rules in the first part of Fig. 2 and Fig. 3.
Fig. 1. Standard Rules
Fig. 2. Rules for Start Moves
Fig. 3. Rules for Termination Moves
(3) In the following we consider f_i^+ to be a shorthand for <f_i^+, 1> when this is clear from the context.
(4) Given a relation M from A to B, we denote by M_a the set {b | (a, b) ∈ M}.
The rule of Fig. 2 defines the transitions representing the start of a delay, by
taking into account the priority of τ actions over delays and by employing
the function SM. SM(P) evaluates the multiset of start moves leaving state P,
represented as pairs (<f_i^+, w>, P'), where the P' are the derivatives of
the moves. We use multisets so that we take into account several occurrences
of the same weight w. SM(P) is defined by structural induction as the least
element of Mu_fin(TAct^+ × IGSMP_sg) satisfying the rules in the second
part of Fig. 2. The meaning of the rule for P ||_{S,M} Q is the following. When P
performs f_i^+, then a new index n(M_f) is determined for identifying the delay
f at the level of "||_{S,M}" and the new association f : (n(M_f), l_i) is added to
M. The function n : IAssoc → N^+ computes the new index to be used for
identifying the delay f that is starting execution by choosing the minimum
index not used by the other delays f already in execution:
n(iassoc) = min{k | k ∉ dom(iassoc)}. A symmetric mechanism takes place for a move f_i^+ of Q.
The function melt : Mu_fin(TAct^+ × IGSMP_sg) → Mu_fin(TAct^+ × IGSMP_sg),
defined in the third part of Fig. 2, merges the start moves with the same label
and the same derivative state by summing their weights.
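The index rule n(M_f) is just "the least positive integer not currently in use". A minimal Haskell sketch of that rule, with a name of our own choosing:

-- New index for a starting delay: the least positive integer not in the
-- domain of the current index association for that distribution.
newIndex :: [Int] -> Int
newIndex used = head [ k | k <- [1 ..], k `notElem` used ]

For instance, newIndex [1, 2, 4] evaluates to 3, so that freed indexes are reused as described above.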
The rule of Fig. 3 defines the transitions representing the termination of a
delay, by taking into account the priority of τ actions over delays and the
priority of delay starts over delay terminations, and by employing the auxiliary
transition relation, labeled over TAct^-, defined in the second part of Fig. 3.
The meaning of the operational rules for "P ||_{S,M} Q" is the following. When P
performs f_i^-, the delay f with index j associated to l_i in M terminates at the
level of the parallel operator. A symmetric mechanism takes place for a move
f_i^- of Q.
Note that even if the two rules in the first part of Fig. 2 and Fig. 3 include
negative premises, the operational semantics is nevertheless correct [15].
This is because negative premises are not in the rules which induce on the term
structure, but only in "top-level" rules. Moreover the definition of delay start
transitions of Fig. 2 is based on the definition of standard action transitions
of Fig. 1 only, and the definition of delay termination transitions of Fig. 3 is
based on the definition of standard action transitions of Fig. 1 and of delay
start transitions of Fig. 2.
We are now in a position to define the integrated (representing both interaction
and performance) semantic model of a process.
(5) We denote by Mu_fin(S) the set of finite multisets over S, we use {| and |} as
multiset parentheses, and we use ⊎ to denote multiset union.
Fig. 4. Example of recursive system
Definition 2.2 The integrated semantic model I[[P]] of P ∈ IGSMP_g is the
labeled transition system (LTS) I[[P]] = (S_P, L, →_P), where:
S_P is the least subset of IGSMP_sg such that P ∈ S_P and, whenever P' ∈ S_P and P' --θ--> P'', also P'' ∈ S_P;
L = Act ∪ TAct^+ ∪ TAct^- is the set of labels;
→_P is the restriction of → to S_P × L × S_P.
Example 2.3 In Fig. 4 we depict the integrated semantic model of recX.f.X ||_∅ recX.f.X.
In the following theorem, where we consider "P/L", "P[φ]", and "P ||_S P" to
be static operators [25], we show that finite semantic models are obtained for
a wide class of recursive systems.
Theorem 2.4 Let P be an IGSMP_g process such that for each subterm recX.Q
of P, X does not occur free in Q in the context of a static operator. Then P
is a finite state process.
Proof The proof of this theorem derives from the fact that the number of
states of the semantics of P which differ only for the contents of the mappings
parameterizing parallel operators is always finite, because the maximum
index a delay may assume is bounded by the maximum number of processes
that may run in parallel in a state.
Note that the class of processes considered in this theorem strictly includes
the class of nets of automata, i.e. terms where no static operator occurs in the
scope of any recursion.
3 Observational Congruence for IGSMP
The notion of observational congruence for IGSMP is defined, similarly as
in [19,17], as a combination of the classical notion of observational congruence
[25] and the notion of probabilistic bisimulation of [23].
In our context we express cumulative probabilities by aggregating weights.
Definition 3.1 The function TW : IGSMP_sg × PDF^+ × P(IGSMP_sg) → R^+,
which computes the aggregated weight with which a state P ∈ IGSMP_sg reaches a
set of states C ∈ P(IGSMP_sg) by starting a delay with duration distribution f,
is defined as:
TW(P, f, C) = Σ {| w | ∃ P' ∈ C, i ∈ N^+ . P --<f_i^+, w>--> P' |}
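The aggregation above is just a filtered sum over the weighted start transitions of a state. A minimal Haskell sketch, with our own representation of start transitions (the record and function names are assumptions, not notation from the paper):

data StartMove st = StartMove { dist :: String, weight :: Double, target :: st }

-- Sum the weights of the start transitions of p with distribution f whose
-- derivative lies inside the class c.
aggrWeight :: (st -> [StartMove st])   -- start transitions of a state
           -> st -> String -> (st -> Bool) -> Double
aggrWeight moves p f c =
  sum [ weight m | m <- moves p, dist m == f, c (target m) ]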
We are now in a position to define the notion of strong bisimilarity for terms
of IGSMP_sg. Let NPAct = Act ∪ TAct^-, the set of non-probabilistic actions,
be ranged over by γ.
Definition 3.2 An equivalence relation β over closed terms of IGSMP_sg is a
strong bisimulation iff P β Q implies:
for every γ ∈ NPAct, P --γ--> P' implies Q --γ--> Q' for some Q' with P' β Q';
for every f ∈ PDF^+ and equivalence class C of β, TW(P, f, C) = TW(Q, f, C).
Two closed terms P, Q are strongly bisimilar, written P ∼ Q, iff (P, Q)
is included in some strong bisimulation.
We consider ∼ as being defined also on the open terms of IGSMP_sg by extending
strong bisimilarity with the standard approach of [25].
The definition of weak bisimilarity is an adaptation of that presented in [19,17]
to our context.
Let ==γ==> denote a sequence of transitions including a single γ transition and
any number of τ transitions. Moreover we define ==γ̂==> to be ==γ==> if γ ≠ τ, and a
possibly empty sequence of τ transitions if γ = τ. (6) Moreover we let C_τ denote the set of processes that may silently
evolve into an element of C, i.e. C_τ = {P | ∃ Q ∈ C . P ==τ̂==> Q}.
(6) The summation of an empty multiset is assumed to yield 0. Since the method for
computing the new index of a delay f that starts in a state P is fixed, we have that
the several f^+ transitions leaving P all carry the same index i.
for every 2 NPAct ,
equivalence class C of ,
Two closed terms are weakly bisimilar, written P Q, i
is included in some weak bisimulation.
Dierently from [19,17] we do not need to express conditions about the stability
of bisimilar processes because we consider only strongly guarded processes.
As a consequence there is no process of IGSMP sg that is forced in a loop
and we do not have to recognize this situation. A justication for the fact that
we do not consider processes with weakly guarded recursion is the following
one. A process that is forced in a loop can be seen as a Zeno process, i.e. a
processes which performs innite computations without going beyond a certain
point in time. Discarding weakly guarded processes allows us to avoid the
technical complications deriving from the treatment of Zenoness (see [8]) and
on the other hand seems not to be so restrictive.
Similarly as in [19,17] it is possible to reformulate weak bisimilarity in the
following way, which is simpler but less intuitive.
Lemma 3.4 An equivalence relation over closed terms of IGSMP sg is a
for every 2 NPAct ,
equivalence class
C of ,
The proof that this reformulation is correct derives from that given in [17],
simply by substituting rates of exponential distributions with weights.
The denition of observational congruence, where again we discard the requirement
about stability, is the following one.
Denition 3.5 Two closed terms are observational congru-
f
< , >Fig. 5. Minimal semantic model
for every 2 NPAct ,
for every 2 NPAct ,
for every f 2 PDF + and equivalence class C of ,
Again we consider ' as being dened also on the open terms of IGSMP sg by
extending observational congruence with the standard approach of [25].
Theorem 3.6 ' is a congruence w.r.t. all the operators of IGSMP , including
recursion.
Proof The proof of this theorem follows the lines of the similar proof in [25]
that is adapted to our setting. The only relevant case is that of parallel composition
operator. It su-ces to show that f(P 1 k S;M Q;
is a (weak) bisimulation.
Example 3.7 In Fig. 5 we depict the minimum semantic model for the recursive
system of Fig. 4, which is obtained by merging bisimilar states. The
weight 2 of the initial transition derives from the aggregation of the weights of
the two initial transitions in the model of Fig. 4. However since in the initial
state there is no alternative to such a transition, its weight is not relevant for
the actual behavior (in isolation) of the system.
3.1 Axiomatization
In this section we present an axiom system which is complete for ≃ on finite
state IGSMP_sg terms.
The axiom system A_IGSMP for ≃ on IGSMP_sg terms is formed by the axioms
presented in Fig. 6. In this figure "⌊⌊" and "|" denote, respectively, the left
merge and synchronization merge operators (see e.g. [2]). Moreover θ ranges
over Act ∪ TAct^+ ∪ TAct^-. We recall from Sect. 2.2 that f_i^+ is a shorthand for <f_i^+, 1>.
The axioms (Pri1) and (Pri2) express the two kinds of priorities of IGSMP:
respectively, priority of τ actions over (semi-)delays and priority of delay starts
over delay terminations. The axiom (Par) is the standard one except that
when the position of processes P and Q is exchanged we must invert left and
right inside M. The inverse of a mapping M is defined by exchanging the
locations l and r in its triples. Axioms (LM4) and (LM5) just
reflect the operational rules of the parallel operator for a delay move of the
left-hand process. The axioms (Rec1)-(Rec3) deal with strongly guarded recursion
in the standard way [26].
If we consider the obvious operational rules for "⌊⌊_{S,M}" and "|_{S,M}" that derive
from those we presented for the parallel operator (7), then the axioms of A_IGSMP
are sound.
A sequential state is defined to be one which includes "0", "X" and the operators
".", "+", "recX" only; this leads to the following theorem.
Theorem 3.8 If an IGSMP_sg process P is finite state, then there exists a sequential process P' such that A_IGSMP ⊢ P = P'.
Proof Let s_1, ..., s_n be the states of the operational semantics of P, with s_n = P.
It can easily be seen that for each s_i there exist an index set J_i and labels θ_{i,j} and indexes k_{i,j} with
A_IGSMP ⊢ s_i = Σ_{j ∈ J_i} θ_{i,j}.s_{k_{i,j}}, where an empty sum stands for 0. Then for each
i, from 1 to n, we do the following. If i is such that some j ∈ J_i has k_{i,j} = i, we derive,
by applying (Rec3), that s_i is provably equal to a recursive sequential term.
Then we replace each subterm s_i occurring in the equations for s_{i+1}, ..., s_n with
its equivalent term. When, in the equation for s_n = P, we have replaced s_{n-1},
we are done.
For sequential states the axioms of A_IGSMP involved are just the standard
axioms of [26], and the axioms for priority and probabilistic choice. From
Theorem 3.8 and by resorting to arguments similar to those presented in [26]
and [17] we derive the completeness of A_IGSMP.
Theorem 3.9 A_IGSMP is complete for ≃ over finite state IGSMP_sg processes.
Proof The proof of this theorem follows the lines of the proof of [26]. In
particular weights are treated as rates of exponential distributions in the proof
of [17].
(7) The definition of the operational rule for "|_{S,M}" must allow for τ actions to be
skipped [2], as reflected by axiom (SM4).
Fig. 6. Axiomatization for IGSMP
Example 3.10 Let us consider the system recX.f.X ||_∅ recX.f.X of the previous
example. In the following we show how this process can be turned
into a sequential process by applying the procedure presented in the proof
of Theorem 3.8. In the following we let f_i^+ stand for <f_i^+, 1>, and we write
P = Q for A_IGSMP ⊢ P = Q.
Initially we note that
by applying (Rec2) and (TAct). We start the procedure of the proof of Theorem
3.8 with the initial state P k ;;; P . We have:
by applying (Par), (LM4) and (SM3). From this equation we derive:
by applying (P rob). Then, we have:
by applying (Par), (LM4), (LM5), (SM3) and (P ri2). Then, we have:
by applying (Par), (LM5) and (SM3). From this equation we derive:
by applying (Par), (A1) and (SM1) to P k ;;ff:(1;r 1 )g P 0 . Finally we have:
by applying (Par), (LM4), (LM5), (SM3) and (P ri2). Now we perform the
second part of the procedure where we generate recursive processes and we
substitute states with equivalent terms. We start with
the state does not occur in its equivalent term we do not have to generate any
recursion. Substituting the state with its equivalent term in the other equations
generates the new equation:
Then we consider the state P 0 k ;;ff:(1;l 1 );f:(2;r 1 )g P 0 . We change its equation by
generating a recursion as follows:
Substituting the state with its equivalent term in the remaining equations
generates the new equation:
Now we consider the state P 0 k ;;ff:(1;l 1 )g P . We change its equation by generating
a recursion as follows:
Substituting the state with its equivalent term in the remaining equations
generates the new equation:
Therefore we have turned our initial system recX.f.X ∥_∅ recX.f.X into a
recursive sequential process of the form <f, ...>. It can easily be checked
that the operational semantics of this process generates the labelled transition
system of Fig. 5.
4 Deriving the Performance Model
In this section we show how to formally derive a GSMP from a system specification.
In particular this transformation is possible only if the specification
of the system is complete both from the interaction and from the performance
point of view.
A specification is complete from the interaction viewpoint if the system specified
is not a part of a larger system which may influence its behavior, hence
when every standard action appearing in its semantic model is an internal
action. Note that the states of the semantic model of such a system can be
classified as follows (a small illustrative sketch follows the list):
- choice states: states whose outgoing transitions are all (weighted) delay starts;
- timed states: states whose outgoing transitions are all delay terminations;
- silent states: states whose outgoing transitions are all actions.
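The classification can be made concrete with a small sketch. The encoding below (transition labels tagged as delay start, delay termination or action) and all names are our own illustrative assumptions, not notation from the paper.

```python
# Illustrative sketch: classify LTS states as choice/timed/silent.
# Assumed encoding: transitions are (source, (kind, label), target) with
# kind in {"start", "term", "action"}.

def classify_states(states, transitions):
    kinds_by_state = {s: set() for s in states}
    for src, (kind, _label), _tgt in transitions:
        kinds_by_state[src].add(kind)

    classification = {}
    for s, kinds in kinds_by_state.items():
        if kinds == {"start"}:
            classification[s] = "choice"      # all outgoing are (weighted) delay starts
        elif kinds == {"term"}:
            classification[s] = "timed"       # all outgoing are delay terminations
        elif kinds == {"action"}:
            classification[s] = "silent"      # all outgoing are actions
        else:
            classification[s] = "mixed/terminal"  # not covered by the three classes
    return classification
```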
A specification is complete from the performance viewpoint if all the choices in
which the specified system may engage are quantified probabilistically. This
means that the semantic model must not include silent states with a non-deterministic
choice among different future behaviors. In other words a silent
state either must have only one outgoing transition, or all its outgoing
transitions must lead to equivalent behaviors. This notion can be formally
defined as follows: a semantic model is complete w.r.t. performance if it can
be reduced, by aggregating weakly bisimilar states (see Sect. 3), to a model
without silent states.
Provided that a system P ∈ IGSMP_g satisfies these two conditions, we now
present a formal procedure for deriving the GSMP representing the performance
behavior of P from its integrated semantic model I[[P]].
4.1 Elimination of Actions
The first phase is to minimize the state space S_P by aggregating states that
are equivalent according to the notion of weak bisimulation defined in Sect. 3.
Since we supposed that the system P satisfies the two conditions above, a side
effect of this minimization is that all actions disappear from I[[P]].
We denote the resulting LTS by (S_{P,m}, L_{P,m}, →_{P,m}), where "m" stands
for "minimal". Its labels are all delay starts or delay terminations, hence S_{P,m}
includes only choice states and timed states.
4.2 Solution of Choice Trees
The second phase is the transformation of every choice tree present in the
semantic model into a single probabilistic choice. A choice tree is formed by
the possible choice paths that go from a given choice state (the root of the
tree) to a timed state (a leaf of the tree). Note that such trees cannot include
loops composed of one or more transitions, because after each delay start the
number of delays in execution strictly increases. To be precise, such trees are
directed acyclic graphs (DAGs) with a root, since a node may have multiple
incoming arcs. The choice trees are flattened into a single choice that goes
directly from the root to the leaves of the tree, with the following inductive
procedure.
Initially (at step 0) we transform our semantic model by turning all weights
into the corresponding probability values. We denote the resulting LTS by
(S_{P,p,0}, L_{P,p}, →_{P,p,0}), where "p" stands for "probabilistic", and whose label
set is included in [0,1] ∪ TAct, positive real numbers representing probabilities:
every weighted delay start transition labeled <..., w> is replaced by a probabilistic
transition whose probability is obtained by dividing w by the sum of the weights
of the delay start transitions leaving the same state.
Hence now we have a semantic model with delay termination transitions and
probabilistic transitions labeled by a probability prob. Note that delay start
events are removed from transition labels. The occurrence of such events becomes
implicit in the representation of system behavior, similarly to what happens in GSMPs.
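For concreteness, step 0 can be sketched as follows; the transition encoding and function name are our own illustrative assumptions, not the paper's notation.

```python
# Illustrative sketch of step 0: turn the weights of delay start transitions into
# probabilities by normalizing per source state. Assumed encoding: a weighted
# start transition is (src, ("start", delay_id, weight), tgt).

def weights_to_probabilities(transitions):
    totals = {}
    for src, label, _tgt in transitions:
        if label[0] == "start":
            totals[src] = totals.get(src, 0.0) + label[2]

    result = []
    for src, label, tgt in transitions:
        if label[0] == "start":
            _, _delay_id, weight = label
            # the delay start event disappears; only the probability is kept
            result.append((src, ("prob", weight / totals[src]), tgt))
        else:
            result.append((src, label, tgt))  # delay terminations are unchanged
    return result
```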
At the k-th step, beginning from the LTS (S_{P,p,k-1}, L_{P,p}, →_{P,p,k-1}), we
eliminate a node in a choice tree, thus reducing its size. This is done by considering
a choice state s ∈ S_{P,p,k-1} with incoming probabilistic transitions. Such
transitions are removed and replaced by a new set of probabilistic transitions
which are determined in the following way. Each incoming probabilistic transition
is divided into multiple transitions, one for each probabilistic transition
that leaves the state s. Its probability is distributed among the new transitions
in parts that are proportional to the probabilities of the transitions that
leave s. Moreover, if s has no incoming delay termination transitions, then s
is eliminated together with its outgoing probabilistic transitions. The resulting
LTS is denoted by (S_{P,p,k}, L_{P,p}, →_{P,p,k}).
The algorithm terminates when we reach a k for which there is no longer a
choice state with incoming probabilistic transitions. Since at every step we
eliminate a node in a choice tree of the initial semantic model, thus reducing
its size, we are guaranteed that this will eventually happen. Let t be such a k.
The LTS that results from this second phase is denoted by (S_{P,p}, L_{P,p}, →_{P,p}).
If the nodes of a choice tree are eliminated by following a breadth-first visit
from the root, it can be easily seen that the time complexity of the algorithm
above is just linear in the number of probabilistic transitions forming the tree
(the DAG). This is because, by following this elimination ordering, each node to
be eliminated has one incoming probabilistic transition only.
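As an illustration of this phase, the sketch below (our own encoding and function names, under the same assumptions as the previous sketches) eliminates choice nodes and redistributes probabilities; a breadth-first processing order from the root keeps the work linear in the number of probabilistic transitions, as argued above.

```python
# Illustrative sketch of the choice-tree flattening (Sect. 4.2).
# Assumed encoding: probabilistic transitions (src, ("prob", p), tgt),
# termination transitions (src, ("term", delay_id), tgt);
# is_choice_state is a caller-supplied predicate.

def flatten_choice_trees(transitions, is_choice_state):
    ts = list(transitions)
    while True:
        targets = [t for _, lab, t in ts if lab[0] == "prob"]
        candidates = [s for s in targets if is_choice_state(s)]
        if not candidates:
            return ts  # no choice state has incoming probabilistic transitions
        s = candidates[0]
        incoming = [t for t in ts if t[2] == s and t[1][0] == "prob"]
        outgoing = [t for t in ts if t[0] == s and t[1][0] == "prob"]
        others = [t for t in ts if t not in incoming]
        # split every incoming transition along the outgoing ones,
        # multiplying the probabilities
        new = [(src, ("prob", p * q), tgt)
               for (src, (_, p), _s) in incoming
               for (_s2, (_, q), tgt) in outgoing]
        if not any(t[2] == s and t[1][0] == "term" for t in ts):
            # s has no incoming termination transitions: eliminate it together
            # with its outgoing probabilistic transitions
            others = [t for t in others if not (t[0] == s and t[1][0] == "prob")]
        ts = others + new
```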
4.3 Derivation of the GSMP
Now we show how to derive a generalized semi-Markov process from the
semantic model obtained at the end of the previous phase.
A generalized semi-Markov process (GSMP) [24] is a stochastic process defined
on a set of states {s | s ∈ S} as follows. In each state s there is a set of
active elements ElSt(s), taken from a set El, that decay at the rate C(e, s),
e ∈ ElSt(s). The set El is partitioned into two sets El_0 and El_1, with
El = El_0 ∪ El_1. If e ∈ El_0 the element e has an exponentially distributed lifetime;
if instead e ∈ El_1 it has a generally distributed lifetime. Whenever in a state
s an advancing element e dies, the process moves to the state s' ∈ S with
probability Pr(s, e, s').
A GSMP can be represented by a tuple whose components are the following:
- S is the set of the states of the GSMP.
- El is the set of the elements of the GSMP.
- a function that associates with each element the distribution of its lifetime.
- ElSt : S → 2^El is a function that associates with each state the set of its
active elements.
- C : El × S → ℝ⁺ is a partial function that associates a decay rate with
each active element of each state. C is partial because for each s ∈ S it is
defined only for the (e, s) such that e ∈ ElSt(s).
- → is a relation that represents the transitions between
the states of the GSMP. They are labeled by the element e ∈ El that
terminates. We include only transitions for which Pr(s, e, s') > 0.
- Pr is a function that associates a (non-zero) probability with each transition
of the GSMP (relation →). The meaning of Pr is: if in s an element e
terminates, with probability Pr(s, e, s') the process moves into state s'. For
what we said in the previous item, Pr is never zero over its domain, whilst
it is considered as zero outside.
- P_init : S → [0,1] is a function that associates with each state the probability
that it is the initial state.
Note that, given a tuple defining a GSMP, the sets El_0 and El_1 are derived in
the following way (where Exp(λ) denotes the exponential distribution with rate λ):
El_0 is the set of the elements whose lifetime distribution is Exp(λ) for some λ,
and El_1 is the set of the remaining elements.
With respect to the general definition of a GSMP given above, we have that in
an IGSMP all elements (delays) decay at rate 1, i.e. they all advance uniformly
with time at the same speed.
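The tuple just described can be mirrored by a small data structure; the sketch below is only illustrative (field names are ours), and shows how El_0 and El_1 would be derived from the lifetime distributions.

```python
# Illustrative representation of a GSMP tuple (names are ours, not the paper's).
from dataclasses import dataclass
from typing import Callable, Dict, Set, Tuple

@dataclass
class GSMP:
    states: Set[str]
    elements: Set[str]
    lifetime: Dict[str, object]              # element -> lifetime distribution
    active: Dict[str, Set[str]]              # ElSt: state -> active elements
    rate: Dict[Tuple[str, str], float]       # C: (element, state) -> decay rate (partial)
    trans: Set[Tuple[str, str, str]]         # (s, e, s') with Pr > 0
    prob: Dict[Tuple[str, str, str], float]  # Pr over trans
    p_init: Dict[str, float]                 # initial-state probabilities

    def element_classes(self, is_exponential: Callable[[object], bool]):
        """Split El into exponentially (El_0) and generally (El_1) distributed elements."""
        el0 = {e for e in self.elements if is_exponential(self.lifetime[e])}
        return el0, self.elements - el0
```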
The performance semantic model P[[P]] is derived from the LTS (S_{P,p}, L_{P,p}, →_{P,p})
obtained at the end of the previous phase. The elements of the GSMP are the "identified" delays f_i labeling the transitions
of →_{P,p}. The states of the GSMP are the timed states of S_{P,p}.
A transition leaving a state of the GSMP is derived beginning from a delay
termination transition leaving the corresponding timed state of S_{P,p} and, in
the case this transition leads to a choice state, from a probabilistic transition
leaving this state. The timed state of S_{P,p} reached in this way is the state of
the GSMP the derived transition leads to. Note that we are certain to reach
a timed state because all choice trees have been solved and, consequently (see
Sect. 4.2), choice states cannot have incoming probabilistic transitions. Each
transition of the GSMP is labeled by the element f_i terminating in the corresponding
termination transition. The probability associated with a transition
of the GSMP (function Pr) is the probability of the corresponding probabilistic
transition (or probability 1 if the transition is derived from a delay
termination transition leading directly to a timed state).
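The derivation of the GSMP transition relation and of Pr from the flattened LTS can be sketched as follows, again under our own illustrative encoding (the predicate is_timed_state is assumed to be supplied by the caller).

```python
# Illustrative sketch: derive GSMP transitions from the flattened LTS (Sect. 4.3).
# Assumed encoding: ("term", f) for delay terminations, ("prob", p) for
# probabilistic transitions.

def derive_gsmp_transitions(transitions, is_timed_state):
    gsmp_trans = {}  # (s, f, s') -> probability
    for src, label, nxt in transitions:
        if label[0] != "term" or not is_timed_state(src):
            continue
        f = label[1]
        if is_timed_state(nxt):
            # termination leads directly to a timed state: probability 1
            gsmp_trans[(src, f, nxt)] = 1.0
        else:
            # termination leads to a choice state: one GSMP transition per
            # outgoing probabilistic transition of that choice state
            for s2, lab2, tgt in transitions:
                if s2 == nxt and lab2[0] == "prob":
                    key = (src, f, tgt)
                    gsmp_trans[key] = gsmp_trans.get(key, 0.0) + lab2[1]
    return gsmp_trans
```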
The performance semantics of an IGSMP process P is defined as follows.
Definition 4.1 The performance semantics P[[P]] of a process P ∈ IGSMP_g
is the GSMP whose states are the timed states of S_{P,p}, whose elements are the
identified delays f_i occurring in →_{P,p}, and whose transition relation is the union
of the two disjoint relations →_1 (obtained from delay termination transitions leading
directly to a timed state) and →_2 (obtained from delay termination transitions followed
by a probabilistic transition). 8 For a transition in →_2, Pr yields the unique prob
labeling the corresponding probabilistic transition, while for a transition in →_1 it
yields 1; P_init assigns probability 1 to the initial state and 0 to all the other states.
8 The transition relations →_1 and →_2 are disjoint. This is because the only
transition in →_{P,p} that leaves state s and is labeled by a given element f_i leads
either to a choice state or to a timed state.
Example 4.2 In Fig. 7 we show the GSMP derived, by applying the formal
translation we have presented, from the integrated semantic model of Fig. 4. In
particular the GSMP is obtained, as described above, from the minimal model
of Fig. 5. Since such a model does not include standard action transitions, the
system considered is complete both from the interaction and the performance
viewpoints. In the GSMP of Fig. 7 the states are labeled by the active elements
and the transitions by the terminating elements. The probability Pr
associated with each transition of the GSMP shown in the picture is 1.
Moreover P_init is 1 for the unique state of the GSMP (it is pointed to by the
arrow to outline the fact that it is the initial state). The two elements
represent the delays f_1 and f_2 respectively, and the probability distribution
function of both is given by the function f.
Fig. 7. Derived GSMP
5 Example: Queueing Systems G/G/1/q
In this section we present an example of specification with IGSMP. In particular
we concentrate on Queueing Systems (QSs) G/G/1/q, i.e. QSs which
have one server and a FIFO queue with q-1 seats and serve a population of
unboundedly many customers. In particular the QS has an interarrival time
which is generally distributed with distribution f and a service time which is
generally distributed with distribution g.
Such a system can be modeled with IGSMP as follows. Let a be the action
representing the fact that a new customer arrives at the queue of the service
center, and d be the action representing the fact that a customer is delivered by the queue
to the server. The process algebra specification is the following one: 9
9 In the specification we use process constants, instead of the operator "recX", to
denote recursion. The reason is that the use of constants is suitable for writing
specifications, while the use of the operator "recX" is preferable when dealing with
axiomatizations. The two constructs are shown to be completely equivalent in [25].
QS_{G/G/1/q} = Arrivals ∥_{{a}} (Queue_0 ∥_{{d}} Server)
Arrivals = f.a.Arrivals
Queue_0 = a.Queue_1
Queue_h = a.Queue_{h+1} + d.Queue_{h-1}     (0 < h < q-1)
Queue_{q-1} = d.Queue_{q-2}
Server = d.g.Server
We have specified the whole system as the composition of the arrival process,
the queue and the server which communicate via action types a and d. Then
we have separately modeled the arrival process, the queue, and the server. As a
consequence if we want to modify the description by changing the interarrival
time distribution f or the service time distribution g, only component Arrivals
or Server needs to be modified while component Queue is not affected. Note
that the role of actions a and d is defining interactions among the different
system components. Such actions have zero duration and they are neglected
from the performance viewpoint.
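Although it plays no role in the paper's formal development, a simple event-driven simulation may help to visualize the behavior being specified; the sketch below assumes that an arriving customer who finds the queue full is lost, and that sample_f and sample_g draw interarrival and service times from the distributions f and g.

```python
# Illustrative simulation of a G/G/1/q queue (q-1 waiting seats plus one
# customer in service). The loss behaviour on a full queue is our assumption.
import random

def simulate_ggq(sample_f, sample_g, q, horizon):
    t, next_arrival, next_departure = 0.0, sample_f(), float("inf")
    in_system, served, lost = 0, 0, 0
    while t < horizon:
        if next_arrival <= next_departure:
            t = next_arrival
            if in_system < q:
                in_system += 1
                if in_system == 1:          # server was idle: start a service
                    next_departure = t + sample_g()
            else:
                lost += 1                   # queue full: customer lost
            next_arrival = t + sample_f()
        else:
            t = next_departure
            in_system -= 1
            served += 1
            next_departure = t + sample_g() if in_system > 0 else float("inf")
    return served, lost

# Example usage with uniformly distributed interarrival and service times.
print(simulate_ggq(lambda: random.uniform(1, 3), lambda: random.uniform(0.5, 2.5),
                   q=5, horizon=10_000))
```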
In Fig. 8 we show I[[QS_{G/G/1/q}]]. In this picture A stands for Arrivals, A' stands
for the derivative of Arrivals in which the delay f has started, and A'' stands for a.Arrivals. Similarly, S stands for Server, S'
stands for g.Server, and S'' stands for the derivative of g.Server in which the delay g has started. Moreover, Q_h stands for Queue_h,
for any h. We omit parallel composition operators in terms, so, e.g., AQ_hS
stands for Arrivals ∥_{{a}} (Queue_h ∥_{{d}} Server).
In order to derive the performance model of the system QS_{G/G/1/q} we have to
make sure that it is complete both from the interaction and the performance
viewpoints. In Fig. 8 we have visible actions a and d, therefore the behavior
of the system can be influenced by interaction with the environment and is
not complete. We make it complete by considering QS_{G/G/1/q}/{a,d}, so that
every action in the semantic model of Fig. 8 becomes a τ action.
As far as completeness w.r.t. performance is concerned, we present in Fig. 9
the minimal version of I[[QS_{G/G/1/q}/{a,d}]], obtained by aggregating weakly
bisimilar states (see Sect. 3). Since in the minimal model there are no longer
internal actions, we have that our system is complete also w.r.t. performance.
By applying the formal procedure defined in Sect. 4, hence by solving choice
trees in the minimal model of Fig. 9, we finally obtain the GSMP of Fig. 10.
The probability Pr associated with each transition of the GSMP is 1. P_init is 1
for the state pointed to by the arrow and 0 for all the other states. The elements
e_1 and e_2 represent the delays f and g.
Fig. 8. Integrated Semantic Model
Fig. 9. Minimal Semantic Model
6 Conclusion
In this section we present some related work and we outline some open problems
left for future research.
Fig. 10. Derived GSMP
6.1 Related work
Several algebraic languages which express generally distributed durations like
IGSMP have been previously developed. We start from the languages that
follow a completely different approach in representing the behavior of systems.
In [10] a truly concurrent approach to modeling systems is proposed which
employs general distributions. From a term of the algebra presented in [10]
a truly concurrent semantic model (a stochastic extension of a bundle event
structure) is derived that represents statically the concurrency of the system by
expressing the components of the system and the causal relationships among
them. Therefore the behavior of the system is not described by representing
explicitly all possible global system states as it happens in labeled transition
systems. In this way a very concise semantic model is obtained where duration
distributions can be statically associated with durational actions. The
drawback of this approach is that the semantic models produced must nevertheless
be translated to a transition system form before their performance can
be evaluated. This is because in GSMPs the evolution of a stochastic process is
represented in such a form. Another algebraic approach to modeling systems
with general distributions is the discrete event simulation approach [16,11].
For example in [16] an algebra is developed that extends CCS with temporal
and probabilistic operators in order to formally describe discrete event simu-
lations. Such an algebra employs actions with null duration (events) and delays,
similarly to IGSMP. The states produced by the operational semantics include
explicitly the residual durations of delays (as real numbers), hence the
semantic models of systems are not finite. This approach excludes a priori the
possibility of making mathematical analysis of such models by means of established
theoretical results such as analytical solution methods for (insensitive)
GSMPs [24].
Other languages have been previously developed that are more similar to
IGSMP, in that they represent the behavior of specified systems via labeled
transition systems which are (in many cases) finite. According to these approaches
[12,28,22] the execution of durational actions is represented in an
abstract way, as in IGSMP , without including explicitly their residual durations
(as real numbers) in states. In [12] the technique of \start reference"
is employed in order to have a pointer to the system state where an action
begins its execution. In [28], instead, information about causality relations
among actions is exploited in order to establish the starting point of actions.
In a recent work [22] a methodology for obtaining finite semantic models from
the algebra of [16] is defined, which is based on symbolic operational semantics.
Such semantics generates symbolic transition systems which abstract
from time values by representing operations on values as symbolic expressions.
The drawback of these approaches is that the structure of the semantic
models generated is very different from that of GSMPs. It is therefore not always
clear how to derive a performance model for a specified system and [22]
only provides a (quite involved) procedure for deriving a GSMP from systems
belonging to a certain class.
The languages that are closest to IGSMP , in that they produce semantic models
which represent probabilistic durations as in GSMPs are those of [1,5,11].
In particular such semantic models represent the performance behavior of systems
by means of (some kind of) clocks with probabilistic duration which
can be easily seen as the elements of a GSMP . With the language of [1],
performance models are derived from terms specifying systems by applying
to them a preliminary procedure that gives a different name to each durational
action of the term. In this way, each name represents a different clock
in the semantic model of the system. In the approach of [1] the events of
action starts are not explicitly expressed in the semantic models and choices
are resolved via the race policy (alternative actions are executed in parallel
and the rst action that terminates wins) instead of the preselection policy
as in IGSMP . The language of [11] is endowed with an abstract semantics
which may generate finite intermediate semantic models (from these models it
is then possible to derive the infinite models which are used for discrete event
simulation). With this language clock names must be explicitly expressed in
the terms that specify the system, and the fact that a different name is used
for each clock is ensured by imposing syntactical restrictions on terms. As in
IGSMP the execution of a clock is represented by the events of clock start
and clock termination, but here these two events must be explicitly expressed
in the term specifying a system and they are not automatically generated by
the operational semantics. Unfortunately the language of [11] can only express
choices between events of clock terminations (which are resolved through race
policy) and cannot express probabilistic choices which are a basic ingredient
of GSMPs . With the language we have previously developed in [5], clock
names and events of start and termination are automatically generated, as in
IGSMP, by the operational semantics. In this way system specifications can
simply express generally distributed delays as probability distribution functions
and we do not have to worry about clock names and events. A drawback of
the approaches of [1,5,11] w.r.t. IGSMP is that there is no easy way to decide
equivalence of systems (hence to minimize their state space). This is because
in order to establish the equivalence of two systems it is necessary to associate
in some way the names of the clocks used by one system with the names of
the corresponding clocks used by the other one. Trying to extend the notion
of bisimulation in this way turns out to be rather complex especially in the
presence of probabilistic choices (see [5]). In IGSMP , instead, names of clocks
are dynamically generated by the operational semantics with a fixed rule. In
this way equivalent systems get the same names for clocks and there is no need
to associate names of clocks for establishing equivalence. We can, therefore,
rely on standard (probabilistic) bisimulation and we have the opportunity to
reuse existing results and tools.
6.2 Future research
As far as future work is concerned we are trying to extend our approach in
two main directions.
The capability of IGSMP to express general distributions should allow us to
extend the notion of observational congruence as follows. The idea is that
we could be "weak" also on delays. A delay could be seen as a "timed τ"
and we could equate a sequence of timed τ's with a single timed τ provided
that the distributions of the durations are in the correct relationship. For example a
sequence (or a more complex pattern) of exponential τ's could be equated with a single
phase-type distributed τ. The solution of this problem seems quite involved.
Moreover we are investigating the possibility of introducing in IGSMP an
operator for delay interruption. This requires the introduction of a special
event in semantic models representing \delay interruption" instead of \delay
termination". An interruption operator would greatly enhance the expressive
power of IGSMP , since a preemption mechanism is needed to model many
real systems.
Acknowledgements
We thank the anonymous referees for their helpful comments. This research
has been partially funded by MURST progetto TOSCA.
References
Communicating sequential processes
Petri net models for algebraic theories of concurrency
Communication and concurrency
A complete axiomatisation for observational congruence of finite-state behaviours
Bisimulation through probabilistic testing
Transition system specifications with negative premises
Model-checking in dense real-time
On "Axiomatising Finite Concurrent Processes"
A LOTOS extension for the performance analysis of distributed systems
Adding action refinement to a finite process algebra
A compositional approach to performance modelling
A tutorial on EMPA
A Complete Axiomatization for Observational Congruence of Prioritized Finite-State Behaviors
Towards Performance Evaluation with General Distributions in Process Algebras
Axiomatising ST-Bisimulation Equivalence
An algebraic approach to the specification of stochastic systems
An Overview and Synthesis on Timed Process Algebras
Deciding and Axiomatizing ST Bisimulation for a Process Algebra with Recursion and Action Refinement
568692 | Numerical Schemes for Variational Inequalities Arising in International Asset Pricing. | This paper introduces a valuation model of international pricing in the presence of political risk. Shipments between countries are charged with shipping costs and the country specific production processes are modelled as diffusion processes. The political risk is modelled as a continous time jump process that affects the drift of the returns in the politically unstable countries. The valuation model gives rise to a singular stochastic control problem that is analyzed numerically. The fundamental tools come from the theory of viscosity solutions of the associated HamiltonJacobiBellman equation which turns out to be a system of integral-differential Variational Inequalities with gradient constraints. | Introduction
We develop a continuous time model of international asset pricing in a two-country
framework with political risks. The structure of the model is similar to
Dumas (1992) except for the inclusion of political risk. Assets are homogeneous
except for location, and serve as production inputs as well as consumption goods.
International capital markets are fully integrated in the sense that individuals from
each country can freely buy and sell claims to assets located in both countries.
However, individuals can only consume assets located in their country of residence.
Assets can be shipped between countries at a cost and subsequently may be utilized
for either production or consumption purposes at their new location.
Political risk enters the model via uncertainty in the drift of the stochastic
production process associated with the politically risky country. Fundamentally,
political risk represents uncertainty about future government actions which may
impact the value of firms and/or the welfare of individuals. If we focus simply
on the value of firms, there are still a lot of government actions which can affect
firm profitability and the values of securities it issues. Changes in the tax code (or
its implementation), price ceilings, local content requirements, quotas on imported
inputs, labor law provisions, and numerous other areas of government regulation can
affect firm profitability and/or security values. One could argue that all governments
exhibit political risk in that there is some uncertainty about their future actions.
However, the degree of risk tends to vary dramatically with some governments
(countries) viewed as "politically stable" and others as quite risky.
For simplicity, we treat one of our countries as exhibiting no political risk. That
is, the drift is known for the production process of assets located in that country.
There is still uncertainty about the value of production assets located in that country
due to market forces, technology, weather, etc. We associate such uncertainty with
a (country specific) brownian motion; however, the drift of the process is known in
the politically stable country. In the politically risky (unstable) country, we assume
that the expected productivity of assets can take on two values. The lower state
can be interpreted as representing the local government's ability to impose a tax
or regulation on firms producing in that country which negatively impacts their
profitability. Symmetrically, the high state can be interpreted as either a lower tax
rate or even a subsidy for local production, perhaps in an indirect form via changing
a restrictive regulation.
Furthermore, a negative action can be followed by a positive one and vice
versa. Recently, we have seen asset prices and exchange rates (a special type of
asset price) yoyoing up and down in response to sequences of government actions,
as well as conjectures about future actions. In a rather simplified manner, we are
attempting to capture this sort of phenomenon by having the drift in the risky
country determined by a continuous time Markov chain which can take one of two
possible values. Consequently, the extent of political risk in our model is determined
by both the spread between the two drift parameters from the Markov chain and
by the transition probabilities.
We formulate this model as a singular stochastic control problem whose states
describe the production technology processes in both countries. The collective utility
is the value function of this optimization problem and it is characterized as the
unique (weak) solution of the associated Hamilton-Jacobi-Bellman (HJB) equation.
Because of the presence of the shipping costs and the effects of the Markov chain
process, the HJB equation actually turns out to be a system of Variational In-
equalities, coupled through the zeroth order terms, with gradient constraints. Such
problems typically result in a depletion of the state space into regions of idleness
and regions where singular controls are exercised. In the context of the model we
are developing herein, the singular policies correspond to "lump-sum" shipments
from one country to the other.
Similar problems with singular policies arise in a wide range of models in the
areas of asset and derivative pricing. They are essentially linked to the fundamental
issue of irreversibility of financial decisions in markets with frictions such as trans-action
or shipping costs or an irreversible loss of an investment opportunity related
to unhedgeable risks. Unfortunately, such problems do not have in general smooth
solutions, let alone closed form ones. It is therefore imperative to analyze these
problems numerically by building accurate schemes for the value function as well
as the free boundaries which characterize the singular investment policies.
We undertake this task and we construct a family of numerical schemes for the
collective utilities and the equilibrium prices. These schemes have all the desired
properties for convergence, namely, stability, monotonicity and consistency. They
belong to the class of the so-called "time-splitting" schemes which approximate separately
- in each half-time iteration - the first- and the second-order derivatives.
These schemes are known to be very suitable for the approximation of the solutions
of a certain type of second-order nonlinear partial differential equations similar to
the ones arising in our model.
Although it is highly simplified, the proposed model captures some of the flavor
of an international environment where assets may be exposed to substantially
different risks because they are located within the jurisdictions of different govern-
ments. In effect, they are different assets and will generally exhibit different prices
because of their location. As we shall see, political risk not only influences asset
values but also consumption patterns. Furthermore, if we interpret the ratio of the
output prices in the two economies as a real exchange rate, then that exchange rate
will exhibit sustained deviations from its Purchasing Power Parity (PPP) value.
The paper is organized as follows. In section 2, we describe the basic model
and we provide analytic results for the value function. In section 3, we construct the
numerical schemes for the value function and the trading policies. In section 4, we
interpret the numerical results, and we provide some conclusions and suggestions
for extensions of this work.
2. The model and the associated Variational Inequalities.
In this section we describe the international asset pricing model we are going
to use in order to study the effects of political risk on international asset prices,
consumption and investment behavior across countries.
We concentrate on a simplified two-country model where capital markets are
fully integrated in the sense that individuals from each country can own claims
to assets located in either country. These assets serve as production inputs and
consumption goods. The production technology is stochastic and differs across the
two countries; however there is a single technology in each country. One of the
two countries is considered "politically unstable" and exhibits political risk. We
model the political risk via a continuous time Markov chain which affects the rate
of return of the stochastic production process in the politically risky country. We
assume, for simplicity, that there are only two states for the Markov chain, a low
and a high state. As it was pointed out in the introduction, the different states can
be interpreted, among other things, as representing the local government's ability
to alter the tax rate on firms producing in that country or to alter its subsidizing
policy on local production. In this context, the low state corresponds to a high tax
rate, with the high state representing either a low tax or perhaps a subsidy.
We denote the goods of the two countries by X and Y and the production
technology processes in the countries X and Y by X t and Y t respectively. Country
Y is considered to be politically stable and its production process Y t is modelled as a
diffusion process with drift coefficient b and volatility parameter oe 2 . Country X has
a production technology process with similar diffusion structure - with volatility
parameter oe 1 - but its drift coefficient is affected by a two-state Markov chain, say
z t , which represents the effects of the political instability.
Consumption on country X is denoted by C x , which includes both consumption
of local output and of imports from country Y. Consumption in country Y is defined
in an analogous manner and is denoted by C y . Cumulative shipments, as of time
t, from country X to country Y are denoted by L t ; such shipments (exports from
country X ) incur proportional shipping costs at a rate -. In a similar manner,
cumulative shipments from country Y (imports by country X ), denoted by M t
incur proportional shipping costs at a rate -. Without loss of generality, we assume
that country X is charged with the shipping costs.
Using the above definitions, we can write the state processes for the capital
stocks in the two countries as
with
t and W 2
being brownian motions on a probability
correlation this we can take W 2
being
a brownian motion independent of W 1
t . The constants oe 1 ; oe 2 and b are assumed to
be positive.
The process z t is a continuous-time Markov chain with two states z 1 and z 2
such that
As it was discussed above, the low state z 1 is associated with an unfavorable
political state (from the perspective of the production process owners) as opposed
to the high state z 2 which represents the favorable political state in country X . We
denote by p i;j , the transition probabilities of z t for the above states.
The collective (or integrated) utility payoff for consumers of both countries
over their consumption rates is
Z +1e \Gammaaet U(C x
A policy (C x
it is F t -progressively measurable -
being nondecreasing
CADLAG processes such that C x
Z te \Gammaaes (C x
s +C y
and the following state constraints are satisfied
The collective consumer function U : [0; +1) \Theta [0; +1) ! [0; +1) is assumed
to be increasing and concave in both arguments. Also, U(0;
for some constants M ? 0 and
We define the collective across-countries value function V (x;
Az
Z +1e \Gammaaet U(C x
The set A z is the set of admissible policies (C x
are assumed
to satisfy the above measurability and integrability conditions and the state constraints
ae is a positive constant which plays the role of a discount factor.
In order to guarantee that V is finite for all x - 0, y - 0 and
impose the following restrictions on the market parameters.
First, we define oe;
A process is CADLAG if it is right-continuous with left limits.
and if b ! z 2 , 8
We assume that at least one of the following sets of inequalities holds
together with the additional related conditions
or (2.6b)
We continue with some elementary properties of the value function.
Proposition 2.1: The value function V is increasing and jointly concave in the
spatial arguments (x; y). Moreover, for fixed z, V is uniformly continuous on
Proof: The monotonicity follows from the fact that for the point (x+ ffl; y) (respec-
tively (x; y+ ffl) the set of admissible policies A z;(x+ffl;y) satisfies A z;(x+ffl;y) oe A z;(x;y) ;
the latter follows from the monotonicity and concavity of the utility function, the
form of the state dynamics and the definition of the value function. These proper-
ties, together with the state constraints (2.4) are also used to establish the concavity
of the value function. Indeed, if (C x
are optimal
policies for the points 1), the policy
for z). For the uniform continuity of the value
function on [0; +1) \Theta [0; +1) we refer the reader to Proposition 2.2 in Tourin and
Proposition 2.2: Under the growth conditions (2.6a) and (2.6b) and the properties
of the utility function U , the value function is well defined on [0; +1) \Theta [0; +1) for
The proof appears in Appendix B.
Remark 2.1: Even though we look at the case of linear coefficients in the state
equations (2.1) and (2.2), this assumption is by no means restrictive. In fact, all
the arguments presented herein can be easily generalized to the case of general co-efficients
long as the functions oe
[0; +1) satisfy the conditions: they are Lipschitz concave functions of their argument
with oe 1 and at least one of the oe i 's,
The motivation behind the choice of linear
coefficients is only for the sake of simplicity; the methodology is easier to present
and also, the numerical schemes are validated for such coefficients.
The classical way to attack problems of stochastic control is to analyze the
relevant equation that the value function is expected to solve, namely the Hamilton-
Jacobi-Bellman equation. This (HJB) equation is the offspring of the Dynamic
Programming Principle and stochastic analysis. When singular policies are allowed,
the HJB equation becomes a Variational Inequality with gradient constraints. These
constraints are associated to the "optimal direction" of instantaneously moving the
optimally controlled state processes. In the context of optimal consumption and
investment problems, such situations arise when transaction fees are paid (see, for
example, Zariphopoulou (1991), Tourin and Zariphopoulou (1994)). In the problem
we study herein, the analysis is more complicated because the drift of the state
process X t is influenced by the fluctuations of the Markov chain z t . This feature,
together with the presence of singular policies results into an HJB equation which
For a similar problem with general coefficients, we refer the interested reader
to Scheinkman and Zariphopoulou (1997).
is actually a system of Variational Inequalities coupled through the zeroth order
term (see equations (2.7) and (2.8)).
It is a well established fact that if it is known a priori that the value function is
smooth, then standard verification results guarantee that it is the unique smooth solution
of the HJB equation. Moreover, if first order conditions for optimality apply,
then they are sufficient to determine the optimal policies in the so-called feedback
formula. (See, for example, Fleming and Soner (1993)). Unfortunately, this is rarely
the case. As in our problem, the value function might not be smooth and therefore
it is necessary to relax the notion of solutions of the (HJB) equation. These weak
solutions are the so-called (constrained) viscosity solutions and this is the class of
solutions we will be using throughout the paper. In models of optimal investment
and consumption with transaction costs, which are a special case of the model described
above, this class of solutions was first employed by Zariphopoulou (1992).
Subsequently this class of solutions was used among others by Davis, Panas and
Zariphopoulou (1993), Davis and Zariphopoulou (1994), Tourin and Zariphopoulou
(1994), Shreve and Soner (1994), Barles and Soner (1995), and Pichler (1996). The
characterization of V as a constrained solution is natural because of the presence
of state constraints given by (2.4).
The notion of viscosity solutions was introduced by Crandall and Lions (1983)
for first-order equations, and by Lions (1983) for second-order equations.
Constrained viscosity solutions were introduced by Soner (1986) and Capuzzo-
Dolcetta and Lions (1987) for first-order equations (see also Ishii and Lions (1990)).
For a general overview of the theory we refer to the User's Guide by Crandall, Ishii
and Lions (1994) and to the book by Fleming and Soner (1993). We provide the
definition of constrained viscosity solutions in Appendix A.
The following theorem provides a unique characterization of the value function.
Its proof is discused in Appendix B.
Theorem 2.1. The value function is the unique constrained viscosity solution on
+1), of the system of the variational inequalities
min
aeV (x;
and
min
aeV (x;
in the class of concave and increasing functions with respect to the spatial argument
Here L is the differential operator
and
As it was mentioned earlier, the presence of singular policies leads to a depletion
of the state space into regions of three types, namely the "Import to country Y "
regions. The choice
for the country Y to be used as the baseline for the description of the optimal trading
rules is arbitrary and does not change the nature of the results. The regions
are related to the optimal shipping rules as follows: if at time t the
production technology state (X t belongs to NS region, only the consumption
processes are used and no shipments take place from one country to the other.
If the state (X t belongs to the (I Y ) (respectively, EY ) region, it is beneficial
to the central planner to import (respectively, export) a shipment from country
X to country Y. In other words, a singular policy-which represents the "lump-
sum" shipment-is used to reduce (respectively, increase) the value of X t in order
to increase (respectively, decrease) the value of Y t and move to a new state, say
which belongs to the boundary of the (NS) and the
solutions exist up to date for the free boundaries of the afore-mentioned
regions. Therefore, it is highly desirable to analyze
these boundaries as well as other related quantities, numerically. This is the task
we undertake in the next section.
Remark 2.2: In the special case of a collective utility function of the CRRA type,
show that the value function
is homogeneous of degree fl. This fact provides valuable information about the free
boundaries which turn out to be straight lines passing through the origin.
We continue this section by presenting some results related to analytic bounds
of the value function as well as alternative characterizations of it in terms of a class
of "pseudo-collective" value functions. The latter results are expected to enhance
our intuition for the economic significance of the proposed pricing model. We only
present the main steps of the proofs of these results; the underlying idea is to use the
equations (2.7) and (2.8) and interpret them as HJB equations of new pseudo-
utility problems. The comparison between the new "pseudo-value functions" and
the original value function stems from the uniqueness result in Theorem 2.1 as well
as the fact that the pseudo-value functions are viscosity solutions of the associated
HJB equations.
To this end, consider the following pairs
x t and -
y t solve respectively
dy t
and
y
are the two states of the process z t and x
y. The
above dynamics correspond to the case of deterministic drifts with no effect from
the Markov chain.
We define for (2.11), (2.12) and (2.13), (2.14) the sets of admissible policies
A z1 and A z2 along the same lines as before.
The following result shows that the original value function V is bounded between
v and - v with v and - v being respectively the value functions of two international
asset pricing models with the original collective utility but with no political
risk. More precisely, v (respectively -
v) is the collective value function for countries
X and Y (respectively X and Y) with X (respectively X ) not exhibiting political
instability but with a modified mean rate of return in its capital stock. Models of
this type were studied by Dumas (1992) in the case of CRRA utilities.
Proposition 2.3. Consider the value functions v, -
Z +1e \Gammaaet U(C x
and
Z +1e \Gammaaet U( -
x
y
Then
for
We finish this section by discussing two collective-utility asset pricing problems
without political risk but with different discount factors and "enhanced" collective-
bequest functions. It turns out, as it is stated in Proposition 2.4 that their value
functions below by (2.19) and (2.20) - coincide with the original
value function for states z 1 and z 2 . To this end, we consider two collective-bequest
functions
and the discount factors
Also, we consider the controlled state processes
solving
(2.11) to (2.14) and the associated sets of admissible policies A z1 and A z2 .
Proposition 2.4. Define the value functions
by
with
y, and
with -
as in (2.18) and (2.17). Then
of the proof:. Observe that V (x; is the unique solution of the HJB
equation (2.7), which can be rewritten as
min
We easily get that the above equation can be interpreted as the HJB equation of
a new stochastic control problem with value function u 1 . The result then follows
from the uniqueness of (constrained) viscosity solutions of (2.7) and the fact that
u 1 is a (constrained) viscosity solution of (2.22). The same type of arguments yield
the result for the state z 2 .
Our ultimate goal, besides understanding the behavior of the optimal shipping
policies is to specify the equilibrium prices of goods X and Y. Note that we can
interpret the ratio of the partial derivatives of V (x; as the relative price of good
X in terms of good Y. Then, the equilibrium prices for the states z 1 and z 2 will be
respectively
If we further assume that U(C x ; C y then each of the value
functions of the two pseudo-problems (2.19) and (2.20) is homogeneous of degree
fl. For the state z 1 , there will be a cone with linear boundaries in the (x; y) plane
within which no shipping occurs. Similarly, there will be a "no-shipping cone" for
the state z 2 ; however, these cones will generally be different. One can observe
that for the "politically favorable" state z 2 the expected returns for production
in country X is relatively high as compared with the situation for the politically
"unfavorable" state z 1 . Consequently, a shift to z 1 makes production in country X
less attractive and may cause a costly shipment to take place between countries. In
other words, transitioning from z 2 to z 1 (or vice versa) can cause the no-shipping
cone to shift. Hence, jumps in the coefficients of the asset prices can occur when
z t switches values; this contrasts with the smoothing changing prices obtained in
Dumas (1992). On the other hand, as in Dumas (1992), the prices are expected to
deviate from a parity value of one for potentially substantial periods of time. Both
prices are bounded between the (1 which represent the prices
at the cone boundary where shipments take place. However, shifts in the cone as z t
transitions between z 1 and z 2 can result in cone rotation. These observations are
further developed in section 4 after we obtain the numerical results.
3. Numerical Schemes
This section is devoted to the construction of numerical schemes for the solution
of the Variational Inequalities (2.7) and (2.8). Besides computing the value
function V , we also compute the equilibrium prices P 1 (x;
the location of the free boundaries related to the optimal lump-sum shipments. Fi-
nally, we study how the presence of political risk influences the trading policies and
equilibrium prices by also examining the model in the absence of political uncertainty
The first goal in choosing the appropriate class of schemes is to find a scheme
with three key properties: consistency, monotonicity and stability. We define these
properties below; we use a generic notation for our equation in order to simplify the
presentation.
To this end, we consider a nonlinear equation F (w; u(w); Du(w); D 2
for respectively the gradient and the second
order derivative matrix of the solution u; F is continuous in all its arguments and
the equation is degenerate elliptic meaning that F (w;
Definition 3.1: We consider a sequence of approximations S
\Theta\Omega \Theta R \Theta
We say that S is:
monotone if
consistent if
lim sup
stable if
8' ? 0, there exists a solution u its (local)
bound is independent of '.
The motivation to use such schemes for our model comes from the fact that they
exhibit excellent convergence properties to the (viscosity) solution of fully nonlinear
degenerate elliptic partial differential equations as long as the latter have a unique
solution. This result was established by Barles and Souganidis (1991) and it is
stated below for completeness.
Theorem 3.1 (Barles and Souganidis): Assume that the equation
admits the strong uniqueness property, i.e. if u (resp. v) is a
viscosity subsolution (resp. supersolution) of v. If the approximation
sequence fS ' g satisfies the monotonicity, consistency and stability properties
then the solution u ' of S('; w; u ' (w); locally uniformly to the
unique viscosity solution of F (w; u; Du;D 2
We continue with the description of our scheme. To this end, we first write
(2.7) and (2.8) in the concise form
The variational inequalities (2.7) and (2.8) belong to the class of equations
that Barles and Souganidis (1991) examined. Our problem though is not entirely
identical to theirs due to the presence of the state constraints (2.5). The convergence
of our scheme, in the presence of the state constraints is not presented here.
where for at (x;
with
The first step consists of approximating the equation in the whole space by an
equation set in a bounded domain proving the existence of
a solution VR of the Variational Inequalities in BR and the convergence of VR to V
as R tends to the infinity. As there is no natural condition satisfied at infinity by
we have to decide what kind of condition we impose on
@BR . Barles, Daher and Romano (1995) answered to this question and exhibited
an exponential rate of convergence for the heat equation complemented either with
Dirichlet or Neumann conditions. The generalization of their result to more general
parabolic equations is straightfoward (for more details, see Barles, Daher and
Romano (1995)). In the degenerate elliptic case, there is no natural choice for the
Dirichlet or Neumann boundary value.
We impose here a simple, arbitrary Neumann condition @VR
@n (x;
n is the outer unit vector and K is a preassigned positive constant. Note that this
condition must be taken in the viscosity sense and that the corners of BR require a
specific treatment.
The second step is the approximation to the solution of the equation set in the
above bounded domain. We denote by \Deltax and \Deltay, respectively, the mesh sizes
in the x and y directions. Moreover x \Deltay are the grid points and
i;j ) are the approximations for the value function V (x;
at the grid point We then propose an iterative algorithm to
i;j . For this purpose, we introduce a time step \Deltat and the
approximation for
i;j ) at step n will be denoted by V 1;n
If (V 1;n
i;j ) is known at step n, the monotone scheme which allows us to compute
at step n
may be ultimately written as
and
are consistent with (2.7),(2.8) as \Deltat; \Deltax; \Deltay converge to 0
and n\Deltat converges to +1, satisfy the monotonicity and stability property defined
in (3.1). Note that (\Deltat; \Deltax; \Deltay; n\Deltat) correspond to the variable ' in Definition
3.1, whereas
i;j in S 1 (resp. V 2;n+1
i;j in S 2 ) represents
(w). Finally, the role of the variable u ' is played here for S 1 by
We continue with the description of the scheme. First, we define the following
explicit approximation to the gradient operators
\Deltay
\Deltay
i;j stands for both V 1;n
i;j and V 2;n
i;j . It is easy to verify that these approximations
are monotone as long as \Deltat - min(\Deltax;\Deltay)
2+- .
For the elliptic operator L, we use a time-splitting method in order to approximate
separately the first-order derivatives in a first-half iteration and the second-order
ones in the second-half iteration.
For the first-half iteration, we consider the first-order operator ~
L obtained by
eliminating the second-order terms in L, i.e., for
~
The solution to the equation ae ~
~
can be characterized
(see for example Lions (1983)) as the value function of a stochastic control
problem
~
~
A
where the state trajectories ~
We apply the Dynamic Programming Principle to the above control problem
and discretize it, that is, for \Deltat positive and sufficiently small, we choose a constant
approximation to each consumption rate on the time interval [0; \Deltat].
We then compute the optimum in closed-form in order to obtain the following
numerical scheme for ~
which is monotone for \Deltat sufficiently small:
in the case of a CRRA utility with 0:5, is defined by:
\Deltax
and z 1 x i - \Deltax
\Deltax
and z 1 x
and z 1 x
and
and z 1 x i - \Deltax
Symmetrically h 2 is deduced from h 1 by replacing respectively \Deltax, z 1 x i ,V 1;n
and V 1;n
i+1;j by \Deltay; by
i;j+1 and the approximation V 2;n
i;j is obtained
similarly.
A simple sufficient condition for the monotonicity of the previous approximation
is provided by the following upper bound on the time-step
\Deltat - min
\Deltay
The second order degenerate elliptic term is then approximated by the well-known
Crank-Nicolson scheme. To simplify the presentation, we chose the following
approximation for the second-order derivatives which in fact is not monotone but
the replacement by a monotone approximation is routine and this modification does
not affect the convergence of the scheme. It is worth mentioning that this scheme is
unconditionally stable, independently of \Deltat, which may be chosen large. As before,
we use the notation V n
i;j for both V 1;n
i;j and V 2;n
i;j .
\Deltay 2
\Deltay 2
4\Deltax\Deltay
4\Deltax\Deltay
On the x-axis, we impose for
i;j the gradient condition in the following
\Deltat
\Deltax
\Deltay
Similarly, on the y-axis, we impose
\Deltay
At each iteration, we choose the following adaptative time-step which actually
is not far from being constant but may evolve a little during the convergence:
min
\Deltay
Given the approximations to the elliptic and the gradient operators, V 1;n
i;j is
then set to the maximal value over these three ones. Futhermore, we let the algorithm
converge until the conditions sup i;j jV 1;n
sup i;j jV 2;n
are reached, ffl being a preassigned small positive constant.
After the last iteration, we compute the equilibrium prices by using centered finite
differences and finally, the no-shipping region is defined as the set of the points
where the approximation to the value function at the last step comes from the
discretization of the elliptic operator.
We continue with a brief description of the numerical experiments: let
We chose the following parameters:
Figures
1-4 show the no-shipping regions and the equilibrium prices for the
states z 1 and z 2 in the absence of political uncertainty. More precisely, Figures
correspond to z
correspond to z
Then, in order to study the influence of the transition probabilities, in Figures
5-12 are represented the no-shipping regions for the states z
respectively in the following four cases
Case A:
Case B:
Case C:
Case D:
Finally, in Figures 13-16 we graph the equilibrium prices for the above four
cases.
The scheme does not behave in a perfectly stable way at least for the no-
shipping region. If one lets the scheme converge for a very long time, the cone
remains globally the same, except that a few points which oscillate around the
free boundaries, that is, they appear and disappear from iteration to iteration.
Apparently, this phenomenon might be caused by possibly over-estimated Neumann
conditions for large values of x and y.
4. Concluding remarks
In this paper, we have developed a model of international asset pricing in
the presence of political risk. Although the model is considerably simplified, it
represents a substantial step towards understanding how uncertainty about future
government actions can affect the prices of tradeable assets. The recent turmoil in
asset prices for several Southeast Asian countries serves to emphasize the importance
of gaining a better understanding of the effects of political risk on asset prices.
Our numerical experiments with the model provide several interesting results.
As in Dumas (1992), we obtain a cone in the state space within which no shipping
occurs. That is, individuals find it optimal not to incur the shipping costs entailed
with adjusting their relative holdings of the two assets whenever their asset positions
lie within this cone. In our model, the size and location of this cone depend on
the political state in the Country X (the politically risky country). Consider, for
example, the figures 5 and 6. In figure 5, Country X is in the poor political state with
relatively low expected returns on asset X. In figure 6, Country X has now switched
to the favorable political state. Asset X is now more valuable and individuals are
less inclined to export from Country X . They are also more inclined to pay the
shipping cost to import from Country Y and, in effect, convert some of their position
in asset Y into asset X. These two changes in their relative willingness to trade are
manifested in the downward (clockwise) rotation of the no-shipping cone between
figures 5 and 6.
We can also see that the transition probabilities influence the rotation and size
of the non-shipping cone. Compare, for example, the figures 3 and 5. For both
figures, Country X is in the poor political state. However, in figure 5 there is a 10%
probability of transitioning to the better state whereas in figure 3 that transition
probability is zero. Intuitively, the increased probability of moving to a better state
increases the value of asset X and alters individuals' willingness to trade. In this
case, the primary effect is a reduced willingness to export X which results in a
downward rotation in the lower boundary of the no-shipping cone.
We also provide graphical comparisons on the relative prices of goods X and
Y. Consider, for example, figure 13. In this figure, the quantity of X is fixed while
the quantity of Y is varied and the relative price of X (in terms of Y) is plotted in
each of the two political states. In state z 2 (the favorable political states) asset X is
relatively more valuable. However, it is interesting to note that for situations where
asset Y is either quite scarce or extremely plentiful, the political state seems to
have a negligible effect on relative asset pricing. Intuitively, when Y is very scarce,
its value becomes extremely high and the relative value of X becomes so small
that the effect of differing political states is not apparent. A symmetric argument
applies when Y is extremely plentiful. Comparing, for example, figures 13 and 14
we can see that the transition probabilities have a substantial effect on the extent
to which the political state alters the relative pricing of X and Y. In figure 16, the
transition probabilities have both become so great that the relative price difference
across political states all but disappears.
In conclusion, we view the implications of political risk for asset pricing as
both interesting and of considerable economic importance. This paper represents
a step towards better understanding some of those implications, and we hope that
the paper will stimulate further research on this important issue.
--R
Convergence of numerical schemes for problems arising in Finance theory
Bounds on prices of derivative securities in an intertemporal setting with proportional transaction costs and multiple securities
Bounds on prices of contingent claims in an intertemporal economy with proportional transaction costs and general preferences
User's guide to viscosity solutions of second order partial differential equations.
Viscosity solutions of Hamilton-Jacobi equa- tions
selection with transaction costs.
European option pricing with transaction costs.
American Options and Transaction Fees.
Dynamic Equilibrium and the Real Exchange Rate in a Spatially Separated World
Controlled Markov Processes and Viscosity Solutions.
Optimal replication of contingent claims under transaction costs.
Viscosity solutions of fully nonlinear second-order elliptic partial differential equations
Optimal control of diffusion processes and Hamilton- Jacobi-Bellman equations
On transaction costs and HJB equations.
Environmental models with irreversible decisions
Optimal investment and consumption with transaction costs.
Optimal control with state space constraints.
Numerical schemes for investment models with singular transactions.
selection with transaction costs
Investment consumption models with constraints
--TR
Optimal control with state-space constraint II
selection with transaction costs
Investment-consumption models with transaction fees and Markov-chain parameters
European option pricing with transaction costs
Consumption-Investment Models with Constraints
Numerical schemes for investment models with singular transactions | political risk;shipping costs;international asset pricing;gradient constraints;viscosity solutions;variational inequalities |